CN112669299A - Defect detection method and device, computer equipment and storage medium - Google Patents


Info

Publication number: CN112669299A (granted as CN112669299B)
Application number: CN202011639446.0A
Authority: CN (China)
Original language: Chinese (zh)
Inventor: 茅心悦
Applicant and assignee: Shanghai Xiaoi Robot Technology Co Ltd
Legal status: Active (granted)
Prior art keywords: picture, feature, target picture, information

Classifications

    • Y — General tagging of new technological developments; general tagging of cross-sectional technologies spanning over several sections of the IPC; technical subjects covered by former USPC cross-reference art collections [XRACs] and digests
    • Y02 — Technologies or applications for mitigation or adaptation against climate change
    • Y02P — Climate change mitigation technologies in the production or processing of goods
    • Y02P 90/00 — Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/30 — Computing systems specially adapted for manufacturing

Landscapes

  • Image Analysis (AREA)

Abstract

A defect detection method and device, a computer device, and a storage medium are provided. The method includes: acquiring a target picture; performing feature recognition on the target picture to obtain feature information of the target picture; and obtaining a defect detection result according to the feature information of the target picture. Performing feature recognition on the target picture to obtain its feature information includes: obtaining a plurality of intermediate pictures from the target picture; performing feature recognition on each intermediate picture to obtain the feature information of each intermediate picture; performing association analysis on each intermediate picture to obtain the association coefficients between that intermediate picture and the other intermediate pictures of the target picture; and combining the feature information and the association coefficients of the plurality of intermediate pictures to obtain the feature information of the target picture. With this method, the problems of interrupted detection and information loss can be solved while the detection effect is preserved.

Description

Defect detection method and device, computer equipment and storage medium
Technical Field
The invention relates to the technical field of computer vision, and in particular to a defect detection method and device, a computer device, and a storage medium.
Background
Cloth defect detection is an important link in production and quality management in the textile industry. At present, surface defects of cloth are detected mainly by manual inspection, which suffers from low speed, high labor intensity, susceptibility to subjective factors, and lack of consistency.
With the development of machine learning, that technology has also been applied to cloth defect detection. Conventional machine learning algorithms for cloth defect detection generally process defect features that are easy to extract and quantify, such as color, area, roundness, angle, and length, which makes the detection coarse-grained. A typical machine learning pipeline for cloth defect detection extracts surface features of the cloth using traditional methods such as image enhancement, segmentation, and denoising in cooperation with a neural network, and screens out the portions containing the features (i.e., the defects) to ensure the detection effect.
However, because the original picture captured from the cloth is too large, a sliding-window cropping method is generally used to obtain small images from the original picture, and the small images are then used for training and feature recognition; this approach is called small-image recognition or small-image analysis. However, small-image recognition may interrupt feature recognition across the original picture, causing information loss.
Similar problems exist in detecting defects in other objects (e.g., metals, plastics, molds, and mechanical parts).
Therefore, a defect detection method is needed that solves the problems of interrupted detection and information loss while ensuring the detection effect.
Disclosure of Invention
The invention addresses the technical problem of solving interrupted detection and information loss while ensuring the detection effect.
In order to solve the above technical problem, an embodiment of the present invention provides a defect detection method, the method including: acquiring a target picture; performing feature recognition on the target picture to obtain feature information of the target picture; and obtaining a defect detection result according to the feature information of the target picture. Performing feature recognition on the target picture to obtain its feature information includes: obtaining a plurality of intermediate pictures from the target picture; performing feature recognition on each intermediate picture to obtain the feature information of each intermediate picture; performing association analysis on each intermediate picture to obtain the association coefficients between that intermediate picture and the other intermediate pictures of the target picture; and combining the feature information and the association coefficients of the plurality of intermediate pictures to obtain the feature information of the target picture.
Optionally, combining the feature information and the association coefficients of the plurality of intermediate pictures to obtain the feature information of the target picture includes: computing the product of the feature information of each intermediate picture and that picture's association coefficient, and summing the products over all intermediate pictures of the target picture; the sum is the feature information of the target picture.
Optionally, after obtaining the plurality of intermediate pictures from the target picture and before performing feature recognition on each intermediate picture to obtain its feature information, the method further includes: inputting each intermediate picture into a feature extraction model to obtain a plurality of feature information layers of different feature dimensions for that intermediate picture, where the feature extraction model is trained on features extracted from sample pictures and is used to extract the feature information layers of an input picture; and, for each intermediate picture, adding the detail information of the plurality of feature information layers to obtain one or more expansion layers of the intermediate picture.
Optionally, the feature extraction model includes MobileNet_v3.
Optionally, performing feature recognition on each intermediate picture to obtain its feature information includes: taking part or all of the feature information layers and expansion layers of each intermediate picture and inputting them into a feature recognition network to obtain the original mask features of the intermediate picture.
Optionally, performing association analysis on each intermediate picture to obtain the association coefficients between that intermediate picture and the other intermediate pictures of the target picture includes: inputting part or all of the feature information layers and expansion layers of each intermediate picture into a coefficient network to obtain the association coefficient of the intermediate picture.
Optionally, the target picture is taken from multiple consecutive frames, and the method further includes: obtaining a scene coefficient corresponding to the target picture according to the relationship between the frames; and marking the scene coefficient in the feature information of the target picture.
An embodiment of the present invention further provides a defect detection apparatus, the apparatus including: a target picture acquisition module for acquiring a target picture; a first feature recognition module for performing feature recognition on the target picture to obtain feature information of the target picture; and a result acquisition module for obtaining a defect detection result according to the feature information of the target picture. The first feature recognition module includes: an intermediate picture acquisition unit for obtaining a plurality of intermediate pictures from the target picture; a second feature recognition unit for performing feature recognition on each intermediate picture to obtain the feature information of each intermediate picture; an association analysis unit for performing association analysis on each intermediate picture to obtain the association coefficients between that intermediate picture and the other intermediate pictures of the target picture; and a result output unit for combining the feature information and the association coefficients of the plurality of intermediate pictures to obtain the feature information of the target picture.
An embodiment of the present invention further provides a computer device including a memory and a processor, the memory storing a computer program that can run on the processor, where the processor performs the steps of the above method when executing the computer program.
An embodiment of the present invention further provides a storage medium on which a computer program is stored, where the computer program, when executed by a processor, performs the steps of the above method.
Compared with the prior art, the technical scheme of the embodiment of the invention has the following beneficial effects:
an embodiment of the invention provides a defect detection method including: acquiring a target picture; performing feature recognition on the target picture to obtain feature information of the target picture; and obtaining a defect detection result according to the feature information of the target picture. Performing feature recognition on the target picture includes: obtaining a plurality of intermediate pictures from the target picture; performing feature recognition on each intermediate picture to obtain the feature information contained in each intermediate picture; performing association analysis on each intermediate picture to obtain the association coefficients between that intermediate picture and the other intermediate pictures of the target picture; and combining the feature information and the association coefficients of the plurality of intermediate pictures to obtain the feature information of the target picture. Compared with the prior art, the method of this embodiment splits (or crops) a target picture into a plurality of intermediate pictures and performs feature recognition on each intermediate picture separately to obtain a defect detection result, so the traditional small-image analysis technique can still be used and the detection effect is guaranteed. When the features of each intermediate picture are recognized, the corresponding association coefficients are calculated to reflect the association relationships among the intermediate pictures, so the global information of the target picture is retained and the problems of interrupted detection and information loss in small-image analysis are solved.
Furthermore, the method is performed by a trained feature recognition model. When the feature recognition model is trained, the whole original image is fed into training; by adopting the idea of splitting the image and re-fusing it, feature recognition and prediction over the whole image are realized by the model, which reduces complex preprocessing and makes the classification and localization results based on feature recognition more accurate.
Furthermore, no model needs to operate directly on the whole image; instead, several small images (i.e., the intermediate pictures) are processed, which reduces the amount of computation and improves efficiency, while the coefficient network learns the association information among the small images and the weight of each small image within the target picture, retaining the global information for subsequent fusion. The global information of the target picture is therefore taken into account, and the information interruption caused by cropping does not occur.
Further, for multi-frame pictures with the association relationship, the global characteristics of the shot object can be considered, and the defect detection effect of the shot object is further improved.
Drawings
FIG. 1 is a flowchart illustrating a first method for detecting defects according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating a second method for detecting defects according to an embodiment of the present invention;
FIG. 3 is a flowchart illustrating a third method for detecting defects according to an embodiment of the present invention;
FIG. 4 is an original drawing of a fabric;
FIG. 5 is a schematic view of a segmentation process performed on the fabric of FIG. 4 by a local segmentation fusion technique;
FIG. 6 is a graph of the detection effect of FIG. 5;
FIG. 7 is a diagram illustrating the effect of detecting flaws on the fabric shown in FIG. 4 according to an embodiment of the present invention;
fig. 8 is a schematic structural diagram of a defect detection apparatus according to an embodiment of the invention.
Detailed Description
As mentioned in the background, the conventional defect detection method may cause the phenomena of detection interruption and information loss.
To solve the above problem, an embodiment of the present invention provides a defect detection method including: acquiring a target picture; performing feature recognition on the target picture to obtain feature information of the target picture; and obtaining a defect detection result according to the feature information of the target picture. Performing feature recognition on the target picture to obtain its feature information includes: obtaining a plurality of intermediate pictures from the target picture; performing feature recognition on each intermediate picture to obtain the feature information of each intermediate picture; performing association analysis on each intermediate picture to obtain the association coefficients between that intermediate picture and the other intermediate pictures of the target picture; and combining the feature information and the association coefficients of the plurality of intermediate pictures to obtain the feature information of the target picture.
By the method, the problems of detection interruption and information loss can be solved while the detection effect is ensured.
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in detail below.
Referring to fig. 1, fig. 1 is a schematic flow chart of a defect detection method according to an embodiment of the present invention, the defect detection method specifically includes the following steps:
step S101, acquiring a target picture;
step S102, carrying out feature identification on a target picture to obtain feature information of the target picture;
and step S103, obtaining a flaw detection result according to the characteristic information of the target picture.
The target picture is a picture to be inspected for defects and can be obtained by photographing an object; for example, in cloth defect detection the target picture can be obtained by photographing the surface of the cloth to be inspected. Optionally, the target picture may be a true-color picture (i.e., an RGB image).
Each intermediate picture can be processed by a Convolutional Neural Network (CNN) deep-learning method to obtain the feature information contained in the target picture. A feature is an object or region in the target picture that can be recognized and conforms to a specific recognition rule, and the feature information represents the category, position, and other attributes of the feature. As a non-limiting example, in cloth defect recognition the feature is a recognized defect, and information such as the type and position of the defect corresponding to the feature information of the target picture is taken as the defect detection result.
The feature information may include the category and position of a feature. The feature category is obtained by classifying the recognized features, for example a flat background, a small object on the background, or the edge of a large object on the background, or a defect category of the cloth, such as a hanging thread or pattern misprinting. The position of a feature indicates where the categorized feature lies in the target picture. A coordinate system may be established on each intermediate picture, and the coordinates of a feature in that coordinate system are taken as its position. Optionally, when the target picture is two-dimensional, two adjacent edges of the picture are taken as the two axes of a two-dimensional coordinate system; if the target picture is three-dimensional, a three-dimensional coordinate system can be established accordingly.
It should be noted that although the embodiments herein take cloth defect detection as an example, the defect detection method of the embodiments of the present invention is not limited to cloth and can also be used in related defect detection schemes, such as defect recognition for metals, plastics, molds, and mechanical parts.
In step S102, the performing the feature recognition on the target picture to obtain the feature information of the target picture may specifically include the following steps S1021 to S1024:
step S1021, a plurality of intermediate pictures are obtained according to the target picture;
to ensure the detection effect, each target picture can be split (or intercepted) into a plurality of intermediate pictures. In the field of computer technology, pictures are generally transmitted as image data, and the image data of the target picture includes image data of a plurality of intermediate pictures.
Optionally, the size of the plurality of intermediate pictures included in each target picture is equal and fixed.
Optionally, the position relationship between each intermediate picture and its corresponding target picture is recorded.
Optionally, the target picture may be preprocessed, for example, the edge of the picture may be cut off, and then the target picture may be cut into a plurality of intermediate pictures.
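The splitting in step S1021 can be sketched as follows. This is a minimal illustration, not the patent's implementation: the function name `split_into_tiles`, the fixed four-tile layout, and the recorded offsets are assumptions; the patent only requires a plurality of equally sized intermediate pictures whose positions in the target picture are recorded.

```python
import numpy as np

def split_into_tiles(image: np.ndarray):
    """Split a target picture into four equal intermediate pictures
    (top-left, bottom-left, top-right, bottom-right), recording each
    tile's row/column offset in the original so that recognized
    features can later be mapped back to the target picture."""
    h, w = image.shape[:2]
    hh, hw = h // 2, w // 2
    return {
        "top_left":     (image[:hh, :hw], (0, 0)),
        "bottom_left":  (image[hh:, :hw], (hh, 0)),
        "top_right":    (image[:hh, hw:], (0, hw)),
        "bottom_right": (image[hh:, hw:], (hh, hw)),
    }

# Example: a tiny 4x4 single-channel "picture"
img = np.arange(16).reshape(4, 4)
tiles = split_into_tiles(img)
```

Each entry pairs a tile with its offset, which corresponds to the positional relationship between an intermediate picture and its target picture mentioned above.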
Step S1022, perform feature identification on each intermediate picture to obtain feature information of each intermediate picture.
For each feature recognition of a single intermediate picture, the feature recognition and the obtained feature information may refer to the related description of step S102, which is not described herein again.
Step S1023, performing association analysis on each intermediate picture to obtain the association coefficients between that intermediate picture and the other intermediate pictures of the target picture.
The association analysis examines the relationships among the features contained in the intermediate pictures. Optionally, whether intermediate pictures are associated can be determined from the outlines, pixels, and other properties of the features they contain; if they are, the degree of association between two pictures is represented by an association coefficient.
Optionally, the association coefficient also represents the weight that the features of each intermediate picture carry within the features of the target picture. The value of the association coefficient is greater than or equal to zero and less than or equal to 1.
And step S1024, combining the characteristic information and the correlation coefficients of the plurality of intermediate pictures to obtain the characteristic information of the target picture.
For a target picture, obtaining feature information of the target picture according to the feature information of each intermediate picture included in the target picture obtained in step S102 and the correlation coefficient of each intermediate picture included in the target picture obtained in step S103, as a result of performing defect detection on the target picture.
Optionally, the features contained in the plurality of intermediate pictures may be stitched together according to the association coefficient of each intermediate picture and the positional relationship between each intermediate picture and the target picture.
By the method shown in fig. 1, a target picture is split (or cropped) into a plurality of intermediate pictures, and feature recognition is performed on each intermediate picture to obtain a defect detection result, so the conventional small-image analysis technique can be used and the detection effect is ensured. When the features of each intermediate picture are recognized, the corresponding association coefficients are calculated to reflect the association relationships among the intermediate pictures, so the global information of the target picture is retained and the problems of interrupted detection and information loss in small-image analysis are solved.
In one embodiment, the method described in fig. 1 is performed by a trained feature recognition model. The training samples for the feature recognition model are pictures that are collected or generated by a program, in the same format as the target pictures. The feature recognition model splits each training picture into a plurality of small pictures in the same way as step S1021 and trains on those small pictures, so that the model learns to perform steps S1022 to S1024.
In this way, the whole original image is used when the feature recognition model is trained; by adopting the idea of splitting the image and re-fusing it, feature recognition and prediction over the whole image are realized by the model, which reduces complex preprocessing and makes the classification and localization results based on feature recognition more accurate.
In one embodiment, the obtaining the feature information of the target picture by combining the feature information and the correlation coefficient of the plurality of intermediate pictures includes: and calculating the product of the characteristic information of each intermediate picture and the correlation coefficient of the intermediate picture, and summing the products of all the intermediate pictures of the target picture, wherein the sum is the characteristic information of the target picture.
Denote the feature information and the association coefficient of the i-th intermediate picture of a target picture by character(i) and coefficient(i), where i = 1, 2, …, n and n is the number of intermediate pictures. The feature information of the target picture is then

    Character = Σ_{i=1}^{n} character(i) × coefficient(i),

where Σ denotes summation over all intermediate pictures of the target picture.
Optionally, each target picture contains four intermediate pictures of the same size, which may correspond respectively to the upper-left, lower-left, upper-right, and lower-right regions of the target picture; in this case n = 4.
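The weighted-sum fusion above can be sketched in a few lines of numpy. This is a hedged illustration only: the helper name `fuse_features` and the sample feature values and coefficients are made up for the example; in the patent, the feature information and the coefficients are produced by the recognition and coefficient networks.

```python
import numpy as np

def fuse_features(features, coefficients):
    """Compute Character = sum_i character(i) * coefficient(i).

    features: list of n equally shaped arrays (one per intermediate picture);
    coefficients: n scalar association coefficients in [0, 1]."""
    assert len(features) == len(coefficients)
    return sum(c * f for f, c in zip(features, coefficients))

# Four illustrative 2x2 "feature maps" and their coefficients
feats = [np.full((2, 2), v, dtype=float) for v in (1.0, 2.0, 3.0, 4.0)]
coefs = [0.4, 0.3, 0.2, 0.1]
fused = fuse_features(feats, coefs)   # every entry is 0.4*1 + 0.3*2 + 0.2*3 + 0.1*4 = 2.0
```

The result has the same shape as each intermediate feature map, matching the claim that the sum is the feature information of the whole target picture.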
In one embodiment, referring to fig. 1 and fig. 2, after the multiple intermediate pictures are obtained according to the target picture in step S1021 in fig. 1 and before the feature identification is performed on each intermediate picture in step S1022 to obtain the feature information of each intermediate picture, the method further includes the following steps:
step S201, respectively inputting each intermediate picture into a feature extraction model to obtain a plurality of feature information layers with different feature dimensions of the intermediate picture, wherein the feature extraction model is obtained by analyzing according to feature extraction in a sample picture and is used for extracting the feature information layers in the input picture;
specifically, the feature extraction model is a model for acquiring feature information layers of input intermediate pictures based on different feature dimensions, which is trained by using a sample image as a training sample according to texture distribution characteristics in the sample. Optionally, the feature extraction model may adopt a convolutional neural network model, such as MobileNet _ v2, MobileNet _ v3, and the like, and perform convolution processing on each intermediate picture through a plurality of different convolution kernels to obtain a plurality of feature information layers corresponding to each intermediate picture.
Optionally, after obtaining a plurality of feature information layers of different feature dimensions of the intermediate picture, the method may further include: in step S202, a partial feature information layer is extracted from the plurality of feature information layers obtained.
Taking the MobileNet_v3 network as an example, please refer to fig. 3, which is a schematic flow chart of another defect detection method according to an embodiment of the present invention. Each target picture contains four intermediate pictures, which may correspond respectively to the top-left, bottom-left, top-right, and bottom-right regions of the target picture. For each intermediate picture, 19 feature information layers are obtained through the MobileNet_v3 network; the fourth, seventh, and thirteenth feature information layers are extracted and assigned to C2, C3, and C4 in fig. 3, respectively, and the subsequent steps are then carried out.
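The layer-selection step can be sketched as follows. The helper `select_layers` is hypothetical, and the string placeholders stand in for real feature maps produced by the backbone; only the indexing scheme (keeping layers 4, 7, and 13 of 19 as C2, C3, C4) comes from the description above.

```python
def select_layers(layers, indices=(4, 7, 13)):
    """Keep the 1-indexed feature information layers used downstream,
    naming them C2, C3, C4 as in fig. 3."""
    assert len(layers) >= max(indices)
    return {f"C{k}": layers[i - 1] for k, i in zip((2, 3, 4), indices)}

# 19 placeholder layers standing in for the MobileNet_v3 outputs
layers = [f"layer{i}" for i in range(1, 20)]
selected = select_layers(layers)
```

With a real backbone, `layers` would be the per-stage feature maps of one intermediate picture rather than strings.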
Step S203, for each intermediate picture, add the detail information of the multiple feature information layers to obtain one or more extension layers of the intermediate picture.
Specifically, after obtaining a plurality of feature information layers with different feature dimensions of the intermediate picture in step S201, or after extracting part of the feature information layers from the obtained plurality of feature information layers in step S202, the method may further include: step S203, for each intermediate picture, add the detail information of the multiple feature information layers to obtain one or more extension layers of the intermediate picture.
Optionally, the detail information of multiple feature information layers may be added by methods such as upsampling, fusion of two or more feature information layers, and deconvolution of feature information layers.
In one embodiment, with continued reference to fig. 3, after obtaining the feature information layers C2, C3, and C4, C4 obtains a feature information layer P4 through a 3 × 3 convolution filter, P4 obtains a feature information layer P3 through upsampling and feature fusion with C3, P3 obtains a feature information layer P2 through upsampling and C2 fusion, P4 obtains the feature information layer P5 through a 3 × 3 convolution filter with a step size of 2, and P5 obtains a feature information layer P6 through a 3 × 3 convolution filter with a step size of 2. At this time, P2, P3, P4, P5 and P6 are the expansion layers. Thus, feature recognition can be performed by combining a plurality of feature information layers and extension layers.
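The construction of the expansion layers can be sketched with simplified numpy stand-ins. This shows only the topology described above (P4 built from C4; P3 and P2 by upsampling and fusion with C3 and C2; P5 and P6 by stride-2 convolutions from P4 and P5): the 3×3 convolutions are replaced by an identity and by plain subsampling, and fusion is elementwise addition, all of which are assumptions for illustration.

```python
import numpy as np

def upsample2(x):
    """Nearest-neighbour 2x upsampling (stand-in for learned upsampling)."""
    return np.kron(x, np.ones((2, 2)))

def downsample2(x):
    """Stride-2 subsampling (stand-in for a 3x3 convolution with step size 2)."""
    return x[::2, ::2]

def build_expansion_layers(C2, C3, C4):
    P4 = C4                      # stand-in for the 3x3 convolution on C4
    P3 = upsample2(P4) + C3      # upsample P4 and fuse with C3
    P2 = upsample2(P3) + C2      # upsample P3 and fuse with C2
    P5 = downsample2(P4)         # stride-2 stand-in
    P6 = downsample2(P5)         # stride-2 stand-in
    return P2, P3, P4, P5, P6

C4 = np.ones((4, 4)); C3 = np.ones((8, 8)); C2 = np.ones((16, 16))
P2, P3, P4, P5, P6 = build_expansion_layers(C2, C3, C4)
```

The shapes double from P4 to P3 to P2 and halve from P4 to P5 to P6, mirroring the multi-resolution expansion layers used for feature recognition.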
Optionally, please continue to refer to fig. 1 to fig. 3, where the step S1022 in fig. 1 performs feature recognition on each intermediate picture to obtain feature information of each intermediate picture, which may include the step S204 in fig. 2, obtaining part or all of the feature information layer and the extension layer of each intermediate picture, and inputting the feature information layer and the extension layer into a feature recognition network, to obtain an original mask feature of the intermediate picture.
The feature recognition network is obtained by training big data by taking feature information layers of a plurality of sample pictures as training samples.
Optionally, the feature recognition network is configured to perform feature fusion on the input feature information layer and the extension layer to obtain a Mask map (Mask) of an intermediate picture corresponding to the input feature information layer and the extension layer, and record the Mask map as an initial Mask map (initial Mask) of the intermediate picture, where the initial Mask map includes an original Mask feature of the intermediate picture. Optionally, feature fusion may be performed on the input feature information layer and the extension layer through a contact (concat) layer in the neural network technology.
With reference to fig. 3, four layers P2, P3, P4 and C2 are input into the feature recognition network, and feature fusion is performed on the four layers through the concat layer in the feature recognition network to obtain initial mask maps of four intermediate pictures, which are respectively denoted as Top-left initial mask map, bottom-left initial mask map, Top-Right initial mask map, and bottom-Right initial mask map (as indicated in the figure).
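The mask branch can be sketched as follows. This is a heavily simplified stand-in: the selected layers are assumed to have already been brought to a common resolution, the concat layer is modelled by stacking along a channel axis, and the learned layers of the feature recognition network are replaced by a mean over channels, which is purely illustrative.

```python
import numpy as np

def initial_mask(layers):
    """Fuse same-resolution feature layers of one intermediate picture
    into a single-channel initial mask map.

    layers: list of (H, W) arrays (e.g. P2, P3, P4 upsampled, plus C2)."""
    stacked = np.stack(layers, axis=0)   # "concat" along the channel axis
    return stacked.mean(axis=0)          # stand-in for the learned fusion

# Four illustrative 4x4 layers for one intermediate picture
layers = [np.full((4, 4), v, dtype=float) for v in (0.0, 1.0, 2.0, 3.0)]
mask = initial_mask(layers)
```

Running this once per tile would yield the four initial mask maps (Top-left, bottom-left, Top-Right, bottom-Right) referred to above.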
In an embodiment, the performing, in step S1023 of fig. 1, the correlation degree analysis on each intermediate picture to obtain the correlation coefficient between the intermediate picture and another intermediate picture of the target picture may include, in step S205 of fig. 3, inputting part or all of the feature information layer and the extension layer of each intermediate picture into a coefficient network to obtain the correlation coefficient of the intermediate picture;
The coefficient network is obtained through big-data training, with the feature information layers and/or extension layers of a plurality of sample pictures serving as training samples. Specifically, big-data training on these layers allows the coefficient network to learn how to produce the correlation coefficient corresponding to an intermediate picture.
With continued reference to fig. 3, the extended layers P3, P4, P5 and P6 are input into the coefficient network (the coefficient network is denoted by conv in the figure) to obtain the corresponding correlation coefficients of the four intermediate pictures: top-left Coefficient, Top-right Coefficient, Bottom-left Coefficient, and Bottom-right Coefficient.
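A minimal sketch of what such a coefficient network might compute is given below. The global average pooling, the linear head, and the softmax normalization of the four coefficients are assumptions made for illustration; the patent only specifies that a conv-based coefficient network maps the extension layers to one correlation coefficient per intermediate picture.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical extension-layer features for the four intermediate pictures
# (one (C, H, W) tensor each); the real network takes P3 to P6 as input.
ext_layers = [rng.standard_normal((64, 8, 8)) for _ in range(4)]

w = rng.standard_normal(64)  # stand-in for the learned conv weights

def coefficient(feat, w):
    # Global average pool to a channel descriptor, then a linear head;
    # the conv block of fig. 3 plays this role in the described method.
    pooled = feat.mean(axis=(1, 2))  # (C,)
    return float(pooled @ w)

logits = np.array([coefficient(f, w) for f in ext_layers])
# Normalize so the four coefficients act as fusion weights summing to 1.
coeffs = np.exp(logits) / np.exp(logits).sum()
print(coeffs)
```

The four outputs correspond to the top-left, top-right, bottom-left and bottom-right coefficients of fig. 3.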
Optionally, the method illustrated in fig. 2 may further include step S206: combining the original mask features and the correlation coefficients of the plurality of intermediate pictures to obtain the target mask features.
Optionally, the products of the initial mask maps of the multiple intermediate pictures of the target picture and their respective correlation coefficients are summed to obtain the mask map of the target picture (i.e., the final Mask output in fig. 3); the mask map of the target picture contains the target mask features.
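The coefficient-weighted summation of step S206 can be written directly. The mask size and the concrete coefficient values below are hypothetical; the operation itself (sum of initial mask map times correlation coefficient) is the one described above.

```python
import numpy as np

rng = np.random.default_rng(2)

# Four hypothetical initial mask maps (one per intermediate picture)
# and hypothetical correlation coefficients for them.
masks = [rng.random((16, 16)) for _ in range(4)]
coeffs = [0.4, 0.3, 0.2, 0.1]

# Step S206: the mask map of the target picture is the sum of each
# initial mask map weighted by its correlation coefficient.
target_mask = sum(c * m for c, m in zip(coeffs, masks))
print(target_mask.shape)
```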
With the method described in fig. 2 and fig. 3, each model does not need to compute directly over the whole image; instead it computes over a plurality of small images (i.e., the intermediate pictures), which reduces the amount of computation and improves efficiency. Meanwhile, the coefficient network learns the association information among the small images and the weight of each small image within the target picture, so the global information is retained for the subsequent fusion. The global information of the target picture is therefore taken into account, and the information-interruption phenomenon caused by simple cropping does not occur.
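The division of the target picture into intermediate pictures can be illustrated as follows. The 2x2 split into top-left, top-right, bottom-left and bottom-right tiles matches fig. 3, while the picture size used here is hypothetical.

```python
import numpy as np

# Hypothetical target picture of shape (H, W, C); the method splits it
# into four small pictures and lets the networks operate on each tile
# instead of on the whole image.
target = np.arange(8 * 8 * 3).reshape(8, 8, 3)

h, w = target.shape[0] // 2, target.shape[1] // 2
tiles = {
    "top_left": target[:h, :w],
    "top_right": target[:h, w:],
    "bottom_left": target[h:, :w],
    "bottom_right": target[h:, w:],
}
print({k: v.shape for k, v in tiles.items()})
```

Because the tiles are plain slices, stitching them back together reproduces the original target picture exactly, which is what allows the fused mask to cover the whole image.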
Referring to fig. 4 to fig. 7, consider defect detection on a piece of cloth: fig. 4 is the original image of the cloth. Fig. 5 is a schematic diagram of the division used when the cloth of fig. 4 is inspected by a local segmentation-and-fusion technique, that is, the image is divided into four small images (501, 502, 503 and 504 in the figure) for detection. Fig. 6 illustrates the detection result of fig. 5, where the dotted area 601 is a detected defect area. Fig. 7 illustrates the result of detecting defects on the cloth of fig. 4 according to an embodiment of the present invention, where the dotted area 701 is a detected defect area. Comparing fig. 6 and fig. 7, the conventional local segmentation-and-fusion technique, having lost the global information, fails to identify the bottom flaw that is detected in fig. 7.
In the taxonomy of fabric defects, the co-occurrence of a suspension wire and a crossroad is called a broken end.
The detection result of the conventional method in fig. 6 identifies the defect separately as a suspension wire (area 601) and a crossroad (area 602), so the defect type is reported in a disordered manner, and the detection range of the crossroad defect is incomplete. In the detection result shown in fig. 7, the suspension wire (area 701) and the crossroad (area 702) that together constitute the broken end are both detected, and the detection range of the crossroad defect is relatively complete.
In one embodiment, the target picture is taken from continuous multi-frame pictures, and the method further includes: acquiring a scene coefficient corresponding to the target picture according to the relation between the frames of the continuous multi-frame pictures; and marking the scene coefficient in the feature information of the target picture.
In an actual picture-collection process, a plurality of pictures are often collected continuously as target pictures. For example, when a piece of cloth is inspected for flaws, a long piece of cloth needs to be shot continuously to obtain pictures of each of its regions, and those pictures are combined to obtain an overall picture of the cloth.
The scene coefficient is used to represent the association between each target picture and the other continuously acquired target pictures; its expression and acquisition can be analogous to those of the correlation coefficient.
Optionally, the scene coefficient of each target picture may be derived by monitoring the shooting interval of the picture-acquisition device, the characteristics of the photographed object, and the like.
Through this embodiment, the global characteristics of the photographed object are taken into account across multiple associated frames, further improving the defect detection effect for the photographed object.
Referring to fig. 8, an embodiment of the invention further provides a defect detecting apparatus 80, where the defect detecting apparatus 80 may include:
a target picture obtaining module 801, configured to obtain a target picture;
a first feature identification module 802, configured to perform feature identification on a target picture to obtain feature information of the target picture;
a result obtaining module 803, configured to obtain a flaw detection result according to the feature information of the target picture;
wherein the first feature identification module 802 comprises:
an intermediate picture acquiring unit 8021, configured to acquire multiple intermediate pictures according to the target picture;
a second feature identification unit 8022, configured to perform feature identification on each intermediate picture to obtain feature information of each intermediate picture;
the association analysis unit 8023 is configured to perform association degree analysis on each intermediate picture to obtain an association coefficient between the intermediate picture and another intermediate picture of the target picture;
and the result output unit 8024 is configured to obtain the feature information of the target picture by combining the feature information and the correlation coefficients of the multiple intermediate pictures.
In one embodiment, the result output unit 8024 is further configured to calculate a product of the feature information of each intermediate picture and the correlation coefficient of the intermediate picture, and sum the products of all intermediate pictures of the target picture, where the sum is the feature information of the target picture.
In one embodiment, the defect detecting apparatus 80 may further include:
the feature extraction model analysis module is used for inputting each intermediate picture into a feature extraction model to obtain a plurality of feature information layers of different feature dimensions for that intermediate picture, where the feature extraction model is a model obtained through analysis of feature extraction on sample pictures and used for extracting the feature information layers of an input picture;
and the extended layer acquisition module is used for increasing the detail information of the characteristic information layers for each intermediate picture to obtain one or more extended layers of the intermediate picture.
Optionally, the feature extraction model includes mobilenet_v3.
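A minimal numpy sketch of the layer-pyramid and extension-layer idea follows. The backbone stage shapes, the pooling-based downsampling, and the FPN-style upsample-and-add are stand-ins chosen for the example; in the described device, mobilenet_v3 would supply the feature information layers, and the extension layers add detail information to them.

```python
import numpy as np

rng = np.random.default_rng(3)

def avg_pool2(x):
    """2x2 average pooling, halving the spatial size of a (C, H, W) map."""
    c, h, w = x.shape
    return x.reshape(c, h // 2, 2, w // 2, 2).mean(axis=(2, 4))

def upsample2(x):
    """Nearest-neighbour 2x upsampling of a (C, H, W) map."""
    return x.repeat(2, axis=1).repeat(2, axis=2)

# Hypothetical backbone output for one intermediate picture; repeated
# pooling yields feature information layers at different dimensions
# (a stand-in for what mobilenet_v3 stages would produce).
c2 = rng.standard_normal((16, 32, 32))
c3 = avg_pool2(c2)
c4 = avg_pool2(c3)

# Extension layer in FPN style: upsample the coarse layer and add it to
# the finer one, enriching the feature information layers.
p4 = c4
p3 = c3 + upsample2(p4)
print(c3.shape, c4.shape, p3.shape)
```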
In an embodiment, the second feature identification unit 8022 may be further configured to obtain part or all of the feature information layers and extension layers of each intermediate picture and input them into a feature recognition network, so as to obtain the original mask features of the intermediate picture.
In an embodiment, the association analysis unit 8023 may be further configured to input part or all of the feature information layer and the extension layer of each intermediate picture into a coefficient network, so as to obtain the association coefficient of the intermediate picture.
In one embodiment, the target picture is taken from a plurality of consecutive pictures, and the defect detecting apparatus 80 further includes:
the scene coefficient acquisition module is used for acquiring the scene coefficient corresponding to the target picture according to the relation between each frame of picture in the continuous multi-frame pictures;
and the scene coefficient marking module is used for marking the scene coefficient in the characteristic information of the target picture.
For more details of the working principle and working mode of the defect detecting apparatus 80, reference may be made to the description of the defect detecting method in fig. 1 to 7, and details are not repeated here.
Further, the embodiment of the present invention further discloses a computer device, which includes a memory and a processor, where the memory stores a computer program capable of running on the processor, and the processor executes the technical solution of the flaw detection method in the embodiment shown in fig. 1 to 7 when running the computer program.
Further, an embodiment of the present invention further discloses a storage medium, on which a computer program is stored, and when the computer program is executed by a processor, the technical solution of the flaw detection method in the embodiments shown in fig. 1 to 7 is executed. Preferably, the storage medium may include a computer-readable storage medium such as a non-volatile (non-volatile) memory or a non-transitory (non-transient) memory. The storage medium may include ROM, RAM, magnetic or optical disks, and the like.
Specifically, in the embodiment of the present invention, the processor may be a Central Processing Unit (CPU), and the processor may also be other general-purpose processors, Digital Signal Processors (DSPs), Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, and the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
It will also be appreciated that the memory in the embodiments of the present application can be volatile memory or nonvolatile memory, or can include both. The nonvolatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), or a flash memory. The volatile memory may be a random access memory (RAM), which acts as an external cache. By way of example and not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), and direct Rambus RAM (DR RAM).
It should be understood that the term "and/or" herein merely describes an association relationship between associated objects, meaning that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the character "/" herein indicates that the former and latter associated objects are in an "or" relationship.
The "plurality" appearing in the embodiments of the present application means two or more.
The descriptions of the first, second, etc. appearing in the embodiments of the present application are only for illustrating and differentiating the objects, and do not represent the order or the particular limitation of the number of the devices in the embodiments of the present application, and do not constitute any limitation to the embodiments of the present application.
The term "connect" in the embodiments of the present application refers to various connection manners, such as direct connection or indirect connection, to implement communication between devices, which is not limited in this embodiment of the present application.
Although the present invention is disclosed above, the present invention is not limited thereto. Various changes and modifications may be effected therein by one skilled in the art without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (10)

1. A method of defect detection, the method comprising:
acquiring a target picture;
carrying out feature identification on a target picture to obtain feature information of the target picture;
obtaining a flaw detection result according to the characteristic information of the target picture;
the feature recognition of the target picture to obtain the feature information of the target picture includes:
acquiring a plurality of intermediate pictures according to the target picture;
performing feature identification on each intermediate picture to obtain feature information of each intermediate picture;
analyzing the relevance of each intermediate picture to obtain the relevance coefficient between the intermediate picture and other intermediate pictures of the target picture;
and combining the characteristic information and the correlation coefficients of the plurality of intermediate pictures to obtain the characteristic information of the target picture.
2. The method according to claim 1, wherein the combining the feature information and the correlation coefficient of the plurality of intermediate pictures to obtain the feature information of the target picture comprises:
and calculating the product of the characteristic information of each intermediate picture and the correlation coefficient of the intermediate picture, and summing the products of all the intermediate pictures of the target picture, wherein the sum is the characteristic information of the target picture.
3. The method according to claim 1 or 2, wherein after the obtaining of the plurality of intermediate pictures according to the target picture and before the performing of the feature recognition on each intermediate picture to obtain the feature information of each intermediate picture, the method further comprises:
respectively inputting each intermediate picture into a feature extraction model to obtain a plurality of feature information layers with different feature dimensions of the intermediate picture, wherein the feature extraction model is obtained by analyzing according to feature extraction in a sample picture and is used for extracting the feature information layers in the input picture;
and for each intermediate picture, adding the detail information of the plurality of characteristic information layers to obtain one or more expansion layers of the intermediate picture.
4. The method of claim 3, wherein the feature extraction model comprises mobilenet_v3.
5. The method according to claim 3, wherein the performing feature recognition on each intermediate picture to obtain feature information of each intermediate picture comprises:
and acquiring part or all of the feature information layer and the expansion layer of each intermediate picture, and inputting the part or all of the feature information layer and the expansion layer into a feature identification network to obtain the original mask features of the intermediate pictures.
6. The method of claim 3, wherein the analyzing the correlation degree of each intermediate picture to obtain the correlation coefficient between the intermediate picture and other intermediate pictures of the target picture comprises:
and inputting part or all of the characteristic information layer and the expansion layer of each intermediate picture into a coefficient network to obtain the correlation coefficient of the intermediate picture.
7. The method of claim 1, wherein the target picture is taken from a consecutive plurality of pictures, the method further comprising:
acquiring a scene coefficient corresponding to the target picture according to the relation between each frame of picture in the continuous multi-frame pictures;
and marking the scene coefficient in the characteristic information of the target picture.
8. A defect detection apparatus, comprising:
the target picture acquisition module is used for acquiring a target picture;
the first feature identification module is used for carrying out feature identification on a target picture to obtain feature information of the target picture;
the result acquisition module is used for acquiring a flaw detection result according to the characteristic information of the target picture;
wherein the first feature identification module comprises:
the intermediate picture acquiring unit is used for acquiring a plurality of intermediate pictures according to the target picture;
the second characteristic identification unit is used for carrying out characteristic identification on each intermediate picture to obtain characteristic information of each intermediate picture;
the correlation analysis unit is used for analyzing the correlation degree of each intermediate picture to obtain the correlation coefficient between the intermediate picture and other intermediate pictures of the target picture;
and the result output unit is used for combining the characteristic information and the correlation coefficient of the plurality of intermediate pictures to obtain the characteristic information of the target picture.
9. A computer device comprising a memory and a processor, the memory having stored thereon a computer program operable on the processor, wherein the processor, when executing the computer program, performs the steps of the method of any of claims 1 to 7.
10. A storage medium having a computer program stored thereon, the computer program, when being executed by a processor, performing the steps of the method according to any one of claims 1 to 7.
CN202011639446.0A 2020-12-31 2020-12-31 Flaw detection method and device, computer equipment and storage medium Active CN112669299B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011639446.0A CN112669299B (en) 2020-12-31 2020-12-31 Flaw detection method and device, computer equipment and storage medium


Publications (2)

Publication Number Publication Date
CN112669299A true CN112669299A (en) 2021-04-16
CN112669299B CN112669299B (en) 2023-04-07

Family

ID=75413851

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011639446.0A Active CN112669299B (en) 2020-12-31 2020-12-31 Flaw detection method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112669299B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150249774A1 (en) * 2014-02-28 2015-09-03 Konica Minolta Laboratory U.S.A., Inc. Ghost artifact detection and removal in hdr image creation using graph based selection of local reference
CN109767443A (en) * 2019-01-17 2019-05-17 深圳码隆科技有限公司 A kind of fabric flaws method of data capture and device
CN110243937A (en) * 2019-06-17 2019-09-17 江南大学 A kind of Analyse of Flip Chip Solder Joint missing defect intelligent detecting method based on high frequency ultrasound
CN110533046A (en) * 2019-08-30 2019-12-03 北京地平线机器人技术研发有限公司 A kind of image instance dividing method and device
CN110889437A (en) * 2019-11-06 2020-03-17 北京达佳互联信息技术有限公司 Image processing method and device, electronic equipment and storage medium
CN111199193A (en) * 2019-12-23 2020-05-26 平安国际智慧城市科技股份有限公司 Image classification method and device based on digital slicing and computer equipment
CN111310758A (en) * 2020-02-13 2020-06-19 上海眼控科技股份有限公司 Text detection method and device, computer equipment and storage medium
CN111444907A (en) * 2020-03-24 2020-07-24 上海东普信息科技有限公司 Character recognition method, device, equipment and storage medium
CN111475680A (en) * 2020-03-27 2020-07-31 深圳壹账通智能科技有限公司 Method, device, equipment and storage medium for detecting abnormal high-density subgraph
CN111598827A (en) * 2019-02-19 2020-08-28 富泰华精密电子(郑州)有限公司 Appearance flaw detection method, electronic device and storage medium


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
叶子云等: "一种基于加权图模型的手指静脉识别方法", 《山东大学学报(工学版)》 *
杨晋生等: "基于深度可分离卷积的交通标志识别算法", 《液晶与显示》 *
王南杰: ""PCB瑕疵识别算法研究"", 《中国优秀硕士学位论文全文数据库 信息科技辑》 *

Also Published As

Publication number Publication date
CN112669299B (en) 2023-04-07

Similar Documents

Publication Publication Date Title
CN110110799B (en) Cell sorting method, cell sorting device, computer equipment and storage medium
JP6660313B2 (en) Detection of nuclear edges using image analysis
CN107230203B (en) Casting defect identification method based on human eye visual attention mechanism
Liang et al. An efficient forgery detection algorithm for object removal by exemplar-based image inpainting
CN113781402A (en) Method and device for detecting chip surface scratch defects and computer equipment
TWI559242B (en) Visual clothing retrieval
CN108446694B (en) Target detection method and device
US11538261B2 (en) Systems and methods for automated cell segmentation and labeling in immunofluorescence microscopy
CN112101386B (en) Text detection method, device, computer equipment and storage medium
AU2020272936B2 (en) Methods and systems for crack detection using a fully convolutional network
WO2021057395A1 (en) Heel type identification method, device, and storage medium
CN110599453A (en) Panel defect detection method and device based on image fusion and equipment terminal
CN112819796A (en) Tobacco shred foreign matter identification method and equipment
CN110782385A (en) Image watermark removing method based on deep learning
Zhao et al. Automatic blur region segmentation approach using image matting
CN104637060B (en) A kind of image partition method based on neighborhood principal component analysis-Laplce
CN108960247B (en) Image significance detection method and device and electronic equipment
CN114332086B (en) Textile defect detection method and system based on style migration and artificial intelligence
CN112669300A (en) Defect detection method and device, computer equipment and storage medium
US9875528B2 (en) Multi-frame patch correspondence identification in video
CN112669299B (en) Flaw detection method and device, computer equipment and storage medium
Cheng et al. Power pole detection based on graph cut
Torres-Gómez et al. Recognition and reconstruction of transparent objects for augmented reality
CN115424181A (en) Target object detection method and device
Khan et al. Segmentation of single and overlapping leaves by extracting appropriate contours

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant