CN115018846A - AI intelligent camera-based multi-target crack defect detection method and device - Google Patents
- Publication number
- CN115018846A (application number CN202210946913.7A)
- Authority
- CN
- China
- Prior art keywords
- image
- workpiece
- target
- detection
- target workpiece
- Prior art date
- Legal status (the legal status is an assumption and is not a legal conclusion): Granted
Classifications
- G06T7/0004 — Image analysis; inspection of images, e.g. flaw detection; industrial image inspection
- G06V10/26 — Image preprocessing; segmentation of patterns in the image field; cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; detection of occlusion
- G06V10/44 — Extraction of image or video features; local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components
- G06V10/764 — Recognition or understanding using pattern recognition or machine learning; using classification, e.g. of video objects
- G06V10/774 — Generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
- G06V10/82 — Recognition or understanding using neural networks
- G06V2201/07 — Target detection
- Y02P90/30 — Computing systems specially adapted for manufacturing (climate change mitigation technologies in the production or processing of goods)
Abstract
The application discloses a multi-target crack defect detection method and device based on an AI (artificial intelligence) smart camera. A single AI smart camera captures a first image covering a plurality of detection areas on the same detection station; a deep-learning segmentation model obtains the segmented image and workpiece type of each of a plurality of target workpieces from the first image; an opening-contour region image is then obtained in parallel from the segmented image of each target workpiece according to its workpiece type; a sliding window with a preset image size and step length cuts a group of detection images along the opening contour, and these are fed in turn to the deep-learning defect classification model corresponding to each target workpiece's type to identify whether crack defects exist in the group. The method and device significantly reduce the cost of detecting opening crack defects across multiple target workpiece types, improve detection efficiency for those types, and improve detection accuracy for micro-crack defects on the opening contour of a target workpiece.
Description
Technical Field
The application relates to the technical field of AI smart cameras and machine vision, and in particular to a multi-target crack defect detection method and device based on an AI smart camera.
Background
With the development of image processing and machine vision technologies, AI smart cameras are increasingly used in industry to detect appearance defects of product workpieces on production lines. An AI smart camera can be pre-deployed with an AI image-processing algorithm model, integrating image capture, image processing, and defect detection of product workpieces at a production-line detection station.
For workpieces with openings, such as smartphone shells and lens mounts, mould issues can produce very small crack defects at the edges of the opening. These defects are often only a few pixels in size and cannot be reliably identified by the naked eye; they must be detected and identified automatically with machine-vision technology and image-processing algorithms.
A real production line often carries several different types of workpieces that serve the same function. The openings of different workpiece types may differ in shape and size, and the workpieces themselves may also differ in size, shape, texture, and colour. To detect crack defects across these workpiece types, current practice usually requires training a separate deep-learning defect segmentation and classification algorithm for each specific workpiece type, deploying each algorithm in a different AI smart camera, and setting up a dedicated detection station for each workpiece type, each station equipped with an AI smart camera carrying the correspondingly trained algorithms to identify and detect defects for that type. This greatly increases the cost of workpiece defect detection on the production line and reduces defect detection efficiency across workpiece types. In addition, existing AI-smart-camera schemes for detecting ultra-fine crack defects segment the crack from the workpiece image with a trained deep-learning defect segmentation algorithm and then classify it; such schemes cannot accurately locate ultra-fine cracks, are easily disturbed by noise such as workpiece surface texture, and therefore suffer reduced detection accuracy.
Disclosure of Invention
To solve these problems, the application provides a multi-target crack defect detection method and device based on an AI (artificial intelligence) smart camera, which reduce the cost of detecting opening crack defects across multiple target workpiece types, improve detection efficiency for those types, and improve detection accuracy for micro-crack defects on the opening contour of a target workpiece.
In a first aspect, the application provides a multi-target crack defect detection method based on an AI smart camera, including:
capturing, by a single AI smart camera, a first image covering a plurality of detection areas on the same detection station, one target workpiece being placed in each detection area;
performing segmentation detection on the first image with a deep-learning segmentation model, identifying the workpiece type of the target workpiece in each detection area, and obtaining a segmented image of each target workpiece;
extracting the opening contour from the segmented image of each target workpiece in parallel according to its workpiece type, obtaining coordinate and dimension data of the opening contour of each target workpiece, and obtaining at least one opening-contour region image of each target workpiece from its segmented image according to that coordinate and dimension data;
and cutting a group of detection images along the opening contour from the at least one opening-contour region image of each target workpiece with a sliding window of preset image size and step length, feeding them in turn to the deep-learning defect classification model pre-deployed on the AI smart camera for that target workpiece's type, and identifying whether a crack defect exists in the group of detection images, each detection image in the group containing a part of the opening contour.
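The sliding-window cutting step can be sketched as follows, assuming the opening contour has already been straightened into a horizontal strip image; function and parameter names are illustrative, not taken from the patent:

```python
import numpy as np

def cut_detection_images(strip, win_w, stride):
    """Cut a group of fixed-size detection windows along an unwrapped
    opening-contour strip (H x W grayscale array), left to right.
    The last window is clamped so the end of the contour is always covered."""
    h, w = strip.shape
    starts = list(range(0, max(w - win_w, 0) + 1, stride))
    if starts and starts[-1] != w - win_w:
        starts.append(w - win_w)  # ensure the tail of the contour is covered
    return [strip[:, s:s + win_w] for s in starts]

# Example: a 32x200 strip, 64-pixel windows, stride 32 -> overlapping crops,
# each containing a part of the opening contour.
strip = np.zeros((32, 200), dtype=np.uint8)
crops = cut_detection_images(strip, win_w=64, stride=32)
```

Each crop would then be fed in turn to the per-type defect classification model.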
In an alternative embodiment, extracting the opening contour from the segmented image of each target workpiece in parallel according to its workpiece type and obtaining the coordinate and dimension data of the opening contour includes:
obtaining the shape type of the opening contour of each target workpiece from its workpiece type, and obtaining the minimum circumscribed rectangular frame of the opening contour from the segmented image of each target workpiece;
and generating the coordinate and dimension data of the opening contour of each target workpiece from the shape type of its opening contour and the minimum circumscribed rectangular frame of the opening contour.
In an alternative embodiment, if the shape type of the opening contour is the first shape type, the coordinate and dimension data of the opening contour comprise the coordinate and dimension data of the horizontal minimum circumscribed rectangular frame obtained after correcting the segmented image of the target workpiece by the rotation angle of the minimum circumscribed rectangular frame of the opening contour, together with fitted dimension data of at least one rounded corner; if the shape type of the opening contour is the second shape type, the coordinate and dimension data comprise the centre-point coordinates of the minimum circumscribed rectangular frame of the opening contour and the radius of the circle.
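For the second (circular) shape type, the centre and radius could be estimated from contour points along these lines — a hypothetical NumPy sketch, since the patent does not specify the fitting method:

```python
import numpy as np

def circle_from_contour(points):
    """Estimate the centre and radius of a circular opening contour:
    centre of the contour's bounding box, radius as the mean distance of
    contour points to that centre (a least-squares circle fit would also
    work; this sketch keeps the arithmetic obvious)."""
    pts = np.asarray(points, dtype=float)
    cx = (pts[:, 0].min() + pts[:, 0].max()) / 2
    cy = (pts[:, 1].min() + pts[:, 1].max()) / 2
    r = np.hypot(pts[:, 0] - cx, pts[:, 1] - cy).mean()
    return (cx, cy), r

# Points sampled on a circle of radius 5 centred at (10, 10)
t = np.linspace(0, 2 * np.pi, 100, endpoint=False)
pts = np.stack([10 + 5 * np.cos(t), 10 + 5 * np.sin(t)], axis=1)
(cx, cy), r = circle_from_contour(pts)
```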
In an alternative embodiment, if the shape type of the opening contour is the first shape type, the at least one opening-contour region image includes a first opening-contour region image and at least one second opening-contour region image, the at least one second opening-contour region image being obtained by applying a rounded-corner contour straightening transformation to the at least one rounded-corner region image.
In an alternative embodiment, if the shape type of the opening contour is the second shape type, the at least one opening-contour region image comprises a second opening-contour region image obtained by applying a circular contour straightening transformation to the original opening-contour region image.
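The circular contour straightening transformation can be sketched as a polar unwrap: sampling a thin band around the fitted circle so the contour becomes a horizontal strip. This is an illustrative nearest-neighbour version with assumed function and parameter names, not the patent's exact transformation:

```python
import numpy as np

def unwrap_circle(img, cx, cy, r, band=4, n_theta=360):
    """Straighten a circular opening contour: sample the image on a polar
    grid around (cx, cy) so the circle of radius r becomes a horizontal
    strip of height 2*band+1, one column per sampled angle."""
    thetas = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    radii = np.arange(r - band, r + band + 1)
    rr, tt = np.meshgrid(radii, thetas, indexing="ij")
    # Nearest-neighbour sampling, clamped to the image bounds.
    xs = np.clip(np.round(cx + rr * np.cos(tt)).astype(int), 0, img.shape[1] - 1)
    ys = np.clip(np.round(cy + rr * np.sin(tt)).astype(int), 0, img.shape[0] - 1)
    return img[ys, xs]

img = np.zeros((64, 64), dtype=np.uint8)
strip = unwrap_circle(img, cx=32, cy=32, r=20)
```

The resulting strip is what the sliding window would then cut into detection images.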
In an alternative embodiment, the method further comprises:
labelling the cropped opening-edge sample image dataset of each workpiece type into crack images and crack-free images according to whether the line features in an image touch the base of the opening edge;
and training a first deep-learning defect classification model on the opening-edge sample image dataset of one workpiece type, applying transfer learning to that model with small opening-edge sample image datasets of the other workpiece types to obtain deep-learning defect classification models for those types in turn, and deploying the model corresponding to each workpiece type to the AI smart camera.
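The train-then-transfer scheme above can be illustrated with a toy stand-in model — not a real network; the class, field names, and dataset sizes are invented for illustration. The point is that the feature backbone learned on the first workpiece type is reused, and only a small head is retrained per additional type:

```python
from copy import deepcopy

class DefectClassifier:
    """Toy stand-in for a deep-learning defect classification model:
    a feature 'backbone' plus a per-workpiece-type 'head'."""
    def __init__(self, backbone=None):
        self.backbone = backbone if backbone is not None else {"conv": 0.0}
        self.head = {"fc": 0.0}

    def train_full(self, dataset):
        # Full training on the first workpiece type: both parts updated.
        self.backbone["conv"] += len(dataset)
        self.head["fc"] += len(dataset)

    def finetune(self, dataset):
        # Transfer learning: keep backbone weights, retrain only the head
        # on a small opening-edge sample set of the new workpiece type.
        self.head["fc"] += len(dataset)

base = DefectClassifier()
base.train_full(dataset=range(1000))   # first workpiece type, large dataset

models = {"type_A": base}
for wtype, small_set in [("type_B", range(50)), ("type_C", range(40))]:
    m = DefectClassifier(backbone=deepcopy(base.backbone))
    m.finetune(small_set)              # only a few samples needed
    models[wtype] = m
```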
In an alternative embodiment, the method further comprises:
obtaining a mapping relationship between the image coordinates of the first image and the plurality of detection areas based on calibration data of the AI smart camera;
and determining the detection area identifier corresponding to each target workpiece according to that mapping relationship and the positional relationship between the position coordinates of each target workpiece's predicted bounding box, obtained by the segmentation detection of the first image by the deep-learning segmentation model, and the image coordinates of the first image.
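Determining the detection area identifier from a predicted bounding box and the calibrated pixel-coordinate ranges might look like the following sketch; the area layout and identifiers are illustrative:

```python
def assign_detection_area(bbox, area_ranges):
    """Assign a predicted bounding box to a detection area by testing which
    calibrated pixel-coordinate range contains the box centre.
    bbox = (x1, y1, x2, y2); area_ranges maps area id -> (x1, y1, x2, y2)."""
    cx = (bbox[0] + bbox[2]) / 2
    cy = (bbox[1] + bbox[3]) / 2
    for area_id, (ax1, ay1, ax2, ay2) in area_ranges.items():
        if ax1 <= cx <= ax2 and ay1 <= cy <= ay2:
            return area_id
    return None

# Four areas laid out as a 2x2 matrix on the first image (calibration output)
areas = {
    "110-1": (0, 0, 500, 400),   "110-2": (500, 0, 1000, 400),
    "110-3": (0, 400, 500, 800), "110-4": (500, 400, 1000, 800),
}
aid = assign_detection_area((620, 120, 780, 260), areas)
```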
In an alternative embodiment, the method further comprises:
judging, from the workpiece type of each target workpiece, whether any of the target workpieces placed on the plurality of detection areas share the same workpiece type;
if so, aggregating the opening-contour region images of the target workpieces into an opening-contour region image dataset per workpiece type;
and cutting a group of detection images along the opening contour from the opening-contour region image dataset of each workpiece type with a sliding window of preset image size and step length, and feeding them in turn to the deep-learning defect classification model corresponding to that workpiece type.
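The aggregation step amounts to a group-by on workpiece type, so each classification model runs once over a whole batch rather than once per workpiece. A minimal sketch, with illustrative field names:

```python
from collections import defaultdict

def group_by_type(workpieces):
    """Aggregate opening-contour region images by workpiece type so each
    deep-learning defect classification model is applied to one batched
    dataset per type."""
    batches = defaultdict(list)
    for wp in workpieces:
        batches[wp["type"]].extend(wp["contour_images"])
    return dict(batches)

wps = [
    {"area": "110-1", "type": "shell", "contour_images": ["a1", "a2"]},
    {"area": "110-2", "type": "mount", "contour_images": ["b1"]},
    {"area": "110-3", "type": "shell", "contour_images": ["c1"]},
]
batches = group_by_type(wps)
```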
In an alternative embodiment, the method further comprises: loading the deep-learning defect classification models corresponding to the plurality of workpiece types in parallel, based on an AI acceleration processing unit built into the AI smart camera, to execute crack defect classification of the detection images.
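On a general-purpose host, this per-type parallelism could be emulated with a thread pool. This is illustrative only: the patent relies on the camera's built-in AI acceleration unit, and `classify_batch` here is a placeholder rather than a real model call:

```python
from concurrent.futures import ThreadPoolExecutor

def classify_batch(model_name, images):
    # Placeholder for running one per-type classification model over its
    # batch of detection images; here it just tags each image.
    return [(model_name, img) for img in images]

batches = {"shell": ["a1", "a2"], "mount": ["b1"]}
with ThreadPoolExecutor(max_workers=len(batches)) as pool:
    futures = {t: pool.submit(classify_batch, t, imgs)
               for t, imgs in batches.items()}
    results = {t: f.result() for t, f in futures.items()}
```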
In a second aspect, the present application further provides an AI intelligent camera-based multi-target crack defect detection apparatus, including:
the image acquisition unit, configured to capture, through a single AI smart camera, a first image covering a plurality of detection areas on the same detection station, one target workpiece being placed in each detection area;
the workpiece segmentation unit, configured to perform segmentation detection on the first image with a deep-learning segmentation model, identify the workpiece type of the target workpiece in each detection area, and obtain a segmented image of each target workpiece;
the image processing unit, configured to extract the opening contour from the segmented image of each target workpiece in parallel according to its workpiece type, obtain the coordinate and dimension data of the opening contour, and obtain at least one opening-contour region image of each target workpiece from its segmented image according to that data;
and the defect detection unit, configured to cut a group of detection images along the opening contour from the at least one opening-contour region image of each target workpiece with a sliding window of preset image size and step length, feed them in turn to the deep-learning defect classification model pre-deployed on the AI smart camera for that workpiece type, and identify whether a crack defect exists in the group of detection images, each detection image in the group containing a part of the opening contour.
The embodiments of the application have at least the following beneficial effects: opening crack defects of a plurality of target workpieces of different types can be detected in real time in one pass on the same detection station, which significantly reduces the cost of opening crack defect detection across workpiece types, improves its efficiency, and improves detection accuracy for micro-crack defects on the opening contour of a target workpiece.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required to be used in the embodiments of the present application will be briefly described below. It is appreciated that the following drawings depict only certain embodiments of the application and are not to be considered limiting of its scope.
FIG. 1 is a schematic diagram of a workpiece surface defect detection system for use with the present application that deploys a single AI smart camera at a production line inspection station;
FIG. 2 is a schematic flow chart of a multi-target crack defect detection method based on an AI smart camera according to an embodiment of the application;
FIG. 3 is a schematic diagram of a network structure of a YOLO V5 network model in the embodiment of the present application;
FIG. 4 is a schematic partial flow chart of a multi-target crack defect detection method based on an AI smart camera according to an embodiment of the application;
FIG. 5 is an exemplary illustration of a fillet contour straightening transformation of a fillet area image of a target workpiece;
fig. 6 is a schematic diagram of the network structure of the deep residual network ResNet50 in an embodiment of the present application;
fig. 7 is a schematic structural diagram of an AI smart camera-based multi-target crack defect detection apparatus according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions, and advantages of the present application clearer, the technical solutions in the embodiments of the present application are described below completely with reference to the drawings of those embodiments. It should be understood, however, that the described embodiments are only some, and not all, of the embodiments of the present application; the following detailed description is therefore not intended to limit the scope of the present application as claimed. All other embodiments that a person skilled in the art can derive from the embodiments of the present application without creative effort shall fall within the protection scope of the present application.
It should be noted that the terms "first," "second," and the like in the description and in the claims of this application are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order, or for indicating or implying any relative importance.
As noted above, existing AI-smart-camera schemes for detecting crack defects in product workpieces incur a high cost for detecting very small surface cracks across multiple workpiece types, reduce defect detection efficiency for those types, and achieve relatively low accuracy for small crack defects on the opening contour of a workpiece. The multi-target crack defect detection method and device based on an AI smart camera provided by the application can therefore detect the opening crack defects of a plurality of target workpieces of different types in one pass, significantly reducing the cost of such detection, improving its efficiency, and improving detection accuracy for micro-crack defects on the opening contour of a workpiece.
FIG. 1 is a schematic diagram of a workpiece surface defect inspection system, applicable to the present application, in which a single AI smart camera is deployed at a production-line inspection station. As shown in FIG. 1, the system includes an AI smart camera 120 located at a detection station of the production line; the field of view of the AI smart camera 120 covers the detection areas 110-1 to 110-4 on the station so that the target workpieces 150-1 to 150-4 placed in them can be completely imaged and inspected: target workpiece 150-1 is placed in detection area 110-1, target workpiece 150-2 in detection area 110-2, target workpiece 150-3 in detection area 110-3, and target workpiece 150-4 in detection area 110-4. As an example, FIG. 1 shows only four detection areas 110-1 to 110-4; in practice, the number of detection areas arranged on the station can be chosen according to the size of the field of view of the AI smart camera 120. The system further includes a client 140 connected to the network 130; the client 140 may be a PC host on which deployment platform software supporting the AI smart camera runs. The deployment platform software provides visual configuration, programming, debugging, and operation control of the camera's image-processing program to the user through a graphical user interface (GUI) program or a WEB page. The client 140 may also receive and display the workpiece defect detection results output by the AI smart camera 120.
Fig. 2 is a schematic flowchart of a multi-target crack defect detection method based on an AI smart camera according to an embodiment of the application. As shown in fig. 2, the AI intelligent camera-based multi-target crack defect detection method includes the following steps:
Step 210, capturing, by a single AI smart camera, a first image covering a plurality of detection areas on the same detection station, one target workpiece being placed in each detection area.
In this step, the target workpieces placed in the detection areas may all be of the same workpiece type, all of different types, or mixed, with only some areas holding the same type. The plurality of detection areas may be laid out as a matrix on the same detection station so that all of them fall within the field of view of the single AI smart camera 120, allowing a complete image covering all detection areas to be captured at once. In one embodiment, keeping the detection areas within the field of view of the single AI smart camera 120 can be achieved by calibrating the camera.
Step 220, performing segmentation detection on the first image with a deep-learning segmentation model, identifying the workpiece type of the target workpiece in each detection area, and obtaining a segmented image of each target workpiece.
In this step, the deep-learning segmentation model is a neural network model capable of multi-target image segmentation detection. Considering the real-time requirements of the AI smart camera 120 when segmenting multiple target workpieces, any of YOLO V3, YOLO V4, or YOLO V5 may be chosen for training. These are single-stage (one-stage) multi-target detection models, suitable for deployment in the computing environment of the AI smart camera 120, and they meet the real-time requirements of image segmentation detection for multiple target workpieces.
Taking the YOLO V5 network model as an example, fig. 3 shows its schematic network structure. As shown in fig. 3, the YOLO V5 network model can be divided, from the input end, into three parts: a backbone layer (Backbone) 310, a neck layer (Neck) 320, and a prediction layer (Prediction) 330. The backbone layer 310 extracts backbone features of the input first image, usually through a CSPDarknet network structure; the extracted features, called feature layers, are feature sets of the input image, and three feature layers in total may be extracted. The neck layer 320 performs multi-scale feature fusion on the feature layers extracted by the backbone layer 310, usually with an FPN (feature pyramid network) + PAN (path aggregation network) structure. From the fused image features, the prediction layer 330 generates the predicted bounding box, workpiece type, and confidence of each target workpiece in the first image, and the segmented image of each target workpiece is cut out based on its predicted bounding box.
In this embodiment, the YOLO V5 network model is pre-trained: a loss function is first established for the network, the network is iteratively trained on sample images of the various target workpiece types labelled by manual segmentation, the optimal model parameters are selected according to the loss function value, and the resulting model is then deployed to the AI smart camera 120 to perform segmentation detection of the multiple target workpieces in the detection areas on the station, yielding segmented images of those workpieces.
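Selecting "the optimal model parameters according to the loss function value" amounts to keeping the checkpoint with the lowest loss across training iterations; a minimal sketch with illustrative loss values:

```python
def pick_best_checkpoint(val_losses):
    """Return the epoch index with the lowest loss value, i.e. the
    checkpoint whose parameters would be kept for deployment."""
    best_epoch = min(range(len(val_losses)), key=val_losses.__getitem__)
    return best_epoch, val_losses[best_epoch]

# Hypothetical per-epoch loss values for the segmentation model
best_epoch, best_loss = pick_best_checkpoint([0.9, 0.42, 0.38, 0.41, 0.45])
```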
In one embodiment, in this step, a mapping relationship between the image coordinates of the captured first image and the plurality of detection areas on the detection station may also be obtained from the calibration data of the AI smart camera 120. From this mapping and from the positional relationship between the position coordinates of each target workpiece's predicted bounding box, obtained by the segmentation detection of the first image with the deep-learning segmentation model, and the image coordinates of the first image, the detection area in which each target workpiece is placed can be determined and represented by a detection area identifier. Taking the example of the AI smart camera 120 in fig. 1 capturing the four target workpieces 150-1 to 150-4 placed on the detection areas 110-1 to 110-4: calibrating the AI smart camera 120 yields the mapping between the image coordinates of the first image and the detection areas 110-1 to 110-4, so the pixel-coordinate range that each detection area occupies in the first image can be determined. Then, comparing the position coordinates of each target workpiece's predicted bounding box with the pixel-coordinate ranges of the detection areas in the first image determines which detection area each target workpiece belongs to.
Therefore, in this embodiment, while identifying whether a crack defect exists in each target workpiece, the AI intelligent camera 120 can also quickly and accurately indicate the detection area in which a target workpiece with a crack defect is located. Thus, as an example, each target workpiece detected by segmentation from the first image may be represented by a data structure (d_i, c_i, s_i), where d_i denotes the detection area identifier of the i-th target workpiece, indicating the detection area on which that workpiece is placed, c_i denotes the workpiece type of the i-th target workpiece, and s_i denotes the segmented image of the i-th target workpiece, with i = 1, …, N, where N is the number of inspection areas on the inspection station.
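As an illustrative sketch (function and variable names are my own, not from this application), the assignment of each predicted bounding box to its detection area reduces to a point-in-range test of the box center against the calibrated pixel ranges:

```python
def assign_regions(boxes, regions):
    """Assign each predicted bounding box to the detection area whose
    calibrated pixel-coordinate range contains the box centre.

    boxes   : list of (x1, y1, x2, y2, workpiece_type) predictions
    regions : dict mapping region id -> (rx1, ry1, rx2, ry2) pixel range
              derived from the camera calibration data
    returns : list of (region_id, workpiece_type, box) tuples
    """
    assignments = []
    for (x1, y1, x2, y2, wtype) in boxes:
        cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0   # box centre
        for rid, (rx1, ry1, rx2, ry2) in regions.items():
            if rx1 <= cx <= rx2 and ry1 <= cy <= ry2:
                assignments.append((rid, wtype, (x1, y1, x2, y2)))
                break
    return assignments
```

A box whose center falls in no calibrated range is simply skipped, which would flag a calibration or detection error in practice.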
And 230, extracting the opening contour from the segmentation image of each target workpiece in parallel according to the workpiece type of each target workpiece, obtaining coordinate size data of the opening contour of each target workpiece, and obtaining at least one opening contour area image of each target workpiece from the segmentation image of each target workpiece according to the coordinate size data of the opening contour of each target workpiece.
In this step, the aperture profiles of the target workpieces of the same workpiece type have the same shape and size, and the aperture profiles of the target workpieces of different workpiece types have different shapes and/or sizes. In one embodiment, the shape types of the aperture profile of the target workpiece are different according to geometry, including a first shape type and a second shape type. The first shape type may be a square aperture and the second shape type may be a circular aperture. The first shape type and the second shape type may each have a different size. In this embodiment, the square opening may be a square-like shape formed by four sides and at least one rounded corner. It should be understood that the embodiments of the present application are compatible with the detection of crack defects in target workpieces having openings of various shapes, and are not limited to the square openings and the circular openings described above.
In one embodiment, as shown in fig. 4, extracting the aperture contour from the segmented image of each target workpiece in parallel in step 230, and obtaining coordinate dimension data of the aperture contour of each target workpiece may include the sub-steps of:
and step 410, obtaining the shape type of the opening contour of each target workpiece according to the workpiece type of each target workpiece, and obtaining the minimum circumscribed rectangle frame of the opening contour from the segmentation image of each target workpiece. In this step, since the shape type of the aperture contour corresponding to each workpiece type is different, the shape type of the aperture contour of each target workpiece can be obtained according to the workpiece type of each target workpiece identified in step 220, so that the subsequent step can generate coordinate size data of the aperture contour of each target workpiece from the segmented image of the target workpiece according to the shape type of the aperture contour.
In one embodiment, the minimum circumscribed rectangular frame of the aperture outline is the smallest rectangle that encloses, and thereby locates, the shape and size of the aperture contour. The minimum circumscribed rectangle of the opening outline of the target workpiece can be obtained using adaptive thresholding image processing. Adaptive thresholding of the segmented image of a target workpiece includes the following steps:
First, the segmented image of each target workpiece is divided into neighborhood blocks of a predetermined size. To ensure that the boundary of the opening contour remains clear, the block size may be chosen empirically from test results; for example, it may be set to 7 × 7 or 11 × 11 pixels. Then, to determine a binarization threshold within each neighborhood block, a Gaussian weighted average method can be adopted: for each pixel, the pixel values in its surrounding neighborhood block are averaged with Gaussian weights determined by the distance from the block center, so that pixels farther from the center receive smaller weights and pixels closer to the center receive larger weights. In this way an adaptive dynamic threshold is computed for every pixel; a pixel is set to 1 if its value is greater than the dynamic threshold and to 0 otherwise. The opening contour of each target workpiece can thus be obtained, and the minimum circumscribed rectangular frame is generated by locating the shape and size of the opening contour. Generating the minimum circumscribed rectangular frame also yields a group of coordinate size data for it, including the four vertex coordinates, the width and height, the center point coordinates, and the rotation angle of the minimum circumscribed rectangular frame.
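The block-wise Gaussian-weighted thresholding described above can be sketched as follows. This is a naive reference implementation for clarity; in practice OpenCV's `cv2.adaptiveThreshold` with `ADAPTIVE_THRESH_GAUSSIAN_C` performs an equivalent computation (with an additional subtracted constant C) far more efficiently:

```python
import numpy as np

def gaussian_kernel(size, sigma):
    """Normalised 2-D Gaussian weight matrix of shape (size, size)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

def adaptive_threshold(img, block=7, sigma=2.0):
    """Binarise `img`: 1 where a pixel exceeds the Gaussian-weighted
    average of its block x block neighbourhood, 0 otherwise.
    Image borders are handled by reflection padding."""
    img = np.asarray(img, dtype=float)
    pad = block // 2
    padded = np.pad(img, pad, mode="reflect")
    k = gaussian_kernel(block, sigma)
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.uint8)
    for i in range(h):
        for j in range(w):
            local = padded[i:i + block, j:j + block]
            thresh = (local * k).sum()   # per-pixel dynamic threshold
            out[i, j] = 1 if img[i, j] > thresh else 0
    return out
```

The double loop is O(H·W·block²); a production version would express the weighted average as a single Gaussian filtering pass.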
And 420, generating coordinate size data of the opening contour of each target workpiece according to the shape type of the opening contour of each target workpiece and the minimum circumscribed rectangle frame of the opening contour. In this step, different shapes and types of the opening outline of the target workpiece can be represented by different coordinate size data. The specific description is as follows:
For the first shape type, i.e., the square opening, the coordinate dimension data of the opening outline includes two parts. The first part is the coordinate size data of the horizontal minimum circumscribed rectangular frame obtained after rotation angle correction is applied to the segmented image of the target workpiece according to the rotation angle of the minimum circumscribed rectangular frame of the opening outline; it is represented by the top-left corner coordinate and the width and height of the horizontal minimum circumscribed rectangular frame, denoted (x, y, w, h), where x and y are the coordinates of the top-left corner of the horizontal minimum bounding rectangle, and w and h are the pixel sizes of its width and height, respectively. The second part is the fitting size data of at least one rounded corner, represented by a quadruple (x_m, y_m, r_m, p_m) for m = 1, …, M, where M is the number of rounded corners of the square opening, x_m and y_m denote the fitted center point of the m-th rounded corner, r_m denotes the fitting radius of the m-th rounded corner, and p_m identifies the position of the rounded corner in the minimum bounding rectangle, e.g., the upper-left, upper-right, lower-left, or lower-right corner.
Because the rounded corner of a square-like opening in a target workpiece is usually not a regular quarter circular arc, and in order to facilitate the subsequent image coordinate transformation of the rounded corner part and to help improve the accuracy of crack defect classification at the rounded corner contour edge, the embodiment of the present application represents the rounded corner by its fitting size data, i.e., the fitted center point and fitting radius of the rounded corner. The fitted center point and fitting radius are the center point and radius obtained after fitting a quarter circular arc to the rounded corner; the quarter-arc fitting can be implemented by the least squares method.
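A least-squares arc fit as mentioned above can be realized with the classical Kasa circle fit. The sketch below assumes the contour points of the rounded corner have already been extracted (the function name is illustrative, not from this application):

```python
import numpy as np

def fit_quarter_arc(xs, ys):
    """Least-squares (Kasa) circle fit for a rounded-corner contour:
    every circle satisfies x^2 + y^2 + a*x + b*y + c = 0, so solve the
    linear system a*x + b*y + c = -(x^2 + y^2) in the least-squares
    sense, then recover the centre (-a/2, -b/2) and the radius
    sqrt(cx^2 + cy^2 - c)."""
    xs, ys = np.asarray(xs, dtype=float), np.asarray(ys, dtype=float)
    A = np.column_stack([xs, ys, np.ones_like(xs)])
    rhs = -(xs ** 2 + ys ** 2)
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    cx, cy = -a / 2.0, -b / 2.0
    return cx, cy, np.sqrt(cx ** 2 + cy ** 2 - c)
```

The linearized formulation avoids iterative optimization, which suits a camera-side deployment; more robust geometric fits exist but cost more per corner.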
For the second shape type, i.e., the circular opening, the rotation angle of the minimum circumscribed rectangle of the circular opening outline is zero, i.e., the minimum circumscribed rectangle is already the horizontal minimum circumscribed rectangle, so no rotation angle correction of the segmented image of the target workpiece is required. The coordinate size data of the opening outline may include the center point coordinate of the minimum circumscribed rectangle of the opening outline and the circle radius: the center point coordinate of the minimum circumscribed rectangle is the center point coordinate of the circular outline, and the circle radius is 1/2 of the side length of the minimum circumscribed rectangle. The coordinate dimension data of the circular opening outline can thus be denoted (x_c, y_c, r), where x_c and y_c are the coordinates of the center point of the minimum circumscribed rectangular frame of the opening outline, and r is the circle radius of the opening outline.
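As a hypothetical illustration (all type and field names are my own, not from this application), the two kinds of coordinate-size records for square and circular openings could be held in structures such as:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class RoundedCorner:
    x_m: float   # fitted centre x of the m-th rounded corner
    y_m: float   # fitted centre y
    r_m: float   # fitting radius of the quarter arc
    p_m: str     # position in the bounding box, e.g. "upper_left"

@dataclass
class SquareApertureContour:
    x: float     # top-left corner of the horizontal min. bounding box
    y: float
    w: float     # width in pixels
    h: float     # height in pixels
    corners: List[RoundedCorner] = field(default_factory=list)

@dataclass
class CircularApertureContour:
    x_c: float   # circle centre x
    y_c: float   # circle centre y
    r: float     # circle radius: half the bounding-box side length
```

Separating the two record types mirrors the branch the processing logic takes later, so downstream code can dispatch on the record type rather than re-deriving the shape type.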
Further, in step 230, after obtaining the coordinate dimension data of the aperture contour of each target workpiece, at least one aperture contour area image of each target workpiece still needs to be obtained from its segmented image. This requires processing the segmented image differently depending on whether the shape type of the aperture contour is the first shape type or the second shape type; the two shape types require different processing logic for generating the aperture contour area images, described in detail as follows:
(1) first shape type-square opening
The aperture outline area image of a target workpiece having a square aperture includes a first aperture outline area image and at least one second aperture outline area image. First, a peripheral rectangular frame of a certain width is extended outward using the horizontal minimum circumscribed rectangular frame of the square opening as a baseline, and the original opening outline area image defined by the peripheral rectangular frame is obtained from the segmented image of the target workpiece. The outward extension width is sized at least so that subsequent steps can crop the aperture outline area image to a predetermined size to obtain a set of detection images.
Second, based on the original opening contour area image defined by the peripheral rectangular frame, at least one rounded-corner area image is extracted according to the two parts of the coordinate size data of the opening contour, namely the top-left vertex coordinate, width, and height of the horizontal minimum circumscribed rectangular frame, and the fitting size data of the at least one rounded corner. A rounded-corner area image is a square local segmented image containing a rounded-corner contour, as shown in (a) on the left side of fig. 5; fig. 5 shows, only by way of example, the rounded-corner area image at the upper-left corner position of the original opening contour area image obtained from the segmented image of the target workpiece. The remaining part of the original opening contour area image, excluding the at least one rounded-corner area image, constitutes the first opening contour area image.
Subsequently, the extracted at least one rounded-corner area image is subjected to a polar-to-Cartesian coordinate transformation, and the rounded-corner contour in each rounded-corner area image is straightened, so that each rounded-corner area image is transformed into a second opening contour area image containing a straight-line contour. A set of M second opening contour area images is thus obtained, where M is the number of rounded corners of the square opening. As shown in (b) on the right side of fig. 5, fig. 5 shows, only by way of example, the second opening contour area image obtained by transforming the rounded-corner area image at the upper-left corner position of the original opening contour area image. The polar-to-Cartesian transformation is applied to the rounded-corner area images because the rounded-corner part of the opening contour has no straight edge bottom, which would impair the accurate judgment of edge crack defects; by converting each rounded-corner area image into a rectangular image containing a straight-line contour, the detection images cropped in subsequent steps improve the accuracy of crack defect classification.
In one embodiment, the polar-to-Cartesian coordinate transformation of the extracted at least one rounded-corner area image may be implemented as follows:

First, for each rounded-corner area image, a polar coordinate system is established with the fitted center point O of the rounded corner as the pole. For each coordinate point (ρ, θ) in this polar coordinate system, the corresponding pixel point (x, y) in the rounded-corner area image is calculated. The specific calculation formula is:

x = x_0 + ρ·cos θ, y = y_0 + ρ·sin θ

(the signs of the cosine and sine terms depend on the position of the rounded corner in the rectangle), where (x_0, y_0) is the fitted center point O of the rounded corner; ρ is the radius of the arc on which the pixel point (x, y) lies, with ρ ∈ [0, L], where L is the distance from the rounded-corner center point O to the opposite edge of the rounded-corner area image; l = ρ·θ is the corresponding arc length on that arc; and θ is the radian of the pixel point (x, y) relative to the rounded-corner center point O, with θ ∈ [0, π/2], the maximum radian being π/2 because the rounded corner is fitted to a quarter circular arc.
According to the above formula, for each coordinate point (ρ, θ) in the polar coordinate system, the coordinates of the corresponding pixel point (x, y) in the rounded-corner area image can be calculated. The calculated coordinates are floating point numbers, so this embodiment uses bilinear interpolation to find, for each coordinate point (ρ, θ), the closest pixel value in the corresponding rounded-corner area image. This ensures that the second opening contour area image obtained by the polar-to-Cartesian transformation has natural transitions and avoids image distortion. Specifically, for the calculated pixel point (x, y) in the rounded-corner area image, the four nearest integer pixel points (x_1, y_1), (x_2, y_1), (x_1, y_2), and (x_2, y_2) are obtained; single linear interpolation in the x direction is first used to compute the pixel values at (x, y_1) and (x, y_2), and single linear interpolation in the y direction is then used to compute the pixel value at (x, y). The bilinearly interpolated pixel value at (x, y) is taken as the pixel value of the coordinate point (ρ, θ).
Thus, by applying the above transformation formula with the radius ρ and arc length l taken from small to large, the rounded-corner contour in each rounded-corner area image is straightened, so that each rounded-corner area image is transformed into a second opening contour area image containing a straight-line contour.
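A minimal sketch of the quarter-arc straightening with bilinear sampling, assuming the standard polar-to-Cartesian mapping x = x_0 + ρ·cos θ, y = y_0 + ρ·sin θ for an upper-left corner orientation (other corner positions require sign changes); all names are illustrative:

```python
import numpy as np

def bilinear(img, x, y):
    """Sample img at a floating-point (x, y) by bilinear interpolation."""
    h, w = img.shape
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    top = img[y0, x0] * (1 - fx) + img[y0, x1] * fx   # interpolate in x
    bot = img[y1, x0] * (1 - fx) + img[y1, x1] * fx
    return top * (1 - fy) + bot * fy                   # then in y

def unroll_corner(img, cx, cy, L, n_theta=64):
    """Straighten a quarter-arc rounded-corner image into a rectangle.
    Output row = radius rho in [0, L); output column = angle theta in
    [0, pi/2); the source pixel is (cx + rho*cos(theta),
    cy + rho*sin(theta)), sampled bilinearly."""
    out = np.zeros((L, n_theta))
    for r in range(L):
        for t in range(n_theta):
            theta = (np.pi / 2.0) * t / n_theta
            x = cx + r * np.cos(theta)
            y = cy + r * np.sin(theta)
            if 0 <= x < img.shape[1] - 1 and 0 <= y < img.shape[0] - 1:
                out[r, t] = bilinear(img, x, y)
    return out
```

In the unrolled image the arc becomes a horizontal line, so the same sliding-window crack classifier used on straight edges applies unchanged.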
(2) Second shape type-circular opening
The aperture outline area image of a target workpiece having a circular aperture includes only a second aperture outline area image. First, a peripheral rectangular frame of a certain width is extended outward using the minimum circumscribed rectangular frame of the circular opening as a baseline, and the original opening outline area image defined by the peripheral rectangular frame is obtained from the segmented image of the target workpiece. The outward extension width is sized at least so that subsequent steps can crop the aperture outline area image to a predetermined size to obtain a set of detection images.
Second, based on the original opening outline region image defined by the peripheral rectangular frame, and according to the center point coordinate and circle radius of the circular outline in the coordinate size data of the opening outline, the original opening outline region image is subjected to a polar-to-Cartesian coordinate transformation: the circular outline in the original opening outline region image is straightened, and the original opening outline region image is transformed into a second opening outline region image containing a straight-line contour. The transformation is applied because the circular contour has no straight edge bottom, which would impair the accurate determination of edge crack defects; by converting the original opening contour region image containing the circular contour into a rectangular image containing a straight-line contour, the detection images cropped in the subsequent step improve the accuracy of crack defect classification.
In one embodiment, the transformation of the original aperture contour region image in polar coordinates and cartesian coordinates may be specifically implemented by the following steps:
First, for the original opening contour region image, a polar coordinate system is established with the center point O' of the circular contour as the pole. For each coordinate point (ρ, θ) in this polar coordinate system, the corresponding pixel point (x, y) in the original opening contour region image is calculated. The specific calculation formula is:

x = x_c + ρ·cos θ, y = y_c + ρ·sin θ

where (x_c, y_c) is the center point O' of the circular contour; ρ is the radius of the arc on which the pixel point (x, y) lies, with ρ ∈ [0, R], where R is the distance from the center point O' of the circular outline to a side of the peripheral rectangular frame; l = ρ·θ is the corresponding arc length on that arc; and θ is the radian of the pixel point (x, y) relative to the center point O' of the circular profile, with θ ∈ [0, 2π], the maximum radian of the circular profile being 2π.
According to the above formula, for each coordinate point (ρ, θ) in the polar coordinate system, the coordinates of the corresponding pixel point (x, y) in the original opening contour region image can be calculated. The calculated coordinates are floating point numbers, so this embodiment uses bilinear interpolation to find, for each coordinate point (ρ, θ), the closest pixel value in the corresponding original opening contour region image. This ensures that the resulting second opening contour region image containing the straight-line contour has natural transitions and avoids image distortion. The specific bilinear interpolation process is as described above and is not repeated here.
Thus, by applying the above transformation formula with the radius ρ and arc length l, the circular contour in the original opening contour region image is straightened, so that the original opening contour region image is transformed into a second opening contour region image containing a straight-line contour.
It should be noted that, in this step, the parallel thread processing may be performed on the opening outlines of the multiple target workpieces in different shapes based on the opening outline image processing algorithm deployed in the AI intelligent camera 120, so that the real-time performance of the AI intelligent camera 120 in processing the images of the multiple target workpieces in the multiple detection areas may be significantly improved, and the multi-target defect detection efficiency may be integrally improved. The opening contour image processing algorithm deployed in the AI smart camera 120 performs the different processing logic described above for different opening types.
And step 240, cutting a group of detection images from at least one opening contour area image of each target workpiece along the opening contour by a sliding window with preset image size and step length, sequentially inputting a deep learning defect classification model which is deployed in advance by the AI intelligent camera 120 and corresponds to the workpiece type of each target workpiece, and identifying whether crack defects exist in the group of detection images, wherein each detection image in the group of detection images comprises a part of the opening contour.
In this step, the predetermined image size of the sliding window is the input image size of the deep learning defect classification model. In one embodiment, the input image size of the deep learning defect classification model is 64 × 64 pixels, so the predetermined image size of the sliding window is likewise set to 64 × 64 pixels. In addition, the step length of the sliding window is chosen so that the detection images cut from each opening contour area image overlap one another; this prevents a crack defect from being split across crops, which would impair classification accuracy. Meanwhile, because each detection image in the group includes a part of the opening contour, the group of detection images cut from each opening contour area image of each target workpiece can fully cover any opening edge crack defect, improving the detection rate and accuracy for opening edge crack defects. In one embodiment, the step size of the sliding window may be 1/2 of its predetermined image size. For example, when the predetermined image size of the sliding window is 64 × 64, the step size is set to 32, i.e., the sliding window is moved by 32 pixels each time.
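The overlapping sliding-window cropping can be sketched as follows (illustrative names; the right-alignment of the final patch is my own addition so that the end of the contour strip is always covered):

```python
import numpy as np

def crop_detection_images(strip, win=64, step=32):
    """Cut a set of win x win detection patches along a straightened
    opening-contour strip (an H x W image with H >= win). A step of
    win // 2 gives 50% overlap, so a crack that straddles one patch
    boundary lies fully inside the neighbouring patch."""
    h, w = strip.shape
    xs = list(range(0, w - win + 1, step))
    if xs and xs[-1] + win < w:      # right-align a final patch so the
        xs.append(w - win)           # tail of the strip is not missed
    return [strip[0:win, x:x + win] for x in xs]
```

Each returned patch matches the classifier's 64 × 64 input, so the list can be batched directly into the defect classification model.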
In this step, the AI intelligent camera 120 is pre-deployed with a deep learning defect classification model for each workpiece type, each trained on the opening edge sample image dataset of that workpiece type. In one embodiment, the application selects the deep residual network Resnet50 as the network model of the deep learning defect classification model, by way of example. As shown in fig. 6, the main network structure of the Resnet50 network model includes five convolution parts, namely a first convolution layer 610, a second convolution layer 620, a third convolution layer 630, a fourth convolution layer 640, and a fifth convolution layer 650. The first convolution layer 610 is a preprocessing layer with a 7 × 7 convolution kernel size and 64 convolution kernels, which preprocesses the input image. The second convolution layer 620, third convolution layer 630, fourth convolution layer 640, and fifth convolution layer 650 have 3, 4, 6, and 3 convolution blocks, respectively. Each convolution block comprises two 1 × 1 convolution units and one 3 × 3 convolution unit: the block first reduces the channel dimension of the feature map with a 1 × 1 convolution, then performs the 3 × 3 convolution operation, and finally restores the channel dimension with another 1 × 1 convolution. After the input image passes through the five convolution parts, Resnet50 further processes it through an average pooling, fully connected, and softmax layer 660 to output the predicted classification result of the image.
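As a check on the structure just described, the "50" in Resnet50 counts the weighted layers implied by those convolution parts:

```python
# Weighted layers in Resnet50: one 7x7 stem convolution, then
# (3, 4, 6, 3) bottleneck blocks of three convolutions each
# (1x1 reduce, 3x3, 1x1 restore), plus the final fully connected
# layer: 1 + (3 + 4 + 6 + 3) * 3 + 1 = 50.
blocks = (3, 4, 6, 3)
convs_per_block = 3
depth = 1 + sum(blocks) * convs_per_block + 1
print(depth)  # → 50
```

The pooling and softmax stages carry no learned weights, which is why they do not enter the count.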
In this embodiment, the deep learning defect classification model corresponding to each workpiece type needs to be obtained through pre-training. First, the cropped opening edge sample image dataset of each workpiece type is labeled into two classes: cracked images and crack-free images. The main judgment criterion is whether the grain lines in the image are in contact with the bottom of the opening edge: if a grain line extends from the bottom of the opening edge, i.e., is in contact with it, the image is labeled as a cracked image; if the grain lines do not contact the bottom of the opening edge, or there are no grain lines, the image is labeled as a crack-free image. Second, the cross entropy between the predicted value and the true value of a sample image is used as the loss function of the model. The opening edge sample image dataset is divided into a training dataset and a test dataset; the training set is input into the Resnet50 network model for training, with the number of iterations and the learning rate set until the loss function reaches the convergence condition. The trained network model is then evaluated using the test dataset.
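The role of the cross-entropy loss can be illustrated with the two-class (crack / no-crack) case; this toy example is mine, not from the application:

```python
import math

def binary_cross_entropy(p_crack, label):
    """Cross entropy between the predicted crack probability and the
    0/1 ground-truth label (0 = crack-free, 1 = cracked)."""
    p = p_crack if label == 1 else 1.0 - p_crack
    return -math.log(p)

# A confident correct prediction is penalised little; a confident
# wrong prediction is penalised heavily, which is what drives the
# classifier toward the labeled boundary between the two classes.
low_loss = binary_cross_entropy(0.95, 1)
high_loss = binary_cross_entropy(0.05, 1)
```

Over a batch, the training loss is the mean of these per-image terms.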
In one embodiment, the training of the deep learning defect classification models of multiple workpiece types may adopt a transfer learning training mode, so as to improve the training and learning efficiency of the deep learning defect classification models corresponding to multiple workpiece types and improve the efficiency of deploying the models to the AI smart camera 120 on the detection station. Specifically, a first deep learning defect classification model is trained on an opening edge sample image data set of any workpiece type, the first deep learning defect classification model can be used as a pre-training model and directly used for deep learning defect classification model training of target workpieces of other workpiece types, for the target workpieces of other workpiece types, only a small number of opening edge sample image data sets are marked as training sets, and through fine adjustment of parameters of the first deep learning defect classification model, the deep learning defect classification model for other workpiece types can be quickly obtained. In this manner, the efficiency of training and deploying deep learning defect classification models for multiple workpiece types to the AI smart camera 120 may be improved.
In one embodiment, for a plurality of target workpieces divided in the foregoing step, if the workpiece types of some target workpieces are the same, in this step, it may also be determined whether the same workpiece type exists in the plurality of target workpieces according to the workpiece type of each target workpiece; if yes, the opening contour region images of the target workpieces are respectively aggregated into an opening contour region image data set of each workpiece type according to the same workpiece type. Then, a group of detection images are cut along the opening contour by a sliding window with preset image size and step length respectively in the opening contour region image data set of each workpiece type, and a deep learning defect classification model corresponding to the workpiece type is sequentially input for defect classification detection. In this embodiment, the opening contour region images of the target workpieces belonging to the same workpiece type are respectively aggregated into the opening contour region image dataset according to the same workpiece type, and the same deep learning defect classification model is loaded on the opening contour region image dataset of the same workpiece type for detection.
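The aggregation by workpiece type can be sketched as follows (illustrative names): the point of grouping is that each deep learning defect classification model is loaded once per type rather than once per workpiece.

```python
from collections import defaultdict

def aggregate_by_type(workpieces):
    """Group opening-contour region images into one dataset per
    workpiece type.

    workpieces: iterable of (region_id, workpiece_type, contour_images)
    returns   : dict  workpiece_type -> list of (region_id, image)
    """
    datasets = defaultdict(list)
    for region_id, wtype, images in workpieces:
        for img in images:
            datasets[wtype].append((region_id, img))
    return dict(datasets)
```

Keeping the region identifier alongside each image preserves the traceability needed to report which detection area a defective workpiece occupies.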
In one embodiment, in this step, deep learning defect classification models corresponding to a plurality of workpiece types may be loaded in parallel based on an AI acceleration processing unit built in the AI smart camera 120 to perform classification detection of crack defects of a detection image, so as to improve the efficiency of crack defect detection of a plurality of target workpieces on a detection station.
Fig. 7 is a schematic structural diagram of an AI smart camera-based multi-target crack defect detection apparatus according to an embodiment of the present application. As shown in fig. 7, the AI intelligent camera-based multi-target crack defect detection apparatus according to the present application includes the following units:
the image acquisition unit 710 is used for shooting a first image containing a plurality of detection areas on the same detection station through a single AI intelligent camera 120, wherein each detection area is respectively provided with a target workpiece;
a workpiece segmentation unit 720, configured to perform segmentation detection on the first image by using a deep learning segmentation model, identify a workpiece type of a target workpiece in each detection region, and obtain a segmentation image of each target workpiece;
an image processing unit 730 for extracting the opening contour from the divided image of each target workpiece in parallel according to the workpiece type of each target workpiece, obtaining coordinate size data of the opening contour of each target workpiece, and obtaining at least one opening contour region image of each target workpiece from the divided image of each target workpiece according to the coordinate size data of the opening contour of each target workpiece;
and a defect detection unit 740, configured to cut a set of detection images along the opening contour from at least one opening contour area image of each target workpiece in a sliding window with a predetermined image size and step length, sequentially input a deep learning defect classification model, which is pre-deployed by the AI smart camera 120 and corresponds to the workpiece type of each target workpiece, and identify whether a crack defect exists in the set of detection images, where each of the set of detection images includes a portion of the opening contour.
In the multi-target crack defect detection method and device based on the AI smart camera of the embodiments of the present application, a single AI smart camera 120 takes a first image covering a plurality of detection areas on the same detection station; a deep learning segmentation model obtains, from the first image, the segmented image and workpiece type of the target workpiece in each detection area; the opening contour is then extracted in parallel from the segmented image of each target workpiece according to its workpiece type, and an opening contour area image of each target workpiece is obtained from its segmented image; a group of detection images is then cut along the opening contour from the opening contour area image of each target workpiece with a sliding window of predetermined image size and step length, and sequentially input into the deep learning defect classification model pre-deployed in the AI smart camera 120 that corresponds to the workpiece type of each target workpiece, to identify whether a crack defect exists in the group of detection images. By exploiting the parallel image processing and AI model processing capabilities of the AI intelligent camera 120, the method detects the opening crack defects of target workpieces of different types on the same detection station in real time and in a single pass, significantly reducing the detection cost for target workpieces of different types, improving their detection efficiency, and improving the detection accuracy for fine crack defects on the opening contour of the target workpiece.
It should be noted that, as can be understood by those skilled in the art, different implementation manners and explanations thereof described in the embodiments of the multi-target crack defect detection method of the present application and technical effects achieved thereby are also applicable to the embodiments of the multi-target crack defect detection apparatus of the present application, and are not described herein again.
The present application may be implemented in software, hardware, or a combination of software and hardware. When implemented as a computer software program, the computer software program may be installed in the memory of the AI smart camera for execution by the one or more processors to implement the corresponding functions.
Further, the embodiments of the present application may also include a computer-readable medium storing program instructions, which, in such embodiments, when loaded in the AI smart camera, may be executed by one or more processors to perform the method steps described in any of the embodiments of the present application.
Further, embodiments of the present application may also include a computer program product comprising a computer readable medium carrying program instructions, which in such embodiments may be executed by one or more processors to perform the method steps described in any of the embodiments of the present application.
The foregoing describes exemplary embodiments of the present application. It should be understood that the above exemplary embodiments are illustrative rather than limiting, and the scope of the present application is not limited thereto. Those skilled in the art may make modifications and variations to the embodiments of the present application without departing from its spirit and scope, and all such modifications and variations are intended to fall within the scope of the present application.
Claims (10)
1. A multi-target crack defect detection method based on an AI intelligent camera is characterized by comprising the following steps:
shooting a first image covering a plurality of detection areas on the same detection station through a single AI intelligent camera, wherein each detection area is respectively provided with a target workpiece;
performing segmentation detection on the first image by using a deep learning segmentation model, identifying the workpiece type of the target workpiece in each detection area, and obtaining a segmentation image of each target workpiece;
extracting the opening contour from the segmentation image of each target workpiece in parallel according to the workpiece type of each target workpiece, obtaining coordinate dimension data of the opening contour of each target workpiece, and obtaining at least one opening contour area image of each target workpiece from the segmentation image of each target workpiece according to the coordinate dimension data of the opening contour of each target workpiece;
and cutting a group of detection images along the opening contour from at least one opening contour area image of each target workpiece with a sliding window having a preset image size and step length, sequentially inputting the group into a deep learning defect classification model pre-deployed on the AI smart camera and corresponding to the workpiece type of each target workpiece, and identifying whether a crack defect exists in the group of detection images, wherein each detection image in the group of detection images comprises a part of the opening contour.
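The sliding-window cutting step of claim 1 can be illustrated with a minimal sketch. The geometry is assumed for illustration: `contour_pts` is a list of (x, y) points along the opening contour, windows are centred on every `step`-th point, and each window is clamped inside the image so all crops share the preset size.

```python
import numpy as np

def cut_windows_along_contour(image, contour_pts, win=32, step=16):
    """Cut fixed-size detection images along an opening contour (sketch)."""
    h, w = image.shape[:2]
    crops = []
    # Sampling every `step`-th contour point makes consecutive windows overlap,
    # so a crack straddling a window border still appears whole in a neighbour.
    for x, y in contour_pts[::step]:
        x0 = int(np.clip(x - win // 2, 0, w - win))
        y0 = int(np.clip(y - win // 2, 0, h - win))
        crops.append(image[y0:y0 + win, x0:x0 + win])
    return crops
```

Each crop contains a part of the opening contour, matching the claim's requirement that every detection image in the group covers a contour segment.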
2. The AI-smart camera-based multi-target crack defect detection method of claim 1, wherein the extracting an opening contour from the segmented image of each target workpiece in parallel according to the workpiece type of each target workpiece, and the obtaining coordinate dimension data of the opening contour of each target workpiece comprises:
obtaining the shape type of the opening contour of each target workpiece according to the workpiece type of each target workpiece, and obtaining the minimum circumscribed rectangular frame of the opening contour from the segmentation image of each target workpiece;
and generating coordinate size data of the opening contour of each target workpiece according to the shape type of the opening contour of each target workpiece and the minimum circumscribed rectangle frame of the opening contour.
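As an illustration of deriving coordinate dimension data from a segmented image, the sketch below computes an axis-aligned bounding box of the opening region in a binary mask. This is a deliberate simplification: the minimum circumscribed (rotated) rectangle of claim 2 would typically come from a routine such as OpenCV's `minAreaRect`, which this stand-in does not reproduce.

```python
import numpy as np

def bounding_box(mask):
    """Axis-aligned bounding box (x, y, width, height) of the nonzero region."""
    ys, xs = np.nonzero(mask)
    x0, x1 = xs.min(), xs.max()
    y0, y1 = ys.min(), ys.max()
    return x0, y0, x1 - x0 + 1, y1 - y0 + 1
```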
3. The AI smart camera-based multi-target crack defect detection method of claim 2, wherein if the shape type of the aperture outline is a first shape type, the coordinate dimension data of the aperture outline of the target workpiece comprises coordinate dimension data of a horizontal minimum bounding rectangle frame after performing rotation angle correction on the segmented image of the target workpiece according to the rotation angle of the minimum bounding rectangle frame of the aperture outline, and fitting dimension data of at least one rounded corner; if the shape type of the aperture outline is the second shape type, the coordinate dimension data of the aperture outline of the target workpiece includes coordinates of a center point of a minimum bounding rectangular frame of the aperture outline and a radius of a circle.
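The rotation-correction step of claim 3 amounts to rotating the segmented image (or, equivalently, its contour coordinates) back by the measured angle of the minimum bounding rectangle so the opening becomes horizontal. The helper below is a hypothetical coordinate-space version of that step, not the patent's implementation, which would rotate the image itself.

```python
import numpy as np

def correct_rotation(points, center, angle_deg):
    """Rotate (x, y) points about `center` by -angle_deg to undo a measured tilt."""
    theta = np.deg2rad(-angle_deg)  # rotate back by the rectangle's angle
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    return (np.asarray(points, float) - center) @ rot.T + center
```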
4. The AI-smart camera-based multi-target crack defect detection method of claim 3,
if the shape type of the opening contour is a first shape type, the at least one opening contour area image comprises a first opening contour area image and at least one second opening contour area image, wherein the at least one second opening contour area image is obtained by performing fillet contour straightening transformation on the at least one fillet area image.
5. The AI smart camera-based multi-target crack defect detection method of claim 3, wherein if the shape type of the aperture profile is a second shape type, the at least one aperture profile area image comprises a second aperture profile area image obtained by performing a circular profile straightening transformation on an original aperture profile area image.
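One way to realise the circular-contour straightening transformation of claims 4 and 5 is a polar unwrap: a thin ring around the circular opening (centre cx, cy, radius r from the coordinate dimension data) is resampled into a rectangular strip, so the curved edge becomes a straight horizontal band that a sliding window can traverse. The nearest-neighbour sampling below is an assumed scheme for illustration, not the patent's exact mapping.

```python
import numpy as np

def unwrap_ring(image, cx, cy, r, band=8, samples=360):
    """Unwrap an annulus of half-width `band` around radius r into a strip."""
    h, w = image.shape[:2]
    strip = np.zeros((2 * band, samples), dtype=image.dtype)
    angles = np.linspace(0, 2 * np.pi, samples, endpoint=False)
    for j, a in enumerate(angles):
        for i, rr in enumerate(np.arange(r - band, r + band)):
            # Nearest-neighbour lookup of the polar sample in the source image.
            x = int(round(cx + rr * np.cos(a)))
            y = int(round(cy + rr * np.sin(a)))
            if 0 <= x < w and 0 <= y < h:
                strip[i, j] = image[y, x]
    return strip
```

A production version would vectorise this with a remapping routine (e.g. OpenCV's `warpPolar`), but the loop form makes the geometry explicit.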
6. The AI-smart camera-based multi-target crack defect detection method of any one of claims 1-5, wherein the method further comprises:
marking the cut open pore edge sample image dataset of each workpiece type as a cracked image and a crack-free image based on whether the lines in the image are in contact with the bottom of the open pore edge;
training a first deep learning defect classification model aiming at the opening edge sample image data set of any workpiece type, carrying out transfer learning on the first deep learning defect classification model based on a small number of opening edge sample image data sets of other workpiece types, sequentially obtaining deep learning defect classification models aiming at other workpiece types, and deploying the deep learning defect classification model corresponding to each workpiece type to the AI intelligent camera.
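The labelling rule in claim 6 can be sketched as follows: an opening-edge sample image is marked as cracked when a line of foreground pixels reaches the bottom of the opening edge, here taken to be the last row of the cropped sample. The choice of pixel value for "line" pixels and the bottom-row criterion are assumptions for illustration only.

```python
import numpy as np

def label_edge_sample(sample, line_value=0):
    """Label a cut opening-edge sample as 'crack' or 'no_crack' (sketch).

    A sample counts as cracked if any line pixel touches the bottom row,
    i.e. the line is in contact with the bottom of the opening edge.
    """
    bottom_row = sample[-1, :]
    return "crack" if np.any(bottom_row == line_value) else "no_crack"
```

Datasets labelled this way would then train the first classification model, with the remaining workpiece types obtained by transfer learning from it, as the claim describes.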
7. The AI-smart camera-based multi-target crack defect detection method of claim 6, further comprising:
obtaining a mapping relation between the image coordinates of the first image and the plurality of detection areas based on the calibration data of the AI intelligent camera;
and determining a detection area identifier corresponding to each target workpiece according to the mapping relation and the position relation between the position coordinate of the prediction boundary frame of each target workpiece obtained by performing segmentation detection on the first image by the deep learning segmentation model and the image coordinate of the first image.
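Claim 7's mapping from a predicted bounding box to a detection-area identifier can be illustrated under a simplifying assumption: the detection areas form a horizontal row of equal-width cells in the first image. In practice the camera's calibration data would supply the real area geometry; the grid here is hypothetical.

```python
def area_id_for_bbox(bbox, image_width, n_areas):
    """Map a predicted bounding box (x, y, w, h) to a 0-based area id (sketch)."""
    x, y, w, h = bbox
    cx = x + w / 2.0            # centre of the predicted bounding box
    cell = image_width / float(n_areas)
    return int(cx // cell)      # which equal-width cell the centre falls into
```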
8. The AI smart camera-based multi-target crack defect detection method of claim 7, further comprising:
judging whether the same workpiece type exists in a plurality of target workpieces placed on the plurality of detection areas or not according to the workpiece type of each target workpiece;
if yes, aggregating the opening contour area images of the plurality of target workpieces into an opening contour area image dataset of each workpiece type according to the same workpiece type;
and respectively cutting a group of detection images along the opening contour by using a sliding window with preset image size and step length in the opening contour region image data set of each workpiece type, and sequentially inputting a deep learning defect classification model corresponding to the workpiece type.
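The aggregation step of claim 8 groups opening contour area images by workpiece type so that the classifier for each type consumes a single batch instead of being invoked once per workpiece. A minimal sketch:

```python
from collections import defaultdict

def aggregate_by_type(workpieces):
    """Group region images into per-type batches.

    workpieces: iterable of (workpiece_type, region_image) pairs.
    Returns {workpiece_type: [region_image, ...]}.
    """
    batches = defaultdict(list)
    for wtype, region in workpieces:
        batches[wtype].append(region)
    return dict(batches)
```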
9. The AI smart camera-based multi-target crack defect detection method of claim 8, further comprising: loading deep learning defect classification models respectively corresponding to a plurality of workpiece types in parallel, based on an AI acceleration processing unit built into the AI smart camera, to perform classification detection of crack defects in the detection images.
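The parallel model loading of claim 9 can be sketched with a thread pool; here `load_model` is a stand-in for whatever loader the camera's AI acceleration runtime actually provides, and the thread pool merely illustrates the concurrency, not the hardware-specific mechanism.

```python
from concurrent.futures import ThreadPoolExecutor

def load_models_parallel(workpiece_types, load_model, max_workers=4):
    """Load one defect classification model per workpiece type concurrently."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        models = list(pool.map(load_model, workpiece_types))
    return dict(zip(workpiece_types, models))
```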
10. A multi-target crack defect detection apparatus based on an AI smart camera, characterized by comprising:
the image acquisition unit is used for shooting a first image containing a plurality of detection areas on the same detection station through a single AI intelligent camera, and each detection area is respectively provided with a target workpiece;
the workpiece segmentation unit is used for carrying out segmentation detection on the first image by using a deep learning segmentation model, identifying the workpiece type of a target workpiece in each detection area and obtaining a segmentation image of each target workpiece;
an image processing unit for extracting an opening contour from the divided image of each target workpiece in parallel according to the workpiece type of each target workpiece, obtaining coordinate size data of the opening contour of each target workpiece, and obtaining at least one opening contour region image of each target workpiece from the divided image of each target workpiece according to the coordinate size data of the opening contour of each target workpiece;
and the defect detection unit is used for cutting a group of detection images along the opening contour from at least one opening contour area image of each target workpiece with a sliding window having a preset image size and step length, sequentially inputting the group into a deep learning defect classification model pre-deployed on the AI smart camera and corresponding to the workpiece type of each target workpiece, and identifying whether a crack defect exists in the group of detection images, wherein each detection image in the group of detection images comprises a part of the opening contour.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210946913.7A CN115018846B (en) | 2022-08-09 | 2022-08-09 | AI intelligent camera-based multi-target crack defect detection method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115018846A (en) | 2022-09-06 |
CN115018846B (en) | 2022-12-27 |
Family
ID=83066213
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210946913.7A Active CN115018846B (en) | 2022-08-09 | 2022-08-09 | AI intelligent camera-based multi-target crack defect detection method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115018846B (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105447851A (en) * | 2015-11-12 | 2016-03-30 | 刘新辉 | Glass panel sound hole defect detection method and system |
CN109596625A (en) * | 2019-02-01 | 2019-04-09 | 东莞中科蓝海智能视觉科技有限公司 | Workpiece, defect detection recognition method in charging tray |
CN110598637A (en) * | 2019-09-12 | 2019-12-20 | 齐鲁工业大学 | Unmanned driving system and method based on vision and deep learning |
CN112862770A (en) * | 2021-01-29 | 2021-05-28 | 珠海迪沃航空工程有限公司 | Defect analysis and diagnosis system, method and device based on artificial intelligence |
CN113554582A (en) * | 2020-04-22 | 2021-10-26 | 中国科学院长春光学精密机械与物理研究所 | Defect detection method, device and system for functional hole in cover plate of electronic equipment |
CN113822890A (en) * | 2021-11-24 | 2021-12-21 | 中科慧远视觉技术(北京)有限公司 | Microcrack detection method, device and system and storage medium |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115423783A (en) * | 2022-09-13 | 2022-12-02 | 优层智能科技(上海)有限公司 | Quality inspection method and device for photovoltaic module frame and junction box glue filling |
CN115619767A (en) * | 2022-11-09 | 2023-01-17 | 南京云创大数据科技股份有限公司 | Method and device for detecting surface defects of mirror-like workpiece based on multi-illumination condition |
CN115619767B (en) * | 2022-11-09 | 2023-04-18 | 南京云创大数据科技股份有限公司 | Method and device for detecting surface defects of mirror-like workpiece based on multi-illumination condition |
CN116452791A (en) * | 2023-03-27 | 2023-07-18 | 广州市斯睿特智能科技有限公司 | Multi-camera point defect area positioning method, system, device and storage medium |
CN116452791B (en) * | 2023-03-27 | 2024-03-22 | 广州市斯睿特智能科技有限公司 | Multi-camera point defect area positioning method, system, device and storage medium |
CN116612116A (en) * | 2023-07-19 | 2023-08-18 | 天津伍嘉联创科技发展股份有限公司 | Crystal appearance defect detection method based on deep learning image segmentation |
Also Published As
Publication number | Publication date |
---|---|
CN115018846B (en) | 2022-12-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN115018846B (en) | AI intelligent camera-based multi-target crack defect detection method and device | |
CN111223088B (en) | Casting surface defect identification method based on deep convolutional neural network | |
CN113658132B (en) | Computer vision-based structural part weld joint detection method | |
CN110148130B (en) | Method and device for detecting part defects | |
CN108369650B (en) | Method for identifying possible characteristic points of calibration pattern | |
CN113205063A (en) | Visual identification and positioning method for defects of power transmission conductor | |
CN110596120A (en) | Glass boundary defect detection method, device, terminal and storage medium | |
CN115330767A (en) | Method for identifying production abnormity of corrosion foil | |
CN112734761B (en) | Industrial product image boundary contour extraction method | |
CN109255792B (en) | Video image segmentation method and device, terminal equipment and storage medium | |
CN110866915A (en) | Circular inkstone quality detection method based on metric learning | |
CN111539927B (en) | Detection method of automobile plastic assembly fastening buckle missing detection device | |
CN116758080A (en) | Method and system for detecting screen printing defects of solar cell | |
CN115830359A (en) | Workpiece identification and counting method based on target detection and template matching in complex scene | |
CN112232222A (en) | Bullet train axle box end cover bolt loss fault detection method based on image processing | |
CN117557565B (en) | Detection method and device for lithium battery pole piece | |
CN112184723B (en) | Image processing method and device, electronic equipment and storage medium | |
CN116523916B (en) | Product surface defect detection method and device, electronic equipment and storage medium | |
CN113591923A (en) | Engine rocker arm part classification method based on image feature extraction and template matching | |
Lan et al. | Weld Recognition of Pressure Vessel Based on Texture Feature | |
CN117197215B (en) | Robust extraction method for multi-vision round hole features based on five-eye camera system | |
CN117078665B (en) | Product surface defect detection method and device, storage medium and electronic equipment | |
CN112364783B (en) | Part detection method and device and computer readable storage medium | |
CN116523909B (en) | Visual detection method and system for appearance of automobile body | |
CN117474916B (en) | Image detection method, electronic equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||