CN111507976B - Defect detection method and system based on multi-angle imaging - Google Patents

Defect detection method and system based on multi-angle imaging

Info

Publication number
CN111507976B
CN111507976B (application CN202010350606.3A)
Authority
CN
China
Prior art keywords
angle
product
detected
imaging
defect detection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010350606.3A
Other languages
Chinese (zh)
Other versions
CN111507976A (en)
Inventor
王福伟
李小飞
王建凯
陈曦
麻志毅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Advanced Institute of Information Technology AIIT of Peking University
Hangzhou Weiming Information Technology Co Ltd
Original Assignee
Advanced Institute of Information Technology AIIT of Peking University
Hangzhou Weiming Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Advanced Institute of Information Technology AIIT of Peking University, Hangzhou Weiming Information Technology Co Ltd filed Critical Advanced Institute of Information Technology AIIT of Peking University
Priority to CN202010350606.3A
Publication of CN111507976A
Application granted
Publication of CN111507976B
Legal status: Active
Anticipated expiration legal-status: Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/84Systems specially adapted for particular applications
    • G01N21/88Investigating the presence of flaws or contamination
    • G01N21/8851Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/84Systems specially adapted for particular applications
    • G01N21/88Investigating the presence of flaws or contamination
    • G01N21/8851Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges
    • G01N2021/8883Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges involving the calculation of gauges, generating models
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/84Systems specially adapted for particular applications
    • G01N21/88Investigating the presence of flaws or contamination
    • G01N21/8851Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges
    • G01N2021/8887Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges based on image processing techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Signal Processing (AREA)
  • Pathology (AREA)
  • Probability & Statistics with Applications (AREA)
  • Immunology (AREA)
  • General Health & Medical Sciences (AREA)
  • Biochemistry (AREA)
  • Analytical Chemistry (AREA)
  • Chemical & Material Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The embodiment of the application provides a defect detection method and system based on multi-angle imaging. First, an original image of the product to be detected is acquired. Next, category information of the product is obtained from the original image, the product's actual position coordinates are obtained from the original image, and a multi-angle parameter library for multi-angle photographing is established. A multi-angle image of the product is then obtained according to the category information, the actual position coordinates and the multi-angle parameter library. Finally, a defect detection result of the product is obtained from the multi-angle image. The method achieves adaptive intelligent quality inspection of products in "small-batch, multi-batch" production modes, performs multi-angle imaging of the product's three-dimensional multi-surface structure, carries out more accurate defect detection on that basis, and solves the prior-art problem that multi-batch products cannot be inspected for defects using multi-angle images.

Description

Defect detection method and system based on multi-angle imaging
Technical Field
The application belongs to the technical field of image recognition, and particularly relates to a defect detection method and system based on multi-angle imaging.
Background
During manufacturing, unavoidable defects such as cracks, scratches and stains often arise due to factors such as materials and environment. These defects affect the safety and appearance of the product and may even pose a certain degree of safety hazard. Factories therefore generally need to perform quality inspection to identify product defects.
Currently, most industrial manufacturers detect surface defects of products by manual observation, which has many problems. Training a skilled quality inspector takes a long time and incurs high labor cost, and inspectors working long, high-intensity shifts suffer physical and visual fatigue and easily misjudge small or indistinct defects. These problems greatly reduce the accuracy of product defect detection and thereby the economic returns of industrial manufacturers.
Therefore, some manufacturers have begun using machine vision for defect detection: the image is preprocessed with algorithms such as image enhancement, image reconstruction and image binarization, and a classification network then identifies product surface defects. For example, patent publication CN110632086A discloses a machine-vision-based method and system for detecting surface defects of injection-molded parts, automating product inspection with an upper computer unit and a detection device. However, it provides only a unit for fixing the object under inspection and supports only fixed shooting perpendicular to the object, so it cannot classify products to be detected across multiple batches or photograph them adaptively from multiple angles. Another disclosed machine-vision method and device for detecting surface defects of columnar products acquires surface images of the column, denoises them, establishes an ROI region and performs defect detection.
That patent can detect only two kinds of surface defects on columnar products; the product and defect types it handles are both singular, and it likewise cannot perform multi-batch classification or adaptive multi-angle shooting of the product to be detected. The machine-vision-based intelligent quality inspection control system and method for metal devices disclosed in patent publication CN110335272A sends the workpiece under inspection to a designated position via a first controller, which signals a second controller to drive a camera unit to photograph the workpiece and analyze the photo to judge whether it is a genuine or a defective product; this vision scheme is static, and it can only distinguish genuine from defective products, not judge the type of defect. The multifunctional intelligent quality inspection system of patent publication CN108681905A acquires parameters through an external quality inspection device and achieves diversified inspection, but it still cannot perform adaptive multi-angle imaging.
However, products to be detected often have three-dimensional multi-surface structures and require multi-angle imaging in actual detection. Meanwhile, as "small-batch, multi-batch" production modes become increasingly common, the specifications, sizes and shapes of products to be detected grow more diverse and complex. In addition, surface defects vary widely in form, and even a single defect type can take many shapes, so with many defect types a simple classification network can no longer meet current complex detection requirements. These problems pose serious challenges to the design, development and on-line deployment of quality inspection machines, and designing a software-driven defect detection system capable of inspecting multiple batches of products has become an urgent problem to be solved.
Disclosure of Invention
The application provides a defect detection method and system based on multi-angle imaging, aiming to solve the prior-art problem that defect detection cannot be performed using multi-angle images of products.
According to a first aspect of an embodiment of the present application, there is provided a defect detection method based on multi-angle imaging, including the steps of:
acquiring an original image of a product to be detected;
obtaining category information of the product to be detected according to the original image; acquiring actual position coordinates of the product to be detected according to the original image; establishing a multi-angle parameter library for multi-angle photographing;
obtaining a multi-angle image of the product to be detected according to the class information, the actual position coordinates and the multi-angle parameter library;
and obtaining a defect detection result of the product to be detected according to the multi-angle image.
Optionally, obtaining category information of the product to be detected according to the original image specifically includes:
constructing a classification network model;
training the classification network model by taking the manual labeling data of the product to be detected as a training sample to obtain a trained classification network model;
and inputting an original image of the product to be detected to the trained classification network model to obtain the class information of the product to be detected.
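The three steps above (construct the model, train it on manually labeled samples, then feed in the original image to infer the category) can be sketched with a minimal softmax classifier standing in for the classification network; the feature dimensions, toy data and training loop below are illustrative assumptions, not the network actually described in this application.

```python
import numpy as np

rng = np.random.default_rng(0)

# Step 1: "construct a classification network model" -- here one softmax
# layer stands in for the deep classification network of the patent.
n_features, n_classes = 8, 3          # illustrative sizes
W = np.zeros((n_features, n_classes))

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Step 2: train on manually labeled samples (X: per-image features,
# y: category ids supplied by human annotators).
def train(X, y, epochs=200, lr=0.5):
    global W
    Y = np.eye(n_classes)[y]                  # one-hot labels
    for _ in range(epochs):
        P = softmax(X @ W)
        W -= lr * X.T @ (P - Y) / len(X)      # cross-entropy gradient step

# Step 3: pass the original image's features through the trained model
# to obtain the product's category.
def predict_category(x):
    return int(np.argmax(softmax(x[None, :] @ W)))

# Toy data: three well-separated clusters of "product features".
centers = rng.normal(size=(n_classes, n_features)) * 3
X = np.vstack([c + 0.1 * rng.normal(size=(40, n_features)) for c in centers])
y = np.repeat(np.arange(n_classes), 40)
train(X, y)
print(predict_category(centers[1]))
```

On this toy data the classifier recovers the category of each cluster center; a real deployment would replace the linear layer with the convolutional classification network of FIG. 4.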
Optionally, acquiring actual position coordinates of the product to be inspected according to the original image specifically includes:
constructing a positioning network model based on target detection;
training a positioning network model by taking manual labeling data of a product to be detected as a training sample to obtain a trained positioning network model;
inputting an original image of a product to be detected to a trained positioning network model to obtain center point coordinates and direction angle information of the product to be detected;
and obtaining the actual position coordinates of the product to be detected from its center point coordinates and direction angle information, based on the camera calibration principle.
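The last step, mapping the detected pixel coordinates of the product's center to actual coordinates on the inspection table via camera calibration, can be sketched with pinhole back-projection onto a plane at known distance; the intrinsic parameters and camera height below are made-up example values, not calibration results from this application.

```python
import math

def pixel_to_table(u, v, fx, fy, cx, cy, z):
    """Back-project a pixel (u, v) onto the inspection plane located a
    known distance z from the camera, using pinhole intrinsics
    (fx, fy, cx, cy). Returns plane coordinates in the units of z."""
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return x, y

def direction_vector(angle_deg):
    """Unit vector for the product's detected direction angle."""
    a = math.radians(angle_deg)
    return math.cos(a), math.sin(a)

# Example: camera 500 mm above the table, principal point (320, 240),
# focal lengths 1000 px. A point 100 px right of center lies 50 mm away.
x, y = pixel_to_table(420, 240, fx=1000, fy=1000, cx=320, cy=240, z=500)
print(x, y)   # -> 50.0 0.0
```

A real system would obtain fx, fy, cx, cy (and lens distortion) from a calibration procedure rather than assume them.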
Optionally, the multi-angle parameter library contains multi-angle parameters for each kind of product to be detected; the multi-angle parameters include, in one-to-one correspondence with the original image, an initial angle, a front shooting angle, a left shooting angle and a right shooting angle.
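Such a parameter library can be sketched as a lookup table keyed by product category; the category names and angle values (degrees) below are illustrative assumptions, not parameters from this application.

```python
# Hypothetical multi-angle parameter library: for each product category,
# the initial angle of the original image plus the front, left and right
# shooting angles, all in degrees.
MULTI_ANGLE_LIBRARY = {
    "bracket": {"initial": 0.0, "front": 90.0, "left": 45.0, "right": 135.0},
    "housing": {"initial": 0.0, "front": 90.0, "left": 30.0, "right": 150.0},
}

def lookup_angles(category):
    """Retrieve the multi-angle parameters for a detected product category."""
    try:
        return MULTI_ANGLE_LIBRARY[category]
    except KeyError:
        raise ValueError(f"no multi-angle parameters recorded for {category!r}")

print(lookup_angles("bracket")["left"])   # -> 45.0
```

New product batches are supported by adding an entry to the table, which is what makes the scheme software-driven rather than requiring hardware re-erection.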
Optionally, obtaining a multi-angle image of the product to be detected according to the category information of the original image, the actual position coordinates of the product to be detected and the multi-angle parameter library, which specifically comprises the following steps:
retrieving, from the multi-angle parameter library according to the category information of the product to be detected, the multi-angle coordinates required for multi-angle photographing of that kind of product;
calculating a plurality of angle displacements of the hand-eye calibration robot for photographing and imaging according to the actual position coordinates of the product to be detected and the multi-angle coordinates required by multi-angle photographing and imaging;
the hand-eye calibration robot performs self-adaptive multi-angle imaging shooting according to a plurality of angle displacements of shooting imaging to obtain multi-angle images of the product to be detected.
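The displacement computation in the steps above can be sketched as composing the product's actual position with the per-angle offsets retrieved from the parameter library to obtain the robot's target poses; the (x, y, angle) pose representation is an illustrative simplification of a real hand-eye-calibrated robot's full pose.

```python
def shooting_poses(product_xy, product_angle, angle_offsets):
    """For each shooting angle required by the parameter library, compute
    the target pose (x, y, camera_angle) for the hand-eye calibrated
    robot. product_xy / product_angle come from the positioning network;
    angle_offsets come from the multi-angle parameter library."""
    x, y = product_xy
    return {
        name: (x, y, (product_angle + offset) % 360.0)
        for name, offset in angle_offsets.items()
    }

def displacement(current_pose, target_pose):
    """Per-axis displacement the robot must travel between two poses."""
    return tuple(t - c for c, t in zip(current_pose, target_pose))

# Product located at (50 mm, 0 mm), rotated 10 degrees on the table.
poses = shooting_poses((50.0, 0.0), 10.0, {"front": 90.0, "left": 45.0})
print(poses["left"])                                    # -> (50.0, 0.0, 55.0)
print(displacement((0.0, 0.0, 0.0), poses["front"]))    # -> (50.0, 0.0, 100.0)
```

Adding the product's measured deflection angle to each library offset is what makes the shooting adaptive to offset and rotated products.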
Optionally, obtaining a defect detection result of the product to be detected according to the multi-angle image of the product to be detected specifically includes:
constructing a defect segmentation network model;
training the defect segmentation network model by taking the manual labeling data of the product to be detected as a training sample to obtain a trained defect segmentation network model;
inputting the multi-angle images of the product to be detected into the trained defect segmentation network model to obtain a defect segmentation result of the product to be detected, and further obtaining a defect detection result of the product to be detected.
Optionally, the defect segmentation network is a PSPNet semantic segmentation network, a SegNet semantic segmentation network, a Fully Convolutional DenseNet semantic segmentation network, or a U-Net semantic segmentation network.
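Whichever segmentation backbone is chosen, its output is a per-pixel class mask; turning that mask into a defect detection result can be sketched as follows. The class ids, defect names and the toy mask are illustrative, and the network itself is replaced here by a precomputed mask.

```python
import numpy as np

DEFECT_CLASSES = {1: "crack", 2: "scratch", 3: "stain"}   # 0 = background

def defect_report(mask):
    """Summarize a semantic-segmentation mask into a defect detection
    result: which defect types are present and the pixel area of each."""
    report = {}
    for class_id, name in DEFECT_CLASSES.items():
        area = int((mask == class_id).sum())
        if area > 0:
            report[name] = area
    return report

# Toy 6x6 mask standing in for the segmentation network's output on one
# of the multi-angle images of the product to be detected.
mask = np.zeros((6, 6), dtype=int)
mask[1:3, 1:4] = 1        # a 2x3 "crack" region
mask[4, 0:5] = 2          # a 1x5 "scratch" region
print(defect_report(mask))   # -> {'crack': 6, 'scratch': 5}
```

Because segmentation labels every pixel, the result carries the defect's type, location and extent, which is the fine-grained information a plain classification network cannot provide.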
According to a second aspect of an embodiment of the present application, there is provided a defect detection system based on multi-angle imaging, specifically including:
an original image acquisition module, configured to acquire an original image of the product to be detected;
an original image analysis module, configured to obtain category information of the product to be detected from the original image and to acquire actual position coordinates of the product to be detected from the original image;
a multi-angle parameter library module, configured to establish the multi-angle parameter library for multi-angle photographing;
a multi-angle image module, configured to obtain a multi-angle image of the product to be detected according to the category information, the actual position coordinates and the multi-angle parameter library;
and a defect detection module, configured to obtain a defect detection result of the product to be detected according to the multi-angle image.
According to a third aspect of an embodiment of the present application, there is provided a defect detection terminal including: a processor and a memory; wherein the memory stores a computer program adapted to be loaded by the processor and to perform a defect detection method based on multi-angle imaging.
According to a fourth aspect of an embodiment of the present application, there is provided a computer-readable storage medium having a computer program stored thereon; the computer program is executed by the processor to implement a multi-angle imaging-based defect detection method.
By adopting the defect detection method and system based on multi-angle imaging of the embodiment of the application, an original image of the product to be detected is first acquired; category information and actual position coordinates of the product are then obtained from the original image, and a multi-angle parameter library for multi-angle photographing is established; a multi-angle image of the product is next obtained according to the category information, the actual position coordinates and the multi-angle parameter library; and finally a defect detection result of the product is obtained from the multi-angle image. The method achieves adaptive intelligent quality inspection of products in "small-batch, multi-batch" production modes, performs multi-angle imaging of the product's three-dimensional multi-surface structure, carries out more accurate defect detection on that basis, and solves the prior-art problem that multi-batch products cannot be inspected for defects using multi-angle images.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute a limitation on the application. In the drawings:
FIG. 1 is an exemplary diagram of an image classification network application according to the present application;
FIG. 2 is an exemplary diagram of an image segmentation application according to the present application;
FIG. 3 is a flowchart of the steps of a defect detection method based on multi-angle imaging according to another embodiment of the present application;
FIG. 4 is a schematic diagram of the structure of a classification network according to an embodiment of the present application;
FIG. 5 is a schematic diagram of the structure of an object detection network according to an embodiment of the present application;
FIG. 6 is a schematic diagram of the structure of a defect segmentation network according to an embodiment of the present application;
FIG. 7 is a flow chart of a defect detection method based on multi-angle imaging according to an embodiment of the present application;
FIG. 8 is a schematic diagram of a defect detection system based on multi-angle imaging according to an embodiment of the present application;
FIG. 9 is a schematic diagram of the configuration of a defect detection terminal according to an embodiment of the present application.
Detailed Description
In the process of realizing the application, the inventors found that as "small-batch, multi-batch" production modes become increasingly common, the specifications, sizes and shapes of products to be detected show growing diversity and complexity, and the forms of their surface defects vary widely. Existing defect detection approaches cannot meet these complex requirements. Moreover, the product to be detected often has a three-dimensional multi-surface structure, so accurate quality inspection in practice requires multi-angle imaging.
To address these problems, the application discloses an adaptive intelligent quality inspection method and system based on target detection positioning and multi-angle imaging. It enables intelligent imaging of multiple batches of products to be detected: three-dimensional industrial products with multi-angle structures are first recognized and located by deep-learning methods based on a classification network and a target detection network, and a hand-eye calibration robot is then driven, using a pre-designed adaptive imaging-angle parameter library, to photograph the product on the inspection table from multiple angles. The application further discloses a semantic-segmentation-based method that achieves fine-grained segmentation of the product's surface defects for defect detection. With this software-driven scheme, adaptive intelligent quality inspection of products in "small-batch, multi-batch" production modes is achieved without frequent manual re-erection and debugging of the quality inspection machine's hardware, greatly reducing the labor cost of on-line deployment and debugging; multi-angle imaging of the product's three-dimensional multi-surface structure then supports more accurate defect detection.
Compared with the prior art, which cannot handle offset or rotation of the product to be detected and requires manual adjustment for each batch, the application achieves adaptive imaging of products that are offset, rotated or drawn from multiple batches. Specifically, a classification network classifies products to be detected across batches; a target detection network locates the product's actual position coordinates; and a semantic segmentation network detects the surface defect regions of the product.
Also compared with the prior art, whose classification-based quality inspection offers poor precision and detects only a single defect type, the semantic segmentation underlying the proposed method can detect different types of defects, especially small and complex ones, with better detection precision and more detailed defect information.
In order to make the technical solutions and advantages of the embodiments of the present application clearer, exemplary embodiments are described in detail below with reference to the accompanying drawings; evidently, the described embodiments are only some, not all, of the embodiments of the application. It should be noted that, where no conflict arises, the embodiments of the application and their features may be combined with one another.
Example 1
In order to better illustrate the defect detection method based on multi-angle imaging according to the embodiment of the present application, a convolutional neural network, an image classification network, object detection and image segmentation adopted in the embodiment of the present application are described first.
With respect to convolutional neural networks: these are feedforward neural networks that include convolution operations and have a deep structure, and are among the representative algorithms of deep learning. They have feature-learning capability and, owing to their hierarchical structure, can classify input information in a translation-invariant way. Each artificial neuron responds to units within part of its coverage area, which gives excellent performance on large-scale image processing. A convolutional network contains convolution layers, which extract features from small regions of the original input image to produce feature maps, and pooling layers, which further compress those feature maps. Convolutional networks are commonly used in visual tasks such as image classification, target detection and semantic segmentation; most such tasks extract features with a convolutional network, build a task-specific head, and learn from large numbers of labeled training samples, continually adjusting the network's parameters to minimize prediction error.
With respect to image classification networks: image classification is a broad research direction in the field of computer vision, and its core task is to assign a given image a label from a given set of categories.
An exemplary diagram of an image classification network application according to the present application is shown in FIG. 1. Referring to FIG. 1, the image classification model reads a picture's feature information and produces, for the label set {cat, dog, hat, mug}, the probability that the picture belongs to each label. To a computer, the image is a large three-dimensional array of numbers: in this example the cat image is 248 pixels wide and 400 pixels high, with 3 color channels, red, green and blue (RGB). The image therefore contains 248 x 400 x 3 = 297,600 numbers, each an integer from 0 (full black) to 255 (full white). The task of image classification is to turn these numbers into the simple image label "cat".
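The arithmetic in this example is easy to check directly; below, a blank stand-in array with the same dimensions as the cat picture (248 wide, 400 high, 3 RGB channels):

```python
import numpy as np

# A 400-row x 248-column RGB image: height x width x channels,
# one 8-bit integer (0 = full black ... 255 = full white) per channel.
image = np.zeros((400, 248, 3), dtype=np.uint8)

print(image.size)    # -> 297600, i.e. 248 * 400 * 3
print(image.dtype)   # -> uint8
```

The classification model's job is to map such an array to a single label.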
Regarding target detection: target detection is a computer vision technique for recognizing specific objects in an image together with their positions; the algorithm must judge both the type of each object and its position coordinates in the image. The location information generally comes in two formats, both taking the picture's top-left corner as the origin (0, 0). One uses corner coordinates (xmin, ymin, xmax, ymax), where xmin and ymin are the minimum x and y coordinates and xmax and ymax the maximum ones; the other uses the center point, (x_center, y_center, w, h), where x_center and y_center are the center coordinates of the detection box and w and h its width and height. Conventional target detection, however, is ill-suited to detecting surface defects of products in real scenes: products to be detected are not placed perfectly horizontally, and their surfaces are irregular curved surfaces, so the photographed product often shows some angular deviation or interference from other sides. Against these problems, the embodiment of the application adopts RRPN, a rotated target detection network originating in text detection, which generates angled candidate regions by setting rotated candidate boxes of different proportions and sizes; the angles of the generated candidate regions are used to compute the product's actual deflection angle, better fitting the surface defect regions of real products and enabling adaptive shooting by the hand-eye calibration robot.
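The two box formats described above are interchangeable; a sketch of the conversion follows (origin at the top-left corner, as stated). A rotated detection network such as the RRPN mentioned above additionally carries an angle per box, which is omitted here.

```python
def corner_to_center(xmin, ymin, xmax, ymax):
    """(xmin, ymin, xmax, ymax) -> (x_center, y_center, w, h)."""
    w, h = xmax - xmin, ymax - ymin
    return xmin + w / 2, ymin + h / 2, w, h

def center_to_corner(x_center, y_center, w, h):
    """(x_center, y_center, w, h) -> (xmin, ymin, xmax, ymax)."""
    return (x_center - w / 2, y_center - h / 2,
            x_center + w / 2, y_center + h / 2)

box = corner_to_center(10, 20, 50, 60)
print(box)                     # -> (30.0, 40.0, 40, 40)
print(center_to_corner(*box))  # -> (10.0, 20.0, 50.0, 60.0)
```

Detection frameworks routinely convert between the two, since corner form is convenient for cropping and center form for regression targets.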
With respect to image segmentation: image segmentation is a technique and process that divides an image into several specific regions with distinctive properties and extracts the objects of interest. It assigns a semantic label to each pixel in the image, so that features within the same sub-region share a certain similarity while features in different sub-regions show obvious differences. From a mathematical perspective, image segmentation is the process of dividing an image into mutually disjoint regions.
An exemplary diagram of an image segmentation application according to the present application is shown in fig. 2. As shown in fig. 2, the objects in the picture, a horse and people, are segmented out and identified with different colors. Through image segmentation, objects in an image can be identified simply and clearly, which greatly simplifies the image and helps highlight the primary objects of interest.
Based on the above technical understanding, a flowchart of steps of a defect detection method based on multi-angle imaging according to an embodiment of the present application is shown in fig. 3.
As shown in fig. 3, the defect detection method based on multi-angle imaging of the present embodiment specifically includes the following steps:
s101: and acquiring an original image of the product to be detected.
S102: obtaining class information of a product to be detected according to the original image; acquiring actual position coordinates of a product to be detected according to the original image; and establishing a multi-angle parameter library for multi-angle photographing.
S103: and obtaining a multi-angle image of the product to be detected according to the class information, the actual position coordinates and the multi-angle parameter library.
S104: and obtaining a defect detection result of the product to be detected according to the multi-angle image.
In detail, in S101, an original image of a product to be inspected is acquired. Firstly, respectively placing a plurality of batches of products to be detected on a product detection table to be detected, and performing initial photographing once by using a hand-eye calibration robot to obtain an original image.
In step S102, category information of the original image is obtained according to the original image, and specifically includes the following steps:
first, a classification network model is constructed.
The embodiment of the application adopts a classification network method based on deep learning to acquire the category information of different batches from the photos of the products to be detected in multiple batches.
A schematic diagram of the structure of a classification network according to an embodiment of the application is shown in fig. 4. As shown in fig. 4, the structural characteristics are as follows:
(1) A 3-channel RGB original image of size 448 x 448 pixels is input.
(2) The input image undergoes 3 rounds of convolution and downsampling operations, producing a compressed image feature map.
Processing the picture through convolution layers allows features with higher robustness to be learned. Downsampling is done by max pooling, i.e. taking the maximum value of the feature points in a neighborhood; max pooling reduces the parameter scale and complexity, finally yielding a feature map of smaller size with an increased number of channels.
(3) The feature map produced by the convolution layers is fed into a fully connected layer, which maps the learned features to the sample space; a softmax activation function then computes class probabilities. Since a 1x1 convolution kernel does not change the spatial size of the feature map, a 1x1 convolution layer is used in place of the fully connected layer.
(4) Batch Normalization before the convolution layers accelerates training and improves the generalization capability of the network; a ReLU activation function after the convolution layers strengthens the nonlinear relationship between layers and helps prevent overfitting during training.
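The size of the compressed feature map after the three convolution/downsampling stages can be checked with a short calculation. The kernel sizes and padding below are assumptions (the application does not specify them); 3x3 "same" convolutions followed by 2x2 max pooling are a typical choice:

```python
def conv2d_out(size, kernel, stride=1, padding=0):
    """Spatial output size of a convolution layer."""
    return (size + 2 * padding - kernel) // stride + 1

def maxpool_out(size, kernel=2, stride=2):
    """Spatial output size of a max-pooling layer."""
    return (size - kernel) // stride + 1

size = 448  # input is a 448 x 448 RGB image
for _ in range(3):  # three conv + downsample stages, as in the description
    size = conv2d_out(size, kernel=3, stride=1, padding=1)  # 3x3 'same' conv
    size = maxpool_out(size)  # 2x2 max pooling halves the spatial size
print(size)  # → 56 (448 halved three times)
```

Each pooling step halves the spatial extent while the channel count is free to grow, matching the "smaller size, more channels" behavior described above.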
And secondly, taking the artificial labeling data of the product to be detected as a training sample, training the classification network model constructed above, and finally obtaining the trained classification network model.
The method specifically comprises the following steps:
a convolutional neural network is used as a classification network, and one feature of the convolutional neural network is that a large amount of artificial labeling data is required as a training sample.
Training data for the classification network is collected and annotated as follows. First, a large number of defect sample photos are taken with the hand-eye calibration robot; then the defect photos are manually annotated: images of each category are named with distinct IDs under the corresponding category. The annotated data are then divided into training, test, and validation sets in a ratio of 5:1:1 for classification network training. In addition, the region of the product to be detected in each image is marked with a rectangular frame, and the frame coordinates are recorded for the target detection network.
After the training samples are obtained and the classification network is obtained, the training samples are input to train the classification network, and related operations in the training process are as follows:
first, the training data are resized to a length of 600 and a width of 600 using a transform method; then each image is randomly flipped horizontally, a 448 x 448 crop is taken at random, and the training data are loaded;
in the training process, the size of the batch size is set to be 16, and the initial learning rate is 0.001;
during training, stochastic gradient descent with momentum is used. Other optimization methods such as Adagrad, RMSprop, and Adam could also be adopted, but stochastic gradient descent with momentum yields comparatively better training results in this setup;
in the training process, NLLLoss loss function is used;
during the training process, a test is performed every 5 trained epochs until the model converges.
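The parameter update of stochastic gradient descent with momentum, mentioned above, can be sketched as follows (the momentum coefficient 0.9 is an assumption; the application only specifies the learning rate 0.001):

```python
def sgd_momentum_step(w, grad, velocity, lr=0.001, momentum=0.9):
    """One parameter update of SGD with momentum (classic formulation)."""
    velocity = momentum * velocity - lr * grad  # accumulate a decaying velocity
    return w + velocity, velocity

# Toy loss L(w) = w**2, so grad = 2*w; a few steps should shrink w toward 0.
w, v = 1.0, 0.0
for _ in range(3):
    grad = 2 * w
    w, v = sgd_momentum_step(w, grad, v)
```

The velocity term lets consistent gradients build up speed, which is why momentum often converges faster than plain SGD on the ravine-shaped loss surfaces of deep networks.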
Finally, based on the trained classification network, the classification network based on deep learning is used for carrying out class prediction on the product to be detected. And inputting an original image of the product to be detected to the trained classification network model to obtain the class information of the product to be detected.
Specifically, after the classification network is trained, prediction on the product to be detected only requires loading the trained classification network model; no optimizer or loss function is needed, and the image shot by the hand-eye calibration robot is fed into the trained classification model for prediction.
After the classification network obtains the category information of the multi-batch products to be detected, the position coordinates of the products to be detected on the detection table are required to be obtained so as to realize self-adaptive multi-angle imaging, so that a positioning network based on target detection is required to be further introduced to obtain the actual position coordinates of the products to be detected.
In step S102, the actual position coordinates of the product to be inspected are obtained according to the original image, and specifically include the following steps:
first, a positioning network model based on object detection is constructed.
The positioning network based on object detection according to the embodiment of the present application adopts an RRPN object detection network based on text detection, and a schematic structure diagram of the object detection network according to the embodiment of the present application is shown in fig. 5.
As shown in fig. 5, the network structure is characterized in that:
(1) A 3-channel RGB image of size 448 x 448 pixels is input.
(2) The input image passes through 5 convolution layers and 2 downsampling layers, the 2 downsampling layers following the first and second convolution layers respectively, to form an image feature map.
(3) The feature map then passes through the RRPN module, which enumerates anchors of different sizes and aspect ratios combined with six different rotation angles, finally generating candidate regions with inclination angles, so that the candidate frames adapt more accurately to the orientation of the regions of the products to be detected.
(4) Next comes the RRoIPooling layer, whose inputs are the conv5 output and the region proposals (about 2000 of them); the RoI pooling layer extracts the corresponding features of each RoI from the feature map.
(5) Finally, the network passes through fully connected layers with two 4096-dimensional outputs, one a classification output and the other a regression output.
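The RRPN-style anchor enumeration described in (3) can be sketched as follows. The scale and aspect-ratio values are illustrative assumptions; the six rotation angles spaced pi/6 apart follow the usual RRPN convention and are not values given explicitly in the application:

```python
import math
from itertools import product

SCALES = (128, 256, 512)     # assumed anchor scales
RATIOS = (1 / 2, 1, 2)       # assumed aspect ratios
ANGLES = tuple(-math.pi / 6 + i * math.pi / 6 for i in range(6))  # six angles

def rotated_anchors(cx, cy):
    """Enumerate all rotated anchors (cx, cy, w, h, angle) at one feature-map cell."""
    anchors = []
    for scale, ratio, angle in product(SCALES, RATIOS, ANGLES):
        w = scale * math.sqrt(ratio)   # width/height keep area ≈ scale**2
        h = scale / math.sqrt(ratio)
        anchors.append((cx, cy, w, h, angle))
    return anchors

anchors = rotated_anchors(224, 224)
print(len(anchors))  # → 54 (3 scales x 3 ratios x 6 angles)
```

Adding the angle dimension multiplies the anchor count sixfold, which is the price paid for candidate frames that can align with a tilted product region.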
And secondly, taking the manual labeling data of the product to be detected as a training sample, and training the constructed target detection positioning network model to obtain a trained target detection network model.
The method specifically comprises the following steps:
a convolutional neural network is used as a target detection network, and one feature of the convolutional neural network is that a large amount of artificial labeling data is required as a training sample.
Training data of a target detection network is acquired and marked, and a large number of defect sample photos are shot by using a hand-eye calibration robot; then, the defective photos are manually marked: the images of each different category are named as different IDs under the corresponding category; then, the labeling data are divided into a training set, a testing set and a verification set according to the proportion of 5:1:1, and are used for training the target detection network.
After the training sample is obtained and the target detection network is obtained, the training sample is input to train the target detection network, and related operations in the training process are as follows:
First, the training data are resized to a length of 600 and a width of 600, then each image is randomly flipped horizontally, a 448 x 448 crop is taken at random, and the training data are loaded.
In the training process, multiple loss functions were tested for the regression output of the position of the product to be detected, including focal loss, krikage loss, Lossless Triplet loss, and Repulsion loss; the focal loss function, which gave the best test results, was selected as the final loss function.
In the training process, for any one RoI, the softmax loss value of the background area and the regression value of the background area are calculated.
In the training process, the training strategy adopts 4-step alternating training to train until the loss function converges.
Then, based on the trained target detection network, the network based on target detection is used for positioning the product to be detected. And inputting an original image of the product to be detected to a trained positioning network model based on target detection to obtain the center point coordinates and the direction angle information of the product to be detected.
And finally, obtaining the actual position coordinates of the product to be detected based on a camera calibration principle according to the center point coordinates and the direction angle information of the product to be detected.
The specific flow is as follows:
1) A training sample for target detection is input into the trained target detection network, which outputs the regressed center point coordinates of the product to be detected together with an inclination angle carrying its direction angle information.
2) And obtaining the actual position coordinates of the product to be detected on the detection table from the space points and the corresponding pixel points according to the camera calibration principle.
The camera calibration principle is as follows:
A camera coordinate system is established with the point O as its origin. A point Q (X, Y, Z) in camera-coordinate space is projected by light onto the image plane at the point q (x, y, f). The image plane is perpendicular to the optical axis (the z-axis), and the distance from the projection center to the plane is f (the focal length of the camera). From the similar-triangle relation: x/f = X/Z and y/f = Y/Z, i.e. x = fX/Z and y = fY/Z. The process of mapping the point Q with coordinates (X, Y, Z) to the point q with coordinates (x, y) on the projection plane is called projective transformation.
The above transformation from point Q to point q can be expressed with a 3x3 matrix: q = MQ, where

    M = | f  0  0 |
        | 0  f  0 |
        | 0  0  1 |

so that MQ = (fX, fY, Z), and dividing by the third component yields the perspective projective transformation (fX/Z, fY/Z) = (x, y). The matrix M is called the internal reference (intrinsic) matrix of the camera, and its units are all physical dimensions.
By the above method, points in the camera coordinate system can be converted into the physical units of the image coordinate system, i.e., (X, Y, Z) → (x, y).
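The projection x = fX/Z, y = fY/Z can be checked numerically with a small sketch (function name is illustrative):

```python
def project(X, Y, Z, f):
    """Project a camera-frame point Q=(X, Y, Z) onto the image plane: q=(x, y)."""
    if Z == 0:
        raise ValueError("point lies in the camera plane; projection undefined")
    return (f * X / Z, f * Y / Z)

# With M = diag(f, f, 1), q = MQ gives (fX, fY, Z); dividing by the
# third component recovers exactly (fX/Z, fY/Z).
print(project(4.0, 6.0, 2.0, f=2.0))  # → (4.0, 6.0)
```

Doubling Z halves both image coordinates, which is the familiar perspective effect of distant objects appearing smaller.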
As noted above, conventional target detection methods fall short for detecting surface defects of products to be detected in real scenes: the product is not laid perfectly horizontally and its surface is an irregular curved surface, so the photographed product often shows some angle deviation or interference from other sides. The embodiment of the application therefore adopts the rotating target detection network RRPN based on text detection: rotated candidate frames of different proportions and sizes generate candidate regions with angles, and the actual deflection angle of the product to be detected is calculated from the angles of the generated candidate regions, better adapting to the surface defect regions of real products and realizing adaptive shooting by the hand-eye calibration robot.
In step S102, a multi-angle parameter library for multi-angle photographing is established. The multi-angle parameter library comprises multi-angle parameters of different types of products to be detected, wherein the multi-angle parameters comprise initial angles, front shooting angles, left shooting angles and right shooting angles of original images in one-to-one correspondence.
In step S103, a multi-angle image of the product to be inspected is obtained according to the category information of the original image, the actual position coordinates of the product to be inspected and the multi-angle parameter library. The method specifically comprises the following steps:
firstly, according to the class information of the product to be detected, the multi-angle coordinates required by multi-angle photographing imaging under the class corresponding to the multi-angle parameter library are called.
And then, calculating a plurality of angle displacements of the hand-eye calibration robot for photographing and imaging according to the actual position coordinates of the product to be detected and the multi-angle coordinates required by multi-angle photographing and imaging.
Finally, the hand-eye calibration robot performs self-adaptive multi-angle imaging shooting according to a plurality of angle displacements of shooting imaging to obtain multi-angle images of the product to be detected.
Before adaptive multi-angle imaging can be achieved, the following three tasks are completed through step S102:
(1) Different kinds of information under multiple batches are predicted by using a classification network, namely, a computer is firstly used for determining which product to be detected the current photo shot by the hand-eye calibration robot belongs to;
(2) After the category of the current product to be detected is determined, the center point coordinate of the current product to be detected on the picture is regressed by utilizing a positioning neural network based on target detection, and then the actual position of the product to be detected on the detection table is calculated by utilizing a camera calibration principle.
(3) A multi-angle parameter library covering the various categories of products to be detected is established. For each different product, an initial shooting angle and multi-angle parameters are preset with the lower left corner of the detection table as the origin (units: cm), forming the multi-angle parameter library of products to be detected.
The initial shooting angle is used for shooting an initial photo for the classification network to use; the multi-angle parameters are preset for different products to be detected with different three-dimensional structures. Table 1 is an example of a multi-angle parameter library.
Product category to be inspected | Initial shooting angle | Front shooting angle | Left shooting angle | Right shooting angle
Product 1 to be inspected        | (25, 25, 45°)          | (25, 25, 45°)        | (-25, 18, -135°)    | (38, 24, 25°)
Product 2 to be inspected        | (30, 20, 45°)          | (32, 22, 45°)        | (-28, 23, -140°)    | (35, 22, 30°)
Product n to be inspected        | (28, 26, 45°)          | (30, 26, 45°)        | (-20, 25, -145°)    | (39, 20, 20°)

Table 1: Example of a multi-angle parameter library
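Table 1 maps directly onto a simple lookup structure; a sketch with the example values (the dictionary and key names are assumptions for illustration, coordinates in cm from the lower-left corner of the detection table, angles in degrees):

```python
# A minimal in-memory version of Table 1.
PARAM_LIBRARY = {
    "product_1": {
        "initial": (25, 25, 45), "front": (25, 25, 45),
        "left": (-25, 18, -135), "right": (38, 24, 25),
    },
    "product_2": {
        "initial": (30, 20, 45), "front": (32, 22, 45),
        "left": (-28, 23, -140), "right": (35, 22, 30),
    },
}

def shooting_angles(category):
    """Return the preset multi-angle coordinates for a predicted category."""
    return PARAM_LIBRARY[category]

angles = shooting_angles("product_2")
print(angles["left"])  # → (-28, 23, -140)
```

Once the classification network predicts the category, retrieving the preset shooting poses is a constant-time lookup, which keeps the imaging loop fast.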
In S103, the multi-angle image of the product to be detected is obtained from the category information of the original image, the actual position coordinates of the product to be detected, and the multi-angle parameter library. The calculation proceeds as follows:

Assume the preset fixed placement coordinates of a product to be detected on the detection table are (a, b, θ), the preset coordinates of one surface angle of the product that needs imaging are (p, q, φ) (taken from the multi-angle parameter library), and the actual position coordinates located by the target detection network are (x, y, θ').

The deviation between the current placement of the product and its preset position is calculated as (a - x, b - y, θ - θ').

Since the same product to be detected is always placed on its designated surface (that is, placed horizontally rather than vertically, with only a slight deviation in the horizontal plane), the coordinate offset and rotation that the hand-eye calibration robot needs to apply are exactly this deviation between the product's actual and preset placement positions.

The inclination angle θ' carrying the direction angle information of the product is the one regressed by the target detection network.

Finally, the coordinate position to which the hand-eye calibration robot moves is the preset shooting coordinate corrected by the deviation, i.e. (p - (a - x), q - (b - y), φ - (θ - θ')), which then drives the hand-eye calibration robot's multi-angle imaging.
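The pose correction in S103 can be sketched in plain Python; the sign convention and function name are assumptions reconstructed from the derivation above:

```python
def robot_pose(preset_place, actual_place, preset_shot):
    """Correct a preset shooting pose by the product's placement deviation.

    preset_place: (a, b, theta)   preset placement on the detection table
    actual_place: (x, y, theta')  pose located by the target detection network
    preset_shot:  (p, q, phi)     shooting pose from the parameter library
    """
    a, b, theta = preset_place
    x, y, theta_p = actual_place
    p, q, phi = preset_shot
    # deviation of the product from its preset position
    dx, dy, dtheta = a - x, b - y, theta - theta_p
    # shift the preset shooting pose by that same deviation
    return (p - dx, q - dy, phi - dtheta)

# Product 2 sits 2 cm right, 2 cm up, rotated 1 degree off its preset pose.
pose = robot_pose((30, 20, 45), (32, 22, 44), (32, 22, 45))
print(pose)  # → (34, 24, 44)
```

The shooting pose inherits the product's displacement, so the camera keeps the same relative view of the surface no matter how the product was set down.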
In step S104, a defect detection result of the product to be detected is obtained according to the multi-angle image of the product to be detected. After self-adaptive multi-angle imaging shooting, defect images of the product to be detected under multiple angles are obtained, and the embodiment adopts a defect segmentation network based on semantic segmentation to carry out defect segmentation and detection based on the defect images under multiple angles.
The defect detection process according to the defect segmentation network specifically comprises the following steps:
first, a defect segmentation network model is constructed.
The embodiment of the application adopts a multi-layer convolution segmentation network PSPNet to segment target defects from photos. A schematic structure of a defect segmentation network according to an embodiment of the present application is shown in fig. 6.
As shown in fig. 6, the structural characteristics are as follows:
(1) Fig. 6 (a) is an input picture of 3-channel RGB with an image size of 448×448 pixels.
(2) Fig. 6 (b) applies a pre-trained residual network with dilated convolution, comprising a series of convolution and pooling operations; the convolution operations extract image features, the pooling operations compress them, and the final output feature map is 1/8 the size of the original input image.
(3) Fig. 6 (c) is the pyramid pooling module used to aggregate context information. The pyramid has 4 levels, and the feature map at each of the four levels passes through a convolution layer, Batch Normalization, and a ReLU activation function, which helps prevent overfitting and improves the generalization capability of the network.
(4) For the four levels of feature maps, the spatial dimensions of each feature map are restored to the spatial dimensions of the input of the pyramid pooling module by upsampling (linear interpolation), respectively.
(5) The four-level feature maps are concatenated with the input of the pyramid pooling module, and the fused feature map finally passes through a convolution layer to obtain the final predicted feature map.
And secondly, taking the artificial labeling data of the product to be detected as a training sample, and training the defect segmentation network model constructed above to obtain a trained defect segmentation network model.
The method specifically comprises the following steps:
In the defect segmentation network, a convolutional neural network is used, one characteristic of which is that a large amount of manually annotated data is needed as training samples. First, a large number of defect sample photos are taken with the hand-eye calibration robot; then the defect photos are manually annotated: images of each category are named with distinct IDs under the corresponding category; the annotated data are then divided into training, test, and validation sets in a ratio of 5:1:1. Each defect in a photo is inspected manually and the defective area is annotated at the pixel level; the final form of the annotation is a mask image of the same size as the original photo, used by the defect segmentation network.
After a training sample is obtained and a defect segmentation network is constructed, the training sample is input to train the defect segmentation network, and related operations in the training process are as follows:
First, the training data are loaded and randomly shuffled, and the training set photos and annotated defect mask maps are scaled to a length of 448 and a width of 448.
In the training process, a focal loss function is selected for testing the segmentation loss, and an auxiliary loss function is added, wherein the weight of the auxiliary loss function is 0.4, so that the final loss and the auxiliary loss are balanced.
During training, an Adam optimizer is used with an initial learning rate of 0.01, and the model is evaluated after each epoch using the intersection-over-union (IoU) metric until it converges.
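The segmentation loss combination described above (focal loss plus an auxiliary loss weighted 0.4) can be sketched as follows; the focusing parameters gamma and alpha are common defaults, not values given in the application:

```python
import math

def focal_loss(p, gamma=2.0, alpha=0.25):
    """Focal loss for a positive-class prediction p: -alpha * (1-p)**gamma * log(p)."""
    return -alpha * (1 - p) ** gamma * math.log(p)

def total_loss(main, aux, aux_weight=0.4):
    """Combine the final segmentation loss with the 0.4-weighted auxiliary loss."""
    return main + aux_weight * aux

# A well-classified pixel (p=0.9) is down-weighted far more than a hard one (p=0.1),
# which is what makes focal loss suitable for the rare defect pixels.
easy, hard = focal_loss(0.9), focal_loss(0.1)
assert hard > easy
```

The auxiliary loss comes from an intermediate supervision head; weighting it at 0.4 keeps it from dominating the final prediction loss while still stabilizing early training.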
And finally, inputting the multi-angle image of the product to be detected into the trained defect segmentation network model to obtain a defect segmentation result of the product to be detected, and further obtaining a defect detection result of the product to be detected.
Specifically, in order to complete the detection of the surface defects of the product to be detected, the following operations are required:
1) The trained semantic segmentation network is loaded, and the original images of multiple surface angles of the product to be detected, shot by the hand-eye calibration robot, are scaled to a length of 448 and a width of 448.
2) And sending the original image into a defect segmentation network based on semantic segmentation, and finally obtaining a predicted probability distribution map.
In other embodiments, the defect segmentation network may be a SegNet semantic segmentation network, a Fully Convolutional DenseNet semantic segmentation network, or a U-Net semantic segmentation network in addition to the PSPNet semantic segmentation network.
Further described, a flow diagram of a multi-angle imaging-based defect detection method according to another embodiment of the present application is shown in fig. 7.
Referring to the quality inspection flow chart of fig. 7: first, multiple batches of products to be detected are placed on the product detection table, and an initial photograph is taken with the hand-eye calibration robot; second, the deep-learning-based classification network judges the category of the current product to be detected, and the positioning network based on target detection locates its actual position coordinates; then, multi-angle imaging photographs are taken with the adaptive multi-angle imaging system; finally, defect segmentation is performed with the defect segmentation network based on semantic segmentation, achieving multi-angle adaptive defect detection of the product surface.
The classification Network, the target detection Network and the semantic segmentation Network in the application all adopt convolutional neural networks, and Capsule networks (Capsule networks) and the like can be used in other embodiments.
The target detection network in the embodiment of the present application adopts an RRPN network, and other mainstream target detection networks may also be used in other embodiments, for example: r3Det network, gliding Vertex network, RSDet network, etc.
The defect segmentation network in the embodiment of the application adopts a PSPNet semantic segmentation network, and other mainstream semantic segmentation networks can also be adopted in other embodiments, for example: segNet, fully Convolutional DenseNet, U-Net, etc.
The product to be detected is not limited to a certain product to be detected, and is also applicable to various products with three-dimensional structures, which need to be subjected to multi-angle imaging, for example: wood board, power strip, package box, etc.
The defect detection method of the embodiment of the application comprises the following steps: firstly, placing a plurality of batches of products to be detected on a detection table, performing primary photographing by using a hand-eye calibration robot, performing class identification on the batches of products to be detected by using a classification network based on deep learning, and determining class information of the current products; then, a plurality of imaging angle parameters under different products to be detected are called, and position coordinates of the current products to be detected are estimated by utilizing a target detection network, so that actual position coordinates of the current products to be detected are determined; and then calculating the position value of the hand-eye calibration robot by using a camera calibration algorithm and combining the estimated actual position coordinates of the product to be detected and the multiple angle parameters of the different types of the products to be detected. Finally, multi-angle self-adaptive imaging of the surface defects of the product to be detected is realized. And finally, performing defect segmentation prediction on the obtained multi-angle imaging image by using a defect segmentation network based on semantic segmentation.
The defect detection method based on multi-angle imaging in the embodiment of the application has the following beneficial effects:
1. and a classification network method based on deep learning is used for realizing classification tasks of multiple batches of products to be detected.
2. And determining the position coordinates of the product to be detected on the detection table according to the image of the product to be detected by using a positioning method of target detection.
3. The self-adaptive multi-angle imaging system based on deep learning obtains class information of a plurality of batches of products to be detected according to a classification network, and then a plurality of self-adaptive shooting angles under different classes in multi-angle parameters are called; and obtaining the actual position coordinates of the product to be detected according to the target detection network, and finally enabling the hand-eye calibration robot to realize self-adaptive multi-angle shooting imaging.
4. By means of the defect segmentation method based on semantic segmentation, a defect area is segmented from the shot multi-angle photo, and quality inspection results of products to be inspected are accurate.
Example 2
The present embodiment provides a defect detection system based on multi-angle imaging, and for details not disclosed in the defect detection system of the present embodiment, please refer to the defect detection method based on multi-angle imaging in other embodiments.
Fig. 8 is a schematic structural diagram of a defect detection system based on multi-angle imaging according to an embodiment of the present application. As shown in fig. 8, the defect detection system based on multi-angle imaging provided in this embodiment includes: the system comprises an original image acquisition module 10, an original image analysis module 20, a multi-angle parameter library module 30, a multi-angle image module 40 and a defect detection module 50.
As shown in fig. 8, the defect detection system based on multi-angle imaging has the following specific structure:
the original image acquisition module 10: the method is used for acquiring the original image of the product to be inspected.
Raw image analysis module 20: the method comprises the steps of obtaining class information of a product to be detected according to an original image; and the method is used for acquiring the actual position coordinates of the product to be detected according to the original image.
Multi-angle parameter library module 30: the multi-angle parameter library is used for establishing a multi-angle parameter library for multi-angle photographing.
Multi-angle image module 40: the multi-angle image acquisition module is used for acquiring the multi-angle image of the product to be detected according to the category information, the actual position coordinates and the multi-angle parameter library.
Defect detection module 50: and the defect detection result of the product to be detected is obtained according to the multi-angle image of the product to be detected.
By adopting the defect detection system based on multi-angle imaging in the embodiment of the application, an original image of a product to be detected is firstly obtained; then, obtaining class information of a product to be detected according to the original image, obtaining actual position coordinates of the product to be detected according to the original image, and establishing a multi-angle parameter library for multi-angle photographing; then obtaining a multi-angle image of the product to be detected according to the class information of the original image, the actual position coordinates of the product to be detected and the multi-angle parameter library; and finally, obtaining a defect detection result of the product to be detected according to the multi-angle image of the product to be detected. The method realizes the self-adaptive intelligent quality inspection of the products in the production modes of 'small batches and multiple batches', realizes the multi-angle imaging of the three-dimensional multi-surface structure of the products and carries out more accurate defect detection based on the multi-angle imaging, and solves the problem that the defect detection of the products in multiple batches based on multi-angle images cannot be realized in the prior art.
Example 3
Fig. 9 is a schematic structural diagram of a defect detection terminal according to an embodiment of the present application. As shown in Fig. 9, the terminal provided in this embodiment includes a memory 301 and a processor 302; a computer program is stored in the memory 301 and configured to be executed by the processor 302 to implement the multi-angle-imaging-based defect detection method provided by any of the embodiments.
Example 4
This embodiment also provides a computer-readable storage medium on which a computer program is stored; the computer program is executed by a processor to implement the multi-angle-imaging-based defect detection method provided by any of the embodiments.
With the multi-angle-imaging-based defect detection terminal and computer-readable medium of these embodiments, an original image of the product to be detected is first acquired; the class information and the actual position coordinates of the product to be detected are then obtained from the original image, and a multi-angle parameter library for multi-angle photographing is established; multi-angle images of the product to be detected are then obtained according to the class information, the actual position coordinates, and the parameter library; finally, the defect detection result of the product to be detected is obtained from the multi-angle images. This achieves adaptive intelligent quality inspection for products in "small-batch, multi-batch" production, realizes multi-angle imaging of a product's three-dimensional multi-surface structure for more accurate defect detection, and overcomes the prior-art inability to perform defect detection on multiple product batches based on multi-angle images.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted as including the preferred embodiments and all such alterations and modifications as fall within the scope of the application.
It will be apparent to those skilled in the art that various modifications and variations can be made to the present application without departing from the spirit or scope of the application. Thus, it is intended that the present application also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.

Claims (9)

1. A defect detection method based on multi-angle imaging, characterized by comprising the following steps:
acquiring an original image of a product to be detected;
obtaining class information of the product to be detected according to the original image; acquiring actual position coordinates of the product to be detected according to the original image;
establishing a multi-angle parameter library for multi-angle photographing;
obtaining multi-angle images of the product to be detected according to the class information, the actual position coordinates, and the multi-angle parameter library; wherein obtaining the multi-angle images of the product to be detected according to the class information, the actual position coordinates, and the multi-angle parameter library specifically comprises: retrieving from the multi-angle parameter library the multi-angle coordinates required for multi-angle photographing under the class corresponding to the class information of the product to be detected; calculating a plurality of angular displacements for photographing by a hand-eye calibrated robot according to the actual position coordinates of the product to be detected and the multi-angle coordinates required for multi-angle photographing; and performing, by the hand-eye calibrated robot, adaptive multi-angle imaging according to the plurality of angular displacements to obtain the multi-angle images of the product to be detected;
and obtaining a defect detection result of the product to be detected according to the multi-angle images.
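As a non-limiting illustration of the angular-displacement calculation in claim 1 (simplified to a planar pose; the frame conventions and names are assumptions, not values from the patent), the robot's target poses can be obtained by rotating each library offset into the world frame at the product's actual position:

```python
import math

def robot_displacements(product_pose, view_offsets):
    """Compute robot target poses, one per photographing angle.

    product_pose: (x, y, theta), the product's actual position and direction
                  angle in the world frame.
    view_offsets: per-view (dx, dy, dtheta) camera offsets defined relative
                  to the product, as stored in the parameter library.
    """
    x, y, theta = product_pose
    poses = []
    for dx, dy, dtheta in view_offsets:
        # Rotate the library offset by the product's direction angle,
        # then translate by its actual position.
        wx = x + dx * math.cos(theta) - dy * math.sin(theta)
        wy = y + dx * math.sin(theta) + dy * math.cos(theta)
        poses.append((wx, wy, theta + dtheta))
    return poses
```

A real hand-eye calibrated robot works with full 6-DOF transforms; the planar version only illustrates how the library coordinates and the detected product pose combine.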
2. The defect detection method based on multi-angle imaging according to claim 1, wherein obtaining the class information of the product to be detected according to the original image specifically comprises:
constructing a classification network model;
training the classification network model by taking the manual labeling data of the product to be detected as a training sample to obtain a trained classification network model;
and inputting the original image of the product to be detected to the trained classification network model to obtain the class information of the product to be detected.
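Claim 2 specifies a classification network trained on manually labeled samples. As a dependency-free stand-in (a trivial nearest-centroid classifier, not the claimed neural network), the train-then-infer flow looks like this; all names and the feature representation are illustrative assumptions:

```python
def train_classifier(samples):
    """Learn one mean feature vector per manually labeled class.
    samples: iterable of (feature_vector, label) pairs."""
    sums, counts = {}, {}
    for features, label in samples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, f in enumerate(features):
            acc[i] += f
        counts[label] = counts.get(label, 0) + 1
    return {lbl: [s / counts[lbl] for s in vec] for lbl, vec in sums.items()}

def classify(model, features):
    """Return the label whose centroid is nearest to the feature vector."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda lbl: dist2(model[lbl], features))
```

A production system would replace both functions with training and inference of a CNN classifier; the interface (labeled samples in, class prediction out) is what the claim fixes.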
3. The defect detection method based on multi-angle imaging according to claim 1, wherein obtaining the actual position coordinates of the product to be detected according to the original image specifically comprises:
constructing a positioning network model based on target detection;
training the positioning network model by taking the manual labeling data of the product to be detected as a training sample to obtain a trained positioning network model;
inputting the original image of the product to be detected to the trained positioning network model to obtain the center point coordinates and the direction angle information of the product to be detected;
and obtaining the actual position coordinates of the product to be detected based on a camera calibration principle according to the center point coordinates and the direction angle information of the product to be detected.
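For a product lying on a planar work surface, the camera-calibration step in claim 3 reduces to mapping the detected center-point pixel through a calibrated plane-to-plane homography. This is one common realization, not necessarily the patent's; the 3x3 matrix `H` is assumed to come from a prior calibration step not shown:

```python
def pixel_to_world(u, v, H):
    """Map a pixel coordinate (u, v) to world-plane coordinates via a 3x3
    homography H obtained from camera calibration (planar surface assumed)."""
    x = H[0][0] * u + H[0][1] * v + H[0][2]
    y = H[1][0] * u + H[1][1] * v + H[1][2]
    w = H[2][0] * u + H[2][1] * v + H[2][2]
    # Homogeneous coordinates: divide through by the scale factor w.
    return (x / w, y / w)
```

The direction angle from the positioning network transforms the same way, so center point plus angle yields the full planar pose of the product.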
4. The defect detection method based on multi-angle imaging according to claim 1, wherein the multi-angle parameter library comprises multi-angle parameters for different classes of products to be detected, the multi-angle parameters comprising, in one-to-one correspondence with each class, an initial angle of the original image, a front shooting angle, a left shooting angle, and a right shooting angle.
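A minimal realization of the parameter library of claim 4 is a per-class lookup table of the four angles. The product classes and angle values below are made-up examples, not data from the patent:

```python
# One entry per product class: initial angle plus front/left/right shooting
# angles (degrees). Values here are illustrative only.
PARAM_LIBRARY = {
    "bottle_cap": {"initial": 0.0, "front": 90.0, "left": 45.0, "right": 135.0},
    "gear":       {"initial": 0.0, "front": 90.0, "left": 30.0, "right": 150.0},
}

def lookup_angles(product_class):
    """Return the shooting angles registered for a product class."""
    params = PARAM_LIBRARY[product_class]
    return [params["initial"], params["front"], params["left"], params["right"]]
```

Adding support for a new product batch then amounts to registering one new entry, which is what makes the "small-batch, multi-batch" adaptation cheap.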
5. The defect detection method based on multi-angle imaging according to claim 1, wherein obtaining the defect detection result of the product to be detected according to the multi-angle images of the product to be detected specifically comprises:
constructing a defect segmentation network model;
training the defect segmentation network model by taking the manual labeling data of the product to be detected as a training sample to obtain a trained defect segmentation network model;
inputting the multi-angle images of the product to be detected into the trained defect segmentation network model to obtain defect segmentation results of the product to be detected, and thereby obtaining the defect detection result of the product to be detected.
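The final step of claim 5, going from per-view segmentation masks to a detection result, can be illustrated as a simple aggregation over views. The pixel-count threshold is an illustrative choice, not a value from the patent:

```python
def defect_result(masks, min_defect_pixels=5):
    """Aggregate per-view binary defect masks into one detection result.

    masks: one mask per shooting angle, each a list of rows of 0/1 values,
           as the segmentation network would produce after thresholding.
    """
    defective_views = []
    for i, mask in enumerate(masks):
        area = sum(sum(row) for row in mask)  # defect pixels in this view
        if area >= min_defect_pixels:
            defective_views.append((i, area))
    return {"defective": bool(defective_views), "views": defective_views}
```

Because each view covers a different face of the three-dimensional product, a defect visible from any single angle is enough to flag the product.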
6. The multi-angle imaging-based defect detection method of claim 5, wherein the defect segmentation network is a PSPNet semantic segmentation network, a SegNet semantic segmentation network, a Fully Convolutional DenseNet semantic segmentation network, or a U-Net semantic segmentation network.
7. A defect detection system based on multi-angle imaging, comprising:
an original image acquisition module, configured to acquire an original image of a product to be detected;
an original image analysis module, configured to obtain class information of the product to be detected from the original image, and to obtain actual position coordinates of the product to be detected from the original image;
a multi-angle parameter library module, configured to establish a multi-angle parameter library for multi-angle photographing;
a multi-angle image module, configured to obtain multi-angle images of the product to be detected according to the class information, the actual position coordinates, and the multi-angle parameter library; wherein the multi-angle image module is specifically configured to: retrieve from the multi-angle parameter library the multi-angle coordinates required for multi-angle photographing under the class corresponding to the class information of the product to be detected; calculate a plurality of angular displacements for photographing by a hand-eye calibrated robot according to the actual position coordinates of the product to be detected and the multi-angle coordinates required for multi-angle photographing; and cause the hand-eye calibrated robot to perform adaptive multi-angle imaging according to the plurality of angular displacements to obtain the multi-angle images of the product to be detected;
and a defect detection module, configured to obtain a defect detection result of the product to be detected according to the multi-angle images.
8. A defect detection terminal, comprising: a processor and a memory; wherein the memory stores a computer program adapted to be loaded by the processor and to perform the multi-angle imaging based defect detection method according to any of claims 1-6.
9. A computer-readable storage medium, characterized in that a computer program is stored thereon; the computer program is executed by a processor to implement the multi-angle imaging-based defect detection method as claimed in any one of claims 1 to 6.
CN202010350606.3A 2020-04-28 2020-04-28 Defect detection method and system based on multi-angle imaging Active CN111507976B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010350606.3A CN111507976B (en) 2020-04-28 2020-04-28 Defect detection method and system based on multi-angle imaging

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010350606.3A CN111507976B (en) 2020-04-28 2020-04-28 Defect detection method and system based on multi-angle imaging

Publications (2)

Publication Number Publication Date
CN111507976A CN111507976A (en) 2020-08-07
CN111507976B true CN111507976B (en) 2023-08-18

Family

ID=71876496

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010350606.3A Active CN111507976B (en) 2020-04-28 2020-04-28 Defect detection method and system based on multi-angle imaging

Country Status (1)

Country Link
CN (1) CN111507976B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112986260A (en) * 2021-02-08 2021-06-18 菲特(珠海横琴)智能科技有限公司 Camera matrix-based detection system, control system, terminal, medium and application
CN112700446A (en) * 2021-03-23 2021-04-23 常州微亿智造科技有限公司 Algorithm model training method and device for industrial quality inspection
CN113160204A (en) * 2021-04-30 2021-07-23 聚时科技(上海)有限公司 Semantic segmentation network training method for generating defect area based on target detection information
CN113362288B (en) * 2021-05-24 2024-03-08 深圳明锐理想科技股份有限公司 Golden finger scratch detection method and device and electronic equipment
CN113716146B (en) * 2021-07-23 2023-04-07 武汉纺织大学 Paper towel product packaging detection method based on deep learning
CN115532620B (en) * 2022-12-01 2023-05-16 杭州未名信科科技有限公司 Pulp molding product quality inspection device and method
CN116818664B (en) * 2023-06-16 2024-03-12 山东福特尔地毯有限公司 Carpet defect detection method and system based on visual detection

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107657603A (en) * 2017-08-21 2018-02-02 北京精密机电控制设备研究所 A kind of industrial appearance detecting method based on intelligent vision
CN109816644A (en) * 2019-01-16 2019-05-28 大连理工大学 A kind of bearing defect automatic checkout system based on multi-angle light source image
CN109829893A (en) * 2019-01-03 2019-05-31 武汉精测电子集团股份有限公司 A kind of defect object detection method based on attention mechanism
CN109978870A (en) * 2019-03-29 2019-07-05 北京百度网讯科技有限公司 Method and apparatus for output information
CN110243826A (en) * 2019-07-10 2019-09-17 上海微现检测设备有限公司 A kind of On-line Product detection method and device

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4095860B2 (en) * 2002-08-12 2008-06-04 株式会社日立ハイテクノロジーズ Defect inspection method and apparatus
US20170206658A1 (en) * 2016-01-15 2017-07-20 Abl Ip Holding Llc Image detection of mapped features and identification of uniquely identifiable objects for position estimation


Also Published As

Publication number Publication date
CN111507976A (en) 2020-08-07

Similar Documents

Publication Publication Date Title
CN111507976B (en) Defect detection method and system based on multi-angle imaging
Spencer Jr et al. Advances in computer vision-based civil infrastructure inspection and monitoring
CN111325713B (en) Neural network-based wood defect detection method, system and storage medium
CN107230203B (en) Casting defect identification method based on human eye visual attention mechanism
CN111640157B (en) Checkerboard corner detection method based on neural network and application thereof
CN113592845A (en) Defect detection method and device for battery coating and storage medium
CN108562250B (en) Keyboard keycap flatness rapid measurement method and device based on structured light imaging
CN113920107A (en) Insulator damage detection method based on improved yolov5 algorithm
CN110288612B (en) Nameplate positioning and correcting method and device
CN110763700A (en) Method and equipment for detecting defects of semiconductor component
US11948344B2 (en) Method, system, medium, equipment and terminal for inland vessel identification and depth estimation for smart maritime
CN111915485A (en) Rapid splicing method and system for feature point sparse workpiece images
CN115775236A (en) Surface tiny defect visual detection method and system based on multi-scale feature fusion
Avola et al. Real-time deep learning method for automated detection and localization of structural defects in manufactured products
CN115937203A (en) Visual detection method, device, equipment and medium based on template matching
CN115713476A (en) Visual detection method and device based on laser welding and readable storage medium
CN117214178A (en) Intelligent identification method for appearance defects of package on packaging production line
CN117079125A (en) Kiwi fruit pollination flower identification method based on improved YOLOv5
Kumar et al. Edge detection based shape identification
CN116125489A (en) Indoor object three-dimensional detection method, computer equipment and storage medium
CN116051808A (en) YOLOv 5-based lightweight part identification and positioning method
CN115717865A (en) Method for measuring full-field deformation of annular structure
CN116543247A (en) Data set manufacturing method and verification system based on photometric stereo surface reconstruction
CN115116026A (en) Automatic tracking method and system for logistics carrying robot
Lugo et al. Semi-supervised learning approach for localization and pose estimation of texture-less objects in cluttered scenes

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
CB02 Change of applicant information
CB02 Change of applicant information

Address after: Room 101, building 1, block C, Qianjiang Century Park, ningwei street, Xiaoshan District, Hangzhou City, Zhejiang Province

Applicant after: Hangzhou Weiming Information Technology Co.,Ltd.

Applicant after: Institute of Information Technology, Zhejiang Peking University

Address before: Room 288-1, 857 Xinbei Road, Ningwei Town, Xiaoshan District, Hangzhou City, Zhejiang Province

Applicant before: Institute of Information Technology, Zhejiang Peking University

Applicant before: Hangzhou Weiming Information Technology Co.,Ltd.

SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant