CN111507976A - Defect detection method and system based on multi-angle imaging - Google Patents


Info

Publication number
CN111507976A
CN111507976A (application CN202010350606.3A)
Authority
CN
China
Prior art keywords
product
angle
detected
image
defect detection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010350606.3A
Other languages
Chinese (zh)
Other versions
CN111507976B (en)
Inventor
王福伟
李小飞
王建凯
陈曦
麻志毅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Advanced Institute of Information Technology AIIT of Peking University
Hangzhou Weiming Information Technology Co Ltd
Original Assignee
Advanced Institute of Information Technology AIIT of Peking University
Hangzhou Weiming Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Advanced Institute of Information Technology AIIT of Peking University, Hangzhou Weiming Information Technology Co Ltd filed Critical Advanced Institute of Information Technology AIIT of Peking University
Priority to CN202010350606.3A
Publication of CN111507976A
Application granted
Publication of CN111507976B
Legal status: Active
Anticipated expiration: (not stated on the record)

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/0002 - Inspection of images, e.g. flaw detection
    • G06T 7/0004 - Industrial image inspection
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01N - INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N 21/00 - Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N 21/84 - Systems specially adapted for particular applications
    • G01N 21/88 - Investigating the presence of flaws or contamination
    • G01N 21/8851 - Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/24 - Classification techniques
    • G06F 18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2415 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/70 - Determining position or orientation of objects or cameras
    • G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01N - INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N 21/00 - Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N 21/84 - Systems specially adapted for particular applications
    • G01N 21/88 - Investigating the presence of flaws or contamination
    • G01N 21/8851 - Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges
    • G01N 2021/8883 - Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges involving the calculation of gauges, generating models
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01N - INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N 21/00 - Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N 21/84 - Systems specially adapted for particular applications
    • G01N 21/88 - Investigating the presence of flaws or contamination
    • G01N 21/8851 - Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges
    • G01N 2021/8887 - Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges based on image processing techniques
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 - Image acquisition modality
    • G06T 2207/10024 - Color image
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 - Special algorithmic details
    • G06T 2207/20081 - Training; Learning
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 - Special algorithmic details
    • G06T 2207/20084 - Artificial neural networks [ANN]
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00 - Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/30 - Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Chemical & Material Sciences (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Probability & Statistics with Applications (AREA)
  • Analytical Chemistry (AREA)
  • Biochemistry (AREA)
  • General Health & Medical Sciences (AREA)
  • Immunology (AREA)
  • Pathology (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

Embodiments of the present application provide a defect detection method and system based on multi-angle imaging. The method comprises the following steps: first, an original image of the product to be detected is acquired; then, the category information of the product to be detected is obtained from the original image, the actual position coordinates of the product to be detected are obtained from the original image, and a multi-angle parameter library for multi-angle photographing is established; next, multi-angle images of the product to be detected are obtained according to the category information, the actual position coordinates and the multi-angle parameter library; finally, the defect detection result of the product to be detected is obtained from its multi-angle images. The method and system realize adaptive intelligent quality inspection under a small-batch, multi-batch production mode, achieve multi-angle imaging of the three-dimensional, multi-surface structure of a product, perform more accurate defect detection on that basis, and solve the problem that the prior art cannot perform multi-angle-imaging-based defect detection on multiple batches of products.

Description

Defect detection method and system based on multi-angle imaging
Technical Field
The present application belongs to the technical field of image recognition, and in particular relates to a defect detection method and system based on multi-angle imaging.
Background
When a factory produces a product, defects such as cracks, scratches and stains inevitably occur due to materials, the production environment and other causes. These defects affect the safety and appearance of the product and may even pose a safety hazard. Factories therefore generally need to perform quality inspection to identify product defects.
At present, most industrial manufacturers detect product surface defects by manual observation, but manual observation has many problems. Training a skilled manual quality inspector requires a long period and a high labor cost, and under long hours of high-intensity inspection work an inspector suffers physical and visual fatigue and easily misjudges small or indistinct defects. These problems greatly reduce the accuracy of product defect detection and thereby hurt the economic benefit of industrial manufacturers.
Therefore, some manufacturers have begun to use machine vision for defect detection: images are preprocessed with algorithms such as image enhancement, image reconstruction and image binarization, and a classification network is then used to identify product surface defects. For example, patent publication CN110632086A discloses a machine-vision-based method and system for detecting surface defects of injection-molded parts, which automates detection through an upper computer unit and a detection device; however, it provides only a fixed unit for holding the object to be detected, supports only fixed shooting perpendicular to that object, and cannot perform multi-batch classification or adaptive multi-angle shooting of the product to be detected. Patent publication CN107328781A discloses a machine-vision-based method and device for detecting surface defects of cylindrical products, which acquires the surface image of a cylindrical product, denoises it, establishes an ROI region and detects defects; it can detect only two kinds of defects on the surface of columnar products, handles a single product type and a single defect type, and likewise cannot perform multi-batch classification or adaptive multi-angle shooting. Patent publication CN110335272A discloses a machine-vision-based control system, method and electronic device for intelligent quality inspection of metal devices, in which a first controller sends the workpiece to be inspected to a designated position and notifies a second controller, and the second controller drives a camera unit to shoot and analyzes the resulting picture to judge whether the workpiece is a good product or a defective one; this vision processing scheme is static, can only distinguish good from defective, and cannot judge the type of defect. Patent publication CN108681905A discloses a multi-functional intelligent quality inspection system that acquires parameters through several external quality inspection devices and thereby achieves multi-directional quality inspection, but it still cannot achieve adaptive multi-angle imaging.
However, the product to be detected often has a three-dimensional, multi-surface structure, so multi-angle imaging is needed in actual detection. Meanwhile, as small-batch, multi-batch production becomes more and more common, the specifications, sizes and shapes of products to be detected grow diverse and complex. In addition, product surface defects vary widely in shape, and even a single defect type can take many forms, so when there are many defect types a simple classification network cannot meet these complex detection requirements. These problems pose a serious challenge to the design, development and online deployment of quality inspection machines, and how to design a software-driven defect detection system that can inspect multiple batches of products has become an urgent problem to be solved.
Disclosure of Invention
The invention provides a defect detection method and system based on multi-angle imaging, aiming to solve the problem that the prior art cannot perform defect detection based on multi-angle images of products.
According to a first aspect of the embodiments of the present application, there is provided a defect detection method based on multi-angle imaging, including the following steps:
acquiring an original image of a product to be detected;
obtaining the class information of a product to be detected according to the original image; acquiring the actual position coordinates of a product to be detected according to the original image; establishing a multi-angle parameter library for multi-angle photographing;
obtaining a multi-angle image of the product to be detected according to the category information, the actual position coordinate and the multi-angle parameter library;
and obtaining a defect detection result of the product to be detected according to the multi-angle image.
Optionally, obtaining the category information of the product to be detected from the original image specifically includes:
constructing a classification network model;
training a classification network model by taking the manual labeling data of the product to be detected as a training sample to obtain a trained classification network model;
and inputting the original image of the product to be detected to the trained classification network model to obtain the class information of the product to be detected.
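The three steps above (build, train, infer) can be sketched in miniature. The following is only an illustrative stand-in, not the patent's actual network: the class names, feature dimension and weights are invented, and the "trained model" is reduced to a single linear layer with softmax, just to show how features extracted from an original image map to category information.

```python
import numpy as np

# Hypothetical product categories (used later to index the parameter library).
CLASSES = ["bolt", "bearing", "gear"]

def classify(feature_vec, weights, bias):
    """Return (class_name, probability) for one feature vector.

    A stand-in for a trained classification network: a single linear
    layer followed by a numerically stable softmax over class scores.
    """
    scores = feature_vec @ weights + bias
    exp = np.exp(scores - scores.max())   # subtract max for stability
    probs = exp / exp.sum()
    idx = int(np.argmax(probs))
    return CLASSES[idx], float(probs[idx])

# Toy "trained" parameters: a 4-dim feature mapped to 3 classes.
W = np.array([[ 2.0, -1.0,  0.0],
              [ 0.0,  2.0, -1.0],
              [-1.0,  0.0,  2.0],
              [ 0.5,  0.5,  0.5]])
b = np.zeros(3)

label, p = classify(np.array([1.0, 0.0, 0.0, 1.0]), W, b)
```

In a real deployment the feature vector would come from the convolutional layers of the trained classification network rather than be hand-crafted.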
Optionally, obtaining the actual position coordinates of the product to be detected according to the original image specifically includes:
constructing a positioning network model based on target detection;
training a positioning network model by taking the manual marking data of the product to be detected as a training sample to obtain a trained positioning network model;
inputting an original image of a product to be detected to a trained positioning network model to obtain a central point coordinate and direction angle information of the product to be detected;
and obtaining the actual position coordinate of the product to be detected based on the camera calibration principle according to the center point coordinate and the direction angle information of the product to be detected.
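As a hedged illustration of the last step, the mapping from the detected center-point pixel to actual coordinates on a flat inspection table can be done with a planar calibration homography, one common form of the camera calibration principle. The matrix values below are invented for the example.

```python
import numpy as np

def pixel_to_table(u, v, H):
    """Map a pixel (u, v) to table-plane coordinates via a calibration
    homography H, then dehomogenize by the third component."""
    p = H @ np.array([u, v, 1.0])
    return p[0] / p[2], p[1] / p[2]

# Hypothetical calibration: 0.5 units per pixel, origin shifted.
H = np.array([[0.5, 0.0, -100.0],
              [0.0, 0.5,  -80.0],
              [0.0, 0.0,    1.0]])

# Center point reported by the positioning network, e.g. pixel (300, 200).
x, y = pixel_to_table(300, 200, H)
```

The direction angle from the positioning network would be transformed alongside the coordinates in the same calibrated frame.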
Optionally, the multi-angle parameter library includes multi-angle parameters for the different categories of products to be detected; the multi-angle parameters comprise, in one-to-one correspondence with the original images, an initial angle, a front shooting angle, a left shooting angle and a right shooting angle.
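One plausible in-memory layout for such a parameter library is a mapping from product category to its angle set; the class names and angle values below are invented for illustration, since the patent specifies only which angles each entry holds.

```python
# Hypothetical multi-angle parameter library: per product category,
# the initial angle plus the front/left/right shooting angles (degrees).
PARAM_LIB = {
    "bolt":    {"initial": 0.0, "front": 90.0, "left": 45.0, "right": 135.0},
    "bearing": {"initial": 0.0, "front": 90.0, "left": 30.0, "right": 150.0},
}

def lookup_angles(product_class):
    """Return the shooting angles registered for one product category."""
    entry = PARAM_LIB[product_class]
    return [entry["front"], entry["left"], entry["right"]]
```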
Optionally, obtaining the multi-angle image of the product to be detected according to the category information of the original image, the actual position coordinates of the product to be detected and the multi-angle parameter library specifically includes:
calling, from the multi-angle parameter library, the multi-angle coordinates required for multi-angle photographic imaging under the corresponding product category, according to the category information of the product to be detected;
calculating the angular displacements of the hand-eye calibration robot for photographic imaging, according to the actual position coordinates of the product to be detected and the multi-angle coordinates required for multi-angle photographic imaging;
and the hand-eye calibration robot performing adaptive multi-angle imaging according to these angular displacements, obtaining the multi-angle images of the product to be detected.
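A minimal sketch of the displacement calculation, under the simplifying assumption that the robot's shooting angles are just the library angles corrected by the product's measured deflection; the real kinematics of a hand-eye calibration robot would of course be richer.

```python
def shooting_angles(library_angles, product_deflection_deg):
    """Correct each imaging angle from the parameter library by the
    deflection angle measured for this product, so the robot shoots
    the same faces regardless of how the part lies on the table."""
    return [(a + product_deflection_deg) % 360.0 for a in library_angles]

angles = shooting_angles([90.0, 45.0, 135.0], 12.5)
```

The modulo keeps angles in [0, 360), so a part rotated near the wrap-around point still yields valid commands.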
Optionally, obtaining the defect detection result of the product to be detected according to its multi-angle images specifically includes:
constructing a defect segmentation network model;
training a defect segmentation network model by taking the manual marking data of the product to be detected as a training sample to obtain a trained defect segmentation network model;
and inputting the multi-angle image of the product to be detected to the trained defect segmentation network model to obtain a defect segmentation result of the product to be detected, and further obtaining a defect detection result of the product to be detected.
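To illustrate that last step, assume the trained segmentation network has already produced a per-pixel class mask; turning the mask into a defect detection result can then look like the sketch below. The defect class names and the minimum-area threshold are invented for the example.

```python
import numpy as np

# Hypothetical defect classes; 0 is background.
DEFECT_NAMES = {0: "background", 1: "crack", 2: "scratch", 3: "stain"}

def defect_report(mask, min_pixels=5):
    """Summarize a per-pixel segmentation mask into a defect result:
    which defect classes are present, and the pixel area of each."""
    found = {}
    for cls in np.unique(mask):
        if cls == 0:
            continue                      # skip background
        area = int((mask == cls).sum())
        if area >= min_pixels:            # ignore speckle noise
            found[DEFECT_NAMES[int(cls)]] = area
    return found

mask = np.zeros((8, 8), dtype=int)
mask[2:4, 2:6] = 1                        # a 2x4 "crack" region
report = defect_report(mask)
```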
Optionally, the defect segmentation network is a PSPNet semantic segmentation network, a SegNet semantic segmentation network, a Fully Convolutional DenseNet semantic segmentation network, or a U-Net semantic segmentation network.
According to a second aspect of the embodiments of the present application, a defect detection system based on multi-angle imaging is provided, which specifically includes:
an original image acquisition module: used for acquiring an original image of the product to be detected;
an original image analysis module: used for obtaining the category information of the product to be detected from the original image, and for obtaining the actual position coordinates of the product to be detected from the original image;
a multi-angle parameter library module: used for establishing a multi-angle parameter library for multi-angle photographing;
a multi-angle image module: used for obtaining a multi-angle image of the product to be detected according to the category information, the actual position coordinates and the multi-angle parameter library;
a defect detection module: used for obtaining a defect detection result of the product to be detected according to the multi-angle image.
According to a third aspect of the embodiments of the present application, there is provided a defect detection terminal, including: a processor and a memory; wherein the memory stores a computer program adapted to be loaded by the processor and to perform a defect detection method based on multi-angle imaging.
According to a fourth aspect of embodiments of the present application, there is provided a computer-readable storage medium having a computer program stored thereon; the computer program is executed by a processor to implement a defect detection method based on multi-angle imaging.
With the defect detection method and system based on multi-angle imaging of the embodiments of the present application, an original image of the product to be detected is acquired first; then the category information of the product to be detected is obtained from the original image, the actual position coordinates of the product to be detected are obtained from the original image, and a multi-angle parameter library for multi-angle photographing is established; next, multi-angle images of the product to be detected are obtained according to the category information, the actual position coordinates and the multi-angle parameter library; finally, the defect detection result of the product to be detected is obtained from its multi-angle images. The method and system realize adaptive intelligent quality inspection under a small-batch, multi-batch production mode, achieve multi-angle imaging of the three-dimensional, multi-surface structure of a product, perform more accurate defect detection on that basis, and solve the problem that the prior art cannot perform multi-angle-imaging-based defect detection on multiple batches of products.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
FIG. 1 shows an example of an image classification network application according to the present application;
FIG. 2 shows an example of an image segmentation application according to the present application;
FIG. 3 is a flowchart illustrating the steps of a defect detection method based on multi-angle imaging according to another embodiment of the present application;
FIG. 4 is a schematic structural diagram of a classification network according to an embodiment of the present application;
FIG. 5 is a schematic structural diagram of a target detection network according to an embodiment of the present application;
FIG. 6 is a schematic structural diagram of a defect segmentation network according to an embodiment of the present application;
FIG. 7 is a schematic flowchart of a defect detection method based on multi-angle imaging according to an embodiment of the present application;
FIG. 8 is a schematic structural diagram of a defect detection system based on multi-angle imaging according to an embodiment of the present application;
FIG. 9 is a schematic structural diagram of a defect detection terminal according to an embodiment of the present application.
Detailed Description
In the process of implementing the present application, the inventors found that as small-batch, multi-batch production becomes more and more common, the specifications, sizes and shapes of products to be detected grow diverse and complex, and product surface defects vary widely in shape. Existing defect detection approaches cannot meet these complex detection requirements. Moreover, the product to be detected often has a three-dimensional, multi-surface structure, so accurate quality inspection in actual detection requires multi-angle imaging.
To solve these problems, the present application discloses an adaptive intelligent quality inspection method and system based on target detection positioning and multi-angle imaging. It realizes intelligent imaging of multiple batches of products to be detected: deep learning methods based on a classification network and a target detection network identify the three-dimensional industrial product and locate its coordinates, and then, guided by a pre-designed adaptive imaging angle parameter library, a hand-eye calibration robot adaptively adjusts its angle to perform multi-angle imaging of the product on the detection table. The application further discloses a method based on a semantic segmentation network that achieves fine-grained segmentation of the surface defects of the product to be detected, thereby realizing defect detection. This technical scheme achieves, in a software-driven way, adaptive intelligent quality inspection of products in a small-batch, multi-batch production mode: the hardware of the quality inspection machine does not need to be frequently re-erected and re-debugged by hand, which greatly reduces the labor cost of online deployment and debugging. It also realizes multi-angle imaging of the three-dimensional, multi-surface structure of the product and performs more accurate defect detection on that basis.
Compared with the prior art, in which products to be detected cannot be shifted or rotated and multi-batch modes require manual adjustment, the invention realizes adaptive imaging of products that are shifted, rotated and produced in multiple batches. Specifically, a classification network classifies products from multiple batches; a target detection network locates the actual position coordinates of the product to be detected; and a semantic segmentation network detects the surface defect regions of the product to be detected.
Also compared with the prior art, whose classification-based quality inspection methods have poor precision and detect only a single type of defect, the semantic-segmentation-based method can detect different types of defects, in particular small and complex ones, and obtains better detection precision and more detailed defect information.
To make the technical solutions and advantages of the embodiments of the present application clearer, exemplary embodiments are described in further detail below with reference to the accompanying drawings. The described embodiments are only a part of the embodiments of the present application, not an exhaustive list. It should be noted that, where there is no conflict, the embodiments in the present application and the features in the embodiments may be combined with each other.
Example 1
To better explain the defect detection method based on multi-angle imaging of this embodiment, the convolutional neural network, image classification network, target detection and image segmentation techniques it adopts are first introduced.
Regarding the convolutional neural network: a convolutional neural network is a feedforward neural network that contains convolutional computation and has a deep structure; it is one of the representative algorithms of deep learning. Convolutional neural networks have feature-learning capability and can perform translation-invariant classification of input information through their hierarchical structure. Their artificial neurons respond to surrounding units within a partial coverage range, giving excellent performance on large-scale image processing. A convolutional neural network contains convolutional layers and pooling layers: the convolutional layer extracts features from small regions of the original input image to obtain a feature map, and the pooling layer further compresses the feature map. Convolutional neural networks are commonly used in visual tasks such as image classification, target detection and semantic segmentation; most such tasks use a convolutional neural network to extract features, build a task-specific structure on top, and then learn from a large number of labeled training samples, continuously adjusting the network parameters during learning so as to minimize prediction error.
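The two layer operations named above can be illustrated with plain NumPy; this is a toy sketch of the operations only, not of any network in this application.

```python
import numpy as np

def conv2d(img, kernel):
    """Valid-mode 2-D convolution: slide the kernel over the image and
    take the elementwise-weighted sum at each position (the feature
    extraction performed by a convolutional layer, here unlearned)."""
    kh, kw = kernel.shape
    H, W = img.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (img[i:i + kh, j:j + kw] * kernel).sum()
    return out

def maxpool2(x):
    """2x2 max pooling: compress each 2x2 block of the feature map to
    its maximum value (the compression performed by a pooling layer)."""
    H, W = x.shape
    return x[:H // 2 * 2, :W // 2 * 2].reshape(H // 2, 2, W // 2, 2).max(axis=(1, 3))
```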
Regarding image classification networks: image classification is a broad research direction in the field of computer vision; its core task is to assign a label to a given image from a given set of categories.
An example image classification network application according to the present application is shown in FIG. 1. Referring to FIG. 1, the image classification model reads the picture's feature information and produces the probability that the picture belongs to each label in the set {cat, dog, hat, mug}. To the computer, the image is a large three-dimensional array of numbers; in this example the image of a cat is 248 pixels wide and 400 pixels high, with 3 color channels: red, green and blue (i.e., RGB). The image therefore contains 248 x 400 x 3 = 297,600 numbers, each an integer between 0 and 255, where 0 represents all black and 255 represents all white. The task of image classification is to turn these numbers into the simple image label "cat".
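The arithmetic in this example is easy to check directly:

```python
import numpy as np

# A 248-wide, 400-high RGB image as the computer sees it: a
# three-dimensional array of 0-255 integers, laid out as
# (height, width, channels).
img = np.zeros((400, 248, 3), dtype=np.uint8)

n_numbers = img.size          # total count of stored numbers
assert n_numbers == 248 * 400 * 3
```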
Regarding target detection: target detection is a computer vision technique for identifying specific objects in an image together with their positions; the algorithm must determine not only the category of each object in the image but also its position coordinates. Position information in target detection generally comes in two formats, both taking the upper-left corner of the picture as the origin (0, 0). One is the min/max corner format (xmin, ymin, xmax, ymax), where xmin and ymin are the minimum x and y coordinates of the box and xmax and ymax are the maximums. The other is the center-point format (x_center, y_center, w, h), where x_center and y_center are the coordinates of the center of the detection box and w and h are its width and height. Conventional target detection, however, is insufficient for detecting surface defects of products in real scenes: the product to be detected is not laid flat, its surface is an irregular curved surface, and the photographed product often shows some angular deviation or interference from other faces. To address this, the embodiment of the present application adopts the rotated-region proposal network RRPN, originally developed for text detection: it generates angled candidate regions by placing rotated candidate boxes of different proportions, and uses the generated candidate-region angle to compute the actual deflection angle of the product to be detected, so as to better fit real product surface defect regions and enable adaptive shooting by the hand-eye calibration robot.
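The two box formats described above are interconvertible; a small helper pair (illustrative code, not part of the patent) makes the relationship explicit.

```python
def corners_to_center(xmin, ymin, xmax, ymax):
    """(xmin, ymin, xmax, ymax) -> (x_center, y_center, w, h)."""
    w, h = xmax - xmin, ymax - ymin
    return xmin + w / 2, ymin + h / 2, w, h

def center_to_corners(xc, yc, w, h):
    """(x_center, y_center, w, h) -> (xmin, ymin, xmax, ymax)."""
    return xc - w / 2, yc - h / 2, xc + w / 2, yc + h / 2

box = corners_to_center(10, 20, 50, 60)   # a 40 x 40 box centered at (30, 40)
```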
With respect to image segmentation, image segmentation is a technique and process that divides an image into several specific regions with unique properties and extracts an object of interest. The image segmentation is a task of distributing semantic labels to each pixel in an image, so that the features in the same sub-region have certain similarity, and the features of different sub-regions show obvious differences. From a mathematical point of view, image segmentation is a process of dividing an image into mutually disjoint regions.
An example image segmentation application according to the present application is shown in fig. 2. As shown in fig. 2, the objects, horses and people in the picture are segmented and identified with different colors. Through image segmentation, objects in the image can be simply and clearly identified, the image can be greatly simplified, and the main attention objects can be highlighted.
Based on the technical understanding, fig. 3 is a flowchart illustrating the steps of a defect detection method based on multi-angle imaging according to an embodiment of the present application.
As shown in fig. 3, the defect detection method based on multi-angle imaging of the present embodiment specifically includes the following steps:
s101: and acquiring an original image of the product to be detected.
S102: obtaining the class information of a product to be detected according to the original image; acquiring the actual position coordinates of a product to be detected according to the original image; and establishing a multi-angle parameter library for multi-angle photographing.
S103: and obtaining a multi-angle image of the product to be detected according to the category information, the actual position coordinate and the multi-angle parameter library.
S104: and obtaining a defect detection result of the product to be detected according to the multi-angle image.
In detail, in S101, an original image of the product to be inspected is acquired. First, several batches of products to be inspected are placed on the inspection table, and the hand-eye calibrated robot takes an initial photograph, which yields the original image.
In step S102, the method for obtaining the category information of the original image according to the original image specifically includes the following steps:
first, a classification network model is constructed.
Specifically, a deep-learning-based classification network is used to obtain the category information of the different batches from the pictures of multiple batches of products to be inspected.
A schematic structural diagram of a classification network according to an embodiment of the present application is shown in fig. 4. As shown in fig. 4, the structure is characterized in that:
(1) a 3 channel RGB raw image of size 448 x 448 pixels is input.
(2) The input image is respectively subjected to convolution operation and down-sampling operation for 3 times, and then a compressed image feature map is obtained.
The convolution layers process the picture so that more robust features can be learned. Down-sampling uses max pooling, taking the maximum value of the feature points in each neighborhood; max pooling reduces the parameter scale and complexity, and finally yields a feature map of reduced size and increased channel count.
(3) The feature map produced by the convolutional layers is fed into a fully-connected layer, which maps the learned features to the sample space; the softmax activation function then computes the probability of each class. Since a 1×1 convolution kernel does not change the spatial size of the feature map, a 1×1 convolution layer is used in place of the fully-connected layer.
(4) Batch Normalization before each convolutional layer accelerates training and improves the generalization ability of the network, and a ReLU activation function after each convolutional layer increases the non-linearity between convolutional layers and helps prevent overfitting.
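The shape arithmetic of this architecture can be checked with a small helper: assuming the convolutions use 'same' padding (size-preserving) and each of the 3 down-sampling stages is a stride-2 max pooling, the 448-pixel input shrinks by half per stage. This is my own illustrative sketch of that calculation, not code from the patent:

```python
def feature_map_size(input_size, num_downsamples, pool_stride=2):
    # Each stride-2 max pooling halves the spatial resolution; the
    # convolutions are assumed to use 'same' padding (size unchanged).
    size = input_size
    for _ in range(num_downsamples):
        size //= pool_stride
    return size

# 448 -> 224 -> 112 -> 56 after the three conv + down-sampling stages
compressed = feature_map_size(448, 3)
```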
Secondly, training the constructed classification network model by taking the manual marking data of the product to be detected as a training sample, and finally obtaining the trained classification network model.
The method specifically comprises the following steps:
the convolutional neural network is used as a classification network, and one characteristic of the convolutional neural network is that a large amount of manual labeling data is required to be used as a training sample.
Acquiring and labeling training data for the classification network: first, a large number of defect sample pictures are shot with the hand-eye calibrated robot; then the defect photos are manually labeled, naming the images of each category with different IDs under the corresponding category; the labeled data are then divided into a training set, a test set and a validation set in a 5:1:1 ratio for classification network training. In addition, the region of the product to be inspected in each image is marked with a rectangular box, and the box coordinates are recorded for the target detection network.
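The 5:1:1 split described above can be sketched as follows (a hypothetical helper of my own; the patent does not specify the splitting code):

```python
import random

def split_dataset(samples, ratios=(5, 1, 1), seed=0):
    # Shuffle, then split into train/test/validation sets according to
    # the 5:1:1 ratio used in the text.
    rng = random.Random(seed)
    shuffled = samples[:]
    rng.shuffle(shuffled)
    total = sum(ratios)
    n_train = len(shuffled) * ratios[0] // total
    n_test = len(shuffled) * ratios[1] // total
    train = shuffled[:n_train]
    test = shuffled[n_train:n_train + n_test]
    val = shuffled[n_train + n_test:]
    return train, test, val
```

Shuffling before splitting avoids batch-order bias when, as here, samples arrive grouped by product batch.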
After obtaining a training sample and obtaining a classification network, starting inputting the training sample to train the classification network, wherein the related operations in the training process are as follows:
firstly, the transforms Resize method is used to resize the training data to 600 × 600 pixels; then each image is randomly flipped horizontally, a 448 × 448 crop is taken at random, and the training data are loaded;
in the training process, the batch size is set to 16 and the initial learning rate to 0.001;
during training, stochastic gradient descent with momentum is used. Other optimization methods such as Adagrad, RMSprop or Adam could also be adopted; in this training method, stochastic gradient descent with momentum achieves a comparatively better training effect;
during training, the NLLLoss loss function is used;
during the training process, the test is performed every 5 epochs of training until the model is trained to converge.
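The momentum update used by the optimizer above can be written out explicitly. This is a generic sketch of SGD with momentum (not the patent's code), using the learning rate 0.001 stated in the text:

```python
def sgd_momentum_step(params, grads, velocity, lr=0.001, momentum=0.9):
    # One stochastic-gradient-descent-with-momentum update:
    #   v <- momentum * v - lr * grad
    #   p <- p + v
    new_params, new_velocity = [], []
    for p, g, v in zip(params, grads, velocity):
        v = momentum * v - lr * g
        new_velocity.append(v)
        new_params.append(p + v)
    return new_params, new_velocity
```

The velocity term accumulates past gradients, which damps oscillation across noisy mini-batches.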
And finally, based on the trained classification network, performing class prediction on the product to be detected by using the classification network based on deep learning. And inputting the original image of the product to be detected to the trained classification network model to obtain the class information of the product to be detected.
Specifically, after the classification network is trained, it is used to predict the product to be inspected: it suffices to load the trained classification model (no optimizer or loss function is needed) and feed the image shot by the hand-eye calibrated robot into it for prediction.
After the classification network obtains the category information of a plurality of batches of products to be detected, the position coordinates of the products to be detected on the detection platform are required to be obtained so as to realize self-adaptive multi-angle imaging, and therefore a positioning network based on target detection is required to be further introduced to obtain the actual position coordinates of the products to be detected.
In step S102, obtaining the actual position coordinates of the product to be detected according to the original image, specifically comprising the following steps:
firstly, a positioning network model based on target detection is constructed.
The positioning network based on target detection in the embodiment of the present application adopts an RRPN target detection network based on text detection, and fig. 5 shows a schematic structural diagram of the target detection network according to the embodiment of the present application.
As shown in fig. 5, the network structure is characterized in that:
(1) a 3-channel RGB image of size 448 x 448 pixels is input.
(2) The input image passes through 5 convolutional layers and 2 downsampling layers, the 2 downsampling layers are respectively behind the first convolutional layer and the second convolutional layer, and then an image feature map is formed.
(3) The feature map then passes through the RRPN module, which contains anchors with different sizes, aspect ratios, and six different rotation angles (in RRPN these are −π/6, 0, π/6, π/3, π/2 and 2π/3), and finally generates candidate regions with inclination angles, so that the candidate boxes can more accurately fit the orientation of the product region.
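The rotated-anchor enumeration can be sketched as below. The six angles follow the RRPN design; the scales and aspect ratios are illustrative placeholders, since the patent does not list its exact values:

```python
import math

def make_rotated_anchors(scales=(8, 16, 32),
                         ratios=(0.5, 1.0, 2.0),
                         angles=(-math.pi / 6, 0.0, math.pi / 6,
                                 math.pi / 3, math.pi / 2, 2 * math.pi / 3)):
    # Enumerate every (scale, ratio, angle) combination; each tuple
    # parameterizes one rotated candidate box at a feature-map location.
    return [(s, r, a) for s in scales for r in ratios for a in angles]
```

With 3 scales, 3 ratios and 6 angles, each feature-map location proposes 54 rotated candidate boxes.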
(4) The feature map then passes through an RRoI Pooling layer, whose inputs are the output of conv5 and the region proposals (about 2000 of them); the RoI pooling layer extracts the features corresponding to each RoI from the feature map.
(5) Finally, the features pass through a 4096-dimensional fully-connected layer with two outputs: one is the classification output and the other is the regression output.
Secondly, training the constructed target detection positioning network model by taking the manual marking data of the product to be detected as a training sample to obtain the trained target detection network model.
The method specifically comprises the following steps:
a convolutional neural network is used as a target detection network, and one characteristic of the convolutional neural network is that a large amount of manually labeled data is required to be used as a training sample.
Acquiring and labeling training data of a target detection network, firstly, shooting a large number of defect sample pictures by using a hand-eye calibration robot; and then, manually labeling the defect photos: naming the images of each different category as different IDs under the corresponding category; then, dividing the labeled data into a training set, a testing set and a verification set according to the proportion of 5:1:1, and training the labeled data for the target detection network.
After a training sample is obtained and a target detection network is obtained, the training sample is input to train the target detection network, and the related operations in the training process are as follows:
First, the Resize method is used to resize the training data to 600 × 600 pixels; then each image is randomly flipped horizontally, a 448 × 448 crop is taken at random, and the training data are loaded.
In the training process, several loss functions were tested for the regression output of the product position, including focal loss, shrinkage loss, triplet loss and repulsion loss; the focal loss function, which gave the best test results, was finally selected as the loss function.
During training, for any one RoI, calculating the softmax loss value of the RoI belonging to the background area and the regression value of the RoI not belonging to the background.
In the training process, the training strategy adopts 4-step alternating training until the loss function is converged.
And then, based on the trained target detection network, positioning the product to be detected by using the network based on the target detection. And inputting an original image of the product to be detected to a trained positioning network model based on target detection to obtain the coordinate of the central point and the direction angle information of the product to be detected.
And finally, obtaining the actual position coordinate of the product to be detected based on the camera calibration principle according to the center point coordinate and the direction angle information of the product to be detected.
The specific process is as follows:
1) A training sample is input into the trained target detection network, which outputs the regressed center-point coordinates of the product to be inspected and the inclination angle θ′ of its direction angle.
2) And obtaining the actual position coordinates of the product to be detected on the detection platform from the space points and the corresponding pixel points according to the camera calibration principle.
The camera calibration principle is as follows:
A camera coordinate system is established with a point O as the origin. A point Q(X, Y, Z) in camera-coordinate space is projected by a ray onto the image plane at point q(x, y, f). The image plane is perpendicular to the optical z-axis, at distance f from the center of projection (f is the focal length of the camera). From the similar-triangle relations: x/f = X/Z and y/f = Y/Z, i.e., x = fX/Z and y = fY/Z. The process of mapping the point Q with coordinates (X, Y, Z) to the point q with coordinates (x, y) on the projection plane is called the projective transformation.
The above transformation from Q to q can be expressed in homogeneous coordinates with a 3 × 3 matrix as q = MQ, where q = (x, y, 1)ᵀ (up to scale) and Q = (X, Y, Z)ᵀ. The perspective projection matrix M, reconstructed here in its standard pinhole form with principal point (cx, cy), is:

    M = | f   0   cx |
        | 0   f   cy |
        | 0   0   1  |
the matrix M is called an intrinsic parameter matrix of the camera, and the units are all physical dimensions.
By the above method, points in the camera coordinate system, expressed in physical units, can be converted to the image coordinate system [i.e., (X, Y, Z) → (x, y)].
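The projection step of this calibration principle is a one-liner. A minimal sketch of the pinhole relation x = fX/Z, y = fY/Z derived above (illustrative code, not from the patent):

```python
def project_point(X, Y, Z, f):
    # Pinhole projection of a camera-space point (X, Y, Z) onto the
    # image plane at focal length f: x = f*X/Z, y = f*Y/Z.
    if Z == 0:
        raise ValueError("point lies in the plane of the camera center (Z = 0)")
    return (f * X / Z, f * Y / Z)
```

Inverting this relation at a known table height is what lets the detected pixel coordinates be mapped back to positions on the inspection table.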
In step S102, a multi-angle parameter library for multi-angle photographing is established. The multi-angle parameter library comprises multi-angle parameters of different types of products to be detected, and the multi-angle parameters comprise initial angles, front shooting angles, left shooting angles and right shooting angles of original images in one-to-one correspondence.
In step S103, a multi-angle image of the product to be detected is obtained according to the type information of the original image, the actual position coordinates of the product to be detected and the multi-angle parameter library. The method specifically comprises the following steps:
firstly, calling multi-angle coordinates required by multi-angle photographing imaging under a product class corresponding to a multi-angle parameter library according to the product class information of a product to be detected.
And then, calculating a plurality of angular displacements of the hand-eye calibration robot for photographing and imaging according to the actual position coordinates of the product to be detected and the multi-angle coordinates required by multi-angle photographing and imaging.
And finally, the hand-eye calibration robot carries out self-adaptive multi-angle imaging shooting according to the plurality of angular displacements of the shooting imaging to obtain a multi-angle image of the product to be detected.
Before the adaptive multi-angle imaging is realized, we complete the following three tasks through step S102, summarized as:
(1) predicting different types of information under multiple batches by utilizing a classification network, namely firstly enabling a computer to determine which product to be detected a picture shot by a current hand-eye calibration robot belongs to;
(2) after the category of the current product to be detected is determined, the coordinate of the central point of the current product to be detected on the picture is regressed by using a positioning neural network based on target detection, and then the actual position of the product to be detected on the detection table is calculated by using a camera calibration principle.
(3) And establishing a multi-angle parameter library comprising various categories of products to be detected. Aiming at different products to be detected, an initial shooting angle and multi-angle parameters are preset by taking the lower left corner of a detection table as an original point (unit cm), so that a multi-angle parameter library of the products to be detected is formed.
The initial shooting angle is used for taking the initial photo consumed by the classification network; the multi-angle parameters are preset per product category, since different products have different three-dimensional structures. Table 1 is an example of a multi-angle parameter library.
Product category          Initial shooting angle   Front shooting angle   Left shooting angle   Right shooting angle
Product to be inspected 1 (25, 25, 45°)            (25, 25, 45°)          (−25, 18, −135°)      (38, 24, 25°)
Product to be inspected 2 (30, 20, 45°)            (32, 22, 45°)          (−28, 23, −140°)      (35, 22, 30°)
Product to be inspected n (28, 26, 45°)            (30, 26, 45°)          (−20, 25, −145°)      (39, 20, 20°)

TABLE 1 Multi-angle parameter library example
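An in-memory version of the parameter library can be as simple as a dictionary keyed by the predicted category. The coordinates are (x cm, y cm, angle°) relative to the lower-left corner of the inspection table, using the illustrative values from Table 1 (the key names are my own):

```python
# Minimal multi-angle parameter library, keyed by product category.
ANGLE_LIBRARY = {
    "product_1": {"initial": (25, 25, 45), "front": (25, 25, 45),
                  "left": (-25, 18, -135), "right": (38, 24, 25)},
    "product_2": {"initial": (30, 20, 45), "front": (32, 22, 45),
                  "left": (-28, 23, -140), "right": (35, 22, 30)},
}

def lookup_angles(category):
    # Return all preset shooting poses for a predicted category.
    return ANGLE_LIBRARY[category]
```

At runtime the classification network's predicted category selects the row, and each pose in it is handed to the robot in turn.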
In S103, the multi-angle image of the product to be inspected is obtained from the category information of the original image, the actual position coordinates of the product, and the multi-angle parameter library. The calculation proceeds as follows:

Assume that the preset fixed placing position of the product on the inspection table is (a, b, θ), that the shooting pose required to image a given surface of the product, taken from the parameter library, is (m, n, φ), and that the actual position located by the target detection network is (x, y, θ′).

The deviation between the current placing position of the product and the preset position is then (a − x, b − y, θ − θ′).

Given that the same product is always placed on its designated face (i.e., it is only placed horizontally, never vertically, with only a slight deviation in the horizontal direction), the coordinate deviation through which the hand-eye calibrated robot needs to move equals the calculated deviation (a − x, b − y) between the actual and preset placing positions, and the rotation is likewise corrected using the inclination angle θ′ of the direction angle returned by the target detection network, i.e., by θ − θ′.

Finally, the pose to which the hand-eye calibrated robot moves is (m − (a − x), n − (b − y), φ − (θ − θ′)), which drives the robot to capture the multi-angle images.
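The pose correction above can be sketched as a small function. This is a hedged reconstruction of the (garbled) derivation, shifting the library pose by the product's deviation from its preset position:

```python
def robot_pose(preset, library_angle, actual):
    # preset:        (a, b, theta)   - preset placing position of the product
    # library_angle: (m, n, phi)     - shooting pose from the parameter library
    # actual:        (x, y, theta2)  - position located by the detection network
    a, b, theta = preset
    m, n, phi = library_angle
    x, y, theta2 = actual
    # The library pose assumes the product sits at its preset position,
    # so shift it by the detected deviation (a - x, b - y, theta - theta2).
    return (m - (a - x), n - (b - y), phi - (theta - theta2))
```

For example, a product detected 1 cm right of and rotated 5° past its preset pose moves the right-side shooting pose by the same offsets.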
In step S104, a defect detection result of the product to be detected is obtained according to the multi-angle image of the product to be detected. After the self-adaptive multi-angle imaging shooting, the defect images of the product to be detected under multiple angles are obtained, and the defect segmentation and detection are carried out on the defect images under multiple angles by adopting a defect segmentation network based on semantic segmentation.
The defect detection process according to the defect segmentation network specifically comprises the following steps:
first, a defect segmentation network model is constructed.
The embodiment of the application adopts a multilayer convolution segmentation network PSPNet to segment target defects from the photos. A schematic structural diagram of a defect segmentation network according to an embodiment of the present application is shown in fig. 6.
As shown in fig. 6, the structure is characterized in that:
(1) fig. 6(a) is an input picture of 3 channels RGB with an image size of 448 × 448 pixels.
(2) Fig. 6(b) is a pre-trained residual network with dilated convolutions, comprising a series of convolution and pooling operations; the convolutions extract image features, the pooling compresses them, and the output feature map is 1/8 the size of the input image.
(3) In fig. 6(c), a pyramid pooling module aggregates context information over 4 pyramid levels; each of the four level feature maps passes through a convolution layer, Batch Normalization and a ReLU activation function, which helps prevent overfitting and improves the generalization ability of the network.
(4) Each of the four level feature maps is then up-sampled (bilinear interpolation) back to the spatial size of the pyramid pooling module's input.
(5) The four level feature maps are concatenated (concat) with the input of the pyramid pooling module, i.e., the pyramid feature maps are combined with the original feature map, and the final predicted feature map is obtained through a convolution layer.
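The pyramid pooling described in steps (3)–(5) reduces the feature map to several bin sizes before up-sampling. A minimal NumPy sketch of the pooling stage, assuming the standard PSPNet bin sizes 1, 2, 3 and 6 (the patent does not list its exact levels):

```python
import numpy as np

def adaptive_avg_pool2d(fmap, out_size):
    # Average-pool a (H, W) feature map down to (out_size, out_size) bins.
    h, w = fmap.shape
    pooled = np.empty((out_size, out_size))
    for i in range(out_size):
        for j in range(out_size):
            r0, r1 = i * h // out_size, (i + 1) * h // out_size
            c0, c1 = j * w // out_size, (j + 1) * w // out_size
            pooled[i, j] = fmap[r0:r1, c0:c1].mean()
    return pooled

def pyramid_pool(fmap, levels=(1, 2, 3, 6)):
    # PSPNet-style 4-level pyramid: one pooled map per bin size.
    return [adaptive_avg_pool2d(fmap, k) for k in levels]
```

The 1×1 level captures the global context of the whole map; the finer levels preserve increasingly local context.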
Secondly, training the constructed defect segmentation network model by taking the manual marking data of the product to be detected as a training sample to obtain the trained defect segmentation network model.
The method specifically comprises the following steps:
in the defect segmentation network, a convolutional neural network is used, and one characteristic of the convolutional neural network is that a large amount of manual labeling data is required to be used as a training sample. Firstly, a hand-eye calibration robot is used for shooting a large number of defect sample pictures; and then, manually labeling the defect photos: naming the images of each different category as different IDs under the corresponding category; these annotation data were then divided into a training set, a test set, and a validation set on a 5:1:1 scale. Wherein, each defect in the photo is observed manually, the defect area is identified according to the pixel level, and the final form of the label is a mask image with the same size as the original photo and is used for a defect segmentation network.
After a training sample is obtained and a defect segmentation network is constructed, the training sample is input to train the defect segmentation network, and the related operations in the training process are as follows:
First, the training data are loaded and randomly shuffled, and the training photos and labeled defect mask images are scaled to 448 × 448.
In the training process, a focal loss function is selected for the segmentation loss, and an auxiliary loss function is added with a weight of 0.4 to balance the final loss against the auxiliary loss.
In the training process, an Adam optimizer is used with an initial learning rate of 0.01, and the model is evaluated after each epoch using class-wise intersection-over-union (IoU) until it converges.
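The loss combination above can be sketched as follows. The binary focal-loss form is the standard one (shown here per prediction; the patent does not give its exact formulation), and the 0.4 auxiliary weight is the value stated in the text:

```python
import math

def focal_loss(p, y, gamma=2.0):
    # Binary focal loss for one prediction: p is the predicted
    # probability of the positive class, y the label in {0, 1}.
    pt = p if y == 1 else 1.0 - p
    return -((1.0 - pt) ** gamma) * math.log(pt)

def total_loss(main, aux, aux_weight=0.4):
    # Final loss = main segmentation loss + 0.4 * auxiliary loss.
    return main + aux_weight * aux
```

The (1 − pt)^γ factor down-weights easy pixels, which matters here because defect pixels are vastly outnumbered by background pixels.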
And finally, inputting the multi-angle image of the product to be detected to the trained defect segmentation network model to obtain a defect segmentation result of the product to be detected, and further obtaining a defect detection result of the product to be detected.
Specifically, in order to complete the detection of the surface defects of the product to be detected, the following operations are required:
1) Load the trained semantic segmentation network, and scale the original images of the product, shot by the hand-eye calibrated robot at multiple surface angles, to 448 × 448.
2) And (4) sending the original image into a defect segmentation network based on semantic segmentation, and finally obtaining a predicted probability distribution map.
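Turning the predicted probability map into a defect mask is then a thresholding step. An illustrative sketch (the 0.5 threshold is my assumption, not stated in the patent):

```python
import numpy as np

def prob_map_to_mask(prob_map, threshold=0.5):
    # Mark pixels whose predicted defect probability exceeds the
    # threshold as defective (1), all others as background (0).
    return (prob_map > threshold).astype(np.uint8)
```

The resulting binary mask localizes each defect region at pixel level, matching the mask-image label format used for training.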
In other embodiments, besides the PSPNet semantic segmentation network, the defect segmentation network may be a SegNet, a Fully Convolutional DenseNet, or a U-Net semantic segmentation network.
Described further, a flowchart of a defect detection method based on multi-angle imaging according to another embodiment of the present application is shown in fig. 7.
Referring to the quality inspection flow chart of fig. 7: first, multiple batches of products to be inspected are placed on the inspection table and an initial photograph is taken with the hand-eye calibrated robot; second, the category of the current product is judged with the deep-learning-based classification network, and its actual position coordinates are located with the target-detection-based positioning network; then, multi-angle imaging photographs are taken with the adaptive multi-angle imaging system; finally, defect segmentation is performed with the semantic-segmentation-based defect segmentation network, completing multi-angle adaptive defect detection of the product surface.
In the classification Network, the target detection Network and the semantic segmentation Network, a convolutional neural Network is adopted in the embodiment of the application, and a Capsule Network (Capsule Network) and the like can be used in other embodiments.
The target detection network in the embodiment of the present application adopts an RRPN network, and other mainstream target detection networks may also be used in other embodiments, for example: r3Det network, Gliding Vertex network, RSDet network, etc.
The defect segmentation network in the embodiment of the present application adopts the PSPNet semantic segmentation network; other embodiments may adopt other mainstream semantic segmentation networks, for example: SegNet, Fully Convolutional DenseNet and U-Net.
The product to be inspected in the present application is not limited to any particular product; the method is equally applicable to any product with a three-dimensional structure that requires multi-angle imaging, for example: wood boards, power strips, packaging boxes and the like.
The defect detection method of the embodiment of the application comprises the following steps. First, multiple batches of products to be inspected are placed on the inspection table, the hand-eye calibrated robot takes an initial photograph, and a deep-learning-based classification network identifies the categories of the batches, determining the category of the current product. Then, the imaging angle parameters for the corresponding product category are retrieved, and the target detection network estimates the position of the current product to determine its actual position coordinates. Next, the pose of the hand-eye calibrated robot is calculated with the camera calibration algorithm, combining the estimated actual position coordinates of the product with the retrieved angle parameters, realizing adaptive multi-angle imaging of the product's surface defects. Finally, defect segmentation prediction is performed on the obtained multi-angle images with the semantic-segmentation-based defect segmentation network.
The beneficial effects of the defect detection method based on multi-angle imaging in the embodiment of the application can be summarized as follows:
1. and a classification network method based on deep learning is used for realizing the classification task of multiple batches of products to be detected.
2. And determining the position coordinates of the product to be detected on the detection table according to the image of the product to be detected by a target detection positioning method.
3. The self-adaptive multi-angle imaging system based on deep learning obtains the category information of a plurality of batches of products to be detected according to a classification network, and then calls a plurality of self-adaptive shooting angles under different categories in multi-angle parameters; and then the actual position coordinates of the product to be detected are obtained according to the target detection network, and finally the hand-eye calibration robot can realize self-adaptive multi-angle shooting imaging.
4. By the defect segmentation method based on semantic segmentation, defect areas are segmented from the shot multi-angle pictures, so that the quality inspection result of the product to be inspected is more accurate.
Example 2
For details not disclosed in the defect detection system of this embodiment, please refer to the defect detection method based on multi-angle imaging in other embodiments.
FIG. 8 is a schematic structural diagram of a multi-angle imaging-based defect detection system according to an embodiment of the present application. As shown in fig. 8, the defect detection system based on multi-angle imaging provided by this embodiment includes: a raw image acquisition module 10, a raw image analysis module 20, a multi-angle parameter library module 30, a multi-angle image module 40, and a defect detection module 50.
As shown in FIG. 8, the defect detection system based on multi-angle imaging has the following specific structure:
the original image acquisition module 10: used for acquiring an original image of a product to be detected.
Raw image analysis module 20: the system is used for acquiring the class information of a product to be detected according to the original image; and the method is used for acquiring the actual position coordinates of the product to be detected according to the original image.
Multi-angle parameter library module 30: the method is used for establishing a multi-angle parameter library for multi-angle photographing.
Multi-angle image module 40: and the multi-angle image acquisition module is used for acquiring the multi-angle image of the product to be detected according to the category information, the actual position coordinate and the multi-angle parameter library.
The defect detection module 50: and obtaining a defect detection result of the product to be detected according to the multi-angle image of the product to be detected.
By adopting the defect detection system based on multi-angle imaging in the embodiment of the application, the original image of the product to be detected is obtained firstly; then, acquiring the class information of the product to be detected according to the original image, acquiring the actual position coordinate of the product to be detected according to the original image and establishing a multi-angle parameter library for multi-angle photographing; then, obtaining a multi-angle image of the product to be detected according to the type information of the original image, the actual position coordinate of the product to be detected and a multi-angle parameter library; and finally, obtaining a defect detection result of the product to be detected according to the multi-angle image of the product to be detected. The method and the device realize the self-adaptive intelligent quality inspection of the products in the production mode of small batches and multiple batches of products, realize the multi-angle imaging of the three-dimensional multi-surface structure of the products and carry out more accurate defect detection based on the multi-angle imaging, and solve the problem that the defect detection of the multiple batches of products based on the multi-angle imaging cannot be realized in the prior art.
Example 3
Fig. 9 is a schematic structural diagram of a defect detection terminal according to an embodiment of the present application. As shown in Fig. 9, the terminal provided in this embodiment comprises a memory 301, a processor 302, and a computer program stored in the memory 301 and configured to be executed by the processor 302 to implement the multi-angle imaging-based defect detection method provided by any of the foregoing embodiments.
Example 4
This embodiment further provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program implements the multi-angle imaging-based defect detection method provided by any of the foregoing embodiments.
With the defect detection terminal and computer-readable storage medium of these embodiments, the original image of the product to be detected is acquired first; the category information and the actual position coordinates of the product are then obtained from the original image, and a multi-angle parameter library for multi-angle photographing is established; next, the multi-angle image of the product is obtained according to the category information, the actual position coordinates and the multi-angle parameter library; finally, the defect detection result of the product is obtained from the multi-angle image. This achieves adaptive intelligent quality inspection in a small-batch, multi-batch production mode, realizes multi-angle imaging of the three-dimensional, multi-surface structure of the product, carries out more accurate defect detection on that basis, and solves the problem that the prior art cannot perform defect detection of multi-batch products based on multi-angle imaging.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While the preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all alterations and modifications as fall within the scope of the application.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (10)

1. A defect detection method based on multi-angle imaging is characterized by comprising the following steps:
acquiring an original image of a product to be detected;
obtaining the category information of the product to be detected according to the original image; obtaining the actual position coordinates of the product to be detected according to the original image;
establishing a multi-angle parameter library for multi-angle photographing;
obtaining a multi-angle image of the product to be detected according to the category information, the actual position coordinates and the multi-angle parameter library;
and obtaining a defect detection result of the product to be detected according to the multi-angle image.
2. The multi-angle imaging-based defect detection method as claimed in claim 1, wherein the obtaining of the category information of the product to be detected according to the original image specifically comprises:
constructing a classification network model;
training the classification network model by taking the manual labeling data of the product to be detected as a training sample to obtain a trained classification network model;
and inputting the original image of the product to be detected into the trained classification network model to obtain the category information of the product to be detected.
3. The multi-angle imaging-based defect detection method as claimed in claim 1, wherein the obtaining of the actual position coordinates of the product to be detected according to the original image specifically comprises:
constructing a positioning network model based on target detection;
training the positioning network model by taking the manual labeling data of the product to be detected as a training sample to obtain a trained positioning network model;
inputting the original image of the product to be detected into the trained positioning network model to obtain the center-point coordinates and the direction-angle information of the product to be detected;
and obtaining the actual position coordinates of the product to be detected from the center-point coordinates and the direction-angle information based on the camera calibration principle.
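The mapping from image coordinates to actual position coordinates in claim 3 relies on the standard camera calibration principle. A minimal sketch, assuming a pinhole camera with a known intrinsic matrix K looking straight down at a work plane a height h below it (both K and h are hypothetical example values, not parameters from the patent):

```python
import numpy as np

# Pinhole model: back-project a pixel onto the work plane Z = 0 located
# a known height h below a downward-looking camera.
# Assumed intrinsics; a real system obtains K from camera calibration.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
h = 0.5  # camera height above the work plane, metres (assumed)

def pixel_to_plane(u, v, K=K, h=h):
    """Back-project pixel (u, v) through K onto the plane at depth h."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])  # viewing ray direction
    return (ray * h / ray[2])[:2]                   # (X, Y) on the plane

center_xy = pixel_to_plane(320.0, 240.0)  # principal point -> plane origin
```

A real deployment would additionally apply the direction-angle information to recover the product's orientation on the plane.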
4. The multi-angle imaging-based defect detection method as claimed in claim 1, wherein the multi-angle parameter library comprises multi-angle parameters for each category of product to be detected, and the multi-angle parameters comprise, in one-to-one correspondence with the original images, an initial angle, a front shooting angle, a left shooting angle and a right shooting angle.
5. The multi-angle imaging-based defect detection method as claimed in claim 1, wherein the obtaining of the multi-angle image of the product to be detected according to the category information, the actual position coordinates and the multi-angle parameter library specifically comprises:
retrieving, from the multi-angle parameter library according to the category information of the product to be detected, the multi-angle coordinates required for multi-angle photographing of the corresponding product category;
calculating a plurality of angular displacements of a hand-eye-calibrated robot for photographing according to the actual position coordinates of the product to be detected and the multi-angle coordinates required for multi-angle photographing;
and performing, by the hand-eye-calibrated robot, adaptive multi-angle imaging according to the plurality of angular displacements to obtain the multi-angle image of the product to be detected.
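In its simplest representation, the displacement calculation of claim 5 reduces to differencing the library's nominal shooting angles against the product's measured direction angle. The sketch below assumes angles expressed in degrees in a common frame; a real hand-eye-calibrated robot would convert these displacements into its own tool-space motions:

```python
# Sketch: turn the library's nominal shooting angles into relative robot
# moves, given the product's measured direction angle (degrees assumed).

def angular_displacements(product_angle_deg, target_angles_deg):
    """Displacement the robot must move for each configured view,
    normalised to the range [-180, 180) degrees."""
    out = {}
    for name, target in target_angles_deg.items():
        d = (target - product_angle_deg + 180.0) % 360.0 - 180.0
        out[name] = d
    return out

# Product measured at 30 degrees; three views configured in the library.
moves = angular_displacements(30.0, {"front": 0.0, "left": -45.0, "right": 45.0})
```

The modulo normalisation keeps each commanded rotation within a half turn, so the robot always takes the shorter path to a view.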
6. The multi-angle imaging-based defect detection method as claimed in claim 1, wherein the obtaining of the defect detection result of the product to be detected according to the multi-angle image of the product to be detected specifically comprises:
constructing a defect segmentation network model;
training the defect segmentation network model by taking the manual labeling data of the product to be detected as a training sample to obtain a trained defect segmentation network model;
and inputting the multi-angle image of the product to be detected to the trained defect segmentation network model to obtain a defect segmentation result of the product to be detected, and further obtaining a defect detection result of the product to be detected.
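The final step of claim 6, deriving a defect detection result from the defect segmentation result, can be as simple as thresholding the segmented defect area. The mask layout and the `min_area` threshold below are illustrative assumptions, not values from the patent:

```python
import numpy as np

def defect_result(mask, min_area=5):
    """mask: binary array from the segmentation network, 1 = defect pixel.
    Report defective only if the defect region exceeds min_area pixels."""
    area = int(np.count_nonzero(mask))
    return {"defective": area >= min_area, "defect_pixels": area}

clean = np.zeros((8, 8), dtype=np.uint8)   # no defect pixels
scratched = clean.copy()
scratched[2, 1:7] = 1                      # a 6-pixel simulated scratch

r1 = defect_result(clean)
r2 = defect_result(scratched)
```

In practice the threshold would be tuned per product category so that sensor noise in the segmentation output does not trigger false rejects.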
7. The multi-angle imaging-based defect detection method as claimed in claim 6, wherein the defect segmentation network is a PSPNet semantic segmentation network, a SegNet semantic segmentation network, a fully convolutional DenseNet semantic segmentation network, or a U-Net semantic segmentation network.
8. A defect detection system based on multi-angle imaging, characterized by specifically comprising:
an original image acquisition module: configured to obtain an original image of a product to be detected;
an original image analysis module: configured to obtain the category information of the product to be detected from the original image, and to obtain the actual position coordinates of the product to be detected from the original image;
a multi-angle parameter library module: configured to establish a multi-angle parameter library for multi-angle photographing;
a multi-angle image module: configured to obtain the multi-angle image of the product to be detected according to the category information, the actual position coordinates and the multi-angle parameter library;
a defect detection module: configured to obtain the defect detection result of the product to be detected according to the multi-angle image.
9. A defect detection terminal, comprising: a processor and a memory; wherein the memory stores a computer program adapted to be loaded by the processor and to perform a method for multi-angle imaging based defect detection according to any of claims 1-7.
10. A computer-readable storage medium, having stored thereon a computer program; the computer program is executed by a processor to implement the multi-angle imaging based defect detection method of any one of claims 1-7.
CN202010350606.3A 2020-04-28 2020-04-28 Defect detection method and system based on multi-angle imaging Active CN111507976B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010350606.3A CN111507976B (en) 2020-04-28 2020-04-28 Defect detection method and system based on multi-angle imaging


Publications (2)

Publication Number Publication Date
CN111507976A true CN111507976A (en) 2020-08-07
CN111507976B CN111507976B (en) 2023-08-18

Family

ID=71876496

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010350606.3A Active CN111507976B (en) 2020-04-28 2020-04-28 Defect detection method and system based on multi-angle imaging

Country Status (1)

Country Link
CN (1) CN111507976B (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112700446A (en) * 2021-03-23 2021-04-23 常州微亿智造科技有限公司 Algorithm model training method and device for industrial quality inspection
CN112986260A (en) * 2021-02-08 2021-06-18 菲特(珠海横琴)智能科技有限公司 Camera matrix-based detection system, control system, terminal, medium and application
CN113160204A (en) * 2021-04-30 2021-07-23 聚时科技(上海)有限公司 Semantic segmentation network training method for generating defect area based on target detection information
CN113362288A (en) * 2021-05-24 2021-09-07 深圳明锐理想科技有限公司 Golden finger scratch detection method and device and electronic equipment
CN113538417A (en) * 2021-08-24 2021-10-22 安徽顺鼎阿泰克科技有限公司 Transparent container defect detection method and device based on multi-angle and target detection
CN113716146A (en) * 2021-07-23 2021-11-30 武汉纺织大学 Paper towel product packaging detection method based on deep learning
CN113920075A (en) * 2021-09-29 2022-01-11 广州鲁邦通物联网科技股份有限公司 Simple defect detection method and system based on object identification
CN115532620A (en) * 2022-12-01 2022-12-30 杭州未名信科科技有限公司 Pulp molding product quality inspection device and method
CN116818664A (en) * 2023-06-16 2023-09-29 山东福特尔地毯有限公司 Carpet defect detection method and system based on visual detection
CN118429352A (en) * 2024-07-05 2024-08-02 南京航空航天大学 Surface defect detection method for braiding and forming large-scale preform
CN118429352B (en) * 2024-07-05 2024-09-24 南京航空航天大学 Surface defect detection method for braiding and forming large-scale preform

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040032979A1 (en) * 2002-08-12 2004-02-19 Hitachi High-Technologies Corporation Defect inspection method
US20170206658A1 (en) * 2016-01-15 2017-07-20 Abl Ip Holding Llc Image detection of mapped features and identification of uniquely identifiable objects for position estimation
CN107657603A (en) * 2017-08-21 2018-02-02 北京精密机电控制设备研究所 A kind of industrial appearance detecting method based on intelligent vision
CN109816644A (en) * 2019-01-16 2019-05-28 大连理工大学 A kind of bearing defect automatic checkout system based on multi-angle light source image
CN109829893A (en) * 2019-01-03 2019-05-31 武汉精测电子集团股份有限公司 A kind of defect object detection method based on attention mechanism
CN109978870A (en) * 2019-03-29 2019-07-05 北京百度网讯科技有限公司 Method and apparatus for output information
CN110243826A (en) * 2019-07-10 2019-09-17 上海微现检测设备有限公司 A kind of On-line Product detection method and device



Also Published As

Publication number Publication date
CN111507976B (en) 2023-08-18

Similar Documents

Publication Publication Date Title
CN111507976B (en) Defect detection method and system based on multi-angle imaging
CN111640157B (en) Checkerboard corner detection method based on neural network and application thereof
CN107230203B (en) Casting defect identification method based on human eye visual attention mechanism
CN113920107A (en) Insulator damage detection method based on improved yolov5 algorithm
CN105787923A (en) Vision system and analytical method for planar surface segmentation
CN110596120A (en) Glass boundary defect detection method, device, terminal and storage medium
CN111833237A (en) Image registration method based on convolutional neural network and local homography transformation
CN110288612B (en) Nameplate positioning and correcting method and device
CN113205511B (en) Electronic component batch information detection method and system based on deep neural network
CN115775236A (en) Surface tiny defect visual detection method and system based on multi-scale feature fusion
CN115830004A (en) Surface defect detection method, device, computer equipment and storage medium
CN115713476A (en) Visual detection method and device based on laser welding and readable storage medium
TW202240546A (en) Image augmentation techniques for automated visual inspection
CN115937203A (en) Visual detection method, device, equipment and medium based on template matching
CN114998308A (en) Defect detection method and system based on photometric stereo
CN117557565B (en) Detection method and device for lithium battery pole piece
CN116580026A (en) Automatic optical detection method, equipment and storage medium for appearance defects of precision parts
CN117079125A (en) Kiwi fruit pollination flower identification method based on improved YOLOv5
CN114862866B (en) Calibration plate detection method and device, computer equipment and storage medium
CN116125489A (en) Indoor object three-dimensional detection method, computer equipment and storage medium
CN117237835A (en) Automatic shelf safety detection method and device based on yolov7
CN113591548B (en) Target ring identification method and system
CN115358981A (en) Glue defect determining method, device, equipment and storage medium
CN117576037A (en) X-ray weld defect detection method, device, equipment and storage medium
CN117635603B (en) System and method for detecting on-line quality of hollow sunshade product based on target detection

Legal Events

Date Code Title Description
PB01 Publication
CB02 Change of applicant information

Address after: Room 101, building 1, block C, Qianjiang Century Park, ningwei street, Xiaoshan District, Hangzhou City, Zhejiang Province

Applicant after: Hangzhou Weiming Information Technology Co.,Ltd.

Applicant after: Institute of Information Technology, Zhejiang Peking University

Address before: Room 288-1, 857 Xinbei Road, Ningwei Town, Xiaoshan District, Hangzhou City, Zhejiang Province

Applicant before: Institute of Information Technology, Zhejiang Peking University

Applicant before: Hangzhou Weiming Information Technology Co.,Ltd.

SE01 Entry into force of request for substantive examination
GR01 Patent grant