CN111862080A - Deep learning defect identification method based on multi-feature fusion - Google Patents

Deep learning defect identification method based on multi-feature fusion

Info

Publication number
CN111862080A
CN111862080A (application CN202010757143.2A; granted as CN111862080B)
Authority
CN
China
Prior art keywords
feature
array
deep learning
characteristic
map
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010757143.2A
Other languages
Chinese (zh)
Other versions
CN111862080B (en)
Inventor
尹仕斌
郭寅
孙博
赵进
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yi Si Si Hangzhou Technology Co ltd
Original Assignee
Isvision Hangzhou Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Isvision Hangzhou Technology Co Ltd
Priority to CN202010757143.2A
Publication of CN111862080A
Application granted
Publication of CN111862080B
Legal status: Active (Current)
Anticipated expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/0002 - Inspection of images, e.g. flaw detection
    • G06T7/0004 - Industrial image inspection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/24 - Classification techniques
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/25 - Fusion techniques
    • G06F18/253 - Fusion techniques of extracted features
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G06V10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/443 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
    • G06V10/449 - Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters
    • G06V10/451 - Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters with interaction between the filter responses, e.g. cortical complex cells
    • G06V10/454 - Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20076 - Probabilistic image processing
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20081 - Training; Learning
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20084 - Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Biodiversity & Conservation Biology (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Multimedia (AREA)
  • Quality & Reliability (AREA)
  • Investigating Materials By The Use Of Optical Means Adapted For Particular Applications (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention discloses a deep learning defect identification method based on multi-feature fusion. A set of sinusoidal fringe images is collected, at least two of a phase distribution map, a curvature distribution map and a gray distribution map are calculated from it, and the resulting distribution maps are recorded as a feature map set. The feature map set is input into a pre-trained deep learning model, which outputs the probability of each defect type for the feature map set; the defect type on the mirror/mirror-like surface is then identified from these probabilities. Because the method derives several types of feature distribution maps for the different kinds of defect images at the same time, it offers higher identification efficiency and better accuracy than traditional methods and enables automatic, intelligent identification of mirror-surface defect types.

Description

Deep learning defect identification method based on multi-feature fusion
Technical Field
The invention relates to the field of defect detection, in particular to a deep learning defect identification method based on multi-feature fusion.
Background
Mirror/mirror-like objects are widely used in modern manufacturing, for example automotive painted bodies, aircraft painted bodies, electronic display panels, optical mirrors and polishing molds. The surface quality of such objects is affected by the machining process, coating quality, manufacturing environment and other factors, so defects such as pits, bulges, scratches and smudges are inevitably produced. These defects not only impair the functions of the specular object, such as reflection, transmission and corrosion protection, but also reduce the surface aesthetics. The surface defects of mirror/mirror-like objects therefore need to be detected in time, so that the objects can meet high functional and cosmetic standards.
Mirror/mirror-like objects are flat and highly reflective, so traditional defect detection methods cannot meet the detection requirements. At present, phase measuring deflectometry (a fringe deflectometry system) is the mainstream way to acquire the surface characteristics of mirror/mirror-like objects. The system consists of a display screen and a camera: the display screen sequentially projects several sinusoidal fringe patterns onto the surface of the mirror/mirror-like object; the camera collects the sinusoidal fringes on the surface, the surface defects of the measured mirror surface are detected by demodulating the phase information of the deformed fringes, and the three-dimensional shape of the measured surface is reconstructed. This approach has the following shortcomings:
1) A single feature such as phase information, gray-scale information or curvature information cannot fully characterize a defect. For example, a missing-topcoat defect on a painted car body is obvious in the phase distribution map but hard to perceive in the other feature distribution maps, a solvent-pit defect is obvious in the curvature distribution map, and a sheet-metal deformation defect is obvious in the gray distribution map. If only a single feature is used for defect identification, false detections and missed detections are likely;
2) Existing methods only detect defects; the defect types on the mirror/mirror-like surface still have to be classified manually. Classification is therefore not intelligent, which makes it difficult for the inspection line to plan around the occurrence probabilities of the various defect types.
Disclosure of Invention
In order to solve the technical problems, the invention provides a deep learning defect identification method based on multi-feature fusion.
The technical scheme is as follows:
a deep learning defect identification method based on multi-feature fusion is used for detecting surface defects of a mirror surface/mirror-like object, and a display screen is utilized to project a plurality of sinusoidal stripes to the surface of the mirror surface/mirror-like object in sequence; a camera collects a plurality of sine stripes projected on the surface of the mirror surface/mirror-like object, and the collected sine stripe images are recorded as a sine stripe image set;
utilizing the sine stripe atlas to identify surface defects of the mirror surface/mirror-like object, comprising the following steps:
1) respectively obtaining at least two characteristic distribution maps by utilizing the sine stripe map set and recording the characteristic distribution maps as a characteristic map set; the characteristic distribution map comprises a phase distribution map, a curvature distribution map and a gray level distribution map;
the phase distribution map is obtained through gray values of all points of the transverse stripe image and gray values of all points of the longitudinal stripe image;
the curvature distribution map is obtained through a phase map and a gradient map;
the gray distribution map is obtained by averaging gray values of the same image point positions in different sine stripe images, taking a gray average value as a gray value of an image point in the gray distribution map and traversing each image point;
2) inputting the feature map set into a pre-trained deep learning model, processing the feature map set through the deep learning model to obtain the probability of each defect type corresponding to the feature map set, and identifying the type of the surface of the mirror surface/mirror-like object according to the probability.
Further, the defects include: pit defects, paint starvation defects, wear scar defects, scratch defects, and smudge defects;
in the step 2), judging whether the obtained probabilities are all smaller than a preset threshold value T, wherein T is set according to an empirical value;
if so, determining that the surface of the mirror surface/mirror-like object is free of defects;
and if not, recording the defect type with the maximum probability value as the defect type of the surface of the mirror surface/mirror-like object.
Further, in step 2), the deep learning model processing procedure is as follows:
inputting the N distribution maps in the feature map set into corresponding convolution layers and pooling layers respectively for feature extraction;
inputting the extracted N characteristics into corresponding first full-connection layers respectively, and correspondingly obtaining N characteristic arrays through full-connection calculation;
respectively taking the N characteristic arrays as input parameters to perform fusion calculation, and recording the fused array as a first array;
inputting the first array into a rear full-connection layer, and converting the first array into a parameter array through full-connection calculation;
the data quantity of the parameter array is consistent with the defect type quantity;
and the Softmax layer calculates the probability of each defect type corresponding to the characteristic atlas according to the parameter array, and identifies the defect type of the surface of the mirror surface/mirror-like object according to the probability value.
Further, the method for merging the N feature arrays into the first array comprises:
recording the size of a single feature array as 1 × M, i.e. a single feature array contains M feature values;
M is a preset number of nodes, where M is 256, 512, 1024, or 2048;
the first array contains M node output parameters, where a single node output parameter F_share^t is calculated as follows:
if the feature map set contains only two distribution maps, the feature arrays consist of a first feature array and a second feature array; in this case:
[fusion formula for two feature arrays, shown as an equation image in the original]
if the feature map set contains three distribution maps, the feature arrays consist of a first, a second and a third feature array; in this case:
[fusion formula for three feature arrays, shown as an equation image in the original]
where i = 1, 2, 3, …, M and t ∈ [1, M];
the first set of symbols in the formulas denotes the t-th feature value in the first, second and third feature arrays, respectively;
and the second set of symbols denotes the sum of all feature values in the first, second and third feature arrays, respectively.
Further, the gray distribution map is obtained by the following process:
each sinusoidal fringe image is denoised, and the gray distribution map is synthesized from the n processed images; the gray value F of a single image point (u, v) in the gray distribution map is:
F(u, v) = (1/n) Σ_{i=1}^{n} D_i(u, v)
where i = 1, 2, …, n, n is the total number of sinusoidal fringe images, and D_i is the gray value of image point (u, v) in the i-th denoised sinusoidal fringe image.
Further, if the sine stripe map set comprises four transverse stripe maps and four longitudinal stripe maps;
then: the phase profile is obtained by the following calculation:
Figure BDA0002611959180000051
wherein the transverse phase pattern phasexAnd longitudinal phase map phaseyThe function is calculated as follows:
Figure BDA0002611959180000052
Figure BDA0002611959180000053
wherein, pattern1x(u,v)、pattern2x(u,v)、pattern3x(u,v)、pattern4x(u, v) represent the gray values of the four transverse stripe images at the point (u, v), respectively; pattern1y(u,v)、pattern2y(u,v)、pattern3y(u,v)、pattern4y(u, v) represent the gray values of the four longitudinal stripe images at the point (u, v), respectively.
Further, if the sine stripe map set comprises four transverse stripe maps and four longitudinal stripe maps;
then: curvature profile CurvaturexyObtained by the following process:
Figure BDA0002611959180000054
wherein, the transverse gradient map gradientxAnd longitudinal gradient map gradientyThe calculation is as follows:
Figure BDA0002611959180000055
Figure BDA0002611959180000056
wherein, phasexAnd phaseyThe transverse phase diagram and the longitudinal phase diagram are shown separately.
Further, the process of training the deep learning model in advance comprises:
respectively collecting sine stripe pattern sets corresponding to the surfaces of different measured objects; obtaining a feature atlas by using the method in the step 1), labeling defect labels for the feature atlas in advance, and forming training samples corresponding to various types of defects, wherein the number of the feature atlas of each type of defects is not less than 50;
setting a deep learning model comprising N groups of characteristic layers, a full connection layer, a Softmax layer, a dropout layer and an output layer; wherein the single group of characteristic layers comprise a convolution layer and a pooling layer; the full-connection layer comprises N first full-connection layers and a plurality of rear full-connection layers, and a fusion calculation layer is arranged between the first full-connection layer and the rear full-connection layer;
inputting various distribution graphs in a training sample to respective corresponding feature layers, performing feature extraction, obtaining N feature arrays through a first full-connection layer, and performing fusion calculation on the N feature arrays to obtain a first array; inputting the first array into a rear full-connection layer, and calculating the defect probability through a Softmax layer;
and continuously adjusting model parameters by utilizing a plurality of sample set iterative network models through forward calculation and backward propagation to obtain a deep learning model with the accuracy meeting the requirement, and storing the deep learning model as a trained deep learning model.
The scheme of the invention exploits how strongly each defect stands out in the different feature images: a multi-feature-fusion deep learning method is designed that simultaneously derives several types of feature distribution maps for the different kinds of defect images, feeds these multiple features into a deep learning model, and performs feature fusion and defect classification through the model's fully connected layers. Compared with traditional methods, it offers high identification efficiency and good accuracy, and it enables automatic, intelligent identification of mirror-surface defect types.
Drawings
FIG. 1 is a schematic flow chart of a defect identification method according to an embodiment;
FIG. 2 is a diagram of eight sinusoidal fringe patterns corresponding to different types of defects;
FIG. 3 is a phase distribution diagram, a curvature distribution diagram and a gray scale distribution diagram corresponding to different types of defects in the embodiment;
fig. 4 is the probability corresponding to each type of defect calculated by the deep learning model.
Detailed Description
The technical solution of the present invention will be described in detail with reference to the specific embodiments.
A deep learning defect identification method based on multi-feature fusion is used for detecting surface defects of mirror/mirror-like objects. A display screen sequentially projects several sinusoidal fringe patterns onto the surface of the mirror/mirror-like object; a camera collects the sinusoidal fringes on the surface, and the collected sinusoidal fringe images are recorded as a sinusoidal fringe image set (fig. 2 shows the sinusoidal fringe image sets collected for a gray-black defect, a grinding-mark defect and a natural-color scratch defect, using one frequency and four phase shifts);
the method for identifying the surface defects of the mirror surface/mirror-like object by utilizing the sine stripe image set comprises the following steps as shown in figure 1:
1) respectively obtaining at least two characteristic distribution maps by utilizing the sine stripe map set and recording the characteristic distribution maps as a characteristic map set; the characteristic distribution map comprises a phase distribution map, a curvature distribution map and a gray level distribution map;
the phase distribution map is obtained through gray values of all points of the transverse stripe image and gray values of all points of the longitudinal stripe image;
the curvature distribution map is obtained through a phase map and a gradient map;
the gray distribution map is obtained by averaging gray values of the same image point positions in different sine stripe images, taking a gray average value as a gray value of an image point in the gray distribution map and traversing each image point;
as shown in fig. 3, the phase distribution map, curvature distribution map and gray distribution map are calculated for the gray-black defect, the grinding-mark defect and the natural-color scratch defect; it can be seen that different defect features appear to different extents in the different distribution maps: for example, the gray-black defect is more obvious in the curvature distribution map, the grinding-mark defect is more obvious in the phase distribution map, and the natural-color scratch defect is more obvious in the curvature distribution map;
2) inputting the feature map set into a pre-trained deep learning model, obtaining the probability of each defect type corresponding to the feature map set through deep learning model processing, and identifying the type of the surface of the mirror surface/mirror-like object according to the probability.
In this example, three distribution maps were calculated, so N = 3; the defect types are: pit defects (sheet-metal pits), paint-starvation defects (missing topcoat), grinding-mark defects (grinding marks), scratch defects (natural-color scratches) and smudge defects (black ash and/or fiber fuzz);
in step 2), judging whether the obtained probabilities are all smaller than a preset threshold value T, wherein T is set according to an empirical value, and T can be set to be 0.5 in specific implementation;
if so, determining that the surface of the mirror surface/mirror-like object is free of defects;
and if not, recording the defect type with the maximum probability value as the defect type of the surface of the mirror surface/mirror-like object.
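As an illustration of this decision rule, the following minimal Python sketch applies the threshold test and the maximum-probability selection to a vector of class probabilities; the class ordering, the helper name classify and the example numbers (loosely echoing fig. 4) are assumptions for illustration, not values fixed by the patent.

```python
import numpy as np

# Hypothetical class ordering for this sketch; the patent lists the five defect
# types but does not fix an output order (black ash falls under "smudge" here).
DEFECT_TYPES = ["pit", "paint starvation", "grinding mark", "scratch", "smudge"]

def classify(probabilities, threshold=0.5):
    """Decision rule of step 2): if every class probability is below the
    threshold T, the surface is judged defect-free; otherwise the class with
    the largest probability is reported as the defect type."""
    probabilities = np.asarray(probabilities, dtype=float)
    if np.all(probabilities < threshold):
        return "no defect"
    return DEFECT_TYPES[int(np.argmax(probabilities))]

# Illustrative call with made-up probabilities (0.5568 echoes the 55.68%
# gray-black/black-ash result of fig. 4); prints "smudge".
print(classify([0.10, 0.08, 0.12, 0.14, 0.5568]))
```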
Specifically, as shown in fig. 1, in step 2), the deep learning model processing procedure is as follows:
inputting the N distribution maps in the feature map set into corresponding convolution layers and pooling layers respectively for feature extraction;
inputting the extracted N characteristics into corresponding first full-connection layers respectively, and correspondingly obtaining N characteristic arrays through full-connection calculation;
respectively taking the N characteristic arrays as input parameters to perform fusion calculation, and recording the fused array as a first array;
then inputting the first array into a following full-connection layer (in this embodiment, the following full-connection layer includes four layers), and converting the first array into a parameter array through full-connection calculation;
the data quantity of the parameter array is consistent with the defect type quantity;
and the Softmax layer calculates the probability of each defect type corresponding to the characteristic atlas according to the parameter array, and identifies the defect type of the surface of the mirror surface/mirror-like object according to the probability value.
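To make the data flow concrete, here is a hedged PyTorch sketch of the described structure: N convolution-and-pooling branches, one first fully connected layer per branch producing a 1 × M feature array, a fusion step yielding the first array, rear fully connected layers producing the parameter array, and a softmax over defect types. The class name MultiFeatureDefectNet, all channel counts, kernel sizes, the 224 × 224 input resolution and the normalised weighted-sum fusion are assumptions; the patent gives its exact fusion formula only as an equation image (see the fusion description that follows).

```python
import torch
import torch.nn as nn

class MultiFeatureDefectNet(nn.Module):
    """Sketch of the described architecture: one conv+pool branch and one first
    fully connected layer per feature map, a fusion step, rear fully connected
    layers and a softmax over defect classes. Layer sizes are illustrative."""

    def __init__(self, n_branches=3, m_nodes=1024, n_classes=5):
        super().__init__()
        self.branches = nn.ModuleList()
        self.first_fc = nn.ModuleList()
        for _ in range(n_branches):
            self.branches.append(nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
            ))
            # First fully connected layer: one 1 x M feature array per branch.
            self.first_fc.append(nn.Linear(32 * 56 * 56, m_nodes))
        # Rear fully connected layers (the embodiment mentions four of them).
        self.rear_fc = nn.Sequential(
            nn.Linear(m_nodes, 512), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(512, 256), nn.ReLU(),
            nn.Linear(256, 64), nn.ReLU(),
            nn.Linear(64, n_classes),   # parameter array: one value per defect type
        )

    def fuse(self, feature_arrays):
        # Placeholder fusion: each array is weighted by the sum of its values and
        # the result normalised. The patent's exact fusion formula is shown only
        # as an equation image, so this specific form is an assumption.
        sums = [f.sum(dim=1, keepdim=True) for f in feature_arrays]
        fused = sum(s * f for s, f in zip(sums, feature_arrays))
        return fused / (sum(sums) + 1e-8)

    def forward(self, feature_maps):          # list of N tensors, shape (B, 1, 224, 224)
        arrays = []
        for x, branch, fc in zip(feature_maps, self.branches, self.first_fc):
            h = branch(x)
            arrays.append(fc(h.flatten(1)))   # N feature arrays of size 1 x M
        first_array = self.fuse(arrays)       # fused "first array"
        params = self.rear_fc(first_array)    # parameter array
        return torch.softmax(params, dim=1)   # probability of each defect type
```

The default arguments mirror the values stated in this embodiment: N = 3 distribution maps (phase, curvature, gray), M = 1024 and five defect classes.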
In more detail, the method for merging the N feature arrays into the first array is as follows:
recording the size of a single feature array as 1 × M, i.e. a single feature array contains M feature values;
M is a preset number of nodes, where M is 256, 512, 1024, or 2048; in this embodiment, M is 1024;
the first array contains M node output parameters, where a single node output parameter F_share^t is calculated as follows:
if the feature map set contains only two distribution maps, the feature arrays consist of a first feature array and a second feature array; in this case:
[fusion formula for two feature arrays, shown as an equation image in the original]
if the feature map set contains three distribution maps, the feature arrays consist of a first, a second and a third feature array; in this case:
[fusion formula for three feature arrays, shown as an equation image in the original]
where i = 1, 2, 3, …, M and t ∈ [1, M];
the first set of symbols in the formulas denotes the t-th feature value in the first, second and third feature arrays, respectively;
and the second set of symbols denotes the sum of all feature values in the first, second and third feature arrays, respectively.
As shown in fig. 4, the defect probabilities output for three different images are as follows: for the first image, the highest probability, 55.68%, corresponds to the gray-black defect; this exceeds the threshold T, so the image is considered to contain a gray-black defect;
for the second image, the highest probability, 72.11%, corresponds to the grinding-mark defect; this exceeds the threshold T, so the image is considered to contain a grinding-mark defect;
for the third image, the highest probability, 50.74%, corresponds to the natural-color scratch defect; this exceeds the threshold T, so the image is considered to contain a natural-color scratch defect;
specifically, the following are three specific calculation methods of distribution diagram:
wherein the gray distribution map is obtained by the following process:
each sinusoidal fringe image is denoised, and the gray distribution map is synthesized from the n processed sinusoidal fringe images; the gray value F of a single image point (u, v) in the gray distribution map is:
F(u, v) = (1/n) Σ_{i=1}^{n} D_i(u, v)
where i = 1, 2, …, n, n is the total number of sinusoidal fringe images, and D_i is the gray value of image point (u, v) in the i-th denoised sinusoidal fringe image.
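A minimal NumPy sketch of this step, assuming OpenCV's Gaussian blur as a stand-in for the unspecified denoising operation: the gray distribution map is simply the pixel-wise mean of the n denoised fringe images.

```python
import numpy as np
import cv2  # OpenCV, used only for an assumed Gaussian-blur denoising step

def gray_distribution_map(fringe_images):
    """Pixel-wise average of the n denoised sinusoidal fringe images:
    F(u, v) = (1/n) * sum_i D_i(u, v). The Gaussian blur stands in for the
    unspecified denoising operation."""
    denoised = [cv2.GaussianBlur(img.astype(np.float32), (3, 3), 0)
                for img in fringe_images]
    return np.mean(denoised, axis=0)
```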
If the sinusoidal fringe image set contains four transverse fringe images and four longitudinal fringe images (one frequency, four phase shifts),
then: the phase distribution map is obtained by the following calculation:
[formula combining the transverse and longitudinal phase maps into the phase distribution map, shown as an equation image in the original]
where the transverse phase map phase_x and the longitudinal phase map phase_y are calculated as follows:
[four-step phase-shifting formulas for phase_x and phase_y, shown as equation images in the original]
where pattern1_x(u,v), pattern2_x(u,v), pattern3_x(u,v), pattern4_x(u,v) are the gray values of the four transverse fringe images at point (u,v), and pattern1_y(u,v), pattern2_y(u,v), pattern3_y(u,v), pattern4_y(u,v) are the gray values of the four longitudinal fringe images at point (u,v).
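The exact phase formulas above are given only as equation images; the sketch below assumes the standard four-step phase-shifting relation arctan2(p4 - p2, p1 - p3) for phase_x and phase_y and a root-sum-square combination for the final phase distribution map, both of which are assumptions rather than the patent's confirmed formulas.

```python
import numpy as np

def wrapped_phase(p1, p2, p3, p4):
    """Standard four-step phase-shifting relation, used as a stand-in for the
    patent's equation image: phase = arctan2(p4 - p2, p1 - p3), with p1..p4 the
    four fringe images taken at 90-degree phase steps."""
    p1, p2, p3, p4 = (p.astype(np.float32) for p in (p1, p2, p3, p4))
    return np.arctan2(p4 - p2, p1 - p3)

def phase_distribution_map(transverse, longitudinal):
    """Combine the transverse and longitudinal wrapped-phase maps. The patent's
    combination rule is an equation image; a root-sum-square is assumed here."""
    phase_x = wrapped_phase(*transverse)    # four transverse fringe images
    phase_y = wrapped_phase(*longitudinal)  # four longitudinal fringe images
    return np.sqrt(phase_x ** 2 + phase_y ** 2), phase_x, phase_y
```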
If the sinusoidal fringe image set contains four transverse fringe images and four longitudinal fringe images (one frequency, four phase shifts),
then: the curvature distribution map Curvature_xy is obtained by the following process:
[formula for Curvature_xy from the gradient maps, shown as an equation image in the original]
where the transverse gradient map gradient_x and the longitudinal gradient map gradient_y are calculated as follows:
[formulas for gradient_x and gradient_y from the phase maps, shown as equation images in the original]
where phase_x and phase_y denote the transverse and longitudinal phase maps, respectively.
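The gradient and curvature relations are likewise shown only as equation images; the following sketch assumes simple finite differences of the phase maps for gradient_x and gradient_y and a sum of second derivatives for Curvature_xy, purely for illustration.

```python
import numpy as np

def curvature_distribution_map(phase_x, phase_y):
    """Gradient maps derived from the phase maps, and a curvature map derived
    from the gradient maps. The exact relations are equation images in the
    original; finite differences and a sum of second derivatives are assumed."""
    gradient_x = np.gradient(phase_x, axis=1)  # assumed horizontal derivative of phase_x
    gradient_y = np.gradient(phase_y, axis=0)  # assumed vertical derivative of phase_y
    curvature_xy = np.gradient(gradient_x, axis=1) + np.gradient(gradient_y, axis=0)
    return curvature_xy
```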
The deep learning model needs to be trained in advance, and the trained deep learning model can be directly used for defect identification;
specifically, the pre-training process of the deep learning model is as follows:
respectively collecting sine stripe pattern sets corresponding to the surfaces of different measured objects; obtaining a feature atlas by using the method in the step 1), labeling defect labels for the feature atlas in advance, and forming training samples corresponding to various types of defects, wherein the number of the feature atlas of each type of defects is not less than 50;
setting a deep learning model comprising N groups of characteristic layers, a full connection layer, a Softmax layer, a dropout layer and an output layer; wherein the single group of characteristic layers comprise a convolution layer and a pooling layer; the full-connection layer comprises N first full-connection layers and a plurality of rear full-connection layers, and a fusion calculation layer is arranged between the first full-connection layer and the rear full-connection layer;
inputting various distribution graphs in a training sample to respective corresponding feature layers, performing feature extraction, obtaining N feature arrays through a first full-connection layer, and performing fusion calculation on the N feature arrays to obtain a first array; inputting the first array into a rear full-connection layer, and calculating the defect probability through a Softmax layer;
and continuously adjusting model parameters by utilizing a plurality of sample set iterative network models through forward calculation and backward propagation to obtain a deep learning model with the accuracy meeting the requirement, and storing the deep learning model as a trained deep learning model.
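A hedged sketch of this training loop, reusing the hypothetical MultiFeatureDefectNet from the earlier sketch; the Adam optimiser, learning rate, batch size and epoch count are assumptions, and the negative log-likelihood loss is applied to the softmax probabilities that the sketch's forward pass returns.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

def train_model(model, phase, curvature, gray, labels, epochs=50, lr=1e-4):
    """Forward calculation and back propagation over labelled feature-map samples.
    phase, curvature, gray: float tensors of shape (num_samples, 1, H, W);
    labels: long tensor of shape (num_samples,) with defect-class indices."""
    loader = DataLoader(TensorDataset(phase, curvature, gray, labels),
                        batch_size=8, shuffle=True)
    optimiser = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.NLLLoss()  # the model already returns softmax probabilities
    for _ in range(epochs):
        for p, c, g, y in loader:
            optimiser.zero_grad()
            probs = model([p, c, g])                     # forward calculation
            loss = loss_fn(torch.log(probs + 1e-8), y)   # NLL on log-probabilities
            loss.backward()                              # back propagation
            optimiser.step()
    return model
```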
The foregoing descriptions of specific exemplary embodiments of the present invention have been presented for purposes of illustration and description. The foregoing description is not intended to be exhaustive or to limit the invention to the precise form disclosed, and obviously many modifications and variations are possible in light of the above teaching. The exemplary embodiments were chosen and described in order to explain certain principles of the invention and its practical application to enable others skilled in the art to make and use various exemplary embodiments of the invention and various alternatives and modifications thereof. It is intended that the scope of the invention be defined by the following claims and their equivalents.

Claims (8)

1. A deep learning defect identification method based on multi-feature fusion is used for detecting surface defects of a mirror surface/mirror-like object, and a display screen is utilized to project a plurality of sinusoidal stripes to the surface of the mirror surface/mirror-like object in sequence; a camera collects a plurality of sine stripes projected on the surface of the mirror surface/mirror-like object, and the collected sine stripe images are recorded as a sine stripe image set; the method is characterized in that the surface defects of the mirror surface/mirror-like object are identified by utilizing the sine stripe atlas, and the method comprises the following steps:
1) respectively obtaining at least two characteristic distribution maps by utilizing the sine stripe map set and recording the characteristic distribution maps as a characteristic map set; the characteristic distribution map comprises a phase distribution map, a curvature distribution map and a gray level distribution map;
the phase distribution map is obtained through gray values of all points of the transverse stripe image and gray values of all points of the longitudinal stripe image;
the curvature distribution map is obtained through a phase map and a gradient map;
the gray distribution map is obtained by averaging gray values of the same image point positions in different sine stripe images, taking a gray average value as a gray value of an image point in the gray distribution map and traversing each image point;
2) inputting the feature map set into a pre-trained deep learning model, processing the feature map set through the deep learning model to obtain the probability of each defect type corresponding to the feature map set, and identifying the type of the surface of the mirror surface/mirror-like object according to the probability.
2. The method for deep learning defect identification based on multi-feature fusion as claimed in claim 1, wherein: the defects include: pit defects, paint starvation defects, wear scar defects, scratch defects, and smudge defects;
in the step 2), judging whether the obtained probabilities are all smaller than a preset threshold value T, wherein T is set according to an empirical value;
if so, determining that the surface of the mirror surface/mirror-like object is free of defects;
and if not, recording the defect type with the maximum probability value as the defect type of the surface of the mirror surface/mirror-like object.
3. The method for deep learning defect identification based on multi-feature fusion as claimed in claim 1, wherein: in step 2), the deep learning model processing process is as follows:
inputting the N distribution maps in the feature map set into corresponding convolution layers and pooling layers respectively for feature extraction;
inputting the extracted N characteristics into corresponding first full-connection layers respectively, and correspondingly obtaining N characteristic arrays through full-connection calculation;
respectively taking the N characteristic arrays as input parameters to perform fusion calculation, and recording the fused array as a first array;
inputting the first array into a rear full-connection layer, and converting the first array into a parameter array through full-connection calculation;
the data quantity of the parameter array is consistent with the defect type quantity;
and the Softmax layer calculates the probability of each defect type corresponding to the characteristic atlas according to the parameter array, and identifies the defect type of the surface of the mirror surface/mirror-like object according to the probability value.
4. The method for deep learning defect identification based on multi-feature fusion as claimed in claim 3, wherein: the method for fusing the N characteristic arrays into the first array comprises the following steps:
recording the size of a single feature array as 1 × M, i.e. a single feature array contains M feature values;
M is a preset number of nodes, where M is 256, 512, 1024, or 2048;
the first array contains M node output parameters, where a single node output parameter F_share^t is calculated as follows:
if the feature map set contains only two distribution maps, the feature arrays consist of a first feature array and a second feature array; in this case:
[fusion formula for two feature arrays, shown as an equation image in the original]
if the feature map set contains three distribution maps, the feature arrays consist of a first, a second and a third feature array; in this case:
[fusion formula for three feature arrays, shown as an equation image in the original]
where i = 1, 2, 3, …, M and t ∈ [1, M];
the first set of symbols in the formulas denotes the t-th feature value in the first, second and third feature arrays, respectively;
and the second set of symbols denotes the sum of all feature values in the first, second and third feature arrays, respectively.
5. The method for deep learning defect identification based on multi-feature fusion as claimed in claim 1, wherein: the gray distribution map is obtained by the following process:
each sinusoidal fringe image is denoised, and the gray distribution map is synthesized from the n processed images; the gray value F of a single image point (u, v) in the gray distribution map is:
F(u, v) = (1/n) Σ_{i=1}^{n} D_i(u, v)
where i = 1, 2, …, n, n is the total number of sinusoidal fringe images, and D_i is the gray value of image point (u, v) in the i-th denoised sinusoidal fringe image.
6. The method for deep learning defect identification based on multi-feature fusion as claimed in claim 1, wherein: if the sine stripe image set comprises four transverse stripe images and four longitudinal stripe images;
then: the phase profile is obtained by the following calculation:
Figure FDA0002611959170000035
wherein the transverse phase pattern phasexAnd longitudinal phase map phaseyThe function is calculated as follows:
Figure FDA0002611959170000041
Figure FDA0002611959170000042
wherein, pattern1x(u,v)、pattern2x(u,v)、pattern3x(u,v)、pattern4x(u, v) represent the gray values of the four transverse stripe images at the point (u, v), respectively; pattern1y(u,v)、pattern2y(u,v)、pattern3y(u,v)、pattern4y(u, v) represent the gray values of the four longitudinal stripe images at the point (u, v), respectively.
7. The method for deep learning defect identification based on multi-feature fusion as claimed in claim 1, wherein: if the sine stripe image set comprises four transverse stripe images and four longitudinal stripe images;
then: curvature profile CurvaturexyObtained by the following process:
Figure FDA0002611959170000043
wherein, the transverse gradient map gradientxAnd longitudinal gradient map gradientyThe calculation is as follows:
Figure FDA0002611959170000044
Figure FDA0002611959170000045
wherein, phasexAnd phaseyThe transverse phase diagram and the longitudinal phase diagram are shown separately.
8. The method for deep learning defect identification based on multi-feature fusion as claimed in claim 1, wherein: the process of training the deep learning model in advance comprises the following steps:
respectively collecting sine stripe pattern sets corresponding to the surfaces of different measured objects; obtaining a feature atlas by using the method in the step 1), labeling defect labels for the feature atlas in advance, and forming training samples corresponding to various types of defects, wherein the number of the feature atlas of each type of defects is not less than 50;
setting a deep learning model comprising N groups of characteristic layers, a plurality of full connection layers, a Softmax layer, a dropout layer and an output layer; wherein the single group of characteristic layers comprise a convolution layer and a pooling layer; the full-connection layer comprises N first full-connection layers and a plurality of rear full-connection layers, and a fusion calculation layer is arranged between the first full-connection layer and the rear full-connection layer;
inputting various distribution graphs in a training sample to respective corresponding feature layers, performing feature extraction, obtaining N feature arrays through a first full-connection layer, and performing fusion calculation on the N feature arrays to obtain a first array; inputting the first array into a rear full-connection layer, and calculating the defect probability through a Softmax layer;
and continuously adjusting model parameters by utilizing a plurality of sample set iterative network models through forward calculation and backward propagation to obtain a deep learning model with the accuracy meeting the requirement, and storing the deep learning model as a trained deep learning model.
CN202010757143.2A 2020-07-31 2020-07-31 Deep learning defect identification method based on multi-feature fusion Active CN111862080B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010757143.2A CN111862080B (en) 2020-07-31 2020-07-31 Deep learning defect identification method based on multi-feature fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010757143.2A CN111862080B (en) 2020-07-31 2020-07-31 Deep learning defect identification method based on multi-feature fusion

Publications (2)

Publication Number Publication Date
CN111862080A true CN111862080A (en) 2020-10-30
CN111862080B CN111862080B (en) 2021-05-18

Family

ID=72952627

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010757143.2A Active CN111862080B (en) 2020-07-31 2020-07-31 Deep learning defect identification method based on multi-feature fusion

Country Status (1)

Country Link
CN (1) CN111862080B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115272293A (en) * 2022-08-29 2022-11-01 新极技术(北京)有限公司 Strip steel surface defect detection method and system
CN116433658A (en) * 2023-06-08 2023-07-14 季华实验室 Mirror-like defect detection method, device, electronic equipment and storage medium

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104279980A (en) * 2014-10-20 2015-01-14 电子科技大学 Mirror surface three-dimensional-surface-shape measuring system based on intelligent photographing mobile phone
CN107192717A (en) * 2017-04-26 2017-09-22 深圳市计量质量检测研究院 A kind of 3 D defects detection method and device on object near flat surface
CN108645871A (en) * 2018-05-15 2018-10-12 佛山市南海区广工大数控装备协同创新研究院 A kind of 3D bend glass defect inspection methods based on streak reflex
DE102017129356B3 (en) * 2017-12-08 2019-03-07 Infineon Technologies Ag INSPECTION PROCEDURE FOR SEMICONDUCTOR SUBSTRATES USING TILTING DATA AND INSPECTION DEVICE
CN109829906A (en) * 2019-01-31 2019-05-31 桂林电子科技大学 It is a kind of based on the workpiece, defect of the field of direction and textural characteristics detection and classification method
CN109900706A (en) * 2019-03-20 2019-06-18 易思维(杭州)科技有限公司 A kind of weld seam and weld defect detection method based on deep learning
CN109916922A (en) * 2019-04-02 2019-06-21 易思维(杭州)科技有限公司 Mirror surface/class mirror article defect inspection method
CN109932371A (en) * 2019-04-02 2019-06-25 易思维(杭州)科技有限公司 Mirror surface/class mirror article defect detecting device
CN110293684A (en) * 2019-06-03 2019-10-01 深圳市科迈爱康科技有限公司 Dressing Method of printing, apparatus and system based on three-dimensional printing technology
CN110646376A (en) * 2019-04-22 2020-01-03 天津大学 Lens defect detection method based on fringe deflection
CN110689039A (en) * 2019-08-19 2020-01-14 浙江工业大学 Trunk texture identification method based on four-channel convolutional neural network
CN111323434A (en) * 2020-03-16 2020-06-23 征图新视(江苏)科技股份有限公司 Application of phase deflection technology in glass defect detection

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104279980A (en) * 2014-10-20 2015-01-14 电子科技大学 Mirror surface three-dimensional-surface-shape measuring system based on intelligent photographing mobile phone
CN107192717A (en) * 2017-04-26 2017-09-22 深圳市计量质量检测研究院 A kind of 3 D defects detection method and device on object near flat surface
DE102017129356B3 (en) * 2017-12-08 2019-03-07 Infineon Technologies Ag INSPECTION PROCEDURE FOR SEMICONDUCTOR SUBSTRATES USING TILTING DATA AND INSPECTION DEVICE
CN108645871A (en) * 2018-05-15 2018-10-12 佛山市南海区广工大数控装备协同创新研究院 A kind of 3D bend glass defect inspection methods based on streak reflex
CN109829906A (en) * 2019-01-31 2019-05-31 桂林电子科技大学 It is a kind of based on the workpiece, defect of the field of direction and textural characteristics detection and classification method
CN109900706A (en) * 2019-03-20 2019-06-18 易思维(杭州)科技有限公司 A kind of weld seam and weld defect detection method based on deep learning
CN109916922A (en) * 2019-04-02 2019-06-21 易思维(杭州)科技有限公司 Mirror surface/class mirror article defect inspection method
CN109932371A (en) * 2019-04-02 2019-06-25 易思维(杭州)科技有限公司 Mirror surface/class mirror article defect detecting device
CN110646376A (en) * 2019-04-22 2020-01-03 天津大学 Lens defect detection method based on fringe deflection
CN110293684A (en) * 2019-06-03 2019-10-01 深圳市科迈爱康科技有限公司 Dressing Method of printing, apparatus and system based on three-dimensional printing technology
CN110689039A (en) * 2019-08-19 2020-01-14 浙江工业大学 Trunk texture identification method based on four-channel convolutional neural network
CN111323434A (en) * 2020-03-16 2020-06-23 征图新视(江苏)科技股份有限公司 Application of phase deflection technology in glass defect detection

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
尹仕斌: "Review of applications of machine vision technology in modern automobile manufacturing", 《光学学报》 *
熊显名 et al.: "Research on surface defect detection of polished curved surfaces based on reflection moiré", 《激光与光电子学进展》 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115272293A (en) * 2022-08-29 2022-11-01 新极技术(北京)有限公司 Strip steel surface defect detection method and system
CN116433658A (en) * 2023-06-08 2023-07-14 季华实验室 Mirror-like defect detection method, device, electronic equipment and storage medium
CN116433658B (en) * 2023-06-08 2023-08-15 季华实验室 Mirror-like defect detection method, device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN111862080B (en) 2021-05-18

Similar Documents

Publication Publication Date Title
CN110570393B (en) Mobile phone glass cover plate window area defect detection method based on machine vision
CN113450307B (en) Product edge defect detection method
CN106504248B (en) Vehicle damage judging method based on computer vision
CN111862080B (en) Deep learning defect identification method based on multi-feature fusion
CN108346144B (en) Automatic bridge crack monitoring and identifying method based on computer vision
CN104112269B (en) A kind of solar battery laser groove parameter detection method and system based on machine vision
Arnal et al. Detecting dings and dents on specular car body surfaces based on optical flow
CN107622277B (en) Bayesian classifier-based complex curved surface defect classification method
CN111257338B (en) Surface defect detection method for mirror surface and mirror-like object
CN110705553B (en) Scratch detection method suitable for vehicle distant view image
CN113393426A (en) Method for detecting surface defects of rolled steel plate
CN113743473A (en) Intelligent identification and detection method for automatic spraying process of complex parts
CN113538331A (en) Metal surface damage target detection and identification method, device, equipment and storage medium
CN111127417A (en) Soft package coil stock printing defect detection method based on SIFT feature matching and improved SSD algorithm
CN111523611A (en) Gluing detection method
CN115656182A (en) Sheet material point cloud defect detection method based on tensor voting principal component analysis
CN114881998A (en) Workpiece surface defect detection method and system based on deep learning
CN117291918B (en) Automobile stamping part defect detection method based on three-dimensional point cloud
CN104515473A (en) Online diameter detection method of varnished wires
CN112085754B (en) Edge detection method of reflective adhesive tape
CN113570549A (en) Defect detection method and device for reflective surface
CN116385356A (en) Method and system for extracting regular hexagonal hole features based on laser vision
CN116883313A (en) Method for rapidly detecting vehicle body paint surface defects, image processing equipment and readable medium
CN114140400B (en) Method for detecting cigarette packet label defect based on RANSAC and CNN algorithm
CN114548250A (en) Mobile phone appearance detection method and device based on data analysis

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: Room 495, building 3, 1197 Bin'an Road, Binjiang District, Hangzhou City, Zhejiang Province 310051

Patentee after: Yi Si Si (Hangzhou) Technology Co.,Ltd.

Address before: Room 495, building 3, 1197 Bin'an Road, Binjiang District, Hangzhou City, Zhejiang Province 310051

Patentee before: ISVISION (HANGZHOU) TECHNOLOGY Co.,Ltd.

CP01 Change in the name or title of a patent holder