CN117315376B - Machine learning-based mechanical part industrial quality inspection method - Google Patents

Machine learning-based mechanical part industrial quality inspection method

Info

Publication number
CN117315376B
CN117315376B (application CN202311596473.8A)
Authority
CN
China
Prior art keywords
image
algorithm
data
samples
sample
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311596473.8A
Other languages
Chinese (zh)
Other versions
CN117315376A (en)
Inventor
张镇
靖婉琦
刘晨甲
王兆信
宋光恒
靖朋鹤
徐如明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shuju Shandong Intelligent Technology Co ltd
Liaocheng Laike Intelligent Robot Co ltd
Original Assignee
Shuju Shandong Intelligent Technology Co ltd
Liaocheng Laike Intelligent Robot Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shuju Shandong Intelligent Technology Co ltd, Liaocheng Laike Intelligent Robot Co ltd filed Critical Shuju Shandong Intelligent Technology Co ltd
Priority to CN202311596473.8A
Publication of CN117315376A
Application granted
Publication of CN117315376B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • G06N20/10Machine learning using kernel methods, e.g. support vector machines [SVM]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/004Artificial life, i.e. computing arrangements simulating life
    • G06N3/006Artificial life, i.e. computing arrangements simulating life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30Computing systems specially adapted for manufacturing

Abstract

The invention discloses a machine learning-based industrial quality inspection method for mechanical parts, which belongs to the technical field of image processing and comprises the following steps: S1, data acquisition and labeling; S2, data preprocessing; S3, data expansion; S4, data feature extraction; S5, classifier training. By using a data expansion algorithm, an artificial ant colony algorithm, a graph neural network algorithm and an improved classifier algorithm, the invention can better adapt to different types of parts and quality inspection scenarios, accurately capture key features of the parts such as shape and texture, provide accurate input for the subsequent classifier, and adaptively adjust the classification boundary through the introduction of a negative feedback algorithm.

Description

Machine learning-based mechanical part industrial quality inspection method
Technical Field
The invention relates to the technical field of image processing, in particular to a machine learning-based mechanical part industrial quality inspection method.
Background
In traditional quality inspection of mechanical parts, visual inspection and judgment usually have to be carried out manually; this approach suffers from high subjectivity, low efficiency and a tendency to produce misjudgments. With the development of machine learning and computer vision technologies, more and more research has begun to apply these technologies to the quality inspection of mechanical parts in order to improve the accuracy, efficiency and degree of automation of quality inspection.
Machine learning has made significant progress in the fields of image recognition and computer vision. Deep learning models such as convolutional neural networks have achieved remarkable results in image classification, object detection and segmentation tasks. The success of these techniques has stimulated interest in applying them to the quality inspection of mechanical parts.
However, many challenges remain in the field of quality inspection of mechanical parts. First, mechanical parts have complex shapes and various defect types, and conventional image processing and feature extraction methods may not accurately capture the key features. Second, because of the variety of parts and defects, a large number of data samples are required to train a robust model with good generalization ability; however, the high cost of data acquisition, the limited sample size and the difficulty of labeling restrict the application of conventional methods. In addition, conventional classifier models may have difficulty accurately classifying complex part morphologies and defects.
Aiming at the technical background and the problems, the invention provides a machine learning-based mechanical part industrial quality inspection method. By introducing machine learning, deep learning and related technologies and combining image processing, feature extraction and classifier algorithms, the method aims to overcome the defects of the traditional technology and improve the accuracy, efficiency and automation degree of quality inspection of mechanical parts. Through the steps of data acquisition and marking, data preprocessing, data expansion, data characteristic extraction, classifier training and the like, the automatic quality control of mechanical parts can be realized, and the quality control and efficiency on a production line are improved.
The Chinese patent with publication number CN116087217B in the prior art discloses a machine-vision-based dynamic quality detection module and method for an industrial assembly line; by arranging a camera assembly, a camera controller and a quality detection assembly based on an image feature algorithm, comprehensive quality detection of multiple surfaces is realized on the industrial assembly line;
The Chinese patent with publication number CN116309556B in the prior art discloses a quality management method for finished steel members based on machine vision technology; the technical scheme analyzes the quality evaluation coefficients of all parts in a steel member so as to further evaluate the quality of each part, comprehensively evaluates all steel members of qualified quality, and at the same time summarizes the quality inspection problems of the steel members of unqualified quality so as to facilitate tracking and tracing;
The Chinese patent with publication number CN116091506B in the prior art discloses a machine vision defect quality inspection method based on YOLOv5, in which a quality inspection platform is built and model training is performed with collected qualified or unqualified sample information; based on a deep learning algorithm, the concept of a difference threshold is introduced at the same time, so that product quality is improved;
Although the prior art achieves the aim of part quality inspection in different ways, the following problems still need to be solved:
1. Insufficient data acquisition and labeling: traditional mechanical part quality inspection methods generally rely on manual visual inspection and labeling of the parts, which suffers from high subjectivity, low efficiency and a tendency to produce misjudgments; meanwhile, inconsistency in the data acquisition process and the subjectivity of labeling can cause inaccuracy and deviation in the acquired data, thereby affecting the performance of the machine learning algorithm;
2. Limitations of conventional image processing methods: traditional image processing methods have certain limitations when processing images of mechanical parts; for example, when part shapes are complex and illumination conditions vary greatly, they may fail to accurately extract and express key features, making the quality inspection results inaccurate or unstable;
3. Insufficiency and homogeneity of data samples: in traditional mechanical part quality inspection, because data acquisition is costly and samples are difficult to obtain, only limited data samples can be collected, and these samples may be highly similar; as a result, the training data are insufficient to cover the various part forms and defect types, which limits the generalization ability and adaptability of the model;
4. Difficulty of feature extraction: feature extraction is a key step in the quality inspection of mechanical parts; traditional feature extraction methods often require manually designed feature descriptors, which demands the experience and knowledge of domain experts; however, owing to the variety of part shapes and defect types, traditional manual feature extraction may not fully capture the key features of the parts, affecting the accuracy and stability of the quality inspection results;
5. Performance limitations of the classifier: the classifier models commonly used in traditional mechanical part quality inspection methods have performance limitations; for example, a linear classifier may perform poorly on problems that are not linearly separable and cannot accurately classify complex part forms and defects; in addition, traditional classifiers are sensitive to sample distribution and label imbalance, leading to larger deviations in the quality inspection results;
Based on this, the invention provides a machine learning-based mechanical part industrial quality inspection method.
Disclosure of Invention
The invention aims to provide a quality inspection method for realizing automatic quality inspection of mechanical parts by using a machine learning method so as to improve the accuracy, efficiency and consistency of quality inspection.
In order to solve the technical problems, the invention adopts the following technical scheme: the machine learning-based mechanical part industrial quality inspection method comprises the following steps:
s1, collecting and marking data, namely collecting picture data of mechanical parts, and marking the data, wherein the picture data comprises defect detection and classification information;
s2, preprocessing data, namely preprocessing acquired picture data, including graying, scale normalization and contrast enhancement operation, so as to facilitate subsequent machine learning algorithm processing;
s3, data expansion, namely generating synthetic samples with diversity through a generative adversarial network (GAN), a two-stage training strategy, a multi-objective optimization algorithm and an adaptive sampling algorithm, so as to increase the diversity of the training data;
s4, extracting data features, extracting key feature points in part images by using an artificial ant colony algorithm and a graph neural network algorithm, and generating feature descriptors through the graph neural network for training and predicting a subsequent classifier;
s5, training the classifier, adopting an improved classifier algorithm, combining the ideas of a support vector machine and negative feedback, searching an optimal hyperplane, realizing classification of the part images, and improving the performance and the robustness of the classifier by optimizing the problem and adjusting the classification boundary.
Further, the data preprocessing in S2 includes the following steps:
s201, graying processing: converting the color image into a gray scale image;
s202, scale normalization processing: carrying out scale normalization on the gray level image to ensure that the sizes of the images are consistent;
s203, contrast enhancement processing: by increasing the contrast of the image, the target features in the image are made more pronounced.
Further, the image matrix after graying in S201 is denoted as G, and for a color image, it is converted into a gray image, the gray image contains only one channel, and for each pixel, its gray value represents the luminance information in the image, the gray value range of the gray image is 0 to 255, where 0 represents black, 255 represents white, and the color image is converted into a gray image using the following formula:
G(i,j) = 0.299·R(i,j) + 0.587·G(i,j) + 0.114·B(i,j),
wherein G(i,j) represents the gray value of the gray image at position (i,j), and R(i,j), G(i,j) and B(i,j) respectively represent the red, green and blue channel values of the color image at position (i,j).
Further, the image matrix after scale normalization in S202 is denoted as N; the size of the original image is H×W, and it is adjusted to a fixed size H'×W'; bilinear interpolation is used for image scale normalization, and the specific formula is as follows:
N(i,j) = Σ_{k=1}^{4} w_k · G(x_k, y_k),
wherein N(i,j) represents the pixel value of the normalized image at position (i,j), G(x_k, y_k) represents the four adjacent pixel values in the original gray image, and w_k is the bilinear interpolation weight.
Further, the normalized image in S203 is N and its gray values range over [0, L-1], where L is the number of gray levels; the histogram equalization formula is as follows:
E(i,j) = (L-1)/(H'×W') × Σ_{k=0}^{N(i,j)} n_k,
wherein E(i,j) represents the pixel value of the enhanced image at position (i,j), N(i,j) represents the pixel value of the normalized image at position (i,j), and n_k represents the number of pixels of gray level k in the normalized image.
Further, the method for expanding the data in S3 includes:
s301, training a generator network G for generating a synthesized part image; meanwhile, training a discriminator network D for distinguishing a real sample from a synthesized sample; gradually improving the generating capacity of the generator through alternately training the generator and the discriminator;
the goal of the generator network G is to minimize the difference between the generated samples and the real samples; the loss function of the generator is defined as follows:
L_G = E_{z~p_z(z)}[log(1 - D(G(z)))],
the goal of the discriminator network D is to maximize its ability to discriminate between real and synthesized samples; the loss function of the discriminator is defined as follows:
L_D = E_{x~p_data(x)}[log D(x)] + E_{z~p_z(z)}[log(1 - D(G(z)))],
where x represents a real sample, z represents the input noise of the generator, D(x) represents the discrimination result of the discriminator on the real sample, D(G(z)) represents the discrimination result of the discriminator on the generated sample, λ is a regularization parameter, x~p_data(x) indicates that x follows the data distribution, and z~p_z(z) indicates that z follows the noise distribution;
by minimizing the loss function of the generator L_G and maximizing the loss function of the discriminator L_D, the generator and the discriminator promote each other, and the effect of data expansion is gradually improved;
s302, introducing a diversity objective function and a similarity objective function, wherein the diversity objective function is used for encouraging the generator to generate diversity samples, and the similarity objective function is used for keeping the similarity between the generated samples and the real samples;
the diversity objective function is defined as follows:
L_div = -(1/(N·K·(K-1))) Σ_{i=1}^{N} Σ_{j=1}^{K} Σ_{j'≠j} ||G_j(z_i) - G_{j'}(z_i)||^2,
the similarity objective function is defined as follows:
L_sim = (1/(N·K)) Σ_{i=1}^{N} Σ_{j=1}^{K} ||G_j(z_i) - x||^2;
where N represents the number of noise vectors input to the generator, K represents the number of synthesized samples corresponding to each noise vector, z_i denotes the i-th noise vector, G_j(z_i) denotes the j-th synthesized sample corresponding to the i-th noise vector, and x represents a real sample; by minimizing the diversity objective function L_div and minimizing the similarity objective function L_sim, the diversity and similarity of the generated samples are balanced;
s303, dynamically adjusting a sampling strategy by utilizing an adaptive sampling algorithm according to training states of the generator and the discriminator, and improving the coverage of the generator in a sample space.
Further, the adaptive sampling algorithm in S303 includes the following steps:
s3031, initializing the sampling probability distribution p_0(z) and setting the number of sampling rounds T;
s3032, for each sampling round t = 1, 2, …, T, performing the following operations:
1) sampling from the current sampling probability distribution p_{t-1}(z) to obtain a sample z_t;
2) calculating the importance weight w_t of the sample z_t according to the current training states of the generator and the discriminator;
3) updating the sampling probability distribution: p_t(z) ∝ p_{t-1}(z)·w_t;
s3033, returning the set of all samples obtained by sampling.
Further, the data feature extraction in S4 includes the following steps:
s401, searching important feature points in the part image by using an artificial ant colony algorithm: assuming that the part image is I and contains n pixel points {p_1, p_2, …, p_n}, m feature points {f_1, f_2, …, f_m} are found in the image using the artificial ant colony algorithm; the artificial ant colony algorithm simulates ants moving on the image, selects the next position to move to according to a specific probability, and guides the movement of the ants through local and global pheromones;
assuming an ant is located at a point p_i and needs to select the next position p_j to move to, the probability that the ant selects to move to the point p_j is calculated from the pheromone concentration τ_ij and the heuristic information η_ij;
heuristic information represents the expected distance of the ant from the current position to the next position; the Euclidean distance is used as the heuristic information, expressed as:
η_ij = ||p_i - p_j||_2, i.e., the Euclidean distance between the pixel positions p_i and p_j,
the pheromone concentration represents the probability of selecting the corresponding direction of ant movement; initially, all pheromone concentrations are the same constant value, expressed as τ_ij(0) = τ_0;
The pheromone updating formula of the artificial ant colony algorithm is as follows:
τ_ij(t+1) = (1 - ρ)·τ_ij(t) + Δτ_ij,
wherein τ_ij(t+1) represents the pheromone concentration from point p_i to point p_j at the (t+1)-th iteration, τ_ij(t) represents the pheromone concentration from point p_i to point p_j at the t-th iteration, ρ is the pheromone evaporation coefficient, and Δτ_ij is the pheromone increment deposited by the ants at the current moment. Specifically, the pheromone increment between the current position p_i and the next position p_j is defined as:
Δτ_ij = Σ_{k=1}^{K} Δτ_ij^k,
wherein K is the number of ants and Δτ_ij^k is the pheromone increment of the k-th ant between the current position p_i and the next position p_j;
Δτ_ij^k is calculated using the following formula: Δτ_ij^k = Q / L_k(t),
wherein Q is a constant indicating the magnitude of the pheromone increment, and L_k(t) represents the path length of the k-th ant at the current time t;
by continuously iteratively updating the pheromone concentration, ants will be guided to key feature points in the image;
s402, after the feature points {f_1, f_2, …, f_m} are obtained, a graph neural network algorithm is used to extract the feature descriptors; the graph neural network is a deep learning model capable of processing graph-structured data, which can extract the relations between nodes in the image and generate vectors representing node features;
assume that the input of the graph neural network is a feature point f_i and the feature vectors of its surrounding neighbor nodes, expressed as h_i and {h_j, j ∈ N(i)}; the representation z_i of the feature point is learned using the graph neural network, and the network parameters are trained by minimizing the loss function L_GNN; the loss function of the graph neural network is defined as follows:
L_GNN = Σ_{i=1}^{m} ||z_i - y_i||_2^2,
wherein z_i represents the learned representation of the feature point f_i, y_i represents the true value of the feature point f_i obtained by the artificial ant colony algorithm, and ||·||_2 represents the L2 norm;
and obtaining feature descriptors of each feature point in the part image by training the graph neural network, wherein the descriptors are used as input of a subsequent classifier and are used for performing quality inspection tasks.
Further, the improved classifier algorithm in S5 combines the ideas of support vector machine and negative feedback, adjusts the classification boundary according to the output of the classifier, and improves the robustness of the classifier, comprising the following steps:
s501, searching the optimal hyperplane through a support vector machine so as to maximize the margin between positive and negative samples; assume that the training set contains m_s samples {(x_i, y_i)}, i = 1, …, m_s, wherein x_i is the input feature and y_i is the corresponding label; for the classification task, the label y_i ∈ {+1, -1}, where +1 represents a positive sample and -1 represents a negative sample; the optimization objective of the support vector machine is expressed as the following convex optimization problem:
min_{w,b,ξ} (1/2)||w||^2 + C·Σ_{i=1}^{m_s} ξ_i,
the constraints are: y_i(w^T x_i + b) ≥ 1 - ξ_i, ξ_i ≥ 0, i = 1, …, m_s,
wherein w is the normal vector of the hyperplane, b is the offset of the hyperplane, ||w|| represents the norm of w, ξ_i are the relaxation variables, and C is a regularization parameter; the optimal hyperplane is thus obtained, and classification of the samples is realized;
s502, assuming that some difficult samples exist in the training set which cannot be correctly classified by the current classifier, these misclassified samples are called negative samples and are expressed as a set S_neg;
The classification boundary is adjusted through a negative feedback algorithm, the error rate of a negative sample is reduced, the process of adjusting the classification boundary is converted into an optimization problem through introducing a punishment item, and an optimization target is defined as follows:
,
wherein,for the output of the classifier->Is a regularization parameter; the classification boundary is adjusted by optimizing the objective function, so that the performance of the classifier is improved;
s503, introducing Lagrange multipliers α_i ≥ 0 and μ_i ≥ 0 to obtain the Lagrangian function:
L(w, b, ξ, α, μ) = (1/2)||w||^2 + C·Σ_{i=1}^{m_s} ξ_i - Σ_{i=1}^{m_s} α_i[y_i(w^T x_i + b) - 1 + ξ_i] - Σ_{i=1}^{m_s} μ_i ξ_i,
wherein α_i and μ_i are the Lagrange multipliers;
by maximizing the Lagrangian function with respect to α and μ, the dual problem is obtained:
max_α Σ_{i=1}^{m_s} α_i - (1/2)·Σ_{i=1}^{m_s} Σ_{j=1}^{m_s} α_i α_j y_i y_j x_i^T x_j, subject to 0 ≤ α_i ≤ C and Σ_{i=1}^{m_s} α_i y_i = 0;
and solving the dual problem yields the optimal solution of the support vector machine and realizes classification of the samples.
Compared with the prior art, the invention has the beneficial effects that:
(1) Quality control accuracy is improved: the machine learning-based method is adopted to carry out industrial quality inspection of mechanical parts, so that the accuracy and consistency of quality inspection can be effectively improved, a large amount of accurate training data can be obtained through data acquisition and marking, the machine learning model can learn the characteristics of part defects from the training data, and automatic quality inspection is realized;
(2) Increasing data diversity: through a data expansion algorithm, synthetic samples with diversity are generated, so that the diversity of training data can be increased, the model can be better adapted to different types of parts and quality inspection scenes, and the robustness and generalization capability of the model are improved;
(3) Adaptive data feature extraction: the data feature extraction is carried out by adopting an artificial ant colony algorithm and a graph neural network algorithm, so that key feature points and feature descriptors can be automatically extracted from part images, key features such as shapes and textures of parts can be accurately captured, and accurate input is provided for subsequent classifiers;
(4) Improved classifier algorithm: by combining the ideas of the support vector machine and the negative feedback, an improved classifier algorithm is provided, the algorithm can find the optimal hyperplane to realize the accurate classification of the part images, and meanwhile, the classification boundary can be adaptively adjusted by introducing the negative feedback algorithm, so that the robustness and the adaptability of the classifier are improved.
Detailed Description
Examples: the machine learning-based mechanical part industrial quality inspection method comprises the following steps:
s1, data acquisition and labeling, namely acquiring picture data of mechanical parts, and performing data labeling, wherein the data comprises defect detection and classification information.
The data sources mainly comprise mechanical parts on the production line. Each part is photographed by a high-resolution industrial camera, and the resulting picture data are used for subsequent machine learning training. The picture data include views of the parts from various angles so that part information can be captured from multiple directions, and the pictures are stored in a common image format such as ".jpg" or ".png".
After the data are collected, the data are marked, wherein the data marking is a process of manually judging and classifying the data, in the industrial quality inspection of mechanical parts, the data marking generally comprises marking information such as whether the parts have defects, the types of the defects, the positions of the defects and the like, and when a machine learning model is trained, the marked information can be used as a true value or a target value to help the model understand the relation between different characteristics and the quality of the parts.
S2, preprocessing the data, namely preprocessing the acquired picture data, including graying, scale normalization and contrast enhancement, and converting the acquired picture data into a form suitable for machine learning algorithm processing so as to facilitate subsequent machine learning algorithm processing.
S3, data expansion, namely generating a synthetic sample with diversity by generating an countermeasure network (GAN), a two-stage training strategy, a multi-objective optimization algorithm and an adaptive sampling algorithm so as to increase the diversity of training data.
S4, extracting data features, extracting key feature points in the part images by using an artificial ant colony algorithm and a graph neural network algorithm, and generating feature descriptors through the graph neural network for training and predicting of subsequent classifiers.
S5, training the classifier, adopting an improved classifier algorithm, combining the ideas of a support vector machine and negative feedback, searching an optimal hyperplane, realizing classification of the part images, and improving the performance and the robustness of the classifier by optimizing the problem and adjusting the classification boundary.
The goal of the data preprocessing in S2 is to perform a series of transformations and processing on the original image to eliminate noise in the image, improve image quality, highlight target features, etc., comprising the steps of:
s201, graying processing: converting the color image into a gray image, and simplifying the image processing process; the image matrix after graying is denoted as G, which for a color image is converted into a gray image comprising only one channel, for each pixel the gray value of which represents the luminance information in the image, the gray value of the gray image ranges from 0 to 255, wherein 0 represents black and 255 represents white, the color image is converted into a gray image using the following formula:
G(i,j) = 0.299·R(i,j) + 0.587·G(i,j) + 0.114·B(i,j),
wherein G(i,j) represents the gray value of the gray image at position (i,j), and R(i,j), G(i,j) and B(i,j) respectively represent the red, green and blue channel values of the color image at position (i,j).
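As a minimal illustrative sketch (not part of the patent itself), the graying step can be implemented with NumPy; the 0.299/0.587/0.114 weights are the conventional luma coefficients and are assumed here rather than taken from the patent text.

```python
import numpy as np

def to_gray(rgb: np.ndarray) -> np.ndarray:
    """Convert an H x W x 3 uint8 color image to an H x W uint8 gray image
    using the weighted sum G(i,j) = 0.299*R + 0.587*G + 0.114*B."""
    weights = np.array([0.299, 0.587, 0.114])
    gray = rgb.astype(np.float64) @ weights   # weighted sum over the channel axis
    return np.clip(gray, 0, 255).astype(np.uint8)
```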
S202, scale normalization processing: carrying out scale normalization on the gray level image to ensure that the sizes of the images are consistent; the matrix of the image after scale normalization is expressed as N, and the size of the original image is set asAdjust it to a fixed sizeThe bilinear interpolation method is used for carrying out image scale normalization, and the specific formula is as follows:
,
wherein,representing normalized image at position +.>The pixel value at which it is located,representing adjacent pixel values in the original gray image, and>is bilinear interpolation weight.
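A possible NumPy implementation of this bilinear scale normalization is sketched below; the coordinate-mapping convention and the helper name resize_bilinear are assumptions made for illustration.

```python
import numpy as np

def resize_bilinear(gray: np.ndarray, out_h: int, out_w: int) -> np.ndarray:
    """Scale-normalize a gray image to (out_h, out_w); every output pixel is a
    weighted sum of its four neighbouring pixels in the source image."""
    in_h, in_w = gray.shape
    # Source coordinates corresponding to each output pixel center.
    ys = (np.arange(out_h) + 0.5) * in_h / out_h - 0.5
    xs = (np.arange(out_w) + 0.5) * in_w / out_w - 0.5
    y0 = np.clip(np.floor(ys).astype(int), 0, in_h - 2)
    x0 = np.clip(np.floor(xs).astype(int), 0, in_w - 2)
    wy = np.clip(ys - y0, 0.0, 1.0)[:, None]   # vertical interpolation weights
    wx = np.clip(xs - x0, 0.0, 1.0)[None, :]   # horizontal interpolation weights
    g = gray.astype(np.float64)
    top = g[y0][:, x0] * (1 - wx) + g[y0][:, x0 + 1] * wx
    bot = g[y0 + 1][:, x0] * (1 - wx) + g[y0 + 1][:, x0 + 1] * wx
    return ((1 - wy) * top + wy * bot).astype(np.uint8)
```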
S203, contrast enhancement processing: by increasing the contrast of the image, the target features in the image are more obvious, and the normalized image isThe gray value range is +.>Where L is the number of gray levels, the histogram equalization formula is as follows:
,
wherein,representing the enhanced image in position +.>Pixel value at +.>Representing normalized image at position +.>Pixel value at +.>Representing gray level +.>The number of pixels in the normalized image.
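The histogram equalization step described above might be realized as follows; the choice of 256 gray levels is an assumption.

```python
import numpy as np

def equalize_hist(norm_img: np.ndarray, levels: int = 256) -> np.ndarray:
    """Contrast enhancement by histogram equalization: each output pixel is
    (L - 1) / (H * W) times the cumulative count of gray levels up to N(i, j)."""
    hist = np.bincount(norm_img.ravel(), minlength=levels)  # n_k: pixels per gray level
    cdf = np.cumsum(hist)                                   # cumulative sum of n_k
    lut = (levels - 1) * cdf / norm_img.size                # mapping table per gray level
    return lut[norm_img].astype(np.uint8)
```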
The purpose of data expansion in S3 is to increase the diversity of the training data so as to improve the robustness and generalization ability of the model. In an industrial quality inspection task, owing to the variety of part shapes, viewing angles, illumination and other factors, a large number of samples are required to cover the different conditions and obtain an accurate classification model; therefore, an innovative data expansion algorithm is provided to generate synthetic samples with diversity. The data expansion algorithm is based on a generative adversarial network (GAN), a two-stage training strategy, a multi-objective optimization algorithm and an adaptive sampling algorithm, and the data expansion method comprises the following steps: S301, training a generator network G for generating synthesized part images; meanwhile, training a discriminator network D for distinguishing real samples from synthesized samples; the generating capacity of the generator is gradually improved by alternately training the generator and the discriminator.
The goal of the generator network G is to minimize the difference between the generated samples and the real samples, defining the loss function of the generator as follows:
,
the goal of the arbiter network D is to maximize the discrimination capability for both real and composite samples, defining the loss function of the arbiter as follows:
,
where x represents the real sample, z represents the input noise of the generator,representing the discrimination result of the discriminator on the real sample, < >>Representing the discrimination result of the discriminator on the generated sample, < >>For regularization parameters, ++>Indicating compliance with data distribution->The representation is subject to a noise distribution;
by minimizing the loss function of the generatorAnd maximizing the loss function of the arbiter>The generator and the discriminator can be mutually promoted, and the effect of data expansion is gradually improved.
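For illustration only, a minimal PyTorch sketch of the alternating generator/discriminator updates is shown below; the fully connected network shapes, learning rates, the flattened 64x64 image size and the non-saturating form of the generator update are assumptions, not details taken from the patent.

```python
import torch
from torch import nn

latent_dim, img_dim = 64, 64 * 64   # assumed sizes for flattened gray part images
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, img_dim), nn.Tanh())
D = nn.Sequential(nn.Linear(img_dim, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real: torch.Tensor):
    """One alternating update: first the discriminator, then the generator."""
    b = real.size(0)
    ones, zeros = torch.ones(b, 1), torch.zeros(b, 1)
    # Discriminator: push D(x) toward 1 and D(G(z)) toward 0.
    fake = G(torch.randn(b, latent_dim)).detach()
    loss_d = bce(D(real), ones) + bce(D(fake), zeros)
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # Generator: push D(G(z)) toward 1.
    loss_g = bce(D(G(torch.randn(b, latent_dim))), ones)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()
```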
S302, introducing a diversity objective function and a similarity objective function, wherein the diversity objective function is used for encouraging the generator to generate diversity samples, and the similarity objective function is used for keeping the similarity of the generated samples and the real samples.
The diversity objective function is defined as follows:
,
the similarity objective function is defined as follows:
,
where N represents the number of noise input by the generator, K represents the number of synthesized samples corresponding to each noise,indicate->Noise (S)>Indicate->The corresponding->Sample number->Representing a real sample by minimizing the diversity objective function +.>And minimizing the similarity objective function +.>The diversity and similarity of the generated samples are balanced.
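A hedged sketch of how such diversity and similarity terms could be computed for a batch of generated samples is given below; the exact functional forms used by the patent are not recoverable from the text, so this follows the pairwise-distance and nearest-real-sample interpretation described above, and the argument layout (K synthesized samples per noise vector) is assumed.

```python
import torch

def diversity_similarity(fake: torch.Tensor, real: torch.Tensor):
    """fake: (N, K, D) tensor, K synthesized samples per noise vector; real: (M, D).
    Returns (L_div, L_sim): L_div rewards spread among samples from the same noise,
    L_sim penalizes the distance of each synthesized sample to its closest real sample."""
    n, k, d = fake.shape
    # Pairwise distances inside each group of K samples (diversity term, negated).
    pdist = torch.cdist(fake, fake)                       # (N, K, K)
    l_div = -pdist.pow(2).sum(dim=(1, 2)).mean() / max(k * (k - 1), 1)
    # Distance of every synthesized sample to the nearest real sample (similarity term).
    flat = fake.reshape(-1, d)                            # (N*K, D)
    l_sim = torch.cdist(flat, real).min(dim=1).values.pow(2).mean()
    return l_div, l_sim
```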
S303, dynamically adjusting a sampling strategy by utilizing an adaptive sampling algorithm according to training states of the generator and the discriminator, and improving the coverage of the generator in a sample space.
The adaptive sampling algorithm in S303 includes the steps of:
s3031, initializing the sampling probability distribution p_0(z) and setting the number of sampling rounds T;
s3032, for each sampling round t = 1, 2, …, T, performing the following operations:
1) sampling from the current sampling probability distribution p_{t-1}(z) to obtain a sample z_t;
2) calculating the importance weight w_t of the sample z_t according to the current training states of the generator and the discriminator;
3) updating the sampling probability distribution: p_t(z) ∝ p_{t-1}(z)·w_t;
s3033, returning the set of all samples obtained by sampling.
The purpose of data feature extraction in S4 is to extract key features from the part image for training and prediction by the subsequent classifier; in an industrial quality inspection task, features such as the shape and texture of a part are significant for the classification task, and the invention therefore provides an innovative data feature extraction algorithm for automatically extracting the key features of the part image, comprising the following steps:
s401, searching important feature points in the part image by using an artificial ant colony algorithm: assuming that the part image is I and contains n pixel points {p_1, p_2, …, p_n}, m feature points {f_1, f_2, …, f_m} are found in the image using the artificial ant colony algorithm; the artificial ant colony algorithm simulates ants moving on the image, selects the next position to move to according to a specific probability, and guides the movement of the ants through local and global pheromones.
Assuming an ant is located at a point p_i and needs to select the next position p_j to move to, the probability that the ant selects to move to the point p_j is calculated from the pheromone concentration τ_ij and the heuristic information η_ij.
Heuristic information represents the expected distance of ants from the current location to the next location, using euclidean distance as heuristic information, expressed as:
,
the pheromone concentration represents the probability of selecting the direction of ant movement, and initially, all the pheromone concentrations are the same constant value and expressed as
The pheromone updating formula of the artificial ant colony algorithm is as follows:
τ_ij(t+1) = (1 - ρ)·τ_ij(t) + Δτ_ij,
wherein τ_ij(t+1) represents the pheromone concentration from point p_i to point p_j at the (t+1)-th iteration, τ_ij(t) represents the pheromone concentration from point p_i to point p_j at the t-th iteration, ρ is the pheromone evaporation coefficient, and Δτ_ij is the pheromone increment deposited by the ants at the current moment. Specifically, the pheromone increment between the current position p_i and the next position p_j is defined as:
Δτ_ij = Σ_{k=1}^{K} Δτ_ij^k,
wherein K is the number of ants and Δτ_ij^k is the pheromone increment of the k-th ant between the current position p_i and the next position p_j.
Δτ_ij^k is calculated using the following formula:
Δτ_ij^k = Q / L_k(t),
wherein Q is a constant indicating the magnitude of the pheromone increment, and L_k(t) represents the path length of the k-th ant at the current time t.
By continually iteratively updating the pheromone concentration, ants will be directed to key feature points in the image.
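The following is a simplified, illustrative ant colony search over a gray part image; the gradient-magnitude attractiveness, the fixed walk length and all parameter values are assumptions made for this sketch and are not prescribed by the patent.

```python
import numpy as np

def ant_colony_feature_points(gray, n_ants=20, n_iters=50, m_points=30,
                              rho=0.1, q=1.0, rng=None):
    """Ants walk over the image, preferring pixels with high pheromone * gradient
    magnitude; pheromone evaporates by rho and is reinforced by Q / path_length;
    the m pixels with the strongest pheromone are returned as feature points."""
    rng = rng or np.random.default_rng(0)
    g = gray.astype(np.float64)
    gy, gx = np.gradient(g)
    eta = np.hypot(gx, gy) + 1e-6               # heuristic attractiveness of each pixel
    tau = np.ones_like(g)                       # tau_0: uniform initial pheromone
    h, w = g.shape
    moves = [(-1, 0), (1, 0), (0, -1), (0, 1), (-1, -1), (-1, 1), (1, -1), (1, 1)]
    for _ in range(n_iters):
        delta = np.zeros_like(tau)
        for _ in range(n_ants):
            y, x = rng.integers(1, h - 1), rng.integers(1, w - 1)
            path, length = [(y, x)], 0.0
            for _ in range(20):                 # fixed walk length per ant (assumed)
                cand = [(y + dy, x + dx) for dy, dx in moves
                        if 0 <= y + dy < h and 0 <= x + dx < w]
                scores = np.array([tau[cy, cx] * eta[cy, cx] for cy, cx in cand])
                cy, cx = cand[rng.choice(len(cand), p=scores / scores.sum())]
                length += np.hypot(cy - y, cx - x)
                y, x = cy, cx
                path.append((y, x))
            for py, px in path:                 # deposit Q / L_k along the ant's path
                delta[py, px] += q / max(length, 1e-6)
        tau = (1 - rho) * tau + delta           # evaporation plus reinforcement
    top = np.argsort(tau.ravel())[::-1][:m_points]
    return np.stack(np.unravel_index(top, tau.shape), axis=1)   # (m, 2) row/col coords
```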
S402, obtaining characteristic pointsAnd then, extracting the feature descriptors by using a graph neural network algorithm, wherein the graph neural network is a deep learning model capable of processing graph structure data, and is capable of extracting the relation between nodes in an image and generating vectors representing the features of the nodes.
Assume that the input of the graph neural network is a feature point f_i and the feature vectors of its surrounding neighbor nodes, expressed as h_i and {h_j, j ∈ N(i)}; the representation z_i of the feature point is learned using the graph neural network, and the network parameters are trained by minimizing the loss function L_GNN; the loss function of the graph neural network is defined as follows:
L_GNN = Σ_{i=1}^{m} ||z_i - y_i||_2^2,
wherein z_i represents the learned representation of the feature point f_i, y_i represents the true value of the feature point f_i obtained by the artificial ant colony algorithm, and ||·||_2 represents the L2 norm.
And obtaining feature descriptors of each feature point in the part image by training the graph neural network, wherein the descriptors are used as input of a subsequent classifier and are used for performing quality inspection tasks.
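A minimal sketch of a message-passing network trained with the squared-error loss above is shown below; the single round of mean aggregation, layer sizes and training schedule are illustrative assumptions rather than the patent's architecture.

```python
import torch
from torch import nn

class SimpleGNNDescriptor(nn.Module):
    """One round of mean-aggregation message passing over the feature-point graph,
    producing a descriptor z_i for every feature point."""
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.self_lin = nn.Linear(in_dim, out_dim)
        self.neigh_lin = nn.Linear(in_dim, out_dim)

    def forward(self, h: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # h: (m, in_dim) node features; adj: (m, m) float 0/1 adjacency of neighbours.
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        neigh_mean = adj @ h / deg                     # mean of neighbour features
        return torch.relu(self.self_lin(h) + self.neigh_lin(neigh_mean))

def train_descriptors(h, adj, targets, epochs=200, lr=1e-2):
    """Minimize L_GNN = sum_i ||z_i - y_i||_2^2 against the ant-colony targets y_i."""
    model = SimpleGNNDescriptor(h.size(1), targets.size(1))
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        loss = ((model(h, adj) - targets) ** 2).sum()
        opt.zero_grad(); loss.backward(); opt.step()
    return model
```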
The classifier algorithm in S5 is used for classifying the part image and judging whether its quality meets the requirement. In the invention, an improved classifier algorithm is provided by combining the ideas of the support vector machine and negative feedback: the support vector machine is a commonly used classification model that can find the optimal hyperplane in a high-dimensional space and realize effective classification of samples, while negative feedback is an adaptive learning algorithm that can adjust the classification boundary according to the output of the classifier and improve the robustness of the classifier. The method comprises the following steps:
s501, searching the optimal hyperplane through a support vector machine so as to maximize the margin between positive and negative samples; assume that the training set contains m_s samples {(x_i, y_i)}, i = 1, …, m_s, wherein x_i is the input feature and y_i is the corresponding label; for the classification task, the label y_i ∈ {+1, -1}, where +1 represents a positive sample and -1 represents a negative sample; the optimization objective of the support vector machine can be expressed as the following convex optimization problem:
min_{w,b,ξ} (1/2)||w||^2 + C·Σ_{i=1}^{m_s} ξ_i,
the constraints are: y_i(w^T x_i + b) ≥ 1 - ξ_i, ξ_i ≥ 0, i = 1, …, m_s,
wherein w is the normal vector of the hyperplane, b is the offset of the hyperplane, ||w|| represents the norm of w, ξ_i are the relaxation variables, and C is a regularization parameter; the optimal hyperplane is thus obtained, and classification of the samples is achieved.
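For reference, the soft-margin convex problem above (and its dual) is what scikit-learn's SVC solves internally for a linear kernel; the toy data and the value of C below are purely illustrative.

```python
import numpy as np
from sklearn.svm import SVC

# X: (n_samples, n_features) feature descriptors; y: labels in {+1, -1}.
X = np.random.default_rng(0).normal(size=(200, 16))
y = np.where(X[:, 0] + 0.5 * X[:, 1] > 0, 1, -1)

clf = SVC(kernel="linear", C=1.0)   # soft-margin SVM; C is the regularization parameter
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
w, b = clf.coef_[0], clf.intercept_[0]   # hyperplane f(x) = w^T x + b
```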
S502, assuming that some difficult samples exist in the training set, they cannot be correctly classified by the current classifier, and the error samples are called negative samples and are expressed as a set
The classification boundary is adjusted through a negative feedback algorithm, the error rate of a negative sample is reduced, the process of adjusting the classification boundary is converted into an optimization problem through introducing a punishment item, and an optimization target is defined as follows:
,
wherein,for the output of the classifier->Is a regularization parameter; and the classification boundary is adjusted by optimizing the objective function, so that the performance of the classifier is improved.
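One simple way to realize the negative-feedback idea in practice, sketched here as an assumption rather than the patent's exact procedure, is to refit the SVM with larger sample weights on the misclassified set S_neg, which pulls the decision boundary toward those hard samples; the weight factor is illustrative.

```python
import numpy as np
from sklearn.svm import SVC

def negative_feedback_refit(X, y, C=1.0, penalty=5.0):
    """Fit an SVM, collect the misclassified 'negative feedback' set S_neg, then refit
    with an increased penalty weight on those samples to adjust the boundary."""
    clf = SVC(kernel="linear", C=C).fit(X, y)
    wrong = clf.predict(X) != y                 # S_neg: samples the classifier gets wrong
    weights = np.where(wrong, penalty, 1.0)     # heavier loss contribution for hard samples
    return SVC(kernel="linear", C=C).fit(X, y, sample_weight=weights)
```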
S503, introducing Lagrangian multiplierAnd->Obtaining a Lagrangian function:
,
wherein,and->Is a Lagrangian multiplier;
by maximising the Lagrangian function pairTo obtain an optimal solution to the dual problem:
and solving the dual problem to obtain an optimal solution of the support vector machine and realize classification of samples.

Claims (5)

1. The machine learning-based mechanical part industrial quality inspection method is characterized by comprising the following steps of:
s1, collecting and marking data, namely collecting picture data of mechanical parts, and marking the data, wherein the picture data comprises defect detection and classification information;
s2, preprocessing data, namely preprocessing acquired picture data, wherein the preprocessing comprises graying, scale normalization and contrast enhancement;
s3, data expansion, namely generating synthetic samples with diversity through a generative adversarial network, a two-stage training strategy, a multi-objective optimization algorithm and an adaptive sampling algorithm;
the method for expanding the data in the S3 comprises the following steps:
s301, training a generator network G for generating a synthesized part image; meanwhile, training a discriminator network D for distinguishing a real sample from a synthesized sample; gradually improving the generating capacity of the generator through alternately training the generator and the discriminator;
the goal of the generator network G is to minimize the difference between the generated samples and the real samples, the loss function of the generator being defined as follows: L_G = E_{z~p_z(z)}[log(1 - D(G(z)))];
the goal of the discriminator network D is to maximize its ability to discriminate between real and synthesized samples, the loss function of the discriminator being defined as follows: L_D = E_{x~p_data(x)}[log D(x)] + E_{z~p_z(z)}[log(1 - D(G(z)))];
where x represents a real sample, z represents the input noise of the generator, D(x) represents the discrimination result of the discriminator on the real sample, D(G(z)) represents the discrimination result of the discriminator on the generated sample, λ is a regularization parameter, x~p_data(x) indicates that x follows the data distribution, and z~p_z(z) indicates that z follows the noise distribution;
by minimizing the loss function of the generator L_G and maximizing the loss function of the discriminator L_D, the generator and the discriminator promote each other, and the effect of data expansion is gradually improved;
s302, introducing a diversity objective function and a similarity objective function, wherein the diversity objective function is used for encouraging the generator to generate diversity samples, and the similarity objective function is used for keeping the similarity between the generated samples and the real samples;
the diversity objective function is defined as follows: L_div = -(1/(N·K·(K-1))) Σ_{i=1}^{N} Σ_{j=1}^{K} Σ_{j'≠j} ||G_j(z_i) - G_{j'}(z_i)||^2;
the similarity objective function is defined as follows: L_sim = (1/(N·K)) Σ_{i=1}^{N} Σ_{j=1}^{K} ||G_j(z_i) - x||^2;
where N represents the number of noise vectors input to the generator, K represents the number of synthesized samples corresponding to each noise vector, z_i denotes the i-th noise vector, G_j(z_i) denotes the j-th synthesized sample corresponding to the i-th noise vector, and x represents a real sample; by minimizing the diversity objective function L_div and minimizing the similarity objective function L_sim, the diversity and similarity of the generated samples are balanced;
s303, dynamically adjusting a sampling strategy by utilizing an adaptive sampling algorithm according to training states of the generator and the discriminator, and improving the coverage of the generator in a sample space;
the adaptive sampling algorithm in S303 includes the steps of:
s3031, initializing the sampling probability distribution p_0(z) and setting the number of sampling rounds T;
s3032, for each sampling round t = 1, 2, …, T, performing the following operations:
1) sampling from the current sampling probability distribution p_{t-1}(z) to obtain a sample z_t;
2) calculating the importance weight w_t of the sample z_t according to the current training states of the generator and the discriminator;
3) updating the sampling probability distribution: p_t(z) ∝ p_{t-1}(z)·w_t;
s3033, returning the set of all samples obtained by sampling;
s4, extracting data features, namely extracting key feature points in the part images by using an artificial ant colony algorithm and a graph neural network algorithm, and generating feature descriptors through the graph neural network;
the data feature extraction in S4 includes the following steps:
s401, searching important feature points in the part image by using an artificial ant colony algorithm: assuming that the part image is I and comprises n pixel points {p_1, p_2, …, p_n}, m feature points {f_1, f_2, …, f_m} are found in the image using the artificial ant colony algorithm; the artificial ant colony algorithm simulates ants moving on the image, selects the next position to move to according to a specific probability, and guides the movement of the ants through local and global pheromones;
assuming an ant is located at a point p_i and needs to select the next position p_j to move to, the probability that the ant selects to move to the point p_j is calculated from the pheromone concentration τ_ij and the heuristic information η_ij;
heuristic information represents the expected distance of the ant from the current position to the next position; the Euclidean distance is used as the heuristic information, expressed as: η_ij = ||p_i - p_j||_2, i.e., the Euclidean distance between the pixel positions p_i and p_j;
the pheromone concentration represents the probability of selecting the corresponding direction of ant movement; initially, all pheromone concentrations are the same constant value, expressed as τ_ij(0) = τ_0;
The pheromone updating formula of the artificial ant colony algorithm is as follows: τ_ij(t+1) = (1 - ρ)·τ_ij(t) + Δτ_ij,
wherein τ_ij(t+1) represents the pheromone concentration from point p_i to point p_j at the (t+1)-th iteration, τ_ij(t) represents the pheromone concentration from point p_i to point p_j at the t-th iteration, ρ is the pheromone evaporation coefficient, and Δτ_ij is the pheromone increment of the ants at the current moment; specifically, the pheromone increment between the current position p_i and the next position p_j is defined as: Δτ_ij = Σ_{k=1}^{K} Δτ_ij^k,
wherein K is the number of ants and Δτ_ij^k is the pheromone increment of the k-th ant between the current position p_i and the next position p_j;
Δτ_ij^k is calculated using the following formula: Δτ_ij^k = Q / L_k(t),
wherein Q is a constant indicating the magnitude of the pheromone increment, and L_k(t) represents the path length of the k-th ant at the current time t;
by continuously iteratively updating the pheromone concentration, ants will be guided to key feature points in the image;
s402, after the feature points {f_1, f_2, …, f_m} are obtained, a graph neural network algorithm is used to extract the feature descriptors; the graph neural network is a deep learning model capable of processing graph-structured data, which can extract the relations between nodes in the image and generate vectors representing node features;
assume that the input of the graph neural network is a feature point f_i and the feature vectors of its surrounding neighbor nodes, expressed as h_i and {h_j, j ∈ N(i)}; the representation z_i of the feature point is learned using the graph neural network, and the network parameters are trained by minimizing the loss function L_GNN; the loss function of the graph neural network is defined as follows: L_GNN = Σ_{i=1}^{m} ||z_i - y_i||_2^2,
wherein z_i represents the learned representation of the feature point f_i, y_i represents the true value of the feature point f_i obtained by the artificial ant colony algorithm, and ||·||_2 represents the L2 norm;
obtaining feature descriptors of each feature point in the part image by training a graph neural network, wherein the descriptors are used as input of a subsequent classifier and are used for performing quality inspection tasks;
s5, training a classifier, adopting an improved classifier algorithm, and combining a support vector machine and a negative feedback idea to find an optimal hyperplane so as to classify part images;
s5, the improved classifier algorithm combines the ideas of a support vector machine and negative feedback, adjusts classification boundaries according to the output of the classifier, improves the robustness of the classifier, and comprises the following steps:
s501, searching the optimal hyperplane through a support vector machine so as to maximize the margin between positive and negative samples; assume that the training set contains m_s samples {(x_i, y_i)}, i = 1, …, m_s, wherein x_i is the input feature and y_i is the corresponding label; for the classification task, the label y_i ∈ {+1, -1}, where +1 represents a positive sample and -1 represents a negative sample; the optimization objective of the support vector machine is expressed as the following convex optimization problem: min_{w,b,ξ} (1/2)||w||^2 + C·Σ_{i=1}^{m_s} ξ_i, subject to y_i(w^T x_i + b) ≥ 1 - ξ_i, ξ_i ≥ 0, i = 1, …, m_s,
wherein w is the normal vector of the hyperplane, b is the offset of the hyperplane, Σ_{i=1}^{m_s} ξ_i represents the L1-norm penalty on the relaxation variables ξ_i, and C is a regularization parameter; the optimal hyperplane is thus obtained, and classification of the samples is realized;
s502, assuming that some difficult samples exist in the training set which cannot be correctly classified by the current classifier, these misclassified samples are called negative samples and are expressed as a set S_neg;
The classification boundary is adjusted through a negative feedback algorithm, the error rate of a negative sample is reduced, the process of adjusting the classification boundary is converted into an optimization problem through introducing a punishment item, and an optimization target is defined as follows:
wherein,for the output of the classifier->Is a regularization parameter; the classification boundary is adjusted by optimizing the objective function, so that the performance of the classifier is improved;
s503, introducing Lagrange multipliers α_i ≥ 0 and μ_i ≥ 0 to obtain the Lagrangian function: L(w, b, ξ, α, μ) = (1/2)||w||^2 + C·Σ_{i=1}^{m_s} ξ_i - Σ_{i=1}^{m_s} α_i[y_i(w^T x_i + b) - 1 + ξ_i] - Σ_{i=1}^{m_s} μ_i ξ_i,
wherein α_i and μ_i are the Lagrange multipliers;
by maximizing the Lagrangian function with respect to α and μ, the dual problem is obtained: max_α Σ_{i=1}^{m_s} α_i - (1/2)·Σ_{i=1}^{m_s} Σ_{j=1}^{m_s} α_i α_j y_i y_j x_i^T x_j, subject to 0 ≤ α_i ≤ C and Σ_{i=1}^{m_s} α_i y_i = 0;
and solving the dual problem yields the optimal solution of the support vector machine and realizes classification of the samples.
2. The machine learning based machine part industrial quality inspection method of claim 1, wherein the data preprocessing in S2 comprises the steps of:
s201, graying processing: converting the color image into a gray scale image;
s202, scale normalization processing: carrying out scale normalization on the gray level image;
s203, contrast enhancement processing: by increasing the contrast of the image, the target features in the image are made more pronounced.
3. The machine learning based machine part industrial quality inspection method of claim 2, wherein the image matrix after graying in S201 is denoted as G, for a color image, it is converted into a gray image, the gray image contains only one channel, for each pixel, its gray value represents the luminance information in the image, the gray value of the gray image ranges from 0 to 255, where 0 represents black, 255 represents white, and the color image is converted into a gray image using the following formula:
G(i,j) = 0.299·R(i,j) + 0.587·G(i,j) + 0.114·B(i,j),
wherein G(i,j) represents the gray value of the gray image at position (i,j), and R(i,j), G(i,j) and B(i,j) respectively represent the red, green and blue channel values of the color image at position (i,j).
4. The machine learning based machine part industrial quality inspection method of claim 2, wherein the image matrix after scale normalization in S202 is denoted as N, the size of the original image is H×W, and it is adjusted to a fixed size H'×W'; bilinear interpolation is used for image scale normalization, and the specific formula is as follows: N(i,j) = Σ_{k=1}^{4} w_k · G(x_k, y_k),
wherein N(i,j) represents the pixel value of the normalized image at position (i,j), G(x_k, y_k) represents the four adjacent pixel values in the original gray image, and w_k is the bilinear interpolation weight.
5. The machine learning based machine part industrial quality inspection method of claim 4, wherein the normalized image in S203 is N and its gray values range over [0, L-1], where L is the number of gray levels; the histogram equalization formula is as follows: E(i,j) = (L-1)/(H'×W') × Σ_{k=0}^{N(i,j)} n_k,
wherein E(i,j) represents the pixel value of the enhanced image at position (i,j), N(i,j) represents the pixel value of the normalized image at position (i,j), and n_k represents the number of pixels of gray level k in the normalized image.
CN202311596473.8A 2023-11-28 2023-11-28 Machine learning-based mechanical part industrial quality inspection method Active CN117315376B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311596473.8A CN117315376B (en) 2023-11-28 2023-11-28 Machine learning-based mechanical part industrial quality inspection method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311596473.8A CN117315376B (en) 2023-11-28 2023-11-28 Machine learning-based mechanical part industrial quality inspection method

Publications (2)

Publication Number Publication Date
CN117315376A CN117315376A (en) 2023-12-29
CN117315376B (en) 2024-02-13

Family

ID=89273926

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311596473.8A Active CN117315376B (en) 2023-11-28 2023-11-28 Machine learning-based mechanical part industrial quality inspection method

Country Status (1)

Country Link
CN (1) CN117315376B (en)


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
IL239191A0 (en) * 2015-06-03 2015-11-30 Amir B Geva Image classification system

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106508046B (en) * 2011-12-28 2014-11-05 上海机电工程研究所 Small target detection method based on multi-scale bilateral optimization
WO2020172838A1 (en) * 2019-02-26 2020-09-03 长沙理工大学 Image classification method for improvement of auxiliary classifier gan
WO2021022752A1 (en) * 2019-08-07 2021-02-11 深圳先进技术研究院 Multimodal three-dimensional medical image fusion method and system, and electronic device
CN115345822A (en) * 2022-06-08 2022-11-15 南京航空航天大学 Automatic three-dimensional detection method for surface structure light of aviation complex part
CN115565009A (en) * 2022-10-12 2023-01-03 淮阴工学院 Electronic component classification method based on deep denoising sparse self-encoder and ISSVM
CN116843628A (en) * 2023-06-15 2023-10-03 华中农业大学 Lotus root zone nondestructive testing and grading method based on machine learning composite optimization
CN116938810A (en) * 2023-08-10 2023-10-24 重庆大学 Deep reinforcement learning SDN intelligent route optimization method based on graph neural network
CN117010576A (en) * 2023-10-07 2023-11-07 聊城莱柯智能机器人有限公司 Energy consumption prediction method based on elastic dynamic neural network
CN117009876A (en) * 2023-10-07 2023-11-07 长春光华学院 Motion state quantity evaluation method based on artificial intelligence

Non-Patent Citations (7)

* Cited by examiner, † Cited by third party
Title
"基于对抗多关系图神经网络的机器账号检测";杨英光等;《中文信息学报》;第37卷(第07期);全文 *
"基于梯度结构的图神经网络对抗攻击";李凝书;关东海;袁伟伟;《计算机系统应用》;第32卷(第07期);全文 *
"面向深度学习模型的对抗攻击与防御方法综述";姜妍,张立国;《计算机工程》;第47卷(第01期);全文 *
Geethalekshmy, V. ; Keerthana, P.G. ; Nair, P.N.."Comparative Study of Classification Algorithms for Weed detection".《2022 IEEE 3rd Global Conference for Advancement in Technology (GCAT)》.2023,全文. *
"Application status of remote sensing monitoring based on pixel comparison"; 杨红卫, 黄福伟; 《地理空间信息》 (Geospatial Information); No. 09; full text *
"Research on ant colony image segmentation algorithm based on gradient operator"; 薛琴, 陈玮, 罗俊奇; 《计算机工程与设计》 (Computer Engineering and Design); No. 23; full text *
宫久路 et al. 《目标检测与识别技术》 (Target Detection and Recognition Technology). Beijing Institute of Technology Press, 2022, pp. 163-165. *

Also Published As

Publication number Publication date
CN117315376A (en) 2023-12-29

Similar Documents

Publication Publication Date Title
CN109859171B (en) Automatic floor defect detection method based on computer vision and deep learning
CN108960245B (en) Tire mold character detection and recognition method, device, equipment and storage medium
CN108109162B (en) Multi-scale target tracking method using self-adaptive feature fusion
CN112069921A (en) Small sample visual target identification method based on self-supervision knowledge migration
CN107194418B (en) Rice aphid detection method based on antagonistic characteristic learning
CN111461134A (en) Low-resolution license plate recognition method based on generation countermeasure network
CN112200121B (en) Hyperspectral unknown target detection method based on EVM and deep learning
CN109325440B (en) Human body action recognition method and system
CN108537168B (en) Facial expression recognition method based on transfer learning technology
CN107169417B (en) RGBD image collaborative saliency detection method based on multi-core enhancement and saliency fusion
CN111652292B (en) Similar object real-time detection method and system based on NCS and MS
CN112862811A (en) Material microscopic image defect identification method, equipment and device based on deep learning
CN112464983A (en) Small sample learning method for apple tree leaf disease image classification
CN109815923B (en) Needle mushroom head sorting and identifying method based on LBP (local binary pattern) features and deep learning
CN111340019A (en) Grain bin pest detection method based on Faster R-CNN
CN111652836A (en) Multi-scale target detection method based on clustering algorithm and neural network
Zeng et al. Steel sheet defect detection based on deep learning method
CN104200226B (en) Particle filter method for tracking target based on machine learning
CN108921872B (en) Robust visual target tracking method suitable for long-range tracking
CN116206208B (en) Forestry plant diseases and insect pests rapid analysis system based on artificial intelligence
CN117315376B (en) Machine learning-based mechanical part industrial quality inspection method
CN113408573A (en) Method and device for automatically classifying and classifying tile color numbers based on machine learning
CN107704864B (en) Salient object detection method based on image object semantic detection
CN106447691A (en) Weighted extreme learning machine video target tracking method based on weighted multi-example learning
CN115439405A (en) Classification method for surface defects of steel plate

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant