CN116468961A - Image classification method and intelligent flaw detection system for injection molding product - Google Patents


Info

Publication number
CN116468961A
Authority
CN
China
Prior art keywords
flaw
product
image
feature
probability
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202310728109.6A
Other languages
Chinese (zh)
Inventor
唐玉龙
麦海盛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhongshan Dongrun Intelligent Equipment Co ltd
Original Assignee
Zhongshan Dongrun Intelligent Equipment Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhongshan Dongrun Intelligent Equipment Co ltd
Priority to CN202310728109.6A
Publication of CN116468961A
Legal status: Withdrawn


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30Computing systems specially adapted for manufacturing

Abstract

The invention discloses an image classification method and an intelligent flaw detection system for injection molding products, and relates to the technical field of industrial quality inspection. The method comprises: acquiring images of qualified products and images of various flaw products; training a qualified classifier using the qualified product images and the flaw product images as a sample set; training a flaw type classifier using the images of the various flaw products as a sample set; acquiring an image of a product; extracting a first feature of the image of the product; inputting the first feature of the image into the qualified classifier to obtain the probabilities that the product is qualified and unqualified; extracting a second feature of the image of the product according to those probabilities; inputting the first feature and the second feature of the image into the flaw type classifier to obtain candidate flaw types of the product and their corresponding probabilities; and obtaining the flaw type of the product from the candidate flaw types and their probabilities. By classifying the images in this staged manner, the detection efficiency and accuracy for flaw products are improved.

Description

Image classification method and intelligent flaw detection system for injection molding product
Technical Field
The invention belongs to the technical field of industrial quality inspection, and particularly relates to an image classification method and an intelligent flaw detection system for injection molding products.
Background
Product quality control is critical in manufacturing, especially in the production of injection molded articles. Product flaws such as cracks, bubbles, chromatic aberration, and deformation greatly affect product performance and service life. The conventional quality inspection process often relies on manual visual inspection. However, this method is labor intensive, inefficient, prone to error, and struggles to meet the demands of mass production.
The patent with publication number CN104345061A discloses a flaw detection method for open-and-close strips. The apparatus comprises a frame on which are mounted, in sequence, a guide roller for feed-in, two aligned rolling shafts, a guide roller for feed-out, and a driving rubber roller. A parallel reflecting plate is arranged on one side of the plane between the two rolling shafts, and a camera (referred to as the lens) is arranged on the other side. The output of the camera is connected to the input of a controller host (a PLC), which is connected to a servo controller that controls the servo motor driving the rubber roller. The main effect of that invention is whole-process monitoring of the strip during high-speed production, with automatic stop and alarm when small flaws of unqualified width or dimension, difficult for the human eye to recognize, are encountered. However, that scheme performs flaw detection by image comparison: it cannot classify the states of flaws, so flaws that would still leave the product in a qualified state are also judged as unqualified.
Disclosure of Invention
The invention aims to provide an image classification method and an intelligent flaw detection system for injection molding products, which improve the detection efficiency and accuracy for flaw products by classifying images.
In order to solve the technical problems, the invention is realized by the following technical scheme:
the invention provides an image classification method, which comprises the following steps:
acquiring images of qualified products and images of various flaw products;
training a qualified classifier using the qualified product images and the flaw product images as a sample set;
training a flaw type classifier using the images of the various flaw products as a sample set;
acquiring an image of a product;
extracting a first feature of the image of the product;
inputting the first feature of the image into the qualified classifier to obtain the probabilities that the product is qualified and unqualified;
extracting a second feature of the image of the product according to the probabilities that the product is qualified and unqualified;
inputting the first feature and the second feature of the image into the flaw type classifier to obtain candidate flaw types of the product and their corresponding probabilities;
and obtaining the flaw type of the product according to the candidate flaw types and their corresponding probabilities.
The invention also discloses an intelligent flaw detection system for an injection molding product, which comprises:
a model training unit for acquiring images of qualified products and images of various flaw products, training a qualified classifier using the qualified product images and the flaw product images as a sample set, and training a flaw type classifier using the images of the various flaw products as a sample set;
an image acquisition unit for acquiring an image of a product;
and a classification and identification unit for extracting a first feature of the image of the product, inputting the first feature into the qualified classifier to obtain the probabilities that the product is qualified and unqualified, extracting a second feature of the image according to those probabilities, inputting the first feature and the second feature into the flaw type classifier to obtain candidate flaw types and their corresponding probabilities, and obtaining the flaw type of the product from those candidates and probabilities.
The invention also discloses an intelligent flaw detection system for an injection molding product, which comprises:
a data receiving unit for receiving the flaw type of each product;
and a sorting unit for sorting and placing the products according to their flaw types.
The invention improves the efficiency and precision of product flaw detection through image classification. It mainly comprises three units: a model training unit, an image acquisition unit, and a classification and identification unit. The model training unit acquires images of qualified products and flaw products and uses them to construct a qualified classifier and a flaw type classifier. The image acquisition unit is responsible for acquiring the product image. The classification and identification unit extracts the first feature of the product image, judges the qualification probability of the product through the qualified classifier, then extracts the second feature and combines it with the first feature to pass through the flaw type classifier to obtain the flaw type of the product and its probability.
Of course, it is not necessary for any one product to practice the invention to achieve all of the advantages set forth above at the same time.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed for the description of the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic diagram showing functional units and information interaction of an intelligent flaw detection system for injection products according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart illustrating steps of an intelligent flaw detection system for injection molding products according to an embodiment of the present invention;
FIG. 3 is a flowchart illustrating the step S4 according to an embodiment of the present invention;
FIG. 4 is a flowchart illustrating the step S71 according to an embodiment of the present invention;
FIG. 5 is a flowchart illustrating the step of step S711 according to an embodiment of the present invention;
FIG. 6 is a flowchart illustrating the step S9 according to an embodiment of the present invention;
FIG. 7 is a flowchart illustrating the step S91 according to an embodiment of the present invention;
FIG. 8 is a flowchart illustrating the step S93 according to an embodiment of the present invention;
FIG. 9 is a flowchart illustrating the step S931 according to an embodiment of the present invention;
in the drawings, the list of components represented by the various numbers is as follows:
the system comprises a 1-model training unit, a 2-image acquisition unit, a 3-classification and identification unit, a 4-data receiving unit and a 5-sorting unit.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
In pattern recognition, as the amount of data presented to the input layer grows, recognition complexity rises roughly exponentially, so recognition time increases nonlinearly even when the amount of input data increases only linearly. Keeping the input feature set small is therefore important for recognition speed.
Referring to fig. 1 to 2, the present invention provides an intelligent flaw detection system for injection products, which may include a model training unit 1, an image acquisition unit 2, a classification and identification unit 3, a data receiving unit 4, and a sorting unit 5. Wherein the model training unit 1, the image acquisition unit 2 and the classification recognition unit 3 can jointly form a recognition classification device. The data receiving unit 4 and the sorting unit 5 may constitute a sorting robot or may be a sorting device operated by a worker.
In the specific implementation process, the model training unit 1 may first perform step S1 to obtain images of qualified products and images of various defective products. Step S2 may then be performed to train the qualified classifier using the qualified product images and the defective product images as a sample set. Step S3 may be performed to train the defect type classifier using the images of the various types of defective products as a sample set. In practice it is necessary to train the qualified classifier and the flaw type classifier to convergence.
Step S4 may then be performed by the image acquisition unit 2 to acquire an image of the product. Next, step S5 is performed by the classification recognition unit 3 to extract a first feature of the image of the product. Step S6 may then be performed to input the first feature of the image into the pass classifier to obtain probabilities of pass and fail of the product. Step S7 may then be performed to extract a second feature of the image of the product based on the probabilities of the product being acceptable and unacceptable. Step S8 may be performed to input the first and second features of the image into the defect type classifier to obtain the defect type and the corresponding probability of the product. Step S9 may be performed to obtain the defect type of the product according to the defect type of the product and the corresponding probability.
The data receiving unit 4 may then perform step S10 to receive the defect type of the product in step S9. Finally, the sorting unit 5 may perform step S11 to sort and place the products according to the defect types of the products, which may be sorting and stacking, or sorting and placing at a set position.
In the implementation process, the model training unit first collects pictures of qualified products and defective products, with the aim of training and constructing classifiers for qualified products and for the various defect types. The image acquisition unit is then responsible for taking the product picture. The features of the product picture are then extracted: the extracted first feature is input into the qualified classifier to calculate the qualification probability of the product, and the first and second features are then input together into the flaw type classifier to identify and confirm the defect type of the product.
To supplement the above step flow, part of the source code of the functional modules is provided below and explained in the comments.
#include <iostream>
#include <string>
#include <vector>
#include "classifier.h"            // classifier class, including training and prediction methods
#include "imageFeatureExtractor.h" // image feature extraction class
using namespace std;
int main() {
    // Obtain images of qualified products and images of various flaw products
    vector<Image> qualifiedImages = getQualifiedImages();
    vector<Image> defectiveImages = getDefectiveImages();
    // Construct the classifiers
    Classifier qualifiedClassifier;
    Classifier defectTypeClassifier;
    // Feature extractor
    ImageFeatureExtractor featureExtractor;
    // Train the qualified classifier
    for (Image& img : qualifiedImages) {
        vector<float> features = featureExtractor.extractFeature1(img);
        qualifiedClassifier.train(img, features);
    }
    // Train the flaw type classifier
    for (Image& img : defectiveImages) {
        vector<float> features = featureExtractor.extractFeature1(img);
        defectTypeClassifier.train(img, features);
    }
    // Acquire an image of the product
    Image productImage = getProductImage();
    // Extract the first feature of the image of the product
    vector<float> productFeatures1 = featureExtractor.extractFeature1(productImage);
    // Input the first feature into the qualified classifier to obtain the probability that the product is qualified
    float productQualifiedProbability = qualifiedClassifier.predict(productFeatures1);
    // Extract the second feature of the image according to the qualified/unqualified probability
    vector<float> productFeatures2 = featureExtractor.extractFeature2(productImage, productQualifiedProbability);
    // Input the first and second features into the flaw type classifier to obtain the flaw type and its probability
    vector<float> combinedFeatures = combineFeatures(productFeatures1, productFeatures2);
    string defectType = defectTypeClassifier.predict(combinedFeatures);
    // Output the flaw type of the product
    cout << "Defect type: " << defectType << endl;
    return 0;
}
Referring to fig. 3, in order to improve the accuracy of product classification, images of the product taken from multiple angles can be used as the basis of classification. In view of this, the above-mentioned step S4 may be implemented by first performing step S41 to obtain a plurality of surface images and/or cross-sectional images of set positions of the product. Step S42 may then be performed to combine the surface images and/or the cross-sectional images in a predetermined order to obtain the image of the product.
To supplement the above step flow, part of the source code of the functional modules is provided below and explained in the comments.
#include <opencv2/opencv.hpp>
#include <string>
#include <vector>
// Define the image reading function
cv::Mat getImage(const std::string& imagePath) {
    cv::Mat image = cv::imread(imagePath, cv::IMREAD_COLOR);
    return image;
}
// Obtain a plurality of surface images and/or section images of set positions of the product
std::vector<cv::Mat> getImages(const std::vector<std::string>& imagePaths) {
    std::vector<cv::Mat> images;
    for (const auto& path : imagePaths) {
        cv::Mat img = getImage(path);
        images.push_back(img);
    }
    return images;
}
// Combine the surface images and/or section images in a set order to obtain the image of the product
cv::Mat combineImages(const std::vector<cv::Mat>& images) {
    cv::Mat combinedImage;
    for (const auto& img : images) {
        combinedImage.push_back(img); // vertical concatenation; the images must share width and type
    }
    return combinedImage;
}
int main() {
    // Set the product image paths (enter your own image paths here)
    std::vector<std::string> imagePaths = {"path1", "path2", "path3"};
    // Obtain a plurality of surface images and/or section images of set positions of the product
    std::vector<cv::Mat> images = getImages(imagePaths);
    // Combine the images in a set order to obtain the image of the product
    cv::Mat combinedImage = combineImages(images);
    // Display the image
    cv::imshow("Combined Image", combinedImage);
    cv::waitKey(0);
    return 0;
}
Referring to fig. 4, in order to reduce the data size input to the defect type classifier as much as possible, the number of feature points in the second feature needs to be effectively reduced. In view of this, in the implementation of step S7, step S71 may be performed to input the first feature into the defect type classifier to obtain the defect types and their corresponding probabilities. Step S72 may be performed to calculate the defect type classification accuracy from these probabilities. Step S73 may be performed to determine whether the defect type classification accuracy meets the set requirement. If yes, step S74 may be performed to take the second feature as an empty set; if not, step S75 may be performed to extract the second feature from the image of the product. Step S76 may then be performed to input the first feature and the second feature into the defect type classifier to obtain updated defect types and their corresponding probabilities. Step S77 may be performed to calculate the updated defect type classification accuracy from the updated defect types and probabilities. Step S78 may be performed to determine whether this accuracy meets the set requirement. If yes, step S79 may be performed to take the second feature as obtained; if not, step S710 may be performed to continue extracting feature points from the image of the product to supplement the second feature. Step S711 may be performed to calculate the number of feature points to be contained in the second feature from the first feature, the continuously updated second feature, and the continuously updated defect types and their probabilities. Finally, step S712 may be performed to extract the second feature from the image of the product according to that number of feature points.
To supplement the above step flow, part of the source code of the functional modules is provided below and explained in the comments.
#include <vector>
#include "classifier.h"            // classifier class
#include "imageFeatureExtractor.h" // image feature extraction class
using std::vector;
vector<float> extractSecondFeature(Classifier& classifier, ImageFeatureExtractor& featureExtractor,
                                   Image& img, vector<float>& firstFeature, float threshold) {
    vector<float> secondFeature;
    // Obtain the flaw type probability from the first feature alone
    float defectProbability = classifier.predict(firstFeature);
    // Calculate the flaw classification accuracy
    float accuracy = classifier.calculateAccuracy(defectProbability);
    if (accuracy < threshold) {
        // Extract the second feature from the image of the product
        secondFeature = featureExtractor.extractFeature2(img);
        // Input the first and second features together into the flaw type classifier
        vector<float> combinedFeatures = combineFeatures(firstFeature, secondFeature);
        // Obtain the updated flaw type probability
        float updatedDefectProbability = classifier.predict(combinedFeatures);
        // Calculate the updated flaw type classification accuracy
        float updatedAccuracy = classifier.calculateAccuracy(updatedDefectProbability);
        if (updatedAccuracy < threshold) {
            // Continue extracting feature points from the image to supplement the second feature
            vector<float> moreFeatures = featureExtractor.extractMoreFeatures(img);
            secondFeature.insert(secondFeature.end(), moreFeatures.begin(), moreFeatures.end());
        }
    }
    // Extract the second feature from the image according to the number of feature points it contains
    int featureCount = secondFeature.size();
    vector<float> finalSecondFeature = featureExtractor.extractFeatures(img, featureCount);
    return finalSecondFeature;
}
In this code, a classifier class handles classification and accuracy calculation, and a feature extractor class extracts features from images. The extractFeature1, extractFeature2, and extractMoreFeatures methods need to be implemented according to the actual situation. The combineFeatures method can simply concatenate two feature vectors.
Referring to fig. 5, in order to quickly obtain the second feature, in the implementation of step S711, step S7111 may be executed first to continuously obtain the number of feature points in the first feature and the second feature as the total number of feature points. Step S7112 may then be performed to continuously obtain the flaw type classification accuracy after each update. Step S7113 may then be performed to establish a fitting function of the flaw type classification accuracy with respect to the total number of feature points. Step S7114 may be executed to estimate, from this fitting function, the total number of feature points at which the flaw type classification accuracy meets the set requirement. Finally, step S7115 may be performed to obtain the number of feature points contained in the second feature from the total number of feature points.
To supplement the above step flow, part of the source code of the functional modules is provided below and explained in the comments.
#include <utility>
#include <vector>
#include "linearRegression.h" // linear regression class
int calculateFeaturePointCount(LinearRegression& regression,
                               std::vector<std::pair<int, float>>& data, float targetAccuracy) {
    // The data pairs record the total number of feature points in the first and second
    // features together with the flaw type classification accuracy after each update.
    // Establish a fitting function of classification accuracy with respect to the total number of feature points
    regression.train(data);
    // Estimate the total number of feature points at which the classification accuracy meets the set requirement
    int estimatedFeaturePointCount = regression.predict(targetAccuracy);
    return estimatedFeaturePointCount;
}
int main() {
    // Training data for constructing the fitting function: {total feature points, accuracy}
    std::vector<std::pair<int, float>> data = {
        {10, 0.5},
        {20, 0.6},
        {30, 0.7},
        {40, 0.8},
        {50, 0.9},
        {60, 0.95}
    };
    LinearRegression regression;
    float targetAccuracy = 0.9;
    // Obtain the number of feature points contained in the second feature from the total
    int featurePointCount = calculateFeaturePointCount(regression, data, targetAccuracy);
    return 0;
}
In this example, a linear regression class is trained on pairs of data points, each pair consisting of a number of feature points and the corresponding classification accuracy. The fitted model is then used to predict the number of feature points needed to achieve the target classification accuracy.
Referring to fig. 6, since it is difficult to accurately determine the defect type of a product from a single image, a plurality of images may be used in classifying and determining the defect type. Step S91 may be performed to obtain the flaw types and corresponding probabilities output for each surface image. Step S92 may be performed to acquire, as the photographing included angle, the angle between the photographing axis of each surface image and the normal to the tangent plane at the product focal point. Finally, step S93 may be executed to calculate the flaw type of the product from the flaw types and probabilities output for each surface image and from each surface image's photographing included angle.
To supplement the above step flow, part of the source code of the functional modules is provided below and explained in the comments.
#include <string>
#include <vector>
struct ImageData {
    std::string defectType; // flaw type
    float probability;      // corresponding probability
    float shootingAngle;    // photographing included angle
};
std::string calculateDefectType(const std::vector<ImageData>& imageDataSet) {
    std::string resultDefectType;
    float maxProbability = 0.0;
    // Traverse every surface image
    for (const auto& data : imageDataSet) {
        // Calculate the flaw type of the product from the flaw type, probability,
        // and photographing included angle of each surface image.
        // Here the flaw with the highest probability is simply taken as the result;
        // a more complex (e.g. angle-weighted) algorithm may be needed in practice.
        if (data.probability > maxProbability) {
            maxProbability = data.probability;
            resultDefectType = data.defectType;
        }
    }
    return resultDefectType;
}
int main() {
    // Each entry corresponds to a single surface image of the product
    std::vector<ImageData> imageDataSet = {
        {"defectType1", 0.8, 30.0},
        {"defectType2", 0.6, 45.0},
        {"defectType3", 0.9, 60.0}
    };
    std::string defectType = calculateDefectType(imageDataSet);
    return 0;
}
This code calculates the flaw type of the product from the flaw type, corresponding probability, and photographing included angle output for each surface image.
Referring to fig. 7, for each surface image, some of the flaw types output by the image classification, owing to the image acquisition conditions, carry probabilities too low to have practical value. In view of this, in the implementation of step S91, step S911 may be performed to obtain the standard deviation of the probabilities corresponding to the flaw types output for the surface image. Step S912 may be performed to reject, among the flaw types output for the surface image, those whose probability is smaller than the standard deviation, obtaining the flaw types and corresponding probabilities after rejection. Step S913 may be performed to reassign the probabilities of the remaining flaw types in proportion, so that they sum to one. Finally, step S914 may be executed to obtain, for each surface image, the flaw types and corresponding probabilities after rejection and probability reassignment.
To supplement the above step flow, part of the source code of the functional modules is provided below and explained in the comments.
#include <algorithm>
#include <cmath>
#include <string>
#include <vector>
struct Defect {
    std::string type;
    float probability;
};
void adjustDefectProbabilities(std::vector<Defect>& defects) {
    // Calculate the mean and standard deviation of the probabilities
    float mean = 0.0;
    for (const auto& defect : defects)
        mean += defect.probability;
    mean /= defects.size();
    float variance = 0.0;
    for (const auto& defect : defects)
        variance += std::pow(defect.probability - mean, 2);
    variance /= defects.size();
    float standard_deviation = std::sqrt(variance);
    // Reject flaws whose probability is lower than the standard deviation
    defects.erase(std::remove_if(defects.begin(), defects.end(),
                                 [&](const Defect& defect) { return defect.probability < standard_deviation; }),
                  defects.end());
    // Reassign the remaining probabilities so that they sum to 1
    float total_probability = 0.0;
    for (const auto& defect : defects)
        total_probability += defect.probability;
    for (auto& defect : defects)
        defect.probability /= total_probability;
}
int main() {
    std::vector<Defect> defects = {{"defect1", 0.2}, {"defect2", 0.5}, {"defect3", 0.3}};
    adjustDefectProbabilities(defects);
    return 0;
}
This code first calculates the mean and standard deviation of the probabilities of all flaws, then deletes all flaws with probabilities smaller than the standard deviation, and finally reassigns the probabilities of the remaining flaws so that their sum is 1. Thus, the flaw types and the corresponding probabilities after the probability is reassigned can be obtained.
Referring to fig. 8, in order to comprehensively analyze the flaw types and corresponding probabilities obtained from the multiple images, in the implementation of step S93, step S931 may first be performed to calculate the weight coefficient of the flaw types and corresponding probabilities output for each surface image according to the photographing included angle of that image. Step S932 may then be performed to calculate, using these weight coefficients, the weighted average of the probabilities of the flaw types output for the surface images, obtaining the flaw types of the product and their weighted probabilities. Finally, step S933 may be performed to select the flaw type with the maximum weighted probability as the flaw type of the product.
To supplement the above step flow, part of the source code of this functional module is provided below, with explanations given in the comments.
#include <vector>
#include <string>
#include <map>
#include <cmath>
#include <cstddef>
struct Defect {
    std::string type;
    float probability;
};
struct Image {
    std::vector<Defect> defects;
    float angle;  // shooting included angle, in radians
};
std::string calculateProductDefectType(std::vector<Image>& images) {
    float total_weight = 0.0f;
    std::vector<float> weights(images.size());
    // Calculate the weight of each image
    for (std::size_t i = 0; i < images.size(); i++) {
        weights[i] = std::cos(images[i].angle);  // weight is proportional to the cosine of the shooting angle
        total_weight += weights[i];
    }
    // Accumulate the weighted probability of each flaw type
    std::map<std::string, float> defect_probabilities;
    for (std::size_t i = 0; i < images.size(); i++) {
        for (const auto& defect : images[i].defects) {
            defect_probabilities[defect.type] += weights[i] / total_weight * defect.probability;
        }
    }
    // Find the flaw type with the maximum weighted probability
    std::string max_defect_type;
    float max_probability = 0.0f;
    for (const auto& pair : defect_probabilities) {
        if (pair.second > max_probability) {
            max_defect_type = pair.first;
            max_probability = pair.second;
        }
    }
    return max_defect_type;
}
int main() {
    std::vector<Image> images = {{ {{"defect1", 0.2}, {"defect2", 0.5}, {"defect3", 0.3}}, 0.1 },
                                 { {{"defect1", 0.3}, {"defect2", 0.4}, {"defect3", 0.3}}, 0.2 }};
    std::string productDefectType = calculateProductDefectType(images);
    return 0;
}
The code first calculates the weight of each image, which is proportional to the cosine of its shooting included angle; it then calculates the weighted probability of each flaw type, and finally returns the flaw type with the maximum weighted probability as the flaw type of the product.
Referring to fig. 9, in order to quantify the influence of the photographing angle on the image classification result and to weaken the influence of poorly photographed images on the final flaw type, in the specific implementation of step S931, the photographing included angle corresponding to each surface image may first be obtained. Step S9312 may then be performed to calculate the cosine value of the shooting included angle. Finally, step S9313 may be executed to take the cosine value of the shooting included angle corresponding to each surface image as the weight coefficient of the flaw types and corresponding probabilities output for that image.
To supplement the above step flow, part of the source code of this functional module is provided below, with explanations given in the comments.
#include <vector>
#include <cmath>
#include <cstddef>
struct Image {
    float angle;  // shooting included angle, in radians
};
// Calculate the weight of each image
std::vector<float> calculateWeights(std::vector<Image>& images) {
    std::vector<float> weights(images.size());
    for (std::size_t i = 0; i < images.size(); i++) {
        // The cosine of each surface image's shooting included angle serves as its weight
        weights[i] = std::cos(images[i].angle);
    }
    return weights;
}
int main() {
    std::vector<Image> images = { {0.1}, {0.2}, {0.3} };
    std::vector<float> weights = calculateWeights(images);
    return 0;
}
In this code, an Image structure is first defined; each image has an angle attribute representing its shooting included angle. A function calculateWeights is then created to compute a weight for each image, the weight being the cosine of its shooting included angle. Finally, an array of images is created in the main function and their weights are calculated.
In summary, the image classification method improves the accuracy and efficiency of product flaw identification. In implementation, a model training unit first collects pictures of qualified and flawed products and uses them to construct and train classifiers corresponding respectively to qualified products and to the different types of flawed products. An image acquisition unit then collects a picture of the product. Finally, a classification and identification unit extracts features of the product picture and inputs them into the qualified classifier to obtain the probability that the product is qualified; it then extracts the first features and second features and inputs them into the flaw type classifier to obtain the flaw type of the product.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by hardware that performs the corresponding functions or acts, such as circuits or ASICs (Application Specific Integrated Circuits), or by combinations of hardware and software, such as firmware.
Although the invention is described herein in connection with various embodiments, other variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed invention, from a study of the drawings, the disclosure, and the appended claims. In the claims, the word "comprising" does not exclude other elements or steps, and the "a" or "an" does not exclude a plurality. A single processor or other unit may fulfill the functions of several items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.
The embodiments of the present application have been described above, the foregoing description is exemplary, not exhaustive, and not limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope of the various embodiments described. The terminology used herein was chosen in order to best explain the principles of the embodiments, the practical application, or the improvement of technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (10)

1. An image classification method, comprising,
acquiring an image of a qualified product and images of various flaw products;
training a qualified classifier by taking the images of qualified products and the images of flaw products as a sample set;
training a flaw type classifier by taking images of various flaw products as a sample set;
acquiring an image of a product;
extracting a first feature of an image of the product;
inputting the first characteristic of the image into the qualified classifier to obtain the qualified and unqualified probability of the product;
extracting a second feature of the image of the product according to the probability of passing and failing the product;
inputting the first characteristic and the second characteristic of the image into the flaw type classifier to obtain flaw types and corresponding probabilities of products;
and obtaining the flaw types of the product according to the flaw types of the product and the corresponding probabilities.
2. The method of claim 1, wherein the step of capturing an image of the product comprises,
acquiring a plurality of surface images and/or section images of a product set position;
and combining a plurality of surface images and/or the section images according to a set sequence to obtain images of the product.
3. The method of claim 1, wherein the step of extracting the second feature of the image of the product based on the probability of the product being acceptable and unacceptable comprises,
inputting the first characteristic into the flaw type classifier to obtain flaw types and the corresponding probabilities thereof;
calculating according to the flaw types and the corresponding probabilities thereof to obtain flaw type classification accuracy;
judging whether the accuracy of the flaw type classification meets a set requirement or not;
if so, the second feature is an empty set;
if not, extracting a second feature in the image of the product;
inputting the first feature and the second feature into the flaw type classifier to obtain updated flaw types and corresponding probabilities thereof;
calculating to obtain the classification accuracy of the updated flaw types according to the updated flaw types and the corresponding probabilities thereof;
judging whether the accuracy of the flaw type classification meets a set requirement or not;
if yes, the second characteristic is obtained;
if not, continuously extracting feature points in the image of the product and supplementing the second features;
calculating the number of feature points contained in the second feature according to the first feature, the continuously updated second feature, the continuously updated flaw types and the corresponding probabilities thereof;
and extracting the second feature from the image of the product according to the number of feature points contained in the second feature.
4. The method of claim 3, wherein the step of calculating the number of feature points included in the second feature based on the first feature and the continuously updated second feature, and the continuously updated flaw type and its corresponding probability, comprises,
continuously acquiring the number of feature points in the first feature and the second feature, and recording the number as the total number of feature points;
continuously acquiring the flaw type classification accuracy after each update;
establishing a fitting function of flaw type classification accuracy with respect to the total feature point quantity;
estimating the total feature point quantity corresponding to the flaw type classification accuracy meeting the set requirement according to a fitting function of the flaw type classification accuracy about the total feature point quantity;
and obtaining the number of feature points contained in the second feature according to the total number of feature points.
5. The method of claim 1, wherein the step of deriving the defect type of the product based on the defect type of the product and the corresponding probability comprises,
the images of the product are single surface images of the product;
obtaining flaw types and corresponding probabilities obtained by outputting each surface image;
acquiring an included angle between a shooting axis of each surface image and a normal line of a product focus tangential plane as a shooting included angle;
and calculating the flaw types of the product according to the flaw types obtained by outputting each surface image, the corresponding probability and the shooting included angle of each surface image.
6. The method of claim 5, wherein the step of obtaining the type of flaw and the corresponding probability of each surface image output comprises,
for each of the surface images,
obtaining the standard deviation of the probabilities corresponding to the flaw types output for the surface image;
rejecting, from the flaw types output for the surface image, the flaw types whose probability is smaller than the standard deviation, to obtain the flaw types and corresponding probabilities after rejection;
reassigning, in proportion to their corresponding probabilities, the probabilities of the flaw types remaining after rejection, so that the sum of their probabilities equals one;
and outputting, for each surface image, the flaw types and corresponding probabilities obtained after rejection and probability reassignment.
7. The method of claim 5, wherein the step of calculating the defect type of the product based on the defect type and the corresponding probability of each surface image output and the included angle of each surface image, comprises,
calculating and obtaining flaw types obtained by outputting each surface image and a weight coefficient of the corresponding probability according to the shooting included angle of each surface image;
calculating the flaw types obtained by outputting each surface image and the weighted average value of the corresponding probabilities according to the flaw types obtained by outputting each surface image and the weight coefficients of the corresponding probabilities to obtain the flaw types and the corresponding weighted probabilities of the product;
and selecting the flaw type corresponding to the maximum value of the weighted probability as the flaw type of the product.
8. The method of claim 7, wherein the step of calculating and obtaining the flaw type and the weight coefficient of the corresponding probability obtained by outputting each surface image according to the shooting included angle of each surface image comprises the following steps of,
acquiring the shooting included angle corresponding to each surface image;
calculating and obtaining a cosine value of the shooting included angle;
and taking the cosine value of the shooting included angle corresponding to each surface image as the weight coefficient of the flaw types and corresponding probabilities output for that surface image.
9. An intelligent flaw detection system for injection products is characterized by comprising,
the model training unit is used for acquiring images of qualified products and images of various flaw products;
training a qualified classifier by taking the images of qualified products and the images of flaw products as a sample set;
training a flaw type classifier by taking images of various flaw products as a sample set;
the image acquisition unit is used for acquiring an image of a product;
a classification and identification unit for extracting a first feature of an image of a product;
inputting the first characteristic of the image into the qualified classifier to obtain the qualified and unqualified probability of the product;
extracting a second feature of the image of the product according to the probability of passing and failing the product;
inputting the first characteristic and the second characteristic of the image into the flaw type classifier to obtain flaw types and corresponding probabilities of products;
and obtaining the flaw types of the product according to the flaw types of the product and the corresponding probabilities.
10. An intelligent flaw detection system for injection products is characterized by comprising,
a data receiving unit for receiving the defect type of the product of claim 9;
and the sorting unit is used for sorting and placing the products according to the defect types of the products.
CN202310728109.6A 2023-06-20 2023-06-20 Image classification method and intelligent flaw detection system for injection molding product Withdrawn CN116468961A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310728109.6A CN116468961A (en) 2023-06-20 2023-06-20 Image classification method and intelligent flaw detection system for injection molding product


Publications (1)

Publication Number Publication Date
CN116468961A true CN116468961A (en) 2023-07-21

Family

ID=87179282

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310728109.6A Withdrawn CN116468961A (en) 2023-06-20 2023-06-20 Image classification method and intelligent flaw detection system for injection molding product

Country Status (1)

Country Link
CN (1) CN116468961A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116994064A (en) * 2023-08-25 2023-11-03 河北地质大学 Seed lesion particle identification method and seed intelligent screening system
CN116994064B (en) * 2023-08-25 2024-02-27 河北地质大学 Seed lesion particle identification method and seed intelligent screening system
CN117455316A (en) * 2023-12-20 2024-01-26 中山市东润智能装备有限公司 Method for acquiring data of injection molding factory equipment
CN117455316B (en) * 2023-12-20 2024-04-19 中山市东润智能装备有限公司 Method for acquiring data of injection molding factory equipment

Similar Documents

Publication Publication Date Title
TWI653605B (en) Automatic optical detection method, device, computer program, computer readable recording medium and deep learning system using deep learning
CN116468961A (en) Image classification method and intelligent flaw detection system for injection molding product
CN111325713A (en) Wood defect detection method, system and storage medium based on neural network
US20180253836A1 (en) Method for automated detection of defects in cast wheel products
CN111223093A (en) AOI defect detection method
US11715190B2 (en) Inspection system, image discrimination system, discrimination system, discriminator generation system, and learning data generation device
CN110992349A (en) Underground pipeline abnormity automatic positioning and identification method based on deep learning
CN112200045A (en) Remote sensing image target detection model establishing method based on context enhancement and application
CN114663346A (en) Strip steel surface defect detection method based on improved YOLOv5 network
CN109840900A (en) A kind of line detection system for failure and detection method applied to intelligence manufacture workshop
CN114140385A (en) Printed circuit board defect detection method and system based on deep learning
CN111712769A (en) Method, apparatus, system, and program for setting lighting condition, and storage medium
CN111179250A (en) Industrial product defect detection system based on multitask learning
CN111758117B (en) Inspection system, recognition system, and learning data generation device
CN109446964A (en) Face detection analysis method and device based on end-to-end single-stage multiple scale detecting device
CN114881987A (en) Improved YOLOv 5-based hot-pressing light guide plate defect visual detection method
CN110827263A (en) Magnetic shoe surface defect detection system and detection method based on visual identification technology
CN114240908A (en) Product appearance defect detection method only adopting good product images
CN116091506B (en) Machine vision defect quality inspection method based on YOLOV5
CN112505049A (en) Mask inhibition-based method and system for detecting surface defects of precision components
CN115953387A (en) Radiographic image weld defect detection method based on deep learning
CN115601293A (en) Object detection method and device, electronic equipment and readable storage medium
CN114897863A (en) Defect detection method, device and equipment
CN113267506A (en) Wood board AI visual defect detection device, method, equipment and medium
CN117455917B (en) Establishment of false alarm library of etched lead frame and false alarm on-line judging and screening method

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
WW01 Invention patent application withdrawn after publication

Application publication date: 20230721

WW01 Invention patent application withdrawn after publication