CN117036226A - Article defect detection method and device based on artificial intelligence and readable storage medium - Google Patents


Info

Publication number
CN117036226A
CN117036226A
Authority
CN
China
Prior art keywords
image
article
defect
value
defect detection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211024153.0A
Other languages
Chinese (zh)
Inventor
张博深
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202211024153.0A priority Critical patent/CN117036226A/en
Publication of CN117036226A publication Critical patent/CN117036226A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0004 Industrial image inspection
    • G06T7/0008 Industrial image inspection checking presence/absence
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764 Arrangements using pattern recognition or machine learning using classification, e.g. of video objects
    • G06V10/82 Arrangements using pattern recognition or machine learning using neural networks
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30108 Industrial image inspection
    • G06T2207/30164 Workpiece; Machine component

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Quality & Reliability (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the application discloses an artificial-intelligence-based article defect detection method and device and a readable storage medium. The method acquires an image to be detected of an article to be detected; acquires feature extraction parameters for the image to be detected, the parameters having been learned from an article defect image of a sample article, a first noise image generated from that article defect image, and a contrast loss function constructed from the predicted value of the first noise image and the predicted value of the article defect image; performs feature extraction on the image to be detected based on the feature extraction parameters to obtain image features of the image to be detected; and makes a prediction based on the image features to obtain a defect detection result for the article to be detected. The scheme mitigates the problem that existing defect detection algorithms fit the training set well but generalize poorly, and thereby improves the accuracy of article defect detection.

Description

Article defect detection method and device based on artificial intelligence and readable storage medium
Technical Field
The embodiment of the application relates to the technical field of artificial intelligence, in particular to an article defect detection method and device based on artificial intelligence and a computer readable storage medium.
Background
With the rapid development of artificial intelligence and machine learning, these technologies are being applied in more and more fields. Computer vision, a branch of artificial intelligence, has also developed rapidly and is now applied to defect detection, such as quality inspection of industrial products, i.e., quality inspection of industrial products during production and manufacturing.
In the prior art, feature extraction is performed on an article image with a convolutional neural network, and the extracted image features are classified to determine whether the article has defects.
However, the inventors found in actual development that the distribution of defect images is highly varied, while the defect images contained in any training set are always limited. A model obtained by training can therefore fit the training data set well, but when it encounters defect images not covered by the training data set at deployment time, its prediction output becomes unreliable. In other words, existing defect detection algorithms fit well on the training set but generalize poorly, resulting in low defect detection accuracy.
Disclosure of Invention
The embodiment of the application provides an artificial-intelligence-based article defect detection method, apparatus, computer device and computer readable storage medium, which mitigate the problem that existing defect detection algorithms fit the training set well but generalize poorly, and improve the accuracy of article defect detection.
In a first aspect, an embodiment of the present application provides an artificial intelligence-based object defect detection method, where the method includes:
acquiring an image to be detected of an object to be detected;
acquiring feature extraction parameters of the image to be detected, wherein the feature extraction parameters are obtained by learning based on an article defect image of a sample article, a first noise image generated based on the article defect image and a contrast loss function constructed based on a predicted value of the first noise image and a predicted value of the article defect image;
performing feature extraction on the image to be detected based on the feature extraction parameters to obtain image features of the image to be detected;
and predicting based on the image characteristics to obtain a defect detection result of the object to be detected.
In a second aspect, an embodiment of the present application provides an article defect detection apparatus, including:
the first acquisition unit is used for acquiring an image to be detected of the object to be detected;
the second acquisition unit is used for acquiring feature extraction parameters of the image to be detected, wherein the feature extraction parameters are obtained by learning based on an article defect image of a sample article, a first noise image generated based on the article defect image and a contrast loss function constructed based on a predicted value of the first noise image and a predicted value of the article defect image;
The extraction unit is used for carrying out feature extraction on the image to be detected based on the feature extraction parameters to obtain image features of the image to be detected;
and the detection unit is used for predicting based on the image characteristics to obtain a defect detection result of the object to be detected.
In a third aspect, an embodiment of the present application further provides a computer device including a processor and a memory, the memory storing a computer program; when the processor invokes the computer program in the memory, any one of the artificial-intelligence-based article defect detection methods provided in the embodiments of the present application is performed.
In a fourth aspect, embodiments of the present application also provide a computer readable storage medium having stored thereon a computer program, the computer program being loaded by a processor to perform the artificial intelligence based article defect detection method.
In a fifth aspect, embodiments of the present application further provide a computer program product, including a computer program or instructions, which when executed by a processor implement any of the artificial intelligence based object defect detection methods provided by the embodiments of the present application.
From the above, the embodiment of the present application has the following advantages:
according to the embodiment of the application, feature extraction is performed on the image to be detected using feature extraction parameters learned from the article defect image of a sample article, a first noise image generated from that article defect image, and a contrast loss function constructed from the predicted value of the first noise image and the predicted value of the article defect image; a prediction is then made from the resulting image features to obtain the defect detection result of the article to be detected. In the first aspect, because the feature extraction parameters are trained on the first noise image generated from the article defect image, the training samples are expanded beyond the original sample article defect images, so the learned feature extraction parameters generalize better. In the second aspect, the first contrast loss value pulls the predicted value of the article defect image and the predicted value of the first noise image closer together, so the algorithm shrinks the intra-class feature distance of article defect images; the feature distribution of defect image data becomes more compact, and defect images become easier to classify (i.e., classification accuracy is higher). Extracting the image features of the image to be detected with these feature extraction parameters and using them to predict the defect detection result of the article to be detected therefore improves the accuracy of article defect detection, and to a certain extent solves the problem that existing defect detection algorithms fit well only on the training set but generalize poorly.
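The intra-class pulling effect described above can be illustrated with a minimal sketch. The patent does not give the exact formula of the first contrast loss value at this point; a mean squared distance between the two predicted values is one plausible, assumed instantiation of a loss that pulls the prediction for an article defect image and the prediction for its noise-augmented copy together:

```python
import numpy as np

def intra_class_contrast_loss(pred_defect, pred_noise):
    """Illustrative sketch (not the patent's exact formula): penalize
    the distance between the model's predicted value for an article
    defect image and for its first-noise-image counterpart, so the two
    predictions are pulled together within the defect class."""
    pred_defect = np.asarray(pred_defect, dtype=float)
    pred_noise = np.asarray(pred_noise, dtype=float)
    # Mean squared distance: zero when the predictions already agree.
    return float(np.mean((pred_defect - pred_noise) ** 2))
```

Minimizing such a term during training makes predictions on defect images invariant to the added noise, which is what compacts the intra-class feature distribution.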
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic diagram of an application scenario of an artificial intelligence-based object defect detection method according to an embodiment of the present application;
FIG. 2 is a flow chart of one embodiment of an artificial intelligence based method for detecting defects in an article provided in an embodiment of the application;
FIG. 3 is a schematic flow diagram of model training provided in an embodiment of the present application;
FIG. 4 is a schematic diagram of a predetermined defect detection model according to an embodiment of the present application;
FIG. 5 is a first comparative schematic illustration of a normal article and a defective article provided in an embodiment of the present application;
FIG. 6 is a second comparative schematic illustration of a normal article and a defective article according to an embodiment of the present application;
FIG. 7 is a schematic flow diagram of a model for detecting defects of an article according to an embodiment of the present application;
FIG. 8 is a schematic illustration of an article defect detection process according to an embodiment of the present application;
FIG. 9 is a schematic diagram showing an example of the structure of an apparatus for detecting defects of an article according to the embodiment of the present application;
fig. 10 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to fall within the scope of the application.
In the description of embodiments of the present application, it should be understood that the terms "first" and "second" are used to distinguish between different objects and should not be interpreted as indicating or implying relative importance, a particular order, or the number of technical features indicated. A feature defined with "first" or "second" may explicitly or implicitly include one or more such features. Furthermore, the terms "comprise" and "have", as well as any variations thereof, are intended to cover a non-exclusive inclusion.
The embodiment of the application provides an article defect detection method, device, computer equipment and computer readable storage medium based on artificial intelligence. The object defect detection device can be integrated in computer equipment, and the computer equipment can be a server, or can be equipment such as a mobile phone, a tablet computer, a notebook computer, a desktop computer and other terminals.
The method for detecting the object defects based on the artificial intelligence can be realized by a server or a terminal and the server together. The server may be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server providing cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDNs, basic cloud computing services such as big data and artificial intelligence platforms, but is not limited thereto. The terminal may be, but is not limited to, a smart phone, a tablet computer, a notebook computer, a desktop computer, a smart speaker, a smart watch, etc. The terminal and the server may be directly or indirectly connected through wired or wireless communication, and the present application is not limited herein.
The method for detecting the object defect based on artificial intelligence is jointly implemented by a terminal and a server and is described below.
Referring to fig. 1, an artificial-intelligence-based article defect detection system provided by an embodiment of the present application includes a terminal 10 and a server 20; the terminal 10 and the server 20 are connected through a network, for example a wired or wireless network connection.
The terminal 10 may be the terminal that provides the image to be detected, and is configured to send the image to be detected to the server 20. The server 20 may be configured to receive the image to be detected sent by the terminal 10 and obtain feature extraction parameters for it; perform feature extraction on the image to be detected based on the feature extraction parameters to obtain its image features; and predict based on the image features to obtain a defect detection result of the article to be detected.
The method for detecting object defects based on artificial intelligence provided in this embodiment may specifically relate to an artificial intelligence cloud service, and the description below uses a computer device as an execution body of the method for detecting object defects based on artificial intelligence, and the execution body will be omitted hereinafter for simplicity of description.
The artificial intelligence cloud service is also commonly called AIaaS (AI as a Service). This is currently the mainstream service mode for artificial intelligence platforms: an AIaaS platform splits several common AI services and provides them independently or as packages in the cloud. The mode is similar to an AI-themed app store: any developer can access one or more of the platform's artificial intelligence services through an API interface, and some experienced developers can also use the AI framework and AI infrastructure provided by the platform to deploy, operate and maintain their own dedicated cloud AI services.
The following detailed description is provided with reference to the accompanying drawings. The following description of the embodiments is not intended to limit the preferred embodiments. Although a logical order is depicted in the flowchart, in some cases the steps shown or described may be performed in an order different than depicted in the figures.
As shown in fig. 2, the specific flow of the method for detecting article defects based on artificial intelligence may be as follows steps 201 to 204, where:
201. and obtaining an image to be detected of the object to be detected.
Wherein the object to be detected is an object to be detected for defects, such as industrial products to be detected for industrial defects. The quality inspection of industrial defects refers to quality inspection of industrial products in the production and manufacturing processes.
202. And acquiring characteristic extraction parameters of the image to be detected.
The feature extraction parameters are obtained by learning based on an article defect image of the sample article, a first noise image generated based on the article defect image, and a contrast loss function constructed based on a predicted value of the first noise image and a predicted value of the article defect image. The specific learning process is described in detail later (see steps 301 to 305 for details), and will not be described here.
There are various ways to obtain the feature extraction parameters in step 202, which illustratively include:
(1) Train a preset defect detection model in advance following the model training procedure in steps 301 to 304 to obtain a trained defect detection model; in step 202, extract the model parameters of the feature extraction layer of the trained defect detection model in real time and use them as the feature extraction parameters for the image to be detected.
(2) Following the procedure in steps 301 to 305, extract the model parameters of the feature extraction layer of the trained defect detection model in advance and store them in a database; in step 202, read the model parameters directly from the database.
203. And carrying out feature extraction on the image to be detected based on the feature extraction parameters to obtain the image features of the image to be detected.
In step 203, there are various ways of determining the image characteristics of the image to be detected, which illustratively include the following ways 1 and 2:
mode 1: and (3) extracting the characteristics of the image to be detected by the characteristic extraction parameters extracted in the step 305 to obtain the image characteristics of the image to be detected.
Mode 2: the image to be detected is directly input into the trained defect detection model in step 304, so that the feature extraction of the image to be detected is performed through the feature extraction parameters of the feature extraction layer in the trained defect detection model, and the image features of the image to be detected are obtained.
204. And predicting based on the image characteristics to obtain a defect detection result of the object to be detected.
The defect detection result of the article to be detected is used for indicating whether the article to be detected has a defect or not, and specifically can be used for indicating whether the article to be detected is a normal article or a defective article.
In step 204, there are various ways of determining the defect detection result of the object to be detected, including, illustratively, the following ways 1) and 2):
mode 1) extract the model parameters of the prediction layer of the defect detection model trained in step 304 and use them as the prediction parameters for the image to be detected; then predict based on the image features of the image to be detected using these prediction parameters to obtain the defect detection result of the article to be detected.
Mode 2) directly inputting the image features of the image to be detected into the trained defect detection model in step 304, so as to predict based on the image features of the image to be detected through the model parameters of the prediction layer in the trained defect detection model, and obtain the defect detection result of the object to be detected.
As illustrated in fig. 4, 6 and 7, where the defect detection result is either that the article to be detected is a defective article or that it is a normal article, "predicting based on the image features of the image to be detected to obtain the defect detection result of the article to be detected" may specifically include: predicting based on the image features to obtain the predicted probability that the article to be detected is defective; if that predicted probability (e.g., 0.95) is greater than a preset probability threshold (e.g., 0.5), determining that the article to be detected is a defective article; if it (e.g., 0.05) is less than or equal to the preset probability threshold (e.g., 0.5), determining that the article to be detected is a normal article.
The specific value of the prediction probability threshold may be set according to the actual service scene requirement, where the specific value of the prediction probability threshold is not limited.
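The thresholding rule described above is straightforward; a minimal sketch (the function name and the default threshold of 0.5 follow the example values in the text, not a fixed requirement of the scheme):

```python
def classify_article(pred_prob, threshold=0.5):
    """Decision rule from the text: a predicted defect probability
    strictly above the preset threshold means the article is defective,
    otherwise it is normal.  The threshold should be tuned per
    business scenario, as noted above."""
    return "defective" if pred_prob > threshold else "normal"
```

For example, `classify_article(0.95)` yields "defective" and `classify_article(0.05)` yields "normal", matching the worked numbers in the description.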
The following describes the training process of the preset defect detection model and how the feature extraction parameters used in step 202 are obtained. Taking as an example training the preset defect detection model shown in fig. 4 to obtain a trained defect detection model, and extracting the model parameters of the trained defect detection model as the feature extraction parameters of step 202: referring to fig. 3 and fig. 4, the training process of the preset defect detection model may include the following steps 301 to 304, and the learning process of the feature extraction parameters may include the following steps 301 to 305.
Referring to fig. 4, the preset defect detection model is described below; it includes a feature extraction layer and a prediction layer.
(I) The feature extraction layer extracts features from an image: its input is an image and its output is the features of that image. Illustratively, the feature extraction layer may be designed as a convolutional neural network (Convolutional Neural Network, CNN), which may specifically include a convolution calculation layer, a ReLU excitation layer, a pooling layer, a fully connected layer, and the like; when an image is input, the layer applies convolution calculation, pooling, activation and similar processing, and finally outputs the features of the image. For example, for feature extraction over a sample data set, the feature extraction layer takes each sample image in the sample data set as input and, through convolution calculation, pooling, activation and similar processing, obtains the sample features of each sample image. Likewise, for an image to be detected, the feature extraction layer takes the image to be detected as input and obtains the image features of the image to be detected.
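The sequence of operations the feature extraction layer performs (convolution, ReLU, pooling, flattening) can be sketched in plain numpy. A real implementation would use a CNN framework with many learned kernels; the single hand-rolled kernel below is only illustrative:

```python
import numpy as np

def feature_extraction_layer(image, kernel):
    """Minimal numpy sketch of one feature-extraction pass: valid
    convolution with one kernel, ReLU excitation, 2x2 max pooling,
    then flattening into a feature vector."""
    h, w = image.shape
    kh, kw = kernel.shape
    # Valid cross-correlation (what CNN frameworks call convolution).
    conv = np.array([[np.sum(image[i:i + kh, j:j + kw] * kernel)
                      for j in range(w - kw + 1)]
                     for i in range(h - kh + 1)])
    relu = np.maximum(conv, 0.0)                      # ReLU excitation layer
    ph, pw = relu.shape[0] // 2, relu.shape[1] // 2   # 2x2 max pooling
    pooled = np.array([[relu[2 * i:2 * i + 2, 2 * j:2 * j + 2].max()
                        for j in range(pw)] for i in range(ph)])
    return pooled.ravel()                             # feature vector
```

A 6x6 input with a 3x3 kernel yields a 4x4 convolution map, pooled to 2x2, i.e., a 4-element feature vector; the fully connected layer and learned weights of the actual model are omitted here.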
(II) The prediction layer performs classification prediction from the image features and determines whether the article in the image has a defect.
(1) In some embodiments, the probability that the article in the image is defective may be predicted and used as the predicted value of the image. For example, in article defect detection over a sample data set, the prediction layer predicts from the sample features of each sample image the probability that the article in that image is defective; if this probability is greater than the preset probability threshold, the article in the sample image is determined to have a defect (i.e., to be a defective article); if it is less than or equal to the preset probability threshold, the article is determined to have no defect (i.e., to be a normal article). Likewise, for an image to be detected, the prediction layer predicts from its image features the probability that the article in it is defective, and the same threshold rule determines whether the article to be detected is a defective or a normal article.
(2) In some embodiments, the probability that the article in the image does not have a defect may instead be predicted and used as the predicted value of the image. In that case, if the predicted probability of no defect is smaller than the preset probability threshold, the article in the sample image is determined to have a defect (i.e., to be a defective article); if it is greater than or equal to the preset probability threshold, the article is determined to have no defect (i.e., to be a normal article).
301. And carrying out feature extraction on each sample image in the sample data set through a feature extraction layer in a preset defect detection model to obtain sample features of each sample image.
Wherein the sample dataset includes an item defect image, and a first noise image generated based on the item defect image.
The article defect image refers to an image containing a defective article, that is, an image of an article that has a defect, for example an article with a slight defect as shown in fig. 5 (b), or an article with a serious defect as shown in fig. 5 (c).
The article normal image refers to an image containing a non-defective article, that is, an image of a normal article, as shown for example in fig. 5 (a). It can be appreciated that the specific criterion for judging whether an article has a defect may be determined by the actual business scenario, and this embodiment does not limit it: in some business scenarios an article is considered defective when its damaged area reaches a preset area threshold; in others, when an irregular area of the article reaches a preset area threshold.
It can be understood that the preset defect detection model may be trained directly with the article defect images and the article normal images, and the trained defect detection model can also detect article defects. However, a model trained in this way tends to fit the training data set well while producing unreliable predictions for defect images not covered by the training data set, so its generalization is poor. Therefore, in the model training process, the embodiment of the application provides two ways of improving the generalization of the trained defect detection model (first, expanding samples; second, adding intra-class contrast loss values), specifically as follows:
1. Expanding samples. Referring to fig. 4, in addition to the original images (the article defect image and the article normal image are referred to as original images), samples are further expanded. In the embodiment of the present application, there are various ways to expand samples, which illustratively include:
(1) the first noisy image is expanded.
The first noise image is a new image obtained by adding noise based on the object defect image. For example, a random noise may be added to the item defect image to generate a first noise image. For example, the following formula 1 may be referred to, and the first noise image may be generated based on the object defect image, so that a noise enhancement version of the object defect image may be obtained, thereby improving the richness and diversity of the training data, and further improving the generalization of the trained defect detection model to a certain extent.
x̃1 = x1 + ρ1    Equation 1

In Equation 1, x1 represents the article defect image; x̃1 represents the first noise image; ρ1 is random noise of the same size as x1, with values taken randomly between 0 and 255.
Wherein the labeling value of the first noise image is consistent with the labeling value of the article defect image (denoted as y1).
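A NumPy sketch of the noise expansion of Equation 1; sampling the noise uniformly from [0, 255] and clipping the sum back to the valid pixel range are assumptions, since the embodiment only states that ρ1 takes random values between 0 and 255:

```python
import numpy as np

def add_random_noise(image, rng=None):
    """Generate a noise-augmented copy of an image (Equation 1 sketch).

    image: uint8 array of shape (H, W) or (H, W, C), e.g. the article
    defect image x1. Returns the first noise image; its labeling value
    stays the same as the original image's.
    """
    rng = np.random.default_rng(rng)
    # rho_1: random noise of the same size as the image, values in [0, 255].
    noise = rng.integers(0, 256, size=image.shape)
    # Clip back into the valid pixel range (an assumption; the embodiment
    # does not specify how overflow beyond 255 is handled).
    return np.clip(image.astype(np.int32) + noise, 0, 255).astype(np.uint8)
```

The second noise image of Equation 2 is obtained the same way, applied to an article normal image instead.
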
(2) The second noisy image is expanded.
The second noise image is a new image obtained by adding noise based on the normal image of the article. For example, a random noise may be added to the normal image of the item, thereby generating a second noise image. For example, the following formula 2 may be referred to, and the second noise image may be generated based on the normal image of the article, so that a noise enhancement version of the normal image of the article may be obtained, thereby improving the richness and diversity of the training data, and further improving the generalization of the trained defect detection model to a certain extent.
x̃2 = x2 + ρ2    Equation 2

In Equation 2, x2 represents the article normal image; x̃2 represents the second noise image; ρ2 is random noise of the same size as x2, with values taken randomly between 0 and 255.
Wherein the labeling value of the second noise image is consistent with the labeling value of the article normal image (denoted as y2).
(3) The composite image is expanded.
The synthesized image is a new image obtained by linearly synthesizing the first noise image and the second noise image. For example, the following formula 3 may be referred to, and the composite image may be obtained by synthesizing based on the first noise image and the second noise image, so that the sample images that have the characteristics of the article defect image and the article normal image and are different from the article defect image and the article normal image may be simultaneously owned, so as to improve the richness and diversity of training data, and further improve the generalization of the trained defect detection model to a certain extent.
x̃3 = α·x̃1 + (1 − α)·x̃2    Equation 3

In Equation 3, x̃3 represents the composite image; x̃1 represents the first noise image; x̃2 represents the second noise image; α is a random number between 0 and 1.
Wherein the composite image obtained by synthesis simultaneously has the characteristics of the article defect image x1 and the article normal image x2. In an embodiment of the application, the labeling value of the composite image is changed accordingly; specifically, the labeling value of the composite image (denoted as y3) is set to a value between the labeling value of the article defect image (y1) and the labeling value of the article normal image (y2). For example, assuming the labeling value of the article defect image is y1 = 1 and the labeling value of the article normal image is y2 = 0, the labeling value y3 of the composite image lies between 0 and 1. For example, a labeling value may be assigned to the composite image with reference to the following Equation 4.
y3 = α·y1 + (1 − α)·y2    Equation 4
In Equation 4, y3 represents the labeling value of the composite image, y1 represents the labeling value of the article defect image, y2 represents the labeling value of the article normal image, and α is the same random number between 0 and 1 as in Equation 3.
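Equations 3 and 4 amount to a linear (mixup-style) interpolation of the two noise images and of their labeling values with the same coefficient α; a sketch assuming float-valued image arrays:

```python
import numpy as np

def synthesize(noisy_defect, noisy_normal, y1=1.0, y2=0.0, alpha=None, rng=None):
    """Blend two noise images and their labels (Equations 3 and 4 sketch).

    noisy_defect: first noise image x~1 (from an article defect image).
    noisy_normal: second noise image x~2 (from an article normal image).
    y1, y2: labeling values of the defect and normal images (1 and 0 here,
    matching the example in the text).
    """
    rng = np.random.default_rng(rng)
    if alpha is None:
        alpha = rng.uniform(0.0, 1.0)      # random number between 0 and 1
    # Equation 3: composite image.
    x3 = alpha * noisy_defect + (1.0 - alpha) * noisy_normal
    # Equation 4: composite labeling value, lies between y2 and y1.
    y3 = alpha * y1 + (1.0 - alpha) * y2
    return x3, y3
```

Using one shared α for both the image and the label keeps the supervision consistent with how much "defect content" the composite actually carries.
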
Thus, in steps 301 to 304, there are various ways of constructing the sample data set for training the preset defect detection model, which exemplarily include:
(1) The sample dataset includes an article defect image and a first noise image.
That is, the sample images in the sample data set include article defect images and first noise images. After the acquired article defect images are added to an initial sample set, a first noise image may be generated for each article defect image in the initial sample set with reference to the generation method of the first noise image mentioned in the foregoing "(1) the first noisy image is expanded". A sample data set is constructed from the generated first noise images and the article defect images in the initial sample set, and the preset defect detection model is trained with it. At this time, in step 301, feature extraction is performed on the article defect image and the first noise image through the feature extraction layer in the preset defect detection model, so as to obtain sample features of the article defect image and sample features of the first noise image.
(2) The sample dataset includes an article defect image, a first noise image, and a second noise image.
That is, the sample images in the sample data set include article defect images, first noise images, and second noise images. After the article defect images and the article normal images are acquired and added to the initial sample set, a first noise image may be generated for each article defect image in the initial sample set and a second noise image for each article normal image in the initial sample set, with reference to the generation method of the first noise image mentioned in the foregoing "(1) the first noisy image is expanded" and the generation method of the second noise image mentioned in the foregoing "(2) the second noisy image is expanded". A sample data set is constructed from the generated first noise images, the generated second noise images, and the article defect images in the initial sample set, and the preset defect detection model is trained with it. At this time, in step 301, feature extraction is performed on the article defect image, the first noise image, and the second noise image through the feature extraction layer in the preset defect detection model, so as to obtain sample features of the article defect image, sample features of the first noise image, and sample features of the second noise image.
(3) The sample dataset includes an article defect image, a first noise image, and a composite image.
That is, the sample images in the sample data set include article defect images, first noise images, and composite images. After the article defect images and the article normal images are acquired and added to the initial sample set, with reference to the generation method of the first noise image mentioned in the foregoing "(1) the first noisy image is expanded", the generation method of the second noise image mentioned in the foregoing "(2) the second noisy image is expanded", and the generation method of the composite image mentioned in the foregoing "(3) the composite image is expanded", a first noise image may be generated for each article defect image in the initial sample set, a second noise image for each article normal image in the initial sample set, and a composite image based on the first noise image and the second noise image. A sample data set is constructed from the generated first noise images, the generated composite images, and the article defect images in the initial sample set, and the preset defect detection model is trained with it. At this time, in step 301, feature extraction is performed on the article defect image, the first noise image, and the composite image through the feature extraction layer in the preset defect detection model, so as to obtain sample features of the article defect image, sample features of the first noise image, and sample features of the composite image.
(4) The sample dataset includes an article defect image, a first noise image, a second noise image, and a composite image.
That is, the sample images in the sample data set include article defect images, first noise images, second noise images, and composite images. After the article defect images and the article normal images are acquired and added to the initial sample set, with reference to the generation method of the first noise image mentioned in the foregoing "(1) the first noisy image is expanded", the generation method of the second noise image mentioned in the foregoing "(2) the second noisy image is expanded", and the generation method of the composite image mentioned in the foregoing "(3) the composite image is expanded", a first noise image is generated for each article defect image in the initial sample set, a second noise image for each article normal image in the initial sample set, and a composite image based on the first noise image and the second noise image. A sample data set is constructed from the generated first noise images, the generated second noise images, the generated composite images, and the article defect images in the initial sample set, and the preset defect detection model is trained with it. At this time, in step 301, feature extraction is performed on the article defect image, the first noise image, the second noise image, and the composite image through the feature extraction layer in the preset defect detection model, so as to obtain sample features of the article defect image, sample features of the first noise image, sample features of the second noise image, and sample features of the composite image.
(5) The sample data set includes an article defect image, a first noise image, a second noise image, a composite image, and an article normal image.
That is, the sample images in the sample data set include article defect images, first noise images, second noise images, composite images, and article normal images. After the article defect images and the article normal images are acquired and added to the initial sample set, with reference to the generation method of the first noise image mentioned in the foregoing "(1) the first noisy image is expanded", the generation method of the second noise image mentioned in the foregoing "(2) the second noisy image is expanded", and the generation method of the composite image mentioned in the foregoing "(3) the composite image is expanded", a first noise image is generated for each article defect image in the initial sample set, a second noise image for each article normal image in the initial sample set, and a composite image based on the first noise image and the second noise image. A sample data set is constructed from the generated first noise images, the generated second noise images, the generated composite images, the article defect images in the initial sample set, and the article normal images in the initial sample set, and the preset defect detection model is trained with it. At this time, in step 301, feature extraction is performed on the article defect image, the first noise image, the second noise image, the composite image, and the article normal image through the feature extraction layer in the preset defect detection model, so as to obtain sample features of the article defect image, sample features of the first noise image, sample features of the second noise image, sample features of the composite image, and sample features of the article normal image.
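The fullest construction, variant (5), can be sketched as follows. Pairing the i-th defect image with the i-th normal image for synthesis is an illustrative choice (the embodiment does not fix a pairing rule), and the uniform noise stands in for the Equation 1 and 2 operations:

```python
import numpy as np

def build_sample_dataset(defect_images, normal_images, rng=None):
    """Variant (5) sketch: originals plus all three kinds of expanded samples.

    defect_images, normal_images: equal-length lists of float arrays.
    Returns a list of (image, labeling_value) pairs, with labeling value
    1.0 for defective and 0.0 for normal, as in the text's example.
    """
    rng = np.random.default_rng(rng)
    samples = []
    for x1, x2 in zip(defect_images, normal_images):
        n1 = x1 + rng.uniform(0, 255, x1.shape)   # first noise image (Eq. 1)
        n2 = x2 + rng.uniform(0, 255, x2.shape)   # second noise image (Eq. 2)
        alpha = rng.uniform(0.0, 1.0)
        x3 = alpha * n1 + (1 - alpha) * n2        # composite image (Eq. 3)
        y3 = alpha * 1.0 + (1 - alpha) * 0.0      # composite label (Eq. 4)
        samples += [(x1, 1.0), (n1, 1.0),         # defect image + its noise copy
                    (x2, 0.0), (n2, 0.0),         # normal image + its noise copy
                    (x3, y3)]                     # composite sample
    return samples
```

Variants (1) to (4) are obtained by dropping the corresponding entries from the `samples` list.
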
2. Adding intra-class contrast loss values. In the embodiment of the present application, there are various ways to add intra-class contrast loss values, which illustratively include:
(1) Adding a predicted value contrast loss (i.e., the first contrast loss value) between the article defect image and the first noise image.
(2) Adding a predicted value contrast loss (i.e., the second contrast loss value) between the article normal image and the second noise image.
The specific implementation of adding intra-class contrast loss values will be described in detail later (see step 303 for details); for simplicity of description, it is not repeated here.
302. And predicting based on sample characteristics of each sample image by presetting a prediction layer in the defect detection model to obtain a prediction value of each sample image.
Wherein the predicted value of each sample image includes a predicted value of the item defect image and a predicted value of the first noise image.
Corresponding to the ways of constructing the sample data set in step 301, there are various ways of predicting the predicted value of each sample image in step 302. Taking predicting the probability that the article in a sample image has a defect as the predicted value of the sample image as an example, these ways include:
(1) The sample dataset includes an article defect image and a first noise image.
At this time, in step 301, feature extraction is performed to obtain sample features of the object defect image and sample features of the first noise image, and the predicted values of each sample image predicted in step 302 specifically include: predicted value of the defective image of the article, predicted value of the first noise image. In step 302, predicting sample features of an object defect image and sample features of a first noise image respectively by presetting a prediction layer in a defect detection model to obtain probability of object defects in the object defect image and probability of object defects in the first noise image; and taking the probability of the defect of the article in the article defect image as a predicted value of the article defect image and taking the probability of the defect of the article in the first noise image as a predicted value of the first noise image.
(2) The sample dataset includes an article defect image, a first noise image, and a second noise image.
At this time, in step 301, feature extraction is performed to obtain sample features of the object defect image, sample features of the first noise image, and sample features of the second noise image, and the predicted values of the sample images predicted in step 302 specifically include: a predicted value of the defective image of the article, a predicted value of the first noise image, and a predicted value of the second noise image. In step 302, predicting sample features of an object defect image, sample features of a first noise image and sample features of a second noise image respectively by presetting a prediction layer in a defect detection model to obtain probability of object defects in the object defect image, probability of object defects in the first noise image and probability of object defects in the second noise image; and taking the probability of the defect of the article in the article defect image as a predicted value of the article defect image, taking the probability of the defect of the article in the first noise image as a predicted value of the first noise image, and taking the probability of the defect of the article in the second noise image as a predicted value of the second noise image.
(3) The sample dataset includes an article defect image, a first noise image, and a composite image.
At this time, in step 301, feature extraction is performed to obtain sample features of the object defect image, sample features of the first noise image, and sample features of the composite image, and the predicted values of each sample image predicted in step 302 specifically include: predicted value of the defective image of the article, predicted value of the first noise image, predicted value of the composite image. In step 302, predicting sample features of an article defect image, sample features of a first noise image and sample features of a composite image respectively by presetting a prediction layer in a defect detection model to obtain probability of defects of articles in the article defect image, probability of defects of articles in the first noise image and probability of defects of articles in the composite image; and taking the probability of the defect of the article in the article defect image as a predicted value of the article defect image, taking the probability of the defect of the article in the first noise image as a predicted value of the first noise image, and taking the probability of the defect of the article in the composite image as a predicted value of the composite image.
(4) The sample dataset includes an article defect image, a first noise image, a second noise image, and a composite image.
At this time, in step 301, feature extraction is performed to obtain sample features of the object defect image, sample features of the first noise image, sample features of the second noise image, and sample features of the composite image, and the predicted values of each sample image predicted in step 302 specifically include: predicted values of the article defect image, the first noise image, the second noise image, and the composite image. In step 302, predicting sample features of an article defect image, sample features of a first noise image, sample features of a second noise image and sample features of a composite image respectively by presetting a prediction layer in a defect detection model to obtain probability of article defects in the article defect image, probability of article defects in the first noise image, probability of article defects in the second noise image and probability of article defects in the composite image; and taking the probability of the defect of the article in the article defect image as a predicted value of the article defect image, taking the probability of the defect of the article in the first noise image as a predicted value of the first noise image, taking the probability of the defect of the article in the second noise image as a predicted value of the second noise image, and taking the probability of the defect of the article in the composite image as a predicted value of the composite image.
(5) The sample data set includes an article defect image, a first noise image, a second noise image, a composite image, and an article normal image.
At this time, in step 301, feature extraction is performed to obtain sample features of the article defect image, sample features of the first noise image, sample features of the second noise image, sample features of the composite image, and sample features of the article normal image, and the predicted values of each sample image predicted in step 302 specifically include: a predicted value of the article defect image, a predicted value of the first noise image, a predicted value of the second noise image, a predicted value of the composite image, and a predicted value of the article normal image. In step 302, the sample features of the article defect image, the sample features of the first noise image, the sample features of the second noise image, the sample features of the composite image, and the sample features of the article normal image are respectively predicted through the prediction layer in the preset defect detection model, so as to obtain the probability of an article defect in the article defect image, the probability of an article defect in the first noise image, the probability of an article defect in the second noise image, the probability of an article defect in the composite image, and the probability of an article defect in the article normal image; the probability of an article defect in the article defect image is taken as the predicted value of the article defect image, the probability of an article defect in the first noise image as the predicted value of the first noise image, the probability of an article defect in the second noise image as the predicted value of the second noise image, the probability of an article defect in the composite image as the predicted value of the composite image, and the probability of an article defect in the article normal image as the predicted value of the article normal image.
The foregoing takes predicting the probability that an article in a sample image has a defect as the predicted value of the sample image as an example to describe how the predicted value of each sample image is obtained. It can be appreciated that, in practical application, the probability that the article in the sample image does not have a defect can also be predicted as the predicted value of the sample image. Alternatively, a two-dimensional vector (whose two dimensions are the probability that the article in the sample image has a defect and the probability that the article in the sample image does not have a defect, respectively) may be predicted as the predicted value of the sample image.
303. A total loss value of the preset defect detection model is determined based on a first contrast loss value between the predicted value of the item defect image and the predicted value of the first noise image.
The total loss value is the loss value used for training the preset defect detection model.
In order to improve the generalization of the trained defect detection model, when determining the total loss value of the preset defect detection model in step 303, an intra-class total contrast loss value is added in addition to the total classification loss value of the preset defect detection model (wherein the total contrast loss value may be determined by at least one of the first contrast loss value and the second contrast loss value). At this time, there are various ways of obtaining the total loss value in step 303, which illustratively include the following case 1 and case 2:
Case 1: the first contrast loss value is taken as the intra-class total contrast loss value of the preset defect detection model, and the total loss value may be determined by combining the total classification loss value with the first contrast loss value.
Wherein the first contrast loss value is a loss value between a predicted value of the item defect image and a predicted value of the first noise image.
At this time, the sample dataset includes at least the article defect image and the first noise image. Illustratively, step 303 may specifically include steps 3031A-3033A:
3031A, acquiring a first contrast loss value of a preset defect detection model based on the predicted value of the object defect image and the predicted value of the first noise image.
3032A, acquiring the total classification loss value of the preset defect detection model based on the predicted value of each sample image and the labeling value of each sample image.
3033A, determining a total loss value based on the first comparative loss value and the total classification loss value.
Specifically, first, the predicted value of each sample image and the labeling value of each sample image are substituted into a preset classification loss function (for example, as shown in Equation 7-1) to calculate the total classification loss value of the preset defect detection model; the predicted value of the article defect image and the predicted value of the first noise image are substituted into a preset contrast loss function (for example, as shown in Equation 6-1) to calculate the first contrast loss value of the preset defect detection model; then, the total classification loss value and the first contrast loss value are summed according to a preset proportion relation (for example, as shown in Equation 5) to obtain the total loss value.
L = L_C0 + β·L_S0    Equation 5
In Equation 5 and hereinafter, L is the total loss value; L_C0 is the intra-class total contrast loss value of the preset defect detection model; L_S0 is the total classification loss value; β is a balance parameter for adjusting the proportion between the total classification loss value L_S0 and the total contrast loss value.
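Equation 5 can be sketched as follows; the default β is illustrative, since the embodiment treats β as a balance parameter without fixing its value:

```python
def total_loss(contrast_losses, total_classification_loss, beta=1.0):
    """Equation 5 sketch: L = L_C0 + beta * L_S0.

    contrast_losses: [L_C1] for case 1, or [L_C1, L_C2] for case 2;
    their sum is the intra-class total contrast loss L_C0.
    beta: balance parameter (illustrative default).
    """
    l_c0 = sum(contrast_losses)
    return l_c0 + beta * total_classification_loss
```
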
In case 1, the first contrast loss value may be determined based on a preset contrast loss function (such as an absolute value loss function, an L2 loss function, etc.). Taking the contrast loss function being an absolute value loss function as an example, as shown in the following Equation 6-1, the predicted value of the first noise image and the predicted value of the article defect image may be substituted into the absolute value loss function of Equation 6-1, and the first contrast loss value may be calculated.
L_C0 = (1/H) · Σ_{i=1}^{H} |p_i − p̃_i|    Equation 6-1

In Equation 6-1 and hereinafter, H is the number of noise images in the sample data set (in case 1, the number of first noise images; in case 2, the total number of first noise images and second noise images); L_C0 is the total contrast loss value; p_i is the predicted value of the i-th sample image in the sample data set; p̃_i is the predicted value of the noise image generated for the i-th sample image (in case 1, the predicted value of the first noise image; in case 2, the predicted value of the first noise image or of the second noise image).
When determining the first contrast loss value for a certain group of images (i.e., a first noise image and its article defect image), as shown in Equation 6-2, the predicted value of the first noise image (denoted as p̃_1) and the predicted value of the article defect image corresponding to the first noise image (denoted as p_1) are subjected to one loss calculation to obtain the first contrast loss value (i.e., L_C1):

L_C1 = |p_1 − p̃_1|    Equation 6-2
When determining the second contrast loss value for a certain group of images (i.e., a second noise image and its article normal image), as shown in Equation 6-3, the predicted value of the second noise image (denoted as p̃_2) and the predicted value of the article normal image corresponding to the second noise image (denoted as p_2) are subjected to one loss calculation to obtain the second contrast loss value (i.e., L_C2):

L_C2 = |p_2 − p̃_2|    Equation 6-3
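With the absolute value loss chosen above, the contrast losses of Equations 6-1 to 6-3 can be sketched as follows (function names are illustrative):

```python
def pair_contrast_loss(p, p_noisy):
    """Equations 6-2 / 6-3: absolute difference between the predicted value
    of an original sample image and that of its noise image."""
    return abs(p - p_noisy)

def total_contrast_loss(preds, noisy_preds):
    """Equation 6-1: mean absolute difference over all H noise images.

    preds: predicted values of the original sample images.
    noisy_preds: predicted values of their corresponding noise images.
    """
    assert len(preds) == len(noisy_preds)
    h = len(preds)  # H: number of noise images in the sample data set
    return sum(abs(p - q) for p, q in zip(preds, noisy_preds)) / h
```

Minimizing this loss pushes the model to give an original image and its noise-augmented copy similar predictions, which is the intra-class consistency the embodiment aims for.
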
In order to enable the trained defect detection model to learn the association between the features of the expanded samples and the defect detection result, so as to improve the generalization of the trained defect detection model, classification supervision losses are also constructed for the expanded samples, in addition to the classification supervision losses of the article normal image and the article defect image. That is, the total classification loss value in step 3032A may be determined based on at least one of the first classification loss value of the first noise image, the second classification loss value of the second noise image, and the third classification loss value of the composite image, in addition to the classification loss values of the article defect image and the article normal image. For example, for the construction modes (1) to (5) of the sample data set described in step 301, the total classification loss value in step 3032A may be determined in the following modes (1) to (5):
(1) The sample dataset includes an article defect image and a first noise image. At this time, the total classification loss value may be determined based on the classification loss value of the article defect image and the first classification loss value of the first noise image.
The total classification loss value may be determined based on a preset classification loss function (e.g., cross entropy loss function, KL divergence loss function, JS divergence loss function, etc.), as shown in the following formula 7-1, taking the example that the classification loss function is a cross entropy loss function: the predicted value of the first noise image, the marked value of the first noise image, the predicted value of the article defect image and the marked value of the article defect image can be substituted into the classification loss function of the formula 7-1, and the total classification loss value of the preset defect detection model is calculated. At this time, the obtained total classification loss value includes the classification loss value of the article defect image and the first classification loss value of the first noise image.
L_S0 = −(1/N) · Σ_{i=1}^{N} [y_i·log(p_i) + (1 − y_i)·log(1 − p_i)]    Equation 7-1

In Equation 7-1 and hereinafter, L_S0 represents the total classification loss value; y_i is the labeling value of the i-th sample image in the sample data set; p_i is the predicted value of the i-th sample image; N is the number of sample images in the sample data set.
If the i-th sample image is a first noise image, as shown in Equation 7-2, that is, when determining the first classification loss value for a certain first noise image, the labeling value of the first noise image (i.e., y_1) and the predicted value of the first noise image (denoted as p̃_1) are subjected to one loss calculation to obtain the first classification loss value (i.e., L_S1):

L_S1 = −[y_1·log(p̃_1) + (1 − y_1)·log(1 − p̃_1)]    Equation 7-2

Similarly, if the i-th sample image is an article defect image, that is, when determining the classification loss value for a certain article defect image, the labeling value of the article defect image (i.e., y_1) and the predicted value of the article defect image (denoted as p_1) are subjected to one loss calculation to obtain the classification loss value of the article defect image.
If the i-th sample image is a second noise image, as shown in Equation 7-3, that is, when determining the second classification loss value for a certain second noise image, the labeling value of the second noise image (i.e., y_2) and the predicted value of the second noise image (denoted as p̃_2) are subjected to one loss calculation to obtain the second classification loss value (i.e., L_S2):

L_S2 = −[y_2·log(p̃_2) + (1 − y_2)·log(1 − p̃_2)]    Equation 7-3
(2) The sample dataset includes an article defect image, a first noise image, and a second noise image. At this time, the total classification loss value may be determined based on the classification loss value of the article defect image, the first classification loss value of the first noise image, and the second classification loss value of the second noise image. In this case, the obtained total classification loss value includes the classification loss value of the article defect image, the first classification loss value of the first noise image, and the second classification loss value of the second noise image. The specific determination process may refer to the related description of the case (1), and for simplicity of description, it is not repeated here.
(3) The sample dataset includes an article defect image, a first noise image, and a composite image. At this time, the total classification loss value may be determined based on the classification loss value of the article defect image, the first classification loss value of the first noise image, and the third classification loss value of the composite image. In this case, the obtained total classification loss value includes the classification loss value of the article defect image, the first classification loss value of the first noise image, and the third classification loss value of the composite image. The specific determination process may refer to the related description of the case (1), and for simplicity of description, it is not repeated here.
(4) The sample dataset includes an article defect image, a first noise image, a second noise image, and a composite image. In this case, the total classification loss value may be determined based on the classification loss value of the article defect image, the first classification loss value of the first noise image, the second classification loss value of the second noise image, and the third classification loss value of the composite image, and comprises those four loss values. The specific determination process may refer to the related description of case (1) and, for simplicity of description, is not repeated here.
(5) The sample data set includes an article defect image, a first noise image, a second noise image, a composite image, and an article normal image. In this case, the total classification loss value may be determined based on the classification loss value of the article defect image, the first classification loss value of the first noise image, the second classification loss value of the second noise image, the third classification loss value of the composite image, and the classification loss value of the article normal image, and comprises those five loss values. The specific determination process may refer to the related description of case (1) and, for simplicity of description, is not repeated here.
Case 2: taking the sum of the first contrast loss value and the second contrast loss value as the total intra-class contrast loss value of the preset defect detection model, and determining the total loss value by combining the total classification loss value, the first contrast loss value, and the second contrast loss value.
Wherein the second contrast loss value is a loss value between a predicted value of the normal image of the item and a predicted value of the second noise image.
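Formulas 6-1 and 6-2 are not reproduced in this excerpt; one simple form of an intra-class contrast loss between two predicted values is a squared difference, which is minimised when the two predictions coincide, so that gradient descent pulls them together. A sketch under that assumption:

```python
def contrast_loss(p_a: float, p_b: float) -> float:
    """Intra-class contrast loss between two predicted values; minimised
    when the two predictions coincide, so optimisation pulls them together.
    The squared-difference form used here is an assumption."""
    return (p_a - p_b) ** 2

# The first contrast loss value pulls the defect image's prediction and the
# first noise image's prediction toward each other; closer pairs cost less:
l_c1_far = contrast_loss(0.9, 0.6)
l_c1_near = contrast_loss(0.9, 0.85)
```

The second contrast loss value would apply the same function to the article normal image's prediction and the second noise image's prediction.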
At this time, the sample dataset includes an article defect image, an article normal image, a first noise image, and a second noise image. Illustratively, step 303 may include steps 3031B-3034B:
3031B, acquiring a first contrast loss value of a preset defect detection model based on the predicted value of the object defect image and the predicted value of the first noise image.
3032B, obtaining a second contrast loss value of the preset defect detection model based on the predicted value of the article normal image and the predicted value of the second noise image.
3033B, obtaining the total classification loss value of the preset defect detection model based on the predicted value of each sample image and the labeling value of each sample image.
3034B, determining a total loss value based on the first contrast loss value, the second contrast loss value, and the total classification loss value.
Specifically, the predicted value and the labeling value of each sample image are first substituted into a preset classification loss function (for example, as shown in formula 7-1) to calculate the total classification loss value of the preset defect detection model. The predicted value of the article defect image and the predicted value of the first noise image, as well as the predicted value of the article normal image and the predicted value of the second noise image, are substituted into a preset contrast loss function (for example, as shown in formula 6-1) to calculate the total intra-class contrast loss value of the preset defect detection model (the total contrast loss value comprises the first contrast loss value and the second contrast loss value, wherein the second contrast loss value is determined similarly to the first; specific reference may be made to the related description of formula 6-2 in case 1). Then, the total classification loss value and the total contrast loss value are added according to a preset weighting ratio (for example, as shown in formula 5) to obtain the total loss value.
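The weighted combination described above can be sketched as follows; the weighting factor `lam` stands in for the "preset duty ratio relation" of formula 5, whose actual value is not given in this excerpt:

```python
def total_loss(classification_losses, contrast_losses, lam=0.5):
    """Total loss as the total classification loss plus the total
    intra-class contrast loss, weighted by a preset ratio lam
    (a hypothetical stand-in for the patent's formula 5)."""
    return sum(classification_losses) + lam * sum(contrast_losses)

# e.g. per-sample classification losses plus the two contrast loss values:
loss = total_loss([1.0, 0.5], [0.2, 0.4], lam=0.5)
```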
In case 2, "determining the total loss value by combining the total classification loss value, the first contrast loss value, and the second contrast loss value" is similar to case 1; specific reference may be made to the related description of case 1, and for simplicity of description details are not repeated here.
304. And adjusting model parameters of the preset defect detection model based on the total loss value until the preset defect detection model meets the preset training stopping condition, and taking the preset defect detection model as a trained defect detection model.
For example, gradient back-propagation is performed on the preset defect detection model according to the total loss value L to update its model parameters; when the preset stop-training condition is met, the preset defect detection model is taken as the trained defect detection model.
The preset stop-training condition may be set according to actual service scene requirements; for example, it may be met when the model weights have essentially stopped updating (i.e., the model has converged), when the number of training iterations reaches a preset threshold, and the like.
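The two example stop conditions can be sketched as a simple predicate; the tolerance and iteration threshold below are hypothetical values:

```python
def should_stop(weight_delta: float, iteration: int,
                delta_tol: float = 1e-6, max_iters: int = 10000) -> bool:
    """Preset stop-training condition: stop when the weights have
    essentially stopped updating, or when the iteration count reaches
    a preset threshold (both default values are hypothetical)."""
    return weight_delta < delta_tol or iteration >= max_iters
```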
In this way, a trained defect detection model is obtained through training in steps 301 to 304. In the first aspect, the total contrast loss value steers the parameter optimization of the preset defect detection model so that the model pulls intra-class probability distributions closer, i.e., reduces the feature distance within the same class. The first contrast loss value makes the prediction probabilities (i.e., predicted values) of the article defect image and the first noise image closer, so the model reduces the intra-class feature distance of defect images; the feature distribution of defect image data thus becomes more compact, and defect images are classified more easily (i.e., with higher classification accuracy). The second contrast loss value makes the prediction probabilities of the article normal image and the second noise image closer, so the model reduces the intra-class feature distance of normal images; the feature distribution of normal image data likewise becomes more compact, and normal images are classified more easily. This improves, to a certain extent, the detection accuracy of the trained defect detection model for article defects.
In the second aspect, the first classification loss value, the second classification loss value, and the third classification loss value steer the parameter optimization of the preset defect detection model; that is, the model learns from (1) a first noise image that has the characteristics of the article defect image yet differs from it, (2) a second noise image that has the characteristics of the article normal image yet differs from it, and (3) a composite image that has the characteristics of both the article defect image and the article normal image yet differs from both. This improves the richness and diversity of the training data and, to a certain extent, the generalization of the trained defect detection model.
In a third aspect, by setting the labeling value of the composite image (denoted as y3) between the labeling value of the article defect image (y1) and the labeling value of the article normal image (y2), learning based on the composite image better fits other images lying between the article defect image and the article normal image. The trained defect detection model therefore fits the training set better, and data outside the training data set is fitted more accurately, improving the generalization of the model.
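One plausible way to place the labeling value y3 strictly between y1 and y2 is linear interpolation (mixup-style); the mixing weight `alpha` below is an assumption, as the excerpt does not specify how y3 is chosen:

```python
def composite_label(y1: float, y2: float, alpha: float = 0.5) -> float:
    """Labeling value y3 of a composite image, linearly interpolated
    between the defect label y1 and the normal label y2; alpha is a
    hypothetical mixing weight."""
    assert 0.0 <= alpha <= 1.0
    return alpha * y1 + (1.0 - alpha) * y2

# With y1 = 1 (defect) and y2 = 0 (normal), y3 lies between the two labels:
y3 = composite_label(1.0, 0.0, alpha=0.5)
```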
305. And extracting model parameters of a feature extraction layer in the trained defect detection model to serve as feature extraction parameters.
After the training is completed, the model parameters of the feature extraction layer in the trained defect detection model may be extracted and used as feature extraction parameters for performing feature extraction on the image to be detected in step 203 to obtain the image features of the image to be detected.
From the above, feature extraction is performed on the image to be detected using the feature extraction parameters, which are obtained by learning based on the article defect image of a sample article, the first noise image generated based on the article defect image, and the contrast loss function constructed based on the predicted value of the first noise image and the predicted value of the article defect image, so as to obtain the image features of the image to be detected; prediction is then performed based on the image features to obtain the defect detection result of the object to be detected. In the first aspect, since the feature extraction parameters are obtained through training on the first noise image generated from the article defect image, samples beyond the original article defect images are added, so the learned feature extraction parameters generalize better. In the second aspect, the first contrast loss value makes the predicted value of the article defect image and the predicted value of the first noise image closer, so the algorithm reduces the intra-class feature distance of defect images; the feature distribution of defect image data becomes more compact, and defect images are classified more easily (i.e., with higher classification accuracy). Extracting the image features of the image to be detected with the feature extraction parameters and using them to predict the defect detection result of the object to be detected therefore improves the accuracy of article defect detection, and alleviates, to a certain extent, the problem that existing defect detection algorithms fit well on the training set but generalize poorly.
For easy understanding, taking an image as an industrial product image and performing defect detection on an industrial product as an example, an article defect detection process in an embodiment of the present application is described with reference to fig. 4, 7 and 8, and as shown in fig. 8, the article defect detection process is specifically as follows:
801. an article defect image, an article normal image, a first noise image generated based on the article defect image, and a second noise image generated based on the article normal image are acquired as a sample data set.
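The excerpt does not specify how the noise images are generated from their source images; additive Gaussian pixel noise is one plausible choice that keeps the source image's characteristics while differing from it, sketched below on a flattened image:

```python
import random

def make_noise_image(image, sigma=0.05, seed=None):
    """Generate a noise image from a source image by adding per-pixel
    Gaussian noise and clamping to [0, 1]; the noise type and sigma are
    assumptions, as the excerpt does not fix them."""
    rng = random.Random(seed)
    return [min(1.0, max(0.0, px + rng.gauss(0.0, sigma))) for px in image]

# A (flattened) article defect image and its first noise image:
defect_image = [0.2, 0.8, 0.5]
first_noise_image = make_noise_image(defect_image, seed=0)
```

A second noise image would be generated the same way from an article normal image.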
802. And carrying out feature extraction on each sample image in the sample data set through a feature extraction layer in a preset defect detection model to obtain sample features of each sample image.
803. And predicting based on sample characteristics of each sample image by presetting a prediction layer in the defect detection model to obtain a prediction value of each sample image.
Specifically, in steps 802 to 803, all data in the sample data set (including the article defect image, the article normal image, the first noise image, and the second noise image) are respectively denoted as (x1, y1), (x2, y2), (x1', y1), and (x2', y2), and are input into the preset defect detection model; the feature extraction layer performs feature extraction, the prediction layer performs prediction, and the predicted value of each sample image is finally output, denoted in turn as p1, p2, p1', and p2'.
804. Acquiring a first contrast loss value of a preset defect detection model based on a predicted value of the object defect image and a predicted value of the first noise image; acquiring a second contrast loss value of a preset defect detection model based on the predicted value of the normal image of the article and the predicted value of the second noise image; and determining the total classification loss value of the preset defect detection model based on the predicted value of each sample image and the labeling value of each sample image.
Wherein the total classification loss value may include: (1) a first classification loss value determined based on the predicted value of the first noise image and the labeling value of the first noise image; (2) a second classification loss value determined based on the predicted value of the second noise image and the labeling value of the second noise image; (3) a third classification loss value determined based on the predicted value of the composite image and the labeling value of the composite image; (4) a classification loss value of the article defect image determined based on the predicted value of the article defect image and the labeling value of the article defect image; and (5) a classification loss value of the article normal image determined based on the predicted value of the article normal image and the labeling value of the article normal image.
805. Determining a total loss value of the preset defect detection model according to the first comparison loss value, the second comparison loss value and the total classification loss value, adjusting model parameters of the preset defect detection model until the preset defect detection model meets preset training stopping conditions, and taking the preset defect detection model as a trained defect detection model.
806. Extracting model parameters of a feature extraction layer in the trained defect detection model to serve as feature extraction parameters; model parameters of a prediction layer in the trained defect detection model are extracted to serve as prediction parameters.
807. And carrying out feature extraction on the image to be detected based on the feature extraction parameters to obtain the image features of the image to be detected.
808. And predicting according to the image characteristics based on the prediction parameters to obtain a defect detection result of the object to be detected.
The prediction parameters are used for predicting according to the image characteristics, so that the prediction probability of the defect of the object to be detected can be obtained; if the predicted probability (e.g., 0.95) that the object to be detected is defective is greater than the preset probability threshold (e.g., 0.5), it is determined that the object to be detected is a defective object, as shown in fig. 6 (a). If the predicted probability (e.g., 0.05) that the object to be detected is defective is less than or equal to the preset probability threshold (e.g., 0.5), it is determined that the object to be detected is a normal object, as shown in (b) of fig. 6.
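The thresholding rule in the preceding paragraph can be sketched directly; the 0.5 threshold is the example value from the text:

```python
def detect_defect(probability: float, threshold: float = 0.5) -> str:
    """Map the predicted defect probability to a detection result: a
    probability strictly greater than the preset threshold means the
    object to be detected is a defective object; otherwise it is normal."""
    return "defective" if probability > threshold else "normal"
```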
In order to better implement the method for detecting the article defect based on the artificial intelligence in the embodiment of the application, the embodiment of the application also provides a device for detecting the article defect based on the method for detecting the article defect based on the artificial intelligence, and the device for detecting the article defect can be integrated in computer equipment, such as a server or a terminal and other equipment.
For example, as shown in fig. 9, fig. 9 is a schematic structural diagram of an embodiment of an article defect detecting device according to an embodiment of the present application, where the article defect detecting device may include a first acquiring unit 901, a second acquiring unit 902, an extracting unit 903, a detecting unit 904, and the like, as follows:
a first acquiring unit 901, configured to acquire an image to be detected of an object to be detected;
a second obtaining unit 902, configured to obtain a feature extraction parameter of an image to be detected, where the feature extraction parameter is obtained by learning based on an article defect image of a sample article, a first noise image generated based on the article defect image, and a contrast loss function constructed based on a predicted value of the first noise image and a predicted value of the article defect image;
the extracting unit 903 is configured to perform feature extraction on the image to be detected based on the feature extraction parameter, so as to obtain an image feature of the image to be detected;
and the detection unit 904 is used for predicting based on the image characteristics to obtain a defect detection result of the object to be detected.
In some embodiments of the present application, the article defect detection apparatus further includes a learning unit (not shown in the drawings), and the learning unit is specifically configured to:
extracting features of each sample image in a sample data set through a feature extraction layer in a preset defect detection model to obtain sample features of each sample image, wherein the sample data set comprises an article defect image and a first noise image generated based on the article defect image;
Predicting based on sample characteristics of each sample image through a prediction layer in a preset defect detection model to obtain a prediction value of each sample image, wherein the prediction value of each sample image comprises a prediction value of an article defect image and a prediction value of a first noise image;
determining a total loss value of a preset defect detection model based on a first contrast loss value between a predicted value of the article defect image and a predicted value of the first noise image;
adjusting model parameters of a preset defect detection model based on the total loss value until the preset defect detection model meets the preset training stopping condition, and taking the preset defect detection model as a trained defect detection model;
and extracting model parameters of a feature extraction layer in the trained defect detection model to serve as feature extraction parameters.
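Extracting only the feature-extraction-layer parameters from a trained model can be sketched as filtering a parameter dictionary by layer prefix; the key names below are hypothetical, since a real model would use its own naming scheme:

```python
def extract_feature_parameters(model_params: dict) -> dict:
    """Keep only the feature-extraction-layer parameters of a trained
    model, identified here by a hypothetical 'feature.' key prefix."""
    return {k: v for k, v in model_params.items() if k.startswith("feature.")}

# Hypothetical trained parameters: feature layers plus a prediction layer.
trained = {"feature.conv1": [0.1], "feature.conv2": [0.2], "predict.fc": [0.3]}
feature_params = extract_feature_parameters(trained)
```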
In some embodiments of the application, the sample data set further comprises a normal image of the item and a second noise image generated based on the normal image of the item, the predicted value of each sample image further comprising a predicted value of the normal image of the item and a predicted value of the second noise image; the learning unit is specifically further configured to:
acquiring a first contrast loss value of a preset defect detection model based on a predicted value of the object defect image and a predicted value of the first noise image;
Acquiring a second contrast loss value of a preset defect detection model based on the predicted value of the normal image of the article and the predicted value of the second noise image;
based on the first contrast loss value and the second contrast loss value, a total loss value is determined.
In some embodiments of the application, the learning unit is specifically further configured to:
acquiring a first contrast loss value of a preset defect detection model based on a predicted value of the object defect image and a predicted value of the first noise image;
acquiring a first classification loss value of the first noise image based on the predicted value of the first noise image and the labeling value of the first noise image;
based on the first comparative loss value and the first classification loss value, a total loss value is determined.
In some embodiments of the application, the sample data set further comprises a second noise image generated based on the normal image of the item, the predicted value of each sample image further comprising a predicted value of the second noise image; the learning unit is specifically further configured to:
acquiring a first contrast loss value of a preset defect detection model based on a predicted value of the object defect image and a predicted value of the first noise image;
acquiring a second classification loss value of the second noise image based on the predicted value of the second noise image and the labeling value of the second noise image;
Based on the first comparative loss value and the second classification loss value, a total loss value is determined.
In some embodiments of the application, the sample data set further comprises a composite image of the first noise image and a second noise image, the second noise image being generated based on the normal image of the item, the predicted value of each sample image further comprising a predicted value of the composite image; the learning unit is specifically further configured to:
acquiring a first contrast loss value of a preset defect detection model based on a predicted value of the object defect image and a predicted value of the first noise image;
acquiring a third classification loss value of the composite image based on the predicted value of the composite image and the labeling value of the composite image, wherein the labeling value of the composite image is between the labeling value of the article defect image and the labeling value of the article normal image;
based on the first comparative loss value and the third categorical loss value, a total loss value is determined.
In some embodiments of the present application, the detection unit 904 is specifically further configured to:
extracting model parameters of a prediction layer in the trained defect detection model to serve as prediction parameters of an image to be detected;
and predicting according to the image features based on the prediction parameters to obtain a defect detection result of the object to be detected.
In some embodiments of the present application, the defect detection result includes that the object to be detected is a defective object, and the detection unit 904 is specifically further configured to:
predicting based on the image characteristics to obtain the prediction probability of the defect of the object to be detected;
and if the prediction probability is larger than the preset probability threshold value, determining that the object to be detected is a defective object.
As can be seen from the above, the object defect detecting device according to the embodiment of the present application may be configured to obtain an image to be detected of an object to be detected by the first obtaining unit 901; acquiring feature extraction parameters of the image to be detected by a second acquisition unit 902; the extracting unit 903 performs feature extraction on the image to be detected based on the feature extraction parameters to obtain image features of the image to be detected; the detection unit 904 predicts based on the image features to obtain a defect detection result of the object to be detected.
Therefore, the article defect detection device provided by the embodiment of the present application can bring the following technical effects. In the first aspect, since the feature extraction parameters are obtained through training on the first noise image generated from the article defect image, samples beyond the original article defect images are added, so the learned feature extraction parameters generalize better. In the second aspect, the first contrast loss value makes the predicted value of the article defect image and the predicted value of the first noise image closer, so the algorithm reduces the intra-class feature distance of defect images; the feature distribution of defect image data becomes more compact, and defect images are classified more easily (i.e., with higher classification accuracy). Extracting the image features of the image to be detected with the feature extraction parameters and using them to predict the defect detection result of the object to be detected therefore improves the accuracy of article defect detection, and alleviates, to a certain extent, the problem that existing defect detection algorithms fit well on the training set but generalize poorly.
In addition, in order to better implement the method for detecting the object defect based on the artificial intelligence in the embodiment of the present application, based on the method for detecting the object defect based on the artificial intelligence, the embodiment of the present application further provides a computer device, as shown in fig. 10, which shows a schematic structural diagram of the computer device according to the embodiment of the present application, specifically:
the computer device may include a processor 1001 of one or more processing cores, a memory 1002 of one or more computer-readable storage media, a power supply 1003, an input unit 1004, and other components. Those skilled in the art will appreciate that the computer device structure shown in FIG. 10 does not limit the computer device, which may include more or fewer components than shown, combine certain components, or arrange components differently. Wherein:
the processor 1001 is the control center of the computer device, and connects the various parts of the entire computer device using various interfaces and lines. It performs the various functions of the computer device and processes data by running or executing software programs and/or modules stored in the memory 1002 and calling data stored in the memory 1002, thereby monitoring the computer device as a whole. Optionally, the processor 1001 may include one or more processing cores; preferably, the processor 1001 may integrate an application processor, which mainly handles the operating system, user interface, application programs, and the like, and a modem processor, which mainly handles wireless communication. It will be appreciated that the modem processor may alternatively not be integrated into the processor 1001.
The memory 1002 may be used to store software programs and modules, and the processor 1001 executes various functional applications and performs data processing by running the software programs and modules stored in the memory 1002. The memory 1002 may mainly include a program storage area and a data storage area, wherein the program storage area may store the operating system, application programs required for at least one function (such as a sound playing function, an image playing function, and the like), and so on; the data storage area may store data created according to the use of the computer device, and the like. In addition, the memory 1002 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. Accordingly, the memory 1002 may also include a memory controller to provide the processor 1001 with access to the memory 1002.
The computer device also includes a power supply 1003 for powering the various components, preferably, the power supply 1003 is logically connected to the processor 1001 by a power management system, such that charge, discharge, and power consumption management functions are performed by the power management system. The power supply 1003 may also include one or more of any of a direct current or alternating current power supply, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and the like.
The computer device may also include an input unit 1004, which input unit 1004 may be used to receive input numeric or character information and to generate keyboard, mouse, joystick, optical or trackball signal inputs related to user settings and function control.
Although not shown, the computer device may further include a display unit or the like, which is not described herein. In particular, in this embodiment, the processor 1001 in the computer device loads executable files corresponding to the processes of one or more application programs into the memory 1002 according to the following instructions, and the processor 1001 executes the application programs stored in the memory 1002, so as to implement various functions as follows:
acquiring an image to be detected of an object to be detected;
acquiring feature extraction parameters of an image to be detected, wherein the feature extraction parameters are obtained by learning based on an article defect image of a sample article, a first noise image generated based on the article defect image and a contrast loss function constructed based on a predicted value of the first noise image and a predicted value of the article defect image;
based on the feature extraction parameters, carrying out feature extraction on the image to be detected to obtain image features of the image to be detected;
And predicting based on the image characteristics to obtain a defect detection result of the object to be detected.
The above operations may be specifically referred to the foregoing embodiments, and are not described herein in detail.
Thus, the computer device of the present embodiment may bring about the following technical effects. In the first aspect, since the feature extraction parameters are obtained through training on the first noise image generated from the article defect image, samples beyond the original article defect images are added, so the learned feature extraction parameters generalize better. In the second aspect, the first contrast loss value makes the predicted value of the article defect image and the predicted value of the first noise image closer, so the algorithm reduces the intra-class feature distance of defect images; the feature distribution of defect image data becomes more compact, and defect images are classified more easily (i.e., with higher classification accuracy). Extracting the image features of the image to be detected with the feature extraction parameters and using them to predict the defect detection result of the object to be detected therefore improves the accuracy of article defect detection, and alleviates, to a certain extent, the problem that existing defect detection algorithms fit well on the training set but generalize poorly.
Those of ordinary skill in the art will appreciate that all or a portion of the steps of the various methods of the above embodiments may be performed by instructions, or by instructions controlling associated hardware, which may be stored in a computer-readable storage medium and loaded and executed by a processor.
To this end, an embodiment of the present application provides a computer-readable storage medium in which a computer program is stored, the computer program being capable of being loaded by a processor to perform the steps of any one of the artificial-intelligence-based article defect detection methods provided by the embodiments of the present application. For example, the computer program may perform the steps of:
acquiring an image to be detected of an article to be detected;
acquiring feature extraction parameters of the image to be detected, wherein the feature extraction parameters are obtained by learning based on an article defect image of a sample article, a first noise image generated based on the article defect image, and a contrast loss function constructed based on a predicted value of the first noise image and a predicted value of the article defect image;
performing feature extraction on the image to be detected based on the feature extraction parameters to obtain image features of the image to be detected;
and predicting based on the image features to obtain a defect detection result of the article to be detected.
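The four steps above can be sketched as follows. This is an illustrative flow only: the names `extract_features`, `predict_prob`, and the 0.5 threshold are assumptions for the sketch, not part of the claimed method, which leaves the feature extractor, predictor, and preset probability threshold unspecified.

```python
def detect_defect(image, extract_features, predict_prob, threshold=0.5):
    # 1) feature extraction on the image to be detected, using the learned
    #    feature-extraction parameters (embodied here in extract_features)
    features = extract_features(image)
    # 2) prediction based on the image features
    prob = predict_prob(features)
    # 3) defect detection result: the article is deemed defective when the
    #    predicted probability exceeds the preset probability threshold
    return {"defect_probability": prob, "is_defective": prob > threshold}
```

In practice `extract_features` and `predict_prob` would be the feature extraction layer and prediction layer of the trained defect detection model; here they are passed in as callables so the control flow of the claimed steps can be shown in isolation.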
It can be seen that the computer program can be loaded by a processor to perform the steps of any of the artificial-intelligence-based article defect detection methods provided by the embodiments of the present application, so the computer-readable storage medium of this embodiment can bring about the following technical effects. In the first aspect, because the feature extraction parameters are obtained by training on the first noise image generated from the article defect image, the sample set can be expanded beyond the original sample article defect images, so that the learned feature extraction parameters have stronger generalization. In the second aspect, the first contrast loss value draws the predicted value of the article defect image and the predicted value of the first noise image closer together, so that the algorithm shortens the intra-class feature distance of article defect images, makes the feature distribution of defect image data more compact, and makes defect images easier to classify (that is, classification accuracy is higher). Therefore, extracting the image features of the image to be detected with these feature extraction parameters and predicting the defect detection result of the article to be detected from them improves the accuracy of article defect detection, and alleviates, to a certain extent, the problem that existing defect detection algorithms fit well only on the training set but generalize poorly.
For the specific implementation of each of the above operations, reference may be made to the previous embodiments; details are not repeated herein.
The computer-readable storage medium may include: read-only memory (ROM), random access memory (RAM), magnetic disk, optical disc, and the like.
In accordance with the artificial-intelligence-based article defect detection method of the present application, there is also provided a computer program product or computer program comprising computer instructions stored in a computer-readable storage medium. A processor of an electronic device reads the computer instructions from the computer-readable storage medium and executes them, causing the electronic device to perform the methods provided in the various alternative implementations of the embodiments described above.
It will be clearly understood by those skilled in the art that, for convenience and brevity of description, for the specific working processes and beneficial effects of the above-described article defect detection apparatus, computer-readable storage medium, computer device, and their corresponding units, reference may be made to the description of the artificial-intelligence-based article defect detection method in the above embodiments; details are not repeated herein.
The artificial-intelligence-based article defect detection method, apparatus, computer device, and computer-readable storage medium provided in the embodiments of the present application have been described in detail above, and specific examples have been applied herein to illustrate the principles and implementations of the present application; the above description of the embodiments is only intended to help understand the method and core idea of the present application. Meanwhile, those skilled in the art may make changes to the specific embodiments and the application scope in light of the idea of the present application; in summary, the contents of this description should not be construed as limiting the present application.

Claims (12)

1. An artificial intelligence-based object defect detection method, comprising:
acquiring an image to be detected of an object to be detected;
acquiring feature extraction parameters of the image to be detected, wherein the feature extraction parameters are obtained by learning based on an article defect image of a sample article, a first noise image generated based on the article defect image and a contrast loss function constructed based on a predicted value of the first noise image and a predicted value of the article defect image;
performing feature extraction on the image to be detected based on the feature extraction parameters to obtain image features of the image to be detected;
and predicting based on the image features to obtain a defect detection result of the article to be detected.
2. The method for detecting defects of an article based on artificial intelligence according to claim 1, wherein the learning the feature extraction parameters comprises:
extracting features of each sample image in a sample data set through a feature extraction layer in a preset defect detection model to obtain sample features of each sample image, wherein the sample data set comprises an article defect image and a first noise image generated based on the article defect image;
predicting, by a prediction layer in the preset defect detection model, based on the sample features of each sample image to obtain a predicted value of each sample image, wherein the predicted values of the sample images include a predicted value of the article defect image and a predicted value of the first noise image;
determining a total loss value of the preset defect detection model based on a first contrast loss value between the predicted value of the article defect image and the predicted value of the first noise image;
adjusting model parameters of the preset defect detection model based on the total loss value until the preset defect detection model meets the preset training stopping condition, and taking the preset defect detection model as a trained defect detection model;
and extracting model parameters of the feature extraction layer in the trained defect detection model as the feature extraction parameters.
3. The method of claim 2, wherein the sample dataset further comprises a normal image of the item and a second noise image generated based on the normal image of the item, the predicted value of each sample image further comprising a predicted value of the normal image of the item and a predicted value of the second noise image;
the determining a total loss value of the preset defect detection model based on a first contrast loss value between the predicted value of the article defect image and the predicted value of the first noise image comprises:
acquiring a first contrast loss value of the preset defect detection model based on the predicted value of the article defect image and the predicted value of the first noise image;
acquiring a second contrast loss value of the preset defect detection model based on the predicted value of the article normal image and the predicted value of the second noise image;
the total loss value is determined based on the first contrast loss value and the second contrast loss value.
4. The method of claim 2, wherein determining the total loss value of the preset defect detection model based on a first contrast loss value between the predicted value of the article defect image and the predicted value of the first noise image comprises:
acquiring a first contrast loss value of the preset defect detection model based on the predicted value of the article defect image and the predicted value of the first noise image;
acquiring a first classification loss value of the first noise image based on the predicted value of the first noise image and the labeling value of the first noise image;
the total loss value is determined based on the first contrast loss value and the first classification loss value.
5. The method of claim 2, wherein the sample dataset further comprises a second noise image generated based on a normal image of the article, the predicted value of each sample image further comprising a predicted value of the second noise image;
the determining a total loss value of the preset defect detection model based on a first contrast loss value between the predicted value of the article defect image and the predicted value of the first noise image comprises:
acquiring a first contrast loss value of the preset defect detection model based on the predicted value of the article defect image and the predicted value of the first noise image;
acquiring a second classification loss value of the second noise image based on the predicted value of the second noise image and the labeling value of the second noise image;
the total loss value is determined based on the first contrast loss value and the second classification loss value.
6. The method of claim 2, wherein the sample dataset further comprises a composite image of the first noise image and a second noise image, the second noise image being generated based on a normal image of the article, the predicted values of each sample image further comprising predicted values of the composite image;
the determining a total loss value of the preset defect detection model based on a first contrast loss value between the predicted value of the article defect image and the predicted value of the first noise image comprises:
acquiring a first contrast loss value of the preset defect detection model based on the predicted value of the article defect image and the predicted value of the first noise image;
acquiring a third classification loss value of the composite image based on the predicted value of the composite image and the labeling value of the composite image, wherein the labeling value of the composite image is between the labeling value of the article defect image and the labeling value of the article normal image;
the total loss value is determined based on the first contrast loss value and the third classification loss value.
7. The method for detecting defects of an article based on artificial intelligence according to claim 2, wherein the predicting based on the image features to obtain the defect detection result of the article to be detected comprises:
extracting model parameters of a prediction layer in the trained defect detection model to serve as prediction parameters of the image to be detected;
and predicting based on the image characteristics and the prediction parameters to obtain a defect detection result of the object to be detected.
8. The artificial-intelligence-based article defect detection method according to claim 1, wherein the defect detection result includes that the article to be detected is a defective article, and the predicting based on the image features to obtain the defect detection result of the article to be detected comprises:
predicting based on the image features to obtain a predicted probability that the article to be detected is defective;
and if the predicted probability is greater than a preset probability threshold, determining that the article to be detected is a defective article.
9. An article defect detection apparatus, characterized in that the article defect detection apparatus comprises:
the first acquisition unit is used for acquiring an image to be detected of the object to be detected;
the second acquisition unit is used for acquiring feature extraction parameters of the image to be detected, wherein the feature extraction parameters are obtained by learning based on an article defect image of a sample article, a first noise image generated based on the article defect image, and a contrast loss function constructed based on a predicted value of the first noise image and a predicted value of the article defect image;
the extraction unit is used for carrying out feature extraction on the image to be detected based on the feature extraction parameters to obtain image features of the image to be detected;
and the detection unit is used for predicting based on the image features to obtain a defect detection result of the article to be detected.
10. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the artificial intelligence based article defect detection method of any of claims 1-8 when the program is executed.
11. A computer readable storage medium having stored thereon a computer program, wherein the computer program when executed by a processor implements the method of any of claims 1-8.
12. A computer program product comprising a computer program or instructions which, when executed by a processor, implements the artificial intelligence based article defect detection method of any one of claims 1 to 8.
CN202211024153.0A 2022-08-24 2022-08-24 Article defect detection method and device based on artificial intelligence and readable storage medium Pending CN117036226A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211024153.0A CN117036226A (en) 2022-08-24 2022-08-24 Article defect detection method and device based on artificial intelligence and readable storage medium


Publications (1)

Publication Number Publication Date
CN117036226A true CN117036226A (en) 2023-11-10

Family

ID=88643551

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211024153.0A Pending CN117036226A (en) 2022-08-24 2022-08-24 Article defect detection method and device based on artificial intelligence and readable storage medium

Country Status (1)

Country Link
CN (1) CN117036226A (en)

Similar Documents

Publication Publication Date Title
US10534999B2 (en) Apparatus for classifying data using boost pooling neural network, and neural network training method therefor
CN110908667A (en) Method and device for joint compilation of neural network and electronic equipment
CN110807515A (en) Model generation method and device
KR20220112766A (en) Federated Mixed Models
CN111708876B (en) Method and device for generating information
CN113361593B (en) Method for generating image classification model, road side equipment and cloud control platform
JPWO2019102984A1 (en) Learning device, identification device and program
CN112084959B (en) Crowd image processing method and device
US20230222781A1 (en) Method and apparatus with object recognition
CN114072809A (en) Small and fast video processing network via neural architectural search
CN113111917B (en) Zero sample image classification method and device based on dual self-encoders
CN113822144A (en) Target detection method and device, computer equipment and storage medium
CN109961163A (en) Gender prediction's method, apparatus, storage medium and electronic equipment
CN109559345B (en) Garment key point positioning system and training and positioning method thereof
CN115439449B (en) Full-field histological image processing method, device, medium and electronic equipment
KR102561799B1 (en) Method and system for predicting latency of deep learning model in device
CN117036226A (en) Article defect detection method and device based on artificial intelligence and readable storage medium
CN112529025A (en) Data processing method and device
US20220343146A1 (en) Method and system for temporal graph neural network acceleration
CN115112661A (en) Defect detection method and device, computer equipment and storage medium
US9886652B2 (en) Computerized correspondence estimation using distinctively matched patches
CN114820558A (en) Automobile part detection method and device, electronic equipment and computer readable medium
CN113868460A (en) Image retrieval method, device and system
CN115358379B (en) Neural network processing method, neural network processing device, information processing method, information processing device and computer equipment
CN111105031B (en) Network structure searching method and device, storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination