CN113658183A - Workpiece quality inspection method and device and computer equipment - Google Patents

Workpiece quality inspection method and device and computer equipment

Info

Publication number
CN113658183A
Authority
CN
China
Prior art keywords
network model
behavior
state
deep
quality inspection
Prior art date
Legal status
Granted
Application number
CN202111223818.6A
Other languages
Chinese (zh)
Other versions
CN113658183B (en)
Inventor
肖智恒
郭骏
潘正颐
侯大为
Current Assignee
Changzhou Weiyizhi Technology Co Ltd
Original Assignee
Changzhou Weiyizhi Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Changzhou Weiyizhi Technology Co Ltd filed Critical Changzhou Weiyizhi Technology Co Ltd
Priority to CN202111223818.6A priority Critical patent/CN113658183B/en
Publication of CN113658183A publication Critical patent/CN113658183A/en
Application granted granted Critical
Publication of CN113658183B publication Critical patent/CN113658183B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30108Industrial image inspection
    • G06T2207/30164Workpiece; Machine component
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30168Image quality inspection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Molecular Biology (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Quality & Reliability (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of industrial quality inspection, and provides a workpiece quality inspection method, a workpiece quality inspection device and computer equipment. The method comprises the following steps: acquiring a test image corresponding to a workpiece to be inspected and a trained neural network model; acquiring training samples of a deep Q network; constructing the deep Q network, and performing learning training on the deep Q network based on the training samples to obtain a state-behavior deep Q network model corresponding to the neural network model; inputting the test image into the deep Q network model to obtain at least one target enhancement behavior corresponding to the test image; performing data enhancement on the test image according to the at least one target enhancement behavior, and inputting the enhanced test image into the neural network model to obtain at least one prediction result; and performing quality inspection on the workpiece according to the at least one prediction result. Therefore, the test time is greatly reduced while the test effect is ensured, so that both the efficiency and the accuracy of workpiece quality inspection are improved.

Description

Workpiece quality inspection method and device and computer equipment
Technical Field
The invention relates to the technical field of industrial quality inspection, in particular to a workpiece quality inspection method, a workpiece quality inspection device and computer equipment.
Background
Industrial quality control is understood to be the quality control of workpieces or products in the various processes of industrial manufacturing and production, for example detecting defects such as tiny scratches and pits on a product surface. Such defects can cause serious accidents: surface defects of aircraft tires, for example, can directly affect their performance and even bring irreparable loss to passengers. Industrial quality control is therefore very important in modern manufacturing.
In the related art, when workpiece quality inspection is performed, a picture is usually input directly into a trained neural network model. Since the picture may arrive in various versions, for example an inverted version, the trained neural network model suffers from low prediction accuracy, poor effect and long prediction time, which in turn leads to low industrial quality inspection accuracy, poor quality inspection effect and low efficiency.
Disclosure of Invention
In order to solve one of the above technical problems, the present invention proposes the following technical solutions.
The embodiment of the first aspect of the invention provides a workpiece quality inspection method, which comprises the following steps: acquiring a test image corresponding to a workpiece to be inspected and a trained neural network model; obtaining training samples of a deep Q network, the training samples comprising a plurality of original images, data enhancement images corresponding to the original images, and a plurality of data enhancement behaviors; constructing the deep Q network, and performing learning training on the deep Q network based on the training samples to obtain a state-behavior deep Q network model corresponding to the neural network model; inputting the test image into the deep Q network model to obtain at least one target enhancement behavior corresponding to the test image; performing data enhancement on the test image according to the at least one target enhancement behavior, and inputting the enhanced test image into the neural network model to obtain at least one prediction result; and performing quality inspection on the workpiece according to the at least one prediction result.
In addition, the workpiece quality inspection method according to the above embodiment of the present invention may have the following additional features.
According to an embodiment of the present invention, performing learning training on the deep Q network based on the training samples to obtain the state-behavior deep Q network model corresponding to the neural network model includes: composing the plurality of data-enhanced images into a state space set S of the environment,

S = {S_1, S_2, …, S_t, S_{t+1}, …, S_T}

wherein S_t indicates the state of the environment at time t, S_{t+1} indicates the state of the environment at time t+1, and S_T represents the final state of the environment; composing the plurality of data enhancement behaviors into a behavior space set A of the agent,

A = {A_1, A_2, …, A_k, …}

wherein A_k represents the k-th data enhancement behavior; and obtaining the state-behavior deep Q network model based on the interaction between the agent and the environment.
According to an embodiment of the present invention, obtaining the state-behavior deep Q network model based on the interaction between the agent and the environment comprises: obtaining the environment state S_t at time t; inputting the environment state S_t into the agent, so that the agent performs a data enhancement behavior A_t at time t according to S_t; after the agent executes the data enhancement behavior A_t at time t, the environment transitions to the state S_{t+1} at time t+1, and at the same time the feedback reward value at time t+1 is fed back to the agent, so that the agent executes the data enhancement behavior at time t+1; obtaining the feedback reward value corresponding to each time, so as to combine the feedback reward values into a feedback reward set R,

R = {R_1, R_2, …, R_t, R_{t+1}, …}

wherein R_t indicates the feedback reward value at time t and R_{t+1} indicates the feedback reward value at time t+1; determining a value function of the agent according to each feedback reward value; and obtaining the state-behavior deep Q network model based on the value function.
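The agent-environment loop described above can be sketched in Python as follows; the `EnhancementEnv` class, its reward rule and the random exploratory policy are illustrative assumptions used only to show the S_t → A_t → (R_{t+1}, S_{t+1}) cycle, not the patent's actual implementation:

```python
import random
from collections import deque

class EnhancementEnv:
    """Toy environment: states are image versions, actions are enhancement behaviors."""
    def __init__(self, n_states=6, n_actions=4):
        self.n_states, self.n_actions = n_states, n_actions
        self.state = 0

    def reset(self):
        self.state = 0
        return self.state

    def step(self, action):
        # Applying enhancement `action` moves the environment to state S_{t+1}
        # and yields the feedback reward R_{t+1} (assumed reward rule).
        self.state = (self.state + action + 1) % self.n_states
        reward = 1.0 if self.state == self.n_states - 1 else 0.0
        done = reward > 0
        return self.state, reward, done

env = EnhancementEnv()
replay = deque(maxlen=1000)   # stored (S_t, A_t, R_{t+1}, S_{t+1}) transitions
rewards = []                  # the feedback reward set R

state = env.reset()
for t in range(50):
    action = random.randrange(env.n_actions)      # exploratory policy
    next_state, reward, done = env.step(action)
    replay.append((state, action, reward, next_state))
    rewards.append(reward)
    state = env.reset() if done else next_state

print(len(replay))  # 50
```

A deep Q network would then be fitted to transitions sampled from such a replay buffer.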
According to one embodiment of the invention, determining the value function of the agent according to the respective feedback reward values comprises: acquiring a control strategy π of the agent, and the state-value function and behavior-value function corresponding to the control strategy π;
obtaining the state-behavior deep Q network model based on the value function includes: optimizing the deep Q network model by maximizing the state-value function and the behavior-value function.
According to one embodiment of the present invention, the state-value function corresponding to the control strategy π is:

V_π(s) = E_π[ Σ_{k=0}^{∞} γ^k R_{t+k+1} | S_t = s ]

and the behavior-value function corresponding to the control strategy π is:

Q_π(s, a) = E_π[ Σ_{k=0}^{∞} γ^k R_{t+k+1} | S_t = s, A_t = a ]

wherein E_π[·] denotes the expectation of the random variable when the agent takes the strategy π, γ denotes the discount factor, R_{t+k+1} denotes the feedback reward value at time t+k+1, s denotes the value assigned to S_t, and a denotes the value assigned to A_t.
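As a numeric illustration of the discounted sum inside these value functions, the following sketch computes the return G_t = Σ_k γ^k R_{t+k+1} for made-up reward values (γ = 0.9 is an assumed discount factor):

```python
gamma = 0.9
rewards = [1.0, 0.0, 0.5, 1.0]   # assumed values for R_{t+1}, R_{t+2}, R_{t+3}, R_{t+4}

# Direct evaluation of the discounted sum.
G_t = sum(gamma**k * r for k, r in enumerate(rewards))

# Equivalent backward recursion G_t = R_{t+1} + gamma * G_{t+1}.
G = 0.0
for r in reversed(rewards):
    G = r + gamma * G

assert abs(G - G_t) < 1e-12
print(round(G_t, 4))  # 2.134
```

The value functions above are the expectations of exactly this quantity under the strategy π.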
According to an embodiment of the present invention, performing quality inspection on the workpiece according to the at least one prediction result comprises: when the number of the prediction results is multiple, preprocessing the multiple prediction results; and performing quality inspection on the workpiece according to the preprocessed prediction results.
According to one embodiment of the invention, the plurality of data enhancement behaviors include: flipping, rotating, zooming and brightness adjustment.
In a second aspect, an embodiment of the present invention provides a workpiece quality inspection apparatus, including: a first acquisition module for acquiring a test image corresponding to a workpiece to be inspected and a trained neural network model; a second acquisition module for obtaining training samples of a deep Q network, the training samples comprising a plurality of original images, data enhancement images corresponding to the original images, and a plurality of data enhancement behaviors; a training module for constructing the deep Q network and performing learning training on the deep Q network based on the training samples to obtain a state-behavior deep Q network model corresponding to the neural network model; a first determining module for inputting the test image into the deep Q network model to obtain at least one target enhancement behavior corresponding to the test image; a second determining module for performing data enhancement on the test image according to the at least one target enhancement behavior and inputting the enhanced test image into the neural network model to obtain at least one prediction result; and a quality inspection module for performing quality inspection on the workpiece according to the at least one prediction result.
In addition, the workpiece quality inspection apparatus according to the above embodiment of the present invention may have the following additional features.
According to an embodiment of the invention, the training module comprises: a first composing unit for composing the plurality of data-enhanced images into a state space set S of the environment,

S = {S_1, S_2, …, S_t, S_{t+1}, …, S_T}

wherein S_t indicates the state of the environment at time t, S_{t+1} indicates the state of the environment at time t+1, and S_T represents the final state of the environment; a second composing unit for composing the plurality of data enhancement behaviors into a behavior space set A of the agent,

A = {A_1, A_2, …, A_k, …}

wherein A_k represents the k-th data enhancement behavior; and an interaction unit for obtaining the state-behavior deep Q network model based on the interaction between the agent and the environment.
A third aspect of the present invention provides a computer device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor executes the computer program to implement the workpiece quality inspection method according to the first aspect of the present invention.
According to the technical scheme of the embodiment of the invention, before the neural network model is adopted to realize workpiece quality inspection, the deep Q network model is trained, the optimal data enhancement behavior is determined through the deep Q network model, then the optimal data enhancement behavior is used for carrying out data enhancement on the test image, the test image is input into the neural network model after the data enhancement, and the quality inspection is realized according to the prediction result of the neural network model. Therefore, the testing effect can be ensured, and meanwhile, the testing time is greatly reduced, so that the workpiece quality inspection efficiency is improved, and the workpiece quality inspection accuracy is improved.
Drawings
Fig. 1 is a flowchart of a workpiece quality inspection method according to an embodiment of the invention.
Fig. 2 is a schematic diagram illustrating a principle of data enhancement of a test image by TTA in the related art.
Fig. 3 is a schematic diagram illustrating a principle of data enhancement of a test image by a deep Q network model according to an embodiment of the present invention.
FIG. 4 is a schematic diagram of the interaction between agents and the environment of a deep Q network, according to one embodiment of the present invention.
Fig. 5 is a block diagram of a workpiece quality inspection apparatus according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the related art, for a neural network model used in industrial quality inspection, a test-time augmentation (Test Time Augmentation, TTA) technique is usually adopted to perform data enhancement processing on the training data or test data of the model, to increase the training data or improve the test accuracy. In TTA, data enhancement operations are performed on the test image in the test stage, the enhanced images of multiple versions are input into the trained model to obtain output results, and the output results are then averaged to obtain the final prediction. This approach works well because the area displayed by the original test image may lack some important features, so it helps to take multiple versions of the input image and average their predictions across the model.
However, one problem of the neural network model is that it is not robust to image transformations such as rotation and brightness changes; in order to enhance the generalization ability of the neural network model, a series of data enhancements is therefore usually performed on the images before training.
In summary, TTA is equivalent to adapting the test image to the model through different data enhancements so as to improve the effect of the model, but this method wastes a large amount of test time. Training-data enhancement improves the test effect of the model by increasing the diversity of the training data, which is equivalent to adapting the model to the test data; it improves the "insight" of the model to some extent, but the model is still not truly robust to data enhancement.
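The TTA procedure discussed above can be sketched as follows; `model` and the particular augmentations are placeholder assumptions used only to show the enhance-predict-average pattern:

```python
import numpy as np

def model(image):
    # Dummy "classifier": normalized class scores from simple image statistics.
    scores = np.array([image.mean(), image.std(), image.max()])
    return scores / scores.sum()

def tta_predict(image, augmentations):
    # Predict every enhanced version, then merge by averaging the outputs.
    outputs = [model(aug(image)) for aug in augmentations]
    return np.mean(outputs, axis=0)

augmentations = [
    lambda im: im,                       # identity (original version)
    lambda im: np.fliplr(im),            # horizontal flip
    lambda im: np.rot90(im),             # 90-degree rotation
    lambda im: np.clip(im * 1.2, 0, 1),  # brightness increase
]

image = np.random.default_rng(0).random((8, 8))
pred = tta_predict(image, augmentations)
print(pred.shape)  # (3,)
```

Note that `model` is called once per augmentation, which is exactly the multiplied test time the text criticizes.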
In human thinking habit, after a picture is rotated, the picture is recognized not by memorizing the form of each rotation angle of the picture, but by rotating the picture to a normal angle in the brain before recognition, that is, a human has the ability of rotating thinking, but a convolutional neural network model lacks the ability.
That is, TTA obtains a prediction result by performing a large amount of expansion on a test image and then screening and combining various test results, which requires a large amount of test time and is difficult to use in a real-world scenario. There is no way for the training data enhancement to cover all the variations, and the effect of the model can only be improved to a limited extent. For example, a model trained by some non-rotated pictures only shows good performance on the non-rotated data, and when the rotated pictures are input into the model, the prediction capability of the model is reduced, resulting in reduction of prediction accuracy.
Therefore, the convolutional neural network model in industrial quality inspection does not rotate the test image to a normal or proper angle before identifying it, the way a person with the ability of rotational thinking would; that is, the convolutional neural network has poor robustness, so that the workpiece quality inspection efficiency is low, the accuracy is low, and the quality inspection effect is affected.
Aiming at the problems, the invention provides a workpiece quality inspection method, a workpiece quality inspection device and computer equipment.
Specific embodiments of the present invention will be described below with reference to the drawings.
Fig. 1 is a flowchart of a workpiece quality inspection method according to an embodiment of the invention.
It should be noted that the main execution body of the workpiece quality inspection method according to the embodiment of the present invention may be an electronic device in an industrial field, and specifically, the electronic device may be, but is not limited to, an industrial computer and a mobile terminal. The application scenario of the embodiment of the invention can be a scenario that a workpiece needs to be subjected to quality inspection in an industrial production site, wherein the workpiece can be an industrial object, such as an industrial computer, a tire and the like.
As shown in fig. 1, the workpiece quality inspection method includes the following steps S1 to S6.
And S1, obtaining a test image corresponding to the workpiece to be inspected and a trained neural network model.
The trained neural network model can be a neural network model required by each stage of industrial quality inspection, and the trained neural network model has the function of outputting a prediction result corresponding to a test image according to the input test image. For example, the neural network model may be an image recognition model, an object detection model, an instance segmentation model, or a defect detection model.
Specifically, when a workpiece to be quality-tested needs to be quality-tested, a camera can be used to capture a test image of the workpiece, and a trained neural network model, such as a trained target detection model, needed by quality testing is obtained, and the target detection model can detect an object, such as a kitten, in the test image.
S2, obtaining training samples of the deep Q network, the training samples comprising: a plurality of original images, data enhancement images corresponding to the original images, and a plurality of data enhancement behaviors.
Wherein the original image may be an image of a workpiece, as many images of the workpiece as possible may be acquired in embodiments of the present invention. The type of the acquired workpiece image can be determined according to the function of the trained model, for example, when the function of the trained neural network model is image content identification detection, a workpiece image which can contain image content can be acquired; when the trained neural network model functions as defect detection, a workpiece image with defects can be acquired.
Specifically, after the neural network model is acquired, a plurality of original images are acquired, after the plurality of original images are acquired, as many data enhancement behaviors as possible, such as rotation, inversion, scaling, brightness adjustment, and the like, can be acquired, and each original image is sequentially subjected to data enhancement by the plurality of data enhancement behaviors, so that a data enhancement image corresponding to each original image can be acquired.
For example, assuming an original image W and three data enhancement behaviors of rotation, zooming and brightness adjustment: W can be rotated to obtain a data-enhanced image W1; W1 can be zoomed to obtain a data-enhanced image W2; the brightness of W2 can be adjusted to obtain a data-enhanced image W3; W can be zoomed to obtain a data-enhanced image W4; the brightness of W4 can be adjusted to obtain a data-enhanced image W5; and the brightness of W can be adjusted to obtain a data-enhanced image W6; thereby obtaining a plurality of data-enhanced images W1, W2, W3, W4, W5 and W6.
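The enhancement chains in this example can be sketched as follows; the concrete operations chosen for rotation, zooming and brightness adjustment (`np.rot90`, array striding, scaling with clipping) are illustrative stand-ins, not the patent's prescribed implementations:

```python
import numpy as np

def rotate(im):
    return np.rot90(im)

def zoom(im):
    return im[::2, ::2]                 # crude 2x downscale as a stand-in for zooming

def brightness(im):
    return np.clip(im * 1.3, 0.0, 1.0)  # brighten and keep values in [0, 1]

W = np.random.default_rng(1).random((16, 16))   # original image

W1 = rotate(W)        # W  -> rotate      -> W1
W2 = zoom(W1)         # W1 -> zoom        -> W2
W3 = brightness(W2)   # W2 -> brightness  -> W3
W4 = zoom(W)          # W  -> zoom        -> W4
W5 = brightness(W4)   # W4 -> brightness  -> W5
W6 = brightness(W)    # W  -> brightness  -> W6

enhanced = [W1, W2, W3, W4, W5, W6]
print([im.shape for im in enhanced])
```

Each composed chain yields one element of the data-enhanced image set used as a training sample.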
S3, constructing a deep Q network, and performing learning training on the deep Q network based on the training samples to obtain a state-behavior deep Q network model corresponding to the neural network model.
Specifically, after the training sample is obtained, a deep Q network is constructed, and the deep Q network is subjected to learning training based on the training sample to obtain a state-behavior deep Q network model corresponding to the trained neural network model.
The state-behavior deep Q network model is used for outputting the data enhancement behavior corresponding to a test image according to the state or version of the test image input into it, and this data enhancement behavior is the optimal one for the trained neural network model and the test image. One trained model corresponds to one state-behavior deep Q network model, and one test image corresponds to one or more optimal data enhancement behaviors (in the embodiment of the present invention, such optimal data enhancement behaviors are referred to as target enhancement behaviors).
And S4, inputting the test image into the deep Q network model to obtain at least one target enhancement behavior corresponding to the test image.
Specifically, after the deep Q network model corresponding to the neural network model is obtained, the acquired test image corresponding to the workpiece may be input into the deep Q network model, and the deep Q network model outputs at least one target enhancement behavior corresponding to the test image according to the state of the test image.
And S5, enhancing the test image according to the at least one target enhancement behavior, and inputting the enhanced test image into the trained neural network model to obtain at least one prediction result.
In the embodiment of the present invention, a result output by the neural network model according to the enhanced test image may be referred to as a prediction result.
Specifically, after the target enhancement behavior corresponding to the test image is determined by using the deep Q network model, data enhancement can be performed on the test image with the target enhancement behavior to obtain an enhanced test image; the enhanced test image is then input into the trained neural network model, which outputs a prediction result. When the test image is enhanced by the target enhancement behavior before being input into the neural network model, the model achieves higher identification accuracy, shorter test time and a better identification effect on the enhanced test image than on the unenhanced test image or on test images of other versions.
It should be noted that the target enhancement behaviors correspond to the prediction results one by one: one target enhancement behavior yields one prediction result, and multiple target enhancement behaviors yield multiple prediction results. Since the neural network model identifies the enhanced test image with high accuracy, inaccurate prediction results caused by inaccurate identification are avoided, and the multiple prediction results can be consistent.
And S6, performing quality inspection on the workpiece according to at least one prediction result.
Specifically, in practical application, when the trained neural network model is needed to implement quality inspection of a workpiece, step S1 is first executed to obtain the test image corresponding to the workpiece to be inspected and the trained neural network model; steps S2 and S3 are then executed to obtain training samples of the deep Q network and to perform learning training on the deep Q network based on the training samples, obtaining the state-behavior deep Q network model corresponding to the trained neural network model. After the deep Q network model is trained, step S4 is executed to determine the target enhancement behavior: the test image is input into the deep Q network model to obtain at least one target enhancement behavior corresponding to it. Step S5 then enhances the test image with the target enhancement behavior and inputs the enhanced test image into the trained neural network model, so that the model outputs at least one prediction result with high accuracy. Finally, in step S6, quality inspection is performed on the workpiece according to the at least one prediction result to obtain a quality inspection result.
For example, when the workpiece to be inspected is a vase, the prediction result can be the patterns and the positions where the patterns are located; the patterns on the surface of the vase can be accurately identified by using the deep Q network model and the neural network model, and quality inspection of the vase can then be realized through pattern characteristics such as style and size. For example, when the patterns are regularly distributed and uniform in size, the vase passes quality inspection and is qualified.
That is to say, in the embodiment of the present invention, a deep Q network model corresponding to a neural network model is trained, when quality inspection needs to be performed using the model, an optimal enhancement strategy is selected before using the neural network model, a test image is enhanced by the optimal enhancement strategy to enhance the test image to a state or version most suitable for the test of the neural network model, then, prediction of the model is implemented by using the neural network model, and quality inspection is performed according to a prediction result output by the neural network model, thereby substantially solving the problem of robustness of the neural network model. The method for selecting the optimal enhancement strategy has good universality, real-time performance and testing effect, is closer to the thinking mode of human beings, and can greatly reduce the testing time while ensuring the testing effect.
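The select-enhance-predict pipeline of steps S4 to S6 can be sketched as follows; both `q_network` and `neural_network` are placeholder functions standing in for the trained models, and the action set and pass/defect threshold are assumed examples:

```python
import numpy as np

ACTIONS = {
    0: lambda im: im,                       # keep as-is
    1: lambda im: np.fliplr(im),            # flip
    2: lambda im: np.rot90(im),             # rotate
    3: lambda im: np.clip(im * 1.1, 0, 1),  # brightness adjustment
}

def q_network(image):
    # Stand-in for the state-behavior deep Q network: one Q-value per action,
    # derived deterministically from the image so the demo is reproducible.
    rng = np.random.default_rng(int(image.sum() * 1e6) % (2**32))
    return rng.random(len(ACTIONS))

def neural_network(image):
    # Stand-in for the trained quality-inspection model: a defect score.
    return float(image.mean())

def inspect(test_image, threshold=0.5):
    q_values = q_network(test_image)
    target_action = int(np.argmax(q_values))       # S4: pick target behavior
    enhanced = ACTIONS[target_action](test_image)  # S5: enhance the image
    prediction = neural_network(enhanced)          # S5: predict on it
    return "pass" if prediction < threshold else "defect"   # S6: quality check

result = inspect(np.random.default_rng(2).random((8, 8)))
print(result)
```

Unlike TTA, the stand-in `neural_network` is invoked only once per test image, on the single version the Q-network selects.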
Assume, as shown in FIGS. 2 and 3, that X is a test image and M is a trained neural network model, where the sample images used to train M have the same data distribution as X. In the related art, referring to FIG. 2, X is passed through a series of data enhancements to obtain a data set X1, X2, X3, ..., Xn. In TTA, all of X1, X2, X3, ..., Xn are sent into M for prediction, obtaining output results Y1, Y2, Y3, ..., Yn; all output results are then merged to obtain the final prediction result Y, and industrial quality inspection is realized according to this prediction result. Referring to FIG. 3, the embodiment of the present invention first trains a deep Q network model corresponding to M, determines a target enhancement behavior for the test image X according to the deep Q network model, enhances X according to the target enhancement behavior to obtain an enhanced test image X', inputs X' into M to obtain a prediction result Y', and realizes industrial quality inspection according to this prediction result.
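The contrast between FIG. 2 and FIG. 3 can be sketched as follows. This is a minimal illustration only: `model`, the `enhancements` list, and `select_best_enhancement` are hypothetical stand-ins for the trained model M, the enhancement set, and the deep Q network's decision, not the patent's actual implementations.

```python
def model(image):
    # Stand-in for the trained neural network M: returns a scalar score.
    return sum(image) / len(image)

enhancements = [
    lambda img: img,                               # identity
    lambda img: img[::-1],                         # flip
    lambda img: [min(1.0, p * 1.2) for p in img],  # brighten
]

def tta_predict(image):
    # FIG. 2 (TTA): run every enhanced copy X1..Xn through M, then merge
    # all outputs (here by simple averaging) into the final result Y.
    outputs = [model(e(image)) for e in enhancements]
    return sum(outputs) / len(outputs)

def select_best_enhancement(image):
    # FIG. 3: the trained deep Q network would pick one target behavior;
    # the choice is hard-coded here to keep the sketch self-contained.
    return enhancements[1]

def dqn_predict(image):
    # FIG. 3: enhance once with the selected behavior, predict once.
    best = select_best_enhancement(image)
    return model(best(image))

x = [0.2, 0.4, 0.6]
y_tta = tta_predict(x)   # n forward passes through M
y_dqn = dqn_predict(x)   # a single forward pass through M
```

The single forward pass in `dqn_predict` is why the decision-based scheme of FIG. 3 is faster than TTA at test time.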
In the embodiment of the invention, the decision on the data enhancement behavior is made through deep reinforcement learning before model prediction is carried out. Compared with TTA, the test speed is higher and the enhancement strategy can be decided in a shorter time; on this basis, the poor robustness of the convolutional neural network model to rotation and brightness changes can be effectively remedied, in a way that better matches the thinking habits of the human brain.
For example, suppose a neural network model is trained on images acquired in a factory, but the brightness of the camera platform changes at prediction time. The acquired images then differ from those used to train the neural network model, and its recognition performance degrades. To avoid this degradation, the deep Q network model of the embodiment of the invention decides the optimal enhancement behavior for each acquired image; the image is adjusted accordingly before being input into the neural network model, which improves the model's recognition performance.
According to the workpiece quality inspection method provided by the embodiment of the invention, before workpiece quality inspection is realized with the neural network model, the deep Q network model is trained and used to determine the optimal data enhancement behavior; the test image is then enhanced with that behavior, input into the neural network model, and quality inspection is realized according to the prediction result of the neural network model. Therefore, the testing effect can be ensured while the testing time is greatly reduced, so that both the efficiency and the accuracy of workpiece quality inspection are improved.
It should be noted that the deep Q network model is trained based on an agent (Agent) and an environment (Environment).
That is, in an embodiment of the present invention, the learning and training of the deep Q network based on the training samples in the step S3 to obtain the state-behavior deep Q network model corresponding to the neural network model may include the following steps S31 to S33.
S31, composing the plurality of data enhancement images into a state space set S of the environment, S = {S1, S2, ..., St, St+1, ..., ST}, where St denotes the state of the environment at time t, St+1 denotes the state of the environment at time t+1, and ST denotes the final state of the environment.
S32, composing the plurality of data enhancement behaviors into a behavior space set A of the agent, A = {A1, A2, ..., Ak, ...}, where Ak denotes the kth data enhancement behavior.
And S33, obtaining a state-behavior deep Q network model based on the interaction between the agent and the environment.
Further, the step S33 may include the following steps S331 to S336.
S331, obtaining the environment state St at time t.
S332, inputting the environment state St at time t into the agent, so that the agent performs a data enhancement behavior At at time t according to St.
S333, after the agent executes the data enhancement behavior At, the state of the environment changes to the environment state St+1 at time t+1; simultaneously, the feedback reward value at time t+1 is fed back to the agent, so that the agent executes the data enhancement behavior at time t+1.
S334, obtaining the feedback reward value corresponding to each time, so as to compose the feedback reward values into a feedback reward set R, R = {R1, R2, ..., Rt, Rt+1, ...}, where Rt denotes the feedback reward value at time t and Rt+1 denotes the feedback reward value at time t+1.
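Steps S331 to S334 form a standard agent-environment rollout loop. A minimal sketch follows, in which the integer states, the transition rule, the reward rule, and the random policy are toy stand-ins rather than the patent's actual environment:

```python
import random

BEHAVIORS = ["flip", "rotate", "scale", "brightness"]

def step(state, behavior):
    # Environment: applying behavior A_t moves S_t to S_{t+1} and returns
    # the feedback reward R_{t+1}. Both rules here are invented for
    # illustration only.
    next_state = state + 1
    reward = 1.0 if behavior == "rotate" else 0.0
    return next_state, reward

def policy(state):
    # Stand-in for the agent's behavior choice given state S_t.
    return random.choice(BEHAVIORS)

def rollout(T=5, seed=0):
    random.seed(seed)
    state, states, rewards = 0, [0], []
    for t in range(T):
        action = policy(state)          # S332: agent picks A_t from S_t
        state, r = step(state, action)  # S333: environment yields S_{t+1}, R_{t+1}
        states.append(state)            # visited states (state space set S)
        rewards.append(r)               # feedback reward set R (S334)
    return states, rewards

states, rewards = rollout()
```

The collected `rewards` list plays the role of the feedback reward set R used in step S335 to determine the agent's value function.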
And S335, determining the value function of the intelligent agent according to the feedback reward values.
In one example, determining the value function of the agent according to the respective feedback reward values may include: acquiring a control strategy π of the agent, and the state-value function and behavior-value function corresponding to the control strategy π.

The state-value function corresponding to the control strategy π is:

Vπ(s) = Eπ[ Σ_{k=0}^{∞} γ^k R_{t+k+1} | St = s ]

The behavior-value function corresponding to the control strategy π is:

Qπ(s, a) = Eπ[ Σ_{k=0}^{∞} γ^k R_{t+k+1} | St = s, At = a ]

where Eπ denotes the expectation of the random variable when the agent takes the strategy π, γ denotes the discount factor, Rt+k+1 denotes the feedback reward value at time t+k+1, s denotes the value given to St, and a denotes the value given to At.
And S336, obtaining the state-behavior deep Q network model based on the value function.
Further, obtaining the state-behavior deep Q network model based on the value function includes: optimizing the deep Q network model by maximizing the state-value function and the behavior-value function.
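Maximizing the behavior-value function is commonly realized through the Q-learning update Q(s,a) ← Q(s,a) + α[r + γ·max_a' Q(s',a') − Q(s,a)]. The following tabular sketch is a deliberate simplification of the deep Q network described here; the learning rate, discount factor, and the toy transition are assumptions, not values from the patent:

```python
from collections import defaultdict

GAMMA, ALPHA = 0.9, 0.5  # assumed discount factor and learning rate

def q_update(Q, s, a, r, s_next, actions):
    # One Q-learning step: move Q(s,a) toward r + gamma * max_a' Q(s',a'),
    # i.e. toward the maximized behavior-value estimate.
    best_next = max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
    return Q[(s, a)]

actions = ["flip", "rotate"]
Q = defaultdict(float)

# A single observed transition (toy values): in state 0, taking "rotate"
# earned reward 1.0 and led to state 1.
q_update(Q, 0, "rotate", 1.0, 1, actions)
```

In the deep variant, the table `Q` is replaced by a neural network that estimates Q(s, a), but the update target has the same form.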
Specifically, after the data enhancement images and data enhancement behaviors corresponding to the original images are acquired in the above step S2, the data enhancement images are composed into the state space set S = {S1, S2, ..., ST} of the environment, and the data enhancement behaviors are composed into the behavior space set A = {A1, A2, ..., Ak, ...} of the agent, completing the configuration of the agent and environment of the deep Q network. The configured agent then interacts with the environment to obtain the state-behavior deep Q network model corresponding to the trained model.
FIG. 4 is a schematic diagram of the interaction between the agent and the environment of a deep Q network, according to one embodiment of the present invention. In FIG. 4, Action stands for a (data enhancement) behavior, State stands for the state of the environment, and Reward stands for the feedback reward. As shown in FIG. 4, during the interaction between the agent and the environment, the initial state of the agent is the original test image X. Each time the environment enters a new state S, that state is input into the trained Model to obtain a prediction result; the prediction result is evaluated against the real label of the test image X, and the feedback reward set R is provided as feedback. The real label is the image content the image actually represents: for an image of a cat, for example, "cat" is the real label. The image is input into the trained model to obtain a prediction result, and the corresponding feedback reward value is calculated according to the degree of match between the prediction result and the real label.
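The reward computation described above, which scores the model's prediction against the real label, might look like the following; the exact matching rule and the +1/-1 scale are illustrative assumptions:

```python
def feedback_reward(prediction, real_label):
    # Reward the agent when the enhanced image lets the model recover the
    # real label of test image X; penalize it otherwise. The +1/-1 scale
    # is an assumption, not a value from the patent.
    return 1.0 if prediction == real_label else -1.0

# e.g. for an image whose real label is "cat":
r_good = feedback_reward("cat", "cat")  # model matched the real label
r_bad = feedback_reward("dog", "cat")   # model missed the real label
```

A graded reward (e.g. a confidence-weighted match score) would also fit this interface.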
Then, the control strategy π of the agent and the value functions corresponding to π are acquired, the value functions are maximized, the state-behavior deep Q network model is obtained by optimization, and the optimal strategy, i.e., the target enhancement behavior, is obtained through the deep Q network model. The purpose of the agent is to learn a control strategy π that maximizes the expected reward.
It should be noted that, in the embodiment of the present invention, there may be one or more target enhancement behaviors. When there is one target enhancement behavior, it is used to perform data enhancement on the test image, and the enhanced test image is input into the model to obtain one corresponding prediction result; when there are multiple target enhancement behaviors, each of them is used to perform data enhancement on the test image, and the enhanced test images are input into the model in sequence to obtain multiple corresponding prediction results.
In an embodiment of the present invention, the step S6 of performing quality inspection on the workpiece according to at least one prediction result may include: when the number of the prediction results is multiple, preprocessing the multiple prediction results; and performing quality inspection on the workpiece according to the prediction result after pretreatment.
Specifically, after the multiple prediction results are obtained, they may be preprocessed. For example, the accuracy and recognition time corresponding to each prediction result are determined, the prediction result with the highest accuracy and shortest recognition time is selected, and the other prediction results are discarded, so that the processed prediction results contain only that one; industrial quality inspection is then implemented according to it.
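The preprocessing described above, which keeps only the prediction result with the highest accuracy and shortest recognition time, could be sketched as follows; the dictionary field names are assumptions introduced for illustration:

```python
def preprocess(predictions):
    # Each prediction is a dict with assumed keys: "output" (the predicted
    # result), "accuracy", and "time" (recognition time). Keep the result
    # with the highest accuracy, breaking ties by the shortest recognition
    # time, and discard the rest.
    best = max(predictions, key=lambda p: (p["accuracy"], -p["time"]))
    return [best]

results = preprocess([
    {"output": "defect", "accuracy": 0.91, "time": 0.40},
    {"output": "ok",     "accuracy": 0.97, "time": 0.25},
    {"output": "ok",     "accuracy": 0.97, "time": 0.30},
])
```

Quality inspection then proceeds from the single surviving entry in `results`.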
Therefore, on the basis that the model outputs a plurality of prediction results, the prediction results are screened, the accuracy of industrial quality inspection is further ensured, and the efficiency of industrial quality inspection is improved.
In summary, the embodiment of the invention provides an efficient and robust model testing approach for industrial quality inspection. A deep Q network model is adopted to decide the data enhancement mode, which offers a general idea for the training and testing of deep learning algorithms and a convenient, efficient model identification approach for industrial quality inspection.
Corresponding to the workpiece quality inspection method of the above embodiment, the invention further provides a workpiece quality inspection device.
Fig. 5 is a block diagram of a workpiece quality inspection apparatus according to an embodiment of the present invention.
As shown in fig. 5, the workpiece quality inspection apparatus 100 includes: the system comprises a first acquisition module 10, a second acquisition module 20, a training module 30, a first determination module 40, a second determination module 50 and a quality inspection module 60.
The first acquisition module 10 is used for acquiring a test image corresponding to a workpiece to be tested and a trained neural network model; the second obtaining module 20 is configured to obtain training samples of the deep Q network, where the training samples include: the method comprises the following steps that a plurality of original images, data enhancement images corresponding to the original images and data enhancement behaviors are obtained; the training module 30 is configured to construct a deep Q network, and perform learning training on the deep Q network based on a training sample to obtain a state-behavior deep Q network model corresponding to the neural network model; the first determining module 40 is configured to input the test image into the depth Q network model to obtain at least one target enhancement behavior corresponding to the test image; the second determining module 50 is configured to perform data enhancement on the test image according to at least one target enhancement behavior, and input the enhanced test image into the neural network model to obtain at least one prediction result; the quality inspection module 60 is used for performing quality inspection on the workpiece according to at least one prediction result.
In one embodiment of the present invention, training module 30 may include: a first composing unit for composing the plurality of data enhancement images into a state space set S of the environment, S = {S1, S2, ..., St, St+1, ..., ST}, where St denotes the state of the environment at time t, St+1 denotes the state of the environment at time t+1, and ST denotes the final state of the environment; a second composing unit for composing the plurality of data enhancement behaviors into a behavior space set A of the agent, A = {A1, A2, ..., Ak, ...}, where Ak denotes the kth data enhancement behavior; and an interaction unit for obtaining the state-behavior deep Q network model based on the interaction between the agent and the environment.
In an embodiment of the present invention, the interaction unit may be specifically configured to: obtain the environment state St at time t; input St into the agent, so that the agent performs a data enhancement behavior At at time t according to St; after the agent executes At, change the state of the environment to the environment state St+1 at time t+1, and simultaneously feed the feedback reward value at time t+1 back to the agent, so that the agent executes the data enhancement behavior at time t+1; obtain the feedback reward value corresponding to each time, so as to compose the feedback reward values into a feedback reward set R, R = {R1, R2, ..., Rt, Rt+1, ...}, where Rt denotes the feedback reward value at time t and Rt+1 denotes the feedback reward value at time t+1; determine the value function of the agent according to each feedback reward value; and obtain the state-behavior deep Q network model based on the value function.
In an embodiment of the invention, when determining the value function of the agent according to the respective feedback reward values, the interaction unit may be specifically configured to: acquire the control strategy π of the agent, and the state-value function and behavior-value function corresponding to π. When obtaining the state-behavior deep Q network model based on the value function, the interaction unit may be specifically configured to: optimize the state-behavior deep Q network model by maximizing the state-value function and the behavior-value function.
In one embodiment of the present invention, the state-value function corresponding to the control strategy π is:

Vπ(s) = Eπ[ Σ_{k=0}^{∞} γ^k R_{t+k+1} | St = s ]

and the behavior-value function corresponding to the control strategy π is:

Qπ(s, a) = Eπ[ Σ_{k=0}^{∞} γ^k R_{t+k+1} | St = s, At = a ]

where Eπ denotes the expectation of the random variable when the agent takes the strategy π, γ denotes the discount factor, Rt+k+1 denotes the feedback reward value at time t+k+1, s denotes the value given to St, and a denotes the value given to At.
In one embodiment of the present invention, the quality inspection module 60 includes: the processing unit is used for preprocessing a plurality of prediction results when the number of the prediction results is multiple; and the quality inspection unit is used for performing quality inspection on the workpiece according to the preprocessed prediction result.
In one embodiment of the invention, the plurality of data enhancement behaviors include: flipping, rotating, scaling, and brightness adjustment.
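The four enhancement behaviors can be illustrated with elementary array operations on a small grayscale image held as a nested list; a production system would use an image library, so this is only a sketch:

```python
def flip(img):
    # Horizontal flip: reverse each row.
    return [row[::-1] for row in img]

def rotate90(img):
    # Rotate 90 degrees clockwise: transpose, then reverse each new row.
    return [list(row)[::-1] for row in zip(*img)]

def scale2x(img):
    # Nearest-neighbour 2x zoom: repeat every pixel and every row.
    return [[p for p in row for _ in (0, 1)] for row in img for _ in (0, 1)]

def brightness(img, factor):
    # Brightness adjustment: scale each pixel and clamp to the 0..255 range.
    return [[min(255, int(p * factor)) for p in row] for row in img]

img = [[10, 20],
       [30, 40]]
```

Each function maps one image state to another, which is exactly the role the data enhancement behaviors play in the agent's behavior space.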
It should be noted that, for the specific implementation and implementation principle of the workpiece quality inspection apparatus, reference may be made to the specific implementation of the workpiece quality inspection method, and details are not described here again to avoid redundancy.
According to the workpiece quality inspection device provided by the embodiment of the invention, before the workpiece quality inspection is realized by adopting the neural network model, the deep Q network model is trained, the optimal data enhancement behavior is determined through the deep Q network model, then the data enhancement is carried out on the test image through the optimal data enhancement behavior, the test image is input into the neural network model after the data enhancement, and the quality inspection is realized according to the prediction result of the neural network model. Therefore, the testing effect can be ensured, and meanwhile, the testing time is greatly reduced, so that the workpiece quality inspection efficiency is improved, and the workpiece quality inspection accuracy is improved.
The invention further provides a computer device corresponding to the embodiment.
The computer device of the embodiment of the invention comprises a memory, a processor and a computer program which is stored on the memory and can run on the processor, and when the processor executes the computer program, the workpiece quality inspection method of the embodiment of the invention can be realized.
According to the computer device provided by the embodiment of the invention, when a processor executes a computer program stored on a memory, a test image and a trained neural network model corresponding to a workpiece to be quality-tested are firstly obtained, a training sample of a deep Q network is obtained, then the deep Q network is constructed, the deep Q network is subjected to learning training based on the training sample to obtain a state-behavior deep Q network model corresponding to the neural network model, then the test image is input into the deep Q network model to obtain at least one target enhancement behavior corresponding to the test image, then the test image is subjected to data enhancement according to the at least one target enhancement behavior, the enhanced test image is input into the neural network model to obtain at least one prediction result, and finally the workpiece is subjected to quality testing according to the at least one prediction result.
According to the computer equipment provided by the embodiment of the invention, before the neural network model is adopted to realize the workpiece quality inspection, the deep Q network model is trained, the optimal data enhancement behavior is determined through the deep Q network model, the optimal data enhancement behavior is further used for carrying out data enhancement on the test image, the test image is input into the neural network model after the data enhancement, and the quality inspection is realized according to the prediction result of the neural network model. Therefore, the testing effect can be ensured, and meanwhile, the testing time is greatly reduced, so that the workpiece quality inspection efficiency is improved, and the workpiece quality inspection accuracy is improved.
In the description of the present invention, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implying any number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. The meaning of "plurality" is two or more unless specifically limited otherwise.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and alternate implementations are included within the scope of the preferred embodiment of the present invention in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present invention.
It should be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and when the program is executed, the program includes one or a combination of the steps of the method embodiments. In addition, functional units in the embodiments of the present invention may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium.
Although embodiments of the present invention have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present invention, and that variations, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present invention.

Claims (10)

1. A method of inspecting a workpiece, comprising:
acquiring a test image corresponding to a workpiece to be tested and a trained neural network model;
obtaining training samples of a deep Q network, the training samples comprising: the method comprises the following steps that a plurality of original images, data enhancement images corresponding to the original images and data enhancement behaviors are obtained;
constructing a deep Q network, and performing learning training on the deep Q network based on the training sample to obtain a state-behavior deep Q network model corresponding to the neural network model;
inputting the test image into the depth Q network model to obtain at least one target enhancement behavior corresponding to the test image;
performing data enhancement on the test image according to the at least one target enhancement behavior, and inputting the enhanced test image into the neural network model to obtain at least one prediction result;
and performing quality inspection on the workpiece according to the at least one prediction result.
2. The workpiece quality inspection method according to claim 1, wherein learning and training the deep Q network based on the training samples to obtain a state-behavior deep Q network model corresponding to the neural network model comprises:
composing the plurality of data enhancement images into a state space set S of an environment, S = {S1, S2, ..., St, St+1, ..., ST}, wherein St denotes the state of the environment at time t, St+1 denotes the state of the environment at time t+1, and ST denotes the final state of the environment;
composing the plurality of data enhancement behaviors into a behavior space set A of an agent, A = {A1, A2, ..., Ak, ...}, wherein Ak denotes a kth data enhancement behavior;
and obtaining the state-behavior deep Q network model based on the interaction between the agent and the environment.
3. The method of claim 2, wherein deriving the state-behavior deep Q-network model based on interactions between the agent and the environment comprises:
obtaining an environment state St at time t;
inputting the environment state St at time t into the agent, so that the agent performs a data enhancement behavior At at time t according to St;
after the agent executes the data enhancement behavior At at time t, changing the state of the environment to an environment state St+1 at time t+1, and simultaneously feeding the feedback reward value at time t+1 back to the agent, so that the agent executes the data enhancement behavior at time t+1;
obtaining the feedback reward value corresponding to each time, so as to compose the feedback reward values into a feedback reward set R, R = {R1, R2, ..., Rt, Rt+1, ...}, wherein Rt denotes the feedback reward value at time t and Rt+1 denotes the feedback reward value at time t+1;
determining a value function of the agent according to each feedback reward value;
and obtaining the state-behavior deep Q network model based on the value function.
4. The method according to claim 3, wherein determining the value function of the agent according to the respective feedback reward values comprises:
acquiring a control strategy π of the agent, and a state-value function and a behavior-value function corresponding to the control strategy π;
and obtaining the state-behavior deep Q network model based on the value function comprises:
optimizing the state-behavior deep Q network model by maximizing the state-value function and the behavior-value function.
5. The method of claim 4, wherein the state-value function corresponding to the control strategy π is:

Vπ(s) = Eπ[ Σ_{k=0}^{∞} γ^k R_{t+k+1} | St = s ]

and the behavior-value function corresponding to the control strategy π is:

Qπ(s, a) = Eπ[ Σ_{k=0}^{∞} γ^k R_{t+k+1} | St = s, At = a ]

wherein Eπ denotes the expectation of the random variable when the agent takes the strategy π, γ denotes the discount factor, Rt+k+1 denotes the feedback reward value at time t+k+1, s denotes the value given to St, and a denotes the value given to At.
6. The method of claim 1, wherein inspecting the workpiece based on the at least one predictor comprises:
when the number of the prediction results is multiple, preprocessing the multiple prediction results;
and performing quality inspection on the workpiece according to the prediction result after pretreatment.
7. The method of claim 1, wherein the plurality of data enhancement behaviors comprise: flipping, rotating, scaling, and brightness adjustment.
8. A workpiece quality inspection apparatus, comprising:
the first acquisition module is used for acquiring a test image corresponding to a workpiece to be tested and a trained neural network model;
a second obtaining module, configured to obtain a training sample of the deep Q network, where the training sample includes: the method comprises the following steps that a plurality of original images, data enhancement images corresponding to the original images and data enhancement behaviors are obtained;
the training module is used for constructing a deep Q network and carrying out learning training on the deep Q network based on the training samples so as to obtain a state-behavior deep Q network model corresponding to the neural network model;
a first determining module, configured to input the test image into the depth Q network model to obtain at least one target enhancement behavior corresponding to the test image;
the second determination module is used for performing data enhancement on the test image according to the at least one target enhancement behavior and inputting the enhanced test image into the neural network model to obtain at least one prediction result;
and the quality inspection module is used for performing quality inspection on the workpiece according to the at least one prediction result.
9. The workpiece quality inspection apparatus of claim 8, wherein the training module comprises:
a first composing unit for composing the plurality of data enhancement images into a state space set S of the environment, S = {S1, S2, ..., St, St+1, ..., ST}, wherein St denotes the state of the environment at time t, St+1 denotes the state of the environment at time t+1, and ST denotes the final state of the environment;
a second composing unit for composing the plurality of data enhancement behaviors into a behavior space set A of an agent, A = {A1, A2, ..., Ak, ...}, wherein Ak denotes a kth data enhancement behavior;
and the interaction unit is used for obtaining the state-behavior deep Q network model based on the interaction between the agent and the environment.
10. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the computer program, implements the method of workpiece quality inspection according to any one of claims 1-7.
CN202111223818.6A 2021-10-21 2021-10-21 Workpiece quality inspection method and device and computer equipment Active CN113658183B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111223818.6A CN113658183B (en) 2021-10-21 2021-10-21 Workpiece quality inspection method and device and computer equipment

Publications (2)

Publication Number Publication Date
CN113658183A true CN113658183A (en) 2021-11-16
CN113658183B CN113658183B (en) 2022-02-08

Family

ID=78484343

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111223818.6A Active CN113658183B (en) 2021-10-21 2021-10-21 Workpiece quality inspection method and device and computer equipment

Country Status (1)

Country Link
CN (1) CN113658183B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107784661A (en) * 2017-09-08 2018-03-09 上海电力学院 Substation equipment infrared image classifying identification method based on region-growing method
CN111489334A (en) * 2020-04-02 2020-08-04 暖屋信息科技(苏州)有限公司 Defect workpiece image identification method based on convolution attention neural network
CN211839079U (en) * 2019-11-22 2020-11-03 深圳信息职业技术学院 Chip detecting, sorting and counting system
CN112633245A (en) * 2020-12-31 2021-04-09 西安交通大学 Planetary gear box fault diagnosis method based on deep reinforcement learning model

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Xu Yiding: "Workpiece recognition algorithm based on convolutional neural networks", Modular Machine Tool & Automatic Manufacturing Technique *
Guo Ruipeng: "Online inspection of ground workpiece surface quality based on neural networks", Electronic Design Engineering *

Also Published As

Publication number Publication date
CN113658183B (en) 2022-02-08

Similar Documents

Publication Publication Date Title
US10818000B2 (en) Iterative defect filtering process
JP7004145B2 (en) Defect inspection equipment, defect inspection methods, and their programs
CN110619618A (en) Surface defect detection method and device and electronic equipment
CN110136101B (en) Tire X-ray defect detection method based on twinning distance comparison
US20200402221A1 (en) Inspection system, image discrimination system, discrimination system, discriminator generation system, and learning data generation device
JP2021515927A (en) Lighting condition setting method, devices, systems and programs, and storage media
JP2020077326A (en) Photographing method and photographing device
CN114170227B (en) Product surface defect detection method, device, equipment and storage medium
CN114945938A (en) Method and device for detecting actual area of defect and method and device for detecting display panel
CN113554645B (en) Industrial anomaly detection method and device based on WGAN
CN112070762A (en) Mura defect detection method and device for liquid crystal panel, storage medium and terminal
CN113658183B (en) Workpiece quality inspection method and device and computer equipment
CN114757868A (en) Product flaw detection method, computer device and storage medium
CN116777861A (en) Marking quality detection method and system for laser engraving machine
CN115861305A (en) Flexible circuit board detection method and device, computer equipment and storage medium
CN115601341A (en) Method, system, equipment, medium and product for detecting defects of PCBA (printed circuit board assembly) board
US20230401670A1 (en) Multi-scale autoencoder generation method, electronic device and readable storage medium
CN115222691A (en) Image defect detection method, system and related device
CN113592859B (en) Deep learning-based classification method for defects of display panel
KR102434442B1 (en) Method of performing defect inspection of inspection object at high speed and apparatuses performing the same
CN113034432B (en) Product defect detection method, system, device and storage medium
KR20220010516A (en) Inspection Device, Inspection Method and Inspection Program, and Learning Device, Learning Method and Learning Program
JP7397404B2 (en) Image processing device, image processing method, and image processing program
WO2022202456A1 (en) Appearance inspection method and appearance inspection system
WO2021229901A1 (en) Image inspection device, image inspection method, and prelearned model generation device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant