CN111986198B - Mold residue detection method and device - Google Patents
- Publication number: CN111986198B (application CN202010942710.1A)
- Authority: CN (China)
- Legal status: Active
Classifications
- G06T7/0004 — Image analysis; inspection of images; industrial image inspection
- G06N3/045 — Neural networks; combinations of networks
- G06T5/70 — Image enhancement or restoration; denoising; smoothing
- G06T7/13 — Segmentation; edge detection
- G06T7/90 — Determination of colour characteristics
- G06T2207/20081 — Indexing scheme for image analysis; training; learning
Abstract
The present disclosure relates to a mold residue detection method and apparatus. The method comprises: acquiring first image data corresponding to an image of the interior of an injection molding machine's mold cavity captured by an industrial camera; preprocessing the first image data by a light compensation method and a position correction method to obtain preprocessed second image data; and taking the second image data as the input of a preset residue detection model and determining, according to the result output by the residue detection model, whether the interior of the mold cavity contains residue. The trained residue detection model can identify whether the preprocessed injection molding machine mold cavity contains residue, replacing manual inspection, which has low production efficiency and a high error rate, thereby improving recognition accuracy while saving labor cost.
Description
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to a method and an apparatus for detecting mold residues.
Background
As a production and molding method for industrial products, injection molding technology has gained wide room for development with the continuous progress of modern industry in recent years; the injection molding machine serves as the main production equipment, and its level of automation is continuously improving. Injection molding plays a significant role in the plastics industry, and studying production control and injection molding monitoring systems from the perspective of the injection molding machine can improve product quality, improve processing methods, and even promote the development of the injection molding industry. The injection molding process is complicated and is influenced by many factors, such as process parameters, machine performance, manual operation, production environment, and material parameters. As a result, after the injection molding machine molds a product, when the mold is opened and the manipulator grabs the molded product, material often remains in the mold cavity, for example when the mold's cooling time before mold closing is insufficient. If the remaining raw material is not found and handled in time, the mold cavity is easily damaged at the next mold closing, subsequent products are affected, and production efficiency suffers greatly.
At present, in the injection molding workshops of large enterprises, most manufacturers monitor injection molds through manual shift work in order to ensure product qualification rates. Workers detect mold residue visually and by hand. Such a method is inefficient and labor intensive, requires a large manpower input, and is prone to missed detections when operators are fatigued. Therefore, there is a need for an intelligent injection mold residue detection system.
Disclosure of Invention
To overcome the problems in the related art, the present disclosure provides a mold residue detection method and apparatus.
According to a first aspect of embodiments of the present disclosure, there is provided a mold residue detection method, the method comprising:
acquiring first image data corresponding to an image, acquired by an industrial camera, inside a mold cavity of an injection molding machine;
preprocessing the first image data by a light compensation method and a position correction method to obtain preprocessed second image data;
and taking the second image data as the input of a preset residue detection model, and determining whether the interior of the mold cavity contains residues or not according to the output result of the residue detection model.
Optionally, the pre-trained residue detection model is a generative adversarial neural network model, and the generative adversarial neural network model includes a generator and a discriminator. The taking the second image data as the input of the preset residue detection model and determining whether the interior of the mold cavity contains residue according to the result output by the residue detection model includes:
taking the noise data of the second image data as the input of the generator, and acquiring the generated image data output by the generator;
and taking the generated image data and the second image data as the input of the discriminator to determine whether the interior of the mold cavity contains residues according to the output result of the discriminator.
Optionally, the using the generated image data and the second image data as the input of the discriminator to determine whether the interior of the mold cavity contains residues according to the result output by the discriminator includes:
taking the generated image data and the second image data as inputs of the discriminator;
when the discrimination result output by the discriminator according to the second image data is the number 1, the interior of the mold cavity contains residue;
and when the discrimination result output by the discriminator according to the second image data is the number 0, there is no residue in the mold cavity.
Optionally, after determining whether the interior of the mold cavity contains the residue according to the result output by the residue detection model, the method further includes:
and if the residues are determined to be contained in the mold cavity, sending alarm information indicating that the residues are contained in the mold cavity.
Optionally, the preprocessing the first image data by the light compensation method and the position correction method to obtain preprocessed second image data includes:
acquiring a preset number of standard image data under a standard illumination condition;
obtaining the mean value and the variance of the preset number of standard image data by a median filtering method;
determining third image data subjected to light compensation according to the mean value, the variance, the first image data and a preset light compensation formula;
and carrying out position correction on the image corresponding to the third image data by a position correction method to obtain second image data corresponding to the image after the position correction.
According to a second aspect of embodiments of the present disclosure, there is provided a mold residue detection apparatus, the apparatus including:
the data acquisition module is used for acquiring first image data corresponding to an image, collected by an industrial camera, in a mold cavity of the injection molding machine;
the preprocessing module is connected with the data acquisition module and used for preprocessing the first image data through a light compensation method and a position correction method to acquire preprocessed second image data;
and the judging module is connected with the preprocessing module and used for taking the second image data as the input of a preset residue detection model and determining whether the interior of the mold cavity contains residues or not according to the output result of the residue detection model.
Optionally, the pre-trained residue detection model is a generative adversarial neural network model, and the generative adversarial neural network model includes: a generator and a discriminator, the discriminating module being configured to:
taking the noise data of the second image data as the input of the generator, and acquiring the generated image data output by the generator;
and taking the generated image data and the second image data as the input of the discriminator so as to determine whether the interior of the mold cavity contains residues according to the output result of the discriminator.
Optionally, the determining module is configured to:
taking the generated image data and the second image data as inputs of the discriminator;
when the discrimination result output by the discriminator according to the second image data is the number 1, the interior of the mold cavity contains residue;
and when the discrimination result output by the discriminator according to the second image data is the number 0, there is no residue in the mold cavity.
Optionally, the mold residue detecting device further includes:
and the alarm module is connected with the judging module and used for sending alarm information indicating that the mold cavity internally contains residues if the mold cavity internally contains residues is determined.
Optionally, the preprocessing module is configured to:
acquiring a preset number of standard image data under a standard illumination condition;
obtaining the mean value and the variance of the preset number of standard image data by a median filtering method;
determining third image data subjected to light compensation according to the mean value, the variance, the first image data and a preset light compensation formula;
and carrying out position correction on the image corresponding to the third image data by a position correction method to obtain second image data corresponding to the image after the position correction.
The technical scheme disclosed by the invention comprises the following steps: acquiring first image data corresponding to an image, acquired by an industrial camera, inside a mold cavity of an injection molding machine; preprocessing the first image data by a light compensation method and a position correction method to obtain preprocessed second image data; and taking the second image data as the input of a preset residue detection model, and determining whether the interior of the mold cavity contains residues or not according to the output result of the residue detection model. Whether the residues are contained in the preprocessed injection molding machine mold cavity or not can be identified through the trained residue detection model, a manual identification method with low production efficiency and high identification error rate is replaced, and the labor cost is saved while the identification accuracy rate is improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
FIG. 1 is a flow chart illustrating a mold residue detection method according to an exemplary embodiment;
FIG. 2 is a flow chart of a method for determining whether a mold cavity contains residue according to the method shown in FIG. 1;
FIG. 3 is a flow chart of a discrimination method according to the method shown in FIG. 2;
FIG. 4 is a flow chart of another mold residue detection method according to that shown in FIG. 1;
FIG. 5 is a block diagram illustrating a mold residue detection apparatus according to an exemplary embodiment;
fig. 6 is a block diagram of another mold residue detection apparatus according to fig. 5.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
FIG. 1 is a flow chart illustrating a mold residue detection method according to an exemplary embodiment, as shown in FIG. 1, including the steps of:
in step 110, first image data corresponding to an image of the interior of a mold cavity of an injection molding machine acquired by an industrial camera is acquired.
For example, an injection molding machine is the main molding apparatus that uses a plastic molding mold to form thermoplastic or thermosetting plastics into plastic products of various shapes. When the mold is closed with insufficient cooling time, raw material often remains inside the mold cavity and damages the mold. Therefore, before the mold is closed, an industrial camera captures an image of the interior of the injection molding machine's mold cavity and the corresponding first image data is acquired; the first image data is then processed through steps 120 to 130 to determine whether the interior of the mold cavity contains residue.
The industrial camera is a key component of the machine vision system, and its most essential function is to convert an optical signal into an ordered electrical signal. Selecting a suitable camera is also an important part of machine vision system design: the camera not only directly determines the resolution and quality of the acquired image, but also directly affects the operating mode of the whole system.
In step 120, the first image data is preprocessed by a light compensation method and a position correction method, and preprocessed second image data is obtained.
For example, the lighting conditions and the position of the captured region may vary each time the industrial camera captures an image. Therefore, when the residue detection model is used with images of the mold cavity interior to determine whether the cavity contains residue, the first image data corresponding to the initially captured image must be preprocessed: a light compensation method and a position correction method are used to obtain second image data corresponding to an image with stable illumination, uniform size, and correct position. The preprocessing specifically includes: acquiring a preset number of standard image data under a standard illumination condition; obtaining the mean and variance of the preset number of standard image data by a median filtering method; determining light-compensated third image data according to the mean, the variance, the first image data, and a preset light compensation formula; and performing position correction on the image corresponding to the third image data by a position correction method to obtain second image data corresponding to the corrected image.
Illustratively, the light compensation method includes: collecting 10 standard images under a standard illumination condition, performing median filtering on each of the 10 standard images, and determining their mean μ₀ and variance σ₀; then performing median filtering on the image corresponding to the first image data obtained in step 110, determining the mean μ and variance σ of the filtered image, and performing light compensation on the image corresponding to the first image data according to light compensation formula (1).

Wherein, the light compensation formula (1) is:

f(x, y) = (f₀(x, y) − μ) · σ₀ / σ + μ₀

where f(x, y) is the brightness function of the compensated image, f₀(x, y) is the brightness function of the image before compensation, μ₀ is the mean and σ₀ the variance of the 10 standard images, and μ is the mean and σ the variance of the first image.
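Formula (1) can be sketched in a few lines of NumPy. The function name `light_compensate` and the reference statistics used below are illustrative, not from the patent; note also that σ behaves as a standard deviation in the formula, although the patent calls it a variance:

```python
import numpy as np

def light_compensate(img, mu0, sigma0):
    """Pull an image's brightness statistics toward the standard-illumination
    reference: f = (f0 - mu) * sigma0 / sigma + mu0 (formula (1)).

    mu0, sigma0: mean and spread of the 10 median-filtered standard images.
    """
    img = img.astype(np.float64)
    mu = img.mean()       # mean of the captured (first) image
    sigma = img.std()     # spread of the captured (first) image
    return (img - mu) * sigma0 / sigma + mu0

# Example: a dim, low-contrast frame normalized toward reference statistics
frame = np.random.default_rng(0).normal(60.0, 10.0, size=(480, 480))
out = light_compensate(frame, mu0=128.0, sigma0=40.0)
```

By construction the compensated image's mean and spread match μ₀ and σ₀, which is what makes frames captured under drifting lighting comparable before they reach the detection model.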
The position correction method includes the following steps. The first step: perform color space conversion on the light-compensated image corresponding to the third image data, converting it from the RGB color space to the grayscale color space. The second step: determine the maximum contour of the image obtained in the first step. The third step: find the elliptical feature of the contour map obtained in the second step and calculate the contour's center coordinates. The fourth step: perform an affine transformation on the image corresponding to the third image data according to the ellipse's feature angle and the contour center coordinates obtained in the third step, so that the object is translated to the center of the image, and apply the same operation to the corresponding background-removed image. The fifth step: determine the maximum contour map again from the result image obtained in the fourth step, and calculate the contour's center coordinates. The sixth step: search the contour's boundaries in the horizontal and vertical directions according to the contour information obtained in the fifth step. The seventh step: crop the background-removed mold image and the corresponding mold mask according to the boundary information from the sixth step and the contour center coordinates from the fifth step.
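The centroid-and-translate core of the position correction (finding the contour center, then shifting the object to the image center) can be sketched with plain NumPy. A production system would use OpenCV routines such as `cv2.findContours`, `cv2.fitEllipse`, and `cv2.warpAffine`; the helper below is an illustrative stand-in that translates by whole pixels:

```python
import numpy as np

def center_object(img, mask):
    """Translate img so the masked object's centroid lands at the image center.

    Stand-in for the centroid + affine-translation steps of the pipeline;
    the centroid of the binary mask plays the role of the contour center.
    """
    ys, xs = np.nonzero(mask)
    cy, cx = ys.mean(), xs.mean()                     # object center
    h, w = mask.shape
    dy, dx = int(round(h / 2 - cy)), int(round(w / 2 - cx))
    return np.roll(img, (dy, dx), axis=(0, 1)), (dy, dx)

# A 10x10 frame with a bright blob in the top-left corner
img = np.zeros((10, 10))
img[1:3, 1:3] = 1.0
centered, shift = center_object(img, img > 0)
```

After the call the blob sits in the middle of the frame, so crops taken around the image center line up across captures.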
In step 130, the second image data is used as an input of a preset residue detection model, and whether the interior of the mold cavity contains residues is determined according to the output result of the residue detection model.
Illustratively, the second image data obtained after the light compensation and position correction of step 120 is used as the input of the residue detection model, and whether the interior of the mold cavity contains residue is determined according to the model's output; the mold may proceed to the next mold-closing operation only when the cavity contains no residue. If the mold cavity is determined to contain residue, a worker must treat the mold, and after the residue is removed, whether the manually cleared cavity still contains residue is checked either by manual inspection or by repeating steps 110 to 130.
In summary, the technical solutions provided by the embodiments of the present disclosure include: acquiring first image data corresponding to an image, acquired by an industrial camera, inside a mold cavity of an injection molding machine; preprocessing the first image data by a light compensation method and a position correction method to obtain preprocessed second image data; and taking the second image data as an input of a preset residue detection model, and determining whether the interior of the mold cavity contains residues according to a result output by the residue detection model. Whether the residues are contained in the preprocessed injection molding machine mold cavity or not can be identified through the trained residue detection model, a manual identification method with low production efficiency and high identification error rate is replaced, and the labor cost is saved while the identification accuracy rate is improved.
Fig. 2 is a flowchart of the method for determining whether a mold cavity contains residue shown in Fig. 1. As shown in Fig. 2, the pre-trained residue detection model is a generative adversarial neural network model, which includes: a generator and a discriminator, and the step 130 comprises:
in step 131, the noise data of the second image data is used as the input of the generator, and the generated image data output by the generator is obtained.
In step 132, the generated image data and the second image data are used as inputs to the discriminator to determine whether the interior of the mold cavity contains residue based on the output of the discriminator.
Illustratively, the generative adversarial neural network model is divided into two parts: a Generator (G) and a Discriminator (D). Noise data derived from the second image data is passed through the generator G to obtain generated image data; the discriminator then takes the second image data and the generated image data and judges whether the generated image is real (that is, whether the interior of the mold cavity contains residue).

When the generator of the generative adversarial network is trained, generated image data is obtained from the noise data of the second image data. The input and output sizes are 480 × 480 pixels, and an Adam optimizer is selected to update the weights of the neural network. The specific parameter structure is shown in Table 1.1.
Table 1.1 Generator parameter settings of the generative adversarial neural network

The discriminator is used to evaluate the authenticity of the generated image and to prompt the generator to adjust its parameters accordingly; its specific parameter structure is shown in Table 1.2.

Table 1.2 Discriminator parameter settings of the generative adversarial neural network
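The parameter tables 1.1 and 1.2 are not reproduced in this text, so as a purely illustrative stand-in, the discriminator's data flow (flattened image in, authenticity score in (0, 1) out) can be sketched as a tiny untrained dense network; every name and dimension here is an assumption, not the patent's architecture:

```python
import numpy as np

rng = np.random.default_rng(42)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class ToyDiscriminator:
    """Minimal dense discriminator: flattened image -> real/fake score."""

    def __init__(self, in_dim, hidden=16):
        # Small random weights; a real discriminator would be trained
        # adversarially against the generator.
        self.w1 = rng.normal(0.0, 0.1, (in_dim, hidden))
        self.b1 = np.zeros(hidden)
        self.w2 = rng.normal(0.0, 0.1, (hidden, 1))
        self.b2 = np.zeros(1)

    def score(self, img):
        h = np.maximum(0.0, img.ravel() @ self.w1 + self.b1)  # ReLU hidden layer
        z = (h @ self.w2 + self.b2).item()                    # single logit
        return sigmoid(z)                                     # score in (0, 1)

d = ToyDiscriminator(in_dim=8 * 8)
s = d.score(rng.normal(size=(8, 8)))
```

The sigmoid output is what gets thresholded into the 1 (residue) / 0 (clean) decision described in steps 1322 and 1323.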
In addition, the residue detection model further includes four convolutional layers and three fully connected layers. The convolutional layers extract features from each channel of the image of the mold cavity's interior surface; the pooling layers remove redundant information from the per-channel features and further extract the main features; and the fully connected layers convert the two-dimensional feature maps learned by the convolutional and pooling layers into one-dimensional vectors. The specific method is as follows: first, an overall mold cavity interior surface image of 480 × 480 × 3 pixels is input and decomposed into 64 local images of 60 × 60 × 3 pixels; convolution and pooling are then applied to the local images to extract the effective information in each channel of the image; the extracted features are integrated by a fully connected layer, the neurons are activated through Softmax, and the classification result is finally output.
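The decomposition of the 480 × 480 × 3 image into 64 local images of 60 × 60 × 3 pixels is a plain 8 × 8 grid split, which can be expressed as a NumPy reshape; `decompose_patches` is an illustrative name, not from the patent:

```python
import numpy as np

def decompose_patches(image, patch=60):
    """Split an HxWxC image into non-overlapping patch x patch tiles.

    For a 480x480x3 input and patch=60 this yields the 8x8 grid of
    64 local images described in the text.
    """
    h, w, c = image.shape
    gh, gw = h // patch, w // patch
    return (image[:gh * patch, :gw * patch]
            .reshape(gh, patch, gw, patch, c)  # split rows and columns
            .swapaxes(1, 2)                    # group the grid axes together
            .reshape(gh * gw, patch, patch, c))

img = np.arange(480 * 480 * 3, dtype=np.float32).reshape(480, 480, 3)
patches = decompose_patches(img)
```

Patch index k corresponds to grid cell (k // 8, k % 8), so patch 0 is the top-left 60 × 60 tile of the cavity image.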
Padding of the convolutional layers is set to "SAME" and padding of the max pooling layers to "VALID"; all activation functions are rectified linear units (ReLU); the regularization coefficient is set to 0.0001 and the overall learning rate to 0.001. Specific network structure parameters are shown in Table 1.3.
Table 1.3 Parameter settings of the mold cavity interior surface defect detection network structure
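The "SAME" and "VALID" padding settings determine each layer's output size by the standard TensorFlow-style arithmetic, sketched below; the patent's actual layer sizes live in Table 1.3, which is not reproduced here, so the example dimensions are only illustrative:

```python
def conv_out(n, k, s, padding):
    """Output spatial size of a conv/pool layer over an n-wide input,
    kernel k, stride s, under TF-style "SAME" or "VALID" padding."""
    if padding == "SAME":
        return -(-n // s)            # ceil(n / s): input is padded as needed
    if padding == "VALID":
        return (n - k) // s + 1      # no padding: kernel must fit entirely
    raise ValueError(f"unknown padding: {padding}")
```

For example, a 60-wide patch passed through a stride-1 "SAME" convolution stays 60 wide, while a 2 × 2 stride-2 "VALID" max pool halves it to 30.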
Fig. 3 is a flow chart of a discrimination method according to fig. 2, and as shown in fig. 3, the step 132 includes:
in step 1321, the generated image data and the second image data are used as input of the discriminator;
in step 1322, when the determination result output by the determiner according to the second image data is number 1, it is determined that the interior of the mold cavity contains residues.
In step 1323, when the determination result output by the determiner according to the second image data is a number 0, it is determined that there is no residue inside the mold cavity.
Fig. 4 is a flow chart of another mold residue detection method based on the method shown in Fig. 1; as shown in Fig. 4, the method further comprises:
in step 140, if it is determined that the interior of the mold cavity contains residues, an alarm message indicating that the interior of the mold cavity contains residues is issued.
In summary, the technical solutions provided by the embodiments of the present disclosure include: acquiring first image data corresponding to an image acquired by an industrial camera and inside a mold cavity of an injection molding machine; preprocessing the first image data by a light compensation method and a position correction method to obtain preprocessed second image data; and taking the second image data as the input of a preset residue detection model, and determining whether the interior of the mold cavity contains residues or not according to the output result of the residue detection model. Whether the residues are contained in the preprocessed injection molding machine mold cavity or not can be identified through the trained residue detection model, a manual identification method with low production efficiency and high identification error rate is replaced, and the labor cost is saved while the identification accuracy rate is improved.
Fig. 5 is a block diagram illustrating a mold residue detection apparatus according to an exemplary embodiment, as shown in fig. 5, the mold residue detection apparatus 500 includes:
the data acquisition module 510 is used for acquiring first image data corresponding to an image, acquired by an industrial camera, inside a mold cavity of the injection molding machine;
a preprocessing module 520, connected to the data obtaining module 510, for preprocessing the first image data by a light compensation method and a position correction method to obtain a preprocessed second image data;
a determining module 530, connected to the preprocessing module 520, configured to use the second image data as an input of a preset residue detection model, and determine whether the interior of the mold cavity contains residues according to an output result of the residue detection model.
Optionally, the pre-trained residue detection model is a generative adversarial neural network model, and the generative adversarial neural network model includes: a generator and a discriminator, the determining module 530 being configured to:
taking the noise data of the second image data as the input of the generator, and acquiring the generated image data output by the generator;
and taking the generated image data and the second image data as the input of the discriminator so as to determine whether the interior of the mold cavity contains residues according to the output result of the discriminator.
Optionally, the determining module 530 is configured to:
taking the generated image data and the second image data as the input of the discriminator;
when the discrimination result output by the discriminator for the second image data is 1, the interior of the mold cavity contains residues;
when the discrimination result output by the discriminator for the second image data is 0, there are no residues in the mold cavity.
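The generator/discriminator flow described above can be sketched as below. Here `generator` and `discriminator` are hypothetical stand-ins for the trained GAN networks (the patent does not specify their architecture), and mapping a discriminator score above 0.5 to the decision 1 (residue present) is an assumption for illustration only:

```python
import numpy as np

def generator(noise):
    # Placeholder: a trained generator would map noise derived from the
    # second image data to a synthetic "clean cavity" image.
    return np.clip(noise, 0.0, 1.0)

def discriminator(image):
    # Placeholder score in [0, 1]; a trained discriminator would score how
    # far an image deviates from the learned "no residue" distribution.
    return float(image.mean())

def detect_residue(second_image, rng=np.random.default_rng(0)):
    """Return 1 if the mold cavity is judged to contain residues, else 0."""
    noise = rng.normal(loc=second_image.mean(), scale=0.1,
                       size=second_image.shape)
    generated = generator(noise)        # generated image data from the generator
    _ = discriminator(generated)        # generated image is also fed in
    score = discriminator(second_image) # score the acquired (second) image
    return 1 if score > 0.5 else 0      # assumed threshold for the 1/0 result

bright = np.full((8, 8), 0.9)  # toy image whose score exceeds the threshold
dark = np.full((8, 8), 0.1)    # toy image whose score falls below it
print(detect_residue(bright), detect_residue(dark))  # prints "1 0"
```

In a real deployment the two placeholder functions would be replaced by the trained networks of the generative adversarial model; only the surrounding decision logic follows the patent's description.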
Fig. 6 is a block diagram of another mold residue detection apparatus based on the embodiment of Fig. 5. As shown in Fig. 6, the mold residue detection apparatus 500 further includes:
an alarm module 540, connected to the determining module 530, for sending alarm information indicating that the mold cavity contains residues if it is determined that the mold cavity contains residues.
Optionally, the preprocessing module 520 is configured to:
acquiring a preset number of standard image data under a standard illumination condition;
obtaining the mean value and the variance of the preset number of standard image data by a median filtering method;
determining third image data subjected to light compensation according to the mean value, the variance, the first image data and a preset light compensation formula;
and carrying out position correction on the image corresponding to the third image data by a position correction method to obtain second image data corresponding to the image after the position correction.
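The light compensation step above can be sketched with NumPy as follows. This follows the formula f(x, y) = (f0(x, y) − μ)·σ0/σ + μ0 given later in the claims; the median-filtering step for estimating the statistics is omitted, and `np.std` is used for the quantity the patent text calls the variance — both simplifying assumptions:

```python
import numpy as np

def light_compensate(first_image, standard_images):
    """Light compensation per the patent's formula (1):
    f(x, y) = (f0(x, y) - mu) * sigma0 / sigma + mu0."""
    standard = np.stack(standard_images).astype(float)
    mu0, sigma0 = standard.mean(), standard.std()       # stats of the standard images
    mu, sigma = first_image.mean(), first_image.std()   # stats of the acquired image
    return (first_image - mu) * sigma0 / sigma + mu0

# Toy usage: ten "standard illumination" images and one brighter acquired image.
standards = [np.random.default_rng(i).uniform(0.4, 0.6, (16, 16)) for i in range(10)]
first = np.random.default_rng(99).uniform(0.7, 0.9, (16, 16))
compensated = light_compensate(first, standards)
```

After compensation the image's mean brightness matches the mean of the standard images, which is the intended effect of the normalization.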
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.
Claims (8)
1. A mold residue detection method, the method comprising:
acquiring first image data corresponding to an image of the interior of a mold cavity of an injection molding machine captured by an industrial camera;
preprocessing the first image data by a light compensation method and a position correction method to obtain preprocessed second image data;
taking the second image data as the input of a preset residue detection model, and determining whether the interior of the mold cavity contains residues according to the output result of the residue detection model;
the preprocessing the first image data by the light compensation method and the position correction method to obtain preprocessed second image data includes:
acquiring a preset number of standard image data under a standard illumination condition;
obtaining the mean value and the variance of the preset number of standard image data by a median filtering method;
determining third image data subjected to light compensation according to the mean value, the variance, the first image data and a preset light compensation formula;
performing position correction on the image corresponding to the third image data through a position correction method to obtain second image data corresponding to the image after the position correction;
the light compensation formula (1) is as follows:
f(x, y) = (f0(x, y) − μ) · σ0 / σ + μ0 (1)
wherein f(x, y) is the brightness function of the compensated image, f0(x, y) is the brightness function of the image before compensation, μ0 is the mean of the 10 standard images, σ0 is the variance of the 10 standard images, μ is the mean of the first image, and σ is the variance of the first image;
the position correction method includes: the first step is as follows: performing color space conversion on the image corresponding to the third image data obtained by the light compensation method, and converting the RGB color space into Gray color space; the second step: determining a first maximum contour map of the image obtained in the first step; the third step: according to the first maximum contour map obtained in the second step, searching an ellipse characteristic angle of the first maximum contour map, and calculating a center coordinate of the first maximum contour map; the fourth step: performing affine transformation on the image corresponding to the third image data according to the ellipse feature angle and the center coordinate obtained in the third step to enable the image to be translated to the center position of the image, and performing the same affine transformation and translation operation on the background-removed image corresponding to the corresponding background-removed third image data; the fifth step: determining a second maximum contour map according to the result image obtained in the third step; and calculating the coordinates of the center of the outline; and a sixth step: according to the second maximum contour map obtained in the fifth step, contour boundaries of the second maximum contour map in the horizontal direction and the vertical direction are searched; the seventh step: and cutting the mould image with the background removed and the corresponding mould mask according to the contour boundary obtained in the sixth step and the contour center coordinate obtained in the fifth step.
2. The mold residue detection method of claim 1, wherein the pre-trained residue detection model is a generative adversarial neural network model comprising a generator and a discriminator, and wherein taking the second image data as the input of the preset residue detection model and determining whether the interior of the mold cavity contains residues according to the result output by the residue detection model includes:
taking the noise data of the second image data as the input of the generator, and acquiring the generated image data output by the generator;
and taking the generated image data and the second image data as the input of the discriminator to determine whether the interior of the mold cavity contains residues according to the output result of the discriminator.
3. The mold residue detection method of claim 2, wherein the using the generated image data and the second image data as inputs to the discriminator to determine whether the interior of the mold cavity contains residues according to a result output by the discriminator comprises:
taking the generated image data and the second image data as inputs of the discriminator;
when the discrimination result output by the discriminator for the second image data is 1, the interior of the mold cavity contains residues;
and when the discrimination result output by the discriminator for the second image data is 0, there are no residues in the mold cavity.
4. The mold residue detection method of claim 1, wherein after determining whether the interior of the mold cavity contains residues according to the result output by the residue detection model, the method further comprises:
and if the mold cavity is determined to contain the residues, sending alarm information indicating that the mold cavity contains the residues.
5. A mold residue detection apparatus, the apparatus comprising:
the data acquisition module is used for acquiring first image data corresponding to an image of the interior of a mold cavity of the injection molding machine captured by an industrial camera;
the preprocessing module is connected with the data acquisition module and used for preprocessing the first image data through a light compensation method and a position correction method to obtain preprocessed second image data;
the judging module is connected with the preprocessing module and used for taking the second image data as the input of a preset residue detection model and determining whether the interior of the mold cavity contains residues according to the output result of the residue detection model;
the preprocessing the first image data by the light compensation method and the position correction method to obtain preprocessed second image data includes:
acquiring a preset number of standard image data under a standard illumination condition;
obtaining the mean value and the variance of the preset number of standard image data by a median filtering method;
determining third image data subjected to light compensation according to the mean value, the variance, the first image data and a preset light compensation formula;
performing position correction on the image corresponding to the third image data through a position correction method to obtain second image data corresponding to the image after the position correction;
the light compensation formula (1) is as follows:
f(x, y) = (f0(x, y) − μ) · σ0 / σ + μ0 (1)
wherein f(x, y) is the brightness function of the compensated image, f0(x, y) is the brightness function of the image before compensation, μ0 is the mean of the 10 standard images, σ0 is the variance of the 10 standard images, μ is the mean of the first image, and σ is the variance of the first image;
the position correction method includes: the first step is as follows: performing color space conversion on the image corresponding to the third image data obtained by the light compensation method, and converting the RGB color space into Gray color space; the second step: determining a first maximum profile of the image obtained in the first step; the third step: according to the first maximum contour map obtained in the second step, searching an ellipse characteristic angle of the first maximum contour map, and calculating a center coordinate of the first maximum contour map; the fourth step: performing affine transformation on the image corresponding to the third image data according to the ellipse feature angle and the center coordinate obtained in the third step to enable the image to be translated to the center position of the image, and performing the same affine transformation and translation operation on the background-removed image corresponding to the corresponding background-removed third image data; the fifth step: determining a second maximum contour map according to the result image obtained in the third step; and calculating the coordinates of the center of the outline; and a sixth step: according to the second maximum contour map obtained in the fifth step, contour boundaries of the second maximum contour map in the horizontal direction and the vertical direction are searched; the seventh step: and cutting the mould image with the background removed and the corresponding mould mask according to the contour boundary obtained in the sixth step and the contour center coordinate obtained in the fifth step.
6. The mold residue detection apparatus of claim 5, wherein the pre-trained residue detection model is a generative adversarial neural network model comprising a generator and a discriminator, and the judging module is configured to:
taking the noise data of the second image data as the input of the generator, and acquiring the generated image data output by the generator;
and taking the generated image data and the second image data as the input of the discriminator so as to determine whether the interior of the mold cavity contains residues according to the output result of the discriminator.
7. The mold residue detection apparatus of claim 6, wherein the judging module is configured to:
taking the generated image data and the second image data as inputs of the discriminator;
when the discrimination result output by the discriminator for the second image data is 1, the interior of the mold cavity contains residues;
and when the discrimination result output by the discriminator for the second image data is 0, there are no residues in the mold cavity.
8. The mold residue detection apparatus of claim 7, further comprising:
and the alarm module is connected with the judging module and used for sending alarm information indicating that the mold cavity contains residues if it is determined that the mold cavity contains residues.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010942710.1A CN111986198B (en) | 2020-09-09 | 2020-09-09 | Mold residue detection method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010942710.1A CN111986198B (en) | 2020-09-09 | 2020-09-09 | Mold residue detection method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111986198A CN111986198A (en) | 2020-11-24 |
CN111986198B true CN111986198B (en) | 2023-04-18 |
Family
ID=73449729
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010942710.1A Active CN111986198B (en) | 2020-09-09 | 2020-09-09 | Mold residue detection method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111986198B (en) |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106355590B (en) * | 2016-11-23 | 2023-03-31 | 河北工业大学 | Mold residue visual detection method and device based on image difference making |
EP3486675B1 (en) * | 2017-11-21 | 2020-02-19 | Siemens Healthcare GmbH | Automatic failure detection in medical devices |
CN110516575A (en) * | 2019-08-19 | 2019-11-29 | 上海交通大学 | GAN based on residual error domain richness model generates picture detection method and system |
CN111340791A (en) * | 2020-03-02 | 2020-06-26 | 浙江浙能技术研究院有限公司 | Photovoltaic module unsupervised defect detection method based on GAN improved algorithm |
2020-09-09: CN application CN202010942710.1A, patent CN111986198B (en), status Active
Non-Patent Citations (1)
Title |
---|
Yu Zhaohui et al., "Light Compensation," in Visual C++ Digital Image Processing and Engineering Application Practice, Beijing: China Railway Publishing House, 2012, p. 436. *
Also Published As
Publication number | Publication date |
---|---|
CN111986198A (en) | 2020-11-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108109137A (en) | The Machine Vision Inspecting System and method of vehicle part | |
CN107804514B (en) | Toothbrush sorting method based on image recognition | |
CN108802041B (en) | Method for rapidly changing small sample set of screen detection | |
WO2018192662A1 (en) | Defect classification in an image or printed output | |
CN108491892A (en) | fruit sorting system based on machine vision | |
CN106651966B (en) | Picture color identification method and system | |
CN117095005A (en) | Plastic master batch quality inspection method and system based on machine vision | |
CN111062938A (en) | Plate expansion plug detection system and method based on machine learning | |
CN116563293B (en) | Photovoltaic carrier production quality detection method and system based on machine vision | |
CN116337887A (en) | Method and system for detecting defects on upper surface of casting cylinder body | |
CN117252926B (en) | Mobile phone shell auxiliary material intelligent assembly control system based on visual positioning | |
CN113592813B (en) | New energy battery welding defect detection method based on deep learning semantic segmentation | |
CN113012228B (en) | Workpiece positioning system and workpiece positioning method based on deep learning | |
CN117237340B (en) | Method and system for detecting appearance of mobile phone shell based on artificial intelligence | |
CN111986198B (en) | Mold residue detection method and device | |
CN112750113B (en) | Glass bottle defect detection method and device based on deep learning and linear detection | |
CN107121063A (en) | The method for detecting workpiece | |
CN116106319A (en) | Automatic detection method and system for defects of synthetic leather | |
CN112989881A (en) | Unsupervised migratable 3D visual object grabbing method | |
CN111626339B (en) | Abnormal detection method for mold cavity of injection molding machine with light shadow and jitter influence resistance | |
CN113052260A (en) | Transformer substation foreign matter identification method and system based on image registration and target detection | |
CN117132600B (en) | Injection molding product quality detection system and method based on image | |
CN117649564B (en) | Aircraft cabin assembly deviation recognition device and quantitative evaluation method | |
CN110827281A (en) | Camera module optical center detection method | |
CN110196152A (en) | The method for diagnosing faults and system of large-scale landscape lamp group based on machine vision |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||