CN113920116B - Intelligent control method and system for color box facial tissue attaching process based on artificial intelligence - Google Patents
- Publication number
- CN113920116B CN113920116B CN202111516654.6A CN202111516654A CN113920116B CN 113920116 B CN113920116 B CN 113920116B CN 202111516654 A CN202111516654 A CN 202111516654A CN 113920116 B CN113920116 B CN 113920116B
- Authority
- CN
- China
- Prior art keywords
- temperature
- texture
- degree
- thickness degree
- pixel
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Links
- 238000000034 method Methods 0.000 title claims abstract description 64
- 230000001815 facial effect Effects 0.000 title claims abstract description 30
- 230000008569 process Effects 0.000 title claims abstract description 28
- 238000013473 artificial intelligence Methods 0.000 title claims abstract description 23
- 238000009826 distribution Methods 0.000 claims abstract description 82
- 238000013528 artificial neural network Methods 0.000 claims abstract description 50
- 238000004026 adhesive bonding Methods 0.000 claims abstract description 32
- 238000010030 laminating Methods 0.000 claims abstract description 14
- 230000006870 function Effects 0.000 claims description 25
- 238000010586 diagram Methods 0.000 claims description 12
- 239000000203 mixture Substances 0.000 claims description 11
- 238000012549 training Methods 0.000 claims description 10
- 239000011159 matrix material Substances 0.000 claims description 9
- 238000010606 normalization Methods 0.000 claims description 7
- 238000012545 processing Methods 0.000 claims description 5
- 238000004590 computer program Methods 0.000 claims description 4
- 238000006243 chemical reaction Methods 0.000 claims description 2
- 238000001514 detection method Methods 0.000 abstract description 9
- 230000000875 corresponding effect Effects 0.000 description 39
- 239000003292 glue Substances 0.000 description 32
- 238000000576 coating method Methods 0.000 description 25
- 239000011248 coating agent Substances 0.000 description 24
- 239000013598 vector Substances 0.000 description 15
- 238000004519 manufacturing process Methods 0.000 description 8
- 238000003475 lamination Methods 0.000 description 5
- 238000001035 drying Methods 0.000 description 3
- 239000003106 tissue adhesive Substances 0.000 description 3
- 230000005856 abnormality Effects 0.000 description 2
- 238000004364 calculation method Methods 0.000 description 2
- 230000008859 change Effects 0.000 description 2
- 238000013461 design Methods 0.000 description 2
- 239000006185 dispersion Substances 0.000 description 2
- 230000000694 effects Effects 0.000 description 2
- JXASPPWQHFOWPL-UHFFFAOYSA-N Tamarixin Natural products C1=C(O)C(OC)=CC=C1C1=C(OC2C(C(O)C(O)C(CO)O2)O)C(=O)C2=C(O)C=C(O)C=C2O1 JXASPPWQHFOWPL-UHFFFAOYSA-N 0.000 description 1
- 239000000853 adhesive Substances 0.000 description 1
- 230000001070 adhesive effect Effects 0.000 description 1
- 238000013459 approach Methods 0.000 description 1
- 230000009286 beneficial effect Effects 0.000 description 1
- 238000004422 calculation algorithm Methods 0.000 description 1
- 238000010276 construction Methods 0.000 description 1
- 230000001276 controlling effect Effects 0.000 description 1
- 230000002596 correlated effect Effects 0.000 description 1
- 238000011156 evaluation Methods 0.000 description 1
- 238000011478 gradient descent method Methods 0.000 description 1
- 238000010801 machine learning Methods 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 230000000750 progressive effect Effects 0.000 description 1
- 230000001105 regulatory effect Effects 0.000 description 1
- 230000037303 wrinkles Effects 0.000 description 1
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0004—Industrial image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30108—Industrial image inspection
- G06T2207/30124—Fabrics; Textile; Paper
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- General Health & Medical Sciences (AREA)
- General Engineering & Computer Science (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- Artificial Intelligence (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Biomedical Technology (AREA)
- Life Sciences & Earth Sciences (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Health & Medical Sciences (AREA)
- Quality & Reliability (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Analysis (AREA)
Abstract
The invention relates to the technical field of artificial intelligence, in particular to an intelligent control method and system for a color box facial tissue laminating process based on artificial intelligence. An infrared image of the facial tissue before lamination and a brightness image of the laminated paper formed after lamination are acquired; temperature distribution features and a temperature non-uniformity degree are obtained from the infrared image, and texture distribution features and an unevenness degree are obtained from the brightness image. The texture temperature feature of each pixel point, composed of its temperature distribution feature, temperature non-uniformity degree, texture distribution feature and unevenness degree, is input into a neural network to obtain the coating thickness degree of each pixel point, and whether the gluing of the region corresponding to each window is uniform is determined from the thickness degrees of all pixel points. The method no longer depends on the experience of inspection personnel and eliminates the strong subjectivity of manual detection; the neural network learns the hidden complex relationships among the corresponding features, and gluing uniformity is judged from these features.
Description
Technical Field
The invention relates to the technical field of artificial intelligence, in particular to an intelligent control method and system for a color box facial tissue laminating process based on artificial intelligence.
Background
In the production of color boxes, the facial tissue must be attached to corrugated paper or other paper after color printing, glazing and film covering, so that the corrugated paper acquires a certain color, pattern and style.
The facial tissue is bonded to the corrugated paper through a fixed process flow: it is first glued, then dried, and then bonded to the corrugated paper; the bonded sheet is called laminated paper. Uneven gluing, or an improper temperature during drying, makes the surface of the laminated paper uneven and causes problems such as glue failure and false adhesion, so that the produced laminated paper has an uneven surface or even obvious wrinkles and the finished color box is unattractive. To ensure yield, the lamination process therefore has to be monitored manually in real time so that process problems can be found and corrected in time.
In practice, the inventors found that the above prior art has the following disadvantages:
such manual detection depends on the experience of the inspection personnel, is highly subjective, and lacks a uniform evaluation standard.
Disclosure of Invention
In order to solve the technical problems, the invention aims to provide an intelligent control method and system for a color box facial tissue attaching process based on artificial intelligence, and the adopted technical scheme is as follows:
in a first aspect, an embodiment of the present invention provides an intelligent control method for a color box facial tissue attaching process based on artificial intelligence, which includes the following steps: acquiring an infrared image of the face paper before the face paper is attached and a brightness image of the attached paper formed after the face paper is attached; taking each pixel in the infrared image as a first central point, taking the gray average value of all neighborhood pixels of the first central point as a temperature distribution characteristic, and taking the variance of the gray values of all neighborhood pixels as the temperature non-uniformity degree; taking each pixel in the brightness image as a second central point, acquiring a hessian matrix of each neighborhood pixel of the second central point, wherein the eigenvectors of the hessian matrix corresponding to all the neighborhood pixels are the texture distribution characteristics of the second central point; the gray level variance of all the neighborhood pixels of the second central point is the unevenness degree; the temperature distribution characteristic, the temperature unevenness degree, the texture distribution characteristic and the unevenness degree corresponding to each pixel point form texture temperature characteristics, the texture temperature characteristics are input into a neural network to obtain the thickness degree of each pixel point, and the thickness degree distribution diagram of the facial tissue glue coating is formed by the thickness degrees of all the pixel points; and performing normalization processing on the thickness degree distribution diagram, dividing the thickness degree distribution diagram after normalization into a plurality of windows, calculating gray level deviation in each window, and determining whether the gluing of the area corresponding to the corresponding window is uniform or not according to the gray level deviation.
Further, the training set of the neural network comprises a flat region set, whose pixel texture temperature features belong to flat regions, and an uneven region set, whose pixel texture temperature features belong to uneven regions; a flat region is a pixel neighborhood corresponding to a texture temperature feature whose characteristic value is smaller than a preset threshold, the characteristic value being the mean of the unevenness degree and the temperature non-uniformity degree.
Further, the neural network comprises a first loss function for texture temperature features belonging to flat regions and a second loss function for texture temperature features belonging to uneven regions. The first loss function comprises a thickness degree loss between the thickness degree output by the neural network and the expected thickness degree, and a region difference loss formed by the sum of the thickness degree difference losses between the current pixel and the other pixels in the flat region set; the second loss function is a loss between (i) the deviation of the thickness degree output by the neural network from the ideal thickness degree at the same temperature and (ii) the mean value of such deviations.
Further, the area difference loss is a weighted sum of the thickness difference losses, and the weight of the difference loss is a temperature distribution difference between the current pixel and other pixels in the flat area set.
Further, the thickness degree loss is the loss between the thickness degree output by the neural network and an expected thickness degree; the expected thickness degree is the mean of the Gaussian sub-model to which the temperature corresponding to the current pixel belongs, the Gaussian sub-models being those of a Gaussian mixture model fitted on the flat region set.
Further, the step of obtaining the difference loss of thickness degree between the current pixel before and after the outputting and each pixel in the flat region set comprises: obtaining a first deviation of the thickness degree between the current pixel and each pixel in the flat region set based on the thickness degree output by the neural network, obtaining a second deviation of the desired thickness degree between the current pixel and each pixel in the flat region set based on the desired thickness degree, and obtaining the loss between the first deviation and the second deviation as the difference loss.
Further, the step of obtaining the temperature distribution difference between the current pixel and other pixels in the flat region set includes: and acquiring the weight of the Gaussian sub-model and the model thereof to which the current pixel and other pixels belong, and calculating the KL divergence between the Gaussian sub-model and the model weight thereof to which the current pixel belongs and the Gaussian sub-model and the model weight thereof to which other pixels belong, wherein the KL divergence is the weight of the difference loss.
Further, the Gaussian mixture distribution model is a Gaussian mixture model fitted by using the temperature in the flat region set and the probability corresponding to the temperature.
Further, the luminance image is a luminance map after color space conversion of the captured RGB image.
In a second aspect, another embodiment of the present invention provides an intelligent control system for a color box tissue laminating process based on artificial intelligence, which includes a memory, a processor, and a computer program stored in the memory and running on the processor, wherein the processor implements the steps of the method according to any one of the above items when executing the computer program.
The invention has the following beneficial effects:
according to the embodiment of the invention, the infrared image of the surface paper before lamination and the brightness image of the laminated paper formed after laminating the surface paper are obtained; and (3) forming texture temperature characteristics according to the temperature distribution characteristics, the temperature unevenness degree, the texture distribution characteristics and the unevenness degree corresponding to each pixel point obtained from the infrared image and the brightness image, inputting the texture temperature characteristics into a neural network to obtain the thickness degree of each pixel point, and judging whether the gluing of the corresponding region is uniform according to the thickness degree of all the pixel points. The method does not depend on the experience of detection personnel any more, has strong detection subjectivity, learns the hidden complex relation among corresponding characteristics by utilizing a neural network, and judges whether the gluing is uniform or not according to the characteristics.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions and advantages of the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It is obvious that the drawings in the following description are only some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative effort.
Fig. 1 is a flowchart of an intelligent control method for a color box surface paper lamination process based on artificial intelligence according to an embodiment of the present invention.
Detailed Description
In order to further explain the technical means and effects adopted by the present invention to achieve the predetermined objects, the following detailed description, with reference to the accompanying drawings and preferred embodiments, describes an intelligent control method and system for a color box facial tissue pasting process based on artificial intelligence, together with its specific implementation, structure, features and effects. In the following description, references to "one embodiment" or "another embodiment" do not necessarily refer to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
The following specifically describes a specific scheme of an intelligent control method and system for a color box facial tissue laminating process based on artificial intelligence, which is provided by the invention, with reference to the accompanying drawings.
Referring to fig. 1, an intelligent control method for a color box facial tissue attaching process based on artificial intelligence is shown, and is characterized by comprising the following steps:
step S001, an infrared image of the face paper before the lamination and a luminance image of the laminated paper formed after the lamination of the face paper are acquired.
The infrared image is acquired with an infrared camera imaging the surface of the facial tissue after it has been dried, and describes the temperature of the dried facial tissue.
The brightness image is obtained by converting the collected RGB image of the laminated paper surface into the HSV color space and taking its brightness (V) channel. The RGB image is collected by an RGB camera arranged above the laminated paper and captures the texture information of the laminated paper surface, which is used to obtain the unevenness degree of the surface.
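For illustration, the brightness-channel extraction described above can be sketched with OpenCV as follows; the file name stands in for the RGB camera frame and is not part of the patent.

```python
import cv2

# Read the RGB image captured above the laminated paper
# (the file name is a placeholder for the camera frame).
bgr = cv2.imread("laminated_paper.png")      # OpenCV loads images as BGR
hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)   # convert to HSV color space
brightness = hsv[:, :, 2]                    # V channel = brightness image
```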
Each time an infrared image is collected, the facial tissue within the infrared camera's field of view is conveyed by the production line at a fixed speed to the corrugated paper, bonded to it to form the laminated paper, and then imaged by the RGB camera. The facial tissue in the infrared image and the facial tissue on the laminated paper surface in the RGB camera's field of view are the same sheet; by controlling the time interval between the two acquisitions, images of the same sheet of facial tissue are obtained. Thus each RGB image collected by the RGB camera corresponds to one infrared image, and the same pixel position in the two images represents the same position on the facial tissue.
Step S002, each pixel in the infrared image is taken as a first central point, the mean of the gray values of all neighborhood pixels of the first central point is taken as the temperature distribution feature, and the variance of the gray values of all neighborhood pixels is taken as the temperature non-uniformity degree.
The method for acquiring the temperature distribution feature is as follows: a K × K window is constructed with a pixel point q in the infrared image as its center, and the gray values of all neighborhood pixels in the window form a gray-value vector, which is called the temperature distribution feature of the pixel point q.
The method for acquiring the temperature non-uniformity degree is as follows: the variance of all gray values in the window is called the temperature non-uniformity degree of the pixel point q.
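A minimal NumPy sketch of the two quantities just defined, with the window size K and the edge padding chosen here purely for illustration:

```python
import numpy as np

def temperature_features(ir_gray: np.ndarray, K: int = 5):
    """For each pixel of the infrared image, return the K*K neighborhood
    gray-value vector (temperature distribution feature) and the variance
    of those gray values (temperature non-uniformity degree)."""
    r = K // 2
    padded = np.pad(ir_gray.astype(np.float64), r, mode="edge")
    H, W = ir_gray.shape
    dist_feat = np.empty((H, W, K * K))
    non_uniform = np.empty((H, W))
    for i in range(H):
        for j in range(W):
            win = padded[i:i + K, j:j + K].ravel()
            dist_feat[i, j] = win          # gray-value vector of the window
            non_uniform[i, j] = win.var()  # temperature non-uniformity degree
    return dist_feat, non_uniform
```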
Step S003, each pixel in the brightness image is taken as a second central point, a Hessian matrix of each neighborhood pixel of the second central point is obtained, and the feature vectors of the Hessian matrix corresponding to all the neighborhood pixels are taken as texture distribution features of the second central point; and the gray level variance of all the neighborhood pixels of the second central point is the unevenness degree.
The method for acquiring the texture distribution feature is as follows: a K × K window is constructed with any pixel point p in the brightness image as its center, and the Hessian matrix of the k-th neighborhood pixel in the window is acquired. The Hessian matrix describes the second derivatives of the image at the k-th neighborhood pixel and characterizes the curvature-change information of the gray values around it. The eigenvalues and eigenvectors of this Hessian matrix are then obtained; the eigenvector corresponding to the smaller eigenvalue represents the direction in which the gray-value curvature at the k-th neighborhood pixel changes least, and can be used to represent the texture direction at that pixel. This eigenvector is called the texture feature of the k-th neighborhood pixel. In the same way, the texture features of all pixels in the window are obtained and together express the texture distribution around the pixel point p. These texture features are concatenated into a high-dimensional vector, which is called the texture distribution feature of the pixel point p.
The method for acquiring the unevenness degree is as follows: the variance of the gray values of all neighborhood pixels in the window represents the dispersion of the gray values in the window and is called the unevenness degree of the pixel point p; the larger the value, the more uneven the surface.
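The texture distribution feature and unevenness degree described above might be computed roughly as follows; the finite-difference Hessian, window size and border handling are illustrative choices rather than details taken from the patent:

```python
import numpy as np

def texture_features(lum: np.ndarray, K: int = 5):
    """For each pixel of the brightness image, concatenate the smallest-
    eigenvalue Hessian eigenvectors of its K*K neighbors (texture
    distribution feature) and compute the neighborhood gray variance
    (unevenness degree). Border pixels are left at zero for brevity."""
    lum = lum.astype(np.float64)
    # second derivatives of the image (per-pixel Hessian entries)
    gy, gx = np.gradient(lum)
    gyy, gyx = np.gradient(gy)
    gxy, gxx = np.gradient(gx)
    r = K // 2
    H, W = lum.shape
    tex_feat = np.zeros((H, W, K * K * 2))
    unevenness = np.zeros((H, W))
    for i in range(r, H - r):
        for j in range(r, W - r):
            vecs = []
            for u in range(i - r, i + r + 1):
                for v in range(j - r, j + r + 1):
                    Hm = np.array([[gxx[u, v], gxy[u, v]],
                                   [gyx[u, v], gyy[u, v]]])
                    w, V = np.linalg.eigh(0.5 * (Hm + Hm.T))  # symmetrize
                    vecs.append(V[:, 0])  # eigenvector of smallest eigenvalue
            tex_feat[i, j] = np.concatenate(vecs)
            unevenness[i, j] = lum[i - r:i + r + 1, j - r:j + r + 1].var()
    return tex_feat, unevenness
```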
Step S004, the temperature distribution feature, temperature non-uniformity degree, texture distribution feature and unevenness degree corresponding to each pixel point form a texture temperature feature, which is input into a neural network to obtain the coating thickness degree of each pixel point; the thickness degrees of all pixel points form a thickness degree distribution map of the facial tissue glue coating.
As can be seen from steps S002 and S003, each pixel point corresponds to a group of features comprising a temperature distribution feature, a temperature non-uniformity degree, a texture distribution feature and an unevenness degree. That is, each pixel point corresponds to a texture temperature feature, which describes the temperature distribution and texture distribution of a local region.
The texture temperature feature of each pixel point is obtained from the infrared image and the RGB image and input into the neural network to obtain the coating thickness degree of each pixel point; the thickness degrees of all pixel points form a coating thickness degree distribution map. The thickness degree here is a relative quantity rather than a real physical quantity: the aim of the embodiment of the invention is not to measure the absolute coating thickness but to capture the non-uniformity of the gluing, because gluing non-uniformity is an important cause of poor product quality and reflects faults or abnormalities of the gluing equipment or abnormal control parameters of the gluing equipment.
Step S005 is to perform normalization processing on the thickness degree distribution map, divide the thickness degree distribution map after normalization into a plurality of windows, calculate a gray variance in each window, and determine whether the region corresponding to the corresponding window is uniformly coated with glue according to the gray variance.
The method for judging whether the gluing at each position is uniform is as follows: the coating thickness degree distribution map is normalized; based on the normalized map, a 21 × 21 window is established with each pixel point as its center and the variance of the coating thickness of all pixels in the window is calculated. If the variance is greater than a threshold, the gluing at that pixel is non-uniform, so all positions where the gluing is non-uniform can be obtained. Preferably, the threshold is set to 0.1 in the embodiment of the present invention.
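A small sketch of this normalization and windowed-variance check (window 21 and threshold 0.1 as stated above; the min–max normalization is an assumption):

```python
import numpy as np

def non_uniform_mask(thickness_map: np.ndarray, win: int = 21, thr: float = 0.1):
    """Normalize the thickness-degree map to [0, 1], then flag every pixel
    whose 21x21 neighborhood has a thickness variance above the threshold."""
    t = thickness_map.astype(np.float64)
    t = (t - t.min()) / (t.max() - t.min() + 1e-12)   # normalization
    r = win // 2
    padded = np.pad(t, r, mode="edge")
    mask = np.zeros_like(t, dtype=bool)
    for i in range(t.shape[0]):
        for j in range(t.shape[1]):
            mask[i, j] = padded[i:i + win, j:j + win].var() > thr
    return mask   # True where the gluing is judged non-uniform
```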
The result is presented to the production-line manager, who overhauls and adjusts the equipment according to the data, for example by adjusting the glue output or the glue temperature. In addition, the drying temperature can be regulated by the control personnel according to the infrared images. The embodiment of the invention thus provides the gluing non-uniformity and the temperature data of each position, assisting the control personnel in production control and improving production-control efficiency to a certain extent.
In summary, in the embodiment of the present invention, an infrared image of the facial tissue before lamination and a brightness image of the laminated paper formed after lamination are acquired; temperature distribution features and a temperature non-uniformity degree are obtained from the infrared image, and texture distribution features and an unevenness degree are obtained from the brightness image. The temperature distribution feature, temperature non-uniformity degree, texture distribution feature and unevenness degree corresponding to each pixel point form a texture temperature feature, which is input into a neural network to obtain the coating thickness degree of each pixel point; the thickness degrees of all pixel points form a thickness degree distribution map of the facial tissue glue coating, and whether the corresponding region is uniformly glued is judged from this map. The method no longer depends on the experience of inspection personnel and eliminates the strong subjectivity of manual detection; the neural network learns the hidden complex relationships among the corresponding features, and gluing uniformity is judged from these features.
Preferably, the neural network in the embodiment of the present invention is a fully connected neural network with 6 layers. Its input is a texture temperature feature and its output is the coating thickness degree, representing the amount of glue at a certain position; that is, one texture temperature feature corresponds to one coating thickness degree. The training set of the neural network is built from historical production records: each RGB camera acquisition yields a texture image of the laminated paper and a corresponding infrared image, the infrared image representing the temperature distribution of the dried facial tissue attached to the paper surface in the texture image, so a number of paired texture images and infrared images are obtained from historical production. Using the same method as steps S001–S003, the texture temperature features of all pixel points are obtained from each pair of images, so the image pairs yield a large number of texture temperature features, which form a set. The neural network is trained on this set, so that the relationship among the temperature distribution, the unevenness degree and the coating thickness degree in the laminating process is learned by means of machine learning, and the gluing non-uniformity is then obtained in real time by using this relationship. The training set of the neural network in the embodiment of the invention comprises a flat region set, whose pixel texture temperature features belong to flat regions, and an uneven region set, whose pixel texture temperature features belong to uneven regions; a flat region is a pixel neighborhood corresponding to a texture temperature feature whose characteristic value is smaller than a preset threshold, the characteristic value being the mean of the unevenness degree and the temperature non-uniformity degree.
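A possible PyTorch rendering of such a 6-layer fully connected network; the hidden widths and the input dimension are placeholders, since the text only fixes the depth and the single thickness-degree output:

```python
import torch.nn as nn

def build_thickness_net(in_dim: int, hidden: int = 128) -> nn.Sequential:
    """Six fully connected layers mapping one texture temperature feature
    to one coating thickness degree; widths are illustrative."""
    layers = []
    dims = [in_dim] + [hidden] * 5
    for d_in, d_out in zip(dims[:-1], dims[1:]):
        layers += [nn.Linear(d_in, d_out), nn.ReLU()]
    layers += [nn.Linear(hidden, 1)]   # 6th layer: coating thickness degree
    return nn.Sequential(*layers)
```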
Preferably, the loss function of the neural network comprises a first loss function for the texture temperature features belonging to flat regions and a second loss function for the texture temperature features belonging to uneven regions, and the neural network is trained jointly with the two loss functions. The first loss function comprises a thickness degree loss between the thickness degree output by the neural network and the desired thickness degree, and a region difference loss formed by the sum of the thickness degree difference losses between the current pixel and each other pixel in the flat region set; the second loss function is a loss between the deviation of the output thickness degree from the ideal thickness degree at the same temperature and the mean value of such deviations. The design concept of the first loss function is as follows:
(1) for a texture temperature feature, acquiring the unevenness degree and the temperature unevenness degree contained in the texture temperature feature, and taking the average value of the unevenness degree and the temperature unevenness degree as the characteristic value of the texture temperature feature; acquiring texture temperature features with the characteristic value smaller than a preset threshold, and recording a set of texture temperature features smaller than the preset threshold as a flat area set S1, wherein the value of the preset threshold is 0.2 in the embodiment of the invention;
The texture temperature features in the flat region set S1 have relatively small unevenness and temperature non-uniformity degrees. Since the texture temperature features are calculated from the neighborhood of the central pixel in steps S002 and S003, each texture temperature feature in the flat region set S1 represents a flat region whose temperature distribution was uniform during drying. The coating thickness degree corresponding to such a texture temperature feature is related only to the temperature; that is, on the flat region set S1 the neural network only needs to learn the mapping from temperature to coating thickness degree. It should be noted that "the coating thickness is related only to the temperature" means: once the temperature is given, there is a coating thickness degree under which the surface of the laminated paper becomes flat;
Each texture temperature feature outside the flat region set S1, by contrast, represents a region that is uneven or whose temperature distribution is non-uniform; the coating thickness degree corresponding to such a feature is related not only to the temperature but also to the unevenness degree or the temperature non-uniformity degree. That is, on the data outside the flat region set S1 the neural network additionally needs to learn the relationship between the temperature non-uniformity degree and the coating thickness degree;
therefore, at the temperature included in the texture temperature feature in the flat region set S1, the thickness of the glue applied is correspondingly small, so that the surface of the bonded paper is flat. The temperature included in the texture temperature feature refers to the mean value of the numerical values of each dimension of the temperature distribution feature in the texture temperature feature, and represents the mean value of the gray values in the neighborhood of the central pixel point on an infrared image. That is, each texture temperature feature in the set of flattened regions corresponds to a temperature magnitude.
(2) For the temperatures of all texture temperature features in the flat region set S1, a temperature histogram is obtained by statistics, representing the probability of each temperature. The greater the probability, the more likely it is that at the corresponding temperature there exists a coating thickness degree that makes the surface of the laminated paper flat; the smaller the probability, or the closer it is to 0, the more uncertain it is which coating thickness degree at the corresponding temperature makes the surface flat. In the embodiment of the invention, the temperatures and their corresponding probabilities are taken as sample data, and the EM algorithm is used to fit a one-dimensional Gaussian mixture model describing the histogram, the number of Gaussian sub-models in the mixture being set to L = 20. The temperature histogram contains many temperature values, and it is assumed that L of these values have corresponding coating thickness degrees that make the laminated paper flat. Since the number of temperature values in the histogram is clearly larger than L, some temperatures are offsets from the true temperature caused by error and noise. For example, suppose there are 300 temperatures in the flat region set S1 and all 300 values should be 15, i.e. the probability of the temperature value 15 would be 1; under the influence of noise or error, however, some of these temperatures are no longer 15 — say 100 of them become 13 and 50 become 10 — i.e. the temperature is shifted by noise and error. Describing the temperature histogram with a Gaussian mixture model therefore suppresses the interference of noise and facilitates subsequent calculation.
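The Gaussian-mixture fit can be sketched with scikit-learn as below; fitting on the raw temperature samples of S1 is the sample-level counterpart of fitting the histogram with EM, and the covariance type and iteration count are illustrative choices:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_temperature_gmm(temps: np.ndarray, L: int = 20) -> GaussianMixture:
    """temps: 1-D array of the temperatures of all texture temperature
    features in the flat region set S1. Returns the fitted L-component GMM."""
    gmm = GaussianMixture(n_components=L, covariance_type="full", max_iter=200)
    gmm.fit(temps.reshape(-1, 1))
    # gmm.means_, gmm.covariances_, gmm.weights_ give the L sub-models
    return gmm
```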
(3) Take any texture temperature feature p in the flat region set S1 and denote its corresponding temperature by T_p. T_p is a value of the argument of the Gaussian mixture model (i.e. of the L Gaussian sub-models) fitted in (2), since the Gaussian mixture model describes the probability corresponding to each temperature value.
The texture temperature feature p is input into the neural network and the output is recorded as H_p, which indicates the coating thickness degree corresponding to p; that is, at the given temperature T_p, the coating thickness degree that makes the surface of the laminated paper flat can be considered to be H_p. The label of H_p is not known in advance, but it is known that the magnitude of H_p is determined by the Gaussian sub-model to which T_p belongs, and that the distribution of the coating thickness degrees obtained when the texture temperature feature p takes different values should be determined by the relative positions of the L sub-Gaussian models. Therefore, a first loss function is designed to supervise the training of H_p; its terms are described below.
wherein,to a desired thickness,The thickness degree of the output of the neural network,The weight lost for the difference.Shows the thickness loss before and after the output of the neural network,The difference of thickness degree between the current pixel output by the neural network and other pixels in the flat region set,Representing the difference between the current pixel before output and the desired thickness between the other pixels in the set of flat regions, respectively.
The desired thickness H*_p is determined by the mean of the Gaussian sub-model to which the temperature T_p of the texture temperature feature p belongs. Since the temperature in the region represented by p is uniformly distributed and the laminated paper there is flat, the temperature is regarded as the only condition determining the coating thickness degree, so the embodiment of the invention expects H*_p to be determined by the temperature T_p corresponding to p. H*_p is obtained as follows: from step (2), the means and variances of the L Gaussian sub-models of the mixture model and the weight of each sub-model are known. The temperature T_p is input to each Gaussian sub-model, and the output of each sub-model is multiplied by that sub-model's weight; the results represent the probability that T_p belongs to each Gaussian sub-model. The sub-model with the maximum probability is the Gaussian sub-model to which the temperature T_p belongs. The means of all Gaussian sub-models are then normalized, and H*_p is the normalized mean of the sub-model to which T_p belongs.
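Written compactly in the notation introduced above (π_k, μ_k, σ_k² denote the weight, mean and variance of the k-th Gaussian sub-model; the symbols are chosen here and are not taken from the patent figures):

```latex
% Selection of the desired thickness for p. The tilde denotes the
% sub-model means after normalization; the exact normalization is
% not reproduced here and is left abstract.
k^{*}(p) \;=\; \arg\max_{1\le k\le L}\;
   \pi_{k}\,\mathcal{N}\!\left(T_{p};\,\mu_{k},\sigma_{k}^{2}\right),
\qquad
H^{*}_{p} \;=\; \tilde{\mu}_{k^{*}(p)} .
```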
Since the invention expects the output thickness degree H_p to be determined by the Gaussian sub-model to which the temperature T_p of p belongs, the loss between H_p and H*_p should be as small as possible.
In addition, the embodiment of the invention expects that the coating thickness degree H_p of the texture temperature feature p is determined not only by the Gaussian sub-model of its own temperature T_p, but also that its size relative to the coating thickness degrees of the other texture temperature features in the flat region set is determined by the relative distribution of the L Gaussian sub-models; this ensures the accuracy of the neural network training. Specifically, another texture temperature feature q in the flat region set is taken and input into the neural network, and the result H_q represents the coating thickness degree corresponding to q. The embodiment of the invention requires that the difference H_p − H_q be determined by the sub-Gaussian models to which the temperatures of the two texture temperature features belong: H_p − H_q is the difference in the coating thickness degrees output by the neural network, H*_p − H*_q expresses the difference in the Gaussian sub-model distributions, and the invention requires the former difference to be determined by the latter, so the loss between (H_p − H_q) and (H*_p − H*_q) is expected to be as small as possible.
When all texture temperature features in the flat region set are considered, the difference losses are summed over all of them, so that the relative relationship between the coating thickness degree of the texture temperature feature p and the coating thickness degrees of all the other texture temperature features in the flat region set is determined by the sub-Gaussian models of their temperatures.
Here w(p,q) is a weight, also called an attention coefficient, calculated as follows. The Gaussian sub-model to which the temperature of p belongs is denoted G_p, with model weight π_p; the Gaussian sub-model to which the temperature of q belongs is denoted G_q, with model weight π_q. Then w(p,q) is given by the KL divergence between the weighted sub-model of p and the weighted sub-model of q. A smaller KL divergence indicates that the two distributions are more similar; the more similar they are, the less the invention wants the difference in coating thickness (H_p − H_q) to be influenced or constrained by the two Gaussian distributions — relaxing this constraint enlarges the value range of the coating thickness degree, increases its diversity and avoids over-fitting. Conversely, the larger the KL divergence, the greater the difference between the two Gaussian distributions and the more necessary it is to let that difference constrain (H_p − H_q), which prevents the obtained coating thickness degrees from being corrupted by noise; when the difference is large, noise or even erroneous data is likely to be introduced between the two Gaussian distributions. Finally, the weights are divided by a normalization coefficient.
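Putting the terms above together, one reading of the first loss function that is consistent with this description and with claims 3–7 is the following sketch; the squared-error form, the way the sub-model weights enter the KL term, and the normalization Z_p are assumptions made here for concreteness:

```latex
% Sketch of the first loss function assembled from the terms described
% above; functional form and normalization are assumptions, not taken
% from the patent figures.
\begin{aligned}
w(p,q) &= \frac{1}{Z_p}\,
   D_{\mathrm{KL}}\!\left(\pi_{p}\,\mathcal{N}(\mu_{p},\sigma_{p}^{2})
   \,\middle\|\, \pi_{q}\,\mathcal{N}(\mu_{q},\sigma_{q}^{2})\right),\\[4pt]
\mathrm{Loss}_{1}(p) &= \left(H_{p}-H^{*}_{p}\right)^{2}
   + \sum_{q\in S1,\,q\neq p} w(p,q)\,
     \left[\left(H_{p}-H_{q}\right)-\left(H^{*}_{p}-H^{*}_{q}\right)\right]^{2}.
\end{aligned}
```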
The first loss function is constructed for the coating thickness degree of each texture temperature feature p in the flat region set, after which the network can be trained. Training with this loss function alone, however, is not accurate, because it is constructed only for the texture temperature features in the flat region set, while the large number of texture temperature features outside that set contain much useful information that the network also needs to learn. The embodiment of the invention therefore constructs a second loss function from these other texture temperature features.
The set of texture temperature features outside the flat region set S1 is called the uneven region set and denoted S2. An arbitrary texture temperature feature m in S2 is input into the neural network, and the output is denoted H_m.
On the uneven region set S2, the neural network needs to learn the relationship between the temperature non-uniformity degree and the coating thickness degree. A second loss function Loss2 is therefore designed; the specific design idea and method are as follows:
(1) Obtain the temperature corresponding to the texture temperature feature m, denoted T_m: the mean of the values of each dimension of the temperature distribution feature in m, which represents the mean temperature of the local region represented by m.
Consider a region whose temperature distribution is uniform with value T_m and which is also flat; the coating thickness this region would require needs to be obtained. To obtain this desired coating thickness, a texture temperature feature reflecting such a region must be constructed, as follows. First, a temperature non-uniformity degree of 0 is generated, together with a temperature distribution feature whose every dimension equals T_m; these two quantities describe a region with uniform temperature distribution. In addition, an unevenness degree and a texture distribution feature are needed, obtained as follows: for every texture temperature feature in S1, its unevenness degree and texture distribution feature are combined into a vector, called a region unevenness vector. Mean-shift clustering is then performed on all the region unevenness vectors obtained from S1, yielding several classes — assume X classes are obtained — where each class is a set of similar region unevenness vectors. The mean of the region unevenness vectors in each class is then computed, giving X mean vectors from the X classes, each of which is regarded as one group of unevenness degree and texture distribution feature.
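A scikit-learn sketch of the mean-shift step just described; the bandwidth is left at its default and is an illustrative choice:

```python
import numpy as np
from sklearn.cluster import MeanShift

def cluster_region_unevenness(region_vectors: np.ndarray) -> np.ndarray:
    """region_vectors: one row per texture temperature feature in S1, formed
    by concatenating its unevenness degree and texture distribution feature.
    Returns the X class means, each a group of (unevenness degree, texture
    distribution feature)."""
    ms = MeanShift()
    ms.fit(region_vectors)
    labels = ms.labels_
    return np.stack([region_vectors[labels == c].mean(axis=0)
                     for c in np.unique(labels)])
```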
Thus one group of temperature non-uniformity degree and temperature distribution feature and X groups of unevenness degree and texture distribution feature are obtained, and X texture temperature features are constructed from these data. Denote the x-th constructed texture temperature feature by m_x; m_x is input into the neural network and the output is recorded as H_{m,x}, which represents the coating thickness degree corresponding to the x-th texture temperature feature constructed from m. The second loss function used when the texture temperature feature m is input into the neural network is then built from these quantities, as described below.
Here H_{m,x} denotes the coating thickness degree corresponding to the x-th texture temperature feature m_x constructed from m. Its meaning is: if the texture temperature feature m were regarded as a flat region with uniform temperature distribution, the corresponding coating thickness degree would be H_{m,x}; in other words, if the temperature distribution of the region represented by m were uniform, a coating thickness of H_{m,x} would guarantee that this region is flat. In reality, however, the texture temperature feature m shows that the region is not flat or that its temperature distribution is not uniform, so the coating thickness H_m obtained for m is necessarily biased from H_{m,x} by a deviation amount H_m − H_{m,x}. Put differently: because the temperature distribution of the region of m is non-uniform, the coating thickness deviates from the ideal coating thickness, which causes the unevenness of the region of m.
The mean of the deviation amounts over the X constructed cases is taken, which suppresses noise and errors.
The invention recognizes that the more uneven the region of m is, or the more non-uniform its temperature, the larger this deviation should be; the temperature non-uniformity degree and the unevenness degree of m express this degree of unevenness. The invention therefore expects the mean deviation to be positively correlated with this unevenness value, and desires the corresponding loss term to be as small as possible.
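One form of the second loss function consistent with the description above and with claim 3 is sketched below; the squared-error terms and the coefficient λ coupling the mean deviation to the unevenness score are assumptions made for concreteness:

```latex
% Sketch of the second loss function. d_{m,x} is the deviation of the
% network output from the ideal thickness of the x-th constructed flat
% feature; rho_m is the unevenness score of m (mean of its temperature
% non-uniformity and unevenness degrees). Form and lambda are assumptions.
\begin{aligned}
d_{m,x} &= H_{m} - H_{m,x}, \qquad
\bar{d}_{m} = \frac{1}{X}\sum_{x=1}^{X} d_{m,x},\\[4pt]
\mathrm{Loss}_{2}(m) &= \frac{1}{X}\sum_{x=1}^{X}
   \left(d_{m,x}-\bar{d}_{m}\right)^{2}
   + \left(\bar{d}_{m}-\lambda\,\rho_{m}\right)^{2}.
\end{aligned}
```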
Two loss functions are thus constructed for the data distributions of the flat region set S1 and the uneven region set S2. During training, texture temperature features from S1 and S2 are fed into their respective loss functions and the network parameters are updated by stochastic gradient descent; this process is repeated until the neural network converges. The converged neural network describes the relationship among the drying temperature, the coating thickness degree and the unevenness degree in the laminating process.
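Schematically, the joint training could look as follows; loss1_fn and loss2_fn stand for implementations of the two loss functions above, and the batching and hyper-parameters are placeholders:

```python
import torch

def train(net, s1_loader, s2_loader, loss1_fn, loss2_fn,
          epochs: int = 100, lr: float = 1e-3):
    """Update the network with SGD on flat-region (S1) and uneven-region (S2)
    texture temperature features until convergence."""
    opt = torch.optim.SGD(net.parameters(), lr=lr)
    for _ in range(epochs):
        for s1_batch, s2_batch in zip(s1_loader, s2_loader):
            opt.zero_grad()
            loss = loss1_fn(net, s1_batch) + loss2_fn(net, s2_batch)
            loss.backward()
            opt.step()
    return net
```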
It should be noted that: the precedence order of the above embodiments of the present invention is only for description, and does not represent the merits of the embodiments. And specific embodiments thereof have been described above. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.
Claims (10)
1. An intelligent control method for a color box facial tissue attaching process based on artificial intelligence is characterized by comprising the following steps:
acquiring an infrared image of the face paper before the face paper is attached and a brightness image of the attached paper formed after the face paper is attached;
taking each pixel in the infrared image as a first central point, taking the gray average value of all neighborhood pixels of the first central point as a temperature distribution characteristic, and taking the variance of the gray values of all neighborhood pixels as the temperature non-uniformity degree;
taking each pixel in the brightness image as a second central point, acquiring a hessian matrix of each neighborhood pixel of the second central point, wherein the eigenvectors of the hessian matrix corresponding to all the neighborhood pixels are the texture distribution characteristics of the second central point; the gray level variance of all the neighborhood pixels of the second central point is the unevenness degree;
the temperature distribution characteristic, the temperature unevenness degree, the texture distribution characteristic and the unevenness degree corresponding to each pixel point form texture temperature characteristics, the texture temperature characteristics are input into a neural network to obtain the thickness degree of each pixel point, and the thickness degree distribution diagram of the coated surface paper is formed by the thickness degrees of all the pixel points;
and performing normalization processing on the thickness degree distribution diagram, dividing the thickness degree distribution diagram after normalization into a plurality of windows, calculating gray level deviation in each window, and determining whether the gluing of the area corresponding to the corresponding window is uniform or not according to the gray level deviation.
2. The intelligent control method for the color box facial tissue attaching process based on the artificial intelligence as claimed in claim 1, wherein the training set of the neural network comprises a flat region set in which the texture temperature features of the pixels belong to a flat region and an uneven region set in which the texture temperature features of the pixels belong to an uneven region, the flat region is a pixel neighborhood corresponding to the texture temperature features with a feature value smaller than a preset threshold, and the feature value is an average value of the unevenness degree and the temperature unevenness degree.
3. The intelligent control method for the color box facial tissue attaching process based on the artificial intelligence is characterized in that the neural network comprises a first loss function aiming at the texture temperature characteristics belonging to the flat area and a second loss function aiming at the texture temperature characteristics belonging to the non-flat area, the first loss function comprises the thickness degree loss before and after the output of the neural network and the area difference loss formed by the sum of the thickness degree difference losses between the current pixel before and after the output and other pixels in the flat area set; the second loss function is thickness degree loss between thickness degree deviation between the thickness degree output by the neural network and ideal thickness degree at the same temperature and thickness degree deviation mean value.
4. The intelligent control method for the color box facial tissue attaching process based on the artificial intelligence as claimed in claim 3, wherein the area difference loss is a weighted sum of the thickness difference losses, and the weight of the thickness difference losses is a temperature distribution difference between a current pixel and other pixels in the flat area set.
5. The intelligent control method for the color box facial tissue laminating process based on the artificial intelligence is characterized in that the thickness degree loss is the loss between the thickness degree of the output of the neural network and the expected thickness degree, the expected thickness degree is the mean value of Gaussian sub models to which the temperature corresponding to the current pixel belongs, and the Gaussian sub models are Gaussian sub models of mixed Gaussian distribution models for obtaining the smooth region set in a fitting mode.
6. The intelligent control method for the color box facial tissue attaching process based on the artificial intelligence as claimed in claim 5, wherein the step of obtaining the difference loss of the thickness degree between the current pixel before and after the outputting and each pixel in the flat region set comprises: obtaining a first deviation of the thickness degree between the current pixel and each pixel in the flat region set based on the thickness degree output by the neural network, obtaining a second deviation of the desired thickness degree between the current pixel and each pixel in the flat region set based on the desired thickness degree, and obtaining the loss between the first deviation and the second deviation as the difference loss.
7. The intelligent control method for the color box facial tissue attaching process based on the artificial intelligence as claimed in claim 5, wherein the step of obtaining the temperature distribution difference between the current pixel and other pixels in the flat area set comprises: and acquiring the weight of the Gaussian sub-model and the model thereof to which the current pixel and other pixels belong, and calculating the KL divergence between the Gaussian sub-model and the model weight thereof to which the current pixel belongs and the Gaussian sub-model and the model weight thereof to which other pixels belong, wherein the KL divergence is the weight of the difference loss.
8. The intelligent control method for the color box facial tissue laminating process based on the artificial intelligence as claimed in claim 5, wherein the Gaussian mixture distribution model is a Gaussian mixture model fitted by using the temperature in the flat region set and the probability corresponding to the temperature.
9. The intelligent control method for the facing paper laminating process of the color boxes based on the artificial intelligence as claimed in claim 1, wherein the brightness image is a brightness image obtained by color space conversion of the collected RGB image.
10. An intelligent control system for a color box facial tissue laminating process based on artificial intelligence, comprising a memory, a processor and a computer program stored in the memory and running on the processor, wherein the steps of the method according to any one of claims 1-9 are realized when the computer program is executed by the processor.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111516654.6A CN113920116B (en) | 2021-12-13 | 2021-12-13 | Intelligent control method and system for color box facial tissue attaching process based on artificial intelligence |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111516654.6A CN113920116B (en) | 2021-12-13 | 2021-12-13 | Intelligent control method and system for color box facial tissue attaching process based on artificial intelligence |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113920116A CN113920116A (en) | 2022-01-11 |
CN113920116B true CN113920116B (en) | 2022-03-15 |
Family
ID=79249017
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111516654.6A Active CN113920116B (en) | 2021-12-13 | 2021-12-13 | Intelligent control method and system for color box facial tissue attaching process based on artificial intelligence |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113920116B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115266777B (en) * | 2022-09-19 | 2022-12-13 | 江苏芸裕金属制品有限公司 | Real-time monitoring method for steel wire rope core conveying belt |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110245663A (en) * | 2019-06-26 | 2019-09-17 | 上海电器科学研究所(集团)有限公司 | One kind knowing method for distinguishing for coil of strip information |
CN110763728A (en) * | 2019-11-06 | 2020-02-07 | 安徽建筑大学 | Fatigue damage assessment method based on metal surface infrared polarization thermal image characteristics |
CN111310722A (en) * | 2020-03-12 | 2020-06-19 | 广东电网有限责任公司广州供电局 | Power equipment image fault identification method based on improved neural network |
CN112606490A (en) * | 2020-12-11 | 2021-04-06 | 岳阳市青方环保科技有限公司 | Corrugated container board automated production control system |
CN112916407A (en) * | 2020-04-29 | 2021-06-08 | 江苏旷博智能技术有限公司 | Method for sorting coal and gangue |
CN113638104A (en) * | 2021-10-14 | 2021-11-12 | 海门市恒昌织带有限公司 | Intelligent yarn cleaning control method and system for bobbin winder |
-
2021
- 2021-12-13 CN CN202111516654.6A patent/CN113920116B/en active Active
Also Published As
Publication number | Publication date |
---|---|
CN113920116A (en) | 2022-01-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109977757B (en) | Multi-modal head posture estimation method based on mixed depth regression network | |
CN108681692B (en) | Method for identifying newly added buildings in remote sensing image based on deep learning | |
CN112181666A (en) | Method, system, equipment and readable storage medium for equipment evaluation and federal learning importance aggregation based on edge intelligence | |
CN109815864B (en) | Facial image age identification method based on transfer learning | |
US11113597B2 (en) | Artificial neural network and method of training an artificial neural network with epigenetic neurogenesis | |
CN113920116B (en) | Intelligent control method and system for color box facial tissue attaching process based on artificial intelligence | |
CN108345843B (en) | Head posture estimation method based on mixed depth regression network | |
CN110232326A (en) | A kind of D object recognition method, device and storage medium | |
CN116740384B (en) | Intelligent control method and system of floor washing machine | |
CN110298394A (en) | A kind of image-recognizing method and relevant apparatus | |
CN109886342A (en) | Model training method and device based on machine learning | |
CN107292272A (en) | A kind of method and system of the recognition of face in the video of real-time Transmission | |
CN110909783A (en) | Blind domain image classification and reconstruction method based on enhanced reconstruction classification network | |
CN110969571A (en) | Method and system for specified self-adaptive illumination migration in camera-crossing scene | |
CN111079790A (en) | Image classification method for constructing class center | |
CN115761240A (en) | Image semantic segmentation method and device for neural network of chaotic back propagation map | |
JPH07113943B2 (en) | Learning method of neural network | |
CN102013020A (en) | Method and system for synthesizing human face image | |
CN113706477B (en) | Defect category identification method, device, equipment and medium | |
CN113313179B (en) | Noise image classification method based on l2p norm robust least square method | |
CN117994635A (en) | Federal element learning image recognition method and system with enhanced noise robustness | |
CN114970660A (en) | Power load clustering method | |
CN111160161B (en) | Self-learning face age estimation method based on noise elimination | |
CN109993728A (en) | A kind of thermal transfer glue deviation automatic testing method and system | |
CN112541397A (en) | Flame detection method based on improved ViBe algorithm and lightweight convolutional network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |