CN112760756B - Textile process self-adaptive cotton cleaning system based on artificial intelligence - Google Patents

Textile process self-adaptive cotton cleaning system based on artificial intelligence

Info

Publication number
CN112760756B
CN112760756B
Authority
CN
China
Prior art keywords
matrix
cotton
sound
dimensional
sound signal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011482523.6A
Other languages
Chinese (zh)
Other versions
CN112760756A (en)
Inventor
朱丹
张平
张霄蝶
陈童
冯培培
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
HENAN PROVINCE PRODUCT QUALITY SUPERVISION AND INSPECTION CENTER
Original Assignee
HENAN PROVINCE PRODUCT QUALITY SUPERVISION AND INSPECTION CENTER
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by HENAN PROVINCE PRODUCT QUALITY SUPERVISION AND INSPECTION CENTER
Priority to CN202011482523.6A
Publication of CN112760756A
Application granted
Publication of CN112760756B
Legal status: Active
Anticipated expiration

Classifications

    • DTEXTILES; PAPER
    • D01NATURAL OR MAN-MADE THREADS OR FIBRES; SPINNING
    • D01GPRELIMINARY TREATMENT OF FIBRES, e.g. FOR SPINNING
    • D01G9/00Opening or cleaning fibres, e.g. scutching cotton
    • D01G9/14Details of machines or apparatus

Abstract

The invention relates to the technical field of computer vision, and in particular to an artificial-intelligence-based textile process self-adaptive cotton cleaning system comprising a data acquisition module, an image analysis module, a sound signal detection module, an openness estimation module and an impurity removal parameter adjustment module. It solves the problems of the existing scutcher, in which workers adjust the impurity removal factors mainly according to experience, so that the impurity removal effect is poor and resources are wasted. The invention obtains the optimal impurity removal parameters of the scutcher by means of a sound signal three-dimensional matrix derived from the edge and texture characteristics of the cotton, thereby ensuring the cotton processing quality; it realizes intelligent control of cotton cleaning without manual adjustment of the impurity removal parameters, which further guarantees product quality, improves textile efficiency and offers high reliability; and the cotton cleaning system is realized with neural networks, providing an excellent impurity removal effect at low cost over a wide application range.

Description

Textile process self-adaptive cotton cleaning system based on artificial intelligence
Technical Field
The invention relates to the technical field of computer vision, in particular to a textile process self-adaptive cotton cleaning system based on artificial intelligence.
Background
The opening and picking process is the first process of spinning; its task and purpose are to open the raw cotton and remove impurities. Because cotton contains impurities such as cotton seeds and broken leaves, these impurities generally need to be removed before carding; if they are not removed, the carding equipment is easily damaged.
At present, the prior art mostly uses a scutcher for impurity removal. Its impurity removal effect is influenced by factors such as the beater speed, the dust bar installation angle and the spacing between the beater and the dust bars, and these factors are usually adjusted by workers according to experience. This approach makes it difficult to remove the impurities thoroughly, so the impurity removal effect is relatively poor, which affects the production quality of the product.
Disclosure of Invention
The invention provides an artificial intelligence-based textile process self-adaptive scutching system, and solves the technical problems that in the existing scutcher, the impurity removal parameters are adjusted by workers according to experience, the impurity removal effect is poor, and resources are wasted.
In order to solve the above technical problems, the invention provides an artificial-intelligence-based textile process self-adaptive cotton cleaning system, which comprises a data acquisition module, an image analysis module, a sound signal detection module, an openness estimation module and an impurity removal parameter adjustment module, wherein the image analysis module, the sound signal detection module, the openness estimation module and the impurity removal parameter adjustment module are sequentially connected;
the image analysis module is used for constructing an edge point proportion matrix and a texture entropy matrix according to the edge point characteristics and the texture characteristics of the collected left and right raw cotton images in a preset window; acquiring a complexity weight matrix by using the edge point proportion matrix and the texture entropy matrix; the left and right raw cotton images are input into a three-dimensional reconstruction network to obtain a cotton three-dimensional matrix;
the sound signal detection module is used for acquiring the distance between each unknown point and a plurality of nearest neighbor fixed points in the cotton three-dimensional matrix and the number of cotton points, acquiring a distance coefficient sequence according to the distance, acquiring an obstruction coefficient sequence according to the number of cotton points and the complexity weight matrix, acquiring a sound signal value of each unknown point according to the distance coefficient sequence, the obstruction coefficient sequence and acquired sound data, and constructing a sound signal three-dimensional labeling matrix according to the sound signal value;
the openness estimation module is used for training a sound matrix to construct a network by utilizing the sound signal three-dimensional labeling matrix, acquiring a sound signal three-dimensional matrix, and obtaining an openness quantization index according to the sound signal three-dimensional matrix based on a first neural network;
and the impurity removal parameter adjustment module is used for obtaining the optimal impurity removal parameters according to the collected raw cotton weight, the openness quantization index and the impurity removal parameter sequence based on a second neural network, and adjusting the scutcher.
Further, obtaining the openness quantization index from the sound signal three-dimensional matrix based on the first neural network specifically comprises:
inputting the sound signal three-dimensional matrix into a first neural network, and simultaneously obtaining a feature vector of the raw cotton by using the weight of the raw cotton;
obtaining a characteristic similarity sequence according to the raw cotton characteristic vector and a stored raw cotton standard characteristic vector;
and obtaining the openness quantization index according to the feature similarity sequence.
Further, the data acquisition module is used for deploying a data acquisition device and acquiring raw cotton data through the data acquisition device;
the raw cotton data includes the left raw cotton image, the right raw cotton image, the sound data, and the raw cotton weight.
Further, the unknown points include all points of the cotton three-dimensional matrix except the fixed points.
Further, the sound data includes sound intensity data and sound frequency data;
the sound signal values comprise sound intensity signal values and sound frequency signal values;
the sound signal three-dimensional labeling matrix comprises a sound intensity three-dimensional labeling matrix and a sound frequency three-dimensional labeling matrix;
the sound signal three-dimensional matrix comprises a sound intensity three-dimensional matrix and a sound frequency three-dimensional matrix.
Further, the impurity removal parameter sequence comprises a dust bar installation angle parameter sequence and a dust bar-beater spacing parameter sequence.
Further, the optimal impurity removal parameter is an impurity removal parameter corresponding to the minimum picking weight output by the second neural network.
Further, the edge point proportion matrix comprises a left edge point proportion matrix and a right edge point proportion matrix;
the texture entropy matrices include a left texture entropy matrix and a right texture entropy matrix.
Further, the obtaining the complexity weight matrix specifically includes:
inputting the left edge point proportion matrix and the right edge point proportion matrix into a first fusion model to obtain an edge point proportion fusion matrix;
inputting the left texture entropy matrix and the right texture entropy matrix into a second fusion model to obtain a texture entropy fusion matrix;
and constructing a complexity weight matrix according to the edge point proportion fusion matrix and the texture entropy fusion matrix.
Further, the three-dimensional reconstruction network employs a first two-dimensional convolutional encoder-first three-dimensional convolutional decoder infrastructure; the first neural network is a twin network.
The invention provides an artificial-intelligence-based textile process self-adaptive cotton cleaning system comprising a data acquisition module, an image analysis module, a sound signal detection module, an openness estimation module and an impurity removal parameter adjustment module. It solves the problems of the existing scutcher, in which the impurity removal factors are adjusted mainly by workers according to experience, the impurity removal effect is poor and resources are wasted. The invention analyzes the edge and texture characteristics of the cotton to obtain a sound signal three-dimensional matrix and uses this matrix to adaptively adjust the impurity removal parameters of the scutcher, which reduces the demand for manual labor, realizes intelligent control of cotton cleaning, improves spinning efficiency and guarantees the processing quality of the cotton. The cotton cleaning system is practical and highly reliable: in practical application, the optimal impurity removal parameters can be obtained from the raw cotton images and the neural networks alone, which greatly reduces the impurity content of the cotton and improves its cleaning quality, bringing good economic and social benefits; the system is suitable for industrial application at various scales.
Drawings
FIG. 1 is a block diagram of an artificial intelligence-based adaptive textile process scutching system according to an embodiment of the present invention;
FIG. 2 is a schematic deployment diagram of a data acquisition device according to an embodiment of the present invention;
fig. 3 is a schematic view of a cotton tuft provided by an embodiment of the present invention.
Description of the reference numerals:
a data acquisition module 1; an image analysis module 2; a sound signal detection module 3;
an openness estimation module 4; an impurity removal parameter adjustment module 5;
a first RGB camera 61; a second RGB camera 62;
a first sound sensor 71; a second sound sensor 72; a third sound sensor 73;
a fourth sound sensor 74; a fifth sound sensor 75; a sixth sound sensor 76;
a sound playing device 8; a load cell 9; raw cotton 10; a conveyor belt 11.
Detailed Description
The embodiments of the present invention are described in detail below with reference to the accompanying drawings. The drawings are given solely for the purpose of illustration and are not to be construed as limiting the invention, since many variations are possible without departing from its spirit and scope.
Aiming at the problems that, in the existing impurity removal by the scutcher, the impurity removal factors are mainly adjusted by workers according to experience, the impurity removal effect is poor and resources are wasted, the embodiment of the invention provides an artificial-intelligence-based textile process self-adaptive cotton cleaning system, which comprises a data acquisition module 1 and an image analysis module 2, a sound signal detection module 3, an openness estimation module 4 and an impurity removal parameter adjustment module 5 that are connected to the data acquisition module 1 in sequence;
the data acquisition module 1 needs to deploy a data acquisition device in advance, and acquires raw cotton data through the data acquisition device;
the data acquisition device comprises an RGB camera, a sound sensor, a sound playing device and a weighing sensor; the raw cotton data comprises a left raw cotton image, a right raw cotton image, sound data and raw cotton weight;
in the embodiment of the present invention, the data acquisition device is disposed as shown in fig. 2, and the RGB camera includes two RGB cameras, i.e., a first RGB camera 61 and a second RGB camera 62, which are respectively disposed on the left and right sides of the conveyor belt to acquire the raw cotton images on the left and right sides; the number of the sound sensors is six, and the sound sensors are uniformly disposed on the left side and the right side of the conveyor belt by using a raw cotton input port of the conveyor belt as a starting point and a set distance, that is, in this embodiment, the sound data at corresponding positions are collected by the first sound sensor 71, the second sound sensor 72, the third sound sensor 73, the fourth sound sensor 74, the fifth sound sensor 75 and the sixth sound sensor 76 shown in fig. 2; the sound playing device 8 is used for playing audio with certain intensity, certain duration and certain frequency, and the sound sensors are all deployed in the audio range played by the sound playing device; the weighing sensor 9 is arranged below the conveyor belt 11 to detect the weight of the raw cotton 10 in real time, and the weighing sensors are arranged at the raw cotton input port and the raw cotton output port in the embodiment; the deployment of the data acquisition devices in this embodiment is only exemplary, and those skilled in the art can adjust the data acquisition devices and the number thereof according to actual situations.
The image analysis module 2 performs edge extraction on the collected left and right raw cotton images respectively to obtain left and right edge images; the Canny edge detection operator is preferably adopted for edge extraction in this embodiment;
in this embodiment, an edge point proportion matrix is constructed according to the left and right edge images, specifically:
first, the image analysis module 2 needs to set a preset window; for example, when the preset window size is set to 32 and the image size is 512 × 512, there are 16 × 16 preset windows in the two-dimensional image, and the cotton three-dimensional matrix correspondingly comprises 16 × 16 voxel units;
then, in this embodiment, edge point features of the left and right edge images in each preset window are respectively obtained, an edge point proportion is obtained according to the edge point features, and an edge point proportion matrix is constructed according to the edge point proportion, wherein the edge point proportion matrix includes a left edge point proportion matrix and a right edge point proportion matrix; in this embodiment, the number of edge points is selected as the edge point feature, and the calculation formula of the edge point proportion in each preset window is as follows:
$$C_i = \frac{N_i}{K \times K} \tag{1-1}$$

wherein C_i represents the edge point proportion of the i-th preset window, N_i represents the number of edge points in the i-th preset window, and K represents the size of the preset window.
In an embodiment of the present invention, the shape of the edge point proportion matrix is [k′, k″, C], where k′ represents the number of preset windows in each row of the edge image on one side, and k″ represents the number of preset windows in each column of the edge image on one side.
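The window scan above can be illustrated with a minimal sketch (Python with OpenCV and NumPy; the Canny thresholds, the random stand-in image and the helper name are illustrative assumptions, not part of the embodiment):

```python
import cv2
import numpy as np

def edge_point_proportion_matrix(image_gray, window_size=32):
    # Canny edge map; the thresholds are assumed values
    edges = cv2.Canny(image_gray, 100, 200)
    h, w = edges.shape
    rows, cols = h // window_size, w // window_size
    C = np.zeros((rows, cols), dtype=np.float32)
    for r in range(rows):
        for c in range(cols):
            win = edges[r * window_size:(r + 1) * window_size,
                        c * window_size:(c + 1) * window_size]
            # equation 1-1: proportion of edge points in the K x K window
            C[r, c] = np.count_nonzero(win) / float(window_size ** 2)
    return C

# a 512 x 512 image with a 32-pixel window yields a 16 x 16 matrix
img = (np.random.rand(512, 512) * 255).astype(np.uint8)  # stand-in image
left_C = edge_point_proportion_matrix(img)
print(left_C.shape)  # (16, 16)
```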
Secondly, the image analysis module 2 respectively extracts texture features of the collected left and right raw cotton images to obtain left and right gray level co-occurrence matrices; this embodiment preferably adopts texture feature analysis based on the gray level co-occurrence matrix to extract the texture features;
in this embodiment, according to the left and right gray level co-occurrence matrices, texture entropies in the preset windows are obtained, and a texture entropy matrix is constructed according to the texture entropies; in this embodiment, the texture entropy matrix includes a left texture entropy matrix and a right texture entropy matrix;
wherein, the calculation formula of the texture entropy is as follows:
$$D = -\sum_{m=1}^{M}\sum_{n=1}^{N} G(m,n)\,\log G(m,n) \tag{1-2}$$

In the formula, D represents the texture entropy in the preset window, M and N represent the length and width of the preset window, and G(m,n) represents the value in the m-th row and n-th column of the gray level co-occurrence matrix of the preset window.
In the embodiment of the present invention, the shape of the texture entropy matrix is [g′, g″, D], where g′ represents the number of preset windows in each row of the raw cotton image on one side, and g″ represents the number of preset windows in each column of the raw cotton image on one side.
It should be noted that entropy is a measure of the information contained in an image, and texture features belong to that information; entropy is also a measure of randomness. In the gray level co-occurrence matrix, when the element distribution is relatively dispersed, the entropy is relatively large, which indicates that the non-uniformity or complexity of the textures in the image is relatively high and their capability of blocking sound is relatively strong.
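For illustration, a per-window GLCM entropy in the sense of equation 1-2 could be computed as sketched below (scikit-image ≥ 0.19 for graycomatrix; the gray-level quantization and the single GLCM offset are assumptions):

```python
import numpy as np
from skimage.feature import graycomatrix

def texture_entropy_matrix(image_gray, window_size=32, levels=32):
    # quantize gray values so each window's co-occurrence matrix stays small
    q = (image_gray.astype(np.int32) * levels // 256).astype(np.uint8)
    h, w = q.shape
    rows, cols = h // window_size, w // window_size
    D = np.zeros((rows, cols), dtype=np.float32)
    for r in range(rows):
        for c in range(cols):
            win = q[r * window_size:(r + 1) * window_size,
                    c * window_size:(c + 1) * window_size]
            # normalized GLCM for one assumed offset (distance 1, angle 0)
            G = graycomatrix(win, distances=[1], angles=[0],
                             levels=levels, normed=True)[:, :, 0, 0]
            nz = G[G > 0]                        # avoid log(0)
            D[r, c] = -np.sum(nz * np.log(nz))   # equation 1-2
    return D

img = (np.random.rand(512, 512) * 255).astype(np.uint8)
print(texture_entropy_matrix(img).shape)  # (16, 16)
```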
The image analysis module 2 obtains a complexity weight matrix by using the edge point proportion matrix and the texture entropy matrix, and specifically comprises:
inputting the left edge point proportion matrix and the right edge point proportion matrix into a first fusion model to obtain an edge point proportion fusion value, and obtaining an edge point proportion fusion matrix according to the edge point proportion fusion value; wherein the first fusion model is:
$$E_C = \mu_C\,\alpha_C\,F'_L + (1-\mu_C\,\alpha_C)\,F'_R \tag{1-3}$$

In the formula, E_C represents the edge point proportion fusion value, μ_C denotes the first unit distance value, α_C represents the number of voxel units between the current voxel unit and the rightmost edge of the same side, F′_L represents the left edge point proportion matrix, and F′_R represents the right edge point proportion matrix.

In this embodiment, the first unit distance value is set as the reciprocal of the sum of the total column number of the left or right edge point proportion matrix and the value 1; for example, when the cotton three-dimensional matrix comprises 16 × 16 voxel units, the total column number of the left or right edge point proportion matrix is 16, and the first unit distance value is $\mu_C = \frac{1}{16+1} = \frac{1}{17}$.
Inputting the left texture entropy matrix and the right texture entropy matrix into a second fusion model to obtain a texture entropy fusion value, and constructing a texture entropy fusion matrix according to the texture entropy fusion value; wherein the second fusion model is:
$$E_D = \mu_D\,\alpha_D\,F''_L + (1-\mu_D\,\alpha_D)\,F''_R \tag{1-4}$$

In the formula, E_D represents the texture entropy fusion value, μ_D denotes the second unit distance value, α_D indicates the number of voxel units between the current voxel unit and the rightmost edge on the same side, F″_L denotes the left texture entropy matrix, and F″_R denotes the right texture entropy matrix.
In this embodiment, the second unit distance value is set as the reciprocal of the sum of the total column number of the left or right texture entropy matrices and the value 1.
And constructing a complexity weight matrix according to the edge point proportion fusion matrix and the texture entropy fusion matrix, wherein each voxel unit in the complexity weight matrix comprises the corresponding edge point proportion fusion value and the corresponding texture entropy fusion value.
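A sketch of the two fusion models (equations 1-3 and 1-4) follows; the column-wise definition of α (counted from the rightmost edge, zero-based) is an assumption made for illustration:

```python
import numpy as np

def fuse_left_right(F_left, F_right):
    rows, cols = F_left.shape
    mu = 1.0 / (cols + 1)                    # unit distance value, e.g. 1/17 for 16 columns
    alpha = np.arange(cols - 1, -1, -1)      # voxel units to the rightmost edge (assumed)
    w = mu * alpha                           # per-column weight of the left-side matrix
    return w * F_left + (1.0 - w) * F_right  # equations 1-3 / 1-4

left_C, right_C = np.random.rand(16, 16), np.random.rand(16, 16)
left_D, right_D = np.random.rand(16, 16), np.random.rand(16, 16)
E_C = fuse_left_right(left_C, right_C)       # edge point proportion fusion matrix
E_D = fuse_left_right(left_D, right_D)       # texture entropy fusion matrix
# complexity weight matrix: each voxel unit carries both fusion values
complexity_W = np.stack([E_C, E_D], axis=-1)  # shape (16, 16, 2)
```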
The image analysis module 2 further inputs the received left and right raw cotton images into a three-dimensional reconstruction network to obtain a cotton three-dimensional matrix, wherein the three-dimensional reconstruction network adopts a first two-dimensional convolutional encoder-first three-dimensional convolutional decoder infrastructure, and the network training process is as follows:
the two-dimensional convolutional encoder performs feature extraction on the input left and right raw cotton images to obtain a feature map, the feature map is subjected to tensor remodeling, namely Reshape operation to obtain a feature three-dimensional matrix, the feature three-dimensional matrix passes through the first three-dimensional convolutional decoder to obtain a cotton three-dimensional matrix, in this embodiment, the shape of the cotton three-dimensional matrix is [ B, L, W, H, I ], wherein B represents the number of samples input by network one-time training, L, W, H represents the length, width and height of a raw cotton three-dimensional space, I represents the cotton probability, the values of the cotton probability are 0 and 1 respectively, when the value of the cotton probability is 1, the substance at the coordinate point is cotton, and when the value of the cotton probability is 0, the substance at the coordinate point is non-cotton; in this embodiment, tag data obtains a cotton three-dimensional tag matrix through a simulator, where the simulator includes OpenCL and 3 Dmax.
The sound signal detection module 3 obtains the distance between each unknown point and a plurality of nearest neighbor fixed points in the three-dimensional cotton matrix and the number of cotton points, where the unknown points include all coordinate points in the three-dimensional cotton matrix except for the fixed points, and the coordinate position of the fixed point is the coordinate position of the sound sensor deployed in the three-dimensional cotton matrix, and in this embodiment, there are six fixed points;
in the embodiment of the invention, four fixed points which are nearest to the unknown point are selected from six fixed points to serve as nearest neighbor fixed points, the distances between the unknown point and the four nearest neighbor fixed points are obtained, and a distance coefficient sequence is obtained according to the distances;
wherein the distance coefficient of each unknown point is:

$$\lambda'_a = \frac{1/d'_a}{\sum_{a=1}^{A} 1/d'_a} \tag{1-5}$$

In the formula, λ′_a represents the distance coefficient between the unknown point and the a-th nearest neighbor fixed point, d′_a represents the distance between the unknown point and the a-th nearest neighbor fixed point, and A represents the total number of the selected nearest neighbor fixed points.
In this embodiment, the number of the cotton points is the number of the coordinate points with the cotton probability of 1;
according to the number of cotton points and the complexity weight matrix, an obstruction coefficient sequence is obtained, wherein the obstruction coefficient is:

$$\lambda''_a = \sum_{j=1}^{J} W_{aj} \tag{1-6}$$

wherein

$$W_{aj} = D'_{aj}\left(\varepsilon'\,E_C^{(aj)} + \varepsilon''\,E_D^{(aj)}\right) \tag{1-7}$$

In the formula, λ″_a denotes the obstruction coefficient between the unknown point and the a-th nearest neighbor fixed point; D′_{aj} denotes the number of cotton points of the j-th voxel unit on the line connecting the unknown point and the a-th nearest neighbor fixed point; E_C^{(aj)} denotes the edge point proportion fusion value contained in the j-th voxel unit on that connecting line; E_D^{(aj)} denotes the texture entropy fusion value contained in the j-th voxel unit on that connecting line; ε′ and ε″ denote the edge mapping factor and the texture mapping factor, respectively; and J denotes the number of voxel units contained on the line connecting the unknown point and the a-th nearest neighbor fixed point.
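The per-line accumulation of equations 1-6 and 1-7 can be sketched by sampling voxel units along the connecting segment (the sampling scheme and the 16³ test volumes are assumptions):

```python
import numpy as np

def obstruction_coefficient(p, q, cotton_counts, E_C, E_D,
                            eps_edge=1.0, eps_tex=1.0):
    """lambda''_a = sum_j D'_aj * (eps' * E_C_j + eps'' * E_D_j) over the
    voxel units crossed by the segment p -> q (reconstructed form)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    n = 2 * int(np.ceil(np.linalg.norm(q - p))) + 1  # dense enough sampling
    lam, visited = 0.0, set()
    for t in np.linspace(0.0, 1.0, n):
        j = tuple(np.floor(p + t * (q - p)).astype(int))
        if j in visited:
            continue                                 # count each voxel unit once
        visited.add(j)
        lam += cotton_counts[j] * (eps_edge * E_C[j] + eps_tex * E_D[j])
    return lam

cnt = np.random.randint(0, 50, size=(16, 16, 16))    # cotton points per voxel unit
Ec, Ed = np.random.rand(16, 16, 16), np.random.rand(16, 16, 16)
lam_obst = obstruction_coefficient([0, 0, 0], [15, 12, 9], cnt, Ec, Ed)
```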
The sound signal detection module 3 receives the distance coefficient sequence, the obstruction coefficient sequence and the sound data collected by the sound sensor; in the present embodiment, the sound data collected by the sound sensor includes sound intensity data and sound frequency data, and the average sound intensity and the average sound frequency within three seconds are used as the sound intensity data and the sound frequency data;
in this embodiment, a sound signal value of each unknown point is obtained according to the distance coefficient sequence, the obstruction coefficient sequence, and the collected sound data, where the sound signal value includes a sound intensity signal value and a sound frequency signal value; meanwhile, in this embodiment, a three-dimensional labeling matrix of the sound signal is constructed according to the sound signal value, and the three-dimensional labeling matrix of the sound signal is sent to the looseness estimation module 4, where the three-dimensional labeling matrix of the sound signal includes a three-dimensional labeling matrix of sound intensity and a three-dimensional labeling matrix of sound frequency;
wherein, the calculation formula of the sound signal value is as follows:

$$S = \sum_{a=1}^{A}\left(\beta'\,\lambda'_a - \beta''\,\lambda''_a\right)T_a \tag{1-8}$$

wherein S represents the sound signal value of an unknown point, T_a represents the sound data collected by the sound sensor at the a-th nearest neighbor fixed point, and β′ and β″ represent the distance factor and the obstruction factor, respectively; it should be noted that when, in formula 1-6, λ″_a = 0, the present embodiment sets β′ and β″ to 1 and 0, respectively.
In this embodiment, the sound intensity signal value and the sound frequency signal value are both calculated by formula 1-8, that is: in formula 1-8, if T_a is the sound intensity data collected by the sound sensor at the a-th nearest neighbor fixed point, then S is the sound intensity signal value of the unknown point; if T_a is the sound frequency data collected by the sound sensor at the a-th nearest neighbor fixed point, then S is the sound frequency signal value of the unknown point.
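A sketch of formula 1-8 as reconstructed above, including the β′ = 1, β″ = 0 fallback when no obstruction exists (the default factor values and sensor readings are assumptions):

```python
import numpy as np

def sound_signal_value(T, lam_dist, lam_obst, beta_d=1.0, beta_o=0.1):
    # S = sum_a (beta' * lambda'_a - beta'' * lambda''_a) * T_a
    lam_obst = np.asarray(lam_obst, dtype=float)
    if np.all(lam_obst == 0):
        beta_d, beta_o = 1.0, 0.0        # embodiment's setting when lambda'' = 0
    w = beta_d * np.asarray(lam_dist) - beta_o * lam_obst
    return float(np.dot(w, T))

# four nearest sensors: intensity readings and the two coefficient sequences
S_intensity = sound_signal_value([62.0, 60.5, 58.0, 57.2],
                                 [0.4, 0.3, 0.2, 0.1],
                                 [0.01, 0.03, 0.02, 0.05])
# the same call with frequency readings yields the sound frequency signal value
```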
Raw cotton with different degrees of openness, that is, different degrees of fluffiness after the opening process, can have different volumes or areas at the same weight, so that the impurity content obtained through image calculation has a large error. In addition, different degrees of openness also make the impurities differently difficult to remove: the lower the openness of the raw cotton, the more compact it is and the stronger the wrapping and winding effect of the cotton fibers on the impurities, so that impurity removal by centrifugal force is more difficult. For the cotton tufts shown in fig. 3, the openness of the right tuft is greater than that of the left tuft, that is, the right tuft is fluffier, and the wrapping and winding effect of the cotton fibers of the left tuft on the impurities is stronger than that of the right tuft.
The openness estimation module 4 trains a sound matrix construction network using the sound signal three-dimensional labeling matrix as the label. In this embodiment, the sound matrix construction network adopts a second two-dimensional convolutional encoder-second three-dimensional convolutional decoder infrastructure: the input of the second two-dimensional convolutional encoder is the left and right raw cotton images and its output is a raw cotton feature map; the raw cotton feature map undergoes a tensor reshaping (Reshape) operation and the second three-dimensional convolutional decoder to obtain the sound signal three-dimensional matrix; and the network is trained with a mean square error loss function. The sound signal three-dimensional matrix comprises a sound intensity three-dimensional matrix and a sound frequency three-dimensional matrix.
The purpose of training the sound matrix construction network in this embodiment is as follows: in a specific implementation, a person skilled in the art can acquire the sound signal three-dimensional matrix simply by feeding the acquired raw cotton images to the trained sound matrix construction network, so that the sound-related data acquisition devices need not be deployed, which reduces the application cost and greatly improves the detection efficiency.
In this embodiment, the sound signal three-dimensional matrix is input into a first neural network to obtain a raw cotton feature vector. The first neural network is a twin network whose two branches each adopt a three-dimensional convolutional encoder-first fully connected network structure; the specific training process of the network is as follows:
the training sets of the two sub-networks of the twin network are as follows: one sub-network is the sound signal three-dimensional matrix corresponding to the raw cotton with low opening degree, and the other sub-network is the sound signal three-dimensional matrix corresponding to the raw cotton with high opening degree;
the three-dimensional convolution encoder performs feature extraction on the input sound signal three-dimensional matrix to obtain sound signal features, the first full-connection network maps the sound signal features into a sound signal feature vector, and feature fusion is performed on the sound signal feature vector, namely: the first full-connection network comprises a plurality of full-connection layers, the sound signal characteristic vector is mapped into one sound signal characteristic vector by the first full-connection layers, then the normalized or standardized weight of the raw cotton collected by the weighing sensor 9 is multiplied by the sound signal characteristic vector to obtain a sound signal fusion characteristic vector, and finally the sound signal fusion characteristic vector is subjected to characteristic fitting by the second full-connection layers to output the raw cotton characteristic vector.
During the training of the twin network, the outputs of the two branches are compared and the contrastive loss is calculated; that is, the twin network is trained with a contrastive loss function.
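A minimal twin-network sketch (PyTorch) with weight-fused feature vectors and a contrastive loss; the layer sizes, the two-channel input (intensity + frequency) and the 16³ matrix size are illustrative assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Branch(nn.Module):
    """Shared branch: three-dimensional convolutional encoder + first fully
    connected network; one instance is applied to both twin inputs."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv3d(2, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.fc1 = nn.Linear(16, 32)  # maps sound features to a feature vector
        self.fc2 = nn.Linear(32, 32)  # fits the weight-fused vector

    def forward(self, x, cotton_weight):
        v = self.fc1(self.enc(x).flatten(1))
        v = v * cotton_weight.view(-1, 1)  # fuse the normalized raw cotton weight
        return self.fc2(v)                 # raw cotton feature vector

def contrastive_loss(z1, z2, y, margin=1.0):
    # y = 1 for same-openness pairs, y = 0 for different-openness pairs
    d = F.pairwise_distance(z1, z2)
    return (y * d.pow(2) + (1 - y) * F.relu(margin - d).pow(2)).mean()

branch = Branch()
low = torch.randn(4, 2, 16, 16, 16)    # matrices for low-openness raw cotton
high = torch.randn(4, 2, 16, 16, 16)   # matrices for high-openness raw cotton
w = torch.rand(4)                      # normalized raw cotton weights
loss = contrastive_loss(branch(low, w), branch(high, w), torch.zeros(4))
```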
In this embodiment, the twin network performs inference on the sound signal three-dimensional matrix corresponding to raw cotton with low openness to obtain a raw cotton feature vector, and on the sound signal three-dimensional matrix corresponding to raw cotton with high openness to obtain a raw cotton standard feature vector, which is stored; it should be noted that, when the trained twin network is applied in practice, the sound signal three-dimensional matrix can be obtained simply by inputting the raw cotton images into the sound matrix construction network;
in this embodiment, the Euclidean distance between the obtained raw cotton feature vector and each stored raw cotton standard feature vector is calculated to obtain a feature similarity sequence, and the openness quantization index is obtained from the feature similarity sequence;
wherein the openness quantization index is:

$$V = \frac{1}{Q}\sum_{q=1}^{Q} U_q \tag{1-9}$$

In the formula, V represents the openness quantization index; Q represents the number of elements of the feature similarity sequence, namely the total number of the raw cotton standard feature vectors; and U_q represents the q-th feature similarity.
In this embodiment, the lower the value of the openness quantization index, the higher the openness, that is, the fluffier the raw cotton; conversely, the higher the value, the lower the openness.
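The similarity sequence and the index of equation 1-9 (the averaging form is the reconstruction assumed above) reduce to a few lines:

```python
import numpy as np

def openness_index(feature_vec, standard_vecs):
    # U_q: Euclidean distance to each stored raw cotton standard feature vector
    U = np.linalg.norm(standard_vecs - feature_vec, axis=1)
    return float(U.mean())  # V = (1/Q) * sum_q U_q; lower V -> fluffier cotton

V = openness_index(np.random.rand(32), np.random.rand(5, 32))
```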
When cotton of the same weight is cleaned, the spacing between the dust bars and the beater, the dust bar installation angle, the openness and so on all influence the impurity removal effect; therefore, this embodiment uses these influencing factors to obtain the optimal impurity removal parameters.
The impurity removal parameter adjustment module 5 inputs the collected raw cotton weight, the openness quantization index and the obtained impurity removal parameter sequence into a second neural network to obtain a cotton picking weight sequence, selects the impurity removal parameters corresponding to the minimum cotton picking weight as the optimal impurity removal parameters, and adjusts the actual scutcher according to the optimal impurity removal parameters, thereby optimizing the impurity removal effect; a person skilled in the art can adjust the input data used to train the second neural network according to the actual operation.
In this embodiment, the impurity removal parameter sequence includes a dust bar installation angle parameter sequence and a dust bar-beater spacing parameter sequence; the optimal impurity removal parameters are the dust bar installation angle parameter and the dust bar-beater spacing parameter corresponding to the minimum picking weight.
The second neural network adopts a second fully connected network, trained in this embodiment with a mean square error loss function. The second fully connected network corresponds to the actual scutcher and can simulate its impurity removal effect under changing impurity removal parameters: adjusting the input data yields the corresponding output data, and hence a cotton picking weight sequence, which provides the data for the subsequent optimization of the impurity removal parameters and realizes self-adaptive adjustment.
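Finally, the adaptive adjustment amounts to a sweep of candidate impurity removal parameters through the trained second fully connected network; the sketch below uses an untrained stand-in MLP and assumed parameter ranges:

```python
import torch
import torch.nn as nn

# stand-in for the trained second fully connected network:
# (raw cotton weight, openness index, angle, spacing) -> predicted picking weight
mlp = nn.Sequential(nn.Linear(4, 64), nn.ReLU(),
                    nn.Linear(64, 64), nn.ReLU(),
                    nn.Linear(64, 1))

raw_weight, openness = 0.85, 0.42            # normalized sensor inputs
angles = torch.linspace(20.0, 40.0, 21)      # dust bar installation angles (deg)
gaps = torch.linspace(8.0, 14.0, 13)         # dust bar-beater spacings (mm)

best_params, best_pred = None, float('inf')
with torch.no_grad():
    for a in angles:
        for g in gaps:
            x = torch.tensor([[raw_weight, openness, a.item(), g.item()]])
            pred = mlp(x).item()             # predicted picking weight
            if pred < best_pred:             # minimum -> optimal parameters
                best_params, best_pred = (a.item(), g.item()), pred
print('optimal impurity removal parameters (angle, spacing):', best_params)
```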
The invention provides an artificial-intelligence-based textile process self-adaptive cotton cleaning system comprising a data acquisition module 1 and an image analysis module 2, a sound signal detection module 3, an openness estimation module 4 and an impurity removal parameter adjustment module 5 connected to the data acquisition module 1 in sequence. It solves the problems of the existing scutcher, in which the impurity removal factors are adjusted by workers according to experience, so that the impurity removal effect is poor and resources are wasted. Through three-dimensional reconstruction and neural networks, the system obtains a sound signal three-dimensional matrix carrying the cotton attributes and further obtains the cotton openness quantization index, achieving an excellent impurity removal effect. Cotton cleaning and impurity removal are carried out with the neural networks trained in this embodiment, realizing intelligent operation without manual adjustment of the impurity removal parameters, which greatly improves the working efficiency of the equipment and reduces the cost. The cotton cleaning system realized by this embodiment can adaptively adjust the scutcher, offers high stability and strong practicability, and has potential application value in the textile field.
The above embodiments are preferred embodiments of the present invention, but the present invention is not limited to them; any change, modification, substitution, combination or simplification that does not depart from the spirit and principle of the present invention should be construed as an equivalent replacement and is intended to be included within the scope of the present invention.

Claims (9)

1. An artificial-intelligence-based textile process self-adaptive cotton cleaning system, comprising a data acquisition module, characterized in that the system further comprises an image analysis module, a sound signal detection module, an openness estimation module and an impurity removal parameter adjustment module which are sequentially connected;
the image analysis module is used for constructing an edge point proportion matrix and a texture entropy matrix according to the edge point characteristics and the texture characteristics of the collected left and right raw cotton images in a preset window; acquiring a complexity weight matrix by using the edge point proportion matrix and the texture entropy matrix; the left and right raw cotton images are input into a three-dimensional reconstruction network to obtain a cotton three-dimensional matrix;
the sound signal detection module is used for acquiring the distance between each unknown point and a plurality of nearest neighbor fixed points in the cotton three-dimensional matrix and the number of cotton points, acquiring a distance coefficient sequence according to the distance, acquiring an obstruction coefficient sequence according to the number of cotton points and the complexity weight matrix, acquiring a sound signal value of each unknown point according to the distance coefficient sequence, the obstruction coefficient sequence and acquired sound data, and constructing a sound signal three-dimensional labeling matrix according to the sound signal value;
the openness estimation module is used for training a sound matrix to construct a network by utilizing the sound signal three-dimensional labeling matrix, acquiring a sound signal three-dimensional matrix, and obtaining an openness quantization index according to the sound signal three-dimensional matrix based on a first neural network;
the impurity removal parameter adjusting module is used for obtaining an optimal impurity removal parameter according to the collected weight of the raw cotton, the opening degree quantization index and the impurity removal parameter sequence based on a second neural network and adjusting the scutcher;
the method comprises the following steps of obtaining a looseness quantization index according to the sound signal three-dimensional matrix based on the first neural network, and specifically:
inputting the sound signal three-dimensional matrix into a first neural network, and simultaneously obtaining a feature vector of the raw cotton by using the weight of the raw cotton;
obtaining a characteristic similarity sequence according to the raw cotton characteristic vector and a stored raw cotton standard characteristic vector;
and obtaining the openness quantization index according to the feature similarity sequence.
2. The artificial intelligence based textile process adaptive scutching system according to claim 1, wherein the data acquisition module is used for deploying a data acquisition device and acquiring raw cotton data through the data acquisition device;
the raw cotton data includes the left raw cotton image, the right raw cotton image, the sound data, and the raw cotton weight.
3. The artificial intelligence based textile process adaptive scutching system of claim 1, wherein: the unknown points include all points in the cotton three-dimensional matrix except for fixed points.
4. The artificial intelligence based textile process adaptive scutching system of claim 1, wherein: the sound data comprises sound intensity data and sound frequency data;
the sound signal values comprise sound intensity signal values and sound frequency signal values;
the sound signal three-dimensional labeling matrix comprises a sound intensity three-dimensional labeling matrix and a sound frequency three-dimensional labeling matrix;
the sound signal three-dimensional matrix comprises a sound intensity three-dimensional matrix and a sound frequency three-dimensional matrix.
5. The artificial intelligence based textile process adaptive scutching system of claim 4, wherein: the impurity removal parameter sequence comprises a dust bar installation angle parameter sequence and a dust bar-beater spacing parameter sequence.
6. The artificial intelligence based textile process adaptive scutching system of claim 1, wherein: and the optimal impurity removal parameter is an impurity removal parameter corresponding to the minimum picking weight output by the second neural network.
7. The artificial intelligence based textile process adaptive scutching system of claim 1, wherein: the edge point proportion matrix comprises a left edge point proportion matrix and a right edge point proportion matrix;
the texture entropy matrices include a left texture entropy matrix and a right texture entropy matrix.
8. The artificial intelligence based textile process adaptive scutching system of claim 7, wherein: the obtaining of the complexity weight matrix specifically includes:
inputting the left edge point proportion matrix and the right edge point proportion matrix into a first fusion model to obtain an edge point proportion fusion matrix;
inputting the left texture entropy matrix and the right texture entropy matrix into a second fusion model to obtain a texture entropy fusion matrix;
and constructing a complexity weight matrix according to the edge point proportion fusion matrix and the texture entropy fusion matrix.
9. An artificial intelligence based adaptive scutching system for textile processes according to claim 6, wherein the three-dimensional reconstruction network employs a first two-dimensional convolutional encoder-first three-dimensional convolutional decoder infrastructure;
the first neural network is a twin network;
the second neural network is a fully connected network.
CN202011482523.6A 2020-12-16 2020-12-16 Textile process self-adaptive cotton cleaning system based on artificial intelligence Active CN112760756B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011482523.6A CN112760756B (en) 2020-12-16 2020-12-16 Textile process self-adaptive cotton cleaning system based on artificial intelligence


Publications (2)

Publication Number Publication Date
CN112760756A (en) 2021-05-07
CN112760756B (en) 2021-11-16

Family

ID=75694949

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011482523.6A Active CN112760756B (en) 2020-12-16 2020-12-16 Textile process self-adaptive cotton cleaning system based on artificial intelligence

Country Status (1)

Country Link
CN (1) CN112760756B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114622308A (en) * 2022-03-14 2022-06-14 王陶 Textile process self-adaptive cotton cleaning system based on artificial intelligence
CN115074871A (en) * 2022-06-29 2022-09-20 海安冠益纺织科技有限公司 Weaving process self-adaptation scutching device based on artificial intelligence

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101381908A (en) * 2007-09-05 2009-03-11 际华三五四二纺织有限公司 Bale-plucking and picking integration electric control cabinet
CN101424645A (en) * 2008-11-20 2009-05-06 上海交通大学 Soldered ball surface defect detection device and method based on machine vision
CN106290378A (en) * 2016-08-23 2017-01-04 东方晶源微电子科技(北京)有限公司 Defect classification method and defect inspecting system
CN110928216A (en) * 2019-11-14 2020-03-27 深圳云天励飞技术有限公司 Artificial intelligence device
CN111519283A (en) * 2020-05-09 2020-08-11 苏州基列德智能制造有限公司 Pre-impurity-removing system and impurity-removing method used before raw cotton processing

Also Published As

Publication number Publication date
CN112760756A (en) 2021-05-07


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant