CN116051853A - Automatic water adding dough kneading tank and application method thereof - Google Patents

Info

Publication number
CN116051853A
Authority
CN
China
Prior art keywords
feature
local expansion
feature vectors
vectors
self
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202211379092.XA
Other languages
Chinese (zh)
Inventor
何承云
白璐
郭延成
张永生
李光磊
陈小丹
葛丽敏
赵海露
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Henan Institute of Science and Technology
Original Assignee
Henan Institute of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Henan Institute of Science and Technology filed Critical Henan Institute of Science and Technology
Priority to CN202211379092.XA priority Critical patent/CN116051853A/en
Publication of CN116051853A publication Critical patent/CN116051853A/en
Withdrawn legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G06V10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/443 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
    • G06V10/449 - Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters
    • G06V10/451 - Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters with interaction between the filter responses, e.g. cortical complex cells
    • G06V10/454 - Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
    • A - HUMAN NECESSITIES
    • A21 - BAKING; EDIBLE DOUGHS
    • A21C - MACHINES OR EQUIPMENT FOR MAKING OR PROCESSING DOUGHS; HANDLING BAKED ARTICLES MADE FROM DOUGH
    • A21C1/00 - Mixing or kneading machines for the preparation of dough
    • A21C1/02 - Mixing or kneading machines for the preparation of dough with vertically-mounted tools; Machines for whipping or beating
    • A - HUMAN NECESSITIES
    • A21 - BAKING; EDIBLE DOUGHS
    • A21C - MACHINES OR EQUIPMENT FOR MAKING OR PROCESSING DOUGHS; HANDLING BAKED ARTICLES MADE FROM DOUGH
    • A21C1/00 - Mixing or kneading machines for the preparation of dough
    • A21C1/14 - Structural elements of mixing or kneading machines; Parts; Accessories
    • A21C1/145 - Controlling; Testing; Measuring
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G06V10/765 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects using rules for classification or partitioning the feature space
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 - Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80 - Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806 - Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A - TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A20/00 - Water conservation; Efficient water supply; Efficient water use

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Computing Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Food Science & Technology (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • Biodiversity & Conservation Biology (AREA)
  • Bakery Products And Manufacturing Methods Therefor (AREA)

Abstract

The application relates to the field of intelligent control, and specifically discloses a self-controlled water-adding dough kneading tank and a method of using it. A deep-learning-based artificial-intelligence control algorithm extracts and analyses the implicit features of an image of the dough state, uses them to characterise the state change after stirring, and performs intelligent water-adding control accordingly. Water addition can therefore be adaptively and accurately controlled according to the actual change in dough state, ensuring kneading quality.

Description

Automatic water adding dough kneading tank and application method thereof
Technical Field
The application relates to the field of intelligent control, and more particularly relates to a self-control water adding dough kneading tank and a using method thereof.
Background
Dough mixers are commonly used dough-mixing equipment: flour and water are combined in a certain ratio and stirred. In existing dough mixers, water is added manually and directly through a water-injection pipe. With manual water addition it is difficult to mix the flour and water fully and uniformly, so different parts of the dough end up with different moisture levels.
Secondly, with manual water addition it is difficult to match the amount of water to the amount of flour, so the dough easily ends up too wet or too dry, which affects noodle quality.
In addition, flour types differ, and different types of flour require different amounts of water.
A more intelligent, self-controlled water-adding dough-kneading device is therefore desired.
Disclosure of Invention
The present application has been made in order to solve the above technical problems. The embodiment of the application provides a self-controlled water-adding dough kneading tank and a method of using it, in which an artificial-intelligence control algorithm based on deep learning extracts and analyses the hidden features of the dough-state image, characterises the hidden state change after stirring, and performs intelligent water-adding control accordingly. In this way, water addition can be adaptively and accurately controlled according to the actual change in dough state, ensuring kneading quality.
According to one aspect of the present application, there is provided a self-controlled watering dough pot comprising:
the monitoring module is used for acquiring detection images of the water and the flour after being stirred;
the mixed state feature extraction module is used for enabling the detection image to pass through a first convolution neural network model comprising a depth feature fusion module so as to obtain a multi-scale aggregation feature map;
the local feature expansion module is used for expanding each feature matrix of the multi-scale aggregated feature map along the channel dimension into a feature vector so as to obtain a plurality of local expansion feature vectors;
the global context coding module is used for passing the plurality of local expansion feature vectors through a transformer-based context encoder to obtain a plurality of context semantic local expansion feature vectors;
the feature enhancement module is used for carrying out feature data enhancement on each context semantic local expansion feature vector in the context semantic local expansion feature vectors so as to obtain a plurality of optimized context semantic local expansion feature vectors;
the fusion module is used for fusing the plurality of optimized context semantic local expansion feature vectors to obtain a classification feature vector; and
the control result generation module is used for passing the classification feature vector through a classifier to obtain a classification result, the classification result indicating whether water needs to be added.
In the above self-controlled water-adding dough kneading tank, the mixed state feature extraction module includes: a shallow feature extraction unit for obtaining a shallow feature map from an M-th layer of the first convolutional neural network model, where 1 ≤ M ≤ 6; a deep feature extraction unit for obtaining a deep feature map from the last layer of the first convolutional neural network model; and a depth feature fusion unit for fusing the shallow feature map and the deep feature map through the depth feature fusion module of the first convolutional neural network model to obtain the multi-scale aggregated feature map.
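The patent gives no reference implementation; as an illustrative sketch only, the shallow/deep fusion described above can be mimicked in NumPy by upsampling the deep map to the shallow map's resolution and adding the two. The nearest-neighbour upsample and elementwise addition are one common pyramid-style fusion rule; the patent does not fix the exact operation.

```python
import numpy as np

def fuse_shallow_deep(shallow, deep):
    """Fuse a shallow and a deep feature map into one multi-scale map.

    shallow: (C, H, W) map from an early layer (fine texture detail).
    deep:    (C, h, w) map from the last layer (coarse semantics),
             with H, W integer multiples of h, w.
    Returns a (C, H, W) aggregated map (nearest-neighbour upsample
    plus elementwise addition, an assumed fusion rule).
    """
    C, H, W = shallow.shape
    _, h, w = deep.shape
    # Nearest-neighbour upsampling of the deep map to the shallow resolution.
    up = deep.repeat(H // h, axis=1).repeat(W // w, axis=2)
    return shallow + up

shallow = np.ones((4, 8, 8))
deep = np.full((4, 4, 4), 2.0)
fused = fuse_shallow_deep(shallow, deep)
```

Keeping the shallow map at full resolution is what preserves the texture detail the text says would otherwise blur with depth.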
In the above self-controlled water-adding dough kneading tank, the local feature expansion module is further configured to expand each feature matrix of the multi-scale aggregated feature map along the channel dimension as a row vector or a column vector to obtain the plurality of local expansion feature vectors.
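The channel-wise expansion step has a direct array interpretation. The NumPy sketch below (an illustration, not the patent's code) unfolds each per-channel feature matrix of a (C, H, W) map into a row vector of length H*W:

```python
import numpy as np

def expand_channel_matrices(feature_map):
    """Unfold each per-channel feature matrix of a (C, H, W) map into a
    vector, giving C local expansion feature vectors of length H*W.
    Row-major order is used here; the patent allows row- or
    column-wise expansion."""
    C, H, W = feature_map.shape
    return feature_map.reshape(C, H * W)

fmap = np.arange(2 * 3 * 3).reshape(2, 3, 3)
vectors = expand_channel_matrices(fmap)
```

Column-wise expansion would simply transpose each matrix first (`feature_map.transpose(0, 2, 1).reshape(C, H * W)`).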
In the above self-controlled water-adding dough kneading tank, the global context coding module includes: a query vector construction unit for arranging the plurality of local expansion feature vectors one-dimensionally to obtain a global expansion feature vector; a self-attention unit for calculating the product between the global expansion feature vector and the transpose of each of the plurality of local expansion feature vectors to obtain a plurality of self-attention correlation matrices; a normalization unit for normalizing each of the plurality of self-attention correlation matrices to obtain a plurality of normalized self-attention correlation matrices; an attention calculation unit for passing each of the plurality of normalized self-attention correlation matrices through a Softmax classification function to obtain a plurality of probability values; and an attention application unit for weighting each of the plurality of local expansion feature vectors with the corresponding probability value as a weight to obtain the plurality of context semantic local expansion feature vectors.
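The five units above can be sketched end to end in NumPy. The patent does not fix how each normalized correlation matrix is reduced to the scalar fed to Softmax, so the matrix maximum is used below as an assumption; everything else follows the unit descriptions literally.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def context_encode(local_vecs):
    """Context-style weighting of local expansion feature vectors.

    local_vecs: (C, L) array of C local expansion feature vectors.
    Returns the (C, L) contextually weighted vectors.
    """
    global_vec = local_vecs.reshape(-1)       # one-dimensional arrangement
    scores = []
    for v in local_vecs:
        corr = np.outer(global_vec, v)        # self-attention correlation matrix
        corr = (corr - corr.mean()) / (corr.std() + 1e-8)  # normalization
        scores.append(corr.max())             # assumed scalar reduction
    weights = softmax(np.array(scores))       # probability values
    return weights[:, None] * local_vecs      # apply attention weights

vecs = np.array([[1.0, 2.0], [3.0, 4.0]])
out = context_encode(vecs)
```

Because the weights come from a softmax over all vectors, each output vector is scaled by how strongly it correlates with the global arrangement, which is the "context" effect the module is after.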
In the above self-controlled water-adding dough kneading tank, the feature enhancement module includes: an enhancement factor calculation unit for calculating, for each of the plurality of context semantic local expansion feature vectors, a high-frequency enhanced distillation factor of wavelet-like function family energy aggregation as its weighting weight according to the following formula;
wherein the formula is:
[the closed form of the factor is given only as an image in the source and is not reproduced here]
where w_i denotes the weighting weight, v_i denotes the feature value of each position of the context semantic local expansion feature vector, σ_i(v_i) denotes the variance of the set of all position feature values of that vector, L is the length of each context semantic local expansion feature vector, and log denotes the base-2 logarithm; and a weighted optimization unit for weighting each of the plurality of context semantic local expansion feature vectors by its weighting weight to obtain the plurality of optimized context semantic local expansion feature vectors.
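The exact closed form of the enhancement factor survives only as an image in the source, so it cannot be reproduced. As an illustrative stand-in only, the sketch below builds a per-vector weight from the quantities the text does name, the variance σ_i(v_i), the length L, and a base-2 logarithm; the combination `log2(1 + var/L)` is an assumption, not the patent's formula.

```python
import numpy as np

def enhancement_weights(vectors):
    """Illustrative per-vector enhancement factor built from the named
    quantities (per-vector variance, length L, base-2 log). The true
    factor is given only as an image in the source, so this exact
    form is an assumption.

    vectors: (C, L) context semantic local expansion feature vectors.
    Returns the (C,) weights and the (C, L) weighted vectors.
    """
    C, L = vectors.shape
    var = vectors.var(axis=1)            # sigma_i(v_i): per-vector variance
    weights = np.log2(1.0 + var / L)     # assumed combination, not the patent's
    return weights, weights[:, None] * vectors

vecs = np.array([[1.0, 3.0], [2.0, 2.0]])
w, optimized = enhancement_weights(vecs)
```

Note the qualitative behaviour matches the stated intent: a constant (low-variance, low-frequency) vector gets weight 0, while a high-variance vector is amplified.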
In the above self-controlled water-adding dough kneading tank, the fusion module is further used for fusing the plurality of optimized context semantic local expansion feature vectors according to the following formula to obtain the classification feature vector;
wherein, the formula is:
V_c = Concat[V_1, V_2, ..., V_n]
where V_1, V_2, ..., V_n denote the plurality of optimized context semantic local expansion feature vectors, Concat[·] denotes the concatenation (cascade) function, and V_c denotes the classification feature vector.
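The cascade fusion in the formula above is plain vector concatenation, e.g.:

```python
import numpy as np

# V_c = Concat[V_1, V_2]: cascade two optimized feature vectors
# into a single classification feature vector.
v1 = np.array([1.0, 2.0])
v2 = np.array([3.0, 4.0])
v_c = np.concatenate([v1, v2])
```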
In the above automatic water adding dough kneading tank, the control result generating module is further configured to: processing the classification feature vector using the classifier in the following formula to obtain a classification result;
wherein the formula is: O = softmax{(W_n, B_n) : ... : (W_1, B_1) | X}, where W_1 to W_n are weight matrices, B_1 to B_n are bias vectors, and X is the classification feature vector.
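A minimal NumPy rendering of this layered-softmax classifier follows; the layer shapes and random weights are illustrative only, and the two-class output can be read as no-water / add-water probabilities.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def classify(x, layers):
    """O = softmax{(W_n, B_n) : ... : (W_1, B_1) | X}: apply the
    fully connected (W_i, B_i) layers in turn, then softmax."""
    for W, B in layers:
        x = W @ x + B
    return softmax(x)

rng = np.random.default_rng(0)
x = rng.standard_normal(4)                      # classification feature vector X
layers = [(rng.standard_normal((3, 4)), np.zeros(3)),
          (rng.standard_normal((2, 3)), np.zeros(2))]
probs = classify(x, layers)
```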
According to another aspect of the present application, there is provided a method of using a self-controlled water-adding dough pot, comprising:
acquiring a detection image of the water and the flour after being stirred;
passing the detection image through a first convolution neural network model comprising a depth feature fusion module to obtain a multi-scale aggregation feature map;
expanding each feature matrix of the multi-scale aggregation feature map along the channel dimension into feature vectors to obtain a plurality of local expansion feature vectors;
passing the plurality of local expansion feature vectors through a transformer-based context encoder to obtain a plurality of context semantic local expansion feature vectors;
respectively carrying out feature data enhancement on each context semantic local expansion feature vector in the context semantic local expansion feature vectors to obtain a plurality of optimized context semantic local expansion feature vectors;
fusing the plurality of optimized context semantic local expansion feature vectors to obtain a classification feature vector; and
passing the classification feature vector through a classifier to obtain a classification result, the classification result indicating whether water needs to be added.
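Putting the method steps together, a heavily stubbed NumPy sketch of the whole decision pipeline might look like the following. All weights are random and illustrative; the convolutional network, transformer encoder, and enhancement factor are replaced by simple stand-ins, so this shows only the data flow, not the patent's models.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def needs_water(conv_features, Wc, Bc):
    """End-to-end sketch of the method steps above. `conv_features`
    stands in for the (C, H, W) multi-scale aggregated map the
    convolutional network would produce from the detection image,
    and (Wc, Bc) is a single assumed classifier layer. Returns True
    when the "add water" class wins."""
    C, H, W = conv_features.shape
    local = conv_features.reshape(C, H * W)           # local expansion feature vectors
    weights = softmax(local.mean(axis=1))             # stand-in for context/enhancement weighting
    fused = np.concatenate(weights[:, None] * local)  # cascade fusion -> classification vector
    probs = softmax(Wc @ fused + Bc)                  # two classes: [no water, add water]
    return bool(probs[1] > probs[0])

rng = np.random.default_rng(1)
features = rng.standard_normal((2, 4, 4))
Wc = rng.standard_normal((2, 2 * 16))
decision = needs_water(features, Wc, np.zeros(2))
```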
Compared with the prior art, the self-controlled water-adding dough kneading tank and method of use provided by the present application extract and analyse the hidden features of the dough-state image with a deep-learning-based artificial-intelligence control algorithm, characterise the hidden state change after stirring, and perform intelligent water-adding control accordingly. In this way, water addition can be adaptively and accurately controlled according to the actual change in dough state, ensuring kneading quality.
Drawings
The foregoing and other objects, features and advantages of the present application will become more apparent from the following more particular description of embodiments of the present application, as illustrated in the accompanying drawings. The accompanying drawings are included to provide a further understanding of embodiments of the application and are incorporated in and constitute a part of this specification, illustrate the application and not constitute a limitation to the application. In the drawings, like reference numerals generally refer to like parts or steps.
FIG. 1 illustrates an application scenario diagram of a self-controlled water-adding dough tank according to an embodiment of the present application;
FIG. 2 illustrates a block diagram of a self-controlled water addition dough tank according to an embodiment of the present application;
FIG. 3 illustrates a system architecture diagram of a self-controlled water-adding dough tank according to an embodiment of the present application;
FIG. 4 illustrates a block diagram of a mixing state feature extraction module in a self-controlled water-filled dough tank, according to an embodiment of the present application;
FIG. 5 illustrates a block diagram of a global context encoding module in a self-controlled water-adding dough tank, according to an embodiment of the present application;
FIG. 6 illustrates a block diagram of a feature enhancement module in a self-controlled water addition dough tank, according to an embodiment of the present application;
fig. 7 illustrates a flow chart of a method of using a self-controlled water-adding dough tank according to an embodiment of the present application.
Detailed Description
Hereinafter, example embodiments according to the present application will be described in detail with reference to the accompanying drawings. It should be apparent that the described embodiments are only some of the embodiments of the present application and not all of the embodiments of the present application, and it should be understood that the present application is not limited by the example embodiments described herein.
Scene overview
As mentioned above, dough mixers are commonly used dough-mixing equipment: flour and water are combined in a certain ratio and stirred. In existing dough mixers, water is added manually and directly through a water-injection pipe, making it difficult to mix the flour and water fully and uniformly, so different parts of the dough end up with different moisture levels. Secondly, with manual water addition it is difficult to match the amount of water to the amount of flour, so the dough easily ends up too wet or too dry, affecting noodle quality. In addition, flour types differ, and different types of flour require different amounts of water. A more intelligent, self-controlled water-adding dough-kneading device is therefore desired.
At present, deep learning and neural networks have been widely used in the fields of computer vision, natural language processing, speech signal processing, and the like. In addition, deep learning and neural networks have also shown levels approaching and even exceeding humans in the fields of image classification, object detection, semantic segmentation, text translation, and the like.
In recent years, the development of deep learning and neural networks provides a new solution idea and scheme for automatic control and water adding dough mixing intelligent control.
Specifically, in the technical solution of the present application, it is desirable to analyse the dough state to determine whether water addition is necessary. In particular, the stirring equipment stirs for a preset time, a state image of the dough is then collected, and whether to add water is decided from an analysis of that image. That is, since whether the kneading process needs further water should be judged from the state-change characteristics of the water and flour after stirring, a deep-learning-based artificial-intelligence control algorithm is adopted to extract and analyse the hidden features of the kneading-state image, characterise the hidden state change, and perform intelligent water-adding control accordingly. In this way, water addition can be adaptively and accurately controlled according to the actual change in dough state, ensuring kneading quality.
Specifically, in the technical scheme of the present application, a detection image of the water and flour after stirring is first obtained by a camera. Feature extraction is then performed on the detection image using a convolutional neural network model, which excels at extracting implicit image features. However, as encoding deepens, shallow features of the dough state, such as texture, may be submerged in background information or become blurred. To extract the dough-state features accurately for precise water-addition control, more attention must therefore be paid to the texture information in the shallow features. Yet classification based on shallow features alone would forgo the deep features of the object and leave the classification judgment vulnerable to interference such as image background. For this reason, in the technical scheme of the present application, the detection image is processed with a first convolutional neural network model equipped with a depth feature fusion module, such as a feature pyramid network, to extract the hidden features of the stirred water and flour, i.e., multi-scale hidden feature information of the dough state, thereby obtaining a multi-scale aggregated feature map.
It should be understood that each feature matrix of the multi-scale aggregated feature map along the channel dimension is a local feature: the convolutional neural network model can only extract locally associated features. To enlarge the feature receptive field and make full use of context information for better classification accuracy, in the technical scheme of the present application each feature matrix of the multi-scale aggregated feature map along the channel dimension is further expanded into a feature vector, and a transformer encoder performs long-range context semantic coding. Each feature matrix is thus re-expressed on the basis of global high-dimensional semantic features, which better characterises the essential features of the dough state, yielding a plurality of context semantic local expansion feature vectors.
The plurality of optimized context semantic local expansion feature vectors are then fused to obtain a classification feature vector. In a specific example of the present application, the fusion may be performed by cascading (concatenating) the vectors; the resulting classification feature vector is then classified to obtain a classification result indicating whether water needs to be added.
In particular, since the plurality of context semantic local expansion feature vectors are directly cascaded to obtain the classification feature vector, the transformer-based context encoder can strengthen the contextual semantic relevance among them, but the aggregation of the expressed information remains deficient. That is, because these vectors derive from the respective feature matrices of the multi-scale aggregated feature map along the channel dimension, and the map itself is not aggregated in the channel dimension, it is still desirable to increase the degree of information aggregation among the vectors in order to improve the accuracy of the classification result.
Based on this, for each of the plurality of context semantic local expansion feature vectors, a high-frequency enhanced distillation factor for wavelet function family energy aggregation thereof is calculated separately, expressed as:
[the closed form of the factor is given only as an image in the source and is not reproduced here]
where σ_i(v_i) denotes the variance of the set of feature values v_i ∈ V, v_i is the feature value of each position of the context semantic local expansion feature vector V, and L is the length of V.
Here, the inventors of the present application considered that the information representation of a feature distribution tends to concentrate in its high-frequency components, i.e., information tends to lie on the manifold edge of the high-dimensional manifold. The high-frequency components of the high-dimensional hidden state features can therefore be enhanced, and the low-frequency components constrained, by distilling the collective variance of the feature distribution in the manner of high-frequency enhanced distillation with wavelet-like function family energy aggregation. By weighting the context semantic local expansion feature vectors with these distillation factors before cascading, the basic information in the full-precision representation space is recovered, the degree of information aggregation among the vectors is enhanced, and their expression of the associated information of the multi-scale aggregated feature map along the channel dimension improves, thereby improving the accuracy of the classification result. In this way, water addition can be adaptively and accurately controlled according to the actual change in dough state, ensuring kneading quality.
Based on this, the present application proposes a self-controlled water-adding dough kneading tank comprising: a monitoring module for acquiring a detection image of the water and flour after stirring; a mixed state feature extraction module for passing the detection image through a first convolutional neural network model including a depth feature fusion module to obtain a multi-scale aggregated feature map; a local feature expansion module for expanding each feature matrix of the multi-scale aggregated feature map along the channel dimension into a feature vector to obtain a plurality of local expansion feature vectors; a global context coding module for passing the plurality of local expansion feature vectors through a transformer-based context encoder to obtain a plurality of context semantic local expansion feature vectors; a feature enhancement module for performing feature data enhancement on each of the plurality of context semantic local expansion feature vectors to obtain a plurality of optimized context semantic local expansion feature vectors; a fusion module for fusing the plurality of optimized context semantic local expansion feature vectors to obtain a classification feature vector; and a control result generation module for passing the classification feature vector through a classifier to obtain a classification result indicating whether water needs to be added.
Fig. 1 illustrates an application scenario diagram of a self-controlled water-adding dough kneading tank according to an embodiment of the present application. As shown in fig. 1, in this application scenario, a detection image of the water and flour after stirring is acquired by a camera (e.g., C as illustrated in fig. 1). The image is then input to a server (e.g., S in fig. 1) in which the self-controlled water-addition algorithm of the dough kneading tank is deployed, and the server processes the input image with this algorithm to generate a classification result indicating whether water addition is required.
Having described the basic principles of the present application, various non-limiting embodiments of the present application will now be described in detail with reference to the accompanying drawings.
Exemplary System
Fig. 2 illustrates a block diagram of a self-controlled water-adding dough tank according to an embodiment of the present application. As shown in fig. 2, the self-controlling water-adding dough pot 300 according to the embodiment of the present application includes: a monitoring module 310; a mixed state feature extraction module 320; a local feature expansion module 330; a global context encoding module 340; a feature enhancement module 350; a fusion module 360; and a control result generation module 370.
Wherein the monitoring module 310 is used for acquiring a detection image of the water and flour after stirring; the mixed state feature extraction module 320 is used for passing the detection image through a first convolutional neural network model including a depth feature fusion module to obtain a multi-scale aggregated feature map; the local feature expansion module 330 is used for expanding each feature matrix of the multi-scale aggregated feature map along the channel dimension into a feature vector to obtain a plurality of local expansion feature vectors; the global context coding module 340 is used for passing the plurality of local expansion feature vectors through a transformer-based context encoder to obtain a plurality of context semantic local expansion feature vectors; the feature enhancement module 350 is used for performing feature-level data enhancement on each of the plurality of context semantic local expansion feature vectors to obtain a plurality of optimized context semantic local expansion feature vectors; the fusion module 360 is used for fusing the plurality of optimized context semantic local expansion feature vectors to obtain a classification feature vector; and the control result generation module 370 is used for passing the classification feature vector through a classifier to obtain a classification result indicating whether water needs to be added.
Fig. 3 illustrates a system architecture diagram of a self-controlled water-adding dough tank according to an embodiment of the present application. As shown in fig. 3, in the system architecture of the self-controlled water-adding dough pot 300, firstly, a detection image of water and flour after being stirred is obtained through the monitoring module 310; the mixed state feature extraction module 320 passes the detection image acquired by the monitoring module 310 through a first convolutional neural network model including a depth feature fusion module to obtain a multi-scale aggregated feature map; next, the local feature expansion module 330 expands each feature matrix of the multi-scale aggregated feature map generated by the mixed state feature extraction module 320 along the channel dimension into feature vectors to obtain a plurality of local expansion feature vectors; the global context encoding module 340 passes the plurality of local expansion feature vectors obtained by the local feature expansion module 330 through a transformer-based context encoder to obtain a plurality of context semantic local expansion feature vectors; then, the feature enhancement module 350 performs feature-level data enhancement on each of the plurality of context semantic local expansion feature vectors obtained by the global context encoding module 340 to obtain a plurality of optimized context semantic local expansion feature vectors; further, the fusion module 360 fuses the plurality of optimized context semantic local expansion feature vectors obtained by the feature enhancement module 350 to obtain a classification feature vector; finally, the control result generating module 370 passes the classification feature vector through a classifier to obtain a classification result, which indicates whether water addition is required.
Specifically, during operation of the self-controlled water-adding dough tank 300, the monitoring module 310 is configured to obtain a detection image of the water and flour after being stirred. In the technical solution of the present application, after the stirring equipment has stirred for a preset time, a state image of the dough is collected, and whether water should be added is determined by analyzing the state shown in that image. Thus, in one specific example of the present application, a camera may be used to obtain the detection image of the water and flour after being stirred.
Specifically, during operation of the self-controlled water-adding dough tank 300, the mixed state feature extraction module 320 is configured to pass the detected image through a first convolutional neural network model including a depth feature fusion module to obtain a multi-scale aggregated feature map. A convolutional neural network model, which performs excellently at implicit feature extraction from images, is used for feature extraction of the detected image. However, as encoding deepens, shallow features of the dough state, such as texture features, tend to be submerged in background information or become blurred. In the technical solution of the present application, in order to accurately extract the dough-state features for accurate water-addition control, more attention must therefore be paid to the texture features among the shallow features. Yet if classification control were performed using only the shallow features, the deep semantic features of the object would be lost, and interference information such as the image background would impair classification accuracy. Therefore, in the technical solution of the present application, the first convolutional neural network model with a depth feature fusion module, such as a pyramid network, is used to process the detected image so as to extract the hidden features of the stirred water and flour, i.e., multi-scale hidden feature information of the dough state, thereby obtaining the multi-scale aggregated feature map.
More specifically, the step of passing the detected image through a first convolutional neural network model including a depth feature fusion module to obtain a multi-scale aggregated feature map includes: using the M-th layer and the last layer of the first convolutional neural network model including the depth feature fusion module to respectively: perform convolution processing on input data to obtain a convolution feature map; pool the convolution feature map along the channel dimension to obtain a pooled feature map; and apply nonlinear activation to the pooled feature map to obtain an activation feature map; wherein M is greater than or equal to 1 and less than or equal to 6; the output of the M-th layer of the first convolutional neural network including the depth feature fusion module is the shallow feature map, the output of the last layer is the deep feature map, and the input of the first convolutional neural network is the detected image; and fusing the shallow feature map and the deep feature map by the depth feature fusion module of the first convolutional neural network model to obtain the multi-scale aggregated feature map.
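The depth feature fusion step above can be sketched as follows. The patent does not publish the fusion operator, so this NumPy sketch assumes nearest-neighbour upsampling of the deep feature map to the shallow map's spatial size followed by channel concatenation; all shapes are illustrative, not from the patent.

```python
import numpy as np

def depth_feature_fusion(shallow, deep):
    """Fuse a shallow feature map (C1, H1, W1) with a deep feature map
    (C2, H2, W2): nearest-neighbour upsample the deep map to the shallow
    map's spatial size, then concatenate along the channel dimension.
    The actual fusion operator in the patent is not published; this is
    one plausible choice."""
    c2, h2, w2 = deep.shape
    _, h1, w1 = shallow.shape
    # nearest-neighbour upsampling indices for rows and columns
    rows = np.arange(h1) * h2 // h1
    cols = np.arange(w1) * w2 // w1
    upsampled = deep[:, rows][:, :, cols]
    # channel concatenation yields the multi-scale aggregated feature map
    return np.concatenate([shallow, upsampled], axis=0)

shallow = np.random.rand(16, 32, 32)   # M-th layer output (assumed shape)
deep = np.random.rand(64, 8, 8)        # last-layer output (assumed shape)
fused = depth_feature_fusion(shallow, deep)
print(fused.shape)  # (80, 32, 32)
```

Any differentiable fusion (addition after a 1x1 convolution, for example) would fit the same interface; concatenation is used here only because it preserves both scales unchanged.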
Fig. 4 illustrates a block diagram of a mixing state feature extraction module in a self-controlled water-filled dough tank according to an embodiment of the present application. As shown in fig. 4, the mixed state feature extraction module 320 includes: a shallow feature extraction unit 321, configured to obtain a shallow feature map from an mth layer of the first convolutional neural network model, where M is greater than or equal to 1 and less than or equal to 6; a deep feature extraction unit 322, configured to obtain a deep feature map from a last layer of the first convolutional neural network model; and a depth feature fusion unit 323, configured to fuse the shallow feature map and the deep feature map through a depth feature fusion module of the first convolutional neural network model to obtain the multi-scale aggregated feature map.
Specifically, during operation of the self-controlled water-adding dough tank 300, the local feature expansion module 330 is configured to expand each feature matrix of the multi-scale aggregated feature map along the channel dimension into a feature vector to obtain a plurality of local expansion feature vectors. In a specific example of the present application, the local feature expansion module is further configured to expand each feature matrix of the multi-scale aggregated feature map along the channel dimension by row vectors or by column vectors to obtain the plurality of local expansion feature vectors.
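The channel-wise expansion step amounts to flattening each of the C feature matrices of a (C, H, W) tensor into a length H*W vector; the shapes below are illustrative.

```python
import numpy as np

# Multi-scale aggregated feature map of shape (C, H, W): each of the C
# feature matrices along the channel dimension is flattened row-major
# (i.e., by row vectors) into one local expansion feature vector.
feature_map = np.arange(2 * 3 * 4).reshape(2, 3, 4).astype(float)

local_vectors = [feature_map[c].reshape(-1) for c in range(feature_map.shape[0])]

print(len(local_vectors))       # 2
print(local_vectors[0].shape)   # (12,)
```

Column-vector expansion, the other option named in the text, would simply use `feature_map[c].T.reshape(-1)` instead.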
Specifically, during operation of the self-controlling water-adding dough tank 300, the global context encoding module 340 is configured to pass the plurality of local expansion feature vectors through a transformer-based context encoder to obtain a plurality of context semantic local expansion feature vectors. It should be understood that each feature matrix of the multi-scale aggregated feature map along the channel dimension is a local feature; that is, the convolutional neural network model can only extract locally associated features. Therefore, to enlarge the feature receptive field and make full use of context information to improve classification accuracy, in the technical solution of the present application, each feature matrix of the multi-scale aggregated feature map along the channel dimension is further expanded into a feature vector, and a transformer encoder is used to perform long-range-dependent context semantic encoding, so that each feature matrix is re-expressed based on global high-dimensional semantic features that better characterize the essential features of the dough state, thereby obtaining the plurality of context semantic local expansion feature vectors.
FIG. 5 illustrates a block diagram of a global context encoding module in a self-controlled water-adding dough tank according to an embodiment of the present application. As shown in fig. 5, the global context encoding module 340 includes: a query vector construction unit 341, configured to perform one-dimensional arrangement of the plurality of local expansion feature vectors to obtain a global expansion feature vector; a self-attention unit 342, configured to calculate the product between the global expansion feature vector and the transpose vector of each of the plurality of local expansion feature vectors to obtain a plurality of self-attention correlation matrices; a normalization unit 343, configured to perform normalization processing on each of the plurality of self-attention correlation matrices to obtain a plurality of normalized self-attention correlation matrices; an attention calculation unit 344, configured to pass each of the plurality of normalized self-attention correlation matrices through a Softmax classification function to obtain a plurality of probability values; and an attention applying unit 345, configured to weight each of the local expansion feature vectors with each of the probability values as a weight to obtain the plurality of context semantic local expansion feature vectors.
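A minimal NumPy sketch of the five units above follows. The patent leaves one detail open — how a single probability value is derived from each normalized self-attention correlation matrix — so the maximum entry of each matrix is used here as that scalar; this is an assumption for illustration only.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def context_encode(local_vectors):
    """Sketch of units 341-345. The scalar-per-matrix step (unit 344)
    is underspecified in the text; the matrix maximum is assumed here."""
    # 341: one-dimensional arrangement -> global expansion feature vector
    global_vec = np.concatenate(local_vectors)
    # 342: product with each local vector's transpose -> correlation matrices
    corr = [np.outer(global_vec, v) for v in local_vectors]
    # 343: normalize each correlation matrix (zero mean, unit variance)
    norm = [(m - m.mean()) / (m.std() + 1e-8) for m in corr]
    # 344: one scalar per matrix, turned into probabilities via Softmax
    probs = softmax(np.array([m.max() for m in norm]))
    # 345: weight each local expansion vector by its probability value
    return [p * v for p, v in zip(probs, local_vectors)]

vectors = [np.ones(4), 2 * np.ones(4), 3 * np.ones(4)]
encoded = context_encode(vectors)
print(len(encoded))  # 3
```

A production transformer encoder would instead use learned query/key/value projections and multi-head attention; this sketch only mirrors the literal unit-by-unit description.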
Specifically, during operation of the self-controlled water-adding dough tank 300, the feature enhancement module 350 is configured to perform feature-level data enhancement on each of the plurality of context semantic local expansion feature vectors to obtain a plurality of optimized context semantic local expansion feature vectors. In particular, in the technical solution of the present application, since the plurality of context semantic local expansion feature vectors are directly cascaded to obtain the classification feature vector, the transformer-based context encoder can promote the contextual semantic relevance among the plurality of context semantic local expansion feature vectors, but the aggregation of the expressed information remains deficient. That is, because the plurality of context semantic local expansion feature vectors are derived from the respective feature matrices of the multi-scale aggregated feature map along the channel dimension, and the multi-scale aggregated feature map is not aggregated in the channel dimension, it is still desirable to increase the degree of information aggregation among the plurality of context semantic local expansion feature vectors in order to improve the accuracy of the classification result of the classification feature vector.
Based on this, for each of the plurality of context semantic local expansion feature vectors, a high-frequency enhanced distillation factor of wavelet-like function family energy aggregation is calculated separately. The formula is published only as an image (Figure BDA0003927566320000111) in the original document; in its notation, σ(v_i) denotes the variance of the set of feature values v_i ∈ V, v_i is a feature value of the context semantic local expansion feature vector V, and L is the length of V.
Here, the inventors of the present application considered that the information representation of a feature distribution tends to concentrate in its high-frequency components, i.e., the information tends to be distributed on the manifold edge of the high-dimensional manifold. Accordingly, by distilling the collective variance of the feature distribution, the high-frequency components of the high-dimensional implicit state features can be enhanced and the low-frequency components constrained, in the manner of high-frequency enhanced distillation of wavelet-like function family energy aggregation. In this way, by weighting the context semantic local expansion feature vectors with these high-frequency enhanced distillation factors and then cascading them, the basic information in the full-precision information representation space is recovered, the degree of information aggregation among the context semantic local expansion feature vectors is enhanced, and the expression of the associated information of the multi-scale aggregated feature map along the channel dimension is improved, thereby improving the accuracy of the classification result of the classification feature vector. Therefore, adaptive water-addition control can be performed accurately according to the actual change of the dough kneading state, so that the dough kneading quality is ensured.
FIG. 6 illustrates a block diagram of a feature enhancement module in a self-controlled water-adding dough tank according to an embodiment of the present application. As shown in fig. 6, the feature enhancement module 350 includes: an enhancement factor calculation unit 351, configured to calculate the high-frequency enhanced distillation factor of wavelet-like function family energy aggregation of each of the plurality of context semantic local expansion feature vectors as its weighting weight, using a formula published only as an image (Figure BDA0003927566320000121) in the original document, where v_i denotes the feature value at each position of each context semantic local expansion feature vector, σ(v_i) denotes the variance of the set of all position feature values of that vector, L is the length of each context semantic local expansion feature vector, and log denotes the base-2 logarithm; and a weighted optimization unit 352, configured to perform weighted optimization on each of the plurality of context semantic local expansion feature vectors with its weighting weight to obtain the plurality of optimized context semantic local expansion feature vectors.
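The weight-then-optimize flow of the feature enhancement module can be sketched as follows. Note that the actual distillation-factor formula is published only as an image; the entropy-style surrogate below (built from the variance, the length L, and a base-2 logarithm, matching the published term list) is an assumption for illustration only and is not the patent's formula.

```python
import numpy as np

def enhancement_weight(v):
    """Stand-in for the high-frequency enhanced distillation factor.
    The real formula is not published in text form; per its term list it
    depends on the variance of v's feature values, the vector length L,
    and a base-2 logarithm, so an entropy-style surrogate
    w = -(s/L) * log2(s/L + eps), with s = var(v), is assumed here."""
    L = len(v)
    p = np.var(v) / L
    return -p * np.log2(p + 1e-12)

def enhance(vectors):
    # weighted optimization unit: scale each vector by its own factor
    return [enhancement_weight(v) * v for v in vectors]

vectors = [np.array([1.0, 2.0, 3.0, 4.0]), np.array([0.5, 0.5, 1.5, 1.5])]
optimized = enhance(vectors)
print(len(optimized))  # 2
```

Whatever the true factor is, the module's structure is the same: one scalar per vector, computed from that vector's value statistics, applied multiplicatively before cascading.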
Specifically, during operation of the self-controlled water-adding dough tank 300, the fusion module 360 is configured to fuse the plurality of optimized context semantic local expansion feature vectors to obtain a classification feature vector. Accordingly, feature fusion of the plurality of optimized context semantic local expansion feature vectors can be performed in a cascading manner to obtain the classification feature vector, and classification is performed on that vector to obtain a classification result indicating whether water needs to be added. In a specific example of the present application, the fusion module is further configured to: fuse the plurality of optimized context semantic local expansion feature vectors with the following formula to obtain the classification feature vector;
wherein, the formula is:
V_c = Concat[V_1, V_2, …, V_n]
where V_1, V_2, …, V_n denote the plurality of optimized context semantic local expansion feature vectors, Concat[·] denotes the concatenation function, and V_c denotes the classification feature vector.
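The cascade fusion V_c = Concat[V_1, V_2, …, V_n] amounts to simple vector concatenation; the values below are illustrative.

```python
import numpy as np

# Cascade (concatenation) fusion of optimized local expansion vectors
v1 = np.array([1.0, 2.0])
v2 = np.array([3.0, 4.0])
v3 = np.array([5.0, 6.0])

classification_vector = np.concatenate([v1, v2, v3])
print(classification_vector)  # [1. 2. 3. 4. 5. 6.]
```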
Specifically, during the operation of the self-controlled water adding dough tank 300, the control result generating module 370 is configured to pass the classification feature vector through a classifier to obtain a classification result, where the classification result is used to indicate whether water needs to be added. In a specific example of the present application, the control result generating module is further configured to: processing the classification feature vector using the classifier in the following formula to obtain a classification result;
Wherein, the formula is: O = softmax{(W_n, B_n) : … : (W_1, B_1) | X}, where O is the classification result, W_1 to W_n are weight matrices, B_1 to B_n are bias vectors, and X is the classification feature vector.
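Reading the classifier notation as a stack of fully connected layers (W_i, B_i) applied to X followed by a softmax, a sketch looks like the following; the layer sizes and random weights are illustrative assumptions, not from the patent.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def classify(x, layers):
    """Apply fully connected layers (W_1, B_1) ... (W_n, B_n) in turn,
    then softmax -- one reading of O = softmax{(W_n,B_n):...:(W_1,B_1)|X}."""
    for W, B in layers:
        x = W @ x + B
    return softmax(x)

rng = np.random.default_rng(0)
x = rng.standard_normal(6)  # classification feature vector (assumed size)
layers = [(rng.standard_normal((4, 6)), rng.standard_normal(4)),
          (rng.standard_normal((2, 4)), rng.standard_normal(2))]  # 2 classes
probs = classify(x, layers)
print(probs.shape)  # (2,)
```

The two output probabilities correspond to the binary decision the module reports: water needs to be added, or it does not.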
In summary, the self-controlled water-adding dough tank 300 according to the embodiments of the present application has been illustrated. It adopts an artificial intelligence control algorithm based on deep learning to extract and analyze the implicit features of the dough state image, characterizing the hidden state change after stirring, and performs intelligent water-addition control accordingly. Therefore, adaptive water-addition control can be performed accurately according to the actual change of the dough kneading state, so that the dough kneading quality is ensured.
As described above, the self-controlled water-adding dough tank according to the embodiment of the present application may be implemented in various terminal devices. In one example, the self-controlled water-adding dough tank 300 according to embodiments of the present application may be integrated into the terminal device as a software module and/or hardware module. For example, the self-controlling water-adding dough tank 300 may be a software module in the operating system of the terminal device, or may be an application developed for the terminal device; of course, the self-controlled water addition dough tank 300 may also be one of a plurality of hardware modules of the terminal device.
Alternatively, in another example, the self-controlled water-adding dough tank 300 and the terminal device may be separate devices, and the self-controlled water-adding dough tank 300 may be connected to the terminal device through a wired and/or wireless network and transmit interactive information in an agreed data format.
Exemplary method
Fig. 7 illustrates a flowchart of a method for using a self-controlled water-adding dough pot according to an embodiment of the present application. As shown in fig. 7, the method includes the steps of: S110, acquiring a detection image of the stirred water and flour; S120, passing the detection image through a first convolutional neural network model comprising a depth feature fusion module to obtain a multi-scale aggregated feature map; S130, expanding each feature matrix of the multi-scale aggregated feature map along the channel dimension into feature vectors to obtain a plurality of local expansion feature vectors; S140, passing the plurality of local expansion feature vectors through a transformer-based context encoder to obtain a plurality of context semantic local expansion feature vectors; S150, performing feature-level data enhancement on each of the plurality of context semantic local expansion feature vectors to obtain a plurality of optimized context semantic local expansion feature vectors; S160, fusing the plurality of optimized context semantic local expansion feature vectors to obtain a classification feature vector; and S170, passing the classification feature vector through a classifier to obtain a classification result, where the classification result is used to indicate whether water needs to be added.
In one example, in the above method for using a self-controlled water-adding dough pot, the step S120 includes: obtaining a shallow feature map from the M-th layer of the first convolutional neural network model, wherein M is greater than or equal to 1 and less than or equal to 6; obtaining a deep feature map from the last layer of the first convolutional neural network model; and fusing the shallow feature map and the deep feature map through the depth feature fusion module of the first convolutional neural network model to obtain the multi-scale aggregated feature map.
In one example, in the above method for using a self-controlled water adding dough pot, the step S130 includes: and expanding each feature matrix of the multi-scale aggregation feature map along the channel dimension along a row vector or a column vector to obtain the plurality of local expansion feature vectors.
In one example, in the above method for using a self-controlled water adding dough pot, the step S140 includes: one-dimensional arrangement is carried out on the plurality of local expansion feature vectors so as to obtain global expansion feature vectors; calculating the product between the global expansion feature vector and the transpose vector of each local expansion feature vector in the local expansion feature vectors to obtain a plurality of self-attention association matrixes; respectively carrying out standardization processing on each self-attention correlation matrix in the plurality of self-attention correlation matrices to obtain a plurality of standardized self-attention correlation matrices; obtaining a plurality of probability values by using a Softmax classification function through each normalized self-attention correlation matrix in the normalized self-attention correlation matrices; and weighting each local expansion characteristic vector in the local expansion characteristic vectors by taking each probability value in the probability values as a weight so as to obtain the context semantic local expansion characteristic vectors.
In one example, in the above method for using a self-controlled water-adding dough tank, the step S150 includes: calculating the high-frequency enhanced distillation factor of wavelet-like function family energy aggregation of each of the plurality of context semantic local expansion feature vectors as its weighting weight, using a formula published only as an image (Figure BDA0003927566320000151) in the original document, where v_i denotes the feature value at each position of each context semantic local expansion feature vector, σ(v_i) denotes the variance of the set of all position feature values of that vector, L is the length of each context semantic local expansion feature vector, and log denotes the base-2 logarithm; and performing weighted optimization on each of the plurality of context semantic local expansion feature vectors with its weighting weight to obtain the plurality of optimized context semantic local expansion feature vectors.
In one example, in the above method for using a self-controlled water adding dough pot, the step S160 includes: fusing the plurality of optimization context semantic local expansion feature vectors with the following formula to obtain a classification feature vector;
wherein, the formula is:
V_c = Concat[V_1, V_2, …, V_n]
where V_1, V_2, …, V_n denote the plurality of optimized context semantic local expansion feature vectors, Concat[·] denotes the concatenation function, and V_c denotes the classification feature vector.
In one example, in the above method for using a self-controlled water adding dough pot, the step S170 includes: processing the classification feature vector using the classifier in the following formula to obtain a classification result;
wherein, the formula is: O = softmax{(W_n, B_n) : … : (W_1, B_1) | X}, where W_1 to W_n are weight matrices, B_1 to B_n are bias vectors, and X is the classification feature vector.
In summary, the method of using the automatic water-adding dough kneading tank has been explained. It adopts an artificial intelligence control algorithm based on deep learning to extract and analyze the implicit features of the dough state image, characterizing the hidden state change after stirring, and performs intelligent water-addition control accordingly. Therefore, adaptive water-addition control can be performed accurately according to the actual change of the dough kneading state, so that the dough kneading quality is ensured.

Claims (10)

1. An automatic water-adding dough kneading tank, which is characterized by comprising:
the monitoring module is used for acquiring detection images of the water and the flour after being stirred;
the mixed state feature extraction module is used for enabling the detection image to pass through a first convolution neural network model comprising a depth feature fusion module so as to obtain a multi-scale aggregation feature map;
the local feature expansion module is used for expanding each feature matrix of the multi-scale aggregation feature graph along the channel dimension into feature vectors so as to obtain a plurality of local expansion feature vectors;
a global context coding module, configured to pass the plurality of local expansion feature vectors through a transformer-based context encoder to obtain a plurality of context semantic local expansion feature vectors;
the feature enhancement module is used for carrying out feature data enhancement on each context semantic local expansion feature vector in the context semantic local expansion feature vectors so as to obtain a plurality of optimized context semantic local expansion feature vectors;
the fusion module is used for fusing the plurality of optimization context semantic local expansion feature vectors to obtain classification feature vectors; and
and the control result generation module is used for enabling the classification feature vector to pass through a classifier to obtain a classification result, and the classification result is used for indicating whether water needs to be added.
2. The self-controlled water-adding dough tank of claim 1, wherein said mixed state feature extraction module comprises:
the shallow feature extraction unit is used for obtaining a shallow feature map from an M-th layer of the first convolutional neural network model, wherein M is more than or equal to 1 and less than or equal to 6;
the deep feature extraction unit is used for obtaining a deep feature map from the last layer of the first convolutional neural network model;
and the depth feature fusion unit is used for fusing the shallow feature map and the deep feature map through a depth feature fusion module of the first convolutional neural network model so as to obtain the multi-scale aggregation feature map.
3. The self-controlled water-adding dough tank of claim 2, wherein said local feature expansion module is further configured to expand each feature matrix of said multi-scale aggregated feature map along the channel dimension by row vectors or by column vectors to obtain said plurality of local expansion feature vectors.
4. The self-controlled water-adding dough tank of claim 3, wherein said global context coding module comprises:
the query vector construction unit is used for carrying out one-dimensional arrangement on the plurality of local expansion feature vectors to obtain global expansion feature vectors;
A self-attention unit, configured to calculate a product between the global expansion feature vector and a transpose vector of each local expansion feature vector in the plurality of local expansion feature vectors to obtain a plurality of self-attention correlation matrices;
the normalization unit is used for respectively performing normalization processing on each self-attention correlation matrix in the plurality of self-attention correlation matrices to obtain a plurality of normalized self-attention correlation matrices;
the attention calculating unit is used for obtaining a plurality of probability values through a Softmax classification function by each normalized self-attention correlation matrix in the normalized self-attention correlation matrices;
and the attention applying unit is used for weighting each local expansion characteristic vector in the local expansion characteristic vectors by taking each probability value in the plurality of probability values as a weight so as to obtain the context semantic local expansion characteristic vectors.
5. The self-controlled water-adding dough tank of claim 4, wherein said feature enhancement module comprises:
an enhancement factor calculation unit, configured to calculate the high-frequency enhanced distillation factor of wavelet-like function family energy aggregation of each of the plurality of context semantic local expansion feature vectors as its weighting weight;
wherein the formula is published only as an image (Figure FDA0003927566310000021) in the original document, in which v_i denotes the feature value at each position of each context semantic local expansion feature vector, σ(v_i) denotes the variance of the set of all position feature values of that vector, L is the length of each context semantic local expansion feature vector, and log denotes the base-2 logarithm;
and the weighted optimization unit is used for weighted optimization of each context semantic local expansion feature vector in the plurality of context semantic local expansion feature vectors by the weighted weight of each context semantic local expansion feature vector so as to obtain the plurality of optimized context semantic local expansion feature vectors.
6. The self-controlled water-adding dough tank of claim 5, wherein said fusion module is further configured to: fuse the plurality of optimized context semantic local expansion feature vectors with the following formula to obtain a classification feature vector;
wherein, the formula is:
V_c = Concat[V_1, V_2, …, V_n]
where V_1, V_2, …, V_n denote the plurality of optimized context semantic local expansion feature vectors, Concat[·] denotes the concatenation function, and V_c denotes the classification feature vector.
7. The self-controlled water-adding dough tank of claim 6, wherein said control result generation module is further configured to: process the classification feature vector using the classifier in the following formula to obtain a classification result;
wherein, the formula is: O = softmax{(W_n, B_n) : … : (W_1, B_1) | X}, where W_1 to W_n are weight matrices, B_1 to B_n are bias vectors, and X is the classification feature vector.
8. The use method of the automatic water adding dough kneading tank is characterized by comprising the following steps of:
acquiring a detection image of the water and the flour after being stirred;
passing the detection image through a first convolution neural network model comprising a depth feature fusion module to obtain a multi-scale aggregation feature map;
expanding each feature matrix of the multi-scale aggregation feature map along the channel dimension into feature vectors to obtain a plurality of local expansion feature vectors;
passing the plurality of local expansion feature vectors through a transformer-based context encoder to obtain a plurality of context semantic local expansion feature vectors;
Respectively carrying out feature data enhancement on each context semantic local expansion feature vector in the context semantic local expansion feature vectors to obtain a plurality of optimized context semantic local expansion feature vectors;
fusing the plurality of optimization context semantic local expansion feature vectors to obtain classification feature vectors; and
and passing the classification feature vector through a classifier to obtain a classification result, the classification result being used to indicate whether water needs to be added.
9. The use method of the automatic water adding dough kneading tank according to claim 8, wherein expanding each feature matrix of the multi-scale aggregation feature map along the channel dimension into feature vectors comprises: expanding each feature matrix of the multi-scale aggregation feature map along the channel dimension by a row vector or a column vector to obtain the plurality of local expansion feature vectors.
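Expanding each channel's feature matrix into a vector by rows or by columns can be sketched in numpy; the channel count and matrix size below are illustrative assumptions:

```python
import numpy as np

C, H, W = 3, 4, 5                       # channels, height, width (illustrative)
fmap = np.arange(C * H * W, dtype=float).reshape(C, H, W)

# Row-vector expansion: flatten each H x W feature matrix row by row.
row_vecs = [fmap[c].reshape(-1) for c in range(C)]   # C vectors of length H*W

# Column-vector expansion: flatten column by column instead.
col_vecs = [fmap[c].T.reshape(-1) for c in range(C)]

print(len(row_vecs), row_vecs[0].shape)  # 3 (20,)
```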
10. The use method of the automatic water adding dough kneading tank according to claim 9, wherein passing the classification feature vector through the classifier to obtain the classification result comprises: processing the classification feature vector with the classifier according to the following formula to obtain the classification result;
wherein the formula is: O = softmax{(W_n, B_n) : ... : (W_1, B_1) | X}, where W_1 to W_n are weight matrices, B_1 to B_n are bias vectors, and X is the classification feature vector.
CN202211379092.XA 2022-11-04 2022-11-04 Automatic water adding dough kneading tank and application method thereof Withdrawn CN116051853A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211379092.XA CN116051853A (en) 2022-11-04 2022-11-04 Automatic water adding dough kneading tank and application method thereof


Publications (1)

Publication Number Publication Date
CN116051853A true CN116051853A (en) 2023-05-02

Family

ID=86115333

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211379092.XA Withdrawn CN116051853A (en) 2022-11-04 2022-11-04 Automatic water adding dough kneading tank and application method thereof

Country Status (1)

Country Link
CN (1) CN116051853A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116716079A (en) * 2023-06-14 2023-09-08 山东沃赛新材料科技有限公司 High-performance mildew-proof alcohol type beautifying trimming adhesive and preparation method thereof
CN116716079B (en) * 2023-06-14 2024-01-19 山东沃赛新材料科技有限公司 High-performance mildew-proof alcohol type beautifying trimming adhesive and preparation method thereof
CN116862877A (en) * 2023-07-12 2023-10-10 新疆生产建设兵团医院 Scanning image analysis system and method based on convolutional neural network
CN117138588A (en) * 2023-10-27 2023-12-01 克拉玛依曜诚石油科技有限公司 Intelligent online cleaning method and system for reverse osmosis system
CN117138588B (en) * 2023-10-27 2024-02-13 克拉玛依曜诚石油科技有限公司 Intelligent online cleaning method and system for reverse osmosis system


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20230502