CN116451856A - Cement raw material mill raw material fineness index prediction method, model training method and system - Google Patents

Cement raw material mill raw material fineness index prediction method, model training method and system

Info

Publication number
CN116451856A
CN202310374251.5A (application) · CN116451856A (publication)
Authority
CN
China
Prior art keywords
raw material
fineness
sample
encoder
fineness index
Prior art date
Legal status: Pending
Application number
CN202310374251.5A
Other languages
Chinese (zh)
Inventor
李婉莹
胡要林
景世青
欧阳葆青
江开放
王国勋
刘雨桐
Current Assignee
Shenzhen Runfeng Intelligent Technology Co ltd
China Resources Digital Technology Co Ltd
Original Assignee
Shenzhen Runfeng Intelligent Technology Co ltd
China Resources Digital Technology Co Ltd
Application filed by Shenzhen Runfeng Intelligent Technology Co ltd, China Resources Digital Technology Co Ltd filed Critical Shenzhen Runfeng Intelligent Technology Co ltd
Priority to CN202310374251.5A
Publication of CN116451856A


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/04Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10Complex mathematical operations
    • G06F17/18Complex mathematical operations for evaluating statistical data, e.g. average values, frequency distributions, probability functions, regression analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/2148Generating training patterns; Bootstrap methods, e.g. bagging or boosting characterised by the process organisation or structure, e.g. boosting cascade
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/243Classification techniques relating to the number of classes
    • G06F18/24323Tree-organised classifiers
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • G06N3/0455Auto-encoder networks; Encoder-decoder networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/06Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063Operations research, analysis or management
    • G06Q10/0639Performance analysis of employees; Performance analysis of enterprise or organisation operations
    • G06Q10/06393Score-carding, benchmarking or key performance indicator [KPI] analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/04Manufacturing
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Human Resources & Organizations (AREA)
  • Economics (AREA)
  • General Engineering & Computer Science (AREA)
  • Strategic Management (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Development Economics (AREA)
  • General Business, Economics & Management (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Tourism & Hospitality (AREA)
  • Software Systems (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Marketing (AREA)
  • Operations Research (AREA)
  • Game Theory and Decision Science (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Optimization (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Pure & Applied Mathematics (AREA)
  • Mathematical Analysis (AREA)
  • Computational Mathematics (AREA)
  • Computing Systems (AREA)
  • Quality & Reliability (AREA)
  • Molecular Biology (AREA)
  • Educational Administration (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Algebra (AREA)

Abstract

The application provides a cement raw material mill raw material fineness index prediction method, a model training method and a system, belonging to the technical field of cement production. The method comprises the following steps: obtaining target process parameters in the raw material production process of a cement raw material mill; calculating statistical characteristics of the target process parameters to obtain statistical characteristic values describing the data characteristics of the target process parameters; inputting the target process parameters into a pre-trained self-encoder and extracting the output result of the encoder in the self-encoder to obtain hidden layer information; obtaining process parameter index information from the hidden layer information and the statistical characteristic values, and inputting the process parameter index information into a pre-trained fineness index prediction model to obtain a fineness index prediction result. The method addresses the nonlinearity, strong coupling and similar difficulties of the fineness index in the raw material grinding process, provides accurate raw material grinding fineness prediction, and improves the efficiency and quality of raw material grinding.

Description

Cement raw material mill raw material fineness index prediction method, model training method and system
Technical Field
The application relates to the technical field of cement production, in particular to a cement raw material mill raw material fineness index prediction method, a model training method and a model training system.
Background
The cement industry is an important basic industry in China, and its products are widely used in civil construction, national defense and other fields. The production and preparation process comprises three stages: raw material grinding, raw material calcination and clinker grinding. Raw material grinding is an important link in cement production; the quality of the raw meal and the stability of the production process directly affect the yield and quality of cement clinker. Grinding the raw material too fine increases power consumption, while grinding it too coarse degrades product quality.
The fineness index in the raw material grinding process exhibits nonlinearity, strong coupling and similar difficulties. In the related art, parameters are set and adjusted manually according to the experience of the operators so as to produce raw meal that meets the fineness requirement. However, this approach is subjective and arbitrary, production rules are difficult to discover in the face of the nonlinear, strongly coupled production process, and the grinding efficiency and quality of the raw meal may be reduced.
Disclosure of Invention
The embodiment of the application mainly aims to provide a cement raw material mill raw material fineness index prediction method, a model training method and a model training system, which can provide accurate raw material grinding fineness prediction and improve raw material grinding efficiency and quality.
To achieve the above object, a first aspect of the embodiments of the present application provides a method for predicting a fineness index of a raw material of a cement raw material mill, the method comprising: obtaining target process parameters in the raw material production process of the cement raw material mill; calculating statistical characteristics of the target process parameters to obtain statistical characteristic values describing the data characteristics of the target process parameters; inputting the target process parameters into a pre-trained self-encoder, and extracting the output result of the encoder in the self-encoder to obtain hidden layer information; obtaining process parameter index information from the hidden layer information and the statistical characteristic values, and inputting the process parameter index information into a pre-trained fineness index prediction model to obtain a fineness index prediction result.
In some embodiments, the fineness index prediction model includes a first fineness index prediction model and a second fineness index prediction model; the first fineness index prediction model is trained with sample process parameters at a first fineness, and the second fineness index prediction model is trained with sample process parameters at a second fineness. Inputting the process parameter index information into the pre-trained fineness index prediction model to obtain the fineness index prediction result comprises: inputting the process parameter index information into the pre-trained first fineness index prediction model to obtain a first index prediction result based on the first fineness; and inputting the process parameter index information into the pre-trained second fineness index prediction model to obtain a second index prediction result based on the second fineness.
In some embodiments, obtaining the target process parameters in the raw material production process of the cement raw material mill comprises: acquiring a plurality of historical process parameters at historical moments in the raw material production process of the cement raw material mill, wherein the historical process parameters comprise the powder concentrator current, the circulating fan current, the mill pressure difference, the mill outlet pressure, the mill outlet temperature, the elevator current, the grinding pressure, the yield feedback information and the vertical mill host current; and taking the plurality of historical process parameters as the target process parameters in the raw material production process of the cement raw material mill.
In some embodiments, obtaining the target process parameters in the raw material production process of the cement raw material mill comprises: acquiring a plurality of historical process parameters at historical moments in the raw material production process of the cement raw material mill; calculating the mean and standard deviation corresponding to each historical process parameter; and standardizing each historical process parameter according to the mean and standard deviation to obtain a plurality of target process parameters.
In some embodiments, there are a plurality of target process parameters, and calculating the statistical characteristics of the target process parameters to obtain statistical characteristic values describing the data characteristics of the target process parameters comprises: calculating the mean of the plurality of target process parameters to obtain a target mean describing the central tendency of the target process parameters; calculating the variance of the plurality of target process parameters to obtain a target variance describing the degree of dispersion of the target process parameters; calculating the skewness of the plurality of target process parameters to obtain a target skewness describing the symmetry of the distribution of the target process parameters; calculating the kurtosis of the plurality of target process parameters to obtain a target kurtosis describing the steepness of the distribution of the target process parameters; and taking at least one of the target mean, the target variance, the target skewness and the target kurtosis as a statistical characteristic value describing the data characteristics of the target process parameters.
In some embodiments, the self-encoder is further provided with a decoder, and the self-encoder is obtained by training through the following steps: obtaining sample process parameters; inputting the sample process parameters into the encoder for processing to obtain the sample hidden layer information output by the encoder; inputting the sample hidden layer information into the decoder for processing to obtain the sample output information output by the decoder; and calculating a first loss value of the self-encoder from the sample process parameters and the sample output information, and adjusting the parameters of the self-encoder according to the first loss value to obtain the trained self-encoder.
In some embodiments, the encoder is provided with a forget gate, an input gate with a candidate memory cell, and an output gate, and obtaining the hidden layer information comprises: acquiring a first parameter of the forget gate, a second parameter of the input gate, a third parameter of the output gate, and the output information of the encoder at the previous time step; updating the information stored in the candidate memory cell according to the target process parameters, the first parameter, the second parameter and the output information at the previous time step; and performing an element-wise multiplication of the stored memory cell information with the third parameter to obtain the hidden layer information output by the encoder.
In some embodiments, the fineness index prediction model is trained by: acquiring sample process parameters and the corresponding true sample fineness index values; calculating statistical characteristics of the sample process parameters to obtain sample statistical characteristic values describing the data characteristics of the sample process parameters; inputting the sample process parameters into the pre-trained self-encoder and extracting the output result of the encoder to obtain sample hidden layer information; obtaining sample process parameter index information from the sample hidden layer information and the sample statistical characteristic values, and inputting the sample process parameter index information into the fineness index prediction model to obtain a sample fineness index prediction result; and obtaining a second loss value from the true sample fineness index values and the sample fineness index prediction result, and adjusting the parameters of the fineness index prediction model according to the second loss value to obtain the trained fineness index prediction model.
In some embodiments, the fineness index prediction model is a gradient boosting tree model that uses classification and regression trees as its decision trees, and the trained fineness index prediction model is generated through multiple iterations. Adjusting the parameters of the fineness index prediction model according to the second loss value to obtain the trained fineness index prediction model comprises: acquiring the total number of leaf nodes in the fineness index prediction model, the weight of each leaf node, and preset penalty factors; obtaining a regularization value of the fineness index prediction model from the total number of leaf nodes, the weight of each leaf node and the penalty factors; obtaining a first objective function value of the fineness index prediction model from the second loss value and the regularization value; performing a second-order Taylor expansion on the objective function value to obtain a processed second objective function value; and iterating the fineness index prediction model multiple times according to the second objective function value, determining, after the weights and the second objective function value are adjusted in the iterations, the target leaf nodes that satisfy the gain-loss requirement, so as to obtain the trained fineness index prediction model.
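Purely as an illustration of the objective construction outlined above (this is not the patent's own code), the following Python sketch shows the standard XGBoost-style regularized objective: the regularization value built from the number of leaf nodes, the leaf weights and the penalty factors; the closed-form leaf weight that follows from the second-order Taylor expansion; and the split gain used to decide which leaf nodes are kept. Variable names are illustrative assumptions.

```python
import numpy as np

def regularization(leaf_weights, gamma, lam):
    # Omega(f) = gamma * T + 0.5 * lambda * sum_j w_j^2, with T leaf nodes.
    T = len(leaf_weights)
    return gamma * T + 0.5 * lam * float(np.sum(np.square(leaf_weights)))

def optimal_leaf_weight(G, H, lam):
    # Closed-form minimizer of the second-order (Taylor-expanded) objective
    # for one leaf; G and H are the sums of first- and second-order gradients
    # of the loss over the samples falling into that leaf.
    return -G / (H + lam)

def split_gain(G_left, H_left, G_right, H_right, gamma, lam):
    # Reduction of the second objective function value when a leaf is split;
    # only splits (and hence leaf nodes) with positive gain are kept.
    def score(G, H):
        return G * G / (H + lam)
    return 0.5 * (score(G_left, H_left) + score(G_right, H_right)
                  - score(G_left + G_right, H_left + H_right)) - gamma
```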
To achieve the above object, a second aspect of the embodiments of the present application provides a model training method, comprising: acquiring sample process parameters and the corresponding true sample fineness index values; calculating statistical characteristics of the sample process parameters to obtain sample statistical characteristic values describing the data characteristics of the sample process parameters; inputting the sample process parameters into a pre-trained self-encoder and extracting the output result of the encoder in the self-encoder to obtain sample hidden layer information; obtaining sample process parameter index information from the sample hidden layer information and the sample statistical characteristic values, and inputting the sample process parameter index information into a fineness index prediction model to obtain a sample fineness index prediction result; and obtaining a second loss value from the true sample fineness index values and the sample fineness index prediction result, and adjusting the parameters of the fineness index prediction model according to the second loss value to obtain the trained fineness index prediction model.
To achieve the above object, a third aspect of the embodiments of the present application provides a cement raw material mill raw material fineness index prediction system, the system comprising: a parameter acquisition module for obtaining target process parameters in the raw material production process of the cement raw material mill; a statistical characteristic calculation module for calculating statistical characteristics of the target process parameters to obtain statistical characteristic values describing the data characteristics of the target process parameters; a self-encoder processing module for inputting the target process parameters into a pre-trained self-encoder and extracting the output result of the encoder in the self-encoder to obtain hidden layer information; and a model processing module for obtaining process parameter index information from the hidden layer information and the statistical characteristic values and inputting the process parameter index information into a pre-trained fineness index prediction model to obtain a fineness index prediction result.
To achieve the above object, a fourth aspect of the embodiments of the present application provides an electronic device, where the electronic device includes a memory and a processor, and the memory stores a computer program, and when the processor executes the computer program, the processor implements the method for predicting a fineness index of a cement raw material mill raw material according to the first aspect embodiment and the method for training a model according to the second aspect embodiment.
To achieve the above object, a fifth aspect of the embodiments of the present application proposes a storage medium, which is a computer-readable storage medium, and the storage medium stores a computer program, where the computer program, when executed by a processor, implements the method for predicting fineness index of a cement raw material mill raw material according to the embodiment of the first aspect and the method for training a model according to the embodiment of the second aspect.
The cement raw material mill raw material fineness index prediction method, model training method and system provided by the embodiments of the present application can be applied to a cement raw material mill raw material fineness index prediction system. By executing the cement raw material mill raw material fineness index prediction method, statistical characteristic values can be calculated from the target process parameters, improving the representational power of the data; the target process parameters are input into a pre-trained self-encoder so that the process parameter indexes can be extracted, and the fineness index prediction model is then used to process the data to obtain an accurate fineness index prediction result. By applying machine learning models, the method can deeply mine the latent characteristic change rules of the process indexes, effectively address the nonlinearity, strong coupling and similar difficulties of the fineness index in the raw material grinding process, provide accurate raw material grinding fineness prediction, and improve the efficiency and quality of raw material grinding.
Drawings
FIG. 1 is a schematic flow chart of a method for predicting fineness index of a raw material of a cement raw material mill according to an embodiment of the present application;
fig. 2 is a schematic flow chart of step S104 in fig. 1;
fig. 3 is a schematic flow chart of step S101 in fig. 1;
fig. 4 is another flow chart of step S101 in fig. 1;
fig. 5 is a schematic flow chart of step S102 in fig. 1;
FIG. 6 is a schematic flow chart of a self-encoder training process provided in an embodiment of the present application;
fig. 7 is a flow chart of step S103 in fig. 1;
FIG. 8 is a schematic flow chart of a training process of the fineness index prediction model according to the embodiment of the present application;
fig. 9 is a flowchart of step S805 in fig. 8;
FIG. 10 is a flow chart of a model training method provided in an embodiment of the present application;
FIG. 11 is a schematic diagram of functional modules of a system for predicting fineness index of raw materials of a cement raw material mill according to an embodiment of the present application;
fig. 12 is a schematic hardware structure of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
It should be noted that although functional block division is performed in a device diagram and a logic sequence is shown in a flowchart, in some cases, the steps shown or described may be performed in a different order than the block division in the device, or in the flowchart. The terms first, second and the like in the description and in the claims and in the above-described figures, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the present application only and is not intended to be limiting of the present application.
First, several nouns referred to in this application are parsed:
Artificial intelligence (AI): a new technical science that researches and develops theories, methods, techniques and application systems for simulating, extending and expanding human intelligence. Artificial intelligence is a branch of computer science that attempts to understand the nature of intelligence and to produce new intelligent machines that can react in a manner similar to human intelligence; research in this field includes robotics, speech recognition, image recognition, natural language processing and expert systems. Artificial intelligence can simulate the information processes of human consciousness and thinking. It is also a theory, method, technique and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend and expand human intelligence, perceive the environment, acquire knowledge and use that knowledge to obtain optimal results.
Long Short-Term Memory (LSTM): a recurrent neural network designed specifically to address the long-term dependency problem of ordinary recurrent neural networks; all recurrent neural networks have the form of a chain of repeating neural network modules.
XGBoost: an optimized distributed gradient boosting library designed to be efficient, flexible and portable. It implements machine learning algorithms under the gradient boosting framework. XGBoost provides parallel tree boosting, which can solve many data science problems quickly and accurately.
Classification and regression tree (CART): a learning method that outputs the conditional probability distribution of a random variable Y given an input random variable X. CART assumes that the decision tree is a binary tree whose internal nodes test features with the values "yes" and "no"; the left branch corresponds to "yes" and the right branch to "no". Such a decision tree recursively bisects each feature, dividing the input space (i.e., the feature space) into a finite number of cells and determining a predicted probability distribution on these cells, i.e., the conditional probability distribution of the output under the given input conditions. It consists of tree generation and tree pruning.
A proper raw meal fineness is a key factor in achieving high quality and high yield in cement production while reducing energy consumption. In actual production, however, parameters are usually operated and adjusted manually according to the experience of personnel, which is subjective and arbitrary and leads to production fluctuations and excessive power consumption. At the same time, the raw material grinding process is a complex physicochemical process characterized by nonlinearity, strong coupling and multiple inputs; it is therefore difficult to build an accurate model with conventional methods, and difficult to discover rules from the large volume of real-time production data. How to build a raw meal fineness index model of the raw material grinding process and determine the optimal operating parameters to guide production has therefore become a problem to be solved in the industry.
Based on the above, the embodiments of the present application provide a cement raw material mill raw material fineness index prediction method, a model training method and a system. By integrating statistics, a self-encoder and a gradient boosting tree, the latent characteristic change rules of the process indexes can be deeply mined, the nonlinearity, strong coupling and similar difficulties of the fineness index in the raw material grinding process are effectively addressed, accurate raw material grinding fineness index prediction can be provided, and the efficiency and quality of raw material grinding are improved.
The method for predicting the fineness index of the raw material of the cement raw material mill, the model training method and the system provided by the embodiment of the application are specifically described through the following embodiments, and the method for predicting the fineness index of the raw material of the cement raw material mill in the embodiment of the application is described first.
Embodiments of the present application may also be attributed to the field of machine learning, where related data may be acquired and processed based on artificial intelligence techniques. Among these, artificial intelligence (Artificial Intelligence, AI) is the theory, method, technique and application system that uses a digital computer or a digital computer-controlled machine to simulate, extend and extend human intelligence, sense the environment, acquire knowledge and use knowledge to obtain optimal results.
Artificial intelligence infrastructure technologies generally include technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing technologies, operation/interaction systems, mechatronics, and the like. The artificial intelligence software technology mainly comprises a computer vision technology, a robot technology, a biological recognition technology, a voice processing technology, a natural language processing technology, machine learning/deep learning and other directions.
The embodiments of the present application provide a cement raw material mill raw material fineness index prediction method, which relates to the technical field of artificial intelligence. The method can be applied to a terminal, to a server, or to software running on a terminal or server. In some embodiments, the terminal may be a smartphone, tablet, notebook, desktop computer, etc.; the server may be configured as an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDNs, big data and artificial intelligence platforms; the software may be an application that implements the cement raw material mill raw material fineness index prediction method, but the method is not limited to the above forms.
The subject application is operational with numerous general purpose or special purpose computer system environments or configurations. For example: personal computers, server computers, hand-held or portable devices, tablet devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like. The application may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The application may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
It should be noted that, in each specific embodiment of the present application, when processing needs to be performed on data related to user identity or characteristics, such as user information, user behavior data, user history data or user location information, the permission or consent of the user is obtained first, for example before obtaining data stored by the user or handling a request to access the user's cached data; likewise, when acquiring target process information or sample process information for cement production, the embodiments of the present application may first obtain the user's approval or consent. The collection, use and processing of such data comply with the relevant laws and regulations. In addition, when an embodiment of the present application needs to acquire sensitive personal information of the user, the separate permission or consent of the user is obtained through a pop-up window, a jump to a confirmation page or the like, and only after the separate permission or consent of the user has been explicitly obtained is the user-related data necessary for the normal operation of the embodiment acquired.
Fig. 1 is an alternative flowchart of a method for predicting fineness index of a raw material of cement raw material mill according to an embodiment of the present application, and the method in fig. 1 may include, but is not limited to, steps S101 to S104.
Step S101, obtaining target process parameters in the raw material production process of the cement raw material mill;
the method for predicting the fineness index of the raw material of the cement raw material mill can be applied to a system for predicting the fineness index of the raw material of the cement raw material mill (the system for short), and the system needs to acquire target technological parameters in the production process of the raw material of the cement raw material mill in the process of predicting the fineness index of the raw material of the cement raw material mill.
Illustratively, the target process parameters are process parameters that affect the efficiency and quality of raw meal grinding in the raw material grinding process. It can be understood that the target process parameters are parameters of the production machine: by adjusting different target process parameters, the production machine grinds the raw material to different degrees, yielding raw meal of different fineness.
It will be appreciated that there may be a plurality of target process parameters, which together determine the efficiency and quality of raw meal grinding, and embodiments of the present application are not particularly limited.
Step S102, calculating statistical characteristics of the target process parameters to obtain statistical characteristic values describing the data characteristics of the target process parameters;
by way of example, the embodiment of the application is directed to the problems of nonlinearity, strong coupling and the like of fineness indexes in the raw material grinding process, so that data needs to be processed, characteristics of the data can be quantized, and accordingly the processing capacity of the model can be improved and the processing errors of the model can be reduced by the data which are subsequently input into the model.
Illustratively, in the embodiments of the present application, the statistical characteristics of the target process parameters are calculated and the data characteristics of the target process parameters are characterized by these statistics; the calculated statistics are referred to as statistical characteristic values.
It will be appreciated that there may be a variety of data characteristics, such as the trend of the data, its degree of dispersion, its symmetry and the steepness of its distribution, and that, when facing a large number of target process parameters, calculating statistical characteristics improves the ability to process the data.
Step S103, inputting the target process parameters into a pre-trained self-encoder, and extracting the output result of the encoder in the self-encoder to obtain hidden layer information;
Illustratively, in the embodiments of the present application the latent spatial features of the index parameters need to be extracted by the self-encoder, which is an LSTM self-encoder. Specifically, during application, the target process parameters are input into the pre-trained self-encoder. The self-encoder is a data compression algorithm provided with an encoder and a decoder: the encoder processes the input data and passes its result to the decoder, which finally produces the output information of the self-encoder. The encoding part maps the target process parameters into a hidden-layer feature space, which amounts to learning a latent spatial representation of the indexes, while the decoding part attempts to restore this new representation of the indexes to the original data.
Illustratively, in the embodiments of the present application, extracting the latent spatial features of the index parameters is carried out with the encoder in the self-encoder; the output result of the encoder is taken out to obtain the hidden layer information, which can then be fed into the subsequent model. It can be understood that the hidden layer information is hidden-layer feature information, i.e., the latent features extracted by the encoder of the self-encoder.
Step S104, obtaining process parameter index information from the hidden layer information and the statistical characteristic values, and inputting the process parameter index information into a pre-trained fineness index prediction model to obtain a fineness index prediction result.
After the hidden layer information output by the encoder is obtained, the embodiments of the present application combine the statistical features with the latent features of the self-encoder on the basis of the hidden layer information and the statistical characteristic values; this combination is referred to as the process parameter index information. The process parameter index information can serve as the input of the model, so it is input into the pre-trained fineness index prediction model to obtain the fineness index prediction result.
Illustratively, the fineness index prediction model is a pre-trained machine learning model. The model takes the process parameter index information as input, processes it and makes a prediction; in the application stage, the model finally outputs the fineness index prediction result for raw meal grinding.
Illustratively, the embodiments of the present application adopt the gradient boosting tree algorithm XGBoost, and the fineness index prediction model is a gradient boosting tree model. XGBoost integrates a number of weak classifiers into a strong classifier and reduces the error between the model's predicted value and the actual value. Its main idea is to keep generating new decision trees, each decision tree being learned and trained on the residual between the predicted value of the previous decision tree and the target value, so that the bias of the model decreases and the prediction accuracy improves.
The cement raw material mill raw material fineness index prediction method in the embodiments of the present application integrates statistics, an LSTM self-encoder and a gradient boosting tree: the process parameter indexes are extracted through statistical calculation and the LSTM self-encoder model and then input into the XGBoost model for prediction. The model deeply mines the latent characteristic change rules of the process indexes, effectively addresses the nonlinearity, strong coupling and similar difficulties of the fineness index in the raw material grinding process, and improves the accuracy and stability of the fineness prediction model, so that accurate raw material grinding fineness prediction can finally be provided and the efficiency and quality of raw material grinding improved.
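As an illustration only, the overall prediction flow just described can be sketched as follows; the feature layout, model objects and function names are assumptions made for the sketch, not the patent's implementation.

```python
import numpy as np
import torch
from scipy.stats import kurtosis, skew

def predict_fineness(T, lstm_encoder, xgb_model):
    # T: (60, 9) array holding one hour of standardized process parameters.
    stats = np.concatenate([T.mean(axis=0), T.var(axis=0),
                            skew(T, axis=0), kurtosis(T, axis=0)])
    with torch.no_grad():
        # Hidden layer information from the pre-trained LSTM encoder (assumed
        # here to return a tensor of latent features for the sequence).
        h = lstm_encoder(torch.tensor(T, dtype=torch.float32).unsqueeze(0))
    # Process parameter index information = statistics + latent features.
    features = np.concatenate([stats, h.squeeze(0).numpy().ravel()])
    # Fineness index prediction from the gradient boosting tree model.
    return xgb_model.predict(features.reshape(1, -1))
```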
Illustratively, in the production of raw material by a cement raw material mill there are various fineness requirements; for example, the fineness of the raw meal produced includes a 0.08 fineness and a 0.2 fineness. Production at different fineness levels involves different process parameters. On this basis, the embodiments of the present application can train a separate fineness index prediction model for each fineness, each model predicting the fineness of raw meal at one fineness level, which improves the accuracy of each fineness prediction.
Referring to fig. 2, in some embodiments, step S104 may include steps S201 to S202:
step S201, inputting the process parameter index information into the pre-trained first fineness index prediction model to obtain a first index prediction result based on the first fineness;
step S202, inputting the process parameter index information into the pre-trained second fineness index prediction model to obtain a second index prediction result based on the second fineness.
Illustratively, the fineness index prediction model includes a first fineness index prediction model and a second fineness index prediction model. The first fineness index prediction model is trained with sample process parameters at the first fineness and, after processing the process parameters, yields a prediction result at the first fineness; the second fineness index prediction model is trained with sample process parameters at the second fineness and, after processing the process parameters, yields a prediction result at the second fineness.
For example, in the embodiments of the present application, when performing fineness prediction, the process parameter index information may be input into both the first fineness index prediction model and the second fineness index prediction model, finally obtaining a first index prediction result based on the first fineness and a second index prediction result based on the second fineness.
Illustratively, in the embodiments of the present application the first fineness is the 0.08 fineness and the second fineness is the 0.2 fineness, so the first index prediction result is a prediction at the 0.08 fineness and the second index prediction result is a prediction at the 0.2 fineness. Provided the requirements of the embodiments of the present application are met, the first fineness and the second fineness may also be other fineness values; the first and second fineness in the embodiments of the present application merely denote different fineness levels and do not limit the embodiments of the present application.
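For example, applying the two separately trained models to the same process parameter index information might look like the following sketch; the file names and the use of the XGBoost scikit-learn wrapper are assumptions, not details from the patent.

```python
import xgboost as xgb

def load_models():
    # Hypothetical file names for the two separately trained models.
    model_008, model_02 = xgb.XGBRegressor(), xgb.XGBRegressor()
    model_008.load_model("fineness_0_08.json")
    model_02.load_model("fineness_0_2.json")
    return model_008, model_02

def predict_both(features, model_008, model_02):
    # features: process parameter index information, shape (1, n_features)
    first = model_008.predict(features)    # first index prediction result (0.08 fineness)
    second = model_02.predict(features)    # second index prediction result (0.2 fineness)
    return first, second
```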
Referring to fig. 3, in some embodiments, step S101 may include steps S301 to S302:
step S301, a plurality of historical process parameters at historical moments in the cement raw material mill raw material production process are obtained, wherein the historical process parameters comprise powder concentrator current, circulating fan current, mill pressure difference, mill outlet pressure, mill outlet temperature, elevator current, grinding pressure, yield feedback information and vertical mill host current;
and step S302, taking a plurality of historical process parameters as target process parameters in the production process of the raw materials of the cement raw material mill.
Illustratively, in the embodiments of the present application the current raw meal fineness is predicted from the process parameters over a preceding period of time. Specifically, the current moment for which the raw meal fineness is to be predicted can be determined, and a plurality of historical process parameters at historical moments in the raw material production process of the cement raw material mill are obtained based on that moment. It can be understood that there may be a plurality of historical process parameters, namely the process parameters over that period of time.
For example, the historical process parameters in the embodiments of the present application may be of various types, such as the powder concentrator current, the circulating fan current, the mill pressure difference, the mill outlet pressure, the mill outlet temperature, the elevator current, the grinding pressure, the yield feedback information and the vertical mill host current. These process parameters are related to the fineness index, and different historical process parameters affect the fineness of the ground raw meal.
Illustratively, a plurality of historical process parameters are taken as the target process parameters in the raw material production process of the cement raw material mill. It can be understood that the target process parameters are the historical process parameters finally determined to require subsequent processing for predicting the raw meal fineness; in the embodiments of the present application, all of the acquired historical process parameters may be taken as target process parameters, or several of them may be selected as target process parameters in a random or preset manner.
For example, in the embodiments of the present application, the historical process parameters may be acquired continuously at a given sampling frequency over a past period of time, so that each type of parameter has multiple historical values when determined as a target process parameter. For instance, the raw meal fineness at the current time may be predicted from the historical process parameters of the past hour: each type of historical process parameter is acquired once per minute over that hour, and the acquired values are stored as a data set. Provided the requirements of the embodiments of the present application are met, historical process parameters of other historical periods may be obtained, and different sampling frequencies may be set to obtain different numbers of historical process parameters; this is not particularly limited.
Specifically, in the embodiments of the present application, 9 historical process parameters X = [x_1, x_2, ..., x_9] related to the fineness index are selected as the features for raw meal fineness prediction, and the 0.08 fineness and the 0.2 fineness are selected as the true values Y = [y_1, y_2] in the raw meal fineness prediction training process; the current raw meal fineness is predicted mainly from the historical process parameters of the past hour. The data set in the embodiments of the present application can therefore be represented as {(T_i, Y_i), i = 1, 2, ..., N}, where N is the number of samples: in the application process it is the number of target process parameters, and in the training process it is the number of sample process parameters. Here T_i = [X_1i, X_2i, ..., X_60i]^T is the historical process parameter data of 60 minutes, X_1i being the process parameter data of the first minute.
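As an illustration of how the data set {(T_i, Y_i)} described above might be assembled, a small sketch follows; the column names, the DataFrame layout and the sliding-window choice are assumptions for the sketch, not the patent's implementation.

```python
import numpy as np
import pandas as pd

def build_dataset(df, n_params=9, window=60):
    # df: minute-resolution log with parameter columns x1..x9 and the two
    # fineness labels y1 (0.08 fineness) and y2 (0.2 fineness).
    X_cols = [f"x{j}" for j in range(1, n_params + 1)]
    samples, labels = [], []
    for end in range(window, len(df) + 1):
        T_i = df[X_cols].iloc[end - window:end].to_numpy()   # shape (60, 9)
        Y_i = df[["y1", "y2"]].iloc[end - 1].to_numpy()      # fineness at the current time
        samples.append(T_i)
        labels.append(Y_i)
    return np.stack(samples), np.stack(labels)
```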
Referring to fig. 4, in some embodiments, step S101 may include steps S401 to S403:
step S401, obtaining a plurality of historical process parameters at historical time in the production process of raw materials of the cement raw material mill;
step S402, calculating the mean value and standard deviation corresponding to each historical process parameter;
step S403, respectively carrying out standardization processing on each historical process parameter according to the mean value and the standard deviation to obtain a plurality of target process parameters.
Illustratively, as can be seen from the above examples, there are several types of historical process parameters. Differences in the units of measure used by these parameters can distort the raw meal fineness prediction: features on a large scale may play a decisive role while the effect of small-scale features is ignored. To eliminate the influence of unit and scale differences among the features, in the embodiments of the present application the parameters are also standardized after the plurality of historical process parameters are obtained.
In the embodiments of the present application, the mean and standard deviation corresponding to each historical process parameter can be calculated, and in the standardization process the parameter data are converted into new data following a normal distribution with mean 0 and standard deviation 1. The calculation formula is:
T' = (T − μ) / σ (1)
where T denotes the historical process parameter, T' is the target process parameter obtained after standardization, μ is the mean of each historical process parameter, and σ is the standard deviation of each historical process parameter.
For example, the standardization is performed separately for each type of historical process parameter: the mean and standard deviation of the historical process parameters of each type are computed, and formula (1) above is applied to the historical process parameters T of each type.
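A small sketch of formula (1) applied per parameter type, assuming the parameters are stored column-wise in a NumPy array (illustrative only):

```python
import numpy as np

def standardize(T):
    # T: (m, 9) array of historical process parameters, one column per parameter type.
    mu = T.mean(axis=0)                          # mean of each historical process parameter
    sigma = T.std(axis=0)                        # standard deviation of each parameter
    sigma = np.where(sigma == 0, 1.0, sigma)     # guard against constant parameters
    return (T - mu) / sigma                      # T' = (T - mu) / sigma, formula (1)
```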
Referring to fig. 5, in some embodiments, step S102 may include steps S501 to S505:
step S501, calculating the mean of the plurality of target process parameters to obtain a target mean describing the central tendency of the target process parameters;
step S502, calculating variances of a plurality of target process parameters to obtain target variances for describing the discrete degree of the target process parameters;
step S503, calculating the skewness of a plurality of target process parameters to obtain a target skewness for describing the symmetry of the distribution of the target process parameters;
Step S504, calculating kurtosis of a plurality of target process parameters to obtain target kurtosis for describing the steepness degree of the distribution of the target process parameters;
step S505, at least one of the target mean, the target variance, the target skewness, and the target kurtosis is used as a statistical feature value for describing the data feature of the target process parameter.
Specifically, in the embodiments of the present application, the mean (Mean), variance (Var), skewness (Skew) and kurtosis (Kurt) are used as the statistical features of the indexes. The target mean is obtained by calculating the mean of the plurality of target process parameters and describes the central tendency of the data; the target variance is obtained by calculating the variance of the plurality of target process parameters and describes the degree of dispersion of the data; the target skewness is obtained by calculating the skewness of the plurality of target process parameters and reflects the symmetry of the data distribution; the target kurtosis is obtained by calculating the kurtosis of the plurality of target process parameters and reflects the steepness of the data distribution. The calculation formulas are:
Mean = (1/m) Σ_{k=1}^{m} T_k
Var = (1/m) Σ_{k=1}^{m} (T_k − μ)^2
Skew = (1/m) Σ_{k=1}^{m} ((T_k − μ)/σ)^3
Kurt = (1/m) Σ_{k=1}^{m} ((T_k − μ)/σ)^4
where m is the length of the historical data (for example, 60 when the historical process parameters are obtained over one hour as in the embodiment above, i.e., 60 minutes of process parameter data); k indexes the sampling instants at which the historical process data are acquired (here, once per minute); T_k denotes the historical process data at the k-th sampling point, obtained by sampling at the chosen frequency; μ is the mean and σ the standard deviation of each historical process parameter, as defined above.
In the embodiments of the present application, at least one of the target mean, the target variance, the target skewness and the target kurtosis can be used as a statistical characteristic value describing the data characteristics of the target process parameters. Specifically, the target mean, target variance, target skewness and target kurtosis may all be taken together as statistical characteristic values, in which case there are several statistical characteristic values. Provided the requirements of the embodiments of the present application are met, any one of, or any combination of, the target mean, target variance, target skewness and target kurtosis may be used as the statistical characteristic values for subsequent calculation; this is not particularly limited.
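The four statistics can be computed per parameter as in the following illustrative sketch; the SciPy definitions of skewness and kurtosis are assumptions, and the patent's exact normalization may differ.

```python
import numpy as np
from scipy.stats import kurtosis, skew

def statistical_features(T):
    # T: array of shape (m, 9), one row per sampling instant.
    mean = T.mean(axis=0)                       # central tendency
    var = T.var(axis=0)                         # degree of dispersion
    skw = skew(T, axis=0)                       # symmetry of the distribution
    kurt = kurtosis(T, axis=0, fisher=False)    # steepness (Pearson kurtosis, no -3)
    return np.concatenate([mean, var, skw, kurt])
```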
Referring to fig. 6, in some embodiments, the self-encoder is trained by the following steps, which may include steps S601 to S604:
Step S601, obtaining sample process parameters;
step S602, inputting the sample process parameters into an encoder for processing to obtain sample hidden layer information output by the encoder;
step S603, inputting the information of the sample hidden layer into a decoder for processing to obtain sample output information output by the decoder;
step S604, calculating a first loss value of the self-encoder according to the sample process parameters and the sample output information, and adjusting the parameters of the self-encoder according to the first loss value to obtain the trained self-encoder.
Illustratively, a self-encoder may be pre-trained in the embodiments of the present application. The self-encoder can extract latent features; it is a data compression algorithm whose structure comprises an encoding part and a decoding part. The encoding part maps the target process parameters of the above steps into a hidden-layer feature space, which amounts to learning a latent spatial representation of the indexes, while the decoding part attempts to restore this new representation of the indexes to the original data. The self-encoder in the embodiments of the present application uses the mean absolute error (MAE) to measure the degree of difference between the original input and the reconstructed input (i.e., the reconstruction quality), and both the encoding and decoding parts employ LSTM neural networks.
In the process of training the self-encoder, sample process parameters can be obtained. The sample process parameters are similar to the target process parameters, differing only in that they are the process parameters used during training: they are process parameters that affect the efficiency and quality of raw material grinding in the raw material grinding process. It can be understood that the sample process parameters are parameters of a production machine, and by adjusting different sample process parameters the production machine can grind the raw material to different degrees, so that raw materials of different fineness are obtained.
In the embodiment of the application, the sample process parameters can be input into an encoder for processing to obtain sample hidden layer information output by the encoder, the sample hidden layer information is input into a decoder for processing to obtain sample output information output by the decoder, then a first loss value of the self-encoder is calculated according to the sample process parameters and the sample output information, and the parameters of the self-encoder are adjusted according to the first loss value to obtain the trained self-encoder.
In the application stage, the input information of the self-encoder is the target process parameter, which is encoded into hidden layer information h after passing through the input layer; in the training process, the input information of the self-encoder is the sample process parameter, and h is then the sample hidden layer information. Similarly, the hidden layer information h is decoded and remapped into output information y, and y in the training process is the sample output information; the other parameters are not described again here. The overall process can be described as:

$$h = \eta(W_1 T' + b_1)$$

$$y = \eta(W_2 h + b_2)$$

where $W_1$, $b_1$ and $W_2$, $b_2$ are the weight matrices and bias parameters of the encoder and the decoder, respectively, and $\eta$ is the activation function between the self-encoder neurons.
The training goal of the self-encoder is to minimize the error between the input information and the output information over the iterations, solving for and updating the encoder and decoder parameters by minimizing the loss function. The loss function (MAE) can be characterized as:

$$L = \frac{1}{N}\sum_{i=1}^{N}\left| T'_i - y_i \right|$$

where L is the first loss value, i indexes the different sample data, $T'_i$ denotes the i-th sample process parameter (the original input), $y_i$ denotes the i-th sample output information, and N denotes the number of input sample process parameters.
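As a hedged illustration of how such an LSTM self-encoder with an MAE reconstruction loss could be assembled (a sketch under assumed layer sizes and tensor shapes, not the patented implementation), consider the following PyTorch code; in use, the encoder output would be taken as the hidden layer information fed to the later fineness index prediction model.

```python
import torch
import torch.nn as nn

class LSTMAutoencoder(nn.Module):
    """LSTM self-encoder: the encoder compresses a process-parameter sequence
    into hidden layer information h; the decoder reconstructs the sequence."""
    def __init__(self, n_features: int, hidden_dim: int):
        super().__init__()
        self.encoder = nn.LSTM(n_features, hidden_dim, batch_first=True)
        self.decoder = nn.LSTM(hidden_dim, n_features, batch_first=True)

    def forward(self, x):                       # x: (batch, seq_len, n_features)
        h_seq, _ = self.encoder(x)              # hidden layer information h
        y, _ = self.decoder(h_seq)              # reconstructed output y
        return y, h_seq

# Training sketch: the layer sizes, batch and data below are assumptions.
model = LSTMAutoencoder(n_features=9, hidden_dim=16)   # e.g. 9 process parameters
criterion = nn.L1Loss()                                # MAE reconstruction loss
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.randn(32, 60, 9)                  # dummy batch: 60-minute windows
for epoch in range(10):
    optimizer.zero_grad()
    y, _ = model(x)
    loss = criterion(y, x)                  # first loss value
    loss.backward()
    optimizer.step()
```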
Referring to fig. 7, in some embodiments, step S103 may include steps S701 to S703:
step S701, obtaining a first parameter of a forgetting gate, a second parameter of an input gate, a third parameter of an output gate and output information of an encoder at the last moment;
step S702, updating the memory cell storage information of the input candidate gate according to the target process parameter, the first parameter, the second parameter and the output information at the previous moment;
step S703, performing element multiplication processing according to the memory unit storage information and the third parameter to obtain hidden layer information output by the encoder.
The encoder is provided with a forgetting gate, an input candidate gate and an output gate. In the embodiment of the application, the first parameter of the forgetting gate, the second parameter of the input gate, the third parameter of the output gate and the output information of the encoder at the previous moment are obtained; the memory cell storage information of the input candidate gate is updated according to the target process parameter, the first parameter, the second parameter and the output information at the previous moment; and element-wise multiplication is performed on the memory cell storage information and the third parameter to obtain the hidden layer information output by the encoder.
Specifically, since an LSTM unit can process variable-length sequences and capture the long-term dependencies and nonlinear relationships therein, the embodiment of the present application uses a long short-term memory (LSTM) variant of the self-encoder; that is, the following formula is used for the hidden layer information h (which may also be the sample hidden layer information in the training process):

$$h_t = o_t \odot \tanh(c_t) \qquad (8)$$

where $o_t$ and $c_t$ are the output gate state and the cell state activation vector of the LSTM unit, $\tanh(\cdot)$ is the hyperbolic tangent activation function, and $\odot$ denotes element-wise multiplication.
Specifically, the output of the neuron at each moment is jointly determined by the input at the current moment $T_t'$ (the target process parameter, or the sample process parameter during training), the output at the previous moment $h_{t-1}$ (the output information at the previous moment) and the memory cell storage information $c_t$. This information passes through the forget gate $f_t$ (the first parameter) and the input gate $i_t$ (the second parameter) to obtain the new memory cell storage information $c_t$; the memory cell storage information $c_t$ and the output gate state $o_t$ (the third parameter) then determine the neuron output $h_t$ (the hidden layer information). The specific calculation process is as follows:

$$o_t = \sigma(W_o \cdot [h_{t-1}, T_t'] + b_o) \qquad (10)$$

$$f_t = \sigma(W_f \cdot [h_{t-1}, T_t'] + b_f) \qquad (11)$$

$$i_t = \sigma(W_i \cdot [h_{t-1}, T_t'] + b_i) \qquad (12)$$

where $\sigma$ is the Sigmoid activation function; $W_f$, $W_i$, $W_c$ and $W_o$ are the weight matrices of the forget gate, the input candidate gate and the output gate, and $b_f$, $b_i$, $b_c$ and $b_o$ are the corresponding bias terms of the forget gate, the input candidate gate and the output gate.

Through the above steps, the model parameters of the LSTM self-encoder are continuously solved for and updated, and after training is finished the hidden layer feature space of the encoder $h = [h_1, h_2, \ldots, h_j]$ is obtained, where j is the dimension of the hidden layer feature space.
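For readers who want the gate equations in executable form, the NumPy sketch below implements a single LSTM step. The candidate-state and cell-state updates ($\tilde{c}_t$ and $c_t$) follow the standard LSTM formulation and are stated here as assumptions, since those two equations are not reproduced in the text above.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(T_t, h_prev, c_prev, W, b):
    """One LSTM step over the concatenated input [h_{t-1}, T'_t].

    W and b are dicts holding the weight matrices / biases of the forget
    gate (f), input gate (i), candidate gate (c) and output gate (o).
    """
    z = np.concatenate([h_prev, T_t])           # [h_{t-1}, T'_t]
    o_t = sigmoid(W["o"] @ z + b["o"])          # output gate, eq. (10)
    f_t = sigmoid(W["f"] @ z + b["f"])          # forget gate, eq. (11)
    i_t = sigmoid(W["i"] @ z + b["i"])          # input gate, eq. (12)
    c_tilde = np.tanh(W["c"] @ z + b["c"])      # candidate state (standard LSTM, assumed)
    c_t = f_t * c_prev + i_t * c_tilde          # memory cell storage information (assumed)
    h_t = o_t * np.tanh(c_t)                    # hidden layer information, eq. (8)
    return h_t, c_t

# Tiny usage example with random parameters (shapes are illustrative only)
hidden, feat = 4, 9
rng = np.random.default_rng(1)
W = {k: rng.normal(size=(hidden, hidden + feat)) for k in "fico"}
b = {k: np.zeros(hidden) for k in "fico"}
h, c = lstm_step(rng.normal(size=feat), np.zeros(hidden), np.zeros(hidden), W, b)
```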
Therefore, it can be understood that the set of process features and fineness indexes obtained in the embodiments of the present application is $\{(F_i, Y_i),\ i = 1, 2, \ldots, N\}$, where N is the number of samples: the number of target process parameters during the application process, and the number of sample process parameters during the training process. Here $F_i = [f_1, f_2, \ldots, f_{j+4}]$ is the combination of the statistical features and the latent features of the self-encoder: j is the dimension of the hidden layer feature space of the self-encoder, and the 4 statistical feature values are appended to the set, so the resulting $F_i$ is the process parameter index information.
Referring to fig. 8, in some embodiments, the fineness index prediction model is obtained through training steps, which may include steps S801 to S805:
step S801, obtaining sample technological parameters and corresponding sample fineness index true values;
step S802, calculating the statistical characteristics of the sample process parameters to obtain sample statistical characteristic values for describing the data characteristics of the sample process parameters;
Step S803, inputting the sample process parameters into a pre-trained self-encoder, and extracting the output result of the encoder to obtain the information of a sample hidden layer;
step S804, obtaining sample process parameter index information according to the sample hidden layer information and the sample statistical feature value, and inputting the sample process parameter index information into a fineness index prediction model to obtain a sample fineness index prediction result;
step S805, obtaining a second loss value according to the sample fineness index true value and the sample fineness index prediction result, and adjusting parameters of the fineness index prediction model according to the second loss value to obtain the trained fineness index prediction model.
For example, in the embodiment of the present application, the fineness index prediction model may be trained in advance. During the training process, the embodiment of the application can obtain the sample technological parameters.
The sample process parameters are similar to the target process parameters, and are only process parameters in the training process, so that the sample process parameters are process parameters which affect the efficiency and quality of raw meal grinding in the raw meal grinding process, and it is understood that the sample process parameters are parameters of a production machine, and the production machine can be enabled to grind raw meal with different degrees by adjusting different sample process parameters, so that raw meal with different fineness is obtained.
Exemplary, the sample process parameters may also be obtained after the normalization process, which is not described herein.
The fineness index real value is the fineness real value corresponding to different sample technological parameters, and the corresponding real value is marked in advance when a sample is obtained. For example, in the sample, the actual fineness index value corresponding to some sample process parameters is 0.08 fineness, and may also be 0.2 fineness.
By way of example, the embodiment of the application is directed to the problems of nonlinearity, strong coupling and the like of fineness indexes in the raw material grinding process, so that sample data needs to be processed, characteristics of the data can be quantized, and accordingly the processing capacity of a model can be improved and the processing errors of the model can be reduced by the data which are subsequently input into the model.
Illustratively, in the embodiments of the present application, by calculating the statistical characteristics of the process parameters of the sample, the data characteristics of the process parameters of the sample are characterized by the statistical characteristics, and the calculated statistical characteristics may be referred to as statistical characteristic values.
It will be appreciated that there may be a variety of data characteristics, such as trend of change of data, degree of dispersion, symmetry, steepness of data distribution, etc., and that the processing power of data may be improved by calculating sample statistics in the face of a large number of target process parameters.
Illustratively, the latent spatial features of the index parameters need to be extracted by the self-encoder in embodiments of the present application, and the self-encoder in the embodiments of the present application is an LSTM self-encoder. Specifically, in the training process, the embodiment of the application may input the sample process parameters into a pre-trained self-encoder. The self-encoder is a data compression algorithm provided with an encoder and a decoder: the encoder processes the input data and passes the processing result to the decoder, and the decoder's processing finally yields the output information of the self-encoder. The encoding part maps the sample process parameters to a hidden layer feature space, which is equivalent to learning a latent space representation of the indexes, and the decoding part tries to restore this new representation of the indexes to the original data.
For example, in the embodiment of the application, the potential spatial features of the extracted index parameters need to be implemented by using an encoder in the self-encoder, and the output result of the encoder is derived to obtain the information of the sample hidden layer, and the information of the sample hidden layer can be used for inputting a subsequent model. It will be appreciated that the sample hidden layer information is a hidden layer feature information, which is a potential feature extracted from the encoder.
After obtaining the sample hidden layer information output by the encoder, the combination of the statistical feature and the potential feature of the self-encoder can be formed based on the sample hidden layer information and the sample statistical feature value, which is called sample process parameter index information and can be used as an input parameter of a model, so that the process parameter index information is input into a fineness index prediction model to obtain a sample fineness index prediction result.
After the prediction result output by the model is obtained, in the embodiment of the application, the model is optimally trained according to the comparison between the output result of the model and the true value. Specifically, in the embodiment of the application, a second loss value is obtained according to the sample fineness index true value and the sample fineness index prediction result, and parameters of the fineness index prediction model are adjusted according to the second loss value, so that a trained fineness index prediction model is obtained.
By way of example, the embodiment of the application adopts the gradient boosting tree XGBoost algorithm, and the fineness index prediction model is a gradient boosting tree model. The XGBoost method integrates a plurality of weak classifiers into one strong classifier to reduce the error between the model's predicted value and the actual value. Its main idea is to continuously generate new decision trees, with each decision tree learned and trained on the residual between the predicted value and the target value of the previous decision tree, so that the deviation of the model is reduced and the prediction precision is improved.
According to the embodiment of the application, the fineness index prediction model is trained, the potential characteristic change rule of the process index is deeply excavated, the problems of nonlinearity, strong coupling and the like of the fineness index in the raw material grinding process are well solved, the accuracy and stability of the fineness index prediction model are improved, and finally, accurate raw material grinding fineness prediction can be provided in an application stage, and the efficiency and quality of raw material grinding are improved.
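A minimal sketch of this training stage, assuming the open-source xgboost package and illustrative feature dimensions (the helper name build_feature_vector and the random data are assumptions, not the patent's implementation):

```python
import numpy as np
import xgboost as xgb  # assumes the open-source xgboost package

def build_feature_vector(latent_h: np.ndarray, stats: dict) -> np.ndarray:
    """Concatenate the self-encoder latent features (dimension j) with the
    4 statistical feature values to form F_i = [f_1, ..., f_{j+4}]."""
    return np.concatenate([latent_h, [stats["mean"], stats["var"],
                                      stats["skew"], stats["kurt"]]])

# Dummy training set: N samples, j = 16 latent features + 4 statistics.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))         # sample process parameter index information F_i
y = rng.uniform(0.05, 0.25, size=500)  # sample fineness index true values Y_i

model = xgb.XGBRegressor(
    n_estimators=200,      # number of CART trees K
    max_depth=4,
    learning_rate=0.05,
    reg_lambda=1.0,        # lambda penalty factor
    gamma=0.0,             # gamma penalty factor
)
model.fit(X, y)
pred = model.predict(X[:5])   # sample fineness index prediction results
```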
Referring to fig. 9, in some embodiments, step S805 may include steps S901 to S904:
step S901, obtaining the total number of leaf nodes, the weight of each leaf node and a preset penalty factor in a fineness index prediction model;
step S902, obtaining regularization values of the fineness index prediction model according to the total number of the leaf nodes, the weight of each leaf node and the penalty factors;
step S903, obtaining a first objective function value of the fineness index prediction model according to the second loss value and the regularization value, and performing second-order Taylor expansion processing on the objective function value to obtain a processed second objective function value;
and step S904, iterating the fineness index prediction model for a plurality of times according to the second objective function value, and determining a target leaf node meeting the gain loss requirement after adjusting the weight and the second objective function value in the iteration to obtain the trained fineness index prediction model.
According to the above embodiment, the fineness index prediction model is a gradient boosting tree model, that is, an XGBoost gradient boosting tree model, and the trained fineness index prediction model is generated through multiple iterations.
Specifically, classification and regression trees (CART) are used as the decision trees of the model in the embodiment of the present application, and the prediction function of the model is as follows:

$$\hat{Y}_i = \sum_{k=1}^{K} f_k(F_i), \qquad f_k \in C$$

where $\hat{Y}_i$ is the final prediction result for the i-th sample, i.e. the sample fineness index prediction result; $F_i$ is the i-th feature sample input, namely the sample process parameter index information; $f_k(F_i)$ is the predicted value of the i-th feature sample on the k-th tree; K is the number of CART regression trees; and C is the set of all possible CART regression trees.
The objective function of the model consists of the second loss value obtained from the training error and a regularization value obtained by a regularization calculation; the calculation formula is as follows:

$$L^{(t)} = \sum_{i=1}^{M} l\left(Y_i,\ \hat{Y}_i^{(t-1)} + f_t(F_i)\right) + \Omega(f_t), \qquad \Omega(f_t) = \gamma G + \frac{1}{2}\lambda\sum_{j=1}^{G} w_j^2 \qquad (15)$$

where $L^{(t)}$ is the objective function after t iterations, i.e. the first objective function value, and $\hat{Y}_i^{(t-1)}$ is the prediction result of the i-th sample after t−1 iterations; l is the second loss value of the model, representing the error between the sample fineness index prediction result and the sample fineness index true value, which can be obtained from the difference between the two values; M is the number of sample process parameters input to the model; $\Omega$ is the regularization function, whose resulting positive value is used to prevent the model from over-fitting; G is the total number of leaf nodes of the CART regression tree model, and $w_j$ is the weight of the j-th leaf node of the constructed CART regression tree; $\gamma$ and $\lambda$ are penalty factors for avoiding over-fitting, and it is understood that the penalty factors may be adjusted during training according to the actual situation.
A second-order Taylor expansion of the above equation (15) gives:

$$L^{(t)} \approx \sum_{i=1}^{M}\left[ l\left(Y_i, \hat{Y}_i^{(t-1)}\right) + g_i f_t(F_i) + \frac{1}{2} h_i f_t^2(F_i) \right] + \Omega(f_t)$$

where $g_i$ and $h_i$ denote the first- and second-order derivatives of the loss l with respect to $\hat{Y}_i^{(t-1)}$. After the second-order Taylor expansion, the resulting $L^{(t)}$ is the second objective function value. It can be understood that, in the iterative process, the model in the embodiment of the present application continuously calculates the loss values of the nodes to obtain $w_j$, and finally selects the leaf nodes that meet the gain-loss requirement to obtain the required trained fineness index prediction model. Further, in the embodiment of the present application, the leaf node with the largest gain loss is selected as the target leaf node, so as to obtain the optimal fineness index prediction model.
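For completeness, in the standard XGBoost derivation (not reproduced in the original text and stated here as the usual textbook result, under the assumption that $I_j$ denotes the set of samples falling into leaf j of the tree $f_t$, with $I = I_L \cup I_R$ for a candidate split), the optimal leaf weight and the split gain that drive the leaf-node selection follow from this expansion as:

$$w_j^{*} = -\frac{\sum_{i \in I_j} g_i}{\sum_{i \in I_j} h_i + \lambda}, \qquad \text{Gain} = \frac{1}{2}\left[ \frac{\left(\sum_{i \in I_L} g_i\right)^2}{\sum_{i \in I_L} h_i + \lambda} + \frac{\left(\sum_{i \in I_R} g_i\right)^2}{\sum_{i \in I_R} h_i + \lambda} - \frac{\left(\sum_{i \in I} g_i\right)^2}{\sum_{i \in I} h_i + \lambda} \right] - \gamma$$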
As introduced in the above embodiments, in the cement raw material mill raw material fineness index prediction method of the embodiment of the present application, when predicting the raw material fineness index, the input process parameter data (historical process parameters) are first standardized to obtain the target process parameters; the process parameter indexes are then extracted by statistical calculation and by the trained self-encoder model; the parameter indexes are then respectively input into the fineness index prediction models for different fineness levels to perform prediction, and finally the fineness prediction results at the different fineness levels (the 0.08 fineness and the 0.2 fineness) are obtained.
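A hedged sketch of that application-stage flow is given below; the function signature, the two-model setup and the way the encoder and models are passed in are illustrative assumptions only.

```python
import numpy as np

def predict_fineness(history, mean, std, encoder, model_008, model_02):
    """history: (60, n_params) array of raw historical process parameters.
    mean, std: per-parameter normalization constants from the training data.
    encoder: trained LSTM encoder returning the latent features h for a window.
    model_008 / model_02: fineness index prediction models trained at the
    0.08 and 0.2 fineness levels (e.g. XGBoost regressors)."""
    target = (history - mean) / std                        # standardization
    stats = np.concatenate([target.mean(axis=0),
                            target.var(axis=0)])           # plus skew/kurt in practice
    latent = np.ravel(encoder(target))                     # hidden layer information h
    index_info = np.concatenate([latent, stats])[None, :]  # process parameter index information
    return {
        "fineness_0.08": float(model_008.predict(index_info)[0]),
        "fineness_0.2": float(model_02.predict(index_info)[0]),
    }
```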
Referring to fig. 10, the embodiment of the present application further provides a model training method. Fig. 10 is an optional flowchart of the model training method provided in the embodiment of the present application, and the method in fig. 10 may include, but is not limited to, steps S1001 to S1005.
Step S1001, obtaining sample process parameters and corresponding sample fineness index true values;
step S1002, calculating the statistical characteristics of the sample process parameters to obtain sample statistical characteristic values for describing the data characteristics of the sample process parameters;
step S1003, inputting sample process parameters into a pre-trained self-encoder, and extracting output results of the encoder in the encoder to obtain sample hidden layer information;
step S1004, obtaining sample process parameter index information according to the sample hidden layer information and the sample statistical feature value, and inputting the sample process parameter index information into a fineness index prediction model to obtain a sample fineness index prediction result;
step S1005, obtaining a second loss value according to the sample fineness index true value and the sample fineness index prediction result, and adjusting parameters of the fineness index prediction model according to the second loss value to obtain the trained fineness index prediction model.
For example, in the embodiment of the present application, the fineness index prediction model may be trained in advance. During the training process, the embodiment of the application can obtain the sample technological parameters. The model training method may also be applied to the cement raw material mill raw material fineness index prediction system described in the above embodiment, and is not particularly limited herein.
The sample process parameters are similar to the target process parameters, and are only process parameters in the training process, so that the sample process parameters are process parameters which affect the efficiency and quality of raw meal grinding in the raw meal grinding process, and it is understood that the sample process parameters are parameters of a production machine, and the production machine can be enabled to grind raw meal with different degrees by adjusting different sample process parameters, so that raw meal with different fineness is obtained.
The fineness index real value is the fineness real value corresponding to different sample technological parameters, and the corresponding real value is marked in advance when a sample is obtained. For example, in the sample, the actual fineness index value corresponding to some sample process parameters is 0.08 fineness, and may also be 0.2 fineness.
By way of example, the embodiment of the application is directed to the problems of nonlinearity, strong coupling and the like of fineness indexes in the raw material grinding process, so that sample data needs to be processed, characteristics of the data can be quantized, and accordingly the processing capacity of a model can be improved and the processing errors of the model can be reduced by the data which are subsequently input into the model.
Illustratively, in the embodiments of the present application, by calculating the statistical characteristics of the process parameters of the sample, the data characteristics of the process parameters of the sample are characterized by the statistical characteristics, and the calculated statistical characteristics may be referred to as statistical characteristic values.
It will be appreciated that there may be a variety of data characteristics, such as trend of change of data, degree of dispersion, symmetry, steepness of data distribution, etc., and that the processing power of data may be improved by calculating sample statistics in the face of a large number of target process parameters.
Illustratively, the latent spatial features of the index parameters need to be extracted by the self-encoder in embodiments of the present application, and the self-encoder in the embodiments of the present application is an LSTM self-encoder. Specifically, in the training process, the embodiment of the application may input the sample process parameters into a pre-trained self-encoder. The self-encoder is a data compression algorithm provided with an encoder and a decoder: the encoder processes the input data and passes the processing result to the decoder, and the decoder's processing finally yields the output information of the self-encoder. The encoding part maps the sample process parameters to a hidden layer feature space, which is equivalent to learning a latent space representation of the indexes, and the decoding part tries to restore this new representation of the indexes to the original data.
For example, in the embodiment of the application, the potential spatial features of the extracted index parameters need to be implemented by using an encoder in the self-encoder, and the output result of the encoder is derived to obtain the information of the sample hidden layer, and the information of the sample hidden layer can be used for inputting a subsequent model. It will be appreciated that the sample hidden layer information is a hidden layer feature information, which is a potential feature extracted from the encoder.
After obtaining the sample hidden layer information output by the encoder, the combination of the statistical feature and the potential feature of the self-encoder can be formed based on the sample hidden layer information and the sample statistical feature value, which is called sample process parameter index information and can be used as an input parameter of a model, so that the process parameter index information is input into a fineness index prediction model to obtain a sample fineness index prediction result.
After the prediction result output by the model is obtained, in the embodiment of the application, the model is optimally trained according to the comparison between the output result of the model and the true value. Specifically, in the embodiment of the application, a second loss value is obtained according to the sample fineness index true value and the sample fineness index prediction result, and parameters of the fineness index prediction model are adjusted according to the second loss value, so that a trained fineness index prediction model is obtained.
By way of example, the embodiment of the application adopts the gradient boosting tree XGBoost algorithm, and the fineness index prediction model is a gradient boosting tree model. The XGBoost method integrates a plurality of weak classifiers into one strong classifier to reduce the error between the model's predicted value and the actual value. Its main idea is to continuously generate new decision trees, with each decision tree learned and trained on the residual between the predicted value and the target value of the previous decision tree, so that the deviation of the model is reduced and the prediction precision is improved.
According to the embodiment of the application, the fineness index prediction model is trained, the potential characteristic change rule of the process index is deeply excavated, the problems of nonlinearity, strong coupling and the like of the fineness index in the raw material grinding process are well solved, the accuracy and stability of the fineness index prediction model are improved, and finally, accurate raw material grinding fineness prediction can be provided in an application stage, and the efficiency and quality of raw material grinding are improved.
Referring to fig. 11, an embodiment of the present application further provides a system for predicting a fineness index of a raw material of a cement raw material mill, which may implement the method for predicting a fineness index of a raw material of a cement raw material mill, where the system for predicting a fineness index of a raw material of a cement raw material mill includes:
A parameter obtaining module 1101, configured to obtain a target process parameter in a cement raw material mill raw material production process;
a statistical feature calculation module 1102, configured to calculate a statistical feature of the target process parameter, and obtain a statistical feature value for describing a data feature of the target process parameter;
the self-encoder processing module 1103 is configured to input the target process parameter into a pre-trained self-encoder, and extract an output result of the encoder in the self-encoder to obtain hidden layer information;
the model processing module 1104 is configured to obtain process parameter index information according to the hidden layer information and the statistical feature values, and input the process parameter index information into a pre-trained fineness index prediction model to obtain a fineness index prediction result.
For example, the system for predicting the fineness index of the raw material of the cement raw material mill (may be simply referred to as a system) may perform the method for predicting the fineness index of the raw material of the cement raw material mill in any one of the above embodiments, and in the process of predicting the fineness index of the raw material of the cement raw material mill, the system needs to obtain the target process parameters in the process of producing the raw material of the cement raw material mill.
The target process parameters are exemplary process parameters which affect the efficiency and quality of raw material grinding in the raw material grinding process, and it is understood that the target process parameters are parameters of a production machine, and by adjusting different target process parameters, the production machine can grind raw materials to different degrees, so as to obtain raw materials with different fineness.
It will be appreciated that there may be a plurality of target process parameters, which together determine the efficiency and quality of raw meal grinding, and embodiments of the present application are not particularly limited.
By way of example, the embodiment of the application is directed to the problems of nonlinearity, strong coupling and the like of fineness indexes in the raw material grinding process, so that data needs to be processed, characteristics of the data can be quantized, and accordingly the processing capacity of the model can be improved and the processing errors of the model can be reduced by the data which are subsequently input into the model.
Illustratively, in the embodiments of the present application, by calculating the statistical characteristics of the target process parameters, characterizing the data characteristics of the target process parameters by the statistical characteristics, the calculated statistical characteristics may be referred to as statistical characteristic values.
It will be appreciated that there may be a variety of data characteristics, such as trend of change of data, degree of dispersion, symmetry, degree of steepness of data distribution, etc., and that the processing power of data may be improved by calculating statistical characteristics in the face of a large number of target process parameters.
Illustratively, the latent spatial features of the index parameters need to be extracted by the self-encoder in embodiments of the present application, and the self-encoder in the embodiments of the present application is an LSTM self-encoder. Specifically, in the application process, the embodiment of the application may input the target process parameters into a pre-trained self-encoder. The self-encoder is a data compression algorithm provided with an encoder and a decoder: the encoder processes the input data and passes the processing result to the decoder, and the decoder's processing finally yields the output information of the self-encoder. The encoding part maps the target process parameters to a hidden layer feature space, which is equivalent to learning a latent space representation of the indexes, and the decoding part tries to restore this new representation of the indexes to the original data.
Illustratively, in the embodiment of the present application, the potential spatial features of the extracted index parameters need to be implemented by using an encoder in the self-encoder, and the output result of the encoder is derived to obtain hidden layer information, and the hidden layer information can be used for inputting a subsequent model. It will be appreciated that the hidden layer information is a hidden layer feature information, which is a potential feature extracted from the encoder.
After obtaining the hidden layer information output by the encoder, the embodiment of the application can form a combination of the statistical feature and the latent feature of the self-encoder based on the hidden layer information and the statistical feature value, and is called as process parameter index information, and the process parameter index information can be used as an input parameter of a model, so that the process parameter index information is input into a pre-trained fineness index prediction model to obtain a fineness index prediction result.
The fineness index prediction model is an exemplary pre-trained machine learning model, the model can input technological parameter index information, process and predict the technological parameter index information, and in the application stage, the fineness index prediction result of raw material grinding can be finally output through the model.
By way of example, the embodiment of the application adopts the gradient boosting tree XGBoost algorithm, and the fineness index prediction model is a gradient boosting tree model. The XGBoost method integrates a plurality of weak classifiers into one strong classifier to reduce the error between the model's predicted value and the actual value. Its main idea is to continuously generate new decision trees, with each decision tree learned and trained on the residual between the predicted value and the target value of the previous decision tree, so that the deviation of the model is reduced and the prediction precision is improved.
The cement raw material mill raw material fineness index prediction system in the embodiment of the application integrates statistics, an LSTM self-encoder and a gradient boosting tree. The process parameter indexes are extracted through statistical calculation and the LSTM self-encoder model and are then input into the XGBoost model for prediction; the model deeply mines the latent characteristic change rule of the process indexes, which addresses the problems of nonlinearity, strong coupling and the like of the fineness index in the raw material grinding process, improves the accuracy and stability of the fineness prediction model, and finally provides accurate raw material grinding fineness prediction and improves the efficiency and quality of raw material grinding.
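As an illustrative (hypothetical) wiring of the four modules described above, the system could be sketched as a thin orchestrating class; the module interfaces shown are assumptions, not the patent's API:

```python
class CementRawMealFinenessPredictionSystem:
    """Sketch of the four modules wired together; the module objects are
    assumed to expose the methods shown (hypothetical interfaces)."""

    def __init__(self, parameter_module, stats_module, encoder_module, model_module):
        self.parameter_module = parameter_module   # parameter obtaining module
        self.stats_module = stats_module           # statistical feature calculation module
        self.encoder_module = encoder_module       # self-encoder processing module
        self.model_module = model_module           # model processing module

    def predict(self):
        target = self.parameter_module.get_target_process_parameters()
        stats = self.stats_module.compute(target)
        hidden = self.encoder_module.extract_hidden(target)
        return self.model_module.predict(hidden, stats)
```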
The specific implementation of the system for predicting the fineness index of the raw material of the cement raw material mill is basically the same as that of the specific embodiment of the method for predicting the fineness index of the raw material of the cement raw material mill, and is not repeated here. On the premise of meeting the requirements of the embodiment of the application, the system for predicting the fineness index of the raw material of the cement raw material mill can be provided with other functional modules so as to realize the method for predicting the fineness index of the raw material of the cement raw material mill in the embodiment.
The embodiment of the application also provides electronic equipment, which comprises a memory and a processor, wherein the memory stores a computer program, and the processor realizes the cement raw material grinding material fineness index prediction method when executing the computer program. The electronic equipment can be any intelligent terminal including a tablet personal computer, a vehicle-mounted computer and the like.
Referring to fig. 12, fig. 12 illustrates a hardware structure of an electronic device according to another embodiment, the electronic device includes:
the processor 1201 may be implemented by a general purpose CPU (central processing unit), a microprocessor, an application specific integrated circuit (ApplicationSpecificIntegratedCircuit, ASIC), or one or more integrated circuits, etc. for executing related programs to implement the technical solutions provided by the embodiments of the present application;
memory 1202 may be implemented in the form of read-only memory (ReadOnlyMemory, ROM), static storage, dynamic storage, or random access memory (RandomAccessMemory, RAM). The memory 1202 may store an operating system and other application programs, and when the technical solution provided in the embodiments of the present disclosure is implemented by software or firmware, relevant program codes are stored in the memory 1202, and the processor 1201 invokes the cement raw material mill fineness index prediction method or the model training method to execute the embodiments of the present disclosure;
an input/output interface 1203 for implementing information input and output;
the communication interface 1204 is configured to implement communication interaction between the device and other devices, and may implement communication in a wired manner (e.g., USB, network cable, etc.), or may implement communication in a wireless manner (e.g., mobile network, WIFI, bluetooth, etc.);
A bus 1205 for transferring information between various components of the device such as the processor 1201, memory 1202, input/output interface 1203, and communication interface 1204;
wherein the processor 1201, the memory 1202, the input/output interface 1203 and the communication interface 1204 enable communication connection between each other inside the device via a bus 1205.
The embodiment of the application also provides a computer readable storage medium, wherein the computer readable storage medium stores a computer program, and the computer program realizes the cement raw material grinding raw material fineness index prediction method or the model training method when being executed by a processor.
The memory, as a non-transitory computer readable storage medium, may be used to store non-transitory software programs as well as non-transitory computer executable programs. In addition, the memory may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory optionally includes memory remotely located relative to the processor, the remote memory being connectable to the processor through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The embodiments described in the embodiments of the present application are for more clearly describing the technical solutions of the embodiments of the present application, and do not constitute a limitation on the technical solutions provided by the embodiments of the present application, and as those skilled in the art can know that, with the evolution of technology and the appearance of new application scenarios, the technical solutions provided by the embodiments of the present application are equally applicable to similar technical problems.
It will be appreciated by those skilled in the art that the technical solutions shown in the figures do not constitute limitations of the embodiments of the present application, and may include more or fewer steps than shown, or may combine certain steps, or different steps.
The above described apparatus embodiments are merely illustrative, wherein the units illustrated as separate components may or may not be physically separate, i.e. may be located in one place, or may be distributed over a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
Those of ordinary skill in the art will appreciate that all or some of the steps of the methods, systems, functional modules/units in the devices disclosed above may be implemented as software, firmware, hardware, and suitable combinations thereof.
The terms "first," "second," "third," "fourth," and the like in the description of the present application and in the above-described figures, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that embodiments of the present application described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It should be understood that in this application, "at least one" means one or more, and "a plurality" means two or more. "and/or" for describing the association relationship of the association object, the representation may have three relationships, for example, "a and/or B" may represent: only a, only B and both a and B are present, wherein a, B may be singular or plural. The character "/" generally indicates that the context-dependent object is an "or" relationship. "at least one of" or the like means any combination of these items, including any combination of single item(s) or plural items(s). For example, at least one (one) of a, b or c may represent: a, b, c, "a and b", "a and c", "b and c", or "a and b and c", wherein a, b, c may be single or plural.
In the several embodiments provided in this application, it should be understood that the disclosed systems and methods may be implemented in other ways. For example, the system embodiments described above are merely illustrative, e.g., the division of the above elements is merely a logical functional division, and there may be additional divisions in actual implementation, e.g., multiple elements or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described above as separate components may or may not be physically separate, and components shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be embodied essentially or in part or all of the technical solution or in part in the form of a software product stored in a storage medium, including multiple instructions to cause a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the methods of the various embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a magnetic disk, or an optical disk, or other various media capable of storing a program.
Preferred embodiments of the present application are described above with reference to the accompanying drawings, and thus do not limit the scope of the claims of the embodiments of the present application. Any modifications, equivalent substitutions and improvements made by those skilled in the art without departing from the scope and spirit of the embodiments of the present application shall fall within the scope of the claims of the embodiments of the present application.

Claims (13)

1. A method for predicting a raw material fineness index of a cement raw material mill, the method comprising:
obtaining target technological parameters in the production process of the raw materials of the cement raw material mill;
calculating the statistical characteristics of the target process parameters to obtain statistical characteristic values for describing the data characteristics of the target process parameters;
inputting the target process parameters into a pre-trained self-encoder, and extracting an output result of the encoder in the self-encoder to obtain hidden layer information;
obtaining technological parameter index information according to the hidden layer information and the statistical feature values, and inputting the technological parameter index information into a pre-trained fineness index prediction model to obtain a fineness index prediction result.
2. The method for predicting fineness index of raw material of cement raw material mill according to claim 1, wherein the fineness index prediction model comprises a first fineness index prediction model and a second fineness index prediction model, the first fineness index prediction model is obtained by training sample process parameters under a first fineness, and the second fineness index prediction model is obtained by training sample process parameters under a second fineness;
Inputting the technological parameter index information into a pre-trained fineness index prediction model to obtain a fineness index prediction result, wherein the method comprises the steps of,
inputting the technological parameter index information into the pre-trained first fineness index prediction model to obtain a first index prediction result based on a first fineness;
inputting the technological parameter index information into the pre-trained second fineness index prediction model to obtain a second index prediction result based on the second fineness.
3. The method for predicting fineness index of raw material of cement raw material mill according to claim 1, wherein the obtaining the target process parameters in the production process of raw material of cement raw material mill comprises:
acquiring a plurality of historical process parameters at historical moments in the cement raw material mill raw material production process, wherein the historical process parameters comprise powder concentrator current, circulating fan current, mill pressure difference, mill outlet pressure, mill outlet temperature, elevator current, grinding pressure, yield feedback information and vertical mill host current;
and taking a plurality of the historical process parameters as target process parameters in the production process of the raw materials of the cement raw material mill.
4. A method for predicting a raw material fineness index of a cement raw material mill according to claim 1 or 3, wherein the obtaining the target process parameters in the raw material production process of the cement raw material mill comprises:
acquiring a plurality of historical process parameters at historical moments in the production process of the raw materials of the cement raw material mill;
calculating the average value and standard deviation corresponding to each historical technological parameter;
and respectively carrying out standardization treatment on each historical process parameter according to the mean value and the standard deviation to obtain a plurality of target process parameters.
5. The method for predicting the fineness index of a raw material of a cement raw material mill according to claim 1, wherein the target process parameters are plural;
the calculating the statistical characteristic of the target process parameter to obtain a statistical characteristic value for describing the data characteristic of the target process parameter comprises the following steps:
calculating the average value of a plurality of target process parameters to obtain a target average value for describing the concentrated trend of the target process parameters;
calculating variances of a plurality of target process parameters to obtain target variances for describing the discrete degree of the target process parameters;
calculating the skewness of a plurality of target process parameters to obtain target skewness for describing the symmetry of the distribution of the target process parameters;
Calculating kurtosis of a plurality of target process parameters to obtain target kurtosis for describing the steepness degree of the distribution of the target process parameters;
at least one of the target mean, the target variance, the target skewness, and the target kurtosis is taken as a statistical feature value for describing a data feature of the target process parameter.
6. The method for predicting fineness index of raw material of cement raw material mill according to claim 1, wherein the self-encoder is further provided with a decoder;
the self-encoder is obtained through training the following steps:
obtaining sample technological parameters;
inputting the sample technological parameters into the encoder for processing to obtain sample hidden layer information output by the encoder;
inputting the information of the sample hidden layer into the decoder for processing to obtain sample output information output by the decoder;
and calculating a first loss value of the self-encoder according to the sample process parameters and the sample output information, and adjusting the parameters of the self-encoder according to the first loss value to obtain the trained self-encoder.
7. The cement raw material mill raw material fineness index prediction method according to claim 1 or 6, wherein a forgetting gate, an input candidate gate and an output gate are provided in the encoder;
The obtaining hidden layer information comprises the following steps:
acquiring a first parameter of the forgetting gate, a second parameter of the input gate, a third parameter of the output gate and output information of the encoder at the last moment;
updating the memory unit storage information of the input candidate gate according to the target technological parameter, the first parameter, the second parameter and the output information at the previous moment;
and performing element multiplication operation processing according to the memory unit storage information and the third parameter to obtain hidden layer information output by the encoder.
8. The method for predicting fineness index of raw mill material for cement raw material according to claim 1, wherein the fineness index prediction model is obtained by training the following steps:
acquiring sample technological parameters and corresponding sample fineness index true values;
calculating the statistical characteristics of the sample process parameters to obtain sample statistical characteristic values for describing the data characteristics of the sample process parameters;
inputting the sample process parameters into the self-encoder trained in advance, and extracting an output result of the encoder to obtain sample hidden layer information;
according to the sample hidden layer information and the sample statistical feature value, inputting the sample process parameter index information into the fineness index prediction model to obtain a sample fineness index prediction result;
Obtaining a second loss value according to the sample fineness index true value and the sample fineness index prediction result, and adjusting parameters of the fineness index prediction model according to the second loss value to obtain the trained fineness index prediction model.
9. The method for predicting fineness index of raw material of cement raw material mill according to claim 8, wherein the fineness index prediction model is a gradient boosting tree model, and classification and regression trees are adopted as decision trees of the model, and the trained fineness index prediction model is generated through multiple iterations;
the step of adjusting the parameters of the fineness index prediction model according to the second loss value to obtain the trained fineness index prediction model, comprising:
acquiring the total number of leaf nodes, the weight of each leaf node and a preset penalty factor in the fineness index prediction model;
obtaining regularization values of the fineness index prediction model according to the total number of the leaf nodes, the weight of each leaf node and the penalty factors;
obtaining a first objective function value of the fineness index prediction model according to the second loss value and the regularization value, and performing second-order Taylor expansion processing on the objective function value to obtain a processed second objective function value;
And iterating the fineness index prediction model for a plurality of times according to the second objective function value, and determining a target leaf node meeting the gain loss requirement after adjusting the weight and the second objective function value in the iteration to obtain the trained fineness index prediction model.
10. A method of model training, the method comprising:
acquiring sample technological parameters and corresponding sample fineness index true values;
calculating the statistical characteristics of the sample process parameters to obtain sample statistical characteristic values for describing the data characteristics of the sample process parameters;
inputting the sample process parameters into a pre-trained self-encoder, and extracting an output result of the encoder in the self-encoder to obtain sample hidden layer information;
according to the sample hidden layer information and the sample statistical feature value, inputting the sample process parameter index information into a fineness index prediction model to obtain a sample fineness index prediction result;
obtaining a second loss value according to the sample fineness index true value and the sample fineness index prediction result, and adjusting parameters of the fineness index prediction model according to the second loss value to obtain the trained fineness index prediction model.
11. A cement raw material mill raw material fineness index prediction system, the system comprising:
the parameter acquisition module is used for acquiring target technological parameters in the production process of the raw materials of the cement raw material mill;
the statistical characteristic calculation module is used for calculating the statistical characteristic of the target process parameter to obtain a statistical characteristic value for describing the data characteristic of the target process parameter;
the self-encoder processing module is used for inputting the target process parameters into a pre-trained self-encoder, extracting an output result of the encoder in the self-encoder and obtaining hidden layer information;
the model processing module is used for inputting the technological parameter index information into a pre-trained fineness index prediction model according to the hidden layer information and the statistical feature value, and obtaining a fineness index prediction result.
12. An electronic device comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the cement raw material mill raw material fineness index prediction method according to any one of claims 1 to 9, or the model training method according to claim 10, when executing the computer program.
13. A computer-readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the cement raw material mill raw material fineness index prediction method according to any one of claims 1 to 9, or the model training method according to claim 10.
CN202310374251.5A 2023-04-04 2023-04-04 Cement raw material mill raw material fineness index prediction method, model training method and system Pending CN116451856A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310374251.5A CN116451856A (en) 2023-04-04 2023-04-04 Cement raw material mill raw material fineness index prediction method, model training method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310374251.5A CN116451856A (en) 2023-04-04 2023-04-04 Cement raw material mill raw material fineness index prediction method, model training method and system

Publications (1)

Publication Number Publication Date
CN116451856A true CN116451856A (en) 2023-07-18

Family

ID=87135091

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310374251.5A Pending CN116451856A (en) 2023-04-04 2023-04-04 Cement raw material mill raw material fineness index prediction method, model training method and system

Country Status (1)

Country Link
CN (1) CN116451856A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117930760A (en) * 2023-12-19 2024-04-26 天瑞集团信息科技有限公司 Automatic regulation and control method, system and storage medium for cement production line based on AI technology

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination