CN109213807A - Incremental learning method and device for intelligent manufacturing big data - Google Patents
Incremental learning method and device for intelligent manufacturing big data
- Publication number
- CN109213807A CN109213807A CN201811119092.XA CN201811119092A CN109213807A CN 109213807 A CN109213807 A CN 109213807A CN 201811119092 A CN201811119092 A CN 201811119092A CN 109213807 A CN109213807 A CN 109213807A
- Authority
- CN
- China
- Prior art keywords
- data
- intelligent manufacturing
- big data
- parameter
- follows
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
Abstract
The present invention relates to the technical field of incremental learning for intelligent manufacturing, and in particular to an incremental learning method and device for intelligent manufacturing big data. Historical data from the manufacturing process is collected to form a classified data set; new data is then acquired as input data, a coding function establishes the mapping between input-layer data and hidden-layer data, a decoding function maps the hidden-layer data to the actual output of the network, and the node parameters and weights are optimized to form a classified incremental learning model. This guarantees the integrity of the big data generated by intelligent manufacturing and allows the big data to be analyzed and processed in a timely manner.
Description
Technical field
The present invention relates to the technical field of incremental learning for intelligent manufacturing, and in particular to an incremental learning method and device for intelligent manufacturing big data.
Background technique
Applying machine learning to the manufacturing process further empowers intelligent manufacturing systems, giving them the ability to replace or assist managers and domain experts in making decisions about uncertain business conditions.
The big data generated in the manufacturing process changes at high speed: data are produced at a tremendous rate, and their content and distribution characteristics are in constant dynamic change. In this setting, the typical approach is to combine all new data with the previous data and train a new classifier from scratch. This approach discards all previously acquired knowledge, a phenomenon known as catastrophic forgetting. Moreover, if the previous data sets are lost, discarded, damaged, inaccessible, or otherwise unavailable, combining the old and new data sets is not even a feasible option. Incremental learning is the solution for this scenario; it can be defined as the process of extracting new information without losing the prior knowledge already obtained from earlier data sets.
Meanwhile big data requires to obtain analysis in time and processing.Typical deep learning is stacked using feedforward neural network
It forms, feedforward neural network is a kind of Static Learning model, and deep learning is caused to be difficult to learn in high speed dynamic change
Big data feature.
Therefore, how to guarantee the integrity of the big data generated by intelligent manufacturing, and how to analyze and process that big data in a timely manner, has become a critical issue.
Summary of the invention
The present invention provides an incremental learning method and device for intelligent manufacturing big data that can guarantee the integrity of the big data generated by intelligent manufacturing and analyze and process the big data in a timely manner.
The incremental learning method for intelligent manufacturing big data provided by the present invention comprises the following steps:
Step A: collect the historical data of the manufacturing process to form a classified data set D_t;
Step B: acquire new data as the input data x, and establish the mapping relationship between the input-layer data x and the hidden-layer data f(x) through a coding function; the calculation formula of the coding function is as follows:
wherein a_i, b_i are random input-layer node parameters, β_i is the weight from the input layer to the hidden layer, and L is the number of input-layer nodes;
Step C: map the hidden-layer data f(x) to the actual output g(x) of the network through a decoding function; the calculation formula of the decoding function is as follows:
wherein a_j, b_j are random hidden-layer node parameters, β_j is the weight from the hidden layer to the output layer, and L′ is the number of hidden-layer nodes;
Step D: optimize the node parameters a_i, b_i, a_j, b_j and the weights β_i, β_j;
Step E: form the classified incremental learning model.
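The coding and decoding formulas appear only as images in the original patent and are not reproduced in this text. Given the random node parameters (a_i, b_i), the weights β_i and β_j, and the node counts L and L′ named above, a plausible reading is an extreme-learning-machine-style layer of the form f(x) = Σ β_i σ(a_i·x + b_i). Steps B and C could then be sketched as follows; all dimensions and the sigmoid activation are illustrative assumptions, not taken from the patent:

```python
import numpy as np

rng = np.random.default_rng(42)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

n_in, d_hidden, n_out = 8, 5, 8   # autoencoder-style: output dim = input dim
L, L_prime = 20, 15               # node counts of the coding and decoding layers

# Step B: coding function with random node parameters (a_i, b_i) and weights beta_i
a_i = rng.standard_normal((n_in, L))
b_i = rng.standard_normal(L)
beta_i = rng.standard_normal((L, d_hidden))

def f(x):
    """Map input-layer data x to hidden-layer data f(x)."""
    return sigmoid(x @ a_i + b_i) @ beta_i

# Step C: decoding function with random node parameters (a_j, b_j) and weights beta_j
a_j = rng.standard_normal((d_hidden, L_prime))
b_j = rng.standard_normal(L_prime)
beta_j = rng.standard_normal((L_prime, n_out))

def g(x):
    """Map hidden-layer data f(x) to the actual network output g(x)."""
    return sigmoid(f(x) @ a_j + b_j) @ beta_j

x = rng.standard_normal(n_in)
print(f(x).shape, g(x).shape)   # (5,) (8,)
```

Because the node parameters are drawn randomly and held fixed, only the weights β need to be optimized in step D, which is what makes the per-update cost low enough for streaming data.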
Further, the data in step A specifically include: the amplitude, frequency, and phase of the signals generated in each intelligent manufacturing link, and the manufacturing node at which each signal is located.
Further, step A specifically includes:
Step A1: let X_n = {x_1, x_2, ..., x_m} be the item to be classified, where x_i denotes a characteristic attribute of the n-dimensional feature space X_n, and let C = {Y_1, Y_2, ..., Y_m} be the category set;
Step A2: separately calculate P(y_1|X_n), P(y_2|X_n), ..., P(y_m|X_n);
Step A3: classify X_n by the following formula:
P(y_i|X_n) = max{P(y_1|X_n), P(y_2|X_n), ..., P(y_m|X_n)},
in which case X_n ∈ Y_i;
Step A4: form the data set D_t, where D_t = (x_i, y_i) (i = 1, ..., m) and y_i denotes the class label corresponding to x_i.
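Steps A1–A4 describe a maximum-a-posteriori rule: the item is assigned to the class Y_i whose posterior probability P(y_i|X_n) is largest. The patent does not specify how the posteriors are computed; the sketch below assumes Gaussian class-conditional densities purely for illustration:

```python
import numpy as np

def classify(x, priors, means, variances):
    """Steps A2-A3: compute P(y_i | x) for every class (up to a shared
    normalizing constant) and return the index of the largest posterior."""
    log_posteriors = []
    for p, mu, var in zip(priors, means, variances):
        # log Gaussian likelihood, assuming independent features
        ll = -0.5 * np.sum((x - mu) ** 2 / var + np.log(2 * np.pi * var))
        log_posteriors.append(np.log(p) + ll)
    return int(np.argmax(log_posteriors))

# two toy classes centered at 0 and at 5
priors = [0.5, 0.5]
means = [np.zeros(3), np.full(3, 5.0)]
variances = [np.ones(3), np.ones(3)]

print(classify(np.array([4.8, 5.1, 5.2]), priors, means, variances))  # 1
print(classify(np.array([0.1, -0.2, 0.3]), priors, means, variances))  # 0
```

Each classified pair (x_i, y_i) would then be appended to the data set D_t of step A4.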
Further, step D specifically includes:
Step D1: calculate the output value f(x, θ) of the hidden-layer neurons and the output value g(x, θ) of the neural network model through the forward-propagation model, where f(x, θ) and g(x, θ) denote the actual outputs obtained from x through nonlinear mapping when the parameters are θ, with θ = {a_i, b_i, β_i, a_j, b_j, β_j};
Step D2: calculate the difference Δy between the actual output of the network model and the desired output y; the calculation formula is: Δy = y − g(x, θ);
Step D3: calculate the partial derivatives of the network output g(x, θ) with respect to the initial parameters θ through the back-propagation model; the calculation formula is as follows:
Step D4: calculate the inverse matrix of the partial-derivative matrix f′(x, θ); the calculation formula is as follows:
Step D5: calculate the parameter increment Δθ of the network model according to the following formula:
wherein u is the learning rate;
Step D6: take θ + Δθ as the updated parameters of the network model.
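The update formulas of steps D3–D5 are given only as images in the original patent. Taken together, D2–D6 read like a Gauss–Newton-style step: form the residual Δy, the matrix of partial derivatives of the output with respect to θ, its (pseudo-)inverse, and presumably Δθ = u · f′(x, θ)⁻¹ · Δy. A toy sketch with a numerically approximated Jacobian follows; the model, dimensions, iteration count, and learning rate u are illustrative assumptions:

```python
import numpy as np

def numeric_jacobian(fn, theta, eps=1e-6):
    """Step D3 (approximated): partial derivatives of fn(theta) w.r.t.
    each parameter, via forward differences."""
    y0 = fn(theta)
    J = np.zeros((y0.size, theta.size))
    for k in range(theta.size):
        t = theta.copy()
        t[k] += eps
        J[:, k] = (fn(t) - y0) / eps
    return J

def update_step(fn, theta, y_target, u=0.5):
    """Steps D2-D6: theta <- theta + u * pinv(J) @ (y - g(x, theta))."""
    dy = y_target - fn(theta)          # D2: residual
    J = numeric_jacobian(fn, theta)    # D3: partial-derivative matrix
    J_inv = np.linalg.pinv(J)          # D4: pseudo-inverse (J is non-square)
    return theta + u * (J_inv @ dy)    # D5-D6: increment and update

# toy model g(x, theta) with a fixed input x
x = np.array([1.0, 2.0])
def g(theta):
    W = theta.reshape(2, 2)
    return np.tanh(W @ x)

theta = np.zeros(4)
y = np.array([0.3, -0.2])
for _ in range(50):
    theta = update_step(g, theta, y)
print(np.round(g(theta), 3))  # converges to approximately [0.3, -0.2]
```

In the patented method the derivatives would come from the back-propagation model of step D3 rather than finite differences, but the structure of the update is the same.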
The incremental learning device for intelligent manufacturing big data provided by the present invention includes a computer-readable medium storing computer-readable instructions, and the computer-readable instructions can be executed by a processor to implement any of the methods described above.
The beneficial effects of the present invention are as follows: the present invention discloses an incremental learning method and device for intelligent manufacturing big data. Historical data of the manufacturing process is collected to form a classified data set; new data is then acquired as input data, a coding function establishes the mapping between input-layer data and hidden-layer data, a decoding function maps the hidden-layer data to the actual output of the network, and the node parameters and weights are optimized to form a classified incremental learning model, thereby guaranteeing the integrity of the big data generated by intelligent manufacturing and allowing the big data to be analyzed and processed in a timely manner.
Description of the drawings
The invention will be further described below with reference to the accompanying drawings and embodiments.
Fig. 1 is a schematic flow chart of an incremental learning method for intelligent manufacturing big data according to an embodiment of the present invention;
Fig. 2 is a schematic flow chart of step D of the incremental learning method for intelligent manufacturing big data according to an embodiment of the present invention.
Specific embodiment
With reference to Figs. 1-2, the incremental learning method for intelligent manufacturing big data provided by the present invention comprises the following steps:
Step A: collect the historical data of the manufacturing process to form a classified data set D_t;
Step B: acquire new data as the input data x, and establish the mapping relationship between the input-layer data x and the hidden-layer data f(x) through a coding function; the calculation formula of the coding function is as follows:
wherein a_i, b_i are random input-layer node parameters, β_i is the weight from the input layer to the hidden layer, and L is the number of input-layer nodes;
Step C: map the hidden-layer data f(x) to the actual output g(x) of the network through a decoding function; the calculation formula of the decoding function is as follows:
wherein a_j, b_j are random hidden-layer node parameters, β_j is the weight from the hidden layer to the output layer, and L′ is the number of hidden-layer nodes;
Step D: optimize the node parameters a_i, b_i, a_j, b_j and the weights β_i, β_j;
Step E: form the classified incremental learning model.
Further, the data in step A specifically include: the amplitude, frequency, and phase of the signals generated in each intelligent manufacturing link, and the manufacturing node at which each signal is located.
Further, step A specifically includes:
Step A1: let X_n = {x_1, x_2, ..., x_m} be the item to be classified, where x_i denotes a characteristic attribute of the n-dimensional feature space X_n, and let C = {Y_1, Y_2, ..., Y_m} be the category set;
Step A2: separately calculate P(y_1|X_n), P(y_2|X_n), ..., P(y_m|X_n);
Step A3: classify X_n by the following formula:
P(y_i|X_n) = max{P(y_1|X_n), P(y_2|X_n), ..., P(y_m|X_n)},
in which case X_n ∈ Y_i;
Step A4: form the data set D_t, where D_t = (x_i, y_i) (i = 1, ..., m) and y_i denotes the class label corresponding to x_i.
Further, step D specifically includes:
Step D1: calculate the output value f(x, θ) of the hidden-layer neurons and the output value g(x, θ) of the neural network model through the forward-propagation model, where f(x, θ) and g(x, θ) denote the actual outputs obtained from x through nonlinear mapping when the parameters are θ, with θ = {a_i, b_i, β_i, a_j, b_j, β_j};
Step D2: calculate the difference Δy between the actual output of the network model and the desired output y; the calculation formula is: Δy = y − g(x, θ);
Step D3: calculate the partial derivatives of the network output g(x, θ) with respect to the initial parameters θ through the back-propagation model; the calculation formula is as follows:
Step D4: calculate the inverse matrix of the partial-derivative matrix f′(x, θ); the calculation formula is as follows:
Step D5: calculate the parameter increment Δθ of the network model according to the following formula:
wherein u is the learning rate;
Step D6: take θ + Δθ as the updated parameters of the network model.
The incremental learning device for intelligent manufacturing big data provided by the present invention includes a computer-readable medium storing computer-readable instructions, and the computer-readable instructions can be executed by a processor to implement any of the methods described above.
The above are only preferred embodiments of the present invention; the invention is not limited to the above embodiments, and anything that achieves the technical effects of the present invention by identical means shall fall within the protection scope of the present invention.
Claims (5)
1. An incremental learning method for intelligent manufacturing big data, characterized by comprising the following steps:
Step A: collect data from the historical manufacturing process to form a classified data set D_t;
Step B: acquire new data from the manufacturing process as input data x, and establish the mapping relationship between the input-layer data x and the hidden-layer data f(x) through a coding function; the calculation formula of the coding function is as follows:
wherein a_i, b_i are random input-layer node parameters, β_i is the weight from the input layer to the hidden layer, and L is the number of input-layer nodes;
Step C: map the hidden-layer data f(x) to the actual output g(x) of the network through a decoding function; the calculation formula of the decoding function is as follows:
wherein a_j, b_j are random hidden-layer node parameters, β_j is the weight from the hidden layer to the output layer, and L′ is the number of hidden-layer nodes;
Step D: optimize the node parameters a_i, b_i, a_j, b_j and the weights β_i, β_j;
Step E: form the classified incremental learning model.
2. The incremental learning method for intelligent manufacturing big data according to claim 1, characterized in that the data in step A specifically include: the amplitude, frequency, and phase of the signals generated in each intelligent manufacturing link, and the manufacturing node at which each signal is located.
3. The incremental learning method for intelligent manufacturing big data according to claim 2, characterized in that step A specifically includes:
Step A1: let X_n = {x_1, x_2, ..., x_m} be the item to be classified, where x_i denotes a characteristic attribute of the n-dimensional feature space X_n, and let C = {Y_1, Y_2, ..., Y_m} be the category set;
Step A2: separately calculate P(y_1|X_n), P(y_2|X_n), ..., P(y_m|X_n);
Step A3: classify X_n by the following formula:
P(y_i|X_n) = max{P(y_1|X_n), P(y_2|X_n), ..., P(y_m|X_n)},
in which case X_n ∈ Y_i;
Step A4: form the data set D_t, where D_t = (x_i, y_i) (i = 1, ..., m) and y_i denotes the class label corresponding to x_i.
4. The incremental learning method for intelligent manufacturing big data according to claim 1, characterized in that step D specifically includes:
Step D1: calculate the output value f(x, θ) of the hidden-layer neurons and the output value g(x, θ) of the neural network model through the forward-propagation model, where f(x, θ) and g(x, θ) denote the actual outputs obtained from x through nonlinear mapping when the parameters are θ, with θ = {a_i, b_i, β_i, a_j, b_j, β_j};
Step D2: calculate the difference Δy between the actual output of the network model and the desired output y; the calculation formula is: Δy = y − g(x, θ);
Step D3: calculate the partial derivatives of the network output g(x, θ) with respect to the initial parameters θ through the back-propagation model; the calculation formula is as follows:
Step D4: calculate the inverse matrix of the partial-derivative matrix f′(x, θ); the calculation formula is as follows:
Step D5: calculate the parameter increment Δθ of the network model according to the following formula:
wherein u is the learning rate;
Step D6: take θ + Δθ as the updated parameters of the network model.
5. An incremental learning device for intelligent manufacturing big data, characterized by comprising a computer-readable medium storing computer-readable instructions, the computer-readable instructions being executable by a processor to implement the method according to any one of claims 1 to 4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811119092.XA CN109213807B (en) | 2018-09-25 | 2018-09-25 | Incremental learning method and device for intelligently manufacturing big data |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109213807A true CN109213807A (en) | 2019-01-15 |
CN109213807B CN109213807B (en) | 2021-08-31 |
Family
ID=64981376
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811119092.XA Active CN109213807B (en) | 2018-09-25 | 2018-09-25 | Incremental learning method and device for intelligently manufacturing big data |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109213807B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116303687A (en) * | 2023-05-12 | 2023-06-23 | 烟台黄金职业学院 | Intelligent management method and system for engineering cost data |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104598552A (en) * | 2014-12-31 | 2015-05-06 | 大连钜正科技有限公司 | Method for learning incremental update-supported big data features |
- 2018-09-25: application CN201811119092.XA filed in China; granted as patent CN109213807B (status: Active)
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104598552A (en) * | 2014-12-31 | 2015-05-06 | 大连钜正科技有限公司 | Method for learning incremental update-supported big data features |
Non-Patent Citations (2)
Title |
---|
BU Fanyu et al.: "A big data feature learning model supporting incremental updates", Computer Engineering and Applications |
ZHANG Qingchen: "Research on deep computing models for big data feature learning", China Doctoral Dissertations Full-text Database |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116303687A (en) * | 2023-05-12 | 2023-06-23 | 烟台黄金职业学院 | Intelligent management method and system for engineering cost data |
CN116303687B (en) * | 2023-05-12 | 2023-08-01 | 烟台黄金职业学院 | Intelligent management method and system for engineering cost data |
Also Published As
Publication number | Publication date |
---|---|
CN109213807B (en) | 2021-08-31 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20210342345A1 (en) | Latent network summarization | |
CN107526799A (en) | A kind of knowledge mapping construction method based on deep learning | |
WO2019144066A1 (en) | Systems and methods for preparing data for use by machine learning algorithms | |
CN109992668A (en) | A kind of enterprise's the analysis of public opinion method and apparatus based on from attention | |
CN110110372B (en) | Automatic segmentation prediction method for user time sequence behavior | |
Huang et al. | Large-scale heterogeneous feature embedding | |
CN103324954A (en) | Image classification method based on tree structure and system using same | |
CN116684200B (en) | Knowledge completion method and system for attack mode of network security vulnerability | |
CN114863091A (en) | Target detection training method based on pseudo label | |
CN115456043A (en) | Classification model processing method, intent recognition method, device and computer equipment | |
CN105844335A (en) | Self-learning method based on 6W knowledge representation | |
CN109213807A (en) | A kind of Increment Learning Algorithm and device of intelligence manufacture big data | |
CN109977131A (en) | A kind of house type matching system | |
CN111275079B (en) | Crowd-sourced label presumption method and system based on graph neural network | |
US11720600B1 (en) | Methods and apparatus for machine learning to produce improved data structures and classification within a database | |
CN112286996A (en) | Node embedding method based on network link and node attribute information | |
Singhal et al. | Universal quantitative steganalysis using deep residual networks | |
Chen et al. | Towards automated cost analysis, benchmarking and estimating in construction: A machine learning approach | |
CN107767278B (en) | Method and device for constructing community hierarchy | |
Liu et al. | Multimodal learning based approaches for link prediction in social networks | |
CN111199259B (en) | Identification conversion method, device and computer readable storage medium | |
CN114153977A (en) | Abnormal data detection method and system | |
CN114372148A (en) | Data processing method based on knowledge graph technology and terminal equipment | |
Ashraf et al. | Group decision making with incomplete interval-valued fuzzy preference relations based on the minimum operator | |
Zhu et al. | Software defect prediction model based on stacked denoising auto-encoder |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||