CN116739156A - Workshop dynamic bottleneck prediction method based on time-space - Google Patents

Workshop dynamic bottleneck prediction method based on time-space

Info

Publication number
CN116739156A
Authority
CN
China
Prior art keywords
time
data
workshop
bottleneck
station
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310625389.8A
Other languages
Chinese (zh)
Inventor
谢龙汉
岑伟洪
陈刚
苏楚鹏
王闯
林泽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South China University of Technology SCUT
Original Assignee
South China University of Technology SCUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China University of Technology SCUT filed Critical South China University of Technology SCUT
Priority to CN202310625389.8A priority Critical patent/CN116739156A/en
Publication of CN116739156A publication Critical patent/CN116739156A/en
Pending legal-status Critical Current


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 - Administration; Management
    • G06Q10/04 - Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/10 - Pre-processing; Data cleansing
    • G06F18/15 - Statistical pre-processing, e.g. techniques for normalisation or restoring missing data
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/24 - Classification techniques
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/0464 - Convolutional networks [CNN, ConvNet]
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 - Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30 - Computing systems specially adapted for manufacturing

Abstract

The invention provides a workshop dynamic bottleneck prediction method based on time-space. First, multi-source data of the equipment at different stations are collected through sensors on the workshop equipment; second, the multi-source data are processed into the data format required by the algorithm; third, a dynamic graph neural network model treats the stations and equipment of the production workshop as nodes of a graph and the transfer relationships of workpieces between upstream and downstream stations and equipment as the edges connecting those nodes, dynamically constructing a directed or undirected graph neural network, and the graph structure information is encoded and decoded; then a time-series prediction model performs sequence prediction on the relevant equipment features; finally, bottleneck identification indexes are used to realize real-time bottleneck judgment. The method can globally monitor discrete workshop manufacturing in space and time, dynamically predict the bottleneck of the discrete workshop, and provide predictive information for discrete workshop manufacturing in real time.

Description

Workshop dynamic bottleneck prediction method based on time-space
Technical Field
The invention belongs to the field of dynamic manufacturing bottlenecks in discrete workshops, and particularly relates to a workshop dynamic bottleneck prediction method based on time-space.
Background
In recent years, bottleneck prediction has become a major concern in shop-floor manufacturing. Predicting bottlenecks from discrete workshop state information and improving the production strategy accordingly can raise both production quality and efficiency. Today, driven by big data, workshops are gradually evolving into unmanned or lightly manned production lines, which increases production efficiency and reduces labor cost. Using big data to run "black box" algorithm models, predict bottlenecks, and adjust and optimize the production plan in advance has therefore become the trend of current research.
The current state of dynamic bottleneck prediction research is as follows:
a production workshop dynamic bottleneck prediction method based on Internet of Things technology from Northwestern Polytechnical University uses the Internet of Things to collect multi-source heterogeneous information, establishes an LMBP neural network model, and judges the bottleneck with a comprehensive index method combined with dynamic triple exponential smoothing in the steady state. Another method predicts the dynamic bottleneck of a mixed-flow production line by modeling time-series information with a VAR (vector autoregression) model, extracting mutually uncorrelated evaluation indexes, and realizing real-time dynamic bottleneck prediction. These methods account for the evolution of bottlenecks over time, but they do not consider the effect of the spatial interactions of the discrete workshop on bottleneck prediction, and therefore cannot comprehensively identify bottleneck drift during discrete workshop manufacturing.
Many factors influence bottleneck identification in discrete workshop manufacturing, and the evaluation indexes differ between objects. Common workshop bottleneck influencing factors include equipment production load, mean failure and maintenance time, production capacity, the queue length and processing time of products in the buffer zone, and the blocking and starvation time in the system. These factors may be correlated over the time-series history and may affect each other. In a discrete workshop manufacturing process, stations in an upstream-downstream relationship inevitably interact. These factors further affect the accuracy of bottleneck identification and bottleneck prediction.
Disclosure of Invention
In order to solve the problems in the prior art, the invention provides a workshop dynamic bottleneck prediction method based on time-space, which can globally monitor discrete workshop manufacturing in space and time, dynamically predict the bottleneck of the discrete workshop, and provide predictive information for discrete workshop manufacturing in real time.
In order to achieve the purpose of the invention, the invention provides a workshop dynamic bottleneck prediction method based on time-space, which comprises the following steps:
step 1: sensors for the data features to be collected are installed on the station machine equipment, the sensor data are transmitted to the MES system in real time to acquire multi-source heterogeneous data, and the multi-source heterogeneous data are preprocessed to obtain a standardized time-series data set;
step 2: according to the relations between the workpiece machining processes of the discrete workshop, a dynamic graph neural network model is used to construct an undirected relation graph of the discrete workshop, the data relations between stations are processed, and a time-series prediction model is used to predict the time-series data and obtain predicted feature values;
step 3: the bottleneck at the future moment is judged according to the obtained predicted feature values, and the bottleneck drift condition is evaluated to realize real-time dynamic bottleneck prediction.
Further, in step 1, the multi-source heterogeneous data includes the blocking time (TB), starvation time (TS), work in process (WIP), buffer length (BL), equipment utilization (U), workpiece processing time (P), mean time to failure (MTTF) and mean time to repair (MTTR) of the equipment as the data feature types.
Further, in step 1, when the multi-source heterogeneous data are collected, RFID tags and a wireless network are installed and deployed on the discrete workshop station machine equipment as the basis for data acquisition, and the product processing state data and equipment state data required by the workshop are collected in real time for the subsequent data processing.
Further, in step 1, the preprocessing of the multi-source heterogeneous data includes cleaning the data: redundant data are removed, missing data are interpolated, abnormal data are corrected and interference data are filtered out; finally the data are uniformly regularized and standardized to obtain a standard data set.
Further, step 2 includes:
step 2.1: a network graph structure of the discrete workshop is constructed through a graph neural network, with each station device of the discrete workshop as a node, the relations between stations as the edges, and the degree of relation between stations as the edge weight; the changes of the relations between the stations of the discrete workshop are constructed dynamically according to the change of the time dimension and the occurrence of machine abnormalities in the discrete workshop, building the neural network graph structure relation G = (V, E), the point (station) set V = (v_1, …, v_N) and the adjacency matrix A, where the weight between two stations takes a value between 0 and 1 according to how direct their relation is, N is the number of stations, v_N is the Nth station, and E is the matrix of relation edges between stations;
step 2.2: in the time sequence, a multi-head attention mechanism is used to strengthen the correlation of the graph network between adjacent time stamps;
step 2.3: and predicting the state characteristics of each station of the discrete workshop by using a time sequence prediction model according to the time sequence standard data set.
Further, in step 2.2, a Transformer model with multi-head attention sequence prediction is used to strengthen the graph network correlation between adjacent time stamps. The multi-head attention mechanism of the Transformer model is composed of several self-attention mechanisms; Att denotes a self-attention mechanism, and the multi-head self-attention mechanism Multi_Att is expressed as follows:
Multi_Att = concat(Att_1, …, Att_n)
Att_n = softmax(Q·K^T / √d_k)·V
where n is the number of self-attention heads, Att_n is the nth self-attention head, Q is the query matrix, K is the index (key) matrix, V is the value matrix, d_k is the dimension of the matrix K, K^T is the transpose of the matrix K, softmax is the normalized exponential function, and concat denotes the fusion (concatenation) of the multi-head self-attention outputs.
Further, in step 2.3, a GRU time-series prediction model is used for prediction. In the GRU, u_t is the update gate at time t, r_t is the reset gate at time t, and h_t is the hidden-layer state at time t. The model formulas are:
u_t = σ(W_z·[h_{t-1}, x_t])
r_t = σ(W_r·[h_{t-1}, x_t])
h̃_t = tanh(W_h·[r_t ⊙ h_{t-1}, x_t])
h_t = (1 - u_t) ⊙ h_{t-1} + u_t ⊙ h̃_t
where x_t is the input state matrix, σ is the sigmoid function, ⊙ is the element-wise product, h̃_t is the candidate hidden state, and W_z, W_r and W_h are learnable weight matrices.
Further, in step 2, after prediction is performed by the time-series prediction model, feature selection is performed through XGBoost feature classification to obtain the final predicted values.
Further, the inflection point method is used in step 3 to determine the bottleneck.
Further, the inflection point method for determining the bottleneck includes the steps of:
step 3.1: firstly, predicting station blocking conditions at a moment in the future through a time sequence prediction model;
step 3.2: based on the station blocking condition, a dynamic bottleneck judging mechanism is formed, the operation state of each station is calculated through an inflection point method, and the production operation states of each station are compared with each other to judge whether the station is a bottleneck.
Compared with the prior art, the invention at least has the following beneficial effects:
according to the method, the influence of time-space on bottleneck prediction is considered, a time-space diagram neural network time sequence prediction model is established, and the global and historical data relationship can be better summarized.
According to the invention, the relation between the station space layout of the discrete workshop processing flow and the stations is focused on by the undirected graph, the multi-head self-attention and cross-attention mechanism of the transducer model is used for strengthening the relation between stations by using the graph neural network feature matrix structure, and the station processing state characteristics at the future moment are predicted according to the change of historical data by using the time sequence prediction model. The invention increases the accuracy of the prediction result from the space and time consideration, and can better provide reliable prediction information for manufacturing.
Drawings
For a clearer description of an embodiment of the invention or of the solutions of the prior art, the drawings that are required to be used in the description of the embodiment or of the prior art will be briefly described, it being obvious that the drawings in the description below are only some embodiments of the invention, from which, without the inventive effort, other drawings can be obtained for a person skilled in the art, in which:
fig. 1 is a flowchart of a workshop dynamic bottleneck prediction method based on time-space according to an embodiment of the present invention.
FIG. 2 is a diagram of the construction of a discrete shop layout undirected graph in an embodiment of the present invention.
FIG. 3 is a flowchart of a neural network timing prediction model in accordance with an embodiment of the present invention.
Detailed description of the preferred embodiments
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The invention considers the influence of time and space on bottleneck prediction of discrete workshop manufacture, improves the accuracy of prediction, and provides a workshop dynamic bottleneck prediction method based on time-space, which comprises the following steps:
step 1: sensors for the data features to be collected are installed on the station machine equipment, the sensor data are transmitted to the MES system in real time to acquire multi-source heterogeneous data, and the multi-source heterogeneous data are preprocessed to obtain a standardized time-series data set.
In some embodiments of the invention, the multi-source heterogeneous data mainly comprise the blocking time (TB), starvation time (TS), work in process (WIP), buffer length (BL), equipment utilization (U), workpiece processing time (P), mean time to failure (MTTF) and mean time to repair (MTTR) as the data feature (label) types.
After the characteristics of the multi-source heterogeneous data are determined, RFID tags and a wireless network are installed and deployed on the discrete workshop station machine equipment as the basis for data acquisition, and the product processing state data, equipment state data and other feature data required by the workshop are collected in real time for the subsequent data processing.
The point (station) feature matrix is X_{N×P}, with X_{N×P} ∈ R^{N×P}; the feature matrix of point i (station i) is X_{i×P}; and the historical time series of the input over a period T is formed by the point (station) feature matrices of the corresponding time stamps,
where P is the number of features per point (station), N is the number of points (stations), i denotes the ith point (station), T is the time period, and X^{t+T}_{N×P} is the point (station) feature matrix X_{N×P} at time t+T.
The original multi-source heterogeneous data are cleaned: redundant data are removed, missing data are interpolated, abnormal data are corrected and interference data are filtered out; finally the data are uniformly regularized and standardized to obtain the standard data set, in which each point (station) feature matrix is normalized.
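As an illustration of this preprocessing step, the following sketch assumes the multi-source records have already been pulled from the MES system into a pandas DataFrame with one row per station and time stamp; the column names, the percentile clipping of abnormal values, the rolling-median filtering of interference and the z-score standardization are assumptions for the example, not requirements of the method.

```python
import pandas as pd

FEATURES = ["TB", "TS", "WIP", "BL", "U", "P", "MTTF", "MTTR"]  # assumed column names

def preprocess(raw: pd.DataFrame) -> pd.DataFrame:
    """Clean and standardize multi-source heterogeneous shop-floor data."""
    df = raw.copy()
    # Remove redundant (duplicated) records per station and time stamp.
    df = df.drop_duplicates(subset=["station", "timestamp"]).sort_values(["station", "timestamp"])
    # Interpolate missing values within each station's own time series.
    df[FEATURES] = df.groupby("station")[FEATURES].transform(
        lambda s: s.interpolate(limit_direction="both"))
    # Treat gross outliers as abnormal data: clip to the 1st-99th percentile band.
    lo, hi = df[FEATURES].quantile(0.01), df[FEATURES].quantile(0.99)
    df[FEATURES] = df[FEATURES].clip(lower=lo, upper=hi, axis=1)
    # Suppress high-frequency interference with a short rolling median per station.
    df[FEATURES] = df.groupby("station")[FEATURES].transform(
        lambda s: s.rolling(3, min_periods=1).median())
    # Unified standardization (z-score) so all features share a common scale.
    df[FEATURES] = (df[FEATURES] - df[FEATURES].mean()) / (df[FEATURES].std() + 1e-8)
    return df
```

Interpolating and filtering within each station's own series keeps one workstation's history from contaminating another's.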
Step 2: according to the relation between the workpiece processing processes of the discrete workshops, a graph neural network model is used for constructing a discrete workshop graph relation network undirected graph, the data relation among stations is processed, and a time sequence prediction model is used for predicting time sequence data.
Step 2.1: the network diagram structure of the discrete workshops is constructed through a graph neural network by taking the station devices S1-S6 of the discrete workshops as nodes S1-S6 and the relation degree between the stations as the weight and the relation degree between the sides and the stations, as shown in figure 2. According to the time dimension change, if the machine downtime condition or other machine abnormal conditions occur in the discrete workshops, the relation among stations can be changed, the relation change among the stations of the discrete workshops can be dynamically constructed, and the structural relation G= (V, E) of the neural network graph is constructed, and the lattice (station matrix) V= (V) 1 ,…,v N ) Constructing an adjacency matrix A, wherein weights among points (stations) have direct relation between 0 and 1, N is the number of the points (stations), v N For the nth point (station), E is a matrix of relationship edges between two points (stations). Computing structural features using GCN graph neural network models
Step 2.2: in the time sequence, the discrete production workshops are in different production states, the dynamically constructed graph network changes and differs, and a multi-head attention mechanism is used for strengthening the correlation of the graph network between the front timestamp and the rear timestamp.
In some embodiments of the invention, a Transformer model with multi-head attention sequence prediction is used to strengthen the graph network structure correlation between adjacent time stamps. The multi-head attention mechanism of the Transformer model is composed of several self-attention mechanisms.
Att denotes a self-attention mechanism, and the multi-head self-attention mechanism Multi_Att is expressed as follows:
Multi_Att = concat(Att_1, …, Att_n)
Att_n = softmax(Q·K^T / √d_k)·V
where n is the number of self-attention heads, Att_n is the nth self-attention head, Q is the query matrix, K is the index (key) matrix, V is the value matrix, d_k is the dimension of the matrix K, K^T is the transpose of the matrix K, softmax is the normalized exponential function, and concat denotes the fusion (concatenation) of the multi-head self-attention outputs.
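The multi-head self-attention above can be sketched directly in NumPy; the head count, dimensions and random projection matrices here are illustrative assumptions, and a practical implementation would use a deep-learning framework with learned projections.

```python
import numpy as np

def softmax(z: np.ndarray) -> np.ndarray:
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def self_attention(X: np.ndarray, Wq, Wk, Wv) -> np.ndarray:
    """One head: Att = softmax(Q K^T / sqrt(d_k)) V."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = K.shape[-1]
    return softmax(Q @ K.T / np.sqrt(d_k)) @ V

def multi_head_attention(X: np.ndarray, heads: list) -> np.ndarray:
    """Multi_Att = concat(Att_1, ..., Att_n) over the per-head projections."""
    return np.concatenate([self_attention(X, Wq, Wk, Wv) for Wq, Wk, Wv in heads], axis=-1)

# Example: a sequence of 5 time stamps of graph features of dimension 16, n = 4 heads of size 8.
rng = np.random.default_rng(0)
X = rng.random((5, 16))
heads = [(rng.random((16, 8)), rng.random((16, 8)), rng.random((16, 8))) for _ in range(4)]
out = multi_head_attention(X, heads)   # shape (5, 32)
```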
Step 2.3: and predicting the state characteristics of each station of the discrete workshop by using a GRU time sequence prediction model according to the time sequence standard data set.
In some embodiments of the invention, u_t is the update gate of the GRU at time t, r_t is the reset gate at time t, and h_t is the hidden-layer state at time t. The model formulas are:
u_t = σ(W_z·[h_{t-1}, x_t])
r_t = σ(W_r·[h_{t-1}, x_t])
h̃_t = tanh(W_h·[r_t ⊙ h_{t-1}, x_t])
h_t = (1 - u_t) ⊙ h_{t-1} + u_t ⊙ h̃_t
where x_t is the input state matrix, σ is the sigmoid function, ⊙ is the element-wise product, h̃_t is the candidate hidden state, and W_z, W_r and W_h are learnable weight matrices.
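A minimal NumPy sketch of one GRU update matching the gate equations above; the input and hidden dimensions and the random weights are assumptions for the example.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gru_step(x_t, h_prev, W_z, W_r, W_h):
    """One GRU update; returns the new hidden state h_t."""
    concat = np.concatenate([h_prev, x_t])                        # [h_{t-1}, x_t]
    u_t = sigmoid(W_z @ concat)                                   # update gate
    r_t = sigmoid(W_r @ concat)                                   # reset gate
    h_cand = np.tanh(W_h @ np.concatenate([r_t * h_prev, x_t]))   # candidate state
    return (1.0 - u_t) * h_prev + u_t * h_cand                    # new hidden state

# Example: input dimension 8, hidden dimension 16 (both assumed).
rng = np.random.default_rng(1)
x_t, h_prev = rng.random(8), np.zeros(16)
W_z, W_r, W_h = (rng.random((16, 24)) for _ in range(3))   # 24 = 16 + 8
h_t = gru_step(x_t, h_prev, W_z, W_r, W_h)
```

Iterating this step over the attended sequence produces the hidden state h_t used by the prediction stage described next.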
Through this time-space dynamic bottleneck algorithm model, the features of the future T period are predicted by inputting the n historical graph features, i.e. the prediction is f(X_{t-n}, …, X_t), where f denotes the mapping function of the prediction model.
As shown in fig. 3, the input graph features serve as the current state input, and the hidden-layer state computed by the GRU at the previous stage serves as the state input of the next stage.
At time t, the feature matrices [X_{t-n}, …, X_t] processed by the GCN graph neural network and the hidden-layer state h_{t-1} of the previous time step are taken as the input of the Transformer model. After the multi-head self-attention processing of the Transformer model, the feature matrices [X'_{t-n}, …, X'_t] are obtained.
The hidden-layer state h_{t-1} produced by the GRU at the previous stage and the feature matrices [X'_{t-n}, …, X'_t] processed by the Transformer model are passed through the GRU time-series prediction to obtain the hidden-layer state h_t over the period from t-n to t and the predicted features of the future T period.
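The following rough sketch strings the previous pieces together in the order described above and in fig. 3, reusing gcn_features, multi_head_attention and gru_step from the earlier sketches; flattening the station dimension before the attention step and projecting the final hidden state to the future-period features with a single matrix W_out are simplifying assumptions.

```python
import numpy as np

def predict_future(X_hist, A, gcn_W, heads, W_z, W_r, W_h, W_out):
    """X_hist: list of station feature matrices [X_{t-n}, ..., X_t]; returns predicted future features.
    Reuses gcn_features, multi_head_attention and gru_step defined in the sketches above."""
    # 1) Spatial encoding: GCN structural features for every time stamp.
    G = [gcn_features(A, X, gcn_W) for X in X_hist]
    # 2) Temporal attention across the n+1 time stamps (station dimension flattened per step).
    seq = np.stack([g.reshape(-1) for g in G])      # shape (n+1, N * d)
    seq_att = multi_head_attention(seq, heads)      # attended features [X'_{t-n}, ..., X'_t]
    # 3) GRU over the attended sequence to obtain the hidden state h_t.
    h = np.zeros(W_z.shape[0])
    for x_step in seq_att:
        h = gru_step(x_step, h, W_z, W_r, W_h)
    # 4) Map h_t to the predicted features of the future T period.
    return W_out @ h
```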
Step 2.4: and obtaining two prediction feature matrixes of the blocking time and the starving time through XGBoost feature classification.
Step 3: and (3) judging the bottleneck at the future moment by using a turn-Point method according to the predicted characteristic value obtained by the time sequence prediction model in the step (2), and evaluating the bottleneck drift condition so as to realize real-time dynamic bottleneck prediction. The inflection point method for judging the bottleneck can be divided into the following two steps:
step 3.1: and predicting the time of station blockage and starvation time at a future moment by using the improved time sequence model.
Step 3.2: the two characteristics of the dynamic blocking time and the starving time form a bottleneck judging mechanism, the running state of each station is calculated by a turn-point formula, the production running states of each station are compared with each other, and whether the station is a bottleneck is judged, so that real-time bottleneck prediction is achieved.
The bottleneck turning-point formulas are as follows. For a bottleneck station j with j ≠ 1 and j ≠ N:
(TB_{i,t+T} - TS_{i,t+T}) > 0, i ∈ [1, …, j-1]
(TB_{i,t+T} - TS_{i,t+T}) > 0, i ∈ [j+1, …, N]
TB_{j,t+T} - TS_{j,t+T} < TB_{j-1,t+T} + TS_{j-1,t+T}
TB_{j,t+T} - TS_{j,t+T} < TB_{j+1,t+T} + TS_{j+1,t+T}
If j = 1:
(TB_{1,t+T} - TS_{1,t+T}) > 0,
(TB_{2,t+T} - TS_{2,t+T}) < 0 and TB_{1,t+T} - TS_{1,t+T} < TB_{2,t+T} - TS_{2,t+T}
If j = N:
(TB_{N-1,t+T} - TS_{N-1,t+T}) > 0,
(TB_{N,t+T} - TS_{N,t+T}) < 0 and TB_{N,t+T} - TS_{N,t+T} < TB_{N-1,t+T} - TS_{N-1,t+T}
where TB_{i,t+T} and TS_{i,t+T} denote the blocking time and starvation time of the ith station at time t+T, TB_{j,t+T} and TS_{j,t+T} denote the blocking time and starvation time of the jth station at time t+T, and TB_{N-1,t+T} and TS_{N-1,t+T} denote the blocking time and starvation time of the (N-1)th station at time t+T. j denotes the bottleneck station, and j ranges from 1 to N.
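For illustration, the sketch below applies a simplified turning-point test to the predicted values: it implements the common turning-point idea that stations upstream of the bottleneck are blockage-dominated and stations downstream are starvation-dominated. This sign-change rule condenses rather than reproduces the per-case inequalities above, and both the tie-break (smallest |TB - TS| gap) and the fallback when no clean turning point exists (the least blocked-plus-starved station) are assumptions.

```python
import numpy as np

def turning_point_bottleneck(tb: np.ndarray, ts: np.ndarray) -> int:
    """tb, ts: predicted blocking / starvation times per station at time t+T.
    Returns the index of the predicted bottleneck station."""
    gap = tb - ts                       # positive: more blocked; negative: more starved
    n = len(gap)
    candidates = []
    for j in range(n):
        upstream_ok = np.all(gap[:j] > 0)        # stations before j are blockage-dominated
        downstream_ok = np.all(gap[j + 1:] < 0)  # stations after j are starvation-dominated
        if upstream_ok and downstream_ok:
            candidates.append(j)
    if not candidates:
        # No clean turning point: fall back to the least blocked-plus-starved station (assumption).
        return int(np.argmin(tb + ts))
    # If several stations fit the turning-point pattern, pick the one closest to the sign flip.
    return min(candidates, key=lambda j: abs(gap[j]))

# Example with 6 stations (values are illustrative predictions at t+T).
tb = np.array([3.0, 2.5, 1.2, 0.4, 0.2, 0.1])
ts = np.array([0.2, 0.5, 0.8, 1.5, 2.0, 2.6])
print(turning_point_bottleneck(tb, ts))   # -> 2 (the third station) in this example
```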
The method provided by the embodiment of the invention first collects multi-source heterogeneous data from the equipment at different stations; second, it processes the multi-source data and classifies the features; third, it constructs an undirected graph neural network and encodes it with a Transformer; it then performs sequence prediction on the relevant equipment features with a time-series prediction model; finally, it uses the turning-point bottleneck identification index to realize real-time bottleneck judgment.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. The workshop dynamic bottleneck prediction method based on the time-space is characterized by comprising the following steps of:
step 1: sensors for the data features to be collected are installed on the station machine equipment, the sensor data are transmitted to the MES system in real time to acquire multi-source heterogeneous data, and the multi-source heterogeneous data are preprocessed to obtain a standardized time-series data set;
step 2: according to the relations between the workpiece machining processes of the discrete workshop, a dynamic graph neural network model is used to construct a relation network graph of the discrete workshop, the data relations between stations are processed, and a time-series prediction model is used to predict the time-series data and obtain predicted feature values;
step 3: the bottleneck at the future moment is judged according to the obtained predicted feature values, and the bottleneck drift condition is evaluated to realize real-time dynamic bottleneck prediction.
2. The method of claim 1, wherein in step 1 the multi-source heterogeneous data includes the blocking time (TB), starvation time (TS), work in process (WIP), buffer length (BL), equipment utilization (U), workpiece processing time (P), mean time to failure (MTTF) and mean time to repair (MTTR) of the equipment as the data feature types.
3. The method for predicting the dynamic bottleneck of the workshop based on time-space as claimed in claim 1, wherein in step 1, when the multi-source heterogeneous data are collected, RFID tags and a wireless network are installed and deployed on the discrete workshop station machine equipment as the basis for data acquisition, and the product processing state data and equipment state data required by the workshop are collected in real time for subsequent data processing.
4. The method for predicting the dynamic bottleneck of the workshop based on time-space as claimed in claim 1, wherein in step 1 the preprocessing of the multi-source heterogeneous data includes cleaning the data: redundant data are removed, missing data are interpolated, abnormal data are corrected and interference data are filtered out; finally the data are uniformly regularized and standardized to obtain a standard data set.
5. The method for predicting dynamic bottlenecks in a space-time-based plant of claim 1, wherein step 2 comprises:
step 2.1: a network graph structure of the discrete workshop is constructed through a graph neural network, with each station device of the discrete workshop as a node, the relations between stations as the edges, and the degree of relation between stations as the edge weight; the changes of the relations between the stations of the discrete workshop are constructed dynamically according to the change of the time dimension and the occurrence of machine abnormalities in the discrete workshop, building the neural network graph structure relation G = (V, E), the point (station) set V = (v_1, …, v_N) and the adjacency matrix A, where the weight between two stations takes a value between 0 and 1 according to how direct their relation is, N is the number of stations, v_N is the Nth station, and E is the matrix of relation edges between stations;
step 2.2: in the time sequence, a multi-head attention mechanism is used to strengthen the correlation of the graph network between adjacent time stamps;
step 2.3: and predicting the state characteristics of each station of the discrete workshop by using a time sequence prediction model according to the time sequence standard data set.
6. The method according to claim 5, wherein in step 2.2 a Transformer model with multi-head attention sequence prediction is used to strengthen the graph network correlation between adjacent time stamps, the multi-head attention mechanism of the Transformer model comprises a plurality of self-attention mechanisms, Att denotes a self-attention mechanism, and the multi-head self-attention mechanism Multi_Att is expressed as follows:
Multi_Att = concat(Att_1, …, Att_n)
Att_n = softmax(Q·K^T / √d_k)·V
where n is the number of self-attention heads, Att_n is the nth self-attention head, Q is the query matrix, K is the index (key) matrix, V is the value matrix, d_k is the dimension of the matrix K, K^T is the transpose of the matrix K, softmax is the normalized exponential function, and concat denotes the fusion (concatenation) of the multi-head self-attention outputs.
7. The method of claim 5, wherein in step 2.3 the prediction is performed using a GRU time-series prediction model, in which u_t is the update gate at time t, r_t is the reset gate at time t, and h_t is the hidden-layer state at time t; the model formulas are:
u_t = σ(W_z·[h_{t-1}, x_t])
r_t = σ(W_r·[h_{t-1}, x_t])
h̃_t = tanh(W_h·[r_t ⊙ h_{t-1}, x_t])
h_t = (1 - u_t) ⊙ h_{t-1} + u_t ⊙ h̃_t
where x_t is the input state matrix, σ is the sigmoid function, ⊙ is the element-wise product, h̃_t is the candidate hidden state, and W_z, W_r and W_h are learnable weight matrices.
8. The method for predicting dynamic bottlenecks in workshops based on time-space according to claim 1, wherein in step 2, after prediction is performed by a time-series prediction model, feature selection is performed by feature classification, so as to obtain a final predicted value.
9. The method for predicting dynamic bottlenecks in a space-time based shop as claimed in claim 1, wherein the inflection point method is used in step 3 to determine the bottlenecks.
10. The method for predicting dynamic bottlenecks in a workshop based on time-space according to any one of claims 1 to 9, wherein the inflection point method for determining the bottlenecks comprises the steps of:
step 3.1: firstly, predicting station blocking conditions at a moment in the future through a time sequence prediction model;
step 3.2: based on the station blocking condition, a dynamic bottleneck judging mechanism is formed, the operation state of each station is calculated through an inflection point method, and the production operation states of each station are compared with each other to judge whether the station is a bottleneck.
CN202310625389.8A 2023-05-30 2023-05-30 Workshop dynamic bottleneck prediction method based on time-space Pending CN116739156A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310625389.8A CN116739156A (en) 2023-05-30 2023-05-30 Workshop dynamic bottleneck prediction method based on time-space

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310625389.8A CN116739156A (en) 2023-05-30 2023-05-30 Workshop dynamic bottleneck prediction method based on time-space

Publications (1)

Publication Number Publication Date
CN116739156A true CN116739156A (en) 2023-09-12

Family

ID=87900415

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310625389.8A Pending CN116739156A (en) 2023-05-30 2023-05-30 Workshop dynamic bottleneck prediction method based on time-space

Country Status (1)

Country Link
CN (1) CN116739156A (en)

Similar Documents

Publication Publication Date Title
Choi et al. Deep learning for anomaly detection in time-series data: Review, analysis, and guidelines
CN107742168B (en) Production workshop dynamic bottleneck prediction method based on Internet of things technology
CN102361014B (en) State monitoring and fault diagnosis method for large-scale semiconductor manufacture process
CN110990461A (en) Big data analysis model algorithm model selection method and device, electronic equipment and medium
CN108762228A (en) A kind of multi-state fault monitoring method based on distributed PCA
CN103221807B (en) Fast processing and the uneven factor detected in web shaped material
CN113762329A (en) Method and system for constructing state prediction model of large rolling mill
Niaki et al. A hybrid method of artificial neural networks and simulated annealing in monitoring auto-correlated multi-attribute processes
CN107862406A (en) Using deep learning and the method for the primary equipment risk profile for improving Apriori algorithm synthesis
KR20230017556A (en) System and operational methods for manufacturing execution based on artificial intelligence and bigdata
CN115096627A (en) Method and system for fault diagnosis and operation and maintenance in manufacturing process of hydraulic forming intelligent equipment
CN116703303A (en) Warehouse visual supervision system and method based on multi-layer perceptron and RBF
CN117272196A (en) Industrial time sequence data anomaly detection method based on time-space diagram attention network
KR20230003819A (en) Facility management system that enables preventive maintenance using deep learning
CN116614366B (en) Industrial Internet optimization method and system based on edge calculation
CN116739156A (en) Workshop dynamic bottleneck prediction method based on time-space
KR102437917B1 (en) Equipment operation system
Zhang et al. Machine learning applications in Cyber-Physical Production Systems: a survey
Tin et al. Incoming work-in-progress prediction in semiconductor fabrication foundry using long short-term memory
CN112508382A (en) Industrial control system based on big data
CN109978038A (en) A kind of cluster abnormality determination method and device
Yan et al. Nonlinear quality-relevant process monitoring based on maximizing correlation neural network
CN117807377B (en) Multidimensional logistics data mining and predicting method and system
CN117579513B (en) Visual operation and maintenance system and method for convergence and diversion equipment
CN113312968B (en) Real abnormality detection method in monitoring video

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination