CN114662658A - On-chip optical network hot spot prediction method based on LSTM neural network - Google Patents

On-chip optical network hot spot prediction method based on LSTM neural network Download PDF

Info

Publication number
CN114662658A
CN114662658A (application CN202210237289.3A)
Authority
CN
China
Prior art keywords
data
input
network
training
output
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210237289.3A
Other languages
Chinese (zh)
Inventor
仇星
郭鹏星
侯维刚
何香玉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing University of Post and Telecommunications
Original Assignee
Chongqing University of Post and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing University of Post and Telecommunications filed Critical Chongqing University of Post and Telecommunications
Priority to CN202210237289.3A priority Critical patent/CN114662658A/en
Publication of CN114662658A publication Critical patent/CN114662658A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/044Recurrent networks, e.g. Hopfield networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06EOPTICAL COMPUTING DEVICES; COMPUTING DEVICES USING OTHER RADIATIONS WITH SIMILAR PROPERTIES
    • G06E3/00Devices not provided for in group G06E1/00, e.g. for processing analogue or hybrid data
    • G06E3/006Interconnection networks, e.g. for shuffling
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F15/00Digital computers in general; Data processing equipment in general
    • G06F15/76Architectures of general purpose stored program computers
    • G06F15/78Architectures of general purpose stored program computers comprising a single central processing unit
    • G06F15/7807System on chip, i.e. computer system on a single chip; System in package, i.e. computer system on one or more chips in a single package
    • G06F15/7825Globally asynchronous, locally synchronous, e.g. network on chip
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/06Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/067Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using optical means
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Abstract

The invention claims an on-chip optical network hot spot prediction method based on an LSTM neural network. Training and test data are obtained by cleaning, normalizing and dividing the traffic value of each node in the on-chip optical network; a multi-input multi-output LSTM neural network model is built to suit the multi-node characteristics of the on-chip optical network, and the training data are input into the model for training; after the trained model is obtained, data are input into it to obtain the predicted traffic value of each node. Compared with traditional modeling methods, the LSTM neural network has characteristics such as self-learning and strong adaptability, so these characteristics can be used to analyze and predict the hot-spot changes of the nodes in the network. Compared with two typical prediction models, the recurrent neural network (RNN) and the gated recurrent unit (GRU), the mean square error of the model is reduced by 8.57% and 15.7% respectively, and the goodness of fit is improved by 3.35% and 1.73% respectively.

Description

On-chip optical network hot spot prediction method based on LSTM neural network
Technical Field
The invention relates to a communication technology, in particular to an on-chip optical network hot spot prediction method based on LSTM.
Background
At present, high-performance computing is widely demanded and applied in numerical simulation, life science, large-scale engineering computation and other fields, and on-chip multi-core systems play a key role in it. The Optical Network on Chip (ONoC) is a scheme for solving data transmission between different cores in an on-chip multi-core system: it overcomes the poor scalability, poor reliability and high energy consumption of the traditional bus-based System on Chip (SoC) and of the electrical Network on Chip (NoC), and since the on-chip optical network was proposed, the data-processing capability of multi-core chips has kept improving. Meanwhile, to improve network performance, researchers often divide an upper-layer application into multiple tasks and map them onto several adjacent Intellectual Property (IP) cores in the on-chip optical network to reduce communication delay. Viewed over the whole on-chip optical network, the uneven distribution of node tasks inevitably forms local hot spots. For a chip, the existence of hot spots raises the node temperature and increases power consumption, greatly shortening the service life of the chip. In addition, to reduce time and cost, a chip may pass through third-party personnel many times during design and manufacturing, but these third parties are not fully trusted, because they may implant Hardware Trojans (HT) to attack the whole system. To maximize the attack effect at the lowest cost, hardware Trojans are usually activated at hot spots.
Therefore, the on-chip optical network needs a simple method that can determine hot spots in advance, so that the mapping algorithm can be optimized to map tasks onto IP cores with smaller communication traffic beforehand and achieve thermal balance, while also guaranteeing the security of the on-chip optical network.
Disclosure of Invention
The present invention is directed to solving the above problems of the prior art. A method for predicting hot spots of an on-chip optical network based on an LSTM neural network is provided. The technical scheme of the invention is as follows:
a method for predicting hot spots of an on-chip optical network based on an LSTM neural network comprises the following steps:
1) acquiring the traffic value of each node in the on-chip optical network from public sources, and taking the traffic value as the input of the LSTM neural network;
2) inputting the traffic data into the constructed multi-input multi-output LSTM neural network model for prediction; unlike an ordinary LSTM neural network, the multi-input multi-output LSTM neural network can take multiple variables as input at one time and, after processing, output the predicted values of these variables simultaneously, which greatly shortens the prediction time;
3) after the predicted traffic is obtained, dividing the traffic values corresponding to the nodes into n intervals, calculating the proportion of nodes in each traffic interval, and taking the nodes in the intervals with larger traffic values as the hot spots at the next moment.
Further, the multi-input multi-output LSTM neural network model of step 2) comprises an input layer, a hidden layer, a training module and an output layer. The input layer processes the data input into the LSTM network so that they meet the network's requirements; the hidden layer has multiple layers, each containing multiple LSTM neural network units used for data training; the training module adjusts the weights and biases in the training process according to the relation between input and output so as to optimize network training; the output layer outputs the training result of the hidden layer.
Further, the input layer comprises the steps of data cleaning, data normalization and data division. Data cleaning removes NaN values and unqualified values from the traffic data; data normalization linearly maps the original data set with the Min-Max method to eliminate the influence of singular sample data on training; after normalization, the data set is divided into a training set and a test set in a certain proportion, used for training and testing the model.
Further, the hidden layer comprises two LSTM recurrent layers, each containing 32 LSTM neurons, and each LSTM neuron internally contains a forgetting gate, an input gate and an output gate. The forgetting gate is determined by the input data and the output of the previous cell unit and decides which useless data to discard from the input; the input gate decides, through a sigmoid function and a tanh function, which useful data to keep from the input and updates the cell state; the output gate, likewise determined by the sigmoid and tanh functions, produces the output of the cell unit.
Further, to avoid a purely linear relationship between the output and input of each cell unit, an activation function is added in the hidden layer to increase the learning capacity of the neural network. The Rectified Linear Unit (ReLU) activation function is adopted, whose mathematical expression is:
$f(x) = \max(0, x)$
Further, the training module calculates the loss error between the theoretical output and the model output and, according to the loss value, feeds it back to the hidden layer through an optimization algorithm, continuously adjusting the weights to accelerate the convergence of the network. Its mathematical expression is:
$MSE = \frac{1}{n}\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2$
where $y_i$ is the true traffic value of each node at the $i$-th moment and $\hat{y}_i$ is the traffic value predicted by the network at the $i$-th moment;
the loss error is measured by the mean square error (MSE), and the optimization algorithm is the Adam algorithm.
Furthermore, a Dropout layer is added after the hidden layer to prevent overfitting during training.
The output layer is used for outputting the prediction data and performing inverse normalization.
Further, the outputs $x_n$, $y_n$ of the hidden layer represent the prediction data obtained during training of the network; these values are in fact real data used for the learning of the network and are related to the hyper-parameter time step in the training process. After network training is finished, the LSTM model has learned the basic trend of the data set, and test data can then be input into the network to obtain predicted data. Since the data input into the network have been normalized, the predicted values also lie in $[0,1]$; to obtain the real predicted values, the normalized data must be inversely normalized. The inverse normalization formula is:
$x = x'(\max - \min) + \min$
furthermore, a plurality of model evaluation indexes are adopted to evaluate the quality of the prediction result, including the root mean square error RMSE, the average absolute error MAE and the decision coefficient R2.
The invention has the following advantages and beneficial effects:
1. The invention improves the traditional single-input single-output LSTM model and proposes a multi-input multi-output LSTM neural network model, which stacks several layers of LSTM neural networks, each containing multiple LSTM neural network units, so that multiple samples can be computed and output at the same time. The model is therefore not limited to the hot spot prediction scenario of the on-chip optical network; the claimed steps can also be applied to other scenarios that require the prediction of multiple samples.
2. Compared with traditional methods, the calculation does not depend on hard-to-determine variables such as the temperature in the on-chip optical network, the utilization of each node or the task mapping algorithm, but only needs the historical traffic values of each node. Compared with two representative neural network models, the recurrent neural network (RNN) and the gated recurrent unit (GRU), the mean square error of the experiments is reduced by 8.57% and 15.7% respectively, and the goodness of fit is improved by 3.35% and 1.73% respectively.
3. In terms of applicability, the on-chip optical network hot spot prediction method based on the LSTM neural network is not limited to a single core count, and shows good prediction performance on on-chip optical networks with different numbers of cores.
Drawings
FIG. 1 is an LSTM network model structure of the preferred embodiment provided by the present invention.
FIG. 2 is a comparison of different variable input dimensions of the LSTM neural network.
Fig. 3 is a structural view of an LSTM basic unit.
FIG. 4 is a graph comparing loss values of example 1 and different prediction models.
FIG. 5 is a graph comparing loss values of example 2 of the present invention with different prediction models.
FIG. 6 is a graph comparing loss values of example 3 of the present invention with different prediction models.
FIG. 7 is a predicted flow profile of example 1 of the present invention.
FIG. 8 is a predicted flow distribution diagram according to example 2 of the present invention.
Fig. 9 is a predicted flow distribution diagram of embodiment 3 of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described in detail and clearly with reference to the accompanying drawings. The described embodiments are only some of the embodiments of the present invention.
The technical scheme for solving the technical problems is as follows:
a multiple-input multiple-output LSTM neural network model, as shown in fig. 1, comprising: the device comprises an input layer, a hidden layer, a training module and an output layer. Wherein, the input layer is used for processing the data input into the LSTM network to meet the network requirement; the hidden layer has multiple layers, and each layer comprises a plurality of LSTM neural network elements for training data; the training module adjusts the weight and the bias in the training process according to the relation between the input and the output so as to optimize the network training. The output layer is used for outputting the training result of the hidden layer. Each part will be described in detail below:
the input layer comprises data cleaning, data normalization and data cleaning. After obtaining the corresponding data set from the public way, the original streamThe quantity data sequence is: t ═ Φ1234,…,ΦnH, wherein Φ1={x1,x2,x3,x4,…, xm},Φ2={y1,y2,y3,y4,…,ym…, so the input tensors (m, s, n) of the LSTM neural network, where m denotes the number of observations, s denotes the number of inputs into the network per training, and n denotes the number of variables. Fig. 2 is a comparison graph (the length of a segmentation window is 5) between the traditional univariate input and the multivariate input constructed in the text, and from the input point of view, the multivariate is actually a combination of a plurality of univariates, so that the values of n variables can be directly input into the LSTM model for prediction after the following steps in training.
Step 1): data cleaning. Data cleaning removes NaN values and unqualified values from the traffic data, such as abnormal entries like negative numbers or characters.
Step 2): data normalization. The Min-Max method maps the original data set to $[0,1]$ to eliminate the influence of singular sample data on training. The normalization formula is:
$x' = \frac{x - \min}{\max - \min}$
where $x$ is the true value, and $\min$ and $\max$ denote the minimum and maximum of each column in the current data set, respectively.
Step 3): data division. After normalization, the data set is divided into a training set and a test set in a certain proportion. The training set is used to train the model; the test set does not participate in training, and is split from the original data set only to demonstrate the accuracy of the model. The division ratio is 7:3.
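The three input-layer steps above can be sketched with NumPy. The helper names and the toy data are illustrative, not the patent's implementation; the Min-Max formula and the 7:3 split follow the description:

```python
import numpy as np

def clean(traffic):
    """Step 1: drop rows containing NaN or unqualified (negative) values."""
    ok = ~np.isnan(traffic).any(axis=1) & (traffic >= 0).all(axis=1)
    return traffic[ok]

def min_max(traffic):
    """Step 2: map each column to [0, 1] with x' = (x - min) / (max - min)."""
    lo, hi = traffic.min(axis=0), traffic.max(axis=0)
    return (traffic - lo) / (hi - lo), lo, hi

def split(data, ratio=0.7):
    """Step 3: divide into training and test sets in a 7:3 proportion."""
    cut = int(len(data) * ratio)
    return data[:cut], data[cut:]

# toy traffic matrix: 10 unusable rows followed by 100 valid observations of 4 nodes
raw = np.vstack([np.full((10, 4), np.nan),
                 np.abs(np.random.default_rng(1).normal(100, 30, (100, 4)))])
cleaned = clean(raw)
normed, lo, hi = min_max(cleaned)
train, test = split(normed)
print(cleaned.shape, train.shape, test.shape)
```

The saved `lo` and `hi` are reused later for the inverse normalization of the predictions.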
The hidden layer comprises two LSTM recurrent layers, each containing 32 LSTM neurons; each LSTM neuron internally contains a forgetting gate, an input gate and an output gate, as shown in Fig. 3. The forgetting gate is determined by the input data and the output of the previous cell unit and decides which useless data to discard from the input; the input gate decides, through a sigmoid function and a tanh function, which useful data to keep from the input and updates the cell state; the output gate, likewise determined by the sigmoid and tanh functions, produces the output of the cell unit. The state $C_{n-1}$ and output $H_{n-1}$ of the previous LSTM unit are used to update the cell parameters.
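The gate structure just described can be made concrete with a minimal NumPy sketch of a single LSTM cell step. The stacked weight layout and random initialization here are illustrative assumptions, not the patent's implementation; the gate equations follow the standard LSTM formulation referenced above:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_cell(x_t, h_prev, c_prev, W, U, b):
    """One LSTM step. W: (4h, d) input weights, U: (4h, h) recurrent
    weights, b: (4h,) bias, stacked as forget | input | candidate | output."""
    hsz = h_prev.shape[0]
    z = W @ x_t + U @ h_prev + b
    f = sigmoid(z[0*hsz:1*hsz])   # forget gate: what to discard from the state
    i = sigmoid(z[1*hsz:2*hsz])   # input gate: what to keep from the input
    g = np.tanh(z[2*hsz:3*hsz])   # candidate cell state (tanh branch)
    o = sigmoid(z[3*hsz:4*hsz])   # output gate
    c = f * c_prev + i * g        # updated cell state C_n
    h = o * np.tanh(c)            # cell output H_n
    return h, c

rng = np.random.default_rng(0)
d, hsz = 16, 32                   # e.g. 16 node-traffic inputs, 32 neurons per layer
h, c = lstm_cell(rng.standard_normal(d),
                 np.zeros(hsz), np.zeros(hsz),
                 rng.standard_normal((4*hsz, d)) * 0.1,
                 rng.standard_normal((4*hsz, hsz)) * 0.1,
                 np.zeros(4*hsz))
print(h.shape, c.shape)
```

In the actual model two such layers of 32 neurons each are stacked and trained by a framework (Keras, per the embodiments) rather than stepped by hand.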
To prevent a linear relationship between the output and input of each cell unit, which would stop the network from learning the variation trend of the data set, an activation function is added in the hidden layer to increase the learning capability of the neural network. The ReLU activation function is adopted; its mathematical expression is:
$f(x) = \max(0, x)$
the training module mainly calculates Loss error Loss of theoretical output and model output, and feeds back the Loss error Loss to the hidden layer by using an optimization algorithm according to the Loss function value for continuously adjusting the parameter updating weight so as to accelerate the convergence speed of the network. The loss error is calculated by Mean Square Error (MSE), and the mathematical expression is as follows:
Figure BDA0003542769540000062
wherein y isiFor the true value of the traffic of each node at the ith moment,
Figure BDA0003542769540000063
and the flow predicted value is input by the network at the ith moment. The loss functions are divided into a training set and a test set, and generally, the lower the loss value is, the better the model prediction effect is. Meanwhile, the training set is mainly used for training the network model, and other parameters can be continuously adjusted by the network according to the value during training, so that the loss function value of the training set is lower than that of the test set. The optimization algorithm adopts an Adma optimization algorithm.
A Dropout layer is added after the hidden layer to prevent overfitting during training. The Dropout parameter is 0.5.
The output layer is used for outputting the prediction data and performing inverse normalization. As shown in FIG. 1, the outputs $x_n$, $y_n$ of the hidden layer represent the prediction data obtained during training of the network; these values are in fact real data used for the learning of the network and are related to the hyper-parameter time step during training. After network training is completed, the LSTM model has learned the basic trend of the data set, and test data can then be input into the network to obtain predicted data. Since the data input into the network have been normalized, the predicted values also lie in $[0,1]$; to obtain the real predicted values, the normalized data must be inversely normalized. The inverse normalization formula is:
$x = x'(\max - \min) + \min$
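A minimal sketch of the inverse normalization, reusing the per-column minimum and maximum saved during normalization (the concrete range values below are illustrative):

```python
import numpy as np

def denormalize(pred, lo, hi):
    """Invert the Min-Max scaling: x = x' * (max - min) + min."""
    return pred * (hi - lo) + lo

# e.g. a node whose raw traffic ranged over [8, 478]
lo, hi = 8.0, 478.0
real = denormalize(np.array([0.0, 0.5, 1.0]), lo, hi)
print(real)
```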
the invention adopts a plurality of model evaluation indexes to evaluate the quality of a prediction result, wherein the model evaluation indexes comprise Root Mean Square Error (RMSE), Mean Absolute Error (MAE) and a decision coefficient R2. The calculation formulas of the three are respectively as follows:
Figure BDA0003542769540000072
Figure BDA0003542769540000073
Figure BDA0003542769540000074
wherein n represents the number of training data, yiIn order to be the true data,
Figure BDA0003542769540000075
in order to predict the data, it is,
Figure BDA0003542769540000076
is the average of the corresponding columns. The smaller the RMSE and MAE values are, the higher the model accuracy is represented; while a value of R2 closer to 1 indicates a higher degree of fit between the training input and output.
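The three indexes follow directly from their formulas; a small NumPy sketch with made-up values:

```python
import numpy as np

def rmse(y, yhat):
    return np.sqrt(np.mean((y - yhat) ** 2))

def mae(y, yhat):
    return np.mean(np.abs(y - yhat))

def r2(y, yhat):
    # 1 - residual sum of squares / total sum of squares
    ss_res = np.sum((y - yhat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot

y    = np.array([3.0, 5.0, 7.0, 9.0])   # illustrative true traffic
yhat = np.array([2.5, 5.0, 7.5, 9.0])   # illustrative predictions
print(rmse(y, yhat), mae(y, yhat), r2(y, yhat))
```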
Meanwhile, to demonstrate the superiority of the model, the invention also selects two typical neural network prediction models, the recurrent neural network (RNN) and the gated recurrent unit (GRU), for comparison. Several embodiments are chosen to illustrate the applicability and advantages of the invention.
Example 1:
In this embodiment, an 8 × 8 Mesh on-chip optical network topology is selected and the model is built on Keras, a high-level API in Python. The Loss, MAE and RMSE of the three models under different training rounds are shown for this example in Fig. 4, and the training-set $R^2$ of the three models under different training rounds is compared in Table 1.
Example 2:
In this embodiment, a 4 × 4 Mesh on-chip optical network topology is selected and the model is built on Keras, a high-level API in Python. The Loss of the three models under different training rounds is shown in Fig. 5, and the $R^2$ of the three models under different training rounds is compared in Table 2.
Example 3:
In this embodiment, a 6 × 6 Mesh on-chip optical network topology is selected and the model is built on Keras, a high-level API in Python. The Loss of the three models under different training rounds is shown in Fig. 6, and the $R^2$ of the three models under different training rounds is compared in Table 3.
It can be seen that the embodiments of the present invention all show better performance in the on-chip optical networks with different core numbers.
Through the above prediction process, the predicted traffic distribution maps at the future moment for on-chip optical networks with different core counts can be obtained for the different embodiments, as shown in Figs. 7, 8 and 9 respectively. To determine the hot spots, the traffic values of the nodes are divided into n intervals, the proportion of nodes in each traffic interval is calculated, and the roughly 10% of nodes lying in the intervals with larger traffic values are taken as hot spots. Taking embodiment 1 as an example (Fig. 7), the traffic values of most nodes are distributed in [8, 290]; the nodes in this interval account for about 90% of all nodes, while the nodes in [290, 478] account for about 10%, so the nodes in [290, 478] are regarded as hot spots.
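The interval-based hot-spot rule above can be sketched as follows. The strategy of accumulating intervals from the highest-traffic one down until about 10% of the nodes are covered is an illustrative reading of the description, and the sample traffic values are made up:

```python
import numpy as np

def hot_spots(pred, n_bins=10, hot_frac=0.10):
    """Divide predicted node traffic into n equal-width intervals and flag
    the nodes in the highest-traffic intervals (~hot_frac of all nodes)."""
    edges = np.linspace(pred.min(), pred.max(), n_bins + 1)
    which = np.clip(np.digitize(pred, edges) - 1, 0, n_bins - 1)
    hot = np.zeros(pred.shape, dtype=bool)
    for b in range(n_bins - 1, -1, -1):     # highest-traffic interval first
        hot |= which == b
        if hot.mean() >= hot_frac:          # stop once ~10% of nodes are flagged
            break
    return hot

# illustrative predicted traffic for 10 nodes in [8, 478]
pred = np.array([8, 50, 120, 200, 290, 310, 400, 478, 30, 75], dtype=float)
print(hot_spots(pred))
```

With these sample values, only the node carrying the maximum traffic (478) falls in the top interval, so it alone is flagged as a hot spot.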
Table 1 comparison of example 1 of the present invention with R2 for different prediction models at different rounds
(Table reproduced as an image in the original publication.)
Table 2 comparison of example 2 of the present invention with R2 for different prediction models in different rounds
(Table reproduced as an image in the original publication.)
Table 3 comparison of example 3 of the present invention with R2 for different prediction models at different rounds
(Table reproduced as an image in the original publication.)
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. One typical implementation device is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smartphone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The above examples are to be construed as merely illustrative and not limitative of the remainder of the disclosure. After reading the description of the invention, the skilled person can make various changes or modifications to the invention, and these equivalent changes and modifications also fall into the scope of the invention defined by the claims.

Claims (9)

1. A method for predicting hot spots of an on-chip optical network based on an LSTM neural network is characterized by comprising the following steps:
1) acquiring a flow value of each node in the on-chip optical network through a public way, and taking the flow value as an input sample of an LSTM neural network;
2) dividing an input sample and inputting the divided input sample into a built multi-input multi-output LSTM neural network model for prediction; the multi-input multi-output LSTM neural network can input a plurality of variables at one time, and outputs predicted values of the variables at the same time after processing;
3) after the predicted flow is obtained, dividing the flow value corresponding to the node into n intervals, calculating the node proportion number in each flow interval, and taking the node in the interval with the larger flow value as a hot point at the next moment.
2. The method according to claim 1, wherein the multiple-input and multiple-output LSTM neural network model in 2) comprises: the system comprises an input layer, a hidden layer, a training module and an output layer, wherein the input layer is used for processing data input into the LSTM network so as to meet network requirements; the hidden layer has multiple layers, each layer comprises multiple LSTM neural network elements, and the LSTM neural network elements are used for data training; the training module adjusts the weight and the bias in the training process according to the relation between input and output so as to optimize network training; the output layer is used for outputting the training result of the hidden layer.
3. The LSTM neural network-based on-chip optical network hot spot prediction method of claim 1, wherein the input layer comprises data washing, data normalization, and data partitioning, wherein the data washing is used for removing NAN values and unsatisfactory values in the traffic data; the data normalization linearizes the original data set by adopting a Min-Max method and is used for eliminating the influence of singular sample data on training; after normalization, the data set is divided into a training set and a testing set according to a certain proportion, and the training set and the testing set are used for training and testing the model.
4. The LSTM neural network-based on-chip optical network hot spot prediction method of claim 1, wherein the hidden layer comprises two LSTM loop layers, each layer comprises 32 LSTM neurons, and each LSTM neuron internally comprises a forgetting gate, an input gate and an output gate; wherein the forgetting gate is determined by the input data and the output of the last cell unit for determining to discard the unwanted data from the input; the input gate determines to keep useful data from the input and update the cell state by a sigmoid function and a tanh function; the output gate is also determined by the sigmoid function and the tanh function for the output of the cell unit.
5. The method as claimed in claim 4, characterized in that, to avoid linear characteristics between the output and the input of each cell unit, an activation function is added in the hidden layer to increase the learning capability of the neural network; the rectified linear unit (ReLU) activation function is adopted, whose mathematical expression is:
$f(x) = \max(0, x)$
6. The method of claim 1, characterized in that the training module calculates the loss error between the theoretical output and the model output and, according to the loss value, feeds it back to the hidden layer through an optimization algorithm to continuously adjust the weights and accelerate the convergence of the network; the mathematical expression is:
$MSE = \frac{1}{n}\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2$
where $y_i$ is the true traffic value of each node at the $i$-th moment and $\hat{y}_i$ is the traffic value predicted by the network at the $i$-th moment;
the loss error is measured by the mean square error (MSE), and the optimization algorithm is the Adam algorithm.
7. The method as claimed in claim 6, characterized in that a Dropout layer is added after the hidden layer to prevent overfitting during training.
The output layer is used for outputting the prediction data and performing inverse normalization.
8. The method of claim 7, characterized in that the outputs $x_n$, $y_n$ of the hidden layer represent the prediction data obtained during training of the network; these values are in fact real data used for the learning of the network and are related to the hyper-parameter time step during training; after network training is finished, the LSTM model has learned the basic trend of the data set, and test data can then be input into the network to obtain predicted data; since the data input into the network have been normalized, the predicted values also lie in $[0,1]$; to obtain the real predicted values, the normalized data must be inversely normalized, and the inverse normalization formula is:
$x = x'(\max - \min) + \min$
9. The method of any one of claims 1-8, characterized in that multiple model evaluation indexes are further used to evaluate the prediction results, including the root mean square error (RMSE), the mean absolute error (MAE) and the coefficient of determination $R^2$.
CN202210237289.3A 2022-03-11 2022-03-11 On-chip optical network hot spot prediction method based on LSTM neural network Pending CN114662658A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210237289.3A CN114662658A (en) 2022-03-11 2022-03-11 On-chip optical network hot spot prediction method based on LSTM neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210237289.3A CN114662658A (en) 2022-03-11 2022-03-11 On-chip optical network hot spot prediction method based on LSTM neural network

Publications (1)

Publication Number Publication Date
CN114662658A true CN114662658A (en) 2022-06-24

Family

ID=82028830

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210237289.3A Pending CN114662658A (en) 2022-03-11 2022-03-11 On-chip optical network hot spot prediction method based on LSTM neural network

Country Status (1)

Country Link
CN (1) CN114662658A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117148161A (en) * 2023-08-29 2023-12-01 深圳市今朝时代股份有限公司 Battery SOC estimation method and device based on cloud neural network


Similar Documents

Publication Publication Date Title
Zhao et al. Towards traffic matrix prediction with LSTM recurrent neural networks
Qiao et al. Adaptive Levenberg-Marquardt algorithm based echo state network for chaotic time series prediction
CN110851782A (en) Network flow prediction method based on lightweight spatiotemporal deep learning model
CN111416797B (en) Intrusion detection method for optimizing regularization extreme learning machine by improving longicorn herd algorithm
CN109063939B (en) Wind speed prediction method and system based on neighborhood gate short-term memory network
Zhang et al. A short-term traffic forecasting model based on echo state network optimized by improved fruit fly optimization algorithm
Yang et al. A new method based on PSR and EA-GMDH for host load prediction in cloud computing system
CN109816144B (en) Short-term load prediction method for distributed memory parallel computing optimized deep belief network
Wang et al. A multitask learning-based network traffic prediction approach for SDN-enabled industrial internet of things
CN116562908A (en) Electric price prediction method based on double-layer VMD decomposition and SSA-LSTM
CN114936708A (en) Fault diagnosis optimization method based on edge cloud collaborative task unloading and electronic equipment
CN114662658A (en) On-chip optical network hot spot prediction method based on LSTM neural network
Matsui et al. Peak load forecasting using analyzable structured neural network
CN113128666A (en) Mo-S-LSTMs model-based time series multi-step prediction method
Zhichao et al. Short-term load forecasting of multi-layer LSTM neural network considering temperature fuzzification
Skorpil et al. Back-propagation and k-means algorithms comparison
CN116523001A (en) Method, device and computer equipment for constructing weak line identification model of power grid
Lu et al. Laplacian deep echo state network optimized by genetic algorithm
CN116522747A (en) Two-stage optimized extrusion casting process parameter optimization design method
Xue et al. An improved extreme learning machine based on variable-length particle swarm optimization
CN113641496A (en) DIDS task scheduling optimization method based on deep reinforcement learning
CN113112092A (en) Short-term probability density load prediction method, device, equipment and storage medium
Henriquez et al. An empirical study of the hidden matrix rank for neural networks with random weights
Cai et al. Cycle sampling neural network algorithms and applications
CN112183814A (en) Short-term wind speed prediction method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination