CN110837889A - Neural network training method and device, storage medium and electronic device - Google Patents

Neural network training method and device, storage medium and electronic device Download PDF

Info

Publication number
CN110837889A
CN110837889A
Authority
CN
China
Prior art keywords
parameter set
neural network
training parameter
training
output
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810927680.XA
Other languages
Chinese (zh)
Inventor
宋英豪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ennew Digital Technology Co Ltd
Original Assignee
Ennew Digital Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ennew Digital Technology Co Ltd filed Critical Ennew Digital Technology Co Ltd
Priority to CN201810927680.XA
Publication of CN110837889A
Legal status: Pending

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention provides a neural network training method and device, a storage medium and an electronic device, wherein the method comprises the following steps: acquiring a first output training parameter set and an input training parameter set; performing linear smoothing on the first output training parameter set to obtain a second output training parameter set; and inputting the second output training parameter set, as label data, together with the input training parameter set into an original model, and training to obtain a neural network model. The invention solves the technical problem of the low learning efficiency of neural network regression models in the prior art.

Description

Neural network training method and device, storage medium and electronic device
Technical Field
The invention relates to the field of artificial intelligence, in particular to a neural network training method and device, a storage medium and an electronic device.
Background
The prediction problem addressed by neural network models in the prior art is as follows: given a dependent variable y = (y1, y2, ..., ym), corresponding to the output quantities, and an independent variable x = (x1, x2, ..., xn), corresponding to the input quantities, the dependent variable y is to be predicted from the independent variable x; this is a regression problem.
The principle of neural network regression prediction is as follows: a neural network can be applied to the prediction problem described above, with x as the input-layer variable and y as the output-layer variable; the network then fits a functional relationship between them.
In practice, however, the dependent variable y and the independent variable x do not necessarily exhibit a continuous or piecewise-continuous relationship, and particularly when the data are noisy, this poor continuity leads to low learning efficiency.
In view of the above problems in the prior art, no effective solution has been found.
Disclosure of Invention
The embodiment of the invention provides a neural network training method and device, a storage medium and an electronic device.
According to an embodiment of the present invention, there is provided a training method of a neural network, including: acquiring a first output training parameter set and an input training parameter set; performing linear smoothing on the first output training parameter set to obtain a second output training parameter set; and inputting the second output training parameter set, as label data, together with the input training parameter set into an original model, and training to obtain a neural network model.
Optionally, after the training of the neural network model, the method further includes: acquiring input data to be detected; and obtaining target data corresponding to the input data according to the neural network model.
Optionally, obtaining target data corresponding to the input data according to the neural network model includes: inputting the input data into the neural network model, and outputting result data of the input data; and carrying out inverse smoothing processing on the result data to obtain the target data.
Optionally, the second output training parameter set is obtained by performing linear smoothing on the first output training parameter set using the following formula: trainY' = A × trainY, where trainY' is the second output training parameter set, trainY is the first output training parameter set, and A is a preset mapping coefficient.
Optionally, the target data is obtained by performing inverse smoothing on the result data using the following formula: A × testY = testY', where testY' is the result data, testY is the target data, and A is a preset mapping coefficient.
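As an illustration of these two formulas, a minimal NumPy sketch is given below; the data shapes and the concrete choice of A (a scaled identity) are assumptions made purely for the example, since the patent does not fix them:

```python
import numpy as np

# Assumed example: k training samples, n output dimensions.
k, n = 100, 5
A = 0.5 * np.eye(n)                    # eigenvalues all 0.5 < 1

trainY = np.random.rand(k, n)          # first output training parameter set
trainY_smooth = trainY @ A.T           # trainY' = A x trainY, applied row-wise

# Inverse smoothing: recover testY from testY' by solving A x testY = testY'.
testY_prime = np.random.rand(10, n)    # stand-in for model output (result data)
testY = np.linalg.solve(A, testY_prime.T).T
```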
Optionally, A is a matrix whose eigenvalues are all smaller than 1 (the concrete example matrix appears in the original filing only as an image).
According to another embodiment of the present invention, there is provided a training apparatus for a neural network, including: a first obtaining module, configured to obtain a first output training parameter set and an input training parameter set; a processing module, configured to perform linear smoothing on the first output training parameter set to obtain a second output training parameter set; and a training module, configured to input the second output training parameter set, as label data, together with the input training parameter set into an original model, and to train to obtain a neural network model.
Optionally, the apparatus further comprises: the second acquisition module is used for acquiring input data to be tested after the training module trains to obtain the neural network model; and the algorithm module is used for obtaining target data corresponding to the input data according to the neural network model.
According to a further embodiment of the present invention, there is also provided a storage medium having a computer program stored therein, wherein the computer program is arranged to perform the steps of any of the above method embodiments when executed.
According to yet another embodiment of the present invention, there is also provided an electronic device, including a memory in which a computer program is stored and a processor configured to execute the computer program to perform the steps in any of the above method embodiments.
According to the invention, the second output training parameter set is obtained by performing linear smoothing on the first output training parameter set. The linear smoothing improves the continuity of the dependent variable with respect to the independent variable, which raises the learning efficiency of neural network regression, solves the technical problem of low learning efficiency of neural network regression models in the prior art, and improves the prediction performance of the model.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
fig. 1 is a block diagram of a hardware structure of a mobile terminal of a training method of a neural network according to an embodiment of the present invention;
FIG. 2 is a flow chart of a method of training a neural network according to an embodiment of the present invention;
FIG. 3 is a schematic flow chart of an embodiment of the present invention;
fig. 4 is a block diagram of a training apparatus of a neural network according to an embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Example 1
The method provided by the first embodiment of the present application may be executed in a server, a mobile terminal, a computer terminal, or a similar computing device. Taking the example of the present invention running on a mobile terminal, fig. 1 is a block diagram of a hardware structure of the mobile terminal of a training method of a neural network according to an embodiment of the present invention. As shown in fig. 1, the mobile terminal 10 may include one or more (only one shown in fig. 1) processors 102 (the processor 102 may include, but is not limited to, a processing device such as a microprocessor MCU or a programmable logic device FPGA) and a memory 104 for storing data, and optionally may also include a transmission device 106 for communication functions and an input-output device 108. It will be understood by those skilled in the art that the structure shown in fig. 1 is only an illustration, and does not limit the structure of the mobile terminal. For example, the mobile terminal 10 may also include more or fewer components than shown in FIG. 1, or have a different configuration than shown in FIG. 1.
The memory 104 may be used to store a computer program, for example, a software program and a module of application software, such as a computer program corresponding to the neural network training method in the embodiment of the present invention, and the processor 102 executes various functional applications and data processing by running the computer program stored in the memory 104, so as to implement the above-mentioned method. The memory 104 may include high speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some instances, the memory 104 may further include memory located remotely from the processor 102, which may be connected to the mobile terminal 10 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 106 is used for receiving or transmitting data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of the mobile terminal 10. In one example, the transmission device 106 includes a Network adapter (NIC), which can be connected to other Network devices through a base station so as to communicate with the internet. In one example, the transmission device 106 may be a Radio Frequency (RF) module, which is used for communicating with the internet in a wireless manner.
In this embodiment, a training method of a neural network is provided, and fig. 2 is a flowchart of the training method of the neural network according to the embodiment of the present invention, as shown in fig. 2, the flowchart includes the following steps:
step S202, acquiring a first output training parameter set and an input training parameter set;
step S204, carrying out linear smoothing processing on the first output training parameter set to obtain a second output training parameter set;
And step S206, inputting the second output training parameter set, as label data, together with the input training parameter set into the original model, and training to obtain a neural network model.
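A minimal end-to-end sketch of steps S202 to S206 might look as follows. The synthetic data and the use of scikit-learn's MLPRegressor as a stand-in for the "original model" are assumptions made for illustration only:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Step S202: acquire the input and first output training parameter sets
# (synthetic, noisy stand-ins here).
trainX = rng.random((200, 3))
trainY = np.sin(trainX.sum(axis=1, keepdims=True)) \
         + 0.1 * rng.standard_normal((200, 1))

# Step S204: linear smoothing trainY' = A x trainY, with an assumed A
# whose eigenvalues are below 1.
A = 0.5 * np.eye(1)
trainY_smooth = (A @ trainY.T).T

# Step S206: train the original model with the smoothed set as labels.
model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
model.fit(trainX, trainY_smooth.ravel())
```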
Through the above steps, the first output training parameter set is linearly smoothed to obtain the second output training parameter set. The linear smoothing improves the continuity of the dependent variable with respect to the independent variable, which raises the learning efficiency of neural network regression, solves the technical problem of low learning efficiency of neural network regression models in the prior art, and improves the prediction performance of the model.
Optionally, the executing subject of the above steps may be a data processing device, a server, a terminal, and the like, and may specifically be a processor, an algorithm module, and the like, but is not limited thereto.
In the application scenario of this embodiment, the prediction model corresponding to the neural network model may be applied to prediction tasks such as load prediction and time-series prediction. Load prediction can be understood as determining the load at some future time, to a given accuracy, from factors such as the operating characteristics of the system, capacity-expansion decisions, natural conditions and social influences; the load may be, for example, energy demand or energy consumption. Energy load prediction is a prerequisite for energy planning: if the energy demand or consumption of an area can be fully grasped, reliable reference data can be provided for an energy planning scheme, so that the operating efficiency of an energy station is improved, energy consumption is reduced, and the comprehensive utilization rate of energy is raised. Load prediction can therefore be widely applied in the integrated-energy field to optimize the construction and planning of energy stations.
In a specific example, the first output training parameter set comprises a number of gas volumes, and the input training parameter set comprises a number of influence factors affecting those volumes (such as time, temperature and population). During training of the neural network model, the gas volumes corresponding to the influence factors are linearly smoothed; when the trained model is subsequently used to predict the gas volume to be consumed, inputting the influence factors yields a more accurate gas volume, allowing a gas company to reserve supplies in advance.
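As a concrete (and entirely invented) illustration of how such a data set might be framed, each training sample could pair the influence factors with the observed gas volume:

```python
import numpy as np

# Hypothetical framing: each trainX row holds the influence factors for one
# observation -- [hour of day, temperature (deg C), population (thousands)];
# trainY holds the corresponding gas volumes, to be smoothed before training.
trainX = np.array([
    [8.0,  -2.0, 120.0],
    [12.0,  5.0, 120.0],
    [20.0, -8.0, 121.0],
])
trainY = np.array([[340.0], [210.0], [455.0]])
```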
The smoothing in this embodiment need only be a linear technique, such as a (modified) moving average, applied within the neural network pipeline. Of course, the choice of mapping matrix can be studied further, and the approach can even be extended to the nonlinear case; all that is required is to improve the "continuity" of the function. This embodiment also proposes an inverse smoothing technique, in which the regression (prediction) result of the neural network is restored to the original data by solving a system of equations.
In an example of this embodiment, after the training to obtain the neural network model, the method further includes:
s11, acquiring input data to be tested;
and S12, obtaining target data corresponding to the input data according to the neural network model.
These steps describe the use of the trained neural network model in operation. The method can be applied in different scenarios to predict different kinds of data, with the input data to be tested serving as the input of the neural network model.
In one example of this embodiment, obtaining target data corresponding to the input data according to the neural network model includes:
s21, inputting the input data into the neural network model, and outputting the result data of the input data; and the input data to be tested is used as the input of the neural network model, and the result data is the output of the neural network model.
And S22, performing inverse smoothing processing on the result data to obtain target data. The inverse smoothing process is equivalent to the inverse process of the linear smoothing process.
In an example of this embodiment, the second output training parameter set may be obtained by performing linear smoothing on the first output training parameter set using the following formula:
trainY' = A × trainY, where trainY' is the second output training parameter set, trainY is the first output training parameter set, and A is a preset mapping coefficient.
In an example of this embodiment, the target data may be obtained by performing inverse smoothing on the result data using the following formula:
A × testY = testY', where testY' is the result data, testY is the target data, and A is a preset mapping coefficient.
A complete flow is used to illustrate the solution of the present embodiment, and fig. 3 is a schematic flow chart of the embodiment of the present invention, which includes the following steps:
the functional relationship between the dependent variable y and the independent variable x in this embodiment is a function that satisfies a good fit of the feedforward neural network (from finite space to Borel measurable functions in finite space, including continuous functions and piecewise continuous functions).
Here trainX: x1, x2, ..., xk ∈ R^m.
The linear smoothing of trainY is: trainY: y1, y2, ..., yk ∈ R^n → (linear smoothing y' = Ay) → trainY': y'1, y'2, ..., y'k ∈ R^n.
Step one: linearly smooth trainY. The aim of mapping trainY to trainY' with a linear transformation, trainY' = A × trainY, is to make the "continuity" between y' and x better (in other words, smoother).
The theoretical basis of step one is as follows. For a function y = f(x), continuity means that ||y(1) − y(2)|| → 0 as ||x(1) − x(2)|| → 0. After the mapping, ||Ay(1) − Ay(2)|| = ||A(y(1) − y(2))|| ≤ ||A|| · ||y(1) − y(2)||, so as long as ||A|| < 1 in the L2 sense — that is, when the eigenvalues of A are all less than 1 — we have ||A(y(1) − y(2))|| < ||y(1) − y(2)||. In this sense the mapped function can be considered "more continuous", and the neural network can fit it better.
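This contraction property is easy to verify numerically; the following sketch uses an assumed random A with spectral norm below 1:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
A = 0.1 * rng.random((n, n))           # small entries keep the norm below 1
assert np.linalg.norm(A, 2) < 1

y1, y2 = rng.random(n), rng.random(n)
lhs = np.linalg.norm(A @ (y1 - y2))
# ||A(y1 - y2)|| <= ||A|| * ||y1 - y2|| < ||y1 - y2|| whenever ||A|| < 1.
assert lhs <= np.linalg.norm(A, 2) * np.linalg.norm(y1 - y2)
assert lhs < np.linalg.norm(y1 - y2)
```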
Step two: train the model. A neural network regression model is trained with the training sets trainX and trainY'.
Step three: new data is predicted. For the new test set testX, prediction is performed using this neural network model to yield testY'.
Step four: inverse smoothing. testY is calculated from A × testY = testY', which in practice amounts to solving a system of linear equations.
A is a mapping matrix whose eigenvalues are all smaller than 1. In one example, A is a mapping matrix in a 5-dimensional space (shown as an image in the original filing) in which each mapped value is the average of three adjacent numbers, equivalent to a moving smoothing of y; its eigenvalues are all 1/3 and so meet the requirement. In practice, any matrix whose eigenvalues are all less than 1 will do.
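Since the matrix itself survives only as an image reference, a plausible reconstruction consistent with the description — a 5×5 three-point moving-average matrix, written in lower-triangular form so that every eigenvalue equals the diagonal entry 1/3 — is sketched below; the exact layout is an assumption:

```python
import numpy as np

# Assumed reconstruction: each mapped value averages (up to) three
# consecutive inputs; the lower-triangular form makes all eigenvalues 1/3.
A = (1.0 / 3.0) * np.array([
    [1, 0, 0, 0, 0],
    [1, 1, 0, 0, 0],
    [1, 1, 1, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 1, 1, 1],
], dtype=float)

print(np.linalg.eigvals(A))            # all 1/3, as the description requires

# Round trip: smoothing, then inverse smoothing by solving the system.
y = np.arange(1.0, 6.0)
assert np.allclose(np.linalg.solve(A, A @ y), y)
```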
The scheme of the embodiment is the combined application of smoothing and inverse smoothing in neural network regression, and the process of inverse smoothing is actually a problem of solving a linear equation system.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the order of acts, as some steps may occur in other orders or concurrently in accordance with the invention. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required by the invention.
Through the above description of the embodiments, those skilled in the art can clearly understand that the method according to the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but the former is a better implementation mode in many cases. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (such as a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.
Example 2
In this embodiment, a training apparatus for a neural network is further provided, and the apparatus is used to implement the foregoing embodiments and preferred embodiments, which have already been described and are not described again. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. Although the means described in the embodiments below are preferably implemented in software, an implementation in hardware, or a combination of software and hardware is also possible and contemplated.
Fig. 4 is a block diagram of a training apparatus for a neural network according to an embodiment of the present invention, as shown in fig. 4, the apparatus including:
a first obtaining module 40, configured to obtain a first output training parameter set and an input training parameter set;
a processing module 42, configured to perform linear smoothing on the first output training parameter set to obtain a second output training parameter set;
and the training module 44 is configured to input the second output training parameter set, as label data, together with the input training parameter set into the original model, and to train to obtain a neural network model.
Optionally, the apparatus further comprises:
the second acquisition module is used for acquiring input data to be tested after the training module trains to obtain the neural network model;
and the algorithm module is used for obtaining target data corresponding to the input data according to the neural network model.
It should be noted that, the above modules may be implemented by software or hardware, and for the latter, the following may be implemented, but not limited to: the modules are all positioned in the same processor; alternatively, the modules are respectively located in different processors in any combination.
Example 3
Embodiments of the present invention also provide a storage medium having a computer program stored therein, wherein the computer program is arranged to perform the steps of any of the above method embodiments when executed.
Alternatively, in the present embodiment, the storage medium may be configured to store a computer program for executing the steps of:
s1, acquiring a first output training parameter set and an input training parameter set;
s2, performing linear smoothing processing on the first output training parameter set to obtain a second output training parameter set;
and S3, inputting the second output training parameter set, as label data, together with the input training parameter set into the original model, and training to obtain the neural network model.
Optionally, in this embodiment, the storage medium may include, but is not limited to: various media capable of storing computer programs, such as a usb disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
Embodiments of the present invention also provide an electronic device comprising a memory having a computer program stored therein and a processor arranged to run the computer program to perform the steps of any of the above method embodiments.
Optionally, the electronic apparatus may further include a transmission device and an input/output device, wherein the transmission device is connected to the processor, and the input/output device is connected to the processor.
Optionally, in this embodiment, the processor may be configured to execute the following steps by a computer program:
s1, acquiring a first output training parameter set and an input training parameter set;
s2, performing linear smoothing processing on the first output training parameter set to obtain a second output training parameter set;
and S3, inputting the second output training parameter set, as label data, together with the input training parameter set into the original model, and training to obtain the neural network model.
Optionally, the specific examples in this embodiment may refer to the examples described in the above embodiments and optional implementation manners, and this embodiment is not described herein again.
It will be apparent to those skilled in the art that the modules or steps of the present invention described above may be implemented by a general purpose computing device, they may be centralized on a single computing device or distributed across a network of multiple computing devices, and alternatively, they may be implemented by program code executable by a computing device, such that they may be stored in a storage device and executed by a computing device, and in some cases, the steps shown or described may be performed in an order different than that described herein, or they may be separately fabricated into individual integrated circuit modules, or multiple ones of them may be fabricated into a single integrated circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A method of training a neural network, comprising:
acquiring a first output training parameter set and an input training parameter set;
performing linear smoothing on the first output training parameter set to obtain a second output training parameter set;
and inputting the second output training parameter set, as label data, together with the input training parameter set into an original model, and training to obtain a neural network model.
2. The method of claim 1, wherein, after the training to obtain the neural network model, the method further comprises:
acquiring input data to be detected;
and obtaining target data corresponding to the input data according to the neural network model.
3. The method of claim 2, wherein obtaining target data corresponding to the input data from the neural network model comprises:
inputting the input data into the neural network model, and outputting result data of the input data;
and carrying out inverse smoothing processing on the result data to obtain the target data.
4. The method of claim 1, wherein the linear smoothing of the first output training parameter set results in a second output training parameter set using the following equation:
trainY' = A × trainY, where trainY' is the second output training parameter set, trainY is the first output training parameter set, and A is a preset mapping coefficient.
5. The method of claim 3, wherein the target data is obtained by inverse smoothing the result data using the following formula:
A × testY = testY', where testY' is the result data, testY is the target data, and A is a preset mapping coefficient.
6. The method according to claim 4 or 5, wherein A is a matrix whose eigenvalues are all smaller than 1.
7. an apparatus for training a neural network, comprising:
a first obtaining module, configured to obtain a first output training parameter set and an input training parameter set;
the processing module is used for performing linear smoothing processing on the first output training parameter set to obtain a second output training parameter set;
and the training module is used for inputting the second output training parameter set, as label data, together with the input training parameter set into an original model, and training to obtain a neural network model.
8. The apparatus of claim 7, further comprising:
the second acquisition module is used for acquiring input data to be tested after the training module trains to obtain the neural network model;
and the algorithm module is used for obtaining target data corresponding to the input data according to the neural network model.
9. A storage medium, in which a computer program is stored, wherein the computer program is arranged to perform the method of any of claims 1 to 6 when executed.
10. An electronic device comprising a memory and a processor, wherein the memory has stored therein a computer program, and wherein the processor is arranged to execute the computer program to perform the method of any of claims 1 to 6.
CN201810927680.XA 2018-08-15 2018-08-15 Neural network training method and device, storage medium and electronic device Pending CN110837889A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810927680.XA CN110837889A (en) 2018-08-15 2018-08-15 Neural network training method and device, storage medium and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810927680.XA CN110837889A (en) 2018-08-15 2018-08-15 Neural network training method and device, storage medium and electronic device

Publications (1)

Publication Number Publication Date
CN110837889A (en) 2020-02-25

Family

ID=69573191

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810927680.XA Pending CN110837889A (en) 2018-08-15 2018-08-15 Neural network training method and device, storage medium and electronic device

Country Status (1)

Country Link
CN (1) CN110837889A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113673153A (en) * 2021-08-11 2021-11-19 追觅创新科技(苏州)有限公司 Method and device for determining electromagnetic torque of robot, storage medium and electronic device


Similar Documents

Publication Publication Date Title
EP3822880A1 (en) Load prediction method and apparatus based on neural network
CN110956202B (en) Image training method, system, medium and intelligent device based on distributed learning
CN110874671B (en) Power load prediction method and device of power distribution network and storage medium
WO2021098281A1 (en) Project baseline data generation method and device, computer device, and computer readable storage medium
CN110874765B (en) Data processing method, device, equipment and storage medium
US20210312295A1 (en) Information processing method, information processing device, and information processing program
CN112613642B (en) Emergency material demand prediction method and device, storage medium and electronic equipment
CN110490331A (en) The processing method and processing device of knowledge mapping interior joint
US11455544B2 (en) Prediction model generation device, prediction model generation method, and recording medium
CN110084407A (en) Load forecasting method and device based on Recognition with Recurrent Neural Network and meta learning strategy
CN107944677A (en) Achievement method for tracing, application server and computer-readable recording medium
CN109670624B (en) Method and device for pre-estimating meal waiting time
CN112052027A (en) Method and device for processing AI task
CN112330048A (en) Scoring card model training method and device, storage medium and electronic device
CN108667877B (en) Method and device for determining recommendation information, computer equipment and storage medium
CN114943284A (en) Data processing system and method of behavior prediction model
CN111325509A (en) Data processing method and device, storage medium and electronic device
CN110837889A (en) Neural network training method and device, storage medium and electronic device
CN114021018A (en) Recommendation method, system and storage medium based on graph convolution neural network
Bantouna et al. Network load predictions based on big data and the utilization of self-organizing maps
CN105432038A (en) Application ranking calculating apparatus and usage information collecting apparatus
Kapoor et al. A comparison of short-term load forecasting techniques
CN112614010A (en) Load prediction method and device, storage medium and electronic device
CN111950802A (en) Production scheduling control method and device
CN110084406B (en) Load prediction method and device based on self-encoder and meta-learning strategy

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200225