WO2020143225A1 - Neural network training method, training device and electronic device - Google Patents
- Publication number
- WO2020143225A1 · PCT/CN2019/100983 · CN2019100983W
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- neural network
- feature map
- loss function
- function value
- trained
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Definitions
- the present application relates to the field of deep learning technology, and more specifically, to a neural network training method, a neural network training device, and an electronic device.
- Deep neural networks with good performance usually have many layers and therefore a huge number of parameters. For use on a mobile terminal, a lightweight network with few parameters is usually chosen instead, but the performance of a lightweight network is not as good.
- knowledge distillation is widely used as an effective remedy. Its working principle is to use the output of the large model as an auxiliary label to effectively supervise the training of the lightweight network and realize knowledge transfer.
- the embodiments of the present application provide a neural network training method, a neural network training device, and an electronic device, which combine the feature maps of a trained and an untrained neural network at the same preset layer to obtain a loss function, and further combine it with the untrained neural network's own loss function to update the parameters of the untrained neural network, thereby improving the accuracy of the trained neural network.
- a method for training a neural network includes: inputting training data into a trained first neural network and a second neural network to be trained; determining a first feature map output by a preset layer of the first neural network and a second feature map output by the second neural network at the same preset layer; determining a first loss function value of the second neural network based on the first feature map and the second feature map; updating the parameters of the second neural network based on the first loss function value and a second loss function value of the second neural network; and, using the updated parameters of the second neural network as the initial parameters of the second neural network to be trained, iteratively repeating the above steps, from inputting the training data into the trained first neural network and the second neural network to be trained through updating the parameters of the second neural network, until the updated second neural network meets a preset condition, thereby obtaining the finally trained second neural network.
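The method steps above can be sketched in miniature. The following is an illustration only, not the patent's implementation: the feature maps are stand-in NumPy arrays, and the mean-squared form of the feature loss and the weighting coefficient `alpha` are assumptions.

```python
import numpy as np

def l2_feature_loss(f_teacher, f_student):
    # First loss: distance between the two networks' feature maps at the
    # same preset layer (mean squared difference is one common form).
    return float(np.mean((f_teacher - f_student) ** 2))

def total_loss(first_loss, second_loss, alpha=0.5):
    # Combine the teacher-derived first loss with the student's own
    # second loss; `alpha` is an assumed weighting coefficient.
    return alpha * first_loss + second_loss

# Toy feature maps standing in for the preset-layer outputs.
f1 = np.array([[1.0, 2.0], [3.0, 4.0]])  # trained first network
f2 = np.array([[1.5, 2.0], [2.0, 4.0]])  # second network to be trained
first = l2_feature_loss(f1, f2)          # supervision signal from the teacher
loss = total_loss(first, second_loss=0.8)
```

The resulting `loss` is what would be back-propagated to update only the second network's parameters.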
- a neural network training device includes: a neural network input unit for inputting training data into a trained first neural network and a second neural network to be trained; a feature map determining unit for determining a first feature map output by a preset layer of the first neural network input by the neural network input unit and a second feature map output by the second neural network at the same preset layer; a loss function determining unit for determining a first loss function value of the second neural network based on the first feature map and the second feature map determined by the feature map determining unit; a neural network updating unit for updating the parameters of the second neural network based on the first loss function value determined by the loss function determining unit and a second loss function value of the second neural network; and, an iterative updating unit for using the updated parameters of the second neural network as the initial parameters of the second neural network to be trained and iteratively repeating the above steps, from inputting the training data into the trained first neural network and the second neural network to be trained through updating the parameters of the second neural network.
- an electronic device includes: a processor; and a memory in which computer program instructions are stored, where the computer program instructions, when executed by the processor, cause the processor to execute the neural network training method described above.
- a computer-readable medium has computer program instructions stored thereon which, when executed by a processor, cause the processor to perform the neural network training method described above.
- the neural network training method, neural network training device, and electronic device according to the present application can input training data into a trained first neural network and a second neural network to be trained; determine a first feature map output by a preset layer of the first neural network and a second feature map output by the second neural network at the same preset layer; determine a first loss function value of the second neural network based on the first feature map and the second feature map; update the parameters of the second neural network based on the first loss function value and a second loss function value of the second neural network; and use the updated parameters of the second neural network as the initial parameters of the second neural network to be trained, iteratively repeating the above steps of inputting the training data into the trained first neural network and the second neural network to be trained.
- in this way, the loss function value is determined by combining the feature maps output at the same preset layer by the trained first neural network and the second neural network to be trained, and this value is further combined with the second neural network's own loss function value to update the parameters of the second neural network. By using the updated parameters of the second neural network as the initial parameters of the second neural network to be trained and updating the second neural network iteratively, the parameters of the trained first neural network can be fully and effectively used to train the second neural network, thereby improving the accuracy of the second neural network.
- FIG. 1 illustrates a flowchart of a neural network training method according to an embodiment of the present application.
- FIG. 2 illustrates a schematic diagram of an iterative process in a training method of a neural network according to an embodiment of the present application.
- FIG. 3 illustrates a schematic diagram of a neural network training method applied to an image recognition and detection scene according to an embodiment of the present application.
- FIG. 4 illustrates a flowchart of the process of determining the feature maps and the loss function in an image recognition and detection scene in the neural network training method according to an embodiment of the present application.
- FIG. 5 illustrates a schematic diagram of a neural network training method applied to a classification scene according to an embodiment of the present application.
- FIG. 6 illustrates a flowchart of a process of determining a feature map and a loss function of a neural network training method according to an embodiment of the present application in a classification scenario.
- FIG. 7 illustrates a flowchart of a training example of the second neural network in the method for training a neural network according to an embodiment of the present application.
- FIG. 8 illustrates a block diagram of a training device of a neural network according to an embodiment of the present application.
- FIG. 9 illustrates a block diagram of a first example of a neural network training device in an image recognition and detection scenario according to an embodiment of the present application.
- FIG. 10 illustrates a block diagram of a second example of a training device of a neural network according to an embodiment of the present application in a classification scenario.
- FIG. 11 illustrates a block diagram of a schematic neural network update unit of a neural network training device according to an embodiment of the present application.
- FIG. 12 illustrates a block diagram of an electronic device according to an embodiment of the present application.
- the degree of knowledge transfer determines the accuracy of the lightweight network; that is, if the knowledge transfer is insufficient, the accuracy of the resulting lightweight network is insufficient.
- the basic idea of the present application is to determine a loss function value by combining the feature maps output at a preset layer by the trained neural network and the neural network to be trained, and to further combine it with the loss function value of the neural network to be trained itself to update the parameters of the neural network to be trained in an iterative manner.
- the neural network training method, neural network training device, and electronic device provided by the present application first input training data into the trained first neural network and the second neural network to be trained; then determine the first feature map output by a preset layer of the first neural network and the second feature map output by the second neural network at the same preset layer; then determine the first loss function value of the second neural network based on the first feature map and the second feature map; then update the parameters of the second neural network based on the first loss function value and the second loss function value of the second neural network; and finally use the updated parameters of the second neural network as the initial parameters of the second neural network to be trained, iteratively repeating the above steps of inputting the training data into the trained first neural network and the second neural network to be trained.
- in this way, the update of the parameters of the second neural network depends both on its own second loss function value and on the first loss function value determined from the feature maps output at the same preset layer by the trained first neural network and the second neural network to be trained. By using the updated parameters of the second neural network as the initial parameters of the second neural network to be trained and updating the second neural network iteratively, the trained parameters of the first neural network can be fully and effectively used, thereby improving the accuracy of the second neural network after training.
- the neural network training method, neural network training device, and electronic device according to the present application can be used for knowledge transfer between various neural networks; for example, the trained first neural network and the second neural network to be trained may each be a large network or a lightweight network, and the present application is not intended to impose any restrictions on this.
- FIG. 1 illustrates a flowchart of a neural network training method according to an embodiment of the present application.
- the training method of a neural network includes the following steps.
- in step S110, the training data is input into the trained first neural network and the second neural network to be trained.
- the first neural network and the second neural network may be various types of neural networks for image recognition, object detection, object classification, etc.
- the training data may be an image training set.
- the trained first neural network may be a large network with a large number of parameters and high accuracy, while the second neural network to be trained may be a lightweight network with few parameters and relatively low accuracy. Therefore, in order to improve the accuracy of the lightweight network, the trained large network needs to provide supervision signals to guide the learning of the lightweight network.
- the first neural network has been trained before inputting the training data, that is, the first neural network has been trained to converge.
- the second neural network corresponds to the first neural network, so that the trained first neural network can be used for training, and the second neural network obtains initialization parameters through Gaussian initialization.
- the method for training a neural network, before inputting the training data into the trained first neural network and the second neural network to be trained, further includes: training the first neural network until the first neural network converges; and performing Gaussian initialization on the second neural network corresponding to the first neural network.
- in this way, the trained first neural network can provide a supervision signal to supervise the training of the second neural network, so that knowledge transfer between the neural networks is achieved and the accuracy of the second neural network is improved.
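Gaussian initialization of the second network can be sketched as follows. This is a minimal illustration; the standard deviation and the layer shape are assumed values, not parameters from the patent.

```python
import numpy as np

def gaussian_init(shape, std=0.01, seed=0):
    # Draw initial weights from a zero-mean Gaussian distribution,
    # as in Gaussian initialization. `std` is an assumed value.
    rng = np.random.default_rng(seed)
    return rng.normal(loc=0.0, scale=std, size=shape)

# Initialize one hypothetical layer of the second neural network.
w = gaussian_init((64, 32))
```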
- in step S120, a first feature map output by the preset layer of the first neural network and a second feature map output by the second neural network at the same preset layer are determined. That is, in order for the first neural network to provide a supervision signal to supervise the training of the second neural network, the output feature maps need to be extracted from the same layer of the first neural network and the second neural network.
- the preset layer may be a different preset layer of the network model, which will be explained in further detail later.
- in step S130, the first loss function value of the second neural network is determined based on the first feature map and the second feature map.
- depending on the preset layer, the extracted first feature map and second feature map will be different feature maps; therefore, the first loss function value determined based on the first feature map and the second feature map may correspondingly be a different type of loss function value, which will also be described in further detail later.
- in step S140, the parameters of the second neural network are updated based on the first loss function value and the second loss function value of the second neural network. Because the first loss function value is determined from the first feature map output by the first neural network at the preset layer and the second feature map output by the second neural network at the same preset layer, it can serve as the supervision signal provided by the first neural network. By further combining it with the second loss function value of the second neural network to update the parameters of the second neural network, knowledge transfer of the parameters of the first neural network can be achieved, thereby improving the accuracy of the updated second neural network.
- in step S150, the updated parameters of the second neural network are used as the initial parameters of the second neural network to be trained, and the foregoing steps, from inputting the training data into the trained first neural network and the second neural network to be trained through updating the parameters of the second neural network based on the first loss function value and the second loss function value, are repeated in an iterative manner. When the updated second neural network meets the preset condition, the finally trained second neural network is obtained.
- that is, the second neural network obtained by one round of training is used as the untrained second neural network in step S110, with the already-trained parameters as its initial parameters, and steps S110-S140 in the embodiment shown in FIG. 1 are repeatedly executed, so that after multiple iterations a second neural network meeting a certain accuracy is obtained. Through such iterative distillation, the neural network produced by the previous distillation serves as the initialization of the network to be trained in the current round, and the second neural network is continuously distilled by the trained first neural network, so that the knowledge of the large first neural network is fully transferred to the lightweight second neural network.
- in this way, the supervision signals provided by the first neural network can be fully utilized to further improve the accuracy of the second neural network.
- FIG. 2 illustrates a schematic diagram of an iterative process in a training method of a neural network according to an embodiment of the present application.
- training data, such as an image set I N, is input into the trained first neural network Net 1 and the second neural network Net 2 to be trained, and the second neural network is trained by the neural network training method described above to obtain its updated parameters.
- next, the trained first neural network Net 1 remains as it is, and the updated parameters of the second neural network are used as the parameters of the second neural network to be trained; that is, the updated second neural network serves as the pre-trained model of the second neural network to be trained, and the second neural network Net 2' is trained by inputting, for example, the image set I N.
- in each iteration, the accuracy of the updated second neural network may be determined, and the iteration is stopped when there is no significant difference between the accuracies of the model before and after an update.
- obtaining the finally trained second neural network includes: obtaining a first test accuracy of the second neural network before the update and a second test accuracy of the second neural network after the update; determining whether the difference between the first test accuracy and the second test accuracy is less than a predetermined threshold; and, in response to the difference between the first test accuracy and the second test accuracy being less than the predetermined threshold, determining that the training of the second neural network is completed.
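The stopping rule just described can be expressed as a small helper. This is a sketch; the threshold value below is an assumption, not a value given in the patent.

```python
def training_converged(prev_test_accuracy, curr_test_accuracy, threshold=1e-3):
    # Stop iterating once the test-accuracy gap between the model before
    # and after an update falls below a predetermined threshold.
    return abs(curr_test_accuracy - prev_test_accuracy) < threshold
```

For example, with made-up accuracies, `training_converged(0.912, 0.9125)` would signal convergence, while `training_converged(0.90, 0.912)` would continue iterating.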
- FIG. 3 illustrates a schematic diagram of a neural network training method applied to an image recognition and detection scene according to an embodiment of the present application.
- the feature maps output by the last convolutional layer of the first neural network and of the second neural network are extracted.
- the L2 loss function value of the second neural network is calculated from the first feature map and the second feature map, and then combined with the second neural network's own loss function value to calculate a total loss function value.
- FIG. 4 illustrates a flowchart of a process of determining a feature map and a loss function in an image recognition and detection scene according to the training method of a neural network according to an embodiment of the present application.
- the step S120 may include the following steps.
- in step S121a, the feature map output by the last convolutional layer of the first neural network is determined as the first feature map, that is, the output of the last convolutional layer of the first neural network shown in FIG. 2.
- in step S122a, the feature map output by the last convolutional layer of the second neural network is determined as the second feature map, that is, the output of the last convolutional layer of the second neural network shown in FIG. 2.
- the step S130 may include the following steps.
- in step S131a, the L2 loss function value of the second neural network is determined based on the first feature map and the second feature map, that is, the L2 loss function value calculated from the outputs of the last convolutional layers of the first neural network and the second neural network.
- then, the first loss function value of the second neural network is determined based on the L2 loss function value; for example, the L2 loss function value may be multiplied by a predetermined weighting coefficient to obtain the first loss function value of the second neural network.
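Step S131a plus the weighting step can be sketched together as follows. This is an illustration under assumptions: the sum-of-squares form of the L2 loss and the weighting coefficient are not specified by the patent.

```python
import numpy as np

def weighted_l2_loss(f_teacher, f_student, weight=0.5):
    # L2 loss between the last-conv-layer feature maps, multiplied by a
    # predetermined weighting coefficient to give the first loss value.
    l2 = float(np.sum((f_teacher - f_student) ** 2))
    return weight * l2

# Toy stand-ins for the two networks' last-conv-layer outputs.
first_loss = weighted_l2_loss(np.array([1.0, 2.0]), np.array([0.0, 2.0]))
```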
- in this way, the neural network training method can be used to train neural network models for image recognition and detection, such as face recognition and object detection, thereby improving the accuracy of the neural network and hence the accuracy of image recognition and detection.
- FIG. 5 illustrates a schematic diagram of a neural network training method applied to a classification scene according to an embodiment of the present application.
- the feature maps output by the softmax layers of the first neural network and the second neural network are extracted.
- between the last convolutional layer and the softmax layer there may be a fully connected layer; alternatively, the first neural network and the second neural network may not include a fully connected layer.
- the cross-entropy loss function value of the second neural network is calculated from the first feature map and the second feature map, and then combined with the second neural network's own loss function value to calculate the total loss function value.
- FIG. 6 illustrates a flowchart of a process of determining a feature map and a loss function of a neural network training method according to an embodiment of the present application in a classification scenario.
- the step S120 may include the following steps.
- in step S121b, the feature map output by the softmax layer of the first neural network is determined as the first feature map, that is, the output of the softmax layer of the first neural network as shown in FIG. 4.
- in step S122b, the feature map output by the softmax layer of the second neural network is determined as the second feature map, that is, the output of the softmax layer of the second neural network as shown in FIG. 4.
- the step S130 may include the following steps.
- in step S131b, the cross-entropy loss function value of the second neural network is determined based on the first feature map and the second feature map, that is, the cross-entropy loss function value calculated from the outputs of the softmax layers of the first neural network and the second neural network.
- then, the first loss function value of the second neural network is determined based on the cross-entropy loss function value; for example, the cross-entropy loss function value may be multiplied by a predetermined weighting coefficient to obtain the first loss function value of the second neural network.
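The classification-scenario counterpart of step S131b can be sketched the same way. A hedged illustration: the exact cross-entropy form, the epsilon guard, and the weighting coefficient are assumptions, and the probability vectors stand in for the two networks' softmax outputs.

```python
import numpy as np

def weighted_cross_entropy(p_teacher, p_student, weight=0.5, eps=1e-12):
    # Cross-entropy between the teacher's and student's softmax outputs,
    # scaled by a predetermined weighting coefficient (first loss value).
    ce = float(-np.sum(p_teacher * np.log(p_student + eps)))
    return weight * ce

first_loss = weighted_cross_entropy(np.array([1.0, 0.0]),
                                    np.array([0.5, 0.5]))
```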
- in this way, the training method of the neural network according to the embodiment of the present application can be used to train neural network models for classification, for example image-based object classification, thereby improving the accuracy of the neural network and hence the accuracy of object classification.
- FIG. 7 illustrates a flowchart of a training example of the second neural network in the method for training a neural network according to an embodiment of the present application.
- the step S140 may include the following steps.
- in step S141, the cross-entropy loss function value of the second neural network is calculated as the second loss function value; that is, for the second neural network's own loss function value, the cross-entropy loss function value can be calculated.
- in step S142, a weighted sum of the first loss function value and the second loss function value is calculated as the total loss function value.
- of course, the first loss function value and the second loss function value may also be combined in other ways to calculate the total loss function value.
- in step S143, the parameters of the second neural network are updated by back propagation of the total loss function value. At this time, only the parameters of the second neural network are updated, while the parameters of the first neural network remain unchanged.
- in this way, the trained parameters of the first neural network are fully utilized to improve the training accuracy.
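Back-propagating the total loss while freezing the teacher can be illustrated in miniature with a one-parameter "student". This is a toy sketch, not the patent's networks; all values, including the weighting coefficient `alpha`, are made up.

```python
def grad_w(w, x, teacher_out, label, alpha=0.5):
    # Gradient of the total loss
    #   alpha * (w*x - teacher_out)**2 + (w*x - label)**2
    # i.e. the weighted first loss plus the student's own second loss.
    return 2 * alpha * x * (w * x - teacher_out) + 2 * x * (w * x - label)

w, lr = 0.0, 0.05
for _ in range(200):
    # Only the student's parameter w is updated; the teacher is frozen.
    w -= lr * grad_w(w, x=1.0, teacher_out=2.0, label=1.8)
```

Gradient descent drives `w` toward the minimizer of the combined objective, between the teacher-matching target and the label target.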
- FIG. 8 illustrates a block diagram of a training device of a neural network according to an embodiment of the present application.
- a neural network training device 200 includes: a neural network input unit 210 for inputting training data into a trained first neural network and a second neural network to be trained; a feature map determining unit 220 for determining a first feature map output by a preset layer of the first neural network input by the neural network input unit 210 and a second feature map output by the second neural network at the same preset layer; a loss function determining unit 230 for determining the first loss function value of the second neural network based on the first feature map and the second feature map determined by the feature map determining unit 220; a neural network updating unit 240 for updating the parameters of the second neural network based on the first loss function value determined by the loss function determining unit 230 and the second loss function value of the second neural network; and an iterative updating unit 250 for using the parameters of the second neural network updated by the neural network updating unit 240 as the initial parameters of the second neural network to be trained and iteratively repeating the above steps of inputting the training data into the trained first neural network and the second neural network to be trained through updating the parameters of the second neural network.
- FIG. 9 illustrates a block diagram of a first example of a neural network training device according to an embodiment of the present application in an image recognition and detection scenario.
- the feature map determining unit 220 includes: a first feature map determining subunit 221a for determining the feature map output by the last convolutional layer of the first neural network input by the neural network input unit 210 as the first feature map; and a second feature map determining subunit 222a for determining the feature map output by the last convolutional layer of the second neural network input by the neural network input unit 210 as the second feature map.
- the loss function determining unit 230 includes: a first loss function determining subunit 231a for determining the L2 loss function value of the second neural network based on the first feature map determined by the first feature map determining subunit 221a and the second feature map determined by the second feature map determining subunit 222a; and a second loss function determining subunit 232a for determining the first loss function value of the second neural network based on the L2 loss function value.
- FIG. 10 illustrates a block diagram of a second example of a training device of a neural network according to an embodiment of the present application in a classification scenario.
- the feature map determining unit 220 includes: a third feature map determining subunit 221b for determining the feature map output by the softmax layer of the first neural network input by the neural network input unit 210 as the first feature map; and a fourth feature map determining subunit 222b for determining the feature map output by the softmax layer of the second neural network input by the neural network input unit 210 as the second feature map.
- the loss function determining unit 230 includes: a third loss function determining subunit 231b for determining the cross-entropy loss function value of the second neural network based on the first feature map determined by the third feature map determining subunit 221b and the second feature map determined by the fourth feature map determining subunit 222b; and a fourth loss function determining subunit 232b for determining the first loss function value of the second neural network based on the cross-entropy loss function value determined by the third loss function determining subunit 231b.
- FIG. 11 illustrates a block diagram of a schematic neural network update unit of a neural network training device according to an embodiment of the present application.
- the neural network updating unit 240 includes: a calculating subunit 241 for calculating the cross-entropy loss function value of the second neural network as the second loss function value; a weighting subunit 242 for calculating the weighted sum of the first loss function value determined by the loss function determining unit 230 and the second loss function value calculated by the calculating subunit 241 as the total loss function value; and an updating subunit 243 for updating the parameters of the second neural network by back propagation of the total loss function value calculated by the weighting subunit 242.
- the above neural network training device 200 further includes a preprocessing unit for training the first neural network until the first neural network converges, and for performing Gaussian initialization on the second neural network corresponding to the first neural network.
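The Gaussian initialization performed by the preprocessing unit simply draws every parameter of the second network from a normal distribution. In this sketch the zero mean and 0.01 standard deviation are conventional assumptions; the application does not fix these values:

```python
import numpy as np

def gaussian_init(shapes, mean=0.0, std=0.01, seed=0):
    # Draw each parameter tensor of the second neural network from N(mean, std^2).
    rng = np.random.default_rng(seed)
    return [rng.normal(mean, std, size=s) for s in shapes]

# Hypothetical layer shapes for a small convolutional student network.
params = gaussian_init([(64, 3, 3, 3), (128, 64, 3, 3)])
```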
- the neural network training device 200 may be implemented in various terminal devices, such as a server for face recognition, object detection, or object classification.
- the neural network training device 200 according to an embodiment of the present application may be integrated into a terminal device as a software module and/or a hardware module.
- the neural network training device 200 may be a software module in the operating system of the terminal device, or an application program developed for the terminal device; of course, it may also be one of many hardware modules of the terminal device.
- alternatively, the neural network training device 200 and the terminal device may be separate devices, in which case the training device 200 may be connected to the terminal device through a wired and/or wireless network and transmit interactive information according to an agreed data format.
- FIG. 12 illustrates a block diagram of an electronic device according to an embodiment of the present application.
- the electronic device 10 includes one or more processors 11 and memory 12.
- the processor 11 may be a central processing unit (CPU) or another form of processing unit having data processing capabilities and/or instruction execution capabilities, and may control other components in the electronic device 10 to perform desired functions.
- the memory 12 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory.
- the volatile memory may include, for example, random access memory (RAM) and/or cache memory.
- the non-volatile memory may include, for example, read-only memory (ROM), hard disk, flash memory, and the like.
- One or more computer program instructions may be stored on the computer-readable storage medium, and the processor 11 may execute the program instructions to implement the neural network training method of the embodiments of the present application described above and/or other desired functions.
- Various contents such as a first feature map, a second feature map, a first loss function value, a second loss function value, etc. may also be stored in the computer-readable storage medium.
- the electronic device 10 may further include an input device 13 and an output device 14, these components being interconnected by a bus system and/or other forms of connection mechanisms (not shown).
- the input device 13 may include, for example, a keyboard, a mouse, and the like.
- the output device 14 can output various kinds of information to the outside, including the trained second neural network and the like.
- the output device 14 may include, for example, a display, a speaker, a printer, and a communication network together with the remote output devices connected to it.
- the electronic device 10 may also include any other suitable components.
- embodiments of the present application may also be a computer program product comprising computer program instructions which, when executed by a processor, cause the processor to perform the steps in the neural network training method described in the "exemplary method" section of this specification above.
- the computer program product may include program code for performing the operations of the embodiments of the present application, written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
- the program code may be executed entirely on the user's computing device, partly on the user's device as an independent software package, partly on the user's computing device and partly on a remote computing device, or entirely on a remote computing device or server.
- an embodiment of the present application may also be a computer-readable storage medium having computer program instructions stored thereon which, when executed by a processor, cause the processor to perform the steps in the neural network training method according to various embodiments of the present application described in the "exemplary method" section of this specification.
- the computer-readable storage medium may employ any combination of one or more readable media.
- the readable medium may be a readable signal medium or a readable storage medium.
- the readable storage medium may include, but is not limited to, electrical, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatuses, or devices, or any combination of the above. More specific examples (a non-exhaustive list) of readable storage media include: an electrical connection with one or more wires, a portable disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
- each component or each step can be decomposed and/or recombined, and such decompositions and/or recombinations shall be regarded as equivalent solutions of this application.
Abstract
Description
Claims (10)
- A neural network training method, comprising: inputting training data into a trained first neural network and a second neural network to be trained; determining a first feature map output by a preset layer of the first neural network and a second feature map output by the second neural network at the preset layer; determining a first loss function value of the second neural network based on the first feature map and the second feature map; updating the parameters of the second neural network based on the first loss function value and a second loss function value of the second neural network; and taking the updated parameters of the second neural network as the initial parameters of the second neural network to be trained, and iteratively repeating the steps from inputting the training data into the trained first neural network and the second neural network to be trained through updating the parameters of the second neural network based on the first loss function value and the second loss function value of the second neural network, so that when the updated second neural network meets a preset condition, the finally trained second neural network is obtained.
- The neural network training method according to claim 1, wherein determining the first feature map output by the preset layer of the first neural network and the second feature map output by the second neural network at the preset layer comprises: determining the feature map output by the last convolutional layer of the first neural network as the first feature map; and determining the feature map output by the last convolutional layer of the second neural network as the second feature map; and determining the first loss function value of the second neural network based on the first feature map and the second feature map comprises: determining an L2 loss function value of the second neural network based on the first feature map and the second feature map; and determining the first loss function value of the second neural network based on the L2 loss function value.
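The L2 loss of claim 2, computed between the two last-convolutional-layer feature maps, can be sketched as follows. The mean-squared reduction is an illustrative assumption, since the claim only names an L2 loss:

```python
import numpy as np

def l2_feature_loss(first_map, second_map):
    # Mean squared difference between the first (teacher) and
    # second (student) feature maps at the preset layer.
    return float(np.mean((first_map - second_map) ** 2))

# Hypothetical feature maps of shape (batch, channels, height, width).
teacher_map = np.ones((1, 8, 4, 4))   # first feature map
student_map = np.zeros((1, 8, 4, 4))  # second feature map
loss = l2_feature_loss(teacher_map, student_map)
```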
- The neural network training method according to claim 1, wherein determining the first feature map output by the preset layer of the first neural network and the second feature map output by the second neural network at the preset layer comprises: determining the feature map output by the softmax layer of the first neural network as the first feature map; and determining the feature map output by the softmax layer of the second neural network as the second feature map; and determining the first loss function value of the second neural network based on the first feature map and the second feature map comprises: determining a cross-entropy loss function value of the second neural network based on the first feature map and the second feature map; and determining the first loss function value of the second neural network based on the cross-entropy loss function value.
- The neural network training method according to claim 1, wherein updating the parameters of the second neural network based on the first loss function value and the second loss function value of the second neural network comprises: calculating the cross-entropy loss function value of the second neural network as the second loss function value; calculating the weighted sum of the first loss function value and the second loss function value as the total loss function value; and updating the parameters of the second neural network by back propagation using the total loss function value.
- The neural network training method according to claim 1, further comprising, before inputting the training data into the trained first neural network and the second neural network to be trained: training the first neural network until the first neural network converges; and performing Gaussian initialization on the second neural network corresponding to the first neural network.
- A neural network training device, comprising: a neural network input unit for inputting training data into a trained first neural network and a second neural network to be trained; a feature map determination unit for determining a first feature map output by a preset layer of the first neural network input by the neural network input unit and a second feature map output by the second neural network at the preset layer; a loss function determination unit for determining a first loss function value of the second neural network based on the first feature map and the second feature map determined by the feature map determination unit; a neural network update unit for updating the parameters of the second neural network based on the first loss function value determined by the loss function determination unit and a second loss function value of the second neural network; and an iterative update unit for taking the updated parameters of the second neural network as the initial parameters of the second neural network to be trained and iteratively repeating the steps from inputting the training data into the trained first neural network and the second neural network to be trained through updating the parameters of the second neural network based on the first loss function value and the second loss function value of the second neural network, so that when the updated second neural network meets a preset condition, the finally trained second neural network is obtained.
- The neural network training device according to claim 6, wherein the feature map determination unit includes: a first feature map determination subunit for determining the feature map output by the last convolutional layer of the first neural network input by the neural network input unit as the first feature map; and a second feature map determination subunit for determining the feature map output by the last convolutional layer of the second neural network input by the neural network input unit as the second feature map; and the loss function determination unit includes: a first loss function determination subunit for determining an L2 loss function value of the second neural network based on the first feature map determined by the first feature map determination subunit and the second feature map determined by the second feature map determination subunit; and a second loss function determination subunit for determining the first loss function value of the second neural network input by the neural network input unit based on the L2 loss function value determined by the first loss function determination subunit.
- The neural network training device according to claim 6, wherein the feature map determination unit includes: a third feature map determination subunit for determining the feature map output by the softmax layer of the first neural network input by the neural network input unit as the first feature map; and a fourth feature map determination subunit for determining the feature map output by the softmax layer of the second neural network input by the neural network input unit as the second feature map; and the loss function determination unit includes: a third loss function determination subunit for determining a cross-entropy loss function value of the second neural network based on the first feature map determined by the third feature map determination subunit and the second feature map determined by the fourth feature map determination subunit; and a fourth loss function determination subunit for determining the first loss function value of the second neural network input by the neural network input unit based on the cross-entropy loss function value determined by the third loss function determination subunit.
- An electronic device, comprising: a processor; and a memory in which computer program instructions are stored, the computer program instructions, when run by the processor, causing the processor to execute the neural network training method according to any one of claims 1-5.
- A computer-readable medium having computer program instructions stored thereon, the computer program instructions, when run by a processor, causing the processor to execute the neural network training method according to any one of claims 1-5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US17/421,446 US20220083868A1 (en) | 2019-01-08 | 2019-08-16 | Neural network training method and apparatus, and electronic device |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910015326.4 | 2019-01-08 | ||
CN201910015326.4A CN111414987B (zh) | 2019-01-08 | 2019-01-08 | Neural network training method, training apparatus, and electronic device |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020143225A1 true WO2020143225A1 (zh) | 2020-07-16 |
Family
ID=71494078
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2019/100983 WO2020143225A1 (zh) | 2019-01-08 | 2019-08-16 | Neural network training method, training apparatus, and electronic device |
Country Status (3)
Country | Link |
---|---|
US (1) | US20220083868A1 (zh) |
CN (1) | CN111414987B (zh) |
WO (1) | WO2020143225A1 (zh) |
Families Citing this family (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20210374462A1 (en) * | 2020-05-28 | 2021-12-02 | Canon Kabushiki Kaisha | Image processing apparatus, neural network training method, and image processing method |
CN112288086B (zh) * | 2020-10-30 | 2022-11-25 | 北京市商汤科技开发有限公司 | Neural network training method, apparatus, and computer device |
US20220188605A1 (en) * | 2020-12-11 | 2022-06-16 | X Development Llc | Recurrent neural network architectures based on synaptic connectivity graphs |
CN112541462A (zh) * | 2020-12-21 | 2021-03-23 | 南京烨鸿智慧信息技术有限公司 | Training method for a neural network for detecting the light-purification effect on organic waste gas |
CN112766488A (zh) * | 2021-01-08 | 2021-05-07 | 江阴灵通网络科技有限公司 | Training method for a neural network for anti-solidification concrete mixing control |
CN113542651B (zh) * | 2021-05-28 | 2023-10-27 | 爱芯元智半导体(宁波)有限公司 | Model training method, video frame interpolation method, and corresponding apparatus |
CN113657483A (zh) * | 2021-08-14 | 2021-11-16 | 北京百度网讯科技有限公司 | Model training method, object detection method, apparatus, device, and storage medium |
CN113780556A (zh) * | 2021-09-18 | 2021-12-10 | 深圳市商汤科技有限公司 | Method, apparatus, device, and storage medium for neural network training and character recognition |
CN116384460A (zh) * | 2023-03-29 | 2023-07-04 | 清华大学 | Robust optical neural network training method, apparatus, electronic device, and medium |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107247989A (zh) * | 2017-06-15 | 2017-10-13 | 北京图森未来科技有限公司 | Neural network training method and apparatus |
US20180268292A1 (en) * | 2017-03-17 | 2018-09-20 | Nec Laboratories America, Inc. | Learning efficient object detection models with knowledge distillation |
CN108664893A (zh) * | 2018-04-03 | 2018-10-16 | 福州海景科技开发有限公司 | Face detection method and storage medium |
CN108764462A (zh) * | 2018-05-29 | 2018-11-06 | 成都视观天下科技有限公司 | Convolutional neural network optimization method based on knowledge distillation |
CN108960407A (zh) * | 2018-06-05 | 2018-12-07 | 出门问问信息科技有限公司 | Recurrent neural network language model training method, apparatus, device, and medium |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20180027887A (ko) | 2016-09-07 | 2018-03-15 | 삼성전자주식회사 | Recognition apparatus based on a neural network and neural network training method |
CN108805259A (zh) * | 2018-05-23 | 2018-11-13 | 北京达佳互联信息技术有限公司 | Neural network model training method, apparatus, storage medium, and terminal device |
CN108830813B (zh) * | 2018-06-12 | 2021-11-09 | 福建帝视信息科技有限公司 | Image super-resolution enhancement method based on knowledge distillation |
-
2019
- 2019-01-08 CN CN201910015326.4A patent/CN111414987B/zh active Active
- 2019-08-16 WO PCT/CN2019/100983 patent/WO2020143225A1/zh active Application Filing
- 2019-08-16 US US17/421,446 patent/US20220083868A1/en active Pending
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112862095A (zh) * | 2021-02-02 | 2021-05-28 | 浙江大华技术股份有限公司 | Self-distillation learning method based on feature analysis, device, and readable storage medium |
CN112862095B (zh) * | 2021-02-02 | 2023-09-29 | 浙江大华技术股份有限公司 | Self-distillation learning method based on feature analysis, device, and readable storage medium |
CN113420227A (zh) * | 2021-07-21 | 2021-09-21 | 北京百度网讯科技有限公司 | Training method for a click-through-rate estimation model, and method and apparatus for estimating click-through rate |
CN113420227B (zh) * | 2021-07-21 | 2024-05-14 | 北京百度网讯科技有限公司 | Training method for a click-through-rate estimation model, and method and apparatus for estimating click-through rate |
CN114330712A (zh) * | 2021-12-31 | 2022-04-12 | 苏州浪潮智能科技有限公司 | Neural network training method, system, device, and medium |
CN114330712B (zh) * | 2021-12-31 | 2024-01-12 | 苏州浪潮智能科技有限公司 | Neural network training method, system, device, and medium |
Also Published As
Publication number | Publication date |
---|---|
CN111414987B (zh) | 2023-08-29 |
US20220083868A1 (en) | 2022-03-17 |
CN111414987A (zh) | 2020-07-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2020143225A1 (zh) | Neural network training method, training apparatus, and electronic device | |
WO2020083073A1 (zh) | Multi-label classification method, system, device, and storage medium for non-motor-vehicle images | |
WO2019034129A1 (zh) | Method and apparatus for generating neural network structures, electronic device, and storage medium | |
WO2021174935A1 (zh) | Training method and system for generative adversarial neural networks | |
WO2019232847A1 (zh) | Handwriting model training method, handwritten character recognition method, apparatus, device, and medium | |
CN114048331A (zh) | Knowledge graph recommendation method and system based on an improved KGAT model | |
WO2021000745A1 (zh) | Knowledge graph embedding representation method and related device | |
US9836564B1 (en) | Efficient extraction of the worst sample in Monte Carlo simulation | |
WO2022105108A1 (zh) | Network data classification method, apparatus, device, and readable storage medium | |
CN111612080B (zh) | Model interpretation method, device, and readable storage medium | |
WO2023051369A1 (zh) | Neural network acquisition method, data processing method, and related device | |
CN109409508B (zh) | Method for solving model collapse in generative adversarial networks using perceptual loss | |
JP6172317B2 (ja) | Method and apparatus for mixture model selection | |
WO2019232855A1 (zh) | Handwriting model training method, handwritten character recognition method, apparatus, device, and medium | |
CN114065693A (zh) | Optimization method, system, and electronic device for very-large-scale integrated circuit structural layout | |
WO2020107264A1 (zh) | Method and apparatus for neural architecture search | |
WO2023197857A1 (zh) | Model partitioning method and related device | |
WO2020252925A1 (zh) | Method and apparatus for optimizing user features in a user feature group, electronic device, and non-volatile computer-readable storage medium | |
WO2023078009A1 (zh) | Model weight acquisition method and related system | |
CN113361621B (zh) | Method and apparatus for training a model | |
CN111339308B (zh) | Training method and apparatus for a base classification model, and electronic device | |
CN112348045A (zh) | Neural network training method, training apparatus, and electronic device | |
CN112348161A (zh) | Neural network training method, neural network training apparatus, and electronic device | |
CN112862758A (zh) | Training method for a neural network for detecting paint application quality on wall top surfaces | |
CN112766365A (zh) | Training method for a neural network for intelligent light-and-shadow bending detection
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 19908981 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 19908981 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 16.02.2022) |
|