WO2020143225A1 - Neural network training method, training device and electronic device - Google Patents


Info

Publication number
WO2020143225A1
Authority
WO
WIPO (PCT)
Prior art keywords
neural network
feature map
loss function
function value
trained
Prior art date
Application number
PCT/CN2019/100983
Other languages
English (en)
French (fr)
Inventor
周贺龙
张骞
黄畅
Original Assignee
南京人工智能高等研究院有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 南京人工智能高等研究院有限公司
Priority to US17/421,446, published as US20220083868A1
Publication of WO2020143225A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Definitions

  • The present application relates to the field of deep learning technology and, more specifically, to a neural network training method, a neural network training device, and an electronic device.
  • Deep neural networks with good performance usually have many layers and therefore a huge number of parameters. To deploy such a network on a mobile terminal, one usually chooses instead a lightweight network with a small number of parameters, but the performance of the lightweight network is correspondingly worse.
  • Knowledge distillation is widely used as an effective remedy. Its working principle is to use the output of the large model as an auxiliary label to supervise the training of the lightweight network and thereby realize knowledge transfer.
  • The embodiments of the present application provide a neural network training method, a neural network training device, and an electronic device, which combine the feature maps output by a trained neural network and an untrained neural network at the same preset layer to obtain a loss function value, and further combine it with the loss function value of the untrained neural network itself to update the parameters of the untrained neural network, thereby improving the accuracy of the resulting trained neural network.
  • A method for training a neural network includes: inputting training data into a trained first neural network and a second neural network to be trained; determining a first feature map output by a preset layer of the first neural network and a second feature map output by the second neural network at the preset layer; determining a first loss function value of the second neural network based on the first feature map and the second feature map; updating the parameters of the second neural network based on the first loss function value and a second loss function value of the second neural network; and using the updated parameters of the second neural network as the initial parameters of the second neural network to be trained, repeating in an iterative manner the above steps, from inputting the training data into the trained first neural network and the second neural network to be trained through updating the parameters of the second neural network based on the first loss function value and the second loss function value, and obtaining the final trained second neural network when the updated second neural network meets a preset condition.
  • A neural network training device includes: a neural network input unit for inputting training data into a trained first neural network and a second neural network to be trained; a feature map determining unit for determining a first feature map output by a preset layer of the first neural network input by the neural network input unit and a second feature map output by the second neural network at the preset layer; a loss function determining unit for determining a first loss function value of the second neural network based on the first feature map and the second feature map determined by the feature map determining unit; a neural network updating unit for updating the parameters of the second neural network based on the first loss function value determined by the loss function determining unit and a second loss function value of the second neural network; and an iterative updating unit for using the updated parameters of the second neural network as the initial parameters of the second neural network to be trained and repeating, in an iterative manner, the above steps from inputting the training data into the trained first neural network and the second neural network to be trained through updating the parameters of the second neural network.
  • An electronic device includes: a processor; and a memory in which computer program instructions are stored which, when executed by the processor, cause the processor to execute the neural network training method described above.
  • A computer-readable medium has computer program instructions stored thereon which, when executed by a processor, cause the processor to perform the neural network training method described above.
  • The neural network training method, the neural network training device, and the electronic device according to the present application determine a loss function value by combining the feature maps output at a preset layer by the trained first neural network and the second neural network to be trained, further combine it with the loss function value of the second neural network itself to update the parameters of the second neural network, and use the updated parameters of the second neural network as the initial parameters of the second neural network to be trained, updating the second neural network in an iterative manner. In this way the parameters of the trained first neural network can be fully and effectively used to train the second neural network, thereby improving the accuracy of the second neural network.
  • FIG. 1 illustrates a flowchart of a neural network training method according to an embodiment of the present application.
  • FIG. 2 illustrates a schematic diagram of an iterative process in a training method of a neural network according to an embodiment of the present application.
  • FIG. 3 illustrates a schematic diagram of a neural network training method applied to an image recognition and detection scene according to an embodiment of the present application.
  • FIG. 4 illustrates a flowchart of a process of determining a feature map and a loss function in an image recognition and detection scene according to the training method of a neural network according to an embodiment of the present application.
  • FIG. 5 illustrates a schematic diagram of a neural network training method applied to a classification scene according to an embodiment of the present application.
  • FIG. 6 illustrates a flowchart of a process of determining a feature map and a loss function of a neural network training method according to an embodiment of the present application in a classification scenario.
  • FIG. 7 illustrates a flowchart of a training example of the second neural network in the method for training a neural network according to an embodiment of the present application.
  • FIG. 8 illustrates a block diagram of a training device of a neural network according to an embodiment of the present application.
  • FIG. 9 illustrates a block diagram of a first example of a neural network training device in an image recognition and detection scenario according to an embodiment of the present application.
  • FIG. 10 illustrates a block diagram of a second example of a training device of a neural network according to an embodiment of the present application in a classification scenario.
  • FIG. 11 illustrates a block diagram of a schematic neural network update unit of a neural network training device according to an embodiment of the present application.
  • FIG. 12 illustrates a block diagram of an electronic device according to an embodiment of the present application.
  • The degree of knowledge transfer determines the accuracy of the lightweight network; that is, if the knowledge transfer is insufficient, the accuracy of the resulting lightweight network is insufficient.
  • The basic idea of this application is to determine a loss function value by combining the feature maps output at a preset layer by the trained neural network and the neural network to be trained, and to further combine it with the loss function value of the neural network to be trained itself to update the parameters of the neural network to be trained in an iterative manner.
  • The neural network training method, the neural network training device, and the electronic device provided by the present application first input the training data into the trained first neural network and the second neural network to be trained; then determine the first feature map output by the preset layer of the first neural network and the second feature map output by the second neural network at the preset layer; then determine the first loss function value of the second neural network based on the first feature map and the second feature map; then update the parameters of the second neural network based on the first loss function value and the second loss function value of the second neural network; and finally use the updated parameters of the second neural network as the initial parameters of the second neural network to be trained and repeat the above steps in an iterative manner.
  • In this way, the update of the parameters of the second neural network depends both on its own second loss function value and on the first loss function value determined by combining the feature maps output at the preset layer by the trained first neural network and the second neural network to be trained; the updated parameters of the second neural network are used as the initial parameters of the second neural network to be trained, and the second neural network is updated in an iterative manner. The parameters of the trained first neural network can thus be fully and effectively used, improving the accuracy of the second neural network after training.
  • The neural network training method, the neural network training device, and the electronic device according to the present application can be used for knowledge transfer between various neural networks; for example, the trained first neural network and the second neural network to be trained may each be a large network or a lightweight network, and this application does not intend to impose any restrictions in this respect.
  • FIG. 1 illustrates a flowchart of a neural network training method according to an embodiment of the present application.
  • the training method of a neural network includes the following steps.
  • In step S110, the training data is input into the trained first neural network and the second neural network to be trained.
  • the first neural network and the second neural network may be various types of neural networks for image recognition, object detection, object classification, etc.
  • the training data may be an image training set.
  • The trained first neural network may be a large network with a large number of parameters and high accuracy, while the second neural network to be trained may be a lightweight network whose number of parameters is small and whose accuracy is relatively low. Therefore, in order to improve the accuracy of the lightweight network, the trained large network needs to provide supervision signals to guide the learning of the lightweight network.
  • the first neural network has been trained before inputting the training data, that is, the first neural network has been trained to converge.
  • the second neural network corresponds to the first neural network, so that the trained first neural network can be used for training, and the second neural network obtains initialization parameters through Gaussian initialization.
  • That is, before inputting the training data into the trained first neural network and the second neural network to be trained, the neural network training method further includes: training the first neural network until the first neural network converges; and performing Gaussian initialization on the second neural network corresponding to the first neural network.
  • In this way, the trained first neural network can provide a supervision signal to supervise the training of the second neural network, so that knowledge transfer between the neural networks is achieved and the accuracy of the second neural network is improved.
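As an illustrative sketch only (not taken from the patent itself), the Gaussian initialization of the second neural network's weights can be written as follows; the layer shapes and standard deviation are hypothetical:

```python
import numpy as np

def gaussian_init(layer_shapes, std=0.01, seed=0):
    """Initialize each weight tensor by sampling from N(0, std^2)."""
    rng = np.random.default_rng(seed)
    return [rng.normal(loc=0.0, scale=std, size=shape) for shape in layer_shapes]

# Hypothetical layer shapes for a small student network.
params = gaussian_init([(3, 3, 16), (16, 10)], std=0.01)
```

The teacher (first) network keeps its converged weights; only the student's weights are drawn afresh in this way.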
  • In step S120, a first feature map output by the preset layer of the first neural network and a second feature map output by the second neural network at the preset layer are determined. That is, in order for the first neural network to provide a supervision signal to supervise the training of the second neural network, the output feature maps must be extracted from the same layer of the first neural network and the second neural network.
  • The preset layer may be a different preset layer of the network model in different scenarios, which will be explained in further detail later.
  • In step S130, the first loss function value of the second neural network is determined based on the first feature map and the second feature map.
  • Depending on the preset layer, the extracted first feature map and second feature map will also be different feature maps; therefore, the first loss function value determined based on them may correspondingly be a different type of loss function value, which will also be described in further detail later.
  • In step S140, the parameters of the second neural network are updated based on the first loss function value and the second loss function value of the second neural network. Because the first loss function value is determined based on the first feature map output by the first neural network at the preset layer and the second feature map output by the second neural network at the preset layer, the first loss function value can serve as the supervision signal provided by the first neural network. Furthermore, by further combining it with the second loss function value of the second neural network to update the parameters of the second neural network, knowledge transfer of the parameters of the first neural network can be achieved, thereby improving the accuracy of the updated second neural network.
  • In step S150, the updated parameters of the second neural network are used as the initial parameters of the second neural network to be trained, and the foregoing steps, from inputting the training data into the trained first neural network and the second neural network to be trained through updating the parameters of the second neural network based on the first loss function value and the second loss function value, are repeated in an iterative manner; when the updated second neural network meets the preset condition, the final trained second neural network is obtained.
  • That is, the second neural network obtained by one round of training is used as the untrained second neural network in step S110, with the already-trained parameters as its initial parameters, and steps S110-S140 of the embodiment shown in FIG. 1 are executed repeatedly to obtain, after multiple iterations, a second neural network that meets a certain accuracy. Through this iterative distillation, the neural network produced by the previous distillation serves as the initialization of the neural network to be trained in the current round, and the second neural network is continuously distilled by the trained first neural network, so that the knowledge of the large first neural network is fully transferred to the lightweight second neural network. In this way, the supervision signals provided by the first neural network can be fully utilized to further improve the accuracy of the second neural network.
  • FIG. 2 illustrates a schematic diagram of an iterative process in a training method of a neural network according to an embodiment of the present application.
  • As shown in FIG. 2, training data, for example an image set I N , is input into the trained first neural network Net 1 and the second neural network to be trained Net 2 , which is trained by the neural network training method described above to obtain the updated parameters of the second neural network. Next, the trained first neural network Net 1 remains unchanged, and the updated parameters of the second neural network are used as the parameters of the second neural network to be trained; that is, the updated second neural network serves as the pre-trained model of the second neural network, and the second neural network Net 2 ′ is trained by again inputting, for example, the image set I N .
  • In the iterative process, the accuracy of the updated second neural network may be determined, and the iteration is stopped when there is no significant difference between the accuracies of the model before and after an update.
  • That is, obtaining the final trained second neural network includes: obtaining a first test accuracy of the second neural network before the update and a second test accuracy of the second neural network after the update; determining whether the difference between the first test accuracy and the second test accuracy is less than a predetermined threshold; and, in response to the difference between the first test accuracy and the second test accuracy being less than the predetermined threshold, determining that the training of the second neural network is completed.
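The iterative procedure and its stopping criterion can be sketched as follows; this is a hedged illustration in which `train_one_round` and `evaluate` are hypothetical stand-ins for a full distillation pass and a test-accuracy measurement:

```python
def iterative_distillation(params, train_one_round, evaluate,
                           threshold=1e-3, max_rounds=20):
    """Repeat distillation, seeding each round with the previous round's
    weights, until the test accuracy stops changing significantly."""
    acc_before = evaluate(params)
    for _ in range(max_rounds):
        params = train_one_round(params)      # one full distillation pass
        acc_after = evaluate(params)
        if abs(acc_after - acc_before) < threshold:
            break                             # accuracies have converged
        acc_before = acc_after
    return params

# Toy stand-ins: accuracy saturates toward 0.9, so the loop stops early.
final = iterative_distillation(
    0.5,
    train_one_round=lambda p: p + 0.5 * (0.9 - p),
    evaluate=lambda p: p,
    threshold=1e-3,
)
```

In a real implementation, `params` would be the student network's weight tensors and `evaluate` would measure accuracy on a held-out test set.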
  • FIG. 3 illustrates a schematic diagram of a neural network training method applied to an image recognition and detection scene according to an embodiment of the present application.
  • As shown in FIG. 3, in the image recognition and detection scene, the feature maps output by the last convolutional layer of the first neural network and of the second neural network are extracted.
  • The L2 loss function value of the second neural network is calculated from the first feature map and the second feature map, and is then combined with the loss function value of the second neural network itself to calculate a total loss function value.
  • FIG. 4 illustrates a flowchart of a process of determining a feature map and a loss function in an image recognition and detection scene according to the training method of a neural network according to an embodiment of the present application.
  • the step S120 may include the following steps.
  • In step S121a, the feature map output from the last convolutional layer of the first neural network is determined as the first feature map, that is, the output of the last convolutional layer of the first neural network shown in FIG. 2.
  • In step S122a, the feature map output from the last convolutional layer of the second neural network is determined as the second feature map, that is, the output of the last convolutional layer of the second neural network shown in FIG. 2.
  • the step S130 may include the following steps.
  • In step S131a, the L2 loss function value of the second neural network is determined based on the first feature map and the second feature map, that is, the L2 loss function value is calculated from the outputs of the last convolutional layers of the first neural network and the second neural network.
  • Then, the first loss function value of the second neural network is determined based on the L2 loss function value; for example, the L2 loss function value may be multiplied by a predetermined weighting coefficient to obtain the first loss function value of the second neural network.
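A minimal sketch of this L2 loss over the two feature maps (the toy feature-map values and the weighting coefficient are hypothetical):

```python
import numpy as np

def l2_feature_loss(feat_teacher, feat_student, weight=1.0):
    """Weighted mean squared difference between the teacher's and the
    student's feature maps taken from the same (last) convolutional layer."""
    diff = feat_teacher - feat_student
    return weight * float(np.mean(diff ** 2))

t = np.ones((4, 4, 8))                      # toy first feature map
s = np.zeros((4, 4, 8))                     # toy second feature map
loss = l2_feature_loss(t, s, weight=0.5)    # 0.5 * mean(1) = 0.5
```

When the student's feature map matches the teacher's exactly, the loss is zero, so minimizing it pulls the student's intermediate representation toward the teacher's.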
  • In this way, the neural network training method can be used to train neural network models for image recognition and detection, such as face recognition and object detection, thereby improving the accuracy of the neural network and, in turn, the accuracy of image recognition and detection.
  • FIG. 5 illustrates a schematic diagram of a neural network training method applied to a classification scene according to an embodiment of the present application.
  • As shown in FIG. 5, in the classification scene, the feature maps output by the softmax layer of the first neural network and of the second neural network are extracted.
  • Here, a fully connected layer may be included between the last convolutional layer and the softmax layer; alternatively, the first neural network and the second neural network may not include fully connected layers.
  • The cross-entropy loss function value of the second neural network is calculated from the first feature map and the second feature map, and is then combined with the loss function value of the second neural network itself to calculate the total loss function value.
  • FIG. 6 illustrates a flowchart of a process of determining a feature map and a loss function of a neural network training method according to an embodiment of the present application in a classification scenario.
  • the step S120 may include the following steps.
  • In step S121b, the feature map output from the softmax layer of the first neural network is determined as the first feature map, that is, the output of the softmax layer of the first neural network as shown in FIG. 4.
  • In step S122b, the feature map output from the softmax layer of the second neural network is determined as the second feature map, that is, the output of the softmax layer of the second neural network as shown in FIG. 4.
  • the step S130 may include the following steps.
  • In step S131b, the cross-entropy loss function value of the second neural network is determined based on the first feature map and the second feature map, that is, the cross-entropy loss function value is calculated from the outputs of the softmax layers of the first neural network and the second neural network.
  • Then, the first loss function value of the second neural network is determined based on the cross-entropy loss function value; for example, the cross-entropy loss function value may be multiplied by a predetermined weighting coefficient to obtain the first loss function value of the second neural network.
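A minimal sketch of the cross-entropy between the two softmax outputs (the logit values and the weighting coefficient below are hypothetical):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x, axis=-1, keepdims=True))   # numerically stabilised
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy_distill(teacher_logits, student_logits, weight=1.0):
    """Cross-entropy of the student's softmax output against the teacher's
    softmax output, usable as the first loss function value."""
    p = softmax(teacher_logits)               # teacher's soft labels
    q = softmax(student_logits)
    ce = -np.mean(np.sum(p * np.log(q + 1e-12), axis=-1))
    return weight * float(ce)

t = np.array([[2.0, 0.0, 0.0]])
loss_same = cross_entropy_distill(t, t)                  # student matches teacher
loss_diff = cross_entropy_distill(t, np.zeros((1, 3)))   # uniform student output
```

When the student's softmax output matches the teacher's, the cross-entropy falls to the teacher distribution's own entropy, which is its minimum, so a mismatched student always incurs a larger loss.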
  • In this way, the neural network training method according to the embodiment of the present application can be used to train classification models, for example a neural network model for image-based object classification, thereby improving the accuracy of the neural network and, in turn, the accuracy of object classification.
  • FIG. 7 illustrates a flowchart of a training example of the second neural network in the method for training a neural network according to an embodiment of the present application.
  • the step S140 may include the following steps.
  • In step S141, the cross-entropy loss function value of the second neural network is calculated as the second loss function value; that is, for the loss function value of the second neural network itself, the cross-entropy loss function value can be calculated.
  • In step S142, a weighted sum of the first loss function value and the second loss function value is calculated as the total loss function value.
  • the first loss function value and the second loss function value may be combined in other ways to calculate the total loss function value.
  • In step S143, the parameters of the second neural network are updated by back propagation of the total loss function value. At this time, the parameters of the second neural network are updated, while the parameters of the first neural network remain unchanged.
  • In this way, the trained parameters of the first neural network are fully utilized to improve the training accuracy.
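The weighted combination of steps S141-S143 can be sketched as follows; the weight `alpha`, the loss values, and the toy gradient step are hypothetical, and in practice the back propagation would be carried out by a deep learning framework rather than by hand:

```python
def total_loss(first_loss, second_loss, alpha=0.5):
    """Weighted sum of the distillation (first) loss and the student's
    own (second) loss, as in step S142."""
    return alpha * first_loss + (1.0 - alpha) * second_loss

def sgd_step(param, grad, lr=0.1):
    """One back-propagation update applied to a student parameter;
    the teacher's parameters are left untouched."""
    return param - lr * grad

loss = total_loss(first_loss=0.5, second_loss=1.5, alpha=0.5)   # 1.0
new_w = sgd_step(2.0, grad=4.0, lr=0.1)                         # 1.6
```

Only the student's parameters pass through `sgd_step`; freezing the teacher is what makes the first loss act purely as a supervision signal.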
  • FIG. 8 illustrates a block diagram of a training device of a neural network according to an embodiment of the present application.
  • As shown in FIG. 8, a neural network training device 200 includes: a neural network input unit 210 for inputting training data into a trained first neural network and a second neural network to be trained; a feature map determining unit 220 for determining a first feature map output by a preset layer of the first neural network input by the neural network input unit 210 and a second feature map output by the second neural network at the preset layer; a loss function determining unit 230 for determining the first loss function value of the second neural network based on the first feature map and the second feature map determined by the feature map determining unit 220; a neural network updating unit 240 for updating the parameters of the second neural network based on the first loss function value determined by the loss function determining unit 230 and the second loss function value of the second neural network; and an iterative updating unit 250 for using the parameters of the second neural network updated by the neural network updating unit 240 as the initial parameters of the second neural network to be trained, and iteratively repeating the above steps of inputting the training data into the trained first neural network and the second neural network to be trained through updating the parameters of the second neural network.
  • FIG. 9 illustrates a block diagram of a first example of a neural network training device according to an embodiment of the present application in an image recognition and detection scenario.
  • As shown in FIG. 9, the feature map determining unit 220 includes: a first feature map determining subunit 221a for determining, as the first feature map, the feature map output from the last convolutional layer of the first neural network input by the neural network input unit 210; and a second feature map determining subunit 222a for determining, as the second feature map, the feature map output from the last convolutional layer of the second neural network input by the neural network input unit 210.
  • The loss function determining unit 230 includes: a first loss function determining subunit 231a for determining the L2 loss function value of the second neural network based on the first feature map determined by the first feature map determining subunit 221a and the second feature map determined by the second feature map determining subunit 222a; and a second loss function determining subunit 232a for determining the first loss function value of the second neural network based on the L2 loss function value.
  • FIG. 10 illustrates a block diagram of a second example of a training device of a neural network according to an embodiment of the present application in a classification scenario.
  • As shown in FIG. 10, the feature map determining unit 220 includes: a third feature map determining subunit 221b for determining, as the first feature map, the feature map output from the softmax layer of the first neural network input by the neural network input unit 210; and a fourth feature map determining subunit 222b for determining, as the second feature map, the feature map output from the softmax layer of the second neural network input by the neural network input unit 210.
  • The loss function determining unit 230 includes: a third loss function determining subunit 231b for determining the cross-entropy loss function value of the second neural network based on the first feature map determined by the third feature map determining subunit 221b and the second feature map determined by the fourth feature map determining subunit 222b; and a fourth loss function determining subunit 232b for determining the first loss function value of the second neural network based on the cross-entropy loss function value determined by the third loss function determining subunit 231b.
  • FIG. 11 illustrates a block diagram of a schematic neural network update unit of a neural network training device according to an embodiment of the present application.
  • As shown in FIG. 11, the neural network updating unit 240 includes: a calculating subunit 241 for calculating the cross-entropy loss function value of the second neural network as the second loss function value; a weighting subunit 242 for calculating the weighted sum of the first loss function value determined by the loss function determining unit 230 and the second loss function value calculated by the calculating subunit 241 as the total loss function value; and an updating subunit 243 for updating the parameters of the second neural network by back propagation of the total loss function value calculated by the weighting subunit 242.
  • The above neural network training device 200 may further include a preprocessing unit for training the first neural network until the first neural network converges, and for performing Gaussian initialization on the second neural network corresponding to the first neural network.
  • the neural network training device 200 may be implemented in various terminal devices, such as a server for face recognition, object detection, or object classification.
  • the neural network training device 200 according to an embodiment of the present application may be integrated into a terminal device as a software module and/or a hardware module.
  • For example, the neural network training device 200 may be a software module in the operating system of the terminal device, or may be an application program developed for the terminal device; of course, the neural network training device 200 may also be one of the many hardware modules of the terminal device.
  • the training device 200 of the neural network and the terminal device may also be separate devices, and the training device 200 of the neural network may be connected to the terminal device through a wired and/or wireless network and transmit interactive information in an agreed data format.
  • FIG. 12 illustrates a block diagram of an electronic device according to an embodiment of the present application.
  • the electronic device 10 includes one or more processors 11 and memory 12.
  • the processor 11 may be a central processing unit (CPU) or another form of processing unit having data processing capabilities and/or instruction execution capabilities, and may control other components in the electronic device 10 to perform desired functions.
  • the memory 12 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory.
  • the volatile memory may include, for example, random access memory (RAM) and/or cache memory.
  • the non-volatile memory may include, for example, read-only memory (ROM), hard disk, flash memory, and the like.
  • One or more computer program instructions may be stored on the computer-readable storage medium, and the processor 11 may execute the program instructions to implement the neural network training method of the embodiments of the present application described above and/or other desired functions.
  • Various contents such as a first feature map, a second feature map, a first loss function value, a second loss function value, etc. may also be stored in the computer-readable storage medium.
  • the electronic device 10 may further include an input device 13 and an output device 14; these components are interconnected by a bus system and/or other forms of connection mechanisms (not shown).
  • the input device 13 may include, for example, a keyboard, a mouse, and the like.
  • the output device 14 can output various kinds of information to the outside, including the second neural network that has completed training and the like.
  • the output device 14 may include, for example, a display, a speaker, a printer, and a communication network and its connected remote output device.
  • the electronic device 10 may also include any other suitable components.
  • embodiments of the present application may also be computer program products, which include computer program instructions that, when executed by a processor, cause the processor to perform the steps in the neural network training method according to various embodiments of the present application described in the "Exemplary Method" section of this specification.
  • the computer program product may write program code for performing the operations of the embodiments of the present application in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
  • the program code may be executed entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on a remote computing device or server.
  • an embodiment of the present application may also be a computer-readable storage medium having computer program instructions stored thereon which, when executed by a processor, cause the processor to perform the steps in the neural network training method according to various embodiments of the present application described in the "Exemplary Method" section of the specification.
  • the computer-readable storage medium may employ any combination of one or more readable media.
  • the readable medium may be a readable signal medium or a readable storage medium.
  • the readable storage medium may include, but is not limited to, electrical, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatuses, or devices, or any combination of the above. More specific examples (a non-exhaustive list) of readable storage media include: an electrical connection with one or more wires, a portable disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
  • in the apparatuses, devices, and methods of the present application, each component or step may be decomposed and/or recombined. Such decompositions and/or recombinations shall be regarded as equivalent solutions of the present application.

Abstract

Disclosed are a neural network training method, a neural network training apparatus, and an electronic device. The neural network training method includes: inputting training data into a trained first neural network and a second neural network to be trained; determining a first feature map output by a preset layer of the first neural network and a second feature map output by the second neural network at the preset layer; determining a first loss function value of the second neural network based on the first feature map and the second feature map; updating the parameters of the second neural network based on the first loss function value and a second loss function value of the second neural network; and, using the updated parameters of the second neural network as the initial parameters of the second neural network to be trained, iteratively updating the parameters of the second neural network, and obtaining the finally trained second neural network when the updated second neural network satisfies a preset condition. In this way, the accuracy of the trained second neural network is improved.

Description

Neural Network Training Method, Training Apparatus, and Electronic Device

Technical Field
The present application relates to the field of deep learning technology, and more specifically, to a neural network training method, a neural network training apparatus, and an electronic device.
Background
Deep neural networks with excellent performance usually have a large number of layers, resulting in a huge number of parameters. For mobile applications, a lightweight network with fewer model parameters is usually chosen, but the performance of lightweight networks is comparatively inferior.
Among the techniques for improving the model performance of lightweight networks, knowledge distillation is widely used as an effective means. It works by using the output of a large model as auxiliary labels to further and effectively supervise the training of the lightweight network, thereby achieving knowledge transfer.
However, conventional knowledge distillation does not fully transfer the knowledge of the large network into the lightweight network, and there is still room to improve the accuracy of the lightweight network.
Therefore, it is desirable to provide an improved scheme for generating lightweight networks.
Summary
The present application is proposed to solve the above technical problems. Embodiments of the present application provide a neural network training method, a neural network training apparatus, and an electronic device, which can obtain a loss function by combining the feature maps of a trained neural network and an untrained neural network at the same preset layer, and further combine the loss function of the untrained neural network itself to update the parameters of the untrained neural network, thereby improving the accuracy of the trained neural network.
According to one aspect of the present application, a neural network training method is provided, including: inputting training data into a trained first neural network and a second neural network to be trained; determining a first feature map output by a preset layer of the first neural network and a second feature map output by the second neural network at the preset layer; determining a first loss function value of the second neural network based on the first feature map and the second feature map; updating the parameters of the second neural network based on the first loss function value and a second loss function value of the second neural network; and, using the updated parameters of the second neural network as the initial parameters of the second neural network to be trained, iteratively repeating the steps from inputting the training data into the trained first neural network and the second neural network to be trained through updating the parameters of the second neural network based on the first loss function value and the second loss function value, and obtaining the finally trained second neural network when the updated second neural network satisfies a preset condition.
According to another aspect of the present application, a neural network training apparatus is provided, including: a neural network input unit for inputting training data into a trained first neural network and a second neural network to be trained; a feature map determination unit for determining a first feature map output by a preset layer of the first neural network input by the neural network input unit and a second feature map output by the second neural network at the preset layer; a loss function determination unit for determining a first loss function value of the second neural network based on the first feature map and the second feature map determined by the feature map determination unit; a neural network update unit for updating the parameters of the second neural network based on the first loss function value determined by the loss function determination unit and a second loss function value of the second neural network; and an iterative update unit for using the updated parameters of the second neural network as the initial parameters of the second neural network to be trained, iteratively repeating the steps from inputting the training data through updating the parameters of the second neural network, and obtaining the finally trained second neural network when the updated second neural network satisfies a preset condition.
According to still another aspect of the present application, an electronic device is provided, including: a processor; and a memory storing computer program instructions which, when executed by the processor, cause the processor to perform the neural network training method described above.
According to yet another aspect of the present application, a computer-readable medium is provided, having computer program instructions stored thereon which, when executed by a processor, cause the processor to perform the neural network training method described above.
Compared with the prior art, the neural network training method, the neural network training apparatus, and the electronic device according to the present application can input training data into a trained first neural network and a second neural network to be trained; determine a first feature map output by a preset layer of the first neural network and a second feature map output by the second neural network at the preset layer; determine a first loss function value of the second neural network based on the first feature map and the second feature map; update the parameters of the second neural network based on the first loss function value and a second loss function value of the second neural network; use the updated parameters of the second neural network as the initial parameters of the second neural network to be trained; iteratively repeat the above steps from inputting the training data through updating the parameters; and obtain the finally trained second neural network when the updated second neural network satisfies a preset condition.
In this way, because the loss function value is determined by combining the feature maps output at the preset layer by the trained first neural network and the second neural network to be trained, the parameters of the second neural network are further updated in combination with the loss function value of the second neural network itself, and the updated parameters of the second neural network are used as the initial parameters of the second neural network to be trained so as to update the second neural network iteratively, the parameters of the trained first neural network can be fully and effectively utilized to train the second neural network, thereby improving the accuracy of the second neural network.
Brief Description of the Drawings
The above and other objects, features, and advantages of the present application will become more apparent from the more detailed description of the embodiments of the present application in conjunction with the accompanying drawings. The drawings are provided for further understanding of the embodiments of the present application, constitute a part of the specification, serve to explain the present application together with the embodiments, and do not limit the present application. In the drawings, the same reference numerals generally denote the same components or steps.
FIG. 1 illustrates a flowchart of a neural network training method according to an embodiment of the present application.
FIG. 2 illustrates a schematic diagram of the iterative process in the neural network training method according to an embodiment of the present application.
FIG. 3 illustrates a schematic diagram of the neural network training method according to an embodiment of the present application applied to image recognition and detection scenarios.
FIG. 4 illustrates a flowchart of the feature map and loss function determination process of the neural network training method according to an embodiment of the present application in image recognition and detection scenarios.
FIG. 5 illustrates a schematic diagram of the neural network training method according to an embodiment of the present application applied to classification scenarios.
FIG. 6 illustrates a flowchart of the feature map and loss function determination process of the neural network training method according to an embodiment of the present application in classification scenarios.
FIG. 7 illustrates a flowchart of a training example of the second neural network in the neural network training method according to an embodiment of the present application.
FIG. 8 illustrates a block diagram of a neural network training apparatus according to an embodiment of the present application.
FIG. 9 illustrates a block diagram of a first example of the neural network training apparatus according to an embodiment of the present application in image recognition and detection scenarios.
FIG. 10 illustrates a block diagram of a second example of the neural network training apparatus according to an embodiment of the present application in classification scenarios.
FIG. 11 illustrates a block diagram of a schematic neural network update unit of the neural network training apparatus according to an embodiment of the present application.
FIG. 12 illustrates a block diagram of an electronic device according to an embodiment of the present application.
Detailed Description
Hereinafter, example embodiments according to the present application will be described in detail with reference to the accompanying drawings. Obviously, the described embodiments are only a part of the embodiments of the present application rather than all of them, and it should be understood that the present application is not limited by the example embodiments described herein.
Application Overview
As described above, knowledge transfer from a large network to a lightweight network can be achieved through knowledge distillation. Moreover, the degree of knowledge transfer determines the accuracy of the lightweight network; that is, if the knowledge transfer is insufficient, the accuracy of the generated lightweight network is inadequate.
In view of the above technical problem, the basic idea of the present application is to determine a loss function value by combining the feature maps output at a preset layer by a trained neural network and a neural network to be trained, and to further combine the loss function value of the neural network to be trained itself to iteratively update the parameters of the neural network to be trained.
Specifically, the neural network training method, neural network training apparatus, and electronic device provided by the present application first input training data into a trained first neural network and a second neural network to be trained, then determine a first feature map output by a preset layer of the first neural network and a second feature map output by the second neural network at the preset layer, then determine a first loss function value of the second neural network based on the first feature map and the second feature map, then update the parameters of the second neural network based on the first loss function value and a second loss function value of the second neural network, and finally use the updated parameters of the second neural network as the initial parameters of the second neural network to be trained, iteratively repeating the steps from inputting the training data through updating the parameters, and obtaining the finally trained second neural network when the updated second neural network satisfies a preset condition.
In this way, since the update of the parameters of the second neural network depends on its own second loss function value as well as the first loss function value determined by combining the feature maps output at the preset layer by the trained first neural network and the second neural network to be trained, and the updated parameters of the second neural network are used as the initial parameters of the second neural network to be trained for iterative updating, the parameters of the trained first neural network can be fully and effectively utilized during the training of the second neural network, thereby improving the accuracy of the trained second neural network.
It is worth noting that although knowledge distillation from a large network to a lightweight network is taken as an example above, the neural network training method, neural network training apparatus, and electronic device according to the present application can essentially be used for knowledge transfer between various neural networks; for example, both the trained first neural network and the second neural network to be trained may be large networks or lightweight networks, and the present application is not intended to impose any limitation in this respect.
Having introduced the basic principles of the present application, various non-limiting embodiments of the present application will be described in detail below with reference to the accompanying drawings.
Exemplary Method
FIG. 1 illustrates a flowchart of a neural network training method according to an embodiment of the present application.
As shown in FIG. 1, the neural network training method according to an embodiment of the present application includes the following steps.
In step S110, training data is input into a trained first neural network and a second neural network to be trained. Here, the first neural network and the second neural network may be various types of neural networks for image recognition, object detection, object classification, and the like; accordingly, the training data may be an image training set.
Moreover, as described above, in the embodiments of the present application, the trained first neural network may be a large network with a large number of parameters and high accuracy, and the second neural network to be trained may be a lightweight network with a small number of parameters and relatively low accuracy. Therefore, to improve the accuracy of the lightweight network, the trained large network needs to provide supervision signals to guide the learning of the lightweight network.
Here, the first neural network has already been trained before the training data is input; that is, the first neural network has been trained to convergence. The second neural network corresponds to the first neural network, so that it can be trained using the trained first neural network, and the second neural network obtains its initial parameters through Gaussian initialization.
That is, in the neural network training method according to an embodiment of the present application, before inputting the training data into the trained first neural network and the second neural network to be trained, the method further includes: training the first neural network until the first neural network converges; and performing Gaussian initialization on the second neural network corresponding to the first neural network.
In this way, by training the first neural network and initializing the second neural network, the trained first neural network can provide supervision signals to supervise the training of the second neural network, achieving knowledge transfer between neural networks and improving the accuracy of the second neural network.
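The Gaussian initialization step above can be sketched as follows. This is a minimal illustration only: the standard deviation of 0.01 and the seed are assumed values, not ones specified in the application.

```python
import numpy as np

def gaussian_init(shape, std=0.01, seed=0):
    # Zero-mean Gaussian initialization for the second network's parameters.
    # std is an assumed hyperparameter; the application only requires that the
    # initial parameters be drawn from a Gaussian distribution.
    rng = np.random.default_rng(seed)
    return rng.normal(loc=0.0, scale=std, size=shape)

w0 = gaussian_init((128, 64))  # e.g. one weight matrix of the second network
```

In practice each parameter tensor of the second neural network would be initialized this way before the first distillation round.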
In step S120, a first feature map output by a preset layer of the first neural network and a second feature map output by the second neural network at the preset layer are determined. That is, in order for the first neural network to provide supervision signals to supervise the training of the second neural network, the output feature maps need to be extracted from the same layer of the first neural network and the second neural network. Here, depending on the specific model type of the first neural network and the second neural network, such as a face recognition model, an object detection model, or a classification model, the preset layer may be a different preset layer of the network model, which will be described in further detail later.
In step S130, a first loss function value of the second neural network is determined based on the first feature map and the second feature map. As described above, since the first neural network and the second neural network may be various models, the extracted first feature map and second feature map output at the preset layer may also be different feature maps; accordingly, the first loss function value determined based on the first feature map and the second feature map may likewise be a different type of loss function value, which will also be described in further detail later.
In step S140, the parameters of the second neural network are updated based on the first loss function value and a second loss function value of the second neural network. Because the first loss function value is determined based on the first feature map output by the first neural network at the preset layer and the second feature map output by the second neural network at the preset layer, it can serve as the supervision signal provided by the first neural network. Moreover, by further combining the second loss function value of the second neural network itself to update the parameters of the second neural network, knowledge transfer of the parameters of the first neural network can be achieved, thereby improving the accuracy of the updated second neural network.
In step S150, the updated parameters of the second neural network are used as the initial parameters of the second neural network to be trained, the steps from inputting the training data into the trained first neural network and the second neural network to be trained through updating the parameters of the second neural network based on the first loss function value and the second loss function value are iteratively repeated, and the finally trained second neural network is obtained when the updated second neural network satisfies a preset condition.
That is, in the neural network training method according to an embodiment of the present application, to further improve the accuracy of the trained second neural network, the second neural network obtained in the current round of training may be used as the untrained second neural network in step S110, with the parameters already obtained from training as the initial parameters, and steps S110 to S140 of the embodiment shown in FIG. 1 are repeated; after multiple iterations, a second neural network meeting a certain accuracy is obtained. Thus, through iterative distillation, with the network obtained from the previous distillation serving as the initialization of the network to be trained in the current round, the second neural network is continuously distilled by the trained first neural network, so that the knowledge of the large first neural network is fully transferred into the lightweight second neural network.
In this way, by using the parameters of the trained second neural network as the initial parameters of the second neural network in the next iteration, the supervision signal provided by the first neural network can be fully utilized, further improving the accuracy of the second neural network.
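As an illustrative sketch only, the iterative distillation loop described above can be organized as follows, with toy linear "networks" standing in for Net1 and Net2, their outputs standing in for the feature maps, and a plain gradient step standing in for back propagation. All sizes, learning rates, and the 0.5 weighting are assumptions; the application does not prescribe concrete models or hyperparameters.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(256, 8))                        # toy stand-in for image set I_N
w_teacher = rng.normal(size=(8, 4))                  # trained first network Net1 (fixed)
y = x @ w_teacher + 0.1 * rng.normal(size=(256, 4))  # toy training labels

def train_round(w_init, alpha=0.5, lr=0.05, steps=200):
    """One distillation round: the student starts from w_init and descends on
    alpha * L2(student vs. teacher features) + (1 - alpha) * L2(student vs. labels)."""
    w = w_init.copy()
    for _ in range(steps):
        f_student = x @ w             # student "feature map"
        f_teacher = x @ w_teacher     # teacher "feature map" (parameters frozen)
        grad = (2.0 / len(x)) * x.T @ (alpha * (f_student - f_teacher)
                                       + (1.0 - alpha) * (f_student - y))
        w -= lr * grad
    return w

w_student = 0.01 * rng.normal(size=(8, 4))  # Gaussian initialization of Net2
initial_err = float(np.mean((x @ w_student - y) ** 2))

# Iterative distillation: each round is initialized from the previous round's result.
for _ in range(3):
    w_student = train_round(w_student)

final_err = float(np.mean((x @ w_student - y) ** 2))
```

The key point mirrored from the method is structural: the teacher is frozen, the combined loss drives the student's update, and each new round reuses the previous round's student parameters as its initialization.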
FIG. 2 illustrates a schematic diagram of the iterative process in the neural network training method according to an embodiment of the present application.
As shown in FIG. 2, training data, such as an image set I_N, is input into the trained first neural network Net1 and the second neural network Net2 to be trained, and training is performed by the neural network training method described above to obtain the updated parameters of the second neural network.
Next, the trained first neural network Net1 remains unchanged, and the updated parameters of the second neural network are used as the parameters of the second neural network to be trained; that is, the updated second neural network serves as the pre-trained model of the second neural network to be trained, and the second neural network Net2' is trained by inputting, for example, the image set I_N.
The above iterative process continues until the updated second neural network satisfies a preset condition. Specifically, during iteration, the accuracy of the updated second neural network may be determined, and the iteration stops once there is no significant difference between the accuracies of the models before and after an update.
That is, in the neural network training method according to an embodiment of the present application, obtaining the finally trained second neural network when the updated second neural network satisfies the preset condition includes: obtaining a first test accuracy of the second neural network before the update and a second test accuracy of the second neural network after the update; determining whether the difference between the first test accuracy and the second test accuracy is less than a predetermined threshold; and, in response to the difference between the first test accuracy and the second test accuracy being less than the predetermined threshold, determining that the training of the second neural network is complete.
Therefore, by setting an iteration termination condition, the iterative update of the second neural network can be performed effectively, improving training efficiency.
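A minimal sketch of such a termination check follows; the threshold value here is an assumption, since the application only requires that successive test accuracies show no significant difference.

```python
def training_converged(prev_acc: float, curr_acc: float,
                       threshold: float = 1e-3) -> bool:
    # Stop iterative distillation once the test accuracies of the model before
    # and after an update differ by less than the predetermined threshold.
    return abs(curr_acc - prev_acc) < threshold
```

In the outer loop of FIG. 2, this check would be evaluated after each distillation round, with `prev_acc` and `curr_acc` measured on a held-out test set.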
FIG. 3 illustrates a schematic diagram of the neural network training method according to an embodiment of the present application applied to image recognition and detection scenarios.
As shown in FIG. 3, when applied to image recognition and detection, such as face recognition and object detection scenarios, the feature maps output by the last convolutional layer of the first neural network and of the second neural network are extracted. Then, the L2 loss function value of the second neural network is calculated from the first feature map and the second feature map and combined with the loss function value of the second neural network itself to calculate the total loss function value.
FIG. 4 illustrates a flowchart of the feature map and loss function determination process of the neural network training method according to an embodiment of the present application in image recognition and detection scenarios.
As shown in FIG. 4, on the basis of the embodiment shown in FIG. 1, step S120 may include the following steps.
In step S121a, the feature map output by the last convolutional layer of the first neural network is determined as the first feature map, that is, the output of the last convolutional layer of the first neural network as shown in FIG. 3.
In step S122a, the feature map output by the last convolutional layer of the second neural network is determined as the second feature map, that is, the output of the last convolutional layer of the second neural network as shown in FIG. 3.
Further, as shown in FIG. 4, on the basis of the embodiment shown in FIG. 1, step S130 may include the following steps.
In step S131a, the L2 loss function value of the second neural network is determined based on the first feature map and the second feature map, that is, the L2 loss function value calculated from the outputs of the last convolutional layers of the first neural network and the second neural network as shown in FIG. 3.
In step S132a, the first loss function value of the second neural network is determined based on the L2 loss function value; for example, the L2 loss function value may be multiplied by a predetermined weighting coefficient to obtain the first loss function value of the second neural network.
In this way, the neural network training method according to the embodiments of the present application can be used to train neural network models for image recognition and detection, such as face recognition and object detection, thereby improving the accuracy of the neural network and hence the accuracy of image recognition and detection.
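For the recognition/detection case, the L2 loss between the two last-convolutional-layer feature maps and its scaling into the first loss value can be sketched as below. The mean-squared normalization and the default weighting coefficient are assumptions, since the application leaves both unspecified.

```python
import numpy as np

def l2_feature_loss(f_teacher, f_student):
    # L2 discrepancy between same-shaped feature maps (e.g. N x C x H x W);
    # mean reduction over all elements is an assumed normalization.
    return float(np.mean((f_teacher - f_student) ** 2))

def first_loss_value(f_teacher, f_student, weight=1.0):
    # First loss value = L2 loss multiplied by a predetermined weighting
    # coefficient (the value of `weight` is an assumption).
    return weight * l2_feature_loss(f_teacher, f_student)
```

The loss is zero exactly when the student reproduces the teacher's feature map, which is the supervision signal intended in step S131a.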
FIG. 5 illustrates a schematic diagram of the neural network training method according to an embodiment of the present application applied to classification scenarios.
As shown in FIG. 5, when applied to classification scenarios, such as image-based object classification, the feature maps output by the softmax layer of the first neural network and of the second neural network are extracted. Here, those skilled in the art will appreciate that although FIG. 5 shows a fully-connected layer between the last convolutional layer and the softmax layer, the first neural network and the second neural network may also not include a fully-connected layer.
Then, the cross-entropy loss function value of the second neural network is calculated from the first feature map and the second feature map and combined with the loss function value of the second neural network itself to calculate the total loss function value.
FIG. 6 illustrates a flowchart of the feature map and loss function determination process of the neural network training method according to an embodiment of the present application in classification scenarios.
As shown in FIG. 6, on the basis of the embodiment shown in FIG. 1, step S120 may include the following steps.
In step S121b, the feature map output by the softmax layer of the first neural network is determined as the first feature map, that is, the output of the softmax layer of the first neural network as shown in FIG. 5.
In step S122b, the feature map output by the softmax layer of the second neural network is determined as the second feature map, that is, the output of the softmax layer of the second neural network as shown in FIG. 5.
Further, as shown in FIG. 6, on the basis of the embodiment shown in FIG. 1, step S130 may include the following steps.
In step S131b, the cross-entropy loss function value of the second neural network is determined based on the first feature map and the second feature map, that is, the cross-entropy loss function value calculated from the outputs of the softmax layers of the first neural network and the second neural network as shown in FIG. 5.
In step S132b, the first loss function value of the second neural network is determined based on the cross-entropy loss function value; for example, the cross-entropy loss function value may be multiplied by a predetermined weighting coefficient to obtain the first loss function value of the second neural network.
In this way, the neural network training method according to the embodiments of the present application can be used to train neural network models for classification, such as image-based object classification, thereby improving the accuracy of the neural network and hence the accuracy of object classification.
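For the classification case, the cross-entropy between teacher and student softmax outputs can be sketched as follows. The batch-mean reduction and the small epsilon for numerical stability are assumptions not stated in the application.

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max(axis=1, keepdims=True)  # subtract max for stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def distill_cross_entropy(teacher_logits, student_logits, eps=1e-12):
    # Cross-entropy of the student's softmax output against the teacher's
    # softmax output used as a soft target, averaged over the batch.
    p = softmax(teacher_logits)
    q = softmax(student_logits)
    return float(-(p * np.log(q + eps)).sum(axis=1).mean())
```

By Gibbs' inequality, this loss is smallest when the student's softmax output matches the teacher's, which is what makes it usable as the supervision signal of step S131b.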
FIG. 7 illustrates a flowchart of a training example of the second neural network in the neural network training method according to an embodiment of the present application.
As shown in FIG. 7, on the basis of the embodiment shown in FIG. 1, step S140 may include the following steps.
In step S141, the cross-entropy loss function value of the second neural network is calculated as the second loss function value. That is, for the loss function value of the second neural network itself, a cross-entropy loss function value may be calculated; of course, those skilled in the art will appreciate that other types of loss function values may also be calculated.
In step S142, the weighted sum of the first loss function value and the second loss function value is calculated as the total loss function value. Likewise, those skilled in the art will appreciate that the first loss function value and the second loss function value may also be combined in other ways to calculate the total loss function value.
In step S143, the parameters of the second neural network are updated through back propagation using the total loss function value. At this point, the parameters of the second neural network are updated, while the parameters of the first neural network remain unchanged.
Therefore, by updating the parameters of the second neural network through back propagation in combination with the first loss function value determined based on the feature maps of the first neural network and the second neural network, the trained parameters of the first neural network can be fully utilized during the training of the second neural network, thereby improving training accuracy.
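Steps S141 to S143 combine the two loss values into one scalar before back propagation; a minimal sketch of the weighted combination of step S142 follows. The weight values are assumptions, as the application only calls for a weighted sum.

```python
def total_loss(first_loss: float, second_loss: float,
               w1: float = 0.5, w2: float = 0.5) -> float:
    # Total loss = weighted sum of the distillation (first) loss value and the
    # second network's own (second) loss value; back propagation in step S143
    # would then be driven by this scalar. w1 and w2 are assumed weights.
    return w1 * first_loss + w2 * second_loss
```

In a framework with automatic differentiation, back-propagating this scalar updates only the student's parameters, since the teacher's outputs enter the first loss as constants.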
Exemplary Apparatus
FIG. 8 illustrates a block diagram of a neural network training apparatus according to an embodiment of the present application.
As shown in FIG. 8, the neural network training apparatus 200 according to an embodiment of the present application includes: a neural network input unit 210 for inputting training data into a trained first neural network and a second neural network to be trained; a feature map determination unit 220 for determining a first feature map output by a preset layer of the first neural network input by the neural network input unit 210 and a second feature map output by the second neural network at the preset layer; a loss function determination unit 230 for determining a first loss function value of the second neural network based on the first feature map and the second feature map determined by the feature map determination unit 220; a neural network update unit 240 for updating the parameters of the second neural network based on the first loss function value determined by the loss function determination unit 230 and a second loss function value of the second neural network; and an iterative update unit 250 for using the parameters of the second neural network updated by the neural network update unit 240 as the initial parameters of the second neural network to be trained, iteratively repeating the steps from the neural network input unit 210 inputting the training data into the trained first neural network and the second neural network to be trained through the neural network update unit 240 updating the parameters of the second neural network based on the first loss function value and the second loss function value, and obtaining the finally trained second neural network when the updated second neural network satisfies a preset condition.
FIG. 9 illustrates a block diagram of a first example of the neural network training apparatus according to an embodiment of the present application in image recognition and detection scenarios.
As shown in FIG. 9, on the basis of the embodiment shown in FIG. 8, the feature map determination unit 220 includes: a first feature map determination subunit 221a for determining, as the first feature map, the feature map output by the last convolutional layer of the first neural network input by the neural network input unit 210; and a second feature map determination subunit 222a for determining, as the second feature map, the feature map output by the last convolutional layer of the second neural network input by the neural network input unit 210. The loss function determination unit 230 includes: a first loss function determination subunit 231a for determining an L2 loss function value of the second neural network based on the first feature map determined by the first feature map determination subunit 221a and the second feature map determined by the second feature map determination subunit 222a; and a second loss function determination subunit 232a for determining the first loss function value of the second neural network input by the neural network input unit 210 based on the L2 loss function value determined by the first loss function determination subunit 231a.
FIG. 10 illustrates a block diagram of a second example of the neural network training apparatus according to an embodiment of the present application in classification scenarios.
As shown in FIG. 10, on the basis of the embodiment shown in FIG. 8, the feature map determination unit 220 includes: a third feature map determination subunit 221b for determining, as the first feature map, the feature map output by the softmax layer of the first neural network input by the neural network input unit 210; and a fourth feature map determination subunit 222b for determining, as the second feature map, the feature map output by the softmax layer of the second neural network input by the neural network input unit 210. The loss function determination unit 230 includes: a third loss function determination subunit 231b for determining a cross-entropy loss function value of the second neural network based on the first feature map determined by the third feature map determination subunit 221b and the second feature map determined by the fourth feature map determination subunit 222b; and a fourth loss function determination subunit 232b for determining the first loss function value of the second neural network input by the neural network input unit 210 based on the cross-entropy loss function value determined by the third loss function determination subunit 231b.
FIG. 11 illustrates a block diagram of a schematic neural network update unit of the neural network training apparatus according to an embodiment of the present application.
As shown in FIG. 11, on the basis of the embodiment shown in FIG. 8, the neural network update unit 240 includes: a calculation subunit 241 for calculating the cross-entropy loss function value of the second neural network as the second loss function value; a weighting subunit 242 for calculating, as the total loss function value, the weighted sum of the first loss function value determined by the loss function determination unit 230 and the second loss function value calculated by the calculation subunit 241; and an update subunit 243 for updating the parameters of the second neural network through back propagation using the total loss function value calculated by the weighting subunit 242.
In one example, the above neural network training apparatus 200 further includes a preprocessing unit for training the first neural network until the first neural network converges, and for performing Gaussian initialization on the second neural network corresponding to the first neural network.
Here, those skilled in the art will appreciate that the specific functions and operations of each unit and module in the above neural network training apparatus 200 have been described in detail in the description of the neural network training method above with reference to FIGS. 1 to 7, and therefore repeated description thereof will be omitted.
As described above, the neural network training apparatus 200 according to the embodiments of the present application may be implemented in various terminal devices, such as a server for face recognition, object detection, or object classification. In one example, the neural network training apparatus 200 according to the embodiments of the present application may be integrated into a terminal device as a software module and/or a hardware module. For example, the neural network training apparatus 200 may be a software module in the operating system of the terminal device, or may be an application program developed for the terminal device; of course, the neural network training apparatus 200 may likewise be one of the many hardware modules of the terminal device.
Alternatively, in another example, the neural network training apparatus 200 and the terminal device may also be separate devices, and the neural network training apparatus 200 may be connected to the terminal device through a wired and/or wireless network and transmit interactive information in an agreed data format.
Exemplary Electronic Device
Hereinafter, an electronic device according to an embodiment of the present application is described with reference to FIG. 12.
FIG. 12 illustrates a block diagram of an electronic device according to an embodiment of the present application.
As shown in FIG. 12, the electronic device 10 includes one or more processors 11 and a memory 12.
The processor 11 may be a central processing unit (CPU) or another form of processing unit having data processing capabilities and/or instruction execution capabilities, and may control other components in the electronic device 10 to perform desired functions.
The memory 12 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, random access memory (RAM) and/or cache memory. The non-volatile memory may include, for example, read-only memory (ROM), a hard disk, flash memory, and the like. One or more computer program instructions may be stored on the computer-readable storage medium, and the processor 11 may execute the program instructions to implement the neural network training method of the embodiments of the present application described above and/or other desired functions. Various contents such as the first feature map, the second feature map, the first loss function value, and the second loss function value may also be stored in the computer-readable storage medium.
In one example, the electronic device 10 may further include an input device 13 and an output device 14; these components are interconnected by a bus system and/or another form of connection mechanism (not shown).
The input device 13 may include, for example, a keyboard, a mouse, and the like.
The output device 14 may output various kinds of information to the outside, including the trained second neural network and the like. The output device 14 may include, for example, a display, a speaker, a printer, a communication network, and remote output devices connected thereto.
Of course, for simplicity, FIG. 12 shows only some of the components in the electronic device 10 that are relevant to the present application, omitting components such as buses and input/output interfaces. In addition, depending on the specific application, the electronic device 10 may include any other appropriate components.
Exemplary Computer Program Product and Computer-Readable Storage Medium
In addition to the above methods and devices, embodiments of the present application may also be a computer program product including computer program instructions which, when executed by a processor, cause the processor to perform the steps in the neural network training method according to various embodiments of the present application described in the "Exemplary Method" section of this specification.
The computer program product may write the program code for performing the operations of the embodiments of the present application in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may be executed entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on a remote computing device or server.
Furthermore, embodiments of the present application may also be a computer-readable storage medium having computer program instructions stored thereon which, when executed by a processor, cause the processor to perform the steps in the neural network training method according to various embodiments of the present application described in the "Exemplary Method" section of this specification.
The computer-readable storage medium may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium may include, but is not limited to, electrical, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatuses, or devices, or any combination of the above. More specific examples (a non-exhaustive list) of readable storage media include: an electrical connection with one or more wires, a portable disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The basic principles of the present application have been described above in conjunction with specific embodiments. However, it should be noted that the merits, advantages, effects, and the like mentioned in the present application are merely examples and not limitations, and these merits, advantages, and effects cannot be considered necessary for each embodiment of the present application. In addition, the specific details disclosed above are only for the purpose of illustration and ease of understanding rather than limitation, and the above details do not limit the present application to being implemented with those specific details.
The block diagrams of components, apparatuses, devices, and systems involved in the present application are only illustrative examples and are not intended to require or imply that they must be connected, arranged, or configured in the manner shown in the block diagrams. As those skilled in the art will appreciate, these components, apparatuses, devices, and systems may be connected, arranged, or configured in any manner. Words such as "including", "comprising", and "having" are open-ended terms meaning "including but not limited to" and may be used interchangeably therewith. The words "or" and "and" as used herein refer to "and/or" and may be used interchangeably therewith unless the context clearly indicates otherwise. The term "such as" as used herein refers to "such as but not limited to" and may be used interchangeably therewith.
It should also be noted that in the apparatuses, devices, and methods of the present application, each component or step may be decomposed and/or recombined. Such decompositions and/or recombinations shall be regarded as equivalent solutions of the present application.
The above description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present application. Various modifications to these aspects will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to other aspects without departing from the scope of the present application. Therefore, the present application is not intended to be limited to the aspects shown herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The above description has been given for the purposes of illustration and description. Furthermore, this description is not intended to limit the embodiments of the present application to the forms disclosed herein. Although a number of example aspects and embodiments have been discussed above, those skilled in the art will recognize certain variations, modifications, changes, additions, and sub-combinations thereof.

Claims (10)

  1. A neural network training method, comprising:
    inputting training data into a trained first neural network and a second neural network to be trained;
    determining a first feature map output by a preset layer of the first neural network and a second feature map output by the second neural network at the preset layer;
    determining a first loss function value of the second neural network based on the first feature map and the second feature map;
    updating the parameters of the second neural network based on the first loss function value and a second loss function value of the second neural network; and
    using the updated parameters of the second neural network as the initial parameters of the second neural network to be trained, iteratively repeating the steps from inputting the training data into the trained first neural network and the second neural network to be trained through updating the parameters of the second neural network based on the first loss function value and the second loss function value of the second neural network, and obtaining the finally trained second neural network when the updated second neural network satisfies a preset condition.
  2. The neural network training method according to claim 1, wherein
    determining the first feature map output by the preset layer of the first neural network and the second feature map output by the second neural network at the preset layer comprises:
    determining the feature map output by the last convolutional layer of the first neural network as the first feature map; and
    determining the feature map output by the last convolutional layer of the second neural network as the second feature map; and
    determining the first loss function value of the second neural network based on the first feature map and the second feature map comprises:
    determining an L2 loss function value of the second neural network based on the first feature map and the second feature map; and
    determining the first loss function value of the second neural network based on the L2 loss function value.
  3. The neural network training method according to claim 1, wherein
    determining the first feature map output by the preset layer of the first neural network and the second feature map output by the second neural network at the preset layer comprises:
    determining the feature map output by the softmax layer of the first neural network as the first feature map; and
    determining the feature map output by the softmax layer of the second neural network as the second feature map; and
    determining the first loss function value of the second neural network based on the first feature map and the second feature map comprises:
    determining a cross-entropy loss function value of the second neural network based on the first feature map and the second feature map; and
    determining the first loss function value of the second neural network based on the cross-entropy loss function value.
  4. The neural network training method according to claim 1, wherein updating the parameters of the second neural network based on the first loss function value and the second loss function value of the second neural network comprises:
    calculating a cross-entropy loss function value of the second neural network as the second loss function value;
    calculating the weighted sum of the first loss function value and the second loss function value as a total loss function value; and
    updating the parameters of the second neural network through back propagation using the total loss function value.
  5. The neural network training method according to claim 1, further comprising, before inputting the training data into the trained first neural network and the second neural network to be trained:
    training the first neural network until the first neural network converges; and
    performing Gaussian initialization on the second neural network corresponding to the first neural network.
  6. A neural network training apparatus, comprising:
    a neural network input unit for inputting training data into a trained first neural network and a second neural network to be trained;
    a feature map determination unit for determining a first feature map output by a preset layer of the first neural network input by the neural network input unit and a second feature map output by the second neural network at the preset layer;
    a loss function determination unit for determining a first loss function value of the second neural network based on the first feature map and the second feature map determined by the feature map determination unit;
    a neural network update unit for updating the parameters of the second neural network based on the first loss function value determined by the loss function determination unit and a second loss function value of the second neural network; and
    an iterative update unit for using the updated parameters of the second neural network as the initial parameters of the second neural network to be trained, iteratively repeating the steps from inputting the training data into the trained first neural network and the second neural network to be trained through updating the parameters of the second neural network based on the first loss function value and the second loss function value of the second neural network, and obtaining the finally trained second neural network when the updated second neural network satisfies a preset condition.
  7. The neural network training apparatus according to claim 6, wherein
    the feature map determination unit comprises:
    a first feature map determination subunit for determining, as the first feature map, the feature map output by the last convolutional layer of the first neural network input by the neural network input unit; and a second feature map determination subunit for determining, as the second feature map, the feature map output by the last convolutional layer of the second neural network input by the neural network input unit; and
    the loss function determination unit comprises:
    a first loss function determination subunit for determining an L2 loss function value of the second neural network based on the first feature map determined by the first feature map determination subunit and the second feature map determined by the second feature map determination subunit; and
    a second loss function determination subunit for determining the first loss function value of the second neural network input by the neural network input unit based on the L2 loss function value determined by the first loss function determination subunit.
  8. The neural network training apparatus according to claim 6, wherein
    the feature map determination unit comprises:
    a third feature map determination subunit for determining, as the first feature map, the feature map output by the softmax layer of the first neural network input by the neural network input unit; and
    a fourth feature map determination subunit for determining, as the second feature map, the feature map output by the softmax layer of the second neural network input by the neural network input unit; and
    the loss function determination unit comprises:
    a third loss function determination subunit for determining a cross-entropy loss function value of the second neural network based on the first feature map determined by the third feature map determination subunit and the second feature map determined by the fourth feature map determination subunit; and
    a fourth loss function determination subunit for determining the first loss function value of the second neural network input by the neural network input unit based on the cross-entropy loss function value determined by the third loss function determination subunit.
  9. An electronic device, comprising:
    a processor; and
    a memory storing computer program instructions which, when executed by the processor, cause the processor to perform the neural network training method according to any one of claims 1 to 5.
  10. A computer-readable medium having computer program instructions stored thereon which, when executed by a processor, cause the processor to perform the neural network training method according to any one of claims 1 to 5.
PCT/CN2019/100983 2019-01-08 2019-08-16 神经网络的训练方法、训练装置和电子设备 WO2020143225A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/421,446 US20220083868A1 (en) 2019-01-08 2019-08-16 Neural network training method and apparatus, and electronic device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910015326.4 2019-01-08
CN201910015326.4A CN111414987B (zh) 2019-01-08 2019-01-08 神经网络的训练方法、训练装置和电子设备

Publications (1)

Publication Number Publication Date
WO2020143225A1 true WO2020143225A1 (zh) 2020-07-16

Family

ID=71494078

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/100983 WO2020143225A1 (zh) 2019-01-08 2019-08-16 神经网络的训练方法、训练装置和电子设备

Country Status (3)

Country Link
US (1) US20220083868A1 (zh)
CN (1) CN111414987B (zh)
WO (1) WO2020143225A1 (zh)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112862095A (zh) * 2021-02-02 2021-05-28 浙江大华技术股份有限公司 基于特征分析的自蒸馏学习方法、设备以及可读存储介质
CN113420227A (zh) * 2021-07-21 2021-09-21 北京百度网讯科技有限公司 点击率预估模型的训练方法、预估点击率的方法、装置
CN114330712A (zh) * 2021-12-31 2022-04-12 苏州浪潮智能科技有限公司 一种神经网络的训练方法、系统、设备以及介质
CN113420227B (zh) * 2021-07-21 2024-05-14 北京百度网讯科技有限公司 点击率预估模型的训练方法、预估点击率的方法、装置

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210374462A1 (en) * 2020-05-28 2021-12-02 Canon Kabushiki Kaisha Image processing apparatus, neural network training method, and image processing method
CN112288086B (zh) * 2020-10-30 2022-11-25 北京市商汤科技开发有限公司 一种神经网络的训练方法、装置以及计算机设备
US20220188605A1 (en) * 2020-12-11 2022-06-16 X Development Llc Recurrent neural network architectures based on synaptic connectivity graphs
CN112541462A (zh) * 2020-12-21 2021-03-23 南京烨鸿智慧信息技术有限公司 用于有机废气的光净化效果检测的神经网络的训练方法
CN112766488A (zh) * 2021-01-08 2021-05-07 江阴灵通网络科技有限公司 用于防凝固混凝土搅拌控制的神经网络的训练方法
CN113542651B (zh) * 2021-05-28 2023-10-27 爱芯元智半导体(宁波)有限公司 模型训练方法、视频插帧方法及对应装置
CN113657483A (zh) * 2021-08-14 2021-11-16 北京百度网讯科技有限公司 模型训练方法、目标检测方法、装置、设备以及存储介质
CN113780556A (zh) * 2021-09-18 2021-12-10 深圳市商汤科技有限公司 神经网络训练及文字识别的方法、装置、设备及存储介质
CN116384460A (zh) * 2023-03-29 2023-07-04 清华大学 鲁棒性光学神经网络训练方法、装置、电子设备及介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107247989A (zh) * 2017-06-15 2017-10-13 北京图森未来科技有限公司 一种神经网络训练方法及装置
US20180268292A1 (en) * 2017-03-17 2018-09-20 Nec Laboratories America, Inc. Learning efficient object detection models with knowledge distillation
CN108664893A (zh) * 2018-04-03 2018-10-16 福州海景科技开发有限公司 一种人脸检测方法及存储介质
CN108764462A (zh) * 2018-05-29 2018-11-06 成都视观天下科技有限公司 一种基于知识蒸馏的卷积神经网络优化方法
CN108960407A (zh) * 2018-06-05 2018-12-07 出门问问信息科技有限公司 递归神经网路语言模型训练方法、装置、设备及介质

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20180027887A (ko) * 2016-09-07 2018-03-15 삼성전자주식회사 뉴럴 네트워크에 기초한 인식 장치 및 뉴럴 네트워크의 트레이닝 방법
CN108805259A (zh) * 2018-05-23 2018-11-13 北京达佳互联信息技术有限公司 神经网络模型训练方法、装置、存储介质及终端设备
CN108830813B (zh) * 2018-06-12 2021-11-09 福建帝视信息科技有限公司 一种基于知识蒸馏的图像超分辨率增强方法

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180268292A1 (en) * 2017-03-17 2018-09-20 Nec Laboratories America, Inc. Learning efficient object detection models with knowledge distillation
CN107247989A (zh) * 2017-06-15 2017-10-13 北京图森未来科技有限公司 一种神经网络训练方法及装置
CN108664893A (zh) * 2018-04-03 2018-10-16 福州海景科技开发有限公司 一种人脸检测方法及存储介质
CN108764462A (zh) * 2018-05-29 2018-11-06 成都视观天下科技有限公司 一种基于知识蒸馏的卷积神经网络优化方法
CN108960407A (zh) * 2018-06-05 2018-12-07 出门问问信息科技有限公司 递归神经网路语言模型训练方法、装置、设备及介质

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112862095A (zh) * 2021-02-02 2021-05-28 浙江大华技术股份有限公司 基于特征分析的自蒸馏学习方法、设备以及可读存储介质
CN112862095B (zh) * 2021-02-02 2023-09-29 浙江大华技术股份有限公司 基于特征分析的自蒸馏学习方法、设备以及可读存储介质
CN113420227A (zh) * 2021-07-21 2021-09-21 北京百度网讯科技有限公司 点击率预估模型的训练方法、预估点击率的方法、装置
CN113420227B (zh) * 2021-07-21 2024-05-14 北京百度网讯科技有限公司 点击率预估模型的训练方法、预估点击率的方法、装置
CN114330712A (zh) * 2021-12-31 2022-04-12 苏州浪潮智能科技有限公司 一种神经网络的训练方法、系统、设备以及介质
CN114330712B (zh) * 2021-12-31 2024-01-12 苏州浪潮智能科技有限公司 一种神经网络的训练方法、系统、设备以及介质

Also Published As

Publication number Publication date
CN111414987B (zh) 2023-08-29
US20220083868A1 (en) 2022-03-17
CN111414987A (zh) 2020-07-14

Similar Documents

Publication Publication Date Title
WO2020143225A1 (zh) 神经网络的训练方法、训练装置和电子设备
WO2020083073A1 (zh) 非机动车图像多标签分类方法、系统、设备及存储介质
WO2019034129A1 (zh) 神经网络结构的生成方法和装置、电子设备、存储介质
WO2021174935A1 (zh) 对抗生成神经网络的训练方法及系统
WO2019232847A1 (zh) 手写模型训练方法、手写字识别方法、装置、设备及介质
CN114048331A (zh) 一种基于改进型kgat模型的知识图谱推荐方法及系统
WO2021000745A1 (zh) 一种知识图谱的嵌入表示方法及相关设备
US9836564B1 (en) Efficient extraction of the worst sample in Monte Carlo simulation
WO2022105108A1 (zh) 一种网络数据分类方法、装置、设备及可读存储介质
CN111612080B (zh) 模型解释方法、设备及可读存储介质
WO2023051369A1 (zh) 一种神经网络的获取方法、数据处理方法以及相关设备
CN109409508B (zh) 一种基于生成对抗网络使用感知损失解决模型崩塌的方法
JP6172317B2 (ja) 混合モデル選択の方法及び装置
WO2019232855A1 (zh) 手写模型训练方法、手写字识别方法、装置、设备及介质
CN114065693A (zh) 超大规模集成电路结构布局的优化方法、系统和电子设备
WO2020107264A1 (zh) 神经网络架构搜索的方法与装置
WO2023197857A1 (zh) 一种模型切分方法及其相关设备
WO2020252925A1 (zh) 用户特征群中用户特征寻优方法、装置、电子设备及计算机非易失性可读存储介质
WO2023078009A1 (zh) 一种模型权重获取方法以及相关系统
CN113361621B (zh) 用于训练模型的方法和装置
CN111339308B (zh) 基础分类模型的训练方法、装置和电子设备
CN112348045A (zh) 神经网络的训练方法、训练装置和电子设备
CN112348161A (zh) 神经网络的训练方法、神经网络的训练装置和电子设备
CN112862758A (zh) 用于检测墙壁顶面的油漆涂抹质量的神经网络的训练方法
CN112766365A (zh) 用于智能光影弯折检测的神经网络的训练方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19908981

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19908981

Country of ref document: EP

Kind code of ref document: A1

122 Ep: pct application non-entry in european phase

Ref document number: 19908981

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 16.02.2022)

122 Ep: pct application non-entry in european phase

Ref document number: 19908981

Country of ref document: EP

Kind code of ref document: A1