CN110782017A - Method and device for adaptively adjusting learning rate - Google Patents


Info

Publication number
CN110782017A
Authority
CN
China
Prior art keywords
learning rate
training
model
updating
gradient
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911025726.XA
Other languages
Chinese (zh)
Other versions
CN110782017B (en)
Inventor
希滕
张刚
温圣召
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN201911025726.XA
Publication of CN110782017A
Application granted
Publication of CN110782017B
Legal status: Active (current)
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Molecular Biology (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)
  • Stored Programmes (AREA)

Abstract

The embodiment of the disclosure discloses a method and a device for adaptively adjusting a learning rate. One embodiment of the method comprises: initializing an initial learning rate and model parameters of the model; calculating the gradient of the model parameters; and performing the following attenuation step: calculating a local first derivative according to the gradient and the learning rate; determining whether the local first derivative satisfies a predetermined condition; if so, updating the model parameters according to the gradient and the learning rate; and if not, attenuating the learning rate and continuing the attenuation step based on the attenuated learning rate. This implementation removes the tedious parameter tuning required when a learning rate decay strategy is designed and learned by hand, and also overcomes the inability of simple learning rate reduction strategies to converge to better model parameters.

Description

Method and device for adaptively adjusting learning rate
Technical Field
The embodiment of the disclosure relates to the technical field of computers, in particular to a method and a device for adaptively adjusting a learning rate.
Background
In recent years, deep learning techniques have achieved great success in many fields. For an optimizer, the learning rate reduction strategy (including the initial learning rate) is crucial: it limits both the convergence speed of the model and the final accuracy to which the model converges. At present, the learning rate is mainly adjusted either by manually setting a learning rate schedule or by setting a simple learning rate decay rule, such as exponential decay, reciprocal decay or cosine decay. The simple rules require no manual intervention, but they are too crude: the learning rate depends only on the iteration round and has nothing to do with the local gradient characteristics of the model, so it is difficult to converge to optimal model parameters. A manually set learning rate, on the other hand, depends heavily on human prior knowledge, and for a new task there is often no suitable strategy to draw on. In addition, whether the task is new or old, tuning the learning rate is very tedious, consumes much of the researchers' energy, and wastes equipment resources on redundant debugging.
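For concreteness, the simple schedules mentioned above can be written as pure functions of the iteration index. The sketch below is illustrative only and not part of the disclosure (the hyperparameter values are arbitrary); it makes plain that such rules never look at the gradients:

```python
import math

def exponential_decay(base_lr, step, gamma=0.96):
    # Learning rate shrinks geometrically with the step counter.
    return base_lr * gamma ** step

def reciprocal_decay(base_lr, step, k=0.01):
    # Learning rate shrinks as 1 / (1 + k * step).
    return base_lr / (1.0 + k * step)

def cosine_decay(base_lr, step, total_steps):
    # Learning rate follows half a cosine wave from base_lr down to 0.
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * step / total_steps))

# Each schedule depends only on the step index, not on the model or its gradients.
print([round(exponential_decay(0.1, s), 4) for s in range(0, 100, 25)])
print([round(reciprocal_decay(0.1, s), 4) for s in range(0, 100, 25)])
print([round(cosine_decay(0.1, s, 100), 4) for s in range(0, 100, 25)])
```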
Disclosure of Invention
The embodiment of the disclosure provides a method and a device for adaptively adjusting a learning rate.
In a first aspect, an embodiment of the present disclosure provides a method for adaptively adjusting a learning rate, including: initializing an initial learning rate and model parameters of the model; calculating the gradient of the model parameter; the following attenuation steps are performed: calculating a local first derivative according to the gradient and the learning rate; determining whether the local first derivative satisfies a predetermined condition; if so, updating the model parameters according to the gradient and the learning rate; if not, the learning rate is attenuated, and the attenuation step is continued based on the attenuated learning rate.
In some embodiments, the method further comprises: the following training steps are performed: calculating the gradient of the updated model parameter; continuing to perform the above-described attenuation step based on the gradient until the local first derivative satisfies a predetermined condition; if the model meets the training completion condition, ending the training; if the model does not meet the training completion condition, updating the model parameters according to the gradient and the learning rate, and continuing to execute the training step.
In some embodiments, the method further comprises: for each training stage, at least one learning rate for updating the model parameters is acquired from a predetermined number of previous batches of training in the stage, and the average value of the at least one learning rate for updating the model parameters is used as the learning rate used in other batches of training in the stage.
In some embodiments, the method further comprises: after the model parameters are updated every time, the initial learning rate in the next training is set to be not less than the learning rate used for updating the model parameters.
In some embodiments, the predetermined condition includes: when the parameters of the model are updated according to the current learning rate, the function representation of the model before and after the update satisfies the local concave-convex property.
In a second aspect, an embodiment of the present disclosure provides an apparatus for adaptively adjusting a learning rate, including: an initialization unit configured to initialize an initial learning rate and model parameters of a model; a calculation unit configured to calculate gradients of the model parameters; an attenuation unit configured to perform the following attenuation step: calculating a local first derivative according to the gradient and the learning rate; determining whether the local first derivative satisfies a predetermined condition; and if so, updating the model parameters according to the gradient and the learning rate; and a loop unit configured to, if the predetermined condition is not satisfied, attenuate the learning rate and continue to execute the attenuation step based on the attenuated learning rate.
In some embodiments, the apparatus further comprises a training unit configured to: the following training steps are performed: calculating the gradient of the updated model parameter; continuing to perform the above-described attenuation step based on the gradient until the local first derivative satisfies a predetermined condition; if the model meets the training completion condition, ending the training; if the model does not meet the training completion condition, updating the model parameters according to the gradient and the learning rate, and continuing to execute the training step.
In some embodiments, the initialization unit is further configured to: for each training stage, at least one learning rate for updating the model parameters is acquired from a predetermined number of previous batches of training in the stage, and the average value of the at least one learning rate for updating the model parameters is used as the learning rate used in other batches of training in the stage.
In some embodiments, the initialization unit is further configured to: after the model parameters are updated every time, the initial learning rate in the next training is set to be not less than the learning rate used for updating the model parameters.
In some embodiments, the predetermined condition includes: when the parameters of the model are updated according to the current learning rate, the function representation of the model before and after the update satisfies the local concave-convex property.
In a third aspect, an embodiment of the present disclosure provides an electronic device for adaptively adjusting a learning rate, including: one or more processors; a storage device having one or more programs stored thereon which, when executed by one or more processors, cause the one or more processors to implement a method as in any one of the first aspects.
In a fourth aspect, embodiments of the disclosure provide a computer readable medium having a computer program stored thereon, wherein the program when executed by a processor implements a method as in any one of the first aspect.
The method and the device for adaptively adjusting the learning rate provided by the embodiments of the present disclosure eliminate the tedious parameter tuning involved in manually designing a learning rate decay strategy, and also overcome the inability of simple learning rate reduction strategies to converge to better model parameters.
Drawings
Other features, objects and advantages of the disclosure will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 is an exemplary system architecture diagram in which one embodiment of the present disclosure may be applied;
FIG. 2 is a flow diagram of one embodiment of a method for adaptively adjusting a learning rate according to the present disclosure;
FIG. 3 is a schematic diagram of one application scenario of a method for adaptively adjusting a learning rate according to the present disclosure;
FIG. 4 is a schematic structural diagram of an embodiment of an apparatus for adaptively adjusting a learning rate according to the present disclosure;
FIG. 5 is a schematic block diagram of a computer system suitable for use with an electronic device implementing embodiments of the present disclosure.
Detailed Description
The present disclosure is described in further detail below with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings.
It should be noted that, in the present disclosure, the embodiments and features of the embodiments may be combined with each other without conflict. The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1 shows an exemplary system architecture 100 to which embodiments of the disclosed method for adaptively adjusting a learning rate or apparatus for adaptively adjusting a learning rate may be applied.
As shown in fig. 1, the system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The user may use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104 to receive or send messages or the like. Various communication client applications, such as an image acquisition application, an image processing application, a search application, etc., may be installed on the terminal devices 101, 102, 103.
The terminal apparatuses 101, 102, and 103 may be hardware or software. When the terminal devices 101, 102, 103 are hardware, they may be various electronic devices with display screens, including but not limited to smart phones, tablet computers, laptop portable computers, desktop computers, and the like. When the terminal apparatuses 101, 102, 103 are software, they can be installed in the electronic apparatuses listed above. They may be implemented as a plurality of software or software modules (for example, to provide image acquisition services) or as a single software or software module, which is not specifically limited herein.
The server 105 may be a server that provides various services, such as a server that performs neural network training based on sample images uploaded by the terminal devices 101, 102, 103 (e.g., street view images taken by an unmanned vehicle). The server can analyze and process the received data such as the sample images, generate a neural network model, and feed the neural network model back to the terminal devices. It can also process images to be recognized uploaded by the terminal devices and feed the processing results (such as image segmentation results) back to the terminal devices. During training, the learning rate can be adjusted adaptively.
It should be noted that the method for adaptively adjusting the learning rate provided by the embodiment of the present disclosure is generally performed by the server 105, and accordingly, the apparatus for adaptively adjusting the learning rate is generally disposed in the server 105.
The server 105 may be hardware or software. When the server 105 is hardware, it may be implemented as a distributed server cluster composed of a plurality of servers, or as a single server. When the server is software, it may be implemented as a plurality of software or software modules (for example, for providing an image segmentation service) or as a single software or software module, which is not specifically limited herein.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
With continued reference to fig. 2, a flow 200 of one embodiment of a method for adaptively adjusting a learning rate according to the present disclosure is shown. The method for adaptively adjusting the learning rate comprises the following steps:
in step 201, the initial learning rate and model parameters of the model are initialized.
In this embodiment, an executing entity of the method for adaptively adjusting the learning rate (for example, the server shown in fig. 1) may receive, through a wired or wireless connection, a training request from a terminal with which a user performs model training, where the training request includes a training sample set, for example CIFAR-10. "Model" here is short for neural network model. The initial learning rate may be the initial learning rate of a common optimizer, or may be set higher than it.
Taking the convolutional neural network model as an example, since the convolutional neural network is a multi-layer neural network, each layer is composed of a plurality of two-dimensional planes, and each plane is composed of a plurality of independent neurons, it is necessary to determine which layers (e.g., convolutional layers, pooling layers, fully-connected layers, classifiers, etc.) the initial neural network of the convolutional neural network type includes, the connection order relationship between the layers, and which parameters (e.g., weights, bias terms, convolution step sizes) each layer includes, etc. during initialization. Among other things, convolutional layers may be used to extract image features. For each convolution layer, it can determine how many convolution kernels there are, the size of each convolution kernel, the weight of each neuron in each convolution kernel, the bias term corresponding to each convolution kernel, the step size between two adjacent convolutions, and the like.
In practice, the various network parameters of the neural network (e.g., weight parameters and bias parameters) may be initialized with a number of different small random numbers. Small random numbers ensure that the network does not enter a saturated state because of excessively large weights, which would cause training to fail; using different random numbers ensures that the network can learn normally.
The learning rate (learning_rate) determines the magnitude of each parameter update.
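As a minimal illustration of this point (not the patent's reference implementation; the helper name is hypothetical), a plain gradient-descent update scales the gradient by the learning rate:

```python
import numpy as np

def sgd_update(params, grads, learning_rate):
    # Plain gradient-descent step: the learning rate scales each update.
    return params - learning_rate * grads

# A larger learning rate produces a proportionally larger step.
w = np.array([0.5, -1.2])
g = np.array([0.1, 0.3])
print(sgd_update(w, g, 0.9))   # large step
print(sgd_update(w, g, 0.05))  # much smaller step
```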
Step 202, the gradient of the model parameters is calculated.
In this embodiment, the gradient is calculated as described in equation 1, which is a common method in the prior art and therefore will not be described in detail.
f'(W) = ∂f(W)/∂W (equation 1)
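Equation 1 appears in the source only as an image reference; as a stand-in, the sketch below approximates the gradient of a loss function by central finite differences over a 1-D parameter vector. This is purely illustrative; a real model would compute the gradient by backpropagation as the text notes.

```python
import numpy as np

def numerical_gradient(f, W, eps=1e-6):
    # Central finite-difference approximation of f'(W) for a 1-D float vector W.
    grad = np.zeros(W.shape)
    for i in range(W.size):
        e = np.zeros(W.shape)
        e[i] = eps
        grad[i] = (f(W + e) - f(W - e)) / (2 * eps)
    return grad

# Toy quadratic loss: gradient of sum(w^2) at [1, -2] is approximately [2, -4].
f = lambda w: float(np.sum(w ** 2))
print(numerical_gradient(f, np.array([1.0, -2.0])))
```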
In step 203, a local first derivative is calculated based on the gradient and the learning rate.
In this embodiment, the predetermined condition is that, when the parameters of the model are updated according to the current learning rate, the function representation of the model before and after the update satisfies the local concave-convex property, expressed by the following formula (a code sketch of this check follows the notes below):
f(W + l·dW) < f(W) + l·f'(W)^T·dW (equation 2)
The notation is as follows:
1) f is the function representation of the neural network;
2) W is the current parameter value;
3) d is a preset very small constant;
4) f'(W) denotes the gradient of f at W;
5) f'(W)^T denotes the transpose of f'(W);
6) l is the learning rate at the current stage.
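A minimal sketch of this check follows, under stated assumptions: the loss f takes a flat parameter vector, and the perturbation dW is taken along the negative gradient scaled by the small constant d (the source defines d only as "a preset very small constant", so that pairing is an assumption). The function name is hypothetical.

```python
import numpy as np

def satisfies_local_condition(f, grad_f, W, lr, d=1e-3):
    # Equation 2: f(W + lr * dW) < f(W) + lr * f'(W)^T dW,
    # with dW assumed here to be a small step along the negative gradient.
    dW = -d * grad_f(W)
    lhs = f(W + lr * dW)
    rhs = f(W) + lr * (grad_f(W) @ dW)
    return lhs < rhs

# Toy usage on a quadratic "loss" in two parameters (outputs not asserted).
f = lambda w: float(np.sum(w ** 2))
grad_f = lambda w: 2.0 * w
W = np.array([1.0, -2.0])
print(satisfies_local_condition(f, grad_f, W, lr=0.9))
print(satisfies_local_condition(f, grad_f, W, lr=0.05))
```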
In step 204, if the local first derivative does not meet the predetermined condition, the learning rate is attenuated, and the attenuation step continues to be executed based on the attenuated learning rate.
In this embodiment, if equation 2 is not satisfied, l is attenuated until equation 2 is satisfied. The attenuation may be performed by a coefficient or by a fixed step; for example, a decay coefficient of 0.9 or a decay step of 0.05 may be set.
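The decay loop of step 204 might look like the sketch below, where the condition callable stands for the equation-2 check sketched above; the decay coefficient, the commented additive alternative, and the lower bound on the learning rate are illustrative values, not taken from the source.

```python
def find_learning_rate(condition, init_lr, decay=0.9, min_lr=1e-8):
    # Decay the learning rate until condition(lr) holds (step 204),
    # then return it for the parameter update (step 205).
    lr = init_lr
    while not condition(lr):
        lr *= decay          # multiplicative decay, e.g. coefficient 0.9
        # lr -= 0.05         # alternative: a fixed decay step
        if lr < min_lr:      # safeguard, not part of the source text
            break
    return lr
```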
In step 205, if the local first derivative satisfies a predetermined condition, the model parameters are updated according to the gradient and the learning rate.
In the present embodiment, if the local first-order derivative satisfies the predetermined condition, it is indicated that the selected learning rate is appropriate, and the model parameters may be updated at the learning rate.
In some optional implementations of the present embodiment, after the model parameters are updated, if the training completion condition is not met, training still needs to continue. The preset training completion condition may include, but is not limited to, at least one of the following: the training time exceeds a preset duration; the number of training iterations exceeds a preset number; the calculated loss is less than a preset threshold. During training, the method continues to follow steps 202-205: a suitable learning rate is first determined and the parameters are then updated. The initial learning rate of the next round may be set to the learning rate that was found for updating the model parameters in the previous round of training, or to a value higher than it, and the decay then starts from this initial learning rate. This speeds up the search and helps the model converge as soon as possible.
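One possible arrangement of steps 202-205 together with the warm start just described is sketched below. It reuses the hypothetical helpers satisfies_local_condition and find_learning_rate from the earlier sketches, assumes the parameters are a NumPy vector, and uses placeholder completion thresholds; it is a sketch under these assumptions, not the patented procedure itself.

```python
import time

def train(model_params, f, grad_f, init_lr=0.9,
          max_rounds=1000, max_seconds=3600.0, loss_threshold=1e-4):
    # Steps 202-205: per round, find an acceptable learning rate, update the
    # parameters, and warm-start the next round from the rate just used.
    lr = init_lr
    start = time.time()
    for _ in range(max_rounds):                               # iteration limit
        grads = grad_f(model_params)                          # step 202
        lr = find_learning_rate(                              # steps 203-204
            lambda l: satisfies_local_condition(f, grad_f, model_params, l),
            init_lr=lr)
        model_params = model_params - lr * grads              # step 205
        if time.time() - start > max_seconds:                 # time limit
            break
        if f(model_params) < loss_threshold:                  # loss threshold
            break
        lr = lr * 1.05   # warm start at (or slightly above) this round's rate
    return model_params, lr
```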
In some alternative implementations of the present embodiment, it is not necessary to search for a suitable learning rate in every training round. The samples may be divided into chunks and trained in stages, and each stage may in turn be trained in batches. The appropriate learning rate, i.e., the learning rate used for updating the model parameters, is found per batch. For each training stage, at least one learning rate used for updating the model parameters is obtained from a predetermined number of earlier batches of training in the stage, and the average of these learning rates is used as the learning rate for the other batches of training in the stage.
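A minimal sketch of this staged scheme, assuming the first probe_batches batches of a stage each search for their own learning rate while the remaining batches reuse their average; all names are illustrative, and the per-batch search is supplied as a callable.

```python
from statistics import mean

def stage_learning_rates(num_batches, probe_batches, search_lr_for_batch):
    # For one training stage: search the learning rate only for the first
    # probe_batches batches, then reuse their average for the remaining ones.
    probed = [search_lr_for_batch(i) for i in range(probe_batches)]
    avg_lr = mean(probed)
    return probed + [avg_lr] * (num_batches - probe_batches)

# Example with a stand-in search function returning fixed values:
# the first three rates are the probed ones, the rest reuse their average.
print(stage_learning_rates(6, 3, lambda i: [0.9, 0.8, 0.7][i]))
```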
With continued reference to fig. 3, fig. 3 is a schematic diagram of an application scenario of the method for adaptively adjusting the learning rate according to the present embodiment. In the application scenario of fig. 3, the initial learning rate is set to 0.9. The gradient of the current model is then calculated and, combined with the learning rate, the local first derivative is calculated. If the result does not satisfy equation 2, the learning rate is decayed to 0.85; the local first derivative is then recalculated with the gradient of the current model, and if the result satisfies equation 2, the model parameters are updated with the learning rate 0.85. In the next round of training, the initial learning rate is set to the 0.85 obtained in the previous round, and the above process is repeated until a learning rate of 0.75 satisfying equation 2 is found, after which the model parameters are updated.
According to the method provided by the embodiment of the disclosure, by adaptively adjusting the learning rate based on local first-order derivative characteristics, the tedious parameter tuning of manually designed learning rate decay strategies is eliminated, and at the same time the inability of simple learning rate reduction strategies to converge to better model parameters is overcome.
With further reference to fig. 4, as an implementation of the methods shown in the above figures, the present disclosure provides an embodiment of an apparatus for adaptively adjusting a learning rate, which corresponds to the method embodiment shown in fig. 2, and which is particularly applicable to various electronic devices.
As shown in fig. 4, the apparatus 400 for adaptively adjusting the learning rate of the present embodiment includes: an initialization unit 401, a calculation unit 402, an attenuation unit 403 and a loop unit 404. Wherein, the initialization unit 401 is configured to initialize an initial learning rate and model parameters of the model; the calculation unit 402 is configured to calculate gradients of the model parameters; the attenuation unit 403 is configured to perform the following attenuation step: calculating a local first derivative according to the gradient and the learning rate; determining whether the local first derivative satisfies a predetermined condition; and if so, updating the model parameters according to the gradient and the learning rate; and the loop unit 404 is configured to, if the predetermined condition is not satisfied, decay the learning rate and continue to execute the above-mentioned decay step based on the decayed learning rate.
In the present embodiment, the specific processes of the initialization unit 401, the calculation unit 402, the attenuation unit 403, and the loop unit 404 of the apparatus 400 for adaptively adjusting the learning rate may refer to step 201, step 202, step 203, step 204 in the corresponding embodiment of fig. 2.
In some optional implementations of this embodiment, the apparatus 400 further comprises a training unit (not shown in the drawings) configured to: the following training steps are performed: calculating the gradient of the updated model parameter; continuing to perform the above-described attenuation step based on the gradient until the local first derivative satisfies a predetermined condition; if the model meets the training completion condition, ending the training; if the model does not meet the training completion condition, updating the model parameters according to the gradient and the learning rate, and continuing to execute the training step.
In some optional implementations of this embodiment, the initialization unit 401 is further configured to: for each training stage, at least one learning rate for updating the model parameters is acquired from a predetermined number of previous batches of training in the stage, and the average value of the at least one learning rate for updating the model parameters is used as the learning rate used in other batches of training in the stage.
In some optional implementations of this embodiment, the initialization unit 401 is further configured to: after the model parameters are updated every time, the initial learning rate in the next training is set to be not less than the learning rate used for updating the model parameters.
In some optional implementations of this embodiment, the predetermined condition includes: when the parameters of the model are updated according to the current learning rate, the function representation of the model before and after the update satisfies the local concave-convex property.
Referring now to FIG. 5, a schematic diagram of an electronic device (e.g., the server of FIG. 1) 500 suitable for use in implementing embodiments of the present disclosure is shown. The server shown in fig. 5 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 5, electronic device 500 may include a processing means (e.g., central processing unit, graphics processor, etc.) 501 that may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM) 502 or a program loaded from a storage means 508 into a Random Access Memory (RAM) 503. In the RAM 503, various programs and data necessary for the operation of the electronic apparatus 500 are also stored. The processing device 501, the ROM 502, and the RAM 503 are connected to each other through a bus 504. An input/output (I/O) interface 505 is also connected to bus 504.
Generally, the following devices may be connected to the I/O interface 505: input devices 506 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 507 including, for example, a Liquid Crystal Display (LCD), speakers, vibrators, and the like; storage devices 508 including, for example, magnetic tape, hard disk, etc.; and a communication device 509. The communication means 509 may allow the electronic device 500 to communicate with other devices wirelessly or by wire to exchange data. While fig. 5 illustrates an electronic device 500 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided. Each block shown in fig. 5 may represent one device or may represent multiple devices as desired.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication means 509, or installed from the storage means 508, or installed from the ROM 502. The computer program, when executed by the processing device 501, performs the above-described functions defined in the methods of embodiments of the present disclosure. It should be noted that the computer readable medium described in the embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In embodiments of the disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In embodiments of the present disclosure, however, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: initializing an initial learning rate and model parameters of the model; calculating the gradient of the model parameter; the following attenuation steps are performed: calculating a local first derivative according to the gradient and the learning rate; determining whether the local first derivative satisfies a predetermined condition; if so, updating the model parameters according to the gradient and the learning rate; if not, the learning rate is attenuated, and the attenuation step is continued based on the attenuated learning rate.
Computer program code for carrying out operations for embodiments of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. The described units may also be provided in a processor, and may be described as: a processor includes an initialization unit, a calculation unit, an attenuation unit, and a loop unit. Where the names of these units do not in some cases constitute a limitation on the unit itself, for example, an initialization unit may also be described as a "unit that initializes the initial learning rate and model parameters of the model".
The foregoing description is only a description of the preferred embodiments of the present disclosure and of the technical principles employed. It will be appreciated by those skilled in the art that the scope of the invention in the present disclosure is not limited to the specific combination of the above-mentioned features, but also encompasses other technical solutions formed by any combination of the above-mentioned features or their equivalents without departing from the inventive concept, for example, technical solutions formed by replacing the above features with (but not limited to) features having similar functions disclosed in the present disclosure.

Claims (12)

1. A method for adaptively adjusting a learning rate, comprising:
initializing an initial learning rate and model parameters of the model;
calculating a gradient of the model parameter;
the following attenuation steps are performed: calculating a local first derivative according to the gradient and the learning rate; determining whether the local first derivative satisfies a predetermined condition; if so, updating the model parameters according to the gradient and the learning rate;
if not, the learning rate is attenuated, and the attenuation step is continuously executed based on the attenuated learning rate.
2. The method of claim 1, wherein the method further comprises:
the following training steps are performed: calculating the gradient of the updated model parameter; continuing to perform the above-described attenuation step based on the gradient until the local first derivative satisfies a predetermined condition; if the model meets the training completion condition, ending the training;
and if the model does not meet the training completion condition, updating the model parameters according to the gradient and the learning rate, and continuing to execute the training step.
3. The method of claim 1, wherein the method further comprises:
for each training stage, acquiring at least one learning rate for updating the model parameters from a predetermined number of previous batches of training in the stage, and taking the average value of the at least one learning rate for updating the model parameters as the learning rate used in other batches of training in the stage.
4. The method of claim 1, wherein the method further comprises:
after the model parameters are updated every time, the initial learning rate in the next training is set to be not less than the learning rate used for updating the model parameters at this time.
5. The method according to one of claims 1 to 4, wherein the predetermined condition comprises:
updating the parameters of the model according to the current learning rate, wherein the function representation of the model before and after the update satisfies the local concave-convex property.
6. An apparatus for adaptively adjusting a learning rate, comprising:
an initialization unit configured to initialize an initial learning rate and model parameters of a model;
a calculation unit configured to calculate a gradient of the model parameter;
an attenuation unit configured to perform the attenuation steps of: calculating a local first derivative according to the gradient and the learning rate; determining whether the local first derivative satisfies a predetermined condition; if so, updating the model parameters according to the gradient and the learning rate;
and a loop unit configured to, if the predetermined condition is not satisfied, attenuate the learning rate and continue to execute the attenuation step based on the attenuated learning rate.
7. The apparatus of claim 6, wherein the apparatus further comprises a training unit configured to:
the following training steps are performed: calculating the gradient of the updated model parameter; continuing to perform the above-described attenuation step based on the gradient until the local first derivative satisfies a predetermined condition; if the model meets the training completion condition, ending the training;
and if the model does not meet the training completion condition, updating the model parameters according to the gradient and the learning rate, and continuing to execute the training step.
8. The apparatus of claim 6, wherein the initialization unit is further configured to:
for each training stage, acquiring at least one learning rate for updating the model parameters from a predetermined number of previous batches of training in the stage, and taking the average value of the at least one learning rate for updating the model parameters as the learning rate used in other batches of training in the stage.
9. The apparatus of claim 6, wherein the initialization unit is further configured to:
after the model parameters are updated every time, the initial learning rate in the next training is set to be not less than the learning rate used for updating the model parameters at this time.
10. The apparatus according to one of claims 6-9, wherein the predetermined condition comprises:
updating the parameters of the model according to the current learning rate, wherein the function representation of the model before and after the update satisfies the local concave-convex property.
11. An electronic device for adaptively adjusting a learning rate, comprising:
one or more processors;
a storage device having one or more programs stored thereon,
when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-5.
12. A computer-readable medium, on which a computer program is stored, wherein the program, when executed by a processor, implements the method of any one of claims 1-5.
CN201911025726.XA 2019-10-25 2019-10-25 Method and device for adaptively adjusting learning rate Active CN110782017B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911025726.XA CN110782017B (en) 2019-10-25 2019-10-25 Method and device for adaptively adjusting learning rate

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911025726.XA CN110782017B (en) 2019-10-25 2019-10-25 Method and device for adaptively adjusting learning rate

Publications (2)

Publication Number Publication Date
CN110782017A true CN110782017A (en) 2020-02-11
CN110782017B CN110782017B (en) 2022-11-22

Family

ID=69386654

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911025726.XA Active CN110782017B (en) 2019-10-25 2019-10-25 Method and device for adaptively adjusting learning rate

Country Status (1)

Country Link
CN (1) CN110782017B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3996005A1 (en) * 2020-11-06 2022-05-11 Fujitsu Limited Calculation processing program, calculation processing method, and information processing device

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103971163A (en) * 2014-05-09 2014-08-06 哈尔滨工程大学 Adaptive learning rate wavelet neural network control method based on normalization lowest mean square adaptive filtering
CN108205706A (en) * 2016-12-19 2018-06-26 上海寒武纪信息科技有限公司 Artificial neural network reverse train device and method
WO2018112699A1 (en) * 2016-12-19 2018-06-28 上海寒武纪信息科技有限公司 Artificial neural network reverse training device and method
CN109389222A (en) * 2018-11-07 2019-02-26 清华大学深圳研究生院 A kind of quick adaptive neural network optimization method

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103971163A (en) * 2014-05-09 2014-08-06 哈尔滨工程大学 Adaptive learning rate wavelet neural network control method based on normalization lowest mean square adaptive filtering
CN108205706A (en) * 2016-12-19 2018-06-26 上海寒武纪信息科技有限公司 Artificial neural network reverse train device and method
WO2018112699A1 (en) * 2016-12-19 2018-06-28 上海寒武纪信息科技有限公司 Artificial neural network reverse training device and method
CN109389222A (en) * 2018-11-07 2019-02-26 清华大学深圳研究生院 A kind of quick adaptive neural network optimization method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
FAN Haiwei et al., "Improved BP algorithm and its application in pavement crack detection", Journal of Chang'an University (Natural Science Edition) *
JIANG Wenbin et al., "Research on adaptive learning rate algorithms for deep learning", Journal of Huazhong University of Science and Technology (Natural Science Edition) *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3996005A1 (en) * 2020-11-06 2022-05-11 Fujitsu Limited Calculation processing program, calculation processing method, and information processing device

Also Published As

Publication number Publication date
CN110782017B (en) 2022-11-22

Similar Documents

Publication Publication Date Title
CN109800732B (en) Method and device for generating cartoon head portrait generation model
CN108520220B (en) Model generation method and device
CN111523640B (en) Training method and device for neural network model
CN112699991A (en) Method, electronic device, and computer-readable medium for accelerating information processing for neural network training
CN110766142A (en) Model generation method and device
CN108197652B (en) Method and apparatus for generating information
WO2019111118A1 (en) Robust gradient weight compression schemes for deep learning applications
CN113128419B (en) Obstacle recognition method and device, electronic equipment and storage medium
CN111368973B (en) Method and apparatus for training a super network
CN109993298B (en) Method and apparatus for compressing neural networks
CN112668588B (en) Parking space information generation method, device, equipment and computer readable medium
CN113505848B (en) Model training method and device
US20220114479A1 (en) Systems and methods for automatic mixed-precision quantization search
CN113095129A (en) Attitude estimation model training method, attitude estimation device and electronic equipment
CN111311480A (en) Image fusion method and device
CN110782016A (en) Method and apparatus for optimizing neural network architecture search
CN110782017B (en) Method and device for adaptively adjusting learning rate
CN112241761B (en) Model training method and device and electronic equipment
CN109670579A (en) Model generating method and device
CN113610228B (en) Method and device for constructing neural network model
CN115293292A (en) Training method and device for automatic driving decision model
CN113762304B (en) Image processing method, image processing device and electronic equipment
CN111582456B (en) Method, apparatus, device and medium for generating network model information
CN111523639B (en) Method and apparatus for training a super network
CN111310896B (en) Method and device for training neural network

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant