CN109491494A - Power parameter adjustment method and apparatus, and reinforcement learning model training method - Google Patents
Power parameter adjustment method and apparatus, and reinforcement learning model training method
- Publication number
- CN109491494A (application number CN201811419611.4A)
- Authority
- CN
- China
- Prior art keywords
- parameter
- power parameter
- neural network
- power
- network processor
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F1/00—Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
- G06F1/26—Power supply means, e.g. regulation thereof
- G06F1/32—Means for saving power
- G06F1/3203—Power management, i.e. event-based initiation of a power-saving mode
- G06F1/3206—Monitoring of events, devices or parameters that trigger a change in power modality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F1/00—Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
- G06F1/26—Power supply means, e.g. regulation thereof
- G06F1/32—Means for saving power
- G06F1/3203—Power management, i.e. event-based initiation of a power-saving mode
- G06F1/3234—Power saving characterised by the action undertaken
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/06—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
- G06N3/063—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Artificial Intelligence (AREA)
- Neurology (AREA)
- Computational Linguistics (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- General Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Management, Administration, Business Operations System, And Electronic Commerce (AREA)
- Supply And Distribution Of Alternating Current (AREA)
Abstract
Embodiments of the present application disclose a power parameter adjustment method and apparatus, and an electronic device. The method comprises: determining a state parameter and a first power parameter of a neural network processor at runtime; determining a second power parameter of the neural network processor according to the state parameter and the first power parameter; and adjusting the power parameter of the neural network processor from the first power parameter to the second power parameter. By adjusting the power parameter of the neural network processor, the neural network processor is made to run in an energy-optimal state, achieving the goal of saving energy.
Description
Technical field
The present invention relates to the field of processor technology, and in particular to a power parameter adjustment method and apparatus for a neural network processor, a reinforcement learning model training method, and an electronic device.
Background technique
At present, in most scenarios with power-consumption constraints, such as mobile phones and computers, processors such as CPUs and GPUs support dynamic voltage and frequency scaling (DVFS). For a given task, the total amount of computation performed by the processor is constant; only by lowering the voltage together with the frequency can energy consumption truly be reduced.
Existing DVFS management strategies are mostly designed for CPUs. However, with the development of artificial intelligence technology, neural network processors (NPUs) are increasingly widely used. The architecture of a CPU differs from that of a neural network processor, and CPU-oriented DVFS algorithms rely on software running on the CPU itself, which an NPU, being highly customized, cannot run. A DVFS management method for NPUs is therefore needed.
Summary of the invention
To solve the above technical problem, the present application is proposed. Embodiments of the present application provide a power parameter adjustment method that realizes DVFS management for an NPU.
According to one aspect of the application, a power parameter adjustment method is provided, comprising: determining a state parameter and a first power parameter of a neural network processor at runtime; determining a second power parameter of the neural network processor according to the state parameter and the first power parameter; and adjusting the power parameter of the neural network processor from the first power parameter to the second power parameter.
According to another aspect of the application, a training method for a reinforcement learning model applied to power parameter adjustment is provided, comprising: obtaining a state parameter and a power parameter of a neural network processor at runtime, wherein the state parameter and the power parameter are obtained by a coprocessor, and the state parameter is generated based on a preset load state parameter; calculating the reward value represented by the state parameter and the power parameter; storing the state parameter, the power parameter, and the corresponding reward value as an experience sample in the experience pool of the reinforcement learning model; updating the power parameter, wherein the update range of the power parameter is a given set of power parameters; and stopping training the reinforcement learning model when all of the given power parameters have corresponding reward values.
According to another aspect of the application, a power parameter adjustment apparatus is provided, comprising: a first determining module for determining a state parameter and a first power parameter of the neural network processor at runtime; a second determining module for determining a second power parameter of the neural network processor according to the state parameter and the first power parameter; and an adjustment module for adjusting the power parameter of the neural network processor from the first power parameter to the second power parameter.
According to another aspect of the application, a computer-readable storage medium is provided, the storage medium storing a computer program for executing any of the above methods.
According to another aspect of the application, an electronic device is provided, comprising: a processor; and a memory for storing instructions executable by the processor, the processor being configured to execute any of the above methods.
In the power parameter adjustment method provided by the embodiments of the present application, the state parameter and the first power parameter of the neural network processor at runtime are determined, and the second power parameter of the neural network processor is determined according to the state parameter and the first power parameter, so that the neural network processor runs in an energy-optimal state, achieving the goal of saving energy.
Detailed description of the invention
The above and other objects, features, and advantages of the present application will become more apparent from the detailed description of its embodiments with reference to the accompanying drawings. The drawings are provided for a further understanding of the embodiments of the application, constitute a part of the specification, and serve to explain the application together with its embodiments; they do not limit the application. In the drawings, identical reference numerals generally denote identical components or steps.
Fig. 1 is a system diagram of power parameter adjustment for a neural network processor to which the application is applicable.
Fig. 2 is a flow diagram of a power parameter adjustment method provided by an exemplary embodiment of the application.
Fig. 3 is a flow diagram of a power parameter adjustment method provided by another exemplary embodiment of the application.
Fig. 4 is a flow diagram of a power parameter adjustment method provided by another exemplary embodiment of the application.
Fig. 5 is a flow diagram of a power parameter adjustment method provided by another exemplary embodiment of the application.
Fig. 6 is a flow diagram of a reinforcement learning method provided by an exemplary embodiment of the application.
Fig. 7 is a flow diagram of a reinforcement learning method provided by another exemplary embodiment of the application.
Fig. 8 is a structural diagram of a power parameter adjustment apparatus provided by an exemplary embodiment of the application.
Fig. 9 is a structural diagram of a first determining module provided by an exemplary embodiment of the application.
Fig. 10 is a structural diagram of a reinforcement learning model training apparatus provided by an exemplary embodiment of the application.
Fig. 11 is a structural diagram of a reinforcement learning model training apparatus provided by another exemplary embodiment of the application.
Fig. 12 is a structural diagram of an electronic device provided by an exemplary embodiment of the application.
Specific embodiment
Hereinafter, exemplary embodiments of the application will be described in detail with reference to the drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the application, and it should be understood that the application is not limited by the exemplary embodiments described herein.
Application overview
The application can be applied to any field in which tasks are processed using a neural network processor. For example, embodiments of the application can be applied to scenarios such as image processing or speech processing. Since the application concerns a method and apparatus for adjusting the dynamic voltage and frequency of a neural network processor, the provided method and apparatus can be used in any field that employs a neural network processor.
As described above, in energy-constrained scenarios, for example when a device with a processor such as a mobile phone or a computer is not connected to a power source, dynamic voltage and frequency adjustment is usually performed on the processor in order to extend the device's runtime as much as possible. That is, the processor's voltage and frequency are adjusted according to the load and the processor's operating state, keeping the processor's power consumption low so as to reduce its energy consumption and save energy.
However, existing dynamic voltage and frequency adjustment is based on management strategies designed for CPUs, while with the development of artificial intelligence technology, neural network processors (NPUs) are increasingly widely used. The architecture of a CPU differs from that of a neural network processor, and CPU-oriented DVFS algorithms rely on software running on the CPU itself, which a highly customized NPU cannot run. Dynamic voltage and frequency adjustment methods for CPUs therefore cannot be applied directly to an NPU.
In view of the above technical problem, the basic concept of the application is to propose a power parameter adjustment method that determines the state parameter and the first power parameter of a neural network processor at runtime, determines the second power parameter of the neural network processor according to the state parameter and the first power parameter, and adjusts the power parameter of the neural network processor from the first power parameter to the second power parameter, so that the neural network processor runs in an energy-optimal state and energy is saved.
Having described the basic principle of the application, various non-limiting embodiments of the application are introduced below with reference to the drawings.
Exemplary system
Fig. 1 is a system diagram of power parameter adjustment for a neural network processor to which the application is applicable. As shown in Fig. 1, the power parameter adjustment system of the embodiments of the application comprises a neural network processor 1 and a coprocessor 2, which are communicatively connected. The coprocessor 2 determines the state parameter and the first power parameter of the neural network processor 1 at runtime, determines the second power parameter of the neural network processor 1 according to the state parameter and the first power parameter, and adjusts the power parameter accordingly.
When the neural network processor 1 runs and processes a load, the coprocessor 2 determines the second power parameter of the neural network processor 1 from its runtime state parameter and first power parameter, thereby adjusting the power parameter of the neural network processor 1 so that it runs in an energy-optimal state and energy is saved.
The load of the neural network processor 1 may include images, speech, and so on, and the state parameter may accordingly include temperature and performance parameters. The performance parameters may include operating frame rate, speech delay, and the like. During operation of the neural network processor 1, the coprocessor 2 usually performs performance scheduling for the neural network processor 1, so the performance parameters of the neural network processor 1 are available in the coprocessor 2. The temperature may be the die temperature of the neural network processor 1, which can be obtained directly from a transistor disposed at the neural network processor 1; the coprocessor 2 can also read the die temperature of the neural network processor 1 from that transistor.
Illustrative methods
Fig. 2 is a flow diagram of a power parameter adjustment method provided by an exemplary embodiment of the application. This embodiment can be applied to an electronic device and, as shown in Fig. 2, includes the following steps:
Step 210: determine the state parameter and the first power parameter of the neural network processor 1 at runtime.
In one embodiment, the state parameter can be determined based on the type of data to be processed by the neural network processor. For example, when the data type is image data, the state parameter may include the frame rate and the temperature of the neural network processor 1; when the data type is speech data, the state parameter may include the speech delay and the temperature of the neural network processor 1.
In one embodiment, the first power parameter may include voltage, current, and frequency. When the neural network processor 1 processes a load, corresponding first power parameter values are produced, that is, a voltage value, a current value, and a frequency value, and these values reflect the instantaneous power consumption of the neural network processor 1.
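For illustration only, the relationship between these values and instantaneous power consumption can be sketched as follows; the class and field names are hypothetical, not terms from the patent:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PowerParameter:
    """Hypothetical container for one DVFS operating point."""
    voltage_v: float     # supply voltage in volts
    current_a: float     # drawn current in amperes
    frequency_hz: float  # clock frequency in hertz

    def instantaneous_power_w(self) -> float:
        # Instantaneous power consumption P = V * I.
        return self.voltage_v * self.current_a

# Example: a 0.8 V / 1.5 A operating point dissipates 0.8 * 1.5 = 1.2 W.
op = PowerParameter(voltage_v=0.8, current_a=1.5, frequency_hz=800e6)
```

Lowering voltage and frequency together reduces both terms of the energy-over-time product, which is why DVFS adjusts them jointly rather than frequency alone.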
When the neural network processor 1 processes a load, for example image data, it processes the images according to the processing frame rate and image parameters (including frame rate and resolution), producing specific values of the corresponding state parameter, including temperature and frame-rate values. When processing speech data, the neural network processor 1 processes the speech according to the processing delay and so on, producing specific values of the corresponding state parameter, including the delay and the temperature of the neural network processor. The state parameter of the neural network processor thus reflects the operating state of the neural network processor 1, so its operating state can be determined from the runtime state parameter.
Step 220: determine the second power parameter of the neural network processor 1 according to the state parameter and the first power parameter.
In one embodiment, the second power parameter may include voltage, current, and frequency. When the current state parameter and the first power parameter indicate that the neural network processor 1 is not running in the optimal state, the second power parameter of the neural network processor 1 is determined from the current state parameter and the first power parameter. The second power parameter and the corresponding state parameter represent the optimal operating state of the neural network processor 1, which can be the state in which, on the premise that the load requirements are met, the output power of the neural network processor 1 is minimal and the die temperature is below a preset temperature threshold.
Step 230: adjust the power parameter of the neural network processor 1 from the first power parameter to the second power parameter.
After the second power parameter of the neural network processor 1 has been determined, the power parameter of the neural network processor 1 is adjusted to the second power parameter, so that the neural network processor 1 runs in the optimal operating state.
In the power parameter adjustment method provided by the embodiments of the application, the second power parameter of the neural network processor is determined from the state parameter and the first power parameter, and the power parameter of the neural network processor is adjusted to the second power parameter. Because the second power parameter comprehensively takes into account the state parameter and the power parameter of the neural network processor itself, adjusting the power parameter from the first power parameter to the second power parameter allows the neural network processor to run in an energy-optimal state, achieving the goal of saving energy.
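For illustration only, the control flow of steps 210-230 can be sketched as one adjustment iteration; the reading, selecting, and applying functions are hypothetical stand-ins for the coprocessor interfaces, not part of the patent:

```python
def adjust_power_parameter(read_state, read_power, select_power, apply_power):
    """One iteration of steps 210-230, with the hardware accesses injected
    as callables so the sketch stays self-contained."""
    state = read_state()            # step 210: runtime state parameter
    first_power = read_power()      # step 210: first power parameter
    second_power = select_power(state, first_power)  # step 220
    if second_power != first_power:
        apply_power(second_power)   # step 230: adjust to second power parameter
    return second_power

# Minimal usage with stubbed interfaces:
applied = []
result = adjust_power_parameter(
    read_state=lambda: {"temperature": 55, "frame_rate": 30},
    read_power=lambda: ("0.9V", "900MHz"),
    select_power=lambda state, power: ("0.8V", "800MHz"),
    apply_power=applied.append,
)
```

The guard around `apply_power` reflects that no adjustment is needed when the processor is already at the selected operating point.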
Fig. 3 is a flow diagram of a power parameter adjustment method provided by another exemplary embodiment of the application. As shown in Fig. 3, step 220 may include the following sub-steps:
Step 221: input the state parameter and the first power parameter into a trained reinforcement learning model.
In reinforcement learning, an agent learns by trial and error, guided by the rewards it obtains from interacting with the environment; the goal is for the agent to obtain the maximum reward. The reinforcement signal provided by the environment evaluates the quality of the actions taken, rather than telling the reinforcement learning model how to produce correct actions. Because the external environment provides little information, the model must learn from its own experience; in this way the reinforcement learning model acquires knowledge in an action-evaluation environment and improves its action scheme to adapt to the environment. The reinforcement learning model in the embodiments of the application can be a Q-learning model, a Deep Q-learning model, a Sarsa model, a Policy Gradients model, or the like.
Step 222: calculate the second power parameter of the neural network processor 1 by the reinforcement learning model.
In this embodiment, the power parameter with the highest reward output by the reinforcement learning model is chosen as the second power parameter. Using a reinforcement learning model makes obtaining the second power parameter simple, avoiding complicated calculation formulas or logical operations.
Fig. 4 is a flow diagram of a power parameter adjustment method provided by another exemplary embodiment of the application. As shown in Fig. 4, step 222 may include the following sub-steps:
Step 2221: calculate, by the reinforcement learning model, at least one reward value corresponding to all power parameters of the neural network processor.
In this embodiment, the reward value can be a value characterizing the energy consumed while the neural network processor runs, representing the power consumption the neural network processor expends; that is, the reward value is negatively correlated with energy consumption.
Step 2222: determine the highest reward value from the at least one reward value.
Step 2223: determine the power parameter corresponding to the highest reward value as the second power parameter of the neural network processor.
In this embodiment, the trained reinforcement learning model calculates at least one reward value corresponding to all power parameters of the neural network processor, and the power parameter corresponding to the highest reward value is chosen from among them as the second power parameter of the neural network processor, so that the neural network processor runs in the optimal operating state.
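Steps 2221-2223 amount to an argmax over candidate power parameters. A minimal sketch using a tabular Q-function follows; the table contents are hypothetical values for illustration, not data from the patent:

```python
def select_second_power_parameter(q_table, state, candidates):
    """Steps 2221-2223: score every candidate power parameter for the
    current state, then return the highest-reward one."""
    rewards = {p: q_table[(state, p)] for p in candidates}  # step 2221
    best = max(rewards, key=rewards.get)                    # steps 2222-2223
    return best, rewards[best]

# Hypothetical Q-table: reward is negatively correlated with energy use,
# and an operating point too slow for the load is penalized.
q = {
    ("image_30fps", ("0.9V", "900MHz")): -1.4,
    ("image_30fps", ("0.8V", "800MHz")): -1.1,
    ("image_30fps", ("0.7V", "600MHz")): -2.0,  # misses frame deadline
}
best, reward = select_second_power_parameter(
    q, "image_30fps",
    [("0.9V", "900MHz"), ("0.8V", "800MHz"), ("0.7V", "600MHz")],
)
```

Because the reward encodes both energy cost and load satisfaction, the argmax picks the lowest-power point that still meets the load, matching the optimal operating state defined above.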
Fig. 5 is a flow diagram of a power parameter adjustment method provided by another exemplary embodiment of the application. As shown in Fig. 5, step 210 may include the following sub-steps:
Step 211: determine the type of data to be processed by the neural network processor 1 at runtime.
In one embodiment, the data type to be processed by the neural network processor 1 may include image data, speech data, and so on.
Step 212: determine the state parameter of the neural network processor 1 based on the data type.
The state parameter of the neural network processor 1 is determined according to the data type handled by the neural network processor 1. For example, when the data type handled by the neural network processor 1 is image data, the corresponding state parameter may include the die temperature, operating frame rate, and performance parameters of the neural network processor 1; when the data type handled by the neural network processor 1 is speech data, the corresponding state parameter may include the die temperature of the neural network processor 1 and the speech delay.
It should be appreciated that embodiments of the application can choose different state parameters for different data types. As long as the chosen state parameters reflect the operating state of the neural network processor, the embodiments of the application place no limitation on the specific content of the state parameters.
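For illustration only, the data-type-dependent selection of state parameters in steps 211-212 can be sketched as a lookup; the parameter names are hypothetical labels for the quantities mentioned in the text:

```python
# Hypothetical mapping from load data type to the state parameters tracked.
STATE_PARAMETERS = {
    "image":  ("die_temperature", "frame_rate", "performance"),
    "speech": ("die_temperature", "speech_delay"),
}

def state_parameters_for(data_type: str):
    """Steps 211-212: choose the state parameters for the data type,
    rejecting types this sketch does not model."""
    try:
        return STATE_PARAMETERS[data_type]
    except KeyError:
        raise ValueError(f"unsupported data type: {data_type}")
```

Extending the table with a new data type requires only a new entry, reflecting the statement that any state parameters reflecting the operating state are acceptable.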
Fig. 6 is a flow diagram of a reinforcement learning method provided by an exemplary embodiment of the application. As shown in Fig. 6, the reinforcement learning method may include the following steps:
Step 510: obtain the state parameter and the power parameter of the neural network processor at runtime, wherein the state parameter and the power parameter are obtained by the coprocessor, and the state parameter is generated based on a preset load state parameter.
The training of the reinforcement learning model in this embodiment can be performed in a training module (such as a server). During training, the coprocessor 2 can send the state parameter of the neural network processor 1 to the training module for the training module to train the reinforcement learning model. After training is completed in the training module, the reinforcement learning model is sent to the coprocessor 2, and the coprocessor 2 carries out the execution of the reinforcement learning model.
Step 520: calculate the reward value represented by the state parameter and the power parameter.
Reinforcement learning is a machine learning paradigm that does not require data samples to be provided at the start of training. Its core idea is that the model being trained learns by trial and error, obtains rewards by interacting with the environment, and uses those rewards to guide its actions; the goal of reinforcement learning is for the trained model to obtain the maximum reward.
Step 530: store the state parameter, the power parameter, and the corresponding reward value as an experience sample in the experience pool.
In the embodiments of the application, the power parameter output by each learning step is converted into a reward value by calculation, and the state parameter, power parameter, and reward value are stored as an experience sample in the experience pool of the reinforcement learning model. At execution time, according to the different state parameters produced by the operating load of the neural network processor, the power parameter with the highest corresponding reward value is chosen from the experience pool as the model output. The load determines the current operating state of the neural network processor, and according to that operating state the power parameter with the highest reward value is chosen from the experience pool as the optimal power parameter of the neural network processor, so that the neural network processor runs in an energy-optimal state.
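A minimal sketch of the experience pool and the execution-time lookup described above; the data layout is an assumption for illustration, not taken from the patent:

```python
from collections import defaultdict

class ExperiencePool:
    """Stores (state, power parameter, reward) samples and answers
    'best power parameter for this state' queries (step 530)."""

    def __init__(self):
        self._samples = defaultdict(dict)  # state -> {power parameter: reward}

    def store(self, state, power, reward):
        self._samples[state][power] = reward

    def best_power(self, state):
        # Execution time: highest-reward power parameter for this state.
        rewards = self._samples[state]
        return max(rewards, key=rewards.get)

pool = ExperiencePool()
pool.store("image_30fps", ("0.9V", "900MHz"), -1.4)
pool.store("image_30fps", ("0.8V", "800MHz"), -1.1)
```

Keying samples by state keeps the execution-time query a dictionary lookup plus an argmax, which is consistent with the text's claim that complicated formulas are avoided at runtime.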
Step 540: update the power parameter, wherein the update range of the power parameter is a given set of power parameters.
The power parameter is updated in turn through the values of the given power parameters, so as to calculate the reward value corresponding to every given power parameter, providing enough experience samples to satisfy the needs of execution.
Step 550: stop training the reinforcement learning model when all of the given power parameters have corresponding reward values.
In actual operation, a processor (including a neural network processor) can only choose among a limited set of power parameters; that is, the processor's power parameters are discrete rather than continuously variable. Therefore, when processing a load, the theoretically optimal power parameter value is not necessarily among the given power parameters, and in practice only the most nearly optimal power parameter among the given ones can be chosen. Because of this particularity, when training the reinforcement learning model, its generalization ability needs to be improved as much as possible; that is, the experience pool of the reinforcement learning model needs to contain all samples that may be involved during execution, so that every state parameter and power parameter produced at execution time has a corresponding experience sample in the pool. Therefore, training of the reinforcement learning model stops after the reward values corresponding to all of the given power parameters have been calculated.
In the embodiments of the application, reward values are calculated, and the state parameter, power parameter, and corresponding reward value are stored as experience samples in the experience pool. In actual execution, the power parameter with the highest reward value is chosen as the optimal power parameter of the neural network processor, so that the second power parameter of the neural network processor is determined from its current state parameter and first power parameter, the neural network processor runs in an energy-optimal state, and the goal of saving energy is achieved.
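For illustration only, steps 510-550 can be combined into a training-loop sketch; the environment interaction is stubbed out with injected callables, so this shows only the control flow, not the patent's actual implementation:

```python
def train(given_powers, observe, reward_of):
    """Steps 510-550: sweep the given power parameters, record one
    experience sample per parameter, and stop once all are covered."""
    experience_pool = []
    for power in given_powers:          # step 540: update range
        state = observe(power)          # step 510: runtime state parameter
        reward = reward_of(state, power)            # step 520
        experience_pool.append((state, power, reward))  # step 530
    # Step 550: every given power parameter now has a reward value.
    return experience_pool

pool = train(
    given_powers=[("0.7V", "600MHz"), ("0.8V", "800MHz"), ("0.9V", "900MHz")],
    observe=lambda power: {"temperature": 50},
    reward_of=lambda state, power: -float(power[0].rstrip("V")),  # toy reward
)
```

The loop terminates exactly when the sweep is complete, which is the stopping condition of step 550; the threshold-based variants described below would add an early-exit check inside the loop.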
In one embodiment, the application can also fit a continuously varying trend to the discrete power parameters, obtaining a correspondence between continuous reward values and power parameters. When the theoretically optimal power parameter for an operating load does not lie within the given power parameter range, the power parameter within the range closest to the theoretical optimum can be chosen as the optimal power parameter.
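A sketch of that nearest-candidate fallback, treating the power parameter as a scalar frequency for simplicity (an assumption for illustration; the patent's power parameters also include voltage and current):

```python
def nearest_power(candidates_mhz, optimum_mhz):
    """Choose the given discrete power parameter closest to the
    theoretically optimal value when the optimum itself is unavailable."""
    return min(candidates_mhz, key=lambda f: abs(f - optimum_mhz))

# The fitted continuous model suggests 770 MHz; only discrete steps exist.
chosen = nearest_power([600, 800, 900], 770)
```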
In one embodiment, during training of the reinforcement learning model, experience samples whose reward value for a given state parameter is low, or not the highest, can also be deleted, retaining only the optimal experience samples in the experience pool; this improves adjustment efficiency during execution.
In one embodiment, the condition for stopping training the reinforcement learning model may also be that the calculated reward value is greater than a first preset reward threshold. With the first preset reward threshold set, after the reward value represented by a certain power parameter is calculated, it is judged whether the optimum has been reached; when the judgment is that the reward value represented by that power parameter has reached the optimum, training can stop, simplifying the training process; otherwise, training of the reinforcement learning model continues.
In one embodiment, the training method of the above reinforcement learning model may also include: when the reward values represented by all of the given power parameters are less than or equal to the first preset reward threshold, choosing the training control information corresponding to the maximum reward as the output of the reinforcement learning model, and stopping training of the reinforcement learning model.
It should be appreciated that embodiments of the application can choose different stopping conditions for different application scenarios. As long as the chosen stopping condition allows the reinforcement learning model to be trained, it is acceptable; the stopping conditions in the embodiments of the application include but are not limited to any number of the above conditions.
Fig. 7 is a flow diagram of a reinforcement learning method provided by another exemplary embodiment of the application. As shown in Fig. 7, the reinforcement learning model training method of the embodiments of the application may also include the following steps:
Step 610: obtain combined load state parameters by permuting and combining the given preset load state parameters.
Specifically, the preset load state parameters may include any one or more of the following in combination: number of loads, load resolution, load type, load delay, load frame rate, and the number and type of network layers needed to process the load. In further embodiments, the load resolution may include 1080p or 720p; and/or the load frame rate may include 10, 24, or 30 frames per second; and/or the load type may include detection, tracking, or recognition.
To obtain more load state parameters, embodiments of the present application may apply permutation and combination to all given load state parameters to obtain new combined load state parameters. For example, the given load state parameters include a first load state parameter and a second load state parameter, where the first load state parameter is a detection task on a single image at a resolution of 720P and a frame rate of 10 frames per second, and the second load state parameter is a recognition task on two images at a resolution of 1080P and a frame rate of 24 frames per second. After performing reinforcement learning for the first load state parameter and the second load state parameter respectively, combined load state parameters can be obtained from the first and second load state parameters by way of permutation and combination. These combined load state parameters include: a detection task on a single image at a resolution of 720P and a frame rate of 24 frames per second; a detection task on a single image at a resolution of 1080P and a frame rate of 10 frames per second; a recognition task on two images at a resolution of 720P and a frame rate of 10 frames per second; and so on.
Step 620: obtaining state parameters generated based on the combined load state parameters.
Step 630: training the reinforcement learning model based on the generated state parameters.
The acquired new combined load state parameters are input to the neural network processor to generate new state parameters, and the reinforcement learning model is further trained with these new state parameters, thereby improving the application range and accuracy of the reinforcement learning model.
It should be understood that only two given load state parameters are used as an example in the embodiment of the present application. In practical applications, there may be two or more given load state parameters, and the exemplary description above enumerates only some of the combined load state parameters. Embodiments of the present application can obtain a large number of combined load state parameters from the given load state parameters to train the reinforcement learning model, so that as many load state parameters as possible are obtained even when the given load state parameters are limited, thereby maximizing the application range of the reinforcement learning model while also improving its accuracy.
In one embodiment, a specific implementation of the above step 520 may include: calculating the revenue value according to the difference between the state parameter and the minimum state parameter required for processing the load, and according to the power parameter, where the difference between the state parameter and the minimum state parameter required for processing the load is negatively correlated with the revenue value, and the power parameter is negatively correlated with the revenue value. The revenue value may be updated by calculation from the state parameter and the power parameter, where the revenue value is negatively correlated with the degree to which the state parameter exceeds the processing needs of the load (i.e., the difference between the state parameter and the minimum state parameter required for processing the load), and the revenue value is negatively correlated with the power parameter. That is, the more the state parameter exceeds what the load needs, and the larger the power parameter, the smaller the revenue value. Therefore, to maximize the revenue value, the amount by which the state parameter exceeds the load needs should be as small as possible, and the power parameter should be as small as possible.
In one embodiment, the specific implementation of the above step 520 may further include: weighting the difference between the state parameter and the minimum state parameter required for processing the load, where the weighted difference is negatively correlated with the revenue value; and/or weighting the power parameter, where the weighted power parameter is negatively correlated with the revenue value. According to the importance of the state parameter and the power parameter to the output of the reinforcement learning model, the state parameter and/or the power parameter can be weighted, and the revenue value is calculated from the weighted state parameter and power parameter, thereby changing the degree of influence of the state parameter and the power parameter on the revenue value and accelerating the convergence of the reinforcement learning model.
In one embodiment, when the load is image data, a specific implementation formula for the above step 520 may be: R(a) = -((a - a0) * β + P), where R is the revenue value, a is the operating frame rate, a0 is the load frame rate, β is a weighting coefficient, and P is the operating power of the neural network processor. With this formula, the revenue value can be calculated directly from the degree to which the state parameter meets the processing needs of the load (i.e., the difference between the operating frame rate and the load frame rate) and the power parameter (i.e., the operating power of the neural network processor).
It should be understood that embodiments of the present application may select different weighting coefficients according to different application scenarios. It can be seen that the weighting coefficient of the operating power in the above formula is 1; of course, the weighting coefficient of the operating power may also be selected as needed, and the present application places no limitation on the weighting coefficients in the above formula. It should also be understood that the above formula is an exemplary way of calculating the revenue value given by the embodiment of the present application; embodiments of the present application may select other formulas to calculate the revenue value, and the present application places no limitation on the formula for calculating the revenue value.
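The exemplary formula R(a) = -((a - a0) * β + P) can be written directly as a small function; as in the text, the weighting coefficient of the operating power is taken as 1, and the numeric arguments used below are illustrative only:

```python
def revenue(a: float, a0: float, power: float, beta: float = 1.0) -> float:
    """Revenue value R(a) = -((a - a0) * beta + power), where a is the
    operating frame rate, a0 the load frame rate, beta the weighting
    coefficient of the frame-rate difference, and power the operating
    power P of the neural network processor (power weighting taken as 1)."""
    return -((a - a0) * beta + power)
```

For example, at an operating frame rate of 30 with a load frame rate of 24 and an operating power of 5, the revenue value is -11; lowering either the operating power or the excess frame rate raises the revenue value, matching the negative correlations described above.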
Exemplary Apparatus
The present application provides an apparatus for adjusting the power parameter of a neural network processor, for implementing the above method for adjusting the power parameter of a neural network processor.
Fig. 8 is a structural diagram of the power parameter adjustment apparatus provided by an exemplary embodiment of the present application. As shown in Fig. 8, the adjustment apparatus includes: a first determining module 21 for determining the state parameter and the first power parameter of the neural network processor 1 at runtime; a second determining module 22 for determining the second power parameter of the neural network processor 1 according to the state parameter and the first power parameter; and an adjusting module 11 for adjusting the power parameter of the neural network processor 1 from the first power parameter to the second power parameter.
In the power parameter adjustment apparatus provided by the embodiment of the present application, the first determining module determines the state parameter and the first power parameter of the neural network processor at runtime, the second determining module determines the second power parameter of the neural network processor according to the state parameter and the first power parameter, and the adjusting module adjusts the power parameter of the neural network processor to the second power parameter, so that the neural network processor operates in an optimal energy consumption state and the purpose of saving energy is achieved.
In one embodiment, the first determining module 21 and the second determining module 22 may be provided in a coprocessor 2, where the coprocessor 2 is communicatively connected to the neural network processor 1 and is used to assist in adjusting the power parameter of the neural network processor 1 at runtime.
In one embodiment, the adjusting module 11 may be provided in the neural network processor 1, and is used to adjust the power parameter of the neural network processor 1 from the first power parameter to the second power parameter after the second power parameter of the neural network processor 1 is determined.
In one embodiment, the second determining module 22 may be configured to: input the state parameter and the first power parameter into a trained reinforcement learning model, and calculate the second power parameter of the neural network processor 1 through the reinforcement learning model. By providing the reinforcement learning model, the second power parameter can be obtained simply, avoiding complicated calculation formulas or logical operations.
Fig. 9 is a structural diagram of the first determining module provided by an exemplary embodiment of the present application. As shown in Fig. 9, the first determining module 21 may include:
a task determining submodule 211 for determining the data type processed by the neural network processor 1 at runtime; and
a state parameter determining submodule 212 for determining the state parameter of the neural network processor 1 based on the data type.
The state parameter of the neural network processor 1 is determined according to the data type processed by the neural network processor 1. For example, when the data type processed by the neural network processor 1 is image processing, the corresponding state parameters may include the die temperature, operating frame rate, operating voltage, operating current, and performance parameters of the neural network processor 1; when the data type processed by the neural network processor 1 is speech processing, the corresponding state parameters may include the die temperature, operating voltage, operating current, and speech delay of the neural network processor 1.
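The data-type-dependent choice of state parameters in this example can be sketched as a simple lookup; the parameter names below are illustrative assumptions, not terms defined by the application:

```python
# Illustrative mapping from the processed data type to the state
# parameters listed in the example above; names are assumptions.
STATE_PARAMETERS = {
    "image": ["die_temperature", "operating_frame_rate",
              "operating_voltage", "operating_current", "performance"],
    "speech": ["die_temperature", "operating_voltage",
               "operating_current", "speech_delay"],
}

def state_parameters_for(data_type: str) -> list:
    """Return the state parameters relevant to the given data type."""
    return STATE_PARAMETERS[data_type]
```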
Fig. 10 is a structural diagram of the reinforcement learning model training apparatus provided by an exemplary embodiment of the present application. As shown in Fig. 10, the apparatus includes:
an obtaining module 31 for obtaining the state parameter and the power parameter of the neural network processor at runtime, where the state parameter and the power parameter are obtained by the coprocessor, and the state parameter is generated based on preset load state parameters;
a calculating module 32 for calculating the revenue value represented by the state parameter and the power parameter;
a sample establishing module 33 for storing the state parameter, the power parameter, and the corresponding revenue value as experience samples in the experience pool. By calculating revenue values, the power parameter output by each learning step is converted into a revenue value, and the state parameter, power parameter, and revenue value serve as experience samples in the experience pool of the reinforcement learning model. At execution time, according to the different state parameters produced by the operating load of the neural network processor, the power parameter with the highest corresponding revenue value in the experience pool is selected as the output of the model;
an updating module 34 for updating the power parameter, where the update range of the power parameter is the given plurality of power parameters. The power parameter is updated successively through the values of the given power parameters, so that the revenue values corresponding to all given power parameters are calculated, providing enough experience samples to meet the needs at execution time; and
a stopping module 35 for stopping training of the reinforcement learning model when all of the given plurality of power parameters have corresponding revenue values. When training the reinforcement learning model, the generalization ability of the model should be improved as much as possible; that is, the experience pool of the reinforcement learning model should contain all samples that may be involved during execution, so that the state parameters and power parameters generated at execution time all have corresponding experience samples in the experience pool of the model. Therefore, after the corresponding revenue values have been calculated for each of the given plurality of power parameters, training of the reinforcement learning model is stopped.
In the embodiment of the present application, revenue values are calculated, and the state parameter, the power parameter, and the corresponding revenue value are stored as experience samples in the experience pool. During actual execution, the power parameter with the highest revenue value is selected as the optimal power parameter of the neural network processor, so that the second power parameter of the neural network processor is determined according to the current state parameter and the first power parameter of the neural network processor. In this way, the neural network processor operates in an optimal energy consumption state, and the purpose of saving energy is achieved.
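A minimal sketch of the experience-pool behavior described above, assuming states are discretized into hashable keys (an illustration, not the claimed apparatus): samples of (state parameter, power parameter, revenue value) are stored, and at execution time the power parameter with the highest revenue value for the current state is selected:

```python
# Illustrative experience pool; the state key encoding is an assumption.
from collections import defaultdict

class ExperiencePool:
    def __init__(self):
        # state -> {power_parameter: revenue_value}
        self._pool = defaultdict(dict)

    def store(self, state, power, revenue):
        """Store one experience sample (state, power, revenue)."""
        self._pool[state][power] = revenue

    def best_power(self, state):
        """At execution time, return the power parameter with the
        highest revenue value for the given state parameter."""
        samples = self._pool[state]
        return max(samples, key=samples.get)
```

For example, after storing samples for two candidate power parameters under the same state, `best_power` returns the one whose revenue value is larger (i.e., the lower-power setting that still meets the load needs).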
In one embodiment, the stopping module 35 may be configured to stop training when a calculated revenue value is greater than the first preset revenue threshold. By setting the first preset revenue threshold, after the revenue value represented by a certain power parameter is calculated, it is judged whether an optimum has been reached; when the judgment result is that the revenue value represented by that power parameter has reached the optimum, training can be stopped, which simplifies the training process. Otherwise, training of the reinforcement learning model continues.
In one embodiment, the stopping module 35 may be further configured to: when the revenue values represented by all given power parameters are less than or equal to the first preset revenue threshold, select the training control information corresponding to the maximum revenue as the output of the reinforcement learning model, and stop training of the reinforcement learning model.
Fig. 11 is a structural diagram of the reinforcement learning model training apparatus provided by another exemplary embodiment of the present application. As shown in Fig. 11, the training apparatus 3 may further include:
a load combining module 36 for obtaining combined load state parameters from all given preset load state parameters by way of permutation and combination. The obtaining module 31 then obtains the state parameters and power parameters generated based on the combined load state parameters for training.
The acquired new combined load state parameters are input to the neural network processor to generate new state parameters and power parameters, and the reinforcement learning model is further trained with these new state parameters and power parameters, which can improve the application range and accuracy of the reinforcement learning model.
In one embodiment, the updating module 34 may be configured to: calculate the revenue value according to the difference between the state parameter and the minimum state parameter required for processing the load, and according to the power parameter, where the difference between the state parameter and the minimum state parameter required for processing the load is negatively correlated with the revenue value, and the power parameter is negatively correlated with the revenue value. The revenue value may be updated by calculation from the state parameter and the power parameter, where the revenue value is negatively correlated with the degree to which the state parameter exceeds the processing needs of the load (i.e., the difference between the state parameter and the minimum state parameter required for processing the load), and is negatively correlated with the power parameter. That is, the more the state parameter exceeds what the load needs, and the larger the power parameter, the smaller the revenue value. Therefore, to maximize the revenue value, the amount by which the state parameter exceeds the load needs should be as small as possible, and the power parameter should be as small as possible.
In one embodiment, the updating module 34 may also be configured to: weight the difference between the state parameter and the minimum state parameter required for processing the load, where the weighted difference is negatively correlated with the revenue value; and/or weight the power parameter, where the weighted power parameter is negatively correlated with the revenue value. According to the importance of the state parameter and the power parameter to the output of the reinforcement learning model, the state parameter and/or the power parameter can be weighted, and the revenue value is calculated from the weighted state parameter and power parameter, thereby changing the degree of influence of the state parameter and the power parameter on the revenue value and accelerating the convergence of the reinforcement learning model.
In one embodiment, when the load is an image, the updating module 34 may be configured with: R(a) = -((a - a0) * β + P), where R is the feedback variable of the reward function, a is the operating frame rate, a0 is the load frame rate, β is a weighting coefficient, and P is the operating power of the neural network processor. With this formula, the revenue value can be calculated directly from the degree to which the state parameter meets the processing needs of the load (i.e., the difference between the operating frame rate and the load frame rate) and the power parameter (i.e., the operating power of the neural network processor).
Example electronic device
Fig. 12 illustrates a block diagram of an electronic device according to an embodiment of the present application. It should be noted that when the electronic device executes the method flows of the embodiments shown in Figs. 2 to 5 above, it may be an electronic device such as a camera, a recording device, or a smart device. When the electronic device executes the method flows of the embodiments shown in Figs. 6 to 7 above, it may be an electronic device such as a server used by technical staff to train the reinforcement learning model.
As shown in Fig. 12, the electronic device 11 includes one or more processors 111 and a memory 112.
The processor 111 may be a central processing unit (CPU) or another form of processing unit with data processing capability and/or instruction execution capability, and may control other components in the electronic device 11 to perform desired functions.
The memory 112 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or nonvolatile memory. The volatile memory may include, for example, random access memory (RAM) and/or cache memory. The nonvolatile memory may include, for example, read-only memory (ROM), a hard disk, flash memory, and the like. One or more computer program instructions may be stored on the computer-readable storage medium, and the processor 111 may run the program instructions to implement the power parameter adjustment method or the reinforcement learning model training method of the embodiments of the present application described above, and/or other desired functions. Various contents such as input signals, signal components, and noise components may also be stored in the computer-readable storage medium.
In one example, the electronic device 11 may also include an input device 113 and an output device 114, which are interconnected through a bus system and/or other forms of connection mechanisms (not shown).
For example, the input device 113 may be the above-mentioned camera, or a microphone, a microphone array, or the like, for capturing an input signal of an image or a sound source. When the electronic device is a stand-alone device, the input device 113 may be a communication network connector for receiving collected input signals from the neural network processor.
In addition, the input device 113 may also include, for example, a keyboard, a mouse, and the like.
The output device 114 may output various information to the outside, including the determined output voltage and output current information. The output device 114 may include, for example, a display, a loudspeaker, a printer, a communication network, and remote output devices connected thereto.
Of course, for simplicity, only some of the components of the electronic device 11 related to the present application are shown in Fig. 12; components such as buses and input/output interfaces are omitted. In addition, depending on the specific application, the electronic device 11 may also include any other appropriate components.
Exemplary Computer Program Product and Computer-Readable Storage Medium
In addition to the above methods and devices, embodiments of the present application may also be a computer program product comprising computer program instructions which, when run by a processor, cause the processor to execute the steps in the power parameter adjustment method of the embodiments shown in Figs. 2 to 5 of the present application, or the reinforcement learning model training method of the embodiments shown in Figs. 6 to 7, as described in the "Exemplary Methods" section of this specification.
The computer program product may include program code, written in any combination of one or more programming languages, for performing the operations of the embodiments of the present application. The programming languages include object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may be executed entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on a remote computing device or server.
In addition, embodiments of the present application may also be a computer-readable storage medium having computer program instructions stored thereon which, when run by a processor, cause the processor to execute the steps in the power parameter adjustment method or the reinforcement learning model training method of the various embodiments of the present application described in the "Exemplary Methods" section of this specification.
The computer-readable storage medium may adopt any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium may include, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection with one or more wires, a portable disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), an optical fiber, portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
The basic principles of the present application have been described above in conjunction with specific embodiments. However, it should be pointed out that the advantages, merits, effects, and the like mentioned in the present application are merely examples and not limitations; it must not be assumed that these advantages, merits, and effects are essential to every embodiment of the present application. In addition, the specific details disclosed above are only for the purpose of illustration and ease of understanding, and are not limiting; the above details do not limit the present application to being implemented using those specific details.
The block diagrams of the devices, apparatuses, equipment, and systems involved in the present application are merely illustrative examples and are not intended to require or imply that connections, arrangements, and configurations must be made in the manner shown in the block diagrams. As those skilled in the art will appreciate, these devices, apparatuses, equipment, and systems may be connected, arranged, and configured in any manner. Words such as "include", "comprise", and "have" are open-ended terms that mean "including but not limited to" and can be used interchangeably therewith. The words "or" and "and" as used herein refer to "and/or" and can be used interchangeably therewith, unless the context clearly indicates otherwise. The word "such as" as used herein refers to the phrase "such as, but not limited to" and can be used interchangeably therewith.
It should also be noted that, in the devices, apparatuses, and methods of the present application, each component or each step can be decomposed and/or recombined. Such decompositions and/or recombinations should be regarded as equivalent solutions of the present application.
The above description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present application. Various modifications to these aspects are readily apparent to those skilled in the art, and the general principles defined herein can be applied to other aspects without departing from the scope of the present application. Therefore, the present application is not intended to be limited to the aspects shown herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The above description has been presented for the purposes of illustration and description. Furthermore, this description is not intended to restrict the embodiments of the present application to the forms disclosed herein. Although a number of exemplary aspects and embodiments have been discussed above, those skilled in the art will recognize certain variations, modifications, changes, additions, and sub-combinations thereof.
Claims (12)
1. A method for adjusting a power parameter, comprising:
determining a state parameter and a first power parameter of a neural network processor at runtime;
determining a second power parameter of the neural network processor according to the state parameter and the first power parameter; and
adjusting the power parameter of the neural network processor from the first power parameter to the second power parameter.
2. The method according to claim 1, wherein the determining the second power parameter of the neural network processor according to the state parameter and the first power parameter comprises:
inputting the state parameter and the first power parameter into a trained reinforcement learning model; and
calculating the second power parameter of the neural network processor through the reinforcement learning model.
3. The method according to claim 2, wherein the method further comprises:
sending the state parameter and the power parameter of the neural network processor to a training module, for the training module to train the reinforcement learning model.
4. The method according to claim 2, wherein the calculating the second power parameter of the neural network processor through the reinforcement learning model comprises:
calculating, through the reinforcement learning model, at least one revenue value corresponding to all power parameters of the neural network processor;
determining a highest revenue value from the at least one revenue value; and
determining the power parameter corresponding to the highest revenue value as the second power parameter of the neural network processor.
5. The method according to claim 1, wherein the determining the state parameter of the neural network processor at runtime comprises:
determining a data type to be processed by the neural network processor at runtime; and
determining the state parameter of the neural network processor based on the data type.
6. A training method of a reinforcement learning model applied to power parameter adjustment, comprising:
obtaining a state parameter and a power parameter of a neural network processor at runtime, wherein the state parameter and the power parameter are obtained by a coprocessor, and the state parameter is generated based on preset load state parameters;
calculating a revenue value represented by the state parameter and the power parameter;
storing the state parameter, the power parameter, and the corresponding revenue value as an experience sample in an experience pool of the reinforcement learning model;
updating the power parameter, wherein an update range of the power parameter is a given plurality of power parameters; and
stopping training of the reinforcement learning model when all of the given plurality of power parameters have corresponding revenue values.
7. The method according to claim 6, wherein the method further comprises:
obtaining combined load state parameters from the given preset load state parameters by way of permutation and combination;
obtaining state parameters generated based on the combined load state parameters; and
training the reinforcement learning model based on the generated state parameters.
8. The method according to claim 6 or 7, wherein the calculating the revenue value represented by the state parameter and the power parameter comprises:
calculating the revenue value according to a difference between the state parameter and a minimum state parameter required for processing a load, and according to the power parameter, wherein the difference is negatively correlated with the revenue value, and the power parameter is negatively correlated with the revenue value.
9. The method according to claim 8, wherein the calculating the revenue value represented by the state parameter and the power parameter further comprises:
weighting the difference, and/or weighting the power parameter; and
calculating the revenue value according to the weighted difference and/or the weighted power parameter.
10. An apparatus for adjusting a power parameter, comprising:
a first determining module for determining a state parameter and a first power parameter of a neural network processor at runtime;
a second determining module for determining a second power parameter of the neural network processor according to the state parameter and the first power parameter; and
an adjusting module for adjusting the power parameter of the neural network processor from the first power parameter to the second power parameter.
11. A computer-readable storage medium storing a computer program for executing the method of any one of claims 1 to 9.
12. An electronic device, comprising:
a processor; and
a memory for storing instructions executable by the processor;
the processor being configured to execute the method of any one of claims 1 to 9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811419611.4A CN109491494B (en) | 2018-11-26 | 2018-11-26 | Power parameter adjusting method and device and reinforcement learning model training method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811419611.4A CN109491494B (en) | 2018-11-26 | 2018-11-26 | Power parameter adjusting method and device and reinforcement learning model training method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109491494A true CN109491494A (en) | 2019-03-19 |
CN109491494B CN109491494B (en) | 2020-04-17 |
Family
ID=65696719
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811419611.4A Active CN109491494B (en) | 2018-11-26 | 2018-11-26 | Power parameter adjusting method and device and reinforcement learning model training method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109491494B (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20040123163A1 (en) * | 2002-12-20 | 2004-06-24 | Ta-Feng Huang | Device for managing electric power source of CPU |
CN103376869A (en) * | 2012-04-28 | 2013-10-30 | 华为技术有限公司 | Temperature feedback control system and method for DVFS (Dynamic Voltage Frequency Scaling) |
CN107209548A (en) * | 2015-02-13 | 2017-09-26 | 英特尔公司 | Power management is performed in polycaryon processor |
TW201807538A (en) * | 2016-08-18 | 2018-03-01 | 瑞昱半導體股份有限公司 | Voltage and frequency scaling apparatus, system on chip and voltage and frequency scaling method |
Cited By (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110147891A (en) * | 2019-05-23 | 2019-08-20 | 北京地平线机器人技术研发有限公司 | Method, apparatus and electronic equipment applied to reinforcement learning training process
CN110456644A (en) * | 2019-08-13 | 2019-11-15 | 北京地平线机器人技术研发有限公司 | Determine the method, apparatus and electronic equipment of the execution action message of automation equipment |
CN110456644B (en) * | 2019-08-13 | 2022-12-06 | 北京地平线机器人技术研发有限公司 | Method and device for determining execution action information of automation equipment and electronic equipment |
CN112529170A (en) * | 2019-09-18 | 2021-03-19 | 意法半导体股份有限公司 | Variable clock adaptation in a neural network processor |
EP3822737A3 (en) * | 2019-09-18 | 2021-07-07 | STMicroelectronics International N.V. | Variable clock adaptation in neural network processors |
US11900240B2 (en) | 2019-09-18 | 2024-02-13 | Stmicroelectronics S.R.L. | Variable clock adaptation in neural network processors |
CN110941268A (en) * | 2019-11-20 | 2020-03-31 | 苏州大学 | Unmanned automatic trolley control method based on Sarsa safety model |
CN111182549A (en) * | 2020-01-03 | 2020-05-19 | 广州大学 | Anti-interference wireless communication method based on deep reinforcement learning |
CN112016665A (en) * | 2020-10-20 | 2020-12-01 | 深圳云天励飞技术股份有限公司 | Method and device for calculating running time of neural network on processor |
CN112347584A (en) * | 2020-11-09 | 2021-02-09 | 北京三一智造科技有限公司 | Power distribution method and power distribution device |
CN112347584B (en) * | 2020-11-09 | 2024-05-28 | 北京三一智造科技有限公司 | Power distribution method and power distribution device |
WO2022151783A1 (en) * | 2021-01-18 | 2022-07-21 | 成都国科微电子有限公司 | Processor parameter adjustment method, apparatus, electronic device, and storage medium |
EP4137913A1 (en) * | 2021-08-17 | 2023-02-22 | Axis AB | Power management in processing circuitry which implements a neural network |
US11874721B2 (en) | 2021-08-17 | 2024-01-16 | Axis Ab | Power management in processing circuitry which implements a neural network |
CN117130769A (en) * | 2023-02-25 | 2023-11-28 | 荣耀终端有限公司 | Frequency modulation method, training method of frequency adjustment neural network and electronic equipment |
Also Published As
Publication number | Publication date |
---|---|
CN109491494B (en) | 2020-04-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109491494A (en) | Power parameter adjusting method and device, and reinforcement learning model training method | |
US11783227B2 (en) | Method, apparatus, device and readable medium for transfer learning in machine learning | |
CN109863537B (en) | Stylized input image | |
JP7017640B2 (en) | Learning data expansion measures | |
JP6854921B2 (en) | Multitasking neural network system with task-specific and shared policies | |
US20210350233A1 (en) | System and Method for Automated Precision Configuration for Deep Neural Networks | |
US11727265B2 (en) | Methods and apparatus to provide machine programmed creative support to a user | |
CN112200736B (en) | Image processing method based on reinforcement learning and model training method and device | |
US20220176554A1 (en) | Method and device for controlling a robot | |
CN112199477A (en) | Dialogue management scheme and dialogue management corpus construction method | |
US20220107793A1 (en) | Concept for Placing an Execution of a Computer Program | |
JP2016218513A (en) | Neural network and computer program therefor | |
US20230267307A1 (en) | Systems and Methods for Generation of Machine-Learned Multitask Models | |
WO2020164644A2 (en) | Neural network model splitting method, apparatus, computer device and storage medium | |
CN112116104A (en) | Method, apparatus, medium, and electronic device for automatically integrating machine learning | |
Wen et al. | Taso: Time and space optimization for memory-constrained DNN inference | |
JP2022165395A (en) | Method for optimizing neural network model and method for providing graphical user interface for neural network model | |
CN107544794A (en) | The treating method and apparatus of program information | |
WO2019134987A1 (en) | Parallel video processing systems | |
CN113052257A (en) | Deep reinforcement learning method and device based on visual converter | |
Chen et al. | Experiments and optimizations for TVM on RISC-V architectures with p extension | |
CN115827225A (en) | Distribution method of heterogeneous operation, model training method, device, chip, equipment and medium | |
CN113743567A (en) | Method for deploying deep learning model to acceleration unit | |
US11726544B2 (en) | Dynamic agent for multiple operators optimization | |
Banković et al. | Trading-off Accuracy vs Energy in Multicore Processors via Evolutionary Algorithms Combining Loop Perforation and Static Analysis-Based Scheduling |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||