CN109491494B - Power parameter adjusting method and device and reinforcement learning model training method


Info

Publication number
CN109491494B
Application number
CN201811419611.4A
Authority
CN (China)
Legal status
Active
Other versions
CN109491494A (Chinese)
Inventors
李江涛, 侯鹏飞
Assignee
Beijing Horizon Robotics Technology Research and Development Co Ltd
Prior art keywords
neural network, network processor, power parameter, power, parameter

Classifications

    • G06F1/3206 — Power management: monitoring of events, devices or parameters that trigger a change in power modality
    • G06F1/3234 — Power saving characterised by the action undertaken
    • G06N3/063 — Physical realisation (hardware implementation) of neural networks using electronic means


Abstract

The embodiment of the application discloses a method and a device for adjusting power parameters, and an electronic device. The method comprises the following steps: determining a state parameter and a first power parameter of a neural network processor at runtime; determining a second power parameter of the neural network processor according to the state parameter and the first power parameter; and adjusting the power parameter of the neural network processor from the first power parameter to the second power parameter. By adjusting the power parameter of the neural network processor, the neural network processor operates in the state of optimal energy consumption, thereby achieving the purpose of saving energy consumption.

Description

Power parameter adjusting method and device and reinforcement learning model training method
Technical Field
The invention relates to the technical field of processors, in particular to a method and a device for adjusting power parameters of a neural network processor, a reinforcement learning model training method and electronic equipment.
Background
At present, processors in most scenarios with power-consumption requirements, such as mobile-phone and computer CPUs and GPUs, support Dynamic Voltage and Frequency Scaling (DVFS). For a given task, the total amount of computation performed by the processor is constant, so lowering the frequency alone merely stretches the execution time; only lowering the voltage together with the frequency actually reduces the energy consumed.
In the prior art, DVFS management strategies are mostly designed for CPUs. However, with the development of artificial intelligence technology, neural network processors (NPUs) are increasingly widely applied. The architecture of a CPU is not consistent with that of a neural network processor, and a CPU-based DVFS algorithm depends on software running on the CPU itself, whereas an NPU, because of its high degree of specialization, cannot run such software. Therefore, a DVFS management method for NPUs is needed.
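The reasoning above can be illustrated with the textbook CMOS dynamic-power approximation (an illustration, not part of the patent): dynamic power is roughly P ≈ C·V²·f, so a fixed task of N cycles costs energy E = P·(N/f) = C·V²·N, in which the frequency cancels out. A minimal sketch:

```python
# Textbook CMOS dynamic-power approximation (illustrative, not from the patent):
# P_dyn ≈ C_eff * V^2 * f.  A fixed task takes N clock cycles, so its
# execution time is N / f and its energy is E = P_dyn * N / f = C_eff * V^2 * N.

def task_energy(c_eff, voltage, n_cycles):
    """Energy consumed by a fixed task; note the frequency cancels out."""
    return c_eff * voltage ** 2 * n_cycles

base  = task_energy(c_eff=1e-9, voltage=1.0, n_cycles=1e9)  # nominal point
low_v = task_energy(c_eff=1e-9, voltage=0.8, n_cycles=1e9)  # DVFS: lower V with f

# Lowering only the frequency leaves task energy unchanged, while lowering
# the voltage (which a lower frequency permits) cuts it quadratically.
print(low_v / base)  # ≈ 0.64
```

This is why DVFS couples the two knobs: the frequency reduction is what makes the voltage reduction safe, and the voltage reduction is what saves the energy.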
Disclosure of Invention
The present application is proposed to solve the above-mentioned technical problems. The embodiment of the application provides a power parameter adjusting method, which realizes DVFS management for a neural network processor (NPU).
According to an aspect of the present application, there is provided a method for adjusting a power parameter, including: determining a state parameter and a first power parameter of a neural network processor at runtime; determining a second power parameter of the neural network processor according to the state parameter and the first power parameter; and adjusting the power parameter of the neural network processor from the first power parameter to the second power parameter.
According to another aspect of the present application, there is provided a training method for a reinforcement learning model used for power parameter adjustment, including: acquiring a state parameter and a power parameter of a neural network processor during operation, wherein the state parameter and the power parameter are acquired by a coprocessor, and the state parameter is generated based on a preset load state parameter; calculating the profit value represented by the state parameter and the power parameter; storing the state parameter, the power parameter and the corresponding profit value as an experience sample into an experience pool of the reinforcement learning model; updating the power parameter, wherein the updating range of the power parameter is a given plurality of power parameters; and stopping training the reinforcement learning model when all of the given plurality of power parameters have corresponding profit values.
According to another aspect of the present application, there is provided an apparatus for adjusting a power parameter, including: a first determination module for determining a state parameter and a first power parameter of the neural network processor at runtime; a second determining module, configured to determine a second power parameter of the neural network processor according to the state parameter and the first power parameter; and an adjusting module, configured to adjust the power parameter of the neural network processor from the first power parameter to the second power parameter.
According to another aspect of the present application, there is provided a computer-readable storage medium having stored thereon a computer program for executing the method of any of the above.
According to another aspect of the present application, there is provided an electronic apparatus including: a processor; a memory for storing the processor-executable instructions; the processor is configured to perform any of the methods described above.
According to the method for adjusting the power parameters, the state parameter and the first power parameter of the neural network processor during operation are determined, and the second power parameter of the neural network processor is determined according to the state parameter and the first power parameter, so that the neural network processor operates in the state with optimal energy consumption, and the purpose of saving energy consumption is achieved.
Drawings
The above and other objects, features and advantages of the present application will become more apparent by describing in more detail embodiments of the present application with reference to the attached drawings. The accompanying drawings are included to provide a further understanding of the embodiments of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the principles of the application. In the drawings, like reference numbers generally represent like parts or steps.
FIG. 1 is a system diagram of power parameter adjustment for a neural network processor to which the present application is applicable.
Fig. 2 is a flowchart illustrating a method for adjusting a power parameter according to an exemplary embodiment of the present disclosure.
Fig. 3 is a flowchart illustrating a method for adjusting a power parameter according to another exemplary embodiment of the present application.
Fig. 4 is a flowchart illustrating a method for adjusting a power parameter according to another exemplary embodiment of the present application.
Fig. 5 is a flowchart illustrating a method for adjusting a power parameter according to another exemplary embodiment of the present application.
Fig. 6 is a flowchart illustrating a reinforcement learning method according to an exemplary embodiment of the present application.
Fig. 7 is a flowchart illustrating a reinforcement learning method according to another exemplary embodiment of the present application.
Fig. 8 is a block diagram of an apparatus for adjusting a power parameter according to an exemplary embodiment of the present application.
Fig. 9 is a block diagram of a first determination module provided in an exemplary embodiment of the present application.
Fig. 10 is a block diagram of a reinforcement learning model training apparatus according to an exemplary embodiment of the present application.
Fig. 11 is a block diagram of a reinforcement learning model training apparatus according to another exemplary embodiment of the present application.
Fig. 12 is a block diagram of an electronic device provided in an exemplary embodiment of the present application.
Detailed Description
Hereinafter, example embodiments according to the present application will be described in detail with reference to the accompanying drawings. It should be understood that the described embodiments are only some embodiments of the present application and not all embodiments of the present application, and that the present application is not limited by the example embodiments described herein.
Summary of the application
The application can be applied to any field in which a neural network processor is used for task processing. For example, the embodiments of the present application can be applied to image processing or voice processing scenarios. Since the present application is directed to a method and an apparatus for adjusting the dynamic voltage and frequency of a neural network processor, the method and the apparatus provided by the present application can be applied to any field that uses a neural network processor.
As described above, in an application scenario with limited energy, such as when a device with a processor, such as a mobile phone and a computer, does not have a power supply connected, in order to extend the operation time of the device as much as possible, the dynamic voltage frequency adjustment is usually performed on the processor, that is, the voltage and frequency of the processor are adjusted according to the load condition and the operation state of the processor, so as to ensure that the power consumption of the processor is low, thereby reducing the energy consumption of the processor and saving energy.
However, the existing dynamic voltage and frequency adjustment of processors is a management strategy designed for CPUs, and with the development of artificial intelligence technology, neural network processors (NPUs) are increasingly widely applied. The architecture of a CPU is not consistent with that of a neural network processor, and a CPU-based DVFS algorithm depends on software running on the CPU itself, whereas an NPU, because of its high degree of specialization, cannot run such software. Therefore, the method for adjusting the dynamic voltage and frequency of a CPU cannot be directly applied to an NPU.
In view of the above technical problems, a basic idea of the present application is to provide a method for adjusting a power parameter, in which a state parameter and a first power parameter of a neural network processor during operation are determined, a second power parameter of the neural network processor is determined according to the state parameter and the first power parameter, and the power parameter processed by the neural network is adjusted from the first power parameter to the second power parameter, so that the neural network processor operates in a state with optimal energy consumption, and the purpose of saving energy consumption is achieved.
Having described the general principles of the present application, various non-limiting embodiments of the present application will now be described with reference to the accompanying drawings.
Exemplary System
FIG. 1 is a system diagram of power parameter adjustment for a neural network processor to which the present application is applicable. As shown in fig. 1, the system for adjusting a power parameter of a neural network processor in an embodiment of the present application includes a neural network processor 1 and a coprocessor 2, where the neural network processor 1 is communicatively connected to the coprocessor 2. The coprocessor 2 determines the state parameter and the first power parameter of the neural network processor 1 during operation, determines the second power parameter of the neural network processor 1 according to the state parameter and the first power parameter, and adjusts the power parameter of the neural network processor 1 accordingly.
When the neural network processor 1 operates and processes a load, the coprocessor 2 determines a second power parameter of the neural network processor 1 according to the state parameter and the first power parameter of the neural network processor 1 during operation to adjust the power parameter of the neural network processor 1, so that the neural network processor 1 operates in a state with optimal energy consumption, and the purpose of saving energy consumption is achieved.
The load of the neural network processor 1 may include images, speech, and the like, and the state parameters may correspondingly include temperature and performance parameters. The performance parameters may include an operating frame rate, a voice delay time, and the like. During operation of the neural network processor 1, performance scheduling is usually implemented by the coprocessor 2, so the performance parameters of the neural network processor 1 are also available in the coprocessor 2. The temperature may be the core temperature of the neural network processor 1, which may be obtained directly from an on-chip temperature sensor disposed at the neural network processor 1, and the coprocessor 2 may read the core temperature of the neural network processor 1 from this sensor.
Exemplary method
Fig. 2 is a flowchart illustrating a method for adjusting a power parameter according to an exemplary embodiment of the present disclosure. The embodiment can be applied to an electronic device, as shown in fig. 2, and includes the following steps:
step 210: a state parameter and a first power parameter of the neural network processor 1 at runtime are determined.
In an embodiment, the state parameters may be determined based on the type of data that the neural network processor needs to process. For example, when the data type is image data, the state parameters may include the frame rate and the temperature of the neural network processor 1; when the data type is voice data, the state parameters may include the voice delay time and the temperature of the neural network processor 1.
In one embodiment, the first power parameter may include voltage, current, and frequency. When the neural network processor 1 processes the load, the corresponding first power parameter value, that is, the corresponding voltage value, current value and frequency value are generated, and the voltage value, the current value and the frequency value can reflect the instantaneous energy consumption of the neural network processor 1.
When the neural network processor 1 is processing a load, for example, when image data processing is performed, the neural network processor 1 processes an image according to a frame rate of processing the image and parameters of the image (including the frame rate, resolution, and the like of the image), so that specific values of corresponding state parameters, including a temperature value, a frame rate value, and the like, are generated; when processing voice data, the neural network processor 1 processes voice according to the delay time of processing voice, etc., thereby generating corresponding specific values of state parameters, including the delay time and the temperature value of the neural network processor, etc. The state parameter of the neural network processor can also reflect the operation state of the neural network processor 1, so that the operation state of the neural network processor 1 can be determined according to the state parameter when the neural network processor operates.
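As a concrete illustration, the runtime readings described in step 210 might be grouped as follows. This is a hypothetical sketch; the container and field names are chosen for illustration and do not come from the patent:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical containers for the step-210 runtime readings;
# all field names are illustrative, not taken from the patent.

@dataclass
class PowerParameter:
    voltage_v: float      # instantaneous supply voltage
    current_a: float      # instantaneous current draw
    frequency_hz: float   # clock frequency

@dataclass
class StateParameter:
    core_temp_c: float                      # core temperature
    frame_rate_fps: Optional[float] = None  # set for image workloads
    voice_delay_ms: Optional[float] = None  # set for voice workloads

# Example: an image workload running at 30 fps with the core at 55 °C.
state = StateParameter(core_temp_c=55.0, frame_rate_fps=30.0)
first = PowerParameter(voltage_v=0.9, current_a=1.2, frequency_hz=8e8)
```

The voltage, current, and frequency together reflect the instantaneous energy consumption; the state fields reflect the operating state for the current load type.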
Step 220: a second power parameter of the neural network processor 1 is determined from the state parameter and the first power parameter.
In one embodiment, the second power parameter may include voltage, current, and frequency. When it is determined, according to the current state parameter and the first power parameter, that the neural network processor 1 is not operating in the optimal state, the second power parameter of the neural network processor 1 is determined according to the current state parameter and the first power parameter. The second power parameter and the corresponding state parameter constitute the optimal operating state of the neural network processor 1, where the optimal operating state may be that, on the premise that the neural network processor 1 meets the load requirement, its output power is the minimum and its core temperature is less than a preset temperature threshold.
Step 230: the power parameter of the neural network processor 1 is adjusted from the first power parameter to the second power parameter.
After the second power parameter of the neural network processor 1 is determined, the power parameter of the neural network processor 1 is adjusted to the second power parameter, so that the neural network processor 1 operates in the optimal operating state.
According to the method for adjusting the power parameter described above, the second power parameter of the neural network processor is determined from the state parameter and the first power parameter, and the power parameter of the neural network processor is adjusted to the second power parameter. Because the second power parameter comprehensively considers both the state parameter and the power parameter of the neural network processor, adjusting the power parameter from the first power parameter to the second power parameter enables the neural network processor to operate in the state with optimal energy consumption, thereby achieving the purpose of saving energy consumption.
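Steps 210 to 230 together form a simple read, decide, apply cycle. A minimal sketch, in which the four helper callables are hypothetical stand-ins for the coprocessor-side implementation:

```python
def adjust_power_parameter(read_state, read_power, select_second, apply_power):
    """One adjustment cycle mirroring steps 210-230; the four helper
    callables are hypothetical stand-ins for coprocessor-side code."""
    state = read_state()                  # step 210: state parameter
    first = read_power()                  # step 210: first power parameter
    second = select_second(state, first)  # step 220: second power parameter
    if second != first:
        apply_power(second)               # step 230: apply the adjustment
    return second

# Usage with trivial stubs: the selector always proposes 600 (e.g. MHz).
applied = []
result = adjust_power_parameter(
    read_state=lambda: {"temp": 50},
    read_power=lambda: 800,
    select_second=lambda s, p: 600,
    apply_power=applied.append,
)
```

Running this cycle periodically keeps the power parameter tracking the load as the state parameters change.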
Fig. 3 is a flowchart illustrating a method for adjusting a power parameter according to another exemplary embodiment of the present application. As shown in fig. 3, step 220 may include the sub-steps of:
step 221: the state parameter and the first power parameter are input into a trained reinforcement learning model.
Reinforcement learning is learning by an agent in a "trial and error" manner: rewards obtained by interacting with the environment guide the agent's behavior, and the goal is for the agent to obtain the maximum reward. The reinforcement signals provided by the environment evaluate the quality of the generated actions rather than telling the reinforcement learning model how to generate correct actions. Because the information provided by the external environment is very limited, the reinforcement learning model must learn from its own experience; in this way, it acquires knowledge through an action-evaluation loop and improves its action scheme to adapt to the environment. The reinforcement learning model in the embodiment of the present application may be a Q-learning model, a Deep Q-learning model, a Sarsa model, a Policy Gradient model, or the like.
Step 222: the second power parameter of the neural network processor 1 is calculated by a reinforcement learning model.
In this embodiment, the power parameter with the highest profit output by the reinforcement learning model is selected as the second power parameter. Using a reinforcement learning model makes acquiring the second power parameter simple, avoiding complex calculation formulas or logical operations.
Fig. 4 is a flowchart illustrating a method for adjusting a power parameter according to another exemplary embodiment of the present application. As shown in fig. 4, step 222 may include the sub-steps of:
step 2221: calculating, through the reinforcement learning model, profit values corresponding to all power parameters of the neural network processor.
In this embodiment, the profit value may be a characteristic value corresponding to the energy consumption of the neural network processor during operation, used to represent the energy wasted by that operation; that is, the profit value is inversely related to the energy consumption.
Step 2222: a highest revenue value is determined from the at least one revenue value.
Step 2223: and determining the power parameter corresponding to the highest profit value as a second power parameter of the neural network processor.
In this embodiment, a profit value is calculated for each of the power parameters of the neural network processor through the trained reinforcement learning model, and the power parameter corresponding to the highest profit value is selected as the second power parameter of the neural network processor, so that the neural network processor operates in the optimal operating state.
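Steps 2221 to 2223 amount to an argmax over the model's profit estimates. A sketch, where `profit_fn` is a hypothetical stand-in for the trained reinforcement learning model's output:

```python
def choose_second_power_parameter(profit_fn, state, power_parameters):
    """Steps 2221-2223: score every candidate power parameter with the
    trained model and return the one with the highest profit value.
    `profit_fn(state, p)` is a hypothetical stand-in for the model."""
    profits = {p: profit_fn(state, p) for p in power_parameters}  # step 2221
    best = max(profits, key=profits.get)                          # step 2222
    return best                                                   # step 2223

# Usage: with this toy profit function, the candidate 600 scores highest.
toy_profit = lambda state, p: -abs(p - 600)
second = choose_second_power_parameter(toy_profit, state=None,
                                       power_parameters=[400, 600, 800])
```

Because the candidate set is small and discrete (see the training discussion below in the original text's sense of a limited number of power parameters), an exhaustive scan is cheap.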
Fig. 5 is a flowchart illustrating a method for adjusting a power parameter according to another exemplary embodiment of the present application. As shown in fig. 5, step 210 may include the sub-steps of:
step 211: the type of data that needs to be processed by the neural network processor 1 at run-time is determined.
In an embodiment, the type of data that the neural network processor 1 needs to process may include image data, voice data, and the like.
Step 212: the state parameters of the neural network processor 1 are determined based on the data type.
The state parameters of the neural network processor 1 are determined according to the type of data processed by the neural network processor 1. For example, when the type of data processed by the neural network processor 1 is image data, the corresponding state parameters may include the core temperature, the operating frame rate, and the performance parameters of the neural network processor 1; when the type of data processed by the neural network processor 1 is voice data, the corresponding state parameters may include a core temperature of the neural network processor 1, and a voice delay time.
It should be understood that, in the embodiment of the present application, different state parameters may be selected for different data types, as long as the selected state parameters can reflect the operating state of the neural network processor; the specific content of the state parameters is not limited in the embodiment of the present application.
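The data-type-to-state-parameter mapping of steps 211 and 212 can be kept in a small lookup table. The keys and parameter names below are illustrative, following the examples given in the text:

```python
# Hypothetical step-212 lookup table; the keys and parameter names are
# illustrative, matching the examples given in the description.
STATE_PARAMETERS_BY_DATA_TYPE = {
    "image": ("core_temperature", "operating_frame_rate", "performance"),
    "voice": ("core_temperature", "voice_delay_time"),
}

def state_parameters_for(data_type):
    """Step 212: pick the state parameters to monitor for this workload."""
    return STATE_PARAMETERS_BY_DATA_TYPE[data_type]
```

New workload types can be supported by adding entries, consistent with the note that the specific state parameters are not limited.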
Fig. 6 is a flowchart illustrating a reinforcement learning method according to an exemplary embodiment of the present application. As shown in fig. 6, the reinforcement learning method may include the steps of:
step 510: acquiring the state parameter and the power parameter of the neural network processor during operation, wherein the state parameter and the power parameter are acquired by the coprocessor, and the state parameter is generated based on a preset load state parameter.
The training of the reinforcement learning model in this embodiment may be performed in a training module (e.g., a server), and during the training process, the state parameters of the neural network processor 1 may be transmitted to the training module by the coprocessor 2, so that the training module trains the reinforcement learning model. After the reinforcement learning model is trained in the training module, the reinforcement learning model is transmitted to the coprocessor 2, and the execution process of the reinforcement learning model is realized by the coprocessor 2.
Step 520: and calculating the profit values represented by the state parameters and the power parameters.
Reinforcement learning is a machine learning paradigm in which no data samples are provided at the initial training stage. The core idea is to let the trained reinforcement learning model learn by trial and error: profits are obtained through interaction with the environment, the profits guide the action behavior, and the goal of reinforcement learning is for the trained model to obtain the maximum profit.
Step 530: and storing the state parameters, the power parameters and the corresponding profit values as experience samples into an experience pool.
In the embodiment of the application, the power parameter output by each learning step is converted into a profit value, and the state parameter, the power parameter and the profit value are stored as an experience sample in the experience pool of the reinforcement learning model. During execution, according to the different state parameters generated by the operating load of the neural network processor, the power parameter with the highest corresponding profit value is selected from the experience pool as the output of the model. The load determines the current operating state of the neural network processor, and selecting from the experience pool the power parameter with the highest profit value for that operating state yields the optimal power parameter, so that the neural network processor runs in the state with optimal energy consumption.
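Since the profit value is inversely related to energy consumption while the load's performance and temperature constraints must still hold, one plausible shape for it, together with the step-530 experience-pool bookkeeping, is the following. The penalty value and function signature are assumptions for illustration, not taken from the patent:

```python
def profit_value(voltage, current, perf_met, temp_ok):
    """Illustrative profit: inversely related to instantaneous power
    (P = V * I), with a large hypothetical penalty when the load's
    performance or temperature constraint is violated."""
    if not (perf_met and temp_ok):
        return -1e6              # assumed constraint-violation penalty
    return -voltage * current    # lower power -> higher profit

experience_pool = []             # step 530: (state, power param, profit) tuples

def store_experience(state, power_param, profit):
    experience_pool.append((state, power_param, profit))

# A lower-power setting that still meets constraints earns a higher profit.
p_high_power = profit_value(0.9, 1.2, perf_met=True, temp_ok=True)
p_low_power  = profit_value(0.8, 1.0, perf_met=True, temp_ok=True)
```

At execution time, selecting the stored power parameter with the highest profit for the current state then favors the lowest-power setting that still satisfies the load.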
Step 540: and updating the power parameter. Wherein the updating range of the power parameter is a plurality of given power parameters.
The power parameter is updated sequentially through the values of the given power parameters, so that profit values corresponding to all of the given power parameters are calculated, providing enough experience samples to meet the requirements during execution.
Step 550: and stopping training the reinforcement learning model when the given plurality of power parameters have corresponding profit values.
Because a processor (including a neural network processor) can only select one of a limited number of power parameters during actual operation, that is, the power parameters of the processor are discrete rather than continuous, the theoretically optimal power parameter for a given load is not necessarily among the given power parameters, and only the best of the given power parameters can be selected during actual execution. Given this particularity of the field, when the reinforcement learning model is trained, the generalization capability of the model needs to be improved as much as possible; that is, all samples possibly involved in the execution process should be included in the experience pool of the reinforcement learning model, so that the state parameters and power parameters generated during execution are guaranteed to have corresponding experience samples in the pool. Therefore, after corresponding profit values have been calculated for all of the given power parameters, training of the reinforcement learning model is stopped.
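Steps 510 to 550 thus reduce to a sweep over the given discrete power parameters that stops once every one of them has a recorded profit value. A sketch, where `run_and_measure` and `compute_profit` are hypothetical stand-ins for the coprocessor measurement path and the profit calculation:

```python
def train_reinforcement_model(given_power_params, run_and_measure,
                              compute_profit):
    """Training sweep for steps 510-550; `run_and_measure` and
    `compute_profit` are hypothetical helper callables."""
    pool = []
    for p in given_power_params:           # step 540: update within the range
        state = run_and_measure(p)         # step 510: run load, read state
        profit = compute_profit(state, p)  # step 520: profit value
        pool.append((state, p, profit))    # step 530: store experience sample
    return pool                            # step 550: every p has a profit

# Toy run: three candidate settings, a fake measurement, a fake profit.
pool = train_reinforcement_model(
    [400, 600, 800],
    run_and_measure=lambda p: {"temp": 40 + p / 100},
    compute_profit=lambda s, p: -p,
)
```

The loop terminates exactly when all given power parameters have profit values, matching the stated stopping condition.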
In the embodiment of the application, the profit value is calculated, and the state parameter, the power parameter and the corresponding profit value are stored in the experience pool as an experience sample. In the actual execution process, the power parameter with the highest profit value is selected as the optimal power parameter of the neural network processor; that is, the second power parameter is determined according to the current state parameter and the first power parameter of the neural network processor, so that the neural network processor operates in the state with optimal energy consumption, thereby achieving the purpose of saving energy consumption.
In an embodiment, the method can further fit a continuous trend from the discrete power parameters, so as to obtain a continuous correspondence between profit value and power parameter; when the theoretically optimal power parameter for the operating load does not lie within the given power parameter range, the power parameter in that range closest to the theoretical optimum can be selected as the optimal power parameter.
In an embodiment, during the training of the reinforcement learning model, experience samples whose profit value is low, or is not the highest for the corresponding state parameters, can be deleted, so that only the optimal experience samples are retained in the experience pool; this can improve the adjustment efficiency during execution.
In an embodiment, the condition for stopping training the reinforcement learning model may be that the calculated profit value is greater than a first preset profit threshold. By setting the first preset profit threshold, after the profit value represented by a certain power parameter is calculated, it is judged whether the profit value is optimal; when the judgment result shows that the profit value represented by the power parameter is optimal, training is stopped, which simplifies the training process; otherwise, training of the reinforcement learning model continues.
In an embodiment, the training method of the reinforcement learning model may further include: when the profit values represented by all of the given power parameters are less than or equal to the first preset profit threshold, selecting the training control information corresponding to the maximum profit as the output of the reinforcement learning model, and stopping training the reinforcement learning model.
It should be understood that different training stopping conditions may be selected according to different application scenarios in the embodiments of the present application, as long as the selected training stopping conditions can implement training of the reinforcement learning model, and the training stopping conditions in the embodiments of the present application include, but are not limited to, any of the above conditions.
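The stop conditions described above can be sketched as a single check; a minimal illustration in Python (the function name, threshold value, and data layout are assumptions, not from the patent):

```python
def should_stop_training(profits_by_power, given_powers, profit_threshold):
    # Condition 1: some power parameter already yields a profit value
    # greater than the first preset profit threshold.
    if any(r > profit_threshold for r in profits_by_power.values()):
        return True
    # Condition 2: every given power parameter has a computed profit
    # value, i.e. the experience pool covers all needed samples.
    return all(p in profits_by_power for p in given_powers)
```

As the text notes, different application scenarios may adopt either condition alone or combine them differently.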
Fig. 7 is a flowchart illustrating a reinforcement learning method according to another exemplary embodiment of the present application. As shown in fig. 7, the reinforcement learning model training method according to the embodiment of the present application may further include the following steps:
step 610: acquiring combined load state parameters from the given preset load state parameters by permutation and combination.
Specifically, the preset load state parameters may include any one or a combination of the following parameters: the number of loads, the load resolution, the load type, the load delay time, the load frame rate, and the number and type of network layers required for load processing. In further embodiments, the load resolution may include any of: 1080P, 720P; and/or the load frame rate may include any of the following frame rate values: 10 frames/second, 24 frames/second, 30 frames/second; and/or the load type may include any of the following types: detection, tracking, and identification.
In order to obtain more load state parameters, the embodiments of the present application may obtain new combined load state parameters by permutation and combination of all the given load state parameters. For example, the given load state parameters include a first load state parameter and a second load state parameter, where the first load state parameter is a task of detecting a single image at a resolution of 720P and a frame rate of 10 frames/second, and the second load state parameter is a task of identifying two images at a resolution of 1080P and a frame rate of 24 frames/second. After reinforcement learning is performed on the first and second load state parameters, combined load state parameters may be obtained by permutation and combination of the two, including: a task of detecting a single image at a resolution of 720P and a frame rate of 24 frames/second, a task of detecting a single image at a resolution of 1080P and a frame rate of 10 frames/second, a task of identifying two images at a resolution of 720P and a frame rate of 10 frames/second, and the like.
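The permutation-and-combination step above can be sketched with `itertools.product`; the attribute values mirror the example, while the dictionary layout is an assumption for illustration:

```python
from itertools import product

# Attribute values drawn from the given load state parameters.
resolutions = ["720P", "1080P"]
frame_rates = [10, 24]                 # frames/second
task_types = ["detect", "identify"]
image_counts = [1, 2]

# Every combination of the attributes becomes a new combined load state.
combined = [
    {"resolution": r, "frame_rate": f, "task": t, "images": n}
    for r, f, t, n in product(resolutions, frame_rates, task_types, image_counts)
]
```

Two given loads with four attributes each already yield sixteen combined load states, which is how a limited set of given parameters produces many training samples.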
Step 620: a state parameter generated based on the combined load state parameter is obtained.
Step 630: a reinforcement learning model is trained based on the generated state parameters.
The acquired new combined load state parameters are input into the neural network processor to generate new state parameters, and the reinforcement learning model is further trained according to the new state parameters, thereby improving the application range and accuracy of the reinforcement learning model.
It should be understood that only two given load state parameters are used here by way of example; in practical applications there may be two or more, and the exemplary description above lists only some of the combined load state parameters. The embodiments of the present application can obtain a large number of combined load state parameters from the given load state parameters to train the reinforcement learning model, so that, even when the given load state parameters are limited, as many load state parameters as possible are obtained for training, which maximizes the application range of the reinforcement learning model and improves its accuracy.
In an embodiment, a specific implementation of step 520 may include: calculating the profit value according to the power parameter and the difference between the state parameter and the minimum state parameter required for processing the load, where both the difference and the power parameter are negatively related to the profit value. That is, the more the state parameter exceeds the load requirement, and the larger the power parameter, the smaller the profit value; therefore, to obtain the maximum profit value, the state parameter should exceed the load requirement by the smallest margin and the power parameter should be minimal.
In an embodiment, the specific implementation of step 520 may further include: weighting the difference between the state parameter and the minimum state parameter required for processing the load, the weighted difference being negatively related to the profit value; and/or weighting the power parameter, the weighted power parameter being negatively related to the profit value. According to the importance of the state parameter and the power parameter to the output of the reinforcement learning model, the state parameter and/or the power parameter can be weighted and the profit value calculated from the weighted values, thereby changing the degree to which each influences the profit value and accelerating the convergence of the reinforcement learning model.
In an embodiment, when the load is image data, step 520 may be implemented by the formula R = -((A - A0) × β + P), where R is the profit value, A is the operating frame rate, A0 is the load frame rate, β is the weighting coefficient, and P is the operating power of the neural network processor.
It should be understood that, in the embodiments of the present application, different weighting coefficients may be selected according to different application scenarios. The weighting coefficient of the operating power in the above formula is 1; of course, a different weighting coefficient for the operating power may also be selected as required, and the present application does not limit the weighting coefficients in the above formula. It should also be understood that the above formula is only an exemplary way to calculate the profit value given in the embodiments of the present application; other formulas may be selected, and the embodiments of the present application do not limit the profit value calculation formula.
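Reading the formula with the negative sign implied by the negative correlations described above, a sketch of the profit calculation might look as follows (β = 0.5 is an assumed weighting coefficient, and the function name is illustrative):

```python
def profit_value(frame_rate, load_frame_rate, power, beta=0.5):
    # R = -((A - A0) * beta + P): the profit value falls as the operating
    # frame rate exceeds the load frame rate and as the operating power
    # of the neural network processor rises.
    return -((frame_rate - load_frame_rate) * beta + power)
```

With β weighting the frame-rate surplus, the maximum profit is reached at the smallest surplus over the load and the smallest operating power, consistent with the negative correlations stated in the text.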
Exemplary devices
The application provides a device for adjusting power parameters of a neural network processor, which is used for realizing the method for adjusting the power parameters of the neural network processor.
Fig. 8 is a block diagram of an apparatus for adjusting a power parameter according to an exemplary embodiment of the present application. As shown in fig. 8, the adjusting apparatus includes: a first determining module 21, configured to determine a state parameter and a first power parameter of the neural network processor 1 during operation; a second determining module 22, configured to determine a second power parameter of the neural network processor 1 according to the state parameter and the first power parameter; and an adjusting module 11, configured to adjust the power parameter of the neural network processor 1 from the first power parameter to the second power parameter.
According to the adjusting apparatus for the power parameter, the first determining module determines the state parameter and the first power parameter of the neural network processor during operation, the second determining module determines the second power parameter according to the state parameter and the first power parameter, and the adjusting module adjusts the power parameter of the neural network processor to the second power parameter, so that the neural network processor operates in the state with optimal energy consumption, achieving the purpose of saving energy.
In an embodiment, the first determining module 21 and the second determining module 22 may be disposed in the coprocessor 2, wherein the coprocessor 2 is communicatively connected to the neural network processor 1 for assisting in adjusting the power parameter when the neural network processor 1 operates.
In an embodiment, the adjusting module 11 may be disposed in the neural network processor 1, and configured to adjust the power parameter of the neural network processor 1 from the first power parameter to the second power parameter after determining the second power parameter of the neural network processor 1.
In an embodiment, the second determination module 22 may be configured to:
and inputting the state parameters and the first power parameters into the trained reinforcement learning model, and calculating second power parameters of the neural network processor 1 through the reinforcement learning model. By setting the reinforcement learning model, the second power parameter is simply obtained, and a complex calculation formula or logic operation is avoided.
Fig. 9 is a block diagram of a first determination module provided in an exemplary embodiment of the present application. As shown in fig. 9, the first determining module 21 may include:
and the task determination submodule 211 is used for determining the data type processed by the neural network processor 1 during the operation.
A state parameter determination submodule 212 for determining a state parameter of the neural network processor 1 based on the data type.
The state parameters of the neural network processor 1 are determined according to the type of data processed by the neural network processor 1. For example, when the neural network processor 1 processes image data, the corresponding state parameters may include the core temperature, operating frame rate, operating voltage, operating current, and performance parameters of the neural network processor 1; when it processes speech data, the corresponding state parameters may include the core temperature, operating voltage, operating current, and speech delay time of the neural network processor 1.
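A minimal sketch of such a type-dependent mapping (the parameter names and dictionary layout are assumptions mirroring the examples in the text):

```python
# Assumed mapping from the data type processed by the neural network
# processor to the state parameters monitored for that type.
STATE_PARAMS_BY_TYPE = {
    "image": ["core_temperature", "operating_frame_rate",
              "operating_voltage", "operating_current", "performance"],
    "speech": ["core_temperature", "operating_voltage",
               "operating_current", "speech_delay_time"],
}

def state_parameters_for(data_type):
    # Return the list of state parameters to collect, or an empty list
    # for an unrecognized data type.
    return STATE_PARAMS_BY_TYPE.get(data_type, [])
```

The state parameter determination submodule 212 would use such a mapping after the task determination submodule 211 has identified the data type.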
Fig. 10 is a block diagram of a reinforcement learning model training apparatus according to an exemplary embodiment of the present application. As shown in fig. 10, includes:
the obtaining module 31 is configured to obtain a state parameter and a power parameter of the neural network processor during operation, where the state parameter and the power parameter are obtained by the coprocessor, and the state parameter is generated based on a preset load state parameter.
And a calculation module 32 for calculating the profit values represented by the state parameters and the power parameters.
The sample establishing module 33 is configured to store the state parameter, the power parameter, and the corresponding profit value as an experience sample in an experience pool. By calculating the profit value, the power parameter output by each round of learning is converted into a profit value, and the state parameter, the power parameter, and the profit value serve as an experience sample in the experience pool of the reinforcement learning model. During execution, according to the different state parameters generated by the operating load of the neural network processor, the corresponding power parameter with the highest profit value is selected from the experience pool as the output of the model.
The updating module 34 is configured to update the power parameter, where the update range of the power parameter is a given plurality of power parameters. The power parameters are updated in turn according to the given values, so that the profit values corresponding to all of the given power parameters are calculated, providing enough experience samples to meet the requirements during execution.
The stopping module 35 is configured to stop training the reinforcement learning model when all of the given power parameters have corresponding profit values. When training the reinforcement learning model, the generalization capability of the model needs to be improved as much as possible; that is, all samples that may be involved during execution should be included in the experience pool, which ensures that the state parameters and power parameters generated during execution have corresponding experience samples in the experience pool of the model. Therefore, training of the reinforcement learning model is stopped after corresponding profit values have been calculated for all of the given power parameters.
According to the embodiments of the application, the profit value is calculated, and the state parameter, the power parameter, and the corresponding profit value are stored in the experience pool as an experience sample. During actual execution, the power parameter with the highest profit value is selected as the optimal power parameter of the neural network processor, and the second power parameter of the neural network processor is determined according to the current state parameter and the first power parameter, so that the neural network processor operates in the state with optimal energy consumption, achieving the purpose of saving energy.
In an embodiment, the stopping module 35 may be configured to stop training when the calculated profit value is greater than a first preset profit threshold. By setting the first preset profit threshold, after the profit value represented by a certain power parameter is calculated, it is judged whether the profit value is optimal; when the judgment result shows that it is optimal, training is stopped, which simplifies the training process; otherwise, training of the reinforcement learning model continues.
In an embodiment, the stopping module 35 may be further configured to: when the profit values represented by all of the given power parameters are less than or equal to the first preset profit threshold, select the training control information corresponding to the maximum profit as the output of the reinforcement learning model, and stop training the reinforcement learning model.
Fig. 11 is a block diagram of a reinforcement learning model training apparatus according to another exemplary embodiment of the present application. As shown in fig. 11, the training apparatus 3 may further include:
and the load combination module 36 is configured to obtain the combined load state parameter by permutation and combination of all the preset load state parameters. And the acquisition module 33 acquires the state parameters and power parameters generated based on the combined load state parameters for training.
The acquired new combined load state parameters are input into the neural network processor to generate new state parameters and power parameters, and the reinforcement learning model is further trained according to the new state parameters and power parameters, so that the application range and the accuracy of the reinforcement learning model can be improved.
In an embodiment, the calculation module 32 may be configured to: calculate the profit value according to the power parameter and the difference between the state parameter and the minimum state parameter required for processing the load, where both the difference and the power parameter are negatively related to the profit value. That is, the more the state parameter exceeds the load requirement, and the larger the power parameter, the smaller the profit value; therefore, to obtain the maximum profit value, the state parameter should exceed the load requirement by the smallest margin and the power parameter should be minimal.
In an embodiment, the calculation module 32 may be further configured to: weight the difference between the state parameter and the minimum state parameter required for processing the load, the weighted difference being negatively related to the profit value; and/or weight the power parameter, the weighted power parameter being negatively related to the profit value. According to the importance of the state parameter and the power parameter to the output of the reinforcement learning model, they can be weighted and the profit value calculated from the weighted values, thereby changing the degree to which each influences the profit value and accelerating the convergence of the reinforcement learning model.
In one embodiment, when the load is image data, the profit value may be calculated as R = -((A - A0) × β + P), where R is the feedback variable of the reward function, A is the operating frame rate, A0 is the load frame rate, β is the weighting coefficient, and P is the operating power of the neural network processor.
Exemplary electronic device
FIG. 12 illustrates a block diagram of an electronic device in accordance with an embodiment of the present application. It should be noted that, when the electronic device executes the method flow of the embodiment shown in fig. 2 to fig. 5, it may be an electronic device such as an image capturing device, a sound recording device, an intelligent device, and the like. When the electronic device executes the method flows of the embodiments shown in fig. 6 to 7, it may be an electronic device such as a server used by a technician to train a reinforcement learning model.
As shown in fig. 12, the electronic device 11 includes one or more processors 111 and memory 112.
The processor 111 may be a Central Processing Unit (CPU) or other form of processing unit having data processing capabilities and/or instruction execution capabilities, and may control other components in the electronic device 11 to perform desired functions.
Memory 112 may include one or more computer program products that may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, Random Access Memory (RAM), cache memory (cache), and/or the like. The non-volatile memory may include, for example, Read Only Memory (ROM), hard disk, flash memory, etc. One or more computer program instructions may be stored on the computer-readable storage medium and executed by the processor 111 to implement the power parameter adjustment method or the reinforcement learning model training method of the various embodiments of the present application described above, and/or other desired functions. Various contents such as an input signal, a signal component, a noise component, etc. may also be stored in the computer-readable storage medium.
In one example, the electronic device 11 may further include: an input device 113 and an output device 114, which are interconnected by a bus system and/or other form of connection mechanism (not shown).
For example, the input device 113 may be a camera, a microphone, or a microphone array as described above, for capturing the input signal of an image or sound source. When the electronic device is a stand-alone device, the input device 113 may be a communication network connector for receiving the acquired input signals from the neural network processor.
The input device 113 may also include, for example, a keyboard, a mouse, and the like.
The output device 114 may output various information to the outside, including the determined output voltage, output current information, and the like. The output devices 114 may include, for example, a display, speakers, a printer, and a communication network and remote output devices connected thereto, among others.
Of course, for the sake of simplicity, only some of the components related to the present application in the electronic device 11 are shown in fig. 12, and components such as a bus, an input/output interface, and the like are omitted. In addition, the electronic device 11 may include any other suitable components, depending on the particular application.
Exemplary computer program product and computer-readable storage Medium
In addition to the above-described methods and apparatus, embodiments of the present application may also be a computer program product comprising computer program instructions that, when executed by a processor, cause the processor to perform the steps in the power parameter adjustment method according to the embodiment illustrated in fig. 2 to 5 or the training method of the reinforcement learning model of fig. 6 to 7 of the present application described in the above-mentioned "exemplary method" section of this specification.
The computer program product may be written with program code for performing the operations of embodiments of the present application in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server.
Furthermore, embodiments of the present application may also be a computer-readable storage medium having stored thereon computer program instructions that, when executed by a processor, cause the processor to perform the steps in the power parameter adjustment method or the training method of the reinforcement learning model according to various embodiments of the present application described in the "exemplary methods" section above in this specification.
The computer-readable storage medium may take any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may include, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The foregoing describes the general principles of the present application in conjunction with specific embodiments, however, it is noted that the advantages, effects, etc. mentioned in the present application are merely examples and are not limiting, and they should not be considered essential to the various embodiments of the present application. Furthermore, the foregoing disclosure of specific details is for the purpose of illustration and description and is not intended to be limiting, since the foregoing disclosure is not intended to be exhaustive or to limit the disclosure to the precise details disclosed.
The block diagrams of devices, apparatuses, and systems referred to in this application are given only as illustrative examples and are not intended to require or imply that the connections, arrangements, and configurations must be made in the manner shown in the block diagrams. These devices, apparatuses, and systems may be connected, arranged, and configured in any manner, as will be appreciated by those skilled in the art. Words such as "including," "comprising," "having," and the like are open-ended words that mean "including, but not limited to," and are used interchangeably therewith. The word "or" as used herein means, and is used interchangeably with, the word "and/or," unless the context clearly dictates otherwise. The phrase "such as" is used herein to mean, and is used interchangeably with, the phrase "such as, but not limited to."
It should also be noted that in the devices, apparatuses, and methods of the present application, the components or steps may be decomposed and/or recombined. These decompositions and/or recombinations are to be considered as equivalents of the present application.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present application. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the application. Thus, the present application is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description has been presented for purposes of illustration and description. Furthermore, the description is not intended to limit embodiments of the application to the form disclosed herein. While a number of example aspects and embodiments have been discussed above, those of skill in the art will recognize certain variations, modifications, alterations, additions and sub-combinations thereof.

Claims (10)

1. A method for adjusting power parameters comprises the following steps:
determining a state parameter and a first power parameter of a neural network processor at runtime;
determining a second power parameter of the neural network processor according to the state parameter and the first power parameter; and
adjusting a power parameter of the neural network processor from the first power parameter to the second power parameter;
the specific method for determining the second power parameter of the neural network processor according to the state parameter and the first power parameter comprises the following steps:
inputting the state parameter and the first power parameter into a trained reinforcement learning model;
calculating at least one profit value corresponding to all power parameters of the neural network processor through the reinforcement learning model; the profit value is a characteristic value corresponding to energy consumption of the neural network processor during operation and is used for representing the energy consumption value wasted by the operation of the neural network processor, and the profit value is inversely related to the energy consumption value;
determining a highest revenue value from the at least one revenue value; and
determining the power parameter corresponding to the highest profit value as the second power parameter of the neural network processor.
2. The method of claim 1, wherein the method further comprises:
transmitting the state parameters and the power parameters of the neural network processor to a training module so that the training module trains the reinforcement learning model.
3. The method of claim 1, wherein the determining the state parameters of the neural network processor at runtime comprises:
determining the type of data to be processed by the neural network processor during operation; and
determining a state parameter of the neural network processor based on the data type.
4. A training method applied to a reinforcement learning model for power parameter adjustment comprises the following steps:
acquiring a state parameter and a power parameter of a neural network processor during operation, wherein the state parameter and the power parameter are acquired by a coprocessor, and the state parameter is generated based on a preset load state parameter;
calculating the profit values represented by the state parameters and the power parameters; the profit value is a characteristic value corresponding to the energy consumption of the neural network processor during operation and is used for representing the energy consumption value wasted by the operation of the neural network processor;
storing the state parameters, the power parameters and the corresponding income values as experience samples into an experience pool of the reinforcement learning model;
updating the power parameter, wherein the updating range of the power parameter is a given plurality of power parameters; and
stopping training the reinforcement learning model when the given plurality of power parameters all have corresponding revenue values.
5. The method of claim 4, wherein the method further comprises:
acquiring a combined load state parameter from the given preset load state parameters in a permutation and combination mode;
obtaining the state parameters generated based on the combined load state parameters; and
training the reinforcement learning model based on the generated state parameters.
6. The method of claim 4 or 5, wherein said calculating a value of revenue represented by said state parameters and said power parameters comprises:
calculating the profit value according to the power parameter and the difference between the state parameter and the minimum state parameter required by the processing load, wherein the difference is negatively related to the profit value and the power parameter is negatively related to the profit value.
7. The method of claim 6, wherein said calculating the profit value represented by the state parameter and the power parameter further comprises:
weighting the difference and/or weighting the power parameter; and
calculating the profit value according to the weighted difference and/or the weighted power parameter.
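Claims 6 and 7 together describe a reward that penalizes both excess capability and power draw, with optional weights. A minimal sketch, assuming a simple linear penalty (the function name, the linear form, and the unit default weights are all assumptions; with both weights left at 1 this reduces to the unweighted form of claim 6):

```python
# Hedged sketch of the claimed profit value: negatively related to the gap
# between the actual and minimum-required state parameter, and negatively
# related to the power parameter; both terms may be weighted (claim 7).
def profit_value(state, min_state, power, w_diff=1.0, w_power=1.0):
    diff = state - min_state  # excess capability, i.e. wasted energy headroom
    return -(w_diff * diff + w_power * power)
```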
8. An apparatus for adjusting a power parameter, comprising:
a first determining module, configured to determine a state parameter and a first power parameter of the neural network processor during operation;
a second determining module, configured to determine a second power parameter of the neural network processor according to the state parameter and the first power parameter; and
an adjusting module, configured to adjust a power parameter of the neural network processor from the first power parameter to the second power parameter;
wherein the second determination module is further configured to:
inputting the state parameter and the first power parameter into a trained reinforcement learning model;
calculating at least one profit value corresponding to the power parameters of the neural network processor through the reinforcement learning model, wherein the profit value is a characteristic value corresponding to the energy consumption of the neural network processor during operation, represents the amount of energy wasted by the operation of the neural network processor, and is inversely related to the wasted energy;
determining a highest profit value from the at least one profit value; and
determining the power parameter corresponding to the highest profit value as the second power parameter of the neural network processor.
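The second determining module amounts to an argmax over candidate power parameters. A hedged sketch, where `model` stands in for the trained reinforcement learning model (its callable signature is an assumption for illustration):

```python
# Sketch of claim 8's second determining module: score every candidate power
# parameter with the trained model and pick the one with the highest profit.
# `model(state, first_power, candidate)` is a hypothetical interface.
def select_power(state, first_power, candidate_powers, model):
    profits = {p: model(state, first_power, p) for p in candidate_powers}
    return max(profits, key=profits.get)  # power with the highest profit value
```

The adjusting module would then move the processor from `first_power` to the returned value.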
9. A computer-readable storage medium, wherein the storage medium stores a computer program for performing the method of any one of claims 1-7.
10. An electronic device, the electronic device comprising:
a processor;
a memory for storing instructions executable by the processor;
wherein the processor is configured to perform the method of any one of claims 1-7.
CN201811419611.4A 2018-11-26 2018-11-26 Power parameter adjusting method and device and reinforcement learning model training method Active CN109491494B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811419611.4A CN109491494B (en) 2018-11-26 2018-11-26 Power parameter adjusting method and device and reinforcement learning model training method

Publications (2)

Publication Number Publication Date
CN109491494A CN109491494A (en) 2019-03-19
CN109491494B true CN109491494B (en) 2020-04-17

Family

ID=65696719

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811419611.4A Active CN109491494B (en) 2018-11-26 2018-11-26 Power parameter adjusting method and device and reinforcement learning model training method

Country Status (1)

Country Link
CN (1) CN109491494B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110147891B (en) * 2019-05-23 2021-06-01 北京地平线机器人技术研发有限公司 Method and device applied to reinforcement learning training process and electronic equipment
CN110456644B (en) * 2019-08-13 2022-12-06 北京地平线机器人技术研发有限公司 Method and device for determining execution action information of automation equipment and electronic equipment
US11900240B2 (en) * 2019-09-18 2024-02-13 Stmicroelectronics S.R.L. Variable clock adaptation in neural network processors
CN110941268B (en) * 2019-11-20 2022-09-02 苏州大学 Unmanned automatic trolley control method based on Sarsa safety model
CN111182549B (en) * 2020-01-03 2022-12-30 广州大学 Anti-interference wireless communication method based on deep reinforcement learning
CN112016665B (en) * 2020-10-20 2021-04-06 深圳云天励飞技术股份有限公司 Method and device for calculating running time of neural network on processor
CN112347584B (en) * 2020-11-09 2024-05-28 北京三一智造科技有限公司 Power distribution method and power distribution device
CN112667407B (en) * 2021-01-18 2023-09-19 成都国科微电子有限公司 Processor parameter adjusting method and device, electronic equipment and storage medium
EP4137913A1 (en) 2021-08-17 2023-02-22 Axis AB Power management in processing circuitry which implements a neural network
CN117130769A (en) * 2023-02-25 2023-11-28 荣耀终端有限公司 Frequency modulation method, training method of frequency adjustment neural network and electronic equipment

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040123163A1 (en) * 2002-12-20 2004-06-24 Ta-Feng Huang Device for managing electric power source of CPU
CN103376869B (en) * 2012-04-28 2016-11-23 华为技术有限公司 A kind of temperature feedback control system and method for DVFS
US10234930B2 (en) * 2015-02-13 2019-03-19 Intel Corporation Performing power management in a multicore processor
TWI627525B (en) * 2016-08-18 2018-06-21 瑞昱半導體股份有限公司 Voltage and frequency scaling apparatus, system on chip and voltage and frequency scaling method


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant