CN117851022A - Bandwidth adjustment method and device and intelligent driving equipment

Info

Publication number
CN117851022A
Authority
CN
China
Prior art keywords
bandwidth, function, priority, range, weight
Legal status
Pending
Application number
CN202211212975.1A
Other languages
Chinese (zh)
Inventor
丁涛
许轲
黄琳珊
张韦妮
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Application filed by Huawei Technologies Co Ltd
Priority to CN202211212975.1A
Priority to PCT/CN2023/118403 (WO2024067080A1)
Publication of CN117851022A


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]

Abstract

The application provides a bandwidth adjustment method and apparatus, and an intelligent driving device. The bandwidth adjusting device obtains a scheduling priority and a first bandwidth priority of a first function, where the first function corresponds to a first computing unit and the first computing unit belongs to a computing platform. The bandwidth adjusting device determines a second bandwidth priority of the first function based on the scheduling priority and the first bandwidth priority, and then determines, according to the second bandwidth priority, a first bandwidth range of the first function on the first computing unit, where the first bandwidth range is the first range of bandwidth that the computing platform allocates from the first computing unit for use by the first function. Because the bandwidth range allocated to the first function by the computing platform is determined according to the scheduling priority and the bandwidth priority of the first function, the performance of the computing platform can be improved, which in turn improves the performance of the intelligent driving device.

Description

Bandwidth adjustment method and device and intelligent driving equipment
Technical Field
The present application relates to the field of intelligent driving, and more particularly, to a bandwidth adjustment method, apparatus, and intelligent driving device.
Background
The computing platform is used as a central brain of the intelligent driving system and is embedded platform equipment with high safety and high certainty requirements. The intelligent driving system developed based on the computing platform directly determines the safety, stability, reliability and performance of the whole advanced driving assistance system (advanced driving assistance system, ADAS) function.
A central processing unit (central processing unit, CPU) on the computing platform determines the computing speed at which functions/threads run. Double data rate synchronous dynamic random access memory (DDR) on the computing platform determines the memory operation speed of functions/threads. Because DDR bandwidth on the computing platform is limited, users can adjust the quality of service (quality of service, QoS) DDR bandwidth access configuration for individual hardware resources/processes/threads on the computing platform.
When the DDR bandwidth allocated to one function becomes larger, the DDR bandwidth of other functions becomes smaller. Moreover, even when a function occupies CPU resources, its speed still depends on memory access. When the DDR bandwidth of a function is increased, the time the function spends waiting for CPU memory operations can be reduced or even avoided, so its running time decreases; when the DDR bandwidth of a function is decreased, the function still occupies the CPU for computation, but the CPU's memory operations are slowed by the smaller DDR bandwidth, so the overall running time increases.
Therefore, how to adjust the bandwidth of each unit on the computing platform so as to optimize performance is a problem that urgently needs to be solved.
Disclosure of Invention
The application provides a bandwidth adjusting method and device and intelligent driving equipment, which are used for adjusting the bandwidth of each function on a computing platform, realizing performance optimization and further improving the performance of the intelligent driving equipment.
In order to achieve the above purpose, the following technical scheme is adopted in the application.
In a first aspect, the present application provides a bandwidth adjustment method, which may include: the bandwidth adjusting device obtains the scheduling priority and the first bandwidth priority of a first function, the first function corresponds to a first computing unit, and the first computing unit belongs to a computing platform. The bandwidth adjusting means determines a second bandwidth priority of the first function based on the scheduling priority and the first bandwidth priority. The bandwidth adjusting device determines a first bandwidth range of the first computing unit corresponding to the first function according to the second bandwidth priority, wherein the first bandwidth range is a first range of bandwidths distributed to the first function from the first computing unit by the computing platform.
According to the method and the device, the bandwidth range allocated to the first function by the computing platform is determined according to the scheduling priority and the bandwidth priority of the first function, so that the performance of the computing platform can be improved, and the performance of the intelligent driving equipment is further improved.
In one implementation, the determining, by the bandwidth adjustment device, the second bandwidth priority of the first function according to the scheduling priority and the first bandwidth priority may specifically be: the bandwidth adjusting device acquires a first weight of the scheduling priority and a second weight of the first bandwidth priority; the bandwidth adjusting device determines a second bandwidth priority according to the scheduling priority, the first weight, the first bandwidth priority and the second weight.
In the method, the final bandwidth priority of the function is flexibly determined by integrating the influence of the scheduling priority and the first bandwidth priority of the function and combining the weights of different priorities of each function, so that the calculation result is closer to the ideal result.
In one implementation manner, the determining, by the bandwidth adjustment device, the first bandwidth range of the first function corresponding to the first computing unit according to the second bandwidth priority may specifically be: the bandwidth adjusting means determines the first bandwidth range of the first function based on the bandwidth priority-bandwidth range mapping function or the bandwidth priority-bandwidth range mapping table, and the second bandwidth priority.
In one implementation, the obtaining, by the bandwidth adjustment device, the scheduling priority of the first function and the first bandwidth priority may specifically be: the bandwidth adjusting device acquires the scheduling priority and the first bandwidth priority of the first function input through the display device; or the bandwidth adjusting device acquires the scheduling priority and the first bandwidth priority of the first function through a configuration file or a configuration interface.
In one implementation, the obtaining, by the bandwidth adjustment device, the first weight of the scheduling priority, and the second weight of the first bandwidth priority may specifically be: the bandwidth adjusting device acquires the first weight and the second weight input through the display device.
In some implementations, the bandwidth adjustment method may further include: the bandwidth adjusting device obtains the operation information of the first function. The bandwidth adjusting device adjusts the first bandwidth range according to the operation information of the first function to obtain a second bandwidth range, wherein the second bandwidth range is a second range of bandwidths distributed to the first function for use by the computing platform from the first computing unit, and the operation information comprises: the running time of the first function, the system throughput when the first function runs, the running time standard deviation of the last K times when the first function runs, and the number of waiting functions when the first function runs.
In one implementation manner, when the operation information includes multiple information, the bandwidth adjustment device adjusts the first bandwidth range according to the operation information of the first function, so as to obtain the second bandwidth range, where the second bandwidth range may specifically be: the bandwidth adjusting apparatus determines a weight of each of the plurality of information. The bandwidth adjusting means determines the cost of the first function based on each type of information and the weight of each type of information. When the cost indicates that the bandwidth range of the first function needs to be adjusted, the bandwidth adjusting device adjusts the first bandwidth range to obtain a second bandwidth range, and the cost when the first function operates in the second bandwidth range is better than the cost when the first function operates in the first bandwidth range.
In the application, the first bandwidth range of the first function is dynamically adjusted, so that the adjusted second bandwidth range is closer to an ideal value. Ensuring that the cost of the first function operating in the second bandwidth range is better than the cost of the first function operating in the first bandwidth range allows for accurate setting of the desired objective in the optimization process.
In one implementation, the second bandwidth range includes a first boundary and a second boundary, the first boundary being equal to or less than a first threshold, the second boundary being equal to or greater than a second threshold.
In a second aspect, the present application provides a bandwidth adjustment apparatus, the apparatus comprising: the device comprises a first acquisition unit and a determination unit, wherein the first acquisition unit is used for: the method comprises the steps of obtaining scheduling priority and first bandwidth priority of a first function, wherein the first function corresponds to a first computing unit, and the first computing unit belongs to a computing platform. The determining unit is used for: determining a second bandwidth priority of the first function according to the scheduling priority and the first bandwidth priority; and determining a first bandwidth range of the first computing unit corresponding to the first function according to the second bandwidth priority, wherein the first bandwidth range is a first range of bandwidth allocated to the first function for use by the computing platform from the first computing unit.
According to the method and the device, the bandwidth range allocated to the first function by the computing platform is determined according to the scheduling priority and the bandwidth priority of the first function, so that the performance of the computing platform can be improved, and the performance of the intelligent driving equipment is further improved.
In one implementation, the determining unit is configured to: acquiring a first weight of a scheduling priority and a second weight of a first bandwidth priority; and determining the second bandwidth priority according to the scheduling priority, the first weight, the first bandwidth priority and the second weight.
In the method, the final bandwidth priority of the function is flexibly determined by integrating the influence of the scheduling priority and the first bandwidth priority of the function and combining the weights of different priorities of each function, so that the calculation result is closer to the ideal result.
In one implementation, the determining unit is further configured to: the first bandwidth range of the first function is determined based on the bandwidth priority-bandwidth range mapping function or the bandwidth priority-bandwidth range mapping table, and the second bandwidth priority.
In one implementation, the first obtaining unit is configured to: acquiring a scheduling priority and a first bandwidth priority of a first function input through a display device; or the scheduling priority and the first bandwidth priority of the first function are obtained through a configuration file or a configuration interface.
In one implementation, the first obtaining unit is configured to: the method includes the steps of acquiring a first weight and a second weight input through a display device.
In some implementations, the apparatus further includes: the device comprises a second acquisition unit and an adjustment unit, wherein the second acquisition unit is used for: and acquiring the operation information of the first function. The adjusting unit is used for: adjusting the first bandwidth range according to the operation information of the first function to obtain a second bandwidth range, wherein the second bandwidth range is a second range of bandwidths allocated to the first function for use by the computing platform from the first computing unit, and the operation information comprises: the running time of the first function, the system throughput when the first function runs, the running time standard deviation of the last K times when the first function runs, and the number of waiting functions when the first function runs.
In one implementation, when the operation information includes a plurality of information, the adjusting unit is configured to: determining the weight of each of the plurality of information; determining the cost of the first function according to each type of information and the weight of each type of information; when the cost indicates that the bandwidth range of the first function needs to be adjusted, the first bandwidth range is adjusted to obtain a second bandwidth range, and the cost when the first function operates in the second bandwidth range is better than the cost when the first function operates in the first bandwidth range.
In the application, the first bandwidth range of the first function is dynamically adjusted, so that the adjusted second bandwidth range is closer to an ideal value. Ensuring that the cost of the first function operating in the second bandwidth range is better than the cost of the first function operating in the first bandwidth range allows for accurate setting of the desired objective in the optimization process.
In one implementation, the second bandwidth range includes a first boundary and a second boundary, the first boundary being equal to or less than a first threshold, the second boundary being equal to or greater than a second threshold.
In a third aspect, the present application provides a bandwidth adjustment apparatus, which may include: a memory for storing a computer program; a processor for executing a computer program stored in a memory to cause an apparatus to perform the method as in the first aspect.
In one implementation, the bandwidth adjustment device is a computing platform.
In a fourth aspect, the present application provides an intelligent driving apparatus comprising the device of the second or third aspect.
In one implementation, the intelligent driving device is a vehicle.
In a fifth aspect, the present application provides a computer program product comprising computer instructions which, when run on a computer, cause the computer to perform the method according to the first aspect.
In a sixth aspect, the present application provides a computer readable storage medium comprising computer instructions which, when run on a computer, cause the computer to perform the method according to the first aspect.
For the specific embodiments and corresponding technical effects of the second to sixth aspects, reference may be made to the specific embodiments and technical effects of the first aspect.
In the present application, the bandwidth adjustment device obtains the scheduling priority and the first bandwidth priority of the first function, determines a second bandwidth priority of the first function based on the scheduling priority and the first bandwidth priority, and determines, according to the second bandwidth priority, a first bandwidth range of the first function on the first computing unit, where the first bandwidth range is the first range of bandwidth that the computing platform allocates from the first computing unit for use by the first function. Because the bandwidth range allocated to the first function is determined according to the scheduling priority and the bandwidth priority of the first function, the performance of the computing platform can be improved, which in turn improves the performance of the intelligent driving device and helps developers perform performance optimization.
Drawings
Fig. 1 is a schematic functional block diagram of an intelligent driving device according to an embodiment of the present application;
fig. 2 is a schematic block diagram of a bandwidth adjustment system architecture provided in an embodiment of the present application;
fig. 3 is a flow chart of a bandwidth adjustment method provided in an embodiment of the application;
fig. 4A is a flowchart of another bandwidth adjustment method according to an embodiment of the present application;
fig. 4B is a flowchart of another bandwidth adjustment method according to an embodiment of the present application;
FIG. 5 is a schematic diagram of an interface of a display device and interaction between the display device and a computing platform according to an embodiment of the present application;
FIG. 6 is a schematic diagram of another interface of a display device and interaction between the display device and a computing platform according to an embodiment of the present application;
FIG. 7 is a schematic diagram of yet another interface of a display device and interaction between the display device and a computing platform according to an embodiment of the present application;
fig. 8 is a schematic diagram of a bandwidth adjusting apparatus according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the accompanying drawings.
Fig. 1 is a functional block diagram of an intelligent driving apparatus 100 provided in an embodiment of the present application. The intelligent driving device 100 may include a computing platform 110. Some or all of the functions of the intelligent driving apparatus 100 may be controlled by the computing platform 110. Computing platform 110 may include processors 111 through 11n (n is a positive integer), each of which is a circuit with signal processing capability. In one implementation, the processor may be a circuit with instruction fetch and execute capability, such as a central processing unit (central processing unit, CPU), a microprocessor, a graphics processor (graphics processing unit, GPU) (which may be understood as a microprocessor), or a digital signal processor (digital signal processor, DSP), etc.; in another implementation, the processor may perform functions through the logical relationship of hardware circuitry that is fixed or reconfigurable, e.g., a hardware circuit implemented as an application-specific integrated circuit (application-specific integrated circuit, ASIC) or a programmable logic device (programmable logic device, PLD), e.g., a field programmable gate array (field programmable gate array, FPGA). In a reconfigurable hardware circuit, the process in which the processor loads a configuration document to configure the hardware circuit may be understood as a process in which the processor loads instructions to implement the functions of some or all of the above units. Furthermore, the processor may also be a hardware circuit designed for artificial intelligence, which may be understood as an ASIC, such as a neural network processing unit (neural network processing unit, NPU), a tensor processing unit (tensor processing unit, TPU), a deep learning processing unit (deep learning processing unit, DPU), etc. In addition, computing platform 110 may also include a memory for storing instructions, and some or all of processors 111 through 11n may invoke the instructions in the memory to implement the corresponding functionality.
Illustratively, the computing platform 110 may include: at least one of a mobile data center (mobile data center, MDC), a vehicle domain controller (vehicle domain controller, VDC), a chassis domain controller (chassis domain controller, CDC); or may also include other computing platforms such as in-car application-server (ICAS) controllers, body controllers (body domain controller, BDC), special equipment systems (special equipment system, SAS), media graphics units (media graphics unit, MGU), body Super Cores (BSC), ADAS super cores (ADAS super cores), etc., as the application is not limited in this regard. Wherein the ICAS may include at least one of: the vehicle control server ICAS1, the intelligent driving server ICAS2, the intelligent cabin server ICAS3 and the infotainment server ICAS4. In one implementation, computing platform 110 may be an MDC and processors 111 through 11n may be ai_cpus 1 through ai_cpu n, respectively.
The intelligent driving device related to the application can comprise an on-road vehicle, a water vehicle, an air vehicle, an industrial device, an agricultural device, an entertainment device or the like. For example, the intelligent driving device may be a vehicle, which is a vehicle in a broad concept, may be a vehicle (e.g., commercial vehicle, passenger car, motorcycle, aerocar, train, etc.), an industrial vehicle (e.g., forklift, trailer, tractor, etc.), an engineering vehicle (e.g., excavator, earth mover, crane, etc.), an agricultural device (e.g., mower, harvester, etc.), an amusement device, a toy vehicle, etc., and the type of vehicle is not particularly limited in the present application. For another example, the intelligent driving device may be an aircraft, or a ship.
In one particular implementation, the intelligent driving apparatus referred to herein may further comprise an intelligent driving system that includes the computing platform 110. CPUs on computing platform 110 may be divided into CTRL_CPUs (non-deterministic cores) and AI_CPUs (deterministic cores). An AI_CPU has at least the following characteristics: 1. the CPU's operating noise floor, such as driver interference, is isolated, so there is less interference and execution is more deterministic; 2. scheduling is managed by an event scheduler hardened on the chip, so scheduling is faster.
The computing platform 110 provides deterministic scheduling capability based on the AI_CPU. Deterministic scheduling is a scheduling mechanism that provides faster event scheduling through an on-chip hardened event scheduler. Based on this scheduling capability, the computing platform can obtain a more stable, deterministic execution effect. Specifically, the computing platform 110 may route all interrupts and events of user-mode threads that trigger deterministic traffic into one event scheduler for execution. The event scheduler adopts a non-preemptive scheduling mode: once a user-mode thread is scheduled onto a CPU resource, it is guaranteed to execute within a determined time and will not be preempted.
Based on the above, the computing platform 110 may bind each function to an AI_CPU and run it there based on the deterministic scheduling capability of the AI_CPU, so that a developer can have the function exclusively occupy the AI_CPU and execute quickly on it, allowing the function to meet the expectation of deterministic execution.
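As a concrete illustration of this binding, the following Python sketch pins a process to a single dedicated core and then drains a queue of functions one by one, with no preemption between functions. It is a minimal sketch under stated assumptions, not the implementation of computing platform 110: the core index and queue contents are made up, and os.sched_setaffinity is a Linux-specific call used here only to illustrate core binding.

```python
# Minimal illustration (not the platform's implementation): bind this process to a
# hypothetical "AI_CPU" core and run queued functions to completion, one after another.
import os
from collections import deque

AI_CPU_CORE = 3  # hypothetical core index reserved for deterministic functions

def run_deterministically(function_queue: deque) -> None:
    os.sched_setaffinity(0, {AI_CPU_CORE})  # Linux-only: pin this process to the reserved core
    while function_queue:
        func, args = function_queue.popleft()
        func(*args)  # runs to completion; no other queued function preempts it

if __name__ == "__main__":
    queue = deque([(print, ("perception step",)), (print, ("planning step",))])
    run_deterministically(queue)
```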
In addition, computing platform 110 provides the ability to adjust memory access, so that developers can adjust the QoS DDR bandwidth access configuration of individual hardware resources/processes/threads on computing platform 110. In summary, the CPU on the computing platform 110 determines the computing speed of each function's functions/threads, and the DDR on computing platform 110 determines the memory operation speed of each function's functions/threads. Because the DDR bandwidth on computing platform 110 is limited, when the DDR bandwidth allocated to one function on an AI_CPU increases, the DDR bandwidth of the other functions decreases; when the DDR bandwidth allocated to one function on an AI_CPU decreases, the DDR bandwidth of the other functions increases. Therefore, how to adjust the bandwidth of each function on the computing platform 110 to achieve performance optimization becomes a problem to be solved.
In order to adjust the bandwidth of each function on a computing platform and thereby optimize performance, an embodiment of the present application provides a bandwidth adjustment method, which includes: the bandwidth adjusting device obtains the scheduling priority and the first bandwidth priority of a first function, where the first function corresponds to a first computing unit and the first computing unit belongs to a computing platform; the bandwidth adjusting device determines a second bandwidth priority of the first function based on the scheduling priority and the first bandwidth priority; and the bandwidth adjusting device determines, according to the second bandwidth priority, a first bandwidth range of the first function on the first computing unit, where the first bandwidth range is the first range of bandwidth allocated by the computing platform from the first computing unit for use by the first function. Because the bandwidth range allocated to the first function by the computing platform is determined according to the scheduling priority and the bandwidth priority of the first function, the performance of the computing platform can be improved, which in turn improves the performance of the intelligent driving device.
The intelligent driving system may include different product forms in the vehicle field, for example: vehicle-mounted chip, vehicle-mounted device (such as vehicle-mounted machine, vehicle-mounted computing platform, whole vehicle and server (virtual or physical)).
Before describing the bandwidth adjustment method in detail, a system architecture to which the bandwidth adjustment method provided in the embodiments of the present application is applicable is described.
Fig. 2 shows a schematic block diagram of a bandwidth adjustment system architecture provided by an embodiment of the present application. The bandwidth adjustment system 200 may include a deterministic thread group application program interface (application program interface, API) 201, an event scheduler 202, a joint optimizer (e.g., a computing platform as described above) 203, and a QoS DDR processor 204. The deterministic thread group API 201 is used to obtain thread group instances and to issue functions in a loop. The cache queue of the deterministic thread group API 201 is used to store functions to be executed/issued. The event scheduler 202 is configured to execute a specified function at a specified time. When the event scheduler 202 needs to execute a specified function, the deterministic thread group API 201 issues the function into its cache queue. The joint optimizer 203 is configured to read information of the function from the cache queue, obtain a scheduling priority and a first bandwidth priority of the function, determine a second bandwidth priority of the function according to the scheduling priority and the first bandwidth priority, and determine a bandwidth range of the function according to the second bandwidth priority.
In some implementations, during the operation of the function by the event scheduler 202, the joint optimizer 203 is further configured to obtain, from the event scheduler 202, information such as an operation duration of the function, a system throughput when the function is operated, a standard deviation of operation time consumption of the function for the last K times when the function is operated, and a number of waiting functions when the function is operated. The joint optimizer 203 is also configured to obtain information of a double data rate synchronous dynamic random access memory (DDR) from the QoS DDR processor 204 in real time. The joint optimizer 203 is further configured to count the above information in real time, and determine a cost of the function according to each information in the above information and the weight of each information. When the cost indicates that the bandwidth range of the function needs to be adjusted, the bandwidth adjusting means adjusts the bandwidth range of the function to obtain a target bandwidth range (e.g., a second bandwidth range as described below), at which the cost of the function is lower. After obtaining the target bandwidth range of the function, the joint optimizer 203 is further configured to invoke the setup interface of the QoS DDR processor 204 to adjust the bandwidth range of the target unit to which the function binds.
It should be understood that the system architecture shown in fig. 2 is merely exemplary, and that the system may include more or fewer modules or nodes in a particular implementation, and that the modules or nodes may be omitted or added according to actual circumstances.
Fig. 3 shows a schematic flowchart of a bandwidth adjustment method 300 provided in an embodiment of the present application, where the method 300 may be performed by the intelligent driving device 100 shown in fig. 1, or may also be performed by one or more processors of the computing platform 110 in fig. 1, or may also be performed by the MDC of the computing platform 110 shown in fig. 2, or may also be performed by a bandwidth adjustment device outside the intelligent driving device 100. The method 300 may include S301, S302, and S303, described below as an example of computing platform execution.
In S301, the computing platform obtains a scheduling priority and a first bandwidth priority of a first function, where the first function corresponds to a first computing unit, and the first computing unit belongs to the computing platform.
In one implementation, S301 may be specifically implemented as: the computing platform obtains a scheduling priority and a first bandwidth priority of a first function input through the display device. For example, the user may enter the scheduling priority and the first bandwidth priority of the first function via a display interface of a display device external to the computing platform. In another specific implementation manner, S301 may be specifically implemented as: the computing platform obtains the scheduling priority and the first bandwidth priority of the first function through a configuration file or a configuration interface. The computing platform scans a configuration file of the computing platform, and obtains a scheduling priority and a first bandwidth priority of the first function according to the configuration condition of resources of the computing platform recorded in the configuration file. For example, the configuration interface of the computing platform may identify priorities (e.g., scheduling priorities and first bandwidth priorities of the first functions) and traffic attributes of the first functions.
S302, the computing platform determines a second bandwidth priority of the first function according to the scheduling priority and the first bandwidth priority.
In one implementation, S302 may be implemented specifically as: the computing platform takes the scheduling priority of the first function as a second bandwidth priority of the first function. In another specific implementation, S302 may be specifically implemented as: the computing platform determines a second bandwidth priority of the first function according to the scheduling priority of the first function and the weight thereof, and the first bandwidth priority of the first function and the weight thereof.
S303, the computing platform determines a first bandwidth range of the first computing unit corresponding to the first function according to the second bandwidth priority.
Wherein the first bandwidth range is a first range of bandwidths allocated by the computing platform from the first computing unit for use by the first function.
In one implementation, S303 may be specifically implemented as: the computing platform determines a bandwidth range of the first function according to the bandwidth priority-bandwidth range mapping function and the second bandwidth priority. In another specific implementation manner, S303 may be specifically implemented as: the computing platform determines a bandwidth range of the first function according to the bandwidth priority-bandwidth range mapping table and the second bandwidth priority.
According to the method and the device, the bandwidth range allocated to the first function by the computing platform is determined according to the scheduling priority and the bandwidth priority of the first function, so that the performance of the computing platform can be improved, and the performance of the intelligent driving equipment is further improved.
In some embodiments, the running information of the first function may include: the running time of the first function, the system throughput when the first function runs, the running time standard deviation of the last K times when the first function runs, and the number of waiting functions when the first function runs. When the operation information of the first function includes a plurality of information, the computing platform may adjust the first bandwidth range according to the operation information of the first function to obtain the second bandwidth range. Specifically, the computing platform determines a weight for each of the plurality of information. The computing platform determines a cost of the first function based on each type of information and the weight of each type of information. When the cost indicates that the bandwidth range of the first function needs to be adjusted, the computing platform adjusts the first bandwidth range to obtain a second bandwidth range, wherein the second bandwidth range is a second range of bandwidths which the computing platform allocates to the first function from the first computing unit for use, and the cost when the first function operates in the second bandwidth range is better than the cost when the first function operates in the first bandwidth range.
In this embodiment of the present application, the adjusted second bandwidth range is closer to the ideal value by dynamically adjusting the first bandwidth range of the first function. Ensuring that the cost of the first function operating in the second bandwidth range is better than the cost of the first function operating in the first bandwidth range allows for accurate setting of the desired objective in the optimization process.
Fig. 4A and fig. 4B show a schematic flowchart of a bandwidth adjustment method 400 provided in an embodiment of the present application, and the method 400 is an extension of the method 300. Illustratively, the steps performed by the computing platform in the method 400 may be performed by the computing platform that performs the method 300 shown in fig. 3. It should be understood that the steps or operations of the bandwidth adjustment method shown in fig. 4A and fig. 4B are merely examples, and embodiments of the present application may also perform other operations or variations of the respective operations in fig. 4A and fig. 4B. Further, the various steps in fig. 4A and fig. 4B may be performed in a different order than presented, and possibly not all of the operations in fig. 4A and fig. 4B are to be performed.
The method 400 may include: s401, S402, and S403.
S401, a computing platform acquires a scheduling priority and a first bandwidth priority of a first function, wherein the first function corresponds to a first computing unit, and the first computing unit belongs to the computing platform.
The first bandwidth priority may be a priority configured for the first function in advance, and the first bandwidth priority may also be referred to as a first access priority, which is not specifically limited in the embodiment of the present application.
In one implementation, S401 may be implemented specifically as follows: the computing platform obtains a scheduling priority and a first bandwidth priority of a first function input through the display device. The display device may be understood as a display device external to the computing platform, although the display device may be part of the computing platform. The display device may be a device provided with a display screen. The display device may be in communication with a computing platform. In this way, the user may input the scheduling priority and the first bandwidth priority of the first function to the computing platform via the display device. Here, the scheduling priority of the first function may be input to the computing platform by the user through the display device, or may be the scheduling priority of the first function identified by the interface of the computing platform.
Fig. 5 is an exemplary diagram of an interface of a display device. The display device 1 displays an interface 501 as shown in fig. 5, and an input box 502 for inputting function information is displayed on the interface 501. Also displayed on this interface 501 is an input box 503 for inputting the priorities of functions. The user may input information of the first function in the input box 502, where the information of the first function may include information of a name of the first function, a name of a class to which the first function belongs, a service attribute of the first function, and so on. Illustratively, the function information table is shown in table 1 below. The first function is any one of the functions in table 1. Next, the user enters the first bandwidth priority of the first function, or the first bandwidth priority and the scheduling priority of the first function, in an input box 503. Thereafter, the user may click on the submit control 504 on the interface 501, and the display device 1 sends the information of the first function, as well as the first bandwidth priority and the scheduling priority of the first function, to the computing platform 110.
TABLE 1 function information Table
In another specific implementation manner, S401 may be implemented specifically as follows: the computing platform obtains the scheduling priority and the first bandwidth priority of the first function through a configuration file or a configuration interface.
In one implementation, the computing platform may obtain the scheduling priority and the first bandwidth priority of the first function by scanning a configuration file, where the configuration file records the resource configuration of the computing platform. For example, when the computing platform includes an MDC platform, the configuration file records the specific core allocation on the MDC platform, such as which cores are CTRL_CPUs, which cores are AI_CPUs, and which functions are bound to the AI_CPUs, where each function corresponds to one computing unit and each computing unit can implement one function. In addition, the AI_CPUs in the configuration file may be bound to threads, that is, one thread monopolizes one core, so that a deterministic thread group is obtained; the cache queue of the deterministic thread group is used to manage tasks/functions to be executed and issued.
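For illustration only, the sketch below shows what scanning such a configuration might look like in Python; the file format, field names, core numbers and priority values are hypothetical assumptions and are not defined by this application.

```python
# Hypothetical configuration for an MDC-like platform: which cores are CTRL_CPUs,
# which are AI_CPUs, and which functions are bound to AI_CPUs with their priorities.
import json

EXAMPLE_CONFIG = """
{
  "ctrl_cpus": [0, 1],
  "ai_cpus": [2, 3, 4, 5],
  "functions": [
    {"name": "lane_detect",     "ai_cpu": 2, "sched_priority": 7, "bandwidth_priority": 5},
    {"name": "obstacle_fusion", "ai_cpu": 3, "sched_priority": 9, "bandwidth_priority": 8}
  ]
}
"""

def scan_config(text: str) -> dict:
    cfg = json.loads(text)
    # For each bound function, return the two priorities that S401 needs.
    return {f["name"]: (f["sched_priority"], f["bandwidth_priority"])
            for f in cfg["functions"]}

print(scan_config(EXAMPLE_CONFIG))
```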
In another particular implementation, the computing platform obtains the scheduling priority and the first bandwidth priority of the first function through a configuration interface. The tasks/functions to be executed in the cache queue of the deterministic thread group described above carry priority and traffic attributes, which are synchronized to the QoS DDR adjustment interface in real time; the QoS DDR adjustment interface may be understood as the configuration interface. When a task/function in the cache queue is about to be issued, the QoS DDR adjustment interface identifies the traffic attribute and the priority of the function, and thereby obtains the scheduling priority and the first bandwidth priority of the first function.
S402, the computing platform determines a second bandwidth priority of the first function according to the scheduling priority and the first bandwidth priority.
If the first bandwidth priority is defined as the initial bandwidth priority of the first function, the second bandwidth priority may be understood as the actual bandwidth priority of the first function.
S402 may be implemented in two ways:
in a first manner, the computing platform determines the scheduling priority of the first function as the second bandwidth priority of the first function, and specifically, expression 1 may be:
g(x_i) = f(x_i), i = 0, 1, 2, 3, 4, …
where i denotes the function number, g(x_i) denotes the second bandwidth priority of the i-th function, and f(x_i) denotes the scheduling priority of the i-th function.
In one implementation, as shown in fig. 5, for example, the user may input the scheduling priority of the first function in the input box 503, where the display apparatus 1 sends the scheduling priority of the first function to the computing platform 110, and the computing platform 110 may determine that the second bandwidth priority of the first function is the scheduling priority of the first function according to the above expression 1. In this way, the user only needs to input the scheduling priority of the function on the interface of the display device 1, and the operation is simple.
In a second manner, the computing platform determines a second bandwidth priority of the first function based on the scheduling priority of the first function and its weight, and the first bandwidth priority of the first function and its weight. Specifically, a computing platform acquires a first weight of a scheduling priority and a second weight of a first bandwidth priority; the computing platform determines a second bandwidth priority based on the scheduling priority, the first weight, the first bandwidth priority, and the second weight. Specifically, expression 2 may be:
g(x_i)' = α_i · g(x_i) + β_i · f(x_i), i = 0, 1, 2, 3, 4, …
where g(x_i)' denotes the second bandwidth priority of the i-th function; g(x_i) denotes the first bandwidth priority of the i-th function; f(x_i) denotes the scheduling priority of the i-th function; α_i denotes the second weight corresponding to the first bandwidth priority of the i-th function; β_i denotes the first weight corresponding to the scheduling priority of the i-th function; and i denotes the function number.
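As a numerical illustration of the two manners above, the Python sketch below computes the second bandwidth priority both ways; the priority values and weights are made up for illustration, and the weighted-sum form of expression 2 follows the reconstruction given above.

```python
# Manner 1: use the scheduling priority directly as the second bandwidth priority.
def second_priority_manner1(sched_priority: float) -> float:
    return sched_priority  # g(x_i) = f(x_i)

# Manner 2: weighted combination of the first bandwidth priority and the scheduling
# priority, i.e. g(x_i)' = alpha_i * g(x_i) + beta_i * f(x_i) (assumed form).
def second_priority_manner2(bw_priority: float, sched_priority: float,
                            alpha: float, beta: float) -> float:
    return alpha * bw_priority + beta * sched_priority

print(second_priority_manner1(7))                          # -> 7
print(second_priority_manner2(5, 7, alpha=0.4, beta=0.6))  # -> 6.2
```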
In one implementation, by way of example, fig. 6 is another interface schematic of a display device. The display apparatus 1 displays an interface 601 as shown in fig. 6, and an input box 602 for inputting a function scheduling priority is displayed on the interface 601. An input box 603 for inputting a first weight corresponding to the scheduling priority is displayed on the interface 601. An input box 604 for inputting a first bandwidth priority of the function is displayed on the interface 601. An input box 605 for inputting a second weight corresponding to the first bandwidth priority is displayed on the interface 601. The user inputs the scheduling priority of the first function and its first weight, the first bandwidth priority and its second weight in input boxes 602-605, respectively. Thereafter, the display apparatus 1 transmits the above information input by the user to the computing platform 110, and the computing platform 110 may determine the second bandwidth priority of the first function based on the above expression 2.
In the embodiment of the application, the final bandwidth priority of the function is flexibly determined by integrating the influence of the scheduling priority and the first bandwidth priority of the function and combining the weights of different priorities of each function, so that the calculation result is closer to the ideal result.
S403, the computing platform determines a first bandwidth range of the first computing unit corresponding to the first function according to the second bandwidth priority.
Wherein the first bandwidth range is a first range of bandwidths allocated by the computing platform from the first computing unit for use by the first function.
In one implementation, S403 may be specifically implemented as: the computing platform determines a first bandwidth range of the first function based on the bandwidth priority-bandwidth range mapping function and the second bandwidth priority. The bandwidth priority-bandwidth range mapping function may be understood as that the bandwidth priority and the bandwidth range satisfy the change rule of the designated function, and then the computing platform may determine the first bandwidth range corresponding to the second bandwidth priority according to the second bandwidth priority and the mapping function.
In another specific implementation manner, S403 may be specifically implemented as follows: the computing platform determines a first bandwidth range of the first function according to the bandwidth priority-bandwidth range mapping table and the second bandwidth priority. The bandwidth priority-bandwidth range mapping table may be understood as a mapping relationship in which bandwidth priority and bandwidth range are recorded. Then, the computing platform may determine the first bandwidth range corresponding to the second bandwidth priority according to the second bandwidth priority and the mapping relationship.
In another specific implementation manner, the first bandwidth range of the first function may be set, as in expression 3, according to the relationship between the second bandwidth priority and a threshold value, where: x_i denotes the second bandwidth priority of the i-th function; g(x_i) denotes the first boundary and the second boundary of the first bandwidth range of the i-th function; QOS_DDR_MIN_LIMIT denotes the first boundary of the first bandwidth range; QOS_DDR_MAX_LIMIT denotes the second boundary of the first bandwidth range; and threshold denotes the threshold used to partition function priorities.
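The two implementations above, a bandwidth priority-bandwidth range mapping table and a threshold comparison in the spirit of expression 3, could be sketched as follows. The table entries, the threshold, and the QOS_DDR_MIN_LIMIT/QOS_DDR_MAX_LIMIT values (in MB/s) are illustrative assumptions rather than values specified by this application.

```python
# Hypothetical bandwidth priority -> (first boundary, second boundary) mapping, in MB/s.
PRIORITY_TO_RANGE = {
    1: (100, 400),
    2: (200, 800),
    3: (400, 1600),
}

def range_from_table(second_priority: int) -> tuple:
    return PRIORITY_TO_RANGE[second_priority]

# Threshold-style selection: functions whose second bandwidth priority reaches the
# threshold get the full limits; others get a narrower upper boundary (assumed rule).
QOS_DDR_MIN_LIMIT, QOS_DDR_MAX_LIMIT = 100, 1600
THRESHOLD = 2

def range_from_threshold(second_priority: int) -> tuple:
    if second_priority >= THRESHOLD:
        return (QOS_DDR_MIN_LIMIT, QOS_DDR_MAX_LIMIT)
    return (QOS_DDR_MIN_LIMIT, QOS_DDR_MAX_LIMIT // 4)

print(range_from_table(2), range_from_threshold(3))  # (200, 800) (100, 1600)
```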
In some embodiments, the bandwidth adjustment method provided in the embodiments of the present application may further include: s404 and S405.
In S404, the computing platform obtains the running information of the first function.
Wherein, the operation information may include: the running time of the first function, the system throughput when the first function runs, the last K running time standard deviations when the first function runs, and the number of waiting functions when the first function runs.
In one implementation, fig. 7 is a schematic diagram of yet another interface of a display device, by way of example. The display device displays an interface 701 as shown in fig. 7, and an input box 702 for inputting running information of a function is displayed on the interface 701. The user may enter the running information of the first function in input box 702, after which the display device sends the running information of the first function to the computing platform.
S405, the computing platform adjusts the first bandwidth range according to the operation information of the first function so as to obtain a second bandwidth range.
Wherein the second bandwidth range is a second range of bandwidths allocated by the computing platform from the first computing unit to the first function for use.
In one implementation, when the operation information includes a plurality of information, S405 may include: s4051, S4052 and S4053. In S4051, the computing platform determines a weight for each of the plurality of information.
The operation information may include: the running time of the first function, the system throughput when the first function runs, the running time standard deviation of the last K times when the first function runs, and the number of waiting functions when the first function runs. Accordingly, the weight of each information may include: the method comprises the steps of a third weight corresponding to the running time of a first function, a fourth weight corresponding to the system throughput when the first function runs, a fifth weight corresponding to the last K time of running time standard deviation when the first function runs, and a sixth weight corresponding to the number of waiting functions when the first function runs.
In one implementation, the weight of each of the above information may be input to the computing platform by the user via the display device. Illustratively, the display device displays an interface 701 as shown in fig. 7, and an input box 703 for inputting weights of running information of functions is also displayed on the interface 701. The user may input the weight corresponding to the operation information of the first function in the input box 703, and then the display device sends the weight corresponding to the operation information of the first function to the computing platform.
And S4052, the computing platform determines the cost of the first function according to each piece of information and the weight of each piece of information.
In a specific implementation, after the computing platform receives the running information of the first function and the weight corresponding to the running information, the computing platform may determine the cost of the first function according to the information. Specifically, expression 4 may be:
Cost_i = α_i · Cost_time + β_i · Cost_throughput + γ_i · Cost_standDev + δ_i · Cost_waitNum, i = 1, 2, …, n
where i denotes the function number; n denotes that there are n functions in total; α_i, β_i, γ_i and δ_i denote the four weights corresponding to the respective cost terms of the i-th function; Cost_time denotes the execution duration of the i-th function; Cost_throughput denotes the system throughput when the i-th function runs; Cost_standDev denotes the standard deviation of the last K execution times of the i-th function; and Cost_waitNum denotes the number of waiting functions when the i-th function is executed.
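A small numerical sketch of this weighted cost is given below; the weighted-sum form follows expression 4 as written above, and all weights and statistics are illustrative values only.

```python
# Weighted cost of one function computed from its four runtime statistics.
def function_cost(cost_time: float, cost_throughput: float,
                  cost_std_dev: float, cost_wait_num: float,
                  alpha: float, beta: float, gamma: float, delta: float) -> float:
    return (alpha * cost_time + beta * cost_throughput
            + gamma * cost_std_dev + delta * cost_wait_num)

cost = function_cost(cost_time=12.0,       # execution duration of the function (ms)
                     cost_throughput=0.8,  # system throughput term while the function runs
                     cost_std_dev=1.5,     # std. deviation of the last K execution times
                     cost_wait_num=3,      # number of functions waiting
                     alpha=0.5, beta=0.2, gamma=0.2, delta=0.1)
print(cost)  # 0.5*12 + 0.2*0.8 + 0.2*1.5 + 0.1*3 = 6.76
```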
S4053, when the cost indicates that the bandwidth range of the first function needs to be adjusted, the computing platform adjusts the first bandwidth range to obtain the second bandwidth range, where the cost of the first function running in the second bandwidth range is better than the cost of the first function running in the first bandwidth range.
That is, the computing platform adjusts the first bandwidth range such that the adjusted bandwidth range (i.e., the second bandwidth range) is near the ideal value.
In one implementation, the computing platform may employ the control principle of a proportional-integral-derivative (proportional integral derivative, PID) controller to adjust the first bandwidth range of the first function and gradually reduce the cost, so that the adjusted cost corresponding to the second bandwidth range is better than the cost corresponding to the first bandwidth range.
In one implementation, the second bandwidth range may include a first boundary and a second boundary, the first boundary being less than or equal to a first threshold and the second boundary being greater than or equal to a second threshold. That is, the first boundary of the second bandwidth range cannot be greater than the first threshold, and the second boundary of the second bandwidth range cannot be less than the second threshold.
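The sketch below shows one possible adjustment step in the spirit of the PID idea, keeping only the proportional term and then applying the boundary constraints just described; the gain, the target cost and the two thresholds are assumptions made for illustration, not values from this application.

```python
# Proportional-only adjustment of a bandwidth range toward a target cost, followed by
# clamping: the first boundary must not exceed FIRST_THRESHOLD and the second boundary
# must not fall below SECOND_THRESHOLD. All constants are illustrative.
FIRST_THRESHOLD = 300.0   # upper limit allowed for the range's first (lower) boundary
SECOND_THRESHOLD = 600.0  # lower limit allowed for the range's second (upper) boundary
KP = 50.0                 # proportional gain: MB/s of widening per unit of cost error

def adjust_range(band_range: tuple, measured_cost: float, target_cost: float) -> tuple:
    low, high = band_range
    error = measured_cost - target_cost  # positive error -> widen the range
    low -= KP * error
    high += KP * error
    low = min(low, FIRST_THRESHOLD)      # keep the first boundary <= first threshold
    high = max(high, SECOND_THRESHOLD)   # keep the second boundary >= second threshold
    return (max(low, 0.0), high)

print(adjust_range((200.0, 800.0), measured_cost=8.0, target_cost=6.0))  # (100.0, 900.0)
```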
In this embodiment of the present application, the adjusted second bandwidth range is closer to the ideal value by dynamically adjusting the first bandwidth range of the first function. The cost of the first function running in the second bandwidth range is ensured to be better than that of the first function running in the first bandwidth range, the expected target in the optimization process can be accurately set, and the optimization purpose and the influence range can be effectively adjusted.
In the absence of specific recitations and logic conflict, terms and/or descriptions between various embodiments of the present application may be consistent and interchangeable, and features of different embodiments may be combined to form new embodiments in accordance with their inherent logic.
The bandwidth adjustment method provided in the embodiment of the present application is described in detail above with reference to fig. 1 to 7. The bandwidth adjusting apparatus provided in the embodiment of the present application will be described in detail below with reference to fig. 8. It should be understood that the descriptions of the apparatus embodiments and the descriptions of the method embodiments correspond to each other, and thus, descriptions of details not described may be referred to the above method embodiments, which are not repeated herein for brevity.
Fig. 8 shows a schematic block diagram of a bandwidth adjustment device 800 provided in an embodiment of the present application, where the device 800 includes: a first acquisition unit 801 and a determination unit 802; wherein,
the first acquisition unit 801 is configured to: the method comprises the steps of obtaining scheduling priority and first bandwidth priority of a first function, wherein the first function corresponds to a first computing unit, and the first computing unit belongs to a computing platform. The first obtaining unit 801 may perform the step of S401 described above, for example. The first acquisition unit 801 may be the joint optimizer 203 shown in fig. 2. Specifically, the first obtaining unit 801 may be a module for reading function information of the joint optimizer 203 shown in fig. 2.
The determining unit 802 is configured to: determine a second bandwidth priority of the first function according to the scheduling priority and the first bandwidth priority; and determine, according to the second bandwidth priority, a first bandwidth range of the first function on the first computing unit, where the first bandwidth range is the first range of bandwidth allocated by the computing platform from the first computing unit for use by the first function. For example, the determining unit 802 may perform the steps of S402 and S403 described above. The determining unit 802 may be the joint optimizer 203 shown in fig. 2. In particular, the determining unit 802 may be the module of the joint optimizer 203 shown in fig. 2 that calculates the bandwidth range of a function.
In one implementation, the determining unit 802 is further configured to: acquiring a first weight of a scheduling priority and a second weight of a first bandwidth priority; and determining the second bandwidth priority according to the scheduling priority, the first weight, the first bandwidth priority and the second weight.
In one implementation, the determining unit 802 is configured to: determine the first bandwidth range of the first function based on a bandwidth priority-bandwidth range mapping function or a bandwidth priority-bandwidth range mapping table, and the second bandwidth priority.
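The mapping from bandwidth priority to bandwidth range can be illustrated with a small lookup table; the priority levels and the MB/s figures below are invented for illustration and are not taken from the patent.

```python
# Hypothetical bandwidth priority -> (lower, upper) bandwidth range in MB/s.
PRIORITY_TO_RANGE = {
    0: (200, 800),
    1: (400, 1600),
    2: (800, 3200),
    3: (1600, 6400),
}

def first_bandwidth_range(second_bw_priority):
    """Map a second bandwidth priority to a first bandwidth range by choosing
    the nearest priority level in the table (one possible mapping rule)."""
    level = min(PRIORITY_TO_RANGE, key=lambda k: abs(k - second_bw_priority))
    return PRIORITY_TO_RANGE[level]
```

A bandwidth priority-bandwidth range mapping function could equally be used, for example a monotone function that widens and raises the range as the priority increases.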
In one implementation, the first obtaining unit 801 is configured to: acquire the scheduling priority and the first bandwidth priority of the first function input through a display device; or acquire the scheduling priority and the first bandwidth priority of the first function through a configuration file or a configuration interface.
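A configuration-file path could look like the following sketch; the JSON layout and the field names (name, sched_priority, bw_priority) are assumptions made for illustration, since the embodiment only requires that the two priorities be obtainable from a configuration file or configuration interface.

```python
import json

def load_function_priorities(path):
    """Read per-function scheduling and bandwidth priorities from a JSON file.

    Hypothetical layout:
        {"functions": [{"name": "lane_detect", "sched_priority": 3, "bw_priority": 2}]}
    """
    with open(path, "r", encoding="utf-8") as f:
        cfg = json.load(f)
    return {
        item["name"]: (item["sched_priority"], item["bw_priority"])
        for item in cfg["functions"]
    }
```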
In one implementation, the first obtaining unit 801 is configured to: acquire the first weight and the second weight input through the display device.
In some implementations, the apparatus 800 further includes: a second obtaining unit 803 and an adjusting unit 804; wherein,
the second obtaining unit 803 is configured to: acquire the operation information of the first function. The operation information includes: the running time of the first function, the system throughput when the first function runs, the standard deviation of the running time over the last K runs of the first function, and the number of waiting functions when the first function runs. For example, the second obtaining unit 803 may perform step S404 described above. The second obtaining unit 803 may be the joint optimizer 203 shown in fig. 2; specifically, it may be the real-time result statistics module of the joint optimizer 203.
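The operation information listed above can be collected with a small statistics helper; the class layout, the window size K, and the field names are illustrative assumptions rather than part of the embodiment.

```python
from collections import deque
from statistics import pstdev

class FunctionRunStats:
    """Track, for one function: the running time of each run, the system
    throughput at run time, the number of waiting functions at run time, and
    the standard deviation of the running time over the last K runs."""

    def __init__(self, k=10):
        self.last_runtimes = deque(maxlen=k)  # last K running times, in seconds
        self.throughput = 0.0
        self.waiting_functions = 0

    def record_run(self, runtime_s, throughput, waiting_functions):
        self.last_runtimes.append(runtime_s)
        self.throughput = throughput
        self.waiting_functions = waiting_functions

    def runtime_stddev(self):
        # Standard deviation of the running time over the recorded runs.
        return pstdev(self.last_runtimes) if len(self.last_runtimes) > 1 else 0.0
```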
The adjusting unit 804 is configured to: adjust the first bandwidth range according to the operation information of the first function to obtain a second bandwidth range, where the second bandwidth range is a second range of the bandwidth of the first computing unit that the computing platform allocates to the first function. For example, the adjusting unit 804 may perform step S405 described above. The adjusting unit 804 may be the joint optimizer 203 shown in fig. 2; specifically, it may be the module of the joint optimizer 203 that calls the QoS DDR interface.
In one implementation, when the operation information includes a plurality of pieces of information, the adjusting unit 804 is configured to: determine the weight of each piece of information; determine the cost of the first function according to each piece of information and its weight; and when the cost indicates that the bandwidth range of the first function needs to be adjusted, adjust the first bandwidth range to obtain the second bandwidth range, where the cost when the first function runs in the second bandwidth range is better than the cost when the first function runs in the first bandwidth range. Illustratively, the adjusting unit 804 may perform steps S4051 to S4053 described above.
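One way to realize the cost computation and the adjustment decision is a weighted linear combination of the pieces of operation information, followed by a threshold test; the linear form, the sign convention (higher throughput lowers the cost), the threshold, and the step size are all assumptions for illustration.

```python
def run_cost(runtime_s, throughput, runtime_stddev, waiting_count, weights):
    """Weighted cost of one run of the function; lower is better.
    'weights' maps each kind of operation information to its weight."""
    return (weights["runtime"] * runtime_s
            + weights["stddev"] * runtime_stddev
            + weights["waiting"] * waiting_count
            - weights["throughput"] * throughput)

def adjust_bandwidth_range(first_range, current_cost, cost_threshold, step):
    """Return a second bandwidth range when the cost indicates that the first
    range needs to be adjusted; otherwise keep the first range. Widening the
    upper boundary is only one possible adjustment."""
    lower, upper = first_range
    if current_cost > cost_threshold:
        return (lower, upper + step)
    return first_range
```

The adjusted range would then be subject to the boundary constraint described earlier, with the first boundary not exceeding the first threshold and the second boundary not falling below the second threshold.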
In one implementation, the second bandwidth range includes a first boundary and a second boundary, the first boundary being equal to or less than a first threshold, the second boundary being equal to or greater than a second threshold.
According to the method and the device, the bandwidth range allocated to the first function by the computing platform is determined according to the scheduling priority and the bandwidth priority of the first function, so that the performance of the computing platform can be improved, and the performance of the intelligent driving equipment is further improved.
The embodiments of the present application further provide an apparatus, including a processing unit and a storage unit, where the storage unit is configured to store instructions, and the processing unit executes the instructions stored in the storage unit, so that the apparatus performs the methods or steps performed in the foregoing embodiments.
It should be understood that the division of the units in the above apparatus is only a division of logical functions; in an actual implementation, the units may be fully or partially integrated into one physical entity or may be physically separate. The units in the apparatus may be implemented in the form of software invoked by a processor; for example, the apparatus includes a processor connected to a memory, instructions are stored in the memory, and the processor calls the instructions stored in the memory to implement any of the above methods or the functions of the units of the apparatus, where the processor is, for example, a general-purpose processor such as a CPU or a microprocessor, and the memory is a memory inside the apparatus or a memory outside the apparatus. Alternatively, the units in the apparatus may be implemented in the form of hardware circuits, and the functions of some or all of the units may be implemented through the design of the hardware circuits, which may be understood as one or more processors. For example, in one implementation, the hardware circuit is an ASIC, and the functions of some or all of the above units are implemented by designing the logical relationships of the elements in the circuit; in another implementation, the hardware circuit may be implemented by a PLD, for example an FPGA, which may include a large number of logic gates whose connection relationships are configured through a configuration file to implement the functions of some or all of the above units. All units of the above apparatus may be implemented in the form of software invoked by a processor, entirely in the form of hardware circuits, or partly in the form of software invoked by a processor with the remainder in the form of hardware circuits.
In the embodiments of the present application, the processor is a circuit with signal processing capability. In one implementation, the processor may be a circuit with the capability of reading and running instructions, for example a CPU, a microprocessor, a GPU, or a DSP; in another implementation, the processor may implement a function through the logical relationship of a hardware circuit, where the logical relationship is fixed or reconfigurable, for example a hardware circuit implemented as an ASIC or a PLD, such as an FPGA. For a reconfigurable hardware circuit, the process in which the processor loads a configuration file to complete the configuration of the hardware circuit may be understood as a process in which the processor loads instructions to implement the functions of some or all of the above units. Furthermore, the processor may be a hardware circuit designed for artificial intelligence, which may be understood as an ASIC, such as an NPU, a TPU, or a DPU.
It will be seen that each of the units in the above apparatus may be one or more processors (or processing circuits) configured to implement the above method, for example: CPU, GPU, NPU, TPU, DPU, microprocessor, DSP, ASIC, FPGA, or a combination of at least two of these processor forms.
Furthermore, the units in the above apparatus may be integrated together in whole or in part, or may be implemented independently. In one implementation, these units are integrated together and implemented in the form of a system-on-a-chip (SOC). The SOC may include at least one processor for implementing any of the methods above or for implementing the functions of the units of the apparatus, where the at least one processor may be of different types, including, for example, a CPU and an FPGA, a CPU and an artificial intelligence processor, a CPU and a GPU, and the like.
Optionally, in this possible design, for all content related to the steps of the method embodiments shown in fig. 1 to fig. 7 described above, refer to the functional description of the corresponding functional module; details are not repeated here. The intelligent driving system described in this possible design is configured to perform the functions of the intelligent driving system in the bandwidth adjustment method shown in fig. 1 to fig. 7, and therefore can achieve the same effects as the bandwidth adjustment method described above.
The bandwidth adjusting device provided by the embodiment of the application comprises: a processor and a memory coupled to the processor, the memory for storing computer program code comprising computer instructions that, when read from the memory by the processor, cause the bandwidth adjustment apparatus to perform the bandwidth adjustment methods shown in fig. 1-7. In some embodiments, the bandwidth adjustment device is a computing platform.
An intelligent driving device provided in an embodiment of the present application includes: a bandwidth adjustment device, the bandwidth adjustment device comprising: a processor and a memory coupled to the processor, the memory for storing computer program code comprising computer instructions that, when read from the memory by the processor, cause the bandwidth adjustment apparatus to perform the bandwidth adjustment methods shown in fig. 1-7. In some embodiments, the bandwidth adjustment device is a computing platform. In some embodiments, the intelligent driving device is a vehicle.
An intelligent driving device provided in an embodiment of the present application includes: the bandwidth adjusting apparatus shown in fig. 8. In some embodiments, the intelligent driving device is a vehicle.
The embodiment of the application provides a computer program product, which when run on a computer, causes the computer to execute the bandwidth adjustment method shown in fig. 1 to 7.
The embodiments of the present application provide a computer-readable storage medium, including computer instructions which, when run on a terminal, cause the terminal to perform the bandwidth adjustment method shown in fig. 1 to 7.
The chip system provided by the embodiment of the application comprises one or more processors, and when the one or more processors execute instructions, the one or more processors execute the bandwidth adjustment method shown in fig. 1 to 7.
The foregoing is merely specific embodiments of the present application, but the protection scope of the present application is not limited thereto. Any change or substitution readily conceivable by a person skilled in the art within the technical scope disclosed in the present application shall fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
In the description of the embodiments of the present application, unless otherwise indicated, "/" means "or"; for example, A/B may represent A or B. "And/or" herein merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may mean: only A exists, both A and B exist, or only B exists.
The terms "first" and "second" are used below for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more such feature. In the description of the embodiments of the present application, unless otherwise indicated, the meaning of "a plurality" is two or more.
In the embodiments of the present application, words such as "exemplary" or "such as" are used to mean serving as examples, illustrations, or descriptions. Any embodiment or design described herein as "exemplary" or "for example" should not be construed as preferred or advantageous over other embodiments or designs. Rather, the use of words such as "exemplary" or "such as" is intended to present related concepts in a concrete fashion.
It will be appreciated that the communication device or the like described above, in order to implement the above-described functions, includes corresponding hardware structures and/or software modules that perform the respective functions. Those of skill in the art will readily appreciate that the elements and algorithm steps of the examples described in connection with the embodiments disclosed herein may be implemented as hardware or combinations of hardware and computer software. Whether a function is implemented as hardware or computer software driven hardware depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the embodiments of the present application.
The embodiment of the present application may divide the functional modules of the communication device or the like according to the above method example, for example, each functional module may be divided corresponding to each function, or two or more functions may be integrated into one processing module. The integrated modules may be implemented in hardware or in software functional modules. It should be noted that, in the embodiment of the present application, the division of the modules is schematic, which is merely a logic function division, and other division manners may be implemented in actual implementation.
From the foregoing description of the embodiments, it will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of functional modules is illustrated, and in practical application, the above-described functional allocation may be implemented by different functional modules according to needs, i.e. the internal structure of the apparatus is divided into different functional modules to implement all or part of the functions described above. The specific working processes of the above-described systems, devices and units may refer to the corresponding processes in the foregoing method embodiments, which are not described herein.

Claims (20)

1. A method of bandwidth adjustment, the method comprising:
acquiring a scheduling priority and a first bandwidth priority of a first function, wherein the first function corresponds to a first computing unit, and the first computing unit belongs to a computing platform;
determining a second bandwidth priority of the first function according to the scheduling priority and the first bandwidth priority;
and determining a first bandwidth range of the first computing unit corresponding to the first function according to the second bandwidth priority, wherein the first bandwidth range is a first range of bandwidth allocated to the first function from the first computing unit by the computing platform.
2. The method of claim 1, wherein said determining a second bandwidth priority of the first function based on the scheduling priority and the first bandwidth priority comprises:
acquiring a first weight of the scheduling priority and a second weight of the first bandwidth priority;
and determining the second bandwidth priority according to the scheduling priority, the first weight, the first bandwidth priority and the second weight.
3. The method of claim 1, wherein said determining a first bandwidth range for the first computing unit corresponding to the first function based on the second bandwidth priority comprises:
determining a first bandwidth range of the first function based on a bandwidth priority-bandwidth range mapping function or a bandwidth priority-bandwidth range mapping table, and the second bandwidth priority.
4. A method according to any of claims 1-3, wherein said obtaining the scheduling priority and the first bandwidth priority of the first function comprises:
acquiring a scheduling priority and a first bandwidth priority of the first function input through a display device; or
acquiring the scheduling priority and the first bandwidth priority of the first function through a configuration file or a configuration interface.
5. A method according to claim 2 or 3, wherein the obtaining a first weight of the scheduling priority and a second weight of the first bandwidth priority comprises:
and acquiring the first weight and the second weight input through the display device.
6. A method according to any one of claims 1-3, characterized in that the method further comprises:
acquiring operation information of the first function;
adjusting the first bandwidth range according to the operation information of the first function to obtain a second bandwidth range, wherein the second bandwidth range is a second range of bandwidth allocated to the first function by the computing platform from the first computing unit, and the operation information comprises: the running time of the first function, the system throughput when the first function runs, the standard deviation of the running time of the last K runs of the first function, and the number of waiting functions when the first function runs.
7. The method of claim 6, wherein when the operation information includes a plurality of pieces of information, the adjusting the first bandwidth range according to the operation information of the first function to obtain a second bandwidth range comprises:
determining a weight of each of the plurality of pieces of information;
determining the cost of the first function according to each piece of information and the weight of each piece of information;
and when the cost indicates that the bandwidth range of the first function needs to be adjusted, adjusting the first bandwidth range to obtain the second bandwidth range, wherein the cost when the first function operates in the second bandwidth range is better than the cost when the first function operates in the first bandwidth range.
8. The method according to claim 6 or 7, wherein the second bandwidth range comprises a first boundary and a second boundary, the first boundary being equal to or less than a first threshold and the second boundary being equal to or greater than a second threshold.
9. A bandwidth adjustment device, the device comprising: a first acquisition unit and a determination unit, wherein,
the first acquisition unit is used for:
acquiring a scheduling priority and a first bandwidth priority of a first function, wherein the first function corresponds to a first computing unit, and the first computing unit belongs to a computing platform;
the determining unit is used for:
determining a second bandwidth priority of the first function according to the scheduling priority and the first bandwidth priority;
and determining a first bandwidth range of the first computing unit corresponding to the first function according to the second bandwidth priority, wherein the first bandwidth range is a first range of bandwidth allocated to the first function from the first computing unit by the computing platform.
10. The apparatus according to claim 9, wherein the determining unit is configured to:
acquiring a first weight of the scheduling priority and a second weight of the first bandwidth priority;
and determining the second bandwidth priority according to the scheduling priority, the first weight, the first bandwidth priority and the second weight.
11. The apparatus according to claim 9, wherein the determining unit is configured to:
determining a first bandwidth range of the first function based on a bandwidth priority-bandwidth range mapping function or a bandwidth priority-bandwidth range mapping table, and the second bandwidth priority.
12. The apparatus according to any one of claims 9-11, wherein the first acquisition unit is configured to:
acquiring a scheduling priority and a first bandwidth priority of the first function input through a display device; or
acquiring the scheduling priority and the first bandwidth priority of the first function through a configuration file or a configuration interface.
13. The apparatus according to claim 10 or 11, wherein the first acquisition unit is configured to:
and acquiring the first weight and the second weight input through the display device.
14. The apparatus according to any one of claims 9-11, wherein the apparatus further comprises: a second acquisition unit and an adjusting unit; wherein,
the second acquisition unit is used for:
acquiring operation information of the first function;
the adjusting unit is used for:
adjusting the first bandwidth range according to the operation information of the first function to obtain a second bandwidth range, wherein the second bandwidth range is a second range of bandwidth allocated to the first function by the computing platform from the first computing unit, and the operation information comprises: the running time of the first function, the system throughput when the first function runs, the standard deviation of the running time of the last K runs of the first function, and the number of waiting functions when the first function runs.
15. The apparatus according to claim 14, wherein when the operation information includes a plurality of pieces of information, the adjusting unit is configured to:
determining a weight of each of the plurality of pieces of information;
determining the cost of the first function according to each piece of information and the weight of each piece of information;
and when the cost indicates that the bandwidth range of the first function needs to be adjusted, adjusting the first bandwidth range to obtain the second bandwidth range, wherein the cost when the first function operates in the second bandwidth range is better than the cost when the first function operates in the first bandwidth range.
16. The apparatus of claim 14 or 15, wherein the second bandwidth range includes a first boundary and a second boundary, the first boundary being less than or equal to a first threshold and the second boundary being greater than or equal to a second threshold.
17. A bandwidth adjustment apparatus, comprising:
a memory for storing a computer program;
a processor for executing a computer program stored in the memory to cause the apparatus to perform the method of any one of claims 1-8.
18. The bandwidth adjustment device of claim 17, wherein the bandwidth adjustment device is a computing platform.
19. An intelligent driving apparatus, characterized by comprising a device according to any one of claims 9-16 or 17-18.
20. The intelligent driving apparatus of claim 19, wherein the intelligent driving apparatus is a vehicle.