CN116185643A - Load balancing method, device and equipment for hardware resources and storage medium - Google Patents


Info

Publication number
CN116185643A
Authority
CN
China
Prior art keywords
hardware
load
data
processed
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310457152.3A
Other languages
Chinese (zh)
Inventor
陈丰
张昆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Semidrive Technology Co Ltd
Original Assignee
Nanjing Semidrive Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Semidrive Technology Co Ltd filed Critical Nanjing Semidrive Technology Co Ltd
Priority to CN202310457152.3A
Publication of CN116185643A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F 9/5027 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F 9/505 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the load
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F 9/5027 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F 9/5044 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering hardware capabilities
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2209/00 Indexing scheme relating to G06F 9/00
    • G06F 2209/50 Indexing scheme relating to G06F 9/50
    • G06F 2209/508 Monitor
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The disclosure provides a load balancing method, device, equipment and storage medium for hardware resources. A data processing request is received, the data processing request comprising data to be processed; hardware meeting a load condition is searched for in a hardware resource table as target hardware, wherein the hardware resource table records all hardware in use and their current load scores, and a current load score represents the hardware's processing workload at the current moment; the data to be processed is then processed by the target hardware. This avoids excessive hardware load and data processing delays caused by multiple processes running simultaneously, and allows hardware resources to be determined automatically for data processing according to the current load state of each hardware unit.

Description

Load balancing method, device and equipment for hardware resources and storage medium
Technical Field
The disclosure relates to the field of computer technology, and in particular to a load balancing method, device, equipment and storage medium for hardware resources.
Background
With the rapid development of science and technology, users place ever higher demands on the entertainment capabilities of automobiles, which in turn raises the requirements on the hardware an automobile is configured with. Because of safety considerations, automobiles impose stricter requirements on hardware than other electronic products, and well-balanced hardware load contributes to stable vehicle operation.
In the prior art, a programmer carefully compares hardware capabilities with estimated loads to configure each hardware unit's load, but this hard-coded approach yields poor practicality and maintainability. Moreover, every hardware unit is configured differently, and the communication and development costs of consulting hardware documentation and operating application programming interfaces (APIs) are high. In addition, while the system is running, multiple processes compete for hardware resources, so a fixed hardware load can suddenly spike, causing unpredictable data processing delays and increasing vehicle safety risks.
Disclosure of Invention
The present disclosure provides a method, an apparatus, a device, and a storage medium for balancing load of hardware resources, so as to at least solve the above technical problems in the prior art.
According to a first aspect of the present disclosure, there is provided a load balancing method for hardware resources, the method comprising:
receiving a data processing request, wherein the data processing request comprises data to be processed;
searching a hardware resource table for hardware meeting a load condition as target hardware, wherein the hardware resource table records all hardware in use and their current load scores, and a current load score represents the hardware's processing workload at the current moment;
and processing the data to be processed by adopting the target hardware.
In an embodiment, searching for the hardware meeting the load condition as the target hardware includes:
searching for the hardware with the lowest current load score;
and taking the hardware with the lowest current load score as the target hardware.
In an embodiment, after searching for the hardware with the lowest current load score, the method further includes:
if at least two hardware units have the same current load score, randomly selecting any one of them as the target hardware.
In an embodiment, after processing the data to be processed using the target hardware, the method further includes:
acquiring the data amount of the data to be processed;
and determining the latest load score of the target hardware according to the data amount of the data to be processed.
In an embodiment, determining the latest load score of the target hardware according to the data amount of the data to be processed includes:
determining a newly added load score based on the load coefficient of the target hardware and the data amount of the data to be processed;
and accumulating the newly added load score onto the current load score to determine the latest load score of the target hardware.
In an embodiment, after determining the latest load score of the target hardware, the method further includes:
releasing the newly added load score of the target hardware after the target hardware finishes processing the data to be processed.
In an embodiment, before using the load coefficient of the target hardware, the method further includes:
obtaining the load coefficient of the target hardware, wherein the load coefficient is determined according to the hardware parameters, processing speed and occupied bandwidth of the target hardware.
According to a second aspect of the present disclosure, there is provided a load balancing apparatus for hardware resources, the apparatus comprising:
the request receiving module is used for receiving a data processing request, wherein the data processing request comprises data to be processed;
the hardware searching module is used for searching the hardware resource table for hardware meeting the load condition as the target hardware, wherein the hardware resource table records all hardware in use and their current load scores, and a current load score represents the hardware's processing workload at the current moment;
and the processing module is used for processing the data to be processed by adopting the target hardware.
In an embodiment, the hardware searching module is specifically configured to: searching the hardware with the lowest current load score; and taking the hardware with the lowest current load score as target hardware.
In an embodiment, the hardware searching module is specifically further configured to: after searching for the hardware with the lowest current load score, if at least two hardware units have the same current load score, randomly select any one of them as the target hardware.
In an embodiment, the apparatus further comprises: a latest score determining module, used for acquiring the data amount of the data to be processed after the target hardware processes the data to be processed; and determining the latest load score of the target hardware according to the data amount of the data to be processed.
In an embodiment, the latest score determining module is specifically configured to: determine a newly added load score based on the load coefficient of the target hardware and the data amount of the data to be processed; and accumulate the newly added load score onto the current load score to determine the latest load score of the target hardware.
In an embodiment, the latest score determining module is specifically further configured to: after the latest load score of the target hardware is determined, release the newly added load score of the target hardware once the target hardware finishes processing the data to be processed.
In an embodiment, the latest score determining module is specifically configured to: acquire the load coefficient of the target hardware before it is used, wherein the load coefficient is determined according to the hardware parameters, processing speed and occupied bandwidth of the target hardware.
According to a third aspect of the present disclosure, there is provided an electronic device comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the methods described in the present disclosure.
According to a fourth aspect of the present disclosure, there is provided a non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the method of the present disclosure.
In the load balancing method, device, equipment and storage medium for hardware resources of the present disclosure, a data processing request comprising data to be processed is received; hardware meeting a load condition is searched for in a hardware resource table as the target hardware, wherein the hardware resource table records all hardware in use and their current load scores, and a current load score represents the hardware's processing workload at the current moment; the data to be processed is then processed by the target hardware. This avoids excessive hardware load and data processing delays caused by multiple processes running simultaneously, and allows hardware resources to be determined automatically for data processing according to the current load state of each hardware unit.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The above, as well as additional purposes, features, and advantages of exemplary embodiments of the present disclosure will become readily apparent from the following detailed description when read in conjunction with the accompanying drawings. Several embodiments of the present disclosure are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings, in which:
in the drawings, the same or corresponding reference numerals indicate the same or corresponding parts.
Fig. 1 is a schematic implementation flow diagram of a load balancing method of a hardware resource according to an embodiment of the disclosure;
FIG. 2 illustrates a flow diagram of an exemplary method for load balancing of hardware resources provided by embodiments of the present disclosure;
fig. 3 is a schematic structural diagram of a load balancing device for hardware resources according to an embodiment of the present disclosure;
fig. 4 shows a schematic diagram of a composition structure of an electronic device according to an embodiment of the disclosure.
Detailed Description
In order to make the objects, features and advantages of the present disclosure more comprehensible, the technical solutions in the embodiments of the present disclosure will be clearly described in conjunction with the accompanying drawings in the embodiments of the present disclosure, and it is apparent that the described embodiments are only some embodiments of the present disclosure, but not all embodiments. Based on the embodiments in this disclosure, all other embodiments that a person skilled in the art would obtain without making any inventive effort are within the scope of protection of this disclosure.
Fig. 1 is a flowchart of a method for balancing load of a hardware resource according to an embodiment of the present disclosure, where the method may be performed by a device for balancing load of a hardware resource according to an embodiment of the present disclosure, and the device may be implemented in software and/or hardware. The method specifically comprises the following steps:
s110, receiving a data processing request.
Wherein the data processing request includes data to be processed.
The data processing request is a request to process the data to be processed, which may be data in various formats.
Specifically, in this embodiment, when there is a need to process data, a data processing request is received directly, for example through a newly created API interface. For example, if the data to be processed is video A, the received data processing request is a request to play video A; as another example, if the data to be processed is a navigation map, the received data processing request is a request to display the navigation.
S120, searching the hardware meeting the load condition in the hardware resource table as target hardware.
The hardware resource table records all hardware in use and their current load scores; a current load score represents the hardware's processing workload at the current moment. The target hardware is the hardware selected to process the data to be processed at the current moment.
It should be noted that the hardware in the hardware resource table includes, but is not limited to: a 2D graphics processor (G2D), a graphics processing unit (GPU), a central processing unit (CPU), a vision digital signal processor (VDSP), and the like. The current load score varies in real time with the operating conditions of the hardware at different moments.
Specifically, this embodiment creates a hardware resource table, and after a data processing request is received, suitable hardware can be matched automatically through the table to execute the operation on the data to be processed. It should be noted that different automobile systems record different hardware in the hardware resource table, since their configured hardware differs. After the API interface receives a data processing request for the data to be processed, this embodiment may query the hardware resource table for the load score of each hardware unit used by the system at the current moment, and take the hardware meeting the load condition as the target hardware.
For example, the load condition of this embodiment may be set so that the current load score is below a score threshold; the hardware satisfying the load condition is then the target hardware. If multiple hardware units satisfy the score threshold, one may be selected at random as the target hardware. Alternatively, the load condition may be set to the lowest current load score; the hardware satisfying the load condition is then the hardware with the lowest current load score, taken as the target hardware.
In the embodiment of the present disclosure, searching for hardware satisfying the load condition as the target hardware includes: searching for the hardware with the lowest current load score; and taking the hardware with the lowest current load score as the target hardware.
Specifically, in this embodiment, the current load score of each hardware unit at the current moment may be obtained by querying the hardware resource table. The current load scores are then sorted, the lowest score at the current moment is selected, and the hardware corresponding to it is taken as the target hardware.
By determining the hardware with the lowest current load score as the target hardware, this embodiment can maintain dynamic load balance across the hardware in the system in real time and avoid the hidden danger of overloading individual hardware units.
In the embodiment of the present disclosure, after searching for the hardware with the lowest current load score, the method further includes: if at least two hardware units have the same current load score, randomly selecting any one of them as the target hardware.
Specifically, if at least two hardware units in the hardware resource table have the same current load score, and that score is the lowest in the table, this embodiment may randomly select any one of them as the target hardware. For example, if at the current moment the scores of the CPU and the GPU are both the lowest in the hardware resource table, either the CPU or the GPU may be set as the target hardware.
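The selection and tie-breaking steps described above can be sketched as follows. This is an illustrative Python sketch under the assumptions stated in the comments, not the patent's implementation; the function name, table contents and scores are examples only.

```python
import random

def select_target_hardware(resource_table):
    """Pick the hardware with the lowest current load score from the
    hardware resource table (name -> score), breaking ties randomly."""
    lowest = min(resource_table.values())
    candidates = [hw for hw, score in resource_table.items() if score == lowest]
    return random.choice(candidates)  # random pick when several units tie

# Example table; the hardware names match the description, the scores are illustrative.
table = {"G2D": 60, "GPU": 70, "CPU": 80, "VDSP": 75}
target = select_target_hardware(table)  # the lowest-scoring unit, here "G2D"
```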
S130, processing the data to be processed by adopting the target hardware.
Specifically, the present embodiment may process data to be processed using target hardware after determining the target hardware.
It should be noted that there may be one or more data processing requests in this embodiment. When multiple data processing requests are received, the target hardware can be determined one by one, and different target hardware used to process the corresponding data to be processed.
This embodiment receives a data processing request; searches the hardware resource table for hardware meeting the load condition as the target hardware; and processes the data to be processed with the target hardware. This avoids excessive hardware load and data processing delays caused by multiple processes running simultaneously, allows hardware resources to be determined automatically for data processing according to the current load state of the hardware, and improves the safety of the automobile system.
In an embodiment of the present disclosure, after processing the data to be processed using the target hardware, the method further includes: acquiring the data amount of the data to be processed; and determining the latest load score of the target hardware according to that data amount.
The data amount of the data to be processed is the amount of data the hardware must operate on. For example, if the data to be processed is a video to play, the data amount corresponds to the video's resolution; as another example, if the data to be processed is image B, the data amount is the memory occupied in processing image B. The latest load score is the value the target hardware's current load score takes after the target hardware takes on the data to be processed.
Since the workload of the target hardware increases when it processes the data to be processed, this embodiment determines the latest load score of the target hardware from the acquired data amount. When the data amount is large, the latest load score increases greatly; when the data amount is small, it increases only slightly.
Moreover, the rate at which embedded IP hardware (G2D, GPU, VDSP) processes image data can be roughly understood as positively correlated with image resolution, i.e., the greater the resolution, the longer the processing time, largely independent of the specific operation (scaling, format conversion, rotation, etc.). The data amount acquired in this embodiment can therefore be approximated by the memory occupied by the image.
In an embodiment of the present disclosure, determining the latest load score of the target hardware according to the data amount of the data to be processed includes: determining a newly added load score based on the load coefficient of the target hardware and the data amount of the data to be processed; and accumulating the newly added load score onto the current load score to determine the latest load score of the target hardware.
The load coefficient, denoted k, is a manually set coefficient used to calculate the new load generated by the target hardware when it processes the data to be processed. The newly added load score quantifies the workload of the target hardware when processing the data to be processed.
Specifically, each hardware unit in the system has its corresponding load coefficient. After the data amount of the data to be processed is acquired, this embodiment multiplies it by the load coefficient of the target hardware to obtain the newly added load score generated when processing the data, then accumulates that score onto the original current load score recorded in the hardware resource table, yielding the latest load score of the target hardware.
By determining the latest load score of the target hardware from its load coefficient and the data amount of the data to be processed, the current load score of each hardware unit in the hardware resource table can be updated in real time, giving a truer picture of each unit's working state and providing a basis for subsequent allocation of hardware resources.
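The accumulation step above can be written out as a small sketch, assuming the multiplicative relationship described (newly added score = load coefficient × data amount); the function name and example values are illustrative.

```python
def latest_load_score(current_score, load_coefficient, data_amount):
    """Latest load score = current score + newly added score, where the
    newly added score is load_coefficient * data_amount."""
    newly_added = load_coefficient * data_amount
    return current_score + newly_added

# With the G2D figures used later in the description: 60 + 50 * 1.1 gives 115.
score = latest_load_score(60, 1.1, 50)
```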
In an embodiment of the present disclosure, after determining the latest load score of the target hardware, the method further includes: releasing the newly added load score of the target hardware after the target hardware finishes processing the data to be processed.
Because the target hardware finishes processing the data to be processed at some point, and in order to update its current load score in the hardware resource table in time, this embodiment can receive a task-end instruction once the target hardware finishes processing its data. On receiving that instruction, the embodiment immediately releases the previously calculated newly added load score of the target hardware, promptly reflecting its true working state.
It should be noted that there may be multiple target hardware units in this embodiment; when several target hardware units process their corresponding data to be processed, the current load score of each in the hardware resource table can be updated in time according to each unit's actual completion time.
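The lock/accumulate/release bookkeeping described above might look like the following minimal sketch; the class and method names are illustrative assumptions, not the patent's API.

```python
class HardwareEntry:
    """One row of the hardware resource table: a unit's current load score,
    with accumulate-on-lock and subtract-on-release bookkeeping."""

    def __init__(self, score):
        self.score = score

    def acquire(self, load_coefficient, data_amount):
        newly_added = load_coefficient * data_amount
        self.score += newly_added  # score rises while the task runs
        return newly_added         # caller keeps this to release later

    def release(self, newly_added):
        self.score -= newly_added  # task finished: give the score back

g2d = HardwareEntry(60)
added = g2d.acquire(1.1, 50)  # score rises to about 115
g2d.release(added)            # score returns to about 60
```

Keeping the returned increment, rather than recomputing it at release time, ensures the score released is exactly the score that was added even if coefficients are later recalibrated.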
In an embodiment of the present disclosure, before using the load coefficient of the target hardware, the method further includes: obtaining the load coefficient of the target hardware.
The load coefficient is determined according to the hardware parameters, processing speed and occupied bandwidth of the target hardware.
It should be noted that the way hardware load is calculated here differs from that of a typical network server. Most network servers are conventional and standardized, and their load calculation happens at the application layer, so established methods exist. For the hardware configured on an embedded platform such as an automobile system, however, the load state of each hardware unit cannot be calculated precisely, because the hardware parameters of the units differ and the differently configured units in the same system cooperate with, constrain and influence one another. Therefore, for hardware load calculation, this embodiment can initially set an initial load coefficient for each hardware unit in the system based on its hardware parameters and on experience with that unit. A large number of experiments are then carried out, and the initial coefficient is adjusted continuously by monitoring each unit's processing speed, occupied bandwidth and so on, to determine its final load coefficient.
For example, the load coefficient of each hardware unit within a certain automobile system may be set in the range 1-20. More specifically, the load coefficient range for the G2D and the GPU in a certain automobile system may be set to 0.8-1.5, that of the CPU to 8-10, and that of the VDSP to 0.6-1.2. It should be noted that the load coefficient ranges given here are merely examples, and their specific values are not limited.
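The example ranges above could be captured in a configuration table like the following sketch; the values are the illustrative ranges from the text, while the dictionary name and helper function are assumptions.

```python
# Illustrative load-coefficient ranges from the example above; real values
# would be calibrated experimentally per platform.
LOAD_COEFFICIENT_RANGES = {
    "G2D":  (0.8, 1.5),
    "GPU":  (0.8, 1.5),
    "CPU":  (8.0, 10.0),
    "VDSP": (0.6, 1.2),
}

def coefficient_in_range(hardware, k):
    """Check a calibrated coefficient k against its allowed range."""
    low, high = LOAD_COEFFICIENT_RANGES[hardware]
    return low <= k <= high
```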
Therefore, in this embodiment, when calculating the newly added load score of the target hardware, its load coefficient must be obtained first; that coefficient may be determined according to the hardware parameters, processing speed and occupied bandwidth of the target hardware.
By setting a load coefficient for each hardware unit, this embodiment provides a feasible way to calculate hardware load, effectively addressing problems such as excessively high load on some hardware and delayed data processing caused by complex hardware configurations and uneven allocation of hardware resources.
Fig. 2 is a flowchart of an exemplary load balancing method for hardware resources according to an embodiment of the present disclosure. As shown in fig. 2, this embodiment takes the G2D, GPU, CPU and VDSP hardware as an example and describes three process operations. The specific steps are as follows:
a. Create a hardware resource table recording the current load score of each hardware unit, for example: the current load score of the G2D is 60, that of the GPU is 70, that of the CPU is 80, and that of the VDSP is 75.
b. Create an API interface capable of dynamically selecting hardware resources. After a request instruction is received, the current load score of each hardware unit in the hardware resource table can be queried to find the unit with the lowest score. Illustratively, this embodiment receives three process operations: process A operation 1, process B operation 2, and process C operation 3. Meanwhile, the acquired data amount of operation 1's request is 50, that of operation 2's request is 80, and that of operation 3's request is 100.
c. Lock the target hardware corresponding to each process operation, multiply the data amount of the data to be processed by the load coefficient of the target hardware to obtain a newly added score, and add that score to the current load score of the corresponding target hardware.
Illustratively, when the API interface receives the request processing instruction of the process a operation 1, it determines that the G2D corresponding to the current load score 60 is the target hardware. Subsequently, the embodiment locks the hardware G2D, obtains the load factor 1.1 of the G2D, and calculates the latest load fraction based on the load factor, where the formula is: 60+50×1.1=115.
Illustratively, the present embodiment can determine the target hardware of the process B operation 2 without waiting for the data processing of the process a operation 1 to end. More specifically, after the API interface receives the request processing instruction of the process B operation 2, it queries that the current load score of each hardware in the hardware resource table at the current moment is (115, 70, 80, 75), and determines the GPU corresponding to the current load score 70 as the target hardware. Subsequently, the embodiment locks the hardware GPU, obtains the load factor 1.2 of the GPU, calculates the latest load fraction based on the load factor, and has the following formula: 70+1.2x80=166.
Similarly, the present embodiment can determine the target hardware of process C operation 3 without waiting for the data processing of process B operation 2 to end. More specifically, after the API interface receives the request processing instruction of process C operation 3, it queries the current load scores in the hardware resource table at that moment, which are (115, 166, 80, 75), and determines the VDSP, whose current load score is 75, as the target hardware. The embodiment then locks the VDSP, obtains its load coefficient of 0.8, and calculates the latest load score: 75 + 100 × 0.8 = 155.
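The three dispatch decisions above follow one mechanical rule: query the table, take the hardware with the lowest score, lock, and add data amount × coefficient. A minimal sketch of that rule, assuming a Python implementation; the initial scores (60, 70, 80, 75) and the G2D/GPU/VDSP coefficients come from the example, while the fourth device's name ("NPU") and its coefficient are assumptions:

```python
import threading

# Hypothetical hardware resource table mirroring the worked example.
# "NPU" and its coefficient 1.0 are assumed; the rest follows the text.
resource_table = {
    "G2D":  {"score": 60, "coeff": 1.1},
    "GPU":  {"score": 70, "coeff": 1.2},
    "NPU":  {"score": 80, "coeff": 1.0},  # assumed device/coefficient
    "VDSP": {"score": 75, "coeff": 0.8},
}
table_lock = threading.Lock()

def dispatch(data_amount):
    """Step b/c sketch: pick the hardware with the lowest current load
    score, then add data_amount x coefficient to its score under a lock,
    so the next request sees the updated table immediately."""
    with table_lock:
        name = min(resource_table, key=lambda n: resource_table[n]["score"])
        added = data_amount * resource_table[name]["coeff"]
        resource_table[name]["score"] += added
    return name, added

# Process A op 1 (50)  -> G2D:  60 + 50 x 1.1  -> approx 115
# Process B op 2 (80)  -> GPU:  70 + 80 x 1.2  -> approx 166
# Process C op 3 (100) -> VDSP: 75 + 100 x 0.8 -> approx 155
for amount in (50, 80, 100):
    print(dispatch(amount))
```

Because the score is added at dispatch time rather than at completion time, a later request never has to wait for an earlier operation's data processing to finish, matching the non-blocking behavior described above.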
d. Each piece of target hardware processes its corresponding data to be processed; once the processing is complete, the corresponding target hardware is released and the newly added load score is subtracted.
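Step d is simply the inverse of the dispatch-time update. A hedged sketch, reusing the hypothetical table state after the three dispatches in the example ("NPU" is an assumed device name):

```python
import threading

# Hypothetical table state after the three dispatches above.
resource_table = {"G2D": 115.0, "GPU": 166.0, "NPU": 80.0, "VDSP": 155.0}
table_lock = threading.Lock()

def release(name, added_score):
    """Step d sketch: once the target hardware has finished processing
    its data, subtract the score that was added at dispatch time so the
    freed capacity is visible to subsequent requests."""
    with table_lock:
        resource_table[name] -= added_score

release("G2D", 50 * 1.1)  # process A finishes: G2D drops back to about 60
```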
In the prior art, a client must be bound to a fixed piece of hardware to complete a given task, but during system operation that hardware may become overloaded, delaying the processing of the data to be processed. In the present embodiment, hardware resources are managed uniformly and clients operate through a unified API interface: the interface only needs to acquire the data amount of the data to be processed and query the hardware resource table, and the target hardware can be selected automatically without knowledge of the actual operating condition of the hardware. Meanwhile, the embodiment can automatically dispatch multiple pieces of hardware to process multiple pieces of data to be processed while still maintaining load balance across the hardware resources. In addition, the method provided by the embodiment requires no modification after development, effectively reducing the cost of developing and maintaining system programs.
Fig. 3 is a schematic structural diagram of a load balancing device for hardware resources according to an embodiment of the present disclosure, where the device specifically includes:
a request receiving module 310, configured to receive a data processing request, where the data processing request includes data to be processed;
the hardware searching module 320 is configured to search, in a hardware resource table, for hardware that satisfies a load condition as target hardware, where the hardware resource table is configured to record each piece of hardware in use and its current load score, and the current load score represents the processing workload of the hardware at the current moment;
and a processing module 330, configured to process the data to be processed by using the target hardware.
In one embodiment, the hardware searching module 320 is specifically configured to: search for the hardware with the lowest current load score; and take the hardware with the lowest current load score as the target hardware.
In an embodiment, the hardware searching module 320 is specifically further configured to: after the hardware with the lowest current load score is searched for, if at least two pieces of hardware share that lowest current load score, randomly select any one of them as the target hardware.
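The lowest-score selection with random tie-breaking can be sketched as follows (a minimal illustration; the function name is hypothetical):

```python
import random

def pick_lowest(scores):
    """Find the minimum current load score and, if several devices tie
    at that minimum, pick one of them at random, as the embodiment
    describes."""
    low = min(scores.values())
    tied = [name for name, score in scores.items() if score == low]
    return random.choice(tied)

# G2D and GPU tie at 70, so either may be returned.
choice = pick_lowest({"G2D": 70, "GPU": 70, "VDSP": 75})
```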
In an embodiment, the apparatus further comprises: a latest score determining module, configured to acquire the data amount of the data to be processed after the data to be processed is processed by the target hardware, and determine the latest load score of the target hardware according to that data amount.
In an embodiment, the latest score determining module is specifically configured to: determine a newly added load score based on the load coefficient of the target hardware and the data amount of the data to be processed; and accumulate the newly added load score onto the current load score to determine the latest load score of the target hardware.
In an embodiment, the latest score determining module is specifically further configured to: after the latest load score of the target hardware is determined and the target hardware finishes processing the data to be processed, release the newly added load score of the target hardware.
In an embodiment, the latest score determining module is specifically further configured to: acquire the load coefficient of the target hardware before determining the newly added load score, where the load coefficient is determined according to the hardware parameters, processing speed, and occupied bandwidth of the target hardware.
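The disclosure names only the inputs to the load coefficient (hardware parameters, processing speed, occupied bandwidth) without giving a formula. As a purely hypothetical sketch of one shape such a formula could take, where lower processing speed and higher occupied bandwidth both raise the coefficient:

```python
def load_coefficient(processing_speed, occupied_bandwidth, base=1.0):
    """Hypothetical formula -- the disclosure does not specify one.
    More occupied bandwidth per unit of processing speed yields a
    larger coefficient, i.e. more load per unit of data dispatched
    to this device."""
    return base * occupied_bandwidth / processing_speed

# e.g. a device processing 100 units/s while occupying 110 bandwidth
# units would get a coefficient of 1.1, like the G2D in the example.
coeff = load_coefficient(100.0, 110.0)
```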
According to embodiments of the present disclosure, the present disclosure also provides an electronic device and a readable storage medium.
Fig. 4 illustrates a schematic block diagram of an example electronic device 400 that may be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 4, the apparatus 400 includes a computing unit 401 that can perform various suitable actions and processes according to a computer program stored in a Read Only Memory (ROM) 402 or a computer program loaded from a storage unit 408 into a Random Access Memory (RAM) 403. In RAM 403, various programs and data required for the operation of device 400 may also be stored. The computing unit 401, ROM 402, and RAM 403 are connected to each other by a bus 404. An input/output (I/O) interface 405 is also connected to bus 404.
Various components in device 400 are connected to I/O interface 405, including: an input unit 406 such as a keyboard, a mouse, etc.; an output unit 407 such as various types of displays, speakers, and the like; a storage unit 408, such as a magnetic disk, optical disk, etc.; and a communication unit 409 such as a network card, modem, wireless communication transceiver, etc. The communication unit 409 allows the device 400 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
The computing unit 401 may be any of a variety of general-purpose and/or special-purpose processing components having processing and computing capabilities. Some examples of the computing unit 401 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, or microcontroller. The computing unit 401 performs the various methods and processes described above, such as the load balancing method for hardware resources. For example, in some embodiments, the load balancing method for hardware resources may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 408. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 400 via the ROM 402 and/or the communication unit 409. When the computer program is loaded into the RAM 403 and executed by the computing unit 401, one or more steps of the load balancing method for hardware resources described above may be performed. Alternatively, in other embodiments, the computing unit 401 may be configured to perform the load balancing method for hardware resources in any other suitable way (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include implementation in one or more computer programs that may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor that can receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowchart and/or block diagram to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include Local Area Networks (LANs), Wide Area Networks (WANs), and the internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server incorporating a blockchain.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps recited in the present disclosure may be performed in parallel or sequentially or in a different order, provided that the desired results of the technical solutions of the present disclosure are achieved, and are not limited herein.
Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include at least one such feature. In the description of the present disclosure, the meaning of "a plurality" is two or more, unless explicitly defined otherwise.
The foregoing is merely specific embodiments of the disclosure, but the protection scope of the disclosure is not limited thereto. Any changes or substitutions that a person skilled in the art could readily conceive within the technical scope of the disclosure are intended to be covered by the disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (10)

1. A method for load balancing of hardware resources, the method comprising:
receiving a data processing request, wherein the data processing request comprises data to be processed;
searching a hardware resource table for hardware that satisfies a load condition as target hardware, wherein the hardware resource table is used to record each piece of hardware in use and its current load score, and the current load score represents the processing workload of the hardware at the current moment;
and processing the data to be processed by adopting the target hardware.
2. The method of claim 1, wherein the searching for hardware that satisfies the load condition as target hardware comprises:
searching the hardware with the lowest current load score;
and taking the hardware with the lowest current load score as target hardware.
3. The method of claim 2, further comprising, after said looking up the hardware with the lowest current load score:
if at least two pieces of hardware have the same lowest current load score, randomly selecting any one of them as the target hardware.
4. The method of claim 1, further comprising, after said processing said data to be processed with said target hardware:
acquiring the data amount of the data to be processed;
and determining the latest load score of the target hardware according to the data amount of the data to be processed.
5. The method of claim 4, wherein the determining the latest load score of the target hardware according to the data amount of the data to be processed comprises:
determining a newly added load score based on the load coefficient of the target hardware and the data amount of the data to be processed;
and accumulating the newly added load score onto the current load score to determine the latest load score of the target hardware.
6. The method of claim 5, further comprising, after said determining the latest load score for the target hardware:
and after the target hardware finishes processing the data to be processed, releasing the newly added load score of the target hardware.
7. The method of claim 5, further comprising, before the determining the newly added load score based on the load coefficient of the target hardware:
obtaining the load coefficient of the target hardware, wherein the load coefficient is determined according to hardware parameters, a processing speed, and an occupied bandwidth of the target hardware.
8. A load balancing apparatus for hardware resources, the apparatus comprising:
the request receiving module is used for receiving a data processing request, wherein the data processing request comprises data to be processed;
the hardware searching module, configured to search a hardware resource table for hardware that satisfies a load condition as target hardware, wherein the hardware resource table is used to record each piece of hardware in use and its current load score, and the current load score represents the processing workload of the hardware at the current moment;
and the processing module is used for processing the data to be processed by adopting the target hardware.
9. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-7.
10. A non-transitory computer readable storage medium storing computer instructions for causing a computer to perform the method of any one of claims 1-7.
CN202310457152.3A 2023-04-23 2023-04-23 Load balancing method, device and equipment for hardware resources and storage medium Pending CN116185643A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310457152.3A CN116185643A (en) 2023-04-23 2023-04-23 Load balancing method, device and equipment for hardware resources and storage medium


Publications (1)

Publication Number Publication Date
CN116185643A 2023-05-30

Family

ID=86434817

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310457152.3A Pending CN116185643A (en) 2023-04-23 2023-04-23 Load balancing method, device and equipment for hardware resources and storage medium

Country Status (1)

Country Link
CN (1) CN116185643A (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101013387A (en) * 2007-02-09 2007-08-08 华中科技大学 Load balancing method based on object storage device
US20190079799A1 (en) * 2017-09-08 2019-03-14 Apple Inc. Systems and methods for scheduling virtual memory compressors
CN110178118A (en) * 2017-01-17 2019-08-27 微软技术许可有限责任公司 Hard-wired load balance



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
Application publication date: 20230530