CN112527501A - Big data resource allocation method, device, equipment and medium - Google Patents
- Publication number
- CN112527501A (application CN202011437029.8A)
- Authority
- CN
- China
- Prior art keywords
- queue
- processed
- resource
- big data
- resource allocation
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Abstract
The invention relates to the field of big data and provides a big data resource allocation method, device, equipment and medium. The method acquires historical resource consumption data of a queue to be processed, analyzes that data to obtain at least one index value of the queue, constructs at least one queue group according to the index values, identifies the queue group corresponding to a queue to be allocated from the at least one queue group as a target queue group, detects the resource utilization rate within the target queue group, and allocates big data resources to the queue to be allocated according to that utilization rate. This avoids the serious resource shortages in some queues, and the idleness and waste of resources in others, that uneven allocation causes, thereby achieving reasonable allocation of big data resources. In addition, the invention relates to blockchain technology: the index values may be stored in a blockchain node.
Description
Technical Field
The present invention relates to the field of big data technologies, and in particular, to a method, an apparatus, a device, and a medium for big data resource allocation.
Background
With the continuous development of the information era and the arrival of the cloud era, big data has attracted increasingly wide attention. Big data technology has permeated every aspect of society, such as medical health, business analysis, national security, food security and financial security, and a cultural atmosphere of "speaking with data, managing with data, deciding with data and innovating with data" has formed throughout society.
Applications of big data technology mainly involve computing and analyzing massive data, so their consumption of computing and storage resources is high, and big data resources are very expensive. Computing resources in particular are contended for: if resources are insufficient, task execution slows or tasks wait on one another, which affects the efficiency of development and testing; conversely, directly allocating abundant resources to a development or testing environment leads to idle, wasted resources.
At present, big data resources are allocated by having a plurality of sub-queues share a parent queue, with the several tasks of a project group belonging to the same sub-queue. Although the version cycle is fixed, the demands and test time points of each project group's big data versions are inconsistent; some project versions have large demands and consume substantial resources.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a big data resource allocation method, apparatus, device and medium that can avoid both the serious resource shortages in some queues caused by uneven allocation and the idleness and waste of resources in others, thereby achieving reasonable allocation of big data resources.
A big data resource allocation method comprises the following steps:
responding to a big data resource allocation instruction, and acquiring a queue to be processed according to the big data resource allocation instruction;
acquiring historical resource consumption data of the queue to be processed;
analyzing historical resource consumption data of the queue to be processed to obtain at least one index value of the queue to be processed;
constructing at least one queue group according to the at least one index value;
when receiving a queue to be distributed, identifying a queue group corresponding to the queue to be distributed from the at least one queue group as a target queue group;
calling a Yarn component to detect the resource utilization rate in the target queue group;
and distributing big data resources for the queue to be distributed according to the resource utilization rate.
According to a preferred embodiment of the present invention, the acquiring a queue to be processed according to the big data resource allocation instruction includes:
analyzing the method body of the big data resource allocation instruction to obtain the information carried by the big data resource allocation instruction;
acquiring a preset label;
constructing a regular expression according to the preset label;
traversing in the information carried by the big data resource allocation instruction by using the regular expression, and determining the traversed data as a target address;
and linking to the target address, and acquiring data from the target address as the queue to be processed.
According to a preferred embodiment of the present invention, the analyzing the historical resource consumption data of the queue to be processed to obtain at least one index value of the queue to be processed includes:
identifying a queue identifier of each queue to be processed in the queues to be processed;
acquiring data which is the same as the queue identification of each queue to be processed from the historical resource consumption data as the historical resource consumption data of each queue to be processed;
calling the Yarn component to analyze historical resource consumption data of each queue to be processed to obtain the lowest datum line, the highest datum line and the consumption average value of each queue to be processed;
and integrating the lowest datum line, the highest datum line and the consumption mean value of each queue to be processed as at least one index value of the queue to be processed.
According to a preferred embodiment of the invention, the method further comprises:
establishing a public resource pool of the queue to be processed;
acquiring current distributed resources of each queue to be processed;
comparing the current allocated resources of each queue to be processed with the highest reference line of each queue to be processed;
when detecting that the current allocated resources of the queue to be processed are higher than the corresponding highest reference line, determining the detected queue to be processed as a first queue;
sending an authentication request to a designated terminal;
when receiving an authentication passing signal fed back by the appointed terminal, calculating a difference value between the current allocated resource of the first queue and the corresponding highest reference line as a transferable resource of the first queue;
and recycling the transferable resource to the public resource pool.
According to a preferred embodiment of the present invention, said constructing at least one queue group according to the at least one index value comprises:
acquiring the budget of each queue to be processed;
calculating the difference value between the budget of each queue to be processed and the consumption average value of each queue to be processed as a first numerical value of each queue to be processed;
calculating the difference value between the highest datum line of each queue to be processed and the budget of each queue to be processed, and taking the difference value as a second numerical value of each queue to be processed;
calculating the difference value between the budget of each queue to be processed and the lowest datum line of each queue to be processed as a third numerical value of each queue to be processed;
dividing queues whose first value equals their second value and/or whose consumption mean equals their third value into a queue group;
and integrating all the divided queue groups to obtain the at least one queue group.
According to a preferred embodiment of the present invention, the allocating big data resources to the queue to be allocated according to the resource utilization ratio includes:
acquiring the queue with the lowest resource utilization rate in the target queue group as a target queue;
detecting idle resources of the target queue;
and transferring the idle resources of the target queue to the queue to be distributed.
According to a preferred embodiment of the invention, the method further comprises:
acquiring the resource service life of the queue to be distributed;
monitoring the resource use time of the queue to be distributed;
calculating the time interval between the resource use time and the resource use period;
when the time interval is smaller than or equal to a preset time length, detecting whether a renewal instruction exists in the time interval;
when the renewal instruction is detected, sending a renewal request to the appointed terminal; or
when the renewal instruction is not detected, recycling the resources of the queue to be allocated to the public resource pool when the resource use period is reached.
A big data resource allocation apparatus, the big data resource allocation apparatus comprising:
the acquisition unit is used for responding to a big data resource allocation instruction and acquiring a queue to be processed according to the big data resource allocation instruction;
the acquisition unit is further used for acquiring historical resource consumption data of the queue to be processed;
the analysis unit is used for analyzing historical resource consumption data of the queue to be processed to obtain at least one index value of the queue to be processed;
the construction unit is used for constructing at least one queue group according to the at least one index value;
the identification unit is used for, when a queue to be allocated is received, identifying the queue group corresponding to the queue to be allocated from the at least one queue group as a target queue group;
the detection unit is used for calling the Yarn component to detect the resource utilization rate in the target queue group;
and the allocation unit is used for allocating big data resources to the queue to be allocated according to the resource utilization rate.
An electronic device, the electronic device comprising:
a memory storing at least one instruction; and
a processor executing instructions stored in the memory to implement the big data resource allocation method.
A computer-readable storage medium having at least one instruction stored therein, the at least one instruction being executable by a processor in an electronic device to implement the big data resource allocation method.
It can be seen from the above technical solutions that the present invention, in response to a big data resource allocation instruction, acquires a queue to be processed according to that instruction, acquires the queue's historical resource consumption data and analyzes it to obtain at least one index value, constructs at least one queue group according to the index values, identifies, on receiving a queue to be allocated, the corresponding queue group as a target queue group, calls a Yarn component to detect the resource utilization rate within the target queue group, and allocates big data resources to the queue to be allocated according to that utilization rate. This avoids the serious resource shortages in some queues, and the idleness and waste of resources in others, that uneven allocation causes, and thereby achieves reasonable allocation of big data resources.
Drawings
FIG. 1 is a flow chart of a big data resource allocation method according to a preferred embodiment of the present invention.
FIG. 2 is a functional block diagram of a big data resource allocation apparatus according to a preferred embodiment of the present invention.
Fig. 3 is a schematic structural diagram of an electronic device implementing a big data resource allocation method according to a preferred embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in detail with reference to the accompanying drawings and specific embodiments.
Fig. 1 is a flow chart of a big data resource allocation method according to a preferred embodiment of the present invention. The order of the steps in the flow chart may be changed and some steps may be omitted according to different needs.
The method for allocating big data resources is applied to one or more electronic devices. An electronic device is a device capable of automatically performing numerical calculation and/or information processing according to preset or stored instructions; its hardware includes, but is not limited to, a microprocessor, an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA), a Digital Signal Processor (DSP), an embedded device, and the like.
The electronic device may be any electronic product capable of performing human-computer interaction with a user, for example, a Personal computer, a tablet computer, a smart phone, a Personal Digital Assistant (PDA), a game machine, an interactive Internet Protocol Television (IPTV), an intelligent wearable device, and the like.
The electronic device may also include a network device and/or a user device. The network device includes, but is not limited to, a single network server, a server group consisting of a plurality of network servers, or a Cloud Computing (Cloud Computing) based Cloud consisting of a large number of hosts or network servers.
The Network where the electronic device is located includes, but is not limited to, the internet, a wide area Network, a metropolitan area Network, a local area Network, a Virtual Private Network (VPN), and the like.
S10, responding to the big data resource allocation instruction, and acquiring the queue to be processed according to the big data resource allocation instruction.
In this embodiment, the big data resource allocation instruction may be triggered by a person in charge of the project or task, such as a project manager.
In at least one embodiment of the present invention, the obtaining the queue to be processed according to the big data resource allocation instruction includes:
analyzing the method body of the big data resource allocation instruction to obtain the information carried by the big data resource allocation instruction;
acquiring a preset label;
constructing a regular expression according to the preset label;
traversing in the information carried by the big data resource allocation instruction by using the regular expression, and determining the traversed data as a target address;
and linking to the target address, and acquiring data from the target address as the queue to be processed.
Specifically, the big data resource allocation instruction is substantially a piece of code, and in the big data resource allocation instruction, according to the writing principle of the code, the content between { } is referred to as the method body.
The preset tag can be custom-configured, and preset tags correspond one-to-one with addresses. For example, the preset tag may be Add; a regular expression Add() is then built from the preset tag, and traversal is performed with Add().
The queue to be processed may belong to a department or a project group, but the invention is not limited thereto.
By the implementation method, the target address can be quickly determined based on the regular expression and the preset label, so that the queue to be processed can be further obtained from the target address.
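The tag-based lookup described above can be sketched roughly as follows. The Add tag comes from the example in the text; the Tag(...) call syntax, the function name and the sample instruction body are illustrative assumptions, not the patent's actual format:

```python
import re

def extract_target_address(instruction_body, preset_tag="Add"):
    """Build a regular expression from the preset tag, traverse the
    instruction's method body with it, and take the matched data as the
    target address (a sketch; the Tag(...) syntax is an assumption)."""
    pattern = re.compile(re.escape(preset_tag) + r"\(([^)]*)\)")
    match = pattern.search(instruction_body)
    return match.group(1) if match else None

# Hypothetical instruction; the content between { } is the method body.
body = "{ env=dev; Add(hdfs://cluster/queues/pending); timeout=30 }"
print(extract_target_address(body))  # hdfs://cluster/queues/pending
```

Since the preset tag and address correspond one-to-one, a single search over the method body suffices to locate the target address.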
And S11, acquiring historical resource consumption data of the queue to be processed.
Specifically, the historical resource consumption data of the queue to be processed may be obtained from a preset database.
For example, the preset database may be a Hive (data warehouse) database.
S12, analyzing the historical resource consumption data of the queue to be processed to obtain at least one index value of the queue to be processed.
In this embodiment, the resource refers to a big data resource, including, but not limited to: computing resources and storage resources.
Computing resources, namely queue resources, are the scarcest and are mainly used to support Map and Reduce operations over massive data.
In at least one embodiment of the present invention, the analyzing the historical resource consumption data of the queue to be processed to obtain at least one index value of the queue to be processed includes:
identifying a queue identifier of each queue to be processed in the queues to be processed;
acquiring data which is the same as the queue identification of each queue to be processed from the historical resource consumption data as the historical resource consumption data of each queue to be processed;
calling the Yarn component to analyze historical resource consumption data of each queue to be processed to obtain the lowest datum line, the highest datum line and the consumption average value of each queue to be processed;
and integrating the lowest datum line, the highest datum line and the consumption mean value of each queue to be processed as at least one index value of the queue to be processed.
Through the embodiment, the index value of each queue to be processed can be analyzed based on the historical consumption data and the Yarn component, so that the common resource consumption situation of each queue to be processed is reflected.
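As a rough illustration of the three index values, assume each queue's historical consumption is available as a plain list of numbers. In practice the figures would come from the Yarn component; the sample values and field names here are invented:

```python
def index_values(consumption_history):
    """Compute the per-queue index values from historical consumption
    samples: lowest baseline (MIN), highest baseline (MAX) and
    consumption mean (AVG)."""
    return {
        "MIN": min(consumption_history),   # lowest baseline
        "MAX": max(consumption_history),   # highest baseline
        "AVG": sum(consumption_history) / len(consumption_history),
    }

# e.g. daily vCore-hours for one queue over a sampling window
history = [40, 55, 70, 35, 60]
print(index_values(history))  # {'MIN': 35, 'MAX': 70, 'AVG': 52.0}
```

Grouping the raw records by queue identifier beforehand mirrors the step of matching data against each queue's identifier.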
In at least one embodiment of the invention, the method further comprises:
establishing a public resource pool of the queue to be processed;
acquiring current distributed resources of each queue to be processed;
comparing the current allocated resources of each queue to be processed with the highest reference line of each queue to be processed;
when detecting that the current allocated resources of the queue to be processed are higher than the corresponding highest reference line, determining the detected queue to be processed as a first queue;
sending an authentication request to a designated terminal;
when receiving an authentication passing signal fed back by the appointed terminal, calculating a difference value between the current allocated resource of the first queue and the corresponding highest reference line as a transferable resource of the first queue;
and recycling the transferable resource to the public resource pool.
The designated terminal may be a terminal with an authentication right.
Resources in the common resource pool may be utilized by any of the queues to be processed.
It can be understood that resources above the highest baseline are redundant allocated resources; this portion is usually not utilized, and keeping it in the queue would waste it.
Therefore, the redundant resources are recycled into the public resource pool for other queues in need, which effectively avoids waste of resources and improves resource utilization.
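The recycling step above might look like the following sketch, where a callback stands in for the designated terminal's authentication and the dictionary fields are invented for illustration:

```python
def reclaim_surplus(queues, pool, auth_check):
    """For each queue whose currently allocated resources exceed its
    highest baseline (MAX), and subject to an authentication callback
    standing in for the designated terminal's approval, move the
    surplus (the transferable resource) into the public resource pool."""
    for q in queues:
        surplus = q["allocated"] - q["MAX"]  # transferable resource
        if surplus > 0 and auth_check(q["name"]):
            q["allocated"] -= surplus
            pool["capacity"] += surplus
    return pool

queues = [{"name": "q1", "allocated": 120, "MAX": 100},
          {"name": "q2", "allocated": 80,  "MAX": 100}]
pool = {"capacity": 0}
reclaim_surplus(queues, pool, auth_check=lambda name: True)
print(pool["capacity"])  # 20 (only q1 had resources above its baseline)
```

Only queues above their highest baseline are touched, so queues running within their normal range keep their allocation.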
S13, constructing at least one queue group according to the at least one index value.
The queues belonging to one queue group are not limited by tasks or departments and can transfer resources with each other.
Specifically, the constructing at least one queue group according to the at least one index value includes:
acquiring the budget of each queue to be processed;
calculating the difference value between the budget of each queue to be processed and the consumption average value of each queue to be processed as a first numerical value of each queue to be processed;
calculating the difference value between the highest datum line of each queue to be processed and the budget of each queue to be processed, and taking the difference value as a second numerical value of each queue to be processed;
calculating the difference value between the budget of each queue to be processed and the lowest datum line of each queue to be processed as a third numerical value of each queue to be processed;
dividing the queue with the first value being the same as the second value and/or the consumption average value being the same as the third value into a queue group;
and integrating all the divided queue groups to obtain the at least one queue group.
For example, the MAX (highest baseline), MIN (lowest baseline) and AVG (consumption mean) values of a queue are obtained from its maximum, minimum and average consumption over a two-week period. These three values are then compared with those of other queues: each queue's first value is budget - AVG, its second value is MAX - budget, and its third value is budget - MIN. Queues whose first value equals their second value, and/or whose consumption mean equals their third value, are divided into a queue group, and all divided queue groups are integrated to obtain the at least one queue group.
By the embodiment, strict attribution limitation (whether the queues belong to the same task or not) in the conventional resource allocation mode can be avoided, and the transfer of resources among queues belonging to different attributions can be realized.
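One possible reading of the division rule is sketched below: each queue is placed in a group according to which equality it satisfies. The patent text is ambiguous on exactly how matching queues are clustered, so treat this as an illustrative interpretation rather than the claimed algorithm:

```python
def build_queue_groups(queues):
    """Divide queues into groups by the rule above: queues whose first
    value (budget - AVG) equals their second value (MAX - budget),
    and/or whose consumption mean (AVG) equals their third value
    (budget - MIN). A sketch under one reading of the text."""
    groups = {"first_eq_second": [], "avg_eq_third": []}
    for q in queues:
        first = q["budget"] - q["AVG"]
        second = q["MAX"] - q["budget"]
        third = q["budget"] - q["MIN"]
        if first == second:
            groups["first_eq_second"].append(q["name"])
        if q["AVG"] == third:
            groups["avg_eq_third"].append(q["name"])
    return {k: v for k, v in groups.items() if v}

queues = [
    {"name": "q1", "budget": 60,  "AVG": 50, "MAX": 70,  "MIN": 35},
    {"name": "q2", "budget": 100, "AVG": 50, "MAX": 130, "MIN": 50},
]
print(build_queue_groups(queues))
```

Here q1 satisfies budget - AVG = MAX - budget = 10, and q2 satisfies AVG = budget - MIN = 50, so each lands in a different group regardless of which department or task it belongs to.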
And S14, when receiving the queue to be distributed, identifying the queue group corresponding to the queue to be distributed from the at least one queue group as a target queue group.
The queue to be allocated may include, but is not limited to, queues found to lack resources during task execution, newly created queues, and the like.
In this embodiment, the queue group corresponding to the queue to be allocated may be identified from the at least one queue group as a target queue group by a queue name or the like, which is not limited in the present invention.
And S15, calling a Yarn component to detect the resource utilization rate in the target queue group.
In this embodiment, the resource utilization rate refers to the ratio between the resources already utilized and all available resources in the queue.
The resource utilization rate can reflect the utilization condition of the resource.
And S16, distributing large data resources for the queue to be distributed according to the resource utilization rate.
Specifically, the allocating the big data resource to the queue to be allocated according to the resource utilization ratio includes:
acquiring the queue with the lowest resource utilization rate in the target queue group as a target queue;
detecting idle resources of the target queue;
and transferring the idle resources of the target queue to the queue to be distributed.
For example, after the target queue with the lowest resource utilization rate is obtained, a Yarn component of a Hadoop 2 cluster can be used to detect the target queue's resource consumption, obtain its idle resources, and transfer those idle resources to the queue to be allocated for its use, which prevents the target queue from holding idle resources while the queue to be allocated lacks sufficient resources.
Through the implementation mode, the serious shortage of the resources of the partial queues caused by uneven resource distribution is avoided, and the conditions of idling and waste of the resources of the partial queues are avoided.
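The transfer step can be sketched as follows. In a real deployment the utilization figures would come from the YARN scheduler (e.g. its REST API); the dictionaries here are stand-ins:

```python
def allocate_from_group(target_group, pending_queue):
    """Pick the queue with the lowest resource utilization rate in the
    target group, treat its unused capacity as idle resources, and
    transfer that idle amount to the queue awaiting allocation."""
    target = min(target_group, key=lambda q: q["used"] / q["capacity"])
    idle = target["capacity"] - target["used"]   # idle resources
    target["capacity"] -= idle
    pending_queue["capacity"] += idle
    return target["name"], idle

group = [{"name": "qa", "capacity": 100, "used": 90},   # 90% utilized
         {"name": "qb", "capacity": 100, "used": 30}]   # 30% utilized
pending = {"name": "new", "capacity": 0}
print(allocate_from_group(group, pending))  # ('qb', 70)
```

Because the donor is always the least-utilized queue in the group, the transfer draws on resources that would otherwise sit idle.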
In at least one embodiment of the invention, the method further comprises:
acquiring the resource service life of the queue to be distributed;
monitoring the resource use time of the queue to be distributed;
calculating the time interval between the resource use time and the resource use period;
when the time interval is smaller than or equal to a preset time length, detecting whether a renewal instruction exists in the time interval;
when the renewal instruction is detected, sending a renewal request to the appointed terminal; or
when the renewal instruction is not detected, recycling the resources of the queue to be allocated to the public resource pool when the resource use period is reached.
For example, if the resource use period of the queue to be allocated ends at 12 noon on Friday, and monitoring shows that the interval between the queue's resource use time and the resource use period is 30 minutes, equal to the preset duration of 30 minutes, the system continues to detect whether a renewal instruction appears within those 30 minutes. When a renewal instruction is detected, a renewal request is sent to the designated terminal with renewal authority; when no renewal instruction is detected, the queue's resources are recycled to the public resource pool at 12 noon on Friday, so that resources are reclaimed in time when the use period is reached.
Through the implementation mode, dynamic renewal and recovery of resources can be realized, waste of resources is avoided, and reasonable distribution of big data resources is further realized.
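The renewal-or-recycle decision above can be sketched as a small state check. A boolean flag stands in for detecting a renewal instruction, and the 30-minute window and sample deadline mirror the example in the text:

```python
from datetime import datetime, timedelta

def check_lease(now, lease_end, renewal_requested,
                threshold=timedelta(minutes=30)):
    """Within the threshold window before the resource use period ends,
    either forward a renewal request to the designated terminal or mark
    the queue's resources for recycling into the public pool at expiry."""
    remaining = lease_end - now
    if remaining > threshold:
        return "keep"                    # not yet in the renewal window
    if renewal_requested:
        return "send_renewal_request"    # escalate to the designated terminal
    return "recycle_at_expiry"           # reclaim into the public pool

deadline = datetime(2021, 1, 8, 12, 0)   # e.g. Friday noon
print(check_lease(datetime(2021, 1, 8, 11, 30), deadline, False))
# recycle_at_expiry
```

Running the check periodically against each allocated queue gives the monitoring behaviour described above without any per-queue timers.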
It should be noted that, to improve security, the index values may be stored in a blockchain.
It can be seen from the above technical solutions that the present invention, in response to a big data resource allocation instruction, acquires a queue to be processed according to that instruction, acquires the queue's historical resource consumption data and analyzes it to obtain at least one index value, constructs at least one queue group according to the index values, identifies, on receiving a queue to be allocated, the corresponding queue group as a target queue group, calls a Yarn component to detect the resource utilization rate within the target queue group, and allocates big data resources to the queue to be allocated according to that utilization rate. This avoids the serious resource shortages in some queues, and the idleness and waste of resources in others, that uneven allocation causes, and thereby achieves reasonable allocation of big data resources.
Fig. 2 is a functional block diagram of a big data resource allocation apparatus according to a preferred embodiment of the present invention. The big data resource allocation apparatus 11 comprises an acquisition unit 110, an analysis unit 111, a construction unit 112, an identification unit 113, a detection unit 114 and an allocation unit 115. A module/unit in the present invention refers to a series of computer program segments that can be executed by the processor 13, can perform a fixed function, and are stored in the memory 12. In the present embodiment, the functions of the modules/units are described in detail in the following embodiments.
In response to the big data resource allocation instruction, the obtaining unit 110 obtains the queue to be processed according to the big data resource allocation instruction.
In this embodiment, the big data resource allocation instruction may be triggered by a person in charge of the project or task, such as a project manager.
In at least one embodiment of the present invention, the obtaining unit 110 obtains the queue to be processed according to the big data resource allocation instruction, including:
analyzing the method body of the big data resource allocation instruction to obtain the information carried by the big data resource allocation instruction;
acquiring a preset label;
constructing a regular expression according to the preset label;
traversing in the information carried by the big data resource allocation instruction by using the regular expression, and determining the traversed data as a target address;
and linking to the target address, and acquiring data from the target address as the queue to be processed.
Specifically, the big data resource allocation instruction is essentially a piece of code; by coding convention, the content between the braces { } of the instruction is referred to as the method body.
The preset label can be custom-configured, and each preset label corresponds one-to-one to an address. For example, the preset label may be Add; a regular expression matching Add() is then built from the preset label, and the traversal is performed with that expression.
The queue to be processed may belong to a department or a project group, but the invention is not limited thereto.
By the implementation method, the target address can be quickly determined based on the regular expression and the preset label, so that the queue to be processed can be further obtained from the target address.
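The extraction described above can be sketched as follows. This is an illustrative reading, not the patent's actual implementation: it assumes the method body is the content between braces and that the target address appears as `<label>(<address>)`; the function name and the sample address format are made up for the example.

```python
import re

def extract_pending_queue_address(instruction, preset_label="Add"):
    """Illustrative sketch: pull the target address out of the method body
    of an allocation instruction, assuming the address appears as
    <label>(<address>) between the braces of the instruction."""
    # Step 1: take the method body, i.e. the content between { and }.
    body_match = re.search(r"\{(.*)\}", instruction, re.DOTALL)
    if not body_match:
        return None
    body = body_match.group(1)
    # Step 2: build a regular expression from the preset label and traverse
    # the carried information; the first hit is treated as the target address.
    pattern = re.compile(re.escape(preset_label) + r"\(([^)]*)\)")
    addr_match = pattern.search(body)
    return addr_match.group(1) if addr_match else None
```

One could then link to the returned address and fetch the queue to be processed from it, as the embodiment describes.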
The obtaining unit 110 obtains the historical resource consumption data of the queue to be processed.
Specifically, the historical resource consumption data of the queue to be processed may be obtained from a preset database.
For example, the preset database may be a Hive (data warehouse technology) database.
The analysis unit 111 analyzes the historical resource consumption data of the queue to be processed to obtain at least one index value of the queue to be processed.
In this embodiment, the resource refers to a big data resource, including, but not limited to: computing resources and storage resources.
The computing resources, namely queue resources, are typically the scarcest, and mainly support the Map and Reduce operations over massive data.
In at least one embodiment of the present invention, the analyzing unit 111 analyzes the historical resource consumption data of the queue to be processed, and obtaining at least one index value of the queue to be processed includes:
identifying a queue identifier of each queue to be processed in the queues to be processed;
acquiring data which is the same as the queue identification of each queue to be processed from the historical resource consumption data as the historical resource consumption data of each queue to be processed;
calling the Yarn component to analyze the historical resource consumption data of each queue to be processed to obtain the lowest reference line, the highest reference line and the consumption mean of each queue to be processed;
and integrating the lowest reference line, the highest reference line and the consumption mean of each queue to be processed as the at least one index value of the queue to be processed.
Through the embodiment, the index value of each queue to be processed can be analyzed based on the historical consumption data and the Yarn component, so that the common resource consumption situation of each queue to be processed is reflected.
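As a minimal sketch of this analysis step (the real figures would come from the Yarn component rather than an in-memory dict, and the field names here are assumptions):

```python
def queue_index_values(history):
    """Illustrative sketch: derive the index values of each pending queue
    from its historical consumption records. `history` maps a queue
    identifier to a list of per-period consumption figures."""
    index_values = {}
    for queue_id, consumption in history.items():
        index_values[queue_id] = {
            "lowest_baseline": min(consumption),    # the lowest reference line (MIN)
            "highest_baseline": max(consumption),   # the highest reference line (MAX)
            "mean_consumption": sum(consumption) / len(consumption),  # AVG
        }
    return index_values
```

The three figures together summarize the usual resource consumption of each queue, which is exactly what the later grouping and recycling steps consume.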
In at least one embodiment of the present invention, a common resource pool of the queues to be processed is established;
acquiring current distributed resources of each queue to be processed;
comparing the current allocated resources of each queue to be processed with the highest reference line of each queue to be processed;
when detecting that the current allocated resources of the queue to be processed are higher than the corresponding highest reference line, determining the detected queue to be processed as a first queue;
sending an authentication request to a designated terminal;
when receiving an authentication passing signal fed back by the appointed terminal, calculating a difference value between the current allocated resource of the first queue and the corresponding highest reference line as a transferable resource of the first queue;
and recycling the transferable resource to the public resource pool.
The designated terminal may be a terminal with an authentication right.
Resources in the common resource pool may be utilized by any of the queues to be processed.
It is understood that resources above the highest reference line are allocated but redundant: this portion is usually left unutilized, and keeping it in the queue indefinitely wastes resources.
Therefore, recycling these redundant resources into the public resource pool for other queues in need effectively avoids waste and improves resource utilization.
The constructing unit 112 constructs at least one queue group according to the at least one index value.
Queues belonging to the same queue group are not restricted by task or department and can transfer resources to one another.
Specifically, the constructing unit 112 constructs at least one queue group according to the at least one index value, including:
acquiring the budget of each queue to be processed;
calculating the difference value between the budget of each queue to be processed and the consumption average value of each queue to be processed as a first numerical value of each queue to be processed;
calculating the difference value between the highest reference line of each queue to be processed and the budget of each queue to be processed as a second numerical value of each queue to be processed;
calculating the difference value between the budget of each queue to be processed and the lowest reference line of each queue to be processed as a third numerical value of each queue to be processed;
dividing the queue with the first value being the same as the second value and/or the consumption average value being the same as the third value into a queue group;
and integrating all the divided queue groups to obtain the at least one queue group.
For example: for each queue, MAX (namely the highest reference line), MIN (namely the lowest reference line) and AVG (namely the consumption mean) are obtained from the maximum, minimum and average consumption of that queue over a two-week period. The first value of each queue is budget - AVG, the second value is MAX - budget, and the third value is budget - MIN. Queues whose first value is the same as the second value, and/or whose consumption mean is the same as the third value, are divided into a queue group, and all the divided queue groups are integrated to obtain the at least one queue group.
Through this embodiment, the strict attribution restriction of conventional resource allocation (i.e., whether queues belong to the same task) can be avoided, and resources can be transferred among queues with different attributions.
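The grouping could be sketched as follows, under one plausible reading of the rule: queues whose (budget - AVG, MAX - budget) pairs match are placed in the same group. The source text is ambiguous on the exact grouping key, so treat both the key choice and the field names as assumptions.

```python
from collections import defaultdict

def build_queue_groups(queues):
    """Illustrative sketch: group queues by their derived index values,
    ignoring which task or department each queue belongs to. Each entry
    of `queues` maps a queue id to its budget, max, min and avg figures."""
    groups = defaultdict(list)
    for queue_id, q in queues.items():
        first = q["budget"] - q["avg"]    # first value
        second = q["max"] - q["budget"]   # second value
        # The third value (budget - min) could drive an alternative grouping key.
        groups[(first, second)].append(queue_id)
    return list(groups.values())
```

With this key, two queues from different departments that over- and under-shoot their budgets by the same margins land in one group and become eligible to exchange resources.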
When receiving a queue to be allocated, the identifying unit 113 identifies a queue group corresponding to the queue to be allocated as a target queue group from the at least one queue group.
The queue to be allocated may include, but is not limited to: queues found to be short of resources during task execution, newly issued queues, and the like.
In this embodiment, the queue group corresponding to the queue to be allocated may be identified from the at least one queue group as a target queue group by a queue name or the like, which is not limited in the present invention.
The detecting unit 114 invokes the Yarn component to detect the resource utilization rate in the target queue group.
In this embodiment, the resource utilization ratio refers to a ratio between the utilized resource and all the available resources in the queue.
The resource utilization rate can reflect the utilization condition of the resource.
The allocating unit 115 allocates big data resources to the queue to be allocated according to the resource utilization rate.
Specifically, the allocating unit 115 allocating big data resources to the queue to be allocated according to the resource utilization rate includes:
acquiring the queue with the lowest resource utilization rate in the target queue group as a target queue;
detecting idle resources of the target queue;
and transferring the idle resources of the target queue to the queue to be allocated.
For example: after the target queue with the lowest resource utilization rate is obtained, the Yarn component of a Hadoop 2 cluster can be used to detect the resource consumption of the target queue and obtain its idle resources, which are then transferred to the queue to be allocated for its use. This prevents the target queue from holding idle resources while the queue to be allocated lacks sufficient resources.
Through this implementation, the serious resource shortage of some queues caused by uneven resource allocation is avoided, as is the idling and waste of resources in other queues.
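The allocation step above can be sketched as follows. The `used`/`total` fields and the cap on the transferred amount are illustrative additions, not details from the source; in practice the utilization figures would come from the Yarn component.

```python
def allocate_from_target_group(group, pending_need):
    """Illustrative sketch: pick the queue with the lowest resource
    utilization in the target group, then transfer its idle resources
    to the queue to be allocated. Utilization = used / total."""
    target = min(group, key=lambda q: q["used"] / q["total"])
    idle = target["total"] - target["used"]   # idle resources of the target queue
    transferred = min(idle, pending_need)     # practical guard: don't over-transfer
    target["total"] -= transferred            # shrink the donor queue
    return transferred
```

The returned amount is what the queue to be allocated receives; the donor queue keeps everything it is actually using.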
In at least one embodiment of the present invention, the resource use period of the queue to be allocated is obtained;
monitoring the resource use time of the queue to be allocated;
calculating the time interval between the resource use time and the resource use period;
when the time interval is smaller than or equal to a preset time length, detecting whether a renewal instruction exists in the time interval;
when the renewal instruction is detected, sending a renewal request to the designated terminal; or
when the renewal instruction is not detected, recycling the resources of the queue to be allocated to the public resource pool when the resource use period is reached.
For example: suppose the resource use period of the queue to be allocated expires at 12 noon on Friday. When the monitored interval between the resource use time of the queue and the resource use period is 30 minutes, equal to the preset time length of 30 minutes, the system continuously detects whether a renewal instruction exists within those 30 minutes. When a renewal instruction is detected, a renewal request is sent to a designated terminal with renewal authority; or, when no renewal instruction is detected, the resources of the queue to be allocated are recycled to the public resource pool at 12 noon on Friday, so that resources are reclaimed promptly when the use period is reached.
Through this implementation, dynamic renewal and recycling of resources can be realized, waste of resources is avoided, and reasonable distribution of big data resources is further achieved.
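The renewal-and-recycle timing can be sketched as a small decision function. Times are minutes since an arbitrary epoch, the 30-minute threshold mirrors the example above, and the action names are invented labels for the behaviors the embodiment describes.

```python
def check_lifetime(now, deadline, renewal_requested, threshold=30):
    """Illustrative sketch of the lifetime monitor: once the remaining
    time drops to the preset threshold, either forward a renewal request
    or, at the deadline, recycle the queue's resources to the common
    pool. Returns the action the monitor would take at `now`."""
    remaining = deadline - now
    if remaining > threshold:
        return "wait"                     # interval still larger than the preset length
    if renewal_requested:
        return "send_renewal_request"     # forward to the designated terminal
    if remaining <= 0:
        return "recycle_to_common_pool"   # use period reached, reclaim resources
    return "await_renewal_instruction"    # inside the window, keep watching
```

A scheduler would call this periodically for each queue to be allocated and act on the returned label.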
Fig. 3 is a schematic structural diagram of an electronic device implementing a method for allocating big data resources according to a preferred embodiment of the present invention.
The electronic device 1 may comprise a memory 12, a processor 13 and a bus, and may further comprise a computer program, such as a big data resource allocation program, stored in the memory 12 and executable on the processor 13.
It will be understood by those skilled in the art that the schematic diagram is merely an example of the electronic device 1 and does not limit it; the electronic device 1 may have a bus-type or star-type structure, may include more or fewer hardware or software components than shown, or a different arrangement of components, and may further include, for example, input/output devices and network access devices.
It should be noted that the electronic device 1 is only an example; other existing or future electronic products that can be adapted to the present invention should also fall within the scope of protection of the present invention and are incorporated herein by reference.
The memory 12 includes at least one type of readable storage medium, which includes flash memory, removable hard disks, multimedia cards, card-type memory (e.g., SD or DX memory, etc.), magnetic memory, magnetic disks, optical disks, etc. The memory 12 may in some embodiments be an internal storage unit of the electronic device 1, for example a removable hard disk of the electronic device 1. The memory 12 may also be an external storage device of the electronic device 1 in other embodiments, such as a plug-in mobile hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like provided on the electronic device 1. Further, the memory 12 may also include both an internal storage unit and an external storage device of the electronic device 1. The memory 12 may be used not only to store application software installed in the electronic device 1 and various types of data, such as codes of a large data resource allocation program, but also to temporarily store data that has been output or is to be output.
The processor 13 may be composed of an integrated circuit in some embodiments, for example, a single packaged integrated circuit, or may be composed of a plurality of integrated circuits packaged with the same or different functions, including one or more Central Processing Units (CPUs), microprocessors, digital Processing chips, graphics processors, and combinations of various control chips. The processor 13 is a Control Unit (Control Unit) of the electronic device 1, connects various components of the electronic device 1 by using various interfaces and lines, and executes various functions and processes data of the electronic device 1 by running or executing programs or modules (for example, executing a big data resource allocation program and the like) stored in the memory 12 and calling data stored in the memory 12.
The processor 13 executes an operating system of the electronic device 1 and various installed application programs. The processor 13 executes the application program to implement the steps in the above-mentioned various embodiments of the big data resource allocation method, such as the steps shown in fig. 1.
Illustratively, the computer program may be divided into one or more modules/units, which are stored in the memory 12 and executed by the processor 13 to accomplish the present invention. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution process of the computer program in the electronic device 1. For example, the computer program may be divided into an acquisition unit 110, an analysis unit 111, a construction unit 112, an identification unit 113, a detection unit 114, and an allocation unit 115.
The integrated unit implemented in the form of a software functional module may be stored in a computer-readable storage medium. The software functional module is stored in a storage medium and includes several instructions to enable a computer device (which may be a personal computer, a computer device, or a network device) or a processor (processor) to execute parts of the big data resource allocation method according to the embodiments of the present invention.
The integrated modules/units of the electronic device 1 may be stored in a computer-readable storage medium if they are implemented in the form of software functional units and sold or used as separate products. Based on such understanding, all or part of the flow of the method according to the embodiments of the present invention may be implemented by a computer program, which may be stored in a computer-readable storage medium, and when the computer program is executed by a processor, the steps of the method embodiments may be implemented.
The computer program comprises computer program code, which may be in source code form, object code form, an executable file, or some intermediate form. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a random-access memory, and the like.
Further, the computer-readable storage medium may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function, and the like; the storage data area may store data created according to the use of the blockchain node, and the like.
The blockchain is a novel application mode of computer technologies such as distributed data storage, peer-to-peer transmission, consensus mechanisms and encryption algorithms. A blockchain is essentially a decentralized database: a series of data blocks linked by cryptographic methods, each containing a batch of network transaction information used to verify the validity (anti-tampering) of the information and to generate the next block. The blockchain may include a blockchain underlying platform, a platform product service layer, an application service layer, and the like.
The bus may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one arrow is shown in FIG. 3, but this does not indicate only one bus or one type of bus. The bus is arranged to enable connection communication between the memory 12 and at least one processor 13 or the like.
Although not shown, the electronic device 1 may further include a power supply (such as a battery) for supplying power to each component, and preferably, the power supply may be logically connected to the at least one processor 13 through a power management device, so as to implement functions of charge management, discharge management, power consumption management, and the like through the power management device. The power supply may also include any component of one or more dc or ac power sources, recharging devices, power failure detection circuitry, power converters or inverters, power status indicators, and the like. The electronic device 1 may further include various sensors, a bluetooth module, a Wi-Fi module, and the like, which are not described herein again.
Further, the electronic device 1 may further include a network interface, and optionally, the network interface may include a wired interface and/or a wireless interface (such as a WI-FI interface, a bluetooth interface, etc.), which are generally used for establishing a communication connection between the electronic device 1 and other electronic devices.
Optionally, the electronic device 1 may further comprise a user interface, which may be a Display (Display), an input unit (such as a Keyboard), and optionally a standard wired interface, a wireless interface. Alternatively, in some embodiments, the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch device, or the like. The display, which may also be referred to as a display screen or display unit, is suitable for displaying information processed in the electronic device 1 and for displaying a visualized user interface, among other things.
It is to be understood that the described embodiments are for purposes of illustration only and that the scope of the appended claims is not limited to such structures.
Fig. 3 only shows the electronic device 1 with components 12-13, and it will be understood by a person skilled in the art that the structure shown in fig. 3 does not constitute a limitation of the electronic device 1, and may comprise fewer or more components than shown, or a combination of certain components, or a different arrangement of components.
In conjunction with fig. 1, the memory 12 in the electronic device 1 stores a plurality of instructions to implement a big data resource allocation method, and the processor 13 can execute the plurality of instructions to implement:
responding to a big data resource allocation instruction, and acquiring a queue to be processed according to the big data resource allocation instruction;
acquiring historical resource consumption data of the queue to be processed;
analyzing historical resource consumption data of the queue to be processed to obtain at least one index value of the queue to be processed;
constructing at least one queue group according to the at least one index value;
when receiving a queue to be allocated, identifying a queue group corresponding to the queue to be allocated from the at least one queue group as a target queue group;
calling a Yarn component to detect the resource utilization rate in the target queue group;
and allocating big data resources to the queue to be allocated according to the resource utilization rate.
Specifically, the processor 13 may refer to the description of the relevant steps in the embodiment corresponding to fig. 1 for a specific implementation method of the instruction, which is not described herein again.
In the embodiments provided in the present invention, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules/units is only one logical function division, and there may be other division ways in actual implementation.
The modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
In addition, functional modules in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional module.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof.
The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference signs in the claims shall not be construed as limiting the claim concerned.
Furthermore, it is obvious that the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. A plurality of units or means recited in the present invention may also be implemented by one unit or means through software or hardware. The terms first, second, etc. are used to denote names, but not any particular order.
Finally, it should be noted that the above embodiments are only for illustrating the technical solutions of the present invention and not for limiting, and although the present invention is described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications or equivalent substitutions may be made on the technical solutions of the present invention without departing from the spirit and scope of the technical solutions of the present invention.
Claims (10)
1. A big data resource allocation method is characterized in that the big data resource allocation method comprises the following steps:
responding to a big data resource allocation instruction, and acquiring a queue to be processed according to the big data resource allocation instruction;
acquiring historical resource consumption data of the queue to be processed;
analyzing historical resource consumption data of the queue to be processed to obtain at least one index value of the queue to be processed;
constructing at least one queue group according to the at least one index value;
when receiving a queue to be allocated, identifying a queue group corresponding to the queue to be allocated from the at least one queue group as a target queue group;
calling a Yarn component to detect the resource utilization rate in the target queue group;
and allocating big data resources to the queue to be allocated according to the resource utilization rate.
2. The big data resource allocation method according to claim 1, wherein said obtaining a pending queue according to the big data resource allocation instruction comprises:
analyzing the method body of the big data resource allocation instruction to obtain the information carried by the big data resource allocation instruction;
acquiring a preset label;
constructing a regular expression according to the preset label;
traversing in the information carried by the big data resource allocation instruction by using the regular expression, and determining the traversed data as a target address;
and linking to the target address, and acquiring data from the target address as the queue to be processed.
3. The big data resource allocation method according to claim 1, wherein the analyzing the historical resource consumption data of the queue to be processed to obtain at least one index value of the queue to be processed comprises:
identifying a queue identifier of each queue to be processed in the queues to be processed;
acquiring data which is the same as the queue identification of each queue to be processed from the historical resource consumption data as the historical resource consumption data of each queue to be processed;
calling the Yarn component to analyze the historical resource consumption data of each queue to be processed to obtain the lowest reference line, the highest reference line and the consumption mean of each queue to be processed;
and integrating the lowest reference line, the highest reference line and the consumption mean of each queue to be processed as the at least one index value of the queue to be processed.
4. The big data resource allocation method of claim 3, wherein the method further comprises:
establishing a public resource pool of the queue to be processed;
acquiring current distributed resources of each queue to be processed;
comparing the current allocated resources of each queue to be processed with the highest reference line of each queue to be processed;
when detecting that the current allocated resources of the queue to be processed are higher than the corresponding highest reference line, determining the detected queue to be processed as a first queue;
sending an authentication request to a designated terminal;
when receiving an authentication passing signal fed back by the appointed terminal, calculating a difference value between the current allocated resource of the first queue and the corresponding highest reference line as a transferable resource of the first queue;
and recycling the transferable resource to the public resource pool.
5. The big data resource allocation method according to claim 3, wherein the constructing at least one queue group according to the at least one index value comprises:
acquiring the budget of each queue to be processed;
calculating the difference value between the budget of each queue to be processed and the consumption average value of each queue to be processed as a first numerical value of each queue to be processed;
calculating the difference value between the highest reference line of each queue to be processed and the budget of each queue to be processed as a second numerical value of each queue to be processed;
calculating the difference value between the budget of each queue to be processed and the lowest reference line of each queue to be processed as a third numerical value of each queue to be processed;
dividing the queue with the first value being the same as the second value and/or the consumption average value being the same as the third value into a queue group;
and integrating all the divided queue groups to obtain the at least one queue group.
6. The big data resource allocation method according to claim 1, wherein the allocating big data resources for the queue to be allocated according to the resource utilization ratio comprises:
acquiring the queue with the lowest resource utilization rate in the target queue group as a target queue;
detecting idle resources of the target queue;
and transferring the idle resources of the target queue to the queue to be allocated.
7. The big data resource allocation method of claim 1, wherein the method further comprises:
acquiring the resource use period of the queue to be allocated;
monitoring the resource use time of the queue to be allocated;
calculating the time interval between the resource use time and the resource use period;
when the time interval is smaller than or equal to a preset time length, detecting whether a renewal instruction exists in the time interval;
when the renewal instruction is detected, sending a renewal request to the designated terminal; or
when the renewal instruction is not detected, recycling the resources of the queue to be allocated to the public resource pool when the resource use period is reached.
8. A big data resource allocation apparatus, comprising:
an acquisition unit, used for responding to a big data resource allocation instruction and acquiring a queue to be processed according to the big data resource allocation instruction;
the acquiring unit is further configured to acquire historical resource consumption data of the queue to be processed;
the analysis unit is used for analyzing historical resource consumption data of the queue to be processed to obtain at least one index value of the queue to be processed;
the construction unit is used for constructing at least one queue group according to the at least one index value;
the device comprises an identification unit, a queue management unit and a queue management unit, wherein the identification unit is used for identifying a queue group corresponding to a queue to be distributed from at least one queue group as a target queue group when the queue to be distributed is received;
the detection unit is used for calling the Yarn component to detect the resource utilization rate in the target queue group;
and the allocation unit is used for allocating the large data resources to the queue to be allocated according to the resource utilization rate.
9. An electronic device, characterized in that the electronic device comprises:
a memory storing at least one instruction; and
a processor executing the instructions stored in the memory to implement the big data resource allocation method according to any one of claims 1 to 7.
10. A computer-readable storage medium, wherein the computer-readable storage medium stores at least one instruction, and the at least one instruction is executed by a processor in an electronic device to implement the big data resource allocation method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011437029.8A CN112527501B (en) | 2020-12-07 | 2020-12-07 | Big data resource allocation method, device, equipment and medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011437029.8A CN112527501B (en) | 2020-12-07 | 2020-12-07 | Big data resource allocation method, device, equipment and medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112527501A true CN112527501A (en) | 2021-03-19 |
CN112527501B CN112527501B (en) | 2024-08-23 |
Family
ID=74999295
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011437029.8A Active CN112527501B (en) | 2020-12-07 | 2020-12-07 | Big data resource allocation method, device, equipment and medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112527501B (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105718317A (en) * | 2016-01-15 | 2016-06-29 | 浪潮(北京)电子信息产业有限公司 | Task scheduling method and task scheduling device |
US20190146840A1 (en) * | 2017-11-14 | 2019-05-16 | Salesforce.Com, Inc. | Computing resource allocation based on number of items in a queue and configurable list of computing resource allocation steps |
CN111198767A (en) * | 2020-01-07 | 2020-05-26 | 平安科技(深圳)有限公司 | Big data resource processing method and device, terminal and storage medium |
CN111278132A (en) * | 2020-01-19 | 2020-06-12 | 重庆邮电大学 | Resource allocation method for low-delay high-reliability service in mobile edge calculation |
US20200334077A1 (en) * | 2019-04-22 | 2020-10-22 | International Business Machines Corporation | Scheduling requests based on resource information |
2020-12-07: CN application CN202011437029.8A filed; granted as patent CN112527501B (status: Active)
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105718317A (en) * | 2016-01-15 | 2016-06-29 | 浪潮(北京)电子信息产业有限公司 | Task scheduling method and task scheduling device |
US20190146840A1 (en) * | 2017-11-14 | 2019-05-16 | Salesforce.Com, Inc. | Computing resource allocation based on number of items in a queue and configurable list of computing resource allocation steps |
US20200334077A1 (en) * | 2019-04-22 | 2020-10-22 | International Business Machines Corporation | Scheduling requests based on resource information |
CN111198767A (en) * | 2020-01-07 | 2020-05-26 | 平安科技(深圳)有限公司 | Big data resource processing method and device, terminal and storage medium |
CN111278132A (en) * | 2020-01-19 | 2020-06-12 | 重庆邮电大学 | Resource allocation method for low-delay high-reliability service in mobile edge calculation |
Non-Patent Citations (1)
Title |
---|
CHEN Bin et al., "Cloud system resource management based on hierarchical queue historical performance modeling", Microcomputer & Its Applications, vol. 35, no. 14, 31 August 2016 (2016-08-31), pages 27-29 *
Also Published As
Publication number | Publication date |
---|---|
CN112527501B (en) | 2024-08-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112559535B (en) | Multithreading-based asynchronous task processing method, device, equipment and medium | |
US20180331927A1 (en) | Resource Coordinate System for Data Centers | |
CN112231586A (en) | Course recommendation method, device, equipment and medium based on transfer learning | |
CN114124968B (en) | Load balancing method, device, equipment and medium based on market data | |
CN114675976B (en) | GPU (graphics processing Unit) sharing method, device, equipment and medium based on kubernets | |
CN112084486A (en) | User information verification method and device, electronic equipment and storage medium | |
CN112256783A (en) | Data export method and device, electronic equipment and storage medium | |
CN111694844A (en) | Enterprise operation data analysis method and device based on configuration algorithm and electronic equipment | |
CN113806434A (en) | Big data processing method, device, equipment and medium | |
CN115269523A (en) | File storage and query method based on artificial intelligence and related equipment | |
CN112631731A (en) | Data query method and device, electronic equipment and storage medium | |
CN115129753A (en) | Data blood relationship analysis method and device, electronic equipment and storage medium | |
CN111694843A (en) | Missing number detection method and device, electronic equipment and storage medium | |
CN111858604B (en) | Data storage method and device, electronic equipment and storage medium | |
CN110647409A (en) | Message writing method, electronic device, system and medium | |
CN112541640A (en) | Resource authority management method and device, electronic equipment and computer storage medium | |
CN114817408B (en) | Scheduling resource identification method and device, electronic equipment and storage medium | |
CN112527501B (en) | Big data resource allocation method, device, equipment and medium | |
CN113918305B (en) | Node scheduling method, node scheduling device, electronic equipment and readable storage medium | |
CN114201466A (en) | Method, device and equipment for preventing cache breakdown and readable storage medium | |
CN115086047A (en) | Interface authentication method and device, electronic equipment and storage medium | |
CN115102770A (en) | Resource access method, device and equipment based on user permission and storage medium | |
CN113918603A (en) | Hash cache generation method and device, electronic equipment and storage medium | |
CN113419718A (en) | Data transmission method, device, equipment and medium | |
CN113449037A (en) | AI-based SQL engine calling method, device, equipment and medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||