CN113835856A - Storage statistical method, device and equipment for AI platform - Google Patents

Storage statistical method, device and equipment for AI platform

Info

Publication number
CN113835856A
Authority
CN
China
Prior art keywords
storage
platform
statistics
file
calling
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111094443.8A
Other languages
Chinese (zh)
Other versions
CN113835856B (en)
Inventor
郑玉会
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Inspur Intelligent Technology Co Ltd
Original Assignee
Suzhou Inspur Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Inspur Intelligent Technology Co Ltd filed Critical Suzhou Inspur Intelligent Technology Co Ltd
Priority to CN202111094443.8A priority Critical patent/CN113835856B/en
Publication of CN113835856A publication Critical patent/CN113835856A/en
Application granted granted Critical
Publication of CN113835856B publication Critical patent/CN113835856B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4843Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24Querying
    • G06F16/245Query processing
    • G06F16/2458Special types of queries, e.g. statistical queries, fuzzy queries or distributed queries
    • G06F16/2462Approximate or statistical queries
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5011Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
    • G06F9/5016Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals the resource being the memory
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Fuzzy Systems (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Debugging And Monitoring (AREA)

Abstract

The application discloses a storage statistics method for an AI platform, implemented based on a storage statistics microservice independent of a platform service module. The method first decouples the storage statistics from the core services of the AI platform to solve the problem of resource preemption. The application also provides a storage statistics apparatus, a device and a computer-readable storage medium for the AI platform, whose technical effects correspond to those of the method.

Description

Storage statistical method, device and equipment for AI platform
Technical Field
The present application relates to the field of computer technologies, and in particular, to a storage statistics method, an apparatus, a device, and a computer-readable storage medium for an AI platform.
Background
With the wide adoption of AI (Artificial Intelligence) training platforms, enterprise-level applications are becoming more numerous, user service volume is growing, and the requirements on the storage statistics performance of the platform are rising accordingly. The storage statistics of user disk space in existing AI platforms can hardly meet user requirements. How to count the disk space of all users in the AI platform quickly, efficiently and stably, perform storage statistics at minimal resource cost, and still ensure the normal operation of the other service modules is a problem that urgently needs to be solved.
The conventional storage statistics module of an AI training platform adopts a timed-task approach: the timed task is triggered at a certain frequency and re-counts the storage space of all user home directories of the AI platform, regardless of whether the underlying user home directories have changed. This scheme uses the sizeOf method of the apache-commons-io package (a Java method in the Apache FileUtils utility class), which traverses each directory layer by layer and accumulates the size of every file. When the volume of files reaches the TB level and above, the disk I/O overhead of this method is large, and services may block or even become abnormal.
An improved variant rewrites the sizeOf method on top of the apache-commons-io package, creating a new thread for each folder and accumulating file sizes in a multi-threaded manner. This can improve efficiency, but when the volume of files reaches the TB level, CPU and memory consumption becomes excessive, the whole service can be dragged down directly, and the entire service module becomes abnormal.
In summary, in the conventional scheme the storage statistics is coupled with the core services of the AI platform, and the preemption of service resources leads to service lag and poor stability; the conventional Java sizeOf method consumes large amounts of CPU and memory resources; and the periodically triggered storage statistics task does a great deal of useless work. How to overcome these disadvantages is a technical problem to be solved by those skilled in the art.
Disclosure of Invention
The application aims to provide a storage statistics method, apparatus, device and computer-readable storage medium for an AI platform, which solve the problem that the traditional storage statistics scheme occupies the resources of the core services of the AI platform and degrades service performance. The specific scheme is as follows:
In a first aspect, the present application provides a storage statistics method for an AI platform, implemented based on a storage statistics microservice independent of a platform service module, the method including:
monitoring a user home directory of the platform service module by using a file listener, and issuing a storage statistics task when a file change is detected;
and calling a linux kernel function to perform storage statistics when the storage statistics task is received, so as to obtain a storage statistics result.
Optionally, the calling a linux kernel function to perform storage statistics to obtain a storage statistics result includes:
calling a linux kernel function to read the file stream according to a preset buffer size to obtain the storage statistics result.
Optionally, before the calling a linux kernel function to read the file stream according to the preset buffer size to obtain the storage statistics result, the method further includes:
setting the preset buffer size according to a target condition, wherein the target condition comprises the number of files in the user home directory.
Optionally, the calling a linux kernel function to perform storage statistics to obtain a storage statistics result includes:
calling the linux kernel function readdir in C language to perform storage statistics to obtain the storage statistics result.
Optionally, after the calling a linux kernel function to perform storage statistics to obtain a storage statistics result, the method further includes:
sending the storage statistics result to the platform service module in the form of a message.
Optionally, after the sending the storage statistics result to the platform service module in the form of a message, the method further includes:
updating the storage space size of the user home directory on a storage monitoring page according to the storage statistics result.
Optionally, before the monitoring the user home directory of the platform service module by using the file listener, the method further includes:
configuring the CPU and memory resources of the storage statistics microservice according to prior knowledge.
In a second aspect, the present application provides a storage statistics apparatus for an AI platform, implemented based on a storage statistics microservice independent of a platform service module, the apparatus including:
a monitoring module, configured to monitor the user home directory of the platform service module by using a file listener and issue a storage statistics task when a file change is detected;
and a storage statistics module, configured to call a linux kernel function to perform storage statistics when the storage statistics task is received, so as to obtain a storage statistics result.
In a third aspect, the present application provides a storage statistics device for an AI platform, including:
a memory: for storing a computer program;
a processor: for executing the computer program to implement the storage statistics method of the AI platform as described above.
In a fourth aspect, the present application provides a computer-readable storage medium having stored thereon a computer program for implementing the storage statistics method of the AI platform as described above when executed by a processor.
The application provides a storage statistics method for an AI platform, implemented based on a storage statistics microservice independent of a platform service module, the method including: monitoring the user home directory of the platform service module by using a file listener, and issuing a storage statistics task when a file change is detected; and calling a linux kernel function to perform storage statistics when the storage statistics task is received, so as to obtain a storage statistics result. Because the storage statistics runs as a microservice independent of the platform service module, it is decoupled from the core services of the AI platform, which solves the problem of resource preemption. In addition, the method actively monitors changes of the user home directory files by means of a file listener and triggers a storage statistics task only when a file change is detected, which avoids the useless work of periodically triggered statistics tasks. Finally, a linux kernel function is called for the storage statistics, which reduces switching between user mode and kernel mode, lowers CPU and memory occupancy, and improves the efficiency of the storage statistics.
In addition, the application also provides a storage statistics apparatus, a device and a computer-readable storage medium for the AI platform, whose technical effects correspond to those of the method and are not repeated herein.
Drawings
To explain the embodiments of the present application or the technical solutions of the prior art more clearly, the drawings needed for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a flowchart of a first embodiment of a storage statistics method of an AI platform provided in the present application;
fig. 2 is a schematic diagram illustrating an internal implementation of a storage statistics service according to a second embodiment of the storage statistics method for an AI platform provided in the present application;
fig. 3 is a flowchart of an AI platform storage statistics scheme according to a second embodiment of the AI platform storage statistics method provided in the present application;
FIG. 4 is a schematic diagram of an embodiment of a storage statistics apparatus of the AI platform provided in the present application;
fig. 5 is a schematic diagram of an embodiment of a storage statistics device of an AI platform provided in the present application.
Detailed Description
In order that those skilled in the art will better understand the disclosure, the following detailed description will be given with reference to the accompanying drawings. It is to be understood that the embodiments described are only a few embodiments of the present application and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In view of the problems of the conventional scheme, namely that storage statistics is coupled with the core services, service resources are preempted, and service lag and poor stability result, and the further problems that the conventional Java sizeOf method consumes large amounts of CPU and memory resources and that the periodically triggered storage statistics task does a great deal of useless work, the application provides a storage statistics method, apparatus, device and computer-readable storage medium for an AI platform.
Referring to fig. 1, a first embodiment of a storage statistics method for an AI platform provided in the present application is described below, where the first embodiment is implemented based on a storage statistics microservice independent from a platform service module, and includes:
S11, monitoring the user home directory of the platform service module by using a file listener, and issuing a storage statistics task when a file change is detected;
and S12, calling a linux kernel function to perform storage statistics when the storage statistics task is received, so as to obtain a storage statistics result.
In this embodiment, the storage statistics is decoupled from the AI platform service module as a new microservice. After the storage statistics microservice is started, the implementation mainly comprises two parts: file monitoring and storage statistics.
Specifically, a file listener is used to monitor file changes in each user home directory. Tasks no longer need to be issued on a timer; a storage statistics task is issued only when the file listener detects a file change. The storage statistics is implemented by calling a linux kernel function to obtain a storage statistics result; specifically, code written in C calls the linux kernel function readdir() to perform the storage statistics.
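For illustration, a minimal C sketch of such a readdir()-based traversal is given below. It assumes POSIX interfaces on Linux; the function name dir_usage_bytes and the command-line handling are illustrative assumptions rather than code disclosed by the patent.

/* Minimal sketch of a readdir()-based statistics routine (illustrative only). */
#include <dirent.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>

/* Recursively accumulate the size, in bytes, of every file under path. */
static long long dir_usage_bytes(const char *path)
{
    DIR *dir = opendir(path);
    if (dir == NULL)
        return 0;                                  /* unreadable directory: count as 0 */

    long long total = 0;
    struct dirent *entry;
    while ((entry = readdir(dir)) != NULL) {
        if (strcmp(entry->d_name, ".") == 0 || strcmp(entry->d_name, "..") == 0)
            continue;                              /* skip the current/parent entries */

        char child[4096];
        snprintf(child, sizeof(child), "%s/%s", path, entry->d_name);

        struct stat st;
        if (lstat(child, &st) != 0)
            continue;                              /* entry vanished or is unreadable */

        if (S_ISDIR(st.st_mode))
            total += dir_usage_bytes(child);       /* descend into the subdirectory */
        else
            total += (long long)st.st_size;        /* accumulate the file size */
    }
    closedir(dir);
    return total;
}

int main(int argc, char **argv)
{
    const char *home = argc > 1 ? argv[1] : ".";
    printf("%s uses %lld bytes\n", home, dir_usage_bytes(home));
    return 0;
}

Because the traversal works directly on directory entries and a single stat structure, it does not allocate one object per file in the way the Java sizeOf approach does.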
On this basis, the buffer used for each read of the file stream can be configured, which avoids the problem of the traditional Java sizeOf implementation, in which a large number of File objects are created, occupying a large amount of memory or even causing memory overflow. Specifically, a linux kernel function is called to read the file stream according to the preset buffer size, and the storage statistics result is obtained. After the storage statistics result is obtained, a notification message (notify) can further be sent to the service module of the AI platform, and the storage space size of each user home directory is updated on the storage monitoring page.
The storage statistics method for the AI platform provided by this embodiment is implemented based on a storage statistics microservice independent of the platform service module, and has the following advantages:
1. The storage statistics is decoupled from the core service microservices, which solves the resource preemption problem, and the user can configure the resource ratio according to prior knowledge.
2. The storage statistics task is no longer issued passively on a timer; changes to the home directory files of each user are actively monitored by a file listener, and a storage statistics task is triggered once only when a file change is detected (one possible realization of such a listener is sketched after this list).
3. The storage statistics scheme supports dynamic configuration of the buffer: the user can flexibly and dynamically configure the buffer value according to prior knowledge such as the number of files, and reading the file stream by calling a linux kernel function from C code avoids the problem of the traditional Java sizeOf approach, in which a large number of File objects are created and CPU and memory resources are consumed.
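The description only speaks of a file listener and does not name the monitoring mechanism; one possible realization on Linux is the inotify interface. The following is a minimal sketch under that assumption, with issue_statistics_task() as an illustrative stub and /home/user1 as an illustrative home directory.

/* Minimal file-listener sketch, assuming the Linux inotify interface. */
#include <stdio.h>
#include <sys/inotify.h>
#include <unistd.h>

static void issue_statistics_task(const char *dir)
{
    /* In the real microservice this would enqueue a storage statistics task. */
    printf("change detected under %s, issuing storage statistics task\n", dir);
}

int main(void)
{
    const char *home = "/home/user1";               /* illustrative home directory */
    int fd = inotify_init();
    if (fd < 0)
        return 1;

    /* Watch create/modify/delete/move events; for brevity only the top-level
     * directory is watched, whereas a full listener would also add watches
     * for subdirectories. */
    inotify_add_watch(fd, home,
                      IN_CREATE | IN_MODIFY | IN_DELETE | IN_MOVED_TO | IN_MOVED_FROM);

    char buf[4096];
    for (;;) {
        ssize_t len = read(fd, buf, sizeof(buf));   /* blocks until events arrive */
        if (len <= 0)
            break;
        issue_statistics_task(home);                /* one task per batch of events */
    }
    close(fd);
    return 0;
}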
The second embodiment of the storage statistics method for the AI platform provided by the present application is described in detail below.
In this embodiment, the storage statistics is decoupled from the platform service module iresresource, and the user can configure the CPU and memory resources of the storage statistics microservice according to prior knowledge. Referring to fig. 2 and fig. 3, the second embodiment is implemented based on a storage statistics microservice independent of the platform service module, and includes:
S21, configuring the CPU and memory resources of the storage statistics microservice according to prior knowledge;
S22, setting a preset buffer size according to a target condition, wherein the target condition comprises the number of files in the user home directory;
S23, starting the storage statistics microservice, monitoring the user home directory of the platform service module by using a file listener, and issuing a storage statistics task when a file change is detected;
S24, when the storage statistics task is received, calling the linux kernel function readdir from C code and reading the file stream according to the preset buffer size to obtain a storage statistics result;
S25, sending the storage statistics result to the platform service module in the form of a message;
and S26, updating the storage space size of the user home directory on the storage monitoring page according to the storage statistics result.
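The following sketch ties steps S21 to S26 together, under the stated assumptions that the resource limits of S21 are applied by the deployment environment (for example container CPU and memory limits) and that the buffer size of S22 is taken from an environment variable. The routine names and the variable STAT_BUFFER_BYTES are illustrative stubs, not code disclosed by the patent.

/* End-to-end sketch of the flow of steps S21-S26 (illustrative stubs only). */
#include <stdio.h>
#include <stdlib.h>

static long long run_storage_statistics(const char *home, size_t buf_size)
{
    /* S24: call the linux kernel interface (readdir/getdents) using buf_size. */
    (void)home;
    (void)buf_size;
    return 0;                                       /* stubbed statistics result */
}

static void notify_platform_service(const char *home, long long bytes)
{
    /* S25: send the result to the platform service module as a message;
     * S26: the service module then updates the storage monitoring page. */
    printf("notify: %s = %lld bytes\n", home, bytes);
}

int main(void)
{
    /* S22: preset buffer size, defaulting to 32K as in the description. */
    const char *env = getenv("STAT_BUFFER_BYTES");  /* illustrative variable name */
    size_t buf_size = env ? (size_t)atoll(env) : 32 * 1024;

    const char *home = "/home/user1";               /* illustrative home directory */

    /* S23: in the microservice a file listener blocks here until a change is
     * detected; this sketch simply performs one statistics pass. */
    long long bytes = run_storage_statistics(home, buf_size);
    notify_platform_service(home, bytes);
    return 0;
}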
As can be seen, the storage statistics method for the AI platform provided by this embodiment has at least the following advantages:
1. The storage statistics is decoupled from the AI platform service module into an independent microservice, and the CPU and memory resources of the storage statistics microservice and of the AI platform service module are allocated reasonably according to prior knowledge. This solves the service anomalies caused by the excessive CPU and memory usage of the original storage statistics approach and prevents the storage statistics from occupying the resources of the service module and making the AI platform unusable.
2. A file listener is used to monitor the files in each user home directory, and a storage statistics task is triggered once only when a file change is detected, replacing the traditional mode of issuing statistics tasks on a timer and avoiding many useless statistics runs when the files have not changed. Specifically, a linux kernel function is called according to the buffer value configured by the user, the file stream is read, and the size is counted. This reduces the useless work of repeatedly running storage statistics for the many user home directories that have not changed when tasks are issued by a timer.
3. The storage statistics method itself is improved. The traditional sizeOf method needs to create a large number of File objects, which inevitably requires a large amount of memory for object creation and reclamation and may even cause service anomalies through memory overflow. In this embodiment, the file stream is read directly by a linux kernel function, and the buffer used for each read can be configured. The linux kernel function readdir() is called from C code, and after counting is completed the storage size is reported to the service module of the AI platform in the form of a message. The storage statistics also supports dynamic buffer configuration (32K by default), and the user can increase the buffer value according to prior knowledge such as the number of files in the current home directory, which reduces switching between user mode and kernel mode, lowers CPU and memory occupancy, and improves the efficiency of the storage statistics; the sketch after this list illustrates the effect of the buffer size. Finally, the service module updates the stored value and displays it on the storage monitoring page.
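The relation between the buffer size and the number of user-mode/kernel-mode switches can be illustrated with the getdents64 system call, the kernel facility behind readdir(). The sketch below is an illustration under assumptions: the 32K default follows the description, while all other names are illustrative and not the patent's own code.

/* Sketch: reading directory entries with a configurable buffer via getdents64;
 * a larger buffer means fewer system calls, i.e. fewer user/kernel switches. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/syscall.h>
#include <unistd.h>

struct linux_dirent64 {                 /* layout used by the getdents64 syscall */
    unsigned long long d_ino;
    long long          d_off;
    unsigned short     d_reclen;
    unsigned char      d_type;
    char               d_name[];
};

int main(int argc, char **argv)
{
    const char *dir = argc > 1 ? argv[1] : ".";
    size_t buf_size = argc > 2 ? (size_t)atoll(argv[2]) : 32 * 1024;  /* 32K default */

    int fd = open(dir, O_RDONLY | O_DIRECTORY);
    if (fd < 0)
        return 1;

    char *buf = malloc(buf_size);
    if (buf == NULL)
        return 1;

    long entries = 0, calls = 0;
    for (;;) {
        long n = syscall(SYS_getdents64, fd, buf, buf_size);  /* one kernel entry */
        if (n <= 0)
            break;
        calls++;
        for (long off = 0; off < n; ) {             /* walk the entries in the buffer */
            struct linux_dirent64 *d = (struct linux_dirent64 *)(buf + off);
            entries++;
            off += d->d_reclen;
        }
    }
    printf("%ld entries read in %ld getdents64 calls (buffer %zu bytes)\n",
           entries, calls, buf_size);

    free(buf);
    close(fd);
    return 0;
}

For a directory with many entries, doubling the buffer roughly halves the number of getdents64 calls, which is the mode-switch saving the description refers to.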
The storage statistics apparatus for the AI platform provided by the embodiments of the present application is introduced below; the storage statistics apparatus described below and the storage statistics method described above may be referred to in correspondence with each other.
As shown in fig. 4, the storage statistics apparatus for the AI platform according to this embodiment is implemented based on a storage statistics microservice independent of a platform service module, and includes:
a monitoring module 41, configured to monitor the user home directory of the platform service module by using a file listener and issue a storage statistics task when a file change is detected;
and a storage statistics module 42, configured to call a linux kernel function to perform storage statistics when the storage statistics task is received, so as to obtain a storage statistics result.
The storage statistics apparatus of the AI platform of this embodiment is used to implement the foregoing storage statistics method of the AI platform; therefore, for the specific implementation of the apparatus, reference may be made to the foregoing description of the storage statistics method of the AI platform, which is not repeated herein.
In addition, the present application also provides a storage statistics device of an AI platform, as shown in fig. 5, including:
the memory 100: for storing a computer program;
the processor 200: for executing the computer program to implement the storage statistics method of the AI platform as described above.
Finally, the present application provides a computer-readable storage medium having stored thereon a computer program for implementing the storage statistics method of the AI platform as described above when executed by a processor.
The embodiments are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same or similar parts among the embodiments are referred to each other. The device disclosed by the embodiment corresponds to the method disclosed by the embodiment, so that the description is simple, and the relevant points can be referred to the method part for description.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in Random Access Memory (RAM), memory, Read Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The solutions provided in the present application have been described in detail above. Specific examples are used herein to explain the principles and implementations of the present application, and the above description of the embodiments is only intended to help understand the method of the present application and its core idea. Meanwhile, for those skilled in the art, there may be variations in the specific embodiments and the application scope according to the idea of the present application. In summary, the content of this specification should not be construed as limiting the present application.

Claims (10)

1. A storage statistics method for an AI platform, characterized in that the method is implemented based on a storage statistics microservice independent of a platform service module and comprises the following steps:
monitoring a user home directory of the platform service module by using a file listener, and issuing a storage statistics task when a file change is detected;
and calling a linux kernel function to perform storage statistics when the storage statistics task is received, so as to obtain a storage statistics result.
2. The method according to claim 1, wherein the calling a linux kernel function to perform storage statistics to obtain a storage statistics result comprises:
calling a linux kernel function to read the file stream according to a preset buffer size to obtain the storage statistics result.
3. The method according to claim 2, wherein before the calling a linux kernel function to read the file stream according to the preset buffer size to obtain the storage statistics result, the method further comprises:
setting the preset buffer size according to a target condition, wherein the target condition comprises the number of files in the user home directory.
4. The method according to claim 1, wherein the calling a linux kernel function to perform storage statistics to obtain a storage statistics result comprises:
calling the linux kernel function readdir in C language to perform storage statistics to obtain the storage statistics result.
5. The method according to claim 1, wherein after the calling a linux kernel function to perform storage statistics to obtain a storage statistics result, the method further comprises:
sending the storage statistics result to the platform service module in the form of a message.
6. The method according to claim 5, wherein after the sending the storage statistics result to the platform service module in the form of a message, the method further comprises:
updating the storage space size of the user home directory on a storage monitoring page according to the storage statistics result.
7. The method according to any one of claims 1 to 6, wherein before the monitoring the user home directory of the platform service module by using a file listener, the method further comprises:
configuring the CPU and memory resources of the storage statistics microservice according to prior knowledge.
8. A storage statistics apparatus for an AI platform, characterized in that the apparatus is implemented based on a storage statistics microservice independent of a platform service module and comprises:
a monitoring module, configured to monitor the user home directory of the platform service module by using a file listener and issue a storage statistics task when a file change is detected;
and a storage statistics module, configured to call a linux kernel function to perform storage statistics when the storage statistics task is received, so as to obtain a storage statistics result.
9. A storage statistics device of an AI platform, comprising:
a memory: for storing a computer program;
a processor: for executing the computer program to implement the storage statistics method of the AI platform according to any one of claims 1 to 7.
10. A computer-readable storage medium, characterized in that a computer program is stored thereon, which, when being executed by a processor, is configured to implement the storage statistics method of the AI platform according to any one of claims 1 to 7.
CN202111094443.8A 2021-09-17 2021-09-17 Storage statistics method, device and equipment of AI platform Active CN113835856B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111094443.8A CN113835856B (en) 2021-09-17 2021-09-17 Storage statistics method, device and equipment of AI platform

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111094443.8A CN113835856B (en) 2021-09-17 2021-09-17 Storage statistics method, device and equipment of AI platform

Publications (2)

Publication Number Publication Date
CN113835856A true CN113835856A (en) 2021-12-24
CN113835856B CN113835856B (en) 2023-07-14

Family

ID=78959936

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111094443.8A Active CN113835856B (en) 2021-09-17 2021-09-17 Storage statistics method, device and equipment of AI platform

Country Status (1)

Country Link
CN (1) CN113835856B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101588275A (en) * 2008-12-25 2009-11-25 深圳市宇沃德信息技术有限公司 Method for information monitoring of network application layer
CN111901377A (en) * 2020-06-28 2020-11-06 苏州浪潮智能科技有限公司 File transmission method, device, equipment and medium based on AI (Artificial Intelligence) training platform
CN113010479A (en) * 2021-03-18 2021-06-22 山东英信计算机技术有限公司 File management method, device and medium


Also Published As

Publication number Publication date
CN113835856B (en) 2023-07-14

Similar Documents

Publication Publication Date Title
US10572284B2 (en) Virtualization Congestion Control Framework for Modifying Execution of Applications on Virtual Machine Based on Mass Congestion Indicator in Host Computing System
CN109391505B (en) Network instance management method and related equipment
US20120016994A1 (en) Distributed system
US11044729B2 (en) Function scheduling method, device, and system
CN102264110B (en) Based on changing method and the system of wireless resource distribution database
US8706858B2 (en) Method and apparatus for controlling flow of management tasks to management system databases
EP3264723A1 (en) Method, related apparatus and system for processing service request
CN112218272A (en) Event subscription method, device and equipment
CN112383585A (en) Message processing system and method and electronic equipment
CN103379040A (en) Device and method for controlling concurrency number in high concurrency system
KR20140014285A (en) Traffic control method and traffic control apparatus
CN111538572A (en) Task processing method, device, scheduling server and medium
CN111885112A (en) Node service exception handling method, device, equipment and storage medium
CN112817772A (en) Data communication method, device, equipment and storage medium
CN113835856A (en) Storage statistical method, device and equipment for AI platform
CN111600738B (en) Method for optimizing timeout processing and storage medium
CN112463315A (en) Cluster task scheduling method and device and related components
CN112671664A (en) CDN scheduling system and method based on refined scheduling
CN106793093B (en) Service processing method and device
CN114666215B (en) Method, system, medium and electronic equipment for applying cross-cluster elastic expansion
US10747632B2 (en) Data redundancy and allocation system
CN109284188B (en) Buffer array maintenance method, device, terminal and readable medium
CN110401708B (en) Session processing system and method based on server load state
CN114237896A (en) Distributed node resource dynamic scheduling method and device
CN111858060A (en) Resource dynamic adjustment method and device for high-performance computing cluster

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant