CN113835856B - Storage statistics method, device and equipment of AI platform

Storage statistics method, device and equipment of AI platform

Info

Publication number
CN113835856B
Authority
CN
China
Prior art keywords
storage
storage statistics
statistics
platform
file
Prior art date
Legal status
Active
Application number
CN202111094443.8A
Other languages
Chinese (zh)
Other versions
CN113835856A (en)
Inventor
郑玉会 (Zheng Yuhui)
Current Assignee
Suzhou Inspur Intelligent Technology Co Ltd
Original Assignee
Suzhou Inspur Intelligent Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Suzhou Inspur Intelligent Technology Co Ltd
Priority to CN202111094443.8A
Publication of CN113835856A
Application granted
Publication of CN113835856B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/48 Program initiating; Program switching, e.g. by interrupt
    • G06F 9/4806 Task transfer initiation or dispatching
    • G06F 9/4843 Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F 9/4881 Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F 16/24 Querying
    • G06F 16/245 Query processing
    • G06F 16/2458 Special types of queries, e.g. statistical queries, fuzzy queries or distributed queries
    • G06F 16/2462 Approximate or statistical queries
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F 9/5011 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
    • G06F 9/5016 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals the resource being the memory
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F 9/5027 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Fuzzy Systems (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Debugging And Monitoring (AREA)

Abstract

The application provides a storage statistics method for an AI platform. The method decouples storage statistics from the AI platform's core services to solve the problem of resource preemption; it uses a file monitor to actively watch user home directories for file changes and triggers a storage statistics task only when a change is detected, avoiding the idle work of periodically triggered statistics tasks; and it calls a Linux kernel function to perform the statistics, which reduces switching between user mode and kernel mode, lowers CPU and memory usage, and improves the efficiency of storage statistics. The application also provides a storage statistics device, equipment, and a computer-readable storage medium for the AI platform, whose technical effects correspond to those of the method.

Description

Storage statistics method, device and equipment of AI platform
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a storage statistics method, apparatus, device and computer readable storage medium for an AI platform.
Background
With the wide adoption of AI (Artificial Intelligence) training platforms, enterprise-level applications and user traffic are growing steadily, and the demands on a platform's storage statistics performance keep rising. The storage statistics for user disk space in existing AI platforms can hardly meet these demands. How to count the disk space of all users on an AI platform quickly, efficiently, and stably, performing the statistics at minimum resource cost while ensuring that other service modules keep running normally, is a problem that urgently needs to be solved.
The storage statistics module of current AI training platforms uses a timed-task approach: the timed task fires at a fixed frequency and recomputes the storage space of all user home directories on the AI platform, regardless of whether the underlying home directories have actually changed. This scheme relies on the sizeOf method (a Java method in the Apache FileUtils tool class) of the apache commons-io package, which traverses each directory level by level and accumulates each file's size. When the amount of files reaches the TB level and above, the disk I/O overhead of this method is so large that the service stalls or even fails.
Another approach rewrites the sizeOf method on top of apache commons-io, creating a new thread for each folder and accumulating file sizes in a multithreaded way. This improves efficiency, but once the amount of files exceeds the TB level it consumes excessive CPU and memory, can drag down and crash the whole service, and leaves the entire service module in an abnormal state.
In summary, in the traditional scheme the storage statistics are coupled with the core business of the AI platform, so preemption of service resources causes services to stall and stability suffers; the conventional Java sizeOf method consumes large amounts of CPU and memory; and the periodically triggered storage statistics tasks perform a great deal of idle work. How to overcome these defects is a technical problem to be solved by those skilled in the art.
Disclosure of Invention
The aim of the invention is to provide a storage statistics method, device, and equipment for an AI platform, and a computer-readable storage medium, to solve the problem that the traditional storage statistics scheme preempts the resources of the AI platform's core business and degrades service performance. The specific scheme is as follows:
In a first aspect, the present application provides a storage statistics method for an AI platform, implemented on the basis of a storage statistics micro-service independent of the platform service module, including:
monitoring a user home directory of the platform service module with a file monitor, and issuing a storage statistics task when a file change is detected;
and, when the storage statistics task is received, calling a Linux kernel function to perform storage statistics and obtain a storage statistics result.
Optionally, the calling a Linux kernel function to perform storage statistics and obtain a storage statistics result includes:
calling a Linux kernel function to read the file stream according to a preset buffer size, to obtain the storage statistics result.
Optionally, before the calling a Linux kernel function to read the file stream according to a preset buffer size to obtain the storage statistics result, the method further includes:
setting the preset buffer size according to a target condition, where the target condition includes the number of files in the user home directory.
Optionally, the calling a Linux kernel function to perform storage statistics and obtain a storage statistics result includes:
calling the Linux kernel function readdir in the C language to perform storage statistics and obtain the storage statistics result.
Optionally, after the calling a Linux kernel function to perform storage statistics, the method further includes:
sending the storage statistics result to the platform service module in the form of a message.
Optionally, after the sending the storage statistics result to the platform service module in the form of a message, the method further includes:
updating the storage space size of the user home directory on a storage monitoring page according to the storage statistics result.
Optionally, before the monitoring a user home directory of the platform service module with a file monitor, the method further includes:
configuring CPU and memory resources of the storage statistics micro-service according to a prior condition.
In a second aspect, the present application provides a storage statistics device for an AI platform, implemented on the basis of a storage statistics micro-service independent of the platform service module, including:
a monitoring module, configured to monitor a user home directory of the platform service module with a file monitor and issue a storage statistics task when a file change is detected;
and a storage statistics module, configured to call a Linux kernel function to perform storage statistics when the storage statistics task is received, to obtain a storage statistics result.
In a third aspect, the present application provides a storage statistics device of an AI platform, including:
a memory: for storing a computer program;
a processor: for executing the computer program to implement the storage statistics method of the AI platform as described above.
In a fourth aspect, the present application provides a computer readable storage medium having stored thereon a computer program for implementing the storage statistics method of the AI platform as described above when executed by a processor.
The storage statistics method of the AI platform provided by the present application is implemented on the basis of a storage statistics micro-service independent of the platform service module and includes the following steps: monitoring a user home directory of the platform service module with a file monitor, and issuing a storage statistics task when a file change is detected; and, when the storage statistics task is received, calling a Linux kernel function to perform storage statistics and obtain a storage statistics result. It can be seen that the method decouples the storage statistics from the core business of the AI platform, solving the problem of resource preemption. In addition, the method uses a file monitor to actively watch the user home directories for file changes and triggers a storage statistics task only once when a change is detected, avoiding the idle work of periodically triggered statistics tasks. Finally, the method calls a Linux kernel function to perform the storage statistics, which reduces switching between user mode and kernel mode, lowers CPU and memory usage, and improves the efficiency of the storage statistics.
In addition, the application further provides a storage statistics device, equipment, and a computer-readable storage medium for the AI platform, whose technical effects correspond to those of the method and are not repeated here.
Drawings
For a clearer description of embodiments of the present application or of the prior art, the drawings that are used in the description of the embodiments or of the prior art will be briefly described, it being apparent that the drawings in the description that follow are only some embodiments of the present application, and that other drawings may be obtained from these drawings by a person of ordinary skill in the art without inventive effort.
FIG. 1 is a flowchart of a first embodiment of a storage statistics method for an AI platform provided in the present application;
FIG. 2 is a schematic diagram of the internal implementation of the storage statistics service in a second embodiment of the storage statistics method of the AI platform provided in the present application;
FIG. 3 is a flowchart of an AI platform storage statistics scheme of a second embodiment of the storage statistics method of the AI platform provided in the present application;
FIG. 4 is a schematic diagram of an embodiment of a storage statistics device of the AI platform provided in the present application;
FIG. 5 is a schematic diagram of an embodiment of a storage statistics device of the AI platform provided in the present application.
Detailed Description
To provide a better understanding of the present application, the present application is further described in detail below with reference to the drawings and specific embodiments. It is apparent that the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art from the present disclosure without inventive effort fall within the scope of the present disclosure.
To address the problems of the traditional scheme, namely that the storage statistics are coupled with the core business, that preemption of service resources causes services to stall and stability to suffer, that the conventional Java sizeOf method consumes large amounts of CPU and memory, and that the periodically triggered storage statistics tasks perform a great deal of idle work, the present application provides a storage statistics method, device, equipment, and computer-readable storage medium for an AI platform, which improve the performance, storage statistics efficiency, and stability of the whole AI platform.
Referring to FIG. 1, a first embodiment of the storage statistics method of the AI platform provided in the present application is described below. This embodiment is implemented on the basis of a storage statistics micro-service independent of the platform service module and includes:
S11, monitoring a user home directory of the platform service module with a file monitor, and issuing a storage statistics task when a file change is detected;
and S12, when the storage statistics task is received, calling a Linux kernel function to perform storage statistics to obtain a storage statistics result.
In this embodiment, the storage statistics are decoupled from the AI platform service module and run as a new micro-service. After the storage statistics micro-service is started, the implementation mainly consists of two parts: file monitoring and storage statistics.
Specifically, a file monitor (listener) is used to watch the home directories of all users for file changes. Tasks are not issued unconditionally: a storage statistics task is issued only once, when the file monitor detects a file change. The storage statistics are implemented by calling a Linux kernel function to perform the statistics and obtain a storage statistics result; specifically, the code is written in C and calls the Linux kernel function readdir() to do the statistics.
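The patent does not name a specific monitoring mechanism, so the following is only a minimal sketch of such a file monitor, assuming the Linux inotify API is used as the listener. The watched path, the event mask, and the trigger_storage_statistics() helper are illustrative assumptions rather than part of the original disclosure; note also that inotify watches a single directory, so a full implementation would add a watch per user home directory (and subdirectory) as needed.

```c
/* Minimal file-monitor sketch (assumption: inotify is the listener).
 * Watches one user home directory and triggers a single storage statistics
 * task per detected change, instead of running statistics on a timer. */
#include <stdio.h>
#include <unistd.h>
#include <sys/inotify.h>

/* Hypothetical hook: in the patented scheme this would issue the storage statistics task. */
static void trigger_storage_statistics(const char *dir)
{
    printf("file change detected, issuing storage statistics task for %s\n", dir);
}

int main(void)
{
    const char *watched_dir = "/home/user1";   /* illustrative user home directory */
    char buf[4096];                            /* raw event buffer; events are not parsed here */

    int fd = inotify_init1(IN_CLOEXEC);
    if (fd < 0) { perror("inotify_init1"); return 1; }

    /* Watch for file creation, modification, deletion and moves under the home directory. */
    int wd = inotify_add_watch(fd, watched_dir,
                               IN_CREATE | IN_MODIFY | IN_DELETE | IN_MOVED_TO | IN_MOVED_FROM);
    if (wd < 0) { perror("inotify_add_watch"); return 1; }

    for (;;) {
        ssize_t len = read(fd, buf, sizeof(buf));  /* blocks until at least one event arrives */
        if (len <= 0)
            break;
        /* One read may return several events; any of them means the directory changed,
         * so exactly one statistics task is issued for this batch of changes. */
        trigger_storage_statistics(watched_dir);
    }

    close(fd);
    return 0;
}
```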
On this basis, the buffer used for each file read can be configured, which avoids the problem of the traditional Java sizeOf implementation creating a large number of file objects and therefore occupying a large amount of memory or even overflowing it. Specifically, a Linux kernel function is called to read the file stream according to the preset buffer size to obtain the storage statistics result. After the storage statistics result is obtained, a notification message (notify) can further be sent to the service module of the AI platform, and the storage space size of each user home directory is updated on the storage monitoring page.
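As a concrete illustration of the statistics step, the C sketch below walks a user home directory with opendir()/readdir() and lstat() and accumulates file sizes, which matches the readdir-based approach described above in simplified form. The buffer configuration and the notify message are only indicated by comments, and the path and function names are illustrative, not taken from the patent.

```c
/* Simplified sketch of the storage statistics step: readdir()-based traversal
 * that sums file sizes under a user home directory. Error handling is minimal. */
#include <stdio.h>
#include <string.h>
#include <dirent.h>
#include <sys/stat.h>
#include <stdint.h>
#include <limits.h>

static uint64_t dir_size(const char *path)
{
    uint64_t total = 0;
    DIR *dir = opendir(path);
    if (dir == NULL)
        return 0;

    struct dirent *entry;
    while ((entry = readdir(dir)) != NULL) {
        if (strcmp(entry->d_name, ".") == 0 || strcmp(entry->d_name, "..") == 0)
            continue;

        char child[PATH_MAX];
        snprintf(child, sizeof(child), "%s/%s", path, entry->d_name);

        struct stat st;
        if (lstat(child, &st) != 0)
            continue;

        if (S_ISDIR(st.st_mode))
            total += dir_size(child);       /* recurse into subdirectories */
        else
            total += (uint64_t)st.st_size;  /* accumulate file (and symlink) sizes */
    }
    closedir(dir);
    return total;
}

int main(void)
{
    const char *home = "/home/user1";  /* illustrative user home directory */
    uint64_t bytes = dir_size(home);
    /* In the patented scheme the result would now be sent to the platform
     * service module as a notify message; here it is simply printed. */
    printf("storage used by %s: %llu bytes\n", home, (unsigned long long)bytes);
    return 0;
}
```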
The storage statistics method of the AI platform provided by this embodiment is implemented on the basis of a storage statistics micro-service independent of the platform service module and has the following advantages:
1. The storage statistics are decoupled from the core business micro-services, which solves the problem of resource preemption, and the user can configure the resource proportion according to prior conditions.
2. Storage statistics tasks are no longer issued on a fixed schedule; instead, a file monitor actively watches each user's home directory for file changes, and a storage statistics task is triggered once only when a file change is detected.
3. The storage statistics scheme supports dynamic configuration of the buffer, so the user can flexibly set the buffer value according to prior knowledge such as the number of files. By calling a Linux kernel function from C code to read the file stream, the scheme avoids the problem of the traditional Java sizeOf approach, which creates a large number of file objects and consumes CPU and memory resources.
A second embodiment of the storage statistics method of the AI platform provided in the present application is described in detail below.
In this embodiment, the storage statistics are decoupled from the platform service module iresource, and the user can configure the CPU and memory resources of the storage statistics micro-service according to prior conditions. Referring to FIG. 2 and FIG. 3, the second embodiment is implemented on the basis of a storage statistics micro-service independent of the platform service module and includes:
S21, configuring CPU and memory resources of the storage statistics micro-service according to prior conditions;
S22, setting a preset buffer size according to a target condition, where the target condition includes the number of files in the user home directory;
S23, starting the storage statistics micro-service, monitoring a user home directory of the platform service module with a file monitor, and issuing a storage statistics task when a file change is detected;
S24, when the storage statistics task is received, calling the Linux kernel function readdir in the C language and reading the file stream according to the preset buffer size to obtain a storage statistics result;
S25, sending the storage statistics result to the platform service module in the form of a message;
and S26, updating the storage space size of the user home directory on the storage monitoring page according to the storage statistics result.
Therefore, the storage statistics method of the AI platform provided by this embodiment has at least the following advantages:
1. The storage statistics are decoupled from the AI platform service module and formed into an independent micro-service, and the CPU and memory resources of the storage statistics micro-service and of the AI platform service module are allocated reasonably according to prior conditions. This solves the problem of the original storage statistics approach, where an excessively high CPU and memory share caused service exceptions, and prevents the storage statistics from occupying the service module's resources to the point that the AI platform cannot be used normally.
2. The file monitor watches the files under each user's home directory, and a storage statistics task is triggered once when a file change is detected. This replaces the traditional mode of issuing statistics tasks on a fixed schedule and avoids the many useless statistics runs performed while the files are unchanged; with timed tasks, a large number of user home directories have not changed when the task is issued, yet the useless statistics work is repeated anyway. Specifically, a Linux kernel function is called with the buffer value configured by the user, the file stream is read, and the size is counted.
3. The storage statistics method itself is improved. The traditional sizeOf method has to create a large number of file objects, which inevitably requires a large amount of memory for object creation and recycling and can even cause memory overflow and service exceptions. In this embodiment the Linux kernel function is used directly to read the file stream, and the buffer used for each read can be configured: the Linux kernel function readdir() is called, and after the statistics are completed the storage size is notified to the service module of the AI platform in the form of a message. The storage statistics method also supports dynamic configuration of the buffer (32K by default), and the user can increase the buffer value according to prior conditions such as the number of files in the current home directory, which reduces switching between user mode and kernel mode, lowers CPU and memory usage, and improves the efficiency of the storage statistics. Finally, the service module updates the stored value and displays it on the storage monitoring page.
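One plausible reading of the "preset buffer size" is the size of the buffer that the directory-reading system call fills on each user/kernel transition: glibc's readdir() is itself built on the getdents64 system call, and a larger buffer means fewer transitions. The sketch below makes that relationship explicit under this assumption; the 32 KB default follows the text above, while the struct layout and helper names are assembled here for illustration rather than taken from the patent.

```c
/* Sketch of buffered directory reading via the getdents64 system call
 * (the kernel interface underneath readdir()). A configurable buffer,
 * 32 KB by default as in the embodiment above, controls how many
 * directory entries are fetched per user/kernel switch. */
#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <stdint.h>
#include <sys/syscall.h>

struct linux_dirent64 {            /* record layout returned by getdents64 */
    uint64_t       d_ino;
    int64_t        d_off;
    unsigned short d_reclen;
    unsigned char  d_type;
    char           d_name[];
};

/* Counts directory entries in one directory, reading buf_size bytes per syscall. */
static long count_entries(const char *path, size_t buf_size)
{
    int fd = open(path, O_RDONLY | O_DIRECTORY);
    if (fd < 0)
        return -1;

    char *buf = malloc(buf_size);
    if (buf == NULL) { close(fd); return -1; }

    long entries = 0;
    for (;;) {
        long n = syscall(SYS_getdents64, fd, buf, buf_size);
        if (n <= 0)                             /* 0 = end of directory, <0 = error */
            break;
        for (long off = 0; off < n; ) {         /* walk the packed records in the buffer */
            struct linux_dirent64 *d = (struct linux_dirent64 *)(buf + off);
            entries++;
            off += d->d_reclen;
        }
    }
    free(buf);
    close(fd);
    return entries;
}

int main(void)
{
    size_t buffer_size = 32 * 1024;  /* default 32K; tunable from prior knowledge of the file count */
    long n = count_entries("/home/user1", buffer_size);
    printf("entries read with a %zu-byte buffer: %ld\n", buffer_size, n);
    return 0;
}
```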
The following describes a storage statistics device of an AI platform provided in an embodiment of the present application, where the storage statistics device of the AI platform described below and the storage statistics method of the AI platform described above may be referred to correspondingly with each other.
As shown in fig. 4, the storage statistics device of the AI platform of the present embodiment, based on a storage statistics micro-service implementation independent of a platform service module, includes:
the monitoring module 41 is configured to monitor a user home directory of the platform service module with a file monitor and issue a storage statistics task when a file change is detected;
and the storage statistics module 42 is configured to call a Linux kernel function to perform storage statistics when the storage statistics task is received, to obtain a storage statistics result.
The storage statistics device of the AI platform of this embodiment is used to implement the aforementioned storage statistics method of the AI platform, so that the specific implementation in this device can be found in the foregoing example section of the storage statistics method of the AI platform, and will not be described herein.
In addition, the application further provides a storage statistics device of the AI platform, as shown in fig. 5, including:
memory 100: for storing a computer program;
processor 200: for executing the computer program to implement the storage statistics method of the AI platform as described above.
Finally, the present application provides a computer readable storage medium having stored thereon a computer program for implementing the storage statistics method of the AI platform as described above when executed by a processor.
In this specification, each embodiment is described in a progressive manner, and each embodiment is mainly described in a different point from other embodiments, so that the same or similar parts between the embodiments are referred to each other. For the device disclosed in the embodiment, since it corresponds to the method disclosed in the embodiment, the description is relatively simple, and the relevant points refer to the description of the method section.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. The software modules may be disposed in random access memory (RAM), memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The principles and embodiments of the present application have been described herein with reference to specific examples, which are provided only to help understand the method and core ideas of the present application. Meanwhile, those skilled in the art may make changes to the specific embodiments and the scope of application according to the ideas of the present application; in view of the above, the contents of this specification should not be construed as limiting the present application.

Claims (10)

1. A storage statistics method for an AI platform, characterized in that the method is implemented on the basis of a storage statistics micro-service independent of a platform service module, and comprises:
monitoring a user home directory of the platform service module with a file monitor, and issuing a storage statistics task when a file change is detected;
and, when the storage statistics task is received, calling a Linux kernel function to perform storage statistics and obtain a storage statistics result.
2. The method of claim 1, wherein the calling a Linux kernel function to perform storage statistics and obtain a storage statistics result comprises:
calling a Linux kernel function to read the file stream according to a preset buffer size, to obtain the storage statistics result.
3. The method of claim 2, wherein before the calling a Linux kernel function to read the file stream according to a preset buffer size to obtain the storage statistics result, the method further comprises:
setting the preset buffer size according to a target condition, wherein the target condition comprises the number of files in the user home directory.
4. The method of claim 1, wherein the calling a Linux kernel function to perform storage statistics and obtain a storage statistics result comprises:
calling the Linux kernel function readdir in the C language to perform storage statistics and obtain the storage statistics result.
5. The method of claim 1, further comprising, after the calling a Linux kernel function to perform storage statistics and obtain a storage statistics result:
sending the storage statistics result to the platform service module in the form of a message.
6. The method of claim 5, further comprising, after the sending the storage statistics result to the platform service module in the form of a message:
updating the storage space size of the user home directory on a storage monitoring page according to the storage statistics result.
7. The method of any one of claims 1 to 6, further comprising, before the monitoring a user home directory of the platform service module with a file monitor:
configuring CPU and memory resources of the storage statistics micro-service according to a prior condition.
8. A storage statistics device of an AI platform, characterized in that the device is implemented on the basis of a storage statistics micro-service independent of a platform service module, and comprises:
a monitoring module, configured to monitor a user home directory of the platform service module with a file monitor and issue a storage statistics task when a file change is detected;
and a storage statistics module, configured to call a Linux kernel function to perform storage statistics when the storage statistics task is received, to obtain a storage statistics result.
9. A storage statistics device of an AI platform, comprising:
a memory: for storing a computer program;
a processor: for executing the computer program to implement the storage statistics method of the AI platform of any of claims 1-7.
10. A computer readable storage medium, characterized in that the computer readable storage medium has stored thereon a computer program for implementing the storage statistics method of the AI platform of any of claims 1 to 7 when executed by a processor.
CN202111094443.8A 2021-09-17 2021-09-17 Storage statistics method, device and equipment of AI platform Active CN113835856B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111094443.8A CN113835856B (en) 2021-09-17 2021-09-17 Storage statistics method, device and equipment of AI platform

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111094443.8A CN113835856B (en) 2021-09-17 2021-09-17 Storage statistics method, device and equipment of AI platform

Publications (2)

Publication Number Publication Date
CN113835856A CN113835856A (en) 2021-12-24
CN113835856B (en) 2023-07-14

Family

ID=78959936

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111094443.8A Active CN113835856B (en) 2021-09-17 2021-09-17 Storage statistics method, device and equipment of AI platform

Country Status (1)

Country Link
CN (1) CN113835856B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101588275A (en) * 2008-12-25 2009-11-25 深圳市宇沃德信息技术有限公司 Method for information monitoring of network application layer
CN111901377A (en) * 2020-06-28 2020-11-06 苏州浪潮智能科技有限公司 File transmission method, device, equipment and medium based on AI (Artificial Intelligence) training platform
CN113010479A (en) * 2021-03-18 2021-06-22 山东英信计算机技术有限公司 File management method, device and medium

Also Published As

Publication number Publication date
CN113835856A (en) 2021-12-24


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant