CN114880084A - Request distribution method, device, equipment and computer readable storage medium - Google Patents


Info

Publication number
CN114880084A
Authority
CN
China
Prior art keywords
request
thread
read
task
type identifier
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210303625.XA
Other languages
Chinese (zh)
Inventor
丁宇光
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo Netapp Technology Ltd
Original Assignee
Lenovo Netapp Technology Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lenovo Netapp Technology Ltd filed Critical Lenovo Netapp Technology Ltd
Priority to CN202210303625.XA
Publication of CN114880084A
Legal status: Pending


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/48 Program initiating; Program switching, e.g. by interrupt
    • G06F 9/4806 Task transfer initiation or dispatching
    • G06F 9/4843 Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F 9/4881 Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/10 File systems; File servers
    • G06F 16/18 File system types
    • G06F 16/182 Distributed file systems
    • G06F 16/1824 Distributed file systems implemented using Network-attached Storage [NAS] architecture
    • G06F 16/183 Provision of network file services by network file servers, e.g. by using NFS, CIFS

Abstract

The application provides a request distribution method, a request distribution device, equipment and a computer-readable storage medium, wherein the request distribution method comprises the following steps: a server obtains a task request sent by a client, wherein the task request carries a request type identifier and a file handle identifier; a target thread for processing the task request is selected, according to the request type identifier and the file handle identifier, from the thread type corresponding to the request type identifier; and the task request is distributed to a task queue corresponding to the target thread so as to process the task request. According to the embodiments of the application, task requests are distributed to worker threads grouped by request type, so that exceptions in different types of requests are isolated from one another, achieving the effect of improving the availability of the worker threads.

Description

Request distribution method, device, equipment and computer readable storage medium
Technical Field
The present application relates to the field of network file system technology, and relates to, but is not limited to, a request distribution method, apparatus, device, and computer-readable storage medium.
Background
A Network File System (NFS) is one of the current mainstream file systems for sharing files across heterogeneous platforms, and can support file sharing between different types of systems over a network. By using NFS, users and programs can access files on remote systems as if they were local files, enabling each computer node to conveniently use resources on the network as well as local resources.
When multiple threads of a conventional NFS server execute requests from an NFS client, the various types of requests are distributed indiscriminately to all worker threads. When a certain type of request is abnormally blocked in the underlying file system, all worker threads eventually become blocked, and the NFS server can no longer respond to any request.
Disclosure of Invention
In view of this, embodiments of the present application provide a request distribution method, apparatus, device, and computer-readable storage medium.
The technical scheme of the embodiment of the application is realized as follows:
an embodiment of the present application provides a request distribution method, including:
acquiring a task request sent by a client, wherein the task request carries a request type identifier and a file handle identifier;
selecting a target thread for processing the task request from thread types corresponding to the request type identifier according to the request type identifier and the file handle identifier;
and distributing the task request to a task queue corresponding to the target thread so as to process the task request.
In some embodiments, the method further comprises:
analyzing the task request, and extracting a request type identifier and a file handle identifier carried in the task request;
the request type identification comprises a read type identification, a write type identification and a metadata type identification.
In some embodiments, the selecting, according to the request type identifier and the file handle identifier, a target thread for processing the task request from thread types corresponding to the request type identifier includes:
when the request type identifier is a read type identifier, acquiring a read management array, wherein the read management array is used for managing a read working thread, and the read working thread is used for processing a task request of a read type;
judging whether a file handle corresponding to the read request of the client side is bound with a read working thread of the server side;
if the file handle corresponding to the read request of the client is bound with the read working thread of the server, searching the read working thread with the same file handle identification from the read management array to obtain a search result;
and when the search result represents that the search is successful, determining the read working thread with the same file handle identifier as the target thread.
In some embodiments, the method further comprises:
when a file handle corresponding to the read request of the client is not bound with the read working thread of the server, or when the search result represents that the search fails, acquiring the number of distributed tasks of each read working thread in the read management array;
and determining the read working thread with the least number of the distributed tasks as a target thread.
In some embodiments, the selecting, according to the request type identifier and the file handle identifier, a target thread for processing the task request from thread types corresponding to the request type identifier includes:
when the request type identifier is a write type identifier, acquiring a write management array, wherein the write management array is used for managing a write working thread, and the write working thread is used for processing a task request of the write type;
acquiring the number of distributed tasks of each write work thread in the write management array;
and determining the write work thread with the least number of the distributed tasks as a target thread.
In some embodiments, the selecting, according to the request type identifier and the file handle identifier, a target thread for processing the task request from thread types corresponding to the request type identifier further includes:
when the request type identifier is a metadata type identifier, acquiring a metadata management array, wherein the metadata management array is used for managing a metadata working thread, and the metadata working thread is used for processing a task request of the metadata type;
acquiring the number of distributed tasks of each metadata working thread in the metadata management array;
and determining the metadata work thread with the least number of the distributed tasks as a target thread.
In some embodiments, the method further comprises:
updating the number of the tasks distributed to the target thread;
wherein the updating the number of tasks allocated to the target thread includes:
when the target thread completes one task request every time, reducing the number of the distributed tasks of the target thread by one;
and when the target thread distributes one task request, adding one to the number of the tasks distributed by the target thread.
An embodiment of the present application provides a request distribution apparatus, including:
the system comprises an acquisition module, a processing module and a processing module, wherein the acquisition module is used for acquiring a task request sent by a client, and the task request carries a request type identifier and a file handle identifier;
a selecting module, configured to select a target thread for processing the task request from thread types corresponding to the request type identifier according to the request type identifier and the file handle identifier;
and the distribution module is used for distributing the task request to a task queue corresponding to the target thread so as to process the task request.
An embodiment of the present application provides an electronic device, including:
a memory for storing executable instructions;
and the processor is used for realizing the request distribution method provided by the embodiment of the application when executing the executable instructions stored in the memory.
The embodiment of the present application provides a computer-readable storage medium, which stores executable instructions for causing a processor to implement the request distribution method provided by the embodiment of the present application when executed.
The embodiment of the application has the following beneficial effects:
according to the request distribution method provided by the embodiment of the application, a server side obtains a task request sent by a client side, wherein the task request carries a request type identifier and a file handle identifier; selecting a target thread for processing the task request from thread types corresponding to the request type identifier according to the request type identifier and the file handle identifier; and distributing the task request to a task queue corresponding to the target thread so as to process the task request. By grouping the work threads of the NFS server according to the types of the request tasks, the abnormity of different types of requests can be isolated, and the effect of improving the usability of the work threads is achieved.
Drawings
In the drawings, which are not necessarily drawn to scale, like reference numerals may describe similar components in different views. The drawings illustrate generally, by way of example, but not by way of limitation, various embodiments discussed herein.
Fig. 1 is a schematic flow chart of an implementation of a request distribution method according to an embodiment of the present application;
fig. 2 is a schematic flow chart of an implementation of selecting a target thread when a request type identifier is a read type identifier according to an embodiment of the present application;
fig. 3 is a schematic flow chart of an implementation of selecting a target thread when a request type identifier is a write type identifier according to an embodiment of the present application;
fig. 4 is a schematic flow chart of an implementation of selecting a target thread when a request type identifier is a metadata type identifier according to an embodiment of the present application;
fig. 5 is a schematic flowchart of another implementation of a request distribution method according to an embodiment of the present application;
fig. 6 is a schematic flow chart of an implementation process of recording different worker_index values into elements of the different management arrays of the corresponding types according to the embodiment of the present application;
fig. 7 is a schematic diagram of the thread selection flow select_worker_thread for obtaining a thread's worker_index according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of a request distribution apparatus according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the objectives, technical solutions and advantages of the present application clearer, the present application will be described in further detail with reference to the attached drawings, the described embodiments should not be considered as limiting the present application, and all other embodiments obtained by a person of ordinary skill in the art without creative efforts shall fall within the protection scope of the present application.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and may be combined with each other without conflict.
Where descriptions such as "first/second/third" appear in this application, the terms "first/second/third" merely distinguish similar objects and do not represent a specific ordering of the objects; it should be understood that "first/second/third" may be interchanged in a specific order or sequence where permitted, so that the embodiments of the application described herein can be implemented in an order other than that illustrated or described herein.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the present application only and is not intended to be limiting of the application.
Before the embodiments of the present application are explained in detail, terms and expressions mentioned in the embodiments of the present application will be explained, and the terms and expressions mentioned in the embodiments of the present application will be used for the following explanation.
1) Network File System (NFS): a UNIX presentation-layer protocol developed by Sun Microsystems; it is a network abstraction over a file system that allows a remote client to access files over a network in a manner similar to a local file system.
2) Virtual File System (VFS): created by Sun Microsystems when defining NFS, a distributed file system for network environments; it is an interface that allows file systems other than the operating system's own to be used. The VFS is an interface layer between the physical file systems and the services that use them; it abstracts the details of each Linux file system so that different file systems appear the same to the Linux kernel and to the other processes running in the system.
3) File Handle (FH): to read data from a file, an application first calls an operating system function, passing the file name and selecting a path to the file, to open the file; the function returns a sequence number, i.e., the file handle. In this embodiment, the file handle is the unique identifier used between the NFS client and the server to identify a file.
4) Linux kernel: kernel of Linux operating system.
5) NFS server: the server program for providing the NFS service in the embodiment of the present application refers to an NFS service process running in a Linux operating system user state.
6) NFS client: a client program accessing the NFS.
7) Metadata: data about data, mainly information describing data attributes, used to support functions such as indicating storage locations, historical data, resource searching, and file recording.
In order to better understand the embodiments of the present application, a description is first given of the related-art distribution method that distributes requests to multiple worker threads when an NFS server in the Linux user state executes NFS requests using multithreading, together with the disadvantages of that distribution method.
When multiple threads of a conventional NFS server execute requests from an NFS client, the various types of requests are distributed indiscriminately to all worker threads. When a certain type of request is abnormally blocked in the underlying file system, all worker threads eventually become blocked, and the NFS server can no longer respond to any request. In addition, the file system under the VFS may cache read data separately for different threads; if read requests are distributed to worker threads indiscriminately, the NFS server cannot guarantee that each read request from the client uses the cache of data already read into the VFS.
It can be seen that, when the related art uses multithreading to execute NFS requests, the main drawbacks are: 1) if a certain request is abnormally blocked in the NFS server, the NFS client is caused to retransmit, and all threads may become occupied within a short time, leaving no idle thread to execute other requests; 2) when the number of requests is large and there is no idle thread, more requests may be distributed to a certain thread, and the number of requests in each thread's request queue becomes unbalanced, so that a large number of requests cannot be executed concurrently; 3) when the NFS server accesses the Linux kernel file system, the VFS provides a read-ahead cache for files being read to improve read performance, but this cache is isolated between threads, so when different read requests for the same file are executed in different threads, the existing read cache cannot be used, wasting performance and memory space.
Based on the above defects, the embodiments of the present application provide a method for implementing request distribution when an NFS request is executed using a multithreading technique. Fig. 1 is a schematic flow chart of an implementation of a request distribution method provided in an embodiment of the present application, and as shown in fig. 1, the method includes the following steps:
step S101, acquiring a task request sent by a client.
The method provided by the embodiment of the application can be executed by a network file system server, namely, an NFS server. The task request received by the server carries a request type identifier and a file handle identifier.
After a server side obtains a task request sent by a client side, the task request is analyzed and processed, and a request type identifier and a file handle identifier carried in the task request are extracted. Here, the request type identifier may include a read type identifier, a write type identifier, and a metadata type identifier.
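As a purely illustrative sketch of this mapping (read requests to the read type, write and commit requests to the write type, and everything else to the metadata type), the classification could be written in C as follows; the opcode constants, type names and function name are assumptions and are not taken from the application:
/* Illustrative operation codes; the application does not name concrete opcode
 * constants, so these values are placeholders for the sketch. */
enum nfs_op { OP_READ = 1, OP_WRITE = 2, OP_COMMIT = 3, OP_OTHER = 4 };

enum req_type { REQ_READ, REQ_WRITE, REQ_META };

/* Map an incoming NFS operation to one of the three request types:
 * read -> read group, write/commit -> write group, others -> metadata group. */
static enum req_type classify_request(enum nfs_op op)
{
    switch (op) {
    case OP_READ:
        return REQ_READ;
    case OP_WRITE:
    case OP_COMMIT:
        return REQ_WRITE;
    default:
        return REQ_META;
    }
}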
And step S102, selecting a target thread for processing the task request from the thread types corresponding to the request type identifier according to the request type identifier and the file handle identifier.
In the embodiment of the application, the server groups the worker threads into three types, read, write, and metadata, according to the type of the task request, so that each task request is distributed only to worker threads of the corresponding type for processing.
After receiving a task request, selecting a corresponding working thread group according to a request type identifier carried by the task request, then selecting a target thread from the selected working thread group according to the request type identifier and a file handle identifier of the task request, and processing the task request by using the target thread.
Step S103, distributing the task request to a task queue corresponding to the target thread to process the task request.
In a multi-thread multi-task concurrent network file system, a target thread may have a plurality of task requests to be processed, and the plurality of task requests to be processed may be sequentially processed according to the sequence of received requests. After the target thread is determined in step S102, a task queue of the target thread is obtained, and the task request is distributed to the task queue, so that the target thread processes the task request, and after the processing is completed, the task request is deleted from the task queue.
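A minimal sketch of what queuing the task request to the target thread might look like is given below; the queue layout, the pthread-based locking and the names are assumptions, since the application only states that the request is placed into the target thread's task queue and removed after processing:
#include <pthread.h>
#include <stddef.h>

/* Hypothetical per-worker task queue; the actual structure is not specified. */
struct task {
    struct task *next;
    void *request;                         /* parsed NFS task request */
};

struct task_queue {
    pthread_mutex_t lock;
    pthread_cond_t  not_empty;
    struct task *head, *tail;
};

/* Append the task request to the tail of the target thread's queue and wake
 * the worker; the worker deletes the task from the queue once it finishes. */
static void enqueue_task(struct task_queue *q, struct task *t)
{
    t->next = NULL;
    pthread_mutex_lock(&q->lock);
    if (q->tail)
        q->tail->next = t;
    else
        q->head = t;
    q->tail = t;
    pthread_cond_signal(&q->not_empty);
    pthread_mutex_unlock(&q->lock);
}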
According to the method provided by the embodiment of the application, a server obtains a task request sent by a client, wherein the task request carries a request type identifier and a file handle identifier; a target thread for processing the task request is selected from the thread type corresponding to the request type identifier according to the request type identifier and the file handle identifier; and the task request is distributed to a task queue corresponding to the target thread so as to process the task request. By grouping the worker threads of the NFS server according to the type of the requested task, exceptions in different types of requests can be isolated from one another, achieving the effect of improving the availability of the worker threads.
In some embodiments, when the request type identifier of the task request is a read type identifier, step S102 "in the embodiment shown in fig. 1, according to the request type identifier and the file handle identifier, selecting a target thread for processing the task request from thread types corresponding to the request type identifier" may be implemented by the steps shown in fig. 2:
step S201, a read management array is acquired.
In the embodiment of the application, the server divides all threads into three types according to the request type in advance, and uses arrays to maintain three management structures that provide the thread management function when requests are distributed.
Firstly, defining a distribution management data structure of a work thread pool:
struct worker {
    unsigned long worker_index;
    char fh[NFS_V4_FH_LEN];
    int32_t jobs_cnt;
};
worker_index: the unique identifier of a worker thread; its value range can represent the different thread types (read, write, metadata), and it is used to find the task queue of the specified worker thread during the distribution management process.
fh: stores the NFS file handle FH; it is compared with the file handle FH in a new task request so that different read requests with the same file handle are distributed to the same thread.
jobs_cnt: the number of assigned tasks in the thread's task queue, representing the load of the thread; a greater number indicates a heavier load on the thread.
The three management arrays are all arrays of struct worker type, and their functions are as follows:
struct worker read_workers[MAX_WORKERS_NUM]; read_workers manages all threads used for executing read requests.
struct worker meta_workers[MAX_WORKERS_NUM]; meta_workers manages all threads used for executing non-read, non-write, non-commit requests.
struct worker write_workers[MAX_WORKERS_NUM]; write_workers manages all threads used for executing write and commit requests.
The thread count of the three arrays is initialized to MAX_WORKERS_NUM; in an implementation, the thread counts of the three arrays may be configured through a configuration file, for example:
worker_meta_counts: the number of non-read-write threads (i.e., metadata threads);
worker_read_counts: the number of read threads;
worker_write_counts: the number of write threads;
The sum of the three is the total number of threads, and the NFS server starts all worker threads according to this total when the service is started.
In the embodiment of the present application, the purpose of the server initializing the distribution management arrays is to associate the worker thread pool created in the NFS server with the distribution management arrays, where the association identifier is the worker thread identifier worker_index.
According to worker_index, the server divides the threads into types and records them:
for example, threads with worker_index in the range [0, worker_meta_counts) are metadata threads;
threads in the range [worker_meta_counts, worker_meta_counts + worker_read_counts) are read threads;
threads in the range [worker_meta_counts + worker_read_counts, worker_meta_counts + worker_read_counts + worker_write_counts) are write threads.
The server records the different worker_index values into elements of the management arrays of the corresponding types.
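For illustration, the association between worker_index ranges and the three management arrays could be initialized as in the following sketch; the array size, the example counts and the function name are assumptions:
#include <string.h>

#define MAX_WORKERS_NUM 256                /* illustrative upper bound */

static struct worker read_workers[MAX_WORKERS_NUM];
static struct worker write_workers[MAX_WORKERS_NUM];
static struct worker meta_workers[MAX_WORKERS_NUM];

static int worker_meta_counts  = 8;        /* example configured values */
static int worker_read_counts  = 16;
static int worker_write_counts = 16;

/* Record each worker_index into an element of the management array matching
 * its type, following the index ranges described above (uses struct worker
 * as defined earlier). */
static void init_dispatch_arrays(void)
{
    int total = worker_meta_counts + worker_read_counts + worker_write_counts;

    for (int i = 0; i < total; i++) {
        struct worker *w;
        if (i < worker_meta_counts)
            w = &meta_workers[i];
        else if (i < worker_meta_counts + worker_read_counts)
            w = &read_workers[i - worker_meta_counts];
        else
            w = &write_workers[i - worker_meta_counts - worker_read_counts];

        w->worker_index = (unsigned long)i;
        w->jobs_cnt = 0;
        memset(w->fh, 0, sizeof(w->fh));
    }
}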
In the embodiment of the application, the read management array is used for managing a read working thread, and the read working thread is used for processing a task request of a read type. When the request type identifier of the task request received by the server is the read type identifier, the server acquires the read management array read_workers[] for managing the read request threads.
Step S202, judging whether the file handle corresponding to the read request of the client side is bound with the read working thread of the server side.
When the file handle corresponding to the read request of the client is bound to the read working thread of the server, indicating that the read type task request of the client has been processed before, then entering step S203; when the file handle corresponding to the read request of the client is not bound to the read working thread of the server, the process proceeds to step S206.
Step S203, searching a read working thread identical to the file handle identifier from the read management array, and obtaining a search result.
The server traverses the read worker threads included in read_workers[] to search for a read worker thread having the same file handle identifier as the current task request; when such a read worker thread exists among the read worker threads included in read_workers[], the search result is determined to be successful, and otherwise the search result is determined to be failed.
And step S204, judging whether the search result is characterized as the search success.
When the search result indicates that the search succeeded, there is a read worker thread among the read worker threads included in the read management array read_workers[] that has the same file handle identifier as the current task request; this read worker thread can be used as the target thread for processing the current task request, and the process proceeds to step S205. When the search result indicates that the search failed, there is no read worker thread in the read management array read_workers[] with the same file handle identifier as the current task request, and the process proceeds to step S206.
Step S205, determining the read working thread with the same file handle identification as the target thread.
After selecting the target thread according to this step, the process proceeds to step S103. The file handle of the read request is bound with the read working thread, the read working thread with the same file handle identification as the task request is used as a target thread to process the task request, different read requests of the same file are guaranteed to be distributed to the same thread, the VFS pre-read cache can be fully utilized, and the effects of improving the availability and the performance are achieved.
Step S206, acquiring the distributed task number of each read working thread in the read management array.
When the file handle corresponding to the read request of the client is not bound to a read worker thread of the server, or when the file handle corresponding to the read request of the client is bound to a read worker thread of the server but no read worker thread included in the read management array read_workers[] has the same file handle identifier as the current task request, the number of assigned tasks of each read worker thread in the read management array is acquired.
Step S207, determining the read worker thread with the least number of assigned tasks as a target thread.
The read worker thread with the least number of assigned tasks is determined as the target thread, which balances the number of task requests across the read worker threads; processing the task request with the target thread can make full use of the VFS read-ahead cache, achieving the effects of improving availability and performance.
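The read-thread selection of steps S202 to S207 could be sketched as follows; the function signature is an assumption, and rebinding the least-loaded worker's fh to the new handle is likewise an assumption, since the application only states that the least-loaded read worker is chosen when no binding matches:
#include <string.h>

/* Pick the read worker already bound to the same file handle if one exists;
 * otherwise pick the least-loaded read worker and bind the handle to it. */
static struct worker *select_read_worker(const char *fh, size_t fh_len)
{
    struct worker *least = &read_workers[0];

    for (int i = 0; i < worker_read_counts; i++) {
        struct worker *w = &read_workers[i];
        if (memcmp(w->fh, fh, fh_len) == 0)
            return w;                      /* same handle: reuse the VFS read-ahead cache */
        if (w->jobs_cnt < least->jobs_cnt)
            least = w;
    }
    memcpy(least->fh, fh, fh_len);         /* bind this handle to the chosen thread (assumption) */
    return least;
}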
In some embodiments, when the request type identifier of the task request is a write type identifier, step S102 "in the embodiment shown in fig. 1, according to the request type identifier and the file handle identifier, selecting a target thread for processing the task request from thread types corresponding to the request type identifier" may be implemented by the steps shown in fig. 3:
in step S301, a write management array is acquired.
Referring to the initialization explanation in step S201, when the request type identifier of the received task request is the write type identifier, the server obtains the write management array write_workers[] for managing the write request threads. Here, the write management array is used to manage write worker threads, which are used to process write-type task requests.
Step S302, acquiring the distributed task number of each write work thread in the write management array.
Step S303, determining the write worker thread with the least number of assigned tasks as the target thread.
When the request type identifier of the task request is the write type identifier, the server obtains the number of assigned tasks of each write worker thread in the write management array and selects the write worker thread with the least number of assigned tasks as the target thread. This balances the number of task requests across the write worker threads; processing the task request with the target thread can make full use of the VFS read-ahead cache, achieving the effect of improving availability and performance.
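The load-based choice reduces to scanning a management array for the element with the smallest jobs_cnt; a sketch is shown below (the helper name is illustrative), and the same helper applies to the metadata worker threads described next:
/* Return the worker with the fewest assigned tasks in a management array;
 * usable for both write_workers and meta_workers. */
static struct worker *select_least_loaded(struct worker *group, int count)
{
    struct worker *least = &group[0];

    for (int i = 1; i < count; i++)
        if (group[i].jobs_cnt < least->jobs_cnt)
            least = &group[i];
    return least;
}

/* Example: struct worker *target = select_least_loaded(write_workers, worker_write_counts); */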
In some embodiments, when the request type identifier of the task request is the metadata type identifier, step S102 "in the embodiment shown in fig. 1, according to the request type identifier and the file handle identifier, selecting a target thread for processing the task request from the thread types corresponding to the request type identifier" may be implemented by the steps shown in fig. 4:
step S401, obtaining a metadata management array.
Referring to the initialization explanation in step S201, when the request type identifier of the received task request is the metadata type identifier, the server obtains the metadata management array meta_workers[] for managing the metadata request threads. Here, the metadata management array is used to manage metadata worker threads, which are used to process metadata-type task requests.
Step S402, acquiring the distributed task number of each metadata work thread in the metadata management array.
Step S403, determining the metadata worker thread with the least number of assigned tasks as the target thread.
When the request type identifier of the task request is the metadata type identifier, the server obtains the number of assigned tasks of each metadata worker thread in the metadata management array and selects the metadata worker thread with the least number of assigned tasks as the target thread. This balances the number of task requests across the metadata worker threads; processing the task request with the target thread can make full use of the VFS read-ahead cache, achieving the effect of improving availability and performance.
On the basis of the foregoing embodiments, an embodiment of the present application further provides a request distribution method, and fig. 5 is a schematic flow chart of another implementation of the request distribution method provided in the embodiment of the present application, and as shown in fig. 5, the method includes the following steps:
step S501, a task request sent by a client is obtained.
The method provided by the embodiment of the application can be executed by a network file system server, namely, an NFS server. The task request received by the server carries a request type identifier and a file handle identifier.
Step S502, the task request is analyzed, and a request type identifier and a file handle identifier carried in the task request are extracted.
After a server side obtains a task request sent by a client side, the task request is analyzed and processed, and a request type identifier and a file handle identifier carried in the task request are extracted. Here, the request type identifier may include a read type identifier, a write type identifier, and a metadata type identifier.
Step S503, according to the request type identifier and the file handle identifier, selecting a target thread for processing the task request from the thread types corresponding to the request type identifier.
In the embodiment of the application, the server groups the worker threads into three types, read, write, and metadata, according to the type of the task request, so that each task request is distributed only to worker threads of the corresponding type for processing.
After receiving a task request, selecting a corresponding working thread group according to a request type identifier carried by the task request, then selecting a target thread from the selected working thread group according to the request type identifier and a file handle identifier of the task request, and processing the task request by using the target thread.
Step S504, updating the number of tasks already allocated to the target thread.
In this embodiment of the present application, updating the number of tasks allocated to the target thread may be implemented as follows: each time the target thread completes a task request, the number of assigned tasks of the target thread is reduced by one; and each time a task request is distributed to the target thread, the number of assigned tasks of the target thread is increased by one. When the number of assigned tasks of the target thread increases, the load of the target thread increases; as the number of assigned tasks of the target thread decreases, the load of the target thread decreases.
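One possible implementation of this bookkeeping is sketched below; the use of atomic builtins is an assumption, since the application only states that the counter is increased or decreased by one:
/* jobs_cnt goes up by one when a request is dispatched to the worker and down
 * by one when the worker finishes the request. */
static void on_task_dispatched(struct worker *w)
{
    __atomic_add_fetch(&w->jobs_cnt, 1, __ATOMIC_RELAXED);
}

static void on_task_completed(struct worker *w)
{
    __atomic_sub_fetch(&w->jobs_cnt, 1, __ATOMIC_RELAXED);
}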
Step S505, the task request is distributed to a task queue corresponding to the target thread, so as to process the task request.
In a multi-thread multi-task concurrent network file system, a target thread may have a plurality of task requests to be processed, and the plurality of task requests to be processed may be sequentially processed according to the sequence of received requests. After the target thread is determined in step S502, a task queue of the target thread is obtained, and the task request is distributed to the task queue, so that the target thread processes the task request, and after the processing is completed, the task request is deleted from the task queue.
According to the method provided by the embodiment of the application, a server obtains a task request sent by a client, wherein the task request carries a request type identifier and a file handle identifier; the task request is parsed, and the request type identifier and the file handle identifier carried in the task request are extracted, wherein the request type identifier comprises a read type identifier, a write type identifier and a metadata type identifier; a target thread for processing the task request is selected from the thread type corresponding to the request type identifier according to the request type identifier and the file handle identifier; the number of tasks assigned to the target thread is updated; and the task request is distributed to a task queue corresponding to the target thread so as to process the task request. Faults of different types of requests are isolated by grouping the worker threads, load balancing of the worker threads is realized by recording thread loads and distributing requests according to load, effective utilization of the underlying file system cache is realized by binding read requests to read threads, and the availability of the NFS server and the performance of reading files are ultimately improved.
Next, an exemplary application of the embodiment of the present application in a practical application scenario will be described.
When multiple threads of a conventional NFS server execute requests from an NFS client, the various types of requests are distributed indiscriminately to all worker threads. When a certain type of request is abnormally blocked in the underlying file system, all worker threads eventually become blocked, and the NFS server can no longer respond to any request. The file system under the VFS caches read data separately for different threads, and if read requests are distributed to worker threads indiscriminately, the NFS server cannot guarantee that each read request from the client uses the cache of data already read into the VFS.
When the NFS server in the Linux user state executes an NFS request by using a multithreading technique, the distribution method for distributing the request to a plurality of working threads has the following defects:
1) if a certain request is abnormally blocked in the NFS server, the NFS client is caused to retransmit, and all threads may become occupied within a short time, leaving no idle thread to execute other requests;
2) when the number of requests is large and there is no idle thread, more requests may be distributed to a certain thread, and the number of requests in each thread's request queue becomes unbalanced, so that a large number of requests cannot be executed concurrently;
3) when the NFS server accesses the Linux kernel file system, the VFS provides a read-ahead cache for files being read to improve read performance, but this cache is isolated between threads, so when different read requests for the same file are executed in different threads, the existing read cache cannot be used, wasting performance and memory space.
Based on the above defects, embodiments of the present application provide a thread selection method applied in a request distribution method, which implements isolation of faults of different types of requests by grouping working threads, implements load balancing of the working threads by recording thread loads and performing request distribution according to the loads, implements effective utilization of a bottom layer file system cache by binding a read request and a read thread, and finally improves availability of an NFS server and file reading performance. The thread selection method provided by the embodiment of the application solves the three defects by using the following scheme:
1) grouping the working threads according to three types of reading, writing and metadata, correspondingly dividing the requests into three types, and only distributing the requests to the threads of the same type for execution;
2) recording the number of requests which are distributed in the threads and are not executed, wherein a small count means that the load of the threads is small, and the threads with smaller loads can be selected when the requests are distributed;
3) binding the FH with the read thread ensures that different read requests of the same file are distributed to the same thread.
The embodiment of the application uses a distribution management data structure to realize a thread selection method that selects a suitable worker thread according to type, FH and load, thereby completing request distribution in the NFS server and achieving the improvements described above. The details are described below.
First, the distribution management data structure of the worker thread pool:
struct worker {
    unsigned long worker_index;
    char fh[NFS_V4_FH_LEN];
    int32_t jobs_cnt;
};
worker_index: the unique identifier of a worker thread, used to find the task queue of the specified worker thread during the distribution management process; its value range can represent the different thread types: read, write, metadata.
fh: stores the NFS handle FH; it is compared with the FH in a new request so that different read requests for the same file handle are distributed to the same thread.
jobs_cnt: the number of dispatched requests in the thread's task queue, representing the load of the thread.
In the embodiment of the application, all threads can be divided into three types according to the request type: reading, writing and metadata, and the following arrays are used for maintaining three management structures and providing a thread management function when a request is distributed:
struct worker read_workers[MAX_WORKERS_NUM];
struct worker write_workers[MAX_WORKERS_NUM];
struct worker meta_workers[MAX_WORKERS_NUM];
the three arrays are global variables, wherein:
read_workers: a struct worker type array that manages all threads used for executing read requests;
write_workers: a struct worker type array that manages all threads used for executing write and commit requests;
meta_workers: a struct worker type array that manages all threads used for executing non-read, non-write, non-commit requests.
Second, the distribution management array is initialized:
the thread counts of the three arrays can be respectively configured as
worker_meta_counts: the number of non-read-write threads;
worker_read_counts: the number of read threads;
worker_write_counts: the number of write threads;
The sum of the three is the total number of threads, and the NFS server starts all worker threads according to this total when the service is started.
The purpose of initializing the distribution management arrays is to associate the worker thread pool created in the NFS server with the distribution management arrays, with worker_index as the association identifier.
The thread types are divided and recorded according to worker_index:
1) threads in the range [0, worker_meta_counts) are metadata threads;
2) threads in the range [worker_meta_counts, worker_meta_counts + worker_read_counts) are read threads;
3) threads in the range [worker_meta_counts + worker_read_counts, worker_meta_counts + worker_read_counts + worker_write_counts) are write threads.
Fig. 6 is a schematic flow chart of an implementation process of recording different worker_index values into elements of the different management arrays of the corresponding types according to the embodiment of the present application.
Thirdly, the flow of the distribution request:
The thread selection process is added to the request distribution flow to realize the distribution method provided herein. The distribution management arrays are used in the thread selection process to select a suitable thread according to type, FH and load, and the request is distributed accordingly.
Fig. 7 is a schematic diagram of the thread selection flow select_worker_thread for obtaining a thread's worker_index according to an embodiment of the present application. As shown in fig. 7, the main flow of dispatching a request based on the thread selection method is as follows:
1) extract the request type op and the file handle fh from the request of the NFS client;
2) if op is the read type, select the element with the same fh from the read management array; if no element with the same fh exists, select the element with the smallest jobs_cnt count from the read management array;
3) if op is another type, select the element with the smallest jobs_cnt count in the management array of the corresponding type;
4) increment the jobs_cnt count in the selected element;
5) obtain the worker_index of the selected element;
6) add the request to the task queue corresponding to that worker_index.
In addition, jobs_cnt is decremented after execution of the request is completed.
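Tying the pieces together, the select_worker_thread flow of fig. 7 might look like the following sketch; the helper functions reuse the earlier illustrative sketches and none of the names are taken from the application:
/* Classify the request, choose a worker from the matching management array,
 * account for the new task, and return the worker_index used to locate the
 * target thread's task queue. */
static unsigned long select_worker_thread(enum nfs_op op, const char *fh, size_t fh_len)
{
    struct worker *target;

    switch (classify_request(op)) {
    case REQ_READ:
        target = select_read_worker(fh, fh_len);
        break;
    case REQ_WRITE:
        target = select_least_loaded(write_workers, worker_write_counts);
        break;
    default:
        target = select_least_loaded(meta_workers, worker_meta_counts);
        break;
    }

    on_task_dispatched(target);            /* jobs_cnt++ before queuing */
    return target->worker_index;           /* request then joins this thread's task queue */
}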
The method provided by the embodiment of the application can solve the problems of availability and performance of the NFS server and can improve the availability and performance of the system.
The core of the request distribution method provided by the embodiment of the application is a thread selection method based on load, request type, and the read-request file handle (FH); it can isolate exceptions in different types of requests, balance the number of request tasks on each thread, and make full use of the VFS read-ahead cache, achieving the effects of improving availability and performance.
The key points of the request distribution method provided by the embodiment of the application are as follows: 1) grouping the work threads of the NFS server according to the type of the request task; 2) distributing the request tasks according to the request task load of the working thread; 3) and binding the read working thread of the NFS server with the FH of the read request of the NFS client.
Based on the foregoing embodiments, the present application provides a request distribution apparatus, where each module included in the apparatus and each unit included in each module may be implemented by a processor in a computer device; of course, the implementation can also be realized through a specific logic circuit; in the implementation process, the processor may be a Central Processing Unit (CPU), a Microprocessor Unit (MPU), a Digital Signal Processor (DSP), a Field Programmable Gate Array (FPGA), or the like.
Fig. 8 is a schematic structural diagram of a request distribution apparatus provided in an embodiment of the present application, where the request distribution apparatus 800 is applied to a server, and as shown in fig. 8, the request distribution apparatus 800 includes:
an obtaining module 801, configured to obtain a task request sent by a client, where the task request carries a request type identifier and a file handle identifier;
a selecting module 802, configured to select, according to the request type identifier and the file handle identifier, a target thread for processing the task request from thread types corresponding to the request type identifier;
the distributing module 803 is configured to distribute the task request to a task queue corresponding to the target thread, so as to process the task request.
In some embodiments, the request distribution apparatus 800 further includes: the analysis module is used for analyzing the task request and extracting a request type identifier and a file handle identifier carried in the task request; the request type identifier comprises a read type identifier, a write type identifier and a metadata type identifier.
In some embodiments, the selecting module 802 is further configured to:
when the request type identifier is a read type identifier, acquiring a read management array, wherein the read management array is used for managing a read working thread, and the read working thread is used for processing a task request of a read type;
judging whether a file handle corresponding to the read request of the client side is bound with a read working thread of the server side;
if the file handle corresponding to the read request of the client side is bound with the read working thread of the server side, searching the read working thread with the same file handle identification from the read management array to obtain a search result;
and when the search result represents that the search is successful, determining the read working thread with the same file handle identifier as the target thread.
In some embodiments, the selecting module 802 is further configured to:
when a file handle corresponding to the read request of the client is not bound with the read working thread of the server, or when the search result represents that the search fails, acquiring the number of distributed tasks of each read working thread in the read management array;
and determining the read working thread with the least number of the distributed tasks as a target thread.
In some embodiments, the selecting module 802 is further configured to:
when the request type identifier is a write type identifier, acquiring a write management array, wherein the write management array is used for managing a write working thread, and the write working thread is used for processing a task request of the write type;
acquiring the number of distributed tasks of each write work thread in the write management array;
and determining the write work thread with the least number of the distributed tasks as a target thread.
In some embodiments, the selecting module 802 is further configured to:
when the request type identifier is a metadata type identifier, acquiring a metadata management array, wherein the metadata management array is used for managing a metadata working thread, and the metadata working thread is used for processing a task request of the metadata type;
acquiring the number of distributed tasks of each metadata working thread in the metadata management array;
and determining the metadata work thread with the least number of the distributed tasks as a target thread.
In some embodiments, the request distribution apparatus 800 further includes:
the updating module is used for updating the number of the tasks distributed to the target thread;
in some embodiments, the update module is further configured to: when the target thread completes one task request every time, reducing the number of the distributed tasks of the target thread by one; and when the target thread distributes one task request, adding one to the number of the tasks distributed by the target thread.
Here, it should be noted that: the above description of the request distribution apparatus embodiment is similar to the above description of the method, and has the same advantageous effects as the method embodiment. For technical details not disclosed in the request distribution apparatus embodiments of the present application, a person skilled in the art shall refer to the description of the method embodiments of the present application for understanding.
It should be noted that, in the embodiment of the present application, if the request distribution method is implemented in the form of a software functional module and sold or used as a standalone product, the request distribution method may also be stored in a computer readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present application may be essentially implemented or portions thereof contributing to the prior art may be embodied in the form of a software product stored in a storage medium, and including several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read Only Memory (ROM), a magnetic disk, or an optical disk. Thus, embodiments of the present application are not limited to any specific combination of hardware and software.
Accordingly, an embodiment of the present application provides a computer-readable storage medium, on which a computer program is stored, wherein the computer program is implemented to implement the steps in the request distribution method provided in the above embodiment when executed by a processor.
An embodiment of the present application further provides an electronic device, fig. 9 is a schematic structural diagram of a component of the electronic device provided in the embodiment of the present application, and as shown in fig. 9, the electronic device 900 includes: a processor 901, at least one communication bus 902, a user interface 903, at least one external communication interface 904 and memory 905. Wherein the communication bus 902 is configured to enable connective communication between these components. The user interface 903 may include a display screen, and the external communication interface 904 may include a standard wired interface and a wireless interface, among others. Wherein the processor 901 is configured to execute the program of the request distribution method stored in the memory to implement the steps in the request distribution method provided in the above embodiments.
The above description of the display device, electronic device and storage medium embodiments, similar to the description of the method embodiments above, has similar advantageous effects as the method embodiments. For technical details not disclosed in the embodiments of the display device, the electronic device and the storage medium of the present application, reference is made to the description of the embodiments of the method of the present application for understanding.
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present application. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. It should be understood that, in the various embodiments of the present application, the sequence numbers of the above-mentioned processes do not mean the execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application. The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described device embodiments are merely illustrative, for example, the division of the unit is only a logical functional division, and there may be other division ways in actual implementation, such as: multiple units or components may be combined, or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the coupling, direct coupling or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between the devices or units may be electrical, mechanical or other forms.
The units described as separate parts may or may not be physically separate, and parts shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, all functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may serve as a single unit alone, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware, or in the form of hardware plus a software functional unit.
Those of ordinary skill in the art will understand that all or part of the steps for implementing the above method embodiments may be completed by hardware related to program instructions. The program may be stored in a computer-readable storage medium and, when executed, performs the steps of the above method embodiments. The aforementioned storage medium includes various media that can store program code, such as a removable storage device, a read-only memory, a magnetic disk or an optical disk.
Alternatively, if the above integrated units of the present application are implemented in the form of software functional modules and sold or used as independent products, they may also be stored in a computer-readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present application may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a device to perform all or part of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media that can store program code, such as a removable storage device, a ROM, a magnetic disk or an optical disk.
The above description covers only specific embodiments of the present application, but the protection scope of the present application is not limited thereto. Any change or substitution that a person skilled in the art can readily conceive of within the technical scope disclosed in the present application shall fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A request distribution method, applied to a server side, the method comprising:
acquiring a task request sent by a client, wherein the task request carries a request type identifier and a file handle identifier;
selecting a target thread for processing the task request from thread types corresponding to the request type identifier according to the request type identifier and the file handle identifier;
and distributing the task request to a task queue corresponding to the target thread so as to process the task request.
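Purely as an illustration of the claimed flow, the Python sketch below assumes one group of worker threads per request type, with each thread owning its own task queue; the names (TaskRequest, Worker, dispatch) and the group sizes are assumptions and do not appear in the application.

```python
# Hypothetical sketch of claim 1; all names and sizes are assumed, not taken from the application.
import queue
import threading
from dataclasses import dataclass

READ, WRITE, META = "read", "write", "metadata"   # assumed values of the request type identifier

@dataclass
class TaskRequest:
    req_type: str        # request type identifier carried by the task request
    file_handle: int     # file handle identifier carried by the task request

class Worker(threading.Thread):
    """A worker thread with its own task queue and a count of tasks distributed to it."""
    def __init__(self, name: str):
        super().__init__(name=name, daemon=True)
        self.tasks: "queue.Queue[TaskRequest]" = queue.Queue()
        self.assigned = 0

    def run(self) -> None:
        while True:
            req = self.tasks.get()
            # ... process the read / write / metadata request here ...
            self.tasks.task_done()

# One group of worker threads per request type (thread type); group sizes are arbitrary here.
groups = {
    READ:  [Worker(f"read-{i}") for i in range(2)],
    WRITE: [Worker(f"write-{i}") for i in range(2)],
    META:  [Worker("metadata-0")],
}
for group in groups.values():
    for worker in group:
        worker.start()

def dispatch(req: TaskRequest) -> None:
    """Select a target thread from the group matching the request type identifier
    and place the request on that thread's task queue."""
    group = groups[req.req_type]
    target = min(group, key=lambda w: w.assigned)   # placeholder policy; claims 3-6 refine it
    target.assigned += 1
    target.tasks.put(req)

dispatch(TaskRequest(READ, file_handle=42))         # example use
```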
2. The method of claim 1, further comprising:
parsing the task request, and extracting the request type identifier and the file handle identifier carried in the task request;
wherein the request type identifier comprises a read type identifier, a write type identifier and a metadata type identifier.
3. The method according to claim 2, wherein the selecting a target thread for processing the task request from thread types corresponding to the request type identifier according to the request type identifier and the file handle identifier comprises:
when the request type identifier is a read type identifier, acquiring a read management array, wherein the read management array is used for managing a read working thread, and the read working thread is used for processing a task request of a read type;
determining whether a file handle corresponding to the read request of the client side is bound with a read working thread of the server side;
if the file handle corresponding to the read request of the client side is bound with a read working thread of the server side, searching the read management array for the read working thread with the same file handle identifier to obtain a search result;
and when the search result indicates that the search is successful, determining the read working thread with the same file handle identifier as the target thread.
4. The method of claim 3, further comprising:
when the file handle corresponding to the read request of the client is not bound with a read working thread of the server, or when the search result indicates that the search fails, acquiring the number of distributed tasks of each read working thread in the read management array;
and determining the read working thread with the least number of the distributed tasks as a target thread.
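By way of a hedged illustration of the read-path selection in claims 3 and 4, the sketch below keeps a binding of file handles to read working threads (the binding table, field names and array size are assumptions): a read request whose handle is already bound goes to the bound thread; otherwise the least-loaded read thread is chosen and the handle is bound to it.

```python
# Hypothetical sketch of claims 3 and 4; field names and the binding table are assumed.
from dataclasses import dataclass, field
from typing import List, Set

@dataclass
class ReadWorker:
    name: str
    assigned: int = 0                                      # tasks currently distributed to the thread
    bound_handles: Set[int] = field(default_factory=set)   # file handles bound to this read thread

# The "read management array" that manages the read working threads.
read_workers: List[ReadWorker] = [ReadWorker(f"read-{i}") for i in range(4)]

def select_read_worker(file_handle: int) -> ReadWorker:
    # If the file handle is already bound to a read worker, search the array for it.
    for worker in read_workers:
        if file_handle in worker.bound_handles:
            return worker                                  # search succeeded: reuse the same thread
    # Handle not bound, or the search failed: fall back to the read worker with the
    # fewest distributed tasks and bind the handle so later reads stay on that thread.
    target = min(read_workers, key=lambda w: w.assigned)
    target.bound_handles.add(file_handle)
    return target
```

Pinning a file handle to one read thread keeps reads of the same file on a single queue, while the least-loaded fallback spreads new files across the read group.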
5. The method according to claim 2, wherein the selecting a target thread for processing the task request from thread types corresponding to the request type identifier according to the request type identifier and the file handle identifier comprises:
when the request type identifier is a write type identifier, acquiring a write management array, wherein the write management array is used for managing a write working thread, and the write working thread is used for processing a task request of the write type;
acquiring the number of distributed tasks of each write working thread in the write management array;
and determining the write working thread with the least number of the distributed tasks as a target thread.
6. The method according to claim 2, wherein the selecting a target thread for processing the task request from thread types corresponding to the request type identifier according to the request type identifier and the file handle identifier further comprises:
when the request type identifier is a metadata type identifier, acquiring a metadata management array, wherein the metadata management array is used for managing a metadata working thread, and the metadata working thread is used for processing a task request of the metadata type;
acquiring the number of distributed tasks of each metadata working thread in the metadata management array;
and determining the metadata working thread with the least number of the distributed tasks as a target thread.
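Claims 5 and 6 apply the same rule to the write group and the metadata group: pick the thread with the fewest distributed tasks. A minimal sketch of that shared rule, with an assumed WorkerStats helper, might look like this:

```python
# Hypothetical sketch of the least-loaded rule shared by claims 5 and 6.
from dataclasses import dataclass

@dataclass
class WorkerStats:
    name: str
    assigned: int            # tasks currently distributed to the thread

def select_least_loaded(workers):
    """Return the worker thread with the fewest tasks currently distributed to it."""
    return min(workers, key=lambda w: w.assigned)

write_workers = [WorkerStats("write-0", 3), WorkerStats("write-1", 1)]
print(select_least_loaded(write_workers).name)   # -> write-1 (the same call works for metadata workers)
```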
7. The method of claim 1, further comprising:
updating the number of tasks distributed to the target thread;
wherein the updating of the number of tasks distributed to the target thread comprises:
each time the target thread completes a task request, reducing the number of tasks distributed to the target thread by one;
and each time a task request is distributed to the target thread, adding one to the number of tasks distributed to the target thread.
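A possible sketch of the bookkeeping in claim 7 is shown below; the lock is an added assumption, on the basis that dispatch and completion would typically happen on different threads.

```python
# Hypothetical sketch of claim 7; the lock is an assumption, not stated in the claim.
import threading

class TaskCounter:
    """Per-thread count of distributed tasks, adjusted on dispatch and on completion."""
    def __init__(self):
        self._lock = threading.Lock()
        self.assigned = 0

    def on_dispatch(self) -> None:
        with self._lock:
            self.assigned += 1   # one more task request distributed to the target thread

    def on_complete(self) -> None:
        with self._lock:
            self.assigned -= 1   # the target thread finished one distributed task request
```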
8. A request distribution device, applied to a server side, the device comprising:
the system comprises an acquisition module, a processing module and a processing module, wherein the acquisition module is used for acquiring a task request sent by a client, and the task request carries a request type identifier and a file handle identifier;
a selecting module, configured to select a target thread for processing the task request from thread types corresponding to the request type identifier according to the request type identifier and the file handle identifier;
and a distribution module, configured to distribute the task request to a task queue corresponding to the target thread so as to process the task request.
9. An electronic device, comprising:
a memory for storing executable instructions;
a processor, configured to implement the request distribution method of any one of claims 1 to 7 when executing executable instructions stored in the memory.
10. A computer-readable storage medium storing executable instructions for implementing the request distribution method of any one of claims 1 to 7 when executed by a processor.
CN202210303625.XA 2022-03-24 2022-03-24 Request distribution method, device, equipment and computer readable storage medium Pending CN114880084A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210303625.XA CN114880084A (en) 2022-03-24 2022-03-24 Request distribution method, device, equipment and computer readable storage medium


Publications (1)

Publication Number Publication Date
CN114880084A true CN114880084A (en) 2022-08-09

Family

ID=82667896

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210303625.XA Pending CN114880084A (en) 2022-03-24 2022-03-24 Request distribution method, device, equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN114880084A (en)

Similar Documents

Publication Publication Date Title
US7299468B2 (en) Management of virtual machines to utilize shared resources
US8244888B2 (en) Method and mechanism for implementing tagged session pools
US8301840B2 (en) Assigning cache priorities to virtual/logical processors and partitioning a cache according to such priorities
US9201896B2 (en) Managing distributed storage quotas
US8001355B2 (en) Storage system, volume allocation method and management apparatus
US20200348863A1 (en) Snapshot reservations in a distributed storage system
US20040205109A1 (en) Computer system
GB2258546A (en) Data storage management.
US10817380B2 (en) Implementing affinity and anti-affinity constraints in a bundled application
US11308066B1 (en) Optimized database partitioning
US11556468B2 (en) Multi-ring shared, traversable, and dynamic advanced database
CN107408132B (en) Method and system for moving hierarchical data objects across multiple types of storage
US20230100484A1 (en) Serverless function colocation with storage pools
US10579419B2 (en) Data analysis in storage system
US10416892B2 (en) Fileset-based data locality enablement in distributed file systems
US9430530B1 (en) Reusing database statistics for user aggregate queries
US20170315930A1 (en) Cache scoring scheme for a cloud-backed deduplication storage system
CN114880084A (en) Request distribution method, device, equipment and computer readable storage medium
US20040073907A1 (en) Method and system of determining attributes of a functional unit in a multiple processor computer system
KR101754713B1 (en) Asymmetric distributed file system, apparatus and method for distribution of computation
US7721287B2 (en) Organizing transmission of repository data
US8880828B2 (en) Preferential block recycling in a redirect-on-write filesystem
CN113360455B (en) Data processing method, device, equipment and medium of super fusion system
Jeon et al. Domain level page sharing in xen virtual machine systems
CN114168306A (en) Scheduling method and scheduling device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination