CN112835508A - Method and device for processing data - Google Patents
Method and device for processing data
- Publication number
- CN112835508A (application CN201911153145.4A)
- Authority
- CN
- China
- Prior art keywords
- queue
- data
- level
- access
- access request
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
  - G06—COMPUTING; CALCULATING OR COUNTING
    - G06F—ELECTRIC DIGITAL DATA PROCESSING
      - G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
        - G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
          - G06F3/0601—Interfaces specially adapted for storage systems
            - G06F3/0602—Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
              - G06F3/0604—Improving or facilitating administration, e.g. storage management
                - G06F3/0607—Improving or facilitating administration, e.g. storage management by facilitating the process of upgrading existing storage systems, e.g. for improving compatibility between host and storage device
              - G06F3/061—Improving I/O performance
            - G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique
              - G06F3/0638—Organizing or formatting or addressing of data
                - G06F3/0643—Management of files
            - G06F3/0668—Interfaces specially adapted for storage systems adopting a particular infrastructure
              - G06F3/067—Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]
              - G06F3/0671—In-line storage system
                - G06F3/0683—Plurality of storage devices
                  - G06F3/0689—Disk arrays, e.g. RAID, JBOD
Abstract
The application provides a method and a device for processing data. Access requests of different levels are buffered in different queues, and different queues may be given different preset values. Whether to continue processing the access requests in the current queue or to process the access requests in the next queue is determined according to whether the number of access requests continuously processed in the current queue has reached the preset value of that queue. Because the preset values of the queues differ, the number of access requests taken from each queue is controlled, different access performance can be provided for access requests of different levels, and the access performance of the data can be improved.
Description
Technical Field
The present application relates to the field of computers, and more particularly, to a method and an apparatus for processing data.
Background
In a storage system, hierarchical storage allows different data to be stored on memories of different performance within the storage system. For example, storing hot data in a high-performance memory and cold data in a low-performance memory achieves hierarchical storage of the data and enables hot data to be processed more quickly.
However, in a storage system with only one type of storage medium, such as an all-flash system including only solid state drives (SSDs), data of different hotness still exists, yet the storage system cannot provide different access performance for data of different hotness, resulting in poor access performance.
Disclosure of Invention
The application provides a method and a device for processing data, which can improve access performance.
In a first aspect, a method of processing data is provided, the method being performed by a storage controller, the method comprising:
after an access request in a current queue is processed, determining whether the number of access requests which are continuously processed in the current queue reaches a preset value of the current queue, wherein the storage controller comprises a plurality of queues, different queues are used for caching access requests of different levels, the queues form a circular queue according to the level of the access request cached by each queue, the queues are accessed according to the sequence of the circular queue, and each queue corresponds to a preset value of the number of access requests which are continuously processed;
when the number of the access requests which are continuously processed in the current queue reaches a preset value of the current queue, acquiring the access requests from the next queue of the current queue;
and when the number of the access requests which are continuously processed in the current queue does not reach the preset value of the current queue, continuously acquiring the access requests from the current queue.
In this technical solution, different queues can buffer access requests of different levels, and different queues can be given different preset values. Whether to continue processing the access requests in the current queue or to process the access requests in the next queue is determined according to whether the number of access requests continuously processed in the current queue has reached the preset value of that queue. By controlling the number of access requests taken from each queue through the difference in preset values, different access performance can be provided for access requests of different levels, and the access performance of the data can be improved.
The number of queues included in the storage controller is equal to the number of levels of access requests. For example, if there are three levels, there are three queues and three preset values, with each queue corresponding to one level and one preset value.
It is to be understood that the plurality of queues includes, for example, a first queue corresponding to hot data, a second queue corresponding to warm data, and a third queue corresponding to cold data. The second queue is the queue next to the first queue, the third queue is the queue next to the second queue, and the first queue is the queue next to the third queue. The level of the access requests in the first queue is higher than that of the access requests in the second queue, and the level of the access requests in the second queue is higher than that of the access requests in the third queue. After the number of access requests continuously processed in the first queue reaches the preset value of the first queue, access requests are obtained from the second queue; after the number of access requests continuously processed in the second queue reaches the preset value of the second queue, access requests are obtained from the third queue; after the number of access requests continuously processed in the third queue reaches the preset value of the third queue, access requests are obtained from the first queue again; and so on.
In a possible implementation, the higher the level of the access requests buffered in a queue, the larger the preset value corresponding to that queue. The preset value of the first queue corresponding to hot data may be set to a first preset value; the preset value of the second queue corresponding to warm data may be set to a value smaller than the first preset value and larger than a second preset value, where the first preset value is larger than the second preset value; and the preset value of the third queue corresponding to cold data may be set to the second preset value. In this way, access requests that access hot data can be processed more quickly, and the access performance of hot data can be improved.
Optionally, the access request comprises an input/output (IO) request.
In some possible implementations, the method further includes: receiving an access request, and determining the level of the received access request; and caching the access request into a queue corresponding to the determined level.
In some possible implementations, the determining the level of the received access request includes:
determining a data unit to which the data to be accessed belongs according to the identifier of the data to be accessed carried in the access request;
determining a level of the access request based on the level of the data unit.
If the access request carries a LUN ID and a logical block address (LBA), the data unit to which the data to be accessed belongs may be determined from the LUN ID and the LBA, where the data unit is a data block of a preset size, and the level of that data unit is taken as the level of the access request. If the access request carries a file ID, the file may be determined from the file ID; in this case the data unit is a file, and the level of the file is the level of the access request.
In some possible implementations, the method includes: and updating the access frequency of the data unit corresponding to the data to be accessed, wherein the access frequency is used for setting the level of the data unit.
In some possible implementations, the method includes: periodically determining, according to the access frequency of the data unit, whether the level of the data unit has changed; and updating the level of the data unit when it is determined that the level of the data unit has changed.
Specifically, each time the storage controller receives an access request, it updates the access frequency of the data unit to which the data accessed by the request belongs. The storage controller periodically determines whether the change in the access frequency of a data unit exceeds a frequency threshold, and decides accordingly whether to modify the level of the data unit. For example, suppose the level of a data unit is warm data when its access frequency is T1, the access frequency then increases to T2, and the frequency threshold is T. When T2-T1 > T, the level of the data unit is set to hot data; when T2-T1 ≤ T, the level of the data unit remains warm data.
In a second aspect, a device for processing data is provided, where the device includes a processing unit and an obtaining unit, and functions executed by the processing unit and the obtaining unit are the same as functions implemented by steps of the method provided in the first aspect, and reference is specifically made to descriptions of the steps of the method in the first aspect, which are not repeated herein.
In a third aspect, a storage controller is provided that includes a processor, a memory, a communication interface, and a bus. The processor, the memory, and the communication interface are connected through the bus and communicate with one another. The memory is configured to store computer-executable instructions, and when the storage controller runs, the processor executes the computer-executable instructions in the memory so that the storage controller performs the operation steps of the method in the first aspect or any one of its possible implementations.
In a fourth aspect, a storage system is provided, where the storage system includes a storage device and the storage controller of the third aspect, where the storage controller accesses the storage device through the interface, and where the storage controller is connected to a host.
In a fifth aspect, there is provided a computer program product comprising: computer program code which, when run on a computer, causes the computer to perform the method of the first aspect described above.
In a sixth aspect, a computer-readable medium is provided, having program code stored thereon, which, when run on a computer, causes the computer to perform the method of the first aspect described above.
Drawings
Fig. 1 is a system architecture diagram provided by an embodiment of the present application.
Fig. 2 is a schematic diagram of a hierarchical storage in the prior art provided by an embodiment of the present application.
Fig. 3 is a schematic diagram of determining a level of a data unit according to an embodiment of the present application.
Fig. 4 is a data hierarchy table provided by an embodiment of the present application.
Fig. 5 is another data hierarchy table provided by an embodiment of the present application.
Fig. 6 is a schematic diagram illustrating a method for caching access requests of different levels in different queues according to an embodiment of the present application.
Fig. 7 is a schematic diagram of a method for acquiring access requests in different queues according to an embodiment of the present application.
Fig. 8 is a schematic diagram illustrating another method for obtaining access requests in different queues according to an embodiment of the present application.
Fig. 9 is a schematic block diagram of an apparatus for processing data according to an embodiment of the present application.
Fig. 10 is a schematic block diagram of another apparatus for processing data according to an embodiment of the present disclosure.
Detailed Description
The technical solution in the present application will be described below with reference to the accompanying drawings.
Fig. 1 shows a system architecture 100 according to an embodiment of the present application. The host 110 is connected to the storage system 120 through a network. The host 110 responds to a request of a user and issues an input/output (IO) request to the storage system 120 so as to read and write data stored in the storage system 120. The storage system 120 comprises a control device 121 and a storage device 122, where the control device 121 may also be referred to as a storage controller. The control device 121 is mainly responsible for processing the IO requests issued by the host 110, and may be a single-controller architecture including only one controller or a multi-controller architecture including multiple controllers; the control device 121 in fig. 1 is illustrated as a dual-controller architecture. The storage device 122 may build at least one disk array (RAID) from a plurality of disks, and the at least one disk array may be virtualized as a virtual disk, that is, a storage pool, whose space is made available to the host. As shown in fig. 1, the control device 121 and the storage device 122 may reside in one subrack, a so-called "disk control integration" form, and may be connected to each other through an interface on the subrack; alternatively, the control device 121 and the storage device 122 may not be in one subrack, that is, they are separate. In this embodiment of the application, the host 110 may be connected to the storage system 120 through a storage area network (SAN), or may be connected to the storage system 120 through network attached storage (NAS). When the host 110 is connected to the storage system 120 in the SAN manner, a plurality of logical disks (e.g., logical unit numbers (LUNs)) may be established in the storage pool and mounted to the host 110, and the host 110 may access a logical disk mounted to it. When the host 110 is connected to the storage system 120 in the NAS manner and data is stored in the storage pool in the form of files, the host 110 may access the files in the storage pool. In the embodiment of the present application, the disks constituting the storage device 122 are of the same type, for example all SSDs.
A hierarchical storage system in the prior art comprises storage devices of different performance, and data of different levels can be stored on storage devices of different performance. For example, as shown in fig. 2, data written to the storage system for the first time may be stored on storage devices of different performance according to attributes of the data, such as the importance of the data. The read/write frequency of the data is then counted and, according to differences in read/write frequency, the data may be divided into different levels, such as hot data, warm data, and cold data, and data of different levels may be migrated to storage devices of different performance. For example, if hot data stored on a high-performance storage device is later read and written less frequently and becomes warm data, the warm data needs to be migrated from the high-performance storage device to a medium-performance storage device; if warm data stored on a medium-performance storage device is later read and written less frequently and becomes cold data, the cold data is migrated to a low-performance storage device; similarly, if cold data stored on a low-performance storage device is later read and written more frequently and becomes warm data, the warm data is migrated to the medium-performance storage device.
For example, the high-performance storage device may be an SSD, the medium-performance storage device may be a serial attached SCSI (SAS) disk, where SCSI stands for small computer system interface, and the low-performance storage device may be a near-line SAS (NL-SAS) disk.
In the embodiment of the application, the hot data is data with an access frequency greater than a first preset value, the warm data is data with an access frequency less than the first preset value and greater than a second preset value, and the cold data is data with an access frequency less than the second preset value, wherein the second preset value is less than the first preset value.
In a scenario in which the disks constituting the storage device 122 are all of the same type (for example, an all-flash scenario), data still has different access frequencies, that is, data of different hotness still exists. However, because all the storage media in the all-flash scenario have the same performance, the data is not tiered, and the storage system does not distinguish frequently accessed hot data from rarely accessed cold data, so hot data with high access-performance requirements cannot obtain a fast response.
Therefore, in view of the above problems, the method for processing data provided by the embodiments of the present application can rank data stored on the same storage medium, and when the storage system processes a data access request, the request is processed according to the level of the accessed data.
A method 300 for determining the level of data stored in the storage device 122 according to an embodiment of the present application is described below with reference to fig. 3. The method 300 is performed, for example, by the control device 121 shown in fig. 1, where the control device 121 may also be referred to as a storage controller. The method 300 includes:
S310, detecting whether the level of each data unit in the logical disk has changed.
In this embodiment of the present application, the access frequency of each data unit may be counted at the granularity of the data unit. In a SAN environment, the logical disk is a LUN and the data unit is a data block of a preset size. The level of each data unit is recorded in the metadata of the storage pool through a data hierarchy table, which, as shown in fig. 4, may include the LUN identity (ID), the ID of the data unit, the logical address range of the data unit, the access frequency of the data unit, and the level of the data unit. In a NAS environment, the data unit is a file and the logical disk is the storage pool; as shown in the data hierarchy table of fig. 5, only the file ID, the access frequency of the data unit, and the level of the data unit need to be recorded.
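For illustration only, a minimal sketch of one entry of such a data hierarchy table is given below; the class and field names are assumptions introduced here and are not taken from the figures.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class HierarchyEntry:
    """One row of the data hierarchy table kept in the storage-pool metadata."""
    level: str                                    # "hot", "warm" or "cold"
    access_frequency: float                       # counted accesses of the data unit
    lun_id: Optional[str] = None                  # SAN case (fig. 4): LUN the data unit belongs to
    unit_id: Optional[str] = None                 # SAN case (fig. 4): ID of the fixed-size data block
    lba_range: Optional[Tuple[int, int]] = None   # SAN case (fig. 4): logical address range [start, end)
    file_id: Optional[str] = None                 # NAS case (fig. 5): only the file ID is recorded
```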
Each time a data unit is accessed, the access frequency of the data unit is determined from the number of times the data corresponding to the data unit has been accessed, and the frequency recorded for the data unit in the data hierarchy table is updated accordingly. The control device 121 determines the level of each data unit on the basis of the counted access frequency. In the embodiment of the present application, data is divided into three levels, namely hot data, warm data, and cold data. The manner of determining the level of a data unit from its access frequency is the same as the conventional manner and is not described again here. Because the access frequency of a data unit may change at any moment, the level of each data unit may also change, and the control device 121 may determine, at intervals (e.g., periodically or aperiodically), whether the level of each data unit has changed according to its access frequency. For a data unit written for the first time, a default level may be set.
S320, updating the level of the data unit for the data unit with the changed level.
The control device 121 determines whether the level of a data unit has changed according to whether the variation of its access frequency exceeds a set value: if the variation of the access frequency exceeds the set value, it is determined that the level of the data unit has changed and needs to be updated; if the variation does not exceed the set value, it is determined that the level has not changed and does not need to be updated.
For example, if a data unit accessed more than 4 times per day is classified as hot data, then when its access frequency changes from 6 times per day to 5 times per day the data unit remains hot data; when its access frequency drops to 3 times per day, it falls within the access frequency range set for warm data (for example, more than 2 times and at most 4 times per day), and the level of the data unit must then be updated to warm data.
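Continuing the HierarchyEntry sketch above, the periodic check of S310 and S320 could look as follows under the example thresholds (hot data above 4 accesses per day, warm data above 2 and at most 4); the function names and constants are assumptions made for illustration.

```python
HOT_THRESHOLD = 4    # accesses per day above which a data unit counts as hot data
WARM_THRESHOLD = 2   # accesses per day above which (up to HOT_THRESHOLD) it counts as warm data

def classify(access_frequency: float) -> str:
    """Map an access frequency onto the hot/warm/cold levels of the example."""
    if access_frequency > HOT_THRESHOLD:
        return "hot"
    if access_frequency > WARM_THRESHOLD:
        return "warm"
    return "cold"

def refresh_level(entry: HierarchyEntry) -> bool:
    """Periodic check (S310/S320): update the level only if the frequency implies a new level."""
    new_level = classify(entry.access_frequency)
    if new_level == entry.level:
        return False           # level unchanged, nothing to update
    entry.level = new_level    # level changed, update the hierarchy table entry (S320)
    return True
```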
When the control device 121 receives an access request, the level of the data unit accessed by the request is determined according to the hierarchy table, and the access request needs to be buffered in the queue corresponding to the determined level. One level corresponds to one queue, and different queues buffer access requests of different levels. A method 600 in which different queues buffer access requests of different levels is described below with reference to fig. 6, which takes the buffering of one access request as an example.
S610, the host 110 issues an access request to the control device 121, and the control device 121 receives the access request issued by the host 110.
When the host 110 and the storage system 120 are connected through a SAN, the access request carries the identifier (ID) of the LUN of the data to be accessed and a logical block address (LBA); when the host 110 and the storage system 120 are connected through a NAS, the access request carries the identifier of the file.
S620, the control device 121 determines the level of the data to be accessed according to the access request.
If the access request carries a LUN ID and an LBA, the data unit in which the data to be accessed is located can be found in the hierarchy table shown in fig. 4 according to the LUN ID and the LBA, and the level of that data unit, that is, the level of the data to be accessed, can then be determined. If the access request carries a file ID, the file to be read can be found in the hierarchy table shown in fig. 5 according to the file ID, and the level of the file can be determined.
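As an illustration, step S620 could be sketched as a lookup over the hierarchy table entries assumed above; the function name and the default level returned for data not yet in the table are assumptions.

```python
from typing import List, Optional

def level_of_request(table: List[HierarchyEntry],
                     lun_id: Optional[str] = None, lba: Optional[int] = None,
                     file_id: Optional[str] = None) -> str:
    """Step S620: resolve the level of the data addressed by an access request."""
    for entry in table:
        # NAS case: the request carries a file ID and the data unit is a file (fig. 5).
        if file_id is not None and entry.file_id == file_id:
            return entry.level
        # SAN case: the request carries a LUN ID and an LBA; the data unit is the
        # fixed-size block whose logical address range contains that LBA (fig. 4).
        if (lun_id is not None and entry.lun_id == lun_id
                and entry.lba_range is not None
                and entry.lba_range[0] <= lba < entry.lba_range[1]):
            return entry.level
    return "cold"  # assumption: data not yet in the table is treated as the default (lowest) level
```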
S630, the control device 121 buffers the access request to the queue corresponding to the level of the data to be accessed according to the level of the data to be accessed.
In this embodiment of the application, the control device 121 sets a queue for each level of data, and when the level of the data to be accessed in the received access request is determined, the access request may be buffered in the queue corresponding to the level of the data to be accessed. The plurality of queues form a circular queue according to the level of the access request buffered by each queue, and the plurality of queues are accessed according to the order of the circular queue. The hot data corresponds to a first queue, the warm data corresponds to a second queue, and the cold data corresponds to a third queue. The circular queue formed by the first queue, the second queue and the third queue is as follows: the next queue of the first queue is the second queue, the next queue of the second queue is the third queue, and the next queue of the third queue is the first queue. The control device also performs accesses in the order of such a circular queue when accessing access requests in the queue.
In the embodiment of the application, each queue is preset with a preset value for the number of access requests continuously processed in that queue, and the preset values corresponding to different queues are different.
A queue corresponding to a higher level has a larger preset value, and a queue corresponding to a lower level has a smaller preset value. For example, the preset value of the first queue corresponding to hot data may be set to a first preset value, for example 5; the preset value of the second queue corresponding to warm data may be set to a value smaller than the first preset value and larger than a second preset value, for example 3; and the preset value of the third queue corresponding to cold data may be set to the second preset value, for example 2. In this case, when processing data, the control device obtains 5 access requests in succession from the first queue for processing, then 3 access requests from the second queue, then 2 access requests from the third queue, then again 5 access requests from the first queue, and so on. In this way, access requests that access hot data can be processed more quickly, and the access performance of hot data can be improved.
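A minimal sketch of step S630 and of the per-queue preset values in this example (5, 3, and 2) is given below; the class name and the dictionary layout are assumptions made for illustration.

```python
from collections import deque

class LevelQueues:
    """Per-level request queues forming the circular queue hot -> warm -> cold -> hot."""
    ORDER = ("hot", "warm", "cold")            # traversal order of the circular queue
    PRESET = {"hot": 5, "warm": 3, "cold": 2}  # example preset values from the description

    def __init__(self):
        self.queues = {level: deque() for level in self.ORDER}
        self._pos = 0      # index of the current queue within ORDER
        self._served = 0   # access requests continuously processed from the current queue

    def enqueue(self, request, level: str) -> None:
        """Step S630: buffer the access request in the queue matching its level."""
        self.queues[level].append(request)
```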
After access requests have been buffered in the corresponding queues by the method 600, a method 700 for obtaining access requests from the different queues is described below. The method 700 is executed, for example, by the control device 121 shown in fig. 1, and includes:
s710, after the control device 121 finishes processing an access request in the current queue, the control device 121 determines whether the number of access requests continuously processed in the current queue reaches a preset value of the current queue.
S720, when the number of access requests continuously processed in the current queue reaches the preset value of the current queue, the control device 121 obtains an access request from the next queue of the current queue.
S730, when the number of the access requests continuously processed in the current queue does not reach the preset value of the current queue, the control device 121 continues to obtain the access requests from the current queue.
For example, if the preset value of the number of access requests consecutively processed in the first queue corresponding to hot data is 5, the preset value of the number of access requests consecutively processed in the second queue corresponding to warm data is 3, and the preset value of the number of access requests consecutively processed in the third queue corresponding to cold data is 2, the process of processing the access requests from the plurality of queues by the control device 121 will be described below with reference to the flowchart of fig. 8.
S801, after processing an access request of the current first queue, the control device 121 counts the number of access requests that are continuously processed in the current first queue.
S802, the control device 121 determines whether the number of access requests that are continuously processed in the current first queue reaches the preset value (5) of the first queue; if so, S804 is executed, and if not, S803 is executed.
S803, the control device 121 continues to acquire an access request from the first queue for processing.
The control device executes S801 after executing S803.
S804, the control device 121 obtains an access request from a second queue, which is a next queue of the first queue, where the second queue is a current queue.
S805, after processing an access request of the current second queue, the control device 121 counts the number of access requests that are continuously processed in the current second queue.
S806, the control device 121 determines whether the number of access requests that are continuously processed in the current second queue reaches the preset value (3) of the second queue; if so, S808 is executed, and if not, S807 is executed.
S807, the control device 121 continues to acquire an access request from the second queue for processing.
The control device executes S805 after executing S807.
S808, the control device 121 obtains the access request from the third queue, where the third queue is the current queue.
S809, after processing an access request in the third queue, the control device 121 counts the number of access requests that are continuously processed in the current third queue.
S810, the control device 121 determines whether the number of access requests that are continuously processed in the current third queue reaches the preset value (2) of the third queue; if so, S803 is executed, and if not, S811 is executed.
S811, the control device 121 continues to acquire an access request from the third queue for processing.
The control device executes S809 after executing S811.
In this way, the method 800 obtains access requests from the three queues and controls the number of access requests taken from each queue according to that queue's preset value, thereby controlling the access performance of access requests of different levels.
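Continuing the LevelQueues sketch above, the selection loop of fig. 8 could be read as follows; treating an empty queue like a queue whose preset value has been reached is an assumption, since the description does not spell out that case.

```python
def next_request(q: LevelQueues):
    """Return the next access request to process, following steps S801 to S811."""
    # Visit at most every queue plus one extra step, so that the advance made when a
    # preset value is reached still lets the loop come back around the circular queue.
    for _ in range(len(q.ORDER) + 1):
        level = q.ORDER[q._pos]
        queue = q.queues[level]
        if q._served < q.PRESET[level] and queue:
            q._served += 1             # one more request continuously processed (S801/S805/S809)
            return queue.popleft()     # keep taking from the current queue (S803/S807/S811)
        # Preset value reached (S802/S806/S810), or the queue is empty (assumption):
        # the next queue in the circular order becomes the current queue (S804/S808).
        q._pos = (q._pos + 1) % len(q.ORDER)
        q._served = 0
    return None                        # every queue is empty
```

With the example preset values, repeated calls would take up to 5 requests in succession from the hot-data queue, then up to 3 from the warm-data queue, then up to 2 from the cold-data queue, and so on.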
It will be appreciated that the access frequency may cover the frequency of read/write, modify, lookup, and similar operations, where read/write includes both write operations and read operations, and a write operation may be a first write, a modification, or the like.
The method provided by the embodiments of the present application is described in detail above with reference to fig. 1 to 8, and the embodiments of the apparatus of the present application are described in detail below. It is to be understood that the description of the method embodiments corresponds to the description of the apparatus embodiments, and therefore reference may be made to the preceding method embodiments for parts not described in detail.
Fig. 9 is a schematic structural diagram of an apparatus 900 for processing data according to an embodiment of the present disclosure. The apparatus 900 includes:
a processing unit 910, configured to determine, after processing an access request in a current queue, whether the number of access requests that are continuously processed in the current queue reaches a preset value of the current queue, where the apparatus 900 includes multiple queues, different queues are used to buffer access requests of different levels, the multiple queues form a circular queue according to the level of the access request buffered in each queue, the multiple queues are accessed according to the order of the circular queue, and each queue corresponds to a preset value of the number of access requests that are continuously processed;
an obtaining unit 920, configured to obtain an access request from a next queue of the current queue when the number of access requests that are continuously processed in the current queue reaches a preset value of the current queue;
the obtaining unit is further configured to continue to obtain the access request from the current queue when the number of the access requests that are continuously processed in the current queue does not reach a preset value of the current queue.
As an alternative embodiment, the apparatus further comprises: a receiving unit 930 configured to receive an access request;
the processing unit 910 is further configured to:
determining a level of the received access request;
and caching the access request into a queue corresponding to the determined level.
As an optional embodiment, the processing unit 910 is specifically configured to: determining a data unit to which the data to be accessed belongs according to the identifier of the data to be accessed carried in the access request; determining a level of the access request based on the level of the data unit.
As an alternative embodiment, the processing unit 910 is further configured to:
and updating the access frequency of the data unit corresponding to the data to be accessed, wherein the access frequency is used for setting the level of the data unit.
As an alternative embodiment, the processing unit 910 is further configured to:
periodically determining whether the level of the data unit changes according to the access frequency of the data unit;
and updating the level of the data unit after determining that the level of the data unit changes.
As an optional embodiment, the higher the level of the access request buffered by each queue is, the larger the preset value corresponding to each queue is.
The apparatus 900 for processing data according to the embodiment of the present application may correspond to performing the method described in the embodiment of the present application, and the above and other operations and/or functions of each unit in the apparatus 900 for processing data are respectively for implementing corresponding flows of the methods in fig. 3 and fig. 6 to fig. 8, and are not described herein again for brevity.
Fig. 10 is a schematic structural diagram of an apparatus 1000 for processing data provided in an embodiment of the present application. The apparatus 1000 for processing data includes: a processor 1010, a memory 1020, a communication interface 1030, and a bus 1040.
It is to be understood that the processor 1010 in the apparatus 1000 for processing data shown in fig. 10 may correspond to the processing unit 910 in the apparatus 900 for processing data in fig. 9. The communication interface 1030 in the apparatus 1000 for processing data may correspond to the obtaining unit 920 and the receiving unit 930 in the apparatus 900 for processing data.
The processor 1010 may be coupled to the memory 1020. The memory 1020 may be used to store program code and data. The memory 1020 may be a storage unit inside the processor 1010, an external storage unit independent of the processor 1010, or a component including both a storage unit inside the processor 1010 and an external storage unit independent of the processor 1010.
Optionally, the apparatus 1000 may also include a bus 1040. The memory 1020 and the communication interface 1030 may be connected to the processor 1010 through a bus 1040. The bus 1040 may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus 1040 may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one line is shown in FIG. 10, but it is not intended that there be only one bus or one type of bus.
It should be understood that, in the embodiments of the present application, the processor 1010 may be a central processing unit (CPU). The processor may also be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. Alternatively, the processor 1010 may adopt one or more integrated circuits for executing related programs, so as to implement the technical solutions provided in the embodiments of the present application.
The memory 1020 may include both read-only memory and random access memory, and provides instructions and data to the processor 1010. A portion of processor 1010 may also include non-volatile random access memory. For example, processor 1010 may also store device type information.
When the apparatus 1000 is running, the processor 1010 executes the computer-executable instructions in the memory 1020 to perform the operational steps of the methods 300, 600, 700, and 800 described above by the apparatus 1000.
It should be understood that the apparatus 1000 according to the embodiment of the present application may correspond to the apparatus 900 in the embodiment of the present application, and the above and other operations and/or functions of each unit in the apparatus 1000 are respectively for implementing corresponding flows of the method, and are not described herein again for brevity.
Optionally, in some embodiments, the present application further provides a computer-readable medium storing program code, which when executed on a computer, causes the computer to perform the method in the above aspects.
Optionally, in some embodiments, the present application further provides a computer program product, where the computer program product includes: computer program code which, when run on a computer, causes the computer to perform the method of the above-mentioned aspects.
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, the above embodiments may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example, from one website, computer, server, or data center to another website, computer, server, or data center in a wired (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (e.g., infrared, radio, microwave) manner. The computer-readable storage medium may be any available medium accessible to a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, a hard disk, or a magnetic tape), an optical medium (e.g., a DVD), or a semiconductor medium. The semiconductor medium may be a solid state drive (SSD).
It should be understood that, in the various embodiments of the present application, the sequence numbers of the above-mentioned processes do not mean the execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a read-only memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
Claims (14)
1. A method of processing data, performed by a storage controller, the method comprising:
after an access request in a current queue is processed, determining whether the number of access requests which are continuously processed in the current queue reaches a preset value of the current queue, wherein the storage controller comprises a plurality of queues, different queues are used for caching access requests of different levels, the queues form a circular queue according to the level of the access request cached by each queue, the queues are accessed according to the sequence of the circular queue, and each queue corresponds to a preset value of the number of access requests which are continuously processed;
when the number of the access requests which are continuously processed in the current queue reaches a preset value of the current queue, acquiring the access requests from the next queue of the current queue;
and when the number of the access requests which are continuously processed in the current queue does not reach the preset value of the current queue, continuously acquiring the access requests from the current queue.
2. The method of claim 1, further comprising:
receiving an access request, and determining the level of the received access request;
and caching the access request into a queue corresponding to the determined level.
3. The method of claim 2, wherein determining the level of the received access request comprises:
determining a data unit to which the data to be accessed belongs according to the identifier of the data to be accessed carried in the access request;
determining a level of the access request based on the level of the data unit.
4. The method of claim 3, wherein the method comprises:
and updating the access frequency of the data unit corresponding to the data to be accessed, wherein the access frequency is used for setting the level of the data unit.
5. The method of claim 4, wherein the method comprises:
periodically determining whether the level of the data unit changes according to the access frequency of the data unit;
and updating the level of the data unit after determining that the level of the data unit changes.
6. The method according to any one of claims 1 to 5, wherein the higher the level of the access request buffered in each queue is, the larger the preset value corresponding to each queue is.
7. An apparatus for processing data, the apparatus comprising:
the storage controller comprises a plurality of queues, wherein different queues are used for caching access requests of different levels, the queues form a circular queue according to the level of the access request cached by each queue, the queues are accessed according to the sequence of the circular queue, and each queue corresponds to a preset value of the number of the access requests which are continuously processed;
an obtaining unit, configured to obtain an access request from a next queue of the current queue when the number of access requests that are continuously processed in the current queue reaches a preset value of the current queue;
the obtaining unit is further configured to continue to obtain the access request from the current queue when the number of the access requests that are continuously processed in the current queue does not reach a preset value of the current queue.
8. The apparatus of claim 7, further comprising:
a receiving unit configured to receive an access request;
the processing unit is further to:
determining a level of the received access request;
and caching the access request into a queue corresponding to the determined level.
9. The apparatus according to claim 8, wherein the processing unit is specifically configured to:
determining a data unit to which the data to be accessed belongs according to the identifier of the data to be accessed carried in the access request;
determining a level of the access request based on the level of the data unit.
10. The apparatus of claim 9, wherein the processing unit is further configured to:
and updating the access frequency of the data unit corresponding to the data to be accessed, wherein the access frequency is used for setting the level of the data unit.
11. The apparatus of claim 10, wherein the processing unit is further configured to:
periodically determining whether the level of the data unit changes according to the access frequency of the data unit;
and updating the level of the data unit after determining that the level of the data unit changes.
12. The apparatus according to any one of claims 7 to 11, wherein the higher the level of the access request buffered in each queue is, the larger the preset value corresponding to each queue is.
13. A computer program product, comprising: computer program code which, when run on a computer, causes the computer to perform the method of any one of claims 1 to 6.
14. A computer-readable medium having program code stored thereon which, when run on a computer, causes the computer to perform the method of any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911153145.4A CN112835508A (en) | 2019-11-22 | 2019-11-22 | Method and device for processing data |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911153145.4A CN112835508A (en) | 2019-11-22 | 2019-11-22 | Method and device for processing data |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112835508A true CN112835508A (en) | 2021-05-25 |
Family
ID=75921701
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911153145.4A Pending CN112835508A (en) | 2019-11-22 | 2019-11-22 | Method and device for processing data |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112835508A (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1606301A (en) * | 2004-07-09 | 2005-04-13 | 清华大学 | A resource access shared scheduling and controlling method and apparatus |
CN105246052A (en) * | 2015-10-14 | 2016-01-13 | 中国联合网络通信集团有限公司 | Data distribution method and device |
CN109257293A (en) * | 2018-08-01 | 2019-01-22 | 北京明朝万达科技股份有限公司 | A kind of method for limiting speed, device and gateway server for network congestion |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20210525