CN112214178A - Storage system, data reading method and data writing method

Info

Publication number
CN112214178A
Authority
CN
China
Prior art keywords
data
pool
osd
ssd
read
Prior art date
Legal status
Granted
Application number
CN202011273132.3A
Other languages
Chinese (zh)
Other versions
CN112214178B (en)
Inventor
黄军
Current Assignee
New H3C Big Data Technologies Co Ltd
Original Assignee
New H3C Big Data Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by New H3C Big Data Technologies Co Ltd
Priority to CN202011273132.3A
Publication of CN112214178A
Application granted
Publication of CN112214178B
Status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601 Interfaces specially adapted for storage systems
    • G06F3/0628 Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0655 Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
    • G06F3/0656 Data buffering arrangements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601 Interfaces specially adapted for storage systems
    • G06F3/0602 Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/061 Improving I/O performance

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The embodiments of the present application provide a storage system, a data reading method and a data writing method, relating to the technical field of storage. The storage system comprises a cache pool and a back-end pool. The OSD in the cache pool is used for inquiring whether an object of the data to be read exists in the cache pool; if so, the data to be read is read from the cache pool; if not, the data reading request is forwarded to the back-end pool. The OSD in the back-end pool is used for inquiring whether the object of the data to be read exists in the SSD of the back-end pool; if so, the data to be read is read from the SSD of the back-end pool; if not, the data to be read is read from the HDD of the back-end pool. The OSD in the cache pool is also used for inquiring whether an object of the data to be written exists in the SSD of the cache pool; if so, the data to be written is written into the SSD of the cache pool; if not, the data writing request is forwarded to the back-end pool. The OSD in the back-end pool is also used for writing the data to be written into the SSD and the HDD of the back-end pool. The data read-write performance can thereby be improved.

Description

Storage system, data reading method and data writing method
Technical Field
The present invention relates to the field of storage technologies, and in particular, to a storage system, a data reading method, and a data writing method.
Background
Data storage has become particularly important with the rise of big data. Because a mechanical hard disk (HDD) has the advantage of low cost, HDDs are currently widely used to store mass data. However, the data read/write performance of an HDD is limited: when mass data is stored on an HDD and a data read/write (I/O) request is received, the storage location of the data to be read or written needs to be searched for among the mass data, which results in a long processing delay and poor data read/write performance.
Disclosure of Invention
Embodiments of the present invention provide a storage system, a data reading method, and a data writing method, so as to improve data read/write performance. The specific technical solutions are as follows:
in a first aspect, an embodiment of the present application provides a storage system, where the storage system includes a cache pool and a backend pool, the cache pool includes a plurality of object storage devices OSD and a solid state disk SSD corresponding to each OSD, and the backend pool includes an SSD, a plurality of OSDs and a mechanical hard disk HDD corresponding to each OSD;
the OSD in the cache pool is used for receiving a data reading request aiming at data to be read and sent by a client, and inquiring whether an object of the data to be read exists in the SSD of the cache pool or not; if the data to be read exists, reading the data to be read from the SSD of the cache pool based on the object of the data to be read; if not, forwarding the data reading request to the OSD in the back-end pool;
the OSD in the back-end pool is used for receiving the data reading request and inquiring whether an object of the data to be read exists in the SSD of the back-end pool or not through a flash cache module; if so, reading the data to be read from the SSD of the back-end pool based on the object of the data to be read; if not, reading the data to be read from the HDD of the back-end pool;
the OSD in the cache pool is also used for receiving a data writing request aiming at data to be written, which is sent by a client, and inquiring whether an object of the data to be written exists in the SSD of the cache pool; if so, writing the data to be written into the SSD of the cache pool based on the object of the data to be written; if not, forwarding the data write-in request to the OSD in the back-end pool;
and the OSD in the back-end pool is also used for receiving the data writing request and writing the data to be written into the SSD and the HDD of the back-end pool.
In a possible implementation manner, the OSD in the back-end pool is further configured to cache the read data in the SSD of the back-end pool after the data to be read is read from the HDD of the back-end pool.
In a possible implementation manner, the OSD in the backend pool is further configured to send a data write response to the OSD in the cache pool after the data to be written is written in the SSD and the HDD of the backend pool, where the data write response carries the data written this time;
and the OSD in the cache pool is used for receiving a data writing response sent by the OSD in the backend pool and caching data carried in the data writing response in the SSD of the cache pool.
In a second aspect, an embodiment of the present application provides a data reading method, where the method is applied to an object storage device OSD in a cache pool of a storage system, the storage system further includes a backend pool, the cache pool includes a plurality of OSDs and a solid state disk SSD corresponding to each OSD, the backend pool includes the SSD, the plurality of OSDs, and a mechanical hard disk HDD corresponding to each OSD, and the method includes:
receiving a data reading request aiming at data to be read, which is sent by a client;
inquiring whether the SSD of the cache pool has the object of the data to be read or not;
if the data to be read exists, reading the data to be read from the SSD of the cache pool based on the object of the data to be read;
if the data to be read does not exist in the SSD, forwarding the data reading request to the OSD in the back-end pool, so that the OSD in the back-end pool inquires whether the data to be read exists in the SSD of the back-end pool or not through a flash cache module; if so, reading the data to be read from the SSD of the back-end pool based on the object of the data to be read; and if the data to be read does not exist, reading the data to be read from the HDD of the back-end pool.
In a third aspect, an embodiment of the present application provides a data reading method, where the method is applied to an object storage device OSD in a backend pool of a storage system, the storage system further includes a cache pool, the cache pool includes a plurality of OSDs and a solid state disk SSD corresponding to each OSD, and the backend pool includes the SSD, the plurality of OSDs, and a mechanical hard disk corresponding to each OSD, and the method includes:
receiving a data reading request sent by the OSD in the cache pool;
inquiring whether the SSD of the back-end pool has the object of the data to be read or not through a flash cache module;
if so, reading the data to be read from the SSD of the back-end pool based on the object of the data to be read;
and if the data to be read does not exist, reading the data to be read from the HDD of the back-end pool.
In one possible implementation, after the data to be read is read from the HDD in the backend pool, the method further includes:
and caching the read data in the SSD of the back-end pool.
In a fourth aspect, an embodiment of the present application provides a data writing method, where the method is applied to an object storage device OSD in a cache pool of a storage system, the storage system further includes a backend pool, the cache pool includes a plurality of OSDs and a solid state disk SSD corresponding to each OSD, the backend pool includes the SSD, the plurality of OSDs, and a mechanical hard disk HDD corresponding to each OSD, and the method includes:
receiving a data writing request aiming at data to be written, which is sent by a client;
inquiring whether the SSD of the cache pool has the object of the data to be written;
if so, writing the data to be written into the SSD of the cache pool based on the object of the data to be written;
and if the data to be written does not exist, forwarding the data writing request to the OSD in the back-end pool so that the OSD in the back-end pool writes the data to be written into the SSD and the HDD of the back-end pool.
In one possible implementation, after forwarding the data write request to the OSD in the backend pool, the method further includes:
receiving a data write-in response sent by the OSD in the back-end pool, wherein the data write-in response carries the data written this time;
and caching the data carried in the data write response in the SSD of the cache pool.
In a fifth aspect, an embodiment of the present application provides a data writing method, where the method is applied to an object storage device OSD in a backend pool of a storage system, the storage system further includes a cache pool, the cache pool includes a plurality of OSDs and a solid state disk SSD corresponding to each OSD, the backend pool includes the SSD, the plurality of OSDs, and a mechanical hard disk corresponding to each OSD, and the method includes:
receiving a data writing request aiming at data to be written, which is sent by the OSD in the cache pool;
and writing the data to be written into the SSD and the HDD of the back-end pool.
In one possible implementation manner, after the data to be written is written into the SSD and the HDD of the backend pool, the method further includes:
and sending a data write-in response to the OSD in the cache pool, wherein the data write-in response carries the data written in this time, so that the data carried in the data write-in response is cached in the SSD of the cache pool by the OSD in the cache pool.
In a sixth aspect, an embodiment of the present invention further provides an electronic device, including a processor, a communication interface, a memory, and a communication bus, where the processor, the communication interface, and the memory complete mutual communication through the communication bus;
a memory for storing a computer program;
a processor, configured to implement the data reading method steps of the second aspect and/or the data writing method steps of the fourth aspect when executing the program stored in the memory.
In a seventh aspect, an embodiment of the present invention further provides an electronic device, including a processor, a communication interface, a memory, and a communication bus, where the processor, the communication interface, and the memory complete mutual communication through the communication bus;
a memory for storing a computer program;
a processor, configured to implement the data reading method steps of the third aspect and/or the data writing method steps of the fifth aspect when executing the program stored in the memory.
In an eighth aspect, the present invention further provides a computer-readable storage medium, in which a computer program is stored, and the computer program, when executed by a processor, implements the data reading method according to the second aspect and/or the data writing method according to the fourth aspect.
In a ninth aspect, the present invention further provides a computer-readable storage medium, in which a computer program is stored, and the computer program, when executed by a processor, implements the data reading method of the third aspect and/or the data writing method of the fifth aspect.
In a tenth aspect, embodiments of the present application further provide a computer program product containing instructions, which when run on a computer, cause the computer to perform the data reading method of the second aspect and/or the data writing method of the fourth aspect.
In an eleventh aspect, embodiments of the present application further provide a computer program product containing instructions, which when run on a computer, cause the computer to perform the data reading method of the third aspect and/or the data writing method of the fifth aspect.
By adopting the above technical solution, data can preferentially be read from or written into the SSD of the cache pool. Since the SSD has the characteristic of a high read/write speed, reading data from or writing data into the SSD of the cache pool is fast, and the data read/write efficiency can be improved. If the object of the data to be read does not exist in the SSD of the cache pool, the object of the data to be read can further be searched for in the SSD of the back-end pool.
Of course, not all of the advantages described above need to be achieved at the same time in the practice of any one product or method of the invention.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a schematic structural diagram of a storage system according to an embodiment of the present application;
fig. 2 is a flowchart of a data reading method according to an embodiment of the present application;
FIG. 3 is a flow chart of another data reading method according to an embodiment of the present disclosure;
fig. 4 is a flowchart of a data writing method according to an embodiment of the present application;
FIG. 5 is a flowchart of another data writing method according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
An embodiment of the present application provides a storage system. As shown in fig. 1, the storage system includes a cache pool and a back-end pool, where the cache pool includes a plurality of Object Storage Devices (OSDs) and a Solid State Disk (SSD) corresponding to each OSD, and the back-end pool includes an SSD, a plurality of OSDs and an HDD corresponding to each OSD.
The SSD in the cache pool and the SSD in the back-end pool are both used for caching data, and the HDD in the back-end pool is used for storing data. Each OSD in the cache pool has a mapping relation with an SSD and can perform data read/write operations on the SSD that has the mapping relation with it. Each OSD in the back-end pool has a mapping relation with an HDD and can perform data read/write operations on the HDD that has the mapping relation with it.
The OSD in the cache pool is used for receiving a data reading request aiming at the data to be read sent by the client and inquiring whether an object of the data to be read exists in the SSD in the cache pool or not; if the data to be read exists, reading the data to be read from the SSD of the cache pool based on the object of the data to be read; and if not, forwarding the data reading request to the OSD in the back-end pool.
The OSD in the back-end pool is used for receiving a data reading request and inquiring whether an object of data to be read exists in the SSD of the back-end pool or not through a flash cache module; if so, reading the data to be read from the SSD of the back-end pool based on the object of the data to be read; and if the data does not exist, reading the data to be read from the HDD of the back-end pool.
The OSD in the cache pool is also used for receiving a data writing request aiming at the data to be written, which is sent by the client, and inquiring whether an object of the data to be written exists in the SSD in the cache pool; if so, writing the data to be written into the SSD of the cache pool based on the object of the data to be written; and if not, forwarding the data write-in request to the OSD in the back-end pool.
And the OSD in the back-end pool is used for receiving the data writing request and writing the data to be written into the SSD and the HDD of the back-end pool.
The flash cache module is a Linux-based kernel module that uses the SSD as a cache and caches hot data on the SSD, thereby speeding up data processing.
By adopting the above technical solution, data can preferentially be read from or written into the SSD of the cache pool. Since the SSD has the characteristic of a high read/write speed, reading data from or writing data into the SSD of the cache pool is fast, and the data read/write efficiency can be improved. If the object of the data to be read does not exist in the SSD of the cache pool, the object of the data to be read can further be searched for in the SSD of the back-end pool.
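For illustration only, the following Python sketch models the two-tier layout described above; plain dictionaries stand in for the SSD and HDD object stores, and the class and attribute names (CachePoolOSD, BackendPoolOSD, StorageSystem) are assumptions of this sketch rather than part of the claimed system.

```python
# Illustrative sketch (not the claimed implementation) of the cache pool /
# back-end pool layout, where each OSD is mapped to the device(s) it may access.

class CachePoolOSD:
    """Cache-pool OSD: mapped to one SSD on which it reads and writes objects."""
    def __init__(self):
        self.ssd = {}   # object name -> data; a dict stands in for the SSD


class BackendPoolOSD:
    """Back-end-pool OSD: mapped to an HDD for storage, with an SSD used as a
    cache that is queried through a flashcache-style module."""
    def __init__(self):
        self.ssd = {}   # SSD cache of the back-end pool
        self.hdd = {}   # HDD persistent store of the back-end pool


class StorageSystem:
    """A cache pool and a back-end pool, each holding several OSDs."""
    def __init__(self, n_cache_osds=3, n_backend_osds=3):
        self.cache_pool = [CachePoolOSD() for _ in range(n_cache_osds)]
        self.backend_pool = [BackendPoolOSD() for _ in range(n_backend_osds)]
```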
In one embodiment, the OSD in the cache pool is further configured to, when it is determined that the object of the data to be read does not exist in the SSD of the cache pool, calculate a master OSD in the back-end pool for processing the data read request, and forward the data read request to the master OSD. Accordingly, the master OSD in the back-end pool receives the data read request and processes it in the manner described above. For the method of calculating the master OSD for processing the data read request, reference may be made to the description in the related art, which is not repeated here.
Optionally, the OSD in the back-end pool is further configured to, after the data to be read is read from the HDD in the back-end pool, cache the read data in the SSD in the back-end pool.
By adopting this embodiment of the application, after the data read this time is cached in the SSD of the back-end pool, a subsequent data read request for the same data can be served from the SSD. Compared with reading the data from the HDD, this improves read efficiency and thus the data read performance of the storage system.
In another embodiment of the present application, after writing data to be written into the SSD and the HDD of the back-end pool, the OSD in the back-end pool sends a data write response to the OSD in the cache pool, where the data write response carries the data written this time;
and the OSD in the cache pool is used for receiving the data writing response sent by the OSD in the back-end pool and caching the data carried in the data writing response in the SSD of the cache pool.
By adopting this embodiment of the application, the data carried in the data writing response is cached in the SSD of the cache pool, which increases the probability that a subsequent write to the same data succeeds in the SSD of the cache pool and thus improves data writing efficiency.
In the embodiment of the application, in order to avoid caching excessive data in the cache pool, the data cached in the SSD of the cache pool may be written into the HDD of the back-end pool periodically, or after the amount of data cached in the SSD of the cache pool reaches a certain threshold.
Based on this, the OSD in the cache pool is further used for sending a flush (disk-flush) request to the OSD in the back-end pool.
The OSD in the back-end pool is further used for receiving the flush request and, in response to the flush request, sending a data acquisition request to the OSD in the cache pool. The data acquisition request is used for requesting the data written in the SSD of the cache pool.
The OSD in the cache pool is further used for receiving the data acquisition request and sending the data cached in the SSD of the cache pool to the OSD in the back-end pool.
The OSD in the back-end pool is further used for storing the received data in the HDD of the back-end pool.
By adopting this technical solution, the flash cache module is combined with the cache pool; in the case where the SSD and the HDD coexist, the impact of the flush operation on data read/write performance is alleviated, the flush latency is reduced by the storage system, and data read/write efficiency is improved.
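As a rough, non-authoritative sketch of this flush exchange (the threshold value, class names and message-passing style below are assumptions of the sketch, not taken from the embodiment):

```python
FLUSH_THRESHOLD = 0.8   # assumed fraction of cache-pool SSD capacity in use


class CachePoolFlushSide:
    """Cache-pool OSD side of the flush exchange (a dict stands in for the SSD)."""
    def __init__(self, cache_ssd):
        self.cache_ssd = cache_ssd

    def maybe_send_flush_request(self, used_fraction, backend_osd):
        # Trigger periodically or, as sketched here, once the cached amount
        # reaches a threshold; then send the flush request to the back-end OSD.
        if used_fraction >= FLUSH_THRESHOLD:
            backend_osd.on_flush_request(self)

    def on_data_acquisition_request(self):
        # Return the data currently written in the cache-pool SSD.
        return dict(self.cache_ssd)


class BackendPoolFlushSide:
    """Back-end-pool OSD side of the flush exchange (a dict stands in for the HDD)."""
    def __init__(self, hdd):
        self.hdd = hdd

    def on_flush_request(self, cache_osd):
        # Respond to the flush request with a data acquisition request, then
        # store the received data on the back-end-pool HDD.
        cached = cache_osd.on_data_acquisition_request()
        self.hdd.update(cached)
```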
On the basis of the storage system shown in fig. 1, the embodiment of the present application further provides a data reading method and a data writing method, which are described in detail below.
As shown in fig. 2, an embodiment of the present application provides a data reading method, which is applied to an OSD in a cache pool of a storage system, and the method includes:
S201, receiving a data reading request aiming at data to be read, which is sent by a client.
S202, inquiring whether an object of the data to be read exists in the SSD of the cache pool. If yes, execute S203; if not, execute S204.
Specifically, the OSD in the cache pool may inquire whether the object of the data to be read exists in the SSD that has a mapping relation with this OSD.
S203, reading the data to be read from the SSD of the cache pool based on the object of the data to be read.
After the OSD in the cache pool reads the data to be read from the SSD of the cache pool, the read data can be returned to the client.
S204, forwarding a data reading request to the OSD in the back-end pool so that the OSD in the back-end pool inquires whether an object of data to be read exists in the SSD of the back-end pool through a flash cache module; if so, reading the data to be read from the SSD of the back-end pool based on the object of the data to be read; and if the data does not exist, reading the data to be read from the HDD of the back-end pool.
When determining that the object of the data to be read does not exist in the SSD of the cache pool, the OSD in the cache pool may calculate a master OSD in the back-end pool for processing the data read request, and forward the data read request to the master OSD. For the method of calculating the master OSD for processing the data read request, reference may be made to the description in the related art, which is not repeated here.
By adopting the method, after receiving the data reading request, the OSD in the cache pool preferentially reads the data from the SSD of the cache pool. If the object of the data to be read does not exist in the SSD of the cache pool, the data reading request is forwarded to the OSD in the back-end pool, so that the OSD in the back-end pool preferentially reads the data from the SSD of the back-end pool; only if the object of the data to be read does not exist in the SSD of the back-end pool is the data read from the HDD. Because there are two levels of SSD cache, the possibility of reading data from an SSD cache is increased, and since the SSD has the characteristic of a high read/write speed, reading data from the SSD is fast and the read performance can be improved.
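The read path of S201-S204 can be sketched in a few lines of Python; dictionaries stand in for the SSD object store, the forwarding callable stands in for sending the request to the master OSD of the back-end pool, and all names are assumptions of the sketch rather than the patented implementation.

```python
def cache_pool_read(obj_name, cache_ssd, forward_to_backend):
    """Cache-pool OSD read path (S201-S204), sketched with a dict as the SSD."""
    # S202: query whether the object of the data to be read exists in the
    # cache-pool SSD.
    if obj_name in cache_ssd:
        # S203: read the data from the cache-pool SSD and return it to the client.
        return cache_ssd[obj_name]
    # S204: forward the data read request to the master OSD in the back-end pool.
    return forward_to_backend(obj_name)


# Example: a hit in the cache-pool SSD is served without forwarding.
hit = cache_pool_read("obj-1", {"obj-1": b"cached"}, lambda name: b"from-backend")
```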
Corresponding to the data reading method shown in fig. 2, an embodiment of the present application further provides a data reading method applied to an OSD in a backend pool of a storage system, as shown in fig. 3, where the method includes:
S301, receiving a data reading request sent by the OSD in the cache pool.
S302, inquiring whether the SSD of the back-end pool has the object of the data to be read or not through the flash cache module.
If yes, execute S303; if not, execute S304.
S303, reading the data to be read from the SSD of the back-end pool based on the object of the data to be read.
S304, reading the data to be read from the HDD of the back-end pool.
Optionally, after the data to be read is read from the HDD in the backend pool, the read data may be buffered in the SSD in the backend pool, so as to improve the hit rate of subsequently reading the data.
It can be understood that after the OSD in the back-end pool reads the data to be read, a data read response may be returned to the OSD in the cache pool, where the data read response carries the read data, and then the OSD in the cache pool may return the read data to the client.
By adopting the method, in the case that the data to be read is not cached in the cache pool, the OSD in the back-end pool first inquires, through the flash cache module, whether the data to be read exists in the SSD of the back-end pool. If the data to be read exists in the SSD of the back-end pool, the data does not need to be read from the HDD, and the data read efficiency can be improved.
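The corresponding back-end-pool side of the read path (S301-S304), including the optional caching of data read from the HDD, may be sketched as follows; again the dicts and function name are illustrative assumptions.

```python
def backend_pool_read(obj_name, backend_ssd, backend_hdd):
    """Back-end-pool OSD read path (S301-S304), sketched with dicts as devices."""
    # S302: query, through the flashcache-style module, whether the object of
    # the data to be read exists in the back-end-pool SSD.
    if obj_name in backend_ssd:
        # S303: read the data from the back-end-pool SSD.
        return backend_ssd[obj_name]
    # S304: read the data from the back-end-pool HDD.
    data = backend_hdd[obj_name]
    # Optional step from the embodiment: cache the data just read in the SSD
    # so that a subsequent read of the same object hits the SSD.
    backend_ssd[obj_name] = data
    return data
```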
As shown in fig. 4, an embodiment of the present application further provides a data writing method, where the method is applied to an OSD in a cache pool of a storage system, and the method includes:
S401, receiving a data writing request aiming at data to be written, which is sent by a client.
S402, inquiring whether an object of the data to be written exists in the SSD of the cache pool.
If yes, execute S403; if not, execute S404.
S403, writing the data to be written into the SSD of the cache pool based on the object of the data to be written.
After the data to be written is written into the SSD of the cache pool, a data write success response can be returned to the client.
S404, forwarding a data writing request to the OSD in the back-end pool, so that the OSD in the back-end pool writes data to be written into the SSD and the HDD of the back-end pool.
By adopting the method, after the OSD in the cache pool receives the data writing request, if it is found that the object of the data to be written exists in the SSD of the cache pool, the data to be written can be written into the SSD of the cache pool; in this case, the data is preferentially written into the SSD of the cache pool, which improves data writing efficiency. If the SSD of the cache pool has no object of the data to be written, the data is written into the SSD and the HDD of the back-end pool, so that the data can finally be written successfully, giving good data writing performance.
Optionally, after the OSD in the cache pool forwards the data write request to the OSD in the backend pool, a data write response sent by the OSD in the backend pool may also be received, where the data write response carries the data written this time. Then the OSD in the cache pool can cache the data carried in the data write response in the SSD of the cache pool.
In this embodiment of the present application, after receiving the data write response, the OSD in the cache pool may return a data write success response to the client.
By adopting the method, the data carried in the data writing response is cached in the SSD of the cache pool, which increases the probability that a subsequent write to the same data succeeds in the SSD of the cache pool and thus improves data writing efficiency.
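A minimal sketch of this write path (S401-S404), including the caching of the data carried in the write response; the function names and the response format are assumptions of the sketch.

```python
def cache_pool_write(obj_name, data, cache_ssd, forward_to_backend):
    """Cache-pool OSD write path (S401-S404), sketched with a dict as the SSD."""
    # S402/S403: if the object of the data to be written already exists in the
    # cache-pool SSD, write the data there and acknowledge the client.
    if obj_name in cache_ssd:
        cache_ssd[obj_name] = data
        return "write-success"
    # S404: otherwise forward the data write request to the back-end-pool OSD.
    response = forward_to_backend(obj_name, data)
    # Cache the data carried in the data write response in the cache-pool SSD,
    # so that later reads/writes of the same object are served by the cache pool.
    cache_ssd[obj_name] = response["data"]
    return "write-success"
```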
Corresponding to the data writing method shown in fig. 4, an embodiment of the present application further provides a data writing method, which is applied to an OSD in a backend pool of a storage system, as shown in fig. 5, where the method includes:
S501, receiving a data writing request aiming at data to be written, which is sent by the OSD in the cache pool.
The data writing request is sent when the OSD in the cache pool does not find the object of the data to be written in the SSD of the cache pool.
S502, writing the data to be written into the SSD and the HDD of the back-end pool.
The OSD in the back-end pool may write the data to be written into the SSD and the HDD of the back-end pool in an asynchronous write mode.
By adopting the method, the OSD in the back-end pool only needs to write the data to be written into the SSD and the HDD when the SSD of the cache pool has no object of the data to be written; compared with writing data directly into the HDD, this can improve data writing efficiency and provides better data writing performance.
Optionally, after the data to be written is written into the SSD and the HDD in the back-end pool, a data write response is sent to the OSD in the cache pool, where the data write response carries the data written this time, so that the OSD in the cache pool caches the data carried in the data write response in the SSD of the cache pool. Furthermore, if a data read-write request for the data written at this time is received subsequently, the OSD in the cache pool can process the data read-write request, and the read-write performance is better.
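And the back-end-pool side of the write path (S501-S502) can be sketched as follows, written synchronously for clarity even though the embodiment allows an asynchronous write mode; the response format is an assumption of the sketch.

```python
def backend_pool_write(obj_name, data, backend_ssd, backend_hdd):
    """Back-end-pool OSD write path (S501-S502), sketched with dicts as devices."""
    # S502: write the data to be written into both the back-end-pool SSD and
    # the HDD (the embodiment notes this may be done in an asynchronous mode).
    backend_ssd[obj_name] = data
    backend_hdd[obj_name] = data
    # Return a data write response carrying the data written this time, so that
    # the cache-pool OSD can cache it in the cache-pool SSD.
    return {"status": "ok", "data": data}
```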
The embodiment of the present application further provides an electronic device, as shown in fig. 6, which includes a processor 601, a communication interface 602, a memory 603, and a communication bus 604, where the processor 601, the communication interface 602, and the memory 603 complete mutual communication through the communication bus 604,
a memory 603 for storing a computer program;
the processor 601 is configured to implement any data reading method and/or any data writing method in the above method embodiments when executing the program stored in the memory 603.
The communication bus mentioned in the electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus.
The communication interface is used for communication between the electronic equipment and other equipment.
The Memory may include a Random Access Memory (RAM) or a Non-Volatile Memory (NVM), such as at least one disk Memory. Optionally, the memory may also be at least one memory device located remotely from the processor.
The processor may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
In yet another embodiment of the present invention, a computer-readable storage medium is further provided, in which a computer program is stored, and the computer program realizes the steps of any of the above data reading methods and/or the steps of any of the above data writing methods when being executed by a processor.
In a further embodiment of the present invention, there is also provided a computer program product comprising instructions which, when run on a computer, cause the computer to perform the steps of any of the data reading methods of the above embodiments, and/or the steps of any of the data writing methods of the above embodiments.
In the above embodiments, the implementation may be realized wholly or partially by software, hardware, firmware, or any combination thereof. When implemented in software, it may be realized wholly or partially in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the invention are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example, from one website, computer, server, or data center to another website, computer, server, or data center via a wired (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, radio, microwave) connection. The computer-readable storage medium can be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, a hard disk, a magnetic tape), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a Solid State Disk (SSD)), among others.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
All the embodiments in the present specification are described in a related manner, the same and similar parts among the embodiments may be referred to each other, and each embodiment focuses on its differences from the other embodiments. In particular, as for the method embodiments, since they are substantially similar to the system embodiment, the description is relatively simple, and for relevant points reference may be made to the partial description of the system embodiment.
The above description is only for the preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.

Claims (10)

1. A storage system is characterized by comprising a cache pool and a back-end pool, wherein the cache pool comprises a plurality of object storage devices OSD and a Solid State Disk (SSD) corresponding to each OSD, and the back-end pool comprises the SSD, a plurality of OSD and a mechanical hard disk (HDD) corresponding to each OSD;
the OSD in the cache pool is used for receiving a data reading request aiming at data to be read and sent by a client, and inquiring whether an object of the data to be read exists in the SSD of the cache pool or not; if the data to be read exists, reading the data to be read from the SSD of the cache pool based on the object of the data to be read; if not, forwarding the data reading request to the OSD in the back-end pool;
the OSD in the back-end pool is used for receiving the data reading request and inquiring whether an object of the data to be read exists in the SSD of the back-end pool or not through a flash cache module; if so, reading the data to be read from the SSD of the back-end pool based on the object of the data to be read; if not, reading the data to be read from the HDD of the back-end pool;
the OSD in the cache pool is also used for receiving a data writing request aiming at data to be written, which is sent by a client, and inquiring whether an object of the data to be written exists in the SSD of the cache pool; if so, writing the data to be written into the SSD of the cache pool based on the object of the data to be written; if not, forwarding the data write-in request to the OSD in the back-end pool;
and the OSD in the back-end pool is also used for receiving the data writing request and writing the data to be written into the SSD and the HDD of the back-end pool.
2. The storage system of claim 1,
and the OSD in the back-end pool is also used for caching the read data in the SSD of the back-end pool after the data to be read is read from the HDD of the back-end pool.
3. The storage system of claim 1,
the OSD in the back-end pool is further used for sending a data writing response to the OSD in the cache pool after the data to be written is written into the SSD and the HDD of the back-end pool, and the data writing response carries the data written this time;
and the OSD in the cache pool is used for receiving the data writing response sent by the OSD in the back-end pool and caching data carried in the data writing response in the SSD of the cache pool.
4. A data reading method is characterized in that the method is applied to an object storage device OSD in a cache pool of a storage system, the storage system further comprises a back-end pool, the cache pool comprises a plurality of OSD and a solid state disk SSD corresponding to each OSD, the back-end pool comprises the SSD, the OSD and a mechanical hard disk HDD corresponding to each OSD, and the method comprises the following steps:
receiving a data reading request aiming at data to be read, which is sent by a client;
inquiring whether the SSD of the cache pool has the object of the data to be read or not;
if the data to be read exists, reading the data to be read from the SSD of the cache pool based on the object of the data to be read;
if the data to be read does not exist in the SSD, forwarding the data reading request to the OSD in the back-end pool, so that the OSD in the back-end pool inquires whether the data to be read exists in the SSD of the back-end pool or not through a flash cache module; if so, reading the data to be read from the SSD of the back-end pool based on the object of the data to be read; and if the data to be read does not exist, reading the data to be read from the HDD of the back-end pool.
5. A data reading method is applied to an object storage device OSD in a back-end pool of a storage system, the storage system further comprises a cache pool, the cache pool comprises a plurality of OSD and a solid state disk SSD corresponding to each OSD, the back-end pool comprises the SSD, the OSD and a mechanical hard disk corresponding to each OSD, and the method comprises the following steps:
receiving a data reading request sent by the OSD in the cache pool;
inquiring whether the SSD of the back-end pool has the object of the data to be read or not through a flash cache module;
if so, reading the data to be read from the SSD of the back-end pool based on the object of the data to be read;
and if the data to be read does not exist, reading the data to be read from the HDD of the back-end pool.
6. The method of claim 5, wherein after reading the data to be read from the HDD of the back-end pool, the method further comprises:
and caching the read data in the SSD of the back-end pool.
7. A data write-in method is characterized in that the method is applied to an object storage device OSD in a cache pool of a storage system, the storage system further comprises a back-end pool, the cache pool comprises a plurality of OSD and a solid state disk SSD corresponding to each OSD, the back-end pool comprises the SSD, the OSD and a mechanical hard disk HDD corresponding to each OSD, and the method comprises the following steps:
receiving a data writing request aiming at data to be written, which is sent by a client;
inquiring whether the SSD of the cache pool has the object of the data to be written;
if so, writing the data to be written into the SSD of the cache pool based on the object of the data to be written;
and if the data to be written does not exist, forwarding the data writing request to the OSD in the back-end pool so that the OSD in the back-end pool writes the data to be written into the SSD and the HDD of the back-end pool.
8. The method of claim 7, wherein after forwarding the data write request to the OSD in the backend pool, the method further comprises:
receiving a data write-in response sent by the OSD in the back-end pool, wherein the data write-in response carries the data written this time;
and caching the data carried in the data write response in the SSD of the cache pool.
9. A data write-in method is characterized in that the method is applied to an object storage device OSD in a back-end pool of a storage system, the storage system further comprises a cache pool, the cache pool comprises a plurality of OSD and a solid state disk SSD corresponding to each OSD, the back-end pool comprises the SSD, the OSD and a mechanical hard disk corresponding to each OSD, and the method comprises the following steps:
receiving a data writing request aiming at data to be written, which is sent by the OSD in the cache pool;
and writing the data to be written into the SSD and the HDD of the back-end pool.
10. The method of claim 9, wherein after writing the data to be written to the SSDs and HDDs of the back-end pool, the method further comprises:
and sending a data write-in response to the OSD in the cache pool, wherein the data write-in response carries the data written in this time, so that the data carried in the data write-in response is cached in the SSD of the cache pool by the OSD in the cache pool.
CN202011273132.3A 2020-11-13 2020-11-13 Storage system, data reading method and data writing method Active CN112214178B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011273132.3A CN112214178B (en) 2020-11-13 2020-11-13 Storage system, data reading method and data writing method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011273132.3A CN112214178B (en) 2020-11-13 2020-11-13 Storage system, data reading method and data writing method

Publications (2)

Publication Number Publication Date
CN112214178A 2021-01-12
CN112214178B 2022-08-19

Family

ID=74057054

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011273132.3A Active CN112214178B (en) 2020-11-13 2020-11-13 Storage system, data reading method and data writing method

Country Status (1)

Country Link
CN (1) CN112214178B (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102713828A (en) * 2011-12-21 2012-10-03 华为技术有限公司 Multi-device mirror images and stripe function-providing disk cache method, device, and system
US20160179683A1 (en) * 2014-12-23 2016-06-23 Prophetstor Data Services, Inc. Ssd caching system for hybrid storage
US20160246519A1 (en) * 2015-02-20 2016-08-25 Netapp, Inc. Solid state device parity caching in a hybrid storage array
CN105892947A (en) * 2016-03-31 2016-08-24 华中科技大学 SSD and HDD hybrid caching management method and system of energy-saving storage system
CN107241444A (en) * 2017-07-31 2017-10-10 郑州云海信息技术有限公司 A kind of distributed caching data management system, method and device
CN107632784A (en) * 2017-09-14 2018-01-26 郑州云海信息技术有限公司 The caching method of a kind of storage medium and distributed memory system, device and equipment
CN108845768A (en) * 2018-06-19 2018-11-20 郑州云海信息技术有限公司 A kind of date storage method, device, equipment and storage medium

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
LI CHAO et al., "An IO Optimized Data Access Method in Distributed KEY-VALUE Storage System", IEEE Xplore *
YANG Qing et al., "ADCS: An SSD-based Array Database Caching Technology", Computer & Digital Engineering *
GUO Tangbao et al., "A Distributed Caching Mechanism for Application Servers", Science Technology and Engineering *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114237518A (en) * 2022-02-22 2022-03-25 苏州浪潮智能科技有限公司 Data reading method, system, device and terminal

Also Published As

Publication number Publication date
CN112214178B (en) 2022-08-19

Similar Documents

Publication Publication Date Title
CN110275841B (en) Access request processing method and device, computer equipment and storage medium
US7165144B2 (en) Managing input/output (I/O) requests in a cache memory system
CN108733316B (en) Method and manager for managing storage system
CN110413199B (en) Method, apparatus, and computer-readable storage medium for managing storage system
US10860494B2 (en) Flushing pages from solid-state storage device
CN107797760B (en) Method and device for accessing cache information and solid-state drive
CN108073527B (en) Cache replacement method and equipment
CN111338561B (en) Memory controller and memory page management method
CN110555001A (en) data processing method, device, terminal and medium
CN107577775B (en) Data reading method and device, electronic equipment and readable storage medium
CN111563052A (en) Cache method and device for reducing read delay, computer equipment and storage medium
CN108228088B (en) Method and apparatus for managing storage system
CN112214178B (en) Storage system, data reading method and data writing method
CN116303590A (en) Cache data access method, device, equipment and storage medium
CN109246234B (en) Image file downloading method and device, electronic equipment and storage medium
US9454479B2 (en) Processing read and write requests in a storage controller
US11645209B2 (en) Method of cache prefetching that increases the hit rate of a next faster cache
US9158697B2 (en) Method for cleaning cache of processor and associated processor
CN111290975A (en) Method for processing read command and pre-read command by using unified cache and storage device thereof
US11237975B2 (en) Caching assets in a multiple cache system
CN113254363A (en) Non-volatile memory controller with partial logical to physical address translation table
US7421536B2 (en) Access control method, disk control unit and storage apparatus
CN115080459A (en) Cache management method and device and computer readable storage medium
US10372623B2 (en) Storage control apparatus, storage system and method of controlling a cache memory
CN112947845A (en) Thermal data identification method and storage device thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant