CN113867646B - Disk performance improving method and terminal - Google Patents


Info

Publication number
CN113867646B
CN113867646B (application CN202111164247.3A)
Authority
CN
China
Prior art keywords
cache
transmitted
disk
data
level
Prior art date
Legal status
Active
Application number
CN202111164247.3A
Other languages
Chinese (zh)
Other versions
CN113867646A (en)
Inventor
宫大成
宫立基
石帆
肖光胜
亓迎
Current Assignee
Fujian Jicun Data Technology Co ltd
Original Assignee
Fujian Jicun Data Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Fujian Jicun Data Technology Co ltd
Priority to CN202111164247.3A (CN113867646B)
Priority to PCT/CN2021/124022 (WO2023050488A1)
Publication of CN113867646A
Application granted
Publication of CN113867646B
Active legal status


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 Interfaces specially adapted for storage systems
    • G06F 3/0604 Improving or facilitating administration, e.g. storage management
    • G06F 3/0622 Securing storage systems in relation to access
    • G06F 3/0656 Data buffering arrangements
    • G06F 3/0689 Disk arrays, e.g. RAID, JBOD

Abstract

According to the disk performance improving method and terminal, the backplane is powered on by a capacitor power-on method, which lowers the demand on the server power supply and reduces energy consumption. A four-level cache path comprising a memory, a solid state disk cache, an array card cache and a hard disk drive cache is configured, each level in the four-level cache path is uniformly configured and managed, and the caches are called according to different requirements, so the access speed is greatly increased. Data to be transmitted is acquired; if its size exceeds a preset volume, the data passes through every level of the four-level cache path before being stored in the disk, otherwise it passes through only the memory and the hard disk drive cache before being stored in the disk. The cache is thus selected according to the size of the transmitted data and can be adjusted flexibly. Therefore, by combining the backplane power-on method, the four-level cache path and the optimization of the RAID card, latency and energy consumption can be reduced and the read-write speed of the disk can be increased.

Description

Disk performance improving method and terminal
Technical Field
The invention relates to the technical field of computers, in particular to a method and a terminal for improving the performance of a disk.
Background
With the progress of the times, wireless public-network bandwidth has developed from the 1G era to the current 5G era, and wired internet access has developed from dial-up to today's 10-gigabit optical fiber. The files in use have grown accordingly: ten years ago a picture was a few KB, whereas a picture taken casually with a mobile phone is now dozens of MB, and files collected by professional equipment are even larger. With the development of the video era, a single video file can be hundreds of MB, and a 5G + 4K/8K video file can be dozens of GB. The era of truly large file data has therefore arrived, and under this trend the disk array storage server, one of the basic devices for storing such files, is an important direction of the large-file-data era.
At present, multi-disk array storage servers on the market fall mainly into traditional mechanical hard disk arrays and all-flash arrays. The traditional mechanical hard disk array is cheap and has large capacity, but as 5G and the public network develop, its bandwidth and performance are no longer sufficient for the efficiency demands of the digital era; performance has to be raised by stacking machines into a distributed cluster, which increases cost and power consumption accordingly. The all-flash solid-state array can improve bandwidth and IOPS (Input/Output Operations Per Second) performance, but its capacity is low, its cost is high, and its electronic circuit structure carries certain potential safety hazards.
For example, a software or file vendor offering file downloads built a storage server in the 4G era that could reach a bandwidth of 800 MB/s. A typical download in the 4G era was about 12 MB/s, so the storage server could support 66 people accessing and downloading files in real time at once. If a typical download speed in the 5G era reaches at least 120 MB/s, the same storage server can support only 6-7 simultaneous real-time downloads. As public network speeds rise, higher demands are placed on the performance of the underlying hardware. With the popularization of 5G, enterprises generally solve the file transmission bandwidth and IOPS problems through solid state disk arrays or distributed cluster stacking, but solid state disk array storage and distributed cluster array storage are costly, consume much energy and are hard to maintain, and the products currently on the market cannot fundamentally solve the performance problem of the disk array.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a disk performance improving method and terminal that reduce the energy consumption of the disk and increase its read-write speed.
In order to solve the above technical problem, the invention adopts the following technical scheme:
A disk performance improving method comprises the following steps:
powering on the backplane by a capacitor power-on method, configuring a four-level cache path comprising a memory, a solid state disk cache, an array card cache and a hard disk drive cache, and uniformly configuring and managing each level in the four-level cache path;
acquiring data to be transmitted and judging whether the size of the data to be transmitted exceeds a preset volume; if so, the data to be transmitted is stored in the disk after its transmission path passes through each level of cache in the four-level cache path, otherwise it is stored in the disk after its transmission path passes through only the memory and the hard disk drive cache in the four-level cache path.
In order to solve the above technical problem, the invention adopts another technical scheme:
A disk performance improving terminal comprises a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor implementing the following steps when executing the computer program:
powering on the backplane by a capacitor power-on method, configuring a four-level cache path comprising a memory, a solid state disk cache, an array card cache and a hard disk drive cache, and uniformly configuring and managing each level in the four-level cache path;
acquiring data to be transmitted and judging whether the size of the data to be transmitted exceeds a preset volume; if so, the data to be transmitted is stored in the disk after its transmission path passes through each level of cache in the four-level cache path, otherwise it is stored in the disk after its transmission path passes through only the memory and the hard disk drive cache in the four-level cache path.
The invention has the following beneficial effects: powering on the backplane by the capacitor power-on method lowers the demand on the server power supply, makes powering on the hard disks safer and reduces energy consumption. Configuring a four-level cache path comprising a memory, a solid state disk cache, an array card cache and a hard disk drive cache, and uniformly configuring and managing each level, makes allocation more flexible than the two-level cache of memory plus hard disk drive cache in the prior art: the caches can be called according to different requirements, which greatly increases access speed. Judging whether the size of the data to be transmitted exceeds a preset volume, and routing it through every level of the four-level cache path if it does, or through only the memory and the hard disk drive cache if it does not, means that small files, which need little cache space, are served by the memory and the hard disk drive cache with high transmission efficiency, while large files, which need more cache space, use all four cache levels, so the enlarged cache capacity reduces latency. Therefore, by combining the backplane power-on method, the four-level cache path and the optimization of the RAID card, latency and energy consumption can be reduced and the read-write speed of the disk can be increased.
Drawings
Fig. 1 is a flowchart of a method for improving disk performance according to an embodiment of the present invention;
fig. 2 is a schematic diagram of a disk performance improving terminal according to an embodiment of the present invention;
fig. 3 is a schematic diagram of a transmission path of a method for improving disk performance according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of the four-level cache path of a disk performance improving method according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a backplane circuit of a disk performance improving method according to an embodiment of the present invention;
FIG. 6 is a flowchart of a method for improving disk performance according to an embodiment of the present invention;
fig. 7 is a schematic diagram of a transmission path of a method for improving disk performance according to an embodiment of the present invention.
Detailed Description
In order to explain technical contents, achieved objects, and effects of the present invention in detail, the following description is made with reference to the accompanying drawings in combination with the embodiments.
Referring to fig. 1, an embodiment of the present invention provides a method for improving disk performance, including:
powering on the backplane by a capacitor power-on method, configuring a four-level cache path comprising a memory, a solid state disk cache, an array card cache and a hard disk drive cache, and uniformly configuring and managing each level in the four-level cache path;
and acquiring data to be transmitted and judging whether the size of the data to be transmitted exceeds a preset volume; if so, the data to be transmitted is stored in the disk after its transmission path passes through each level of cache in the four-level cache path, otherwise it is stored in the disk after its transmission path passes through only the memory and the hard disk drive cache in the four-level cache path.
From the above description, the beneficial effects of the invention are as follows: powering on the backplane by the capacitor power-on method lowers the demand on the server power supply, makes powering on the hard disks safer and reduces energy consumption; configuring a four-level cache path comprising a memory, a solid state disk cache, an array card cache and a hard disk drive cache, and uniformly configuring and managing each level, makes allocation more flexible than the two-level cache of memory plus hard disk drive cache in the prior art, and the caches can be called according to different requirements, which greatly increases access speed; judging whether the size of the data to be transmitted exceeds a preset volume, and routing it through every level of the four-level cache path if it does, or through only the memory and the hard disk drive cache if it does not, means that small files, which need little cache space, are served by the memory and the hard disk drive cache with high transmission efficiency, while large files, which need more cache space, use all four cache levels, so the enlarged cache capacity reduces latency. Therefore, by combining the backplane power-on method, the four-level cache path and the optimization of the RAID card, latency and energy consumption can be reduced and the read-write speed of the disk can be increased.
Further, the capacitor power-on method for powering on the backplane comprises:
using a plurality of capacitors to power on the hard disks in layers;
the backplane is a multi-hard-disk hardware management board card for managing the interfaces and control of SCSI bus-type structure templates.
As described above, using a plurality of capacitors to power on the hard disks in layers places a lower demand on the server power supply, requires a lower peak current, makes management more stable and makes the whole path smoother. Compared with the centralized or grouped power-on adopted by backplanes in the prior art, which give the disks different degrees of protection and stability, layered capacitor power-on avoids the instability of centralized power-on and thereby improves disk performance.
Further, the unified configuration and management of each level in the four-level cache path comprises:
uniformly configuring the capacity, speed and policy of the memory, the solid state disk cache and the array card cache;
directly enabling the cache function of the hard disk drive and the cache function of the array card;
and selecting a transmission path through the array card, and enabling the cache function of the corresponding cache level through the transmission path.
As described above, uniformly configuring the capacity, speed and policy of the memory, the solid state disk cache and the array card cache makes access to all kinds of files faster and latency lower once the cache policy of every level is configured. When the four-level cache path is managed, the hard disk drive and the array card already ship with caches, so their cache functions can be enabled directly; a transmission path is then selected through the array card and the cache function of the corresponding cache level is enabled along that path, so the caches are allocated flexibly.
Further, judging whether the size of the data to be transmitted exceeds the preset volume, and storing the data in the disk after its transmission path passes through each level of cache in the four-level cache path if so, or after its transmission path passes through only the memory and the hard disk drive cache otherwise, comprises:
classifying, cutting and reordering the data to be transmitted to obtain files to be transmitted;
if the size of a file to be transmitted exceeds the preset volume, its transmission path passes through the memory, the solid state disk cache, the array card cache and the hard disk drive cache in sequence before the file is stored in the disk;
otherwise, its transmission path directly bypasses the array card cache, and the file is stored in the disk after passing through the memory and the hard disk drive cache in sequence.
As described above, the data to be transmitted is classified, cut and reordered to obtain files to be transmitted, and the transmission path is selected according to the size of each file, which guarantees the read-write speed of the data, reduces energy consumption, and improves disk performance through the configuration and optimization of the array card.
Further, judging whether the size of the data to be transmitted exceeds the preset volume, and storing the data in the disk after its transmission path passes through each level of cache in the four-level cache path if so, or after its transmission path passes through only the memory and the hard disk drive cache otherwise, further comprises:
calculating an optimal transmission path according to the size of the data to be transmitted and the required transmission speed;
and calling the caches in the four-level cache path according to the optimal transmission path before storing the data in the disk.
From the above description it can be seen that the optimal transmission path is calculated according to the size of the data to be transmitted and the required transmission speed, and the caches in the four-level cache path are called before the data is stored in the disk, so the cache allocation of the array card is more flexible and the caching is more efficient.
Referring to fig. 2, another embodiment of the present invention provides a disk performance improving terminal, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor executes the computer program to implement the following steps:
powering on the backplane by a capacitor power-on method, configuring a four-level cache path comprising a memory, a solid state disk cache, an array card cache and a hard disk drive cache, and uniformly configuring and managing each level in the four-level cache path;
and acquiring data to be transmitted and judging whether the size of the data to be transmitted exceeds a preset volume; if so, the data to be transmitted is stored in the disk after its transmission path passes through each level of cache in the four-level cache path, otherwise it is stored in the disk after its transmission path passes through only the memory and the hard disk drive cache in the four-level cache path.
As can be seen from the above description, powering on the backplane by the capacitor power-on method lowers the demand on the server power supply, makes powering on the hard disks safer and reduces energy consumption; configuring a four-level cache path comprising a memory, a solid state disk cache, an array card cache and a hard disk drive cache, and uniformly configuring and managing each level, makes allocation more flexible than the two-level cache of memory plus hard disk drive cache in the prior art, and the caches can be called according to different requirements, which greatly increases access speed; judging whether the size of the data to be transmitted exceeds a preset volume, and routing it through every level of the four-level cache path if it does, or through only the memory and the hard disk drive cache if it does not, means that small files, which need little cache space, are served by the memory and the hard disk drive cache with high transmission efficiency, while large files, which need more cache space, use all four cache levels, so the enlarged cache capacity reduces latency. Therefore, by combining the backplane power-on method, the four-level cache path and the optimization of the RAID card, latency and energy consumption can be reduced and the read-write speed of the disk can be increased.
Further, the capacitor power-on method for powering on the backplane comprises:
using a plurality of capacitors to power on the hard disks in layers;
the backplane is a multi-hard-disk hardware management board card for managing the interfaces and control of SCSI bus-type structure templates.
As described above, using a plurality of capacitors to power on the hard disks in layers places a lower demand on the server power supply, requires a lower peak current, makes management more stable and makes the whole path smoother. Compared with the centralized or grouped power-on adopted by backplanes in the prior art, which give the disks different degrees of protection and stability, layered capacitor power-on avoids the instability of centralized power-on and thereby improves disk performance.
Further, the unified configuration and management of each level in the four-level cache path comprises:
uniformly configuring the capacity, speed and policy of the memory, the solid state disk cache and the array card cache;
directly enabling the cache function of the hard disk drive and the cache function of the array card;
and selecting a transmission path through the array card, and enabling the cache function of the corresponding cache level through the transmission path.
As described above, uniformly configuring the capacity, speed and policy of the memory, the solid state disk cache and the array card cache makes access to all kinds of files faster and latency lower once the cache policy of every level is configured. When the four-level cache path is managed, the hard disk drive and the array card already ship with caches, so their cache functions can be enabled directly; a transmission path is then selected through the array card and the cache function of the corresponding cache level is enabled along that path, so the caches are allocated flexibly.
Further, judging whether the size of the data to be transmitted exceeds the preset volume, and storing the data in the disk after its transmission path passes through each level of cache in the four-level cache path if so, or after its transmission path passes through only the memory and the hard disk drive cache otherwise, comprises:
classifying, cutting and reordering the data to be transmitted to obtain files to be transmitted;
if the size of a file to be transmitted exceeds the preset volume, its transmission path passes through the memory, the solid state disk cache, the array card cache and the hard disk drive cache in sequence before the file is stored in the disk;
otherwise, its transmission path directly bypasses the array card cache, and the file is stored in the disk after passing through the memory and the hard disk drive cache in sequence.
As described above, the data to be transmitted is classified, cut and reordered to obtain files to be transmitted, and the transmission path is selected according to the size of each file, which guarantees the read-write speed of the data, reduces energy consumption, and improves disk performance through the configuration and optimization of the array card.
Further, judging whether the size of the data to be transmitted exceeds the preset volume, and storing the data in the disk after its transmission path passes through each level of cache in the four-level cache path if so, or after its transmission path passes through only the memory and the hard disk drive cache otherwise, further comprises:
calculating an optimal transmission path according to the size of the data to be transmitted and the required transmission speed;
and calling the caches in the four-level cache path according to the optimal transmission path before storing the data in the disk.
From the above description it can be seen that the optimal transmission path is calculated according to the size of the data to be transmitted and the required transmission speed, and the caches in the four-level cache path are called before the data is stored in the disk, so the cache allocation of the array card is more flexible and the caching is more efficient.
Example one
Referring to fig. 1, fig. 3, fig. 4, fig. 6, and fig. 7, a method for improving disk performance, which is referred to as HEHA in this embodiment, includes the steps of:
S1, powering on the backplane by a capacitor power-on method, configuring a four-level cache path comprising a memory, a solid state disk cache, an array card cache and a hard disk drive cache, and uniformly configuring and managing each level in the four-level cache path.
Wherein the unified configuration and management of each level in the four-level cache path comprises:
uniformly configuring the capacity, speed and policy of the memory, the solid state disk cache and the array card cache;
directly enabling the cache function of the hard disk drive and the cache function of the array card;
and selecting a transmission path through the array card, and enabling the cache function of the corresponding cache level through the transmission path.
Specifically, in this embodiment the four-level cache path technology can configure the memory, the SSD (solid state disk) cache and the RAID (redundant array of independent disks) cache according to requirements; the capacity, speed and policy of the memory, the SSD cache and the RAID cache can all be customized as required. With the cache policies of every level configured, access to all kinds of files is faster and latency is lower, the bottleneck between the server backplane channel and the RAID board card is opened up, and files can be read and written continuously at high speed with higher efficiency.
Referring to fig. 4, four caches are managed in a unified manner:
the hard disk ships with a cache from the factory, and its cache function is enabled directly; the specific command is caching=enable;
the array card also ships with a cache that can be enabled directly; the specific command is to turn on Always Write Back;
in addition, the array card selects a transmission path, and the cache function of the corresponding cache level is enabled through that transmission path; an illustrative command-line sketch is given directly below;
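As a concrete illustration only (the embodiment names the settings but not the tools, so the use of hdparm and storcli, the device path and the controller/volume indices below are assumptions, not part of the patent):
# Hedged sketch: enable the drive's write cache and the array card's Always Write Back policy.
# /dev/sda and /c0/v0 are placeholders for your own drive and RAID virtual drive.
hdparm -W 1 /dev/sda                  # turn on the hard disk drive's on-board write cache
storcli /c0/v0 set wrcache=awb        # set the virtual drive write policy to Always Write Back
storcli /c0/v0 show all               # confirm the cache policy now in force
Equivalent settings can also be made from the array card's management utility; the point is simply that both cache functions are switched on before the four-level cache path is used.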
in this embodiment, the memory can be regarded as a cache, and the specific commands are:
$ vim /etc/fstab;
the size is designed according to the speed requirement:
tmpfs /dev/shm tmpfs defaults,size=25G 0 0;
the size added is generally about 10-50% of the physical memory, depending on how much physical memory there is.
Mount the /dev/shm/ directory:
$ mount -o remount /dev/shm/;
$ mkdir /dev/shm/tmp;
$ chmod 755 /dev/shm/tmp;
$ mount -B /dev/shm/tmp /tmp.
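A quick way to confirm that the memory cache is in place (a sketch; 25G is simply the example size configured above):
$ df -h /dev/shm        # the Size column should show the 25G tmpfs and its current usage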
in this embodiment, the solid state disk cache is no longer taken over by the array card; the specific commands are:
first, the bcache module is loaded and the backing and cache devices are created:
modprobe bcache;
make-bcache -B /dev/mapper/fedora_virthost-home;
make-bcache -C /dev/sda1;
modprobe bcache.
then the cache is configured:
echo /dev/mapper/fedora_virthost-home > /sys/fs/bcache/register;
echo /dev/sda1 > /sys/fs/bcache/register;
mkfs.ext4 /dev/bcache0;
mount /dev/bcache0 /home    # mount point;
ls /sys/fs/bcache/    # view the uuid;
echo 766e3ca5-f2db3a97b32d348 > /sys/block/bcache0/bcache/attach    # here use the uuid of your own disk;
echo "766e3ca5-f2dba97b32d348" > /sys/block/bcache0/bcache/attach.
in the prior art there are only two levels of cache, the memory and the hard disk drive cache. For example, with a memory capacity of 2G and a hard disk drive cache of 1G, only 3G of content can be transmitted per pass when a 20G file needs to be transferred; after the solid state disk cache and the array card cache are added, the total capacity of the four-level cache can reach 40G, and the 20G large file can be transmitted in one pass.
S2, acquiring data to be transmitted and judging whether the size of the data to be transmitted exceeds a preset volume; if so, the data is stored in the disk after its transmission path passes through the memory, the solid state disk cache, the array card cache and the hard disk drive cache, otherwise it is stored in the disk after its transmission path passes through only the memory and the hard disk drive cache.
Step S2 includes:
classifying, cutting and reordering the data to be transmitted to obtain files to be transmitted;
if the size of a file to be transmitted exceeds the preset volume, its transmission path passes through the memory, the solid state disk cache, the array card cache and the hard disk drive cache in sequence before the file is stored in the disk;
otherwise, its transmission path directly bypasses the array card cache, and the file is stored in the disk after passing through the memory and the hard disk drive cache in sequence.
Specifically, referring to fig. 3, the array card technology classifies, cuts and reorders the data to obtain the files to be transmitted; a file larger than the preset volume is a large file, otherwise it is a small file;
array command processing is added to distinguish the data files: small structured files bypass the RAID group cache, are written into the hard disk cache and then enter the disk for storage, which brings lower latency and relieves the pressure on the array card cache and the hard disk cache, while the array card cache and the hard disk cache are kept for writing and reading large files at higher speed; a minimal sketch of this size-based routing is given below.
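The embodiment does not give code for the routing decision itself, so the following shell sketch is only an illustration of the idea; the threshold (the "preset volume"), the tmpfs path reused from the memory-cache example above and the two target mount points are all assumptions:
#!/bin/sh
# Hypothetical sketch of step S2: route a file by size through the full four-level
# cache path (large files) or through memory + HDD cache only (small files).
THRESHOLD=$((1 * 1024 * 1024 * 1024))     # example preset volume: 1 GiB
FILE="$1"
SIZE=$(stat -c %s "$FILE")
if [ "$SIZE" -gt "$THRESHOLD" ]; then
    # large file: memory -> SSD cache -> array card cache -> HDD cache -> disk
    cp "$FILE" /dev/shm/tmp/ && mv "/dev/shm/tmp/$(basename "$FILE")" /mnt/raid_cached/
else
    # small file: memory -> HDD cache -> disk, bypassing the SSD and array card caches
    cp "$FILE" /dev/shm/tmp/ && mv "/dev/shm/tmp/$(basename "$FILE")" /mnt/hdd_direct/
fi
In practice the routing is performed by the array card itself; the sketch only makes the size test and the two alternative paths explicit.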
Step S2 further includes:
calculating an optimal transmission path according to the size of the data to be transmitted and the required transmission speed;
and calling the caches in the four-level cache path according to the optimal transmission path before storing the data in the disk.
Specifically, referring to fig. 6, the four-level cache path in this embodiment provides three allocatable cache levels, and the RAM cache, the SSD cache and the array card cache are called, so an optimal transmission path can be calculated from the size of the data to be transmitted and the required transmission speed, and the required caches, together with their sizes and policies, are allocated according to that path. Whether the data is a single large file or fragmented small files, the transmission can be optimized on demand and sped up; through this optimization of the RAID card, the cache allocation of the RAID card becomes more flexible and the caching more efficient. A sketch of such a selection follows.
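How the optimal path is computed is not specified in the embodiment, so the following sketch is purely illustrative; the capacities, the 800 MB/s figure and the selection rule are assumptions chosen only to show the shape of the calculation:
#!/bin/sh
# Hypothetical sketch: pick the cache levels from the data size and the required speed.
SIZE_GB="$1"        # size of the data to be transmitted, in GB
NEED_MBPS="$2"      # required transmission speed, in MB/s
MEM_GB=2; HDD_CACHE_GB=1    # example capacities of the always-used levels
if [ "$SIZE_GB" -le $((MEM_GB + HDD_CACHE_GB)) ] && [ "$NEED_MBPS" -le 800 ]; then
    echo "optimal path: memory -> HDD cache -> disk"
else
    echo "optimal path: memory -> SSD cache -> array card cache -> HDD cache -> disk"
fi
A real implementation would also take the configured sizes and policies of the SSD and array card caches into account; the sketch only shows that both the data size and the required speed feed into the path decision.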
Therefore, referring to fig. 7, the disk performance improving method of this embodiment combines configuration and optimization of the RAID array card, server performance optimization and driver upgrades to open up the path between the RAID array card and the backplane, together with capacitor power-on of the server backplane, and can thereby raise the disk performance of a conventional multi-disk storage server from roughly 20% of its theoretical value to over 85%;
for example, a 32-bay disk array storage server, calculated at 200 MB/s per bay (enterprise-class mechanical hard disk), should theoretically reach 32 x 200 MB/s = 6400 MB/s, yet conventional disk array solutions currently reach only about 800 MB/s and cannot bring out the performance of the array. With the disk performance improving method of this technical scheme, the bandwidth of a multi-bay disk array storage server can be raised from about 20%-30% of the theoretical value to more than 85%, and the same configuration can reach more than 6000 MB/s.
The HEHA technology improves the efficiency of the disk array storage server and meets the bandwidth requirements brought by the growth of unstructured files. It therefore reduces the rack space, hardware cost, power consumption and operation and maintenance cost that rising bandwidth requirements would otherwise demand; in particular, as the 5G + 4K/8K and VR technologies now being promoted nationally become widespread, it offers further cost savings and convenience for content production.
Example two
This embodiment differs from the first embodiment in that it further defines how the backplane capacitor power-on is performed, specifically:
using a plurality of capacitors to power on the hard disks in layers;
the backplane is a multi-hard-disk hardware management board card for managing the interfaces and control of SCSI bus-type structure templates.
In this embodiment, the server backplane module is a multi-hard-disk hardware management board card that manages the interfaces and control of SCSI bus-type structure templates;
referring to fig. 5, the interface backplane powers on five SATA and SAS protocol hard disks disk by disk through three capacitors in a layered manner; this places a lower demand on the server power supply, requires a lower peak current, makes management more stable, makes the whole path smoother and thereby improves overall performance.
Example three
Referring to fig. 2, a disk performance improving terminal comprises a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the disk performance improving method according to embodiment one or embodiment two.
In summary, in the disk performance improving method and terminal provided by the invention, a plurality of capacitors power on the hard disks in layers, which places a lower demand on the server power supply, requires a lower peak current, makes management more stable and makes the whole path smoother; compared with the centralized or grouped power-on adopted by backplanes in the prior art, which give the disks different degrees of protection and stability, layered capacitor power-on avoids the instability of centralized power-on and thereby improves disk performance. A four-level cache path comprising a memory, a solid state disk cache, an array card cache and a hard disk drive cache is configured and each level in it is uniformly configured and managed, so allocation is more flexible than with the two-level cache of memory plus hard disk drive cache in the prior art, the caches can be called according to different requirements, and the access speed is greatly increased. Data to be transmitted is acquired; if its size exceeds a preset volume, its transmission path passes through each level of cache in the four-level cache path before the data is stored in the disk, otherwise it passes through only the memory and the hard disk drive cache. Small files thus need little cache space, are served by the memory and the hard disk drive cache, and are transmitted efficiently, while large files, which need more cache space, use all four cache levels, so the enlarged cache capacity reduces latency. Therefore, by combining the backplane power-on method, the four-level cache path and the optimization of the RAID card, latency and energy consumption can be reduced and the read-write speed of the disk can be increased.
The above description is only an embodiment of the present invention, and not intended to limit the scope of the present invention, and all equivalent changes made by using the contents of the present specification and the drawings, or applied directly or indirectly to the related technical fields, are included in the scope of the present invention.

Claims (6)

1. A disk performance improving method, characterized by comprising the following steps:
powering on the backplane by a capacitor power-on method, configuring a four-level cache path comprising a memory, a solid state disk cache, an array card cache and a hard disk drive cache, and uniformly configuring and managing each level in the four-level cache path;
acquiring data to be transmitted and judging whether the size of the data to be transmitted exceeds a preset volume; if so, the data to be transmitted is stored in the disk after its transmission path passes through each level of cache in the four-level cache path, otherwise it is stored in the disk after its transmission path passes through only the memory and the hard disk drive cache in the four-level cache path;
the capacitor power-on method for powering on the backplane comprises:
using a plurality of capacitors to power on the hard disks in layers;
the backplane is a multi-hard-disk hardware management board card for managing the interfaces and control of SCSI bus-type structure templates;
the unified configuration and management of each level in the four-level cache path comprises:
uniformly configuring the capacity, speed and policy of the memory, the solid state disk cache and the array card cache;
directly enabling the cache function of the hard disk drive and the cache function of the array card;
and selecting a transmission path through the array card, and enabling the cache function of the corresponding cache level through the transmission path.
2. The disk performance improving method according to claim 1, wherein judging whether the size of the data to be transmitted exceeds the preset volume, and storing the data in the disk after its transmission path passes through each level of cache in the four-level cache path if so, or after its transmission path passes through only the memory and the hard disk drive cache in the four-level cache path otherwise, comprises:
classifying, cutting and reordering the data to be transmitted to obtain files to be transmitted;
if the size of a file to be transmitted exceeds the preset volume, its transmission path passes through the memory, the solid state disk cache, the array card cache and the hard disk drive cache in sequence before the file is stored in the disk;
otherwise, its transmission path directly bypasses the array card cache, and the file is stored in the disk after passing through the memory and the hard disk drive cache in sequence.
3. The disk performance improving method according to claim 1, wherein judging whether the size of the data to be transmitted exceeds the preset volume, and storing the data in the disk after its transmission path passes through each level of cache in the four-level cache path if so, or after its transmission path passes through only the memory and the hard disk drive cache in the four-level cache path otherwise, further comprises:
calculating an optimal transmission path according to the size of the data to be transmitted and the required transmission speed;
and calling the caches in the four-level cache path according to the optimal transmission path before storing the data in the disk.
4. A disk performance improving terminal, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the following steps when executing the computer program:
powering on the backplane by a capacitor power-on method, configuring a four-level cache path comprising a memory, a solid state disk cache, an array card cache and a hard disk drive cache, and uniformly configuring and managing each level in the four-level cache path;
acquiring data to be transmitted and judging whether the size of the data to be transmitted exceeds a preset volume; if so, the data to be transmitted is stored in the disk after its transmission path passes through the memory, the solid state disk cache, the array card cache and the hard disk drive cache, otherwise it is stored in the disk after its transmission path passes through only the memory and the hard disk drive cache;
the capacitor power-on method for powering on the backplane comprises:
using a plurality of capacitors to power on the hard disks in layers;
the backplane is a multi-hard-disk hardware management board card for managing the interfaces and control of SCSI bus-type structure templates;
the unified configuration and management of each level in the four-level cache path comprises:
uniformly configuring the capacity, speed and policy of the memory, the solid state disk cache and the array card cache;
directly enabling the cache function of the hard disk drive and the cache function of the array card;
and selecting a transmission path through the array card, and enabling the cache function of the corresponding cache level through the transmission path.
5. The disk performance improving terminal according to claim 4, wherein judging whether the size of the data to be transmitted exceeds the preset volume, and storing the data in the disk after its transmission path passes through each level of cache in the four-level cache path if so, or after its transmission path passes through only the memory and the hard disk drive cache in the four-level cache path otherwise, comprises:
classifying, cutting and reordering the data to be transmitted to obtain files to be transmitted;
if the size of a file to be transmitted exceeds the preset volume, its transmission path passes through the memory, the solid state disk cache, the array card cache and the hard disk drive cache in sequence before the file is stored in the disk;
otherwise, its transmission path directly bypasses the array card cache, and the file is stored in the disk after passing through the memory and the hard disk drive cache in sequence.
6. The disk performance improving terminal according to claim 4, wherein judging whether the size of the data to be transmitted exceeds the preset volume, and storing the data in the disk after its transmission path passes through each level of cache in the four-level cache path if so, or after its transmission path passes through only the memory and the hard disk drive cache in the four-level cache path otherwise, further comprises:
calculating an optimal transmission path according to the size of the data to be transmitted and the required transmission speed;
and calling the caches in the four-level cache path according to the optimal transmission path before storing the data in the disk.
CN202111164247.3A 2021-09-30 2021-09-30 Disk performance improving method and terminal Active CN113867646B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202111164247.3A CN113867646B (en) 2021-09-30 2021-09-30 Disk performance improving method and terminal
PCT/CN2021/124022 WO2023050488A1 (en) 2021-09-30 2021-10-15 Disk performance improvement method and terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111164247.3A CN113867646B (en) 2021-09-30 2021-09-30 Disk performance improving method and terminal

Publications (2)

Publication Number Publication Date
CN113867646A CN113867646A (en) 2021-12-31
CN113867646B (en) 2022-03-18

Family

ID=79001313

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111164247.3A Active CN113867646B (en) 2021-09-30 2021-09-30 Disk performance improving method and terminal

Country Status (2)

Country Link
CN (1) CN113867646B (en)
WO (1) WO2023050488A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115686372B (en) * 2022-11-07 2023-07-25 武汉麓谷科技有限公司 ZNS solid state disk ZRWA function-based data management method

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104407818A (en) * 2014-12-03 2015-03-11 浪潮集团有限公司 Scheme design of PCIE (Peripheral Component Interconnect Express) high-speed storage device
CN105426127A (en) * 2015-11-13 2016-03-23 浪潮(北京)电子信息产业有限公司 File storage method and apparatus for distributed cluster system
CN205263797U (en) * 2015-12-02 2016-05-25 成都广达新网科技股份有限公司 Adopt memory of solid state hard drives SSD as L2 cache
CN107589918A (en) * 2017-10-12 2018-01-16 苏州韦科韬信息技术有限公司 The high storage cost performance of method carried using storage system is mixed
CN110472004A (en) * 2019-08-23 2019-11-19 国网山东省电力公司电力科学研究院 A kind of method and system of scientific and technological information data multilevel cache management

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050097132A1 (en) * 2003-10-29 2005-05-05 Hewlett-Packard Development Company, L.P. Hierarchical storage system
CN201698255U (en) * 2010-05-14 2011-01-05 上海浙大网新易得科技发展有限公司 Server capable of accessing disc at high speed
CN105573669A (en) * 2015-12-11 2016-05-11 上海爱数信息技术股份有限公司 IO read speeding cache method and system of storage system


Also Published As

Publication number Publication date
CN113867646A (en) 2021-12-31
WO2023050488A1 (en) 2023-04-06


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant