US20200042204A1 - Method for building distributed memory disk cluster storage system - Google Patents


Info

Publication number
US20200042204A1
Authority
US (United States)
Prior art keywords
memory, data, cluster, computers, chunk
Legal status
Abandoned
Application number
US16/583,228
Inventor
Hsun-Yuan Chen
Current Assignee
Individual
Original Assignee
Individual
Application filed by Individual
Priority to US16/583,228
Publication of US20200042204A1

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 — Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06 — Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601 — Interfaces specially adapted for storage systems
    • G06F3/0602 — Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/0614 — Improving the reliability of storage systems
    • G06F3/0619 — Improving the reliability of storage systems in relation to data integrity, e.g. data losses, bit errors
    • G06F3/0604 — Improving or facilitating administration, e.g. storage management
    • G06F3/0607 — Improving or facilitating administration by facilitating the process of upgrading existing storage systems, e.g. for improving compatibility between host and storage device
    • G06F3/0628 — Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0629 — Configuration or reconfiguration of storage systems
    • G06F3/0631 — Configuration or reconfiguration of storage systems by allocating resources to storage systems
    • G06F3/0662 — Virtualisation aspects
    • G06F3/0665 — Virtualisation aspects at area level, e.g. provisioning of virtual or logical volumes
    • G06F3/0668 — Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F3/067 — Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]

Definitions

  • the present invention relates to an operation method of a distributed memory disk cluster storage system, and more particularly to a network data interchange storage system featuring fast many-to-many transmission, high expandability and stable performance.
  • electronic data must be transmitted over a network, which serves as a bridge. When the data amount is at a normal level, the available network transmission capacity is sufficient to handle it; when the data amount increases rapidly, the transmission rate of the bridging network reaches its limit, so the processing speed required for handling the huge data amount cannot be met regardless of how up-to-date the server computer is, and users may experience data delays or interrupted transmissions when using the network system.
  • the conventional server mainframe still adopts a hard disk device for storing data and for hosting the main operating system.
  • the data transmission speed between the processing unit and the memory is much higher than that between the processing unit and the hard disk. In other words, using hard disk equipment as the main storage space for computing is the main reason optimal processing performance cannot be achieved, and the situation only worsens when large amounts of data are processed. Moreover, the service life of hard disk equipment is far shorter than that of memory, so adopting the hard disk as the main storage means is not the best solution for the whole system.
  • the prior art cannot let the processor deliver its real processing efficiency when large amounts of data await handling; accordingly, the applicant of the present invention devoted himself to developing and designing an operation method of a distributed memory disk cluster storage system that improves upon the disadvantages of the prior art.
  • the present invention provides an operation method of distributed memory disk cluster storage system for overcoming the above-mentioned disadvantages.
  • the present invention provides an operation method of distributed memory disk cluster storage system, characterized in that: firstly, a distributed memory storage equipment is installed, including a plurality of computer units assembled into a cluster scheme so as to form a cluster memory disk. Each computer unit is installed with a virtual machine platform, so the computer unit is formed with a plurality of virtual machines, and the memory capacity occupation is set through the virtual machine operating system or a program software, so the memory can be planned as a storage device, thereby forming a plurality of chunk memory disks. A file is divided into one or plural data pieces, and one or plural copies are evenly distributed among the chunk memory disks; a memory bus with multiple channels is utilized for accessing the memory modules in parallel, thereby allowing the capacity of the memory modules to be planned for use as a hard disk, wherein the access of the memory module supports all the file formats of the virtual machine operating system, and a distributed storage scheme allows the data to be copied to one or more copies; when the virtual machine operating system of the virtual
  • each of the chunk memory disks is respectively and electrically connected to at least one hard disk storage device, which backs up the data in the chunk memory disk in every preset period of time.
  • the chunk memory disks of all the computer units use the continuous data protector for constantly and continuously backing up the data to a common large-scale hard disk cluster array.
  • the computer unit is installed with a CPU, at least one memory, at least one hard disk, at least one network card, a mother board, an I/O interface card, at least one connection cable and a housing.
  • each copied data piece is encrypted through a mix of 1-4096 bit AES and RSA before being stored in the memory; when the data is to be accessed, it is transmitted between the memory and the CPU. The virtual machine is stored in the memory module as a file, and the memory capacity planned for the virtual memory is in the same sector.
  • each of the chunk memory disks is provided with a monitoring unit for monitoring the operation status.
  • the detection unit adopts Splunk or monitoring software provided by another search engine; when a problem is detected, a service that restarts the application software can be provided, thereby achieving a recovery function.
  • the virtual machine platform can be VMware vSphere ESXi 4.1 or a later version, Microsoft Server 2012 R2 Hyper-V or a later version, Citrix XenServer, Oracle VM, Red Hat KVM, Red Hat Control groups (cgroups), Red Hat Linux Containers (LXC), KVM, Eucalyptus, OpenStack, User Mode Linux, LXC, OpenVZ, OpenNebula, Enomaly's Elastic Computing, OpenFlow, or Linux-based KVM; and the virtual machine operating system can be Linux (Linux 2.6.14 and up have FUSE support included in the official kernel), FreeBSD, OpenSolaris or MacOS X.
  • the memory of the virtual machine is operated through the storage area network; a network layer interface virtualized by software connects all the chunk memory disks so they are jointly operated.
  • the network layer interface adopts the SAN, SAN iSCSI, SAN FC, SAN FCoE, NFS, NAS, JBOD, CIFS or FUSE interface for communicating with the server and the disk driver, and the RAMSTORAGE™ API is provided and serves as a backup program; wherein the RAMSTORAGE™ API is formed with REST, Restful, C++, PHP, Python, Java, Perl, Javascript and other program developing software, and the API functions of the distributed memory disk cluster storage include fault tolerance, backup, shift, rapid layout of virtual machines, managing disk size, automatically increasing the chunk memory disks according to actual needs, balancing the data loading between chunks, backup recovery, continuous data protector, rapid capture and resource monitoring.
  • the CPU, the memory and the physical hard disks that are not in use can be integrated into a common resource pool through the virtual machine platform, and each required computer resource can be automatically adjusted and transferred to another computer unit having richer resources.
  • the plural distributed memory disk cluster storages can be connected according to the physical internet transmission protocol, and the packets can be transmitted through SSL, VPN or an encryption computing manner; when the network connection cannot be established, each region operates independently, and when the connection is recovered, the data is fully synchronized to each of the chunk memory disks of each of the distributed memory disk cluster storages.
  • the CPU is selected from x86, x86-64, IA-64, Alpha, ARM, SPARC 32 and 64, PowerPC, MIPS and Tilera.
  • the memory installed in the computer unit is operated by directly utilizing the memory controller of the CPU to access the memory data in a three-channel or multiple-channel manner at a speed of 800 MHz to 1,333 MHz or higher.
  • the memory capacity is 1 MB to 16 ZB
  • the adopted memory type can be a dynamic random access memory (DRAM), a synchronous dynamic memory (SDRAM), a dynamic mobile platform memory, a dynamic graphic process memory, a dynamic Rambus memory, a static random access memory (SRAM), a read-only memory (ROM), a Magnetoresistive random-access memory or a flash memory.
  • the dynamic random access memory is FPM RAM (Fast Page Mode RAM) or EDO RAM (Extended Data Output RAM);
  • the synchronous dynamic memory (SDRAM) is SDRAM, DDR SDRAM, DDR2 SDRAM, DDR3 SDRAM, DDR4 SDRAM or DDR5 SDRAM;
  • the dynamic mobile platform memory is LPDDR (Low Power Double Data Rate RAM), LPDDR2, LPDDR3 or LPDDR4;
  • the dynamic graphic process memory is VRAM (Video RAM), WRAM (Window RAM), MDRAM (Multibank Dynamic RAM), SGRAM (Synchronous Graphics RAM), SDRAM, GDDR (Graphics Double Data Rate RAM), GDDR2, GDDR3, GDDR4, GDDR5, GDDR6 or GDDR7, and other upward compatible types having higher access speed or a different access manner; or a Magnetoresistive random-access memory such as MRAM and other upward compatible types having higher access speed or a different access manner; or a Ferroelectric RAM such as FeRAM and other upward compatible types having higher access speed.
  • the hard disk storage device is a conventional disk-head drive, floppy-disk drive, solid state drive, internet drive, SAS drive, SATA drive, mSATA drive, PCIe drive, FC drive, SCSI drive, ATA drive, NAND Flash card or FCoE drive.
  • the network card is an Ethernet, fast Ethernet, gigabit Ethernet, glass fiber, token ring network, InfiniBand, FCoE (fiber channel over Ethernet) or wireless network.
  • the network speed is 2 Mbit/s, 10 Mbit/s, 11 Mbit/s, 40 Mbit/s, 54 Mbit/s, 80 Mbit/s, 100 Mbit/s, 150 Mbit/s, 300 Mbit/s, 433 Mbit/s, 1,000 Mbit/s, 1 Gbit/s, 8 Gbit/s, 10 Gbit/s, 16 Gbit/s, 32 Gbit/s, 40 Gbit/s, 56 Gbit/s, 100 Gbit/s, 160 Gbit/s or 1,000 Gbit/s.
  • the mother board is compatible with the x86, x86-64, IA-64, Alpha, ARM, SPARC 32 and 64, PowerPC, MIPS and Tilera processors.
  • the file format of the operating system is VMFS3, VMFS5 and other upward compatible types having different format, VHD and other upward compatible types having different format, VHDX and other upward compatible types having different format, VMDK and other upward compatible types having different format, HDFS and other upward compatible types having different format, Isilon OneFS and other upward compatible types having different format, any format generated through a memory-type pagefile and other upward compatible types having different format, VEs and other upward compatible types having different format, VPSs and other upward compatible types having different format, CePH, GlusterFS, SphereFS, Taobao File System, ZFS, SDFS, MooseFS, AdvFS, Be file system (BFS), Btrfs, Coda, CrossDOS, disk file system (DFS), Episode, EFS, exFAT, ext, FAT, global file system (GFS), hierarchical file system (HFS), HFS Plus, high performance file system, IBM general parallel file system, JFS, Macintosh file system, MINIX, NetWare
  • the distributed memory storage system can satisfy four desired expansions, which are the expansion of network bandwidth, the expansion of hard disk capacity, the expansion of IOPS speed, and the expansion of memory I/O transmitting speed.
  • the system can be operated cross-region, across data centers and over the WAN, so user requirements can be collected through the local memory disk cluster and served with the corresponding services; the capacity of the memory disk cluster can also be gradually expanded for further providing cross-region or cross-country data service.
  • the distributed memory disk cluster storage behaves like a physical hard disk, so the whole operation is not affected when one of the physical mainframes fails; the chunk memory disks holding copies can replicate the stored data to a new chunk memory disk, so a fundamental data backup is maintained. Meanwhile, the continuous data protector (CDP) is adopted for providing a novel service of data backup and recovery, thereby improving on tape backup, which often fails and is performed only once a day.
  • the data generated through copying can be sent from different chunk memory disks, thereby achieving many-to-one data transmission; when the user amount increases, simply increasing the quantity of chunk memory disks achieves many-to-many transmission. This solves the disadvantages of the prior art: multiple RAID hard disks crashing and losing the whole data; the limited quantity of network interfaces of a storage device and the network speed causing excessive data to be jammed and delayed in transmission; the expansion of LUNs; and the data center being unable to operate cross-region. The present invention adopts the memory as a disk: each file or each virtual machine can be stored in the memory in a file format, the highest I/O speed of the memory bus can be directly utilized, and the data is transmitted between the CPU and the memory, providing the highest I/O count, shortest distance and highest speed. Accordingly, the present invention is novel and more practical in use compared to the prior art.
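The failure handling described above — a surviving copy re-replicates its chunk to a new chunk memory disk when a physical mainframe fails — can be sketched as follows. This is a minimal illustration only; the placement map, the function names and the pick-first-eligible-disk rule are assumptions, not part of the disclosure:

```python
def rereplicate(placement, failed_disk, live_disks):
    """Restore the replica count after a disk (mainframe) failure.

    `placement` maps chunk id -> list of disk ids holding a copy.
    Every chunk that kept a replica on `failed_disk` is copied from a
    surviving replica to a live disk that does not already hold it,
    so the fundamental backup level is maintained.
    """
    for chunk, disks in placement.items():
        if failed_disk in disks:
            disks.remove(failed_disk)
            # choose the first live disk not already holding this chunk
            target = next(d for d in live_disks if d not in disks)
            disks.append(target)
    return placement
```

In a real cluster the target choice would also weigh free capacity and load; here the first eligible disk is taken purely for brevity.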
  • FIG. 1 is a schematic view illustrating the operation method of distributed memory disk cluster storage system according to one embodiment provided by the present invention.
  • FIG. 2 is another schematic view illustrating the operation method of distributed memory disk cluster storage system according to one embodiment provided by the present invention.
  • the present invention provides an operation method of distributed memory disk cluster storage system, wherein one preferred embodiment illustrating the operation method is as follows:
  • a distributed memory storage equipment is installed, including a plurality of computer units ( 10 ) assembled into a cluster scheme ( 1 ) so as to form a cluster memory disk; wherein each computer unit ( 10 ) is installed with a CPU, at least one memory, at least one hard disk, at least one network card, a mother board, an I/O interface card, at least one connection cable and a housing.
  • the computer unit ( 10 ) is installed with a virtual machine platform, so the computer unit ( 10 ) is formed with a plurality of virtual machines, and the computer unit ( 10 ) sets the required machine memory resource capacity; the virtual machine operating system sets the way the memory capacity is occupied, or a program software plans the memory as a hard disk device, forming a chunk memory disk ( 11 ) that behaves the same as the tracks of a hard disk.
  • a file can be divided into one or plural data pieces, and the file size can be 64 MB or bigger; one or plural copies are evenly distributed among the chunk memory disks ( 11 ), so the data is actually stored in a memory module. A memory bus with multiple channels accesses the memory modules in parallel, allowing the capacity of the memory modules to be planned for use as a hard disk, wherein the access of the memory module supports all the file formats of the virtual machine operating system, and a distributed storage scheme allows the data to be copied to one or more copies. With the above-mentioned method, the data center can still operate even if a machine breaks down and/or a data center is damaged.
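The chunk division and even replica distribution described above can be sketched as follows. This is a minimal illustration under stated assumptions: the 64 MB chunk size comes from the embodiment, but `split_into_chunks`, `place_replicas` and the round-robin placement rule are hypothetical, not part of the disclosure:

```python
CHUNK_SIZE = 64 * 1024 * 1024  # 64 MB per the embodiment; smaller values work for testing

def split_into_chunks(data: bytes, chunk_size: int = CHUNK_SIZE):
    """Divide a file into fixed-size chunks (the last chunk may be shorter)."""
    return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

def place_replicas(num_chunks: int, num_disks: int, copies: int = 2):
    """Evenly distribute `copies` replicas of each chunk across the chunk
    memory disks, never putting two replicas of one chunk on the same disk
    (requires copies <= num_disks)."""
    placement = {}
    for c in range(num_chunks):
        placement[c] = [(c + r) % num_disks for r in range(copies)]
    return placement
```

Because the replica sets of consecutive chunks rotate through the disks, losing any single disk still leaves a full copy of every chunk elsewhere, which is the property the embodiment relies on.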
  • each copied data piece can be encrypted through a mix of 1-4096 bit AES and RSA before being stored in the memory; when the data is to be accessed, it is transmitted between the memory and the CPU, thereby minimizing the I/O access count and distance. The virtual machine is stored in the memory module as a file, and the memory capacity planned for the virtual memory is in the same sector.
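The AES-plus-RSA mixing described above is an envelope-encryption pattern: a fresh symmetric data key encrypts each replica, and that data key is itself wrapped with an asymmetric key. The sketch below shows only the pattern using the standard library alone — a SHA-256 counter keystream stands in for AES and a simple XOR key-wrap stands in for RSA; these stand-ins and all names are assumptions for illustration, and a real deployment would use a vetted cryptographic library:

```python
import os
import hashlib

def _keystream(key: bytes, length: int) -> bytes:
    # SHA-256 counter keystream: a stdlib stand-in for a real AES-CTR cipher.
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt_copy(plaintext: bytes, wrapping_key: bytes):
    """Envelope encryption: a fresh data key encrypts the replica, and the
    data key is wrapped for the key holder (stand-in for RSA wrapping)."""
    data_key = os.urandom(32)
    ciphertext = bytes(a ^ b for a, b in zip(plaintext, _keystream(data_key, len(plaintext))))
    wrapped_key = bytes(a ^ b for a, b in zip(data_key, _keystream(wrapping_key, 32)))
    return wrapped_key, ciphertext

def decrypt_copy(wrapped_key: bytes, ciphertext: bytes, wrapping_key: bytes) -> bytes:
    data_key = bytes(a ^ b for a, b in zip(wrapped_key, _keystream(wrapping_key, 32)))
    return bytes(a ^ b for a, b in zip(ciphertext, _keystream(data_key, len(ciphertext))))
```

Using a fresh data key per copy keeps a compromised replica from exposing the others, which is the practical reason for the hybrid scheme.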
  • the storage system provided by the present invention allows most of the data in the computer unit ( 10 ) to be processed in the chunk memory disk ( 11 ) in a parallel computing manner; data not held in the computer unit ( 10 ) is accessed from the chunk memory disk ( 11 ) of another computer unit ( 10 ) through a network card ( 13 ) connected to a connection port cluster link ( 20 ).
  • the virtual machine platform operating system can be VMware vSphere ESXi 4.1 or a later version, Microsoft Server 2012 R2 Hyper-V or a later version, Citrix XenServer, Oracle VM, Red Hat KVM, Red Hat Control groups (cgroups), Red Hat Linux Containers (LXC), KVM, Eucalyptus, OpenStack, User Mode Linux, LXC, OpenVZ, OpenNebula, Enomaly's Elastic Computing, OpenFlow, or Linux-based KVM; and the virtual machine operating system can be Linux (Linux 2.6.14 and up have FUSE support included in the official kernel), FreeBSD, OpenSolaris or MacOS X.
  • each of the chunk memory disks ( 11 ) can be provided with a monitoring unit for monitoring the operation status.
  • the detection unit can adopt Splunk or monitoring software provided by another search engine; when a problem is detected, a service that restarts the application software can be provided, thereby achieving a recovery function. The mentioned program is prior art, so no further illustration is provided.
  • each of the computer units ( 10 ) can be categorized into a first data center ( 101 ), at least one second data center ( 102 ) and a backup center ( 103 ); wherein the first data center ( 101 ) is provided with a virtual cluster data control station ( 1011 ) for controlling, each of the second data centers ( 102 ) is provided with a virtual cluster data backup station ( 1021 ) for controlling, and the backup center ( 103 ) is provided with a virtual cluster data backup station ( 1021 ) for controlling; the first data center ( 101 ) and the second data center ( 102 ) jointly form a distributed memory file system ( 40 ).
  • a stack scheme is provided for expanding the storage capacity; the access means of a network layer interface plans the plural chunk memory disks ( 11 ) of the computer units ( 10 ) into a resource pool of cluster memory disk units under a cluster concept, and the operating theory is the same as that of the bus of a computer.
  • the operation is the same as a 64-bit CPU bus synchronously using all the chunk memory disks ( 11 ) for accessing data; when the quantity of chunk memory disks ( 11 ) is expanded, the effect is the same as upgrading a 64-bit CPU bus to 128 bit or 256 bit: the access speed increases in an accumulating manner. Thus the memory disk capacity can be increased by increasing the quantity of chunk memory disks ( 11 ), the limit of the disk capacity can be raised, and the data access speed and data reliability can also be increased, all gradually expandable according to the user's desire.
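The bus-widening analogy above amounts to striping: each added chunk memory disk is another lane that can be read or written in parallel, so aggregate throughput accumulates with disk count. A minimal round-robin striping sketch (the function names and the 4-byte stripe unit are assumptions for illustration):

```python
def stripe(data: bytes, num_disks: int, stripe_unit: int = 4):
    """Round-robin striping across chunk memory disks: consecutive
    stripe units land on consecutive disks, like lanes of a wider bus."""
    disks = [bytearray() for _ in range(num_disks)]
    for i in range(0, len(data), stripe_unit):
        disks[(i // stripe_unit) % num_disks] += data[i:i + stripe_unit]
    return disks

def read_striped(disks, stripe_unit: int = 4) -> bytes:
    """Reassemble by draining one stripe unit from each disk in turn."""
    out = bytearray()
    offsets = [0] * len(disks)
    turn = 0
    while any(offsets[d] < len(disks[d]) for d in range(len(disks))):
        d = turn % len(disks)
        out += disks[d][offsets[d]:offsets[d] + stripe_unit]
        offsets[d] += stripe_unit
        turn += 1
    return bytes(out)
```

With N disks, N stripe units can be fetched concurrently, which is why the embodiment says the access speed grows in an accumulating manner as disks are added.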
  • each of the cluster schemes ( 1 ) can be operated independently, and each can be used as a distributed memory disk cluster storage (DMDCS) ( 1 A); the network layer interface is used for stacking, so each of the distributed memory disk cluster storages ( 1 A) can simulate a chunk memory disk, and a new cluster data control station ( 1011 ) and a new cluster data backup station ( 1021 ) are provided for controlling the distribution of the processed data among all the chunk memory disks.
  • the above is the same as utilizing the resources of each mainframe for parallel computing: the data is divided into blocks transmitted to each machine for computing, and the partial results are eventually integrated into a final result.
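The divide-compute-integrate flow described above can be sketched with a thread pool standing in for the cluster machines (a minimal illustration; `parallel_sum` and summation as the workload are assumptions, not part of the disclosure):

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_sum(data, num_workers: int = 4):
    """Divide the data into blocks, hand each block to a worker
    (standing in for a cluster machine), then integrate the
    partial results into a final result."""
    block = max(1, len(data) // num_workers)
    blocks = [data[i:i + block] for i in range(0, len(data), block)]
    with ThreadPoolExecutor(max_workers=num_workers) as pool:
        partials = list(pool.map(sum, blocks))  # one partial result per block
    return sum(partials)  # integrate
```

In the patented system the "workers" are computer units reached over the cluster link, but the split/compute/merge shape is the same.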
  • the server computer unit marks the damaged memory as malfunctioning, and the chunk memory IC of the DIMM memory is no longer used; the resource is used again only after the memory is replaced.
  • the memory of the virtual machine is operated through a storage area network (SAN); a network layer interface virtualized by software connects all the chunk memory disks so they are jointly operated. The network layer interface adopts the SAN, SAN iSCSI, SAN FC, SAN FCoE, NFS, NAS, JBOD, CIFS or FUSE interface for communicating with the server and the disk driver, and the RAMSTORAGE™ API is provided and serves as a backup program.
  • the RAMSTORAGE™ API is formed with REST, Restful, C++, PHP, Python, Java, Perl, Javascript and other program developing software.
  • the API functions of the distributed memory disk cluster storage ( 1 A) include fault tolerance, backup, shift, rapid layout of machines, planning disk size, automatically increasing the chunk memory disks ( 11 ) according to actual needs, balancing the data loading between chunks, backup recovery, continuous data protector (CDP), rapid capture and resource monitoring.
  • each of the chunk memory disks ( 11 ) is respectively and electrically connected to at least one hard disk storage device ( 12 ), which backs up the data in the chunk memory disk ( 11 ) in every preset period of time, thereby guarding against any unanticipated malfunction; for example, every minute the altered portion of the data in each chunk memory disk ( 11 ) is copied to the hard disk storage device ( 12 ).
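Copying only the altered portion each cycle is an incremental block-level backup. A minimal sketch, assuming fixed-size comparison blocks and the hypothetical names `altered_blocks`/`apply_backup` (neither appears in the disclosure):

```python
def altered_blocks(disk: bytes, last_backup: bytes, block_size: int = 4):
    """Compare the chunk memory disk against the last backup image and
    return only the blocks that changed, keyed by byte offset."""
    changed = {}
    for i in range(0, len(disk), block_size):
        block = disk[i:i + block_size]
        if last_backup[i:i + block_size] != block:
            changed[i] = block
    return changed

def apply_backup(last_backup: bytes, changed: dict) -> bytes:
    """Merge the altered blocks into the backup image on the hard disk
    storage device, yielding the new backup state."""
    out = bytearray(last_backup)
    for offset, block in changed.items():
        out[offset:offset + len(block)] = block
    return bytes(out)
```

Run once per preset period (every minute in the example above), only the changed dictionary crosses from memory to the hard disk, keeping the backup cost proportional to the churn rather than the disk size.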
  • all the chunk memory disks ( 11 ) of all the computer units ( 10 ) use the continuous data protector (CDP) for constantly and continuously backing up the data to a common large-scale hard disk cluster array; when part of the server computer units or part of the chunk memory disks ( 11 ) fails due to environmental or other factors, the virtual machine can be recovered at the required timing by finding the captured backup or a certain recovery point. The mentioned large-scale hard disk array is the backup center ( 103 ); of course, the cluster disk array can also adopt conventional magnetic tape for providing a third backup.
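Continuous data protection differs from periodic backup in that every write is journaled, so the disk can be rolled back to any recovery point rather than only to the last snapshot. A minimal sketch (the class and its interface are assumptions for illustration):

```python
class ContinuousDataProtector:
    """Append-only journal of writes, allowing restore of the disk
    image as it stood at any requested timestamp."""

    def __init__(self, size: int):
        self.size = size
        self.journal = []  # entries of (timestamp, offset, data)

    def record(self, timestamp, offset, data: bytes):
        """Capture one write as it happens (continuous protection)."""
        self.journal.append((timestamp, offset, data))

    def restore(self, timestamp) -> bytes:
        """Replay every journaled write up to the requested recovery
        point, yielding the disk image at that moment."""
        image = bytearray(self.size)
        for ts, offset, data in self.journal:
            if ts <= timestamp:
                image[offset:offset + len(data)] = data
        return bytes(image)
```

This is why the embodiment contrasts CDP with tape backup performed once a day: the journal preserves every intermediate state, not just one nightly image.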
  • each of the chunk memory disks ( 11 ) can plan the required capacity and CPU resources in an automatic layout manner; the network layer interface can also automatically set up the required IP and MAC addresses, and the virtual machine can be set with the AP according to actual needs, with the required conditions automatically assigned.
  • the CPU, the memory and the physical hard disks that are not in use can be integrated into a common resource pool through the virtual machine platform operating system, and each required computer resource can be automatically adjusted and transferred to another computer unit ( 10 ) with richer resources.
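The pooled-resource adjustment described above can be sketched as a placement decision: when a workload outgrows its host, move it to the computer unit with the richest free resources. The node dictionaries, field names and the greedy selection rule are assumptions for illustration, not part of the disclosure:

```python
def pick_richest_node(nodes):
    """Choose the computer unit with the most free CPU (ties broken
    by free memory) in the common resource pool."""
    return max(nodes, key=lambda n: (n["free_cpu"], n["free_mem"]))

def migrate_if_starved(vm, host, nodes):
    """If the workload needs more than its host offers, return the
    richer target node it should move to; otherwise keep the host."""
    if vm["cpu"] > host["free_cpu"] or vm["mem"] > host["free_mem"]:
        target = pick_richest_node(nodes)
        if target is not host and vm["cpu"] <= target["free_cpu"] and vm["mem"] <= target["free_mem"]:
            return target
    return host
```

A production scheduler would also account for network locality and in-flight load, but the pool-then-rebalance shape matches the embodiment's description.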
  • the plural distributed memory disk cluster storages ( 1 A) can be connected according to the physical internet transmission protocol, the packets can be transmitted through SSL, VPN or an encryption computing manner, and the system can be operated cross-region, cross-country and over WAN IP; when the network connection cannot be established, each region operates independently. When the connection is recovered, the data is fully synchronized to each of the chunk memory disks ( 11 ) of each of the distributed memory disk cluster storages ( 1 A).
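The partition behaviour above — regions run independently, then fully resynchronize on reconnect — needs a merge rule for keys updated on both sides. The disclosure does not specify one, so the sketch below assumes last-writer-wins by timestamp; the replica layout and function name are likewise hypothetical:

```python
def resync(replicas):
    """Merge independently updated region replicas after a partition.

    Each replica maps key -> (timestamp, value). For every key the
    newest pair wins, and the merged state is pushed back to every
    replica so all chunk memory disks end up fully synchronized.
    """
    merged = {}
    for replica in replicas:
        for key, (ts, value) in replica.items():
            if key not in merged or ts > merged[key][0]:
                merged[key] = (ts, value)
    for replica in replicas:
        replica.clear()
        replica.update(merged)
    return merged
```

Last-writer-wins is the simplest convergent rule; a deployment that must preserve concurrent edits would need version vectors or application-level conflict handling instead.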
  • the CPU is selected from x86, x86-64, IA-64, Alpha, ARM, SPARC 32 and 64, PowerPC, MIPS and Tilera.
  • the memory installed in the computer unit ( 10 ) is operated by directly utilizing the memory controller of the CPU to access the memory data in a three-channel or multiple-channel manner at a speed of 800 MHz to 1,333 MHz or higher.
  • the capacity of a single memory is 1 MB (megabyte) to 16 ZB (zettabyte); the adopted memory type can be a dynamic random access memory (DRAM) such as FPM RAM or EDO RAM; or a synchronous dynamic memory (SDRAM) such as SDRAM, DDR SDRAM, DDR2 SDRAM, DDR3 SDRAM, DDR4 SDRAM, DDR5 SDRAM and other upward compatible types having higher access speed or a different access manner; or a dynamic mobile platform memory such as LPDDR, LPDDR2, LPDDR3, LPDDR4 and other upward compatible types having higher access speed or a different access manner; or a dynamic graphic process memory such as VRAM, WRAM, MDRAM, SGRAM, SDRAM, GDDR, GDDR2, GDDR3, GDDR4, GDDR5, GDDR6, GDDR7 and other upward compatible types having higher access speed or a different access manner; or a Magnetoresistive random-access memory such as MRAM and other upward compatible types having higher access speed or a different access manner; or a Ferroelectric RAM such as FeRAM.
  • the hard disk storage device ( 12 ) includes a conventional disk-head drive, floppy-disk drive, solid state drive, internet drive, SAS drive, SATA drive, mSATA drive, PCIe drive, FC drive, SCSI drive, ATA drive, NAND Flash card, FCoE drive and other upward compatible types having higher access speed or a different access manner.
  • the network card can be selected from an Ethernet, fast Ethernet, gigabit Ethernet, glass fiber, token ring network, InfiniBand, FCoE (fiber channel over Ethernet) or wireless network; and with respect to the network protocol, the network speed can adopt 2 Mbit/s, 10 Mbit/s, 11 Mbit/s, 40 Mbit/s, 54 Mbit/s, 80 Mbit/s, 100 Mbit/s, 150 Mbit/s, 300 Mbit/s, 433 Mbit/s, 1,000 Mbit/s, 1 Gbit/s, 8 Gbit/s, 10 Gbit/s, 16 Gbit/s, 32 Gbit/s, 40 Gbit/s, 56 Gbit/s, 100 Gbit/s, 160 Gbit/s or 1,000 Gbit/s; any other network card with a new network communication protocol can also be adopted.
  • the mother board is selected from any mother board compatible with the x86, x86-64, IA-64, Alpha, ARM, SPARC 32 and 64, PowerPC, MIPS and Tilera processors, and the BeagleBone Black or Raspberry Pi mother boards made by specific computer manufacturers.
  • the file format of the virtual machine operating system can be selected from VMFS3, VMFS5 and other upward compatible types having different format, VHD and other upward compatible types having different format, VHDX and other upward compatible types having different format, VMDK and other upward compatible types having different format, HDFS and other upward compatible types having different format, Isilon OneFS and other upward compatible types having different format, any format generated through memory-type pagefile and other upward compatible types having different format, VEs and other upward compatible types having different format, VPSs and other upward compatible types having different format, Ceph, GlusterFS, SphereFS, Taobao File System, ZFS, SDFS, MooseFS, AdvFS, Be file system (BFS), Btrfs, Coda, CrossDOS, disk file system (DFS), Episode, EFS, exFAT, ext, FAT, global file system (GFS), hierarchical file system (HFS), HFS Plus, high performance file system, IBM general parallel file system, JFS, Macintosh file system.
  • the physical network protocol transferring can be selected from Ethernet, fast Ethernet, gigabit Ethernet, optical fiber, token ring network, SS7, GSM, GPRS, EDGE, HSPA, HSPA+, CDMA, WCDMA, TD-WCDMA, LTE, cdmaOne, CDMA2000, UMTS WCDMA, TD-SCDMA, WiMAX, 3G broadcast network, CDMA2000 1X, Wi-Fi, Super Wi-Fi, Wi-Fi GO and other upward compatible IEEE network transmission protocols.
  • the distributed memory storage system can satisfy four desired expansions, which are the expansion of network bandwidth, the expansion of hard disk capacity, the expansion of IOPS speed, and the expansion of memory I/O transmitting speed. Meanwhile, the system can be cross-region, cross-datacenter and cross-WAN operated, so the user's requirements can be collected through the local memory disk cluster for providing the corresponding services, and the capacity of the memory disk cluster can also be gradually expanded for further providing cross-region or cross-country data service.
  • the distributed memory disk cluster storage (1A) serves like a physical hard disk, so the whole operation is not affected when one of the physical mainframes fails; the chunk memory disk (11) holding the copy can copy the stored data to a new chunk memory disk (11), so a fundamental data backup is maintained; meanwhile the continuous data protector (CDP) is also adopted for providing a novel service of data backup and recovery, thus improving the disadvantages of tape backup, which often fails and is performed only once a day.
  • the data generated through the copies can be sent from different chunk memory disks (11), thereby achieving many-to-one data transmission; when the user amount increases, merely increasing the quantity of the chunk memory disks (11) achieves many-to-many transmission, so the following disadvantages can be solved: multiple RAID hard disks crashing causing the whole data to be lost; the limited quantity of network interfaces of a storage device and the network speed causing excessive data to be jammed and delayed in transmission; the expansion of a LUN; and the data center being unable to be operated cross-region. The present invention adopts the memory being served as a disk: each file or each virtual machine can be stored in the memory in a file format, the highest I/O speed of the memory bus can be directly utilized, the data can be transmitted between the CPU and the memory, and the highest I/O number, distance and speed can be provided. Accordingly, the present invention is novel and more practical in use compared to the prior art.

Abstract

The present invention relates to an operation method of distributed memory disk cluster storage system; the distributed memory storage system is adopted, thereby satisfying four desired improvements including the expansion of network bandwidth, the expansion of hard disk capacity, the increase of IOPS speed, and the increase of memory I/O transmitting speed.
Meanwhile, the system can be cross-region, cross-datacenter and cross-WAN operated, so the user's requirements can be collected through the local memory disk cluster for providing the corresponding services, and the capacity of the memory disk cluster can also be gradually expanded for further providing cross-region or cross-country data service.

Description

    CROSS-REFERENCE TO RELATED APPLICATION
  • This application is a continuation application of and claims the priority benefit of U.S. application Ser. No. 14/562,892, filed on Dec. 8, 2014, now allowed, which claims the priority benefit of Taiwan application serial no. 102145155, filed on Dec. 9, 2013. The entirety of each of the above-mentioned patent applications is hereby incorporated by reference herein and made a part of this specification.
  • BACKGROUND OF THE INVENTION
  • 1. Field of the Invention
  • The present invention relates to an operation method of distributed memory disk cluster storage system, especially to a network data interchange storage system having features of fast many-to-many transmission, high expandability and stable performance.
  • 2. Description of Related Art
  • In recent years, with the prevalence of network applications and the increasing network communication required by mobile devices, the corporation information system has been continuously developed to meet the fast-growing demand. The requirement for computer resources has never been so high, and with the boosting amount of users accessing the network at the same time, the current storage equipment is unable to satisfy the connection and bandwidth required by such an enormous amount of users.
  • In the hardware system of a conventional network processing server, the electronic data has to utilize a network as a bridge for being transmitted. When the data amount is at a normal level, the available network transmission capacity is enough to handle it; when the data amount rapidly increases, because the network transmitting rate of the network bridging transmission has its limit, the processing speed required for handling the huge data amount cannot be met regardless of how up-to-date the server computer is, so users may face the problems of data delay or transmission interruption when using the network system.
  • Moreover, with the existing technology, the development of memory has not yet reached the maximum stage, so the storage capacity is very limited and can only be used for temporary storage; thus the conventional server mainframe still adopts a hard disk device for storing data and allowing the main operating system to be installed therein.
  • Speaking of the data transmission between software and hardware, the data transmitting speed between the processing unit and the memory is much higher than the data transmitting speed between the processing unit and the hard disk; in other words, utilizing the hard disk equipment as the main storage space for the purpose of computing is the main reason why the optimal processing performance cannot be achieved, and such a situation only gets worse when processing large amounts of data. Moreover, the service life of the hard disk equipment is far shorter than the service life of the memory, so adopting the hard disk equipment as the main storage means is not the best solution for the whole system.
  • Based on the above, the prior art cannot enable the processor to perform at its real processing efficiency when large amounts of data are awaiting to be handled; accordingly, the applicant of the present invention has devoted himself to developing and designing an operation method of distributed memory disk cluster storage system for improving the disadvantages existing in the prior art.
  • SUMMARY OF THE INVENTION
  • In view of the disadvantages existing in the prior art, the present invention provides an operation method of distributed memory disk cluster storage system for overcoming the above-mentioned disadvantages.
  • Accordingly, the present invention provides an operation method of distributed memory disk cluster storage system, characterized in that: firstly, the installation of a distributed memory storage equipment includes a plurality of computer units for assembling a cluster scheme so as to form a cluster memory disk; the computer unit is installed with a virtual machine platform, so the computer unit is formed with a plurality of virtual machines, and the computer unit is used for setting the memory capacity occupying means through the virtual machine operating system or a program software, so the memory is able to be planned as a storage device, thereby forming a plurality of chunk memory disks; a file is divided into one or plural data, one or plural copies are evenly distributed in the chunk memory disks, and a memory bus with multiple channels is utilized for parallel accessing the memory module, thereby allowing the capacity of the memory module to be planned for being used as a hard disk, wherein the access of the memory module supports all the file formats of the virtual machine operating system, and a distributed storage scheme is utilized for allowing the data to be copied to one or more copies; when the virtual machine operating system of the virtual machine directly accesses the required file through the CPU, the processed data is stored in the memory module, the memory used by the virtual machine for computing is also in the memory module, and the computed data is directly stored in the original location of the memory module of the virtual machine operating system, so most of the data in the computer unit is able to be processed in the chunk memory disk in a parallel computing manner; the data which is not in the computer unit accesses the chunk memory disk of another computer unit through a network card being connected to a connection port cluster link; with respect to the assigned functions, each of the computer units is categorized into a first data center, at least a second data center and a backup center; wherein the first data center is provided with a virtual cluster data control station for controlling, each of the second data centers is provided with a virtual cluster data backup station for controlling, and the backup center is provided with a virtual cluster data backup station for controlling, wherein the first data center and the second data center together form a distributed memory file system; moreover, a stack scheme is provided for expanding the storage capacity scheme, the access means of a network layer interface is utilized to plan the plural chunk memory disks of a computer unit into a resource pool of a cluster memory disk unit with a cluster concept, and all the chunk memory disks are enabled to be synchronously operated for accessing data; when the cluster schemes are formed, each of the cluster schemes is able to be independently operated and served as a distributed memory disk cluster storage, the network layer interface is used for stacking, each of the distributed memory disk cluster storages is able to be used for simulating a chunk memory disk, and a new cluster data control station and a new cluster data backup station are provided for controlling the amount of processed data to be distributed in all the chunk memory disks.
  • Wherein, each of the chunk memory disks is respectively and electrically connected to at least a hard disk storage device, and the hard disk storage device serves to back up the data in the chunk memory disk in every preset period of time.
  • Wherein, the chunk memory disks of all the computer units use the continuous data protector for constantly and continuously backing up the data to a common large-scale hard disk cluster array for the purpose of backup.
  • Wherein, the computer unit is installed with a CPU, at least a memory, at least a hard disk, at least a network card, a mother board, an I/O interface card, at least a connection cable and a housing.
  • Wherein, each copied data is encrypted through mixing 1-4096 bit AES and RSA for being stored in the memory; when the data is desired to be accessed, the data is transmitted between the memory and the CPU; the virtual machine is formed in a file format for being stored in the memory module, and the memory capacity planned for the virtual memory is also in the same sector.
  • Wherein, each of the chunk memory disks is provided with a monitoring unit for monitoring the operation status; the monitoring unit adopts Splunk or any software provided by another search engine for the purpose of monitoring, and when a problem is detected, a service of restarting the application software can be provided, thereby achieving a recovery function.
  • Wherein, the virtual machine platform can be VMware vSphere ESXi 4.1 or a later version, Microsoft Server 2012 R2 Hyper-V or a later version, Citrix XenServer, Oracle VM, Red Hat KVM, Red Hat Control groups (cgroups), Red Hat Linux Containers (LXC), KVM, Eucalyptus, OpenStack, User Mode Linux, LXC, OpenVZ, OpenNebula, Enomaly's Elastic Computing, OpenFlow, or Linux-Base KVM; and the virtual machine operating system can be Linux (Linux 2.6.14 and up have FUSE support included in the official kernel), FreeBSD, OpenSolaris or MacOS X.
  • Wherein, the memory of the virtual machine is operated through the storage area network; a network layer interface virtualized by software is adopted for connecting all the chunk memory disks so that they are jointly operated.
  • Wherein, the network layer interface adopts the SAN, SAN iSCSI, SAN FC, SAN FCoE, NFS, NAS, JBOD, CIFS or FUSE interface for communicating with the server and the disk driver, and the RAMSTORAGE™ API is provided and served as a backup program; wherein the RAMSTORAGE™ API adopts REST, RESTful, C++, PHP, Python, Java, Perl, Javascript and other program developing software for forming the RAMSTORAGE™ API, and the API functions of the distributed memory disk cluster storage include tolerance, backup, shift, rapid layout of virtual machines, managing disk size, automatically increasing the chunk memory disks according to the actual needs, balancing the data loading between chunks, backup recovery, continuous data protector, rapid capture and monitoring resource.
  • Wherein, the CPU, the memory and the physical hard disk which are not in use can be integrated as a common resource pool through the virtual machine platform, and each required computer resource can be automatically adjusted and shifted to another computer unit having richer resources.
  • Wherein, the connecting manner of the plural distributed memory disk cluster storages can be according to the physical internet transmission protocol, and the packages can be transmitted through SSL, VPN or an encryption computing manner; when the network connection is unable to be established, each region is able to be independently operated, and when the connection is recovered, the data can be fully synchronized to each of the chunk memory disks of each of the distributed memory disk cluster storages.
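The disconnected operation and full resynchronization described above can be sketched as follows. This is a minimal illustration under stated assumptions: the last-writer-wins merge of per-region operation logs and all names are hypothetical, since the specification does not define a reconciliation algorithm.

```python
import time

class RegionStore:
    """Toy model of one region's chunk store: it operates independently
    while the link is down and keeps an operation log for later
    synchronization."""

    def __init__(self):
        self.data = {}      # key -> (timestamp, value)
        self.log = []       # (timestamp, key, value) operations

    def write(self, key, value, ts=None):
        ts = ts if ts is not None else time.time()
        self.data[key] = (ts, value)
        self.log.append((ts, key, value))

    def sync_from(self, other):
        """After the connection is recovered, replay the peer's log;
        last-writer-wins on conflicting keys."""
        for ts, key, value in other.log:
            if key not in self.data or self.data[key][0] < ts:
                self.data[key] = (ts, value)

# Two regions operate independently while the link is down...
a, b = RegionStore(), RegionStore()
a.write("x", "from-A", ts=1.0)
b.write("x", "from-B", ts=2.0)   # later, conflicting write
b.write("y", "only-B", ts=2.5)

# ...then fully synchronize in both directions once reconnected.
a.sync_from(b)
b.sync_from(a)
assert a.data == b.data
```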
  • Wherein, the CPU is selected from x86, x86-64, IA-64, Alpha, ARM, SPARC 32 and 64, PowerPC, MIPS and Tilera.
  • Wherein, the operating manner of the memory installed in the computer unit is to directly utilize the memory controller of the CPU to directly access the memory data in a three-channel or multiple-channel manner at a speed of 800 MHz to 1,333 MHz or higher.
  • Wherein, the memory capacity is 1 MB to 16 ZB, and the adopted memory type can be a dynamic random access memory (DRAM), a synchronous dynamic memory (SDRAM), a dynamic mobile platform memory, a dynamic graphic process memory, a dynamic Rambus memory, a static random access memory (SRAM), a read-only memory (ROM), a magnetoresistive random-access memory or a flash memory.
  • Wherein, the dynamic random access memory (DRAM) is FPM RAM (Fast Page Mode RAM) or EDO RAM (Extended Data Output RAM); the synchronous dynamic memory (SDRAM) is SDRAM, DDR SDRAM, DDR2 SDRAM, DDR3 SDRAM, DDR4 SDRAM or DDR5 SDRAM; the dynamic mobile platform memory is LPDDR (Low Power Double Data Rate RAM), LPDDR2, LPDDR3 or LPDDR4; the dynamic graphic process memory is VRAM (Video RAM), WRAM (Window RAM), MDRAM (Multibank Dynamic RAM), SGRAM (Synchronous Graphics RAM), SDRAM, GDDR (Graphics Double Data Rate RAM), GDDR2, GDDR3, GDDR4, GDDR5, GDDR6 or GDDR7; the magnetoresistive random-access memory is MRAM; the ferroelectric RAM is FeRAM; the phase change random access memory is PCRAM; the resistive random-access memory is ReRAM; the dynamic Rambus memory is RDRAM (Rambus DRAM), XDR DRAM (Extreme Data Rate Dynamic RAM) or XDR2 DRAM; and the flash memory is NOR Flash, NAND Flash, 3D NAND Flash, V-Flash, SLC (Single-Level Cell flash memory), MLC (Multi-Level Cell flash memory), eMMC (embedded Multi Media Card) or TLC (Triple-Level Cell flash memory); each of the above may also be any other upward compatible type having a higher access speed or a different access manner.
  • Wherein, the hard disk storage device is a conventional hard disk drive, a floppy-disk drive, solid state drive, internet drive, SAS drive, SATA drive, mSATA drive, PCIe drive, FC drive, SCSI drive, ATA drive, NAND Flash card or FCoE drive.
  • Wherein, the network card is an Ethernet, fast Ethernet, gigabit Ethernet, glass fiber, token ring network, InfiniBand, FCoE (fiber channel over Ethernet) or wireless network.
  • Wherein, the network speed is 2 Mbit/s, 10 Mbit/s, 11 Mbit/s, 40 Mbit/s, 54 Mbit/s, 80 Mbit/s, 100 Mbit/s, 150 Mbit/s, 300 Mbit/s, 433 Mbit/s, 1,000 Mbit/s, 1 Gbit/s, 8 Gbit/s, 10 Gbit/s, 16 Gbit/s, 32 Gbit/s, 40 Gbit/s, 56 Gbit/s, 100 Gbit/s, 160 Gbit/s or 1,000 Gbit/s.
  • Wherein, the mother board is compatible with the x86, x86-64, IA-64, Alpha, ARM, SPARC 32 and 64, PowerPC, MIPS and Tilera processors.
  • Wherein, the file format of the operating system is VMFS3, VMFS5 and other upward compatible types having different format, VHD and other upward compatible types having different format, VHDX and other upward compatible types having different format, VMDK and other upward compatible types having different format, HDFS and other upward compatible types having different format, Isilon OneFS and other upward compatible types having different format, any format generated through memory-type pagefile and other upward compatible types having different format, VEs and other upward compatible types having different format, VPSs and other upward compatible types having different format, Ceph, GlusterFS, SphereFS, Taobao File System, ZFS, SDFS, MooseFS, AdvFS, Be file system (BFS), Btrfs, Coda, CrossDOS, disk file system (DFS), Episode, EFS, exFAT, ext, FAT, global file system (GFS), hierarchical file system (HFS), HFS Plus, high performance file system, IBM general parallel file system, JFS, Macintosh file system, MINIX, NetWare file system, NILFS, Novell storage service, NTFS, QFS, QNX4FS, ReiserFS (Reiser4), SpadFS, UBIFS, Unix file system, Veritas file system (VxFS), VFAT, write anywhere file layout (WAFL), XFS, Xsan, ZFS, CHFS, FFS2, F2FS, JFFS, JFFS2, LogFS, NVFS, YAFFS, UBIFS, DCE/DFS, MFS, CXFS, GFS2, Google file system, OCFS, OCFS2, QFS, Xsan, AFS, OpenAFS, AFP, MS-DFS, GPFS, Lustre, NCP, NFS, POHMELFS, Hadoop, HAMMER, SMB (CIFS), cramfs, FUSE, SquashFS, UMSDOS, UnionFS, configfs, devfs, procfs, specfs, sysfs, tmpfs, WinFS, EncFS, EFS, ZFS, RAW, ASM, LVM, SFS, MPFS or MGFS.
  • According to the operation method of distributed memory disk cluster storage system provided by the present invention, the distributed memory storage system can satisfy four desired expansions, which are the expansion of network bandwidth, the expansion of hard disk capacity, the expansion of IOPS speed, and the expansion of memory I/O transmitting speed. Meanwhile, the system can be cross-region, cross-datacenter and cross-WAN operated, so the user's requirements can be collected through the local memory disk cluster for providing the corresponding services, and the capacity of the memory disk cluster can also be gradually expanded for further providing cross-region or cross-country data service.
  • With the increased quantity of the storage devices, adding one server has the network bandwidth and the disk capacity correspondingly accumulated, thereby forming a resource pool; the distributed memory disk cluster storage serves like a physical hard disk, so the whole operation is not affected when one of the physical mainframes fails; the chunk memory disk holding the copy can copy the stored data to a new chunk memory disk, so a fundamental data backup is maintained; meanwhile the continuous data protector (CDP) is also adopted for providing a novel service of data backup and recovery, thus improving the disadvantages of tape backup, which often fails and is performed only once a day.
  • In addition, the data generated through the copies can be sent from different chunk memory disks, thereby achieving many-to-one data transmission; when the user amount increases, merely increasing the quantity of the chunk memory disks achieves many-to-many transmission, so the following disadvantages can be solved: multiple RAID hard disks crashing causing the whole data to be lost; the limited quantity of network interfaces of a storage device and the network speed causing excessive data to be jammed and delayed in transmission; the expansion of a LUN; and the data center being unable to be operated cross-region. The present invention adopts the memory being served as a disk: each file or each virtual machine can be stored in the memory in a file format, the highest I/O speed of the memory bus can be directly utilized, the data can be transmitted between the CPU and the memory, and the highest I/O number, distance and speed can be provided. Accordingly, the present invention is novel and more practical in use compared to the prior art.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The present invention will be apparent to those skilled in the art by reading the following detailed description of a preferred embodiment thereof, with reference to the attached drawings, in which:
  • FIG. 1 is a schematic view illustrating the operation method of distributed memory disk cluster storage system according to one embodiment provided by the present invention; and
  • FIG. 2 is another schematic view illustrating the operation method of distributed memory disk cluster storage system according to one embodiment provided by the present invention.
  • DESCRIPTION OF THE EMBODIMENTS
  • The following descriptions are of exemplary embodiments only, and are not intended to limit the scope, applicability or configuration of the invention in any way. Rather, the following description provides a convenient illustration for implementing exemplary embodiments of the invention. Various changes to the described embodiments may be made in the function and arrangement of the elements described without departing from the scope of the invention as set forth in the appended claims.
  • Referring to FIG. 1, the present invention provides an operation method of distributed memory disk cluster storage system, wherein one preferred embodiment for illustrating the operation method of distributed memory disk cluster storage system is as follows:
  • The installation of a distributed memory storage equipment includes a plurality of computer units (10) for assembling a cluster scheme (1) so as to form a cluster memory disk; wherein the computer unit (10) is installed with a CPU, at least a memory, at least a hard disk, at least a network card, a mother board, an I/O interface card, at least a connection cable and a housing.
  • The computer unit (10) is installed with a virtual machine platform, so the computer unit (10) is formed with a plurality of virtual machines; the computer unit (10) is used for setting the required machine memory resource capacity, and the virtual machine operating system or a program software is used for setting the way the memory capacity is occupied, so the memory is planned as a hard disk device, forming a chunk memory disk (11) which works the same as the tracks of a hard disk.
  • As such, a file is enabled to be divided into one or plural data blocks, and the block size can be 64 MB or bigger; one or plural copies are evenly distributed in the chunk memory disks (11), so the data is actually stored in a memory module, and a memory bus with multiple channels is utilized for parallel accessing the memory module, thereby allowing the capacity of the memory module to be planned for being used as a hard disk; the access of the memory module supports all the file formats of the virtual machine operating system, and a distributed storage scheme is utilized for allowing the data to be copied to one or more copies. With the above-mentioned method, the data center can still be operated even if a machine is broken and/or a data center is damaged.
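The division into blocks and the even distribution of copies described above can be sketched as follows. This is a minimal illustration only: the round-robin placement policy and all names are assumptions, as the specification fixes the 64 MB block size but not a particular placement algorithm.

```python
CHUNK_SIZE = 64 * 1024 * 1024  # 64 MB, as in the embodiment above

def split_into_chunks(data: bytes, chunk_size: int = CHUNK_SIZE):
    """Divide a file's bytes into fixed-size chunks (the last may be short)."""
    return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

def place_replicas(num_chunks: int, num_disks: int, copies: int = 2):
    """Evenly distribute `copies` replicas of each chunk across the chunk
    memory disks, round-robin, so no disk holds two replicas of the same
    chunk (a simple illustrative policy, not taken from the patent)."""
    placement = {}
    for c in range(num_chunks):
        placement[c] = [(c + r) % num_disks for r in range(copies)]
    return placement

# Example: a small file with a 1 KB chunk size for demonstration, 4 disks.
chunks = split_into_chunks(b"x" * 2500, chunk_size=1024)
layout = place_replicas(len(chunks), num_disks=4, copies=2)
```

With two copies of every chunk spread over four disks, the loss of any single disk leaves at least one replica of each chunk reachable.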
  • Each copied data can be encrypted through mixing 1-4096 bit AES and RSA for being stored in the memory; when the data is desired to be accessed, the data is transmitted between the memory and the CPU, thereby minimizing the I/O accessing times and distance; the virtual machine is formed in a file format for being stored in the memory module, and the memory capacity planned for the virtual memory is also in the same sector.
  • When the virtual machine operating system of the virtual machine directly accesses the required file through the CPU, the processed data is stored in the memory module, and the memory required by the virtual machine for computing is also in the memory module; the computed data is directly stored in the original location of the memory module of the virtual machine operating system. With the reduced access path and the fastest I/O speed, the storage system provided by the present invention allows most of the data in the computer unit (10) to be processed in the chunk memory disk (11) in a parallel computing manner, while the data which is not in the computer unit (10) accesses the chunk memory disk (11) of another computer unit (10) through a network card (13) being connected to a connection port cluster link (20).
  • Wherein, the virtual machine platform can be VMware vSphere ESXi 4.1 or a later version, Microsoft Server 2012 R2 Hyper-V or a later version, Citrix XenServer, Oracle VM, Red Hat KVM, Red Hat Control groups (cgroups), Red Hat Linux Containers (LXC), KVM, Eucalyptus, OpenStack, User Mode Linux, LXC, OpenVZ, OpenNebula, Enomaly's Elastic Computing, OpenFlow, or Linux-Base KVM; and the virtual machine operating system can be Linux (Linux 2.6.14 and up have FUSE support included in the official kernel), FreeBSD, OpenSolaris or MacOS X.
  • Moreover, each of the chunk memory disks (11) can be provided with a monitoring unit for monitoring the operation status; the monitoring unit can adopt Splunk or any software provided by another search engine for the purpose of monitoring, and when a problem is detected, a service of restarting the application software can be provided, thereby achieving a recovery function; the mentioned program is prior art, therefore no further illustration is provided.
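The monitor-and-restart recovery described above can be sketched as a simple watchdog loop. This is an illustration only: the health check and restart callables are hypothetical stand-ins, and the actual alerting interfaces of Splunk or similar tools are not shown.

```python
def watchdog(check_health, restart, max_retries=3):
    """Minimal sketch of the monitoring unit: probe a service and, when a
    problem is detected, restart the application software to recover.
    `check_health` and `restart` are caller-supplied callables."""
    for _ in range(max_retries):
        if check_health():
            return True       # service healthy, nothing to do
        restart()             # attempt recovery by restarting
    return check_health()     # report final state after retries

# Simulated service that recovers after one restart.
state = {"up": False, "restarts": 0}
def check():
    return state["up"]
def restart():
    state["restarts"] += 1
    state["up"] = True

recovered = watchdog(check, restart)
```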
  • With respect to the assigned functions, each of the computer units (10) can be categorized to a first data center (101), at least a second data center (102) and a backup center (103); wherein the first data center (101) is provided with a virtual cluster data control station (1011) for controlling, wherein each of the second data centers (102) is provided with a virtual cluster data backup station (1021) for controlling, and the backup center (103) is provided with a virtual cluster data backup station (1021) for controlling, wherein the first data center (101) and the second data center (102) jointly form a distributed memory file system (40).
  • Referring to FIG. 2, a stack scheme is provided for expanding the storage capacity scheme; the access means of a network layer interface is utilized to plan the plural chunk memory disks (11) of a computer unit (10) into a resource pool of a cluster memory disk unit with a cluster concept, and the operating theory is the same as that of the bus of the computer.
  • When the 64 bit chunk memory disks (11) of the plural computer units (10) are adopted, the operation is the same as the 64 bit CPU bus synchronously using all the chunk memory disks (11) for accessing data; when the quantity of the chunk memory disks (11) is expanded, the operation is the same as upgrading the 64 bit CPU bus to 128 bit or 256 bit, and the access speed is increased in an accumulating manner, so the memory disk capacity can be increased through the quantity of chunk memory disks (11) being increased, the limitation of the disk capacity can be raised, and the data access speed and the data reliability can also be increased; the above-mentioned can be gradually increased according to the user's desire.
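The bus-widening analogy above amounts to striping one logical extent across all chunk memory disks (11), so that adding disks widens the effective "word" read per rotation. A minimal sketch, with a small stripe unit standing in for a real stripe size (the functions and parameters are illustrative assumptions):

```python
def stripe_write(data: bytes, num_disks: int, unit: int = 4):
    """Scatter a logical extent round-robin across the chunk memory
    disks, one stripe unit at a time."""
    disks = [bytearray() for _ in range(num_disks)]
    for n, i in enumerate(range(0, len(data), unit)):
        disks[n % num_disks] += data[i:i + unit]
    return disks

def stripe_read(disks, unit: int = 4):
    """Reassemble the extent by visiting the disks in the same round-robin
    order; with N disks, N units are in flight per rotation, so aggregate
    throughput scales with the disk count (the bus-widening analogy)."""
    out, offsets = bytearray(), [0] * len(disks)
    while True:
        progressed = False
        for i, d in enumerate(disks):
            part = bytes(d[offsets[i]:offsets[i] + unit])
            if part:
                out += part
                offsets[i] += unit
                progressed = True
        if not progressed:
            return bytes(out)

data = bytes(range(20))
disks = stripe_write(data, num_disks=4)
assert stripe_read(disks) == data
```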
  • When the cluster schemes (1) are formed, each of the cluster schemes can be independently operated, and each of the cluster schemes (1) can be used as a distributed memory disk cluster storage (DMDCS) (1A); the network layer interface is used for stacking, so each of the distributed memory disk cluster storages (1A) can be used for simulating a chunk memory disk, and a new cluster data control station (1011) and a new cluster data backup station (1021) are provided for controlling the amount of processed data to be distributed in all the chunk memory disks.
  • Accordingly, the above-mentioned is the same as utilizing the resource of each mainframe for parallel computing: the data is divided into blocks to be transmitted to each machine for computing, and the results are eventually integrated into a final result.
  • When one of the chunk memory disks (11) fails or one of the distributed memory disk cluster storages (1A) fails, the operation of the whole disk is not affected and the whole disk is prevented from totally crashing.
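The failure tolerance above can be sketched as a replica lookup with re-replication onto a healthy disk. This is a minimal illustration; the replica-selection and re-replication policies shown are assumptions, not taken from the specification.

```python
def read_chunk(chunk_id, placement, disks):
    """Return the chunk from the first surviving replica; a failed chunk
    memory disk (modeled as None) does not affect the whole operation."""
    for disk_id in placement[chunk_id]:
        if disks[disk_id] is not None and chunk_id in disks[disk_id]:
            return disks[disk_id][chunk_id]
    raise IOError("all replicas lost")

def rereplicate(failed_id, placement, disks):
    """Copy every chunk that lived on the failed disk from a surviving
    replica onto a healthy disk, restoring the replica count."""
    healthy = [i for i, d in enumerate(disks) if d is not None]
    for chunk_id, locations in placement.items():
        if failed_id in locations:
            target = next(i for i in healthy if i not in locations)
            disks[target][chunk_id] = read_chunk(chunk_id, placement, disks)
            placement[chunk_id] = [i for i in locations if i != failed_id] + [target]

# Three disks, chunk 0 replicated on disks 0 and 1; disk 0 then fails.
disks = [{0: b"data"}, {0: b"data"}, {}]
placement = {0: [0, 1]}
disks[0] = None                                   # disk failure
assert read_chunk(0, placement, disks) == b"data" # reads still succeed
rereplicate(0, placement, disks)                  # replica count restored
```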
  • Moreover, when the memory of one of the computer units (10) fails, the server computer unit marks the damaged memory as malfunctioning, and the chunk memory IC of the DIMM memory is no longer used; the resource is only used again after the memory is replaced.
  • The memory of the virtual machine is operated through a storage area network (SAN), a network layer interface virtualized by a software is adopted for connecting all the chunk memory disks so as to be jointly operated; the network layer interface adopts SAN, SAN iSCSI, SAN FC, SAN FCoE, NFS, NAS, JBOD, CIFS, FUSE interface for communicating with the server and the disk driver, and the RAMSTORAGE™ API is provided and served as a backup program.
  • Wherein the RAMSTORAGE™ API adopts REST, RESTful, C++, PHP, Python, Java, Perl, Javascript and other program developing software for forming the RAMSTORAGE™ API, and the API functions of the distributed memory disk cluster storage (1A) include tolerance, backup, shift, rapid layout of virtual machines, planning disk size, automatically increasing the chunk memory disks (11) according to the actual needs, balancing the data loading between chunks, backup recovery, continuous data protector (CDP), rapid capture and monitoring resource.
  • In addition, each of the chunk memory disks (11) is respectively and electrically connected to at least one hard disk storage device (12); the hard disk storage device (12) serves to back up the data in the chunk memory disk (11) every preset period of time, thereby avoiding any unanticipated malfunction. For example, every minute, the altered portion of the data in each of the chunk memory disks (11) is copied to the hard disk storage device (12) for data backup.
  • When any of the chunk memory disks (11) restarts, the last backup data stored in the hard disk storage device (12) is fully recovered to the chunk memory disk (11), and the cluster data control station (1011) is informed so that the disk rejoins the cluster operation.
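The incremental backup and restart recovery cycle described in the two items above can be modeled as follows. This is a minimal sketch under assumed names (the class and its methods are illustrative, not part of the disclosure); only the keys altered since the last backup are copied to the backup device, and a restart restores the full last backup:

```python
import copy

class ChunkMemoryDisk:
    """Toy model of a chunk memory disk (11) backed up to a
    hard disk storage device (12)."""

    def __init__(self):
        self.data = {}      # in-memory contents
        self.dirty = set()  # keys altered since the last backup
        self.backup = {}    # contents of the hard disk storage device

    def write(self, key, value):
        self.data[key] = value
        self.dirty.add(key)

    def periodic_backup(self):
        # Every preset period (e.g. one minute), only the altered
        # portion of the data is copied to the backup device.
        for key in self.dirty:
            self.backup[key] = copy.deepcopy(self.data[key])
        self.dirty.clear()

    def restart(self):
        # On restart, the last backup is fully recovered to the
        # chunk memory disk before it rejoins the cluster.
        self.data = copy.deepcopy(self.backup)
        self.dirty.clear()
```

Writes made after the last `periodic_backup` are lost on restart, which is exactly the window the one-minute backup period bounds.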
  • All the chunk memory disks (11) of all the computer units (10) use the continuous data protector (CDP) to constantly and continuously back up the data to a common large-scale hard disk cluster array. When part of the server computer units or part of the chunk memory disks (11) fails due to environmental or other factors, the virtual machine can be recovered to the required point in time by finding the captured backup or a certain recovery point. The mentioned large-scale hard disk array is the mentioned backup center (103); of course, the cluster disk array can also adopt conventional magnetic tape for providing a third backup.
  • Each of the chunk memory disks (11) is able to plan the required capacity and CPU resources in an automatic layout manner; the network layer interface can also automatically set up the required IP and MAC addresses, and the virtual machine can be set up with the AP according to actual needs, with the required conditions automatically assigned.
  • Moreover, the CPU, memory and physical hard disks which are not in use can be integrated into a common resource pool through the virtual machine platform operating system, and each required computer resource can be automatically adjusted and transmitted to another computer unit (10) with richer resources.
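One way to picture the common resource pool above is a simple scheduler that places each demand on the computer unit with the most free resources. The class and its greedy placement policy are illustrative assumptions, not taken from the disclosure:

```python
class ResourcePool:
    """Toy model of pooling unused resources from computer units (10)
    and assigning workloads to the unit with the richest free resources."""

    def __init__(self):
        self.free = {}  # unit name -> free resource units

    def contribute(self, unit, amount):
        # A computer unit adds its unused resources to the common pool.
        self.free[unit] = self.free.get(unit, 0) + amount

    def place(self, demand):
        # Assign the workload to the unit with the most free resources.
        unit = max(self.free, key=self.free.get)
        if self.free[unit] < demand:
            raise RuntimeError("no unit has enough free resources")
        self.free[unit] -= demand
        return unit
```

A real platform would also weigh CPU, memory and disk separately; a single scalar is used here only to show the transfer of demand toward the richest unit.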
  • The plural distributed memory disk cluster storages (1A) can be connected according to the physical Internet transmission protocol, with the packets transmitted through SSL, VPN or an encryption computing manner, and can be operated cross-region, cross-country and over WAN IP. When the network connection cannot be established, each region is able to operate independently; when the connection is recovered, the data is fully synchronized to each of the chunk memory disks (11) of each of the distributed memory disk cluster storages (1A).
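The disconnect-then-resynchronize behavior above can be sketched with two regions that accept writes independently and converge on reconnection. The last-writer-wins policy keyed on a logical timestamp is an illustrative assumption; the disclosure does not specify a conflict rule:

```python
class RegionStorage:
    """Toy model of one distributed memory disk cluster storage (1A)
    that keeps operating during a network partition and synchronizes
    fully when the connection is recovered."""

    def __init__(self):
        self.store = {}  # key -> (timestamp, value)

    def write(self, key, value, ts):
        # Each region accepts writes even while disconnected.
        self.store[key] = (ts, value)

    def sync_with(self, other):
        # On reconnection, both regions converge: every key takes
        # the most recently written (highest-timestamp) value.
        for key in set(self.store) | set(other.store):
            a = self.store.get(key)
            b = other.store.get(key)
            newest = max(x for x in (a, b) if x is not None)
            self.store[key] = newest
            other.store[key] = newest
```

After `sync_with`, both regions hold identical data, matching the "fully synchronized" requirement.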
  • According to one embodiment of the present invention, the CPU is selected from x86, x86-64, IA-64, Alpha, ARM, SPARC 32 and 64, PowerPC, MIPS and Tilera.
  • The operating manner of the memory installed in the computer unit (10) is to directly utilize the memory controller of the CPU to access the memory data in a triple-channel or multiple-channel manner at a speed of 800 MHz to 1,333 MHz or higher.
  • The capacity of a single memory is 1 MB (megabyte) to 16 ZB (zettabyte). The adopted memory type can be a dynamic random access memory (DRAM) such as FPM RAM or EDO RAM; or a synchronous dynamic random access memory (SDRAM) such as SDRAM, DDR SDRAM, DDR2 SDRAM, DDR3 SDRAM, DDR4 SDRAM, DDR5 SDRAM and other upward-compatible types having higher access speed or a different access manner; or a dynamic mobile platform memory such as LPDDR, LPDDR2, LPDDR3, LPDDR4 and other upward-compatible types having higher access speed or a different access manner; or a dynamic graphics processing memory such as VRAM, WRAM, MDRAM, SGRAM, SDRAM, GDDR, GDDR2, GDDR3, GDDR4, GDDR5, GDDR6, GDDR7 and other upward-compatible types having higher access speed or a different access manner; or a magnetoresistive random-access memory such as MRAM and other upward-compatible types having higher access speed or a different access manner; or a ferroelectric RAM such as FeRAM and other upward-compatible types having higher access speed or a different access manner; or a phase-change random access memory such as PCRAM and other upward-compatible types having higher access speed or a different access manner; or a resistive random-access memory such as ReRAM and other upward-compatible types having higher access speed or a different access manner; or a dynamic Rambus memory such as RDRAM, XDR DRAM, XDR2 DRAM and other upward-compatible types having higher access speed or a different access manner; or a static random access memory (SRAM) or a read-only memory (ROM); or a flash memory such as NOR Flash, NAND Flash, 3D NAND Flash, V-Flash, SLC, MLC, eMMC, TLC and other upward-compatible types having higher access speed or a different access manner.
  • The hard disk storage device (12) includes a conventional disk-head drive, a floppy-disk drive, a solid state drive, an internet drive, a SAS drive, a SATA drive, an mSATA drive, a PCIe drive, an FC drive, a SCSI drive, an ATA drive, a NAND Flash card, an FCoE drive and other upward-compatible types having higher access speed or a different access manner.
  • The network card can be selected from an Ethernet, fast Ethernet, gigabit Ethernet, glass fiber, token ring network, InfiniBand, FCoE (Fibre Channel over Ethernet) or wireless network card; with respect to the network protocol, the network speed can be 2 Mbit/s, 10 Mbit/s, 11 Mbit/s, 40 Mbit/s, 54 Mbit/s, 80 Mbit/s, 100 Mbit/s, 150 Mbit/s, 300 Mbit/s, 433 Mbit/s, 1,000 Mbit/s, 1 Gbit/s, 8 Gbit/s, 10 Gbit/s, 16 Gbit/s, 32 Gbit/s, 40 Gbit/s, 56 Gbit/s, 100 Gbit/s, 160 Gbit/s or 1,000 Gbit/s, and any other network card with a new network communication protocol can also be adopted.
  • The mother board is selected from any mother board compatible with the x86, x86-64, IA-64, Alpha, ARM, SPARC 32 and 64, PowerPC, MIPS and Tilera processors, and the BeagleBone Black or Raspberry Pi mother boards made by specific computer manufacturers.
  • What shall be addressed is that the file format of the virtual machine operating system can be selected from VMFS3, VMFS5, VHD, VHDX, VMDK, HDFS, Isilon OneFS, any format generated through a memory-type pagefile, VEs or VPSs (each including other upward-compatible types having a different format), as well as CePH, GlusterFS, SphereFS, Taobao File System, ZFS, SDFS, MooseFS, AdvFS, Be file system (BFS), Btrfs, Coda, CrossDOS, disk file system (DFS), Episode, EFS, exFAT, ext, FAT, global file system (GFS), hierarchical file system (HFS), HFS Plus, high performance file system, IBM general parallel file system, JFS, Macintosh file system, MINIX, NetWare file system, NILFS, Novell storage service, NTFS, QFS, QNX4FS, ReiserFS (Reiser4), SpadFS, UBIFS, Unix file system, Veritas file system (VxFS), VFAT, write anywhere file layout (WAFL), XFS, Xsan, ZFS, CHFS, FFS2, F2FS, JFFS, JFFS2, LogFS, NVFS, YAFFS, UBIFS, DCE/DFS, MFS, CXFS, GFS2, Google file system, OCFS, OCFS2, QFS, Xsan, AFS, OpenAFS, AFP, MS-DFS, GPFS, Lustre, NCP, NFS, POHMELFS, Hadoop, HAMMER, SMB (CIFS), cramfs, FUSE, SquashFS, UMSDOS, UnionFS, configfs, devfs, procfs, specfs, sysfs, tmpfs, WinFS, EncFS, EFS, ZFS, RAW, ASM, LVM, SFS, MPFS or MGFS.
  • The physical network protocol transfer can be selected from Ethernet, fast Ethernet, gigabit Ethernet, glass fiber, token ring network, SS7, GSM, GPRS, EDGE, HSPA, HSPA+, CDMA, WCDMA, TD-WCDMA, LTE, cdmaOne, CDMA2000, UMTS WCDMA, TD-SCDMA, WiMAX, 3G broadcast network, CDMA2000 1X, Wi-Fi, Super Wi-Fi, Wi-Fi GO and other upward-compatible IEEE network transmission protocols.
  • With the technical breakthrough of the distributed memory disk cluster storage system provided by the present invention, the distributed memory storage system satisfies four desired expansions: the expansion of network bandwidth, the expansion of hard disk capacity, the expansion of IOPS speed, and the expansion of memory I/O transmission speed. Meanwhile, the system can be operated across regions, data centers and WANs, so users' requirements can be collected through the local memory disk cluster and served with the corresponding services, and the capacity of the memory disk cluster can be gradually expanded for further providing cross-region or cross-country data service.
  • With the increased quantity of storage devices, adding one server correspondingly aggregates its network bandwidth and disk capacity, thereby forming a resource pool. The distributed memory disk cluster storage (1A) serves like a physical hard disk, so the whole operation is not affected when one of the physical mainframes fails; the chunk memory disk (11) holding the copy can copy the stored data to a new chunk memory disk (11), so a fundamental data backup is maintained. Meanwhile, the continuous data protector (CDP) is also adopted for providing a novel service of data backup and recovery, thereby improving upon the disadvantages of tape backup often failing and backup only being performed once a day.
  • In addition, the data generated through copying can be sent from different chunk memory disks (11), thereby achieving many-to-one data transmission; when the number of users increases, merely increasing the quantity of chunk memory disks (11) achieves many-to-many transmission. Thus the following disadvantages are solved: multiple RAID hard disks crashing causing the whole data set to be lost; the limited quantity of network interfaces of a storage device and the network speed causing excessive data to be jammed and delayed in transmission; the difficulty of expanding LUNs; and the data center being unable to be operated cross-region. The present invention adopts the memory to serve as a disk: each file or each virtual machine can be stored in the memory in a file format, the highest I/O speed of the memory bus can be directly utilized, the data can be transmitted between the CPU and the memory, and the highest I/O count, shortest distance and highest speed can be provided. Accordingly, the present invention is novel and more practical in use compared to the prior art.
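The chunk replication and even distribution described above (and recited in the claims) can be sketched as follows. The round-robin placement policy and function name are illustrative assumptions; the point is that replicas of one chunk land on different disks, so one failure loses no data and reads can be served from several disks at once:

```python
def distribute_file(file_bytes, n_chunks, n_disks, replicas=2):
    """Divide a file into chunks, copy each chunk `replicas` times,
    and spread the copies evenly over the chunk memory disks."""
    size = max(1, len(file_bytes) // n_chunks)
    chunks = [file_bytes[i:i + size] for i in range(0, len(file_bytes), size)]
    disks = [[] for _ in range(n_disks)]
    slot = 0
    for idx, chunk in enumerate(chunks):
        for _ in range(replicas):
            # Round-robin placement: consecutive slots, so the
            # replicas of a given chunk fall on different disks.
            disks[slot % n_disks].append((idx, chunk))
            slot += 1
    return disks
```

With the copies spread this way, a read of the whole file can pull different chunks from different disks in parallel, which is the many-to-one transmission the description refers to.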
  • It is to be understood, however, that even though numerous characteristics and advantages of the present invention have been set forth in the foregoing description, together with details of the structure and function of the invention, the disclosure is illustrative only, and changes may be made in detail, especially in matters of shape, size, and arrangement of parts within the principles of the invention to the full extent indicated by the broad general meaning of the terms in which the appended claims are expressed. Many modifications and other embodiments of the inventions set forth herein will come to mind to one skilled in the art to which these inventions pertain having the benefit of the teachings presented in the foregoing descriptions and the associated drawings. Therefore, it is to be understood that the inventions are not to be limited to the specific examples of the embodiments disclosed and that modifications and other embodiments are intended to be included within the scope of the appended claims. Although specific terms are employed herein, they are used in a generic and descriptive sense only and not for purposes of limitation.

Claims (21)

What is claimed is:
1. A method for building a distributed memory disk cluster storage system, wherein the distributed memory disk cluster storage system comprises a plurality of computers, and each of the plurality of computers comprises a processor, a memory, and a network card, the method comprising:
installing a virtual machine platform into the plurality of computers so that each of the plurality of computers has a plurality of virtual machines;
planning the memory of each of the plurality of computers as a plurality of chunk memory disks by means of the virtual machine platform;
integrating the plurality of computers to form a cluster memory disk; and
dividing each of a plurality of files of the plurality of chunk memory disks of the plurality of computers of the cluster memory disk into a plurality of data, copying the plurality of data to generate a plurality of copied data, and evenly distributing the plurality of copied data to all the plurality of chunk memory disks of the plurality of computers of the cluster memory disk,
wherein all the plurality of chunk memory disks of the plurality of computers of the cluster memory disk are connected so as to be jointly operated.
2. The method as claimed in claim 1, wherein the step of planning the memory of each of the plurality of computers as the plurality of chunk memory disks by means of the virtual machine platform comprises:
each of the plurality of computers sets a memory capacity assignment for the memory through a virtual machine operating system of the virtual machine platform and a program for setting the memory capacity assignment to plan the memory of each of the plurality of computers as the plurality of chunk memory disks.
3. The method as claimed in claim 1, wherein the plurality of computers are integrated to form the cluster memory disk by means of a network layer interface virtualized by a virtual machine operating system of the virtual machine platform, a memory bus with multiple channels and the network card in each of the plurality of computers.
4. The method as claimed in claim 2, wherein when the virtual machine operating system of the virtual machine platform accesses a file from a memory location of the virtual machine operating system, the file is computed by one of the plurality of virtual machines to generate computed data, and the computed data is stored back in the memory location, so that the plurality of data in each of the plurality of computers is processed in the plurality of chunk memory disks by parallel computing.
5. The method as claimed in claim 4, wherein other data which is not stored in the plurality of computers is accessed by the plurality of chunk memory disks of other computers through the network card to connect to a connection port cluster link.
6. The method as claimed in claim 1, wherein the plurality of computers are categorized to a first data center, at least one second data center and a backup center,
wherein the first data center is controlled by a virtual cluster data control station, and each of the at least one second data center and the backup center is controlled by a virtual cluster data backup station, wherein the virtual cluster data backup station is a backup of the virtual cluster data control station,
wherein the first data center and at least one second data center jointly form a distributed memory file system of the distributed memory disk cluster storage system.
7. The method as claimed in claim 1, wherein each of the plurality of chunk memory disks is electrically connected to at least one hard disk storage device, and the at least one hard disk storage device is configured to backup data in each of the plurality of chunk memory disks once every preset period of time.
8. The method as claimed in claim 1, wherein the plurality of chunk memory disks of the plurality of computers use a continuous data protector for continuously backing up the data to a hard disk cluster array.
9. The method as claimed in claim 1, wherein each of the plurality of computers further comprises a hard disk, a mother board, an I/O interface card, a connection cable and a housing.
10. The method as claimed in claim 1, wherein each of the plurality of copied data is encrypted through mixing a plurality of encryption algorithms and stored in the memory.
11. The method as claimed in claim 1, wherein an operation status of each of the plurality of chunk memory disks is monitored by a search engine, and the search engine restarts once an error is detected.
12. The method as claimed in claim 2, wherein an assigned memory of each of the plurality of virtual machines is operated through a storage area network, the storage area network adopts a network layer interface virtualized by a virtual machine operating system of the virtual machine platform to connect all the chunk memory disks.
13. The method as claimed in claim 1, wherein the virtual machine platform plans the processor, the memory and a hard disk as a resource pool, and transfers each of unused computer resources to other computers.
14. The method as claimed in claim 1, wherein a maximum capacity of the cluster memory disk is larger than 52.5 TB (terabytes).
15. The method as claimed in claim 1, wherein all the plurality of chunk memory disks of the plurality of computers access data synchronously.
16. A method for building a distributed memory disk cluster storage system, wherein the distributed memory disk cluster storage system comprises a plurality of computers, and each of the plurality of computers comprises a processor, a memory, and a network card, the method comprising:
installing a virtual machine platform into the plurality of computers so that each of the plurality of computers has a plurality of virtual machines;
planning the memory of each of the plurality of computers as a plurality of chunk memory disks by means of the virtual machine platform;
planning all the chunk memory disks of each of the plurality of computers as a resource pool of each of a plurality of cluster memory disks respectively by means of a network layer interface virtualized by a virtual machine operating system of the virtual machine platform, a memory bus with multiple channels, and the network card in each of the plurality of computers, wherein all the plurality of chunk memory disks of the plurality of computers access data synchronously;
simulating the plurality of cluster memory disks as a common resource pool,
dividing each of a plurality of files of the plurality of chunk memory disks of the plurality of computers of the plurality of cluster memory disks into a plurality of data, copying the plurality of data to generate a plurality of copied data, and evenly distributing the plurality of copied data to all the plurality of chunk memory disks of the plurality of computers of the plurality of cluster memory disks of the common resource pool,
wherein all the plurality of chunk memory disks of the plurality of computers of the cluster memory disk of the common resource pool are connected so as to be jointly operated.
17. The method as claimed in claim 16, wherein the plurality of computers are categorized to a first data center, at least one second data center and a backup center,
wherein the first data center is controlled by a virtual cluster data control station, and each of the at least one second data center and the backup center is controlled by a virtual cluster data backup station, wherein the virtual cluster data backup station is a backup of the virtual cluster data control station,
wherein the first data center and at least one second data center jointly form a distributed memory file system of the distributed memory disk cluster storage system.
18. The method as claimed in claim 17, wherein the common resource pool of cluster memory disks is controlled by the virtual cluster data control station and the virtual cluster data backup station to evenly distribute amounts of processing data to all the chunk memory disks of the plurality of computers.
19. The method as claimed in claim 16, wherein the plurality of cluster memory disks are connected with each other according to a physical internet transmission protocol, and the plurality of data is transmitted through an encryption algorithm;
when one of network connections among the plurality of cluster memory disks is unable to be established, each of the plurality of cluster memory disks is independently operated; and
when one of the network connections is recovered, data is synchronized to each of the chunk memory disks of each of the plurality of cluster memory disks.
20. The method as claimed in claim 16, wherein a maximum capacity of the common resource pool is larger than 16 ZB (zettabytes).
21. The method as claimed in claim 16, wherein all the plurality of chunk memory disks of the plurality of computers access data synchronously.
US16/583,228 2013-12-09 2019-09-25 Method for building distributed memory disk cluster storage system Abandoned US20200042204A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/583,228 US20200042204A1 (en) 2013-12-09 2019-09-25 Method for building distributed memory disk cluster storage system

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
TW102145155A TWI676898B (en) 2013-12-09 2013-12-09 Decentralized memory disk cluster storage system operation method
TW102145155 2013-12-09
US14/562,892 US10466912B2 (en) 2013-12-09 2014-12-08 Operation method of distributed memory disk cluster storage system
US16/583,228 US20200042204A1 (en) 2013-12-09 2019-09-25 Method for building distributed memory disk cluster storage system

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US14/562,892 Continuation US10466912B2 (en) 2013-12-09 2014-12-08 Operation method of distributed memory disk cluster storage system

Publications (1)

Publication Number Publication Date
US20200042204A1 true US20200042204A1 (en) 2020-02-06

Family

ID=51293817

Family Applications (2)

Application Number Title Priority Date Filing Date
US14/562,892 Active US10466912B2 (en) 2013-12-09 2014-12-08 Operation method of distributed memory disk cluster storage system
US16/583,228 Abandoned US20200042204A1 (en) 2013-12-09 2019-09-25 Method for building distributed memory disk cluster storage system

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US14/562,892 Active US10466912B2 (en) 2013-12-09 2014-12-08 Operation method of distributed memory disk cluster storage system

Country Status (3)

Country Link
US (2) US10466912B2 (en)
CN (2) CN111506267A (en)
TW (1) TWI676898B (en)

Families Citing this family (59)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3349418B1 (en) * 2014-05-29 2019-07-24 Huawei Technologies Co., Ltd. Service processing method, related device, and system
US9491241B1 (en) * 2014-06-30 2016-11-08 EMC IP Holding Company LLC Data storage system with native representational state transfer-based application programming interface
TWI509426B (en) * 2014-09-17 2015-11-21 Prophetstor Data Services Inc System for achieving non-interruptive data reconstruction
CN105653345A (en) * 2014-10-17 2016-06-08 伊姆西公司 Method and device supporting data nonvolatile random access
CN105119737A (en) * 2015-07-16 2015-12-02 浪潮软件股份有限公司 Method for monitoring Ceph cluster through Zabbix
CN105245576B (en) * 2015-09-10 2019-03-19 浪潮(北京)电子信息产业有限公司 A kind of storage architecture system based on complete shared exchange
CN105471989B (en) * 2015-11-23 2018-11-02 上海爱数信息技术股份有限公司 A kind of date storage method
US9965197B2 (en) * 2015-12-15 2018-05-08 Quanta Computer Inc. System and method for storage area network management using serial attached SCSI expander
TWI578167B (en) * 2016-03-11 2017-04-11 宏正自動科技股份有限公司 System, apparatus and method of virtualized byot
CN106066771A (en) * 2016-06-08 2016-11-02 池州职业技术学院 A kind of Electronic saving integrator system
CN106534249A (en) * 2016-09-21 2017-03-22 苏州市广播电视总台 File transmission system based on file straight-through technology
CN106527968A (en) * 2016-09-21 2017-03-22 苏州市广播电视总台 File through technology-based file transmission method
CN106502830B (en) * 2016-10-27 2019-01-22 一铭软件股份有限公司 A kind of method for restoring system backup based on Btrfs file system
CN108008911A (en) * 2016-11-01 2018-05-08 阿里巴巴集团控股有限公司 Read-write requests processing method and processing device
CN108279851B (en) * 2017-03-03 2021-06-11 阿里巴巴(中国)有限公司 Network storage device and construction method
CN106886374A (en) * 2017-03-10 2017-06-23 济南浪潮高新科技投资发展有限公司 A kind of virtual disk dispatching method based on openstack
US11102299B2 (en) * 2017-03-22 2021-08-24 Hitachi, Ltd. Data processing system
US10409614B2 (en) * 2017-04-24 2019-09-10 Intel Corporation Instructions having support for floating point and integer data types in the same register
US10474458B2 (en) 2017-04-28 2019-11-12 Intel Corporation Instructions and logic to perform floating-point and integer operations for machine learning
TWI648967B (en) * 2017-07-11 2019-01-21 中華電信股份有限公司 Service chain deployment method considering network latency and physical resources
US10761743B1 (en) 2017-07-17 2020-09-01 EMC IP Holding Company LLC Establishing data reliability groups within a geographically distributed data storage environment
US10819656B2 (en) 2017-07-24 2020-10-27 Rubrik, Inc. Throttling network bandwidth using per-node network interfaces
US10339016B2 (en) * 2017-08-10 2019-07-02 Rubrik, Inc. Chunk allocation
CN108646985A (en) * 2018-05-16 2018-10-12 广东睿江云计算股份有限公司 A kind of resource constraint and distribution method of Ceph distributed memory systems
US11436203B2 (en) 2018-11-02 2022-09-06 EMC IP Holding Company LLC Scaling out geographically diverse storage
CN109474429B (en) * 2018-12-24 2022-02-15 无锡市同威科技有限公司 Key configuration strategy method facing FC storage encryption gateway
JP7408671B2 (en) 2019-03-15 2024-01-05 インテル コーポレイション Architecture for block sparse operations on systolic arrays
US11934342B2 (en) 2019-03-15 2024-03-19 Intel Corporation Assistance for hardware prefetch in cache access
US20220179787A1 (en) 2019-03-15 2022-06-09 Intel Corporation Systems and methods for improving cache efficiency and utilization
WO2020223099A2 (en) 2019-04-30 2020-11-05 Clumio, Inc. Cloud-based data protection service
US11748004B2 (en) 2019-05-03 2023-09-05 EMC IP Holding Company LLC Data replication using active and passive data storage modes
US11449399B2 (en) * 2019-07-30 2022-09-20 EMC IP Holding Company LLC Mitigating real node failure of a doubly mapped redundant array of independent nodes
US11228322B2 (en) 2019-09-13 2022-01-18 EMC IP Holding Company LLC Rebalancing in a geographically diverse storage system employing erasure coding
CN110688674B (en) * 2019-09-23 2024-04-26 中国银联股份有限公司 Access dockee, system and method and device for applying access dockee
US11449248B2 (en) 2019-09-26 2022-09-20 EMC IP Holding Company LLC Mapped redundant array of independent data storage regions
CN110807065A (en) * 2019-10-30 2020-02-18 浙江大华技术股份有限公司 Memory table implementation method, memory and data node of distributed database
US11435910B2 (en) 2019-10-31 2022-09-06 EMC IP Holding Company LLC Heterogeneous mapped redundant array of independent nodes for data storage
US11288139B2 (en) 2019-10-31 2022-03-29 EMC IP Holding Company LLC Two-step recovery employing erasure coding in a geographically diverse data storage system
US11435957B2 (en) 2019-11-27 2022-09-06 EMC IP Holding Company LLC Selective instantiation of a storage service for a doubly mapped redundant array of independent nodes
CN111046108A (en) * 2019-12-20 2020-04-21 辽宁振兴银行股份有限公司 Ceph-based cross-data center Oracle high-availability implementation method
CN111078368B (en) * 2019-12-26 2023-03-21 浪潮电子信息产业股份有限公司 Memory snapshot method and device of cloud computing platform virtual machine and readable storage medium
US11231860B2 (en) 2020-01-17 2022-01-25 EMC IP Holding Company LLC Doubly mapped redundant array of independent nodes for data storage with high performance
CN111400101B (en) * 2020-03-18 2021-06-01 北京北亚宸星科技有限公司 Data recovery method and system for deleting JFS2 file system data
CN111506262B (en) * 2020-03-25 2021-12-28 华为技术有限公司 Storage system, file storage and reading method and terminal equipment
US11507308B2 (en) 2020-03-30 2022-11-22 EMC IP Holding Company LLC Disk access event control for mapped nodes supported by a real cluster storage system
CN111459864B (en) * 2020-04-02 2021-11-30 深圳朗田亩半导体科技有限公司 Memory device and manufacturing method thereof
CN113625937A (en) * 2020-05-09 2021-11-09 鸿富锦精密电子(天津)有限公司 Storage resource processing device and method
CN111708488B (en) * 2020-05-26 2023-01-06 苏州浪潮智能科技有限公司 Distributed memory disk-based Ceph performance optimization method and device
US11288229B2 (en) 2020-05-29 2022-03-29 EMC IP Holding Company LLC Verifiable intra-cluster migration for a chunk storage system
CN111930299B (en) * 2020-06-22 2024-01-26 中国建设银行股份有限公司 Method for distributing storage units and related equipment
CN111858612B (en) * 2020-07-28 2023-04-18 平安科技(深圳)有限公司 Data accelerated access method and device based on graph database and storage medium
CN112148219A (en) * 2020-09-16 2020-12-29 北京优炫软件股份有限公司 Design method and device for ceph type distributed storage cluster
US11693983B2 (en) 2020-10-28 2023-07-04 EMC IP Holding Company LLC Data protection via commutative erasure coding in a geographically diverse data storage system
US11847141B2 (en) 2021-01-19 2023-12-19 EMC IP Holding Company LLC Mapped redundant array of independent nodes employing mapped reliability groups for data storage
US11625174B2 (en) 2021-01-20 2023-04-11 EMC IP Holding Company LLC Parity allocation for a virtual redundant array of independent disks
US11354191B1 (en) 2021-05-28 2022-06-07 EMC IP Holding Company LLC Erasure coding in a large geographically diverse data storage system
US11449234B1 (en) 2021-05-28 2022-09-20 EMC IP Holding Company LLC Efficient data access operations via a mapping layer instance for a doubly mapped redundant array of independent nodes
TWI823223B (en) * 2021-12-30 2023-11-21 新唐科技股份有限公司 Method and device for a secure data transmission
TWI826093B (en) * 2022-11-02 2023-12-11 財團法人資訊工業策進會 Virtual machine backup method and system

Family Cites Families (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6947987B2 (en) * 1998-05-29 2005-09-20 Ncr Corporation Method and apparatus for allocating network resources and changing the allocation based on dynamic workload changes
US7051188B1 (en) * 1999-09-28 2006-05-23 International Business Machines Corporation Dynamically redistributing shareable resources of a computing environment to manage the workload of that environment
US6631456B2 (en) * 2001-03-06 2003-10-07 Lance Leighnor Hypercache RAM based disk emulation and method
US6880002B2 (en) * 2001-09-05 2005-04-12 Surgient, Inc. Virtualized logical server cloud providing non-deterministic allocation of logical attributes of logical servers to physical resources
US8499086B2 (en) * 2003-01-21 2013-07-30 Dell Products L.P. Client load distribution
US8195866B2 (en) * 2007-04-26 2012-06-05 Vmware, Inc. Adjusting available persistent storage during execution in a virtual computer system
US8275815B2 (en) * 2008-08-25 2012-09-25 International Business Machines Corporation Transactional processing for clustered file systems
CN101477495B (en) * 2008-10-28 2011-03-16 北京航空航天大学 Implementing method for distributed internal memory virtualization technology
CN101499027A (en) * 2009-03-06 2009-08-05 赵晓宇 Intelligent memory system based on independent kernel and distributed architecture
CN102137125A (en) * 2010-01-26 2011-07-27 复旦大学 Method for processing cross task data in distributive network system
CN101859317A (en) * 2010-05-10 2010-10-13 浪潮电子信息产业股份有限公司 Method for establishing database cluster by utilizing virtualization
CN102088490B (en) * 2011-01-19 2013-06-12 华为技术有限公司 Data storage method, device and system
CN102110071B (en) * 2011-03-04 2013-04-17 浪潮(北京)电子信息产业有限公司 Virtual machine cluster system and implementation method thereof
US9003021B2 (en) * 2011-12-27 2015-04-07 SolidFire, Inc. Management of storage system access based on client performance and cluster health
CN103226518B (en) * 2012-01-31 2016-06-22 国际商业机器公司 A kind of method and apparatus carrying out volume extension in storage management system
US9330106B2 (en) * 2012-02-15 2016-05-03 Citrix Systems, Inc. Selective synchronization of remotely stored content
CN103268252A (en) * 2013-05-12 2013-08-28 南京载玄信息科技有限公司 Virtualization platform system based on distributed storage and implementation method thereof

Also Published As

Publication number Publication date
TWI676898B (en) 2019-11-11
TW201416881A (en) 2014-05-01
CN104699419A (en) 2015-06-10
CN104699419B (en) 2020-05-12
CN111506267A (en) 2020-08-07
US10466912B2 (en) 2019-11-05
US20150160872A1 (en) 2015-06-11

Similar Documents

Publication Publication Date Title
US20200042204A1 (en) Method for building distributed memory disk cluster storage system
US10001947B1 (en) Systems, methods and devices for performing efficient patrol read operations in a storage system
US20210019067A1 (en) Data deduplication across storage systems
US11747981B2 (en) Scalable data access system and methods of eliminating controller bottlenecks
US10467246B2 (en) Content-based replication of data in scale out system
US8479037B1 (en) Distributed hot-spare storage in a storage cluster
US7778960B1 (en) Background movement of data between nodes in a storage cluster
US8521685B1 (en) Background movement of data between nodes in a storage cluster
US10031703B1 (en) Extent-based tiering for virtual storage using full LUNs
US9804939B1 (en) Sparse raid rebuild based on storage extent allocation
US11928005B2 (en) Techniques for performing resynchronization on a clustered system
US20140281306A1 (en) Method and apparatus of non-disruptive storage migration
US9875043B1 (en) Managing data migration in storage systems
US9936013B2 (en) Techniques for controlling client traffic on a clustered system
US9256373B1 (en) Invulnerable data movement for file system upgrade
US20180039413A1 (en) Identifying disk drives and processing data access requests
WO2014094568A1 (en) Data storage planning method and device
US9229814B2 (en) Data error recovery for a storage device
US10025516B2 (en) Processing data access requests from multiple interfaces for data storage devices
US10552342B1 (en) Application level coordination for automated multi-tiering system in a federated environment
CN105302472A (en) Operating method for distributed memory disk cluster storage system
US11144221B1 (en) Efficient resilience in a metadata paging array for in-flight user data
US11467930B2 (en) Distributed failover of a back-end storage director
US20220342767A1 (en) Detecting corruption in forever incremental backups with primary storage systems
US20230185822A1 (en) Distributed storage system

Legal Events

Date Code Title Description
STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO PAY ISSUE FEE