CN116915510A - Distributed storage system based on high-speed encryption algorithm - Google Patents

Distributed storage system based on high-speed encryption algorithm

Info

Publication number
CN116915510A
Authority
CN
China
Prior art keywords
file
node
address
sequence
storage
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202311176134.4A
Other languages
Chinese (zh)
Other versions
CN116915510B (en)
Inventor
朱云
李元骅
可为
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Shudun Information Technology Co ltd
Original Assignee
Beijing Shudun Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Shudun Information Technology Co., Ltd.
Priority to CN202311176134.4A
Publication of CN116915510A
Application granted
Publication of CN116915510B
Legal status: Active
Anticipated expiration

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 63/00 Network architectures or network communication protocols for network security
    • H04L 63/04 Network architectures or network communication protocols for network security for providing a confidential data exchange among entities communicating through data packet networks
    • H04L 63/0428 Network architectures or network communication protocols for network security for providing a confidential data exchange among entities communicating through data packet networks wherein the data content is protected, e.g. by encrypting or encapsulating the payload
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/06 Protocols specially adapted for file transfer, e.g. file transfer protocol [FTP]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/10 Protocols in which an application is distributed across nodes in the network
    • H04L 67/1097 Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 9/00 Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
    • H04L 9/40 Network security protocols
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer Hardware Design (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The application provides a distributed storage system based on a high-speed encryption algorithm, comprising a scheduling node, groups, nodes, first storage devices and second storage devices. After receiving any write request, the scheduling node splits the provided file into a plurality of subfiles of preset length and generates a corresponding number of virtual files for the subfiles; each file in the file sequence is distributed according to the node task occupancy rate, and a table index is generated; the scheduling node then sends a virtual address to the source of the write request. The application thereby realizes encrypted file storage access: the requester holds only the virtual address for file access and has no direct access authority, while for a malicious intruder the scheduling node does not perform direct access on an acquired physical address, so the security of the data is ensured.

Description

Distributed storage system based on high-speed encryption algorithm
Technical Field
The application belongs to the technical field of data processing systems, and particularly relates to a distributed storage system based on a high-speed encryption algorithm.
Background
A distributed storage system stores data dispersedly on a plurality of independent devices and adopts an expandable system structure, using the plurality of independent devices to share the storage load; this improves the reliability, performance and capacity of the storage system and makes it easy to expand. Common architectures for distributed storage systems currently include the scale-up architecture and the scale-out architecture. In a scale-up architecture, the capacity of a single device in the original storage system is increased, or the device is upgraded, to meet growing data and performance requirements (that is, a storage controller uniformly manages a plurality of storage devices arranged in the same device). In a scale-out architecture, the original storage system is expanded by adding nodes (i.e., adding at least one device that includes a storage device and a storage controller, with data interaction between two adjacent devices performed through their respective storage controllers), and each node can provide capacity, bandwidth and computing capability.
How to ensure the accuracy of data and the encryption of data when nodes are newly added to a distributed storage system and data is migrated between nodes has become a problem to be solved.
Disclosure of Invention
Therefore, the application provides a distributed storage system based on a high-speed encryption algorithm, which ensures the security of data while performing distributed storage.
In a first aspect of the present application, there is provided a distributed storage system based on a high-speed encryption algorithm, comprising:
at least one scheduling node configured to perform scheduling and node allocation and to provide at least a first key;
a group configured to set a first address and a second address into each node, and further configured to generate a distribution identifier for each node in the group according to the first address and the second address of that node;
a node configured to belong to at least one group, wherein a plurality of first storage devices and at least one second storage device corresponding to the first storage devices are correspondingly arranged in each node;
the first storage device, configured to respond when indexed by a first address, wherein the second address cannot be written into the first storage device;
the second storage device, configured to respond when indexed by a second address, wherein the first address cannot be written into the second storage device;
the first address is a plaintext address, the second address is obtained after first encryption, and the byte numbers of the first address and the second address are equal;
and a storage controller, configured to store the first address and the second address generated when the first storage device is responded to, and to request responses from the first storage device and the second storage device when the scheduling node performs an index operation on a file;
the indexing operation includes:
index writing, wherein after receiving any write request, the scheduling node splits the file provided with the request into a plurality of subfiles of a preset length to generate a subfile sequence and a time sequence corresponding to the subfile sequence;
generating virtual files with corresponding sequence digits in the sub-file sequences, wherein the byte occupation of each virtual file is a preset value;
distributing each file in the sub-file sequence according to the node task occupancy rate;
generating a table index by a first Poisson algorithm, wherein the table index is the distribution, among all nodes, of all subfiles of a subfile sequence written at one time;
the scheduling node sends a second address to the source of the write request;
index reading, wherein after receiving any reading request, the scheduling node indexes to storage bits in the corresponding second storage equipment according to the second address;
the storage controller calls a virtual file corresponding to the indexed second address;
the storage controller returns the virtual file to the scheduling node, retrieves the corresponding subfile indexed by the virtual file from the table sequence, submits the access to the first address, and returns the subfile to the scheduling node;
and index deleting, wherein after receiving any deleting request, the scheduling node deletes the table sequence in the provided request.
Further, the first storage device and the second storage device are independent hardware media.
Further, the first storage device and the second storage device are virtual partitions in the same hardware medium.
Further, the scheduling node is further configured to:
acquiring the memory occupancy, the storage device occupancy and the CPU occupancy of the node;
generating an occupancy distribution sequence according to the memory occupancy, the storage device occupancy and the CPU occupancy;
generating a pre-storage sequence according to the occupancy distribution sequence, wherein the pre-storage sequence is used for the file sequence at the next index writing;
if the length of the file sequence at the next write is shorter than the pre-storage sequence, filling identification files into the difference bit positions, wherein the byte length of each identification file is 1.
Further, the scheduling node is further configured to:
after index deletion is performed, retrieving the file occupancy of the first storage device in each node;
and deleting empty files according to the time at which each empty file was generated.
Further, the group is further configured to:
if the occupied storage devices in the group are below a preset threshold, initiating a group-combining request to an idle node in another group, and expanding the capacity of that node into the group;
if a group-combining request occurs, the corresponding node belongs to two groups.
In a second aspect of the application, a computer-readable storage medium is provided, storing a computer program which, when executed by a computer, performs the operations of the system provided by the application.
In a third aspect of the application, an electronic device is provided, comprising a memory and a processor, wherein the memory stores a program which, when executed by the processor, carries out the operations of the system provided by the application.
Compared with the prior art, the technical scheme provided by the application has the following advantages:
after receiving any write request through the scheduling node, the application splits the provided file into a plurality of subfiles of preset length and generates a file sequence and a time identifier according to the order in which the subfiles are generated; a corresponding number of virtual files is generated for the subfiles, the byte occupation of each virtual file being a fixed preset value; each file in the file sequence is distributed according to the node task occupancy rate, and a table index is generated, the table index describing how the file sequence of a single written file is distributed among the nodes; the scheduling node then sends a virtual address to the source of the write request. Index reading proceeds as follows: after receiving any read request, the scheduling node indexes to the storage bit in the corresponding second storage device according to the virtual address; the storage controller calls up the empty file corresponding to the virtual address; the storage controller returns the empty file to the scheduling node, which returns the empty file to the requester, looks the empty file up in the table sequence, finds the file-sequence position corresponding to the empty file according to its position in the table sequence, submits an access to the physical address corresponding to that position, and returns the file. Encrypted file storage access is thereby realized: the requester holds only the virtual address for file access and has no direct access authority, and for a malicious intruder the scheduling node does not perform direct access on an acquired physical address, so the security of the data is ensured.
Drawings
FIG. 1 is a schematic diagram of a system provided by an embodiment of the present application;
fig. 2 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
So that the manner in which the features and techniques of the disclosed embodiments can be understood in more detail, a more particular description of the embodiments of the disclosure, briefly summarized below, may be had by reference to the appended drawings, which are not intended to be limiting of the embodiments of the disclosure. In the following description of the technology, for purposes of explanation, numerous details are set forth in order to provide a thorough understanding of the disclosed embodiments. However, one or more embodiments may still be practiced without these details. In other instances, well-known structures and devices may be shown simplified in order to simplify the drawing.
Embodiment 1
This embodiment provides a distributed storage system based on a high-speed encryption algorithm which, as shown in FIG. 1, includes groups, nodes, and at least one scheduling node.
In the disclosed embodiment, one scheduling node is connected to N groups, and any one group is connected to N nodes. For the presently disclosed embodiments, the number of nodes connected to different groups is not limited and may follow one of the following settings:
the number of nodes is the same for all groups; the number of nodes differs from group to group; or the number of nodes is the same for at least two groups.
Each node in the embodiments of the present disclosure includes two first storage devices and one second storage device. It should be noted that the number of storage devices (first storage devices and second storage devices) is not fixed within a node; the minimum composition includes at least one first storage device and at least one second storage device.
Within the scope of the embodiments of the present disclosure, the first key is constructed from an internationally accepted encryption algorithm, from a cryptographic algorithm conforming to national standards, or from a combination of the two, and the first key may be set as a symmetric key, an asymmetric key, or a composite key of both.
Specifically, one scheduling node provided in an embodiment of the present disclosure is configured to perform scheduling and node allocation, and to provide at least a first key.
The group is configured to set a first address and a second address into each node, and is also configured to generate a distribution identifier for each node in the group from that node's first address and second address. The nodes are configured to belong to at least one group, and each node is correspondingly provided with the first storage device(s) and the second storage device(s) of the embodiments of the present disclosure.
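As an informal illustration only, the entities above can be pictured with the following data-model sketch; every class, field and identifier name here is an assumption introduced for illustration and is not terminology defined by this disclosure:

    from dataclasses import dataclass, field
    from typing import Dict, List

    @dataclass
    class StorageDevice:
        # kind is "first" (responds only to plaintext first addresses) or "second"
        # (responds only to encrypted second addresses); the complementary address
        # type is never written into the device.
        kind: str
        cells: Dict[bytes, bytes] = field(default_factory=dict)

    @dataclass
    class Node:
        node_id: str
        first_devices: List[StorageDevice] = field(default_factory=list)   # at least one in the minimum composition
        second_devices: List[StorageDevice] = field(default_factory=list)  # at least one in the minimum composition
        distribution_id: str = ""  # generated by the group from the node's first and second addresses

    @dataclass
    class Group:
        group_id: str
        nodes: List[Node] = field(default_factory=list)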
The first storage device is configured to respond when indexed by the first address; the second address cannot be written into the first storage device.
The second storage device is configured to respond when indexed by the second address; the first address cannot be written into the second storage device.
The first address is a plaintext address and is encrypted with the first key to obtain the second address; the numbers of bytes of the first address and the second address remain equal before and after encryption with the first key.
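The disclosure only requires that the second address be derived from the first address with the first key while keeping the byte count unchanged. One length-preserving construction that satisfies this, given purely as a hedged sketch (the HMAC-based keystream and the nonce parameter are assumptions for illustration and are not the encryption algorithm mandated by this application), is:

    import hashlib
    import hmac

    def _keystream(first_key: bytes, nonce: bytes, length: int) -> bytes:
        """Expand the first key into exactly `length` keystream bytes (HMAC-SHA256 in counter mode)."""
        out = bytearray()
        counter = 0
        while len(out) < length:
            out += hmac.new(first_key, nonce + counter.to_bytes(4, "big"), hashlib.sha256).digest()
            counter += 1
        return bytes(out[:length])

    def encrypt_first_address(first_address: bytes, first_key: bytes, nonce: bytes) -> bytes:
        """Turn a plaintext first address into a second address with an identical byte count."""
        ks = _keystream(first_key, nonce, len(first_address))
        return bytes(a ^ b for a, b in zip(first_address, ks))

    # Decryption is the same XOR with the same keystream, so the byte counts of the
    # first address and the second address stay equal before and after encryption.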
The storage controller is configured to store the first address and the second address generated when the first storage device is responded to, and to request responses from the first storage device and the second storage device when the scheduling node performs an indexing operation on a file.
Index operations provided in embodiments of the present disclosure include index writing, index reading, and index deleting.
When the scheduling node receives any write request, it splits the file provided with the request into a plurality of subfiles of a preset length, generates a subfile sequence and a time sequence corresponding to the subfile sequence, and generates virtual files with the corresponding number of sequence bits in the subfile sequence, the byte occupation of each virtual file being a preset value. It should be noted that the preset value set in the embodiments of the present disclosure is a fixed byte count, that is, the byte occupation of every virtual file is the same.
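A minimal sketch of this write-side preparation (the chunk length, the virtual-file size, and the use of time.time() for the time sequence are illustrative assumptions, not details fixed by the disclosure):

    import time
    from typing import List, Tuple

    def split_for_write(data: bytes, preset_length: int, virtual_size: int) -> Tuple[List[bytes], List[float], List[bytes]]:
        """Split a written file into subfiles of a preset length, record a time sequence,
        and create one fixed-size virtual file per subfile."""
        subfiles = [data[i:i + preset_length] for i in range(0, len(data), preset_length)]
        time_sequence = [time.time() for _ in subfiles]          # one timestamp per subfile
        virtual_files = [bytes(virtual_size) for _ in subfiles]  # every virtual file occupies the same byte count
        return subfiles, time_sequence, virtual_files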
The scheduling node obtains the node occupancy rate of each node, allocates the subfiles according to the node occupancy rate, and generates a table index according to a first Poisson algorithm, wherein the table index is the distribution index, over the nodes, of each subfile of the subfile sequence written at one time. In the embodiments of the disclosure, the files are all written into the first storage devices.
The first Poisson algorithm in the embodiments of the disclosure works on the change of the node allocation rate within a preset time period of the system: it obtains the ratio between the node allocation rate at each moment and the unit time in which it occurs, obtains the per-unit-time allocation rate of each node, and obtains the total per-unit-time allocation rate of all nodes, and from these derives a first Poisson rate and a second Poisson rate describing the change of the node allocation rate. The first Poisson rate is the Poisson distribution of the change of the allocation rate of one node within the preset time period, and the second Poisson rate is the Poisson distribution of the same node within its group when files are written within the preset time period.
The first Poisson rate and the second Poisson rate are then written into the table index of the present distribution. Specifically, for every write of the subfiles of a file, the feedback value contains the second Poisson rate provided by the node and the first Poisson rate provided within the group, i.e. the format value of the table index is: first Poisson rate # group-node index code # second Poisson rate. The table index is therefore an array of a plurality of such format values. Since the Poisson rates within a preset time period are unique, the position at which a file is stored in a node can be accurately retrieved according to the table index.
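A hedged sketch of how one format value of the table index could be assembled; the Poisson-rate estimate below is a simple placeholder mean rate, since the disclosure does not fix an exact formula, and all function names are assumptions:

    from typing import Sequence

    def poisson_rate(allocations_per_unit_time: Sequence[float]) -> float:
        """Placeholder estimate of a Poisson rate as the mean allocation rate over the preset period."""
        return sum(allocations_per_unit_time) / len(allocations_per_unit_time)

    def table_index_entry(first_rate: float, group_id: str, node_id: str, second_rate: float) -> str:
        """Format value: first Poisson rate # group-node index code # second Poisson rate."""
        return f"{first_rate:.6f}#{group_id}-{node_id}#{second_rate:.6f}"

    # The table index for one write is then a list of such format values, one per allocated subfile.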
Furthermore, in the embodiments of the present disclosure, the table index is always stored in the scheduling node, and the first Poisson rate and the second Poisson rate change to a certain extent after every new file is written, so a requester cannot read the data directly through the table index.
Meanwhile, because Poisson rates tend to be broadly similar, they only provide a fuzzy addressing effect when a file is retrieved. Therefore, in the embodiments of the present disclosure, a time sequence is also provided when a file is written, and the scheduling node indexes the file jointly according to the table index and the time sequence; a threshold is set for the first Poisson rate and the second Poisson rate, and first and second Poisson rates within the threshold range are addressed to the same file. The embodiments of the present disclosure further note that setting the threshold does not interfere with accurate addressing: since the first Poisson rate and the second Poisson rate describe the file distribution produced when the same file sequence is written within a unit time period, a newly written file will be written by the scheduling node to less-occupied nodes within that unit time period, so the overall drift only varies within the threshold.
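A minimal sketch of that joint lookup, assuming the thresholds are absolute tolerances on the two Poisson rates (the tolerance values and the exact timestamp comparison are illustrative choices, not values given by the disclosure):

    from typing import List

    def rates_match(entry: str, first_rate: float, second_rate: float,
                    tol_first: float = 0.05, tol_second: float = 0.05) -> bool:
        """Fuzzy match of one table-index format value against the queried Poisson rates."""
        f, _code, s = entry.split("#")
        return abs(float(f) - first_rate) <= tol_first and abs(float(s) - second_rate) <= tol_second

    def locate(table_index: List[str], time_sequence: List[float],
               first_rate: float, second_rate: float, write_time: float) -> List[str]:
        """Index jointly by table index and time sequence: keep entries whose rates lie
        inside the threshold and whose write time matches the queried timestamp."""
        return [entry for entry, ts in zip(table_index, time_sequence)
                if rates_match(entry, first_rate, second_rate) and ts == write_time]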
The table index provides the requester with the specific distribution of the written data. From the table index obtained for written data, the predicted distribution of a file can be derived, and two kinds of write prediction are available to the requester: random writing and writing according to the distribution rate. The requester does not directly read the corresponding data in the written nodes, but essentially controls the prediction conditions of random writing and distribution-rate writing, so the occupancy of each node by the system can be managed more accurately.
The scheduling node then sends the second address to the source of the write request. Since the second address is the ciphertext obtained by encrypting the first address, the requester cannot directly parse it and can only derive the rough distribution and the subsequent probability distribution of the file from the table index.
Meanwhile, the scheduling node generates a table sequence for the storage controller: for all files covered by the table index, it addresses the first address and the second address generated for the corresponding files and establishes an association sequence between the first address and the second address.
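The table sequence held by the storage controller can be pictured as an association from each second address to the first address and virtual file of the same subfile; the following is a sketch under that assumption (the class and method names are illustrative, not terminology from the disclosure):

    from typing import Dict, Tuple

    class TableSequence:
        """Association sequence built when a table index is written:
        second address -> (first address, virtual file)."""

        def __init__(self) -> None:
            self._assoc: Dict[bytes, Tuple[bytes, bytes]] = {}

        def record(self, first_address: bytes, second_address: bytes, virtual_file: bytes) -> None:
            self._assoc[second_address] = (first_address, virtual_file)

        def first_address_of(self, second_address: bytes) -> bytes:
            return self._assoc[second_address][0]

        def virtual_file_of(self, second_address: bytes) -> bytes:
            return self._assoc[second_address][1]

        def drop(self) -> None:
            # Index deletion removes only the association; the stored files themselves are not touched.
            self._assoc.clear()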
After receiving any read request, the scheduling node indexes to the storage bit in the corresponding second storage device according to the second address; the storage controller calls up the virtual file corresponding to the indexed second address; the storage controller returns the virtual file to the scheduling node, retrieves from the table sequence the corresponding subfile indexed by the virtual file, submits the access to the first address, and returns the subfile to the scheduling node, which returns it to the requester.
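Putting the read path together as a sketch, assuming the TableSequence association above and two caller-supplied helpers that fetch the bytes stored at an address on the respective device (none of these names come from the disclosure):

    def index_read(second_address: bytes, table_sequence: "TableSequence",
                   read_second_device, read_first_device) -> bytes:
        """Read path: second address -> virtual file -> first address -> subfile."""
        virtual_file = read_second_device(second_address)        # the storage controller calls the virtual file
        assert virtual_file == table_sequence.virtual_file_of(second_address)
        first_address = table_sequence.first_address_of(second_address)
        subfile = read_first_device(first_address)               # access is submitted to the first address
        return subfile                                           # returned to the scheduling node, then to the requester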
After the scheduling node receives any deletion request, it deletes the table sequence and the table index in the provided request without processing the files themselves. Because the generation of the table sequence and the table index is based on the first Poisson rate and the second Poisson rate, the files are left unchanged so that the global first Poisson rate and second Poisson rate do not change again in a transient response, which prevents the table indexes of all current requesters from being invalidated when the same requester initiates write and delete requests within a short time.
As a further preferred mode of the embodiments of the disclosure, the first storage device and the second storage device are independent hardware media, or the first storage device and the second storage device are virtual partitions in the same hardware medium.
As another preferred mode of the embodiments of the present disclosure, the scheduling node is further configured to obtain the memory occupancy, storage device occupancy and CPU occupancy of each node, and to generate an occupancy distribution sequence according to the memory occupancy, the storage device occupancy and the CPU occupancy;
to generate a pre-storage sequence according to the occupancy distribution sequence, wherein the pre-storage sequence is used for the file sequence at the next index writing; and, if the length of the file sequence at the next write is shorter than the pre-storage sequence, to fill identification files into the difference bit positions, wherein the byte length of each identification file is 1. Based on the table index provided by the embodiments of the present disclosure, random-write prediction and distribution prediction can be performed on the storage, so a corresponding pre-storage sequence can be generated. Because the pre-storage sequence influences the first Poisson rate and the second Poisson rate of a new preset time period, the pre-storage sequence is always set to the same length, and if the next distribution to be stored is not long enough, the difference must be filled.
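A short sketch of the padding step, assuming each identification file is a single byte (the concrete marker value is an illustrative choice):

    from typing import List

    IDENTIFICATION_FILE = b"\x00"  # byte length 1, as required; the value itself is an assumption

    def pad_to_pre_storage_length(file_sequence: List[bytes], pre_storage_length: int) -> List[bytes]:
        """If the next written file sequence is shorter than the pre-storage sequence,
        fill the difference positions with identification files of byte length 1."""
        missing = pre_storage_length - len(file_sequence)
        return file_sequence + [IDENTIFICATION_FILE] * max(0, missing)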
The scheduling node is further configured to retrieve the file occupancy of the first storage device in each node after index deletion has been performed, and to delete empty files according to the time at which each empty file was generated, the empty files being the virtual files and the identification files.
The group is further configured to: if the occupied storage devices in the group are below a preset threshold, initiate a group-combining request to an idle node in another group and expand the capacity of that node into the group; if a group-combining request occurs, the corresponding node belongs to two groups. If a node belongs to two groups, the first Poisson rate in which the node participates is calculated, for each group, in the manner provided by the embodiments of the present disclosure for a node belonging to a single group.
Embodiment 2
This embodiment of the present disclosure differs from Embodiment 1 in that the scheduling node in this embodiment is a component of a group rather than an independent component.
Embodiment 3
As shown in FIG. 2, an embodiment of the present disclosure provides an electronic device for the distributed storage system based on a high-speed encryption algorithm, where the electronic device is configured to implement the operations of the system of Embodiment 1, which include the indexing operation, scheduling between nodes, and node allocation.
The electronic device comprises a processor (processor) 30 and a memory (memory) 31.
Optionally, the electronic device may also include a communication interface (communication interface) 32 and a bus 33.
The processor 30, the communication interface 32, and the memory 31 may communicate with each other via the bus 33. The communication interface 32 may be used for information transfer. The processor 30 may invoke logic instructions in the memory 31 to execute the operations of the distributed storage system based on a high-speed encryption algorithm of the above-described embodiments.
The disclosed embodiments provide a storage medium storing computer-executable instructions configured to perform the operations of the above-described distributed storage system based on a high-speed encryption algorithm.
The storage medium may be a transitory computer-readable storage medium or a non-transitory computer-readable storage medium. The non-transitory storage medium includes various media capable of storing program code, such as a USB flash disk, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disk; alternatively, a transitory storage medium may be used.
The above description and the drawings sufficiently illustrate embodiments of the disclosure to enable those skilled in the art to practice them. Other embodiments may involve structural, logical, electrical, process, and other changes; the embodiments represent only possible variations. Individual components and acts are optional unless explicitly required, and the sequence of operations may vary. Portions and features of some embodiments may be included in, or substituted for, those of others. Moreover, the terminology used in the present application is for the purpose of describing embodiments only and is not intended to limit the claims. As used in the description of the embodiments and the claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. Similarly, the term "and/or" as used in this disclosure is meant to encompass any and all possible combinations of one or more of the associated items listed. Furthermore, when used in the present disclosure, the terms "comprises," "comprising," and/or variations thereof mean that the recited features, integers, steps, operations, elements, and/or components are present, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, or apparatus that comprises such an element. Herein, each embodiment may be described with emphasis on its differences from the other embodiments, and for the same or similar parts the various embodiments may be referred to one another. For the methods, products, and the like disclosed in the embodiments, where they correspond to the method portions disclosed in the embodiments, the description of the method portions may be consulted for the relevant details.

Claims (8)

1. A distributed storage system based on a high-speed encryption algorithm, comprising:
at least one scheduling node configured to perform scheduling and node allocation and to provide at least a first key;
a group configured to set a first address and a second address into each node, and further configured to generate a distribution identifier for each node in the group according to the first address and the second address of that node;
a node configured to belong to at least one group, wherein a plurality of first storage devices and at least one second storage device corresponding to the first storage devices are correspondingly arranged in each node;
the first storage device, configured to respond when indexed by a first address, wherein the second address cannot be written into the first storage device;
the second storage device, configured to respond when indexed by a second address, wherein the first address cannot be written into the second storage device;
the first address is a plaintext address, the second address is obtained after first encryption, and the byte numbers of the first address and the second address are equal;
and a storage controller, configured to store the first address and the second address generated when the first storage device is responded to, and to request responses from the first storage device and the second storage device when the scheduling node performs an index operation on a file;
the indexing operation includes:
index writing, wherein after receiving any write request, the scheduling node splits the file provided with the request into a plurality of subfiles of a preset length to generate a subfile sequence and a time sequence corresponding to the subfile sequence;
generating virtual files with corresponding sequence digits in the sub-file sequences, wherein the byte occupation of each virtual file is a preset value;
distributing each file in the sub-file sequence according to the node task occupancy rate;
generating a table index by a first Poisson algorithm, wherein the table index is the distribution, among all nodes, of all subfiles of a subfile sequence written at one time;
the scheduling node sends a second address to the source of the write request;
index reading, wherein after receiving any reading request, the scheduling node indexes to storage bits in the corresponding second storage equipment according to the second address;
the storage controller calls a virtual file corresponding to the indexed second address;
the storage controller returns the virtual file to the scheduling node, retrieves the corresponding subfile indexed by the virtual file from the table sequence, submits the access to the first address, and returns the subfile to the scheduling node;
and index deleting, wherein after receiving any deleting request, the scheduling node deletes the table sequence in the provided request.
2. The high-speed encryption algorithm based distributed storage system of claim 1, wherein the first storage device and the second storage device are separate hardware media.
3. The high-speed encryption algorithm based distributed storage system of claim 2, wherein the first storage device and the second storage device are virtual partitions in the same hardware medium.
4. The high-speed encryption algorithm based distributed storage system of claim 1, wherein the scheduling node is further configured to:
acquiring the memory occupancy, the storage device occupancy and the CPU occupancy of the node;
generating an occupancy distribution sequence according to the memory occupancy, the storage device occupancy and the CPU occupancy;
generating a pre-storage sequence according to the occupancy distribution sequence, wherein the pre-storage sequence is used for the file sequence at the next index writing;
if the length of the file sequence at the next write is shorter than the pre-storage sequence, filling identification files into the difference bit positions, wherein the byte length of each identification file is 1.
5. The high-speed encryption algorithm based distributed storage system of claim 1, wherein the scheduling node is further configured to:
after index deletion is performed, retrieving the file occupancy of the first storage device in each node;
and deleting empty files according to the time at which each empty file was generated.
6. The high-speed encryption algorithm based distributed storage system of claim 1, wherein the group is further configured to:
if the occupied storage devices in the group are below a preset threshold, initiating a group-combining request to an idle node in another group, and expanding the capacity of that node into the group;
if a group-combining request occurs, the corresponding node belongs to two groups.
7. A computer-readable storage medium having stored thereon a computer program which, when executed, implements the system of any one of claims 1 to 6.
8. An electronic device comprising a memory and a processor, wherein the memory stores a computer program which, when executed by the processor, implements the system of any one of claims 1 to 6.
CN202311176134.4A 2023-09-13 2023-09-13 Distributed storage system based on high-speed encryption algorithm Active CN116915510B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311176134.4A CN116915510B (en) 2023-09-13 2023-09-13 Distributed storage system based on high-speed encryption algorithm

Publications (2)

Publication Number Publication Date
CN116915510A true CN116915510A (en) 2023-10-20
CN116915510B CN116915510B (en) 2023-12-01

Family

ID=88356867

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311176134.4A Active CN116915510B (en) 2023-09-13 2023-09-13 Distributed storage system based on high-speed encryption algorithm

Country Status (1)

Country Link
CN (1) CN116915510B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106446126A (en) * 2016-09-19 2017-02-22 哈尔滨航天恒星数据系统科技有限公司 Massive space information data storage management method and storage management device
US20220407725A1 (en) * 2019-11-04 2022-12-22 Jingdong Technology Information Technology Co., LTD File storage method, terminal, and storage medium
CN113472864A (en) * 2021-06-21 2021-10-01 湖南大学 High-performance block chain distributed storage system, method, equipment and storage medium
CN115185914A (en) * 2022-07-20 2022-10-14 广东省禁毒基金会 Data sharing method and computing device based on uplink and downlink data cooperation
CN115422570A (en) * 2022-11-07 2022-12-02 北京数盾信息科技有限公司 Data processing method and system for distributed storage

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
钟杰 et al.: "Data storage scheme of a gas filling station SCADA system based on a relational database", 自动化与仪器仪表 (Automation & Instrumentation), no. 11, pages 171-173 *

Also Published As

Publication number Publication date
CN116915510B (en) 2023-12-01

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant