CN114138898A - SMG-VME-AFS iterable distributed storage system - Google Patents
- Publication number
- CN114138898A (application CN202111375039.8A)
- Authority
- CN
- China
- Prior art keywords
- data
- vme
- node
- afs
- cluster
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
- 238000000034 method Methods 0.000 claims abstract description 37
- 230000008901 benefit Effects 0.000 claims abstract description 5
- 230000009466 transformation Effects 0.000 claims abstract description 4
- 238000004422 calculation algorithm Methods 0.000 claims description 45
- 230000006870 function Effects 0.000 claims description 19
- 230000008569 process Effects 0.000 claims description 15
- 238000013500 data storage Methods 0.000 claims description 11
- 238000004891 communication Methods 0.000 claims description 9
- 238000010586 diagram Methods 0.000 claims description 9
- 238000004364 calculation method Methods 0.000 claims description 7
- 238000013507 mapping Methods 0.000 claims description 6
- 238000004458 analytical method Methods 0.000 claims description 4
- 230000005540 biological transmission Effects 0.000 claims description 4
- 238000007906 compression Methods 0.000 claims description 4
- 230000006835 compression Effects 0.000 claims description 4
- 238000007726 management method Methods 0.000 claims description 3
- 230000001960 triggered effect Effects 0.000 claims description 3
- 238000010367 cloning Methods 0.000 claims description 2
- 238000001514 detection method Methods 0.000 claims description 2
- 239000006185 dispersion Substances 0.000 claims description 2
- 239000012634 fragment Substances 0.000 claims description 2
- 238000001093 holography Methods 0.000 claims description 2
- 230000010354 integration Effects 0.000 claims description 2
- 230000010076 replication Effects 0.000 claims description 2
- 238000009795 derivation Methods 0.000 description 4
- 230000004992 fission Effects 0.000 description 4
- 238000013461 design Methods 0.000 description 2
- 230000006399 behavior Effects 0.000 description 1
- 230000008859 change Effects 0.000 description 1
- 230000007547 defect Effects 0.000 description 1
- 238000011161 development Methods 0.000 description 1
- 238000009472 formulation Methods 0.000 description 1
- 238000013467 fragmentation Methods 0.000 description 1
- 238000006062 fragmentation reaction Methods 0.000 description 1
- 230000003993 interaction Effects 0.000 description 1
- 238000002955 isolation Methods 0.000 description 1
- 238000004519 manufacturing process Methods 0.000 description 1
- 230000005055 memory storage Effects 0.000 description 1
- 239000000203 mixture Substances 0.000 description 1
- 230000008520 organization Effects 0.000 description 1
- 238000005192 partition Methods 0.000 description 1
- 238000011160 research Methods 0.000 description 1
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/48—Program initiating; Program switching, e.g. by interrupt
- G06F9/4806—Task transfer initiation or dispatching
- G06F9/4843—Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
- G06F9/4881—Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/4401—Bootstrapping
- G06F9/4418—Suspend and resume; Hibernate and awake
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/448—Execution paradigms, e.g. implementations of programming paradigms
- G06F9/4482—Procedural
- G06F9/4484—Executing subprograms
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5011—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
- G06F9/5016—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals the resource being the memory
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5011—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
- G06F9/5022—Mechanisms to release resources
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5027—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Landscapes
- Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Computer Security & Cryptography (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
- Computer And Data Communications (AREA)
- Stored Programmes (AREA)
- Multi Processors (AREA)
Abstract
The SMG-VME-AFS iterable distributed storage system is provided. It uses iterative transformation of the Fibonacci sequence to realize a distributed operating system and a distributed storage system. The invention can be widely used by very large groups, countries and enterprises for distributed storage of core data, realizing computing-power sharing and distributed storage-capacity sharing; scheduling and sharing of core data; analysis of large-scale user data behavior with privacy protection; and scheduling of user smart contracts. It breaks down enterprise information silos and is significant for building a harmonious, programmable intelligent world and realizing the intelligent Internet.
Description
Technical Field
The invention aims to solve the problems of caching and sharing in distributed storage under a distributed big-data environment, together with parallel computing and high-efficiency clustered distributed storage.
The prior art is the open-source architecture commonly adopted for distributed storage: Apache Hadoop, based on Google's GFS architecture. It suffers from a complex component ecosystem, inconsistent calling APIs, high learning cost for component calls, and design defects.
The invention is a set of programs derived on the basis of a mathematical iteration function, the Fibonacci sequence. It frees CPU, memory and storage to provide more concentrated services to business entities. afs provides iterable distributed storage, also called quantum (iterable) distributed storage; in theory, data storage is unlimited. The storage size grows non-linearly with the memory range, whereas conventional data storage grows linearly: here, capacity expands exponentially, by powers of 2 (2^n).
Distributed storage is established on the minimum unit cluster; 1 billion+ users have been measured to achieve second-level storage and second-level access in a single cluster, and theoretical performance reaches communication scheduling on the order of 1,000,000 operations per second.
The invention depends on the SMG-VME iterative distributed operating system (national patent application No. 2021113373306, electronic application No. 364636644) and is implemented by inheritance from it. The system can also serve independently as a core product service module, providing customized computing-power output and distributed storage output.
Patent examination:
No prior case found by network search as of 2021-11-20.
Background
With the development of the industrial internet, the intelligent interconnection of everything, distributed computing and big data, new demands are placed on the computation and storage of ultra-large-scale data. The invention mainly studies a parallel computing engine and designs a feasible framework for ultra-large-scale data storage and computation, exporting computing power externally to construct IaaS and SaaS systems. Super-cluster enterprises share computing power and storage, improving CPU utilization, memory-storage efficiency, and distributed data storage capacity.
Description of the figures
FIG. 1 is an abstracted analogy diagram of the iterable distributed operating system;
1 super star system TSun(x)
2 sun system Sun(m)
3 planet system Planet(n)
4 iterable distributed operating system vme(n)
Derivation: TSun(x) 1 -> 2 -> 3 -> 4 vme(n), from large to small; the reverse reasoning is the fission principle of the universe.
FIG. 2 is a functional diagram of the vme(n) system components;
4 vme(n) iterable distributed operating system
401 linux OS operating system, CentOS 6.8 (FX-SMG)
402 java wrapper service
403 jvm, JDK 1.78
404 ioserver communication framework, a lightweight nio communication framework
405 wscomm common module
406 ppg system mode engine service
407 ppgmt business pattern engine interface service
408 application user service
409 mbs(n) iterable mall
5 afs(n) iterable distributed storage engine (quantum storage)
FIG. 3 is a partition node diagram of the afs(n) iterable distributed storage engine (quantum storage);
5 afs(n) iterable distributed storage engine
500 vm.0.0, represented by the top-level root vm.0
In addition, any node whose last number is 0 is the current node root; it is not responsible for specific data storage, only for instruction transmission and child-node management
502 vm.0.2, virtual machine data node No. 2
…
508 vm.0.8, virtual machine data node No. 8
50100 vm.0.1.0, root No. 1 (any label ending in 00 is a parent root)
50101 vm.0.1.1, data node No. 1
50102 vm.0.1.2, data node No. 2
…
50116 vm.0.1.16, data node No. 16
At each level, the number of child nodes relates to the parent by a multiple that is a power of 2
----------------------------------------------------
Disclosure of Invention
Distributed storage system with an iterable algorithm
1.1 Algorithm basis: the Fibonacci sequence, f(n) = f(n-1) + f(n-2), n > 2 a natural number. The Fibonacci sequence is treated here as a cosmic fission function; its derivation is common in nature, for example conch shells and galactic star systems follow the Fibonacci sequence, i.e. the fission of all things.
Varying n:
=> f(n) = f(n+1) + f(n+2)
Merging f(n+1), f(n+2):
=> f(n) = f(n+1)
From large to small,
as shown in FIG. 1.
The function f is abstracted as the TSun hyper star system (Taiyang), the Sun star system, and the Planet system (earth); substituting x for n, the deduction from large to small is as follows:
=> TSun(x) = TSun(x+1)
=> TSun(x) = Sun(m) + Sun(n) + ...
=> Sun(x) = Planet(m) + Planet(n) + ...
=> Planet(x) = Planet(x+1)
=> vme(x) = vme(x+1)
Transforming f:
=> afs(x) = afs(x+1)
afs is the iterable distributed storage engine;
vme is the iterable distributed operating system.
The successive sub-items of f (TSun -> Sun -> Planet), from high dimension to low dimension, can be iterated without changing form;
here the high dimension is linked to the low dimension, i.e. f fission.
The high dimension can schedule the low dimension for information communication and instruction transmission, and the low dimension returns instruction-execution result data to the high dimension; finally vme(n) = vme(n+1) holds, i.e. the iterable distributed operating system.
An iterable distributed operating system should be composed of business-independent basic components, universal to all scenarios; the ppg, ppgmt and ioserver components must implement the iterable mode f(n) = f(n+1).
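The scheduling relation described here (the high dimension dispatches the low dimension, and results flow back up) can be sketched as a self-similar node tree. A minimal illustration in Python; all names are hypothetical, not the patent's actual implementation:

```python
# Illustrative sketch: every node can schedule its children and
# aggregate their results back up, so each level behaves like the
# level above it -- the "f(n) = f(n+1)" self-similarity.
class VmeNode:
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []

    def dispatch(self, instruction):
        # Execute locally, then schedule the lower dimension and
        # collect its results; results flow back to the high dimension.
        results = [f"{self.name}:{instruction}"]
        for child in self.children:
            results.extend(child.dispatch(instruction))
        return results

root = VmeNode("vme.0", [
    VmeNode("vme.0.1"),
    VmeNode("vme.0.2", [VmeNode("vme.0.2.1")]),
])
print(root.dispatch("ping"))
```

Each subtree is itself a complete scheduler of the same shape, which is the sense in which the hierarchy "iterates without changing form".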
1.2 A distributed database system that stores non-relational data based on KV key-value pairs; characterized in that: it supports storage of data in any $ key val & … format, with no limit on data storage capacity or type, automatic expansion and dynamic growth; data can be altered.
It can store character strings of any kv data; as user attributes grow, the storage space grows, and the old space is deleted and reclaimed. Data is migrated to a new address; the addressing relations of the user's other data are unchanged, so only the base address of the currently growing user's data changes.
1.3 A storage system for very-large-capacity data: big data is scattered and hashed into dedicated node containers for distributed storage according to a formulated hash algorithm; nodes and parent nodes are in a power-of-2 relation, and nodes iterate between themselves according to the 1.1 algorithm. Each segment of data has a unique primary key; the formulated hash algorithm yields a base address, which the system assigns automatically, anchoring the user, and a directed hash then reads the base-address data. Because the hashed data is distributed across different data nodes, every node has balanced load and balanced data.
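A minimal sketch of the anchoring idea in 1.3, assuming a 16-node fan-out and SHA-256 as the formulated hash (the patent does not specify the actual hash algorithm):

```python
import hashlib

NODES = 16  # power-of-2 fan-out per section 1.3 (assumed value)

def node_for(primary_key: str) -> int:
    # Deterministic hash: the same primary key always anchors to the
    # same data node, so reads can address the target node directly.
    digest = hashlib.sha256(primary_key.encode()).digest()
    return int.from_bytes(digest[:8], "big") % NODES

# The same key always routes to the same node (the "anchor"):
assert node_for("user:42") == node_for("user:42")
# Different keys spread across nodes, balancing the load.
```

Because the mapping is a pure function of the key, no central directory lookup is needed for addressing.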
1.4 Based on 1.1, a distributed storage system with unlimited routing expansion, characterized in that node expansion proceeds by parent-child cascade expansion by powers of 2 (2^n);
FIG. 3 shows the multi-layer structure; this hierarchy can be routed down through unlimited levels.
Universe-holographic iterable distributed system
An independently operating operating system; each subsystem is connected to its parent system, i.e. a subsystem can be scheduled by its parent system.
FIG. 2 shows the sharing of libraries by subsystem and parent system.
A subsystem transmits instructions and results through its parent system; vme realizes instruction-data exchange over the forward, backward and router protocols, interacting layer by layer in an iterative manner; the routing protocol is inherited and shared by afs (refer to the architecture of FIG. 3).
Each afs integrates and distributes its data under vme (the iterable distributed operating system).
Universe holography means that the individual and the whole universe are in a unified inheritance relationship: however small an individual is, it carries the DNA of the universe's whole information fragment. Therefore each node contains the holographic architecture; even the smallest individual computing unit contains the computing components of the whole cluster (refer to FIG. 2).
Each node's data is part of all node information; individual iterative computation is output to the whole cluster, and the individual and the cluster are in an iterative relationship.
All nodes combine into the ultra-large-scale distributed storage, while nodes share unified instruction-system scheduling.
3 The minimum-unit vme(n) set of iterable programs, see FIG. 2
4 vme(n) iterable distributed operating system. 401 linux OS operating system. 402 java wrapper service. 403 jvm. 404 ioserver communication framework. 405 wscomm. 406 ppg virtual machine engine service. 407 ppgmt interface service. 408 application. 409 mbs iterable mall.
The ioserver, ppg, ppgmt and mbs component services must implement the algorithm agreed in 1.1.
An iterable distributed storage system capable of executing smart-contract programs
Characterized in that:
afs(n), abbreviated afs
4.1 afs distributed storage encrypts core data with a formulated algorithm and supports formulated-algorithm decryption, fast and effective; it supports a high-ratio compression algorithm to generate ciphertext;
4.11 fast encryption of core data
4.12 the data encrypted in 4.11 is repackaged and compressed (binary compression) with a high-ratio compression algorithm
4.13 reverse decryption is supported
4.2 afs supports entry of user individual attribute data; the data comprises formulated smart contracts, formulated smart functions, and the data encryption, decryption and read-write services agreed in this section;
4.3 afs supports a formulated smart-contract analyzer: the smart contracts and smart functions agreed in 4.2 can be analyzed, decrypted and run, with results output to afs; computation before writing and storage is supported;
4.4 Smart contract: a java-like computer applet, analyzed and executed by the smart-contract engine carried by afs, supporting simple mathematical operations and condition judgments. A smart contract is a mathematical expression of the parts of a written contract concerning both parties' benefits; it can be triggered and executed by the system under defined conditions, is kept programmatically and electronically under the afs node's user data block, can only be executed by the afs smart-contract analysis engine, and stores the valid digital signatures of both parties required by the contract;
4.5 afs supports running the data-calculation process of 4.3 without leaving the afs storage bottom layer: encryption, decryption, analysis and operation of the smart-contract agreement complete there, and results are written back to storage. This achieves high execution efficiency, and because the operation never leaves the bottom layer, the safety of isolated data access is assured;
4.6 afs smart-contract compiling parser: a self-defined algorithm, executed in sandbox-observer mode with all file operations strictly controlled;
The significance of this section: smart contracts can be given to users, the world can be executed programmably, and the intelligent Internet finally becomes possible.
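As an illustration only (the patent does not disclose the actual afs contract engine), a restricted expression evaluator in the spirit of 4.4 and 4.6: simple arithmetic and condition judgment are allowed, and everything else is rejected, as the sandbox would require. All names here are hypothetical:

```python
import ast
import operator

# Whitelisted operations only: arithmetic plus comparisons.
_OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
        ast.Mult: operator.mul, ast.Div: operator.truediv,
        ast.Gt: operator.gt, ast.Lt: operator.lt, ast.Eq: operator.eq}

def eval_contract(expr: str, env: dict):
    # Walk the parsed expression; any node outside the whitelist
    # (calls, imports, attribute access, ...) raises, sandbox-style.
    def ev(node):
        if isinstance(node, ast.Expression):
            return ev(node.body)
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](ev(node.left), ev(node.right))
        if isinstance(node, ast.Compare) and len(node.ops) == 1:
            return _OPS[type(node.ops[0])](ev(node.left), ev(node.comparators[0]))
        if isinstance(node, ast.Name):
            return env[node.id]
        if isinstance(node, ast.Constant):
            return node.value
        raise ValueError("operation not allowed in contract")
    return ev(ast.parse(expr, mode="eval"))

print(eval_contract("amount * 2", {"amount": 10}))                  # 20
print(eval_contract("delivered_days < 30", {"delivered_days": 12})) # True
```

A condition like `delivered_days < 30` is the kind of limited trigger 4.4 describes; attempts to call functions or import modules fail before any evaluation happens.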
Iterable distributed storage system supporting very large capacity with second-level access
Characterized in that:
Everything from the operating system (FIG. 2, 401) up to the mbs mall service (FIG. 2, 409) is iterable; vme is the operating-system layer (FIG. 2, 4), afs the distributed storage layer together with smart contracts, and mbs a basic mall service. The afs server is the ioserver framework (FIG. 2, 404), the afs storage module is the ppg system mode engine (FIG. 2, 406), and the afs start instance is the ppgmt business-mode engine service (FIG. 2, 407).
5.0 Data is hashed by the formulated hash algorithm and stored by capacity on the distributed nodes of the hash storage cluster. Once each user's unique identity primary key is determined, the user's information is anchored at a certain node, and data reads and writes address that target node directly. Read-write operations run on a shared queue, which guarantees ordering, data uniqueness, isolation and similar properties.
5.1 Each expanded one-dimensional child level doubles, following geometric multiples of 2;
the cluster is route-expanded by levels, by powers of 2 (2^n). The current lowest vm dimension is 4, i.e. 16 node shards per vm. Storage capacity can be expanded rapidly just by expanding the physical cluster, without changing the unit cluster, which facilitates iterative upgrades of batch engineering deployments.
5.2 After the system is expanded, an original data node is automatically upgraded to a current parent node, and the original node's data automatically rebalances according to the formulated hash algorithm. All child-node data under the current parent node is likewise addressed by the formulated hash algorithm (the original data also remains addressable), and the formulated hash algorithm can include a parent-child cluster dimension factor;
data stays where its cluster is, so an upgrade does not disturb the whole network; if the cluster dimension factor were not considered, the original algorithm could not route to specific data after an upgrade.
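One way to realize a "cluster dimension factor", sketched under the assumption that each tree level hashes independently; the names and hash choice are mine, not the patent's:

```python
import hashlib

FANOUT = 16  # children per parent, a power of 2 per 1.4/5.1 (assumed)

def child_index(key: str, level: int) -> int:
    # The level number acts as the dimension factor: each level hashes
    # independently, so adding a deeper level never changes routing
    # decisions already made at the levels above it.
    h = hashlib.sha256(f"{level}:{key}".encode()).digest()
    return int.from_bytes(h[:8], "big") % FANOUT

def route(key: str, depth: int) -> str:
    # Walk from the root down `depth` levels to a node name.
    path = ["vm", "0"]
    for level in range(depth):
        path.append(str(child_index(key, level)))
    return ".".join(path)

before = route("user:42", 1)   # placement before expansion
after = route("user:42", 2)    # same key after a level is added
assert after.startswith(before)  # upper-level routing is preserved
```

Because the old node name is a prefix of the new one, an upgraded node can forward old lookups downward instead of the whole network rehashing.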
5.3 Each layer of nodes in turn implements the distributed forward, backward and router protocols, realizing communication and data transmission with superior nodes;
these three protocols guarantee the message-communication capability of each cluster.
5.4 The routing algorithms of all sibling nodes are completely identical, i.e. they simultaneously satisfy the f iterative-function rules;
5.5 Each data node satisfies the formulated hash algorithm and data-dispersion properties while taking the cluster dimension-raising factor into account, ensuring that original data is not hashed into other clusters while sub-cluster nodes are upgraded. The hash-algorithm principle: the same key always routes back to the same place;
5.6 The load of each data node satisfies the balance property;
because data storage uses the formulated hash algorithm, which is also a hash-balancing algorithm, the stored data is finally distributed across different nodes, consistent with data dispersion.
5.7 Data consistency, idempotent write operations: f(x) = f(x), meaning the operation can be repeated n times but only the first successful write takes effect;
sometimes, on a poor transaction network, a user submits repeatedly; the algorithm ensures that only the first successful submission is effective and subsequent repeated submissions are not executed, guaranteeing data uniqueness.
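The idempotent rule f(x) = f(x) can be sketched as a write-once store; this is an illustration of the property, not the patent's implementation:

```python
# Repeated submissions with the same primary key leave exactly one
# record: only the first successful write takes effect.
class IdempotentStore:
    def __init__(self):
        self._data = {}

    def write_once(self, key, value):
        # Repeats are no-ops, so retrying a submission is always safe.
        if key in self._data:
            return False
        self._data[key] = value
        return True

store = IdempotentStore()
assert store.write_once("order:1", {"amount": 99}) is True
assert store.write_once("order:1", {"amount": 99}) is False  # repeat ignored
assert store._data["order:1"] == {"amount": 99}
```
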
5.8 afs supports user spatial-data independence, privacy, and user-data protection;
user data is hashed into different storage nodes, so even two users in the same region may be on different nodes, and addressing must go through the hash algorithm. As for privacy: the core data is cryptographically protected.
5.9 afs supports dynamic rewriting of user block-attribute data and automatic expansion of storage space; the old space left by automatic expansion is automatically reclaimed;
large user populations are stored distributed on a large afs file system, with indexes and block addresses. When a user's rewrite causes capacity expansion and thus an address change, the system automatically opens a new block to replace the original, copies the data to the new block's address, and re-indexes it; other users' indexes are unchanged. Overflowed blocks are marked for garbage collection and refilled with smaller user data stores, avoiding excessive garbage fragmentation.
5.10 afs supports second-level access for user block-data reads, unlimited by user scale, because cluster and node expansion follow the 2^n rule;
the read-write queue is a high-speed queue supporting multi-path iterative addressing, and the algorithm guarantees the data capacity of cache operations.
5.11 afs supports block writing of quantitative user block data and batch flushing ("flash write");
to save CPU and reduce disk IO, a dedicated write thread is responsible for data writing. A single user's write is not flushed immediately: if access is idle, a timeout triggers the write; if writes from many users accumulate quickly, the data block is committed to disk in one batch, a process called the flash write.
5.12 afs supports multi-user concurrent writing while guaranteeing data uniqueness;
because a shared queue is used, all data is first submitted to memory, then the queue performs the write operations; the queue supports concurrent FIFO operation, and the algorithm guarantees data uniqueness.
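The 5.11/5.12 pattern (a shared FIFO drained by one write thread, with batch commits and an idle-timeout flush) can be sketched as follows; all names are hypothetical and the batch size is an assumed parameter:

```python
import queue
import threading

# Every producer enqueues into one shared FIFO; a single write thread
# drains it, so disk writes are serialized in order and batched.
q = queue.Queue()
written = []  # stands in for the disk

def write_thread(batch_size=4, idle_timeout=0.1):
    batch = []
    while True:
        try:
            item = q.get(timeout=idle_timeout)
        except queue.Empty:
            if batch:                  # idle timeout triggers the flush
                written.extend(batch)
                batch.clear()
            break                      # a real service would keep looping
        batch.append(item)
        if len(batch) >= batch_size:   # quantitative batch commit
            written.extend(batch)
            batch.clear()

for i in range(6):                     # concurrent producers would q.put too
    q.put(f"record-{i}")
t = threading.Thread(target=write_thread)
t.start()
t.join()
assert written == [f"record-{i}" for i in range(6)]  # FIFO order kept
```

The single consumer is what makes concurrent submission safe without per-record locking: ordering and uniqueness are decided once, at the queue.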
5.13 afs supports remote reading/writing of remote clusters and service invocation: mapsend data-proxy sending and mapum proxy summarizing (similar to Google's MapReduce); the data scale halves as each layer of data is promoted upward;
mapsend encapsulates the remote-call capability: a client initiates a connection to a server to submit data, specifying a method (afs.xxxx) to indicate which method is called; the system automatically routes to that node's vme to execute the corresponding method. The proxy client initiates the request and completes instruction sending, and the instruction is routed to the target node where execution completes.
5.14 Security: afs supports automatic deployment and aDDoS (anti-DDoS-attack) intrusion detection; illegal intrusions are automatically blacklisted and banned;
aDDoS is the automatic anti-DDoS-attack interceptor of the vmeserver; the algorithm can effectively resist large numbers of illegal connections and reduce invalid socket connections.
5.15 Minimum unit cluster, a single-physical-machine cluster, supporting 1 x (8+1) x 16: 1 physical machine configured with 8 data virtual machines plus 1 root virtual machine; each data virtual machine is configured with 1 vme root and 16 vme data instances; the root vme instances only monitor and dispatch, storing no data;
5.16 supports cloning the single cluster of 5.15 to expand into multiple physical-machine clusters;
5.17 virtual-machine network planning rules in the minimum unit cluster are as follows (as shown in FIG. 3):
# name Address Process name function
vm.0 192.168.1.179:9000 ppg0 root
vm.0.1 192.168.1.180:9100 vm
vm.0.2 192.168.1.181:9200 vm
…
vm.0.8 192.168.1.187:9800 vm
Allocating vme in order from virtual machines 1-8
The virtual machine 1:
# name Port Process name function
vm.0.1.0 9100 ppg0 root
vm.0.1.1 9101 ppg1 vme
vm.0.1.2 9102 ppg2 vme
…
vm.0.1.16 9116 ppg16 vme
And (3) the virtual machine 2:
# name Port Process name function
vm.0.2.0 9200 ppg0 root
vm.0.2.1 9201 ppg1 vme
vm.0.2.2 9202 ppg2 vme
…
vm.0.2.16 9216 ppg16 vme
……
The virtual machine 8:
# name Port Process name function
vm.0.8.0 9800 ppg0 root
vm.0.8.1 9801 ppg1 vme
vm.0.8.2 9802 ppg2 vme
…
vm.0.8.16 9816 ppg16 vme
Note:
5.17.1 Nodes 1-16 are data nodes;
5.17.2 the second-to-last dotted segment of the name is the virtual-machine number;
5.17.3 in the 9xxx port number, the second digit is the virtual-machine number;
5.17.4 replacing the first digit 9 of the 5.17.3 port number with 8, i.e. an 8xxx port number (other rules as above), designates management of the vme shell; 9xxx is called by the afs program interface;
5.17.5 a fast network-addressing algorithm follows from 5.17.1-5.17.4: the ip can be computed quickly from the virtual-machine number, or the number easily from the ip; the node number and ip tail number can be computed from the port number, and addressing is easily calculated through a dynamic mapping algorithm;
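The 5.17 addressing rules can be checked mechanically against the tables above. A sketch, assuming the base ip tail 179 observed in the tables (the helper names are mine):

```python
# vm.0 sits at 192.168.1.179:9000 per the table; data VMs count up.
BASE_TAIL = 179

def address(vm: int, node: int = 0, shell: bool = False):
    # vm.0.<vm>.<node> -> (ip, port). The port encodes both numbers:
    # first digit 9 (afs program interface) or 8 (vme shell),
    # then the virtual-machine number, then the node number.
    ip = f"192.168.1.{BASE_TAIL + vm}"
    port = (8000 if shell else 9000) + vm * 100 + node
    return ip, port

def parse_port(port: int):
    # Recover (virtual-machine number, node number) from a port.
    return (port % 1000) // 100, port % 100

assert address(1) == ("192.168.1.180", 9100)      # vm.0.1 root, per table
assert address(8, 16) == ("192.168.1.187", 9816)  # vm.0.8.16, per table
assert address(2, 1, shell=True)[1] == 8201       # shell-port rule (5.17.4)
assert parse_port(9216) == (2, 16)                # vm 2, node 16
```

This is the "one number, many uses" property of 5.17.5: ip, vm number, node number and port are all derivable from each other by arithmetic.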
5.18 mapport mapping service
Physical-machine host operating system: Windows Server 2012 (Fix)
A bat script performs port-route mapping on the physical machine's network adapter through the port proxy; the 9xxx and 8xxx ports are mapped to each intranet virtual machine for use:
mapport $oip $vmeip $oport $vmeport
eg: mapport 192.168.1.170 192.168.1.179 9000 9000
where $oip is the physical-machine ip, $vmeip the virtual-machine ip, $oport the physical-machine port, and $vmeport the node port. The bat script can map all virtual machines under the cluster in batches, vme port 9xxx and shell port 8xxx, opening an ip path between the physical machine and the virtual machines' vme through the mapport service;
5.19 based on 5.18, each node is communicated with the physical, the physical machine is bound with an external network, the virtual machine and the physical machine are bridged, the virtual machine and the physical machine share the external network ip, and any virtual machine and vme nodes can be accessed through the ip + port, and the ports are uniquely distributed;
5.20 With the cluster intranet of 5.19 complete, a cluster can be replicated by iteratively cloning the physical machine and changing only the ip; no additional in-cluster configuration is needed. As long as the ip is changed, the system iterates to multiple clusters, realizing the dimension-raising concept and enabling iterative upgrades under claims 1 and 2.
5.21 minimal clustering:
FIG. 3: one physical machine contains at least 9 virtual machines. Virtual machine 0 is the root and stores no data (FIG. 3, 500, vm.0.0, ip 192.168.1.179:9000).
The other virtual machines 1-8 are child-node virtual machines that store data (FIG. 3, 501-508), corresponding to vm.0.1-vm.0.8, ip 192.168.1.180:9100 through 192.168.1.187:9800.
Each data virtual machine runs 17 vme (ppg) processes, with ppg0 as the current root, namely:
vm.0.1.0=ppg0=9100
vm.0.1.1=ppg1=9101
..
vm.0.1.16=ppg16=9116
… iterate to the last node in turn
vm.0.8.0=ppg0=9800
vm.0.8.1=ppg1=9801
..
vm.0.8.16=ppg16=9816
Note: replacing a 9xxx port with the corresponding 8xxx port (same rule) gives the vm shell remote-login port, through which any vm(n) = ppg(n) can be logged into and operated remotely.
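The enumeration above follows a single rule and can be generated mechanically; a sketch (function name illustrative):

```python
def cluster_plan() -> dict:
    """vm.0.<x>.<y> = ppg<y> = port 9<x><yy>, for data virtual machines x in 1..8
    and vme nodes y in 0..16 (ppg0 is the local root of each virtual machine)."""
    return {f"vm.0.{x}.{y}": (f"ppg{y}", 9000 + 100 * x + y)
            for x in range(1, 9) for y in range(17)}
```

This yields 8 x 17 = 136 entries, from vm.0.1.0 = ppg0 = 9100 through vm.0.8.16 = ppg16 = 9816.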
vme parent-child relationship
Level 1: vm.0 = root
Level 2: vm.0.0 = upper(vm.0); current children: vm.0.1 -> vm.0.8
The remaining nodes follow the same pattern.
Significance of these features:
a) all minimum-unit cluster ips are planned this way (the rule never changes); the advantage is that a cluster only needs to be cloned (suitable for batch engineering manufacture) and the external-network ip modified in each ppg property file, reusing external ips and achieving cascaded interconnection of everything;
b) the ip is known from the port number, the ip is known from the virtual machine number, and the virtual machine number and port number are known from the ip tail number, so one number serves many purposes;
c) no lookup table is required: the child nodes of a vme virtual machine can be located quickly through the built-in formulated algorithm;
d) multi-layer routing aids parallel computation, because cluster commands are issued along multiple paths almost simultaneously and every sub-cluster relays the issued task in turn.
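Point d) can be illustrated with a toy fan-out over the root -> vm.0.1..vm.0.8 -> vme layout of FIG. 3; the helper name and tree encoding are illustrative, not from the patent:

```python
def fanout_layers(root, children):
    """Breadth-first command dissemination: every node in a layer forwards the
    cluster command to all of its children, so each forwarding hop multiplies
    the nodes reached and N nodes are covered in O(log N) layers."""
    layers, frontier = [], [root]
    while frontier:
        layers.append(frontier)
        frontier = [c for node in frontier for c in children.get(node, [])]
    return layers

# root commands 8 data virtual machines; each commands its 16 data vme nodes
tree = {"vm.0.0": [f"vm.0.{x}" for x in range(1, 9)]}
for x in range(1, 9):
    tree[f"vm.0.{x}"] = [f"vm.0.{x}.{y}" for y in range(1, 17)]
```

With this tree, 1 + 8 + 128 nodes are reached in just three layers of forwarding.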
The afs start-up procedure:
vme(n) (FIG. 2, 4) -> linux os launch (FIG. 2, 401) -> java wrapper (402) -> jvm (403) -> ppg (406) -> ppg server launch (FIG. 2, 404 ioserver framework) -> vmeServer launch (FIG. 2, 407, 406) -> vme(n) launch -> afs(n) (5) launch -> vm(n) -> traverse vm.0.0-vm.0.8 to launch the virtual machines (FIG. 3, 500-508)
-> each vm.0.$x node traverses vm.0.$x.0-vm.0.$x.16 to launch its VMEs (FIG. 3, 50$x00-50$x16, where $x is the virtual machine number).
Note: $x denotes the virtual machine number; $yy denotes the ppg process name (vme number).
vm(n) and afs(n) are then in the ready state, listening on port 9$x0$yy, where $x is the virtual machine number and $yy the vme node number (likewise below). The ppg server listens on port 8$x0$yy, which is operated from the remote shell console; vme has its own authentication system. The 9$x0$yy port is used by afs to store data to programs remotely.
The invention has the following practical significance:
The invention uses a Fibonacci-series iterative transformation to realize a distributed operating system and a distributed storage system. It can be widely used by very large groups, countries, and enterprises for distributed storage of core data, realizing computing-power sharing and distributed storage-capacity sharing; scheduling and sharing of core data; analysis of large-user data behavior with privacy protection; and scheduling of user smart contracts. It opens up enterprise information islands, which is significant for building a harmonious, programmable, intelligent world and realizing the intelligent Internet.
Claims (4)
1. A distributed storage system based on an iterable algorithm;
1.1 Deduced from the mathematically iterable function f(n) = f(n-1), starting from the Fibonacci series:
f(n) = f(n-1) + f(n-2), n > 2, n a natural number
varying n
=> f(n) = f(n+1) + f(n+2)
merging f(n+1) and f(n+2)
=> f(n) = f(n+1)
The function f is abstracted, from large to small, into a TSun hyper-star system (Taiyang), a Sun star system (Sun), and a Planet system (Earth); replacing n with x, the deduction from large to small gives:
=>TSun(x)=TSun(x+1)
=>TSun(x)=Sun(m)+Sun(n)+...
=>Sun(x)=Planet(m)+Planet(n)+...
=>Planet(x)=Planet(x+1)
=>vme(x)=vme(x+1)
Transformation f
=>afs(x)=afs(x+1)
afs: an iterable distributed storage engine
vme: an iterable distributed operating system
1.2 A distributed database system that stores non-relational data as KV key-value pairs; characterized in that it supports storage of any data format of the form $key=val&…; data storage capacity and data type are unlimited, storage expands automatically and grows dynamically, and the data is mutable;
1.3 A storage system for very-large-capacity data: big data can be scattered and hashed into dedicated node containers for distributed storage according to a formulated hash algorithm; nodes and parent nodes stand in a power-of-2 relationship, and nodes iterate between themselves according to the algorithm of 1.1;
1.4 Based on 1.1, a distributed storage system capable of unlimited routing expansion, characterized in that node expansion proceeds by parent-child cascade expansion according to powers of 2.
2. A cosmic holographic iterative distributed system;
the method is characterized in that:
2.1 independently running operating systems, where each subsystem connects to its parent system, i.e. a subsystem can be scheduled by its parent system;
2.2 the subsystem and the parent system share libraries;
2.3 the subsystem transmits instructions and results through the parent system;
2.4 each afs integrates distributed storage of data under vme (an iterable distributed operating system);
2.5 cosmic holography: the individuals and the whole of the universe stand in a unified inheritance relationship; however small an individual is, it carries a DNA fragment of the whole-universe information, so every node contains the holographic architecture, and even the smallest computing unit contains the computing components of the whole cluster;
2.6 each node's data is a part of the information of all nodes; individual iterative computation is output to the whole cluster, and individual and cluster stand in an iterative relationship;
2.7 all nodes combine into very-large-scale distributed storage, while node-to-node scheduling follows a unified instruction system.
3. An iterable universe-hologram minimum-unit vme(n) assembly, comprising (architecture-diagram labels in parentheses):
3.0 (4) vme(n) iterable distributed operating system
3.1 (401) linux os operating system
3.2 (402) java wrapper service
3.3 (403) jvm
3.4 (404) ioserver communication framework
3.5 (405) wscomm.common module
3.6 (406) ppg virtual machine engine service
3.7 (407) ppg mt interface service
3.8 (408) application client applications
3.9 (409) mbs iterable mall.
4. An iterable distributed storage system capable of executing smart contract programs;
The method is characterized in that:
afs(n), abbreviated afs;
4.1 afs distributed storage encrypts core data with a formulated algorithm and stores it; it supports fast, effective formulated-algorithm decryption and a high-ratio compression algorithm for generating the ciphertext;
4.2 afs supports entry of a user's individual attribute data, including formulated smart contracts, formulated smart functions, and the data encryption, decryption, and read-write services agreed in the clause;
4.3 afs supports a built-in smart-contract parser: the smart contracts and smart functions agreed in 4.2 can be parsed, decrypted, and run, the run result is output to afs, and compute-before-write storage is supported;
4.4 Smart contracts: java-like computer applets parsed and executed by the smart-contract engine carried by afs, supporting simple mathematical operations and condition judgments; a smart contract is a mathematical expression over the interest terms of both parties to a written contract, can be triggered and executed by the system under limited conditions, is kept programmatically and electronically under the user data block of an afs node, can only be executed by the afs smart-contract analysis engine, and stores the valid digital signatures of both parties required by the contract;
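As a hedged illustration of 4.4 (not the patent's actual engine), a minimal contract-expression evaluator supporting only arithmetic, comparisons, and conditionals could look like this; all names are assumptions:

```python
import ast
import operator as op

_OPS = {ast.Add: op.add, ast.Sub: op.sub, ast.Mult: op.mul, ast.Div: op.truediv,
        ast.Gt: op.gt, ast.Lt: op.lt, ast.GtE: op.ge, ast.LtE: op.le, ast.Eq: op.eq}

def eval_contract(expr: str, env: dict):
    """Evaluate a contract expression over named terms in env.
    Only literals, names, binary arithmetic, comparisons, and `a if c else b`
    are permitted - no calls, attributes, or subscripts can be executed."""
    def ev(n):
        if isinstance(n, ast.Expression):
            return ev(n.body)
        if isinstance(n, ast.Constant):
            return n.value
        if isinstance(n, ast.Name):
            return env[n.id]
        if isinstance(n, ast.BinOp):
            return _OPS[type(n.op)](ev(n.left), ev(n.right))
        if isinstance(n, ast.Compare):
            return _OPS[type(n.ops[0])](ev(n.left), ev(n.comparators[0]))
        if isinstance(n, ast.IfExp):
            return ev(n.body) if ev(n.test) else ev(n.orelse)
        raise ValueError(f"disallowed construct: {type(n).__name__}")
    return ev(ast.parse(expr, mode="eval"))
```

A clause like "pay a bonus if the total exceeds 100" becomes `eval_contract("bonus if total > 100 else 0", {"bonus": 9, "total": 150})`; anything outside the whitelist raises, which is the condition-judgment-only restriction stated above.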
4.5 afs supports keeping the data-computation process of 4.3 inside the afs storage layer, where encryption, decryption, parsing, and execution of the smart-contract agreement are completed and the result is written back to storage;
5. An iterable distributed storage system supporting super-large capacity and second-level access;
The method is characterized in that:
5.0 data is hashed with a formulated hash algorithm and stored by capacity across the distributed nodes of the hash storage cluster;
5.1 each one-dimensional expansion doubles the child nodes (geometric growth by a factor of 2);
5.2 after the system expands, the original data node is automatically promoted to the current parent node, and the original node's data automatically rebalances according to the formulated hash algorithm; the same formulated hash algorithm still addresses all child-node data under the current parent node (old data remains addressable and locatable), and the formulated hash algorithm may include a parent-child cluster dimension factor;
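The power-of-2 rebalancing property of 5.2 can be sketched generically; sha256 stands in here for the patent's unspecified formulated hash, and the function name is illustrative:

```python
import hashlib

def node_for(key: bytes, n_nodes: int) -> int:
    """Place a key on one of n_nodes (a power of 2). Because h % 2n is either
    h % n or h % n + n, doubling the node count moves a key at most from its
    old node to that node's single new sibling - old data stays addressable
    and never lands in a foreign cluster."""
    h = int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
    return h % n_nodes

# expanding from 8 to 16 nodes: every key stays put or moves to old + 8
for i in range(1000):
    old = node_for(b"user-%d" % i, 8)
    new = node_for(b"user-%d" % i, 16)
    assert new in (old, old + 8)
```

This is the same modular-arithmetic fact that makes linear hashing rebalance incrementally: only keys whose hash selects the new half actually move.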
5.3 each layer of nodes in turn implements the distributed forward, backward, and router protocols, achieving communication and data transmission with its parent node;
5.4 the forwarding algorithms of all sibling nodes are identical, i.e. they simultaneously satisfy the iterative rules of the function f;
5.5 each data node satisfies the formulated-hash and data-dispersion properties while taking the cluster dimension-raising factor into account, guaranteeing that original data is never hashed into other clusters while sub-cluster nodes are upgraded; by the hashing principle, data returns by the way it came;
5.6 the load on each data node satisfies the balance property;
5.7 data consistency and idempotent write operations: f(f(x)) = f(x), meaning the operation can be repeated n times and only the first successful write takes effect;
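A minimal sketch of the idempotence rule f(f(x)) = f(x), using a client-chosen operation id to deduplicate replays (class and method names are illustrative):

```python
class IdempotentStore:
    """Writes are keyed by an operation id; replaying the same operation any
    number of times leaves only the effect of the first successful write."""
    def __init__(self):
        self.data = {}
        self.applied = set()

    def write(self, op_id: str, key: str, value) -> bool:
        if op_id in self.applied:   # replayed operation: no effect
            return False
        self.data[key] = value
        self.applied.add(op_id)
        return True
```

Retrying `write("op1", "k", ...)` after a timeout is then always safe: the second attempt is a no-op rather than a second mutation.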
5.8 afs supports user spatial data independence, privacy and user data protection;
5.9 afs supports dynamic rewriting of user-block attribute data and automatic expansion of storage space; the old data space freed by automatic expansion is reclaimed automatically;
5.10 afs supports second-level access for user-block data reads, unlimited by user scale, because cluster and node expansion follow the power-of-2 rule;
5.11 afs supports block writing and flash storage of quantitative user block data;
5.12 afs supports multi-user concurrent writing and ensures the uniqueness of data;
5.13 afs supports remote read-write of remote clusters and service calls, with mapsend proxying data sending and mapsum proxying aggregation (similar to Google GFS / MapReduce); the data scale halves as each layer is ascended;
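The halving-per-layer aggregation of 5.13 resembles a reduction tree; the sketch below uses `mapsum` as a generic pairwise combiner (an assumption, not the patent's actual service):

```python
def mapsum(values, combine=lambda a, b: a + b):
    """Aggregate leaf results layer by layer: each layer pairs up the results
    of the layer below, so the data scale halves with every ascent and the
    final value is reached in ceil(log2(n)) layers."""
    layer, ascents = list(values), 0
    while len(layer) > 1:
        layer = [combine(layer[i], layer[i + 1]) if i + 1 < len(layer) else layer[i]
                 for i in range(0, len(layer), 2)]
        ascents += 1
    return layer[0], ascents
```

For 16 leaf results the total arrives after 4 ascents (16 -> 8 -> 4 -> 2 -> 1), which is the "halving per layer" behaviour described above.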
5.14 security: afs supports automatic deployment and aDDoS (anti-DDoS attack) intrusion detection; illegal intrusions are automatically blacklisted and banned;
5.15 minimal unit cluster: a single-physical-machine cluster supporting 1 x (8+1) x 16: 1 physical machine with 8 data virtual machines plus 1 root virtual machine under it; under each data virtual machine, 1 vme root and 16 vme virtual machines are configured; the root only monitors and schedules, storing no data;
5.16 supporting cloning of the single cluster of 5.15 into multi-physical-machine clusters;
5.17 the virtual machine network planning rules inside the minimal unit cluster of 5.15 are as follows:
# name Address Process name function
vm.0 192.168.1.179:9000 ppg0 root
vm.0.1 192.168.1.180:9100 vm
vm.0.2 192.168.1.181:9200 vm
…
vm.0.8 192.168.1.187:9800 vm
Allocating vme in order from virtual machines 1-8
Virtual machine 1:
# name Port Process name function
vm.0.1.0 9100 ppg0 root
vm.0.1.1 9101 ppg1 vme
vm.0.1.2 9102 ppg2 vme
…
vm.0.1.16 9116 ppg16 vme
Virtual machine 2:
# name Port Process name function
vm.0.2.0 9200 ppg0 root
vm.0.2.1 9201 ppg1 vme
vm.0.2.2 9202 ppg2 vme
…
vm.0.2.16 9216 ppg16 vme
… …
Virtual machine 8:
# name Port Process name function
vm.0.8.0 9800 ppg0 root
vm.0.8.1 9801 ppg1 vme
vm.0.8.2 9802 ppg2 vme
…
vm.0.8.16 9816 ppg16 vme
Note:
5.17.1 vme nodes 1-16 are data nodes;
5.17.2 in the node name, the penultimate segment (e.g. the 2 in vm.0.2.16) is the virtual machine number;
5.17.3 in port number 9xxx, the second digit is the virtual machine number;
5.17.4 replacing the leading 9 of the 5.17.3 port number with 8 gives the 8xxx port (other rules as above); 8xxx ports are used for vme shell management, and 9xxx ports are called by the afs program interface;
5.17.5 a fast network addressing algorithm according to 5.17.1-5.17.4: the ip can be computed quickly from the virtual machine number and the number from the ip; the node number and ip tail number can be computed from the port number, so addressing reduces to an easy dynamic mapping calculation;
5.18 mapport mapping service
Host operating system on the physical machine: Windows Server 2012 (fixed)
A .bat script uses a port proxy to build the map and performs port route mapping on the physical machine's network adapter (as in the figure); the 9xxx and 8xxx ports are mapped to each virtual machine on the intranet:
mapport $oip $vmeip $oport $vmeport
e.g. mapport 192.168.1.170 192.168.1.179 9000 9000
$oip: physical machine ip; $vmeip: virtual machine ip; $oport: physical machine port; $vmeport: node port;
the .bat script can apply this in batches to all virtual machines in the cluster (vme ports 9xxx, shell ports 8xxx);
an ip path between the physical machine and the vme virtual machines is opened through the mapport service;
5.19 Based on 5.18, every node can reach the physical machine; the physical machine is bound to the external network, the virtual machines are bridged to the physical machine and share its external-network ip, and any virtual machine or vme node can be accessed via ip + port, with ports uniquely allocated;
5.20 With the cluster intranet of 5.19 complete, a cluster can be replicated by iteratively cloning the physical machine and changing only the ip; no additional in-cluster configuration is needed. As long as the ip is changed, the system iterates to multiple clusters, realizing the dimension-raising concept and enabling iterative upgrades under claims 1 and 2.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111337330 | 2021-11-15 | ||
CN2021113373306 | 2021-11-15 |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114138898A true CN114138898A (en) | 2022-03-04 |
Family
ID=80390145
Family Applications (5)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111375039.8A Pending CN114138898A (en) | 2021-11-15 | 2021-11-20 | SMG-VME-AFS iterable distributed storage system |
CN202111381552.8A Pending CN114138410A (en) | 2021-11-15 | 2021-11-21 | SMG-vmecloneVMOS iterated virtual machine clone operating system |
CN202111381353.7A Pending CN114157460A (en) | 2021-11-15 | 2021-11-22 | SMG-VME-aDDoS attack defense system based on VME-TCP-IP anti-DDoS |
CN202111392114.1A Pending CN114281889A (en) | 2021-11-15 | 2021-11-23 | SMG-VME-DSSS data sharing slicing service |
CN202210154742.4A Pending CN114510335A (en) | 2021-11-15 | 2022-02-21 | SMG-VME iteratively distributed operating system |
Family Applications After (4)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111381552.8A Pending CN114138410A (en) | 2021-11-15 | 2021-11-21 | SMG-vmecloneVMOS iterated virtual machine clone operating system |
CN202111381353.7A Pending CN114157460A (en) | 2021-11-15 | 2021-11-22 | SMG-VME-aDDoS attack defense system based on VME-TCP-IP anti-DDoS |
CN202111392114.1A Pending CN114281889A (en) | 2021-11-15 | 2021-11-23 | SMG-VME-DSSS data sharing slicing service |
CN202210154742.4A Pending CN114510335A (en) | 2021-11-15 | 2022-02-21 | SMG-VME iteratively distributed operating system |
Country Status (1)
Country | Link |
---|---|
CN (5) | CN114138898A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115051789A (en) * | 2022-06-15 | 2022-09-13 | 道和邦(广州)电子信息科技有限公司 | SMG-wscomm-MagicNumberEncrypt efficient and high-compression data encryption technology based on magic number theorem |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116094769B (en) * | 2022-12-22 | 2024-03-01 | 燕山大学 | Port micro-grid control method for resisting false data injection attack |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102855284A (en) * | 2012-08-03 | 2013-01-02 | 北京联创信安科技有限公司 | Method and system for managing data of cluster storage system |
KR20130047042A (en) * | 2011-10-31 | 2013-05-08 | 삼성에스디에스 주식회사 | Data partitioning apparatus for distributed data storages and method thereof |
Family Cites Families (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20040187032A1 (en) * | 2001-08-07 | 2004-09-23 | Christoph Gels | Method, data carrier, computer system and computer progamme for the identification and defence of attacks in server of network service providers and operators |
CN102103637A (en) * | 2011-01-24 | 2011-06-22 | 上海银杏界信息科技有限公司 | Software-as-a-service (SaaS) software data model realization method |
CN102291390B (en) * | 2011-07-14 | 2014-06-04 | 南京邮电大学 | Method for defending against denial of service attack based on cloud computation platform |
CN102841759B (en) * | 2012-05-10 | 2016-04-20 | 天津兆民云计算科技有限公司 | A kind of storage system for super large scale dummy machine cluster |
US9171147B2 (en) * | 2012-07-31 | 2015-10-27 | Thomas C. Logan | Process and system for strengthening password security |
CN104468791B (en) * | 2014-12-09 | 2018-01-16 | 广州杰赛科技股份有限公司 | The construction method of private clound IaaS platforms |
CN104898573B (en) * | 2015-04-06 | 2016-08-17 | 华中科技大学 | A kind of digital control system data acquisition based on cloud computing and processing method |
US11316775B2 (en) * | 2016-12-21 | 2022-04-26 | Juniper Networks, Inc. | Maintaining coherency in distributed operating systems for network devices |
CN109327426A (en) * | 2018-01-11 | 2019-02-12 | 白令海 | A kind of firewall attack defense method |
CN109710609B (en) * | 2018-12-14 | 2023-08-08 | 中国平安人寿保险股份有限公司 | Method and device for generating data table identification |
CN111538561B (en) * | 2020-03-27 | 2023-10-31 | 上海仪电(集团)有限公司中央研究院 | OpenStack large-scale cluster deployment test method and system based on KVM virtualization technology |
CN111585751A (en) * | 2020-04-10 | 2020-08-25 | 四川大学 | Data sharing method based on block chain |
CN111723210A (en) * | 2020-06-29 | 2020-09-29 | 深圳壹账通智能科技有限公司 | Method and device for storing data table, computer equipment and readable storage medium |
CN112667362B (en) * | 2021-01-04 | 2022-06-21 | 烽火通信科技股份有限公司 | Method and system for deploying Kubernetes virtual machine cluster on Kubernetes |
-
2021
- 2021-11-20 CN CN202111375039.8A patent/CN114138898A/en active Pending
- 2021-11-21 CN CN202111381552.8A patent/CN114138410A/en active Pending
- 2021-11-22 CN CN202111381353.7A patent/CN114157460A/en active Pending
- 2021-11-23 CN CN202111392114.1A patent/CN114281889A/en active Pending
-
2022
- 2022-02-21 CN CN202210154742.4A patent/CN114510335A/en active Pending
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20130047042A (en) * | 2011-10-31 | 2013-05-08 | 삼성에스디에스 주식회사 | Data partitioning apparatus for distributed data storages and method thereof |
CN102855284A (en) * | 2012-08-03 | 2013-01-02 | 北京联创信安科技有限公司 | Method and system for managing data of cluster storage system |
Non-Patent Citations (1)
Title |
---|
XU Junhong: "Optimized Design and Implementation of a Load-Balancing Algorithm for a Distributed Mass-Data Storage System", China Master's Theses Full-text Database, Information Science & Technology, no. 2, 15 December 2013 (2013-12-15), pages 137 - 104 *
Also Published As
Publication number | Publication date |
---|---|
CN114510335A (en) | 2022-05-17 |
CN114138410A (en) | 2022-03-04 |
CN114157460A (en) | 2022-03-08 |
CN114281889A (en) | 2022-04-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111444020B (en) | Super-fusion computing system architecture and fusion service platform | |
CN109194506B (en) | Block chain network deployment method, platform and computer storage medium | |
Talia | Cloud Computing and Software Agents: Towards Cloud Intelligent Services. | |
CN114138898A (en) | SMG-VME-AFS iterable distributed storage system | |
US20080256078A1 (en) | Secure distributed computing engine and database system | |
CN112835977A (en) | Database management method and system based on block chain | |
Grossman et al. | The design of a community science cloud: The open science data cloud perspective | |
Chen et al. | Federation in cloud data management: Challenges and opportunities | |
Loughran et al. | Dynamic cloud deployment of a mapreduce architecture | |
Vaquero et al. | Deploying large-scale datasets on-demand in the cloud: treats and tricks on data distribution | |
George et al. | Hadoop MapReduce for mobile clouds | |
Li et al. | Wide-area spark streaming: Automated routing and batch sizing | |
Stagni et al. | The DIRAC interware: current, upcoming and planned capabilities and technologies | |
CN114996053A (en) | Remote volume replication transmission method, system, device and storage medium | |
JP2024500373A (en) | Key rotation in publishing-subscription systems | |
CN111898162B (en) | Parallel task execution method and device, storage medium and electronic equipment | |
Pingle et al. | Big data processing using apache hadoop in cloud system | |
Zinkin et al. | Dynamic Topology Transformation of Cloud-Network Computer Systems: Conceptual Level | |
Abid | HPC (high-performance the computing) for big data on cloud: Opportunities and challenges | |
Nurain et al. | An in-depth study of map reduce in cloud environment | |
CN112235356B (en) | Distributed PB-level CFD simulation data management system based on cluster | |
Suchodoletz et al. | Storage infrastructures to support advanced scientific workflows. Towards research data management aware storage infrastructures | |
Wang et al. | Towards a Service-based Adaptable Data Layer for Cloud Workflows | |
US11526534B2 (en) | Replicating data changes through distributed invalidation | |
WO2022057698A1 (en) | Efficient bulk loading multiple rows or partitions for single target table |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination |