CN111880895A - Data reading and writing method and device based on Kubernetes platform - Google Patents

Data reading and writing method and device based on Kubernetes platform Download PDF

Info

Publication number
CN111880895A
Authority
CN
China
Prior art keywords
manager node
standby
node
write request
cluster
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202010667718.1A
Other languages
Chinese (zh)
Inventor
高艳涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Inspur Intelligent Technology Co Ltd
Original Assignee
Suzhou Inspur Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Inspur Intelligent Technology Co Ltd filed Critical Suzhou Inspur Intelligent Technology Co Ltd
Priority to CN202010667718.1A priority Critical patent/CN111880895A/en
Publication of CN111880895A publication Critical patent/CN111880895A/en
Withdrawn legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Hardware Redundancy (AREA)

Abstract

The invention relates to the technical field of distributed systems, and in particular to a data reading and writing method and device based on a Kubernetes platform. The system comprises: an HDFS master service and an HDFS slave service deployed on the Kubernetes platform; and a manager node cluster and a worker node cluster, wherein the manager node cluster provides services for the HDFS master service and the worker node cluster provides services for the HDFS slave service. When the current active manager node fails, the HDFS master service determines one standby manager node from the standby manager nodes as the new active manager node and executes read or write operations using the new active manager node. The invention ensures that the HDFS cluster can always provide services externally.

Description

Data reading and writing method and device based on Kubernetes platform
Technical Field
The invention relates to the technical field of distributed systems, in particular to a data reading and writing method and device based on a Kubernetes platform.
Background
Docker is an application container engine based on Linux containers, which provides a lightweight virtualization technique. A developer can package an application program and its dependent runtime library files into a new container, so that the application can be packaged once and run anywhere. When there are only a few containers in a production environment, the management task is simple. But when a production environment has hundreds or thousands of containers, the management task becomes complex, and the Kubernetes platform can simplify the container management problem for large-scale deployments.
Further, Hadoop is a reliable, scalable, distributed system infrastructure for distributed computing. Compared with deploying a Hadoop cluster directly on physical machines, a containerized Hadoop cluster provides a series of enhanced functions such as efficient deployment, service discovery, and dynamic scaling, which improves the convenience of managing large-scale container clusters, reduces cost, and improves efficiency. The Hadoop Distributed File System (HDFS) is one of the Hadoop core components.
Currently, when an HDFS cluster is containerized on a Kubernetes platform, High Availability (HA) of the HDFS cluster is guaranteed by relying on reconstruction of the minimum deployable unit (Pod) of Kubernetes or on a multiple-copy policy. Such an HA scheme has a recovery phase: after a Pod fails and before a new Pod switches to the working state, snapshot or checkpoint recovery must be performed and the log must be replayed, otherwise data is lost. However, the HDFS cluster cannot provide services externally during this recovery phase.
Disclosure of Invention
In view of the above problems, the present invention is proposed to provide a data read/write method and apparatus based on the Kubernetes platform which overcomes, or at least partially solves, the above problems.
According to a first aspect of the present invention, there is provided a distributed system based on a Kubernetes platform, comprising:
HDFS master service and HDFS slave service deployed on the Kubernetes platform;
a manager node cluster and a worker node cluster;
wherein the manager node cluster provides service to the HDFS master service, and the worker node cluster provides service to the HDFS slave service;
the HDFS master service is used for determining one standby manager node from the standby manager nodes as a new active manager node under the condition that the current active manager node fails, and executing read operation or write operation by using the new active manager node.
Preferably, the system further comprises an ETCD cluster;
the ETCD cluster is used for providing data storage service for the HDFS master service and the HDFS slave service.
According to a second aspect of the present invention, there is provided a data read-write method based on a Kubernetes platform, applied to the distributed system described in the first aspect, the method comprising:
receiving a read request or a write request;
judging whether the current active manager node in the manager node cluster is invalid or not;
if the current active manager node fails, determining a standby manager node from all standby manager nodes as a new active manager node;
and executing the read operation corresponding to the read request or the write operation corresponding to the write request by utilizing the new active manager node.
Preferably, after receiving the read request, the determining one standby manager node from all the standby manager nodes as a new active manager node includes:
and randomly determining one standby manager node from all the standby manager nodes as a new active manager node.
Preferably, after receiving the write request, the determining a standby manager node from the plurality of standby manager nodes as a new active manager node includes:
and determining the standby manager node with the largest current synchronous write request number in all the standby manager nodes as a new active manager node.
Preferably, after the receiving of the write request, the method further comprises:
judging whether the standby manager node has a missing write request or not;
and if so, acquiring information corresponding to the missing write request from the active manager node, and synchronizing the information to the standby manager node.
Preferably, when the distributed system includes the ETCD cluster, after receiving the write request and before the determining whether a current active manager node in the manager node cluster fails, the method further includes:
storing the write request to the ETCD cluster through the current active manager node.
Preferably, the executing, by the new active manager node, a write operation corresponding to the write request includes:
judging whether an unprocessed write request exists in the ETCD cluster;
if an unprocessed write request exists in the ETCD cluster, judging whether the unprocessed write request number is consistent with the write request number in the write-ahead log of the new active manager node;
and if the write requests are inconsistent, reprocessing the unprocessed write requests by utilizing the new active manager node.
According to a third aspect of the present invention, there is provided a computer readable storage medium having a computer program stored thereon, wherein the program is adapted to perform the method steps of the second aspect when executed by a processor.
According to a fourth aspect of the present invention, there is provided a computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the method steps of the second aspect when executing the program.
The distributed system based on the Kubernetes platform comprises an HDFS master service and an HDFS slave service deployed on the Kubernetes platform, a manager node cluster, and a worker node cluster. The manager node cluster provides services for the HDFS master service, and the worker node cluster provides services for the HDFS slave service. The HDFS master service is used to judge whether the current active manager node in the manager node cluster has failed, to determine one standby manager node from all standby manager nodes as the new active manager node when the current active manager node fails, and to execute read or write operations using the new active manager node. In the present invention, the manager node cluster and the worker node cluster are deployed on the Kubernetes platform, high availability is no longer ensured by a Pod control strategy, and a new active manager node is determined from the standby manager nodes when the current active manager node fails, so that the HDFS cluster can be guaranteed to always provide services externally.
The foregoing description is only an overview of the technical solutions of the present invention, and the embodiments of the present invention are described below in order to make the technical means of the present invention more clearly understood and to make the above and other objects, features, and advantages of the present invention more clearly understandable.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
fig. 1 shows an architecture diagram of a distributed system based on a Kubernetes platform in an embodiment of the present invention.
Fig. 2 shows a flowchart of a data read-write method based on a Kubernetes platform in the embodiment of the present invention.
Fig. 3 shows a schematic structural diagram of a computer device in an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
A first embodiment of the present invention provides a distributed system based on a Kubernetes platform. As shown in fig. 1, the system includes:
the system comprises an HDFS Master service (HDFS Master service) and an HDFS Slave service (HDFS Slave service) which are deployed on a Kubernetes platform, a manager node cluster (NameNode node cluster) and a worker node cluster (DataNode node cluster). The manager node cluster provides services for the HDFS master service, and the worker node cluster provides services for the HDFS slave service.
Further, the manager node cluster includes one active manager node (i.e., Active NameNode) and a plurality of standby manager nodes (i.e., Standby NameNodes). The HDFS master service is used to judge whether the current active manager node in the manager node cluster has failed, to determine one standby manager node from all standby manager nodes as the new active manager node when the current active manager node fails, and to execute read or write operations using the new active manager node.
Based on the same inventive concept, a second embodiment of the present invention provides a data read-write method based on a Kubernetes platform, which is applied to the distributed system based on the Kubernetes platform described in the first embodiment. As shown in fig. 2, the method includes:
step 201: a read request or a write request is received.
Step 202: and judging whether the current active manager node in the manager node cluster is invalid or not.
Step 203: and if the current active manager node fails, determining one standby manager node from all the standby manager nodes as a new active manager node.
Step 204: and executing the read operation corresponding to the read request or the write operation corresponding to the write request by utilizing the new active manager node.
For step 201, read requests and write requests are issued by a client, and the client sends the read request or the write request to the HDFS master service. The following describes, respectively, the read processing performed for a read request and the write processing performed for a write request:
for read requests:
firstly, the HDFS main service selects an administrator node to process the read request, and specifically, may select an active administrator node to process the read request, where the selected active administrator node is referred to as a current active administrator node.
Further, in step 202, it is judged whether the current active manager node has failed. If the current active manager node has failed, the node states of at least half of the standby manager nodes in the manager node cluster are consistent with the node state of the current active manager node before it failed, and therefore one standby manager node is determined from all standby manager nodes as the new active manager node.
Further, since a read request is an idempotent operation that does not destroy data consistency, the read request numbers currently synchronized between the standby manager nodes and the current active manager node are consistent. Thus, in step 203, one standby manager node can be randomly determined from all standby manager nodes as the new active manager node, and the new active manager node can ensure that the Kubernetes platform can still provide services externally when the current active manager node fails.
For example, if the current active manager node has processed a read request numbered 3, the standby manager nodes will synchronize this information, i.e., each standby manager node knows that the read request numbered 3 has been processed. Therefore, one standby manager node is randomly selected from all standby manager nodes as the new active manager node, and the new active manager node is used to process subsequent read requests.
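The read-path failover described above can be sketched in a few lines. This is an illustrative sketch, not code from the patent; since read requests are idempotent and every standby holds the same synchronized read state, any standby can be promoted at random:

```python
import random

def select_new_active_for_read(standby_nodes):
    """Pick a new active manager node for read traffic after a failure.

    Read requests are idempotent, so every standby manager node holds the
    same synchronized read state and any of them can safely take over.
    """
    return random.choice(standby_nodes)
```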
Further, in the embodiment of the present invention, the HDFS master service is used to judge whether the current active manager node in the manager node cluster has failed. When the current active manager node fails, the HDFS master service determines one standby manager node from all standby manager nodes as the new active manager node. After the new active manager node is determined, it returns the metadata information of the data storage location to the client according to the read request. Then, the client requests the data from the HDFS slave service according to the metadata information, and finally the HDFS slave service returns the result state information to the client.
For a write request:
first, the HDFS main service selects an active manager node to process the write request, and at this time, the selected active manager node is referred to as a current active manager node.
Further, in step 202, it is judged whether the current active manager node has failed. If the current active manager node has failed, the node states of at least half of the standby manager nodes in the manager node cluster are consistent with the node state of the current active manager node before it failed, and therefore one standby manager node is determined from all standby manager nodes as the new active manager node.
Further, since the currently synchronized write request number of a standby manager node may lag behind the write request number processed by the current active manager node, in step 203 the standby manager node with the largest currently synchronized write request number among all standby manager nodes is determined as the new active manager node, and the new active manager node can ensure that the Kubernetes platform can still provide services externally when the current active manager node fails.
For example, suppose the current active manager node has processed a write request numbered 4, and the standby manager nodes comprise a first standby manager node and a second standby manager node, where the write request currently synchronized by the first standby manager node is numbered 3 and the write request currently synchronized by the second standby manager node is numbered 4. Then the second standby manager node is determined as the new active manager node, and subsequent write operations are performed using the second standby manager node.
According to the invention, determining the standby manager node with the largest currently synchronized write request number among all standby manager nodes as the new active manager node enables rapid reconstruction of the data.
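The write-path selection described above can be sketched as follows. The node representation (a dict with a `synced_write_no` field) is an illustrative assumption, not taken from the patent:

```python
def select_new_active_for_write(standby_nodes):
    """Pick the standby manager node with the largest currently
    synchronized write request number.

    Promoting the most up-to-date standby minimizes the number of write
    requests that must be rebuilt (replayed) after failover.
    """
    return max(standby_nodes, key=lambda n: n["synced_write_no"])
```

For the example in the text, a standby synchronized up to write request 4 would be chosen over one synchronized only up to write request 3.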
In addition, the distributed system based on the Kubernetes platform further comprises an ETCD cluster, which is used to provide data storage services for the HDFS master service and the HDFS slave service. Thus, for a write request, after the write request is received and before it is judged whether the current active manager node in the manager node cluster has failed, the method further comprises:
the write request is stored to the ETCD cluster by the current active manager node.
The information written into the ETCD cluster comprises the write request and the write request number, which guarantees that the ETCD cluster contains the latest write request information. Further, in step 204, it is first judged whether there is an unprocessed write request in the ETCD cluster. If there is an unprocessed write request in the ETCD cluster, it is judged whether the unprocessed write request number is consistent with the write request number in the write-ahead log of the new active manager node. If they are inconsistent, the unprocessed write request is reprocessed by the new active manager node, so that the data of the new active manager node is rebuilt and consistency with the data in the ETCD cluster is guaranteed. If there is no unprocessed write request in the ETCD cluster, or the unprocessed write request number is consistent with the write request number in the write-ahead log of the new active manager node, the process ends and no action is taken.
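The replay check in step 204 can be sketched like this; the data shapes (a list of `(request_no, payload)` pairs held in ETCD, and the last request number recorded in the new node's write-ahead log) are illustrative assumptions:

```python
def reconcile_after_failover(etcd_pending, wal_last_no, apply_fn):
    """Reprocess write requests recorded in ETCD but absent from the new
    active manager node's write-ahead log.

    Requests whose number is already covered by the write-ahead log were
    applied before the failover and are skipped; the rest are replayed in
    order so the new node's data is rebuilt consistently with ETCD.
    """
    replayed = []
    for request_no, payload in sorted(etcd_pending):
        if request_no > wal_last_no:
            apply_fn(payload)
            replayed.append(request_no)
    return replayed
```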
Further, in the embodiment of the present invention, the HDFS master service is used to judge whether the current active manager node in the manager node cluster has failed. When the current active manager node fails, the HDFS master service determines one standby manager node from all standby manager nodes as the new active manager node. After the new active manager node is determined, it receives ACK (acknowledgement) information sent by the other manager nodes, and after receiving ACK information from more than half of them, it returns the metadata information of the data storage location to the client. Then, the client sends the data to the HDFS slave service according to the metadata information, and finally the HDFS slave service returns the result state information to the client and the HDFS master service.
In the embodiment of the invention, because the Pod resources and loads of different manager nodes are unbalanced, or because the network fluctuates, the write request information processed by a standby manager node may lag behind that of the active manager node. To overcome this problem, in an embodiment of the present invention, after receiving a write request, the method further includes:
judging whether a missing write request exists in the standby manager node;
and if so, acquiring information corresponding to the missing write request from the active manager node, and synchronizing the information to the standby manager node.
For example, if a standby manager node holds the write requests numbered 1 and 2, then after the standby manager node receives the write request numbered 4, it knows that a write request is missing. Therefore, the standby manager node acquires the missing write request numbered 3 from the active manager node and synchronizes it, so as to guarantee data consistency.
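The gap detection and catch-up described in this example can be sketched as follows; representing a standby's synchronized state as a dict keyed by write request number is an illustrative assumption, not a detail from the patent:

```python
def find_missing_requests(synced_nos, incoming_no):
    """Return the write request numbers missing below `incoming_no`.

    `synced_nos` is the collection of write request numbers the standby
    manager node has already synchronized; any gap below the number of
    the request just received must be fetched from the active node.
    """
    return sorted(set(range(1, incoming_no)) - set(synced_nos))


def sync_missing(standby_synced, incoming_no, fetch_from_active):
    """Fetch each missing write request from the active node and store it."""
    for no in find_missing_requests(standby_synced, incoming_no):
        standby_synced[no] = fetch_from_active(no)
```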
Based on the same inventive concept, the third embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, which when executed by a processor implements the method steps described in the foregoing second embodiment.
Based on the same inventive concept, a computer device is further provided in the fourth embodiment of the present invention. As shown in fig. 3, for convenience of description, only the parts related to the embodiment of the present invention are shown; for technical details not disclosed here, please refer to the method part of the embodiment of the present invention. The computer device may be any terminal device, including a mobile phone, a tablet computer, a PDA (Personal Digital Assistant), a POS (Point of Sale) terminal, a vehicle-mounted computer, and the like. The following takes a mobile phone as an example of the computer device:
fig. 3 is a block diagram illustrating a partial structure associated with a computer device provided by an embodiment of the present invention. Referring to fig. 3, the computer apparatus includes: a memory 31 and a processor 32. Those skilled in the art will appreciate that the computer device configuration illustrated in FIG. 3 does not constitute a limitation of computer devices, and may include more or fewer components than those illustrated, or some components may be combined, or a different arrangement of components.
The following describes the components of the computer device in detail with reference to fig. 3:
the memory 31 may be used to store software programs and modules, and the processor 32 executes various functional applications and data processing by operating the software programs and modules stored in the memory 31. The memory 31 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.), and the like. Further, the memory 31 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The processor 32 is a control center of the computer device, and performs various functions and processes data by operating or executing software programs and/or modules stored in the memory 31 and calling data stored in the memory 31. Alternatively, processor 32 may include one or more processing units; preferably, the processor 32 may integrate an application processor, which primarily handles operating systems, user interfaces, applications, etc., and a modem processor, which primarily handles wireless communications.
In the embodiment of the present invention, the processor 32 included in the computer device may have the functions corresponding to any of the method steps in the foregoing second embodiment.
The algorithms and displays presented herein are not inherently related to any particular computer, virtual machine, or other apparatus. Various general purpose systems may also be used with the teachings herein. The required structure for constructing such a system will be apparent from the description above. Moreover, the present invention is not directed to any particular programming language. It is appreciated that a variety of programming languages may be used to implement the teachings of the present invention as described herein, and any descriptions of specific languages are provided above to disclose the best mode of the invention.
In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. However, the disclosed method should not be interpreted as reflecting an intention that: that the invention as claimed requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules in the device in an embodiment may be adaptively changed and disposed in one or more devices different from the embodiment. The modules or units or components of the embodiments may be combined into one module or unit or component, and furthermore they may be divided into a plurality of sub-modules or sub-units or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some features included in other embodiments, rather than other features, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the following claims, any of the claimed embodiments may be used in any combination.
The various component embodiments of the invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that a microprocessor or Digital Signal Processor (DSP) may be used in practice to implement some or all of the functionality of some or all of the components in accordance with embodiments of the present invention. The present invention may also be embodied as apparatus or device programs (e.g., computer programs and computer program products) for performing a portion or all of the methods described herein. Such programs implementing the present invention may be stored on computer-readable media or may be in the form of one or more signals. Such a signal may be downloaded from an internet website or provided on a carrier signal or in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, etc. does not indicate any ordering; these words may be interpreted as names.

Claims (10)

1. A distributed system based on a Kubernetes platform, comprising:
HDFS master service and HDFS slave service deployed on the Kubernetes platform;
a manager node cluster and a worker node cluster;
wherein the manager node cluster provides service to the HDFS master service, and the worker node cluster provides service to the HDFS slave service;
the HDFS master service is used for determining one standby manager node from the standby manager nodes as a new active manager node under the condition that the current active manager node fails, and executing read operation or write operation by using the new active manager node.
2. The system of claim 1, further comprising an ETCD cluster;
the ETCD cluster is used for providing data storage service for the HDFS master service and the HDFS slave service.
3. A data read-write method based on a Kubernetes platform, which is applied to the distributed system based on the Kubernetes platform as claimed in any one of claims 1-2, and the method comprises:
receiving a read request or a write request;
judging whether the current active manager node in the manager node cluster is invalid or not;
if the current active manager node fails, determining a standby manager node from all standby manager nodes as a new active manager node;
and executing the read operation corresponding to the read request or the write operation corresponding to the write request by utilizing the new active manager node.
4. The method of claim 3, wherein, after the read request is received, said determining one standby manager node from all standby manager nodes as a new active manager node comprises:
randomly selecting one standby manager node from all the standby manager nodes as the new active manager node.
5. The method of claim 3, wherein, after the write request is received, said determining one standby manager node from all standby manager nodes as a new active manager node comprises:
determining, among all the standby manager nodes, the standby manager node with the largest number of currently synchronized write requests as the new active manager node.
6. The method of claim 3, wherein, after the receiving a write request, the method further comprises:
determining whether a standby manager node is missing any write request; and
if so, obtaining information corresponding to the missing write request from the active manager node and synchronizing the information to the standby manager node.
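Claim 6 describes a catch-up step: after a write request arrives, any standby missing earlier write requests pulls them from the active node. A minimal sketch, under the assumption (not specified by the claim) that each node keeps its write log as a dict keyed by request id:

```python
def sync_missing_writes(active_log, standby_log):
    """Claim 6: find write requests present on the active manager node
    but missing on the standby, and copy them to the standby's log."""
    missing_ids = set(active_log) - set(standby_log)
    for request_id in missing_ids:
        standby_log[request_id] = active_log[request_id]
    return sorted(missing_ids)
```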
7. The method of claim 3, wherein, when the distributed system comprises the ETCD cluster, after the write request is received and before the determining whether the current active manager node in the manager node cluster has failed, the method further comprises:
storing the write request in the ETCD cluster through the current active manager node.
8. The method of claim 7, wherein executing, by the new active manager node, the write operation corresponding to the write request comprises:
determining whether an unprocessed write request exists in the ETCD cluster;
if an unprocessed write request exists in the ETCD cluster, determining whether the number of unprocessed write requests is consistent with the number of write requests in the write-ahead log of the new active manager node; and
if the numbers are inconsistent, reprocessing the unprocessed write requests by the new active manager node.
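Claim 8's recovery step compares the backlog of unprocessed write requests held in the ETCD cluster against the new active node's write-ahead log and replays the difference. A sketch under the assumption that both sides expose plain lists of request ids; `replay` is an illustrative callback, not a patent term:

```python
def recover_writes(etcd_pending, wal_ids, replay):
    """Claim 8: if ETCD holds unprocessed write requests and their count
    differs from the count recorded in the new active manager node's
    write-ahead log, reprocess the pending requests on the new node."""
    if not etcd_pending:
        return 0                      # nothing to recover
    if len(etcd_pending) == len(wal_ids):
        return 0                      # counts agree: nothing was lost
    for request in etcd_pending:      # counts differ: replay the backlog
        replay(request)
    return len(etcd_pending)
```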
9. A computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the method steps of claim 3.
10. A computer device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the method steps of claim 3 when executing the program.
CN202010667718.1A 2020-07-13 2020-07-13 Data reading and writing method and device based on Kubernetes platform Withdrawn CN111880895A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010667718.1A CN111880895A (en) 2020-07-13 2020-07-13 Data reading and writing method and device based on Kubernetes platform

Publications (1)

Publication Number Publication Date
CN111880895A true CN111880895A (en) 2020-11-03

Family

ID=73150622

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010667718.1A Withdrawn CN111880895A (en) 2020-07-13 2020-07-13 Data reading and writing method and device based on Kubernetes platform

Country Status (1)

Country Link
CN (1) CN111880895A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113204353A (en) * 2021-04-27 2021-08-03 新华三大数据技术有限公司 Big data platform assembly deployment method and device
CN113204353B (en) * 2021-04-27 2022-08-30 新华三大数据技术有限公司 Big data platform assembly deployment method and device

Similar Documents

Publication Publication Date Title
Ma et al. Efficient live migration of edge services leveraging container layered storage
CN108234641B (en) Data reading and writing method and device based on distributed consistency protocol
RU2714098C1 (en) Data processing method and device
CN111930473B (en) Method and apparatus for deploying image recognition service on container cloud
EP3933608A1 (en) Method and apparatus for updating database by using two-phase commit distributed transaction
CN107391033B (en) Data migration method and device, computing equipment and computer storage medium
US10860375B1 (en) Singleton coordination in an actor-based system
CN113204353B (en) Big data platform assembly deployment method and device
CN111880956A (en) Data synchronization method and device
CN111240892A (en) Data backup method and device
CN111880895A (en) Data reading and writing method and device based on Kubernetes platform
CN107943615B (en) Data processing method and system based on distributed cluster
US11809275B2 (en) FaaS in-memory checkpoint restore
CN116418826A (en) Object storage system capacity expansion method, device and system and computer equipment
CN111382132A (en) Medical image data cloud storage system
CN113157392B (en) High-availability method and equipment for mirror image warehouse
CN111208949B (en) Method for determining data rollback time period in distributed storage system
CN113656496A (en) Data processing method and system
GB2542585A (en) Task scheduler and task scheduling process
CN113377489A (en) Construction and operation method and device of remote sensing intelligent monitoring application based on cloud platform
CN116244040B (en) Main and standby container cluster system, data synchronization method thereof and electronic equipment
CN114172917B (en) Distributed cache system and deployment method thereof
CN112929459B (en) Edge system and data operation request processing method
CN112596741B (en) Video monitoring service deployment method and device
CN117234607B (en) Multi-core system, dynamic module loading method, medium and processor chip thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20201103