CN109120674B - Deployment method and device of big data platform - Google Patents
- Publication number
- CN109120674B (application CN201810803535.0A)
- Authority
- CN
- China
- Prior art keywords
- server
- data platform
- model
- component
- component model
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/10—Protocols in which an application is distributed across nodes in the network
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/50—Network service management, e.g. ensuring proper service fulfilment according to agreements
- H04L41/5041—Network service management, e.g. ensuring proper service fulfilment according to agreements characterised by the time relationship between creation and deployment of a service
- H04L41/5051—Service on demand, e.g. definition and deployment of services in real time
Landscapes
- Engineering & Computer Science (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Software Systems (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Stored Programmes (AREA)
- Information Transfer Between Computers (AREA)
Abstract
The disclosure relates to a deployment method and a device of a big data platform, wherein the method comprises the following steps: collecting resources of each server in a server cluster in which the big data platform is to be deployed; determining a component model corresponding to the big data platform; partitioning the collected resources according to the determined component model; and issuing the determined component model to each server, and instructing each server to install each component in the component model, so as to deploy the big data platform. In this way, the big data platform can be deployed automatically, without operation and maintenance personnel deploying it manually. This releases operation and maintenance manpower, saves cost, and avoids unreasonable deployment of the big data platform caused by errors of operation and maintenance personnel.
Description
Technical Field
The disclosure relates to the technical field of big data, in particular to a deployment method and device of a big data platform.
Background
Because a big data platform is deployed in a server cluster, which may involve large-capacity storage and high-performance data calculation and analysis, deploying the platform requires detailed planning of the customer's actual demands and the available hardware resources, to ensure that the platform does not behave abnormally in use because of unreasonable deployment.
In the related art, a big data platform is deployed manually by operation and maintenance personnel. However, because customers' hardware environments differ and the abilities of operation and maintenance personnel are uneven, the deployment of the big data platform may be unreasonable — for example, insufficient storage resources or computing resources — which makes the use of the big data platform abnormal.
Disclosure of Invention
In view of this, the present disclosure provides a deployment method and an apparatus for a big data platform.
According to one aspect of the disclosure, a deployment method of a big data platform is provided, which is applied to a cloud server, and the method includes:
collecting resources of each server in a server cluster in which the big data platform is to be deployed;
determining a component model corresponding to the big data platform;
partitioning the collected resources according to the determined component model;
and issuing the determined component model to each server, and instructing the server to install each component in the component model so as to deploy the big data platform.
According to another aspect of the present disclosure, a deployment apparatus for a big data platform is provided, which is applied to a cloud server, the apparatus includes:
the collection module is used for collecting resources of each server in the server cluster in which the big data platform is to be deployed;
the determining module is used for determining the component model corresponding to the big data platform;
a partitioning module for partitioning the collected resources according to the determined component model;
and the processing module is used for issuing the determined component model to each server and instructing the server to install each component in the component model so as to deploy the big data platform.
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects: the cloud server collects resources of the server and determines a component model corresponding to the big data platform; partitioning the collected resources according to the determined component model; and issuing the determined component model to each server, and instructing the server to install each component in the component model so as to deploy the big data platform. Therefore, the big data platform can be automatically deployed without manually deploying the big data platform by operation and maintenance personnel, so that operation and maintenance manpower can be released, the cost is saved, and unreasonable deployment of the big data platform caused by errors of the operation and maintenance personnel is avoided.
In addition, the cloud server records the connection mode of each server and the cloud server by maintaining the basic configuration library, so that operation and maintenance personnel do not need to master the connection mode of each server and the cloud server, and therefore the cloud server can shield the difference of the servers and release operation and maintenance manpower.
Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate exemplary embodiments, features, and aspects of the disclosure and, together with the description, serve to explain the principles of the disclosure.
FIG. 1 is a schematic diagram illustrating the networking of a big data platform according to an exemplary embodiment.
FIG. 2 is a flowchart illustrating a deployment method of a big data platform according to an exemplary embodiment.
FIG. 3 is a block diagram illustrating a deployment apparatus of a big data platform according to an exemplary embodiment.
FIG. 4 is a block diagram illustrating a hardware structure of a deployment apparatus of a big data platform according to an exemplary embodiment.
Detailed Description
Various exemplary embodiments, features and aspects of the present disclosure will be described in detail below with reference to the accompanying drawings. In the drawings, like reference numbers can indicate functionally identical or similar elements. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used herein to mean "serving as an example, embodiment, or illustration." Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
Furthermore, in the following detailed description, numerous specific details are set forth in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements and circuits that are well known to those skilled in the art have not been described in detail so as not to obscure the present disclosure.
FIG. 1 is a schematic diagram illustrating the networking of a big data platform according to an exemplary embodiment. As shown in fig. 1, the networking includes a cloud server and a server cluster; the cloud server maintains a component model library and a basic configuration library, and the server cluster includes a plurality of servers 1 to N. It is to be understood that the present disclosure is not limited by the type of the servers, which may include, but is not limited to, X86 servers (i.e., servers based on the X86 architecture) and non-X86 servers.
The component model library is used for recording each component model of the big data platform. In one implementation, when an operation and maintenance person creates a component model that meets a client's business requirements, if the cloud server does not find the component model in the component model library, the cloud server adds the component model to the component model library for subsequent deployment of a big data platform. Conversely, if the cloud server finds the component model in the component model library, the component model is already recorded there, and the cloud server does not need to add it again.
Because the business requirements of customers are diverse, the component models of the big data platform that meet those requirements are also diverse. Exemplary component models of big data platforms include, but are not limited to, the component model Spark + MR + HDFS for providing data computation services. Spark is a platform for fast and general cluster computing, providing batch processing, stream processing, SQL query, machine learning, graph computation, the R language, and other functions. MR (MapReduce) is a programming model that provides parallel operation on large-scale datasets. HDFS (Hadoop Distributed File System) is a distributed file system that runs on general-purpose hardware and provides high-throughput access to application data.
The basic configuration library is used for recording the connection mode between each model of server and the cloud server. In one implementation, when an operation and maintenance worker manually connects a server of a certain model to the cloud server, if the cloud server does not find that model in the basic configuration library, the cloud server adds the connection mode between that model of server and the cloud server to the basic configuration library for subsequent deployment of a big data platform. Conversely, if the cloud server finds the model in the basic configuration library, the connection mode between that model of server and the cloud server is already recorded there, and the cloud server does not need to add it again.
The connection mode between a server and the cloud server includes, but is not limited to, BMC (Baseboard Management Controller), BIOS (Basic Input Output System), and RAID (Redundant Array of Independent Disks) connections. Specifically, if the BMC interface of a server is connected to the cloud server, the connection mode between that server and the cloud server is BMC connection; if the BIOS interface is connected to the cloud server, the connection mode is BIOS connection; and if the RAID interface is connected to the cloud server, the connection mode is RAID connection.
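The interface-to-mode rule just described can be sketched as follows. This is a minimal illustration, not part of the patent: the `ConnectionMode` enum, the function name, and the set-of-wired-interfaces input are all assumed for the example.

```python
from enum import Enum

class ConnectionMode(Enum):
    BMC = "bmc"    # BMC interface wired to the cloud server
    BIOS = "bios"  # BIOS interface wired to the cloud server
    RAID = "raid"  # RAID interface wired to the cloud server

def classify_connection(connected_interfaces):
    """Return the connection mode implied by which management interface is wired.

    `connected_interfaces` is a set of interface names found connected
    to the cloud server, e.g. {"bmc"} or {"raid", "data"}.
    """
    # Per the description: a wired BMC interface means BMC connection,
    # a wired BIOS interface means BIOS connection, and so on.
    for mode in (ConnectionMode.BMC, ConnectionMode.BIOS, ConnectionMode.RAID):
        if mode.value in connected_interfaces:
            return mode
    raise ValueError("no known management interface connected")
```

For example, `classify_connection({"bmc"})` yields `ConnectionMode.BMC`.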
Fig. 2 is a flowchart illustrating a deployment method of a big data platform according to an exemplary embodiment, which may be applied to the cloud server in fig. 1. Before the deployment method of this embodiment is executed, each server needs to be connected to the cloud server; for example, the BMC port and any data port of each server may be connected to the cloud server. As shown in fig. 2, the deployment method may include the following steps.
In step S220, resources of each server in the server cluster to be deployed with the big data platform are collected.
In this embodiment, because each server is already connected to the cloud server, the cloud server can communicate with each server, and accordingly the cloud server can collect each server's resources through related techniques.
The resources of each server include, but are not limited to, the computing resources and storage resources of each server. The computing resources are, for example, the processing capacity of the CPU, and may be quantified by the number of CPUs, the number of CPU cores, the frequency of the CPUs, and the like. The storage resource is, for example, a storage space, and may be quantified by the size of the memory, the size of the hard disk, the number of the hard disks, and the like.
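The quantities listed above can be organized into a simple per-server data model. The field names below are assumptions chosen to match the text; the patent itself does not prescribe a representation.

```python
from dataclasses import dataclass

@dataclass
class ComputeResources:
    """Computing resources, quantified as the description suggests."""
    cpu_count: int        # number of CPUs
    cores_per_cpu: int    # number of CPU cores
    cpu_freq_ghz: float   # CPU frequency

@dataclass
class StorageResources:
    """Storage resources, quantified as the description suggests."""
    memory_gb: int        # size of the memory
    disk_count: int       # number of hard disks
    disk_size_gb: int     # size of each hard disk

    @property
    def total_disk_gb(self) -> int:
        # Aggregate disk capacity across all hard disks.
        return self.disk_count * self.disk_size_gb
```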
In one implementation, the cloud server may collect resources of each server by:
acquiring a connection mode of each server and a cloud server;
reading Basic Input Output System (BIOS) information and Redundant Array of Independent Disks (RAID) information of each server according to the connection mode of each server and the cloud server;
hard disk information and memory information in the BIOS information of each server and RAID information of each server are used as storage resources of each server, and CPU information in the BIOS information of each server is used as computing resources of each server.
In this embodiment, each server may collect its own BIOS information and RAID information. After each server is connected to the cloud server through any interface, each server can send the collected BIOS information and RAID information to the cloud server through any interface connected with the cloud server, and accordingly the cloud server can read the BIOS information and RAID information of each server through the interface of each server.
For example, after each server is connected to the cloud server through the BMC interface, the cloud server may allocate an IP address to the BMC interface of each server connected to the cloud server through a Dynamic Host Configuration Protocol (DHCP), and thus, the cloud server may communicate with each server using each allocated IP address to read the BIOS information and RAID information collected by each server through the BMC interface of each server.
Alternatively, each server may collect its own BIOS information and RAID information. After each server is connected to the cloud server through the BIOS interface and the RAID interface, each server can send the collected BIOS information to the cloud server through the BIOS interface, and each server can send the collected RAID information to the cloud server through the RAID interface. Correspondingly, the cloud server can read the BIOS information of each server through the BIOS interface of each server, and the cloud server can read the RAID information of each server through the RAID interface of each server.
The BIOS information includes, but is not limited to, hard disk information, memory information, CPU information, and the like. The hard disk information includes, for example, the size of the hard disk and the number of the hard disks; the memory information includes, for example, the size of the memory; the CPU information includes, for example, the number of CPUs and the frequency of the CPUs. RAID information is used, for example, to describe which disks each RAID group includes. Therefore, the cloud server can use the size of the hard disks, the number of the hard disks, the size of the memory and the read RAID information in the read BIOS information as storage resources of each server, and can use the number of the CPUs and the frequency of the CPUs in the read BIOS information as computing resources of each server, thereby collecting the storage resources and the computing resources of each server.
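The split described in this paragraph — hard-disk and memory fields of the BIOS information plus the RAID information become storage resources, while the CPU fields become computing resources — can be sketched as below. The dictionary key names are hypothetical; the patent names the categories of information but not a concrete encoding.

```python
def split_resources(bios_info, raid_info):
    """Divide collected BIOS/RAID information into storage and computing resources.

    `bios_info` is a dict of fields read from the server's BIOS;
    `raid_info` describes which disks each RAID group includes.
    """
    # Storage resources: hard disk size/count and memory size from the
    # BIOS information, plus the RAID group layout.
    storage = {
        "disk_size_gb": bios_info["disk_size_gb"],
        "disk_count": bios_info["disk_count"],
        "memory_gb": bios_info["memory_gb"],
        "raid_groups": raid_info,
    }
    # Computing resources: CPU count and frequency from the BIOS information.
    compute = {
        "cpu_count": bios_info["cpu_count"],
        "cpu_freq_ghz": bios_info["cpu_freq_ghz"],
    }
    return storage, compute
```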
In one implementation, the cloud server may obtain the connection mode between each server and the cloud server in the following manner:
obtaining the model of each server;
searching a basic configuration library maintained by the cloud server according to the model of each server;
and determining the connection mode corresponding to the model of each server recorded in the basic configuration library as the connection mode of each server and the cloud server.
In this embodiment, the cloud server may obtain the model of each server through related techniques. For example, each server may periodically send its own model to the cloud server through any of its interfaces connected to the cloud server (for example, a BMC interface, a BIOS interface, or a RAID interface), and the cloud server receives the model through that interface. Alternatively, the cloud server sends each server a command instructing it to report its model; in response to receiving the command, each server sends its model to the cloud server through its connected interface, and the cloud server receives the model accordingly.
The basic configuration library records the connection mode of each type of server and the cloud server, so that each type of server has a corresponding connection mode, the cloud server can search the basic configuration library according to the obtained type of the server, and the connection mode corresponding to the obtained type of the server can be determined as the connection mode of the server and the cloud server.
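The lookup just described — server model in, connection mode out — can be sketched as a dictionary-backed library. The example model strings are invented for illustration; only the model-to-connection-mode mapping itself comes from the patent.

```python
# Hypothetical contents of the basic configuration library: the cloud
# server records, per server model, how that model connects to it.
BASE_CONFIG_LIBRARY = {
    "vendorA-r730": "bmc",
    "vendorB-x3650": "bios",
}

def lookup_connection_mode(model, library=BASE_CONFIG_LIBRARY):
    """Return the connection mode recorded for this server model."""
    mode = library.get(model)
    if mode is None:
        # Not recorded yet: in the scheme above, an operation and
        # maintenance worker connects the server manually, after which
        # the mapping is added to the library.
        raise KeyError(f"model {model!r} not in basic configuration library")
    return mode
```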
In step S240, a component model corresponding to the big data platform is determined.
In this embodiment, the requirement of the big data platform corresponds to the business requirement of the customer, for example, providing data computing service or providing data storage service. The cloud server can determine the component model meeting the requirement as the component model corresponding to the big data platform.
In one implementation, step S240 may include: and searching a component model meeting preset conditions in a component model library maintained by the cloud server, and determining the searched component model as a component model corresponding to the big data platform. The preset condition is, for example, the service requirement.
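The component-model search of step S240 can be sketched as a scan of the library for an entry matching the preset condition (here, the required service). The library entries and the `service` key are illustrative assumptions.

```python
# Hypothetical component model library: each entry records a model's
# name, the service it provides, and its constituent components.
COMPONENT_MODEL_LIBRARY = [
    {"name": "Spark+MR+HDFS", "service": "data-computation",
     "components": ["Spark", "MR", "HDFS"]},
    {"name": "HDFS-only", "service": "data-storage",  # invented entry
     "components": ["HDFS"]},
]

def find_component_model(required_service, library=COMPONENT_MODEL_LIBRARY):
    """Return the first component model satisfying the preset condition."""
    for model in library:
        if model["service"] == required_service:
            return model
    return None  # no model in the library meets the requirement
```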
In step S260, the collected resources are divided according to the determined component model.
In this embodiment, because each component in the determined component model requires a corresponding storage resource and a corresponding computing resource, the cloud server may divide the resources of each server according to the storage resource and the computing resource required by each component.
In one implementation, step S260 may include: dividing the storage resources of each server according to the storage resources required by each component in the determined component model so as to distribute the divided storage resources to each component; and dividing the computing resources of each server according to the computing resources required by each component in the determined component model so as to distribute the divided computing resources to each component.
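One way to realize the division in step S260 is a proportional split: each component receives a share of a server's resource in proportion to what it declares it needs. The patent only states that resources are divided according to per-component requirements, so the proportional policy below is an assumption.

```python
def partition(total, requirements):
    """Allocate `total` units of a resource among components.

    `requirements` maps component name -> units required; each component
    receives a share proportional to its requirement.
    """
    needed = sum(requirements.values())
    if needed == 0:
        # Nothing is required; allocate zero to every component.
        return {name: 0 for name in requirements}
    return {name: total * req / needed for name, req in requirements.items()}
```

For instance, the same helper can be called once for storage resources and once for computing resources, matching the two sub-steps above.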
In step S280, the determined component model is issued to each server, and the server is instructed to install each component in the component model to deploy the big data platform.
In this embodiment, before installing components on each server, the cloud server may install an operating system on each server in a PXE (Preboot eXecution Environment) manner.
In one implementation, in a server cluster formed by the servers, one server is a master server and the other servers are standby servers, the master server may automatically deploy a big data platform first, and then the master server may synchronize the deployed big data platform to the standby servers.
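Steps S220 through S280 can be strung together into one orchestration sketch. Everything here is a stand-in for the mechanisms described above: an abstract integer capacity replaces the real storage and computing figures, resources are split evenly rather than by per-component need, and the "issue" step is stubbed as a record of which model each server receives.

```python
def deploy_big_data_platform(cluster, component_library, required_service):
    # S220: collect resources from every server in the cluster.
    resources = {srv["name"]: srv["capacity"] for srv in cluster}
    # S240: determine the component model matching the business need
    # (raises StopIteration if the library has no matching model).
    model = next(m for m in component_library
                 if m["service"] == required_service)
    # S260: divide each server's resources among the model's components
    # (an even split here; the patent divides by per-component need).
    plan = {
        name: {comp: cap // len(model["components"])
               for comp in model["components"]}
        for name, cap in resources.items()
    }
    # S280: issue the model to each server for installation (stubbed).
    issued = {name: model["name"] for name in resources}
    return model, plan, issued
```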
Therefore, in this embodiment, the cloud server collects resources of the server and determines a component model corresponding to the big data platform; partitioning the collected resources according to the determined component model; and issuing the determined component model to each server, and instructing the server to install each component in the component model so as to deploy the big data platform. Therefore, the big data platform can be automatically deployed without manually deploying the big data platform by operation and maintenance personnel, so that operation and maintenance manpower can be released, the cost is saved, and unreasonable deployment of the big data platform caused by errors of the operation and maintenance personnel is avoided.
In addition, the cloud server records the connection mode of each server and the cloud server by maintaining the basic configuration library, so that operation and maintenance personnel do not need to master the connection mode of each server and the cloud server, and therefore the cloud server can shield the difference of the servers and release operation and maintenance manpower.
FIG. 3 is a block diagram illustrating a deployment apparatus of a big data platform according to an exemplary embodiment. The deployment apparatus of the big data platform can be applied to a cloud server. As shown in fig. 3, the deployment apparatus 300 of the big data platform may include a collection module 310, a determination module 320, a partitioning module 330, and a processing module 340.
The collection module 310 is used for collecting resources of each server in the server cluster in which the big data platform is to be deployed.
The determining module 320 is configured to determine a component model corresponding to the big data platform.
The partitioning module 330 is connected to the collecting module 310 and the determining module 320, and is configured to partition the collected resources according to the determined component model.
The processing module 340 is connected to the determining module 320 and the dividing module 330, and configured to issue the determined component model to each server, and instruct the server to install each component in the component model to deploy the big data platform.
In one implementation, the collection module 310 is configured to:
acquiring the connection mode of each server and the cloud server;
reading Basic Input Output System (BIOS) information and Redundant Array of Independent Disks (RAID) information of each server according to the connection mode of each server and the cloud server;
hard disk information and memory information in the BIOS information of each server and RAID information of each server are used as storage resources of each server, and CPU information in the BIOS information of each server is used as computing resources of each server.
In one implementation, the obtaining a connection mode between each server and the cloud server includes:
obtaining the model of each server;
searching a basic configuration library maintained by the cloud server according to the model of each server, wherein the basic configuration library is used for recording the connection mode between each type of server and the cloud server;
and determining the connection mode corresponding to the model of each server recorded in the basic configuration library as the connection mode of each server and the cloud server.
In one implementation, the partitioning module 330 is configured to:
dividing the storage resources of each server according to the storage resources required by each component in the determined component model so as to distribute the divided storage resources to each component;
and dividing the computing resources of each server according to the computing resources required by each component in the determined component model so as to distribute the divided computing resources to each component.
In one implementation, the determining module 320 is configured to:
and searching a component model meeting preset conditions in a component model library maintained by the cloud server, and determining the searched component model as a component model corresponding to the big data platform, wherein the component model library is used for recording each component model of the big data platform.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
Fig. 4 is a block diagram illustrating a hardware structure of a deployment apparatus of a big data platform according to an exemplary embodiment. Referring to fig. 4, the apparatus 900 may include a processor 901 and a machine-readable storage medium 902 storing machine-executable instructions. The processor 901 and the machine-readable storage medium 902 may communicate via a system bus 903. The processor 901 performs the deployment method of the big data platform described above by reading, from the machine-readable storage medium 902, the machine-executable instructions corresponding to the deployment logic of the big data platform.
The machine-readable storage medium 902 referred to herein may be any electronic, magnetic, optical, or other physical storage device that can contain or store information such as executable instructions and data. For example, the machine-readable storage medium may be RAM (Random Access Memory), volatile memory, non-volatile memory, flash memory, a storage drive (e.g., a hard drive), a solid state drive, any type of storage disk (e.g., an optical disk or DVD), a similar storage medium, or a combination thereof.
Having described embodiments of the present disclosure, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application, or technical improvements over technologies in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.
Claims (10)
1. A deployment method of a big data platform is applied to a cloud server, and is characterized in that the cloud server maintains a component model library and a basic configuration library, the component model library is used for recording each component model of the big data platform, the basic configuration library is used for recording the connection mode of each type of server and the cloud server, and the method comprises the following steps:
collecting resources of each server in a server cluster in which the big data platform is to be deployed;
determining a component model corresponding to the big data platform;
partitioning the collected resources according to the determined component model;
and issuing the determined component model to each server, and instructing the server to install each component in the component model so as to deploy the big data platform.
2. The method of claim 1, wherein the collecting resources of each server in the server cluster to be deployed with the big data platform comprises:
acquiring the connection mode of each server and the cloud server;
reading Basic Input Output System (BIOS) information and Redundant Array of Independent Disks (RAID) information of each server according to the connection mode of each server and the cloud server;
hard disk information and memory information in the BIOS information of each server and RAID information of each server are used as storage resources of each server, and CPU information in the BIOS information of each server is used as computing resources of each server.
3. The method of claim 2, wherein obtaining the connection between each server and the cloud server comprises:
obtaining the model of each server;
searching a basic configuration library maintained by the cloud server according to the model of each server;
and determining the connection mode corresponding to the model of each server recorded in the basic configuration library as the connection mode of each server and the cloud server.
4. The method of claim 2, wherein partitioning the collected resources according to the determined component model comprises:
dividing the storage resources of each server according to the storage resources required by each component in the determined component model so as to distribute the divided storage resources to each component;
and dividing the computing resources of each server according to the computing resources required by each component in the determined component model so as to distribute the divided computing resources to each component.
5. The method according to any one of claims 1 to 4, wherein the determining the component model corresponding to the big data platform comprises:
and searching a component model meeting preset conditions in a component model library maintained by the cloud server, and determining the searched component model as the component model corresponding to the big data platform.
6. A deployment apparatus of a big data platform, applied to a cloud server, wherein the cloud server maintains a component model library and a basic configuration library, the component model library is used for recording each component model of the big data platform, the basic configuration library is used for recording the connection mode between each model of server and the cloud server, and the apparatus comprises:
the collection module is used for collecting resources of each server in the server cluster in which the big data platform is to be deployed;
the determining module is used for determining the component model corresponding to the big data platform;
a partitioning module for partitioning the collected resources according to the determined component model;
and the processing module is used for issuing the determined component model to each server and instructing the server to install each component in the component model so as to deploy the big data platform.
7. The apparatus of claim 6, wherein the collection module is configured to:
acquiring the connection mode of each server and the cloud server;
reading Basic Input Output System (BIOS) information and Redundant Array of Independent Disks (RAID) information of each server according to the connection mode of each server and the cloud server;
and taking the hard disk information and memory information in the BIOS information of each server, together with the RAID information of each server, as the storage resources of each server, and taking the CPU information in the BIOS information of each server as the computing resources of each server.
8. The apparatus of claim 7, wherein obtaining the connection mode of each server and the cloud server comprises:
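The resource collection of claim 7 classifies hardware inventory into storage and computing resources. A sketch under assumed field names (`hard_disks`, `memory_gb`, `cpu_sockets`, `cores_per_cpu`, `virtual_disks` are all illustrative; the patent does not define a data layout):

```python
def collect_resources(bios_info, raid_info):
    """Treat hard disk and memory entries from BIOS info plus RAID info
    as storage resources, and CPU entries from BIOS info as computing
    resources, per the classification in claim 7."""
    storage = {
        "disks": bios_info["hard_disks"] + raid_info["virtual_disks"],
        "memory_gb": bios_info["memory_gb"],
    }
    compute = {
        "cpus": bios_info["cpu_sockets"],
        "cores": bios_info["cores_per_cpu"],
    }
    return storage, compute
```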
obtaining the model of each server;
searching a basic configuration library maintained by the cloud server according to the model of each server;
and determining the connection mode corresponding to the model of each server recorded in the basic configuration library as the connection mode of each server and the cloud server.
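The lookup in claim 8 is a mapping from server model to the connection mode recorded in the basic configuration library. A minimal sketch, representing the library as a plain dictionary (a hypothetical representation; the patent does not prescribe one):

```python
def get_connection_modes(server_models, basic_config_library):
    """For each server model, look up the connection mode recorded in
    the basic configuration library; models without an entry are skipped."""
    return {
        model: basic_config_library[model]
        for model in server_models
        if model in basic_config_library
    }
```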
9. The apparatus of claim 7, wherein the partitioning module is configured to:
dividing the storage resources of each server according to the storage resources required by each component in the determined component model so as to distribute the divided storage resources to each component;
and dividing the computing resources of each server according to the computing resources required by each component in the determined component model so as to distribute the divided computing resources to each component.
10. The apparatus of any of claims 6-9, wherein the determining module is configured to:
and searching a component model meeting preset conditions in a component model library maintained by the cloud server, and determining the searched component model as the component model corresponding to the big data platform.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810803535.0A CN109120674B (en) | 2018-07-20 | 2018-07-20 | Deployment method and device of big data platform |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109120674A CN109120674A (en) | 2019-01-01 |
CN109120674B true CN109120674B (en) | 2021-07-02 |
Family
ID=64862370
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810803535.0A Active CN109120674B (en) | 2018-07-20 | 2018-07-20 | Deployment method and device of big data platform |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109120674B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111580958A (en) * | 2020-04-20 | 2020-08-25 | 佛山科学技术学院 | Deployment method and device of big data platform |
CN111897660B (en) * | 2020-09-29 | 2021-01-15 | 深圳云天励飞技术股份有限公司 | Model deployment method, model deployment device and terminal equipment |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104734892A (en) * | 2015-04-02 | 2015-06-24 | 江苏物联网研究发展中心 | Automatic deployment system for big data processing system Hadoop on cloud platform OpenStack |
US9348709B2 (en) * | 2013-12-27 | 2016-05-24 | Sybase, Inc. | Managing nodes in a distributed computing environment |
CN108241539A (en) * | 2018-01-03 | 2018-07-03 | 百度在线网络技术(北京)有限公司 | Interactive big data querying method, device, storage medium and terminal device based on distributed system |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11593149B2 (en) | Unified resource management for containers and virtual machines | |
US10922191B2 (en) | Virtual proxy based backup | |
CN111488241B (en) | Method and system for realizing agent-free backup and recovery operation in container arrangement platform | |
US11392400B2 (en) | Enhanced migration of clusters based on data accessibility | |
CN106020854B (en) | Applying firmware updates in a system with zero downtime | |
US10977086B2 (en) | Workload placement and balancing within a containerized infrastructure | |
US9521194B1 (en) | Nondeterministic value source | |
US10095537B1 (en) | Driver version identification and update system | |
US20200026446A1 (en) | Establishing and maintaining data apportioning for availability domain fault tolerance | |
US9880827B2 (en) | Managing software version upgrades in a multiple computer system environment | |
US8612700B1 (en) | Method and system of performing block level duplications of cataloged backup data | |
US20180181383A1 (en) | Controlling application deployment based on lifecycle stage | |
CN110825494A (en) | Physical machine scheduling method and device and computer storage medium | |
US20060212871A1 (en) | Resource allocation in computing systems | |
WO2019001319A1 (en) | Quasi-agentless cloud resource management | |
CN107547635B (en) | Method and device for modifying IP address of large data cluster host | |
US20210303327A1 (en) | Gpu-remoting latency aware virtual machine migration | |
CN109120674B (en) | Deployment method and device of big data platform | |
US11561824B2 (en) | Embedded persistent queue | |
US20200244766A1 (en) | Systems and methods for semi-automatic workload domain deployments | |
US20150281124A1 (en) | Facilitating management of resources | |
CN114816728A (en) | Elastic expansion method and system for cloud environment MongoDB database cluster instance node | |
US7395403B2 (en) | Simulating partition resource allocation | |
US11442763B2 (en) | Virtual machine deployment system using configurable communication couplings | |
US20220206836A1 (en) | Method and Apparatus for Processing Virtual Machine Migration, Method and Apparatus for Generating Virtual Machine Migration Strategy, Device and Storage Medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||