CN111427949A - Method and device for creating big data service - Google Patents


Info

Publication number
CN111427949A
Authority
CN
China
Prior art keywords
big data
data service
component
node
service
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910020151.6A
Other languages
Chinese (zh)
Other versions
CN111427949B (en)
Inventor
韩卫 (Han Wei)
郭峰 (Guo Feng)
刘中军 (Liu Zhongjun)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Hikvision Digital Technology Co Ltd
Original Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Hikvision Digital Technology Co Ltd filed Critical Hangzhou Hikvision Digital Technology Co Ltd
Priority to CN201910020151.6A
Publication of CN111427949A
Application granted
Publication of CN111427949B
Active legal status
Anticipated expiration


Classifications

    • Y — GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 — TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D — CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 — Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Stored Programmes (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The application relates to a method and a device for creating a big data service, belonging to the field of data mining. The method comprises the following steps: a master node generates a container image of each big data service according to the big data service component template corresponding to each of M big data services and the configuration parameters of each big data service; the master node constructs a service cluster comprising the master node and N proxy nodes, where a service image is installed on the master node and a proxy image is installed on each proxy node; and the master node creates a big data service in the service cluster according to an executable file and a component configuration file, where the executable file includes the component identifier of at least one big data service component and the component configuration file includes the deployment-file storage path of each big data service component. A big data service can thus be created quickly.

Description

Method and device for creating big data service
Technical Field
The present application relates to the field of data control, and in particular, to a method and an apparatus for creating a big data service.
Background
Big data refers to data sets that cannot be captured, managed, and processed by conventional software tools within a reasonable time; it is a massive, fast-growing, and diversified information asset whose stronger decision-making, insight-discovery, and process-optimization capabilities can only be realized through new processing modes.
A big data service is a service for processing big data; it can provide the large amounts of hardware and software resources needed to process big data that conventional software cannot handle. Before a user can process big data with such a service, the big data service must first be created, so how to create a big data service is a problem that urgently needs to be solved.
Disclosure of Invention
In order to create a big data service, the embodiment of the application provides a method and a device for creating the big data service. The technical scheme is as follows:
in a first aspect, the present application provides a method for creating a big data service, the method comprising:
a master node generates a container image of each big data service according to a big data service component template corresponding to each of M big data services and configuration parameters of each big data service, where M is an integer greater than or equal to 1;
the master node constructs a service cluster, the service cluster comprising the master node and N proxy nodes, where a service image is installed on the master node and a proxy image is installed on each proxy node; the service image serves as the operating system of the master node and the proxy image as the operating system of each proxy node; both the service image and the proxy image include an image repository containing the container image of each big data service, and N is an integer greater than or equal to 1;
the master node creates a big data service in the service cluster according to an executable file and a component configuration file, where the executable file includes the component identifier of at least one big data service component required by the big data service, and the component configuration file includes the deployment-file storage path of each big data service component.
Optionally, the generating, by the master node, of a container image of each big data service according to the big data service component template corresponding to each of the M big data services and the configuration parameters of each big data service includes:
the master node receives configuration information corresponding to each big data service in the M big data services sent by the terminal;
the master node acquires a stored big data service component template corresponding to a target big data service, wherein the big data service component template corresponding to the target big data service comprises at least one configuration item and default parameters corresponding to each configuration item, and the target big data service is any one of the M big data services;
and the master node respectively updates the default parameter corresponding to each configuration item included in the big data service component template to the parameter of that configuration item included in the configuration information corresponding to the target big data service, obtaining the container image corresponding to the target big data service.
Optionally, the constructing, by the master node, of the service cluster includes:
the master node installs a service image;
the master node sends a proxy image to each proxy node connected to the master node, so that each proxy node installs the proxy image;
the master node deploys a Kubernetes system in the node cluster formed by the master node and the proxy nodes to form the service cluster.
Optionally, the proxy image further includes the address of the master node,
and after the master node sends the proxy image to each proxy node connected to the master node, the method further includes:
the master node receives an address allocation request sent by each proxy node according to the address of the master node, and respectively sends the address allocated to each proxy node to that proxy node.
Optionally, the creating, by the master node, a big data service in the service cluster according to the executable file and the component configuration file includes:
the master node acquires, from the component configuration file and according to the component identifier of each big data service component in the executable file, the storage path of at least one deployment file corresponding to each big data service component;
the master node acquires the at least one deployment file corresponding to each big data service component according to its storage path, and loads the at least one deployment file corresponding to each big data service component into the service cluster to create the big data service.
Optionally, before the loading the at least one deployment file corresponding to each big data service component into the service cluster, the method further includes:
the master node receives an isolation file, where the isolation file includes the computing resources and Kubernetes elements required by the big data service, and creates a container in a plurality of proxy nodes in the service cluster, the container including the computing resources and Kubernetes elements required by the big data service;
the loading at least one deployment file corresponding to each big data service component into the service cluster includes:
and the master node loads the at least one deployment file corresponding to each big data service component into the container.
In a second aspect, the present application provides an apparatus for creating big data service, the apparatus comprising:
the generating module is used for generating a container mirror image of each big data service according to a big data service component template corresponding to each big data service in the M big data services and the configuration parameters of each big data service, wherein M is an integer greater than or equal to 1;
the construction module is used for constructing a service cluster, the service cluster comprising the apparatus and N proxy nodes, where a service image is installed on the apparatus and a proxy image on each proxy node; the service image serves as the operating system of the apparatus and the proxy image as the operating system of each proxy node; both images include an image repository containing the container image of each big data service, and N is an integer greater than or equal to 1;
the creating module is used for creating the big data service in the service cluster according to an executable file and a component configuration file, wherein the executable file comprises a component identifier of at least one big data service component required by the big data service, and the component configuration file comprises a deployment file storage path of each big data service component.
Optionally, the generating module includes:
the receiving unit is used for receiving configuration information corresponding to each big data service in the M big data services sent by the terminal;
an acquisition unit, configured to acquire a stored big data service component template corresponding to a target big data service, where the big data service component template corresponding to the target big data service includes at least one configuration item and the default parameter corresponding to each configuration item, and the target big data service is any one of the M big data services;
and the updating unit is used for respectively updating the default parameters corresponding to each configuration item included in the big data service component template into the parameters of each configuration item included in the configuration information corresponding to the target big data service to obtain the container mirror image corresponding to the target big data service.
Optionally, the building module includes:
an installation unit for installing the service image;
a sending unit, configured to send a proxy image to each proxy node connected to the device, so that each proxy node installs the proxy image;
and the deployment unit is used for deploying a Kubernetes system in the node cluster formed by the apparatus and each proxy node to form the service cluster.
Optionally, the proxy image further includes an address of the master node, and the apparatus further includes:
a receiving module, configured to receive an address allocation request sent by each proxy node according to an address of the device;
a sending module, configured to send the address of each proxy node to each proxy node respectively.
Optionally, the creating module includes:
a second obtaining unit, configured to obtain, according to a component identifier of each big data service component in the executable file, a storage path of at least one deployment file corresponding to each big data service component from the component configuration file;
and the loading unit is used for acquiring the at least one deployment file corresponding to each big data service component according to the storage path of the at least one deployment file corresponding to each big data service component, and loading the at least one deployment file corresponding to each big data service component into the service cluster to create the big data service.
Optionally, the creating module further includes: a second receiving unit and a creating unit;
the second receiving unit is configured to receive an isolation file, where the isolation file includes a computing resource and a Kubernetes element that are needed by a big data service;
the creating unit is used for creating a container in a plurality of proxy nodes in the service cluster, the container including the computing resources and Kubernetes elements required by the big data service;
the loading unit is configured to load at least one deployment file corresponding to each big data service component into the container.
In a third aspect, the present application provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the method steps provided by the first aspect or any alternative form of the first aspect.
The technical scheme provided by the embodiment of the application can have the following beneficial effects:
in the application, a master node first constructs a service cluster, where the operating systems of the master node and of the proxy nodes in the service cluster include the container images of M big data services, M being an integer greater than or equal to 1. Thus, when a big data service is to be created, it can be created quickly in the service cluster according to an executable file and a component configuration file, the executable file including the component identifier of at least one big data service component of the big data service and the component configuration file including the deployment-file storage path of each big data service component.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application.
FIG. 1 is a flow chart of a method for creating big data service according to an embodiment of the present disclosure;
FIG. 2 is a flow chart of another method for creating big data service provided by an embodiment of the present application;
fig. 3 is a schematic structural diagram of a service cluster according to an embodiment of the present application;
FIG. 4 is a flowchart of a method for building a service cluster according to an embodiment of the present disclosure;
FIG. 5 is a schematic structural diagram of an apparatus for creating a big data service according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of a terminal according to an embodiment of the present application.
With the above figures, there are shown specific embodiments of the present application, which will be described in more detail below. These drawings and written description are not intended to limit the scope of the inventive concepts in any manner, but rather to illustrate the inventive concepts to those skilled in the art by reference to specific embodiments.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
Referring to fig. 1, an embodiment of the present application provides a method for creating a big data service, where the method includes:
step 101: the master node generates a container mirror image of each big data service according to a big data service component template corresponding to each big data service in the M big data services and the configuration parameters of each big data service, wherein M is an integer greater than or equal to 1.
Step 102: the method comprises the steps that a main node constructs a service cluster, the service cluster comprises the main node and N agent nodes, the main node is provided with a service mirror image, each agent node is provided with an agent mirror image, the service mirror image is used as an operating system of the main node, the agent mirror image is used as an operating system of the agent node, the service mirror image and the agent mirror image respectively comprise a mirror image warehouse, the mirror image warehouse comprises a container mirror image of each big data service, and N is an integer greater than or equal to 1.
Step 103: the main node creates a big data service in the service cluster according to an executable file and a component configuration file, wherein the executable file comprises a component identifier of at least one big data service component of the big data service, and the component configuration file comprises a deployment file storage path of each big data service component.
Referring to fig. 2, an embodiment of the present application provides a method for creating a big data service, where the method includes:
step 201: and the master node generates a container mirror image of each big data service according to the big data service component template corresponding to each big data service component in the M big data services and the configuration parameter of each big data service component, wherein M is an integer greater than or equal to 1.
Referring to fig. 3, before performing this step, a master node and N proxy nodes may be deployed, where N is an integer greater than or equal to 1, and then the master node and each proxy node in the N proxy nodes are connected to form a cluster. Alternatively, a network cable may be used to connect the master node with each of the proxy nodes.
In practice, N+1 nodes may be deployed; one node is selected and connected with each of the other nodes. A user can then log in to the portal page of the selected node through a terminal and, in the portal page, set the role of the selected node to "master node" and the roles of the other N nodes to "proxy node".
Alternatively, the master node may be a server or a computer, and each proxy node may be a server or a computer. The computing power of the master node is higher than that of the proxy nodes.
This step may be realized by the following operations 2011 to 2013:
2011: and the master node receives the configuration information corresponding to each big data service component in the M big data service components sent by the terminal.
When a user needs to create a big data service, the user can input the configuration information and component identifiers of the M big data service components on a corresponding terminal. For any big data service component, the configuration information includes the parameter of at least one configuration item, where the configuration items include at least one of the container name, image path, storage path, number of replicas, and the like.
Optionally, the M big data service components may include hadoop, zookeeper, kafka, spark, hbase, and the like.
The terminal may obtain the configuration information and the component identifiers of the M big data service components input by the user, and send a generation request message to the master node, where the generation request message may include the configuration information and the component identifiers of the M big data service components. The master node receives the generation request message, and extracts the configuration information and the component identifications of the M big data service components from the generation request message.
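The request described above can be sketched as a simple message structure. This is purely illustrative: the function and field names (`build_generation_request`, `component_ids`, `config_info`) are assumptions, not taken from the patent's actual implementation.

```python
def build_generation_request(components: dict) -> dict:
    """Wrap per-component configuration information and the list of
    component identifiers into one generation request message."""
    return {
        "component_ids": list(components),   # identifiers of the M components
        "config_info": components,           # per-component configuration items
    }

# Hypothetical configuration a terminal might collect from the user.
request = build_generation_request({
    "zookeeper": {"container_name": "zookeeper",
                  "image_path": "registry.local/zookeeper:3.4.10",
                  "replicas": 3},
    "kafka": {"container_name": "kafka",
              "image_path": "registry.local/kafka:2.1",
              "replicas": 3},
})
```

On receipt, the master node would extract `component_ids` and `config_info` from such a message, as described above.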
The master node may locally store a correspondence between the component identifier and the big data service component template, where the correspondence includes the component identifier of each big data service component in the M big data service components and the big data service component template.
Optionally, the big data service component template includes at least one configuration item and a default parameter corresponding to each configuration item in the at least one configuration item. For example, if the configuration item is a container name, the default parameter corresponding to the configuration item is a default container name, and if the configuration item is a mirror path, the default parameter corresponding to the configuration item is a default mirror path.
For example, for zookeeper, the configuration items in the big data service component template include the container name ENV ZK_USER, the storage path ZK_DATA_DIR, the data log path ZK_DATA_LOG_DIR, the zookeeper log path ZK_LOG_DIR, the environment variable path JAVA_HOME, the installation package path ARG ZK_DIST, and the like, each with a corresponding default parameter (for example, the default parameter corresponding to the container name ENV ZK_USER is zookeeper).
Step 2012: and the master node acquires a stored big data service component template corresponding to a target big data service component, wherein the target big data service component is any one of the M big data service components.
In this step, the master node may obtain, according to the component identifier of the target big data service component, a big data service component template corresponding to the target big data service component from the correspondence between the stored component identifier and the big data service component template.
Step 2013: and the main node respectively updates the default parameters corresponding to each configuration item included in the big data service assembly template into the parameters of each configuration item included in the configuration information corresponding to the target big data service assembly to obtain the container mirror image corresponding to the target big data service assembly.
For example, assuming the target big data service component is zookeeper, the configuration information of zookeeper includes: the container name ENV ZK_USER with parameter "zookeeper"; the storage path ZK_DATA_DIR with parameter "/var/lib/zookeeper/data"; the data log path ZK_DATA_LOG_DIR with parameter "/var/lib/zookeeper/log"; the zookeeper log path ZK_LOG_DIR with parameter "/var/log/zookeeper"; the environment variable path JAVA_HOME with parameter "/usr/lib/jvm/java-8-openjdk-amd64"; and the installation package path ARG ZK_DIST with parameter "zookeeper-3.4.10".
Then, in the big data service component template corresponding to zookeeper, the default parameters corresponding to ENV ZK_USER, ZK_DATA_DIR, ZK_DATA_LOG_DIR, ZK_LOG_DIR, JAVA_HOME, and ARG ZK_DIST are respectively updated to the parameters listed above, thereby obtaining the container image corresponding to zookeeper.
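The template-update step of 2013 can be sketched as a plain dictionary merge. This is a minimal sketch under the assumption that a template is a mapping from configuration item to default parameter; the function name `render_container_config` is illustrative.

```python
def render_container_config(template: dict, config_info: dict) -> dict:
    """Overwrite each template default with the user-supplied parameter,
    keeping the default when a configuration item was not provided."""
    rendered = dict(template)
    for item, value in config_info.items():
        if item in rendered:
            rendered[item] = value
    return rendered

# Hypothetical zookeeper template with default parameters.
zk_template = {
    "ENV ZK_USER": "zookeeper",
    "ZK_DATA_DIR": "zk_data_dir",
    "ZK_DATA_LOG_DIR": "zk_data_log_dir",
    "ZK_LOG_DIR": "zk_log_dir",
    "JAVA_HOME": "java_home",
    "ARG ZK_DIST": "zk_dist",
}
# Parameters from the configuration information in the example above.
zk_config = {
    "ENV ZK_USER": "zookeeper",
    "ZK_DATA_DIR": "/var/lib/zookeeper/data",
    "ZK_DATA_LOG_DIR": "/var/lib/zookeeper/log",
    "ZK_LOG_DIR": "/var/log/zookeeper",
    "JAVA_HOME": "/usr/lib/jvm/java-8-openjdk-amd64",
    "ARG ZK_DIST": "zookeeper-3.4.10",
}
rendered = render_container_config(zk_template, zk_config)
```

In a real system the rendered mapping would then be baked into a Dockerfile-style build to produce the container image.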
Step 202: the master node constructs a service cluster comprising the master node and the N proxy nodes.
A service image is installed on the master node and a proxy image on each proxy node; the service image serves as the operating system of the master node and the proxy image as that of each proxy node; both the service image and the proxy image include an image repository containing the container image of each big data service, and N is an integer greater than or equal to 1.
Referring to fig. 4, this step may construct the service cluster through the following operations 2021 to 2027:
2021: the master node installs the service image and sends the proxy image to each proxy node connected to the master node.
Optionally, the master node includes a base image of a Linux release, and generates the service image and the proxy image by combining the image repository with this base image, where the image repository includes the container image of each big data service. The proxy image also includes the address of the master node.
Alternatively, the address may be an IP address or the like.
2022: for each of the N proxy nodes, the proxy node receives the proxy image and installs the proxy image.
2023: the proxy node sends an address allocation request to the master node, wherein the address allocation request is used for requesting the master node to allocate an address for the proxy node.
The proxy node may extract the address of the master node from the proxy image and send an address assignment request to the master node based on the address of the master node.
2024: and the main node receives the address allocation request and sends the address of the proxy node to the proxy node.
After receiving the address allocation request, the master node may allocate an address to the proxy node, where the address may be an IP address.
2025: the proxy node receives the address, sets the address as the received address, and sends resource information to the main node, wherein the resource information comprises at least one of the number of CPUs (central processing units), the size of memory space and the size of disk space of the proxy node.
2026: and the main node receives the resource information of the agent node and stores the resource information of the agent node.
2027: the main node deploys a kubernets system in a node cluster formed by the main node and each proxy node to form a service cluster.
The Kubernetes system includes Kubernetes networking elements, which may include hostnetwork, macvlan bridge, ovs, calico, flannel, and canal, among others.
hostnetwork uses the host network stack during communication, so the container's cross-endpoint bandwidth performance is the best.
macvlan bypasses the host network stack and delivers packets directly from the host network card, so its performance is higher than in other non-hostnetwork scenarios; however, because traffic does not pass through the host network stack, there is no traffic control, and security policies and the like are difficult to enforce.
ovs connects the container network namespace directly to the ovs bridge with a veth pair and delivers packets to the physical network card through ovs datapath matching in kernel mode, so packets do not pass through the host namespace; however, a first packet that fails kernel-mode matching must be sent to user mode for processing, and ovs control adds packet-handling overhead, so throughput is somewhat lower than with macvlan.
Because flannel (vxlan mode) and canal use vxlan tunneling, packets must be encapsulated and decapsulated in cross-host communication, so the performance loss is obvious. The difference between flannel and canal is that flannel uses a linux bridge to connect the container network namespace with the host and peer containers, while canal uses a veth pair; using veth directly gives slightly higher network performance than a linux bridge.
Weighing the performance and security requirements of big data, together with the simple multi-tenant deployment realized by the isolation space established in the subsequent steps, the canal mode is finally selected.
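The "isolation space" mentioned above could, for instance, take the form of a Kubernetes namespace with a resource quota. The following YAML is a hedged sketch of what such an isolation file might contain — the names (`tenant-a`) and limits are illustrative assumptions, not the patent's actual isolation file format.

```yaml
# Illustrative isolation file: one namespace per tenant plus a quota
# bounding the computing resources its big data service may consume.
apiVersion: v1
kind: Namespace
metadata:
  name: tenant-a
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: tenant-a-quota
  namespace: tenant-a
spec:
  hard:
    requests.cpu: "8"
    requests.memory: 32Gi
    limits.cpu: "16"
    limits.memory: 64Gi
```

Deployment files loaded into such a namespace would then be scheduled only within the declared resource envelope, which matches the multi-tenant goal stated above.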
Step 203: The terminal sends an executable file and a component configuration file to the master node to create the big data service in the service cluster, where the executable file includes the component identifier of at least one big data service component and the component configuration file includes the deployment-file storage path of each big data service component.
Optionally, the component configuration file may be a yaml file, and the terminal may generate the executable file and the component configuration file through the following operations 2031 to 2033:
2031: the terminal generates an executable file comprising the component identification of the at least one big data service component.
Optionally, the user may configure a component identifier of a big data service component required by the big data service on the terminal, and the terminal generates the executable file according to the component identifier of each big data service component configured by the user.
2032: and the terminal creates a palm warehouse according to the executable file, wherein the palm warehouse comprises at least one deployment file corresponding to each big data service component in the at least one big data service component.
At least one deployment file corresponding to the big data service component comprises implementation codes used for implementing the big data service component.
The server stores a corresponding relationship between the component identifier and the deployment file, and the corresponding relationship stores the component identifier of each big data service component and at least one corresponding deployment file.
In this step, the terminal may send a helm repository establishment request to the server, where the request carries the executable file. The server receives the request, extracts the executable file from it, acquires, according to the component identifier of each big data service component included in the executable file, the at least one deployment file corresponding to each big data service component from the correspondence between component identifiers and deployment files, establishes a helm repository that includes the acquired deployment files of each big data service component, and sends the repository identifier of the helm repository to the terminal.
2033: the terminal generates a component configuration file according to the helm repository, where the component configuration file includes the component identifier of each big data service component and the storage path of each deployment file.
The terminal receives the repository identifier of the helm repository sent by the server and sends a path acquisition request to the server, where the path acquisition request carries the repository identifier. The server receives the path acquisition request, extracts the repository identifier from it, determines the helm repository corresponding to the repository identifier, acquires the storage path of each deployment file in the helm repository, and sends these storage paths to the terminal. The terminal receives the storage path of each deployment file in the helm repository and generates a component configuration file that includes the storage path of each deployment file in the helm repository.
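The data flow of operations 2031 to 2033 can be sketched as follows. This is an illustrative model only: the component identifiers, the server-side component-to-deployment-file mapping, and all function names are hypothetical stand-ins for the terminal/server interaction described above.

```python
# Server-side correspondence between component identifiers and
# deployment files (operation 2032 looks deployment files up here).
# The component names and file names are invented for illustration.
DEPLOYMENT_FILES = {
    "hdfs":  ["hdfs-namenode.yaml", "hdfs-datanode.yaml"],
    "spark": ["spark-master.yaml", "spark-worker.yaml"],
}

def build_executable_file(component_ids):
    """Operation 2031: the executable file lists the component
    identifiers of the big data service components the user chose."""
    return {"components": component_ids}

def create_helm_repository(executable_file):
    """Operation 2032 (server side): resolve each component identifier
    to its deployment files and assemble them into a repository."""
    return {cid: DEPLOYMENT_FILES[cid]
            for cid in executable_file["components"]}

def build_component_config(repo, base="/helm-repo"):
    """Operation 2033: record one storage path per deployment file."""
    return {cid: [f"{base}/{cid}/{name}" for name in files]
            for cid, files in repo.items()}

exe = build_executable_file(["hdfs", "spark"])
repo = create_helm_repository(exe)
config = build_component_config(repo)
print(config["hdfs"][0])  # -> /helm-repo/hdfs/hdfs-namenode.yaml
```

The master node later only needs `config` (the component configuration file) plus `exe` to locate and load every deployment file.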
The helm repository uses the Kubernetes Helm package management tool to manage a large number of component configuration files. It also solves the problems of service dependency and startup order, which facilitates management.
Projects are deployed and released based on component configuration files. Most current projects are micro-serviced or modularized and are deployed as multiple components, and each component may correspond to a deployment file.
Helm is a package management tool for Kubernetes that simplifies the deployment and management of Kubernetes applications. Helm can be likened to the yum tool of CentOS. Helm has the following basic concepts:
Chart: an installation package managed by Helm; the resources to be deployed are contained in the Chart. A Chart can be compared to the rpm file used by CentOS yum. Each Chart contains the following two parts: a Chart.yaml, the basic description file of the package, and one or more Kubernetes manifest file templates placed in a templates directory.
Release: a deployment instance of a Chart, that is, an instance of a Chart running on a Kubernetes cluster. On the same cluster, a Chart may be installed many times, and each installation creates a new Release. For example, for a MySQL Chart, if you want to run two databases on the server, the Chart can be installed twice; each installation generates its own Release with its own Release name.
Repository: the store of charts, used for publishing and storing charts.
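The Chart/Release relationship above can be illustrated with a toy model; the classes below are not Helm's API, only a sketch of the concept that one Chart installed twice yields two independent Releases.

```python
# Toy model of Helm's concepts: a Chart is the package, a Release is
# one installed instance of it on a cluster.
class Chart:
    def __init__(self, name):
        self.name = name

class Cluster:
    def __init__(self):
        self.releases = []

    def install(self, chart, release_name):
        # Each install of the same Chart creates a new Release
        # with its own Release name.
        release = {"chart": chart.name, "release": release_name}
        self.releases.append(release)
        return release

mysql = Chart("mysql")
cluster = Cluster()
cluster.install(mysql, "db-one")   # first database
cluster.install(mysql, "db-two")   # second database, same Chart
print(len(cluster.releases))       # -> 2
```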
Optionally, the terminal may also send an isolation file to the server, where the isolation file includes the computing resources and Kubernetes elements required by the big data service.

When a user's big data service is deployed, an isolation environment needs to be created for it; the computing resources and Kubernetes elements in the isolation environment are used to run the big data service. The resources used by the big data service are thereby effectively isolated, which can improve resource use efficiency.
The default isolation environment has no resource quota, so a quota needs to be set for it. The quota covers two aspects: a computing resource quota and a Kubernetes element quantity limit. The computing resources are mainly CPU and memory resources, and the Kubernetes elements include pod, service, replicationcontroller, resourcequota, persistentvolumeclaim and other elements.
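Such a quota can be expressed as a standard Kubernetes ResourceQuota manifest covering both aspects. The sketch below builds one as a plain Python dict; the helper function and its values are illustrative, while the field names follow the ResourceQuota API.

```python
def make_resource_quota(namespace, cpu, memory, pods, services, pvcs):
    """Build a Kubernetes ResourceQuota manifest (as a plain dict) for
    one isolation environment: computing resource quotas plus element
    quantity limits. The helper itself is a hypothetical convenience."""
    return {
        "apiVersion": "v1",
        "kind": "ResourceQuota",
        "metadata": {"name": "tenant-quota", "namespace": namespace},
        "spec": {
            "hard": {
                "limits.cpu": str(cpu),     # computing resource quota: CPU
                "limits.memory": memory,    # computing resource quota: memory
                "pods": str(pods),          # element quantity limits
                "services": str(services),
                "persistentvolumeclaims": str(pvcs),
            }
        },
    }

quota = make_resource_quota("tenant-a", 8, "32Gi", 50, 20, 10)
```

Applying such a manifest to the tenant's namespace makes the cluster reject anything that would exceed the stated caps.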
Creating a group of big data services requires creating an isolation environment first; a single tenant is mainly isolated by CPU and memory by means of the Kubernetes isolation environment technology. For the entire container platform, resources may be over-committed. For example, suppose the CPU core count of the whole platform is N and the memory is M, the CPU over-commit ratio is s (1 < s < 16), and the memory over-commit ratio is t (1 < t < 2). If the average number of CPU cores allocated to each tenant is a and the allocated memory is b, the cores of the entire platform can serve (N × s / a) tenants and the memory can serve (M × t / b) tenants. The over-commit ratios need to be set according to how busy the actual platform is; if the platform is idle, the ratios can be increased so that more tenants share the resources.
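The capacity arithmetic above is straightforward to compute; whichever resource supports fewer tenants is the binding one. A minimal sketch (the example numbers are invented):

```python
from math import floor

def tenant_capacity(total_cores, total_mem_gb, cpu_ratio, mem_ratio,
                    cores_per_tenant, mem_gb_per_tenant):
    """Number of tenants the platform can host with over-commit,
    mirroring the formulas in the text: cores support N*s/a tenants,
    memory supports M*t/b tenants, and the smaller bound wins."""
    by_cpu = floor(total_cores * cpu_ratio / cores_per_tenant)    # N*s/a
    by_mem = floor(total_mem_gb * mem_ratio / mem_gb_per_tenant)  # M*t/b
    return min(by_cpu, by_mem)

# e.g. 64 cores, 256 GB, CPU over-commit s=4, memory over-commit t=1.5,
# each tenant averaging a=8 cores and b=32 GB: CPU allows 32 tenants
# but memory allows only 12, so memory is the binding resource.
print(tenant_capacity(64, 256, 4, 1.5, 8, 32))  # -> 12
```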
Step 204: the master node creates a big data service in the service cluster according to the executable file and the component configuration file.
Optionally, this step may be implemented using the namespace technology of Kubernetes. In this step, the master node acquires, according to the component identifier of each big data service component in the executable file, the storage path of the at least one deployment file corresponding to each big data service component from the component configuration file; acquires the at least one deployment file corresponding to each big data service component from the server according to those storage paths; and loads the at least one deployment file corresponding to each big data service component into the service cluster to create the big data service.
In this step, the master node may further receive the isolation file sent by the terminal; select, according to the computing resources and Kubernetes elements required by the big data service that are included in the isolation file, a plurality of proxy nodes in the service cluster, where the plurality of proxy nodes provide the computing resources and Kubernetes elements required by the big data service; and create, on the plurality of proxy nodes, a container that includes the computing resources and Kubernetes elements required by the big data service, the container being the isolation environment corresponding to the big data service. The master node loads the at least one deployment file corresponding to each big data service component into the container to create the big data service, and the big data service runs in the container.
After creating the big data service, the master node may use the plurality of proxy nodes to run the big data service in the container. The proxy image on a proxy node serves as its operating system and is also the operating system on which the big data service runs; the container image of the big data service, stored in the image repository of the proxy image, is used while the big data service runs to provide services for users.
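The loading loop of step 204 can be sketched as follows. `fetch_from_server` and `load_into_cluster` are hypothetical stand-ins for the transport and cluster API, which the description does not spell out; only the path-resolution flow follows the text.

```python
def create_big_data_service(executable_file, component_config,
                            fetch_from_server, load_into_cluster):
    """Step 204 sketch: for each component identifier in the executable
    file, look up its deployment-file storage paths in the component
    configuration file, fetch each deployment file from the server, and
    load it into the service cluster."""
    loaded = []
    for cid in executable_file["components"]:
        for path in component_config[cid]:        # storage paths (2033)
            deployment = fetch_from_server(path)  # get the deployment file
            load_into_cluster(deployment)         # create the service
            loaded.append(path)
    return loaded

# Minimal usage with invented stand-in callbacks:
exe = {"components": ["hdfs"]}
cfg = {"hdfs": ["/helm-repo/hdfs/hdfs-namenode.yaml"]}
paths = create_big_data_service(
    exe, cfg,
    fetch_from_server=lambda p: {"path": p},
    load_into_cluster=lambda d: None)
```

Because the container images were placed in the image repositories while the cluster was built, only these deployment files move at creation time.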
In the embodiment of the application, the master node generates a container image of each big data service according to the big data service component template corresponding to each big data service in M big data services and the configuration parameters of each big data service, where M is an integer greater than or equal to 1. A service cluster is then constructed; the service cluster includes the master node and N proxy nodes, the master node is installed with a service image and each proxy node is installed with a proxy image, the service image serves as the operating system of the master node and the proxy image serves as the operating system of a proxy node, both the service image and the proxy image include an image repository, the image repository includes the container image of each big data service, and N is an integer greater than or equal to 1. The big data service is then created in the service cluster according to an executable file and a component configuration file, where the executable file includes the component identifier of at least one big data service component of the big data service and the component configuration file includes a deployment file storage path of each big data service component. Therefore, when the big data service is created, only the at least one deployment file of each big data service component needs to be loaded into the service cluster; the container images required for running the big data service were already placed in the service cluster while it was being constructed, so the big data service is created rapidly. In addition, when the big data service is created, a container can be created for it, and the resources it needs are isolated through the container, which improves resource use efficiency.
The following are embodiments of the apparatus of the present application that may be used to perform embodiments of the method of the present application. For details which are not disclosed in the embodiments of the apparatus of the present application, reference is made to the embodiments of the method of the present application.
Referring to fig. 5, the present application provides an apparatus 300 for creating a big data service, where the apparatus 300 includes:
a generating module 301, configured to generate a container image of each big data service according to the big data service component template corresponding to each big data service in the M big data services and the configuration parameters of each big data service, where M is an integer greater than or equal to 1;

a building module 302, configured to build a service cluster, where the service cluster includes the device and N proxy nodes, the device is installed with a service image and each proxy node is installed with a proxy image, the service image serves as the operating system of the device and the proxy image serves as the operating system of a proxy node, both the service image and the proxy image include an image repository, the image repository includes the container image of each big data service, and N is an integer greater than or equal to 1;

a creating module 303, configured to create a big data service in the service cluster according to an executable file and a component configuration file, where the executable file includes the component identifier of at least one big data service component and the component configuration file includes a deployment file storage path of each big data service component.
Optionally, the generating module 301 includes:
the receiving unit is used for receiving configuration information corresponding to each big data service in the M big data services sent by the terminal;
the device comprises an acquisition unit, a storage unit and a processing unit, wherein the acquisition unit is used for acquiring a big data service component template corresponding to a stored target big data service, the big data service component template corresponding to the target big data service comprises at least one configuration item and default parameters corresponding to each configuration item, and the target big data service is any one of the M big data services;
and the updating unit is used for respectively updating the default parameters corresponding to each configuration item included in the big data service component template into the parameters of each configuration item included in the configuration information corresponding to the target big data service to obtain the container mirror image corresponding to the target big data service.
Optionally, the building module 302 includes:
an installation unit for installing the service image;
a sending unit, configured to send a proxy image to each proxy node connected to the device, so that each proxy node installs the proxy image;
and a deployment unit, configured to deploy a Kubernetes system in the node cluster formed by the device and each proxy node to form a service cluster.
Optionally, the proxy image further includes an address of the master node, and the apparatus 300 further includes:
a receiving module, configured to receive an address allocation request sent by each proxy node according to an address of the device;
a sending module, configured to send the address of each proxy node to each proxy node respectively.
Optionally, the creating module 303 includes:
a second obtaining unit, configured to obtain, according to a component identifier of each big data service component in the executable file, a storage path of at least one deployment file corresponding to each big data service component from the component configuration file;
and the loading unit is used for acquiring the at least one deployment file corresponding to each big data service component according to the storage path of the at least one deployment file corresponding to each big data service component, and loading the at least one deployment file corresponding to each big data service component into the service cluster to create the big data service.
Optionally, the creating module 303 further includes: a second receiving unit and a creating unit;
the second receiving unit is configured to receive an isolation file, where the isolation file includes a computing resource and a Kubernetes element that are needed by a big data service;
the creating unit is configured to create a container on a plurality of proxy nodes in the service cluster, where the container includes the computing resources and Kubernetes elements required by the big data service;
the loading unit is configured to load at least one deployment file corresponding to each big data service component into the container.
In the embodiment of the present application, the generating module generates a container image of each big data service according to the big data service component template corresponding to each big data service of the M big data services and the configuration parameters of each big data service, where M is an integer greater than or equal to 1. The constructing module then constructs a service cluster; the service cluster includes the device 300 and N proxy nodes, the device 300 is installed with a service image and each proxy node is installed with a proxy image, the service image serves as the operating system of the device 300 and the proxy image serves as the operating system of a proxy node, both the service image and the proxy image include an image repository, the image repository includes the container image of each big data service, and N is an integer greater than or equal to 1. The creating module then creates a big data service in the service cluster according to an executable file and a component configuration file, where the executable file includes the component identifier of at least one big data service component of the big data service and the component configuration file includes a deployment file storage path of each big data service component. Rapid creation of the big data service is thereby realized.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
Fig. 6 shows a block diagram of a terminal 400 according to an exemplary embodiment of the present invention. The terminal 400 may be a computer or the like. Generally, the terminal 400 includes: a processor 401 and a memory 402.
Processor 401 may include one or more processing cores, such as a 4-core processor or an 8-core processor. Processor 401 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), or a PLA (Programmable Logic Array). Processor 401 may also include a main processor and a coprocessor; the main processor is a processor for processing data in the wake-up state, also known as a CPU (Central Processing Unit), and the coprocessor is a low-power processor for processing data in the standby state. In some embodiments, processor 401 may be integrated with a GPU (Graphics Processing Unit) responsible for rendering and drawing the content to be displayed. In some embodiments, processor 401 may also include an AI (Artificial Intelligence) processor for handling computing operations related to machine learning.
Memory 402 may include one or more computer-readable storage media, which may be non-transitory. Memory 402 may also include high speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 402 is used to store at least one instruction for execution by processor 401 to implement a method of creating big data services as provided by method embodiments herein.
In some embodiments, the terminal 400 may further optionally include: a peripheral interface 403 and at least one peripheral. The processor 401, memory 402 and peripheral interface 403 may be connected by bus or signal lines. Each peripheral may be connected to the peripheral interface 403 via a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 404, touch screen display 405, camera 406, audio circuitry 407, positioning components 408, and power supply 409.
The peripheral interface 403 may be used to connect at least one peripheral related to I/O (Input/Output) to the processor 401 and the memory 402. In some embodiments, processor 401, memory 402, and peripheral interface 403 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 401, the memory 402 and the peripheral interface 403 may be implemented on a separate chip or circuit board, which is not limited by this embodiment.
The Radio Frequency circuit 404 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuitry 404 communicates with communication networks and other communication devices via electromagnetic signals. The rf circuit 404 converts an electrical signal into an electromagnetic signal to transmit, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 404 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuitry 404 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: the world wide web, metropolitan area networks, intranets, generations of mobile communication networks (2G, 3G, 4G, and 5G), Wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the rf circuit 404 may further include NFC (Near Field Communication) related circuits, which are not limited in this application.
The display 405 may be used to display a UI (User Interface), which may include graphics, text, icons, video, and any combination thereof. When the display 405 is a touch screen, it also has the ability to capture touch signals on or over its surface. The touch signals may be input to the processor 401 as control signals for processing; at this time, the display 405 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display 405, provided on the front panel of the terminal 400; in other embodiments, there may be at least two displays 405, each disposed on a different surface of the terminal 400 or in a folded design; in still other embodiments, the display 405 may be a flexible display disposed on a curved or folded surface of the terminal 400. The display 405 may even be provided in a non-rectangular irregular shape. The display 405 may be made of materials such as an LCD (Liquid Crystal Display) or an OLED (Organic Light-Emitting Diode).
The camera assembly 406 is used to capture images or video. Optionally, camera assembly 406 includes a front camera and a rear camera. Generally, a front camera is disposed at a front panel of the terminal, and a rear camera is disposed at a rear surface of the terminal. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera assembly 406 may also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
The audio circuit 407 may include a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 401 for processing, or inputting the electric signals to the radio frequency circuit 404 for realizing voice communication. For the purpose of stereo sound collection or noise reduction, a plurality of microphones may be provided at different portions of the terminal 400. The microphone may also be an array microphone or an omni-directional pick-up microphone. The speaker is used to convert electrical signals from the processor 401 or the radio frequency circuit 404 into sound waves. The loudspeaker can be a traditional film loudspeaker or a piezoelectric ceramic loudspeaker. When the speaker is a piezoelectric ceramic speaker, the speaker can be used for purposes such as converting an electric signal into a sound wave audible to a human being, or converting an electric signal into a sound wave inaudible to a human being to measure a distance. In some embodiments, audio circuitry 407 may also include a headphone jack.
The positioning component 408 is used to locate the current geographic location of the terminal 400 to implement navigation or LBS (Location Based Service). The positioning component 408 may be a positioning component based on the United States GPS (Global Positioning System), the Chinese BeiDou system, or the Galileo system.
The power supply 409 is used to supply power to the various components in the terminal 400. The power source 409 may be alternating current, direct current, disposable or rechargeable. When the power source 409 includes a rechargeable battery, the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery. The wired rechargeable battery is a battery charged through a wired line, and the wireless rechargeable battery is a battery charged through a wireless coil. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, the terminal 400 also includes one or more sensors 410. The one or more sensors 410 include, but are not limited to: acceleration sensor 411, gyro sensor 412, pressure sensor 413, fingerprint sensor 414, optical sensor 415, and proximity sensor 416.
The acceleration sensor 411 may detect the magnitude of acceleration in three coordinate axes of the coordinate system established with the terminal 400. For example, the acceleration sensor 411 may be used to detect components of the gravitational acceleration in three coordinate axes. The processor 401 may control the touch display screen 405 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 411. The acceleration sensor 411 may also be used for acquisition of motion data of a game or a user.
The gyro sensor 412 may detect a body direction and a rotation angle of the terminal 400, and the gyro sensor 412 may cooperate with the acceleration sensor 411 to acquire a 3D motion of the terminal 400 by the user. From the data collected by the gyro sensor 412, the processor 401 may implement the following functions: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
The pressure sensor 413 may be disposed on a side bezel of the terminal 400 and/or a lower layer of the touch display screen 405. When the pressure sensor 413 is disposed on the side frame of the terminal 400, a user's holding signal to the terminal 400 can be detected, and the processor 401 performs left-right hand recognition or shortcut operation according to the holding signal collected by the pressure sensor 413. When the pressure sensor 413 is disposed at the lower layer of the touch display screen 405, the processor 401 controls the operability control on the UI interface according to the pressure operation of the user on the touch display screen 405. The operability control comprises at least one of a button control, a scroll bar control, an icon control and a menu control.
The fingerprint sensor 414 is used to collect a user's fingerprint, and the processor 401 identifies the user's identity according to the fingerprint collected by the fingerprint sensor 414, or the fingerprint sensor 414 identifies the user's identity according to the collected fingerprint. When the user's identity is identified as a trusted identity, the processor 401 authorizes the user to perform relevant sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, making payments, changing settings, and the like. The fingerprint sensor 414 may be arranged on the front, back, or side of the terminal 400; when a physical key or vendor Logo is arranged on the terminal 400, the fingerprint sensor 414 may be integrated with the physical key or vendor Logo.
The optical sensor 415 is used to collect the ambient light intensity. In one embodiment, the processor 401 may control the display brightness of the touch display screen 405 based on the ambient light intensity collected by the optical sensor 415. Specifically, when the ambient light intensity is high, the display brightness of the touch display screen 405 is increased; when the ambient light intensity is low, the display brightness of the touch display screen 405 is turned down. In another embodiment, the processor 401 may also dynamically adjust the shooting parameters of the camera assembly 406 according to the ambient light intensity collected by the optical sensor 415.
A proximity sensor 416, also known as a distance sensor, is typically disposed on the front panel of the terminal 400. The proximity sensor 416 is used to collect the distance between the user and the front surface of the terminal 400. In one embodiment, when the proximity sensor 416 detects that the distance between the user and the front surface of the terminal 400 gradually decreases, the processor 401 controls the touch display screen 405 to switch from the bright screen state to the dark screen state; when the proximity sensor 416 detects that the distance between the user and the front surface of the terminal 400 gradually becomes larger, the processor 401 controls the touch display screen 405 to switch from the breath screen state to the bright screen state.
Those skilled in the art will appreciate that the configuration shown in fig. 6 is not intended to be limiting of terminal 400 and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components may be used.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the application disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It will be understood that the present application is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (12)

1. A method of creating a big data service, the method comprising:
a master node generates a container image of each big data service according to a big data service component template corresponding to each big data service in M big data services and configuration parameters of each big data service, wherein M is an integer greater than or equal to 1;

the master node constructs a service cluster, wherein the service cluster comprises the master node and N proxy nodes, the master node is installed with a service image, each proxy node is installed with a proxy image, the service image serves as an operating system of the master node, the proxy image serves as an operating system of the proxy node, the service image and the proxy image each comprise an image repository, the image repository comprises the container image of each big data service, and N is an integer greater than or equal to 1;

the master node creates a big data service in the service cluster according to an executable file and a component configuration file, wherein the executable file comprises a component identifier of at least one big data service component required by the big data service, and the component configuration file comprises a deployment file storage path of each big data service component.
2. The method of claim 1, wherein the generating, by the master node, a container image of each big data service according to the big data service component template corresponding to each big data service of the M big data services and the configuration parameters of each big data service comprises:
the master node receives configuration information corresponding to each big data service in the M big data services sent by the terminal;
the master node acquires a stored big data service component template corresponding to a target big data service, wherein the big data service component template corresponding to the target big data service comprises at least one configuration item and default parameters corresponding to each configuration item, and the target big data service is any one of the M big data services;
and the main node respectively updates the default parameters corresponding to each configuration item included in the big data service assembly template into the parameters of each configuration item included in the configuration information corresponding to the target big data service to obtain the container mirror image corresponding to the target big data service.
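By way of illustration only (not part of the claims), the parameter-override step of claim 2 can be sketched in Python. The claim does not prescribe an implementation; the mapping-based template structure and the names `render_template`, `template`, and `config_info` are assumptions introduced here for clarity:

```python
def render_template(template, config_info):
    """Merge terminal-supplied configuration information into a component
    template: each configuration item keeps its default parameter unless
    the configuration information supplies a replacement value."""
    rendered = dict(template)          # start from the template's defaults
    for item, value in config_info.items():
        if item in rendered:           # only known configuration items are updated
            rendered[item] = value
    return rendered

# Hypothetical template for an HDFS-like component: configuration items -> defaults
template = {"replication": "3", "heap_mb": "1024", "data_dir": "/data"}
# Configuration information received from the terminal (overrides two items)
config_info = {"replication": "2", "heap_mb": "4096"}

image_config = render_template(template, config_info)
```

The merged result would then feed the build of the container image for the target big data service.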
3. The method of claim 1, wherein the constructing of the service cluster by the master node comprises:
the master node installs a service image;
the master node sends an agent image to each agent node connected to the master node, so that each agent node installs the agent image;
the master node deploys a Kubernetes system in a node cluster formed by the master node and each agent node to form the service cluster.
4. The method of claim 3, wherein the agent image further comprises an address of the master node, and
after the master node sends the agent image to each agent node connected to the master node, the method further comprises:
the master node receives an address allocation request sent by each agent node according to the address of the master node, and sends to each agent node its respective address.
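As an illustrative (non-claim) sketch of claim 4's address allocation: each agent learns the master's address from its agent image, requests an address, and the master hands one out from a pool. The subnet, the class name `MasterNode`, and the idempotent-reply behavior are assumptions, not claim limitations:

```python
import ipaddress

class MasterNode:
    """Master-side sketch of address allocation: the master takes the
    first host address of the cluster subnet for itself and assigns
    subsequent addresses to agents on request."""

    def __init__(self, subnet="10.0.0.0/24"):
        self._pool = ipaddress.ip_network(subnet).hosts()
        self.address = str(next(self._pool))   # master takes 10.0.0.1
        self._assigned = {}

    def handle_allocation_request(self, agent_id):
        # Idempotent: an agent repeating its request gets the same address.
        if agent_id not in self._assigned:
            self._assigned[agent_id] = str(next(self._pool))
        return self._assigned[agent_id]

master = MasterNode()
a1 = master.handle_allocation_request("agent-1")
a2 = master.handle_allocation_request("agent-2")
```

In a real deployment the request/reply would travel over the network; only the bookkeeping is shown here.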
5. The method of claim 1, wherein the creating, by the master node, of the big data service in the service cluster according to the executable file and the component configuration file comprises:
the master node acquires, from the component configuration file, a storage path of at least one deployment file corresponding to each big data service component according to the component identifier of each big data service component in the executable file;
the master node acquires the at least one deployment file corresponding to each big data service component according to the storage path of the at least one deployment file corresponding to each big data service component, and loads the at least one deployment file corresponding to each big data service component into the service cluster to create the big data service.
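Purely as an illustration of the lookup in claim 5 (not the patented implementation): resolve each component identifier in the executable file to its deployment-file storage paths via the component configuration file. The dict-shaped file contents, field names, and the example paths are assumptions:

```python
def resolve_deployment_files(executable, component_config):
    """For each component identifier listed in the executable file,
    look up its deployment-file storage paths in the component
    configuration file and return them in order; the caller would
    then load each file into the service cluster."""
    deploy_files = []
    for component_id in executable["components"]:
        paths = component_config[component_id]   # KeyError -> unknown component
        deploy_files.extend(paths)
    return deploy_files

# Hypothetical executable file and component configuration file contents
executable = {"service": "log-analytics", "components": ["zookeeper", "kafka"]}
component_config = {
    "zookeeper": ["/deploy/zk/statefulset.yaml", "/deploy/zk/service.yaml"],
    "kafka": ["/deploy/kafka/statefulset.yaml"],
}

files = resolve_deployment_files(executable, component_config)
```

Loading each resolved file into the cluster (e.g. through the Kubernetes API) is the step the claim describes next.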
6. The method of claim 5, wherein before the loading of the at least one deployment file corresponding to each big data service component into the service cluster, the method further comprises:
the master node receives an isolation file, wherein the isolation file comprises computing resources and Kubernetes elements required by the big data service, and creates a container on a plurality of agent nodes in the service cluster, the container comprising the computing resources and the Kubernetes elements required by the big data service;
the loading of the at least one deployment file corresponding to each big data service component into the service cluster comprises:
the master node loads the at least one deployment file corresponding to each big data service component into the container.
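One plausible (but assumed, not claimed) reading of claim 6's isolation file, sketched here: translate it into Kubernetes-style manifests, namely a namespace for the big data service plus a ResourceQuota capping its computing resources. The field names mirror the real Kubernetes v1 API objects; the isolation-file layout and the helper name are hypothetical:

```python
def build_isolation_manifests(isolation):
    """Turn an isolation file into a namespace plus a ResourceQuota
    limiting the CPU and memory the big data service may request.
    Applying the manifests to the cluster is out of scope here."""
    ns = isolation["service"]
    namespace = {"apiVersion": "v1", "kind": "Namespace",
                 "metadata": {"name": ns}}
    quota = {"apiVersion": "v1", "kind": "ResourceQuota",
             "metadata": {"name": f"{ns}-quota", "namespace": ns},
             "spec": {"hard": {"requests.cpu": isolation["cpu"],
                               "requests.memory": isolation["memory"]}}}
    return [namespace, quota]

# Hypothetical isolation file: the resources this service may consume
isolation_file = {"service": "log-analytics", "cpu": "8", "memory": "32Gi"}
manifests = build_isolation_manifests(isolation_file)
```

The deployment files of each component would then be loaded into this isolated namespace rather than into the cluster at large.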
7. An apparatus for creating a big data service, the apparatus comprising:
a generating module, configured to generate a container image of each big data service according to a big data service component template corresponding to each of M big data services and configuration parameters of each big data service, wherein M is an integer greater than or equal to 1;
a construction module, configured to construct a service cluster, wherein the service cluster comprises the apparatus and N agent nodes, the apparatus is provided with a service image and each agent node is provided with an agent image, the service image serves as the operating system of the apparatus and the agent image serves as the operating system of the agent node, the service image and the agent image each comprise an image repository, the image repository comprises the container image of each big data service, and N is an integer greater than or equal to 1;
a creating module, configured to create a big data service in the service cluster according to an executable file and a component configuration file, wherein the executable file comprises a component identifier of at least one big data service component required by the big data service, and the component configuration file comprises a storage path of a deployment file of each big data service component.
8. The apparatus of claim 7, wherein the generating module comprises:
a first receiving unit, configured to receive configuration information corresponding to each of the M big data services sent by a terminal;
a first obtaining unit, configured to obtain a stored big data service component template corresponding to a target big data service, wherein the big data service component template corresponding to the target big data service comprises at least one configuration item and a default parameter corresponding to each configuration item, and the target big data service is any one of the M big data services;
an updating unit, configured to update the default parameter corresponding to each configuration item included in the big data service component template to the parameter of that configuration item included in the configuration information corresponding to the target big data service, so as to obtain the container image corresponding to the target big data service.
9. The apparatus of claim 8, wherein the construction module comprises:
an installation unit, configured to install the service image;
a sending unit, configured to send an agent image to each agent node connected to the apparatus, so that each agent node installs the agent image;
a deployment unit, configured to deploy a Kubernetes system in a node cluster formed by the apparatus and each agent node to form the service cluster.
10. The apparatus of claim 9, wherein the agent image further comprises an address of the apparatus, and the apparatus further comprises:
a receiving module, configured to receive an address allocation request sent by each agent node according to the address of the apparatus; and
a sending module, configured to send to each agent node its respective address.
11. The apparatus of claim 7, wherein the creating module comprises:
a second obtaining unit, configured to obtain, from the component configuration file, a storage path of at least one deployment file corresponding to each big data service component according to the component identifier of each big data service component in the executable file;
a loading unit, configured to acquire the at least one deployment file corresponding to each big data service component according to the storage path of the at least one deployment file corresponding to each big data service component, and to load the at least one deployment file corresponding to each big data service component into the service cluster to create the big data service.
12. The apparatus of claim 11, wherein the creating module further comprises a second receiving unit and a creating unit, wherein:
the second receiving unit is configured to receive an isolation file, wherein the isolation file comprises computing resources and Kubernetes elements required by a big data service;
the creating unit is configured to create a container on a plurality of agent nodes in the service cluster, the container comprising the computing resources and the Kubernetes elements required by the big data service;
the loading unit is configured to load the at least one deployment file corresponding to each big data service component into the container.
CN201910020151.6A 2019-01-09 2019-01-09 Method and device for creating big data service Active CN111427949B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910020151.6A CN111427949B (en) 2019-01-09 2019-01-09 Method and device for creating big data service

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910020151.6A CN111427949B (en) 2019-01-09 2019-01-09 Method and device for creating big data service

Publications (2)

Publication Number Publication Date
CN111427949A true CN111427949A (en) 2020-07-17
CN111427949B CN111427949B (en) 2023-10-20

Family

ID=71546599

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910020151.6A Active CN111427949B (en) 2019-01-09 2019-01-09 Method and device for creating big data service

Country Status (1)

Country Link
CN (1) CN111427949B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111930441A (en) * 2020-08-10 2020-11-13 上海熙菱信息技术有限公司 Consul-based configuration file management system and method
CN112099915A (en) * 2020-09-07 2020-12-18 紫光云(南京)数字技术有限公司 Soft load balancing dynamic issuing configuration method and system
CN113760442A (en) * 2020-10-19 2021-12-07 北京沃东天骏信息技术有限公司 Application running and accessing method, device and equipment
CN116909584A (en) * 2023-05-06 2023-10-20 广东国地规划科技股份有限公司 Deployment method, device, equipment and storage medium of space-time big data engine

Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090288082A1 (en) * 2008-05-19 2009-11-19 Novell, Inc. System and method for performing designated service image processing functions in a service image warehouse
CN103037002A (en) * 2012-12-21 2013-04-10 中标软件有限公司 Method and system for arranging server cluster in cloud computing cluster environment
WO2013072925A2 (en) * 2011-09-19 2013-05-23 Tata Consultancy Services Limited A computing platform for development and deployment of sensor data based applications and services
US20140053149A1 (en) * 2012-08-17 2014-02-20 Systex Software & Service Corporation Fast and automatic deployment method for cluster system
WO2017045424A1 (en) * 2015-09-18 2017-03-23 乐视控股(北京)有限公司 Application program deployment system and deployment method
CN106850621A (en) * 2017-02-07 2017-06-13 南京云创大数据科技股份有限公司 A kind of method based on container cloud fast construction Hadoop clusters
CN106888254A (en) * 2017-01-20 2017-06-23 华南理工大学 A kind of exchange method between container cloud framework based on Kubernetes and its each module
CN107493191A (en) * 2017-08-08 2017-12-19 深信服科技股份有限公司 A kind of clustered node and self scheduling container group system
CN108173919A (en) * 2017-12-22 2018-06-15 百度在线网络技术(北京)有限公司 Big data platform builds system, method, equipment and computer-readable medium
CN108196843A (en) * 2018-01-09 2018-06-22 成都睿码科技有限责任公司 Visualization Docker containers compile the O&M method of deployment automatically
CN108234164A (en) * 2016-12-14 2018-06-29 杭州海康威视数字技术股份有限公司 Clustered deploy(ment) method and device
CN108694053A (en) * 2018-05-14 2018-10-23 平安科技(深圳)有限公司 Build the method and terminal device of Kubernetes host nodes automatically based on Ansible tools
CN108809722A (en) * 2018-06-13 2018-11-13 郑州云海信息技术有限公司 A kind of method, apparatus and storage medium of deployment Kubernetes clusters
CN108958927A (en) * 2018-05-31 2018-12-07 康键信息技术(深圳)有限公司 Dispositions method, device, computer equipment and the storage medium of container application
CN109032760A (en) * 2018-08-01 2018-12-18 北京百度网讯科技有限公司 Method and apparatus for application deployment
CN109062655A (en) * 2018-06-05 2018-12-21 腾讯科技(深圳)有限公司 A kind of containerization cloud platform and server


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
SHENG Lebiao; ZHOU Qinglin; YOU Weiqian; ZHANG Yuqian: "Deployment Practice of Kubernetes High-Availability Clusters", Computer Knowledge and Technology, no. 26 *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111930441A (en) * 2020-08-10 2020-11-13 上海熙菱信息技术有限公司 Consul-based configuration file management system and method
CN111930441B (en) * 2020-08-10 2024-03-29 上海熙菱信息技术有限公司 Consul-based configuration file management system and method
CN112099915A (en) * 2020-09-07 2020-12-18 紫光云(南京)数字技术有限公司 Soft load balancing dynamic issuing configuration method and system
CN113760442A (en) * 2020-10-19 2021-12-07 北京沃东天骏信息技术有限公司 Application running and accessing method, device and equipment
CN116909584A (en) * 2023-05-06 2023-10-20 广东国地规划科技股份有限公司 Deployment method, device, equipment and storage medium of space-time big data engine
CN116909584B (en) * 2023-05-06 2024-05-24 广东国地规划科技股份有限公司 Deployment method, device, equipment and storage medium of space-time big data engine

Also Published As

Publication number Publication date
CN111427949B (en) 2023-10-20

Similar Documents

Publication Publication Date Title
CN111427949B (en) Method and device for creating big data service
CN112256425B (en) Load balancing method and system, computer cluster, information editing method and terminal
CN111190748B (en) Data sharing method, device, equipment and storage medium
CN110569220B (en) Game resource file display method and device, terminal and storage medium
CN111159604A (en) Picture resource loading method and device
CN111866140B (en) Fusion management device, management system, service calling method and medium
CN110535890B (en) File uploading method and device
CN110704324A (en) Application debugging method and device and storage medium
CN110636144A (en) Data downloading method and device
CN113190362A (en) Service calling method and device, computer equipment and storage medium
CN111064657B (en) Method, device and system for grouping concerned accounts
CN110086814B (en) Data acquisition method and device and storage medium
CN112612539A (en) Data model unloading method and device, electronic equipment and storage medium
CN111914985A (en) Configuration method and device of deep learning network model and storage medium
CN111694521B (en) Method, device and system for storing file
CN110971692B (en) Method and device for opening service and computer storage medium
CN114565388A (en) Method and device for updating consensus rounds, electronic equipment and storage medium
CN110324791B (en) Networking method and device, computer equipment and storage medium
CN111580892B (en) Method, device, terminal and storage medium for calling service components
CN113268234A (en) Page generation method, device, terminal and storage medium
CN113051015A (en) Page rendering method and device, electronic equipment and storage medium
CN113076452A (en) Application classification method, device, equipment and computer readable storage medium
CN112612540A (en) Data model configuration method and device, electronic equipment and storage medium
CN111010732A (en) Network registration method, device, electronic equipment and medium
CN111191254A (en) Access verification method and device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant