CN111427949B - Method and device for creating big data service


Info

Publication number
CN111427949B
CN111427949B (application CN201910020151.6A)
Authority
CN
China
Prior art keywords
big data
data service
component
proxy
service
Prior art date
Legal status
Active
Application number
CN201910020151.6A
Other languages
Chinese (zh)
Other versions
CN111427949A (en)
Inventor
韩卫
郭峰
刘中军
Current Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Original Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Hikvision Digital Technology Co Ltd
Priority to CN201910020151.6A
Publication of CN111427949A
Application granted
Publication of CN111427949B


Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The application relates to a method and a device for creating a big data service, and belongs to the field of data mining. The method comprises the following steps: a master node generates a container image of each of M big data services according to a big data service component template corresponding to each big data service and configuration parameters of each big data service; the master node builds a service cluster, where the service cluster comprises the master node and N proxy nodes, a service image is installed on the master node and a proxy image is installed on each proxy node; and the master node creates a big data service in the service cluster according to an executable file and a component configuration file, where the executable file comprises a component identifier of at least one big data service component and the component configuration file comprises a deployment file storage path of each big data service component. The application enables big data services to be created quickly.

Description

Method and device for creating big data service
Technical Field
The present application relates to the field of data mining, and in particular, to a method and apparatus for creating a big data service.
Background
Big data refers to a data set that cannot be captured, managed and processed by conventional software tools within a certain time range; it is a massive, high-growth-rate and diversified information asset that requires new processing modes in order to provide stronger decision-making power, insight discovery and process optimization capabilities.
A big data service is a service for processing big data, and it can provide the large amounts of hardware and software resources needed to process big data that conventional software cannot handle. When a user wants to process big data using a big data service, the big data service first needs to be created, so how to create a big data service is a problem that currently needs to be solved.
Disclosure of Invention
In order to create a big data service, the embodiments of the present application provide a method and an apparatus for creating a big data service. The technical solution is as follows:
in a first aspect, the present application provides a method of creating a big data service, the method comprising:
the master node generates a container image of each of M big data services according to a big data service component template corresponding to each big data service and configuration parameters of each big data service, wherein M is an integer greater than or equal to 1;
the master node builds a service cluster, wherein the service cluster comprises the master node and N proxy nodes, a service image is installed on the master node and a proxy image is installed on each proxy node, the service image is used as an operating system of the master node, the proxy image is used as an operating system of the proxy node, both the service image and the proxy image comprise an image repository, the image repository comprises a container image of each big data service, and N is an integer greater than or equal to 1;
The master node creates a big data service in the service cluster according to an executable file and a component configuration file, wherein the executable file comprises a component identifier of at least one big data service component required by the big data service, and the component configuration file comprises a deployment file storage path of each big data service component.
Optionally, the generating, by the master node, a container image of each big data service according to the big data service component template corresponding to each of the M big data services and the configuration parameters of each big data service includes:
the master node receives configuration information corresponding to each big data service in M big data services sent by a terminal;
the master node acquires a stored big data service component template corresponding to a target big data service, wherein the big data service component template corresponding to the target big data service comprises at least one configuration item and a default parameter corresponding to each configuration item, and the target big data service is any one of the M big data services;
and the master node updates the default parameter corresponding to each configuration item included in the big data service component template to the parameter of that configuration item included in the configuration information corresponding to the target big data service, so as to obtain a container image corresponding to the target big data service.
Optionally, the master node constructs a service cluster, including:
the master node installs a service image;
the master node sends a proxy image to each proxy node connected to the master node, so that each proxy node installs the proxy image;
the master node deploys a kubernetes system in a node cluster formed by the master node and each proxy node to form a service cluster.
Optionally, the proxy image further comprises an address of the master node,
after the master node sends the proxy mirror image to each proxy node connected with the master node, the method further comprises:
and the master node receives an address allocation request sent by each proxy node according to the address of the master node, and sends the address of each proxy node to each proxy node respectively.
Optionally, the master node creates a big data service in the service cluster according to the executable file and the component configuration file, including:
the master node obtains a storage path of at least one deployment file corresponding to each big data service component from the component configuration file according to the component identifier of each big data service component in the executable file;
And the master node acquires at least one deployment file corresponding to each big data service component according to the storage path of the at least one deployment file corresponding to each big data service component, and loads the at least one deployment file corresponding to each big data service component into the service cluster to create big data service.
Optionally, before loading the at least one deployment file corresponding to each big data service component into the service cluster, the method further includes:
the master node receives an isolation file, wherein the isolation file includes the computing resources and Kubernetes elements required by the big data service, and creates a container in a plurality of proxy nodes in the service cluster, wherein the container comprises the computing resources and the Kubernetes elements required by the big data service;
the loading the at least one deployment file corresponding to each big data service component into the service cluster includes:
and the main node loads at least one deployment file corresponding to each big data service component into the container.
In a second aspect, the present application provides an apparatus for creating a big data service, the apparatus comprising:
The generation module is used for generating a container image of each of M big data services according to a big data service component template corresponding to each big data service and configuration parameters of each big data service, wherein M is an integer greater than or equal to 1;
the construction module is used for constructing a service cluster, wherein the service cluster comprises the apparatus and N proxy nodes, the apparatus is provided with a service image and each proxy node is provided with a proxy image, the service image is used as an operating system of the apparatus, the proxy image is used as an operating system of the proxy node, both the service image and the proxy image comprise an image repository, the image repository comprises the container image of each big data service, and N is an integer greater than or equal to 1;
the creation module is used for creating the big data service in the service cluster according to an executable file and a component configuration file, wherein the executable file comprises the component identification of at least one big data service component required by the big data service, and the component configuration file comprises a deployment file storage path of each big data service component.
Optionally, the generating module includes:
The receiving unit is used for receiving configuration information corresponding to each big data service in the M big data services sent by the terminal;
the acquisition unit is used for acquiring a stored big data service component template corresponding to a target big data service, wherein the big data service component template corresponding to the target big data service comprises at least one configuration item and a default parameter corresponding to each configuration item, and the target big data service is any one of the M big data services;
and the updating unit is used for updating the default parameter corresponding to each configuration item included in the big data service component template to the parameter of that configuration item included in the configuration information corresponding to the target big data service, so as to obtain a container image corresponding to the target big data service.
Optionally, the building module includes:
an installation unit for installing a service image;
a transmitting unit configured to transmit a proxy image to each proxy node connected to the apparatus, so that the proxy image is installed by each proxy node;
and the deployment unit is used for deploying the kubernetes system in the node cluster formed by the device and each proxy node so as to form a service cluster.
Optionally, the proxy image further includes an address of the master node, and the apparatus further includes:
the receiving module is used for receiving an address allocation request sent by each proxy node according to the address of the device;
and the sending module is used for respectively sending the address of each proxy node to each proxy node.
Optionally, the creating module includes:
the second acquisition unit is used for acquiring a storage path of at least one deployment file corresponding to each big data service component from the component configuration file according to the component identifier of each big data service component in the executable file;
the loading unit is used for acquiring at least one deployment file corresponding to each big data service component according to the storage path of the at least one deployment file corresponding to each big data service component, and loading the at least one deployment file corresponding to each big data service component into the service cluster so as to create big data service.
Optionally, the creating module further includes: a second receiving unit and a creating unit;
the second receiving unit is used for receiving an isolation file, wherein the isolation file includes the computing resources and Kubernetes elements required by the big data service;
The creation unit is configured to create a container in a plurality of proxy nodes in the service cluster, where the container includes computing resources and Kubernetes elements required by the big data service;
and the loading unit is used for loading at least one deployment file corresponding to each big data service component into the container.
In a third aspect, the present application provides a computer readable storage medium having a computer program stored therein, which when executed by a processor implements the method steps of the first aspect or any of the alternatives provided by the first aspect.
The technical scheme provided by the embodiment of the application can have the following beneficial effects:
In the present application, a master node first constructs a service cluster, and the operating systems of the master node and the proxy nodes in the service cluster include the container images of M big data services, where M is an integer greater than or equal to 1. When a big data service is to be created, it can therefore be created quickly in the service cluster according to an executable file and a component configuration file, where the executable file comprises the component identifier of at least one big data service component of the big data service and the component configuration file comprises the deployment file storage path of each big data service component, thereby realizing quick creation of the big data service.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application as claimed.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application.
FIG. 1 is a flow chart of a method for creating big data services provided by an embodiment of the present application;
FIG. 2 is a flow chart of another method for creating big data services provided by an embodiment of the present application;
FIG. 3 is a schematic diagram of a service cluster according to an embodiment of the present application;
FIG. 4 is a flow chart of a method for constructing a service cluster according to an embodiment of the present application;
FIG. 5 is a schematic diagram of an apparatus for creating big data services according to an embodiment of the present application;
FIG. 6 is a schematic diagram of a terminal structure according to an embodiment of the present application.
Specific embodiments of the present application have been shown by way of the above drawings and will be described in more detail below. The drawings and the written description are not intended to limit the scope of the inventive concepts in any way, but rather to illustrate the inventive concepts to those skilled in the art by reference to the specific embodiments.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary examples do not represent all implementations consistent with the application. Rather, they are merely examples of apparatus and methods consistent with aspects of the application as detailed in the accompanying claims.
Referring to fig. 1, an embodiment of the present application provides a method of creating a big data service, the method comprising:
step 101: and the master node generates a container mirror image of each big data service according to the big data service component template corresponding to each big data service in the M big data services and the configuration parameters of each big data service, wherein M is an integer greater than or equal to 1.
Step 102: The master node builds a service cluster, where the service cluster comprises the master node and N proxy nodes, a service image is installed on the master node and a proxy image is installed on each proxy node, the service image is used as the operating system of the master node, the proxy images are used as the operating systems of the proxy nodes, both the service image and the proxy images comprise an image repository, each image repository comprises a container image of each big data service, and N is an integer greater than or equal to 1.
Step 103: the master node creates a big data service in the service cluster according to an executable file and a component configuration file, the executable file comprising a component identification of at least one big data service component of one big data service, the component configuration file comprising a deployment file storage path of each big data service component.
Referring to fig. 2, an embodiment of the present application provides a method of creating a big data service, the method comprising:
step 201: and the master node generates a container mirror image of each big data service according to the big data service component template corresponding to each big data service component in the M big data services and the configuration parameters of each big data service component, wherein M is an integer greater than or equal to 1.
Referring to fig. 3, before this step is performed, a master node and N proxy nodes may be deployed, where N is an integer greater than or equal to 1, and the master node may then be connected with each of the N proxy nodes to form a cluster. Alternatively, a network cable may be used to connect the master node with each proxy node.
Specifically, N+1 nodes may be deployed, one node is selected, and the selected node is connected with each of the other nodes; a user may then log in to a portal page of the selected node through a terminal, set the role of the selected node as the master node on the portal page, and set the roles of the other N nodes as proxy nodes.
Alternatively, the master node may be a device such as a server or a computer, and each proxy node may be a device such as a server or a computer. The computing power of the master node is higher than the computing power of the proxy node.
This step may be implemented through the following operations 2011 to 2013:
2011: The master node receives configuration information corresponding to each of the M big data service components, sent by the terminal.
When a user needs to create a big data service, the configuration information and component identifiers of the M big data service components can be input on the corresponding terminal. For any big data service component, its configuration information includes parameters of at least one configuration item, and the configuration items include at least one of a container name, an image path, a storage path, a number of replicas, and the like.
Alternatively, the M big data service components may include hadoop, zookeeper, kafka, spark, hbase, etc. components.
The terminal may acquire configuration information and component identifiers of the M big data service components input by the user, and send a generation request message to the master node, where the generation request message may include the configuration information and component identifiers of the M big data service components. The master node receives the generation request message, and extracts configuration information and component identifications of the M big data service components from the generation request message.
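Purely as an illustration (the patent does not fix a wire format for the generation request message), the configuration information it carries might be organized along the following lines; every field name, registry path and value below is a hypothetical example, not text from the patent.

```yaml
# Hypothetical generation request payload (field names and values are illustrative)
components:
  - component_id: zookeeper
    configuration:
      container_name: zookeeper
      image_path: registry.local/bigdata/zookeeper:3.4.10   # assumed image path
      storage_path: /var/lib/zookeeper/data
      replicas: 3
  - component_id: kafka
    configuration:
      container_name: kafka
      image_path: registry.local/bigdata/kafka:2.1.0        # assumed image path
      storage_path: /var/lib/kafka/data
      replicas: 3
```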
The master node may locally store a correspondence between the component identifier and the big data service component template, where the correspondence includes the component identifier and the big data service component template of each big data service component of the M big data service components.
Optionally, the big data service component template includes at least one configuration item and a default parameter corresponding to each configuration item in the at least one configuration item. For example, if the configuration item is a container name, the default parameter corresponding to the configuration item is a default container name, and if the configuration item is an image path, the default parameter corresponding to the configuration item is a default image path.
For example, a big data service component template corresponding to the big data service component zookeeper is provided in this step as follows. The configuration items in the big data service component template include the container name ENV ZK_USER, the storage path ZK_DATA_DIR, the data log path ZK_DATA_LOG_DIR of zookeeper under lib, the log path ZK_LOG_DIR of zookeeper, the environment variable path JAVA_HOME, the installation package path ARG ZK_DIST, and the like; the default parameter corresponding to the container name ENV ZK_USER is zookeeper, the default parameter corresponding to the storage path ZK_DATA_DIR is ZK_DATA_DIR, the default parameter corresponding to the data log path ZK_DATA_LOG_DIR of zookeeper under lib is ZK_DATA_LOG_DIR, the default parameter corresponding to the log path ZK_LOG_DIR of zookeeper is ZK_LOG_DIR, the default parameter corresponding to the environment variable path JAVA_HOME is JAVA_HOME, and the default parameter corresponding to the installation package path ARG ZK_DIST is ARG ZK_DIST.
Step 2012: the master node acquires a stored big data service component template corresponding to a target big data service component, wherein the target big data service component is any big data service component in the M big data service components.
In this step, the master node may obtain, according to the component identifier of the target big data service component, a big data service component template corresponding to the target big data service component from the stored correspondence between the component identifier and the big data service component template.
Step 2013: The master node updates the default parameter corresponding to each configuration item included in the big data service component template to the parameter of that configuration item included in the configuration information corresponding to the target big data service component, so as to obtain a container image corresponding to the target big data service component.
For example, assume that the target big data service component is zookeeper, and the configuration information of zookeeper includes the container name ENV ZK_USER with corresponding parameter "zookeeper", the storage path ZK_DATA_DIR with corresponding parameter "/var/lib/zookeeper/data", the data log path ZK_DATA_LOG_DIR of zookeeper under lib with corresponding parameter "/var/lib/zookeeper/log", the zookeeper log path ZK_LOG_DIR with corresponding parameter "/var/log/zookeeper", the environment variable path JAVA_HOME with corresponding parameter "/usr/lib/jvm/java-8-openjdk-amd64", and the installation package path ARG ZK_DIST with corresponding parameter "zookeeper-3.4.10".
The master node then updates the default parameters corresponding to ENV ZK_USER, ZK_DATA_DIR, ZK_DATA_LOG_DIR, ZK_LOG_DIR, JAVA_HOME and ARG ZK_DIST to, respectively, the parameter "zookeeper" corresponding to ENV ZK_USER, the parameter "/var/lib/zookeeper/data" corresponding to ZK_DATA_DIR, the parameter "/var/lib/zookeeper/log" corresponding to ZK_DATA_LOG_DIR, the parameter "/var/log/zookeeper" corresponding to ZK_LOG_DIR, the parameter "/usr/lib/jvm/java-8-openjdk-amd64" corresponding to JAVA_HOME, and the parameter "zookeeper-3.4.10" corresponding to ARG ZK_DIST, thereby obtaining the container image corresponding to the big data service component zookeeper.
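Pictured as a Dockerfile-style fragment, the zookeeper template described above could look roughly as follows; the base image and the exact directives are assumptions for illustration, not text quoted from the patent.

```Dockerfile
# Hedged sketch of a zookeeper component template.
FROM ubuntu:16.04                        # assumed Linux base image

# Configuration items with their default parameters. When generating the
# container image, the master node replaces each default with the parameter
# received from the terminal, e.g. ZK_DATA_DIR -> /var/lib/zookeeper/data.
ENV ZK_USER=zookeeper                    # container name
ENV ZK_DATA_DIR=ZK_DATA_DIR              # storage path (default placeholder)
ENV ZK_DATA_LOG_DIR=ZK_DATA_LOG_DIR      # data log path of zookeeper under lib
ENV ZK_LOG_DIR=ZK_LOG_DIR                # zookeeper log path
ENV JAVA_HOME=JAVA_HOME                  # environment variable path
ARG ZK_DIST=ZK_DIST                      # installation package
```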
Step 202: the master node builds a service cluster comprising the master node and the N proxy nodes.
A service image is installed on the master node and a proxy image is installed on each proxy node; the service image is used as the operating system of the master node and the proxy image is used as the operating system of the proxy node; both the service image and the proxy image comprise an image repository, the image repository comprises a container image of each big data service, and N is an integer greater than or equal to 1.
Referring to fig. 4, this step may build a service cluster through operations 2021 to 2027. The operations of 2021 to 2027 may be:
2021: the master node installs the service image and sends the proxy image to each proxy node connected to the master node.
Optionally, the master node includes a base image of a Linux distribution, and generates the service image and the proxy image by combining the base image with an image repository, where the image repository includes the container image of each big data service. The proxy image also includes the address of the master node.
Alternatively, the address may be an IP address or the like.
2022: for each of the N proxy nodes, the proxy node receives the proxy image and installs the proxy image.
2023: the proxy node sends an address allocation request to the master node, the address allocation request being for requesting the master node to allocate an address for the proxy node.
The proxy node may extract the address of the master node from the proxy image and send an address allocation request to the master node according to the address of the master node.
2024: the master node receives the address allocation request and sends the address of the proxy node to the proxy node.
After receiving the address allocation request, the master node may allocate an address to the proxy node, where the address may be an IP address or the like.
2025: the proxy node receives the address and sets its address to the received address, and transmits resource information to the master node, the resource information including at least one of a number of CPUs, a memory space size, and a disk space size of the proxy node.
2026: the master node receives the resource information of the proxy node and stores the resource information of the proxy node.
2027: the master node deploys the kubernetes system in a node cluster composed of the master node and each proxy node to form a service cluster.
The Kubernetes system includes Kubernetes elements, which may include elements such as hostnetwork, macvlan bridge, ovs, calico, flannel, and canal.
hostnetwork has the best cross-node bandwidth performance because it uses the host network stack in the communication process.
macvlan bypasses the host network stack and delivers packets directly from the host network card, so its performance is higher than that of the other non-hostnetwork solutions; however, because it bypasses the host network stack there is no control flow, so security policies and the like are difficult to enforce.
ovs uses veth pairs to connect the container network namespace directly to the ovs bridge, and the ovs datapath matches and processes packets in kernel space and sends them out through the physical network card, so packets do not pass through the host namespace; however, a first packet that fails to match in kernel space must be sent to user space for processing, and ovs control is required, which increases packet-control overhead, so its traffic performance is lower than that of macvlan.
flannel vxlan and canal both use tunneling technology, so packets must be encapsulated and decapsulated for cross-host communication, and the performance loss is obvious. The difference between flannel and canal is that flannel uses the linux bridge for the container network namespace to communicate with the host and with containers on the same node, whereas canal uses veth; using veth directly gives slightly higher network performance than the linux bridge.
Considering the performance and security requirements of big data, together with the isolation spaces established in the subsequent steps to realize simple multi-tenant deployment, the canal solution is finally selected.
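Because canal combines flannel networking with Calico-style network policy enforcement, tenant isolation can additionally be expressed with standard Kubernetes NetworkPolicy objects. The manifest below is only an illustrative sketch (the namespace name is hypothetical) and is not part of the claimed method.

```yaml
# Illustrative only: allow pods in a hypothetical tenant namespace to receive
# traffic solely from pods in the same namespace; canal/Calico enforces it.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: tenant-a-isolation
  namespace: tenant-a          # hypothetical tenant namespace
spec:
  podSelector: {}              # applies to all pods in the namespace
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector: {}      # only pods of the same namespace may connect
```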
Step 203: the terminal sends an executable file and a component configuration file to the master node to create a big data service in the service cluster, wherein the executable file comprises a component identification of at least one big data service component, and the component configuration file comprises a deployment file storage path of each big data service component.
Alternatively, the component configuration file may be a yaml file, and the terminal may generate the executable file and the component configuration file through the following operations 2031 to 2033:
2031: the terminal generates an executable file comprising a component identification of at least one big data service component.
Optionally, the user may configure a component identifier of a big data service component required for a big data service on the terminal, and the terminal generates the executable file according to the component identifier of each big data service component configured by the user.
2032: The terminal creates a chart repository from the executable file, the chart repository including at least one deployment file corresponding to each of the at least one big data service component.
The at least one deployment file corresponding to the big data service component includes implementation code for implementing the big data service component.
The embodiment further includes a server, and the server stores a correspondence between component identifiers and deployment files, where the correspondence stores component identifiers of each big data service component and at least one corresponding deployment file.
In this step, the terminal may send a chart repository establishment request to the server, the chart repository establishment request carrying the executable file. The server receives the chart repository establishment request, extracts the executable file from it, acquires at least one deployment file corresponding to each big data service component from the correspondence between component identifiers and deployment files according to the component identifier of each big data service component included in the executable file, creates a chart repository, and sends the repository identifier of the chart repository to the terminal, where the chart repository includes the acquired deployment files of each big data service component.
2033: The terminal generates a component configuration file according to the chart repository, the component configuration file including the component identifier of each big data service component and the storage path of its deployment files.
The terminal receives the repository identifier of the chart repository sent by the server, and sends a path acquisition request carrying the repository identifier to the server. The server receives the path acquisition request, extracts the repository identifier from it, determines the chart repository corresponding to the repository identifier, acquires the storage path of each deployment file in the chart repository, and sends the storage path of each deployment file in the chart repository to the terminal. The terminal receives the storage path of each deployment file in the chart repository and generates a component configuration file including the storage path of each deployment file in the chart repository.
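The patent only states that the component configuration file records the deployment-file storage paths; as a hedged sketch under that assumption, such a yaml file might look like the following, with all keys and paths being illustrative.

```yaml
# Hypothetical component configuration file: maps each component identifier
# to the storage paths of its deployment files in the chart repository.
components:
  zookeeper:
    deployment_files:
      - charts/zookeeper/templates/deployment.yaml
      - charts/zookeeper/templates/service.yaml
  kafka:
    deployment_files:
      - charts/kafka/templates/deployment.yaml
      - charts/kafka/templates/service.yaml
```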
The chart repository is managed with Helm, the Kubernetes package management tool, and is used for managing a large number of component configuration files. It also solves the problems of service dependencies and startup order, which facilitates management.
Projects are deployed and released based on component configuration files. Most current micro-service or modular projects are split into multiple components for deployment; each component may correspond to one deployment.yaml, one service.yaml and one ingress.yaml, and there may be various dependency relationships among them. Big data services involve many components, so the number of component configuration files to maintain is large and they are stored in a scattered way; if the project has to be restored, the deployment order, the dependency relationships and the like are difficult to know. With Helm, these problems can be solved through centralized storage of the component configuration files, project-level packaging, and management of the dependencies between components.
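For concreteness, a minimal manifest pair of the kind referred to above could look as follows for a zookeeper component; the image path, labels and port are illustrative assumptions rather than values taken from the patent.

```yaml
# Illustrative deployment.yaml for a hypothetical zookeeper component
apiVersion: apps/v1
kind: Deployment
metadata:
  name: zookeeper
spec:
  replicas: 3
  selector:
    matchLabels:
      app: zookeeper
  template:
    metadata:
      labels:
        app: zookeeper
    spec:
      containers:
        - name: zookeeper
          image: registry.local/bigdata/zookeeper:3.4.10   # assumed image path
          ports:
            - containerPort: 2181
---
# Illustrative service.yaml exposing the component inside the cluster
apiVersion: v1
kind: Service
metadata:
  name: zookeeper
spec:
  selector:
    app: zookeeper
  ports:
    - port: 2181
      targetPort: 2181
```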
Helm is a package management tool for Kubernetes that is used to simplify the deployment and management of Kubernetes applications. Helm can be compared to the yum tool of CentOS. Helm has the following basic concepts:
Chart: a Helm-managed installation package, which contains the resources to be deployed. A Chart can be compared to the rpm file used by CentOS yum. Each Chart contains the following two parts: the basic package description file Chart.yaml, and one or more Kubernetes manifest file templates placed in the templates directory.
Release: an instance of a Chart deployed on a Kubernetes cluster. On the same cluster, one Chart may be installed many times, and each installation creates a new Release with its own Release name. For example, with a MySQL Chart, if two databases are to be run on the server, the Chart may be installed twice, and each installation generates its own Release.
Repository: the Chart repository, used for publishing and storing Charts.
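To make the Chart concept concrete, a minimal chart for one big data service component might be laid out as follows; the chart name and version numbers are illustrative assumptions.

```yaml
# Illustrative layout of a chart for a hypothetical zookeeper component:
#
#   zookeeper/
#   ├── Chart.yaml          # basic description file of the package
#   ├── values.yaml         # default configuration values
#   └── templates/          # Kubernetes manifest file templates
#       ├── deployment.yaml
#       └── service.yaml
#
# Illustrative Chart.yaml content:
apiVersion: v2
name: zookeeper
description: Hypothetical chart for the zookeeper big data service component
version: 0.1.0
appVersion: "3.4.10"
```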
Optionally, the terminal may also send an isolation file to the server, where the isolation file includes the computing resources and Kubernetes elements required by the big data service.
When a user's big data service is deployed, an isolation environment needs to be created for it; the computing resources and Kubernetes elements in the isolation environment are used to run the big data service, and the resources used by the big data service are effectively isolated, which improves resource utilization.
By default the isolation environment has no resource quota, so a quota needs to be set for it. The quota has two aspects: a computing resource quota and a limit on the number of Kubernetes elements. The computing resources mainly comprise cpu and memory resources, and the Kubernetes elements comprise pod, service, replicationcontroller, resourcequota and persistentvolumeclaim elements.
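In Kubernetes such a quota is expressed with a ResourceQuota object. The manifest below is a hedged sketch: the namespace name, CPU and memory amounts and object counts are assumed for illustration and are not values given by the patent.

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: tenant-a-quota
  namespace: tenant-a              # hypothetical tenant isolation environment
spec:
  hard:
    # computing resource quota (illustrative values)
    requests.cpu: "8"
    requests.memory: 32Gi
    limits.cpu: "16"
    limits.memory: 64Gi
    # limits on the number of Kubernetes elements (illustrative values)
    pods: "50"
    services: "20"
    replicationcontrollers: "20"
    resourcequotas: "1"
    persistentvolumeclaims: "20"
```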
When a group of big data services is created, an isolation environment is created first, and CPU and memory are isolated within a single tenant mainly by means of the Kubernetes isolation environment technology. Resources may be over-committed for the entire container platform. For example, suppose the number of cpu cores of the whole platform is N and the memory is M, the cpu over-commit ratio is s (1 < s < 16), and the memory over-commit ratio is t (1 < t < 2). Assuming that the average number of cpu cores allocated to each tenant is a and the allocated memory is b, the cpu of the whole platform can serve N×s/a tenants and the memory of the whole platform can serve M×t/b tenants. The over-commit ratio needs to be set according to how busy the services on the actual platform are; if the platform is idle, the ratio can be increased so that more tenants can share the resources.
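As a purely illustrative calculation with assumed numbers (the patent gives only the formulas, not these values):

```latex
% Illustrative only: platform with N = 64 cores, M = 512 GiB of memory,
% over-commit ratios s = 4 and t = 1.5, per-tenant allocation a = 8 cores, b = 32 GiB.
\[
  \text{CPU-limited tenants} = \frac{N \times s}{a} = \frac{64 \times 4}{8} = 32,
  \qquad
  \text{memory-limited tenants} = \frac{M \times t}{b} = \frac{512 \times 1.5}{32} = 24.
\]
```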
Step 204: the master node creates a big data service in the service cluster according to the executable file and the component configuration file.
Alternatively, a Kubernetes namespace technique may be used in this step. In this step, the master node obtains a storage path of at least one deployment file corresponding to each big data service component from the component configuration file according to the component identifier of each big data service component in the executable file, obtains at least one deployment file corresponding to each big data service component from the server according to the storage path of at least one deployment file corresponding to each big data service component, and loads at least one deployment file corresponding to each big data service component into the service cluster to create a big data service.
In this step, the master node may further receive an isolation file sent by the terminal, select, in the service cluster, a plurality of proxy nodes according to the computing resources and Kubernetes elements required by the big data service included in the isolation file, where the plurality of proxy nodes include the computing resources and Kubernetes elements required by the big data service, and create, in the plurality of proxy nodes, a container including the computing resources and Kubernetes elements required by the big data service, where the container is the isolation environment corresponding to the big data service. The master node loads at least one deployment file corresponding to each big data service component into the container to create the big data service, and the big data service runs in the container.
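Expressed with standard Kubernetes and Helm tooling, this loading step could look roughly like the commands below; the namespace, file and chart names are hypothetical, and the patent does not prescribe these exact commands.

```sh
# Hypothetical sketch: create the isolation environment, apply its quota,
# then load each component's deployment files into it.
kubectl create namespace tenant-a
kubectl apply -f tenant-a-quota.yaml -n tenant-a
helm install zookeeper ./charts/zookeeper --namespace tenant-a
helm install kafka ./charts/kafka --namespace tenant-a
```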
After creating the big data service, the master node may run the big data service in the container using the plurality of proxy nodes. The proxy image in a proxy node serves as its operating system and is also the operating system on which the big data service runs; during the running of the big data service, the container images of the big data service stored in the image repository of the proxy image can be used, and these container images are used to provide services to users.
In the embodiment of the present application, the master node generates a container image of each of M big data services according to the big data service component template corresponding to each big data service and the configuration parameters of each big data service, where M is an integer greater than or equal to 1. The master node then builds a service cluster comprising the master node and N proxy nodes, where the service image is installed on the master node, a proxy image is installed on each proxy node, the service image serves as the operating system of the master node, the proxy image serves as the operating system of the proxy node, both images include an image repository containing the container image of each big data service, and N is an integer greater than or equal to 1. The master node then creates a big data service in the service cluster according to an executable file and a component configuration file, the executable file comprising the component identifier of at least one big data service component of the big data service and the component configuration file comprising the deployment file storage path of each big data service component. When the big data service is created, only the deployment files of each of its big data service components need to be loaded into the service cluster, and the container images required by those components at run time were already placed in the service cluster when it was built, so the big data service is created quickly. In addition, when the big data service is created, a container can be created for it, and the resources required by the big data service are isolated through the container, which improves resource utilization.
The following are examples of the apparatus of the present application that may be used to perform the method embodiments of the present application. For details not disclosed in the embodiments of the apparatus of the present application, please refer to the embodiments of the method of the present application.
Referring to fig. 5, an embodiment of the present application provides an apparatus 300 for creating big data services, the apparatus 300 comprising:
the generating module 301 is configured to generate a container image of each big data service according to a big data service component template corresponding to each big data service in M big data services and a configuration parameter of each big data service, where M is an integer greater than or equal to 1;
a building module 302, configured to build a service cluster, where the service cluster includes the device and N proxy nodes, the device is provided with a service image and each proxy node is provided with a proxy image, the service image is used as an operating system of the device, the proxy image is used as an operating system of the proxy node, the service image and the proxy image both include an image repository, the image repository includes a container image of each big data service, and N is an integer greater than or equal to 1;
a creating module 303, configured to create a big data service in the service cluster according to an executable file and a component configuration file, where the executable file includes a component identifier of at least one big data service component, and the component configuration file includes a deployment file storage path of each big data service component.
Optionally, the generating module 301 includes:
the receiving unit is used for receiving configuration information corresponding to each big data service in the M big data services sent by the terminal;
the acquisition unit is used for acquiring a stored big data service component template corresponding to a target big data service, wherein the big data service component template corresponding to the target big data service comprises at least one configuration item and a default parameter corresponding to each configuration item, and the target big data service is any one of the M big data services;
and the updating unit is used for updating the default parameter corresponding to each configuration item included in the big data service component template to the parameter of that configuration item included in the configuration information corresponding to the target big data service, so as to obtain a container image corresponding to the target big data service.
Optionally, the building module 302 includes:
an installation unit for installing a service image;
a transmitting unit configured to transmit a proxy image to each proxy node connected to the apparatus, so that the proxy image is installed by each proxy node;
and the deployment unit is used for deploying the kubernetes system in the node cluster formed by the device and each proxy node so as to form a service cluster.
Optionally, the proxy image further includes an address of the master node, and the apparatus 300 further includes:
the receiving module is used for receiving an address allocation request sent by each proxy node according to the address of the device;
and the sending module is used for respectively sending the address of each proxy node to each proxy node.
Optionally, the creating module 303 includes:
the second acquisition unit is used for acquiring a storage path of at least one deployment file corresponding to each big data service component from the component configuration file according to the component identifier of each big data service component in the executable file;
the loading unit is used for acquiring at least one deployment file corresponding to each big data service component according to the storage path of the at least one deployment file corresponding to each big data service component, and loading the at least one deployment file corresponding to each big data service component into the service cluster so as to create big data service.
Optionally, the creating module 303 further includes: a second receiving unit and a creating unit;
the second receiving unit is used for receiving an isolation file, wherein the isolation file includes the computing resources and Kubernetes elements required by the big data service;
The creation unit is configured to create a container in a plurality of proxy nodes in the service cluster, where the container includes computing resources and Kubernetes elements required by the big data service;
and the loading unit is used for loading at least one deployment file corresponding to each big data service component into the container.
In the embodiment of the present application, the generating module generates a container image of each of M big data services according to the big data service component template corresponding to each big data service and the configuration parameters of each big data service, where M is an integer greater than or equal to 1. The building module then builds a service cluster, where the service cluster includes the device 300 and N proxy nodes, the device 300 is provided with a service image and each proxy node is provided with a proxy image, the service image is used as the operating system of the device 300, the proxy image is used as the operating system of the proxy node, both the service image and the proxy image include an image repository, the image repository includes the container image of each big data service, and N is an integer greater than or equal to 1. The creating module then creates a big data service in the service cluster according to an executable file and a component configuration file, where the executable file includes the component identifier of at least one big data service component of the big data service and the component configuration file includes the deployment file storage path of each big data service component. A big data service can thereby be created quickly.
The specific manner in which the various modules perform operations in the apparatus of the above embodiment has been described in detail in the embodiments of the method, and will not be described in detail here.
Fig. 6 shows a block diagram of a terminal 400 according to an exemplary embodiment of the present invention. The terminal 400 may be a computer or the like. In general, the terminal 400 includes: a processor 401 and a memory 402.
Processor 401 may include one or more processing cores such as a 4-core processor, an 8-core processor, etc. The processor 401 may be implemented in at least one hardware form of DSP (Digital Signal Processing ), FPGA (Field-Programmable Gate Array, field programmable gate array), PLA (Programmable Logic Array ). The processor 401 may also include a main processor, which is a processor for processing data in an awake state, also called a CPU (Central Processing Unit ), and a coprocessor; a coprocessor is a low-power processor for processing data in a standby state. In some embodiments, the processor 401 may integrate a GPU (Graphics Processing Unit, image processor) for rendering and drawing of content required to be displayed by the display screen. In some embodiments, the processor 401 may also include an AI (Artificial Intelligence ) processor for processing computing operations related to machine learning.
Memory 402 may include one or more computer-readable storage media, which may be non-transitory. Memory 402 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 402 is used to store at least one instruction for execution by processor 401 to implement the method of creating a big data service provided by the method embodiments of the present application.
In some embodiments, the terminal 400 may further optionally include: a peripheral interface 403 and at least one peripheral. The processor 401, memory 402, and peripheral interface 403 may be connected by a bus or signal line. The individual peripheral devices may be connected to the peripheral device interface 403 via buses, signal lines or a circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 404, a touch display 405, a camera 406, audio circuitry 407, a positioning component 408, and a power supply 409.
Peripheral interface 403 may be used to connect at least one Input/Output (I/O) related peripheral to processor 401 and memory 402. In some embodiments, processor 401, memory 402, and peripheral interface 403 are integrated on the same chip or circuit board; in some other embodiments, either or both of the processor 401, memory 402, and peripheral interface 403 may be implemented on separate chips or circuit boards, which is not limited in this embodiment.
The Radio Frequency circuit 404 is configured to receive and transmit RF (Radio Frequency) signals, also known as electromagnetic signals. The radio frequency circuitry 404 communicates with a communication network and other communication devices via electromagnetic signals. The radio frequency circuit 404 converts an electrical signal into an electromagnetic signal for transmission, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 404 includes: antenna systems, RF transceivers, one or more amplifiers, tuners, oscillators, digital signal processors, codec chipsets, subscriber identity module cards, and so forth. The radio frequency circuitry 404 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocol includes, but is not limited to: the world wide web, metropolitan area networks, intranets, generation mobile communication networks (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity ) networks. In some embodiments, the radio frequency circuitry 404 may also include NFC (Near Field Communication ) related circuitry, which is not limiting of the application.
The display screen 405 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 405 is a touch display screen, the display screen 405 also has the ability to collect touch signals at or above the surface of the display screen 405. The touch signal may be input as a control signal to the processor 401 for processing. At this time, the display screen 405 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, the display 405 may be one, providing a front panel of the terminal 400; in other embodiments, the display 405 may be at least two, and disposed on different surfaces of the terminal 400 or in a folded design; in still other embodiments, the display 405 may be a flexible display disposed on a curved surface or a folded surface of the terminal 400. Even more, the display screen 405 may be arranged in an irregular pattern that is not rectangular, i.e. a shaped screen. The display 405 may be made of LCD (Liquid Crystal Display ), OLED (Organic Light-Emitting Diode) or other materials.
The camera assembly 406 is used to capture images or video. Optionally, the camera assembly 406 includes a front camera and a rear camera. Typically, the front camera is disposed on the front panel of the terminal and the rear camera is disposed on the rear surface of the terminal. In some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth camera, a wide-angle camera, and a telephoto camera, so that the main camera and the depth camera can be fused to realize a background blurring function, and the main camera and the wide-angle camera can be fused to realize panoramic shooting and Virtual Reality (VR) shooting functions or other fused shooting functions. In some embodiments, camera assembly 406 may also include a flash. The flash may be a single-color-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash is a combination of a warm-light flash and a cold-light flash, and can be used for light compensation at different color temperatures.
The audio circuit 407 may include a microphone and a speaker. The microphone is used for collecting sound waves of users and environments, converting the sound waves into electric signals, and inputting the electric signals to the processor 401 for processing, or inputting the electric signals to the radio frequency circuit 404 for realizing voice communication. For the purpose of stereo acquisition or noise reduction, a plurality of microphones may be respectively disposed at different portions of the terminal 400. The microphone may also be an array microphone or an omni-directional pickup microphone. The speaker is used to convert electrical signals from the processor 401 or the radio frequency circuit 404 into sound waves. The speaker may be a conventional thin film speaker or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, not only the electric signal can be converted into a sound wave audible to humans, but also the electric signal can be converted into a sound wave inaudible to humans for ranging and other purposes. In some embodiments, audio circuit 407 may also include a headphone jack.
The location component 408 is used to locate the current geographic location of the terminal 400 to enable navigation or LBS (Location Based Service, location-based services). The positioning component 408 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, or the Galileo system of Europe.
The power supply 409 is used to power the various components in the terminal 400. The power supply 409 may be an alternating current, a direct current, a disposable battery, or a rechargeable battery. When power supply 409 comprises a rechargeable battery, the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery. The wired rechargeable battery is a battery charged through a wired line, and the wireless rechargeable battery is a battery charged through a wireless coil. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, the terminal 400 further includes one or more sensors 410. The one or more sensors 410 include, but are not limited to: acceleration sensor 411, gyroscope sensor 412, pressure sensor 413, fingerprint sensor 414, optical sensor 415, and proximity sensor 416.
The acceleration sensor 411 may detect the magnitudes of the accelerations on the three coordinate axes of the coordinate system established with respect to the terminal 400. For example, the acceleration sensor 411 may be used to detect the components of gravitational acceleration on the three coordinate axes. The processor 401 may control the touch display screen 405 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal acquired by the acceleration sensor 411. The acceleration sensor 411 may also be used to collect game or user motion data.
The gyro sensor 412 may detect the body direction and rotation angle of the terminal 400, and may cooperate with the acceleration sensor 411 to collect the user's 3D motion on the terminal 400. Based on the data collected by the gyro sensor 412, the processor 401 may implement the following functions: motion sensing (e.g., changing the UI according to a tilting operation by the user), image stabilization during shooting, game control, and inertial navigation.
The pressure sensor 413 may be disposed at a side frame of the terminal 400 and/or at a lower layer of the touch display screen 405. When the pressure sensor 413 is disposed at a side frame of the terminal 400, a grip signal of the user on the terminal 400 may be detected, and the processor 401 performs left- or right-hand recognition or a shortcut operation according to the grip signal collected by the pressure sensor 413. When the pressure sensor 413 is disposed at the lower layer of the touch display screen 405, the processor 401 controls an operability control on the UI according to the pressure operation of the user on the touch display screen 405. The operability control includes at least one of a button control, a scroll bar control, an icon control, and a menu control.
The fingerprint sensor 414 is used to collect the user's fingerprint, and the processor 401 identifies the user based on the fingerprint collected by the fingerprint sensor 414, or the fingerprint sensor 414 identifies the user based on the collected fingerprint. Upon recognizing that the user's identity is a trusted identity, the processor 401 authorizes the user to perform relevant sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, making payments, changing settings, and the like. The fingerprint sensor 414 may be provided on the front, back, or side of the terminal 400. When a physical key or vendor logo is provided on the terminal 400, the fingerprint sensor 414 may be integrated with the physical key or vendor logo.
The optical sensor 415 is used to collect the ambient light intensity. In one embodiment, the processor 401 may control the display brightness of the touch display screen 405 according to the ambient light intensity collected by the optical sensor 415. Specifically, when the intensity of the ambient light is high, the display brightness of the touch display screen 405 is turned up; when the ambient light intensity is low, the display brightness of the touch display screen 405 is turned down. In another embodiment, the processor 401 may also dynamically adjust the shooting parameters of the camera assembly 406 according to the ambient light intensity collected by the optical sensor 415.
A proximity sensor 416, also referred to as a distance sensor, is typically provided on the front panel of the terminal 400. The proximity sensor 416 is used to collect the distance between the user and the front face of the terminal 400. In one embodiment, when the proximity sensor 416 detects that the distance between the user and the front face of the terminal 400 gradually decreases, the processor 401 controls the touch display screen 405 to switch from the bright-screen state to the off-screen state; when the proximity sensor 416 detects that the distance between the user and the front face of the terminal 400 gradually increases, the processor 401 controls the touch display screen 405 to switch from the off-screen state to the bright-screen state.
Those skilled in the art will appreciate that the structure shown in fig. 6 is not limiting of the terminal 400 and may include more or fewer components than shown, or may combine certain components, or may employ a different arrangement of components.
Other embodiments of the application will be apparent to those skilled in the art from consideration of the specification and practice of the application disclosed herein. This application is intended to cover any variations, uses, or adaptations of the application following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the application pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It is to be understood that the application is not limited to the precise arrangements and instrumentalities shown in the drawings, which have been described above, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (12)

1. A method of creating a big data service, the method comprising:
the master node generates a container image of each big data service according to a big data service component template corresponding to each big data service in M big data services and configuration parameters of each big data service, wherein M is an integer greater than or equal to 1;
the master node builds a service cluster, wherein the service cluster comprises the master node and N proxy nodes, the master node is provided with a service image and each proxy node is provided with a proxy image, the service image is used as an operating system of the master node, the proxy image is used as an operating system of the proxy node, the service image and the proxy image both comprise image warehouses, the image warehouses comprise container images of each big data service, and N is an integer greater than or equal to 1;
The master node creates big data service in the service cluster according to an executable file and a component configuration file, wherein the executable file comprises a component identifier of at least one big data service component required by the big data service, and the component configuration file comprises a deployment file storage path of each big data service component;
the executable file and the component configuration file are sent by a terminal; the executable file is generated by the terminal; a palm warehouse is created according to the executable file, wherein the palm warehouse comprises at least one deployment file corresponding to each of the at least one big data service component, and the at least one deployment file corresponding to a big data service component comprises implementation code for implementing the big data service component; and the component configuration file is generated according to the palm warehouse.
2. The method of claim 1, wherein the generating, by the master node, a container image of each big data service according to the big data service component template corresponding to each big data service in the M big data services and the configuration parameters of each big data service comprises:
the master node receives configuration information corresponding to each big data service in M big data services sent by a terminal;
The master node acquires a stored big data service component template corresponding to a target big data service, wherein the big data service component template corresponding to the target big data service comprises at least one configuration item and a default parameter corresponding to each configuration item, and the target big data service is any one of the M big data services;
and the master node updates the default parameter corresponding to each configuration item included in the big data service component template to the parameter of that configuration item included in the configuration information corresponding to the target big data service, to obtain a container image corresponding to the target big data service.
3. The method of claim 1, wherein the building, by the master node, of a service cluster comprises:
the master node installs a service image;
the master node sends a proxy mirror image to each proxy node connected with the master node so that each proxy node installs the proxy mirror image;
the master node deploys a Kubernetes system in a node cluster formed by the master node and each proxy node to form a service cluster.
4. The method of claim 3, wherein the proxy image further comprises an address of the master node,
After the master node sends the proxy image to each proxy node connected with the master node, the method further comprises:
and the master node receives an address allocation request sent by each proxy node according to the address of the master node, and sends the address of each proxy node to each proxy node respectively.
5. The method of claim 1, wherein the creating, by the master node, of the big data service in the service cluster according to the executable file and the component configuration file comprises:
the master node obtains a storage path of at least one deployment file corresponding to each big data service component from the component configuration file according to the component identifier of each big data service component in the executable file;
and the master node acquires at least one deployment file corresponding to each big data service component according to the storage path of the at least one deployment file corresponding to each big data service component, and loads the at least one deployment file corresponding to each big data service component into the service cluster to create big data service.
6. The method of claim 5, wherein before the loading of the at least one deployment file corresponding to each big data service component into the service cluster, the method further comprises:
the master node receives an isolation file, wherein the isolation file includes computing resources and Kubernetes elements required by the big data service; and the master node creates a container in a plurality of proxy nodes in the service cluster, wherein the container comprises the computing resources and the Kubernetes elements required by the big data service;
and the loading of the at least one deployment file corresponding to each big data service component into the service cluster comprises:
the master node loads the at least one deployment file corresponding to each big data service component into the container.
7. An apparatus for creating a big data service, the apparatus comprising:
the generation module is used for generating a container image of each big data service according to a big data service component template corresponding to each big data service in M big data services and configuration parameters of each big data service, wherein M is an integer greater than or equal to 1;
the construction module is used for constructing a service cluster, wherein the service cluster comprises the apparatus and N proxy nodes, the apparatus is provided with a service image and each proxy node is provided with a proxy image, the service image is used as an operating system of the apparatus, the proxy image is used as an operating system of the proxy node, the service image and the proxy image both comprise image warehouses, the image warehouses comprise container images of each big data service, and N is an integer greater than or equal to 1;
the creation module is used for creating a big data service in the service cluster according to an executable file and a component configuration file, wherein the executable file comprises a component identifier of at least one big data service component required by the big data service, and the component configuration file comprises a deployment file storage path of each big data service component;
the executable file and the component configuration file are sent by a terminal; the executable file is generated by the terminal; a palm warehouse is created according to the executable file, wherein the palm warehouse comprises at least one deployment file corresponding to each of the at least one big data service component, and the at least one deployment file corresponding to a big data service component comprises implementation code for implementing the big data service component; and the component configuration file is generated according to the palm warehouse.
8. The apparatus of claim 7, wherein the generation module comprises:
the first receiving unit is used for receiving configuration information corresponding to each big data service in M big data services sent by the terminal;
the first acquisition unit is used for acquiring a stored big data service component template corresponding to a target big data service, wherein the big data service component template corresponding to the target big data service comprises at least one configuration item and a default parameter corresponding to each configuration item, and the target big data service is any one of the M big data services;
and the updating unit is used for updating the default parameter corresponding to each configuration item included in the big data service component template to the parameter of that configuration item included in the configuration information corresponding to the target big data service, to obtain a container image corresponding to the target big data service.
9. The apparatus of claim 8, wherein the construction module comprises:
an installation unit for installing a service image;
a transmitting unit configured to transmit a proxy image to each proxy node connected to the apparatus, so that the proxy image is installed by each proxy node;
and the deployment unit is used for deploying a Kubernetes system in the node cluster formed by the apparatus and each proxy node, so as to form a service cluster.
10. The apparatus of claim 9, wherein the proxy image further comprises an address of the apparatus, the apparatus further comprising:
the receiving module is used for receiving an address allocation request sent by each proxy node according to the address of the apparatus;
and the sending module is used for respectively sending the address of each proxy node to each proxy node.
11. The apparatus of claim 7, wherein the creation module comprises:
The second acquisition unit is used for acquiring a storage path of at least one deployment file corresponding to each big data service component from the component configuration file according to the component identifier of each big data service component in the executable file;
the loading unit is used for acquiring at least one deployment file corresponding to each big data service component according to the storage path of the at least one deployment file corresponding to each big data service component, and loading the at least one deployment file corresponding to each big data service component into the service cluster so as to create big data service.
12. The apparatus of claim 11, wherein the creation module further comprises: a second receiving unit and a creating unit;
the second receiving unit is used for receiving an isolation file, wherein the isolation file includes computing resources and Kubernetes elements required by the big data service;
the creation unit is configured to create a container in a plurality of proxy nodes in the service cluster, where the container includes computing resources and Kubernetes elements required by the big data service;
and the loading unit is used for loading at least one deployment file corresponding to each big data service component into the container.
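The following sketches illustrate, in Python, one possible reading of the workflow recited in the claims above; every file name, file format, and tool invocation (docker, ssh, kubeadm, kubectl) is an assumption made for illustration and is not prescribed by the claims. Claims 1 and 2 describe the master node replacing the default parameters of a big data service component template with the configuration parameters received from the terminal and producing a container image per big data service. A minimal sketch of that step, assuming a JSON template and a docker build:

```python
import json
import subprocess
from pathlib import Path


def render_template(template_path: str, config_params: dict) -> dict:
    """Replace each configuration item's default parameter with the parameter
    received from the terminal (claim 2). The JSON layout is hypothetical."""
    template = json.loads(Path(template_path).read_text())
    for item, default in template["configuration_items"].items():
        template["configuration_items"][item] = config_params.get(item, default)
    return template


def build_container_image(service_name: str, rendered: dict, context_dir: str) -> str:
    """Write the rendered configuration into the build context and build a
    container image for the big data service (claim 1)."""
    Path(context_dir, "service-config.json").write_text(json.dumps(rendered, indent=2))
    image_tag = f"registry.local/bigdata/{service_name}:latest"  # hypothetical registry
    subprocess.run(["docker", "build", "-t", image_tag, context_dir], check=True)
    return image_tag


if __name__ == "__main__":
    params = {"heap_size": "4g", "replication_factor": "3"}    # example configuration items
    rendered = render_template("templates/hdfs.json", params)  # hypothetical template path
    print(build_container_image("hdfs", rendered, "images/hdfs"))
```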
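Claims 3 and 4 recite the master node installing the service image, sending a proxy image to each connected proxy node, and deploying a Kubernetes system across the resulting node cluster. The sketch below assumes scp/ssh as the transport, a hypothetical install-proxy-image command on the proxy nodes, and a kubeadm-style bootstrap; none of these tools are named in the claims.

```python
import subprocess

PROXY_NODES = ["192.168.1.11", "192.168.1.12"]  # example proxy node addresses


def send_proxy_image(node: str, image_path: str = "proxy-image.iso") -> None:
    """Send the proxy image to a proxy node and trigger its installation
    (claim 3); 'install-proxy-image' is a hypothetical installer."""
    subprocess.run(["scp", image_path, f"root@{node}:/tmp/proxy-image.iso"], check=True)
    subprocess.run(["ssh", f"root@{node}", "install-proxy-image /tmp/proxy-image.iso"],
                   check=True)


def deploy_kubernetes(nodes: list[str]) -> None:
    """Bring up a Kubernetes control plane on the master and join every proxy
    node, forming the service cluster (claim 3); kubeadm is an assumption."""
    subprocess.run(["kubeadm", "init"], check=True)
    join_cmd = subprocess.run(
        ["kubeadm", "token", "create", "--print-join-command"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    for node in nodes:
        subprocess.run(["ssh", f"root@{node}", join_cmd], check=True)


if __name__ == "__main__":
    for node in PROXY_NODES:
        send_proxy_image(node)
    deploy_kubernetes(PROXY_NODES)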
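Claims 1 and 5 describe creating the big data service from an executable file that lists component identifiers and a component configuration file that maps each identifier to the storage paths of its deployment files. A sketch under the assumptions that the executable file holds one identifier per line, the component configuration file is JSON, and the deployment files are loaded with kubectl apply:

```python
import json
import subprocess
from pathlib import Path


def load_component_ids(executable_file: str) -> list[str]:
    """Read the component identifiers of the big data service components
    required by the service (claim 1); one identifier per line is assumed."""
    lines = Path(executable_file).read_text().splitlines()
    return [line.strip() for line in lines if line.strip()]


def create_big_data_service(executable_file: str, component_config_file: str) -> None:
    """Resolve each component identifier to its deployment-file storage paths
    and load the deployment files into the service cluster (claim 5);
    using 'kubectl apply -f' as the load step is an assumption."""
    paths_by_id = json.loads(Path(component_config_file).read_text())
    for component_id in load_component_ids(executable_file):
        for deployment_file in paths_by_id[component_id]:
            subprocess.run(["kubectl", "apply", "-f", deployment_file], check=True)


if __name__ == "__main__":
    create_big_data_service("service.exec", "component-config.json")  # hypothetical names
```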
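Claim 6 adds an isolation file carrying the computing resources and Kubernetes elements required by the big data service, with the deployment files then loaded into the resulting container. One way to picture this, assuming the isolation maps onto a Kubernetes namespace plus a ResourceQuota (the claim itself does not prescribe these objects):

```python
import json
import subprocess
import tempfile


def apply_isolation(namespace: str, cpu: str, memory: str) -> None:
    """Create an isolated environment for the big data service: a namespace
    plus a ResourceQuota capping its computing resources. The concrete
    Kubernetes objects used here are an assumption, not taken from claim 6."""
    quota = {
        "apiVersion": "v1",
        "kind": "ResourceQuota",
        "metadata": {"name": f"{namespace}-quota", "namespace": namespace},
        "spec": {"hard": {"limits.cpu": cpu, "limits.memory": memory}},
    }
    subprocess.run(["kubectl", "create", "namespace", namespace], check=True)
    with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as fh:
        json.dump(quota, fh)
        fh.flush()
        subprocess.run(["kubectl", "apply", "-f", fh.name], check=True)


def load_into_isolation(namespace: str, deployment_files: list[str]) -> None:
    """Load each component's deployment files into the isolated environment,
    as in the final step of claim 6."""
    for deployment_file in deployment_files:
        subprocess.run(["kubectl", "apply", "-n", namespace, "-f", deployment_file], check=True)


if __name__ == "__main__":
    apply_isolation("bigdata-hdfs", cpu="16", memory="64Gi")
    load_into_isolation("bigdata-hdfs", ["deploy/hdfs-namenode.json"])  # example path
```

Scoping the quota to a namespace keeps the big data service's components from consuming resources reserved for other services on the same proxy nodes, which matches the isolation intent of claim 6.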
CN201910020151.6A 2019-01-09 2019-01-09 Method and device for creating big data service Active CN111427949B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910020151.6A CN111427949B (en) 2019-01-09 2019-01-09 Method and device for creating big data service

Publications (2)

Publication Number Publication Date
CN111427949A CN111427949A (en) 2020-07-17
CN111427949B true CN111427949B (en) 2023-10-20

Family

ID=71546599

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910020151.6A Active CN111427949B (en) 2019-01-09 2019-01-09 Method and device for creating big data service

Country Status (1)

Country Link
CN (1) CN111427949B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111930441B (en) * 2020-08-10 2024-03-29 上海熙菱信息技术有限公司 Consul-based configuration file management system and method
CN112099915B (en) * 2020-09-07 2022-10-25 紫光云(南京)数字技术有限公司 Soft load balancing dynamic issuing configuration method and system
CN116909584A (en) * 2023-05-06 2023-10-20 广东国地规划科技股份有限公司 Deployment method, device, equipment and storage medium of space-time big data engine

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8572607B2 (en) * 2008-05-19 2013-10-29 Novell, Inc. System and method for performing designated service image processing functions in a service image warehouse
TWI592808B (en) * 2012-08-17 2017-07-21 High-speed automated cluster system deployment using virtual disks

Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013072925A2 (en) * 2011-09-19 2013-05-23 Tata Consultancy Services Limited A computing platform for development and deployment of sensor data based applications and services
CN103037002A (en) * 2012-12-21 2013-04-10 中标软件有限公司 Method and system for arranging server cluster in cloud computing cluster environment
WO2017045424A1 (en) * 2015-09-18 2017-03-23 乐视控股(北京)有限公司 Application program deployment system and deployment method
CN108234164A (en) * 2016-12-14 2018-06-29 杭州海康威视数字技术股份有限公司 Clustered deploy(ment) method and device
CN106888254A (en) * 2017-01-20 2017-06-23 华南理工大学 A kind of exchange method between container cloud framework based on Kubernetes and its each module
CN106850621A (en) * 2017-02-07 2017-06-13 南京云创大数据科技股份有限公司 A kind of method based on container cloud fast construction Hadoop clusters
CN107493191A (en) * 2017-08-08 2017-12-19 深信服科技股份有限公司 A kind of clustered node and self scheduling container group system
CN108173919A (en) * 2017-12-22 2018-06-15 百度在线网络技术(北京)有限公司 Big data platform builds system, method, equipment and computer-readable medium
CN108196843A (en) * 2018-01-09 2018-06-22 成都睿码科技有限责任公司 Visualization Docker containers compile the O&M method of deployment automatically
CN108694053A (en) * 2018-05-14 2018-10-23 平安科技(深圳)有限公司 Build the method and terminal device of Kubernetes host nodes automatically based on Ansible tools
CN108958927A (en) * 2018-05-31 2018-12-07 康键信息技术(深圳)有限公司 Dispositions method, device, computer equipment and the storage medium of container application
CN109062655A (en) * 2018-06-05 2018-12-21 腾讯科技(深圳)有限公司 A kind of containerization cloud platform and server
CN108809722A (en) * 2018-06-13 2018-11-13 郑州云海信息技术有限公司 A kind of method, apparatus and storage medium of deployment Kubernetes clusters
CN109032760A (en) * 2018-08-01 2018-12-18 北京百度网讯科技有限公司 Method and apparatus for application deployment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Deployment practice of a highly available Kubernetes cluster; Sheng Lebiao; Zhou Qinglin; You Weiqian; Zhang Yuqian; Computer Knowledge and Technology (No. 26); full text *

Also Published As

Publication number Publication date
CN111427949A (en) 2020-07-17

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant