CN117614825A - Cloud native platform of intelligent coal preparation plant - Google Patents

Cloud native platform of intelligent coal preparation plant

Info

Publication number
CN117614825A
CN117614825A
Authority
CN
China
Prior art keywords
cluster
storage
application
computing
management
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311235137.0A
Other languages
Chinese (zh)
Inventor
方圆
荣东
周国宾
耿延兵
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
PINGDINGSHAN ZHONGXUAN AUTOMATIC CONTROL SYSTEM CO LTD
Original Assignee
PINGDINGSHAN ZHONGXUAN AUTOMATIC CONTROL SYSTEM CO LTD
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by PINGDINGSHAN ZHONGXUAN AUTOMATIC CONTROL SYSTEM CO LTD
Priority to CN202311235137.0A
Publication of CN117614825A
Legal status: Pending

Abstract

The application provides a cloud native platform for an intelligent coal preparation plant. The platform includes: a platform deployment module, used for deploying storage clusters, computing clusters and a private container repository across the nodes corresponding to the coal preparation plants; a computing resource management module, used for clustered management of computing resources through a container orchestration tool; a storage resource management module, used for clustered management of storage resources through a distributed network file system; a network module, used for configuring the cluster network and realizing communication inside and outside the computing cluster; a platform operation and maintenance module, used for managing the running states of the storage clusters and computing clusters; an application management module, used for managing a plurality of applications to be deployed in the coal preparation plants; and a rights control module, used for rights management of the different projects, clusters and application management processes of the plurality of coal preparation plants based on service lines. The cloud native platform realizes clustered management of the coal preparation plants and improves the availability, reliability and stability of the intelligent coal preparation plant.

Description

Cloud native platform of intelligent coal preparation plant
Technical Field
The application relates to the technical field of intelligent coal preparation plants, in particular to a cloud native platform of an intelligent coal preparation plant.
Background
At present, raw coal mined from a coal mine is processed by a coal preparation plant so as to sort, from the raw coal, clean coal meeting the quality requirements. Various software and hardware devices, such as servers, need to be arranged in the coal preparation plant to realize the various functions the plant requires.
However, with the increasing degree of intellectualization of coal preparation plants, coal preparation plants in the related art face the following problems. First, as the data requirements of an intelligent coal preparation plant grow more complex and the volume of historical data increases, the hardware expansion of a single server reaches a physical limit, making further resource expansion inconvenient. Second, as the accumulated running time of each service increases, hardware failures and the exhaustion of computing and storage resources all cause service abnormalities, reducing the reliability of the coal preparation plant. Third, as the business of the intelligent coal preparation plant becomes more complex, the number of software applications keeps growing, the data of each component keeps increasing and the services gradually become more involved, so the deployment and operation costs of the various applications are high. Fourth, because the software and hardware facilities of each intelligent coal preparation plant differ greatly, the systems of different coal preparation plants are difficult to make flexibly compatible, so collaborative management across coal preparation plants cannot be carried out. Fifth, resource management among multiple coal preparation plants belonging to the same upper-level management system is difficult and rights management is chaotic, which easily causes waste of resources.
Disclosure of Invention
The present application aims to solve, at least to some extent, one of the technical problems in the related art.
Therefore, a first object of the present application is to provide a cloud native platform for an intelligent coal preparation plant, which implements clustered management of coal preparation plants, improves the availability, reliability and stability of the intelligent coal preparation plant, and solves problems in the related art such as poor availability, reliability and stability, difficulty in horizontal resource expansion, disordered rights management and difficult resource management, all caused by the inability to implement clustering in coal preparation plants.
A second object of the present application is to propose an electronic device.
To achieve the above objective, an embodiment of the present application provides a cloud native platform of an intelligent coal preparation plant, the cloud native platform comprising: a platform deployment module, a computing resource management module, a storage resource management module, a network module, a platform operation and maintenance module, an application management module and a rights control module, wherein,
the platform deployment module is used for communicating with nodes corresponding to a plurality of coal preparation plants, and deploying at least one storage cluster, at least one computing cluster and a private container warehouse among the nodes;
the computing resource management module is used for carrying out clustered management on computing resources in the computing cluster through a container orchestration tool;
the storage resource management module is used for carrying out clustered management on storage resources in the storage cluster through a distributed network file system;
the network module is used for configuring the network of the computing cluster, so as to realize network communication inside the computing cluster and communication with the outside of the computing cluster;
the platform operation and maintenance module is used for managing the operation states of the at least one storage cluster and the at least one computing cluster;
the application management module is used for managing the building, storage and running of a plurality of applications to be deployed in the coal preparation plants;
the rights control module is used for performing rights management on the different projects of the coal preparation plants, the at least one storage cluster, the at least one computing cluster and the application management process based on service lines.
Optionally, in one embodiment of the present application, the platform deployment module includes: a base configuration deployment unit, a storage cluster configuration unit, a computing cluster configuration unit and a private container repository configuration unit, wherein the base configuration deployment unit is used for constructing a base deployment environment in a plurality of nodes, installing underlying services and adding base configuration information; the storage cluster configuration unit is used for deploying the GlusterFS service on a plurality of storage nodes according to preset storage cluster configuration information to construct the at least one storage cluster, deploying the Heketi service on a storage management node, and constructing the node topology of the at least one storage cluster through the Heketi service.
Optionally, in an embodiment of the present application, the computing cluster configuration unit is specifically configured to: deploy Kubernetes on the computing node where the main computing cluster is located to generate the main computing cluster, and install the relevant service components on the main computing cluster; and construct a plurality of member computing clusters based on the encryption information of the main computing cluster. The private container repository configuration unit is used for building the Harbor service on the node where the private container repository is located.
Optionally, in one embodiment of the present application, the container orchestration tool comprises Kubernetes, and the computing resource management module is specifically configured to: automatically adjust, according to the demand information of a target application, the scale of the target application and the computing resources allocated to it through Kubernetes; automatically adjust the traffic routing of the target application through the load balancing mechanism built into Kubernetes, balancing the load of the target application; and monitor the status of each container and node based on Kubernetes' self-repair capability, automatically restarting failed containers and replacing failed nodes.
Optionally, in one embodiment of the present application, the distributed network file system comprises GlusterFS, and the storage resource management module is specifically configured to: perform data replication and data distribution among the plurality of storage nodes through GlusterFS, and, when a failed node is detected, perform access through any normal node in place of the failed node; add a corresponding number of storage nodes to the storage cluster according to the storage capacity to be expanded; and, when writing data into the storage cluster, replicate the written data to the plurality of storage nodes through the GlusterFS replication algorithm.
Optionally, in one embodiment of the present application, the platform operation and maintenance module includes a computing cluster management unit and a storage cluster management unit, wherein the computing cluster management unit is used for monitoring the resource state of each node through KubeSphere, scheduling the devices in each node and sending abnormal-state alarm information; the storage cluster management unit is used for managing the at least one storage cluster through the GlusterFS management interface provided by the Heketi service, and integrates with Kubernetes so that the storage resources can be used within Kubernetes.
Optionally, in one embodiment of the present application, the application management module includes a container image management unit, a package management unit and an application running management unit, wherein the container image management unit is used for storing and managing container images sent by users, using the Harbor service as a private image repository; the package management unit is a Helm Charts package manager used for storing and managing the chart packages sent by users, using the Harbor service as a Helm Charts repository, where the chart packages contain the applications to be deployed by a coal preparation plant.
Optionally, in one embodiment of the present application, the application running management unit includes: an application deployment subunit, used for rapidly deploying a target application, retrieved by the user from the container image management unit, into the corresponding target cluster; and an application operation and maintenance subunit, used for monitoring the state of each application through KubeSphere, adjusting the configuration information of each application and raising abnormal-state alarms for the applications.
Optionally, in an embodiment of the present application, the rights control module is specifically configured to: allocate projects to different coal preparation plants by setting service lines, and set the rights corresponding to each project; match the service line to a Kubernetes namespace, and perform rights isolation of the computing cluster based on the namespace; when an application is deployed, bind the application to a GlusterFS storage volume through the Kubernetes API resources, and perform rights isolation of the storage cluster based on the namespace; and set the corresponding project rights for different users based on the Harbor service, realizing rights isolation and control of different applications by binding projects to the service line.
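The service-line-to-namespace isolation described above can be sketched as a toy model. All plant, user and service-line names below are hypothetical, and real isolation would be enforced by Kubernetes RBAC rather than application code:

```python
# Illustrative sketch (not the patent's actual implementation): map each
# "service line" to a namespace and check that a user bound to one line
# cannot reach another line's namespace.

class RightsController:
    def __init__(self):
        self._line_ns = {}      # service line -> namespace
        self._user_lines = {}   # user -> set of granted service lines

    def register_line(self, line, plant):
        # Each plant project gets its own namespace named after the
        # service line; the namespace is the isolation boundary.
        ns = f"{plant}-{line}"
        self._line_ns[line] = ns
        return ns

    def grant(self, user, line):
        self._user_lines.setdefault(user, set()).add(line)

    def can_access(self, user, namespace):
        # Access is allowed only via a service line mapped to that namespace.
        return any(self._line_ns.get(line) == namespace
                   for line in self._user_lines.get(user, ()))

rc = RightsController()
ns_a = rc.register_line("washing", "plant-a")
ns_b = rc.register_line("loading", "plant-b")
rc.grant("alice", "washing")
```

A user granted only the "washing" line can then reach `plant-a-washing` but not `plant-b-loading`.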
To achieve the above object, an embodiment of a second aspect of the present application provides an electronic device, including the cloud native platform of an intelligent coal preparation plant according to any one of the embodiments of the first aspect above.
The technical scheme provided by the embodiments of the present application brings at least the following beneficial effects: it realizes clustered management of each coal preparation plant and supports horizontal resource expansion based on the cloud native platform. The stability, reliability and availability of the coal preparation plant are greatly improved and the probability of failure is reduced. The cloud native platform integrates an application market function, so it can provide rich application components for coal preparation plants, and the cost of deploying and operating the various applications through the cloud native platform is low. The cloud native platform has a high degree of integration, generality and standardization, which makes the overall management and control of each coal preparation plant convenient. The cloud native platform also realizes rights isolation between different coal preparation plants, which facilitates resource management for the different plants and improves the rationality of resource allocation.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
The foregoing and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings, in which:
fig. 1 is a schematic structural diagram of a cloud native platform of an intelligent coal preparation plant according to an embodiment of the present application;
fig. 2 is a schematic structural diagram of a platform deployment module according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of an application management module according to an embodiment of the present application;
fig. 4 is a schematic architecture diagram of a cloud native platform of a specific intelligent coal preparation plant according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
Embodiments of the present invention are described in detail below, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to like or similar elements or elements having like or similar functions throughout. The embodiments described below by referring to the drawings are illustrative and intended to explain the present invention and should not be construed as limiting the invention.
It should be noted that the present application analyzes the problems faced by the current intelligent coal preparation plant one by one, and provides a cloud native platform of the intelligent coal preparation plant to solve the problems existing in the intelligent coal preparation plant in the related technology.
The cloud native platform realizes horizontal resource expansion where the hardware expansion of a single server in the intelligent coal preparation plant has reached its physical limit, thereby breaking through the performance bottleneck of computing and storage resources. For service abnormalities in the intelligent coal preparation plant, the cloud native platform automatically transfers and recovers from faults, achieving high service availability. For the difficulty of making the systems of individual coal preparation plants flexibly compatible, the platform provides a general solution that helps cope with complex and changeable underlying base environments; for coal preparation plants that cannot transition directly to a cloud native cluster, the platform realizes coexistence, mutual compatibility and cooperation between the existing hyper-converged servers, virtual machines, physical machines and newly built clusters. For coal preparation plants with very complex cluster requirements, for example two coal preparation plants under one superior group that need the same architecture scheme and management platform but with isolated resources, the cloud native platform can manage multiple clusters simultaneously, and on the basis of isolating multi-cluster computing and storage resources it also provides a unified management layer, a shared private artifact repository and the like. An intelligent transformation project of a coal preparation plant involves different services and devices of different plants, and rights isolation is needed for data and service safety; existing coal preparation plants generally use independent servers, which wastes resources, while shared servers suffer from chaotic rights management. The cloud native platform of the present application realizes rights control and isolation in all aspects of computing, storage and application for different plants and devices.
The cloud native platform of the intelligent coal preparation plant provided by the embodiments of the present application is described in detail below with reference to the accompanying drawings.
Fig. 1 is a schematic structural diagram of a cloud native platform of an intelligent coal preparation plant according to an embodiment of the present application, as shown in fig. 1, a cloud native platform 10 of the intelligent coal preparation plant includes: platform deployment module 100, computing resource management module 200, storage resource management module 300, network module 400, platform operation and maintenance module 500, application management module 600, and rights control module 700.
The platform deployment module 100 is configured to communicate with nodes corresponding to a plurality of coal preparation plants, and deploy at least one storage cluster, at least one computing cluster and a private container warehouse among the plurality of nodes.
Specifically, the node in the embodiment of the present application may be a coal preparation plant to be clustered, and may also be regarded as a device for executing an application, such as a server in the coal preparation plant. Because the present application performs cluster management on a plurality of intelligent coal preparation plants, the platform deployment module 100 communicates with a plurality of nodes first, and builds a cluster on the plurality of nodes by issuing a deployment file, and mainly includes a computing cluster and a storage cluster, so as to manage computing resources and storage resources. That is, the platform deployment module 100 is configured to deploy clusters and other services required by clustered management in the coal preparation plants, and after the platform deployment module 100 is deployed, the cloud native platform performs clustered management on the deployed clusters, so as to implement integrated management and control of multiple coal preparation plants.
In one embodiment of the present application, as shown in fig. 2, a platform deployment module 100 includes: a base configuration deployment unit 110, a storage cluster configuration unit 120, a computing cluster configuration unit 130, and a private container repository configuration unit 140. The computing cluster deployed in this embodiment is a Kubernetes cluster, and the storage cluster deployed is a GlusterFS cluster.
The base configuration deployment unit 110 is configured to construct a base deployment environment in the plurality of nodes, install the underlying services and add the base configuration information. Specifically, the base configuration deployment unit 110 issues deployment scripts by communicating with each node, prepares the base deployment environment on the relevant nodes, and installs the dependent underlying services, such as a time synchronization tool, on the nodes. At the same time, basic configuration such as password-free mutual trust and authentication among the nodes is added.
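As an illustration of what the base configuration deployment unit might compute, the sketch below builds a per-node plan listing the underlying services to install (a time synchronization tool, assumed here to be chrony) and the peers with which password-free mutual trust must be established. The plan structure and names are assumptions, not the patent's actual deployment script:

```python
# Hypothetical sketch of the base-configuration step: for each node, list
# the underlying services to install and the peer nodes to trust.

def base_deployment_plan(nodes, underlying_services=("chrony",)):
    """Return, per node, the services to install and the peers to trust."""
    plan = {}
    for node in nodes:
        plan[node] = {
            # dependent underlying services, e.g. time synchronization
            "install": list(underlying_services),
            # every other node becomes a password-free trust peer
            "trust_peers": [n for n in nodes if n != node],
        }
    return plan

plan = base_deployment_plan(["10.0.0.1", "10.0.0.2"])
```

The deployment scripts issued to each node would then act on that node's entry in the plan.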
The storage cluster configuration unit 120 is configured to deploy the GlusterFS service on a plurality of storage nodes according to the preset storage cluster configuration information to construct at least one storage cluster, deploy the Heketi service on a storage management node, and construct the node topology of the at least one storage cluster through the Heketi service.
Specifically, the preset storage cluster configuration information may be configuration information written manually by a user according to application requirements, and provides information such as the relationship between clusters and nodes and the disk device names. It should be noted that, in the embodiment of the present application, one or more storage clusters may be deployed according to actual application requirements (since the storage clusters in this embodiment are built on the GlusterFS service, a storage cluster may also be referred to as a GlusterFS cluster), and if multiple GlusterFS clusters need to be built, the information for all of them should be represented in the configuration.
Further, the storage cluster configuration unit 120 issues deployment scripts to all storage nodes and controls each storage node to execute the script, thereby installing and starting the GlusterFS service and constructing the cluster according to the configuration information. Furthermore, the storage cluster configuration unit 120 may also issue a deployment script to the storage management node (of which there may be one), install and enable the Heketi service on it, and construct the node topology relationship of the deployed cluster. It should be noted that if multiple GlusterFS clusters were constructed in advance, the topologies of multiple clusters may be constructed at the same time.
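A minimal sketch of assembling a Heketi-style topology description from configuration information follows. The node/device layout mirrors the shape of Heketi's topology.json, but the hostnames and device paths are invented for illustration:

```python
# Hypothetical sketch: build a Heketi-style topology description for one or
# more GlusterFS clusters from the node/disk relationships in the
# configuration. Addresses and device paths are illustrative only.
import json

def build_topology(clusters):
    """clusters: list of {node_ip: [device_paths]} dicts, one per cluster."""
    topo = {"clusters": []}
    for cluster in clusters:
        nodes = []
        for ip, devices in cluster.items():
            nodes.append({
                "node": {
                    "hostnames": {"manage": [ip], "storage": [ip]},
                    "zone": 1,
                },
                "devices": devices,
            })
        topo["clusters"].append({"nodes": nodes})
    return topo

topo = build_topology([{"10.0.0.1": ["/dev/sdb"], "10.0.0.2": ["/dev/sdb"]}])
print(json.dumps(topo, indent=2))
```

If multiple GlusterFS clusters are configured, each entry in the input list yields one cluster in the topology, matching the note above about constructing multiple topologies at once.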
The computing cluster configuration unit 130 is specifically configured to deploy Kubernetes on the computing node where the main computing cluster is located, so as to generate the main computing cluster, and to install the relevant service components on the main computing cluster; a plurality of member computing clusters are then constructed based on the encryption information of the main computing cluster.
Specifically, the computing cluster configuration unit 130 deploys the computing clusters according to manually written configuration information, where the configuration information is written by a user according to application requirements and provides information such as the relationships between clusters and nodes and the node roles. It should be noted that, in the embodiment of the present application, one or more computing clusters may be deployed according to actual application requirements, and if a plurality of Kubernetes clusters need to be constructed, the main cluster and the remaining member clusters among the plurality of computing clusters are determined in the configuration information.
Further, the computing cluster configuration unit 130 issues a deployment script to the computing node where the main cluster is located and controls that node to execute the script, so that the Kubernetes cluster is installed and started and the KubeSphere services are installed and started, where the other relevant service components may include the OpenELB service, the tower component and so on. Further, the jwtSecret (i.e., the encryption information) of the main KubeSphere cluster is obtained, and the jwtSecret information is reported back to the platform deployment module 100. The jwtSecret can be regarded approximately as the key information of an encrypted character string. The tower component is the component in KubeSphere used for proxy connections; it is a tool that establishes network connections between clusters through a proxy, and it can expose the proxy service address of a certain cluster so that other clusters can connect to that cluster.
Furthermore, if multiple computing clusters need to be built, the other clusters serve as member clusters. The computing cluster configuration unit 130 issues deployment scripts, which include the jwtSecret information of the main cluster, to the computing nodes where the member clusters are located, and controls those nodes to execute the scripts, so that the Kubernetes clusters are installed and started, the KubeSphere services are installed and started, and the member clusters automatically report their various state information to the main cluster.
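The member-cluster reporting flow can be illustrated with a toy authentication sketch: each member signs its status report with the shared secret handed out during deployment, analogous in spirit to the jwtSecret above. The HMAC scheme here is an assumption for illustration, not KubeSphere's actual wire format:

```python
# Illustrative sketch: a member cluster authenticates its status reports to
# the main cluster with a shared secret distributed by the deployment
# script. Secret and report contents are invented for illustration.
import hashlib
import hmac
import json

JWT_SECRET = b"example-shared-secret"  # handed out in the deployment script

def sign_report(report: dict, secret: bytes) -> str:
    """Serialize the report deterministically and sign it with HMAC-SHA256."""
    payload = json.dumps(report, sort_keys=True).encode()
    return hmac.new(secret, payload, hashlib.sha256).hexdigest()

def verify_report(report: dict, signature: str, secret: bytes) -> bool:
    """Main-cluster side: accept only reports signed with the shared secret."""
    return hmac.compare_digest(sign_report(report, secret), signature)

report = {"cluster": "member-1", "nodes_ready": 3}
sig = sign_report(report, JWT_SECRET)
```

A report signed with a different secret would fail verification, so only clusters deployed with the main cluster's secret can register their state.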
The private container repository configuration unit 140 is configured to build the Harbor service on the node where the private container repository is located. Specifically, the private container repository configuration unit 140 issues a deployment script to the node where the private repository is located and makes that node execute the script to build the Harbor service.
Therefore, the platform deployment module 100 realizes cluster deployment among nodes through the steps, and is convenient for the other modules in the cloud native platform to carry out integrated management on clusters.
The computing resource management module 200 is configured to perform clustered management of the computing resources in the computing cluster through a container orchestration tool.
The cloud native platform of the intelligent coal preparation plant manages the applications used by the coal preparation plant from the perspective of containers. A container image is an executable software package that contains a complete executable program, including the code and the defaults required at runtime; an application can be bound and run in a container through a container engine, and the container engine, using the kernel-level resource isolation features of the operating system, can run multiple containers on the same operating system. In this respect a container engine can be loosely compared to a virtual machine (VM), although containers share the host kernel rather than virtualizing hardware. A container orchestration tool is used to orchestrate containers: the orchestrator can control and automate the various tasks of container management and can deploy the same program in multiple environments without rewriting it. The present application may perform clustered management of computing resources through various container orchestration tools, such as Kubernetes, Docker Swarm and Apache Mesos.
Preferably, in one embodiment of the present application, clustered management of the computing resources is performed through Kubernetes. Kubernetes (K8s for short) is an open-source container orchestration and management platform that implements the deployment and management of applications by forming an abstraction layer over the cluster and treating all machines in the cluster as part of a single resource pool. The computing resource management module 200 realizes clustered management of computing resources such as memory and CPU through K8s.
Specifically, in the present embodiment, the computing resource management module 200 may specifically perform the following operations:
In a first example, the scale of a target application and the computing resources allocated to it are automatically adjusted by Kubernetes according to the application's demand information. In this example, the target application is the application whose resources are to be deployed and scaled; since Kubernetes provides an automated deployment and scaling mechanism, the computing resource management module 200 can automatically adjust the scale and resource allocation of the target application as needed, better meeting the application's requirements and realizing automated deployment and scaling of the application.
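The automatic scale adjustment described here has the same shape as the Horizontal Pod Autoscaler rule in Kubernetes: the desired replica count grows or shrinks with the ratio of the observed metric to its target. A minimal sketch, with illustrative bounds:

```python
# Sketch of the HPA-style scaling rule: scale replicas by the ratio of the
# current metric (e.g. average CPU utilization) to its target, then clamp
# to the configured bounds. All values are illustrative.
import math

def desired_replicas(current_replicas, current_metric, target_metric,
                     min_replicas=1, max_replicas=10):
    desired = math.ceil(current_replicas * current_metric / target_metric)
    return max(min_replicas, min(max_replicas, desired))
```

For instance, three replicas running at 90% CPU against a 60% target would be scaled up to five.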
In a second example, the traffic routing of the target application is automatically adjusted through the load balancing mechanism built into Kubernetes, balancing the load of the target application. In this example, the target application is the application whose load needs adjusting; since Kubernetes has built-in service discovery and load balancing, the traffic routing and load balancing of the target application can be adjusted automatically on demand, ensuring high availability and stability of the application program.
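Such service-side load balancing can be sketched as spreading traffic for an application across its container endpoints round-robin, much as a Kubernetes Service does. The endpoint names are hypothetical:

```python
# Toy sketch of round-robin traffic routing across an application's
# container endpoints. Endpoint names are invented for illustration.
import itertools

class ServiceBalancer:
    def __init__(self, endpoints):
        self._cycle = itertools.cycle(endpoints)

    def route(self):
        """Return the endpoint that should receive the next request."""
        return next(self._cycle)

lb = ServiceBalancer(["pod-a", "pod-b", "pod-c"])
routed = [lb.route() for _ in range(6)]
```

Over six requests each of the three endpoints receives exactly two, which is the balanced-load property the text describes.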
In a third example, the status of individual containers and nodes is monitored based on Kubernetes' self-healing capability, and failed containers are automatically restarted and failed nodes replaced. In this example, since Kubernetes has self-repair capability, the states of containers and nodes can be monitored, and when a certain container or node is determined to have failed, the failed container can be automatically restarted or moved off the failed node by Kubernetes, improving the stability and usability of the application program.
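The self-repair behavior can be illustrated with a toy reconciliation loop. The state model and node names below are assumptions for illustration, not the Kubernetes controller implementation:

```python
# Toy self-healing sketch: scan container states, restart any failed
# container, and reschedule containers away from failed nodes.

def self_heal(containers, failed_nodes):
    """containers: {name: {"node": str, "state": "running"|"failed"}}."""
    actions = []
    for name, info in containers.items():
        if info["state"] == "failed":
            info["state"] = "running"            # restart the container
            actions.append(("restart", name))
        elif info["node"] in failed_nodes:
            info["node"] = "replacement-node"    # move off the failed node
            actions.append(("reschedule", name))
    return actions

state = {"web": {"node": "n1", "state": "failed"},
         "db":  {"node": "n2", "state": "running"}}
actions = self_heal(state, failed_nodes={"n2"})
```

After one pass, the failed container is running again and the container stranded on the failed node has been rescheduled.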
In this embodiment, when the computing resource management module 200 performs clustered management of computing resources through Kubernetes, since Kubernetes treats the container as a whole and abstracts it into a single computing unit, the computing resource management module 200 can more easily migrate and deploy application programs, and can more fully protect the security of applications based on the container-level security and resource management functions Kubernetes provides. Moreover, since Kubernetes is a popular open-source project with a rich ecosystem of plug-ins, tools and third-party services, the computing resource management module 200 can manage and deploy applications more comprehensively through that ecosystem.
The storage resource management module 300 is configured to perform clustered management of storage resources in a storage cluster through a distributed network file system.
In one embodiment of the present application, the selected distributed network file system is GlusterFS, and the storage resource management module 300 uses GlusterFS to manage the storage resources, so as to implement clustered management of storage and realize functions such as multi-copy replication and automatic recovery from storage failures through GlusterFS, thereby ensuring data security.
Specifically, glusterfs is a scalable network file system with high scalability, high availability, high performance, data consistency, and horizontal expansion capability. It avoids the use of a metadata server, which reduces the probability of a single point of failure in the cluster, and can realize functions such as network storage, combined storage by fusing the storage spaces of multiple nodes, redundant backup, and load balancing of large files. In this embodiment, the storage resource management module 300 may specifically perform the following operations based on glusterfs:
in a first example, data replication and data dispersion are performed among a plurality of storage nodes through glusterfs, and when a failed node is detected, access is served by any normal node in place of the failed node. In this example, since glusterfs uses a multi-node architecture, data can be replicated and distributed among multiple nodes, enabling higher availability and data redundancy. When the storage resource management module 300 detects that a node has failed, serving access from the remaining normal nodes avoids data loss and reduces downtime.
In a second example, a corresponding number of storage nodes is added to the storage cluster according to the storage capacity to be expanded. In this example, since glusterfs can scale to hundreds of nodes, the present platform can handle large-scale datasets. When more storage capacity is needed, the storage resource management module 300 can expand glusterfs by adding additional storage nodes, giving the present cloud native platform higher performance and capacity.
In a third example, when writing data into a storage cluster, the written data is copied to the plurality of storage nodes by the glusterfs replication algorithm. In this example, since glusterfs uses data replication and data distribution to guarantee the consistency and reliability of data, written data may be replicated to multiple storage nodes, thereby ensuring that the data obtained when accessing any node in the cluster is consistent and up to date.
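The replication and expansion operations above can be sketched with the standard gluster CLI; the hostnames and brick paths are hypothetical placeholders, not taken from the patent:

```shell
# Hedged sketch: create a 3-way replicated volume so every write lands on
# three bricks; any single node failure leaves two live copies serving reads.
gluster volume create plantdata replica 3 \
  node1:/data/brick1 node2:/data/brick1 node3:/data/brick1
gluster volume start plantdata

# Later capacity expansion: add bricks in multiples of the replica count,
# then rebalance so existing data spreads onto the new bricks.
gluster volume add-brick plantdata \
  node4:/data/brick1 node5:/data/brick1 node6:/data/brick1
gluster volume rebalance plantdata start
```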
In this embodiment, when the storage resource management module 300 performs clustered management of storage resources through glusterfs, since glusterfs can run on various hardware platforms and storage types, including local disk, network storage, and cloud storage, and can support multiple protocols such as NFS, SMB, and FTP, the storage cluster of the present application can be integrated with various applications and operating systems, and the storage resource management module 300 can therefore manage storage resources more flexibly in multiple ways. In addition, glusterfs adopts parallel I/O operations and supports functions such as caching and memory mapping, so the platform can reduce the latency of remote data access and improve access speed through glusterfs.
The network module 400 is configured to configure a network of the computing cluster, so as to implement network communication inside the computing cluster and external communication of the computing cluster. Specifically, the network module 400 is configured to implement functions such as network configuration of the cluster and external communication of the cluster.
In one embodiment of the present application, the network module 400 implements cluster network configuration through the Service (SVC for short) mechanism of kubernetes, and uses openelb as the load balancer for external communication of the cluster, so as to realize network communication with the outside of the cluster and remain compatible with communication between services in non-cluster environments. Openelb can be configured and managed directly through CRDs, and in actual load balancing applications provides functions such as load balancing based on BGP and Layer 2 mode, load balancing based on router ECMP, and IP address pool management.
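As a hedged sketch (the CRD group/version, annotation keys, and address range are assumptions that may differ across openelb releases), exposing a Service through an openelb Layer 2 address pool looks roughly like this:

```shell
# Hedged sketch: declare an IP address pool as an openelb Eip CRD, then
# annotate a LoadBalancer Service so openelb assigns it an address from
# that pool and announces it on the LAN in Layer 2 mode.
kubectl apply -f - <<'EOF'
apiVersion: network.kubesphere.io/v1alpha2   # assumed CRD version
kind: Eip
metadata:
  name: plant-eip-pool
spec:
  address: 192.168.0.100-192.168.0.110       # hypothetical LAN range
  protocol: layer2
EOF

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: plant-gateway
  annotations:
    lb.kubesphere.io/v1alpha1: openelb                 # hand this Service to openelb
    protocol.openelb.kubesphere.io/v1alpha1: layer2    # Layer 2 announcement mode
spec:
  type: LoadBalancer
  selector:
    app: plant-gateway
  ports:
    - port: 80
      targetPort: 8080
EOF
```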
The platform operation and maintenance module 500 is configured to manage an operation state of at least one storage cluster and at least one computing cluster.
The running state of the cluster comprises the resource state of each node in the cluster, the working state of the machine equipment, the normal or abnormal state in the running process and the like.
In one embodiment of the present application, the platform operation and maintenance module 500 includes: a compute cluster management unit 510 and a storage cluster management unit 520.
The computing cluster management unit 510 is configured to monitor a resource status of each node through kubesphere, schedule devices in each node, and send abnormal status alarm information.
Specifically, the computing cluster management unit 510 monitors the resource status of each physical node through kubesphere, controls the operation schedule of the machine, controls the machine to be on-line or off-line, and performs alarm notification when an abnormal state occurs in the node, and the like. When the alarm notification is performed, the computing cluster management unit 510 may send text or voice alarm information to the mobile terminal of the relevant staff member whose validity is verified in advance by means of wireless communication.
The storage cluster management unit 520 is configured to manage the at least one storage cluster through the glusterfs management interface provided by the heketi service, and, in combination with kubernetes, to use the storage resources in kubernetes.
Wherein the glusterfs management interface provided by the heketi service may be used to manage the lifecycle of glusterfs volumes; glusterfs volumes can be created, listed, and deleted across a plurality of storage clusters using heketi, and the storage cluster management unit 520 can use heketi to intelligently manage the allocation, creation, and deletion of entire disks in the clusters.
Specifically, the storage cluster management unit 520 uses the glusterfs management interface provided by heketi, can manage multiple clusters simultaneously, and, after being combined with kubernetes, can quickly use the storage resources of glusterfs in kubernetes.
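heketi's RESTful management interface is usually driven through heketi-cli; the endpoint, topology file name, and volume size below are hypothetical, not taken from the patent:

```shell
# Hedged sketch: register the cluster's node/disk layout with heketi,
# then let heketi carve a replicated volume out of the managed disks.
export HEKETI_CLI_SERVER=http://heketi.example.local:8080  # assumed endpoint

heketi-cli topology load --json=topology.json   # nodes + raw devices per cluster
heketi-cli volume create --size=100 --replica=3 # heketi picks the bricks itself
heketi-cli volume list                          # enumerate managed volumes
```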
The application management module 600 is used for managing the construction, storage and actual use of a plurality of applications to be deployed in the coal preparation plant.
In one embodiment of the present application, as shown in fig. 3, the application management module 600 includes: the container image management unit 610, the package management unit 620, and the application execution management unit 630.
The container image management unit 610 is configured to store and manage container images sent by a user using a harbor service as a private image repository.
Specifically, harbor is an open-source container image registry, which can serve as a private image repository designed for the coal preparation plant. A user can push container images to the container image management unit 610, and the container image management unit 610 stores and manages each container image through harbor.
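Pushing an image into such a private harbor registry follows the standard docker workflow; the registry host, project, and image names here are hypothetical:

```shell
# Hedged sketch: tag a locally built image into a harbor project,
# authenticate against the registry, and push the image to it.
docker login harbor.plant.local                       # assumed registry host
docker tag washery-app:1.0 harbor.plant.local/plant/washery-app:1.0
docker push harbor.plant.local/plant/washery-app:1.0
```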
The package management unit 620 is a helm charts package manager, and the package management unit 620 is configured to store and manage charts packages sent by a user, using the harbor service as a helm charts repository, where the charts packages contain the applications to be deployed by the coal preparation plant.
Among them, helm is the package manager of kubernetes, which can quickly find, download, and install software packages. Helm is composed of the client component helm and the server component tiller, and can package a group of K8s resources for unified management, as well as for finding, sharing, and using application software built for kubernetes.
Specifically, the package management unit 620 uses the harbor service as a private helm charts repository; a user may package an application into a helm charts package and push the charts package to the package management unit 620, which then manages the charts package. The helm charts repository contains rich applications, including but not limited to: basic services commonly used by coal preparation plants (such as the relational database mysql, the message queue rabbitmq, the non-relational database mongodb, and the like), established standardized general-purpose coal preparation plant business applications (such as a user center, gateway services, workflow engines, reports, a dashboard engine, and the like), and business services custom-developed for particular coal preparation plants.
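Packaging an application and publishing it to a harbor-hosted chart repository can be sketched as follows; the chart name and registry URL are hypothetical, and the OCI push path assumes a helm 3 client:

```shell
# Hedged sketch: bundle a chart directory into a versioned .tgz archive,
# then push it to harbor's OCI-compatible chart storage.
helm package ./washery-app            # produces washery-app-<version>.tgz
helm registry login harbor.plant.local
helm push washery-app-1.0.0.tgz oci://harbor.plant.local/plant-charts
```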
As shown in fig. 3, the application execution management unit 630 includes: an application deployment subunit 631 and an application operations and maintenance subunit 632.
The application deployment subunit 631 is configured to rapidly deploy the target application to be deployed, which is called by the user from the container image management unit, into the corresponding target cluster. Specifically, when deploying an application, the user may, through the visual interface of the application deployment subunit 631, select the private harbor library from which to pull the application to be deployed, then select the cluster to release to in the selection list, and configure the storage, memory, CPU resources, network configuration, and other information required by the deployed application. The application deployment subunit 631 then receives the instruction sent by the user and can rapidly deploy the service into the cluster according to the control instruction.
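Behind such a visual deployment flow, a single chart installation with resource overrides is the likely underlying primitive; the chart location, namespace, and values below are hypothetical:

```shell
# Hedged sketch: install the packaged application into a target cluster
# namespace, overriding the CPU/memory requests chosen in the UI.
helm install washery-app oci://harbor.plant.local/plant-charts/washery-app \
  --version 1.0.0 \
  --namespace plant-a --create-namespace \
  --set resources.requests.cpu=500m \
  --set resources.requests.memory=512Mi \
  --set replicaCount=2
```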
The application operation and maintenance subunit 632 is configured to monitor the status of each application through kubesphere, adjust the configuration information of each application, and perform abnormal status alarm of the application.
Kubesphere is a distributed operating system for cloud native applications built on kubernetes. It facilitates the plug-and-play integration of third-party applications and cloud native ecosystem components, provides single-node, multi-node, and cluster plug-in installation as well as cluster upgrade, operation, and maintenance, and supports unified distribution and operation and maintenance management of cloud native applications across multiple clouds and multiple clusters.
Specifically, the application operation and maintenance subunit 632 monitors the status of each application through kubesphere, for example the scheduling condition, running status, resource usage, and running logs of each application, and various kinds of configuration information can be adjusted through visual pages, for example adjusting resources, adjusting scheduling nodes, adjusting the number of service replicas, and so on. In addition, when an abnormal state occurs in an application, an alarm notification may be sent; for the specific implementation of the application abnormality alarm notification, reference may be made to the node abnormal state alarm performed by the computing cluster management unit 510, which is not repeated here.
The rights control module 700 is configured to perform rights management on different projects, at least one storage cluster, at least one computing cluster, and an application management process of a plurality of coal preparation plants based on the service line.
Specifically, the rights control module 700 is capable of isolating and controlling rights in various aspects such as computing resources, storage resources, and application management based on the concept of a service line.
In particular implementations, in one embodiment of the present application, the rights control module 700 may perform several operations:
in a first example, items are allocated to different coal preparation plants by setting service lines, and rights corresponding to each item are set. Specifically, through setting the logic concept of the service line, projects are distributed for different coal preparation plants, corresponding rights are set for each project, and the rights under the same project are ensured to be the same.
In a second example, the business lines are matched to namespaces of kubernetes, and rights isolation of the computing clusters is performed based on the namespaces. Specifically, in terms of computing resources, the rights control module 700 matches each service line to a namespace of kubernetes, and implements rights isolation of the computing clusters through the resource and rights isolation of namespaces.
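Namespace-based isolation of this kind is conventionally enforced with kubernetes RBAC; the namespace and group names below are hypothetical:

```shell
# Hedged sketch: give one business line its own namespace and restrict a
# user group to that namespace via a RoleBinding to the built-in "edit" role.
kubectl create namespace plant-a

kubectl apply -f - <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: plant-a-editors
  namespace: plant-a          # rights stop at the namespace boundary
subjects:
  - kind: Group
    name: plant-a-team        # hypothetical user group
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit                  # built-in edit role, scoped down by the binding
  apiGroup: rbac.authorization.k8s.io
EOF
```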
In a third example, when an application is deployed, the application is bound to a storage volume of glusterfs through API resources of kubernetes, and rights isolation of the storage cluster is performed based on the namespace. Specifically, in terms of storage resources, when an application is deployed by the application deployment subunit 631 in the above embodiment, the rights control module 700 creates and binds to a storage volume of glusterfs using API resources of kubernetes, such as pv or pvc. Wherein a pv (persistent volume) is a block of storage in the cluster, which can be preset by a user or provisioned dynamically using a storage class. A pvc (persistent volume claim) is a user's request for storage. Since pvc objects are namespaced and a pv can only be reached through a pvc in that namespace, the rights control module 700 can match applications to the service lines.
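A hedged sketch of such a claim-and-bind flow (the storage class name `glusterfs-heketi` and the requested size are assumptions):

```shell
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
  namespace: plant-a                 # the claim is only visible in this namespace
spec:
  accessModes: ["ReadWriteMany"]     # glusterfs supports shared read-write access
  storageClassName: glusterfs-heketi # assumed heketi-backed storage class
  resources:
    requests:
      storage: 10Gi
EOF
# Pods in namespace plant-a mount the claim by name; pods elsewhere cannot.
```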
In a fourth example, corresponding project rights are set for different users based on the harbor service, and rights isolation and control of different applications are achieved by binding projects to the service lines. Specifically, in terms of application management, since harbor provides user and project concepts, different project rights can be opened to different users, and the rights control module 700 realizes rights isolation and control by binding the project concept to the service lines.
Based on the above embodiments, in order to more clearly illustrate the working principle of the cloud native platform of the intelligent coal preparation plant of the present application, the following is an exemplary description of an infrastructure of the cloud native platform of the intelligent coal preparation plant provided in one embodiment of the present application. Fig. 4 is a schematic architecture diagram of a cloud native platform of a specific intelligent coal preparation plant according to an embodiment of the present application.
As shown in fig. 4, the architecture includes four parts: infrastructure, cluster deployment, clusters and basic services, and the application market. The infrastructure comprises the equipment required to realize the cloud native platform, including physical machines, virtual machines, and hyper-converged servers (Hyperconverged Infrastructure, HCI for short); that is, the cloud native platform can be built through the cooperative operation of the hyper-converged servers, virtual machines, and physical machines in the coal preparation plant. The platform performs clustered management of computing resources and storage resources by deploying K8s clusters, glusterfs clusters, and the related basic services, and applications in the application market shown in fig. 4 may be integrated on the cloud native platform.
To sum up, the cloud native platform of the intelligent coal preparation plant of the embodiment of the application has the following technical effects:
The cloud native platform can avoid the physical expansion bottleneck of a single server and achieve a from-zero-to-one breakthrough in horizontal expansion capability, so that the intelligent cloud native management and control platform of the coal preparation plant has the basic capability of building a large-scale data center. The platform realizes multi-cluster management; on the basis of multi-cluster computing and storage resource isolation, the computing and storage clusters simultaneously achieve unification of the management layer, sharing of private product repositories, and the like. Thus, clustered management of a plurality of coal preparation plants can be realized.
The cloud native platform performs clustered management, has the capabilities of dynamic expansion and contraction of resources, elastic scaling, and rolling upgrade of software services, and achieves the effect that hardware resources are upgraded without shutdown and software platform upgrades are imperceptible to users. The cloud native platform realizes communication and coordination among horizontal nodes by using basic algorithms such as replication, sharding, consensus, and consistency, achieves automatic failover and recovery, realizes a from-zero-to-one breakthrough in high availability, and raises the Mean Time Between Failures (MTBF) from 5,000 hours to more than 10,000 hours. Therefore, the availability, reliability, and stability of the intelligent coal preparation plant are greatly improved.
The cloud native platform provides rich basic components and provides a standardized scheme for the rapid deployment of common software platform basic components such as MongoDB, MySQL, Redis, SQLServer, Kafka, RabbitMQ, and InfluxDB in the cloud native platform. Moreover, rich intelligent application services for the coal preparation plant are provided, realizing one-click deployment of common services such as reports and dashboards, workflows, data buses, rights, and user centers, thereby providing rich application components and integrating application market functions in the platform.
The cloud native platform provides cluster visual management capability, can monitor various indexes of a cluster in real time, provides operation and maintenance guarantees for managing a large-scale data center, and greatly reduces the difficulty of cluster management. Moreover, it provides service visual management capability, can monitor the service running state in real time, and reduces operation and maintenance difficulty and cost. Therefore, the deployment, operation, and maintenance costs of applications are reduced.
the cloud native platform has a high degree of integration, establishes a PaaS cloud native service base for the intelligent cloud native platform of the coal preparation plant, can integrate computing resources and storage resources, and supports the intelligent management and control applications of the coal preparation plant being developed, deployed, operated, and interconnected in the form of standard components. The platform has strong universality and compatibility: the bottom layer supports various basic environments, and the platform can be built on various infrastructures such as physical machines, hyper-converged servers, and virtual machines. Meanwhile, the bottom-layer details are shielded from the upper-layer applications, and application deployment is decoupled from the basic environment, so that developers do not need to pay attention to the underlying facilities, the development difficulty and access cost of new services are reduced, and the deployment working hours of various components can be reduced by more than 50%. The platform has an all-round standard system for applications and data, with a unified framework and standards, which facilitates the integrated management and control of each coal preparation plant.
The cloud native platform of the present application aims at controlling and isolating the rights of computing, storage, and application in all aspects for the different plants and equipment involved in intelligent coal preparation plant projects, thereby realizing rights isolation between different coal preparation plants.
In order to achieve the above embodiments, the embodiments of the present application further provide an electronic device. Fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
As shown in fig. 5, the electronic device 1000 may include the cloud native platform 10 of the intelligent coal preparation plant as described in the above embodiments. That is, in the embodiment of the present application, the cloud native platform 10 of the intelligent coal preparation plant may be mounted in the electronic device 1000, and the electronic device 1000 may be of various types such as a physical machine, a virtual machine, and a super fusion server, which are specifically determined according to the application requirements of the intelligent coal preparation plant.
Therefore, the electronic device 1000 in the embodiment of the application can perform clustered management on the coal preparation plant by applying the cloud native platform of the intelligent coal preparation plant carried by the electronic device, and improves the availability, reliability and stability of the intelligent coal preparation plant.
In the description of the present specification, a description referring to terms "one embodiment," "some embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, schematic representations of the above terms are not necessarily directed to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, the different embodiments or examples described in this specification and the features of the different embodiments or examples may be combined and combined by those skilled in the art without contradiction.
Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include at least one such feature. In the description of the present application, the meaning of "plurality" is at least two, such as two, three, etc., unless explicitly defined otherwise.
In the present invention, unless explicitly specified and limited otherwise, the terms "mounted," "connected," "secured," and the like are to be construed broadly, and may be, for example, fixedly connected, detachably connected, or integrally formed; can be mechanically or electrically connected; either directly or indirectly, through intermediaries, or both, may be in communication with each other or in interaction with each other, unless expressly defined otherwise. The specific meaning of the above terms in the present invention can be understood by those of ordinary skill in the art according to the specific circumstances.
In the present invention, unless expressly stated or limited otherwise, a first feature "up" or "down" a second feature may be the first and second features in direct contact, or the first and second features in indirect contact via an intervening medium. Moreover, a first feature being "above," "over" and "on" a second feature may be a first feature being directly above or obliquely above the second feature, or simply indicating that the first feature is level higher than the second feature. The first feature being "under", "below" and "beneath" the second feature may be the first feature being directly under or obliquely below the second feature, or simply indicating that the first feature is less level than the second feature.
While embodiments of the present invention have been shown and described above, it will be understood that the above embodiments are illustrative and not to be construed as limiting the invention, and that variations, modifications, alternatives and variations may be made to the above embodiments by one of ordinary skill in the art within the scope of the invention.

Claims (10)

1. A cloud native platform of an intelligent coal preparation plant, characterized by comprising: a platform deployment module, a computing resource management module, a storage resource management module, a network module, a platform operation and maintenance module, an application management module, and a rights control module, wherein,
the platform deployment module is used for communicating with nodes corresponding to a plurality of coal preparation plants, and deploying at least one storage cluster, at least one computing cluster and a private container warehouse among the nodes;
the computing resource management module is used for carrying out clustered management on computing resources in the computing cluster through a container arrangement tool;
the storage resource management module is used for carrying out clustered management on storage resources in the storage cluster through a distributed network file system;
the network module is used for configuring the network of the computing cluster to realize network communication inside the computing cluster and communication outside the computing cluster;
The platform operation and maintenance module is used for managing the operation states of the at least one storage cluster and the at least one computing cluster;
the application management module is used for managing the construction, storage and actual use of a plurality of applications to be deployed in the coal preparation plant;
the authority control module is used for performing authority management on different projects of the coal preparation plants, the at least one storage cluster, the at least one computing cluster and the application management process based on the service line.
2. The cloud native platform of claim 1, wherein the platform deployment module comprises: a base configuration deployment unit, a storage cluster configuration unit, a computing cluster configuration unit, and a private container repository configuration unit, wherein,
the basic configuration deployment unit is used for constructing a basic deployment environment in a plurality of nodes, installing a bottom layer service and adding basic configuration information;
the storage cluster configuration unit is used for deploying the glusterfs service on a plurality of storage nodes according to preset storage cluster configuration information to construct the at least one storage cluster, deploying the heketi service on a storage management node, and constructing the node topology of the at least one storage cluster through the heketi service.
3. The cloud native platform according to claim 2, wherein the computing cluster configuration unit is specifically configured to:
deploying kubernetes on a computing node where a main computing cluster is located to generate the main computing cluster, and installing related service components on the main computing cluster;
constructing a plurality of member computing clusters based on encryption information of the master computing cluster;
the private container warehouse configuration unit is used for building a harbor service on a node where the private container warehouse is located.
4. The cloud native platform of claim 1, wherein the container orchestration tool comprises: kubernetes, the computing resource management module is specifically configured to:
according to the demand information of the target application, automatically adjusting the scale and the distributed computing resources of the target application through kubernetes;
automatically adjusting the flow route of the target application through a load balancing mechanism built in kubernetes, and balancing the load of the target application;
the status of each container and node is monitored based on the kubernetes self-repair capability, and the failed container and the failed node are automatically restarted.
5. The cloud native platform of claim 2, wherein the distributed network file system comprises: glusterfs, and the storage resource management module is specifically configured to:
Performing data replication and data dispersion among the plurality of storage nodes through glusterfs, and, when a failed node is detected, serving access through any normal node in place of the failed node;
according to the storage capacity to be expanded, a corresponding number of storage nodes are added in the storage cluster;
when writing data into a storage cluster, copying the written data to the plurality of storage nodes through the glusterfs replication algorithm.
6. The cloud native platform of claim 2, wherein the platform operation and maintenance module comprises: a computing cluster management unit and a storage cluster management unit, wherein,
the computing cluster management unit is used for monitoring the resource state of each node through kubesphere, scheduling equipment in each node and sending abnormal state alarm information;
the storage cluster management unit is used for managing the at least one storage cluster through the glusterfs management interface provided by the heketi service, and is combined with kubernetes to use the storage resources in kubernetes.
7. The cloud native platform of claim 1, wherein the application management module comprises: a container image management unit, a package management unit, and an application execution management unit, wherein,
The container mirror image management unit is used for storing and managing the container mirror images sent by the user by using the harbor service as a private mirror image warehouse;
the package management unit is a helm charts package manager and is used for storing and managing the charts packages sent by the user, using the harbor service as a helm charts repository, wherein the charts packages contain applications to be deployed by the coal preparation plant.
8. The cloud native platform of claim 7, wherein the application execution management unit comprises: an application deployment subunit and an application operation and maintenance subunit, wherein,
the application deployment subunit is used for rapidly deploying the target application to be deployed, which is called by the user from the container mirror image management unit, into a corresponding target cluster;
the application operation and maintenance subunit is used for monitoring the state of each application through kubesphere, adjusting the configuration information of each application and carrying out abnormal state alarming of the application.
9. The cloud native platform according to claim 1, wherein the rights control module is specifically configured to:
distributing projects for different coal preparation plants by setting service lines, and setting authority corresponding to each project;
Matching the service line to a name space of kubernetes, and performing authority isolation of the computing cluster based on the name space;
when the application is deployed, binding the application to a storage volume of the glusterfs through API resources of kubernetes, and performing authority isolation of the storage cluster based on the name space;
corresponding project authorities are set for different users based on the harbor service, and authority isolation and control of different applications are realized by binding projects to the service line.
10. An electronic device, comprising the cloud native platform of an intelligent coal preparation plant according to any one of claims 1-9.
CN202311235137.0A 2023-09-22 2023-09-22 Cloud primary platform of intelligent coal preparation plant Pending CN117614825A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311235137.0A CN117614825A (en) 2023-09-22 2023-09-22 Cloud primary platform of intelligent coal preparation plant

Publications (1)

Publication Number Publication Date
CN117614825A true CN117614825A (en) 2024-02-27

Family

ID=89948509

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311235137.0A Pending CN117614825A (en) 2023-09-22 2023-09-22 Cloud primary platform of intelligent coal preparation plant

Country Status (1)

Country Link
CN (1) CN117614825A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination