CN116795483A - Resource processing method and device and storage medium - Google Patents


Info

Publication number
CN116795483A
Authority
CN
China
Prior art keywords
service mesh, namespace, resources
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211320966.4A
Other languages
Chinese (zh)
Inventor
张永曦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Mobile Communications Group Co Ltd
China Mobile Suzhou Software Technology Co Ltd
Original Assignee
China Mobile Communications Group Co Ltd
China Mobile Suzhou Software Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Mobile Communications Group Co Ltd and China Mobile Suzhou Software Technology Co Ltd
Priority to CN202211320966.4A
Publication of CN116795483A
Legal status: Pending

Classifications

    • G06F9/45558 Hypervisor-specific management and integration aspects
    • G06F11/301 Monitoring arrangements specially adapted to the computing system or computing system component being monitored, where the computing system is a virtual computing platform, e.g. logically partitioned systems
    • G06F11/3072 Monitoring arrangements determined by the means or processing involved in reporting the monitored data, where the reporting involves data filtering, e.g. pattern matching, time or event triggered, adaptive or policy-based reporting
    • G06F2009/45587 Isolation or security of virtual machine instances
    • G06F2009/45595 Network integration; Enabling network access in virtual machine instances

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Quality & Reliability (AREA)
  • Mathematical Physics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

An embodiment of the present application provides a resource processing method, which comprises the following steps: configuring a plurality of service meshes according to a correspondence between preset labels and namespaces, wherein each of the plurality of service meshes is provided with a filter; and, after an operation instruction is received, filtering the resources to be operated on that correspond to the operation instruction based on the filter, and supervising the filtered resources by using the plurality of service meshes. Because the plurality of service meshes are configured according to the correspondence between the preset labels and the namespaces, cluster resources are monitored through the plurality of service meshes, the resources to be operated on are filtered based on the filter, and the filtered resources are supervised by the plurality of service meshes, each service mesh monitors only the cluster resources under its corresponding namespaces; security isolation between different applications can thus be achieved while the resource utilization of the service meshes is improved.

Description

Resource processing method and device, and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method and an apparatus for processing resources, and a storage medium.
Background
Currently, Kubernetes is the de facto container orchestration standard and one of the foundational cloud-native technologies. The service mesh, as the infrastructure layer for communication between microservices, takes on the role of traffic control and is responsible for inter-service network calls, rate limiting, circuit breaking, and monitoring; it is likewise one of the key cloud-native technologies. Istio is the de facto standard for service meshes and is widely used because of its good support for Kubernetes. However, as a Kubernetes cluster grows in size and more applications run on it, the existing solution in which one Kubernetes cluster can only be managed by a single service mesh (Istio) cannot meet the security and performance requirements of mutual isolation between different applications.
Disclosure of Invention
The embodiments of the present application provide a resource processing method and apparatus, and a storage medium, which enable one Kubernetes cluster to be managed by a plurality of service meshes, improve resource utilization, and provide a security isolation mechanism for different applications.
The technical solutions of the embodiments of the present application are implemented as follows:
in a first aspect, an embodiment of the present application provides a resource processing method, where the resource processing method includes:
configuring a plurality of service meshes according to a correspondence between preset labels and namespaces, wherein each of the plurality of service meshes is provided with a filter; and
after receiving an operation instruction, filtering the resources to be operated on that correspond to the operation instruction based on the filter, and performing supervision processing on the filtered resources by using the plurality of service meshes.
In a second aspect, an embodiment of the present application provides a resource processing apparatus, which includes a configuration unit, a filtering unit, and a supervision unit;
the configuration unit is configured to configure a plurality of service meshes according to a correspondence between preset labels and namespaces, wherein each of the plurality of service meshes is provided with a filter;
the filtering unit is configured to filter, after an operation instruction is received, the resources to be operated on that correspond to the operation instruction based on the filter; and
the supervision unit is configured to perform supervision processing on the filtered resources by using the plurality of service meshes.
In a third aspect, an embodiment of the present application provides a resource processing apparatus, which includes a processor and a memory; wherein
the memory is used for storing a computer program capable of running on the processor;
the processor is configured to perform the resource processing method described above when running the computer program.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium, wherein the storage medium has stored thereon computer program code which, when executed by a computer, implements a resource processing method as described above.
The embodiments of the present application provide a resource processing method and apparatus, and a storage medium. The method includes: configuring a plurality of service meshes according to a correspondence between preset labels and namespaces, wherein each of the plurality of service meshes is provided with a filter; and, after an operation instruction is received, filtering the resources to be operated on that correspond to the operation instruction based on the filter, and supervising the filtered resources by using the plurality of service meshes. In this way, the resource processing device can configure the plurality of service meshes according to the correspondence between the preset labels and the namespaces, so that cluster resources can be monitored through the plurality of service meshes; meanwhile, if the resource processing device receives an operation instruction, it can filter the resources to be operated on that correspond to the operation instruction based on the filter and supervise the filtered resources through the plurality of service meshes. Each service mesh therefore monitors only the cluster resources under its corresponding namespaces, so that security isolation between different applications can be achieved while the resource utilization of the service meshes is improved.
Drawings
FIG. 1 is a schematic diagram of a resource processing method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a filter location according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a filter principle according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a service grid according to an embodiment of the present application;
FIG. 5 is a schematic diagram of a resource processing device according to an embodiment of the present application;
FIG. 6 is a second schematic structural diagram of a resource processing device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the accompanying drawings of the embodiments of the present application. It is to be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to be limiting. It should be noted that, for convenience of description, only the portions related to the present application are shown in the drawings.
Currently, Kubernetes is the de facto container orchestration standard and one of the foundational cloud-native technologies. The service mesh, as the infrastructure layer for communication between microservices, takes on the role of traffic control and is responsible for inter-service network calls, rate limiting, circuit breaking, and monitoring; it is likewise one of the key cloud-native technologies. Istio is the de facto standard for service meshes and is widely used because of its good support for Kubernetes. However, as a Kubernetes cluster grows in size and more applications run on it, the existing solution in which one Kubernetes cluster can only be managed by a single service mesh (Istio) cannot meet the security and performance requirements of mutual isolation between different applications.
In the Kubernetes cluster mode, Istio monitors the resource objects and configuration objects under all namespaces (Namespace) in the cluster, most of which it does not actually care about. As a result, the performance of Istiod can become a bottleneck, and a great amount of computing resources is wasted. Meanwhile, since the Sidecar certificates of different applications are issued by the same CA, applications in the same mesh can, in theory, access one another. If the traffic is not encrypted a second time, this can cause security problems; if the traffic is encrypted a second time, unnecessary resource consumption is introduced. In addition, to ensure consistency, routing policies are issued globally, i.e., the routing policies held by every Sidecar are globally consistent. Consequently, each Sidecar stores the routing policies of the entire service mesh although only a small part of them applies to that Sidecar, and every time a Sidecar forwards traffic it has to find the routing policy matching the current traffic among all the policies. This reduces the routing and forwarding efficiency of the Sidecar and wastes storage and network resources; in fact, each Sidecar only needs to obtain the routing policies related to itself.
In order to solve the current problem that cluster resources can only be managed by one service mesh, the embodiments of the present application provide a resource processing method and apparatus, and a storage medium. The method includes: configuring a plurality of service meshes according to a correspondence between preset labels and namespaces, wherein each of the plurality of service meshes is provided with a filter; and, after an operation instruction is received, filtering the resources to be operated on that correspond to the operation instruction based on the filter, and supervising the filtered resources by using the plurality of service meshes. In this way, the resource processing device can configure the plurality of service meshes according to the correspondence between the preset labels and the namespaces, so that cluster resources can be monitored through the plurality of service meshes; meanwhile, if the resource processing device receives an operation instruction, it can filter the resources to be operated on that correspond to the operation instruction based on the filter and supervise the filtered resources through the plurality of service meshes. Each service mesh therefore monitors only the cluster resources under its corresponding namespaces, so that security isolation between different applications can be achieved while the resource utilization of the service meshes is improved.
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application.
Example 1
An embodiment of the present application provides a resource processing method. FIG. 1 is a schematic diagram of the resource processing method provided by the embodiment of the present application; as shown in FIG. 1, the resource processing method may include the following steps:
Step 101, configuring a plurality of service meshes according to the correspondence between preset labels and namespaces; wherein each of the plurality of service meshes is provided with a filter.
In the embodiment of the present application, the resource processing device can configure a plurality of service meshes according to the correspondence between the preset labels and the namespaces; wherein each of the plurality of service meshes is provided with a filter.
It should be noted that, in the embodiment of the present application, Kubernetes may be a cluster system, and a user may deploy various services in the cluster. Deploying services in the cluster may be understood as running individual containers in the Kubernetes cluster and running specified applications in the containers. The smallest management unit of Kubernetes is the Pod, in which containers are placed, and Kubernetes manages Pods through Pod controllers.
It should be noted that, in the embodiment of the present application, the resource processing device may be any device capable of running a Kubernetes cluster environment, which is not limited in particular.
It should be noted that, in the embodiment of the present application, the plurality of service meshes may all run in the Kubernetes cluster environment.
It should be noted that, in the embodiment of the present application, Kubernetes logically isolates resource objects to form multiple virtual clusters, that is, multiple namespaces (Namespace) may be formed. By default, all Pods in a Kubernetes cluster can access one another; in practice, if two Pods are not supposed to access each other, they can be divided into different namespaces (the specific division method is not limited by the present application). By distributing the resources inside the cluster into different namespaces, Kubernetes forms logical "groups", which facilitates the isolated use and management of the resources of different groups.
It should be noted that, in the embodiment of the present application, Istio is a service mesh that allows more fine-grained, sophisticated, and observable communication between the Pods and services in a Kubernetes cluster. The service mesh is managed by extending the Kubernetes API with CRDs: Sidecar proxies are injected into the Pods, and the traffic in the cluster is then controlled through these proxies. When the configuration or a service changes, the Istio control plane updates all the Istio-proxy Sidecars; all traffic is routed through the Istio-proxy container in each Pod, and whenever an Istio-proxy receives and redirects a request, it submits information about the request to the Istio control plane. In a cluster whose services communicate with one another, this improves observability and gives better control over all the traffic.
It should be noted that, in the embodiment of the present application, the service mesh is a key part of the microservice infrastructure: it implements the calls between services, improves the flexibility and security of the application, and is logically divided into a data plane and a control plane. The data plane consists of the Istio-agent and Envoy; it is injected into the Pod in the form of a Sidecar, takes over the routing table of the container, interacts with the control plane using the XDS protocol, receives routing and forwarding rules, and forwards traffic according to those rules, acting as the Proxy inside the Pod. The control plane is implemented by the Istio-pilot module, runs in the form of a Pod, and supports multiple platforms (the present application refers to a Kubernetes cluster); it is used to monitor the state of platform resources, generate routing rules, and issue the routing rules to the data plane in the form of the XDS protocol. The main function of the control plane is to generate XDS protocol content based on the relevant resource conditions of Kubernetes and send it to the data plane.
It should be noted that, in the embodiment of the present application, Kubernetes provides a mechanism for classifying resources, namely Labels, which are used to add identifiers to resources so that they can be distinguished and selected. One resource object may define any number of Labels, and one Label may also be added to any number of resource objects (the specific adding manner is not limited by the present application). Labels are usually determined when the resource objects are defined, and they may also be dynamically added or deleted after the objects are created.
It should be noted that, in the embodiment of the present application, the correspondence between the preset labels and the namespaces may be that different preset labels correspond to different namespaces. For example, the namespaces corresponding to preset label 1 are a, b, and c, and the namespaces corresponding to preset label 2 are d, e, and f; the specific correspondence may be set arbitrarily, as shown in Table 1.
TABLE 1
Preset label       Preset label 1    Preset label 2
Namespace group    a, b, c           d, e, f
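In practice, such a correspondence can be established by attaching the preset label to the namespaces of its group. The following is a minimal client-go sketch, not part of the patent's implementation: it assumes a kubeconfig at the default location and uses a hypothetical label key mesh-group.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a clientset from the local kubeconfig (assumed location).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Hypothetical correspondence from Table 1: label value "1" marks namespaces a, b, c.
	for _, name := range []string{"a", "b", "c"} {
		ns, err := cs.CoreV1().Namespaces().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		if ns.Labels == nil {
			ns.Labels = map[string]string{}
		}
		ns.Labels["mesh-group"] = "1" // preset label 1
		if _, err := cs.CoreV1().Namespaces().Update(context.TODO(), ns, metav1.UpdateOptions{}); err != nil {
			panic(err)
		}
		fmt.Println("labeled namespace", name)
	}
}
```

The same effect can be obtained interactively with kubectl label namespace a mesh-group=1; the label key and value here are illustrative only.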
Further, in an embodiment of the present application, for each service mesh of the plurality of service meshes, a target label is determined based on the correspondence between the preset labels and the namespaces; first attribute information corresponding to the service mesh is configured according to the target label; and the filter is configured based on the corresponding first attribute information.
It should be noted that, in the embodiment of the present application, the resource processing device may determine the target label based on the correspondence between the preset labels and the namespaces. For example, the preset labels may be label 1, label 2, label 3, and label 4; the namespaces corresponding to preset label 1 may be a, b, and c, and the namespaces corresponding to preset label 2 may be d, e, and f; according to the correspondence between the preset labels and the namespaces, the target labels may be label 1 and label 2.
It should be noted that, in the embodiment of the present application, the resource processing device may configure the first attribute information corresponding to the service mesh according to the target label, and the filter can be configured through the first attribute information corresponding to the service mesh, so that filtering of cluster resources through the filter can be achieved. For example, the first attribute information corresponding to the service mesh may be the values.global.namespace.labelFilter attribute added in the Istio-Operator installation tool, and the value of this attribute may be a label combination, for example the combination of label 1 and label 2, i.e., the target labels.
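For illustration, the sketch below shows one way such a label combination could be turned into the set of namespaces a mesh should supervise, by listing namespaces with a label selector. The attribute name follows the description above, while the function name and the selector string are assumptions.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// listFilteredNamespaces returns the names of the namespaces whose labels match
// the label combination carried by values.global.namespace.labelFilter
// (selector syntax such as "mesh-group=1" is assumed here).
func listFilteredNamespaces(cs kubernetes.Interface, labelFilter string) ([]string, error) {
	nsList, err := cs.CoreV1().Namespaces().List(context.TODO(), metav1.ListOptions{LabelSelector: labelFilter})
	if err != nil {
		return nil, err
	}
	var names []string
	for _, ns := range nsList.Items {
		names = append(names, ns.Name)
	}
	return names, nil
}

func main() {
	// Client construction mirrors the previous sketch.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	names, err := listFilteredNamespaces(cs, "mesh-group=1")
	if err != nil {
		panic(err)
	}
	fmt.Println("namespaces supervised by this mesh:", names)
}
```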
Further, for each service mesh of the plurality of service meshes, a target namespace is determined according to the target label corresponding to the service mesh based on the correspondence between the preset labels and the namespaces, and supervision processing is performed on the target cluster resources belonging to the target namespace.
It should be noted that, in the embodiment of the present application, the resource processing device may determine, based on the correspondence between the preset labels and the namespaces, the target namespaces according to the target label corresponding to the service mesh, and perform supervision processing on the target cluster resources belonging to the target namespaces. As shown in Table 2, different target labels may correspond to different namespaces, and different service meshes may correspond to different target labels, that is, to the different namespaces corresponding to those target labels; in other words, the cluster resources of the target namespaces are supervised by the corresponding service meshes. For example, if the label specified by service mesh 1 is target label 1 and the label specified by service mesh 2 is target label 2, the target namespaces that service mesh 1 may supervise are a, b, and c, and its target cluster resources are the resources under the target namespaces a, b, and c; the target namespaces that service mesh 2 may supervise are d, e, and f, and its target cluster resources are the resources under the target namespaces d, e, and f, as shown in Table 2.
TABLE 2
Preset label        Target preset label 1    Target preset label 2
Service mesh        Service mesh 1           Service mesh 2
Target namespaces   a, b, c                  d, e, f
It should be noted that, in the embodiment of the present application, the label corresponding to each service mesh may change dynamically, and each namespace may dynamically join or leave a certain service mesh.
It should be noted that, in the embodiment of the present application, each service mesh needs to be assigned a specific target label; therefore, in the case that there are multiple target labels, the present application can configure multiple service meshes, and each namespace corresponding to a target label is incorporated into the service mesh assigned that target label for supervision.
Further, FIG. 2 is a schematic diagram of a filter location according to an embodiment of the present application. As shown in FIG. 2, the resource processing device adds a filter between the service instance controller, the configuration rule controller, and Kubernetes Informer.
It should be noted that, in the embodiment of the present application, the two types of resources of the Kubernetes cluster may include service instances and configuration rules, where the service instances mainly include Service, Pod, and Endpoint, and the configuration rules mainly include the resources customized by the service mesh, such as VirtualService, DestinationRule, and Gateway.
It should be noted that, in the embodiment of the present application, the resource processing device may monitor and acquire the two types of resources of the Kubernetes cluster in real time through the Informer mechanism of Kubernetes. Because both the service instances and the configuration rules of Kubernetes are managed by namespace, by adding a filter, the two types of resources watched by Kubernetes Informer can be filtered according to namespace; if a change of the resources under a corresponding namespace is detected, the corresponding request can be issued to the data plane (Proxy) through the XDS Server. In this way, a service mesh only monitors the resources of its corresponding namespaces through the Informer mechanism and shields the resources of irrelevant namespaces, thereby realizing the namespace isolation of the service meshes.
It should be noted that, in the embodiment of the present application, for the CA of a service mesh, the filter likewise achieves security isolation from the other service meshes, so that each relevant namespace can obtain the correct RootCert; based on this RootCert root certificate, the Istio-agent establishes a gRPC connection with the Istio-pilot.
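A minimal sketch of the filter idea is given below, as a namespace gate placed between the Kubernetes Informer callbacks and the service instance / configuration rule controllers. The type and method names are illustrative assumptions and do not reproduce Istio's actual source.

```go
package meshfilter

import (
	"sync"

	"k8s.io/apimachinery/pkg/api/meta"
	"k8s.io/client-go/tools/cache"
)

// NamespaceFilter drops informer events whose objects live outside the
// namespaces supervised by this service mesh instance ("Available Namespace").
type NamespaceFilter struct {
	mu      sync.RWMutex
	allowed map[string]struct{}
}

// SetAllowed replaces the supervised namespace set, e.g. after the
// values.global.namespace.labelFilter attribute changes.
func (f *NamespaceFilter) SetAllowed(namespaces []string) {
	next := make(map[string]struct{}, len(namespaces))
	for _, ns := range namespaces {
		next[ns] = struct{}{}
	}
	f.mu.Lock()
	f.allowed = next
	f.mu.Unlock()
}

// Allow reports whether an object belongs to a supervised namespace.
func (f *NamespaceFilter) Allow(obj interface{}) bool {
	m, err := meta.Accessor(obj)
	if err != nil {
		return false
	}
	f.mu.RLock()
	defer f.mu.RUnlock()
	_, ok := f.allowed[m.GetNamespace()]
	return ok
}

// Wrap returns event handlers that forward only in-scope objects to the
// downstream controller handlers.
func (f *NamespaceFilter) Wrap(next cache.ResourceEventHandlerFuncs) cache.ResourceEventHandlerFuncs {
	return cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			if f.Allow(obj) && next.AddFunc != nil {
				next.AddFunc(obj)
			}
		},
		UpdateFunc: func(oldObj, newObj interface{}) {
			if f.Allow(newObj) && next.UpdateFunc != nil {
				next.UpdateFunc(oldObj, newObj)
			}
		},
		DeleteFunc: func(obj interface{}) {
			if f.Allow(obj) && next.DeleteFunc != nil {
				next.DeleteFunc(obj)
			}
		},
	}
}
```

In use, the wrapped handlers would be registered with informer.AddEventHandler(filter.Wrap(controllerHandlers)), so the controllers only ever see resources from namespaces the mesh owns.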
Step 102, after receiving the operation instruction, filtering the resource to be operated corresponding to the operation instruction based on a filter.
In the embodiment of the present application, the resource processing device configures a plurality of service meshes according to the correspondence between the preset labels and the namespaces, each of the plurality of service meshes being provided with a filter; after an operation instruction is received, filtering processing is carried out on the resources to be operated on that correspond to the operation instruction based on the filter.
It should be noted that, in the embodiment of the present application, the resource processing device configures the plurality of service meshes according to the correspondence between the preset labels and the namespaces, and each of the plurality of service meshes may monitor the target namespaces corresponding to its specified target label. Each service mesh may watch its global Mesh Config through the MeshWatcher component, which reads the value of the first attribute information and can identify the namespace scope of the corresponding service mesh. After receiving the operation instruction, the resource processing device may filter the cluster resources under the target namespaces corresponding to the plurality of service meshes through the filter.
Further, in an embodiment of the present application, a first data structure is maintained for each service mesh of the plurality of service meshes, and the target namespaces are stored through the first data structure.
It should be noted that, in the embodiment of the present application, for each service mesh of the plurality of service meshes, the MeshWatcher component may maintain two data structures: a first data structure, Available Namespace, and a second data structure, Different Namespace. The first data structure may store the target namespaces included in the current service mesh, and the second data structure may store the namespaces corresponding to modification instructions, for example the newly added and deleted namespaces.
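The two structures can be pictured as follows; this is an illustrative Go sketch whose field names mirror the description rather than Istio's source.

```go
package meshfilter

// meshWatcherState mirrors the two data structures described above.
type meshWatcherState struct {
	// Available Namespace: a hash set of every namespace the current
	// service mesh supervises.
	available map[string]struct{}

	// Different Namespace: the namespaces added and deleted by the most
	// recent change to the label-filter attribute.
	added   []string
	deleted []string
}
```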
Further, in the embodiment of the present application, for each service mesh of the plurality of service meshes, the resource processing device filters out, based on the filter, the resources in the resources to be operated on that do not belong to the namespaces corresponding to the target label, and obtains the filtered resources.
It should be noted that, in the embodiment of the present application, FIG. 3 is a schematic diagram of a filter principle provided in the embodiment of the present application. As shown in FIG. 3, the Service Informer and the Config Informer may monitor resource changes in real time. When a resource change is detected, the resource processing device may screen the resources according to the first data structure based on the filter, filter out the resources under namespaces that do not belong to the first data structure to obtain the filtered resources, and perform the next step of processing on the resources under namespaces that belong to the first data structure.
Step 103, performing supervision processing on the filtered resources by using the plurality of service meshes.
It should be noted that, in the embodiment of the present application, after the resource processing device receives the operation instruction and filters the resources to be operated on that correspond to the operation instruction based on the filter, the resource processing device performs supervision processing on the filtered resources by using the plurality of service meshes.
Further, in an embodiment of the present application, among the filtered resources, at least one of the following processes is performed on the target cluster resources: adding processing, deleting processing, and updating processing.
It should be noted that, in the embodiment of the present application, as shown in FIG. 3, the target cluster resources may be the resources under the namespaces belonging to the first data structure. If the resource processing device needs to perform at least one of adding processing, deleting processing, and updating processing on the target cluster resources, any of the adding, deleting, and updating may be packaged as an Event object and pushed into the Service Event Queue or the Config Event Queue to await asynchronous processing by the Consumer; the Consumer packages the Event as an XDS Push Request and sends it to the XDS Server for issuing.
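The enqueue-and-consume flow can be sketched with a buffered channel standing in for the Service Event Queue / Config Event Queue; the Event fields and the pushToXDS stand-in below are assumptions made for illustration only.

```go
package main

import "fmt"

type eventKind string

const (
	addEvent    eventKind = "Add"
	deleteEvent eventKind = "Delete"
	updateEvent eventKind = "Update"
)

// Event is the unit pushed into the Service Event Queue / Config Event Queue.
type Event struct {
	Kind      eventKind
	Namespace string
	Name      string
}

// pushToXDS stands in for wrapping the event as an XDS Push Request and
// handing it to the XDS Server for issuing to the data plane.
func pushToXDS(ev Event) {
	fmt.Printf("push %s %s/%s to XDS server\n", ev.Kind, ev.Namespace, ev.Name)
}

func main() {
	queue := make(chan Event, 16) // stands in for one of the two event queues

	// Consumer: drains the queue asynchronously and issues push requests.
	done := make(chan struct{})
	go func() {
		for ev := range queue {
			pushToXDS(ev)
		}
		close(done)
	}()

	// Producer side: a detected change under a supervised namespace is
	// packaged as an Event and enqueued.
	queue <- Event{Kind: addEvent, Namespace: "a", Name: "reviews"}
	queue <- Event{Kind: updateEvent, Namespace: "b", Name: "ratings"}
	close(queue)
	<-done
}
```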
Further, in an embodiment of the present application, a second data structure is maintained for each service mesh of the plurality of service meshes; after the first attribute information is modified in response to a modification instruction, the target namespaces are updated according to the namespaces corresponding to the modification instruction, and the namespaces corresponding to the modification instruction are stored through the second data structure.
It should be noted that, in the embodiment of the present application, when the first attribute information changes, namespaces are newly added and deleted, that is, the namespace scope of the corresponding service mesh changes; the newly added and deleted namespaces are stored in the second data structure. In other words, when the resource processing device receives at least one deleting or adding modification instruction, it adds or deletes namespaces in response to the modification instruction, and the newly added namespaces and the deleted namespaces are stored in the second data structure.
It should be noted that, in the embodiment of the present application, the resource processing device may screen resources according to the first data structure based on the filter and filter out the resources that are not under the namespaces of the first data structure to obtain the filtered resources. Meanwhile, if at least one of adding processing, deleting processing, and updating processing needs to be performed on the filtered resources, the corresponding case is packaged into an event and pushed into a queue, and the event is then packaged into an XDS Push Request and sent to the XDS Server to be issued to the data plane. In this way, the plurality of service meshes only monitor the cluster resources under their corresponding namespaces and do not need to monitor irrelevant cluster resources, thereby improving the resource utilization of the service meshes.
Further, in the embodiment of the present application, for each service mesh of the plurality of service meshes, when the MeshWatcher detects that the first attribute information has changed, that is, the namespace scope of the corresponding service mesh has changed because a namespace has been newly added or deleted, the first data structure is updated and the second data structure is calculated according to the following formulas, where Labeled Namespace is the latest set of qualified namespaces.
New Namespace = Labeled Namespace - (Labeled Namespace ∩ Available Namespace)    (1)
Deleted Namespace = Available Namespace - (Labeled Namespace ∩ Available Namespace)    (2)
Different Namespace = {New Namespace, Deleted Namespace}    (3)
Available Namespace = Labeled Namespace    (4)
It should be noted that, in the embodiment of the present application, New Namespace may be the namespaces in the latest qualified set that are not already in the first data structure, Deleted Namespace may be the namespaces in the first data structure that no longer appear in the latest qualified set, the second data structure Different Namespace may be the collection of the newly added namespaces and the deleted namespaces, and the updated first data structure may be Labeled Namespace, i.e., the latest qualified namespaces.
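Formulas (1) to (4) reduce to two set differences; the small sketch below shows one possible implementation, with function and variable names assumed for illustration.

```go
package meshfilter

// diffNamespaces applies formulas (1)-(4): given the latest label-matched set
// (Labeled Namespace) and the currently supervised set (Available Namespace),
// it returns the newly added namespaces, the deleted namespaces, and the
// updated Available Namespace set.
func diffNamespaces(labeled, available map[string]struct{}) (added, deleted []string, updated map[string]struct{}) {
	// (1) New Namespace = Labeled Namespace - (Labeled Namespace ∩ Available Namespace)
	for ns := range labeled {
		if _, ok := available[ns]; !ok {
			added = append(added, ns)
		}
	}
	// (2) Deleted Namespace = Available Namespace - (Labeled Namespace ∩ Available Namespace)
	for ns := range available {
		if _, ok := labeled[ns]; !ok {
			deleted = append(deleted, ns)
		}
	}
	// (4) Available Namespace = Labeled Namespace; per (3), the caller keeps
	// {added, deleted} as Different Namespace.
	return added, deleted, labeled
}
```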
Further, in the embodiment of the present application, as shown in FIG. 3, for a namespace newly added to the service mesh, the resource processing device may obtain all the resources under the newly added namespace through the Namespace Manager, package them into Add Events, and push them into the two queues, namely the Service Event Queue and the Config Event Queue; conversely, for a namespace deleted from the service mesh, all the resources under the deleted namespace are obtained, packaged into Delete Events, and pushed into the two queues, awaiting asynchronous processing by the Consumer, which packages the events into an XDS Push Request and sends it to the XDS Server for issuing. At the same time, the Namespace Manager updates the RootCert root certificate of the corresponding namespace.
Further, in the embodiment of the present application, the resource processing device may acquire second attribute information; determine the namespace corresponding to the second attribute information; verify the namespace corresponding to the second attribute information through the first attribute information to obtain a verification result; and, if the verification result is that the verification is passed, determine the namespaces corresponding to the second attribute information as the target namespaces corresponding to the plurality of service meshes and configure the plurality of service meshes in the target namespaces.
It should be noted that, in the embodiment of the present application, the resource processing device may acquire the second attribute information and determine, according to the second attribute information corresponding to each service mesh, the namespaces in which the plurality of service meshes are installed; verify, through the first attribute information, the namespaces determined by the second attribute information for installing the plurality of service meshes; and, if the verification result is that the verification is passed, determine the namespaces corresponding to the second attribute information as the target namespaces corresponding to the plurality of service meshes and configure the plurality of service meshes in the target namespaces.
It should be noted that, in the embodiment of the present application, the specific manner in which the resource processing device installs the plurality of service meshes is as follows. First, the target namespaces corresponding to the installation of the plurality of service meshes may be determined according to the second attribute information corresponding to each service mesh, that is, the second attribute information specifies in which namespace the current service mesh is installed. For example, the second attribute information corresponding to a service mesh may be the values.global.namespace.istio attribute added in the Istio-Operator installation tool. Before this second attribute is added, the deployment command of the Istio-Operator, istioctl operator init, has an IstioNamespace parameter that specifies the namespace in which Istio is installed; in the present application this parameter of the istioctl operator init command is discarded, and the values.global.namespace.istio attribute added in the IstioOperator installation tool replaces the IstioNamespace parameter to specify in which namespace the current service mesh is installed, so that the target namespaces corresponding to the installation of the plurality of service meshes can be determined.
In the embodiment of the present application, the resource processing device may verify the namespace corresponding to the second attribute information through the first attribute information and install the plurality of service meshes in the target namespaces that pass the verification. Specifically, parsing logic for values.global.namespace.istio is added to the Istio-operator controller, the target namespace obtained by the parsing logic is verified through the first attribute information, and the plurality of service meshes are installed in the target namespaces that pass the verification.
In the embodiment of the present application, the manner in which the resource processing device installs the plurality of service meshes in the verified target namespaces may be that the lower layer of the Istio-operator controller invokes the Helm tool to install the plurality of service meshes.
It should be noted that the resource processing device verifies the namespaces corresponding to the second attribute information against the first attribute information, and the verification result is that the verification is passed if the namespaces corresponding to the second attribute information belong to the namespaces corresponding to the first attribute information.
For example, the namespaces corresponding to the second attribute information may be namespaces a, b, and c, and the namespaces corresponding to the first attribute information may be namespaces a and b; the namespaces a and b then belong to the namespaces corresponding to the first attribute information, and the resource processing device installs the corresponding service mesh under the namespaces that pass the verification.
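The verification amounts to checking which requested namespaces fall inside the label-filtered set. A small sketch follows, under the assumption that the check is performed per namespace; the names are illustrative.

```go
package meshfilter

// verifiedInstallNamespaces returns the namespaces requested by
// values.global.namespace.istio (the second attribute) that also fall inside
// the set selected by values.global.namespace.labelFilter (the first
// attribute); the mesh is installed only in the namespaces returned here.
func verifiedInstallNamespaces(requested []string, filtered map[string]struct{}) []string {
	var passed []string
	for _, ns := range requested {
		if _, ok := filtered[ns]; ok {
			passed = append(passed, ns)
		}
	}
	return passed
}
```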
It should be noted that, in the embodiment of the present application, the resource processing device deploys the service meshes in the Istio-operator manner and optimizes and improves the original Istio-operator. The advantages of deploying the service meshes in the Istio-operator manner include, first, that managing multiple Istio service mesh instances is convenient: all the service meshes can be enumerated through kubectl get IstioOperator, and their specific configurations can be inspected; and second, that the service meshes can be dynamically scaled: by changing the values.global.namespace.labelFilter attribute of the IstioOperator through kubectl apply, the namespace scope managed by each service mesh can be dynamically controlled.
It should be noted that, in the embodiment of the present application, the resource processing device may install the plurality of service meshes under their corresponding target namespaces, so as to establish a plurality of service meshes in one Kubernetes cluster and provide a service mesh isolation mechanism for a plurality of different applications in the Kubernetes cluster.
Further, in the embodiment of the present application, after the resource processing device installs the plurality of service meshes in the verified target namespaces, it determines the service mesh corresponding to the Sidecar injection, determines the corresponding namespace in which that service mesh is deployed, and executes the first command after determining the corresponding namespace in which the service mesh is deployed, completing the Sidecar injection.
It should be noted that, in the embodiment of the present application, in order for an application to be taken over by a service mesh, it is necessary to bind a Sidecar proxy, the only component of the service mesh data plane, together with the application. Under the Kubernetes cluster, one Sidecar proxy runs in the Pod of each application and is responsible for traffic-related matters, so Sidecar injection into the Pod is critical. In general, Sidecar injection uses the automatic injection mode; in the present application, however, the automatic injection mode is not suitable on the premise that multiple service meshes exist.
It should be noted that, in the embodiment of the present application, in the case of multiple service meshes, each service mesh has its own Webhook Server, while the globally unique MutatingWebhookConfiguration can only specify one Webhook Server. If the automatic Sidecar injection mode were used, the Sidecars of all the Istio service meshes would be pointed at the same Istio-pilot, so that the whole Kubernetes cluster would effectively have only one Istio service mesh. The automatic injection mode is the common mode of Sidecar injection and is implemented through Kubernetes Admission Controllers: during the Istio deployment process, a MutatingWebhookConfiguration resource object named "istio-sidecar-injector" is declared in Kubernetes, in which information such as the Webhook address is configured; the Admission Controller webhook is embedded in the Istio-pilot, and the Webhook service is provided externally in the Istiod Pod. Before a Pod is created, the Webhook embeds the relevant definition of the Sidecar container into the yaml configuration of the Deployment, thereby realizing automatic Sidecar injection.
It should be noted that, in the embodiment of the present application, one Kubernetes cluster may have multiple service meshes, and the resource processing device may determine the Istio-pilot corresponding to each Sidecar by selecting a ConfigMap, so that the Sidecar is matched to the correct Istio-pilot. In this way, the Sidecar is injected only into the necessary Pods, avoiding the unnecessary resource consumption caused by injecting the Sidecar into all the Pods.
In the embodiment of the present application, the resource processing device injects the Sidecar only into the necessary Pods, and a Sidecar only needs to obtain the routing policies related to itself when forwarding traffic, thereby avoiding the reduction of the routing and forwarding efficiency of the Sidecar and the waste of storage and network resources caused by finding the routing policy matching the current traffic among all the routing policies.
Further, in the embodiment of the present application, the resource processing device determines the service mesh corresponding to the Sidecar injection, determines the corresponding namespace in which that service mesh is deployed, and, after determining the corresponding namespace, executes the first command to complete the Sidecar injection. Specifically, the resource processing device may first determine to which service mesh the Pod into which the Sidecar is to be injected belongs, then find the namespace in which that service mesh is deployed, and, after determining the namespace in which the service mesh is deployed, execute the corresponding commands.
For example, the resource processing device may execute the following commands to complete the Sidecar injection. First, execute the command kubectl -n NamespaceX get cm istio-sidecar-injector -o jsonpath="{.data.config}" > inj-template.tmpl to generate the Sidecar injection template; second, execute the command kubectl -n NamespaceX get cm istio -o jsonpath="{.data.mesh}" > mesh.yaml to obtain the Mesh Config of the service mesh; then execute the command kubectl -n NamespaceX get cm istio-sidecar-injector -o jsonpath="{.data.values}" > values.json to obtain the fill values of the injection template; finally, execute the command istioctl kube-inject -f deployment.yaml --injectConfigFile inj-template.tmpl --meshConfigFile mesh.yaml --valuesFile values.json | kubectl apply -f - to complete the manual injection, where deployment.yaml is the yaml file of the Deployment to which the Pod belongs.
The embodiments of the present application provide a resource processing method and apparatus, and a storage medium. The method includes: configuring a plurality of service meshes according to a correspondence between preset labels and namespaces, wherein each of the plurality of service meshes is provided with a filter; and, after an operation instruction is received, filtering the resources to be operated on that correspond to the operation instruction based on the filter, and supervising the filtered resources by using the plurality of service meshes. In this way, the resource processing device can configure the plurality of service meshes according to the correspondence between the preset labels and the namespaces, so that cluster resources can be monitored through the plurality of service meshes; meanwhile, if the resource processing device receives an operation instruction, it can filter the resources to be operated on that correspond to the operation instruction based on the filter and supervise the filtered resources through the plurality of service meshes. Each service mesh therefore monitors only the cluster resources under its corresponding namespaces, so that security isolation between different applications can be achieved while the resource utilization of the service meshes is improved.
Example 2
Based on the above embodiment, a further embodiment of the present application provides a resource processing method, which includes: first, modifying the Istio-pilot module and adding the LabelFilter filter, so that Istio monitors only the Kubernetes cluster resources of the relevant namespaces; second, modifying the Istio-operator module, so that a Kubernetes cluster can dynamically and smoothly deploy a plurality of Istio instances; and finally, changing the Sidecar injection mode to eliminate the problem of interference between the injections of multiple Istiods. The proposed method includes the following steps:
The resource processing device may first modify the core module Istio-pilot based on the LabelFilter filter.
It should be noted that, in the embodiment of the present application, FIG. 4 is a schematic diagram of the service mesh according to the embodiment of the present application. As shown in FIG. 4, Istio is divided into a data plane and a control plane. The data plane consists of the Istio-agent and Envoy; it is injected into the Pod in the form of a Sidecar, takes over the routing table of the container, interacts with the control plane using the XDS protocol, receives routing and forwarding rules, and forwards traffic according to those rules, acting as the Proxy inside the Pod. The control plane is implemented by the Istio-pilot module, runs in the form of the Istiod Pod, and supports multiple platforms (the present application refers to a Kubernetes cluster); it is used to monitor the state of platform resources, generate routing rules, and issue them to the data plane in the form of the XDS protocol. The Platform Adapter serves as the platform adapter so that Istio can work smoothly on any platform, the Abstract Model stores the various kinds of service information, and the Envoy API provides the functions of dynamically updating information, service discovery, and configuring routing rules. Istio is independent of the underlying platform (Kubernetes, Mesos, Cloud Foundry, etc.); the adapter of a particular platform is responsible for retrieving the various fields of the metadata from the respective platform and then populating the Istio model.
It should be noted that, in the embodiment of the present application, the main function of the control plane is to generate XDS protocol content based on the relevant Kubernetes resource conditions and issue it to the data plane. The two types of resources of the Kubernetes cluster, service instances and configuration rules, are acquired through real-time monitoring via the Informer mechanism of Kubernetes; the service instances mainly include Service, Pod, Endpoint, and the like, and the configuration rules mainly include the Istio custom resources VirtualService, DestinationRule, Gateway, and the like. Whenever these two types of resources change, the Istio-pilot converts the change content into the corresponding CDS, LDS, EDS, and RDS protocols of the XDS protocol and then issues them to the Sidecars in sequence. Meanwhile, the CA certificate authority of Istio also establishes a ConfigMap named istio-ca-root-cert in each namespace and stores the root certificate RootCert of the Istio CA in this ConfigMap, in order to establish the gRPC connection between the Istio-pilot and the Istio-agent.
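For reference, the per-namespace root certificate can be read back from that ConfigMap with client-go, as in the sketch below; the data key root-cert.pem is the key commonly used in this ConfigMap but is treated here as an assumption rather than something stated in the description.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Read the root certificate that the mesh CA distributed into namespace "a".
	cm, err := cs.CoreV1().ConfigMaps("a").Get(context.TODO(), "istio-ca-root-cert", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println(cm.Data["root-cert.pem"]) // assumed data key
}
```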
It should be noted that, in the embodiment of the present application, since the service instances and configuration rules of Kubernetes are managed by namespace, the key to isolating the Istio service meshes by namespace is to add a filter, with which the resource processing device can filter the two types of resources watched by Kubernetes Informer according to the namespace resources.
It should be noted that, in the embodiment of the present application, as shown in FIG. 2, the LabelFilter filter is added between the service instance controller ServiceController, the configuration rule controller ConfigController, and Kubernetes Informer, so that Istio only monitors the resources of the corresponding namespaces and shields the resources of irrelevant namespaces, thereby isolating the Istio service meshes by namespace. Meanwhile, the Istio CA is also limited by the LabelFilter and achieves security isolation from the other Istio meshes of the same Kubernetes cluster, so that each relevant namespace can obtain the correct RootCert; based on this RootCert root certificate, the Istio-agent establishes a gRPC connection with the Istio-pilot.
Second, the implementation and principle of the LabelFilter filter are as follows. LabelFilter is, as the name implies, a filter based on the namespace Labels in Kubernetes. Each Istio service mesh needs to specify a particular Label combination, and the namespaces corresponding to that Label combination are incorporated into that Istio service mesh. Meanwhile, the Label combination of an Istio service mesh can change dynamically, and each namespace can dynamically join or leave a certain Istio service mesh. The principle of the LabelFilter implementation is shown in FIG. 3.
Further, in the embodiment of the present application, the values.global.namespace.labelFilter attribute is added in the IstioOperator of Istio; the value of the attribute is a Label combination of namespaces and is used to identify the namespace range of the Istio service mesh (IstioOperator is the custom resource definition of Istio, used to control the installation of Istio and the definition of Istio's global Mesh Config).
Further, in an embodiment of the present application, a MeshWatcher component is used to monitor the global Mesh Config of Istio. The MeshWatcher maintains two data structures in memory, namely Available Namespace and Different Namespace. Available Namespace is a hash table storing all the namespaces included in the current mesh; Different Namespace contains two arrays, storing the namespaces newly added and deleted when the attribute value changes.
Further, in the embodiment of the present application, under normal conditions the Service Informer and the Config Informer monitor the resource changes in the Kubernetes cluster in real time. When a cluster resource changes, the changed resource is acquired by one of the two Informers and screened according to Available Namespace, so that the cluster resources whose namespace does not belong to Available Namespace are filtered out. The conforming cluster resources are packaged, according to the kind of change (add, delete, update), into an Event object and pushed into the Service Event Queue or the Config Event Queue to await asynchronous processing by the Consumer. The Consumer packages the Event into an XDS Push Request and sends it to the XDS Server for issuing.
It should be noted that, in the embodiment of the present application, when the first attribute information (the values.global.namespace.labelFilter attribute) changes, namespaces are newly added and deleted, that is, the namespace scope of the corresponding service mesh changes. In other words, when the resource processing device receives at least one deleting or adding modification instruction, it adds or deletes namespaces in response to the modification instruction, and the newly added namespaces and the deleted namespaces are stored in the second data structure (Different Namespace).
It should be noted that, in the embodiment of the present application, the resource processing device may screen resources according to the first data structure (Available Namespace) based on the filter and filter out the resources under namespaces that do not belong to the first data structure (Available Namespace) to obtain the filtered resources. Meanwhile, if at least one of adding processing, deleting processing, and updating processing needs to be performed on the filtered resources, the corresponding case is packaged into an event and pushed into a queue, and the event is then packaged into an XDS Push Request and sent to the XDS Server to be issued to the data plane. In this way, the plurality of service meshes only monitor the cluster resources under their corresponding namespaces and do not need to monitor irrelevant cluster resources, thereby improving the resource utilization of the service meshes.
Further, in the embodiment of the present application, when the Mesh watch monitors that the labelfilter definition of the global Mesh Config changes, available Namespace (the first data structure) is updated, and Different Namespace (the second data structure) is calculated as above.
It should be noted that, in the embodiment of the present application, Labeled Namespace may denote the set of namespaces that currently satisfy the new Label combination. The newly added New Namespace may then be the namespaces in Labeled Namespace that are not in the first data structure Available Namespace; the Deleted Namespace may be the namespaces in the first data structure Available Namespace that are not in Labeled Namespace; the second data structure Different Namespace may be the collection of the newly added namespaces and the deleted namespaces; and the updated first data structure Available Namespace may be Labeled Namespace.
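Assuming the set-difference reading above, the recalculation may be sketched in Go as follows; the function and parameter names are illustrative only.

```go
// Package meshdiff: illustrative sketch of recomputing Different Namespace
// when the labelFilter definition changes. "available" is the current
// Available Namespace set; "labeled" is the set of Namespaces matching the
// new Label combination (Labeled Namespace).
package meshdiff

// Diff returns the newly added and deleted Namespaces; after applying the
// change, Available Namespace is simply replaced by Labeled Namespace.
func Diff(available, labeled map[string]struct{}) (added, deleted []string) {
	for ns := range labeled {
		if _, ok := available[ns]; !ok {
			added = append(added, ns) // newly qualified, not yet in the grid
		}
	}
	for ns := range available {
		if _, ok := labeled[ns]; !ok {
			deleted = append(deleted, ns) // no longer qualified, to be removed
		}
	}
	return added, deleted
}
```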
It should be noted that, in the embodiment of the present application, for a New Namespace newly added to the grid, the Namespace Manager obtains all resources of that Namespace, packages them into Add Events, and correspondingly inserts them into the two Queues; conversely, for a Deleted Namespace removed from the grid, all of its resources are packaged as Delete Events and inserted into the two Queues. At the same time, the Namespace Manager updates the RootCert root certificate corresponding to the namespace.
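A minimal Go sketch of this Namespace Manager behavior follows; the listResources and updateRootCert helpers, the queue channels, and all type names are hypothetical stand-ins for the corresponding components.

```go
// Package meshmanager: illustrative sketch of reacting to added and deleted
// Namespaces by emitting Add/Delete Events to both queues and refreshing the
// per-Namespace root certificate.
package meshmanager

// Event is a cluster-resource change to be pushed to a queue.
type Event struct {
	Kind      string // "add" or "delete"
	Namespace string
	Object    interface{}
}

// NamespaceManager reacts to Namespace membership changes of one grid.
type NamespaceManager struct {
	ServiceQueue chan Event
	ConfigQueue  chan Event
	// listResources returns all cluster resources under one Namespace (hypothetical helper).
	listResources func(ns string) []interface{}
	// updateRootCert refreshes the RootCert associated with one Namespace (hypothetical helper).
	updateRootCert func(ns string)
}

// Apply packages Add Events for added Namespaces and Delete Events for
// deleted ones, inserts them into both queues, and updates the root cert.
func (m *NamespaceManager) Apply(added, deleted []string) {
	for _, ns := range added {
		for _, obj := range m.listResources(ns) {
			ev := Event{Kind: "add", Namespace: ns, Object: obj}
			m.ServiceQueue <- ev
			m.ConfigQueue <- ev
		}
		m.updateRootCert(ns)
	}
	for _, ns := range deleted {
		for _, obj := range m.listResources(ns) {
			ev := Event{Kind: "delete", Namespace: ns, Object: obj}
			m.ServiceQueue <- ev
			m.ConfigQueue <- ev
		}
		m.updateRootCert(ns)
	}
}
```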
Then, the resource processing device can optimize the Istio-operator automatic installation technology. The existing Istio installation and deployment mode is designed on the assumption that there is only one Istio service grid in a Kubernetes cluster, so in order to meet the requirement of installing a plurality of grids in one Kubernetes cluster, the Istio installation mode needs to be optimized. There are two existing Istio installation and deployment modes, namely a static mode and a dynamic mode.
By way of example, the static mode is installed and deployed based on the install or manifest command of the istioctl tool; this mode has the advantage of being simple and easy to get started with, but it is not applicable to complex installation and deployment scenarios. In the dynamic mode, using the Istio-operator, Istio can achieve dynamic, smooth deployment, upgrade, and uninstallation. The Istio-operator is a custom Kubernetes Controller of Istio, and achieves dynamic deployment of Istio by creating the custom Kubernetes resource IstioOperator.
It should be noted that, in the embodiment of the present application, in order to implement deployment of multiple service grids in one Kubernetes cluster, the present application adopts the Istio-operator mode for deployment and optimizes and improves the original Istio-operator. Using the Istio-operator mode brings the following benefits: (1) it facilitates the management of multiple Istio service grid instances, since all grid instances can be listed through kubectl get IstioOperator and their specific configurations can be checked; (2) it implements dynamic scaling of an Istio service grid, since by changing the values.global.labelFilter attribute of the IstioOperator, the Namespace range managed by each service grid can be dynamically controlled.
Further, in the embodiment of the present application, to enable the Istio-operator to support the deployment of multiple Istio instances, the Istio-operator needs to be modified as follows. First, in the original installation and deployment command istioctl operator init of the Istio-operator, an istioNamespace parameter specifies the Namespace of the Istio installation; in the modified istioctl operator init command, this parameter is discarded. Then, the resource processing device adds a values.global.namespace.istio attribute to the definition of the IstioOperator, which replaces the istioNamespace parameter and specifies in which Namespace the Istio instance is installed. Next, the resource processing device adds logic that resolves the values.global.namespace.istio attribute and verifies whether the Labels of that Namespace satisfy the configured Label combination; if so, the original dynamic installation logic is executed and the installation of Istio in that Namespace is completed. At the bottom layer, the Istio-operator controller invokes the Helm tool to perform the Istio installation.
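For illustration, the verification step added to the operator may be sketched in Go as follows; the InstallSpec type, its field names, and the Validate function are assumptions made for this sketch and are not the actual controller code.

```go
// Package operator: illustrative sketch of checking, before installation,
// that the target Namespace named by values.global.namespace.istio carries
// Labels satisfying the grid's Label combination.
package operator

import "fmt"

// InstallSpec holds only the two attributes discussed above.
type InstallSpec struct {
	TargetNamespace string            // values.global.namespace.istio
	LabelFilter     map[string]string // values.global.labelFilter
}

// Validate returns an error when the target Namespace does not satisfy the
// Label combination, so that installation is refused; otherwise the original
// dynamic installation logic (Helm underneath) proceeds.
func Validate(spec InstallSpec, nsLabels map[string]string) error {
	for k, v := range spec.LabelFilter {
		if nsLabels[k] != v {
			return fmt.Errorf("namespace %s does not satisfy label %s=%s",
				spec.TargetNamespace, k, v)
		}
	}
	return nil
}
```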
It should be noted that, in the embodiment of the present application, the resource processing device may install each of a plurality of service grids under its corresponding target namespace, so as to establish a plurality of service grids in one Kubernetes cluster and provide a service grid isolation mechanism for a plurality of different applications in the Kubernetes cluster.
Finally, the resource processing device may dispense with automatic injection of the Sidecar. Automatic injection is the common way of Sidecar injection and is accomplished by Kubernetes Admission Controllers. During the Istio deployment process, a MutatingWebhookConfiguration resource object named "istio-sidecar-injector" is declared in Kubernetes, in which information such as the Webhook address is configured. The Admission Controller logic is embedded in the Istio-pilot, and the Webhook service is provided externally by the Istiod Pod; before a Pod is created, the Webhook embeds the relevant definition of the Sidecar container into the configuration yaml of the Deployment, thereby realizing automatic injection of the Sidecar.
It should be noted that, in the embodiment of the present application, such an automatic Sidecar injection mode of Istio is not feasible. With multiple Istio service grids in one Kubernetes cluster, each Istio service grid would have its own Webhook Server, while the globally unique MutatingWebhookConfiguration can specify only one of those Webhook Servers. If the automatic injection mode were used, the Sidecars of all Istio service grids would be assigned to the same Istio-pilot, so that the whole Kubernetes cluster would effectively have only one available Istio service grid.
In the embodiment of the present application, the resource processing device adopts the manual Sidecar injection mode. Manual injection has the following advantages: (1) it improves resource utilization and avoids Sidecar injection into unnecessary Pods; the manual injection mode injects the Sidecar only into the necessary Pods, whereas the automatic injection mode injects the Sidecar into all Pods, causing unnecessary resource consumption; (2) in the present application, a plurality of Istio service grids exist in one Kubernetes cluster, and with the manual Sidecar injection mode the Istio-pilot corresponding to each Sidecar can be determined by selecting the corresponding ConfigMap, so that the Sidecar is matched with the correct Istio-pilot.
Further, in the embodiment of the present application, first, the resource processing device may determine to which service grid the Pod to be injected with the Sidecar belongs, and then find the namespace in which that service grid is deployed; secondly, the resource processing device may execute the command kubectl -n NamespaceX get cm istio-sidecar-injector -o jsonpath="{.data.config}" > inj-template.tmpl to generate the Deployment injection template; then, the command kubectl -n NamespaceX get cm istio -o jsonpath="{.data.mesh}" > mesh.yaml may be executed to obtain the Mesh Config of Istio; then, the command kubectl -n NamespaceX get cm istio-sidecar-injector -o jsonpath="{.data.values}" > values.json may be executed to obtain the filling values of the Deployment injection template; finally, the command istioctl kube-inject -f deploy.yaml --injectConfigFile inj-template.tmpl --meshConfigFile mesh.yaml --valuesFile values.json | kubectl apply -f - may be executed to complete the manual injection, where deploy.yaml is the yaml file of the Deployment to which the Pod belongs.
The embodiment of the application provides a resource processing method and device and a storage medium, wherein the method comprises: configuring a plurality of service grids according to the corresponding relation between preset labels and namespaces, wherein the plurality of service grids are provided with filters; and after receiving an operation instruction, filtering the resources to be operated corresponding to the operation instruction based on the filters, and supervising the filtered resources by using the plurality of service grids. Therefore, the resource processing device can configure a plurality of service grids according to the corresponding relation between the preset labels and the namespaces, so that cluster resources can be monitored through the plurality of service grids; meanwhile, if the resource processing device receives an operation instruction, the resources to be operated corresponding to the operation instruction can be filtered based on the filters, and the filtered resources are monitored through the plurality of service grids. In this way, each of the plurality of service grids only monitors the cluster resources under its corresponding namespaces, security isolation among different applications can be realized, and the resource utilization rate of the service grids is improved.
Example III
Based on the above embodiments, the embodiment of the present application provides a resource processing device. FIG. 5 is a schematic diagram of the composition structure of the resource processing device; as shown in FIG. 5, the resource processing device 10 includes: a configuration unit 11, a filtering unit 12, and a supervision unit 13;
The configuration unit 11 is configured to configure a plurality of service grids according to a corresponding relationship between preset labels and namespaces; wherein the plurality of service grids are provided with filters;
the filtering unit 12 is configured to, after receiving an operation instruction, perform filtering processing on a resource to be operated corresponding to the operation instruction based on the filter;
the supervision unit 13 is configured to perform supervision processing on the filtered resources using the plurality of service grids.
In an embodiment of the present application, further, FIG. 6 is a second schematic diagram of the composition structure of the resource processing device. As shown in FIG. 6, the resource processing device 10 according to the embodiment of the present application may further include a processor 14 and a memory 15 storing instructions executable by the processor 14; further, the resource processing device 10 may further include a communication interface 16, and a bus 17 for connecting the processor 14, the memory 15, and the communication interface 16.
In an embodiment of the present application, the processor 14 may be at least one of an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), a digital signal processor (Digital Signal Processor, DSP), a digital signal processing device (Digital Signal Processing Device, DSPD), a programmable logic device (Programmable Logic Device, PLD), a field programmable gate array (Field Programmable Gate Array, FPGA), a central processing unit (Central Processing Unit, CPU), a controller, a microcontroller, and a microprocessor. It will be appreciated that, for different devices, the electronic components used to implement the above-described processor functions may be other components, and the embodiments of the present application are not particularly limited. The resource processing device 10 may further comprise a memory 15, which may be connected to the processor 14, wherein the memory 15 is adapted to store executable program code comprising computer operation instructions; the memory 15 may comprise a high-speed RAM memory, and may further comprise a non-volatile memory, for example, at least two disk memories.
In an embodiment of the application, the bus 17 is used to connect the communication interface 16, the processor 14, and the memory 15, and to enable communication among these components.
In an embodiment of the application, the memory 15 is used for storing instructions and data.
Further, in the embodiment of the present application, the processor 14 is configured to: configure a plurality of service grids according to the correspondence between the preset labels and the namespaces, wherein the plurality of service grids are provided with filters; and after receiving an operation instruction, perform filtering processing on the resources to be operated corresponding to the operation instruction based on the filters, and perform supervision processing on the filtered resources by using the plurality of service grids.
In practical applications, the memory 15 may be a volatile memory, such as a random-access memory (RAM); or a non-volatile memory, such as a read-only memory (ROM), a flash memory, a hard disk drive (HDD), or a solid-state drive (SSD); or a combination of the above types of memories, and provides instructions and data to the processor 14.
The embodiment of the application provides a resource processing device, which can configure a plurality of service grids according to the corresponding relation between preset labels and namespaces, wherein the plurality of service grids are provided with filters; and after receiving an operation instruction, filter the resources to be operated corresponding to the operation instruction based on the filters, and perform supervision processing on the filtered resources by using the plurality of service grids. Therefore, the resource processing device can configure a plurality of service grids according to the corresponding relation between the preset labels and the namespaces, so that cluster resources can be monitored through the plurality of service grids; meanwhile, if the resource processing device receives an operation instruction, the resources to be operated corresponding to the operation instruction can be filtered based on the filters, and the filtered resources are monitored through the plurality of service grids. In this way, each of the plurality of service grids only monitors the cluster resources under its corresponding namespaces, security isolation among different applications can be realized, and the resource utilization rate of the service grids is improved.
An embodiment of the present application provides a computer-readable storage medium having stored thereon a program which, when executed by a processor, implements the resource processing method as described above.
Specifically, the program instructions corresponding to a resource processing method in the present embodiment may be stored on a storage medium such as an optical disc, a hard disk, or a USB flash disk; when the program instructions corresponding to a resource processing method in the storage medium are read or executed by an electronic device, the method includes the following steps:
configuring a plurality of service grids according to the corresponding relation between the preset labels and the namespaces; wherein the plurality of service grids are provided with filters;
and after receiving the operation instruction, filtering the resources to be operated corresponding to the operation instruction based on the filter, and performing supervision processing on the filtered resources by using the plurality of service grids.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, magnetic disk storage, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of implementations of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each block and/or flow of the flowchart illustrations and/or block diagrams, and combinations of blocks and/or flow diagrams in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart block or blocks and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart block or blocks and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart block or blocks and/or block diagram block or blocks.
The foregoing description is only of the preferred embodiments of the present application, and is not intended to limit the scope of the present application.

Claims (14)

1. A method of resource processing, the method comprising:
configuring a plurality of service grids according to the corresponding relation between the preset labels and the namespaces; wherein the plurality of service grids are provided with filters;
and after receiving the operation instruction, filtering the resources to be operated corresponding to the operation instruction based on the filter, and performing supervision processing on the filtered resources by using the plurality of service grids.
2. The method of claim 1, wherein the plurality of service grids operate in a Kubernetes cluster environment.
3. The method according to claim 2, wherein after the configuring the plurality of service grids according to the correspondence between the preset labels and the namespaces, the method further comprises:
For each service grid in the plurality of service grids, determining a target naming space according to a target label corresponding to the service grid based on the corresponding relation between the preset label and the naming space;
and performing supervision processing on the target cluster resources belonging to the target namespace.
4. A method according to claim 3, wherein the configuring a plurality of service grids according to the correspondence between preset labels and namespaces includes:
for each service grid of the plurality of service grids, determining the target label based on the corresponding relation between the preset label and the naming space;
configuring first attribute information corresponding to the service grid according to the target label;
the filter is configured based on the corresponding first attribute information.
5. The method of claim 4, wherein the filter is added between a service instance controller, a configuration rule controller, and a Kubernetes Informer.
6. The method of claim 3, wherein the filtering, based on the filter, the resource to be operated corresponding to the operation instruction includes:
and for each service grid in the plurality of service grids, filtering the resources which do not belong to the namespaces corresponding to the target labels in the resources to be operated based on the filter, and obtaining the filtered resources.
7. The method of claim 3, wherein the performing supervision processing on the filtered resources by using the plurality of service grids comprises:
and executing at least one of the following processes on the target cluster resource according to the filtered resource: adding processing, deleting processing and updating processing.
8. The method according to claim 4, wherein,
maintaining a first data structure for each of the plurality of service grids;
the target namespace is stored through the first data structure.
9. The method according to claim 8, wherein,
maintaining a second data structure for each of the plurality of service grids;
after the first attribute information is modified in response to a modification instruction, updating the target namespace according to the namespace corresponding to the modification instruction, and storing the namespace corresponding to the modification instruction through the second data structure.
10. The method of claim 4, wherein configuring a plurality of service grids according to the correspondence of the labels and namespaces comprises:
acquiring second attribute information;
determining a namespace corresponding to the second attribute information;
verifying the namespace corresponding to the second attribute information through the first attribute information to obtain a verification result;
and if the verification result is that verification is passed, determining the namespace corresponding to the second attribute information as the target namespace corresponding to the plurality of service grids, and configuring the plurality of service grids in the target namespace.
11. The method according to claim 10, wherein verifying the namespace corresponding to the second attribute information by the first attribute information, to obtain a verification result, includes:
and if the namespace corresponding to the second attribute information belongs to the namespace corresponding to the first attribute information, the verification result is that verification is passed.
12. A resource processing device, characterized in that the resource processing device comprises: a configuration unit, a filtering unit, and a supervision unit;
the configuration unit is used for configuring a plurality of service grids according to the corresponding relation between the preset labels and the namespaces; wherein the plurality of service grids are provided with filters;
the filtering unit is used for filtering the resources to be operated corresponding to the operation instruction based on the filter after the operation instruction is received;
And the supervision unit is used for performing supervision processing on the filtered resources by using the plurality of service grids.
13. A resource processing device, characterized in that the resource processing device comprises: a processor and a memory; wherein,
the memory is used for storing a computer program capable of running on the processor;
the processor being adapted to perform the method of any of claims 1-11 when the computer program is run.
14. A computer readable storage medium, characterized in that the storage medium has stored thereon a computer program code which, when executed by a computer, performs the method of any of claims 1-11.