CN117319204A - Log management method and system

Log management method and system

Info

Publication number
CN117319204A
Authority
CN
China
Prior art keywords
log
service
node
management
log management
Legal status
Pending
Application number
CN202210709250.7A
Other languages
Chinese (zh)
Inventor
Pan Chang (潘畅)
Wei Shijiang (魏世江)
Wang Yongqiao (王勇桥)
Chen Weiqing (陈炜青)
Current Assignee
Huawei Cloud Computing Technologies Co Ltd
Original Assignee
Huawei Cloud Computing Technologies Co Ltd
Application filed by Huawei Cloud Computing Technologies Co Ltd filed Critical Huawei Cloud Computing Technologies Co Ltd
Priority to CN202210709250.7A
Publication of CN117319204A

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00: Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/08: Configuration management of networks or network elements
    • H04L 41/0803: Configuration setting
    • H04L 41/0813: Configuration setting characterised by the conditions triggering a change of settings
    • H04L 41/082: Configuration setting characterised by the condition being updates or upgrades of network functionality
    • H04L 41/06: Management of faults, events, alarms or notifications
    • H04L 41/069: Management of faults, events, alarms or notifications using logs of notifications; post-processing of notifications

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Debugging And Monitoring (AREA)

Abstract

The present application provides a log management method and system for improving the flexibility of log management. The method includes the following steps: a management node sends first log management information to a log management client, so that the log management client updates a log management policy of a first service container, where the log management client and the first service container are deployed in a first service node; and the log management client manages log files of the first service container according to the updated log management policy.

Description

Log management method and system
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a log management method and system.
Background
With the development of cloud-native technology, deploying and managing applications (such as microservice applications) through containerization has been widely adopted. Currently, containerized deployment is mainly implemented by application management clusters, such as Kubernetes clusters. A Kubernetes cluster may include a management node and a plurality of service nodes. When a developer needs to deploy a service container set (pod) corresponding to an application in a Kubernetes cluster, the management node deploys the service pod on an appropriate service node.
Logs are important information recording the running state of containers, and the volume of logs generated by various applications is huge. How to manage the logs of service pods in a Kubernetes cluster has therefore become an important research topic in containerized deployment.
Disclosure of Invention
The embodiment of the application provides a log management method and a log management system, so as to improve the flexibility of log management of a service container.
In a first aspect, an embodiment of the present application provides a log management method. The log management method may be applied to an application management cluster, where the application management cluster includes at least one management node and at least one service node, and a log management client for managing logs is deployed on each of the at least one service node.
Illustratively, the log management method provided in the embodiment of the present application is described below in terms of interactions between one management node and a log management client in one service node.
The method includes the following steps: a management node sends first log management information to a log management client, so that the log management client updates a log management policy of a first service container, where the log management client and the first service container are deployed in a first service node; and the log management client manages log files of the first service container according to the updated log management policy. It can be understood that the first service container includes one or more containers in the same service pod in the first service node.
In this design, the log management client dynamically updates the log management policy of a deployed service container based on the log management information indicated by the management node, which improves the flexibility of log management of the service container.
In one possible design, the log management information corresponding to different service containers is different. With this design, the log management policies of different service containers are mutually independent, realizing a personalized log management scheme.
In one possible design, the first log management information includes one or more of the following: a log directory, a log aging period, and a log reporting period. When the first log management information includes the log directory, the updated log management policy includes managing log files of the first service container stored under the storage path corresponding to the log directory; when the first log management information includes the log aging period, the updated log management policy includes performing compressed dump processing on specified log files of the first service container at intervals of the log aging period; and when the first log management information includes the log reporting period, the updated log management policy includes sending log files of the first service container to a third-party log system at intervals of the log reporting period.
In one possible design, before the management node sends the first log management information to the log management client, the method further includes: when the management node deploys the first service container in the first service node, the log management client acquires second log management information from the management node, wherein the second log management information is used for creating a log management policy of the first service container. In this case, the log management client may replace (or update) the second log management information with the first log management information after acquiring the first log management information.
The deployment scheme for service containers is further described below, where a second service container is used to indicate one service pod to be deployed, and the second service container includes one or more containers in that service pod.
In one possible design, the method further comprises: the management node obtains an instruction for deploying a second service container, wherein the instruction comprises resource demand information of the second service container, and the resource demand information of the second service container is used for indicating resources occupied by log files of the second service container; and the management node determines a second service node for deploying the second service container according to the resource demand information of the second service container.
In one possible design, the management node determines a second service node for deploying the second service container according to the resource requirement information of the second service container, including: the management node obtains log capacity information of at least one service node associated with the management node, wherein the log capacity information of the at least one service node is used for indicating the current capacity and/or capacity quota of the log storage space of the at least one service node; the management node determines the second service node in the at least one service node according to the resource demand information of the second service container and the log capacity information of the at least one service node.
In the design, the consideration of the log capacity required by the service container to be deployed is introduced, so that the overload of the log storage space on the service node is avoided, the normal deployment and operation of the service container can be ensured, and the stability of the system is enhanced.
In one possible design, the method further comprises: and the management node receives a configuration instruction sent by a user, wherein the configuration instruction indicates that the log management strategy of the second service container is configured.
In one possible design, the method further comprises: the log management client saves a mapping relationship between the first service container and the first log management information. This design helps the log management client query the log management information corresponding to a service container and monitor the running state of the service container, and allows the related mapping relationship to be deleted when the service container is migrated, avoiding interference with the normal running of other service containers.
In one possible design, one of the log management clients is deployed in the first service node, and one or more service containers are deployed on the first service node. Such a design enables non-invasive, lightweight log management.
In a second aspect, an embodiment of the present application provides a log management system, including a management node and at least one service node; each service node in the at least one service node is provided with a log management client;
the management node is configured to send first log management information to a log management client in a designated service node, so that the log management client updates a log management policy of a first service container, where the designated service node is a service node deployed with the first service container;
And the log management client in the designated service node is used for managing the log files of the first service container according to the updated log management policy.
Some possible designs may be understood with reference to the description in the first aspect, and this will not be repeated in the embodiments of the present application.
In a third aspect, embodiments of the present application provide a cluster of computing devices, including at least one computing device, the computing device including a processor and a memory; wherein the memory of the at least one computing device is for storing computer-executable instructions; the processor of the at least one computing device is configured to execute the computer-executable instructions to cause the cluster of computing devices to perform the method as described in the first aspect and any one of the possible designs of the first aspect.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium comprising computer program instructions which, when executed by a cluster of computing devices, cause the cluster of computing devices to perform a method as described in the first aspect and any one of the possible designs of the first aspect.
In a fifth aspect, embodiments of the present application provide a computer program product comprising instructions which, when executed by a cluster of computing devices, cause the cluster of computing devices to perform a method as described in the first aspect and any one of the possible designs of the first aspect.
Drawings
FIG. 1 is a schematic diagram of a Kubernetes cluster;
FIG. 2A is a schematic diagram of a first prior art log management scheme;
FIG. 2B is a schematic diagram of a second prior art log management scheme;
FIG. 2C is a schematic diagram of a third prior art log management scheme;
FIG. 2D is a schematic diagram of a fourth prior art log management scheme;
FIG. 3 is a schematic diagram of a framework of a log management scheme according to an embodiment of the present application;
FIG. 4 is a flow chart of a log management method according to an embodiment of the present application;
FIG. 5 is a flowchart of another log management method according to an embodiment of the present application;
FIG. 6 is a schematic diagram of a framework of another log management scheme according to an embodiment of the present application;
FIG. 7 is a schematic structural diagram of a computing device cluster according to an embodiment of the present application;
FIG. 8 is a schematic structural diagram of another computing device cluster according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application more apparent, the embodiments of the present application will be described in further detail with reference to the accompanying drawings.
In the embodiments of this application, "at least one (item)" means one (item) or more (items), and "a plurality of (items)" means two (items) or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may indicate: only A exists, both A and B exist, or only B exists. The character "/" generally indicates that the associated objects are in an "or" relationship.
The terms "comprising" and "having" and any variations thereof, as used in the following description of embodiments of the present application, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those listed but may optionally include other steps or elements not listed or inherent to such process, method, article, or apparatus. It should be noted that, in the embodiments of the present application, words such as "exemplary" or "such as" are used to mean serving as an example, instance, or illustration. Any method or design described herein as "exemplary" or "such as" in the examples should not be construed as preferred or advantageous over other methods or designs. Rather, the use of words such as "exemplary" or "such as" is intended to present related concepts in a concrete fashion.
The techniques provided by embodiments of the present application may be used for application management clusters, such as Kubernetes clusters. Kubernetes clusters can also be understood as container deployment platforms, called K8S platforms; the Kubernetes cluster can manage applications on multiple hosts in a cloud platform, and provide functions of containerized deployment, planning, updating, maintenance and the like of the applications (apps). The Kubernetes cluster comprises at least one management node and at least one service node. As an example, one management node and two service nodes, namely service node 1 and service node 2, are illustrated in fig. 1.
The management node is responsible for providing an interface for resource operations and for scheduling resources. One management node can schedule (or control, manage, etc.) one or more service nodes; for example, the management node can receive an instruction for deploying a pod and schedule the resources of service nodes to deploy the pod on a service node. Specifically, the management node may include a kube-apiserver component and a kube-scheduler component. kube-apiserver provides the entry for resource operations, such as application programming interfaces (APIs), and provides authentication, authorization, access control, API registration, and discovery for users. kube-scheduler is responsible for the scheduling of resources, such as scheduling (deploying) pods to corresponding service nodes according to a specific scheduling policy. It is further understood that in Kubernetes clusters the service nodes are also referred to as K8S nodes. A pod is the smallest unit of planning, creation, and management, and pods are deployed and run on service nodes. One or more pods may be included in a service node, one or more containers may be included in a pod, and a pod may also be referred to as a container set or a group of containers.
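For intuition, a pod declaration may take the following minimal form (an illustrative sketch only; the pod name and image are hypothetical and not part of the embodiments):

apiVersion: v1
kind: Pod
metadata:
  name: app1-pod            # hypothetical pod name
spec:
  containers:
  - name: app1-container    # a container in the container set
    image: app1:1.0         # hypothetical image of the application service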
Specifically, the management node and the service node may be implemented by software or by hardware. The implementation of the management node is described next as an example; similarly, the implementation of a service node may refer to the implementation of the management node.
When implemented in software, the management node may be an application or a block of code running on a computing device. The computing device may be at least one of a physical host (or physical machine), a virtual machine, and the like. Further, there may be one or more computing devices. For example, the management node may be an application running on multiple hosts/virtual machines/containers. It should be noted that the multiple hosts/virtual machines running the application may be distributed in the same availability zone (AZ) or in different AZs. The multiple hosts/virtual machines/containers running the application may be distributed in the same region or in different regions, where a region typically comprises a plurality of AZs. Likewise, the multiple hosts/virtual machines running the application may be distributed in the same virtual private cloud (VPC) or across multiple VPCs, where a region typically comprises multiple VPCs and a VPC may comprise multiple AZs.
When implemented in hardware, the management node may include at least one computing device, such as a server. Alternatively, the management node may be a device implemented using an application-specific integrated circuit (ASIC) or a programmable logic device (PLD). The PLD may be a complex programmable logic device (CPLD), a field-programmable gate array (FPGA), a generic array logic (GAL), or any combination thereof. The plurality of computing devices included in the management node may be distributed in the same AZ or in different AZs; in the same region or in different regions; and likewise in the same VPC or across multiple VPCs. The plurality of computing devices may be any combination of computing devices such as servers, ASICs, PLDs, CPLDs, FPGAs, and GALs.
In one possible design, the management node and the service node scheduled by the management node may operate in different virtual machines in the same physical host, so as to implement isolation of dimensions of the virtual machines. In another possible design, the management node and the service node scheduled by the management node may respectively run on different physical hosts, so as to realize isolation of physical host dimensions.
The embodiments of the present application mainly relate to log management of service pods, where a service pod refers to a pod whose type is a container set corresponding to an application service. The log management schemes for service pods implemented in the related art are first described below.
Fig. 2A is a schematic diagram of a framework of a first existing log management scheme, illustrating two service pods deployed on a service node: a container corresponding to a service of application 1 is deployed in pod1 and is denoted app1-container; a container corresponding to a service of application 2 is deployed in pod2 and is denoted app2-container. An administrator configures (or mounts) a log volume for the service node in advance, where the log volume is used to store the log files of each service pod on the service node, such as app1.log generated by app1-container in pod1 and app2.log generated by app2-container in pod2. The administrator uniformly plans and allocates the log storage space of each service pod, and mounts a log path for each service pod in hostPath mode on the management node side according to a uniform format, so that the service pods share the log disk of the service node. For example, the format of the log path corresponding to a service pod may be /opt/cloud/logs/$cloud_service/$micro_service/, where $ indicates a variable identifier; that is, $cloud_service and $micro_service are variables whose values are related to the service node where the service pod is deployed. Generally, the administrator configures the service type corresponding to a service node in advance, and the service pods deployed on the service node conform to that service type.
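As an illustrative sketch of the hostPath mounting described above (the pod name and image are hypothetical; the path follows the unified format with the variables already substituted for a concrete service):

apiVersion: v1
kind: Pod
metadata:
  name: app1-pod
spec:
  containers:
  - name: app1-container
    image: app1:1.0                            # hypothetical image
    volumeMounts:
    - name: log-volume
      mountPath: /opt/cloud/logs/cloud1/app1   # unified log path format
  volumes:
  - name: log-volume
    hostPath:
      path: /opt/cloud/logs/cloud1/app1        # shares the log disk of the service node
      type: DirectoryOrCreate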
A log agent (log-agent) pod is deployed on the service node, and a log management client container is deployed in the log agent pod. The format of the log path corresponding to service pods is recorded in the configuration file of the log management client. The log management client runs in daemon mode and automatically scans all log files in the directories that conform to this log path format. In addition, the administrator configures a periodic task tool (crond) and a log rotation tool (logrotate) in advance in the configuration file of the log management client, so as to plan the log management policies of the service node in advance, such as policies for managing the logs of services conforming to the service type of the service node. When a service pod is deployed on the service node, the log management client can manage the log files of the service pod according to crond and logrotate, for example, periodically performing operations such as compressing and dumping specified log files according to a set aging policy.
This scheme plans the log storage space on the service node in advance and does not consider the different requirements of different types of services for log storage space. Moreover, due to the complexity of service combinations, containers in a Kubernetes cluster may dynamically migrate at any time. Therefore, planning log storage space in advance is difficult to adapt to the log storage requirements of container migration scenarios, is not flexible enough, and carries a risk of log disk overload. Furthermore, the aging policy and the scanning paths that the log management client applies to logs are planned in advance in the configuration file of the log management client. If service pods are dynamically added or deleted, the corresponding log path list changes, and the configuration file must be manually adjusted to ensure normal log collection for the containers after the change, which increases the cost of manual maintenance and reduces the efficiency of log collection.
As in fig. 2B, which illustrates a second existing log management scheme, based on fig. 2A an initialization container (init-container) is deployed in each service pod. For one service pod, the periodic task tool on the service node can be configured through its init-container, so as to customize the aging policy for the log files of that individual service pod. However, in this scheme, a mapping directory needs to be configured for each service pod in the periodic task tool of the service node, root user (root) rights are opened for each service pod, and an init-container is deployed in each service pod; unified management and control is therefore impossible, and there is a possibility of intrusive modification of the service itself, so there are serious potential security risks.
As shown in fig. 2C, which illustrates a third existing log management scheme, two service pods are deployed on a service node: a container corresponding to a service of application 1 is deployed in pod1 and is denoted app1-container; a container corresponding to a service of application 2 is deployed in pod2 and is denoted app2-container. Through the management node, an administrator can configure network storage, such as a persistent volume (PV), for each service pod of the service node, so that each service pod has independent log storage resources; log management clients are deployed in the respective service pods and run in sidecar mode. The log management client in one service pod is responsible for managing the log files of that service pod. In addition, the log rotation tool runs in each service pod to configure the aging policy of its logs.
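A minimal sketch of this sidecar arrangement is as follows (names, images, and the claim are hypothetical; the log management client shares the log volume of the service pod it manages):

apiVersion: v1
kind: Pod
metadata:
  name: app1-pod
spec:
  containers:
  - name: app1-container
    image: app1:1.0                    # hypothetical image of the service
    volumeMounts:
    - name: app1-logs
      mountPath: /opt/cloud/logs/app1  # where the service writes its log files
  - name: log-agent                    # sidecar log management client
    image: log-agent:1.0               # hypothetical image
    volumeMounts:
    - name: app1-logs
      mountPath: /opt/cloud/logs/app1  # same volume, so it can manage the logs
  volumes:
  - name: app1-logs
    persistentVolumeClaim:
      claimName: app1-log-pvc          # hypothetical claim bound to a PV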
In this scheme, when a plurality of service pods are running on the service node, a plurality of log management clients need to run simultaneously, which wastes significant resources such as central processing unit (CPU), memory, and ports. Because the log management client is highly coupled with the service pod, when at least one log management client fails, the related health check probe fails to execute and affects the health state of the service pod, which may cause the service pod to be rebuilt; or, when the log management client is upgraded, it may be necessary to update the configuration of all service pods on the service node, or even rebuild service pods, resulting in inefficient service operation.
As shown in fig. 2D, which illustrates a fourth existing log management scheme, two service pods are deployed on a service node: a container corresponding to a service of application 1 is deployed in pod1 and is denoted app1-container; a container corresponding to a service of application 2 is deployed in pod2 and is denoted app2-container. The log files generated by pod1 and pod2 are written directly to a configured log storage backend (log backend) by calling an API interface.
The log storage backend is typically remote. Such a scheme involves remote transmission and is susceptible to network anomalies such as network jitter, internal errors of the log storage backend, and the like. Data retransmission may be required, reducing the efficiency of log management; and because data may be lost, reliability is also difficult to guarantee.
Based on the above description of the related art, an embodiment of the present application provides a log management scheme. A log management client is deployed on each service node, the log management information of a deployed service pod is declared, and the log management client is dynamically instructed to execute the log management policy corresponding to that service pod. This meets the personalized log management requirements of different service pods and improves the flexibility of log management while, compared with the related art, also improving its efficiency, security, and reliability.
The log management scheme provided in the embodiment of the present application is described in detail below.
Fig. 3 is a schematic diagram of a framework of a log management scheme according to an embodiment of the present application. Fig. 3 schematically shows a management node, a service node 1, and a service node 2, with two service pods deployed in each service node: pod1 and pod2 are deployed in service node 1, and pod3 and pod4 are deployed in service node 2. Each service pod has a container deployed for the corresponding service. As an example, app1-container is deployed in pod1, app2-container in pod2, app3-container in pod3, and app4-container in pod4.
Specifically, a large-capacity log disk, such as a log disk with a total capacity of 100G, may be mounted in each service node. Taking a service node that is a virtual machine, such as an OpenStack virtual machine, as an example, an OpenStack API may be called to create and mount the log disk.
When a service node joins the Kubernetes cluster, the service node may register the capacity quota of its log storage space with the management node, where the capacity quota of the log storage space may be referred to simply as the log capacity quota. Specifically, as illustrated in fig. 3, the management node includes a kube-apiserver component and a kube-scheduler component. The service node may register its own log capacity quota with the kube-apiserver component in the management node.
Optionally, the log capacity quota of a service node may be less than the total capacity of the log disk it mounts. For example, when the total capacity of the log disk mounted on the service node is 100G, the log capacity quota of the service node may be 90G. With this design, the log storage space in the service node can be flexibly adjusted, and in scenarios where the log management client manages log files intermittently, the following situation can be prevented: during a window in which the log management client is not performing log management, an overflowing log file overloads the whole log disk.
Illustratively, the Kubernetes cluster may support custom resource scheduling extensions. The service node may call the northbound API exposed by the kube-apiserver component to register its log capacity quota. The call can be made through a PATCH request of curl, where the request includes the address (URL) of the northbound API and the log capacity quota registered for the service node.

The format of the log capacity quota may be expressed as:

'[{"op":"add","path":"/status/capacity/cloud1.com~1log-size","value":"90"}]'

The format of the address of the northbound API may be expressed as:

http://<k8s-master-ip>:8080/api/v1/nodes/<your-node-name>/status

Assembled into a complete command, the request may take the following form (a sketch; the JSON-Patch content type is standard for such requests):

curl -X PATCH -H "Content-Type: application/json-patch+json" \
  -d '[{"op":"add","path":"/status/capacity/cloud1.com~1log-size","value":"90"}]' http://<k8s-master-ip>:8080/api/v1/nodes/<your-node-name>/status

In this example, the PATCH request initiated by the service node informs the management node that the service node has 90G of total log capacity available for scheduling. <k8s-master-ip> refers to the floating IP of kube-apiserver, 8080 refers to the service port of kube-apiserver, and <your-node-name> refers to the name of the service node. The cloud1.com~1log-size field in the PATCH request corresponds to the log capacity quota dimension newly added in the embodiments of the present application, where "~1" is the escape code for "/"; that is, "cloud1.com~1log-size" can also be written as "cloud1.com/log-size". It should be understood that the field names in the foregoing format are given as illustrations; field names may be customized in specific implementations, which is not limited by the embodiments of the present application.
One log management client is deployed in each service node. Specifically, the log management client can be deployed and started in the log agent pod of the service node in DaemonSet mode, and the reliability of the log management client is ensured through a health check mechanism. The management node (such as the kube-apiserver component) can indicate to the log management client the log management information corresponding to a single service pod and can dynamically update that log management information; the log management information is used to indicate the log management policy of the corresponding service pod. In one possible design, the management node may indicate the log management information corresponding to a single service pod to the log management client in response to an instruction from an administrator or an application client. For example, an administrator or an application client may customize the log management information corresponding to a deployed service pod; the management node (kube-apiserver component) obtains that log management information and indicates it to the log management client in the service node where the service pod is deployed. Accordingly, the log management client manages the log files of the service pod according to the most recently acquired log management information corresponding to that service pod.
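A minimal DaemonSet sketch is as follows (an assumed illustration; the names, namespace, image, and probe endpoint are hypothetical). DaemonSet mode ensures exactly one log management client runs on every service node, and the liveness probe realizes the health check mechanism mentioned above:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-agent
  namespace: kube-system             # hypothetical namespace
spec:
  selector:
    matchLabels:
      app: log-agent
  template:
    metadata:
      labels:
        app: log-agent
    spec:
      containers:
      - name: log-agent              # the log management client container
        image: log-agent:1.0         # hypothetical image
        volumeMounts:
        - name: node-logs
          mountPath: /opt/cloud/logs # log disk of the service node
        livenessProbe:               # health check mechanism
          httpGet:
            path: /healthz           # hypothetical health endpoint
            port: 8081
      volumes:
      - name: node-logs
        hostPath:
          path: /opt/cloud/logs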
Specifically, when a service pod needs to be deployed, an administrator or an application client can send the resources required by the service pod to the management node, and the management node can determine a target service node according to those required resources, where the target service node is the service node on which the service pod is to be deployed; the management node then deploys the service pod on the target service node. For example, an administrator or an application client may send the resources required by the service pod to the kube-apiserver component in the management node, and an extended kube-scheduler component may obtain those required resources through the kube-apiserver component and select a service node on which to deploy the service pod. By way of example, fig. 3 illustrates a scenario in which the kube-scheduler component deploys pod4 on service node 2 based on the resources occupied by pod4. The kube-scheduler component determines an appropriate target service node based on the resources occupied by the service pod, and after the service pod is successfully deployed on the target service node, the administrator or application client can be notified through the kube-apiserver component.
Optionally, as shown in fig. 3, a third-party log system may be deployed externally. The third-party log system may also be understood as a log server or log receiving backend associated with the Kubernetes cluster, and is used for persistent storage and management of the log files of service pods in the whole Kubernetes cluster. Correspondingly, the log management client can also report the log files of service pods to the third-party log system. The third-party log system may be, for example, an Elasticsearch cluster.
Further, a log management policy of a service pod deployed in a service node will be described in detail with reference to a log management method illustrated in fig. 4.
Specifically, the log management method mainly comprises the following procedures.
S401, the management node sends first log management information to a log management client in the first service node.
The first log management information is used for updating the log management policy of the first service container in the first service node. It will be appreciated that the first service container indicates one service pod deployed on the first service node; optionally, the first service container comprises one or more containers in the same service pod deployed on the first service node. One or more service pods can be deployed on the first service node; the first service node is one of a plurality of service nodes that the management node can schedule, and a log management client can be deployed on each of the plurality of service nodes.
Specifically, the first log management information may include one or more of the following: a log directory, a log compression condition, a log aging period, and a log reporting period.
The log directory is used for determining the actual storage path of the log files of the first service container; for example, the log directory of a first service container indicates the format of the actual storage path of its log files, and there is a mapping relationship between the log directory and that actual storage path.
The log aging period is used for indicating the aging policy of the log files of the first service container; for example, the specified log files of the first service container are subjected to compressed dump processing at intervals of the log aging period. A specified log file is one whose size reaches a set upper size threshold (max_size), for example 100 megabytes (MB). The compressed dump processing of a specified log file may include generating a compressed file, or generating a compressed file and dumping it.
Optionally, an upper threshold (max_files) may be set for the number of compressed files, indicating that at most max_files compressed files are retained and the remaining compressed files need to be dumped. For example, suppose the upper threshold for the number of compressed files is set to 10 and the aging policy needs to be executed at the current time: if the sizes of N log files in the first service container are greater than or equal to the set upper size threshold, the N log files need to be compressed, where N is a positive integer. When N is greater than 10, the log management client may dump, from the N compressed files of the first service container, the (N-10) compressed files whose generation time is earlier than the current time and whose time difference from the current time exceeds a set threshold. Alternatively, among the N compressed files of the first service container whose generation times are earlier than the current time, the log management client may dump the (N-10) compressed files with the largest time difference between generation time and the current time. It is understood that dump processing refers to changing the storage path of the compressed files of the log files of the first service container.
Alternatively, the log aging period may be set as the time period of the compressed dump processing, for example once a week or once a month; at the current time, compressed dump processing is performed on the log files from the week or month before the current time. Alternatively, for the (N-10) compressed files, the log management client may not perform dump processing but directly clean them up, which may also be described as deletion, clearing, or the like.
A log reporting identifier, e.g., denoted as is_report, is used to indicate whether the log file of the first service container needs to be reported to a third party log system outside the Kubernetes cluster. Optionally, when the value of is_report is true, it indicates that the log file of the first service container needs to be reported to a third party log system outside the Kubernetes cluster; when the is_report value is false, the log file of the first service container does not need to be reported to a third party log system outside the Kubernetes cluster.
And the log reporting period is used for indicating that the log file of the first service container needs to be reported to the third-party log system every other log reporting period. It may be appreciated that the log reporting period may implicitly indicate that the log file of the first service container needs to be reported to a third party log system outside the Kubernetes cluster.
Alternatively, the first log management information may be determined by the management node, or the first log management information may be indicated to the management node by an administrator or an application client, and then the management node sends the first log management information to the log management client.
Alternatively, the administrator or the application client may declare the first log management information through the annotations field of the service creation. As an example, for a service named "cce-apiserver", the corresponding first log management information may be defined as logrotate-rules: '[{"path":"/opt/cloud/logs/cce/cce-apiserver/*.log","max_size":"100MB","max_files":10,"is_report":true}]', where path indicates that the log files to be managed are those with the suffix .log under the given directory, and "is_report":true means that the log files of the first service container need to be reported to the third-party log system. Or alternatively, the administrator or the application client may configure the first log management information through a service-associated ConfigMap. As an example, when the name field in the ConfigMap is "cce-apiserver-config", it indicates that this ConfigMap configuration belongs to the cce-apiserver service, and the corresponding first log management information may be defined in the same form as above.
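For intuition, placed in a workload declaration, such an annotation may look as follows (an illustrative fragment consistent with the example above; only the metadata portion is shown, and the annotation key follows the cloud1.com prefix used elsewhere in this application):

metadata:
  name: cce-apiserver
  annotations:
    cloud1.com/logrotate-rules: '[{"path":"/opt/cloud/logs/cce/cce-apiserver/*.log","max_size":"100MB","max_files":10,"is_report":true}]'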
S402, the log management client in the first service node updates the log management policy of the first service container according to the first log management information.
In one possible design, if the first service node is locally configured with the log management policy of the first service container before the management node sends the first log management information, the log management client may use the log management policy determined by the first log management information to override the log management policy about the first service container that is previously locally configured, so as to implement updating of the log management policy of the first service container.
In another possible design, before sending the first log management information, the management node may have sent second log management information to the log management client, where the second log management information is used to create the log management policy of the first service container. When the log management client receives the first log management information, it can discard or delete the previously received second log management information and determine a new log management policy according to the first log management information, thereby updating the log management policy of the first service container. Optionally, the management node may send the second log management information to the first service node when the first service container is deployed on the first service node. The format and content of the second log management information may be understood with reference to the first log management information described in S401; likewise, the second log management information may include one or more of the following: a log directory, a log compression condition, a log aging period, and a log reporting period. It should be noted, however, that the second log management information and the first log management information may include the same types of parameters but with different values. For example, both may include the log aging period parameter, but the value of that parameter in the second log management information differs from its value in the first log management information.
Based on the above design, it can be understood that the embodiment of the present application supports dynamically updating the log management policy of the service pod deployed in the service node, and the log management client in the service node may determine the latest log management policy of the related service pod according to the log management information sent by the management node.
Specifically, the log management policy updated for the first service container, corresponding to the content included in the first log management information, may be understood with reference to the following:

when the first log management information includes the log directory, the updated log management policy includes managing the log files of the first service container stored under the storage path corresponding to the log directory;

when the first log management information includes the log aging period, the updated log management policy includes performing compressed dump processing on the specified log files of the first service container at intervals of the log aging period;
when the first log management information includes the log reporting period, the updated log management policy includes sending a log file of the first service container to a third party log system at intervals of the log reporting period. It will be appreciated that the third party logging system may also be described as the aforementioned log storage backend, log server, etc.
S403, the log management client in the first service node manages the log file of the first service container according to the updated log management policy.
Specifically, the log management client in the first service node generates a specific log management task according to the updated log management policy, and executes the log management task.
Specifically, corresponding to the description in S402, when the content included in the first log management information differs, the log management client's management policy for the first service container also differs, so the generated log management tasks differ as well.
For example, if the log management information includes the log directory, the log management client may manage the log files of the first service container stored under the storage path corresponding to the log directory;
for example, if the log management information includes the log aging period, the log management client may perform a compressed dump process on the specified log file of the first service container every the log aging period.
For example, if the log management information includes the log report identifier, and the log report identifier indicates that the service node needs to send the log file of the first service container to a third-party log system, the log management client may send the log file of the first service container to the third-party log system.
For example, if the log management information includes the log reporting period, the service node sends the log file of the first service container to a third party log system every the log reporting period. It will be appreciated that the third party logging system may also be described as the aforementioned log storage backend, log server, etc.
Specifically, the above example may be implemented with reference to the description in S401, which is not described in detail in the embodiments of the present application.
Optionally, if the first log management information includes a log reporting identifier indicating reporting, or includes a log reporting period, the method further includes the following step S404 after S403 is executed.

S404, the log management client in the first service node reports the log files of the first service container to the third-party log system.
Further optionally, the log management method may further include the following step S405.
S405, the log management client in the first service node stores the mapping relation between the first service container and the first log management information.
Specifically, the log management client in the first service node may store the mapping relationship between the first service container and the first log management information in a cached mapping table, where the first service container is indicated in the cached mapping table by its name or index. Optionally, the cached mapping table may be set in a ConfigMap of the Kubernetes cluster. As an example, the data in the cached mapping table is illustrated as follows.
apiVersion: v1
kind: ConfigMap
metadata:
  name: log-rules-indexer
  namespace: default
  labels:
    cloud1.com/usage: 'log-rules-indexer'
data:
  <pod_01_id>: '[{"path":"/opt/cloud/logs/cce/cce-apiserver/*.log","max_size":"100MB","max_files":10,"is_report":true}]'
  <pod_02_id>: '[{"path":"/opt/cloud/logs/vpc/vpc-service/*.log","max_size":"50MB","max_files":10,"is_report":false}]'
The name field and labels field of the metadata in the above content indicate the special purpose of this ConfigMap, namely caching the mapping relationship between service containers and their log management information. Each row in the data field is a mapping between the index id of a first service container and its log management information (log-rules), and these data can change dynamically as the service containers on the service node change. With this design, when the first service container is migrated, the data in the cached mapping table can be adjusted dynamically, avoiding interference with the running of other service containers on the service node and improving the efficiency of log management.
In addition, optionally, if the management node sends the second log management information to the log management client of the first service node before sending the first log management information, the log management client may delete the mapping relationship between the first service container and the second log management information after receiving the first log management information and updating the log management policy.
With the log management method provided by the embodiments of the present application, log management information corresponding to each service pod can be declared, so that the log management client can implement personalized log management for each service pod on the service node where it is located. The log management information corresponding to a single service pod can also be dynamically updated, making log management more flexible. Each service node is configured with a single log management client to carry out log management, enabling non-invasive, lightweight management.
An embodiment of the present application also provides a log management scheme in which, when a service pod needs to be deployed, the log capacity required by the service pod to be deployed is taken into consideration, an appropriate service node is selected for it, and the related log management policy is configured. Specifically, referring to fig. 5, another log management method is illustrated; the method mainly includes the following flow.
S501, the management node obtains an instruction for deploying the second service container, where the instruction includes resource requirement information of the second service container.
Specifically, the resource requirement information of the second service container is used for indicating the resources occupied by the log files of the second service container. In addition, the resource requirement information of the second service container also indicates the CPU resources, memory resources, and the like required for deploying the second service container. The resources occupied by the log files of the second service container may also be referred to as the log capacity required by the second service container. It will be appreciated that the second service container indicates one service pod to be deployed, and the second service container may include one or more containers in that service pod.
In one possible design, it may be a user (e.g., an administrator) or an application client that sends an instruction to the management node to deploy the second service container. Further, the user or the application client may further send a configuration instruction to the management node, where the configuration instruction is used to instruct configuration of the log management policy of the second service container, where the configuration instruction may include initial log management information corresponding to the second service container, where the initial log management information is similar to the second log management information described in fig. 4, and the initial log management information may be used to create the log management policy of the second service container.
In another possible design, the user or the application client may combine the foregoing instruction for deploying the second service container and the configuration instruction for configuring the log management policy into one instruction, and add, in the combined instruction, service requirement information of the second service container, where the service requirement information is used to indicate (or declare) resources (such as log capacity, CPU resources, memory resources, etc.) required by the second service container, and initial log management information corresponding to the second service container, where the initial log management information is similar to the second log management information described in fig. 4, and the initial log management information may be used to create the log management policy of the second service container. And the management node can further determine the resources and initial log management information required by the second service container by analyzing the combined instruction.
Specifically, the format or content of the initial log management information may be understood with reference to the second log management information or the first log management information described in fig. 4, which is not described in detail in the embodiment of the present application.
Specifically, the service requirement information of the second service container may be defined in the workload of the second service container. The workload definition of the second service container is illustrated below.

As an example, the workload definition of the second service container includes the following:
metadata:
  annotations:
    cloud1.com/logrotate-rules: '[{"path":"/opt/cloud/logs/cce/cce-apiserver/*.log","max_size":"100MB","max_files":10,"is_report":true}]'
resources:
  requests:
    memory: "64Mi"
    cpu: "250m"
    cloud1.com/log-size: "10G"
  limits:
    memory: "128Mi"
    cpu: "500m"
    cloud1.com/log-size: "20G"
The field under annotations indicates the log management information of the second service container: the log files to be managed include all files with the suffix .log under the /opt/cloud/logs/cce/cce-apiserver/ directory; when a file's size reaches 100MB, compressed dump processing is needed, and at most 10 compressed dump files are retained.
The requests field of resources indicates that the second service container requires at least 10G of log capacity, at least 64Mi of memory resources, and at least 250m of CPU resources; the limits field of resources indicates that the second service container may use at most 20G of log capacity, at most 128Mi of memory resources, and at most 500m of CPU resources. It will be appreciated that "Mi" is a memory unit with 1024 as its conversion base, and "m" represents one-thousandth of a CPU core (a millicore), so 250m may be read as 0.25 core and 500m as 0.5 core.
Note that field names such as cloud1.com/log-size and cloud1.com/logrotate-rules are given as examples. The related log management information may be extended according to the requirements of practical applications, which is not limited by the embodiments of the present application.
S502, the management node determines a second service node for deploying the second service container according to the resource demand information of the second service container. Specifically, the management node may obtain log capacity information of at least one service node associated with the management node, the log capacity information being used to indicate the current capacity and/or the capacity quota of the log storage space of the at least one service node. The management node then determines, among the at least one service node, a second service node for deploying the second service container according to the resource demand information of the second service container and the log capacity information of the at least one service node, so that the second service container can be deployed on the second service node.
As depicted in fig. 3, each of the at least one service node may register the capacity quota of its own log storage space with the management node. In addition, the service nodes may be configured to report the current capacity of their own log storage space to the management node periodically or in real time.
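As an illustration only, the following Go sketch shows one possible shape of such a capacity report; the type name LogCapacityReport and its fields are hypothetical and are not defined by the embodiment of the present application.

    package sketch

    // LogCapacityReport is a hypothetical message that a service node
    // could send when registering its capacity quota with the management
    // node, or when periodically reporting its current usage.
    type LogCapacityReport struct {
        NodeName      string // identity of the reporting service node
        CapacityQuota int64  // total log storage space of the node, in bytes
        CurrentUsage  int64  // log storage space currently in use, in bytes
    }

    // Remaining returns the log capacity still available on the node,
    // which the management node can consult when scheduling containers.
    func (r LogCapacityReport) Remaining() int64 {
        return r.CapacityQuota - r.CurrentUsage
    }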
Specifically, the management node may determine the second service node among the at least one service node according to one or more specific criteria.
In one possible implementation manner, the management node may determine any service node, of the at least one service node, whose log capacity information meets the log capacity required by the second service container, as the second service node.
In another possible implementation manner, if the log capacity information of a plurality of the service nodes meets the log capacity required by the second service container, the management node may select a suitable second service node from among those service nodes in consideration of factors such as the current remaining log capacity of each node, the interaction delay with already-deployed service containers, and/or reliability. For example, the management node may select the service node with the largest current remaining log capacity as the second service node. Alternatively, when an already-deployed service container has a high requirement on interaction delay with the second service container to be deployed, the selected second service node should be distributed close to the service node where that already-deployed service container is located. Alternatively, for reliability, the management node deploys service containers corresponding to the same application or the same application client onto different service nodes, so that the selected second service node does not already host a service container belonging to the same application or the same application client as the second service container to be deployed. A minimal selection sketch follows.
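As an illustration only, the following Go sketch shows one possible way for the management node to filter candidate service nodes by the required log capacity and prefer the node with the largest remaining capacity; the types and the function PickServiceNode are hypothetical and do not describe the actual scheduler of the embodiment.

    package sketch

    import "sort"

    // nodeView is a hypothetical record the management node keeps for each
    // associated service node, derived from the reported capacity information.
    type nodeView struct {
        Name         string
        RemainingLog int64 // remaining log capacity, in bytes
    }

    // PickServiceNode filters the candidate service nodes whose remaining
    // log capacity satisfies the demand of the container to be deployed,
    // then applies one of the criteria named above: it prefers the node
    // with the largest remaining log capacity. It returns false when no
    // candidate qualifies.
    func PickServiceNode(candidates []nodeView, requiredLogBytes int64) (nodeView, bool) {
        fit := make([]nodeView, 0, len(candidates))
        for _, n := range candidates {
            if n.RemainingLog >= requiredLogBytes {
                fit = append(fit, n)
            }
        }
        if len(fit) == 0 {
            return nodeView{}, false
        }
        // Sort descending by remaining capacity and take the first node.
        sort.Slice(fit, func(i, j int) bool { return fit[i].RemainingLog > fit[j].RemainingLog })
        return fit[0], true
    }

Other criteria named above, such as affinity for low interaction delay or anti-affinity for reliability, could be added as further filters in the same loop.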
Optionally, in correspondence with the management node acquiring the initial log management information of the second service container in S501, the method described in fig. 5 may further include step S503.
S503, the management node sends initial log management information corresponding to the second service container to the log management client in the second service node.
In an alternative embodiment, the management node may actively provide the log management client in the second service node with the initial log management information of the second service container when the second service container is deployed in the second service node.
In another alternative embodiment, when the log management client in the second service node detects that the second service container has been newly added to the second service node, it may acquire the initial log management information of the second service container from the management node. In this case, it can be understood that the management node, triggered by the log management client in the second service node, provides the initial log management information of the second service container to that log management client, so that the log management client creates the log management policy of the second service container according to the initial log management information.
Further, S504 is illustrated in fig. 5: the management node may further send new log management information corresponding to the second service container to the log management client in the second service node, so that the log management client in the second service node updates the log management policy of the second service container.
In the embodiment of the application, a service pod is scheduled to a service node by taking into account the log capacity required by the service pod to be deployed. In this way, the log storage requirements of different types of services can be flexibly met, log disk overload caused by allocating resources uniformly is avoided, normal operation of the service is ensured, and the reliability of log management is improved.
It can be appreciated that the two log management schemes provided in the embodiments of the present application may be used in combination. For example, the deployment process for the first service container in fig. 4 may be implemented with reference to the deployment process for the second service container described in fig. 5. Conversely, combined with the log management scheme described in fig. 4, the management node may, on the basis of the scheme shown in fig. 5, further indicate new log management information corresponding to the second service container to the log management client in the second service node, so that the log management client replaces the initial log management information with the new log management information, thereby updating the log management policy of the second service container. It should be noted that, in the embodiment of the present application, the first service node and the second service node may be the same service node or different service nodes, which is not limited in this embodiment.
Further, referring to fig. 6, which illustrates the framework of another log management scheme, the interaction between the management node and the internal structure of the log management client described in fig. 3 is explained. Fig. 6 illustrates a management node and a log management client in a service node: the management node includes a kube-apiserver component and a kube-scheduler component, and the log management client includes a monitoring module and a processing module.
The kube-apiserver component of the management node can receive an instruction for deploying a pod from an administrator or an application client, the instruction indicating the log management information of the service pod and the resources occupied by the service pod. The kube-scheduler component may select a service node for deploying the service pod based on the resources it occupies. Because the resources occupied by the service pod include the log capacity that the service pod demands, the kube-scheduler component extends resource scheduling to support the dimension of log capacity demanded by the service pod.
The monitoring module may support the functions of the log management client in S402 and S503. Illustratively, the monitoring module is configured to perform operations (1) and (2) as follows:
(1) When it is detected that a service pod has been newly added, or that the log management information of a deployed service pod has been updated, and the service pod and the log management client are located on the same service node, the log management information corresponding to the service pod is acquired from the kube-apiserver component. For example, in a scenario in which the log management information corresponding to a service pod is declared through the annotations field of its workload definition, the monitoring module may obtain that information by parsing the relevant logrotate-rules field in the annotations of the newly added service pod, which corresponds to S401.
(2) The obtained log management information of the service pod is cached; for example, the mapping relationship between the service pod and its log management information may be stored in a cached mapping table held in a ConfigMap.
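As an illustration only, the following Go sketch shows one possible shape of such a cached mapping, including its removal path when a pod leaves the node (described further below); the names PodLogInfo and InfoCache are hypothetical.

    package sketch

    import "sync"

    // PodLogInfo mirrors the fields of the logrotate-rules annotation in
    // the workload definition shown earlier; the names are illustrative.
    type PodLogInfo struct {
        Path     string // glob of the log files to manage
        MaxSize  string // size threshold that triggers a compressed dump
        MaxFiles int    // maximum number of compressed dump files retained
        IsReport bool   // whether log files are reported to a third-party log system
    }

    // InfoCache keeps the mapping between a service pod and its log
    // management information, analogous to the cached mapping table held
    // in the ConfigMap.
    type InfoCache struct {
        mu    sync.Mutex
        byPod map[string]PodLogInfo
    }

    // OnPodAddedOrUpdated stores or refreshes the mapping when the
    // monitoring module observes a new pod, or updated log management
    // information, on its own service node.
    func (c *InfoCache) OnPodAddedOrUpdated(pod string, info PodLogInfo) {
        c.mu.Lock()
        defer c.mu.Unlock()
        if c.byPod == nil {
            c.byPod = make(map[string]PodLogInfo)
        }
        c.byPod[pod] = info
    }

    // OnPodRemoved drops the mapping when a pod is destroyed or migrated
    // away, so that its log management tasks can be removed as well.
    func (c *InfoCache) OnPodRemoved(pod string) {
        c.mu.Lock()
        defer c.mu.Unlock()
        delete(c.byPod, pod)
    }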
The processing module may support the function of the log management client in S403. Illustratively, the processing module may be divided into a log management task generating module and a log management task executing module.
The log management task generating module is used for parsing the log management information corresponding to the service pod into a log management policy, so as to generate the log management tasks of the service pod. The log management task execution module is used for executing the log management tasks of the service pod.
As an example, the log management tasks executed by the log management task execution module include the log aging task and the log reporting task illustrated in fig. 6. Equivalently, the log management task execution module can be understood as being further divided into a log aging task execution unit and a log reporting task execution unit. The log aging task execution unit is used for performing compressed dump processing on the designated log files according to the log aging period included in the log management information. The log reporting task execution unit is used for reporting the relevant log files to a third-party log system when reporting is required, according to the log reporting identifier and/or the log reporting period included in the log management information.
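As an illustration only, the following Go sketch shows one plausible form of a log aging pass consistent with the max_size and max_files fields illustrated earlier: the file is compressed into a timestamped dump, truncated, and old dumps beyond the retention limit are removed. The function AgeLogFile is hypothetical and is not the actual implementation of the embodiment.

    package sketch

    import (
        "compress/gzip"
        "fmt"
        "io"
        "os"
        "path/filepath"
        "sort"
        "time"
    )

    // AgeLogFile performs one aging pass over a single log file: when the
    // file exceeds maxSize bytes it is compressed into a timestamped .gz
    // dump and truncated, and only the newest maxFiles dumps are kept.
    func AgeLogFile(path string, maxSize int64, maxFiles int) error {
        st, err := os.Stat(path)
        if err != nil || st.Size() < maxSize {
            return err // nothing to do (or the file is inaccessible)
        }
        src, err := os.Open(path)
        if err != nil {
            return err
        }
        defer src.Close()
        dump, err := os.Create(fmt.Sprintf("%s.%d.gz", path, time.Now().Unix()))
        if err != nil {
            return err
        }
        defer dump.Close()
        gz := gzip.NewWriter(dump)
        if _, err := io.Copy(gz, src); err != nil {
            return err
        }
        if err := gz.Close(); err != nil {
            return err
        }
        if err := os.Truncate(path, 0); err != nil {
            return err
        }
        // Remove the oldest dumps beyond the retention limit; Unix
        // timestamps of equal width sort chronologically as strings.
        dumps, err := filepath.Glob(path + ".*.gz")
        if err != nil {
            return err
        }
        sort.Strings(dumps)
        for len(dumps) > maxFiles {
            if err := os.Remove(dumps[0]); err != nil {
                return err
            }
            dumps = dumps[1:]
        }
        return nil
    }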
When a service pod is destroyed or migrated away from the service node where the log management client is located, the processing module of the log management client can also perceive the change and remove the related log management tasks, without affecting the operation of the other service pods on the service node, which improves the efficiency and reliability of log management.
Based on the above embodiments, the present application further provides a computing device cluster. As illustrated in fig. 7, the cluster of computing devices includes at least one computing device 100, each computing device 100 of the at least one computing device 100 including a processor 104 and a memory 106.
The memory 106 of at least one computing device 100 in the computing device cluster is configured to store computer-executable instructions, for example, the instructions of the log management system for performing the log management method described above. The processor 104 of at least one computing device 100 executes the computer-executable instructions so that the cluster of computing devices performs the log management method. In some possible implementations, one or more computing devices 100 in the cluster may each execute some of the instructions of the log management system for performing the log management method; in other words, a combination of one or more computing devices 100 may collectively execute the instructions of the log management system for performing the log management method.
A communication interface 108 may also be included in each computing device 100, and one computing device 100 may interact with other computing devices via the communication interface 108. By way of example, the communication interface 108 may be a transceiver, circuit, bus, module, pin, or other type of communication interface. When the computing device 100 is a chip-type apparatus or circuit, the communication interface 108 in the computing device 100 may also be an input/output circuit, may input information (or called receiving information) and output information (or called transmitting information), and the processor may be an integrated processor or a microprocessor or an integrated circuit or a logic circuit, where the processor may determine the output information according to the input information.
The coupling in the embodiments of the present application is an indirect coupling or communication connection between devices, units, or modules, which may be electrical, mechanical, or in other forms, for information interaction between the devices, units, or modules. The processor 104 may cooperate with the memory 106 and the communication interface 108. The specific connection medium between the processor 104, the memory 106, and the communication interface 108 is not limited in this embodiment.
Optionally, referring to fig. 7, the processor 104, the memory 106, and the communication interface 108 are connected to each other through a bus 102. The bus 102 may be a peripheral component interconnect (PCI) bus, an extended industry standard architecture (EISA) bus, or the like. Buses may be classified as address buses, data buses, control buses, etc. For ease of illustration, only one thick line is shown in fig. 7, but this does not mean that there is only one bus or only one type of bus.
In the embodiments of the present application, the processor may be a general purpose processor, a digital signal processor, an application specific integrated circuit, a field programmable gate array or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and each method, step, and logic block of the embodiments of the present application may be implemented or executed. The general purpose processor may be a microprocessor or any conventional processor or the like. The steps of a method of an embodiment of the application in connection with an embodiment of the application may be embodied directly in a hardware processor for execution, or in a combination of hardware and software modules in the processor.
In the embodiment of the present application, the memory may be a nonvolatile memory, such as a hard disk drive (HDD) or a solid state drive (SSD), or may be a volatile memory, such as a random-access memory (RAM). The memory may also be, but is not limited to, any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. The memory in the embodiments of the present application may also be a circuit or any other device capable of implementing a storage function, for storing program instructions and/or data.
Furthermore, it should be noted that the memory 106 in different computing devices 100 in the computing device cluster may store different instructions, each for performing part of the functionality of the log management system. That is, the instructions stored in the memory 106 of different computing devices 100 may implement the functionality of the management node and/or of the log management client in a service node.
Fig. 8 shows one possible implementation. As shown in fig. 8, two computing devices 100A and 100B are connected through a communication interface 108. Instructions for performing the functions of the management node are stored on memory in computing device 100A. Instructions for performing the functions of the log management client in the service node are stored on memory in computing device 100B. In other words, the memory 106 of the computing devices 100A and 100B collectively store instructions of the log management system for performing the log management method.
It should be appreciated that the functionality of computing device 100A shown in fig. 8 may also be performed by multiple computing devices 100. Likewise, the functionality of computing device 100B may also be performed by multiple computing devices 100.
Embodiments of the present application also provide a computer program product comprising instructions. The computer program product may be software or a program product containing instructions, capable of running on a computing device or being stored in any usable medium. When the computer program product runs on the cluster of computing devices, the cluster of computing devices is caused to perform the log management method of the log management system described above.
Embodiments of the present application also provide a computer-readable storage medium. The computer-readable storage medium may be any usable medium that a computing device can store, or a data storage device, such as a data center, containing one or more usable media. The usable medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., solid state disk), etc. The computer-readable storage medium includes instructions that instruct the cluster of computing devices to perform the log management method of the log management system described above.
In the embodiments of the present application, the embodiments may refer to each other where there is no logical contradiction; for example, methods and/or terms of different method embodiments may refer to each other, and functions and/or terms of the system embodiments and the method embodiments may refer to each other.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present invention, not to limit them. Although the invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments can still be modified, or some of their technical features can be replaced by equivalents, and such modifications or substitutions do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (13)

1. A log management method, comprising:
the method comprises the steps that a management node sends first log management information to a log management client so that the log management client updates a log management strategy of a first service container, and the log management client and the first service container are deployed in a first service node;
and the log management client manages the log files of the first service container according to the updated log management strategy.
2. The method of claim 1, wherein before the management node sends the first log management information to the log management client, the method further comprises:
when the management node deploys the first service container in the first service node, the log management client acquires second log management information from the management node, wherein the second log management information is used for creating a log management policy of the first service container.
3. The method of claim 1 or 2, wherein the method further comprises:
the management node obtains an instruction for deploying a second service container, wherein the instruction comprises resource demand information of the second service container, and the resource demand information of the second service container is used for indicating resources occupied by log files of the second service container;
And the management node determines a second service node for deploying the second service container according to the resource demand information of the second service container.
4. The method of claim 3, wherein the management node determining a second service node for deploying the second service container based on the resource requirement information of the second service container comprises:
the management node obtains log capacity information of at least one service node associated with the management node, wherein the log capacity information of the at least one service node is used for indicating the current capacity and/or capacity quota of the log storage space of the at least one service node;
the management node determines the second service node in the at least one service node according to the resource demand information of the second service container and the log capacity information of the at least one service node.
5. The method of claim 3 or 4, wherein the method further comprises:
and the management node receives a configuration instruction sent by a user, wherein the configuration instruction indicates that the log management strategy of the second service container is configured.
6. The method of any one of claims 1-5, wherein the first log management information includes one or more of: a log directory, a log aging period, and a log reporting period.
7. The method of claim 6, wherein,
when the first log management information includes the log directory, the updated log management policy includes managing the log files stored by the first service container in the storage path corresponding to the log directory;
when the first log management information includes the log aging period, the updated log management policy includes performing compressed dump processing on the designated log files of the first service container at intervals of the log aging period;
when the first log management information includes the log reporting period, the updated log management policy includes sending the log files of the first service container to a third-party log system at intervals of the log reporting period.
8. The method of any one of claims 1-7, wherein the method further comprises:
and the log management client saves the mapping relation between the first business container and the first log management information.
9. The method according to any of claims 1-8, wherein one of said log management clients is deployed in said first service node, said first service node having one or more service containers deployed thereon.
10. A log management system comprising a management node and at least one service node; each service node in the at least one service node is provided with a log management client;
the management node is configured to send first log management information to a log management client in a designated service node, so that the log management client updates a log management policy of a first service container, where the designated service node is a service node deployed with the first service container;
and the log management client in the designated service node is used for managing the log files of the first service container according to the updated log management policy.
11. A cluster of computing devices, comprising at least one computing device, the computing device comprising a processor and a memory;
the memory of the at least one computing device is for storing computer-executable instructions;
The processor of the at least one computing device is configured to execute the computer-executable instructions to cause the cluster of computing devices to perform the method of any of claims 1-9.
12. A computer readable storage medium comprising computer program instructions which, when executed by a cluster of computing devices, cause the cluster of computing devices to perform the method of any of claims 1-9.
13. A computer program product comprising instructions that, when executed by a cluster of computing devices, cause the cluster of computing devices to perform the method of any of claims 1-9.
CN202210709250.7A 2022-06-21 2022-06-21 Log management method and system Pending CN117319204A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210709250.7A CN117319204A (en) 2022-06-21 2022-06-21 Log management method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210709250.7A CN117319204A (en) 2022-06-21 2022-06-21 Log management method and system

Publications (1)

Publication Number Publication Date
CN117319204A true CN117319204A (en) 2023-12-29

Family

ID=89279915

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210709250.7A Pending CN117319204A (en) 2022-06-21 2022-06-21 Log management method and system

Country Status (1)

Country Link
CN (1) CN117319204A (en)


Legal Events

Date Code Title Description
PB01 Publication