CN113590443A - Log acquisition and log monitoring method and device - Google Patents

Log acquisition and log monitoring method and device

Info

Publication number
CN113590443A
Authority
CN
China
Prior art keywords
log
kafka
log4j
jar
flink
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110862175.3A
Other languages
Chinese (zh)
Inventor
郝卫亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Daishu Technology Co ltd
Original Assignee
Hangzhou Daishu Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Daishu Technology Co ltd filed Critical Hangzhou Daishu Technology Co ltd
Priority to CN202110862175.3A
Publication of CN113590443A
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/30Monitoring
    • G06F11/34Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation ; Recording or statistical evaluation of user activity, e.g. usability assessment
    • G06F11/3466Performance evaluation by tracing or monitoring
    • G06F11/3476Data logging

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Quality & Reliability (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Debugging And Monitoring (AREA)

Abstract

The invention provides a log collection and log monitoring method and device. The log collection and log monitoring method comprises: collecting Flink logs and writing them into Kafka through a custom log4j appender; rewriting the Flink client to configure the custom log4j appender; and writing the Flink logs in Kafka into Elasticsearch based on Flink SQL and matching abnormal logs based on SQL operators, so as to realize collection and monitoring of Flink logs. With the method, users can define their own log format and add the metadata information required by the service as needed, and operation and maintenance personnel do not need to separately maintain components similar to Logstash, which greatly reduces the operation and maintenance cost.

Description

Log acquisition and log monitoring method and device
Technical Field
The invention relates to the technical field of data processing, in particular to a method and a device for log collection and log monitoring.
Background
The most widely used open-source log collection and monitoring scheme at present is Elasticsearch + Logstash + Kibana, ELK for short. Elasticsearch is a search server based on Lucene that provides a distributed, multi-user full-text search engine; Logstash is a platform for transmitting, processing, managing and searching application logs and events; Kibana is the web interface for log analysis provided for Logstash and Elasticsearch. Although ELK can solve the problem of log collection and monitoring, in order to collect logs on yarn (a resource scheduling platform responsible for providing the resources required by running programs, equivalent to a distributed operating system), Logstash needs to be deployed on every yarn node, which causes two problems: 1) Logstash permanently occupies node resources; 2) when the cluster is expanded, Logstash has to be installed on the new nodes, which greatly increases the operation and maintenance cost.
Disclosure of Invention
In view of these problems, the invention provides a log collection and log monitoring method and device, which effectively solve the technical problem of the high operation and maintenance cost of existing log collection and monitoring.
The technical scheme provided by the invention is as follows:
in one aspect, the present invention provides a log collecting and monitoring method, including:
collecting Flink logs and writing them into Kafka through a custom log4j appender;
rewriting the Flink client to configure the custom log4j appender;
and writing the Flink logs in Kafka into Elasticsearch based on Flink SQL and matching abnormal logs based on SQL operators, so as to realize collection and monitoring of Flink logs.
Further preferably, before collecting Flink logs and writing them into Kafka based on the custom log4j appender, the method comprises a step of customizing the log4j appender; the step of customizing the log4j appender comprises:
inheriting the AppenderSkeleton class and defining the Kafka configuration parameters;
parsing the Kafka configuration parameters and initializing a Kafka producer based on the activateOptions method;
parsing task parameters from environment variables and writing them into Kafka based on the append method;
and building a jar package containing the Kafka classes and a jar package not containing the Kafka classes with the Maven plug-in, and configuring the log4j appender.
Further preferably, in the step of rewriting the Flink client to configure the custom log4j appender, rewriting the Flink client comprises:
rewriting the YarnClusterDescriptor in on-yarn mode, uploading the jar package containing the Kafka classes and the jar package not containing the Kafka classes of the log4j appender to HDFS as ship files, and adding the jar package not containing the Kafka classes to the classpath directory;
building an image in on-Kubernetes mode, adding the jar package of the log4j appender that does not contain the Kafka classes to the Flink lib directory, and adding the jar package that contains the Kafka classes to the Flink opt directory;
adding the environment variables TASK_ID and DTSTACK_APPENDER_JAR, wherein the environment variable DTSTACK_APPENDER_JAR is used for specifying the path of the jar package of the log4j appender that contains the Kafka classes;
configuring log4j.properties and adding the newly configured custom log4j appender.
Further preferably, writing the Flink logs in Kafka into Elasticsearch based on Flink SQL and matching abnormal logs based on SQL operators comprises:
reading, in real time, the Flink log information written into Kafka through a Kafka source table defined in Flink SQL;
capturing abnormal logs through the REGEXP operator and a predefined regular expression for matching abnormal logs, and outputting the abnormal logs to an alert_table, wherein the alert_table is created in a MySQL database;
and reading the alert_table data through the DTStack custom alert interface to realize abnormal log alerting.
In another aspect, the present invention provides a log collecting and monitoring device, including:
a log collection module, used for collecting Flink logs through the custom log4j appender and writing them into Kafka;
a configuration rewriting module, used for rewriting the Flink client to configure the custom log4j appender;
and a log monitoring module, used for writing the Flink logs in Kafka into Elasticsearch based on Flink SQL and matching abnormal logs based on SQL operators, so as to realize collection and monitoring of Flink logs.
Further preferably, the log collection and log monitoring device further includes a log4j appender configuration module, including:
a parameter definition unit, used for inheriting the AppenderSkeleton class and defining the Kafka configuration parameters;
an initialization unit, used for parsing the Kafka configuration parameters and initializing the Kafka producer based on the activateOptions method;
a task parameter parsing unit, used for parsing task parameters from environment variables and writing them into Kafka based on the append method;
and a configuration unit, used for building a jar package containing the Kafka classes and a jar package not containing the Kafka classes with the Maven plug-in, and configuring the log4j appender.
Further preferably, the configuration rewriting module includes:
an on-yarn mode configuration unit, used for rewriting the YarnClusterDescriptor in on-yarn mode, uploading the jar package containing the Kafka classes and the jar package not containing the Kafka classes of the log4j appender to HDFS as ship files, and adding the jar package not containing the Kafka classes to the classpath directory;
an on-Kubernetes mode configuration unit, used for building an image in on-Kubernetes mode, adding the jar package of the log4j appender that does not contain the Kafka classes to the Flink lib directory, and adding the jar package that contains the Kafka classes to the Flink opt directory;
a variable configuration unit, used for adding the environment variables TASK_ID and DTSTACK_APPENDER_JAR, wherein the environment variable DTSTACK_APPENDER_JAR is used for specifying the path of the jar package of the log4j appender that contains the Kafka classes;
and a log4j.properties configuration unit, used for configuring log4j.properties and adding the newly configured custom log4j appender.
Further preferably, the log monitoring module includes:
a log information reading unit, used for reading, in real time, the Flink log information written into Kafka through a Kafka source table defined in Flink SQL;
an abnormal log capturing unit, used for capturing abnormal logs through the REGEXP operator and a predefined regular expression for matching abnormal logs and outputting them to the alert_table, wherein the alert_table is created in a MySQL database;
and an abnormal log alerting unit, used for reading the alert_table data through the DTStack custom alert interface to realize abnormal log alerting.
In another aspect, the present invention provides a terminal device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the log collection and log monitoring method when executing the computer program.
In another aspect, the present invention provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the log collection and log monitoring method.
The log collection and log monitoring method and device provided by the invention realize a unified and flexible log collection scheme for Flink-based stream computing tasks in both on-yarn and on-Kubernetes modes. Users can define their own log format and add the metadata information required by the service as needed, and operation and maintenance personnel do not need to separately maintain components similar to Logstash, which greatly reduces the operation and maintenance cost.
Drawings
The foregoing features, technical features, advantages and embodiments are further described in the following detailed description of the preferred embodiments, which is to be read in connection with the accompanying drawings.
FIG. 1 is a schematic flow chart of a log collection and log monitoring method according to the present invention;
FIG. 2 is a schematic structural diagram of a log collection and log monitoring device according to the present invention.
Reference numerals:
100-log collection and log monitoring device, 110-log collection module, 120-configuration rewriting module, 130-log monitoring module.
Detailed Description
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the following description will be made with reference to the accompanying drawings. It is obvious that the drawings in the following description are only some examples of the invention, and that for a person skilled in the art, other drawings and embodiments can be derived from them without inventive effort.
A first embodiment of the present invention provides a log collection and log monitoring method, as shown in FIG. 1, including:
S10: collecting Flink logs and writing them into Kafka through a custom log4j appender;
S20: rewriting the Flink client to configure the custom log4j appender;
S30: writing the Flink logs in Kafka into Elasticsearch based on Flink SQL and matching abnormal logs based on SQL operators, so as to realize collection and monitoring of Flink logs.
In this embodiment, the log4j appender is customized before the logs are collected and monitored. Based on the Java programming language, the customization process includes:
S11: inheriting the AppenderSkeleton class and defining the relevant Kafka configuration parameters (including but not limited to broker, topic, etc.).
S12: overriding the activateOptions method, parsing the Kafka configuration parameters and initializing the Kafka producer. In this process, in order to avoid class conflicts with the Flink Kafka connector, the Kafka appender is loaded with the Flink child classloader when the Kafka producer is initialized.
S13: overriding the append method, parsing task-related parameters such as taskId and applicationId from environment variables (including but not limited to taskId, applicationId, taskName, etc.), and writing them into Kafka. Writing to Kafka is asynchronous and exceptions are caught with try/catch, so that even if a write fails, the Flink log stream is not affected.
S14: building, with a Maven plug-in, a jar package containing the Kafka classes (dt-appender-kafka.jar) and a jar package not containing the Kafka classes (dt-appender-lib.jar), which completes the customization of the log4j appender.
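As a reference, the following is a minimal Java (log4j 1.x) sketch of such an appender. The class name KafkaLogAppender, the property names bootstrapServers and topic, and the JSON payload are illustrative assumptions rather than the patent's actual implementation, and the classloader isolation mentioned in step S12 is omitted for brevity.

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.log4j.AppenderSkeleton;
import org.apache.log4j.spi.LoggingEvent;

public class KafkaLogAppender extends AppenderSkeleton {

    // Kafka configuration parameters, injected from log4j.properties via the setters below.
    private String bootstrapServers;
    private String topic;

    private KafkaProducer<String, String> producer;

    public void setBootstrapServers(String bootstrapServers) { this.bootstrapServers = bootstrapServers; }
    public void setTopic(String topic) { this.topic = topic; }

    // Step S12: parse the Kafka configuration and initialize the producer once.
    @Override
    public void activateOptions() {
        Properties props = new Properties();
        props.put("bootstrap.servers", bootstrapServers);
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        producer = new KafkaProducer<>(props);
    }

    // Step S13: enrich each event with task metadata from environment variables and
    // send it asynchronously; a failed write must never affect the Flink log stream.
    @Override
    protected void append(LoggingEvent event) {
        try {
            String taskId = System.getenv("TASK_ID");
            String payload = "{\"taskId\":\"" + taskId + "\",\"message\":\""
                    + layout.format(event).replace("\"", "\\\"").replace("\n", "\\n") + "\"}";
            producer.send(new ProducerRecord<>(topic, taskId, payload));
        } catch (Exception ignored) {
            // Swallow the error so a Kafka outage does not break logging.
        }
    }

    @Override
    public void close() {
        if (producer != null) {
            producer.close();
        }
    }

    @Override
    public boolean requiresLayout() {
        return true;
    }
}
```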
In step S20, rewriting the Flink client to configure the custom log4j appender includes, based on the Java programming language:
S21: rewriting the YarnClusterDescriptor in on-yarn mode, uploading the jar package containing the Kafka classes (dt-appender-kafka.jar) and the jar package not containing the Kafka classes (dt-appender-lib.jar) of the custom log4j appender to HDFS (the Hadoop Distributed File System) as ship files, and adding the jar package not containing the Kafka classes (dt-appender-lib.jar) to the classpath directory;
S22: building an image in on-Kubernetes mode, adding the jar package of the log4j appender that does not contain the Kafka classes (dt-appender-lib.jar) to the Flink lib directory, and adding the jar package that contains the Kafka classes (dt-appender-kafka.jar) to the Flink opt directory;
S23: adding the environment variables TASK_ID and DTSTACK_APPENDER_JAR, wherein the environment variable DTSTACK_APPENDER_JAR is used for specifying the path of the jar package of the log4j appender that contains the Kafka classes;
S24: configuring log4j.properties and adding the newly configured custom log4j appender.
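For illustration, a log4j.properties fragment of the kind described in step S24 might look as follows. The appender class, broker address and topic name are assumptions matching the sketch above, and the file appender refers to Flink's default rolling file appender defined elsewhere in the shipped configuration.

```properties
# Keep Flink's default file appender and add the custom Kafka appender.
log4j.rootLogger=INFO, file, kafka

# Custom appender (class and property names follow the sketch above).
log4j.appender.kafka=com.example.logging.KafkaLogAppender
log4j.appender.kafka.bootstrapServers=kafka-broker:9092
log4j.appender.kafka.topic=flink-task-log
log4j.appender.kafka.layout=org.apache.log4j.PatternLayout
log4j.appender.kafka.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss,SSS} %-5p %-60c %x - %m%n
```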
In step S30, writing the Flink logs in Kafka into Elasticsearch based on Flink SQL and matching abnormal logs based on SQL operators includes:
S31: reading, in real time, the Flink log information written into Kafka through a Kafka source table defined in Flink SQL;
S32: capturing abnormal logs through the REGEXP operator and a predefined regular expression for matching abnormal logs, and outputting them to the alert_table, wherein the alert_table is created in a MySQL database and used for storing the matched abnormal logs;
S33: reading the alert_table data through the DTStack custom alert interface to realize abnormal log alerting.
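To make steps S31 to S33 concrete, the following Flink SQL sketch defines a Kafka source table, an Elasticsearch sink for full-text log search, and a JDBC sink for the alert_table in MySQL, and then routes abnormal logs with the REGEXP operator. The table schemas, connector options and regular expression are illustrative assumptions, not the exact definitions used in the patent.

```sql
-- Kafka source table: Flink logs written by the custom appender (step S31).
CREATE TABLE flink_log_source (
  task_id  STRING,
  message  STRING,
  log_time TIMESTAMP(3)
) WITH (
  'connector' = 'kafka',
  'topic' = 'flink-task-log',
  'properties.bootstrap.servers' = 'kafka-broker:9092',
  'properties.group.id' = 'log-monitor',
  'scan.startup.mode' = 'latest-offset',
  'format' = 'json'
);

-- Elasticsearch sink: the full log stream for search and analysis.
CREATE TABLE log_es_sink (
  task_id  STRING,
  message  STRING,
  log_time TIMESTAMP(3)
) WITH (
  'connector' = 'elasticsearch-7',
  'hosts' = 'http://elasticsearch:9200',
  'index' = 'flink-task-log'
);

-- MySQL sink: the alert_table holding matched abnormal logs (step S32).
CREATE TABLE alert_table (
  task_id  STRING,
  message  STRING,
  log_time TIMESTAMP(3)
) WITH (
  'connector' = 'jdbc',
  'url' = 'jdbc:mysql://mysql:3306/monitor',
  'table-name' = 'alert_table',
  'username' = 'monitor',
  'password' = 'monitor'
);

-- Write all logs to Elasticsearch, and route abnormal logs to alert_table.
INSERT INTO log_es_sink
SELECT task_id, message, log_time FROM flink_log_source;

INSERT INTO alert_table
SELECT task_id, message, log_time
FROM flink_log_source
WHERE REGEXP(message, '(?i)(exception|error|caused by)');
```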
In practical application, after the web application layer submits a task to the Flink client, log4j.properties is configured in the Flink client and environment variables such as taskId are set at the same time. When the Flink client submits the task to yarn, the custom log4j appender jar is uploaded to the yarn container, and the task may also be submitted to k8s (Kubernetes); as a result, the custom log4j appender in the jobmanager in the yarn container collects the Flink logs and writes them into Kafka, and the custom log4j appender in the pod in Kubernetes likewise collects the Flink logs and writes them into Kafka.
In another embodiment of the present invention, a log collection and log monitoring device, as shown in FIG. 2, is based on the Java programming language and includes: a log collection module, used for collecting Flink logs through the custom log4j appender and writing them into Kafka; a configuration rewriting module, used for rewriting the Flink client to configure the custom log4j appender; and a log monitoring module, used for writing the Flink logs in Kafka into Elasticsearch based on Flink SQL and matching abnormal logs based on SQL operators, so as to realize collection and monitoring of Flink logs.
In addition, the log collection and log monitoring device also includes a log4j appender configuration module, including: a parameter definition unit, used for inheriting the AppenderSkeleton class and defining the Kafka configuration parameters (including but not limited to broker, topic, etc.); an initialization unit, used for parsing the Kafka configuration parameters and initializing the Kafka producer based on the activateOptions method; a task parameter parsing unit, used for parsing task parameters (including but not limited to taskId, applicationId, taskName, etc.) from environment variables and writing them into Kafka based on the append method; and a configuration unit, used for building a jar package containing the Kafka classes and a jar package not containing the Kafka classes with the Maven plug-in, and configuring the log4j appender.
Specifically, in the initialization unit, the activateOptions method is overridden, the Kafka configuration parameters are parsed, and the Kafka producer is initialized. In this process, in order to avoid class conflicts with the Flink Kafka connector, the Kafka appender is loaded with the Flink child classloader when the Kafka producer is initialized. In the task parameter parsing unit, the append method is overridden, and task-related parameters such as taskId and applicationId are parsed from the environment variables and written into Kafka. Writing to Kafka is asynchronous and exceptions are caught with try/catch, so that even if a write fails, the Flink log stream is not affected. In the configuration unit, a Maven plug-in is used to build a jar package containing the Kafka classes (dt-appender-kafka.jar) and a jar package not containing the Kafka classes (dt-appender-lib.jar), which completes the customization of the log4j appender.
The configuration rewriting module includes: an on-yarn mode configuration unit, used for rewriting the YarnClusterDescriptor in on-yarn mode, uploading the jar package containing the Kafka classes (dt-appender-kafka.jar) and the jar package not containing the Kafka classes (dt-appender-lib.jar) of the custom log4j appender to HDFS (the Hadoop Distributed File System) as ship files, and adding the jar package not containing the Kafka classes (dt-appender-lib.jar) to the classpath directory; an on-Kubernetes mode configuration unit, used for building an image in on-Kubernetes mode, adding the jar package of the log4j appender that does not contain the Kafka classes (dt-appender-lib.jar) to the Flink lib directory, and adding the jar package that contains the Kafka classes (dt-appender-kafka.jar) to the Flink opt directory; a variable configuration unit, used for adding the environment variables TASK_ID and DTSTACK_APPENDER_JAR, wherein the environment variable DTSTACK_APPENDER_JAR is used for specifying the path of the jar package of the log4j appender that contains the Kafka classes; and a log4j.properties configuration unit, used for configuring log4j.properties and adding the newly configured custom log4j appender.
The log monitoring module includes: a log information reading unit, used for reading, in real time, the Flink log information written into Kafka through a Kafka source table defined in Flink SQL; an abnormal log capturing unit, used for capturing abnormal logs through the REGEXP operator and a predefined regular expression for matching abnormal logs and outputting them to the alert_table, wherein the alert_table is created in a MySQL database; and an abnormal log alerting unit, used for reading the alert_table data through the DTStack custom alert interface to realize abnormal log alerting.
It should be noted that the above embodiments can be freely combined as necessary. The foregoing is only a preferred embodiment of the present invention, and it should be noted that, for persons skilled in the art, numerous modifications and adaptations can be made without departing from the principle of the present invention, and such modifications and adaptations should be considered as within the scope of the present invention.

Claims (8)

1. A log collection and log monitoring method is characterized by comprising the following steps:
collecting Flink logs and writing them into Kafka through a custom log4j appender;
rewriting the Flink client to configure the custom log4j appender;
and writing the Flink logs in Kafka into Elasticsearch based on Flink SQL and matching abnormal logs based on SQL operators, so as to realize collection and monitoring of Flink logs.
2. The log collection and log monitoring method of claim 1, wherein before collecting Flink logs and writing them into Kafka based on the custom log4j appender, the method comprises a step of customizing the log4j appender; the step of customizing the log4j appender comprises:
inheriting the AppenderSkeleton class and defining the Kafka configuration parameters;
parsing the Kafka configuration parameters and initializing a Kafka producer based on the activateOptions method;
parsing task parameters from environment variables and writing them into Kafka based on the append method;
and building a jar package containing the Kafka classes and a jar package not containing the Kafka classes with the Maven plug-in, and configuring the log4j appender.
3. The log collection and log monitoring method of claim 2, wherein the step of rewriting the Flink client to configure the custom log4j appender comprises:
rewriting the YarnClusterDescriptor in on-yarn mode, uploading the jar package containing the Kafka classes and the jar package not containing the Kafka classes of the log4j appender to HDFS as ship files, and adding the jar package not containing the Kafka classes to the classpath directory;
building an image in on-Kubernetes mode, adding the jar package of the log4j appender that does not contain the Kafka classes to the Flink lib directory, and adding the jar package that contains the Kafka classes to the Flink opt directory;
adding the environment variables TASK_ID and DTSTACK_APPENDER_JAR, wherein the environment variable DTSTACK_APPENDER_JAR is used for specifying the path of the jar package of the log4j appender that contains the Kafka classes;
configuring log4j.properties and adding the newly configured custom log4j appender.
4. The log collection and log monitoring method according to any one of claims 1 to 3, wherein writing the Flink logs in Kafka into Elasticsearch based on Flink SQL and matching abnormal logs based on SQL operators comprises:
reading, in real time, the Flink log information written into Kafka through a Kafka source table defined in Flink SQL;
capturing abnormal logs through the REGEXP operator and a predefined regular expression for matching abnormal logs, and outputting the abnormal logs to an alert_table, wherein the alert_table is created in a MySQL database;
and reading the alert_table data through the DTStack custom alert interface to realize abnormal log alerting.
5. A log collection and log monitoring device, characterized by comprising:
a log collection module, used for collecting Flink logs through the custom log4j appender and writing them into Kafka;
a configuration rewriting module, used for rewriting the Flink client to configure the custom log4j appender;
and a log monitoring module, used for writing the Flink logs in Kafka into Elasticsearch based on Flink SQL and matching abnormal logs based on SQL operators, so as to realize collection and monitoring of Flink logs.
6. The log collection and log monitoring device of claim 5, further comprising a log4j appender configuration module, including:
a parameter definition unit, used for inheriting the AppenderSkeleton class and defining the Kafka configuration parameters;
an initialization unit, used for parsing the Kafka configuration parameters and initializing the Kafka producer based on the activateOptions method;
a task parameter parsing unit, used for parsing task parameters from environment variables and writing them into Kafka based on the append method;
and a configuration unit, used for building a jar package containing the Kafka classes and a jar package not containing the Kafka classes with the Maven plug-in, and configuring the log4j appender.
7. The log collection and log monitoring device of claim 6, wherein the configuration rewriting module comprises:
an on-yarn mode configuration unit, used for rewriting the YarnClusterDescriptor in on-yarn mode, uploading the jar package containing the Kafka classes and the jar package not containing the Kafka classes of the log4j appender to HDFS as ship files, and adding the jar package not containing the Kafka classes to the classpath directory;
an on-Kubernetes mode configuration unit, used for building an image in on-Kubernetes mode, adding the jar package of the log4j appender that does not contain the Kafka classes to the Flink lib directory, and adding the jar package that contains the Kafka classes to the Flink opt directory;
a variable configuration unit, used for adding the environment variables TASK_ID and DTSTACK_APPENDER_JAR, wherein the environment variable DTSTACK_APPENDER_JAR is used for specifying the path of the jar package of the log4j appender that contains the Kafka classes;
and a log4j.properties configuration unit, used for configuring log4j.properties and adding the newly configured custom log4j appender.
8. The log collection and log monitoring device according to any one of claims 5 to 7, wherein the log monitoring module comprises:
a log information reading unit, used for reading, in real time, the Flink log information written into Kafka through a Kafka source table defined in Flink SQL;
an abnormal log capturing unit, used for capturing abnormal logs through the REGEXP operator and a predefined regular expression for matching abnormal logs and outputting them to the alert_table, wherein the alert_table is created in a MySQL database;
and an abnormal log alerting unit, used for reading the alert_table data through the DTStack custom alert interface to realize abnormal log alerting.
CN202110862175.3A 2021-07-29 2021-07-29 Log acquisition and log monitoring method and device Pending CN113590443A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110862175.3A CN113590443A (en) 2021-07-29 2021-07-29 Log acquisition and log monitoring method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110862175.3A CN113590443A (en) 2021-07-29 2021-07-29 Log acquisition and log monitoring method and device

Publications (1)

Publication Number Publication Date
CN113590443A true CN113590443A (en) 2021-11-02

Family

ID=78251554

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110862175.3A Pending CN113590443A (en) 2021-07-29 2021-07-29 Log acquisition and log monitoring method and device

Country Status (1)

Country Link
CN (1) CN113590443A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114489981A (en) * 2022-01-19 2022-05-13 杭州玳数科技有限公司 Method and device for dynamically adjusting log level of flink task
CN114510286A (en) * 2022-01-17 2022-05-17 杭州玳数科技有限公司 Multi-version yarn aggregation log export method and system

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021088909A1 (en) * 2019-11-06 2021-05-14 第四范式(北京)技术有限公司 Method and system for assisting operator development
CN112084009A (en) * 2020-09-17 2020-12-15 湖南长城科技信息有限公司 Method for constructing and monitoring Hadoop cluster and alarming based on containerization technology under PK system
CN112667683A (en) * 2020-12-25 2021-04-16 平安科技(深圳)有限公司 Stream computing system, electronic device and storage medium therefor
CN112732663A (en) * 2020-12-30 2021-04-30 浙江大华技术股份有限公司 Log information processing method and device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
王知无-IMPORT_BIGDATA: "基于Flink打造实时计算平台为企业赋能" (Building a real-time computing platform based on Flink to empower enterprises), page 2, Retrieved from the Internet <URL:https://cloud.tencent.com.cn/developer/article/1763157> *

Similar Documents

Publication Publication Date Title
CN111625452B (en) Flow playback method and system
US20220138004A1 (en) System and method for automated production and deployment of packaged ai solutions
US20190155578A1 (en) Determining the identity of software in software containers
CN113590443A (en) Log acquisition and log monitoring method and device
US10225375B2 (en) Networked device management data collection
CN109344065A (en) Remote debugging method, debugging server and target machine
CN107957940B (en) Test log processing method, system and terminal
Bolt et al. Finding Process Variants in Event Logs: (Short Paper)
US8806475B2 (en) Techniques for conditional deployment of application artifacts
CN106681891A (en) Method and device for adjusting log levels in Java application system
US20180129712A1 (en) Data provenance and data pedigree tracking
CN111966465B (en) Method, system, equipment and medium for modifying host configuration parameters in real time
CN113656357B (en) File management method, device, system and storage medium
US20230214229A1 (en) Multi-tenant java agent instrumentation system
WO2019046752A1 (en) Data array of objects indexing
CN112732663A (en) Log information processing method and device
US8918765B2 (en) Auto-documenting based on real-time analysis of code execution
CN114090378A (en) Custom monitoring and alarming method based on Kapacitor
US11552868B1 (en) Collect and forward
CN111984505A (en) Operation and maintenance data acquisition engine and acquisition method
CN109086380B (en) Method and system for compressing and storing historical data
Mac Coombea et al. Senaps: A platform for integrating time-series with modelling systems
CN113254040B (en) Front-end framework updating method, device, equipment and storage medium
Plale et al. Data provenance for preservation of digital geoscience data
CN107436790A (en) A kind of component upgrade management method and device

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination