CN114244832A - Method and system for self-defining Prometheus to collect log information indexes - Google Patents

Method and system for self-defining Prometheus to collect log information indexes

Info

Publication number
CN114244832A
Authority
CN
China
Prior art keywords
log
redis
kafka
prometheus
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111543065.7A
Other languages
Chinese (zh)
Inventor
李�浩
古旭宏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Gf Fund Management Co ltd
Original Assignee
Gf Fund Management Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Gf Fund Management Co ltd filed Critical Gf Fund Management Co ltd
Priority to CN202111543065.7A priority Critical patent/CN114244832A/en
Publication of CN114244832A publication Critical patent/CN114244832A/en
Pending legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/01 Protocols
    • H04L67/06 Protocols specially adapted for file transfer, e.g. file transfer protocol [FTP]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/22 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks, comprising specially adapted graphical user interfaces [GUI]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/01 Protocols
    • H04L67/10 Protocols in which an application is distributed across nodes in the network
    • H04L67/1097 Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/2866 Architectures; Arrangements
    • H04L67/30 Profiles

Abstract

The invention discloses a method and a system for collecting custom log-information metrics with Prometheus. The method comprises the following steps: configuring the configuration file of a log collector, setting the name and location of the log files to be collected and the output component; building a Kafka cluster that receives the log messages pushed by the log collector; parsing the log content in the log messages and writing the parsing results into Redis; processing the parsing results in Redis through a custom Prometheus exporter, which outputs the custom log-collection metrics to Prometheus; and having Prometheus scrape the metrics defined by the exporter on a schedule and display them graphically through Grafana. The invention removes the limitation that a stock Prometheus exporter can only collect generic metrics, realizes the collection of log information through user-defined metrics, displays the graphical results of those metrics, and facilitates management by operation and maintenance personnel.

Description

Method and system for self-defining Prometheus to collect log information indexes
Technical Field
The invention relates to the technical field of log collection and analysis, and in particular to a method and a system, implemented on Kafka and Redis, for collecting custom log-information metrics with Prometheus.
Background
Log analysis is used to monitor user network behaviour, track the use of network resources, and identify abnormal traffic and performance bottlenecks, so that system resources can be better planned and deployed, fault alarms can be responded to, and a stable, safe and efficient operating environment can be maintained.
Traditional log analysis parses the log text line by line, extracts the required fields and performs statistical analysis; this is inefficient and cannot display the latest log statistics in real time.
For this reason the open-source monitoring and alerting system Prometheus has been adopted by many companies since 2012. However, the exporters shipped with Prometheus do not allow user-defined metrics; they can only collect generic metrics, such as machine CPU usage and process memory usage, which limits their use.
In view of this, it is urgent to improve the existing Prometheus monitoring and alerting setup so that log information can be collected through user-defined metrics and the graphical results of those metrics displayed, facilitating management by operation and maintenance personnel.
Disclosure of Invention
In view of the above defects, the technical problem to be solved by the present invention is to provide a method and a system, implemented on Kafka and Redis, for collecting custom log-information metrics with Prometheus, so as to solve the problems that the existing Prometheus monitoring and alerting system cannot collect user-defined metrics and cannot display the latest log statistics in real time.
To this end, the method provided by the invention for collecting custom log-information metrics with Prometheus comprises the following steps:
configuring the configuration file of a log collector, setting the name and location of the log files to be collected and the output component;
building a Kafka cluster that receives the log messages pushed by the log collector;
parsing the received log messages and writing the parsing results into Redis;
processing the parsing results in Redis through a custom Prometheus exporter;
scraping the metrics defined by the Prometheus exporter with Prometheus on a schedule, and displaying them graphically through Grafana.
In the above method, preferably, the log content in the log message is parsed and the parsing result written into Redis as follows:
the Kafka consumer parses the log content in the log message and determines a Redis key and a Redis value from the log keyword, the Redis key being the keyword and the Redis value being the value associated with that keyword;
the Redis key and Redis value are written into Redis for storage using the Redis set data type, the samples of each collected metric being placed in the same Redis set, written to the Redis server under a unique Redis key.
In the method, processing the log parsing results in Redis through the custom Prometheus exporter comprises the following steps:
acquiring all elements of a Redis set;
traversing the Redis set;
determining the index type used by Prometheus according to the Redis key;
extracting Redis keys and corresponding element values;
the element is deleted from the Redis set.
In the above method, preferably, a Filebeat log collector is used to collect the logs, and a rule for multi-line log separation is set in the filebeat.yml configuration file.
In the above method, preferably, the Kafka consumers connect to the Kafka cluster through the kafka-python library and to Redis through the redis library, and load balancing across the Kafka consumers is achieved by setting the partition count of the log topic equal to the number of Kafka consumers in the same group.
In the method, preferably, Python Flask and the prometheus-client library are used to build the custom Prometheus exporter, which outputs the custom log-collection metrics to Prometheus.
In the method, preferably, the log-collection metrics are pulled at regular intervals from the interface provided by the exporter by a scrape job configured in Prometheus.
The invention also provides a system for collecting custom log-information metrics with Prometheus, comprising:
the log collector, which collects the application logs, its configuration file setting the name and location of the log files to be collected and the output component;
the Kafka cluster, which receives the log messages pushed by the log collector;
Redis, into which the parsing results of the log messages are written;
the Prometheus exporter, which processes the parsing results in Redis and defines the custom log-collection metrics;
and Prometheus, which periodically scrapes the metrics defined by the Prometheus exporter and displays them graphically through Grafana.
In the above system, preferably, a Filebeat log collector is used to collect the logs, and a rule for multi-line log separation is set in the filebeat.yml configuration file.
In the system, preferably, the Kafka consumers connect to the Kafka cluster through the kafka-python library and to Redis through the redis library, and load balancing across the Kafka consumers is achieved by setting the partition count of the log topic equal to the number of Kafka consumers in the same group;
and the custom Prometheus exporter is built with Python Flask and the prometheus-client library and outputs the custom log-collection metrics to Prometheus.
According to the technical scheme, the method and system for collecting custom log-information metrics with Prometheus solve the problems that a stock Prometheus cannot collect user-defined metrics and cannot display the latest log statistics in real time. Compared with the prior art, the invention has the following beneficial effects:
the Kafka cluster receives the log messages pushed by the log collector; the log content is parsed and the results are written into Redis; the custom Prometheus exporter outputs the user-defined metrics to Prometheus, which scrapes them on a schedule, and Grafana displays them graphically. Log information is thus collected through user-defined metrics, the graphical results of those metrics are displayed, and management by operation and maintenance personnel is made easier.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments of the present invention or the prior art will be briefly described and explained. It is obvious that the drawings in the following description are only some embodiments of the invention, and that for a person skilled in the art, other drawings can be derived from them without inventive effort.
FIG. 1 is a flowchart of the method for collecting custom log-information metrics with Prometheus according to the present invention;
FIG. 2 is a schematic diagram of the Kafka cluster in the present invention;
FIG. 3 shows the metrics collected in Prometheus as displayed by Grafana in the present invention.
Detailed Description
The technical solutions of the embodiments of the present invention will be described clearly and completely with reference to the accompanying drawings, and it is to be understood that the embodiments described below are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present invention without any inventive step, are within the scope of the present invention.
The realization principle of the invention is as follows: the Kafka cluster receives the log messages pushed by the log collector; the log content in each message is parsed and the result written into Redis; the custom Prometheus exporter processes the parsing results in Redis, Prometheus scrapes the metrics defined by the exporter on a schedule, and Grafana displays them graphically. Log information is thus collected through user-defined metrics, the graphical results of those metrics are displayed, and management by operation and maintenance personnel is made easier.
In order to make the technical solution and implementation of the present invention more clearly explained and illustrated, several preferred embodiments for implementing the technical solution of the present invention are described below.
It should be noted that the terms of orientation such as "inside, outside", "front, back" and "left and right" are used herein as reference objects, and it is obvious that the use of the corresponding terms of orientation does not limit the scope of protection of the present invention.
Referring to fig. 1, fig. 1 is a flowchart of the method for collecting custom log-information metrics with Prometheus provided by the present invention.
As shown in fig. 1, the method comprises the following steps:
Step 110, configuring the configuration file of the log collector, setting the log files and locations to be monitored and the output component.
In this embodiment, a Filebeat log collector is used. Filebeat is a lightweight shipper for forwarding and centralizing log data. By editing the filebeat.yml configuration file, the name and location of the log files to be collected are set; Filebeat collects the log events and pushes the collected logs as log messages to a specified output component, such as Elasticsearch, Logstash, Kafka, Redis, File, Console or Elastic Cloud.
The log-collection section of the filebeat.yml configuration file, for example:
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /home/boper/ksb*/logs/ksb.txt
  multiline.pattern: ^(DEBUG|INFO|WARN|ERROR)
  multiline.negate: true
  multiline.match: after
Because a log event may span multiple lines (a stack trace, for example), the filebeat.yml configuration file sets the multi-line separation rule through multiline.pattern, multiline.negate: true and multiline.match: after: any line that does not begin with a log level is appended to the preceding event.
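The grouping these three settings produce can be sketched in plain Python (an illustration of the rule only, not Filebeat's implementation):

```python
import re

# A new event starts at a line matching the level pattern; any other
# line is a continuation appended to the previous event, mirroring
# multiline.negate: true / multiline.match: after.
LEVEL = re.compile(r"^(DEBUG|INFO|WARN|ERROR)")

def group_events(lines):
    events = []
    for line in lines:
        if LEVEL.match(line) or not events:
            events.append(line)           # start of a new log event
        else:
            events[-1] += "\n" + line     # continuation, e.g. a stack trace
    return events

raw = [
    "ERROR something failed",
    "Traceback (most recent call last):",
    '  File "app.py", line 1',
    "INFO recovered",
]
print(group_events(raw))  # two events: the ERROR block and the INFO line
```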
The output section of filebeat.yml, which sends the logs to Kafka, for example:
output.kafka:
  hosts: ["10.88.102.13:9095", "10.88.102.13:9096", "10.88.102.13:9097"]
  topic: kafka_log
  partition.round_robin:
    reachable_only: true
The output configuration contains the Kafka cluster addresses "10.88.102.13:9095", "10.88.102.13:9096" and "10.88.102.13:9097" and the Kafka topic name; Filebeat acts as the Kafka producer.
Step 120, building a Kafka cluster and receiving the log messages pushed by the log collector.
Kafka is a distributed, partitioned publish/subscribe messaging system coordinated through ZooKeeper. Messages in Kafka are categorized by topic, each topic acting as a message queue. A broker is a Kafka instance; each Kafka server runs one or more instances. Brokers accept the messages sent by Kafka producers and store them to disk, while serving Kafka consumers' requests to pull partition messages, returning the messages that have been committed.
As shown in fig. 2, the ZooKeeper cluster manages the broker cluster, and the brokers actually store the topic data.
The log collector Filebeat acts as the Kafka producer, producing the log messages. After the Kafka cluster receives a log message pushed by Filebeat, the Kafka consumers consume it by subscribing to the topic.
Load is balanced across the Kafka consumers by setting the partition count of the log topic equal to the number of Kafka consumers in the same group.
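Why equal partition and consumer counts balance the load can be illustrated with a toy round-robin assignment (Kafka's real group assignors are more elaborate; this sketch only shows the one-partition-per-consumer outcome):

```python
# Kafka gives each partition of a topic to exactly one consumer in a
# group; with equal counts every consumer therefore owns one partition.
def assign(partitions, consumers):
    assignment = {c: [] for c in consumers}
    for i, p in enumerate(partitions):
        assignment[consumers[i % len(consumers)]].append(p)
    return assignment

print(assign([0, 1, 2], ["c0", "c1", "c2"]))
# {'c0': [0], 'c1': [1], 'c2': [2]} -- one partition per consumer
```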
Step 130, parsing the log content in the log messages and writing the parsing results into Redis.
The Kafka consumers (Kafka Consumer) connect to the Kafka cluster through the kafka-python library and to Redis through the redis library.
The Kafka Consumer process flow is as follows:
after receiving the log message, the Kafka Consumer analyzes the log content in the log message, determines a Redis key according to a log key word, takes the value of the key word as a Redis value, and writes the analysis result (the Redis key and the Redis value) of the log content into the Redis for storage.
During storage, a Redis set data type is adopted, each collected log index is placed into the same Redis set, and a unique Redis key is adopted to write the collected log indexes into a Redis server.
For example, consider the log line "instruction push finished, total instruction push latency: 7102 ms".
The Kafka Consumer parses the log content, extracts the keyword (the total instruction push latency) and its value (7102 milliseconds), and writes them to the Redis server in the redis_client.sadd format, i.e. redis_client.sadd("ins_send_total", 7102), where "ins_send_total" is the Redis key for the total-push-latency metric.
To distinguish it from the latencies of other processing stages, the total instruction push latency uses the unique Redis key "ins_send_total".
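A minimal sketch of this parsing step (the English log wording, the regex and the helper name parse_latency are illustrative assumptions, not taken from the patent; in the real consumer the returned pair is written with redis_client.sadd):

```python
import re

# Hypothetical parser for the latency log line above.
LATENCY = re.compile(r"total (?:time|latency).*?:\s*([\d.]+)\s*ms", re.I)

def parse_latency(line, redis_key="ins_send_total"):
    """Return (redis_key, value_in_ms) for a latency line, or None."""
    m = LATENCY.search(line)
    if m is None:
        return None
    return redis_key, float(m.group(1))

parsed = parse_latency("instruction push finished, total latency: 7102 ms")
# In the real pipeline this pair would be stored with
#   redis_client.sadd("ins_send_total", 7102.0)
print(parsed)  # ('ins_send_total', 7102.0)
```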
Step 140, processing the log parsing results in Redis through the custom Prometheus exporter to obtain the custom log-collection metrics.
Step 140 comprises the steps of:
acquiring all elements of a Redis set;
traversing the Redis set;
determining the index type used by Prometheus according to the Redis key;
extracting Redis keys and corresponding element values;
the element is deleted from the Redis set.
In this embodiment, the custom Prometheus exporter is implemented with Python Flask and the prometheus-client library.
Take the total instruction push latency as an example.
First, a metric type is defined in the Prometheus exporter.
Prometheus has four metric types in total: Counter, Gauge, Histogram and Summary. This embodiment uses a Histogram, whose buckets give a statistical distribution, as the example.
risk_registry = CollectorRegistry()
buckets = (100, 150, 200, 250, 300, 500, 1000, 3000, 5000, 10000, 20000, 30000, float("inf"))
ins_send_total = Histogram(name='ins_send_total',
                           documentation='instruction send total latency',
                           registry=risk_registry,
                           buckets=buckets)
The Histogram metric type counts the number of samples falling into the different buckets; its observe() method records one sample value.
In the running example, the Kafka Consumer has already saved latency results to Redis, for example:
127.0.0.1:6380> smembers ins_send_total
1) "7102.55"
2) "8284.13"
i.e. the total-push-latency set holds two elements, 7102.55 and 8284.13.
The exporter provides an HTTP /risk/metrics interface (the path is defined in the exporter) for Prometheus to call periodically. Each call traverses the results saved in Redis, e.g. the ins_send_total set saved above.
For the two elements 7102.55 and 8284.13 in the set, ins_send_total.observe() is called, writing the sample values into the buckets of the ins_send_total Histogram.
After processing is complete, the elements 7102.55 and 8284.13 are deleted from the Redis set so they are not processed again on the next scrape.
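What observe() does with the buckets can be sketched in plain Python (an illustration of the cumulative-bucket semantics, not prometheus-client's actual code):

```python
# Every bucket whose upper bound `le` is >= the sample is incremented,
# which is why the exported bucket counts are cumulative.
BUCKETS = (100, 150, 200, 250, 300, 500, 1000,
           3000, 5000, 10000, 20000, 30000, float("inf"))

def observe(counts, value):
    # increment each bucket with upper bound >= value
    for i, le in enumerate(BUCKETS):
        if value <= le:
            counts[i] += 1

counts = [0] * len(BUCKETS)
for sample in (7102.55, 8284.13):
    observe(counts, sample)

# both samples land in le=10000 and in every larger bucket
print(dict(zip(BUCKETS, counts)))
```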
By accessing the exporter's /risk/metrics interface, the statistics of the Histogram metric can be obtained:
# HELP ins_send_total instruction send total latency
# TYPE ins_send_total histogram
ins_send_total_bucket{le="100.0"} 0.0
ins_send_total_bucket{le="150.0"} 0.0
ins_send_total_bucket{le="200.0"} 0.0
ins_send_total_bucket{le="250.0"} 0.0
ins_send_total_bucket{le="300.0"} 0.0
ins_send_total_bucket{le="500.0"} 0.0
ins_send_total_bucket{le="1000.0"} 0.0
ins_send_total_bucket{le="3000.0"} 44.0
ins_send_total_bucket{le="5000.0"} 356.0
ins_send_total_bucket{le="10000.0"} 1981.0
ins_send_total_bucket{le="20000.0"} 2069.0
ins_send_total_bucket{le="30000.0"} 2073.0
ins_send_total_bucket{le="+Inf"} 2078.0
ins_send_total_count 2078.0
ins_send_total_sum 1.4225596889700003e+07
Step 150, Prometheus scrapes the metrics defined by the Prometheus exporter on a schedule, and Grafana displays them graphically.
For example, Prometheus is configured to pull the metric data periodically from the /risk/metrics interface provided by the exporter, as follows:
scrape_configs:
- job_name: risk-export
  scrape_interval: 10s
  scrape_timeout: 10s
  metrics_path: /risk/metrics
  static_configs:
  - targets:
    - 10.88.102.13:5005
This indicates that Prometheus pulls the metric data from the http://10.88.102.13:5005/risk/metrics interface; scrape_interval configures the call frequency as once every 10 seconds.
Prometheus stores the metric data pulled through this interface in its TSDB time-series database for Grafana to display.
To improve the efficiency with which the Kafka Consumer processes logs, several Kafka Consumer instances can be started, their number matching the partition count of the Kafka topic, achieving load balancing.
Grafana is used to display the metrics collected in Prometheus. After the Prometheus data source is configured in Grafana, a new dashboard and panel are created, and Prometheus's TSDB is queried through PromQL to obtain the metric statistics.
For example, the PromQL:
(rate(ins_send_total_sum[1m])/rate(ins_send_total_count[1m]))/1000
the display results are shown in FIG. 3.
On the basis of the above method, the invention also provides a system for collecting custom log-information metrics with Prometheus, comprising:
the log collector, which collects the application logs, its configuration file setting the name and location of the log files to be collected and the output component;
the Kafka cluster, which receives the log messages pushed by the log collector;
Redis, into which the parsing results of the log messages are written;
the Prometheus exporter, which processes the parsing results in Redis and defines the custom log-collection metrics;
and Prometheus, which periodically scrapes the metrics defined by the Prometheus exporter and displays them graphically through Grafana.
The working principles and processes of the Kafka cluster, Redis and Prometheus have been fully described in the above method and are not repeated here.
As the above embodiment shows, compared with the prior art, the method and system for collecting custom log-information metrics with Prometheus provided by the present invention have the following advantages:
First, the Kafka cluster receives the log messages pushed by the log collector; the log content is parsed and the results written into Redis; the custom Prometheus exporter processes the parsing results, Prometheus scrapes the metrics it defines on a schedule, and Grafana displays them graphically. Log information is thus collected through user-defined metrics, the graphical results of those metrics are displayed, and management by operation and maintenance personnel is made easier.
Second, load balancing across the Kafka consumers is achieved by setting the partition count of the log topic equal to the number of Kafka consumers in the same group.
Third, the multi-line log separation rule is set in the filebeat.yml configuration file through multiline.pattern, multiline.negate: true and multiline.match: after.
Finally, it should also be noted that the terms "comprises," "comprising," or any other variation thereof, as used herein, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The present invention is not limited to the above-mentioned preferred embodiments, and any structural changes made under the teaching of the present invention shall fall within the scope of the present invention, which is similar or similar to the technical solutions of the present invention.

Claims (10)

1. A method for collecting custom log-information metrics with Prometheus, characterized by comprising the following steps:
configuring the configuration file of a log collector, setting the name and location of the log files to be collected and the output component;
building a Kafka cluster that receives the log messages pushed by the log collector;
parsing the log content in the log messages and writing the parsing results into Redis;
processing the parsing results in Redis through a custom Prometheus exporter;
scraping the metrics defined by the Prometheus exporter with Prometheus on a schedule, and displaying them graphically through Grafana.
2. The method of claim 1, wherein the log content in the log message is parsed and the parsing result written into Redis as follows:
the Kafka consumer parses the log content in the log message and determines a Redis key and a Redis value from the log keyword, the Redis key being the keyword and the Redis value being the value associated with that keyword;
the Redis key and Redis value are written into Redis for storage using the Redis set data type, the samples of each collected metric being placed in the same Redis set, written to the Redis server under a unique Redis key.
3. The method according to claim 2, wherein processing the log parsing results in Redis through the custom Prometheus exporter comprises the following steps:
acquiring all elements of a Redis set;
traversing the Redis set;
determining the index type used by Prometheus according to the Redis key;
extracting Redis keys and corresponding element values;
the element is deleted from the Redis set.
4. The method of claim 1, wherein the logs are collected using a Filebeat log collector and the rule for multi-line log separation is set in the filebeat.yml configuration file by configuring multiline.negate: true and multiline.match: after.
5. The method of claim 1, wherein the Kafka consumers connect to the Kafka cluster through the kafka-python library and to Redis through the redis library, and load balancing across the Kafka consumers is achieved by setting the partition count of the log topic equal to the number of Kafka consumers in the same group.
6. The method of claim 1, wherein Python Flask and the prometheus-client library are used to build the custom Prometheus exporter, which outputs the custom log-collection metrics to Prometheus.
7. The method of claim 1, wherein the log-collection metrics are pulled at regular intervals from the interface provided by the exporter by a scrape job configured in Prometheus.
8. A system for collecting custom log-information metrics with Prometheus, characterized by comprising:
the log collector, which collects the application logs, its configuration file setting the name and location of the log files to be collected and the output component;
the Kafka cluster, which receives the log messages pushed by the log collector;
Redis, into which the parsing results of the log messages are written;
the Prometheus exporter, which processes the parsing results in Redis and defines the custom log-collection metrics;
and Prometheus, which periodically scrapes the metrics defined by the Prometheus exporter and displays them graphically through Grafana.
9. The system of claim 8, wherein a Filebeat log collector is used to collect the logs and the rule for multi-line log separation is set in the filebeat.yml configuration file.
10. The system of claim 8, wherein
the Kafka consumers connect to the Kafka cluster through the kafka-python library and to Redis through the redis library, load balancing across the Kafka consumers being achieved by setting the partition count of the log topic equal to the number of Kafka consumers in the same group;
and the custom Prometheus exporter is built with Python Flask and the prometheus-client library and outputs the custom log-collection metrics to Prometheus.
CN202111543065.7A 2021-12-16 2021-12-16 Method and system for self-defining Prometheus to collect log information indexes Pending CN114244832A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111543065.7A CN114244832A (en) 2021-12-16 2021-12-16 Method and system for self-defining Prometheus to collect log information indexes

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111543065.7A CN114244832A (en) 2021-12-16 2021-12-16 Method and system for self-defining Prometheus to collect log information indexes

Publications (1)

Publication Number Publication Date
CN114244832A true CN114244832A (en) 2022-03-25

Family

ID=80757299

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111543065.7A Pending CN114244832A (en) 2021-12-16 2021-12-16 Method and system for self-defining Prometheus to collect log information indexes

Country Status (1)

Country Link
CN (1) CN114244832A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107197015A (en) * 2017-05-23 2017-09-22 阿里巴巴集团控股有限公司 A kind of message treatment method and device based on Message Queuing system
CN109542733A (en) * 2018-12-05 2019-03-29 焦点科技股份有限公司 A kind of highly reliable real-time logs collection and visual m odeling technique method
CN110493342A (en) * 2019-08-21 2019-11-22 北京明朝万达科技股份有限公司 Document transmission method, device, electronic equipment and readable storage medium storing program for executing
US20200034216A1 (en) * 2016-12-09 2020-01-30 Sas Institute Inc. Router management by an event stream processing cluster manager
EP3796167A1 (en) * 2019-09-23 2021-03-24 SAS Institute Inc. Router management by an event stream processing cluster manager
CN113055490A (en) * 2021-03-24 2021-06-29 杭州群核信息技术有限公司 Data storage method and device
CN113138834A (en) * 2021-03-19 2021-07-20 中国电子科技集团公司第二十九研究所 Cloud simulation platform lightweight deployment method based on Docker technology


Similar Documents

Publication Publication Date Title
CN108874640B (en) Cluster performance evaluation method and device
CN105718351B (en) A kind of distributed monitoring management system towards Hadoop clusters
CN106487574A (en) Automatic operating safeguards monitoring system
CN111339175B (en) Data processing method, device, electronic equipment and readable storage medium
CN109977089A (en) Blog management method, device, computer equipment and computer readable storage medium
CN112269718A (en) Service system fault analysis method and device
CN109885453A (en) Big data platform monitoring system based on flow data processing
CN109062769B (en) Method, device and equipment for predicting IT system performance risk trend
US8909768B1 (en) Monitoring of metrics to identify abnormalities in a large scale distributed computing environment
CN114648393A (en) Data mining method, system and equipment applied to bidding
CN112039726A (en) Data monitoring method and system for content delivery network CDN device
CN114356499A (en) Kubernetes cluster alarm root cause analysis method and device
KR20220166760A (en) Apparatus and method for managing trouble using big data of 5G distributed cloud system
CN112636942B (en) Method and device for monitoring service host node
US8850321B2 (en) Cross-domain business service management
CN112068979B (en) Service fault determination method and device
CN109687999A (en) A kind of association analysis method of alarm failure, device and equipment
CN113504996A (en) Load balance detection method, device, equipment and storage medium
CN113342608A (en) Method and device for monitoring streaming computing engine task
CN114244832A (en) Method and system for self-defining Prometheus to collect log information indexes
CN115658441B (en) Method, equipment and medium for monitoring abnormality of household service system based on log
CN111240936A (en) Data integrity checking method and equipment
CN111176950A (en) Method and equipment for monitoring network card of server cluster
CN116232844A (en) System monitoring method based on distributed system
CN115509797A (en) Method, device, equipment and medium for determining fault category

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination