CN113282557A - Big data log analysis method and system based on Spring framework - Google Patents

Big data log analysis method and system based on Spring framework

Info

Publication number
CN113282557A
CN113282557A
Authority
CN
China
Prior art keywords
agent1
big data
log
sink1
logback
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110458196.9A
Other languages
Chinese (zh)
Inventor
Xu Min (徐敏)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Unicom Jiangsu Industrial Internet Co Ltd
Original Assignee
China Unicom Jiangsu Industrial Internet Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Unicom Jiangsu Industrial Internet Co Ltd filed Critical China Unicom Jiangsu Industrial Internet Co Ltd
Priority to CN202110458196.9A priority Critical patent/CN113282557A/en
Publication of CN113282557A publication Critical patent/CN113282557A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/10File systems; File servers
    • G06F16/18File system types
    • G06F16/1805Append-only file systems, e.g. using logs or journals to store data
    • G06F16/1815Journaling file systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/10File systems; File servers
    • G06F16/18File system types
    • G06F16/182Distributed file systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24Querying
    • G06F16/245Query processing
    • G06F16/2458Special types of queries, e.g. statistical queries, fuzzy queries or distributed queries
    • G06F16/2465Query processing support for facilitating data mining operations in structured databases

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Fuzzy Systems (AREA)
  • Mathematical Physics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Debugging And Monitoring (AREA)

Abstract

The invention discloses a big data log analysis method and system based on the Spring framework, belonging to the technical field of information security, and comprising the following steps: logs are collected uniformly; in the different Spring frameworks, a logback dependency is added to the POM file, and a logback.xml file is created under the resources directory of each service. The invention provides a unified management platform for the internal WEB services of different Spring frameworks; the data are analyzed and mined along both offline and real-time dimensions, ensuring data accuracy and consistency; and search by time, service name, log level and keyword is provided, so that developers and operation and maintenance personnel can quickly locate problems in the production environment.

Description

Big data log analysis method and system based on Spring framework
Technical Field
The invention relates to a big data log analysis method and system based on a Spring framework, and belongs to the technical field of information security.
Background
Current enterprise WEB services are complex and varied, integrating the Spring MVC, Spring Boot and Spring Cloud service frameworks, and their logging approaches differ widely. Log archiving and management functions are missing, and unified alarm management and risk early warning cannot be applied to each WEB service; neither developers nor operation and maintenance personnel can accurately locate problems in the services and on the servers, and there is no way to search log content efficiently to locate problems quickly; problems also cannot be predicted in advance and handled in time, which greatly harms the customer experience. It is therefore necessary to collect and manage the log information of every service on every server in a centralized, dedicated way, so as to achieve centralized management and timely early warning, provide a good UI (user interface) for data display, processing and analysis, mine customer behavior data, and extract higher commercial value.
Disclosure of Invention
The invention mainly aims to overcome the defects of the prior art and to provide a big data log analysis method and system based on the Spring framework.
The purpose of the invention can be achieved by adopting the following technical scheme:
a big data log analysis method based on a Spring framework comprises the following steps:
Step 1: logs are collected uniformly. In the different Spring frameworks, a logback dependency is added to the POM file, and a logback.xml file is created under the resources directory of each service;
Step 2: Logstash data is fed into the ES. A trace-logging.conf file is created under the Logstash config directory, real-time logs are stored into the ES, and stream processing is then carried out through Flink;
Step 3: Flume data is stored in HDFS. A properties file is created under the Flume conf directory, offline files are stored in HDFS, and data analysis and mining are subsequently performed through HIVE and SPARK;
Step 4: Kibana accesses the ES. kibana.yml is modified, and both the offline and real-time data are connected to the big data platform.
Preferably, in step 1, the newly added logback dependency is as follows:
[Image: Maven dependency snippet]
Preferably, in step 1, a logback.xml file is created under the resources directory of each service as follows:
[Image: logback.xml configuration]
Preferably, in step 2, a trace-logging.conf file is newly created as follows:
[Image: trace-logging.conf configuration]
Preferably, 9601 is the port on which Logstash receives data, and this port must also be configured in logback; codec => json_lines is a JSON parser that receives JSON data, and the logstash-codec-json_lines plug-in must be installed; output elasticsearch points to the IP addresses and ports of the installed cluster; stdout prints the received messages.
Preferably, in the Flume data collection architecture, data from the Web Server passes sequentially through Source, Channel and Sink into HDFS.
Preferably, in step 3, a new properties file is created under the Flume conf directory as follows:
#agent1 name
agent1.sources=source1
agent1.sinks=sink1
agent1.channels=channel1
#Spooling Directory
#set source1
agent1.sources.source1.type=spooldir
agent1.sources.source1.spoolDir=/usr/app/flumelog/dir/logdfs
agent1.sources.source1.channels=channel1
agent1.sources.source1.fileHeader=false
agent1.sources.source1.interceptors=i1
agent1.sources.source1.interceptors.i1.type=timestamp
#set sink1
agent1.sinks.sink1.type=hdfs
agent1.sinks.sink1.hdfs.path=/user/yuhui/flume
agent1.sinks.sink1.hdfs.fileType=DataStream
agent1.sinks.sink1.hdfs.writeFormat=TEXT
agent1.sinks.sink1.hdfs.rollInterval=1
agent1.sinks.sink1.channel=channel1
agent1.sinks.sink1.hdfs.filePrefix=%Y-%m-%d
#set channel1
agent1.channels.channel1.type=file
agent1.channels.channel1.checkpointDir=/usr/app/flumelog/dir/logdfstmp/point
agent1.channels.channel1.dataDirs=/usr/app/flumelog/dir/logdfstmp
Preferably, in step 4, kibana.yml is modified as follows:
server.port: 5601  ## service port
server.host: "0.0.0.0"
elasticsearch.hosts: ["http://11.1.5.69:9200","http://11.1.5.70:9200","http://11.1.5.71:9200"]
A system of the big data log analysis method based on a Spring framework comprises log monitoring, alarm management and behavior analysis.
The log monitoring comprises a search function and report monitoring.
Search function: screening by the time, service name and log level dimensions;
Report monitoring: project operation and maintenance personnel are notified in time through monitoring of the error log, and monitoring is carried out along the service, interface call count and interface response time dimensions;
Alarm management: when a WEB service generates an error log, or calls to a certain service rise abnormally within a certain period of time, the alarm management system pushes notifications to maintenance personnel by mail, short message, DingTalk or WeChat;
Behavior analysis: metrics such as PV/UV, DAU/MAU, the number of new users, access duration and GMV can be counted from the interface call frequency;
the click behavior of the user is taken as a behavior feature, and the age, sex, address and hobby attribute features of the user are extracted to build a user portrait, with machine learning applied for precise marketing and intelligent recommendation.
Preferably, the data of the report statistics includes: PV/UV, GMV, DAU/MAU, number of clicks/access duration, and number of newly added users.
The invention has the beneficial technical effects that: the big data log analysis method and system based on the Spring framework provide a unified management platform for the internal WEB services of different Spring frameworks; the data are analyzed and mined along both offline and real-time dimensions, ensuring data accuracy and consistency; search by time, service name, log level and keyword is provided, so that developers and operation and maintenance personnel can quickly locate problems in the production environment; risk early warning and alarm monitoring are provided, with notifications pushed to maintenance personnel flexibly by mail, short message, DingTalk and WeChat; and the user behavior data is further analyzed and mined to support operation decision analysis for operators and data analysts.
Drawings
Fig. 1 is a Spring overall framework diagram of a method and a system for big data log analysis based on a Spring framework according to a preferred embodiment of the present invention.
FIG. 2 is a diagram of a Flume data collection architecture in accordance with a preferred embodiment of the Spring framework based big data log analysis method and system of the present invention;
FIG. 3 is a diagram of a log monitoring framework in accordance with a preferred embodiment of the method and system for Spring framework based big data log analysis in accordance with the present invention;
FIG. 4 is a diagram of an alarm management framework in accordance with a preferred embodiment of the method and system for Spring framework based big data log analysis in accordance with the present invention;
FIG. 5 is a diagram of a behavior analysis framework for a method and system for Spring framework based big data log analysis in accordance with the present invention;
FIG. 6 is a system block diagram of a method and system for Spring-based big data log analysis in accordance with the present invention.
Detailed Description
In order to make the technical solutions of the present invention more clear and definite for those skilled in the art, the present invention is further described in detail below with reference to the examples and the accompanying drawings, but the embodiments of the present invention are not limited thereto.
As shown in fig. 1 to fig. 5, this embodiment provides a big data log analysis method based on a Spring framework, which comprises the following steps:
Step 1: logs are collected uniformly. In the different Spring frameworks, a logback dependency is added to the POM file, and a logback.xml file is created under the resources directory of each service;
Step 2: Logstash data is fed into the ES. A trace-logging.conf file is created under the Logstash config directory, real-time logs are stored into the ES, and stream processing is then carried out through Flink;
Step 3: Flume data is stored in HDFS. A properties file is created under the Flume conf directory, offline files are stored in HDFS, and data analysis and mining are subsequently performed through HIVE and SPARK;
Step 4: Kibana accesses the ES. kibana.yml is modified, and both the offline and real-time data are connected to the big data platform.
In step 1, the newly added logback dependency is as follows:
<dependency>
<groupId>net.logstash.logback</groupId>
<artifactId>logstash-logback-encoder</artifactId>
<version>4.11</version>
</dependency>
In step 1, a logback.xml file is created under the resources directory of each service as follows:
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <!-- This name appears at the beginning of each log line -->
    <contextName>car-trace-logging</contextName>
    <!-- Define a variable, used below, meaning the log saving path -->
    <property name="log.path" value="D:/log/CarTrace"/>
    <!-- Output to the console -->
    <appender name="console" class="ch.qos.logback.core.ConsoleAppender">
        <!-- Level filtering -->
        <filter class="ch.qos.logback.classic.filter.LevelFilter">
            <level>INFO</level>
            <onMatch>ACCEPT</onMatch>
            <onMismatch>DENY</onMismatch>
        </filter>
        <!-- Log output format -->
        <encoder>
            <pattern>%d{HH:mm:ss.SSS} %contextName [%thread] %-5level %logger{36} - %msg%n</pattern>
        </encoder>
    </appender>
    <!-- Output to a file -->
    <appender name="file" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <!-- Log name, using the path configured above -->
        <file>${log.path}/car-trace.log</file>
        <!-- Roll over daily by yyyy-MM-dd -->
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <fileNamePattern>${log.path}/car-trace.%d{yyyy-MM-dd}.log.zip</fileNamePattern>
        </rollingPolicy>
        <encoder>
            <pattern>%d{HH:mm:ss.SSS} %contextName [%thread] %-5level %logger{36} - %msg%n</pattern>
        </encoder>
    </appender>
    <!-- Output to Logstash: destination specifies the Logstash ip:listening port; the TCP appender can itself implement transmission similar to kafka -->
    <appender name="LOGSTASH" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
        <destination>192.168.253.6:9601</destination>
        <encoder charset="UTF-8" class="net.logstash.logback.encoder.LogstashEncoder"/>
    </appender>
    <!-- Set the root log level -->
    <root level="info">
        <appender-ref ref="console"/>
        <appender-ref ref="file"/>
        <appender-ref ref="LOGSTASH"/>
    </root>
    <!-- Set the level of a particular package -->
    <logger name="cn.theUnit4.Mapper" level="debug"/>
</configuration>
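As an illustrative sketch (hypothetical Python, not part of the patent), the following shows what the console and file output pattern configured above, of the form `%d{HH:mm:ss.SSS} %contextName [%thread] %-5level %logger{36} - %msg%n`, produces, by parsing one such line back into fields; the sample line and field names are made up for illustration:

```python
import re

# Parse one log line formatted with the logback pattern
# "%d{HH:mm:ss.SSS} %contextName [%thread] %-5level %logger{36} - %msg%n".
LINE_RE = re.compile(
    r"^(?P<time>\d{2}:\d{2}:\d{2}\.\d{3}) "   # %d{HH:mm:ss.SSS}
    r"(?P<context>\S+) "                       # %contextName
    r"\[(?P<thread>[^\]]+)\] "                 # [%thread]
    r"(?P<level>\w+)\s+"                       # %-5level (left-padded)
    r"(?P<logger>\S+) - "                      # %logger{36} -
    r"(?P<msg>.*)$"                            # %msg
)

def parse_log_line(line: str) -> dict:
    """Return the fields of a single pattern-formatted log line."""
    m = LINE_RE.match(line)
    if m is None:
        raise ValueError("line does not match the logback pattern")
    return m.groupdict()

# Hypothetical sample line in the configured format.
sample = "10:15:30.123 car-trace-logging [main] INFO  com.example.CarService - started"
fields = parse_log_line(sample)
```

Such a parser is not needed in the pipeline itself (the LOGSTASH appender ships structured JSON), but it makes the console/file layout concrete.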
In step 2, a trace-logging.conf file is newly created as follows:
input {
    tcp {
        port => 9601
        codec => json_lines
    }
}
output {
    elasticsearch {
        action => "index"
        hosts => ["11.1.5.69:8088", "11.1.5.70:8088", "11.1.5.71:8088"]  # ES cluster ip addresses and ports
        index => "%{[appName]}-%{+YYYY.MM.dd}"  # index by application name
        document_type => "applog"
    }
    stdout { codec => rubydebug }
}
9601 is the port on which Logstash receives data, and this port must also be configured in logback; codec => json_lines is a JSON parser, and the logstash-codec-json_lines plug-in must be installed to receive JSON data; output elasticsearch points to the IP addresses and ports of the installed cluster; stdout prints the received messages.
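The JSON handoff described above can be sketched as follows (a hypothetical Python illustration, not part of the patent): the TCP appender ships one JSON object per line, which is the json_lines framing, and Logstash derives the ES index name from the `%{[appName]}-%{+YYYY.MM.dd}` template; the event fields shown are assumptions for illustration:

```python
import json
from datetime import date

def frame_json_lines(event: dict) -> bytes:
    """Encode one event the way the json_lines codec expects:
    a single JSON object terminated by a newline."""
    return (json.dumps(event) + "\n").encode("utf-8")

def index_name(event: dict, day: date) -> str:
    """Mimic the Logstash index template '%{[appName]}-%{+YYYY.MM.dd}'."""
    return "{}-{:%Y.%m.%d}".format(event["appName"], day)

# Hypothetical event as the logback LogstashEncoder might emit it.
event = {"appName": "car-trace-logging", "level": "INFO", "message": "started"}
payload = frame_json_lines(event)
idx = index_name(event, date(2021, 4, 27))
```

Indexing by application name and day in this way is what later allows Kibana searches to be filtered by service and time.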
In the Flume data collection architecture, data from the Web Server passes sequentially through Source, Channel and Sink into HDFS.
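The Source, Channel, Sink flow can be illustrated with a small hypothetical Python sketch (not part of the patent): events from the Web Server are buffered in a channel and drained by a sink, mirroring how Flume decouples collection from delivery; all names and sample lines are made up:

```python
from collections import deque

class ToyChannel:
    """Toy stand-in for Flume's file channel: buffers events between source and sink."""
    def __init__(self) -> None:
        self._events = deque()
    def put(self, event: str) -> None:
        self._events.append(event)
    def take(self):
        return self._events.popleft() if self._events else None

def source_collect(channel: ToyChannel, lines) -> None:
    """Toy source: pushes Web Server log lines into the channel."""
    for line in lines:
        channel.put(line)

def sink_drain(channel: ToyChannel, hdfs: list) -> int:
    """Toy sink: drains the channel into an 'HDFS' list; returns count delivered."""
    delivered = 0
    while True:
        event = channel.take()
        if event is None:
            return delivered
        hdfs.append(event)
        delivered += 1

channel = ToyChannel()
hdfs_files: list = []
source_collect(channel, ["GET /order 200", "GET /cart 500"])
delivered = sink_drain(channel, hdfs_files)
```

The buffering step is the point: the sink can fall behind or restart without the source losing events, which is why Flume suits offline delivery to HDFS.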
In step 3, a new properties file is created under the Flume conf directory as follows:
#agent1 name
agent1.sources=source1
agent1.sinks=sink1
agent1.channels=channel1
#Spooling Directory
#set source1
agent1.sources.source1.type=spooldir
agent1.sources.source1.spoolDir=/usr/app/flumelog/dir/logdfs
agent1.sources.source1.channels=channel1
agent1.sources.source1.fileHeader=false
agent1.sources.source1.interceptors=i1
agent1.sources.source1.interceptors.i1.type=timestamp
#set sink1
agent1.sinks.sink1.type=hdfs
agent1.sinks.sink1.hdfs.path=/user/yuhui/flume
agent1.sinks.sink1.hdfs.fileType=DataStream
agent1.sinks.sink1.hdfs.writeFormat=TEXT
agent1.sinks.sink1.hdfs.rollInterval=1
agent1.sinks.sink1.channel=channel1
agent1.sinks.sink1.hdfs.filePrefix=%Y-%m-%d
#set channel1
agent1.channels.channel1.type=file
agent1.channels.channel1.checkpointDir=/usr/app/flumelog/dir/logdfstmp/point
agent1.channels.channel1.dataDirs=/usr/app/flumelog/dir/logdfstmp
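The HDFS sink above writes under /user/yuhui/flume with a date prefix (filePrefix=%Y-%m-%d), which is what later lets HIVE and SPARK pick up one day's offline files at a time. A hypothetical Python sketch of the resulting naming (the exact file suffix Flume appends is not shown in the patent and is omitted here):

```python
from datetime import date

HDFS_PATH = "/user/yuhui/flume"

def hdfs_file_prefix(day: date) -> str:
    """Mimic the sink's filePrefix=%Y-%m-%d date formatting."""
    return "{:%Y-%m-%d}".format(day)

def hdfs_target(day: date) -> str:
    """HDFS directory plus the date prefix the sink writes under."""
    return "{}/{}".format(HDFS_PATH, hdfs_file_prefix(day))

target = hdfs_target(date(2021, 4, 27))
```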
In step 4, kibana.yml is modified as follows:
server.port: 5601  ## service port
server.host: "0.0.0.0"
elasticsearch.hosts: ["http://11.1.5.69:9200","http://11.1.5.70:9200","http://11.1.5.71:9200"]
A system of the big data log analysis method based on a Spring framework comprises log monitoring, alarm management and behavior analysis.
The log monitoring comprises a search function and report monitoring.
Search function: with screening by the time, service name and log level dimensions, keyword search lets operation and maintenance staff and developers quickly analyze and locate problems in the production environment;
Report monitoring: project operation and maintenance personnel are notified in time through monitoring of the error log, and monitoring along the service, interface call count and interface response time dimensions improves the user experience and provides a reference for subsequent capacity expansion and performance optimization;
Alarm management: when a WEB service generates an error log, or calls to a certain service rise abnormally within a certain period of time, the alarm management system pushes notifications to maintenance personnel flexibly by mail, short message, DingTalk and WeChat, so that operation and maintenance personnel can not only find problems in time but also predict them in advance and expand capacity in time;
Behavior analysis: metrics such as PV/UV, DAU/MAU, the number of new users, access duration and GMV can be counted from the interface call frequency, supporting operation decisions for operators and data analysts;
the click behavior of the user is taken as a behavior feature, and the age, sex, address and hobby attribute features of the user are extracted to build a user portrait, with machine learning applied for precise marketing and intelligent recommendation. The report statistics data includes: PV/UV, GMV, DAU/MAU, number of clicks/access duration, and number of newly added users.
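The behavior metrics named above can be counted from interface-call records roughly as follows; this is a hypothetical Python illustration with made-up field names and sample data, since the patent does not fix a record schema:

```python
from collections import defaultdict

# Hypothetical call records: (user_id, day, interface).
calls = [
    ("u1", "2021-04-27", "/order"),
    ("u1", "2021-04-27", "/order"),
    ("u2", "2021-04-27", "/cart"),
    ("u1", "2021-04-28", "/order"),
]

def pv(records) -> int:
    """Page views: total number of interface calls."""
    return len(records)

def uv(records) -> int:
    """Unique visitors: distinct users across all records."""
    return len({user for user, _, _ in records})

def dau(records) -> dict:
    """Daily active users: distinct users per day."""
    per_day = defaultdict(set)
    for user, day, _ in records:
        per_day[day].add(user)
    return {day: len(users) for day, users in per_day.items()}
```

In the described pipeline these aggregations would run over the offline data in HDFS via HIVE or SPARK rather than in memory as here.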
In summary, the big data log analysis method and system based on the Spring framework of this embodiment provide a unified management platform for the internal WEB services of different Spring frameworks; the data are analyzed and mined along both offline and real-time dimensions, ensuring data accuracy and consistency; search by time, service name, log level and keyword is provided, so that developers and operation and maintenance personnel can quickly locate problems in the production environment; risk early warning and alarm monitoring are provided, with notifications pushed to maintenance personnel flexibly by mail, short message, DingTalk and WeChat; and the user behavior data is further analyzed and mined to support operation decision analysis for operators and data analysts.
The above description presents only further embodiments of the present invention, and the scope of the present invention is not limited thereto; any person skilled in the art may substitute or change the technical solution and concept within the scope of the present invention.

Claims (10)

1. A big data log analysis method based on a Spring framework is characterized by comprising the following steps:
Step 1: logs are collected uniformly. In the different Spring frameworks, a logback dependency is added to the POM file, and a logback.xml file is created under the resources directory of each service;
Step 2: Logstash data is fed into the ES. A trace-logging.conf file is created under the Logstash config directory, real-time logs are stored into the ES, and stream processing is then carried out through Flink;
Step 3: Flume data is stored in HDFS. A properties file is created under the Flume conf directory, offline files are stored in HDFS, and data analysis and mining are subsequently performed through HIVE and SPARK;
Step 4: Kibana accesses the ES. kibana.yml is modified, and both the offline and real-time data are connected to the big data platform.
2. The method for big data log analysis based on a Spring framework according to claim 1, wherein in step 1, the newly added logback dependency is as follows:
<dependency>
<groupId>net.logstash.logback</groupId>
<artifactId>logstash-logback-encoder</artifactId>
<version>4.11</version>
</dependency>
3. The method for big data log analysis based on a Spring framework according to claim 1, wherein in step 1, a logback.xml file is created under the resources directory of each service as follows:
[Image: logback.xml configuration]
4. The method for big data log analysis based on a Spring framework according to claim 1, wherein in step 2, a trace-logging.conf file is created as follows:
[Image: trace-logging.conf configuration]
5. The method for big data log analysis based on a Spring framework according to claim 1, wherein 9601 is the port on which Logstash receives data, and the port must be configured in logback; codec => json_lines is a JSON parser that receives JSON data, and the logstash-codec-json_lines plug-in must be installed; output elasticsearch points to the IP addresses and ports of the installed cluster; and stdout prints the received messages.
6. The Spring framework-based big data log analysis method according to claim 1, wherein in the Flume data collection architecture, data from the Web Server passes sequentially through Source, Channel and Sink into HDFS.
7. The method for big data log analysis based on a Spring framework according to claim 1, wherein in step 3, a new properties file is created under the Flume conf directory as follows:
#agent1 name
agent1.sources=source1
agent1.sinks=sink1
agent1.channels=channel1
#Spooling Directory
#set source1
agent1.sources.source1.type=spooldir
agent1.sources.source1.spoolDir=/usr/app/flumelog/dir/logdfs
agent1.sources.source1.channels=channel1
agent1.sources.source1.fileHeader=false
agent1.sources.source1.interceptors=i1
agent1.sources.source1.interceptors.i1.type=timestamp
#set sink1
agent1.sinks.sink1.type=hdfs
agent1.sinks.sink1.hdfs.path=/user/yuhui/flume
agent1.sinks.sink1.hdfs.fileType=DataStream
agent1.sinks.sink1.hdfs.writeFormat=TEXT
agent1.sinks.sink1.hdfs.rollInterval=1
agent1.sinks.sink1.channel=channel1
agent1.sinks.sink1.hdfs.filePrefix=%Y-%m-%d
#set channel1
agent1.channels.channel1.type=file
agent1.channels.channel1.checkpointDir=/usr/app/flumelog/dir/logdfstmp/point
agent1.channels.channel1.dataDirs=/usr/app/flumelog/dir/logdfstmp
8. The method for big data log analysis based on a Spring framework according to claim 1, wherein in step 4, kibana.yml is modified as follows:
server.port: 5601  ## service port
server.host: "0.0.0.0"
elasticsearch.hosts: ["http://11.1.5.69:9200","http://11.1.5.70:9200","http://11.1.5.71:9200"]
9. A system of the big data log analysis method based on a Spring framework, characterized in that the system comprises log monitoring, alarm management and behavior analysis;
the log monitoring comprises a search function and report monitoring;
search function: screening by the time, service name and log level dimensions;
report monitoring: project operation and maintenance personnel are notified in time through monitoring of the error log, and monitoring is carried out along the service, interface call count and interface response time dimensions;
alarm management: when a WEB service generates an error log, or calls to a certain service rise abnormally within a certain period of time, the alarm management system pushes notifications to maintenance personnel by mail, short message, DingTalk or WeChat;
behavior analysis: metrics such as PV/UV, DAU/MAU, the number of new users, access duration and GMV can be counted from the interface call frequency;
the click behavior of the user is taken as a behavior feature, and the age, sex, address and hobby attribute features of the user are extracted to build a user portrait, with machine learning applied for precise marketing and intelligent recommendation.
10. The system of the big data log analysis method based on a Spring framework according to claim 9, wherein the report statistics data includes: PV/UV, GMV, DAU/MAU, number of clicks/access duration, and number of newly added users.
CN202110458196.9A 2021-04-27 2021-04-27 Big data log analysis method and system based on Spring framework Pending CN113282557A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110458196.9A CN113282557A (en) 2021-04-27 2021-04-27 Big data log analysis method and system based on Spring framework

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110458196.9A CN113282557A (en) 2021-04-27 2021-04-27 Big data log analysis method and system based on Spring framework

Publications (1)

Publication Number Publication Date
CN113282557A true CN113282557A (en) 2021-08-20

Family

ID=77277440

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110458196.9A Pending CN113282557A (en) 2021-04-27 2021-04-27 Big data log analysis method and system based on Spring framework

Country Status (1)

Country Link
CN (1) CN113282557A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116611663A (en) * 2023-06-07 2023-08-18 广州三七极梦网络技术有限公司 Art outsourcing management system

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20030015061A (en) * 2001-08-14 2003-02-20 ECMiner Co., Ltd. Method of CRM on comparison with web-log and user ID
CN109614553A (en) * 2018-12-21 2019-04-12 北京博明信德科技有限公司 PaaS platform for log collection
CN110457178A (en) * 2019-07-29 2019-11-15 江苏艾佳家居用品有限公司 A kind of full link monitoring alarm method based on log collection analysis
US10810110B1 (en) * 2018-01-25 2020-10-20 Intuit Inc. Methods, systems, and articles of manufacture for testing web services using a behavior-driven development domain specific language framework

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20030015061A (en) * 2001-08-14 2003-02-20 ECMiner Co., Ltd. Method of CRM on comparison with web-log and user ID
US10810110B1 (en) * 2018-01-25 2020-10-20 Intuit Inc. Methods, systems, and articles of manufacture for testing web services using a behavior-driven development domain specific language framework
CN109614553A (en) * 2018-12-21 2019-04-12 北京博明信德科技有限公司 PaaS platform for log collection
CN110457178A (en) * 2019-07-29 2019-11-15 江苏艾佳家居用品有限公司 A kind of full link monitoring alarm method based on log collection analysis

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Zhao Yang; Wang Chunxi: "Design and Research of a Distributed Real-time Log Analysis System Based on the Storm Framework", Information & Computer (Theoretical Edition), no. 08 *
Chen Tao; Ye Ronghua: "Research on a Data Persistence Framework Based on Spring Boot and MongoDB", Computer & Telecommunication, no. 1 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116611663A (en) * 2023-06-07 2023-08-18 广州三七极梦网络技术有限公司 Art outsourcing management system

Similar Documents

Publication Publication Date Title
Liu et al. Monitoring and analyzing big traffic data of a large-scale cellular network with Hadoop
CN111092852B (en) Network security monitoring method, device, equipment and storage medium based on big data
US10108411B2 (en) Systems and methods of constructing a network topology
US20180365085A1 (en) Method and apparatus for monitoring client applications
Chen et al. CauseInfer: Automated end-to-end performance diagnosis with hierarchical causality graph in cloud environment
CN108200111B (en) Resource configuration information updating method and device and resource interface equipment
US10567409B2 (en) Automatic and scalable log pattern learning in security log analysis
US20160306613A1 (en) Code routine performance prediction using test results from code integration tool
Sang et al. Precise, scalable, and online request tracing for multitier services of black boxes
US11042525B2 (en) Extracting and labeling custom information from log messages
US10810216B2 (en) Data relevancy analysis for big data analytics
CN114363042B (en) Log analysis method, device, equipment and readable storage medium
CN105743730A (en) Method and system used for providing real-time monitoring for webpage service of mobile terminal
CN112905548B (en) Security audit system and method
CN113760652B (en) Method, system, device and storage medium for full link monitoring based on application
CN113448812A (en) Monitoring alarm method and device under micro-service scene
CN113760677A (en) Abnormal link analysis method, device, equipment and storage medium
CN113420032A (en) Classification storage method and device for logs
Astekin et al. Incremental analysis of large-scale system logs for anomaly detection
CN107204868B (en) Task operation monitoring information acquisition method and device
CN113282557A (en) Big data log analysis method and system based on Spring framework
JP6002849B2 (en) Monitoring device, monitoring method, and recording medium
US10706108B2 (en) Field name recommendation
Kubacki et al. Multidimensional log analysis
CN108959041B (en) Method for transmitting information, server and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination