CN111652616A - Transaction data real-time monitoring method and device - Google Patents

Transaction data real-time monitoring method and device

Info

Publication number
CN111652616A
CN111652616A (application CN202010644227.5A)
Authority
CN
China
Prior art keywords
data
transaction
monitoring
success rate
mongodb
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010644227.5A
Other languages
Chinese (zh)
Other versions
CN111652616B (en)
Inventor
李响 (Li Xiang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Bank of China Ltd
Original Assignee
Bank of China Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Bank of China Ltd filed Critical Bank of China Ltd
Priority to CN202010644227.5A priority Critical patent/CN111652616B/en
Publication of CN111652616A publication Critical patent/CN111652616A/en
Application granted granted Critical
Publication of CN111652616B publication Critical patent/CN111652616B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q20/00Payment architectures, schemes or protocols
    • G06Q20/38Payment protocols; Details thereof
    • G06Q20/389Keeping log of transactions for guaranteeing non-repudiation of a transaction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24Querying
    • G06F16/245Query processing
    • G06F16/2458Special types of queries, e.g. statistical queries, fuzzy queries or distributed queries
    • G06F16/2462Approximate or statistical queries
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24Querying
    • G06F16/245Query processing
    • G06F16/2458Special types of queries, e.g. statistical queries, fuzzy queries or distributed queries
    • G06F16/2477Temporal data queries
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/27Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/28Databases characterised by their database models, e.g. relational or object models

Abstract

The invention discloses a method and a device for real-time monitoring of transaction data, wherein the method comprises the following steps: receiving, by Kafka, transaction data and interface specification data sent by a transaction platform; parsing the transaction data with Storm according to the interface specification data to generate a MongoDB data object; determining transaction success rate data from the MongoDB data object; and monitoring the transaction data in real time according to the transaction success rate data. The invention facilitates real-time monitoring of transaction data, improves statistical efficiency and timeliness, and helps raise alarms promptly when transaction anomalies occur.

Description

Transaction data real-time monitoring method and device
Technical Field
The invention relates to the technical field of transaction data processing, in particular to a method and a device for monitoring transaction data in real time.
Background
At present, the transaction data volume of each transaction system is huge and the transaction types are numerous, so the transaction data needs to be analyzed and monitored in real time, with real-time alarms raised when transaction anomalies occur.
In the prior art, database data is monitored with shell scripts. As the transaction data volume grows, statistics take a long time and efficiency is low; the operations performed on the database during statistics may slow down normal online transactions, so real-time performance is poor, which is not conducive to timely alarming in the event of a transaction anomaly.
Disclosure of Invention
An embodiment of the invention provides a real-time transaction data monitoring method for monitoring transaction data in real time, improving statistical efficiency and timeliness, and facilitating timely alarms when transaction anomalies occur. The method comprises the following steps:
receiving transaction data and interface specification data sent by a transaction platform by using Kafka;
analyzing the transaction data by using Storm according to the interface specification data to generate a MongoDB data object;
determining transaction success rate data according to the MongoDB data object;
and monitoring the transaction data in real time according to the transaction success rate data.
An embodiment of the invention provides a real-time transaction data monitoring device for monitoring transaction data in real time, improving statistical efficiency and timeliness, and facilitating timely alarms when transaction anomalies occur. The device comprises:
the data receiving module is used for receiving the transaction data and the interface specification data sent by the transaction platform by using Kafka;
the object generation module is used for analyzing the transaction data by using Storm according to the interface specification data to generate a MongoDB data object;
the success rate determining module is used for determining transaction success rate data according to the MongoDB data object;
and the monitoring module is used for monitoring the transaction data in real time according to the transaction success rate data.
An embodiment of the invention also provides computer equipment comprising a memory, a processor, and a computer program stored on the memory and runnable on the processor, wherein the processor implements the above transaction data real-time monitoring method when executing the computer program.
Compared with the prior-art scheme of monitoring database data with shell scripts, the embodiment of the invention receives, by Kafka, transaction data and interface specification data sent by a transaction platform; parses the transaction data with Storm according to the interface specification data to generate a MongoDB data object; determines transaction success rate data from the MongoDB data object; and monitors the transaction data in real time according to the transaction success rate data. Because the transaction success rate data is determined from the generated MongoDB data object, the transaction data can be monitored in real time without spending large amounts of time on statistics, which improves statistical efficiency and timeliness and helps raise alarms promptly when transaction anomalies occur.
Drawings
To more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without creative effort. In the drawings:
FIG. 1 is a schematic diagram of a transaction data real-time monitoring method according to an embodiment of the present invention;
FIG. 2 is a structural diagram of a real-time transaction data monitoring device according to an embodiment of the invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the embodiments of the present invention are further described in detail below with reference to the accompanying drawings. The exemplary embodiments and descriptions of the present invention are provided to explain the present invention, but not to limit the present invention.
To monitor transaction data in real time, improve statistical efficiency and timeliness, and facilitate timely alarms when transaction anomalies occur, an embodiment of the present invention provides a method for monitoring transaction data in real time. As shown in FIG. 1, the method may include:
step 101, receiving transaction data and interface specification data sent by a transaction platform by using Kafka;
step 102, analyzing the transaction data by using Storm according to the interface specification data to generate a MongoDB data object;
step 103, determining transaction success rate data according to the MongoDB data object;
and step 104, monitoring the transaction data in real time according to the transaction success rate data.
As shown in FIG. 1, in the embodiment of the present invention, transaction data and interface specification data sent by a transaction platform are received by Kafka; the transaction data is parsed with Storm according to the interface specification data to generate a MongoDB data object; transaction success rate data is determined from the MongoDB data object; and the transaction data is monitored in real time according to the transaction success rate data. Because the transaction success rate data is determined from the generated MongoDB data object, the transaction data can be monitored in real time without spending large amounts of time on statistics, which improves statistical efficiency and timeliness and helps raise alarms promptly when transaction anomalies occur.
In specific implementation, Kafka is used to receive the transaction data and the interface specification data sent by the transaction platform.
It should be noted that Kafka was originally developed by LinkedIn. It is a distributed, partitioned, multi-replica, multi-subscriber distributed log system coordinated by ZooKeeper, commonly used for web/nginx logs, access logs, message services and the like; LinkedIn contributed it to the Apache Foundation in 2010, where it became a top-level open-source project. Its main application scenarios are log collection systems and messaging systems. The main design goals of Kafka are: to provide message persistence with O(1) time complexity, guaranteeing constant-time access performance even for TB-scale data and above; high throughput, with a single machine supporting transmission of 100K messages per second even on inexpensive commodity hardware; support for message partitioning and distributed consumption across Kafka servers while preserving message order within each partition; support for both offline and real-time data processing; and support for online horizontal scaling. A messaging system is responsible for passing data from one application to another; applications need only be concerned with the data, not with how it is passed between two or more applications. Distributed messaging is based on reliable message queues, delivering messages asynchronously between client applications and the messaging system. There are two main modes of message passing: the point-to-point delivery mode and the publish-subscribe mode. Most messaging systems, Kafka included, use the publish-subscribe model. In a point-to-point messaging system, messages are persisted into a queue, and one or more consumers consume the data in the queue, but each message can be consumed only once: when a consumer consumes a piece of data in the queue, that piece is removed from the message queue.
This mode guarantees the order of data processing even when multiple consumers consume data concurrently. In a publish-subscribe messaging system, messages are persisted into a topic. Unlike a point-to-point system, a consumer may subscribe to one or more topics and consume all of the data in those topics; the same piece of data can be consumed by multiple consumers, and data is not deleted immediately after being consumed. In a publish-subscribe messaging system, the producer of a message is called a publisher and the consumer is called a subscriber.
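The two delivery modes just described can be contrasted with a minimal in-memory sketch. These classes are plain Python stand-ins for illustration, not the Kafka API: in point-to-point delivery each message is consumed exactly once and then removed, while in publish-subscribe every subscriber reads from its own offset and nothing is deleted on read.

```python
from collections import deque

class PointToPointQueue:
    """Each message is delivered to exactly one consumer, then removed."""
    def __init__(self):
        self._queue = deque()

    def send(self, message):
        self._queue.append(message)

    def receive(self):
        # Consuming removes the message; no other consumer can read it again.
        return self._queue.popleft() if self._queue else None

class PubSubTopic:
    """Every subscriber gets its own copy; reading does not delete the data."""
    def __init__(self):
        self._messages = []

    def publish(self, message):
        self._messages.append(message)

    def read(self, offset):
        # Subscribers track their own offsets, much as Kafka consumers do.
        return self._messages[offset:]
```

With the point-to-point queue, a second `receive()` after the only message is consumed returns nothing; with the topic, two subscribers reading from offset 0 both see the same message.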
In an embodiment, ZooKeeper acts as Kafka's coordinator: Kafka registers with ZooKeeper, which then schedules its work. The Kafka interface specification data is currently defined as 17 fields, each field separated by a '&' delimiter, as shown in detail below: platform identification number & platform system serial number & acquirer number & issuer number & transaction number & card number & amount & currency & merchant number & transaction date & transaction time & service success flag & system success flag & transaction start time & send external system time & receive external system time & transaction end time.
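Assuming the '&'-separated layout above, a record can be split into named fields as in the following sketch. The English field names are our own rendering of the specification, not identifiers from the patent:

```python
# Field names corresponding to the 17-field interface specification;
# the English names are an illustrative rendering of the spec.
FIELDS = [
    "platform_id", "platform_serial_no", "acquirer_no", "issuer_no",
    "transaction_no", "card_no", "amount", "currency", "merchant_no",
    "transaction_date", "transaction_time", "service_success_flag",
    "system_success_flag", "transaction_start_time",
    "send_external_system_time", "receive_external_system_time",
    "transaction_end_time",
]

def parse_record(record: str) -> dict:
    """Split a '&'-separated record into a field dict per the interface spec."""
    values = record.split("&")
    if len(values) != len(FIELDS):
        raise ValueError(f"expected {len(FIELDS)} fields, got {len(values)}")
    return dict(zip(FIELDS, values))
```

A record with the wrong number of fields is rejected up front, which mirrors the length check the description performs before assembling a data object.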
In the embodiment, ZooKeeper + Kafka is adopted: the transaction platform sends transaction data to Kafka, and Kafka performs real-time streaming transmission, guaranteeing high availability under large volumes. The transaction platform generates transaction data, producing a key transaction information record in the format info1&info2&info3…&info30 for each transaction and sending the record to Kafka in real time. The transaction data and the interface specification data sent by the transaction platform are then received by Kafka.
In specific implementation, the transaction data is parsed with Storm according to the interface specification data to generate the MongoDB data object.
It should be noted that Storm is Twitter's open-source distributed real-time big data processing framework, often called the real-time Hadoop in the industry. As more and more scenarios — website statistics, recommendation systems, early-warning systems, financial systems (high-frequency trading, stocks) and the like — cannot tolerate Hadoop's high MapReduce latency, real-time big data processing solutions are becoming widely applied and are the latest growth point in the distributed technology field, with Storm the most prominent and mainstream of the stream computing technologies. Storm means to real-time computation what Hadoop means to batch processing. Hadoop provides the map and reduce primitives, making batch processing simple and efficient; likewise, Storm provides simple and efficient primitives for real-time computation, and its Trident is a higher-level abstraction framework built on the Storm primitives — similar to the Pig framework built on Hadoop — making development more convenient and efficient. Storm has the following properties: 1. Broad application scenarios: Storm can process messages and update a database in real time, continuously query data and return results to clients (continuous computation), and parallelize resource-intensive queries in real time (distributed remote procedure call, DRPC); Storm's basic APIs satisfy a large number of scenarios. 2. High scalability: Storm's scalability lets it process a very high number of messages per second; to scale a real-time computation task, all that is needed is to add machines and increase the task's parallelism, and Storm uses ZooKeeper to coordinate the cluster's configuration so that the cluster can be expanded easily. 3. Guaranteed no data loss: a real-time system must ensure that all data is processed successfully, since systems that lose data have very narrow applicable scenarios; Storm ensures that every message is processed, a major contrast with S4. 4. Robustness: a Storm cluster is very easy to manage, and nodes can be restarted in turn without affecting the application. 5. Good fault tolerance: when an exception occurs during message processing, Storm retries. 6. Language independence: Storm's topologies and message processing components can be defined in any language, so Storm can be used by anyone. Storm's working principle is as follows: Nimbus is responsible for distributing code across the cluster — topologies can only be submitted on the Nimbus machine — for assigning tasks to other machines, and for fault monitoring. Each Supervisor listens for the work assigned to its node, starting and stopping worker processes as necessary according to Nimbus's instructions. Each worker process executes a subset of a topology; a running topology consists of many worker processes running on many machines. Storm has an abstraction for a stream: an unbounded, continuous sequence of tuples. Note that when Storm models an event stream, it abstracts each event in the stream into a tuple. Storm considers that each stream has a source — the source of the original tuples — called a Spout. Processing of the tuples in a stream is abstracted into a Bolt. A Bolt can consume any number of input streams, as long as the streams are directed to it, and it can also emit new streams for other Bolts to consume. Thus, as long as a specific Spout is opened and the tuples flowing out of the Spout are directed to specific Bolts, the Bolts process the incoming stream and direct it onward to other Bolts or destinations.
Think of a Spout as a faucet: the water flowing out of each faucet is different, so to take a particular kind of water you turn on the corresponding faucet, then use a pipeline to guide the water to a water processor (a Bolt); after the water processor has treated the water, another pipeline guides it to a further processor or stores it in a container. To increase treatment efficiency, it is natural to connect multiple faucets to the same water source and use multiple water treatment devices. Storm abstracts the elements in a stream into tuples; a tuple is a list of values, and each value in the list can be of any serializable type. Each node of the topology declares the field names of the tuples it emits, and other nodes need only subscribe to those names to receive and process them.
In the above, several concepts are referred to, which are explained below:
Streams: a message stream is an unbounded sequence of tuples that are created and processed in parallel in a distributed manner. Each tuple may contain multiple columns, and the field types may be: integer, long, short, byte, string, double, float, boolean, and byte array.
Spouts: the message sources are the topology's message producers. A Spout reads data from an external source (such as a message queue) and emits tuples into the topology. Message sources may be reliable or unreliable: a reliable Spout can retransmit a tuple that failed processing, while an unreliable one cannot. The Spout method nextTuple continuously emits tuples into the topology; Storm calls ack when it detects that a tuple has been successfully processed by the entire topology, and otherwise calls fail. Storm calls ack and fail only for reliable Spouts.
Bolts: the message processors. Message handling logic is encapsulated inside Bolts, which can do many things: filtering, aggregation, querying a database, and so on. A Bolt can also simply pass the message stream along. Complex message flow processing often requires many steps and thus many Bolts; the output of one stage's Bolt may serve as the input of the next stage's Bolt, whereas a Spout can only be a first stage. The main Bolt method, execute, continuously processes incoming tuples; after successfully processing a tuple it calls the ack method of the OutputCollector to inform Storm that the tuple has been processed, and when processing fails it can call the fail method to notify the Spout side so that the tuple can be retransmitted. The flow is: a Bolt processes an input tuple and then calls ack to notify Storm that it has finished with that tuple. Storm provides an IBasicBolt that calls ack automatically. Bolts use the OutputCollector to emit tuples to the next-level Bolt.
In an embodiment, the MongoDB data object includes: MongoDB connection objects and MongoDB collection objects.
It should be noted that MongoDB is a database based on distributed file storage, written in C++. It is intended to provide an extensible, high-performance data storage solution for web applications. MongoDB is a product between relational and non-relational databases; among non-relational databases it is the most feature-rich and the most similar to relational databases. The data structure it supports is very loose — a JSON-like BSON format — so it can store fairly complex data types. MongoDB's biggest characteristic is that the query language it supports is very powerful: its syntax is somewhat similar to an object-oriented query language, it can realize almost all the functions of single-table queries in a relational database, and it also supports building indexes on data. MongoDB's design goals are high performance, scalability, easy deployment, ease of use, and very convenient data storage. The main functional characteristics are as follows: collection-oriented storage, making object-type data easy to store. In MongoDB, data is stored grouped into collections; a collection is analogous to a table in an RDBMS, and a collection can store an unlimited number of documents. Schema freedom: MongoDB uses schema-free storage — the data stored in a collection is schema-free documents, and schema-free storage is an important characteristic distinguishing collections from tables in an RDBMS. Full index support: indexes can be built on any attribute, including inner objects; building indexes on specified attributes and inner objects speeds up queries, and besides indexes that are basically the same as those of an RDBMS, MongoDB also provides the ability to build geospatial indexes. Query support: MongoDB supports rich query operations, covering almost most of the queries in SQL. Powerful aggregation tools: in addition to its rich query functions, MongoDB provides powerful aggregation tools such as count and group, and supports using MapReduce to complete complex aggregation tasks. Replication and automatic failover: MongoDB supports a master-slave replication mechanism, enabling data backup, failure recovery, read scaling and the like; the replica-set-based replication mechanism provides automatic failure recovery, ensuring that cluster data is not lost. Efficient binary data storage: any type of data object, including large objects (e.g., video), can be saved in binary format. Automatic sharding to support cloud-scale expansion: MongoDB supports automatic partitioning of cluster data; sharding lets the cluster store more data and bear larger loads while keeping the storage load balanced. Language driver support: MongoDB provides database driver packages for all current mainstream development languages, so developers can easily program against the MongoDB database in any mainstream language. The file storage format is BSON (an extension of JSON): BSON is short for binary JSON, and it supports nesting of documents and arrays. Network accessibility: the MongoDB database can be accessed remotely over a network.
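The schema-free, collection-oriented model can be illustrated with a toy in-memory "collection". A real deployment would use a MongoDB driver; this stand-in only mirrors the document model (schema-free inserts, exact-match filters) with method names chosen to resemble driver conventions:

```python
class ToyCollection:
    """Minimal stand-in for a MongoDB collection: schema-free documents."""
    def __init__(self):
        self._docs = []

    def insert_one(self, doc: dict):
        self._docs.append(doc)

    def find(self, query: dict):
        # Exact-match filter on top-level keys, like a basic MongoDB query.
        return [d for d in self._docs
                if all(d.get(k) == v for k, v in query.items())]

col = ToyCollection()
# Documents in one collection need not share a schema.
col.insert_one({"tx_code": "P001", "success": True, "amount": "10.00"})
col.insert_one({"tx_code": "P001", "success": False})
```

Both documents live in the same collection despite having different fields, which is the schema-free property the description highlights.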
In the embodiment, Storm is adopted to parse and process the data transmitted by Kafka in real time, statistics are computed in real time per transaction code at second granularity, and the processing results are stored in the MongoDB database. Specifically: read the transaction data from Kafka; obtain the current system time; parse the transaction data according to the Kafka interface specification; if the array obtained by splitting the transaction data on '&' has a length greater than 12, set a timestamp and assemble the MongoDB data object for storage; obtain the MongoDB connection object and the MongoDB collection object; and execute the operation of storing into the MongoDB database.
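The storage step just described — split on '&', check the array length, stamp the record, store — might look like the following sketch. The length threshold of 12 comes from the description; the document layout and the list standing in for a MongoDB collection are our assumptions:

```python
import time

def store_if_valid(transaction_data: str, collection: list) -> bool:
    """Parse a Kafka record per the interface spec and store it if valid."""
    parts = transaction_data.split("&")
    # Per the description, only records whose split array has length
    # greater than 12 are assembled into a MongoDB data object.
    if len(parts) <= 12:
        return False
    doc = {
        "fields": parts,
        "timestamp": time.time(),  # current system time, set before storing
    }
    collection.append(doc)  # stand-in for an insert via the MongoDB objects
    return True
```

A full 17-field record passes the check and is stored; a short or malformed record is dropped.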
In specific implementation, the transaction success rate data is determined according to the MongoDB data object, and the transaction data is monitored in real time according to the transaction success rate data.
In an embodiment, monitoring the transaction data in real time according to the transaction success rate data includes: comparing the transaction success rate data with a preset threshold and monitoring the transaction data in real time according to the comparison result.
In an embodiment, monitoring the transaction data in real time according to the transaction success rate data includes: comparing the transaction success rate data with a preset threshold and writing a monitoring field corresponding to the transaction success rate data into a log according to the comparison result, wherein the monitoring field includes a monitoring warning field and a monitoring normal field; and monitoring the transaction data in real time according to the monitoring field corresponding to the transaction success rate data.
In an embodiment, the monitoring field rule is specifically as follows: collection time (YYYY-MM-DD HH24:MI:SS), total transaction count, total successful transaction count, successful transaction count within the interval, transaction success rate within the interval, average transaction response time, transaction count within the interval, transaction name, transaction success rate state, transaction response time state, average background response time within the interval, and background response time state. Alarming can be achieved by monitoring the transaction success rate state, the transaction response time state, and the background response time state.
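A log line following the monitoring-field rule above could be assembled as in this sketch. The field ordering follows the listed rule; the delimiter, the stats keys, and the function name are our assumptions for illustration:

```python
from datetime import datetime

def build_monitor_line(stats: dict, delimiter: str = "|") -> str:
    """Assemble a monitoring log line per the field rule in the description."""
    fields = [
        datetime.now().strftime("%Y-%m-%d %H:%M:%S"),  # collection time
        str(stats["total_count"]),           # total transaction count
        str(stats["total_success"]),         # total successful transaction count
        str(stats["interval_success"]),      # successful count within interval
        f"{stats['interval_success_rate']:.2%}",  # success rate within interval
        str(stats["avg_response_ms"]),       # average transaction response time
        str(stats["interval_count"]),        # transaction count within interval
        stats["tx_name"],                    # transaction name
        stats["success_rate_state"],         # e.g. TRAN_WARN / TRAN_NORMAL
        stats["response_time_state"],        # transaction response time state
        str(stats["avg_backend_ms"]),        # average background response time
        stats["backend_time_state"],         # background response time state
    ]
    return delimiter.join(fields)
```

A scanning script can then alarm on the three state fields without parsing the numeric values.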
In the embodiment, data is read from the MongoDB database, the statistical results in the database are processed at a preset monitoring time interval, WARN entries are generated for transactions whose success rate is below the threshold and saved to a text file, the text data is monitored with a script, and a real-time alarm is raised if WARN is detected. Specifically: obtain the MongoDB connection object and the MongoDB collection object; compute the transaction success rate from the MongoDB results per transaction code and take the average; judge whether it reaches the set threshold; if the threshold is not reached, write the result into the log according to the monitoring field rule with the monitoring field set to TRAN_WARN; if the threshold is reached, write the monitoring fields into the log in order according to the rule with the monitoring field set to TRAN_NORMAL. The log is then monitored for the TRAN_WARN field, and once TRAN_WARN appears, the alarm mechanism is triggered to notify maintenance personnel.
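The alarm decision and the log scan described above can be sketched as two small functions. The threshold semantics follow the description (a success rate below the threshold produces TRAN_WARN); the function names are illustrative:

```python
def classify_success_rate(success_rate: float, threshold: float) -> str:
    """Return the monitoring field for a success rate vs. the set threshold."""
    return "TRAN_WARN" if success_rate < threshold else "TRAN_NORMAL"

def scan_log_for_alarms(log_lines) -> list:
    """Return the log lines that should trigger the alarm mechanism."""
    return [line for line in log_lines if "TRAN_WARN" in line]
```

In this scheme the statistics job only writes states into the log, and a separate lightweight scanner decides when to notify maintenance personnel.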
Based on the same inventive concept, the embodiment of the present invention further provides a real-time transaction data monitoring device, as described in the following embodiments. Because the principle of solving the problem is similar to that of the transaction data real-time monitoring method, the implementation of the device can refer to the implementation of the method, and repeated details are not described again.
Fig. 2 is a structural diagram of a real-time transaction data monitoring apparatus according to an embodiment of the present invention, as shown in fig. 2, the apparatus includes:
the data receiving module 201 is configured to receive the transaction data and the interface specification data sent by the transaction platform by using Kafka;
the object generation module 202 is used for analyzing the transaction data by Storm according to the interface specification data to generate a MongoDB data object;
the success rate determining module 203 is used for determining transaction success rate data according to the MongoDB data object;
and the monitoring module 204 is used for monitoring the transaction data in real time according to the transaction success rate data.
In one embodiment, the MongoDB data object includes: MongoDB connection objects and MongoDB collection objects.
In one embodiment, the monitoring module 204 is further configured to:
and comparing the transaction success rate data with a preset threshold value, and monitoring the transaction data in real time according to the comparison result.
In one embodiment, the monitoring module 204 is further configured to:
comparing the transaction success rate data with a preset threshold and writing a monitoring field corresponding to the transaction success rate data into a log according to the comparison result, wherein the monitoring field includes a monitoring warning field and a monitoring normal field;
and monitoring the transaction data in real time according to the monitoring field corresponding to the transaction success rate data.
In summary, in the embodiment of the present invention, Kafka is used to receive the transaction data and the interface specification data sent by the transaction platform; the transaction data is parsed with Storm according to the interface specification data to generate a MongoDB data object; transaction success rate data is determined from the MongoDB data object; and the transaction data is monitored in real time according to the transaction success rate data. Because the transaction success rate data is determined from the generated MongoDB data object, the transaction data can be monitored in real time without spending large amounts of time on statistics, which improves statistical efficiency and timeliness and helps raise alarms promptly when transaction anomalies occur.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above-mentioned embodiments are intended to further illustrate the objects, technical solutions, and advantages of the present invention. It should be understood that they are only exemplary embodiments of the present invention and are not intended to limit its scope; any modifications, equivalent substitutions, improvements, and the like made within the spirit and principle of the present invention shall fall within the scope of the present invention.

Claims (10)

1. A transaction data real-time monitoring method is characterized by comprising the following steps:
receiving, by using Kafka, transaction data and interface specification data sent by a transaction platform;
analyzing the transaction data by using Storm according to the interface specification data to generate a MongoDB data object;
determining transaction success rate data according to the MongoDB data object;
and monitoring the transaction data in real time according to the transaction success rate data.
2. The method of claim 1, wherein the MongoDB data object comprises: MongoDB connection objects and MongoDB collection objects.
3. The method of claim 1, wherein performing real-time monitoring of transaction data based on the transaction success rate data comprises:
comparing the transaction success rate data with a preset threshold, and monitoring the transaction data in real time according to the comparison result.
4. The method of claim 3, wherein performing real-time monitoring of transaction data based on the transaction success rate data comprises:
comparing the transaction success rate data with a preset threshold, and writing a monitoring field corresponding to the transaction success rate data into a log according to the comparison result, wherein the monitoring field comprises: a warning field and a normal field;
and monitoring the transaction data in real time according to the monitoring field corresponding to the transaction success rate data.
5. A transaction data real-time monitoring device, comprising:
the data receiving module is used for receiving, by using Kafka, the transaction data and the interface specification data sent by the transaction platform;
the object generation module is used for analyzing the transaction data by using Storm according to the interface specification data to generate a MongoDB data object;
the success rate determining module is used for determining transaction success rate data according to the MongoDB data object;
and the monitoring module is used for monitoring the transaction data in real time according to the transaction success rate data.
6. The apparatus of claim 5, wherein the MongoDB data object comprises: MongoDB connection objects and MongoDB collection objects.
7. The apparatus of claim 5, wherein the monitoring module is further to:
comparing the transaction success rate data with a preset threshold, and monitoring the transaction data in real time according to the comparison result.
8. The apparatus of claim 7, wherein the monitoring module is further to:
comparing the transaction success rate data with a preset threshold, and writing a monitoring field corresponding to the transaction success rate data into a log according to the comparison result, wherein the monitoring field comprises: a warning field and a normal field;
and monitoring the transaction data in real time according to the monitoring field corresponding to the transaction success rate data.
9. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the method of any one of claims 1 to 4 when executing the computer program.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program for executing the method of any one of claims 1 to 4.
CN202010644227.5A 2020-07-07 2020-07-07 Transaction data real-time monitoring method and device Active CN111652616B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010644227.5A CN111652616B (en) 2020-07-07 2020-07-07 Transaction data real-time monitoring method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010644227.5A CN111652616B (en) 2020-07-07 2020-07-07 Transaction data real-time monitoring method and device

Publications (2)

Publication Number Publication Date
CN111652616A true CN111652616A (en) 2020-09-11
CN111652616B CN111652616B (en) 2023-11-21

Family

ID=72351074

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010644227.5A Active CN111652616B (en) 2020-07-07 2020-07-07 Transaction data real-time monitoring method and device

Country Status (1)

Country Link
CN (1) CN111652616B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170102933A1 (en) * 2015-10-08 2017-04-13 Opsclarity, Inc. Systems and methods of monitoring a network topology
CN107517131A (en) * 2017-08-31 2017-12-26 四川长虹电器股份有限公司 A kind of analysis and early warning method based on log collection
US10567244B1 (en) * 2018-02-09 2020-02-18 Equinix, Inc. Near real-time feed manager for data center infrastructure monitoring (DCIM) using custom tags for infrastructure assets

Also Published As

Publication number Publication date
CN111652616B (en) 2023-11-21


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant