CN111652616B - Transaction data real-time monitoring method and device - Google Patents

Transaction data real-time monitoring method and device

Info

Publication number
CN111652616B
CN111652616B (application CN202010644227.5A)
Authority
CN
China
Prior art keywords
data
transaction
mongodb
success rate
monitoring
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010644227.5A
Other languages
Chinese (zh)
Other versions
CN111652616A (en)
Inventor
李响
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Bank of China Ltd
Original Assignee
Bank of China Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Bank of China Ltd filed Critical Bank of China Ltd
Priority to CN202010644227.5A priority Critical patent/CN111652616B/en
Publication of CN111652616A publication Critical patent/CN111652616A/en
Application granted granted Critical
Publication of CN111652616B publication Critical patent/CN111652616B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q20/00: Payment architectures, schemes or protocols
    • G06Q20/38: Payment protocols; Details thereof
    • G06Q20/389: Keeping log of transactions for guaranteeing non-repudiation of a transaction
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20: Information retrieval of structured data, e.g. relational data
    • G06F16/24: Querying
    • G06F16/245: Query processing
    • G06F16/2458: Special types of queries, e.g. statistical queries, fuzzy queries or distributed queries
    • G06F16/2462: Approximate or statistical queries
    • G06F16/2477: Temporal data queries
    • G06F16/27: Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
    • G06F16/28: Databases characterised by their database models, e.g. relational or object models

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Probability & Statistics with Applications (AREA)
  • Business, Economics & Management (AREA)
  • Mathematical Physics (AREA)
  • Fuzzy Systems (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Accounting & Taxation (AREA)
  • Strategic Management (AREA)
  • Finance (AREA)
  • General Business, Economics & Management (AREA)
  • Computing Systems (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a method and a device for real-time monitoring of transaction data, wherein the method comprises the following steps: receiving, by means of Kafka, transaction data and interface specification data sent by a transaction platform; parsing the transaction data with Storm according to the interface specification data to generate a MongoDB data object; determining transaction success rate data from the MongoDB data object; and monitoring the transaction data in real time according to the transaction success rate data. The invention makes it convenient to monitor transaction data in real time, improves statistical efficiency and timeliness, and helps raise timely alarms when transactions become abnormal.

Description

Transaction data real-time monitoring method and device
Technical Field
The invention relates to the technical field of transaction data processing, and in particular to a method and a device for real-time monitoring of transaction data.
Background
At present, each transaction system produces a huge volume of transaction data covering many transaction types, so the transaction data needs to be analyzed and monitored in real time to raise alarms as soon as a transaction becomes abnormal.
In the prior art, database data is monitored with shell scripts. As the volume of transaction data grows, the statistics become time-consuming and inefficient, and the operations performed on the database during the statistics slow down normal online transactions. Real-time performance is therefore poor, and timely alarms on abnormal transactions are difficult to achieve.
Disclosure of Invention
An embodiment of the invention provides a transaction data real-time monitoring method, which monitors transaction data in real time, improves statistical efficiency and timeliness, and facilitates timely alarms when a transaction abnormality is encountered. The method comprises the following steps:
receiving transaction data and interface specification data sent by a transaction platform by using Kafka;
according to the interface specification data, parsing the transaction data with Storm to generate a MongoDB data object, wherein the MongoDB data object comprises: a MongoDB connection object and a MongoDB collection object;
determining transaction success rate data according to the MongoDB data object;
according to the transaction success rate data, carrying out real-time monitoring on transaction data;
wherein monitoring the transaction data in real time according to the transaction success rate data comprises the following steps:
comparing the transaction success rate data with a preset threshold value, and monitoring the transaction data in real time according to the comparison result;
wherein comparing the transaction success rate data with the preset threshold value comprises:
acquiring the MongoDB connection object and the MongoDB collection object, counting the transaction success rate of the MongoDB data objects per transaction code, taking the average value, and judging whether it crosses the preset threshold value, wherein the transaction codes belong to the interface specification data.
An embodiment of the invention provides a transaction data real-time monitoring device, which monitors transaction data in real time, improves statistical efficiency and timeliness, and facilitates timely alarms when transactions become abnormal. The device comprises:
the data receiving module is used for receiving transaction data and interface specification data sent by the transaction platform by using Kafka;
the object generating module is configured to parse the transaction data with Storm according to the interface specification data and generate a MongoDB data object, wherein the MongoDB data object includes: a MongoDB connection object and a MongoDB collection object;
the success rate determining module is used for determining transaction success rate data according to the MongoDB data object;
the monitoring module is used for carrying out real-time monitoring on the transaction data according to the transaction success rate data;
wherein the monitoring module is further configured to:
comparing the transaction success rate data with a preset threshold value, and carrying out real-time monitoring on the transaction data according to a comparison result;
the monitoring module is specifically used for:
acquiring the MongoDB connection object and the MongoDB collection object, counting the transaction success rate of the MongoDB data objects per transaction code, taking the average value, and judging whether it crosses the preset threshold value, wherein the transaction codes belong to the interface specification data.
An embodiment of the invention further provides a computer device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor implements the above transaction data real-time monitoring method when executing the computer program.
Compared with the prior-art scheme of monitoring database data with shell scripts, the embodiment of the invention receives transaction data and interface specification data sent by a transaction platform by means of Kafka; parses the transaction data with Storm according to the interface specification data to generate a MongoDB data object; determines transaction success rate data from the MongoDB data object; and monitors the transaction data in real time according to the transaction success rate data. Because the transaction success rate data is determined from the generated MongoDB data object, the transaction data can be monitored in real time without spending a large amount of time on statistics, which improves statistical efficiency and timeliness and facilitates timely alarms when transactions are abnormal.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art. In the drawings:
FIG. 1 is a diagram of a method for real-time monitoring transaction data according to an embodiment of the present invention;
fig. 2 is a diagram of a real-time transaction data monitoring device according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the embodiments of the present invention will be described in further detail with reference to the accompanying drawings. The exemplary embodiments of the present invention and their descriptions herein are for the purpose of explaining the present invention, but are not to be construed as limiting the invention.
In order to monitor transaction data in real time, improve statistical efficiency and timeliness, and facilitate timely alarms when a transaction abnormality is encountered, an embodiment of the present invention provides a method for real-time monitoring of transaction data. As shown in fig. 1, the method may include:
step 101, receiving transaction data and interface specification data sent by a transaction platform by using Kafka;
step 102, analyzing the transaction data by utilizing Storm according to the interface specification data to generate a MongoDB data object;
step 103, determining transaction success rate data according to the MongoDB data object;
and 104, carrying out real-time monitoring on the transaction data according to the transaction success rate data.
As can be seen from fig. 1, the embodiment of the present invention receives the transaction data and interface specification data sent by the transaction platform by means of Kafka; parses the transaction data with Storm according to the interface specification data to generate a MongoDB data object; determines transaction success rate data from the MongoDB data object; and monitors the transaction data in real time according to the transaction success rate data. Because the transaction success rate data is determined from the generated MongoDB data object, the transaction data can be monitored in real time without spending a large amount of time on statistics, which improves statistical efficiency and timeliness and facilitates timely alarms when transactions are abnormal.
In a specific implementation, Kafka is used to receive the transaction data and interface specification data sent by the transaction platform.
It should be noted that Kafka, originally developed at LinkedIn, is a distributed, partitioned, multi-replica, multi-subscriber log system coordinated by ZooKeeper. It is commonly used for web/nginx logs, access logs and message services; LinkedIn later contributed it to the Apache Foundation, where it became a top-level open-source project. Its main application scenarios are log collection systems and messaging systems. The main design goals of Kafka are: to provide message persistence with O(1) time complexity, guaranteeing constant-time access even for terabytes of data; high throughput, so that even a low-cost commodity machine can handle 100K messages per second; message partitioning across Kafka servers with distributed consumption, while preserving message order within each partition; support for both offline and real-time data processing; and support for online horizontal scaling. A messaging system transfers data from one application to another, so that applications can focus on the data without worrying about how it is moved between them. Distributed messaging is based on reliable message queues that transfer messages asynchronously between client applications and the messaging system. There are two main messaging modes: point-to-point delivery and publish-subscribe. Most messaging systems, Kafka included, use the publish-subscribe model. In a point-to-point messaging system, messages are persisted into a queue, and one or more consumers consume the data in the queue, but each message can be consumed only once: when a consumer consumes a piece of data, it is deleted from the queue.
This mode guarantees the order of data processing even when multiple consumers consume data at the same time. In a publish-subscribe messaging system, messages are persisted into a topic. Unlike point-to-point systems, a consumer may subscribe to one or more topics and consume all the data in them; the same piece of data can be consumed by multiple consumers, and data is not deleted immediately after consumption. In a publish-subscribe system the producer of a message is called a publisher and the consumer is called a subscriber.
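As a minimal illustration of the publish-subscribe model described above (a toy sketch, not Kafka itself), the broker below delivers each published message to every subscriber of a topic; the class and topic names are hypothetical:

```python
from collections import defaultdict

class MiniPubSub:
    """Toy publish-subscribe broker: every subscriber of a topic receives
    every message, unlike a point-to-point queue where each message is
    consumed exactly once."""

    def __init__(self):
        self.subscribers = defaultdict(list)  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, message):
        # Deliver the same message to ALL subscribers of the topic.
        for callback in self.subscribers[topic]:
            callback(message)

broker = MiniPubSub()
received_a, received_b = [], []
broker.subscribe("trades", received_a.append)
broker.subscribe("trades", received_b.append)
broker.publish("trades", "txn-001")
```

In a real deployment Kafka plays the role of the broker, persisting messages per topic so that consumers can also replay them.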
In an embodiment, ZooKeeper is the manager of Kafka: after registering with ZooKeeper, Kafka works under its coordination. The Kafka interface specification data is currently defined as 17 fields separated by '&', as follows: platform identification number & platform system serial number & acquiring institution number & card issuing institution number & transaction code & card number & amount & currency & merchant number & transaction date & transaction time & transaction success flag & system success flag & transaction start time & time sent to external system & time received from external system & transaction end time.
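The 17-field, '&'-separated record format above can be parsed with a few lines of Python; the English field names below are illustrative translations and the sample record is made up, neither being part of the specification:

```python
# Illustrative English names for the 17 fields, in specification order.
FIELDS = [
    "platform_id", "platform_serial_no", "acquirer_no", "issuer_no",
    "transaction_code", "card_no", "amount", "currency", "merchant_no",
    "transaction_date", "transaction_time", "transaction_success_flag",
    "system_success_flag", "transaction_start_time",
    "send_external_time", "receive_external_time", "transaction_end_time",
]

def parse_record(raw):
    """Split an '&'-separated record into a field dict per the 17-field spec."""
    parts = raw.split("&")
    if len(parts) != len(FIELDS):
        raise ValueError(f"expected {len(FIELDS)} fields, got {len(parts)}")
    return dict(zip(FIELDS, parts))

sample = ("P01&0001&ACQ1&ISS1&T100&6222000000000000&100.00&CNY&M001"
          "&20200706&093000&1&1&093000001&093000050&093000120&093000200")
record = parse_record(sample)
```

Downstream code can then address fields by name (e.g. `record["transaction_code"]`) instead of by position.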
In the embodiment, ZooKeeper plus Kafka is adopted: the transaction data is sent to Kafka, which transmits it as a real-time stream and guarantees high availability under heavy load. The transaction platform generates the transaction data; each transaction produces one key transaction information record in the format info1&info2&info3…&info30, and the record is sent to Kafka in real time. Kafka thus receives the transaction data and interface specification data sent by the transaction platform.
In the implementation process, the transaction data is parsed with Storm according to the interface specification data to generate a MongoDB data object.
It should be noted that Storm is a distributed real-time big-data processing framework open-sourced by Twitter, known in industry as the real-time Hadoop. As more and more scenarios cannot tolerate the high latency of Hadoop MapReduce, such as website statistics, recommendation systems, early-warning systems and financial systems (high-frequency trading, stocks), real-time big-data processing solutions are applied ever more widely, and Storm has become one of the most prominent and mainstream stream-computing technologies. Storm is to real-time computation what Hadoop is to batch processing. Hadoop provides the map and reduce primitives that make batch processing simple and efficient; similarly, Storm provides simple and efficient primitives for real-time computation, and Storm's Trident is a higher-level abstraction over those primitives, similar to the Pig framework over Hadoop, making development more convenient and efficient. Storm has the following characteristics: 1. Wide application scope: Storm can process messages and update databases in real time, continuously query a data set and stream the result back to a client (continuous computation), and parallelize resource-intensive queries on the fly (distributed remote procedure call, DRPC); Storm's basic APIs cover a large number of scenarios. 2. High scalability: Storm can process a very high message volume per second; to scale a real-time computation task, all that is needed is to add machines and raise the task's parallelism, and Storm uses ZooKeeper to coordinate the various configurations in the cluster, so the cluster can be expanded very easily. 3.
Guaranteed no data loss: a real-time system must ensure that all data is processed successfully. Systems that can lose data are applicable only in very narrow scenarios, and Storm guarantees that every message is processed, a huge contrast with S4. 4. Robustness: a Storm cluster is easy to manage, and restarting nodes in turn does not affect the application. 5. Good fault tolerance: if an exception occurs during message processing, Storm can retry. 6. Language independence: Storm's topologies and message-processing components can be defined in any language, so anyone can use Storm. The working principle of Storm is as follows: Nimbus is responsible for distributing code across the cluster (topologies can only be submitted on the Nimbus machine), assigning tasks to other machines, and monitoring failures. A Supervisor listens for the tasks assigned to it and starts or stops worker processes as Nimbus directs. Each worker process executes a subset of a topology; a running topology consists of many worker processes spread over many machines. Storm has an abstraction called a stream, an unbounded, continuous sequence of tuples: when modeling an event stream, Storm abstracts the events in the stream as tuples. Storm considers every stream to have a source, the origin of the tuples, called a Spout. The processing of the tuples in a stream is abstracted as a Bolt; a Bolt can consume any number of input streams, as long as the streams are directed to it, and can at the same time emit new streams for other Bolts to use. Opening a specific Spout directs the tuples flowing out of it to specific Bolts, and a Bolt processes the incoming stream and then passes it on to other Bolts or to a destination.
One can think of a Spout as a faucet, each faucet yielding different water: to take a particular kind of water, open that faucet, and use a pipe to direct the water to a water processor (a Bolt); after processing, further pipes lead the water to another processor or into a container. To increase processing efficiency, it is natural to attach several faucets and several processors to the same water source. Storm abstracts the elements of a stream as tuples; a tuple is a list of values, each of which may be of any serializable type. Each node of a topology must declare the field names of the tuples it emits, and other nodes need only subscribe to these names to receive them for processing.
Several concepts were mentioned above; they are explained below:
streams: message flows are a sequence of turns without boundaries, which are created and processed in parallel in a distributed manner. Each tuple may contain multiple columns, and the field type may be: integer, long, short, byte, string, double, float, bootie and byte array.
Spouts: a message source is the producer of tuples for a topology. A Spout reads data from an external source (a message queue) and emits tuples into the topology. A message source may be reliable or unreliable: a reliable Spout can re-emit a tuple that failed to be processed, while an unreliable Spout cannot. The nextTuple method of the Spout class continuously emits tuples into the topology; when Storm detects that a tuple has been successfully processed by the whole topology it calls ack, otherwise it calls fail. Storm calls ack and fail only for reliable Spouts.
Bolts: message processors. Message-processing logic is encapsulated in Bolts, which can do many things: filtering, aggregation, querying a database, and so on. A Bolt may simply forward the message stream; complex message-stream processing often requires many steps and therefore many Bolts, the output of one stage being the input of the next (a Spout alone cannot form a stage). The main method of a Bolt is execute, which continuously processes incoming tuples; for each successfully processed tuple it calls the ack method of the OutputCollector to notify Storm that the tuple has been handled. When processing fails, the fail method can be called to notify the Spout that the tuple may be re-emitted. The flow is: a Bolt processes an input tuple and then calls ack to tell Storm it has handled that tuple. Storm provides IBasicBolt, which calls ack automatically. A Bolt uses the OutputCollector to emit tuples to the next-stage Bolt.
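The Bolt role described above can be sketched in plain Python. This mirrors Storm's execute-method style but is not the actual Storm API (Storm Bolts are usually written in Java); the class and field names are assumptions:

```python
class SuccessRateBolt:
    """Bolt-style processor: consumes parsed transaction tuples and keeps
    a running per-transaction-code success tally, the kind of aggregation
    a Bolt would perform before results are written to MongoDB."""

    def __init__(self):
        self.totals = {}  # transaction_code -> (success_count, total_count)

    def execute(self, tup):
        # In real Storm, execute(tuple) would also ack via the OutputCollector.
        code = tup["transaction_code"]
        ok = tup["transaction_success_flag"] == "1"
        s, t = self.totals.get(code, (0, 0))
        self.totals[code] = (s + (1 if ok else 0), t + 1)

    def success_rate(self, code):
        s, t = self.totals[code]
        return s / t

bolt = SuccessRateBolt()
for flag in ["1", "1", "0", "1"]:
    bolt.execute({"transaction_code": "T100", "transaction_success_flag": flag})
```

With the four sample tuples above, three succeed, so `bolt.success_rate("T100")` yields 0.75.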
In an embodiment, the MongoDB data object includes: a MongoDB connection object and a MongoDB collection object.
It should be noted that MongoDB is a database based on distributed file storage, written in C++. It aims to provide a scalable, high-performance data-storage solution for web applications. MongoDB sits between relational and non-relational databases and is, among non-relational databases, the most feature-rich and the most like a relational database. The data structures it supports are very loose, in BSON, a JSON-like binary format, so it can store fairly complex data types. Mongo's biggest strength is a very powerful query language whose syntax somewhat resembles object-oriented query languages; it can express almost everything a single-table query in a relational database can, and it also supports indexing of the data. MongoDB's design goals are high performance, scalability, and ease of deployment and use, making data storage convenient. Its main functional characteristics are as follows:
1. Collection-oriented storage, convenient for storing object-type data: in MongoDB, data is stored in collections, which resemble tables in an RDBMS, and a collection can hold an unlimited number of documents. 2. Schema-free: MongoDB stores schema-less documents in collections, and this schema-free storage is an important way in which collections differ from RDBMS tables. 3. Full index support: indexes can be built on any attribute, including inner objects, essentially the same as RDBMS indexes; indexing a chosen attribute or inner object speeds up queries, and MongoDB also supports geospatial indexes. 4. Rich query support: MongoDB supports most of the queries found in SQL. 5. Powerful aggregation tools: besides rich queries, MongoDB provides aggregation tools such as count and group, and supports MapReduce for complex aggregation tasks. 6. Replication and automatic failover: MongoDB supports a master-slave replication mechanism enabling data backup, failure recovery and read scaling.
The replica-set-based replication mechanism provides automatic failure recovery and ensures that cluster data is not lost. 7. Efficient binary data storage: any type of data object can be saved, including large objects (e.g. video), stored in binary form. 8. Automatic sharding to support cloud-scale expansion: MongoDB shards cluster data automatically, so the cluster can store more data, carry a larger load, and keep the storage load balanced. 9. Drivers for Perl, PHP, Java, C#, JavaScript, Ruby, C and C++: database driver packages exist for all current mainstream development languages, so developers can easily program against MongoDB from any of them. 10. BSON file storage format: BSON (an extension of JSON) is short for binary JSON and supports nesting of documents and arrays. 11. Network access: the MongoDB database can be accessed remotely over a network.
In the embodiment, storm is adopted to analyze and process the data transmitted by Kafka in real time, statistics is carried out in real time by taking a second as a dimension according to a transaction code, and a processing result is stored in a mongo database. Specifically, reading transaction data from Kafka, acquiring the current time of the system, analyzing the transaction data according to the Kafka interface specification, if the length of the transaction data and the separation array is more than 12, setting a timestamp to assemble and store MongoDB data objects, acquiring MongoDB connection objects and MongoDB collection objects, and executing the MongoDB database storage operation.
In the concrete implementation, transaction success rate data is determined from the MongoDB data object, and the transaction data is monitored in real time according to the transaction success rate data.
In an embodiment, according to the transaction success rate data, performing real-time monitoring on the transaction data includes: and comparing the transaction success rate data with a preset threshold value, and carrying out real-time monitoring on the transaction data according to a comparison result.
In an embodiment, according to the transaction success rate data, performing real-time monitoring on the transaction data includes: comparing the transaction success rate data with a preset threshold value, and writing a monitoring field corresponding to the transaction success rate data in a log according to a comparison result, wherein the monitoring field comprises: a monitor warning field and a monitor normal field; and carrying out real-time monitoring on the transaction data according to the monitoring field corresponding to the transaction success rate data.
In an embodiment, the monitoring field rule is as follows: acquisition time (YYYY-MM-DD HH24:MI:SS) | total transaction volume | total transaction volume | transaction volume | transaction success rate within the interval | average transaction response time | total transaction volume | transaction type | transaction name | transaction success rate state | transaction response time state | average background response time | background response time state within the interval. An alarm can be raised by monitoring the three state values: transaction success rate state, transaction response time state and background response time state.
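A rough sketch of assembling such a pipe-separated monitoring line; because the exact field order in the rule above is partly garbled in translation, the field set below is an illustrative subset with invented names:

```python
def format_monitor_line(stats):
    """Join monitoring fields with '|' in roughly the order given by the
    rule above; the keys used here are illustrative, not the spec's."""
    fields = [
        stats["acquisition_time"],        # YYYY-MM-DD HH24:MI:SS
        str(stats["total_count"]),
        f'{stats["success_rate"]:.4f}',
        str(stats["avg_response_ms"]),
        stats["transaction_type"],
        stats["transaction_name"],
        stats["success_rate_state"],      # alarms key off the three state values
        stats["response_time_state"],
        stats["backend_response_state"],
    ]
    return "|".join(fields)

line = format_monitor_line({
    "acquisition_time": "2020-07-06 09:30:00",
    "total_count": 1200,
    "success_rate": 0.9975,
    "avg_response_ms": 42,
    "transaction_type": "payment",
    "transaction_name": "quick_pay",
    "success_rate_state": "TRAN_NORMAL",
    "response_time_state": "TRAN_NORMAL",
    "backend_response_state": "TRAN_NORMAL",
})
```

A fixed field order makes the log line trivially parseable by the monitoring script that scans for warning states.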
In the embodiment, data is read from the MongoDB database, and the statistical results in the database are processed at a preset monitoring interval: for transactions whose success rate is lower than the threshold value, WARN records are generated and stored in a text file, the text data is monitored by a script, and a real-time alarm is raised whenever a WARN is observed. Specifically, the MongoDB connection object and MongoDB collection object are obtained, the transaction success rate is computed from the MongoDB results per transaction code, the average is taken, and it is judged whether the threshold is crossed. If the success rate does not reach the threshold, the values are written to the log in the order given by the monitoring field rule, with the monitoring field set to TRAN_WARN; if it exceeds the threshold, the values are written to the log in the same order with the monitoring field set to TRAN_NORMAL. The log is then monitored for the TRAN_WARN field, and once it appears, an alarm mechanism is triggered to notify maintenance personnel.
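The averaging and TRAN_WARN/TRAN_NORMAL flagging described above can be sketched as follows; the 0.95 threshold and the input shape are chosen purely for illustration:

```python
def classify_success_rates(rates_by_code, threshold=0.95):
    """Average the per-interval success rates for each transaction code and
    flag each code: below-threshold codes get TRAN_WARN, others TRAN_NORMAL,
    matching the log-field convention described in the text."""
    result = {}
    for code, rates in rates_by_code.items():
        avg = sum(rates) / len(rates)
        state = "TRAN_WARN" if avg < threshold else "TRAN_NORMAL"
        result[code] = (state, avg)
    return result

flags = classify_success_rates({
    "T100": [0.99, 0.98, 1.0],   # healthy
    "T200": [0.80, 0.75, 0.90],  # degraded, should alarm
})
```

A monitoring script then only has to scan the emitted log for the TRAN_WARN string to trigger the alarm mechanism.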
Based on the same inventive concept, the embodiment of the invention also provides a transaction data real-time monitoring device, as described in the following embodiment. Because the principles of solving the problems are similar to the transaction data real-time monitoring method, the implementation of the device can be referred to the implementation of the method, and the repetition is omitted.
Fig. 2 is a block diagram of a real-time transaction data monitoring device according to an embodiment of the present invention, as shown in fig. 2, the device includes:
a data receiving module 201, configured to receive transaction data and interface specification data sent by a transaction platform by using Kafka;
the object generating module 202 is configured to parse the transaction data by using Storm according to the interface specification data, and generate a MongoDB data object;
a success rate determining module 203, configured to determine transaction success rate data according to the MongoDB data object;
and the monitoring module 204 is used for carrying out real-time monitoring on the transaction data according to the transaction success rate data.
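The four modules above can be sketched end to end as a single class. Kafka, Storm, and MongoDB are replaced here by in-memory stand-ins, and the message field names (`tran_code`, `status`) are assumptions for illustration only.

```python
import json
from collections import defaultdict

class TransactionMonitor:
    """Illustrative stand-in for modules 201-204: receive, parse, count,
    and check transactions against a preset success rate threshold."""

    def __init__(self, threshold: float = 0.95):
        self.threshold = threshold
        # per-transaction-code counters; a real system would keep these in MongoDB
        self.stats = defaultdict(lambda: {"total": 0, "success": 0})

    def receive(self, message: str) -> dict:
        # data receiving module 201: a Kafka consumer would yield these messages
        return json.loads(message)

    def ingest(self, record: dict) -> None:
        # object generating / success rate modules 202-203: Storm bolts would
        # update a MongoDB collection instead of this in-memory dict
        s = self.stats[record["tran_code"]]
        s["total"] += 1
        if record["status"] == "OK":
            s["success"] += 1

    def check(self, tran_code: str) -> str:
        # monitoring module 204: compare the success rate with the threshold
        s = self.stats[tran_code]
        rate = s["success"] / s["total"]
        return "TRAN_WARN" if rate < self.threshold else "TRAN_NORMAL"
```

This mirrors the data flow of the device: messages flow in, counters are updated per transaction code, and the check maps the success rate to a warning or normal state.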
In one embodiment, the MongoDB data object includes: mongoDB connection objects and MongoDB collection objects.
In one embodiment, the monitoring module 204 is further configured to:
and comparing the transaction success rate data with a preset threshold value, and carrying out real-time monitoring on the transaction data according to a comparison result.
In one embodiment, the monitoring module 204 is further configured to:
comparing the transaction success rate data with a preset threshold value, and writing a monitoring field corresponding to the transaction success rate data in a log according to a comparison result, wherein the monitoring field comprises: a monitor warning field and a monitor normal field;
and carrying out real-time monitoring on the transaction data according to the monitoring field corresponding to the transaction success rate data.
In summary, in the embodiment of the invention, transaction data and interface specification data sent by a transaction platform are received by using Kafka; the transaction data is parsed by using Storm according to the interface specification data to generate a MongoDB data object; transaction success rate data is determined according to the MongoDB data object; and the transaction data is monitored in real time according to the transaction success rate data. Because the transaction success rate data is determined from the generated MongoDB data object, the transaction data can be monitored in real time without spending a large amount of time on statistics, which improves statistical efficiency and real-time performance and facilitates a timely alarm when a transaction is abnormal.
It will be appreciated by those skilled in the art that embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The foregoing description of the embodiments is provided to illustrate the general principles of the invention and is not intended to limit the invention to the particular embodiments disclosed; any modifications, equivalents, improvements, etc. that fall within the spirit and principles of the invention are intended to be included within the scope of the invention.

Claims (6)

1. A transaction data real-time monitoring method, characterized by comprising the following steps:
receiving transaction data and interface specification data sent by a transaction platform by using Kafka;
according to the interface specification data, analyzing the transaction data by utilizing Storm to generate a MongoDB data object, wherein the MongoDB data object comprises: mongoDB connection objects and MongoDB collection objects;
determining transaction success rate data according to the MongoDB data object;
according to the transaction success rate data, carrying out real-time monitoring on transaction data;
and carrying out real-time monitoring on the transaction data according to the transaction success rate data, wherein the real-time monitoring comprises the following steps:
comparing the transaction success rate data with a preset threshold value, and carrying out real-time monitoring on the transaction data according to a comparison result;
comparing the transaction success rate data with a preset threshold value, comprising:
and acquiring a MongoDB connection object and a MongoDB collection object, counting the transaction success rate of the MongoDB data object according to transaction codes, taking an average value, and judging whether the average value exceeds the preset threshold value, wherein the transaction codes belong to the interface specification data.
2. The method of claim 1, wherein conducting real-time monitoring of transaction data based on the transaction success rate data comprises:
comparing the transaction success rate data with a preset threshold value, and writing a monitoring field corresponding to the transaction success rate data in a log according to a comparison result, wherein the monitoring field comprises: a monitor warning field and a monitor normal field;
and carrying out real-time monitoring on the transaction data according to the monitoring field corresponding to the transaction success rate data.
3. A transaction data real-time monitoring device, comprising:
the data receiving module is used for receiving transaction data and interface specification data sent by the transaction platform by using Kafka;
the object generating module is configured to parse the transaction data by using Storm according to the interface specification data, and generate a MongoDB data object, where the MongoDB data object includes: mongoDB connection objects and MongoDB collection objects;
the success rate determining module is used for determining transaction success rate data according to the MongoDB data object;
the monitoring module is used for carrying out real-time monitoring on the transaction data according to the transaction success rate data;
wherein the monitoring module is further configured to:
comparing the transaction success rate data with a preset threshold value, and carrying out real-time monitoring on the transaction data according to a comparison result;
the monitoring module is specifically used for:
and acquiring a MongoDB connection object and a MongoDB collection object, counting the transaction success rate of the MongoDB data object according to transaction codes, taking an average value, and judging whether the average value exceeds the preset threshold value, wherein the transaction codes belong to the interface specification data.
4. The apparatus of claim 3, wherein the monitoring module is further to:
comparing the transaction success rate data with a preset threshold value, and writing a monitoring field corresponding to the transaction success rate data in a log according to a comparison result, wherein the monitoring field comprises: a monitor warning field and a monitor normal field;
and carrying out real-time monitoring on the transaction data according to the monitoring field corresponding to the transaction success rate data.
5. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the method of any of claims 1 to 2 when executing the computer program.
6. A computer readable storage medium, characterized in that the computer readable storage medium stores a computer program which, when executed by a processor, implements the method of any of claims 1 to 2.
CN202010644227.5A 2020-07-07 2020-07-07 Transaction data real-time monitoring method and device Active CN111652616B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010644227.5A CN111652616B (en) 2020-07-07 2020-07-07 Transaction data real-time monitoring method and device

Publications (2)

Publication Number Publication Date
CN111652616A CN111652616A (en) 2020-09-11
CN111652616B true CN111652616B (en) 2023-11-21

Family

ID=72351074

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010644227.5A Active CN111652616B (en) 2020-07-07 2020-07-07 Transaction data real-time monitoring method and device

Country Status (1)

Country Link
CN (1) CN111652616B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107517131A (en) * 2017-08-31 2017-12-26 四川长虹电器股份有限公司 A kind of analysis and early warning method based on log collection
US10567244B1 (en) * 2018-02-09 2020-02-18 Equinix, Inc. Near real-time feed manager for data center infrastructure monitoring (DCIM) using custom tags for infrastructure assets

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10108411B2 (en) * 2015-10-08 2018-10-23 Lightbend, Inc. Systems and methods of constructing a network topology
Similar Documents

Publication Publication Date Title
CN107577805B (en) Business service system for log big data analysis
US11288142B2 (en) Recovery strategy for a stream processing system
US10198298B2 (en) Handling multiple task sequences in a stream processing framework
US10409650B2 (en) Efficient access scheduling for super scaled stream processing systems
US10191768B2 (en) Providing strong ordering in multi-stage streaming processing
EP3690640B1 (en) Event stream processing cluster manager
CN112507029B (en) Data processing system and data real-time processing method
CN110502583B (en) Distributed data synchronization method, device, equipment and readable storage medium
CN109710731A (en) A kind of multidirectional processing system of data flow based on Flink
CN109918349A (en) Log processing method, device, storage medium and electronic device
CN107016039B (en) Database writing method and database system
CN109325077A (en) A kind of system that number storehouse in real time is realized based on canal and kafka
CN112039726A (en) Data monitoring method and system for content delivery network CDN device
CN113420043A (en) Data real-time monitoring method, device, equipment and storage medium
CN111177237B (en) Data processing system, method and device
CN113568813A (en) Mass network performance data acquisition method, device and system
CN111652616B (en) Transaction data real-time monitoring method and device
CN111049898A (en) Method and system for realizing cross-domain architecture of computing cluster resources
CN116910144A (en) Computing power network resource center, computing power service system and data processing method
CN113377611A (en) Business processing flow monitoring method, system, equipment and storage medium
CN112256446B (en) Kafka message bus control method and system
CN111796983B (en) Monitoring system and method for abnormal transaction request of body color
CN118113766A (en) Batch data processing method, device, equipment and medium
Hao et al. Distributed Message Processing System Based for Internet of Things
CN116980430A (en) Resource allocation processing method, device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant