Disclosure of Invention
In view of this, embodiments of the present invention provide an information processing method and apparatus for traffic monitoring, which can at least solve the problems in the prior art that the utilization of hardware and network resources is excessively high and that the timeliness of traffic monitoring cannot be guaranteed.
In order to achieve the above object, according to an aspect of an embodiment of the present invention, there is provided an information processing method for traffic monitoring, including:
receiving call requests of a plurality of callers to a service interface, and counting the call volume of each caller to the service interface in the current period to generate call information corresponding to each caller; and
grouping all the call information in the current period, and asynchronously transmitting the grouped call information to a storage party for storage.
Optionally, the call information at least includes a mapping relationship between a caller, a service interface, a call volume, and a timestamp to which the current period belongs;
the method further includes:
converting the caller, the service interface, the call volume and the timestamp in the call information into corresponding numerical information according to a preset numerical conversion mode, to generate corresponding numerical call information; or
converting the caller, the service interface, the call volume and the timestamp in the call information into corresponding hash values according to a preset hash conversion mode, to generate corresponding hash call information.
Optionally, the grouping of all the call information in the current period includes:
determining the number of groups to be created and the amount of call information in each group according to the total number of pieces of call information and a preset per-group information amount; or
determining the amount of call information in each group according to the total number of pieces of call information and a preset number of groups.
Optionally, after asynchronously transmitting the grouped call information to the storage party for storage, the method further includes:
determining historical call information corresponding to the call information at least according to the caller and the service interface; and
according to the timestamp of the current period and the historical timestamps of the historical call information, aggregating the historical call information within a preset historical time length to generate corresponding total call information.
To achieve the above object, according to another aspect of the embodiments of the present invention, there is provided an information processing apparatus for traffic monitoring, including:
an acquisition module, configured to receive call requests of a plurality of callers to a service interface, and count the call volume of each caller to the service interface in the current period to generate call information corresponding to each caller; and
a transmission module, configured to group all the call information in the current period, and asynchronously transmit the grouped call information to a storage party for storage.
Optionally, the call information at least includes a mapping relationship between a caller, a service interface, a call volume, and a timestamp to which the current period belongs;
the apparatus further comprises an information conversion module configured to:
convert the caller, the service interface, the call volume and the timestamp in the call information into corresponding numerical information according to a preset numerical conversion mode, to generate corresponding numerical call information; or
convert the caller, the service interface, the call volume and the timestamp in the call information into corresponding hash values according to a preset hash conversion mode, to generate corresponding hash call information.
Optionally, the transmission module is configured to:
determine the number of groups to be created and the amount of call information in each group according to the total number of pieces of call information and a preset per-group information amount; or
determine the amount of call information in each group according to the total number of pieces of call information and a preset number of groups.
Optionally, the apparatus further includes an information statistics module, configured to:
determine historical call information corresponding to the call information at least according to the caller and the service interface; and
according to the timestamp of the current period and the historical timestamps of the historical call information, aggregate the historical call information within a preset historical time length to generate corresponding total call information.
To achieve the above object, according to still another aspect of embodiments of the present invention, there is provided an information processing electronic device for traffic monitoring.
The electronic device of the embodiment of the invention comprises: one or more processors; and a storage device for storing one or more programs, which, when executed by the one or more processors, cause the one or more processors to implement any one of the above information processing methods for traffic monitoring.
To achieve the above object, according to a further aspect of the embodiments of the present invention, there is provided a computer-readable medium on which a computer program is stored, the program implementing any one of the above-mentioned information processing methods for traffic monitoring when executed by a processor.
According to the solution provided by the present invention, embodiments of the invention have the following advantages or beneficial effects: by periodically collecting call information and transmitting it asynchronously in groups, the efficiency problem of high traffic in a large-scale cluster is solved; compared with the conventional approach, the utilization of hardware and network resources can be greatly reduced while high real-time performance is guaranteed.
Further effects of the above-mentioned non-conventional alternatives will be described below in connection with the embodiments.
Detailed Description
Exemplary embodiments of the present invention are described below with reference to the accompanying drawings, in which various details of embodiments of the invention are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the invention. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
It should be noted that the present invention can be applied to various backend software systems such as e-commerce, social network, finance, etc. to monitor the interface service request traffic therein.
The service provider here is a provider inside an application system; application development is now service-oriented, that is, service interfaces are provided inside the service provider. Therefore, the service monitoring of the present invention refers to monitoring each service interface; the provider is not a telecom carrier such as China Mobile, China Unicom or China Telecom.
For example, a trading department in an e-commerce platform provides an order service, which includes functions of placing an order and querying an order; a caller, for example an APP (Application) or a website of the platform, can place and query orders by calling the service interface, and the trading department is the service provider.
The data storage end of the present invention includes but is not limited to a Redis cluster, and may also be any other distributed cluster. The traffic described in the present invention may be access traffic, data traffic, and the like. In fact, the traffic monitoring can observe the condition of each traffic source of each service interface.
The present invention uses a distributed processing mode; as the data scale grows, no hot-spot bottleneck is produced, so the invention can scale to monitoring traffic at the level of hundreds of millions.
The terms used in the present invention are interpreted as follows:
Embedding points (instrumentation): actively collecting the current runtime information at specific positions of program execution.
Filter: a program component that performs processing before and after a specific flow through common flow-handling code.
Redis Cluster: an in-memory key-value database cluster.
Pipeline: a batch processing mechanism that combines multiple requests into one operation.
Redis Incrby command: adds the specified increment to the number stored at a key. If the key does not exist, the value of the key is initialized to 0 before the Incrby command is executed. If the value is of the wrong type, or the string value cannot be represented as a number, an error is returned.
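As a minimal illustration of the Incrby semantics just described, the behavior can be simulated on a plain in-memory map; the class and method names below are illustrative and do not belong to any Redis client API:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the INCRBY semantics on an in-memory map (not a Redis client).
public class IncrbySketch {
    private final Map<String, String> store = new HashMap<>();

    // Missing keys are treated as 0, then incremented;
    // a non-numeric value raises an error, mirroring INCRBY.
    public long incrby(String key, long delta) {
        String current = store.getOrDefault(key, "0");
        long value;
        try {
            value = Long.parseLong(current);
        } catch (NumberFormatException e) {
            throw new IllegalStateException("value is not an integer");
        }
        value += delta;
        store.put(key, Long.toString(value));
        return value;
    }

    public void set(String key, String value) { store.put(key, value); }
}
```

Real Redis additionally guarantees that this read-modify-write is atomic across concurrent clients, which the invention relies on below.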
Referring to fig. 1, a main flowchart of an information processing method for traffic monitoring according to an embodiment of the present invention is shown, including the following steps:
s101: receiving call requests of a plurality of callers to a service interface, and counting the call volume of each caller to the service interface in the current period to generate call information corresponding to each caller;
s102: grouping all the call information in the current period, and asynchronously transmitting the grouped call information to a storage party for storage.
In the above embodiment, for step S101, the service side described in the present invention is an application module of a business system such as an APP/website of an e-commerce platform, a social network, finance, etc., and is integrated into all service applications through a POM (Project Object Model) dependency. The POM is part of the Maven build system in the Java ecosystem; a POM dependency makes it convenient to integrate the collection module into a server.
The traffic call request initiated by the caller must include some key information needed later, such as the service information (e.g., interface name, method name), the caller application information, and a timestamp.
The processing of a traffic call request can be temporarily recorded in local memory by the filter program configured on the service side. The filter is a general design pattern and can be regarded as the entry module that collects the call requests received by each service interface; it is used to add new functions to the service system. The traffic call request can simply be recorded into a counter temporarily held in memory.
The caller in the request is different from the service provider, and may specifically be a certain APP/website system, a red-packet sending system, and the like.
In a typical distributed system, only the source IP (Internet Protocol) address of a caller can be obtained directly; however, the obtained IP is not intuitive: it cannot identify which caller application is involved, nor can requests from the same source application be aggregated together. Therefore, an internal "IP to application name" conversion function is required to obtain the specific caller application.
It should be noted that determining the call source, i.e. the calling application, is an important function of the present invention and also a point of difference from other current monitoring methods. When a large amount of traffic is encountered, it can be clearly known from which call source the traffic comes, so that rate-limiting countermeasures or call-source handling, for example adding the source to a blacklist, can be taken.
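The "IP to application name" conversion can be sketched as a simple registry lookup. This is a hedged illustration: the registry contents and the fallback-to-raw-IP behavior below are assumptions for the example, not the invention's actual implementation.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative "IP converted to application name" lookup.
public class CallerResolver {
    private final Map<String, String> ipToApp = new HashMap<>();

    public void register(String ip, String appName) { ipToApp.put(ip, appName); }

    // Resolve a source IP to a caller application name; unknown IPs fall
    // back to the raw IP so the traffic is still attributed to something.
    public String resolve(String sourceIp) {
        return ipToApp.getOrDefault(sourceIp, sourceIp);
    }
}
```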
For determining the call volume, a monitoring main thread can be created automatically when the server application starts, and a scheduling task is executed periodically (for example, every 4s) through a timer, that is, the volume of the traffic call requests initiated by each caller is counted. In a system handling millions of requests, the counted values are on the order of millions.
For example, an atomic counter is created on the service side, and a self-increment operation is performed on the counter each time a traffic call request sent by a caller to the service interface is received.
The call information may include a correspondence between a service interface, a caller, a call volume, and a timestamp. For example, a static concurrent Map is created in the application instance of each server, each "system name-interface name-method name-caller application name-timestamp" is used as a key, the corresponding value is the counted call volume, and call information in key-value format is generated.
The monitoring service can be used by a plurality of business systems, and each business system has its own system name, so the information can be recorded without mutual interference; for example, the trading system, marketing system, advertising system, etc. inside an enterprise are system names.
The method name refers to the classification one level below the interface, and can be regarded as a sub-interface or sub-service; for example, in "trading system-order service-create order", the three parts are the system name, the interface name, and the method name, respectively.
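The counting step described above, an atomic counter per key in a concurrent Map, can be sketched as follows. The class and method names are assumptions for illustration; only the key layout follows the text.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

// Sketch of per-period counting: one atomic counter per
// "system-interface-method-caller-timestamp" key.
public class CallCounter {
    private final ConcurrentHashMap<String, AtomicLong> counters = new ConcurrentHashMap<>();

    // Called (e.g. from a filter) on every traffic call request.
    public long record(String system, String iface, String method,
                       String caller, long periodTimestamp) {
        String key = system + "-" + iface + "-" + method + "-" + caller + "-" + periodTimestamp;
        return counters.computeIfAbsent(key, k -> new AtomicLong()).incrementAndGet();
    }

    // Drain at the end of each period: snapshot and reset, so counts
    // do not accumulate into the next cycle.
    public Map<String, Long> drain() {
        Map<String, Long> snapshot = new HashMap<>();
        counters.forEach((k, v) -> snapshot.put(k, v.get()));
        counters.clear();
        return snapshot;
    }
}
```

The drained snapshot is what would be grouped and sent asynchronously each cycle.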
For step S102, sending the resulting key-value call information to a Redis cluster is actually an Incrby operation, i.e., adding a specified increment to the value stored at the key, e.g., the total call volume once every 4 seconds. In addition, the Redis Incrby operation is atomic: when a large number of requests perform it, the accumulated value is guaranteed to be correct, with no collision or loss.
However, the above operation may result in a large number of concurrent Redis writes; e.g., a very high TPS peak can be seen once every 4s on the monitoring of the entire Redis cluster. For this situation, in order to reduce resource occupation and computational complexity as much as possible, the present invention abandons the existing approach of writing local log records for retransmission, and directly sends the periodically counted information to the distributed cluster in an asynchronous thread of a consumption queue; the consumption queue is the consumer end of an asynchronous queue in the system, i.e., the side that reads the queue.
The call information obtained in each cycle can be transmitted according to execution cycles of equal duration. For example, when the execution cycle (e.g., 4s) elapses, all the key-value call information counted for the current cycle is transferred into memory.
It should be noted that after the call information is sent, the value in the atomic counter needs to be cleared; otherwise the value counted in the current cycle will accumulate into the next cycle. The timer itself carries no timestamp; the timestamp of the execution cycle is attached when sending.
In order to guarantee the timeliness of task transmission, the timestamp of the task cycle needs to be adjusted continuously; for example, with a 4s transmission cycle, execution is kept strictly at the 0s, 4s, 8s, ..., 56s marks of each minute.
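One way to keep execution aligned to those wall-clock marks is to compute the delay to the next period boundary before scheduling. This is a sketch under the assumption that a fixed-rate scheduler is seeded with this initial delay; the helper name is illustrative.

```java
// Aligning the execution cycle to wall-clock boundaries (0s, 4s, 8s, ...).
public class AlignedScheduler {
    // Milliseconds to wait from 'nowMillis' until the next multiple of
    // 'periodMillis' on the wall clock.
    public static long delayToNextBoundary(long nowMillis, long periodMillis) {
        long remainder = nowMillis % periodMillis;
        return remainder == 0 ? 0 : periodMillis - remainder;
    }
}
```

A ScheduledExecutorService started with this initial delay and a fixed 4s rate would then fire at the 0s, 4s, 8s, ... marks.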
The key-value call information can be delivered to an asynchronous queue and transmitted asynchronously; compared with local storage, this mode is more efficient and stable.
For example, suppose a task is performed in the sequential order 1, 2, 3. If 4 is inserted between 1 and 2 in a synchronous mode, 2 and 3 must wait for it; in an asynchronous mode, the original 1, 2, 3 still run normally, and 4 is processed in a separate context.
Within a single period the amount of call information is large. For this situation, the present invention abandons one-by-one transmission, transmits the call information in batches, and accelerates the transmission using a pipeline mechanism:
1) The number of groups is fixed. For example, the server receives 1100 key-value pairs in the current period; if there are 5 groups in total, the 1100 key-value pairs are evenly distributed, giving 1100/5 = 220 key-value pairs per group;
however, if there are 1101 key-value pairs, i.e. a remainder of 1, the extra key-value pair may be arbitrarily allocated to one of the groups;
when the number of groups is 1, all the key-value pairs are allocated to one group for a single batch transmission.
2) The amount of information in each group is fixed. For example, again with 1100 key-value pairs, if each group is fixed at 200 key-value pairs, 5 groups containing 200 key-value pairs and 1 group containing 100 key-value pairs are obtained;
the group with the remaining 100 key-value pairs represents the situation where no new information is generated before the next period arrives, yet the per-group amount is not exceeded;
when the upper limit of the per-group amount is unbounded, this likewise means that all the key-value pairs are allocated to one group for batch transmission.
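The two grouping strategies above can be sketched as follows. This illustrates only the arithmetic; the text says a remainder may be allocated "arbitrarily", so the sketch assumes the first groups (strategy 1) or the last group (strategy 2) take it.

```java
import java.util.ArrayList;
import java.util.List;

// The two grouping strategies: fixed group count vs. fixed per-group size.
public class Grouping {
    // Strategy 1: fixed number of groups; group sizes differ by at most one.
    public static List<Integer> byGroupCount(int items, int groups) {
        List<Integer> sizes = new ArrayList<>();
        int base = items / groups, remainder = items % groups;
        for (int i = 0; i < groups; i++) sizes.add(base + (i < remainder ? 1 : 0));
        return sizes;
    }

    // Strategy 2: fixed per-group size; the last group holds the remainder.
    public static List<Integer> byGroupSize(int items, int size) {
        List<Integer> sizes = new ArrayList<>();
        while (items > size) { sizes.add(size); items -= size; }
        if (items > 0) sizes.add(items);
        return sizes;
    }
}
```

With 1100 pairs, strategy 1 with 5 groups yields five groups of 220; strategy 2 with size 200 yields five groups of 200 and one of 100, matching the examples in the text.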
It should be noted that the delay of such grouped batch transmission is usually about two seconds, which is within the acceptable range of the system; the delay can be further reduced by configuring parameters, but cannot be eliminated.
By means of the asynchronous queue and batch transmission of call information, the tasks collected and accumulated in the queue are sent as a batch: through the queue's poll operation and timeout detection, the tasks that have not timed out on dequeue are sent together in one batch.
The asynchronous transmission mode can improve the performance and the fault tolerance of the service side. The performance improvement means that asynchronous batch transmission is faster; the fault-tolerance improvement means that after asynchronization the original interface service is unaffected, neither slowed down nor made erroneous.
Moreover, each individual Redis operation incurs multiple IO system calls and switches, whereas batch operation with pipeline greatly reduces the total number of internal IO system calls and saves the RTT (network round-trip time) of each single call. For example, information that originally had to be transmitted one by one is now transmitted at once. Tests show that the overall performance after batch transmission is improved by about 10 times.
In addition to filters, startup beans (entity object classes; database operations can be abstracted into class-level operations) can be configured at the provider to accomplish the above traffic collection and sending and achieve the same function.
The above process of collecting and counting the callers' call requests consumes little time, for example:
1) the counter increment on the service side is only a simple addition and accumulation;
2) the accumulated counter value is periodically sent and then cleared;
these operations involve no heavy computation and no hard-disk or network operations, and the in-memory operations complete well within a second, ensuring that the statistics are completed accurately without affecting the original service flow.
Compared with the existing idea of storing the data on the service side, the present invention extracts the data storage side and processes it independently, thereby reducing the intrusion into each business system and maintaining the independence and stability of each business system. For the distributed cluster, performance can be improved through multiple shards, i.e., an overall cluster formed by multiple Redis shards, which is faster, safer and more stable.
The key-value design mentioned above is the simplest and most straightforward, but the most wasteful of storage space, since the keys become very long.
Therefore, when the call information is stored at the data storage end, or after the call information is generated and before it is stored, the call information needs to be optimized:
1) All interface names, method names and the like are converted into numeric IDs so that the keys are short; for example, "trading system-order service-create order" is converted to "1-1-1", entirely a sequence of numbers, with each level assigned IDs in an independent increasing order, e.g., a subsequent entry might be "1-2-3".
However, this numerical conversion adds complexity, because conversion is required not only when storing information but also for subsequent queries; errors occur easily when information is added, modified or deleted; and the numbers still occupy considerable space.
Therefore, a further optimized method is needed:
2) A better method is to store the data using a hash structure: for example, the key comprises the system name, interface name, method name and caller, the hash field stores a short timestamp, and the corresponding value stores the count; this greatly reduces the storage space.
For example, when there are 1,000,000 key-value pairs, there are 1,000,000 full keys and 1,000,000 values; with the hash structure, only the small varying part (the timestamp field) needs to be stored 1,000,000 times, while the shared key prefix is stored only once, so space can be saved.
Experiments show that, with the data collected within one day, the storage space of the hash structure is reduced by more than half compared with that of the plain key-value mode, as shown in fig. 2.
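The space argument can be modeled roughly as follows. The byte counts are illustrative assumptions, not Redis-accurate measurements of encoding overhead:

```java
// Rough space model: plain key-value repeats the full
// "system-interface-method-caller-timestamp" key per entry, while a hash
// stores the long prefix once plus only a short timestamp field per entry.
public class StorageModel {
    public static long plainKeyValueBytes(int entries, int fullKeyLen, int valueLen) {
        return (long) entries * (fullKeyLen + valueLen);
    }

    public static long hashBytes(int entries, int prefixLen, int fieldLen, int valueLen) {
        return prefixLen + (long) entries * (fieldLen + valueLen);
    }
}
```

With, say, a 60-byte full key versus a 50-byte shared prefix and a 10-byte timestamp field, the hash layout needs well under half the space of the plain layout at 1,000,000 entries, consistent with the halving the text reports.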
The management end of the present invention is an independently deployed application. After the monitored traffic is stored in structured form, it can subsequently be visualized in charts and the like, or, through timed tasks, functions such as alarm notification, configured degradation and rate limiting, and authorization can be applied to the monitored abnormal traffic.
Subsequently, when reading data to generate monitoring charts, a very large time range is often encountered; if second-level data were used, the number of query results would be very large, exceeding the capacity of the chart's pixels.
For this case, the call information can be aggregated along the timestamp during storage, for example, minute-level aggregation of second-level monitoring data within a predetermined time period, so that minute-level data can be queried directly for a large-range query; this aggregation is performed on Redis by the background management service.
Optionally, a query at a higher level is answered by aggregating all the data at the next level down. The aggregated data has coarser granularity and occupies less space. In addition, batch reads can also be accelerated with pipeline, on the same principle as batch storage.
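The minute-level roll-up of second-level counts can be sketched as a map-based aggregation. This illustrates only the summarization step, not the Redis-side implementation:

```java
import java.util.HashMap;
import java.util.Map;

// Roll second-level counts up to minute granularity for large-range queries.
public class MinuteRollup {
    // secondCounts: epoch-second -> call count. Returns epoch-minute -> summed count.
    public static Map<Long, Long> rollup(Map<Long, Long> secondCounts) {
        Map<Long, Long> byMinute = new HashMap<>();
        secondCounts.forEach((second, count) ->
            byMinute.merge(second / 60, count, Long::sum));
        return byMinute;
    }
}
```

The same shape applies one level up: minute-level data can be rolled up to hours by dividing by 60 again.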
According to the method provided by the embodiment of the present invention, the collection, sending and storage of monitoring information are redesigned, making clever use of distributed processing, asynchronous aggregation and the like: operations such as collecting the monitoring information are distributed across the service machines, and the collected information is transmitted in batches to the distributed cluster for aggregation. In addition, the transmitted information is compressed, so the utilization of hardware and network resources is reduced while high real-time performance of the overall scheme is guaranteed.
The present invention involves four ends, namely a caller, a service side, a data storage end and a management end, as shown in fig. 3:
1) the caller: sends traffic call requests to the service side/service interface;
2) the service side: receives the call requests sent by the callers;
counts the volume of call requests received from each caller within a preset period to generate call information;
groups the call information of the period, and transmits it to the data storage end in one batch or group by group;
3) the data storage end: mainly a distributed cluster, e.g., a Redis cluster, for structured storage of the data;
4) the management end: performs alarming and management control using the monitoring data.
Referring to fig. 4, a schematic diagram of the main modules of an information processing apparatus 400 for traffic monitoring according to an embodiment of the present invention is shown, including:
an acquisition module 401, configured to receive call requests of a plurality of callers to a service interface, and count the call volume of each caller to the service interface in the current period to generate call information corresponding to each caller;
a transmission module 402, configured to group all the call information in the current period, and asynchronously transmit the grouped call information to a storage party for storage.
In the apparatus of the present invention, the call information at least includes a mapping relationship between a caller, a service interface, a call volume, and a timestamp to which the current period belongs;
the apparatus further comprises an information conversion module 403 (not shown in the figure), configured to:
convert the caller, the service interface, the call volume and the timestamp in the call information into corresponding numerical information according to a preset numerical conversion mode, to generate corresponding numerical call information; or
convert the caller, the service interface, the call volume and the timestamp in the call information into corresponding hash values according to a preset hash conversion mode, to generate corresponding hash call information.
In the apparatus of the present invention, the transmission module 402 is configured to:
determine the number of groups to be created and the amount of call information in each group according to the total number of pieces of call information and a preset per-group information amount; or
determine the amount of call information in each group according to the total number of pieces of call information and a preset number of groups.
The apparatus of the present invention further includes an information statistics module 404 (not shown in the figure), configured to:
determine historical call information corresponding to the call information at least according to the caller and the service interface; and
according to the timestamp of the current period and the historical timestamps of the historical call information, aggregate the historical call information within a preset historical time length to generate corresponding total call information.
In addition, the detailed implementation of the information processing apparatus for traffic monitoring in the embodiment of the present invention has been described in detail in the above information processing method for traffic monitoring, and therefore, the repeated description is omitted here.
Fig. 5 shows an exemplary system architecture 500 of an information processing method for traffic monitoring or an information processing apparatus for traffic monitoring to which an embodiment of the present invention can be applied.
As shown in fig. 5, the system architecture 500 may include terminal devices 501, 502, 503, a network 504, and a server 505 (by way of example only). The network 504 serves to provide a medium for communication links between the terminal devices 501, 502, 503 and the server 505. Network 504 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The user may use the terminal devices 501, 502, 503 to interact with the server 505 over the network 504 to receive or send messages and the like. Various communication client applications may be installed on the terminal devices 501, 502, 503, such as a shopping application, a web browser application, a search application, an instant messaging tool, a mailbox client, social platform software, etc. (by way of example only).
The terminal devices 501, 502, 503 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smart phones, tablet computers, laptop portable computers, desktop computers, and the like.
The server 505 may be a server providing various services, such as a background management server (for example only) providing support for shopping websites browsed by users using the terminal devices 501, 502, 503. The backend management server may analyze and perform other processing on the received data such as the product information query request, and feed back a processing result (for example, target push information, product information — just an example) to the terminal device.
It should be noted that the information processing method for traffic monitoring provided by the embodiment of the present invention is generally executed by the server 505, and accordingly, the information processing apparatus for traffic monitoring is generally disposed in the server 505.
It should be understood that the number of terminal devices, networks, and servers in fig. 5 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
Referring now to FIG. 6, a block diagram of a computer system 600 suitable for use with a terminal device implementing an embodiment of the invention is shown. The terminal device shown in fig. 6 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present invention.
As shown in fig. 6, the computer system 600 includes a Central Processing Unit (CPU)601 that can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM)602 or a program loaded from a storage section 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data necessary for the operation of the system 600 are also stored. The CPU 601, ROM 602, and RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.
The following components are connected to the I/O interface 605: an input portion 606 including a keyboard, a mouse, and the like; an output portion 607 including a display such as a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and the like, and a speaker; a storage section 608 including a hard disk and the like; and a communication section 609 including a network interface card such as a LAN card, a modem, or the like. The communication section 609 performs communication processing via a network such as the internet. A drive 610 is also connected to the I/O interface 605 as needed. A removable medium 611 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 610 as necessary, so that a computer program read out therefrom is installed into the storage section 608 as necessary.
In particular, according to the embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 609, and/or installed from the removable medium 611. The computer program performs the above-described functions defined in the system of the present invention when executed by the Central Processing Unit (CPU) 601.
It should be noted that the computer readable medium shown in the present invention can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present invention, a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present invention, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules described in the embodiments of the present invention may be implemented by software or by hardware. The described modules may also be provided in a processor, which may be described as: a processor including a collection module and a transmission module. The names of these modules do not, in some cases, constitute a limitation on the modules themselves; for example, the collection module may also be described as a "module for collecting call information within a cycle".
As another aspect, the present invention also provides a computer-readable medium that may be contained in the apparatus described in the above embodiments, or may be separate and not assembled into the apparatus. The computer readable medium carries one or more programs which, when executed by a device, cause the device to perform the following:
receiving call requests of a plurality of calling parties to a service interface, and counting the call quantity of each calling party to the service interface in the current period to generate call information corresponding to each calling party;
and grouping all the calling information in the current period, and asynchronously transmitting the grouped calling information to a storage party for storage.
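The two steps carried by the programs above can be sketched as follows. This is an illustrative example only and not part of the claimed embodiments; all names (`TrafficMonitor`, `record_call`, `flush_period`, the in-process queue standing in for the storage party) are hypothetical, and the asynchronous transmission is modeled with a background thread:

```python
import threading
from collections import Counter
from queue import Queue

class TrafficMonitor:
    """Counts calls per (caller, interface) within the current period and
    ships the grouped call information to a storage party asynchronously."""

    def __init__(self, group_size=2, period_id=0):
        self.group_size = group_size  # preset information amount per group
        self.period_id = period_id    # timestamp of the current period
        self.counts = Counter()       # (caller, interface) -> call quantity
        self.queue = Queue()          # decouples grouping from sending
        self.sent = []                # stand-in for the storage party
        threading.Thread(target=self._sender, daemon=True).start()

    def record_call(self, caller, interface):
        """Step 1: count one call request in the current period."""
        self.counts[(caller, interface)] += 1

    def flush_period(self):
        """Step 2: turn the counters into call information, group it,
        and hand the groups to the asynchronous sender."""
        info = [
            {"caller": c, "interface": i, "amount": n, "period": self.period_id}
            for (c, i), n in self.counts.items()
        ]
        for start in range(0, len(info), self.group_size):
            self.queue.put(info[start:start + self.group_size])
        self.counts.clear()
        self.period_id += 1

    def _sender(self):
        # Runs in the background so grouping never blocks on storage I/O.
        while True:
            group = self.queue.get()
            self.sent.append(group)  # a real system would send over the network
            self.queue.task_done()

monitor = TrafficMonitor(group_size=2)
for caller in ("app-a", "app-a", "app-b"):
    monitor.record_call(caller, "/order/create")
monitor.flush_period()
monitor.queue.join()  # wait for the asynchronous send to finish
```

Because the counters are aggregated on the service machine before anything is transmitted, only one record per (caller, interface) pair leaves the machine each period, regardless of how many individual call requests were received.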
According to the technical solution of the embodiments of the present invention, the collection, sending, and storage processes of the monitored information are redesigned, and techniques such as distributed processing and asynchronous aggregation are employed: the aggregation of monitored information is distributed across the service machines, and the aggregated information is transmitted in batches to the distributed cluster for consolidation. In addition, the transmitted information is compressed, which reduces the utilization of hardware and network resources and ensures high real-time performance of the overall solution.
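The compression step mentioned above can be illustrated as follows. This sketch is not part of the claimed embodiments; the function names and the choice of JSON plus gzip are assumptions for illustration, since call information records are highly repetitive and compress well in batches:

```python
import gzip
import json

def compress_group(group):
    """Serialize one group of call information and gzip it before it
    leaves the service machine, reducing network usage."""
    raw = json.dumps(group, separators=(",", ":")).encode("utf-8")
    return gzip.compress(raw)

def decompress_group(payload):
    """Inverse operation, performed on the storage side."""
    return json.loads(gzip.decompress(payload).decode("utf-8"))

# A batch of repetitive call information records, where compression pays off.
group = [
    {"caller": "app-a", "interface": "/order/create", "amount": 1024, "period": 17},
    {"caller": "app-b", "interface": "/order/create", "amount": 512, "period": 17},
] * 50
payload = compress_group(group)
assert decompress_group(payload) == group                        # lossless round trip
assert len(payload) < len(json.dumps(group).encode("utf-8"))     # smaller on the wire
```

Compressing whole groups rather than individual records lets the compressor exploit the repeated field names and interface paths, which is where most of the size reduction comes from.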
The above-described embodiments should not be construed as limiting the scope of the invention. Those skilled in the art will appreciate that various modifications, combinations, sub-combinations, and substitutions can occur, depending on design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.