WO2018121404A1 - Timeout monitoring method and apparatus - Google Patents


Info

Publication number
WO2018121404A1
WO2018121404A1 (PCT/CN2017/117733, CN2017117733W)
Authority
WO
WIPO (PCT)
Prior art keywords
request message
cache
key information
message
level
Prior art date
Application number
PCT/CN2017/117733
Other languages
English (en)
French (fr)
Inventor
陈林
朱龙先
杨森
张峻浩
Original Assignee
中国银联股份有限公司 (China UnionPay Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中国银联股份有限公司 (China UnionPay Co., Ltd.)
Priority to US16/470,206 (US11611634B2)
Priority to EP17887171.1A (EP3562096B1)
Publication of WO2018121404A1

Classifications

    • H04L43/0805 Monitoring or testing based on specific metrics, by checking availability
    • H04L43/0852 Monitoring or testing based on specific metrics: delays
    • H04L43/16 Threshold monitoring
    • H04L67/568 Storing data temporarily at an intermediate stage, e.g. caching
    • H04L67/5682 Policies or rules for updating, deleting or replacing the stored data
    • H04L67/62 Establishing a time schedule for servicing the requests
    • H04L67/143 Termination or inactivation of sessions, e.g. event-controlled end of session
    • H04L1/1607 Details of the supervisory signal (error detection by using a return channel)
    • G06F11/3006 Monitoring arrangements where the computing system is distributed, e.g. networked systems, clusters, multiprocessor systems
    • G06F11/3037 Monitoring arrangements where the monitored component is a memory, e.g. virtual memory, cache
    • G06F12/0897 Caches characterised by their organisation or structure, with two or more cache hierarchy levels
Definitions

  • the present invention relates to the field of data processing technologies, and in particular, to a timeout monitoring method and apparatus.
  • An asynchronous request is a communication pattern in which the sender transmits the next data packet immediately after sending data, without waiting for the receiver to return a response.
  • For each request sent, the arrival time of the other party's response must be monitored; if no response arrives within a set time, the request is considered to have timed out and is treated as invalid.
  • A common timeout monitoring method records the request data and associated timestamps in a database after each request is issued, and monitors every outstanding request. If the other party's feedback has not been received after a specified interval, that is, if the request's send time plus the specified interval is earlier than the current time, the response to the request is considered to have timed out.
  • This approach stores all request data in the database and must check every request for timeout.
  • The database therefore has to provide substantial resources for storage and computation, which puts it under pressure; in high-concurrency, high-volume scenarios in particular, the database's resource consumption is severe.
  • Embodiments of the invention provide a timeout monitoring method and apparatus to address the heavy database resource consumption that timeout monitoring of requests incurs in the prior art.
  • the server determines key information of the request message, where the key information includes a sending time of the request message;
  • the server stores the key information to a level 1 cache
  • the server scans the L1 cache according to a set frequency; if the L1 cache contains a first request message, the key information of the first request message is stored in the L2 cache, where the first request message is a request message for which no response message has been received;
  • the server scans the second-level cache and determines, through the message log, whether the second request message in the second-level cache has received a response message; if not, it determines that the second request message has timed out, where the second request message is a request message for which the difference between the sending time and the current time is greater than the timeout threshold.
  • Optionally, the method further includes:
  • the server searches for the request message corresponding to a received response message in the first-level cache;
  • the key information of the request message corresponding to the response message is marked as answered.
  • the server stores the key information to the level 1 cache, including:
  • the server stores the key information of the request message into the corresponding memory area according to the remainder of the request message's write time divided by N; the memory of the level 1 cache is pre-divided into N memory areas, where the size of each memory area is the estimated number of transactions per unit time multiplied by the data segment size of the key information.
  • the server scans the level 1 cache according to a set frequency, including:
  • the server creates N monitoring processes, each monitoring process corresponds to one memory area, and each monitoring process scans the corresponding memory area according to the set frequency.
  • the server scans the L2 cache, and determines, by using a message log, whether the second request message in the L2 cache receives a response message, including:
  • the L2 cache is organized as a linked list, and the key information of the first request messages is stored in the linked list sequentially from the header according to the sending times of the first request messages;
  • the server sequentially queries the key information of the first request message in the linked list from the header, and determines whether the difference between the sending time of the first request message and the current time is greater than the timeout threshold;
  • a timeout monitoring device includes:
  • a writing module configured to determine key information of the request message, where the key information includes a sending time of the request message
  • the writing module is further configured to store the key information to a level 1 cache;
  • the first monitoring module is configured to scan the level 1 cache at the set frequency and, if the level 1 cache contains a first request message, store the key information of the first request message into the second-level cache, where the first request message is a request message for which no response message has been received;
  • a second monitoring module configured to scan the second-level cache and determine, through the message log, whether the second request message in the second-level cache has received a response message, and if not, determine that the second request message has timed out, where the second request message is a request message for which the difference between the sending time and the current time is greater than a timeout threshold.
  • the writing module is further configured to:
  • the key information of the request message corresponding to the response message is marked as answered.
  • the writing module is specifically configured to:
  • the key information of the request message is stored in the corresponding memory area according to the remainder of the request message's write time divided by N; the memory of the first-level cache is pre-divided into N memory areas, where the size of each memory area is the estimated number of transactions per unit time multiplied by the data segment size of the key information.
  • the first monitoring module is specifically configured to:
  • the server creates N monitoring processes, each monitoring process corresponds to one memory area, and each monitoring process scans the corresponding memory area according to the set frequency.
  • the second monitoring module is specifically configured to:
  • the L2 cache is organized as a linked list, and the key information of the first request messages is stored in the linked list sequentially from the header according to the sending times of the first request messages;
  • An embodiment of the present invention provides a computer readable storage medium storing computer executable instructions for causing the computer to perform the method of any of the above.
  • An embodiment of the present invention provides a computing device, including:
  • a memory for storing program instructions
  • a processor configured to invoke a program instruction stored in the memory, and execute the method described in any one of the above according to the obtained program.
  • Embodiments of the present invention provide a computer program product that, when run on a computer, causes the computer to perform the method of any of the above.
  • the key information of the request message is stored in the L1 cache, and the key information of the request message includes the sending time of the request message.
  • the server scans the first-level cache at the set frequency and determines from the key information whether each request message in the first-level cache has received a response message; a request message that has not received a response is treated as a first request message, and the key information of the first request message is stored in the second-level cache. Since most request messages receive their response within a short time, only a small number of request messages in the L1 cache need to be stored in the L2 cache as first request messages.
  • the server scans the L2 cache and computes, from the key information in the L2 cache, the difference between each request message's sending time and the current time. If the difference is greater than the timeout threshold, the request message is treated as a second request message; the server then looks up the message log of the second request message and determines from the log whether the second request message has received its response. If the second request message still has not received a response message, it is determined that the second request message has timed out.
  • the key information of the request messages is stored in the cache, and the response status of the request messages is monitored through the cache; no database storage or computation is required, which reduces the consumption of database resources and relieves the pressure on the database.
  • FIG. 1 is a flowchart of a timeout monitoring method according to an embodiment of the present invention
  • FIG. 3 is a flowchart of a timeout monitoring method in a specific embodiment of the present invention.
  • FIG. 4 is a schematic structural diagram of a timeout monitoring apparatus according to an embodiment of the present invention.
  • FIG. 5 is a schematic structural diagram of a computing device according to an embodiment of the present invention.
  • An embodiment of the present invention provides a timeout monitoring method, and the process is as shown in FIG. 1.
  • the method may include the following steps:
  • Step 101 The server determines key information of the request message, where the key information includes a sending time of the request message.
  • the key information of the request message includes the sending time of the request message; the server stores the sending time in the cache and can calculate from it whether a request message that has not received a response message has exceeded the response time limit.
  • the key information further includes a primary key of the request message. In the distributed cache, the related information of the request message may be quickly queried from the server according to the primary key of the request message.
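The key information described above (sending time plus primary key) can be pictured as a small record. The following Python sketch shows one possible shape; the field names and types are illustrative assumptions, not taken from the patent:

```python
from dataclasses import dataclass

@dataclass
class KeyInfo:
    """Key information cached for one request message (field names assumed)."""
    primary_key: str        # used to look up the full request in the distributed cache
    send_time: float        # epoch seconds at which the request message was sent
    answered: bool = False  # set once the matching response message arrives
```

Only these few fields need to live in the cache; the full request body stays out of the monitoring path.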
  • Step 102 The server stores the key information to a level 1 cache.
  • Step 103: The server scans the L1 cache according to a set frequency. If the L1 cache contains a first request message, the key information of the first request message is stored in the L2 cache, where a first request message is a request message for which no response message has been received.
  • Step 104: The server scans the L2 cache and determines, through the message log, whether the second request message in the L2 cache has received a response message. If not, it determines that the second request message has timed out, where the second request message is a request message for which the difference between the sending time and the current time is greater than the timeout threshold.
  • the key information of the request message is stored in the L1 cache, and the key information of the request message includes the sending time of the request message.
  • the server scans the level 1 cache at the set frequency, determines from the key information whether each request message in the level 1 cache has received a response message, treats a request message that has not received a response as a first request message, and stores the key information of the first request message in the secondary cache. Since most request messages receive their response within a short time, only a small number of request messages in the L1 cache need to be stored in the L2 cache as first request messages.
  • the server scans the L2 cache and computes, from the key information in the L2 cache, the difference between each request message's sending time and the current time. If the difference is greater than the timeout threshold, the request message is treated as a second request message; the server then looks up the message log of the second request message and determines from the log whether the second request message has received its response. If the second request message still has not received the response message, it is determined that the second request message has timed out.
  • the key information of the request message is stored in the cache, and the response condition of the request message is monitored by the cache, and the database is not required to be stored and calculated, thereby alleviating the consumption of the database resource and reducing the pressure on the database.
  • In the embodiment, the server sets up a two-level cache. The first-level cache stores the key information of all sent request messages and therefore requires a larger capacity than the second-level cache. To improve efficiency, the first-level cache is designed as shared memory; if the business volume is small, it can instead be designed as in-process memory or as a memory mapping, depending on transaction volume and business needs. Because the operating system writes the mapped file, a memory mapping can provide recovery after a power loss or a short shutdown while still operating in memory. The secondary cache can likewise be shared memory, and when traffic is small the second request messages can be stored directly in in-process memory, which further improves efficiency.
  • the level 1 cache is divided into multiple memory areas, and the above step 102 includes:
  • the server stores the key information of the request message into the corresponding memory area according to the remainder of the request message's write time divided by N; the memory of the level 1 cache is pre-divided into N memory areas, where the size of each memory area is the estimated number of transactions per unit time multiplied by the data segment size of the key information.
  • the memory of the level 1 cache is divided into N memory areas in advance.
  • The value of N is generally determined by the timeout threshold. Since the timeout threshold is usually several tens of seconds, N is correspondingly set to a value of a few tens.
  • the value of N is set to 60, that is, the memory of the level 1 cache is divided into 60 memory areas. For the convenience of description, these 60 memory areas are numbered from 0 to 59.
  • After the request message is sent, the server adds the key information of the request message to a queue to await writing into the first-level cache.
  • When writing, the current write time is divided by N, and the key information is stored into the memory area indicated by the remainder.
  • With N set to 60, the memory areas are numbered 0 to 59. If the current write time for storing the key information into the L1 cache is 13:48:34, the write time in seconds leaves a remainder of 34 when divided by 60, so the key information of the request message is stored in the memory area numbered 34.
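The bucketing rule from this example can be sketched in Python; the helper name and the use of time-of-day seconds are assumptions made for the sketch:

```python
from datetime import datetime

N = 60  # number of memory areas the level-1 cache is divided into

def memory_area_index(write_time: datetime, n: int = N) -> int:
    """Map a write time to a memory area by taking the remainder of the
    time of day (in seconds) divided by n, as in the patent's example."""
    seconds = write_time.hour * 3600 + write_time.minute * 60 + write_time.second
    return seconds % n
```

With n = 60 this reduces to the seconds field of the clock, so a write at 13:48:34 lands in area 34, matching the example above.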
  • Before use, the server initializes the memory areas of the first-level cache. The initial size of one memory area is the estimated number of transactions per unit time multiplied by the data segment size of the key information. Since the key information of every request message has the same data segment size, all memory areas have the same size within a given time period. Across different time periods, the more transactions per unit time, the larger the initialized memory area. Because the size of each memory area is fixed at initialization from the estimated transaction count, sufficient storage space for the key information is guaranteed. In addition, for different time periods, memory areas of sufficient size can be re-applied for and allocated by adjusting the parameters.
  • While the server stores key information into the memory areas, it also scans each memory area at a certain frequency. To ensure efficiency and ease of management, the server scans each memory area separately. In step 103, the server scanning the level 1 cache at the set frequency includes:
  • the server creates N monitoring processes, each monitoring process corresponds to one memory area, and each monitoring process scans the corresponding memory area according to the set frequency.
  • Specifically, the server in the embodiment of the present invention creates N monitoring processes; each monitoring process corresponds to one memory area and is responsible for scanning that memory area from its starting point.
  • the scanning interval for a memory area is generally set to 1 to 2 seconds; for example, the monitoring process may scan its corresponding memory area every 2 seconds.
  • the server stores the key information of received request messages into the corresponding memory area sequentially from the starting point of the memory area.
  • the monitoring process scans the memory area from its starting point. The monitoring process may set an end tag at the beginning of each scan, so that the scan ends when the first end tag in the memory area is reached; alternatively, each scan may run to the end of the corresponding memory area.
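A minimal sketch of one area scan with an end tag, using a Python list as a stand-in for the shared-memory area and dict-shaped key-information records (all names are assumptions):

```python
import threading

class MemoryArea:
    """One level-1 cache memory area holding key-information records in
    arrival order; a monitor scans from the start up to an end tag placed
    when the scan begins. This is a simplified in-process sketch."""
    def __init__(self):
        self.records = []
        self.lock = threading.Lock()

    def append(self, key_info: dict) -> None:
        with self.lock:
            self.records.append(key_info)

    def scan(self, on_unanswered) -> None:
        # Snapshot up to an end tag set when the scan begins, so records
        # written during the scan are left for the next pass.
        with self.lock:
            snapshot = list(self.records)
        for rec in snapshot:
            if not rec.get("answered"):
                on_unanswered(rec)  # e.g. promote to the level-2 cache
```

One such object per memory area, scanned by its own monitor every 1 to 2 seconds, mirrors the N-process layout described above.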
  • the embodiment of the invention further includes:
  • the server searches for the request message corresponding to the response message in the first level cache
  • the key information of the request message corresponding to the response message is marked as answered.
  • After receiving a response message, the server searches the L1 cache for the corresponding request message and, if it is found, marks the key information of that request message as answered. When the server later scans the key information and finds that a request message has received its response, no further processing is applied to it. If a request message has not received a response, that is, its key information is not marked as answered, the server treats it as a first request message and stores its key information in the second-level cache. After a scan is completed, the scanned key information in the level 1 cache is deleted.
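The answered-marking step might look like the following; the primary-key index over level-1 records is an assumption introduced for the sketch, not a structure named in the patent:

```python
def handle_response(level1_index: dict, response_primary_key: str) -> bool:
    """On receiving a response message, look up the matching request in the
    level-1 cache by primary key and mark its key information as answered.
    Returns True if the request was found and marked."""
    rec = level1_index.get(response_primary_key)
    if rec is not None:
        rec["answered"] = True
        return True
    return False
```

A record marked this way is skipped by the level-1 monitor's next scan instead of being promoted to the level-2 cache.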
  • the server scans the second-level cache, and determines whether the second request message in the second-level cache receives the response message by using the message log, including:
  • the L2 cache is organized as a linked list, and the key information of the first request messages is stored in the linked list sequentially from the header according to the sending times of the first request messages;
  • the server sequentially queries the key information of the first request message in the linked list from the header, and determines whether the difference between the sending time of the first request message and the current time is greater than the timeout threshold;
  • the key information of the first request messages is stored in the second-level cache sequentially from the header: the header of the second-level cache holds the earliest-sent first request message, and the footer holds the latest-sent first request message.
  • Similar to the scanning of the level 1 cache, a monitoring process is set up to scan the second-level cache. The N monitoring processes that scan the level 1 cache are referred to as level-1 monitoring processes, and the process that scans the level 2 cache is referred to as the secondary monitoring process.
  • The secondary monitoring process cyclically scans the L2 cache linked list, starting from the header, and first determines whether the linked list is empty. For each entry, it uses the key information stored in the L2 cache to determine whether the difference between the first request message's sending time and the current time is greater than the timeout threshold, that is, whether the first request message has timed out. If not, the secondary monitoring process moves down the list and continues with the next entry. If the difference is greater than the timeout threshold, the first request message is treated as a second request message: the corresponding message log is queried according to the key information, and the log determines whether the second request message has received its response. If it has, the second request message has been answered and no timeout processing is performed; if it has not, the external timeout processing service is called to apply timeout handling to the second request message. Thereafter, regardless of whether the second request message received a response, the secondary monitoring process removes its key information from the linked list and continues scanning.
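The secondary monitoring pass can be sketched as follows, with a deque standing in for the linked list and a dict for the message log. All names and the threshold value are assumptions; the early stop relies on the header-to-footer send-time ordering described above:

```python
import time
from collections import deque

TIMEOUT_THRESHOLD = 30.0  # seconds; an assumed value, tens of seconds in the text

def scan_level2(chain: deque, message_log: dict, on_timeout, now: float = None) -> None:
    """Walk the level-2 cache from the header (oldest send time first).
    Entries older than the timeout threshold are checked against the message
    log; those still unanswered are handed to the external timeout handler.
    Processed entries are removed from the list either way."""
    now = time.time() if now is None else now
    while chain:
        entry = chain[0]
        if now - entry["send_time"] <= TIMEOUT_THRESHOLD:
            # Entries are ordered oldest-first, so the rest are newer still;
            # a strictly literal reading of the text would keep walking instead.
            break
        chain.popleft()  # removed whether answered or timed out
        answered = message_log.get(entry["primary_key"], {}).get("answered", False)
        if not answered:
            on_timeout(entry)  # call the external timeout processing service
```

Run periodically, this gives the level-2 half of the two-stage design: only the few requests that outlived a level-1 scan are ever checked against the log.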
  • In addition, an exception capture device is provided. It is responsible for capturing request messages sent during a server failure or restart, so that the server can re-process the request messages sent during that period. Alternatively, under manual intervention, request messages from a specified point in time onward can be captured from the database by the exception capture device and re-processed by the server.
  • The foregoing process is described in detail below through a specific embodiment, which includes a message writing unit, a level 1 cache, level-1 monitoring processes, a level 2 cache, and a secondary monitoring process.
  • the first level cache includes 60 memory areas, numbered 0 to 59.
  • There are likewise 60 first-level monitoring processes, each corresponding to one of the 60 memory areas of the first-level cache. Each is responsible for monitoring its memory area and writing the key information of request messages that have not received a response message into the second-level cache.
  • The secondary monitoring process is responsible for monitoring the key information in the secondary cache. If an entry exceeds the timeout threshold, the message log table is consulted to identify the request messages that have still not received a response message, and the external timeout handler is called to apply timeout processing to them.
  • Step 301 The message writing unit acquires key information of the request message, where the key information includes a sending time of the request message.
  • Step 302: The message writing unit determines the write time at which the request message is stored into the L1 cache, and stores the key information of the request message into the corresponding memory area according to the remainder of the write time (in seconds) divided by 60.
  • Step 303 The message writing unit receives the response message, and searches for a request message corresponding to the response message in the L1 cache, and if found, marks the key information of the request message as being answered.
  • Step 304: Each first-level monitoring process scans its corresponding memory area in the level 1 cache at the set frequency; key information that is not marked as answered is treated as belonging to a first request message, and the key information of the first request message is stored into the secondary cache.
  • Step 305: The secondary monitoring process scans the L2 cache from the header and determines, from the key information of the first request messages, the first request messages for which the difference between the sending time and the current time is greater than the timeout threshold; these are treated as second request messages.
  • Step 306 The secondary monitoring process searches for the message log according to the key information of the second request message, and determines that the second request message has not received the response message.
  • Step 307 The secondary monitoring process invokes an external device to perform timeout processing on the second request message.
  • Note that the message writing unit, the first-level monitoring processes, and the secondary monitoring process do not operate in a fixed order: while the message writing unit writes the key information of request messages into the level 1 cache, the first-level monitoring processes scan the level 1 cache at the set frequency, and the secondary monitoring process likewise periodically scans the secondary cache.
  • the embodiment of the present invention further provides a timeout monitoring device, as shown in FIG. 4, including:
  • the writing module 401 is configured to determine key information of the request message, where the key information includes a sending time of the request message;
  • the writing module 401 is further configured to store the key information to a level 1 cache;
  • the first monitoring module 402 is configured to scan the level 1 cache at the set frequency and, if the level 1 cache contains a first request message, store the key information of the first request message into the second-level cache, where the first request message is a request message for which no response message has been received;
  • the second monitoring module 403 is configured to scan the second-level cache and determine, through the message log, whether the second request message in the second-level cache has received a response message; if not, it determines that the second request message has timed out, where the second request message is a request message for which the difference between the sending time and the current time is greater than a timeout threshold.
  • the writing module 401 is further configured to:
  • the key information of the request message corresponding to the response message is marked as answered.
  • the writing module 401 is specifically configured to:
  • the key information of the request message is stored in a corresponding memory area according to a result of the remainder of the request message, and the memory of the level 1 cache is pre-divided into N memory areas, wherein each The size of the memory area is the estimated number of transactions per unit time multiplied by the data segment size of the key information.
  • the first monitoring module 402 is specifically configured to be used
  • the server creates N monitoring processes, each monitoring process corresponds to one memory area, and each monitoring process scans the corresponding memory area according to the set frequency.
  • the second monitoring module 403 is specifically configured to:
  • the L2 cache is in a linked list manner, and the key information of the first request message is sequentially stored in the linked list from the header according to the sending time of the first request message;
  • the message log determines whether the second request message receives the response message; if not, the timeout processing is performed on the second request message.
  • FIG. 5 is a schematic structural diagram of a computing device according to an embodiment of the present invention.
  • The computing device may include a central processing unit 501 (CPU), a memory 502, an input device 503, an output device 504, and so on.
  • The input device 503 may include a keyboard, mouse, touch screen, etc.
  • The output device 504 may include a display device such as a liquid crystal display (LCD) or a cathode ray tube (CRT).
  • Memory 502 can include read only memory (ROM) and random access memory (RAM) and provides program instructions and data stored in the memory to the processor.
  • The memory may be used to store the program of the method provided by any embodiment of the present invention, and the processor executes the method disclosed in any of the embodiments according to the program instructions obtained by calling the program instructions stored in the memory.
  • An embodiment of the present invention further provides a computer-readable storage medium for storing the computer program instructions used by the above computing device, containing a program for executing the method disclosed in any of the above embodiments.
  • The computer storage medium can be any available medium or data storage device accessible by a computer, including but not limited to magnetic storage (e.g., floppy disks, hard disks, magnetic tape, magneto-optical disks (MO), etc.), optical storage (e.g., CD, DVD, BD, HVD, etc.), and semiconductor memory (e.g., ROM, EPROM, EEPROM, non-volatile memory (NAND FLASH), solid-state drives (SSD)).
  • An embodiment of the present invention further provides a computer program product which, when run on a computer, causes the computer to perform the method disclosed in any of the above embodiments.
  • The present invention is described with reference to flowcharts and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions can be provided to a processor of a general-purpose computer, special-purpose computer, embedded processor, or other programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device produce means for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
  • The computer program instructions can also be stored in a computer-readable memory that can direct a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus that implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
  • These computer program instructions can also be loaded onto a computer or other programmable data processing device, such that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Quality & Reliability (AREA)
  • Mathematical Physics (AREA)
  • Environmental & Geological Engineering (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Debugging And Monitoring (AREA)

Abstract

The present invention relates to the field of data processing technology and discloses a timeout monitoring method and device, including: a server determines key information of a request message, the key information including the sending time of the request message; the server stores the key information into a level-1 cache; the server scans the level-1 cache at a set frequency, and if the level-1 cache contains a first request message, stores the key information of the first request message into a level-2 cache, a first request message being a request message for which no response message has been received; the server scans the level-2 cache and determines, via the message log, whether a second request message in the level-2 cache has received a response message; if not, the second request message has timed out, where a second request message is a request message whose difference between sending time and current time exceeds a timeout threshold. The present invention addresses the problem in the prior art that timeout monitoring of requests consumes substantial database resources.

Description

Timeout monitoring method and device
This application claims priority to the Chinese patent application filed with the China Patent Office on December 26, 2016, with application number 201611219406.4 and the invention title "Timeout monitoring method and device", the entire contents of which are incorporated herein by reference.
Technical Field
The present invention relates to the field of data processing technology, and in particular to a timeout monitoring method and device.
Background
On the Internet, data requests are generally sent based on Internet protocols to obtain data information. However, today's network environment is very complex, and not every data request obtains the corresponding data information in time. In contrast to a synchronous request, an asynchronous request is a communication mode in which, after the sender sends data, it sends the next packet directly without waiting for the receiver's response. In a transaction system, after an asynchronous request is sent, the time at which the counterpart's response returns must be monitored; once a certain time has elapsed, the response to that request is considered timed out and invalid.
A common timeout monitoring approach is to record the request data and the relevant timestamp in a database after the request is sent, and to monitor each outgoing request individually: if no feedback from the counterpart is received within the specified interval, that is, if the sending time of the request plus the specified interval is earlier than the current time, the response to the request is considered timed out. This approach stores all request data in the database and must check every request one by one for timeouts, so the database must provide substantial resources for storage and computation and comes under heavy load; in particular, with high concurrency and large volumes of data, database resource consumption is severe.
Summary of the Invention
Embodiments of the present invention provide a timeout monitoring method and device to solve the problem in the prior art that timeout monitoring of requests consumes substantial database resources.
The timeout monitoring method provided by an embodiment of the present invention includes:
a server determines key information of a request message, the key information including the sending time of the request message;
the server stores the key information into a level-1 cache;
the server scans the level-1 cache at a set frequency, and if the level-1 cache contains a first request message, stores the key information of the first request message into a level-2 cache, a first request message being a request message for which no response message has been received;
the server scans the level-2 cache and determines, via the message log, whether a second request message in the level-2 cache has received a response message; if not, it determines that the second request message has timed out, where a second request message is a request message whose difference between sending time and current time exceeds a timeout threshold.
Optionally, the method further includes:
if the server receives a response message, it searches the level-1 cache for the request message corresponding to the response message;
if the request message corresponding to the response message is found, the key information of that request message is marked as answered.
Optionally, the server storing the key information into the level-1 cache includes:
the server determines the write time at which the request message is stored into the level-1 cache;
the server stores the key information of the request message into the corresponding memory area according to the remainder of the write time of the request message modulo N, the memory of the level-1 cache being pre-divided into N memory areas, where the size of each memory area is the estimated number of transactions per unit time multiplied by the data segment size of the key information.
Optionally, the server scanning the level-1 cache at the set frequency includes:
the server creates N monitoring processes, each monitoring process corresponding to one memory area, and each monitoring process scans its corresponding memory area at the set frequency.
Optionally, the server scanning the level-2 cache and determining via the message log whether a second request message in the level-2 cache has received a response message includes:
the level-2 cache uses a linked list, and the key information of first request messages is stored into the linked list in order from the head according to the sending times of the first request messages;
the server queries the key information of the first request messages in the linked list in order from the head and determines whether the difference between the sending time of a first request message and the current time exceeds the timeout threshold;
if so, the first request message is treated as a second request message, and whether the second request message has received a response message is determined according to the message log of the second request message; if not, timeout processing is performed on the second request message.
A timeout monitoring device includes:
a writing module, configured to determine key information of a request message, the key information including the sending time of the request message;
the writing module, further configured to store the key information into a level-1 cache;
a first monitoring module, configured to scan the level-1 cache at a set frequency and, if the level-1 cache contains a first request message, store the key information of the first request message into a level-2 cache, a first request message being a request message for which no response message has been received;
a second monitoring module, configured to scan the level-2 cache and determine, via the message log, whether a second request message in the level-2 cache has received a response message, and if not, determine that the second request message has timed out, where a second request message is a request message whose difference between sending time and current time exceeds a timeout threshold.
Optionally, the writing module is further configured to:
if a response message is received, search the level-1 cache for the request message corresponding to the response message;
if the request message corresponding to the response message is found, mark the key information of that request message as answered.
Optionally, the writing module is specifically configured to:
determine the write time at which the request message is stored into the level-1 cache;
store the key information of the request message into the corresponding memory area according to the remainder of the write time of the request message modulo N, the memory of the level-1 cache being pre-divided into N memory areas, where the size of each memory area is the estimated number of transactions per unit time multiplied by the data segment size of the key information.
Optionally, the first monitoring module is specifically configured so that
the server creates N monitoring processes, each monitoring process corresponding to one memory area, each monitoring process scanning its corresponding memory area at the set frequency.
Optionally, the second monitoring module is specifically configured to:
store, with the level-2 cache organized as a linked list, the key information of first request messages into the linked list in order from the head according to their sending times;
query the key information of the first request messages in the linked list in order from the head, and determine whether the difference between the sending time of a first request message and the current time exceeds the timeout threshold;
if so, treat the first request message as a second request message and determine, according to the message log of the second request message, whether it has received a response message; if not, perform timeout processing on the second request message.
An embodiment of the present invention provides a computer-readable storage medium storing computer-executable instructions for causing the computer to perform any one of the methods described above.
An embodiment of the present invention provides a computing device, including:
a memory, configured to store program instructions;
a processor, configured to call the program instructions stored in the memory and perform any one of the methods described above according to the obtained program.
An embodiment of the present invention provides a computer program product which, when run on a computer, causes the computer to perform any one of the methods described above.
In the embodiments of the present invention, after the server sends a request message, it stores the key information of the request message, which includes the sending time of the request message, into the level-1 cache. The server scans the level-1 cache at a set frequency, determines from the key information whether each request message in the level-1 cache has received a response message, treats those without a response as first request messages, and stores the key information of the first request messages into the level-2 cache. Since most request messages receive a response within a very short time, only a small number of the request messages in the level-1 cache need to be stored into the level-2 cache as first request messages. In addition, the server scans the level-2 cache and, from the key information there, computes the difference between a request message's sending time and the current time; if the difference exceeds the timeout threshold, the request message is treated as a second request message, its message log is looked up, and the log determines whether the second request message has received a response message. If the second request message still has not received a response message, it is determined to have timed out. By storing the key information of request messages in caches and monitoring the response status of request messages through the caches, the embodiments of the present invention need no database storage or computation, easing the consumption of database resources and relieving the load on the database.
Brief Description of the Drawings
To explain the technical solutions in the embodiments of the present invention more clearly, the drawings needed in describing the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention, and those of ordinary skill in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flowchart of a timeout monitoring method in an embodiment of the present invention;
FIG. 2 is a system architecture to which an embodiment of the present invention applies;
FIG. 3 is a flowchart of the timeout monitoring method in a specific embodiment of the present invention;
FIG. 4 is a schematic structural diagram of a timeout monitoring device in an embodiment of the present invention;
FIG. 5 is a schematic structural diagram of a computing device provided by an embodiment of the present invention.
Detailed Description
To make the objectives, technical solutions, and advantages of the present invention clearer, the present invention is described in further detail below with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present invention.
An embodiment of the present invention provides a timeout monitoring method whose flow is shown in FIG. 1; the method may include the following steps:
Step 101: A server determines key information of a request message, the key information including the sending time of the request message.
In the above step, the key information of the request message includes its sending time; the server stores the sending time in the cache and can use it to calculate whether a request message that has not received a response message has exceeded the response time limit. In addition, the key information includes the primary key of the request message; in a distributed cache, the information related to the request message can be quickly queried from the server by its primary key.
Step 102: The server stores the key information into a level-1 cache.
Step 103: The server scans the level-1 cache at a set frequency, and if the level-1 cache contains a first request message, stores the key information of the first request message into a level-2 cache, a first request message being a request message for which no response message has been received.
Step 104: The server scans the level-2 cache and determines, via the message log, whether a second request message in the level-2 cache has received a response message; if not, it determines that the second request message has timed out, where a second request message is a request message whose difference between sending time and current time exceeds a timeout threshold.
In this embodiment of the invention, after the server sends a request message, it stores the key information of the request message, which includes the sending time, into the level-1 cache. The server scans the level-1 cache at a set frequency, determines from the key information whether each request message in the level-1 cache has received a response message, treats those without a response as first request messages, and stores the key information of the first request messages into the level-2 cache. Since most request messages receive a response within a very short time, only a small number of the request messages in the level-1 cache need to be stored into the level-2 cache as first request messages. In addition, the server scans the level-2 cache and, from the key information there, computes the difference between a request message's sending time and the current time; if the difference exceeds the timeout threshold, the request message is treated as a second request message, its message log is looked up, and the log determines whether the second request message has received a response message. If the second request message still has not received a response message, it is determined to have timed out. By storing the key information of request messages in caches and monitoring the response status of request messages through the caches, this embodiment needs no database storage or computation, easing the consumption of database resources and relieving the load on the database.
In this embodiment, to relieve the load on the database, the server sets up two levels of cache. Because the level-1 cache must store the key information of all outgoing request messages, it needs a larger capacity than the level-2 cache and is generally designed as shared memory. To improve efficiency when business volume is small, it may also be designed, according to transaction volume and business needs, as in-process memory data or as a memory-mapped file; memory mapping provides the ability to recover from power failure or brief downtime, since the operating system can write the data to a file while it is being operated on in memory. The level-2 cache may likewise be shared memory; when business volume is small, the second request messages can be kept directly in in-process memory space, which further improves efficiency.
To ease monitoring, the level-1 cache is divided into multiple memory areas, and step 102 above includes:
the server determines the write time at which the request message is stored into the level-1 cache;
the server stores the key information of the request message into the corresponding memory area according to the remainder of the write time of the request message modulo N, the memory of the level-1 cache being pre-divided into N memory areas, where the size of each memory area is the estimated number of transactions per unit time multiplied by the data segment size of the key information.
Specifically, the memory of the level-1 cache is pre-divided into N memory areas. For ease of calculation and classification, the value of N is generally determined from the timeout threshold; since the timeout threshold is usually a few tens of seconds, N may correspondingly be a few tens. In this embodiment, N is set to 60, that is, the memory of the level-1 cache is divided into 60 memory areas, numbered 0 to 59 for convenience of description.
After a request message is sent, the server adds its key information to a queue, waiting to be written into the level-1 cache. When the key information is stored into the level-1 cache, the current write time is taken modulo N and the key information is stored into the corresponding memory area according to the result. For example, in this embodiment N is 60 and the memory areas are numbered 0 to 59: if the current write time when the key information is stored into the level-1 cache is 13:48:34, taking the remainder of the write time modulo 60 seconds gives 34, so the key information of the request message is stored into memory area number 34.
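The bucketing rule above, write time modulo N selecting the memory area, can be sketched in a few lines of Python; the function name is a hypothetical illustration, not from the patent text:

```python
N = 60  # the embodiment divides the level-1 cache into 60 memory areas

def memory_area_index(write_time_seconds: int, n: int = N) -> int:
    """Select the level-1 memory area for a request's key information
    by taking the write time (in whole seconds) modulo n."""
    return write_time_seconds % n

# Worked example from the text: a write at 13:48:34 falls in area 34,
# since the write time leaves remainder 34 modulo 60 seconds.
print(memory_area_index(13 * 3600 + 48 * 60 + 34))  # 34
```

Because the remainder cycles every N seconds, the same area is reused once per minute when N is 60, which is what lets area size be provisioned from the estimated transactions per unit time.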
To guarantee the capacity of the memory areas while keeping the server efficient, in this embodiment the server initializes the memory areas of the level-1 cache; the initialized size of one memory area is the estimated number of transactions per unit time multiplied by the data segment size of the key information. Since the key information of every request message has the same data segment size, all memory areas within a given period are the same size. Across different periods, the more transactions per unit time, the larger the initialized memory area. Because the size of each memory area is decided at initialization according to the estimated peak number of transactions per unit time, sufficient storage space for the key information is ensured. Moreover, for different periods, a memory area of adequate size can be re-requested and allocated by adjusting the parameters.
While the server stores key information into the memory areas, it also scans each memory area at a certain frequency. To ensure efficiency and ease of management, the server scans each memory area separately. Step 103 above, in which the server scans the level-1 cache at the set frequency, then includes:
the server creates N monitoring processes, each monitoring process corresponding to one memory area, and each monitoring process scans its corresponding memory area at the set frequency.
That is, corresponding to the N memory areas, the server in this embodiment sets up N monitoring processes, one per memory area, each responsible for scanning its memory area starting from the area's start point. According to big-data statistics, more than 95% of request messages generally receive a response message within 5 seconds, and nearly 99% of those within 2 seconds; on this basis, the scan interval of a monitoring process over its memory area is generally set to 1 to 2 seconds, i.e., the monitoring process scans the corresponding memory area every 2 seconds. The server stores received request messages into the corresponding memory area starting from the area's start point. A monitoring process scans the memory area from its start point and may place an end marker at the start position of each scan, so that the scan ends when the first end marker in the memory area is reached; alternatively, each scan may run to the end of the corresponding memory area.
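One pass of a primary monitoring process over its memory area, as described above, might look like the following sketch. The dict-based entries and the name `scan_memory_area` are illustrative assumptions; a real deployment would operate on shared memory rather than Python lists:

```python
def scan_memory_area(region, level2_cache):
    """One scan pass of a primary monitoring process: entries whose key
    information is marked answered are simply dropped; unanswered ones
    are treated as first request messages and moved to the level-2
    cache. The area is cleared once the scan completes."""
    for entry in region:
        if not entry.get("answered"):
            level2_cache.append(entry)  # preserves arrival order
    region.clear()

region = [{"key": "tx-1", "answered": True},
          {"key": "tx-2", "answered": False}]
l2 = []
scan_memory_area(region, l2)
print([e["key"] for e in l2])  # only the unanswered request moves on
```

Appending in scan order keeps the level-2 list ordered by sending time, which the secondary monitoring process relies on.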
After the server stores the key information of a request message into the level-1 cache, if it receives the response message for that request message, it modifies the key information of the request message. This embodiment further includes:
if the server receives a response message, it searches the level-1 cache for the request message corresponding to the response message;
if the request message corresponding to the response message is found, the key information of that request message is marked as answered.
Specifically, between scans, if the server receives a response message, it looks up the corresponding request message in the level-1 cache and, if found, marks the key information of that request message as answered. Then, when the server scans that key information and determines that the request message has already received a response message, it performs no further processing on the request message. If the request message has not received a response message, i.e., the corresponding key information is not marked as answered, the server treats it as a first request message and stores its key information into the level-2 cache. After a scan is completed, the key information of that request message is deleted from the level-1 cache.
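The answered-marking step can be sketched as a lookup over the level-1 memory areas; the structure and the name `mark_answered` are assumptions for illustration:

```python
def mark_answered(level1_areas, response_key):
    """On receipt of a response message, find the matching request's key
    information in the level-1 cache and mark it as answered; return
    False when no matching request is found (e.g. already purged)."""
    for area in level1_areas:
        for entry in area:
            if entry["key"] == response_key:
                entry["answered"] = True
                return True
    return False

areas = [[{"key": "tx-7", "answered": False}], []]
print(mark_answered(areas, "tx-7"))  # True; the entry is now marked
print(mark_answered(areas, "tx-9"))  # False; no such request in L1
```

A response that misses the level-1 cache is not lost: the message log still records it, and the secondary monitoring process consults that log before declaring a timeout.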
For the level-2 cache, since the key information of request messages has already been monitored and filtered by the level-1 cache, the key information stored in the level-2 cache amounts to less than 5% of the total key information; the level-2 cache therefore occupies very little memory and can be implemented as an in-process linked list. In this embodiment, the server scanning the level-2 cache and determining via the message log whether a second request message in the level-2 cache has received a response message includes:
the level-2 cache uses a linked list, and the key information of first request messages is stored into the linked list in order from the head according to the sending times of the first request messages;
the server queries the key information of the first request messages in the linked list in order from the head and determines whether the difference between the sending time of a first request message and the current time exceeds the timeout threshold;
if so, the first request message is treated as a second request message, and whether the second request message has received a response message is determined according to its message log; if not, timeout processing is performed on the second request message.
In this embodiment, the key information of first request messages is stored into the level-2 cache in order of their sending times starting from the head of the list, so the head of the level-2 cache holds the earliest-sent first request message and the tail the latest-sent one. Unlike the N monitoring processes that scan the level-1 cache, this embodiment sets up a single monitoring process to scan the level-2 cache; for convenience of description, the N monitoring processes scanning the level-1 cache are called primary monitoring processes, and the monitoring process scanning the level-2 cache the secondary monitoring process. The secondary monitoring process scans the level-2 cache list in a loop starting from the head: if the list is empty, it waits for the next scan; if not, starting from the first request message it uses the key information stored in the level-2 cache to judge whether the difference between the sending time of the first request message and the current time exceeds the timeout threshold, i.e., whether the first request message has timed out. If not, the secondary monitoring process moves down to the next request message in the level-2 cache and continues its examination. If so, i.e., the difference between the sending time and the current time already exceeds the timeout threshold, it treats the first request message as a second request message, queries the corresponding message log by its key information, and determines from the message log whether the second request message has received a response message. If it has, the second request message has already received a response message and no timeout processing is performed on it; if it still has not received a response message, an external timeout-handling service is invoked to perform timeout processing on the second request message. Afterwards, whether or not the second request message received a response message, the secondary monitoring process moves down to the next request message in the level-2 cache and continues until all request messages in the level-2 cache have been examined.
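The secondary monitoring loop just described can be sketched as follows, with a plain list standing in for the linked list and a dict for the message log; the name `secondary_scan` and the data shapes are assumptions:

```python
def secondary_scan(level2_list, message_log, now, timeout_threshold):
    """One loop of the secondary monitoring process: walk the list from
    the head (oldest sending time first). An entry past the threshold
    becomes a second request message and is checked against the message
    log; only entries with no logged response are handed to timeout
    handling."""
    to_time_out = []
    for entry in level2_list:
        if now - entry["send_time"] <= timeout_threshold:
            continue  # not past the threshold; examine the next entry
        if not message_log.get(entry["key"], False):
            to_time_out.append(entry["key"])  # invoke external handler
    return to_time_out

l2 = [{"key": "tx-1", "send_time": 100}, {"key": "tx-2", "send_time": 130}]
log = {}  # neither request has a logged response yet
print(secondary_scan(l2, log, now=140, timeout_threshold=30))  # ['tx-1']
```

Because the list is ordered oldest-first, an implementation could even stop at the first entry inside the threshold; the sketch keeps the full walk to match the text.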
Further, this embodiment is also equipped with an exception-capture device that captures request messages sent while the server fails or restarts, so that the server can reprocess the request messages sent during that period. Alternatively, under manual intervention, the exception-capture device can capture from the database the request messages from a specified point in time onwards and invoke the server to reprocess them.
To understand the present invention more clearly, the above flow is described in detail below with a specific embodiment that includes a message writing unit, a level-1 cache, primary monitoring processes, a level-2 cache, and a secondary monitoring process, as shown in FIG. 2. The message writing unit obtains the key information of sent request messages and writes the key information into the level-1 cache, which contains 60 memory areas numbered 0 to 59. There are likewise 60 primary monitoring processes, in one-to-one correspondence with the 60 memory areas of the level-1 cache; each monitors its corresponding memory area and writes the key information of messages without a response into the level-2 cache. The secondary monitoring process monitors the key information in the level-2 cache; for entries past the timeout threshold, it consults the message log table, determines which request messages still have not received a response message, and invokes an external timeout-handling device to perform timeout processing on those request messages.
The steps of the specific embodiment are shown in FIG. 3 and include:
Step 301: The message writing unit obtains the key information of a request message, the key information including the sending time of the request message.
Step 302: The message writing unit determines the write time at which the request message is stored into the level-1 cache and, according to the remainder of the write time modulo 60 seconds, stores the key information of the request message into the corresponding memory area.
Step 303: The message writing unit receives a response message and searches the level-1 cache for the corresponding request message; if found, the key information of that request message is marked as answered.
Step 304: Each primary monitoring process scans its corresponding memory area in the level-1 cache at the set frequency; key information without the answered mark is treated as a first request message, and the key information of the first request message is stored into the level-2 cache.
Step 305: The secondary monitoring process scans the memory area of the level-2 cache from the head and, from the key information of the first request messages, determines the first request messages whose difference between sending time and current time exceeds the timeout threshold, treating them as second request messages.
Step 306: The secondary monitoring process looks up the message log according to the key information of a second request message and determines that the second request message still has not received a response message.
Step 307: The secondary monitoring process invokes an external device to perform timeout processing on the second request message.
It should be noted that in the above steps the message writing unit, the primary monitoring processes, and the secondary monitoring process do not operate in any fixed order: while the message writing unit writes the key information of request messages into the level-1 cache, the primary monitoring processes are scanning the level-1 cache at the set frequency, and at the same time the secondary monitoring process is periodically scanning the level-2 cache. The step numbers above are only for convenience of narration.
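Steps 301 to 307 can be tied together in one compact, single-threaded sketch. The class and all member names are hypothetical; an actual deployment would use shared memory and separate processes as described above, and this condenses the flow only to make the data movement visible:

```python
class TimeoutMonitor:
    """Minimal single-threaded sketch of the two-level timeout flow."""

    def __init__(self, n=60, threshold=30.0):
        self.n = n
        self.threshold = threshold
        self.l1 = [[] for _ in range(n)]  # N memory areas (step 302)
        self.l2 = []                      # linked list, oldest first
        self.message_log = {}             # key -> response received?

    def write_request(self, key, send_time, write_time=None):
        """Step 302: bucket key information by write time modulo n."""
        write_time = send_time if write_time is None else write_time
        self.l1[int(write_time) % self.n].append(
            {"key": key, "send_time": send_time, "answered": False})

    def receive_response(self, key):
        """Step 303: record the response and mark the L1 entry answered."""
        self.message_log[key] = True
        for region in self.l1:
            for e in region:
                if e["key"] == key:
                    e["answered"] = True

    def primary_scan(self):
        """Step 304: move unanswered entries to L2, then clear L1 areas."""
        for region in self.l1:
            for e in region:
                if not e["answered"]:
                    self.l2.append(e)
            region.clear()

    def secondary_scan(self, now):
        """Steps 305-307: past-threshold entries with no logged response."""
        return [e["key"] for e in self.l2
                if now - e["send_time"] > self.threshold
                and not self.message_log.get(e["key"])]

monitor = TimeoutMonitor(threshold=5.0)
monitor.write_request("tx-1", send_time=0)
monitor.write_request("tx-2", send_time=1)
monitor.receive_response("tx-1")       # tx-1 answered in time
monitor.primary_scan()                 # only tx-2 reaches the L2 cache
print(monitor.secondary_scan(now=10))  # ['tx-2']
```

Note that a response arriving after the entry has moved to L2 is still honored: it lands in `message_log`, which `secondary_scan` consults before declaring a timeout.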
Based on the same technical concept, an embodiment of the present invention further provides a timeout monitoring device, as shown in FIG. 4, including:
a writing module 401, configured to determine key information of a request message, the key information including the sending time of the request message;
the writing module 401, further configured to store the key information into a level-1 cache;
a first monitoring module 402, configured to scan the level-1 cache at a set frequency and, if the level-1 cache contains a first request message, store the key information of the first request message into a level-2 cache, a first request message being a request message for which no response message has been received;
a second monitoring module 403, configured to scan the level-2 cache and determine, via the message log, whether a second request message in the level-2 cache has received a response message, and if not, determine that the second request message has timed out, where a second request message is a request message whose difference between sending time and current time exceeds a timeout threshold.
The writing module 401 is further configured to:
if a response message is received, search the level-1 cache for the request message corresponding to the response message;
if the request message corresponding to the response message is found, mark the key information of that request message as answered.
The writing module 401 is specifically configured to:
determine the write time at which the request message is stored into the level-1 cache;
store the key information of the request message into the corresponding memory area according to the remainder of the write time of the request message modulo N, the memory of the level-1 cache being pre-divided into N memory areas, where the size of each memory area is the estimated number of transactions per unit time multiplied by the data segment size of the key information.
The first monitoring module 402 is specifically configured so that
the server creates N monitoring processes, each monitoring process corresponding to one memory area, each monitoring process scanning its corresponding memory area at the set frequency.
The second monitoring module 403 is specifically configured to:
store, with the level-2 cache organized as a linked list, the key information of first request messages into the linked list in order from the head according to their sending times;
query the key information of the first request messages in the linked list in order from the head, and determine whether the difference between the sending time of a first request message and the current time exceeds the timeout threshold;
if so, treat the first request message as a second request message and determine, according to the message log of the second request message, whether it has received a response message; if not, perform timeout processing on the second request message.
Based on the same technical concept, an embodiment of the present invention further provides a computing device, which may specifically be a desktop computer, a portable computer, a smartphone, a tablet, a personal digital assistant (PDA), or the like. As shown in FIG. 5, a schematic structural diagram of a computing device provided by an embodiment of the present invention, the computing device may include a central processing unit 501 (CPU), a memory 502, an input device 503, an output device 504, and so on; the input device 503 may include a keyboard, mouse, touch screen, etc., and the output device 504 may include a display device such as a liquid crystal display (LCD) or a cathode ray tube (CRT).
The memory 502 may include read-only memory (ROM) and random access memory (RAM) and provides the processor with the program instructions and data stored in the memory. In the embodiments of the present invention, the memory may be used to store the program of the method provided by any embodiment of the present invention, and the processor executes the method disclosed in any of the above embodiments according to the program instructions obtained by calling the program instructions stored in the memory.
Based on the same technical concept, an embodiment of the present invention further provides a computer-readable storage medium for storing the computer program instructions used by the above computing device, containing a program for executing the method disclosed in any of the above embodiments.
The computer storage medium may be any available medium or data storage device accessible by a computer, including but not limited to magnetic storage (e.g., floppy disks, hard disks, magnetic tape, magneto-optical disks (MO), etc.), optical storage (e.g., CD, DVD, BD, HVD, etc.), and semiconductor memory (e.g., ROM, EPROM, EEPROM, non-volatile memory (NAND FLASH), solid-state drives (SSD)), etc.
Based on the same technical concept, an embodiment of the present invention further provides a computer program product which, when run on a computer, causes the computer to perform the method disclosed in any of the above embodiments.
The present invention is described with reference to flowcharts and/or block diagrams of the method, device (system), and computer program product according to embodiments of the invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, special-purpose computer, embedded processor, or other programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus that implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or other programmable data processing device, such that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Although preferred embodiments of the present invention have been described, those skilled in the art, once aware of the basic inventive concept, may make additional changes and modifications to these embodiments. The appended claims are therefore intended to be construed as including the preferred embodiments and all changes and modifications that fall within the scope of the present invention.
Obviously, those skilled in the art can make various changes and variations to the present invention without departing from its spirit and scope. If these modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include them.

Claims (13)

  1. A timeout monitoring method, characterized by comprising:
    a server determining key information of a request message, the key information including the sending time of the request message;
    the server storing the key information into a level-1 cache;
    the server scanning the level-1 cache at a set frequency, and if the level-1 cache contains a first request message, storing the key information of the first request message into a level-2 cache, a first request message being a request message for which no response message has been received;
    the server scanning the level-2 cache and determining, via the message log, whether a second request message in the level-2 cache has received a response message, and if not, determining that the second request message has timed out, wherein a second request message is a request message whose difference between sending time and current time exceeds a timeout threshold.
  2. The method according to claim 1, characterized by further comprising:
    if the server receives a response message, searching the level-1 cache for the request message corresponding to the response message;
    if the request message corresponding to the response message is found, marking the key information of that request message as answered.
  3. The method according to claim 1, characterized in that the server storing the key information into the level-1 cache comprises:
    the server determining the write time at which the request message is stored into the level-1 cache;
    the server storing the key information of the request message into the corresponding memory area according to the remainder of the write time of the request message modulo N, the memory of the level-1 cache being pre-divided into N memory areas, wherein the size of each memory area is the estimated number of transactions per unit time multiplied by the data segment size of the key information.
  4. The method according to claim 3, characterized in that the server scanning the level-1 cache at the set frequency comprises:
    the server creating N monitoring processes, each monitoring process corresponding to one memory area, each monitoring process scanning its corresponding memory area at the set frequency.
  5. The method according to claim 1, characterized in that the server scanning the level-2 cache and determining via the message log whether a second request message in the level-2 cache has received a response message comprises:
    the level-2 cache using a linked list, the key information of first request messages being stored into the linked list in order from the head according to their sending times;
    the server querying the key information of the first request messages in the linked list in order from the head, and determining whether the difference between the sending time of a first request message and the current time exceeds the timeout threshold;
    if so, treating the first request message as a second request message and determining, according to the message log of the second request message, whether it has received a response message; if not, performing timeout processing on the second request message.
  6. A timeout monitoring device, characterized by comprising:
    a writing module, configured to determine key information of a request message, the key information including the sending time of the request message;
    the writing module, further configured to store the key information into a level-1 cache;
    a first monitoring module, configured to scan the level-1 cache at a set frequency and, if the level-1 cache contains a first request message, store the key information of the first request message into a level-2 cache, a first request message being a request message for which no response message has been received;
    a second monitoring module, configured to scan the level-2 cache and determine, via the message log, whether a second request message in the level-2 cache has received a response message, and if not, determine that the second request message has timed out, wherein a second request message is a request message whose difference between sending time and current time exceeds a timeout threshold.
  7. The device according to claim 6, characterized in that the writing module is further configured to:
    if a response message is received, search the level-1 cache for the request message corresponding to the response message;
    if the request message corresponding to the response message is found, mark the key information of that request message as answered.
  8. The device according to claim 6, characterized in that the writing module is specifically configured to:
    determine the write time at which the request message is stored into the level-1 cache;
    store the key information of the request message into the corresponding memory area according to the remainder of the write time of the request message modulo N, the memory of the level-1 cache being pre-divided into N memory areas, wherein the size of each memory area is the estimated number of transactions per unit time multiplied by the data segment size of the key information.
  9. The device according to claim 8, characterized in that the first monitoring module is specifically configured so that
    the server creates N monitoring processes, each monitoring process corresponding to one memory area, each monitoring process scanning its corresponding memory area at the set frequency.
  10. The device according to claim 6, characterized in that the second monitoring module is specifically configured to:
    store, with the level-2 cache organized as a linked list, the key information of first request messages into the linked list in order from the head according to their sending times;
    query the key information of the first request messages in the linked list in order from the head, and determine whether the difference between the sending time of a first request message and the current time exceeds the timeout threshold;
    if so, treat the first request message as a second request message and determine, according to the message log of the second request message, whether it has received a response message; if not, perform timeout processing on the second request message.
  11. A computer-readable storage medium, characterized in that the computer-readable storage medium stores computer-executable instructions for causing a computer to perform the method according to any one of claims 1 to 5.
  12. A computing device, characterized by comprising:
    a memory, configured to store program instructions;
    a processor, configured to call the program instructions stored in the memory and perform the method according to any one of claims 1 to 5 according to the obtained program.
  13. A computer program product, characterized in that, when the computer program product runs on a computer, it causes the computer to perform the method according to any one of claims 1 to 5.
PCT/CN2017/117733 2016-12-26 2017-12-21 一种超时监控方法及装置 WO2018121404A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US16/470,206 US11611634B2 (en) 2016-12-26 2017-12-21 Method and device for timeout monitoring
EP17887171.1A EP3562096B1 (en) 2016-12-26 2017-12-21 Method and device for timeout monitoring

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201611219406.4A CN106789431B (zh) 2016-12-26 2016-12-26 一种超时监控方法及装置
CN201611219406.4 2016-12-26

Publications (1)

Publication Number Publication Date
WO2018121404A1 true WO2018121404A1 (zh) 2018-07-05

Family

ID=58926970

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/117733 WO2018121404A1 (zh) 2016-12-26 2017-12-21 一种超时监控方法及装置

Country Status (4)

Country Link
US (1) US11611634B2 (zh)
EP (1) EP3562096B1 (zh)
CN (1) CN106789431B (zh)
WO (1) WO2018121404A1 (zh)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106789431B (zh) * 2016-12-26 2019-12-06 中国银联股份有限公司 一种超时监控方法及装置
CN109901918B (zh) * 2017-12-08 2024-04-05 北京京东尚科信息技术有限公司 一种处理超时任务的方法和装置
CN108712494A (zh) * 2018-05-18 2018-10-26 阿里巴巴集团控股有限公司 处理异步消息的方法、装置及设备
CN108418903B (zh) * 2018-05-28 2024-02-02 苏州德姆斯信息技术有限公司 嵌入式软件日志远程访问系统及访问方法
CN109543988A (zh) * 2018-11-16 2019-03-29 中国银行股份有限公司 优化交易超时阀值的方法、装置和存储介质
CN111611090B (zh) * 2020-05-13 2021-12-28 浙江创邻科技有限公司 分布式消息处理方法及系统
CN112104521A (zh) * 2020-09-08 2020-12-18 北京金山云网络技术有限公司 请求超时监控方法、装置、计算机设备和存储介质
CN112181701B (zh) * 2020-09-23 2024-08-13 中国建设银行股份有限公司 一种定位异常业务请求的方法和装置
CN112787958B (zh) * 2021-01-05 2022-09-20 北京字跳网络技术有限公司 延迟消息处理方法及设备
CN113115138B (zh) * 2021-03-24 2022-08-05 烽火通信科技股份有限公司 消息交互超时判断方法、装置、设备及存储介质

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1859183A (zh) * 2005-12-30 2006-11-08 华为技术有限公司 一种实现设备状态轮询的方法及装置
US20080168446A1 (en) * 2007-01-10 2008-07-10 Jinmei Shen Method and Apparatus for Handling Service Requests in a Data Processing System
CN105471616A (zh) * 2014-09-12 2016-04-06 博雅网络游戏开发(深圳)有限公司 缓存系统管理方法和系统
CN105516548A (zh) * 2015-11-27 2016-04-20 中央电视台 一种文件预读方法及装置
CN105847184A (zh) * 2016-02-22 2016-08-10 乐视移动智能信息技术(北京)有限公司 用于android操作系统的网络请求方法、装置和系统
CN106210021A (zh) * 2016-07-05 2016-12-07 中国银行股份有限公司 金融应用系统联机业务的实时监控方法以及监控装置
CN106789431A (zh) * 2016-12-26 2017-05-31 中国银联股份有限公司 一种超时监控方法及装置

Family Cites Families (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6631402B1 (en) * 1997-09-26 2003-10-07 Worldcom, Inc. Integrated proxy interface for web based report requester tool set
US8176186B2 (en) * 2002-10-30 2012-05-08 Riverbed Technology, Inc. Transaction accelerator for client-server communications systems
US8938553B2 (en) * 2003-08-12 2015-01-20 Riverbed Technology, Inc. Cooperative proxy auto-discovery and connection interception through network address translation
JP4144882B2 (ja) * 2004-05-14 2008-09-03 インターナショナル・ビジネス・マシーンズ・コーポレーション 情報処理装置、情報システム、プロキシ処理方法、及びプログラムと記録媒体
US8561116B2 (en) * 2007-09-26 2013-10-15 Charles A. Hasek Methods and apparatus for content caching in a video network
CN101702173A (zh) * 2009-11-11 2010-05-05 中兴通讯股份有限公司 一种提高移动门户网站动态页面访问速度的方法和装置
US8452888B2 (en) * 2010-07-22 2013-05-28 International Business Machines Corporation Flow control for reliable message passing
WO2012018430A1 (en) * 2010-07-26 2012-02-09 Seven Networks, Inc. Mobile network traffic coordination across multiple applications
WO2013015835A1 (en) * 2011-07-22 2013-01-31 Seven Networks, Inc. Mobile application traffic optimization
CN108429800B (zh) * 2010-11-22 2020-04-24 杭州硕文软件有限公司 一种移动设备
US8621075B2 (en) * 2011-04-27 2013-12-31 Seven Metworks, Inc. Detecting and preserving state for satisfying application requests in a distributed proxy and cache system
US9996403B2 (en) * 2011-09-30 2018-06-12 Oracle International Corporation System and method for providing message queues for multinode applications in a middleware machine environment
CN103581225A (zh) * 2012-07-25 2014-02-12 中国银联股份有限公司 分布式系统中的节点处理任务的方法
US20140082129A1 (en) * 2012-09-18 2014-03-20 Netapp, Inc. System and method for managing a system of appliances that are attached to a networked file system
US9654353B2 (en) * 2012-12-13 2017-05-16 Level 3 Communications, Llc Framework supporting content delivery with rendezvous services network
US9967780B2 (en) * 2013-01-03 2018-05-08 Futurewei Technologies, Inc. End-user carried location hint for content in information-centric networks
CN103858112A (zh) * 2013-12-31 2014-06-11 华为技术有限公司 一种数据缓存方法、装置及系统
US9584617B2 (en) * 2013-12-31 2017-02-28 Successfactors, Inc. Allocating cache request in distributed cache system based upon cache object and marker identifying mission critical data
US20150339178A1 (en) * 2014-05-21 2015-11-26 Freescale Semiconductor, Inc. Processing system and method of operating a processing system
US10044795B2 (en) * 2014-07-11 2018-08-07 Vmware Inc. Methods and apparatus for rack deployments for virtual computing environments
JP6370993B2 (ja) * 2014-08-07 2018-08-08 インテル アイピー コーポレイション サード・パーティー・サーバが問題に直面した場合のアプリケーションからのトラフィックのコントロール
CN104917645B (zh) * 2015-04-17 2018-04-13 浪潮电子信息产业股份有限公司 一种在线检测报文传输超时的方法与装置
US9747179B2 (en) * 2015-10-29 2017-08-29 Netapp, Inc. Data management agent for selective storage re-caching
CN105721632A (zh) * 2016-04-12 2016-06-29 上海斐讯数据通信技术有限公司 一种基于dns机制的无线接入方法及无线接入设备
CN108073446B (zh) * 2016-11-10 2020-11-17 华为技术有限公司 超时预判方法及装置

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1859183A (zh) * 2005-12-30 2006-11-08 华为技术有限公司 一种实现设备状态轮询的方法及装置
US20080168446A1 (en) * 2007-01-10 2008-07-10 Jinmei Shen Method and Apparatus for Handling Service Requests in a Data Processing System
CN105471616A (zh) * 2014-09-12 2016-04-06 博雅网络游戏开发(深圳)有限公司 缓存系统管理方法和系统
CN105516548A (zh) * 2015-11-27 2016-04-20 中央电视台 一种文件预读方法及装置
CN105847184A (zh) * 2016-02-22 2016-08-10 乐视移动智能信息技术(北京)有限公司 用于android操作系统的网络请求方法、装置和系统
CN106210021A (zh) * 2016-07-05 2016-12-07 中国银行股份有限公司 金融应用系统联机业务的实时监控方法以及监控装置
CN106789431A (zh) * 2016-12-26 2017-05-31 中国银联股份有限公司 一种超时监控方法及装置

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3562096A4

Also Published As

Publication number Publication date
CN106789431A (zh) 2017-05-31
US20200021665A1 (en) 2020-01-16
EP3562096B1 (en) 2023-04-19
US11611634B2 (en) 2023-03-21
EP3562096A4 (en) 2020-01-01
CN106789431B (zh) 2019-12-06
EP3562096A1 (en) 2019-10-30

Similar Documents

Publication Publication Date Title
WO2018121404A1 (zh) 一种超时监控方法及装置
WO2019174129A1 (zh) 事件提醒方法、装置、计算机设备和存储介质
US10013278B2 (en) Methods and systems for batch processing in an on-demand service environment
US9805078B2 (en) Next generation near real-time indexing
US20160142369A1 (en) Service addressing in distributed environment
US9515901B2 (en) Automatic asynchronous handoff identification
WO2018014846A1 (zh) 一种应用消息推送方法、装置
WO2020034951A1 (zh) 利用前端编程语言优化图片懒加载的方法以及电子设备
US20160323160A1 (en) Detection of node.js memory leaks
WO2023232120A1 (zh) 数据处理方法、电子设备及存储介质
US10007562B2 (en) Business transaction context for call graph
WO2018086454A1 (zh) 页面数据处理方法和装置
WO2017157111A1 (zh) 防止内存数据丢失的的方法、装置和系统
US9052796B2 (en) Asynchronous handling of an input stream dedicated to multiple targets
US8510426B2 (en) Communication and coordination between web services in a cloud-based computing environment
WO2023155591A1 (zh) 进度信息管控方法、微服务装置、电子设备及存储介质
CN108390770B (zh) 一种信息生成方法、装置及服务器
WO2018201993A1 (zh) 图像绘制方法、终端及存储介质
US9659041B2 (en) Model for capturing audit trail data with reduced probability of loss of critical data
CN114048059A (zh) 接口的超时时间调整方法、装置、计算机设备及存储介质
CN114238264A (zh) 数据处理方法、装置、计算机设备和存储介质
US9935856B2 (en) System and method for determining end user timing
CN101453386A (zh) 网络的封包撷取方法
US20230101349A1 (en) Query processing method, electronic device and storage medium
CN117667144A (zh) 一种注解热刷新方法、装置、设备及介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17887171

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

WWE Wipo information: entry into national phase

Ref document number: 2017887171

Country of ref document: EP