WO2004079930A2 - Asynchronous mechanism and message pool - Google Patents

Asynchronous mechanism and message pool

Info

Publication number
WO2004079930A2
WO2004079930A2 PCT/US2004/006887
Authority
WO
WIPO (PCT)
Prior art keywords
messages
message
cells
receiving
writing
Prior art date
Application number
PCT/US2004/006887
Other languages
French (fr)
Other versions
WO2004079930A3 (en)
Inventor
Jianguo Jiang
Yaping Liu
Jingwei Liang
Wei Huang
Shijun Wu
Original Assignee
Messagesoft, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Messagesoft, Inc. filed Critical Messagesoft, Inc.
Priority to EP04717486A priority Critical patent/EP1606719A4/en
Priority to NZ542871A priority patent/NZ542871A/en
Priority to AU2004217278A priority patent/AU2004217278B2/en
Publication of WO2004079930A2 publication Critical patent/WO2004079930A2/en
Publication of WO2004079930A3 publication Critical patent/WO2004079930A3/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/54Interprogram communication
    • G06F9/546Message passing systems or structures, e.g. queues
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00Traffic control in data switching networks
    • H04L47/10Flow control; Congestion control
    • H04L47/24Traffic characterised by specific attributes, e.g. priority or QoS
    • H04L47/2441Traffic characterised by specific attributes, e.g. priority or QoS relying on flow classification, e.g. using integrated services [IntServ]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00Traffic control in data switching networks
    • H04L47/50Queue scheduling
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00Traffic control in data switching networks
    • H04L47/50Queue scheduling
    • H04L47/52Queue scheduling by attributing bandwidth to queues
    • H04L47/521Static queue service slot or fixed bandwidth allocation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00Traffic control in data switching networks
    • H04L47/50Queue scheduling
    • H04L47/62Queue scheduling characterised by scheduling criteria
    • H04L47/6215Individual queue per QOS, rate or priority
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L49/00Packet switching elements
    • H04L49/90Buffering arrangements
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L49/00Packet switching elements
    • H04L49/90Buffering arrangements
    • H04L49/9036Common buffer combined with individual queues
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L51/00User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
    • H04L51/21Monitoring or handling of messages
    • H04L51/214Monitoring or handling of messages using selective forwarding
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L51/00User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
    • H04L51/21Monitoring or handling of messages
    • H04L51/226Delivery according to priorities
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L51/00User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
    • H04L51/21Monitoring or handling of messages
    • H04L51/23Reliability checks, e.g. acknowledgments or fault reporting

Definitions

  • the invention relates generally to communication systems.
  • the handling, transmission and storage of wireless and wired messages, using a protocol such as TCP/IP, can be problematic due to slowness in the handling of incoming traffic and delivery of outbound traffic.
  • Incoming handling can be slowed by slow file creation, fragmentation and overhead delays arising from the storage of individual messages as files in a storage system.
  • Outbound message delivery speed can be limited by the slow and unpredictable nature of looking up domain name servers, slow or delayed remote server response, remote systems being busy or down, and finite computer resources, such as size of the memory.
  • a system and method for handling incoming and outgoing messages includes a message processing system operable to write messages in batches to a message cell pool structure.
  • a data processor receives messages that are retained in a memory.
  • the data processor is operable to process the messages from the memory and write the messages in batches to individual cells of a message cell pool structure.
  • the cells receive and retain the messages.
  • the message cell pool structure is provided on a storage system with the cell pool having a number of cells of predetermined size.
  • the message cell pool structure can be of the form of a first-in first-out (FIFO) queue where messages are written in an order that generally corresponds to receipt.
  • a table map keeps track of the message location and status or other message information within the cells. Messages are written in batches, so as to eliminate the need to write each individual message as a file to the storage system. Further, because of the queue structure of the message cell pool, fragmentation of the writing of the batches to the storage system can be eliminated.
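The batch-write scheme and table map described above can be sketched as follows. The class and field names, the cell size, and the in-memory list standing in for on-disk cells are illustrative assumptions, not the patent's implementation.

```python
CELL_SIZE = 1024  # bytes; assumed size chosen so most messages fit one cell

class MessageCellPool:
    def __init__(self, num_cells):
        self.cells = [None] * num_cells   # stand-in for on-disk cells
        self.table_map = []               # one entry per (message, cell) pair
        self.next_cell = 0                # FIFO write position

    def write_batch(self, messages):
        """Write a batch of messages in one pass; a message larger than
        one cell spills over into the following cell(s)."""
        for msg in messages:
            chunks = [msg[i:i + CELL_SIZE]
                      for i in range(0, len(msg), CELL_SIZE)] or [b""]
            for k, chunk in enumerate(chunks):
                self.cells[self.next_cell] = chunk
                self.table_map.append({
                    "cell": self.next_cell,            # location of the cell
                    "size": len(msg),                  # total message size
                    "end_of_message": k == len(chunks) - 1,
                    "processed": False,
                })
                self.next_cell += 1

pool = MessageCellPool(num_cells=8)
pool.write_batch([b"a" * 100, b"b" * 2000])   # second message spans two cells
```

Because the whole batch lands in consecutive cells of one pre-allocated structure, no per-message file creation occurs.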
  • a delivery processor is provided to oversee the delivery processing of the stored messages.
  • the delivery processor can read messages from the message cell pool structure and attempt delivery.
  • Associated with the delivery processor can be one or more output queues.
  • the output queues can be used to receive messages from the message cell pool structure (e.g., at a time when messages from the message cell pool structure are ready for delivery) for further processing by the delivery processor.
  • the output queues can be serviced in accordance with a predefined ordering or policy (e.g., implementing quality of service differentiation for different classes of messages). Alternatively, output queues can be used only for storage of messages that are delayed, or otherwise difficult to deliver.
  • the messages need not remain in the memory longer than a given messaging protocol requires for server acknowledgement.
  • the system allows for batching of messages that have been received.
  • a message can be written to one or more cells.
  • the cells can be optimized to accommodate the size of the message, for example, the cells can be sized to store at least 80% of the average sized message. Portions of a message can be written to cells prior to completely receiving a given message. Completely received messages can be written to the cells prior to completely receiving other messages.
  • the system need not wait to receive the entirety of the first message before processing the second message (i.e., writing the second message as part of a batch to the message cell pool structure).
  • the system may not need to wait to receive the entire message before processing a shorter message that is received in full.
  • the system can have a plurality of interfaces and can contemporaneously receive a plurality of messages that are processed for subsequent transmission.
  • Cells can be written after either a predetermined time or after a predefined number of messages have been received. Messages can be written to and retrieved from the message cell pool structure in a first-in first-out sequence. Writing to cells can minimize disk fragmentation.
  • a number of connections can be made for each delivery attempt. Multiple threads can be used to process the messages.
  • the system can switch between awaiting connections, skipping over delayed transmissions. For delayed messages, the system only takes action on an awaiting connection when the receiver notifies the system that it is ready for further processing of the message.
  • Messages with delivery failure can be returned and identified as returned by marking the message.
  • Failed messages can be kept in separate storage for later processing. Storing a message in separate storage for later processing can reduce the bottleneck of message transmittal caused by returned messages.
  • the methods used by the system can be applied to persistent store and forward systems that handle files rather than messages.
  • the proposed message processing system solves the problems associated with large volumes of incoming messages resulting in storage management bottlenecks and slow delivery due to limited system resources and unpredictability in the real world messaging environment.
  • the proposed message processing system avoids the problem of slow handling of returned messages.
  • the use of a message pool and batch writing increases storage management speed.
  • Asynchronous delivery ensures that incoming messages are delivered without choking the outgoing message system.
  • Backup storage provides for efficient processing of returned messages to reduce the system burden.
  • Figure 1 is a block diagram of a message processing system for receipt and storage of messages.
  • Figure 2 is a flow diagram of message receipt and storage.
  • Figure 3 is a flow diagram of message processing for delivery.
  • a message processing system 10 is provided for processing messages received from one or more wired or wireless devices.
  • message processing system 10 is embodied in a multimedia messaging service center (MMSC) employing a data processing unit 20.
  • the message processing system 10 handles incoming 12 and outbound 14 messages.
  • Message processing system 10 receives messages from wired and wireless devices 102, 104, 106, such as the Internet, PCs, PDAs, cell phones, etc.
  • Data processing unit 20 includes a storage memory 30 for receiving messages 12; an input processor 32 for processing the messages 12; a message cell pool 34 with a plurality of cells 117 of predetermined size; and a table map 36 for identifying messages.
  • message processing system 10 can have a separate delivery processor 40 and an output queue structure including one or more output queues 38 for delivery of messages.
  • One or more of output queues 38 can be used to store messages that have failed to be transmitted to a remote receiver after a predetermined time lag, or that are returned from the remote system for various reasons, such as lack of storage and invalid recipient identity.
  • the messages that have failed to be transmitted and that are returned messages can be processed independently of the messages that have not been returned or failed to be transmitted.
  • Output queues 38 are discussed in greater detail below.
  • the message processing system 10 receives messages 12 from the clients, such as the computer 102 and the wireless client, e.g. palm pilot 104 or mobile phone 106.
  • the messages 12 are received at storage memory 30 in a successive order, as indicated by msg1, msg2, msg3, etc. 112, and retained in the storage memory 30 during a protocol lag.
  • the protocol lag defines the time period in which an acknowledgment signal must be returned to a message sender in accordance with the messaging protocol being used. Absent the acknowledgement signal, the message sender may attempt retransmission of the message or otherwise indicate message failure.
  • Messages can include meta-data, such as the address of the sender and the address of the intended recipient(s).
  • storage memory 30 is a random access memory (RAM).
  • Input processor 32 is operable to identify the receipt of a first message and gather messages for batch writing to message cell pool 34. After the predetermined time of the protocol lag has expired for the first message, the message is acknowledged and all of the messages received during the lag are transferred in a batch to individual cells 117 in the message cell pool 34.
  • messages 112 are marked prior to being written to the message cell pool 34 (e.g., in one implementation, message headers are marked) so as to be able to identify returned messages.
  • Message handling by input processor 32 is described in greater detail below in association with Fig. 2.
  • the message cell pool 34 is allocated in a storage system of a computer readable medium, which in one implementation is a hard disk.
  • the message cell pool 34 comprises cells 117 of a specific predetermined size or number of bytes.
  • the size of the cells 117 can be selected in relation to the size of the anticipated messages, where the size can be sufficient to store the average sized message in a single cell.
  • the cells 117 are sized to allow each message 120 to be stored in a single cell.
  • cells are sized such that at least substantially 80% of the received messages fit into a single cell.
  • the messaging environment can determine the pool size, such that the capacity of the message cell pool 34 is typically capable of handling the anticipated volume of messages. Messages 120 larger than the capacity of a single cell can spill over into the next cell.
  • Each cell 117 can be filled in accordance with its respective position in the message cell pool 34 in numerical order.
  • the table map 36 can comprise a number of entries, where each entry is a row comprising a number of datum, each datum in an individual column. Each entry has data, such as the location 124 of a cell 117 in relation to a message 120, the size 126 of the message 120, whether the cell 117 contains the end of the message 130, whether the message 120 has been processed, e.g. transmitted, what the status of the message is and other message-related information.
  • the table map 36 can have additional data, such as a time stamp.
  • the table map 36 can be on the same data storage entity (as the message cell pool 34) or a different entity that is connected to the same or a different data processor. The table map can be referred to when randomly accessing messages from storage, as described further below.
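The table map entries described above might be modeled as follows; the field names and the helper for random access are assumptions for illustration, not the patent's data layout.

```python
from dataclasses import dataclass, field
import time

@dataclass
class TableMapEntry:
    cell: int              # location of the cell holding (part of) a message
    size: int              # total message size in bytes
    end_of_message: bool   # does this cell contain the end of the message?
    processed: bool = False
    status: str = "stored"
    timestamp: float = field(default_factory=time.time)  # optional time stamp

def find_unprocessed(table_map):
    """Random access via the table map: return indexes of entries whose
    messages still await delivery processing."""
    return [i for i, e in enumerate(table_map) if not e.processed]
```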
  • the messages 120 that are stored in the message cell pool 34 are then processed, such as by delivering the messages.
  • An asynchronous delivery mechanism can be employed. Other actions that can be performed in the processing are virus scanning, spam detection, pornography detection, etc.
  • input processor 32 processes the messages for delivery.
  • a separate delivery processor 40 can be provided.
  • Input processor 32 and delivery processor 40 can be a same processor. For purposes of simplicity of the description, the delivery processing will be described in an implementation that includes a delivery processor. Other implementations are possible.
  • delivery processor 40 operates to process each message individually in the order written into the message cell pool 34, reading each message from the cells 117 based on the table map 36 information and attempting to transmit the message to the designated receiver.
  • a delivery attempt may include opening a TCP/IP connection and exchanging protocols. After a successful delivery, the successful delivery can be recorded in the table map 36.
  • delivery processor 40 uses a pool of threads. Each thread simultaneously opens a number of TCP/IP connections for operations such as DNS lookup, protocol exchange or data transmission, and manages the state of message processing for all the connections. In case of a delay in the remote system, such as DNS service taking a long time to resolve an IP address from a domain name, or when the remote server is busy handling other requests and is unresponsive, or the remote server is temporarily down, the thread will set a message aside by recording the current state of message processing in a memory, such as memory 50, and utilize the same thread to handle another message using another connection. Delivery processor 40 can use a callback mechanism to notify the thread when the set-aside message is ready for further action.
  • the callback mechanism originates from an underlying computer network layer I/O device driver.
  • a thread can complete a transmission in progress prior to taking up the set aside message for further processing.
  • the thread can be configured to take on each message one by one.
  • that set aside message can be characterized as a failed transmission and moved to a backup storage (not shown).
  • Set aside messages can be stored, such as in processor memory 50.
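The set-aside mechanism can be sketched as below. Real sockets and the network-layer callback are replaced by a plain callable and a direct method call, and all names are assumptions for illustration.

```python
from collections import deque

class DeliveryWorker:
    """One worker's view: messages whose remote side is slow are set
    aside with their saved processing state; a callback (invoked here
    directly, in practice by the network I/O layer) marks them ready."""
    def __init__(self):
        self.ready_queue = deque()
        self.set_aside = {}      # msg_id -> saved processing state
        self.delivered = []

    def submit(self, msg_id):
        self.ready_queue.append(msg_id)

    def on_remote_ready(self, msg_id):
        """Callback from the I/O layer: the saved state would be
        restored here before re-queueing the message."""
        self.set_aside.pop(msg_id)
        self.ready_queue.append(msg_id)

    def run_once(self, remote_is_slow):
        """Process one message; remote_is_slow(msg_id) stands in for a
        real delay check (slow DNS, busy or down remote server)."""
        if not self.ready_queue:
            return
        msg_id = self.ready_queue.popleft()
        if remote_is_slow(msg_id):
            self.set_aside[msg_id] = {"stage": "dns_lookup"}  # saved state
        else:
            self.delivered.append(msg_id)
```

The worker never blocks on a slow remote: it records the state and moves to the next message on another connection.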
  • the message can be saved in the message cell pool 34 just as other newly received messages.
  • the delivery processor 40 can recognize the message as returned by analyzing the message header or any other identifiable feature that indicates the message was sent out from the message processing system 10.
  • the delivery processor 40 can store returned messages in a file-based or backup system and remove the message from the message cell pool 34. Periodically the system can process returned messages until either they are successfully handled or have reached a maximum number of attempts for processing and are discarded. Delivery processing is described in greater detail below in association with FIG. 3.
  • a message processing system including the components described above can be used to receive a message, as described below and in Fig. 2.
  • Clients transmit messages to the system.
  • Each message includes data, the address of the sender and the recipient(s) and can optionally include a traversing path, such as IP address.
  • the messages are broken up into multiple packets for transmitting.
  • a packet is received by the system 1021, such as by the data processing unit 20 of an MMSC.
  • the data processing unit 20 determines whether the packet is the first of a group of packets, i.e., a message, or whether the packet belongs to a group of packets that the system has already begun to receive 1025.
  • If the memory does not already contain packets from a corresponding message (the "yes" branch), a new message is designated. If the received packet belongs with packets of a corresponding message (the "no" branch), the packet is stored, i.e., in storage memory 30, with the rest of the received packets from the corresponding message 1036.
  • the packets for each message have a sequence in which they can be ordered.
  • the data processing unit 20 determines whether the message to which the packet was added is complete 1041. If the message is not complete (the "no" branch), the data processing unit awaits a new packet 1021. If the message is complete (the "yes" branch), the data processing unit determines whether a trigger event has occurred 1057. In one implementation, the trigger event can be the receipt of a predetermined number of messages. Other trigger events are discussed below. If the predetermined number of completed messages has not been received (the "no" branch), the processor continues to receive packets 1021. If the predetermined number of complete messages has been received (the "yes" branch), i.e., the capture period has elapsed, the completed messages are batched 1054.
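The packet-reassembly flow above (new message vs. continuation, completeness check) can be sketched as follows; the packet tuple layout (msg_id, seq, total, data) is an assumption for illustration.

```python
def assemble(packets, buffers):
    """Feed packets of the form (msg_id, seq, total, data) into
    per-message buffers; return the messages completed by this batch
    of packets, ready to be handed to the batch writer."""
    completed = []
    for msg_id, seq, total, data in packets:
        parts = buffers.setdefault(msg_id, {})  # first packet designates a new message
        parts[seq] = data
        if len(parts) == total:                 # message complete
            completed.append(b"".join(parts[i] for i in range(total)))
            del buffers[msg_id]
    return completed
```

Note that a short message arriving in full is completed immediately, without waiting for a longer message whose packets are still in flight.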
  • the batched messages are written to a persistent storage (i.e. message cell pool 34) 1059.
  • portions of a message can be written as part of a batch, that is, input processor 32 can be configured to include in a batch a portion of message that has been received such that the portion can be written to message cell pool 34 prior to receipt of the entire message.
  • In one implementation of input processor 32, when a write-to-cells operation occurs, all portions of messages and complete messages are written to cells (i.e., input processor 32 does not wait for complete messages to be received before writing to the message cell pool 34).
  • Each message is either written into a single cell or more than one cell when the message is greater than the capacity of the cell.
  • the location of the cell is recorded as an entry in the table map 1064 along with any other relevant information, such as size, and whether the message is complete in a cell or subsequent cell. This process is repeated with each successive write to the cells.
  • message processing system 10 does not treat each individual message as a separate file. Rather, groups of cells 117 or all of the cells 117 in the message cell pool 34 are processed as one file. Treating groups of cells as files or the entire cell structure as a single file can reduce disk fragmentation.
  • the trigger event described in step 1057 is the expiration of the protocol lag time. More specifically, the initial message received in a capture period defines the beginning of the capture period, and the expiration of the protocol lag time marks its end, triggering the batch write. Messages received during the capture period can then be batched and written to the message cell pool 34.
  • the trigger is a time period that is determined by the protocol lag.
  • input processor 32 may immediately send acknowledgement signals or alternatively may send acknowledgment signals coincident with the batching process (i.e., the acknowledgement signals may not be sent immediately after receipt of a given message and instead delayed so as to allow for the efficient batching of messages in the storage memory 30).
  • Messages arriving within the capture period form a batch of messages written to the message cell pool 34.
  • the SMTP or SMPP protocol permits a predetermined lag before acknowledgement which time lag can be as long as 10 minutes. During this time lag, the input processor 32 can wait for additional messages from any client prior to batch writing messages to message cell pool 34.
  • the trigger event described in step 1057 can be defined in other ways including by the expiration of a system predefined time, such as 1/10 of a second or another time adequate for the system traffic and storage capacity of the memory.
  • other ways of determining when to trigger the batch write operation can be selected.
  • a message can have information added to the message, such as a header to notify the system of the processing history of the message.
  • additional information can include the identity of the system.
  • the additional information permits recognition that the message has been processed by the system in the event the message is returned to the system.
  • the system may indicate in the header that the system has received the message.
  • not all of the messages in the message cell pool 34 are required to be processed prior to the overwriting of older messages. That is, in one implementation, pointers to the head and the tail of the message cell pool 34 can be used to read and write messages from the message cell pool 34 (i.e., so that new messages can be written to appropriate locations of the message cell pool 34 and so that the oldest messages in the message cell pool 34 can be processed for delivery).
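The head/tail pointer scheme above can be sketched as a ring over the cell pool, where cells freed by reading (processing) the oldest messages become available for new writes; the class and its names are illustrative assumptions.

```python
class CellRing:
    """Head/tail pointers over a fixed cell pool: new messages are
    written at the tail, the oldest unread message is read from the
    head, and read cells are reused without every cell in the pool
    having been processed first."""
    def __init__(self, num_cells):
        self.cells = [None] * num_cells
        self.head = 0       # oldest unread cell
        self.tail = 0       # next free cell
        self.count = 0      # cells currently holding unread messages

    def write(self, msg):
        if self.count == len(self.cells):
            raise BufferError("pool full")      # no processed cell to reuse
        self.cells[self.tail] = msg
        self.tail = (self.tail + 1) % len(self.cells)
        self.count += 1

    def read(self):
        msg = self.cells[self.head]
        self.head = (self.head + 1) % len(self.cells)
        self.count -= 1
        return msg
```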
  • Messages received from a client may be subject to various interruptions, e.g., there may be stoppages of transmission from the client.
  • the system can disconnect from a remote system after a configurable predetermined period of time. If the handshake is unsuccessful or the message is in an unrecognizable form, the system can disconnect. When only a partial message is received, the partial message can be discarded.
  • Fig. 3 shows a flow diagram for operation of the delivery processor 40 when delivering a message to a recipient.
  • the delivery processor 40 begins by retrieving one or more messages from the message cell pool 34.
  • the delivery processor 40 removes one message at a time from the message cell pool 34.
  • the delivery processor 40 retrieves a block of messages from the message cell pool 34 prior to processing.
  • Delivery processor 40 seeks a remote receiver identity 2000, where normally there will be a delay. Delivery processor 40 determines whether the delay is beyond a predetermined period 2003. If the delay is beyond the predetermined period (the "yes" branch), the message is set aside 2004 and another message is processed 2000. If there is no delay or the delay is not beyond the predetermined period (the "no" branch), the message enters into protocol exchange 2008 with the recipient. Delivery processor 40 determines whether the protocol exchange was successful 2011. If for any reason the protocol exchange is not successful (the "no" branch), the message is set aside 2004. After a successful protocol exchange (the "yes" branch), the message is transmitted to the remote receiver 2014. If the message is successfully transmitted (the "yes" branch), the successful transmission is recorded in the table map 2027. If for any reason there is an interruption or a failure in transmitting the message (the "no" branch), the message is set aside 2004.
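The delivery flow of Fig. 3 might be sketched as a single attempt function; the helper callables stand in for DNS lookup, protocol exchange and TCP transmission, and are assumptions rather than the patent's implementation.

```python
def attempt_delivery(msg, lookup_receiver, exchange_protocol, transmit,
                     max_delay, table_map, set_aside):
    """One delivery attempt following the flow above. Any slow lookup,
    failed protocol exchange or interrupted transmission sets the
    message aside; success is recorded in the table map."""
    delay = lookup_receiver(msg)              # step 2000: seek receiver identity
    if delay is None or delay > max_delay:    # step 2003: delay too long
        set_aside.append(msg)                 # step 2004
        return False
    if not exchange_protocol(msg):            # steps 2008/2011
        set_aside.append(msg)
        return False
    if not transmit(msg):                     # step 2014: transmission failed
        set_aside.append(msg)
        return False
    table_map[msg] = "delivered"              # step 2027: record success
    return True
```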
  • the messages set aside 2004 are retained until the messages are ready for transmission to the remote receiver.
  • Delivery processor 40 can track the amount of time that a message has been set aside 2030. If a prescribed amount of time has passed and no signal or response has been received from the remote receiver (the "yes" branch), the message is sent to backup 2042. When a message is sent to backup, the status of the message as processed is recorded in a table map 2050. At any time the remote receiver can call back or send a signal to the delivery processor 40 indicating that it is ready to receive the message, and the system can receive the sent signal.
  • When the delivery processor 40 receives the signal indicating that the remote receiver is ready to receive the message and the system receives this signal (the "yes" branch), the message can be processed and transmitted 2014. Otherwise, the system continues to wait for the signal or for the prescribed time to pass (the "no" branch).
  • When a message is returned to the system 2061, the returned message can be stored in the backup database 2042. Messages in the backup storage are subject to being overwritten in the message cell pool 34.
  • the delivery processor 40 can periodically check whether there are messages that need to be delivered in the backup storage and transfer the messages to the system for transmission. The number of attempts to transmit a message can be recorded by the delivery processor 40. When the maximum number of attempts defined by the system has been reached, the message can be erased.
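The periodic backup sweep with a maximum attempt count might look like this; the function name and the limit of three attempts are assumptions for illustration.

```python
def process_backup(backup, try_send, max_attempts=3):
    """Periodic sweep of the backup store: retry each message, keep it
    for another pass on failure, and discard it once the system-defined
    maximum number of attempts has been reached."""
    remaining = []
    for msg, attempts in backup:
        if try_send(msg):
            continue                     # delivered; drop from backup
        attempts += 1
        if attempts < max_attempts:
            remaining.append((msg, attempts))
        # else: maximum attempts reached, message is erased
    return remaining
```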
  • the delivery processor 40 can deliver the messages in the order that the messages were received by the message processing system 10. Alternatively, the delivery processor 40 can deliver the messages out of the order in which the messages were received, as described further below.
  • the delivery processor 40 retrieves messages and writes the retrieved messages into one of multiple output queues 38.
  • the delivery processor 40 can be configured to write messages to one of the output queues 38 based on information in the message, such as a domain name or priority indication on the message.
  • the delivery processor 40 can then process the messages from the one or more output queues 38, where each queue can have a priority associated with it. For example, messages of high priority can be written to a first queue, messages of a standard priority can be written to a second queue and messages of a low priority can be written to a third queue.
  • the delivery processor 40 can access the first queue 50% of the time for delivering messages, access the second queue 35% of the time for delivering messages and access the third queue 15% of the time.
  • the delivery processor 40 can randomly access the output queues 38.
  • QOS can be provided by assigning one queue to messages that have not yet been sent out of the system.
  • the other queues can be reserved for messages that have been returned to the system or that have not been delivered due to a transmission failure.
  • the queue for messages that have not yet been sent out of the system can be prioritized over the other queues.
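The 50/35/15 servicing policy described above can be sketched as a weighted random queue selection; the function name and the skip-empty-queues behavior are assumptions for illustration.

```python
import random

def pick_queue(queues, weights, rng=random.random):
    """Select an output queue by weighted random choice, e.g. weights
    (0.50, 0.35, 0.15) give the high-priority queue 50% of delivery
    slots. Empty queues are skipped and weights renormalized."""
    nonempty = [(q, w) for q, w in zip(queues, weights) if q]
    if not nonempty:
        return None
    total = sum(w for _, w in nonempty)
    r = rng() * total
    for q, w in nonempty:
        r -= w
        if r <= 0:
            return q
    return nonempty[-1][0]   # guard against floating-point rounding
```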
  • the subject system can be readily applied to a wide variety of applications.
  • the methods described above can be applied to wireless messaging, such as SMS systems, email systems, stock or commodity exchange systems, and the like.
  • the system can be applied to any persistent store and forward application.
  • files can be stored and processed. Each file can be parsed into packets for transfer.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

Messages are received and retained in memory and are batch processed including transferring the messages to a cell pool having cells of predetermined size. The location and size of the messages are recorded in a table map with other pertinent information as is required. Messages in the cell pool are processed and delivered asynchronously.

Description

ASYNCHRONOUS MECHANISM AND MESSAGE POOL
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] This application claims the benefit of U.S. Provisional Application No. 60/451,953, filed on March 5, 2003, which is incorporated by reference herein.
BACKGROUND [0002] The invention relates generally to communication systems. The handling, transmission and storage of wireless and wired messages, using a protocol such as TCP/IP, can be problematic due to slowness in the handling of incoming traffic and delivery of outbound traffic. Incoming handling can be slowed by slow file creation, fragmentation and overhead delays arising from the storage of individual messages as files in a storage system. Outbound message delivery speed can be limited by the slow and unpredictable nature of looking up domain name servers, slow or delayed remote server response, remote systems being busy or down, and finite computer resources, such as size of the memory.
[0003] For wireless handheld and computer devices, particularly as related to the Internet, there needs to be a more efficient method of receiving and delivering messages.
SUMMARY
[0004] A system and method for handling incoming and outgoing messages is provided that includes a message processing system operable to write messages in batches to a message cell pool structure. A data processor receives messages that are retained in a memory. The data processor is operable to process the messages from the memory and write the messages in batches to individual cells of a message cell pool structure. The cells receive and retain the messages. The message cell pool structure is provided on a storage system with the cell pool having a number of cells of predetermined size. The message cell pool structure can be of the form of a first-in first-out (FIFO) queue where messages are written in an order that generally corresponds to receipt. A table map keeps track of the message location and status or other message information within the cells. Messages are written in batches, so as to eliminate the need to write each individual message as a file to the storage system. Further, because of the queue structure of the message cell pool, fragmentation of the writing of the batches to the storage system can be eliminated.
[0005] A delivery processor is provided to oversee the delivery processing of the stored messages. The delivery processor can read messages from the message cell pool structure and attempt delivery. Associated with the delivery processor can be one or more output queues. The output queues can be used to receive messages from the message cell pool structure (e.g., at a time when messages from the message cell pool structure are ready for delivery) for further processing by the delivery processor. The output queues can be serviced in accordance with a predefined ordering or policy (e.g., implementing quality of service differentiation for different classes of messages). Alternatively, output queues can be used only for storage of messages that are delayed, or otherwise difficult to deliver.
[0006] Aspects of the invention may include none, one or more of the following advantages. The messages need not remain in the memory longer than a given messaging protocol requires for server acknowledgement. The system allows for batching of messages that have been received. A message can be written to one or more cells. The cells can be optimized to accommodate the size of the message, for example, the cells can be sized to store at least 80% of the average sized message. Portions of a message can be written to cells prior to completely receiving a given message. Completely received messages can be written to the cells prior to completely receiving other messages. Thus, if only a portion of a first message in time is received before the entirety of a second message in time is received, the system need not wait to receive the entirety of the first message before processing the second message (i.e., writing the second message as part of a batch to the message cell pool structure). When the system receives a long message, the system may not need to wait to receive the entire message before processing a shorter message that is received in full. The system can have a plurality of interfaces and can contemporaneously receive a plurality of messages that are processed for subsequent transmission.
[0007] Cells can be written after either a predetermined time or after a predefined number of messages have been received. Messages can be written to and retrieved from the message cell pool structure in a first-in-first-out sequence. Writing to cells can minimize disk fragmentation.
[0008] A number of connections can be made for each delivery attempt. Multiple threads can be used to process the messages. During the course of delivering messages, the system can switch between awaiting connections, skipping over delayed transmissions. For delayed messages, the system only takes action on an awaiting connection when the receiver notifies the system that it is ready for further processing of the message. Messages with delivery failure can be returned and identified as returned by marking the message. Failed messages can be kept in separate storage for later processing. Storing a message in separate storage for later processing can reduce the bottleneck of message transmittal caused by returned messages. The methods used by the system can be applied to persistent store and forward systems that handle files rather than messages.
[0009] The proposed message processing system solves the problems associated with large volumes of incoming messages resulting in storage management bottlenecks and slow delivery due to limited system resources and unpredictability in the real world messaging environment. The proposed message processing system avoids the problem of slow handling of returned messages. The use of a message pool and batch writing increases storage management speed. Asynchronous delivery ensures the incoming messages are delivered without choking of the outgoing message system. Backup storage provides for efficient processing of returned messages to reduce the system burden.
DESCRIPTION OF THE DRAWINGS
[00010] Figure 1 is a block diagram of message processing system for receipt and storage of messages;
[00011] Figure 2 is a flow diagram of message receipt and storage; and
[00012] Figure 3 is a flow diagram of message processing for delivery.
DETAILED DESCRIPTION
[00013] As shown in Fig. 1, a message processing system 10 is provided for processing messages received from one or more wired or wireless devices. In one implementation, message processing system 10 is embodied in a multimedia messaging service center (MMSC) employing a data processing unit 20. The message processing system 10 handles incoming 12 and outbound 14 messages. Message processing system 10 receives messages from wired and wireless devices 102, 104, 106, such as the Internet, PCs, PDAs, cell phones, etc. Data processing unit 20 includes a storage memory 30 for receiving messages 12; an input processor 32 for processing the messages 12; a message cell pool 34 with a plurality of cells 117 of predetermined size; and a table map 36 for identifying messages.
[00014] Optionally, message processing system 10 can have a separate delivery processor 40 and an output queue structure including one or more output queues 38 for delivery of messages. One or more of output queues 38 can be used to store messages that have failed to be transmitted to a remote receiver after a predetermined time lag, or that are returned from the remote system for various reasons, such as lack of storage and invalid recipient identity. The messages that have failed to be transmitted and that are returned messages can be processed independently of the messages that have not been returned or failed to be transmitted. Output queues 38 are discussed in greater detail below.
[00015] The message processing system 10 receives messages 12 from the clients, such as the computer 102 and the wireless client, e.g. palm pilot 104 or mobile phone 106. The messages 12 are received at storage memory 30 in a successive order as indicated by msg1, msg2, msg3, etc. 112 and retained in the storage memory 30 during a protocol lag. The protocol lag defines the time period in which an acknowledgment signal must be returned to a message sender in accordance with the messaging protocol being used. Absent the acknowledgement signal, the message sender may attempt retransmission of the message or otherwise indicate message failure. Messages can include meta-data, such as the address of the sender and the address of the intended recipient(s). In one implementation, storage memory 30 is a random access memory (RAM). Input processor 32 is operable to identify the receipt of a first message and gather messages for batch writing to message cell pool 34. After the predetermined time of the protocol lag has expired for the first message, the message is acknowledged and all of the messages received during the lag are transferred in a batch to individual cells 117 in the message cell pool 34. In one implementation, messages 112 are marked prior to being written to the message cell pool 34 (e.g., in one implementation, message headers are marked) so as to be able to identify returned messages. Message handling by input processor 32 is described in greater detail below in association with Fig. 2.
[00016] The message cell pool 34 is allocated in a storage system of a computer readable medium, which in one implementation is a hard disk. The message cell pool 34 comprises cells 117 of a specific predetermined size or number of bytes. The size of the cells 117 can be selected in relation to the size of the anticipated messages, where the size can be sufficient to store the average sized message in a single cell. In one implementation, the cells 117 are sized to allow each message 120 to be stored in a single cell. In one implementation, cells are sized such that at least substantially 80% of the received messages fit into a single cell. The messaging environment can determine the pool size, such that the capacity of the message cell pool 34 is typically capable of handling the anticipated volume of messages. Messages 120 larger than the capacity of a single cell can spill over into the next cell. Each cell 117 can be filled in accordance with its respective position in the message cell pool 34 in a numerical order.
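The spill-over behavior, in which a message larger than one cell occupies subsequent cells, reduces to a simple calculation. The helper name `cells_needed` below is an illustrative assumption:

```python
import math

def cells_needed(message_size, cell_size):
    """Number of cells a message occupies; messages larger than one
    cell spill over into the next cell(s). Even an empty message
    is assumed to occupy one cell."""
    return max(1, math.ceil(message_size / cell_size))
```

If cells are sized so that most messages need exactly one cell, most table-map entries describe a single cell and the pool wastes little space on spill-over.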
[00017] The table map 36 can comprise a number of entries, where each entry is a row comprising a number of data items, each item in an individual column. Each entry has data, such as the location 124 of a cell 117 in relation to a message 120, the size 126 of the message 120, whether the cell 117 contains the end of the message 130, whether the message 120 has been processed, e.g. transmitted, the status of the message and other message-related information. The table map 36 can have additional data, such as a time stamp. The table map 36 can be on the same data storage entity (as the message cell pool 34) or a different entity that is connected to the same or a different data processor. The table map can be referred to when randomly accessing messages from storage, as described further below.
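A table map entry of the kind described above might be modeled as follows. The field names are hypothetical; the disclosure lists a cell location, message size, an end-of-message indicator, processing status and an optional time stamp:

```python
from dataclasses import dataclass, field
import time

@dataclass
class TableMapEntry:
    """Illustrative row of the table map; field names are assumptions."""
    location: int            # first cell holding the message
    size: int                # message size in bytes
    contains_end: bool       # whether this cell run contains the message end
    processed: bool = False  # e.g., successfully transmitted
    timestamp: float = field(default_factory=time.time)  # optional time stamp

entry = TableMapEntry(location=5, size=900, contains_end=True)
```

Keeping the per-message metadata in such a map allows random access to stored messages without scanning the pool itself.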
[00018] The messages 120 that are stored in the message cell pool 34 are then processed, such as by delivering the messages. An asynchronous delivery mechanism can be employed. Other actions that can be performed in the processing are virus scanning, spam detection, pornography detection, etc. In one implementation, input processor 32 processes the messages for delivery. Alternatively, a separate delivery processor 40 can be provided. Input processor 32 and delivery processor 40 can be the same processor. For purposes of simplicity of the description, the delivery processing will be described in an implementation that includes a delivery processor. Other implementations are possible.
[00019] In one implementation, delivery processor 40 operates to process each message individually in the order written into the message cell pool 34, reading each message from the cells 117 based on the table map 36 information and attempting to transmit the message to the designated receiver. A delivery attempt may include opening a TCP/IP connection and exchanging protocols. After a successful delivery, the successful delivery can be recorded in the table map 36.
[00020] In one implementation, delivery processor 40 uses a pool of threads. Each thread simultaneously opens a number of TCP/IP connections for operations such as DNS lookup, protocol exchange or data transmission, and manages the state of message processing for all the connections. In case of a delay in the remote system, such as the DNS service taking a long time to resolve an IP address from a domain name, or when the remote server is busy handling other requests and is unresponsive, or the remote server is temporarily down, the thread will set a message aside by recording the current state of message processing in a memory, such as memory 50, and utilize the same thread to handle another message using another connection. Delivery processor 40 can use a callback mechanism to notify the thread when the set aside message is ready for further action. In one implementation, the callback mechanism originates from an underlying computer network layer I/O device driver. In one implementation, a thread can complete a transmission in progress prior to taking up the set aside message for further processing. When multiple connections with messages are ready at the same time, the thread can be configured to take on each message one by one. In the event that the remote system disconnects a set aside message, because the processor is busy handling other messages, that set aside message can be characterized as a failed transmission and moved to a backup storage (not shown). Set aside messages can be stored, such as in processor memory 50.
[00021] When a message is returned from the remote server due to a system error, such as a wrong address, an out-of-quota condition, or simply because the remote system is down, the message can be saved in the message cell pool 34 just as other newly received messages.
When the delivery processor 40 tries to deliver the returned message, the delivery processor 40 can recognize the message as returned by analyzing the message header or any other identifiable feature that indicates the message was sent out from the message processing system 10. The delivery processor 40 can store returned messages in a file-based or backup system and remove the message from the message cell pool 34. Periodically the system can process returned messages until either they are successfully handled or have reached a maximum number of attempts for processing and are discarded. Delivery processing is described in greater detail below in association with FIG. 3.
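The set-aside and callback mechanism of paragraph [00020] can be sketched as follows. The class and method names (`DeliveryWorker`, `stall`, `on_ready`) are illustrative assumptions, not part of this disclosure:

```python
from collections import deque

class DeliveryWorker:
    """Sketch: when a remote peer stalls, the worker records the
    connection state and moves on; a callback from the network layer
    marks the message ready so the same thread can resume it later."""
    def __init__(self):
        self.ready = deque()   # messages ready for (re)processing
        self.set_aside = {}    # msg_id -> saved processing state

    def stall(self, msg_id, state):
        """Record state and move on instead of blocking on a slow peer."""
        self.set_aside[msg_id] = state

    def on_ready(self, msg_id):
        """Callback: the peer signaled it can proceed; queue for resumption."""
        state = self.set_aside.pop(msg_id)
        self.ready.append((msg_id, state))

worker = DeliveryWorker()
worker.stall("msg-7", {"phase": "dns_lookup"})
worker.on_ready("msg-7")
```

The key point is that the thread never blocks on one slow connection; its time is spent only on connections that are actually ready.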
[00022] A message processing system including the components described above can be used to receive a message, as described below and in Fig. 2. Clients transmit messages to the system. Each message includes data, the address of the sender and the recipient(s) and can optionally include a traversing path, such as an IP address. Typically the messages are broken up into multiple packets for transmitting. Referring now to Figs. 1 and 2, a packet is received by the system 1021, such as by the data processing unit 20 of an MMSC. The data processing unit 20 determines whether the packet is the first of a group of packets, i.e., a message, or whether the packet belongs to a group of packets that the system has already begun to receive 1025. If the memory does not already contain packets from a corresponding message (the "yes" branch), a new message is designated. If the received packet belongs with packets of a corresponding message (the "no" branch), the packet is stored, e.g., in storage memory 30, with the rest of the received packets from the corresponding message 1036. The packets for each message have a sequence in which they can be ordered.
[00023] The data processing unit 20 determines whether the message to which the packet was added is complete 1041. If the message is not complete (the "no" branch), the data processing unit awaits a new packet 1021. If the message is complete (the "yes" branch), the data processing unit determines whether a trigger event has occurred 1057. In one implementation, the trigger event can be the receipt of a predetermined number of messages. Other trigger events are discussed below. If the predetermined number of completed messages has not been received (the "no" branch), the processor continues to receive packets 1021. If the predetermined number of complete messages has been received (the "yes" branch), i.e., the capture period has elapsed, the completed messages are batched 1054. The batched messages are written to a persistent storage (i.e., message cell pool 34) 1059. In one implementation, portions of a message can be written as part of a batch; that is, input processor 32 can be configured to include in a batch a portion of a message that has been received, such that the portion can be written to message cell pool 34 prior to receipt of the entire message. In one implementation, when a write to cell function occurs, all portions of messages and complete messages are written to cells (i.e., input processor 32 does not wait for complete messages to be received before writing to the message cell pool 34).
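The trigger test of step 1057 — flush a batch when either a message count is reached or a capture period elapses — can be sketched as a single predicate. The function name and the threshold values are assumptions for illustration:

```python
def should_flush(completed_count, elapsed_seconds,
                 max_messages=100, capture_period=0.1):
    """Return True when a batch write should be triggered: either a
    predetermined number of complete messages has accumulated, or the
    capture period (e.g., derived from the protocol lag) has elapsed."""
    return (completed_count >= max_messages
            or elapsed_seconds >= capture_period)
```

Either condition alone suffices, so a trickle of messages is still flushed promptly while a burst is flushed as soon as the batch fills.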
[00024] Each message is either written into a single cell or more than one cell when the message is greater than the capacity of the cell. When a message is written to the cell, the location of the cell is recorded as an entry in the table map 1064 along with any other relevant information, such as size, and whether the message is complete in a cell or subsequent cell. This process is repeated with each successive write to the cells. In one implementation, message processing system 10 does not treat each individual message as a separate file. Rather, groups of cells 117 or all of the cells 117 in the cell message pool 34 are processed as one file. Treating groups of cells as files or the entire cell structure as a single file can reduce disk fragmentation.
[00025] In one implementation, the trigger event described in step 1057 is the expiration of the protocol lag time. More specifically, the initial message received defines the beginning of the capture period, and the expiration of the protocol lag time triggers the end of the capture period. Messages received during the capture period can then be batched and written to the message cell pool 34. In one implementation, the trigger is a time period that is determined by the protocol lag. In such an implementation, input processor 32 may immediately send acknowledgement signals or alternatively may send acknowledgment signals coincident with the batching process (i.e., the acknowledgement signals may not be sent immediately after receipt of a given message and instead delayed so as to allow for the efficient batching of messages in the storage memory 30). Messages arriving within the capture period form a batch of messages written to the message cell pool 34. The SMTP or SMPP protocol permits a predetermined lag before acknowledgement, which time lag can be as long as 10 minutes. During this time lag, the input processor 32 can wait for additional messages from any client prior to batch writing messages to message cell pool 34.
[00026] In one implementation, the trigger event described in step 1057 can be defined in other ways including by the expiration of a system predefined time, such as 1/10 of a second or another time adequate for the system traffic and storage capacity of the memory. However, other ways of determining when to trigger the batch write operation can be selected.
[00027] A message can have information added to the message, such as a header to notify the system of the processing history of the message. Such additional information can include the identity of the system. The additional information permits recognition that the message has been processed by the system in the event the message is returned to the system. In the case of email, the system may indicate in the header that the system has received the message.
[00028] As the number of received messages increases, cells are filled in the order in which the messages are received until the message cell pool 34 is filled. In one implementation, when the message cell pool 34 is filled, no new messages can be written to the message cell pool 34. As a practical matter, the message cell pool 34 is usually available for new messages, without requiring the memory to store messages while waiting for the message cell pool 34 to become available. However, in the event that the availability of the message cell pool 34 is more limited than the system requires, one or more additional message cell pools can be provided. When all messages have been processed, as evidenced by the table map, as either having been successfully transmitted or having been sent to a backup database, new messages can be entered, overwriting messages in cells from the beginning of the cell bank. In one implementation, not all of the messages in the message cell pool 34 are required to be processed prior to the overwriting of older messages. That is, in one implementation, pointers to the head and the tail of the message cell pool 34 can be used to read and write messages from the message cell pool 34 (i.e., so that new messages can be written to appropriate locations of the message cell pool 34 and so that the oldest messages in the message cell pool 34 can be processed for delivery).
[00029] Messages received from a client may be subject to various interruptions, e.g., there may be stoppages of transmission from the client. The system can disconnect from a remote system after a configurable predetermined period of time. If the handshake is unsuccessful or the message is in an unrecognizable form, the system can disconnect. When only a partial message is received, the partial message can be discarded.
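The head and tail pointer scheme just described can be sketched as a ring structure, where the head marks the next cell to write and the tail marks the oldest unprocessed cell. The class name `RingPool` and its methods are hypothetical:

```python
class RingPool:
    """Sketch of head/tail pointers over the cell pool: new messages
    wrap around and reuse cells whose messages have been processed."""
    def __init__(self, num_cells):
        self.cells = [None] * num_cells
        self.head = 0    # next cell to write
        self.tail = 0    # oldest unprocessed cell
        self.count = 0   # unprocessed cells currently held

    def write(self, msg):
        if self.count == len(self.cells):
            # Pool full: oldest messages not yet processed, cannot overwrite.
            raise BufferError("message cell pool full")
        self.cells[self.head] = msg
        self.head = (self.head + 1) % len(self.cells)
        self.count += 1

    def read_oldest(self):
        """Take the oldest message for delivery, freeing its cell."""
        msg = self.cells[self.tail]
        self.tail = (self.tail + 1) % len(self.cells)
        self.count -= 1
        return msg

ring = RingPool(4)
ring.write(b"msg1")
ring.write(b"msg2")
```

Reading from the tail while writing at the head lets delivery and receipt proceed concurrently over one fixed-size region.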
[00030] Fig. 3 shows a flow diagram for operation of the delivery processor 40 when delivering a message to a recipient. Referring now to Figs. 1 and 3, the delivery processor 40 begins by retrieving one or more messages from the message cell pool 34. In one implementation, the delivery processor 40 removes one message at a time from the message cell pool 34. In another implementation, the delivery processor 40 retrieves a block of messages from the message cell pool 34 prior to processing.
[00031] Delivery processor 40 seeks a remote receiver identity 2000, where normally there will be a delay. Delivery processor 40 determines whether the delay is beyond a predetermined period 2003. If the delay is beyond the predetermined period (the "yes" branch), the message is set aside 2004 and another message is processed 2000. If there is no delay or the delay is not beyond the predetermined period (the "no" branch), the message enters into protocol exchange 2008 with the recipient. Delivery processor 40 determines whether the protocol exchange was successful 2011. If for any reason the protocol exchange is not successful (the "no" branch), the message is then set aside 2004. After a successful protocol exchange (the "yes" branch), the message is transmitted to the remote receiver 2014. If the message is successfully transmitted (the "yes" branch), the successful transmission is recorded in the table map 2027. If for any reason there is an interruption or a failure in transmitting the message (the "no" branch), the message is set aside 2004.
[00032] The messages set aside 2004 are retained until the messages are ready for transmission to the remote receiver. Delivery processor 40 can track the amount of time that a message has been set aside 2030. If a prescribed amount of time has passed and no signal or response has been received from the remote receiver (the "yes" branch), the message is sent to backup 2042. When a message is sent to backup, the status of the message as processed is recorded in a table map 2050. At any time the remote receiver can call back or send a signal to the delivery processor 40 indicating that it is ready to receive the message, and the system can receive the sent signal. When the delivery processor 40 receives the signal indicating that the remote receiver is ready to receive the message (the "yes" branch), the message can be processed and is transmitted 2014. Otherwise, the system continues to wait for the signal or the prescribed time to pass (the "no" branch).
[00033] When a message is returned to the system 2061, such returned message can be stored in the backup database 2042. Messages in the backup storage are subject to being overwritten in the message cell pool 34. The delivery processor 40 can periodically check whether there are messages that need to be delivered in the backup storage and transfer the messages to the system for transmission. The number of attempts to transmit a message can be recorded by the delivery processor 40. When the maximum number of attempts defined by the system has been reached, the message can be erased.
[00034] The delivery processor 40 can deliver the messages in the order that the messages were received by the message processing system 10. Alternatively, the delivery processor 40 can deliver the messages out of the order in which the messages were received, as described further below.
[00035] In one implementation, the delivery processor 40 retrieves messages and writes the retrieved messages into one of multiple output queues 38. The delivery processor 40 can be configured to write messages to one of the output queues 38 based on information in the message, such as a domain name or priority indication on the message. The delivery processor 40 can then process the messages from the one or more output queues 38, where each queue can have a priority associated with it. For example, messages of high priority can be written to a first queue, messages of a standard priority can be written to a second queue and messages of a low priority can be written to a third queue. The delivery processor 40 can access the first queue 50% of the time for delivering messages, access the second queue 35% of the time for delivering messages and access the third queue 15% of the time. Other priority systems, such as different allocations (i.e., quality of service (QOS)) of accessing the various queues can be configured. In another implementation, the delivery processor 40 can randomly access the output queues 38. In yet another implementation, QOS can be provided by assigning one queue to messages that have not yet been sent out of the system. The other queues can be reserved for messages that have been returned to the system or that have not been delivered due to a transmission failure. The queue for messages that have not yet been sent out of the system can be prioritized over the other queues.
[00036] The subject system can be readily applied to a wide variety of applications. The methods described above can be applied to wireless messaging, such as SMS systems, email systems, stock or commodity exchange systems, and the like. In addition to messaging, the system can be applied to any persistent store and forward application. In place of storing and processing messages, files can be stored and processed. Each file can be parsed into packets for transfer.
[00037] All references referred to in the text are incorporated herein by reference as if fully set forth herein. The relevant portions associated with this document will be evident to those of skill in the art. Any discrepancies between this application and such reference will be resolved in favor of the view set forth in this application.
[00038] Although the invention has been described with reference to the above examples, it will be understood that modifications and variations are encompassed within the spirit and scope of the invention. Accordingly, the invention is limited only by the following claims.

Claims

WHAT IS CLAIMED IS:
1. A method for processing messages, comprising: receiving a plurality of messages; delaying the writing of individual messages to memory and instead batch writing the plurality of messages to cells in a memory; and asynchronously delivering each of the plurality of the messages to respective recipients.
2. The method of claim 1, wherein: receiving includes acknowledging receipt of the each of the plurality of messages when received.
3. The method of claim 1 , wherein: batch writing the one or more messages to cells includes batch writing upon receipt of a trigger.
4. The method of claim 3, wherein: receiving a trigger includes determining that a predetermined amount of time has passed.
5. The method of claim 4, wherein: determining that a predetermined amount of time has passed includes determining the predetermined amount of time has passed since a previous batch write operation.
6. The method of claim 3, wherein: receiving a trigger includes determining that a protocol time lag has expired for a first message received in the plurality of messages.
7. The method of claim 6, wherein: receiving a trigger includes determining that a predetermined time period has expired.
8. The method of claim 3, wherein: receiving a trigger occurs before receiving an entire message such that a portion of a first message is received; and batch writing the one or more messages to cells includes writing the portion of the first message such that the portion is written to the cells prior to receiving the entire message.
9. The method of claim 1, wherein: batch writing the one or more messages to cells includes writing a group of cells as a file and not writing each message as a separate file.
10. The method of claim 1 , wherein: receiving one or more messages includes receiving the one or more messages in an order of receipt; and asynchronously delivering the one or more messages includes delivering the one or more messages in the order of receipt.
11. The method of claim 1 , wherein: receiving the plurality of messages includes receiving the one or more messages in an order of receipt; and asynchronously delivering the one or more messages includes delivering the one or more messages in an order different from the order of receipt.
12. The method of claim 1, further comprising: processing the messages in the cells for delivery including writing the one or more messages to one or more queues.
13. The method of claim 12, wherein each of the one or more queues has a priority assigned to each queue and the method further comprises: determining a priority for each of the one or more messages; and writing the one or more messages to the one or more queues according to the priority for each of the one or more messages and the priority assigned to each queue.
14. The method of claim 13, wherein: determining a priority for each of the one or more messages includes determining an attribute of the recipient.
15. The method of claim 13, wherein: determining a priority for each of the one or more messages includes determining, for each of the one or more messages, whether there has been a previous delivery attempt.
16. The method of claim 13, further comprising: retrieving a block of messages from the cells for asynchronous delivery, wherein the block of messages includes two or more messages.
17. The method of claim 1 , wherein: batch writing the plurality of messages to cells includes writing to cells of a predetermined size.
18. The method of claim 1 , further comprising: mapping each of the plurality of messages such that a location of each message in the one or more cells is recorded in a table map.
19. The method of claim 18 wherein mapping each message includes recording in the table map a size of the message.
20. The method of claim 18 wherein mapping each message includes recording in the table map whether the cell contains an entire message.
21. The method of claim 18 wherein mapping each message includes recording in the table map information about processing of the message.
21. The method of claim 1 , further comprising: receiving a first message as returned from the recipient; determining that the first message is a returned message; and storing the returned message in a backup system for processing at a later time.
22. A computer implemented system for processing messages, comprising: a message processor for receiving messages, acknowledging messages received and batching messages; a memory having one or more cells of a predetermined size, the message processor operable to write messages in batches to the memory including writing respective individual messages to one or more cells depending on a size of a given message; and a table map for recording an association between each message and each cell in which the message is written.
23. The system of claim 22, wherein: the one or more cells have a numeric order; and the one or more cells are written to in the numeric order of the cells.
24. The system of claim 22, wherein: the one or more cells are each a predetermined size and the size is substantially equal to a size of a majority of messages processed.
25. The system of claim 22, wherein: the one or more cells are sized to store at least 80% of an average message size for messages received by the message receiver.
26. The system of claim 22, further comprising: a delivery processor for delivering the messages to a receiver.
27. The system of claim 26, further comprising: a queue structure for receiving and storing messages extracted from the cells to be delivered by the delivery processor.
28. The system of claim 27, wherein: the queue structure includes one or more queues and each queue has a priority; and the delivery processor is configured to access each queue according to the priority of the queue.
29. The system of claim 22, further comprising: a backup data storage for storing files that have not been processed successfully, wherein the delivery processor processes the files that have not been processed successfully.
PCT/US2004/006887 2003-03-05 2004-03-04 Asynchronous mechanism and message pool WO2004079930A2 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
EP04717486A EP1606719A4 (en) 2003-03-05 2004-03-04 Asynchronous mechanism and message pool
NZ542871A NZ542871A (en) 2003-03-05 2004-03-04 Asynchronous mechanism and message pool
AU2004217278A AU2004217278B2 (en) 2003-03-05 2004-03-04 Asynchronous mechanism and message pool

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US45195303P 2003-03-05 2003-03-05
US60/451,953 2003-03-05

Publications (2)

Publication Number Publication Date
WO2004079930A2 true WO2004079930A2 (en) 2004-09-16
WO2004079930A3 WO2004079930A3 (en) 2005-05-06

Family

ID=32962668

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2004/006887 WO2004079930A2 (en) 2003-03-05 2004-03-04 Asynchronous mechanism and message pool

Country Status (5)

Country Link
US (2) US8788591B2 (en)
EP (1) EP1606719A4 (en)
AU (1) AU2004217278B2 (en)
NZ (1) NZ542871A (en)
WO (1) WO2004079930A2 (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2491494A4 (en) * 2009-10-20 2015-03-18 Alcatel Lucent Message server device and method for controlling message delivery
US9313047B2 (en) 2009-11-06 2016-04-12 F5 Networks, Inc. Handling high throughput and low latency network data packets in a traffic management device
EP3016333A1 (en) * 2014-10-31 2016-05-04 F5 Networks, Inc Handling high throughput and low latency network data packets in a traffic management device
US9606946B2 (en) 2009-01-16 2017-03-28 F5 Networks, Inc. Methods for sharing bandwidth across a packetized bus and systems thereof
US9864606B2 (en) 2013-09-05 2018-01-09 F5 Networks, Inc. Methods for configurable hardware logic device reloading and devices thereof
US10015143B1 (en) 2014-06-05 2018-07-03 F5 Networks, Inc. Methods for securing one or more license entitlement grants and devices thereof
US10135831B2 (en) 2011-01-28 2018-11-20 F5 Networks, Inc. System and method for combining an access control system with a traffic management system
US10375155B1 (en) 2013-02-19 2019-08-06 F5 Networks, Inc. System and method for achieving hardware acceleration for asymmetric flow connections
US10972453B1 (en) 2017-05-03 2021-04-06 F5 Networks, Inc. Methods for token refreshment based on single sign-on (SSO) for federated identity environments and devices thereof
US11537716B1 (en) 2018-11-13 2022-12-27 F5, Inc. Methods for detecting changes to a firmware and devices thereof
US11855898B1 (en) 2018-03-14 2023-12-26 F5, Inc. Methods for traffic dependent direct memory access optimization and devices thereof

Families Citing this family (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7496500B2 (en) * 2004-03-01 2009-02-24 Microsoft Corporation Systems and methods that determine intent of data and respond to the data based on the intent
US7249229B2 (en) * 2004-03-31 2007-07-24 Gemini Mobile Technologies, Inc. Synchronous message queues
US7549151B2 (en) * 2005-02-14 2009-06-16 Qnx Software Systems Fast and memory protected asynchronous message scheme in a multi-process and multi-thread environment
US8667184B2 (en) * 2005-06-03 2014-03-04 Qnx Software Systems Limited Distributed kernel operating system
US7840682B2 (en) 2005-06-03 2010-11-23 QNX Software Systems, GmbH & Co. KG Distributed kernel operating system
US20070094336A1 (en) * 2005-10-24 2007-04-26 Microsoft Corporation Asynchronous server synchronously storing persistent data batches
US7680096B2 (en) * 2005-10-28 2010-03-16 Qnx Software Systems Gmbh & Co. Kg System for configuring switches in a network
US8077699B2 (en) * 2005-11-07 2011-12-13 Microsoft Corporation Independent message stores and message transport agents
US7921165B2 (en) * 2005-11-30 2011-04-05 Microsoft Corporation Retaining mail for availability after relay
US20080140826A1 (en) * 2006-12-08 2008-06-12 Microsoft Corporation Monitoring and controlling electronic message distribution
US9229792B1 (en) 2007-11-21 2016-01-05 Marvell International Ltd. Method and apparatus for weighted message passing
US8601069B1 (en) 2007-11-21 2013-12-03 Marvell International Ltd. Method and apparatus for message multicasting
JP5537181B2 (en) * 2010-02-17 2014-07-02 株式会社日立製作所 Message system
US8762340B2 (en) 2010-05-14 2014-06-24 Salesforce.Com, Inc. Methods and systems for backing up a search index in a multi-tenant database environment
US8516062B2 (en) 2010-10-01 2013-08-20 @Pay Ip Holdings Llc Storage, communication, and display of task-related data
US8918467B2 (en) * 2010-10-01 2014-12-23 Clover Leaf Environmental Solutions, Inc. Generation and retrieval of report information
US9009065B2 (en) * 2010-12-17 2015-04-14 Google Inc. Promoting content from an activity stream
WO2013067224A1 (en) * 2011-11-02 2013-05-10 Akamai Technologies, Inc. Multi-domain configuration handling in an edge network server
GB2529120B (en) * 2013-07-24 2020-12-02 Halliburton Energy Services Inc Automated information logging and viewing system for hydrocarbon recovery operations
CN105991676B (en) * 2015-01-30 2019-04-09 阿里巴巴集团控股有限公司 The acquisition methods and device of data
US10146444B2 (en) * 2016-10-03 2018-12-04 Samsung Electronics Co., Ltd. Method for read latency bound in SSD storage systems
JP7000088B2 (en) * 2017-09-15 2022-01-19 株式会社東芝 Notification control device, notification control method and program
CN108092918A (en) * 2017-12-07 2018-05-29 长城计算机软件与系统有限公司 A kind of method for message transmission and system
US10459778B1 (en) 2018-07-16 2019-10-29 Microsoft Technology Licensing, Llc Sending messages between threads

Family Cites Families (45)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5062055A (en) * 1986-09-02 1991-10-29 Digital Equipment Corporation Data processor performance advisor
US5005014A (en) * 1989-05-22 1991-04-02 Motorola, Inc. System and method for optimally transmitting acknowledge back responses
US5673252A (en) * 1990-02-15 1997-09-30 Itron, Inc. Communications protocol for remote data generating stations
US5596330A (en) * 1992-10-15 1997-01-21 Nexus Telecommunication Systems Ltd. Differential ranging for a frequency-hopped remote position determination system
US5590403A (en) * 1992-11-12 1996-12-31 Destineer Corporation Method and system for efficiently providing two way communication between a central network and mobile unit
WO1995010805A1 (en) 1993-10-08 1995-04-20 International Business Machines Corporation Message transmission across a network
US5634127A (en) * 1994-11-30 1997-05-27 International Business Machines Corporation Methods and apparatus for implementing a message driven processor in a client-server environment
US5802278A (en) * 1995-05-10 1998-09-01 3Com Corporation Bridge/router architecture for high performance scalable networking
US5710885A (en) * 1995-11-28 1998-01-20 Ncr Corporation Network management system with improved node discovery and monitoring
US5875329A (en) * 1995-12-22 1999-02-23 International Business Machines Corp. Intelligent batching of distributed messages
FI102346B (en) * 1996-02-05 1998-11-13 Nokia Telecommunications Oy Short message queuing mechanism
US5841973A (en) * 1996-03-13 1998-11-24 Cray Research, Inc. Messaging in distributed memory multiprocessing system having shell circuitry for atomic control of message storage queue's tail pointer structure in local memory
US7877291B2 (en) * 1996-05-02 2011-01-25 Technology Licensing Corporation Diagnostic data interchange
US6298386B1 (en) * 1996-08-14 2001-10-02 Emc Corporation Network file server having a message collector queue for connection and connectionless oriented protocols
US6353834B1 (en) 1996-11-14 2002-03-05 Mitsubishi Electric Research Laboratories, Inc. Log based data architecture for a transactional message queuing system
US6085277A (en) * 1997-10-15 2000-07-04 International Business Machines Corporation Interrupt and message batching apparatus and method
US6058389A (en) 1997-10-31 2000-05-02 Oracle Corporation Apparatus and method for message queuing in a database system
US6643797B1 (en) * 1999-12-14 2003-11-04 Microsoft Corporation Single I/O session to commit a single transaction
US8001017B1 (en) * 2000-03-27 2011-08-16 Hector Franco Supply-chain management system
US20050203673A1 (en) * 2000-08-18 2005-09-15 Hassanayn Machlab El-Hajj Wireless communication framework
US7020688B2 (en) * 2000-09-05 2006-03-28 Financial Network, Inc. Methods and systems for archiving and verification of electronic communications
US6996060B1 (en) * 2001-03-20 2006-02-07 Arraycomm, Inc. Closing a communications stream between terminals of a communications system
WO2002030041A2 (en) * 2000-10-03 2002-04-11 Omtool, Ltd Electronically verified digital signature and document delivery system and method
US6754621B1 (en) * 2000-10-06 2004-06-22 Andrew Cunningham Asynchronous hypertext messaging system and method
US6957267B2 (en) * 2000-12-28 2005-10-18 Intel Corporation Data packet processing
US7415504B2 (en) 2001-02-26 2008-08-19 Symantec Corporation System and method for controlling distribution of network communications
US20020178283A1 (en) * 2001-03-29 2002-11-28 Pelco, A Partnership Real-time networking protocol
AU2002253752A1 (en) * 2001-06-21 2003-01-08 Telefonaktiebolaget Lm Ericsson (Publ) Method for secure file transfer to multiple destinations with integrity check
US7110525B1 (en) * 2001-06-25 2006-09-19 Toby Heller Agent training sensitive call routing system
CA2473475C (en) * 2002-02-04 2017-04-25 Imagine Broadband Limited Media transmission system and method
US7783787B1 (en) * 2002-06-13 2010-08-24 Netapp, Inc. System and method for reprioritizing high-latency input/output operations
US7379421B1 (en) * 2002-07-23 2008-05-27 At&T Delaware Intellectual Property, Inc. System and method for forwarding messages
US20040068479A1 (en) * 2002-10-04 2004-04-08 International Business Machines Corporation Exploiting asynchronous access to database operations
US7392240B2 (en) * 2002-11-08 2008-06-24 Dun & Bradstreet, Inc. System and method for searching and matching databases
US7676034B1 (en) * 2003-03-07 2010-03-09 Wai Wu Method and system for matching entities in an auction
US7409722B2 (en) * 2003-05-01 2008-08-05 Sun Microsystems, Inc. Control status register access to enable domain reconfiguration
US20050021836A1 (en) * 2003-05-01 2005-01-27 Reed Carl J. System and method for message processing and routing
US20050021770A1 (en) * 2003-06-13 2005-01-27 Guy Helm Method for transferring PPP inactivity time in a CDMA2000 network
US20050050139A1 (en) * 2003-09-03 2005-03-03 International Business Machines Corporation Parametric-based control of autonomic architecture
US7676562B2 (en) * 2004-01-20 2010-03-09 Microsoft Corporation Computer system for accessing instrumentation information
US7249229B2 (en) * 2004-03-31 2007-07-24 Gemini Mobile Technologies, Inc. Synchronous message queues
US7496036B2 (en) * 2004-11-22 2009-02-24 International Business Machines Corporation Method and apparatus for determining client-perceived server response time
JP4175547B2 (en) * 2005-03-30 2008-11-05 富士通株式会社 Program, form output method and apparatus
US7886187B2 (en) * 2008-05-21 2011-02-08 International Business Machines Corporation System for repeated unmount attempts of distributed file systems
US8219606B2 (en) * 2010-02-27 2012-07-10 Robert Paul Morris Methods, systems, and computer program products for sharing information for detecting an idle TCP connection

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
CHETLUR M ET AL.: "Optimizing Communication in Time-Warp Simulators", PADS, 1998


Also Published As

Publication number Publication date
NZ542871A (en) 2007-03-30
AU2004217278A1 (en) 2004-09-16
WO2004079930A3 (en) 2005-05-06
AU2004217278B2 (en) 2011-03-17
US20050044151A1 (en) 2005-02-24
EP1606719A4 (en) 2010-04-28
US20140330919A1 (en) 2014-11-06
EP1606719A2 (en) 2005-12-21
US8788591B2 (en) 2014-07-22

Similar Documents

Publication Publication Date Title
US8788591B2 (en) Asynchronous mechanism and message pool
US7249229B2 (en) Synchronous message queues
US7127534B2 (en) Read/write command buffer pool resource management using read-path prediction of future resources
EP1545042B1 (en) Retransmission system and method for a transport offload engine
KR100850254B1 (en) Reducing number of write operations relative to delivery of out-of-order rdma send messages
US9462077B2 (en) System, method, and circuit for servicing a client data service request
US8989200B2 (en) Wireless/LAN router queuing method and system
US20030200363A1 (en) Adaptive messaging
US20060067228A1 (en) Flow based packet processing
EP0909063A2 (en) Mechanism for dispatching data units via a telecommunications network
US20080263171A1 (en) Peripheral device that DMAS the same data to different locations in a computer
EP1454239A2 (en) Fast path message transfer agent
US6621825B1 (en) Method and apparatus for per connection queuing of multicast transmissions
US20060221827A1 (en) Tcp implementation with message-count interface
US9423976B2 (en) System and method of expedited message processing using a first-in-first-out transport mechanism
JP2004199620A (en) File transmitting and receiving system and method using electronic-mail
CN109327402B (en) Congestion management method and device
JP2003188895A (en) Packet communication equipment
JPH0614055A (en) Electronic mail transfer control system
JPH06252950A (en) Multilogic channel control method/device
JPS62293851A (en) Communication control equipment

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): BW GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
WWE Wipo information: entry into national phase

Ref document number: 2004717486

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 2004217278

Country of ref document: AU

Ref document number: 542871

Country of ref document: NZ

ENP Entry into the national phase

Ref document number: 2004217278

Country of ref document: AU

Date of ref document: 20040304

Kind code of ref document: A

WWP Wipo information: published in national office

Ref document number: 2004217278

Country of ref document: AU

WWP Wipo information: published in national office

Ref document number: 2004717486

Country of ref document: EP