WO2014171413A1 - Message system for avoiding processing performance degradation - Google Patents
Message system for avoiding processing performance degradation
- Publication number
- WO2014171413A1 (PCT/JP2014/060565)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- message
- server
- data store
- store server
- information
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L43/00—Arrangements for monitoring or testing data switching networks
- H04L43/08—Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters
- H04L43/0805—Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters by checking availability
- H04L43/0811—Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters by checking availability by checking connectivity
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L51/00—User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
- H04L51/21—Monitoring or handling of messages
- H04L51/226—Delivery according to priorities
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L43/00—Arrangements for monitoring or testing data switching networks
- H04L43/08—Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters
- H04L43/0805—Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters by checking availability
- H04L43/0817—Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters by checking availability by checking functioning
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L51/00—User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
- H04L51/21—Monitoring or handling of messages
- H04L51/214—Monitoring or handling of messages using selective forwarding
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L51/00—User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
- H04L51/21—Monitoring or handling of messages
- H04L51/23—Reliability checks, e.g. acknowledgments or fault reporting
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/10—Protocols in which an application is distributed across nodes in the network
- H04L67/1001—Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/50—Network services
- H04L67/55—Push-based network services
Definitions
- the subject matter disclosed in this specification relates to a message server technology.
- a message system that provides a message service includes a plurality of server devices that handle message processing (hereinafter, server devices are referred to as servers).
- Patent Documents 1 and 2 have been proposed as methods for improving the availability and service quality of the message system.
- A method has been proposed in which a load balancer (distribution device) is installed in front of a plurality of servers; the load balancer measures the response time of requests to each server and distributes the server load accordingly (paragraphs 0008 and 0013 of Patent Document 1). Another proposed method distributes messages based on the load state of each server, the garbage collection state of the Java (registered trademark) VM, and the like (paragraphs 0007 and 0028 of Patent Document 2).
- M2M: Machine to Machine
- Patent Documents 1 and 2 have the following problems.
- The first problem is that it is difficult to detect processing performance degradation within a short time, specifically one second or less.
- The maximum response waiting time (response timeout) is often set according to the failover time (several seconds or more) of the switches and routers connected between servers. If the response timeout is shorter than the failover time of such a switch, messages that could have been saved by the failover will time out instead, causing errors in the processing of a large number of messages and potentially leading to problems such as false failure detection and service stoppage.
- In the switch failover operation, when a failure occurs, the path between servers switches to a network that can still communicate, and the message is retransmitted by a retransmission mechanism such as TCP (Transmission Control Protocol), so processing can continue.
- TCP: Transmission Control Protocol
- Patent Document 1 addresses only processing performance degradation that appears as a simple increase in response time, and the method of Patent Document 2 is limited to cases where the cause of the degradation is Java full GC or the like.
- Non-Patent Document 1 discloses a method in which a plurality of servers reach agreement via a network. When processes on multiple message servers cooperate, a degradation in the processing performance of one server also degrades the performance of the cooperating servers. However, not all processes on servers cooperating with the slowed server slow down uniformly; the effect on a given process depends on how the cooperative processes it participates in have slowed. It can therefore be difficult to detect such degradation through a simple threshold on response time. The method of Patent Document 2 can detect a performance drop only on a single server, and thus cannot detect this situation at all.
- To address this, the server that distributes messages maintains, for each distribution destination server, thresholds such as a time and a number of connections that are separate from the response timeout.
- The server group whose processing performance has degraded is identified from the correlation information related to cooperative processing between the servers (hereinafter, this identification is referred to as processing performance degradation determination processing), and the distribution server then distributes messages to servers whose processing performance has not degraded.
- For messages transmitted before the determination to the server group whose processing performance has deteriorated, the distribution server continues to wait for their responses, thereby preventing double transmission of the same message to the data store servers.
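This wait-instead-of-resend behavior can be sketched minimally as follows; the tracker class and its names are illustrative assumptions, not the patent's structures:

```python
class PendingTracker:
    """Sketch: keep waiting on messages already sent to a degraded server,
    rather than resending them elsewhere, to avoid double transmission."""

    def __init__(self):
        self.pending = {}  # message id -> destination server

    def sent(self, msg_id, server):
        self.pending[msg_id] = server

    def should_resend(self, msg_id):
        # a message still awaiting its response must not be sent again
        return msg_id not in self.pending

    def response_received(self, msg_id):
        self.pending.pop(msg_id, None)
```

A message becomes eligible for (re)distribution only after its original response arrives or the (long) response timeout finally expires.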
- The term message in this specification includes mobile phone e-mail, SNS messages, sensor information, data transmitted by things such as cars, and the like.
- the disclosed message system includes a distribution server that distributes messages and a server that processes a plurality of messages.
- The distribution server constantly monitors, for each distribution destination server, parameters such as time and the number of connections used to determine whether processing performance has degraded, and manages correlation information (configuration information) related to the cooperative processing of each server.
- the server group whose processing performance is degraded is detected from this information.
- From the next message onward, the distribution server distributes messages while avoiding the server group whose processing performance has degraded.
- One specific aspect is a system including a distribution server that distributes messages and a plurality of servers that process messages, in which the distribution server has: a function of transferring messages to the message-processing servers; a function of managing the message transfer time and the number of connections for each server; a function of determining that processing performance has degraded when the time or the number of connections exceeds a threshold; a function of acquiring correlation information related to the cooperative processing of each server; a function of identifying, from the detected degradation and the correlation information, the group of servers whose processing performance has degraded; and a function of distributing messages while avoiding that server group.
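The functions enumerated in this aspect can be sketched together in a short example; the thresholds, server names, and the rule that widens degradation to a whole cooperating group are illustrative assumptions rather than the patent's exact method:

```python
import time
from dataclasses import dataclass
from typing import Optional

@dataclass
class ServerStats:
    """Per-destination counters kept by the distribution server (illustrative)."""
    connections: int = 0
    oldest_pending_sent_at: Optional[float] = None  # oldest request awaiting a response

class DistributionServer:
    # Hypothetical thresholds, kept well below the response timeout.
    MAX_CONNECTIONS = 100
    MAX_ELAPSED_SEC = 0.5

    def __init__(self, servers, cooperating_groups):
        self.stats = {s: ServerStats() for s in servers}
        self.groups = cooperating_groups  # correlation info: servers that cooperate

    def degraded_servers(self, now=None):
        now = time.time() if now is None else now
        degraded = set()
        for name, st in self.stats.items():
            too_slow = (st.oldest_pending_sent_at is not None
                        and now - st.oldest_pending_sent_at > self.MAX_ELAPSED_SEC)
            if st.connections > self.MAX_CONNECTIONS or too_slow:
                degraded.add(name)
        # widen the determination to every cooperating group that was hit
        for group in self.groups:
            if degraded & set(group):
                degraded |= set(group)
        return degraded

    def pick_destination(self, now=None):
        healthy = [s for s in self.stats if s not in self.degraded_servers(now)]
        return healthy[0] if healthy else None
```

Because the elapsed-time threshold is far shorter than the response timeout, degradation can be judged in under a second while pending requests are still awaited.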
- In a message system including a plurality of servers, even if some of the servers fail or their processing speed drops, this makes it possible to prevent the processing performance of the entire system from degrading and to avoid partial service stoppage.
- FIG. 1 is a block diagram illustrating a schematic system configuration in a first embodiment.
- FIG. 2 is a block diagram illustrating a configuration of the message reception server 106 or the message transmission server 108.
- FIG. 3 is a block diagram illustrating a configuration of the data store server 107.
- An example of a message relay sequence of the message system is shown.
- An example of a message acquisition sequence of the message system is shown.
- A block diagram illustrates the schematic system configuration in a second embodiment.
- a message receiving server having a message receiving function
- a plurality of data store servers or key-value stores (Key-Value Stores)
- a data grid for storing messages
- a message transmission server having a message transmission function
- The data store server replicates data, realizing data persistence by holding multiple copies of the same data on multiple data store servers.
- the data store server performs processing such as storing, updating, and deleting data in cooperation between a plurality of data store servers that hold (or should hold) the data.
- the data store server is a key-value store that manages data with key-value pairs.
- Each server in Patent Documents 1 and 2 does not need to hold information such as its processing state, that is, it is stateless, whereas the data store server of the present embodiment holds such state information, that is, it is stateful.
- A stateful server requires processing such as taking over its state (data) in the event of a failure, which complicates the system, but such servers are often used in systems that require high availability.
- Because the data store server of this embodiment is stateful, a configuration that differs from Patent Documents 1 and 2, this specification also describes problems that arise when processing performance degrades in a stateful, highly available system, and their solutions, which are not mentioned in Patent Document 1 or Patent Document 2.
- messages such as e-mails of mobile communication carriers are targeted as examples of mass messages.
- FIG. 1 is a block diagram showing the system configuration of the message system of this embodiment.
- the message system is configured in the carrier equipment network 103, and includes a message reception server 106, a data store server 107, and a message transmission server 108.
- the communication terminal 101 is a terminal device capable of data communication such as a mobile phone terminal, a tablet, and a PC, and is connected to the message system of the present embodiment in the carrier equipment network 103 via the wireless network 102.
- the wireless network 102 is a wireless network managed by a mobile communication carrier.
- the carrier equipment network 103 is a network and network equipment that relays communication from the wireless network 102 to the Internet 104 and the message receiving server 106.
- the wireless network 102 and the carrier equipment network 103 are managed by a mobile communication carrier that manages the message receiving server 106 of this embodiment.
- the message transfer server 105 also called MTA (Mail Transfer Agent), is connected to the message system of the present embodiment in the carrier equipment network 103 via the Internet 104, and exchanges messages with the message reception server 106.
- the message transfer server 105 is installed in an equipment network managed by a communication provider such as an Internet provider or another mobile communication carrier.
- the message transfer server 105 performs processing for transmitting a message from another communication carrier that manages the message transfer server 105 to the message receiving server 106.
- The message system of this embodiment includes a plurality of message reception servers 106, data store servers 107, and message transmission servers 108.
- the message receiving server 106 and the data store server 107, the data store server 107 and the message transmitting server 108, the plurality of data store servers 107, and the like are connected in a mesh.
- Each server of the logical configuration 110 may actually be a server device or a virtual machine.
- a plurality of types of servers may be arranged on the same server device with each server of the logical configuration 110 as a server program.
- the message receiving server 106 and the data store server 107 may be arranged on the same server device, or a plurality of data store servers 107 may be arranged on the same server.
- the system configuration of the present embodiment is not limited to FIG. 1 and can be applied to messaging systems having other configurations.
- The message system temporarily stores a message received from the communication terminal 101 or the message transfer server 105 in a storage area called a queue and then performs transmission processing sequentially, that is, it relays messages by a so-called store-and-forward method. This levels the amount of information flowing into the system and allows an immediate response without making the user wait.
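The store-and-forward relay described above can be illustrated with a minimal sketch; the deque stands in for a queue on the data store server, and all names are assumptions for illustration:

```python
from collections import deque

class StoreAndForwardRelay:
    """Minimal sketch: accept a message immediately, transmit later in order."""

    def __init__(self):
        self.queue = deque()  # stands in for the data store server's queue

    def receive(self, message):
        # message reception server: store and acknowledge right away,
        # so the sender does not wait for downstream delivery
        self.queue.append(message)
        return "accepted"

    def transmit_next(self, send):
        # message transmission server: drain sequentially, leveling the inflow
        if not self.queue:
            return None
        message = self.queue.popleft()
        send(message)
        return message
```

Acceptance and transmission are decoupled, which is what levels bursts of incoming traffic.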
- the message reception server 106 is in charge of message reception processing
- the data store server 107 is in charge of holding a queue
- the message transmission server 108 is in charge of message transmission processing.
- the message receiving server 106 performs processing for storing the message received from the communication terminal 101 or the message transfer server 105 in the data store server 107.
- the message transmission server 108 acquires the message stored in the data store server 107 and transmits it to the message destination server such as the message transfer server 105 or a server that relays the message to the destination server.
- The data store server 107 manages data as key-value pairs; multiple data store servers 107 hold multiple copies of the same data (key and value), and the data store server 107 processes data requests from the message receiving server 106 and the message transmitting server 108.
- Processing specific to a mobile communication carrier's message system, such as authentication processing, billing processing, message conversion processing, and congestion control, may be performed by any of the message receiving server 106, the data store server 107, and the message transmission server 108.
- An example will be described in which the message transmission path is, in order, the communication terminal 101, the message reception server 106, the data store server 107, and the message transmission server 108; however, the embodiment is also applicable to other transmission paths.
- The applicable range of the messaging system disclosed in this embodiment is not limited to e-mails and short messages; it can also be applied to devices such as sensors, cars, and meters connected to the wireless network 102 and to the messages (or data) they transmit. This embodiment can also be applied to network forms such as a wired network or a smart grid instead of the wireless network 102.
- FIG. 2 shows a hardware configuration of an information processing apparatus that implements the message reception server 106 and the message transmission server 108.
- An information processing apparatus that implements the message reception server 106 or the message transmission server 108 includes a processor 202, a volatile memory 207, a disk 209 as a nonvolatile storage unit, an input/output circuit interface 203 for transmitting and receiving data to and from the carrier equipment network 103, and an internal communication line such as a bus connecting them. To reduce the impact of failures, the input/output circuit interface 203 may be connected to two or more networks.
- the volatile memory 207 includes a message processing program 204 and a data group 205.
- The message processing program 204 includes a distributed processing unit 210, which distributes and stores messages (data) across a plurality of data store servers 107, and various control programs that implement message processing, and is executed by the processor 202.
- The contents of the message processing program 204 differ between the message receiving server 106 and the message sending server 108, but the message processing program 204 can also be configured to have the functions of both.
- The message processing program 204 may be stored in advance in the volatile memory 207 or the disk 209, or may be loaded into the volatile memory 207 or the disk 209 via a removable storage medium or a communication medium (that is, a network, or a digital signal or carrier wave propagating through the network), not shown.
- the disk 209 further stores data such as a log output by the message processing program 204 and a setting file of the message processing program 204.
- the contents described below are realized as functions of the message reception server 106 or the message transmission server 108 when the program included in the distributed processing unit 210 is executed by the processor 202.
- The data group 205 used by the message processing program 204 is illustrated as a separate component from the message processing program 204 from a functional viewpoint, but it may be included in the message processing program 204.
- The data group 205 includes data store server operation setting information 221, data store server configuration information 222, data store server conference information 223, a processing performance degradation determination condition 231, resource regulation value information 232, virtual queue information 233, distribution method information 234, acquisition method information 235, and data store server state information 250.
- The data store server operation setting information 221 stores information such as the method of holding data among the plurality of data store servers 107 and the operation settings of the data store servers 107: for example, how many data store servers 107 hold each piece of data (data multiplicity), how data is stored and managed among the multiple data store servers 107 (data holding method), and how the data store server 107 operates when processing each received request type (per-request operation settings).
- As the data holding method, either a consistency-maintaining type, which keeps the data in the same state among multiple data store servers 107, or an availability-maintaining type, which prioritizes service continuity of the data store servers 107 at the cost of allowing data states to be temporarily inconsistent, is set.
- For each request type, such as data storage, acquisition, update, deletion, and comparison, information is stored that determines with how many data store servers 107 the data store server 107 cooperates, and on how many data store servers 107 the processing must complete for the request to be considered successful.
- In this embodiment, the data multiplicity in the data store server operation setting information 221 is 3, and the consistency-maintaining type is used.
- As the per-request operation settings, storage, update, and deletion requests are set to succeed when the data has been stored on three data store servers 107, and acquisition requests are set to acquire data from one data store server 107.
- the multiplicity of data may be other than 3, and an availability maintaining system can be configured.
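The per-request success rule of this example (storage succeeds once three data store servers hold the data, acquisition reads from one) can be sketched as follows; the dictionary and helper names are illustrative assumptions:

```python
# Per-request operation settings mirroring the embodiment's example:
# data multiplicity 3, consistency-maintaining type.
REQUIRED_ACKS = {"store": 3, "update": 3, "delete": 3, "get": 1}

def request_succeeded(request_type, ack_count):
    """A request succeeds once enough data store servers have completed it."""
    return ack_count >= REQUIRED_ACKS[request_type]

def replicate_store(replicas, key, value):
    """Write to every replica; succeed only when all three stored the data."""
    acks = 0
    for store in replicas:  # each replica modeled as a plain dict
        store[key] = value
        acks += 1
    return request_succeeded("store", acks)
```

An availability-maintaining configuration would simply lower the required acknowledgment counts.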
- the distributed processing unit 210 can realize processing performance deterioration determination processing of the data store server 107, which will be described later, in accordance with the type and processing form of the data store server 107.
- The correlation information between the data store servers 107 consists of information about which data store server 107 in the message system holds the data for which keys (hereinafter, assigned key range information), and information related to the cooperative processing between the data store servers 107, such as whether a data store server 107 is a master or a slave for a given key range.
- the correlation information may be generated based on conference information described later.
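A lookup over such assigned key range information might look like the following sketch; the range boundaries, server names, and master/slave layout are hypothetical examples, not values from the patent:

```python
import bisect

# Hypothetical assigned key range information: sorted upper bounds of key
# hash ranges, each with its master and slave data store servers.
RANGE_BOUNDS = [100, 200, 300]
RANGE_SERVERS = [
    {"master": "ds1", "slaves": ["ds2", "ds3"]},  # hashes 0..100
    {"master": "ds2", "slaves": ["ds3", "ds1"]},  # hashes 101..200
    {"master": "ds3", "slaves": ["ds1", "ds2"]},  # hashes 201..300
]

def servers_for(key_hash):
    """Find which data store servers are in charge of a key's hash."""
    idx = bisect.bisect_left(RANGE_BOUNDS, key_hash)
    return RANGE_SERVERS[idx]
```

The distribution server consults this mapping both to route requests and to know which servers cooperate on the same data.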
- The operation information includes, in addition to the operating status of each data store server 107, information indicating at what multiplicity the data is currently held for each key range included in the correlation information (for example, even if the data multiplicity is set to 3, if one of the data store servers 107 holding the data stops due to a failure or the like, the data multiplicity becomes 2).
- The data store server conference information 223 is information determined, for example, by a method called a gossip protocol, based on information about the data store servers exchanged between the data store servers 107; like the data store server configuration information 222, it includes the operating status of the data store servers 107 and correlation information.
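A gossip-style exchange of the kind that could produce such conference information can be sketched as a per-server merge by version number; the view structure (server name mapped to a version and a status) is an assumption for illustration only:

```python
def merge_views(local, remote):
    """Gossip-style merge: keep, per server, the entry with the newer version.

    Each view maps server name -> (version, status); a minimal sketch only.
    """
    merged = dict(local)
    for server, (version, status) in remote.items():
        if server not in merged or version > merged[server][0]:
            merged[server] = (version, status)
    return merged
```

Repeated pairwise merges of this kind let all data store servers converge on a shared view of operating status.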
- the distributed processing unit 210 updates the data store server configuration information 222 by acquiring the data store server configuration information periodically transmitted by the data store server 107 by multicast or the like.
- the distributed processing unit 210 uses one of the data store server conference information 223 and the data store server configuration information 222 to determine a decrease in processing performance of the data store server 107 to be described later, but both may be used.
- In this embodiment, the data store server configuration information 222 is used.
- the processing performance deterioration determination condition 231 is a condition (threshold value) for the distributed processing unit 210 to determine the processing performance deterioration of the data store server 107.
- Conditions such as a storage request processing performance degradation determination condition 240A and an acquisition request processing performance degradation determination condition 240B are held for each request type.
- the processing performance degradation determination condition 231 includes a processing elapsed time 241, a connection number 242, a simultaneous processing number 243, a transmission waiting number 244, and a response time 245 for each request type.
- The distributed processing unit 210 compares the values obtained from its communication processing with the data store server 107 and from the data store server configuration information 222 (the current value 255 of the determination target, described later) with the processing performance degradation determination condition 231; if the values exceed the condition, it determines that the processing performance of the data store server 107 has degraded.
- Each parameter of the processing performance degradation determination condition 231 is a threshold to be compared with the corresponding average value, together with the minimum number of times the threshold must be exceeded before the distributed processing unit 210 determines that processing performance has degraded.
- In the connection number 242, a threshold for the number of connections the distributed processing unit 210 maintains to the data store server 107 is described.
- In the simultaneous processing number 243, a threshold for the number of processes the distributed processing unit 210 executes simultaneously, such as the number of processes or threads, is described.
- In the transmission wait number 244, a threshold for the number of messages waiting to be transmitted from the distributed processing unit 210 to the data store server 107 is described.
- the response time 245 is a threshold value of the time (average value) when the distributed processing unit 210 transmits a request to the data store server 107 and receives a response.
- Unlike the response time 245, which is a measured value, the processing elapsed time 241 is the time that has elapsed since the distributed processing unit 210 transmitted the request currently being processed without yet receiving a response from the data store server 107.
- This time is set shorter than the response timeout.
- A server that processes messages has a response timeout value during which it continues to wait for a response after sending a message to an external server; if a response is received within the response timeout, it is treated as a normal response and included in the response time 245, and if it is received after the response timeout, the response is treated as an error.
- This embodiment assumes a system in which the response time 245 is set in the order of several milliseconds to several seconds or less, and the response timeout is set in the order of several seconds to several minutes.
- It is desirable to set the processing elapsed time 241 on the order of several milliseconds to one second.
- The processing performance degradation determination condition 231 holds conditions per request type because, depending on the data store server operation setting information 221, the cooperative processing of the data store servers 107 differs for each request type, such as storage and acquisition, and whether a given request type can be executed differs according to the cause of the performance degradation and the operating status of the data store servers 107.
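The per-request-type threshold comparison, including a minimum number of observations before declaring degradation, can be sketched as follows; every threshold value and field name below is an illustrative assumption:

```python
from typing import Dict

# Hypothetical per-request-type thresholds (processing performance
# degradation determination condition 231); keys mirror the parameters
# named in the text: elapsed time, connections, simultaneous processing,
# transmission wait, and average response time.
CONDITIONS: Dict[str, Dict[str, float]] = {
    "store": {"elapsed": 0.5, "connections": 100, "concurrent": 64,
              "send_wait": 1000, "avg_response": 0.2},
    "get":   {"elapsed": 0.3, "connections": 100, "concurrent": 64,
              "send_wait": 1000, "avg_response": 0.1},
}
MIN_HITS = 3  # minimum consecutive observations before declaring degradation

def check(request_type, current, hit_count):
    """Return (degraded, new_hit_count) for one observation of current values."""
    cond = CONDITIONS[request_type]
    over = any(current[k] > cond[k] for k in cond)
    hit_count = hit_count + 1 if over else 0
    return hit_count >= MIN_HITS, hit_count
```

The minimum-hit count keeps a single outlier measurement from triggering a degradation determination.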
- The resource regulation value information 232 holds regulation values for protecting the resources the distributed processing unit 210 uses when transmitting requests to the data store server 107.
- Specifically, these are limits such as the number of requests the distributed processing unit 210 executes simultaneously against the data store server 107, the number of connections, and the number of requests waiting for transmission in the distributed processing unit 210.
- The resource regulation value information 232 holds separate values for each state, such as normal operation and after a processing performance degradation determination. Based on these values, the distributed processing unit 210 avoids exhausting all of its resources on request processing toward a data store server 107 whose processing performance has degraded.
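State-dependent regulation values can be sketched as follows; the limit names and numbers are hypothetical, chosen only to show tighter limits in the degraded state:

```python
# Hypothetical regulation values (resource regulation value information 232):
# tighter limits while a destination is judged degraded, so that pending
# requests to a slow data store server cannot exhaust all resources.
LIMITS = {
    "normal":   {"in_flight": 64, "send_wait": 1000},
    "degraded": {"in_flight": 4,  "send_wait": 50},
}

def may_send(state, in_flight, send_wait):
    """Allow a new request only while both resource limits have headroom."""
    limit = LIMITS[state]
    return in_flight < limit["in_flight"] and send_wait < limit["send_wait"]
```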
- the virtual queue information 233 stores management information for centrally managing the queue data 340 (hereinafter sometimes referred to as a distributed queue) held by each of the plurality of data store servers 107 as virtual queues in the message system.
- In the distribution method information 234, information on the method by which the distributed processing unit 210 distributes messages (at storage time) to the distributed queues held by the data store servers 107 is stored.
- a virtual queue is provided for each message destination in the entire system.
- the distributed processing unit 210 selects the same virtual queue if the messages have the same destination.
- the actual data in the virtual queue is stored in the distributed queue of the data store server 107, and a plurality of distributed queues correspond to one virtual queue.
- the virtual queue includes a plurality of distributed queues.
- the virtual queue information 233 stores correspondence information between each virtual queue and a plurality of distributed queues corresponding thereto, and information for centrally managing information on the plurality of distributed queues.
- the virtual queue information 233 is shared by the distributed processing unit 210 provided in each of a plurality of receiving servers and transmitting servers in the message system.
- any one of the plurality of distributed processing units 210 of the message system updates the virtual queue information 233 and stores it as virtual queue information 331 in the data store server 107.
- the other distributed processing unit 210 periodically acquires the virtual queue information 331 from the data store server 107 and updates the virtual queue information 233 in its own server. Note that the message system of this embodiment holds a plurality of types of virtual queues.
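The relationship between one virtual queue and its backing distributed queues, and the rule that the same destination always maps to the same virtual queue, can be sketched as follows; names such as queue_for and the two-queue layout are illustrative assumptions:

```python
class VirtualQueue:
    """One virtual queue per destination, backed by several distributed queues.

    The structure is illustrative, not the patent's exact layout."""

    def __init__(self, destination, distributed_queues):
        self.destination = destination
        self.distributed_queues = distributed_queues  # name -> list of messages

    def total_depth(self):
        # centrally managed view over all backing distributed queues
        return sum(len(q) for q in self.distributed_queues.values())

virtual_queues = {}

def queue_for(destination):
    """The same destination always maps to the same virtual queue."""
    if destination not in virtual_queues:
        virtual_queues[destination] = VirtualQueue(destination, {"dq1": [], "dq2": []})
    return virtual_queues[destination]
```

Sharing this mapping (as virtual queue information 233/331) is what lets every distributed processing unit 210 see the same virtual queues.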
- The distribution method information 234 stores a distribution method such as key-hash calculation, round robin, or least connections. The distribution method information 234 may also be changed dynamically when processing performance degradation of a data store server 107 is detected by the processing performance degradation determination process.
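Each of the three distribution methods named here can be sketched in a few lines; the helper names are assumptions for illustration:

```python
import hashlib
from itertools import count

def pick_by_key_hash(servers, key):
    """Key-hash distribution: the same key always maps to the same server."""
    digest = int(hashlib.sha256(key.encode()).hexdigest(), 16)
    return servers[digest % len(servers)]

def make_round_robin(servers):
    """Round-robin distribution: cycle through the servers in order."""
    counter = count()
    def pick():
        return servers[next(counter) % len(servers)]
    return pick

def pick_least_connections(servers, connections):
    """Least-connections distribution: choose the least-loaded server."""
    return min(servers, key=lambda s: connections[s])
```

Switching between such functions at runtime corresponds to the dynamic change of the distribution method information 234.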
- The acquisition method information 235 stores information specifying the data store servers 107 (data store server state information 250) from which the message receiving server 106 or the message sending server 108 can acquire messages, and the acquisition priority (details will be described later with reference to a figure). Specifically, it is set whether to acquire from all or only some of the acquirable data store servers 107, and which of the configured data store servers 107 to acquire from preferentially, for example, preferring the data store server 107 that holds the larger number of messages.
- a distributed queue (distributed queue data 340) held by the data store server 107 and an acquisition priority for each distributed queue may be set.
- The data store server state information 250 includes the assigned key range information 251, the active server 252, the operation information 253, the distributed queue list 254, the current value 255 of the determination target, and the data multiplicity 256.
- The assigned key range information 251 describes the key range of the data held by each data store server 107, and the active server 252 describes the IP addresses of the multiple data store servers 107 (including master and slave) currently in charge of that key range.
- The assigned key range information 251 and the active server 252 are created by the distributed processing unit 210 based on the data store server configuration information 222 and are changed dynamically when a data store server 107 fails or the configuration changes.
- the operation information 253 stores server states of a plurality of data store servers 107 including a master and a slave.
- the distributed queue list 254 describes a list of distributed queues included in the range of the assigned key range information 251, and is dynamically changed when the data store server 107 fails or the configuration is changed.
- in the normal state, the distributed processing unit 210 selects the data store server state information 250 whose distributed queue list 254 corresponds to the virtual queue information 233.
- the determination-target current value 255 describes the parameters that are the targets of the processing-performance degradation determination condition 231, and stores the current values corresponding to the processing elapsed time 241, the number of connections 242, the number of simultaneous processes 243, the number of waiting transmissions 244, and the average response time 245.
- the data multiplicity 256 is the multiplicity of data (distributed queue) included in the range of the assigned key range information 251.
- the data multiplicity 256 usually matches the number of operating data store servers 107 that hold the data included in the range of the assigned key range information 251; however, when the data store servers 107 run in a virtual environment or the like, it stores the number of operating data store servers 107 in the real environment.
- because the distributed processing unit 210 can hold the data multiplicity separately from the setting of the data store server 107, flexible control of the multiplicity is possible according to the needs of the application on the message receiving server 106 or of the virtual environment (in general, the data store server 107 cannot control the data multiplicity for each message).
- the data store server status information 250 is created for each assigned key range information 251, but the present invention is not limited to this.
- the data store server status information 250 may be created for each distributed queue.
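As a rough illustration of the structure just described, the fields of the data store server status information 250 could be modeled as follows (the field names and sample values are assumptions made for this sketch, not definitions from the patent):

```python
from dataclasses import dataclass

@dataclass
class DataStoreServerStatus:
    # Mirrors the fields of data store server status information 250.
    key_range: tuple           # assigned key range information 251
    active_servers: list       # active server 252 (master first, then slaves)
    operation_info: str        # operation information 253, e.g. "normal"
    distributed_queues: list   # distributed queue list 254
    current_values: dict       # determination-target current values 255
    data_multiplicity: int     # data multiplicity 256

# One record per assigned key range (values are illustrative).
status = DataStoreServerStatus(
    key_range=("a", "m"),
    active_servers=["10.0.0.1", "10.0.0.2"],
    operation_info="normal",
    distributed_queues=["q1", "q2"],
    current_values={"elapsed_ms": 12, "connections": 1},
    data_multiplicity=2,
)
```

As the text notes, the same record could equally be created per distributed queue instead of per key range.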
- FIG. 3 shows the hardware configuration of the information processing apparatus that implements the data store server 107.
- an information processing apparatus that implements the data store server 107 includes a processor 302, a volatile memory 307, a disk 309 that is a nonvolatile storage unit, an input/output circuit interface 303 for transmitting and receiving data to and from the carrier equipment network 103, and an internal communication line such as a bus connecting them.
- the volatile memory 307 or the disk 309 stores a data store server program 304 and includes a data group 305.
- the data store server program 304 includes various control programs that implement message processing, and these control programs are executed by the processor 302.
- the data store server program 304 may be stored in the volatile memory 307 or the disk 309 in advance, or may be introduced into the volatile memory 307 or the disk 309 via a removable storage medium or a communication medium (not shown), that is, via a network or a digital signal or carrier wave propagated over the network.
- the disk 309 further stores data such as a log output by the data store server program 304 and a setting file of the data store server program 304.
- the contents described below are realized as functions of the data store server 107 when various control programs included in the data store server program 304 are executed by the processor 302.
- the data group 305 includes data store server configuration information 321, data store server consultation information 322, and a data store area 330.
- the data store server configuration information 321 includes the same contents as the data store server configuration information 222 in FIG. However, the data store server configuration information 321 is created and used by the data store server 107, and its data format may differ from that of the data store server configuration information 222. Similarly, the data store server consensus information 322 includes the same contents as the data store server consensus information 223 in FIG.
- the data store server program 304 exchanges the data store server consensus information 322 with the other data store server programs 304 to create the data store server configuration information 321.
- the data store area 330 is an area for storing data (storage request) received by the data store server 107 from the message receiving server 106 (distribution processing unit 210).
- each piece of data (value) stored in the data store area 330 is stored together with its corresponding key; for simplicity, only the data (values) are shown and the keys are omitted.
- the function of the data store server program 304 can be realized even if each information of 321, 322, and 330 is stored in the nonvolatile storage unit 308.
- the data store area 330 includes virtual queue information 331 and a plurality of distributed queue data 340.
- the virtual queue information 331 is the same information as the virtual queue information 233, and is held in the data store server 107 so that a plurality of distributed processing units 210 share virtual queue information in the entire message system.
- the distributed queue data 340 (data store area 330) is held in multiplicate by the plurality of data store servers 107.
- the distributed queue data 340 includes one distributed queue management information 341, a plurality of message data 342, and message related information 343.
- the distributed queue management information 341 is information for managing a plurality of message data 342 and message related information 343 included in the distributed queue data 340, and the data store server program 304 can realize a function as a queue based on this information.
- the distributed queue management information 341 includes the identifier of the distributed queue data 340; information on whether the distributed queue data 340 is a master or a slave; processing-order information such as the message storage order and retrieval order; the maximum number of message data that can be stored in the distributed queue data 340 (or the data size usable by the distributed queue data 340); the number of message data currently stored in the distributed queue data 340 and the data size currently in use; and information such as exclusive control so that a plurality of distributed processing units 210 extract messages one at a time.
- Message data 342 is message data received from the message receiving server 106 and stored.
- the message related information 343 is information such as additional information related to the message data 342.
- the message reception server 106 or the message transmission server 108 uses the message related information 343 to perform message processing.
- FIG. 4 is a diagram showing an example of a message relay sequence of the message system.
- Step 401 to Step 442 show the message reception sequence of the message receiving server 106, with an example in which the processing performance of the data store server 107a deteriorates in step 405.
- the message receiving server 106 receives a message transmitted from the communication terminal 101 (step 401), selects the virtual queue in which the message is to be stored from the message destination and the virtual queue information 233, selects, according to the distribution method information 234, the data store server state information 250 whose distributed queue list 254 includes the distributed queue corresponding to that virtual queue (step 402), and transmits a message storage request to the data store server 107a (step 403).
- the data store server 107a performs storage request processing cooperation (step 404) in order to multiplex messages (data) received between a plurality of data store servers 107 including the data store server 107b.
- in step 405, it is assumed that the processing performance of the data store server 107a deteriorates.
- the message receiving server 106 is waiting for a response to the message storage request 403.
- the message receiving server 106 receives another message transmitted from the communication terminal 101 (step 411), selects a distribution destination (storage destination) (step 412), and transmits a message storage request to the data store server 107. (Step 413).
- due to the processing-performance degradation 405, the message receiving server 106 is in a state where it cannot send a response.
- after step 413, when the message receiving server 106 receives a message from the communication terminal 101 (step 431), the message receiving server 106 selects a distribution destination from the data store server state information 250, which reflects the processing result of step 433 (step 432).
- in step 432, the distributed processing unit 210 of the message receiving server 106 selects the virtual queue in which the message is to be stored from the message destination and the virtual queue information 233, and, according to the distribution method information 234, selects the data store server 107a whose distributed queue list 254 includes the distributed queue corresponding to that virtual queue.
- Step 433 is an example in which the distributed processing unit 210 of the message receiving server 106 determines "processing performance degraded" in the processing-performance degradation determination process performed at regular intervals.
- the processing-performance degradation determination process is performed immediately before the distributed processing unit 210 transmits a request to the data store server 107 (steps 403, 413, 452, and 456), at the timing of receiving a response from the data store server 107 (steps 435, 441, and 453), and at regular intervals on the order of milliseconds; determinations that do not conclude "processing performance degraded" are omitted from the figure.
- the distributed processing unit 210 performs the processing-performance determination process even before the message storage request 413, immediately after the processing-performance degradation 405; however, because the parameters of the processing-performance degradation determination condition 231 have not exceeded their thresholds (or the minimum number of determinations has not been reached), it does not determine that the processing performance is degraded.
- in step 433, the distributed processing unit 210 of the message receiving server 106 performs the following processing. First, it compares the determination-target current value 255 of the data store server state information 250 of the data store server 107a with the processing-performance degradation determination condition 231 to determine processing-performance degradation (step 433).
- specifically, the distributed processing unit 210 of the message receiving server 106 determines whether each parameter stored in the determination-target current value 255 exceeds the threshold of the corresponding parameter in the processing-performance degradation determination condition 231.
- the processing-performance degradation determination condition 231 used in step 433 is the setting for the storage request type; that is, although not explicitly shown, the storage-request processing-performance degradation determination condition 240A is used.
- the distributed processing unit 210 compares the processing elapsed time 241 with the current value of the processing elapsed time included in the determination-target current value 255 (the average of the processing elapsed times of the message storage requests 403 and 413).
- the distributed processing unit 210 compares the number of connections 242 with the current value of the number of connections included in the determination-target current value 255 (the two connections of the message storage requests 403 and 413).
- the distributed processing unit 210 compares the concurrent processing number 243 with the current value of the number of simultaneous processes (the number of processes of the distributed processing unit 210) included in the determination-target current value 255 (the two processes for the message storage requests 403 and 413).
- the distributed processing unit 210 compares the transmission waiting number 244 with the current value, included in the determination-target current value 255, of the number of messages waiting to be transmitted to the data store server 107.
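The threshold comparisons described in these steps can be sketched as follows (the parameter names, threshold values, and the minimum-determination-count guard are illustrative assumptions, not values from the patent):

```python
def is_performance_degraded(current, thresholds, min_count=1, samples=1):
    # Degraded when any monitored value exceeds its threshold, provided
    # at least min_count determinations have been observed (the guard
    # that keeps step 413 from being judged degraded too early).
    if samples < min_count:
        return False
    return any(
        current.get(name, 0) > limit
        for name, limit in thresholds.items()
    )

# Illustrative thresholds for the parameters of condition 231.
thresholds = {
    "elapsed_ms": 500,    # processing elapsed time 241
    "connections": 10,    # number of connections 242
    "concurrency": 8,     # concurrent processing number 243
    "send_waiting": 100,  # transmission waiting number 244
}
```

With these assumed numbers, a request whose elapsed time averages above 500 ms would trip the determination even though no response timeout has occurred yet, which is the behavior the sequence illustrates.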
- the processing performance deterioration determination is performed by checking at the timing when the distributed processing unit 210 receives a response from the data store server 107 or at regular intervals.
- for the data store server 107 (the master in the active server 252) corresponding to the data store server status information 250 determined to have degraded processing performance, the distributed processing unit 210 describes the "processing-performance degraded" state in the operation information 253. In the operation information 253 of the data store server status information 250 corresponding to the data store server 107b, which is included in the active server 252 or is linked by the data store server configuration information 222 to the degraded data store server 107a, it describes "processing performance degraded by the cooperation destination".
- the distributed processing unit 210 determines that the data store server 107a selected in step 432 is in the "processing-performance degraded" state and that the data store server 107b is in the "processing-performance degraded by the cooperation destination" state, and changes the distribution destination to the data store server 107c.
- the "processing-performance degraded" and "processing-performance degraded by the cooperation destination" states in the operation information 253 of the data store server status information 250 are canceled when a processing-performance degradation determination, made at the timing of receiving a response from the data store server 107, at regular intervals, or when the distributed processing unit 210 transmits a request to the data store server 107, finds the values below the thresholds of the processing-performance degradation determination condition 231.
- the parameters for the processing-performance degraded state in the resource regulation value information 232 are applied to a data store server 107 that is in the "processing-performance degraded" or "processing-performance degraded by the cooperation destination" state.
- meanwhile, the message receiving server 106 continues to consume resources such as the number of connections 242 and the number of simultaneous processes 243 with respect to the data store server 107a.
- by applying the degraded-state parameters, which are lower than the normal values of the resource regulation value information 232, the distributed processing unit 210 realizes resource protection when the processing performance is degraded.
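The resource regulation just described, switching to stricter limits while a server is marked degraded, might look like the following sketch (the state names and limit values are assumptions for illustration):

```python
NORMAL_LIMITS = {"connections": 10, "concurrency": 8}    # normal values of 232
DEGRADED_LIMITS = {"connections": 2, "concurrency": 1}   # degraded-state values of 232

def effective_limits(operation_info):
    # While a server is marked degraded (directly or via its cooperation
    # destination), the stricter degraded-state limits apply, protecting
    # the message receiving server's own connections and processes.
    degraded_states = {"degraded", "degraded-by-cooperation"}
    return DEGRADED_LIMITS if operation_info in degraded_states else NORMAL_LIMITS
```

When the degradation state is canceled, the normal limits apply again automatically, matching the cancellation behavior described above.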
- the message receiving server 106 transmits a message storage request to the data store server 107c (step 434); the data store server 107c stores the message data 342 and the message related information 343 in the distributed queue data 340 that matches the storage-destination distributed queue included in the received message storage request, and transmits a response indicating successful storage (step 435). (The data store server 107c also cooperates with other data store servers 107, but this is omitted in FIG. 4.)
- the message reception server 106 transmits a normal response 436 to the message 431 to the communication terminal 101, and normally ends the message reception sequence.
- Steps 441 and 442 show the behavior of the message receiving server 106 when the data store server 107a, whose processing performance deteriorated in step 405, recovers in a short time.
- when the message receiving server 106 receives the successful-storage response 441 before the message storage request 403 times out, it continues processing, transmits a normal response 442 to the message 401 to the communication terminal 101, and ends the message reception sequence normally.
- because the distributed processing unit 210 of the message receiving server 106 manages, for the processing-performance degradation determination condition 231, parameters such as the processing elapsed time 241 that differ from the response time 245, the response timeout and the cooperation-processing timeout value between the data store servers 107, which is shorter than the response timeout, can be set higher than those of the network devices.
- if, as in the prior art, the distributed processing unit 210 made its determination based on the response time 245, the degradation between step 403 and step 441 could not be detected until after the response timeout elapses (after step 441). Conversely, if the distributed processing unit 210 made its determination based on a response timeout shorter than the interval between step 403 and step 441, then step 441, which is normally a normal response, would be treated as an error, and errors would tend to occur frequently.
- by contrast, based on the processing elapsed time 241, the number of connections 242, the number of simultaneous processes 243, and the number of waiting transmissions 244 of the processing-performance degradation determination condition 231, the distributed processing unit 210 can monitor whether the response to a message currently being processed is delayed, preventing the occurrence of a large number of errors and avoiding a service stop of the data store server 107 by reducing the probability of response timeouts.
- Steps 451 to 456 show the message transmission sequence of the message system.
- the distributed processing unit 210 of the message transmission server 108 periodically selects the data store server 107 from which to acquire messages (step 451).
- the acquisition-destination selection 451 is a process similar to the distribution-destination selection; following the acquisition method information 235, the distributed processing unit 210 selects, from the acquirable data store server state information 250, a data store server 107c that is not in the "processing-performance degraded" state. Note that in step 451 the distributed processing unit 210 of the message transmission server 108 may instead start processing upon receiving an acquisition request (event) from another server.
- the distributed processing unit 210 of the message transmission server 108 transmits a message acquisition request to the data store server 107c (step 452), and receives a plurality of messages from the data store server 107 (step 453).
- the message transmission server 108 can collectively acquire the message data 342 and the message related information 343 stored in the plurality of distributed queue data 340 of the data store server 107.
- steps 451 to 453 are executed by the distributed processing unit 210 of the plurality of message transmission servers 108 in the message system (details will be described with reference to FIG. 5).
- the distributed processing unit of the message transmission server 108 attempts to access a plurality of data store servers 107 connected in a mesh form in the message system and acquire a message.
- message acquisition requests 452 transmitted by the distributed processing units 210 of the plurality of message transmission servers 108 are processed on a first-come, first-served basis.
- the message transmission server 108 converts the message received in step 453 so that it can be transmitted to the message transfer server 105, and transmits it (step 454).
- when the message transmission server 108 receives a normal response from the message transfer server 105 (step 455) and confirms the successful transmission of the message, it transmits a message deletion request 456 to the data store server 107 and ends the message transmission sequence.
- FIG. 5 is a diagram showing an example of a message acquisition sequence of the message system.
- FIG. 5 is a part of the message relay sequence of the message system shown in FIG. 4, and the distributed processing unit 210 of the plurality of message transmission servers 108 uses the data store server according to the contents stored in the acquisition method information 235. The sequence which acquires a message from 107 queues is illustrated.
- the data store server 107 specified by the acquisition method information 235 is set statically, or dynamically according to the status of the servers and the network. For example, in order to reduce the network load between the data store server 107 and the message transmission server 108, priority is given to acquisition from the data store server 107 on the same device or from a data store server 107 at a short distance (for example, a small number of hops) on the network.
- the correspondence at message acquisition between a plurality of queues or a plurality of data store servers 107 and a plurality of message transmission servers 108 can be set freely; for example, it is set according to conditions such as the network load, the distance between servers on the network (for example, the number of hops), and the server load. The correspondence can also be changed dynamically according to the situation at the time of a failure.
- first, the distributed processing unit 210 of the message transmission server 108a selects, from among the plurality of associated data store servers 107, the data store server 107a that is set to be acquired from preferentially (step 470), transmits a message acquisition request (step 471), and acquires messages (step 472).
- here, "priority acquisition is set" means a state in which one or more acquisition-related items, such as being first in the acquisition order, a larger number of acquisitions, a shorter acquisition interval (the interval between steps 471 and 474), or a larger number of messages acquirable at one time, are set to a higher priority than for the other message transmission servers. As described above, these items are set in the acquisition method information 235.
- alternatively, each data store server 107 may hold priority information corresponding to the acquisition method information 235 and control the priority by changing the response content (for example, the number of messages returned) for acquisitions from each message transmission server 108.
- FIG. 5 assumes a system configuration in which the data store server 107a is on the same device as the message transmission server 108a and on a different device from the other message transmission servers 108b and 108c.
- in this case, it is desirable that the distributed processing unit 210 of the message transmission server 108a set the interval of the message acquisition request 474 shorter than that of the other message transmission servers 108 on other devices, raising its priority so that it preferentially acquires from the data store server 107a on the same device.
- the distributed processing units 210 of the message transmission servers 108b and 108c, which are not priority acquisition targets and have normal priority, acquire messages from the data store server 107a at the normal acquisition interval.
- even if the message transmission server 108a stops, the distributed processing units 210 of the message transmission servers 108b and 108c continue to acquire the messages of the data store server 107a; that is, a system in which the service does not stop can be constructed.
- even if the message transmission server 108a stops and a large number of messages accumulate in the queue of the data store server 107a, the number of messages acquired can be increased without any new setting, so the influence on the processing performance (throughput) of the service can be reduced.
- for example, a number smaller than the maximum number of messages acquirable at one time may be set as the normal acquisition number, and the message transmission server 108b or 108c may increase the number of messages acquired at one time up to that upper limit. In this way, as in the above example, even if the message transmission server 108a stops and a large number of messages accumulate in the queue of the data store server 107a, the influence on the service processing performance (throughput) is small.
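The batch-size adjustment described above can be sketched as follows (the function name, counts, and backlog threshold are illustrative assumptions):

```python
def fetch_count(normal_count, max_count, backlog, threshold):
    # A non-priority sender normally fetches a small batch, but raises
    # the batch size to the configured upper limit when the queue
    # backlog grows, e.g. after the priority sender 108a has stopped.
    return max_count if backlog > threshold else normal_count
```

No reconfiguration is needed at failover time; the larger batch size takes effect as soon as the backlog is observed, which is the point the paragraph above makes about throughput.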
- in the second embodiment, the data store servers 107 are operated divided into two groups on different networks, making the message system more highly available than in the first embodiment. Differences between the second embodiment and the first embodiment will be described below with reference to FIG.
- the second embodiment adopts the system configuration shown in FIG. 6 instead of FIG.
- the data store server 107 includes 107-1 group and 107-2 group, and each group is connected by different networks (different network devices).
- the message reception server 106 and the message transmission server 108 recognize each group of the data store servers 107-1 and 107-2, and access the data store server 107 by switching the network.
- because each data store server 107 cooperates with other data store servers 107 via the network, if the network is partitioned due to a failure of some network devices, all the data store servers 107 in that network (group) may suspend service.
- by dividing the data store servers 107 into groups on different networks, the probability of a total service stoppage is reduced, and the message receiving server 106 or the message transmission server 108 can avoid a service stoppage by accessing the data store servers 107 of the group that is still operating.
- each queue held by a data store server 107 has a master and slaves within the group to which the queue belongs, as shown in the assigned key range information 251; if there are a plurality of slaves, an order of promotion to master is set.
- when the master fails, one of the data store servers 107 holding the queue as a slave becomes the master for that queue according to the order within the group (is promoted to master) and continues the processing in place of the failed data store server 107.
- the data store servers 107 of different groups may be combined and arranged on the same server device, and the promotion order to the master may be set differently for different groups.
- data store servers 107 of different groups are arranged one by one on the same device.
- the data store server 107-1a and the data store server 107-2a are arranged on the same device, and the data store server 107-1b and the data store server 107-2b are similarly arranged on the other same device.
- the promotion orders to master for the queues held as slaves are set to differ between the group of one data store server 107 and the other group (for example, in reverse order).
- as a result, when a device fails, the next master for each group is a data store server 107 on a different device, so the processing can be continued while suppressing the increase in load on any one device.
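The reversed promotion orders described above can be sketched as follows (device and group names are hypothetical). With one group's order reversed, the next masters chosen after a device failure land on different devices:

```python
def next_master(promotion_order, failed):
    # Promotion: the first surviving server in the group's order
    # becomes master for the queue.
    return next(s for s in promotion_order if s != failed)

# Two groups placed pairwise on the same devices; group 107-2's
# promotion order is the reverse of group 107-1's.
group1_order = ["dev-a", "dev-b", "dev-c"]   # group 107-1
group2_order = ["dev-c", "dev-b", "dev-a"]   # group 107-2 (reversed)
```

If dev-a fails, group 107-1 promotes the server on dev-b while group 107-2 keeps dev-c first, so the two new masters sit on different devices and neither device absorbs both loads.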
- the distribution method information 234 stores the distribution methods to data store servers 107 of the same group, such as key-hash calculation, round robin, and least connection; in the second embodiment, it also stores the distribution method between groups of data store servers 107.
- for example, there is a method of determining, by key-hash calculation, round robin, or least connection, both the group of data store servers 107 that becomes the distribution destination (storage destination) and which data store server 107 within the group stores the message, and there is a method of storing such that the destination group differs each time.
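The "different group each time" storage method mentioned above can be sketched as a simple round robin over the groups (the group labels are illustrative):

```python
import itertools

def intergroup_round_robin(groups):
    # Alternate the destination group on every storage request, so that
    # consecutive messages land in different groups.
    return itertools.cycle(groups)
```

Selecting a server within the chosen group can then reuse any of the intra-group methods (key hash, round robin, least connection) already stored in the distribution method information 234.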
- the active server 252 of the data store server status information 250 stores the IP address of the data store server 107 of the data store server 107-1, 107-2 group.
- the distributed processing unit 210 manages the data store server status information 250 with the assigned key range information 251 and the active server 252 as a set, and by using a data store server 107 (active server 252) on a different network when processing-performance degradation is determined, a service stop of the data store server 107 can be avoided.
- the distributed processing unit 210 can also detect a temporary stop of the data store servers 107 in units of groups from the data store server configuration information 222 or the data store server consensus information 223, and can switch to the other group.
- 101 Communication terminal
- 103 Carrier equipment network
- 105 Message transfer server
- 106 Message reception server
- 107 Data store server
- 108 Message transmission server.
Claims (11)
- 1. A message system comprising: a message server that receives a message from a message transmitting device and delivers the message to a message receiving device; and a data store server that stores either or both of the message and related information concerning the message, wherein the message server comprises: a function of managing the state of each data store server; a function of detecting a decline in the processing performance of the data store server before a response timeout with the data store server occurs; a function of setting, as the storage destination, data store servers excluding the data store server determined to have declined in processing performance; a function of generating either or both items of control information concerning the storage of the message and of the related information; and a function of transmitting either or both of the message and the related information, together with the control information, to the data store server; and the data store server comprises: a function of holding the same data in multiplicate among the plurality of data store servers; a function of processing in cooperation between the data store servers for the multiplicate holding; and a function of holding either or both of the transmitted message and the transmitted related information.
- 2. The message system according to claim 1, wherein the message server comprises a function of acquiring either or both of correlation information on cooperation processing between the data store servers and consensus information based on information exchanged between the data store servers, and the message server comprises a function of changing the storage destination based on the data store server determined to have declined in processing performance and on either or both of the correlation information and the consensus information.
- 3. The message system according to claim 2, wherein the message server comprises: a function of distinguishing mutually independent groups of data store servers; and a function of, when a data store server determined to have declined in processing performance is detected, setting as the storage destination a data store server belonging to a group different from that of the detected data store server.
- 4. The message system according to claim 2, wherein the message server comprises: a function by which the message server manages the multiplicity of data; and a function of changing the processing of the message server according to the multiplicity of data.
- 5. The message system according to claim 2, wherein the message server comprises a function of changing resource regulation when a data store server determined to have declined in processing performance is detected.
- 6. The message system according to claim 2, wherein the message server comprises: a function of operating and managing the queues of the data store servers as one queue for the entire system; a function of managing the one queue of the entire system in association with the queues of a plurality of data store servers; and a function of, when storing a message in the one queue of the entire system, searching for and selecting the corresponding queue of a data store server.
- 7. The message system according to claim 1, wherein the message server comprises a function of judging the elapsed time since a request was transmitted to the data store server and detecting a decline in the processing performance of the data store server when a threshold is exceeded.
- 8. The message system according to claim 1, wherein the message server comprises a function of detecting a decline in the processing performance of the data store server when a threshold on the number of processes or connections executing simultaneously for transmitting requests to the data store server is exceeded.
- 9. The message system according to claim 1, wherein the message server comprises a function of detecting a decline in the processing performance of the data store server when a threshold on the number of messages waiting to be transmitted to the data store server is exceeded.
- 10. The message system according to claim 3, wherein the message server comprises a function of distinguishing, based on either or both of the correlation information and the consensus information, a group of data store servers whose service has stopped, and switching the storage destination to a group whose service has not stopped.
- 11. The message system according to claim 3, wherein a plurality of the data store servers belonging to different groups are arranged on the same device, and the order in which a queue held within a group becomes master is set to differ from that of the other groups.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2015512467A JP6117345B2 (ja) | 2013-04-16 | 2014-04-14 | 処理性能低下を回避するメッセージシステム |
US14/784,626 US9967163B2 (en) | 2013-04-16 | 2014-04-14 | Message system for avoiding processing-performance decline |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2013-085353 | 2013-04-16 | ||
JP2013085353 | 2013-04-16 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2014171413A1 true WO2014171413A1 (ja) | 2014-10-23 |
Family
ID=51731354
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2014/060565 WO2014171413A1 (ja) | 2013-04-16 | 2014-04-14 | 処理性能低下を回避するメッセージシステム |
Country Status (3)
Country | Link |
---|---|
US (1) | US9967163B2 (ja) |
JP (1) | JP6117345B2 (ja) |
WO (1) | WO2014171413A1 (ja) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20120124431A1 (en) * | 2010-11-17 | 2012-05-17 | Alcatel-Lucent Usa Inc. | Method and system for client recovery strategy in a redundant server configuration |
JP2012235220A (ja) * | 2011-04-28 | 2012-11-29 | Hitachi Ltd | Mail system |
Family Cites Families (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6018619A (en) * | 1996-05-24 | 2000-01-25 | Microsoft Corporation | Method, system and apparatus for client-side usage tracking of information server systems |
US20040059789A1 (en) * | 1999-10-29 | 2004-03-25 | Annie Shum | System and method for tracking messages in an electronic messaging system |
GB2362235A (en) * | 2000-05-11 | 2001-11-14 | Robert Benjamin Franks | Method and apparatus for Internet transaction processing |
US8831995B2 (en) * | 2000-11-06 | 2014-09-09 | Numecent Holdings, Inc. | Optimized server for streamed applications |
US7062567B2 (en) * | 2000-11-06 | 2006-06-13 | Endeavors Technology, Inc. | Intelligent network streaming and execution system for conventionally coded applications |
US20040128346A1 (en) * | 2001-07-16 | 2004-07-01 | Shmuel Melamed | Bandwidth savings and qos improvement for www sites by catching static and dynamic content on a distributed network of caches |
AU2002313583A1 (en) * | 2001-08-01 | 2003-02-17 | Actona Technologies Ltd. | Virtual file-sharing network |
US7010598B2 (en) * | 2002-02-11 | 2006-03-07 | Akamai Technologies, Inc. | Method and apparatus for measuring stream availability, quality and performance |
US20030221000A1 (en) * | 2002-05-16 | 2003-11-27 | Ludmila Cherkasova | System and method for measuring web service performance using captured network packets |
US8352360B2 (en) * | 2003-06-30 | 2013-01-08 | Toshiba Global Commerce Solutions Holdings Corporation | Method and system for secured transactions over a wireless network |
US7720864B1 (en) * | 2004-03-25 | 2010-05-18 | Symantec Operating Corporation | Expiration of access tokens for quiescing a distributed system |
US7760654B2 (en) * | 2004-09-24 | 2010-07-20 | Microsoft Corporation | Using a connected wireless computer as a conduit for a disconnected wireless computer |
US8159961B1 (en) * | 2007-03-30 | 2012-04-17 | Amazon Technologies, Inc. | Load balancing utilizing adaptive thresholding |
US20090043881A1 (en) * | 2007-08-10 | 2009-02-12 | Strangeloop Networks, Inc. | Cache expiry in multiple-server environment |
JP2011521385A (ja) * | 2008-05-26 | 2011-07-21 | SuperDerivatives, Inc. | Apparatus, system and method for automated financial-instrument management |
US8572162B2 (en) * | 2008-12-01 | 2013-10-29 | Novell, Inc. | Adaptive screen painting to enhance user perception during remote management sessions |
US20110099507A1 (en) * | 2009-10-28 | 2011-04-28 | Google Inc. | Displaying a collection of interactive elements that trigger actions directed to an item |
JP5404469B2 (ja) | 2010-02-22 | 2014-01-29 | Nippon Telegraph and Telephone Corp | Message processing system, message processing apparatus, and message processing method |
JP2011197796A (ja) | 2010-03-17 | 2011-10-06 | Fujitsu Frontech Ltd | Load balancing control device |
US8719223B2 (en) * | 2010-05-06 | 2014-05-06 | Go Daddy Operating Company, LLC | Cloud storage solution for reading and writing files |
US8832801B1 (en) * | 2012-05-11 | 2014-09-09 | Ravi Ganesan | JUBISM: judgement based information sharing with monitoring |
US9509704B2 (en) * | 2011-08-02 | 2016-11-29 | Oncircle, Inc. | Rights-based system |
US20160035019A1 (en) * | 2014-08-04 | 2016-02-04 | Stayful.com, Inc. | Electronic Marketplace Platform for Expiring Inventory |
2014
- 2014-04-14 WO PCT/JP2014/060565 patent/WO2014171413A1/ja active Application Filing
- 2014-04-14 US US14/784,626 patent/US9967163B2/en active Active
- 2014-04-14 JP JP2015512467A patent/JP6117345B2/ja active Active
Non-Patent Citations (1)
Title |
---|
MASAFUMI KINOSHITA ET AL.: "Throughput Improvement of Mail Gateway in Cooperation with Distributed In-memory KVS", PROCEEDINGS OF THE 2011 IEICE GENERAL CONFERENCE 2, 30 August 2011 (2011-08-30), page 412 * |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2015073234A (ja) * | 2013-10-04 | 2015-04-16 | Hitachi, Ltd. | Message transfer system and queue management method |
JP2016144169A (ja) * | 2015-02-05 | 2016-08-08 | Hitachi, Ltd. | Communication system, queue management server, and communication method |
JP2018526740A (ja) * | 2015-08-24 | 2018-09-13 | Alibaba Group Holding Limited | Data storage method and apparatus for a mobile terminal |
US10776323B2 (en) | 2015-08-24 | 2020-09-15 | Alibaba Group Holding Limited | Data storage for mobile terminals |
Also Published As
Publication number | Publication date |
---|---|
JPWO2014171413A1 (ja) | 2017-02-23 |
US20160261476A1 (en) | 2016-09-08 |
US9967163B2 (en) | 2018-05-08 |
JP6117345B2 (ja) | 2017-04-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11172023B2 (en) | Data synchronization method and system | |
US9154382B2 (en) | Information processing system | |
JP5381998B2 (ja) | Cluster control system, cluster control method, and program | |
US9262287B2 (en) | Computer information system and dynamic disaster recovery method therefor | |
CN112118315A (zh) | Data processing system, method, apparatus, electronic device, and storage medium | |
CN109672711B (zh) | HTTP request processing method and system based on the Nginx reverse-proxy server | |
JP5884892B2 (ja) | Network system, controller, and load balancing method | |
US20130163415A1 (en) | Apparatus and method for distributing a load among a plurality of communication devices | |
US20160234129A1 (en) | Communication system, queue management server, and communication method | |
JP6117345B2 (ja) | Message system for avoiding processing-performance decline | |
JP4767336B2 (ja) | Mail server system and congestion control method | |
US20070294255A1 (en) | Method and System for Distributing Data Processing Units in a Communication Network | |
JP5673057B2 (ja) | Congestion control program, information processing apparatus, and congestion control method | |
CN112492030B (zh) | Data storage method and apparatus, computer device, and storage medium | |
CN113326100A (zh) | Cluster management method, apparatus, device, and computer storage medium | |
US9426115B1 (en) | Message delivery system and method with queue notification | |
CN109688011B (zh) | OpenStack-based agent selection method and apparatus | |
CN111930710A (zh) | Method for big-data content distribution | |
CN110661836B (zh) | Message routing method, apparatus, system, and storage medium | |
KR20120128013A (ko) | Push service providing system and method for reducing network load | |
CN114900526A (zh) | Load balancing method and system, computer storage medium, and electronic device | |
CN110247808B (zh) | Information sending method, apparatus, device, and readable storage medium | |
KR101382177B1 (ko) | Dynamic message routing system and method | |
CN108055305B (zh) | Storage expansion method and storage expansion apparatus | |
CN214959613U (zh) | Load balancing device | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 14784958; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| ENP | Entry into the national phase | Ref document number: 2015512467; Country of ref document: JP; Kind code of ref document: A |
| WWE | Wipo information: entry into national phase | Ref document number: 14784626; Country of ref document: US |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 14784958; Country of ref document: EP; Kind code of ref document: A1 |