WO2001090903A1 - Apparatus, system and method for balancing the load distribution of network servers - Google Patents


Info

Publication number
WO2001090903A1
WO2001090903A1 PCT/US2001/016658
Authority
WO
WIPO (PCT)
Prior art keywords
server
service
request
quality
client
Prior art date
Application number
PCT/US2001/016658
Other languages
English (en)
Inventor
James C. Mitchell
Arun Ramaswamy
Alan N. Bosworth
Original Assignee
Cohere Networks, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Cohere Networks, Inc. filed Critical Cohere Networks, Inc.
Priority to AU2001264844A priority Critical patent/AU2001264844A1/en
Publication of WO2001090903A1 publication Critical patent/WO2001090903A1/fr

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F9/505 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the load
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/50 Network service management, e.g. ensuring proper service fulfilment according to agreements
    • H04L41/5003 Managing SLA; Interaction between SLA and QoS
    • H04L41/5009 Determining service level performance parameters or violations of service level contracts, e.g. violations of agreed response time or mean time between failures [MTBF]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/01 Protocols
    • H04L67/10 Protocols in which an application is distributed across nodes in the network
    • H04L67/1001 Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H04L67/1004 Server selection for load balancing
    • H04L67/1008 Server selection for load balancing based on parameters of servers, e.g. available memory or workload
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/01 Protocols
    • H04L67/10 Protocols in which an application is distributed across nodes in the network
    • H04L67/1001 Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H04L67/1004 Server selection for load balancing
    • H04L67/1023 Server selection for load balancing based on a hash applied to IP addresses or costs
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/01 Protocols
    • H04L67/10 Protocols in which an application is distributed across nodes in the network
    • H04L67/1001 Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H04L67/1031 Controlling of the operation of servers by a load balancer, e.g. adding or removing servers that serve requests
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/50 Network service management, e.g. ensuring proper service fulfilment according to agreements
    • H04L41/5041 Network service management, e.g. ensuring proper service fulfilment according to agreements characterised by the time relationship between creation and deployment of a service
    • H04L41/5051 Service on demand, e.g. definition and deployment of services in real time
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/01 Protocols
    • H04L67/10 Protocols in which an application is distributed across nodes in the network
    • H04L67/1001 Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H04L67/1029 Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers using data related to the state of servers by a load balancer

Definitions

  • the present invention relates, in general, to balancing requests for services, or loads, to network servers.
  • in a client-server computing environment, such as the networked environment of the Internet, web sites offer a variety of services to the users (clients) via computer programs operating on one or more servers coupled to the network.
  • a single server hosts the various programs that form a web site, and as each request or "load" from a client is received at the server, the server performs the requested operation and passes data to the client, thereby satisfying the request (i.e., downloading text, audio, or video data to the client for display in the client's network browser program).
  • difficulties can arise in servicing multiple requests from multiple clients for services from a single web site, as the server may not have the processing speed or throughput to service each of the multiple requests in a timely fashion.
  • One conventional approach to address this problem is shown in Fig. 1.
  • Fig. 1 illustrates a client-server environment wherein a plurality of servers 20 is coupled to a network 22, such as the Internet, for providing various services from a single web site to one or more clients 24.
  • a load balancing device 26 employing a conventional "round-robin" algorithm is provided between the servers 20 and the network 22.
  • the servers 20 of the web site are configured as redundant servers, each having the same programs thereon to provide the same services from the web site to the clients 24.
  • the load balancing device 26 passes each new request to the next server in a "round-robin" fashion.
  • such an approach may still suffer from performance difficulties.
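A minimal sketch of the conventional "round-robin" distribution criticized above may make the limitation concrete: each new request simply goes to the next server in a fixed rotation, regardless of how loaded that server actually is. The server names are hypothetical.

```python
from itertools import cycle

class RoundRobinBalancer:
    """Conventional round-robin: rotate through redundant servers in order."""

    def __init__(self, servers):
        self._next = cycle(list(servers))

    def route(self, request):
        # The request content is ignored; so is server load.
        return next(self._next)

balancer = RoundRobinBalancer(["server-A", "server-B", "server-C"])
assignments = [balancer.route(f"req-{i}") for i in range(6)]
# Rotation is blind to load: a slow or backlogged server still receives
# every third request, which is the performance difficulty noted above.
```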
  • a device, also referred to herein as a load balancing device/apparatus, for determining whether a request from a client computing station for a service in a network should be processed by a first server adapted to service the request or by a second server adapted to service the request.
  • the device includes a front end module for receiving the request and translating it into a transparent message format, a coordinating module for determining if the first and second servers are active, and at least one load balancing module, in communication with the first and second servers, for determining whether the first or second server should service the request and passing the request to the appropriate server, as determined thereby.
  • the front end module translates the request into either an XML format (extensible markup language) or a binary format.
  • the load balancing module receives a quality of service metric or other data from the first server and from the second server, and determines whether the first or second server should service the request based in part on the metrics.
  • quality of service includes, but is not limited to, one or more measures or metrics of the responsiveness of a server in satisfying a client's request for service over a network. QoS and the associated metrics provide information or data regarding the total network system response, and may be affected by, for example, server loading, network loading, burst traffic, or the like.
  • the load balancing module can obtain the number of pending requests at the first server, a number representing the time required to service the pending requests by the first server, the number of pending requests at the second server, and a number representing the time required to service the pending requests by the second server. In this example, the load balancing module determines whether the first server should service the request based, at least, on the quality of service metrics obtained from the first and second servers, the number of pending requests at the first server, the number representing the time required to service the pending requests at the first server, the number of pending requests at the second server, and the number representing the time required to service the pending requests at the second server.
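The decision criteria just listed can be sketched as code. This is a hedged illustration only: the patent enumerates the inputs (per-server QoS metric, pending-request count, and time to service those requests) but does not fix an exact combining formula, so the additive score and the "lower is better" convention below are assumptions, as are the server names and numbers.

```python
def backlog(pending_requests, time_per_request):
    """Queue backlog: pending requests times the time to service each."""
    return pending_requests * time_per_request

def choose_server(stats):
    """stats maps server name -> (qos_penalty, pending, time_per_request).
    Returns the server with the lowest combined score (an assumed formula)."""
    def score(item):
        qos_penalty, pending, t = item[1]
        return qos_penalty + backlog(pending, t)
    return min(stats.items(), key=score)[0]

stats = {
    "first-server":  (2.0, 10, 1.5),   # QoS penalty 2.0, 10 jobs at 1.5 s each
    "second-server": (1.0, 4, 2.0),    # QoS penalty 1.0, 4 jobs at 2.0 s each
}
# first-server scores 2.0 + 15.0 = 17.0; second-server scores 1.0 + 8.0 = 9.0
```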
  • the device can also include a first communications interface to the first server for coupling the load balancing module to the first server, a second communications interface to the second server for coupling the load balancing module to the second server, and a third communications interface reserved for the dynamic addition of a third server for coupling the load balancing module to the third server. Additionally, the device can include an additional load balancing module reserved for the dynamic addition of a new service.
  • a system for receiving and servicing a request from a client computing station for a service in a network includes a first server adapted to service the request, a second server adapted to service the request, and a device for determining if the request should be processed by the first server or the second server.
  • the device includes a front end module for receiving the request and translating it into a transparent message format, a coordinating module for determining if the first and second servers are active, and at least one load balancing module, in communication with the first and second servers, for determining whether the first server should service the request and, if so, passing the request to the first server.
  • the first server and second server each have an input queue for tracking the pending requests to be processed by the first server, and each maintain a list of pending requests and a number corresponding to the time for completing each of the pending requests.
  • the system preferably includes a quality of service agent operating at the client, a quality of service agent operating on the first server adapted to communicate with the quality of service agent operating at the client, and a quality of service agent operating on the second server also adapted to communicate with the quality of service agent operating at the client.
  • the quality of service agent operating on the first server is also adapted to communicate with the load balancing module with a message containing data of the quality of service between the client and the first server.
  • the quality of service agent operating on the second server is adapted to communicate with the load balancing module with a message containing data of the quality of service between the client and the second server.
  • the load balancing module determines whether the first server should service the request based in part on the data of the quality of service between the client and the first server and the data of the quality of service between the client and the second server.
  • a method for distributing a request from a client for a service from a web site having a plurality of servers adapted to service the request includes receiving the request and determining if the service requested is offered by a first server and a second server of the plurality of servers.
  • a "quality of service" (QoS) metric is obtained from the first server, and a quality of service metric is obtained from the second server.
  • the quality of service metric can take the form of a measure of the performance being provided by a particular server to a client, for instance during the duration of the service period (i.e., during the transmission of data from the server to the client).
  • a client agent operating on the client is provided, and a server agent operating on the first server is provided.
  • the client agent transmits a message to the server agent, the message containing a data rate of data transferred from the client to the first server, wherein the data rate is used as a quality of service metric of the first server. This data is used to determine which server should service the request of the client.
  • the number of pending requests at the first server is obtained, as is the time (estimated, actual, or empirical) required to service the pending requests by the first server.
  • the number of pending requests at the second server is obtained, along with the time required to service the pending requests by the second server.
  • the determining step determines whether the first server should service the request based, at least, on the quality of service metrics obtained from the first and second servers, the number of pending requests at the first server, the time required to service the pending requests at the first server, the number of pending requests at the second server, and the time required to service the pending requests at the second server.
  • the time required can be an estimated time, actual time, or empirical time.
  • the quality of service metrics, and other performance characteristics of the system can be remotely accessed if desired.
  • the client's request is translated into a transparent message format usable by the first and second server, such as XML format with a start designator identifying the beginning of the message, a type designator identifying the type of service, and an end designator indicating the end of the message.
  • Binary format can also be used.
  • the method of the present invention permits dynamic additions of additional servers to the system.
  • upon the addition of a new third server to the web site, the presence of the new third server is detected, and it is determined whether the new third server offers the service requested by the client.
  • a quality of service metric is obtained from the new third server and is included in the determination of whether the first server should service the request.
  • the method of the present invention permits dynamic addition of a new service on either an existing server or a new server to the system.
  • the presence of the new service is detected, whereupon the load balancer can offer the new service when it is requested by a client.
  • a quality of service metric is obtained from the server managing the new service and is included in the determination of whether the server offering the new service should receive and process the client request.
  • Fig. 1 illustrates a block diagram of a conventional client-server system having a load balancing device utilizing a conventional "round-robin" algorithm for balancing loads in a network such as the Internet.
  • Fig. 2 illustrates a block diagram of one embodiment of the present invention.
  • Fig. 3 illustrates a distribution of requests/loads internal to a server in accordance with one embodiment of the present invention.
  • Fig. 4 illustrates an example of the logical operations performed by the load balancer in accordance with one embodiment of the present invention.
  • Fig. 5 illustrates an example of the logical operations performed by a server in accordance with one embodiment of the present invention.
  • the load balancing apparatus referred to variously herein as a “load balancer” or a “load balancing device” employs a unique and novel set of decision criteria in determining which server coupled thereto should receive and process a request or "load” from a client over the network.
  • the balancer 30 is an interface between the network 32 and a plurality of servers 34 (two application servers 34A, 34B are shown in the example of Fig. 2).
  • Each server 34A,B has a set of software programs, such as "App1" and "App2" shown in Fig. 2, for providing the services offered by the web site as requested by one or more clients 36 in the network.
  • Each server 34A,B also has an input queue therein, as will be described later with reference to Fig. 3.
  • server 34A provides service “App1” and service “App2”
  • server 34B provides service “App2.”
  • the load balancer 30 of the present invention determines whether server 34A or server 34B should process the client's request for service "App2", and upon such determination, the load balancer passes the request for service "App2" to the selected server.
  • the load balancer 30 shown in Fig. 2 has knowledge of which applications ("App1", "App2") are loaded on which servers 34A,B, so that the load balancer 30 can pass the client's 36 request for a particular service to the appropriate server or set of servers. Furthermore, the load balancer 30 has knowledge of various server-specific "metrics" which are criteria used by the load balancer 30 to determine which server should service a pending request from a client 36. In one example, the load balancer 30 of the present invention receives information from each server 34A,B relating to the number of pending jobs in the input queue of that server, as well as the time required for each job to be completed by that server.
  • the information from the server 34A,B may include percent utilization of CPU cycles, percent utilization of network, and other metrics that the server can collect from its own operating system registry.
  • the load balancer 30 can compute a numerical metric which is the product of the number of pending jobs in the input queue of a server, multiplied by the time required to complete each job. Accordingly, the load balancer 30 then has information relating to the ability of a particular server to process an incoming request or load from a client 36. In another example, the load balancer also receives a "quality of service" value from monitoring processes running on the client and server platforms, described below.
  • the load balancer 30 would pass the next incoming request to server 34B, as its metric indicates that server 34B is more available to handle the incoming request than is server 34A.
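The queue metric described above (pending jobs multiplied by time per job) can be sketched directly. The concrete job counts and durations below are invented for illustration; the text specifies only the product and the rule that the next request goes to the more available server.

```python
def queue_metric(pending_jobs, seconds_per_job):
    """Numerical metric: jobs waiting in the input queue times the time
    the server needs to complete each job (seconds of backlog)."""
    return pending_jobs * seconds_per_job

server_stats = {
    "34A": queue_metric(8, 2.0),   # 16.0 seconds of queued work
    "34B": queue_metric(3, 2.0),   #  6.0 seconds of queued work
}

# Server 34B has the smaller backlog metric, so it is "more available"
# and receives the next incoming request.
next_server = min(server_stats, key=server_stats.get)
```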
  • the load balancer 30 as shown in Fig. 2 has a number of processes and interfaces in accordance with one embodiment of the present invention.
  • a CGI front-end process 40 is provided for receiving data from a client application or network browser 42, and converting the data into a desired format.
  • the CGI front-end process 40 is associated with one or more services provided by application servers 34A,B, assuming that the application servers 34A,B have previously "registered” with the load balancer (registration is described below).
  • the incoming data is converted by the CGI front end process 40 from a plurality of client formats, into a format compatible with the requested application and compatible with the remaining load balancing processes.
  • the CGI front-end process 40 converts the message format into a "transparent messaging" format usable by the load balancing processes. Transparent messaging enables the various internal processes of the load balancer to route and load balance network requests without knowing the content of each message itself. In this way, the message content is transparent to the load balancing system.
  • the format of a transparent message is an encapsulation in which the application specific data is encapsulated in the payload or central portion of the message. Around the payload is a start and type designator in the front portion of the message, and a stop designator/identifier at the back portion of the message.
  • Two embodiments of the transparent messages are shown in Tables 1 and 2. The first embodiment is in a binary format for efficiency and compatibility with binary data. The second embodiment uses the industry-standard XML (extensible markup language) ASCII-based notation for use with strictly ASCII messages and for extensibility to future applications. Table 1. Format example of binary-based transparent messages.
  • the start designator is the hexadecimal equivalent of the character string value "START.”
  • the stop designator is the hexadecimal equivalent value of the character string value "STOP.”
  • the type byte is an 8-bit value that is registered with the CGI front-end process 40 and represents the type of application service requested, corresponding for example to "App1" or "App2" shown in Fig. 2.
  • the remaining message content is formatted for the specific application service being requested. Using this technique, the load balancer 30 does not need to understand the content of the message to be able to forward the message to a compatible application server 34A,B that is available and which can most efficiently process the load.
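The binary framing of Table 1 can be sketched as follows, under stated assumptions: the text fixes the field order (the ASCII bytes of "START", one registered type byte, the opaque application payload, the ASCII bytes of "STOP"), but any other detail here, such as the example type value, is illustrative.

```python
START = b"START"   # start designator: hex equivalent of the string "START"
STOP = b"STOP"     # stop designator: hex equivalent of the string "STOP"

def encode(service_type: int, payload: bytes) -> bytes:
    """Wrap application-specific data in the transparent-message framing."""
    return START + bytes([service_type]) + payload + STOP

def decode(message: bytes):
    """Return (service_type, payload) without interpreting the payload,
    mirroring how the load balancer forwards content it does not understand."""
    if not (message.startswith(START) and message.endswith(STOP)):
        raise ValueError("not a transparent message")
    service_type = message[len(START)]          # the single type byte
    payload = message[len(START) + 1:-len(STOP)]
    return service_type, payload

msg = encode(2, b"application-specific data")   # type 2 is an assumed code
```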
  • the start designator is the XML-compatible string "&lt;ServiceName&gt;".
  • the end designator is the XML-compatible string "&lt;/ServiceName&gt;".
  • the type designator is an XML compatible string located between the start and stop designators without extra spaces. In this format, the start and stop designators identify the location of the type designator within a compatible XML message. Using this technique, the load balancer 30 does not need to understand the content of the message in order to be able to forward the message to a compatible application server 34A,B.
  • XML compatible message formatting provides an extensible and simple method of adding new services to the load balancing system without requiring changes to the load balancing algorithms, software or processes employed. Further, providing transparent messaging results in greater speed and efficiency and eliminates any need for re-compiling due to changes in the content of application's messages.
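The XML variant can be sketched the same way: to route a message, the load balancer only needs to locate the type designator between the start and stop designators, leaving the rest of the message opaque. The example request body is invented.

```python
START_TAG = "<ServiceName>"    # start designator from Table 2
END_TAG = "</ServiceName>"     # end designator from Table 2

def service_type(message: str) -> str:
    """Extract the type designator located between the start and stop
    designators; the message content itself is never interpreted."""
    start = message.index(START_TAG) + len(START_TAG)
    end = message.index(END_TAG, start)
    return message[start:end]

request = "<ServiceName>App2</ServiceName><Body>opaque payload</Body>"
# The balancer reads only "App2" and forwards the whole message unchanged,
# so new services can be added without changing the balancing code.
```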
  • the transparent message is passed to the Main Coordinator process 44, referred to also herein as a coordinating module.
  • the Main Coordinator process 44 decodes the type designator to identify to which load balancing module 46, 48 the message should be forwarded.
  • the Main Coordinator process 44 maintains a list 55 of the load balancing modules 46, 48 and their associated applications (i.e., "Appl", "App2"). If the client's 36 request refers to a service that is to be provided by one of the plurality of servers 34A,B coupled to the load balancer 30, then the Main Coordinator process 44 passes the request to the appropriate load balancing module 46, 48 for further processing.
  • if the Main Coordinator process 44 determines that the client's 36 request is not addressed to one of the plurality of servers 34A,B coupled to the load balancer 30, then, in one example, the Main Coordinator process 44 replies to the client's request with a "service not available" message.
  • the load balancing modules 46, 48 determine which server should service the client's request based on various metrics relating to each server 34A,B, such as quality of service, number of pending jobs, or other decision criteria discussed herein or with respect to Fig. 4, or any combination thereof. For example, the load balancing module 48 of Fig. 2 determines whether a request for "App2" service should be processed by server 34A or server 34B. Upon determining which server should process the service request, the load balancing process 48 forwards the request to the chosen server.
  • the load balancing modules 46, 48 are coupled to each server 34A,B through a plurality of communication interfaces, shown as threads 50, 52, 54, with corresponding threads 56, 58, 60 at the servers 34A,B.
  • the metrics include a calculation of the number of pending jobs in a particular server's input queue multiplied by the time required by the server to complete each job, shown as "Server Stats" 62 in Fig. 2.
  • the load balancing decision process can account for a quality of service (QoS) figure, described below, in making its determination.
  • upon determining to which server 34A,B the client's request should be passed, the load balancing module 46, 48 forwards the client's request through the proper communication interface to the appropriate server. The server 34A,B then places the request in its input queue as described below with reference to Fig. 3.
  • while the load balancing decision process is described as a portion of the functionality of the load balancing module implemented in various processes 40, 44, 46, 48, such functionality can be combined, subdivided, or otherwise arranged differently, and may reside at a portal or the like and be incorporated therein.
  • the quality of service "QoS" figure is provided and tracked throughout the system and provides valuable information to the load balancer 30 in making its determination as to which server 34A,B should process a client's request.
  • quality of service agents 70, 72 are operated on each of the plurality of application servers 34A,B and quality of service agents 74 are operated on each of the client 36 platforms, and communicate QoS messages such as message 75 shown in Fig. 2.
  • the QoS agents on the application server and the client communicate to each other over the duration of the provided service. Each agent sends QoS messages to the other respective agent, essentially reflecting back to the sending side what the receiving side is seeing in terms of network performance.
  • the messages between the respective QoS agents contain a QoS agent identification number, sequence counter, time stamp, and other status information such as CPU percent utilization, average data rate, bits transferred, etc.
  • the QoS agent 74 operated by the client communicates with the respective QoS agent 70, 72 operated by the server via these status messages, intrinsically measuring the quality of the network path therebetween.
  • the QoS agent 70, 72 operated by the application server 34A,B supplies its performance metrics back to the load balancer 30 via a QoS message, which is application server dependent. This QoS message is fed back to the load balancer 30 at the beginning of each new request/load, or more often if desired.
  • the QoS messages of each application server 34A,B are used by the load balancer 30 in its decision process of determining the best or most appropriate application server to handle a new request or load.
  • the instantaneous and average network performance can be gauged by computing the latency of the messages as well as the variance in message latency.
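The latency computation just described can be sketched as follows: per-message latency from send/receive timestamps, then the mean (average performance) and variance (jitter) over a window. The sample timestamps are invented for illustration.

```python
from statistics import mean, pvariance

def latencies(send_times, receive_times):
    """Instantaneous latency of each QoS message, from its timestamps."""
    return [r - s for s, r in zip(send_times, receive_times)]

sent     = [0.00, 1.00, 2.00, 3.00]   # seconds; one QoS message per second
received = [0.05, 1.06, 2.04, 3.25]   # the last message was delayed

lat = latencies(sent, received)
avg_latency = mean(lat)               # gauges average network performance
latency_variance = pvariance(lat)     # gauges jitter; large values matter
# for latency-sensitive services such as streaming voice or video, and can
# trigger rerouting of future requests to other servers or paths.
```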
  • the status information provides a measure of the loads at various points in the networked system, from the clients to the application servers. For instance, if a client user 36 was receiving a very good response time for file downloads, then the status information received from the client pings or messages would show a greatly increasing number of bytes transferred and a high average data rate.
  • the load balancer 30 may ignore the latency variance due to the inherent variable data rate nature of a file download.
  • services such as streaming voice or video would be very sensitive to latency variation.
  • a large variance in latency would, for example, trigger the load balancer 30 to reroute future requests to other servers, possibly using alternative communications paths.
  • Metrics computed from the data and status values in the QoS messages can be used in place of or in combination with the queue metric described above. For instance, average data rate of a server can be divided by the CPU percent utilization.
  • a new client request would be routed to the server with the highest ranking.
  • the described metric could be mathematically divided by the variance of the latency calculated from the ping rate variance.
  • a high variance would reduce the ranking of a server, resulting in a new distribution of message routing.
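The ranking described in the preceding bullets can be sketched directly: average data rate divided by CPU percent utilization, then divided by the latency variance, with the new request routed to the highest-ranking server. The figures below are invented; only the formula's shape comes from the text.

```python
def ranking(avg_data_rate, cpu_pct_utilization, latency_variance):
    """Rank a server: (data rate / CPU utilization) / latency variance.
    A high latency variance drags the ranking down."""
    return (avg_data_rate / cpu_pct_utilization) / latency_variance

servers = {
    "34A": ranking(avg_data_rate=800.0, cpu_pct_utilization=40.0,
                   latency_variance=0.5),   # jittery path: ranking 40.0
    "34B": ranking(avg_data_rate=600.0, cpu_pct_utilization=30.0,
                   latency_variance=0.1),   # steady path: ranking 200.0
}
best = max(servers, key=servers.get)   # new client requests go here
```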
  • the QoS agents 70, 72 and 74 provide feedback to the load balancer 30 as to how well the services are sent by the respective server 34A,B and received by the end user at the client station 36.
  • referring to Fig. 3, a server implementation is shown for a server, such as server 34A of Fig. 2, coupled to the load balancer 30 of the present invention.
  • the server 34A has an input queue 80 for storing requests for services, and a job statistics table 82 for storing data relating to the server metrics.
  • the input queue 80 can be implemented as a global queue for all incoming requests, or as a set of local input queues, each associated with a particular application (such as "App 1" or "App 2”) provided by the server 34A.
  • the server has a front end process/thread 84 and a plurality of processes 86A,B,C for servicing the requests placed in the respective locations of the queue.
  • a front end process/thread 88 and a process 90 are provided.
  • Fig. 3 will be described with respect to a request for "App 1" service through front-end process 84.
  • the appropriate front end thread/process 84 receives the request from the load balancer and places the request in the input queue 80.
  • the input queue 80 is a circular queue having N entries, such that the front end thread/process 84 places an incoming request into the next available location in the input queue 80.
  • the front end thread 84 of the server communicates to the load balancer, as part of the server metrics, that the input queue 80 is "full.”
  • the load balancer avoids passing any further request to the particular server with the full input queue until the load balancer receives a subsequent message that the input queue 80 of the server is again available to accept and process new requests.
  • the "worker processes" 86A,B,C illustrated in Fig. 3 receive tasks to perform from an entry on the input queue 80. Each of these processes 86A,B,C is an executable image providing one of the services that may be requested by a client.
  • When a process 86A,B,C has completed a requested service, it enters an idle state where it waits and periodically checks the input queue for a new service request. If there is a service request in the queue (i.e., the queue is not empty), the process 86A,B,C copies any user parameters in the queue entry from the client user, deletes the queue entry, and begins performing the service requested. The deletion of the queue entry indicates that the slot is available for scheduling or queuing a new entry by the front-end process 84. After completing the requested service, the process 86A,B,C goes back into idle mode to look for a new entry in queue 80.
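The server-side bookkeeping of Fig. 3 can be sketched as follows, under assumed names: a circular input queue of N slots filled by the front-end process, a "full" condition reported to the load balancer, and a worker step that copies a pending entry and deletes it to free the slot. Real worker processes run concurrently; this single-threaded sketch shows only the queue mechanics.

```python
class InputQueue:
    """Circular input queue with N entries, per the Fig. 3 description."""

    def __init__(self, n_slots):
        self.slots = [None] * n_slots
        self.head = 0   # next slot the front-end process will try to fill

    def full(self):
        # When full, the front-end reports this to the load balancer so
        # no further requests are routed here until a slot frees up.
        return all(s is not None for s in self.slots)

    def enqueue(self, request):
        if self.full():
            raise OverflowError("input queue full")
        while self.slots[self.head] is not None:
            self.head = (self.head + 1) % len(self.slots)
        self.slots[self.head] = request

    def take(self):
        """Worker process: copy a pending entry and delete it, which marks
        the slot as available for a new entry; None means 'stay idle'."""
        for i, entry in enumerate(self.slots):
            if entry is not None:
                self.slots[i] = None
                return entry
        return None

queue = InputQueue(n_slots=3)
for req in ["App1 job-1", "App1 job-2"]:
    queue.enqueue(req)
served = queue.take()   # a worker picks up the first pending entry
```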
  • Operation 100 is the idle state of the load balancer, wherein the load balancer waits for a message to be received or operation to be performed.
  • the load balancer determines whether the message is a quality of service (QoS) message in operation 102. If so, then at operation 104, the load balancer calculates the particular metric being used for the balancing comparison.
  • one embodiment of the metric is the average duration time for a job multiplied by the number of jobs in the server input queue.
  • Other metrics as previously described can also be calculated by operation 104.
  • an array is maintained which includes therein a dynamic list of servers, arranged by their availability. The calculated metric is used to sort the array of servers in operation 106 to select which server to send the next service request to. After completion, operation 106 returns control to the idle state to await a new message.
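The metric calculation of operation 104 and the sorted availability array of operation 106 might look like the following sketch. The server records and their values are illustrative assumptions; only the formula (average job duration multiplied by queued jobs) comes from the text.

```python
# Sketch of the QoS metric (operation 104) and the sorted server array
# (operation 106): metric = average job duration * jobs pending in the
# server's input queue; the least-loaded server sorts first.

servers = [
    {"name": "34A", "avg_duration": 2.0, "jobs_queued": 3, "available": True},
    {"name": "34B", "avg_duration": 1.5, "jobs_queued": 2, "available": True},
]

def qos_metric(server):
    """Estimated wait at this server: average duration * queue depth."""
    return server["avg_duration"] * server["jobs_queued"]

def sort_by_load(servers):
    """Operation 106: order servers so the least-loaded comes first."""
    return sorted(servers, key=qos_metric)

ranked = sort_by_load(servers)
# 34B scores 1.5 * 2 = 3.0, beating 34A's 2.0 * 3 = 6.0, so 34B is first.
```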
  • operation 108 determines whether the received message is a server status message.
  • the server status message contains, among other statistics, whether the server queue is full or not full, indicating whether the server is unavailable or available, respectively. If the received message is a server status message, then operation 110 determines whether or not the server is indicating that its input queue is full. If the queue is full, then operation 112 marks a metric array slot associated with the server as unavailable. If the queue is not full, then operation 114 marks the metric array slot associated with the server as available. After completion of operations 112 or 114, control is passed to the idle state 100.
  • operation 116 determines whether or not it is a request for service message. If it is not a request for service message, then the message is discarded and control is passed to idle operation 100. If the received message is a request for service message, then operation 118 determines if there is at least one server available for providing the service. If a server is not available, then operation 120 notifies the originating user that the service is unavailable and suggests that the user try again later. If a server is available, then operation 122 sends the request to the server with the least load, indicated by being at the top of the sorted metric array described with reference to operation 106. After completion of operations 120, 122, control is passed to the idle state 100.
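The load balancer's message handling (operations 100 through 122) can be sketched as a single dispatch function. The message shapes and the metric-array representation below are illustrative assumptions; the branch structure follows the operations described above.

```python
# Sketch of the load balancer's idle-loop dispatch: QoS messages update
# and re-sort the metric array, status messages toggle availability, and
# service requests route to the least-loaded available server.

def handle_message(msg, servers):
    """Dispatch one received message; `servers` is the metric array,
    kept sorted so the least-loaded server is at the top."""
    if msg["type"] == "qos":                       # operations 102-106
        server = next(s for s in servers if s["name"] == msg["server"])
        server["metric"] = msg["avg_duration"] * msg["jobs_queued"]
        servers.sort(key=lambda s: s["metric"])
        return None
    if msg["type"] == "status":                    # operations 108-114
        server = next(s for s in servers if s["name"] == msg["server"])
        server["available"] = not msg["queue_full"]
        return None
    if msg["type"] == "request":                   # operations 116-122
        for s in servers:                          # top of sorted array
            if s["available"]:
                return ("send", s["name"])         # operation 122
        return ("unavailable", None)               # operation 120
    return None                                    # discard unknown message

servers = [{"name": "34A", "metric": 0.0, "available": True},
           {"name": "34B", "metric": 0.0, "available": True}]
handle_message({"type": "qos", "server": "34A",
                "avg_duration": 2.0, "jobs_queued": 3}, servers)
handle_message({"type": "status", "server": "34B", "queue_full": True}, servers)
routed = handle_message({"type": "request"}, servers)
# 34B sorts first but is unavailable, so the request routes to 34A.
```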
  • Operation 130 is the idle state of the front-end process, where the front-end waits for a message or operation to be performed.
  • the front-end determines whether it is a request for service message in operation 132. If it is not a request for service message, then the message is discarded and the process returns to the idle state in operation 130.
  • Operation 134 determines if there is room in the input queue for a new service request. If there is room, an estimate of the time to complete the requested service is made in operation 136.
  • the running average of request service times is calculated in operation 138.
  • This running average can be calculated by at least two methods. First, for a batch service request, such as a streaming video broadcast, the average is calculated as the total amount of time required to process all requests at the server, divided by the total number of requests pending. Second, for an interactive service request, such as a software service, the average is calculated as the total time of some previous number of completed requests, divided by the number of previous requests. The service request and the time estimated are stored in the next open queue slot in operation 140.
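The two averaging methods of operation 138 can be sketched directly. The sample timings are illustrative, and the fixed window size for the interactive case stands in for "some previous number of completed requests."

```python
# Sketch of the two running-average methods: a batch average over all
# pending requests, and an interactive average over a window of recently
# completed requests. Times are in seconds (illustrative values).

def batch_average(pending_times):
    """Batch services (e.g., a streaming video broadcast): total time
    required to process all pending requests, divided by the number
    of requests pending."""
    return sum(pending_times) / len(pending_times)

def interactive_average(completed_times, window=5):
    """Interactive services (e.g., a software service): total time of the
    last `window` completed requests, divided by that count."""
    recent = completed_times[-window:]
    return sum(recent) / len(recent)

batch = batch_average([10.0, 20.0, 30.0])
interactive = interactive_average([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], window=5)
# interactive uses only the last five completions: (2+3+4+5+6)/5
```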
  • the processing loop is finished by decrementing the count of available queue slots in operation 142, sending the time statistics (including the estimated, average, or empirically derived times for completion) to the load balancer in operation 144, sending the number of pending requests for service in operation 146, and returning to idle state 130 to await the next message.
  • the server has communicated to the load balancer the respective metrics for the server, in accordance with the present invention.
  • the load balancer is notified of a full queue in operation 148. As described above, the load balancer will suspend sending any messages to this server until the queue opens up.
  • the front-end process waits a programmed time in operation 150 before checking the queue again in operation 152. The front-end process loops between operations 150 and 152 until a slot in the queue becomes available. When a slot becomes available, a message indicating that the queue is available is sent to the load balancer in operation 154. The front-end process then returns to the idle state 130 until another message is received.
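The front-end side of this protocol (operations 130 through 154) can be sketched as follows. For simplicity the messages "sent" to the load balancer are collected in a list rather than written to a channel, and all names and message shapes are illustrative assumptions.

```python
# Sketch of the front-end process: queue a request and report stats, or
# report a full queue and later announce when a slot frees up.

QUEUE_SIZE = 2
queue = []   # input queue 80
sent = []    # messages that would go to the load balancer

def estimate_time(recent_times):
    """Operations 136/138: running average of recent service times."""
    return sum(recent_times) / len(recent_times)

def on_request(request, recent_times):
    """Operations 132-148: queue the request, or report a full queue."""
    if len(queue) >= QUEUE_SIZE:
        sent.append(("queue_full", None))           # operation 148
        return False
    estimate = estimate_time(recent_times)
    queue.append((request, estimate))               # operation 140
    sent.append(("stats", estimate, len(queue)))    # operations 142-146
    return True

def on_slot_freed():
    """Operations 150-154: when a worker frees a slot, notify the
    load balancer that the queue is available again."""
    queue.pop(0)
    sent.append(("queue_available", None))          # operation 154

on_request("job-1", [1.0, 3.0])   # queued with a 2.0s estimate
on_request("job-2", [1.0, 3.0])   # queued; queue is now full
on_request("job-3", [1.0, 3.0])   # rejected; balancer told "queue_full"
on_slot_freed()                   # balancer told queue is open again
```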
  • the load balancer provides a remote monitoring capability.
  • each server communicates the average time to service requests and the number of pending jobs to the load balancer. In effect, this operation concentrates the server load metrics for the entire network at the load balancer.
  • a remote "dial-in" process could gather the load metrics from one or more of the load balancers in a network to obtain a global view on the performance and load on the entire network.
  • the load balancer is adapted to recognize, on a dynamic basis, the addition of a new server or the replacement of an existing server.
  • the discovery, identification and coordination of the server pool are performed through a dynamic communications system.
  • the load balancer initiates or offers one additional communications channel at all times, shown for example in Fig. 2 as 51 or 57.
  • the new server makes a request to send a message to the load balancer and as a result, finds or discovers the additional channel of a load balancer. This allows the new server to uniquely identify itself to the load balancer and coordinate communications.
  • the load balancer in response to a message over the additional channel, gathers the new server's information and adds a new slot in the server statistics table. After the new server has been recorded by the load balancer, a new additional channel is opened and maintained until the server expressly terminates communications with the load balancer, or is otherwise determined to be absent.
  • the load balancer 30 is adapted to recognize, on a dynamic basis, the addition of a new service in association with either an existing server or a new server.
  • the new service can be, for example, a new capability, function, or utility performed by a server.
  • the discovery, identification and coordination of the new service are performed through a dynamic communication service similar to the aforementioned dynamic server coordination.
  • the Main Coordinator process 44 initiates one additional, generic load balancing process/module 49 to discover and manage a new service.
  • the generic load balancing module 49 initiates or offers a generic communication channel 53.
  • the server managing this new service makes a request to send a message to the generic load balancing module 49 and as a result, finds or discovers the communication channel 53 of the generic load balancing module 49.
  • a generic naming practice is used to facilitate the service to discover the available channel of the generic load balancing module.
  • the generic load balancing module 49 registers a new service name with the Main Coordination process 44 (for example, by using list 55), changes its name and channel name to reflect the new service, and the Main Coordination process 44 initiates yet another new generic load balancing module (not shown) to replace the recently renamed load balancing module 49 in order to support the dynamic addition of yet another service.
  • the generic load balancing process/module 49 dynamically discovers any new servers and operates similarly to load balancing modules/processes 46, 48 once it has been renamed.
  • the newly named load balancing module 49 preferably uses QoS metrics to decide to which server to send service requests.
  • the name of the load balancing module 49 registered with the Main Coordinator process 44 can then be used by client applications to request the new service offered thereby.
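The register-rename-respawn cycle described above can be sketched with a small coordinator class. The class, its method names, and the socket-name derivation are illustrative assumptions; the invariant shown (one generic slot always on offer) is what the text describes.

```python
# Sketch of dynamic service registration: a generic module waits on a
# generic channel; when a new service connects, the module takes that
# service's name and a fresh generic module is spawned in its place.

class Coordinator:
    """Stands in for the Main Coordinator process 44."""

    def __init__(self):
        self.modules = {}          # registered service name -> channel name
        self._spawn_generic()

    def _spawn_generic(self):
        # One generic module/channel (e.g., channel 53) is always offered.
        self.modules["GENERIC"] = "GENERIC_SERVICE_SOCKET"

    def register_service(self, service_name):
        """A new service connects on the generic channel: rename the
        generic module after the service, then spawn a new generic
        module so yet another service can be added later."""
        self.modules[service_name] = f"{service_name}_SOCKET"
        del self.modules["GENERIC"]
        self._spawn_generic()
        return self.modules[service_name]

coord = Coordinator()
channel = coord.register_service("TRIMEDIT")
# A generic slot is immediately re-offered for the next new service.
```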
  • the capability of dynamically adding new services results, in part, from the transparent messaging and QoS metric of embodiments of the present invention.
  • named pipes are used for communications between the load balancer and the servers.
  • sockets can be used.
  • a naming convention can be used to assist the server in opening a communications channel and finding the additional channel associated with the load balancer.
  • the pipe or socket channel will be named after the service that is being load balanced.
  • TRIMEDIT_SOCKET could be used for the Trim Edit function in video content creation.
  • GENERIC_SERVICE_SOCKET could be used for the generic load balancing process/module 49 (shown in Fig. 2) to facilitate the discovery and dynamic recognition of new services.
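The naming convention can be sketched as a pure derivation function: because the channel is named after the service, a server can compute the name to open without prior coordination. The derivation rule below is an assumption consistent with the examples in the text, not a rule the patent specifies.

```python
# Sketch of the channel naming convention: derive a well-known pipe or
# socket name from the service name, with a generic channel as the
# fallback for brand-new services.

GENERIC_SERVICE_SOCKET = "GENERIC_SERVICE_SOCKET"

def channel_name(service):
    """Derive the well-known channel name for a service,
    e.g. 'Trim Edit' -> 'TRIMEDIT_SOCKET'."""
    return service.upper().replace(" ", "") + "_SOCKET"

def channel_for(service=None):
    """A new server opens its service's named channel; a server offering
    a brand-new service falls back to the generic discovery channel."""
    return channel_name(service) if service else GENERIC_SERVICE_SOCKET

trim = channel_for("Trim Edit")   # the Trim Edit video-editing function
new = channel_for()               # a not-yet-registered service
```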
  • the generic load balancing process/module 49 has initiated a new named pipe 53 offered to new services. If a new service were to be connected to the generic load balancing module 49, the new service would make an open call in software to the generic named pipe 53 resulting in a connection to the generic load balancing module 49. This action would initiate the registering of a new service, the load balancing module 49 would begin accepting requests for the new service, and the Main Coordinator process 44 would initiate yet another new generic load balancing module (not shown) to provide for yet another new service.
  • the "App 2" load balancing process/module 48 is currently managing two servers 34A,B.
  • the load balancing process 48 would initiate/maintain a third named pipe 57. If a new server were to be connected to the load balancer, the new server would make an open call corresponding to "App 2" in software, resulting in the connection to this third named pipe 57. This action would add the new server to the server pool and the load balancer would begin to accept and pass service request messages to the new server.
  • Upon completion, the load balancer would initiate/maintain yet another additional named pipe (i.e., a fourth named pipe, not shown) to provide for the dynamic addition of another (i.e., fourth) new server.
  • embodiments of the present invention permit the dynamic addition of new servers to the load balancer 30, or recognize the addition of new services provided by the servers, without having to alter or restart the load balancer 30.
  • the invention can be embodied in a computer program product. It will be understood that the computer program product of the present invention preferably is created in a computer usable medium, having computer readable code embodied therein.
  • the computer usable medium preferably contains a number of computer readable program code devices configured to cause a computer to effect the various functions required to carry out the invention, as herein described.
  • the embodiments of the invention described herein are preferably implemented as logical operations in a computing system.
  • the logical operations of the present invention are implemented (1) as a sequence of computer-implemented steps running on the computing system, or (2) as interconnected modules within the computing system.
  • the implementation is a matter of choice dependent on the performance requirements of the computing system implementing the invention. Accordingly, the logical operations making up the embodiments of the invention described herein are referred to variously as operations, steps, or modules.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer And Data Communications (AREA)

Abstract

The invention concerns an apparatus (30), a system, and a method for determining whether a request from a client computer station (36) for a service on a network should be handled by a first server (34a) or a second server (34b) adapted to handle the request. The apparatus comprises a front-end module (40) for receiving the request and translating it into a transparent message format, a coordination module (44) for determining whether the first and second servers are active, and at least one load balancing module (46), in communication with the first and second servers, for determining whether the first server should handle the request and, if so, passing the request to the first server. The load balancing module receives various metrics from the servers, such as quality of service (QoS) and the number of requests being processed at the servers, to determine whether the first or the second server should handle the request. The apparatus is also designed to permit the dynamic addition of new servers to the apparatus, or to recognize the addition of new services provided by the servers, without having to modify or restart the apparatus.
PCT/US2001/016658 2000-05-24 2001-05-23 Appareil, systeme et procede pour equilibrer la repartition des charges de serveurs de reseau WO2001090903A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2001264844A AU2001264844A1 (en) 2000-05-24 2001-05-23 Apparatus, system, and method for balancing loads to network servers

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US57754500A 2000-05-24 2000-05-24
US09/577,545 2000-05-24

Publications (1)

Publication Number Publication Date
WO2001090903A1 true WO2001090903A1 (fr) 2001-11-29

Family

ID=24309184

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2001/016658 WO2001090903A1 (fr) 2000-05-24 2001-05-23 Appareil, systeme et procede pour equilibrer la repartition des charges de serveurs de reseau

Country Status (2)

Country Link
AU (1) AU2001264844A1 (fr)
WO (1) WO2001090903A1 (fr)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5774660A (en) * 1996-08-05 1998-06-30 Resonate, Inc. World-wide-web server with delayed resource-binding for resource-based load balancing on a distributed resource multi-node network
US6021439A (en) * 1997-11-14 2000-02-01 International Business Machines Corporation Internet quality-of-service method and system
US6023722A (en) * 1996-12-07 2000-02-08 International Business Machines Corp. High-availability WWW computer server system with pull-based load balancing using a messaging and queuing unit in front of back-end servers
US6092178A (en) * 1998-09-03 2000-07-18 Sun Microsystems, Inc. System for responding to a resource request
US6128279A (en) * 1997-10-06 2000-10-03 Web Balance, Inc. System for balancing loads among network servers

Cited By (54)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7089263B2 (en) 1998-02-26 2006-08-08 Sun Microsystems, Inc. Apparatus and method for dynamically verifying information in a distributed system
US7734747B2 (en) 1998-02-26 2010-06-08 Oracle America, Inc. Dynamic lookup service in a distributed system
US8713089B2 (en) 1998-02-26 2014-04-29 Oracle America, Inc. Dynamic lookup service in a distributed system
US6983285B2 (en) 1998-03-20 2006-01-03 Sun Microsystems, Inc. Apparatus and method for dynamically verifying information in a distributed system
US9183066B2 (en) 1998-03-20 2015-11-10 Oracle America Inc. Downloadable smart proxies for performing processing associated with a remote procedure call in a distributed system
US7660887B2 (en) 2001-09-07 2010-02-09 Sun Microsystems, Inc. Systems and methods for providing dynamic quality of service for a distributed system
US7756969B1 (en) 2001-09-07 2010-07-13 Oracle America, Inc. Dynamic provisioning of identification services in a distributed system
US8103760B2 (en) 2001-09-07 2012-01-24 Oracle America, Inc. Dynamic provisioning of service components in a distributed system
EP1349339A3 (fr) * 2002-03-26 2005-08-03 Hitachi, Ltd. Dispositif permettant de relayer données et système qui l'utilise
US7130912B2 (en) 2002-03-26 2006-10-31 Hitachi, Ltd. Data communication system using priority queues with wait count information for determining whether to provide services to client requests
EP1349339A2 (fr) 2002-03-26 2003-10-01 Hitachi, Ltd. Dispositif permettant de relayer données et système qui l'utilise
EP1361513A3 (fr) * 2002-05-10 2003-11-19 Sun Microsystems, Inc. Dispositifs et méthodes pour fournir une qualité de service dynamique dans un système reparti
EP1361513A2 (fr) * 2002-05-10 2003-11-12 Sun Microsystems, Inc. Dispositifs et méthodes pour fournir une qualité de service dynamique dans un système reparti
FR2840703A1 (fr) * 2002-06-06 2003-12-12 Cit Alcatel Application des reseaux actifs pour la repartition de charge au sein d'une pluralite de serveurs de service
EP1370048A1 (fr) * 2002-06-06 2003-12-10 Alcatel Application des réseaux actifs pour la répartition de charge au sein d'une pluralité de serveurs de service
WO2004063946A2 (fr) * 2003-01-06 2004-07-29 Gatelinx Corporation Systeme de communication
WO2004063946A3 (fr) * 2003-01-06 2005-02-24 Gatelinx Corp Systeme de communication
US7792874B1 (en) 2004-01-30 2010-09-07 Oracle America, Inc. Dynamic provisioning for filtering and consolidating events
EP1564637A1 (fr) * 2004-02-12 2005-08-17 Sap Ag Mise en oeuvre d'un système d'ordinateur par l'attribution de services à des serveurs en fonction d'enregistrements de valeurs de charge
EP1766827A1 (fr) * 2004-06-21 2007-03-28 Cisco Technology, Inc. Systeme et procede d'equilibrage de charges dans un environnement de reseau utilisant des retours d'information
WO2006009584A1 (fr) 2004-06-21 2006-01-26 Cisco Technology, Inc. Systeme et procede d'equilibrage de charges dans un environnement de reseau utilisant des retours d'information
EP1766827A4 (fr) * 2004-06-21 2011-08-31 Cisco Tech Inc Systeme et procede d'equilibrage de charges dans un environnement de reseau utilisant des retours d'information
EP1770952A1 (fr) * 2005-09-28 2007-04-04 Avaya Technology Llc Procédé et système d'allocation de resources dans un environnement distribué basés sur l'evaluation de réseau
US8103282B2 (en) 2005-09-28 2012-01-24 Avaya Inc. Methods and apparatus for allocating resources in a distributed environment based on network assessment
US7870395B2 (en) 2006-10-20 2011-01-11 International Business Machines Corporation Load balancing for a system of cryptographic processors
US7890559B2 (en) 2006-12-22 2011-02-15 International Business Machines Corporation Forward shifting of processor element processing for load balancing
US7840680B2 (en) 2008-10-15 2010-11-23 Patentvc Ltd. Methods and systems for broadcast-like effect using fractional-storage servers
US8819260B2 (en) 2008-10-15 2014-08-26 Aster Risk Management Llc Random server selection for retrieving fragments under changing network conditions
US7844712B2 (en) 2008-10-15 2010-11-30 Patentvc Ltd. Hybrid open-loop and closed-loop erasure-coded fragment retrieval process
US7853710B2 (en) 2008-10-15 2010-12-14 Patentvc Ltd. Methods and devices for controlling the rate of a pull protocol
US7827296B2 (en) 2008-10-15 2010-11-02 Patentvc Ltd. Maximum bandwidth broadcast-like streams
US7822855B2 (en) 2008-10-15 2010-10-26 Patentvc Ltd. Methods and systems combining push and pull protocols
US7818441B2 (en) 2008-10-15 2010-10-19 Patentvc Ltd. Methods and systems for using a distributed storage to its maximum bandwidth
US7822869B2 (en) 2008-10-15 2010-10-26 Patentvc Ltd. Adaptation of data centers' bandwidth contribution to distributed streaming operations
US7822856B2 (en) 2008-10-15 2010-10-26 Patentvc Ltd. Obtaining erasure-coded fragments using push and pull protocols
US7818430B2 (en) 2008-10-15 2010-10-19 Patentvc Ltd. Methods and systems for fast segment reconstruction
US7818445B2 (en) 2008-10-15 2010-10-19 Patentvc Ltd. Methods and devices for obtaining a broadcast-like streaming content
US8819261B2 (en) 2008-10-15 2014-08-26 Aster Risk Management Llc Load-balancing an asymmetrical distributed erasure-coded system
US8819259B2 (en) 2008-10-15 2014-08-26 Aster Risk Management Llc Fast retrieval and progressive retransmission of content
US7840679B2 (en) 2008-10-15 2010-11-23 Patentvc Ltd. Methods and systems for requesting fragments without specifying the source address
US8825894B2 (en) 2008-10-15 2014-09-02 Aster Risk Management Llc Receiving streaming content from servers located around the globe
US8832295B2 (en) 2008-10-15 2014-09-09 Aster Risk Management Llc Peer-assisted fractional-storage streaming servers
US8832292B2 (en) 2008-10-15 2014-09-09 Aster Risk Management Llc Source-selection based internet backbone traffic shaping
US8874774B2 (en) 2008-10-15 2014-10-28 Aster Risk Management Llc Fault tolerance in a distributed streaming system
US8874775B2 (en) 2008-10-15 2014-10-28 Aster Risk Management Llc Balancing a distributed system by replacing overloaded servers
US8938549B2 (en) 2008-10-15 2015-01-20 Aster Risk Management Llc Reduction of peak-to-average traffic ratio in distributed streaming systems
US8949449B2 (en) 2008-10-15 2015-02-03 Aster Risk Management Llc Methods and systems for controlling fragment load on shared links
US9049198B2 (en) 2008-10-15 2015-06-02 Aster Risk Management Llc Methods and systems for distributing pull protocol requests via a relay server
US20110191781A1 (en) * 2010-01-30 2011-08-04 International Business Machines Corporation Resources management in distributed computing environment
US9213574B2 (en) * 2010-01-30 2015-12-15 International Business Machines Corporation Resources management in distributed computing environment
WO2020016499A1 (fr) * 2018-07-20 2020-01-23 Orange Procédé de coordination d'une pluralité de serveurs de gestion d'équipements
FR3084181A1 (fr) * 2018-07-20 2020-01-24 Orange Procede de coordination d'une pluralite de serveurs de gestion d'equipements
US11418414B2 (en) 2018-07-20 2022-08-16 Orange Method for coordinating a plurality of device management servers
WO2022193740A1 (fr) * 2021-03-19 2022-09-22 华为技术有限公司 Procédé de traitement de paquets et dispositif associé

Also Published As

Publication number Publication date
AU2001264844A1 (en) 2001-12-03

Similar Documents

Publication Publication Date Title
WO2001090903A1 (fr) Appareil, systeme et procede pour equilibrer la repartition des charges de serveurs de reseau
US11418620B2 (en) Service request management
JP3994057B2 (ja) エッジ・サーバ・コンピュータを選択する方法およびコンピュータ・システム
US7899047B2 (en) Virtual network with adaptive dispatcher
US7257817B2 (en) Virtual network with adaptive dispatcher
TWI230898B (en) Method and apparatus for off-load processing of a message stream
US7207044B2 (en) Methods and systems for integrating with load balancers in a client and server system
TWI224899B (en) Dynamic binding and fail-over of comparable web service instances in a services grid
US6987763B2 (en) Load balancing
US20080320503A1 (en) URL Namespace to Support Multiple-Protocol Processing within Worker Processes
CN109756559B Construction and use method of a distributed data distribution service for embedded airborne systems
US8341262B2 (en) System and method for managing the offload type for offload protocol processing
JP4108486B2 (ja) Ipルータ、通信システム及びそれに用いる帯域設定方法並びにそのプログラム
EP2321937B1 (fr) Équilibrage de charge pour des services
US8488448B2 (en) System and method for message sequencing in a broadband gateway
US20020143874A1 (en) Media session framework using a control module to direct and manage application and service servers
EP0861471A1 (fr) Systeme de logiciel standard personnalise de communications
JPH10214189A (ja) オブジェクト要求ブローカの異なるインプリメンテーション間で通信を実施するブリッジ
US7139805B2 (en) Scalable java servers for network server applications
US7418712B2 (en) Method and system to support multiple-protocol processing within worker processes
JP2006072785A (ja) サービス利用のためのリクエストメッセージ制御方法、および、サービス提供システム
WO2022267458A1 (fr) Procédé, appareil et dispositif d'équilibrage de charge et support de stockage
CN116506526A Satellite data processing method and system based on a configurable protocol parser
CN113873301A Video stream acquisition method and apparatus, server, and storage medium
US7418719B2 (en) Method and system to support a unified process model for handling messages sent in different protocols

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
WWE Wipo information: entry into national phase

Ref document number: 200020033

Country of ref document: SI

REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 69(1) EPC

122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: JP