WO2007105006A1 - Queuing system, method and device - Google Patents

Queuing system, method and device

Info

Publication number
WO2007105006A1
Authority
WO
WIPO (PCT)
Prior art keywords
queue
manager
customer
service
data
Prior art date
Application number
PCT/GB2007/000952
Other languages
French (fr)
Other versions
WO2007105006A8 (en)
Inventor
John Anderson
Eddie Keane
Rob Walker
Paul Mccready
Original Assignee
Versko Limited
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from GB0605282A external-priority patent/GB0605282D0/en
Application filed by Versko Limited filed Critical Versko Limited
Priority to US12/225,135 priority Critical patent/US20100040222A1/en
Priority to EP07732046A priority patent/EP2018759A1/en
Publication of WO2007105006A1 publication Critical patent/WO2007105006A1/en
Publication of WO2007105006A8 publication Critical patent/WO2007105006A8/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/10 Protocols in which an application is distributed across nodes in the network
    • H04L 67/1001 Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H04L 67/1004 Server selection for load balancing
    • H04L 67/1008 Server selection for load balancing based on parameters of servers, e.g. available memory or workload
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/02 Protocols based on web technology, e.g. hypertext transfer protocol [HTTP]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/50 Network services
    • H04L 67/60 Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources
    • H04L 67/63 Routing a service request depending on the request content or context
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/10 Protocols in which an application is distributed across nodes in the network
    • H04L 67/1001 Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/14 Session management
    • H04L 67/142 Managing session states for stateless protocols; Signalling session states; State transitions; Keeping-state mechanisms

Definitions

  • the present invention relates to a queuing system, method and device, and in particular to one which provides improved website performance by managing client terminal demand and client terminal access to a website.
  • Any service provided by a computer system over a communications network will have limited capability, resulting in a maximum number of customers that can be served per minute.
  • the capability may be limited for technical reasons, such as web service speed or the number of available connections, or may be limited because there are not enough operators to handle the demand for service.
  • a customer participating in an on-line purchase typically follows these steps:
  • 1. the customer browses a website, making requests and reading responses using web browsing software. The pages of the website are normally transferred from the web server to the customer's client machine using unencrypted hypertext transfer protocol (HTTP).
  • 2. the customer requests a payment page, usually by pressing a "buy" button, containing form fields in which they can enter their credit/debit card information. This payment page is normally transferred over the network using secure HTTP (HTTPS).
  • 3. the customer fills in his/her details, and these are sent using HTTPS to a payment gateway. Frequently, the pending transaction is stored in a database. The payment gateway forwards the transaction details to a dedicated credit card network for processing. Assuming that the transaction is authorised, the transaction is stored on a database attached to the payment page and the original web server.
  • 4. the customer is presented with a response indicating that his or her transaction is successful.
  • the problem with too many customer terminals such as web browsers trying to access a computer system is that the computer system tries to process all the requests for service one after another at very high speed. As it becomes busy and reaches its transactional limit (the maximum capability), the computer system denies service to any customer terminal. Consequently, the customer terminal tries again until it is answered. This causes unnecessary repeat requests that exacerbate the situation, creating more load. Eventually the core system grinds to a near halt, or suffers a total failure of service.
  • the server load means that attempts to modify the pages to improve the system "on the fly" may not succeed. If the high load has been anticipated, certain checks (such as credit card authorisation) may be postponed until after the 'sale' to increase performance, causing further work and uncertainty as to the number of tickets actually sold.
  • the server load means that attempts must be made to increase capacity at an additional cost to the vendor and ultimately to the customer. Additional staff must also be hired to cope with the influx of orders and these staff members will have less time to deal with dissatisfied customers.
  • Limits of processing are created by a combination of bandwidth (the connection speed between the core system and user), any hardware component of the core system (such as Database Server, Application Server, Web Server, Payment Server, Content Server), Firewall, Load Balancer or any software component (Database, Application Server, Web Server or Bespoke code).
  • WO2005/112389 discloses a queuing system for managing the provision of services over a communications network.
  • the system provides means for allocating a queue identifier to a request for service and for comparing queue status information and the queue identifier during a subsequent request for service.
  • WO2005/112389 also discloses a means for performing a comparison which determines whether the request for service will be sent to a service host or placed in a managed queue. This document describes a system in which the user is able to make a request for service then disconnect whilst maintaining or being able to resume their place in the queue.
  • a system for managing requests for a service from one or more customer terminals comprising: a queue manager for receiving the requests for the service from the one or more customer terminals via one or more communications channels, the queue manager being adapted to place the requests for service in an ordered queue; a service manager, responsive to the request for service, the service manager being adapted to deliver the service to the one or more customer terminals by means of one or more applications; communication means adapted to pass data between the queue manager and the service manager, the data being related to an allowable volume of customer terminals granted access to the service manager; wherein the queue manager holds the customer terminals not granted access to the service manager in the ordered queue once the allowable volume of customer terminals granted access to the service manager is reached and the communications channels between the queue manager and the customer terminals not granted access to the service manager are held open whilst the customer terminals are held in the ordered queue.
  • the queue manager is connected to one or more software applications defined as being non-core applications.
  • the queue manager is a server.
  • each of the one or more customer terminals connected to the queue manager is uniquely identified when placed in the ordered queue.
  • the queue manager comprises: a request receiver for receiving a request for the service from the one or more customer terminals via the one or more communications channels; and a customer manager for receiving data on the volume of customer terminals connected to the service manager, the data defining the allowable volume of customer terminals granted access to the service manager; wherein the queue manager is adapted to hold open the communications channel with the customer terminal whilst the customer terminal is held in the queue.
  • the communications channel between each of the customer terminals and the queue manager is routed through a firewall.
  • the firewall grants the connection between the customer terminal and the queue manager if capacity is available.
  • the service manager is connected to one or more software applications defined as being core applications.
  • an application can be defined as non-core in one configuration of a system of the present invention and as core in another configuration.
  • the service manager is a server.
  • the queue manager and the service manager are contained in the same server.
  • the communication means sends data to the queue manager which calculates the allowable volume of customer terminals granted access to the service manager and determines whether a customer terminal in the ordered queue can pass to the core applications.
  • the communications means sends data on the allowable volume of customer terminals which has been calculated by the service manager such that the queue manager determines whether a customer terminal in the ordered queue can pass to the core applications.
  • the client terminal cannot have concurrent places in the queue, but can re-join the queue after leaving the queue.
  • a token is issued to the client terminal on leaving the queue to allow the client terminal to access the one or more core applications.
  • the token is issued via the queue manager.
  • the token is issued by the one or more core applications.
  • the token is returned to the system after the client terminal has exited from the one or more core applications.
  • the token holds a unique identifier.
  • the unique identifier may be used to stop multiple queue entries from a single customer terminal.
  • the token identifier may be compared to previous token unique identifiers and suspected duplicates denied access through the gate.
  • the unique identifier includes the customer terminal MAC address.
  • the communications channel is kept open by sending data to the customer terminal periodically.
  • the queue manager sends the data to the customer terminal.
  • the data comprises information on the position of the customer terminal in the queue.
  • the amount of data transferred is significantly less than that transferred when refreshing an internet page.
  • less than one kilobyte of data is transferred.
  • a minimal amount of bandwidth is required to keep open each of the communication channels used in the ordered queue. It is possible to send around 5 bytes of data.
  • the queue manager measures the position in the ordered queue against the instantaneous number of tokens issued within a time frame to calculate the amount of time the customer terminal is likely to have to wait before receiving a token.
  • the data sent to the customer terminal comprises the amount of time the customer terminal is likely to have to wait before receiving a token.
  • the data sent to the customer terminal further comprises the position of the client terminal in the queue.
  • the system may also log detailed performance data about the applications associated with the service manager.
  • the system of the present invention can monitor patterns of events to provide detailed logs that can be post processed and replayed enabling measurement of the events that can lead to system failure. This data may be used to set alarms for a system administrator.
  • multiple queues can be controlled by the system.
  • preference can be given to customer terminals located on one of said multiple queues.
  • subscribers or loyalty club members can have their own separate queue through the queue manager.
  • queues from a plurality of web sites or sections of separate sites may be merged into a single queue.
  • a method for managing requests for service from a customer terminal comprising the steps of: receiving a request for service at a queue manager via a communications channel; either passing the request to a service manager for processing or placing the request in a queue depending upon whether one or more applications associated with the service manager are connected to an allowable number of customer terminals; such that, where the request is placed in a queue the communications channel between the customer terminal and the queue manager is held open whilst the customer terminal is held in the queue.
  • the queue manager is connected to one or more software applications defined as being non-core applications.
  • each of the customer terminals connected to the queue manager is uniquely identified when placed in the ordered queue.
  • the communications channel that connects the customer terminals and the queue manager is routed through a firewall.
  • the firewall grants the connection between the customer terminal and the queue manager if capacity is available.
  • the service manager is connected to one or more core applications.
  • the allowable volume of customer terminals is calculated to determine whether a customer terminal in the ordered queue can pass to the core applications.
  • data is sent by the communications means, said data relating to the allowable volume of customer terminals which has been calculated by the service manager, such that the queue manager determines whether a customer terminal in the ordered queue can pass to the core applications.
  • a token is issued to the client terminal on leaving the queue to allow the client terminal to access the one or more applications associated with the service manager.
  • the token is issued via the queue manager.
  • the token is issued by the one or more applications associated with the service manager.
  • the token is returned to the system after the client terminal has exited from the applications associated with the service manager.
  • the token holds a calculated unique identifier.
  • the unique identifier may be used to stop multiple queue entries.
  • the token identifier may be compared to previous token unique identifiers and suspected duplicates denied access through the gate.
  • the unique identifier includes the customer terminal MAC address.
  • the communications channel is kept open by sending data to the customer terminal periodically.
  • data is sent from the queue manager to the customer terminal.
  • the data comprises information on the position of the customer terminal in the queue.
  • the amount of data transferred is significantly less than that transferred when refreshing an internet page.
  • less than one kilobyte of data is transferred.
  • the position in the ordered queue is measured against the instantaneous number of tokens issued within a time frame to calculate the amount of time the customer terminal is likely to have to wait before receiving a token.
  • the data sent to the customer terminal comprises the amount of time the customer terminal is likely to have to wait before receiving a token.
  • the data sent to the customer terminal further comprises the position of the client terminal in the queue.
  • the system may also log detailed performance data about the applications associated with the service manager.
  • multiple queues can be controlled by the system.
  • preference can be given to customer terminals located on one of said multiple queues.
  • queues from a plurality of web sites or sections of separate sites may be merged into a single queue.
  • a queue manager server comprising: a request receiver for receiving a request for service from a customer terminal via a communications channel; and a customer manager for receiving data on the volume of customer terminals connected to a service manager, the data defining an allowable number of customer terminals granted access to the service manager; wherein the queue manager server is adapted to hold open the communications channel with the customer terminal whilst the customer terminal is held in a queue.
  • the queue manager is connected to one or more software applications defined as being non-core applications.
  • each of the customer terminals connected to the queue manager is uniquely identified when placed in the ordered queue.
  • unique connection of each of the customer terminals is provided by a firewall.
  • the queue manager server is connectable to a service manager located on a client web server, the service manager being connected to one or more software applications defined as being core applications.
  • a communications means sends data to the queue manager which calculates the allowable volume of customer terminals and determines whether a customer terminal in the ordered queue can pass to the core applications.
  • a token is issued to the client terminal on leaving the queue to allow the client terminal to access the one or more applications associated with the service manager.
  • the token is issued via the queue manager.
  • the token is issued by the one or more applications associated with the service manager via the queue manager.
  • the token is returned to the system after the client terminal has exited from the applications associated with the service manager.
  • the token holds a calculated unique identifier.
  • the unique identifier may be used to stop multiple queue entries.
  • the token identifier may be compared to previous token unique identifiers and suspected duplicates denied access through the gate.
  • the unique identifier includes the customer terminal MAC address.
  • the system of the present invention can be used as an on-line shopping system, or in other application areas including, but not limited to, e-commerce, retail or information services.
  • the communications channel is kept open by sending data to the customer terminal periodically.
  • the queue manager sends the data to the customer terminal.
  • the data comprises information on the position of the customer terminal in the queue.
  • the amount of data transferred is significantly less than that transferred when refreshing an internet page.
  • less than one kilobyte of data is transferred.
  • a minimal amount of bandwidth is required to keep open each of the communication channels used in the ordered queue.
  • the queue manager measures the position in the ordered queue against the instantaneous number of tokens issued within a time frame to calculate the amount of time the customer terminal is likely to have to wait before receiving a token.
  • the figure may, for example, be 50 per minute, in which case a client terminal that was 170th in the queue would be served in approximately 3 minutes and 24 seconds.
  • the data sent to the customer terminal comprises the amount of time the customer terminal is likely to have to wait before receiving a token.
  • the data sent to the customer terminal further comprises the position of the client terminal in the queue.
  • the system may also log detailed performance data about the applications associated with the service manager.
  • the system of the present invention can monitor patterns of events to provide detailed logs that can be post processed and replayed enabling measurement of the events that can lead to system failure. This data may be used to set alarms for a system administrator.
  • multiple queues can be controlled by the system.
  • preference can be given to customer terminals located on one of said multiple queues.
  • queues from a plurality of web sites or sections of separate sites may be merged into a single queue.
  • Figure 1 is a graph which shows transactional performance (load) on a web server
  • Figures 2(a) to 2(c) show a block diagram of a first embodiment of the present invention.
  • Figures 3(a) to 3(c) show a block diagram of a second embodiment of the present invention.
  • Figure 4 is a flow diagram that illustrates the method of the present invention.
  • the system of the present invention may be used in e-commerce, for example by supermarkets or online ticket vendors.
  • the system of the present invention may be used by any organisation which experiences or expects to experience a high volume of hits on their website or on part of their website for any reason.
  • the present invention allows the website owner to classify some applications on their website as core applications and some applications on their website as non-core applications.
  • the non-core applications are those which a user who is using a customer terminal is able to browse prior to entering a queue and the core applications are those which the users can only access after having been in the queue if a pre-defined maximum load on the core applications has been reached.
  • the customer terminal may be a personal computer, personal cellular telephone or any device capable of making an internet connection to a website.
  • Figure 1 is a graph which shows transactional performance (load) on a web server by plotting the number of transactions 26 against the number of users 28.
  • the optimum 30 and maximum 32 number of users of the system is shown with respect to points on curve 22.
  • the system's downward spiral of performance begins in the area of the graph after the optimum transaction values on curve 22.
  • the flat curve 24 shows the performance of a system in accordance with the present invention.
  • a system 1 having ten customer terminals denoted by reference numerals 3, 5 and 7.
  • the customer terminals are connected to a queue manager module 9 which can be loaded on a central server.
  • the reference numeral 3 denotes customer terminals which are contained within a queue
  • the customer terminals 5 are those which have been unable to obtain access to the system
  • the customer terminals 7 are those which have access to the core applications 21 via the service manager 17.
  • the customer terminals 3 which are connected to the queue manager 9 are connected via a socket connection 11. Once connected to the queue manager 9, the customer terminals 3 may access one or more non-core applications 12. Such non-core applications may typically be the home page of a website or other pages where it is anticipated that a low number of users will attempt to gain access to the specific pages.
  • a customer manager module 13 which in this example is configured to communicate with the service manager 17 and particularly the throughput manager module 19 contained within the service manager 17.
  • the customer manager module is configured to send small amounts of information, typically less than 100 bytes and often less than 10 bytes, to each customer held in the queue. This information concerns the length of time that the customer terminal will be held in the queue and the position of the customer terminal within the queue.
  • This data is pushed to the customer periodically and acts to keep the socket connection between the customer terminal 3 and the socket 11 of the queue manager 9 open so that the customer terminal is in the queue.
  • the frequency at which the data is pushed can be set by the system to ensure that the connection between the customer terminal and the queue manager is maintained.
  • the customer manager module assists with measurement of the position in the queue against the instantaneous number of tokens issued by the core applications 21 via the queue manager 9. In one example 50 tokens per minute were issued. Therefore, a user who is 170th in a queue would be served in approximately 3 minutes 24 seconds.
  • the customer manager 13 of the queue manager 9 also receives data on the load experienced by the core applications 21. This data is gathered by the throughput manager 19 and provided via the communications link 15 to the customer manager 13. In one example of the present invention, data on the load experienced by the core applications 21 is processed by the throughput manager 19 and communicated to the customer manager 13.
  • core application load data is passed to the customer manager 13 via the throughput manager 19 without being processed and all the processing of this data to determine whether the core applications have exceeded or met a pre-defined maximum load or use account is done by the customer manager 13.
  • the number of users that may be attached to the queue in the system is determined by the number of one to one socket connections between the queue manager system and the customer terminals that wish to have access to the system.
  • the system can maintain connections to individual customer terminals using a very low bandwidth. Therefore, a large number of customer terminals may be connected to the system at any one time.
  • FIG. 2a shows 10 customer terminals. Each of the users of these terminals wishes to use a website. As described above, the contents of the website can be divided into core applications and non-core applications.
  • the users enter the website via a queue manager 9 which contains a number of non-core software applications associated with the website.
  • the queue manager 9 communicates with the service manager 17 (which contains the core applications) .
  • the service manager 17 measures throughput and provides data to the queue manager on whether the applications associated with the service manager have spare capacity. If there is no spare capacity, users are held in an ordered queue.
  • the first customer terminal entering the queue will be the first one to leave once there is spare capacity in the core application.
  • the communications line 4 between the customer terminal and the queue manager 9 is kept open whilst the customer terminal 3 is in the queue.
  • a message is pushed to the customer terminal 3 informing it of its position in the queue and the length of time the system expects it to take to serve the customer.
  • Figure 2(b) shows the process whereby a customer terminal changes status from one which is queued (denoted by reference numeral 3 in figure 2(a)) to one which has gained access to the core application (denoted by reference numeral 7 in figure 2(b)).
  • a space opens up in the queue to allow an additional customer terminal to enter the queue.
  • figure 2(a) contains five customer terminals that were unable to gain access to the queue manager whereas figure 2(b) contains four such customer terminals 5.
  • Figure 2(c) shows a further progression of the use of the system whereby the customer terminal second from the left in figure 2(c) is provided with a token as described above and also gains access to one of the core applications 21. As with figure 2(b), another customer terminal is added to the queue to take up the empty space vacated by the second customer terminal 7 and this figure shows only three customer terminals who are unable to connect to the queue manager.
  • Figures 3(a) to 3(c) show a second embodiment of the present invention in which the queue manager functionality is contained within the queue manager 45 and in a firewall. It is known that firewalls are good at granting and refusing access to systems and as such they can be used to grant and deny access in the queue manager of the present invention.
  • the system 31 of figures 3(a) to 3(c) shows queued customer terminals 33, non-queued customer terminals 35 and customer terminals 37 that have obtained access to the core applications.
  • the queue manager 45 comprises a customer manager 47 and a number of applications 42.
  • the service manager 49 comprises the output manager 51 and a number of core applications 53.
  • the service manager may be a software module loaded onto a server which operates an existing customer website.
  • multiple queues can be controlled by the system. For example, where it is desirable to protect more than one core application and to have customer terminals queued separately for these applications, separate queues can be created. In addition, multiple queues can be used to provide a subset of users and to provide preferential access for one set of users.
  • a supermarket with a customer loyalty scheme may use the present invention to allow a customer owning a loyalty card or ID number to obtain preferential treatment and quicker access to various parts of their website.
  • this type of use of the present invention may provide an excellent marketing tool for the supermarket and may encourage customers to sign up to enhanced loyalty schemes. Similar schemes can be adopted by events ticketing vendors or other website owners.
  • queues may be merged. For example, where a number of different sites all provide access to tickets for a single event, then access to the tickets through the sites can be controlled by a single queue by merging the queues together. Once the queues are merged it may also be possible to differentiate between members of the queue by recognising the website from which they entered the queue .
  • the system is configured to stop multiple queue entries by holding a unique identifier in the token.
  • the unique identifier will be associated with the user terminal by, for example, incorporating features of the terminal's MAC address so that no two queue identifiers with the same MAC address can be issued within an approved timeframe.
  • Figure 4 shows an embodiment of the method of the present invention 61.
  • the method begins when a request for service 63 is received from a customer terminal. Thereafter a connection is opened 65 and an analysis 67 of the load on core applications is conducted. If there is no space in the core applications 69 the request is sent to the queue 71 and the connection between the customer terminal and the system kept open. The load on the core applications is monitored 73 and when space becomes available 75 the customer terminal is provided with a token and the request is sent to the core application.
  • the present invention keeps a core system working at maximum capacity, improving efficiency and returning maximum revenue from the core system.
  • Customer terminals are queued on a first in, first out (FIFO) basis and this is perceived to be fairer than the apparently random chances of access provided in many existing systems.
  • the present invention creates a stateful connection between client terminals and the queue in a stateless environment. It does not use persistent cookies to operate the queuing system. It is not designed to be switched off and back on again at the client terminal end.
  • the queue administrator sets the delay between issue time and earliest redemption time.
  • the queue administrator can also set the length of time the soft key is valid for.
  • Soft keys can be switched off permanently or temporarily per gate; a configuration sketch covering these soft-key settings appears at the end of this list.
  • the present invention allows a more efficient throughput of users/customers on a website.
  • because the website is less likely to fail and customers are informed of their place in a queue, the usability of the website is increased and customers are more likely to select a website that incorporates the present invention for buying e.g. concert tickets or the like.
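The soft-key settings mentioned above (the delay before earliest redemption, the validity period, and per-gate enabling) might be captured in a configuration object such as the following sketch; the field names, units and default values are assumptions and are not taken from the patent.

```python
from dataclasses import dataclass

# Illustrative soft-key policy only: values and names are assumed for the sketch.
@dataclass
class SoftKeyPolicy:
    gate: str
    enabled: bool = True            # soft keys can be switched off per gate
    redemption_delay_s: int = 60    # assumed delay between issue and earliest redemption
    validity_s: int = 15 * 60       # assumed length of time the key remains valid

    def redeemable(self, issued_at: float, now: float) -> bool:
        """True if the key may be redeemed at time `now`, given when it was issued."""
        if not self.enabled:
            return False
        age = now - issued_at
        return self.redemption_delay_s <= age <= self.validity_s

checkout_policy = SoftKeyPolicy(gate="checkout")   # hypothetical gate name
```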

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer And Data Communications (AREA)
  • Financial Or Insurance-Related Operations Such As Payment And Settlement (AREA)

Abstract

A system and method for managing requests for service from customer terminals (3, 5) via a website. A request for service is received at a queue manager (9) via a communications channel and is either passed to a service manager for processing or placed in a queue, depending upon whether one or more applications associated with the service manager (17) are connected to an allowable number of customer terminals, such that, where the request is placed in a queue, the communications channel between the customer terminal and the queue manager is held open whilst the customer terminal is held in the queue. The invention allows a more efficient throughput of users/customers on a website. In addition, because the website is less likely to fail and customers are informed of their place in a queue, the usability of the website is increased and customers are more likely to select a website that incorporates the present invention, for example for buying concert tickets, for on-line shopping, or in other application areas including, but not limited to, e-commerce, retail or information services.

Description

Queuing System, Method and Device
The present invention relates to a queuing system, method and device, and in particular to one which provides improved website performance by managing client terminal demand and client terminal access to a website.
Any service provided by a computer system over a communications network will have limited capability, resulting in a maximum number of customers that can be served per minute. The capability may be limited for technical reasons, such as web service speed or the number of available connections, or may be limited because there are not enough operators to handle the demand for service.
Excessive demand often occurs in e-commerce when there is a very high interest in a particular product, which may be available in limited quantities when it first goes on sale. A typical example is the selling of concert tickets using an e-commerce system. Fans, knowing that tickets are limited, will all try to use the system as soon as the tickets go on sale, creating a demand "spike" that may well be above the maximum transaction rate that the system can cope with.
A customer participating in an on-line purchase typically follows the following steps: 1. The customer browses a website, making requests and reading responses using web browsing software. The pages of the website are normally transferred from the web server to the customer's client machine using unencrypted hypertext transfer protocol (HTTP). 2. When they have made a choice, the customer requests a payment page, usually by pressing a "buy" button, containing form fields in which they can enter their credit/debit card information. This payment page is normally transferred over the network using secure HTTP (HTTPS). 3. The customer fills in his/her details, and these are sent using HTTPS to a payment gateway. Frequently, the pending transaction is stored in a database. The payment gateway forwards the transaction details to a dedicated credit card network for processing. Assuming that the transaction is authorised, the transaction is stored on a database attached to the payment page and the original web server. 4. The customer is presented with a response indicating that his or her transaction is successful. Usually a confirmation email is also sent to the customer.
In general, the problem with too many customer terminals such as web browsers trying to access a computer system is that the computer system tries to process all the requests for service one after another at very high speed. As it becomes busy and reaches its transactional limit (the maximum capability), the computer system denies service to any customer terminal. Consequently, the customer terminal tries again until it is answered. This causes unnecessary repeat requests that exacerbate the situation, creating more load. Eventually the core system grinds to a near halt, or suffers a total failure of service.
In other words, as the volume of the requests for service increases, the system becomes loaded and has to use ever more resources to distinguish between users re-visiting the core system as part of a larger transaction, and new users. As long as the volume does not exceed capacity, performance is fine. When it exceeds capacity, a downward spiral of performance occurs which can lead to core system failure.
The server load means that attempts to modify the pages to improve the system "on the fly" may not succeed. If the high load has been anticipated, certain checks (such as credit card authorisation) may be postponed until after the 'sale' to increase performance, causing further work and uncertainty as to the number of tickets actually sold. The server load means that attempts must be made to increase capacity at an additional cost to the vendor and ultimately to the customer. Additional staff must also be hired to cope with the influx of orders and these staff members will have less time to deal with dissatisfied customers.
Users of customer terminals get frustrated, particularly if they are half way through a process and in the middle of a sequence of steps the core system gets slower and slower and eventually stops working, leaving the user with (i) a half-completed transaction, (ii) one the user had thought he had completed but had not, or (iii) one that had been partially completed but which the user thought had not been partially completed.
All these scenarios result in dissatisfied users. The wrong data may be displayed to the user, and payment may be taken for on-line purchases that have not had logistics or delivery data passed to the correct department or systems.
These systems, particularly those created using Internet technology, work on stateless connection technology. This means that connections (or requests for service) between the customer terminal and system components and between the components are switched on 'on request' and switched off when the connection is not needed. Stateless systems allow internet based solutions to transact with many more users than previous architectures (such as client-server connections) could manage.
Limits of processing are created by a combination of bandwidth (the connection speed between the core system and user), any hardware component of the core system (such as Database Server, Application Server, Web Server, Payment Server, Content Server), Firewall, Load Balancer or any software component (Database, Application Server, Web Server or Bespoke code).
WO2005/112389 discloses a queuing system for managing the provision of services over a communications network. The system provides means for allocating a queue identifier to a request for service and for comparing queue status information and the queue identifier during a subsequent request for service. WO2005/112389 also discloses a means for performing a comparison which determines whether the request for service will be sent to a service host or placed in a managed queue. This document describes a system in which the user is able to make a request for service then disconnect whilst maintaining or being able to resume their place in the queue.
However, it is believed that many firewalls may prevent the user from re-entering the server thus reducing the effectiveness of this queuing system.
It is an object of the present invention to provide an improved queuing system to improve the management of access to applications accessible via the internet.
In accordance with a first aspect of the invention there is provided a system for managing requests for a service from one or more customer terminals, the system comprising: a queue manager for receiving the requests for the service from the one or more customer terminals via one or more communications channels, the queue manager being adapted to place the requests for service in an ordered queue; a service manager, responsive to the request for service, the service manager being adapted to deliver the service to the one or more customer terminals by means of one or more applications; communication means adapted to pass data between the queue manager and the service manager, the data being related to an allowable volume of customer terminals granted access to the service manager; wherein the queue manager holds the customer terminals not granted access to the service manager in the ordered queue once the allowable volume of customer terminals granted access to the service manager is reached and the communications channels between the queue manager and the customer terminals not granted access to the service manager are held open whilst the customer terminals are held in the ordered queue.
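By way of illustration only, the following Python sketch models the admission behaviour described in this aspect: a hypothetical queue manager passes requests to the service manager while capacity remains, and otherwise holds each terminal in an ordered queue with its channel kept open. All class, method and attribute names are assumptions made for the sketch and are not prescribed by the patent.

```python
from collections import deque

class ServiceManager:
    """Hypothetical service manager guarding the core applications."""
    def __init__(self, allowable_volume: int):
        self.allowable_volume = allowable_volume   # allowable volume of terminals granted access
        self.active = set()                        # terminals currently granted access

    def has_capacity(self) -> bool:
        return len(self.active) < self.allowable_volume

    def grant(self, terminal_id: str) -> None:
        self.active.add(terminal_id)

    def release(self, terminal_id: str) -> None:
        self.active.discard(terminal_id)


class QueueManager:
    """Hypothetical queue manager that queues terminals once the allowable volume is reached."""
    def __init__(self, service_manager: ServiceManager):
        self.service_manager = service_manager
        self.queue = deque()                       # ordered queue of (terminal_id, open channel)

    def handle_request(self, terminal_id: str, channel) -> str:
        if self.service_manager.has_capacity():
            self.service_manager.grant(terminal_id)
            return "pass"                          # request passed on towards the core applications
        self.queue.append((terminal_id, channel))  # channel object is retained, i.e. held open
        return "queued"

    def on_capacity_freed(self) -> None:
        # First in, first out: the head of the ordered queue is admitted next.
        if self.queue and self.service_manager.has_capacity():
            terminal_id, channel = self.queue.popleft()
            self.service_manager.grant(terminal_id)
            channel.send(b"ADMIT")                 # notify the waiting terminal over the open channel
```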
Preferably, the queue manager is connected to one or more software applications defined as being non-core applications.
Preferably, the queue manager is a server.
Preferably, each of the one or more customer terminals connected to the queue manager is uniquely identified when placed in the ordered queue.
Preferably, the queue manager comprises: a request receiver for receiving a request for the service from the one or more customer terminals via the one or more communications channels; and a customer manager for receiving data on the volume of customer terminals connected to the service manager, the data defining the allowable volume of customer terminals granted access to the service manager; wherein the queue manager is adapted to hold open the communications channel with the customer terminal whilst the customer terminal is held in the queue. Preferably, the communications channel between each of the customer terminals and the queue manager is routed through a firewall.
Preferably, the firewall grants the connection between the customer terminal and the queue manager if capacity is available.
Preferably, the service manager is connected to one or more software applications defined as being core applications .
The definition of what are core and non-core applications is flexible and can be changed depending upon circumstances. The definition may depend upon the load the service provider expects on their website. Accordingly, an application can be defined as non-core in one configuration of a system of the present invention and as core in another configuration.
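As a purely illustrative sketch of this flexibility, the configuration below marks parts of a hypothetical website as core or non-core, and shows the same application reclassified in a second configuration; the path names and the dictionary format are assumptions, not part of the patent.

```python
# Illustrative only: one possible way a site owner might declare which parts of a
# website are "core" (protected by the queue when busy) and which are "non-core"
# (always reachable). All paths and values are assumed for the example.
SITE_PROFILE_PEAK = {
    "/":         "non-core",   # home page stays browsable
    "/events":   "non-core",
    "/checkout": "core",       # payment pages are protected by the queue
    "/basket":   "core",
}

# The same application can be classified differently in another configuration,
# for example outside a ticket on-sale the basket might not need queue protection.
SITE_PROFILE_QUIET = dict(SITE_PROFILE_PEAK, **{"/basket": "non-core"})

def is_core(path: str, profile: dict) -> bool:
    return profile.get(path, "non-core") == "core"
```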
Preferably, the service manager is a server.
Alternatively, the queue manager and the service manager are contained in the same server.
Preferably, the communication means sends data to the queue manager which calculates the allowable volume of customer terminals granted access to the service manager and determines whether a customer terminal in the ordered queue can pass to the core applications.
Optionally, the communications means sends data on the allowable volume of customer terminals which has been calculated by the service manager such that the queue manager determines whether a customer terminal in the ordered queue can pass to the core applications.
The client terminal cannot have concurrent places in the queue, but can re-join the queue after leaving the queue.
Preferably a token is issued to the client terminal on leaving the queue to allow the client terminal to access the one or more core applications.
Preferably, the token is issued via the queue manager.
Preferably, the token is issued by the one or more core applications.
Preferably, the token is returned to the system after the client terminal has exited from the one or more core applications.
Preferably, the token holds a unique identifier. The unique identifier may be used to stop multiple queue entries from a single customer terminal. The token identifier may be compared to previous token unique identifiers and suspected duplicates denied access through the gate.
Optionally, the unique identifier includes the customer terminal MAC address.
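A minimal sketch of the token and duplicate-suppression idea is given below, assuming the unique identifier is derived by hashing the terminal's MAC address and that duplicates are judged against an assumed timeframe; the hashing scheme, the window length and all function names are illustrative only and are not the patent's prescribed implementation.

```python
import hashlib
import time
from typing import Dict, Optional

APPROVED_WINDOW_S = 15 * 60              # assumed "approved timeframe" between issues for one MAC
_recent_issues: Dict[str, float] = {}    # MAC fingerprint -> time of last token issue

def _fingerprint(mac_address: str) -> str:
    # Incorporate features of the terminal's MAC address into the unique identifier.
    return hashlib.sha256(mac_address.lower().encode()).hexdigest()[:16]

def issue_token(mac_address: str) -> Optional[str]:
    """Return a token string, or None if a duplicate queue entry is suspected."""
    fp = _fingerprint(mac_address)
    last = _recent_issues.get(fp)
    if last is not None and time.time() - last < APPROVED_WINDOW_S:
        return None                      # same MAC seen too recently: deny access through the gate
    _recent_issues[fp] = time.time()
    return f"{fp}-{int(time.time())}"    # the token carries the unique identifier
```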
Preferably, the communications channel is kept open by sending data to the customer terminal periodically. Preferably, the queue manager sends the data to the customer terminal.
Preferably, the data comprises information on the position of the customer terminal in the queue.
Preferably, the amount of data transferred is significantly less than that transferred when refreshing an internet page.
Preferably, less than one kilobyte of data is transferred.
More preferably, less than 100 bytes of data is transferred.
Advantageously, by transferring a small amount of data, a minimal amount of bandwidth is required to keep open each of the communication channels used in the ordered queue. It is possible to send around 5 bytes of data.
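The following sketch illustrates how such a small periodic push might look, assuming a binary payload of a few bytes carrying only the queue position; the push interval, the wire format and the channel and get_position objects are assumptions made for the example.

```python
import struct
import time

PUSH_INTERVAL_S = 20   # assumed interval between pushes

def keep_alive(channel, get_position) -> None:
    """Periodically push the terminal's queue position until it leaves the queue."""
    while True:
        position = get_position()
        if position is None:                        # terminal has left the queue
            break
        # 4 bytes: of the order of the few bytes mentioned above, far less than a page refresh.
        channel.send(struct.pack("!I", position))
        time.sleep(PUSH_INTERVAL_S)
```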
Preferably, the queue manager measures the position in the ordered queue against the instantaneous number of tokens issued within a time frame to calculate the amount of time the customer terminal is likely to have to wait before receiving a token.
Preferably, the data sent to the customer terminal comprises the amount of time the customer terminal is likely to have to wait before receiving a token. Preferably, the data sent to the customer terminal further comprises the position of the client terminal in the queue.
The system may also log detailed performance data about the applications associated with the service manager.
The system of the present invention can monitor patterns of events to provide detailed logs that can be post processed and replayed enabling measurement of the events that can lead to system failure. This data may be used to set alarms for a system administrator.
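A minimal sketch of this monitoring idea follows, assuming a simple "slow responses per minute" pattern as the failure precursor and a logging-based alarm; the threshold, the window and the event format are assumptions for illustration.

```python
import logging
import time
from collections import deque

log = logging.getLogger("queue.performance")
_slow_events = deque()
SLOW_PER_MINUTE_ALARM = 100      # assumed alarm threshold

def record_event(kind: str, duration_ms: float) -> None:
    """Log a performance event so it can be post-processed or replayed later."""
    log.info("event=%s duration_ms=%.1f ts=%.3f", kind, duration_ms, time.time())
    if duration_ms > 500:                                  # assumed "slow" threshold
        now = time.time()
        _slow_events.append(now)
        while _slow_events and now - _slow_events[0] > 60:  # keep a one-minute window
            _slow_events.popleft()
        if len(_slow_events) > SLOW_PER_MINUTE_ALARM:
            log.warning("ALARM: %d slow responses in the last minute", len(_slow_events))
```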
Preferably, multiple queues can be controlled by the system.
Preferably, preference can be given to customer terminals located on one of said multiple queues.
For example subscribers or loyalty club members can have their own separate queue through the queue manager.
Alternatively, queues from a plurality of web sites or sections of separate sites may be merged into a single queue .
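As an illustration, the sketch below keeps a separate queue for loyalty-scheme members and merges entrants from several websites into one general queue while recording their origin site (so that entrants can later be distinguished by where they joined); the queue names and the preference rule are assumptions.

```python
from collections import deque

loyalty_queue = deque()    # separate queue for subscribers or loyalty club members
general_queue = deque()    # merged queue fed by several web sites

def join(terminal_id: str, origin_site: str, is_member: bool) -> None:
    entry = (terminal_id, origin_site)   # origin kept so merged entrants can be distinguished
    (loyalty_queue if is_member else general_queue).append(entry)

def next_terminal():
    """Pick the next terminal to admit, giving preference to the loyalty queue."""
    if loyalty_queue:
        return loyalty_queue.popleft()
    if general_queue:
        return general_queue.popleft()
    return None
```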
A method for managing requests for service from a customer terminal, the method comprising the steps of: receiving a request for service at a queue manager via a communications channel; either passing the request to a service manager for processing or placing the request in a queue depending upon whether one or more applications associated with the service manager are connected to an allowable number of customer terminals; such that, where the request is placed in a queue the communications channel between the customer terminal and the queue manager is held open whilst the customer terminal is held in the queue.
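The sketch below traces the sequence of steps of this method (receive the request, analyse the load, pass or queue, monitor, then issue a token when space appears), complementing the structural sketch given earlier; the polling approach and all names are assumptions rather than the patent's prescribed implementation.

```python
import time
from collections import deque

def handle_service_request(terminal_id, channel, queue: deque, service_manager,
                           poll_interval_s: float = 1.0) -> None:
    """Receive a request, then either pass it to the service manager or queue it."""
    if service_manager.has_capacity():              # analyse the load on the core applications
        service_manager.grant(terminal_id)          # pass the request for processing
        return
    queue.append((terminal_id, channel))            # place the request in the ordered queue
    while True:                                     # the communications channel is held open meanwhile
        time.sleep(poll_interval_s)                 # monitor the load on the core applications
        if service_manager.has_capacity() and queue[0][0] == terminal_id:
            queue.popleft()
            channel.send(b"TOKEN granted")          # token issue step (details hypothetical)
            service_manager.grant(terminal_id)
            return
```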
Preferably, the queue manager is connected to one or more software applications defined as being non-core applications.
Preferably, each of the customer terminals connected to the queue manager is uniquely identified when placed in the ordered queue.
Preferably, the communications channel that connects the customer terminals and the queue manager is routed through a firewall.
Preferably, the firewall grants the connection between the customer terminal and the queue manager if capacity is available.
Preferably, the service manager is connected to one or more core applications.
Preferably, the allowable volume of customer terminals is calculated to determine whether a customer terminal in the ordered queue can pass to the core applications.
Optionally, data is sent by the communications means, said data relating to the allowable volume of customer terminals which has been calculated by the service manager, such that the queue manager determines whether a customer terminal in the ordered queue can pass to the core applications.
Preferably a token is issued to the client terminal on leaving the queue to allow the client terminal to access the one or more applications associated with the service manager.
Preferably, the token is issued via the queue manager.
Preferably, the token is issued by the one or more applications associated with the service manager.
Preferably, the token is returned to the system after the client terminal has exited from the applications associated with the service manager.
Preferably, the token holds a calculated unique identifier. The unique identifier may be used to stop multiple queue entries. The token identifier may be compared to previous token unique identifiers and suspected duplicates denied access through the gate.
Optionally, the unique identifier includes the customer terminal MAC address.
Preferably, the communications channel is kept open by sending data to the customer terminal periodically.
Preferably, data is sent from the queue manager to the customer terminal. Preferably, the data comprises information on the position of the customer terminal in the queue.
Preferably, the amount of data transferred is significantly less than that transferred when refreshing an internet page .
Preferably, less than one kilobyte of data is transferred.
More preferably, less than 100 bytes of data is transferred.
Preferably, the position in the ordered queue is measured against the instantaneous number of tokens issued within a time frame to calculate the amount of time the customer terminal is likely to have to wait before receiving a token.
Preferably, the data sent to the customer terminal comprises the amount of time the customer terminal is likely to have to wait before receiving a token.
Preferably, the data sent to the customer terminal further comprises the position of the client terminal in the queue.
The system may also log detailed performance data about the applications associated with the service manager.
Preferably, multiple queues can be controlled by the system. Preferably, preference can be given to customer terminals located on one of said multiple queues.
Alternatively, queues from a plurality of web sites or sections of separate sites may be merged into a single queue.
A queue manager server comprising: a request receiver for receiving a request for service from a customer terminal via a communications channel; and a customer manager for receiving data on the volume of customer terminals connected to a service manager, the data defining an allowable number of customer terminals granted access to the service manager; wherein the queue manager server is adapted to hold open the communications channel with the customer terminal whilst the customer terminal is held in a queue.
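Purely as an illustrative decomposition, the sketch below composes a request receiver and a customer manager into a queue manager server, with the customer manager consuming volume reports that define the allowable number of terminals; the class names, the report fields and the transport are assumptions.

```python
from collections import deque

class RequestReceiver:
    """Accepts incoming requests and hands them to the queue manager server."""
    def __init__(self, on_request):
        self.on_request = on_request

    def receive(self, terminal_id, channel):
        self.on_request(terminal_id, channel)

class CustomerManager:
    """Receives data on the volume of terminals connected to the service manager."""
    def __init__(self):
        self.connected_to_service = 0
        self.allowable = 0

    def update(self, report: dict) -> None:
        # e.g. report = {"connected": 180, "allowable": 200}  (hypothetical fields)
        self.connected_to_service = report["connected"]
        self.allowable = report["allowable"]

    def space_available(self) -> bool:
        return self.connected_to_service < self.allowable

class QueueManagerServer:
    def __init__(self):
        self.customer_manager = CustomerManager()
        self.receiver = RequestReceiver(self.handle)
        self.queue = deque()

    def handle(self, terminal_id, channel) -> None:
        if self.customer_manager.space_available():
            channel.send(b"PASS")                      # forward towards the service manager (not shown)
        else:
            self.queue.append((terminal_id, channel))  # channel held open while the terminal is queued
```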
Preferably, the queue manager is connected to one or more software applications defined as being non-core applications.
Preferably, each of the customer terminals connected to the queue manager is uniquely identified when placed in the ordered queue.
Preferably, unique connection of each of the customer terminals is provided by a firewall.
Preferably, the queue manager server is connectable to a service manager located on a client web server, the service manager being connected to one or more software applications defined as being core applications.
Preferably, a communications means sends data to the queue manager which calculates the allowable volume of customer terminals and determines whether a customer terminal in the ordered queue can pass to the core applications .
Preferably a token is issued to the client terminal on leaving the queue to allow the client terminal to access the one or more applications associated with the service manager.
Preferably, the token is issued via the queue manager.
Preferably, the token is issued by the one or more applications associated with the service manager via the queue manager.
Preferably, the token is returned to the system after the client terminal has exited from the applications associated with the service manager.
Preferably, the token holds a calculated unique identifier. The unique identifier may be used to stop multiple queue entries. The token identifier may be compared to previous token unique identifiers and suspected duplicates denied access through the gate.
Optionally, the unique identifier includes the customer terminal MAC address. The system of the present invention can be used as an on-line shopping system, or in other application areas including, but not limited to, e-commerce, retail or information services.
Preferably, the communications channel is kept open by sending data to the customer terminal periodically.
Preferably, the queue manager sends the data to the customer terminal.
Preferably, the data comprises information on the position of the customer terminal in the queue.
Preferably, the amount of data transferred is significantly less than that transferred when refreshing an internet page.
Preferably, less than one kilobyte of data is transferred.
More preferably, less than 100 bytes of data is transferred.
Advantageously, by transferring a small amount of data, a minimal amount of bandwidth is required to keep open each of the communication channels used in the ordered queue.
Preferably, the queue manager measures the position in the ordered queue against the instantaneous number of tokens issued within a time frame to calculate the amount of time the customer terminal is likely to have to wait before receiving a token. The figure may, for example, be 50 per minute, in which case a client terminal that was 170th in the queue would be served in approximately 3 minutes and 24 seconds.
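The arithmetic of this estimate can be shown with a small helper, reproducing the figures in the example above (position 170 at 50 tokens per minute gives 170 / 50 = 3.4 minutes, i.e. about 3 minutes 24 seconds); the function name and the output format are assumptions.

```python
def estimated_wait(position_in_queue: int, tokens_per_minute: float) -> str:
    """Estimate the wait as queue position divided by the measured token issue rate."""
    minutes = position_in_queue / tokens_per_minute
    whole = int(minutes)
    seconds = round((minutes - whole) * 60)
    if seconds == 60:                    # guard against rounding up to a full minute
        whole, seconds = whole + 1, 0
    return f"{whole} min {seconds:02d} s"

print(estimated_wait(170, 50))           # -> "3 min 24 s", matching the example above
```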
Preferably, the data sent to the customer terminal comprises the amount of time the customer terminal is likely to have to wait before receiving a token.
Preferably, the data sent to the customer terminal further comprises the position of the client terminal in the queue.
The system may also log detailed performance data about the applications associated with the service manager.
The system of the present invention can monitor patterns of events to provide detailed logs that can be post processed and replayed enabling measurement of the events that can lead to system failure. This data may be used to set alarms for a system administrator.
Preferably, multiple queues can be controlled by the system.
Preferably, preference can be given to customer terminals located on one of said multiple queues.
Alternatively, queues from a plurality of web sites or sections of separate sites may be merged into a single queue. The present invention will now be described by way of example and with reference to the accompanying drawings in which:
Figure 1 is a graph which shows transactional performance (load) on a web server;
Figures 2(a) to 2(c) show a block diagram of a first embodiment of the present invention;
Figures 3(a) to 3(c) show a block diagram of a second embodiment of the present invention; and
Figure 4 is a flow diagram that illustrates the method of the present invention.
The system of the present invention may be used in e-commerce, for example by supermarkets or online ticket vendors. In addition, the system of the present invention may be used by any organisation which experiences or expects to experience a high volume of hits on their website or on part of their website for any reason.
The present invention allows the website owner to classify some applications on their website as core applications and some applications on their website as non-core applications. The non-core applications are those which a user who is using a customer terminal is able to browse prior to entering a queue and the core applications are those which the users can only access after having been in the queue if a pre-defined maximum load on the core applications has been reached. The customer terminal may be a personal computer, personal cellular telephone or any device capable of making an internet connection to a website.
Figure 1 is a graph which shows transactional performance (load) on a web server by plotting the number of transactions 26 against the number of users 28. For a state of the art system, the optimum 30 and maximum 32 numbers of users are shown with respect to points on curve 22. The system's downward spiral of performance begins in the area of the graph after the optimum transaction values on curve 22. The flat curve 24 shows the performance of a system in accordance with the present invention.
In the examples of figures 2(a) to 2(c), a system 1 is shown having ten customer terminals denoted by reference numerals 3, 5 and 7. The customer terminals are connected to a queue manager module 9 which can be loaded on a central server. The reference numeral 3 denotes customer terminals which are contained within a queue, the customer terminals 5 are those which have been unable to obtain access to the system and the customer terminals 7 (figure 2(b)) are those which have access to the core applications 21 via the service manager 17.
The customer terminals 3 which are connected to the queue manager 9 are connected via a socket connection 11. Once connected to the queue manager 9, the customer terminals 3 may access one or more non-core applications 12. Such non-core applications may typically be the home page of a website or other pages where it is anticipated that a low number of users will attempt to gain access to the specific pages.
Within the queue manager module 9 there is a customer manager module 13 which in this example is configured to communicate with the service manager 17 and particularly the throughput manager module 19 contained within the service manager 17. The customer manager module is configured to send small amounts of information, typically less than 100 bytes and often less than 10 bytes, to each customer held in the queue. This information concerns the length of time that the customer terminal will be held in the queue and the position of the customer terminal within the queue. This data is pushed to the customer periodically and acts to keep the socket connection between the customer terminal 3 and the socket 11 of the queue manager 9 open so that the customer terminal remains in the queue. The frequency at which the data is pushed can be set by the system to ensure that the connection between the customer terminal and the queue manager is maintained.
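The periodic push described above can be sketched roughly as follows. This is a minimal sketch only, assuming one plain TCP socket per queued terminal and an illustrative payload format (position and estimated wait); it is not the actual implementation.

```python
import socket
import time

def keep_alive(conn: socket.socket, get_status, interval_seconds: float = 15.0) -> None:
    """Periodically push a tiny status payload so the socket stays open and
    the customer terminal keeps its place in the queue.

    get_status is assumed to return (position_in_queue, estimated_wait_seconds).
    """
    try:
        while True:
            position, wait_seconds = get_status()
            # A compact payload, typically well under 100 bytes.
            payload = f"{position},{int(wait_seconds)}\n".encode("ascii")
            conn.sendall(payload)
            time.sleep(interval_seconds)
    except OSError:
        # The terminal has disconnected and so drops out of the queue.
        conn.close()
```

In practice each queued connection would be serviced by its own thread or by an event loop, with the push interval set by the system as described above.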
In addition, the customer manager module assists with measurement of the position in the queue against the instantaneous number of tokens issued by the core applications 21 via the queue manager 9. In one example 50 tokens per minute were issued. Therefore, a user who is 170th in a queue would be served in approximately 3 minutes 24 seconds.
The customer manager 13 of the queue manager 9 also receives data on the load experienced by the core applications 21. This data is gathered by the throughput manager 19 and provided via the communications link 15 to the customer manager 13. In one example of the present invention, data on the load experienced by the core applications 21 is processed by the throughput manager 19 and communicated to the customer manager 13.
In another example of the present invention, core application load data is passed to the customer manager 13 via the throughput manager 19 without being processed, and all the processing of this data to determine whether the core applications have exceeded or met a pre-defined maximum load or use count is done by the customer manager 13.
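The two arrangements above might be contrasted in code along the following lines; the class and method names are illustrative assumptions rather than the patent's own interfaces.

```python
from dataclasses import dataclass

@dataclass
class LoadReport:
    active_terminals: int      # terminals currently using the core applications
    allowable_terminals: int   # pre-defined maximum load

class ThroughputManager:
    """Gathers load data from the core applications (illustrative)."""
    def __init__(self, allowable_terminals: int):
        self.allowable_terminals = allowable_terminals
        self.active_terminals = 0

    # First arrangement: the throughput manager processes the data itself
    # and reports only whether spare capacity exists.
    def has_spare_capacity(self) -> bool:
        return self.active_terminals < self.allowable_terminals

    # Second arrangement: raw load data is passed on unprocessed.
    def raw_report(self) -> LoadReport:
        return LoadReport(self.active_terminals, self.allowable_terminals)

class CustomerManager:
    """Decides, from a raw report, whether the maximum load has been reached."""
    @staticmethod
    def can_admit(report: LoadReport) -> bool:
        return report.active_terminals < report.allowable_terminals
```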
In addition, it will be appreciated that the number of users that may be attached to the queue in the system is determined by the number of one to one socket connections between the queue manager system and the customer terminals that wish to have access to the system.
Advantageously, as the present invention pushes a small amount of data to each customer terminal (often as little as 5 bytes) the system can maintain connections to individual customer terminals using a very low bandwidth. Therefore, a large number of customer terminals may be connected to the system at any one time.
One example of a use of the system of figure 2 will now be described.
Figure 2a shows 10 customer terminals. Each of the users of these terminals wishes to use a website. As described above, the contents of the website can be divided into core applications and non-core applications. The users enter the website via a queue manager 9 which contains a number of non-core software applications associated with the website. The queue manager 9 communicates with the service manager 17 (which contains the core applications) . The service manager 17 measures throughput and provides data to the queue manager on whether the applications associated with the service manager have spare capacity. If there is no spare capacity, users are held in an ordered queue.
Typically the first customer terminal entering the queue will be the first one to leave once there is spare capacity in the core application.
In order for the customer terminal 3 to maintain its place in the queue, the communications line 4 between the customer terminal and the queue manager 9 is kept open whilst the customer terminal 3 is in the queue. In addition, whilst the customer terminal 3 is in the queue, a message is pushed to the customer terminal 3 informing it of its position in the queue and the length of time the system expects it to take to serve the customer.
In addition, the customer manager 13 of the queue manager 9 checks for spare capacity by communicating with the service manager 17. This spare capacity is available when a token is sent from the application to the customer terminal via the queue manager 9. Once the customer terminal 7 has received the token, it is able to connect to the core application 21. Figure 2(b) shows the process whereby a customer terminal changes status from one which is queued (denoted by reference numeral 3 in figure 2(a)) to one which has gained access to the core application (denoted by reference numeral 7 in figure 2(b)). In addition, once this customer terminal has gained access to the core applications 21 a space opens up in the queue to allow an additional customer terminal to enter the queue. It will be noted that figure 2(a) contains five customer terminals that were unable to gain access to the queue manager whereas figure 2(b) contains four such customer terminals 5.
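A rough sketch of this hand-off, assuming a FIFO queue of open connections and a randomly generated token (both illustrative assumptions, not the patented scheme), could look like this:

```python
import secrets
from collections import deque

class QueueManagerSketch:
    """Holds waiting terminals in FIFO order and releases them one at a time
    when the service manager reports spare capacity (illustrative sketch)."""
    def __init__(self):
        self.waiting = deque()   # socket connections held open while queued

    def enqueue(self, conn) -> int:
        self.waiting.append(conn)
        return len(self.waiting)  # position in the queue

    def release_next(self) -> None:
        """Issue a token to the terminal at the head of the queue so it can
        connect to the core applications, freeing a place in the queue."""
        if not self.waiting:
            return
        conn = self.waiting.popleft()
        token = secrets.token_hex(16)
        conn.sendall(f"TOKEN {token}\n".encode("ascii"))
```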
Figure 2(c) shows a further progression of the use of the system whereby the customer terminal second from the left in figure 2(c) is provided with a token as described above and also gains access to one of the core applications 21. As with figure 2(b), another customer terminal is added to the queue to take up the empty space vacated by the second customer terminal 7, and this figure shows only three customer terminals that are unable to connect to the queue manager.
Figures 3(a) to 3(c) show a second embodiment of the present invention in which the queue manager functionality is contained within the queue manager 45 and a firewall. It is known that firewalls are good at granting and refusing access to systems and as such they can be used to grant and deny access in the queue manager of the present invention.
As with figures 2(a) to 2(c), the system 31 of figures 3(a) to 3(c) shows queued customer terminals 33, non-queued customer terminals 35 and customer terminals 37 that have obtained access to the core applications. The queue manager 45 comprises a customer manager 47 and a number of applications 42. The service manager 49 comprises the throughput manager 51 and a number of core applications 53.
It will be appreciated that in both embodiments of the present invention, the service manager may be a software module loaded onto a server which operates an existing customer website.
In another embodiment of the present invention, multiple queues can be controlled by the system. For example, where it is desirable to protect more than one core application and to have customer terminals queued separately for these applications, separate queues can be created. In addition, multiple queues can be used to separate out a subset of users and to provide preferential access for one set of users.
For example, a supermarket with a customer loyalty scheme may use the present invention to allow a customer owning a loyalty card or ID number to obtain preferential treatment and quicker access to various parts of their website. As well as rewarding loyalty, this type of use of the present invention may provide an excellent marketing tool for the supermarket and may encourage customers to sign up to enhanced loyalty schemes. Similar schemes can be adopted by events ticketing vendors or other website owners.
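One simple way to realise such preferential access is to hold two ordered queues and always serve the priority queue first when capacity becomes available; the queue names and the loyalty-card flag below are illustrative assumptions, not the patented implementation.

```python
from collections import deque

class MultiQueueManager:
    """Two ordered queues where, for example, loyalty-card holders are served
    first whenever a token becomes available (illustrative sketch)."""
    def __init__(self):
        self.priority = deque()   # e.g. loyalty-card holders
        self.standard = deque()

    def enqueue(self, conn, has_loyalty_card: bool) -> None:
        (self.priority if has_loyalty_card else self.standard).append(conn)

    def release_next(self):
        """Return the next terminal to receive a token, preferring the
        priority queue but falling back to the standard one."""
        if self.priority:
            return self.priority.popleft()
        if self.standard:
            return self.standard.popleft()
        return None
```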
Conversely, where two or more sites or sections of separate sites provide access to a single type of service, then it is possible for queues to be merged. For example, where a number of different sites all provide access to tickets for a single event, then access to the tickets through the sites can be controlled by a single queue by merging the queues together. Once the queues are merged it may also be possible to differentiate between members of the queue by recognising the website from which they entered the queue.
In a further embodiment of the invention, the system is configured to stop multiple queue entries by holding a unique identifier in the token. The unique identifier will be associated with the user terminal by, for example, incorporating features of the terminal's MAC address so that no two queue identifiers with the same MAC address can be issued within an approved timeframe.
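A minimal sketch of this duplicate check follows, assuming an identifier derived from the terminal's MAC address and an administrator-set approval window; the hashing scheme and window length are illustrative assumptions only.

```python
import hashlib
import time

class DuplicateGuard:
    """Refuses to issue a second token to the same terminal identifier
    within an approved timeframe (illustrative sketch)."""
    def __init__(self, window_seconds: float = 3600.0):
        self.window_seconds = window_seconds
        self.issued: dict[str, float] = {}  # identifier -> time of issue

    @staticmethod
    def identifier_from_mac(mac_address: str) -> str:
        # Incorporate features of the terminal's MAC address into the identifier.
        return hashlib.sha256(mac_address.encode("ascii")).hexdigest()

    def try_issue(self, identifier: str) -> bool:
        now = time.time()
        issued_at = self.issued.get(identifier)
        if issued_at is not None and (now - issued_at) < self.window_seconds:
            return False  # suspected duplicate: deny a second queue entry
        self.issued[identifier] = now
        return True
```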
Figure 4 shows an embodiment of the method of the present invention 61. The method begins when a request for service 63 is received from a customer terminal. Thereafter a connection is opened 65 and an analysis 67 of the load on core applications is conducted. If there is no space in the core applications 69 the request is sent to the queue 71 and the connection between the customer terminal and the system kept open. The load on the core applications is monitored 73 and when space becomes available 75 the customer terminal is provided with a token and the request is sent to the core application.
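The flow of figure 4 can be paraphrased roughly as follows. The object and method names are assumptions made for illustration, and a real system would be event-driven rather than a blocking loop.

```python
import time

def handle_service_request(request, queue_manager, service_manager) -> None:
    """Rough paraphrase of the method of figure 4 (steps 63 to 75)."""
    conn = request.open_connection()                 # step 65: open a connection
    if service_manager.has_spare_capacity():         # step 67: analyse the load
        service_manager.serve(request)               # pass straight to the core application
        return
    position = queue_manager.enqueue(conn)           # step 71: hold the request in the queue
    while not service_manager.has_spare_capacity():  # step 73: monitor the load
        queue_manager.push_status(conn, position)    # keep the connection open
        time.sleep(1.0)
    token = service_manager.issue_token()            # step 75: space available, issue a token
    service_manager.serve(request, token)
```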
Advantageously, the present invention keeps a core system working at maximum capacity, improving efficiency and returning maximum revenue from the core system. Customer terminals are queued on a first-in-first-out (FIFO) basis and this is perceived to be fairer than the apparently random chances of access provided in many existing systems.
The present invention creates a stateful connection between client terminals and the queue in a stateless environment. It does not use persistent cookies to operate the queuing system. It is not designed to be switched off and back on again at the client terminal end.
It has a "Return Later' option that allocates a soft key or pass to the client terminal, delivered by eMail, for example, that provides access to the front of the queue within a later time frame. The queue administrator sets the delay between issue time and earliest redemption time. The queue administrator can also set the length of time the soft key is valid for. Soft keys can be switched off permanently or temporarily per gate.
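The timing rules for such a soft key (an administrator-set delay before earliest redemption, a validity period, and a per-gate on/off switch) might be checked along these lines; all field names are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class SoftKey:
    issued_at: float          # time of issue (seconds since the epoch)
    redeem_delay: float       # administrator-set delay before earliest redemption
    valid_for: float          # how long the key stays valid once redeemable
    enabled: bool = True      # soft keys can be switched off per gate

    def is_redeemable(self, now: float) -> bool:
        """True if the key grants front-of-queue access at time `now`."""
        if not self.enabled:
            return False
        earliest = self.issued_at + self.redeem_delay
        return earliest <= now < earliest + self.valid_for
```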
It allows only one token to be issued to a unique client terminal. Even if a user opens multiple clients on the same machine, believing that they have multiple places in the queue, duplicates are denied when the token is created and no access to the Entrance Gate can be achieved. Other systems use quite different, more intensive and complex 1st and 2nd encrypted strings.
The present invention allows a more efficient throughput of users/customers on a website. In addition, because the website is less likely to fail and customers are informed of their place in a queue, the usability of the website is increased and customers are more likely to select a website that incorporates the present invention for buying e.g. concert tickets or the like.
Improvements and modifications may be incorporated herein without deviating from the scope of the invention.

Claims

1. A system for managing requests for a service from one or more customer terminals, the system comprising:
a queue manager for receiving the requests for the service from the one or more customer terminals via one or more communications channels, the queue manager being adapted to place the requests for service in an ordered queue;
a service manager, responsive to the request for service, the service manager being adapted to deliver the service to the one or more customer terminals by means of one or more applications;
communication means adapted to pass data between the queue manager and the service manager, the data being related to an allowable volume of customer terminals granted access to the service manager;
wherein the queue manager holds the customer terminals not granted access to the service manager in the ordered queue once the allowable volume of customer terminals granted access to the service manager is reached, and the communications channels between the queue manager and the customer terminals not granted access to the service manager are held open whilst the customer terminals are held in the ordered queue.
2. A system as claimed in claim 1 wherein, the queue manager is connected to one or more software applications defined as being non-core applications.
3. A system as claimed in claim 1 or claim 2 wherein a unique identifier is provided to each of the one or more customer terminals connected to the queue manager when placed in the ordered queue.
4. A system as claimed in any preceding claim wherein, the queue manager comprises: a request receiver for receiving a request for the service from the one or more customer terminals via the one or more communications channels; and a customer manager for receiving data on the volume of customer terminals connected to the service manager, the data defining the allowable volume of customer terminals granted access to the service manager; wherein the queue manager is adapted to hold open the communications channel with the customer terminal whilst the customer terminal is held in the queue.
5. A system as claimed in any preceding claim wherein, the communications channel between each of the customer terminals and the queue manager is routed through a firewall.
6. A system as claimed in claim 5 wherein, the firewall grants a connection between the customer terminal and the queue manager if capacity is available.
7. A system as claimed in any preceding claim wherein, the service manager is connected to one or more software applications defined as being core applications.
8. A system as claimed in any preceding claim wherein, the communication means sends data to the queue manager which calculates the allowable volume of customer terminals granted access to the service manager and determines whether a customer terminal in the ordered queue can pass to the core applications.
9. A system as claimed in claims 1 to 7 wherein, the communications means sends data on the allowable volume of customer terminals which has been calculated by the service manager such that the queue manager determines whether a customer terminal in the ordered queue can pass to the core applications.
10. A system as claimed in any preceding claim wherein, a token is issued to the client terminal on leaving the queue to allow the client terminal to access the one or more core applications.
11. A system as claimed in claim 10 wherein, the token is issued via the queue manager.
12. A system as claimed in claim 10 or claim 11 wherein, the token is issued by the one or more core applications.
13. A system as claimed in any of claims 10 to 12 wherein, the token is returned to the system after the client terminal has exited from the one or more core applications.
14. A system as claimed in any of claims 10 to 13 wherein, the token holds a unique identifier.
15. A system as claimed in claim 14 wherein the token having a unique identifier is compared with other tokens and suspected duplicate tokens denied access.
16. A system as claimed in claim 14 or claim 15 wherein, the unique identifier includes the customer terminal MAC address.
17. A system as claimed in any preceding claim wherein, the communications channel is kept open by sending data to the customer terminal periodically.
18. A system as claimed in claim 17 wherein, the queue manager sends the data to the customer terminal.
19. A system as claimed in claim 17 or claim 18 wherein, the data comprises information on the position of the customer terminal in the queue.
20. A system as claimed in any of claims 17 to 19 wherein, less than one kilobyte of data is transferred.
21. A system as claimed in any of claims 17 to 20 wherein, less than 100 bytes of data is transferred.
22. A system as claimed in any of claims 17 to 21 wherein, the data sent to the customer terminal comprises information concerning the amount of time the customer terminal is likely to have to wait before receiving a token.
23. A system as claimed in any of claims 17 to 22 wherein, the data sent to the customer terminal further comprises the position of the client terminal in the queue.
24. A system as claimed in any preceding claim wherein detailed performance data about the applications associated with the service manager is logged.
25. A system as claimed in any of claims 10 to 16 wherein, the queue manager measures the position in the ordered queue against the instantaneous number of tokens issued within a time frame to calculate the amount of time the customer terminal is likely to have to wait before receiving a token.
26. A system as claimed in any preceding claim wherein, multiple queues can be controlled by the system.
27. A system as claimed in claim 26 wherein, preference can be given to customer terminals located on one of said multiple queues.
28. A system as claimed in claims 1 to 25 wherein, queues from a plurality of web sites or sections of separate sites may be merged into a single queue.
29. A method for managing requests for service from a customer terminal, the method comprising the steps of:
receiving a request for service at a queue manager via a communications channel;
either passing the request to a service manager for processing or placing the request in an ordered queue depending upon whether one or more applications associated with the service manager are connected to an allowable number of customer terminals;
such that, where the request is placed in a queue, the communications channel between the customer terminal and the queue manager is held open whilst the customer terminal is held in the queue.
30. A method as claimed in claim 29 wherein, the queue manager is connected to one or more software applications defined as being non-core applications.
31. A method as claimed in claim 29 or claim 30 wherein, each of the customer terminals connected to the queue manager are uniquely identified when placed in the ordered queue.
32. A method as claimed in any of claims 29 to 31 wherein, the communications channel that connects the customer terminals and the queue manager is routed through a firewall.
33. A method as claimed in claim 32 wherein, the firewall grants the connection between the customer terminal and the queue manager if capacity is available.
34. A method as claimed in any of claims 29 to 33 wherein, the allowable volume of customer terminals is calculated to determine whether a customer terminal in the ordered queue can pass to the core applications.
35. A method as claimed in any of claims 29 to 34 wherein, data is sent by the communications means, said data relating to the allowable volume of customer terminals which has been calculated by the service manager such that the queue manager determines whether a customer terminal in the ordered queue can pass to the core applications.
36. A method as claimed in any of claims 29 to 35 wherein a token is issued to the client terminal on leaving the queue to allow the client terminal to access the one or more applications associated with the service manager.
37. A method as claimed in claim 36 wherein, the token is issued via the queue manager.
38. A method as claimed in claim 36 wherein, the token is issued by the one or more applications associated with the service manager.
39. A method as claimed in any of claims 36 to 38 wherein, the token is returned to the system after the client terminal has exited from the applications associated with the service manager.
40. A method as claimed in any of claims 36 to 39 wherein, the token holds a calculated unique identifier.
41. A method as claimed in claim 40 wherein, the unique identifier is used to stop multiple queue entries by comparison with previous token unique identifiers, denying suspected duplicates access.
42. A method as claimed in claims 40 or 41 wherein, the unique identifier includes the customer terminal MAC address.
43. A method as claimed in claims 29 to 42 wherein, the communications channel is kept open by sending data to the customer terminal periodically.
44. A method as claimed in claim 43 wherein, data is sent from the queue manager to the customer terminal.
45. A method as claimed in claim 43 or 44 wherein, the data comprises information on the position of the customer terminal in the queue.
46. A method as claimed in any one of claims 43 to 45 wherein, the amount of data transferred is significantly less than that transferred when refreshing an internet page.
47. A method as claimed in any of claims 43 to 46 wherein, less than one kilobyte of data is transferred.
48. A method as claimed in any of claims 43 to 47 wherein, less than 100 bytes of data is transferred.
49. A method as claimed in any of claims 43 to 48 wherein, the data sent to the customer terminal comprises the amount of time the customer terminal is likely to have to wait before receiving a token.
50. A method as claimed in any of claims 43 to 49 wherein, the data sent to the customer terminal further comprises the position of the client terminal in the queue.
51. A method as claimed in any preceding claim wherein, the position in the ordered queue is measured against the instantaneous number of tokens issued within a time frame to calculate the amount of time the customer terminal is likely to have to wait before receiving a token.
52. A method as claimed in any preceding claim wherein, multiple queues can be controlled by the system.
53. A method as claimed in any preceding claim wherein, preference can be given to customer terminals located on one of said multiple queues.
54. A method as claimed in claim 52 wherein, queues from a plurality of web sites or sections of separate sites may be merged into a single queue.
55. A queue manager server comprising a queue manager as defined in claims 1 to 7, 9, 11, 18 and 25.
PCT/GB2007/000952 2006-03-16 2007-03-15 Queuing system, method and device WO2007105006A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US12/225,135 US20100040222A1 (en) 2006-03-16 2007-03-15 Queuing System, Method And Device
EP07732046A EP2018759A1 (en) 2006-03-16 2007-03-15 Queuing system, method and device

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
GB0605282A GB0605282D0 (en) 2006-03-16 2006-03-16 Queuing system, method and device
GB0605282.3 2006-03-16
US81576306P 2006-06-22 2006-06-22
US60/815,763 2006-06-22

Publications (2)

Publication Number Publication Date
WO2007105006A1 true WO2007105006A1 (en) 2007-09-20
WO2007105006A8 WO2007105006A8 (en) 2008-11-20

Family

ID=38230290

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/GB2007/000952 WO2007105006A1 (en) 2006-03-16 2007-03-15 Queuing system, method and device

Country Status (3)

Country Link
US (1) US20100040222A1 (en)
EP (1) EP2018759A1 (en)
WO (1) WO2007105006A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2012072690A1 (en) * 2010-12-01 2012-06-07 Queue-It Aps A method and a computer network system for controlling user access to a transactional site associated with a website
US11128732B1 (en) 2020-08-04 2021-09-21 Akamai Technologies, Inc. Admission policies for queued website visitors

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9608929B2 (en) * 2005-03-22 2017-03-28 Live Nation Entertainment, Inc. System and method for dynamic queue management using queue protocols
US8856804B2 (en) * 2008-02-08 2014-10-07 Microsoft Corporation Performance indicator for measuring responsiveness of user interface applications to user input
US20110093367A1 (en) * 2009-10-20 2011-04-21 At&T Intellectual Property I, L.P. Method, apparatus, and computer product for centralized account provisioning
US9456061B2 (en) * 2012-08-15 2016-09-27 International Business Machines Corporation Custom error page enabled via networked computing service

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020138627A1 (en) * 2001-03-26 2002-09-26 Frantzen Michael T. Apparatus and method for managing persistent network connections
US20030023734A1 (en) * 2001-07-27 2003-01-30 International Business Machines Corporation Regulating access to a scarce resource
WO2004008709A1 (en) * 2002-07-15 2004-01-22 Soma Networks, Inc. System and method for reliable packet data transport in a computer network
WO2005112389A2 (en) * 2004-05-14 2005-11-24 Orderly Mind Limited Queuing system, method and computer program product for managing the provision of services over a communications network

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6154769A (en) * 1998-03-27 2000-11-28 Hewlett-Packard Company Scheduling server requests to decrease response time and increase server throughput
AUPP308298A0 (en) * 1998-04-20 1998-05-14 Ericsson Australia Pty Ltd Access control method and apparatus
US6360270B1 (en) * 1998-11-16 2002-03-19 Hewlett-Packard Company Hybrid and predictive admission control strategies for a server
US6820260B1 (en) * 1999-06-17 2004-11-16 Avaya Technology Corp. Customized applet-on-hold arrangement
US6389028B1 (en) * 1999-09-24 2002-05-14 Genesys Telecommunications Laboratories, Inc. Method and apparatus for providing estimated response-wait-time displays for data network-based inquiries to a communication center
US7483964B1 (en) * 2000-02-25 2009-01-27 Nortel Networks, Limited System, device, and method for providing personalized services in a communication system
US7299259B2 (en) * 2000-11-08 2007-11-20 Genesys Telecommunications Laboratories, Inc. Method and apparatus for intelligent routing of instant messaging presence protocol (IMPP) events among a group of customer service representatives
JP4092388B2 (en) * 2000-11-10 2008-05-28 富士通株式会社 Service providing method using network and service providing system using the same
US7606899B2 (en) * 2001-07-27 2009-10-20 International Business Machines Corporation Regulating access to a scarce resource
US7827282B2 (en) * 2003-01-08 2010-11-02 At&T Intellectual Property I, L.P. System and method for processing hardware or service usage data

Also Published As

Publication number Publication date
WO2007105006A8 (en) 2008-11-20
US20100040222A1 (en) 2010-02-18
EP2018759A1 (en) 2009-01-28

Similar Documents

Publication Publication Date Title
CA2566768C (en) Queuing system, method and computer program product for managing the provision of services over a communications network
US9344549B2 (en) Methods and systems for accessing a computer resource over a network via microphone-captured audio
US8463627B1 (en) Systems and methods for queuing requests and providing queue status
US20100040222A1 (en) Queuing System, Method And Device
US20020059436A1 (en) Service provision method via a network and service provision system using the same
TW200412115A (en) Device independent authentication system and method
US20060031899A1 (en) Methods for augmenting subscription services with pay-per-use services
JP2002189650A (en) Method and device for controlling computer, and recording medium stored with processing program therefor
WO1999056254A1 (en) Prepaid access for information network
EP3952256B1 (en) Improved admission policies for queued website visitors
US20150039505A1 (en) Dynamic trial subscription management
EP0940024A2 (en) Data communication system
WO1998024208A9 (en) Data communication system
JP3822474B2 (en) Personal information integrated management system and program thereof, and medium storing the program
US20030105723A1 (en) Method and system for disclosing information during online transactions
JP4350098B2 (en) Execution control apparatus and method
WO2020253714A1 (en) Data sharing method and apparatus, device and computer readable storage medium
GB2319710A (en) Quality of service in data communication systems
US6819656B2 (en) Session based scheduling scheme for increasing server capacity
JP2008217346A (en) Method for reducing load in peak time period in online system
CN105577394B (en) Integration system, data processing system, data processing equipment and data processing method
WO2001050365A1 (en) Goods delivery service system and method via electronic commerce
KR20020009262A (en) Method For Limiting A Sale Of Goods In Electronic Commerce
Kihl et al. Performance modeling of distributed e-commerce sites
KR20230050941A (en) Member matching automation SW performance improvement method

Legal Events

Date Code Title Description
NENP Non-entry into the national phase

Ref country code: DE

WWE Wipo information: entry into national phase

Ref document number: 2007732046

Country of ref document: EP

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 07732046

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 12225135

Country of ref document: US