WO2000045263A2 - Adaptive thread manager - Google Patents

Adaptive thread manager

Info

Publication number
WO2000045263A2
Authority
WO
WIPO (PCT)
Prior art keywords
service request
random number
processor
enabling
stochastic
Prior art date
Application number
PCT/US2000/001993
Other languages
English (en)
Other versions
WO2000045263A3 (fr)
WO2000045263A8 (fr)
Inventor
Marc Peter Kwiatkowski
Norman Robert Henry Black
Original Assignee
Mpath Interactive, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Mpath Interactive, Inc.
Publication of WO2000045263A2
Publication of WO2000045263A3
Publication of WO2000045263A8

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/48 Program initiating; Program switching, e.g. by interrupt
    • G06F 9/4806 Task transfer initiation or dispatching
    • G06F 9/4843 Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F 9/4881 Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues

Definitions

  • the present invention relates generally to the field of network communications. More particularly, the present invention relates to thread management for processing asynchronous transactions in a multi-threaded server computer.
  • one or more worker threads of execution are created in a server computer in response to the scheduling of a service request from a client computer.
  • the server enters the transaction into a queue and overlaps transaction processing so that new transactions can commence before earlier transactions are completed. Separately scheduled threads of execution are usually employed for this purpose. Threads can then be created as required using the separately scheduled threads of execution.
  • Multi-threading, or more precisely, multiple threads of concurrent execution, can also be used.
  • thread creation or the start of handling a service request is typically not blocked (that is, held up) pending the completion of any prior requests. Atypically, however, blocking does occur, especially in situations where resources are overcommitted.
  • server computers free up (that is, return to a pool) or destroy worker threads used to handle a service request once a corresponding reply message is queued for dispatch to the client that requested service.
  • Deterministic scheduling algorithms have been used to dynamically allocate system resources to improve server performance. Such algorithms include determining when and whether to bring threads into existence.
  • the present invention satisfies the above mentioned needs by providing a system, method, and computer program product for intelligent timing of thread creation for processing asynchronous transactions in a multi-threaded server computer.
  • the present invention accomplishes this by implementing a stochastic algorithm to determine whether to honor a service request or to defer responding to the service request.
  • a stochastic scaling value is adjusted to reflect the creation of a thread.
  • a worker thread is then created. If the service request is deferred, the service request is re-queued at the tail of the inbound queue.
  • the deferment of a service request results in the service request being re-queued to the head of the inbound queue. In yet another alternative embodiment, the deferment of a service request results in the service request being re-queued randomly within the inbound queue.
  • FIG. 1 is a block diagram illustrating an exemplary client/server communications network implementing the present invention.
  • FIG. 2 is a block diagram illustrating an exemplary server computer implementing the present invention.
  • FIG. 3 is a flow diagram illustrating a method for providing intelligent timing for determining whether to honor or defer a request for service in accordance with the present invention.
  • FIG. 4 is a flow diagram illustrating an example stochastic process used to determine whether to honor or defer a service request.
  • FIG. 5 is a diagram of an exemplary computer system for implementing the present invention.
  • FIG. 1 is a block diagram of an exemplary client/server communications network 100.
  • Communications network 100 comprises a plurality of client computers 101 - 106, a shared network 110, server computers 113-115, and a multi-point link 108.
  • Multi-point bus 108 connects client computers 102 and 103 to shared network 110.
  • Both server computers 113-115 and client computers 101-106 are connected to shared network 110 via transmission media, such as twisted pair, coaxial cable, and/or optical fiber cable, and/or wireless media (for example, the atmosphere).
  • client computers 101-106 are coupled through shared network 110 to any of server computers 113-115.
  • client computers 101-106 can be any type of end user device, including, but not limited to, a personal computer, modem, telephone, television, set-top box, video game console, personal data assistant, or other terminal.
  • FIG. 2 is a block diagram of an exemplary server computer 200 according to one embodiment of the present invention.
  • Server computer 200 comprises, inter alia, an inbound queue 202, a worker thread creator 204, a plurality of transaction processing modules 206, a worker thread eliminator 208, and an outbound queue 210.
  • Each component 202-210 in server computer 200 is connected via communications infrastructure 212. Both inbound queue 202 and outbound queue 210 can be any type of storage device.
  • Inbound queue 202 receives requests for service from client computers 101-106. In accordance with the present invention, such requests are only conditionally honored. Each time service is requested, a stochastic feature is used to determine whether to defer the request for service or to honor the request for service contemporaneously. When the request for service is honored, a worker thread is created via worker thread creator 204. Worker threads are used to process transactions within the transaction processing modules 206. Once the corresponding transaction is completed, a transaction processing module 206 generates a reply message and places the reply message on outbound queue 210 to be sent to the corresponding client computer 101-106. Once the reply message has been placed on outbound queue 210, the corresponding worker thread can be destroyed using worker thread eliminator 208.
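The round trip through components 202-210 can be sketched as below. This is a minimal illustrative sketch, not the patent's implementation: the `Server` class, its method names, and the reply format are hypothetical, with Python's `queue` and `threading` modules standing in for queues 202/210 and the worker-thread machinery (the stochastic honor-or-defer decision is omitted here).

```python
import queue
import threading

class Server:
    """Minimal sketch of the request pipeline in server computer 200."""

    def __init__(self):
        self.inbound = queue.Queue()   # inbound queue 202
        self.outbound = queue.Queue()  # outbound queue 210

    def handle(self, request):
        # Worker thread creator 204: one worker thread per honored request.
        worker = threading.Thread(target=self._process, args=(request,))
        worker.start()
        return worker

    def _process(self, request):
        # Transaction processing module 206: complete the transaction and
        # queue a reply for dispatch; once the thread returns, it can be
        # reclaimed (worker thread eliminator 208).
        self.outbound.put(f"reply-to-{request}")
```

For example, `Server().handle("req-1")` starts one worker whose reply appears on the outbound queue once the thread finishes.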
  • steps 302-316 are carried out by server 200.
  • Worker thread creator 204 can implement each of steps 304-316.
  • the process begins with step 302 where control is immediately passed to step 304.
  • In step 304, the presence of a request for service from a client computer 101-106 is detected in inbound queue 202. Control then passes to step 306.
  • In step 306, a stochastic algorithm is used to determine whether to honor or defer the request for service. With the stochastic algorithm, fewer incidents of deadlock occur while excessive thrashing is avoided. The stochastic process will be discussed below with reference to FIG. 4. Control then passes to decision step 308.
  • In step 308, it is determined whether or not the request for service is to be deferred. If the request for service is not to be deferred, control passes to step 312; if it is to be deferred, control passes to step 310.
  • a stochastic scaling value is adjusted to reflect that a thread is to be created.
  • the stochastic scaling value is adjusted using a look-up table of factors indexed by the number of active threads presently being serviced. For example, using random numbers between 0 and 1, and a permissible load of 16 active worker threads, the stochastic scaling value would be chosen using the number of active threads being serviced at that time according to Table 1.
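Since Table 1 is not reproduced in this text, the sketch below substitutes an illustrative linear ramp of factors for random numbers in [0, 1] and a permissible load of 16 active worker threads; the table values and the function name are assumptions, not the patent's figures.

```python
MAX_THREADS = 16  # assumed permissible load of active worker threads

# Index i holds the scaling value used when i worker threads are active.
# The values ramp linearly here purely for illustration; the patent's
# Table 1 could hold any monotone set of factors. A higher value makes
# deferral more likely as load approaches the permissible maximum.
SCALING_TABLE = [i / MAX_THREADS for i in range(MAX_THREADS + 1)]

def stochastic_scaling_value(active_threads):
    """Look up the scaling value for the current number of active threads."""
    index = min(max(active_threads, 0), MAX_THREADS)  # clamp to table range
    return SCALING_TABLE[index]
```

With this shape, an idle server (value 0) honors essentially every request, while a fully loaded one (value 1) defers essentially every request.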
  • In step 314, a worker thread is created.
  • the creation of worker threads is well known to one skilled in the relevant art(s). Any known or future technique for creating worker threads can be used. Control then passes to step 316.
  • In step 310, the request for service is re-queued in inbound queue 202.
  • the request for service is re-queued at the tail of inbound queue 202.
  • the request for service is re-queued at the head of inbound queue 202.
  • the request for service is re-queued randomly within inbound queue 202. For example, a random number in the range of 1 to N, where N is the maximum number of items in inbound queue 202, is generated. The random number is then rounded to the nearest integer to give the position at which the request is re-queued.
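The three re-queueing embodiments (tail, head, random position) can be sketched with a deque; the function name, the `mode` parameter, and the deque-based queue are illustrative assumptions, not the patent's implementation. `randint` already returns an integer, so the rounding step is implicit here.

```python
import random
from collections import deque

def requeue(inbound, request, mode="tail", rng=None):
    """Re-queue a deferred service request per one of the embodiments."""
    rng = rng or random.Random()
    if mode == "tail":       # preferred embodiment: back of the queue
        inbound.append(request)
    elif mode == "head":     # alternative: front of the queue
        inbound.appendleft(request)
    elif mode == "random":   # alternative: position drawn from 1..N
        n = max(len(inbound), 1)
        inbound.insert(rng.randint(1, n) - 1, request)
    else:
        raise ValueError(f"unknown mode: {mode}")
```

Re-queueing at the tail gives other pending requests a turn first, while a random position spreads retries through the queue.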
  • FIG. 4 is a flow diagram describing how the present invention stochastically determines whether to honor or defer the request for service. The process begins with step 402, where control immediately passes to step 404.
  • a random number is generated.
  • the random number may be generated using a uniform random number generator or a pseudo-random number generator. Other methods of generating random numbers employing other types of distributions may also be used without departing from the scope of the invention. Control then passes to step 406.
  • In step 406, the random number is scaled by a predetermined scaling factor (for example, between 0 and 1). Control then passes to decision step 408.
  • In decision step 408, it is determined whether the scaled random number exceeds the stochastic scaling value. Initially, the stochastic scaling value is generated randomly. Thereafter, the stochastic scaling value is adjusted, as previously described in step 312 of FIG. 3. If the scaled random number is more than the stochastic scaling value, control passes to step 412. In step 412, it is indicated that the request for service is not to be deferred.
  • Returning to decision step 408, if the scaled random number is less than or equal to the stochastic scaling value, control passes to step 410. In step 410, it is indicated that the request for service is to be deferred. Control then passes to step 414. In step 414, the process ends.
  • Stochastic algorithms are well known. Other types of stochastic algorithms may be utilized without departing from the scope of the invention.
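The comparison described in steps 404-412 can be sketched as below; the function name and the uniform generator are assumptions (the patent allows other distributions), and `scale_factor` stands in for the predetermined scaling factor of step 406.

```python
import random

def should_defer(scaling_value, scale_factor=1.0, rng=None):
    """Stochastically decide whether to defer a service request (FIG. 4).

    Step 404 generates a random number, step 406 scales it by a
    predetermined factor, and step 408 defers the request unless the
    scaled number exceeds the stochastic scaling value.
    """
    rng = rng or random.Random()
    scaled = rng.random() * scale_factor  # steps 404-406
    return scaled <= scaling_value        # step 408
```

A scaling value near 1 defers almost every request, while a value near 0 honors almost every request, so adjusting the value with the active-thread count (step 312) throttles thread creation under load.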
  • FIG. 2 is a block diagram of an embodiment of the present invention implemented primarily in server computers 113-115.
  • the present invention is implemented primarily in software on a computer system operating as discussed herein.
  • An exemplary computer system 500 is shown in FIG. 5.
  • the computer system 500 includes one or more processors, such as processor 502.
  • Processor 502 is connected to a communication infrastructure 504 (for example, one or more buses or a network).
  • the computer system 500 also includes a main memory 506, preferably random access memory (RAM), and a secondary memory 508.
  • the secondary memory 508 includes, for example, a hard disk drive 510 and/or a removable storage drive 512, representing a floppy disk drive, a magnetic tape drive, a compact disk drive, etc.
  • the removable storage drive 512 reads from and/or writes to a removable storage unit 514 in a well known manner.
  • Removable storage unit 514, also called a program storage device or a computer program product, represents a floppy disk, magnetic tape, compact disk, etc.
  • the removable storage unit 514 includes a computer usable storage medium having stored therein computer software and/or data, such as an object's methods and data.
  • Computer programs, also called computer control logic, including object-oriented computer programs, are stored in main memory 506 and/or the secondary memory 508. Such computer programs, when executed, enable the computer system 500 to perform the features of the present invention as discussed herein.
  • the computer programs, when executed, enable the processor 502 to perform the features of the present invention. Accordingly, such computer programs represent controllers of the computer system 500.
  • the invention is directed to a computer program product comprising a computer readable medium having control logic (computer software) stored therein.
  • control logic when executed by the processor 502, causes the processor 502 to perform the functions of the invention as described herein.
  • the invention is implemented primarily in hardware using, for example, one or more state machines. Implementation of these state machines so as to perform the functions described herein will be apparent to persons skilled in the relevant art(s).

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Cylinder Crankcases Of Internal Combustion Engines (AREA)
  • Heat Treatment Of Articles (AREA)
  • Multi Processors (AREA)

Abstract

A system and method are disclosed for intelligent timing of thread creation, for processing asynchronous transactions at a multi-threaded server. When the presence of a service request is detected at an inbound queue, it is stochastically determined whether the request should be honored or deferred. If the service request is honored, a stochastic scaling value is adjusted to reflect the creation of a thread. A worker thread is then created. If the service request is deferred, it is re-queued at the tail of the inbound queue.
PCT/US2000/001993 1999-02-01 2000-01-28 Adaptive thread manager WO2000045263A2 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US24089099A 1999-02-01 1999-02-01
US09/240,890 1999-02-01

Publications (3)

Publication Number Publication Date
WO2000045263A2 true WO2000045263A2 (fr) 2000-08-03
WO2000045263A3 WO2000045263A3 (fr) 2000-12-07
WO2000045263A8 WO2000045263A8 (fr) 2002-02-07

Family

ID=22908352

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2000/001993 WO2000045263A2 (fr) 2000-01-28 Adaptive thread manager

Country Status (1)

Country Link
WO (1) WO2000045263A2 (fr)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008040563A1 (fr) * 2006-10-03 2008-04-10 International Business Machines Corporation Method, system and computer program for distributing the execution of independent jobs
CN102375931A (zh) 2010-08-13 2012-03-14 GM Global Technology Operations LLC Method of simulating transient heat transfer and temperature distribution of an aluminum casting during water quenching

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0366344A2 (fr) * 1988-10-27 1990-05-02 AT&T Corp. Multiprocessor load sharing arrangement
EP0854617A1 (fr) * 1997-01-13 1998-07-22 Alcatel ATM cell switching element implementing probabilistic priorities attached to the cells

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0366344A2 (fr) * 1988-10-27 1990-05-02 AT&T Corp. Multiprocessor load sharing arrangement
EP0854617A1 (fr) * 1997-01-13 1998-07-22 Alcatel ATM cell switching element implementing probabilistic priorities attached to the cells

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ARVIDSSON A: "A MULTI SERVER SYSTEM WITH REJECTION AND PRIORITIES" PROCEEDINGS OF THE TWELFTH INTERNATIONAL TELETRAFFIC CONGRESS, NL, AMSTERDAM, ELSEVIER, vol. CONGRESS 12, 1 - 8 June 1988, pages 1329-1336, XP000279848 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008040563A1 (fr) * 2006-10-03 2008-04-10 International Business Machines Corporation Method, system and computer program for distributing the execution of independent jobs
CN102375931A (zh) 2010-08-13 2012-03-14 GM Global Technology Operations LLC Method of simulating transient heat transfer and temperature distribution of an aluminum casting during water quenching
CN102375931B (zh) 2010-08-13 2015-03-25 GM Global Technology Operations LLC Method of simulating transient heat transfer and temperature distribution of an aluminum casting during water quenching

Also Published As

Publication number Publication date
WO2000045263A3 (fr) 2000-12-07
WO2000045263A8 (fr) 2002-02-07

Similar Documents

Publication Publication Date Title
CN110276182B (zh) Api分布式限流的实现方法
US5590334A (en) Object oriented message passing system and method
US8924467B2 (en) Load distribution in client server system
US5887168A (en) Computer program product for a shared queue structure for data integrity
US6195682B1 (en) Concurrent server and method of operation having client-server affinity using exchanged client and server keys
US8190743B2 (en) Most eligible server in a common work queue environment
US6424993B1 (en) Method, apparatus, and computer program product for server bandwidth utilization management
CN106209682A (zh) 业务调度方法、装置和系统
CN107590002A (zh) 任务分配方法、装置、存储介质、设备及分布式任务系统
CN105159782A (zh) 基于云主机为订单分配资源的方法和装置
CN108055311B (zh) Http异步请求方法、装置、服务器、终端和存储介质
US20030156547A1 (en) System and method for handling overload of requests in a client-server environment
CN108681481A (zh) 业务请求的处理方法及装置
CN112188015A (zh) 客服会话请求的处理方法、装置及电子设备
CN105302907A (zh) 一种请求的处理方法及装置
WO2000045263A2 (fr) Gestionnaire adaptatif de fils
US5568616A (en) System and method for dynamic scheduling of 3D graphics rendering using virtual packet length reduction
CN115586957B (zh) 一种任务调度系统、方法、装置及电子设备
CN115174535A (zh) 基于Kubernetes实现文件转码POD调度方法、系统、设备及存储介质
CN107229424B (zh) 一种分布式存储系统数据写入方法及分布式存储系统
CN113538081A (zh) 商城订单系统及其实现资源自适应调度的处理方法
US5539913A (en) System for judging whether a main processor after processing an interrupt is required to process the I/O control of an I/O control local processor
Lu et al. A special random selective service queueing model for access to a star LAN
CN118245233B (zh) 云密码卡算力控制系统及方法
CN102891806A (zh) 一种对使用受限资源的批量操作的调度方法和装置

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): JP

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE

121 Ep: the epo has been informed by wipo that ep was designated in this application
AK Designated states

Kind code of ref document: A3

Designated state(s): JP

AL Designated countries for regional patents

Kind code of ref document: A3

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE

AK Designated states

Kind code of ref document: C1

Designated state(s): JP

AL Designated countries for regional patents

Kind code of ref document: C1

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE

WR Later publication of a revised version of an international search report
122 Ep: pct application non-entry in european phase