WO2010120247A1 - Server architecture for multi-core systems - Google Patents

Server architecture for multi-core systems

Info

Publication number
WO2010120247A1
WO2010120247A1 PCT/SG2010/000149
Authority
WO
WIPO (PCT)
Prior art keywords
requests
request
information processing
thread
processing according
Prior art date
Application number
PCT/SG2010/000149
Other languages
English (en)
Inventor
Somasundaram Gokulakannan
Sridharan Venkatesan
Original Assignee
Electron Database Corporation Pte Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Electron Database Corporation Pte Ltd filed Critical Electron Database Corporation Pte Ltd
Priority to CA2758732A priority Critical patent/CA2758732A1/fr
Priority to US13/057,004 priority patent/US20110145312A1/en
Priority to EP10718340A priority patent/EP2419829A1/fr
Publication of WO2010120247A1 publication Critical patent/WO2010120247A1/fr

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F9/5033Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering data affinity
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/50Indexing scheme relating to G06F9/50
    • G06F2209/5013Request control
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/50Indexing scheme relating to G06F9/50
    • G06F2209/5017Task decomposition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/50Indexing scheme relating to G06F9/50
    • G06F2209/5018Thread allocation

Definitions

  • the invention discloses a software processing architecture for an application server or database server running on multi-core or multi-processor machines with cache-coherent non-uniform memory access (cc-NUMA) and SMP architectures, particularly favouring the former. More specifically, it relates to the processing, threading and locking in server software for handling concurrent requests such that performance and scalability are improved.
  • cc-NUMA: cache-coherent non-uniform memory access; SMP: symmetric multiprocessing.
  • FIGURE 1 (Prior Art) shows a schematic block diagram of a model of such process/thread per session architecture.
  • a process or thread is spawned for every user login and will last until the user has logged out. All requests from that user are undertaken in the spawned process or thread.
  • This category of architecture has some disadvantages, however. For one, process creation is resource-intensive and costly, especially in online transaction processing (OLTP) systems. When many users are logged in, the system runs a correspondingly large number of processes, so a great deal of system resource is consumed by process-switching overhead compared to the resource actually employed to perform useful work.
  • OLTP: online transaction processing
  • a connection pool is typically provided at the application server level, i.e. a cache of database connections is maintained so that the connections can be reused when the system needs to attend to future requests for data.
  • a user's OLTP transactions need not be executed by the same process or thread every time, but the result is that data cannot be cached in memory at the database level between two requests in a particular session. The data can still be cached in a memory area accessible to all sessions, but this introduces synchronization overhead to access or modify the data.
  • Server architecture might be designed such that data is stored in application-server-level memory. However, this might result in too much caching at the application server level, and hence considerable garbage-collection overhead.
  • A prior art method for avoiding heap-management or garbage-collection overhead is to pre-allocate pools of memory and use a custom, lightweight scheme for allocation/de-allocation.
  • the garbage collection algorithm is designed only for short-lived small objects, not for persistent information or long-lived caches.
  • Application server garbage collection is designed with short-lived objects in mind. If such a heap is used as a cache for database connections, the overhead will be considerable.
  • some of the data, such as query plans, are known to be too complex to be cached at the application server level. These query plans are also known as "query execution plans", i.e. a set of steps used to access or modify information in an SQL RDBMS. The plan basically tells whether to do an index scan or a full table scan to get the query results.
  • Another disadvantage of the connection pool at the application server end is that the full utility of temporary tables cannot be exploited. Any insert/update in the database incurs recovery-log overhead. This overhead is incurred to safeguard data integrity so that when the database crashes, the data can be recovered.
  • For temporary data, logging for crash recovery is not required and it is not necessary to provide the typical robust log file structure to redundantly record the database update operation.
  • ISO standards approve a less robust temporary table structure which need not be recovered after a database crash. Nevertheless, the temporary table becomes tied to the database server process in this case, and the connection pool does not guarantee that requests from the same user are allocated to the same database server process or thread. Hence, any temporary table activity is restricted to use within a single request and cannot span requests.
  • "server software" has a meaning that includes a software system which is run (either as an application server or database server or both) to provide services to other computer programs or users, rather than the computer or hardware server (e.g. “blade server”, “rack-mounted server”), unless it is so defined in context;
  • "concurrency" has a meaning that includes a software environment or architecture for server software in which multiple classes of queries and/or transactions are executable or executed simultaneously;
  • "request" has a meaning that includes a command, instruction or query from a user to the server;
  • a Request Handler is a thread which executes the user request. Whenever a page needs to be locked, the Request Handler acquires the lock so that the user request need not be concerned about locks. The Request Handlers are combined into Handler Groups, and a mapping is created between the database tables and Handler Groups, which favours ccNUMA systems. Whenever there is I/O to be performed in the request, it is forwarded to the I/O Handler, the thread that does the I/O.
  • an I/O Handler is a thread that takes care of the I/O part of any request. It is expected to occupy the processor only for very short periods of time to initiate an I/O operation.
  • "session" has a meaning that includes a process or situation whereby a plurality of requests are grouped to facilitate execution in the same context. Sessions are created when the user logs in. This might end up creating a process/thread in certain architectures. It also serves the purpose of security: after the user provides a valid UserID and password, a session is created with a session ID which is used for further communication with the server.
  • the proposed architecture is a variation of the thread pool/process pool architecture discussed above. Firstly, it tries to group requests of a similar kind and execute them together. For example, consider a full table scan request. Say there are two queries:
  • Query 1 Select * from employees_table where name like 'a%'.
  • Query 2 Select * from employees_table where name like 'b%'.
  • Both queries share some common operations - fetching each record and obtaining the value of the name field from it. Only the condition to be applied differs. Current architectures run these two requests as separate requests, as sketched below.
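  • As an illustration of this grouping, the following sketch (in Java, with hypothetical names; the patent does not prescribe an implementation language) executes the two queries above as one group: the table is scanned once, and each grouped request contributes only its own predicate.
```java
// Hypothetical sketch: one scan of employees_table serves both Query 1 and
// Query 2. Each grouped request carries only its own predicate and result sink.
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

final class GroupedScan {
    record Employee(String name) {}
    record ScanRequest(Predicate<Employee> condition, List<Employee> results) {}

    static void executeAsGroup(List<Employee> table, List<ScanRequest> group) {
        for (Employee row : table) {        // common work: fetch each record once
            for (ScanRequest r : group) {   // per-request work: apply the condition
                if (r.condition().test(row)) r.results().add(row);
            }
        }
    }

    public static void main(String[] args) {
        List<Employee> table = List.of(new Employee("alice"), new Employee("bob"));
        ScanRequest q1 = new ScanRequest(e -> e.name().startsWith("a"), new ArrayList<>());
        ScanRequest q2 = new ScanRequest(e -> e.name().startsWith("b"), new ArrayList<>());
        executeAsGroup(table, List.of(q1, q2)); // one pass over the table serves both
        System.out.println(q1.results() + " / " + q2.results());
    }
}
```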
  • our proposed invention concerns a novel cache-conscious and multi-core-conscious server architecture that fully exploits concurrency in handling similar requests.
  • Our proposed architecture also minimizes thread-switching overhead by exploiting the inherent parallelism in the inflowing requests. This is achieved by grouping together requests that have similar tasks and executing the shared work only once. It involves integrating a value into the information set or query structure of the request in a substantially asynchronous manner. Preferably, the requests are processed in a Complete-Async (also called Total-Async or Full-Async) way or manner. Concurrent requests having common tasks may be grouped together for single execution.
  • the general embodiment of our method for concurrent information processing in a server system in a multi-core processor caching environment includes simultaneously processing requests comprising multiple classes of queries and/or executing transactions in at least one of an application server and a database server. Our method comprises:
  • the information or query structure of the requests is maintained such that a plurality of concurrent requests is identifiable by their respective tasks and grouped according to their similarity of task category.
  • One aspect of the information or query structure of the requests is to assign the structure a hash value according to the task category of the request, so that similar tasks receive similar hash values.
  • the grouped requests may be executable as a single request.
  • the grouped requests may further preferably be maintained as a grouping session for execution in a database server or application server.
  • a second aspect of our method provides for the request to be switchable between threads, based on the current holder of the shared data required by the request.
  • our method provides for the information or query structure of the requests to be managed on the basis of thread-specific storage structures providing concurrent access to shared data among concurrent data updates.
  • a third aspect of our method for concurrent information processing involves processing in a Full- or Complete-Asynchronous way, including enabling the requests to pause and restart on demand.
  • a request is divided into multiple sync-points, and the request is coupled to the thread only between two sync-points. Even between two sync-points, the request might get de-coupled from the thread executing it if there is a page request or lock request. A sketch of this decoupling follows.
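  • A minimal sketch of this sync-point decoupling, under the assumption that a request can be modelled as a list of stages (all names are illustrative, not the patent's):
```java
// Illustrative model: a request is a list of stages separated by sync-points.
// The handler runs one stage at a time and re-queues unfinished requests, so
// no request stays coupled to the thread for longer than one stage.
import java.util.ArrayDeque;
import java.util.List;
import java.util.Queue;

final class SyncPointDemo {
    static final class Request {
        final List<Runnable> stages;        // work between consecutive sync-points
        int next = 0;
        Request(List<Runnable> stages) { this.stages = stages; }
        boolean done() { return next >= stages.size(); }
        void runToNextSyncPoint() { stages.get(next++).run(); }
    }

    public static void main(String[] args) {
        Queue<Request> queue = new ArrayDeque<>();
        queue.add(new Request(List.of(
                () -> System.out.println("long scan: pages 1-100"),
                () -> System.out.println("long scan: pages 101-200"))));
        queue.add(new Request(List.of(() -> System.out.println("short update"))));
        while (!queue.isEmpty()) {          // the Request Handler loop
            Request r = queue.poll();
            r.runToNextSyncPoint();         // coupled to this thread only here
            if (!r.done()) queue.add(r);    // paused, to be restarted later
        }
    }
}
```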
  • a fourth aspect of our invention provides for the incoming requests to be handled by Request Handlers and for separate classes of threads to be embodied as I/O Handlers, taking care of network and/or disk I/O.
  • Preferably, one I/O Handler runs for each Request Handler, including for flushing data to secondary storage as required.
  • a new request object may preferably be created for the Request Handler to process the grouped requests as a single request. The Request Handler will not be waiting on any I/O; hence the processing power is efficiently utilized.
  • a plurality of sync points may be provided for each of the requests in the group prior to their completion. Preferably, all the requests in the group are taken to a particular synchronization point before proceeding therefrom, and all of said requests are then carried forward to the next synchronization point.
  • a sixth aspect of our method provides for each processor to be bound to a thread group, with each core hosting two threads - one thread each for the Request Handler and the I/O Handler, as in the sketch below.
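  • The layout can be sketched as below. Standard Java offers no portable CPU-affinity API, so the binding of each thread pair to its core is indicated only by a comment; real pinning would rely on an OS facility (e.g. numactl/taskset) or a native library.
```java
// Illustrative layout: one Request Handler thread and one I/O Handler thread
// per core. The binding itself is platform-specific and shown as a comment.
import java.util.ArrayList;
import java.util.List;

final class ThreadGroupLayout {
    public static void main(String[] args) {
        int cores = Runtime.getRuntime().availableProcessors();
        List<Thread> threads = new ArrayList<>();
        for (int core = 0; core < cores; core++) {
            // Two threads per core, as described: request handling and I/O.
            threads.add(new Thread(() -> { /* drain this core's request queue */ },
                    "request-handler-core-" + core));
            threads.add(new Thread(() -> { /* initiate and complete I/O */ },
                    "io-handler-core-" + core));
            // Binding both threads to 'core' would happen here, platform-specifically.
        }
        threads.forEach(Thread::start);
    }
}
```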
  • each thread group operates on a subset of tables depending on the workload on the database and only the joins access data from other tables.
  • an execution thread shall put up the locking request for queuing without waiting for the lock to be acquired, and shall then take up another request to be processed.
  • the request from the first thread may be switched to a second thread holding shared data, thus enabling the first thread to continue processing the other request.
  • the request may be enqueued in the first thread's operating page queue.
  • multiple requests from second and subsequent threads are processed in the same manner with each of the multiple requests being so enqueued in the first thread's operating page queue, favourably in the same core.
  • When the thread or Request Handler takes a lock on the operating pages and the requests enqueued on the page, the thread preferably groups the enqueued requests according to similarity of tasks, whenever possible, for group execution. More preferably, the grouped requests are executed as a single request or a series of requests in a Grouped Executor Session, as sketched below.
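  • A hedged sketch of this drain-and-group step, assuming the handler already holds the page lock (names and structure are ours, not the patent's):
```java
// Assumed structure: with the page lock held, drain the Operating Page Queue,
// bucket requests by task hash, and run each bucket back-to-back while the
// page is hot in cache (a Grouped Executor Session in the patent's terms).
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Queue;

final class OperatingPageDrain {
    record PageRequest(int taskHash, Runnable work) {}

    static void drainUnderLock(Queue<PageRequest> operatingPageQueue) {
        Map<Integer, List<PageRequest>> groups = new HashMap<>();
        for (PageRequest r : operatingPageQueue)       // group by task similarity
            groups.computeIfAbsent(r.taskHash(), h -> new ArrayList<>()).add(r);
        operatingPageQueue.clear();
        for (List<PageRequest> group : groups.values())
            group.forEach(r -> r.work().run());        // one session per group
    }
}
```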
  • a specific embodiment of our method involves: (i) a first Request Handler takes a request from Request Handler Group
  • FIGURE 1 (Prior Art) shows a schematic block diagram of a prior art model of a process/thread per session architecture.
  • FIGURE 2 (Prior Art) illustrates a schematic block diagram of another prior art model of a process/thread pool for all sessions architecture.
  • FIGURE 3 depicts an embodiment of our proposed architecture, in which the processing of a plurality of requests at the database server end is shown in a schematic block diagram.
  • FIGURE 4 exemplifies another embodiment of our proposed architecture wherein a plurality of sessions are handled by a process/thread pool in a multi-core processor environment, shown in a schematic block diagram.
  • FIGURE 5 shows a block diagram of our Request Handler mapping implementable in a typical NUMA hardware configuration comprising a quad-processor quad-core system.
  • FIGURE 6 illustrates an example of a page locking flow enabling the enqueuing and execution of requests in our invention.
  • the general embodiment of our invention may be described as comprised in a method for concurrent information processing in a server system in a multi-core processor caching environment, wherein the information processing includes simultaneously processing requests comprising multiple classes of queries and/or executing transactions in at least one of an application server and a database server.
  • the I/O Handler may not be able to load the object completely into memory in order for the Request Handler to process it.
  • if a large object is of the size of 10 GB, loading it completely might not be feasible in a system with only 8 GB of memory. It may even turn out to be counter-productive, as the attempt to load it would flush out other often-accessed pages already loaded in memory.
  • streaming the large objects synchronously (with possible I/O waits) by the Request Handler is a possible solution for reading them. Nevertheless, our proposed method will work under such circumstances in a substantially asynchronous manner.
  • our method comprises the steps of (a) making said requests in a substantially asynchronous way, including a totally asynchronous way; (b) calculating a hash on the request query's string with information to enable similar requests to be grouped together; (c) grouping said similar requests in a group session (including whereby a group session is formed by putting all requests with the same hash value inside a session object); and (d) executing said requests in said group session. Steps (b)-(d) are sketched below.
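  • Steps (b)-(d) might look as follows; the hash normalization shown (stripping quoted literal values before hashing so that similar queries collide) is our assumption for illustration, not the patent's prescribed formula.
```java
// Sketch of steps (b)-(d). The normalization in taskHash() - replacing quoted
// literals with '?' so that similar queries hash alike - is an assumption.
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

final class HashGrouping {
    record Request(String queryText) {}

    // (b) hash the task-identifying part of the query string
    static int taskHash(Request r) {
        return r.queryText().replaceAll("'[^']*'", "?").hashCode();
    }

    public static void main(String[] args) {
        List<Request> incoming = List.of(
                new Request("select * from employees_table where name like 'a%'"),
                new Request("select * from employees_table where name like 'b%'"));
        // (c) group session: all requests with the same hash land in one bucket
        Map<Integer, List<Request>> sessions = new HashMap<>();
        for (Request r : incoming)
            sessions.computeIfAbsent(taskHash(r), h -> new ArrayList<>()).add(r);
        // (d) each bucket would be handed to a Grouped Executor Session
        sessions.values().forEach(g -> System.out.println("execute together: " + g));
    }
}
```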
  • a salient feature of our method includes providing the information or query structure of the requests to be maintained such that a plurality of concurrent requests is identifiable by their respective tasks and grouped according to their similarity of task category.
  • One way to structure the requests is to assign each of them a hash value which reflects the task similarity of the request.
  • a preferred embodiment is to assign a hash value to the request's query or information structure so that similar hash values are accorded to similar categories of tasks.
  • requests having similar tasks may be grouped and may be executed as a single request in a Grouped Executor session.
  • Our Executor Session corresponds to the concept of a session in any of today's servers and is capable of executing a series of requests. Only the requests are assigned a hash value for grouping, not the Request Handlers, which are the threads inside the thread group that execute the requests.
  • Grouped Executor Session is a Session created for executing the grouped requests and deleted after the execution.
  • the request is switchable between threads, based on the current holder of the shared data required by the request. Accordingly, the information or query structure of the requests may be managed on the basis of thread-specific storage structures while providing concurrent access to shared data among concurrent data updates.
  • A preferred, typical configuration of our method comprises a single process with a fixed number of threads in different classes. As the requests are grouped as a collection of similar requests, it is preferred that a new request object be created so that it may be seen and processed by the Request Handler as a single request.
  • the incoming requests may preferably be handled by the Request Handlers, and the separate classes of threads may be embodied as I/O Handlers, taking care of network and/or disk I/O.
  • the Request Handler can keep executing tasks by delegating any I/O to the I/O handler, thereby reducing performance declines caused by waiting.
  • the Request Handler count can be decided depending on the number of processing units available and the I/O handler count can be decided based on the amount of parallelism the data storage can support.
  • our method may provide for one I/O Handler running for each Request Handler, including for flushing data to secondary storage as required. It may prefetch the data needed by the Request Handler and also take care of flushing the data created by the Request Handler to secondary storage, as in the sketch below.
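  • A minimal sketch of this pairing, assuming a blocking queue between the two threads (illustrative names; only the I/O Handler ever waits):
```java
// Illustrative pairing: the Request Handler enqueues prefetch/flush jobs and
// keeps working; only the companion I/O Handler thread ever blocks.
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

final class IoHandlerPair {
    record IoJob(String kind, int pageId) {}   // "prefetch" or "flush"

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<IoJob> ioQueue = new LinkedBlockingQueue<>();
        Thread ioHandler = new Thread(() -> {
            try {
                while (true) {
                    IoJob job = ioQueue.take();          // the only waiting thread
                    System.out.println(job.kind() + " page " + job.pageId());
                }
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        }, "io-handler");
        ioHandler.setDaemon(true);
        ioHandler.start();
        ioQueue.add(new IoJob("prefetch", 42));  // Request Handler does not wait
        ioQueue.add(new IoJob("flush", 7));
        Thread.sleep(100);                       // let the sketch drain before exit
    }
}
```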
  • Another aspect of our method involves providing for a plurality of synchronization points for each of the requests in the group prior to their completion.
  • our proposed architecture solves this problem by splitting a long request into multiple sync-points; the request is paused and restarted between sync-points. Because of this feature, a long request cannot hog the thread for a long time.
  • By dividing the request into multiple sync-points, we may associate a user request with multiple threads, while multiple user requests can be grouped and associated with a single thread. This level of de-coupling is not present in current architectures, and it prevents a request from hogging the thread for a long time.
  • This feature involves taking all the requests in the group to a predetermined sync point before proceeding therefrom, and carrying forward all of said requests to the next sync point.
  • By grouping requests we mean grouping the requests which operate on the same data with different operations. Since all the requests operate on the same data, the request processing is highly cache-efficient. Say you have a join request between tables 'A' and 'B' and another between tables 'A' and 'C'. Say both of them involve scanning 'A' first.
  • each processor is to be bound to a thread group, with each core hosting two threads, i.e. one thread for each of the Request Handler and the I/O Handler.
  • the I/O handlers and Request Handlers usually operate on the same data and hence cache thrashing is avoided.
  • the grouped requests may be maintained as a single grouping session for execution in a database server or application server.
  • multi-version concurrency is usually used to resolve concurrency bottlenecks at the record level, but pages are usually accessed with shared or exclusive locks.
  • the lock is provided to the requesting sessions by the Page Manager or Buffer Manager, which internally employs any of the common synchronization mechanisms. It provides each user connected to the database with a "snapshot" of the database for that person to work with. Any changes made will not be seen by other users of the database until the transaction has been committed. So the processes which are waiting for locks are idle until the locks they are interested in are released.
  • Request A locks a page with an exclusive lock for insert/update. If another request tries to take a shared or exclusive lock on the same page, it goes into waiting mode.
  • threads/Handlers take the lock on pages, and the requests that want to work with a page enqueue themselves in the owning thread's queue. The thread executes them one by one by grouping them. This is cache-friendly, since the thread operates on the same page again and again.
  • each thread group operates on a subset of tables depending on the workload on the database, and only the joins access data from other tables. Since each thread group operates on a subset of the entire set of tables, the content from a particular table gets mostly cached in the portion of memory which is local to that thread group.
  • having the required data in local memory means faster memory access. This feature of our method is thus particularly favourable to the ccNUMA systems.
  • FIGURE 5 shows a block diagram of our Request Handler mapping, implementable in a typical NUMA hardware configuration comprising a quad-processor quad-core system, shown running different typical tasks distributed over the 4 processors and their memories.
  • Each of the four quad-core processors holds in its respective Memory Banks the pages of tables that are accessible by the Request Handler Group of the respective processor as well as by those of other processors.
  • Each of the processors may contain some Request Handler and I/O Handler threads of a Request Handler Group. Due to the affinity between the tables and the Request Handler Group, the pages to be accessed by a Request Handler are most likely to be found in the local memory of its resident core.
  • FIGURE 6 illustrates an example of a page locking flow enabling the enqueuing and execution of requests in our invention which we shall now describe.
  • a plurality of Request Handlers may operate in series although our diagram only shows the first two Request Handlers.
  • Each Request Handler will take a request from the Request Handler Group Queue and examine it with a view to executing it. If the request's execution requires access to data or information that is on a particular page of a table, then the header of that page is first examined.
  • the request is then queued under one of the queues of Request Handler 1, depending on the nature of the page. If the page is marked as the Operating Page, the request is queued under the "Operating Page Queue" of Request Handler 1. Otherwise, it is queued under the "Other Page Request Queue". It should be noted that there can only be one Operating Page per Request Handler, i.e. the page that is currently being acted upon by that Request Handler.
  • Request A is picked up by the Request Handler in its attempt to execute it.
  • Request A is a request, say, for an update of page 1 of Table A.
  • the Request Handler would then proceed to examine the header of the page of interest, which leads it to find that the page is the Operating Page of Request Handler 1.
  • our methodology calls for Request Handler 2 to enqueue Request A under the Operating Page Queue of Request Handler 1, pick up the next request from the Request Handler Group Queue and continue with its execution. This flow is sketched below.
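  • The routing decision of this walkthrough might be sketched as follows (types and fields are illustrative, not the patent's):
```java
// Illustrative routing: a handler inspects the page header; if the page is
// another handler's Operating Page, the request is parked in that handler's
// Operating Page Queue and the caller moves on to its next request.
import java.util.ArrayDeque;
import java.util.Queue;

final class PageRouting {
    record Request(int pageId) {}

    static final class Handler {
        int operatingPageId = -1;   // at most one Operating Page per handler
        final Queue<Request> operatingPageQueue = new ArrayDeque<>();
        final Queue<Request> otherPageRequestQueue = new ArrayDeque<>();
    }

    /** Returns true if 'self' may execute the request now. */
    static boolean route(Request r, Handler self, Handler ownerFromPageHeader) {
        if (ownerFromPageHeader != null && ownerFromPageHeader != self) {
            if (ownerFromPageHeader.operatingPageId == r.pageId())
                ownerFromPageHeader.operatingPageQueue.add(r);      // Request A's path
            else
                ownerFromPageHeader.otherPageRequestQueue.add(r);
            return false;   // caller picks the next request from the group queue
        }
        return true;
    }
}
```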
  • the industrial applicability of our invention may be stated in the form of the advantages of our method for concurrently processing information in a server system in a multi-core processor environment, as follows.
  • the I/O Handler keeps a count of pages processed for each request and increments it before doing an I/O on the request's behalf. A request cannot hog the thread for an extended period of time: once a request has been executed for x pages, it is forcibly put into sleep mode to avoid hogging the CPU, and the task is re-queued behind the newly queued requests. This stops a request from occupying the CPU continuously for an extended period, as in the sketch below.
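  • A sketch of this fairness rule, with an assumed illustrative budget x (the patent does not fix a value):
```java
// Illustrative fairness rule: count pages per request before each I/O; after
// X_PAGES the request yields and is re-queued behind newer arrivals.
import java.util.Deque;

final class FairnessRequeue {
    static final int X_PAGES = 64;             // assumed budget ("x pages")

    static final class Request { int pagesProcessed; }

    /** Returns false when the request must yield; it has been re-queued. */
    static boolean mayContinue(Request r, Deque<Request> handlerQueue) {
        if (++r.pagesProcessed >= X_PAGES) {   // incremented before each I/O
            r.pagesProcessed = 0;
            handlerQueue.addLast(r);           // behind newly queued requests
            return false;
        }
        return true;                           // budget left: perform the I/O
    }
}
```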
  • the Request Handler can keep executing tasks by delegating any I/O to the I/O Handler, thereby mitigating the performance degradation caused by waiting.
  • the request handler count can be decided depending on the number of processing units and the I/O handler count can be decided based on the amount of parallelism the data storage can support.

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The present invention concerns a server architecture enabling concurrent processing of information by a server system in a multi-core processor environment. In the general embodiment, the invention consists of simultaneously processing requests comprising multiple classes of queries and/or executing transactions in an application server and/or a database server. The method consists of making said requests in a totally asynchronous model, structuring the requests with hash values to enable grouping of similar requests, grouping similar requests in a group session, and executing the requests in said group session. Furthermore, the proposed architecture minimizes thread-switching overhead by exploiting the inherent parallelism in incoming requests. Threads and requests are decoupled, so the only outcome of any lock request is that the threads take up another request instead of waiting until the lock is acquired. As a result, the threads inside the database process never go into sleep/wait mode, and system resources are used more efficiently.
PCT/SG2010/000149 2009-04-14 2010-04-14 Server architecture for multi-core systems WO2010120247A1 (fr)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CA2758732A CA2758732A1 (fr) 2009-04-14 2010-04-14 Server architecture for multi-core systems
US13/057,004 US20110145312A1 (en) 2009-04-14 2010-04-14 Server architecture for multi-core systems
EP10718340A EP2419829A1 (fr) 2009-04-14 2010-04-14 Server architecture for multi-core systems

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
SG200902512-3 2009-04-14
SG200902512-3A SG166014A1 (en) 2009-04-14 2009-04-14 Server architecture for multi-core systems

Publications (1)

Publication Number Publication Date
WO2010120247A1 true WO2010120247A1 (fr) 2010-10-21

Family

ID=42313073

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/SG2010/000149 WO2010120247A1 (fr) 2009-04-14 2010-04-14 Server architecture for multi-core systems

Country Status (5)

Country Link
US (1) US20110145312A1 (fr)
EP (1) EP2419829A1 (fr)
CA (1) CA2758732A1 (fr)
SG (1) SG166014A1 (fr)
WO (1) WO2010120247A1 (fr)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013085669A1 (fr) * 2011-12-07 2013-06-13 Qualcomm Incorporated Batching resource requests into a transaction and forking the transaction in a portable computing device
US8615755B2 (en) 2010-09-15 2013-12-24 Qualcomm Incorporated System and method for managing resources of a portable computing device
US8631414B2 (en) 2010-09-15 2014-01-14 Qualcomm Incorporated Distributed resource management in a portable computing device
US8806502B2 (en) 2010-09-15 2014-08-12 Qualcomm Incorporated Batching resource requests in a portable computing device
WO2014149031A1 (fr) * 2013-03-18 2014-09-25 Ge Intelligent Platforms, Inc. Apparatus and method for combining time series queries
US9098521B2 (en) 2010-09-15 2015-08-04 Qualcomm Incorporated System and method for managing resources and threshold events of a multicore portable computing device
US9152523B2 (en) 2010-09-15 2015-10-06 Qualcomm Incorporated Batching and forking resource requests in a portable computing device

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9928264B2 (en) * 2014-10-19 2018-03-27 Microsoft Technology Licensing, Llc High performance transactions in database management systems
US10195940B2 (en) * 2015-10-15 2019-02-05 GM Global Technology Operations LLC Vehicle task recommendation system
US11256572B2 (en) * 2017-01-23 2022-02-22 Honeywell International Inc. Systems and methods for processing data in security systems using parallelism, stateless queries, data slicing, or asynchronous pull mechanisms
US10776155B2 (en) 2018-03-15 2020-09-15 International Business Machines Corporation Aggregating, disaggregating and converting electronic transaction request messages
CN111629019B (zh) * 2019-08-13 2022-11-18 广州凡科互联网科技股份有限公司 A method for asynchronously processing big data and high concurrency
US11271992B2 (en) * 2020-01-22 2022-03-08 EMC IP Holding Company LLC Lazy lock queue reduction for cluster group changes
CN112182003A (zh) * 2020-09-28 2021-01-05 北京沃东天骏信息技术有限公司 A data synchronization method and apparatus

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5835757A (en) * 1994-03-30 1998-11-10 Siemens Telecom Networks Distributed database management system for servicing application requests in a telecommunications switching system
DE19620622A1 (de) * 1996-05-22 1997-11-27 Siemens Ag Method for synchronizing programs on different computers of a cluster
US6782410B1 (en) * 2000-08-28 2004-08-24 Ncr Corporation Method for managing user and server applications in a multiprocessor computer system
US7149737B1 (en) * 2002-04-04 2006-12-12 Ncr Corp. Locking mechanism using a predefined lock for materialized views in a database system
US8234256B2 (en) * 2003-11-26 2012-07-31 Loglogic, Inc. System and method for parsing, summarizing and reporting log data
WO2006045029A1 (fr) * 2004-10-19 2006-04-27 Platform Solutions, Inc. Processing of adaptive code in multiprocessor and multiple-address-space systems
US8032885B2 (en) * 2005-10-11 2011-10-04 Oracle International Corporation Method and medium for combining operation commands into database submission groups
US7841080B2 (en) * 2007-05-30 2010-11-30 Intel Corporation Multi-chip packaging using an interposer with through-vias
US8392925B2 (en) * 2009-03-26 2013-03-05 Apple Inc. Synchronization mechanisms based on counters

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
ALEJANDRO MENENDEZ: "Staged Server Design", 2005, pages 1 - 10, XP002591746, Retrieved from the Internet <URL:http://kbs.cs.tu-berlin.de/teaching/ws2005/htos/papers/staged_svr.pdf> [retrieved on 20100806] *
HEISS, LINNERT: "Hot Topics in OS WS2005/06", XP002595734, Retrieved from the Internet <URL:http://kbs.cs.tu-berlin.de/teaching/ws2005/htos/index.htm> [retrieved on 20100714] *
JAMES R. LARUS, MICHAEL PARKES: "Using Cohort Scheduling to Enhance Server Performance", INTERNET ARTICLE, June 2002 (2002-06-01), pages 1 - 12, XP002591781, Retrieved from the Internet <URL:http://www.cs.toronto.edu/~demke/OS_Reading_Grp/s2002/larus_cohort_usenix02.pdf> [retrieved on 20100713] *
STAVROS HARIZOPOULOS, ANASTASSIA AILAMAKI: "A Case for Staged Database Systems", IN PROCEEDINGS OF 1ST CONFERENCE ON INNOVATIVE DATA SYSTEMS RESEARCH, 2003, XP002591745, Retrieved from the Internet <URL:http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.12.1647> [retrieved on 20100713] *
STAVROS HARIZOPOULOS, VLADISLAV SHKAPENYUK, ANASTASSIA AILAMAKI: "QPipe: a simultaneously pipelined relational query engine", PROCEEDINGS OF THE 2005 ACM SIGMOD INTERNATIONAL CONFERENCE ON MANAGEMENT OF DATA, 14 June 2005 (2005-06-14) - 16 June 2005 (2005-06-16), pages 383 - 394, XP002591744, ISBN: 1-59593-060-4, Retrieved from the Internet <URL:http://portal.acm.org/citation.cfm?id=1066201> [retrieved on 20100713] *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8615755B2 (en) 2010-09-15 2013-12-24 Qualcomm Incorporated System and method for managing resources of a portable computing device
US8631414B2 (en) 2010-09-15 2014-01-14 Qualcomm Incorporated Distributed resource management in a portable computing device
US8806502B2 (en) 2010-09-15 2014-08-12 Qualcomm Incorporated Batching resource requests in a portable computing device
US9098521B2 (en) 2010-09-15 2015-08-04 Qualcomm Incorporated System and method for managing resources and threshsold events of a multicore portable computing device
US9152523B2 (en) 2010-09-15 2015-10-06 Qualcomm Incorporated Batching and forking resource requests in a portable computing device
WO2013085669A1 (fr) * 2011-12-07 2013-06-13 Qualcomm Incorporated Batching resource requests into a transaction and forking the transaction in a portable computing device
CN103988180A (zh) * 2011-12-07 2014-08-13 高通股份有限公司 Batching resource requests into a transaction and forking the transaction in a portable computing device
CN103988180B (zh) * 2011-12-07 2018-06-05 高通股份有限公司 Batching resource requests into a transaction and forking the transaction in a portable computing device
WO2014149031A1 (fr) * 2013-03-18 2014-09-25 Ge Intelligent Platforms, Inc. Apparatus and method for combining time series queries

Also Published As

Publication number Publication date
EP2419829A1 (fr) 2012-02-22
SG166014A1 (en) 2010-11-29
US20110145312A1 (en) 2011-06-16
CA2758732A1 (fr) 2010-10-21

Similar Documents

Publication Publication Date Title
US8336051B2 (en) Systems and methods for grouped request execution
US20110145312A1 (en) Server architecture for multi-core systems
Boroumand et al. CoNDA: Efficient cache coherence support for near-data accelerators
Calciu et al. Black-box concurrent data structures for NUMA architectures
US8458721B2 (en) System and method for implementing hierarchical queue-based locks using flat combining
CN109075988B (zh) Task scheduling and resource issuing system and method
Mahmoud et al. Maat: Effective and scalable coordination of distributed transactions in the cloud
US20160179865A1 (en) Method and system for concurrency control in log-structured merge data stores
US20140279917A1 (en) Techniques To Parallelize CPU and IO Work of Log Writes
CN101359333 (zh) Parallel data processing method based on the latent Dirichlet allocation model
Barthels et al. Strong consistency is not hard to get: Two-Phase Locking and Two-Phase Commit on Thousands of Cores
Wang et al. Elastic pipelining in an in-memory database cluster
Das et al. Thread cooperation in multicore architectures for frequency counting over multiple data streams
US10275289B2 (en) Coexistence of message-passing-like algorithms and procedural coding
Wang et al. Numa-aware scalable and efficient in-memory aggregation on large domains
JP6283376B2 (ja) System and method for supporting work-sharing multiplexing in a cluster
US10740317B2 (en) Using message-passing with procedural code in a database kernel
Yao et al. Dgcc: A new dependency graph based concurrency control protocol for multicore database systems
Lai et al. Load balancing in distributed shared memory systems
US10810124B2 (en) Designations of message-passing worker threads and job worker threads in a physical processor core
Gugnani et al. Characterizing and accelerating indexing techniques on distributed ordered tables
Ahluwalia Scalability design patterns
Huang et al. Rs-store: a skiplist-based key-value store with remote direct memory access
Rehmann et al. Applications and evaluation of in-memory mapreduce
Jang et al. AutoBahn: accelerating concurrent, durable file I/O via a non-volatile buffer

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 6484/CHENP/2010

Country of ref document: IN

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 10718340

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 13057004

Country of ref document: US

WWE Wipo information: entry into national phase

Ref document number: 2758732

Country of ref document: CA

NENP Non-entry into the national phase

Ref country code: DE

WWE Wipo information: entry into national phase

Ref document number: 2010718340

Country of ref document: EP