WO2018194237A1 - Method and device for processing a transaction in a hybrid transactional memory system - Google Patents

Method and device for processing a transaction in a hybrid transactional memory system

Info

Publication number
WO2018194237A1
Authority
WO
WIPO (PCT)
Prior art keywords
transaction
htm
processing
stm
memory area
Prior art date
Application number
PCT/KR2017/014991
Other languages
English (en)
Korean (ko)
Inventor
장재우
윤민
신영성
강문환
장연우
마현국
Original Assignee
전북대학교산학협력단
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 전북대학교산학협력단 filed Critical 전북대학교산학협력단
Publication of WO2018194237A1 publication Critical patent/WO2018194237A1/fr

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/466Transaction processing
    • G06F9/467Transactional memory
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/0223User address space allocation, e.g. contiguous or non contiguous base addressing
    • G06F12/023Free address space management
    • G06F12/0253Garbage collection, i.e. reclamation of unreferenced memory
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements

Definitions

  • The present invention relates to a hybrid transactional memory system for efficient memory management of an in-memory database in a multi-core environment.
  • More particularly, the present invention relates to a transaction processing method and a transaction processing apparatus in a hybrid transactional memory system that can efficiently detect conflicts between transactions and improve the processing performance of large transactions.
  • Depending on the processing method, transactional memory (TM) is classified into software transactional memory (STM), hardware transactional memory (HTM), and hybrid transactional memory (HyTM), which combines STM and HTM.
  • STM can process transactions using compilers and APIs and can efficiently detect conflicts between transactions, but it has the disadvantage of a large overhead for recording the memory addresses that the threads read and write.
  • HTM has been proposed to provide the main functions of TM in hardware by modifying the cache and bus protocols of the existing hardware architecture.
  • However, because of hardware limitations such as the cache size limit and context switching with the OS, HTM may have difficulty coping with conflicts between transactions.
  • The existing hybrid TM technique allocates memory on each 'malloc' call from a memory pool in shared memory through a lock-based memory manager, so accessing the pool incurs lock costs. That is, the more threads access the pool at the same time, the longer the lock wait time becomes, and each thread may end up waiting for memory.
  • FIG. 1 is a diagram illustrating a concurrency control problem in transaction processing in a hybrid TM technique according to a conventional embodiment.
  • For example, when STM and HTM are performed simultaneously with both x and y initialized to '0', even if the STM side ('SW_WRITEBACK' of FIG. 1) changes the value of x to 1 in line 2, the HTM side ('HW_POST_BEGIN' of FIG. 1) may proceed without reflecting the changed value of x, so an error in which an incorrect value is always derived may occur.
  • An embodiment of the present invention aims to increase the processing efficiency of large DB transactions by using HTM and STM technology on an in-memory database in a multi-core environment and performing the HTM processing and the STM processing of a transaction simultaneously based on a flexible Bloom filter data structure.
  • An embodiment of the present invention aims to improve transaction processing performance in hybrid transactional memory (hybrid TM) by controlling the HTM processing and the STM processing of a transaction simultaneously, giving processing priority to the HTM processing for a unit time, and using the STM processing result to complement the HTM processing result.
  • An embodiment of the present invention aims to give priority to transactions processed on the HTM so that they can be processed without interference from the STM, and to maintain a snapshot-based sequence lock (seqlock) so that concurrency control between those transactions and transactions processed by the STM is performed efficiently.
  • An embodiment of the present invention aims to efficiently detect conflicts between transactions using a flexible Bloom filter data structure.
  • An embodiment of the present invention aims to provide efficient transaction processing in a multi-core in-memory environment through a memory management tool optimized for each data size.
  • A transaction processing method in a hybrid transactional memory system according to an embodiment includes, when a transaction is started from a workload, performing HTM processing by a hardware transactional memory (HTM) on the transaction, and performing STM processing by a software transactional memory (STM) on the transaction.
  • A transaction processing apparatus in a hybrid transactional memory system according to an embodiment includes a transaction processing unit that, when a transaction is started from a workload, performs HTM processing by a hardware transactional memory (HTM) on the transaction and performs STM processing by a software transactional memory (STM) on the transaction.
  • According to an embodiment, the HTM processing and the STM processing of a transaction are performed simultaneously based on a flexible Bloom filter data structure, using HTM and STM technology on an in-memory database in a multi-core environment, so that the processing efficiency of large-scale DB transactions can be increased.
  • According to an embodiment, transaction processing performance in hybrid transactional memory can be improved by controlling the HTM processing and the STM processing of a transaction simultaneously, giving processing priority to the HTM processing for a unit time, and using the STM processing result to complement the HTM processing result.
  • According to an embodiment, concurrency control can be performed efficiently between transactions being processed by the HTM and transactions being processed by the STM.
  • FIG. 1 is a diagram illustrating a concurrency control problem in transaction processing in a hybrid TM technique according to a conventional embodiment.
  • FIG. 2 is a block diagram illustrating a configuration of a transaction processing apparatus in a hybrid transactional memory system according to an embodiment of the present invention.
  • FIG. 3 is a diagram illustrating an overall flow of processing a transaction in a hybrid transactional memory system according to an embodiment of the present invention.
  • FIG. 4 is a diagram illustrating a process of performing concurrency control during transaction processing according to an embodiment of the present invention.
  • FIG. 5 is a view showing the structure of a bloom filter in an embodiment of the present invention.
  • FIG. 6 is a diagram illustrating a flow of performing concurrency control based on a bloom filter according to an embodiment of the present invention.
  • FIG. 7 is a diagram illustrating a memory allocator in accordance with one embodiment of the present invention.
  • FIG. 8 illustrates a structure of a free list of a local cache according to an embodiment of the present invention.
  • FIG. 9 illustrates an example of a small object allocation algorithm for a small object of less than a predetermined size according to an embodiment of the present invention.
  • FIG. 10 is a diagram illustrating a structure of a free list of a central heap according to one embodiment of the present invention.
  • FIG. 11 is a diagram for describing object management using a span according to an embodiment of the present invention.
  • FIG. 12 illustrates an example of a span memory allocation algorithm according to an embodiment of the present invention.
  • FIG. 13 is a flowchart illustrating a procedure of a transaction processing method in a hybrid transactional memory system according to an embodiment of the present invention.
  • FIG. 2 is a block diagram illustrating a configuration of a transaction processing apparatus in a hybrid transactional memory system according to an embodiment of the present invention.
  • Referring to FIG. 2, a transaction processing apparatus 200 in a hybrid transactional memory system may be configured to include a transaction processing unit 210, a memory allocation unit 220, and a concurrency control unit 230.
  • When a transaction is started from a workload, the transaction processing unit 210 performs HTM processing by a hardware transactional memory (HTM) on the transaction, and performs STM processing by a software transactional memory (STM) on the transaction.
  • The transaction processing apparatus 200 of the present invention may be implemented in the form of an application.
  • When a command for starting the processing of a transaction is input by the workload on the application, the transaction processing unit 210 may perform the HTM processing and the STM processing of the transaction simultaneously.
  • The transaction processing unit 210 performs the HTM processing and the STM processing of transactions simultaneously, based on a flexible Bloom filter data structure, using HTM and STM technologies on an in-memory database in a multi-core environment, and can thereby increase the processing efficiency of large-scale DB transactions.
  • For transactions that are difficult to process with HTM, the transaction processing unit 210 may selectively perform STM processing or lock processing. Through this, the transaction processing unit 210 may quickly process any transaction regardless of its length.
  • The transaction processing unit 210 may be composed of an HTM processing unit 211, an STM processing unit 212, and a lock processing unit 213.
  • The memory allocation unit 220 allocates a memory area in the HTM and allocates a virtual memory area, different from the memory area, in the STM.
  • The memory allocation unit 220 allocates the memory area for performing the HTM processing in the HTM, and the HTM processing unit 211 in the transaction processing unit 210 may attempt the HTM processing in the memory area up to a predetermined number of retries.
  • The STM processing unit 212 in the transaction processing unit 210 may perform the STM processing on the virtual memory area.
  • In addition, the transaction processing unit 210 may process the transaction through a single global lock.
  • Specifically, the lock processing unit 213 in the transaction processing unit 210 may process, through a single global lock, transactions for which neither the HTM processing nor the STM processing succeeds within a predetermined number of retries (threshold), as sketched below.
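  • As an illustration only, the following sketch shows how such an HTM-first path with an STM fallback and a final single-global-lock fallback could look on an x86 processor with RTM support; the retry threshold of 5, the stm_execute() placeholder, and the spin lock used as the single global lock are assumptions for this sketch, not the claimed implementation.

```cpp
// Sketch: bounded HTM retries, then an STM fallback, then a single global lock.
// Requires an x86 CPU with RTM (compile with -mrtm).
#include <immintrin.h>
#include <atomic>

static std::atomic<bool> g_global_lock{false};    // single global fallback lock

// Hypothetical placeholder: a NOrec-style STM execution would run the body with
// read/write logging here; returning false forces the global-lock fallback.
static bool stm_execute(void (*body)()) { (void)body; return false; }

void run_transaction(void (*body)(), int max_retries = 5) {
    for (int attempt = 0; attempt < max_retries; ++attempt) {
        unsigned status = _xbegin();              // start a hardware transaction
        if (status == _XBEGIN_STARTED) {
            // Subscribe to the fallback lock so the HTM path never runs
            // concurrently with a lock-based writer.
            if (g_global_lock.load(std::memory_order_relaxed))
                _xabort(0xff);
            body();                               // transactional work
            _xend();                              // hardware commit
            return;
        }
        // status encodes the abort cause; here we simply retry up to the threshold.
    }
    if (stm_execute(body))                        // fallback path 1: STM
        return;
    while (g_global_lock.exchange(true, std::memory_order_acquire))
        ;                                         // fallback path 2: single global lock
    body();
    g_global_lock.store(false, std::memory_order_release);
}
```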
  • The memory allocation unit 220 may manage memory efficiently by allocating and releasing memory according to the size of the object associated with a transaction, in order to improve the transaction processing performance of the transaction processing unit 210.
  • The memory allocation unit 220 may determine the allocated memory area or the virtual memory area in consideration of the size of the object associated with the transaction.
  • For example, the memory allocation unit 220 allocates a memory area in consideration of the size of the object associated with a transaction; if the transaction cannot be HTM processed, it releases that memory area and allocates a virtual memory area for the STM processing, again in consideration of the object size.
  • The memory allocation unit 220 may allocate and release the memory area or the virtual memory area in a per-thread local cache or in a central cache, in consideration of the size of the object associated with the transaction.
  • For example, the memory allocation unit 220 allocates the memory area or the virtual memory area in the central cache when the object is larger than or equal to a predetermined size, and creates a per-thread local cache and allocates the memory area or the virtual memory area in that local cache when the object is smaller than the predetermined size; by performing garbage collection at regular intervals, an object whose memory area or virtual memory area was allocated in the local cache may be reallocated to the memory area or the virtual memory area in the central cache.
  • When the object is a small object (SO) smaller than the predetermined size, the memory allocation unit 220 may allocate and release the memory area for the HTM processing or the virtual memory area for the STM processing in the per-thread local cache.
  • When the object is a large object (LO) of the predetermined size or more, the memory allocation unit 220 allocates and releases the memory area for the HTM processing or the virtual memory area for the STM processing in the central cache.
  • In other words, the memory allocation unit 220 allocates and releases the memory area or the virtual memory area using a memory pool existing in shared memory, and for a small object (SO) it allocates and releases the memory area or the virtual memory area by creating a local cache for each thread.
  • Data objects can be moved from the local area to the central area when necessary. That is, the memory allocation unit 220 may periodically perform garbage collection (GC) to move memory from the local area to the central area.
  • In this way, efficient transaction processing in a multi-core in-memory environment may be provided through a memory management tool optimized for each data size.
  • The concurrency control unit 230 performs concurrency control between the HTM processing and the STM processing based on a Bloom filter.
  • The concurrency control unit 230 may efficiently detect conflicts between transactions using the flexible Bloom filter data structure.
  • The concurrency control unit 230 controls the HTM processing and the STM processing of a transaction so that they are performed simultaneously in time, using a Bloom filter having a flexible data structure, and can thereby improve the processing efficiency of large DB transactions on an in-memory database in a multi-core environment.
  • The concurrency control unit 230 may control the HTM processing and the STM processing of the transaction simultaneously while differentiating the processing priority given to the HTM processing and the STM processing for a unit time, so that conflicts between the HTM processing and the STM processing can easily be avoided.
  • For example, the concurrency control unit 230 may give the HTM processing a higher processing priority than the STM processing, so that the HTM processing is performed preferentially regardless of the STM processing.
  • The transaction processing unit 210 may not use the sequence lock, which is maintained for the processing with the lower priority, when performing the processing with the higher priority.
  • At the time when the processing with the lower priority among the HTM processing and the STM processing is completed, the concurrency control unit 230 validates it against the processing with the higher priority, and if the validation fails, the transaction processing unit 210 may perform the lower-priority processing again.
  • The transaction processing unit 210 maintains a snapshot-based sequence lock for the processing with the lower priority, and if the validation succeeds, it increases the sequence lock and then terminates the transaction.
  • In this way, the hybrid TM can increase transaction processing performance.
  • FIG. 3 is a diagram illustrating an overall flow of processing a transaction in a hybrid transactional memory system according to an embodiment of the present invention.
  • The hybrid transactional memory scheme devised in the present invention can operate in an environment consisting of 'LiteHTM', 'NorecSTM', and 'Single Lock', and a concurrency manager and a memory allocator can be used to perform this efficiently.
  • Referring to FIG. 3, a transaction processing apparatus 300 in a hybrid transactional memory system may be configured to include an HTM processing unit 310, an STM processing unit 320, a single lock processing unit 330, and a memory allocation unit 340.
  • The HTM processing unit 310 is a module that manages the HTM processing of a transaction and provides a basic HTM processing environment.
  • The HTM processing unit 310 may operate as 'LiteHTM (RTM)' and may include an HTM execution unit 311 and a first concurrency control unit 312.
  • The HTM processing unit 310 processes a transaction within a predetermined number of retries, and when the transaction keeps aborting within that number of retries or is aborted by an unexpected cause, the STM processing unit 320, as a prepared fallback path, may process the transaction.
  • The HTM processing unit 310 may include an HTM processing algorithm that operates when a transaction is processed simultaneously with the STM processing unit 320.
  • The STM processing unit 320 is a module for processing a transaction with STM, and may operate as 'NOrec STM', which is one example that shows the best performance among STMs.
  • The STM processing unit 320 may include an STM execution unit 321 and a second concurrency control unit 322.
  • The first concurrency control unit 312 and the second concurrency control unit 322 can each use a Bloom filter based conflict detection algorithm to handle conflict problems for transactions operating in the HTM and STM environments.
  • The first concurrency control unit 312 and the second concurrency control unit 322 may set optimal Bloom filter parameters for each workload to maximize the performance of the Bloom filter (i.e., low storage overhead and a low false positive rate), for example as in the sizing sketch below.
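  • The following is a minimal sketch of such per-workload sizing using the standard Bloom filter formulas m = -n·ln(p)/(ln 2)^2 and k = (m/n)·ln 2; whether these coincide exactly with Equation 4 of the present disclosure is an assumption, and the parameter names are illustrative.

```cpp
// Sketch: choose the bit-array size m and hash-function count k for a workload
// from its expected element count n and a target false-positive rate p.
#include <cmath>
#include <cstddef>

struct BloomParams {
    std::size_t m_bits;     // size of the bit array
    std::size_t k_hashes;   // number of hash functions
};

inline BloomParams size_bloom_filter(std::size_t n, double p) {
    const double ln2 = std::log(2.0);
    const double m = -static_cast<double>(n) * std::log(p) / (ln2 * ln2);
    const double k = (m / static_cast<double>(n)) * ln2;
    return { static_cast<std::size_t>(std::ceil(m)),
             static_cast<std::size_t>(std::ceil(k)) };
}

// Example: size_bloom_filter(1000, 0.01) yields roughly 9586 bits and 7 hashes.
```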
  • The single lock processing unit 330 processes a transaction through a single global lock.
  • The memory allocation unit 340 allocates a memory area for the HTM processing or a virtual memory area for the STM processing, and manages memory allocation and release optimized for a multi-core environment.
  • When a transaction is started, the HTM processing unit 310 prepares execution through the HTM execution unit 311 and is allocated a necessary memory area by the memory allocation unit 340.
  • While the transaction is being processed in the HTM environment, if there is a transaction operating in the STM environment, the first concurrency control unit 312 and the second concurrency control unit 322 may perform concurrency control between the HTM processing and the STM processing.
  • The HTM processing unit 310 may attempt the HTM processing of the transaction within a predetermined number of retries in the HTM environment.
  • When the HTM processing of the transaction is not possible within the predetermined number of retries in the HTM environment, or fails due to an unexpected cause, the HTM processing unit 310 moves the transaction to the STM processing unit 320, the prepared fallback path, and the STM processing of the transaction is then performed by the STM processing unit 320.
  • The STM processing unit 320 receives available memory (a virtual memory area) through the memory allocation unit 340, and the transaction can be processed in the STM environment within a certain number of retries.
  • If the transaction still cannot be processed, the single lock processing unit 330 may perform the transaction processing through a single global lock.
  • Most transaction processing is performed by the HTM execution unit 311 and the STM processing unit 320, and only a transaction that cannot be processed by the hybrid transactional memory is processed with the single lock.
  • FIG. 4 is a diagram illustrating a process of performing concurrency control during transaction processing according to an embodiment of the present invention.
  • Referring to FIG. 4, the transaction processing apparatus in the hybrid transactional memory system may assign a higher priority to a transaction processed on the HTM than to a transaction processed on the STM.
  • Accordingly, the transaction processed on the HTM can be processed without interference from the STM, and the transaction processed on the STM may maintain consistency through a Bloom filter and a validation process.
  • The 'NorecSTM' (hereinafter referred to as the STM processing unit) in the transaction processing apparatus may perform concurrency control by maintaining a snapshot-based seqlock.
  • The STM processing unit controls a 'write' to the data through the seqlock, while a 'read' can be performed without acquiring the seqlock.
  • The STM processing unit may apply a Bloom filter based concurrency control technique for the validation of 'read'/'write' operations.
  • When performing a transaction 'write', the STM processing unit records it in the Bloom filter, the data structure used for concurrency control with the HTM processing unit (LiteHTM), which can prevent a 'false negative'.
  • The STM processing unit may perform validation against transactions performed through the HTM processing unit at the time when the execution of its transaction is completed.
  • If the validation returns false, the STM processing unit may execute the transaction again; if the validation returns true, the STM processing unit may increase the sequence lock by '1' and request that the transaction be terminated, as in the sketch below.
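  • A minimal sketch of such a snapshot-based seqlock commit is shown below, assuming a NOrec-style value-based validation; the Bloom filter cross-check against HTM transactions and the read/write instrumentation are omitted, and the names used here are illustrative rather than the disclosed implementation.

```cpp
// Sketch: commit path of a NOrec-style STM with a snapshot-based seqlock.
// The writer raises the seqlock to an odd value while writing back and then to
// the next even value, which corresponds to "increasing the sequence lock" at
// termination in the text.
#include <atomic>
#include <cstdint>
#include <utility>
#include <vector>

static std::atomic<uint64_t> g_seqlock{0};         // even: no writer active

using ReadLog  = std::vector<std::pair<void*, uint64_t>>;   // (address, seen value)
struct WriteEntry { void* addr; uint64_t value; };
using WriteLog = std::vector<WriteEntry>;

// Value-based validation: every logged read must still hold the value we saw.
static bool validate_read_set(const ReadLog& reads) {
    for (const auto& r : reads)
        if (*static_cast<uint64_t*>(r.first) != r.second) return false;
    return true;
}

// Wait for any in-flight writer, then revalidate; returns the new even
// snapshot, or UINT64_MAX when a conflict means the transaction must re-run.
static uint64_t wait_and_validate(const ReadLog& reads) {
    for (;;) {
        uint64_t s = g_seqlock.load();
        if (s & 1) continue;                       // a committer is writing back
        if (!validate_read_set(reads)) return UINT64_MAX;
        if (g_seqlock.load() == s) return s;       // snapshot still consistent
    }
}

// 'snapshot' is the (even) seqlock value observed when the transaction began.
bool stm_commit(uint64_t snapshot, const ReadLog& reads, const WriteLog& writes) {
    while (!g_seqlock.compare_exchange_strong(snapshot, snapshot + 1)) {
        snapshot = wait_and_validate(reads);       // someone committed before us
        if (snapshot == UINT64_MAX) return false;  // caller re-executes the transaction
    }
    for (const WriteEntry& w : writes)
        *static_cast<uint64_t*>(w.addr) = w.value; // write back the buffered writes
    g_seqlock.store(snapshot + 2);                 // bump the seqlock and terminate
    return true;
}
```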
  • As described above, a transaction processing apparatus in a hybrid transactional memory system may manage the concurrency control of the HTM and the STM based on a Bloom filter.
  • The Bloom filter is a technique devised by Burton Howard Bloom in 1970 and refers to a probabilistic data structure used to check whether an element belongs to a set.
  • In a Bloom filter, a 'false positive' may occur, in which an element that does not actually belong to the set is judged to belong to it, but a 'false negative', in which an element that belongs to the set is judged not to belong to it, cannot occur.
  • The transaction processing apparatus of the present invention can manage the concurrency control of the HTM and the STM by using such a Bloom filter. Although it is possible to add elements to the set of the Bloom filter, it is impossible to delete elements from the set, and as the number of elements in the set increases, the probability of a false positive may increase.
  • FIG. 5 is a view showing the structure of a bloom filter in an embodiment of the present invention.
  • Referring to FIG. 5, a transaction processing apparatus in a hybrid transactional memory system may perform concurrency control based on a Bloom filter.
  • The Bloom filter may have a bit array structure V with a size of m bits. The Bloom filter uses k different hash functions, and each hash function returns a value between 0 and m-1 for an input element (see Equation 1).
  • The Bloom filter supports an operation for adding an element to the set and an operation for checking whether an element belongs to the set; on the other hand, there is no operation for deleting an element.
  • To add an element, the Bloom filter calculates the k hash values for the element and then sets the bit corresponding to each hash value to 1 (see Equation 2).
  • When the Bloom filter examines an element x, it uses all k hash results for x as indices into the array V; if the corresponding array values are all 1, it returns 'true'. If at least one of the indexed array values is not 1, the Bloom filter determines that the element is not included in the set and returns 'false' (see Equation 3).
  • The transaction processing apparatus of the present invention can increase transaction processing efficiency by applying Equation 4 to reduce the false positive probability of the Bloom filter.
  • In Equation 4, 'k' is the number of hash functions, 'n' is the size of the set, and 'm' is the bit size of the Bloom filter; a minimal Bloom filter matching this description is sketched below.
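  • The sketch below mirrors the add and membership-check operations described above; double hashing of the address stands in for the k hash functions h_i(x), which are not specified in the text, and the standard relation p ≈ (1 - e^(-kn/m))^k is assumed to be the false-positive probability that Equation 4 minimizes.

```cpp
// Sketch: an m-bit Bloom filter with k hash functions over memory addresses.
#include <cstdint>
#include <functional>
#include <vector>

class BloomFilter {
public:
    BloomFilter(std::size_t m_bits, std::size_t k_hashes)
        : bits_(m_bits, false), k_(k_hashes) {}

    // "Add": set V[h_i(x)] = 1 for every hash function i (cf. Equation 2).
    void add(const void* addr) {
        for (std::size_t i = 0; i < k_; ++i)
            bits_[index(addr, i)] = true;
    }

    // "Check": true only if V[h_i(x)] == 1 for all i (cf. Equation 3);
    // false positives are possible, false negatives are not.
    bool possibly_contains(const void* addr) const {
        for (std::size_t i = 0; i < k_; ++i)
            if (!bits_[index(addr, i)]) return false;
        return true;
    }

private:
    // i-th hash value in [0, m-1] via double hashing (cf. Equation 1).
    std::size_t index(const void* addr, std::size_t i) const {
        auto x = reinterpret_cast<std::uintptr_t>(addr);
        std::size_t h1 = std::hash<std::uintptr_t>{}(x);
        std::size_t h2 = std::hash<std::uintptr_t>{}(~x + 0x9E3779B9u) | 1;
        return (h1 + i * h2) % bits_.size();
    }

    std::vector<bool> bits_;   // the m-bit array V
    std::size_t k_;            // number of hash functions
};
```

  • In the flow of FIG. 6, add() corresponds to recording a transaction's accesses in the Bloom filter (step 620) and possibly_contains() to the membership check of step 630 that decides whether the thread must wait or validate.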
  • FIG. 6 is a diagram illustrating a flow of performing concurrency control based on a bloom filter according to an embodiment of the present invention.
  • Referring to FIG. 6, a transaction processing apparatus in a hybrid transactional memory system may process a transaction by HTM or by STM (step 610), and transactions processed by either STM or HTM are all recorded in the Bloom filter (step 620).
  • The transaction processing apparatus then checks whether the Bloom filter already holds the transaction data (step 630).
  • That is, a comparison operation may be performed through the Bloom filter to check whether the data on which the next transaction operation is to be performed has already been touched by another transaction.
  • If, in step 630, the elements of the transaction data are not present in the Bloom filter (i.e., the conflicting operation has not already been performed), the data is recorded without making the thread wait or performing validation (step 640).
  • If step 630 determines that the data element exists in the Bloom filter (i.e., the operation has already been performed), 'true' is returned (step 650), and the data is recorded and the transaction is executed after the thread waits or validation is performed (step 660).
  • FIG. 7 is a diagram illustrating a memory allocator in accordance with one embodiment of the present invention.
  • Referring to FIG. 7, the memory allocator (hereinafter, the memory manager) may allocate and release memory in a thread cache 710 or a central cache 720 according to the size of the object associated with a transaction.
  • The central cache 720 may include a central free list 721.
  • The memory manager manages large objects (LOs) in a memory pool existing in the central cache (shared memory) 720, while for small objects (SOs) a thread-local cache 710 may be created for each thread and used for allocation.
  • The memory manager may classify an object having a size greater than or equal to 32 KB as a large object (LO) and an object smaller than 32 KB as a small object (SO).
  • Data objects are moved from the local area to the central area when necessary, and the memory manager may periodically move memory from the local area to the central area by performing garbage collection (GC).
  • For an LO, the memory manager can use a page-level allocator (pages are 4 KB aligned memory regions) to allocate memory regions directly in the central page heap 730 without allocating them to local regions.
  • The memory manager allocates large objects (LOs) by the number of pages and sorts them in page order.
  • In addition, the memory manager may divide consecutive pages into a series of small objects of the same size; for example, one 4 KB page may be divided into 32 objects of 128 bytes each.
  • FIG. 8 illustrates a structure of a free list of a local cache according to an embodiment of the present invention.
  • Referring to FIG. 8, each small object size may be mapped to one of approximately 170 allocatable size-classes. For example, all allocations in the range of 961 to 1024 bytes are treated as 1024 bytes. The size-classes are spaced 8 bytes, 16 bytes, 32 bytes, and so on apart to distinguish small sizes, and the maximum spacing is 256 bytes.
  • The local cache contains one free list per size-class, and each list holds the free objects available for that size-class, as in the sketch below.
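  • The following is a minimal sketch of that mapping and of the per-class free lists; the spacing rule used here (8-byte steps for small sizes, then wider steps capped at 256 bytes) is an illustrative approximation of the roughly 170 classes mentioned above, not the exact class table of the disclosure.

```cpp
// Sketch: rounding a request to its size-class and one free list per class
// in the thread-local cache.
#include <array>
#include <cstddef>

constexpr std::size_t kNumClasses = 170;          // approximate class count

// Round a requested size up to its size-class,
// e.g. any request of 961..1024 bytes becomes 1024 bytes.
inline std::size_t size_class_of(std::size_t size) {
    std::size_t step;
    if      (size <= 128)  step = 8;              // fine-grained classes for tiny sizes
    else if (size <= 1024) step = 64;
    else                   step = 256;            // maximum spacing: 256 bytes
    return (size + step - 1) / step * step;
}

struct FreeObject { FreeObject* next; };          // free objects are linked in place

struct ThreadCache {
    // One singly linked free list per size-class index; the mapping from the
    // rounded size to an index is omitted here.
    std::array<FreeObject*, kNumClasses> free_list{};
};
```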
  • FIG. 9 illustrates an example of a small object allocation algorithm for a small object of less than a predetermined size according to an embodiment of the present invention.
  • Referring to FIG. 9, the memory manager maps the requested size to the corresponding size-class and looks at the corresponding free list in the thread cache of the current thread.
  • If the free list is not empty, the memory manager removes the first object of the free list and returns it, without acquiring any lock in this process. Since a lock/unlock pair takes roughly 100 nanoseconds on a 2.8 GHz Xeon, avoiding it provides good memory allocation speed.
  • If the local free list is empty, the memory manager may obtain a set of objects from the central heap free list corresponding to the size-class.
  • The central heap free list may be shared among all threads.
  • The memory manager then places the set of objects in the thread-local free list and returns one of the newly fetched objects to the application.
  • When the central heap free list is also empty, the memory manager allocates consecutive pages from the central page allocator, divides them into a bundle of objects of the size-class, adds them to the central heap free list, and can then move some of the new objects to the thread-local free list for use.
  • In summary, the memory manager measures the size of each memory allocation request, distinguishes between SO and LO, and calls the appropriate allocation function, as in the sketch below.
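  • A sketch of that small-object path follows: a lock-free pop from the current thread's free list, a refill from the shared central free list, and, when that is empty too, carving a fresh page into equal-sized objects. The toy 8-byte class mapping, the batch size of 8, and the use of ::operator new as the page source are assumptions for this sketch only.

```cpp
// Sketch of the small-object allocation path of FIG. 9.
#include <cstddef>
#include <mutex>

struct FreeObject { FreeObject* next; };

// Toy mapping: 8-byte classes only, covering requests up to 8 * 170 = 1360 bytes.
constexpr std::size_t kNumClasses = 171;
inline std::size_t class_index(std::size_t size) { return (size + 7) / 8; }
inline std::size_t class_size(std::size_t idx)   { return idx * 8; }

thread_local FreeObject* tls_free_list[kNumClasses] = {};   // per-thread, lock-free

struct CentralFreeList { std::mutex lock; FreeObject* head = nullptr; };
CentralFreeList central_free_list[kNumClasses];             // shared by all threads

void* alloc_page() { return ::operator new(4096); }         // stand-in for the page heap

void* allocate_small(std::size_t size) {
    if (size == 0) size = 1;
    std::size_t idx = class_index(size);
    // Fast path: pop from the thread-local free list without taking any lock.
    if (FreeObject* obj = tls_free_list[idx]) {
        tls_free_list[idx] = obj->next;
        return obj;
    }
    std::lock_guard<std::mutex> guard(central_free_list[idx].lock);
    if (!central_free_list[idx].head) {
        // Central list empty: carve one 4 KB page into objects of this class
        // (e.g. 4 KB / 128 B = 32 objects).
        std::size_t objsz = class_size(idx);
        char* page = static_cast<char*>(alloc_page());
        for (std::size_t off = 0; off + objsz <= 4096; off += objsz) {
            auto* o = reinterpret_cast<FreeObject*>(page + off);
            o->next = central_free_list[idx].head;
            central_free_list[idx].head = o;
        }
    }
    // Return one object and move a small batch into the thread cache.
    FreeObject* ret = central_free_list[idx].head;
    central_free_list[idx].head = ret->next;
    for (int i = 0; i < 8 && central_free_list[idx].head; ++i) {
        FreeObject* o = central_free_list[idx].head;
        central_free_list[idx].head = o->next;
        o->next = tls_free_list[idx];
        tls_free_list[idx] = o;
    }
    return ret;
}
```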
  • FIG. 10 is a diagram illustrating a structure of a free list of a central heap according to one embodiment of the present invention.
  • Referring to FIG. 10, a large object (LO) having a size of 32 KB or more may be managed by the central heap after its size is rounded up to a multiple of the 4 KB page size.
  • The central heap is composed of an array of free lists; for example, the array may consist of 256 entries, where the k-th entry holds free runs of k consecutive pages and runs of 256 pages or more are handled by the last, remaining free list.
  • To allocate n pages, the memory manager refers to the n-th free list. If that free list is empty, it looks at the next free list, and so on up to the last free list; if this also fails, memory is taken from the system. If the allocation of n pages is satisfied by a run of pages longer than n, the remainder is returned to the appropriate free list of the page heap, as in the sketch below.
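  • A self-contained sketch of that search-and-split behaviour is shown below; the PageRun record, the bucket layout, and the ::operator new growth path are simplifications of the span and page-heap structures described for FIGS. 10 to 12.

```cpp
// Sketch of the page-heap allocation of FIG. 10: list n holds free runs of
// exactly n pages, the last list holds runs of 256 pages or more.
#include <array>
#include <cstddef>
#include <list>

constexpr std::size_t kPageSize   = 4096;
constexpr std::size_t kMaxBuckets = 256;          // buckets 1..255 plus the "large" bucket

struct PageRun { char* start; std::size_t pages; };

std::array<std::list<PageRun>, kMaxBuckets + 1> page_free_lists;

inline std::size_t bucket_of(std::size_t pages) {
    return pages < kMaxBuckets ? pages : kMaxBuckets;   // >= 256 pages -> last list
}

PageRun allocate_pages(std::size_t n) {
    for (std::size_t b = bucket_of(n); b <= kMaxBuckets; ++b) {
        auto& lst = page_free_lists[b];
        for (auto it = lst.begin(); it != lst.end(); ++it) {
            if (it->pages < n) continue;          // only possible in the last list
            PageRun run = *it;
            lst.erase(it);
            if (run.pages > n) {                  // split: keep n pages, return the rest
                PageRun rest{ run.start + n * kPageSize, run.pages - n };
                page_free_lists[bucket_of(rest.pages)].push_back(rest);
                run.pages = n;
            }
            return run;
        }
    }
    // Every free list failed: take fresh memory from the system.
    return PageRun{ static_cast<char*>(::operator new(n * kPageSize)), n };
}
```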
  • FIG. 11 is a diagram for describing object management using a span according to an embodiment of the present invention.
  • Referring to FIG. 11, the heap managed by the memory manager may be organized as bundles of pages.
  • A series of contiguous pages is represented by a span, and memory allocation and release are performed in units of spans.
  • When memory is released, a span may be one of the entries of the page heap's linked list; when memory is allocated, a span can be either a large object handed to the application or a series of pages divided into consecutive small objects.
  • The size-class of those objects may be recorded in the span.
  • A central array, indexed by page number, can be used to find the span to which a page belongs.
  • In the example of FIG. 11, span 'a' may occupy two pages, span 'b' one page, span 'c' five pages, and span 'd' three pages.
  • Since a 32-bit address space divided into 4 KB pages contains 2^20 pages, the central array fits in about 4 MB, which is adequate, as in the sketch below.
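  • The sketch below shows such a central page map as a flat array of span pointers; the Span fields and the registration helper are illustrative, and a 64-bit implementation would normally replace the flat array with a radix tree.

```cpp
// Sketch of the central array of FIG. 11: one entry per 4 KB page, pointing at
// the span that owns the page (assuming a 32-bit address space).
#include <cstddef>
#include <cstdint>
#include <vector>

struct Span {
    std::size_t first_page = 0;   // index of the first page in the span
    std::size_t num_pages  = 0;   // e.g. a = 2, b = 1, c = 5, d = 3 in FIG. 11
    std::size_t size_class = 0;   // recorded when the span is carved into small objects
};

constexpr std::size_t kPageShift = 12;                    // 4 KB pages
constexpr std::size_t kNumPages  = std::size_t{1} << 20;  // 2^32 / 4 KB pages

std::vector<Span*> page_map(kNumPages, nullptr);          // ~4 MB with 4-byte pointers

inline void register_span(Span* s) {
    for (std::size_t i = 0; i < s->num_pages; ++i)
        page_map[s->first_page + i] = s;                  // every page maps to its span
}

inline Span* span_of(const void* ptr) {
    auto page = reinterpret_cast<std::uintptr_t>(ptr) >> kPageShift;
    return page_map[page];        // find the span to which this page belongs
}
```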
  • FIG. 12 illustrates an example of a span memory allocation algorithm according to an embodiment of the present invention.
  • Referring to FIG. 12, the span data structure may be organized as a linked list.
  • When memory is released, the memory manager can calculate the page number and search the central array to find the matching span object.
  • For a small object, the memory manager can put the object in the appropriate free list in the thread cache of the current thread; if the thread cache exceeds its expected size (for example, a default value of 2 MB), garbage collection is run and unused objects are moved from the thread cache to the central free lists.
  • For a large object, the span tells the memory manager the range of pages that the object occupies; for example, suppose the range of pages is [p, q]. When at least one of the adjacent spans is free, it can be merged with the [p, q] span, and the merged span can then be inserted into the appropriate free list of the page heap, as in the sketch below.
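  • The release path is sketched below in two self-contained pieces: a thread-cache budget (the 2 MB default mentioned above) that triggers a garbage collection back to a central list, and the merging of a freed [p, q] page run with free neighbouring runs; the containers used here are illustrative, not the disclosed structures.

```cpp
// Sketch of the release path of FIG. 12.
#include <cstddef>
#include <map>
#include <utility>
#include <vector>

// (1) Small objects: thread cache with a 2 MB budget and a simple "GC".
constexpr std::size_t kThreadCacheBudget = 2 * 1024 * 1024;
thread_local std::vector<std::pair<void*, std::size_t>> tls_cache;   // (object, size)
thread_local std::size_t tls_cached_bytes = 0;
std::vector<std::pair<void*, std::size_t>> central_list;             // lock omitted

void release_small(void* obj, std::size_t size) {
    tls_cache.emplace_back(obj, size);
    tls_cached_bytes += size;
    if (tls_cached_bytes > kThreadCacheBudget) {      // cache too big: run the GC
        central_list.insert(central_list.end(), tls_cache.begin(), tls_cache.end());
        tls_cache.clear();
        tls_cached_bytes = 0;
    }
}

// (2) Large objects: merge a freed page run [p, q] with free neighbouring runs
//     before handing it back to the page heap.
std::map<std::size_t, std::size_t> free_runs;         // first page -> run length

void release_pages(std::size_t p, std::size_t q) {    // the object occupied pages [p, q]
    std::size_t pages = q - p + 1;
    auto next = free_runs.find(q + 1);                 // free run starting right after q?
    if (next != free_runs.end()) { pages += next->second; free_runs.erase(next); }
    auto prev = free_runs.lower_bound(p);
    if (prev != free_runs.begin()) {
        --prev;                                        // free run ending right before p?
        if (prev->first + prev->second == p) {
            p = prev->first;
            pages += prev->second;
            free_runs.erase(prev);
        }
    }
    free_runs[p] = pages;   // the page heap would pick its free list from this length
}
```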
  • Hereinafter, the workflow of the transaction processing apparatus 200 according to the embodiments of the present invention will be described in detail with reference to FIG. 13.
  • FIG. 13 is a flowchart illustrating a procedure of a transaction processing method in a hybrid transactional memory system according to an embodiment of the present invention.
  • The transaction processing method in the hybrid transactional memory system according to the present embodiment may be performed by the transaction processing apparatus 200 described above.
  • In step 1310, the transaction processing apparatus 200 checks whether a transaction has started from a workload. If a transaction has started, in step 1320 the transaction processing apparatus 200 performs HTM processing by the HTM on the transaction, and in step 1330 it performs STM processing by the STM on the transaction. In step 1340, the transaction processing apparatus 200 performs concurrency control between the HTM processing and the STM processing based on the Bloom filter.
  • The transaction processing apparatus 200 of the present invention may be implemented in the form of an application, and when a command for starting the processing of a transaction is input by a workload on the application, the HTM processing and the STM processing of the transaction may be performed simultaneously.
  • The transaction processing apparatus 200 may allocate a memory area for performing the HTM processing in the HTM and perform the HTM processing in the memory area.
  • The transaction processing apparatus 200 may allocate a virtual memory area, different from the memory area, in the STM, and perform the STM processing on the virtual memory area.
  • The transaction processing apparatus 200 may, in consideration of the size of the object associated with the transaction, allocate and release the memory area or the virtual memory area in a per-thread local cache or in the central cache.
  • The transaction processing apparatus 200 may manage memory efficiently and improve transaction processing performance by performing memory allocation and release according to the size of the object associated with a transaction.
  • The transaction processing apparatus 200 performs the HTM processing and the STM processing of transactions simultaneously, based on a flexible Bloom filter data structure, using HTM and STM technology on an in-memory database in a multi-core environment, and can thereby improve the processing efficiency of large DB transactions.
  • The transaction processing apparatus 200 controls the HTM processing and the STM processing of the transaction simultaneously, while differentiating the processing priority given to the HTM processing and the STM processing for a unit time, so that conflicts can easily be avoided.
  • In this way, the hybrid TM can increase transaction processing performance.
  • The method according to an embodiment of the present invention can be implemented in the form of program instructions that can be executed by various computer means and recorded in a computer readable medium.
  • The computer readable medium may include program instructions, data files, data structures, and the like, alone or in combination.
  • The program instructions recorded on the medium may be those specially designed and constructed for the purposes of the embodiments, or they may be of the kind well known and available to those having skill in the computer software arts.
  • Examples of computer-readable recording media include magnetic media such as hard disks, floppy disks, and magnetic tape, optical media such as CD-ROMs and DVDs, magneto-optical media such as floptical disks, and hardware devices specially configured to store and execute program instructions, such as ROM, RAM, and flash memory.
  • Examples of program instructions include not only machine code generated by a compiler, but also high-level language code that can be executed by a computer using an interpreter or the like.
  • The hardware devices described above may be configured to operate as one or more software modules to perform the operations of the embodiments, and vice versa.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Memory System Of A Hierarchy Structure (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention relates to a method and a system for processing a transaction in a hybrid transactional memory system. According to an embodiment of the invention, a method for processing a transaction in a hybrid transactional memory system comprises: performing hardware transactional memory (HTM) processing on a transaction by means of an HTM when the transaction is started by a workload; and performing software transactional memory (STM) processing on the transaction by means of an STM.
PCT/KR2017/014991 2017-04-21 2017-12-19 Method and device for processing a transaction in a hybrid transactional memory system WO2018194237A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR1020170051677A KR101885030B1 (ko) 2017-04-21 2017-04-21 Transaction processing method and transaction processing apparatus in a hybrid transactional memory system
KR10-2017-0051677 2017-04-21

Publications (1)

Publication Number Publication Date
WO2018194237A1 true WO2018194237A1 (fr) 2018-10-25

Family

ID=63251785

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2017/014991 WO2018194237A1 (fr) 2017-12-19 Method and device for processing a transaction in a hybrid transactional memory system

Country Status (2)

Country Link
KR (1) KR101885030B1 (fr)
WO (1) WO2018194237A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112148930A (zh) * 2020-09-28 2020-12-29 上海交通大学 RTM-based method, system and medium for transaction processing in a graph database system

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102184841B1 (ko) * 2019-06-27 2020-11-30 전북대학교산학협력단 Transaction recovery method and transaction recovery apparatus in a hybrid transactional memory system
KR102150597B1 (ko) * 2019-07-23 2020-09-01 전북대학교산학협력단 Method of operating a hybrid transactional memory system providing an optimal retry policy, and hybrid transactional memory system

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070288902A1 (en) * 2006-06-09 2007-12-13 Sun Microsystems, Inc. Replay debugging
KR20080076981A (ko) * 2005-12-30 2008-08-20 인텔 코오퍼레이션 Unbounded transactional memory system
KR20160113207A (ko) * 2014-03-26 2016-09-28 인텔 코포레이션 Enabling maximum concurrency in a hybrid transactional memory system
KR20160113205A (ko) * 2014-03-26 2016-09-28 인텔 코포레이션 Software replayer for transactional memory programs

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7395382B1 (en) * 2004-08-10 2008-07-01 Sun Microsystems, Inc. Hybrid software/hardware transactional memory

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20080076981A (ko) * 2005-12-30 2008-08-20 인텔 코오퍼레이션 Unbounded transactional memory system
US20070288902A1 (en) * 2006-06-09 2007-12-13 Sun Microsystems, Inc. Replay debugging
KR20160113207A (ko) * 2014-03-26 2016-09-28 인텔 코포레이션 Enabling maximum concurrency in a hybrid transactional memory system
KR20160113205A (ko) * 2014-03-26 2016-09-28 인텔 코포레이션 Software replayer for transactional memory programs

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
YOON, MIN: "Hardware Transactional Memory based on Conflict Prediction and Retry Policy in Multi-core In-Memory Databases", DOCT. THESIS DEPARTM. OF COMPUTER ENGINEERING CHONBUK NAT. UNIVERSITY, 22 February 2017 (2017-02-22), pages 1 - 148 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112148930A (zh) * 2020-09-28 2020-12-29 上海交通大学 RTM-based method, system and medium for transaction processing in a graph database system

Also Published As

Publication number Publication date
KR101885030B1 (ko) 2018-08-02

Similar Documents

Publication Publication Date Title
US8024505B2 (en) System and method for optimistic creation of thread local objects in a virtual machine environment
JP2941947B2 (ja) 非同期に順序付けされた動作を行うコンピュータ方法及び装置
JP4917138B2 (ja) オブジェクト最適配置装置、オブジェクト最適配置方法、及びオブジェクト最適配置プログラム
WO2018194237A1 (fr) Procédé et dispositif de traitement de transaction dans un système de mémoire transactionnelle hybride
US4914570A (en) Process distribution and sharing system for multiple processor computer system
WO2012109879A1 (fr) Procédé, dispositif et système permettant de placer des données en mémoire cache dans un système multinoeud
WO2012111905A2 (fr) Dispositif et procédé de commande de cluster de mémoire distribuée utilisant mapreduce
WO2013042880A2 (fr) Procédé et dispositif de mémorisation de données dans une mémoire flash au moyen d'un mappage d'adresse pour prendre en charge diverses tailles de bloc
JPS6341100B2 (fr)
WO2019212182A1 (fr) Appareil et procédé de gestion d'une ressource partageable dans un processeur multicœur
WO2022124720A1 (fr) Procédé de détection d'erreur de la mémoire de noyau du système d'exploitation en temps réel
Cranor et al. The UVM virtual memory system
CN1226023A (zh) 加载/加载检测和重定序方法
WO2012159436A1 (fr) Procédé et dispositif d'ajustement de partitions de disque sous windows
JP3360933B2 (ja) 情報処理システムにおける記憶制御方法および記憶制御装置
US11016883B2 (en) Safe manual memory management
WO2022124507A1 (fr) Système informatique permettant de mélanger un schéma de récupération de mémoire basé sur une époque et un schéma de récupération de mémoire basé sur un pointeur, et son procédé
WO2016182255A1 (fr) Dispositif électronique et procédé de fusion de pages associé
JP3453761B2 (ja) アドレス変換方式
WO2022215783A1 (fr) Procédé et dispositif de commande pour la détection de logiciels rançonneurs dans un ssd
JPH0444140A (ja) 仮想メモリ制御方法
JPS603229B2 (ja) 情報処理方式
WO2017188484A1 (fr) Procédé de gestion de mémoire, programme informatique correspondant, et support d'enregistrement correspondant
JP2787107B2 (ja) バッファ制御方式及び装置
JPS6122824B2 (fr)

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17906680

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17906680

Country of ref document: EP

Kind code of ref document: A1