WO2013161076A1 - Database management system, computer, and database management method - Google Patents
Database management system, computer, and database management method
- Publication number
- WO2013161076A1 (PCT/JP2012/061436)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- context
- thread
- task
- query
- threads
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/24—Querying
- G06F16/245—Query processing
- G06F16/2453—Query optimisation
- G06F16/24534—Query rewriting; Transformation
- G06F16/24542—Plan optimisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/24—Querying
- G06F16/245—Query processing
- G06F16/2453—Query optimisation
- G06F16/24532—Query optimisation of parallel queries
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/24—Querying
- G06F16/245—Query processing
- G06F16/2455—Query execution
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/48—Program initiating; Program switching, e.g. by interrupt
- G06F9/4806—Task transfer initiation or dispatching
- G06F9/4843—Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
Definitions
- the present invention relates to data management technology.
- DB: database
- DBMS: database management system
- a technique disclosed in Patent Document 1 is known as a technique for shortening the data read waiting time in the processing of one query.
- the DBMS generates a plan (hereinafter referred to as a query execution plan) in which a plurality of database operations (DB operations or processing steps) necessary for executing a query are combined.
- a task for executing a step is dynamically generated, and data read requests are multiplexed by executing the tasks in parallel.
- any execution environment such as a process or thread managed by the OS or a pseudo process or pseudo thread implemented by an application or middleware can be used for task implementation.
- tens of thousands of tasks may be generated for one query processing.
- when a task is implemented by a thread (or process), tens of thousands of threads are generated.
- any processor core may then update a thread's management structure, which increases the overhead of managing thread execution. As a result, there is a problem that the execution time of the query increases.
- an object of the present invention is to use a plurality of processor cores in the DBMS and reduce the thread management overhead.
- the DBMS is realized by a computer having processor cores and manages the DB.
- the DBMS includes a query reception unit that receives a query to the DB, a query execution plan generation unit that generates a query execution plan including the processing steps necessary to execute the received query and the execution procedures of those processing steps, and a query execution unit that executes the received query based on the generated query execution plan, dynamically generating tasks for executing the processing steps.
- in executing the accepted query, the query execution unit executes tasks in a plurality of threads executed by the processor cores, and executes a plurality of tasks in one thread executed by a processor core.
- the query execution unit may dynamically generate a task for executing the database operation and execute the dynamically generated task in executing the query.
- the query execution unit (a) generates a task for executing a database operation, and (b) executes the generated task, issuing a data read request to the DB for the data necessary for the database operation corresponding to the task.
- the context includes first information indicating which of the one or more processing steps represented by the query execution plan is the processing step at which a newly generated task starts execution, second information on the access destination of the data required for the processing step indicated by the first information, and third information on the data necessary for the newly generated task to generate a result.
- one thread executes a plurality of tasks, and a query is executed by a plurality of such threads, so that a plurality of processor cores are used while the thread management overhead is reduced. As a result, the query execution time can be shortened.
- FIG. 1 is a diagram for explaining the outline of the DBMS according to the first embodiment.
- FIG. 2 is a diagram for explaining execution of a query in the DBMS according to the first embodiment.
- FIG. 3 is a configuration diagram of the computer system according to the first embodiment.
- FIG. 4 is a diagram for explaining the definitions of the DB tables and indexes according to the first embodiment.
- FIG. 5 is a diagram illustrating an example of a DB Part table according to the first embodiment.
- FIG. 6 is a diagram illustrating an example of the line item table of the DB according to the first embodiment.
- FIG. 7 is a schematic diagram illustrating an example of a data structure of a Part index and a Part table in the DB according to the first embodiment.
- FIG. 8 is a diagram illustrating an example of a DB query according to the first embodiment.
- FIG. 9 is a diagram illustrating an example of a query execution plan according to the first embodiment.
- FIG. 10 is a diagram illustrating an example of a data structure of task management information according to the first embodiment.
- FIG. 11 is a diagram illustrating an example of task execution state information according to the first embodiment.
- FIG. 12 is a diagram illustrating a first example of process step execution state information according to the first embodiment.
- FIG. 13 is a diagram illustrating a second example of the process step execution state information according to the first embodiment.
- FIG. 14 is a diagram illustrating a third example of the processing step execution state information according to the first embodiment.
- FIG. 15 is a schematic diagram illustrating an example of a data structure of context management information according to the first embodiment.
- FIG. 16 is a diagram illustrating an example of a search table for thread #1 according to the first embodiment.
- FIG. 17 is a diagram illustrating an example of a search table for thread #2 according to the first embodiment.
- FIG. 18 is a diagram illustrating an example of a search table for thread #3 according to the first embodiment.
- FIG. 19 is a diagram illustrating an example of a context according to the first embodiment.
- FIG. 20 is a flowchart of a query reception process according to the first embodiment.
- FIG. 21 is a flowchart of the query execution plan generation process according to the first embodiment.
- FIG. 22 is a flowchart of the inter-thread sharing flag setting process according to the first embodiment.
- FIG. 23 is a schematic diagram illustrating another example of the query execution plan according to the first embodiment.
- FIG. 24 is a flowchart of the result transmission process according to the first embodiment.
- FIG. 25 is a flowchart of thread execution processing according to the first embodiment.
- FIG. 26 is a flowchart of task execution processing according to the first embodiment.
- FIG. 27 is a flowchart of the context search process according to the first embodiment.
- FIG. 28 is a flowchart of query execution plan execution processing according to the first embodiment.
- FIG. 29 is a flowchart of DB page acquisition processing according to the first embodiment.
- FIG. 30 is a flowchart of the new task addition process according to the first embodiment.
- FIG. 31 is a flowchart of the context sharing determination process according to the first embodiment.
- FIG. 32 is a flowchart of the context registration process according to the first embodiment.
- FIG. 33 is a flowchart of task generation processing according to the first embodiment.
- FIG. 34 is a flowchart of the load distribution process according to the modification.
- FIG. 35 illustrates a configuration of a computer system according to the second embodiment.
- FIG. 1 is a diagram for explaining the outline of the DBMS according to the first embodiment.
- the DBMS 141 includes a client communication control unit 142, a query execution plan generation unit 143, a query execution unit 144, an execution task management unit 145, a thread management unit 146, and a DB buffer management unit 147.
- the query execution unit 144 includes a query execution plan execution unit 151, a context management unit 152, and a context sharing determination unit 153.
- the DBMS 141 (query execution unit 144) dynamically generates tasks for executing processing steps in executing a query, and executes the dynamically generated tasks. Specifically, for example, the DBMS 141 (query execution unit 144) (a) generates a task for executing a processing step in executing a query, (b) executes the generated task, issuing a data read request to the DB to read the data necessary for the processing step corresponding to the task, and (c) based on the execution result of the Nth processing step corresponding to the task executed in (b), dynamically generates a new task for a processing step that has newly become executable.
- the DBMS 141 (query execution unit 144) uses a plurality of threads (kernel threads) provided by the operating system (OS) when executing tasks, and each of the plurality of threads is executed by a processor core of one or more processors. When a processor core executes a thread, the tasks assigned to that thread are executed.
- expressions such as the processor core executing a task or the DBMS 141 executing a task mean that the processor core executes a task assigned to the thread by executing the thread.
- the DBMS 141 receives a query via the client communication control unit 142.
- the query execution plan generation unit 143 generates a query execution plan PL for executing the received query.
- the query execution plan execution unit 151 executes the generated query execution plan PL.
- the thread management unit 146 manages a plurality of threads executed by the processor core of the processor in the computer in which the DBMS 141 is constructed.
- the execution task management unit 145 manages tasks executed by threads. In this embodiment, the execution task management unit 145 can assign a plurality of tasks to one thread. As a result, the overhead required for thread management can be reduced.
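As a rough illustration of this design, many lightweight tasks can be multiplexed onto a small, fixed pool of kernel threads, so the OS only ever manages a few threads even when tens of thousands of tasks exist. This is a minimal sketch under assumed names, not the implementation of the execution task management unit 145:

```python
import queue
import threading

# Hypothetical sketch: lightweight "tasks" (plain callables here) are
# multiplexed onto a small fixed pool of kernel threads, so thread
# management overhead stays constant regardless of the task count.
class TaskScheduler:
    def __init__(self, num_threads):
        # One task queue per kernel thread, mirroring per-thread task lists.
        self.queues = [queue.Queue() for _ in range(num_threads)]
        self.threads = [
            threading.Thread(target=self._run, args=(q,), daemon=True)
            for q in self.queues
        ]
        for t in self.threads:
            t.start()

    def _run(self, q):
        while True:
            task = q.get()
            if task is None:      # sentinel: shut this thread down
                break
            task()                # execute one task to completion

    def assign(self, thread_index, task):
        """Assign a task to a specific thread's queue."""
        self.queues[thread_index].put(task)

    def shutdown(self):
        for q in self.queues:
            q.put(None)
        for t in self.threads:
            t.join()
```

For example, 300 tasks can be spread round-robin over 3 threads with `sched.assign(i % 3, task)`; the OS schedules only 3 kernel threads no matter how many tasks are queued.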
- the context management unit 152 manages the context used when executing the task.
- among contexts, there are shared contexts, which are managed so that they can be used from a plurality of threads, and non-shared contexts, which are managed so that they can be used from one thread.
- the thread that can use a non-shared context uses that context preferentially over other threads.
- in the example of FIG. 1, context #0 and context #1 are contexts shared between threads, and context #2, context #3, and context #4 are non-shared contexts.
- contexts #0 and #1 can be used when executing a task assigned to any thread.
- context #2 is used when executing a task assigned to thread #1.
- context #3 is used when executing a task assigned to thread #2.
- context #4 is used when executing a task assigned to thread #3.
- the context sharing determination unit 153 may use a context related to the first processing step of the query execution plan as a context shared between threads.
- when the query execution plan includes a plurality of processing blocks, the context sharing determination unit 153 may use a context related to the first processing step of each processing block as a context shared between threads.
- the context sharing determination unit 153 may also use a context related to a processing step that is followed by a predetermined number or more of subsequent processing steps within one processing block as a context shared between threads.
- a processing block may be composed of one or more processing steps. An example of a query execution plan composed of a plurality of processing blocks that can be executed in parallel will be described later.
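The three sharing rules above can be sketched as a small function. The plan model (a list of processing blocks, each a list of processing step ids) and the threshold parameter are illustrative assumptions, not details taken from the patent:

```python
# Hypothetical sketch of the sharing rules: a context is shared between
# threads if its step is the first of the whole plan, the first of a
# processing block, or followed by enough subsequent steps in its block.
def mark_shared_contexts(blocks, min_subsequent_steps=2):
    """Return the set of step ids whose contexts are shared between threads."""
    shared = set()
    for block_index, steps in enumerate(blocks):
        for pos, step in enumerate(steps):
            subsequent = len(steps) - pos - 1  # steps after this one in the block
            if block_index == 0 and pos == 0:
                shared.add(step)               # first step of the whole plan
            elif pos == 0:
                shared.add(step)               # first step of a processing block
            elif subsequent >= min_subsequent_steps:
                shared.add(step)               # many steps still follow
    return shared
```

For a plan with blocks `[[1, 2, 3, 4], [5, 6]]` and the default threshold, steps 1 and 5 are shared as block heads and step 2 is shared because two steps still follow it in its block.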
- FIG. 2 is a diagram for explaining the execution of the query in the DB management system according to the first embodiment. In the figure, the passage of time is expressed from top to bottom.
- the thread management unit 146 generates a thread # 1, a thread # 2, and a thread # 3. These threads # 1 to # 3 can be executed in parallel by different processor cores, for example.
- the query execution plan execution unit 151 (specifically, one processor core) generates a context # 0 for starting execution of a query and generates a task # 1. Since context # 0 is a context related to processing step # 1 which is the head of the query execution plan, it is shared among threads.
- the query execution plan execution unit 151 assigns task # 1 to thread # 1. Thread # 1 uses context # 0 and executes task # 1. When task # 1 is executed, context # 1 is generated, and task # 2 and task # 3 are generated.
- since context #1 relates to processing step #1, which is the head of the query execution plan, it is shared between threads. Thread #1 assigns task #2 to thread #2, and task #3 to thread #3. Thread #2 uses context #1 and executes task #2. Thread #3 uses context #1 and executes task #3.
- thread #1 executes task #1, which performs DB access processing. As a result, a new context is generated; for example, context #2 related to processing step #3 is generated. Since the processing step of context #2 is not the head of the query execution plan, context #2 is not shared between threads; it is basically used by thread #1. Thread #1 creates task #4 and assigns it to itself, then uses context #2 to execute task #4.
- thread #2 executes task #2, which performs DB access processing. As a result, context #3 related to processing step #3 is generated. Since the processing step of context #3 is not the head of the query execution plan, context #3 is not shared between threads; it is basically used by thread #2. Thread #2 creates task #5 and task #6 and assigns them to itself, then uses context #3 to execute task #5 and task #6.
- thread #3 executes task #3, which performs DB access processing. As a result, context #4 related to processing step #3 is generated. Since the processing step of context #4 is not the head of the query execution plan, context #4 is not shared between threads; it is basically used by thread #3. Thread #3 creates task #7 and task #8 and assigns them to itself, then uses context #4 to execute task #7 and task #8.
- one thread executes a plurality of tasks, and a query is executed by a plurality of the threads.
- a plurality of processor cores are used and the thread management overhead is reduced, the query execution time can be shortened.
- FIG. 3 is a configuration diagram of the computer system according to the first embodiment.
- the computer system includes a computer 100 and an external storage device 200.
- the computer 100 and the external storage apparatus 200 are connected via a communication network 300.
- as a protocol for communication through the communication network 300, for example, FC (Fibre Channel), SCSI (Small Computer System Interface), IB (InfiniBand), or TCP/IP (Transmission Control Protocol/Internet Protocol) may be adopted.
- the computer 100 is, for example, a personal computer, a workstation, or a mainframe.
- the computer 100 includes a network adapter 110, a processor (typically a microprocessor (for example, a CPU (Central Processing Unit))) 120, a local storage device 130, and a memory 140.
- the processor 120 executes a computer program such as an OS (Operating System) (not shown) and the DBMS 141.
- the one or more processors 120 have one or more processor cores. Each processor core can execute processing independently.
- the processor core has a cache with a shorter access latency than the memory 140.
- the processor core holds data recorded in the memory 140 in a cache and processes the data.
- each processor core can execute one thread (kernel thread) at a certain point in time.
- the memory 140 temporarily stores a program executed by the processor 120 and data used by the program.
- the memory 140 stores a DBMS 141 that is a program for performing DB management and a series of related processes and data.
- the memory 140 may store an AP (Application Program) 148 for issuing a query to the DBMS 141.
- the local storage device 130 stores a program and data used by the program.
- the network adapter 110 connects the communication network 300 and the computer 100.
- the processor 120 may be an element included in a control device connected to the network adapter 110, the memory 140, and the like.
- the control device may include dedicated hardware circuitry (eg, circuitry that encrypts and / or decrypts data).
- the computer 100 may include a plurality of at least one of the network adapter 110, the processor 120, the local storage device 130, and the memory 140 from the viewpoint of performance and redundancy.
- the computer 100 may include an input device (for example, a keyboard and a pointing device) and a display device (for example, a liquid crystal display) not shown.
- the input device and the display device may be integrated.
- the DBMS 141 executes a query issued to the DBMS 141.
- This query is issued by an AP 148 executed by the computer 100 or an AP executed by a computer (client) (not shown) connected to the communication network 300.
- the DBMS 141 executes a query issued by the AP 148, and transmits an I / O request for the DB 206 stored in the external storage apparatus 200 to the external storage apparatus 200 via the OS in accordance with the execution of the query.
- the OS may be an OS that operates on a virtual machine created and executed by a virtualization program.
- External storage device 200 stores data used by computer 100.
- the external storage apparatus 200 receives an I / O request from the computer 100, executes processing corresponding to the I / O request, and transmits the processing result to the computer 100.
- the external storage apparatus 200 includes a network adapter 201, a storage device group 203, and a controller 202 connected to them.
- the network adapter 201 connects the external storage device 200 to the communication network 300.
- the storage device group 203 includes one or more storage devices.
- the storage device is a non-volatile storage medium, for example, a magnetic disk, a flash memory, or other semiconductor memory.
- the storage device group 203 may be a group that stores data at a predetermined RAID level according to RAID (Redundant Array of Independent Disks).
- a logical storage device (logical volume) based on the storage space of the storage device group 203 may be provided to the computer 100.
- the storage device group 203 stores the DB 206.
- the DB 206 includes one or more tables 204 and indexes 205.
- a table is a set of one or more records, and a record is composed of one or more columns.
- An index is a data structure created for one or more columns in a table, and speeds up access to the table by a selection condition that includes the columns targeted by the index.
- the index is a data structure that holds, for each value of the target column, information (RowID) for specifying the records in the table containing that value; a B-tree structure or the like is used.
- An example of a DB table configuration and an example of the relationship between tables will be described later.
- the controller 202 includes, for example, a memory and a processor, and inputs / outputs data to / from the storage device group 203 storing the DB 206 in accordance with an I / O request from the computer 100.
- the controller 202 stores data to be written in accordance with a write request from the computer 100 in the storage device group 203, reads data to be read in accordance with a read request from the computer 100 from the storage device group 203, and transmits the read data to the computer 100.
- the external storage apparatus 200 may include a plurality of elements such as the controller 202 from the viewpoint of ensuring performance and ensuring redundancy.
- a plurality of external storage devices 200 may be provided.
- the DBMS 141 manages the DB 206 including business data.
- the DBMS 141 includes a client communication control unit 142, a query execution plan generation unit 143, a query execution unit 144, an execution task management unit 145, a thread management unit 146, and a DB buffer management unit 147.
- the client communication control unit 142 controls communication with a client or AP 148 connected to the communication network 300. Specifically, the client communication control unit 142 receives (accepts) a query issued from the client or the AP 148, and executes a process of transmitting the query processing result to the client or the AP 148.
- the query is described in, for example, SQL (Structured Query Language).
- the query execution plan generation unit 143 generates a query execution plan having one or more processing steps necessary for executing the query received by the client communication control unit 142.
- the query execution plan is, for example, information in which the execution order of processing steps to be performed at the time of executing a query is defined in a tree structure, and is stored in the memory 140. An example of the query execution plan will be described later.
- the DB buffer management unit 147 manages a storage area (DB buffer) for temporarily storing data in the DB 206.
- the DB buffer is constructed on the memory 140.
- the DB buffer may be constructed on the local storage device 130.
- the query execution unit 144 executes the query according to the query execution plan generated by the query execution plan generation unit 143, and returns the generated result to the query issuer.
- the query execution unit 144 includes a query execution plan execution unit 151, a context management unit 152, and a context sharing determination unit 153.
- the query execution plan execution unit 151 dynamically generates a task for executing a processing step in the query execution plan, assigns the task to a thread, and executes the query by the thread executing the task.
- the context management unit 152 manages a context including information necessary for executing the generated task.
- the context is information including first information indicating which of the one or more processing steps represented by the query execution plan is the processing step at which a task starts execution, second information on the access destination of the data required for the processing step indicated by the first information, and third information on the data necessary for the task to generate a result.
- the structure of context management information which is information for managing the context, will be described later.
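One plausible shape for such a context, mirroring the three kinds of information listed above; the field names are hypothetical, not identifiers from the patent:

```python
from dataclasses import dataclass, field

# Hypothetical context layout: the three fields correspond to the first,
# second, and third information described in the text.
@dataclass
class Context:
    starting_step: int                  # first info: step where new tasks start
    access_destination: tuple           # second info: e.g. (page, slot) to read
    intermediate_results: list = field(default_factory=list)  # third info
    shared_between_threads: bool = False  # managed by the sharing determination
```

A task picking up this context would resume the query at `starting_step`, read from `access_destination`, and carry `intermediate_results` forward into the result it generates.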
- the context sharing determination unit 153 determines whether to share a context among a plurality of threads.
- the execution task management unit 145 manages tasks executed by threads.
- the task is, for example, a pseudo process or pseudo thread (user-level thread) implemented by the DBMS 141.
- the task may be a set of pointers (function pointers) to a function in which each process is collected as a function.
- the structure of task management information which is information for managing tasks, will be described later.
- the thread management unit 146 manages a thread for executing a query.
- the thread is a thread (kernel thread) provided by the OS.
- when the processor core executes a thread, the task assigned to that thread is executed.
- a process may be used instead of a thread.
- At least a part of processing performed by at least one processing unit of the client communication control unit 142, the query execution plan generation unit 143, the query execution unit 144, and the DB buffer management unit 147 may be performed by hardware.
- when a processing unit is described as the subject of processing, the processing is actually performed by the processor 120 executing that processing unit; however, when at least a part of a processing unit is realized by hardware, the hardware may be the subject instead of or in addition to the processor 120.
- a computer program for realizing the DBMS 141 may be installed in the computer 100 from a program source.
- the program source may be, for example, a storage medium readable by the computer 100 or another computer.
- the configuration of the DBMS 141 shown in FIG. 3 is an example.
- a certain processing unit may be divided into a plurality of processing units, or a single processing unit in which functions of a plurality of processing units are integrated may be constructed.
- FIG. 4 is a diagram for explaining definitions of DB tables and indexes according to the first embodiment.
- the DB 206 has, as tables 204, for example, a Part table including column c1 and column c2, and a Lineitem table including column c3 and column c4.
- the DB 206 includes, as indexes 205, an index on the Part table based on the value of column c1 (Part index) and an index on the Lineitem table based on the value of column c3 (Lineitem index).
- FIG. 5 is a diagram illustrating an example of a DB Part table according to the first embodiment.
- the Part table of the DB 206 is logically a table in which, for example, the value of the column c1 is associated with the value of the corresponding column c2.
- FIG. 6 is a diagram illustrating an example of a DB line item table according to the first embodiment.
- the Lineitem table of the DB 206 is a table in which, for example, the value of the column c3 is associated with the value of the corresponding column c4.
- FIG. 7 is a diagram for explaining an example of the data structure of the part index and part table in the DB according to the first embodiment.
- the Part index has, for example, a B-tree structure for searching for a page and a slot of the part table storing the value of the corresponding column c2 based on the value of the column c1.
- a page is a minimum data unit in input / output with respect to the DB 206.
- the Part index manages the page P as a hierarchical structure.
- in the Part index, there are leaf pages, which are the lowest-level pages, and upper pages, which are pages above the leaf pages.
- the highest page among the upper pages is referred to as the root page.
- for example, the page P1 stores a pointer to page P2, which manages the correspondences for values of column c1 of “100” or less, a pointer to page P3, which manages the correspondences for values of column c1 greater than “100” and less than or equal to “200”, and a pointer to page P4, which manages the correspondences for values of column c1 greater than “200” and less than or equal to “300”.
- in an upper page, one or more entries are provided, each associating a pointer to a page one level below with the maximum value of column c1 managed in that lower page.
- a leaf page stores entries associating a value of column c1 with the storage positions in the Part table (for example, the page number of the Part table and the slot number in the page) that hold the values of column c2 corresponding to that value.
- for example, page P8, which is a leaf page, stores a row containing the page and slot numbers where the values of column c2 corresponding to the value “110” of column c1 are stored, and a row for the value “130” of column c1.
- in the row for the value “130” of column c1, slot 2 of page P100, slot 1 of page P120, and slot 4 of page P200 are stored.
- thus, the values of column c2 corresponding to the value “130” of column c1 are obtained as “id131” from the record in slot 2 of page P100 of the Part table, “id132” from the record in slot 1 of page P120, and “id133” from the record in slot 4 of page P200.
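The lookup path just described can be sketched as follows. The page contents mirror FIG. 7, except that the RowID stored for the value “110” is invented for illustration, and the dictionary encoding of pages is an assumption, not the patent's on-disk format:

```python
# Illustrative B-tree-style lookup: upper pages hold (max_key, child)
# entries sorted by max_key; leaf pages map a column-c1 value to the
# RowIDs (page, slot) of the Part-table records holding that value.
def search(node, key):
    if node["kind"] == "leaf":
        return node["entries"].get(key, [])
    for max_key, child in node["entries"]:  # descend to the first covering child
        if key <= max_key:
            return search(child, key)
    return []

leaf_p8 = {"kind": "leaf", "entries": {
    110: [("P5", 1)],                              # hypothetical RowID
    130: [("P100", 2), ("P120", 1), ("P200", 4)],  # as in FIG. 7
}}
root = {"kind": "upper", "entries": [
    (100, {"kind": "leaf", "entries": {}}),  # values <= 100 (empty here)
    (200, leaf_p8),                          # 100 < values <= 200
]}
```

Calling `search(root, 130)` descends via the entry with max key 200 to leaf P8 and returns the three RowIDs, from which the records yielding “id131”, “id132”, and “id133” are read.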
- FIG. 8 is a diagram illustrating an example of a DB query according to the first embodiment.
- the query shown in FIG. 8 is an example of a query for the DB 206 having the structure shown in FIGS. 4 to 7.
- the query shown in FIG. 8 means to extract, from the Part table and the Lineitem table, the values of records for which the value of column c1 is “130” and the value of column c2 equals the value of column c3.
- FIG. 9 is a diagram illustrating an example of a query execution plan according to the first embodiment.
- the query execution plan shown in the figure indicates a query execution plan generated by the query execution plan generation unit 143 when the DBMS 141 receives the query shown in FIG.
- the query execution plan corresponding to the query shown in FIG. 8 includes processing step # 1 for performing index search using the Part index, processing step # 2 for acquiring records from the Part table, and indexing using the Lineitem index. Processing step # 3 for performing a search, processing step # 4 for acquiring a record from the Lineitem table, and processing step # 5 for combining these results with a nested loop are included.
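The plan just described can be sketched as a small tree, with the nested-loop join (step #5) at the root, the Part-side steps (#1, #2) on the outer side, and the Lineitem-side steps (#3, #4) on the inner side. The dictionary encoding and key names are illustrative assumptions:

```python
# Hedged sketch of the FIG. 9 query execution plan as a tree of
# processing steps; "outer"/"inner" are the two inputs of the join.
plan = {
    "op": "nested_loop_join",                                  # step #5
    "outer": {"op": "fetch_record", "table": "Part",           # step #2
              "child": {"op": "index_search", "index": "Part index"}},      # step #1
    "inner": {"op": "fetch_record", "table": "Lineitem",       # step #4
              "child": {"op": "index_search", "index": "Lineitem index"}},  # step #3
}

def count_steps(node):
    """Count the processing steps in a plan tree."""
    children = [node.get(k) for k in ("outer", "inner", "child")]
    return 1 + sum(count_steps(c) for c in children if c)
```

Walking this tree bottom-up matches the execution order in the text: search the Part index, fetch Part records, search the Lineitem index, fetch Lineitem records, and join the results.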
- FIG. 10 is a diagram illustrating an example of a data structure of task management information according to the first embodiment.
- the task management information has a main data structure 71.
- the main data structure 71 stores, in association with each thread, thread specifying information (for example, a thread number) for specifying the thread and a pointer to a list management structure 72 for managing the tasks executed by that thread.
- the list management structure 72 stores an executable list 72a for managing tasks that can be executed in the corresponding thread, and a waiting list 72b for managing tasks that are waiting for execution in the corresponding thread.
- the executable list 72a has a pointer to execution status information (task execution status information) 73 relating to tasks that can be executed in the corresponding thread.
- the task execution state information 73 has a pointer to the task execution state information 73 related to other tasks that can be executed in the corresponding thread.
- the executable list of the thread #2 manages the execution state information 73 of the tasks that can be executed in the thread #2, and the waiting list manages the execution state information 73 of the tasks waiting for execution in the thread #2 (for example, task #5 and task #6). If tasks are unevenly distributed among a plurality of threads, a task (that is, its task execution state information 73) may be moved to the list of another thread.
- in this embodiment, the executable list and the waiting list are managed for each thread, but they may instead be shared among a plurality of threads. Further, the executable list and the waiting list may be managed for each processing step.
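The per-thread lists of FIG. 10 and the movement of a task to another thread's list can be illustrated with a small Python sketch. This is a hedged, hypothetical model (the class and task names such as task#7 and the rebalancing threshold are assumptions for illustration, not part of the embodiment):

```python
# Hypothetical sketch of the task management information of FIG. 10:
# each thread owns an executable list (72a) and a waiting list (72b),
# and a task may be moved to another thread's list when load is uneven.

class TaskLists:
    def __init__(self):
        self.executable = []   # executable list 72a
        self.waiting = []      # waiting list 72b

# main data structure 71: thread number -> list management structure 72
task_management = {1: TaskLists(), 2: TaskLists()}
task_management[2].executable.extend(["task#4", "task#7"])
task_management[2].waiting.extend(["task#5", "task#6"])

def rebalance(src, dst):
    """Move one executable task if the source thread has noticeably more."""
    if len(src.executable) > len(dst.executable) + 1:
        dst.executable.append(src.executable.pop())

rebalance(task_management[2], task_management[1])
```

After the call, one executable task has migrated to thread #1 while the waiting tasks stay with thread #2, mirroring the bias correction described above.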
- FIG. 11 is a diagram illustrating an example of task execution state information according to the first embodiment.
- the task execution state information 73 stores a work area 73a, a processing step 73b, and a processing step execution state 73c.
- the work area 73a stores a pointer indicating the work area.
- the processing step 73b stores information for identifying a processing step to be executed by the corresponding task, for example, a processing step number.
- the processing step execution state 73c stores execution state information (processing step execution state information) 74 of the corresponding processing step. A specific example of the processing step execution state information 74 will be described later.
- FIG. 12 is a diagram illustrating a first example of processing step execution state information according to the first embodiment.
- FIG. 12 shows processing step execution state information for a task that uses the upper page in the index search.
- the processing step execution state information 74A includes a search condition 74a, a page number 74b, and a slot number 74c.
- the search condition 74a stores the search condition.
- the search condition 74a stores a key value range (for example, “115 or more and 195 or less”) that is a search condition included in the query.
- the page number 74b stores the number of the upper page (page number) used in task processing.
- the slot number 74c stores a slot number (slot number) in a page used in task processing.
- FIG. 13 is a diagram illustrating a second example of the process step execution state information according to the first embodiment.
- FIG. 13 shows processing step execution state information for a task that uses a leaf page in index search.
- the processing step execution state information 74B includes a search condition 74d, a page number 74e, a slot number 74f, and a processing row ID number 74g.
- the search condition 74d stores the search condition.
- the search condition 74d stores a key value range (for example, “115 or more and 195 or less”) that is a search condition.
- the page number 74e stores the page number of the leaf page used in task processing.
- the slot number 74f stores the slot number of the slot in the page used for task processing.
- the processing row ID number 74g stores the ID number of the row in the slot processed by the corresponding task (processing row ID number).
- FIG. 14 is a diagram illustrating a third example of the processing step execution state information according to the first embodiment.
- FIG. 14 shows processing step execution state information for a task for acquiring a record.
- the processing step execution state information 74C includes a page number 74h and a slot number 74i.
- the page number 74h stores the page number of the page used in task processing.
- the slot number 74i stores the slot number of the slot in the page used for task processing.
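The three variants of processing step execution state information in FIGS. 12 to 14 can be sketched as simple record types. The field names below are assumed translations of the reference numerals; this is an illustrative model, not the embodiment's actual layout:

```python
# Illustrative sketch of the three variants of processing step
# execution state information (74A, 74B, 74C) of FIGS. 12-14.
from dataclasses import dataclass

@dataclass
class UpperPageState:          # 74A: index search on an upper page
    search_condition: str      # 74a
    page_no: str               # 74b
    slot_no: int               # 74c

@dataclass
class LeafPageState:           # 74B: index search on a leaf page
    search_condition: str      # 74d
    page_no: str               # 74e
    slot_no: int               # 74f
    row_id_no: int             # 74g: processing row ID number

@dataclass
class RecordFetchState:        # 74C: record acquisition from a table
    page_no: str               # 74h
    slot_no: int               # 74i

# e.g. a leaf-page task positioned at page P8, slot 2, row ID 2
state = LeafPageState("115 <= key <= 195", "P8", 2, 2)
```

Only the leaf-page variant carries a processing row ID number, because a leaf-page slot may hold several row IDs for one key value.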
- FIG. 15 is a schematic diagram illustrating an example of a data structure of context management information according to the first embodiment.
- the context management information 80 includes a main structure 81 of a management list and a plurality of contexts 82.
- a pointer to the context 82 is stored in the main structure 81.
- Each context 82 stores a pointer to another context 82.
- the thread executing a task locks each context 82 as a unit of exclusion. A locked context cannot be used by other threads.
- the context management information 80 stores search tables (thread search tables) 83, 84, 85, etc. corresponding to the thread executing the query.
- the search table 83 for the thread # 1 manages a pointer to the context 82 that can be used by the thread # 1.
- the search table 84 for thread # 2 manages a pointer to a context 82 that can be used by thread # 2.
- the search table 85 for the thread # 3 manages a pointer to the context 82 that can be used by the thread # 3.
- FIG. 16 is a diagram illustrating an example of a search table for thread # 1 according to the first embodiment.
- FIG. 17 is a diagram illustrating an example of a search table for thread # 2 according to the first embodiment.
- FIG. 18 is a diagram illustrating an example of a search table for thread # 3 according to the first embodiment.
- the thread # 1 search table 83 is a table that manages pointers to contexts that can be used by the thread # 1, and manages pointers to contexts related to each processing step in a list.
- as shown in FIG. 16, the search table 83 holds a pointer 83a to the context related to processing step #1, a pointer 83b to the context related to processing step #2, a pointer 83c to the context related to processing step #3, and a pointer 83d to the context related to processing step #4.
- in this embodiment, a pointer to the context #1 is stored in the pointer 83a, and a pointer to the context #2 is stored in the pointer 83c.
- the thread #2 search table 84 is a table that manages pointers to contexts that can be used by the thread #2, and manages the pointers to the contexts related to each processing step in a list. As shown in FIG. 17, it holds a pointer 84a to the context related to processing step #1, a pointer 84b to the context related to processing step #2, a pointer 84c to the context related to processing step #3, and a pointer 84d to the context related to processing step #4. In this embodiment, a pointer to the context #1 is stored in the pointer 84a, and a pointer to the context #3 is stored in the pointer 84c.
- the search table 85 for the thread # 3 is a table that manages pointers to contexts that can be used by the thread # 3, and manages pointers to contexts related to each processing step in a list.
- as shown in FIG. 18, the search table 85 holds a pointer 85a to the context related to processing step #1, a pointer 85b to the context related to processing step #2, a pointer 85c to the context related to processing step #3, and a pointer 85d to the context related to processing step #4. In this embodiment, a pointer to the context #1 is stored in the pointer 85a, and a pointer to the context #4 is stored in the pointer 85c.
- the context # 1 can be used in the thread # 1, the thread # 2, or the thread # 3.
- context # 2 can be used in thread # 1.
- Context # 3 can be used in thread # 2
- context # 4 can be used in thread # 3.
- a state in which a pointer to a context 82 is registered in the search tables for a plurality of threads is called a state in which the context is shared between threads, and a state in which the pointer is registered in the search table for only one specific thread is called a state in which the context is non-shared between threads.
- a context that the processor core executing a thread can reach through that thread's search table is referred to as a context that can be used by the thread.
- in this embodiment, the context #1 is used by the thread #1, the thread #2, and the thread #3; the context #2 is used by the thread #1; the context #3 is used by the thread #2; and the context #4 is used by the thread #3.
- because a non-shared context tends to be used repeatedly by the same processor core, the processing time associated with the use of the context can be shortened by the cache of the processor core.
- a thread search table for other threads may be referred to in order to equalize the amount of tasks executed by the threads.
- for example, by referring to the thread-specific search table for another thread, a thread may use a context that is non-shared between threads (the context #2, the context #3, or the context #4). When the thread #1 refers to the search table for the thread #2 or the search table for the thread #3, the context #3 or the context #4 is used by a task assigned to the thread #1.
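The sharing scheme of FIGS. 15 to 18 can be modeled as one search table per thread, keyed by processing step: a shared context is registered in every thread's table, a non-shared context in exactly one. This is a minimal sketch under assumed names (ctx#1 to ctx#4 follow the figures; the dictionary layout is an illustration):

```python
# Sketch of the per-thread search tables of FIGS. 16-18:
# thread number -> {processing step -> list of usable contexts}.
# ctx#1 is shared between threads; ctx#2, ctx#3, ctx#4 are non-shared.
search_tables = {
    1: {1: ["ctx#1"], 3: ["ctx#2"]},   # FIG. 16 (pointers 83a, 83c)
    2: {1: ["ctx#1"], 3: ["ctx#3"]},   # FIG. 17 (pointers 84a, 84c)
    3: {1: ["ctx#1"], 3: ["ctx#4"]},   # FIG. 18 (pointers 85a, 85c)
}

def usable_contexts(thread_no):
    """All contexts reachable through the given thread's search table."""
    return {c for ctxs in search_tables[thread_no].values() for c in ctxs}
```

Each thread normally draws from its own table; the shared ctx#1 appears in all three, while each non-shared context appears in only one.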
- FIG. 19 is a diagram illustrating an example of a context according to the first embodiment.
- the context 82 includes a start step 82a, an intermediate result 82b, an execution state 82c, and a possible generation number 82d.
- in the start step 82a, the number of the corresponding processing step is stored.
- in the intermediate result 82b, a pointer indicating a work area that stores an intermediate result necessary for a task executing the corresponding processing step is stored.
- the intermediate result is acquired data necessary for generating a query result.
- the execution state 82c stores the task execution state in the corresponding processing step, for example, information specifying the processing content of the task to be executed next (for example, a page number 820, a slot number 821, and a processing row ID number 822).
- the page number 820 stores the page number of the leaf page used in the processing of the next task.
- the slot number 821 stores the slot number in the page used in the processing of the next task.
- the processing row ID number 822 stores the ID number (processing row ID number) of the row in the slot used in the processing of the next task.
- the possible generation number 82d stores the number of tasks that can be generated.
- the number of tasks that can be generated is the number of logically branching processes that have not yet been generated as tasks. For example, when the key value “130” is the search condition in the index search using the Part index shown in FIG. 7, there are three row IDs as entries corresponding to the key value “130” on the page P8.
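The context of FIG. 19 and the hand-off described later in the task execution process (steps S38 and after) can be sketched as follows. Field names are assumed translations of the reference numerals, and the example values follow the figure (page P8, slot 2, processing row ID 2, with two of the three row IDs still generatable):

```python
# Sketch of the context 82 of FIG. 19. The possible generation number
# (82d) counts logical branches not yet turned into tasks.

class Context:
    def __init__(self, start_step, page_no, slot_no, row_id_no, generatable):
        self.start_step = start_step      # 82a: processing step number
        self.page_no = page_no            # execution state 82c: page number 820
        self.slot_no = slot_no            # 821: slot number
        self.row_id_no = row_id_no        # 822: processing row ID number
        self.generatable = generatable    # 82d: possible generation number

    def take_next(self):
        """Hand the current row position to a new task and advance the state."""
        state = (self.page_no, self.slot_no, self.row_id_no)
        self.row_id_no += 1               # the next task processes the next row
        self.generatable -= 1
        return state

ctx = Context(start_step=2, page_no="P8", slot_no=2, row_id_no=2, generatable=2)
assert ctx.take_next() == ("P8", 2, 2)    # copied into task execution state info
```

After the hand-off, the execution state points at row ID 3 and one generatable task remains, matching the increment described for step S38.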
- FIG. 20 is a flowchart of the query reception process according to the first embodiment.
- when the client communication control unit 142 receives a query from the AP 148 (step S1), it passes the received query to the query execution plan generation unit 143, and the query execution plan generation unit 143 executes the query execution plan generation process (see FIG. 21) (step S2).
- after the query execution plan generation process, the thread management unit 146 generates threads (step S3).
- the number of threads to be generated may be an arbitrary number, for example, the same number as the number of processor cores of the processor 120.
- the processor core on which the thread operates may be designated as a specific processor core for each thread. That is, processor affinity may be set.
- the same number of threads as the number of processor cores may be generated, and each thread may be set to be executed by a different processor core. This improves the processing efficiency of each thread.
- as a method of generating a thread, there is a method of using a thread generation interface (function) provided by the OS, specifically pthread_create().
- the query execution plan execution unit 151 generates a context for starting execution of the query, generates a task for processing using the context, and assigns it to any one thread (step S4).
- the task is assigned to the thread created first by the thread management unit 146.
- the processor core of the processor 120 executes the thread, and the thread executes the task assigned to the thread.
- FIG. 21 is a flowchart of the query execution plan generation process according to the first embodiment.
- the query execution plan generation process is a process corresponding to step S2 of the query reception process shown in FIG.
- the query execution plan generation unit 143 generates a query execution plan from the query passed from the client communication control unit 142 (step S5). For example, when the query shown in FIG. 8 is received, the query execution plan shown in FIG. 9 is generated.
- the query execution plan generation unit 143 executes the inter-thread sharing flag setting process (see FIG. 22) (step S6), and ends the query execution plan generation process.
- FIG. 22 is a flowchart of the inter-thread sharing flag setting process according to the first embodiment.
- the inter-thread sharing flag setting process is a process corresponding to step S6 of the query execution plan generation process shown in FIG.
- the inter-thread sharing flag setting process is a process for setting, for a predetermined processing step in the query execution plan, an inter-thread sharing flag indicating that a context related to the processing step should be shared between threads.
- the query execution plan generation unit 143 performs processing while moving the pointer in order to trace the query execution plan having a tree structure.
- a pointer is set at the first processing step of the query execution plan (step S11).
- the query execution plan generation unit 143 determines whether or not there is a processing step indicated by the pointer in the query execution plan (step S12). If there is no processing step pointed to by the pointer (“No” in step S12), processing has been performed for all processing steps of the query execution plan, so the query execution plan generation unit 143 ends the inter-thread sharing flag setting process.
- if there is a processing step indicated by the pointer in the query execution plan (“Yes” in step S12), the query execution plan generation unit 143 determines whether or not the processing step is the head of a processing block (step S13).
- a processing block refers to each such set when the processing steps that must be executed sequentially in the query execution plan are divided into sets that can be executed in parallel.
- the query execution plan shown in FIG. 9 includes one processing block.
- the processing block will be described using another query execution plan.
- FIG. 23 is a diagram for explaining another example of the query execution plan according to the first embodiment.
- the query execution plan shown in FIG. 23 includes processing step # 1 for performing index search using the Part index, processing step # 2 for acquiring a record from the Part table, and processing step # 3 for executing a table scan on the Lineitem table. And processing step # 4 for hash-joining the results of processing step # 2 and processing step # 3.
- processing step # 1, processing step # 2, and processing step # 3 are processes that can be executed in parallel.
- This query execution plan includes a processing block # 1 including processing step # 1 and processing step # 2, and a processing block # 2 including processing step # 3 and processing step # 4.
- processing step # 1 and processing step # 3 are the first processing step of the processing block.
- a query execution plan corresponding to a query including a subquery or a derived table also includes a plurality of processing blocks.
- if the processing step is the head of a processing block (“Yes” in step S13), the query execution plan generation unit 143 sets the inter-thread sharing flag for the processing step (step S14).
- in the query execution plan shown in FIG. 9, the inter-thread sharing flag is set in processing step #1.
- in the query execution plan shown in FIG. 23, the inter-thread sharing flag is set in processing step #1 and processing step #3.
- the process proceeds to step S15.
- since the processing step is the head of a processing block, the context should be shared by a plurality of threads in order to distribute the starting tasks to the plurality of threads at an early stage of the processing block.
- step S13 if the result of step S13 is that the processing step is not the head of the processing block (“No” in step S13), the query execution plan generation unit 143 advances the processing to step S15.
- step S15 the query execution plan generation unit 143 moves the pointer to the next processing step, and the process proceeds to step S12.
- in this embodiment, the inter-thread sharing flag is set only for the first processing step of a processing block; however, the inter-thread sharing flag may be set for a plurality of processing steps, for example, for a predetermined number of subsequent processing steps in the processing block.
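The traversal of FIG. 22 reduces to marking the first processing step encountered in each processing block. A minimal sketch, assuming the plan is given as (step number, block number) pairs in execution order (an illustrative encoding, not the embodiment's tree structure):

```python
# Sketch of the inter-thread sharing flag setting process of FIG. 22.

def set_sharing_flags(steps):
    """Return the set of step numbers whose context should be shared."""
    flags, seen_blocks = set(), set()
    for step_no, block_no in steps:      # steps S11, S12, S15: trace the plan
        if block_no not in seen_blocks:  # step S13: head of a processing block?
            flags.add(step_no)           # step S14: set the inter-thread sharing flag
            seen_blocks.add(block_no)
    return flags

# The plan of FIG. 23: steps #1, #2 in block #1; steps #3, #4 in block #2.
plan = [(1, 1), (2, 1), (3, 2), (4, 2)]
assert set_sharing_flags(plan) == {1, 3}
```

For the single-block plan of FIG. 9 the same function would flag only step #1, matching the description above.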
- FIG. 24 is a flowchart of a result transmission process according to the first embodiment.
- the result transmission process is started by the client communication control unit 142 after the client communication control unit 142 receives the query.
- the client communication control unit 142 confirms whether or not there is a result of the accepted query in the query execution unit 144 (step S21).
- if there is a query result (“Yes” in step S21), the client communication control unit 142 acquires the query result from the query execution unit 144 (step S22) and transmits the result of the query to the AP 148 that issued the query (step S26).
- if there is no query result (“No” in step S21), the client communication control unit 142 determines whether the query end flag of the query execution unit 144 is “end”, indicating that the query has ended, or “not finished”, indicating that it has not (step S23). If the query end flag is “end” (“end” in step S23), “NOROW” (no corresponding record) is set in the result (step S24), and the result of the query is transmitted to the AP 148 that issued the query (step S26).
- if the query end flag of the query execution unit 144 is “not finished” (“not finished” in step S23), the client communication control unit 142 waits a predetermined time for the query execution unit 144 to generate a result (step S25), and the process proceeds to step S21.
- FIG. 25 is a flowchart of thread execution processing according to the first embodiment.
- the thread execution process is realized by the processor core of the processor 120 executing the thread generated in step S3 of FIG.
- another processor core can perform thread execution processing for another thread in parallel.
- the processor core selects a task to be executed in the corresponding thread (step S31). Specifically, the processor core selects a task included in the executable list of the corresponding thread in the task management information managed by the execution task management unit 145.
- the processor core determines whether or not there is a task to be executed (step S32). If there is no task to be executed (“No” in step S32), the process proceeds to step S34. If there is a task to be executed (“Yes” in step S32), the processor core starts or restarts the task (step S33). Specifically, the following processing is performed.
- the processor core selects one of the tasks included in the ready list.
- the processor core confirms the task execution state information of the selected task, and starts or restarts the task. If the processing step of the task execution state information is not set, task processing is started; specifically, the task execution process (see FIG. 26) is executed. If the processing step of the task execution state information is set, task processing is resumed from the point where the task was interrupted, for example, from step S66 of FIG. 29.
- the processor core advances the process to step S31 after the execution of the task is completed or after the execution of the task is in a waiting state.
- the processor core checks whether or not there is another thread (step S34). If there is no other thread (“No” in step S34), the processor core sets “end” in the query end flag of the query execution unit 144 (step S35) and ends the thread execution process. If there is another thread (“Yes” in step S34), the processor core ends the thread execution process without setting the flag. In either case, the thread disappears.
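The loop of FIG. 25 can be summarized in a few lines. The sketch below is a simplification under stated assumptions: tasks are plain callables taken from the executable list, and the last surviving thread sets the query end flag (the `run_task`, `other_threads_exist`, and `flags` names are illustrative, not from the embodiment):

```python
# Sketch of the thread execution process of FIG. 25.

def run_thread(executable, run_task, other_threads_exist, flags):
    while executable:                  # steps S31-S32: select a task
        task = executable.pop(0)
        run_task(task)                 # step S33: start or resume the task
    if not other_threads_exist:        # step S34
        flags["query_end"] = "end"     # step S35: last thread marks the query end

done = []
flags = {"query_end": "not finished"}
run_thread(["task#1", "task#2"], done.append, other_threads_exist=False,
           flags=flags)
```

Once its list is empty and no other thread remains, the thread sets the end flag checked by the result transmission process of FIG. 24.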
- FIG. 26 is a flowchart of task execution processing according to the first embodiment.
- the task execution process corresponds to the process when the task process is started in step S33 of FIG. This task execution process is realized by the processor core executing a task in a thread.
- the processor core executes the context search process (see FIG. 27) (step S36) and checks whether there is a context found by the context search process (step S37). If there is no context (“none” in step S37), there is no operation on the DB to be executed, so the task execution process is terminated.
- the processor core sets the task execution state information 73 (see FIG. 11) of the task (step S38). Specifically, the processor core copies the value of the start step 82a of the found context 82 to the processing step 73b of the task execution state information 73, copies the data in the work area indicated by the pointer of the intermediate result 82b of the context to the work area indicated by the pointer in the work area 73a of the task execution state information 73, and copies the value of the execution state 82c of the context 82 to the processing step execution state 73c of the task execution state information 73. For example, in the case of the context 82 shown in FIG. 19, the value “8” of the page number 820 of the context 82 is stored as the page number in the processing step execution state 73c of the task execution state information 73, the value “2” of the slot number 821 of the context 82 is stored as the slot number, and the value “2” of the processing row ID number 822 of the context 82 is stored as the processing row ID number.
- the processor core updates the execution state 82c of the context 82 so as to correspond to the processing content in the next task. For example, in the context 82 shown in FIG. 19, the value of the processing row ID number 822 is incremented by 1 to “3”.
- as a result, this task performs its subsequent processing by referring to the record (the record having id132) stored in the slot number “1” of the page “P120” indicated by the corresponding row ID.
- after step S38, the processor core executes the query execution plan execution process (see FIG. 28) (step S39). When the task processing is finished, step S39 ends and the process proceeds to step S36.
- FIG. 27 is a flowchart of context search processing according to the first embodiment.
- the context search process is a process corresponding to step S36 in FIG.
- the processor core searches for a context using the pointer of the thread search table (83, 84, or 85) for the executing thread (self thread) (step S41).
- the processor core searches for contexts in order from the last processing step to the first processing step.
- the processor core checks whether or not there is a context that can be searched (step S42). As a result, when the context is found (“Yes” in step S42), the context search process is terminated.
- if there is no context (“No” in step S42), the number of available contexts may be uneven among threads; therefore, the processor core searches for a context using the pointers of the thread-specific search tables for threads other than its own thread (other threads) (step S43).
- when acquiring a context from a search table for another thread, the processor core searches for contexts in order from the first processing step to the last processing step. The search starts from the first processing step because a context closer to the first processing step is likely to generate more tasks, so the load can be distributed to the threads at an early stage.
- after step S43, the processor core ends the context search process.
- thread # 1 searches for a context in the search table for thread # 1.
- Thread # 1 searches in the order from processing step # 4 to processing step # 1 in the search table for thread # 1.
- thread # 1 finds a pointer to context # 2 and uses context # 2.
- when the context #2 disappears, a pointer to the context #1 is found and the context #1 is used.
- context # 1 disappears, there is no available context in the search table for thread # 1, and thread # 1 searches for a context in the search table for threads other than its own thread. In this case, search is performed in the order of processing step # 1 to processing step # 4.
- the thread #1 searches in the order of: processing step #1 of the search table for thread #2, processing step #1 of the search table for thread #3, processing step #2 of the search table for thread #2, processing step #2 of the search table for thread #3, processing step #3 of the search table for thread #2, processing step #3 of the search table for thread #3, processing step #4 of the search table for thread #2, and processing step #4 of the search table for thread #3. In FIGS. 17 and 18, when the pointer to the context #1 disappears, the thread #1 finds a pointer to the context #3 in the search table for the thread #2 and uses the context #3.
- in this embodiment, when no available context remains in the own thread's search table, the context is searched for by referring to the thread-specific search tables for other threads.
- in step S43 of the context search process, the load imbalance between threads can be reduced based on the number of available contexts among the threads. Alternatively, this adjustment may be performed by a dedicated load distribution thread.
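The asymmetric search order of FIG. 27 (own table from the last processing step backward, other threads' tables from the first step forward, visiting each thread at a step before moving to the next step) can be sketched as follows. The table encoding is an assumption for illustration:

```python
# Sketch of the context search process of FIG. 27.
# A table maps processing step number -> list of available contexts.

def find_context(own_table, other_tables):
    # step S41: search the own thread's table from the last step backward
    for step in sorted(own_table, reverse=True):
        if own_table[step]:
            return own_table[step][0]
    # step S43: search other threads' tables from the first step forward,
    # checking every thread's table at one step before moving to the next
    steps = sorted({s for t in other_tables for s in t})
    for step in steps:
        for table in other_tables:
            if table.get(step):
                return table[step][0]
    return None

own = {1: ["ctx#1"], 3: ["ctx#2"]}                     # thread #1 (FIG. 16)
others = [{1: [], 3: ["ctx#3"]}, {1: [], 3: ["ctx#4"]}]  # threads #2, #3
```

With thread #1's own table populated, the last-step-first rule yields ctx#2; once the own table is exhausted, the first-step-first scan over the other threads yields ctx#3, matching the walk-through above.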
- FIG. 28 is a flowchart of the query execution plan execution process according to the first embodiment.
- the query execution plan execution process corresponds to step S39 in FIG.
- This query execution plan execution process is realized by executing a task assigned by a processor core to a thread.
- a logical function unit that executes the query execution plan execution process corresponds to the query execution plan execution unit 151.
- the processor core acquires a page in the DB 206 by executing a DB page acquisition process (see FIG. 29) (step S51).
- the processor core determines whether there is data that matches the search condition in the data of the page (step S52). For example, if the page is an upper page of an index, this is the search process in the upper page; if it is a leaf page, this is the search process in the leaf page.
- if there is no matching data (“false” in step S52), the processor core ends the query execution plan execution process.
- if there is data that matches the search condition (“true” in step S52), the processor core determines whether one piece of data or two or more pieces of data match the search condition (step S53). If one piece of data matches the search condition (“1” in step S53), the processor core advances the process to step S55. If two or more pieces of data match the search condition (“two or more” in step S53), the processor core executes the new task addition process (see FIG. 30) (step S54) and then advances the process to step S55.
- step S55 the processor core executes processing for the DB page in the processing step by the task.
- the process for the DB page is, for example, a process of reading a page number that matches the search condition if the page is an upper page of an index, a process of reading a row ID that matches the search condition if it is a leaf page, and a process of reading the columns of a record if it is a table page.
- step S56 the processor core determines the next DB page and processing for the DB page (step S56), and proceeds to step S57.
- step S57 the processor core releases the acquired DB page.
- in step S58, the processor core determines whether or not there is a next process. Specifically, “None” is determined when the current processing step is completed and there is no next processing step in the processing block including that step. If there is a next process (“Yes” in step S58), the processor core advances the process to step S51. If there is no next process (“No” in step S58), the processor core passes the result to the query execution unit 144 (step S59) and ends the query execution plan execution process.
- for example, the processor core determines the root page of the index (the page with the page number “P1”) as the next DB page, determines the search process in the upper page, which searches the page for the key “130”, as the process for the DB page, and starts the process.
- in step S51, the processor core reads the page P1, and in step S52 searches the page P1 for an entry including c1 “130”. Since one entry including c1 “200” is found, in step S55 and step S56 the search process in the upper page for the page P3 is determined as the next process for the DB page.
- in step S51 to step S55, the process for the page P3 is performed.
- the processor core reads the page P3, searches the page P3 for an entry including c1 “130”, and finds a pointer to the page P8 in the entry including c1 “130”. As a result, the page P8 is determined as the next DB page, and the search process in the leaf page for the page P8 is determined as the process for the DB page.
- in step S51 to step S53, the processor core reads the page P8, searches the page P8 for an entry including c1 “130”, and finds the page “P100” of the Part table and the slot number “2”.
- since the entry corresponding to the key value “130” includes three row IDs, the new task addition process (step S54) is performed in order to process the two pieces of data other than the data to be processed by this task.
- the data to be processed by this task is the first data; in step S56, the page P100 of the Part table is determined as the next DB page, and acquiring the record in the slot number 2 of the page P100 is determined as the process for the DB page.
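The index walk of steps S51 to S58 described above can be condensed into a tiny model using the Part index pages of FIG. 7 as assumed data (the dictionary encoding of upper and leaf pages is an illustration, not the actual page format):

```python
# Miniature of the index search loop of FIG. 28, with the Part index
# pages of FIG. 7: upper pages map (max key -> child page), the leaf
# page maps the key to its row IDs (page, slot) in the Part table.

upper = {"P1": [(200, "P3")],    # root: entry including c1 "200" points to P3
         "P3": [(130, "P8")]}    # entry including c1 "130" points to leaf P8
leaf = {"P8": {130: [("P100", 2), ("P120", 1), ("P200", 4)]}}

def search(key):
    page = "P1"                       # start from the root page
    while page in upper:              # search process in the upper pages
        for max_key, child in upper[page]:
            if key <= max_key:
                page = child
                break
    return leaf[page][key]            # search process in the leaf page

rows = search(130)
# The first row ID is processed by the current task; the remaining
# two would trigger the new task addition process (step S54).
```

The three row IDs returned correspond to the records id131, id132, and id133 mentioned earlier.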
- FIG. 29 is a flowchart of DB page acquisition processing according to the first embodiment.
- the DB page acquisition process corresponds to step S51 of the query execution plan execution process (FIG. 28).
- This DB page acquisition processing is realized by executing a task assigned to a thread by a processor core.
- the processor core searches the buffer page (DB buffer page) corresponding to the DB page to be acquired in the DB buffer management unit 147 (step S61), and checks whether there is a corresponding DB buffer page (step S62).
- if there is a corresponding DB buffer page (“Yes” in step S62), the processor core determines whether or not the reading of the page from the DB 206 has been completed (step S63). If it has been completed (“completed” in step S63), the DB page acquisition process ends. If the reading has not been completed (“incomplete” in step S63), the process proceeds to step S66.
- if there is no corresponding DB buffer page (“No” in step S62), the processor core acquires a free DB buffer page from the DB buffer management unit 147 (step S64), issues a DB page read request for reading the corresponding page of the DB 206 into the free DB buffer page (step S65), and advances the process to step S66.
- step S66 the processor core waits for page reading to be completed.
- in step S66, the processor core may wait until the reading of the page is completed, that is, synchronous I/O may be adopted; alternatively, a method of executing other processing without waiting for the page to be read, that is, asynchronous I/O, may be adopted.
- in the case of asynchronous I/O, the processor core interrupts the processing of the task being executed to put it into a waiting state, and moves its task execution state information to the waiting list. The completion of reading the page is then confirmed by another thread (or another task). When the other thread (the processor core executing the other thread) confirms the completion of reading the page, it moves the task execution state information of the task back to the executable list, and the processing of the task is resumed.
- the processor core can execute other tasks without waiting for completion of page reading, and the processing efficiency in the DBMS 141 can be improved. Note that when the reading is completed, the processor core ends the DB page acquisition process.
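The asynchronous page-acquisition flow above (steps S61 through S66, plus the wait-list/executable-list handoff) can be sketched as follows. This is an illustrative model only: the class and field names (`BufferManager`, `wait_list`, `runnable_list`, and so on) are assumptions, not structures named in the document.

```python
# Hypothetical sketch of the DB page acquisition flow (steps S61-S66),
# assuming a dict-based DB buffer and simple wait/runnable lists.

class Task:
    def __init__(self, name):
        self.name = name
        self.state = "running"

class BufferManager:
    def __init__(self):
        self.pages = {}          # page_id -> {"data": ..., "read_done": bool}
        self.wait_list = []      # (task, page_id) pairs waiting for a page read
        self.runnable_list = []  # tasks ready to resume

    def acquire_page(self, task, page_id):
        page = self.pages.get(page_id)              # S61/S62: search DB buffer
        if page is not None and page["read_done"]:  # S63: read already complete
            return page["data"]
        if page is None:                            # S64/S65: allocate + issue read
            self.pages[page_id] = {"data": None, "read_done": False}
        task.state = "waiting"                      # S66: asynchronous wait
        self.wait_list.append((task, page_id))
        return None                                 # core is free to run other tasks

    def complete_read(self, page_id, data):
        # Called when the I/O finishes (e.g. observed by another thread):
        # mark the page read and move waiting tasks to the runnable list.
        self.pages[page_id].update(data=data, read_done=True)
        still_waiting = []
        for task, pid in self.wait_list:
            if pid == page_id:
                task.state = "runnable"
                self.runnable_list.append(task)
            else:
                still_waiting.append((task, pid))
        self.wait_list = still_waiting

bm = BufferManager()
t = Task("task-1")
result = bm.acquire_page(t, page_id=7)   # page absent: read issued, task waits
bm.complete_read(7, data="row-data")     # I/O completion resumes the task
```

The key property mirrored here is that `acquire_page` returns immediately when the page is not yet buffered, leaving the processor core free to run other tasks until `complete_read` moves the waiting task back to the executable list.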
- FIG. 30 is a flowchart of new task addition processing according to the first embodiment.
- the new task addition process corresponds to step S54 of the query execution plan execution process (FIG. 28).
- This new task addition processing is executed, when two or more pieces of data match the condition in step S53, for each matching piece of data other than one piece (for example, the first piece).
- the processor core creates a context 82 based on the data to be processed (step S71).
- the processor core executes a context sharing determination process (see FIG. 31) for determining whether or not the created context 82 is shared between threads (step S72).
- the processor core executes a context registration process (see FIG. 32) for registering the created context 82 in the context management information 80 (step S73).
- the processor core determines whether or not a new task can be generated (step S74). Whether a new task can be generated can be determined, for example, by checking whether the number of tasks generated in the DBMS 141 has reached the upper limit of the number of tasks that can be generated.
- When a task can be generated (“Yes” in step S74), the processor core executes a task generation process (see FIG. 33) for generating a new task (step S75), and the new task addition process ends. On the other hand, if a task cannot be generated (“No” in step S74), the new task addition process ends without generating a task.
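The new-task addition flow (create a context, determine sharing, register it, then generate a task only if the task-count limit has not been reached) can be sketched as below. The cap value, class names, and the placeholder sharing check are illustrative assumptions, not values from the document.

```python
# Illustrative sketch of the new-task addition flow (steps S71-S75),
# assuming a hypothetical DBMS state with a cap on concurrently existing tasks.

MAX_TASKS = 4  # assumed upper limit on the number of tasks that can be generated

class Dbms:
    def __init__(self):
        self.contexts = []
        self.tasks = []

    def add_new_task(self, data):
        ctx = {"data": data}                 # S71: create a context for the data
        ctx["shared"] = self.is_shared(ctx)  # S72: context sharing determination
        self.contexts.append(ctx)            # S73: register the context
        if len(self.tasks) < MAX_TASKS:      # S74: can a new task be generated?
            self.tasks.append(ctx)           # S75: generate a task from the context
            return True
        return False                         # limit reached: context registered only

    def is_shared(self, ctx):
        return False  # placeholder; the real determination is per FIG. 31

db = Dbms()
results = [db.add_new_task(i) for i in range(6)]
```

Note that even when the task limit is reached, the context itself is still registered, so a task for it can be generated later (for example, when another task finishes).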
- FIG. 31 is a flowchart of the context sharing determination process according to the first embodiment.
- the context sharing determination process corresponds to step S72 of the new task addition process (FIG. 30). This context sharing determination process is realized by executing a task assigned to a thread by a processor core.
- the processor core refers to the inter-thread sharing flag of the processing step related to the generated context (step S81).
- If the flag indicates sharing between threads, the processor core determines that the context is shared among a plurality of threads, and the context sharing determination process ends.
- Otherwise, the processor core determines that the context is not shared between threads, that is, that it can be used by only one thread (step S83), and the context sharing determination process ends.
- the processor core may determine whether or not to share the generated context between threads based on the execution state of the DBMS 141.
- the number of existing tasks of the DBMS 141 is adopted as the execution state of the DBMS 141.
- In that case, the processor core may determine that the context to be generated is shared among threads when the number of existing tasks is equal to or less than a predetermined number, and that it is not shared between threads when the number of existing tasks exceeds the predetermined number.
- Alternatively, even when the processor core determines that the generated context is to be shared between threads, if the data amount of the intermediate result 82b included in the context is not equal to or less than a predetermined amount, it may determine that the generated context is not shared between threads.
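The sharing determination and its execution-state refinements can be condensed into one predicate, sketched below. The threshold values and parameter names are illustrative assumptions; the document specifies only that "predetermined" task-count and data-amount limits exist.

```python
# Hedged sketch of the context-sharing determination (FIG. 31 and its variants):
# share when the processing step's inter-thread sharing flag is on, then refine
# by DBMS execution state (existing task count, or the data amount of the
# intermediate result held in the context).

TASK_THRESHOLD = 10      # assumed "predetermined number" of existing tasks
RESULT_THRESHOLD = 1024  # assumed "predetermined amount" of intermediate data (bytes)

def determine_sharing(step_share_flag, existing_tasks, intermediate_bytes):
    if not step_share_flag:                    # S81: flag off -> one thread only
        return False
    if existing_tasks > TASK_THRESHOLD:        # many tasks exist: do not share
        return False
    if intermediate_bytes > RESULT_THRESHOLD:  # large intermediate result: do not share
        return False
    return True                                # shared among a plurality of threads
```

The intent of both refinements is the same: sharing a context only pays off when the system still has headroom to fan out work, and when the context is cheap enough to hand between threads.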
- FIG. 32 is a flowchart of the context registration process according to the first embodiment.
- the context registration process corresponds to step S73 of the new task addition process (FIG. 30). This context registration process is realized by executing a task assigned to a thread by the processor core.
- the processor core registers the created context in the management list of the context management information 80 (step S91). Specifically, the processor core connects the created context after the last context connected to the management list.
- In step S92, the processor core checks the result of the context sharing determination process (see FIG. 31). If the result is inter-thread sharing (“shared” in step S92), the processor core registers a pointer to the created context in the search tables for a plurality of threads, and the context registration process ends.
- Specifically, the context pointer is registered in the search tables for all threads that execute DB access processing (step S93).
- Alternatively, the pointer to the context may be registered only in the search tables for specific threads.
- In that case, the threads whose search tables receive the registration are selected based on the hardware configuration information of the computer.
- the hardware configuration information may be a processor configuration, a cache configuration, or a memory configuration.
- For example, the pointer may be registered in the search tables for the plurality of threads whose total number of tasks generable from their available contexts is the smallest.
- For example, suppose that the processor core executing thread #2 and the processor core executing thread #3 belong to the same processor, and that this processor is different from the processor executing thread #1. When a new context #4 is generated for processing step #1, the pointer is registered in the search table of a thread executed by a processor core of the processor with the fewest contexts; in this case, it is registered in the thread search table for thread #1. In this example, the pointer is registered in the search table for one thread; however, if the processor of the processor core executing thread #1 is also executing other threads that perform DB access processing, the pointer is registered in the search tables for those multiple threads.
- Alternatively, the pointer may be registered in the search tables for the plurality of threads corresponding to the processor that includes the processor core that generated the context.
- It may also be registered in the search tables for the plurality of threads whose total number of available contexts is the smallest.
- It may be registered in the search tables for a plurality of threads executed by processor cores that share a cache within the processor.
- It may be registered in the search tables for a plurality of threads executed by processor cores close to the memory in which the context is recorded.
- It may be registered in the search tables for a plurality of threads executed by processor cores close to the memory in which the DB buffer page referred to when the context is used is recorded.
- If the result is non-shared between threads (“non-shared” in step S92), the processor core registers the pointer in the search table for one thread and ends the context registration process.
- Specifically, a pointer to the created context is registered in the search table for the thread being executed (its own thread) (step S94), and the context registration process ends.
- Alternatively, it may be registered in the search table for the thread with the fewest available contexts, or in the search table for the thread with the smallest total number of tasks generable from its available contexts.
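The registration step above (append to the management list, then fan the pointer out to all search tables or to a single thread's table) can be sketched as follows. The data structures are illustrative assumptions; here the non-shared case picks the thread with the fewest registered contexts, one of the variants the text mentions.

```python
# Sketch of context registration (FIG. 32): append the context to a global
# management list, then register a reference to it either in the search
# tables of all DB-access threads (shared) or in a single thread's table
# (non-shared) - here, the thread with the fewest registered contexts.

management_list = []                            # all contexts, in creation order
search_tables = {"t1": [], "t2": [], "t3": []}  # per-thread context search tables

def register_context(ctx, shared):
    management_list.append(ctx)        # S91: connect after the last context
    if shared:                         # S92/S93: register with every DB-access thread
        for table in search_tables.values():
            table.append(ctx)
    else:                              # S94 variant: pick the least-loaded thread
        target = min(search_tables, key=lambda t: len(search_tables[t]))
        search_tables[target].append(ctx)

register_context("ctx-A", shared=True)
register_context("ctx-B", shared=False)
```

Because a shared context appears in several search tables, any of those threads can later pick it up; a non-shared context is visible to exactly one thread.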
- FIG. 33 is a flowchart of task generation processing according to the first embodiment.
- the task generation process corresponds to step S75 of the new task addition process (FIG. 30). This task generation processing is realized by executing a task assigned to a thread by the processor core.
- In step S101, the processor core checks the result of the context sharing determination process (see FIG. 31). If the result is inter-thread sharing (“sharing between threads” in step S101), the processor core generates tasks and assigns them to the two or more threads corresponding to the thread search tables in which the context pointer is registered (step S102). The total number of tasks generated is limited to the number of tasks that can be generated from the context, and the number of tasks assigned to each thread is that total divided by the number of threads. Thereafter, the processor core ends the task generation process.
- On the other hand, if the result is non-shared between threads, the processor core generates a task and assigns it to the one thread corresponding to the search table in which the context pointer is registered (step S103).
- The number of tasks generated is limited to the number of tasks that can be generated from the context.
- the thread to which the task is assigned may be the own thread executed by the processor core, or may be a thread other than the own thread.
- the processor core ends the task generation process.
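The two task-generation branches (even split across all registered threads for a shared context, versus all tasks on the single registered thread otherwise) can be sketched as a small function. The representation of threads and counts is an illustrative assumption.

```python
# Sketch of task generation (FIG. 33): for a context shared between threads,
# generate tasks on every thread whose search table holds the context pointer,
# splitting the generable task count evenly; for a non-shared context, assign
# all tasks to the single registered thread.

def generate_tasks(context_threads, generable_tasks):
    """context_threads: threads whose search tables reference the context.
    Returns a mapping thread -> number of tasks assigned; the total never
    exceeds generable_tasks."""
    assignment = {t: 0 for t in context_threads}
    if len(context_threads) > 1:                       # S102: shared between threads
        per_thread = generable_tasks // len(context_threads)
        for t in context_threads:
            assignment[t] = per_thread
    else:                                              # S103: non-shared, one thread
        assignment[context_threads[0]] = generable_tasks
    return assignment

shared = generate_tasks(["t1", "t2", "t3"], generable_tasks=9)
single = generate_tasks(["t2"], generable_tasks=5)
```

The integer division in the shared branch matches the text's "number divided by the number of threads"; any remainder is simply left ungenerated in this sketch.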
- the query execution unit 144 may execute the following load distribution process.
- FIG. 34 is a flowchart of the load distribution process according to the modification.
- the load distribution process is executed by the query execution unit 144.
- This load distribution process is realized by a processor core executing a thread (load distribution thread) other than the threads that perform DB processing. The load distribution process is started after the client communication control unit 142 receives a query.
- the processor core determines whether or not the query processing is finished (step S111), and when the query processing is finished (“end” in step S111), the load distribution processing is finished.
- the processor core calculates, from the search table of each thread, the sum of the numbers of tasks that can be generated from the contexts usable by that thread (step S112).
- the processor core determines whether or not there is a bias in the total number of tasks that can be generated by each thread (step S113).
- For example, the processor core may determine that there is a bias when the total number of generable tasks for some thread is equal to or less than a predetermined number (for example, 0).
- step S113 if there is no bias in the total number of tasks that can be generated by each thread (“No” in step S113), the process proceeds to step S115.
- If there is a bias (“Yes” in step S113), the processor core changes the location of a context, that is, the search table in which the pointer referencing the context is stored (step S114).
- This reduces the bias in the total number of tasks that can be generated by each thread.
- Specifically, a pointer to a context usable by the thread with the largest total number of generable tasks is re-registered in the search table of a thread with a small total number of generable tasks.
- In step S115, the processor core puts the load distribution process to sleep for a predetermined time, and then returns the process to step S111.
- This load distribution process can distribute the load on each thread appropriately.
- the threads that use the context are changed based on the bias of the total number of tasks that can be generated by each thread.
- Alternatively, the load on each thread may be measured, and the threads that use a context may be changed based on that load.
- A cost calculation may also be performed for each thread, and the threads that use a context may be changed based on the cost.
- the following values can be considered as examples of cost calculation.
- The cost of a context is the product of the number of processing steps processed from that context and the number of tasks that can be generated from it, and the cost of a thread is the total cost of the contexts usable from that thread.
- the load balancing process may be executed by a thread for performing DB processing. For example, it may be executed at the end of the thread (when it is determined “None” at step S32) or at the end of the task (when it is determined “None” at step S37). In this case, the load distribution process executes steps S112 to S114.
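Steps S112 through S114 of the load-distribution pass can be sketched as below. The bias test used here (some thread has no generable tasks while another does) and the data layout are illustrative assumptions matching the "predetermined number (for example, 0)" example in the text.

```python
# Sketch of the load-distribution pass (FIG. 34, steps S112-S114): compute,
# per thread, the total number of tasks generable from its usable contexts;
# if the totals are skewed, move a context pointer from the busiest thread's
# search table to the idlest thread's search table.

def rebalance(search_tables):
    """search_tables: thread -> list of (context_name, generable_task_count)."""
    totals = {t: sum(n for _, n in ctxs)                      # S112: per-thread sums
              for t, ctxs in search_tables.items()}
    biased = min(totals.values()) <= 0 and max(totals.values()) > 0  # S113: bias?
    if biased:                                                # S114: move a pointer
        busiest = max(totals, key=totals.get)
        idlest = min(totals, key=totals.get)
        search_tables[idlest].append(search_tables[busiest].pop())
    return search_tables

tables = {"t1": [("c1", 4), ("c2", 2)], "t2": []}
tables = rebalance(tables)
```

A cost-based variant would replace the per-context count with steps_processed × generable_count, per the cost formula described above, without changing the structure of the pass.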
- Example 2 will be described. At that time, differences from the first embodiment will be mainly described, and description of common points with the first embodiment will be omitted or simplified.
- FIG. 35 shows a configuration of a computer system according to the second embodiment.
- Application server (hereinafter referred to as AP server) 3502 is connected to a computer (hereinafter referred to as DB server) 100 on which DBMS 141 operates so as to communicate via communication network 3512.
- the DB server 100 is connected to the external storage device 200 so as to be able to communicate via the communication network 300.
- a user terminal (client terminal) 3501 is connected to an AP server 3502 so as to be able to communicate via a communication network 3511.
- the DB server 100 executes a DBMS 141 that manages the DB 206.
- the external storage device 200 stores the DB 206.
- the AP server 3502 executes an AP that issues a query to the DBMS 141 executed by the DB server 100.
- the user terminal 3501 issues a request to the AP executed by the AP server 3502.
- a plurality of user terminals 3501 or AP servers 3502 may exist.
- the AP server management terminal 3503 is connected to the AP server 3502 via the communication network 3514.
- the DB server management terminal 3504 is connected to the DB server 100 via the communication network 3515.
- the storage management terminal 3505 is connected to the external storage device 200 via the communication network 3516.
- the AP server management terminal 3503 is a terminal that manages the AP server 3502.
- the DB server management terminal 3504 is a terminal that manages the DB server 100.
- the storage management terminal 3505 is a terminal that manages the external storage apparatus 200.
- the DB server administrator or user may make settings related to the DBMS 141 from the DB server management terminal 3504. Note that at least two of the management terminals 3503 to 3505 may be common (integrated). Moreover, at least two of the communication networks 3511, 3512, 3514, 3515, 3516, and 300 may be common (integrated).
- (S121) The user terminal 3501 issues a request (hereinafter, a user request) to the AP server 3502.
- (S122) The AP server 3502 generates a query according to the user request received in S121, and issues the generated query to the DB server 100.
- (S123) The DB server 100 accepts the query from the AP server 3502 and executes it. The DB server 100 issues the data input/output requests (for example, data read requests) necessary for executing the accepted query to the external storage device 200. The DB server 100 may issue a plurality of data input/output requests in parallel in the execution of one query, and may therefore make the request of S123 several times in parallel.
- (S124) The external storage device 200 responds to the DB server 100 regarding the data input/output requests issued in S123, and may make the response of S124 several times in parallel.
- (S125) The DB server 100 generates a query execution result and transmits it to the AP server 3502.
- (S126) The AP server 3502 receives the query execution result, and transmits a response to the user request received in S121, according to the execution result, to the user terminal 3501.
- DBMS Database management system
Claims (16)
- 1. A database management system that is realized by a computer having processor cores and manages a database, comprising: a query reception unit that receives a query to the database; a query execution plan generation unit that generates a query execution plan including information representing the processing steps necessary for executing the received query and the execution procedure of the processing steps; and a query execution unit that executes the received query based on the generated query execution plan, dynamically generating tasks for executing the processing steps in the execution of the received query and executing the dynamically generated tasks, wherein, in the execution of the received query, the query execution unit executes tasks in a plurality of threads executed by processor cores, executes a plurality of tasks in one thread executed by a processor core, and, when newly generating a task, generates a context and executes the generated task based on the generated context, the context being information that includes first information indicating which of the one or more processing steps represented by the query execution plan is the processing step at which the newly generated task starts execution, second information on the access destination of the data required for the processing step indicated by the first information, and third information on the data necessary for the newly generated task to generate a result.
- 2. The database management system according to claim 1, wherein the query execution plan generation unit determines whether or not a context related to each processing step is to be shared among a plurality of threads, and the query execution unit manages the context based on the result of the determination.
- 3. The database management system according to claim 2, wherein the query execution plan generation unit determines whether or not a context related to a processing step is to be shared among a plurality of threads based on the preceding or succeeding relationship of the processing step with other processing steps in the query execution plan.
- 4. The database management system according to claim 2, wherein a task assigned to a thread executed by a processor core issues a data read request to the database by asynchronous I/O, the processor core executing the thread executes another executable task after the data read request is issued in the task and before the reading of the data corresponding to the data read request is completed, and the processor core executing the thread resumes execution of the task after the reading of the data corresponding to the data read request is completed.
- 5. The database management system according to claim 2, wherein the threads to which tasks for executing the processing steps can be assigned are equal in number to the processor cores, and each thread is set to be executed by a different processor core.
- 6. The database management system according to claim 2, wherein, when the query execution plan includes a plurality of processing blocks containing processing steps executable in parallel, the query execution plan generation unit determines that a context related to the first processing step of a processing block is to be shared among a plurality of threads, and determines that contexts related to the other processing steps of the processing block are not to be shared among threads.
- 7. The database management system according to claim 2, wherein, when the query execution plan includes a plurality of processing blocks containing processing steps executable in parallel, the query execution plan generation unit determines that a context related to a processing step having a predetermined number or more of subsequent processing steps in the processing block is to be shared among a plurality of threads, and determines that a context related to a processing step having fewer than the predetermined number of subsequent processing steps is not to be shared among threads.
- 8. The database management system according to claim 2, wherein, when the number of contexts usable by one thread falls below a predetermined number, the query execution unit uses a context usable by another thread in a task assigned to the one thread.
- 9. The database management system according to claim 2, wherein, when the execution state relating to the plurality of threads reaches a predetermined state, the query execution unit changes a context usable by one thread into a context usable by another thread so that the difference in the number of usable contexts between threads becomes smaller than a predetermined number.
- 10. The database management system according to claim 9, wherein the predetermined state of the execution state of the plurality of threads is a state in which the difference in the number of usable contexts between the threads has become equal to or greater than a predetermined number.
- 11. The database management system according to claim 1, wherein the query execution unit determines whether or not a context is to be shared by a plurality of threads based on the execution state of the database management system, shares the context among a plurality of threads when it is determined that the context is to be shared among a plurality of threads, and has one thread use the context when it is determined that the context is not to be shared among a plurality of threads.
- 12. The database management system according to claim 11, wherein the execution state of the database management system is the number of tasks existing in the database management system, and the query execution unit determines that the context is to be shared by a plurality of threads when the number of existing tasks is equal to or less than a predetermined number, and determines that the context is not to be shared when the number of existing tasks is not equal to or less than the predetermined number.
- 13. The database management system according to claim 12, wherein the execution state of the database management system is the third information included in the context, and the query execution unit determines that the context is to be shared by a plurality of threads when the data amount of the third information included in the context is equal to or less than a predetermined amount, and determines that the context is not to be shared when the data amount of the third information included in the context is not equal to or less than the predetermined amount.
- 14. The database management system according to claim 1, wherein the query execution unit makes the context usable by any one thread, and, when the difference in the number of usable contexts between the threads becomes equal to or greater than a predetermined number, changes a context usable by one thread into a context usable by another thread so that the difference in the number of usable contexts between the threads becomes smaller than the predetermined number.
- 15. A computer comprising: a storage resource; and a control device including one or more processors connected to the storage resource and having one or more processor cores, wherein the control device receives a query to a database, generates a query execution plan including information representing the processing steps necessary for executing the received query and the execution procedure of the processing steps, executes the received query based on the generated query execution plan, dynamically generates tasks for executing the processing steps in the execution of the received query, and executes the dynamically generated tasks, and wherein, in the execution of the received query, the control device executes tasks in a plurality of threads executed by processor cores, executes a plurality of tasks in one thread executed by a processor core, and, when newly generating a task, generates a context and executes the generated task based on the generated context, the context being information that includes first information indicating which of the one or more processing steps represented by the query execution plan is the processing step at which the newly generated task starts execution, second information on the access destination of the data required for the processing step indicated by the first information, and third information on the data necessary for the newly generated task to generate a result.
- 16. A database management method for managing a database, comprising: (a) receiving a query to the database; (b) generating a query execution plan including information representing one or more processing steps necessary for executing the received query and an execution procedure of the one or more processing steps; and (c) executing the received query by dynamically generating, based on the generated query execution plan, tasks for executing the processing steps and executing the dynamically generated tasks, wherein, in (c), in the execution of the received query, tasks are executed in a plurality of threads executed by processor cores, a plurality of tasks are executed in one thread executed by a processor core, and, when a task is newly generated, a context is generated and the generated task is executed based on the generated context, the context being information that includes first information indicating which of the one or more processing steps represented by the query execution plan is the processing step at which the newly generated task starts execution, second information on the access destination of the data required for the processing step indicated by the first information, and third information on the data necessary for the newly generated task to generate a result.
Priority Applications (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2012/061436 WO2013161076A1 (ja) | 2012-04-27 | 2012-04-27 | データベース管理システム、計算機、データベース管理方法 |
JP2014512271A JP5858307B2 (ja) | 2012-04-27 | 2012-04-27 | データベース管理システム、計算機、データベース管理方法 |
EP12875146.8A EP2843559A4 (en) | 2012-04-27 | 2012-04-27 | DATABASE MANAGEMENT SYSTEM, COMPUTERS AND DATABASE MANAGEMENT PROCEDURES |
US14/397,076 US10417227B2 (en) | 2012-04-27 | 2012-04-27 | Database management system, computer, and database management method |
US16/531,256 US11636107B2 (en) | 2012-04-27 | 2019-08-05 | Database management system, computer, and database management method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2012/061436 WO2013161076A1 (ja) | 2012-04-27 | 2012-04-27 | データベース管理システム、計算機、データベース管理方法 |
Related Child Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/397,076 A-371-Of-International US10417227B2 (en) | 2012-04-27 | 2012-04-27 | Database management system, computer, and database management method |
US16/531,256 Continuation US11636107B2 (en) | 2012-04-27 | 2019-08-05 | Database management system, computer, and database management method |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2013161076A1 true WO2013161076A1 (ja) | 2013-10-31 |
Family
ID=49482442
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2012/061436 WO2013161076A1 (ja) | 2012-04-27 | 2012-04-27 | データベース管理システム、計算機、データベース管理方法 |
Country Status (4)
Country | Link |
---|---|
US (2) | US10417227B2 (ja) |
EP (1) | EP2843559A4 (ja) |
JP (1) | JP5858307B2 (ja) |
WO (1) | WO2013161076A1 (ja) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2017013701A1 (ja) * | 2015-07-17 | 2017-01-26 | 株式会社日立製作所 | 計算機システム及びデータベース管理方法 |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB2510426A (en) * | 2013-02-05 | 2014-08-06 | Ibm | Workload balancing in a distributed database |
JP6081031B2 (ja) * | 2014-09-17 | 2017-02-15 | 三菱電機株式会社 | 攻撃観察装置、及び攻撃観察方法 |
US9727648B2 (en) | 2014-12-19 | 2017-08-08 | Quixey, Inc. | Time-box constrained searching in a distributed search system |
WO2018219440A1 (en) | 2017-05-31 | 2018-12-06 | Huawei Technologies Co., Ltd. | System and method for dynamic determination of a number of parallel threads for a request |
JP7197794B2 (ja) | 2019-03-28 | 2022-12-28 | 富士通株式会社 | 情報処理装置および実行制御プログラム |
US20220012238A1 (en) * | 2020-07-07 | 2022-01-13 | AtScale, Inc. | Datacube access connectors |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2007034414A (ja) | 2005-07-22 | 2007-02-08 | Masaru Kiregawa | データベース管理システム及び方法 |
JP2007065978A (ja) * | 2005-08-31 | 2007-03-15 | Hitachi Ltd | 計算機システム及びデータベース管理システムプログラム |
JP2011159107A (ja) * | 2010-02-01 | 2011-08-18 | Nec Corp | スレッド数制限装置、スレッド数制限方法およびスレッド数制限プログラム |
WO2012026140A1 (ja) * | 2010-08-25 | 2012-03-01 | 株式会社日立製作所 | データベース処理方法、データベース処理システム及びデータベースサーバ |
Family Cites Families (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5742806A (en) * | 1994-01-31 | 1998-04-21 | Sun Microsystems, Inc. | Apparatus and method for decomposing database queries for database management system including multiprocessor digital data processing system |
US5893912A (en) | 1997-08-13 | 1999-04-13 | International Business Machines Corporation | Thread context manager for relational databases, method and computer program product for implementing thread context management for relational databases |
US6205441B1 (en) * | 1999-03-31 | 2001-03-20 | Compaq Computer Corporation | System and method for reducing compile time in a top down rule based system using rule heuristics based upon the predicted resulting data flow |
US7966475B2 (en) * | 1999-04-09 | 2011-06-21 | Rambus Inc. | Parallel data processing apparatus |
US6678672B1 (en) * | 2000-05-31 | 2004-01-13 | Ncr Corporation | Efficient exception handling during access plan execution in an on-line analytic processing system |
US6996556B2 (en) * | 2002-08-20 | 2006-02-07 | International Business Machines Corporation | Metadata manager for database query optimizer |
US7383389B1 (en) * | 2004-04-28 | 2008-06-03 | Sybase, Inc. | Cache management system providing improved page latching methodology |
US8126870B2 (en) | 2005-03-28 | 2012-02-28 | Sybase, Inc. | System and methodology for parallel query optimization using semantic-based partitioning |
US20060294058A1 (en) * | 2005-06-28 | 2006-12-28 | Microsoft Corporation | System and method for an asynchronous queue in a database management system |
US8032522B2 (en) * | 2006-08-25 | 2011-10-04 | Microsoft Corporation | Optimizing parameterized queries in a relational database management system |
US9009187B2 (en) * | 2006-12-19 | 2015-04-14 | Ianywhere Solutions, Inc. | Assigning tasks to threads requiring limited resources using programmable queues |
US20120066683A1 (en) * | 2010-09-09 | 2012-03-15 | Srinath Nadig S | Balanced thread creation and task allocation |
US8336051B2 (en) * | 2010-11-04 | 2012-12-18 | Electron Database Corporation | Systems and methods for grouped request execution |
KR20120055089A (ko) * | 2010-11-23 | 2012-05-31 | 이화여자대학교 산학협력단 | 부하분산을 이용한 병렬형 충돌검사 방법과 병렬형 거리계산 방법 |
US8473484B2 (en) * | 2011-06-01 | 2013-06-25 | International Business Machines Corporation | Identifying impact of installing a database patch |
US8417689B1 (en) * | 2011-11-21 | 2013-04-09 | Emc Corporation | Programming model for transparent parallelization of combinatorial optimization |
US8914353B2 (en) * | 2011-12-20 | 2014-12-16 | Sap Se | Many-core algorithms for in-memory column store databases |
US9703566B2 (en) * | 2011-12-29 | 2017-07-11 | Intel Corporation | Sharing TLB mappings between contexts |
US8683296B2 (en) * | 2011-12-30 | 2014-03-25 | Streamscale, Inc. | Accelerated erasure coding system and method |
JP5858308B2 (ja) * | 2012-05-24 | 2016-02-10 | 株式会社日立製作所 | データベース管理システム、計算機、データベース管理方法 |
US9146609B2 (en) * | 2012-11-20 | 2015-09-29 | International Business Machines Corporation | Thread consolidation in processor cores |
- 2012
- 2012-04-27 US US14/397,076 patent/US10417227B2/en active Active
- 2012-04-27 JP JP2014512271A patent/JP5858307B2/ja active Active
- 2012-04-27 WO PCT/JP2012/061436 patent/WO2013161076A1/ja active Application Filing
- 2012-04-27 EP EP12875146.8A patent/EP2843559A4/en active Pending
- 2019
- 2019-08-05 US US16/531,256 patent/US11636107B2/en active Active
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2007034414A (ja) | 2005-07-22 | 2007-02-08 | Masaru Kiregawa | データベース管理システム及び方法 |
JP2007065978A (ja) * | 2005-08-31 | 2007-03-15 | Hitachi Ltd | 計算機システム及びデータベース管理システムプログラム |
JP2011159107A (ja) * | 2010-02-01 | 2011-08-18 | Nec Corp | スレッド数制限装置、スレッド数制限方法およびスレッド数制限プログラム |
WO2012026140A1 (ja) * | 2010-08-25 | 2012-03-01 | 株式会社日立製作所 | データベース処理方法、データベース処理システム及びデータベースサーバ |
Non-Patent Citations (2)
Title |
---|
HIDEOMI IDEI ET AL.: "QUERY PLAN RIYO SAKIYOMI GIJUTSU NI OKERU TAJU SHORI JIKKOJI NO SEINO MODEL KENTO - PERFORMANCE MODEL OF PREFETCH TECHNOLOGY USING QUERY PLAN ON MULTIPLEX QUERY EXECUTION", THE INSTITUTE OF ELECTRONICS, INFORMATION AND COMMUNICATION ENGINEERS DAI 18 KAI DATA KOGAKU WORKSHOP RONBUNSHU, vol. DEWS2007 E2-4, no. E2-4, 1 June 2007 (2007-06-01), XP055171025 * |
See also references of EP2843559A4 |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2017013701A1 (ja) * | 2015-07-17 | 2017-01-26 | 株式会社日立製作所 | 計算機システム及びデータベース管理方法 |
JPWO2017013701A1 (ja) * | 2015-07-17 | 2018-03-22 | 株式会社日立製作所 | 計算機システム及びデータベース管理方法 |
US11321302B2 (en) | 2015-07-17 | 2022-05-03 | Hitachi, Ltd. | Computer system and database management method |
Also Published As
Publication number | Publication date |
---|---|
US10417227B2 (en) | 2019-09-17 |
JP5858307B2 (ja) | 2016-02-10 |
US20190354527A1 (en) | 2019-11-21 |
EP2843559A1 (en) | 2015-03-04 |
JPWO2013161076A1 (ja) | 2015-12-21 |
EP2843559A4 (en) | 2016-01-13 |
US11636107B2 (en) | 2023-04-25 |
US20150112967A1 (en) | 2015-04-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP5858307B2 (ja) | データベース管理システム、計算機、データベース管理方法 | |
EP3108374B1 (en) | Data management systems and methods | |
JP4659888B2 (ja) | データベース処理システム、計算機及びデータベース処理方法 | |
Richter et al. | Towards zero-overhead static and adaptive indexing in Hadoop | |
JP5950267B2 (ja) | データベース管理装置、データベース管理方法及び記憶媒体 | |
US11086841B1 (en) | Streams on shared database objects | |
Wang et al. | Fast and concurrent {RDF} queries using {RDMA-assisted}{GPU} graph exploration | |
US20180075080A1 (en) | Computer System and Database Management Method | |
Tang et al. | A data skew oriented reduce placement algorithm based on sampling | |
JP6168635B2 (ja) | データベース管理システム、計算機、データベース管理方法 | |
JP6707797B2 (ja) | データベース管理システム及びデータベース管理方法 | |
US7756827B1 (en) | Rule-based, event-driven, scalable data collection | |
JP6108418B2 (ja) | データベース管理システム、計算機、データベース管理方法 | |
US20210149903A1 (en) | Successive database record filtering on disparate database types | |
US11561953B2 (en) | Cosharding and randomized cosharding | |
JP2009223572A (ja) | 変換装置、サーバシステム、変換方法およびプログラム | |
JP5978297B2 (ja) | 管理システム及び管理方法 | |
Chang et al. | Resilient distributed computing platforms for big data analysis using Spark and Hadoop | |
US20160335321A1 (en) | Database management system, computer, and database management method | |
Spiegelberg et al. | Hyperspecialized Compilation for Serverless Data Analytics. | |
Yu et al. | Consistent and Efficient Batch Operations for NoSQL Databases with Hybrid Timestamp | |
Yuan et al. | High Performance RDF Updates with TripleBit+ | |
Shen et al. | Efficient Query Algorithm of Coallocation-Parallel-Hash-Join in the Cloud Data Center |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 12875146 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2014512271 Country of ref document: JP Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2012875146 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 14397076 Country of ref document: US |