WO2020125367A1 - Data query method and database proxy - Google Patents

Data query method and database proxy

Info

Publication number
WO2020125367A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
database
response data
node
distributed
Prior art date
Application number
PCT/CN2019/121470
Other languages
English (en)
French (fr)
Inventor
丁岩
马玉伟
Original Assignee
中兴通讯股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中兴通讯股份有限公司
Publication of WO2020125367A1 publication Critical patent/WO2020125367A1/zh

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20: Information retrieval of structured data, e.g. relational data
    • G06F 16/24: Querying
    • G06F 16/245: Query processing
    • G06F 16/2455: Query execution
    • G06F 16/2457: Query processing with adaptation to user needs
    • G06F 16/27: Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Definitions

  • Embodiments of the present disclosure are related to but not limited to distributed database technology.
  • The table data in a distributed database is distributed among different data nodes according to a distribution strategy. If the executed statement involves sorting and the data to be read spans multiple data nodes, the distributed data service proxy server PROXY (i.e., the database proxy) must hold all of the response data returned by each data node in local memory, sort it as a whole, and send it to the client. When tens of millions or even hundreds of millions of rows are involved, performing the sort places high demands on server memory.
  • Moreover, when clients execute sorting statements through the PROXY with high concurrency, excessive memory usage can cause the PROXY process to be killed (terminated) by the kernel oom killer (Out-Of-Memory killer) mechanism, affecting normal business use.
  • In view of this, an embodiment of the present disclosure provides a data query method, including: when the distributed database returns response data, the database proxy fetches the response data from the receive buffer and stores it locally; when the amount of response data obtained from every data node of the distributed database reaches a preset value, the database proxy sorts the locally stored response data and sends it to the client.
  • An embodiment of the present disclosure also provides a database proxy, including: an acquisition and storage unit configured to, when the response data returned by any data node of the distributed database is ready, fetch the ready response data from the receive buffer and store it locally; and a sorting and sending unit configured to, when the amount of response data obtained from every data node of the distributed database reaches a preset value, sort the locally stored response data and send it to the client.
  • An embodiment of the present disclosure also provides a database proxy, including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the computer program, when executed by the processor, implements the data query method described above.
  • An embodiment of the present disclosure also provides a computer-readable storage medium having an information processing program stored on it, where the information processing program, when executed by a processor, implements the steps of the data query method described above.
  • FIG. 1 is a schematic flowchart of a data query method provided by an embodiment of the present disclosure.
  • FIG. 2 is a schematic flowchart of a data query method provided by an embodiment of the present disclosure.
  • FIG. 3 is a schematic diagram of a logical process of a data query method provided by an embodiment of the present disclosure.
  • FIG. 4 is a signaling diagram of a data query method provided by an embodiment of the present disclosure.
  • FIG. 5 is a schematic structural diagram of a database agent provided by an embodiment of the present disclosure.
  • In a distributed database, data is distributed across multiple data nodes.
  • When a client queries data, the database proxy needs to sort the queried data locally.
  • When sorting statements are executed through the PROXY with high concurrency, the local memory that holds the query data takes up a large proportion of machine memory.
  • When memory runs short, the PROXY process is easily killed by the kernel oom killer mechanism, affecting normal business use.
  • In view of this, an embodiment of the present disclosure proposes a new data query scheme that uses a preset value (for example, the number of data rows read at a time) to control the memory used by sorted read statements on the PROXY side, thereby improving system availability.
  • FIG. 1 is a schematic flowchart of a data query method according to an embodiment of the present disclosure. As shown in FIG. 1, the method includes steps 101 and 102.
  • Step 101: when the distributed database returns response data, the database proxy fetches the response data from the receive buffer and stores it locally.
  • Step 102: when the amount of response data obtained from every data node of the distributed database reaches a preset value, the database proxy sorts the locally stored response data and sends it to the client.
  • Compared with the related art, the embodiments of the present disclosure use a preset value to control the memory used by the database proxy PROXY for sorted reads, thereby improving system availability.
  • Before the distributed database returns response data, the method further includes: the database proxy receives a query request sent by the client, where the query request is used to obtain data from the distributed database and carries a sorting requirement; the database proxy splits the query request into distributed requests corresponding to the data nodes of the distributed database and sends them to the corresponding data nodes, where each distributed request is used to obtain data from the corresponding data node and carries the sorting requirement.
  • Before the database proxy receives the query request sent by the client, the method further includes: configuring the preset value in advance, and allocating memory of the preset size to each data node of the distributed database.
  • Fetching the response data from the receive buffer for local storage includes: when the response data returned by any data node of the distributed database is ready, the database proxy randomly assigns an idle thread from the thread pool to fetch the ready response data from the receive buffer and store it locally.
  • Sorting the locally stored response data and sending it to the client includes: the thread in the database proxy that fetched the ready response data of the last data node of the distributed database sorts the locally stored response data, generates the largest continuous row data list, and sends it to the client.
  • After that, the method further includes: notifying all threads to continue fetching the remaining response data of all data nodes; when the amount of response data obtained from every data node of the distributed database again reaches the preset value, sorting the locally stored response data again and sending it to the client; and determining whether the database proxy has received all the response data of all data nodes, and if not, returning to the step of notifying all threads to continue fetching the remaining response data.
  • Sending the distributed requests to the corresponding data nodes includes: the database proxy sends the distributed request corresponding to each data node of the distributed database to that node over the TCP (Transmission Control Protocol) link corresponding to that node.
  • The method further includes: the database proxy listens on the socket objects of the corresponding TCP links and waits for the response data of each data node, where response data of a data node appearing in the epoll ready queue indicates that the response data of that data node is ready.
  • FIG. 2 is a schematic flowchart of a data query method provided by an embodiment of the present disclosure. As shown in FIG. 2, the method includes steps 201 to 204.
  • Step 201: the database proxy sends distributed query requests to all data nodes of the distributed database and waits for response data.
  • Before step 201, the database proxy receives the query request sent by the client and splits it into distributed query requests corresponding to the data nodes of the distributed database.
  • Specifically, after the database proxy receives a select order by request (i.e., the query request) sent by the client, it generates a corresponding select order by execution statement (i.e., a distributed query request) for each data node of the distributed database and sends it to each data node separately.
  • Step 202: when the distributed database returns response data, the database proxy fetches the response data from the receive buffer and stores it locally.
  • Before the database proxy receives the query request sent by the client, the preset value is configured in advance and memory of the preset size is allocated to each data node of the distributed database.
  • In this embodiment, the preset value is the number of data rows read at a time.
  • Before caching data, memory of the preset size is requested for each data node and placed into a free queue managed as a doubly linked list, which facilitates memory reuse and reduces memory fragmentation.
  • The database proxy uses multiple threads in the thread pool to cache the response data of each database node (also called data node, or node), caching data from the socket buffer into local storage according to the preset value.
  • Working in units of row data, the database proxy calls idle threads in the thread pool to take buffers from the free queue in turn for the response data stored by each data node (re-allocating dynamically when the data length exceeds the pre-requested memory size, i.e., the preset value) and places them into a used queue for management, which facilitates subsequent sorting.
  • Step 203: when the amount of response data obtained from every data node of the distributed database reaches the preset value, the database proxy sorts the locally stored response data and sends it to the client.
  • Before the cached data is used, the cached data of each data node is sorted in the used queue to generate a set of the largest continuous row data list.
  • When the cached data is used, the buffers are removed from the used queue according to the generated row data list and returned to the free queue.
  • Because of different network latencies, the database proxy finishes receiving the response data of different data nodes at different times, and the response data of some data node is the last to finish.
  • The thread that receives the response data of the data node that finishes last is responsible for sorting: it generates the largest continuous row data list, sends all the data in the list to the client, and re-notifies all threads to continue receiving the remaining data of all nodes.
  • The data processing thread puts all tasks into the ready queue to wait for further data fetching and notifies the threads in the thread pool to continue executing tasks.
  • Step 204: steps 202 and 203 are repeated until the database proxy has received the response data of all data nodes and sent it to the client.
  • In one embodiment, the client queries data table t1, whose data is distributed across three nodes (group1, group2, group3). The preset value configured in this embodiment is a single-read row count of 1000 rows, and memory for 1000 rows of data is allocated in advance for each of group1, group2, and group3.
  • FIG. 3 is a schematic diagram of a logical process of a data query method provided by an embodiment of the present disclosure. As shown in FIG. 3, the method includes steps 301 to 306.
  • Step 301: the client sends a select order by statement to the data nodes group1, group2, and group3 through the database proxy (proxy).
  • Step 302: the proxy listens, via epoll, on the TCP links over which the select order by statement was issued in step 301, and waits for ready data.
  • Step 303: when the response data returned by any of the data nodes group1, group2, and group3 is ready, an idle thread in the thread pool is called to fetch the ready data from the receive buffer and store it locally.
  • Here, "ready" means that a data node has finished fetching row data up to the flow-control size (1000 rows).
  • "Local storage" means storing the fetched ready response data in local memory.
  • Otherwise, the flow returns to step 302 to wait for ready data.
  • Step 304: for any one of the three nodes group1, group2, and group3, as long as the data of the other two data nodes has not been fully fetched, continue waiting for them to complete; otherwise, the staged data fetching has ended, sorting is performed, and step 305 is executed.
  • A data node completing its data fetch means that the amount of data it has stored locally reaches the size of 1000 rows, or that its data fetch has ended.
  • The end of staged data fetching means that the locally stored data of each of the three data nodes group1, group2, and group3 reaches the size of 1000 rows or its data fetch has ended.
  • Step 305: the data stored locally for the three nodes group1, group2, and group3 is sorted by the value of the order by field, the current largest continuous run of data is produced, and that data is then sent to the requesting client according to the mysql protocol.
  • Step 306: if all the data of the three nodes group1, group2, and group3 has been fetched, execution of the current statement ends; otherwise, the flow of steps 302 to 305 continues.
  • the client queries the data table t1.
  • A row of table t1 is about 128 bytes; there are 100 million rows in total, for a total pre-stored data volume of 12.8 GB.
  • The entire t1 table is distributed, via a hash strategy, across the three nodes group1 (database 1), group2 (database 2), and group3 (database 3), with each group node holding about 4 GB of data.
  • The preset value, i.e., the number of rows read in a single pass, is pre-configured as 1000 rows, and memory for 1000 rows of data is allocated in advance for each of group1, group2, and group3.
  • With memory for 1000 rows reserved for each of the three nodes, each data node fetches one thousand rows per stage, so the staged data requires 384 KB of memory.
  • FIG. 4 is a signaling diagram of the data query method provided in this embodiment. As shown in FIG. 4, the method includes: step 401 to step 405.
  • Step 401: the client sends a select order by request (SQL request) to the database proxy (Proxy).
  • Step 402: the Proxy splits the client request into distributed requests, generating a request (SQL request) for each data node (group1, group2, group3), and sends it to the corresponding data node over that node's TCP link; meanwhile, the Proxy listens on the socket objects of all the sub-request TCP links involved in the current request and waits for the nodes' data responses.
  • Step 403: when response data from the group1, group2, and group3 nodes appears in the epoll ready queue, the Proxy randomly assigns idle threads from the thread pool (thread 1, thread 2, thread 3) to fetch the data from the TCP receive buffers and store it locally; when every data node has fetched row data reaching the size of 1000 rows, all the data to be fetched in the current stage has been obtained.
  • The remaining response data continues to be fetched in the next stage (in fact, some threads will already have begun repeating the operation of step 403 to fetch row data into local storage).
  • Whenever the data of a data node is ready, an idle thread is notified to store the ready data into local memory.
  • Once the data a data node requires for the current stage (1000 rows) has been stored, the current thread moves on to storing the data of the next stage, so no single thread starves.
  • Step 404: the data stored in the local memory of the three nodes (group1, group2, group3) is sorted by the value of the order by field (buffer data sorting) to generate a set of the largest continuous row data list.
  • The largest continuous row data list can be generated by sorting the used queue of the cached data.
  • The data of each data node is kept ordered by the statement itself; that is, the data result returned by a single node is already ordered before this sort. The sorting here sorts the data of multiple data nodes as a whole, and it is a sort performed at an intermediate stage (for example, it is quite possible that after a stage sort across multiple data nodes, only part of the fetched data can be used, while some data must wait for subsequent data to determine whether it is continuous). For example, in this embodiment, the 1000 rows of a single node are already ordered when the database returns them.
  • Two situations may need handling: (1) a single data node does not need to be sorted, since its data is already an ordered sequence; (2) sorting across multiple data nodes needs to generate the currently known ordered continuous sequence, and whatever cannot yet be judged is left for the next sort.
  • Step 405: the Proxy sends the current largest continuous list of row data (the staged result set) to the requesting client according to the mysql protocol.
  • The table data is distributed across multiple data nodes, so the corresponding sorting also involves the response data of multiple data nodes.
  • The response data of each data node is returned in order at fetch time.
  • The ordering of a single data node's data is enforced by the SQL statement; in this way, once the ordered data of multiple data nodes has been stored locally for a given stage, staged sorting can be achieved according to a certain sorting strategy.
  • The staged data of a single data node is fetched by a separate thread, so the staged data of multiple data nodes is processed by multiple threads, increasing processing speed.
  • The data processed in each stage can be mutually independent; that is, while the data of the first stage is being sorted, the data of the second stage may also be being sorted, which increases the concurrency of data processing.
  • Since the table data involves multiple data nodes, it is quite possible that the data of some data nodes is already ready while other data is still being received or waiting to be received; thus a thread that has handled a portion of ready data can go on to fetch data of other stages, making full use of resources.
  • FIG. 5 is a schematic structural diagram of a database agent provided by an embodiment of the present disclosure. As shown in FIG. 5, the database agent includes an acquisition and storage unit and a sorting and sending unit.
  • The acquisition and storage unit is configured to, when the response data returned by any data node of the distributed database is ready, fetch the ready response data from the receive buffer and store it locally.
  • the sorting and sending unit is configured to sort the locally stored response data and send it to the client when the size of the response data obtained from each data node in the distributed database reaches a preset value.
  • the database agent also includes a receiving unit and a splitting and sending unit.
  • the receiving unit is configured to receive a query request sent by the client, where the query request is used to obtain data from the distributed database and carry a sorting requirement.
  • the splitting and sending unit is configured to split the query request into distributed query requests corresponding to each data node in the distributed database and send them to the corresponding data nodes, where the distributed query request is used to The corresponding data node obtains the data and carries the sorting requirements.
  • the database agent also includes a configuration unit.
  • the configuration unit is configured to configure the preset value in advance before the database agent receives the query request sent by the client, and allocate memory of the preset value to each data node in the distributed database.
  • The acquisition and storage unit is specifically configured such that, when the response data returned by any data node of the distributed database is ready, the database proxy randomly assigns an idle thread from the thread pool to fetch the ready response data from the receive buffer and store it locally.
  • The sorting and sending unit is specifically configured such that the thread in the database proxy that fetched the ready response data of the last data node of the distributed database sorts the locally stored response data, generates the largest continuous row data list, and sends it to the client.
  • the database agent also includes a notification unit and a sorting and sending unit.
  • the notification unit is configured to notify all threads to continue to obtain the remaining response data of all data nodes.
  • The sorting and sending unit is further configured to, when the amount of response data obtained from every data node of the distributed database again reaches the preset value, sort the locally stored response data again and send it to the client, and to determine whether the database proxy has received all the response data of all the data nodes; if not, the step of notifying all threads to continue fetching the remaining response data of all the data nodes is executed again.
  • The splitting and sending unit is specifically configured to send the distributed query request corresponding to each data node of the distributed database to that node over the TCP link corresponding to that node.
  • the database agent also includes a listening unit configured to listen to the socket object of the corresponding tcp link and wait for the response data of each data node.
  • An embodiment of the present disclosure also provides a database proxy, including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the computer program, when executed by the processor, implements any one of the data query methods described above.
  • An embodiment of the present disclosure also provides a computer-readable storage medium that stores an information processing program, where the information processing program, when executed by a processor, implements the steps of any one of the data query methods described above.
  • The term computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for the storage of information such as computer-readable instructions, data structures, program modules, or other data.
  • Computer storage media include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technologies, CD-ROM, digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information and can be accessed by a computer.
  • Communication media typically embody computer-readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism, and may include any information delivery media.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Computing Systems (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

A data query method and a database proxy. The method includes: when a distributed database returns response data, the database proxy fetches the response data from a receive buffer and stores it locally (101); when the amount of response data obtained from every data node of the distributed database reaches a preset value, the database proxy sorts the locally stored response data and sends it to the client (102).

Description

Data query method and database proxy
Technical Field
Embodiments of the present disclosure relate to, but are not limited to, distributed database technology.
Background
In a distributed database, table data is distributed among different data nodes according to a distribution strategy. If an executed statement involves sorting and the data to be read spans multiple data nodes, the distributed data service proxy server PROXY (i.e., the database proxy) must keep all of the response data returned by each data node in local memory, sort it as a whole, and send it to the client. When tens of millions or even hundreds of millions of rows are involved, performing the sort places high demands on server memory.
Moreover, when clients execute sorting statements through the PROXY with high concurrency, excessive memory usage can cause the PROXY process to be killed (terminated) by the kernel oom killer (Out-Of-Memory killer) mechanism, affecting normal business use.
Summary
In view of this, an embodiment of the present disclosure provides a data query method, including: when the distributed database returns response data, the database proxy fetches the response data from the receive buffer and stores it locally; when the amount of response data obtained from every data node of the distributed database reaches a preset value, the database proxy sorts the locally stored response data and sends it to the client.
An embodiment of the present disclosure further provides a database proxy, including: an acquisition and storage unit configured to, when the response data returned by any data node of the distributed database is ready, fetch the ready response data from the receive buffer and store it locally; and a sorting and sending unit configured to, when the amount of response data obtained from every data node of the distributed database reaches a preset value, sort the locally stored response data and send it to the client.
An embodiment of the present disclosure further provides a database proxy, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the computer program, when executed by the processor, implements the data query method described above.
An embodiment of the present disclosure further provides a computer-readable storage medium storing an information processing program, where the information processing program, when executed by a processor, implements the steps of the data query method described above.
Brief Description of the Drawings
The accompanying drawings are provided for a further understanding of the technical solution of the present disclosure and constitute a part of the specification. Together with the embodiments of the present application, they serve to explain the technical solution of the present disclosure and do not constitute a limitation on it.
FIG. 1 is a schematic flowchart of a data query method provided by an embodiment of the present disclosure.
FIG. 2 is a schematic flowchart of a data query method provided by an embodiment of the present disclosure.
FIG. 3 is a schematic diagram of the logical process of a data query method provided by an embodiment of the present disclosure.
FIG. 4 is a signaling diagram of a data query method provided by an embodiment of the present disclosure.
FIG. 5 is a schematic structural diagram of a database proxy provided by an embodiment of the present disclosure.
Detailed Description
To make the objectives, technical solutions, and advantages of the present disclosure clearer, embodiments of the present disclosure are described in detail below with reference to the accompanying drawings. It should be noted that, provided there is no conflict, the embodiments of the present application and the features in the embodiments may be combined with one another arbitrarily.
The steps shown in the flowcharts of the drawings may be executed in a computer system such as a set of computer-executable instructions. Moreover, although a logical order is shown in the flowcharts, in some cases the steps shown or described may be executed in an order different from the one given here.
In a distributed database, data is distributed across multiple data nodes. When a client queries data, the database proxy needs to sort the queried data locally. When sorting statements are executed through the PROXY with high concurrency, the local memory holding the query data occupies a large proportion of machine memory; when memory runs short, the process is easily killed by the kernel oom killer mechanism, affecting normal business use.
In view of this, an embodiment of the present disclosure proposes a new data query scheme that uses a preset value (for example, the number of data rows read at a time) to control the memory used by sorted read statements on the PROXY side, thereby improving system availability.
FIG. 1 is a schematic flowchart of a data query method provided by an embodiment of the present disclosure. As shown in FIG. 1, the method includes step 101 and step 102.
Step 101: when the distributed database returns response data, the database proxy fetches the response data from the receive buffer and stores it locally.
Step 102: when the amount of response data obtained from every data node of the distributed database reaches the preset value, the database proxy sorts the locally stored response data and sends it to the client.
Compared with the related art, the embodiments of the present disclosure use a preset value to control the memory used by the database proxy PROXY for sorted reads, thereby improving system availability.
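Steps 101 and 102 amount to a flow-controlled loop: fetch at most a preset number of rows from every node, merge, flush the part that is already in global order, and repeat. The following single-threaded Python sketch illustrates that loop; it is not the multi-threaded PROXY described later, and node_cursors, sort_key, batch_rows (the preset value), and send_to_client are assumed placeholders for DB-API-style cursors that have already executed the per-node ORDER BY statements and for the proxy's own plumbing.

    import heapq

    def staged_sorted_query(node_cursors, sort_key, batch_rows, send_to_client):
        """Flow-controlled sorted read across shards: the proxy holds only about
        len(node_cursors) * batch_rows rows at a time instead of the full result set."""
        carry = []                                   # ordered rows held back from the previous stage
        exhausted = set()
        while len(exhausted) < len(node_cursors):
            stage = {}                               # freshly fetched, per-node ordered rows
            for node, cursor in node_cursors.items():
                if node in exhausted:
                    continue
                rows = cursor.fetchmany(batch_rows)  # node returns rows already ORDER BY-sorted
                if len(rows) < batch_rows:
                    exhausted.add(node)
                stage[node] = rows
            # Rows are safe to emit only up to the smallest "last row" key among nodes
            # that may still return data; a later fetch from such a node can never
            # produce anything smaller than that node's last key seen so far.
            pending = [sort_key(rows[-1]) for node, rows in stage.items()
                       if rows and node not in exhausted]
            bound = min(pending) if pending else None
            merged = list(heapq.merge(carry, *stage.values(), key=sort_key))
            if bound is None:
                cut = len(merged)
            else:
                cut = next((i for i, r in enumerate(merged) if sort_key(r) > bound),
                           len(merged))
            send_to_client(merged[:cut])             # staged result set, in global order
            carry = merged[cut:]                     # already sorted; re-merged in the next stage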
Before the distributed database returns response data, the method further includes: the database proxy receives a query request sent by the client, where the query request is used to obtain data from the distributed database and carries a sorting requirement; the database proxy splits the query request into distributed requests corresponding to the data nodes of the distributed database and sends them to the corresponding data nodes, where each distributed request is used to obtain data from the corresponding data node and carries the sorting requirement.
Before the database proxy receives the query request sent by the client, the method further includes: configuring the preset value in advance, and allocating memory of the preset size to each data node of the distributed database.
Fetching the response data from the receive buffer and storing it locally when the distributed database returns response data includes: when the response data returned by any data node of the distributed database is ready, the database proxy randomly assigns an idle thread from the thread pool to fetch the ready response data from the receive buffer and store it locally.
Sorting the locally stored response data and sending it to the client includes: the thread in the database proxy that fetched the ready response data of the last data node of the distributed database sorts the locally stored response data, generates the largest continuous row data list, and sends it to the client.
After the largest continuous row data list is generated and sent to the client, the method further includes: notifying all threads to continue fetching the remaining response data of all data nodes; when the amount of response data obtained again from every data node of the distributed database reaches the preset value, sorting the locally stored response data again and sending it to the client; and determining whether the database proxy has received all the response data of all data nodes, and if not, returning to the step of notifying all threads to continue fetching the remaining response data of all data nodes.
Sending the distributed requests corresponding to the data nodes of the distributed database to the corresponding data nodes includes: the database proxy sends the distributed request corresponding to each data node of the distributed database to that node over the TCP (Transmission Control Protocol) link corresponding to that node.
The method further includes: the database proxy listens on the socket objects of the corresponding TCP links and waits for the response data of each data node, where response data of a data node appearing in the epoll ready queue indicates that the response data of that data node is ready.
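A minimal sketch of this listening step, using Python's selectors module (epoll-backed on Linux); node_sockets, handle_ready, and all_received are assumed stand-ins for the proxy's per-node sub-request sockets, the routine that drains a ready socket into local storage, and the check for the end of the response.

    import selectors
    from concurrent.futures import ThreadPoolExecutor

    def wait_for_node_data(node_sockets, handle_ready, all_received, workers=8):
        """Listen on every sub-request TCP socket and hand ready ones to worker threads.

        node_sockets : dict of node name -> connected TCP socket for that node's sub-request.
        handle_ready : callable(node, sock) that reads the receive buffer into local storage.
        all_received : callable() -> bool, true once every node's response has been received.
        """
        sel = selectors.DefaultSelector()                # uses epoll on Linux
        for node, sock in node_sockets.items():
            sock.setblocking(False)
            sel.register(sock, selectors.EVENT_READ, data=node)
        with ThreadPoolExecutor(max_workers=workers) as pool:
            while not all_received():
                for key, _ in sel.select(timeout=1.0):   # a hit here means that node's data is ready
                    pool.submit(handle_ready, key.data, key.fileobj)
                    # A production proxy would unregister the socket while a worker drains it,
                    # so the same readiness event is not dispatched twice.
        sel.close()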
The technical solution provided by the above embodiments is described in detail below through several specific embodiments.
FIG. 2 is a schematic flowchart of a data query method provided by an embodiment of the present disclosure. As shown in FIG. 2, the method includes steps 201 to 204.
Step 201: the database proxy sends distributed query requests to all data nodes of the distributed database and waits for response data.
Before step 201, the database proxy receives a query request sent by the client and splits the query request into distributed query requests corresponding to the data nodes of the distributed database.
Specifically, for example, after the database proxy receives a select order by request (i.e., a query request) sent by the client, the database proxy generates a corresponding select order by execution statement (i.e., a distributed query request) for each data node of the distributed database and sends it to each data node separately.
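What the split might look like for a sharded SELECT ... ORDER BY: the same ordering is pushed down to every shard so that each node returns its rows pre-sorted and the proxy only has to merge. The table, filter, and node names below are illustrative, not taken from the patent.

    def split_order_by_query(where_clause, order_col, nodes):
        """Build one per-node ORDER BY statement from a client query such as
        SELECT * FROM t1 WHERE <filter> ORDER BY <col>."""
        template = "SELECT * FROM t1 WHERE {w} ORDER BY {c}"
        return {node: template.format(w=where_clause, c=order_col) for node in nodes}

    requests = split_order_by_query("a > 0", "a", ["group1", "group2", "group3"])
    # {'group1': 'SELECT * FROM t1 WHERE a > 0 ORDER BY a',
    #  'group2': 'SELECT * FROM t1 WHERE a > 0 ORDER BY a',
    #  'group3': 'SELECT * FROM t1 WHERE a > 0 ORDER BY a'}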
Step 202: when the distributed database returns response data, the database proxy fetches the response data from the receive buffer and stores it locally.
Before the database proxy receives the query request sent by the client, the preset value is configured in advance and memory of the preset size is allocated to each data node of the distributed database. In this embodiment, the preset value is the number of data rows read at a time.
Specifically, before caching data, memory of the preset size is requested for each data node and placed into a free queue managed as a doubly linked list, which facilitates memory reuse and reduces memory fragmentation.
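One possible shape of this free-queue/used-queue management, assuming rows are cached as byte buffers; collections.deque stands in for the doubly linked list. This is a sketch, not the patent's data structure.

    from collections import deque

    class NodeBufferPool:
        """Per-node row cache: buffers cycle between a pre-allocated free queue and a
        used queue, so steady-state operation reuses memory and limits fragmentation."""

        def __init__(self, preset_rows, row_bytes):
            self.free = deque(bytearray(row_bytes) for _ in range(preset_rows))
            self.used = deque()                      # (buffer, row_length) in arrival order

        def store_row(self, row: bytes):
            """Called by an idle worker thread for each row pulled from the receive buffer."""
            if self.free and len(row) <= len(self.free[0]):
                buf = self.free.popleft()
                buf[:len(row)] = row
            else:                                    # row exceeds the pre-requested size:
                buf = bytearray(row)                 # fall back to a dynamic allocation
            self.used.append((buf, len(row)))

        def release_stage(self):
            """Return this stage's buffers to the free queue once the sorted rows
            have been sent to the client."""
            while self.used:
                buf, _ = self.used.popleft()
                self.free.append(buf)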
The database proxy uses multiple threads in the thread pool to cache the response data of each database node (also called data node, or node), caching data from the socket buffer into local storage according to the preset value.
Specifically, the database proxy works in units of row data: it calls idle threads in the thread pool to take buffers from the free queue in turn for the response data stored by each data node (re-allocating dynamically when the data length exceeds the pre-requested memory size, i.e., the preset value) and puts them into a used queue for management, which facilitates subsequent data sorting.
Step 203: when the amount of response data obtained from every data node of the distributed database reaches the preset value, the database proxy sorts the locally stored response data and sends it to the client.
Before the cached data is used (sent to the client), the cached data of each data node is sorted in the used queue to generate a set of the largest continuous row data list. When the cached data is used, the buffers are removed from the used queue according to the generated row data list and returned to the free queue.
Specifically, because of the different network latencies between the data nodes and the database proxy, the database proxy finishes receiving the response data of different data nodes at different times, and the response data of some data node is the last to finish. The thread receiving that data node's response data is responsible for sorting: it generates the largest continuous row data list, sends all the data in the list to the client, and re-notifies all threads to continue receiving the remaining data of all nodes. The data processing thread puts all tasks into the ready queue to wait for further data fetching and notifies the threads in the thread pool to continue executing tasks.
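The rule that whichever thread finishes its node last performs the merge can be captured with a shared per-stage countdown, sketched below in Python; merge_and_send and notify_all_threads are assumed callbacks for the sorting/sending and re-notification steps, and the row I/O itself is omitted.

    import threading

    class StageBarrier:
        """Per-stage completion tracking across the receiving threads.

        Each receiving thread calls node_done() once its node has either stored the
        preset number of rows or run out of data; the thread that brings the counter
        to zero sorts and sends, re-arms the barrier for the next stage, and wakes
        the other threads up.
        """

        def __init__(self, node_count, merge_and_send, notify_all_threads):
            self._lock = threading.Lock()
            self._node_count = node_count
            self._remaining = node_count
            self._merge_and_send = merge_and_send
            self._notify_all_threads = notify_all_threads

        def node_done(self):
            with self._lock:
                self._remaining -= 1
                last = (self._remaining == 0)
                if last:
                    self._remaining = self._node_count   # re-arm for the next stage
            if last:                                     # this thread got the last node's data
                self._merge_and_send()
                self._notify_all_threads()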
Step 204: steps 202 and 203 are repeated until the database proxy has received the response data of all data nodes and sent it to the client.
In an embodiment of the present disclosure, the client queries data table t1, whose data is distributed across three nodes (group1, group2, group3). In this embodiment, the pre-configured preset value is a single-read row count of 1000 rows, and memory for 1000 rows of data is allocated in advance for each of group1, group2, and group3.
FIG. 3 is a schematic diagram of the logical process of the data query method provided by this embodiment of the present disclosure. As shown in FIG. 3, the method includes steps 301 to 306.
Step 301: the client sends a select order by statement to the data nodes group1, group2, and group3 through the database proxy (proxy).
Step 302: the proxy listens, via epoll, on the TCP links over which the select order by statement was issued in step 301, and waits for ready data.
If there is no ready data, the proxy continues to wait for the TCP data response; if there is ready data, the task of fetching data on the current link is handed over to another thread.
Step 303: when the response data returned by any of the data nodes group1, group2, and group3 is ready, an idle thread in the thread pool is called to fetch the ready data from the receive buffer and store it locally.
Here, "ready" means that a data node has finished fetching row data up to the flow-control size (1000 rows).
"Local storage" means storing the fetched ready response data in local memory.
If the data fetching of the current data node has ended, or the number of fetched rows has reached the size of 1000 rows, its data fetching is complete: its data is stored locally and the proxy waits for the data fetching of the other nodes to complete; otherwise, the flow returns to step 302 to wait for ready data.
Step 304: for any one of the three nodes group1, group2, and group3, as long as the data of the other two data nodes has not been fully fetched, continue waiting for them to complete; otherwise, the staged data fetching has ended, sorting is performed, and step 305 is executed.
Here, a data node completing its data fetch means that the amount of data it has stored locally reaches the size of 1000 rows, or that its data fetch has ended. The end of staged data fetching means that the locally stored data of each of the three data nodes group1, group2, and group3 reaches the size of 1000 rows or its data fetch has ended.
Step 305: the data stored locally for the three nodes group1, group2, and group3 is sorted by the value of the order by field, the current largest continuous run of data is produced, and that data is then sent to the requesting client according to the mysql protocol.
Step 306: if all the data of the three nodes group1, group2, and group3 has been fetched, execution of the current statement ends; otherwise, the flow of steps 302 to 305 continues.
In an embodiment of the present disclosure, the client queries data table t1. A row of table t1 is about 128 bytes; there are 100 million rows in total, for a total pre-stored data volume of 12.8 GB. Using a hash strategy on primary key a, the entire t1 table is distributed across the three nodes group1 (database 1), group2 (database 2), and group3 (database 3), with each group node holding about 4 GB of data.
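A simple modulo-hash placement on primary key a is enough to see why each group ends up with roughly a third of the rows; the actual hash strategy is not specified in the text, so the function below is only an illustration.

    GROUPS = ["group1", "group2", "group3"]

    def route_by_primary_key(a: int) -> str:
        """Map primary key a to one of the three data nodes (illustrative hash strategy)."""
        return GROUPS[hash(a) % len(GROUPS)]

    # Spreading 100 million 128-byte rows this way puts roughly 12.8 GB / 3, i.e. about
    # 4 GB, on each group node, matching the figure above.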
The preset value, i.e., the number of rows read in a single pass, is pre-configured as 1000 rows, and memory for 1000 rows of data is allocated in advance for each of group1, group2, and group3. With memory for 1000 rows reserved for each of the three nodes, each data node fetches one thousand rows per stage, so the staged data requires 384 KB of memory.
FIG. 4 is a signaling diagram of the data query method provided by this embodiment. As shown in FIG. 4, the method includes steps 401 to 405.
Step 401: the client sends a select order by request (SQL request) to the database proxy (Proxy).
Step 402: the Proxy splits the client request into distributed requests, generating a request (SQL request) for each data node (group1, group2, group3), and sends it to the corresponding data node over that node's TCP link; meanwhile, the Proxy listens on the socket objects of all the sub-request TCP links involved in the current request and waits for the nodes' data responses.
Step 403: when response data from the group1, group2, and group3 nodes appears in the epoll ready queue, the Proxy randomly assigns idle threads from the thread pool (thread 1, thread 2, thread 3) to fetch the data from the TCP receive buffers and store it locally. When every data node has fetched row data reaching the size of 1000 rows, all the data that needs to be fetched in the current stage has been obtained.
The remaining response data continues to be fetched in the next stage (in fact, some threads will already have begun repeating the operation of step 403 to fetch row data into local storage). Whenever the data of a data node is ready, an idle thread is notified to store the ready data into local memory; once the data a data node requires for the current stage (1000 rows) has been stored, the current thread moves on to storing the data of the next stage, so no single thread starves.
Step 404: the data stored in the local memory of the three nodes (group1, group2, group3) is sorted by the value of the order by field (buffer data sorting) to generate a set of the largest continuous row data list.
The largest continuous row data list can be generated by sorting the used queue of the cached data.
The data of each data node is kept ordered by the statement itself; that is, the data result returned by a single node is already ordered before this sort. The sorting here sorts the data of multiple data nodes as a whole, and it is a sort performed at an intermediate stage (for example, it is quite possible that after a stage sort across multiple data nodes, only part of the fetched data can be used, while some data must wait for subsequent data to determine whether it is continuous). For example, in this embodiment, the 1000 rows of a single node are already ordered when the database returns them, and the following situations may need handling: (1) a single data node does not need to be sorted, since its data is already an ordered sequence; (2) sorting across multiple data nodes needs to generate the currently known ordered continuous sequence, and whatever cannot yet be judged is left for the next sort.
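A concrete miniature of case (2), using four sort-key values per node for one stage and assuming all three nodes still hold more rows: only the merged values up to the smallest per-node "last row" key are known to be continuous.

    import heapq

    group1, group2, group3 = [1, 4, 9, 12], [2, 3, 5, 20], [6, 7, 8, 10]  # one stage, per-node ordered
    frontier = min(batch[-1] for batch in (group1, group2, group3))       # = 10, smallest "last row" key
    merged = list(heapq.merge(group1, group2, group3))
    emit  = [v for v in merged if v <= frontier]   # currently known ordered continuous sequence
    carry = [v for v in merged if v > frontier]    # cannot be judged until the next stage
    assert emit == [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
    assert carry == [12, 20]                       # a later group3 row could still fall below 12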
Step 405: the Proxy sends the current largest continuous list of row data (the staged result set) to the requesting client according to the mysql protocol.
If the data of some node has not been fully fetched, steps 403 to 405 of the above stage are repeated until the data of all data nodes has been fetched and sent to the client, at which point execution of the current select order by statement ends.
In the related art, sorting starts only after all the response data of all data nodes has been received; the technical solution provided by this embodiment can instead sort while receiving data, which increases data processing capacity and makes full use of the CPU (central processing unit). For example, in this embodiment, while the first stage's 1000 rows per data node are ready and waiting to be sorted, the data of the second and third stages may be 50% ready; by the time the second stage's data is ready to be sorted, the data of many later stages may already be being stored locally. Compared with first storing all the data locally and then sorting, this saves both time and memory. Each data node generally uses one thread per stage; assuming one thousand threads are used concurrently, each stage uses 384 MB of memory, and since receiving data and processing data are handled by different threads, the total memory used is 384 MB x 2 = 768 MB, which greatly reduces memory usage.
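The memory figures can be reproduced directly from the numbers above; the snippet reads the "one thousand threads" as one thousand concurrently executing sorted statements, which is an interpretation rather than something the text states explicitly.

    row_bytes, preset_rows, nodes = 128, 1000, 3
    stage_bytes = row_bytes * preset_rows * nodes       # 384,000 bytes, i.e. about 384 KB per statement per stage
    concurrent_statements = 1000
    receive_and_process = 2                             # receiving and processing use separate threads
    total_bytes = stage_bytes * concurrent_statements * receive_and_process
    print(total_bytes)                                  # 768,000,000 bytes: the 384 MB x 2 = 768 MB above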
In the technical solution provided by the above embodiments, the table data is distributed across multiple data nodes, so the sorting correspondingly involves the response data of multiple data nodes. The response data of each data node is returned in order at fetch time, and during execution the ordering of a single data node's data is enforced by the SQL statement; in this way, once the ordered data of multiple data nodes has been stored locally for a given stage, staged sorting can be achieved according to a certain sorting strategy.
In the technical solution provided by the above embodiments, the staged data of a single data node is fetched by a separate thread, so the staged data of multiple data nodes is processed by multiple threads, which increases processing speed. The data processed in each stage can be mutually independent; that is, while the data of the first stage is being sorted, the data of the second stage may also be being sorted, which increases the concurrency of data processing.
In the technical solution provided by the above embodiments, the table data involves multiple data nodes, so it is quite possible that the data of some data nodes is already ready while other data is still being received or waiting to be received; thus a thread that has handled a portion of ready data can go on to fetch data of other stages, making full use of resources.
FIG. 5 is a schematic structural diagram of a database proxy provided by an embodiment of the present disclosure. As shown in FIG. 5, the database proxy includes an acquisition and storage unit and a sorting and sending unit.
The acquisition and storage unit is configured to, when the response data returned by any data node of the distributed database is ready, fetch the ready response data from the receive buffer and store it locally.
The sorting and sending unit is configured to, when the amount of response data obtained from every data node of the distributed database reaches the preset value, sort the locally stored response data and send it to the client.
The database proxy further includes a receiving unit and a splitting and sending unit.
The receiving unit is configured to receive a query request sent by the client, where the query request is used to obtain data from the distributed database and carries a sorting requirement.
The splitting and sending unit is configured to split the query request into distributed query requests corresponding to the data nodes of the distributed database and send them to the corresponding data nodes, where each distributed query request is used to obtain data from the corresponding data node and carries the sorting requirement.
The database proxy further includes a configuration unit.
The configuration unit is configured to configure the preset value in advance, before the database proxy receives the query request sent by the client, and to allocate memory of the preset size to each data node of the distributed database.
The acquisition and storage unit is specifically configured such that, when the response data returned by any data node of the distributed database is ready, the database proxy randomly assigns an idle thread from the thread pool to fetch the ready response data from the receive buffer and store it locally.
The sorting and sending unit is specifically configured such that the thread in the database proxy that fetched the ready response data of the last data node of the distributed database sorts the locally stored response data, generates the largest continuous row data list, and sends it to the client.
The database proxy further includes a notification unit, in addition to the sorting and sending unit.
The notification unit is configured to notify all threads to continue fetching the remaining response data of all data nodes.
The sorting and sending unit is further configured to, when the amount of response data obtained again from every data node of the distributed database reaches the preset value, sort the locally stored response data again and send it to the client, and to determine whether the database proxy has received all the response data of all the data nodes; if not, the step of notifying all threads to continue fetching the remaining response data of all the data nodes is executed again.
The splitting and sending unit is specifically configured to send the distributed query request corresponding to each data node of the distributed database to that node over the TCP link corresponding to that node.
The database proxy further includes a listening unit configured to listen on the socket objects of the corresponding TCP links and wait for the response data of each data node, where response data of a data node appearing in the epoll ready queue indicates that the response data of that data node is ready.
An embodiment of the present disclosure further provides a database proxy, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the computer program, when executed by the processor, implements the data query method of any of the above.
An embodiment of the present disclosure further provides a computer-readable storage medium storing an information processing program, where the information processing program, when executed by a processor, implements the steps of the data query method of any of the above.
Those of ordinary skill in the art will understand that all or some of the steps of the methods disclosed above, and the functional modules/units of the systems and apparatuses, may be implemented as software, firmware, hardware, or suitable combinations thereof. In a hardware implementation, the division between the functional modules/units mentioned in the above description does not necessarily correspond to the division of physical components; for example, one physical component may have multiple functions, or one function or step may be executed cooperatively by several physical components. Some or all of the components may be implemented as software executed by a processor, such as a digital signal processor or microprocessor, or as hardware, or as an integrated circuit, such as an application-specific integrated circuit. Such software may be distributed on computer-readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media). As is well known to those of ordinary skill in the art, the term computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for the storage of information such as computer-readable instructions, data structures, program modules, or other data. Computer storage media include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technologies, CD-ROM, digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information and can be accessed by a computer. Furthermore, as is well known to those of ordinary skill in the art, communication media typically embody computer-readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism, and may include any information delivery media.
Although the embodiments of the present disclosure are described above, what is described is merely embodiments adopted to facilitate understanding of the present disclosure and is not intended to limit the present disclosure. Any person skilled in the art to which the present disclosure belongs may make modifications and changes in the form and details of implementation without departing from the spirit and scope disclosed by the embodiments of the present disclosure; however, the scope of patent protection of the present disclosure shall still be subject to the scope defined by the appended claims.

Claims (10)

  1. A data query method, comprising:
    when a distributed database returns response data, a database proxy fetches the response data from a receive buffer and stores it locally;
    when the amount of response data obtained from every data node of the distributed database reaches a preset value, the database proxy sorts the locally stored response data and sends it to a client.
  2. The method according to claim 1, wherein, before the distributed database returns response data, the method further comprises:
    the database proxy receives a query request sent by the client, wherein the query request is used to obtain data from the distributed database and carries a sorting requirement;
    the database proxy splits the query request into distributed query requests corresponding to the data nodes of the distributed database and sends them to the corresponding data nodes, wherein each distributed query request is used to obtain data from the corresponding data node and carries the sorting requirement.
  3. The method according to claim 2, wherein, before the database proxy receives the query request sent by the client, the method further comprises:
    configuring the preset value in advance, and allocating memory of the preset size to each data node of the distributed database.
  4. The method according to claim 1, wherein fetching the response data from the receive buffer and storing it locally when the distributed database returns response data comprises:
    when the response data returned by any data node of the distributed database is ready, the database proxy randomly assigns an idle thread from a thread pool to fetch the ready response data from the receive buffer and store it locally.
  5. The method according to claim 4, wherein the database proxy sorting the locally stored response data and sending it to the client comprises:
    the thread in the database proxy that fetched the ready response data of the last data node of the distributed database sorts the locally stored response data, generates a largest continuous row data list, and sends it to the client.
  6. The method according to claim 5, wherein, after generating the largest continuous row data list and sending it to the client, the method further comprises:
    notifying all threads to continue fetching the remaining response data of all data nodes, and, when the amount of response data obtained again from every data node of the distributed database reaches the preset value, sorting the locally stored response data again and sending it to the client;
    determining whether the database proxy has received all the response data of all data nodes, and if not, returning to the step of notifying all threads to continue fetching the remaining response data of all data nodes.
  7. The method according to claim 2, wherein the database proxy sending the distributed query requests corresponding to the data nodes of the distributed database to the corresponding data nodes comprises:
    the database proxy sends the distributed query request corresponding to each data node of the distributed database to that node over the TCP link corresponding to that node;
    the method further comprises:
    the database proxy listens on the socket objects of the corresponding TCP links and waits for the response data of each data node, wherein response data of a data node appearing in the epoll ready queue indicates that the response data of that data node is ready.
  8. A database proxy, comprising:
    an acquisition and storage unit configured to, when the response data returned by any data node of a distributed database is ready, fetch the ready response data from a receive buffer and store it locally;
    a sorting and sending unit configured to, when the amount of response data obtained from every data node of the distributed database reaches a preset value, sort the locally stored response data and send it to a client.
  9. A database proxy, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the data query method according to any one of claims 1 to 7.
  10. A computer-readable storage medium, wherein the computer-readable storage medium stores an information processing program, and the information processing program, when executed by a processor, implements the steps of the data query method according to any one of claims 1 to 7.
PCT/CN2019/121470 2018-12-18 2019-11-28 Data query method and database proxy WO2020125367A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811549994.7 2018-12-18
CN201811549994.7A CN111339132B (zh) 2018-12-18 2018-12-18 Data query method and database proxy

Publications (1)

Publication Number Publication Date
WO2020125367A1 true WO2020125367A1 (zh) 2020-06-25

Family

ID=71100797

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/121470 WO2020125367A1 (zh) 2018-12-18 2019-11-28 Data query method and database proxy

Country Status (2)

Country Link
CN (1) CN111339132B (zh)
WO (1) WO2020125367A1 (zh)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020128995A1 (en) * 2001-03-09 2002-09-12 Muntz Daniel A. Namespace service in a distributed file system using a database management system
CN101930472A (zh) * 2010-09-09 2010-12-29 南京中兴特种软件有限责任公司 一种支持分布式数据库基于并行查询的方法
CN102201010A (zh) * 2011-06-23 2011-09-28 清华大学 无共享架构的分布式数据库系统及其实现方法
CN102289508A (zh) * 2011-08-31 2011-12-21 上海西本网络科技有限公司 分布式缓存阵列及其数据查询方法
CN102289473A (zh) * 2011-07-27 2011-12-21 迈普通信技术股份有限公司 一种多服务器分页查询的装置及方法
CN103051478A (zh) * 2012-12-24 2013-04-17 中兴通讯股份有限公司 一种大容量电信网管系统及其设置和应用方法
CN105373626A (zh) * 2015-12-09 2016-03-02 深圳融合永道科技有限公司 分布式人脸识别轨迹搜索系统和方法
CN106547796A (zh) * 2015-09-23 2017-03-29 南京中兴新软件有限责任公司 数据库的执行方法及装置


Also Published As

Publication number Publication date
CN111339132B (zh) 2023-05-26
CN111339132A (zh) 2020-06-26

Similar Documents

Publication Publication Date Title
US11431794B2 (en) Service deployment method and function management platform under serverless architecture
CN107800768B (zh) 开放平台控制方法和系统
US20190199801A1 (en) Lock Management Method in Cluster, Lock Server, and Client
WO2017133623A1 (zh) 一种数据流处理方法、装置和系统
CN110941481A (zh) 资源调度方法、装置及系统
US20170031622A1 (en) Methods for allocating storage cluster hardware resources and devices thereof
US20150363229A1 (en) Resolving task dependencies in task queues for improved resource management
US10686728B2 (en) Systems and methods for allocating computing resources in distributed computing
WO2017092505A1 (zh) 云计算环境下虚拟资源弹性伸展的方法,系统和设备
JP2015144020A5 (zh)
US11768706B2 (en) Method, storage medium storing instructions, and apparatus for implementing hardware resource allocation according to user-requested resource quantity
CN105516086B (zh) 业务处理方法及装置
WO2016061935A1 (zh) 一种资源调度方法、装置及计算机存储介质
JPWO2018220708A1 (ja) 資源割当システム、管理装置、方法およびプログラム
WO2018223789A1 (zh) 事务标识操作方法、系统和计算机可读存储介质
CN111753065A (zh) 请求响应方法、系统、计算机系统和可读存储介质
CN107562803B (zh) 数据供应系统及方法、终端
CN111586140A (zh) 一种数据交互的方法及服务器
US11144359B1 (en) Managing sandbox reuse in an on-demand code execution system
WO2016149945A1 (zh) 一种生命周期事件的处理方法及vnfm
CN109710679B (zh) 数据抽取方法及装置
CN109819674B (zh) 计算机存储介质、嵌入式调度方法及系统
WO2020125367A1 (zh) 数据查询的方法及数据库代理
CN111290842A (zh) 一种任务执行方法和装置
CN102375780A (zh) 一种分布式文件系统中元数据缓存管理的方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19898901

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 16/11/2021)

122 Ep: pct application non-entry in european phase

Ref document number: 19898901

Country of ref document: EP

Kind code of ref document: A1