WO2018188416A1 - Method, apparatus and related device for data search - Google Patents

Method, apparatus and related device for data search

Info

Publication number
WO2018188416A1
WO2018188416A1 (PCT/CN2018/076750)
Authority
WO
WIPO (PCT)
Prior art keywords
memory
data
processor
search
memory controller
Prior art date
Application number
PCT/CN2018/076750
Other languages
English (en)
French (fr)
Inventor
沈胜宇
Original Assignee
华为技术有限公司
Priority date
Filing date
Publication date
Application filed by 华为技术有限公司
Publication of WO2018188416A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F9/50: Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5027: Allocation of resources to service a request, the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F2209/502: Proximity (indexing scheme relating to G06F9/50)

Definitions

  • the present application relates to the field of computers, and in particular, to a method, apparatus, processor, memory controller and computing device for data search in a computing device.
  • FIG. 1A is a schematic diagram of a system architecture of a computing device in the prior art.
  • The system includes multiple CPUs, each configured with a cache. The CPUs share a single memory (main memory), and each CPU manages a section of that memory. Access to data stored in the memory is performed through the cache.
  • The cache is a small, fast memory in the CPU that holds copies of frequently used data from the memory, to avoid the network loss and delay caused by frequently accessing the external slow, large-capacity main memory.
  • FIG. 1B is a schematic diagram of a data structure of a linked list and a tree in the prior art.
  • Each linked list or tree has a large number of nodes in its data structure.
  • Each node has the same internal structure, including data and pointers. The number of pointers may be one or more, and the pointers are used to identify associations of different data in the storage structure.
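The node layout described above (data plus one or more pointers) can be sketched minimally as follows; the `Node` class and its field names are illustrative, not taken from the patent:

```python
class Node:
    """A node as described above: a data field plus one or more pointers."""
    def __init__(self, data, children=None):
        self.data = data                # the data stored at this node
        self.children = children or []  # one or more pointers identifying associated nodes

# A small tree: a root whose pointers identify associations to other nodes.
leaf = Node(30)
root = Node(10, [Node(20, [leaf]), Node(25)])
```

Searching such a structure means following the pointer chain node by node, which is exactly what makes a remote, pointer-chasing search expensive.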
  • When any CPU in the computing device receives a search instruction, it first obtains the complete data structure of the tree or linked list related to the search instruction from the main memory or the caches corresponding to the other CPUs, and stores that complete data structure in the cache of the CPU that executes the search instruction.
  • Then, the target data is searched for in that cache to determine the search result.
  • Specifically, the CPU that receives the search instruction must sequentially acquire the data of the root node according to the data structure of the linked list or tree, then follow the pointer in that data to acquire the data of the next node from another CPU, and so on. Each round of acquiring a node's data and then the data of the next node identified by its pointer requires two network delays.
  • Here, a network delay is the data transmission path through the node controller, processor, cache, and main memory that the CPU receiving the search instruction must traverse each time; each network delay is usually on the order of 300 ns, so two delays can reach 600 ns.
  • The above search process therefore leads to long search times. Moreover, because multiple processors in the computing device must execute instructions repeatedly to acquire data and transmit it to the processor that received the search instruction, the performance of the entire system is degraded.
  • The present application provides a data search method, apparatus, processor, memory controller, and computing device, which can solve the prior-art problems of long search times and degraded overall system performance, reduce the delay of data search, and improve the efficiency of the search process and the performance of the entire system.
  • In a first aspect, a method for data search comprises: receiving, by any processor (referred to as the first processor) in a computing device, a search request message, the search request message including a search condition and the address of the root node of the tree to be searched.
  • The first processor may first determine, according to the address of the root node of the tree to be searched in the search request message, the first memory where that address is located and the first memory controller associated with the processor managing the first memory. Then, the first processor determines a search instruction according to the search request message and a preset search algorithm, where the search instruction includes the search condition, the address of the root node of the tree to be searched, and the identifier of the first processor, and sends the search instruction to the first memory controller.
  • Next, the first processor receives, from the first memory controller, first data in the first memory that satisfies the search condition, and receives, from a second memory controller associated with the processor that manages a second memory storing a subtree of the tree to be searched, second data that satisfies the search condition. Finally, the first processor generates a search result based on the first data and the second data.
  • the search request message further includes an identifier of the tree to be searched.
  • the search instruction also includes an identifier of the tree to be searched.
  • In a possible implementation, the first processor determining, according to the search request message, the first memory where the address of the root node of the tree to be searched is located and the memory controller associated with the processor managing the first memory includes: first, the first processor determines, according to the address of the root node of the tree to be searched, the first memory where that address is located; then, the processor managing the first memory is determined according to a preset memory-to-processor mapping table; finally, the first memory controller associated with the processor managing the first memory is determined according to a mapping relationship between processors and memory controllers.
  • In a possible implementation, the first processor generating a search result according to the first data and the second data includes: when the data search time meets a first threshold, the first processor generates the search result from the received first data and second data; or, when the amount of first data and second data acquired by the first processor meets a second threshold, the first processor generates the search result from the received first data and second data.
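The two stopping conditions above (a time threshold or a received-amount threshold) can be sketched as follows; `gather_results` and its parameter names are hypothetical, used only for illustration:

```python
import time

def gather_results(received, time_budget_s, count_threshold):
    """Collect first/second data until either the time budget (first
    threshold) or the expected item count (second threshold) is met."""
    results = []
    deadline = time.monotonic() + time_budget_s
    for item in received:
        results.append(item)
        if len(results) >= count_threshold or time.monotonic() >= deadline:
            break
    return results  # the search result is generated from what was received
```

Either condition alone suffices to stop waiting, so a slow or absent memory controller cannot stall result generation indefinitely.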
  • In a possible implementation, the first processor sends the search request message to all processors in the computing device, where the message carries the identifier of the tree to be searched so that each processor in the computing device can determine, according to that identifier, whether a subtree of the tree to be searched is stored in the memory it manages. The first processor receives a response message from each processor storing a subtree of the tree to be searched, the response message indicating that the processor manages a memory storing such a subtree. The search result is generated when the first processor has received the data satisfying the search condition sent by the first memory controller and by every memory controller associated with a processor whose memory stores a subtree of the tree to be searched.
  • With the method described above, when a processor receives a search request message, the computation can be migrated to the migration processing unit in the memory controller close to the data. This avoids the network delay caused by copying the complete data storage structure from the other processors into the cache of the processor that received the search request message, improving the efficiency of the search process and reducing the latency of data search.
  • Because the migration processing unit in the memory controller near the data searches directly in its memory or cache, copying the complete data structure into the cache of the processor that received the search request message is avoided. This reduces the amount of data that must be transferred across processors, node controllers, caches, and memory, saving data-transfer bandwidth.
  • Because the cache of the processor that received the search request message need not store the complete data structure, the storage overhead of that cache is also reduced, improving the overall performance of the computing device.
  • In a second aspect, the present application provides a data search method, the method comprising: a first memory controller receiving a search instruction sent by a first processor, where the search instruction includes a search condition, the address of the root node of the tree to be searched, and the identifier of the first processor; the memory where the address of the root node of the tree to be searched is located is the first memory, and the first memory controller is the memory controller associated with the processor managing the first memory. Then, the first memory controller searches the first memory for data satisfying the search condition according to the search instruction; when first data satisfying the search condition exists in the first memory, the first memory controller acquires the first data and sends it to the first processor according to the identifier of the first processor.
  • When the tree to be searched has a subtree stored in a second memory, the first memory controller sends the search instruction to a second memory controller, so that the second memory controller searches the second memory for data satisfying the search condition according to the search instruction; when second data satisfying the search condition exists in the second memory, the second memory controller acquires the second data and sends it to the first processor. Here, the second memory is a memory storing a subtree of the tree to be searched, and the second memory controller is the memory controller associated with the processor that manages the second memory.
  • the search request message further includes an identifier of the tree to be searched.
  • the search instruction includes the identifier of the tree to be searched.
  • In a possible implementation, each processor further includes a cache.
  • The first memory controller searching for the target data in the first memory according to the search instruction includes: determining, according to the address of the first memory, whether a data copy of the first memory exists in the first cache; when a data copy of the first memory exists in the first cache, searching the first cache for the first data satisfying the search condition; or, when no data copy of the first memory exists in the first cache, loading the data of the first memory into the first cache, where the first cache is the cache included in the first processor; the first memory controller then finds the first data satisfying the search condition in the first cache.
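As a rough sketch of the cache-copy logic above, assuming a dict-based cache keyed by memory segment (the names `search_segment` and `seg205` are illustrative):

```python
def search_segment(cache, memory, segment, condition):
    """Search a memory segment via the cache: if the cache holds no copy of
    the segment, load the segment's data into the cache first, then search
    the cached copy for data satisfying the condition."""
    if segment not in cache:                    # no data copy in the cache
        cache[segment] = list(memory[segment])  # load the memory data into the cache
    return [d for d in cache[segment] if condition(d)]

memory = {"seg205": [1, 5, 9]}
cache = {}
hits = search_segment(cache, memory, "seg205", lambda d: d > 4)
```

After the first call the cache holds the copy, so repeated searches over the same segment avoid touching the slower memory again.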
  • With the method described above, the migration processing unit in the memory controller close to the data searches for the data satisfying the search request message, reducing the network delay that would otherwise be caused by each processor in the computing device storing the complete data storage structure into the cache of the processor that received the search request message. This improves the efficiency of the search process and reduces the latency of data search.
  • In a third aspect, the present application provides a computing device, where the computing device includes at least two processors and a memory, each of the at least two processors is associated with a memory controller, and the memory controller is used to implement data communication between each processor and the memory; the computing device includes a first processor, a first memory controller, and a second memory controller.
  • The first processor is configured to: receive a search request message, where the search request message includes a search condition and the address of the root node of the tree to be searched; determine, according to the search request message, the first memory where the address of the root node of the tree to be searched is located and the first memory controller associated with the processor managing the first memory; determine a search instruction according to the search request message and a preset search algorithm, where the search instruction includes the search condition, the address of the root node of the tree to be searched, and the identifier of the first processor; send the search instruction to the first memory controller; receive first data sent by the first memory controller, the first data being data in the first memory that satisfies the search condition; and receive second data sent by the second memory controller, where the second memory controller is the memory controller associated with the processor that manages the second memory, the second memory is a memory storing a subtree of the tree to be searched, and the second data is data in the second memory that satisfies the search condition.
  • The first memory controller is configured to: receive the search instruction sent by the first processor; search the first memory for data satisfying the search condition according to the search instruction; when first data satisfying the search condition exists in the first memory, acquire the first data and send it to the first processor according to the identifier of the first processor; and, when the tree to be searched has a subtree in the second memory, send the search instruction to the second memory controller, where the second memory is a memory storing a subtree of the tree to be searched.
  • the second memory controller is a memory controller associated with the processor that manages the second memory;
  • The second memory controller is configured to: receive the search instruction sent by the first memory controller; search the second memory for data satisfying the search condition according to the search instruction; and, when second data satisfying the search condition exists in the second memory, acquire the second data and send it to the first processor according to the identifier of the first processor.
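The interaction between the two memory controllers can be modeled as a minimal sketch; the class, its `subtree_controllers` field, and the recursion are illustrative assumptions, not the patent's hardware design:

```python
class MemoryController:
    """Searches its own memory and forwards the search instruction to
    controllers managing memories that hold subtrees of the tree to be searched."""
    def __init__(self, memory, subtree_controllers=None):
        self.memory = memory
        self.subtree_controllers = subtree_controllers or []

    def handle_search(self, condition, reply_to):
        # Send any locally matching data back to the first processor.
        reply_to.extend(d for d in self.memory if condition(d))
        # Forward the same search instruction for each stored subtree.
        for mc in self.subtree_controllers:
            mc.handle_search(condition, reply_to)

second_mc = MemoryController([7, 2])              # manages the second memory
first_mc = MemoryController([5, 1], [second_mc])  # manages the first memory
received = []                                     # data the first processor receives
first_mc.handle_search(lambda d: d > 4, received)
```

Note that only matching data travels back to the first processor; the tree structure itself never leaves the memory each controller manages.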
  • the present application provides an apparatus for data search, the apparatus comprising various modules for performing the method of data search in the first aspect or any of the possible implementations of the first aspect.
  • In another aspect, the present application provides an apparatus for data search, the apparatus comprising various modules for performing the method of data search in the second aspect or any possible implementation of the second aspect.
  • In another aspect, the present application provides a processor, where the processor includes a memory controller, a cache, and a bus, and the memory controller and the cache communicate through the bus. The memory controller includes a migration processing circuit and a storage circuit, which communicate through a bus or a direct connection. The storage circuit stores execution instructions; when the processor runs, the migration processing circuit executes the execution instructions in the storage circuit to perform, using hardware resources in the processor, the operational steps of the data search method in the first aspect or any possible implementation of the first aspect.
  • In another aspect, the present application provides a computer readable storage medium having instructions stored therein that, when run on a computer, cause the computer to perform the method described in the first aspect or any possible implementation of the first aspect.
  • In another aspect, the present application provides a memory controller, where the memory controller includes a migration processing circuit, a storage circuit, a communication circuit, and a bus, and the migration processing circuit, the storage circuit, and the communication circuit communicate through the bus or through direct connections. The storage circuit stores execution instructions; when the memory controller runs, the migration processing circuit executes the execution instructions in the storage circuit to perform, using hardware resources in the memory controller, the operational steps of the data search method in the second aspect or any possible implementation of the second aspect.
  • In another aspect, the present application provides a computer readable storage medium having instructions stored therein that, when run on a computer, cause the computer to perform the method described in the second aspect or any possible implementation of the second aspect.
  • In another aspect, the present application provides a computer readable storage medium having instructions stored therein that, when run on a computer, cause the computer to perform the method described in the third aspect or any possible implementation of the third aspect.
  • Based on the implementations provided in the above aspects, the present application may further combine them to provide more implementations.
  • FIG. 1A is a schematic diagram of a system architecture of a computing device in the prior art;
  • FIG. 1B is a schematic diagram showing the structure of a linked list and a tree in the prior art;
  • FIG. 2A is a schematic diagram of a system architecture of a computing device according to an embodiment of the present invention.
  • FIG. 2B is a schematic diagram of a system architecture of another computing device according to an embodiment of the present invention.
  • FIG. 3 is a schematic flowchart of a method for data search according to an embodiment of the present invention.
  • FIG. 4 is a schematic flowchart of another method for data search according to an embodiment of the present invention.
  • FIG. 5 is a schematic structural diagram of a device 500 for data search according to an embodiment of the present invention.
  • FIG. 6 is a schematic structural diagram of a device 600 for data search according to an embodiment of the present invention.
  • FIG. 7 is a schematic structural diagram of a processor 700 for data search according to an embodiment of the present invention.
  • FIG. 8 is a schematic structural diagram of a memory controller 800 for data search according to an embodiment of the present invention.
  • FIG. 2A is a schematic diagram of a system architecture of a computing device 100 according to an embodiment of the present invention.
  • the computing device 100 includes a node controller 301, a plurality of processors, and a memory.
  • the computing device 100 can be a computing device such as a server, a desktop computer, a portable computer, or a virtual machine.
  • the node controller 301 is used to implement interconnection between multiple processors, and the processor and the node controller 301 can communicate through the on-chip interconnect bus.
  • the node controller can be a separate hardware chip composed of multiple electronic devices, or a hardware structure integrated inside the processor.
  • In the following description, a node controller implemented as an independent hardware chip is taken as an example.
  • The processors in the computing device may be interconnected through one node controller 301, or through a switching network composed of multiple node controllers 301, in which the different node controllers 301 work together to achieve data transfer among the multiple processors.
  • the following embodiments of the present invention are described in detail by taking only one node controller in the computing device as an example.
  • The computing device 100 in FIG. 2A is a multiprocessor system.
  • For ease of description, the following embodiments of the present invention take three processors as an example.
  • the computing device 100 includes a processor 201, a processor 211, and a processor 221.
  • The three processors share a single memory resource, and each processor is responsible for managing a section of that memory.
  • The storage space managed by the processor 201 is marked as the memory 205,
  • the storage space managed by the processor 211 is marked as the memory 215,
  • and the storage space managed by the processor 221 is marked as the memory 225.
  • The memory 205, the memory 215, and the memory 225 are each a part of the memory shared by the plurality of processors.
  • The memory 205, the memory 215, and the memory 225 may each be a contiguous storage space in the memory, or a discontiguous storage space.
  • the storage space of the memory managed by each CPU may be the same or different.
  • The specific implementation may be configured according to service requirements, which is not limited in the present invention.
  • Memory, also known as main memory, is storage space that the CPU can directly address; it can be implemented with semiconductor devices.
  • the memory is characterized by a fast access rate and is a major component of computing devices.
  • The programs we usually use, such as the Windows operating system, typing software, and game software, are generally installed on external storage such as a hard disk, but they cannot run from there; they must be loaded into memory before the CPU can execute them.
  • the memory generally uses a semiconductor memory unit, including a random access memory (RAM) or a flash memory.
  • the memory can be used to store data of an application, such as data in the form of a tree or a linked list. For ease of description, in the following embodiments of the present invention, the data structure stored in the main memory is described as a tree.
  • an application uses a linked list or tree structure to store data.
  • The data structure of the linked list or tree can be stored according to a preset algorithm; for example, consecutively written data may need to be stored in memories corresponding to different processors.
  • Specifically, each data write may be handled by each CPU in turn in a round-robin (polling) manner and stored in that CPU's corresponding memory.
  • the storage process of the data is prior art, and the present invention will not be described again.
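The round-robin (polling) placement mentioned above can be sketched as follows; treating each processor's memory as a list is an illustrative simplification:

```python
from itertools import cycle

def distribute(records, memories):
    """Hand each successive data write to the next CPU's memory in turn."""
    turn = cycle(range(len(memories)))
    for record in records:
        memories[next(turn)].append(record)

mems = [[], [], []]  # the memories 205, 215 and 225, in simplified form
distribute([1, 2, 3, 4], mems)
```

Consecutively written records thus end up spread across memories managed by different processors, which is why a later search may need to touch several memories.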
  • the structure in which the application is stored is described as a tree structure as an example.
  • Each tree's data storage structure includes a large number of nodes, each of which has the same internal structure, including data and pointers.
  • the number of pointers may be one or more, and is used to identify associations of different data in the storage structure.
  • The root node identifies the starting node in the tree's data storage structure.
  • The part of the data structure that contains the root node is called the mother tree, and the parts that contain only non-root nodes are called subtrees. For example, as shown in FIG. 2A, the memory 215 stores the part of the tree's data structure that includes the root node, so the data structure stored in the memory 215 is referred to as the mother tree 216.
  • The memory 205 stores a part of the non-root-node data of the tree's data structure, so the data structure stored in the memory 205 is referred to as a subtree of the tree, here subtree 206. Similarly, the data structure stored in the memory 225 is referred to as subtree 226.
  • the computing device 100 shown in FIG. 2A includes at least two processors.
  • the computing device 100 includes a processor 201, a processor 211, and a processor 221. Only three processors are shown in FIG. 2A.
  • the computing device 100 may include two or more processors.
  • the present invention is not limited thereto.
  • the computing device 100 includes three processors as an example for further detailed description.
  • Each processor is configured with a cache and at least one memory controller (MC).
  • The cache is used to store a copy of the data in the memory managed by the processor, to avoid the network loss and latency caused by frequently accessing the external slow, large-capacity main memory.
  • the cache 204 is configured in the processor 201
  • the cache 214 is configured in the processor 211
  • the cache 224 is configured in the processor 221.
  • The processor may be a CPU, or may be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or another programmable logic device (PLD).
  • the general purpose processor can be a microprocessor or any conventional processor or the like. In the following description of the embodiments of the present invention, the CPU is taken as an example for further description.
  • The memory controller is the component inside the computing device that controls the memory and manages the exchange of data between the memory and the CPU.
  • the memory controller can be a separate hardware chip.
  • The memory controller determines the maximum memory capacity, the memory type and speed, the memory-granule data depth, and the data width that the computing device can use; that is, it determines the memory performance of the computing device and thus has a large impact on the overall performance of the computing device.
  • The processor has no pins directly connected to the memory. When the processor needs to read target data from the memory, it passes the memory address of the target data to the cache, and the cache looks for the cache segment corresponding to that memory address. If the segment exists, the processor can read the target data directly; if not, the memory controller loads the data at that memory address into the cache, and the processor then reads the target data.
  • The memory controller includes a migration processing unit (MPU).
  • Each processor can be configured with at least one migration processing unit.
  • The migration processing unit is independent hardware in the memory controller that implements a subset of the functions commonly used in processors, including only common fixed-point operations such as addition, subtraction, multiplication, division, logical AND, logical OR, logical XOR, memory reads and writes, and atomic operations.
  • An atomic operation means that the multiple steps of an executed instruction must proceed sequentially and continuously, with no interrupt allowed during execution; an operation performed by an instruction in this way is called an atomic operation.
  • For example, for an instruction that increments a parameter a, the execution process is: first, read the value of a; then, add 1 to that value; finally, write the modified value of a back to its original location.
  • The MPU does not include complex functions such as floating-point units, vector units, system-state operations, or out-of-order pipelines.
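The read, add-1, write-back sequence described above must not be interleaved with other instructions. A lock-based sketch in software terms (the MPU would enforce this in hardware; the lock here only models the atomicity):

```python
import threading

counter = 0  # plays the role of the parameter a
lock = threading.Lock()

def atomic_increment():
    """a = a + 1 as three steps that must execute without interruption."""
    global counter
    with lock:              # nothing may interleave between read and write-back
        value = counter     # 1) read the value of a
        value = value + 1   # 2) add 1 to the value
        counter = value     # 3) write the modified value back

threads = [threading.Thread(target=atomic_increment) for _ in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Without the lock, two threads could both read the same old value and lose an increment; with it, the three steps behave as one indivisible operation.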
  • FIG. 2B is a schematic diagram of a system architecture of another computing device 100 according to an embodiment of the present invention. FIG. 2B differs from FIG. 2A in that the memory controller in FIG. 2A is integrated inside the processor, while the memory controller in FIG. 2B is a hardware structure independent of the processor.
  • A data search method provided by an embodiment of the present invention is further introduced below with reference to FIG. 3. As shown in the figure, the method includes:
  • the first processor receives a search request message.
  • The first processor is any one of the processors in the computing device 100 shown in FIG. 2A or FIG. 2B.
  • The search request message may be sent to the first processor by a user, or may be sent by an application or by another processor.
  • the search request message carries the search condition and the address of the root node of the tree to be searched.
  • The search condition can be any parameter of the stored data, such as a name, a time, or a keyword of the data.
  • the search request message further includes an identifier of the tree to be searched.
  • the first processor determines, according to the search request message, a first memory where an address of a root node of the tree to be searched is located, and a first memory controller associated with a processor that manages the first memory.
  • the first processor determines, according to the following steps, the memory of the root node of the tree to be searched and the first memory controller associated with the processor that manages the first memory:
  • the first processor sends an address of a root node of the tree to be searched to an address decoder in the first processor.
  • An address decoder is a device in a processor that resolves address access relationships.
  • According to the address of the root node of the tree to be searched, the address decoder can determine the memory address where that root node is located.
  • the address decoder sends, to the first processor, a memory address where the address of the root node of the tree to be searched is located.
  • the first processor determines, according to a memory address corresponding to a root node address of the tree to be searched, and a preset mapping relationship between the memory and the processor, a processor that manages a memory where the address of the root node of the tree to be searched is located.
  • the mapping between the preset memory and the processor is stored in the first processor.
  • According to the preset mapping between the memory and the processors, the processor corresponding to the memory address where the root node address of the tree to be searched is located, that is, the processor that manages that memory, can be determined.
  • the first processor determines, according to a preset mapping relationship between the processor and the memory controller, a first memory controller that is associated with a processor that manages a memory of an address of a root node of the tree to be searched.
  • the first processor stores a preset mapping relationship between the processor and the memory controller. After determining the processor corresponding to the memory address where the root node address of the tree to be searched is located, the first processor may determine, according to this mapping relationship, the memory controller associated with that processor. For convenience of subsequent description, this memory controller is recorded as the first memory controller.
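The two mapping lookups described above can be sketched as follows. This is an illustrative model only: the table contents, the address-to-memory rule, and all names are assumptions for demonstration, not the layout claimed by the method.

```python
# Hypothetical lookup tables: memory -> managing processor, processor -> memory
# controller. The 0x1000-byte address window per memory is an assumed rule.
MEMORY_TO_PROCESSOR = {"memory215": "processor211", "memory205": "processor201"}
PROCESSOR_TO_CONTROLLER = {"processor211": "controller212",
                           "processor201": "controller202"}

def decode_address(root_addr: int) -> str:
    # Stand-in for the address decoder: resolve which memory holds the address.
    return "memory215" if root_addr < 0x1000 else "memory205"

def locate_first_controller(root_addr: int) -> str:
    # Chain the two preset mappings to find the first memory controller.
    memory = decode_address(root_addr)
    processor = MEMORY_TO_PROCESSOR[memory]
    return PROCESSOR_TO_CONTROLLER[processor]
```

For a root-node address inside the first window, `locate_first_controller` would return `"controller212"`, mirroring the chain memory 215, processor 211, memory controller 212.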
  • the first processor determines the search instruction according to the search request message and the preset search algorithm.
  • a preset search algorithm is further stored in memory, and the preset search algorithm is used to control the sequence or timing with which each memory controller executes the search during the search process. The preset search algorithm can be stored in the memory managed by any processor. Generally, only one processor in a computing device can execute the search algorithm. For example, in the computing device shown in FIG. 2A, a preset search algorithm is stored in memory 215, and processor 201 can execute it.
  • the processor in the computing device capable of executing the preset search algorithm may be notified to generate a search instruction according to the search condition in the search request message and the preset search algorithm.
  • the search instruction includes a search condition, an address of a root node of the tree to be searched, and an identifier of the first processor.
  • the search instruction may further include an identifier of the tree to be searched.
  • the data structure of the target tree to be searched may be first determined according to the identifier of the tree to be searched, and then the data satisfying the search condition may be searched according to the search condition.
  • the process by which the first processor determines the search instruction includes either of the following two cases:
  • Case 1: The processor that receives the search request message is a processor that can execute the search algorithm, that is, the first processor can execute the search algorithm. In this case, the search instruction can be directly generated according to the preset search algorithm and the search condition.
  • Case 2: The processor that receives the search request message is not the processor that executes the search algorithm, that is, the first processor cannot execute the search algorithm. In this case, the first processor sends the search request message to the processor that can execute the search algorithm through the node controller; that processor generates the search instruction according to the search condition in the search request message and the preset search algorithm, and then sends the search instruction back to the first processor.
  • step S302 and step S303 have no sequential relationship: step S302 may be performed first and then step S303, step S303 may be performed first and then step S302, or step S302 and step S303 may be performed simultaneously.
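The two cases above can be sketched as a small function that either builds the search instruction locally or marks it as forwarded through the node controller. The field names and processor identifiers are illustrative assumptions.

```python
# Hypothetical instruction builder for Case 1 / Case 2 above.
def build_instruction(request: dict, algorithm_owner: str, self_id: str) -> dict:
    # The instruction carries the search condition, the root-node address, and
    # the identifier of the processor that received the request (so results can
    # be returned to it), as described in the text.
    instruction = {"condition": request["condition"],
                   "root_addr": request["root_addr"],
                   "requester": self_id}
    if algorithm_owner != self_id:
        # Case 2: request travels via the node-controller network to the
        # processor that can execute the preset search algorithm.
        instruction["forwarded_via"] = "node_controller"
    return instruction
```

When the receiving processor itself can run the algorithm (Case 1), no forwarding is recorded; otherwise the instruction is marked as having gone through the node controller.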
  • the first processor sends a search instruction to the first memory controller.
  • the transmission of the search instruction between a processor and the first memory controller includes the following: first, the processor that sends the search instruction transmits it, through the network formed by the node controllers, to the processor that receives the search instruction; then, the receiving processor sends the search instruction to the memory controller associated with it; finally, the migration processing unit in that memory controller executes the instruction. Because the migration processing unit implements a subset of the functions commonly used in the processor, the search specified by the search instruction can be completed quickly, improving search efficiency.
  • when the memory where the address of the root node of the tree to be searched determined in step S302 is managed by another processor, for example by the second processor, the search process includes: the first processor first sends the search instruction to the second processor through the network formed by the node controllers; the second processor then sends the search instruction to the first memory controller; finally, the first migration processing unit in the first memory controller executes the search instruction. Here, the second processor is the processor corresponding to the memory where the root node of the tree to be searched is located as determined by the first processor in step S302, the first memory controller is the memory controller associated with the second processor, and the first migration processing unit is the migration processing unit in the first memory controller.
  • for example, when the processor 201 receives the search request message and the processor corresponding to the memory where the root node of the tree to be searched is located is the processor 211, the processor 201 first sends the search instruction to the processor 211 through the node controller 301; then the processor 211 sends the search instruction to the memory controller 212; finally, the further search processing is completed by the migration processing unit 213 in the memory controller 212.
  • when the memory where the address of the root node of the tree to be searched determined in step S302 is managed by the first processor, that is, by the processor that receives the search request message, the first processor directly sends the search instruction to the memory controller associated with the first processor, and further search processing is performed by the migration processing unit in that memory controller.
  • for example, when the processor 201 receives the search request message and the processor corresponding to the memory where the root node of the tree to be searched is located is the processor 201 itself, the processor 201 first sends the search instruction to the memory controller 202; then the further search processing is completed by the migration processing unit 203 in the memory controller 202.
  • the first memory controller acquires the first data that satisfies the search condition according to the search instruction.
  • the first memory is the memory where the parent tree of the tree to be searched is located.
  • after receiving the search instruction, the first memory controller first determines, through the first migration processing unit in the first memory controller, whether the corresponding segment exists in the cache of the processor that manages the first memory. If so, the data that meets the search condition is obtained directly in the cache according to the search condition; the data in the first memory that satisfies the search condition is recorded as the first data. If not, the data corresponding to the address in the first memory is first loaded into the cache, and then the first migration processing unit acquires the first data according to the search condition.
  • the memory controller reads data in memory by first loading the data at the memory address into the cache and then reading the required data from the cache, which improves the efficiency with which the memory controller reads data.
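The cache-then-search read path above can be modeled minimally as follows. The cache and memory shapes and the predicate form are assumptions made for illustration.

```python
# Minimal model of the read path: check the cache, load the memory segment on a
# miss, then filter in the cache according to the search condition.
def search_segment(cache: dict, memory: dict, addr: int, predicate) -> list:
    if addr not in cache:           # cache miss: load the segment from memory
        cache[addr] = memory[addr]
    # then search for matching data in the cached segment
    return [d for d in cache[addr] if predicate(d)]
```

A subsequent search of the same segment hits the cache directly, which is the efficiency gain the paragraph describes.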
  • the first memory controller sends the first data to the first processor.
  • the first memory controller transmits the first data to the first processor, and the transmission process is similar to step S304.
  • when the processor that manages the memory where the root node of the tree to be searched is located is different from the processor that receives the search request message, the first memory controller needs to send the first data to the second processor first; then the second processor sends the first data to the first processor through the node controller. When the two processors are the same, the first data is already stored in the cache of the first processor, and the first memory controller only needs to notify the first processor of the storage location of the first data.
  • the first processor receives the first data, and stores the first data in a cache of the first processor.
  • in step S305, the first memory controller starts from the address of the root node of the tree to be searched and sequentially checks whether the data of each node in the parent tree where the root node is located satisfies the search condition.
  • the first memory controller needs to acquire the data of the next node according to the pointer information in each node, and the pointer information is usually represented by a memory address.
  • when the first memory controller determines from the pointer information that the memory address indicated by the pointer does not belong to the memory managed by the current processor, it can further determine that a subtree of the tree to be searched is stored in memory managed by another processor.
  • the first memory controller records the address range of the first memory and can determine whether the memory address indicated by the pointer information falls within that range. If so, the next data indicated by the pointer is in the first memory; if not, the next data indicated by the pointer is in other memory.
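The range check above is simple to sketch. The concrete range boundaries below are illustrative assumptions, not values from the method.

```python
# Assumed address range recorded by the first memory controller.
FIRST_MEMORY_RANGE = range(0x0000, 0x1000)

def pointer_is_local(pointer_addr: int) -> bool:
    # True: the next node is in the first memory and can be searched locally.
    # False: the subtree is in memory managed by another processor, so the
    # search instruction must be migrated to that processor's memory controller.
    return pointer_addr in FIRST_MEMORY_RANGE
```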
  • the first memory controller may send the memory address indicated by the pointer to the first processor, and the first processor may determine, according to the preset mapping relationship between the memory and the processor, a processor corresponding to the memory address indicated by the pointer.
  • the first memory controller may send the pointer information to the first processor, and the first processor determines, according to the preset mapping relationship between the memory and the processor, the processor corresponding to the memory where the subtree of the tree to be searched is located.
  • the search instruction is then sent by the first processor to the processor corresponding to the memory where the subtree of the tree to be searched is located.
  • for convenience of description, the memory where the subtree of the tree to be searched is located is recorded as the second memory, the processor managing the second memory is recorded as the third processor, and the memory controller associated with the third processor is recorded as the second memory controller, which includes a second migration processing unit.
  • the data structure of the same tree may be stored in the memory corresponding to different processors.
  • when the first memory controller, while searching the memory where the parent tree is located for data satisfying the search condition, finds that the parent tree contains pointers to other memory addresses, the first memory controller also needs to send the search instruction to the memory controller associated with the processor that manages the memory where the subtree is located, and further processing is performed by the migration processing unit in that memory controller.
  • the first memory controller and the memory controller associated with the processor that manages the memory where the subtree of the tree to be searched is located may search for data satisfying the search condition simultaneously; alternatively, the first memory controller may first finish searching the first memory for data satisfying the search condition and then migrate the search instruction to the memory controller associated with the processor that manages the memory where the subtree of the tree to be searched is located.
  • the second memory controller acquires the second data that satisfies the search condition according to the search instruction.
  • the second memory controller sends the second data to the first processor.
  • the first processor receives the second data, and stores the second data in a cache of the first processor.
  • the processing of steps S309 to S311 is the same as that of steps S305 to S307, and details are not described herein again. The search result acquired by the second migration processing unit according to the search instruction is recorded as the second data.
  • the first processor generates a search result according to the first data and the second data.
  • the first processor determines the search result of the search request message according to the acquired first data and the second data.
  • the first processor may summarize the first data and the second data according to a preset rule to obtain the summarized search result.
  • the first data and the second data may be summarized by sorting according to their receiving time, recording for each item the memory controller that sent the data satisfying the search condition together with that data; or by sorting according to the memory controller that sent the data, enumerating in turn each memory controller and the data satisfying the search condition that it sent.
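The two summarizing modes above can be sketched as follows. The record layout (`controller`, `recv_time`, `value`) is an assumption made for illustration.

```python
# Illustrative aggregation: sort matched records by receive time, or group
# them by the memory controller that sent them.
def summarize(records: list, by: str = "time"):
    if by == "time":
        # Mode 1: order by receiving time, keeping controller and data together.
        return sorted(records, key=lambda r: r["recv_time"])
    # Mode 2: enumerate each sending controller with its matching data.
    grouped = {}
    for r in records:
        grouped.setdefault(r["controller"], []).append(r["value"])
    return grouped
```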
  • the first processor may also set a preset condition, and when the preset condition is met, generate a search result according to the acquired data satisfying the search condition.
  • the preset condition may be at least one of the following:
  • Manner 1: When the data search time reaches the first threshold, the first processor generates the search result according to the first data and the second data.
  • Manner 2: When the quantity of the first data and the second data acquired by the first processor reaches the second threshold, the first processor generates the search result according to the first data and the second data.
  • the first threshold is a positive integer greater than 1. Considering that the first processor may fail to receive the data satisfying the search condition sent by some memory controller, the data search time may be limited according to the first threshold: when the data search time reaches the first threshold, the data already acquired by the first processor is aggregated to generate the search result.
  • alternatively, whether the number of data items satisfying the search condition acquired by the first processor reaches the second threshold may be used to determine when to generate the search result.
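The two stopping conditions above reduce to a simple predicate. The threshold values below are assumed for illustration only.

```python
# Assumed thresholds: maximum data-search time and minimum number of matches.
FIRST_THRESHOLD_S = 5.0
SECOND_THRESHOLD = 3

def should_generate_result(elapsed_s: float, received_count: int) -> bool:
    # Generate the search result when either the time limit (Manner 1) or the
    # quantity threshold (Manner 2) is reached.
    return elapsed_s >= FIRST_THRESHOLD_S or received_count >= SECOND_THRESHOLD
```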
  • optionally, the first processor may also send a lookup request message to all the processors in the computing device, where the lookup request message carries the identifier of the tree to be searched. Each other processor may determine, according to the identifier of the tree to be searched, whether a subtree of the tree to be searched is stored in the memory it manages. If so, that processor sends a response message to the first processor, the response message indicating that a subtree of the tree to be searched is stored in the memory managed by that processor.
  • in this way, the first processor may record the information of all the subtrees of the tree to be searched stored in the computing device. Each memory controller associated with memory storing a subtree of the tree to be searched may, after executing the search instruction, send to the first processor either data that satisfies the search condition or a response indicating that no data satisfies the search condition. When the first processor has received data from every such memory controller, the first processor generates the search result according to the acquired data.
  • for example, the processor 201 may send a lookup request message to the processor 211 and the processor 221, where the lookup request message carries the ID of the tree to be searched. The processor 201 itself also queries its managed memory 205 to determine whether a subtree of the tree to be searched is stored there, and records the query result.
  • the processor 211 queries, according to the ID of the tree to be searched, whether a subtree of the tree to be searched is stored in the memory 215 it manages, and sends the query result to the processor 201. Likewise, the processor 221 queries whether a subtree of the tree to be searched is stored in the memory 225 it manages and sends the query result to the processor 201.
  • the processor 201 records the query result sent by each processor in the computing device. When it has received, from every memory controller associated with a processor managing memory that stores part of the tree to be searched, either data satisfying the search condition or a response indicating that no data satisfies it, the processor 201 generates the search result according to the acquired data.
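The broadcast-and-collect pattern above can be modeled minimally. The in-memory table of subtree locations stands in for the real query messages; all identifiers are assumptions.

```python
# Hypothetical record of which processors store subtrees of which tree ID.
SUBTREES = {"p201": {1}, "p211": {1}, "p221": {2}}

def processors_holding(tree_id: int) -> set:
    # Processors that answered the lookup request affirmatively for this tree.
    return {p for p, trees in SUBTREES.items() if tree_id in trees}

def all_responses_in(tree_id: int, responded: set) -> bool:
    # The result can be generated once every recorded holder has responded.
    return processors_holding(tree_id) <= responded
```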
  • in the embodiments of the present invention, the search process may be migrated to the migration processing unit in the memory controller close to the data, thereby reducing the network latency caused by storing data structures from other processors in the computing device into the processor's cache, improving the efficiency of the search process and reducing the latency of the data search.
  • because the migration processing unit in the memory controller near the data searches directly in its memory, copying the complete data structure to the cache of the processor executing the search instruction is avoided, reducing the amount of data that needs to be transferred across processors, node controllers, caches, and memory, and saving data-transfer bandwidth. Because the cache of the processor executing the search instruction does not need to store the complete data structure, the storage overhead of the cache of the processor receiving the search request message is also reduced, improving the overall performance of the computing device.
  • for example, the processor 201 receives a search request message in which the ID of the tree to be searched is 1, the address of the root node of the tree to be searched is "101XX", and the search condition is to search for data stored from January 1 to January 2. It is assumed that the data satisfying the search condition are the nodes marked with black dots in the parent tree 217, the subtree 227, and the subtree 206.
  • the specific processing of the processor 201 is as follows:
  • the CPU 201 receives the search request message.
  • the CPU 201 determines, according to the search request message, a memory in which the address of the root node of the tree to be searched is located and a memory controller associated with the processor that manages the memory.
  • the CPU 201 may determine, according to the address of the root node of the tree to be searched, the memory of the root node of the tree to be searched as the memory 215. Then, the processor that manages the memory 215 is determined to be the processor 211 according to the preset mapping relationship between the memory and the processor. The memory controller associated with the processor 211 is determined to be the memory controller 212 according to the mapping relationship between the preset processor and the memory controller.
  • the CPU 201 determines a search instruction according to the search request message and a preset search algorithm.
  • the CPU 201 sends a search command to the CPU 211.
  • the CPU 211 sends a search command to the memory controller 212.
  • the data transfer process between a processor and a memory controller uses the network formed by the node controllers and the processors. For example, if the CPU 201 needs to send a search command to the memory controller 212, the command is first transmitted from the CPU 201 to the CPU 211 through the node controller 301, and the CPU 211 then sends the search command to the memory controller 212.
  • the migration processing unit 213 in the memory controller 212 acquires the first data that satisfies the search condition according to the search instruction.
  • the memory controller 212 sends the first data to the CPU 211.
  • the processor 211 sends the first data to the CPU 201.
  • the processor 201 receives the first data and stores the first data in the cache 204.
  • the CPU 221 sends a search command to the memory controller 222.
  • the migration processing unit 223 in the memory controller 222 acquires the second data that satisfies the search condition in the memory 225 according to the search instruction.
  • the memory controller 222 sends the second data to the CPU 221.
  • the CPU 221 transmits the second data to the CPU 201.
  • the CPU 201 receives the second data and stores the second data in the cache 204.
  • the CPU 201 sends a search command to the memory controller 202.
  • the migration processing unit 203 in the memory controller 202 acquires the third data that satisfies the search condition according to the search instruction.
  • the memory controller 202 sends the memory address information storing the third data to the CPU 201.
  • the CPU 201 generates a search result according to the acquired first data, second data, and third data.
  • when the memory where the root node of the tree to be searched is located is managed by the processor that receives the search request message, the search command can be directly sent to the memory controller corresponding to that memory, and the further search processing is completed by the migration processing unit in that memory controller. When a subtree of the tree to be searched is stored in memory managed by another processor, the search command is sent to the memory controller associated with the processor that manages the memory where the subtree is located, and the migration processing unit in that memory controller then completes the further search processing.
  • the memory controller associated with the processor that manages the memory where the root node is located may send the search instruction to one memory controller associated with a processor that manages memory where a subtree of the tree to be searched is located, or may send the search instruction to all such memory controllers.
  • the CPU 201 that receives the search request message may send the search instruction to the memory controller corresponding to the memory where the parent tree or a subtree of the tree to be searched is located, and the migration processing unit in that memory controller searches the memory or the cache for data satisfying the search condition. The data search is thus completed near the data, avoiding the network delay caused by copying the complete structure of the tree to be searched into the cache of the CPU 201 and improving the efficiency of the data search.
  • it should be understood that the sequence numbers of the above processes do not imply an order of execution; the order of execution of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present invention.
  • a method for data search according to an embodiment of the present invention is described in detail above with reference to FIG. 2A to FIG. 4. A device for data search, a processor, and a computing device according to embodiments of the present invention are described below with reference to FIG. 5 to FIG. 8.
  • FIG. 5 is a schematic diagram of a device 500 for data search according to an embodiment of the present invention.
  • the device 500 corresponds to the first processor in FIG.
  • the apparatus 500 includes a receiving unit 501, a processing unit 502, a generating unit 503, and a transmitting unit 504.
  • the receiving unit 501 is configured to receive a search request message, where the search request message includes a search condition and an address of a root node of the tree to be searched.
  • the processing unit 502 is configured to determine, according to the search request message, a first memory in which an address of a root node of the tree to be searched is located, and a first memory controller associated with a processor that manages the first memory;
  • the generating unit 503 is configured to determine a search instruction according to the search request message and a preset search algorithm, where the search instruction includes the search condition, the address of the root node of the tree to be searched, and the identifier of the device;
  • the sending unit 504 is configured to send the search instruction to the first memory controller
  • the receiving unit 501 is further configured to receive first data sent by the first memory controller, where the first data is data in the first memory that satisfies the search condition, and to receive second data sent by a second memory controller, where the second memory controller is a memory controller associated with the processor that manages the second memory, the second memory is the memory where a subtree of the tree to be searched is stored, and the second data is data in the second memory that satisfies the search condition;
  • the generating unit 503 is further configured to generate a search result according to the first data and the second data.
  • the processing unit 502 determines, according to the search request message, the first memory where the address of the root node of the tree to be searched is located, and the first memory controller associated with the processor that manages the first memory, including:
  • the processing unit 502 determines the first memory controller associated with the processor that manages the first memory according to the preset mapping relationship between the processor and the memory controller.
  • the generating unit 503 generates a search result according to the first data and the second data, including:
  • specifically, when the data search time reaches a first threshold, the search result is generated according to the received first data and the second data; or, when the quantity of data acquired by the receiving unit that satisfies the search condition reaches a second threshold, the search result is generated according to the received first data and the second data.
  • the search request message further includes an identifier of the tree to be searched
  • the sending unit 504 is further configured to send a lookup request message to all the processors in the computing device, where the lookup request message carries the identifier of the tree to be searched;
  • the receiving unit 501 is further configured to receive a response message sent by a processor that stores a subtree of the tree to be searched, where the response message is used to indicate that a subtree of the tree to be searched is stored in the memory managed by that processor;
  • the generating unit 503 is further configured to generate the search result when the receiving unit receives, from the first memory controller and from each memory controller associated with a processor that manages memory where a subtree of the tree to be searched is located, data satisfying the search condition.
  • it should be understood that the apparatus 500 may correspond to performing the method described in the embodiments of the present invention, and the above-described and other operations and/or functions of the respective units in the apparatus 500 are respectively for implementing the corresponding processes performed by the first processor in the foregoing method; for brevity, details are not repeated here.
  • FIG. 6 is a schematic structural diagram of a device 600 for data search according to an embodiment of the present invention.
  • the device 600 corresponds to the first memory controller in FIG.
  • the apparatus 600 includes a receiving unit 601, a processing unit 602, and a transmitting unit 603.
  • the receiving unit 601 is configured to receive a search instruction sent by the first processor, where the search instruction includes a search condition, an address of a root node of the tree to be searched, and an identifier of the first processor; the memory where the address of the root node of the tree to be searched is located is the first memory, and the first memory controller is the memory controller associated with the processor that manages the first memory;
  • the processing unit 602 is configured to search, in the first memory according to the search instruction, for data that meets the search condition; when first data satisfying the search condition exists in the first memory, the first memory controller acquires the first data and sends the first data to the first processor according to the identifier of the first processor;
  • the sending unit 603 is configured to send the search instruction to the second memory controller when the tree to be searched has a subtree in the second memory.
  • the processing unit 602 searches the first memory for data that meets the search condition according to the search instruction as follows: the data corresponding to the address in the first memory is loaded into the first cache, where the first cache is the cache included in the first processor, and the first data satisfying the search condition is then searched for in the first cache.
  • it should be understood that the apparatus 600 may correspond to performing the method described in the embodiments of the present invention, and the above and other operations and/or functions of the respective units in the apparatus 600 are respectively for implementing the corresponding processes performed by the first memory controller in the foregoing method; for brevity, details are not described herein.
  • FIG. 7 is a schematic diagram of a processor 700 according to an embodiment of the present invention.
  • the processor 700 includes a processing circuit 701, a memory controller 702, a cache 703, and a bus 704.
  • the processing circuit 701, the memory controller 702, and the cache 703 communicate via the bus 704.
  • the memory controller 702 includes a migration processing circuit 7021, a storage circuit 7022, a communication circuit 7023, and a bus 7024.
  • the migration processing circuit 7021, the storage circuit 7022, and the communication circuit 7023 in the memory controller 702 communicate through the bus 7024.
  • the storage circuit 7022 of the memory controller 702 is configured to store execution instructions, and the migration processing circuit 7021 is configured to execute the instructions stored by the storage circuit 7022.
  • the memory circuit 7022 of the memory controller 702 stores program code, and the memory controller 702 can call the program code stored in the memory circuit 7022 of the memory controller 702 to perform the following operations:
  • Receiving a search request message, where the search request message includes a search condition and an address of a root node of the tree to be searched;
  • Receiving second data sent by the second memory controller, where the second memory controller is the memory controller associated with the processor that manages the second memory, and the second memory is the memory where a subtree of the tree to be searched is stored;
  • the second data is data in the second memory that satisfies the search condition;
  • Search results are generated based on the first data and the second data.
  • the processing circuit 701 may be an application-specific integrated circuit (ASIC), a programmable logic device (PLD), a field-programmable gate array (FPGA), or another programmable logic device.
  • the bus 704 is an on-chip interconnect bus.
  • the processing circuit 701, the memory controller 702, and the cache 703 can also communicate by direct connection, for example by means of a switch or direct circuit wiring.
  • the migration processing circuit 7021 can be an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), or other programmable logic device.
  • the storage circuit 7022 can include random access memory and provides instructions and data to the migration processing circuit 7021.
  • the storage circuit 7022 can also include a non-volatile random access memory.
  • the communication circuit 7023 can be a communication interface for the memory controller 702 to communicate with other hardware circuits.
  • the bus 7024 includes an on-chip interconnect bus.
  • the migration processing circuit 7021, the storage circuit 7022, and the communication circuit 7023 can also communicate by direct connection, for example by means of a switch or direct circuit wiring.
  • it should be understood that the processor 700 corresponds to the apparatus 500 provided by the embodiment of the present invention, and the processor 700 is used to implement the corresponding flow performed by the first processor in the method shown in FIG. 3; for brevity, details are not described herein again.
  • FIG. 8 is a schematic diagram of a memory controller 800 according to an embodiment of the present invention.
  • the memory controller includes a migration processing circuit 801, a storage circuit 802, a communication circuit 803, and a bus 804.
  • the migration processing circuit 801, the storage circuit 802, and the communication circuit 803 communicate via the bus 804; the storage circuit 802 stores execution instructions, and when the memory controller 800 runs, the migration processing circuit 801 executes the instructions in the storage circuit 802 to perform the following operations using the hardware resources of the memory controller 800:
  • receiving a search instruction sent by a first processor, the search instruction including a search condition and the address of the root node of a tree to be searched, where the memory in which the root-node address resides is a first memory, and the memory controller 800 is the memory controller associated with the processor that manages the first memory;
  • searching the first memory for data that satisfies the search condition according to the search instruction; when first data satisfying the search condition exists in the first memory, acquiring the first data and sending the first data to the first processor;
  • sending the search instruction to a second memory controller when the tree to be searched has a subtree in a second memory, so that the second memory controller searches the second memory for data satisfying the search condition according to the search instruction; when second data satisfying the search condition exists in the second memory, the second memory controller acquires the second data and sends it to the first processor; the second memory is the memory in which the subtree of the tree to be searched is stored, and the second memory controller is the memory controller associated with the processor that manages the second memory.
  • the migration processing circuit 801 may be an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), or other programmable logic device.
  • the memory circuit 802 can include random access memory and provide instructions and data to the migration processing circuit 801.
  • Memory circuit 802 may also include a non-volatile random access memory.
  • storage circuit 802 can also store device type information.
  • the communication circuit 803 can be a communication interface for the memory controller 800 to communicate with other hardware circuits.
  • in addition to communicating over the on-chip interconnect bus 804, the migration processing circuit 801, the storage circuit 802, and the communication circuit 803 may also communicate by direct connection, for example by means of a switch or direct circuit wiring.
  • the memory controller 800 corresponds to the apparatus 600 provided by the embodiment of the present invention, and the memory controller 800 is used to implement the corresponding flow performed by the first memory controller in the method shown in FIG. 3; for brevity, details are not repeated here.
  • an embodiment of the present invention further provides a computing device, where the computing device includes at least two processors and a memory, and each of the at least two processors is associated with at least one memory. a controller, each of the at least one memory controller is configured to perform data communication between the each processor and the memory; the computing device includes a first processor, a first memory controller, Second memory controller;
  • the first processor is configured to receive a search request message, where the search request message includes a search condition and an address of a root node of the tree to be searched; and determining, according to the search request message, an address of a root node of the tree to be searched a first memory and a first memory controller associated with the processor managing the first memory; determining a search instruction according to the search request message and a preset search algorithm, wherein the search instruction includes the search condition, the An address of a root node of the tree to be searched and an identifier of the first processor; sending the search instruction to the first memory controller; receiving first data sent by the first memory controller, the first One data is the data in the first memory that satisfies the search condition; the second data sent by the second memory controller is received, and the second memory controller is a memory controller associated with the processor that manages the second memory,
  • the second memory is a memory in which the subtree of the tree to be searched is stored, and the second data is data in the second memory that satisfies the search condition; and generating a search result according to the first data and the second data;
  • the first memory controller is configured to receive the search instruction sent by the first processor, and search, in the first memory, data that meets the search condition according to the search instruction, when the first memory is in the first memory When there is first data that satisfies the search condition, the first data is acquired, and the first data is sent to the first processor according to the identifier of the first processor; when the to-be-searched When the tree has a subtree in the second memory, the search instruction is sent to the second memory controller, where the second memory is a memory in which the subtree of the tree to be searched is stored.
  • the second memory controller is a memory controller associated with the processor that manages the second memory;
  • the second memory controller is configured to receive the search instruction sent by the first memory controller, and to search for data that meets the search condition in the second memory according to the search instruction; when second data that satisfies the search condition exists in the second memory, the second data is acquired and sent to the first processor according to the identifier of the first processor.
  • in this way, the computation can be migrated to the migration processing unit in the memory controller close to the data, which reduces the network delay incurred when the complete data storage structure is copied into the cache of the processor that receives the search request message, improves the efficiency of the search process, and reduces the latency of data searching.
  • because the migration processing unit in the memory controller near the data searches directly in its own memory or cache, copying the complete data structure into the cache of the processor receiving the search request message is avoided, which reduces the amount of data that must be transferred across processors, node controllers, caches, and memories, and saves transmission bandwidth.
  • since the cache of the processor receiving the search request message no longer needs to store the complete data structure, the storage overhead of that cache is also reduced, improving the overall performance of the computing device.
  • the above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof.
  • the above-described embodiments may be implemented in whole or in part in the form of a computer program product.
  • the computer program product includes one or more computer instructions.
  • when the computer program instructions are loaded or executed on a computer, the processes or functions described in accordance with embodiments of the present invention are produced in whole or in part.
  • the computer can be a general purpose computer, a special purpose computer, a computer network, or other programmable device.
  • the computer instructions can be stored in a computer-readable storage medium or transferred from one computer-readable storage medium to another; for example, the computer instructions can be transmitted from one website, computer, server, or data center to another by wire (for example, coaxial cable, optical fiber, or digital subscriber line (DSL)) or wirelessly (for example, infrared, radio, or microwave).
  • the computer readable storage medium can be any available media that can be accessed by a computer or a data storage device such as a server, data center, or the like that contains one or more sets of available media.
  • the usable medium can be a magnetic medium (eg, a floppy disk, a hard disk, a magnetic tape), an optical medium (eg, a DVD), or a semiconductor medium.
  • the semiconductor medium can be a solid state disk (SSD).
  • the disclosed systems, devices, and methods may be implemented in other manners.
  • the device embodiments described above are merely illustrative.
  • the division of units is merely a logical function division; in actual implementation there may be other divisions. For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, device or unit, and may be in an electrical, mechanical or other form.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

A data search method, comprising: a first processor receives a search request message, the search request message including a search condition and the address of the root node of a tree to be searched; determines, from the search request message, the first memory in which the root-node address resides and the first memory controller associated with the processor that manages the first memory; determines a search instruction from the search request message and a preset search algorithm; sends the search instruction to the first memory controller; receives first data sent by the first memory controller, the first data being data in the first memory that satisfies the search condition; receives second data sent by a second memory controller, the second memory controller being the memory controller associated with the processor that manages a second memory, the second memory being the memory in which a subtree of the tree to be searched is stored, and the second data being data in the second memory that satisfies the search condition; and generates a search result from the first data and the second data.

Description

Data search method, apparatus, and related device — Technical field

This application relates to the field of computers, and in particular to a data search method, apparatus, processor, memory controller, and computing device within a computing device.
Background

With the development of computer technology, multiple processors, for example central processing units (CPU), can be configured in one computing-device system to improve its processing efficiency. FIG. 1A is a schematic diagram of the system architecture of a computing device in the prior art. As shown, the system includes multiple CPUs, each configured with a cache. The CPUs share one main memory, and each CPU manages a segment of it. Access to data stored in memory goes through the cache, a small, fast store inside the CPU that holds copies of frequently used data from memory, so as to avoid the network overhead and latency caused by frequent accesses to the slow, large-capacity external main memory.

In common applications such as databases, data is usually stored in a linked-list or tree structure. FIG. 1B is a schematic diagram of linked-list and tree data structures in the prior art. Each linked list or tree contains a large number of nodes with the same internal layout, each consisting of a data part and a pointer part. There may be one or more pointers, which identify the relationships between different pieces of data in the storage structure. When any CPU in the computing device receives a search instruction, that CPU first obtains, from the main memories or caches corresponding to the various CPUs, the complete data structure of the tree or linked list related to the search instruction, stores it in its own cache, and then looks up the target data in the cache according to the search keyword to determine the search result. In this prior-art process, however, the CPU that receives the search instruction must fetch the data of the root node following the linked-list or tree structure, then follow that node's pointer to another CPU to fetch the data of the next node, and so on; each fetch of a node together with the next node its pointer designates incurs two network delays. A network delay here refers to the data-transfer path through the node controller, processor, cache, and main memory; each network delay is typically on the order of 300 ns, so two delays can reach 600 ns. The search therefore takes a long time. Moreover, because multiple processors in the computing device must repeatedly execute instructions to fetch data and send it to the processor that received the search instruction, the performance of the whole system degrades.
Summary

This application provides a data search method, apparatus, processor, memory controller, and computing device, which can solve the prior-art problems of long search times and degraded system performance, reduce the latency of data searching, and improve the efficiency of the search process and the performance of the whole system.
In a first aspect, a data search method is provided: any processor in a computing device receives a search request message that includes a search condition and the address of the root node of a tree to be searched. First, the first processor determines, from the root-node address in the search request message, the first memory in which the root-node address resides and the first memory controller associated with the processor that manages the first memory. The first processor then determines a search instruction according to the search request message and a preset search algorithm, the search instruction including the search condition, the root-node address of the tree to be searched, and the identifier of the first processor, and sends the search instruction to the first memory controller. It then receives first data, sent by the first memory controller, satisfying the search condition in the first memory, as well as second data, satisfying the search condition, obtained by the second memory controller associated with the processor that manages the second memory in which a subtree of the tree to be searched resides. Finally, the first processor generates a search result from the first data and the second data.

Optionally, the search request message further includes an identifier of the tree to be searched, and the search instruction also includes this identifier.

In a possible implementation, the first processor determines, from the search request message, the first memory in which the root-node address resides and the associated memory controller as follows: first, the first processor determines, from the root-node address of the tree to be searched, the first memory in which that address resides; it then determines, from a preset memory-to-processor mapping table, the processor associated with the first memory, i.e., the processor that manages the first memory; and it then determines, from the processor-to-memory-controller mapping, the first memory controller associated with the processor that manages the first memory.

In a possible implementation, the first processor generates the search result from the first data and the second data as follows: when the data search time reaches a first threshold, the first processor generates the search result from the first data and second data received; or, when the amount of first data and second data obtained by the first processor reaches a second threshold, the first processor generates the search result from the received first data and second data.

In a possible implementation, the first processor sends a lookup request message carrying the identifier of the tree to be searched to all processors in the computing device, to instruct each processor to determine, from that identifier, whether the memory it manages stores a subtree of the tree to be searched; it receives response messages from the processors that store such a subtree, each response message indicating that the memory managed by that processor stores a subtree of the tree to be searched; and when the first processor has received the data satisfying the search condition sent by the first memory controller and by the memory controllers associated with all processors managing memories in which subtrees of the tree reside, it generates the search result.

As described above, when a processor receives a search request message, the computation can be migrated to the migration processing unit in the memory controller close to the data. This reduces the network delay that would otherwise be incurred between the processors of the computing device by storing the complete data storage structure into the cache of the processor that received the search request message, improves the efficiency of the search process, and reduces the latency of data search processing. Further, because the migration processing unit in the memory controller near the data searches directly in its own memory or cache, copying the complete data structure into the cache of the processor receiving the search request message is avoided, reducing the amount of data transferred across processors, node controllers, caches, and memories and saving transmission bandwidth. Moreover, since the cache of the processor receiving the search request message no longer needs to store the complete data structure, its storage overhead is also reduced, improving the overall performance of the computing device.
In a second aspect, this application provides a data search method: a first memory controller receives a search instruction sent by a first processor, the search instruction including a search condition, the address of the root node of a tree to be searched, and the identifier of the first processor; the memory in which the root-node address resides is the first memory, and the first memory controller is the memory controller associated with the processor that manages the first memory. The first memory controller then searches the first memory for data satisfying the search condition according to the search instruction; when first data satisfying the search condition exists in the first memory, the first memory controller obtains the first data and sends it to the first processor according to the first processor's identifier. When the tree to be searched has a subtree in a second memory, the first memory controller sends the search instruction to the second memory controller, so that the second memory controller searches the second memory for data satisfying the search condition according to the instruction; when second data satisfying the search condition exists in the second memory, the second memory controller obtains the second data and sends it to the first processor. The second memory is the memory in which the subtree of the tree to be searched is stored, and the second memory controller is the memory controller associated with the processor that manages the second memory.

Optionally, the search request message further includes an identifier of the tree to be searched, and the search instruction includes this identifier.

In a possible implementation, each processor further includes a cache, and the first memory controller searches the first memory for the target data according to the search instruction as follows: determining, from the address of the first memory, whether the first cache holds a copy of the first memory's data; when the first cache holds such a copy, looking up the first data satisfying the search condition in the first cache; or, when the first cache holds no such copy, loading the data of the first memory into the first cache, the first cache being the cache included in the first processor, and the first memory controller then looking up the first data satisfying the search condition in the first cache.

As described above, when a processor receives a search request message, the migration processing unit in the memory controller close to the data looks up the data satisfying the search request message, which reduces the network delay incurred between the processors of the computing device by storing the complete data storage structure into the cache of the processor receiving the search request message, improves the efficiency of the search process, and reduces the latency of data search processing.
In a third aspect, this application provides a computing device that includes at least two processors and memory; each of the at least two processors is associated with a memory controller, which is used for data communication between that processor and the memory. The computing device includes a first processor, a first memory controller, and a second memory controller.

The first processor is configured to: receive a search request message including a search condition and the address of the root node of a tree to be searched; determine, from the search request message, the first memory in which the root-node address resides and the first memory controller associated with the processor that manages the first memory; determine a search instruction from the search request message and a preset search algorithm, the search instruction including the search condition, the root-node address, and the identifier of the first processor; send the search instruction to the first memory controller; receive first data sent by the first memory controller, the first data being data in the first memory that satisfies the search condition; receive second data sent by the second memory controller, the second memory controller being the memory controller associated with the processor that manages the second memory in which the subtree of the tree to be searched is stored, and the second data being data in the second memory that satisfies the search condition; and generate a search result from the first data and the second data.

The first memory controller is configured to: receive the search instruction sent by the first processor; search the first memory for data satisfying the search condition according to the search instruction; when first data satisfying the search condition exists in the first memory, obtain the first data and send it to the first processor according to the first processor's identifier; and, when the tree to be searched has a subtree in the second memory, send the search instruction to the second memory controller, the second memory being the memory in which the subtree of the tree is stored, and the second memory controller being the memory controller associated with the processor that manages the second memory.

The second memory controller is configured to: receive the search instruction sent by the first memory controller; search the second memory for data satisfying the search condition according to the search instruction; and, when second data satisfying the search condition exists in the second memory, obtain the second data and send it to the first processor according to the first processor's identifier.
In a fourth aspect, this application provides a data search apparatus that includes modules for performing the data search method of the first aspect or any possible implementation of the first aspect.

In a fifth aspect, this application provides a data search apparatus that includes modules for performing the data search method of the second aspect or any possible implementation of the second aspect.

In a sixth aspect, this application provides a processor that includes a memory controller, a cache, and a bus, the memory controller and the cache communicating via the bus. The memory controller includes a migration processing circuit, a storage circuit, and a bus, the migration processing circuit and the storage circuit communicating via that bus or by direct connection. The storage circuit of the memory controller stores execution instructions; when the processor runs, the memory controller executes the instructions in the storage circuit to perform, using the hardware resources of the processor, the operational steps of the data search method of the first aspect or any possible implementation thereof.

In a seventh aspect, this application provides a computer-readable storage medium storing instructions that, when run on a computer, cause the computer to perform the method of the first aspect or any possible implementation thereof.

In an eighth aspect, this application provides a memory controller that includes a migration processing circuit, a storage circuit, a communication circuit, and a bus, these circuits communicating via the bus or by direct connection. The storage circuit stores execution instructions; when the memory controller runs, the processing circuit executes the instructions in the storage circuit to perform, using the hardware resources of the memory controller, the method of the second aspect or any possible implementation thereof.

In a ninth aspect, this application provides a computer-readable storage medium storing instructions that, when run on a computer, cause the computer to perform the method of the second aspect or any possible implementation thereof.

In a tenth aspect, this application provides a computer-readable storage medium storing instructions that, when run on a computer, cause the computer to perform the method of the third aspect or any possible implementation thereof.

On the basis of the implementations provided in the above aspects, this application may be further combined to provide more implementations.
Brief description of the drawings

To describe the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings used in the embodiments are briefly introduced below.

FIG. 1A is a schematic diagram of a computing-device system architecture in the prior art;

FIG. 1B is a schematic diagram of linked-list and tree structures in the prior art;

FIG. 2A is a schematic diagram of the system architecture of a computing device according to an embodiment of the present invention;

FIG. 2B is a schematic diagram of the system architecture of another computing device according to an embodiment of the present invention;

FIG. 3 is a schematic flowchart of a data search method according to an embodiment of the present invention;

FIG. 4 is a schematic flowchart of another data search method according to an embodiment of the present invention;

FIG. 5 is a schematic structural diagram of a data search apparatus 500 according to an embodiment of the present invention;

FIG. 6 is a schematic structural diagram of a data search apparatus 600 according to an embodiment of the present invention;

FIG. 7 is a schematic structural diagram of a processor 700 for data search according to an embodiment of the present invention;

FIG. 8 is a schematic structural diagram of a memory controller 800 for data search according to an embodiment of the present invention.
Detailed description

The technical solutions in the embodiments of the present invention are described in detail below with reference to the accompanying drawings.

FIG. 2A is a schematic diagram of the system architecture of a computing device 100 according to an embodiment of the present invention. As shown, the computing device 100 includes a node controller 301, multiple processors, and memory. The computing device 100 may be any device with computing capability, such as a server, a desktop computer, a portable computer, or a virtual machine.

The node controller 301 interconnects the processors; the processors and the node controller 301 may communicate via an on-chip interconnect bus. The node controller may be an independent hardware chip composed of multiple electronic components, or a hardware structure integrated inside a processor. FIG. 2A takes an independent hardware chip as an example.

It should be noted that the processors in the computing device may be interconnected through one node controller 301, or through a switching network composed of multiple node controllers 301 that divide the work and cooperate to transfer data among the processors. For ease of description, the following embodiments of the present invention assume that the computing device contains only one node controller.
The computing device 100 in FIG. 2A is a multiprocessor system; the following description uses three processors as an example. For instance, the computing device 100 includes processor 201, processor 211, and processor 221. The three processors share the storage resources of one memory, and each processor manages a segment of it: the segment managed by processor 201 is denoted memory 205, the segment managed by processor 211 is denoted memory 215, and the segment managed by processor 221 is denoted memory 225.

It should be noted that memory 205, memory 215, and memory 225 are each a part of the memory shared by the processors; each may be a contiguous storage region in memory or a discontiguous one. The sizes of the storage regions managed by the CPUs may be equal or different and can be configured according to service requirements; the present invention imposes no limitation.

Memory, also called main memory, is the storage space that the CPU can address directly; it can be made of semiconductor devices and is characterized by fast access, making it a key component of the computing device. Programs in everyday use, such as the Windows operating system, typing software, and games, are generally installed on external storage such as a hard disk, but their functions cannot be used from there; they must be loaded into memory for processing. Memory generally uses semiconductor storage cells, including random access memory (RAM) or flash. Memory can store application data, such as data in tree or linked-list form; for ease of description, the following embodiments of the present invention describe the data structure stored in main memory as a tree.

Typically, an application (for example, a database) stores data in a linked-list or tree structure. The linked-list or tree data structure may be stored according to a preset algorithm; for example, consecutively stored data may need to be placed in the memories corresponding to different processors, or each data write may be handled in round-robin fashion by the CPUs in turn and stored in their corresponding memories. The data storage process itself is prior art and is not described further in the present invention. The following description uses a tree as the stored structure.

Each tree's data storage structure contains a large number of nodes with the same internal layout, each comprising a data part and a pointer part. There may be one or more pointers, which identify the relationships between different pieces of data in the storage structure. The root node identifies the starting node of the tree's data storage structure. For a given tree, the part of its data structure that contains the root node, stored in some memory, is called the parent tree; a part of the data structure that does not contain the root node is called a subtree. For example, as shown in FIG. 2A, if memory 215 stores the part containing the root node, the data structure stored in memory 215 may be called the parent tree; for ease of later description it is denoted parent tree 216. If the data structure of the same tree is distributed across memories managed by different processors, for example memory 205 stores a part of the tree's data structure not containing the root node, the data structure stored in memory 205 may be called a subtree of the tree, denoted subtree 206. Likewise, the data structure stored in memory 225 may be called subtree 226.
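The node layout described above (a data part plus a pointer part whose pointers are plain memory addresses) can be sketched as follows. This is a minimal illustrative model, not the patent's implementation; all names, keys, and addresses here are made up for the example.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    """One tree node: a data part plus a pointer part.

    The pointers are stored as memory addresses (integers here), so a
    child node may reside in a memory segment managed by a different
    processor than the one holding this node."""
    key: int                                              # data part
    child_addrs: List[int] = field(default_factory=list)  # pointer part

# A tiny tree laid out in a dictionary standing in for memory;
# the addresses are hypothetical.
memory = {
    0x100: Node(key=10, child_addrs=[0x200, 0x300]),  # root node (parent tree)
    0x200: Node(key=5),                               # could live in another segment
    0x300: Node(key=20),
}

root = memory[0x100]
print([memory[a].key for a in root.child_addrs])  # keys of the root's children
```

Whoever follows `child_addrs` must decide which memory segment, and hence which processor and memory controller, owns each address; that decision is what drives the instruction migration described later in the method.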
The computing device 100 shown in FIG. 2A includes at least two processors, for example processor 201, processor 211, and processor 221. FIG. 2A shows only three processors; in a specific implementation the computing device 100 may include two or more processors, and the present invention imposes no limitation. For ease of the following description, the embodiments take a computing device 100 with three processors as the example for further detail.

Each processor is configured with one cache and at least one memory controller (MC). The cache stores copies of the data in the memory managed by that processor, so as to avoid the network overhead and latency caused by frequent accesses to the slow, large-capacity external main memory. For example, in FIG. 2A, processor 201 is configured with cache 204, processor 211 with cache 214, and processor 221 with cache 224.

A processor may be a CPU, or another general-purpose processor, a digital signal processor (DSP), a programmable logic device (PLD), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or another programmable logic device. A general-purpose processor may be a microprocessor or any conventional processor. In the following description of the embodiments, a CPU is taken as the example.

The memory controller is the key component inside the computing device that controls the exchange of data between memory and the CPU. It may be an independent hardware chip. The memory controller determines important parameters such as the maximum memory capacity the computing device can use, the memory type and speed, and the data depth and data width of the memory chips; in other words, it determines the memory performance of the computing device and therefore has a significant influence on its overall performance. In a traditional computing-device system, the processor has no pins connected directly to memory; when the processor needs to read target data from memory, it passes the memory address of the target data to the cache, which checks whether it holds a cache segment corresponding to that address. If it does, the processor can read the target data directly; if not, the memory controller loads the data at that memory address into the cache, and the processor then reads the target data.

Further, the memory controller includes a migrated processing unit (MPU). Each processor may be configured with at least one migration processing unit. The migration processing unit is built from dedicated hardware inside the memory controller and implements a subset of the functions commonly used by the processor, covering only common fixed-point operations such as addition, subtraction, multiplication, division, logical AND, logical OR, logical XOR, memory reads and writes, and atomic operations. An atomic operation refers to the case where the multiple instructions executed by the processor carry ordering and continuity requirements and must not be interrupted during execution; the operation performed by each such instruction is then called an atomic operation. For example, an instruction sequence might first read the value of parameter a, then add 1 to it, and finally write the modified parameter a back. It should be understood that the MPU includes no complex features such as floating point, vectors, system-state operations, or out-of-order pipelines.

FIG. 2B is a schematic diagram of the system architecture of another computing device 100 according to an embodiment of the present invention. The difference from FIG. 2A is that in FIG. 2A the memory controller is integrated inside the processor, whereas in FIG. 2B the memory controller is a hardware structure independent of the processor.
Next, a data search method provided by an embodiment of the present invention is further introduced with reference to FIG. 3. As shown, the method includes:

S301. The first processor receives a search request message.

The first processor is any processor in the computing device 100 shown in FIG. 2A or FIG. 2B. The search request message may be one sent by a user, or one sent by an application or another processor. The search request message carries a search condition and the address of the root node of the tree to be searched. The search condition may be any parameter of the stored data, such as a name, a time, or a keyword of the stored data.

Optionally, the search request message further includes an identification of the tree to be searched.

S302. The first processor determines, from the search request message, the first memory in which the root-node address of the tree to be searched resides and the first memory controller associated with the processor that manages the first memory.

Specifically, after receiving the search request message, the first processor determines the memory containing the root node of the tree to be searched and the first memory controller associated with the processor managing the first memory as follows:

S3021. The first processor sends the root-node address of the tree to be searched to the address decoder in the first processor.

The address decoder is the component in the processor that resolves address access relationships; it can determine, from the root-node address of the tree to be searched, the memory address at which it resides.

S3022. The address decoder returns to the first processor the memory address in which the root-node address of the tree to be searched resides.

S3023. The first processor determines, from the memory address corresponding to the root-node address and the preset memory-to-processor mapping, the processor that manages the memory in which the root-node address resides.

The first processor stores a preset memory-to-processor mapping; once it has obtained the memory address of the root node of the tree to be searched, it can determine from this mapping the processor corresponding to that memory address, i.e., the processor that manages the memory in which the root-node address resides.

S3024. The first processor determines, from the preset processor-to-memory-controller mapping, the memory controller associated with the processor that manages the memory in which the root-node address resides.

The first processor stores a preset processor-to-memory-controller mapping; after determining the processor corresponding to the memory address of the root node, the first processor can determine from this mapping the memory controller associated with that processor. For ease of later description, this memory controller is denoted the first memory controller.
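Steps S3021 to S3024 amount to a three-stage table lookup: address to memory segment, segment to managing processor, processor to associated memory controller. The sketch below models this chain with made-up address ranges and identifiers; the table contents and the function name are assumptions for illustration, not the patent's data.

```python
# Hypothetical mapping tables mirroring S3021-S3024.
MEMORY_RANGES = {              # memory segment -> (start, end) address range
    "memory205": (0x0000, 0x0FFF),
    "memory215": (0x1000, 0x1FFF),
    "memory225": (0x2000, 0x2FFF),
}
MEMORY_TO_PROCESSOR = {        # preset memory-to-processor mapping table
    "memory205": "cpu201", "memory215": "cpu211", "memory225": "cpu221",
}
PROCESSOR_TO_CONTROLLER = {    # preset processor-to-memory-controller mapping
    "cpu201": "mc202", "cpu211": "mc212", "cpu221": "mc222",
}

def locate_first_controller(root_addr: int) -> tuple:
    """Return (memory, processor, memory controller) for a root-node address."""
    for mem, (lo, hi) in MEMORY_RANGES.items():
        if lo <= root_addr <= hi:                 # S3021/S3022: address decode
            cpu = MEMORY_TO_PROCESSOR[mem]        # S3023: memory -> processor
            return mem, cpu, PROCESSOR_TO_CONTROLLER[cpu]  # S3024: processor -> MC
    raise ValueError("address not in any managed memory segment")

print(locate_first_controller(0x10A0))  # an address inside memory215's range
```

With these tables, an address in memory215's range resolves to cpu211 and its memory controller mc212, which is the controller the search instruction is sent to in S304.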
S303. The first processor determines a search instruction from the search request message and a preset search algorithm.

In the computing devices shown in FIG. 2A and FIG. 2B, the memory also stores a preset search algorithm, which controls the order or timing in which the memory controllers execute the search request message during the search, as well as the search rules. It may be stored in the memory managed by any processor. Typically, only one processor in the computing device can execute this search algorithm. For example, in the computing device shown in FIG. 2A, the preset search algorithm is stored in memory 215 and processor 201 can execute it.

When the first processor receives the search request message, it can notify the processor in the computing device capable of executing the preset search algorithm to generate a search instruction from the search condition in the search request message and the preset search algorithm. The search instruction includes the search condition, the root-node address of the tree to be searched, and the identifier of the first processor.

Optionally, the search instruction may further include the identifier of the tree to be searched. When the memory contains the data structures of multiple trees, the target tree to be looked up can first be distinguished by the identifier of the tree to be searched, and the data satisfying the search condition can then be looked up.

The first processor determines the search instruction in either of the following two cases:

Case 1: the processor that receives the search request message is itself able to execute the search algorithm, i.e., the first processor can execute the search algorithm. When the first processor receives the search request message, it can generate the search instruction directly from the preset search algorithm and the search condition.

Case 2: the processor that receives the search request message and the processor able to execute the search algorithm are not the same processor, i.e., the first processor cannot execute the search algorithm. When the first processor receives the search request message, it sends the search request message via the node controller to the processor able to execute the search algorithm; that processor generates the search instruction from the search condition in the message and the preset search algorithm, and sends the search instruction to the first processor.

It should be noted that in this embodiment steps S302 and S303 have no required order: step S302 may be executed first and then step S303, step S303 first and then step S302, or both at the same time.

S304. The first processor sends the search instruction to the first memory controller.

Specifically, in the computing devices shown in FIG. 2A and FIG. 2B, the transfer of the search instruction between a processor and the first memory controller proceeds as follows: first, the processor sending the search instruction transmits it, through the network formed by node controllers, to the processor receiving the search instruction; that processor then sends the instruction to its associated memory controller; finally, the migration processing unit in the memory controller executes it. Because the migration processing unit stores a subset of the functions commonly used by the processor, it can complete the search of the search instruction quickly, improving search efficiency.

Accordingly, when the memory containing the root-node address determined in step S302 is managed by another processor, for example when it is managed by a second processor, the search process is: the first processor first sends the search instruction, through the node-controller network, to the second processor; the second processor then sends the instruction to the first memory controller; finally, the first migration processing unit in the first memory controller executes the search instruction. Here the second processor is the processor, determined by the first processor in step S302, corresponding to the memory in which the root node of the tree to be searched resides; the first memory controller is the memory controller associated with the second processor; and the first migration processing unit is the migration processing unit in the first memory controller.

For example, in the computing device shown in FIG. 2A, suppose processor 201 receives the search request message and the processor corresponding to the memory containing the root node of the tree to be searched is processor 211. Processor 201 first sends the search instruction to processor 211 via node controller 301; processor 211 then sends the instruction to memory controller 212; finally, migration processing unit 213 in memory controller 212 performs the further search processing.

Optionally, when the memory containing the root-node address determined in step S302 is managed by the first processor, i.e., by the processor that received the search request message, the first processor sends the search instruction directly to its associated memory controller, and the migration processing unit in that memory controller performs the further search processing.

For example, in the computing device shown in FIG. 2A, suppose processor 201 receives the search request message and the processor corresponding to the memory containing the root node is processor 201 itself; then processor 201 first sends the search instruction to memory controller 202, and migration processing unit 203 in memory controller 202 then performs the further search processing.
S305. When data satisfying the search condition exists in the first memory, the first memory controller obtains the first data satisfying the search condition according to the search instruction.

Specifically, the first memory is the memory in which the parent tree of the tree to be searched resides. After the first memory controller receives the search instruction, the first migration processing unit in the first memory controller first determines whether the cache of the processor managing the first memory holds the corresponding cache segment. If it does, the unit directly retrieves the data satisfying the search condition; here, the data in the first memory satisfying the search condition is denoted the first data. If it does not, the data corresponding to the first memory's address is loaded into the cache, and the first migration processing unit then retrieves the first data according to the search condition.
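The cache-first lookup in step S305 can be sketched as a small software model. This is only an illustration of the check-then-load-then-search order, assuming a dictionary stands in for both the cache and the memory segment; it is not the hardware behavior of the MPU.

```python
def search_segment(addresses, cache, memory, condition):
    """Model of S305: for each node address, use the cached copy if one
    exists; otherwise load the data from memory into the cache first,
    then test the search condition on the cached data."""
    hits = []
    for addr in addresses:
        if addr not in cache:           # no cache segment for this address yet
            cache[addr] = memory[addr]  # load the memory data into the cache
        data = cache[addr]
        if condition(data):
            hits.append(data)
    return hits

memory = {0x100: 10, 0x200: 5, 0x300: 20}   # made-up node values
cache = {0x100: 10}                          # only the root is cached initially
first_data = search_segment([0x100, 0x200, 0x300], cache, memory,
                            lambda v: v >= 10)
print(first_data, len(cache))
```

After the call, every visited address has been pulled into the cache, mirroring the note below that a memory controller reads memory by loading into cache first.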
It should be noted that a memory controller typically reads data from memory by first loading the data at the memory address into the cache and then reading the required data from the cache, which improves the efficiency with which the memory controller reads data.

S306. The first memory controller sends the first data to the first processor.

Specifically, after obtaining the first data satisfying the search condition, the first memory controller transfers the first data to the first processor in a manner similar to step S304.

When the processor corresponding to the memory containing the root node determined in step S302 is the second processor, i.e., the processor managing the memory containing the root node differs from the processor that received the search request message, the first memory controller first sends the first data to the second processor; the second processor then forwards the first data through the node controller to the first processor.

When the processor corresponding to the memory containing the root node determined in step S302 is the first processor, i.e., the processor managing that memory is the same as the processor that received the search request message, the first data is already stored in the first processor's cache, and the first memory controller only needs to notify the first processor of the first data's storage location.

S307. The first processor receives the first data and stores it in the first processor's cache.

S308. When the first memory controller determines that the tree to be searched has a subtree in memory managed by another processor, it sends the search instruction to the second memory controller.

In step S305, starting from the root-node address of the tree to be searched, the first memory controller checks in turn whether the data of each node in the parent tree containing the root node satisfies the search condition. During the search, the first memory controller obtains the data of the next node from the pointer information in each node; pointer information is usually expressed as a memory address. When the first memory controller determines, from the pointer information, that the memory address it indicates does not belong to the memory managed by the current processor, it can further conclude that a subtree of the tree to be searched is stored in memory managed by another processor. Specifically, the first memory controller records the address range of the first memory, and can determine whether the memory address indicated by the pointer falls within that range. If it does, the next data item indicated by the pointer is in the first memory; if it does not, the next data item is in another memory. The first memory controller can send the memory address indicated by the pointer to the first processor, and the first processor can determine, from the preset memory-to-processor mapping, the processor corresponding to that address.
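The local-versus-remote decision described above, checking whether a pointer's target address falls inside the controller's own recorded address range, can be sketched as follows. The range and addresses are hypothetical values for illustration.

```python
# Address range recorded by the first memory controller (made-up bounds).
FIRST_MEMORY_RANGE = (0x1000, 0x1FFF)

def pointer_is_local(ptr_addr: int) -> bool:
    """True if the pointed-to node lies in the first memory's own range;
    False means the next node belongs to a subtree in another memory, so
    the search instruction must be forwarded to that memory's controller."""
    lo, hi = FIRST_MEMORY_RANGE
    return lo <= ptr_addr <= hi

print(pointer_is_local(0x1ABC))  # next node is still in the first memory
print(pointer_is_local(0x2ABC))  # next node is in another memory: migrate the search
```

A single range comparison per pointer is all the first memory controller needs in order to decide, during traversal, whether to keep searching locally or to trigger step S308.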
The first memory controller can send the pointer information pointing to the subtree to the first processor, and the first processor determines, from the memory-to-processor mapping, the processor corresponding to the memory in which the subtree of the tree to be searched resides. The first processor then sends the search instruction to that processor. For ease of later description, the memory in which the subtree resides is denoted the second memory, the processor managing the second memory is denoted the third processor, the memory controller associated with the third processor is the second memory controller, and the migration processing unit included in the second memory controller is the second migration processing unit.

As can be seen from the above, when application data is stored, the data structure of one tree may be distributed across the memories corresponding to different processors. When the first memory controller, searching the memory containing the parent tree for data satisfying the search condition, finds that a pointer in the parent tree points to an address in another memory, it must also send the search instruction to the memory controller associated with the processor that manages the memory containing the subtree, and the migration processing unit in that memory controller completes the further search processing.

It should be noted that the process in which the first memory controller looks up data satisfying the search condition, and the process in which the memory controller associated with the processor managing the subtree's memory looks up such data, may proceed concurrently; alternatively, the first memory controller may first finish looking up data satisfying the search condition in the first memory, and then migrate the search instruction to the memory controller associated with the processor managing the memory in which the subtree resides.

S309. When data satisfying the search condition exists in the subtree of the tree to be searched, the second memory controller obtains the second data satisfying the search condition according to the search instruction.

S310. The second memory controller sends the second data to the first processor.

S311. The first processor receives the second data and stores it in the first processor's cache.

The handling in steps S309 to S311 is the same as in steps S305 to S307 and is not repeated here. The search result obtained by the second migration processing unit according to the search instruction is called the second data.

S312. The first processor generates a search result from the first data and the second data.

When it has received the data sent by the memory controllers associated with the processors managing the memories containing the parent tree and the subtrees of the tree to be searched, the first processor determines the search result of the search request message from the obtained first data and second data.

Specifically, the first processor can aggregate the first data and the second data according to a preset rule to obtain the aggregated search result. The first and second data may be aggregated by sorting by reception time and recording, for each item, the memory controller that sent the data satisfying the search condition together with that data; or by sorting by the sending memory controller and listing, controller by controller, the memory controllers that sent data satisfying the search condition and the data they sent.

As a possible embodiment of the present invention, the first processor may also set preset conditions and, when a preset condition is met, generate the search result from the data obtained that satisfies the search condition. The preset condition may be at least one of the following:

Mode 1: when the data search time reaches a first threshold, the first processor generates the search result from the first data and the second data.

Mode 2: when the amount of first data and second data obtained by the first processor reaches a second threshold, the first processor generates the search result from the first data and the second data.

Here, the first threshold is a positive integer greater than 1. To avoid the first processor being unable to receive the data satisfying the search condition sent by a memory controller because of a network or hardware fault in the computing device, the data search time can be bounded by the first threshold: when the data search time reaches the first threshold, the data obtained by the first processor is aggregated to generate the search result. Similarly, whether the number of items satisfying the search condition obtained by the first processor reaches the second threshold can decide when the search result is generated.
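The two preset conditions, a time bound (Mode 1) and a count bound (Mode 2), can be sketched as a small collector on the first processor's side. This is a minimal model with assumed threshold values and class/method names; the patent does not prescribe this interface.

```python
import time

class ResultCollector:
    """Generate the search result once the search time reaches a first
    threshold OR the number of matching items reaches a second threshold,
    whichever comes first."""
    def __init__(self, time_threshold_s: float, count_threshold: int):
        self.start = time.monotonic()
        self.time_threshold_s = time_threshold_s
        self.count_threshold = count_threshold
        self.items = []

    def add(self, data) -> None:
        """Record data satisfying the search condition as it arrives."""
        self.items.extend(data)

    def ready(self) -> bool:
        """True when either threshold is met and the result can be built."""
        elapsed = time.monotonic() - self.start
        return (elapsed >= self.time_threshold_s
                or len(self.items) >= self.count_threshold)

collector = ResultCollector(time_threshold_s=60.0, count_threshold=3)
collector.add([10, 20])      # first data arrives
print(collector.ready())     # two items, timer not expired
collector.add([30])          # second data arrives
print(collector.ready())     # count threshold reached
```

The time bound is what protects the first processor when a memory controller never replies due to a network or hardware fault: `ready()` eventually becomes true even if the count threshold is never reached.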
As another possible embodiment of the present invention, after receiving the search request message, the first processor may also send a lookup request message, carrying the identifier of the tree to be searched, to all processors in the computing device. When another processor receives this lookup request message, it can determine from the tree identifier whether the memory it manages stores a subtree of the tree to be searched. If it does, that processor sends a response message to the first processor indicating that its memory stores a subtree of the tree. The first processor can record all the subtree-storage information in the computing device; after executing the search instruction, the memory controller associated with each memory storing a subtree sends the first processor either the data satisfying the search condition or a response indicating that no such data exists. When the first processor has received the data sent by the memory controllers associated with all processors managing memories in which subtrees of the tree reside, it generates the search result from the obtained data.

For example, in the computing device 100 shown in FIG. 2A, after processor 201 receives the search request message, it can send a lookup request message carrying the ID of the tree to be searched to processor 211 and processor 221. Processor 201 itself also checks whether the memory 205 it manages stores a subtree of the tree to be searched and records the result. On receiving the lookup request message from processor 201, processor 211 checks, by the tree ID, whether its memory 215 stores a subtree of the tree and sends the result to processor 201. Likewise, processor 221 checks, by the tree ID, whether its memory 225 stores a subtree and sends the result to processor 201. Processor 201 records the results sent by the processors in the computing device and, when it has received from the memory controllers associated with all processors managing memories containing the tree either the data satisfying the search condition or responses indicating that no such data exists, generates the search result from the obtained data.

As described above, when a processor receives a search request instruction, the search processing can be migrated to the migration processing unit in the memory controller close to the data, which reduces the network delay incurred between the processors of the computing device by storing the data structure into that processor's cache, improves the efficiency of the search process, and reduces the latency of data search processing. Further, because the migration processing unit in the memory controller near the data searches directly in its memory, copying the complete data structure into the cache of the processor executing the search instruction is avoided, reducing the amount of data that must be transferred across processors, node controllers, caches, and memory, and saving transmission bandwidth. Moreover, since the cache of the processor executing the search instruction no longer needs to store the complete data structure, the storage overhead of the cache of the processor receiving the search request message is also reduced, improving the overall performance of the computing device.
Further, taking the computing device 100 and the data storage structure shown in FIG. 2A as an example, the data search method provided in this application is further explained with reference to FIG. 4.

In the computing device 100, processor 201 receives a search request message in which the ID of the tree to be searched is 1, the root-node address is "101XX", and the search condition is to find the data stored between January 1 and January 2. Suppose the data satisfying the search condition consists of the nodes marked with black dots in parent tree 217, the nodes marked with black dots in subtree 227, and the nodes marked with black dots in subtree 206. The specific processing by processor 201 is as follows:

S401. CPU 201 receives the search request message.

S402. CPU 201 determines, from the search request message, the memory in which the root-node address of the tree to be searched resides and the memory controller associated with the processor managing that memory.

Specifically, CPU 201 can determine from the root-node address that the root node of the tree to be searched resides in memory 215; then, from the preset memory-to-processor mapping, that the processor managing memory 215 is processor 211; and then, from the preset processor-to-memory-controller mapping, that the memory controller associated with processor 211 is memory controller 212.

S403. CPU 201 determines the search instruction from the search request message and the preset search algorithm.

S404. CPU 201 sends the search instruction to CPU 211.

S405. CPU 211 sends the search instruction to memory controller 212.

It should be noted that, inside the computing device, data transfer between a processor and a memory controller must pass through the network formed by node controllers and through processors. For example, if CPU 201 needs to send the search instruction to memory controller 212, the transfer goes from CPU 201 via node controller 301 to CPU 211, and CPU 211 then sends the search instruction to memory controller 212.

S406. When data satisfying the search condition exists in memory 215, migration processing unit 213 in memory controller 212 obtains the first data satisfying the search condition according to the search instruction.

S407. Memory controller 212 sends the first data to CPU 211.

S408. CPU 211 sends the first data to CPU 201.

S409. CPU 201 receives the first data and stores it in cache 204.

S410. When memory controller 212 determines that memory 225 of CPU 221 stores subtree 226 of the tree to be searched, it sends the search instruction to CPU 221.

S411. CPU 221 sends the search instruction to memory controller 222.

S412. When data satisfying the search condition exists in memory 225, migration processing unit 223 in memory controller 222 obtains the second data satisfying the search condition from memory 225 according to the search instruction.

S413. Memory controller 222 sends the second data to CPU 221.

S414. CPU 221 sends the second data to CPU 201.

S415. CPU 201 receives the second data and stores it in cache 204.

S416. When memory controller 212 determines that memory 205 of CPU 201 stores subtree 206 of the tree to be searched, it sends the search instruction to CPU 201.

S417. CPU 201 sends the search instruction to memory controller 202.

S418. When data satisfying the search condition exists in memory 205, migration processing unit 203 in memory controller 202 obtains the third data satisfying the search condition according to the search instruction.

S419. Memory controller 202 sends CPU 201 the memory address information at which the third data is stored.

S420. CPU 201 generates the search result from the obtained first data, second data, and third data.

It should be noted that when the memory controller associated with the processor managing the memory containing the root node finds, during the search, that other memories store subtrees of the tree to be searched, it may send the search instruction directly to the memory controllers corresponding to those memories, whose migration processing units complete the further search; or it may first finish its own search and then send the search instruction to the memory controllers associated with the processors managing the memories in which the subtrees reside, whose migration processing units then complete the further search. If the subtrees of the tree to be searched are stored in different memories, the memory controller associated with the processor managing the root node's memory may send the search instruction to the corresponding memory controllers one by one, or to all of them at once. As described in the steps above, CPU 201, which received the search request message, can send the search instruction to the memory controllers corresponding to the memories in which the parent tree or subtrees of the tree to be searched reside; the migration processing units in those controllers look up the data satisfying the search condition in memory or cache, completing the data search close to the data. This avoids the network-delay problem caused by storing the complete structure of the tree to be searched into CPU 201's cache across processors, and improves the efficiency of data search.

It should be understood that, in the various embodiments of the present invention, the magnitude of the sequence numbers of the above processes does not imply an execution order; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present invention.

It should be noted that, for simplicity of description, the above method embodiments are expressed as a series of action combinations; however, those skilled in the art should know that the present invention is not limited by the described order of actions. Those skilled in the art should also know that the embodiments described in the specification are preferred embodiments, and the actions involved are not necessarily required by the present invention. Other reasonable step combinations that those skilled in the art can conceive from the above description also fall within the protection scope of the present invention.
The data search method provided by the embodiments of the present invention has been described in detail above with reference to FIG. 2A to FIG. 4; the data search apparatus, processor, and computing device provided by the embodiments of the present invention are described below with reference to FIG. 5 to FIG. 8.

FIG. 5 is a schematic diagram of a data search apparatus 500 according to an embodiment of the present invention; the apparatus 500 corresponds to the first processor in FIG. 3. As shown, the apparatus 500 includes a receiving unit 501, a processing unit 502, a generating unit 503, and a sending unit 504.

The receiving unit 501 is configured to receive a search request message, the search request message including a search condition and the address of the root node of a tree to be searched.

The processing unit 502 is configured to determine, from the search request message, the first memory in which the root-node address resides and the first memory controller associated with the processor that manages the first memory.

The generating unit 503 is configured to determine a search instruction from the search request message and a preset search algorithm, the search instruction including the search condition, the root-node address of the tree to be searched, and the identifier of the apparatus.

The sending unit 504 is configured to send the search instruction to the first memory controller.

The receiving unit 501 is further configured to receive first data sent by the first memory controller, the first data being data in the first memory that satisfies the search condition, and to receive second data sent by a second memory controller, the second memory controller being the memory controller associated with the processor that manages the second memory in which the subtree of the tree to be searched is stored, and the second data being data in the second memory that satisfies the search condition.

The generating unit 503 is further configured to generate a search result from the first data and the second data.

Optionally, the processing unit 502 determines, from the search request message, the first memory in which the root-node address resides and the first memory controller associated with the processor managing the first memory as follows: determining, from the root-node address of the tree to be searched, the first memory in which the address resides; determining, from the preset memory-to-processor mapping table, the processor associated with the first memory that manages the first memory; and determining, from the preset processor-to-memory-controller mapping, the first memory controller associated with the processor that manages the first memory.

Optionally, the generating unit 503 generates the search result from the first data and the second data as follows: when the search time reaches a first threshold, generating the search result from the received first data and second data; or, when the amount of data satisfying the search condition obtained by the receiving unit reaches a second threshold, generating the search result from the received first data and second data.

Optionally, the search request message further includes an identifier of the tree to be searched, and the sending unit 504 is further configured to send a lookup request message, carrying the identifier of the tree to be searched, to all processors in the computing device.

The receiving unit 501 is further configured to receive response messages sent by the processors storing subtrees of the tree to be searched, each response message indicating that the memory managed by that processor stores a subtree of the tree.

The generating unit 503 is further configured to generate the search result when the receiving unit has received the data satisfying the search condition sent by the first memory controller and by the memory controllers associated with all processors managing memories in which subtrees of the tree reside.

The apparatus 500 according to this embodiment of the present invention may correspond to performing the method described in the embodiments of the present invention, and the above and other operations and/or functions of the units in the apparatus 500 implement the corresponding flow of the method in FIG. 3 in which the first processor is the executing body; for brevity, details are not repeated here.
FIG. 6 is a schematic structural diagram of a data search apparatus 600 according to an embodiment of the present invention; the apparatus 600 corresponds to the first memory controller in FIG. 3. The apparatus 600 includes a receiving unit 601, a processing unit 602, and a sending unit 603.

The receiving unit 601 is configured to receive a search instruction sent by a first processor, the search instruction including a search condition, the address of the root node of a tree to be searched, and the identifier of the first processor; the memory in which the root-node address resides is the first memory, and the first memory controller is the memory controller associated with the processor that manages the first memory.

The processing unit 602 is configured to search the first memory for data satisfying the search condition according to the search instruction; when first data satisfying the search condition exists in the first memory, to obtain the first data and send it to the first processor according to the identifier of the first processor.

The sending unit 603 is configured to send the search instruction to the second memory controller when the tree to be searched has a subtree in a second memory.

Optionally, the processing unit 602 searches the first memory for data satisfying the search condition according to the search instruction as follows: determining, from the address of the first memory, whether the first cache holds a copy of the first memory's data; when the first cache holds such a copy, looking up the first data satisfying the search condition in the first cache; or, when the first cache holds no such copy, loading the data of the first memory into the first cache, the first cache being the cache included in the first processor, and then looking up the first data satisfying the search condition in the first cache.

The apparatus 600 according to this embodiment of the present invention may correspond to performing the method described in the embodiments of the present invention, and the above and other operations and/or functions of the units in the apparatus 600 implement the corresponding flow of the method in FIG. 3 in which the first memory controller is the executing body; for brevity, details are not repeated here.
FIG. 7 is a schematic diagram of a processor 700 according to an embodiment of the present invention. As shown, the processor 700 includes a processing circuit 701, a memory controller 702, a cache 703, and a bus 704; the processing circuit 701, memory controller 702, and cache 703 communicate via the bus 704. The memory controller 702 includes a migration processing circuit 7021, a storage circuit 7022, a communication circuit 7023, and a bus 7024; within the memory controller 702, the migration processing circuit 7021, storage circuit 7022, and communication circuit 7023 communicate via the bus 7024. The storage circuit 7022 of the memory controller 702 is configured to store instructions, and the memory controller 702 is configured to execute the instructions stored in the storage circuit 7022. The storage circuit 7022 stores program code, and the memory controller 702 can invoke the program code stored in the storage circuit 7022 to perform the following operations:

receiving a search request message, the search request message including a search condition and the address of the root node of a tree to be searched;

determining, from the search request message, the first memory in which the root-node address of the tree to be searched resides and the first memory controller associated with the processor that manages the first memory;

determining a search instruction from the search request message and a preset search algorithm, the search instruction including the search condition, the root-node address of the tree to be searched, and the identifier of the processor 700;

sending the search instruction to the first memory controller;

receiving first data sent by the first memory controller, the first data being data in the first memory that satisfies the search condition;

receiving second data sent by a second memory controller, the second memory controller being the memory controller associated with the processor that manages the second memory in which the subtree of the tree to be searched is stored, and the second data being data in the second memory that satisfies the search condition;

generating a search result from the first data and the second data.

It should be understood that in this embodiment of the present invention, the processing circuit 701 may be an application-specific integrated circuit (ASIC), a programmable logic device (PLD), a field-programmable gate array (FPGA), or another programmable logic device.

The bus 704 is an on-chip interconnect bus. The processing circuit 701, memory controller 702, and cache 703 may also communicate by direct connection, for example by means of a switch or direct circuit wiring.

The migration processing circuit 7021 may be an ASIC, an FPGA, or another programmable logic device.

The storage circuit 7022 may include random access memory and provides instructions and data to the migration processing circuit 7021; the storage circuit 7022 may also include non-volatile random access memory.

The communication circuit 7023 may be a communication interface through which the memory controller 702 communicates with other hardware circuits.

The bus 7024 is an on-chip interconnect bus. The migration processing circuit 7021, storage circuit 7022, and communication circuit 7023 may also communicate by direct connection, for example by means of a switch or direct circuit wiring.

It should be understood that the processor 700 according to this embodiment of the present invention corresponds to the apparatus 500 provided by the embodiment of the present invention, and the processor 700 is used to implement the corresponding flow performed by the first processor in the method shown in FIG. 3; for brevity, details are not repeated here.
图8为本发明实施例提供的一种内存控制器800的示意图,如图所示,所述内存控制器中包括迁移处理电路801、存储电路802、通信电路803和总线804,所述迁移处理电路801、存储电路802和通信电路803通过总线804相通信,所述存储电路802中存储有中用于存储执行指令,所述内存控制器800运行时,所述迁移处理电路801执行所述存储电路802中的执行指令以利用所述内存控制器800中的硬件资源执行以下操作:
接收第一处理器发送的搜索指令,所述搜索指令中包括搜索条件和待搜索树的根节点的地址,所述待搜索树的根节点的地址所在内存为第一内存,所述第一内存控制器为管理所述第一内存的处理器关联的内存控制器;
根据搜索指令在所述第一内存中查找满足所述搜索条件的数据,当所述第一内存中存在满足所述搜索条件的第一数据时,所述第一内存控制器获取所述第一数据,并将所述第一数据发送给所述第一处理器;
当所述待搜索的树在第二内存中存在子树时,将所述搜索指令发送给所述第二内存控制器,以使得所述第二内存控制器根据所述搜索指令在所述第二内存中查找满足所述搜索条件的数据,当所述第二内存中存在满足所述搜索条件的第二数据时,获取所述第二数据,并将所述第二数据发送给所述第一处理器;其中,所述第二内存为存储所述待搜索树的所述子树所在的内存,所述第二内存控制器为管理所述第二内存的处理器关联的内存控制器。
应理解,在本发明实施例中,该迁移处理电路801可以是专用集成电路(ASIC)、现场可编程门阵列(FPGA)或者其他可编程逻辑器件等。
The storage circuit 802 may include a random access memory, and provides instructions and data to the migration processing circuit 801. The storage circuit 802 may further include a non-volatile random access memory. For example, the storage circuit 802 may further store device type information.
The communication circuit 803 may be a communication interface, configured for communication between the memory controller 800 and other hardware circuits.
The bus 804 includes an on-chip interconnect bus. The migration processing circuit 801, the storage circuit 802, and the communication circuit 803 may alternatively communicate with each other in a direct connection manner, for example, by using a switch or a direct circuit connection.
It should be understood that the memory controller 800 according to this embodiment of the present invention corresponds to the apparatus 600 provided in the embodiments of the present invention, and the memory controller 800 is configured to implement the corresponding procedure performed by the first memory controller in the method shown in FIG. 3. For brevity, details are not described herein again.
As shown in FIG. 2A or FIG. 2B, an embodiment of the present invention further provides a computing device. The computing device includes at least two processors and a memory, each of the at least two processors is associated with at least one memory controller, and each of the at least one memory controller is configured for data communication between the corresponding processor and the memory. The computing device includes a first processor, a first memory controller, and a second memory controller.
The first processor is configured to: receive a search request message, wherein the search request message includes a search condition and an address of a root node of a to-be-searched tree; determine, according to the search request message, a first memory in which the address of the root node of the to-be-searched tree is located and a first memory controller associated with a processor that manages the first memory; determine a search instruction according to the search request message and a preset search algorithm, wherein the search instruction includes the search condition, the address of the root node of the to-be-searched tree, and an identifier of the first processor; send the search instruction to the first memory controller; receive first data sent by the first memory controller, wherein the first data is data that is in the first memory and that satisfies the search condition; receive second data sent by the second memory controller, wherein the second memory controller is a memory controller associated with a processor that manages a second memory, the second memory is a memory in which a subtree of the to-be-searched tree is stored, and the second data is data that is in the second memory and that satisfies the search condition; and generate a search result according to the first data and the second data.
The first memory controller is configured to: receive the search instruction sent by the first processor; search, according to the search instruction, the first memory for data that satisfies the search condition; when first data that satisfies the search condition exists in the first memory, obtain the first data and send the first data to the first processor according to the identifier of the first processor; and when a subtree of the to-be-searched tree exists in the second memory, send the search instruction to the second memory controller, wherein the second memory is the memory in which the subtree of the to-be-searched tree is stored, and the second memory controller is the memory controller associated with the processor that manages the second memory.
The second memory controller is configured to: receive the search instruction sent by the first memory controller; search, according to the search instruction, the second memory for data that satisfies the search condition; and when second data that satisfies the search condition exists in the second memory, obtain the second data and send the second data to the first processor according to the identifier of the first processor.
In conclusion, when a processor receives a search request message, the computation process can be migrated to the migration processing unit in a memory controller close to the data for execution. This reduces the network delay caused among the processors of the computing device by storing the complete data storage structure in the cache of the processor that receives the search request message, improves the efficiency of the search process, and reduces the latency of data search processing. Further, because the migration processing unit in the memory controller nearest to the data searches directly in its memory or cache, copying the complete data structure into the cache of the processor that receives the search request message is avoided, which reduces the amount of data transferred among processors, node controllers, caches, and memories, and saves data transmission bandwidth. Moreover, because the cache of the processor that receives the search request message no longer needs to store the complete data structure, the storage overhead of that cache is also reduced, improving the overall performance of the computing device.
The foregoing embodiments may be implemented completely or partially by software, hardware, firmware, or any combination thereof. When software is used for implementation, the foregoing embodiments may be implemented completely or partially in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded or executed on a computer, the procedures or functions according to the embodiments of the present invention are generated completely or partially. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another computer-readable storage medium. For example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center in a wired (for example, coaxial cable, optical fiber, or digital subscriber line (DSL)) or wireless (for example, infrared, radio, or microwave) manner. The computer-readable storage medium may be any usable medium accessible by the computer, or a data storage device, such as a server or a data center, integrating one or more usable media. The usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, a DVD), or a semiconductor medium. The semiconductor medium may be a solid state disk (SSD).
A person of ordinary skill in the art may be aware that the units and algorithm steps of the examples described with reference to the embodiments disclosed in this specification can be implemented by electronic hardware or by a combination of computer software and electronic hardware. Whether the functions are performed by hardware or software depends on the particular application and design constraints of the technical solution. A person skilled in the art may use different methods to implement the described functions for each particular application.
It may be clearly understood by a person skilled in the art that, for convenience and brevity of description, for the specific working processes of the foregoing systems, apparatuses, and units, reference may be made to the corresponding processes in the foregoing method embodiments, and details are not described herein again.
In the several embodiments provided in this application, it should be understood that the disclosed system, apparatus, and method may be implemented in other manners. For example, the described apparatus embodiments are merely examples; the unit division is merely logical function division and may be other division in actual implementation; multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the displayed or discussed mutual couplings, direct couplings, or communication connections may be implemented through some interfaces, and the indirect couplings or communication connections between apparatuses or units may be implemented in electrical, mechanical, or other forms.
The units described as separate components may or may not be physically separate, and components displayed as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
The foregoing descriptions are merely specific implementations of the present invention. A person skilled in the art may conceive of variations or replacements based on the specific implementations provided in the present invention.

Claims (15)

  1. A data search method, wherein the method comprises:
    receiving, by a first processor, a search request message, wherein the first processor is any processor in a computing device, and the search request message comprises a search condition and an address of a root node of a to-be-searched tree;
    determining, by the first processor according to the search request message, a first memory in which the address of the root node of the to-be-searched tree is located and a first memory controller associated with a processor that manages the first memory;
    determining, by the first processor, a search instruction according to the search request message and a preset search algorithm, wherein the search instruction comprises the search condition, the address of the root node of the to-be-searched tree, and an identifier of the first processor;
    sending, by the first processor, the search instruction to the first memory controller;
    receiving, by the first processor, first data sent by the first memory controller, wherein the first data is data that is in the first memory and that satisfies the search condition;
    receiving, by the first processor, second data sent by a second memory controller, wherein the second memory controller is a memory controller associated with a processor that manages a second memory, the second memory is a memory in which a subtree of the to-be-searched tree is stored, and the second data is data that is in the second memory and that satisfies the search condition; and
    generating, by the first processor, a search result according to the first data and the second data.
  2. The method according to claim 1, wherein the determining, by the first processor according to the search request message, a first memory in which the address of the root node of the to-be-searched tree is located and a first memory controller associated with a processor that manages the first memory comprises:
    determining, by the first processor according to the address of the root node of the to-be-searched tree, the first memory in which the address of the root node of the to-be-searched tree is located;
    determining, by the first processor according to a preset memory-to-processor mapping table, the processor that manages the first memory and that is associated with the first memory; and
    determining, by the first processor according to a preset processor-to-memory-controller mapping relationship, the first memory controller associated with the processor that manages the first memory.
  3. The method according to claim 2, wherein the generating, by the first processor, a search result according to the first data and the second data comprises:
    when a data search time satisfies a first threshold, generating, by the first processor, the search result according to the received first data and second data; or
    when a quantity of the first data and the second data obtained by the first processor satisfies a second threshold, generating, by the first processor, the search result according to the received first data and second data.
  4. The method according to claim 2, wherein the search request message further comprises an identifier of the to-be-searched tree, and the method comprises:
    sending, by the first processor, a lookup request message to all processors in the computing device, wherein the lookup request message carries the identifier of the to-be-searched tree, to instruct all the processors in the computing device to determine, according to the identifier of the to-be-searched tree, whether a subtree of the to-be-searched tree is stored in the memory managed by each processor;
    receiving, by the first processor, a response message sent by a processor that stores a subtree of the to-be-searched tree, wherein the response message indicates that a subtree of the to-be-searched tree is stored in the memory managed by that processor; and
    generating the search result when the first processor receives data satisfying the search condition sent by the first memory controller and by the memory controllers associated with all processors that manage the memories in which the subtrees of the to-be-searched tree are located.
  5. A data search method, wherein the method comprises:
    receiving, by a first memory controller, a search instruction sent by a first processor, wherein the search instruction comprises a search condition, an address of a root node of a to-be-searched tree, and an identifier of the first processor, a memory in which the address of the root node of the to-be-searched tree is located is a first memory, and the first memory controller is a memory controller associated with a processor that manages the first memory;
    searching, by the first memory controller according to the search instruction, the first memory for data that satisfies the search condition, and when first data that satisfies the search condition exists in the first memory, obtaining, by the first memory controller, the first data, and sending the first data to the first processor according to the identifier of the first processor; and
    when a subtree of the to-be-searched tree exists in a second memory, sending, by the first memory controller, the search instruction to a second memory controller, so that the second memory controller searches, according to the search instruction, the second memory for data that satisfies the search condition, and when second data that satisfies the search condition exists in the second memory, obtains the second data and sends the second data to the first processor, wherein the second memory is a memory in which the subtree of the to-be-searched tree is stored, and the second memory controller is a memory controller associated with a processor that manages the second memory.
  6. The method according to claim 5, wherein each processor further comprises a cache, and the searching, by the first memory controller according to the search instruction, the first memory for data that satisfies the search condition comprises:
    determining, by the first memory controller according to an address of the first memory, whether a data copy of the first memory exists in a first cache, and when the data copy of the first memory exists in the first cache, searching the first cache for the first data that satisfies the search condition; or
    when no data copy of the first memory exists in the first cache, loading data of the first memory into the first cache, wherein the first cache is a cache comprised in the first processor, and searching the first cache for the first data that satisfies the search condition.
  7. A computing device, wherein the computing device comprises at least two processors and a memory, each of the at least two processors is associated with at least one memory controller, and each of the at least one memory controller is configured to implement data communication between the corresponding processor and the memory; and the computing device comprises a first processor, a first memory controller, and a second memory controller;
    the first processor is configured to: receive a search request message, wherein the search request message comprises a search condition and an address of a root node of a to-be-searched tree; determine, according to the search request message, a first memory in which the address of the root node of the to-be-searched tree is located and a first memory controller associated with a processor that manages the first memory; determine a search instruction according to the search request message and a preset search algorithm, wherein the search instruction comprises the search condition, the address of the root node of the to-be-searched tree, and an identifier of the first processor; send the search instruction to the first memory controller; receive first data sent by the first memory controller, wherein the first data is data that is in the first memory and that satisfies the search condition; receive second data sent by a second memory controller, wherein the second memory controller is a memory controller associated with a processor that manages a second memory, the second memory is a memory in which a subtree of the to-be-searched tree is stored, and the second data is data that is in the second memory and that satisfies the search condition; and generate a search result according to the first data and the second data;
    the first memory controller is configured to: receive the search instruction sent by the first processor; search, according to the search instruction, the first memory for data that satisfies the search condition; when first data that satisfies the search condition exists in the first memory, obtain the first data and send the first data to the first processor according to the identifier of the first processor; and when a subtree of the to-be-searched tree exists in the second memory, send the search instruction to the second memory controller, wherein the second memory is the memory in which the subtree of the to-be-searched tree is stored, and the second memory controller is the memory controller associated with the processor that manages the second memory; and
    the second memory controller is configured to: receive the search instruction sent by the first memory controller; search, according to the search instruction, the second memory for data that satisfies the search condition; and when second data that satisfies the search condition exists in the second memory, obtain the second data and send the second data to the first processor according to the identifier of the first processor.
  8. A data search apparatus, wherein the apparatus comprises a receiving unit, a processing unit, a generation unit, and a sending unit;
    the receiving unit is configured to receive a search request message, wherein the search request message comprises a search condition and an address of a root node of a to-be-searched tree;
    the processing unit is configured to determine, according to the search request message, a first memory in which the address of the root node of the to-be-searched tree is located and a first memory controller associated with a processor that manages the first memory;
    the generation unit is configured to determine a search instruction according to the search request message and a preset search algorithm, wherein the search instruction comprises the search condition, the address of the root node of the to-be-searched tree, and an identifier of the apparatus;
    the sending unit is configured to send the search instruction to the first memory controller;
    the receiving unit is further configured to: receive first data sent by the first memory controller, wherein the first data is data that is in the first memory and that satisfies the search condition; and receive second data sent by a second memory controller, wherein the second memory controller is a memory controller associated with a processor that manages a second memory, the second memory is a memory in which a subtree of the to-be-searched tree is stored, and the second data is data that is in the second memory and that satisfies the search condition; and
    the generation unit is further configured to generate a search result according to the first data and the second data.
  9. The apparatus according to claim 8, wherein the determining, by the processing unit according to the search request message, a first memory in which the address of the root node of the to-be-searched tree is located and a first memory controller associated with a processor that manages the first memory comprises:
    determining, according to the address of the root node of the to-be-searched tree, the first memory in which the address of the root node of the to-be-searched tree is located;
    determining, according to a preset memory-to-processor mapping table, the processor that manages the first memory and that is associated with the first memory; and
    determining, according to a preset processor-to-memory-controller mapping relationship, the first memory controller associated with the processor that manages the first memory.
  10. The apparatus according to claim 9, wherein the generating, by the generation unit, a search result according to the first data and the second data comprises:
    when a search time satisfies a first threshold, generating the search result according to the received first data and second data; or when a quantity of data satisfying the search condition obtained by the receiving unit satisfies a second threshold, generating the search result according to the received first data and second data.
  11. The apparatus according to claim 9, wherein the search request message further comprises an identifier of the to-be-searched tree;
    the sending unit is further configured to send a lookup request message to all processors in the computing device, wherein the lookup request message carries the identifier of the to-be-searched tree;
    the receiving unit is further configured to receive a response message sent by a processor that stores a subtree of the to-be-searched tree, wherein the response message indicates that a subtree of the to-be-searched tree is stored in the memory managed by that processor; and
    the generation unit is further configured to generate the search result when the receiving unit receives data satisfying the search condition sent by the first memory controller and by the memory controllers associated with all processors that manage the memories in which the subtrees of the to-be-searched tree are located.
  12. A data search apparatus, wherein the apparatus comprises a receiving unit, a processing unit, and a sending unit;
    the receiving unit is configured to receive a search instruction sent by a first processor, wherein the search instruction comprises a search condition, an address of a root node of a to-be-searched tree, and an identifier of the first processor, a memory in which the address of the root node of the to-be-searched tree is located is a first memory, and a first memory controller is a memory controller associated with a processor that manages the first memory;
    the processing unit is configured to: search, according to the search instruction, the first memory for data that satisfies the search condition; and when first data that satisfies the search condition exists in the first memory, obtain the first data and send the first data to the first processor according to the identifier of the first processor; and
    the sending unit is configured to send the search instruction to a second memory controller when a subtree of the to-be-searched tree exists in a second memory.
  13. The apparatus according to claim 12, wherein the searching, by the processing unit according to the search instruction, the first memory for data that satisfies the search condition comprises:
    determining, according to an address of the first memory, whether a data copy of the first memory exists in a first cache, and when the data copy of the first memory exists in the first cache, searching the first cache for the first data that satisfies the search condition; or
    when no data copy of the first memory exists in the first cache, loading data of the first memory into the first cache, wherein the first cache is a cache comprised in the first processor, and searching the first cache for the first data that satisfies the search condition.
  14. A processor, wherein the processor comprises at least one memory controller, a cache, and a bus, the memory controller and the cache communicate with each other by using the bus, the memory controller comprises a migration processing circuit, a storage circuit, and a bus, the migration processing circuit and the storage circuit in the memory controller communicate with each other by using the bus or in a direct connection manner, and the storage circuit of the memory controller stores execution instructions; and when the processor runs, the memory controller executes the execution instructions in the storage circuit of the memory controller to perform, by using hardware resources in the processor, the operation steps of the method according to any one of claims 1 to 4.
  15. A memory controller, wherein the memory controller comprises a migration processing circuit, a storage circuit, and a bus, the migration processing circuit and the storage circuit communicate with each other by using the bus or in a direct connection manner, and the storage circuit stores execution instructions; and when the memory controller runs, the processing circuit executes the execution instructions in the storage circuit to perform, by using hardware resources in the memory controller, the operation steps of the method according to any one of claims 5 and 6.
PCT/CN2018/076750 2017-04-14 2018-02-13 Data search method, apparatus, and related device WO2018188416A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710245610.1A CN108733678B (zh) 2017-04-14 2017-04-14 Data search method, apparatus, and related device
CN201710245610.1 2017-04-14

Publications (1)

Publication Number Publication Date
WO2018188416A1 true WO2018188416A1 (zh) 2018-10-18

Family

ID=63793112

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/076750 WO2018188416A1 (zh) 2017-04-14 2018-02-13 Data search method, apparatus, and related device

Country Status (2)

Country Link
CN (1) CN108733678B (zh)
WO (1) WO2018188416A1 (zh)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030236793A1 (en) * 2002-06-19 2003-12-25 Ericsson Inc. Compressed prefix tree structure and method for traversing a compressed prefix tree
US20050038928A1 (en) * 2002-12-23 2005-02-17 Micron Technology, Inc. Distributed configuration storage
CN1858743A (zh) * 2006-03-10 2006-11-08 华为技术有限公司 关系型数据库中信息检索方法及装置
CN103970678A (zh) * 2014-04-21 2014-08-06 华为技术有限公司 目录设计方法及装置
CN105574054A (zh) * 2014-11-06 2016-05-11 阿里巴巴集团控股有限公司 一种分布式缓存范围查询方法、装置及系统

Also Published As

Publication number Publication date
CN108733678B (zh) 2021-11-09
CN108733678A (zh) 2018-11-02

Legal Events

Date Code Title Description
121 Ep: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 18785139; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: PCT application non-entry in European phase (Ref document number: 18785139; Country of ref document: EP; Kind code of ref document: A1)