WO2021103207A1 - Distributed information retrieval method, system and apparatus based on in-network computing - Google Patents


Info

Publication number
WO2021103207A1
Authority
WO
WIPO (PCT)
Prior art keywords
retrieval
search
preliminary
search result
network
Prior art date
Application number
PCT/CN2019/126227
Other languages
English (en)
French (fr)
Inventor
潘恒
张鹏豪
李振宇
谢高岗
Original Assignee
中国科学院计算技术研究所
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中国科学院计算技术研究所
Publication of WO2021103207A1 publication Critical patent/WO2021103207A1/zh

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 - Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24 - Querying
    • G06F16/245 - Query processing
    • G06F16/2458 - Special types of queries, e.g. statistical queries, fuzzy queries or distributed queries
    • G06F16/2471 - Distributed queries
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 - Network arrangements or protocols for supporting network services or applications
    • H04L67/50 - Network services
    • H04L67/56 - Provisioning of proxy services
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 - Network arrangements or protocols for supporting network services or applications
    • H04L67/50 - Network services
    • H04L67/56 - Provisioning of proxy services
    • H04L67/565 - Conversion or adaptation of application format or content
    • H04L67/5651 - Reducing the amount or size of exchanged application data
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L69/00 - Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
    • H04L69/22 - Parsing or analysis of headers
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 - Energy efficient computing, e.g. low power processors, power management or thermal management

Definitions

  • The invention relates to the field of distributed information retrieval, and in particular to a distributed information retrieval method and system based on in-network computing.
  • Mass data content is stored in the cluster distributed file system, and the characteristic values of different data are formed through methods such as hash calculation.
  • the retrieval server constructs the relationship between the data feature value and the data content location through a data structure such as a hash table.
  • When receiving a user's query request, the retrieval server performs a linear search in the hash table it maintains according to the features of the requested data to find a matching hash bucket; the data stored in that bucket are the candidate query answers. The retrieval server then sends all retrieved answers to a central proxy server, which performs operations such as reordering and returns the specific content of the Top-K query results to the user.
  • Locality-Sensitive Hashing (LSH).
  • LSH is recognized as one of the most effective methods for indexing similar data in high-dimensional spaces.
  • For a point p ∈ R^d in d-dimensional space, k (d > k > 0) LSH functions (i.e. h_1, h_2, ..., h_k) are randomly selected to perform hash calculations, generating k hash values. The hash values are then concatenated to form a k-dimensional vector representing the feature value of point p, denoted S(p) = (h_1(p), h_2(p), ..., h_k(p)).
  • Ternary Locality-Sensitive Hashing (TLSH). TLSH [4] is a variant of LSH whose main idea is to project a d-dimensional point p ∈ R^d into the set {0, 1, *} by constructing TLSH functions.
  • Logically, a TLSH function hashes the high-dimensional point p into a single value by splitting hyperplanes, but the value is limited to 0, 1, or *, where * means "match anything". Under k TLSH functions, a k-bit ternary sequence string is therefore generated, which is the k-dimensional feature value of point p.
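As a rough illustration, a random-hyperplane construction that emits a k-bit ternary sequence might look like the sketch below. This is not the patent's implementation: the text only states that each TLSH function maps p into {0, 1, *} by splitting hyperplanes, so the margin rule that produces '*' near a hyperplane, and all names (`make_tlsh_functions`, `margin`), are illustrative assumptions.

```python
import random

def make_tlsh_functions(d, k, margin=0.1, seed=0):
    """Build k random-hyperplane TLSH-style functions for d-dimensional points.

    Each function hashes a point to '1' or '0' depending on which side of its
    hyperplane the point falls, and to '*' (match anything) when the point
    lies within `margin` of the hyperplane (an assumed rule for illustration).
    """
    rng = random.Random(seed)
    planes = [[rng.gauss(0, 1) for _ in range(d)] for _ in range(k)]

    def feature(p):
        bits = []
        for w in planes:
            proj = sum(wi * pi for wi, pi in zip(w, p))
            if proj > margin:
                bits.append('1')
            elif proj < -margin:
                bits.append('0')
            else:
                bits.append('*')
        return ''.join(bits)  # the k-bit ternary sequence for point p

    return feature

feature = make_tlsh_functions(d=4, k=7)
print(feature([0.5, -1.2, 0.3, 0.9]))  # a 7-character string over {'0','1','*'}
```

The resulting string plays the role of the k-dimensional feature value of p described above.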
  • the distributed retrieval system performs data query in different retrieval servers, and then returns the query answers to the centralized proxy server for further processing (such as reordering), as shown in Figure 1.
  • This communication model will cause "in-cast" problems.
  • In addition, the distributed retrieval system needs to support thousands of concurrent queries simultaneously, so a large amount of answer data must be transmitted in the network at the same time, leading to network congestion; network congestion inevitably reduces retrieval efficiency.
  • In view of the deficiencies of the prior art, the present invention proposes a distributed information retrieval method that uses in-network computing to reduce the retrieval result data that must be transmitted simultaneously in the network, thereby avoiding network congestion and improving retrieval efficiency.
  • Specifically, the distributed information retrieval method based on in-network computing of the present invention includes: according to the user's retrieval requirements, a proxy server sends a retrieval instruction to retrieval servers through the network; the retrieval servers perform retrieval to obtain preliminary retrieval results and send them to the network; the preliminary retrieval results are aggregated in the network to obtain aggregated retrieval results, which are sent to the proxy server; and the proxy server selects the final retrieval results from the aggregated retrieval results and feeds them back to the user.
  • When the retrieval server performs retrieval, parallel retrieval is performed through a fast retrieval path and a slow retrieval path, and the first retrieval result obtained through the fast retrieval path and the second retrieval result obtained through the slow retrieval path are merged into the preliminary retrieval result. The fast retrieval path is realized by the parallel circuits of the retrieval server's TCAM component, and the slow retrieval path is realized by search-algorithm software set in the retrieval server.
  • The preliminary retrieval results are aggregated by a switch of the network: the switch receives on its physical ports the IP data packets generated from the preliminary retrieval results, parses the packets according to its pre-configured state automaton to recover the preliminary retrieval results, identifies the preliminary retrieval results to be merged through the switch's pipeline matching, and stores the results to be merged in the switch's registers for storage and merging operations.
  • The preliminary retrieval result takes the ID of its corresponding retrieval instruction as its own ID, and the step of aggregating the preliminary retrieval results further includes:
  • For the multiple registers of the switch, when a new preliminary retrieval result is parsed out, its ID is compared in turn with the ID of the data stored in each register. If a register with the same ID exists, the preliminary retrieval result is appended to the end of that register; otherwise it is stored in an empty register; and if no empty register exists, it is stored in the register holding the most data.
  • The present invention also proposes a distributed information retrieval system based on in-network computing, including: a retrieval instruction module, used by the proxy server to send a retrieval instruction to the retrieval servers through the network according to the user's retrieval requirements; a preliminary retrieval module, used by the retrieval servers to perform retrieval to obtain preliminary retrieval results and send them to the network; an in-network aggregation module, used to aggregate the preliminary retrieval results in the network to obtain aggregated retrieval results and send them to the proxy server; and a final result module, used by the proxy server to select the final retrieval results from the aggregated retrieval results and feed them back to the user.
  • The preliminary retrieval module includes: a fast retrieval module, for obtaining the first retrieval result through the parallel circuits of the retrieval server's TCAM component; a slow retrieval module, for obtaining the second retrieval result through the search-algorithm software set in the retrieval server; and a result merging module, for merging the first retrieval result and the second retrieval result into the preliminary retrieval result.
  • The in-network aggregation module aggregates the preliminary retrieval results through a switch of the network: the switch receives on its physical ports the IP data packets generated from the preliminary retrieval results, parses the preliminary retrieval results out of the packets according to its pre-configured state automaton, identifies the preliminary retrieval results to be merged through the switch's pipeline matching, and stores them in the switch's registers for storage and merging operations.
  • The in-network aggregation module further includes a register replacement module, used to select registers for data storage and aggregation: for the multiple registers of the switch, when a new preliminary retrieval result is parsed out, its ID is compared in turn with the ID of the data stored in each register. If a register with the same ID exists, the preliminary retrieval result is appended to the end of that register; otherwise it is stored in an empty register; and if no empty register exists, it is stored in the register holding the most data. The ID of the preliminary retrieval result is the ID of its corresponding retrieval instruction.
  • The present invention also provides a readable storage medium that stores executable instructions, the executable instructions being used to execute the aforementioned distributed information retrieval method based on in-network computing.
  • The present invention also provides a data processing device, including: a proxy server set in the network, provided with the readable storage medium described above, whose processor calls and executes the executable instructions in the readable storage medium to generate a retrieval instruction according to the user's retrieval requirements, send it to the retrieval servers through the network, and select the final retrieval results to feed back to the user; a switch set in the network, provided with the readable storage medium described above, whose processor calls and executes the executable instructions in the readable storage medium to aggregate the preliminary retrieval results; and retrieval servers set in the network, provided with the readable storage medium described above, whose processors call and execute the executable instructions in the readable storage medium to obtain the preliminary retrieval results according to the retrieval instruction.
  • Fig. 1 is a schematic diagram of a query process of a distributed retrieval system in the prior art.
  • Fig. 2 is a flow chart of the distributed information retrieval method based on in-network computing of the present invention.
  • Fig. 3 is a schematic diagram of the fast and slow paths of the retrieval server information retrieval of the present invention.
  • Fig. 4 is a data flow diagram of the programmable switch of the present invention.
  • Fig. 5 is a schematic diagram of the data packet aggregation function of the programmable switch of the present invention.
  • Fig. 6 is a schematic diagram of the register replacement strategy in the programmable switch of the present invention.
  • Fig. 7 is a schematic diagram of the ternary matching algorithm of the present invention.
  • Figure 8 is a schematic diagram of the register selection and strategy replacement algorithm of the present invention.
  • When the inventors performed high-concurrency query operations after deploying a distributed information retrieval system, they found a large number of answer data packets in the network, which directly reduced the retrieval efficiency of the system. The inventors therefore concluded that reducing the data packets transmitted in the network, and thus the communication overhead, would help improve the overall performance of the retrieval system.
  • In recent years, networks have acquired computing capabilities through devices such as smart NICs and programmable switches (e.g. P4 switches). This makes it possible to offload computing tasks traditionally performed on end servers into the network.
  • the network can see the "global" data status and information to a certain extent, which is conducive to overall optimization and scheduling.
  • Therefore, the inventors use programmable switches to identify and aggregate the answer data packets in the network, which effectively reduces network communication overhead without affecting normal high-speed data forwarding.
  • A high-performance distributed retrieval system is the key to supporting massive data retrieval. As retrieval algorithms have become more efficient, network performance has gradually become the bottleneck, yet the prior art does not optimize network communication well. To this end, the present invention proposes a high-performance distributed information retrieval method based on in-network computing (hereinafter referred to as NetSHa), which can improve the efficiency of network communication in a distributed information retrieval system.
  • First, on the retrieval server side, the present invention adopts fast and slow paths, i.e. searches are performed separately through TCAM and retrieval software. Specifically, NetSHa uses a TCAM (ternary content-addressable memory) component to accelerate data queries in each retrieval server. However, the cost and memory limitations of TCAM mean that its capacity is limited. NetSHa therefore logically divides each server into two parts: the TCAM component (fast path) and the server host (slow path). The fast path uses the TCAM's parallel circuits to search all of its contents very quickly, while the slow path is a software implementation of the search algorithm.
  • Second, the present invention adopts a bit-operation algorithm that matches arbitrary ternary sequences. Any ternary sequence key is converted into two binary sequences key.p and key.m: key.p equals the sequence of key with all "*" bits replaced by "0", and key.m is the mask of key, with a "1" at every "*" position and a "0" elsewhere. A bitwise OR of key1.m and key2.m yields the overall mask key.m of positions that need not be compared; key1.p and key2.p are each ORed with key.m, and the two results are compared to determine whether key1 and key2 match.
  • Third, the present invention aggregates answer data packets with programmable switches: in NetSHa, data packets are aggregated and forwarded through the programmable switches.
  • The programmable switch receives IP data packets on its physical ports and parses them according to its pre-configured state automaton; it then uses the switch's pipeline matching to identify the answer data packets to be merged, which enter the pipeline's "aggregation" table where the answer data are merged.
  • the "aggregation" table uses the registers of the programmable switch to store and aggregate query answers.
  • In addition, a register replacement strategy is adopted: the number of registers in the switch determines how many aggregation tasks it can execute in parallel, but the number of registers is limited. To select a suitable register, the present invention therefore adopts a replacement strategy, a weight-based selection mechanism in which the register carrying the most data pairs is chosen for replacement. NetSHa packets access the registers one by one, comparing the query ID with the ID stored in each register; a register with a matching ID is used directly, an "empty" register is used if one is found, and otherwise the register with the most data pairs is selected.
  • In summary, NetSHa extends the conventional network protocol so that the programmable switch can recognize aggregatable data packets, and designs a bit-based matching algorithm and a memory scheduling mechanism to improve the overall efficiency of the distributed retrieval system.
  • Fig. 2 is a flow chart of the distributed information retrieval method based on in-network computing of the present invention. As shown in Fig. 2, the present invention includes:
  • Step S1: According to the user's retrieval request, the proxy server sends a retrieval instruction to the retrieval servers through the network.
  • Step S2: After receiving the retrieval instruction, a retrieval server performs information retrieval according to the retrieval instruction to obtain preliminary retrieval results, and sends the obtained preliminary retrieval results to the network.
  • The present invention is based on a distributed information retrieval system. At least one retrieval server participates in information retrieval, and each participating retrieval server may obtain one or more preliminary retrieval results after retrieving the information corresponding to the retrieval instruction. After the preliminary retrieval results are obtained, they are combined with the ID of the corresponding retrieval instruction to generate an IP data packet, which is transmitted to the network.
  • Fig. 3 is a schematic diagram of the fast and slow paths of the retrieval server's information retrieval in the present invention. The fast retrieval path is realized by the parallel circuits of the retrieval server's TCAM component, and the slow retrieval path is realized by search-algorithm software set in the retrieval server. The first retrieval result is obtained through the fast retrieval path and the second retrieval result through the slow retrieval path; the first and second retrieval results are then merged into the preliminary retrieval result corresponding to the retrieval instruction.
  • Step S3: Perform in-network computation, aggregating the preliminary retrieval results into aggregated retrieval results, and send the aggregated retrieval results to the proxy server. Aggregating the preliminary retrieval results reduces the amount of data transmitted in the network and improves network efficiency.
  • Figure 4 is a data flow diagram of the programmable switch of the present invention
  • Figure 5 is a schematic diagram of the data packet aggregation function of the programmable switch of the present invention.
  • The present invention uses programmable switches in the network to perform the aggregation operation, which specifically includes: 1) when the programmable switch receives on its physical ports the IP data packets generated from the preliminary retrieval results, its pre-configured state automaton parses the packets to obtain the preliminary retrieval results; 2) the preliminary retrieval results to be merged are identified through the programmable switch's pipeline matching and stored in the switch's registers for storage and merging operations.
  • Fig. 6 is a schematic diagram of the register replacement strategy in the programmable switch of the present invention.
  • To this end, the present invention also proposes a register replacement strategy: when a new preliminary retrieval result is parsed out, its ID is compared in turn with the ID stored in each register. If a register with the same ID exists, the preliminary retrieval result is appended to the end of that register; if not, it is stored in an empty register; and if no empty register exists, it is stored in the register holding the most data.
  • Step S4: The proxy server selects the final retrieval result from the aggregated retrieval results and feeds it back to the user.
  • In an embodiment, NetSHa uses TCAM components to speed up queries in each retrieval server. However, the cost and memory limitations of TCAM mean that its capacity is limited. NetSHa therefore uses fast and slow paths, logically dividing the hash table on each server into two parts: one part deployed on the TCAM component (the fast path) and the other on the server host (the slow path). The fast path uses the TCAM's parallel circuits to search all of its contents very quickly, while the slow path is a software implementation of the search algorithm. When a query reaches the server, it queries all hash buckets in both the fast path and the slow path; the server then combines the answers from the two paths to form its final candidate answers.
  • For any ternary sequence key, it must be converted into two binary sequences key.p and key.m: key.p equals the sequence of key with all "*" bits replaced by "0", and key.m is the mask of key, with a "1" at every "*" position and a "0" elsewhere. For example, key1 = 011**0* gives key1.p = 0110000 and key1.m = 0001101. A bitwise OR between key1.m and key2.m yields the overall mask key.m of positions that need not be compared; with key2.m = 0011101, this gives key.m = 0011101. ORing each of key1.p and key2.p with key.m then gives key1.p | key.m = 0111101 and key2.p | key.m = 0111111. Because the two results are not equal, key1 and key2 do not match.
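The conversion and three-OR comparison described above can be sketched in Python as follows. The function names and the example keys in the demo calls are illustrative; only the p/m construction and the three bitwise ORs follow the text.

```python
def to_pm(key):
    """Split a ternary string over {'0','1','*'} into (p, m):
    p has every '*' bit replaced by 0; m has a 1 exactly at the '*' positions."""
    p = int(key.replace('*', '0'), 2)
    m = int(''.join('1' if c == '*' else '0' for c in key), 2)
    return p, m

def ternary_match(key1, key2):
    """Match two ternary sequences with three bitwise OR operations:
    build the combined mask of positions to ignore, apply it to both
    p-sequences, and compare the results."""
    p1, m1 = to_pm(key1)
    p2, m2 = to_pm(key2)
    m = m1 | m2                      # OR 1: positions ignored in either key
    return (p1 | m) == (p2 | m)      # ORs 2 and 3, then the comparison

# key1 = 011**0* gives key1.p = 0110000 and key1.m = 0001101,
# matching the example values in the text; key2 values are illustrative.
print(ternary_match('011**0*', '0111*0*'))  # True: all fixed bits agree
print(ternary_match('011**0*', '001**0*'))  # False: they differ at bit 1
```

Per comparison the cost is constant (three ORs and one equality test), regardless of how many '*' positions the keys contain.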
  • The bit-operation algorithm for matching arbitrary ternary sequences proposed by the present invention has low complexity when matching hash buckets: each comparison needs only three bitwise OR operations, repeated over the n hash buckets in the server host.
  • the ternary matching operation algorithm of the present invention is shown in Fig. 7. Key1 and key2 in Fig. 7 are the above-mentioned two ternary sequences to be compared. If the two match, it returns true, otherwise it returns false.
  • Fig. 4 shows the logical processing of the programmable switch used for data packet aggregation.
  • The programmable switch receives IP data packets on its physical ports and parses the packet headers according to its pre-configured state automaton. Next, a configured table (the IP ToS table) identifies NetSHa packets, whose IP ToS reserved bit is 1; these jump to the "aggregation" table for further processing (packet aggregation). Other data packets, whose IP ToS reserved bit is 0, are treated as regular packets and forwarded normally.
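The IP ToS table's role can be mimicked with a trivial classifier. Which ToS bit serves as the NetSHa flag is not specified in the text, so the choice of the lowest bit below is an assumption for illustration only.

```python
NETSHA_TOS_BIT = 0x01  # assumed: the reserved ToS bit used as the NetSHa flag

def classify_packet(tos):
    """Mimic the switch's IP ToS table: a packet whose reserved ToS bit is 1
    is a NetSHa answer packet and jumps to the aggregation table; any other
    packet is forwarded normally as regular traffic."""
    return 'aggregation' if tos & NETSHA_TOS_BIT else 'forward'

print(classify_packet(0x01))  # 'aggregation'
print(classify_packet(0x00))  # 'forward'
```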
  • the switch performs lightweight packet aggregation. This is done by using switch registers, each of which is similar to an array. In order to complete the aggregation task, the switch will be initialized as a global "two-dimensional array" based on its registers. Each register stores two types of data: status and data pairs. These states record the query ID used to identify a specific query and the number of data pairs that have been carried in the register. The maximum capacity of each register used to carry data pairs is the same, which is regarded as a threshold. If the number of data pairs carried is equal to the threshold, the register will construct a new NetSHa data packet according to the data pairs it carries and the query ID, and then forward it as a regular data packet. Next, it resets its state, including query ID and counter value, and waits for the next packet.
  • a data packet When a data packet enters the "aggregation" table, it will select a register to fill. If there is already a register with the same query ID, the data packet will append its data pair to the end of the register until it is full. Otherwise, it needs to select an "empty" register to fill the packet data pair.
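The per-register bookkeeping and threshold-triggered flush described in the two points above can be sketched as follows. The threshold value, the (query ID, data pairs) layout, and all names are illustrative assumptions; a real P4 register would hold fixed-width fields, not Python lists.

```python
THRESHOLD = 4  # assumed maximum number of data pairs a register can carry

class Register:
    """One switch register: a query ID plus the data pairs aggregated so far
    (mirroring the 'status + data pairs' layout described in the text)."""
    def __init__(self):
        self.query_id = None
        self.pairs = []

    def append(self, query_id, pair, emit):
        self.query_id = query_id
        self.pairs.append(pair)
        # On reaching the threshold, build one aggregated NetSHa packet from
        # the carried pairs and query ID, forward it, and reset the register.
        if len(self.pairs) == THRESHOLD:
            emit((self.query_id, list(self.pairs)))
            self.query_id, self.pairs = None, []

out = []
r = Register()
for pair in [('doc1', 0.9), ('doc2', 0.8), ('doc3', 0.7), ('doc4', 0.6)]:
    r.append(42, pair, out.append)
print(out)  # one aggregated packet carrying all four pairs for query 42
```

Four single-answer packets thus leave the switch as one aggregated packet, which is the data reduction the aggregation table is meant to provide.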
  • Existing implementations use a linear search to determine the register. The number of registers in the switch determines how many aggregation tasks it can execute in parallel, but that number is limited. This leads to a problem: if all registers are occupied, a NetSHa data packet carrying a new query ID cannot be processed. To cope with this challenge, the present invention adopts a replacement strategy to select a suitable register.
  • This strategy is a weight-based selection mechanism.
  • the register that carries the most data pairs will be selected.
  • A NetSHa packet accesses the registers one by one, comparing its query ID with the ID of each register; if they are the same, that register is returned. Otherwise the traversal continues over all the registers, recording the first "empty" register. If an "empty" register is found, it is returned; otherwise, the register with the most data pairs is selected (called the replacement register).
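The traversal just described (cf. the algorithm of Fig. 8, with inputs q, R and n) can be sketched as below. The dict-based register representation is an assumed stand-in for the switch's register state.

```python
def select_register(q, registers):
    """Register selection with replacement: return the register whose stored
    query ID equals q; otherwise the first 'empty' register seen during the
    traversal; otherwise the register carrying the most data pairs (the
    replacement register).  Each register is modeled as a dict with keys
    'id' (query ID or None when empty) and 'pairs' (data pairs carried)."""
    first_empty = None
    fullest = None
    for reg in registers:
        if reg['id'] == q:
            return reg                        # same query ID: append here
        if reg['id'] is None and first_empty is None:
            first_empty = reg                 # remember first empty register
        if fullest is None or len(reg['pairs']) > len(fullest['pairs']):
            fullest = reg                     # track the weight-based fallback
    return first_empty if first_empty is not None else fullest

regs = [{'id': 7, 'pairs': [1, 2]},
        {'id': None, 'pairs': []},
        {'id': 9, 'pairs': [1, 2, 3]}]
print(select_register(7, regs) is regs[0])   # True: matching query ID wins
print(select_register(5, regs) is regs[1])   # True: first empty register
```

Preferring the fullest register as the replacement victim means the evicted register flushes the largest aggregated packet, wasting the least partial aggregation work.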
  • Figure 8 illustrates the register selection and strategy replacement algorithm of the present invention, where the input parameter q represents the query ID of the arriving data packet, R represents a group of registers in the switch, and n represents the number of registers.
  • The present invention also provides a data processing device for performing distributed information retrieval processing based on in-network computing, and a computer-readable storage medium storing executable instructions, the executable instructions being executed by a processor.
  • Specifically, the data processing device of the present invention includes a proxy server and retrieval servers, a network connecting them, and a programmable switch set in the network. The processor of the proxy server calls the executable instructions of the readable storage medium to generate retrieval instructions according to the user's retrieval requirements, sends the retrieval instructions to the retrieval servers through the network, and, after receiving the aggregated retrieval results, selects the final retrieval results from them and feeds them back to the user. The processor of a retrieval server calls the executable instructions of the readable storage medium to obtain the preliminary retrieval results according to the retrieval instruction. The processor of the programmable switch calls and executes the executable instructions of the readable storage medium to aggregate the preliminary retrieval results.
  • Each module in the above embodiments can be implemented in hardware, for example as an integrated circuit achieving its corresponding function, or in the form of a software function module, for example with a processor executing programs/instructions stored in a memory to achieve its corresponding function.
  • the embodiments of the present invention are not limited to the combination of hardware and software in any specific form.
  • In summary, the distributed retrieval method of the present invention uses the network's programmable switches to aggregate, in the network, the preliminary retrieval results obtained by the retrieval servers, reducing the transmission volume of retrieval data in the network; this effectively reduces network communication overhead without affecting normal high-speed data forwarding.
  • The present invention proposes a new method for accelerating distributed retrieval systems through in-network computing.
  • The method of the present invention does not change the architecture of the distributed retrieval system, but optimizes it in two respects: first, the computing power of programmable switches is used to aggregate answer data packets, reducing the number of data packets transmitted in the network; second, a fast mechanism deployed on TCAM components is designed to speed up the matching of similar data items in the distributed servers.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Fuzzy Systems (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Security & Cryptography (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

A distributed information retrieval method based on in-network computing, comprising: according to a user's retrieval requirements, a proxy server sends a retrieval instruction to retrieval servers through a network; the retrieval servers perform retrieval to obtain preliminary retrieval results and send them to the network; the preliminary retrieval results are aggregated in the network to obtain aggregated retrieval results, which are sent to the proxy server; and the proxy server selects final retrieval results from the aggregated retrieval results and feeds them back to the user. Programmable switches in the network are used to aggregate, in the network, the preliminary retrieval results obtained by the retrieval servers, reducing the transmission volume of retrieval data in the network, effectively reducing network communication overhead without affecting normal high-speed data forwarding.

Description

Distributed information retrieval method, system and apparatus based on in-network computing

Technical field

The invention relates to the field of distributed information retrieval, and in particular to a distributed information retrieval method and system based on in-network computing.

Background

With the continuous development of information technology and the growing popularity of the Internet, the data stored in networks (text, images, video, etc.) has grown explosively. In daily production and life, users often need to search massive data for the information that meets their needs. Building high-throughput, low-latency distributed information retrieval systems (search engines) is therefore particularly important.

Distributed information retrieval systems rely mainly on computer clusters. Massive data content is stored in a cluster distributed file system, and feature values of different data are formed through methods such as hash calculation. The retrieval server builds the relationship between data feature values and data content locations through data structures such as hash tables. When a user's query request is received, the retrieval server performs a linear search in the hash table it maintains according to the features of the requested data to find a matching hash bucket; the data stored in the bucket are the candidate query answers. The retrieval server then sends all retrieved answers to a central proxy server for reordering and other operations, after which the specific content of the Top-K query results is returned to the user.

At present, distributed information retrieval systems mainly adopt mature distributed frameworks such as MapReduce or Active DHT to reduce development costs. For computing the feature values of high-dimensional data, locality-sensitive hashing and its variants are mostly used, as follows:

(1) Locality-Sensitive Hashing (LSH).

LSH is recognized as one of the most effective methods for indexing similar data in high-dimensional spaces. For a point p ∈ R^d in d-dimensional space, k (d > k > 0) LSH functions (i.e. h_1, h_2, ..., h_k) are randomly selected to perform hash calculations, generating k hash values. The hash values are then concatenated to form a k-dimensional vector representing the feature value of point p, denoted S(p) = (h_1(p), h_2(p), ..., h_k(p)).

(2) Ternary Locality-Sensitive Hashing (TLSH).

TLSH [4] is a variant of LSH whose main idea is to project a d-dimensional point p ∈ R^d into the set {0, 1, *} by constructing TLSH functions. Logically, a TLSH function hashes the high-dimensional point p into a single value by splitting hyperplanes, but the value is limited to 0, 1, or *, where * means "match anything". Under k TLSH functions, a k-bit ternary sequence string is therefore generated, which is the k-dimensional feature value of point p.

However, a distributed retrieval system performs data queries on different retrieval servers and then returns the retrieved answers to a centralized proxy server for further processing (such as reordering), as shown in Fig. 1. This communication model causes the "in-cast" problem. In addition, a distributed retrieval system must support thousands of concurrent queries simultaneously, so a large amount of answer data must be transmitted in the network at the same time, causing network congestion; network congestion inevitably reduces retrieval efficiency.
发明公开
本发明针对现有技术的不足,提出一种分布式信息检索方法,利用在网计算减小网络中需要同时传输的检索结果数据,从而避免网络的拥塞,提高检索效率。
具体来说,本发明的基于在网计算的分布式信息检索方法,包括:根据用户的检索要求,代理服务器通过网络向检索服务器发出检索指令;通过该检索服务器进行检索以获取初步检索结果,并发送至该网络;在该网络中对该初步检索结果进行聚合,获得聚合检索结果并发送至该代理服务器;通过该代理服务器从该聚合检索结果中选出最终检索结果并反馈给该用户。
本发明所述的分布式信息检索方法,其中该检索服务器进行检索时,通过快速检索路径和慢速检索路径进行并行检索,并将通过该快速检索路径获取的第一检索结果和通过该慢速检索路径获取的第二检索结果合并为该初步检索结果,其中该快速检索路径是利用该索服务器的TCAM组件的并联电路实现,该慢速检索路径是通过该索服务器内设置的搜索算法软件实现。
本发明所述的分布式信息检索方法,其中通过该网络的交换机对该初步检索结果进行聚合,其中该交换机从其物理端口接收该初步检索结果生成的IP数据包后,根据该交换机的预配置状态自动机对该IP数据包解析出该初步检索结果,并通过该交换机的流水线匹配识别待合并的初步检索结果,将该待合并的初步检索结果存储至该交换机的寄存器内进行存储和合并操作。
本发明所述的分布式信息检索方法,其中该初步检索结果以对应检索指令的ID为该初步检索结果的ID,则对该初步检索结果进行聚合的步骤还包括:
对于该交换机的多个该寄存器,当解析出新的初步检索结果时,将该初步检索结果的ID依次与各该寄存器存储的数据的ID进行比较,若存在相同ID的寄存器,则将该初步检索结果存储至该相同ID的寄存器的末尾,反之则存储至数据为空的寄存器,若不存在数据为空的寄存器则存储至有最多数据的寄存器。
The present invention further proposes a distributed information retrieval system based on in-network computing, comprising: a retrieval instruction module, by which a proxy server issues, according to a user's retrieval request, a retrieval instruction to retrieval servers over a network; a preliminary retrieval module for performing retrieval through the retrieval servers to obtain preliminary search results, which are sent to the network; an in-network aggregation module for aggregating the preliminary search results within the network to obtain an aggregated search result, which is sent to the proxy server; and a final result module, by which the proxy server selects the final search results from the aggregated search result and returns them to the user.

In the distributed information retrieval system of the present invention, the preliminary retrieval module comprises: a fast retrieval module for obtaining a first search result through the parallel circuitry of the retrieval server's TCAM component; a slow retrieval module for obtaining a second search result through search-algorithm software installed on the retrieval server; and a result merging module for merging the first search result and the second search result into the preliminary search result.
In the distributed information retrieval system of the present invention, the in-network aggregation module aggregates the preliminary search results through a switch of the network, wherein after the switch receives, on its physical ports, the IP packets generated from the preliminary search results, it parses the preliminary search results out of the IP packets according to its preconfigured state automaton, identifies the preliminary search results to be merged through matching in the switch pipeline, and stores the preliminary search results to be merged into the switch's registers for storage and merging.

In the distributed information retrieval system of the present invention, the in-network aggregation module further comprises a register replacement module for selecting a register for data storage and aggregation, wherein, for the switch's multiple registers, when a new preliminary search result is parsed out, its ID is compared in turn with the IDs of the data stored in each register; if a register with the same ID exists, the preliminary search result is appended to the end of that register; otherwise it is stored in an empty register; and if no empty register exists, it is stored in the register holding the most data; the ID of a preliminary search result is the ID of its corresponding retrieval instruction.
The present invention further proposes a readable storage medium storing executable instructions for performing the aforementioned distributed information retrieval method based on in-network computing.

The present invention further proposes a data processing apparatus, comprising: a proxy server disposed in a network and provided with the aforementioned readable storage medium, the proxy server's processor retrieving and executing the executable instructions in the readable storage medium so as to generate a retrieval instruction according to a user's retrieval request, send it over the network to retrieval servers, and select the final search results to return to the user; a switch disposed in the network and provided with the aforementioned readable storage medium, the switch's processor retrieving and executing the executable instructions in the readable storage medium so as to aggregate the preliminary search results; and a retrieval server disposed in the network and provided with the aforementioned readable storage medium, the retrieval server's processor retrieving and executing the executable instructions in the readable storage medium so as to obtain the preliminary search results according to the retrieval instruction.
The present invention is described in detail below with reference to the drawings and specific embodiments, which are not intended to limit the invention.

Brief Description of the Drawings
Figure 1 is a schematic diagram of the query process of a prior-art distributed retrieval system.

Figure 2 is a flowchart of the distributed information retrieval method based on in-network computing of the present invention.

Figure 3 is a schematic diagram of the fast and slow retrieval paths of the retrieval server of the present invention.

Figure 4 is a data flow diagram of the programmable switch of the present invention.

Figure 5 is a schematic diagram of the packet aggregation function of the programmable switch of the present invention.

Figure 6 is a schematic diagram of the register replacement strategy within the programmable switch of the present invention.

Figure 7 is a schematic diagram of the ternary matching algorithm of the present invention.

Figure 8 is a schematic diagram of the register selection and replacement algorithm of the present invention.
Best Mode for Carrying Out the Invention

To make the objectives, technical solutions, and advantages of the present invention clearer, the distributed information retrieval method and system based on in-network computing proposed by the present invention are described in further detail below with reference to the drawings. It should be understood that the specific embodiments described here serve only to explain the present invention and are not intended to limit it.

When running highly concurrent queries on a deployed distributed information retrieval system, the inventors observed large numbers of answer packets in the network, which directly reduced the system's retrieval efficiency. They therefore concluded that reducing the packets transmitted through the network, and thus the communication overhead, would help improve the overall performance of the retrieval system.
In recent years, networks have acquired computing capability through devices such as smart NICs and programmable switches (e.g., P4 switches). This makes it possible to offload into the network computing tasks traditionally performed on end servers. Moreover, the network has, to some extent, a "global" view of data state and information, which favors system-wide optimization and scheduling.

The inventors therefore use programmable switches to identify and aggregate answer packets in the network, which effectively reduces network communication overhead without affecting normal high-speed data forwarding.

A high-performance distributed retrieval system is key to supporting retrieval over massive data. As retrieval algorithms have become more efficient, network performance has gradually become the bottleneck, yet the prior art has not optimized network communication well. The present invention therefore proposes a high-performance distributed information retrieval method based on in-network computing (hereinafter NetSHa) that improves the efficiency of network communication in distributed information retrieval systems.
First, on the retrieval-server side, the present invention adopts fast and slow paths, performing retrieval through TCAM and retrieval software respectively. Specifically, NetSHa uses a TCAM (ternary content addressable memory) component to accelerate data lookup on each retrieval server. However, the cost and memory limitations of TCAM mean its capacity is small. NetSHa therefore adopts fast and slow paths, logically dividing each server into two parts: the TCAM component (fast path) and the server host (slow path). The fast path exploits the TCAM's parallel circuitry to search its entire contents extremely quickly, while the slow path is a software implementation of the search algorithm.
Second, the present invention adopts a bitwise algorithm for matching arbitrary ternary sequences. Any ternary sequence key is converted into two binary sequences, key.p and key.m. key.p equals the sequence of key with every "*" bit replaced by "0". key.m is the mask of key: for any bit of key, if it is "*", the corresponding bit of key.m is set to "1"; otherwise it is set to "0". For example, if key = 011**0*, then key.p = 0110000 and key.m = 0001101. Next, a bitwise OR of key1.m and key2.m yields the overall don't-care mask key.m; key1.p and key2.p are then each ORed bitwise with key.m, and comparing the two results determines whether key1 and key2 match.
Third, the present invention uses programmable switches to aggregate answer packets. In NetSHa, packets are aggregated and forwarded by programmable switches. A programmable switch receives IP packets on its physical ports, parses them according to its preconfigured state automaton, identifies the answer packets to be merged through matching in the switch pipeline, and sends them into the pipeline's "aggregation" table, where the answer data of queries are merged. The "aggregation" table uses the programmable switch's registers to store and aggregate query answers. During the aggregation of preliminary search results, a register replacement strategy is also applied: the number of registers in the switch determines how many aggregation tasks it can perform in parallel, but that number is limited, and if all registers are occupied, a NetSHa packet carrying a new query ID cannot be processed. The present invention therefore adopts a replacement strategy to select a suitable register. The strategy is a weight-based selection mechanism: in short, the register carrying the most data pairs is selected. A NetSHa packet visits the registers one by one, comparing its query ID with the IDs stored in the registers. If an "empty" register is found, that register is returned; otherwise the register with the most data pairs is selected. To realize these optimizations, NetSHa extends conventional network protocols so that programmable switches can recognize aggregatable packets, and designs a bit-based matching algorithm and a memory scheduling mechanism to improve the overall efficiency of the distributed retrieval system.
Figure 2 is a flowchart of the distributed information retrieval method based on in-network computing of the present invention. As shown in Figure 2, the method of the present invention comprises:

Step S1: according to a user's retrieval request, the proxy server issues a retrieval instruction to the retrieval servers over the network;

Step S2: upon receiving the retrieval instruction, a retrieval server performs information retrieval according to it to obtain preliminary search results and sends them to the network. Since the present invention is based on a distributed information retrieval system, at least one retrieval server participates in the retrieval, and each participating retrieval server may obtain one or more preliminary search results once it finds information matching the retrieval instruction. After obtaining the preliminary search results, it generates IP packets containing them together with the ID of the corresponding retrieval instruction and transmits them to the network;
To improve the retrieval performance of the retrieval server, the present invention adopts parallel retrieval over fast and slow paths. Figure 3 is a schematic diagram of the fast and slow retrieval paths of the retrieval server of the present invention. As shown in Figure 3, when the retrieval server performs retrieval, it searches in parallel along a fast retrieval path and a slow retrieval path, where the fast retrieval path is implemented by the parallel circuitry of the retrieval server's TCAM component and the slow retrieval path is implemented by search-algorithm software installed on the retrieval server. The fast retrieval path yields a first search result and the slow retrieval path yields a second search result; merging the two produces the preliminary search result corresponding to the retrieval instruction;
Step S3: in-network computation is performed to aggregate the preliminary search results into an aggregated search result, which is sent to the proxy server. Aggregating the preliminary search results reduces the volume of data transmitted through the network and improves its transmission performance. Figure 4 is a data flow diagram of the programmable switch of the present invention, and Figure 5 is a schematic diagram of its packet aggregation function. As shown in Figures 4 and 5, the present invention performs aggregation through programmable switches in the network, specifically: 1) when a programmable switch receives on its physical ports the IP packets generated from the preliminary search results, it parses the IP packets according to its preconfigured state automaton to obtain the preliminary search results; 2) through matching in the switch pipeline, it identifies the preliminary search results to be merged and stores them in the switch's registers for storage and merging;
Figure 6 is a schematic diagram of the register replacement strategy within the programmable switch of the present invention. As shown in Figure 6, when the programmable switch has multiple registers, the present invention further proposes a register replacement strategy: when a new preliminary search result is parsed out, its ID is compared in turn with the IDs of the data stored in each register; if a register with the same ID exists, the preliminary search result is appended to the end of that register; if no register with the same ID exists, it is stored in an empty register; and if no empty register exists either, it is stored in the register holding the most data.

Step S4: the proxy server selects the final search results from the aggregated search result and returns them to the user.

The key points of the present invention are described in detail below:
1. Fast and Slow Paths

NetSHa uses a TCAM component to accelerate lookups on the retrieval server. However, the cost and memory limitations of TCAM mean its capacity is small. NetSHa therefore adopts fast and slow paths.

Specifically, NetSHa logically splits the hash table on each server into two parts: one deployed on the TCAM component (the fast path) and the other on the server host (the slow path). The fast path exploits the TCAM's parallel circuitry to search its entire contents extremely quickly; the slow path is a software implementation of the search algorithm. When a query reaches the server, it probes all hash buckets on both the fast and slow paths, and the server then combines the answers from the two paths to form its final candidate answers.
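The fast/slow split described above can be sketched in a few lines. This is a toy model under stated assumptions: the TCAM is represented as an in-memory dict, the "hottest" buckets are approximated by bucket size, and the two paths are queried with threads; none of these choices come from the patent itself.

```python
from concurrent.futures import ThreadPoolExecutor

class FastSlowIndex:
    """Toy model of the fast/slow split: a small 'TCAM' holds a few
    buckets; the rest stay in host memory and are searched in software."""

    def __init__(self, buckets, tcam_capacity):
        # Place the largest buckets in the (capacity-limited) TCAM;
        # ranking by size is an illustrative assumption.
        ordered = sorted(buckets.items(), key=lambda kv: len(kv[1]), reverse=True)
        self.tcam = dict(ordered[:tcam_capacity])    # fast path
        self.host = dict(ordered[tcam_capacity:])    # slow path

    def query(self, key):
        # Probe both paths in parallel, then merge the two answer sets
        # into the preliminary search result.
        with ThreadPoolExecutor(max_workers=2) as pool:
            fast = pool.submit(lambda: self.tcam.get(key, []))
            slow = pool.submit(lambda: self.host.get(key, []))
            return fast.result() + slow.result()
```

In hardware, the fast path is a single parallel TCAM lookup rather than a dict access, but the merge of the two candidate-answer sets is the same.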
2. Bitwise Algorithm for Matching Arbitrary Ternary Sequences

Any ternary sequence key is converted into two binary sequences, key.p and key.m. key.p equals the sequence of key with every "*" bit replaced by "0". key.m is the mask of key: for any bit of key, if it is "*", the corresponding bit of key.m is set to "1"; otherwise it is set to "0". For example, if key = 011**0*, then key.p = 0110000 and key.m = 0001101.

Next, consider how to match two ternary sequences, key1 and key2. First, a bitwise OR of key1.m and key2.m yields the overall don't-care mask key.m. Then key1.p and key2.p are each ORed bitwise with key.m. Finally, comparing the two results determines whether key1 and key2 match.

For example, suppose key1 = 011**0* (key1.m = 0001101, key1.p = 0110000) and key2 = 01*1*1* (key2.m = 0010101, key2.p = 0101010). Then key.m = key1.m | key2.m = 0011101. Computing key1.p = key1.p | key.m = 0111101 and key2.p = key2.p | key.m = 0111111: since key1.p does not equal key2.p, key1 and key2 do not match. The bitwise matching algorithm proposed by the present invention has low complexity when matching hash buckets, requiring only three bitwise OR operations per comparison. Overall, the complexity of the search is O(n), where n is the number of hash buckets on the server host. The ternary matching algorithm of the present invention is shown in Figure 7, where key1 and key2 are the two ternary sequences to be compared; the algorithm returns true if they match and false otherwise.
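The three-OR matching procedure above translates directly into code. A minimal sketch, with Python integers standing in for the bit vectors that the switch or TCAM hardware would hold:

```python
def encode(key):
    # Convert a ternary string over {0,1,*} into (key_p, key_m):
    # key_p replaces every '*' with 0; key_m marks '*' positions with 1.
    key_p = int(key.replace('*', '0'), 2)
    key_m = int(''.join('1' if c == '*' else '0' for c in key), 2)
    return key_p, key_m

def ternary_match(key1, key2):
    # Three bitwise ORs, as in the patent: build the combined
    # don't-care mask, force the don't-care bits of both keys
    # to 1, then compare the results.
    p1, m1 = encode(key1)
    p2, m2 = encode(key2)
    m = m1 | m2
    return (p1 | m) == (p2 | m)
```

Running the worked example from the text, `ternary_match('011**0*', '01*1*1*')` returns False, matching the conclusion that key1 and key2 do not match.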
3. Aggregation in the Programmable Switch

In NetSHa, programmable switches are deployed to aggregate and forward packets. Figure 4 shows the logical processing of the programmable switch for packet aggregation. Specifically, the programmable switch receives IP packets on its physical ports and parses their headers according to its preconfigured state automaton. It then uses a configured table (the IP ToS table) to identify NetSHa packets, whose IP ToS reserved bit is 1. NetSHa packets jump to the "aggregation" table for further processing (i.e., packet aggregation). Packets whose IP ToS reserved bit is 0 are treated as regular packets and forwarded normally.

In the "aggregation" table, the switch performs lightweight packet aggregation. This is done using the switch's registers, each of which resembles an array; to carry out aggregation tasks, the switch initializes its registers as a global "two-dimensional array". Each register stores two kinds of data: state and data pairs. The state records the query ID identifying a particular query and a counter indicating how many data pairs the register already carries. Every register has the same maximum capacity of data pairs, which is treated as a threshold. When the number of carried data pairs reaches the threshold, the register constructs a new NetSHa packet from the data pairs and query ID it carries and forwards it as a regular packet. It then resets its state, including the query ID and counter value, and waits for the next packet.
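The per-register behavior just described can be modeled as follows. A minimal sketch, not switch code: the register is a Python object, the data-pair format `(feature, location)` is a hypothetical stand-in, and the flush-on-threshold packet is returned as a plain tuple.

```python
class AggRegister:
    """Toy model of one switch register used by the 'aggregation' table:
    it buffers data pairs for one query ID and emits an aggregated
    NetSHa packet once the threshold is reached."""

    def __init__(self, threshold):
        self.threshold = threshold   # max data pairs a register can carry
        self.query_id = None         # state: which query this register serves
        self.pairs = []              # state: data pairs carried so far

    def absorb(self, query_id, pairs):
        # Append the packet's data pairs; flush when the register is full.
        self.query_id = query_id
        self.pairs.extend(pairs)
        if len(self.pairs) >= self.threshold:
            out = (self.query_id, self.pairs)
            self.query_id, self.pairs = None, []   # reset state, await next packet
            return out   # forwarded onward as a regular NetSHa packet
        return None      # still filling; nothing forwarded yet
```

The returned tuple plays the role of the new NetSHa packet constructed from the register's query ID and data pairs.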
4. Replacement Strategy

When a packet enters the "aggregation" table, it selects a register to fill. If a register with the same query ID already exists, the packet appends its data pairs to the end of that register until it is full. Otherwise, it needs to select an "empty" register to hold its data pairs, and in this sense existing implementations use a linear search to determine the register. However, the number of registers in the switch determines how many aggregation tasks it can perform in parallel, and that number is limited. This raises a problem: if all registers are occupied, a NetSHa packet carrying a new query ID cannot be processed. To address this challenge, the present invention adopts a replacement strategy to select a suitable register. The strategy is a weight-based selection mechanism: in short, the register carrying the most data pairs is selected. As shown in Figure 6, a NetSHa packet visits the registers one by one, comparing its query ID with each register's ID. If they are the same, that register is returned. Otherwise it traverses all registers, recording the first "empty" register where possible. If an "empty" register is found, it is returned; otherwise the register with the most data pairs (called the replacement register) is selected. Figure 8 illustrates the register selection and replacement algorithm of the present invention, where the input parameter q is the query ID of the arriving packet, R is the set of registers in the switch, and n is the number of registers; the algorithm returns the selected register in the priority order (register with the same query ID > idle register > register currently loaded with the most data pairs). To avoid losing data, if a register is replaced, its existing data pairs must first be aggregated into a NetSHa packet and transmitted; the register can then be cleared and used to process new data.
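The priority order of the selection algorithm (same query ID > idle register > most-loaded register) can be sketched as follows. Representing each register as a dict with `query_id` and `pairs` keys is an assumption for the example; the switch would hold this state in register arrays.

```python
def select_register(query_id, registers):
    """Pick a register for an incoming packet, in the priority order of
    Figure 8: (1) register with the same query ID, else (2) the first
    empty register, else (3) the register carrying the most data pairs,
    which will be flushed and replaced. Assumes at least one register."""
    empty = None
    fullest = registers[0]
    for r in registers:
        if r['query_id'] == query_id:
            return r                         # (1) same query ID: append here
        if r['query_id'] is None and empty is None:
            empty = r                        # remember the first empty register
        if len(r['pairs']) > len(fullest['pairs']):
            fullest = r                      # track the most-loaded register
    return empty if empty is not None else fullest
```

Before the most-loaded register is reused, its existing pairs must be flushed as a NetSHa packet, as the text notes; that flush step is outside this sketch.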
The present invention further proposes a data processing apparatus for performing distributed information retrieval based on in-network computing, and a computer-readable storage medium storing executable instructions which, when executed by a processor, implement the above distributed information retrieval method based on in-network computing. The data processing apparatus of the present invention comprises: a proxy server and retrieval servers, a network connecting them, and programmable switches disposed in the network. The proxy server's processor retrieves the executable instructions from the readable storage medium so as to generate a retrieval instruction according to the user's retrieval request, send it over the network to the retrieval servers, and, upon receiving the aggregated search result, select the final search results from it to return to the user. The retrieval server's processor retrieves the executable instructions from the readable storage medium so as to obtain the preliminary search results according to the retrieval instruction. The programmable switch's processor retrieves and executes the executable instructions in the readable storage medium so as to aggregate the preliminary search results. Those of ordinary skill in the art will understand that all or some of the steps of the above method can be completed by a program instructing the relevant hardware (such as a processor), the program being storable in a readable storage medium such as a read-only memory, magnetic disk, or optical disc. All or some of the steps of the above embodiments may also be implemented with one or more integrated circuits. Accordingly, the modules in the above embodiments may be implemented in hardware, for example by integrated circuits realizing their functions, or as software functional modules, for example by a processor executing programs/instructions stored in a memory. The embodiments of the present invention are not limited to any particular combination of hardware and software.
The distributed retrieval method of the present invention uses the network's programmable switches to perform in-network aggregation of the preliminary search results obtained by the retrieval servers, reducing the volume of retrieval data transmitted through the network and thereby effectively lowering network communication overhead without affecting normal high-speed data forwarding.

The above embodiments serve only to illustrate the present invention and not to limit it. Those of ordinary skill in the relevant art may make various changes and variations without departing from the spirit and scope of the present invention; all equivalent technical solutions therefore also belong to the scope of the present invention, whose patent protection shall be defined by the claims.

Industrial Applicability

The present invention proposes a new method of accelerating distributed retrieval systems through in-network computing. The method does not change the architecture of the distributed retrieval system, but optimizes it in two respects: first, it uses the computing capability of programmable switches to aggregate answer packets, reducing the number of packets transmitted through the network; second, it designs a fast mechanism deployed with TCAM components to accelerate the matching of similar data items on the distributed servers.

Claims (10)

  1. A distributed information retrieval method based on in-network computing, characterized by comprising:
    issuing, by a proxy server according to a user's retrieval request, a retrieval instruction to retrieval servers over a network;
    performing retrieval through the retrieval servers to obtain preliminary search results, which are sent to the network;
    aggregating the preliminary search results within the network to obtain an aggregated search result, which is sent to the proxy server; and
    selecting, by the proxy server, the final search results from the aggregated search result and returning them to the user.
  2. The distributed information retrieval method of claim 1, characterized in that, when the retrieval server performs retrieval, it searches in parallel along a fast retrieval path and a slow retrieval path and merges the first search result obtained via the fast retrieval path with the second search result obtained via the slow retrieval path into the preliminary search result, wherein the fast retrieval path is implemented by the parallel circuitry of the retrieval server's TCAM component, and the slow retrieval path is implemented by search-algorithm software installed on the retrieval server.
  3. The distributed information retrieval method of claim 1, characterized in that the preliminary search results are aggregated by a switch of the network, wherein after the switch receives, on its physical ports, the IP packets generated from the preliminary search results, it parses the preliminary search results out of the IP packets according to its preconfigured state automaton, identifies the preliminary search results to be merged through matching in the switch pipeline, and stores the preliminary search results to be merged into the switch's registers for storage and merging.
  4. The distributed information retrieval method of claim 3, characterized in that each preliminary search result takes the ID of its corresponding retrieval instruction as its own ID, and the step of aggregating the preliminary search results further comprises:
    for the switch's multiple registers, when a new preliminary search result is parsed out, comparing its ID in turn with the IDs of the data stored in each register; if a register with the same ID exists, storing the preliminary search result at the end of that register; otherwise storing it in an empty register; and if no empty register exists, storing it in the register holding the most data.
  5. A distributed information retrieval system based on in-network computing, characterized by comprising:
    a retrieval instruction module, by which a proxy server issues, according to a user's retrieval request, a retrieval instruction to retrieval servers over a network;
    a preliminary retrieval module for performing retrieval through the retrieval servers to obtain preliminary search results, which are sent to the network;
    an in-network aggregation module for aggregating the preliminary search results within the network to obtain an aggregated search result, which is sent to the proxy server; and
    a final result module, by which the proxy server selects the final search results from the aggregated search result and returns them to the user.
  6. The distributed information retrieval system of claim 5, characterized in that the preliminary retrieval module comprises:
    a fast retrieval module for obtaining a first search result through the parallel circuitry of the retrieval server's TCAM component;
    a slow retrieval module for obtaining a second search result through search-algorithm software installed on the retrieval server; and
    a result merging module for merging the first search result and the second search result into the preliminary search result.
  7. The distributed information retrieval system of claim 5, characterized in that the in-network aggregation module aggregates the preliminary search results through a switch of the network, wherein after the switch receives, on its physical ports, the IP packets generated from the preliminary search results, it parses the preliminary search results out of the IP packets according to its preconfigured state automaton, identifies the preliminary search results to be merged through matching in the switch pipeline, and stores the preliminary search results to be merged into the switch's registers for storage and merging.
  8. The distributed information retrieval system of claim 7, characterized in that the in-network aggregation module further comprises a register replacement module for selecting a register for data storage and aggregation, wherein, for the switch's multiple registers, when a new preliminary search result is parsed out, its ID is compared in turn with the IDs of the data stored in each register; if a register with the same ID exists, the preliminary search result is stored at the end of that register; otherwise it is stored in an empty register; and if no empty register exists, it is stored in the register holding the most data; the ID of a preliminary search result being the ID of its corresponding retrieval instruction.
  9. A readable storage medium storing executable instructions for performing the distributed information retrieval method based on in-network computing of any one of claims 1 to 4.
  10. A data processing apparatus, comprising:
    a proxy server disposed in a network and provided with the readable storage medium of claim 9, the proxy server's processor retrieving and executing the executable instructions in the readable storage medium so as to generate a retrieval instruction according to a user's retrieval request, send it over the network to retrieval servers, and select the final search results to return to the user;
    a switch disposed in the network and provided with the readable storage medium of claim 9, the switch's processor retrieving and executing the executable instructions in the readable storage medium so as to aggregate the preliminary search results; and
    a retrieval server disposed in the network and provided with the readable storage medium of claim 9, the retrieval server's processor retrieving and executing the executable instructions in the readable storage medium so as to obtain the preliminary search results according to the retrieval instruction.
PCT/CN2019/126227 2019-11-25 2019-12-18 Distributed information retrieval method, system and apparatus based on in-network computing WO2021103207A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911166655.5A CN111143427B (zh) 2019-11-25 2019-11-25 Distributed information retrieval method, system and apparatus based on in-network computing
CN201911166655.5 2019-11-25

Publications (1)

Publication Number Publication Date
WO2021103207A1 true WO2021103207A1 (zh) 2021-06-03

Family

ID=70516654

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/126227 WO2021103207A1 (zh) 2019-11-25 2019-12-18 基于在网计算的分布式信息检索方法、系统与装置

Country Status (2)

Country Link
CN (1) CN111143427B (zh)
WO (1) WO2021103207A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP4152702A4 (en) * 2020-06-08 2023-11-15 Huawei Technologies Co., Ltd. METHOD, APPARATUS AND APPARATUS FOR PROCESSING TAX MESSAGES IN A COLLECTIVE COMMUNICATIONS SYSTEM AND SYSTEM
CN111931033A (zh) * 2020-08-11 2020-11-13 深圳市欢太科技有限公司 Retrieval method, retrieval apparatus, and server

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102436513A (zh) * 2012-01-18 2012-05-02 中国电子科技集团公司第十五研究所 Distributed retrieval method and system
EP2518638A2 (en) * 2011-04-27 2012-10-31 Verint Systems Limited System and method for keyword spotting using multiple character encoding schemes
CN104023039A (zh) * 2013-02-28 2014-09-03 国际商业机器公司 Packet transmission method and apparatus
CN108241627A (zh) * 2016-12-23 2018-07-03 北京神州泰岳软件股份有限公司 Heterogeneous data storage and query method and system
CN109033123A (zh) * 2018-05-31 2018-12-18 康键信息技术(深圳)有限公司 Big-data-based query method, apparatus, computer device, and storage medium

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5061372B2 (ja) * 2008-11-06 2012-10-31 Necアクセステクニカ株式会社 Web search system, web search method, and web search program
CN101694672B (zh) * 2009-10-16 2011-05-18 华中科技大学 Distributed secure retrieval system
CN102521350B (zh) * 2011-12-12 2014-07-16 浙江大学 Collection selection method for distributed information retrieval based on historical click data
CN104050235B (zh) * 2014-03-27 2017-02-22 浙江大学 Distributed information retrieval method based on collection selection
US20150326480A1 (en) * 2014-05-07 2015-11-12 Alcatel Lucent Conditional action following tcam filters
US9984144B2 (en) * 2015-08-17 2018-05-29 Mellanox Technologies Tlv Ltd. Efficient lookup of TCAM-like rules in RAM
CN107967219B (zh) * 2017-11-27 2021-08-06 北京理工大学 TCAM-based high-speed lookup method for large-scale strings
US10901897B2 (en) * 2018-01-16 2021-01-26 Marvell Israel (M.I.S.L.) Ltd. Method and apparatus for search engine cache


Also Published As

Publication number Publication date
CN111143427A (zh) 2020-05-12
CN111143427B (zh) 2023-09-12


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 19953983; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 19953983; Country of ref document: EP; Kind code of ref document: A1)