CN111026713B - Search system, data search method and operation time determination method - Google Patents

Search system, data search method and operation time determination method

Info

Publication number
CN111026713B
Authority
CN
China
Prior art keywords
result
optimal result
search
node
preferred
Prior art date
Legal status
Active
Application number
CN201911281045.XA
Other languages
Chinese (zh)
Other versions
CN111026713A (en)
Inventor
徐鹏飞
周轶凡
Current Assignee
Hangzhou Dt Dream Technology Co Ltd
Original Assignee
Hangzhou Dt Dream Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Dt Dream Technology Co Ltd
Priority to CN201911281045.XA
Publication of CN111026713A
Application granted
Publication of CN111026713B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/10 File systems; File servers
    • G06F16/14 Details of searching files based on file metadata
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/10 File systems; File servers
    • G06F16/18 File system types
    • G06F16/182 Distributed file systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5011 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
    • G06F9/5016 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals the resource being the memory
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Library & Information Science (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Power Sources (AREA)

Abstract

The invention provides a search method and a search device, wherein the method comprises the following steps: obtaining the preferred result of each computing node; if there is a preferred result that is better than the current optimal result in all the preferred results, selecting the optimal preferred result from all the preferred results as the current optimal result; otherwise, keeping the current optimal result unchanged; judging whether a specified parameter meets a set threshold; if not, sending the current optimal result to each computing node so that each computing node performs a heuristic search using the current optimal result to obtain a plurality of search results, determines a preferred result from the plurality of search results, and sends the preferred result to the sink node; if yes, outputting the current optimal result. With the technical scheme of the invention, the advantages of distributed computing are used effectively: the heuristic search task is distributed to a plurality of computing nodes and each computing node performs heuristic search, so that both the performance and the efficiency of the heuristic search are improved.

Description

Search system, data search method and operation time determination method
Technical Field
The present invention relates to the field of communications technologies, and in particular, to a search system, a data search method, and a method for determining an operation time.
Background
A machine can be considered to have a degree of artificial intelligence when it performs, in place of a human, tasks that belong to human activity. Artificial intelligence has developed rapidly since its inception, and heuristic search is a technique commonly used in it. Some tasks in human activity are ill-defined and may have no exact solution, such as medical diagnosis; others have a solution, but computing it is prohibitively expensive, such as complex fuzzy matching. For such problems, heuristic search can be used so that a reasonably good solution is obtained within an acceptable time.
Heuristic search is a search in a state space: each candidate search position is evaluated, the best position is selected, the search continues from that position, and so on, until the target is found and an optimal result is obtained. In this way a large number of unnecessary search paths are skipped and efficiency is improved.
In a heuristic search process, a computing device searches based on initial data. Suppose it performs 10 searches and obtains 10 search results, compares these 10 search results, and finds that the best one is the 2nd search result. The computing device then searches based on the 2nd search result, performs another 10 searches to obtain 10 new search results, compares these with the 2nd search result to obtain a new best result, and so on, until the heuristic search process ends and an optimal result is obtained.
In this process, the effect of the heuristic search depends heavily on the performance of the computing device. The device searches frequently, and too many searches degrade its performance and may even slow it down or cause it to hang, so that it cannot find a better result and the overall search performance is poor.
Disclosure of Invention
The invention provides a search method, applied to a system comprising a sink node and a plurality of computing nodes, wherein the sink node is connected with each of the computing nodes, each computing node is a node with CPU resources and memory resources and has a computing function, and the sink node is a node with CPU resources and memory resources and has a control function; the method is applied to the sink node and comprises the following steps:
step A, obtaining a preferred result of each computing node;
step B, if there is a preferred result that is better than the current optimal result in all the preferred results, selecting the optimal preferred result from all the preferred results as the current optimal result; otherwise, keeping the current optimal result unchanged;
step C, judging whether a specified parameter meets a set threshold; if yes, executing step E; otherwise, executing step D;
step D, sending the current optimal result to each computing node, so that each computing node performs a heuristic search using the current optimal result to obtain a plurality of search results, determines a preferred result from the plurality of search results, and sends the preferred result to the sink node; then returning to step A;
and step E, outputting the current optimal result; a code sketch of steps A to E is given below.
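For illustration only, the following Python sketch shows one way the sink-node loop of steps A to E could be organized. It is not the patent's implementation; receive_preferred, send_to_node, deviation and threshold_met are hypothetical callables supplied by the caller, and "better" is taken to mean a smaller deviation from the search target value, as explained later in the detailed description.

```python
def sink_node_loop(compute_nodes, receive_preferred, send_to_node,
                   deviation, threshold_met, initial_best):
    current_best = initial_best
    while True:
        # Step A: obtain the preferred result reported by each computing node.
        preferred = [receive_preferred(node) for node in compute_nodes]

        # Step B: if any preferred result beats the current optimal result,
        # adopt the best preferred result; otherwise keep the current optimal result.
        best_preferred = min(preferred, key=deviation)
        if deviation(best_preferred) < deviation(current_best):
            current_best = best_preferred

        # Steps C and E: if the specified parameter meets the set threshold,
        # output the current optimal result and stop.
        if threshold_met(current_best):
            return current_best

        # Step D: send the current optimal result to every computing node so that
        # each node searches again and reports a new preferred result (back to step A).
        for node in compute_nodes:
            send_to_node(node, current_best)
```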
After sending the current optimal result to each of the computing nodes, the method further includes:
sending the total running time to each computing node, so that each computing node determines a preferred result from a plurality of search results after the heuristic search time reaches the total running time; or,
sending the cycle running times to each computing node, so that each computing node determines a preferred result from a plurality of search results after the heuristic search times reach the cycle running times; or,
and sending the total running time and the cycle running times to each computing node, so that each computing node determines a preferred result from a plurality of search results after heuristic search time reaches the total running time or heuristic search times reach the cycle running times.
After selecting the optimal preferred result from all the preferred results as the current optimal result and before sending the total running time to each computing node, the method further includes: if there is no preferred result that is better than the current optimal result in all the preferred results, increasing the total running time of the local record and updating the increased total running time to the total running time of the local record; if there is a preferred result that is better than the current optimal result in all the preferred results, keeping the total running time of the local record unchanged.
The process in which a computing node performs a heuristic search using the current optimal result to obtain a plurality of search results specifically comprises: if there is a preferred result that is better than the current optimal result in all the preferred results, the computing node performs a heuristic search using the current optimal result to obtain a plurality of search results; or, if there is no preferred result that is better than the current optimal result in all the preferred results, the computing node performs a heuristic search using the current optimal result to obtain a plurality of search results; or, the computing node performs a heuristic search using its own preferred result to obtain a plurality of search results.
The process of judging whether the specified parameter meets the set threshold includes:
if the degree of deviation between the current optimal result and the search target value is smaller than a preset first threshold, determining that the set threshold is met; otherwise, determining that the set threshold is not met; or,
counting the number of supersteps that have currently been executed; if the number of supersteps is larger than a preset second threshold, determining that the set threshold is met; otherwise, determining that the set threshold is not met; or,
counting the search time that has currently been spent; if the search time is greater than a preset third threshold, determining that the set threshold is met; otherwise, determining that the set threshold is not met.
The invention provides a search device, applied to a system comprising a sink node and a plurality of computing nodes, wherein the sink node is connected with each of the computing nodes, each computing node is a node with CPU resources and memory resources and has a computing function, and the sink node is a node with CPU resources and memory resources and has a control function; the search device is applied to a computing device, and when the computing device serves as the sink node, the search device specifically comprises:
an acquisition module, configured to acquire a preferred result of each computing node;
a processing module, configured to select the optimal preferred result from all the preferred results as the current optimal result when there is a preferred result that is better than the current optimal result in all the preferred results, and to keep the current optimal result unchanged when there is no preferred result that is better than the current optimal result in all the preferred results;
a judging module, configured to judge whether a specified parameter meets a set threshold;
a sending module, configured to, when the judgment result is negative, send the current optimal result to each computing node so that each computing node performs a heuristic search using the current optimal result to obtain a plurality of search results, determines a preferred result from the plurality of search results, and sends the preferred result to the sink node;
and an output module, configured to output the current optimal result when the judgment result is affirmative.
The sending module is further configured to send a total running time to each of the computing nodes after sending the current optimal result to each of the computing nodes, so that each of the computing nodes determines a preferred result from the plurality of search results after a heuristic search time reaches the total running time; or,
sending the cycle running times to each computing node, so that each computing node determines a preferred result from a plurality of search results after the heuristic search times reach the cycle running times; or,
and sending the total running time and the cycle running times to each computing node, so that each computing node determines a preferred result from a plurality of search results after heuristic search time reaches the total running time or heuristic search times reach the cycle running times.
The processing module is further configured to, after selecting the optimal preferred result from all the preferred results as the current optimal result and before sending the total running time to each of the computing nodes, increase the total running time of the local record when there is no preferred result that is better than the current optimal result in all the preferred results, and update the increased total running time to the total running time of the local record; and keep the total running time of the local record unchanged when there is a preferred result that is better than the current optimal result in all the preferred results.
When the computing device serves as a computing node, the search device further comprises a search module, wherein: the search module is configured to perform a heuristic search using the current optimal result to obtain a plurality of search results when there is a preferred result that is better than the current optimal result in all the preferred results; or,
to perform a heuristic search using the current optimal result to obtain a plurality of search results when there is no preferred result that is better than the current optimal result in all the preferred results; or to perform a heuristic search using the preferred result of the computing node to obtain a plurality of search results.
The judging module is specifically configured to, in the process of judging whether the specified parameter meets the set threshold, determine that the set threshold is met when the degree of deviation between the current optimal result and the search target value is smaller than a preset first threshold, and otherwise determine that the set threshold is not met; or count the number of supersteps that have currently been executed, determine that the set threshold is met if the number of supersteps is larger than a preset second threshold, and otherwise determine that the set threshold is not met; or count the search time that has currently been spent, determine that the set threshold is met if the search time is greater than a preset third threshold, and otherwise determine that the set threshold is not met.
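As a non-authoritative sketch, the sink-node side of the module division above could be organized as a single class; the class and method names below are illustrative, not taken from the patent, and the transport between nodes is left unimplemented.

```python
class SinkNodeSearchDevice:
    """Sketch of the acquisition, processing, judging, sending and output modules."""

    def __init__(self, compute_nodes, deviation, threshold_met):
        self.compute_nodes = compute_nodes
        self.deviation = deviation          # assumed evaluation helper
        self.threshold_met = threshold_met  # assumed specified-parameter check
        self.current_best = None

    def acquire(self):
        """Acquisition module: collect the preferred result of each computing node."""
        raise NotImplementedError("transport between nodes is deployment-specific")

    def process(self, preferred_results):
        """Processing module: adopt the best preferred result if it improves on
        the current optimal result; otherwise keep the current optimal result."""
        best = min(preferred_results, key=self.deviation)
        if self.current_best is None or self.deviation(best) < self.deviation(self.current_best):
            self.current_best = best

    def judge(self):
        """Judging module: does the specified parameter meet the set threshold?"""
        return self.threshold_met(self.current_best)

    def send(self):
        """Sending module: push the current optimal result to every computing node."""
        raise NotImplementedError("transport between nodes is deployment-specific")

    def output(self):
        """Output module: return the current optimal result."""
        return self.current_best
```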
Based on the technical scheme, the heuristic search task can be distributed to a plurality of computing nodes by effectively utilizing the distributed computing advantages, and each computing node carries out heuristic search, so that the performance of the heuristic search can be improved, and the efficiency of the heuristic search can be improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments of the present invention or of the prior art are briefly described below. It is obvious that the drawings in the following description show only some embodiments of the present invention, and that those skilled in the art can derive other drawings from these drawings.
FIG. 1 is a schematic diagram of a system architecture in one embodiment of the present invention;
FIG. 2 is a flow diagram of a search method in one embodiment of the invention;
FIG. 3 is a hardware block diagram of a computing device in one embodiment of the invention;
fig. 4 is a configuration diagram of a search device in one embodiment of the present invention.
Detailed Description
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in this specification and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein is meant to encompass any and all possible combinations of one or more of the associated listed items.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, the information should not be limited by these terms. These terms are only used to distinguish information of the same type from one another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present invention. Depending on the context, the word "if" as used herein may be interpreted as "when", "upon", or "in response to determining".
In view of the problems in the prior art, the embodiment of the present invention provides a search method, which may be applied to a system including a sink node and a plurality of computing nodes. The aggregation node is connected to a plurality of computing nodes, and each computing node is a node having a Central Processing Unit (CPU) resource and a memory resource and has a computing function. The sink node is a node with CPU resources and memory resources and has a control function. In one example, a heuristic search strategy is configured on the computing node, and a heuristic search may be performed based on the heuristic search strategy, that is, the embodiment of the present invention may be used to execute the relevant processes. The sink node is used for controlling each computing node to execute heuristic search and processing based on the information reported by each computing node, namely, the embodiment of the invention can be adopted to execute relevant processes.
The heuristic search is a search in a state space, the best position is obtained by evaluating each search position, the search is carried out from the best position, and the like until a target is searched, and the optimal result is obtained. Therefore, a large number of unnecessary search paths can be omitted, and the efficiency is improved.
The system including the sink node and the plurality of computing nodes may be various types of systems, which is not limited in the embodiment of the present invention as long as the system includes the sink node and the plurality of computing nodes, the sink node is connected to the plurality of computing nodes, each computing node has a computing function, and the sink node has a function of controlling each computing node.
As shown in fig. 1, which is a schematic structural diagram of a system in an embodiment of the present invention, the system may include a sink node and a plurality of computing nodes, where the sink node is connected to the plurality of computing nodes respectively. In one example, one or more computing nodes (having logic units of memory resources and CPU resources and having computing functions) may be configured on a computing device (which may be a real physical device or may be implemented by a virtual machine), and different computing nodes may be located on the same computing device or different computing devices. As shown in FIG. 1, compute nodes 1 and 2 are located on compute device 1, and compute nodes 3, 4, and 5 are located on compute device 2. In one example, the computing device may be a PC (Personal Computer), laptop, tablet, or the like. In addition, a sink node (a logic unit having memory resources and CPU resources and having a control function) may be configured on the computing device. The computing device where the sink node is located may be the same as or different from the computing device where the computing node is located, and is located on the computing device 3 in fig. 1 as an example.
In the application scenario, the search method provided in the embodiment of the present invention may be applied to a sink node, as shown in fig. 2, and the search method may include the following steps:
step 201, obtaining the preferred result of each computing node.
Step 202, if there is a preferred result which is better than the current optimal result in all the preferred results, selecting the optimal preferred result from all the preferred results as the current optimal result; otherwise, keeping the current optimal result unchanged.
Step 203, judging whether the designated parameters meet the set threshold value; if yes, go to step 205; otherwise, step 204 is performed.
Step 204, sending the current optimal result to each computing node, so that each computing node performs heuristic search by using the current optimal result to obtain a plurality of search results, determining a preferred result from the plurality of search results, sending the preferred result to the sink node, and then returning to step 201.
And step 205, outputting the current optimal result and ending the heuristic search process.
For step 201, in the process of first obtaining the preferred result of each computing node, the sink node may obtain the original data, split the original data into a plurality of sub-data, and allocate each sub-data to one computing node. For example, the original data is split into sub-data 1, 2, 3, 4, 5, and sub-data 1 is allocated to compute node 1, sub-data 2 is allocated to compute node 2, sub-data 3 is allocated to compute node 3, sub-data 4 is allocated to compute node 4, and sub-data 5 is allocated to compute node 5.
Each computing node can perform a heuristic search using the sub-data allocated to it to obtain a plurality of search results, determine a preferred result from the plurality of search results, and send the preferred result to the sink node. Thus, the sink node can acquire the preferred result of each computing node. As the heuristic search algorithm, a traditional ant colony algorithm, a genetic algorithm, a simulated annealing algorithm, or the like can be adopted; the algorithm itself is not described in detail here, and any algorithm capable of realizing heuristic search is applicable.
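A minimal sketch of the initial split of the original data into sub-data, assuming a simple round-robin assignment (the patent does not prescribe how the split is performed):

```python
def split_and_assign(original_data, compute_nodes):
    """Split original_data into one list of sub-data per computing node and
    pair each sub-data list with its node (round-robin, for illustration)."""
    buckets = [[] for _ in compute_nodes]
    for index, item in enumerate(original_data):
        buckets[index % len(compute_nodes)].append(item)
    return list(zip(compute_nodes, buckets))

# Example matching the text: five pieces of sub-data over five computing nodes.
assignments = split_and_assign([1, 2, 3, 4, 5],
                               ["node1", "node2", "node3", "node4", "node5"])
```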
In one example, in an initial state, a total runtime, such as 10 seconds, may be configured on the aggregation node, and the aggregation node may send the total runtime to each of the compute nodes. In this way, during the heuristic search, if the heuristic search time reaches the total operation time, the heuristic search may be stopped, and a preferred result may be determined from a plurality of search results currently obtained, and sent to the sink node. For example, the computing node 1 starts timing from the first time of performing the heuristic search, stops the heuristic search if the heuristic search time reaches 10 seconds, obtains 20 search results assuming that 20 heuristic searches have been performed, and determines a preferred result from the 20 search results.
In another example, in the initial state, the number N of loop operations may be configured on the sink node, for example, 10 times, and the sink node may send the number N of loop operations to each computing node. In this way, in the process of performing heuristic search, if the number of heuristic searches reaches the number of loop operations N, the heuristic search may be stopped, and a preferred result may be determined from a plurality (i.e., N) of search results currently obtained, and sent to the sink node. For example, the computing node 1 counts from the first heuristic search, stops the heuristic search if the number of heuristic searches has reached 10, and obtains a total of 10 search results currently, and determines a preferred result from the 10 search results.
In another example, in an initial state, a total running time (e.g., 10 seconds, etc.) and a number of loop runs N (e.g., 10 times, etc.) may be configured on the aggregation node, and the aggregation node may send the total running time and the number of loop runs N to each computing node. In this way, in the process of performing heuristic search, if the heuristic search time reaches the total operation time or the heuristic search times reaches the cycle operation times N, the heuristic search may be stopped, and a preferred result may be determined from a plurality of search results obtained currently, and sent to the sink node. If the condition "heuristic search time reaches total running time" is met first, the heuristic search can be stopped immediately, or if the condition "heuristic search times reaches loop running times N" is met first, the heuristic search can also be stopped immediately.
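The stopping conditions above can be illustrated with the following sketch of a single compute-node round; heuristic_search_once and deviation are hypothetical callables, and the code is not the patent's implementation.

```python
import time

def compute_node_round(seed, heuristic_search_once, deviation,
                       total_running_time=None, loop_runs=None):
    """Run heuristic searches from seed until the total running time or the
    number of loop runs is reached (whichever configured limit is hit first),
    then return the preferred result (the result closest to the target)."""
    assert total_running_time is not None or loop_runs is not None
    start = time.monotonic()
    results = []
    while True:
        results.append(heuristic_search_once(seed))
        if total_running_time is not None and time.monotonic() - start >= total_running_time:
            break  # heuristic search time reached the total running time
        if loop_runs is not None and len(results) >= loop_runs:
            break  # number of heuristic searches reached the cycle running times
    return min(results, key=deviation)
```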
Based on the above manner, each computing node sends the preferred result to the sink node, so that the sink node can obtain the preferred result of each computing node. For example, the preferred results for compute node 1, compute node 2, compute node 3, compute node 4, and compute node 5 are R11, R12, R13, R14, R15, respectively.
In practical applications, since the heuristic search is an iterative loop, step 201 is executed multiple times. When the preferred result of each computing node is acquired for the Mth time (where M is a positive integer greater than or equal to 2), step 201 is executed after step 204, in which the current optimal result is sent to each computing node. Each computing node then performs a heuristic search using the current optimal result to obtain a plurality of search results, determines a preferred result from the plurality of search results, and sends the preferred result to the sink node, so that the sink node obtains the preferred result of each computing node, that is, step 201 is executed again. As the heuristic search algorithm used by each computing node with the current optimal result, a traditional ant colony algorithm, genetic algorithm, simulated annealing algorithm, or the like can be adopted, and the algorithm itself is not described in detail here.
On the basis of sending the current optimal result to each computing node, in step 204, in an example, the sink node may send the total running time to each computing node, and during the heuristic search, if the heuristic search time reaches the total running time, each computing node may stop the heuristic search, determine a preferred result from a plurality of search results obtained currently, and send the preferred result to the sink node. In another example, the sink node may send the number N of loop operations to each computing node, and during the heuristic search, if the number of heuristic searches reaches the number N of loop operations, the heuristic search may be stopped, and a preferred result is determined from a plurality (i.e., N) of search results currently obtained, and the preferred result is sent to the sink node. In another example, the sink node may send the total running time and the number of loop running times N to each computing node, and during the heuristic search, if the heuristic search time reaches the total running time or the heuristic search number reaches the number of loop running times N, the heuristic search may be stopped, and a preferred result may be determined from a plurality of search results currently obtained, and sent to the sink node. If the condition that the heuristic search time reaches the total running time is met, the heuristic search can be stopped immediately, or if the condition that the heuristic search times reaches the cycle running times N is met, the heuristic search can also be stopped immediately.
Based on the above manner, each computing node sends the preferred result to the sink node, so that the sink node can obtain the preferred result of each computing node. When the preferred result of each computing node is obtained at the mth time (M is a positive integer greater than or equal to 2), the preferred results of the computing nodes 1, 2, 3, 4 and 5 are RM1, RM2, RM3, RM4 and RM5, respectively.
The total running time and/or the number of loop runs transmitted in step 204 may be the same or different from the total running time and/or the number of loop runs transmitted in the initial state. If they are different, the total running time sent in step 204 may be greater than the total running time sent in the initial state, and the number of loop runs sent in step 204 may be greater than the number of loop runs sent in the initial state.
In one example, assuming that the sink node sends the total running time to each of the computing nodes: before sending the total running time, if there is no preferred result that is better than the current optimal result in all the preferred results (as known from step 202), the total running time of the local record is increased and the increased total running time is updated to the total running time of the local record; if there is a preferred result that is better than the current optimal result in all the preferred results (as known from step 202), the total running time of the local record is kept unchanged. For example, assuming that the total running time of the local record is 10 seconds, if there is no preferred result that is better than the current optimal result in all the preferred results, the total running time is increased to 13 seconds, 13 seconds is updated as the total running time of the local record, and the locally recorded total running time of 13 seconds is sent to each computing node. For another example, assuming that the total running time of the local record is 10 seconds, if there is a preferred result that is better than the current optimal result in all the preferred results, the locally recorded total running time of 10 seconds is sent to each computing node.
In another example, assuming that the sink node sends the number N of loop operations to each computing node: before sending the number N of loop operations, if there is no preferred result that is better than the current optimal result in all the preferred results, the number N of loop operations recorded locally is increased and the increased value is updated as the number N of loop operations recorded locally; if there is a preferred result that is better than the current optimal result in all the preferred results, the number N of loop operations recorded locally is kept unchanged. For example, assuming that the number of loop operations recorded locally is 10, if there is no preferred result that is better than the current optimal result in all the preferred results, the number of loop operations is increased to 12, 12 is updated as the number of loop operations recorded locally, and the locally recorded number of loop operations of 12 is sent to each computing node. For another example, assuming that the number of loop operations recorded locally is 10, if there is a preferred result that is better than the current optimal result in all the preferred results, the locally recorded number of loop operations of 10 is sent to each computing node.
In another example, assuming that the aggregation node sends the total running time and the number of loop running times N to each computing node, before sending the total running time and the number of loop running times N to each computing node, if there is no preferred result that is better than the current optimal result in all the preferred results, the total running time recorded locally may be increased, and/or the number of loop running times N recorded locally may be increased, the increased total running time may be updated to the total running time recorded locally, and/or the increased number of loop running times N may be updated to the number of loop running times N recorded locally. If the preferred result which is better than the current optimal result exists in all the preferred results, the total running time of the local record is kept unchanged, and the loop running times N of the local record are kept unchanged.
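The adjustment of the limits between rounds can be sketched as follows; the 3-second and 2-run increments simply mirror the 10-to-13-second and 10-to-12-run examples above and are not fixed by the patent.

```python
def adjust_limits(improved, total_running_time, loop_runs,
                  time_increment=3, run_increment=2):
    """Return the (total_running_time, loop_runs) to send for the next round.

    improved is True when some preferred result beat the current optimal
    result in this round; in that case both limits are kept unchanged."""
    if not improved:
        total_running_time += time_increment
        loop_runs += run_increment
    return total_running_time, loop_runs
```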
The effect of sending the total running time and/or the cycle running times to each computing node is as follows:
and aiming at the mode of sending the total running time to each computing node, stopping the heuristic search and sending the optimal result to the sink node after the heuristic search time reaches the total running time. Therefore, each computing node can send the optimized result to the sink node at substantially the same time, resources of each computing node can be fully utilized, and the situation that after the optimized result is reported by the computing node, other computing nodes do not report the optimized result yet is avoided, so that the computing nodes are prevented from being idle for a long time, distributed computing advantages of each computing node are effectively utilized, and the total running time of each computing node can be limited by time.
And aiming at the mode of sending the cycle running times to each computing node, each computing node stops heuristic search and sends an optimal result to the sink node after the heuristic search times reach the cycle running times. Therefore, each computing node can complete heuristic search for the same number of times, namely, preferred results are selected from the search results for the same number of times, so that the preferred results obtained by the aggregation nodes are the more valuable preferred results.
The method has the advantages of the two methods for sending the total running time and the cycle running times to each computing node. In addition, in this way, the number of times of loop operation can be set to be large, so that when the heuristic search time reaches the total operation time, the number of times of heuristic search does not reach the number of times of loop operation yet, and each computing node can send the optimal result to the sink node at substantially the same time.
If there is no preferred result which is better than the current optimal result in all the preferred results, the reason for increasing the total running time and/or the cycle running times of the local records is as follows:
if the preferred result which is better than the current optimal result does not exist in all the preferred results, the heuristic search which is executed currently does not obtain a better preferred result, and the total running time and/or the cycle running times are increased, so that each computing node can conduct heuristic search for more times, and the probability of obtaining the preferred result which is better than the current optimal result is improved.
For step 202, the sink node will compare all preferred results from each compute node to the current optimal result of the local record. And if the preferred result which is better than the current optimal result exists in all the preferred results, selecting the optimal preferred result from all the preferred results as the current optimal result. If the preferred result which is better than the current optimal result does not exist in all the preferred results, the current optimal result is kept unchanged.
In one example, in the initial state, the sink node configures a current optimal result, and the current optimal result is a default value. After receiving the preferred results R11, R12, R13, R14, and R15, the sink node assumes that the preferred results R11, R12, R13, R14, and R15 are all preferred results better than the current optimal result (default value), and therefore selects the optimal preferred result (assumed to be R15) from all the preferred results as the current optimal result. In the next iteration process, after receiving the preferred results R21, R22, R23, R24, and R25, the sink node selects the preferred result R21 as the current optimal result, assuming that the preferred result R21 is a preferred result better than the current optimal result R15. In the next iteration process, after receiving the preferred results R31, R32, R33, R34, and R35, the sink node assumes that there is no preferred result that is better than the current optimal result R21, so the current optimal result is kept as R21, and so on.
In an example, a non-evolution count may be configured on the sink node, with an initial value of 0. Thus, for step 202, if there is a preferred result that is better than the current optimal result in all the preferred results, the non-evolution count is set to 0 (if the count is already 0 it is kept unchanged; if it is not 0 it is reset to 0). If there is no preferred result that is better than the current optimal result in all the preferred results, the non-evolution count is increased by 1; for example, if the count is currently 1, it is changed to 2.
The value of the non-evolution count conveys the following: if it is 0, there is a preferred result that is better than the current optimal result in all the preferred results; if it is not 0, there is no preferred result that is better than the current optimal result in all the preferred results, and the count equals the number of consecutive rounds in which the current optimal result has not changed.
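A minimal sketch of the non-evolution count described above (class and method names are illustrative):

```python
class NonEvolutionCounter:
    def __init__(self):
        self.count = 0  # initial value is 0

    def update(self, improved):
        """Reset to 0 when a better preferred result appeared this round;
        otherwise add 1, i.e. count the rounds without improvement."""
        self.count = 0 if improved else self.count + 1
        return self.count
```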
For step 203, the process in which the sink node judges whether the specified parameter meets the set threshold may be, but is not limited to, one of the following ways: if the degree of deviation between the current optimal result and the search target value is smaller than a preset first threshold, it is determined that the set threshold is met; otherwise, it is determined that the set threshold is not met. Or, the number of supersteps that have currently been executed is counted; if the number of supersteps is larger than a preset second threshold, it is determined that the set threshold is met; otherwise, it is determined that the set threshold is not met. Or, the search time that has currently been spent is counted; if the search time is greater than a preset third threshold, it is determined that the set threshold is met; otherwise, it is determined that the set threshold is not met.
For the search target value, the following process will be described, and details are not repeated here.
The whole process from the moment the sink node sends the current optimal result to each computing node until the sink node obtains the next current optimal result is called a superstep. Thus, each time step 202 is completed, one superstep is considered completed, so the number of supersteps that have currently been executed can be counted.
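The three alternative checks on the specified parameter can be sketched in one helper; which mode is used and the threshold values are configuration choices assumed here for illustration only.

```python
def set_threshold_met(mode, *, deviation_of_best=None, first_threshold=None,
                      supersteps_done=None, second_threshold=None,
                      search_time=None, third_threshold=None):
    if mode == "deviation":
        # Met when the current optimal result deviates from the search
        # target value by less than the preset first threshold.
        return deviation_of_best < first_threshold
    if mode == "supersteps":
        # Met when the number of executed supersteps exceeds the preset second threshold.
        return supersteps_done > second_threshold
    if mode == "time":
        # Met when the search time already spent exceeds the preset third threshold.
        return search_time > third_threshold
    raise ValueError("unknown mode: " + repr(mode))
```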
For step 204, the process in which a computing node performs a heuristic search using the current optimal result to obtain a plurality of search results may include, but is not limited to, the following: if there is a preferred result that is better than the current optimal result in all the preferred results, the computing node performs a heuristic search using the current optimal result to obtain a plurality of search results. Or, if there is no preferred result that is better than the current optimal result in all the preferred results, the computing node performs a heuristic search using the current optimal result to obtain a plurality of search results; or, the computing node performs a heuristic search using its own preferred result to obtain a plurality of search results.
In one example, the sink node may notify each computing node of information on whether there is a preferred result that is better than the current optimal result among all the preferred results. Based on the notification information, each computing node can know whether the preferred result better than the current optimal result exists in all the preferred results.
In one example, if there is no preferred result that is better than the current optimal result in all the preferred results, and if the deviation degree between the preferred result of the computing node and the current optimal result is smaller than the preset threshold, the computing node may perform heuristic search using the preferred result of the computing node to obtain a plurality of search results. If the deviation degree between the optimal result of the computing node and the current optimal result is not smaller than the preset threshold value, the computing node can use the current optimal result to perform heuristic search to obtain a plurality of search results.
For example, after receiving the preferred results R21, R22, R23, R24, and R25, the sink node issues the current optimal result R21 to the computing node 5 because the preferred result R21 is a preferred result better than the current optimal result R15. After the computing node 5 receives the current optimal result R21, because the current situation is that there is a preferred result (i.e., R21) that is better than the current optimal result in all the preferred results, the computing node 5 performs a heuristic search using the current optimal result R21 to obtain a plurality of search results.
For another example, after receiving the preferred results R31, R32, R33, R34, and R35, the sink node issues the current optimal result R21 to the computing node 5 because there is no preferred result that is better than the current optimal result R21. After the computing node 5 receives the current optimal result R21, because the current situation is that there is no optimal result that is better than the current optimal result in all the optimal results, the computing node 5 performs heuristic search using the current optimal result R21 to obtain a plurality of search results, or the computing node 5 performs heuristic search using the optimal result R35 of the computing node 5 to obtain a plurality of search results.
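The seed choice made by a computing node, as described above, might look like the following sketch; deviation_between and closeness_threshold are assumed helpers, not named in the patent.

```python
def choose_search_seed(current_best, own_preferred, improved,
                       deviation_between, closeness_threshold):
    """Pick the result a computing node searches from in the next round."""
    if improved:
        # Some preferred result beat the current optimal result: search from it.
        return current_best
    if deviation_between(own_preferred, current_best) < closeness_threshold:
        # No improvement this round and the node's own preferred result is
        # close to the current optimal result: keep searching from its own result.
        return own_preferred
    return current_best
```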
In the above process, a computing node determines a preferred result from a plurality of search results, and the sink node compares preferred results with the current optimal result and with the search target value. To implement this, the search results may be evaluated using an evaluation function (the choice of evaluation function is not restricted here), and the evaluation function gives the search target value. When search results are evaluated with the evaluation function (the current optimal result and the preferred results are themselves search results selected from a plurality of search results), a search result is better the closer its evaluation is to the search target value.
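For concreteness, the evaluation-based comparison could be sketched as below; the quadratic evaluation function and the target value 0.0 are stand-ins chosen only for illustration, since the patent leaves the evaluation function to the implementer.

```python
SEARCH_TARGET_VALUE = 0.0

def evaluate(result):
    return result * result  # stand-in evaluation function

def deviation(result):
    """Degree of deviation between the evaluation of a result and the target."""
    return abs(evaluate(result) - SEARCH_TARGET_VALUE)

def better(a, b):
    """A result is better when its evaluation is closer to the search target value."""
    return deviation(a) < deviation(b)
```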
Based on the technical scheme, the heuristic search task can be distributed to the plurality of computing nodes by effectively utilizing the distributed computing advantages, and each computing node carries out heuristic search, so that the performance of the heuristic search can be improved, and the efficiency of the heuristic search can be improved.
Based on the same inventive concept as the method, the embodiment of the invention also provides a searching device, and the searching device is applied to the computing equipment. The searching device can be implemented by software, or by hardware, or by a combination of hardware and software. A logical means, for example implemented in software, is formed by a processor of the computing device in which it resides reading corresponding computer program instructions in the non-volatile memory. From a hardware aspect, as shown in fig. 3, which is a hardware structure diagram of a computing device where the search apparatus provided by the present invention is located, in addition to the processor and the nonvolatile memory shown in fig. 3, the computing device may further include other hardware, such as a forwarding chip, a network interface, and a memory, which are responsible for processing a packet; in terms of hardware architecture, the computing device may also be a distributed device, possibly including multiple interface cards, to facilitate extensions of message processing at the hardware level.
As shown in fig. 4, a structure diagram of a search apparatus provided by the present invention is applied to a system including a sink node and a plurality of computing nodes, where the sink node is connected to the plurality of computing nodes, each computing node is a node having CPU resources and memory resources and has a computing function, the sink node is a node having CPU resources and memory resources and has a control function, the search apparatus is applied to a computing device, and when the computing device is used as the sink node, the search apparatus specifically includes:
an acquisition module 11, configured to acquire a preferred result of each computing node;
a processing module 12, configured to select the optimal preferred result from all the preferred results as the current optimal result when there is a preferred result that is better than the current optimal result in all the preferred results, and to keep the current optimal result unchanged when there is no preferred result that is better than the current optimal result in all the preferred results;
a judging module 13, configured to judge whether the specified parameter meets a set threshold;
a sending module 14, configured to send the current optimal result to each computing node when the determination result is negative, so that each computing node performs heuristic search using the current optimal result to obtain multiple search results, determine a preferred result from the multiple search results, and send the preferred result to a sink node;
and the output module 15 is configured to output the current optimal result when the determination result is yes.
The sending module 14 is further configured to send a total running time to each of the computing nodes after sending the current optimal result to each of the computing nodes, so that each of the computing nodes determines a preferred result from the plurality of search results after a heuristic search time reaches the total running time; or,
sending the cycle running times to each computing node, so that each computing node determines a preferred result from a plurality of search results after the heuristic search times reach the cycle running times; or,
and sending the total running time and the cycle running times to each computing node, so that each computing node determines a preferred result from a plurality of search results after heuristic search time reaches the total running time or heuristic search times reach the cycle running times.
The processing module 12 is further configured to, after selecting the optimal preferred result from all the preferred results as the current optimal result and before sending the total running time to each of the computing nodes, increase the total running time of the local record when there is no preferred result that is better than the current optimal result in all the preferred results, and update the increased total running time to the total running time of the local record; and keep the total running time of the local record unchanged when there is a preferred result that is better than the current optimal result in all the preferred results.
When the computing device serves as a computing node, the search device further comprises a search module (not shown in the figure), wherein: the search module is configured to perform a heuristic search using the current optimal result to obtain a plurality of search results when there is a preferred result that is better than the current optimal result in all the preferred results; or to perform a heuristic search using the current optimal result to obtain a plurality of search results when there is no preferred result that is better than the current optimal result in all the preferred results; or to perform a heuristic search using the preferred result of the computing node to obtain a plurality of search results.
The judging module 13 is specifically configured to, in the process of judging whether the specified parameter meets the set threshold, determine that the set threshold is met when the degree of deviation between the current optimal result and the search target value is smaller than a preset first threshold, and otherwise determine that the set threshold is not met; or count the number of supersteps that have currently been executed, determine that the set threshold is met if the number of supersteps is larger than a preset second threshold, and otherwise determine that the set threshold is not met; or count the search time that has currently been spent, determine that the set threshold is met if the search time is greater than a preset third threshold, and otherwise determine that the set threshold is not met.
The modules of the device can be integrated into a whole or can be separately deployed. The modules can be combined into one module, and can also be further split into a plurality of sub-modules.
Through the above description of the embodiments, those skilled in the art will clearly understand that the present invention may be implemented by software plus a necessary general hardware platform, and certainly may also be implemented by hardware, but in many cases, the former is a better embodiment. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute the methods according to the embodiments of the present invention. Those skilled in the art will appreciate that the drawings are merely schematic representations of one preferred embodiment and that the blocks or flow diagrams in the drawings are not necessarily required to practice the present invention.
Those skilled in the art will appreciate that the modules in the devices in the embodiments may be distributed in the devices in the embodiments according to the description of the embodiments, and may be correspondingly changed in one or more devices different from the embodiments. The modules of the above embodiments may be combined into one module, or further split into multiple sub-modules. The above-mentioned serial numbers of the embodiments of the present invention are only for description, and do not represent the advantages and disadvantages of the embodiments.
The above disclosure is only for a few specific embodiments of the present invention, however, the present invention is not limited to the above embodiments, and any variations that can be considered by those skilled in the art are within the scope of the present invention.

Claims (14)

1. A search system, the system comprising: a sink node and a plurality of computing nodes; the sink node is connected with each of the plurality of computing nodes, each computing node is a node with CPU resources and memory resources and has a computing function, and the sink node is a node with CPU resources and memory resources and has a control function;
the sink node is configured to acquire a preferred result of each computing node; if there is a preferred result that is better than the current optimal result in all the preferred results, select the optimal preferred result from all the preferred results as the current optimal result; otherwise, keep the current optimal result unchanged; wherein the quality of a preferred result is inversely related to its specified degree of deviation, the specified degree of deviation being the degree of deviation between an evaluation result, obtained by evaluating the preferred result with a preset evaluation function, and a preset search target value, and the preset search target value being given by the preset evaluation function; judge whether a specified parameter meets a set threshold; if yes, output the current optimal result; otherwise, send the current optimal result to each computing node, so that each computing node performs a heuristic search using the current optimal result to obtain a plurality of search results, determines a preferred result from the plurality of search results, and sends the preferred result to the sink node;
each computing node is configured to perform heuristic search using the current optimal result to obtain a plurality of search results, determine an optimal result from the plurality of search results, and send the optimal result to the sink node.
2. The system of claim 1,
the sink node is further configured to send the total running time to each of the computing nodes after sending the current optimal result to each of the computing nodes;
each computing node is further used for determining a preferred result from the plurality of search results after the heuristic search time reaches the total operation time;
or,
the sink node is further configured to send the number of times of loop operation to each of the computing nodes after sending the current optimal result to each of the computing nodes;
each computing node is further configured to determine a preferred result from the plurality of search results after the heuristic search times reach the cycle operation times;
or,
the sink node is further configured to send the total running time and the number of times of the loop running to each of the computing nodes after sending the current optimal result to each of the computing nodes;
each computing node is further used for determining a preferred result from the plurality of search results after heuristic search time reaches the total operation time or heuristic search times reaches the cycle operation times.
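(Illustrative note, not part of the claims.) Read as pseudocode, claim 2 lets a computing node stop a search round by elapsed time, by iteration count, or by whichever of the two limits is hit first. The hedged sketch below assumes callables `search_step` and `deviation` (for example, those from the sketch after claim 1); the function name and signature are not taken from the patent.

```python
import time

def compute_node_round(search_step, current_best, deviation,
                       total_running_time=None, loop_runs=None):
    # At least one limit must have been sent by the sink node.
    assert total_running_time is not None or loop_runs is not None
    results = []
    start = time.monotonic()
    while True:
        # One heuristic search step seeded with the current optimal result.
        results.append(search_step(current_best))
        # Stop when the heuristic search time reaches the total running time ...
        if total_running_time is not None and time.monotonic() - start >= total_running_time:
            break
        # ... or when the number of heuristic searches reaches the number of loop runs.
        if loop_runs is not None and len(results) >= loop_runs:
            break
    return min(results, key=deviation)  # the node's preferred result for this round
```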
3. The system of claim 2, wherein
the sink node is further configured to, after selecting the best preferred result from all the preferred results as the current optimal result and before sending the total running time to each computing node: if no preferred result among all the preferred results is better than the current optimal result, increase the locally recorded total running time and update the locally recorded total running time to the increased value; and if a preferred result among all the preferred results is better than the current optimal result, keep the locally recorded total running time unchanged.
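(Illustrative note, not part of the claims.) One possible reading of claim 3 in code form; the 1.5x growth factor and the function signature are illustrative assumptions, not values taken from the claims.

```python
def adjust_total_running_time(total_running_time, round_improved, growth=1.5):
    # If no preferred result beat the current optimal result in this round,
    # lengthen the locally recorded total running time for the next round;
    # otherwise keep it unchanged.
    if not round_improved:
        total_running_time *= growth
    return total_running_time
```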
4. The system of claim 1, wherein
each computing node, when performing a heuristic search using the current optimal result to obtain a plurality of search results, is specifically configured to: if a preferred result among all the preferred results is better than the current optimal result, perform a heuristic search using the current optimal result to obtain a plurality of search results; or, if no preferred result among all the preferred results is better than the current optimal result, perform a heuristic search using either the current optimal result or the computing node's own preferred result to obtain a plurality of search results.
5. The system of claim 1, wherein
the sink node, when determining whether the specified parameter satisfies the preset threshold, is specifically configured to: determine that the preset threshold is satisfied if the degree of deviation between the current optimal result and the preset search target value is smaller than a preset first threshold, and otherwise determine that the preset threshold is not satisfied; or, count the number of supersteps executed so far, determine that the preset threshold is satisfied if the number of supersteps is greater than a preset second threshold, and otherwise determine that the preset threshold is not satisfied; or, count the search time elapsed so far, determine that the preset threshold is satisfied if the search time is greater than a preset third threshold, and otherwise determine that the preset threshold is not satisfied; wherein the number of supersteps refers to the number of supersteps executed, and a superstep refers to the period from the time the sink node sends the current optimal result to each computing node to the time the sink node obtains the current optimal result.
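(Illustrative note, not part of the claims.) The three alternative stopping tests in claim 5 can be expressed compactly. The sketch below assumes a `deviation` helper like the one in the claim-1 sketch and treats the three thresholds as mutually exclusive configuration options; the names are assumptions.

```python
def threshold_satisfied(current_best, deviation, supersteps_executed, elapsed_search_time,
                        first_threshold=None, second_threshold=None, third_threshold=None):
    # One of the three claim-5 criteria is selected by passing its threshold:
    # deviation from the preset search target value, number of supersteps
    # executed, or elapsed search time.
    if first_threshold is not None:
        return deviation(current_best) < first_threshold
    if second_threshold is not None:
        return supersteps_executed > second_threshold
    if third_threshold is not None:
        return elapsed_search_time > third_threshold
    return False
```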
6. A data search method, applied to a system comprising a sink node and a plurality of computing nodes, wherein the sink node is connected to each of the computing nodes, each computing node is a node that has CPU and memory resources and provides a computing function, and the sink node is a node that has CPU and memory resources and provides a control function; the method is executed by the sink node and comprises:
acquiring original data, splitting the original data into a plurality of pieces of sub-data, and distributing each piece of sub-data to a computing node, so that each computing node performs a heuristic search using the sub-data it receives to obtain a plurality of search results and determines a preferred result from the plurality of search results;
acquiring the preferred result of each computing node;
if a preferred result that is better than the current optimal result exists among all the preferred results, selecting the best preferred result from all the preferred results as the current optimal result; otherwise, keeping the current optimal result unchanged; wherein the excellence of a preferred result is inversely related to a specified degree of deviation, the specified degree of deviation being the degree of deviation between an evaluation result, obtained by evaluating the preferred result with a preset evaluation function, and a preset search target value, the preset search target value being obtained through the preset evaluation function;
determining whether a specified parameter satisfies a preset threshold;
if so, outputting the current optimal result;
if not, sending the current optimal result to each computing node, so that each computing node performs a heuristic search using the current optimal result to obtain a plurality of search results, determines a preferred result from the plurality of search results, and sends the preferred result to the sink node; and returning to the step of acquiring the preferred result of each computing node.
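(Illustrative note, not part of the claims.) Claim 6 adds an initial step in which the sink node splits the original data and hands one piece to each computing node before the loop of claim 1 starts. A minimal sketch follows, assuming the original data is an indexable sequence and each node exposes a `sub_data` attribute; both are assumptions made only for the example.

```python
def split_and_distribute(original_data, nodes):
    # Split the original data into roughly equal pieces of sub-data and
    # distribute one piece to each computing node; each node then performs
    # its heuristic search over its own piece only.
    chunk = (len(original_data) + len(nodes) - 1) // len(nodes)
    for i, node in enumerate(nodes):
        node.sub_data = original_data[i * chunk:(i + 1) * chunk]
```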
7. The method of claim 6, wherein after sending the current optimal result to each of the computing nodes, the method further comprises:
sending a total running time to each computing node, so that each computing node determines a preferred result from the plurality of search results after its heuristic search time reaches the total running time; or,
sending a number of loop runs to each computing node, so that each computing node determines a preferred result from the plurality of search results after its number of heuristic searches reaches the number of loop runs; or,
sending both the total running time and the number of loop runs to each computing node, so that each computing node determines a preferred result from the plurality of search results after its heuristic search time reaches the total running time or its number of heuristic searches reaches the number of loop runs.
8. The method of claim 7, wherein after selecting the best preferred result from all the preferred results as the current optimal result and before sending the total running time to each of the computing nodes, the method further comprises:
if no preferred result among all the preferred results is better than the current optimal result, increasing the locally recorded total running time and updating the locally recorded total running time to the increased value;
if a preferred result among all the preferred results is better than the current optimal result, keeping the locally recorded total running time unchanged.
9. The method of claim 7, wherein the process of a computing node performing a heuristic search using the current optimal result to obtain a plurality of search results comprises:
if a preferred result among all the preferred results is better than the current optimal result, the computing node performs a heuristic search using the current optimal result to obtain a plurality of search results; or,
if no preferred result among all the preferred results is better than the current optimal result, the computing node performs a heuristic search using either the current optimal result or its own preferred result to obtain a plurality of search results.
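(Illustrative note, not part of the claims.) Claim 9, like claim 4, leaves the computing node a choice of search seed when the last round brought no improvement. A hedged sketch of that choice; the `use_own_preferred` flag is an assumption used only to make both branches explicit.

```python
def choose_search_seed(round_improved, current_best, own_preferred, use_own_preferred=False):
    # If some preferred result beat the current optimal result, seed the next
    # heuristic search from that optimum; otherwise the node may seed either
    # from the current optimum or from its own preferred result.
    if round_improved:
        return current_best
    return own_preferred if use_own_preferred else current_best
```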
10. The method of claim 7, wherein determining whether the specified parameter satisfies the preset threshold comprises:
if the degree of deviation between the current optimal result and the preset search target value is smaller than a preset first threshold, determining that the preset threshold is satisfied; otherwise, determining that the preset threshold is not satisfied; or,
counting the number of supersteps executed so far; if the number of supersteps is greater than a preset second threshold, determining that the preset threshold is satisfied; otherwise, determining that the preset threshold is not satisfied; or,
counting the search time elapsed so far; if the search time is greater than a preset third threshold, determining that the preset threshold is satisfied; otherwise, determining that the preset threshold is not satisfied;
wherein the number of supersteps refers to the number of supersteps executed, and a superstep refers to the period from the time the sink node sends the current optimal result to each computing node to the time the sink node obtains the current optimal result.
11. A running time determination method, applied to a system comprising a sink node and a plurality of computing nodes, wherein the sink node acquires the preferred result of each computing node; if a preferred result that is better than the current optimal result exists among all the preferred results, the best preferred result is selected from all the preferred results as the current optimal result; otherwise, the current optimal result is kept unchanged; when it is determined that a specified parameter does not satisfy a preset threshold, the current optimal result and a running time are sent to each computing node; each computing node performs a heuristic search using the current optimal result for a search duration equal to the running time, determines a preferred result from the plurality of search results obtained, and returns the preferred result to the sink node;
the method is executed by the sink node and comprises:
after the preferred results of a computation round are acquired, comparing the excellence of the preferred result of each computing node with that of the current optimal result; wherein the excellence of a preferred result is inversely related to a specified degree of deviation, the specified degree of deviation being the degree of deviation between an evaluation result, obtained by evaluating the preferred result with a preset evaluation function, and a preset search target value, the preset search target value being obtained through the preset evaluation function;
determining, based on the comparison result, the running time for which each computing node performs the heuristic search.
12. The method of claim 11, wherein determining, based on the comparison result, the running time for which each computing node performs the heuristic search comprises:
if the comparison result indicates that no preferred result among all the preferred results is better than the current optimal result, increasing the locally recorded total running time and updating the locally recorded total running time to the increased value;
if the comparison result indicates that a preferred result among all the preferred results is better than the current optimal result, keeping the locally recorded total running time unchanged.
13. The method of claim 12, wherein the sink node is configured with a non-evolution count, and the value of the non-evolution count is used to indicate the comparison result;
the comparison result is determined as follows:
detecting whether the value of the non-evolution count is zero;
if so, determining that a preferred result better than the current optimal result exists among all the preferred results;
if not, determining that no preferred result better than the current optimal result exists among all the preferred results.
14. The method of claim 13, wherein the non-evolution count is updated as follows:
after the preferred result of each computing node is compared with the current optimal result in terms of excellence, if a preferred result better than the current optimal result exists among all the preferred results, setting the value of the non-evolution count to zero;
if no preferred result better than the current optimal result exists among all the preferred results, increasing the value of the non-evolution count.
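(Illustrative note, not part of the claims.) Claims 11 to 14 tie the running-time adjustment to a non-evolution count kept at the sink node. The sketch below is one way to read them together; the growth factor, the `round_improved` flag, and the tuple-returning signature are illustrative assumptions.

```python
def update_after_comparison(total_running_time, non_evolution_count,
                            round_improved, growth=1.5):
    # Claims 13-14: reset the non-evolution count to zero when some preferred
    # result is better than the current optimal result, otherwise increase it.
    non_evolution_count = 0 if round_improved else non_evolution_count + 1
    # Claims 11-12: when no improvement occurred this round (count is non-zero),
    # increase the locally recorded total running time before the next round.
    if non_evolution_count > 0:
        total_running_time *= growth
    return total_running_time, non_evolution_count
```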
CN201911281045.XA 2016-08-03 2016-08-03 Search system, data search method and operation time determination method Active CN111026713B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911281045.XA CN111026713B (en) 2016-08-03 2016-08-03 Search system, data search method and operation time determination method

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911281045.XA CN111026713B (en) 2016-08-03 2016-08-03 Search system, data search method and operation time determination method
CN201610628370.9A CN106227878B (en) 2016-08-03 2016-08-03 Searching method and device

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN201610628370.9A Division CN106227878B (en) 2016-08-03 2016-08-03 Searching method and device

Publications (2)

Publication Number Publication Date
CN111026713A CN111026713A (en) 2020-04-17
CN111026713B true CN111026713B (en) 2023-03-31

Family

ID=57535810

Family Applications (2)

Application Number Title Priority Date Filing Date
CN201610628370.9A Active CN106227878B (en) 2016-08-03 2016-08-03 Searching method and device
CN201911281045.XA Active CN111026713B (en) 2016-08-03 2016-08-03 Search system, data search method and operation time determination method

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN201610628370.9A Active CN106227878B (en) 2016-08-03 2016-08-03 Searching method and device

Country Status (1)

Country Link
CN (2) CN106227878B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107480231A (en) * 2017-08-04 2017-12-15 深圳大学 Heuristic expansion search extension algorithm based on the track inquiry with sequence interest region

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2509496A1 (en) * 2005-06-06 2006-12-06 3618633 Canada Inc. Search-enhanced trie-based syntactic pattern recognition of sequences
US7496683B2 (en) * 2006-07-27 2009-02-24 International Business Machines Corporation Maximization of sustained throughput of distributed continuous queries
CN101996102B (en) * 2009-08-31 2013-07-17 中国移动通信集团公司 Method and system for mining data association rule
CN102486739B (en) * 2009-11-30 2015-03-25 国际商业机器公司 Method and system for distributing data in high-performance computer cluster
US9367108B2 (en) * 2012-06-28 2016-06-14 Nec Corporation Reduction of operational cost using energy storage management and demand response
CN103581225A (en) * 2012-07-25 2014-02-12 中国银联股份有限公司 Distributed system node processing task method
CN104504147B (en) * 2015-01-04 2018-04-10 华为技术有限公司 A kind of resource coordination method of data-base cluster, apparatus and system
CN105426489A (en) * 2015-11-23 2016-03-23 宁波数方信息技术有限公司 Memory calculation based distributed expandable data search system

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101211342A (en) * 2006-12-29 2008-07-02 上海芯盛电子科技有限公司 Concurrent type frog jump heuristic search algorithm
CN103164495A (en) * 2011-12-19 2013-06-19 中国人民解放军63928部队 Half-connection inquiry optimizing method based on periphery searching and system thereof
CN102820662A (en) * 2012-08-17 2012-12-12 华北电力大学 Distributed power source contained power system multi-target reactive-power optimization method
CN103646035A (en) * 2013-11-14 2014-03-19 北京锐安科技有限公司 Information search method based on heuristic method
CN104199878A (en) * 2014-08-21 2014-12-10 西安闻泰电子科技有限公司 Game engine shortest path search method and game engine system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Zhang Wei; Li Xianxian. Grid information query optimization based on heuristic search algorithms. Computer Engineering, No. 19, full text. *
Jiang He, Zhou Zhi, Zou Peng, Chen Guoliang. A new macro-heuristic algorithm with union search for solving the TSP. Journal of University of Science and Technology of China, No. 03, full text. *

Also Published As

Publication number Publication date
CN106227878B (en) 2020-01-14
CN111026713A (en) 2020-04-17
CN106227878A (en) 2016-12-14

Similar Documents

Publication Publication Date Title
US11016673B2 (en) Optimizing serverless computing using a distributed computing framework
WO2021011088A1 (en) Automated generation of machine learning models for network evaluation
CN106878194B (en) Message processing method and device
US10303135B2 (en) Method and apparatus for controlling smart home device
Kochovski et al. An architecture and stochastic method for database container placement in the edge-fog-cloud continuum
WO2017157160A1 (en) Data table joining mode processing method and apparatus
Yin et al. Experiences with ml-driven design: A noc case study
CN110162270A (en) Date storage method, memory node and medium based on distributed memory system
WO2020095313A1 (en) Managing computation load in a fog network
CN102385536A (en) Method and system for realization of parallel computing
CN113162888B (en) Security threat event processing method and device and computer storage medium
JP2017117449A (en) Data flow programming of computing apparatus with vector estimation-based graph partitioning
CN111026713B (en) Search system, data search method and operation time determination method
US9880923B2 (en) Model checking device for distributed environment model, model checking method for distributed environment model, and medium
CN107920067B (en) Intrusion detection method on active object storage system
Wei et al. Sequential testing policies for complex systems under precedence constraints
WO2011114135A1 (en) Detecting at least one community in a network
CN108563489A (en) A kind of virtual machine migration method and system of data center's total management system
Li et al. Active learning for causal Bayesian network structure with non-symmetrical entropy
CN110021166B (en) Method and device for processing user travel data and computing equipment
CN113238855A (en) Path detection method and device
Souza et al. Ranking strategies for quality-aware service selection
CN111476663B (en) Data processing method and device, node equipment and storage medium
CN113377688B (en) L1 cache sharing method for GPU
CN109727135A (en) Promote method, the computer-readable medium of the operation of block chain information and processing capacity

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant