WO2021134810A1 - 支撑点并行枚举负载均衡方法、装置、设备及介质 - Google Patents


Info

Publication number
WO2021134810A1
WO2021134810A1 (PCT/CN2020/071011)
Authority
WO
WIPO (PCT)
Prior art keywords
parameters
network
training
load
state parameter
Prior art date
Application number
PCT/CN2020/071011
Other languages
English (en)
French (fr)
Inventor
毛睿
胡梓良
陆敏华
Original Assignee
深圳大学
Priority date
Filing date
Publication date
Application filed by 深圳大学 filed Critical 深圳大学
Publication of WO2021134810A1 publication Critical patent/WO2021134810A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5083Techniques for rebalancing the load in a distributed system
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Definitions

  • This application relates to the field of big data mining, and in particular to a support point parallel enumeration load balancing method, device, equipment, and medium.
  • IOPEA (Parallel Pivot Enumeration Algorithm on I/O Multiplexing) is a support point (pivot) parallel enumeration algorithm based on I/O multiplexing.
  • In IOPEA, the data to be calculated is divided into many parts at the start of the program.
  • After tasks are distributed to CPU cores or GPU cards, select is used to continuously poll and monitor the handles of the multiple sub-processes; once any sub-process completes its calculation and returns a result, the next calculation task is dispatched to it.
  • Through this polling mechanism, idle computing resources can be found promptly and tasks dispatched to them, so that hardware resources are fully utilized; that is, under multi-way parallelism, the I/O multiplexing method allows unused computing resources to be discovered in time.
  • However, this strategy has an obvious shortcoming: it must use I/O multiplexing to continuously scan the sub-processes and monitor read events in blocking mode to determine whether each calculation has finished, which wastes resources.
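The select-based polling just described can be sketched as follows (a minimal illustration on POSIX Python, not the actual IOPEA implementation; the worker squares its input as a stand-in for a real distance computation):

```python
import multiprocessing as mp
import select

def worker(conn):
    # Each sub-process blocks on its pipe, computes, and sends the result back.
    while True:
        task = conn.recv()
        if task is None:  # sentinel: no more tasks
            break
        conn.send(task * task)  # stand-in for a real distance computation

def dispatch(tasks, n_workers=2):
    # Parent side of IOPEA-style dispatch: poll all worker pipes with select()
    # and hand the next task to whichever sub-process finishes first.
    parents, procs = [], []
    for _ in range(n_workers):
        parent, child = mp.Pipe()
        proc = mp.Process(target=worker, args=(child,))
        proc.start()
        parents.append(parent)
        procs.append(proc)
    pending = list(tasks)
    results = []
    for conn in parents:  # prime each worker with one task
        if pending:
            conn.send(pending.pop(0))
    in_flight = min(len(tasks), n_workers)
    while in_flight:
        # select() blocks until at least one sub-process has a result ready.
        ready, _, _ = select.select(parents, [], [])
        for conn in ready:
            results.append(conn.recv())
            in_flight -= 1
            if pending:  # immediately dispatch the next task to the idle worker
                conn.send(pending.pop(0))
                in_flight += 1
    for conn in parents:
        conn.send(None)
    for proc in procs:
        proc.join()
    return sorted(results)
```

Note that even here the dispatcher blocks scanning read events on every pipe, which is exactly the resource-waste shortcoming the application addresses.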
  • In view of the above, this application is proposed in order to provide a support point parallel enumeration load balancing method, device, equipment, and medium that overcome, or at least partially solve, the above problems, including:
  • a support point parallel enumeration load balancing method involves predicting the load situation of a node, and is characterized in that it includes:
  • determining the current load situation corresponding to the state parameter includes: determining the load situation that corresponds, in the correspondence relationship, to the state parameter identical to the current state parameter as the current load situation;
  • the processing tasks of the sub-process are increased or decreased.
  • the state parameters include: task parameters and/or performance parameters, and/or a one-dimensional or two-dimensional array composed of features extracted from the task parameters and the performance parameters according to a set rule; wherein,
  • the task parameters include: at least one of texture data, distance data, and quantity data;
  • the performance parameters include: mask heat map parameters, which specifically include at least one of the number of hot spots, the hot spot color type, and the hot spot color depth;
  • the corresponding relationship includes: functional relationship
  • the state parameter is an input parameter of the functional relationship
  • the load condition is an output parameter of the functional relationship
  • the step of determining the current load situation corresponding to the current state parameter through the corresponding relationship includes:
  • the corresponding relationship includes a functional relationship
  • the current state parameter is input into the functional relationship, and the output parameter of the functional relationship is determined to be the current load condition.
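As a minimal illustration of the two forms of the correspondence (the helper below is hypothetical, not from the application): the table form matches an identical state parameter, while the functional form takes the state parameter as its input parameter and returns the load situation as its output parameter:

```python
def current_load(table, current_state, func=None):
    # Table form of the correspondence: return the load situation recorded
    # for the state parameter identical to the current state parameter.
    if current_state in table:
        return table[current_state]
    # Functional form: the state parameter is the input parameter and the
    # load situation is the output parameter of the functional relationship.
    if func is not None:
        return func(current_state)
    raise KeyError("no matching state parameter in the correspondence")
```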
  • step of establishing the corresponding relationship between the state parameter of the node and the load condition of the node includes:
  • step of obtaining sample data used to establish the correspondence between the state parameter and the load situation includes:
  • the data pair formed by the load condition and the selected state parameter is used as sample data.
  • the network structure includes: at least one of a BP neural network, a CNN neural network, an RNN neural network, a residual neural network, and a multi-stage recurrent neural network;
  • the network parameters include: at least one of the number of input nodes, the number of output nodes, the number of hidden layers, the number of hidden nodes, the number of dense blocks, the number of output layers, the number of convolutional layers, the number of transition layers, the initial weights, and the bias values.
  • training the network structure and the network parameters includes:
  • Part of the data in the sample data is selected as training samples; the state parameters in the training samples are input to the network structure, and training is performed through the activation function of the network structure and the network parameters to obtain the actual training results;
  • Testing the network structure and the network parameters includes:
  • training the network structure and the network parameters further includes:
  • Testing the network structure and the network parameters also includes:
  • the network structure and the network parameters are retrained until the actual test error after the retraining meets the set test error.
  • a supporting point parallel enumeration load balancing device includes:
  • the establishment module is used to use the self-learning ability of the artificial neural network to establish the corresponding relationship between the state parameters of the node and the load situation of the node;
  • the acquisition module is used to acquire the current state parameters of the node
  • the determining module is configured to determine the current load situation corresponding to the current state parameter through the correspondence; specifically, determining the current load situation corresponding to the state parameter includes: determining the load situation that corresponds, in the correspondence, to the state parameter identical to the current state parameter as the current load situation;
  • the task allocation module is used to increase or decrease the processing tasks of the sub-processes according to the current load situation.
  • a device comprising a processor, a memory, and a computer program stored on the memory and capable of running on the processor, and the computer program is executed by the processor to realize the parallel enumeration of support points as described above. The steps of the load balancing method.
  • a computer-readable storage medium stores a computer program on the computer-readable storage medium, and when the computer program is executed by a processor, the steps of the support point parallel enumeration load balancing method described above are realized.
  • the correspondence between the state parameters of the node and the load situations of the child processes in the node is established; the current state parameters of the node are obtained; the current load situation corresponding to the current state parameters is determined through the correspondence; specifically, determining the current load situation corresponding to the state parameter includes: determining the load situation that corresponds, in the correspondence, to the state parameter identical to the current state parameter as the current load situation; and according to the current load situation, the processing tasks of the sub-processes are increased or decreased.
  • in this way, the computing resources of heterogeneous parallel platforms can be utilized more fully, and the optimal support point combination under a large amount of data can be found, ensuring that the task loads assigned to the same node are close, thereby alleviating the load balancing problem.
  • Fig. 1 is a flow chart of the steps of a load balancing method for parallel enumeration of support points provided by an embodiment of the present application;
  • FIG. 2a is a schematic diagram of a type-texture data structure of a support point parallel enumeration load balancing method provided by an embodiment of the present application;
  • 2b is a schematic diagram of a type 2 texture data structure of a support point parallel enumeration load balancing method provided by an embodiment of the present application;
  • 2c is a schematic diagram of the type three texture data structure of a support point parallel enumeration load balancing method provided by an embodiment of the present application;
  • FIG. 2d is a schematic diagram of a type four texture data structure of a support point parallel enumeration load balancing method provided by an embodiment of the present application;
  • 2e is a heat diagram of distance calculation times of a load balancing method for parallel enumeration of support points provided by an embodiment of the present application;
  • 2f is a heat diagram of distance calculation times of a load balancing method for parallel enumeration of support points provided by an embodiment of the present application;
  • FIG. 2g is a heat diagram of distance calculation times of a load balancing method for parallel enumeration of support points provided by an embodiment of the present application;
  • 2h is a heat diagram of distance calculation times of a load balancing method for parallel enumeration of support points provided by an embodiment of the present application;
  • 2i is a heat diagram of the number of calculations required for the support point combination of a support point parallel enumeration load balancing method provided by an embodiment of the present application;
  • FIG. 2j is a heat diagram of the number of calculations required for the support point combination of a support point parallel enumeration load balancing method provided by an embodiment of the present application;
  • 2k is a heat diagram of the number of calculations required for the support point combination of a support point parallel enumeration load balancing method provided by an embodiment of the present application;
  • FIG. 2l is a heat diagram of the number of calculation times required for a support point combination of a support point parallel enumeration load balancing method provided by an embodiment of the present application;
  • FIG. 3 is a schematic diagram of a global task stealing and redistribution process of a support point parallel enumeration load balancing method provided by an embodiment of the present application;
  • FIG. 4 is a structural block diagram of a support point parallel enumeration load balancing device provided by an embodiment of the present application
  • Fig. 5 is a schematic structural diagram of a computer device according to an embodiment of the present invention.
  • the method involves predicting the load situation of a node, and includes:
  • the correspondence between the state parameters of the node and the load situations of the child processes in the node is established; the current state parameters of the node are obtained; the current load situation corresponding to the current state parameters is determined through the correspondence; specifically, determining the current load situation corresponding to the state parameter includes: determining the load situation that corresponds, in the correspondence, to the state parameter identical to the current state parameter as the current load situation; and according to the current load situation, the processing tasks of the sub-processes are increased or decreased.
  • in this way, the computing resources of heterogeneous parallel platforms can be utilized more fully, and the optimal support point combination under a large amount of data can be found, ensuring that the task loads assigned to the same node are close, thereby alleviating the load balancing problem.
  • the self-learning ability of the artificial neural network is used to establish the correspondence between the state parameters of the node and the load condition of the node.
  • the artificial neural network algorithm can be used to collect the state parameters of a large number of different types of nodes (including but not limited to one or more of the following, etc.), and the state parameters and load situations of several nodes are selected as sample data to learn and train the neural network; by adjusting the network structure and the weights between network nodes, the neural network is made to fit the relationship between the state parameters and load situations of the nodes, so that finally the neural network can accurately fit the correspondence between the state parameters and load situations of different nodes.
  • nodes including but not limited to one or more of the following: etc.
  • the state parameters include: task parameters and/or performance parameters, and/or an array of one, two, or more dimensions composed of features extracted from the task parameters and the performance parameters according to a set rule;
  • the task parameters include: at least one of the number of tasks, the number of subprocesses, and the way of task division;
  • the performance parameters include: sub-process calculation speed;
  • the above data is converted into an image sequence for display and analysis.
  • the calculated support point performance data is used to generate a corresponding mask heat map.
  • the images generated from the number of tasks, the number of sub-processes, and the task division method include four types of texture data, as shown in Figure 2a to Figure 2d, and several images are stored in each category.
  • the size of each image is 512*512, and the corresponding heat maps generated from the number of distance calculations are shown in Figure 2e to Figure 2l.
  • in Figures 2i to 2l, the deeper an area, the greater the number of calculations required for the support point combination corresponding to that area; in Figures 2e to 2h, the deeper an area, the smaller the number of distance calculations.
  • Figure 2e corresponds to Figure 2i
  • Figure 2f corresponds to Figure 2j
  • Figure 2g corresponds to Figure 2k
  • Figure 2h corresponds to Figure 2l
  • a hot spot in Figures 2i to 2l means that the support point combination sequence constituting that image performs worse, that is, it takes more time in the actual computing task.
  • the hot spots in Figures 2i to 2l are called the key areas that the network needs to analyze, locate, and predict; the number of key areas and the color depth in the heat map directly affect the distribution of subsequent tasks.
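A minimal sketch of deriving such a heat map and its key areas from a matrix of distance-calculation counts (hypothetical helper names; the application itself renders 512*512 images):

```python
import numpy as np

def key_areas(counts, quantile=0.9):
    # Normalize the distance-calculation counts of a grid of support point
    # combinations into [0, 1] heat values; deeper (larger) values mean
    # more calculations, as in Figures 2i to 2l.
    counts = np.asarray(counts, dtype=float)
    span = float(counts.max() - counts.min()) or 1.0
    heat = (counts - counts.min()) / span
    # The key areas are the hot spots whose heat exceeds the given quantile.
    threshold = np.quantile(heat, quantile)
    return heat, np.argwhere(heat > threshold)
```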
  • the corresponding relationship includes: a functional relationship; the state parameter is an input parameter of the functional relationship, and the load condition is an output parameter of the functional relationship;
  • the corresponding relationship includes: a functional relationship.
  • the state parameter is an input parameter of the functional relationship
  • the load condition is an output parameter of the functional relationship
  • the specific process of "establishing the correspondence between the state parameters of the node and the load situations of the sub-processes in the node" in step S110 may be further described in conjunction with the following description.
  • data collection: collect the state parameters and corresponding load situations of nodes in different states; for example, collect the state parameters and corresponding load situations of nodes of different types, and of nodes processing different data.
  • one part of the obtained input and output parameter pairs is used as the training sample data, and the other part is used as the test sample data.
  • the operation process is simple and the reliability of the operation result is high.
  • For example: according to the data characteristics that have an impact on the load situation of the node and the laws contained therein, the basic structure of the network, the number of input and output nodes of the network, the number of hidden layers of the network, the number of hidden nodes, the initial network weights, etc., can be preliminarily determined.
  • the network structure includes: at least one of a BP neural network, a CNN neural network, an RNN neural network, a residual neural network, and a multi-stage recurrent neural network;
  • the network parameters include: at least one of the number of input nodes, the number of output nodes, the number of hidden layers, the number of hidden nodes, the number of dense blocks, the number of output layers, the number of convolutional layers, the number of transition layers, the initial weights, and the bias values.
  • the network structure preferably adopts a multi-stage recurrent neural network, wherein:
  • K n,m, the key area detected in the n-th image at the m-th recurrent stage, is obtained through the algorithm as follows: first, the data of each channel of each image is extracted, a single-channel image corresponding to each extracted channel is generated, and the regional features f n,m of each image with a larger (logarithmic) number of distance calculations are extracted.
  • the variables of the first stage take 0 as the initial value, and the key area sequence {K 1 , K 2 ,..., K N } of the last stage is taken as the final output detection value.
  • This step-by-step approach enables the network to learn the mapping of key sequences from multiple dimensions and multiple modules.
  • Extract multi-channel data and use the extracted data as the output of each stage of the network.
  • training sample data is needed to train the designed neural network.
  • the training method can be adjusted according to the actual network structure and the problems found in the training.
  • the step “Using the sample data to train and test the network structure and the network parameters to determine the corresponding relationship between the state parameter and the load situation” may be further described in conjunction with the following description.
  • a part of the sample data is selected as training samples; the state parameters in the training samples are input to the network structure, and training is performed through the activation function of the network structure and the network parameters to obtain the actual training results; it is determined whether the actual training error between the actual training results and the corresponding load situations in the training samples meets the preset training error; when the actual training error meets the preset training error, it is determined that the training of the network structure and the network parameters is completed;
  • training the network structure and the network parameters further includes:
  • when the actual training error does not meet the set training error, the network parameters are updated through the error energy function of the network structure, and retraining is performed through the activation function of the network structure and the updated network parameters until the actual training error after retraining meets the set training error;
  • the network training test is completed.
  • the network structure and network parameters obtained by using the test samples for training are tested to further verify the reliability of the network structure and network parameters.
  • the specific process of testing the network structure and the network parameters in the step "using the sample data to train and test the network structure and the network parameters to determine the correspondence between the state parameter and the load situation" may be further described in conjunction with the following description.
  • test sample another part of the sample data is selected as a test sample
  • the state parameters in the test samples are input into the trained network structure, and testing is performed through the activation function and the trained network parameters to obtain actual test results; it is determined whether the actual test error between the actual test results and the corresponding load situations in the test samples meets the set test error; when the actual test error satisfies the set test error, it is determined that the test of the network structure and the network parameters is completed.
  • in step S120, the current state parameters of the node are obtained
  • the current load situation corresponding to the current state parameter is determined through the correspondence; specifically, determining the current load situation corresponding to the state parameter includes: determining the load situation that corresponds, in the correspondence, to the state parameter identical to the current state parameter as the current load situation;
  • determining the current load situation corresponding to the current state parameter in step S130 may include: determining the load situation corresponding to the state parameter that is the same as the current state parameter in the corresponding relationship as The current load situation.
  • determining the current load condition corresponding to the state parameter in step S130 may further include: when the corresponding relationship may include a functional relationship, inputting the current state parameter into the functional relationship, The output parameter for determining the functional relationship is the current load condition.
  • the determination method is simple and the determination result is highly reliable.
  • it may further include: a process of verifying whether the current load situation is consistent with the actual load situation.
  • at least one maintenance operation among updating, correction, and re-learning may be performed on the correspondence relationship.
  • the equipment itself cannot know the actual load situation, so the operator's feedback is needed: if the equipment's intelligent judgment of the load situation does not match the actual state, the operator feeds this back through an operation, and the equipment can learn from it.
  • the actual load situation can be displayed through the AR display module to verify whether the determined current load situation is consistent with the actual load situation.
  • the current load condition can be determined according to the current state parameter according to the corresponding relationship after maintenance.
  • the load condition corresponding to the state parameter that is the same as the current state parameter in the corresponding relationship after maintenance is determined as the current load condition.
  • As described in step S140 above, according to the current load situation, the processing tasks of the sub-processes are increased or decreased.
  • the excess tasks in this part of the sub-process are divided based on the current total number of sub-processes.
  • Part of the task is re-divided into several data tasks and sent to all sub-processes to ensure the task load balance among the sub-processes.
  • the number of distance calculations is estimated by the multi-stage neural network and divided, from large to small, into several levels according to the count; task sequences of the same level are allocated to the same computing node to ensure that the task loads allocated to the same computing node are close, thereby alleviating the load balancing problem.
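The level-based allocation described above can be sketched as follows (a hypothetical helper that sorts tasks by their predicted distance-calculation counts, from large to small, and splits them into levels so that each computing node receives tasks of similar load):

```python
def assign_by_level(predicted_counts, n_levels=3):
    # Sort task indices by predicted distance-calculation count, largest
    # first, so tasks of similar cost end up adjacent in the ordering.
    order = sorted(range(len(predicted_counts)),
                   key=lambda i: predicted_counts[i], reverse=True)
    # Split the ordering into n_levels contiguous levels; each level is
    # assigned to one computing node, keeping per-node task loads close.
    size = -(-len(order) // n_levels)  # ceiling division
    return [order[k * size:(k + 1) * size] for k in range(n_levels)]
```

For example, six tasks with predicted counts [5, 100, 7, 90, 1, 80] split into three levels of two similarly loaded tasks each.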
  • the description is relatively simple, and for related parts, please refer to the part of the description of the method embodiment.
  • a support point parallel enumeration load balancing device provided by an embodiment of the present application is shown, which is characterized in that it includes:
  • the establishment module 410 is configured to use the self-learning ability of the artificial neural network to establish the corresponding relationship between the state parameters of the node and the load condition of the node;
  • the obtaining module 420 is used to obtain the current state parameters of the node
  • the determining module 430 is configured to determine the current load situation corresponding to the current state parameter through the correspondence; specifically, determining the current load situation corresponding to the state parameter includes: determining the load situation that corresponds, in the correspondence, to the state parameter identical to the current state parameter as the current load situation;
  • the task allocation module 440 is configured to increase or decrease the processing tasks of the sub-processes according to the current load situation.
  • the state parameters include: task parameters and/or performance parameters, and/or an array of one, two, or more dimensions composed of features extracted from the task parameters and the performance parameters according to a set rule;
  • the task parameters include: at least one of the number of tasks, the number of sub-processes, and the way of task division;
  • the performance parameters include: sub-process calculation speed;
  • the corresponding relationship includes: functional relationship
  • the state parameter is an input parameter of the functional relationship
  • the load condition is an output parameter of the functional relationship
  • the determining module 430 includes:
  • the function determination sub-module is configured to input the current state parameter into the function relationship when the corresponding relationship includes a function relationship, and determine that the output parameter of the function relationship is the current load condition.
  • the establishment module 410 includes:
  • the analysis sub-module is used to analyze the characteristics and laws of the state parameters, and determine the network structure and network parameters of the artificial neural network according to the characteristics and laws;
  • the training sub-module is configured to use the sample data to train and test the network structure and the network parameters, and determine the corresponding relationship between the state parameters and the load conditions.
  • the acquiring submodule includes:
  • the collection sub-module is used to collect the status parameters and load conditions of the nodes in processing different types of data
  • the analysis sub-module is used to analyze the state parameters and select data related to the load conditions as the state parameters in combination with pre-stored expert experience information;
  • the sample data generation sub-module is used to use the data pair formed by the load situation and the selected state parameters as sample data.
  • the network structure includes: at least one of a BP neural network, a CNN neural network, an RNN neural network, a residual neural network, and a multi-stage recurrent neural network;
  • the network parameters include: at least one of the number of input nodes, the number of output nodes, the number of hidden layers, the number of hidden nodes, the number of dense blocks, the number of output layers, the number of convolutional layers, the number of transition layers, the initial weights, and the bias values.
  • the training sub-module includes:
  • the training result generation sub-module is used to select a part of the sample data as training samples, input the state parameters in the training samples into the network structure, and pass the activation function of the network structure and the Network parameters are trained to obtain actual training results;
  • the training result error judgment sub-module is used to determine whether the actual training error between the actual training result and the corresponding load condition in the training sample meets the preset training error;
  • the training completion determination sub-module is configured to determine that the training of the network structure and the network parameters is completed when the actual training error meets the preset training error;
  • test submodule is used to test the network structure and the network parameters, and the test submodule includes:
  • the test result generation sub-module is used to select another part of the sample data as a test sample, and input the state parameter in the test sample into the network structure after the training, and use the activation Function and the network parameters completed by the training are tested to obtain actual test results;
  • test result error judgment sub-module is used to determine whether the actual test error between the actual test result and the corresponding load condition in the test sample meets the set test error
  • the test completion determination sub-module is configured to determine that the test of the network structure and the network parameters is completed when the actual test error meets the set test error.
  • the training sub-module further includes:
  • the network parameter update sub-module is configured to update the network parameters through the error energy function of the network structure when the actual training error does not meet the set training error;
  • the first retraining sub-module is configured to perform retraining through the activation function of the network structure and the updated network parameters until the actual training error after the retraining meets the set training error;
  • the test sub-module also includes:
  • the second retraining sub-module is used to retrain the network structure and the network parameters when the actual test error does not meet the set test error, until the actual test error after the retraining meets the set test error.
  • Referring to FIG. 5, a computer device for a support point parallel enumeration load balancing method of the present invention is shown, which may specifically include the following:
  • the above-mentioned computer device 12 is represented in the form of a general-purpose computing device.
  • the components of the computer device 12 may include, but are not limited to: one or more processors or processing units 16, a system memory 28, and a bus 18 connecting different system components (including the system memory 28 and the processing unit 16).
  • the bus 18 represents one or more of several types of bus structures, including a memory bus or memory controller, a peripheral bus, a graphics acceleration port, a processor, or a local bus using any of a variety of bus structures.
  • such architectures include, but are not limited to, the Industry Standard Architecture (ISA) bus, the Micro Channel Architecture (MCA) bus, the Enhanced ISA bus, the Video Electronics Standards Association (VESA) local bus, and the Peripheral Component Interconnect (PCI) bus.
  • the computer device 12 typically includes a variety of computer system readable media. These media can be any available media that can be accessed by the computer device 12, including volatile and nonvolatile media, removable and non-removable media.
  • the system memory 28 may include computer system readable media in the form of volatile memory, such as random access memory (RAM) 30 and/or cache memory 32.
  • the computer device 12 may further include other removable/non-removable, volatile/non-volatile computer system storage media.
  • the storage system 34 may be used to read and write non-removable, non-volatile magnetic media (commonly referred to as "hard drives").
  • although not shown, a disk drive for reading from and writing to a removable non-volatile magnetic disk (such as a "floppy disk"), and an optical disc drive for reading from and writing to a removable non-volatile optical disc (such as a CD-ROM, DVD-ROM, or other optical media), may also be provided.
  • each drive can be connected to the bus 18 through one or more data medium interfaces.
  • the memory may include at least one program product, the program product having a set (e.g., at least one) of program modules 42 configured to perform the functions of the various embodiments of the present invention.
  • a program/utility tool 40 having a set of (at least one) program module 42 may be stored in, for example, a memory.
  • Such program modules 42 include, but are not limited to, an operating system, one or more application programs, other program modules 42, and program data; each of these examples, or some combination thereof, may include an implementation of a network environment.
  • the program module 42 generally executes the functions and/or methods in the described embodiments of the present invention.
  • the computer device 12 may also communicate with one or more external devices 14 (such as a keyboard, pointing device, display 24, camera, etc.), with one or more devices that enable a user to interact with the computer device 12, and/or with any device (such as a network card, modem, etc.) that enables the computer device 12 to communicate with one or more other computing devices. This communication can be performed through an input/output (I/O) interface 22.
  • the computer device 12 may also communicate with one or more networks (such as a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet) through the network adapter 20. As shown in the figure, the network adapter 20 communicates with other modules of the computer device 12 through the bus 18.
  • the processing unit 16 executes various functional applications and data processing by running programs stored in the system memory 28, such as implementing the support point parallel enumeration load balancing method provided by the embodiment of the present invention.
  • when the above-mentioned processing unit 16 executes the above-mentioned program, it implements: using the self-learning ability of the artificial neural network to establish the correspondence between the state parameters of a node and the load conditions of the child processes in the node; obtaining the current state parameters of the node; determining, through the correspondence, the current load condition corresponding to the current state parameters; specifically, determining the current load condition corresponding to the state parameters includes: determining, as the current load condition, the load condition corresponding to the state parameters in the correspondence that are identical to the current state parameters; and, according to the current load condition, increasing or decreasing the processing tasks of the child processes.
  • the present invention also provides a computer-readable storage medium on which a computer program is stored.
  • when the program is executed by a processor, the support point parallel enumeration load balancing method as provided in all the embodiments of the present application is realized:
  • that is, when the program is executed by the processor, it implements: using the self-learning ability of the artificial neural network to establish the correspondence between the state parameters of a node and the load conditions of the child processes in the node; obtaining the current state parameters of the node; determining, through the correspondence, the current load condition corresponding to the current state parameters; specifically, determining the current load condition corresponding to the state parameters includes: determining, as the current load condition, the load condition corresponding to the state parameters in the correspondence that are identical to the current state parameters; and, according to the current load condition, increasing or decreasing the processing tasks of the child processes.
  • the computer-readable medium may be a computer-readable signal medium or a computer-readable storage medium.
  • the computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples (a non-exhaustive list) of computer-readable storage media include: an electrical connection with one or more wires, a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
  • the computer-readable storage medium can be any tangible medium that contains or stores a program, and the program can be used by or in combination with an instruction execution system, apparatus, or device.
  • the computer-readable signal medium may include a data signal propagated in baseband or as a part of a carrier wave, and computer-readable program code is carried therein. This propagated data signal can take many forms, including, but not limited to, electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • the computer-readable signal medium may also be any computer-readable medium other than the computer-readable storage medium, and the computer-readable medium may send, propagate, or transmit the program for use by or in combination with the instruction execution system, apparatus, or device.
  • the computer program code used to perform the operations of the present invention can be written in one or more programming languages or a combination thereof.
  • the above-mentioned programming languages include object-oriented programming languages such as Java, Smalltalk, and C++, and also include conventional procedural programming languages such as the "C" language or similar programming languages.
  • the program code can be executed entirely on the user's computer, partly on the user's computer, as an independent software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server.
  • the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computer (for example, through the Internet using an Internet service provider).


Abstract

The present application provides a support point parallel enumeration load balancing method, apparatus, device, and medium. The method concerns predicting the load of nodes and includes: using the self-learning ability of an artificial neural network to establish a correspondence between the state parameters of a node and the load conditions of the child processes within the node; obtaining the current state parameters of the node; determining, through the correspondence, the current load condition corresponding to the current state parameters; specifically, determining the current load condition corresponding to the state parameters includes: determining, as the current load condition, the load condition corresponding to the state parameters in the correspondence that are identical to the current state parameters; and increasing or decreasing the processing tasks of the child processes according to the current load condition. When running a parallel support point enumeration algorithm, the computing resources of a heterogeneous parallel platform can be utilized more fully to obtain the optimal support point combination under large data volumes, ensuring that the task loads assigned to the same node are close to each other, thereby alleviating the load balancing problem.

Description

Support Point Parallel Enumeration Load Balancing Method, Apparatus, Device, and Medium — Technical Field
The present application relates to the field of big data mining, and in particular to a support point parallel enumeration load balancing method, apparatus, device, and medium.
Background Art
At present, a number of support point selection algorithms already exist, but the performance differences between different algorithms are often small; the index performance improvement brought by support points obtained with complex mathematical tools at a high construction cost is often relatively limited.
As the amount of data increases, the amount of computation rises exponentially and computation times become excessively long, which will affect research progress across the entire field. Therefore, a support point enumeration approach with short computation time is an urgent problem to be solved.
In the prior art, IOPEA (Parallel Pivot Enumeration Algorithm on I/O Multiplexing) splits the data to be computed into more parts at program start. After distributing the pending tasks to CPU cores or GPU cards, it uses select to continuously poll and listen on multiple child-process handles; as soon as any child process finishes its computation and returns a result, it is assigned the next computing task. Through this polling mechanism, idle computing resources can be discovered promptly and dispatched tasks so as to fully utilize the hardware resources. Although, under multiple levels of parallelism, the I/O multiplexing approach can promptly discover unused computing resources, this strategy has an obvious drawback: multiple processes must be scanned continuously, listening for read events in blocking mode to judge whether each process has finished computing, which wastes resources.
Summary of the Invention
In view of the above problems, the present application is proposed to provide a support point parallel enumeration load balancing method, apparatus, device, and medium that overcome the above problems or at least partially solve them, including:
A support point parallel enumeration load balancing method, the method concerning prediction of the load condition of a node, characterized by comprising:
using the self-learning ability of an artificial neural network to establish a correspondence between the state parameters of a node and the load condition of the node;
obtaining the current state parameters of the node;
determining, through the correspondence, the current load condition corresponding to the current state parameters; specifically, determining the current load condition corresponding to the state parameters includes: determining, as the current load condition, the load condition corresponding to the state parameters in the correspondence that are identical to the current state parameters;
increasing or decreasing the processing tasks of the child processes according to the current load condition.
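A minimal sketch of the last two steps, assuming the learned correspondence can be represented as a simple lookup table; `determine_load`, `rebalance`, and the capacity threshold are illustrative names not taken from the patent.

```python
def determine_load(correspondence, current_state):
    """Look up the load condition whose stored state parameters match
    the node's current state parameters."""
    return correspondence.get(current_state)

def rebalance(assigned_tasks, load, capacity):
    """Shrink a child process's task list when the predicted load
    exceeds its capacity; the shed tasks go back for redistribution."""
    if load > capacity:                    # overloaded: shed the excess
        keep = max(1, len(assigned_tasks) * capacity // load)
        return assigned_tasks[:keep], assigned_tasks[keep:]
    return assigned_tasks, []              # not overloaded: shed nothing

correspondence = {("busy", 8): 120, ("idle", 2): 30}   # state params -> load
load = determine_load(correspondence, ("busy", 8))
kept, shed = rebalance(list(range(10)), load, capacity=60)
print(load, len(kept), len(shed))  # 120 5 5
```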
Further, the state parameters include: task parameters and/or performance parameters, and/or a one-dimensional or multi-dimensional array composed of features extracted from the task parameters and the performance parameters according to a set rule; wherein,
the task parameters include at least one of: texture data, distance data, and quantity data;
and/or,
the performance parameters include: mask heatmap task parameters, specifically including at least one of: the number of heat points, the number of heat point color types, and the depth of heat point colors;
and/or,
the correspondence includes: a functional relationship;
the state parameters are the input parameters of the functional relationship, and the load condition is the output parameter of the functional relationship;
the step of determining, through the correspondence, the current load condition corresponding to the current state parameters includes:
when the correspondence includes a functional relationship, inputting the current state parameters into the functional relationship, and determining the output parameter of the functional relationship as the current load condition.
Further, the step of establishing a correspondence between the state parameters of a node and the load condition of the node includes:
obtaining sample data for establishing the correspondence between the state parameters and the load condition;
analyzing the characteristics and regularities of the state parameters, and determining the network structure and network parameters of the artificial neural network according to the characteristics and regularities;
training and testing the network structure and the network parameters using the sample data, and determining the correspondence between the state parameters and the load condition.
Further, the step of obtaining sample data for establishing the correspondence between the state parameters and the load condition includes:
collecting the state parameters and load conditions of nodes when processing different types of data;
analyzing the state parameters and, in combination with pre-stored expert experience information, selecting data related to the load condition as the state parameters;
taking the data pairs formed by the load conditions and the selected state parameters as the sample data.
Further, the network structure includes at least one of: a BP neural network, a CNN, an RNN, a residual neural network, and a multi-stage recurrent neural network;
and/or,
the network parameters include at least one of: the number of input nodes, the number of output nodes, the number of hidden layers, the number of hidden nodes, the number of dense blocks, the number of output layers, the number of convolutional layers, the number of transition layers, initial weights, and bias values.
Further, training the network structure and the network parameters includes:
selecting a part of the sample data as training samples, inputting the state parameters in the training samples into the network structure, and training through the activation function of the network structure and the network parameters to obtain actual training results;
determining whether the actual training error between the actual training results and the corresponding load conditions in the training samples meets a preset training error;
when the actual training error meets the preset training error, determining that the training of the network structure and the network parameters is completed;
and/or,
testing the network structure and the network parameters includes:
selecting another part of the sample data as test samples, inputting the state parameters in the test samples into the trained network structure, and testing with the activation function and the trained network parameters to obtain actual test results;
determining whether the actual test error between the actual test results and the corresponding load conditions in the test samples meets a set test error;
when the actual test error meets the set test error, determining that the testing of the network structure and the network parameters is completed.
Further, training the network structure and the network parameters further includes:
when the actual training error does not meet the set training error, updating the network parameters through the error energy function of the network structure;
retraining through the activation function of the network structure and the updated network parameters until the actual training error after retraining meets the set training error;
and/or,
testing the network structure and the network parameters further includes:
when the actual test error does not meet the set test error, retraining the network structure and the network parameters until the actual test error after retraining meets the set test error.
A support point parallel enumeration load balancing apparatus, comprising:
an establishing module, configured to use the self-learning ability of an artificial neural network to establish a correspondence between the state parameters of a node and the load condition of the node;
an obtaining module, configured to obtain the current state parameters of the node;
a determining module, configured to determine, through the correspondence, the current load condition corresponding to the current state parameters; specifically, determining the current load condition corresponding to the state parameters includes: determining, as the current load condition, the load condition corresponding to the state parameters in the correspondence that are identical to the current state parameters;
a task allocation module, configured to increase or decrease the processing tasks of the child processes according to the current load condition.
A device, comprising a processor, a memory, and a computer program stored on the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the support point parallel enumeration load balancing method as described above.
A computer-readable storage medium, storing a computer program, wherein the computer program, when executed by a processor, implements the steps of the support point parallel enumeration load balancing method as described above.
The present application has the following advantages:
In the embodiments of the present application, the self-learning ability of an artificial neural network is used to establish a correspondence between the state parameters of a node and the load conditions of the child processes within the node; the current state parameters of the node are obtained; the current load condition corresponding to the current state parameters is determined through the correspondence; specifically, determining the current load condition corresponding to the state parameters includes: determining, as the current load condition, the load condition corresponding to the state parameters in the correspondence that are identical to the current state parameters; and the processing tasks of the child processes are increased or decreased according to the current load condition. When running a parallel support point enumeration algorithm, the computing resources of a heterogeneous parallel platform can be utilized more fully to obtain the optimal support point combination under large data volumes, ensuring that the task loads assigned to the same node are close, thereby alleviating the load balancing problem.
Brief Description of the Drawings
In order to explain the technical solutions of the present application more clearly, the drawings needed in the description of the present application will be briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
FIG. 1 is a flowchart of the steps of a support point parallel enumeration load balancing method provided by an embodiment of the present application;
FIG. 2a is a schematic diagram of a type-one texture data structure of a support point parallel enumeration load balancing method provided by an embodiment of the present application;
FIG. 2b is a schematic diagram of a type-two texture data structure of a support point parallel enumeration load balancing method provided by an embodiment of the present application;
FIG. 2c is a schematic diagram of a type-three texture data structure of a support point parallel enumeration load balancing method provided by an embodiment of the present application;
FIG. 2d is a schematic diagram of a type-four texture data structure of a support point parallel enumeration load balancing method provided by an embodiment of the present application;
FIG. 2e is a heatmap of distance computation counts of a support point parallel enumeration load balancing method provided by an embodiment of the present application;
FIG. 2f is a heatmap of distance computation counts of a support point parallel enumeration load balancing method provided by an embodiment of the present application;
FIG. 2g is a heatmap of distance computation counts of a support point parallel enumeration load balancing method provided by an embodiment of the present application;
FIG. 2h is a heatmap of distance computation counts of a support point parallel enumeration load balancing method provided by an embodiment of the present application;
FIG. 2i is a heatmap of the computation counts required by support point combinations of a support point parallel enumeration load balancing method provided by an embodiment of the present application;
FIG. 2j is a heatmap of the computation counts required by support point combinations of a support point parallel enumeration load balancing method provided by an embodiment of the present application;
FIG. 2k is a heatmap of the computation counts required by support point combinations of a support point parallel enumeration load balancing method provided by an embodiment of the present application;
FIG. 2l is a heatmap of the computation counts required by support point combinations of a support point parallel enumeration load balancing method provided by an embodiment of the present application;
FIG. 3 is a schematic diagram of the global task stealing and redistribution process of a support point parallel enumeration load balancing method provided by an embodiment of the present application;
FIG. 4 is a structural block diagram of a support point parallel enumeration load balancing apparatus provided by an embodiment of the present application;
FIG. 5 is a schematic structural diagram of a computer device according to an embodiment of the present invention.
Detailed Description
To make the above objects, features, and advantages of the present application more comprehensible, the present application is described in further detail below with reference to the drawings and specific embodiments. Obviously, the described embodiments are only a part of the embodiments of the present application, not all of them. Based on the embodiments in the present application, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present application.
Referring to FIG. 1, a support point parallel enumeration load balancing method provided by an embodiment of the present application is shown. The method concerns predicting the load condition of a node and includes:
S110: using the self-learning ability of an artificial neural network to establish a correspondence between the state parameters of a node and the load condition of the node;
S120: obtaining the current state parameters of the node;
S130: determining, through the correspondence, the current load condition corresponding to the current state parameters; specifically, determining the current load condition corresponding to the state parameters includes: determining, as the current load condition, the load condition corresponding to the state parameters in the correspondence that are identical to the current state parameters;
S140: increasing or decreasing the processing tasks of the child processes according to the current load condition.
In the embodiments of the present application, the self-learning ability of an artificial neural network is used to establish a correspondence between the state parameters of a node and the load conditions of the child processes within the node; the current state parameters of the node are obtained; the current load condition corresponding to the current state parameters is determined through the correspondence; specifically, determining the current load condition corresponding to the state parameters includes: determining, as the current load condition, the load condition corresponding to the state parameters in the correspondence that are identical to the current state parameters; and the processing tasks of the child processes are increased or decreased according to the current load condition. When running a parallel support point enumeration algorithm, the computing resources of a heterogeneous parallel platform can be utilized more fully to obtain the optimal support point combination under large data volumes, ensuring that the task loads assigned to the same node are close, thereby alleviating the load balancing problem.
Below, the support point parallel enumeration load balancing method in this exemplary embodiment is further explained.
As described in step S110 above, the self-learning ability of an artificial neural network is used to establish a correspondence between the state parameters of a node and the load condition of the node.
For example, an artificial neural network algorithm is used to analyze the regularities of the state parameters in a node corresponding to a given load condition, and the mapping between the state parameters in the node and the load condition is found through the self-learning and self-adaptive characteristics of the artificial neural network.
For example, an artificial neural network algorithm may be used: the state parameters of a large number of nodes of different types (including but not limited to one or more of the following: etc.) are aggregated and collected, the state parameters and load conditions of several nodes are selected as sample data, and the neural network is trained on them. By adjusting the network structure and the weights between network nodes, the neural network is made to fit the relationship between the state parameters in a node and the load condition, so that it can finally accurately fit the correspondence between the state parameters and load conditions of different nodes.
In one embodiment, the state parameters include: task parameters and/or performance parameters, and/or a one-dimensional or multi-dimensional array composed of features extracted from the task parameters and the performance parameters according to a set rule;
Optionally, the task parameters include at least one of: the number of tasks, the number of child processes, and the task partitioning scheme;
Specifically, the task set is S={xi|i=1,2,...,n}, where S contains n (n>=k) data points. The task partitioning scheme is to select k points from S as support points, where the number of support points k equals the number of child processes, denoted P={pj|j=1,2,...,k}. There are C(n,k) ways to select k support points from the n data points in S, that is, there exist C(n,k) different combinations P. For example, when n=1000 and k=3, C(n,k)=1.66167E+08.
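The combinatorial count quoted above can be checked directly; the 512x512 step size used later for the image sequence is also applied here to show how many frames the full enumeration would occupy.

```python
import math

n, k = 1000, 3
count = math.comb(n, k)          # C(n, k): number of distinct support point combinations
print(count)                     # 166167000, i.e. about 1.66167E+08

step = 512 * 512                 # 262144 combinations per 512x512 frame
print(math.ceil(count / step))   # 634 frames
```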
It should be noted that, in this embodiment, the above data are preferably converted into an image sequence for display and analysis. Specifically, the large number of combinations are divided in order with a step size of 262144 (512x512), the divided data are normalized, and the k elements of the set P are used as multi-channel values to generate the image sequence to be processed.
Specifically, the formula is as follows:
Figure PCTCN2020071011-appb-000001
where
Figure PCTCN2020071011-appb-000002
represents the image matrix generated from the k-th dimension of data in P; every element of the matrix comes from the support point combinations P divided with a step size of 512*512.
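The packing of combinations into 512x512 multi-channel frames described above can be sketched in plain Python. This is an illustrative reconstruction under assumptions: `combos_to_frames` is a hypothetical helper, and simple division by n-1 stands in for the unspecified normalization.

```python
import itertools

def combos_to_frames(n, k, side=512):
    """Pack the ordered support point combinations into side x side frames,
    one k-channel 'pixel' per combination, indices normalized to [0, 1]."""
    step = side * side
    frame, frames = [], []
    for combo in itertools.combinations(range(n), k):
        frame.append(tuple(p / (n - 1) for p in combo))
        if len(frame) == step:
            frames.append(frame)
            frame = []
    if frame:  # zero-pad the final, partially filled frame
        frame.extend([(0.0,) * k] * (step - len(frame)))
        frames.append(frame)
    return frames

frames = combos_to_frames(64, 3)    # C(64, 3) = 41664 combinations -> one padded frame
print(len(frames), len(frames[0]))  # 1 262144
```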
Optionally, the performance parameters include: the computation speed of the child processes;
It should be noted that, in this embodiment, the above data are preferably converted into an image sequence for display and analysis. Specifically, the already-computed support point performance data are used to generate a corresponding mask heatmap.
In a specific implementation, the images generated from the number of tasks, the number of child processes, and the task partitioning scheme include four types of texture data, as shown in FIG. 2a to FIG. 2d. Several images are saved in each category, each of size 512*512. The corresponding heatmaps generated from distance computation counts are shown in FIG. 2e to FIG. 2l. In FIGS. 2i-2l, darker regions indicate that the support point combinations corresponding to those regions require more computations, while in FIGS. 2e-2h, darker regions indicate fewer distance computations. FIG. 2e corresponds to FIG. 2i, FIG. 2f to FIG. 2j, FIG. 2g to FIG. 2k, and FIG. 2h to FIG. 2l. In each corresponding pair of images, the more heat points in FIGS. 2i-2l, the worse the performance of the support point combination sequence that forms the image, i.e., the more time it consumes in the actual computing task. The heat points in FIGS. 2i-2l are called the key regions that the network needs to analyze, localize, and predict; the number of key regions and the color depth in the heatmap directly affect the subsequent task allocation.
Optionally, the correspondence includes: a functional relationship; the state parameters are the input parameters of the functional relationship, and the load condition is the output parameter;
Specifically,
in one embodiment, the correspondence includes: a functional relationship.
Preferably, the state parameters are the input parameters of the functional relationship, and the load condition is the output parameter of the functional relationship;
Thus, correspondences in multiple forms improve the flexibility and convenience of determining the current load condition.
In one embodiment, the specific process of "establishing a correspondence between the state parameters of a node and the load conditions of the child processes within the node" in step S110 may be further explained in conjunction with the following description.
As described in the following step: obtaining sample data for establishing the correspondence between the state parameters and the load condition;
In a further embodiment, the specific process of "obtaining sample data for establishing the correspondence between the state parameters and the load condition" may be further explained in conjunction with the following description.
As described in the following step: collecting the state parameters and the load conditions of nodes when processing different types of data;
For example, data collection: gathering the state parameters and the corresponding load conditions of nodes under a variety of conditions.
Thus, collecting operating data through multiple channels helps increase the amount of data and improves the learning ability of the artificial neural network, thereby improving the accuracy and reliability of the determined correspondence.
As described in the following step: analyzing the state parameters and, in combination with pre-stored expert experience information, selecting data related to the load condition as the state parameters (for example, selecting state parameters that affect the load condition as input parameters and taking designated parameters as output parameters);
For example, taking the state parameters in the relevant collected data as input parameters and the load conditions in the relevant data as output parameters.
As described in the following step: taking the data pairs formed by the load conditions and the selected state parameters as the sample data.
For example, of the obtained input-output parameter pairs, one part is used as training sample data and the other part as test sample data.
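The train/test partition of the sample pairs can be sketched as follows; `split_samples`, the 80/20 ratio, and the synthetic pairs are illustrative assumptions, since the text does not fix a split ratio.

```python
import random

def split_samples(pairs, train_frac=0.8, seed=0):
    """Shuffle (state_params, load) pairs and split into train/test sets."""
    pairs = list(pairs)
    random.Random(seed).shuffle(pairs)   # fixed seed for reproducibility
    cut = int(len(pairs) * train_frac)
    return pairs[:cut], pairs[cut:]

samples = [((i, i % 7), i * 0.5) for i in range(100)]  # synthetic (state, load) pairs
train, test = split_samples(samples)
print(len(train), len(test))  # 80 20
```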
Thus, the collected state parameters are analyzed and processed to obtain the sample data; the operation is simple and the results are reliable.
As described in the following step: analyzing the characteristics and regularities of the state parameters, and determining the network structure and network parameters of the artificial neural network according to the characteristics and regularities;
For example, according to the characteristics of the data that influence the node condition and the regularities they contain, the basic structure of the network, the number of input and output nodes, the number of hidden layers, the number of hidden nodes, the initial network weights, and so on can be preliminarily determined.
Preferably, the network structure includes at least one of: a BP neural network, a CNN, an RNN, a residual neural network, and a multi-stage recurrent neural network;
and/or,
the network parameters include at least one of: the number of input nodes, the number of output nodes, the number of hidden layers, the number of hidden nodes, the number of dense blocks, the number of output layers, the number of convolutional layers, the number of transition layers, initial weights, and bias values.
It should be noted that the network structure preferably adopts a multi-stage recurrent neural network, wherein:
Specifically, the key region detected in the m-th recurrent stage for the n-th image is obtained through the algorithm as
Figure PCTCN2020071011-appb-000003
First, the data of each channel of each image is extracted, and a single-channel image is generated from each extracted channel; the features f n,m of the regions with higher logarithmic distance computation counts are extracted from each image.
Since the processing at the current stage must consider not only the information I n of the current image but also the key region features I n-1 extracted in the previous stage, the obtained key region features are then used as the input of the feature matching module Φ r, finally yielding the key region prediction information
Figure PCTCN2020071011-appb-000004
The feature matching module Φ r consists of three components: the key region feature tensor
Figure PCTCN2020071011-appb-000005
the hidden layer variable
Figure PCTCN2020071011-appb-000006
Figure PCTCN2020071011-appb-000007
In the formulas, W p and W r are the parameters of Φ p and Φ r, respectively, and the network parameters
Figure PCTCN2020071011-appb-000008
k={1,2,…,M}, where M denotes the number of stages.
In the model, the variable of the first stage
Figure PCTCN2020071011-appb-000009
is initialized to 0, and the key region sequence {K 1,K 2,…,K N} of the last stage is taken as the final output detection value. This step-by-step progressive approach enables the network to learn the mapping of the key sequences well across multiple dimensions and multiple modules.
The multi-channel data are extracted, and the extracted data are used respectively as the input of each stage of the network
Figure PCTCN2020071011-appb-000010
It should be noted that the multi-stage optimization process specifically includes the following:
Figure PCTCN2020071011-appb-000011
As described in the following step: training and testing the network structure and the network parameters using the sample data, and determining the correspondence between the state parameters and the load condition.
For example, after the network design is completed, the designed neural network needs to be trained with the training sample data. The training method can be adjusted according to the actual network structure and the problems found during training.
Thus, by collecting image data, selecting sample data from it, and training and testing based on the sample data, the correspondence between the state parameters and the load condition is determined, which helps improve the accuracy of generating the designated parameters.
Optionally, the specific process of training the network structure and the network parameters in the step "training and testing the network structure and the network parameters using the sample data, and determining the correspondence between the state parameters and the load condition" may be further explained in conjunction with the following description.
As described in the following steps: selecting a part of the sample data as training samples, inputting the state parameters in the training samples into the network structure, and training through the activation function of the network structure and the network parameters to obtain actual training results; determining whether the actual training error between the actual training results and the corresponding load conditions in the training samples meets a preset training error; when the actual training error meets the preset training error, determining that the training of the network structure and the network parameters is completed;
More optionally, training the network structure and the network parameters further includes:
when the actual training error does not meet the set training error, updating the network parameters through the error energy function of the network structure; retraining through the activation function of the network structure and the updated network parameters until the actual training error after retraining meets the set training error;
For example, if the test error meets the requirement, the network training and testing are completed.
Thus, testing the trained network structure and network parameters with the test samples further verifies their reliability.
Optionally, the specific process of testing the network structure and the network parameters in the step "training and testing the network structure and the network parameters using the sample data, and determining the correspondence between the state parameters and the load condition" may be further explained in conjunction with the following description.
As described in the following steps: selecting another part of the sample data as test samples, inputting the state parameters in the test samples into the trained network structure, and testing with the activation function and the trained network parameters to obtain actual test results; determining whether the actual test error between the actual test results and the corresponding load conditions in the test samples meets a set test error; when the actual test error meets the set test error, determining that the testing of the network structure and the network parameters is completed.
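The train-until-error-met loop described above can be illustrated with a toy model. This is a sketch under strong assumptions: a one-weight linear "network" with an identity activation and plain mean squared error standing in for the unspecified error energy function.

```python
def train_until(params_init, samples, lr=0.01, tol=1e-3, max_epochs=10000):
    """Repeat forward pass + weight update until the mean squared error
    (the 'error energy') meets the set training error."""
    w, b = params_init
    err = float("inf")
    for epoch in range(max_epochs):
        err = 0.0
        for x, y in samples:
            pred = w * x + b          # forward pass (identity activation)
            e = pred - y
            err += e * e
            w -= lr * e * x           # gradient step on the error energy
            b -= lr * e
        err /= len(samples)
        if err <= tol:                # actual training error meets the set error
            return (w, b), err, epoch
    return (w, b), err, max_epochs

samples = [(x, 2.0 * x + 1.0) for x in (0.0, 0.5, 1.0, 1.5, 2.0)]
(w, b), err, epochs = train_until((0.0, 0.0), samples)
print(round(w, 1), round(b, 1))  # close to the true parameters 2.0 and 1.0
```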
As described in step S120 above, the current state parameters of the node are obtained;
As described in step S130 above, the current load condition corresponding to the current state parameters is determined through the correspondence; specifically, determining the current load condition corresponding to the state parameters includes: determining, as the current load condition, the load condition corresponding to the state parameters in the correspondence that are identical to the current state parameters;
For example, the state parameters of the node are identified in real time.
Thus, based on the correspondence, the current load condition is effectively identified from the current state parameters, providing an accurate basis for the node's task allocation, and the judgment results are highly precise.
In an optional example, determining the current load condition corresponding to the current state parameters in step S130 may include: determining, as the current load condition, the load condition corresponding to the state parameters in the correspondence that are identical to the current state parameters.
In an optional example, determining the current load condition corresponding to the state parameters in step S130 may further include: when the correspondence includes a functional relationship, inputting the current state parameters into the functional relationship, and determining the output parameter of the functional relationship as the current load condition.
Thus, determining the current load condition from the current state parameters based on the correspondence or the functional relationship is simple to carry out and highly reliable.
In an optional implementation, the method may further include: a process of verifying whether the current load condition matches the actual load condition.
Optionally, when a verification result indicating that the current load condition does not match the actual load condition is received, and/or it is determined that there is no state parameter in the correspondence identical to the current state parameters, at least one maintenance operation among updating, correcting, and relearning is performed on the correspondence.
For example, the device itself cannot learn the actual load condition; feedback from an operator is required. That is, even if the device intelligently judges the load condition, it can only learn that its judgment does not match the actual state through the operator's feedback operation.
Verifying whether the current load condition matches the actual load condition (for example, the actual load condition may be displayed through an AR display module to verify whether the determined current load condition matches the actual load condition).
When the current load condition does not match the actual load condition, and/or there is no state parameter in the correspondence identical to the current state parameters, at least one maintenance operation among updating, correcting, and relearning is performed on the correspondence.
For example, the current load condition may be determined from the current state parameters according to the maintained correspondence; e.g., the load condition corresponding to the state parameters in the maintained correspondence that are identical to the current state parameters is determined as the current load condition.
Thus, maintaining the determined correspondence between the state parameters and the load condition helps improve the accuracy and reliability of determining the load condition.
As described in step S140 above, the processing tasks of the child processes are increased or decreased according to the current load condition.
Referring to FIG. 3, it should be noted that when the current load of some child processes exceeds that of the other child processes, the excess tasks of those child processes are re-partitioned on the basis of the current total number of child processes: the excess tasks are re-split into several data tasks and dispatched to all child processes, so as to ensure task load balance among the child processes. For example, through the multi-stage neural network's estimate of the distance computation counts, the distance computation counts are divided into several levels from large to small, and task sequences of the same level are assigned to the same computing node, ensuring that the task loads assigned to the same computing node are close, thereby alleviating the load balancing problem.
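The level-based assignment described above can be sketched as follows; the quantile levelling, the round-robin within levels, and the `(name, cost)` task tuples are illustrative assumptions, with the estimated distance computation count playing the role of task cost.

```python
def assign_by_level(tasks, n_nodes, n_levels=4):
    """Group tasks into cost levels (large to small) and round-robin each
    level across nodes, so every node receives a similar mix of loads."""
    costs = sorted(c for _, c in tasks)

    def level(c):
        # level boundaries taken at cost quantiles
        for i in range(1, n_levels):
            if c <= costs[i * len(costs) // n_levels - 1]:
                return i - 1
        return n_levels - 1

    buckets = [[] for _ in range(n_levels)]
    for task in tasks:
        buckets[level(task[1])].append(task)
    nodes = [[] for _ in range(n_nodes)]
    idx = 0
    for bucket in buckets:
        for task in sorted(bucket, key=lambda t: -t[1]):
            nodes[idx % n_nodes].append(task)  # same level spread over nodes
            idx += 1
    return nodes

tasks = [(f"t{i}", cost) for i, cost in enumerate([9, 8, 7, 7, 5, 5, 3, 2, 2, 1])]
nodes = assign_by_level(tasks, n_nodes=2)
loads = [sum(c for _, c in node) for node in nodes]
print(loads)  # per-node totals end up close: [24, 25]
```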
As for the apparatus embodiment, since it is basically similar to the method embodiment, its description is relatively simple; for relevant parts, refer to the description of the method embodiment.
Referring to FIG. 4, a support point parallel enumeration load balancing apparatus provided by an embodiment of the present application is shown, characterized by comprising:
an establishing module 410, configured to use the self-learning ability of an artificial neural network to establish a correspondence between the state parameters of a node and the load condition of the node;
an obtaining module 420, configured to obtain the current state parameters of the node;
a determining module 430, configured to determine, through the correspondence, the current load condition corresponding to the current state parameters; specifically, determining the current load condition corresponding to the state parameters includes: determining, as the current load condition, the load condition corresponding to the state parameters in the correspondence that are identical to the current state parameters;
a task allocation module 440, configured to increase or decrease the processing tasks of the child processes according to the current load condition.
In one embodiment, the state parameters include: task parameters and/or performance parameters, and/or a one-dimensional or multi-dimensional array composed of features extracted from the task parameters and the performance parameters according to a set rule; wherein,
the task parameters include at least one of: the number of tasks, the number of child processes, and the task partitioning scheme;
and/or,
the performance parameters include: the computation speed of the child processes;
and/or,
the correspondence includes: a functional relationship;
the state parameters are the input parameters of the functional relationship, and the load condition is the output parameter of the functional relationship;
the determining module 430 includes:
a function determining sub-module, configured to, when the correspondence includes a functional relationship, input the current state parameters into the functional relationship and determine the output parameter of the functional relationship as the current load condition.
In one embodiment, the establishing module 410 includes:
an obtaining sub-module, configured to obtain sample data for establishing the correspondence between the state parameters and the load condition;
an analyzing sub-module, configured to analyze the characteristics and regularities of the state parameters and determine the network structure and network parameters of the artificial neural network according to the characteristics and regularities;
a training sub-module, configured to train and test the network structure and the network parameters using the sample data and determine the correspondence between the state parameters and the load condition.
In one embodiment, the obtaining sub-module includes:
a collecting sub-module, configured to collect the state parameters and load conditions of nodes when processing different types of data;
an analyzing sub-module, configured to analyze the state parameters and, in combination with pre-stored expert experience information, select data related to the load condition as the state parameters;
a sample data generating sub-module, configured to take the data pairs formed by the load conditions and the selected state parameters as the sample data.
In one embodiment,
the network structure includes at least one of: a BP neural network, a CNN, an RNN, a residual neural network, and a multi-stage recurrent neural network;
and/or,
the network parameters include at least one of: the number of input nodes, the number of output nodes, the number of hidden layers, the number of hidden nodes, the number of dense blocks, the number of output layers, the number of convolutional layers, the number of transition layers, initial weights, and bias values.
In one embodiment,
the training sub-module includes:
a training result generating sub-module, configured to select a part of the sample data as training samples, input the state parameters in the training samples into the network structure, and train through the activation function of the network structure and the network parameters to obtain actual training results;
a training result error judging sub-module, configured to determine whether the actual training error between the actual training results and the corresponding load conditions in the training samples meets a preset training error;
a training completion judging sub-module, configured to determine that the training of the network structure and the network parameters is completed when the actual training error meets the preset training error;
and/or,
a testing sub-module, configured to test the network structure and the network parameters, the testing sub-module including:
a test result generating sub-module, configured to select another part of the sample data as test samples, input the state parameters in the test samples into the trained network structure, and test with the activation function and the trained network parameters to obtain actual test results;
a test result error judging sub-module, configured to determine whether the actual test error between the actual test results and the corresponding load conditions in the test samples meets a set test error;
a test completion judging sub-module, configured to determine that the testing of the network structure and the network parameters is completed when the actual test error meets the set test error.
In one embodiment,
the training sub-module further includes:
a network parameter updating sub-module, configured to update the network parameters through the error energy function of the network structure when the actual training error does not meet the set training error;
a first retraining sub-module, configured to retrain through the activation function of the network structure and the updated network parameters until the actual training error after retraining meets the set training error;
and/or,
the testing sub-module further includes:
a second retraining sub-module, configured to retrain the network structure and the network parameters when the actual test error does not meet the set test error, until the actual test error after retraining meets the set test error.
Referring to FIG. 5, a computer device for a support point parallel enumeration load balancing method of the present invention is shown, which may specifically include the following:
The above computer device 12 is embodied in the form of a general-purpose computing device. The components of the computer device 12 may include, but are not limited to: one or more processors or processing units 16, a system memory 28, and a bus 18 connecting different system components (including the system memory 28 and the processing unit 16).
The bus 18 represents one or more of several types of bus structures, including a memory bus or memory controller, a peripheral bus, a graphics acceleration port, a processor, or a local bus using any of a variety of bus structures. By way of example, these architectures include, but are not limited to, the Industry Standard Architecture (ISA) bus, the Micro Channel Architecture (MCA) bus, the Enhanced ISA bus, the Video Electronics Standards Association (VESA) local bus, and the Peripheral Component Interconnect (PCI) bus.
The computer device 12 typically includes a variety of computer system readable media. These media can be any available media accessible by the computer device 12, including volatile and non-volatile media, removable and non-removable media.
The system memory 28 may include computer system readable media in the form of volatile memory, such as random access memory (RAM) 30 and/or cache memory 32. The computer device 12 may further include other removable/non-removable, volatile/non-volatile computer system storage media. Merely as an example, the storage system 34 may be used to read and write non-removable, non-volatile magnetic media (commonly called a "hard disk drive"). Although not shown in FIG. 5, a disk drive for reading from and writing to a removable non-volatile magnetic disk (such as a "floppy disk"), and an optical disc drive for reading from and writing to a removable non-volatile optical disc (such as a CD-ROM, DVD-ROM, or other optical media), may be provided. In these cases, each drive may be connected to the bus 18 through one or more data media interfaces. The memory may include at least one program product having a set (e.g., at least one) of program modules 42 configured to perform the functions of the embodiments of the present invention.
A program/utility 40 having a set (at least one) of program modules 42 may be stored, for example, in the memory. Such program modules 42 include, but are not limited to, an operating system, one or more application programs, other program modules 42, and program data; each of these examples, or some combination thereof, may include an implementation of a network environment. The program modules 42 generally perform the functions and/or methods in the embodiments described in the present invention.
The computer device 12 may also communicate with one or more external devices 14 (such as a keyboard, pointing device, display 24, camera, etc.), with one or more devices that enable a user to interact with the computer device 12, and/or with any device (such as a network card, modem, etc.) that enables the computer device 12 to communicate with one or more other computing devices. Such communication may take place through an input/output (I/O) interface 22. Moreover, the computer device 12 may also communicate with one or more networks (such as a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet) through the network adapter 20. As shown in the figure, the network adapter 20 communicates with the other modules of the computer device 12 through the bus 18. It should be understood that, although not shown in FIG. 5, other hardware and/or software modules may be used in conjunction with the computer device 12, including but not limited to: microcode, device drivers, redundant processing units 16, external disk drive arrays, RAID systems, tape drives, data backup storage systems 34, and the like.
The processing unit 16 executes various functional applications and data processing by running programs stored in the system memory 28, for example implementing the support point parallel enumeration load balancing method provided by the embodiments of the present invention.
That is, when the processing unit 16 executes the above program, it implements: using the self-learning ability of an artificial neural network to establish a correspondence between the state parameters of a node and the load conditions of the child processes within the node; obtaining the current state parameters of the node; determining, through the correspondence, the current load condition corresponding to the current state parameters; specifically, determining the current load condition corresponding to the state parameters includes: determining, as the current load condition, the load condition corresponding to the state parameters in the correspondence that are identical to the current state parameters; and increasing or decreasing the processing tasks of the child processes according to the current load condition.
In an embodiment of the present invention, the present invention also provides a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, the support point parallel enumeration load balancing method provided by all the embodiments of the present application is implemented:
That is, when the program is executed by the processor, it implements: using the self-learning ability of an artificial neural network to establish a correspondence between the state parameters of a node and the load conditions of the child processes within the node; obtaining the current state parameters of the node; determining, through the correspondence, the current load condition corresponding to the current state parameters; specifically, determining the current load condition corresponding to the state parameters includes: determining, as the current load condition, the load condition corresponding to the state parameters in the correspondence that are identical to the current state parameters; and increasing or decreasing the processing tasks of the child processes according to the current load condition.
Any combination of one or more computer-readable media may be used. The computer-readable medium may be a computer-readable signal medium or a computer-readable storage medium. A computer-readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples (a non-exhaustive list) of computer-readable storage media include: an electrical connection with one or more wires, a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In this document, a computer-readable storage medium may be any tangible medium that contains or stores a program that can be used by or in combination with an instruction execution system, apparatus, or device.
A computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium that can send, propagate, or transmit a program for use by or in combination with an instruction execution system, apparatus, or device.
Computer program code for carrying out the operations of the present invention may be written in one or more programming languages or a combination thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider). The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and for the same or similar parts between the embodiments, reference may be made to one another.
Although preferred embodiments of the embodiments of the present application have been described, those skilled in the art, once aware of the basic inventive concept, can make additional changes and modifications to these embodiments. Therefore, the appended claims are intended to be construed as including the preferred embodiments and all changes and modifications falling within the scope of the embodiments of the present application.
Finally, it should also be noted that, herein, relational terms such as first and second are used only to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relationship or order between these entities or operations. Moreover, the terms "comprise," "include," or any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or terminal device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or also includes elements inherent to such a process, method, article, or terminal device. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the existence of other identical elements in the process, method, article, or terminal device that includes the element.
The support point parallel enumeration load balancing method, apparatus, device, and medium provided by the present application have been introduced in detail above. Specific examples are used herein to explain the principles and implementations of the present application; the descriptions of the above embodiments are only intended to help understand the method of the present application and its core idea. Meanwhile, for those of ordinary skill in the art, there will be changes in the specific implementation and application scope according to the idea of the present application. In summary, the contents of this specification should not be construed as limiting the present application.

Claims (10)

  1. A support point parallel enumeration load balancing method, characterized by comprising:
    using the self-learning ability of an artificial neural network to establish a correspondence between the state parameters of a node and the load conditions of the child processes within the node;
    obtaining the current state parameters of the node;
    determining, through the correspondence, the current load condition corresponding to the current state parameters; specifically, determining the current load condition corresponding to the state parameters includes: determining, as the current load condition, the load condition corresponding to the state parameters in the correspondence that are identical to the current state parameters;
    increasing or decreasing the processing tasks of the child processes according to the current load condition.
  2. The method according to claim 1, characterized in that
    the state parameters include: task parameters and/or performance parameters, and/or a one-dimensional or multi-dimensional array composed of features extracted from the task parameters and the performance parameters according to a set rule; wherein,
    the task parameters include at least one of: the number of tasks, the number of child processes, and the task partitioning scheme;
    and/or,
    the performance parameters include: the computation speed of the child processes;
    and/or,
    the correspondence includes: a functional relationship;
    the state parameters are the input parameters of the functional relationship, and the load condition is the output parameter of the functional relationship;
    the step of determining, through the correspondence, the current load condition corresponding to the current state parameters includes:
    when the correspondence includes a functional relationship, inputting the current state parameters into the functional relationship, and determining the output parameter of the functional relationship as the current load condition.
  3. The method according to claim 1, characterized in that the step of establishing a correspondence between the state parameters of a node and the load condition of the node includes:
    obtaining sample data for establishing the correspondence between the state parameters and the load condition;
    analyzing the characteristics and regularities of the state parameters, and determining the network structure and network parameters of the artificial neural network according to the characteristics and regularities;
    training and testing the network structure and the network parameters using the sample data, and determining the correspondence between the state parameters and the load condition.
  4. The method according to claim 3, characterized in that the step of obtaining sample data for establishing the correspondence between the state parameters and the load condition includes:
    collecting the state parameters and load conditions of nodes when processing different types of data;
    analyzing the state parameters and, in combination with pre-stored expert experience information, selecting data related to the load condition as the state parameters;
    taking the data pairs formed by the load conditions and the selected state parameters as the sample data.
  5. The method according to claim 4, characterized in that
    the network structure includes at least one of: a BP neural network, a CNN, an RNN, a residual neural network, and a multi-stage recurrent neural network;
    and/or,
    the network parameters include at least one of: the number of input nodes, the number of output nodes, the number of hidden layers, the number of hidden nodes, the number of dense blocks, the number of output layers, the number of convolutional layers, the number of transition layers, initial weights, and bias values.
  6. The method according to any one of claims 3 to 5, characterized in that
    training the network structure and the network parameters includes:
    selecting a part of the sample data as training samples, inputting the state parameters in the training samples into the network structure, and training through the activation function of the network structure and the network parameters to obtain actual training results;
    determining whether the actual training error between the actual training results and the corresponding load conditions in the training samples meets a preset training error;
    when the actual training error meets the preset training error, determining that the training of the network structure and the network parameters is completed;
    and/or,
    testing the network structure and the network parameters includes:
    selecting another part of the sample data as test samples, inputting the state parameters in the test samples into the trained network structure, and testing with the activation function and the trained network parameters to obtain actual test results;
    determining whether the actual test error between the actual test results and the corresponding load conditions in the test samples meets a set test error;
    when the actual test error meets the set test error, determining that the testing of the network structure and the network parameters is completed.
  7. The method according to claim 6, characterized in that
    training the network structure and the network parameters further includes:
    when the actual training error does not meet the set training error, updating the network parameters through the error energy function of the network structure;
    retraining through the activation function of the network structure and the updated network parameters until the actual training error after retraining meets the set training error;
    and/or,
    testing the network structure and the network parameters further includes:
    when the actual test error does not meet the set test error, retraining the network structure and the network parameters until the actual test error after retraining meets the set test error.
  8. A support point parallel enumeration load balancing apparatus, characterized by comprising:
    an establishing module, configured to use the self-learning ability of an artificial neural network to establish a correspondence between the state parameters of a node and the load condition of the node;
    an obtaining module, configured to obtain the current state parameters of the node;
    a determining module, configured to determine, through the correspondence, the current load condition corresponding to the current state parameters; specifically, determining the current load condition corresponding to the state parameters includes: determining, as the current load condition, the load condition corresponding to the state parameters in the correspondence that are identical to the current state parameters;
    a task allocation module, configured to increase or decrease the processing tasks of the child processes according to the current load condition.
  9. A device, characterized by comprising a processor, a memory, and a computer program stored on the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the method according to any one of claims 1 to 7.
  10. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, and the computer program, when executed by a processor, implements the method according to any one of claims 1 to 7.
PCT/CN2020/071011 2019-12-31 2020-01-08 Support point parallel enumeration load balancing method, apparatus, device, and medium WO2021134810A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911422115.9A CN111158918B (zh) 2019-12-31 2019-12-31 Support point parallel enumeration load balancing method, apparatus, device, and medium
CN201911422115.9 2019-12-31

Publications (1)

Publication Number Publication Date
WO2021134810A1 true WO2021134810A1 (zh) 2021-07-08

Family

ID=70560678

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/071011 WO2021134810A1 (zh) 2019-12-31 2020-01-08 支撑点并行枚举负载均衡方法、装置、设备及介质

Country Status (2)

Country Link
CN (1) CN111158918B (zh)
WO (1) WO2021134810A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111737531B * 2020-06-12 2021-05-28 深圳计算科学研究院 Application-driven graph partition adjustment method and system
CN112668912B * 2020-12-31 2024-06-14 中软数科(海南)信息科技有限公司 Artificial neural network training method, dynamic computation partition scheduling method, storage medium, and system

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102480529A * 2010-11-24 2012-05-30 北京无线恒远科技有限公司 Domain name resolution method and domain name resolution server for wide area network load balancing
CN104407841A * 2014-11-25 2015-03-11 大连理工大学 AWS-based GPU parallel particle swarm optimization algorithm
EP3324304A1 * 2015-07-15 2018-05-23 ZTE Corporation Data processing method, device and system
CN108804383A * 2018-05-30 2018-11-13 深圳大学 Metric-space-based support point parallel enumeration method and apparatus

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101695050A * 2009-10-19 2010-04-14 浪潮电子信息产业股份有限公司 Dynamic load balancing method based on adaptive prediction of network traffic
CN103744643B * 2014-01-10 2016-09-21 浪潮(北京)电子信息产业有限公司 Method and apparatus for a multi-node parallel architecture under a multi-threaded program
CN105227410A * 2015-11-04 2016-01-06 浪潮(北京)电子信息产业有限公司 Method and system for server load detection based on an adaptive neural network
US10412158B2 * 2016-07-27 2019-09-10 Salesforce.Com, Inc. Dynamic allocation of stateful nodes for healing and load balancing
CN110704542A * 2019-10-15 2020-01-17 南京莱斯网信技术研究院有限公司 Dynamic data partitioning system based on node load

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102480529A * 2010-11-24 2012-05-30 北京无线恒远科技有限公司 Domain name resolution method and domain name resolution server for wide area network load balancing
CN104407841A * 2014-11-25 2015-03-11 大连理工大学 AWS-based GPU parallel particle swarm optimization algorithm
EP3324304A1 * 2015-07-15 2018-05-23 ZTE Corporation Data processing method, device and system
CN108804383A * 2018-05-30 2018-11-13 深圳大学 Metric-space-based support point parallel enumeration method and apparatus

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LU KEZHONG, MAO RUI, CHEN GUOLIANG: "Parallel Computing Framework for Big Data", SCIENCE BULLETIN, vol. 60, no. 5-6, 1 February 2015 (2015-02-01), pages 566 - 569, XP055826343, ISSN: 0023-074X, DOI: 10.1360/N972014-00834 *

Also Published As

Publication number Publication date
CN111158918A (zh) 2020-05-15
CN111158918B (zh) 2022-11-11

Similar Documents

Publication Publication Date Title
CN106290378B Defect classification method and defect inspection system
CN105488539B Classification model generation method and apparatus, and system capacity estimation method and apparatus
WO2023005120A1 Building energy consumption prediction method and apparatus, computer device, and storage medium
WO2023050534A1 Rail transit station equipment energy consumption prediction method, apparatus, device, and storage medium
WO2021134810A1 Support point parallel enumeration load balancing method, apparatus, device, and medium
US12079476B2 Data processing method, apparatus, device, and readable storage medium
WO2021121296A1 Exercise test data generation method and apparatus
CN111985831A Cloud computing resource scheduling method and apparatus, computer device, and storage medium
CN112613584A Fault diagnosis method, apparatus, device, and storage medium
WO2022088602A1 Similar-pair problem prediction method, apparatus, and electronic device
CN113707323A Machine-learning-based disease prediction method, apparatus, device, and medium
JP7355299B2 Training dataset generation system, training server, and training dataset generation program
CN116703046A Real-time dispatch order control method and system, electronic device, and storage medium
US11640558B2 Unbalanced sample classification method and apparatus
CN113919432A Classification model construction method, data classification method, and apparatus
CN112330512A Knowledge distillation learning model prediction method, system, device, and storage medium
EP3899820A1 Contact center call volume prediction
CN112860531B Blockchain broad-consensus performance evaluation method based on deep heterogeneous graph neural networks
CN116977256A Defect detection model training method, apparatus, device, and storage medium
CN115543638A Uncertainty-based edge computing data collection and analysis method, system, and device
CN112905166B Artificial intelligence programming system, computer device, and computer-readable storage medium
CN114490405A Resource requirement determination method, apparatus, device, and storage medium
CN113516398A Hierarchical-sampling-based risky device identification method, apparatus, and electronic device
JP2021099781A Method for scaling workers and inspectors of a crowdsourcing-based project for generating artificial intelligence training data
WO2020232899A1 Data analysis system consultation method and related apparatus

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20909209

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205 DATED 28/10/2022)

122 Ep: pct application non-entry in european phase

Ref document number: 20909209

Country of ref document: EP

Kind code of ref document: A1