CN105550161A - Parallel logic regression method and system for heterogeneous systems - Google Patents
- Publication number
- CN105550161A (application CN201510945415.0A)
- Authority
- CN
- China
- Prior art keywords
- vector
- gradient
- computing node
- row
- objective function
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
- G06F17/10—Complex mathematical operations
- G06F17/16—Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
Abstract
The invention discloses a parallel logistic regression method and system for heterogeneous systems. The method computes the gradient of the objective function of a logistic regression model in parallel: the feature vectors of the samples used in the gradient computation form a sample matrix, and the classification labels form a label vector; the sample matrix, the label vector, and the feature weight vector are each partitioned and distributed to a batch of computing nodes, which compute in parallel and merge their results to obtain the gradient over a large number of samples. The target feature weight vector is then determined from the gradient obtained by the parallel computation, completing the solution of the LR problem. With the disclosed method and system, a batch of computing nodes can be used to solve the LR problem of large-scale samples efficiently in parallel.
Description
Technical field
The present invention relates to the field of machine learning, and in particular to a parallel logistic regression method and system for heterogeneous systems.
Background technology
Logistic regression (LR) is one of the most commonly used classification algorithms in machine learning and is widely applied in the Internet field: it appears in click-through-rate (CTR) estimation in advertising systems, conversion-rate estimation in recommendation systems, and spam-content identification in anti-spam systems, among others. With its simple principle and broad applicability, LR is favored by many practitioners.
In the LR model, the values on the different dimensions of a feature vector are weighted by the feature weight vector and compressed into the range 0 to 1 by the logistic function; the result is taken as the probability that the sample is a positive sample. The curve of the logistic function is shown in Figure 1. Given M training samples (X_1, y_1), (X_2, y_2), …, (X_M, y_M), where X_j = {x_ji | i = 1, 2, …, N} is an N-dimensional feature vector and y_j is the classification label, taking the value +1 (positive sample) or −1 (negative sample), the probability in the LR model that the j-th sample is a positive sample is
P(y_j = 1 | W, X_j) = σ(W^T X_j) = 1 / (1 + exp(−W^T X_j)),
where σ is the logistic function and W is the N-dimensional feature weight vector, i.e., the model parameter to be solved for in the LR problem.
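For illustration, the probability above can be sketched in a few lines of Python; the weight and feature values below are hypothetical, chosen only so the arithmetic can be checked by hand.

```python
import math

def sigmoid(z):
    # Logistic function: compresses any real value into the range (0, 1).
    return 1.0 / (1.0 + math.exp(-z))

def prob_positive(w, x):
    # P(y = 1 | W, X) = sigma(W^T X), as in the LR model above.
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)))

# Hypothetical 3-dimensional weight vector and sample feature vector.
w = [0.5, -0.25, 1.0]
x = [1.0, 2.0, 0.5]
p = prob_positive(w, x)  # W^T X = 0.5, so p = sigma(0.5), roughly 0.622
```
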
Solving the LR problem means finding a suitable feature weight vector W such that, for the positive samples in the training set, the value of P(y_j = 1 | W, X_j) is as large as possible, while for the negative samples it is as small as possible, i.e., P(y_j = −1 | W, X_j) is as large as possible. Expressed as a joint probability, this is the maximization of
∏_{j=1}^{M} P(y_j | W, X_j) = ∏_{j=1}^{M} σ(y_j W^T X_j).
Taking the logarithm of the above and negating it, this is equivalent to minimizing
f(W) = ∑_{j=1}^{M} log(1 + exp(−y_j W^T X_j)).   (1)
Formula (1) is the objective function solved in LR. Finding a suitable W that minimizes the objective function f(W) is an unconstrained optimization problem. The common approach is to choose a random initial W_0 and iterate: in each iteration, compute the descent direction of the objective function and update W, until the objective function stabilizes at a minimum. The iterative process is shown in Figure 2.
Different optimization algorithms differ only in how they compute the descent direction D_t of the objective function. In practice, however, training must use large-scale sample data, and solving for D_t over such data involves a huge volume of computation; computing D_t directly for every sample on a single machine is inefficient.
Summary of the invention
In view of this, the main purpose of the present invention is to provide a parallel logistic regression method and system for heterogeneous systems that can solve the LR problem for large-scale samples efficiently.
To achieve the above object, the invention provides a parallel logistic regression method for a heterogeneous system, comprising:
obtaining the objective function of a logistic regression model;
computing the gradient of the objective function in parallel;
determining the target feature weight vector according to the calculation result.
The parallel computation of the gradient of the objective function comprises:
forming the classification labels of the M samples in the training set into an M-dimensional label vector, and the M N-dimensional feature vectors into an M×N sample matrix; obtaining a grid of computing nodes with m rows and n columns; partitioning the label vector and the sample matrix by rows, assigning M/m feature vectors and classification labels to each computing node; partitioning the sample matrix and the N-dimensional current feature weight vector by columns, assigning an N/n-dimensional feature-vector segment and weight-vector segment to each computing node;
having each computing node compute the dot products of its column segment of the feature weight vector with its column segments of the feature vectors, merging the results of the computing nodes with the same row index to obtain, for each row, the dot products of the current feature weight vector with the corresponding feature vectors, and returning each dot product result to the computing nodes of that row;
having each computing node compute the intermediate scalars of the objective-function gradient from the dot product results and its row segment of the label vector, multiplying each intermediate scalar by its row segments of the feature vectors, and merging the results of the computing nodes with the same column index to obtain the per-column components of the gradient vector;
merging the per-column components of the gradient vector to obtain the gradient of the objective function.
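As a sanity check of the scheme above, the four steps can be emulated serially on a toy instance (a hypothetical 2×2 node grid with M = 4 samples and N = 4 features; all data values are made up for the example):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy instance: M = 4 samples, N = 4 features, m x n = 2 x 2 emulated nodes.
X = [[1.0, 0.0, 2.0, 1.0],
     [0.0, 1.0, 1.0, 0.0],
     [2.0, 1.0, 0.0, 1.0],
     [1.0, 2.0, 1.0, 0.0]]
y = [1, -1, 1, -1]
W = [0.1, -0.2, 0.3, 0.4]
m = n = 2
rb, cb = len(X) // m, len(W) // n  # samples per row block, features per column block

def block(r, c):
    # Step 1: the rb x cb sub-matrix held by the node at grid position (r, c).
    return [row[c*cb:(c+1)*cb] for row in X[r*rb:(r+1)*rb]]

def wseg(c):
    # Step 1: the weight-vector segment held by the nodes of grid column c.
    return W[c*cb:(c+1)*cb]

# Step 2: partial dot products per node, merged (summed) across each grid row.
dots = {}
for r in range(m):
    for j in range(rb):
        dots[(r, j)] = sum(sum(wi * xi for wi, xi in zip(wseg(c), block(r, c)[j]))
                           for c in range(n))

# Steps 3 and 4: intermediate scalar times feature block per node, merged
# (summed) across each grid column; the column components concatenate into G.
G = []
for c in range(n):
    col = [0.0] * cb
    for r in range(m):
        for j in range(rb):
            yj = y[r*rb + j]
            s = (sigmoid(yj * dots[(r, j)]) - 1.0) * yj
            for i, xi in enumerate(block(r, c)[j]):
                col[i] += s * xi
    G.extend(col)
```

The grid-assembled gradient G matches the gradient obtained by the flat per-sample formula, which is the point of the row/column decomposition.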
Preferably, the objective function of the logistic regression model is:
f(W) = ∑_{j=1}^{M} log(1 + exp(−y_j W^T X_j)),
where W is the N-dimensional current feature weight vector, X_j is the N-dimensional sample feature vector, and y_j is the classification label.
Preferably, the gradient of the objective function is G_t, with
G_t = ∑_{j=1}^{M} [σ(y_j W_t^T X_j) − 1] y_j X_j.
Preferably, determining the target feature weight vector according to the calculation result comprises:
Step A: set the iteration count to 0 and determine the initial feature weight vector W_0 for iteration 0;
Step B: increment the iteration count, compute the gradient of the objective function in parallel from the current feature weight vector, compute the search direction value from the gradient, and update the current feature weight vector according to the search direction value;
Step C: judge whether the gradient value meets the preset iteration-stop condition; if so, go to step D, otherwise return to step B;
Step D: take the current feature weight vector as the target feature weight vector.
The present invention also provides a parallel logistic regression system for a heterogeneous system, comprising:
an objective function determination module, configured to obtain the objective function of a logistic regression model;
a parallel computation module, configured to compute the gradient of the objective function in parallel;
a target feature weight vector determination module, configured to determine the target feature weight vector according to the calculation result.
The parallel computation module comprises:
a computing node distribution submodule, configured to form the classification labels of the M samples in the training set into an M-dimensional label vector and the M N-dimensional feature vectors into an M×N sample matrix, obtain a grid of computing nodes with m rows and n columns, partition the label vector and the sample matrix by rows, assigning M/m feature vectors and classification labels to each computing node, and partition the sample matrix and the N-dimensional current feature weight vector by columns, assigning an N/n-dimensional feature-vector segment and weight-vector segment to each computing node;
a row parallel computation submodule, configured to have each computing node compute the dot products of its column segment of the feature weight vector with its column segments of the feature vectors, merge the results of the computing nodes with the same row index to obtain, for each row, the dot products of the current feature weight vector with the corresponding feature vectors, and return each dot product result to the computing nodes of that row;
a column parallel computation submodule, configured to have each computing node compute the intermediate scalars of the objective-function gradient from the dot product results and its row segment of the label vector, multiply each intermediate scalar by its row segments of the feature vectors, and merge the results of the computing nodes with the same column index to obtain the per-column components of the gradient vector;
a merging submodule, configured to merge the per-column components of the gradient vector to obtain the gradient of the objective function.
Preferably, the objective function of the logistic regression model is:
f(W) = ∑_{j=1}^{M} log(1 + exp(−y_j W^T X_j)),
where W is the N-dimensional current feature weight vector, X_j is the N-dimensional sample feature vector, and y_j is the classification label.
Preferably, the gradient of the objective function is G_t, with
G_t = ∑_{j=1}^{M} [σ(y_j W_t^T X_j) − 1] y_j X_j.
By applying the parallel logistic regression method and system for heterogeneous systems provided by the invention, the gradient of the objective function of the logistic regression model is computed in parallel: the feature vectors of the samples used in the gradient computation form a sample matrix, the classification labels form a label vector, and the sample matrix, the label vector, and the feature weight vector are each partitioned and distributed to a batch of computing nodes, which compute separately and merge their results to obtain the gradient value over a large number of samples. The target feature weight vector is then determined from the gradient obtained by the parallel computation, completing the solution of the LR problem; a batch of computing nodes can thus be used to solve the LR problem of large-scale samples efficiently in parallel.
Accompanying drawing explanation
To explain the embodiments of the present invention or the technical solutions in the prior art more clearly, the accompanying drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only embodiments of the invention; those of ordinary skill in the art can derive other drawings from them without creative effort.
Fig. 1 is the curve of the logistic function in the LR model;
Fig. 2 is a flow diagram of the iterative method of the LR model;
Fig. 3 is a flow diagram of embodiment one of the parallel logistic regression method for a heterogeneous system of the present invention;
Fig. 4 is a detailed flow diagram of embodiment one of the parallel logistic regression method for a heterogeneous system of the present invention;
Fig. 5 is a detailed schematic flow diagram of embodiment one of the parallel logistic regression method for a heterogeneous system of the present invention;
Fig. 6 is a structural diagram of embodiment two of the parallel logistic regression system for a heterogeneous system of the present invention;
Fig. 7 is a detailed structural diagram of embodiment two of the parallel logistic regression system for a heterogeneous system of the present invention.
Embodiment
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the invention without creative effort fall within the protection scope of the invention.
The present invention provides a parallel logistic regression method for a heterogeneous system. Fig. 3 shows the flow diagram of embodiment one of the parallel logistic regression method for a heterogeneous system of the present invention, which comprises:
Step S101: obtain the objective function of the logistic regression model.
Given M training samples (X_1, y_1), (X_2, y_2), …, (X_M, y_M), where X_j = {x_ji | i = 1, 2, …, N} is an N-dimensional feature vector and y_j is the classification label, taking the value +1 (positive sample) or −1 (negative sample), the probability in the LR model that the j-th sample is a positive sample is
P(y_j = 1 | W, X_j) = σ(W^T X_j) = 1 / (1 + exp(−W^T X_j)),
where σ is the logistic function and W is the N-dimensional feature weight vector, i.e., the model parameter to be solved for in the LR problem.
Solving the LR problem means finding a suitable feature weight vector W such that, for the positive samples in the training set, the value of P(y_j = 1 | W, X_j) is as large as possible, while for the negative samples it is as small as possible, i.e., P(y_j = −1 | W, X_j) is as large as possible. Expressed as a joint probability, this is the maximization of
∏_{j=1}^{M} P(y_j | W, X_j) = ∏_{j=1}^{M} σ(y_j W^T X_j).
Taking the logarithm of the above and negating it, this is equivalent to minimizing
f(W) = ∑_{j=1}^{M} log(1 + exp(−y_j W^T X_j)).   (1)
Step S102: compute the gradient of the objective function in parallel.
The descent direction of the above objective function is D_t = −G_t, where G_t is the gradient of the objective function:
G_t = ∇f(W_t) = ∑_{j=1}^{M} [σ(y_j W_t^T X_j) − 1] y_j X_j.
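For reference, the gradient formula can be sketched serially in Python, independent of the parallel scheme; the two-sample data set in the example is hypothetical, chosen so the result can be verified by hand (at W = 0, every σ(·) equals 0.5).

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def objective_gradient(w, X, y):
    # G_t = sum over j of [sigma(y_j W_t^T X_j) - 1] * y_j * X_j
    g = [0.0] * len(w)
    for xj, yj in zip(X, y):
        margin = yj * sum(wi * xi for wi, xi in zip(w, xj))
        s = (sigmoid(margin) - 1.0) * yj
        for i, xi in enumerate(xj):
            g[i] += s * xi
    return g

# At w = 0, each sample contributes -0.5 * y_j * X_j to the gradient.
g = objective_gradient([0.0, 0.0], [[1.0, 0.0], [0.0, 1.0]], [1, -1])
# g == [-0.5, 0.5]
```
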
As shown in Fig. 4, step S102 specifically comprises:
Step S201: form the classification labels of the M samples in the training set into an M-dimensional label vector, and the M N-dimensional feature vectors into an M×N sample matrix; obtain a grid of computing nodes with m rows and n columns; partition the label vector and the sample matrix by rows, assigning M/m feature vectors and classification labels to each computing node; partition the sample matrix and the N-dimensional current feature weight vector by columns, assigning an N/n-dimensional feature-vector segment and weight-vector segment to each computing node.
The master parameter server (node0) divides the data by task and, together with the other child nodes (node1, node2, …), builds a cooperative computing architecture of server node and computing nodes: the whole training data set is divided horizontally, in units of instances, across the machines for distributed computation; instance data of very high dimension are divided vertically into multiple sub-segments for distributed computation; and the partitioned data fragments are then distributed to all child node servers by broadcast. Each computing node builds a cooperative computing architecture of the CPU and many-core (MIC) coprocessors: the CPU and the multiple MIC coprocessors connected to the same single-node server form a framework for coordinated computation, in which the total number of devices is the number of CPUs plus the number of MIC coprocessors.
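The horizontal and vertical division described above can be sketched as follows; this is a simplified in-process emulation (a real deployment would broadcast the fragments to child node servers over the network), and all names are illustrative.

```python
def partition(X, y, m, n):
    # Horizontal division: split the M samples into m row blocks (per instance).
    # Vertical division: split the N feature dimensions into n column blocks.
    # Result: an m x n grid of data fragments plus per-row label segments.
    rb, cb = len(X) // m, len(X[0]) // n
    grid = [[[row[c*cb:(c+1)*cb] for row in X[r*rb:(r+1)*rb]]
             for c in range(n)]
            for r in range(m)]
    labels = [y[r*rb:(r+1)*rb] for r in range(m)]
    return grid, labels

X = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]]
y = [1, -1, 1, -1]
grid, labels = partition(X, y, 2, 2)
# grid[0][1] holds feature columns 2-3 of samples 0-1: [[3, 4], [7, 8]]
```
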
After distribution, the computing nodes holding features of the same sample have the same row index, and the computing nodes holding the same feature dimension of different samples have the same column index; the feature vector of one sample is thus split across the nodes of a single row, that is:
X_{r,k} = <X_{(r,1),k}, …, X_{(r,c),k}, …, X_{(r,n),k}>,
where X_{r,k} denotes the k-th feature vector on row r, and X_{(r,c),k} denotes the component of X_{r,k} on the node in column c. Likewise, W_c denotes the component of the feature weight vector W on the nodes in column c, that is:
W = <W_1, …, W_c, …, W_n>.
The gradient formula of the objective function depends on two computations: the dot product of the feature weight vector W_t with the feature vector X_j, and the multiplication of the scalar [σ(y_j W_t^T X_j) − 1] y_j with the feature vector X_j.
Step S202: have each computing node compute the dot products of its column segment of the feature weight vector with its column segments of the feature vectors; merge the results of the computing nodes with the same row index to obtain, for each row, the dot products of the current feature weight vector with the corresponding feature vectors, and return each dot product result to the computing nodes of that row.
Each computing node computes its partial dot products in parallel; the computing nodes with the same row index merge their partial results by summation,
W_t^T X_{r,k} = ∑_{c=1}^{n} W_c^T X_{(r,c),k},
and each dot product result is returned to all computing nodes of that row.
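The row-wise merge can be sketched as follows for a single grid row; each column node contributes partial dot products that are summed, emulating the reduction across the row (all values are hypothetical):

```python
def row_merge_dots(row_blocks, w_segments):
    # row_blocks[c]: the feature sub-block held by the column-c node of one
    # grid row; w_segments[c]: that node's segment of the weight vector.
    # Each node computes partial dot products; summing the partials of all
    # nodes in the row yields the full W^T X_j for each sample of the row.
    totals = [0.0] * len(row_blocks[0])
    for blk, ws in zip(row_blocks, w_segments):
        for j, sample in enumerate(blk):
            totals[j] += sum(wi * xi for wi, xi in zip(ws, sample))
    return totals

# Two column nodes, two samples in this row block (hypothetical values).
row_blocks = [[[1.0, 0.0], [0.0, 1.0]],   # node (r, 1): feature dims 1-2
              [[2.0, 1.0], [1.0, 0.0]]]   # node (r, 2): feature dims 3-4
w_segments = [[0.1, -0.2], [0.3, 0.4]]
dots = row_merge_dots(row_blocks, w_segments)  # full dot products per sample
```
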
Step S203: have each computing node compute the intermediate scalars of the objective-function gradient from the dot product results and its row segment of the label vector, and multiply each intermediate scalar by its row segments of the feature vectors; merge the results of the computing nodes with the same column index to obtain the per-column components of the gradient vector.
Each computing node independently multiplies the row component of the scalar [σ(y_j W_t^T X_j) − 1] y_j with the row component of the feature vector X_j, computing G_{(r,c),t}; the nodes with the same column index are then merged to obtain the per-column components of the gradient vector,
G_{c,t} = ∑_{r=1}^{m} G_{(r,c),t}.
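The column-wise step can be sketched analogously for a single grid column, assuming the dot products have already been returned to the row nodes as described in step S202 (all values are hypothetical):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def column_merge_gradient(col_blocks, labels, dots):
    # col_blocks[r]: the feature sub-block held by the row-r node of one grid
    # column; labels[r], dots[r]: the matching label and dot product segments.
    # Each node forms the scalar [sigma(y_j W^T X_j) - 1] * y_j and multiplies
    # it by its feature rows; summing over the column gives the component G_(c,t).
    width = len(col_blocks[0][0])
    g_col = [0.0] * width
    for blk, ys, ds in zip(col_blocks, labels, dots):
        for sample, yj, dj in zip(blk, ys, ds):
            s = (sigmoid(yj * dj) - 1.0) * yj
            for i, xi in enumerate(sample):
                g_col[i] += s * xi
    return g_col

# Two row nodes, one sample each; dot products taken as 0, so sigma(0) = 0.5.
g_col = column_merge_gradient([[[1.0, 2.0]], [[0.5, 0.0]]],
                              [[1], [-1]], [[0.0], [0.0]])
# g_col == [-0.25, -1.0]
```
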
Step S204: merge the per-column components of the gradient vector to obtain the gradient of the objective function.
The root node receives the gradient components and merges them into the gradient value of the objective function, G_t = <G_{1,t}, …, G_{n,t}>.
Step S103: determine the target feature weight vector according to the calculation result.
Step S103 specifically comprises:
Step A: set the iteration count to 0 and determine the initial feature weight vector W_0 for iteration 0;
Step B: increment the iteration count, compute the gradient of the objective function in parallel from the current feature weight vector, compute the search direction value from the gradient, and update the current feature weight vector according to the search direction value;
Step C: judge whether the gradient value meets the preset iteration-stop condition; if so, go to step D, otherwise return to step B;
Step D: take the current feature weight vector as the target feature weight vector.
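Steps A through D amount to a plain gradient-descent loop. The sketch below uses a serial gradient in place of the parallel one; the step size, stop threshold, and one-dimensional toy data set (whose optimum works out to w = ln 2) are illustrative choices, not prescribed by the patent.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def gradient(w, X, y):
    g = [0.0] * len(w)
    for xj, yj in zip(X, y):
        s = (sigmoid(yj * sum(a * b for a, b in zip(w, xj))) - 1.0) * yj
        g = [gi + s * xi for gi, xi in zip(g, xj)]
    return g

def train(X, y, step=0.5, tol=1e-6, max_iter=1000):
    w = [0.0] * len(X[0])                  # Step A: iteration 0, initial W_0
    for _ in range(max_iter):              # Step B: iterate
        g = gradient(w, X, y)
        if sum(gi * gi for gi in g) ** 0.5 < tol:
            break                          # Step C: stop condition satisfied
        w = [wi - step * gi for wi, gi in zip(w, g)]   # update along D_t = -G_t
    return w                               # Step D: target feature weight vector

w = train([[1.0], [1.0], [-1.0]], [1, -1, -1])
# converges to w[0] close to ln 2 (about 0.6931)
```
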
The detailed flow of this embodiment is shown in Fig. 5. The central processing unit (CPU) of each child node allocates M threads and assigns each thread a data fragment of 1/M, i.e., M new data sets M'; the dot product results for these M different data sets are computed on different MIC cards. That is, for the first data set M1' and its corresponding thread, M1' is sent to the corresponding MIC card, and the dot product result for the data set M1' is computed there; the same operation is performed in parallel for the other sub-data sets M2', M3', … of the child node. Each MIC coprocessor thus holds one data set; after all data sets have been computed, the dot product results of each child node are aggregated to the parameter server node.
By applying the parallel logistic regression method for a heterogeneous system provided by this embodiment, the gradient of the objective function of the logistic regression model is computed in parallel: the feature vectors of the samples used in the gradient computation form a sample matrix, the classification labels form a label vector, and the sample matrix, the label vector, and the feature weight vector are each partitioned and distributed to a batch of computing nodes, which compute separately and merge their results to obtain the gradient value over a large number of samples. The target feature weight vector is then determined from the gradient obtained by the parallel computation, completing the solution of the LR problem; a batch of computing nodes can thus be used to solve the LR problem of large-scale samples efficiently in parallel.
The present invention also provides a parallel logistic regression system for a heterogeneous system. Fig. 6 shows the structural diagram of embodiment two of the parallel logistic regression system for a heterogeneous system of the present invention, which comprises:
an objective function determination module 101, configured to obtain the objective function of a logistic regression model;
a parallel computation module 102, configured to compute the gradient of the objective function in parallel;
a target feature weight vector determination module 103, configured to determine the target feature weight vector according to the calculation result.
As shown in Fig. 7, the parallel computation module 102 specifically comprises:
a computing node distribution submodule 201, configured to form the classification labels of the M samples in the training set into an M-dimensional label vector and the M N-dimensional feature vectors into an M×N sample matrix, obtain a grid of computing nodes with m rows and n columns, partition the label vector and the sample matrix by rows, assigning M/m feature vectors and classification labels to each computing node, and partition the sample matrix and the N-dimensional current feature weight vector by columns, assigning an N/n-dimensional feature-vector segment and weight-vector segment to each computing node;
a row parallel computation submodule 202, configured to have each computing node compute the dot products of its column segment of the feature weight vector with its column segments of the feature vectors, merge the results of the computing nodes with the same row index to obtain, for each row, the dot products of the current feature weight vector with the corresponding feature vectors, and return each dot product result to the computing nodes of that row;
a column parallel computation submodule 203, configured to have each computing node compute the intermediate scalars of the objective-function gradient from the dot product results and its row segment of the label vector, multiply each intermediate scalar by its row segments of the feature vectors, and merge the results of the computing nodes with the same column index to obtain the per-column components of the gradient vector;
a merging submodule 204, configured to merge the per-column components of the gradient vector to obtain the gradient of the objective function.
The objective function of the logistic regression model in this embodiment is:
f(W) = ∑_{j=1}^{M} log(1 + exp(−y_j W^T X_j)),
where W is the N-dimensional current feature weight vector, X_j is the N-dimensional sample feature vector, and y_j is the classification label; the gradient of the objective function is G_t, with
G_t = ∑_{j=1}^{M} [σ(y_j W_t^T X_j) − 1] y_j X_j.
By applying the parallel logistic regression system for a heterogeneous system provided by this embodiment, the gradient of the objective function of the logistic regression model is computed in parallel: the feature vectors of the samples used in the gradient computation form a sample matrix, the classification labels form a label vector, and the sample matrix, the label vector, and the feature weight vector are each partitioned and distributed to a batch of computing nodes, which compute separately and merge their results to obtain the gradient value over a large number of samples. The target feature weight vector is then determined from the gradient obtained by the parallel computation, completing the solution of the LR problem; a batch of computing nodes can thus be used to solve the LR problem of large-scale samples efficiently in parallel.
It should be noted that the embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and the identical or similar parts of the embodiments may be referred to one another. Since the system embodiments are basically similar to the method embodiments, their description is relatively simple; for the relevant parts, refer to the description of the method embodiments.
Finally, it should also be noted that, herein, the terms "comprise", "include", or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device comprising a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the statement "comprising a ..." does not exclude the existence of other identical elements in the process, method, article, or device comprising that element.
The method and system provided by the present invention have been described in detail above. Specific examples are used herein to explain the principle and implementation of the invention; the description of the above embodiments is only intended to help understand the method of the invention and its core idea. Meanwhile, for those of ordinary skill in the art, changes may be made to the specific implementation and application scope according to the idea of the invention. In summary, the contents of this specification should not be construed as limiting the invention.
Claims (7)
1. A parallel logistic regression method for a heterogeneous system, characterized by comprising:
obtaining the objective function of a logistic regression model;
computing the gradient of the objective function in parallel;
determining the target feature weight vector according to the calculation result;
wherein computing the gradient of the objective function in parallel comprises:
forming the classification labels of the M samples in the training set into an M-dimensional label vector, and the M N-dimensional feature vectors into an M×N sample matrix; obtaining a grid of computing nodes with m rows and n columns; partitioning the label vector and the sample matrix by rows, assigning M/m feature vectors and classification labels to each computing node; partitioning the sample matrix and the N-dimensional current feature weight vector by columns, assigning an N/n-dimensional feature-vector segment and weight-vector segment to each computing node;
having each computing node compute the dot products of its column segment of the feature weight vector with its column segments of the feature vectors, merging the results of the computing nodes with the same row index to obtain, for each row, the dot products of the current feature weight vector with the corresponding feature vectors, and returning each dot product result to the computing nodes of that row;
having each computing node compute the intermediate scalars of the objective-function gradient from the dot product results and its row segment of the label vector, multiplying each intermediate scalar by its row segments of the feature vectors, and merging the results of the computing nodes with the same column index to obtain the per-column components of the gradient vector;
merging the per-column components of the gradient vector to obtain the gradient of the objective function.
2. The parallel logistic regression method for a heterogeneous system according to claim 1, characterized in that the objective function of the logistic regression model is:
f(W) = ∑_{j=1}^{M} log(1 + exp(−y_j W^T X_j)),
where W is the N-dimensional current feature weight vector, X_j is the N-dimensional sample feature vector, and y_j is the classification label.
3. The parallel logistic regression method for a heterogeneous system according to claim 2, characterized in that the gradient of the objective function is G_t, with
G_t = ∑_{j=1}^{M} [σ(y_j W_t^T X_j) − 1] y_j X_j.
4. The parallel logistic regression method for a heterogeneous system according to claim 3, characterized in that determining the target feature weight vector according to the calculation result comprises:
Step A: setting the iteration count to 0 and determining the initial feature weight vector W_0 for iteration 0;
Step B: incrementing the iteration count, computing the gradient of the objective function in parallel from the current feature weight vector, computing the search direction value from the gradient, and updating the current feature weight vector according to the search direction value;
Step C: judging whether the gradient value meets the preset iteration-stop condition; if so, going to step D, otherwise returning to step B;
Step D: taking the current feature weight vector as the target feature weight vector.
5. A parallel logistic regression system for a heterogeneous system, comprising:
an objective function determination module, configured to obtain the objective function of a logistic regression model;
a parallel computation module, configured to compute the gradient of the objective function in parallel;
a target feature weight vector determination module, configured to determine a target feature weight vector according to the calculation result;
wherein the parallel computation module comprises:
a computing-node distribution submodule, configured to form an M-dimensional label vector from the classification labels of the M samples in a training set, form an M×N sample matrix from the M N-dimensional feature vectors, obtain computing nodes arranged in m rows and n columns, partition the label vector and the sample matrix by row so that each computing node is allocated M/m feature vectors and classification labels, and partition the sample matrix and the N-dimensional current feature weight vector by column so that each computing node is allocated N/n-dimensional feature vectors and the current feature weight vector;
a row parallel computation submodule, configured to cause each computing node to compute the dot product of the corresponding components of the column-partitioned feature weight vector and the column-partitioned feature vectors, sum and return the calculation results of the computing nodes having the same row number to obtain, for each row, the dot product of the current feature weight vector with the corresponding feature vector, and return each said dot-product result to the computing nodes of the corresponding row;
a column parallel computation submodule, configured to cause each computing node to calculate intermediate scalars of the objective-function gradient from the corresponding components of each said dot-product result and of the row-partitioned label vector, multiply each said intermediate scalar by the corresponding components of the row-partitioned feature vectors, and sum and return the calculation results of the computing nodes having the same column number, so as to obtain the component of the gradient vector for each column;
a merging submodule, configured to merge the components of the gradient vector for each column to obtain the gradient of the objective function.
6. The parallel logistic regression system for a heterogeneous system according to claim 5, wherein the objective function of the logistic regression model is given by the formula in which w is the N-dimensional current feature weight vector, X_j is the N-dimensional sample feature vector, and y_j is the classification label.
7. The parallel logistic regression system for a heterogeneous system according to claim 6, wherein the gradient of the objective function is G_t.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510945415.0A CN105550161A (en) | 2015-12-16 | 2015-12-16 | Parallel logic regression method and system for heterogeneous systems |
Publications (1)
Publication Number | Publication Date |
---|---|
CN105550161A true CN105550161A (en) | 2016-05-04 |
Family
ID=55829350
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510945415.0A Pending CN105550161A (en) | 2015-12-16 | 2015-12-16 | Parallel logic regression method and system for heterogeneous systems |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105550161A (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106407561A (en) * | 2016-09-19 | 2017-02-15 | 复旦大学 | A division method of the parallel GPDT algorithm on a multi-core SOC |
CN106407561B (en) * | 2016-09-19 | 2020-07-03 | 复旦大学 | Method for dividing parallel GPDT algorithm on multi-core SOC |
CN113240100A (en) * | 2021-07-12 | 2021-08-10 | 深圳市永达电子信息股份有限公司 | Parallel computing method and system based on discrete Hopfield neural network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | | Application publication date: 20160504 |