CN107015946A - A method for distributed high-order SVD and its incremental computation - Google Patents
A method for distributed high-order SVD and its incremental computation
- Publication number
- CN107015946A CN107015946A CN201610056751.4A CN201610056751A CN107015946A CN 107015946 A CN107015946 A CN 107015946A CN 201610056751 A CN201610056751 A CN 201610056751A CN 107015946 A CN107015946 A CN 107015946A
- Authority
- CN
- China
- Prior art keywords
- tensor
- CalculateWorker
- OrderMaster
- HOSVD
- nodes
- Prior art date: 2016-01-27
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
- G06F17/10—Complex mathematical operations
- G06F17/16—Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5027—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2209/00—Indexing scheme relating to G06F9/00
- G06F2209/50—Indexing scheme relating to G06F9/50
- G06F2209/5017—Task decomposition
Abstract
The invention discloses a method for distributed high-order SVD (HOSVD) and its incremental computation. Tensor decomposition currently relies on single-machine HOSVD methods. To remedy the deficiencies of that technology, the invention introduces a distributed framework and its design concepts, modifies the traditional single-machine HOSVD algorithm, and parallelizes it, thereby solving the memory overflow and long processing times that limited single-machine memory causes during HOSVD decomposition. An HOSVD algorithm for incrementally arriving tensors is also realized, which improves the efficiency of tensor decomposition and makes it better suited to big-data applications.
Description
Technical field
The present invention relates to the field of big-data processing, and in particular to a method for distributed high-order SVD and its incremental computation.
Technical background
With the arrival of the cloud era, big data has attracted the attention of more and more people. Big data is characterized by rich associations, high dimensionality, and many variables, and the tensor, as a representation of high-dimensional data structures, accommodates these characteristics well. More and more big-data applications therefore organize their data as tensors and process and analyze it using the theory of high-dimensional arrays. Singular value decomposition (SVD) is a matrix factorization technique widely used in big-data processing, and high-order SVD (HOSVD) is the form of SVD mapped to tensors in higher-dimensional spaces. It can approximate the original tensor with less data and efficiently extract the core data and the relationships within it, greatly reducing the volume of data to be processed. Research shows that HOSVD tends to achieve good results in latent semantic analysis, recommendation, and image processing.
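As context for what the invention parallelizes, a single-machine HOSVD can be sketched in a few lines of NumPy. This is a minimal illustration, not the patent's implementation; NumPy's LAPACK-backed `svd` stands in here for the one-sided Jacobi SVD used later in the description, and the helper names are invented:

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: move axis `mode` to the front, flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(T):
    """Plain single-machine HOSVD: one SVD per mode, then the core tensor."""
    U = [np.linalg.svd(unfold(T, n), full_matrices=False)[0] for n in range(T.ndim)]
    core = T
    for n, Un in enumerate(U):
        # Mode-n product with Un^T: contract axis n of the core against Un's rows.
        core = np.moveaxis(np.tensordot(Un.T, core, axes=(1, n)), 0, n)
    return core, U

# Multiplying the core back by every factor matrix reconstructs the tensor.
T = np.random.rand(4, 5, 6)
core, U = hosvd(T)
R = core
for n, Un in enumerate(U):
    R = np.moveaxis(np.tensordot(Un, R, axes=(1, n)), 0, n)
```

On a single machine every unfolding must fit in memory at once, which is exactly the limitation the distributed scheme below removes.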
Judging from current research methods and applications, tensor decomposition usually relies on single-machine HOSVD methods. For example, the patent "Multi-focus image fusion method based on higher-order singular value decomposition and fuzzy reasoning" filed by Jiangnan University (application number CN201410057924.5, publication number CN103985104A) uses a single-machine HOSVD decomposition in its embodiments. A general-purpose computer, however, cannot handle tensors with a huge data volume: their total size exceeds the memory limit, and processing so much data takes a very long time. These methods are therefore inefficient for large tensors, which keeps them from being applied directly to real big-data scenarios. Decomposing a large-scale tensor with HOSVD consumes substantial memory and time; how to effectively prevent the memory overflow that this consumption causes during decomposition, and how to accelerate the decomposition, are the key problems we face, so research into more efficient HOSVD computation is both important and urgent.
To address the above problems, the present invention introduces a distributed framework and its design concepts, modifies the traditional single-machine HOSVD decomposition algorithm, parallelizes it, and also realizes an HOSVD decomposition algorithm for incrementally arriving tensors.
Content of the invention
The object of the present invention is to remedy the above deficiencies of the prior art by providing a distributed HOSVD decomposition method that solves the memory overflow and excessive processing time caused by limited single-machine memory during HOSVD decomposition. The method also applies to HOSVD decomposition of incrementally arriving tensors, improving the efficiency of tensor decomposition and making it better suited to big data.
To achieve these goals, the present invention adopts the following technical scheme:
1. A distributed system architecture fusing tree and ring structures, whose main body consists of four classes of nodes: PartitionWorker, OrderMaster, CalculateWorker, and RoundRobinWorker.
a) The whole distributed system contains exactly one PartitionWorker node, which is responsible for slicing and distributing the original tensor: it cuts the tensor to be processed into minimum-unit sub-tensor blocks and distributes each block to a different CalculateWorker for HOSVD decomposition. The PartitionWorker is also the management node of the whole distributed system. Once it finishes slicing and distributing the original tensor, it begins to monitor the whole system; if a fault occurs anywhere, it is reported upward level by level, and any failure that the lower-level nodes cannot resolve is eventually reported to the PartitionWorker node, which makes the final decision.
b) OrderMaster nodes are created by the PartitionWorker. If the original tensor has N orders (modes), the distributed system contains N+1 OrderMaster nodes (OrderMaster0, OrderMaster1, ..., OrderMasterN), which are responsible for generating CalculateWorker nodes. The CalculateWorker nodes handle the matrix computations of the distribution and reduction processes. Each OrderMaster also manages its CalculateWorkers: when a CalculateWorker fails, the OrderMaster handles the failure, and if it cannot, it reports the failure to the PartitionWorker. When a CalculateWorker finishes its computation, the corresponding OrderMaster stops that CalculateWorker task to release the memory the task occupies.
c) CalculateWorker nodes are created by OrderMasters. If, during the slicing and distribution of the original tensor, K blocks are cut along its first order, then OrderMaster1 generates K CalculateWorkers (CalculateWorker0, CalculateWorker1, ..., CalculateWorkerK-1), each responsible for the matrix computations of the distribution or reduction process. If the original tensor has N orders, the distributed system contains the N+1 OrderMaster nodes OrderMaster0, OrderMaster1, ..., OrderMasterN. The CalculateWorkers generated by OrderMasterN handle the HOSVD decomposition of the minimum-unit sub-tensor blocks, i.e. the computations of the slicing and distribution stage, while the CalculateWorkers generated by OrderMaster0 through OrderMasterN-1 handle the merging of sub-tensor blocks and the matrix orthogonalization of the reduction process.
d) During merging and reduction, a CalculateWorker node's memory may overflow, because OrderMaster0/CalculateWorker0 receives the results computed by OrderMaster1/CalculateWorker0, OrderMaster1/CalculateWorker1, ..., OrderMaster1/CalculateWorkerK-1. When these intermediate results are too many or too large, the memory of a single CalculateWorker node overflows. In that case we create a series of RoundRobinWorker nodes, shard the data of the merging and reduction process, and distribute the shards across the RoundRobinWorker nodes, which handle the data that a single CalculateWorker node cannot compute because of its limited memory.
2. A tensor slicing and distribution method; slicing and distribution take place on the PartitionWorker node.
a) For a tensor of order N, we first cut along mode 1, then cut each resulting sub-tensor block along mode 2, then cut those blocks along mode 3, and so on until the mode-N cut completes, producing a series of minimum-unit sub-tensor blocks.
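The mode-by-mode slicing described above can be sketched as follows. This is a hypothetical illustration in NumPy; `block_tensor` and its `cuts` parameter are names invented here, not taken from the patent:

```python
import numpy as np
from itertools import product

def block_tensor(T, cuts):
    """Cut T along every mode in turn; cuts[n] = number of pieces along mode n.
    Returns a dict mapping a block index tuple to its sub-tensor block."""
    ranges = []
    for n, k in enumerate(cuts):
        # np.array_split yields k nearly equal index ranges along axis n.
        edges = np.array_split(np.arange(T.shape[n]), k)
        ranges.append([(idx[0], idx[-1] + 1) for idx in edges])
    blocks = {}
    for index in product(*[range(k) for k in cuts]):
        slices = tuple(slice(*ranges[n][i]) for n, i in enumerate(index))
        blocks[index] = T[slices]
    return blocks

T = np.arange(2 * 4 * 6).reshape(2, 4, 6)
blocks = block_tensor(T, (2, 2, 2))   # 8 minimum-unit sub-tensor blocks
```

Each block index tuple here plays the role of the block's position in the original tensor, which the distribution and later merging steps rely on.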
b) After slicing, the sub-tensor blocks must be distributed to the CalculateWorkers for HOSVD decomposition. The PartitionWorker notifies OrderMasterN to create a series of CalculateWorker nodes; once they have all been created successfully, the PartitionWorker sends each minimum-unit sub-tensor block to its corresponding CalculateWorker, which handles the block's HOSVD decomposition. When decomposition finishes, OrderMasterN notifies OrderMasterN-1 to create a series of new CalculateWorkers, and the decomposition results are sent to them for the merging and reduction of the sub-tensor blocks. Once the new CalculateWorkers have received all the decomposition results, OrderMasterN-1 notifies OrderMasterN to stop the CalculateWorkers it generated and release the memory they occupy.
3. A tensor merging and reduction method; merging and reduction take place on the CalculateWorker nodes produced by OrderMaster0 through OrderMasterN-1.
a) Merging can be regarded as the inverse of slicing. Given a series of minimum-unit sub-tensor blocks, we need to merge them back into the original large tensor: first merge the minimum-unit blocks along mode N, then merge the resulting blocks along mode N-1, and so on, until the final merge along mode 1 reforms the original tensor.
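For a third-order tensor halved along each mode, this merge order (mode 3, then mode 2, then mode 1) is simply the reverse of the slicing order, as the following NumPy sketch illustrates on an assumed 4×4×4 example (not taken from the patent):

```python
import numpy as np

T = np.arange(4 * 4 * 4).reshape(4, 4, 4)

# Slicing: halve along mode 1, then mode 2, then mode 3 -> 8 blocks.
halves1 = np.split(T, 2, axis=0)
halves2 = [np.split(h, 2, axis=1) for h in halves1]
nested = [np.split(h, 2, axis=2) for row in halves2 for h in row]
blocks = [b for pair in nested for b in pair]   # 8 minimum-unit sub-tensors

# Merging is the inverse: concatenate along mode 3, then mode 2, then mode 1.
pairs3 = [np.concatenate(blocks[i:i + 2], axis=2) for i in range(0, 8, 2)]
pairs2 = [np.concatenate(pairs3[i:i + 2], axis=1) for i in range(0, 4, 2)]
restored = np.concatenate(pairs2, axis=0)
```

The blocks are kept in the lexicographic order of their positions, so each merge level only needs to join consecutive pairs.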
b) When merging along mode i, the intermediate decomposition results (the U, Σ, V matrices obtained by a one-sided Jacobi SVD of each sub-tensor block's mode-j unfolding) must be spliced and restored in one of the following ways.
1) If i == j, the intermediate results are appended by row. Splice the V·Σ matrices together row-wise in the order of the sub-tensor blocks' positions in the original tensor, and splice the U matrices together block-diagonally. (Merging and reducing two sub-tensor blocks, for example, builds the A matrix from (V1·Σ1, V2·Σ2) and the M matrix from the U's.)
2) If i == (j+1) % N, the intermediate results are appended by column. Splice the U·Σ matrices together column-wise in the order of the sub-tensor blocks' positions in the original tensor, and splice the V matrices together block-diagonally. (Merging and reducing two sub-tensor blocks builds the A matrix from (U1·Σ1, U2·Σ2) and the M matrix from the V's.)
3) Otherwise, the intermediate results are interleaved by column. Splice the U·Σ matrices together column-wise in the order of the sub-tensor blocks' positions in the original tensor, splice the V matrices together block-diagonally, and then reorder the rows of the V matrix according to where each sub-block's mode-j unfolding lands inside the merged block's mode-j unfolding. (Merging and reducing two sub-tensor blocks builds the A matrix from (U1·Σ1, U2·Σ2), an elementary transformation matrix K, and the M matrix.)
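One reading of case 1) for two blocks: if each block's unfolding has the SVD X_k = U_k·Σ_k·V_kᵀ, then stacking the blocks by row factors exactly into a block-diagonal M of the U's times an A built from the stacked Σ_k·V_kᵀ rows. A hedged NumPy sketch (the `blkdiag` helper is defined here purely for illustration):

```python
import numpy as np

def blkdiag(U1, U2):
    """Place U1 and U2 on the diagonal of a larger zero matrix."""
    m1, n1 = U1.shape
    m2, n2 = U2.shape
    out = np.zeros((m1 + m2, n1 + n2))
    out[:m1, :n1] = U1
    out[m1:, n1:] = U2
    return out

# Two sub-blocks of the same unfolding that share columns (case i == j).
X1 = np.random.rand(3, 4)
X2 = np.random.rand(5, 4)
U1, s1, V1t = np.linalg.svd(X1, full_matrices=False)
U2, s2, V2t = np.linalg.svd(X2, full_matrices=False)

M = blkdiag(U1, U2)                 # the "M matrix": spliced U's
A = np.vstack([s1[:, None] * V1t,   # Sigma_1 V_1^T
               s2[:, None] * V2t])  # Sigma_2 V_2^T
merged = M @ A                      # equals the row-stacked [X1; X2] exactly
```

Orthogonalizing A and right-multiplying M by the same rotations (step c) below) then restores an SVD of the merged block without ever re-reading the raw data.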
c) After the merge completes, the intermediate data from step b) must be orthogonalized. Here the one-sided Jacobi SVD method is used to orthogonalize the merged A matrix: A is right-multiplied by an orthogonal matrix V that makes its columns mutually orthogonal, where V is the product of a chain of Jacobi rotation matrices, V = J1·J2·J3·...·Jk; the M matrix is then right-multiplied by the same orthogonal matrix. In this way the SVD results U, Σ, V of the merged matrix are computed, and the result is sent to the CalculateWorker of the layer above.
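The one-sided Jacobi orthogonalization described here can be sketched as follows. This is a textbook Hestenes-style implementation, assumed rather than taken from the patent: column pairs of A are rotated until mutually orthogonal, the rotations accumulate into V, and the column norms of the rotated A give Σ.

```python
import numpy as np

def one_sided_jacobi_svd(A, tol=1e-12, max_sweeps=30):
    """Right-multiply A by Jacobi rotations until its columns are mutually
    orthogonal; the accumulated rotations form V, the column norms Sigma."""
    A = A.astype(float).copy()
    n = A.shape[1]
    V = np.eye(n)
    for _ in range(max_sweeps):
        off = 0.0
        for p in range(n - 1):
            for q in range(p + 1, n):
                gamma = A[:, p] @ A[:, q]
                off = max(off, abs(gamma))
                if abs(gamma) < tol:
                    continue
                alpha = A[:, p] @ A[:, p]
                beta = A[:, q] @ A[:, q]
                zeta = (beta - alpha) / (2.0 * gamma)
                # Smaller-magnitude root of t^2 + 2*zeta*t - 1 = 0.
                t = 1.0 if zeta == 0 else np.sign(zeta) / (abs(zeta) + np.hypot(1.0, zeta))
                c = 1.0 / np.hypot(1.0, t)
                s = c * t
                # Rotate the column pair in both A and V.
                Ap, Aq = A[:, p].copy(), A[:, q].copy()
                A[:, p], A[:, q] = c * Ap - s * Aq, s * Ap + c * Aq
                Vp, Vq = V[:, p].copy(), V[:, q].copy()
                V[:, p], V[:, q] = c * Vp - s * Vq, s * Vp + c * Vq
        if off < tol:   # a full sweep triggered no rotation: converged
            break
    sigma = np.linalg.norm(A, axis=0)
    U = A / np.where(sigma > 0, sigma, 1.0)
    return U, sigma, V

rng = np.random.default_rng(0)
X = rng.random((6, 4))
U, s, V = one_sided_jacobi_svd(X)
```

With `tol` as an absolute threshold this sketch assumes columns of roughly unit scale; production codes use a relative convergence criterion and order the singular values.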
4. An incremental computation method for distributed HOSVD. Its overall idea matches the distributed HOSVD decomposition method, except that the PartitionWorker node is no longer responsible for slicing and distributing the original tensor blocks. Instead, following the ordering rules of tensor merging, it tells each CalculateWorker node under OrderMasterN the position of its tensor block; that CalculateWorker fetches the block's data itself, performs the HOSVD decomposition, and reports the result to the CalculateWorker of the layer above, thereby achieving incremental HOSVD decomposition.
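The patent does not spell out an update formula, but the standard row-incremental SVD merge that fits this scheme looks as follows. This is a hedged sketch: `incremental_svd_rows` is a name invented here, and it shows how a newly arrived row-block B can be folded into the stored factors of A without re-reading A itself.

```python
import numpy as np

def incremental_svd_rows(U, s, Vt, B):
    """Given an SVD A = U diag(s) Vt and a new row-block B, return an SVD
    of [A; B] using only the stored factors, never A itself."""
    # [A; B] = blkdiag(U, I) @ K with K = [diag(s) Vt; B]; K is small when
    # the column count is modest, so a direct SVD of K is cheap.
    K = np.vstack([s[:, None] * Vt, B])
    Uk, sk, Vkt = np.linalg.svd(K, full_matrices=False)
    # Lift the left factor back through blkdiag(U, I).
    m, r = U.shape
    Unew = np.zeros((m + B.shape[0], Uk.shape[1]))
    Unew[:m] = U @ Uk[:r]
    Unew[m:] = Uk[r:]
    return Unew, sk, Vkt

A = np.random.rand(6, 4)
B = np.random.rand(2, 4)          # incrementally arriving data
U, s, Vt = np.linalg.svd(A, full_matrices=False)
Un, sn, Vnt = incremental_svd_rows(U, s, Vt, B)
```

Because blkdiag(U, I) has orthonormal columns, the lifted factor is again orthonormal, so the update is an exact SVD of the grown matrix.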
Compared with the prior art, the present invention has notable advantages: (1) it solves the problem that a single machine cannot perform HOSVD decomposition when the total data volume of the tensor is too large; (2) the distributed framework fusing tree and ring structures greatly accelerates HOSVD decomposition; (3) it proposes an incremental computation method for distributed HOSVD, making tensor decomposition more flexible. The distributed high-order SVD decomposition and incremental computation method set forth in the present invention are adaptable and practical, and make tensor models better and more easily applicable in big-data environments.
Brief description of the drawings:
Fig. 1: the slicing scheme for a third-order tensor
Fig. 2: the distributed system architecture fusing tree and ring structures
Fig. 3: the overall flow of the distributed HOSVD decomposition algorithm
Embodiment:
The embodiment of the present invention performs distributed HOSVD decomposition on a third-order tensor of arbitrary size. The present invention is further described below with reference to the drawings and the specific embodiment.
1. The tensor slicing scheme
As shown in Fig. 1, for a third-order tensor we cut uniformly along each mode during slicing. The procedure comprises three steps:
1) Cut the original tensor along mode 1: the original tensor is cut uniformly into an upper and a lower sub-tensor.
2) Cut the mode-1 sub-tensor blocks along mode 2: each sub-tensor from the mode-1 cut is cut uniformly into a left and a right sub-tensor.
3) Cut the mode-2 sub-tensor blocks along mode 3: each sub-tensor from the mode-2 cut is cut uniformly into a front and a back sub-tensor.
After these three steps, the original tensor has been cut uniformly into 8 minimum-unit sub-tensor blocks, which then need to be distributed to different nodes for HOSVD decomposition.
2. The distributed HOSVD system architecture fusing tree and ring structures
As shown in Fig. 2, for a third-order tensor the distributed system contains one PartitionWorker node and 4 OrderMaster nodes (OrderMaster0 through OrderMaster3); under each OrderMaster node, several CalculateWorker nodes (the dots labeled 0-7 in the figure) and RoundRobinWorker nodes (the unlabeled dots) can be generated.
1) The whole distributed system contains exactly one PartitionWorker node, which slices and distributes the original tensor while supervising the whole distributed system.
2) OrderMaster nodes are created by the PartitionWorker node; they create and stop the CalculateWorker nodes and supervise the running state of each CalculateWorker.
i. OrderMaster3 creates 8 CalculateWorkers (CalculateWorker0 through CalculateWorker7) and supervises the HOSVD decomposition of the minimum-unit sub-tensor blocks. Once processing completes and the results have been uploaded to the CalculateWorker nodes under OrderMaster2, OrderMaster3 stops these 8 nodes to release their resources.
ii. OrderMaster2 creates 4 CalculateWorkers (CalculateWorker0 through CalculateWorker3) and supervises the mode-3 merging and reduction of the sub-tensor blocks. Once processing completes and the results have been uploaded to the CalculateWorker nodes under OrderMaster1, OrderMaster2 stops these 4 nodes to release their resources.
iii. OrderMaster1 creates 2 CalculateWorkers (CalculateWorker0 and CalculateWorker1) and supervises the mode-2 merging and reduction of the sub-tensor blocks. Once processing completes and the results have been uploaded to the CalculateWorker node under OrderMaster0, OrderMaster1 stops these 2 nodes to release their resources.
iv. OrderMaster0 creates 1 CalculateWorker0, which supervises the mode-1 merging and reduction of the sub-tensor blocks.
3) CalculateWorker nodes are created by OrderMaster nodes; they perform the decomposition and reduction computations on the sub-tensor blocks.
i. The CalculateWorkers under OrderMaster3 receive the minimum-unit sub-tensor blocks distributed by the PartitionWorker and perform the HOSVD decomposition of each block; when decomposition completes, they upload the results to the corresponding CalculateWorkers under OrderMaster2.
ii. The CalculateWorkers under OrderMaster2 receive the decomposition results from the CalculateWorkers under OrderMaster3, merge and reduce them along mode 3, and run the Jacobi decomposition again on the merged and reduced data; when decomposition completes, they send the results to the corresponding CalculateWorkers under OrderMaster1.
iii. The CalculateWorkers under OrderMaster1 receive the decomposition results from the CalculateWorkers under OrderMaster2, merge and reduce them along mode 2, and run the Jacobi decomposition again on the merged and reduced data; when decomposition completes, they send the results to the corresponding CalculateWorker under OrderMaster0.
iv. CalculateWorker0 under OrderMaster0 receives the decomposition results from the CalculateWorkers under OrderMaster1, merges and reduces them along mode 1, and runs the Jacobi decomposition again on the merged and reduced data; when decomposition completes, it outputs the result to a file, finally completing the HOSVD decomposition of the original tensor.
4) During steps 3) ii, iii, and iv, a CalculateWorker's memory may overflow because of the merging and reduction of intermediate results. In that case, the CalculateWorker node is replaced by a RoundRobinWorker ring (the ring indicated by the dashed arrows in the figure). The system slices the data on that CalculateWorker and distributes it to the RoundRobinWorker nodes for computation, which solves the memory failure of the single CalculateWorker node. When the RoundRobinWorker nodes finish computing, they merge their results and return them to the CalculateWorker node corresponding to the OrderMaster of the layer above.
As shown in Fig. 3, the overall flow of the distributed HOSVD decomposition is as follows:
A. Start - 300
Starting condition: the PartitionWorker, OrderMaster, CalculateWorker, and related nodes have been created in the system, and the PartitionWorker node has obtained the data of the original tensor.
B. Slice the original tensor - 301
Using the tensor block slicing method proposed by the present invention, cut along each order of the tensor in turn until the original tensor has been cut into minimum-unit sub-tensor blocks.
C. The PartitionWorker sends the minimum-unit sub-tensor blocks to the CalculateWorker nodes under the corresponding OrderMasterN - 302
For the third-order example, the tensor is cut into 8 sub-tensor blocks, which are sent, in the order of their positions in the original tensor, to the 8 nodes OrderMaster3/CalculateWorker0, OrderMaster3/CalculateWorker1, ..., OrderMaster3/CalculateWorker7.
D. The CalculateWorker nodes under OrderMasterN perform HOSVD decomposition on the minimum-unit sub-tensor blocks - 303
For the third-order example, the 8 CalculateWorker nodes under OrderMaster3 unfold each received sub-tensor block mode by mode and apply a one-sided Jacobi SVD to each unfolding matrix.
E. The CalculateWorker nodes under OrderMasterN deliver their results to the corresponding CalculateWorker nodes under OrderMasterN-1 - 304
For the third-order example, after the 8 CalculateWorker nodes under OrderMaster3 finish their one-sided Jacobi SVD decompositions, they upload the results to the 4 corresponding CalculateWorker nodes under OrderMaster2: OrderMaster3/CalculateWorker0 and CalculateWorker1 upload their results to OrderMaster2/CalculateWorker0, OrderMaster3/CalculateWorker2 and CalculateWorker3 upload theirs to OrderMaster2/CalculateWorker1, and so on. Once all uploads complete, OrderMaster3 stops all of its CalculateWorker nodes and releases their memory.
F. The CalculateWorker nodes receive the data and decide which merging and reduction mode to use - 305
a) Append by row - 306
Splice the V·Σ matrices together row-wise in the order of the sub-tensor blocks' positions in the original tensor, and splice the U matrices together block-diagonally. (Merging and reducing two sub-tensor blocks builds the A matrix from (V1·Σ1, V2·Σ2) and the M matrix.)
b) Append by column - 307
Splice the U·Σ matrices together column-wise in the order of the sub-tensor blocks' positions in the original tensor, and splice the V matrices together block-diagonally. (Merging and reducing two sub-tensor blocks builds the A matrix from (U1·Σ1, U2·Σ2) and the M matrix.)
c) Interleave by column
Splice the U·Σ matrices together column-wise in the order of the sub-tensor blocks' positions in the original tensor, splice the V matrices together block-diagonally, and then reorder the rows of the V matrix according to where each sub-block's mode-j unfolding lands inside the merged block's mode-j unfolding. (Merging and reducing two sub-tensor blocks builds the A matrix from (U1·Σ1, U2·Σ2), an elementary transformation matrix K, and the M matrix.)
G. Splice the matrices according to the selected splicing mode - 308
H. Judge whether splicing would overflow memory - 309
a) It would overflow: go to step I.
b) It would not: go to step J.
I. Create a ring of RoundRobinWorker nodes, split the data on the CalculateWorker node, and distribute it across the RoundRobinWorker nodes; then go to step J - 310
If the spliced matrix is too large, memory may overflow and the computation may take too long; the matrix is therefore partitioned into blocks and orthogonalized in round-robin fashion.
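The round-robin sharding of an oversized spliced matrix can be sketched as follows. This is a hypothetical illustration; the column-interleaved assignment is an assumption about how the RoundRobinWorker ring might divide the work, not a detail stated in the patent:

```python
import numpy as np

def round_robin_columns(A, n_workers):
    """Deal the columns of A to workers in round-robin order, so that no
    single node has to hold the whole spliced matrix."""
    return [A[:, w::n_workers] for w in range(n_workers)]

A = np.arange(24.0).reshape(4, 6)
shards = round_robin_columns(A, 3)      # worker w gets columns w, w+3, ...

# After the ring finishes, shard results merge back in the same order.
merged = np.empty_like(A)
for w, shard in enumerate(shards):
    merged[:, w::len(shards)] = shard
```

Because the assignment is a fixed interleaving, the merge-back step needs no extra bookkeeping beyond each worker's index.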
J. Apply Jacobi orthogonalization to the merged matrix - 311
K. Judge whether the matrix is globally orthogonalized - 312
a) Orthogonalized: go to step L.
b) Not orthogonalized: return to step J.
L. Judge whether merging and reduction are complete - 313
a) Complete: go to step N.
b) Not complete: go to step M.
M. Upload the result to the CalculateWorker generated by the OrderMaster of the layer above, then go to step F - 314
For the third-order example, OrderMaster2/CalculateWorker0 and CalculateWorker1 upload their results to OrderMaster1/CalculateWorker0, and OrderMaster2/CalculateWorker2 and CalculateWorker3 upload theirs to OrderMaster1/CalculateWorker1. Similarly, OrderMaster1/CalculateWorker0 and CalculateWorker1 upload their results to OrderMaster0/CalculateWorker0.
N. Finish - 315
The distributed HOSVD decomposition of the original tensor is complete; the decomposition result is written to a file, the distributed system is shut down, and its resources are released.
Claims (4)
1. A distributed system architecture fusing tree and ring structures, whose main body consists of four classes of nodes: PartitionWorker, OrderMaster, CalculateWorker, and RoundRobinWorker.
2. A tensor slicing and distribution method, wherein slicing and distribution take place on the PartitionWorker node.
3. A tensor merging and reduction method, wherein merging and reduction take place on the CalculateWorker nodes produced by OrderMaster0 through OrderMasterN-1.
4. An incremental computation method for distributed HOSVD, whose overall idea matches the distributed HOSVD decomposition method, except that the PartitionWorker node is no longer responsible for slicing and distributing the original tensor blocks; instead, following the ordering rules of tensor merging, it tells each CalculateWorker node under OrderMasterN the position of its tensor block; that CalculateWorker fetches the block's data, performs the HOSVD decomposition, and reports the result to the CalculateWorker of the layer above, thereby achieving incremental HOSVD decomposition.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610056751.4A CN107015946A (en) | 2016-01-27 | 2016-01-27 | A method for distributed high-order SVD and its incremental computation
Publications (1)
Publication Number | Publication Date |
---|---|
CN107015946A (en) | 2017-08-04 |
Family
ID=59439250
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610056751.4A Pending CN107015946A (en) | A method for distributed high-order SVD and its incremental computation | 2016-01-27 | 2016-01-27 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107015946A (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102129550A (en) * | 2011-02-17 | 2011-07-20 | South China University of Technology | Scene perception method |
CN104331421A (en) * | 2014-10-14 | 2015-02-04 | Anhui Sun Create Electronics Co., Ltd. | High-efficiency processing method and system for big data |
- 2016-01-27: application CN201610056751.4A filed in China (CN107015946A); status: Pending
Non-Patent Citations (1)
Title |
---|
Li Cunchen: "Research and Application of Distributed Storage Technology for Massive Data", China Masters' Theses Full-text Database, Information Science and Technology (Monthly) *
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108170639A (en) * | 2017-12-26 | 2018-06-15 | Yunnan University | Tensor CP decomposition implementation method based on distributed environment |
CN108170639B (en) * | 2017-12-26 | 2021-08-17 | Yunnan University | Tensor CP decomposition implementation method based on distributed environment |
CN111291240A (en) * | 2018-12-06 | 2020-06-16 | Huawei Technologies Co., Ltd. | Method for processing data and data processing device |
CN111291240B (en) * | 2018-12-06 | 2023-12-08 | Huawei Technologies Co., Ltd. | Method for processing data and data processing device |
Legal Events

Date | Code | Title | Description
---|---|---|---
2017-08-04 | PB01 | Publication | Application publication date: 20170804
 | SE01 | Entry into force of request for substantive examination |
 | WD01 | Invention patent application deemed withdrawn after publication |