CN106708973A - Method for accelerating Monte Carlo converse solution of PageRank problems - Google Patents

Method for accelerating Monte Carlo converse solution of PageRank problems

Info

Publication number
CN106708973A
CN106708973A
Authority
CN
China
Prior art keywords
pagerank
value
gpu
matrix
thread
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201611109924.0A
Other languages
Chinese (zh)
Inventor
郭梦含
赖斯
杨溢
林小拉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sun Yat Sen University
Original Assignee
Sun Yat Sen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sun Yat Sen University filed Critical Sun Yat Sen University
Priority to CN201611109924.0A priority Critical patent/CN106708973A/en
Publication of CN106708973A publication Critical patent/CN106708973A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/95Retrieval from the web
    • G06F16/951Indexing; Web crawling techniques

Landscapes

  • Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention provides a method for accelerating the Monte Carlo converse (reverse) solution of PageRank problems. The method comprises the following measures: the PageRank simulation is driven by a quasi-random sequence instead of a pseudo-random sequence; the quasi-random sequence only has to be generated once and can then be reused in the solution of every component, whereas a pseudo-random sequence would have to be regenerated for each component, and the quasi-random sequence can be generated efficiently in parallel on the GPU, so that considerable time is saved in random-sequence generation. A relaxation method is introduced into the solution: the objective function is transformed and relaxed, which reduces the spectral radius of the iteration matrix and improves the convergence speed. The shared memory of the GPU is fully used as a cache, and an efficient parallel reduction is performed when the results of the individual Markov chains are accumulated, so that the slower global memory does not have to be accessed repeatedly, thereby improving efficiency.

Description

Method for accelerating the Monte Carlo converse solution of PageRank problems
Technical field
The present invention relates to the field of parallel computing, and more particularly to a method for accelerating the Monte Carlo converse solution of PageRank problems.
Background technology
With the progress of society, the number of web pages on the Internet has grown explosively. PageRank, as the core algorithm of search engines, can rank search results reasonably by importance according to the link relationships between web pages. With such a huge number of web pages, solving PageRank efficiently and robustly in large-scale data processing is therefore particularly important. Moreover, in many PageRank application scenarios only a small number of pages actually need to be computed, for example when detecting high-influence nodes in a social network or when selecting high-quality "seed" pages in the TrustRank method; how to improve the original PageRank algorithm for such scenarios is also a key and difficult point.
Current Markov-chain Monte Carlo approaches to computing PageRank values mainly fall into the following two categories:
(1) Forward browsing and jumping between web pages is simulated, and the PageRank values are computed with a Markov-chain Monte Carlo method. Compared with the traditional deterministic power method, this approach can already give an approximate estimate of the relatively important pages after a single round of iteration, and it can continuously update the PageRank values as the link structure between pages changes. However, natural web graphs contain many vertices of high in-degree, and the traditional PageRank method spends a large part of its cost on these vertices; Ziv Bar-Yossef et al. pointed out that computing PageRank values on the reverse web graph is more efficient than on the original web graph;
(2) The target web page is simulated diffusing outward on the transposed graph, and its PageRank value is computed inversely with a Markov-chain Monte Carlo method. The advantage of this approach is that the PageRank value of a single page can be computed without computing the PageRank values of all pages; another advantage is that the computation can be parallelized on a GPU. In terms of efficiency, however, the method has the following shortcomings: (a) the pseudo-random sequence it uses has to be regenerated when solving each component, which incurs a certain overhead; (b) it uses the global memory of the GPU directly, without using the GPU shared memory as a cache, so the computation speed is poor.
Given the problems of the existing methods and the rapidly growing volume of Internet data, the efficiency with which Markov-chain Monte Carlo solves PageRank should be improved in order to accelerate the processing of large-scale data.
The content of the invention
The present invention provides an efficient method for accelerating the Monte Carlo converse solution of PageRank problems.
To achieve the above technical effect, the technical scheme of the present invention is as follows:
A method for accelerating the Monte Carlo converse solution of PageRank problems comprises the following steps:
S1: Determine the initial direction numbers, obtain the direction numbers of the quasi-random sequence from the initial direction numbers, and then compute the quasi-random sequence;
S2: Transform the PageRank iterative formula to obtain the new iteration matrix G and the coefficient matrix A of the corresponding linear equation system; compute the eigenvalue extremes of the matrix D^{-1}A, obtain the relaxation factor from the distribution of the eigenvalues, and apply the relaxation factor to relax the PageRank iterative formula; then, using the quasi-random sequence obtained in S1, simulate the relaxed PageRank iterative formula with the Markov-chain Monte Carlo method to obtain the simulation result of every Markov chain for the target component x_i to be solved;
S3: For every Markov-chain simulation result of the target component x_i, first request and allocate GPU shared memory as a cache, then perform a parallel reduction summation over the simulation results in shared memory, and finally divide the accumulated sum by the number of simulated chains to obtain the expected value of the target component x_i.
Further, the detailed process of step S1 is as follows:
S11: Use T primitive polynomials over the binary finite field GF(2), where the primitive polynomial used by the i-th dimension of the sequence is
P_i(x) = x^r + t_{1,i} x^{r-1} + … + t_{r-1,i} x + 1,
where r is the degree of P_i and t_{1,i}, t_{2,i}, …, t_{r-1,i} are the coefficients of the primitive polynomial, each taking the value 0 or 1;
S12: Choose the initial direction numbers. For a primitive polynomial P_i of degree r there are r initial direction numbers v_{1,i}, …, v_{r,i}, where v_{j,i} = m_{j,i} / 2^j and m_{j,i} is a positive odd integer smaller than 2^j;
S13: Compute the remaining direction numbers from the initial direction numbers by the recurrence
v_{j,i} = t_{1,i} v_{j-1,i} ⊕ t_{2,i} v_{j-2,i} ⊕ … ⊕ t_{r-1,i} v_{j-r+1,i};
S14: Initialize the sequence with x_0^i = 0; from the direction numbers obtained above, the i-th dimension of the sequence is generated by the Gray-code method according to
x_n^i = x_{n-1}^i ⊕ v_{c_{n-1}},
where c_{n-1} is the index of the lowest-order zero bit in the binary representation of n-1.
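The direction-number recurrence and the Gray-code update of S11–S14 map naturally onto the GPU. The following CUDA sketch is a minimal illustration under stated assumptions: direction numbers are stored as 32-bit integers (v_{j,i} scaled by 2^32), one block handles one dimension (as in Fig. 2), the polynomial coefficients t_{k,i} are packed into one integer per dimension, and the kernel name and memory layout are illustrative rather than taken from the patent. The recurrence used here is the standard Sobol' construction, which extends the recurrence quoted above with the terms v_{j-r,i} ⊕ (v_{j-r,i} >> r); the points are produced with the direct Gray-code XOR form, which is mathematically equivalent to the iterative update x_n^i = x_{n-1}^i ⊕ v_{c_{n-1}} but lets the threads of a block generate points independently.

```cuda
#include <cstdint>

// One block per dimension (cf. Fig. 2).  Direction numbers live in shared
// memory because every generated point reads them.  The layouts below
// (32 slots of m per dimension, packed coefficient bits) are assumptions.
__global__ void sobol_generate(const uint32_t* m_init,   // m_{j,i}, 32 slots per dim (1..r used)
                               const uint32_t* poly,     // packed coefficients t_{1..r-1,i}
                               const int*      deg,      // degree r of each primitive polynomial
                               uint32_t*       out,      // out[dim * n_points + n]
                               int             n_points)
{
    const int dim = blockIdx.x;
    const int r   = deg[dim];
    __shared__ uint32_t v[32];                    // v_{j,dim} scaled by 2^32

    if (threadIdx.x == 0) {
        for (int j = 1; j <= r; ++j)              // v_{j,i} = m_{j,i} / 2^j
            v[j] = m_init[dim * 32 + j] << (32 - j);
        for (int j = r + 1; j <= 31; ++j) {       // standard Sobol' recurrence
            uint32_t x = v[j - r] ^ (v[j - r] >> r);
            for (int k = 1; k < r; ++k)
                if ((poly[dim] >> (r - 1 - k)) & 1u)   // coefficient t_{k,i}
                    x ^= v[j - k];
            v[j] = x;
        }
    }
    __syncthreads();

    // x_n = XOR of v_j over the set bits j of gray(n); threads share the work.
    for (int n = threadIdx.x; n < n_points; n += blockDim.x) {
        uint32_t gray = n ^ (n >> 1);
        uint32_t x = 0;
        for (int j = 1; j <= 31 && (gray >> (j - 1)); ++j)
            if ((gray >> (j - 1)) & 1u)
                x ^= v[j];
        out[dim * n_points + n] = x;              // divide by 2^32 to map into (0,1)
    }
}
```

The per-dimension outputs can then stay on the device and be consumed directly by the random-walk simulation of step S24.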
Further, the process in step S2 of computing the relaxation factor and applying the relaxation method to solve the PageRank iterative formula comprises the following steps:
S21: Transform the PageRank iterative formula:
The iteration x = (αM + (1-α)E/N)x is rewritten as x = Gx + b, and the corresponding linear equation system is Ax = b. The relevant quantities are as follows: x is the PageRank value vector to be solved, α is the damping coefficient, M is the PageRank iteration matrix, N is the number of nodes, E is the matrix whose elements are all 1, e is the column vector whose elements are all 1, G is αM, b is (1-α)e/N, and A = (I-G);
S22: Compute the eigenvalue extremes of the matrix D^{-1}A, where D^{-1} is the inverse of the diagonal matrix of A, obtaining the maximum eigenvalue λ_max and the minimum eigenvalue λ_min; compute the relaxation factor ω from these eigenvalue extremes;
S23: Solve the PageRank iterative formula with the relaxation method:
D_ω A x = D_ω b, where D_ω is a diagonal matrix whose diagonal elements are ω / a_{i,i}, a_{i,i} being the diagonal elements of matrix A; the relaxed iteration matrix is G_ω = I - D_ω A (a host-side sketch of this step is given after S24 below);
S24: Simulate the Markov chains using the quasi-random sequence obtained in step S1. The simulation is distributed over multiple thread warps of the GPU; each thread inside a warp performs one Markov-chain random walk, and many threads execute simultaneously, each computing one sample of the target component, which yields the simulation result of every Markov chain for the target component x_i.
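A brief host-side sketch of S21–S23 follows. The patent states only that ω is derived from the eigenvalue extremes of D^{-1}A; the concrete choice ω = 2/(λ_min + λ_max), which minimizes the spectral radius of I - ωD^{-1}A when the spectrum of D^{-1}A is real and positive, is an assumption made here for illustration, as are the function name and the dense diagonal layout. The eigenvalue extremes themselves are assumed to come from a separate routine such as a power iteration.

```cuda
#include <vector>

// S22–S23: turn the eigenvalue extremes of D^{-1}A into a relaxation factor
// and build the diagonal matrix D_omega with entries omega / a_{i,i}.
// The omega formula is an assumed choice; the patent only says omega is
// obtained from the eigenvalue distribution.
std::vector<double> build_D_omega(const std::vector<double>& A_diag,   // a_{i,i} of A = I - alpha*M
                                  double lambda_min, double lambda_max)
{
    const double omega = 2.0 / (lambda_min + lambda_max);
    std::vector<double> D_omega(A_diag.size());
    for (size_t i = 0; i < A_diag.size(); ++i)
        D_omega[i] = omega / A_diag[i];            // diagonal of D_omega
    return D_omega;                                // relaxed iteration: G_omega = I - diag(D_omega) * A
}
```

For S24, the sketch below shows one way the warp-level layout can look in CUDA: each thread simulates one damped chain on the transposed graph, driven by the pre-generated quasi-random numbers, and accumulates b at every visited state, which gives an unbiased estimate of x_i = Σ_m (G^m b)_i for G = αM. The CSR arrays, the per-edge cumulative transition probabilities, the uniform b = (1-α)/N, and the fixed per-chain budget of quasi-random draws are assumptions; the additional per-step reweighting implied by the relaxed matrix G_ω is omitted for brevity.

```cuda
// One thread = one Markov chain; the hardware groups threads into warps,
// matching the layout of S24.  `qrn` holds quasi-random numbers in (0,1),
// with steps_per_chain of them reserved per thread.
__global__ void reverse_walks(int target,             // index i of the component x_i
                              const int*   row_ptr,   // CSR of the transposed web graph
                              const int*   col_idx,
                              const float* cum_prob,  // per-edge cumulative transition prob.
                              float        b,         // assumed uniform b = (1 - alpha) / N
                              float        alpha,     // damping coefficient
                              const float* qrn,
                              int          steps_per_chain,
                              float*       partial)   // one partial estimate per thread
{
    const int    tid = blockIdx.x * blockDim.x + threadIdx.x;
    const float* u   = qrn + (size_t)tid * steps_per_chain;

    int   cur   = target;
    float score = b;                            // the start state already contributes b
    for (int s = 0; s < steps_per_chain; ++s) {
        if (u[s] >= alpha) break;               // terminate with probability 1 - alpha
        const int lo = row_ptr[cur], hi = row_ptr[cur + 1];
        if (lo == hi) break;                    // no in-links: the chain is absorbed
        // Given u[s] < alpha, u[s]/alpha is again uniform on (0,1); reuse it
        // to invert the cumulative transition distribution of the current row.
        const float r = u[s] / alpha;
        int next = lo;
        while (next + 1 < hi && cum_prob[next] < r) ++next;   // linear scan; a binary search also works
        cur    = col_idx[next];
        score += b;                             // accumulate b at every visited state
    }
    partial[tid] = score;                       // summed afterwards by the reduction of S3
}
```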
Further, the detailed process of step S3 is as follows:
S31: First request and allocate GPU shared memory to hold one simulation result for each target component sample;
S32: Store the simulation result of each Markov-chain random walk computed in S2 into the GPU shared memory; the storage position is the shared-memory array index corresponding to the thread id exclusively owned by the current thread;
S33: Perform a parallel reduction summation over the partial solutions in GPU shared memory:
In the first round of the iteration, the data to be reduced are divided into two subsets according to the thread-block size blockDimX of the GPU, each subset having size = blockDimX / 2; the value held by thread id = i is then summed with the value held by thread id = i + size, and the result is stored back into the shared-memory slot of thread id = i; in the next round of the iteration the stride size is halved and the reduction summation continues;
The above procedure is repeated until the stride reaches 0; the final sum is then divided by the number of Markov chains to obtain the expected value, i.e. x_i ≈ sum / N_chain, where N_chain is the number of simulated chains. This expected value is returned, and the solution results are copied back from GPU memory to CPU memory.
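The reduction of S31–S33 is a standard shared-memory tree reduction. The sketch below assumes a single thread block whose size is a power of two and at least the number of chains, and that the per-chain results have already been written to global memory by the walk kernel; the kernel name is illustrative. A multi-block configuration would need a second pass (or an atomicAdd) over the per-block partial sums.

```cuda
// Tree reduction in shared memory (S31–S33): each thread owns one
// Markov-chain result, the stride `size` starts at blockDim.x / 2 and is
// halved every round, and the final expectation is sum / n_chains.
__global__ void reduce_expectation(const float* chain_result,  // one value per chain
                                   int          n_chains,
                                   float*       expectation)   // estimate of x_i
{
    extern __shared__ float cache[];            // one slot per thread (S31)
    const int tid = threadIdx.x;

    // S32: each thread stores its simulation result at its own shared-memory index.
    cache[tid] = (tid < n_chains) ? chain_result[tid] : 0.0f;
    __syncthreads();

    // S33: halve the stride each round; thread i adds the value of thread i + size.
    for (int size = blockDim.x / 2; size > 0; size >>= 1) {
        if (tid < size)
            cache[tid] += cache[tid + size];
        __syncthreads();
    }

    // Divide by the number of chains to obtain the expected value of x_i.
    if (tid == 0)
        *expectation = cache[0] / n_chains;
}
```

A launch of the form reduce_expectation<<<1, threads, threads * sizeof(float)>>>(d_results, n_chains, d_xi), with threads a power of two no smaller than n_chains, matches the assumptions above.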
Compared with the prior art, the beneficial effects of the technical scheme of the present invention are as follows:
The present invention uses a quasi-random sequence instead of a pseudo-random sequence to carry out the PageRank simulation. Once generated, the quasi-random sequence can be used in the solution process of every component, whereas a pseudo-random sequence would have to be regenerated in the solution process of each component; in addition, the quasi-random sequence can be generated efficiently in parallel on the GPU, which saves considerable time in random-sequence generation. A relaxation method is introduced into the solution: the objective function is transformed and relaxed, which reduces the spectral radius of the iteration matrix and accelerates the convergence. The GPU shared memory is fully used as a cache, and an efficient parallel reduction is carried out when accumulating the results computed by the individual Markov chains, so that the slower global memory does not have to be accessed repeatedly, thereby improving efficiency.
Brief description of the drawings
Fig. 1 is a schematic flowchart of the present acceleration method for the converse solution of PageRank problems based on reverse Markov-chain Monte Carlo;
Fig. 2 is a schematic diagram of the task distribution used by the present invention to generate the quasi-random sequence;
Fig. 3 is a schematic diagram of the thread assignment used by the present invention to solve the PageRank components on the GPU;
Fig. 4 is a schematic diagram of the parallel reduction summation performed by the present invention over the individual simulation results.
Specific embodiment
The accompanying drawings are for illustrative purposes only and shall not be construed as limiting this patent;
In order to better illustrate the present embodiment, some parts of the drawings are omitted, enlarged or reduced, and do not represent the size of the actual product;
For those skilled in the art, it is understandable that certain known structures and their descriptions may be omitted from the drawings.
The technical scheme of the present invention is further described below with reference to the accompanying drawings and embodiments.
Embodiment 1
As shown in Fig. 1, the present invention first reads the input graph data and pre-generates the quasi-random sequence; the relaxation method is then used to pre-process the transformed PageRank iterative formula, and the Markov-chain Monte Carlo simulation is carried out on the GPU with multi-thread parallelism. Finally, a parallel reduction summation is performed in shared memory over the individual simulation results of the target component x_i.
The method of the present invention for accelerating the Monte Carlo converse solution of PageRank problems is implemented in the following steps:
1. The input graph data is pre-processed first. The specific steps are as follows: the graph file is read, the data is stored in an adjacency-matrix array, and the out-degree and in-degree of each vertex are counted. Dangling vertices are then screened out, and the elements of the rows corresponding to dangling vertices are set to 1. According to the out-degree of each vertex, the adjacency matrix is normalized into a Markov iteration matrix whose elements are all non-negative and whose rows each sum to 1 (a host-side sketch of this preprocessing is given after step E below).
2. The quasi-random sequence is then generated on the GPU. The initial direction numbers v_{1,i}, …, v_{r,i} are chosen first, and the direction numbers of the quasi-random sequence are obtained from them using the recurrence given in step S13. Because the direction numbers are accessed frequently during sequence generation, they are stored in shared memory in the implementation to accelerate the computation. As shown in Fig. 2, each dimension of the quasi-random sequence is generated by one corresponding block. For the i-th sequence, the first m numbers of the sequence are generated in advance, and the subsequent numbers are then generated iteratively.
3. The PageRank iterative formula is re-expressed and the relaxation factor is computed. For the transformed PageRank iterative formula, the eigenvalue extremes of the matrix D^{-1}A are computed, and the relaxation factor ω is obtained from the distribution of these eigenvalue extremes.
4. According to the relaxation factor ω and the diagonal elements a_{i,i} of matrix A, the diagonal matrix D_ω is computed, and the relaxation method is applied to the PageRank iterative formula to obtain the iteration matrix G_ω.
5. The Markov-chain random walks are simulated. The Markov chains are simulated using the generated quasi-random sequence; the simulation is distributed over multiple warps of the GPU, each thread inside a warp performs one Markov-chain random walk, and many threads execute simultaneously, each computing one sample of the target component. The specific thread assignment is shown in Fig. 3.
6. As shown in Fig. 4, the Markov-chain simulation results are summed in parallel in shared memory. First, GPU shared memory is requested and allocated according to the preset number of Markov chains. After the simulation of all threads has finished, the solution result corresponding to each thread id is stored in shared memory. The partial solutions of the target component are then summed with a parallel reduction. The specific steps are as follows:
A. In the first round of the iteration, the data to be reduced are divided into two subsets according to the block size blockDimX of the GPU, each subset having size = blockDimX / 2.
B. The value held by thread id = i is then summed with the value held by thread id = i + size, and the result is stored back into the shared-memory slot corresponding to thread id = i.
C. The stride size is halved and its value is checked. If the stride is greater than 0, the procedure jumps back to step B and the reduction summation continues; if the stride equals 0, the reduction iteration terminates.
D. The final sum is divided by the number of Markov chains to obtain the expected value, i.e. x_i ≈ sum / N_chain.
E. The expected value is returned, and the solution results are copied back from GPU memory to CPU memory.
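As referenced in step 1 above, the following host-side sketch shows one way the graph preprocessing can be written, under the assumption of a dense adjacency representation as described in the text; the container layout and the function name are illustrative, and a realistic implementation would use a sparse format such as CSR.

```cuda
#include <vector>

// Dense preprocessing as described in step 1: count degrees, give dangling
// vertices a uniform row, and normalize every row to sum to 1, producing a
// non-negative, row-stochastic Markov iteration matrix.
std::vector<double> build_markov_matrix(const std::vector<std::vector<int>>& adj, int n)
{
    std::vector<double> M(static_cast<size_t>(n) * n, 0.0);
    std::vector<int> out_deg(n, 0), in_deg(n, 0);

    for (int u = 0; u < n; ++u)
        for (int v : adj[u]) {               // edge u -> v read from the graph file
            M[static_cast<size_t>(u) * n + v] = 1.0;
            ++out_deg[u];
            ++in_deg[v];
        }

    for (int u = 0; u < n; ++u) {
        if (out_deg[u] == 0) {               // dangling vertex: set the whole row to 1
            for (int v = 0; v < n; ++v)
                M[static_cast<size_t>(u) * n + v] = 1.0;
            out_deg[u] = n;
        }
        for (int v = 0; v < n; ++v)          // row-normalize by the out-degree
            M[static_cast<size_t>(u) * n + v] /= out_deg[u];
    }
    return M;                                // elements non-negative, rows sum to 1
}
```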
The above embodiment is a preferred implementation of the present invention, but the implementations of the present invention are not limited by the above embodiment. Any other change, modification, substitution, combination or simplification made without departing from the spirit and principle of the present invention shall be regarded as an equivalent replacement and shall be included within the protection scope of the present invention.
The same or similar reference numerals correspond to the same or similar parts;
The positional relationships described in the accompanying drawings are for illustrative purposes only and shall not be construed as limiting this patent;
Obviously, the above embodiment of the present invention is merely an example given to clearly illustrate the present invention and is not a limitation on the implementations of the present invention. For those of ordinary skill in the art, other changes in different forms can also be made on the basis of the above description. It is neither necessary nor possible to enumerate all implementations here. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present invention shall be included within the protection scope of the claims of the present invention.

Claims (4)

1. A method for accelerating the Monte Carlo converse solution of PageRank problems, characterised in that it comprises the following steps:
S1: determining the initial direction numbers, obtaining the direction numbers of the quasi-random sequence from the initial direction numbers, and then computing the quasi-random sequence;
S2: transforming the PageRank iterative formula to obtain the new iteration matrix G and the coefficient matrix A of the corresponding linear equation system; computing the eigenvalue extremes of the matrix D^{-1}A, obtaining the relaxation factor from the distribution of the eigenvalues, and applying the relaxation factor to relax the PageRank iterative formula; then, using the quasi-random sequence obtained in S1, simulating the relaxed PageRank iterative formula with the Markov-chain Monte Carlo method to obtain the simulation result of every Markov chain for the target component x_i to be solved;
S3: for every Markov-chain simulation result of the target component x_i, first requesting and allocating GPU shared memory as a cache, then performing a parallel reduction summation over the simulation results in GPU shared memory, and finally dividing the accumulated sum by the number of simulated chains to obtain the expected value of the target component x_i.
2. The method for accelerating the Monte Carlo converse solution of PageRank problems according to claim 1, characterised in that the detailed process of step S1 is as follows:
S11: using T primitive polynomials over the binary finite field GF(2), wherein the primitive polynomial used by the i-th dimension of the sequence is
P_i(x) = x^r + t_{1,i} x^{r-1} + … + t_{r-1,i} x + 1,
where r is the degree of P_i and t_{1,i}, t_{2,i}, …, t_{r-1,i} are the coefficients of the primitive polynomial, each taking the value 0 or 1;
S12: choosing the initial direction numbers, wherein for a primitive polynomial P_i of degree r there are r initial direction numbers v_{1,i}, …, v_{r,i}, with v_{j,i} = m_{j,i} / 2^j and m_{j,i} a positive odd integer smaller than 2^j;
S13: computing the remaining direction numbers from the initial direction numbers by the formula
v_{j,i} = t_{1,i} v_{j-1,i} ⊕ t_{2,i} v_{j-2,i} ⊕ … ⊕ t_{r-1,i} v_{j-r+1,i};
S14: initializing the sequence with x_0^i = 0 and, from the direction numbers obtained, generating the i-th dimension of the sequence by the Gray-code method according to the formula
x_n^i = x_{n-1}^i ⊕ v_{c_{n-1}},
where c_{n-1} is the index of the lowest-order zero bit in the binary representation of n-1.
3. The method for accelerating the Monte Carlo converse solution of PageRank problems according to claim 2, characterised in that the process in step S2 of computing the relaxation factor and solving the PageRank iterative formula with the relaxation method comprises the following steps:
S21: transforming the PageRank iterative formula: the iteration x = (αM + (1-α)E/N)x is rewritten as x = Gx + b, and the corresponding linear equation system is Ax = b, wherein x is the PageRank value vector to be solved, α is the damping coefficient, M is the PageRank iteration matrix, N is the number of nodes, E is the matrix whose elements are all 1, e is the column vector whose elements are all 1, G is αM, b is (1-α)e/N, and A = (I-G);
S22: computing the eigenvalue extremes of the matrix D^{-1}A, wherein D^{-1} is the inverse of the diagonal matrix of A, obtaining the maximum eigenvalue λ_max and the minimum eigenvalue λ_min, and computing the relaxation factor ω from the eigenvalue extremes obtained;
S23: solving the PageRank iterative formula with the relaxation method: D_ω A x = D_ω b, wherein D_ω is a diagonal matrix whose diagonal elements are ω / a_{i,i}, a_{i,i} being the diagonal elements of matrix A, and the relaxed iteration matrix is G_ω = I - D_ω A;
S24: simulating the Markov chains using the quasi-random sequence obtained in step S1, the simulation being distributed over multiple thread warps of the GPU, each thread inside a warp performing one Markov-chain random walk and many threads executing simultaneously, each computing one sample of the target component, thereby obtaining the simulation result of every Markov chain for the target component x_i.
4. The method for accelerating the Monte Carlo converse solution of PageRank problems according to claim 3, characterised in that the detailed process of step S3 is as follows:
S31: first requesting and allocating GPU shared memory to hold one simulation result for each target component sample;
S32: storing the simulation result of each Markov-chain random walk computed in S2 into the GPU shared memory, the storage position being the shared-memory array index corresponding to the thread id exclusively owned by the current thread;
S33: performing a parallel reduction summation over the partial solutions in the GPU shared memory: in the first round of the iteration, the data to be reduced are divided into two subsets according to the thread-block size blockDimX of the GPU, each subset having size = blockDimX / 2; the value held by thread id = i is then summed with the value held by thread id = i + size, and the result is stored back into the shared-memory slot corresponding to thread id = i; in the next round of the iteration the stride size is halved and the reduction summation continues;
the above procedure is repeated until the stride reaches 0, and the final sum is divided by the number of Markov chains to obtain the expected value, i.e. x_i ≈ sum / N_chain; this expected value is returned, and the solution results are copied back from GPU memory to CPU memory.
CN201611109924.0A 2016-12-06 2016-12-06 Method for accelerating Monte Carlo converse solution of PageRank problems Pending CN106708973A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611109924.0A CN106708973A (en) 2016-12-06 2016-12-06 Method for accelerating Monte Carlo converse solution of PageRank problems

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201611109924.0A CN106708973A (en) 2016-12-06 2016-12-06 Method for accelerating Monte Carlo converse solution of PageRank problems

Publications (1)

Publication Number Publication Date
CN106708973A true CN106708973A (en) 2017-05-24

Family

ID=58937520

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611109924.0A Pending CN106708973A (en) 2016-12-06 2016-12-06 Method for accelerating Monte Carlo converse solution of PageRank problems

Country Status (1)

Country Link
CN (1) CN106708973A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110011838A (en) * 2019-03-25 2019-07-12 武汉大学 A kind of method for real time tracking of dynamic network PageRank value
CN110020087A (en) * 2017-12-29 2019-07-16 中国科学院声学研究所 A kind of distributed PageRank accelerated method based on similarity estimation
CN111353083A (en) * 2018-12-20 2020-06-30 中国科学院计算机网络信息中心 Method and device for sorting web pages through computing cluster

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102735330A (en) * 2012-06-15 2012-10-17 天津大学 Self-adaption stochastic resonance weak signal detecting method based on particle swarm optimization algorithm
US20120330864A1 (en) * 2011-06-21 2012-12-27 Microsoft Corporation Fast personalized page rank on map reduce
CN103106278A (en) * 2013-02-18 2013-05-15 人民搜索网络股份公司 Method and device of acquiring weighted values

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120330864A1 (en) * 2011-06-21 2012-12-27 Microsoft Corporation Fast personalized page rank on map reduce
CN102735330A (en) * 2012-06-15 2012-10-17 天津大学 Self-adaption stochastic resonance weak signal detecting method based on particle swarm optimization algorithm
CN103106278A (en) * 2013-02-18 2013-05-15 人民搜索网络股份公司 Method and device of acquiring weighted values

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
刘记云 (Liu Jiyun): "Research on Personalized PageRank Algorithm Based on MapReduce", China Master's Theses Full-text Database, Information Science and Technology Series *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110020087A (en) * 2017-12-29 2019-07-16 中国科学院声学研究所 A kind of distributed PageRank accelerated method based on similarity estimation
CN111353083A (en) * 2018-12-20 2020-06-30 中国科学院计算机网络信息中心 Method and device for sorting web pages through computing cluster
CN111353083B (en) * 2018-12-20 2023-04-28 中国科学院计算机网络信息中心 Method and device for ordering web pages through computing clusters
CN110011838A (en) * 2019-03-25 2019-07-12 武汉大学 A kind of method for real time tracking of dynamic network PageRank value
CN110011838B (en) * 2019-03-25 2021-08-03 武汉大学 Real-time tracking method for PageRank value of dynamic network

Similar Documents

Publication Publication Date Title
CN111553484B (en) Federal learning method, device and system
Plimpton et al. Mapreduce in MPI for large-scale graph algorithms
Lauterbach et al. Fast BVH construction on GPUs
JP2011041326A5 (en)
CN106708973A (en) Method for accelerating Monte Carlo converse solution of PageRank problems
CN105245343B (en) A kind of online static signature system and method based on multivariable cryptographic technique
Li et al. 1-bit LAMB: communication efficient large-scale large-batch training with LAMB’s convergence speed
Flores et al. A solution space for a system of null-state partial differential equations: part 3
Pacurib et al. Solving sudoku puzzles using improved artificial bee colony algorithm
CN109993293A (en) A kind of deep learning accelerator suitable for stack hourglass network
Bisson et al. A GPU implementation of the sparse deep neural network graph challenge
CN114064984A (en) Sparse array linked list-based world state increment updating method and device
US20090254319A1 (en) Method and system for numerical simulation of a multiple-equation system of equations on a multi-processor core system
CN116128019A (en) Parallel training method and device for transducer model
Kim et al. Finequant: Unlocking efficiency with fine-grained weight-only quantization for llms
Javarone Solving optimization problems by the public goods game
CN112711631B (en) Digital twin information synchronization method, system, readable storage medium and device
CN113806261A (en) Pooling vectorization implementation method for vector processor
Yang et al. Main memory evaluation of recursive queries on multicore machines
Roberge et al. Comparison of parallel particle swarm optimizers for graphical processing units and multicore processors
CN107392935A (en) A kind of particle computational methods and particIe system based on integral formula
Yelmewad et al. Parallel iterative hill climbing algorithm to solve TSP on GPU
CN115481364A (en) Parallel computing method for large-scale elliptic curve multi-scalar multiplication based on GPU (graphics processing Unit) acceleration
Dey et al. Interleaver design for deep neural networks
Yelmewad et al. Near optimal solution for traveling salesman problem using GPU

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
AD01 Patent right deemed abandoned
AD01 Patent right deemed abandoned

Effective date of abandoning: 20210122