CN107957962A - Graph partitioning method and system for efficient large-graph computation - Google Patents
- Publication number
- CN107957962A CN107957962A CN201711375929.2A CN201711375929A CN107957962A CN 107957962 A CN107957962 A CN 107957962A CN 201711375929 A CN201711375929 A CN 201711375929A CN 107957962 A CN107957962 A CN 107957962A
- Authority
- CN
- China
- Prior art keywords
- vertex
- dram
- nvm
- entry
- processing unit
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
- G06F12/0866—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches for peripheral storage systems, e.g. disk cache
- G06F12/0871—Allocation or management of cache space
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
- G06F12/0877—Cache access modes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
- G06F12/0893—Caches characterised by their organisation or structure
- G06F12/0895—Caches characterised by their organisation or structure of parts of caches, e.g. directory or tag array
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Memory System Of A Hierarchy Structure (AREA)
Abstract
The present invention proposes an efficient graph partitioning method and system for large-graph computation. The method divides the graph data into vertices and places the vertices, in random order, into a queue. The first vertex in the queue is assigned to a partition, i.e. to a processing unit; after assignment, the vertex's partition information is stored as the value, keyed by each neighbour of the vertex, as dictionary entries in DRAM or NVM. For each subsequent vertex, the method first checks whether DRAM or NVM already holds an entry keyed by that vertex; if so, the vertex's partition information is appended directly to the corresponding entry in DRAM or NVM; if not, the vertex is assigned to the least-loaded processing unit. For every assigned vertex, its partition information is stored as the value, keyed by its neighbours, as dictionary entries in the corresponding cache. At each step the method can look up the entry keyed by the current vertex directly, which improves efficiency.
Description
Technical field
The present invention relates to the field of computing, and in particular to an efficient graph partitioning method and system for large-graph computation.
Background technology
Graphs today are huge and keep growing. A graph of cerebral neurons, for example, can reach hundreds of TB; the most typical case is the World Wide Web, where search engines can already crawl a link graph of roughly one trillion link relationships, a scale estimated to exceed ten trillion in the future. Facebook, currently the world's largest social network, has about one billion users and correspondingly tens of billions of relationship links. Because of memory limits, an ordinary computer cannot process such graphs (large graphs) normally, which poses a severe challenge to common graph computations (such as finding connected components, counting triangles, and PageRank). A standard solution is to divide the graph data into multiple subgraphs and load them onto different processing units for distributed computation. To this end, distributed system frameworks such as Spark, Pregel, Giraph, and Trinity have been developed in succession. These systems mainly distribute tasks to processing units by applying a pseudo-random hash function to node IDs. Although this achieves load balancing, the heavy communication between partitions makes partition-quality-sensitive algorithms run very slowly. Fortunately, these systems support custom partitioning, so a user can replace the built-in hash algorithm with a more sophisticated partitioning scheme.
Graph partition management is a prerequisite for distributed computation, and the communication volume between partitions during graph computation is directly related to the running time. Offline partitioning requires iterative computation with frequent access to vertices and edges, which leads to high time complexity and high memory-capacity demands, making it hard to apply to large-scale graph partitioning. Streaming partitioning, thanks to its efficient partition management, has developed steadily in recent years; the latest Fennel algorithm approaches and even exceeds Metis in partition quality, and streaming algorithms can also handle dynamic large-graph data effectively.
In existing streaming methods, for each new vertex loaded into memory, the partition information of all its neighbours is kept in memory; each time, the neighbours must be looked up first and then their partitions, so the lookup complexity is directly proportional to the number of neighbours and lookup efficiency is low. In practice, many large graphs exceed the memory capacity of an ordinary computer, and some single graphs reach the TB scale, so keeping the partition information of all vertices and their neighbours in the cache is infeasible. Storing part of the intermediate data on hard disk, however, inevitably degrades performance.
Emerging non-volatile memory (NVM) helps alleviate the problems of the traditional storage architecture. NVM is a new class of storage media that is byte-addressable and non-volatile, with low static power and high density; meanwhile, the latency of processor accesses to NVM is close to that of accesses to DRAM and better than flash and disk. NVM is the general term for non-volatile storage media with these characteristics, and it can serve as main memory or external storage to alleviate the problems of the traditional storage architecture. On the one hand, because NVM is byte-addressable, it can be attached directly to the memory bus, and the processor can access data on NVM through load/store instructions without going through time-consuming I/O operations. On the other hand, NVM is close to DRAM in access latency and read/write performance, with static power lower than DRAM. Graph partitioning, however, is a read/write-intensive computation, and NVM has poor write performance, high write power, and a limited number of write cycles; using NVM directly as memory would shorten its lifetime because of the frequent writes.
Summary of the invention
To overcome the above defects in the prior art, the object of the present invention is to provide an efficient graph partitioning method and system for large-graph computation.
To achieve the above object, the present invention provides an efficient graph partitioning method for large-graph computation, comprising the following steps:
S1: divide the graph data into multiple vertices, order the vertices randomly, and turn them into a queue;

S2: assign the first vertex, in queue order, to a partition, i.e. to a processing unit; after assignment, store the vertex's partition information as the value and each neighbour of the vertex as a key, as dictionary entries, in DRAM, or in NVM if the data capacity of DRAM exceeds a set threshold;

S3: for each subsequent vertex in the queue, first check whether DRAM or NVM holds an entry keyed by that vertex. If it exists, append the vertex's partition information directly to the corresponding entry in DRAM or NVM; that is, according to the value in the entry, assign the vertex to the corresponding processing unit. Each time a vertex is assigned, store its partition information as the value and its neighbours as keys, as dictionary entries, in the corresponding cache.

If no entry keyed by that vertex exists in DRAM or NVM, assign the vertex to the least-loaded processing unit, then store the partition information of each assigned vertex as the value and its neighbours as keys, as dictionary entries, in DRAM, or in NVM if the data capacity of DRAM exceeds the set threshold, until every vertex in the queue has been assigned. The least-loaded processing unit is the partition with the fewest vertices.

With this method, the entry keyed by the current vertex can be looked up directly at every step, so cache efficiency is high.
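As a rough illustration, the S1-S3 loop can be sketched as follows. This is not the patent's implementation: the class and its names are illustrative, a single dictionary stands in for the two-level DRAM/NVM cache, and the scoring among hinted partitions is simplified to a majority count.

```python
from collections import defaultdict

class StreamingPartitioner:
    """Illustrative sketch of the S1-S3 streaming partition loop.

    `cache` stands in for the DRAM/NVM dictionary: key = a not-yet-placed
    vertex, value = list of partitions its already-placed neighbours went to.
    """

    def __init__(self, num_partitions):
        self.k = num_partitions
        self.load = [0] * num_partitions    # vertices per processing unit
        self.cache = defaultdict(list)      # vertex -> partition hints

    def pick_partition(self, vertex):
        hints = self.cache.pop(vertex, None)  # S3: look up entry keyed by vertex
        if hints:                             # entry exists: follow its values
            part = max(set(hints), key=hints.count)
        else:                                 # no entry: least-loaded unit
            part = self.load.index(min(self.load))
        self.load[part] += 1
        return part

    def partition(self, adjacency):
        """adjacency: dict vertex -> iterable of neighbour vertices,
        iterated in (random) queue order."""
        assignment = {}
        for v in adjacency:
            assignment[v] = self.pick_partition(v)
            for u in adjacency[v]:            # S2: record hint entries keyed
                if u not in assignment:       # by each unplaced neighbour
                    self.cache[u].append(assignment[v])
        return assignment
```

Note that `cache.pop` also realizes the deletion of an entry once its vertex has been placed, so the cache shrinks as partitioning proceeds.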
Further, during partition assignment, the vertex to be assigned is distributed to the partition containing the largest number of its neighbours.
Further, partitioning is performed using the following formula:

ind = argmax_{1 ≤ i ≤ k} f1(v, S_i^t) · f2(S_i^t), with f1(v, S_i^t) = |N(v) ∩ S_i^t|

where S_ind is the partition vertex v is assigned to, N(v) denotes the neighbours of vertex v, S_i^t denotes the state of the partitions already formed at time t, n is the number of vertices of the graph, k is the number of partitions, f1 is the number of neighbours of v in each partition at time t, f2 is a penalty function, and n/k is the average load.
Further, in the graph partitioning process, whenever an entry is created or updated, the entry's timestamp is updated synchronously.

When the data capacity in DRAM exceeds the set threshold, the cached entries are ordered by timestamp and the entries with the earliest timestamps are moved into NVM.

When the cached data in DRAM falls below the set threshold and NVM is not empty, the entries with the latest timestamps in NVM are moved back into DRAM.

This scheme reduces the number of writes to NVM and extends NVM's lifetime.
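The migration rules can be sketched as follows; plain dictionaries stand in for the DRAM and NVM caches, and the threshold handling and the (value, timestamp) entry layout are assumptions, not the patent's concrete data structures.

```python
import time

def touch(cache, key, value):
    """On entry creation or update, record the value and timestamp together."""
    cache[key] = (value, time.monotonic())

def evict_oldest_to_nvm(dram, nvm, dram_limit):
    """When DRAM exceeds its threshold, move the oldest-stamped entries down."""
    while len(dram) > dram_limit:
        oldest = min(dram, key=lambda k: dram[k][1])
        nvm[oldest] = dram.pop(oldest)

def promote_newest_to_dram(dram, nvm, dram_limit):
    """When DRAM drops below its threshold and NVM is non-empty, move the
    most recently stamped NVM entries back up (the write-back step that
    reduces future NVM writes)."""
    while nvm and len(dram) < dram_limit:
        newest = max(nvm, key=lambda k: nvm[k][1])
        dram[newest] = nvm.pop(newest)
```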
Further, when a subsequent vertex in the queue is being assigned, if DRAM or NVM holds an entry keyed by that vertex, the entry is removed from the cache after the vertex has been assigned to the corresponding processing unit according to the entry's value. This improves the utilization of memory space.
The invention also provides an efficient graph partitioning system for large-graph computation, comprising a CPU, DRAM, NVM, and processing units. The processing units are computing units in a cluster; the CPU is communicatively connected to the DRAM, NVM, and processing units, and the DRAM is communicatively connected to the NVM. The CPU, DRAM, NVM, and processing units cache the large graph by the efficient graph partitioning method for large-graph computation of any one of claims 1-3.
Beneficial effects of the present invention:

1. The application need not consider whether the memory capacity is sufficient; compared with disk or SSD, partitioning efficiency is improved and approaches in-memory processing speed.

2. With the present invention, the entry keyed by the current vertex can be looked up directly, and efficiency is substantially improved.

3. The number of writes to NVM is substantially reduced, extending NVM's average lifetime.
Additional aspects and advantages of the invention will be set forth in part in the description below, will in part become apparent from that description, or will be learned through practice of the invention.
Brief description of the drawings
The above and/or additional aspects and advantages of the invention will become apparent and readily understood from the following description of the embodiments in conjunction with the accompanying drawings, in which:

Fig. 1 is the hybrid storage architecture diagram of the present invention;

Fig. 2 is an example of graph partitioning;

Fig. 3 is a schematic diagram of the vertex queue of a large graph;

Figs. 4-6 compare the cache capacity occupied during processing by the present invention and two other caching methods;

Figs. 7-9 compare the write counts of different replacement policies.
Embodiment
Embodiments of the invention are described in detail below; examples of the embodiments are shown in the drawings, where identical or similar reference numbers throughout denote identical or similar elements or elements with identical or similar functions. The embodiments described below with reference to the drawings are exemplary, intended only to explain the invention, and are not to be construed as limiting it.
In the description of the invention, unless otherwise specified and limited, it should be noted that the terms "mounted", "connected", and "connection" are to be understood broadly: a connection may, for example, be mechanical or electrical, internal to two elements, direct, or indirect through an intermediary. For those of ordinary skill in the art, the specific meaning of these terms can be understood according to the specific situation.
Fig. 1 shows a hybrid storage structure in which DRAM serves as the first-level cache and NVM as the second-level cache of the disk. Based on this architecture, the present invention provides an efficient graph partitioning method for large-graph computation, comprising the following steps:

S1: divide the graph data into multiple vertices, order the vertices randomly, and turn them into a queue.

S2: assign the first vertex, in queue order, to a partition, i.e. to a processing unit; here a processing unit is a computing unit in a cluster. After assignment, store the vertex's partition information as the value and each neighbour of the vertex as a key, as dictionary entries, in DRAM, or in NVM if the data capacity of DRAM exceeds the set threshold.
S3: for each subsequent vertex in the queue, first check whether DRAM or NVM holds an entry keyed by that vertex. If it exists, append the vertex's partition information directly to the corresponding entry in DRAM or NVM; that is, according to the value in the entry, assign the vertex to the corresponding processing unit. Each time a vertex is assigned, store its partition information as the value and its neighbours as keys, as dictionary entries, in the corresponding cache.

When a subsequent vertex in the queue is being assigned, if DRAM or NVM holds an entry keyed by that vertex, the entry is removed from the cache after the vertex has been assigned to the corresponding processing unit according to the entry's value.

If no such entry exists, assign the vertex to the least-loaded processing unit, then store the partition information of each assigned vertex as the value and its neighbours as keys, as dictionary entries, in DRAM, or in NVM if the data capacity of DRAM exceeds the set threshold, until every vertex in the queue has been assigned. The least-loaded processing unit is the partition with the fewest vertices.
During partition assignment, the vertex to be assigned is distributed to the partition containing the largest number of its neighbours. Concretely, partitioning can use the following formula:

ind = argmax_{1 ≤ i ≤ k} f1(v, S_i^t) · f2(S_i^t), with f1(v, S_i^t) = |N(v) ∩ S_i^t|

where N(v) denotes the neighbours of vertex v, S_i^t denotes the state of the partitions already formed at time t, n is the number of vertices of the graph, and k is the number of partitions. As the formula shows, each vertex is examined only once. The first factor, f1, is the number of neighbours of v in each partition at time t; to balance the partition loads, it is multiplied by the penalty function f2, where n/k is the average load. The partition with the maximal function value is selected and vertex v is assigned to it.
As shown in Table 1, the leftmost column lists the data sets; the hybrid storage structure of the invention is compared on partition time with other storage structures: non-volatile memory devices, disk, and SSD, with the DRAM cache capacity set to 5%, 9%, 13%, and 17% of the corresponding graph, respectively. All configurations incorporate the cache-content storage structure of the invention. It can be seen that the hybrid storage structure of the invention takes much less time than the other storage structures. With the DRAM capacity set unlimited and compared against the original partition time, the table data directly show the superiority of the cache-content structure introduced by the invention.
Table 1: comparison of partition time on different media
The partitioning process is illustrated below: the left graph of Fig. 2 is divided into three partitions using the above algorithm. The left graph of Fig. 2 is the raw representation of the graph, which exists as a file on the computer's disk and is loaded into memory. The computation proceeds as shown in Fig. 3 and Table 2: the large graph data is first divided into multiple vertices, which are randomly ordered into a queue, as shown in Fig. 3. Table 2 shows the content of the dynamically cached data (adjacency structure) while graph G is partitioned from time T to time T+6. At time T, the vertex with ID 1 (v1) is processed; after it is assigned, its partition information is saved in the cache as the value, keyed by its neighbours (v2 → S1, v3 → S1). At time T+1, vertex v3 is computed; since the cache already holds the partition situation of v3's neighbours (v3 → S1), the partition for this vertex is computed directly from this entry and v3 is assigned. Its entry (v3 → S1) has thereby become redundant data and is deleted directly. This continues until all vertices are assigned and the algorithm terminates.

A vertex can only be assigned to one partition, but a key in the cache may correspond to multiple values; the assignment is computed from the number of already-assigned neighbours. For example, (v2 → S1) does not mean that v2 is assigned to S1; it only records that one of v2's neighbours is in S1. Likewise, (v4: S2, S3) indicates that v4 has one neighbour in partition S2 and one in S3; with this information, the above formula can be evaluated. In computing terms, the value is the data that needs to be stored and the key is the identifier under which it is stored.
Table 2: content of the dynamically cached data while graph G is partitioned from time T to time T+6
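The T to T+1 transition described above can be traced with a toy cache. The helper below, and its direct use of the first hint as the target partition, are simplifications of the patent's scoring step, kept only to show how entries appear and disappear.

```python
cache = {}   # neighbour vertex -> list of partitions its placed neighbours use

def assign(vertex, partition, neighbours, placed):
    """Place `vertex`, record hint entries for its unplaced neighbours,
    then drop the (now redundant) entry keyed by `vertex` itself."""
    placed[vertex] = partition
    for u in neighbours:
        if u not in placed:
            cache.setdefault(u, []).append(partition)
    cache.pop(vertex, None)   # entry keyed by v is redundant once v is placed

placed = {}
assign('v1', 'S1', ['v2', 'v3'], placed)     # time T
# cache now holds {'v2': ['S1'], 'v3': ['S1']}
hint = cache['v3']                            # time T+1: entry (v3 -> S1) found
assign('v3', hint[0], ['v1', 'v2'], placed)   # v3 goes to S1; its entry is deleted
```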
Figs. 4-6 show, for three cache-content storage structures, the cached data capacity as a percentage of the graph scale as partitioning proceeds. OCS denotes the cache-content storage structure of the invention; OCSnodel denotes the invention's cache-content storage structure without the delete operation; NPS denotes the storage structure of the original algorithm, whose basic entry is a vertex and its partition information. As the figures show, the scale of the NPS and OCSnodel structures keeps growing as processing proceeds, and OCSnodel eventually reaches 100% of the original graph scale. With the storage structure OCS of the invention, the capacity peaks and then decreases continuously as vertices are processed. The maximum storage capacity required by NPS is smaller than that of OCS and OCSnodel, but the required maximum still approaches 30%, which for TB-scale large graphs is clearly beyond what an ordinary single machine can currently handle.
To reduce the number of writes to NVM, a cyclic least-recently-used data migration strategy is adopted. In the graph partitioning process, whenever an entry is created or updated, the entry's timestamp is updated synchronously.

When the data capacity in DRAM exceeds the set threshold, the cached entries are ordered by timestamp and the entries with the earliest timestamps are moved into NVM.

When the cached data in DRAM falls below the set threshold and NVM is not empty, the entries with the latest timestamps in NVM are moved back into DRAM.
As shown in Figs. 7-9, the DRAM cache capacity is set to a percentage of the corresponding graph (5%, 10%, 15%). When the cached data in DRAM exceeds the specified scale, data is written into NVM. To verify the superiority of this strategy, comparison experiments were also run against other strategies.

Random is the random policy: when the memory cache is full, randomly selected data is written into NVM. LFU is the least-frequently-used policy: a counter is added to each entry, the entry-update count is the base metric, and the entries with the lowest counts are moved into NVM. LRU is the least-recently-used policy: a timestamp is added to each entry, and the entries with the oldest timestamps are moved into NVM. LRUloop is the cyclic least-recently-used policy: its strategy for moving cached data from DRAM into NVM is the same as LRU, with one difference. As can be seen from Fig. 3, once processing reaches roughly the 0.05%-0.2% position in the vertex queue, the required cache capacity decreases continuously, so spare space appears in DRAM. The invention uses this space to introduce a write-back policy: when the spare space grows to a certain scale, the entries in NVM with the most recent timestamps are selected and moved back into DRAM, which further reduces the write count. The results in Fig. 3 embody the superiority of this strategy; its write count is far lower than the other strategies'.
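The compared policies differ mainly in how the victim entry pushed from DRAM to NVM is chosen. A minimal sketch, under the assumption that each entry is stored as key -> (update_count, timestamp):

```python
import random

def random_victim(entries, rng=random):
    """Random policy: evict an arbitrary entry."""
    return rng.choice(sorted(entries))

def lfu_victim(entries):
    """LFU: evict the entry with the fewest updates (entries[k][0])."""
    return min(entries, key=lambda k: entries[k][0])

def lru_victim(entries):
    """LRU, and the DRAM-to-NVM side of LRUloop: evict the entry with the
    oldest timestamp (entries[k][1])."""
    return min(entries, key=lambda k: entries[k][1])
```

LRUloop then adds the write-back step on top of `lru_victim`: when DRAM has spare capacity, the NVM entries with the newest timestamps are promoted back, so fewer evictions (and hence fewer NVM writes) happen later.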
The invention also provides an efficient graph partitioning system for large-graph computation, comprising a CPU, DRAM, NVM, and processing units. The processing units are computing units in a cluster; the CPU is communicatively connected to the DRAM, NVM, and processing units, and the DRAM is communicatively connected to the NVM. The CPU, DRAM, NVM, and processing units cache the large graph by the efficient graph partitioning method for large-graph computation described above.
In the description of this specification, reference to the terms "one embodiment", "some embodiments", "example", "specific example", or "some examples" means that a specific feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, schematic uses of these terms do not necessarily refer to the same embodiment or example. Moreover, the specific features, structures, materials, or characteristics described may be combined in any suitable manner in one or more embodiments or examples.
Although embodiments of the invention have been shown and described, those skilled in the art will understand that various changes, modifications, substitutions, and variations can be made to these embodiments without departing from the principle and purpose of the invention; the scope of the invention is defined by the claims and their equivalents.
Claims (6)
1. An efficient graph partitioning method for large-graph computation, characterized by comprising the following steps:
S1: dividing the graph data into multiple vertices, ordering the vertices randomly, and turning them into a queue;
S2: assigning the first vertex, in queue order, to a partition, i.e. to a processing unit; after assignment, storing the vertex's partition information as the value and each neighbour of the vertex as a key, as dictionary entries, in DRAM, or in NVM if the data capacity of DRAM exceeds a set threshold;
S3: for each subsequent vertex in the queue, first checking whether DRAM or NVM holds an entry keyed by that vertex; if it exists, appending the vertex's partition information directly to the corresponding entry in DRAM or NVM, i.e., according to the value in the entry, assigning the vertex to the corresponding processing unit, and, each time a vertex is assigned, storing its partition information as the value and its neighbours as keys, as dictionary entries, in the corresponding cache;
if no entry keyed by that vertex exists in DRAM or NVM, assigning the vertex to the least-loaded processing unit, then storing the partition information of each assigned vertex as the value and its neighbours as keys, as dictionary entries, in DRAM, or in NVM if the data capacity of DRAM exceeds the set threshold, until every vertex in the queue has been assigned, wherein the least-loaded processing unit is the partition with the fewest vertices.
2. The efficient graph partitioning method for large-graph computation according to claim 1, characterized in that, during partition assignment, the vertex to be assigned is distributed to the partition containing the largest number of its neighbours.
3. The efficient graph partitioning method for large-graph computation according to claim 1, characterized in that partitioning is performed using the following formula: ind = argmax_{1 ≤ i ≤ k} f1(v, S_i^t) · f2(S_i^t), with f1(v, S_i^t) = |N(v) ∩ S_i^t|, where S_ind is the partition vertex v is assigned to, N(v) denotes the neighbours of vertex v, S_i^t denotes the state of the partitions already formed at time t, n is the number of vertices of the graph, k is the number of partitions, f1 is the number of neighbours of v in each partition at time t, f2 is a penalty function, and n/k is the average load.
4. The efficient graph partitioning method for large-graph computation according to claim 1, characterized in that, in the graph partitioning process, whenever an entry is created or updated, the entry's timestamp is updated synchronously; when the data capacity in DRAM exceeds the set threshold, the cached entries are ordered by timestamp and the entries with the earliest timestamps are moved into NVM; and when the cached data in DRAM falls below the set threshold and NVM is not empty, the entries with the latest timestamps in NVM are moved back into DRAM.
5. The efficient graph partitioning method for large-graph computation according to claim 1, characterized in that, when a subsequent vertex in the queue is being assigned, if DRAM or NVM holds an entry keyed by that vertex, the entry is removed from the cache after the vertex has been assigned to the corresponding processing unit according to the entry's value.
6. An efficient graph partitioning system for large-graph computation, characterized by comprising a CPU, DRAM, NVM, and processing units, the processing units being computing units in a cluster, the CPU being communicatively connected to the DRAM, NVM, and processing units, and the DRAM being communicatively connected to the NVM, wherein the CPU, DRAM, NVM, and processing units cache the large graph by the efficient graph partitioning method for large-graph computation of any one of claims 1-5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711375929.2A CN107957962A (en) | 2017-12-19 | 2017-12-19 | Graph partitioning method and system for efficient large-graph computation |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711375929.2A CN107957962A (en) | 2017-12-19 | 2017-12-19 | Graph partitioning method and system for efficient large-graph computation |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107957962A true CN107957962A (en) | 2018-04-24 |
Family
ID=61959253
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711375929.2A Pending CN107957962A (en) | 2017-12-19 | 2017-12-19 | Graph partitioning method and system for efficient large-graph computation |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107957962A (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109522428A (en) * | 2018-09-17 | 2019-03-26 | 华中科技大学 | External memory access method of a graph computing system based on index positioning |
CN111209106A (en) * | 2019-12-25 | 2020-05-29 | 北京航空航天大学杭州创新研究院 | Streaming graph partitioning method and system based on cache mechanism |
CN111292225A (en) * | 2018-12-07 | 2020-06-16 | 三星电子株式会社 | Partitioning graphics data for large-scale graphics processing |
CN112912865A (en) * | 2018-07-27 | 2021-06-04 | 浙江天猫技术有限公司 | Graph data storage method and system and electronic equipment |
CN113377523A (en) * | 2021-01-13 | 2021-09-10 | 绍兴文理学院 | Heterogeneous sensing stream graph partitioning method |
CN112912865B (en) * | 2018-07-27 | 2024-06-07 | 浙江天猫技术有限公司 | Graph data storage method and system and electronic equipment |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140379985A1 (en) * | 2013-06-25 | 2014-12-25 | International Business Machines Corporation | Multi-level aggregation techniques for memory hierarchies |
CN104778126A (en) * | 2015-04-20 | 2015-07-15 | 清华大学 | Method and system for optimizing transaction data storage in non-volatile memory |
- 2017-12-19 CN CN201711375929.2A patent/CN107957962A/en active Pending
Non-Patent Citations (1)

Title
---
QI LI et al.: "Streaming Graph Partitioning for Large Graphs with Limited Memory", IEEE
Similar Documents

Publication | Title
---|---
CN104115133B | Method, system and device for data migration in composite non-volatile storage devices
CN107943867B | High-performance hierarchical storage system supporting heterogeneous storage
CN107957962A | A graph partitioning method and system for efficient large-scale graph computation
US9805048B2 | System and method for managing a deduplication table
CN104794070B | Solid-state flash memory write buffer system and method based on dynamic non-overwrite RAID technology
US9934231B2 | System and methods for prioritizing data in a cache
CN102760101B | SSD-based cache management method and system
EP2519883B1 | Efficient use of hybrid media in cache architectures
CN107066393A | Method for improving mapping information density in an address mapping table
CN103907100B | Cache data storage system and method for storing fill data therein
US6857045B2 | Method and system for updating data in a compressed read cache
CN101814044B | Method and device for processing metadata
CN104834607B | Method for improving distributed cache hit rate and reducing solid-state disk wear
CN102779096B | Page-, block- and plane-based three-dimensional flash memory address mapping method
CN108762671A | Hybrid memory system based on PCM and DRAM and management method thereof
CN102768645B | Hybrid-cache prefetching method for solid-state disks (SSDs)
CN107924291B | Storage system
CN104503703B | Cache processing method and apparatus
CN105389135B | Internal cache management method for solid-state disks
CN103942161B | Redundancy elimination system and method for read-only cache
CN105787037B | Duplicate data deletion method and device
CN106095342A | Construction method and system for shingled magnetic recording disk arrays with dynamically variable stripe length
CN108845957B | Adaptive buffer management method for replacement and write-back
CN106873912A | Dynamic partition storage method, device and system for TLC-chip solid-state disks
CN111580754B | Write-friendly flash solid-state disk cache management method
Legal Events

Date | Code | Title | Description
---|---|---|---
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 2018-04-24