CN104834482A - Hybrid buffer
- Publication number
- CN104834482A CN104834482A CN201510219079.1A CN201510219079A CN104834482A CN 104834482 A CN104834482 A CN 104834482A CN 201510219079 A CN201510219079 A CN 201510219079A CN 104834482 A CN104834482 A CN 104834482A
- Authority
- CN
- China
- Prior art keywords
- memory
- data
- processor
- nonvolatile memory
- hybrid
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Memory System Of A Hierarchy Structure (AREA)
Abstract
The invention provides a hybrid buffer applied in the storage structure of a computer. The hybrid buffer comprises a dynamic random access memory composed of a plurality of dynamic random access memory chips, and a nonvolatile memory composed of a plurality of nonvolatile memory chips, wherein the dynamic random access memory and the nonvolatile memory are arranged in a parallel structure or a serial structure for buffering or storing data. A self-learning module in the last-level hybrid cache learns the behavior or usage habits of the current user over a period of time and, according to that behavior or those habits, dynamically configures the structure formed by the embedded dynamic random access memory and the 3D novel nonvolatile memory in the last-level hybrid cache, so that the system achieves optimal energy efficiency.
Description
Technical field
The present invention relates to the technical field of memory devices, and in particular to a hybrid cache device.
Background art
With the development of computer-related technologies, computers have been applied to every aspect of life. A computer is generally composed of a processor, memory, input/output devices and other parts, of which the processor and the memory are the most important: the processor processes data, while the memory buffers and stores data. Early computer storage structures consisted mainly of a processor and a single memory. With the development of integrated circuits, however, the speed at which the processor processes data has grown ever faster. Although the read/write speed of the corresponding memory has also increased, the gap between the processor speed and the memory read/write speed keeps widening; in other words, the delay in transferring data from the memory to the processor keeps increasing, and the corresponding power consumption grows as well.
To address the delay and power problems caused by the gap between processor speed and memory read/write speed, the computer storage structure that is now widely used was proposed. It consists of a processor, an on-chip cache, a main memory and an off-chip mass storage. The on-chip cache is generally implemented with static random access memory (SRAM), the main memory with dynamic random access memory (DRAM), and the off-chip mass storage may be a hard disk drive (HDD) or a solid-state drive (SSD). In current computer systems, the speed of reading data from the on-chip cache, the main memory and the off-chip mass storage decreases in that order, while the delay and the power consumed per read increase in that order; the storage density of the on-chip cache, the main memory and the off-chip mass storage increases in that order, and the corresponding cost decreases in that order. Therefore, when the processor needs data that resides in the off-chip mass storage, the data is generally first read from the off-chip mass storage into the main memory, then from the main memory into the on-chip cache, and finally the processor reads it directly from the on-chip cache.
In the computer systems widely used today, the cost of the on-chip cache is high, so its storage density is generally small. As a result, a large amount of data moves between the on-chip cache and the main memory, consuming a large amount of energy. Moreover, because the main memory is implemented with dynamic random access memory, whose stored information can be lost due to leakage current, the DRAM must perform a self-refresh operation at regular intervals to keep the stored information intact, and this self-refresh adds extra power consumption. The storage density ratio among the on-chip cache, the main memory and the large-capacity off-chip memory thus determines the performance of the computer. If the off-chip memory is very large while the on-chip cache and main memory are small, a large amount of data moves among the off-chip memory, the main memory and the on-chip cache, so the processor suffers large delays when reading and writing data and consumes more energy. If the storage densities of the on-chip cache and the main memory are increased instead, the read/write delay and power consumption of the processor are reduced, but the higher cost of the on-chip cache and the main memory raises the system cost, and the self-refresh of the larger main memory adds further power consumption to the system.
Summary of the invention
In view of the above technical problems, the present application provides a hybrid memory applied in the storage structure of a computer, the hybrid memory comprising:
a dynamic random access memory, comprising several dynamic random access memory chips;
a nonvolatile memory, comprising several nonvolatile memory chips, wherein the dynamic random access memory and the nonvolatile memory form a parallel structure or a serial structure for buffering or storing data; and
a self-learning module, connected to the dynamic random access memory and the nonvolatile memory respectively, for periodically inspecting and learning the operating data and usage habits of the computer user and, according to the learning result, controlling whether the dynamic random access memory and the nonvolatile memory form a parallel structure or a serial structure.
Preferably, according to the learning result, the self-learning module stores the data most frequently accessed by the computer user over a period of time in the nonvolatile memory.
Preferably, the self-learning module is arranged inside or outside the hybrid memory and is implemented by a hardware circuit or by software.
Preferably, the storage structure of the computer further comprises a processor and an on-chip cache, and the hybrid memory is connected to the processor through the on-chip cache for buffering or storing data.
Preferably, when the dynamic random access memory and the nonvolatile memory form a serial structure, the dynamic random access memory is connected to the processor through the on-chip cache for buffering or storing data.
Preferably, when the dynamic random access memory and the nonvolatile memory form a parallel structure, both the dynamic random access memory and the nonvolatile memory are connected to the processor through the on-chip cache for buffering or storing data.
Preferably, the nonvolatile memory is a 3D novel nonvolatile memory.
Preferably, the dynamic random access memory is an embedded dynamic random access memory.
The present invention further provides a computer storage structure comprising the hybrid memory, for storing data of the computer.
The present invention further provides a last-level hybrid cache comprising the hybrid memory, for caching data of the computer.
In summary, by adopting the above technical solution, the present application has the following beneficial effects. The self-learning module in the last-level hybrid cache learns the behavior or usage habits of the current user over a period of time and, according to that behavior or those habits, dynamically configures the structure formed by the embedded dynamic random access memory and the 3D novel nonvolatile memory in the last-level hybrid cache, so that the system achieves optimal energy efficiency. The embedded dynamic random access memory and the 3D novel nonvolatile memory are combined as the processor's last-level cache. Because the 3D novel nonvolatile memory is fabricated with a 3D process, its storage density can be very large, so a large amount of data related to a specific user's habits over a period of time can be stored in the 3D novel nonvolatile memory. This reduces data movement when the processor reads and writes such data, and therefore reduces the delay and power consumption caused by data movement. In addition, a self-learning module is added to the last-level hybrid cache; it periodically inspects and learns over a period of time and stores the applications or data most frequently used by the specific user during that period in the 3D novel nonvolatile memory, reducing the delay and power consumption of the processor when handling those applications and data.
Brief description of the drawings
Fig. 1 shows the structure of the last-level hybrid cache proposed by the present invention;
Fig. 2 shows the serial structure of the last-level hybrid cache proposed by the present invention;
Fig. 3 shows the parallel structure of the last-level hybrid cache proposed by the present invention;
Fig. 4 is a schematic diagram of the self-learning module configuring the last-level hybrid cache according to the usage habits of different users;
Figs. 5a and 5b are schematic diagrams of the self-learning module configuring the last-level hybrid cache according to different applications of the same user;
Figs. 6a and 6b are schematic diagrams of the self-learning module configuring the last-level hybrid cache according to different subroutines of the same application of the same user;
Fig. 7 is a schematic diagram of example case 1;
Fig. 8 is a schematic diagram of example case 1;
Fig. 9 is a schematic diagram of an example of the method proposed by the present invention;
Fig. 10 shows the computer storage structure proposed by Intel Corporation;
Fig. 11 shows the computer storage structure proposed by IBM Corporation;
Fig. 12 shows the computer storage structure proposed by Micron Technology and Hynix;
Fig. 13 shows the structure of a current data center;
Fig. 14 is a schematic diagram of a data center using the last-level hybrid cache proposed by the present invention;
Fig. 15 shows the serial structure of the hybrid main memory;
Fig. 16 shows the parallel structure of the hybrid main memory;
Fig. 17 shows the structure of the hybrid main memory with the self-learning module added;
Figs. 18a and 18b are schematic diagrams of the self-learning module configuring the hybrid main memory according to the usage habits of different users;
Figs. 19a and 19b are schematic diagrams of the self-learning module configuring the hybrid main memory according to different applications of the same user;
Figs. 20a and 20b are schematic diagrams of the self-learning module configuring the hybrid main memory according to different subroutines of the same application of the same user.
Embodiments
The specific embodiments of the present invention are further described below with reference to the accompanying drawings.
A last-level hybrid cache has the concrete structure shown in Fig. 1. The last-level hybrid cache is composed of an embedded dynamic random access memory 6_1, an ultra-high-density 3D novel nonvolatile memory 6_2 and a self-learning module 6_3, wherein the embedded dynamic random access memory 6_1 is composed of M (M >= 1) embedded dynamic random access memory chips, and the 3D novel nonvolatile memory 6_2 is composed of N (N >= 1) 3D novel nonvolatile memory chips. The storage density of current embedded dynamic random access memory is already very high: for example, under Intel's 22 nm process the capacity of each embedded DRAM chip can reach 1 Gb, and under the subsequent 14 nm FinFET process the capacity of each embedded DRAM chip can reach several Gb.
The 3D novel nonvolatile memory 6_2 may be a 3D phase-change memory (3D PCM), whose technology is gradually maturing, or another type of 3D novel nonvolatile memory. The 3D novel nonvolatile memory 6_2 refers to a nonvolatile memory fabricated by a 3D process, rather than one stacked by a 3D packaging technology. The 3D novel nonvolatile memory 6_2 mentioned in the present invention has the following advantages:
Because the 3D novel nonvolatile memory 6_2 is fabricated by a 3D process, the storage density of each chip can be made very large; for example, the 3D phase-change memory developed by Intel Corporation reaches a capacity of 128 Gb or 256 Gb per chip, and even higher capacities, on the order of Tb, are expected in the near future;
Because the 3D novel nonvolatile memory 6_2 is nonvolatile, it does not need to perform self-refresh operations as a dynamic random access memory does, so its power consumption is greatly reduced;
The 3D novel nonvolatile memory 6_2 is packaged with multi-chip module (MCM) packaging technology, so it is not subject to the heat-dissipation limits encountered by, for example, the hybrid memory cube (HMC) and high-bandwidth memory (HBM);
The random read/write speed of the 3D novel nonvolatile memory 6_2 is fast, with random writes of, for example, 200 ns to 400 ns, so the delay of the processor when randomly reading or writing the last-level hybrid cache is reduced.
The self-learning module 6_3 in the last-level hybrid cache periodically inspects and learns the behavior or usage habits of a specific user over a certain period of time, and thereby determines which data should be stored in the 3D novel nonvolatile memory 6_2; the applications of the specific user, or the data that the specific user will use, are stored in the 3D novel nonvolatile memory 6_2. For example, for user X, the self-learning module 6_3 in the last-level hybrid cache finds, after inspecting and learning for a certain period, that the application user X uses most is application X_1, so application X_1 is stored in the 3D novel nonvolatile memory 6_2; for user Y, the self-learning module 6_3 finds after inspecting and learning for a certain period that the data user Y accesses most frequently is data Y_1, so data Y_1 is stored in the 3D novel nonvolatile memory 6_2.
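The disclosure does not give an implementation of this periodic learning, but its behavior can be sketched as a simple access-frequency policy. The sketch below is illustrative only; the class and method names (SelfLearningModule, record_access, select_for_nvm) and the counter-based heuristic are assumptions, not part of the patent.

```python
from collections import Counter

class SelfLearningModule:
    """Illustrative sketch: periodically pin the hottest items into the 3D NVM."""

    def __init__(self, nvm_capacity_bytes):
        self.nvm_capacity = nvm_capacity_bytes
        self.access_counts = Counter()   # item id -> number of accesses in this window
        self.item_sizes = {}             # item id -> size in bytes

    def record_access(self, item_id, size_bytes):
        # Called on every access during the observation window.
        self.access_counts[item_id] += 1
        self.item_sizes[item_id] = size_bytes

    def select_for_nvm(self):
        # At the end of each window, greedily keep the most frequently used
        # applications/data that fit into the 3D nonvolatile memory.
        pinned, used = [], 0
        for item_id, _ in self.access_counts.most_common():
            size = self.item_sizes[item_id]
            if used + size <= self.nvm_capacity:
                pinned.append(item_id)
                used += size
        self.access_counts.clear()       # start a new observation window
        return pinned
```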
The embedded dynamic random access memory 6_1 and the 3D novel nonvolatile memory 6_2 in the last-level hybrid cache can be arranged in two structures: a serial structure and a parallel structure. The schematic diagram of the serial structure is shown in Fig. 2, where 7_1 is the processor, 7_2 is the preceding on-chip cache, 7_3 is the last-level hybrid cache proposed by the present invention, 7_3_1 is the embedded dynamic random access memory and 7_3_2 is the 3D novel nonvolatile memory. In the serial structure, the embedded DRAM 7_3_1 serves as a buffer for the 3D novel nonvolatile memory 7_3_2, and the addressable space of the last-level hybrid cache 7_3 is the 3D novel nonvolatile memory 7_3_2. One benefit of the serial structure is that it effectively reduces the number of write operations to the 3D novel nonvolatile memory 7_3_2, which equivalently improves its endurance. The schematic diagram of the parallel structure is shown in Fig. 3, where 8_1 is the processor, 8_2 is the preceding on-chip cache, 8_3 is the last-level hybrid cache proposed by the present invention, 8_3_1 is the embedded dynamic random access memory and 8_3_2 is the 3D novel nonvolatile memory. In the parallel structure, the addressable space of the last-level hybrid cache 8_3 is the embedded DRAM 8_3_1 together with the 3D novel nonvolatile memory 8_3_2: for example, data that the processor 8_1 reads and writes more frequently is stored in the embedded DRAM 8_3_1, while data that the processor 8_1 reads and writes less frequently is stored in the 3D novel nonvolatile memory 8_3_2, "less frequently" here being relative to the frequently accessed data stored in the embedded DRAM 8_3_1. The self-learning module in the last-level hybrid cache learns the behavior or usage habits of the current user over a certain period and dynamically configures the structure of the embedded DRAM and the 3D novel nonvolatile memory in the last-level hybrid cache accordingly, so that the system achieves optimal energy efficiency. For example, at power-on the embedded DRAM and the 3D novel nonvolatile memory in the last-level hybrid cache form a serial structure; after inspecting and learning the behavior or usage habits of the current user for a while, the self-learning module may adjust this structure to a parallel structure. Below, the method by which the self-learning module dynamically adjusts the structure of the last-level hybrid cache is analyzed at several levels:
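As an illustrative sketch of the two structures (not taken from the patent; names such as read_serial and read_parallel are hypothetical), a serial arrangement checks the eDRAM buffer first and falls back to the 3D NVM, while a parallel arrangement addresses both memories directly based on where the data was placed:

```python
def read_serial(addr, edram, nvm):
    # Serial structure: the eDRAM acts as a buffer in front of the 3D NVM,
    # and the addressable space of the cache is the 3D NVM.
    if addr in edram:
        return edram[addr]
    value = nvm[addr]            # miss in eDRAM -> fetch from the 3D NVM
    edram[addr] = value          # fill the buffer so later accesses hit the eDRAM,
    return value                 # reducing write traffic to the NVM (endurance)

def read_parallel(addr, edram, nvm, hot_addresses):
    # Parallel structure: both memories are directly addressable.
    # Frequently accessed data lives in eDRAM, the rest in the 3D NVM.
    return edram[addr] if addr in hot_addresses else nvm[addr]
```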
For different users, for example user X and user Y: through learning over a period of time, the self-learning module finds, from the usage habits or behavior of user X, that the system energy efficiency is highest when the last-level hybrid cache is configured as a serial structure while user X is using the system, and finds, from the usage habits or behavior of user Y, that the system energy efficiency is highest when the last-level hybrid cache is configured as a parallel structure while user Y is using the system. Therefore, when user X is using the system, the self-learning module configures the last-level hybrid cache as a serial structure, and when user Y is using the system, it configures the last-level hybrid cache as a parallel structure, as shown in Fig. 4.
For the same user, the last-level hybrid cache should adopt different structures when the processor handles different applications. Suppose user X runs M applications, namely application X_1, application X_2, ..., application X_M, where M >= 1. By learning the usage habits or behavior of the current user over a certain period, the self-learning module finds that the system energy efficiency is highest when the last-level hybrid cache adopts the serial structure while the processor executes applications X_1, X_2, ..., X_H, and highest when it adopts the parallel structure while the processor executes applications X_(H+1), ..., X_M (M > H+1). Therefore, when the processor executes application X_1, the self-learning module configures the last-level hybrid cache as a serial structure, and when the processor executes application X_M, it configures the last-level hybrid cache as a parallel structure, as shown in Figs. 5a and 5b.
For the same application of the same user: since each application is composed of several subroutines, the last-level hybrid cache should also adopt different structures when the processor executes different subroutines of the same application. Suppose application A of user X has N subroutines, namely subroutine A_1, subroutine A_2, ..., subroutine A_N, where N >= 1. After learning for a certain period, the self-learning module finds that the system energy efficiency is highest when the last-level hybrid cache adopts the serial structure while the processor executes subroutines A_1, A_2, ..., A_E (E >= 1), and highest when it adopts the parallel structure while the processor executes subroutines A_(E+1), ..., A_N (N >= E+1). Therefore, when the processor executes subroutine A_2, the self-learning module configures the last-level hybrid cache as a serial structure, and when the processor executes subroutine A_(E+1), it configures the last-level hybrid cache as a parallel structure, as shown in Figs. 6a and 6b.
The method by which the self-learning module dynamically adjusts the structure of the last-level hybrid cache also applies to combinations of the above three cases. For example, after a period of self-learning, the self-learning module finds that the system energy efficiency is highest when the last-level hybrid cache adopts the serial structure while the processor handles subroutine Y_1_2 of application Y_1 of user Y, and highest when it adopts the parallel structure while the processor handles subroutine X_3_1 of application X_3 of user X. Therefore, when the processor handles subroutine Y_1_2 of application Y_1 of user Y, the self-learning module configures the last-level hybrid cache as a serial structure, and when the processor handles subroutine X_3_1 of application X_3 of user X, it configures the last-level hybrid cache as a parallel structure.
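The three levels just described (per user, per application, per subroutine) can be viewed as one learned lookup policy in which the most specific rule wins. The following sketch is only an illustration of that idea under assumed names (StructureTable, choose_structure); the patent itself does not specify a data structure.

```python
SERIAL, PARALLEL = "serial", "parallel"

class StructureTable:
    """Illustrative sketch: map (user, application, subroutine) to the learned structure."""

    def __init__(self, default=SERIAL):
        self.default = default          # e.g. serial structure right after power-on
        self.rules = {}                 # learned keys -> SERIAL or PARALLEL

    def learn(self, user, app=None, sub=None, structure=SERIAL):
        self.rules[(user, app, sub)] = structure

    def choose_structure(self, user, app, sub):
        # Most specific rule wins: subroutine level, then application, then user.
        for key in ((user, app, sub), (user, app, None), (user, None, None)):
            if key in self.rules:
                return self.rules[key]
        return self.default

# Example from the description: subroutine Y_1_2 of user Y runs best in serial,
# subroutine X_3_1 of user X runs best in parallel.
table = StructureTable()
table.learn("Y", "Y_1", "Y_1_2", SERIAL)
table.learn("X", "X_3", "X_3_1", PARALLEL)
assert table.choose_structure("X", "X_3", "X_3_1") == PARALLEL
```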
The embedded dynamic random access memory in the last-level hybrid cache proposed by the present invention is composed of M (M >= 1) embedded dynamic random access memory chips, and the 3D novel nonvolatile memory is composed of N (N >= 1) 3D novel nonvolatile memory chips. Therefore, based on learning over a certain period, the self-learning module can, according to the usage habits or behavior of a specific user, combine some or all of the embedded DRAM chips with some or all of the 3D novel nonvolatile memory chips into serial structures or parallel structures, so as to maximize the system energy efficiency. For example, a server has X processors (a multi-processor server), namely processor_1, processor_2, ..., processor_X, where X >= 1. The current user Z has i applications to process, namely App_1, App_2, ..., App_i, and accordingly the X processors are divided into i processor groups, namely processor group_1, processor group_2, ..., processor group_i, where processor group_1 has Y1 processors, processor group_2 has Y2 processors, ..., and processor group_i has Yi processors, with Y1 + Y2 + ... + Yi = X; processor group_1 handles App_1, processor group_2 handles App_2, ..., and processor group_i handles App_i. After learning for a certain period, the self-learning module finds that the energy efficiency of processor group_1 is highest when, while it processes App_1, A_1 embedded DRAM chips and A_2 3D novel nonvolatile memory chips in the last-level hybrid cache form a serial structure; that the energy efficiency of processor group_2 is highest when, while it processes App_2, B_1 embedded DRAM chips and B_2 3D novel nonvolatile memory chips form a parallel structure; and that the energy efficiency of processor group_i is highest when, while it processes App_i, i_1 embedded DRAM chips and i_2 3D novel nonvolatile memory chips form a serial structure. Therefore, the self-learning module configures A_1 embedded DRAM chips and A_2 3D novel nonvolatile memory chips in the last-level hybrid cache as a serial structure when processor group_1 processes App_1, configures B_1 embedded DRAM chips and B_2 3D novel nonvolatile memory chips as a parallel structure when processor group_2 processes App_2, and configures i_1 embedded DRAM chips and i_2 3D novel nonvolatile memory chips as a serial structure when processor group_i processes App_i. The methods described above for analyzing the dynamic adjustment of the last-level hybrid cache structure at different levels, and their combinations, also apply to this example and are not repeated here.
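A per-processor-group allocation like the one above can be sketched as a small record of how many eDRAM and 3D NVM chips each group receives and in which structure. This is an illustrative sketch only; GroupPlan and allocate are assumed names and the patent does not define such an interface.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class GroupPlan:
    group: str          # e.g. "processor group_1"
    app: str            # application handled by this group
    edram_chips: int    # number of embedded DRAM chips assigned (A_1, B_1, ...)
    nvm_chips: int      # number of 3D NVM chips assigned (A_2, B_2, ...)
    structure: str      # "serial" or "parallel", as learned

def allocate(plans: List[GroupPlan], total_edram: int, total_nvm: int) -> List[GroupPlan]:
    # Sanity check that the learned per-group assignments fit the available chips.
    if sum(p.edram_chips for p in plans) > total_edram:
        raise ValueError("not enough embedded DRAM chips for the learned plan")
    if sum(p.nvm_chips for p in plans) > total_nvm:
        raise ValueError("not enough 3D NVM chips for the learned plan")
    return plans

# Example mirroring the description: group_1 serial, group_2 parallel.
plans = allocate(
    [GroupPlan("processor group_1", "App_1", 2, 1, "serial"),
     GroupPlan("processor group_2", "App_2", 1, 2, "parallel")],
    total_edram=4, total_nvm=4)
```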
The self-learning module 6_3 of the last-level hybrid cache may be implemented either inside or outside the last-level hybrid cache, and it may be implemented by a hardware circuit or by software.
An example of a further application of the self-learning module of the last-level hybrid cache is given below. A certain company X uses L applications for research and development, namely application X_1, application X_2, ..., application X_L, where L >= 1, and these L applications occupy a storage capacity of M (M > 0). The data that company X needs to process is data Y, which occupies a storage capacity of N (N > 0). Company X leases a server whose main memory capacity is P. Several cases are analyzed below:
If the main memory capacity P of the server is relatively small, i.e. P < M + N, the L applications and the data Y are initially stored in the off-chip mass storage. Suppose here that the main memory can hold only one application, and that the application the server is currently processing is application X_1, i.e. the application stored in the main memory is application X_1, as shown in Fig. 7. When the processor needs to process an application or data that is not in the main memory, for example application X_3, application X_3 is moved from the large-capacity off-chip storage into the main memory, then from the main memory into the on-chip cache, and only then is it executed by the processor, as shown in Fig. 8. Moving application X_3 in this way consumes a large amount of power, and the main memory must perform self-refresh operations at regular intervals, which also adds power consumption. At the same time, because application X_1, which was previously in the main memory, has been replaced by application X_3, the next time application X_1 is executed it must again be moved from the off-chip mass storage into the main memory and then from the main memory into the on-chip cache before the server can execute it.
If the data center company leasing out this server increases the main memory capacity so that it is greater than or equal to the sum of the application capacity M and the data capacity N, i.e. P >= M + N, then all applications and data can be placed in the main memory while the server runs. However, because the main memory capacity is then very large, the power consumed by main memory self-refresh is very large. Moreover, the maximum memory capacity each server can hold is limited: if the sum of the application capacity M and the data capacity N exceeds the maximum memory capacity of the current server, the data center company must buy more servers, which increases the cost of the data center.
With the ultra-high storage density of the last-level hybrid cache proposed by the present invention, the self-learning module in the last-level hybrid cache inspects and learns over a certain period and finds that during this period company X frequently uses H applications, namely application X_1, application X_2, ..., application X_H (L >= H >= 1), occupying a storage capacity Q (Q <= M), and that the data company X frequently processes is data T with capacity R (R <= N). Let the capacity of the 3D novel nonvolatile memory in the last-level hybrid cache be Z. If Z is greater than or equal to the sum of Q and R, i.e. Z >= R + Q, then all the applications and data that company X frequently processes are placed in the 3D novel nonvolatile memory of the last-level hybrid cache, and when the server executes or processes these applications and data it reads them directly from the 3D novel nonvolatile memory, as shown in Fig. 9. If Z is less than the sum of Q and R, i.e. Z < R + Q, then part of the applications and data the server frequently executes is stored in the 3D novel nonvolatile memory and the remaining part is placed in the main memory. In this way only a small amount of main memory is needed, which not only reduces the refresh power consumption of the main memory but also greatly reduces the operating cost of the data center company leasing out the server.
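The capacity reasoning in this example reduces to a simple placement rule. The sketch below illustrates it under assumed names (plan_placement and its return values); the thresholds Z, Q and R follow the description above, and the numbers in the usage line are made up for illustration.

```python
def plan_placement(z_nvm_capacity, q_hot_apps, r_hot_data):
    """Illustrative sketch of the placement rule in the leased-server example.

    z_nvm_capacity: capacity Z of the 3D NVM in the last-level hybrid cache
    q_hot_apps:     capacity Q of the applications company X uses most often
    r_hot_data:     capacity R of the data T company X processes most often
    """
    if z_nvm_capacity >= q_hot_apps + r_hot_data:
        # Z >= Q + R: everything frequently used fits in the 3D NVM (Fig. 9).
        return {"nvm": q_hot_apps + r_hot_data, "main_memory": 0}
    # Z < Q + R: fill the 3D NVM first, spill the remainder to a small main memory.
    spill = q_hot_apps + r_hot_data - z_nvm_capacity
    return {"nvm": z_nvm_capacity, "main_memory": spill}

# Example: 64 GB of NVM, 40 GB of hot applications, 30 GB of hot data -> 6 GB spills.
print(plan_placement(64, 40, 30))
```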
The computer storage structure using the last-level hybrid cache proposed by the present invention is compared below, by way of examples, with the several computer storage structures mentioned above.
Fig. 10 is a schematic diagram of the computer storage structure proposed by Intel Corporation; the present invention is compared with it here. With the arrival of the big-data era, processors have ever more data to process. Suppose the amount of data A currently to be processed by the computer is 5 GB. For a computer using the storage structure proposed by Intel, data A is too large to fit entirely into the embedded DRAM 3_3, so data A can only be placed in the main memory 3_4, which requires the main memory 3_4 to have a very large storage density. At the same time, data A still has to be moved from the main memory 3_4 to the cache 3_3 before being executed by the processor 3_1, so the read/write delay of the processor 3_1 becomes large and the power consumed also becomes large. Moreover, to keep data A intact the DRAM main memory 3_4 must perform self-refresh operations at regular intervals, which also brings a certain power consumption. If other data needs to be processed and causes data A in the embedded DRAM or the main memory 3_4 to be flushed out, then when data A needs to be processed again it must again be moved from the off-chip mass storage into the main memory and from the main memory into the cache, which brings a large amount of power consumption. In contrast, for a computer using the ultra-high-density hybrid last-level cache proposed by the present invention, the 3D novel nonvolatile memory 6_2 is added to the last-level cache, so the storage density of the cache is very large and data A can be stored in the hybrid last-level cache; when the processor needs to execute data A, it reads it directly from the hybrid last-level cache, which reduces data movement and therefore reduces the delay and power consumption of reading and writing data A. The main memory 3_4 can even be shut down completely or omitted. Furthermore, the last-level hybrid cache contains a self-learning module that periodically inspects and learns the usage habits of the current specific user and stores the data the current user processes most frequently in the 3D novel nonvolatile memory 6_2, so that data can be read directly from the last-level hybrid cache when it is executed, without repeatedly moving it from the off-chip memory into the main memory and from the main memory into the cache.
Fig. 11 is a schematic diagram of the computer storage structure proposed by IBM Corporation; the present invention is compared with it here. Suppose the amount of data A currently to be processed by the computer is 5 GB. In a computer using the hybrid main memory structure proposed by IBM, the storage density of the on-chip cache 4_2 is very small, so when the processor 4_1 reads and writes data A, data A still has to be moved from the main memory 4_3 into the on-chip cache 4_2 before being executed by the processor 4_1; in other words, data movement still exists, so the delay and power consumption caused by moving data A remain large. In addition, the hybrid main memory is generally connected to the processor chip through a printed circuit board, and the printed circuit board introduces a large RC delay, which further increases the power consumed in reading and writing data A. In a computer using the hybrid last-level cache proposed by the present invention, the storage density of the last-level hybrid cache is very large, so data A can be stored in the last-level hybrid cache rather than in the main memory; on the one hand this reduces data movement and therefore the power consumed by moving data A, and on the other hand it avoids the RC delay and power consumption introduced by the printed circuit board.
Fig. 12 is a schematic diagram of the computer storage structure proposed by Micron Technology and Hynix; the present invention is compared with it here. Although hybrid memory cube (HMC) technology or high-bandwidth memory (HBM) technology reduces the delay and power consumption of reading and writing data to a certain extent, the hybrid memory cube 5_4 and the high-bandwidth memory 5_4 are realized with stacked packaging technology and therefore suffer from heat-dissipation problems, which limit the increase of storage density. Suppose the amount of data A currently to be processed by the computer is very large and exceeds the capacity of the hybrid memory cube 5_4 or the high-bandwidth memory 5_4; more hybrid memory cubes 5_4 and/or high-bandwidth memories 5_4 are then needed to store data A. Although the hybrid memory cube 5_4 and/or the high-bandwidth memory 5_4 use 3D packaging technology, the increase in their number increases the area. Moreover, when data is stored in the hybrid memory cube 5_4 and/or the high-bandwidth memory 5_4, the processor 5_1 still needs to move data A from them into the on-chip cache 5_2 when executing it, which still increases the read/write delay of the processor 5_1 and therefore the read/write power consumption. A computer using the last-level hybrid cache proposed by the present invention does not have this problem: the 3D novel nonvolatile memory 6_2 is fabricated with a 3D process, its storage density can be made very large, data A can be stored entirely in the 3D novel nonvolatile memory 6_2, and no heat-dissipation problem arises. Furthermore, the last-level hybrid cache contains a self-learning module that periodically inspects and learns the usage habits of the current specific user and stores the data the current user processes most frequently in the 3D novel nonvolatile memory 6_2, so that data can be read directly from the last-level hybrid cache when it is executed, without repeatedly moving it from the off-chip memory into the main memory and from the main memory into the cache.
With the arrival of the big-data era, the storage and processing of data are both carried out in data centers. The basic structure of a current data center is shown in Fig. 13. In the figure, the processor 15_1 is generally an ARM processor or an Intel processor, mostly in multi-core form; the on-chip cache 15_2 generally has three levels, namely a Level_1 cache, a Level_2 cache and a Level_3 cache, all mainly implemented with static random access memory (SRAM); the last-level cache 15_3 is implemented with embedded DRAM (eDRAM); and the main memory 15_4 is implemented with dynamic random access memory chips using registered DIMM technology (R-DIMM, Registered Dual In-line Memory Modules) or fully buffered DIMM technology (FB-DIMM, Fully Buffered Dual In-line Memory Modules). The off-chip mass storage 15_5 is generally a solid-state drive (SSD) or a hard disk drive (HDD) used to store data, and tape (TAPE) 15_6 still exists as the lowest-cost off-chip storage, serving as the last-level off-chip storage medium.
Using the ultra-high-density last-level hybrid cache proposed by the present invention, a new data center structure is proposed, as shown in Fig. 14. The processor 16_1, the on-chip cache 16_2 and the tape (TAPE) 16_6 in the figure are the same as in current data centers. The main memory 16_4 in the figure is implemented with registered DIMM (R-DIMM) technology or fully buffered DIMM (FB-DIMM) technology using 3D-NAND or a 3D novel nonvolatile memory such as 3D-PCM; compared with implementing the main memory with dynamic random access memory as in existing data centers, no self-refresh operation is needed, which saves a large amount of power, and 3D-NAND or 3D-PCM is nonvolatile, so data is not lost on power-down. In the figure, 16_5 is the off-chip mass storage, mainly implemented with 3D-NAND or a 3D novel nonvolatile memory such as 3D-PCM. The 3D-NAND and 3D-PCM mentioned here both have a preprocessing capability: when the user wants to perform a certain operation, for example a query, the processor inside the 3D-NAND or 3D-PCM can perform the query on the data stored in it in advance. The benefit is that only the data satisfying the user's condition needs to be moved to the main memory 16_4, and the processor 16_1 does not have to transfer all the data stored in the 3D-NAND or 3D-PCM to the main memory 16_4 in order to query it, which reduces the number of data transfers, reduces power consumption and also reduces the load on the processor 16_1. Compared with present data centers, the data center structure proposed by the present invention contains the very-large-capacity 3D novel nonvolatile memory in the last-level hybrid cache, so more data can be stored in the data center's cache, reducing data movement when the processor reads and writes that data and therefore the delay and power consumption caused by data movement. In addition, a self-learning module is added to the last-level hybrid cache; it periodically inspects and learns over a certain period and stores the applications or data most frequently used by the specific user during that period in the 3D novel nonvolatile memory, reducing the delay and power consumption of the processor when handling those applications and data.
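The preprocessing (query push-down) described for the 3D-NAND/3D-PCM storage can be sketched as filtering records inside the storage device and returning only the matches. This sketch is illustrative; the names StorageWithPreprocessing and query are assumptions and do not describe any real device's interface.

```python
class StorageWithPreprocessing:
    """Illustrative sketch of storage-side query preprocessing (push-down)."""

    def __init__(self, records):
        self.records = records           # data resident in the 3D-NAND / 3D-PCM device

    def query(self, predicate):
        # The device-internal processor evaluates the predicate in place,
        # so only matching records are transferred to main memory 16_4.
        return [r for r in self.records if predicate(r)]

# Example: only records meeting the user's condition cross the storage/memory boundary.
device = StorageWithPreprocessing([{"order": 1, "amount": 30},
                                   {"order": 2, "amount": 250}])
matches = device.query(lambda r: r["amount"] > 100)   # -> only order 2 is moved
```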
In the data center shown in Fig. 14, only a certain number of SSDs with the preprocessing function need to be purchased and configured when the data center first comes into operation, because a large proportion of the operations in a modern data center, roughly 70-80% or even more, are query work. For many leasing customers, therefore, a large number of servers or very powerful servers are not needed to meet their demand; a certain number of SSDs with the preprocessing query function can satisfy the query requirements of most leasing customers. The remaining 20-30% of operations, which are high-performance operations requiring the participation of the processor ALU on a server, can be handled by servers equipped with the very-large-capacity hybrid last-level cache. More than 50% of the operations of a general processor consist of data transfers: moving data from off-chip storage into the main memory and then from the main memory into the cache, and conversely writing changed data in the cache back to the main memory and then back to the off-chip storage. Reducing data transfers therefore both improves the efficiency and performance of the processor and greatly reduces the power consumption of the server. The very-large-capacity hybrid last-level cache added by the present invention has a self-learning function: after learning and collecting statistics for a period of time, it can intelligently recognize the usage habits of a specific tenant and bring the applications and data that tenant uses most frequently onto the large-capacity 3D nonvolatile memory of the hybrid last-level cache. During the subsequent service time, the processor no longer needs to carry out a large amount of data transfer among the off-chip storage, the main memory and the cache, which not only greatly improves the tenant's user experience but also greatly reduces the power consumption of the data center. The power saved comes from the main memory self-refresh power and the motor power of traditional HDDs that are no longer needed, and from the power that would otherwise be consumed by transferring data among the off-chip storage, the main memory and the cache.
After a period of use (for example, two years), the self-learning module can also, based on the usage during that period and/or the feedback of users, provide suggestions to the data center, for example to purchase more SSDs with the preprocessing function (further reducing the power consumption of the data center) or to purchase more servers equipped with the very-large-capacity hybrid last-level cache (further improving the peak processing efficiency of the data center), which a traditional data center is entirely unable to do.
The technique proposed by the present invention, in which the self-learning module dynamically configures the structure of the embedded DRAM and the 3D novel nonvolatile memory in the last-level hybrid cache, can also be applied to a hybrid main memory (Hybrid Main Memory). The current form of a hybrid main memory consists of a dynamic random access memory and a nonvolatile memory, wherein the dynamic random access memory is composed of M (M >= 1) dynamic random access memory chips and the nonvolatile memory is composed of N (N >= 1) nonvolatile memory chips. A hybrid main memory composed of a dynamic random access memory and a nonvolatile memory has two structures, a serial structure and a parallel structure. The serial structure of the hybrid main memory is shown in Fig. 15, where 17_1 is the processor, 17_2 is the on-chip cache, 17_3 is the hybrid main memory, 17_3_1 is the dynamic random access memory, 17_3_2 is the nonvolatile memory and 17_4 is the off-chip mass storage. In the serial structure of the hybrid main memory, the dynamic random access memory 17_3_1 serves as a buffer for the nonvolatile memory 17_3_2, and the addressable space of the hybrid main memory is the nonvolatile memory. The parallel structure of the hybrid main memory is shown in Fig. 16, where 18_1 is the processor, 18_2 is the on-chip cache, 18_3 is the hybrid main memory, 18_3_1 is the dynamic random access memory, 18_3_2 is the nonvolatile memory and 18_4 is the off-chip mass storage. In the parallel structure of the hybrid main memory, the addressable space of the hybrid main memory is the dynamic random access memory together with the nonvolatile memory: for example, data that the processor 18_1 reads and writes frequently is stored in the dynamic random access memory 18_3_1, while data that the processor 18_1 reads and writes less frequently is stored in the nonvolatile memory 18_3_2, "less frequently" here being relative to the frequently accessed data stored in the dynamic random access memory 18_3_1. For a given hybrid main memory, once it is manufactured its structure is fixed as either the serial structure or the parallel structure and cannot be changed.
Using the technique proposed by the present invention, a self-learning module 19_3_3 is added to the hybrid main memory 19_3. The self-learning module in the hybrid main memory learns the behavior or usage habits of the current user over a certain period and dynamically configures the structure of the dynamic random access memory 19_3_1 and the nonvolatile memory 19_3_2 in the hybrid main memory 19_3 according to that behavior or those habits, i.e. it selects the serial structure or the parallel structure so as to achieve optimal energy efficiency; the concrete structure is shown in Fig. 17. Below, the method by which the self-learning module dynamically adjusts the structure of the hybrid main memory is analyzed at several levels:
For different users, for example user X and user Y: through learning over a period of time, the self-learning module finds, from the usage habits or behavior of user X, that the system energy efficiency is highest when the hybrid main memory is configured as a serial structure while user X is using the system, and finds, from the usage habits or behavior of user Y, that the system energy efficiency is highest when the hybrid main memory is configured as a parallel structure while user Y is using the system. Therefore, when user X is using the system, the self-learning module configures the hybrid main memory as a serial structure, and when user Y is using the system, it configures the hybrid main memory as a parallel structure, as shown in Figs. 18a and 18b.
For the same user, the hybrid main memory should adopt different structures when the processor handles different applications. Suppose user X runs M applications, namely application X_1, application X_2, ..., application X_M, where M >= 1. After learning for a certain period, the self-learning module finds that the system energy efficiency is highest when the hybrid main memory adopts the serial structure while the processor executes applications X_1, X_2, ..., X_H, and highest when it adopts the parallel structure while the processor executes applications X_(H+1), ..., X_M, where M > H+1. Therefore, when the processor executes application X_1, the self-learning module configures the hybrid main memory as a serial structure, and when the processor executes application X_M, it configures the hybrid main memory as a parallel structure, as shown in Figs. 19a and 19b.
For the same application of the same user: since each application is composed of several subroutines, the hybrid main memory should also adopt different structures when the processor executes different subroutines of the same application. Suppose application A of user X is composed of N subroutines, namely subroutine A_1, subroutine A_2, ..., subroutine A_N, where N >= 1. After learning for a certain period, the self-learning module finds that the system energy efficiency is highest when the hybrid main memory adopts the serial structure while the processor executes subroutines A_1, A_2, ..., A_E, and highest when it adopts the parallel structure while the processor executes subroutines A_(E+1), ..., A_N (N >= E+1). Therefore, when the processor executes subroutine A_2, the self-learning module configures the hybrid main memory as a serial structure, and when the processor executes subroutine A_(E+1), it configures the hybrid main memory as a parallel structure, as shown in Figs. 20a and 20b.
The method by which the self-learning module dynamically adjusts the structure of the hybrid main memory also applies to combinations of the above three cases. For example, after a period of self-learning, the self-learning module finds that the system energy efficiency is highest when the hybrid main memory adopts the serial structure while the processor handles subroutine Y_1_2 of application Y_1 of user Y, and highest when it adopts the parallel structure while the processor handles subroutine X_3_1 of application X_3 of user X. Therefore, when the processor handles subroutine Y_1_2 of application Y_1 of user Y, the self-learning module configures the hybrid main memory as a serial structure, and when the processor handles subroutine X_3_1 of application X_3 of user X, it configures the hybrid main memory as a parallel structure.
The dynamic random access memory in the hybrid main memory proposed by the present invention is composed of M (M >= 1) dynamic random access memory chips, and the nonvolatile memory is composed of N (N >= 1) nonvolatile memory chips. Therefore, based on learning over a certain period, the self-learning module can, according to the usage habits or behavior of a specific user, combine some of the dynamic random access memory chips with some of the nonvolatile memory chips into a serial structure or a parallel structure, so as to maximize the system energy efficiency. For example, a server has X processors, namely processor_1, processor_2, ..., processor_X, where X >= 1. The current user Z has i applications to process, namely App_1, App_2, ..., App_i, and accordingly the X processors are divided into i processor groups, namely processor group_1, processor group_2, ..., processor group_i, where processor group_1 has Y1 processors, processor group_2 has Y2 processors, ..., and processor group_i has Yi processors, with Y1 + Y2 + ... + Yi = X; processor group_1 handles App_1, processor group_2 handles App_2, ..., and processor group_i handles App_i. After learning for a certain period, the self-learning module finds that the energy efficiency of processor group_1 is highest when, while it processes App_1, A_1 dynamic random access memory chips and A_2 nonvolatile memory chips in the hybrid main memory form a serial structure; that the energy efficiency of processor group_2 is highest when, while it processes App_2, B_1 dynamic random access memory chips and B_2 nonvolatile memory chips form a parallel structure; and that the energy efficiency of processor group_i is highest when, while it processes App_i, i_1 dynamic random access memory chips and i_2 nonvolatile memory chips form a serial structure. Therefore, the self-learning module configures A_1 dynamic random access memory chips and A_2 nonvolatile memory chips in the hybrid main memory as a serial structure when processor group_1 processes App_1, configures B_1 dynamic random access memory chips and B_2 nonvolatile memory chips as a parallel structure when processor group_2 processes App_2, and configures i_1 dynamic random access memory chips and i_2 nonvolatile memory chips as a serial structure when processor group_i processes App_i. The methods described above for analyzing the dynamic adjustment of the hybrid main memory structure at different levels, and their combinations, also apply to this example and are not repeated here.
The present invention proposes a method for implementing an ultra-high-density last-level hybrid cache, in which an embedded dynamic random access memory and a 3D novel nonvolatile memory are combined as the processor's last-level cache. Because the 3D novel nonvolatile memory is fabricated with a 3D process, its storage density can be very large, so a large amount of data related to a specific user's habits over a period of time can be stored in the 3D novel nonvolatile memory, reducing data movement when the processor reads and writes that data and therefore reducing the delay and power consumption caused by data movement. In addition, a self-learning module is added to the last-level hybrid cache; it periodically inspects and learns over a certain period and stores the applications or data most frequently used by the specific user during that period in the 3D novel nonvolatile memory, reducing the delay and power consumption of the processor when handling those applications and data.
The foregoing is only a preferred embodiment of the present invention and does not thereby limit the embodiments and the protection scope of the present invention. Those skilled in the art should recognize that all schemes obtained by equivalent substitutions and obvious variations made from the description and drawings of the present invention shall be included within the protection scope of the present invention.
Claims (10)
1. A hybrid memory, characterized in that it is applied in the storage structure of a computer, the hybrid memory comprising:
a dynamic random access memory, comprising several dynamic random access memory chips;
a nonvolatile memory, comprising several nonvolatile memory chips, wherein the dynamic random access memory and the nonvolatile memory form a parallel structure or a serial structure for buffering or storing data; and
a self-learning module, connected to the dynamic random access memory and the nonvolatile memory respectively, for periodically inspecting and learning the operating data and usage habits of the computer user and, according to the learning result, controlling whether the dynamic random access memory and the nonvolatile memory form a parallel structure or a serial structure.
2. The hybrid memory according to claim 1, characterized in that, according to the learning result, the self-learning module stores the data most frequently accessed by the computer user over a period of time in the nonvolatile memory.
3. The hybrid memory according to claim 2, characterized in that the self-learning module is arranged inside or outside the hybrid memory and is implemented by a hardware circuit or by software.
4. The hybrid memory according to claim 1, characterized in that the storage structure of the computer further comprises a processor and an on-chip cache, and the hybrid memory is connected to the processor through the on-chip cache for buffering or storing data.
5. The hybrid memory according to claim 4, characterized in that, when the dynamic random access memory and the nonvolatile memory form a serial structure, the dynamic random access memory is connected to the processor through the on-chip cache for buffering or storing data.
6. The hybrid memory according to claim 4, characterized in that, when the dynamic random access memory and the nonvolatile memory form a parallel structure, both the dynamic random access memory and the nonvolatile memory are connected to the processor through the on-chip cache for buffering or storing data.
7. The hybrid memory according to claim 1, characterized in that the nonvolatile memory is a 3D novel nonvolatile memory.
8. The hybrid memory according to claim 1, characterized in that the dynamic random access memory is an embedded dynamic random access memory.
9. A computer storage structure, comprising the hybrid memory according to any one of claims 1 to 8, for storing data of the computer.
10. A last-level hybrid cache, comprising the hybrid memory according to any one of claims 1 to 9, for caching data of the computer.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510219079.1A | 2015-04-30 | 2015-04-30 | Hybrid buffer |
Publications (1)
Publication Number | Publication Date |
---|---|
CN104834482A (en) | 2015-08-12 |
Family
ID=53812398
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510219079.1A (pending) | Hybrid buffer | 2015-04-30 | 2015-04-30 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104834482A (en) |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20110037092A (en) * | 2009-10-05 | 2011-04-13 | (주) 그레이프테크놀로지 | Hybrid memory structure having ram and flash interface and data storing method thereof |
US20130077382A1 (en) * | 2011-09-26 | 2013-03-28 | Samsung Electronics Co., Ltd. | Hybrid memory device, system including the same, and method of reading and writing data in the hybrid memory device |
CN102591593A (en) * | 2011-12-28 | 2012-07-18 | 华为技术有限公司 | Method for switching hybrid storage modes, device and system |
CN103593324A (en) * | 2013-11-12 | 2014-02-19 | 上海新储集成电路有限公司 | Quick-start and low-power-consumption computer system-on-chip with self-learning function |
CN104461389A (en) * | 2014-12-03 | 2015-03-25 | 上海新储集成电路有限公司 | Automatically learning method for data migration in mixing memory |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110310678A (en) * | 2019-06-04 | 2019-10-08 | 上海新储集成电路有限公司 | A kind of intelligent chip |
CN114153402A (en) * | 2022-02-09 | 2022-03-08 | 阿里云计算有限公司 | Memory and data reading and writing method thereof |
WO2024169299A1 (en) * | 2023-02-15 | 2024-08-22 | 苏州元脑智能科技有限公司 | Storage method and apparatus, device, and non-volatile readable storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106257400B (en) | The method of processing equipment, computing system and processing equipment access main memory | |
US8090897B2 (en) | System and method for simulating an aspect of a memory circuit | |
US8041881B2 (en) | Memory device with emulated characteristics | |
Venkatesan et al. | Stag: Spintronic-tape architecture for gpgpu cache hierarchies | |
CN105808455B (en) | Memory access method, storage-class memory and computer system | |
CN111158633A (en) | DDR3 multichannel read-write controller based on FPGA and control method | |
US20130132704A1 (en) | Memory controller and method for tuned address mapping | |
US20170236566A1 (en) | Data transfer for multi-loaded source synchrous signal groups | |
CN111399757B (en) | Memory system and operating method thereof | |
CN104834482A (en) | Hybrid buffer | |
US8750068B2 (en) | Memory system and refresh control method thereof | |
Atwood | PCM Applications and an Outlook to the Future | |
CN110286851B (en) | Reconfigurable processor based on three-dimensional memory | |
US11720463B2 (en) | Managing memory objects that are assigned a respective designation | |
Wajid et al. | Architecture for Faster RAM Controller Design with Inbuilt Memory | |
Li et al. | Ultra-Large Last-Level Cache (UL^ 3C) of Phase Change Memory | |
Raeisi et al. | A Study of Emerging Memory Technology in Hybrid Architectural Approaches of GPGPU | |
CN104932991B (en) | A method of substituting mixing memory using afterbody hybrid cache |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| C06 | Publication | |
| PB01 | Publication | |
| EXSB | Decision made by SIPO to initiate substantive examination | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20150812 |