CN104520808A - Providing data to be retrieved - Google Patents

Providing data to be retrieved

Info

Publication number
CN104520808A
CN104520808A (application CN201280075261.9A)
Authority
CN
China
Prior art keywords
data
group
module
anticipatory
neural network
Prior art date
Legal status
Pending
Application number
CN201280075261.9A
Other languages
Chinese (zh)
Inventor
V.阿瓦斯蒂
Current Assignee
Hewlett Packard Development Co LP
Original Assignee
Hewlett Packard Development Co LP
Priority date
Filing date
Publication date
Application filed by Hewlett Packard Development Co LP filed Critical Hewlett Packard Development Co LP
Publication of CN104520808A


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0862Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches with prefetch
    • G06F12/0866Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches for peripheral storage systems, e.g. disk cache
    • G06F12/0871Allocation or management of cache space
    • G06F2212/00Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F2212/10Providing a specific technical effect
    • G06F2212/1016Performance improvement
    • G06F2212/60Details of cache memory
    • G06F2212/6024History based prefetching
    • G06F2212/6026Prefetching based on access pattern detection, e.g. stride based prefetch

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

Systems and methods for providing data to be retrieved are described herein. In one example, a method includes detecting a data retrieval request. The method also includes identifying a first set of prospective data from a sequential memory module based on the data retrieval request. Additionally, the method includes identifying a second set of prospective data from a neural network module. Furthermore, the method includes identifying a third set of prospective data based on a conditional probability module. In addition, the method includes combining the second set of prospective data and the third set of prospective data to produce a set of predicted data. The method also includes combining the first set of prospective data with the set of predicted data to produce a set of results and retrieving the set of results from a storage device.

Description

Providing data to be retrieved
Background
Modern computing systems have evolved to include multiple memory devices. For example, a computing system can include a non-volatile memory device and several volatile memory devices. In many computing systems, the non-volatile memory device has a larger storage capacity than the volatile memory devices. However, the access time for data stored in the non-volatile memory device may be slower than the access time for data stored in a volatile memory device. Therefore, some computing systems store copies of data from the non-volatile storage source in a volatile storage source. A processor can then attempt to retrieve requested data from the volatile storage source before requesting it from the slower non-volatile storage source. However, predicting which data the processor may request can be difficult. Furthermore, if the stored data is never requested by the processor, storing data from the non-volatile memory device in volatile memory may be inefficient.
Brief Description of the Drawings
Certain examples are described in the following detailed description with reference to the drawings, in which:
Fig. 1 is a block diagram of an example of a computing system that can provide data to be retrieved;
Fig. 2 is a process flow diagram illustrating an example of a method for providing data to be retrieved;
Fig. 3 is a process flow diagram illustrating an example of a method for initializing a system that provides data to be retrieved;
Fig. 4 is an example of the data flow in a system that can provide data to be retrieved; and
Fig. 5 is an example of a tangible non-transitory computer-readable medium that can provide data to be retrieved.
Detailed Description
Several methods have been developed to identify and retrieve copies of data stored in a non-volatile memory device. For example, some methods identify and retrieve copies of data from the non-volatile memory device based on the data requested by the processor. These methods can store a copy of the data whose storage address immediately follows the storage address of the data last requested by the processor. However, many applications may not be configured to store data contiguously. For example, a relational database can store data in tables. The data for each table of the relational database can be stored in the non-volatile memory in a non-contiguous arrangement, because a row from a first database table can be stored after a row from a second database table. In this example, the processor may not request data in a sequential manner. Therefore, sequential methods that retrieve copies of data from non-volatile storage and store them in a volatile storage source may be inefficient.
The techniques disclosed herein describe a method for providing data to be retrieved. As referred to herein, the data includes any block of data, page of data, table, or any other information that can be requested by a processor. A sequential storage module, a neural network module, and a conditional probability module are used to identify the data to be retrieved. Each module can identify prospective data to be retrieved based on a calculation that attempts to determine which data the processor is likely to request. A combination of the prospective data identified by each module is then determined, to accurately provide the prospective data that the processor will request. The combination of these modules can improve the efficiency of the computing system by providing a copy of the data most likely to be requested by the processor to a faster storage device.
Fig. 1 is a block diagram of an example of a computing system 100 that may be used to provide data to be retrieved. The computing system 100 may be, for example, a mobile phone, laptop computer, desktop computer, or tablet computer, among others. The computing system 100 may include a processor 102 that is adapted to execute stored instructions. The processor 102 can be a single-core processor, a multi-core processor, a computing cluster, or any number of other suitable configurations.
The processor 102 may be connected through a system bus 104 (e.g., PCI, PCI Express, HyperTransport, Serial ATA, among others) to an input/output (I/O) device interface 106 adapted to connect the computing system 100 to one or more I/O devices 108. The I/O devices 108 may include, for example, a keyboard and a pointing device, wherein the pointing device may include a touchpad or a touchscreen, among others. The I/O devices 108 may be built-in components of the computing system 100, or may be devices that are externally connected to the computing system 100.
The processor 102 may also be linked through the system bus 104 to a display interface 110 adapted to connect the computing system 100 to a display device 112. The display device 112 may include a display screen that is a built-in component of the computing system 100. The display device 112 may also include a computer monitor, television, or projector, among others, that is externally connected to the computing system 100. In addition, the processor 102 may be linked through the system bus 104 to a network interface card (NIC) 114. The NIC 114 may be adapted to connect the computing system 100 through the system bus 104 to a network (not depicted). The network may be a wide area network (WAN), a local area network (LAN), or the Internet, among others.
The processor first searches for requested data in a memory 116. The memory 116 can include random access memory (e.g., SRAM, DRAM, SONOS, eDRAM, EDO RAM, DDR RAM, RRAM, PRAM, etc.), read-only memory (e.g., mask ROM, PROM, EPROM, EEPROM, etc.), flash memory, non-volatile memory, or any other suitable memory system. If the requested instructions or data are not located in the memory 116, the processor 102 can search for the requested instructions or data in a storage device 118. The storage device 118 may include a hard disk drive, an optical drive, a USB flash drive, an array of drives, or any suitable combination thereof. In some examples, the storage device 118 may include all of the stored instructions and data for the computing system 100. The storage device 118 may also include a storage manager 120, which includes a neural network module 122, a conditional probability module 124, and a sequential storage module 126. The storage manager 120 can provide data to be retrieved from the storage device 118 and store the data to the memory 116 based on the neural network module 122, the sequential storage module 126, and the conditional probability module 124. The neural network module 122, the sequential storage module 126, and the conditional probability module 124 can identify the data most likely to be requested by the processor 102.
It is to be understood that the block diagram of Fig. 1 is not intended to indicate that the computing system 100 is to include all of the components shown in Fig. 1. Rather, the computing system 100 can include fewer or additional components not illustrated in Fig. 1 (e.g., additional memory devices, video cards, additional network interfaces, etc.). Furthermore, any of the functionalities of the storage manager 120 may be partially, or entirely, implemented in hardware or in the processor 102. For example, the functionality may be implemented with an application-specific integrated circuit, in logic implemented in the processor 102, or in a co-processor on a peripheral device, among others.
Fig. 2 is a process flow diagram illustrating an example of a method for providing data to be retrieved. The method 200 can be used to provide data to be retrieved using a computing system, such as the computing system 100 described with respect to Fig. 1. The method 200 may be implemented by the storage manager 120, which can provide data to be retrieved based in part on a neural network module and a conditional probability module.
At block 202, a data retrieval request is detected. As referred to herein, a data retrieval request can include a request to retrieve data, such as a block of data, a page of data, a table of data, or information associated with data. For example, a data retrieval request can include an instruction to retrieve data that is likely to be requested by the processor. In some examples, the data retrieval request can identify data in the storage device that is to be copied and stored in a memory device. In these examples, the memory device can be faster than the storage device. By storing a copy of the data in the memory device, the processor can access the requested data in a shorter period of time.
At block 204, a first set of prospective data is identified by the sequential storage module. The first set of prospective data can be identified by determining the storage address of the block of data last accessed by the processor and retrieving the next sequential block of data based on that address. For example, the sequential storage module can detect that the data last accessed by the processor is stored at storage address N of the storage device. The sequential storage module can then retrieve the data residing at storage address N+1 of the storage device and store a copy of the retrieved data in memory. In some examples, the sequential storage module can retrieve a range of data blocks based on the sequential storage addresses that follow the storage address of the data block last accessed by the processor. By storing copies of sequential data in memory, the sequential storage module can increase the execution speed of instructions, because the processor can request data from the memory device rather than from the slower storage device.
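For illustration only (the function name and signature are not from the patent), the sequential-prefetch behavior described at block 204, in which the block at address N+1 (or a range of following addresses) is fetched after an access at address N, could be sketched as:

```python
def sequential_prefetch(last_address, count=1):
    """Return the addresses of the next `count` sequential blocks
    following the block the processor accessed last (N -> N+1, N+2, ...)."""
    return [last_address + i for i in range(1, count + 1)]
```

A caller would then copy the blocks at the returned addresses from the storage device into memory.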
At block 206, it is determined whether the neural network module and the conditional probability module have been initialized. As referred to herein, the neural network module includes a set of interconnected neurons. As referred to herein, a neuron includes a unit with one or more inputs and one or more outputs. In some examples, a neuron can include a mathematical function or any other suitable mathematical computation. The neuron can apply the mathematical function or computation to a set of inputs and return an output. For example, a neuron can include a polynomial function with several variables that represent different input values. In some examples, the input values can be the storage addresses of data blocks that are likely to be requested by the processor. The polynomial function can then compute an output representing the data block most likely to be requested by the processor. In other examples, a neuron can return multiple outputs representing a set of data blocks that are likely to be requested by the processor.
As referred to herein, the conditional probability module can include a matrix or any other suitable representation of conditional probabilities. As referred to herein, a conditional probability is the probability of one event occurring given the occurrence of a second event. For example, the conditional probability of A given B is the probability of A occurring when B is known to have occurred. In some examples, the conditional probability of A given B is the probability that a data point A will be requested by the processor when a data point B is known to have been requested. The conditional probability module can provide the conditional probability that a block of data will be requested by the processor based on other blocks of data previously requested. If the neural network module and the conditional probability module have been initialized, the process continues at block 208. If the neural network module and the conditional probability module have not been initialized, the process continues at block 210.
At block 210, the first set of prospective data is returned as the result set of data. The neural network module and the conditional probability module may be unable to contribute prospective data to the result set because they may not have been initialized. In some examples, a subset of the first set can be returned as the result set. For example, the storage manager 120 can be configured to retrieve five prospective blocks of data, while the sequential storage module returns a first set of ten prospective blocks of data. The storage manager 120 can then select a subset of five blocks of data from the first set. The storage manager 120 can select the subset of the first set based on any number of suitable techniques, such as random selection, selecting the first members of the first set, or selecting the last members of the first set, among others. After the first set is returned as the prospective data, the process ends at block 220.
If the neural network module and the conditional probability module have been initialized, the process continues at block 208. At block 208, the data retrieval request is sent to the neural network module and the conditional probability module. The neural network module and the conditional probability module can accept any suitable number of data retrieval requests, and each module can identify a set of prospective data.
At block 212, a second set of prospective data is identified from the neural network module. In some examples, the neural network module accepts as input any suitable number of prospective data blocks that are likely to be requested by the processor. In some examples, the data retrieval request indicates the prospective data blocks to use as input to the neural network module. The neural network module can then apply a set of weights to the prospective data blocks. The combination of each weight and prospective data block is then sent to a neuron. Each neuron can include a transfer function, such as a polynomial function. The transfer function computes an output based on the combination of each weight and prospective data block. The output can vary depending on the type of transfer function used in the neural network module. For example, a linear transfer function may produce an output that is proportional to the total weighted input of the neuron. In other examples, a threshold or sigmoid transfer function can be used. The output of a threshold transfer function is set at one of two levels, depending on whether the total input is greater than or less than a threshold value. The output of a sigmoid function is continuous and non-linear.
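As a minimal sketch of the neuron behavior described at block 212 (the function names are illustrative, not from the patent): a neuron sums its weighted inputs and passes the sum through a transfer function, which may be sigmoid (continuous, non-linear) or threshold (one of two levels).

```python
import math

def sigmoid(x):
    # Sigmoid (logistic) transfer function: continuous and non-linear.
    return 1.0 / (1.0 + math.exp(-x))

def threshold(x, t=0.0):
    # Threshold transfer function: output is set at one of two levels.
    return 1.0 if x > t else 0.0

def neuron_output(inputs, weights, transfer=sigmoid):
    # Each input is combined with its weight; the weighted sum is
    # passed through the transfer function to produce the output.
    total = sum(w * x for w, x in zip(weights, inputs))
    return transfer(total)
```

A linear transfer function would simply return `total` unchanged, making the output proportional to the total weighted input.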
At block 214, a third set of prospective data is identified from the conditional probability module. As discussed above, the conditional probability module can include the probability that any suitable block of data will be requested by the processor based on whether other blocks of data have been requested. For example, the conditional probability module can be represented as a matrix. Each cell of the matrix represents the probability of a block of data being requested given that other blocks of data were previously requested by the processor. In some examples, if the probability of a cell is below a threshold, a zero can be stored in that cell. For example, the conditional probability module can have a threshold of 25%. If the probability of cell A of the conditional probability module being requested is 10%, the probability of cell A being requested by the processor can be stored as zero. By replacing the probabilities below the threshold with zeros, the probabilities can be saved as a sparse matrix. In these examples, the cells of the conditional probability matrix with non-zero values are stored, while the cells with zero values are not. The data in the sparse matrix can then be stored using less storage space.
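The thresholding described at block 214 can be sketched as follows (a simplified illustration, not the patent's implementation; the dictionary-of-dictionaries layout is an assumption): probabilities below the threshold are treated as zero, and only non-zero cells are kept, yielding a sparse representation.

```python
def sparsify(matrix, threshold=0.25):
    """Keep only matrix cells whose conditional probability meets the
    threshold; probabilities below it are treated as zero and dropped."""
    sparse = {}
    for prev_block, probs in matrix.items():
        for next_block, p in probs.items():
            if p >= threshold:
                sparse[(prev_block, next_block)] = p
    return sparse
```

Cells absent from the sparse dictionary are read back as zero, so the full matrix never needs to be stored.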
At block 216, the second set of prospective data and the third set of prospective data are combined to produce a predicted set. The combination of the two sets of prospective data can be based on the accuracy rates of the neural network module and the conditional probability module. In some examples, the storage manager 120 can determine the accuracy rates of the neural network module and the conditional probability module by monitoring their outputs. The outputs of the neural network module and the conditional probability module can then be compared with the data actually requested. The storage manager 120 can then determine the accuracy rate of each module. As referred to herein, the accuracy rate is the proportion of prospective data that is actually requested by the processor. For example, in one case, 30% of the prospective data returned by the conditional probability module may later be requested by the processor, while 80% of the prospective data returned by the neural network module may later be requested by the processor. In this case, the storage manager 120 can determine to retrieve a larger amount of prospective data from the neural network module than from the conditional probability module, because the neural network module is more accurate. As a result, the storage manager 120 can return a predicted set with a high accuracy rate.
In other examples, the storage manager 120 can determine that the accuracy rate of the neural network module or the conditional probability module has dropped below a threshold. The storage manager 120 can then stop retrieving prospective data from the neural network module or the conditional probability module once the accuracy rate of its output drops below the threshold. In some examples, the predicted set is produced by combining a certain number of prospective data blocks from the neural network module with a certain number of prospective data blocks from the conditional probability module. For example, in an embodiment in which the predicted set is configured to return a set of eight data blocks expected to be requested, the storage manager 120 can determine, based on the accuracy rates of the neural network module and the conditional probability module, that six of the prospective data blocks will be identified by the neural network module and two of the prospective data blocks will be identified by the conditional probability module.
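One way the split described at block 216 could work (an assumption for illustration; the patent does not specify the apportioning formula) is to divide the prediction budget between the two modules in proportion to their measured accuracy rates:

```python
def apportion(total, nn_accuracy, cp_accuracy):
    """Split a budget of `total` prospective blocks between the neural
    network module and the conditional probability module in proportion
    to their accuracy rates. Returns (nn_count, cp_count)."""
    denom = nn_accuracy + cp_accuracy
    if denom == 0:
        # Neither module has demonstrated accuracy; split evenly.
        return total // 2, total - total // 2
    nn_count = round(total * nn_accuracy / denom)
    return nn_count, total - nn_count
```

With a budget of eight blocks and accuracy rates of 80% and 30%, this yields six blocks from the neural network module and two from the conditional probability module, matching the example above.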
At block 218, the predicted set and the first set identified by the sequential storage module are combined to form the result set of prospective data. The result set can be any suitable combination of the first set and the predicted set. For example, the result set can be configured to return ten prospective data blocks. If the storage manager 120 determines that the predicted set has an accuracy rate of 90%, the storage manager 120 can select nine prospective data blocks from the predicted set and one data block from the first set produced by the sequential storage module. In other examples, the predicted set may have an accuracy rate that has dropped below a threshold, in which case the storage manager 120 can select the entire result set from the first set of prospective data. The neural network module and the conditional probability module can then be reinitialized to improve the accuracy rate of the predicted set. Reinitialization is discussed in greater detail below with respect to Fig. 3.
The process flow diagram of Fig. 2 is not intended to indicate that the steps of the method 200 are to be executed in any particular order, or that all of the steps of the method 200 are to be included in every case. For example, the method 200 can identify the second set of prospective data and the third set of prospective data in parallel. Further, any number of additional steps may be included within the method 200, depending on the specific application.
Fig. 3 is a process flow diagram illustrating an example of a method for initializing the neural network module and the conditional probability module. The method 300 can be used to initialize the neural network module and the conditional probability module using a computing system, such as the computing system 100 described with respect to Fig. 1. The method 300 may be implemented by the storage manager 120, which can determine whether the neural network module or the conditional probability module is to be initialized.
At block 302, it is determined, based on an initialization criterion, whether the neural network module or the conditional probability module is to be initialized. The initialization criterion indicates whether the neural network module or the conditional probability module has not yet been initialized, or whether the neural network module or the conditional probability module is to be reinitialized. In some examples, the initialization criterion can be based on the accuracy rate of the neural network module or the conditional probability module. In these examples, the storage manager 120 can be configured to include an accuracy threshold. If the accuracy rate of the output from the neural network module or the conditional probability module drops below the accuracy threshold, the neural network module or the conditional probability module can be reinitialized. In some examples, during initialization and reinitialization the neural network module and the conditional probability module are continuously updated by monitoring the data requested by the processor and recomputing the neural network module and the conditional probability module based on the requested data. If the neural network module or the conditional probability module is not to be initialized or reinitialized, the process ends at block 304. If the neural network module or the conditional probability module is to be initialized or reinitialized, the process continues at block 306.
At block 306, identifying prospective data from the neural network module and the conditional probability module is stopped. Prospective data is then identified by the sequential storage module. In some examples, identifying prospective data with the sequential storage module can increase the number of data blocks stored in memory that are requested by the processor. In these examples, the accuracy rate of the storage manager 120 can improve, because the prospective data identified by the sequential storage module may include a larger number of data blocks accessed by the processor than the prospective data identified by the neural network module or the conditional probability module.
At block 308, the data requested by the processor is sent to the neural network module and the conditional probability module. Sending the requested data to the neural network module and the conditional probability module allows the neurons of the neural network module and the conditional probabilities of the conditional probability module to be configured to improve accuracy. For example, if the processor begins executing a new application, the processor may request blocks of data that have not previously been requested. In this example, the accuracy rate of the neural network module may decrease, because the criteria used to identify prospective data may no longer produce accurate results, since the new application can request data that differs from that of previous applications. For example, the accuracy rate of the neural network module or the conditional probability module may drop from 60% to 20%. To improve the accuracy rates of the neural network module and the conditional probability module, the requested blocks of data can be sent to both modules. Each requested block of data can increase the likelihood that the neural network module or the conditional probability module will identify the prospective data that will be requested by the processor. For example, after the neural network module is configured based on the requested data, the neural network module may be 2% more likely to identify the prospective data blocks that will be requested by the processor. Configuring the neural network module and the conditional probability module during initialization and reinitialization is discussed in greater detail below with respect to Fig. 4.
At block 310, prospective data is produced from the neural network module and the conditional probability module. The prospective data is produced in response to the data last requested by the processor. For example, the conditional probability module can recompute the conditional probabilities in the conditional probability module after each block of data requested by the processor is sent to it.
At block 312, it is determined whether the accuracy rate of the produced prospective data is above a threshold. In some examples, the results of the sequential storage module can represent the accuracy threshold. In other examples, the storage manager 120 can identify a certain number of prospective data blocks from the neural network module and the conditional probability module based on the accuracy rate of each module. After initialization, the accuracy rates of the neural network module and the conditional probability module can be improved. For example, after initialization, the neural network module can have an accuracy rate of 60%, while the accuracy rate of the sequential storage module is 40%. In this example, the neural network module can begin providing prospective data and the process ends. If the accuracy rate of the produced prospective data is below the threshold, the process returns to block 308. Each iteration of the process can improve the accuracy rates of the neural network module and the conditional probability module, making the produced prospective data more likely to be requested by the processor.
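The loop between blocks 308 and 312 can be sketched as follows (a simplified illustration under stated assumptions: the `module.update` method and the `accuracy_fn` callback are hypothetical names, not from the patent): requested blocks are fed back to a module until its measured accuracy rate reaches the threshold.

```python
def reinitialize(module, requested_blocks, accuracy_fn,
                 threshold=0.5, max_rounds=100):
    """Feed actually-requested blocks back into a prediction module until
    its measured accuracy reaches `threshold` (or a round limit is hit).
    Returns True if the module may resume providing prospective data."""
    for _ in range(max_rounds):
        if accuracy_fn(module) >= threshold:
            return True  # accuracy recovered; process ends
        for block in requested_blocks:
            module.update(block)  # retrain on observed requests (block 308)
    return False
```

Each pass corresponds to one iteration of the block 308 to block 312 loop, with the sequential storage module serving requests in the meantime.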
The process flow diagram of Fig. 3 is not intended to indicate that the steps of the method 300 are to be executed in any particular order, or that all of the steps of the method 300 are to be included in every case. For example, the neural network module and the conditional probability module can be initialized separately or in parallel. Further, any number of additional steps may be included within the method 300, depending on the specific application.
Fig. 4 is an example of the data flow in a computing system that can provide data to be retrieved. In some embodiments, the computing system 100 can provide data to be retrieved from a storage device through a storage manager 400 residing in the storage device.
In some examples, the storage manager 400 first detects a data retrieval request 402. As discussed above, a data retrieval request indicates a certain amount of data that is to be retrieved from the storage device and stored in memory. The data to be retrieved is the data most likely to be requested by the processor. For example, the data retrieval request can indicate that ten blocks of data are to be retrieved from the storage device, each of which has a high probability of being requested by the processor.
The data retrieval request is sent to a sequential storage module 404, a neural network module 406, and a conditional probability module 408. The result of the data retrieval request 402 is a combination of the prospective data identified from the sequential storage module 404, the neural network module 406, and the conditional probability module 408. As discussed above, the sequential storage module 404 detects the storage address of the block of data last requested by the processor. The sequential storage module 404 then retrieves the next sequential block of data based on that storage address.
The neural network module 406 identifies anticipatory data based on previously configured neurons. In some examples, the neural network module 406 includes an input layer, two intermediate layers of neurons, and an output layer. An intermediate layer of neurons can accept multiple input values and each neuron can emit a single output value. In some examples, the intermediate layers can be connected in a simple feedforward configuration, in which the input is sent from the input layer to the first intermediate layer. The output of the first intermediate layer is then sent to the second intermediate layer, and the output of the second intermediate layer is identified by the neural network module 406 as the anticipatory data. In other examples, the neurons can be connected in a more complex structure, in which the outputs of the intermediate layers can be used recursively as inputs. In these examples, the anticipatory data identified by the neural network module 406 can include a single data block. In other examples, the neural network module 406 can generate multiple anticipatory data blocks.
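The feedforward arrangement described above (input layer, two intermediate layers, output layer) can be sketched as follows. The weights here are random placeholders for illustration; a real module would train them against the data actually requested. All sizes and names are hypothetical assumptions, not taken from the patent:

```python
# Sketch of a two-intermediate-layer feedforward pass: each intermediate
# layer accepts multiple input values and each neuron emits one value.
import numpy as np

def layer(x, weights, bias):
    # One intermediate layer: weighted sum followed by a sigmoid.
    return 1.0 / (1.0 + np.exp(-(weights @ x + bias)))

rng = np.random.default_rng(0)
w1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)  # first intermediate layer
w2, b2 = rng.normal(size=(2, 4)), rng.normal(size=2)  # second intermediate layer

x = np.array([0.2, 0.5, 0.1])   # hypothetical features of a data retrieval request
h1 = layer(x, w1, b1)           # output of the first intermediate layer
h2 = layer(h1, w2, b2)          # output of the second intermediate layer,
print(h2.shape)                 # identified as the anticipatory data; prints (2,)
```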
The conditional probability module 408 can include conditional probabilities of anticipatory data given previously requested data. For example, the conditional probability stored in the conditional probability module 408 for each anticipatory data block can be determined according to Equation 1:

P(A | B) = P(A and B) / P(B)    (Equation 1)

In Equation 1, A represents the anticipatory data to be retrieved and B represents the previously requested data. In some examples, the conditional probabilities can be stored as a matrix. As discussed above, if a probability is below a threshold, the probability can be stored as zero. As a result, some conditional probability matrices may be populated mostly with zeros. In these examples, the conditional probabilities can be stored as a sparse matrix, in which only the cells of the matrix having nonzero values are stored.
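The thresholded, sparsely stored conditional probability table described above can be sketched as follows. The count-based estimate of P(A | B) and the threshold value are illustrative assumptions; every name is hypothetical:

```python
# Sketch of a conditional-probability table stored sparsely: only cells
# whose probability clears a threshold are kept, as in a sparse matrix.
from collections import Counter

THRESHOLD = 0.2  # probabilities below this are treated as zero

pair_counts = Counter()  # counts of (previous block B, next block A)
prev_counts = Counter()  # counts of previous block B

def observe(b, a):
    pair_counts[(b, a)] += 1
    prev_counts[b] += 1

def sparse_table():
    # Estimate P(A | B) from counts and keep only nonzero (above-threshold) cells.
    table = {}
    for (b, a), n in pair_counts.items():
        p = n / prev_counts[b]
        if p >= THRESHOLD:
            table[(b, a)] = p
    return table

for pair in [(1, 2), (1, 2), (1, 2), (1, 3), (1, 4)] + [(1, k) for k in range(5, 10)]:
    observe(*pair)

print(sparse_table())  # only (1, 2) survives: P = 3/10 = 0.3
```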
The anticipatory data identified by the neural network module and the conditional probability module are then sent to a prediction result selector 410. The prediction result selector 410 can combine the anticipatory data from the neural network module 406 and the conditional probability module 408 in any suitable number of configurations. For example, the storage manager 400 can be configured to return one group of anticipatory data blocks from the prediction result selector 410. The prediction result selector 410 can also be configured to identify a certain number of anticipatory data blocks from the neural network module 406 and a certain number from the conditional probability module 408. For example, the prediction result selector 410 may determine that five anticipatory data blocks are to be identified from the neural network module 406 and seven anticipatory data blocks from the conditional probability module 408. In some examples, the number of anticipatory data blocks identified from the neural network module 406 and the conditional probability module 408 is based on the accuracy of each module. For example, the storage manager 400 can track the accuracy of the neural network module 406 and the conditional probability module 408 by comparing the anticipatory data identified by each module with the data actually requested. The prediction result selector 410 can then use the accuracies of the neural network module 406 and the conditional probability module 408 to identify one group of predicted anticipatory data.
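One way to base the per-module block counts on module accuracy, as described above, is a proportional split of a fixed budget. The proportional rule is an assumption for illustration (the patent only says the counts are "based on" the accuracies), and all names are hypothetical:

```python
# Sketch of a prediction result selector that divides a budget of
# anticipatory blocks between two modules in proportion to their
# measured accuracies.

def select(nn_candidates, cp_candidates, nn_accuracy, cp_accuracy, budget):
    total = nn_accuracy + cp_accuracy
    take_nn = round(budget * nn_accuracy / total)  # share for the neural network module
    take_cp = budget - take_nn                     # remainder for the conditional probability module
    return nn_candidates[:take_nn] + cp_candidates[:take_cp]

nn_blocks = ["n1", "n2", "n3", "n4", "n5", "n6"]
cp_blocks = ["c1", "c2", "c3", "c4", "c5", "c6", "c7", "c8"]

# Accuracies of 0.42 and 0.58 with a budget of 12 blocks give a 5/7 split,
# matching the five-and-seven example in the text.
group = select(nn_blocks, cp_blocks, 0.42, 0.58, 12)
print(group)
```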
The predicted anticipatory data from the prediction result selector 410 and the anticipatory data identified by the continuous storage module 404 are then sent to a final result selector 412. The final result selector 412 can combine the predicted anticipatory data from the prediction result selector 410 with the anticipatory data from the continuous storage module 404 in any suitable number of combinations. For example, the storage manager 400 can store information about the accuracy of the continuous storage module 404 by comparing the anticipatory data previously identified by the continuous storage module 404 with the data actually requested by the processor. The final result selector 412 can then identify one group of anticipatory data based on a ratio between the number of anticipatory data blocks taken from the continuous storage module 404 and from the prediction result selector 410. The storage manager 400 can then retrieve the anticipatory data identified by the final result selector 412 from the storage device and store that anticipatory data in a memory device.
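The ratio-based merge performed by the final result selector can be sketched as follows. The specific ratio rule and the deduplication step are illustrative assumptions, not stated in the patent; all names are hypothetical:

```python
# Sketch of a final result selector that merges the predicted group with
# the continuous storage module's anticipatory data according to a
# configured ratio, then drops duplicate block identifiers.

def final_select(continuous, predicted, ratio, budget):
    # ratio = fraction of the budget drawn from the continuous storage module.
    take_cont = int(budget * ratio)
    merged = continuous[:take_cont] + predicted[:budget - take_cont]
    # Preserve order while removing duplicates before retrieval.
    seen, result = set(), []
    for block in merged:
        if block not in seen:
            seen.add(block)
            result.append(block)
    return result

result_group = final_select(["s1", "s2", "s3"], ["p1", "p2", "s1", "p3"], 0.25, 8)
print(result_group)  # ['s1', 's2', 'p1', 'p2', 'p3']
```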
In some examples, the neural network module 406 and the conditional probability module 408 can be reinitialized (indicated by the dashed lines) when the accuracy of either module drops below a threshold. As discussed above with respect to Fig. 3, the continuous storage module 404 can then determine the data expected to be requested while the neural network module 406 and the conditional probability module 408 are reinitialized. During initialization and reinitialization, previously requested data 414 can be sent to the neural network module 406 and the conditional probability module 408. The anticipatory data identified by each module can then be sent to the prediction result selector 410, which can analyze the accuracy of the neural network module 406 and the conditional probability module 408. If the accuracy of the neural network module 406 or the conditional probability module 408 rises above the threshold, the prediction result selector 410 can begin selecting anticipatory data from whichever module has an accuracy above the threshold. In some examples, the prediction result selector 410 can continue sending previously requested data to a module until the accuracy of that module rises above the threshold. For example, the threshold can indicate that a certain percentage of the anticipatory data blocks identified by the neural network module 406 or the conditional probability module 408 are accessed by the processor.
The block diagram of Fig. 4 is for illustration purposes only, and the expected requested data can be provided with any number of configurations. For example, the continuous storage module 404, the neural network module 406, and the conditional probability module 408 can operate in parallel by independently identifying anticipatory data during the same time period. In addition, the storage manager 400 can include any suitable number of additional storage components, such as registers, for storing accuracy information for the neural network module 406 and the conditional probability module 408.
Fig. 5 is a block diagram of a tangible, non-transitory computer-readable medium 500 that provides data to be retrieved. The tangible, non-transitory computer-readable medium 500 can be accessed by a processor 502 via a computer bus 504. In addition, the tangible, non-transitory computer-readable medium 500 can include code that directs the processor 502 to perform the steps of the current method.
The various software components discussed herein can be stored on the tangible, non-transitory computer-readable medium 500, as indicated in Fig. 5. For example, a storage manager 506 can be adapted to direct the processor 502 to provide the data to be retrieved based on the anticipatory data identified by a neural network module 508, a conditional probability module 510, and a continuous storage module 512. The neural network module 508 and the conditional probability module 510 can identify anticipatory data to be retrieved based on various calculations (such as conditional probabilities, polynomial functions, and other mathematical operations). The continuous storage module 512 can identify anticipatory data based on the data previously requested by the processor. It is to be understood that any number of additional software components not shown in Fig. 5 can be included within the tangible, non-transitory computer-readable medium 500, depending on the specific application.
The present examples are susceptible to various modifications and alternative forms and have been shown only for illustrative purposes. Furthermore, it is to be understood that the present techniques are not intended to be limited to the particular examples disclosed herein. Indeed, the scope of the appended claims is deemed to include all alternatives, modifications, and equivalents that are apparent to persons skilled in the art to whom the disclosed subject matter pertains.

Claims (15)

1. A method for providing data to be retrieved, comprising:
detecting a data retrieval request;
identifying a first group of anticipatory data using a continuous storage module, the continuous storage module identifying the first group of anticipatory data based in part on a data block last accessed by a processor;
identifying a second group of anticipatory data using a neural network module, the neural network module identifying the second group of anticipatory data based in part on a neural network and the data retrieval request;
identifying a third group of anticipatory data using a conditional probability module, the conditional probability module identifying the third group of anticipatory data based in part on conditional probabilities and the data retrieval request;
combining the second group of anticipatory data and the third group of anticipatory data to produce a predicted data group;
combining the first group of anticipatory data and the predicted data group to produce a result group;
retrieving the result group from a storage device; and
storing the result group in a memory device.
2. The method of claim 1, wherein the neural network module comprises an input layer, two intermediate layers, and an output layer.
3. The method of claim 1, wherein the conditional probability module comprises a matrix, the matrix comprising an entry for each anticipatory data with respect to previously requested data.
4. The method of claim 3, wherein the matrix is a sparse matrix.
5. The method of claim 1, wherein identifying the third group of anticipatory data using the conditional probability module comprises calculating the conditional probability that one data block will be requested by the processor given that another data block has been requested by the processor.
6. The method of claim 1, wherein identifying the first group of anticipatory data using the continuous storage module based on the data retrieval request comprises identifying the first group of anticipatory data based on contiguous storage addresses.
7. The method of claim 1, comprising:
detecting a criterion indicating that the neural network module and the conditional probability module are to be reinitialized;
returning anticipatory data from the continuous storage module;
reinitializing the neural network module and the conditional probability module based on the data actually requested; and
returning anticipatory data from the continuous storage module, the neural network module, and the conditional probability module.
8. The method of claim 1, comprising removing anticipatory data from the second group of anticipatory data or removing anticipatory data from the third group of anticipatory data based on a comparison with the data actually requested.
9. A system for providing data to be retrieved, comprising:
a storage device comprising computer-readable instructions, the computer-readable instructions comprising a neural network module and a conditional probability module; and
a processor to execute the computer-readable instructions to:
detect a data retrieval request;
identify a first group of anticipatory data using a continuous storage module, the continuous storage module identifying the first group of anticipatory data based in part on a data block last accessed by the processor;
identify a second group of anticipatory data using the neural network module, the neural network module identifying the second group of anticipatory data based in part on a neural network and the data retrieval request;
identify a third group of anticipatory data using the conditional probability module, the conditional probability module identifying the third group of anticipatory data based in part on conditional probabilities and the data retrieval request;
combine the second group of anticipatory data and the third group of anticipatory data to produce a predicted data group;
combine the first group of anticipatory data and the predicted data group to produce a result group; and
retrieve the result group from the storage device.
10. The system of claim 9, wherein the processor calculates the conditional probability that one data block will be requested by the processor given that another data block has been requested by the processor.
11. The system of claim 9, wherein the processor identifies the first group of anticipatory data based on contiguous storage addresses.
12. The system of claim 9, wherein the processor is to:
detect a criterion indicating that the neural network module and the conditional probability module are to be reinitialized;
return anticipatory data from the continuous storage module;
reinitialize the neural network module and the conditional probability module based on the data actually requested; and
return anticipatory data from the continuous storage module, the neural network module, and the conditional probability module.
13. A tangible, non-transitory computer-readable medium comprising code to direct a processor to:
detect a data retrieval request;
identify a first group of anticipatory data using a continuous storage module, the continuous storage module identifying the first group of anticipatory data based in part on a data block last accessed by the processor;
identify a second group of anticipatory data using a neural network module, the neural network module identifying the second group of anticipatory data based in part on a neural network and the data retrieval request;
identify a third group of anticipatory data using a conditional probability module, the conditional probability module identifying the third group of anticipatory data based in part on conditional probabilities and the data retrieval request;
combine the second group of anticipatory data and the third group of anticipatory data to produce a predicted data group;
combine the first group of anticipatory data and the predicted data group to produce a result group; and
retrieve the result group from a storage device.
14. The tangible, non-transitory computer-readable medium of claim 13, comprising code to further direct the processor to:
detect a criterion indicating that the neural network module and the conditional probability module are to be reinitialized;
return anticipatory data from the continuous storage module;
reinitialize the neural network module and the conditional probability module based on the data actually requested; and
return anticipatory data from the continuous storage module, the neural network module, and the conditional probability module.
15. The tangible, non-transitory computer-readable medium of claim 13, wherein the processor calculates the conditional probability that one data block will be requested by the processor given that another data block has been requested by the processor.
CN201280075261.9A 2012-07-12 2012-07-12 Providing data to be retrieved Pending CN104520808A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/US2012/046514 WO2014011181A1 (en) 2012-07-12 2012-07-12 Providing data to be retrieved

Publications (1)

Publication Number Publication Date
CN104520808A true CN104520808A (en) 2015-04-15

Family

ID=49916443

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201280075261.9A Pending CN104520808A (en) 2012-07-12 2012-07-12 Providing data to be retrieved

Country Status (3)

Country Link
EP (1) EP2872986A4 (en)
CN (1) CN104520808A (en)
WO (1) WO2014011181A1 (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0470734A1 (en) * 1990-08-06 1992-02-12 NCR International, Inc. Cache memory management system
EP0712082A1 (en) * 1994-11-14 1996-05-15 International Business Machines Corporation Method and apparatus for adaptive circular predictive buffer management
US5682500A (en) * 1992-06-04 1997-10-28 Emc Corporation System and method for determining sequential cache data access in progress
CN101046784A (en) * 2006-07-18 2007-10-03 威盛电子股份有限公司 Memory data access system and method and memory controller
CN100565450C (en) * 2003-12-24 2009-12-02 英特尔公司 Adaptable caching
US20120041914A1 (en) * 2010-08-16 2012-02-16 Durga Deep Tirunagari System and Method for Effective Caching Using Neural Networks

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030204675A1 (en) * 2002-04-29 2003-10-30 Dover Lance W. Method and system to retrieve information from a storage device
US7555609B2 (en) * 2006-10-27 2009-06-30 Via Technologies, Inc. Systems and method for improved data retrieval from memory on behalf of bus masters


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
BENHAMMADI F. ET AL.: "CPU load prediction using neuro-fuzzy and Bayesian inferences", Neurocomputing *
LIAO S. ET AL.: "Machine learning-based prefetch optimization for data center applications", Proceedings of the Conference on High Performance Computing Networking, Storage and Analysis *
许欢庆, 王永成: "基于用户访问路径分析的网页预取模型" [A web page prefetching model based on analysis of user access paths], 《软件学报》 (Journal of Software) *

Also Published As

Publication number Publication date
WO2014011181A1 (en) 2014-01-16
EP2872986A1 (en) 2015-05-20
EP2872986A4 (en) 2016-03-23


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication (application publication date: 20150415)