CN104320448A - Method and device for accelerating caching and prefetching of computing device based on big data - Google Patents

Method and device for accelerating caching and prefetching of a computing device based on big data

Info

Publication number
CN104320448A
Authority
CN
China
Prior art keywords
data
buffer
cache
caching
service
Prior art date
Application number
CN201410548817.2A
Other languages
Chinese (zh)
Other versions
CN104320448B (en)
Inventor
张维加
Original Assignee
张维加
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 张维加
Priority to CN201410548817.2A
Publication of CN104320448A
Application granted
Publication of CN104320448B

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network-specific arrangements or communication protocols supporting networked applications
    • H04L67/10 Network-specific arrangements or communication protocols supporting networked applications in which an application is distributed across nodes in the network
    • H04L67/1097 Network-specific arrangements or communication protocols supporting networked applications in which an application is distributed across nodes in the network for distributed storage of data in a network, e.g. network file system [NFS], transport mechanisms for storage area networks [SAN] or network attached storage [NAS]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network-specific arrangements or communication protocols supporting networked applications
    • H04L67/28 Network-specific arrangements or communication protocols supporting networked applications for the provision of proxy services, e.g. intermediate processing or storage in the network
    • H04L67/2842 Network-specific arrangements or communication protocols supporting networked applications for the provision of proxy services, e.g. intermediate processing or storage in the network for storing data temporarily at an intermediate stage, e.g. caching
    • H04L67/2847 Network-specific arrangements or communication protocols supporting networked applications for the provision of proxy services, e.g. intermediate processing or storage in the network for storing data temporarily at an intermediate stage, e.g. caching involving pre-fetching or pre-delivering data based on network characteristics

Abstract

The invention provides a method and a device for accelerating the caching and prefetching of a computing device based on big data. Unlike the traditional caching model, which optimizes for a single device, the method has a large number of caching or prefetching service devices submit data to the cloud; the cloud performs statistics and analysis on the received data and mines optimized caching or prefetching schemes for different applications; the optimized caching and anticipation schemes are then returned to the caching service devices, by active feedback or passive response, for processing, so that anticipation and targeted optimization can be carried out directly, without re-accumulating cache data over a long period. The submitted data include partial feature data of the various applications or network operations on the served devices that the service devices serve; the feature data mainly refer to the characteristics of concern to caching and prefetching operations, such as an application's read/write operation ratio, I/O request types, file sizes, usage frequency, cache optimization experience, served-side cache hardware type, and user group characteristics.

Description

A big-data-based caching and prefetching acceleration method and device for computing devices

Technical field

This invention belongs to the field of computer equipment and information science. It is a cross-device acceleration method for computers and smart devices based on big data and cloud interaction.

Background technology

First, it should be noted that the cache referred to in the present invention is mainly the disk cache that computers and intelligent computing devices run to accelerate the system and break through the disk performance bottleneck, not a video streaming cache or a web routing cache.

Disk caching emerged to address the disk speed bottleneck. The improvement of disk performance lags far behind that of electronic components such as processors, which leaves the storage system the performance bottleneck of the whole computer system. Caching and prefetching are two very effective techniques for improving storage system performance. The idea of caching is to place frequently accessed data on a fast-access device, accelerating access to it and reducing waiting time. Prefetching loads data that is likely to be accessed in the near future from a slow device into a fast device ahead of time. Since prefetching is in effect another form of disk cache allocation, both are referred to as disk caching in this document.

Caching, as the name suggests, inserts a buffer layer between two levels of devices whose read/write performance differs greatly: between the high-performance device of the upper level and the low-performance device of the next level. The buffer's capacity is smaller than that of the low-performance device and its performance is often below that of the high-performance device, but its speed exceeds the low-performance device, so redirecting reads and writes originally aimed at the low-performance device improves performance. The word "cache" derives from a 1967 electronics journal article. Any structure placed between two kinds of hardware with a large speed difference to reconcile their data transfer rates can be called a cache. Given the critical position of caching in the whole storage system, cache management algorithms aimed at improving cache hit rate and minimizing disk I/O have emerged one after another. For example, LRU is the most widely used cache management algorithm; its core idea is to preferentially evict from the cache the data least accessed by the device in the recent period, thereby ensuring the cache is utilized to the greatest extent. There are also cache management algorithms that do the opposite of LRU, designed for specific application access patterns. The Most Recently Used (MRU) replacement algorithm, also called read-replacement, always evicts the most recently used block from the device cache, unlike LRU, which preferentially evicts the least recently used data. This is because MRU was originally designed for access patterns resembling sequential or cyclic scans. Whether based on spatial locality or access frequency, the final goal of a cache management algorithm is to improve the device-side cache hit rate and minimize the device's disk I/O.
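
For illustration only (code sketches are not part of the patent text), the contrast between LRU and MRU can be made concrete with a minimal Python cache; the class shape and names are assumptions for this sketch:

```python
from collections import OrderedDict

class Cache:
    """Fixed-capacity block cache with a pluggable replacement policy."""

    def __init__(self, capacity, policy="LRU"):
        self.capacity = capacity
        self.policy = policy          # "LRU" or "MRU"
        self.blocks = OrderedDict()   # key -> data, ordered by recency

    def access(self, key, load):
        """Return block `key`, loading it via `load()` on a miss."""
        if key in self.blocks:
            self.blocks.move_to_end(key)         # mark most recently used
            return self.blocks[key]
        if len(self.blocks) >= self.capacity:
            if self.policy == "LRU":
                self.blocks.popitem(last=False)  # evict least recently used
            else:
                self.blocks.popitem(last=True)   # MRU: evict most recent
        self.blocks[key] = load()
        return self.blocks[key]
```

Under a cyclic scan slightly larger than the cache, LRU evicts every block just before it is needed again, while MRU keeps most of the cycle resident, which is exactly the sequential/cyclic scan pattern for which MRU was designed.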

Prefetching is another important technique for improving storage system performance. Prefetching reads data that has not yet been accessed but is likely to be accessed in the future, in batches and ahead of time, from a slow storage device such as a disk into a high-speed device such as a cache, improving data access speed and ultimately lifting the performance of the whole storage system.

The effectiveness of prefetching depends mainly on two aspects: one is prefetch accuracy and the cache hit rate it affects; the other is the mining of access relationships during prefetching. Some research attempts to improve prediction accuracy by preserving longer histories of access information. Another class of algorithms mines the access relationships between files or between data blocks from a device's access history, predicts future accesses based on these relationships, and so improves the cache hit rate.
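
As an illustration of the second class of algorithms (mining access relations from history), the following hedged sketch predicts the next block from observed block-to-block transitions; the class name and counting scheme are assumptions, not the patent's algorithm:

```python
from collections import defaultdict

class HistoryPrefetcher:
    """Predicts the next block from previously observed transitions."""

    def __init__(self):
        self.follows = defaultdict(lambda: defaultdict(int))  # a -> {b: count}
        self.last = None

    def record(self, block):
        """Feed one access from the device's history stream."""
        if self.last is not None:
            self.follows[self.last][block] += 1
        self.last = block

    def predict(self, block):
        """Return the block most often seen right after `block`, or None."""
        successors = self.follows.get(block)
        if not successors:
            return None
        return max(successors, key=successors.get)
```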

Whether for caching or prefetching, the algorithms, optimization, and self-learning of disk caching have always been local: they target a specific device and pin their hopes on long-term correction and optimization. This is because, in the past, a disk cache system was designed solely to improve the performance of its own device: it could neither obtain the cache information of other devices nor affect the operation of other devices, and in any case the huge differences between device systems made interaction between devices appear meaningless.

Reason 1: the cache information of other devices cannot be obtained.

In the past, each disk cache formed an isolated system with its own device, with no interaction whatsoever with the caches of other devices.

Reason 2: the operation of other devices cannot be affected.

Since each is its own isolated system, they naturally cannot affect one another.

Reason 3: even if devices could interact, the huge differences between device systems made interaction appear meaningless.

The process of establishing a cache requires accumulating the device's operational data before the frequently used files can be counted and cached. Clearly, "frequently used files" is defined relative to a specific device; apart from a specific device, the concept does not exist at all. The programs common on a computer engineer's machine, such as Visual C or Dreamweaver, may never be installed on an ordinary user's computer, so what comparability would their two cache systems have? Moreover, the caches of different devices themselves differ enormously. In the past, most computers had no disk cache other than the processor cache; a minority of server applications used caching technology, and some desktop applications used SSD-based caching (as in hybrid hard drives). Between such devices, some have no cache, some have caches, and those that do have completely different cache hardware. Device-oriented caching technology therefore had no possibility of interaction.

However, a disk caching scheme need not be confined to a specific device as its object. If the cache optimization scheme is instead formed around a specific application as its object, data interaction between different devices acquires value; and if the disk cache is given an interactive component, so that the local device's disk cache can upload per-application optimization data to the cloud and receive feedback from it, then interaction, statistics, and data analysis become possible.

Furthermore, the pattern of disk caching itself can change. In another patent of the inventor (2014105350389), a cross-device disk cache system is proposed, in which the service end and the served end are connected by a short-range multi-channel network, and service ends can interact with one another over fiber and the like. In this way, the cross-device cache system can form a network, obtain big data, and make use of the cloud.

Summary of the invention

The present invention proposes a caching and prefetching method based on big data and the cloud. It changes the former local character of caching and prefetching technology by mining the data characteristics of caching schemes and experience, turning device-oriented caching into application-oriented caching, and turning fixed-device operation into cross-device operation.

The method requires multiple or a large number of cache service devices to submit data to the cloud. The data include partial feature data of the various applications, network operations, and associated files on the multiple or numerous served ends that these cache service devices serve. These feature data are mainly the characteristics of concern to caching, such as read/write ratio, I/O request type, sizes of read and written files, usage frequency, cache optimization experience, served-side cache hardware type, user group characteristics, file association relations, and the experience schemes obtained through self-learning on each device. After receiving the data, the cloud performs statistics and analysis, taking applications, games, web addresses, and similar objects as the statistical units, and derives cache configurations or prefetch optimization schemes for different applications (or for different applications under specific conditions of device, user, and so on). The cloud then returns the optimized caching and prefetching schemes, by active feedback or passive response, to the cache service devices for corresponding processing such as optimization and anticipation. See Figure 1.
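
A minimal sketch of the cloud-side statistics step just described, assuming each service device uploads one feature record per application (all field names and thresholds here are illustrative assumptions):

```python
from collections import defaultdict
from statistics import mean

# One record per (device, application), as uploaded by cache service devices.
reports = [
    {"app": "GameX", "read_write_ratio": 9.0, "io": "random_read", "freq": 12},
    {"app": "GameX", "read_write_ratio": 8.5, "io": "random_read", "freq": 30},
    {"app": "ShopBrowser", "read_write_ratio": 0.3, "io": "seq_write", "freq": 50},
]

def mine_schemes(reports):
    """Aggregate records per application and derive a cache/prefetch scheme."""
    by_app = defaultdict(list)
    for r in reports:
        by_app[r["app"]].append(r)
    schemes = {}
    for app, rs in by_app.items():
        ratio = mean(r["read_write_ratio"] for r in rs)
        # Read-heavy apps get a read cache plus prefetch; write-heavy apps
        # get a larger write cache, as in the examples later in this text.
        schemes[app] = {"read_cache": ratio >= 1.0,
                        "prefetch": ratio >= 1.0,
                        "write_cache": ratio < 1.0,
                        "samples": len(rs)}
    return schemes

print(mine_schemes(reports))
```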

Naturally, all data uploaded or downloaded between the cache service devices and the cloud are transmitted in ciphertext.
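
The patent does not specify a cipher; as one possible realization, the upload could be symmetrically encrypted before transmission, sketched here with the third-party `cryptography` package's Fernet scheme:

```python
import json
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in practice provisioned per device, not ad hoc
cipher = Fernet(key)

record = {"app": "GameX", "read_write_ratio": 9.0}
ciphertext = cipher.encrypt(json.dumps(record).encode())   # what gets uploaded
assert json.loads(cipher.decrypt(ciphertext)) == record    # cloud-side decrypt
```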

Further, the data uploaded by a cache service device can also include the cache hardware characteristics of its device, which are likewise used when applying the scheme the cloud feeds back. The cache optimization schemes the cloud then produces through analysis are not one per application but concrete and classified, e.g., which caching or prefetching scheme to adopt for a given application on a given kind of cache structure. Treating the various classes of cache hardware differently in this way helps in applying the optimization schemes.

Further, the data uploaded by a cache service device can also include user group characteristics, such as age range, occupational field, and areas of interest. Correspondingly, the optimized caching schemes fed back by the cloud also include optimizations or anticipation schemes for the usage characteristics of different application objects by different user groups. For example, users of a given industry or age bracket use their devices with distinct group characteristics: the elderly are unlikely to run 3D games that read the cache randomly and heavily, and lean instead toward browsers that write to the cache heavily. Knowing and applying these characteristics lets prefetching and caching work better. Naturally, this information is all group-level information; the device neither needs nor will obtain any user's personal information, and this group information is also encrypted.
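
As a hedged sketch of such group-based anticipation (the group labels, shares, and mapping are invented for illustration and are not taken from the patent):

```python
# Bias the read/write cache split by user group, per the example above:
# elderly users rarely run 3D games (heavy random reads) and lean toward
# browsers (heavy writes), so their write-cache share is enlarged.
GROUP_BIAS = {
    "elderly": {"read_cache_share": 0.3, "write_cache_share": 0.7},
    "gamer":   {"read_cache_share": 0.8, "write_cache_share": 0.2},
    "default": {"read_cache_share": 0.5, "write_cache_share": 0.5},
}

def cache_split(group: str) -> dict:
    """Return the anticipated cache split for a (coarse) user group."""
    return GROUP_BIAS.get(group, GROUP_BIAS["default"])
```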

Accordingly, a device is provided based on the method. Once installed on a computing device, it performs the following work: 1. it appropriates part of the served end's memory as a virtual disk used as a level-1 cache, saves its contents to a file package at shutdown, and loads that package back into the virtual memory disk at startup; 2. it detects whether available disk cache hardware exists, for example detecting whether a mobile device with a slow disk has external high-speed flash attached via WiGig; if an available cache device is detected, it is set up as a level-2 cache (or the user is asked whether to create it), and caching and prefetching proceed according to read/write operations and the like. After a period of self-learning and optimization, the device uploads, in ciphertext, the cache mode configuration data optimized on its own system to the processing server (the cloud). The processing server gathers statistics on the cached applications, games, network operations, and associated files across many devices, takes the application as the unit, derives cache structure optimizations or anticipations, and feeds them back to each device and to new devices.
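
A sketch of this two-step startup workflow, with the RAM-disk and device-detection calls stubbed out as assumptions (real implementations are platform-specific):

```python
import os
import pickle

L1_IMAGE = "l1_cache.pkl"   # cache contents persisted across shutdowns

def start_l1_ram_disk():
    """Step 1: use part of served-device memory as a virtual disk (L1)."""
    if os.path.exists(L1_IMAGE):
        with open(L1_IMAGE, "rb") as f:
            return pickle.load(f)    # reload the saved package at startup
    return {}

def save_l1(l1):
    """Persist L1 contents to a file package at shutdown."""
    with open(L1_IMAGE, "wb") as f:
        pickle.dump(l1, f)

def detect_l2():
    """Step 2: probe for an available fast device (e.g. WiGig flash)."""
    # Placeholder probe; a real implementation queries the OS device list.
    return os.environ.get("EXTERNAL_FLASH_PATH")   # None if absent

l1 = start_l1_ram_disk()
l2 = detect_l2()
if l2:
    print(f"creating L2 cache on {l2} (or after user confirmation)")
```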

Naturally, this feedback need not be real-time. For example, the optimized schemes can be stored server-side, indexed by application, cache hardware type, and the like; after the device is installed on a machine, it detects the applications on the machine, the machine's situation, the user type, and so on, and sends a corresponding request to the server; after retrieval, the server feeds the optimization scheme suited to this particular type back to the device.
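
This non-real-time feedback is essentially a keyed lookup; a minimal sketch, with the index fields assumed from the text (application, cache hardware type, user type):

```python
# Server side: optimized schemes stored under a composite index.
SCHEMES = {
    ("AutoCAD", "ssd_4k_write", "construction"): {"write_cache_mb": 512},
    ("GameX", "random_read", "gamer"): {"prefetch": True},
}

def handle_request(app, cache_hw, user_type):
    """Return the stored scheme best matching the requesting device."""
    for key in ((app, cache_hw, user_type),   # most specific match first
                (app, cache_hw, None),
                (app, None, None)):
        if key in SCHEMES:
            return SCHEMES[key]
    return None   # no scheme yet; the device falls back to self-learning
```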

The device also classifies the various cache hardware so that optimization schemes can be applied. For example, it marks out parallel devices and serial devices, and for parallel I/O adopts a fine-grained synchronization lock mechanism to increase the concurrency of I/O processing and thus improve I/O performance. As another example, it identifies and distinguishes I/O types, determines the random-write I/O types each cache device is best at, and by discriminating these features during I/O processing, selects the best-suited cache device for caching.
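
A hedged sketch of the I/O-type discrimination described here (the request format and thresholds are invented for illustration):

```python
def classify_io(requests):
    """Label an I/O stream as sequential/random and read/write dominated.

    `requests` is a list of (offset, size, is_write) tuples.
    """
    sequential = sum(1 for a, b in zip(requests, requests[1:])
                     if b[0] == a[0] + a[1])    # next offset continues the last
    writes = sum(1 for r in requests if r[2])
    pattern = "sequential" if sequential >= len(requests) // 2 else "random"
    kind = "write" if writes * 2 > len(requests) else "read"
    return f"{pattern}_{kind}"

# e.g. route a random-write stream to the cache device best at random writes
print(classify_io([(0, 4096, True), (4096, 4096, True), (8192, 4096, True)]))
```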

Beneficial effects

The present invention changes the cache optimization and prefetch optimization mechanisms of computing devices. It improves cache acceleration for applications used on a cache device for the first time, newly installed applications, newly visited websites, and rarely used applications. For frequently used applications, caching and prefetching can be further improved through device hardware characteristics, user characteristics, and the like.

Its effects are wide-ranging; example scenarios follow.

Example 1: on a large number of served devices, a certain file of a certain game consistently shows a frequently-read pattern. When the program is newly installed on another device, the anticipatory work can then proceed directly: the frequently read, rarely written file is cached to the high-speed device without having to accumulate cache data all over again.

Example 2: on a large number of served devices, a certain program, say a shopping browser, consistently shows frequent write activity. When this browser is started, a larger write cache can be allocated for it in anticipation, again without having to accumulate cache data.

In fact, because many programs are used too infrequently by any one user for the optimal cache to be learned on a single device, acquiring data across devices makes statistics and judgments over mass data samples possible, so that rarely used programs, and even programs being used for the first time, can be optimized in advance.

Specific implementation case

Based on the method of the present invention, a device has been designed and implemented. A device applying the method of the present invention can be hardware, software, or a combination of the two. The sample device shown here is software; if it carries external cache hardware or high-speed network components, the sample shown here becomes a combined hardware-software device.

The workflow of this sample device:

1. Appropriate part of the served end's memory as a virtual disk used as a level-1 cache, save its contents to a file package at shutdown, and load the package back into the virtual memory disk at startup; 2. detect whether available disk cache hardware exists, for example detecting whether a mobile device with a slow disk has external high-speed flash attached via WiGig; if an available cache device is detected, set it up as a level-2 cache (or ask the user whether to create it), and cache and prefetch according to read/write operations and the like. Through long-term monitoring and recognition of user habits, the system judges which data are about to be used and stores them in the cache device in advance. See Figure 2.

Meanwhile, the device also tests the device hardware and each cache it creates, judges their caching performance characteristics, and classifies them, for example as a random-read cache device or a 4K-write cache device. This classification information can be encrypted and uploaded to the cloud together with the machine's cache optimization data, and is likewise used when applying the scheme the cloud feeds back. The cache optimization schemes the cloud produces through analysis are not one per application but concrete and classified, e.g., which scheme to apply on which kind of cache structure. Treating the various classes of cache hardware differently in this way helps in applying the optimization schemes. For example, parallel devices and serial devices are marked out separately; as another example, I/O types are identified and distinguished, the random-write I/O types each cache device is best at are determined, and by discriminating these features during I/O processing, the best-suited cache device is selected for caching.
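
A sketch of such performance-based classification, with the micro-benchmark deliberately crude (the measurement method and the 50 MB/s threshold are assumptions):

```python
import os
import tempfile
import time

def measure_4k_write_mbps(dir_path, blocks=256):
    """Rough 4 KiB write micro-benchmark run inside `dir_path`."""
    fd, path = tempfile.mkstemp(dir=dir_path)
    data = os.urandom(4096)
    start = time.perf_counter()
    with os.fdopen(fd, "wb") as f:
        for _ in range(blocks):
            f.write(data)
        f.flush()
        os.fsync(f.fileno())                  # force the data to the device
    elapsed = time.perf_counter() - start
    os.remove(path)
    return blocks * 4096 / elapsed / 1e6      # MB/s

def classify_cache_device(dir_path):
    """Coarse label uploaded with the machine's cache optimization data."""
    return ("4k_write_cache" if measure_4k_write_mbps(dir_path) > 50
            else "bulk_read_cache")
```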

Meanwhile, the device also roughly judges user group characteristics: from the device type, its age, and the distribution of applications on it, it infers the user's occupation, interests, age, and so on. Naturally, this information is all group-level information; the device neither needs nor will obtain any user's personal information, and this group information is also delivered to the cloud in encrypted form.

The device then uploads, in ciphertext, the cache mode configuration data optimized on its own system to the processing server. The processing server gathers statistics on the cached applications, games, network operations, and associated files across many devices, takes the application as the unit, derives cache structure optimizations or anticipations, and feeds them back to each device and to new devices. Naturally, this feedback need not be real-time: for example, the optimized schemes can be stored server-side, indexed by application, cache hardware type, and the like; after the device is installed on a machine, it detects the applications on the machine, the machine's situation, the user type, and so on, and sends a corresponding request to the server; after retrieval, the server feeds the optimization scheme suited to this particular type back to the device.

For example: the best AutoCAD caching and prefetching scheme for a construction-industry user on a Dell Latitude 600 computer.

This is because the best caching and prefetching schemes for the same application on different user types and different devices are obviously different.

Beyond the above workflow, the design of the device further includes: 1. intelligent compression of system memory with automatic background release; 2. virtualization of applications, so that the system environment files needed by some or even all program files and programs are pre-stored in the cache. (The virtualization principle mainly uses sandbox virtualization: all actions during an application's installation and operation are recorded and processed into local files; when the main program file is executed, a temporary virtual environment is created to run it, much like PowerShadow, and all operations involved are completed within that virtual environment without touching the original system.)

The above are specific embodiments of the present invention and the technical means it uses. Many changes and modifications can be derived from the disclosure or teaching herein. Any equivalent change made according to the conception of the present invention, so long as the effect it produces does not exceed the substance covered by the specification and drawings, shall be considered within the technical scope of the present invention.

Description of the drawings

Fig. 1. Principle of device caching based on big data.

Fig. 2. Schematic of the sample device.

Claims (10)

1. A computer and smart device acceleration method, in which multiple or a large number of caching or prefetching service devices submit data to the cloud (e.g. a remote server); after receiving the data, the cloud performs statistics and analysis, and feeds back information (such as optimized caching or prefetching schemes) to each service device by active feedback, passive response, or similar means, so that each service device can perform the corresponding caching or prefetching processing (such as cache optimization for a given application) according to the feedback.
2. The method of claim 1, characterized in that the submitted data include partial feature data of the various applications or network operations on the multiple or numerous served devices that these caching or prefetching service devices serve; the so-called feature data mainly refer to the characteristics of concern to caching and prefetching operations, such as an application's read/write operation ratio, I/O request types, sizes of the most frequently read and written files, application usage frequency, network access frequency, cache optimization experience, served-side cache hardware type, user group characteristics, and file association relations; after receiving the data, the cloud performs statistics and analysis, mainly taking applications, games, web addresses, and similar objects as the statistical units, and derives the corresponding cache configurations or prefetch optimization schemes, such as the best cache configuration and prefetch scheme for each application (or for each application under specific conditions of device, user, and so on); the cloud then notifies each service device of the optimized caching and prefetching schemes, by active feedback or passive response, for corresponding processing such as optimization and anticipation.
3. The method of claim 1, characterized in that all data uploaded or downloaded between the caching or prefetching service devices and the cloud are transmitted in encrypted form.
4. The method of claim 1, characterized in that the data uploaded by the caching or prefetching service devices further include user group characteristics, such as age range, occupational field, and areas of interest; correspondingly, the optimization schemes fed back by the cloud also include optimization or anticipation schemes for the usage characteristics of different application objects by different user groups.
5. The method of claim 1, characterized in that the data uploaded by the caching or prefetching service devices further include the specific cache hardware type or characteristic information; correspondingly, the optimization schemes derived by the cloud are not one per application, but concrete and classified, e.g., which scheme a given application should adopt on which kind of cache structure.
6. The method of claim 1, characterized in that the data uploaded by the caching or prefetching service devices further include I/O type information; correspondingly, the optimization schemes derived by the cloud also include schemes for different I/O types within different applications (for example, the I/O types are identified and distinguished, the random-write I/O types each cache device is best at are determined, and by discriminating these features during I/O processing, the best-suited cache device is selected for caching).
7. The method of claim 1, characterized in that after the cloud has derived optimized schemes, it stores them server-side, indexed by application, cache hardware type, and the like; when a caching or prefetching service device is installed on a machine, it detects characteristic information such as the applications and cache hardware on the machine and sends a corresponding request to the server; after retrieval, the server feeds the optimization scheme suited to this specific situation back to the device.
8. A cache service device manufactured according to the method of claim 1, characterized in that, after being deployed to a served computing device, the device appropriates part of the served end's memory as a virtual disk used as a cache, saves the cached contents to a file at shutdown, and reloads them at startup.
9. A cache service device manufactured according to the method of claim 1, characterized in that, after being deployed to a served computing device, the device detects whether available disk cache hardware exists, for example detecting whether a mobile device with a slow disk has external high-speed flash attached via WiGig, and if an available cache device is detected, uses it to create a cache area.
10. A cache service device manufactured according to the method of claim 1, characterized in that the device itself also carries external cache components.
CN201410548817.2A 2014-10-17 2014-10-17 Caching and prefetching acceleration method and device for a computing device based on big data CN104320448B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410548817.2A CN104320448B (en) 2014-10-17 2014-10-17 Caching and prefetching acceleration method and device for a computing device based on big data

Publications (2)

Publication Number Publication Date
CN104320448A true CN104320448A (en) 2015-01-28
CN104320448B CN104320448B (en) 2019-11-01

Family

ID=52375617

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410548817.2A CN104320448B (en) 2014-10-17 2014-10-17 Caching and prefetching acceleration method and device for a computing device based on big data

Country Status (1)

Country Link
CN (1) CN104320448B (en)


Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101566927A (en) * 2008-04-23 2009-10-28 杭州华三通信技术有限公司 Memory system, memory controller and data caching method
CN101741866A (en) * 2010-02-01 2010-06-16 浪潮(北京)电子信息产业有限公司 On-line storage system and method
CN102799485A (en) * 2012-07-12 2012-11-28 北京恒华伟业科技股份有限公司 Historical data migration method and device
CN102748836A (en) * 2012-07-25 2012-10-24 广东志高空调有限公司 Air conditioning system and water-cooling and direct-cooling unit
CN102917031A (en) * 2012-09-25 2013-02-06 浙江图讯科技有限公司 Data computing system of safety production cloud service platform for industrial and mining enterprises
CN103902473A (en) * 2012-12-31 2014-07-02 华为技术有限公司 Data processing method and data caching system
CN103095848A (en) * 2013-02-06 2013-05-08 上海帜讯信息技术有限公司 Method of cloud folder arrangement and information interaction for enterprise client terminal
CN103347065A (en) * 2013-06-24 2013-10-09 李俊岭 Service information processing system and method
CN103338261A (en) * 2013-07-04 2013-10-02 北京泰乐德信息技术有限公司 Storage and processing method and system of rail transit monitoring data
CN103391185A (en) * 2013-08-12 2013-11-13 北京泰乐德信息技术有限公司 Cloud security storage and processing method and system for rail transit monitoring data

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105867832A (en) * 2015-01-19 2016-08-17 张维加 User-and-application-oriented method and device for accelerating computer and intelligent device
WO2016127656A1 (en) * 2015-02-10 2016-08-18 中兴通讯股份有限公司 Data caching method and device for net disk client
CN105988721A (en) * 2015-02-10 2016-10-05 中兴通讯股份有限公司 Data caching method and apparatus for network disk client
CN106997351A (en) * 2016-01-22 2017-08-01 阿里巴巴集团控股有限公司 Cache resource management method, system, and device
CN105955709A (en) * 2016-04-16 2016-09-21 浙江大学 Prefetching energy efficiency optimization adaptive device and method based on machine learning
CN105955709B (en) * 2016-04-16 2018-02-27 浙江大学 Prefetching energy efficiency optimization adaptive device and method based on machine learning
CN107094179A (en) * 2017-05-24 2017-08-25 浙江度衍信息技术有限公司 Website access request processing method
WO2020094157A1 (en) * 2018-11-05 2020-05-14 张维加 Electronic trading system based on base layer translation
CN111399913B (en) * 2020-06-05 2020-09-01 浙江大学 Processor accelerated instruction fetching method based on prefetching

Also Published As

Publication number Publication date
CN104320448B (en) 2019-11-01

Legal Events

Date Code Title Description
PB01 Publication
C06 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant