CN104320448B - Big-data-based caching and prefetching acceleration method and device for computing equipment - Google Patents

Big-data-based caching and prefetching acceleration method and device for computing equipment

Info

Publication number
CN104320448B
CN104320448B (granted publication of application CN201410548817.2A)
Authority
CN
China
Prior art keywords
scheme, caching, data, buffer, equipment
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201410548817.2A
Other languages
Chinese (zh)
Other versions
CN104320448A (en)
Inventor
张维加
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority application: CN201410548817.2A
Publication of CN104320448A
Application granted
Publication of CN104320448B
Legal status: Active


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/01 Protocols
    • H04L67/10 Protocols in which an application is distributed across nodes in the network
    • H04L67/1097 Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/50 Network services
    • H04L67/56 Provisioning of proxy services
    • H04L67/568 Storing data temporarily at an intermediate stage, e.g. caching
    • H04L67/5681 Pre-fetching or pre-delivering data based on network characteristics

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

A big-data-based caching and prefetching acceleration method and device for computing equipment. Unlike traditional caching schemes that take a single device as the object of optimization, this method has a large number of caching or prefetching service units submit data to the cloud. The data comprises partial feature data of the various applications and network operations running on the devices these service units serve. "Feature data" here refers mainly to characteristics of interest to caching and prefetching, such as an application's ratio of read to write operations, I/O request types, file sizes, usage frequency, accumulated cache-optimization experience, the cache hardware type of the served device, and user-group characteristics. After receiving the data, the cloud performs statistics and analysis, mines optimized caching or prefetching schemes for different applications, and returns the optimized caching and anticipation schemes to the caching service devices by active push or passive response. A device can then carry out anticipatory and targeted optimization directly, without having to accumulate cache data over a long period first.

Description

Big-data-based caching and prefetching acceleration method and device for computing equipment
Technical field
The product belongs to the field of computer equipment and information science and technology. It is a computer and smart-device acceleration method based on big data and cross-device interaction with the cloud.
Background technique
It should first be explained that the caching referred to in the present invention is primarily the disk caching of computers and intelligent computing devices, that is, caching used to accelerate the system and break through the disk performance bottleneck, rather than video-streaming caches or web caches on routers.
Disk caching arose to address the disk speed bottleneck. Improvements in disk performance lag far behind those of electronic components such as processors, so the storage system remains the performance bottleneck of the whole computer system. Caching and prefetching are two very effective techniques for improving storage-system performance. The idea of caching is to place frequently accessed data in a fast-access device, speeding up access and reducing waiting time. Prefetching reads data that is likely to be accessed soon from slow devices into fast devices in advance. Since prefetching is in practice also a form of disk-cache allocation, the two are collectively referred to here as disk caching.
Caching, as the name suggests, inserts a buffer layer between an upper-level high-performance device and a lower-level low-performance device whose read/write performance differs greatly. The buffer's capacity is smaller than the lower-level device and its performance usually below the upper-level device, but because it is faster than the low-performance device, redirecting reads and writes originally aimed at the slow device improves performance. The word "cache" dates from a 1967 article in an electronics engineering journal; any structure that sits between two kinds of hardware of very different speeds and coordinates their data-transfer rates is called a cache. Given the critical position of caching in the storage system, cache-management algorithms that aim to maximize the cache hit rate and minimize disk I/O have emerged one after another. For example, LRU is the most widely used cache-management algorithm; its core idea is to preferentially evict the data least recently accessed, ensuring the cache is used as efficiently as possible. There are also cache-management algorithms opposite to LRU, designed for specific application access patterns. The Most Recently Used (MRU) replacement algorithm, for instance, always evicts the most recently used block from the device cache, because MRU was designed for access patterns such as sequential or cyclic scans. Whether based on spatial locality or access frequency, the ultimate goal of a cache-management algorithm is to raise the hit rate of the device-side cache and minimize device disk I/O.
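As a concrete illustration of the LRU policy described above, the following is a minimal sketch (not taken from the patent itself) of an LRU cache that evicts the least-recently-accessed entry once capacity is exceeded:

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache: evicts the least-recently-used entry when full."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()  # insertion order doubles as recency order

    def get(self, key):
        if key not in self.store:
            return None
        self.store.move_to_end(key)  # mark as most recently used
        return self.store[key]

    def put(self, key, value):
        if key in self.store:
            self.store.move_to_end(key)
        self.store[key] = value
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict least recently used
```

An MRU policy would differ only in the eviction line, removing the most recently used entry (`popitem(last=True)`) instead.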
Prefetching is another important technique for improving storage-system performance. Prefetching reads data that has not yet been accessed but may be accessed in the future from slow storage devices such as disks into high-speed storage such as the cache in advance, improving the speed of data access and ultimately the performance of the whole storage system.
The effectiveness of prefetching depends mainly on two aspects: the accuracy of the prefetch, which affects the cache hit rate, and the mining of succession relations for prefetching. Some research tries to improve prediction accuracy by keeping more, and longer, history access information. Another class of algorithms mines a device's history access information for access relations between files or between data blocks, predicts future accesses from these relations, and so improves the cache hit rate.
Whether caching or prefetching, the algorithms, optimization, and self-learning of disk caching have always been local: tailored to a specific device and relying on long-term correction and optimization. This is because, in the past, disk-caching systems were designed to improve the performance of one device. They could neither obtain the cache information of other devices nor influence the operation of other devices; moreover, the huge differences between devices made interaction between device systems seem meaningless.
Reason 1: the cache information of other devices cannot be obtained
In the past, each disk cache formed an isolated system with its own device, with no interaction with the caches of other devices.
Reason 2: the operation of other devices cannot be influenced
Being isolated systems, they naturally could not affect one another.
Reason 3: the huge differences between devices made interaction between device systems seem meaningless
Building a cache requires accumulating the operation data of the device in order to identify active files and cache them. Obviously, "active files" defined this way are specific to one device; apart from that device, the concept does not exist. The programs common on a computer engineer's machine, for example Visual C or Dreamweaver, may not be installed on an ordinary user's computer at all, so what comparability is there between the two caching systems? Moreover, the caches of different devices differ enormously: most computers in the past had no disk cache beyond the processor cache, a few server applications had caching technology, and some desktop applications had SSD-based caching (hybrid hard disks, for example). Between devices with no cache, with a cache, or with entirely different cache hardware, the differences are huge. Device-oriented caching technology therefore had no possibility of interaction.
However, disk-caching schemes need not be confined to a specific device as their object. If instead a specific application is taken as the object, forming cache-optimization schemes for that application, then data exchange between different devices has value; and if the local disk cache is given an interactive component that uploads application-specific optimization data to the cloud and receives feedback, then interaction, statistics, and data analysis become possible.
Moreover, the mode of disk caching can change. In another patent of the inventor (2014105350389), a cross-device disk-caching system was proposed, in which the link between server side and served side is a short-range multi-channel network, and server sides can interact with one another over optical fiber and the like. In this way the cross-device caching system can form a network, obtain big data, and employ the cloud.
Summary of the invention
The present invention proposes a caching and prefetching method based on big data and the cloud. It changes the previously local character of caching and prefetching technology, mines the data characteristics of caching schemes and experience, turns device-oriented caching into application-oriented caching, and turns fixed single-device operation into cross-device operation.
The present invention provides a computer and smart-device acceleration method in which multiple or a large number of caching service devices or prefetching service units submit data to the cloud, including remote servers. After receiving the data, the cloud performs statistics and analysis, then feeds back to each service unit, by active push or passive response, feedback information comprising optimized caching or prefetching schemes, so that each service unit performs the corresponding caching or prefetching processing according to the feedback, including cache optimization for a particular application;
All data uploaded or downloaded between the caching service devices or prefetching service units and the cloud is transmitted in encrypted form. The uploaded data further includes user-group characteristics, specifically age range, occupational field, and range of interests; correspondingly, the optimization schemes fed back by the cloud also include schemes that optimize or anticipate the usage characteristics of different application objects for different user groups;
The data uploaded by the caching service device or prefetching service unit further includes I/O type information; correspondingly, the optimization schemes produced by cloud analysis include schemes for different I/O types in different applications, including labelling to distinguish I/O types, determining the random-read I/O types the cache device is best at, and, by discriminating I/O characteristics during operation, selecting the optimal cache device for caching;
Workflow:
First, part of the served device's memory is requisitioned and virtualized as a disk to serve as the level-1 cache; at shutdown its contents are saved to a file data pack, and at boot the data pack is loaded back into the virtual memory disk;
Second, the device detects whether a usable disk cache exists, including detecting whether a low-speed-disk mobile device has external high-speed WiGig flash attached, or detects other available cache. It creates a level-2 cache (or asks the user whether to create one), caches and prefetches according to read/write operations, and through long-term monitoring of user habits determines which data the system will use and pre-stores it in the cache device;
The device also tests the hardware and the various caches it creates, judges their caching-performance characteristics, and classifies them, for example as random-read cache devices or 4K-write cache devices. This classification information can be encrypted and uploaded to the cloud together with the machine's cache-optimization data, and is also used when applying the schemes fed back from the cloud. Correspondingly, the optimization schemes produced by cloud analysis are not one per application, but specific, classified schemes: which scheme to use for a given application on a given cache structure. This favours applying optimization schemes with different handling for different classes of cache device. It includes labelling parallel devices and serial devices separately, distinguishing I/O types by label, determining the random-write I/O types the cache device is best at, and, by discriminating I/O characteristics during operation, selecting the optimal cache device for caching.
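The first workflow step above, persisting the virtual-memory-disk contents at shutdown and restoring them at boot, could look roughly like this sketch (the snapshot file name and the serialization format are assumptions for illustration):

```python
import os
import pickle

CACHE_SNAPSHOT = "l1_cache_snapshot.pkl"  # hypothetical data-pack file name

def save_cache(cache: dict, path: str = CACHE_SNAPSHOT) -> None:
    """At shutdown: persist the level-1 (RAM-disk) cache contents to a data pack."""
    with open(path, "wb") as f:
        pickle.dump(cache, f)

def load_cache(path: str = CACHE_SNAPSHOT) -> dict:
    """At boot: reload the data pack into the virtual memory disk, if present."""
    if not os.path.exists(path):
        return {}  # no snapshot yet: start with an empty cache
    with open(path, "rb") as f:
        return pickle.load(f)
```

In a real implementation the "cache" would be a block image of the virtual disk rather than a Python dict, but the save-at-shutdown / load-at-boot cycle is the same.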
The submitted data comprises the partial feature data of the various applications or network operations on the multiple or numerous served devices that these caching service devices or prefetching service units serve. "Feature data" refers mainly to characteristics of interest to caching and prefetching, including an application's ratio of read to write operations, I/O request types, the sizes of the most frequently read and written files, usage frequency, network-access frequency, cache-optimization experience, the cache hardware type of the served device, user-group characteristics, and file-association relations. After receiving the data, the cloud performs statistics and analysis, taking application-level objects such as applications, games, or websites as the statistical objects, and derives the corresponding cache configurations or prefetch optimization schemes: the best cache-configuration scheme for each application, prefetch schemes under specific conditions such as different devices and different users, and cache-configuration or prefetch optimization schemes per application. The cloud then informs each service unit, by active push or passive response, of the optimized caching and prefetching schemes so that it can perform the corresponding processing, including optimization and anticipation.
In the above method, after the cloud has analyzed an optimized scheme, it stores the scheme on the server side, indexed by application and cache-device type. When a caching service device or prefetching service unit is next installed on a piece of equipment, it detects the applications and cache-device characteristics on that equipment and sends a corresponding request to the server; the server retrieves and feeds back the optimization scheme applicable to that specific situation.
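The server-side retrieval described here, schemes indexed by application and cache-device type, can be sketched as a simple lookup with a fallback (the index contents, key names, and scheme fields are hypothetical):

```python
# Hypothetical server-side index: optimization schemes keyed by
# (application, cache-device type), as the method describes.
scheme_index = {
    ("AutoCAD", "ssd"): {"read_cache_mb": 512, "prefetch": ["fonts", "templates"]},
    ("browser", "ram"): {"write_cache_mb": 256, "prefetch": []},
}

def lookup_scheme(application, device_type):
    """Return the stored scheme for this (app, device) pair, or any scheme
    recorded for the same application on another device type as a fallback."""
    scheme = scheme_index.get((application, device_type))
    if scheme is None:
        for (app, _), s in scheme_index.items():
            if app == application:
                return s
    return scheme
```

The fallback reflects the point made later in the text: a scheme learned on one device class is still a better starting point than accumulating cache data from scratch.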
The present invention also provides a caching service device for the computer and smart-device acceleration method based on big data and cross-device interaction with the cloud. After being deployed on a served computing device, the device requisitions part of the served device's memory and virtualizes it as a disk to serve as a cache, saves the cached contents to a file at shutdown, and reloads them at boot.
In the above device, after deployment on the served computing device, the device detects whether a usable disk cache exists, including detecting whether a low-speed-disk mobile device has external high-speed WiGig flash attached, and, if usable cache is detected, uses it to create a cache area.
In the above device, the device itself may also carry an external caching component.
The invention has the advantages that:
The present invention changes the cache-optimization and prefetch-optimization mechanism of computing equipment. It raises a cache device's caching acceleration capability for applications used for the first time, newly installed applications, newly visited websites, and rarely used applications. For frequently used applications, the caching and prefetching effect can also be further improved by targeting device hardware characteristics, user characteristics, and so on.
Its effects are wide-ranging; example scenarios follow.
Example 1: a certain folder of a certain game exhibits a frequently-read, infrequently-written pattern on a large number of served devices. When another device newly installs that program, the anticipatory work can be done directly, caching that folder to the high-speed device, without accumulating cache data again.
Example 2: a certain program, say a shopping browser, exhibits frequent writes on a large number of served devices. When that browser is launched, a larger write cache can be allocated for it anticipatorily, without accumulating cache data again.
In fact, many programs are used too infrequently by any one user for the cache ever to learn the optimum on a single device. With cross-device data collection, however, statistics and judgment over massive data samples become possible, so that many rarely used, even first-time-used, programs can be optimized in advance.
Brief description of the drawings
Fig. 1: schematic of device caching based on big data.
Fig. 2: schematic of the sample device.
Specific embodiment
Based on the method of the present invention, a device was designed and implemented. A device using the method of the present invention may be hardware, software, or a combination of the two. The sample device shown here is software; if it carried its own external cache device or high-speed network component, the sample would become a combined software-hardware device.
The present invention proposes a caching and prefetching method based on big data and the cloud. It changes the previously local character of caching and prefetching technology, mines the data characteristics of caching schemes and experience, turns device-oriented caching into application-oriented caching, and turns fixed single-device operation into cross-device operation.
This method requires multiple or a large number of caching service devices to submit data to the cloud. The data comprises the partial feature data of the various applications, network operations, and associated files on the multiple or numerous served devices that these caching service devices serve. These feature data are characteristics of interest to caching, such as read/write ratio, I/O request types, read/write file sizes, usage frequency, cache-optimization experience, the cache hardware type of the served device, user-group characteristics, file-association relations, and the experience schemes obtained on each device through self-learning. After receiving the data, the cloud performs statistics and analysis with applications, games, and website applications as the statistical objects, and derives optimization schemes for the cache configuration or prefetching of different applications, as well as cache configurations or prefetch optimization schemes under specific conditions such as particular devices and users. It then returns the optimized caching and prefetching schemes to the caching service devices, by active push or passive response, for corresponding processing such as optimization and anticipation. See Fig. 1.
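The cloud-side statistics step, aggregating per-application feature data reported by many devices, might be sketched as follows (here only the read/write ratio is aggregated, as a stand-in for the fuller feature set listed above; the report format is an assumption):

```python
from collections import defaultdict

def aggregate_reports(reports):
    """Cloud side: average the per-application read/write ratios
    reported by many served devices, keyed by application name."""
    totals = defaultdict(lambda: [0.0, 0])  # app -> [sum of ratios, count]
    for report in reports:
        for app, ratio in report.items():
            totals[app][0] += ratio
            totals[app][1] += 1
    return {app: s / n for app, (s, n) in totals.items()}
```

A high average read ratio for an application would suggest provisioning read caches and prefetching for it, a low one would suggest a larger write cache, matching the two example scenarios given earlier.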
Naturally, all data uploaded or downloaded between the caching service devices and the cloud is transmitted as ciphertext.
Further, the data uploaded by a caching service device may also include the caching-hardware characteristics of its device, which are likewise used when applying the schemes fed back from the cloud. The cache-optimization schemes produced by cloud analysis are then not one per application but cache-optimization schemes for specific classes: which caching or prefetching scheme to adopt for a given application on a given cache structure. This favours applying optimization schemes with different handling for different classes of cache device.
Further, the data uploaded by a caching service device may also include user-group characteristics, specifically age range, occupational field, and range of interests; correspondingly, the optimized caching schemes fed back by the cloud also include optimization or anticipation schemes for the usage characteristics of different application objects for different user groups. Users of a particular occupation or age bracket show distinct group characteristics in their device usage: the elderly, for example, are unlikely to run 3D games that demand large random-read caches and more likely to use browsers that favour write caching. Knowing and applying these characteristics lets prefetching and caching work better. Of course, this is all group-level information: the device neither needs nor obtains any personal information about an individual user, and the group information is likewise encrypted.
Designed according to this method, such a device performs the following work after being installed on a computing device:
1. Part of the served device's memory is requisitioned and virtualized as a disk to serve as the level-1 cache; at shutdown its contents are saved to a file data pack, and at boot the data pack is loaded back into the virtual memory disk;
2. The device detects whether a usable disk cache exists, including detecting whether a low-speed-disk mobile device has external high-speed WiGig flash attached; if usable cache is detected, it creates a level-2 cache (or asks the user whether to create one) and caches and prefetches according to read/write operations. After a period of self-learning and optimization, the device uploads, as ciphertext, the optimized cache-mode configuration data of its system to the processing server or cloud. The processing server or cloud takes the statistics of the applications, games, network operations, and associated files cached by many devices, performs cache-structure optimization or anticipation per application, and feeds the results back to each device and to new devices.
Of course, this feedback need not be real-time: the optimized schemes can be stored server-side, indexed by application, cache-device type, and so on. After a device is installed on a piece of equipment, it detects the applications, equipment conditions, and user type on that equipment and sends a corresponding request to the server; the server retrieves and feeds back the optimization scheme applicable to that specific type.
The device also classifies the various cache devices so that optimization schemes can be applied. This includes labelling parallel devices and serial devices separately (for parallel I/O, a fine-grained synchronization-lock mechanism increases the concurrency of I/O processing and thereby improves I/O performance), distinguishing I/O types by label, determining the random-read I/O types the cache device is best at, and, by discriminating I/O characteristics during operation, selecting the optimal cache device for caching.
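The I/O-type labelling described above, distinguishing sequential from random access so that the best-suited cache device can be chosen, can be illustrated by a simple offset-gap heuristic (the 50% threshold and the 4096-byte block size are illustrative assumptions, not values from the patent):

```python
def classify_io(offsets, block_size=4096):
    """Label an I/O offset stream 'sequential' if most consecutive requests
    are contiguous (gap of exactly one block), else 'random'."""
    if len(offsets) < 2:
        return "sequential"  # too little data to call it random
    contiguous = sum(
        1 for prev, cur in zip(offsets, offsets[1:]) if cur - prev == block_size
    )
    return "sequential" if contiguous / (len(offsets) - 1) >= 0.5 else "random"
```

A stream labelled "random" would then be steered to a cache device classified as strong at random reads, while sequential streams can often bypass the cache entirely.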
The workflow of the sample device:
1. Part of the served device's memory is requisitioned and virtualized as a disk to serve as the level-1 cache; at shutdown its contents are saved to a file data pack, and at boot the data pack is loaded back into the virtual memory disk. 2. The device detects whether a usable disk cache exists, including detecting whether a low-speed-disk mobile device has external high-speed WiGig flash attached; if usable cache is detected, it creates a level-2 cache (or asks the user whether to create one) and caches and prefetches according to read/write operations. Through long-term monitoring of user habits it determines which data the system will use and pre-stores it in the cache device. See Fig. 2.
At the same time, the device tests the hardware and the various caches it creates, judges their caching-performance characteristics, and classifies them, for example as random-read cache devices or 4K-write cache devices. This classification information can be encrypted and uploaded to the cloud together with the machine's cache-optimization data, and is also used when applying the schemes fed back from the cloud. The cache-optimization schemes produced by cloud analysis are then not one per application but specific and classified, for example which scheme to use on which cache structure. This favours applying optimization schemes with different handling for different classes of cache device, including labelling parallel and serial devices separately, distinguishing I/O types by label, determining the random-read I/O types the cache device is best at, and, by discriminating I/O characteristics during operation, selecting the optimal cache device for caching.
At the same time, the device makes a rough judgment of user-group characteristics, inferring the user's occupation, interests, and age from the device type, its age, and the distribution of applications on it. Of course, this is all group-level information: the device neither needs nor obtains any personal information about an individual user, and the group information is transmitted to the cloud in encrypted form.
The device then uploads, as ciphertext, the optimized cache-mode configuration data of its system to the processing server. The processing server takes the statistics of the applications, games, network operations, and associated files cached by many devices, performs cache-structure optimization or anticipation per application, and feeds the results back to each device and to new devices. Again, this feedback need not be real-time: the optimized schemes can be stored server-side, indexed by application, cache-device type, and so on. After a device is installed on a piece of equipment, it detects the applications, equipment conditions, user type, and so on, and sends a corresponding request to the server; the server retrieves and feeds back the optimization scheme applicable to that specific type.
For example: the optimal AutoCAD caching and prefetching scheme for a construction-industry user on a Dell Latitude 600 computer.
This is because the best cache-prefetch scheme for the same application clearly differs across user types and devices.
Beyond the above process, the design of the device further includes: 1. intelligent background compression and automatic release of system memory; 2. virtualization of applications, so that more, even all, of a program's files and the system files the program needs are pre-stored in the cache. The virtualization principle mainly uses sandbox-style virtualization of the environment and files: all actions from installation to execution of an application are recorded and processed into a local file, and when the main program file is executed a temporary virtual environment is generated in which it runs, much like PowerShadow, so that all operations involved are completed inside this virtual environment without touching the original system.
The above are specific embodiments of the present invention and the technical means used. Many changes and amendments can be derived from the disclosure or teaching herein; any equivalent change made under the conception of the present invention, whose effect does not depart from the import covered by the specification and drawings, shall be regarded as within the technical scope of the present invention, as hereby stated.

Claims (6)

1. A computer and smart-device acceleration method, characterized in that: multiple (or a large number of) cache service devices or prefetch service devices submit data to a cloud, including a remote server; after receiving the data, the cloud performs statistics and analysis, and, in an active-feedback or passive-response manner, returns feedback information containing the optimized caching schemes or prefetching schemes to each service device, so that each service device performs the corresponding caching or prefetching processing according to the feedback information, including cache optimization for a given application;
the data uploaded or downloaded between the cache service devices or prefetch service devices and the cloud are all transmitted in encrypted form; the data uploaded by the cache or prefetch service devices further include user-group characteristics, specifically age range, occupational field, and range of interests; correspondingly, the optimization schemes fed back by the cloud also include schemes that optimize or predict the different application features of different user groups;
the data uploaded by the cache or prefetch service devices further include I/O type information; correspondingly, the optimization schemes produced by the cloud analysis include optimization schemes for the different I/O types within different applications, including labeling and distinguishing the I/O types, identifying the random-read I/O type at which a cache device excels, and, by discriminating these features during I/O, selecting the optimal cache device for caching;
Workflow:
First, part of the memory of the served device is transferred and virtualized into a RAM disk used as a level-1 cache; at shutdown its contents are saved to a file data packet, and at boot the data packet is loaded back into the virtual memory disk;
second, the device detects whether an available disk cache exists, including detecting whether a low-speed-disk mobile device has external WiGig high-speed flash attached; if an available cache is detected, it is created as a level-2 cache, or created subject to user confirmation. Caching and prefetching are carried out according to read/write operations and the like; through long-term monitoring and recognition of user habits, the device judges which data the system will use and pre-stores it in the cache device;
the device also tests the hardware and the various caches it creates, judges their caching-performance characteristics, and classifies them, for example as random-read cache devices or 4K-write cache devices; this classification information can be uploaded to the cloud, encrypted together with the local cache-optimization data, and is also used when applying the schemes fed back from the cloud; correspondingly, the optimization scheme produced by the cloud analysis is not a single scheme per application, but a specific, classified set of schemes: for each class of cache structure, it specifies which scheme a given application should use, which makes it possible to apply the optimization schemes differently for each class of cache device; this includes labeling parallel devices and serial devices separately, labeling and distinguishing the I/O types, identifying the random-read I/O type at which a cache device excels, and, by discriminating these features during I/O, selecting the optimal cache device for caching.
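The classify-then-select step in the claim above can be sketched as follows. The benchmark figures, thresholds, and function names are purely illustrative assumptions; the claim itself does not specify how the performance test is carried out.

```python
# Sketch: classify each cache device by whichever measured benchmark it is
# better at (random-read IOPS vs 4K-write IOPS), then route a labelled I/O
# type to a device of the matching class, as claim 1 describes.

def classify_cache_device(random_read_iops, write_4k_iops):
    """Label a cache device by its stronger benchmark result."""
    if random_read_iops >= write_4k_iops:
        return "random-read"
    return "4k-write"

def select_device_for_io(devices, io_type):
    """devices: {name: (random_read_iops, write_4k_iops)}.
    Prefer a device whose classification matches the labelled I/O type;
    fall back to the first device if none matches."""
    for name, (rr, w4) in devices.items():
        if classify_cache_device(rr, w4) == io_type:
            return name
    return next(iter(devices))
```

In practice the device would obtain these figures from its own benchmark pass when the caches are created, and the labels would travel to the cloud alongside the local optimization data.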
2. The method of claim 1, characterized in that the submitted data include partial characteristic data of the various application programs or network operations on the many served devices; these characteristic data refer mainly to properties of interest for caching and prefetch operations, including the read/write operation ratio, I/O request types, most frequently read and written file sizes, usage frequency, network-access frequency, cache-optimization experience, server-side cache hardware type, user-group characteristics, and file associations; after receiving the data, the cloud performs statistics and analysis with the application program, game, or network address as the statistical object, and derives the corresponding optimized cache-configuration or prefetching schemes, for example the best cache-configuration and prefetch scheme for each application, or the optimized caching or prefetching scheme for the same application on different devices and in different specific user situations; the cloud then, in an active-feedback or passive-response manner, informs each service device of the optimized caching and prefetching schemes so that it can perform the corresponding processing, including optimization and prediction.
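The per-application aggregation that claim 2 describes can be sketched as a simple averaging step. The report field names and the averaging rule are illustrative assumptions; the claim only states that the cloud takes the application as the statistical object.

```python
# Sketch: many devices report caching-relevant characteristics; the cloud
# aggregates them per application as the basis for an optimized scheme.

from collections import defaultdict

def aggregate_reports(reports):
    """reports: list of dicts with 'app', 'read_write_ratio', 'usage_freq'.
    Returns per-application averages of the reported characteristics."""
    sums = defaultdict(lambda: {"rw": 0.0, "freq": 0.0, "n": 0})
    for r in reports:
        s = sums[r["app"]]
        s["rw"] += r["read_write_ratio"]
        s["freq"] += r["usage_freq"]
        s["n"] += 1
    return {
        app: {
            "read_write_ratio": s["rw"] / s["n"],
            "usage_freq": s["freq"] / s["n"],
        }
        for app, s in sums.items()
    }
```

A real system would also bucket the reports by cache-device type and user group, per the rest of the claim, before deriving a scheme for each bucket.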
3. The method of claim 1, characterized in that after the cloud has derived the optimized schemes, it stores them at the server side indexed by application and cache-device type; after a cache service device or prefetch service device has been installed on a piece of equipment, it detects the applications and cache-device characteristic information on the equipment and sends a corresponding request to the server; after the server performs a retrieval, it feeds the optimization scheme applicable to that specific situation back to the device.
4. A cache service device manufactured according to the method of claim 1, characterized in that after the device is deployed to a served computing device, it transfers part of the memory of the served device, virtualizes that partial memory into a RAM disk used as a cache, saves the cache contents to a file at shutdown, and reloads them at boot.
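The save-at-shutdown / reload-at-boot cycle of claim 4 can be sketched with an in-memory mapping standing in for the RAM disk. The use of pickle as the file-data-packet format is an assumption for illustration; the claim does not specify a serialization format.

```python
# Sketch of the claim-4 persistence cycle: the level-1 cache lives in
# memory (a dict standing in for the RAM disk), is serialized to a file
# data packet at shutdown, and reloaded into memory at boot.

import pickle

def save_ramdisk(ramdisk, path):
    """At shutdown: persist the in-memory cache contents to a file."""
    with open(path, "wb") as f:
        pickle.dump(ramdisk, f)

def load_ramdisk(path):
    """At boot: reload the file data packet into the virtual memory disk."""
    with open(path, "rb") as f:
        return pickle.load(f)
```

An actual RAM-disk driver would snapshot a block device rather than a dict, but the shutdown-save / boot-load symmetry is the same.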
5. A cache service device manufactured according to the method of claim 1, characterized in that after the device is deployed to a served computing device, it detects whether an available disk cache exists, including detecting whether a low-speed-disk mobile device has external WiGig high-speed flash attached; if an available cache is detected, it is used to create a cache area.
6. A cache service device manufactured according to the method of claim 1, characterized in that the device itself also has an external caching component.
CN201410548817.2A 2014-10-17 2014-10-17 A kind of caching of the calculating equipment based on big data and prefetch acceleration method and device Active CN104320448B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410548817.2A CN104320448B (en) 2014-10-17 2014-10-17 A kind of caching of the calculating equipment based on big data and prefetch acceleration method and device


Publications (2)

Publication Number Publication Date
CN104320448A CN104320448A (en) 2015-01-28
CN104320448B true CN104320448B (en) 2019-11-01

Family

ID=52375617

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410548817.2A Active CN104320448B (en) 2014-10-17 2014-10-17 A kind of caching of the calculating equipment based on big data and prefetch acceleration method and device

Country Status (1)

Country Link
CN (1) CN104320448B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105867832B (en) * 2015-01-19 2020-07-24 张维加 User and application oriented computer and intelligent equipment acceleration method and device
CN105988721A (en) * 2015-02-10 2016-10-05 中兴通讯股份有限公司 Data caching method and apparatus for network disk client
CN106997351B (en) * 2016-01-22 2021-03-02 斑马智行网络(香港)有限公司 Resource cache management method, system and device
CN105955709B (en) * 2016-04-16 2018-02-27 浙江大学 Efficiency optimization self-reacting device and method are prefetched based on machine learning
CN107094179A (en) * 2017-05-24 2017-08-25 浙江度衍信息技术有限公司 A kind of website visiting request processing method
CN109413189B (en) * 2018-11-05 2022-03-11 张维加 Electronic transaction system based on bottom translation
CN109753505B (en) * 2018-12-26 2022-06-24 济南银华信息技术有限公司 Method and system for creating temporary storage unit in big data storage system
CN110865947B (en) * 2019-11-14 2022-02-08 中国人民解放军国防科技大学 Cache management method for prefetching data
CN111399913B (en) * 2020-06-05 2020-09-01 浙江大学 Processor accelerated instruction fetching method based on prefetching

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101566927A (en) * 2008-04-23 2009-10-28 杭州华三通信技术有限公司 Memory system, memory controller and data caching method
CN101741866A (en) * 2010-02-01 2010-06-16 浪潮(北京)电子信息产业有限公司 On-line storage system and method
CN102748836A (en) * 2012-07-25 2012-10-24 广东志高空调有限公司 Air conditioning system and water-cooling and direct-cooling unit
CN102799485A (en) * 2012-07-12 2012-11-28 北京恒华伟业科技股份有限公司 Historical data migration method and device
CN102917031A (en) * 2012-09-25 2013-02-06 浙江图讯科技有限公司 Data computing system of safety production cloud service platform for industrial and mining enterprises
CN103095848A (en) * 2013-02-06 2013-05-08 上海帜讯信息技术有限公司 Method of cloud folder arrangement and information interaction for enterprise client terminal
CN103338261A (en) * 2013-07-04 2013-10-02 北京泰乐德信息技术有限公司 Storage and processing method and system of rail transit monitoring data
CN103347065A (en) * 2013-06-24 2013-10-09 李俊岭 Service information processing system and method
CN103391185A (en) * 2013-08-12 2013-11-13 北京泰乐德信息技术有限公司 Cloud security storage and processing method and system for rail transit monitoring data
CN103902473A (en) * 2012-12-31 2014-07-02 华为技术有限公司 Data processing method and data caching system


Also Published As

Publication number Publication date
CN104320448A (en) 2015-01-28

Similar Documents

Publication Publication Date Title
CN104320448B (en) A kind of caching of the calculating equipment based on big data and prefetch acceleration method and device
US20210044662A1 (en) Server side data cache system
US10057367B2 (en) Systems and methods for data caching in a communications network
CN105653591B (en) A kind of industrial real-time data classification storage and moving method
CN103856567B (en) Small file storage method based on Hadoop distributed file system
EP3131265B1 (en) Data prefetching method for distributed hash table dht storage system, node, and system
CN105867832B (en) User and application oriented computer and intelligent equipment acceleration method and device
US9244980B1 (en) Strategies for pushing out database blocks from cache
CA2867589A1 (en) Systems, methods and devices for implementing data management in a distributed data storage system
US11429630B2 (en) Tiered storage for data processing
CN106021445A (en) Cached data loading method and apparatus
US20170262232A1 (en) Method and apparatus for optimizing data storage based on application
CN105917318A (en) System and method for implementing SSD-based I/O caches
CN101841438B (en) Method or system for accessing and storing stream records of massive concurrent TCP streams
US10223270B1 (en) Predicting future access requests by inverting historic access requests in an object storage system
CN104133783B (en) Method and device for processing distributed cache data
CN104021028B (en) Web buffering method and device in virtual machine environment
US20090327303A1 (en) Intelligent allocation of file server resources
US9380126B2 (en) Data collection and distribution management
US9851925B2 (en) Data allocation control apparatus and data allocation control method
CN109144431A (en) Caching method, device, equipment and the storage medium of data block
CN114651433A (en) Access pattern driven data placement in cloud storage
CN110362776A (en) Browser front-end data storage method, device, equipment and readable storage medium storing program for executing
US20230185715A1 (en) Queue optimization via predicitve caching in cloud computing
CN104850548B (en) A kind of method and system for realizing big data platform input/output processing

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant