CN105867832A - User-and-application-oriented method and device for accelerating computer and intelligent device - Google Patents


Info

Publication number
CN105867832A
CN105867832A (application CN201510022782.3A)
Authority
CN
China
Prior art keywords
user
application
data
hardware
clouds
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510022782.3A
Other languages
Chinese (zh)
Other versions
CN105867832B (en)
Inventor
张维加
Original Assignee
张维加
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 张维加
Priority to CN201510022782.3A priority Critical patent/CN105867832B/en
Priority to PCT/CN2015/098536 priority patent/WO2016115957A1/en
Publication of CN105867832A publication Critical patent/CN105867832A/en
Application granted granted Critical
Publication of CN105867832B publication Critical patent/CN105867832B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers

Abstract

Provided are a user- and application-oriented method and device for accelerating computers and intelligent devices. Cache-and-prefetch control devices are deployed on a large number of computers. Each control device creates a memory-backed virtual disk on its host, runs read/write tests of different types on each hardware component, and models the host as a combination of data devices with different performance parameters; an external solid-state accelerator attached to the control device may also take part in the modeling. The control device then makes a preliminary analysis of the applications on the served machine and, from its network activity, of the user type, and submits these together with the hardware model to the cloud. After analysis, and drawing on its archived records, the cloud issues acceleration schemes for different applications and user groups on different classes of hardware, which are returned to the device for first-round processing. The device then begins counting each application's read/write operations, I/O types, and usage frequency; after a period of time it feeds these statistics, combined with effect feedback, back to the cloud, which records them and issues a corrected scheme. The iteration repeats until the scheme is essentially complete, and the final scheme and its history are stored in the cloud.

Description

A user- and application-oriented acceleration method and device for computers and smart devices
Technical field
This invention belongs to the field of computer equipment and information science and technology. It is a cross-device acceleration method for computers and smart devices based on big data and the cloud.
Background technology
It should first be made clear that the caching referred to in this invention is primarily the disk caching of computers and intelligent computing devices, i.e. caching used to accelerate the machine or its operation by breaking through the performance bottleneck of the disk, not video streaming caches or web caches on routers.
Disk caching arose to address the disk speed bottleneck. Improvements in disk performance lag far behind those of electronic components such as processors, which leaves the storage system as the performance bottleneck of the whole computer. Caching and prefetching are two highly effective techniques for improving storage-system performance. The idea of caching is to place frequently accessed data on a fast-access device, speeding up access and reducing waiting time. Prefetching moves data that is about to be accessed from slow devices into fast devices ahead of time. Since prefetching is essentially one form of disk-cache allocation, in this document both are referred to collectively as disk caching.
Caching, as the name suggests, inserts a buffer between two levels of devices whose read/write performance differs greatly: it sits between the high-performance device of the upper level and the low-performance device of the level below. Its capacity is smaller than that of the low-performance device, and its performance is often below that of the high-performance device, but because it is faster than the low-performance device it improves performance by absorbing reads and writes originally directed at the slower device. The term "cache" derives from a 1967 electronic-engineering journal article; any structure that coordinates the data-transfer speed difference between two pieces of hardware with a large speed gap can be called a cache. Given the critical position caching occupies in the storage system, cache-management algorithms aimed at raising the cache hit rate and minimizing disk I/O have emerged one after another. LRU, for example, is the most widely used cache-management algorithm; its core idea is to preferentially evict the data least recently accessed, thereby maximizing cache utilization. There are also cache-management algorithms that run contrary to LRU, designed for specific application access patterns. The Most Recently Used (MRU) replacement algorithm, also called read-replace, always evicts the most recently used block, the opposite of LRU's eviction of the least recently used data; this is because MRU was designed for access patterns resembling sequential or cyclic scans. Whether based on spatial locality or access frequency, the final goal of a cache-management algorithm is to raise the hit rate of the device-side cache and reduce the number of disk I/Os.
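The LRU policy described above can be sketched in a few lines. This is a generic illustration of the algorithm, not code from the patent:

```python
from collections import OrderedDict

class LRUCache:
    """Least-Recently-Used cache: evicts the entry unused for longest."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._data = OrderedDict()

    def get(self, key):
        if key not in self._data:
            return None                      # cache miss
        self._data.move_to_end(key)          # mark as most recently used
        return self._data[key]

    def put(self, key, value):
        if key in self._data:
            self._data.move_to_end(key)
        self._data[key] = value
        if len(self._data) > self.capacity:
            self._data.popitem(last=False)   # evict least recently used
```

An MRU variant would instead call `popitem(last=True)`, evicting the most recently used entry, which suits cyclic-scan workloads.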
Prefetching is another important technique for improving storage-system performance. It reads data that has not yet been accessed, but may be accessed in the future, from slow storage devices such as disks into high-speed devices such as caches in advance and in batches, to raise the speed of data access and ultimately the performance of the whole storage system.
The effectiveness of prefetching depends mainly on two things: the accuracy of the prefetch and the cache hit rate it yields, and the mining of sequential relationships within accesses. Some research tries to improve prediction accuracy by preserving longer histories of access information. Another class of algorithms mines the access relationships between files or between data blocks from a device's access history and predicts future accesses from these relationships, improving the cache hit rate.
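The relation-mining class of prefetch algorithms mentioned above can be illustrated with a minimal first-order "follows" table built from an access history; the function names and the single-successor prediction are illustrative assumptions, not the patent's algorithm:

```python
from collections import defaultdict

def build_follow_table(history):
    """Count how often each block is accessed immediately after another."""
    follows = defaultdict(lambda: defaultdict(int))
    for prev, nxt in zip(history, history[1:]):
        follows[prev][nxt] += 1
    return follows

def predict_next(follows, current):
    """Prefetch candidate: the block that most often followed `current`."""
    if current not in follows or not follows[current]:
        return None
    return max(follows[current], key=follows[current].get)
```

A real system would prefetch the predicted block into the fast tier before the application requests it, and fall back to no-op when history offers no evidence.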
Whether caching or prefetching, many problems have persisted all along, limiting their application.
For example, older caching technology takes the device as its object: its aim is to lift device performance, by whatever means yields a gain. This has three drawbacks. First, every cache is designed around a specific device, so it is not portable: it forms a hardware binding and cannot be generalized. Second, the performance gain of one device is of no help to any other device; marginal cost cannot be reduced and marginal utility cannot be raised. For example, by fitting a larger cache the Samsung 850 EVO achieves better performance than the 840 EVO, but this does nothing for existing 840 EVO drives. Third, it helps users little. Again taking Samsung as an example, the caches of its solid-state drives are generally designed rather small; the reason, as its designers explain, is simply that users barely perceive the gain brought by a larger cache, so although benchmark figures climb, actual user satisfaction remains low.
Likewise, the algorithms, optimization, and self-learning of older cache prefetching are all local and tied to a concrete device: their effect rests on the immediate gain the hardware brings, and any later optimization depends on long-term correction. This is because, in the past, no cache-prefetching system could obtain the cache information of other devices, nor influence the operation of other devices; and with the device rather than the application and user type (user group) as the object, interaction between device systems seemed meaningless even though devices differ enormously:
Reason 1: the cache information of other devices could not be obtained.
In the past, each disk cache formed an isolated system with its own device and had no interaction with the caches of other devices.
Reason 2: the operation of other devices could not be influenced.
Being isolated systems, they naturally could not affect one another.
Reason 3: with devices differing so greatly, interaction between device systems seemed meaningless anyway.
Take the establishment of a cache itself as an example: operating data of the device must accumulate before hot files can be counted and cached. Clearly, the hot files in question are specific to a concrete device; detached from that device, the concept of a hot file no longer exists. The programs commonly used by a computer engineer, such as Visual C or Dreamweaver, may well not be installed on an ordinary user's computer, so what comparability is there between the two machines' caches? Moreover, the caches of different devices themselves differ enormously: most computers in the past had no disk cache beyond the processor cache, a minority of server applications used caching technology, and some desktop applications used solid-state-based caching (such as hybrid hard disks). Between devices with no cache, with a cache, or with entirely different cache hardware, the differences are huge. Device-oriented caching technology therefore offered no possibility of interaction.
In short, the disk caching and computer cache prefetching of the past took the device as object and were local, resulting in poor portability, no generality, low marginal utility, high marginal cost, low real-world satisfaction, and slow, time-consuming optimization.
To change this pattern, existing equipment, software, and hardware must be redesigned and their mode of operation rewritten.
But this redesign is worthwhile. Although the present invention is only a preliminary study of it, it has already obtained unexpected results.
Moreover, the pattern of disk caching can change. In another patent of the inventor (2014105350389), the inventor proposed a cross-device computing acceleration system whose essence is the conveyance of performance. In a cross-device caching system, the serving and served ends sit on a short-range but multi-channel network: performance can be transported over short distances, and interaction between serving ends can rely on fiber and the like. In this way the cross-device caching system can form a network, obtain big data, and apply the cloud.
Summary of the invention
The present invention proposes a user- and application-oriented computer acceleration method and device.
The scheme of the present invention changes the local character of previous caching and prefetching technology and mines the data character of caching schemes and experience: it turns device-oriented caching into caching oriented to applications and user types, turns fixed single-device operation into networked cross-device collaborative operation, and turns single-level cache-and-prefetch equipment into a three-level structure.
Summary of the method flow: (control device installation and identification) → device modeling → application and user-type data transmitted to the cloud together with the device model → first optimization attempt by cloud computing → feedback collected and transmitted to the cloud → second correction by the cloud, with feedback and data recorded → repeat until refined, with the feedback history logged.
Traditional cache prefetching is usually completed within one device. The cache-prefetching technology provided by the present invention adopts a three-level architecture: the cache-prefetch control devices of a great number of terminals serve as the first level and contribute big data; cloud servers, few in number but strong in analysis and especially in fuzzy computation, serve as the second-level control and perform cloud computing; and the external solid-state hardware on the control device's USB interface is added, the whole being completed by the three levels together. In terms of service area this three-level structure is no longer local: the levels are connected by network and work in collaboration.
The method requires multiple or numerous cache-prefetch terminal control devices and a cloud server with fuzzy-analysis capability. The terminal control devices pre-process the computing equipment to be accelerated, including loading the accelerating hardware, detecting network devices, and partitioning memory and storage. Each control device then runs read/write tests of different types (such as 4K read/write and sequential read/write) on each hardware component of the equipment, models the equipment as a combination of data devices with different performance parameters, and classifies the device and its various cache devices so that schemes can be applied optimally. For example, parallel and serial devices are marked separately, and for parallel I/O a fine-grained synchronization-lock mechanism is used to increase the concurrency of I/O processing and thus I/O performance; likewise, I/O types are marked and distinguished, the random-write I/O type that a cache device is best at is identified, and by discriminating these features during I/O the best-suited cache device is selected for caching.
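The I/O-type discrimination described above can be sketched as a classifier over a request trace; the 50% threshold and the tuple layout are assumptions for illustration, since the patent does not fix them:

```python
def classify_io(requests):
    """Classify a trace of (offset, size, op) requests by access pattern.

    Returns 'sequential' if most requests continue exactly from where the
    previous one ended, else 'random' -- the kind of label the control
    device could use to route traffic to the cache device best at it.
    """
    sequential = 0
    prev_end = None
    for offset, size, _op in requests:
        if prev_end is not None and offset == prev_end:
            sequential += 1
        prev_end = offset + size
    ratio = sequential / max(len(requests) - 1, 1)
    return "sequential" if ratio >= 0.5 else "random"
```

A fuller version would also split the trace by read/write op and by block size (4K vs. large) before choosing a cache zone.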
The control device then makes a preliminary analysis of the various application information on the served equipment and the user characteristic data of its network operation, and submits these to the cloud together with the hardware-modeling result. After receiving the data, the cloud performs statistics and fuzzy analysis and, according to its archive of prior experience data, issues optimized acceleration schemes for different applications and different user types (user groups) on the differently modeled hardware, which are returned to the cache service device for first-round processing.
After initially applying the first configuration-guidance scheme returned by the cloud, and after a period of self-learning and optimization, the control device again counts each application's read/write ratio, I/O request types, hot-file count and size, usage frequency, user-type features, and so on, and feeds back to the cloud again together with its own tests over time and collected user feedback. The configuration data of the optimized cache modes in each system are uploaded to the processing server (the cloud) as ciphertext.
The cloud records the data and feedback and issues a correction, or second preferred scheme. This is repeated several times until the scheme is essentially refined, and the final result and the optimization history are stored in the cloud.
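The repeat-until-refined loop above can be sketched abstractly. The callback names and the convergence test (stop when the score change falls below a tolerance) are assumptions, since the patent does not specify when a scheme counts as essentially refined:

```python
def iterate_until_stable(apply_scheme, collect_stats, cloud_correct,
                         scheme, max_rounds=10, tolerance=0.01):
    """Repeat apply -> measure -> correct until improvement stalls.

    `cloud_correct(scheme, stats)` stands in for the cloud's correction
    step and returns (new_scheme, effectiveness_score).
    """
    prev_score = None
    score = 0.0
    for _ in range(max_rounds):
        apply_scheme(scheme)                 # device applies current scheme
        stats = collect_stats()              # device gathers I/O statistics
        scheme, score = cloud_correct(scheme, stats)
        if prev_score is not None and abs(score - prev_score) < tolerance:
            break                            # essentially refined: stop
        prev_score = score
    return scheme, score
```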
After receiving the final optimization data, the cloud performs statistics and analysis, taking application objects such as programs, games, and web addresses as the objects of statistics, and summarizes caching or prefetching optimization schemes for different applications (or schemes for different applications under concrete conditions such as different devices and users), so that the optimized caching and prefetching schemes can later be returned again, by active push or passive response, to cache service devices for corresponding processing such as optimization and anticipation. See Figure 1.
Naturally, all data uploaded and downloaded between the cache service devices and the cloud are transmitted as ciphertext.
Furthermore, the data uploaded by a cache service device may also include the cache hardware characteristics of its equipment, which are likewise used when applying the scheme the cloud feeds back. The cache-optimization scheme produced by cloud analysis is then not one scheme per application, but concrete and classified: which caching or prefetching scheme this application should adopt on which kind of cache structure. Handling each class of cache device differently in this way benefits the optimal application of schemes.
Furthermore, the data uploaded by a cache service device may also include user-group characteristics, such as age range, occupational field, and areas of interest. Correspondingly, the optimized caching schemes fed back by the cloud also include optimization or anticipation schemes tailored to how different user types (user groups) use different application objects. Users of a given industry or age bracket show clear group characteristics in how they use equipment: an elderly person is unlikely to run a large number of 3D games reading from a random-access cache, and is more likely to use a browser that writes heavily to cache. Knowing and applying these characteristics allows prefetching and caching to work better. Naturally, all of this is user-group information: the device neither needs nor obtains any personal information of an individual user, and the user-group information is likewise encrypted.
The control device can also optionally enable a service-node mode (at the user's choice). If the user allows service-node mode, the control device will, under the direction of the cloud server, provide services such as CDN caching, short-range network caching, VPN service, and SMTP service for other nearby users. The user in turn receives a certain profit in return.
Several sample devices built according to the method are described in the implementation-case section.
Beneficial effects and inventiveness
The present invention can change the cache-optimization and prefetch-optimization mechanisms of computing equipment, and can lift the cache acceleration of applications used for the first time on a cache device, newly installed applications, newly visited websites, and applications of low usage frequency. For commonly used applications, it can further improve the effect of caching and prefetching through device-hardware features, user-type features, and the like.
Its effects are broad. At the user level, even on a freshly set-up machine, the websites a user cares about, and related websites, can be accessed quickly, even if the user is visiting for the first time or returns only rarely (a very large share: roughly 60% of an ordinary netizen's page visits are to sites visited fewer than three times), which was previously impossible. Relying on the big data of the user group, the device can also mine more website relatedness and acceleration techniques.
Similarly, even on a freshly set-up machine, the applications a user commonly uses and likes can run smoothly. Beyond these two points, the more users there are and the more widely they are distributed, the better the user experience becomes: the system possesses network effects and a snowball effect.
For applications, example effect scenarios follow.
Example one: if a certain file of a certain game exhibits a frequently-read pattern across a large number of served devices, then when the program is freshly installed on a device, anticipatory work can be done directly, such as caching to a high-speed device the folder that on other devices is frequently read but rarely written, without having to accumulate cache data all over again.
Example two: if a certain program, such as a certain shopping browser, exhibits frequent-write behaviour across a large number of served devices, then when that browser is started, a larger write cache can be allocated for it anticipatorily, without having to accumulate cache data again.
In fact, many programs are used too infrequently by a single user for an optimal cache ever to be learned on one device; but with cross-device data acquisition, statistics and judgments can be made over large data samples, so that rarely used, and even first-time-used, programs can be optimized in advance.
At the equipment level, the device is general, portable, and interconnected, and continuous follow-up functional upgrades can be completed by relying on cloud upgrades.
The inventiveness of the present invention:
The invention is a new working method of cache prefetching and the manufacture of its device.
First, the disk caching and computer cache prefetching of the past took the device as object and were local, resulting in poor portability, no generality, low marginal utility, high marginal cost, low real-world satisfaction, and slow, time-consuming optimization. The new cache prefetching provided by the present invention takes the application and the user type (user group) as object and is networked: it has generality and portability, enjoys network-scale effects, has high marginal utility and high application satisfaction, and can complete optimal configuration rapidly.
Second, unlike past cache-prefetching technology, the cache-prefetching technology provided by the present invention adopts in its method flow: device modeling → application and user-type data transmitted to the cloud together with the device model → first optimization attempt by cloud computing → feedback collected and transmitted to the cloud → second correction by the cloud, with feedback and data recorded → repeat until refined, with the feedback history logged. The workflow of the present invention is iterative, and in the course of operation it constructs, from nothing, a body of big data reflecting the relation of application features and user-type features to hardware models, together with a new cloud-computing mode of fuzzy analysis and iterative guidance.
Third, traditional cache prefetching is usually completed within one device, whereas the cache-prefetching technology provided by the present invention adopts a three-level architecture: the cache-prefetch control devices of a great number of terminals serve as the first level and contribute big data; cloud servers, few in number but strong in analysis and especially in fuzzy computation, serve as the second-level control and perform cloud computing; and the external solid-state hardware on the control device's USB interface is added, the whole being completed by the three levels together.
In terms of service area, this three-level structure is no longer local: the levels are connected by network and work in collaboration.
Fourth, past cache-prefetching technology entirely ignored the differences inherent in user type. For the same equipment, the demands of different users differ greatly. The final object of a technology's service should be the person, not the equipment. An elderly person may use a given browser mainly to watch videos and read news, while a young person's main use may be playing web games; this difference ought to be reflected in the application's caching scheme. Of course, this was entirely ignored, or rather it was unavoidable: past technical schemes had no way around it, since equipment cannot predict its buyer before sale and a program cannot predict its user before being downloaded. With the present invention, user types can be mined, and the relevant big data created and applied in cache-prefetching technology.
Implementation cases
Based on the method of the present invention, devices were designed and implemented. A device applying the method of the present invention can be hardware, software, or a combination of both. Two sample devices are shown here: the first is a combined software-and-hardware device; the second omits the external cache device and becomes software with high-speed network components.
The first sample is a control device with an external solid-state accelerating hardware unit on a USB 3.0 cable. The solid-state accelerating hardware delivers 620 MB/s sequential read, 550 MB/s sequential write, 120 MB/s 4K read, and 160 MB/s 4K write; these are factory test parameters measured over Thunderbolt, and the performance is largely attainable over USB 3. The workflow of the device (shown in Figure 2):
Step one: cache loading and virtualization.
1. Load the solid-state accelerating hardware. 2. Allocate part of the served machine's memory and virtualize it as a disk to serve as the level-one cache; save its contents to a file-data package at shutdown, and load the package back into the memory virtual disk at startup. The allocated size starts at an initial minimum and is progressively revised through the subsequent cloud-feedback process. 3. Detect whether other disks usable as cache exist; for example, for a mobile device with a slow disk, detect whether external WiGig high-speed flash is present. If an available cache is detected, create it as the level-two cache (or let the user decide whether it is created), so that caching and prefetching can be arranged according to read/write behaviour.
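The two-tier arrangement of step one, with the level-one contents persisted across shutdowns, can be sketched as follows. This is a stand-in model: dicts play the roles of the memory virtual disk (L1) and external flash (L2), and `snapshot`/`restore` stand in for the save-at-shutdown/load-at-startup file package:

```python
class TieredCache:
    """Two-tier cache: small fast L1 (RAM-disk stand-in) backed by a
    larger L2 (external flash stand-in)."""

    def __init__(self, l1_size, l2_size):
        self.l1, self.l2 = {}, {}
        self.l1_size, self.l2_size = l1_size, l2_size

    def read(self, key, backing):
        if key in self.l1:
            return self.l1[key]              # L1 hit
        value = self.l2.get(key, backing.get(key))
        self._promote(key, value)
        return value

    def _promote(self, key, value):
        if len(self.l1) >= self.l1_size:
            old_key, old_val = self.l1.popitem()   # demote an L1 entry
            if len(self.l2) < self.l2_size:
                self.l2[old_key] = old_val
        self.l1[key] = value

    def snapshot(self):
        """Persist L1 contents (the 'save at shutdown' step)."""
        return dict(self.l1)

    def restore(self, snap):
        """Reload the saved package at startup."""
        self.l1.update(snap)
```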
Step two: measurement.
After the preparatory work, the control device runs read/write tests of different types on the device's hardware and the created cache components, such as 4K read/write, 512K random read/write, and sequential read/write, to judge the caching-performance characteristics of each part of the equipment to be accelerated. The external accelerating hardware also takes part in the test, because the device's USB interface can have a large impact.
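A minimal write-throughput probe of the kind step two describes might look like this. It is a rough sketch: a serious benchmark would also test reads and bypass the OS page cache (e.g. `O_DIRECT`) for honest numbers, and the pseudo-random offset formula is an arbitrary choice:

```python
import os
import time

def measure_throughput(path, block_size, blocks, sequential=True):
    """Write `blocks` blocks of `block_size` bytes and return MB/s."""
    data = os.urandom(block_size)
    start = time.perf_counter()
    with open(path, "wb") as f:
        for i in range(blocks):
            if not sequential:
                # scatter writes pseudo-randomly across the file
                f.seek((i * 7919 % blocks) * block_size)
            f.write(data)
        f.flush()
        os.fsync(f.fileno())                 # force data to the device
    elapsed = time.perf_counter() - start
    return (block_size * blocks) / elapsed / 1e6
```

Running it with `block_size=4096` approximates a 4K test; a large block size approximates the sequential test.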
Step three: modeling.
According to the measured data, and reading other hardware information such as size and interface through system functions (e.g. Windows functions), the equipment is modeled as a combination of data sets with different performance parameters; each part receives read/write-performance scores and a composite score and is categorized, e.g. this part belongs to random-read cache devices or to 4K-write cache devices. This categorization information can be encrypted and uploaded to the cloud together with the machine's cache-optimization data, and is also used when applying the scheme fed back from the cloud, because the cache-optimization scheme produced by cloud analysis is not one scheme per application but concrete and classified: which scheme to apply on which kind of cache structure. Handling each class of cache device differently in this way benefits the optimal application of schemes. For example, parallel and serial devices are marked separately; likewise I/O types are marked and distinguished, the random-write I/O type the cache device is best at is identified, and by discriminating these features during I/O the best-suited cache device is selected for caching.
Step four: scan the application state and broadly determine the user type (user group).
This step in fact has many possible implementations. In the sample, our control device does the following: it scans program installation directories to obtain the kinds of applications, scans the Prefetch directory and logs to obtain application usage frequency, and scans the system TEMP folder to obtain frequently visited websites, inferring user-type features from the websites and deducing user habits. The device broadly judges user-group features from websites and cache files, and from the device type and the distribution of applications across it judges the user's occupation, interests, age, and so on. Naturally, all of this is user-group characteristic information: the device neither needs nor obtains the personal information of any individual user, and the user-group information is delivered to the cloud in encrypted form.
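The inference in step four can be sketched as keyword scoring over the names gathered by the scan. The hint table and group labels below are hypothetical, the patent does not specify them, and only aggregate group traits are derived:

```python
# Hypothetical keyword table -- illustration only.
PROFILE_HINTS = {
    "developer": {"visual studio", "dreamweaver", "git"},
    "shopper":   {"taobao", "shopping"},
    "gamer":     {"game", "4399"},
}

def infer_user_groups(installed_apps, visited_sites):
    """Score coarse user-group labels from app names and site keywords,
    returning labels ordered by strength of evidence. No personal data
    is used -- only names already visible in the scan."""
    evidence = {name.lower() for name in installed_apps + visited_sites}
    scores = {}
    for group, hints in PROFILE_HINTS.items():
        hits = sum(1 for e in evidence for h in hints if h in e)
        if hits:
            scores[group] = hits
    return sorted(scores, key=scores.get, reverse=True)
```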
Step five: preliminary upload of data to the cloud.
The control device performs a preliminary analysis and submits to the cloud the various application information on the served equipment, the user characteristic data of its network operation, and the hardware-modeling result.
The uploaded data contain no user-private information; all of it is abstract model information and user-group traits. A typical record might contain the following types of information (given only as an example): most-used applications (a monster-battling game, Taobao browser, Word); user characteristics (age 20-30, male, likes shopping, browses automotive websites and web games such as 4399); machine-model characteristics (test features: 32-bit system; 4 GB physical DDR2 memory, of which the system recognizes 3.2 GB; accelerating hardware connected through a generic USB 3.0 interface, 64 GB in total, with USB-protocol optimization applied; a 512 MB memory virtual disk created; a single hard disk, a 1 TB Seagate hybrid drive; the memory virtual disk measures 2200 MB/s sequential read, 1020 MB/s sequential write, 500 MB/s 4K read, and 300 MB/s 4K write; the external accelerating hardware 480 MB/s sequential read, 480 MB/s sequential write, 100 MB/s 4K read, and 160 MB/s 4K write; the hybrid drive 150 MB/s sequential read, 120 MB/s sequential write, 1 MB/s 4K read, and 0.5 MB/s 4K write, among other parameters; modeling features: a 4K cache zone A, a memory-virtual write cache zone B, a sequential-read cache zone C, and a mixed zone D; a real model would of course be more complex than this, which is used here for explanation). This information is uploaded to the cloud server in encrypted form.
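The upload record of step five can be sketched as a small payload builder plus a stand-in cipher. The field names are hypothetical, and the XOR "cipher" is a deliberately toy placeholder for whatever real encryption (e.g. AES over TLS) a deployment would use; it only demonstrates that the record leaves the device as ciphertext:

```python
import json

def build_upload_payload(top_apps, user_group, hardware_model):
    """Assemble the anonymised record a control device uploads: most-used
    applications, coarse user-group traits, and the hardware model."""
    return {
        "apps": top_apps,
        "user_group": user_group,       # aggregate traits only, no identities
        "hardware": hardware_model,
    }

def toy_encrypt(payload, key):
    """Placeholder XOR cipher -- NOT secure, illustration only.
    XOR with the same key decrypts."""
    raw = json.dumps(payload, sort_keys=True).encode()
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(raw))
```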
Sixth step: preliminary fuzzy analysis in the cloud.
First, like any network, the system starts empty: in the initial period the cloud has no existing data, so the first archives, including a large set of cache-prefetch schemes for different applications, user groups, and device environments, must be entered by human engineers. Imperfection at this stage does not matter, because the schemes are refined through continuous iteration later.
What follows describes the flow once the network has been at least preliminarily established.
After receiving the uploaded application data, user-group features, hardware modeling results, and so on, the cloud performs statistical and fuzzy analysis, draws on its archive of empirical data to produce optimized acceleration schemes for the given hardware model, applications, and user group, and returns them to the cache-service control device for first-pass processing.
For the example above, the processing might be: the server database shows that a particular Warcraft file exhibits frequent-read behavior across a large number of serviced devices, so the returned scheme caches that rarely-written file in zone C; the database shows that the Taobao browser performs frequent writes across many devices, so it is allocated a larger write cache in zone B; Word generates a large volume of 4K reads and writes, so it is assigned zone A; because the user group enjoys shopping, automobile-related websites, and web games such as 4399, the returned scheme prefetches the main pages of the relevant websites and uses CDN techniques to redirect some cached content to nearby nodes; and, based on the uploaded data and model, similar cache-prefetch allocations are given to the rest of the system and its applications.
After the analysis, the server returns this scheme to the control device.
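The zone-assignment rules in this example can be sketched as a simple dispatcher. The thresholds and function name are hypothetical, chosen only to mirror the A/B/C/D mapping above:

```python
def assign_cache_zone(io_size, write_fraction):
    """Map an application's observed I/O profile to a modeled cache zone:
    A = 4K buffer, B = RAM write buffer, C = sequential-read buffer, D = mixed."""
    if io_size == "4K":
        return "A"              # heavy 4K traffic (e.g. Word in the example)
    if write_fraction >= 0.7:
        return "B"              # write-heavy (e.g. the browser's frequent writes)
    if write_fraction <= 0.3:
        return "C"              # read-mostly, rarely written (e.g. the game file)
    return "D"                  # mixed workloads fall back to zone D
```

A real cloud would derive such rules statistically from many devices rather than hard-code them; this sketch only shows the shape of the returned decision.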
Seventh step: in-depth data collection and measurement of the actual effect.
Over time the control device re-runs its self-tests and collects user feedback, and uploads the in-depth data accumulated during the period, which should where possible include each application's read/write operation ratio, I/O request types, active file count and sizes, and usage frequency. The cloud uses this feedback to issue a correction or a second, improved scheme.
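The per-application statistics named here (read/write ratio, request types, operation counts) might be summarized as follows; the event-tuple format is an assumption made for illustration:

```python
from collections import Counter

def summarize_io_trace(events):
    """events: list of (op, size_bytes) tuples, e.g. ("read", 4096).
    Returns the per-application feedback statistics described in the text."""
    reads = sum(1 for op, _ in events if op == "read")
    writes = sum(1 for op, _ in events if op == "write")
    # Bucket request sizes into the categories the scheme distinguishes.
    sizes = Counter(("4K" if s <= 4096 else "large") for _, s in events)
    return {
        "read_write_ratio": reads / max(writes, 1),
        "request_types": dict(sizes),
        "total_ops": len(events),
    }
```

The resulting dictionary is the kind of aggregate that would be encrypted and fed back to the cloud for the second-pass scheme.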
Eighth step: iterate the scheme.
After several such rounds, the scheme approaches the optimum.
Ninth step: the server database is updated, and the final result and optimization history are stored in the cloud.
The cloud server receives these final optimized cache schemes, uploaded in ciphertext together with the corresponding system configuration data. The server processes the statistics on the applications, games, network activity, and associated files cached by many devices, classifies them by application, user type, and device model, and records them in the database, for example: the optimal AutoCAD cache-and-prefetch scheme for construction-industry users on a Dell Latitude 600 computer. (The optimal cache-prefetch scheme for the same application clearly differs across user types and devices.) These records then serve newly joined devices.
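Indexing final schemes by (application, user type, device model), with a fallback when no exact record exists, could look like this minimal in-memory sketch; a real deployment would use a server-side database, and all names here are illustrative:

```python
scheme_db = {}  # (application, user_type, device_model) -> scheme

def record_scheme(application, user_type, device_model, scheme):
    """Store a final optimized scheme under its classification key."""
    scheme_db[(application, user_type, device_model)] = scheme

def lookup_scheme(application, user_type, device_model):
    """Exact match first; otherwise fall back to any scheme known for
    the same application, since a related scheme beats no scheme."""
    key = (application, user_type, device_model)
    if key in scheme_db:
        return scheme_db[key]
    for (app, _, _), scheme in scheme_db.items():
        if app == application:
            return scheme
    return None
```

Using the example from the text, a record for AutoCAD on a Dell Latitude 600 for construction-industry users would also serve, as a fallback, a new AutoCAD user on unfamiliar hardware until iteration refines it.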
Tenth step: becoming a service node (at the user's option).
The control device can also open a service-node mode (the user chooses whether to allow this). If the user allows it, the control device follows instructions from the cloud server and provides services such as CDN caching, short-range network caching, VPN service, and SMTP service to other nearby users. In return, the user receives a share of the profit.
The design of this first sample device further includes: 1. intelligent compression of system memory with automatic background release; 2. virtualization of application programs, so that most or all of the needed program files and the programs' system-environment files are pre-stored in the cache (the virtualization may use redirection, environment-virtualization techniques, and the like, or applications that are themselves packaged in virtualized form).
The workflow of the second sample device:
First step: cache creation and virtualization work.
1. A portion of the serviced machine's memory is set aside and turned into a virtual disk used as the level-1 cache; its contents are saved to a packed file at shutdown and loaded back into the RAM disk at boot. The size starts at an initial minimum and is progressively revised through the later cloud-feedback process. 2. The device detects whether a usable disk cache exists, for example whether a mobile device with a slow disk has external high-speed flash attached over WiGig; if an available cache is detected, it is created as the level-2 cache (or the user is asked to approve its creation), so that content can be cached and prefetched according to read/write behavior.
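Saving the level-1 cache contents to a packed file at shutdown and restoring them at boot, falling back to the initial minimum size when no packet exists, might be sketched as follows; the file paths, pickle format, and cache layout are illustrative assumptions:

```python
import os
import pickle
import tempfile

def save_ramdisk(contents, path):
    """Persist the level-1 RAM-disk cache to a packed file at shutdown."""
    with open(path, "wb") as f:
        pickle.dump(contents, f)

def load_ramdisk(path, initial_size_mb=512):
    """Restore the cache at boot; if no packet exists, start from an
    empty cache at the initial minimum size (later resized by the cloud)."""
    if os.path.exists(path):
        with open(path, "rb") as f:
            return pickle.load(f)
    return {"size_mb": initial_size_mb, "files": {}}
```

The save/load pair preserves warm cache contents across reboots, which is the stated purpose of the shutdown packet.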
Second step: measurement work.
After the preparation is complete, the control device subjects the device hardware and the created cache parts to tests of different read/write types, such as 4K read/write, 512K random read/write, and sequential read/write, to judge the caching-performance characteristics of each part of the device to be accelerated. When external hardware such as an external solid-state disk is present, those external devices also take part in the test.
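A throughput test of the kind described could be written as below. This sketch measures only sequential writes of one block size; the full test would also cover 4K and 512K random reads and writes and would flush OS caches between runs:

```python
import os
import tempfile
import time

def measure_throughput(path, block_size, total_bytes):
    """Write total_bytes in block_size chunks and return MB/s.
    fsync forces the data to the device so the timing is not
    dominated by the OS page cache."""
    data = os.urandom(block_size)
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(total_bytes // block_size):
            f.write(data)
        f.flush()
        os.fsync(f.fileno())
    elapsed = time.perf_counter() - start
    return (total_bytes / (1024 * 1024)) / elapsed
```

Running it once per (device, block size) pair yields the benchmark table that the next step feeds into the model.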
Third step: modeling work.
Based on the measurement data, plus other hardware information such as size and interface read through system functions (for example Windows APIs), the equipment is modeled as a combination of data sets with different performance parameters. Each part receives a score for each read/write type and a composite score, and is classified, for example as a random-read cache device or a 4K-write cache device. This classification can be encrypted and uploaded to the cloud together with the machine's cache-optimization data, and it is also used when applying the scheme the cloud feeds back, because the cloud's optimization schemes are not one-per-application but concrete and classified: which scheme to apply on which kind of cache structure. Treating each class of cache device differently in this way helps optimize the application schemes. For example, parallel-port and serial devices are labeled separately; likewise, the I/O types are labeled and the random-write I/O type that a given cache device handles best is identified, so that by recognizing the characteristics of I/O as it occurs, the best-suited cache device is selected for caching.
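Classifying a cache part by its strongest role from the benchmark numbers might look like this. The weighting that lets random I/O win over raw sequential bandwidth is a hypothetical choice made for the sketch, not a figure taken from the description:

```python
def classify_cache_device(bench):
    """bench: dict of MB/s results keyed by test name, as produced by the
    measurement step. Returns the role the part is best suited for."""
    profile = {
        "4K-write cache": bench.get("4k_write", 0),
        "4K-read cache": bench.get("4k_read", 0),
        # Sequential numbers are scaled down so that a part is only
        # classed as a sequential cache when it is overwhelmingly
        # better at streaming than at small random I/O (assumed weighting).
        "sequential-read cache": bench.get("seq_read", 0) / 10,
        "sequential-write cache": bench.get("seq_write", 0) / 10,
    }
    return max(profile, key=profile.get)
```

Applied to the second sample's figures, the SSD (strong sequential, weak 4K) classes as a sequential-read cache, while the RAM disk (very strong 4K) classes as a 4K cache, matching the zone roles the model assigns.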
Fourth step: scan the application state and roughly determine the user type (user group).
This step admits many implementations. In sample two, the control device proceeds as follows: it scans the system installation directories to determine which applications are present, scans the Prefetch directory and system logs to determine how often each application is used, and scans the system TEMP files to identify frequently visited websites, from which user-group features and user habits are inferred. From the visited websites and cache files, together with the device type, its age, and the mix of applications installed on it, the device makes a rough judgment of the user group's occupation, interests, age range, and so on. These are group-level characteristics only: the device neither needs nor obtains any individual user's personal information, and the group information is delivered to the cloud in encrypted form.
Fifth step: preliminary upload of data to the cloud.
The control device performs a preliminary analysis and submits to the cloud the application information and network-activity user characteristics gathered from the serviced device, together with the hardware modeling result.
The uploaded data contain no user privacy information; they consist entirely of abstract model information and user-group features. A typical upload might contain the following kinds of information (listed here only as an example): the most frequently used applications (Warcraft, Taobao browser, Word); user-group features (age 20-30, male, enjoys online shopping, browses automobile-related websites and web-game portals such as 4399); machine modeling features (measured: 32-bit system; 4 GB of physical DDR2 memory, of which the system recognizes 3.2 GB; a 512 MB RAM virtual disk created; dual hard disks, a 32 GB SSD and a 1 TB HDD; the RAM disk benchmarks at 2200 MB/s sequential read, 1020 MB/s sequential write, 500 MB/s 4K read, and 300 MB/s 4K write; the SSD at 300 MB/s sequential read, 120 MB/s sequential write, 280 MB/s 512K read, 110 MB/s 512K write, 10 MB/s 4K read, and 16 MB/s 4K write; the HDD at 80 MB/s sequential read, 60 MB/s sequential write, 0.1 MB/s 4K read, and 0.05 MB/s 4K write, among other parameters; model: a 4K buffer zone A, a RAM write-buffer zone B, a sequential-read buffer zone C, and a mixed zone D; an actual model can of course be considerably more complex than this illustration). All of this information is uploaded to the cloud server in encrypted form.
Sixth step: preliminary fuzzy analysis in the cloud.
First, like any network, the system starts empty: in the initial period the cloud has no existing data, so the first archives, including a large set of cache-prefetch schemes for different applications, user groups, and device environments, must be entered by human engineers. Imperfection at this stage does not matter, because the schemes are refined through continuous iteration later.
What follows describes the flow once the network has been at least preliminarily established.
After receiving the uploaded application data, user-group features, hardware modeling results, and so on, the cloud performs statistical and fuzzy analysis, draws on its archive of empirical data to produce optimized acceleration schemes for the given hardware model, applications, and user group, and returns them to the cache-service control device for first-pass processing.
For the example above, the processing might be: the server database shows that a particular Warcraft file exhibits frequent-read behavior across a large number of serviced devices, so the returned scheme caches that rarely-written file in zone C; the database shows that the Taobao browser performs frequent writes across many devices, so it is allocated a larger write cache in zone B; Word generates a large volume of 4K reads and writes, so it is assigned zone A; because the user group enjoys shopping, automobile-related websites, and web games such as 4399, the returned scheme prefetches the main pages of the relevant websites and uses CDN techniques to redirect some cached content to nearby nodes; and, based on the uploaded data and model, similar cache-prefetch allocations are given to the rest of the system and its applications.
After the analysis, the server returns this scheme to the control device.
Seventh step: in-depth data collection and measurement of the actual effect.
Over time the control device re-runs its self-tests and collects user feedback, and uploads the in-depth data accumulated during the period, which should where possible include each application's read/write operation ratio, I/O request types, active file count and sizes, and usage frequency. The cloud uses this feedback to issue a correction or a second, improved scheme.
Eighth step: iterate the scheme.
After several such rounds, the scheme approaches the optimum.
Ninth step: the server database is updated, and the final result and optimization history are stored in the cloud.
The cloud server receives these final optimized cache schemes, uploaded in ciphertext together with the corresponding system configuration data. The server processes the statistics on the applications, games, network activity, and associated files cached by many devices, classifies them by application, user type, and device model, and records them in the database, for example: the optimal AutoCAD cache-and-prefetch scheme for construction-industry users on a Dell Latitude 600 computer. (Because the optimal cache-prefetch scheme for the same application clearly differs across user types and devices.) These records then serve newly joined devices.
The design of this second sample device also includes: 1. intelligent compression of system memory with automatic background release; 2. virtualization of application programs, so that most or all of the needed program files and the programs' system-environment files are pre-stored in the cache.
Apart from differences in how the cache is loaded, the remainder is as shown in Figure 2.
The above are specific embodiments of the present invention and the technical means employed. Many changes and modifications can be derived from the disclosure or teaching herein; any equivalent change conceived under this invention, whose effect does not depart from the spirit of the description and drawings, is to be regarded as within the technical scope of the present invention, as hereby stated.
Brief description of the drawings
Fig. 1. Basic principle diagram of the equipment.
Fig. 2. Schematic diagram of the sample device.

Claims (10)

1. A computer and smart-device acceleration method based on big data and the cloud, oriented to the user and the application, in which a control device is installed on each of multiple or many devices to be accelerated; these control devices identify or performance-test the main hardware parts of the devices, set aside part of the serviced machine's memory as a virtual disk used for caching, obtain actual application data on each device (for example by scanning program directories and cache directories) and, optionally, user-type characteristics such as network activity (the user-type characteristics may be gathered or not, at choice), and submit these together with the identified hardware characteristics to a remote cloud server; after receiving the data, the cloud combines them with its existing database for computational analysis and, for example by retrieving the optimal caching and prefetching configurations of the corresponding applications on similar hardware under similar users, provides each device with an optimized caching or prefetching acceleration scheme tailored to its specific hardware, its specific applications, and optionally its specific user type; each control device receives the scheme by active feedback or passive response and carries out the corresponding cache acceleration, cache optimization, or prefetch processing according to the feedback information.
2. The method of claim 1, characterized in that the control device does not simply apply the feedback scheme on receipt but enters an iterative process: while preliminarily allocating the cache-prefetch service as the server directs, it begins statistically tracking status information such as each application's read/write operation ratio, I/O request types, commonly read and written file sizes, operation frequency, empirical file-association relations, and response time, and after a period feeds the measured effect or user-satisfaction ratings back to the cloud; the cloud analyzes the data and feedback once again and issues a correction or a second improved scheme; this iterates until near-optimal, whereupon the final scheme is stored in the cloud, which indexes it by application, hardware type, cache-device type, user characteristics, and so on, adds the final optimized scheme to the database, and optionally records part or all of the optimization history; that is, the method flow is: (control device installed) -- identification and modeling of each hardware part -- virtualization and cache creation -- collection of application data and user-type data -- transmission of application data, user-type data, and device model data to the cloud -- first cloud optimization attempt -- feedback collected and sent to the cloud -- second cloud correction and feedback -- repeated until near-perfect, with feedback recorded -- the cloud forms big data that is continuously optimized and accumulated.
3. The method of claim 2, characterized in that the memory-virtualized cache is initially allocated at a minimal size and is later resized by server instruction during optimization.
4. The method of claim 1, characterized in that the method also employs cache splitting: the control device itself is combined with external solid-state hardware connected over a USB interface or WiGig, which is loaded together with the device at installation and provides extra cache-prefetch hardware while the control device works, for example caching 4K reads and writes to the carved-out virtual memory disk while caching 512K and random reads and writes to the external solid-state hardware, realizing cache splitting; different applications receive different splitting schemes, determined by the cloud server through analysis of the uploaded data; that is, the architecture takes a three-tier cooperative structure: the cache prefetching of a large number of terminals is handled by the control devices as first-tier accelerators, responsible for terminal service (virtualization transformation, caching, prefetching) and for contributing big data; the cloud server acts as the second-tier accelerator, responsible for computing optimization schemes and iterating the database index; and the external solid-state hardware on the control device's USB or WiGig interface acts as the third-tier accelerator, providing cache splitting for the control device.
5. The method of claim 1, characterized in that the control devices themselves serve as new service nodes, so that the service capacity of the acceleration network grows as more devices are accelerated; for example, the control devices, on instruction from the cloud server, provide network or acceleration services such as CDN caching, short-range network caching, VPN service, and SMTP service to other nearby users.
6. The method of claim 5, characterized in that the device itself is also combined with external solid-state hardware connected over USB or WiGig, which provides extra cache-prefetch hardware for cache splitting while the device works and provides the storage, networking-component, and other support needed when service-node mode is open; that is, the architecture takes a cooperatively working, self-extending three-tier structure: the cache prefetching of a large number of terminals is handled by the control devices as first-tier accelerators, responsible for terminal service (virtualization transformation, caching, prefetching) and for contributing big data; the cloud server acts as the second-tier accelerator, responsible for computing optimization schemes and iterating the database index; the external solid-state hardware on the control device's USB or WiGig interface acts as the third-tier accelerator, providing cache splitting and hardware support; and in node mode each accelerated device is extended into a new service node.
7. The method of claim 1, characterized in that the data uploaded and downloaded between the control device and the cloud are transmitted in encrypted form.
8. The method of claim 1, characterized in that the data uploaded by the control device also include characteristics of the crowd to which the device's user belongs, such as age range, occupational field, and areas of interest; correspondingly, the optimization scheme fed back by the cloud also includes schemes that optimize for, or anticipate, the usage features of different user types across different application objects.
9. The method of claim 1, characterized in that the data uploaded by the control device also include the concrete cache-device hardware types or characteristic information; correspondingly, the optimization scheme the cloud provides is not one-per-application but concrete and classified, specifying which scheme a given application should use on which kind of cache structure.
10. The method of claim 1, characterized in that the control device obtains the application types by scanning the system registry; obtains each application's file count, sizes, and read/write characteristics by scanning the system installation directories; obtains application usage frequency by scanning the system's own Prefetch directory, system cache, and system logs; obtains frequently visited websites by scanning the system TEMP files, favorites, and browser cache files; infers the user's user-group type and habits from the websites, addresses, and cache files; and judges the user's occupation, interests, age, and so on from the device type, its age, and the mix of applications on the device (these are all group-level characteristic information; the device neither needs nor ever obtains any individual user's personal information, and the user-group characteristic information is transmitted in encrypted form).
CN201510022782.3A 2015-01-19 2015-01-19 User and application oriented computer and intelligent equipment acceleration method and device Active CN105867832B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201510022782.3A CN105867832B (en) 2015-01-19 2015-01-19 User and application oriented computer and intelligent equipment acceleration method and device
PCT/CN2015/098536 WO2016115957A1 (en) 2015-01-19 2015-12-24 Method and device for accelerating computers and intelligent devices for users and applications

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510022782.3A CN105867832B (en) 2015-01-19 2015-01-19 User and application oriented computer and intelligent equipment acceleration method and device

Publications (2)

Publication Number Publication Date
CN105867832A true CN105867832A (en) 2016-08-17
CN105867832B CN105867832B (en) 2020-07-24

Family

ID=56416402

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510022782.3A Active CN105867832B (en) 2015-01-19 2015-01-19 User and application oriented computer and intelligent equipment acceleration method and device

Country Status (2)

Country Link
CN (1) CN105867832B (en)
WO (1) WO2016115957A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106547482A (en) * 2016-10-17 2017-03-29 上海传英信息技术有限公司 A kind of method and device that internal memory is saved using buffering
CN107832017A (en) * 2017-11-14 2018-03-23 中国石油集团川庆钻探工程有限公司地球物理勘探公司 A kind of method and device for improving geological data storage IO performances
CN112615794A (en) * 2020-12-08 2021-04-06 四川迅游网络科技股份有限公司 Intelligent acceleration system and method for service flow characteristics

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104134033A (en) * 2014-07-29 2014-11-05 广州金山网络科技有限公司 Method and device for identifying user equipment
CN104135520A (en) * 2014-07-29 2014-11-05 广州金山网络科技有限公司 Method and device for identifying Android terminal
CN104253701A (en) * 2013-06-28 2014-12-31 北京艾普优计算机系统有限公司 Running method of computer network, gateway device and server device
CN104320448A (en) * 2014-10-17 2015-01-28 张维加 Method and device for accelerating caching and prefetching of computing device based on big data

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102045385B (en) * 2010-10-21 2013-09-04 李斌 System and equipment for realizing personal cloud computing
CN102368737A (en) * 2011-11-25 2012-03-07 裘嘉 Cloud storage system and data access method thereof
JP6116941B2 (en) * 2013-02-28 2017-04-19 株式会社東芝 Information processing device
CN103500076A (en) * 2013-10-13 2014-01-08 张维加 Novel USB protocol computer accelerating device based on multi-channel SLC NAND and DRAM cache memory

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104253701A (en) * 2013-06-28 2014-12-31 北京艾普优计算机系统有限公司 Running method of computer network, gateway device and server device
CN104134033A (en) * 2014-07-29 2014-11-05 广州金山网络科技有限公司 Method and device for identifying user equipment
CN104135520A (en) * 2014-07-29 2014-11-05 广州金山网络科技有限公司 Method and device for identifying Android terminal
CN104320448A (en) * 2014-10-17 2015-01-28 张维加 Method and device for accelerating caching and prefetching of computing device based on big data

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106547482A (en) * 2016-10-17 2017-03-29 上海传英信息技术有限公司 A kind of method and device that internal memory is saved using buffering
CN107832017A (en) * 2017-11-14 2018-03-23 中国石油集团川庆钻探工程有限公司地球物理勘探公司 A kind of method and device for improving geological data storage IO performances
CN112615794A (en) * 2020-12-08 2021-04-06 四川迅游网络科技股份有限公司 Intelligent acceleration system and method for service flow characteristics
CN112615794B (en) * 2020-12-08 2022-07-29 四川迅游网络科技股份有限公司 Intelligent acceleration system and method for service flow characteristics

Also Published As

Publication number Publication date
WO2016115957A1 (en) 2016-07-28
CN105867832B (en) 2020-07-24

Similar Documents

Publication Publication Date Title
CN101576918B (en) Data buffering system with load balancing function
US10785322B2 (en) Server side data cache system
CN104320448B (en) A kind of caching of the calculating equipment based on big data and prefetch acceleration method and device
CN102663012B (en) A kind of webpage preloads method and system
CN104412249B (en) File disposal in file system based on cloud
CN104424199B (en) searching method and device
CN106021445A (en) Cached data loading method and apparatus
CN104123238A (en) Data storage method and device
CN103797477A (en) Predicting user navigation events
CN105550338B (en) A kind of mobile Web cache optimization method based on HTML5 application cache
CN104834675A (en) Query performance optimization method based on user behavior analysis
CN101482882A (en) Method and system for cross-domain treatment of COOKIE
CN103795781B (en) A kind of distributed caching method based on file prediction
US7523094B1 (en) Asynchronous task for energy cost aware database query optimization
CN105683928B (en) For the method for data cache policies, server and memory devices
US20120005017A1 (en) Method and system for providing advertisements
CN109240946A (en) The multi-level buffer method and terminal device of data
US20150281390A1 (en) Intelligent File Pre-Fetch Based on Access Patterns
CN107506154A (en) A kind of read method of metadata, device and computer-readable recording medium
CN105867832A (en) User-and-application-oriented method and device for accelerating computer and intelligent device
CN104111898A (en) Hybrid storage system based on multidimensional data similarity and data management method
CN109086141A (en) EMS memory management process and device and computer readable storage medium
CN107329910A (en) A kind of web front end data based on localStorage are locally stored and access method
CN107480074A (en) A kind of caching method, device and electronic equipment
CN107480072A (en) Lucidification disposal service end cache optimization method and system based on association mode

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 311108 2 / F, Shengzhou building, No.37, Beizhi street, Shengzhou, Shaoxing, Zhejiang

Applicant after: Zhang Weijia

Address before: 202, room 1, unit 18, North Garden, olive tree garden, Chong Yin Street, Yuhang District, Hangzhou, Zhejiang, 311108

Applicant before: Zhang Weijia

GR01 Patent grant