CN110795157B - Method for improving starting-up speed of diskless workstation by using limited cache - Google Patents

Method for improving starting-up speed of diskless workstation by using limited cache

Info

Publication number
CN110795157B
CN110795157B CN201911024635.4A CN201911024635A
Authority
CN
China
Prior art keywords
data
read
diskless workstation
diskless
starting
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911024635.4A
Other languages
Chinese (zh)
Other versions
CN110795157A (en)
Inventor
李广斌
郝岩
杨程雲
林芳菲
郭月丰
彭寿林
吴建华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
HANGZHOU SHUNWANG TECHNOLOGY CO LTD
Original Assignee
HANGZHOU SHUNWANG TECHNOLOGY CO LTD
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by HANGZHOU SHUNWANG TECHNOLOGY CO LTD
Priority to CN201911024635.4A
Publication of CN110795157A
Application granted
Publication of CN110795157B
Legal status: Active
Anticipated expiration

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 — Arrangements for program control, e.g. control units
    • G06F 9/06 — Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 — Arrangements for executing specific programs
    • G06F 9/4401 — Bootstrapping
    • G06F 9/46 — Multiprogramming arrangements
    • G06F 9/50 — Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 — Allocation of resources to service a request
    • G06F 9/5011 — Allocation of resources to service a request, the resources being hardware resources other than CPUs, servers and terminals
    • G06F 9/5022 — Mechanisms to release resources
    • Y — GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 — TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D — CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 — Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention discloses a method for improving the starting-up speed of a diskless workstation by using a limited cache. The invention also exploits the locality principle: the boot data of the diskless workstations in the same application scenario is highly similar, so the boot data that other diskless workstations will need at startup can be predicted from the boot data of a known workstation.

Description

Method for improving starting-up speed of diskless workstation by using limited cache
Technical Field
The invention belongs to the technical field of diskless computers and data prefetching, and particularly relates to a method for improving the starting-up speed of a diskless workstation by using a limited cache.
Background
In recent years, Internet traffic has grown without a matching increase in Internet capacity, and the net effect of this growth has been a significant increase in user-perceived delay, i.e., the time between a client sending a data request and the arrival of the response. Potential sources of delay are heavy Web-server load, network congestion, low bandwidth, bandwidth underutilization, and propagation delay.
Meanwhile, as processors have developed, the gap between the response speed of data over network transmission and the speed of the processor has grown ever larger; data required at run time is requested from the cloud server, and the time spent waiting for network transmission seriously affects the overall running speed of the application and the user experience. To shorten network transmission time, the usual solutions are to increase bandwidth and to place servers nearby to shorten the distance; but both rest on infrastructure and are difficult to improve further once the achievable maximum has been reached.
Another solution for reducing latency is to cache Web files at various points in the network (client/proxy/server), exploiting the temporal locality of the cache. Effective client-side caching reduces client-perceived latency, server load, and the number of packets in transit, thereby increasing the available bandwidth; this involves data prefetching techniques.
Data prefetching is the process of inferring a client's future requests for Web objects and, using the client's idle time, placing those objects into the cache in the background before the client explicitly requests them. The main advantage of prefetching is that it prevents bandwidth under-utilization and hides part of the delay. Conversely, without a well-designed prefetching scheme the client may never use many of the transferred documents, wasting bandwidth. An effective prefetching scheme combined with a transmission-rate control mechanism can shape network traffic and significantly reduce burstiness, thereby improving network performance.
An existing technique for data pre-reading in a diskless environment is the distributed cache model based on file prediction (DLSDCM). The model predicts files with a client-side file-prediction model and schedules all user requests in the distributed network from the server's perspective, improving client throughput and data access without affecting the data access of other clients. DLSDCM is implemented in two parts, client and server. Each client maintains DLS (double last successor) file-prediction data locally; on every read request it pre-reads the two files predicted to follow the requested target file, with the prediction based mainly on the files and hit counts of previous requests. The server maintains two queues, a read-request queue and a pre-read-request queue, which are responsible for scheduling the clients' read requests and pre-read requests.
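The double-last-successor prediction described above can be pictured with a short sketch. This is an illustrative reconstruction rather than the DLSDCM authors' code; the class name and the exact update rule are assumptions.

```python
from collections import defaultdict

class DLSPredictor:
    """Double-last-successor predictor: for each file, remember the last
    two files that followed it, and pre-read those on the next request."""

    def __init__(self):
        self.successors = defaultdict(list)  # file -> its last two successors
        self.prev = None

    def record(self, file_id):
        # Update the successor history of the previously requested file.
        if self.prev is not None:
            succ = self.successors[self.prev]
            if file_id in succ:
                succ.remove(file_id)
            succ.insert(0, file_id)
            del succ[2:]          # keep only the two most recent successors
        self.prev = file_id

    def predict(self, file_id):
        # Candidate files to pre-read after a request for file_id.
        return list(self.successors[file_id])
```

On each read request the client would call record() and then pre-read whatever predict() returns for the requested file.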
The DLSDCM client pre-reads the file data predicted by the model and stores oversized data on the local disk: the client allocates a memory region of configurable size as a read-request cache and a pre-read-request cache, and creates a cache directory on disk; when a pre-read file is larger than the memory cache, it is written into the disk cache directory. The read-request and pre-read-request buffers are retained for a period after a read request completes and are reclaimed if no new read request arrives for a long time. Data in the disk cache directory is kept for a long time; when the directory approaches a specified size, the earliest stored data is reclaimed on a first-stored, first-reclaimed basis.
However, this prior art does not support continuously serving a larger file through a smaller cache: it stores a request's file into the cache in one pass, retains it for a period after the read request completes, then empties the cache space and reads the next file from the cache directory into the cache in full. It therefore needs a large memory footprint for the cache, and the data in each cache must be read out, retained for a while, and then flushed before a new file can be read in from the server. Because of this cache-space limitation, the prior art supplies predicted files to clients intermittently rather than continuously.
Disclosure of Invention
In view of the above, the present invention provides a method for improving the boot speed of a diskless workstation by using a limited cache, so as to strategically overlap local operation with network transmission and reduce boot latency.
A method for improving the starting-up speed of a diskless workstation by using a limited cache: the data required for booting the diskless workstation is obtained by prediction; at startup the workstation requests the required data from the server, and the server sends the corresponding data to the workstation's cache in sequence. While waiting for the server to transmit data, the workstation runs using the data already in the cache; whenever a portion of data has been used it is cleared from the cache, newly received data is stored in the freed space, and booting completes once all the data has been used.
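A minimal sketch of this consume-and-refill cycle, assuming a bounded queue stands in for the workstation's limited cache and that the server exposes a fetch(step) call; both names are illustrative, not the patent's interfaces.

```python
import queue
import threading

CACHE_SLOTS = 64   # cache capacity in data chunks; the real size is configurable

def use(chunk):
    """Placeholder for running the workstation with this portion of boot data."""

def boot(server, total_chunks):
    cache = queue.Queue(maxsize=CACHE_SLOTS)   # the limited cache

    def receiver():
        # The server pushes chunks in boot order; put() blocks while the
        # cache is full, so transmission never overruns the limited cache.
        for step in range(total_chunks):
            cache.put(server.fetch(step))

    threading.Thread(target=receiver, daemon=True).start()
    for _ in range(total_chunks):
        chunk = cache.get()   # normally already transmitted: no network wait
        use(chunk)            # run using this portion of data
        # get() freed a slot, so the receiver can store the next chunk
```

The bounded queue makes transmission and operation proceed in parallel while the cache never holds more than its fixed capacity.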
Further, in a specific implementation, the server assigns sequence numbers to the data required for booting according to the order in which the diskless workstation requests it. When the workstation starts, the server sends the first batch of data to the workstation's cache, and the workstation consumes that batch as it runs; while the workstation is consuming data, the server sends the next batch to the cache, each time sending several steps of boot data in advance. Every time the workstation uses a portion of data, the corresponding cache space is released, and the freed space stores the newly received steps of boot data.
Further, the workstation's prefetched data is always kept several steps ahead of the data currently in use, and the step count need not be fixed: the amount of data prefetched can be determined by the remaining storage space of the cache.
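One way the unfixed step count could be computed, sizing each prefetch by the cache space still free; MAX_BATCH and the in-flight accounting are assumptions, not details fixed by the text.

```python
MAX_BATCH = 200   # upper bound on steps fetched in one request (assumed)

def next_prefetch_count(free_slots: int, in_flight: int) -> int:
    """Variable step count: prefetch only as many steps as the remaining
    cache space can hold, minus requests already on the wire."""
    return max(0, min(free_slots - in_flight, MAX_BATCH))
```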
Further, the diskless workstation's cache may be a fixed-size region of local memory, or any other independent storage space.
Furthermore, the method relies on the locality principle: the boot data of diskless workstations in the same application scenario is highly similar, so the boot data required by other diskless workstations at startup is predicted from the boot data of a known workstation.
Further, while running on cached data the diskless workstation adopts a staged pre-reading strategy and introduces a pre-read virtual slider. Specifically: the whole boot data is divided into several stages in order, and the pre-reading of each stage is kept separate. If a read/write request does not fall near the current pre-read range, a pre-read range matching the request is located and a new pre-read virtual slider is generated; when several consecutive read/write requests all fall within the new slider, the pre-read range jumps to it; when only a few isolated requests fall within the new slider, they are deemed noise and the slider is discarded. This strategy adds flexibility and the ability to jump, improving the hit rate when the system's boot data changes substantially and preventing the cache from being bypassed; in particular, the pre-read virtual slider keeps jumps from becoming too aggressive, which would skew the pre-reading and lower the hit rate.
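A sketch of the staged pre-read policy with a pre-read virtual slider as just described. The promotion threshold, the window width, and the definition of "near" are not fixed by the text and are assumed here.

```python
PROMOTE_AFTER = 3   # consecutive hits needed to accept a candidate slider (assumed)
WINDOW = 200        # width of a pre-read range, in steps (assumed)

class PreReadSlider:
    def __init__(self, start=0):
        self.lo, self.hi = start, start + WINDOW   # active pre-read range
        self.candidate = None                      # tentative new slider
        self.candidate_hits = 0

    def on_request(self, pos):
        if self.lo <= pos < self.hi:
            # Request falls in the current range: any pending candidate
            # slider was just noise, so discard it.
            self.candidate, self.candidate_hits = None, 0
            return
        if self.candidate and self.candidate[0] <= pos < self.candidate[1]:
            self.candidate_hits += 1
            if self.candidate_hits >= PROMOTE_AFTER:
                # Several consecutive requests matched: jump the pre-read
                # range to the candidate slider.
                self.lo, self.hi = self.candidate
                self.candidate, self.candidate_hits = None, 0
        else:
            # Out-of-range request: open a new candidate slider around it
            # instead of jumping immediately, to tolerate noise.
            self.candidate = (pos, pos + WINDOW)
            self.candidate_hits = 1
```

Requiring several consecutive in-slider hits before jumping is what keeps a stray request from derailing the pre-read position.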
Further, as a pre-read data management strategy for multiple diskless workstations: first, a unique identifier is derived from the hardware and software, so that pre-read data for different hardware/software combinations is stored separately. Meanwhile, the server periodically updates the pre-read data according to a policy, aggregating the boot data of several terminals (for example, the boot data that occurs most often by boot count, the boot data that occurs most often across terminals, or the boot data of the terminals most similar to the others) into one complete pre-read data set.
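A sketch of one plausible aggregation, assuming each terminal reports its boot trace as an ordered list of disk blocks; the frequency threshold and function names are assumptions.

```python
from collections import Counter

MIN_FRACTION = 0.5   # keep blocks seen in at least half the traces (assumed)

def merge_boot_traces(traces_by_terminal, hw_sw_id):
    """Merge boot traces from terminals sharing one hardware/software
    identity into a single pre-read data set, keeping the most frequent
    blocks in their earliest observed order."""
    counts = Counter()
    first_seen = {}
    for trace in traces_by_terminal.values():
        for order, block in enumerate(trace):
            counts[block] += 1
            first_seen.setdefault(block, order)
    threshold = MIN_FRACTION * len(traces_by_terminal)
    kept = [b for b, n in counts.items() if n >= threshold]
    kept.sort(key=first_seen.get)
    return hw_sw_id, kept
```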
The invention downloads predictable data from the server and stores it in the local cache ahead of the time it is requested, saving the wait after each request and effectively shortening the service running time of the client/server model. Meanwhile, by the locality principle, the boot data of diskless workstations in the same application scenario is highly similar, so the boot data needed by other workstations at startup can be predicted from the boot data of a known workstation.
Beyond the diskless boot scenario, the technique has potential for wider services and applications (such as cloud services, Web pages, and mobile applications): downloading predictable data to a local buffer in advance shortens response time, the buffer size is configurable so that no excessive cache is occupied, and the space can be released once the service or application finishes.
Drawings
Fig. 1 is a schematic diagram of the boot flow of a diskless workstation without prefetching.
Fig. 2 is a schematic diagram of a startup flow of a diskless workstation based on a prefetch technique according to the present invention.
Fig. 3 is a schematic diagram of a diskless workstation buffer space.
Fig. 4 is a step diagram of all boot data.
Detailed Description
In order to more particularly describe the present invention, the following detailed description of the technical scheme of the present invention is provided with reference to the accompanying drawings and the specific embodiments.
In the company's Shunwang cloud product, many services must run during the boot process, and running them takes considerable time, so booting is slow and the excessive boot time hurts the user experience. Moreover, when the cache space is limited and the predicted boot data is far larger than the cache, the design goal of this technique is to ensure that the predicted data the client will request is always stored in the cache in advance, ready before the client's operation needs it, without having to be transmitted from the server at that moment. The technique cannot shorten program running time or network transmission time themselves; instead it strategically overlaps local operation with network transmission, shortening the waiting time.
When a diskless workstation starts to run, it must request the required boot data from the system server; the server sends the data after receiving the request, and the workstation must wait until the data arrives before continuing. To reduce or even eliminate the wait for boot-data transmission and shorten boot time, the invention stores the data requested from the system server in the workstation's limited cache in advance: during operation, data can be taken directly from the cache without waiting for network transmission, and operation proceeds in parallel with data transmission. The cache is a dedicated portion of the workstation's memory whose size can be configured.
The technology comprises two main points:
(1) After predicting the data required for booting the diskless workstation, data packets are sent to the workstation's cache in advance during startup.
(2) The cache space is smaller than the boot data, so the complete boot data cannot be sent at once. The server sends a portion of data at a time; after the workstation has used that portion during boot, the server writes the next several steps of data into the freed cache space, so that the written data always stays several steps ahead of the data in use. A server-side sketch follows.
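A sketch of point (2) from the server's side, under the assumption of a transport object with send and progress-report calls; the lead of 200 steps mirrors the example used later in the text.

```python
LEAD_STEPS = 200   # fixed transmission lead over execution (configurable)

def serve_boot(conn, boot_data):
    """Send the boot data in step order while never running more than
    LEAD_STEPS ahead of the step the workstation last reported consuming.
    `conn` is an assumed transport object, not an API from the patent."""
    sent, consumed = 0, 0
    while sent < len(boot_data):
        while sent < len(boot_data) and sent < consumed + LEAD_STEPS:
            conn.send(sent, boot_data[sent])   # write this step into the cache
            sent += 1
        consumed = conn.wait_for_progress()    # blocks until cache space frees
```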
Without such a technique, data is sent from the cloud server only after the client requests it; the client must wait out the transmission time after every request, which greatly lengthens booting. Increasing bandwidth and placing servers nearby reduce the transmission time, but the present technique goes further: by predicting and transmitting data packets in advance it makes effective use of idle bandwidth and shortens the time at a lower layer.
Storing the boot data on the diskless workstation before startup is not viable: first, in the Shunwang cloud environment the workstation's memory is cleared on every boot, including any stored boot data; second, the boot data is dynamic and depends on the unified management of the cloud service. With the present scheme, the diskless workstation obtains all prior prediction information from the server and downloads the predicted data in advance according to its current progress.
As shown in Fig. 1, in the ordinary case without such a technique, after a boot event the client must request data from the server many times and wait for the server's response each time before continuing; this repeats until booting completes. The various services provided by the Shunwang cloud all run during the boot process, and the client must download a large amount of boot data from the server.
As shown in Fig. 2, with the present scheme, after a boot event the server sends the predicted first portion of boot data to the workstation's cache; the workstation's many data requests are all served from the local cache, which is fast, while the freed cache receives the next portion of data.
If the application scenario of the Shunwang cloud is an Internet cafe, the boot data within one cafe corresponds to the cafe's system and software configuration on every terminal; treating the cafe's terminals as diskless workstations, the boot data of the workstations in the same cafe is highly similar. After the network administrator's configuration, once the first diskless workstation boots and requests data, the server has obtained the boot data and can readily predict the identical boot data of the remaining workstations; this is the locality-principle prediction described above.
However, because the diskless workstation's cache is limited, the complete boot data cannot be stored at once; the technical scheme of the invention comprises the following steps:
(1) The boot data is marked with sequence numbers (steps) in the order it is requested.
(2) When the diskless workstation starts, the server sends the first batch of data to the workstation's cache; while the workstation processes that batch, the server sends the data the workstation will request next into the cache, each time sending several steps of boot data in advance.
(3) Each time the diskless workstation uses a portion of data, the corresponding cache space is freed, and the freed space stores the newly received steps of data.
As a result, the data transmitted during boot reaches the workstation's cache before it is requested; requested data is taken from the cache, the time spent waiting for network transmission is saved, and operation and data transmission proceed simultaneously.
Because the cache space is limited, data must be written into and released from the cache in order. To realize step (2), data is sent, received, read, and written at a granularity of several steps (e.g., 200 steps), and the step count is configurable. The server transmits a portion of data at a time, and the workstation releases the corresponding space as it consumes cached data; as the client uses data and frees cache, the server keeps writing future request data to the client, so that the written step number always leads the running step number by a fixed amount (configurable in advance). This allows for moments when the client runs quickly, since the cache always holds predicted data in advance. With the step count set to 200, the implementation proceeds as follows (a code sketch follows the steps):
2.1 On its first boot, the diskless client records the disk index read by the operating system (the sector positions and the lengths of the content read) and reports this data to the server.
2.2 On its second boot, the diskless client obtains the boot index data from the server and issues 200 read requests to the service according to the index data.
2.3 When the operating system issues its first read request, the data is in fact already in memory and no longer needs to be fetched from the server.
2.4 For each subsequent operating-system read request received, 2 read requests are sent to the server in the order of the disk index data, keeping the driver's read-ahead content in front of the operating system's read requests.
2.5 The content the operating system needs to read is obtained directly from memory rather than from the server, greatly reducing the disk-read response time and accelerating startup.
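The five steps above, condensed into a sketch of the client driver logic. report_index, get_index, request_block, and the dict-like memory cache are assumed interfaces, while the constants (200 primed requests, 2 requests per OS read) follow the text.

```python
PRIME = 200      # read requests issued up front (step 2.2)
PER_READ = 2     # prefetch requests per operating-system read (step 2.4)

def first_boot(os_reads, server):
    # Step 2.1: record the disk index (sector, length) of every OS read
    # and report it to the server.
    server.report_index([(r.sector, r.length) for r in os_reads])

def make_read_handler(server, memory_cache):
    index = server.get_index()          # step 2.2: boot index from the server
    for entry in index[:PRIME]:
        server.request_block(entry)     # prime the cache with 200 reads
    cursor = PRIME

    def on_os_read(entry):
        nonlocal cursor
        # Steps 2.3 and 2.5: the block is already in the memory cache,
        # so the OS read is answered locally with no network round trip.
        data = memory_cache.pop(entry)
        # Step 2.4: issue two more requests in index order, keeping the
        # driver's read-ahead in front of the operating system.
        for _ in range(PER_READ):
            if cursor < len(index):
                server.request_block(index[cursor])
                cursor += 1
        return data

    return on_os_read
```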
Fig. 3 shows the diskless workstation's cache; the sequence numbers represent the order in which the workstation fetches data from the cache at run time, and both the cache releases and the server's writes follow step order. Because the rhythms of operation and transmission differ and their timing is unstable, the several steps sent in advance leave slack between the two, so that data transmission does not fall behind even when operation is very fast.
Fig. 4, showing all the boot data, illustrates how the data the server sends in advance is "pushed forward" along with the workstation's running position: the lead step count is set in advance and does not change, so each time the workstation consumes a portion of data it requests the next portion, and the server sends the data at [requested step + lead steps]; the data sent in advance thus always leads the data in use by a fixed number of steps.
In products we have deployed to enterprises, the shortened boot time is the principal observed effect. Testing produced the following data: on a Windows 7 system, boot took 100.5 s without the technique and 41.1 s with it, a 59% improvement; on a Windows 10 system, boot took 135 s without the technique and 82 s with it, an improvement of about 39.3%. The technique thus effectively shortens the time of every download step at once, sparing the work of shortening running time by optimizing other programs and improving code efficiency, which is costly relative to its marginal effect.
The foregoing description of the embodiments is provided so that a person of ordinary skill in the art can make and use the present invention. Various modifications to these embodiments will be readily apparent to such persons, and the general principles described here may be applied to other embodiments without inventive effort. The invention is therefore not limited to the embodiments above; improvements and modifications made by those skilled in the art on the basis of this disclosure fall within the scope of the invention.

Claims (6)

1. A method for improving the starting-up speed of a diskless workstation by using a limited cache, characterized in that: the data required for booting the diskless workstation is obtained by prediction; at startup the diskless workstation requests the required data from the server, and the server sends the corresponding data to the diskless workstation's cache in sequence; while waiting for the server to transmit data, the diskless workstation runs using the data already in the cache, immediately clears each portion of data from the cache once it has been used, and stores newly received data in the freed cache space, until all the data has been used and booting completes;
the diskless workstation adopts a staged pre-reading strategy and introduces a pre-read virtual slider while running on cached data, specifically: the whole boot data is divided into several stages in order, and the pre-reading of each stage is kept separate; if a read/write request does not fall near the current pre-read range, a pre-read range matching the request is located and a new pre-read virtual slider is generated; when several consecutive read/write requests all fall within the new pre-read virtual slider, the pre-read range jumps to the slider; when only a few isolated read/write requests fall within the new pre-read virtual slider, they are deemed noise and the slider is discarded.
2. The method for improving the boot speed of a diskless workstation using a limited cache as set forth in claim 1, wherein: in a specific implementation, the server assigns sequence numbers to the data required for booting according to the order in which the diskless workstation requests it; when the workstation starts, the server sends the first batch of data to the workstation's cache, and the workstation consumes that batch as it runs; while the workstation is consuming data, the server sends the next batch to the cache, each time sending several steps of boot data in advance; every time the workstation uses a portion of data, the corresponding cache space is released, and the freed space stores the newly received steps of boot data.
3. The method for improving the boot speed of a diskless workstation using a limited cache as set forth in claim 1, wherein: the workstation's prefetched data is always kept several steps ahead of the data currently in use, and an unfixed step count is adopted, i.e., the amount of data prefetched is determined by the remaining storage space of the cache.
4. The method for improving the boot speed of a diskless workstation using a limited cache as set forth in claim 1, wherein: the diskless workstation's cache may be a fixed-size region of local memory, or any other independent storage space.
5. The method for improving the boot speed of a diskless workstation using a limited cache as set forth in claim 1, wherein: based on the locality principle, i.e., the boot data of diskless workstations in the same application scenario is highly similar, the boot data required by other diskless workstations at startup is predicted from the boot data of a known workstation.
6. The method for improving the boot speed of a diskless workstation using a limited cache as set forth in claim 1, wherein: for pre-read data management across multiple diskless workstations, a unique identifier is first derived from the hardware and software, so that pre-read data for different hardware/software is stored separately; meanwhile, the server periodically updates the pre-read data according to a policy, aggregating the boot data of several terminals into one complete pre-read data set.
CN201911024635.4A 2019-10-25 2019-10-25 Method for improving starting-up speed of diskless workstation by using limited cache Active CN110795157B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911024635.4A CN110795157B (en) 2019-10-25 2019-10-25 Method for improving starting-up speed of diskless workstation by using limited cache

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911024635.4A CN110795157B (en) 2019-10-25 2019-10-25 Method for improving starting-up speed of diskless workstation by using limited cache

Publications (2)

Publication Number Publication Date
CN110795157A CN110795157A (en) 2020-02-14
CN110795157B true CN110795157B (en) 2023-05-12

Family

ID=69441394

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911024635.4A Active CN110795157B (en) 2019-10-25 2019-10-25 Method for improving starting-up speed of diskless workstation by using limited cache

Country Status (1)

Country Link
CN (1) CN110795157B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114553857A (en) * 2022-01-25 2022-05-27 西安歌尔泰克电子科技有限公司 Data transmission method and device, wrist-worn equipment and medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101814038A (en) * 2010-03-23 2010-08-25 杭州顺网科技股份有限公司 Method for increasing booting speed of computer
CN102323888A (en) * 2011-08-11 2012-01-18 杭州顺网科技股份有限公司 A kind of diskless computer starts accelerated method
CN104408209A (en) * 2014-12-25 2015-03-11 中科创达软件股份有限公司 File processing method, file processing device and electronic equipment in start-up process of mobile operating system

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7036040B2 (en) * 2002-11-26 2006-04-25 Microsoft Corporation Reliability of diskless network-bootable computers using non-volatile memory cache
US20090327453A1 (en) * 2008-06-30 2009-12-31 Yu Neng-Chien Method for improving data reading speed of a diskless computer

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101814038A (en) * 2010-03-23 2010-08-25 杭州顺网科技股份有限公司 Method for increasing booting speed of computer
CN102323888A (en) * 2011-08-11 2012-01-18 杭州顺网科技股份有限公司 A kind of diskless computer starts accelerated method
CN104408209A (en) * 2014-12-25 2015-03-11 中科创达软件股份有限公司 File processing method, file processing device and electronic equipment in start-up process of mobile operating system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Tan Huailiang; Peng Shihui; He Zaihong. Data-optimized distribution method for diskless network servers based on hybrid storage. Computer Engineering, 2016, (No. 04), full text. *

Also Published As

Publication number Publication date
CN110795157A (en) 2020-02-14

Similar Documents

Publication Publication Date Title
EP2791815B1 (en) Application-driven cdn pre-caching
CN107197359B (en) Video file caching method and device
US20050086386A1 (en) Shared running-buffer-based caching system
EP2562991B1 (en) Data prefetching method, node and system for distributed hash table dht memory system
US20170149860A1 (en) Partial prefetching of indexed content
CN105653684B (en) Pre-reading method and device of distributed file system
JP2015509229A5 (en)
CN106681990B (en) Data cached forecasting method under a kind of mobile cloud storage environment
US8776158B1 (en) Asynchronous shifting windows caching for forward and backward video streaming
CN106599239A (en) Webpage content data acquisition method and server
WO2023020085A1 (en) Data processing method and system based on multi-level cache
CN102307234A (en) Resource retrieval method based on mobile terminal
US20170264672A1 (en) Methods, systems, and media for stored content distribution and access
CN102546674A (en) Directory tree caching system and method based on network storage device
CN110795157B (en) Method for improving starting-up speed of diskless workstation by using limited cache
US11489911B2 (en) Transmitting data including pieces of data
CN111787062B (en) Wide area network file system-oriented adaptive fast increment pre-reading method
US20180131783A1 (en) Video and Media Content Delivery Network Storage in Elastic Clouds
WO2016090985A1 (en) Cache reading method and apparatus, and cache reading processing method and apparatus
CN109992209B (en) Data processing method and device and distributed storage system
WO2010031297A1 (en) Method of wireless application protocol (wap) gateway pull service and system thereof
JP5192506B2 (en) File cache management method, apparatus, and program
CN115297095B (en) Back source processing method, device, computing equipment and storage medium
KR102235622B1 (en) Method and Apparatus for Cooperative Edge Caching in IoT Environment
Shariff et al. An overview to pre-fetching techniques for content caching of mobile applications

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant