WO2021169298A1 - 减少回源请求的方法、装置及计算机可读存储介质 - Google Patents

Info

Publication number
WO2021169298A1
WO2021169298A1 (PCT/CN2020/119123, CN2020119123W)
Authority
WO
WIPO (PCT)
Prior art keywords
cache
target file
file
unit
target
Prior art date
Application number
PCT/CN2020/119123
Other languages
English (en)
French (fr)
Inventor
魏海通
张毅
Original Assignee
平安科技(深圳)有限公司
Priority date
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司 (Ping An Technology (Shenzhen) Co., Ltd.)
Publication of WO2021169298A1 publication Critical patent/WO2021169298A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/10File systems; File servers
    • G06F16/18File system types
    • G06F16/182Distributed file systems
    • G06F16/1824Distributed file systems implemented using Network-attached Storage [NAS] architecture
    • G06F16/183Provision of network file services by network file servers, e.g. by using NFS, CIFS
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/10File systems; File servers
    • G06F16/13File access structures, e.g. distributed indices
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/10File systems; File servers
    • G06F16/17Details of further file system functions
    • G06F16/172Caching, prefetching or hoarding of files

Definitions

  • This application relates to the field of big data technology, and in particular to a method, device and computer-readable storage medium for reducing back-to-source requests based on a content distribution network.
  • The content delivery network (CDN), as the name suggests, is used for content distribution, which inevitably requires content caching.
  • The inventor realized that, for the distribution of larger target files, fragmented storage effectively increases the hit rate of target-file requests and reduces the traffic consumed by back-to-origin requests.
  • There is no industry standard defining the shard storage size. Companies generally define a global shard size according to their own business conditions, such as 512k for Alibaba Cloud and 1M for Qiniu Cloud, which causes no problem when business is stable or fluctuates little. Some cases are problematic, however: for example, some customers distribute content through a CDN fusion vendor, and the CDN fusion vendor and the CDN edge vendor use different shard sizes, so the first pull of a resource misses and back-to-origin traffic is amplified.
  • A method for reducing back-to-origin requests provided by this application includes: sending a read request for a target file to the client origin server through the content delivery network cache, and receiving the memory size of the target file fed back by the client origin server based on the read request;
  • according to the memory size of the target file, generating a corresponding memory unit in the client terminal and combining it with the content delivery network cache to form a cache unit;
  • dividing the cache unit into regions based on the memory size of the target file to obtain fragmented cache areas;
  • structurally splitting the target file according to the fragmented cache areas to form a set of target sub-files, loading the set of target sub-files into the client terminal through the cache unit to obtain a set of independent temporary files, and merging the independent temporary files to restore the target file, thereby reducing back-to-origin requests.
  • This application provides a device for reducing back-to-origin requests, including:
  • a cache unit generating module, configured to send a read request for a target file to the client origin server through the content delivery network cache, receive the memory size of the target file fed back by the client origin server based on the read request, and, according to the memory size of the target file, generate a corresponding memory unit in the client terminal and combine it with the content delivery network cache to form a cache unit;
  • a region dividing module, configured to divide the cache unit into regions based on the memory size of the target file to obtain fragmented cache areas;
  • a loading and merging module, configured to structurally split the target file according to the fragmented cache areas to form a set of target sub-files, load the set of target sub-files into the client terminal through the cache unit to obtain a set of independent temporary files, and merge the independent temporary files to restore the target file, thereby reducing back-to-origin requests.
  • The present application provides an electronic device including a memory and a processor. The memory stores a back-to-origin request reduction program runnable on the processor, and when the program is executed by the processor, the steps of the method for reducing back-to-origin requests described above are implemented.
  • The present application provides a computer-readable storage medium on which a back-to-origin request reduction program is stored; the program can be executed by one or more processors to implement the steps of the method for reducing back-to-origin requests described above.
  • FIG. 1 is a schematic flowchart of a method for reducing back-to-origin requests provided by an embodiment of this application;
  • FIG. 2 is a schematic diagram of the internal structure of an electronic device for reducing back-to-origin requests provided by an embodiment of this application;
  • FIG. 3 is a schematic diagram of the modules of a device for reducing back-to-origin requests provided by an embodiment of this application.
  • This application provides a method to reduce back-to-origin requests.
  • Referring to FIG. 1, it is a schematic flowchart of a method for reducing back-to-origin requests provided by an embodiment of this application.
  • the method can be executed by an electronic device, and the electronic device can be implemented by software and/or hardware.
  • the method for reducing back-to-origin requests includes:
  • The content delivery network (Content Delivery Network, CDN) is an intelligent virtual network built on top of the existing network. Relying on edge servers deployed in various places, and through function modules of the central platform such as load balancing, content distribution, and scheduling, it enables users to obtain the required content nearby, reduces network congestion, and improves user access response speed and hit rate.
  • The CDN cache acts as an agent of the customer origin site and shares the storage pressure of the customer origin server.
  • The client origin server refers to the servers required to operate and maintain a website; the target files are stored on these servers.
  • The target files can be video resources, audio resources, or large data resources, and the client terminal is the device that needs to obtain the target file.
  • In this application, sending the read request for the target file to the client origin server through the content delivery network cache includes: obtaining the request address for the target file to access the client origin server, loading the request address into a preset request statement, receiving the request statement through the content delivery network cache, searching for the address of the target file in the client origin server according to the request statement to complete the read request for the target file, calculating the memory size of the target file, and returning the size of the memory space occupied by the target file to the client terminal.
  • The range request statement is: range: proxy_set_header Range $slice_range (fileaddr), where the placeholder "fileaddr" in the trailing parentheses is the address of the target file that the client terminal needs to obtain.
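The slice-wise read described above can be illustrated with a short sketch. This is an illustration only, not the patent's implementation: it assumes the cache fetches the file from the origin in fixed-size slices via HTTP Range headers (in the spirit of the slice_range statement above); the function name and the default slice size are hypothetical.

```python
def slice_range_headers(file_size: int, slice_size: int = 1024 * 1024):
    """Build the HTTP Range header values a cache would send to the origin
    when fetching a file of `file_size` bytes in `slice_size`-byte slices."""
    headers = []
    for start in range(0, file_size, slice_size):
        end = min(start + slice_size, file_size) - 1  # Range ends are inclusive
        headers.append(f"bytes={start}-{end}")
    return headers
```

For a 2.5M file fetched in 1M slices this yields three ranges, the last one covering only the 0.5M tail.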
  • The client terminal generates a corresponding memory area in its memory unit according to the memory size of the target file, and combines it with the CDN cache to form the cache unit.
  • Forming the cache unit includes: when the memory size of the target file is below a first value, combining a first ratio of the memory unit with the CDN cache as the cache unit; when the memory size of the target file is between the first value and a second value, combining a second ratio of the memory unit with the CDN cache as the cache unit; and when the memory size of the target file is at or above the second value, combining a third ratio of the memory unit with the CDN cache as the cache unit.
  • the first value is 500M
  • the second value is 1G
  • the first ratio is 20%
  • the second ratio is 40%
  • the third ratio is 60%.
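The threshold rule above maps directly to a small selector. A sketch only: the function name is hypothetical, 1G is taken as 1024M, and since the comparison operators were garbled in the source, treating a file of exactly 500M as taking the second ratio is an assumption.

```python
def memory_unit_ratio(file_size_mb: float) -> float:
    """Fraction of the client terminal's memory unit combined with the CDN
    cache to form the cache unit, per the thresholds given in the text."""
    if file_size_mb < 500:      # below the first value (500M)
        return 0.20
    if file_size_mb < 1024:     # between the first value and the second (1G)
        return 0.40
    return 0.60                 # at or above the second value (1G)
```

For the 2.3G example used later, this selects the third ratio, 60%.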
  • This application divides the cache unit into two areas according to the memory size of the target file, namely a basic fragmented cache area and a supplementary fragmented cache area, wherein the space of the basic fragmented cache area is larger than that of the supplementary fragmented cache area.
  • The basic fragmented cache area consists of several fragmented cache areas each with a storage space of 2M.
  • Whether a supplementary fragmented cache area is set is determined by the memory size of the target file.
  • The supplementary fragmented cache area consists of fragmented cache areas with a storage space of 1M or 512kB.
  • The value m (in kB) of the fractional part is calculated as m = (n′M − [n′M]) × 1024, where [n′M] denotes rounding the memory size n′M of the target file down to a whole number of M.
  • When n′ is an even number, the entire cache unit is set as the basic fragmented cache area, which is divided into a number of fragmented cache areas of 2M each; the number is n′/20.
  • When n′ is an odd number, the cache unit is divided into a basic fragmented cache area and a supplementary fragmented cache area: the basic fragmented cache area contains (n′−1)/20 fragmented cache areas of 2M each, and the supplementary fragmented cache area contains only one fragmented cache area of 1M.
  • When n′ also has a fractional part, the cache unit likewise needs to be divided into a basic fragmented cache area and a supplementary fragmented cache area. This application divides n′ into an integer part [n′] and a floating-point part m: the integer part [n′] is divided into regions according to the method for integers described above, and the floating-point part m corresponds to metadata of m kB.
  • For example, when the size nG of the target file is 2.3G, n′M = 2355.2M. The integer part 2355 is odd, so one 1M fragmented cache area is set in the supplementary fragmented cache area (used to store the remaining 1M, i.e. 2355 − 2354). The fractional part, converted to kB, is 204.8kB < 512kB, so one fragmented cache area of 512kB is also set in the supplementary fragmented cache area.
  • The cache unit is thus divided into a basic fragmented cache area and a supplementary fragmented cache area, where the basic fragmented cache area contains 118 fragmented cache areas of 2M, and the supplementary fragmented cache area contains one 1M fragmented cache area and one 512kB fragmented cache area.
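The division rule can be sketched as follows. This is illustrative, not the patent's code: binary units (1M = 1024kB) and rounding the 2M-area count up to a whole number are assumptions made to reproduce the 2.3G example (118 × 2M areas, one 1M area, one 512kB area); the function name is hypothetical.

```python
import math

def divide_cache_unit(size_mb: float):
    """Return (number of 2M basic areas, number of 1M supplementary areas,
    number of 512kB supplementary areas) for a target file of size_mb M."""
    n_int = int(size_mb)                          # integer part [n'] in M
    frac_kb = round((size_mb - n_int) * 1024, 1)  # fractional part m in kB
    # Basic area: 2M fragments, one per 20M of the even portion of the file.
    base_2m = math.ceil((n_int - n_int % 2) / 20)
    # Supplementary area: one 1M fragment if [n'] is odd, another if m > 512kB;
    # one 512kB fragment if 0 < m <= 512kB.
    supp_1m = (n_int % 2) + (1 if frac_kb > 512 else 0)
    supp_512k = 1 if 0 < frac_kb <= 512 else 0
    return base_2m, supp_1m, supp_512k

# The 2.3G example: 2.3G = 2355.2M
# divide_cache_unit(2.3 * 1024) -> (118, 1, 1)
```

An even file size with no fractional part produces only 2M basic areas, as the text describes.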
  • The client origin server structurally splits the target file according to the fragmented cache areas, obtaining several target sub-files in units of 2M, 1M, and kB.
  • The 2M, 1M, and 512kB fragmented cache areas in the basic and supplementary fragmented cache areas load the sub-files in a structured manner; the size of each fragmented cache area matches the size of its target sub-files as closely as possible, so that the space of the cache unit is utilized to the maximum.
  • In a preferred embodiment of this application, the set of target sub-files is traversed with a loop command; each 2M target sub-file (the 2M sub-files being by far the most numerous, far more than the 1M and kB sub-files) is loaded in turn by a load command into the 2M fragmented cache areas in the basic fragmented cache area of the cache unit.
  • After loading, a 2M fragmented cache area transmits the stored 2M target sub-file to the client terminal to form an independent temporary file. As soon as it has transferred one 2M target sub-file, it rejoins the load queue, waits for the next 2M sub-file to be loaded, and transfers again (normally, all the 2M fragmented cache areas in the basic fragmented cache area together are not large enough to transfer the entire target file at once, so all the 2M fragmented cache areas must be loaded cyclically).
  • When the integer part of the target file's size in M is odd, the 1M fragmented cache area of the supplementary fragmented cache area needs to transmit once.
  • When the target file's size in M also has a floating-point part, i.e. a fractional part: if the fractional part is greater than 512kB, the 1M fragmented cache area of the supplementary fragmented cache area needs to transmit once; if the fractional part is less than 512kB, the 512kB fragmented cache area of the supplementary fragmented cache area needs to transmit once.
  • The target sub-files transmitted to the client terminal through the cache unit form independent temporary files: as many target sub-files as are transmitted, that many independent temporary files are formed, the target sub-files and the independent temporary files being in one-to-one correspondence.
  • For example, when the memory size of the target file is 2.3G, the target file is structurally split into 1177 target sub-files of 2M, one target sub-file of 1M (because the integer part of the target file's size, 2355, is odd: 2354M is transmitted cyclically through the 2M fragmented cache areas and 1M remains), and one target sub-file of 204.8kB.
  • The 1177 2M target sub-files are transmitted cyclically through the 118 2M fragmented cache areas mentioned in S2; the 1M target sub-file is transmitted through the 1M fragmented cache area in S2; and the 204.8kB target sub-file is transmitted through the 512kB fragmented cache area in S2.
  • After the merging operation (the 1179 target sub-files produced by the structured split are transmitted to the client terminal, generating 1179 independent temporary files, which are merged back into the target file), the storage operation is completed.
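The split-transfer-merge cycle can be sketched as a round trip over in-memory bytes. This is a simplified illustration, not the patent's loader: chunking follows the scheme above (as many 2M sub-files as fit, then a 1M sub-file for an odd leftover, then the kB-sized tail), and the independent temporary files are modeled as a plain list; the function names are hypothetical.

```python
TWO_M = 2 * 1024 * 1024
ONE_M = 1024 * 1024

def split_target(data: bytes):
    """Structured split: 2M sub-files, then a 1M sub-file if at least 1M
    remains, then the remaining kB-sized tail."""
    parts, i = [], 0
    while len(data) - i >= TWO_M:
        parts.append(data[i:i + TWO_M]); i += TWO_M
    if len(data) - i >= ONE_M:
        parts.append(data[i:i + ONE_M]); i += ONE_M
    if i < len(data):
        parts.append(data[i:])
    return parts

def merge(parts):
    """Merging the independent temporary files restores the target file."""
    return b"".join(parts)

# A 5.3M file splits into two 2M sub-files, one 1M sub-file and a ~0.3M tail,
# and merging restores the original byte-for-byte.
data = bytes(5 * ONE_M + 300 * 1024)
parts = split_target(data)
assert [len(p) for p in parts] == [TWO_M, TWO_M, ONE_M, 300 * 1024]
assert merge(parts) == data
```

The one-to-one correspondence between sub-files and temporary files is what makes the final merge a simple concatenation in order.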
  • This application also provides an electronic device that reduces back-to-origin requests.
  • Referring to FIG. 2, it is a schematic diagram of the internal structure of an electronic device provided by an embodiment of this application.
  • the electronic device 1 may be a PC (Personal Computer, personal computer), or a terminal device such as a smart phone, a tablet computer, or a portable computer, or a server or a combination of servers.
  • the electronic device 1 at least includes a memory 11, a processor 12, a communication bus 13, and a network interface 14.
  • the memory 11 includes at least one type of readable storage medium, and the readable storage medium includes flash memory, hard disk, multimedia card, card-type memory (for example, SD or DX memory, etc.), magnetic memory, magnetic disk, optical disk, and the like.
  • the memory 11 may be an internal storage unit of the electronic device 1 in some embodiments, such as a hard disk of the electronic device 1.
  • The memory 11 may also be an external storage device of the electronic device 1, such as a plug-in hard disk equipped on the electronic device 1, a smart media card (SmartMediaCard, SMC), a Secure Digital (SD) card, a flash card (FlashCard), etc.
  • the memory 11 may also include both an internal storage unit of the electronic device 1 and an external storage device.
  • The memory 11 can be used not only to store application software installed in the electronic device 1 and various data, such as the code of the back-to-origin request reduction program 01, but also to temporarily store data that has been output or will be output.
  • The processor 12 may be a central processing unit (CPU), controller, microcontroller, microprocessor, or other data-processing chip, used to run the program code stored in the memory 11 or to process data, for example to execute the back-to-origin request reduction program 01.
  • the communication bus 13 is used to realize the connection and communication between these components.
  • The network interface 14 may optionally include a standard wired interface and a wireless interface (such as a WI-FI interface), and is usually used to establish a communication connection between the electronic device 1 and other electronic devices.
  • the electronic device 1 may also include a user interface.
  • the user interface may include a display (Display) and an input unit such as a keyboard (Keyboard).
  • the optional user interface may also include a standard wired interface and a wireless interface.
  • the display may be an LED display, a liquid crystal display, a touch liquid crystal display, an OLED (Organic Light-Emitting Diode, organic light emitting diode) touch device, etc.
  • the display can also be appropriately called a display screen or a display unit, which is used to display the information processed in the electronic device 1 and to display a visualized user interface.
  • FIG. 2 only shows the electronic device 1 with the components 11-14 and the back-to-origin request reduction program 01; the structure shown in FIG. 2 does not constitute a limitation on the electronic device 1, which may include fewer or more components than shown, or combine certain components, or arrange the components differently.
  • The memory 11 stores the back-to-origin request reduction program 01; when the processor 12 executes the program 01 stored in the memory 11, the following steps are implemented:
  • Step 1: Send a read request for the target file to the client origin server through the content delivery network cache, receive the memory size of the target file fed back by the client origin server based on the read request, and, according to the memory size of the target file, generate a corresponding memory unit in the client terminal and combine it with the content delivery network cache to form a cache unit.
  • The specific implementation of this step is substantially the same as that of S1 in the method embodiment described above and is not repeated here.
  • Step 2: Divide the cache unit into regions based on the memory size of the target file to obtain fragmented cache areas.
  • The specific implementation of this step is substantially the same as that of S2 in the method embodiment described above and is not repeated here.
  • Step 3: Structurally split the target file according to the fragmented cache areas to form a set of target sub-files, load the set of target sub-files into the client terminal through the cache unit to obtain a set of independent temporary files, and merge the independent temporary files to restore the target file, thereby reducing back-to-origin requests.
  • The specific implementation of this step is substantially the same as that of S3 in the method embodiment described above and is not repeated here.
  • Optionally, the back-to-origin request reduction program may also be divided into one or more modules, which are stored in the memory 11 and executed by one or more processors (in this embodiment, the processor 12) to complete the present application.
  • A module referred to in the present application is a series of computer program instruction segments capable of completing a specific function, used to describe the execution process of the back-to-origin request reduction program in the electronic device.
  • Referring to FIG. 3, it is a schematic diagram of the program modules of the device for reducing back-to-origin requests in an embodiment of this application. In this embodiment, the device can be divided into a cache unit generating module 10, a region dividing module 20, and a loading and merging module 30. Illustratively:
  • the caching unit generating module 10 is configured to: send a read request of a target file to a client origin server through a content distribution network cache, and receive the memory size of the target file fed back by the client origin server based on the read request, According to the memory size of the target file, a corresponding memory unit is generated in the client terminal and combined with the content distribution network cache to form a cache unit.
  • the area dividing module 20 is configured to: divide the cache unit into areas based on the memory size of the target file to obtain a fragmented cache area.
  • The loading and merging module 30 is configured to: structurally split the target file according to the fragmented cache areas to form a set of target sub-files, load the set of target sub-files into the client terminal through the cache unit to obtain a set of independent temporary files, and merge the independent temporary files to restore the target file, thereby reducing back-to-origin requests.
  • The program modules above, such as the cache unit generating module 10, the region dividing module 20, and the loading and merging module 30, form a device for reducing back-to-origin requests; the functions or operation steps implemented when the device is executed are substantially the same as those in the embodiment above and are not repeated here.
  • An embodiment of the present application also proposes a computer-readable storage medium storing a back-to-origin request reduction program; the program can be executed by one or more processors to implement the steps of the method for reducing back-to-origin requests described above.
  • The computer-readable storage medium may be non-volatile or volatile.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

A method for reducing back-to-origin requests, an electronic device, and a computer-readable storage medium, relating to big data technology. The method includes: sending a read request for a target file to a client origin server through a content delivery network cache, receiving the memory size of the target file replied by the client origin server, and generating a cache unit according to the memory size of the target file (S1); dividing the cache unit into regions to obtain fragmented cache areas (S2); structurally splitting the target file according to the fragmented cache areas to form a set of target sub-files, loading the set of target sub-files into a client terminal through the cache unit to obtain a set of independent temporary files, and merging the independent temporary files to restore the target file, thereby reducing back-to-origin requests (S3). The method can reduce back-to-origin requests in fragmented storage.

Description

Method, apparatus, and computer-readable storage medium for reducing back-to-origin requests
This application claims priority to the Chinese patent application filed with the China Patent Office on February 29, 2020, with application number 202010134479.3 and invention title "Method, apparatus, and computer-readable storage medium for reducing back-to-origin requests", the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the field of big data technology, and in particular to a method, apparatus, and computer-readable storage medium for reducing back-to-origin requests based on a content delivery network.
Background
A content delivery network (CDN), as the name suggests, is used for content distribution, which inevitably requires content caching. The inventor realized that, for the distribution of larger target files, fragmented storage effectively increases the hit rate of target-file requests and reduces the traffic consumed by back-to-origin requests. There is no industry standard defining the fragment storage size; companies generally define a global fragment size according to their own business conditions, such as 512k for Alibaba Cloud and 1M for Qiniu Cloud, which causes no problem when business is stable or fluctuates little. Some cases are problematic, however. For example, some customers distribute content through a CDN fusion vendor, and the CDN fusion vendor and the CDN edge vendor use different fragment sizes, so the first pull of a resource misses from the CDN edge vendor to the CDN fusion vendor, increasing back-to-origin traffic and bringing unnecessary loss to the customer. For instance, vendor A needs to fetch, via fused distribution, vendor B's 512k fragments; after a cache loss, since vendor B uses 1M fragments, vendor B goes back to the customer origin site with 1M fragments, amplifying the traffic by a factor of 2. With a request peak of 1G, the back-to-origin traffic is instantly amplified 2× to 2G, in which case the customer origin server becomes overloaded.
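The 2× amplification in the background example (512k fragments refetched as 1M fragments; a 1G request peak becoming 2G at the origin) is simple arithmetic and can be checked directly; the function name is illustrative only.

```python
def origin_amplification(edge_fragment_kb: float, fused_fragment_kb: float, peak_gb: float):
    """When a cache miss forces the fused vendor (fused_fragment_kb fragments)
    to refetch content the edge vendor requested in edge_fragment_kb fragments,
    back-to-origin traffic grows by fused_fragment_kb / edge_fragment_kb."""
    factor = fused_fragment_kb / edge_fragment_kb
    return factor, peak_gb * factor

# 512k fragments refetched as 1M fragments: a 1G peak becomes 2G at the origin.
assert origin_amplification(512, 1024, 1.0) == (2.0, 2.0)
```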
Summary
The present application provides a method for reducing back-to-origin requests, comprising:
sending a read request for a target file to a customer origin server through a content delivery network cache, receiving the memory size of the target file fed back by the customer origin server based on the read request, and, according to the memory size of the target file, generating a corresponding memory unit in a client terminal and combining it with the content delivery network cache to form a cache unit;
dividing the cache unit into regions based on the memory size of the target file to obtain slice cache regions;
structurally splitting the target file according to the slice cache regions to form a set of target sub-files, loading the set of target sub-files into the client terminal through the cache unit to obtain a set of independent temporary files, and merging the set of independent temporary files to restore the target file, thereby reducing back-to-origin requests.
The present application provides an apparatus for reducing back-to-origin requests, comprising:
a cache unit generation module, configured to send a read request for a target file to a customer origin server through a content delivery network cache, receive the memory size of the target file fed back by the customer origin server based on the read request, and, according to the memory size of the target file, generate a corresponding memory unit in a client terminal and combine it with the content delivery network cache to form a cache unit;
a region division module, configured to divide the cache unit into regions based on the memory size of the target file to obtain slice cache regions;
a load-and-merge module, configured to structurally split the target file according to the slice cache regions to form a set of target sub-files, load the set of target sub-files into the client terminal through the cache unit to obtain a set of independent temporary files, and merge the set of independent temporary files to restore the target file, thereby reducing back-to-origin requests.
The present application provides an electronic device, comprising a memory and a processor, wherein the memory stores a back-to-origin-request reduction program executable on the processor, and the program, when executed by the processor, implements the following steps of the method for reducing back-to-origin requests:
sending a read request for a target file to a customer origin server through a content delivery network cache, receiving the memory size of the target file fed back by the customer origin server based on the read request, and, according to the memory size of the target file, generating a corresponding memory unit in a client terminal and combining it with the content delivery network cache to form a cache unit;
dividing the cache unit into regions based on the memory size of the target file to obtain slice cache regions;
structurally splitting the target file according to the slice cache regions to form a set of target sub-files, loading the set of target sub-files into the client terminal through the cache unit to obtain a set of independent temporary files, and merging the set of independent temporary files to restore the target file, thereby reducing back-to-origin requests.
The present application provides a computer-readable storage medium storing a back-to-origin-request reduction program, which can be executed by one or more processors to implement the following steps of the method for reducing back-to-origin requests:
sending a read request for a target file to a customer origin server through a content delivery network cache, receiving the memory size of the target file fed back by the customer origin server based on the read request, and, according to the memory size of the target file, generating a corresponding memory unit in a client terminal and combining it with the content delivery network cache to form a cache unit;
dividing the cache unit into regions based on the memory size of the target file to obtain slice cache regions;
structurally splitting the target file according to the slice cache regions to form a set of target sub-files, loading the set of target sub-files into the client terminal through the cache unit to obtain a set of independent temporary files, and merging the set of independent temporary files to restore the target file, thereby reducing back-to-origin requests.
Brief Description of the Drawings
FIG. 1 is a schematic flowchart of a method for reducing back-to-origin requests according to an embodiment of the present application;
FIG. 2 is a schematic diagram of the internal structure of an electronic device for reducing back-to-origin requests according to an embodiment of the present application;
FIG. 3 is a schematic diagram of the modules of an apparatus for reducing back-to-origin requests according to an embodiment of the present application.
The realization of the objectives, functional features, and advantages of the present application will be further described with reference to the accompanying drawings in conjunction with the embodiments.
Detailed Description
It should be understood that the specific embodiments described here serve only to explain the present application and are not intended to limit it.
The present application provides a method for reducing back-to-origin requests. Referring to FIG. 1, a schematic flowchart of the method for reducing back-to-origin requests according to an embodiment of the present application is shown. The method may be executed by an electronic device, which may be implemented by software and/or hardware.
In this embodiment, the method for reducing back-to-origin requests comprises:
S1. Send a read request for a target file to the customer origin server through the content delivery network cache, receive the memory size of the target file fed back by the customer origin server based on the read request, and, according to the memory size of the target file, generate a corresponding memory unit in the client terminal and combine it with the content delivery network cache to form a cache unit.
In a preferred embodiment of the present application, the content delivery network (CDN) is an intelligent virtual network built on top of the existing network. Relying on edge servers deployed in various places, together with the load-balancing, content-distribution, and scheduling modules of a central platform, it lets users obtain the required content from a nearby node, reducing network congestion and improving access response speed and hit rate.
The CDN cache acts as a proxy for the customer origin site, sharing the storage load of the customer origin server. The customer origin server refers to the servers required to run and maintain a website; it stores the target file, which may be a video resource, an audio resource, or a large data resource. The client terminal is the device that needs to obtain the target file.
Preferably, sending the read request for the target file to the customer origin server through the content delivery network cache comprises: obtaining the request address at which the target file accesses the customer origin server, loading the request address into a preset request statement, receiving the request statement through the content delivery network cache, locating the address of the target file on the customer origin server according to the request statement to complete the read request, computing the memory size of the target file, and returning the size of the memory space occupied by the target file to the client terminal. The range request statement is: `range: proxy_set_header Range $slice_range (fileaddr)`, where the placeholder `fileaddr` in the final parentheses is the address of the target file that the client terminal needs to obtain.
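The slice-aligned range request described above can be sketched in Python. This is a minimal illustration, not the patented implementation: `build_slice_range_header` mirrors how an nginx-style `$slice_range` variable aligns a request to slice boundaries, and `parse_total_size` shows how the file's total size could be read out of a 206 response's `Content-Range` header. The 2 MB default slice size and both function names are assumptions for illustration.

```python
def build_slice_range_header(offset, slice_size=2 * 1024 * 1024):
    """Align a byte offset to a slice boundary and format the HTTP
    Range header, as an nginx-style $slice_range variable does."""
    start = (offset // slice_size) * slice_size
    end = start + slice_size - 1
    return "bytes=%d-%d" % (start, end)


def parse_total_size(content_range):
    """Read the full file size out of a 206 Content-Range header,
    e.g. 'bytes 0-2097151/2469606195' -> 2469606195."""
    return int(content_range.rsplit("/", 1)[1])
```

For a request falling anywhere inside the second 2 MB slice, `build_slice_range_header(3_000_000)` yields `bytes=2097152-4194303`, so every request for that slice hits the same cached object.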
Further, according to the memory size of the target file, the client terminal generates a corresponding memory region in its memory unit and combines it with the CDN cache to form the cache unit. In a preferred embodiment of the present application, forming the cache unit comprises: when the memory size of the target file ≤ a first value, combining a first proportion of the memory unit with the CDN cache as the cache unit; when the first value < the memory size of the target file ≤ a second value, combining a second proportion of the memory unit with the CDN cache as the cache unit; and when the second value < the memory size of the target file, combining a third proportion of the memory unit with the CDN cache as the cache unit. Preferably, the first value is 500 MB, the second value is 1 GB, the first proportion is 20%, the second proportion is 40%, and the third proportion is 60%.
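The threshold logic above can be written down directly. This is a sketch under the stated preferred values (500 MB, 1 GB, 20/40/60%); the function name and the byte-based interface are assumptions for illustration.

```python
def cache_unit_ratio(file_size_bytes):
    """Proportion of the client terminal's memory unit that is combined
    with the CDN cache to form the cache unit (preferred thresholds)."""
    first_value = 500 * 1024 ** 2   # 500 MB
    second_value = 1024 ** 3        # 1 GB
    if file_size_bytes <= first_value:
        return 0.20                  # first proportion
    if file_size_bytes <= second_value:
        return 0.40                  # second proportion
    return 0.60                      # third proportion
```

A 2.3 GB target file would therefore pair 60% of the client memory unit with the CDN cache.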
S2. Divide the cache unit into regions based on the memory size of the target file to obtain slice cache regions.
Preferably, the present application divides the cache unit into two regions according to the memory size of the target file: a basic slice cache region and a supplementary slice cache region, where the basic slice cache region is larger in space than the supplementary slice cache region. For example, the basic slice cache region contains a number of slice cache regions of 2 MB each. Whether the supplementary slice cache region is set is determined by the proportion of the target file's memory size to the cache unit's space; for example, the supplementary slice cache region contains several slice cache regions of 1 MB or 512 kB each.
In detail, the region division comprises: let the memory size of the target file be n GB; convert it to megabytes using n × 1024 = n′, obtaining a target-file size of n′ MB; compute the value mk of the fractional part of the size n′ MB; and, according to mk, add slice cache regions of corresponding sizes to the cache unit in a preset manner. The fractional part mk is computed as:
n′M − [n′M] = mk
where [n′M] denotes rounding the size n′M down to an integer.
Further, when n′ is even, the entire cache unit is set as the basic slice cache region, consisting of slice cache regions of 2 MB each, n′/20 in number. When n′ is odd, the cache unit is divided into a basic slice cache region containing (n′ − 1)/20 slice cache regions of 2 MB each and a supplementary slice cache region containing a single 1 MB slice cache region.
Further, if n′ contains a fractional part, the cache unit likewise needs to be divided into a supplementary slice cache region and a basic slice cache region. The application splits n′ into an integer part [n′] and a fractional part m; the integer part [n′] is divided as described above for integers, while the fractional part m corresponds to m kB of metadata. When m > 512 kB, one additional 1 MB slice cache region is added to the supplementary region on top of the [n′] division to store the m kB of metadata; when m ≤ 512 kB, one additional 512 kB slice cache region is added to the supplementary region instead.
For example, for n = 2.3, n GB is 2.3 GB and the target file is 2.3 GB. Converting to megabytes: 2.3 × 1024 = 2355.2 MB; then 2355.2 − [2355.2] = 0.2 MB = 204.8 kB and [2355.2] = 2355 MB. The integer part is odd, so the cache unit is divided into a basic slice cache region and a supplementary slice cache region. The number of 2 MB slice cache regions in the basic region is (2355 − 1)/20 = 117.7, rounded up to 118. The supplementary region is given one 1 MB slice cache region (to hold the 1 MB left over from 2355 − 1). Converting the fractional part to kilobytes gives 204.8 kB < 512 kB, so one 512 kB slice cache region is also set in the supplementary region.
Preferably, for the target file of 2.3 GB, the cache unit is thus divided into a basic slice cache region containing 118 slice cache regions of 2 MB each and a supplementary slice cache region containing one 1 MB slice cache region and one 512 kB slice cache region.
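The division rules of S2 can be condensed into one function. This is a sketch reconstructed from the worked 2.3 GB example, taking the n′/20 factor at face value and rounding it up as the example does; the function name and return shape are illustrative assumptions.

```python
import math


def divide_cache_unit(n_gb):
    """Return (num_2M, num_1M, num_512k) slice cache regions for a
    target file of n_gb gigabytes, following the S2 division rules."""
    n_mb = n_gb * 1024                          # n', size in megabytes
    whole = int(n_mb)                           # [n'M], integer part
    frac_kb = round((n_mb - whole) * 1024, 3)   # mk, fractional part in kB
    num_1m = 0
    num_512k = 0
    if whole % 2 == 1:                          # odd integer part:
        num_1m += 1                             # one supplementary 1M region
        whole -= 1
    num_2m = math.ceil(whole / 20)              # 2M regions, cycled over chunks
    if frac_kb > 512:                           # metadata tail
        num_1m += 1
    elif frac_kb > 0:
        num_512k += 1
    return num_2m, num_1m, num_512k
```

For the 2.3 GB example this returns `(118, 1, 1)`, matching the 118 regions of 2 MB, one of 1 MB, and one of 512 kB derived above.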
S3. Structurally split the target file according to the slice cache regions to form a set of target sub-files, load the set of target sub-files through the cache unit into the client terminal to obtain a set of independent temporary files, and merge the set of independent temporary files to restore the target file, thereby reducing back-to-origin requests.
In a preferred embodiment of the present application, the customer origin server structurally splits the target file according to the slice cache regions into a number of target sub-files sized in units of 2 MB, 1 MB, and kB. Through this structural splitting, the 2 MB, 1 MB, and 512 kB slice cache regions of the basic and supplementary slice cache regions load the sub-files so that the size of each slice cache region matches the size of each target sub-file as closely as possible, maximizing use of the cache unit's space.
Further, the preferred embodiment traverses the set of target sub-files with a loop command. Each 2 MB target sub-file (2 MB sub-files are by far the most numerous; per the division rules, there is at most one 1 MB sub-file and at most one kB-sized sub-file) is loaded in turn into a 2 MB slice cache region of the basic region via a load command. The loaded 2 MB slice cache region then transfers its stored 2 MB target sub-file to the client terminal, forming an independent temporary file, and immediately rejoins the load queue to wait for the next 2 MB sub-file to be loaded and transferred (normally the 2 MB slice cache regions of the basic region cannot transfer the whole target file in one pass, so all of them are loaded cyclically).
Further, if the integer part of the target-file size in megabytes is odd, one additional transfer through the 1 MB slice cache region of the supplementary region is needed. If the size in megabytes also has a fractional part: when the fractional part exceeds 512 kB, one more transfer through the supplementary 1 MB slice cache region is needed; when it is below 512 kB, one transfer through the supplementary 512 kB slice cache region is needed.
Preferably, every target sub-file transferred through the cache unit to the client terminal forms an independent temporary file; that is, as many independent temporary files are formed as there are target sub-files transferred, in one-to-one correspondence. The client terminal merges all independent temporary files with a merge command and restores the target file, completing the reduction of back-to-origin requests.
For example: the target file is 2.3 GB, i.e. 2.3 × 1024 = 2355.2 MB, and the target file is structurally split. The fractional part is 2355.2 − [2355.2] = 0.2 MB = 204.8 kB, and the integer part [2355.2] = 2355 MB is odd, so the structural split produces (2355 − 1)/2 = 1177 target sub-files of 2 MB, one sub-file of 1 MB (since the odd integer part 2355 leaves 1 MB after 2354 MB of sub-files has been cycled through the 2 MB slice cache regions), and one sub-file of 204.8 kB. The 1177 sub-files of 2 MB are transferred cyclically through the 118 slice cache regions of 2 MB from S2, the single 1 MB sub-file through the 1 MB slice cache region from S2, and the 204.8 kB sub-file through the 512 kB slice cache region from S2. After all target sub-files are transferred, the client terminal merges the resulting 1177 + 1 + 1 = 1179 independent temporary files (the structural split produced 1179 sub-files, so 1179 independent temporary files are generated at the client terminal), completing the storage operation.
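The split-transfer-merge round trip of S3 reduces, at the client, to concatenating the temporary files in order. A simplified sketch: it cuts fixed 2 MB slices with a smaller tail, whereas the embodiment additionally peels a separate 1 MB sub-file when the integer megabyte count is odd; function names are illustrative assumptions.

```python
def split_target_file(data, slice_size=2 * 1024 * 1024):
    """Structurally split a target file into fixed-size sub-files;
    the final chunk carries any sub-slice remainder."""
    return [data[i:i + slice_size] for i in range(0, len(data), slice_size)]


def merge_temp_files(chunks):
    """Client-side merge: concatenating the independent temporary
    files in order restores the target file byte-for-byte."""
    return b"".join(chunks)
```

Splitting a 5 MB + 100 byte payload yields three sub-files (2 MB, 2 MB, and 1 MB + 100 B), and merging them restores the original exactly.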
The present application further provides an electronic device for reducing back-to-origin requests. Referring to FIG. 2, a schematic diagram of the internal structure of the electronic device according to an embodiment of the present application is shown.
In this embodiment, the electronic device 1 may be a PC (personal computer), a terminal device such as a smartphone, tablet, or portable computer, or a server or server cluster. The electronic device 1 comprises at least a memory 11, a processor 12, a communication bus 13, and a network interface 14.
The memory 11 includes at least one type of readable storage medium, including flash memory, hard disks, multimedia cards, card-type memory (e.g. SD or DX memory), magnetic memory, magnetic disks, and optical disks. In some embodiments the memory 11 may be an internal storage unit of the electronic device 1, such as its hard disk; in other embodiments it may be an external storage device of the electronic device 1, such as a plug-in hard disk, SmartMedia Card (SMC), Secure Digital (SD) card, or flash card. Further, the memory 11 may include both the internal storage unit and an external storage device. The memory 11 can store not only the application software installed on the electronic device 1 and various kinds of data, such as the code of the back-to-origin-request reduction program 01, but also data that has been output or will be output.
The processor 12 may in some embodiments be a central processing unit (CPU), controller, microcontroller, microprocessor, or other data-processing chip, used to run program code stored in the memory 11 or process data, for example to execute the back-to-origin-request reduction program 01.
The communication bus 13 implements connection and communication among these components.
The network interface 14 may optionally include a standard wired interface and a wireless interface (such as a Wi-Fi interface), and is typically used to establish communication connections between the electronic device 1 and other electronic devices.
Optionally, the electronic device 1 may further include a user interface, which may comprise a display and an input unit such as a keyboard, and optionally standard wired and wireless interfaces. In some embodiments the display may be an LED display, a liquid-crystal display, a touch-sensitive liquid-crystal display, or an OLED (organic light-emitting diode) touch display. The display may also be appropriately called a display screen or display unit, used to show information processed in the electronic device 1 and to display a visualized user interface.
FIG. 2 shows only the electronic device 1 with components 11 to 14 and the back-to-origin-request reduction program 01. Those skilled in the art will understand that the structure shown in FIG. 2 does not limit the electronic device 1, which may include fewer or more components than shown, combine certain components, or arrange the components differently.
In the embodiment of the electronic device 1 shown in FIG. 2, the memory 11 stores the back-to-origin-request reduction program 01; when the processor 12 executes the program 01, the following steps are implemented:
Step 1, Step 2, and Step 3 executed by the processor 12 are substantially the same as steps S1 to S3 of the method embodiment described above and are not repeated here.
Optionally, in other embodiments, the back-to-origin-request reduction program may be divided into one or more modules, stored in the memory 11 and executed by one or more processors (the processor 12 in this embodiment) to implement the present application. A module here refers to a series of computer program instruction segments capable of completing a specific function, used to describe the execution of the back-to-origin-request reduction program in the electronic device.
Referring to FIG. 3, a schematic diagram of the program modules of the apparatus for reducing back-to-origin requests in an embodiment of the present application is shown. In this embodiment, the apparatus may be divided, by way of example, into a cache unit generation module 10, a region division module 20, and a load-and-merge module 30:
The cache unit generation module 10 is configured to: send a read request for a target file to a customer origin server through a content delivery network cache, receive the memory size of the target file fed back by the customer origin server based on the read request, and, according to the memory size of the target file, generate a corresponding memory unit in a client terminal and combine it with the content delivery network cache to form a cache unit.
The region division module 20 is configured to: divide the cache unit into regions based on the memory size of the target file to obtain slice cache regions.
The load-and-merge module 30 is configured to: structurally split the target file according to the slice cache regions to form a set of target sub-files, load the set of target sub-files into the client terminal through the cache unit to obtain a set of independent temporary files, and merge the set of independent temporary files to restore the target file, thereby reducing back-to-origin requests.
The program modules above, namely the cache unit generation module 10, the region division module 20, and the load-and-merge module 30, may form an apparatus for reducing back-to-origin requests. The functions or operation steps implemented when the apparatus is executed are substantially the same as those of the above embodiments and are not repeated here.
In addition, an embodiment of the present application further provides a computer-readable storage medium storing a back-to-origin-request reduction program, which can be executed by one or more processors to implement the following operations:
sending a read request for a target file to a customer origin server through a content delivery network cache, receiving the memory size of the target file fed back by the customer origin server based on the read request, and, according to the memory size of the target file, generating a corresponding memory unit in a client terminal and combining it with the content delivery network cache to form a cache unit;
dividing the cache unit into regions based on the memory size of the target file to obtain slice cache regions;
structurally splitting the target file according to the slice cache regions to form a set of target sub-files, loading the set of target sub-files into the client terminal through the cache unit to obtain a set of independent temporary files, and merging the set of independent temporary files to restore the target file, thereby reducing back-to-origin requests.
The computer-readable storage medium may be non-volatile or volatile.
The specific embodiments of the computer-readable storage medium of the present application are substantially the same as those of the electronic device and the method described above and are not repeated here.
It should be noted that the serial numbers of the above embodiments are for description only and do not indicate relative merit. Moreover, the terms "comprise", "include", and any variants thereof are intended to cover non-exclusive inclusion, so that a process, apparatus, article, or method including a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, apparatus, article, or method. Without further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, apparatus, article, or method that includes it.
From the description of the above embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, and of course also by hardware, though in many cases the former is the better implementation. Based on this understanding, the technical solution of the present application, in essence or in the part contributing over the prior art, can be embodied in the form of a software product stored in a storage medium as described above (such as ROM/RAM, magnetic disk, or optical disk) and including several instructions that cause a terminal device (which may be a mobile phone, computer, server, or network device) to execute the methods described in the embodiments of the present application.
The above are only preferred embodiments of the present application and do not thereby limit its patent scope. Any equivalent structural or process transformation made using the contents of the specification and drawings of the present application, whether applied directly or indirectly in other related technical fields, is likewise included within the patent protection scope of the present application.

Claims (20)

  1. A method for reducing back-to-origin requests, wherein the method comprises:
    sending a read request for a target file to a customer origin server through a content delivery network cache, receiving the memory size of the target file fed back by the customer origin server based on the read request, and, according to the memory size of the target file, generating a corresponding memory unit in a client terminal and combining it with the content delivery network cache to form a cache unit;
    dividing the cache unit into regions based on the memory size of the target file to obtain slice cache regions;
    structurally splitting the target file according to the slice cache regions to form a set of target sub-files, loading the set of target sub-files into the client terminal through the cache unit to obtain a set of independent temporary files, and merging the set of independent temporary files to restore the target file, thereby reducing back-to-origin requests.
  2. The method for reducing back-to-origin requests according to claim 1, wherein sending the read request for the target file to the customer origin server through the content delivery network cache comprises:
    obtaining the request address at which the target file accesses the customer origin server, loading the request address into a preset request statement, receiving the request statement through the content delivery network cache, locating the address of the target file on the customer origin server according to the request statement, and completing the read request for the target file.
  3. The method for reducing back-to-origin requests according to claim 1, wherein generating, according to the memory size of the target file, a memory unit of a certain proportion in the client terminal and combining it with the content delivery network cache to form the cache unit comprises:
    when the memory size of the target file ≤ a first value, combining a first proportion of the memory unit with the content delivery network cache as the cache unit;
    when the first value < the memory size of the target file ≤ a second value, combining a second proportion of the memory unit with the content delivery network cache as the cache unit;
    when the second value < the memory size of the target file, combining a third proportion of the memory unit with the content delivery network cache as the cache unit.
  4. The method for reducing back-to-origin requests according to claim 3, wherein the first value is 500 MB, the second value is 1 GB, the first proportion is 20%, the second proportion is 40%, and the third proportion is 60%.
  5. The method for reducing back-to-origin requests according to claim 1, wherein the slice cache regions comprise a basic slice cache region and a supplementary slice cache region, the basic slice cache region being larger in space than the supplementary slice cache region; and
    the region division comprises:
    converting the memory size of the target file into megabytes to obtain a target-file size of n'M;
    if n' is even, dividing the entire cache unit into the basic slice cache region;
    if n' is odd, dividing the cache unit into a basic slice cache region and a supplementary slice cache region;
    if n' contains a fractional part, dividing the cache unit into a supplementary slice cache region and a basic slice cache region.
  6. The method for reducing back-to-origin requests according to claim 5, wherein converting the memory size of the target file into megabytes to obtain the target-file size n'M comprises:
    computing the value mk of the fractional part of the target-file size n'M by the following formula:
    n'M-[n'M]=mk
    where [n'M] denotes rounding the target-file size n'M down to an integer;
    adding, according to the value of mk, slice cache regions of corresponding sizes to the cache unit in a preset manner.
  7. The method for reducing back-to-origin requests according to any one of claims 1 to 6, wherein loading the set of target sub-files into the client terminal through the cache unit to obtain the set of independent temporary files comprises:
    traversing the set of target sub-files with a loop command to obtain the set of target sub-files to be transferred;
    loading the target sub-files to be transferred in turn into the corresponding slice cache regions of the cache unit with a load command;
    transferring the target sub-files to be transferred from the corresponding slice cache regions to the client terminal to obtain the set of independent temporary files.
  8. An apparatus for reducing back-to-origin requests, wherein the apparatus comprises:
    a cache unit generation module, configured to send a read request for a target file to a customer origin server through a content delivery network cache, receive the memory size of the target file fed back by the customer origin server based on the read request, and, according to the memory size of the target file, generate a corresponding memory unit in a client terminal and combine it with the content delivery network cache to form a cache unit;
    a region division module, configured to divide the cache unit into regions based on the memory size of the target file to obtain slice cache regions;
    a load-and-merge module, configured to structurally split the target file according to the slice cache regions to form a set of target sub-files, load the set of target sub-files into the client terminal through the cache unit to obtain a set of independent temporary files, and merge the set of independent temporary files to restore the target file, thereby reducing back-to-origin requests.
  9. An electronic device, wherein the electronic device comprises a memory and a processor, the memory storing a back-to-origin-request reduction program executable on the processor, and the program, when executed by the processor, implements the following steps of the method for reducing back-to-origin requests:
    sending a read request for a target file to a customer origin server through a content delivery network cache, receiving the memory size of the target file fed back by the customer origin server based on the read request, and, according to the memory size of the target file, generating a corresponding memory unit in a client terminal and combining it with the content delivery network cache to form a cache unit;
    dividing the cache unit into regions based on the memory size of the target file to obtain slice cache regions;
    structurally splitting the target file according to the slice cache regions to form a set of target sub-files, loading the set of target sub-files into the client terminal through the cache unit to obtain a set of independent temporary files, and merging the set of independent temporary files to restore the target file, thereby reducing back-to-origin requests.
  10. The electronic device according to claim 9, wherein sending the read request for the target file to the customer origin server through the content delivery network cache comprises:
    obtaining the request address at which the target file accesses the customer origin server, loading the request address into a preset request statement, receiving the request statement through the content delivery network cache, locating the address of the target file on the customer origin server according to the request statement, and completing the read request for the target file.
  11. The electronic device according to claim 9, wherein generating, according to the memory size of the target file, a memory unit of a certain proportion in the client terminal and combining it with the content delivery network cache to form the cache unit comprises:
    when the memory size of the target file ≤ a first value, combining a first proportion of the memory unit with the content delivery network cache as the cache unit;
    when the first value < the memory size of the target file ≤ a second value, combining a second proportion of the memory unit with the content delivery network cache as the cache unit;
    when the second value < the memory size of the target file, combining a third proportion of the memory unit with the content delivery network cache as the cache unit.
  12. The electronic device according to claim 9, wherein the slice cache regions comprise a basic slice cache region and a supplementary slice cache region, the basic slice cache region being larger in space than the supplementary slice cache region; and
    the region division comprises:
    converting the memory size of the target file into megabytes to obtain a target-file size of n'M;
    if n' is even, dividing the entire cache unit into the basic slice cache region;
    if n' is odd, dividing the cache unit into a basic slice cache region and a supplementary slice cache region;
    if n' contains a fractional part, dividing the cache unit into a supplementary slice cache region and a basic slice cache region.
  13. The electronic device according to claim 12, wherein converting the memory size of the target file into megabytes to obtain the target-file size n'M comprises:
    computing the value mk of the fractional part of the target-file size n'M by the following formula:
    n'M-[n'M]=mk
    where [n'M] denotes rounding the target-file size n'M down to an integer;
    adding, according to the value of mk, slice cache regions of corresponding sizes to the cache unit in a preset manner.
  14. The electronic device according to any one of claims 9 to 13, wherein loading the set of target sub-files into the client terminal through the cache unit to obtain the set of independent temporary files comprises:
    traversing the set of target sub-files with a loop command to obtain the set of target sub-files to be transferred;
    loading the target sub-files to be transferred in turn into the corresponding slice cache regions of the cache unit with a load command;
    transferring the target sub-files to be transferred from the corresponding slice cache regions to the client terminal to obtain the set of independent temporary files.
  15. A computer-readable storage medium, wherein the computer-readable storage medium stores a back-to-origin-request reduction program executable by one or more processors to implement the following steps of the method for reducing back-to-origin requests:
    sending a read request for a target file to a customer origin server through a content delivery network cache, receiving the memory size of the target file fed back by the customer origin server based on the read request, and, according to the memory size of the target file, generating a corresponding memory unit in a client terminal and combining it with the content delivery network cache to form a cache unit;
    dividing the cache unit into regions based on the memory size of the target file to obtain slice cache regions;
    structurally splitting the target file according to the slice cache regions to form a set of target sub-files, loading the set of target sub-files into the client terminal through the cache unit to obtain a set of independent temporary files, and merging the set of independent temporary files to restore the target file, thereby reducing back-to-origin requests.
  16. The computer-readable storage medium according to claim 15, wherein sending the read request for the target file to the customer origin server through the content delivery network cache comprises:
    obtaining the request address at which the target file accesses the customer origin server, loading the request address into a preset request statement, receiving the request statement through the content delivery network cache, locating the address of the target file on the customer origin server according to the request statement, and completing the read request for the target file.
  17. The computer-readable storage medium according to claim 15, wherein generating, according to the memory size of the target file, a memory unit of a certain proportion in the client terminal and combining it with the content delivery network cache to form the cache unit comprises:
    when the memory size of the target file ≤ a first value, combining a first proportion of the memory unit with the content delivery network cache as the cache unit;
    when the first value < the memory size of the target file ≤ a second value, combining a second proportion of the memory unit with the content delivery network cache as the cache unit;
    when the second value < the memory size of the target file, combining a third proportion of the memory unit with the content delivery network cache as the cache unit.
  18. The computer-readable storage medium according to claim 15, wherein the slice cache regions comprise a basic slice cache region and a supplementary slice cache region, the basic slice cache region being larger in space than the supplementary slice cache region; and
    the region division comprises:
    converting the memory size of the target file into megabytes to obtain a target-file size of n'M;
    if n' is even, dividing the entire cache unit into the basic slice cache region;
    if n' is odd, dividing the cache unit into a basic slice cache region and a supplementary slice cache region;
    if n' contains a fractional part, dividing the cache unit into a supplementary slice cache region and a basic slice cache region.
  19. The computer-readable storage medium according to claim 18, wherein converting the memory size of the target file into megabytes to obtain the target-file size n'M comprises:
    computing the value mk of the fractional part of the target-file size n'M by the following formula:
    n'M-[n'M]=mk
    where [n'M] denotes rounding the target-file size n'M down to an integer;
    adding, according to the value of mk, slice cache regions of corresponding sizes to the cache unit in a preset manner.
  20. The computer-readable storage medium according to any one of claims 15 to 19, wherein loading the set of target sub-files into the client terminal through the cache unit to obtain the set of independent temporary files comprises:
    traversing the set of target sub-files with a loop command to obtain the set of target sub-files to be transferred;
    loading the target sub-files to be transferred in turn into the corresponding slice cache regions of the cache unit with a load command;
    transferring the target sub-files to be transferred from the corresponding slice cache regions to the client terminal to obtain the set of independent temporary files.
PCT/CN2020/119123 2020-02-29 2020-09-29 Method and apparatus for reducing back-to-origin requests, and computer-readable storage medium WO2021169298A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010134479.3 2020-02-29
CN202010134479.3A CN111339057A (zh) 2020-02-29 2020-02-29 Method and apparatus for reducing back-to-origin requests, and computer-readable storage medium

Publications (1)

Publication Number Publication Date
WO2021169298A1 true WO2021169298A1 (zh) 2021-09-02

Family

ID=71184114

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/119123 WO2021169298A1 (zh) 2020-02-29 2020-09-29 Method and apparatus for reducing back-to-origin requests, and computer-readable storage medium

Country Status (2)

Country Link
CN (1) CN111339057A (zh)
WO (1) WO2021169298A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114466032A (zh) * 2021-12-27 2022-05-10 天翼云科技有限公司 Merged back-to-origin method and apparatus for a CDN system, and storage medium

Families Citing this family (3)

Publication number Priority date Publication date Assignee Title
CN111339057A (zh) 2020-02-29 2020-06-26 平安科技(深圳)有限公司 Method and apparatus for reducing back-to-origin requests, and computer-readable storage medium
CN112055044B (zh) * 2020-07-20 2022-11-04 云盾智慧安全科技有限公司 Data request method, server, and computer-storable medium
CN112417350B (zh) * 2020-09-17 2023-03-24 上海哔哩哔哩科技有限公司 Data storage adjustment method and apparatus, and computer device

Citations (6)

Publication number Priority date Publication date Assignee Title
CN103227826A (zh) * 2013-04-23 2013-07-31 蓝汛网络科技(北京)有限公司 File transfer method and apparatus
CN105450780A (zh) * 2015-12-31 2016-03-30 深圳市网心科技有限公司 CDN system and back-to-origin method therefor
CN105791366A (zh) * 2014-12-26 2016-07-20 中国电信股份有限公司 Large-file HTTP Range download method, cache server, and system
US20170366488A1 (en) * 2012-01-31 2017-12-21 Google Inc. Experience sharing system and method
CN109167845A (zh) * 2018-11-27 2019-01-08 云之端网络(江苏)股份有限公司 Slice caching and reassembly method for large-file distribution scenarios
CN111339057A (zh) * 2020-02-29 2020-06-26 平安科技(深圳)有限公司 Method and apparatus for reducing back-to-origin requests, and computer-readable storage medium


Cited By (2)

Publication number Priority date Publication date Assignee Title
CN114466032A (zh) * 2021-12-27 2022-05-10 天翼云科技有限公司 Merged back-to-origin method and apparatus for a CDN system, and storage medium
CN114466032B (zh) * 2021-12-27 2023-11-03 天翼云科技有限公司 Merged back-to-origin method and apparatus for a CDN system, and storage medium

Also Published As

Publication number Publication date
CN111339057A (zh) 2020-06-26

Similar Documents

Publication Publication Date Title
WO2021169298A1 (zh) 2021-09-02 Method and apparatus for reducing back-to-origin requests, and computer-readable storage medium
KR20200027413A (ko) 데이터 저장 방법, 장치 및 시스템
US20170193416A1 (en) Reducing costs related to use of networks based on pricing heterogeneity
JP5975501B2 (ja) コンピューティングシステムにおいてストレージデータの暗号化不要の整合性保護を促進するメカニズム
US9432484B1 (en) CIM-based data storage management system having a restful front-end
CN108090078B (zh) 文档在线预览方法及装置、存储介质、电子设备
US11494386B2 (en) Distributed metadata-based cluster computing
TW201220197A (en) for improving the safety and reliability of data storage in a virtual machine based on cloud calculation and distributed storage environment
CN107197359B (zh) 视频文件缓存方法及装置
US10169348B2 (en) Using a file path to determine file locality for applications
CN102307234A (zh) 基于移动终端的资源检索方法
CN112199442B (zh) 分布式批量下载文件方法、装置、计算机设备及存储介质
US11709835B2 (en) Re-ordered processing of read requests
WO2023169235A1 (zh) 数据访问方法、系统、设备及存储介质
WO2021139431A1 (zh) 微服务的数据同步方法、装置、电子设备及存储介质
US10877848B2 (en) Processing I/O operations in parallel while maintaining read/write consistency using range and priority queues in a data protection system
US20230114100A1 (en) Small file restore performance in a deduplication file system
WO2021164163A1 (zh) 一种请求处理方法、装置、设备及存储介质
CN111259060B (zh) 数据查询的方法及装置
CN113806300A (zh) 数据存储方法、系统、装置、设备及存储介质
WO2012171363A1 (zh) 分布式缓存系统中的数据操作方法和装置
KR101694301B1 (ko) 스토리지 시스템의 파일 처리 방법 및 그 방법에 따른 데이터 서버
US10664170B2 (en) Partial storage of large files in distinct storage systems
CN112711572B (zh) 适用于分库分表的在线扩容方法和装置
US11249916B2 (en) Single producer single consumer buffering in database systems

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20921206

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20921206

Country of ref document: EP

Kind code of ref document: A1