CN115883910A - Progressive elastic caching method and device for fragmented video acceleration - Google Patents


Info

Publication number
CN115883910A
CN115883910A
Authority
CN
China
Prior art keywords
video
fragment
file
mapping relation
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211681973.7A
Other languages
Chinese (zh)
Inventor
阮小洲
李国林
李哲
邓铭豪
彭宇佳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianyi Cloud Technology Co Ltd
Original Assignee
Tianyi Cloud Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianyi Cloud Technology Co Ltd filed Critical Tianyi Cloud Technology Co Ltd
Priority to CN202211681973.7A
Publication of CN115883910A

Landscapes

  • Information Transfer Between Computers (AREA)

Abstract

The invention provides a progressive elastic caching method and device for accelerating fragmented video, and belongs to the technical field of network video loading. The method comprises the following steps: S1: initiating a fragmented-video header file request; S2: acquiring the mapping relation of the fragment files according to the header file; S3: initiating a video fragment request; S4: calculating the number of prefetchable fragments according to the requested fragment file; S5: controlling the video cache module to preload the subsequent corresponding fragments according to the mapping relation and the prefetchable fragment count. According to the invention, the mapping relation of the fragment files is obtained from the header file after the header-file request is received; the number of prefetchable fragments is calculated after a fragment request is received; and the video cache module is controlled to preload the subsequent corresponding fragments according to the mapping relation and that count. Caching of rarely used video is thereby reduced, video storage is optimized, video retrieval efficiency is improved, and the user's video access experience is improved.

Description

Progressive elastic caching method and device for fragmented video acceleration
Technical Field
The invention relates to the technical field of network video loading, in particular to a progressive elastic caching method and device for accelerating a segmented video.
Background
To improve the quality of Content Delivery Network (CDN) service and speed up responses to user requests, CDN vendors provide a content preloading function, i.e., caching content on service nodes before users request it. Most content providers use fragmented video files: a complete piece of content consists of multiple video fragment files, and a fragmented-video header file describes the relationships among them.
At present, two preloading modes exist: caching the first N fragments, and caching all fragments. Full-fragment caching improves access speed, but wastes a large amount of cache space when some fragment files are never accessed. With first-N caching, accessing a file that has not been cached requires going back to the origin, so the response is slow. Existing fragment preloading methods therefore cannot reconcile improving CDN access speed with optimizing cache space.
Disclosure of Invention
In order to solve the foregoing problems, embodiments of the present application provide a progressive elastic caching method and apparatus for accelerating fragmented video.
In a first aspect, the present application provides a progressive elastic caching method for accelerating fragmented video, comprising the following steps:
S1: initiating a fragmented-video header file request;
S2: acquiring the mapping relation of the fragment files according to the header file;
S3: initiating a video fragment request;
S4: calculating the number of prefetchable fragments according to the requested fragment file;
S5: controlling the video cache module to preload the subsequent corresponding fragments according to the mapping relation and the prefetchable fragment count.
Preferably, step S2 specifically includes:
S21: parsing the header file to obtain fragment information;
S22: establishing the mapping relation of the fragmented video files according to the fragment information.
Preferably, between steps S2 and S3 the method further includes: caching the mapping relation;
and step S5 specifically includes:
S51: looking up the mapping relation corresponding to the fragment file;
S52: controlling the video cache module to prefetch the corresponding fragment files according to the mapping relation combined with the prefetchable fragment count.
Preferably, step S4 specifically includes:
S41: acquiring the bandwidth, queries-per-second, and storage-utilization indexes of the current resource pool or node;
S42: acquiring the overall heat of the fragment file;
S43: calculating the number of prefetchable fragments from the bandwidth, queries-per-second, and storage-utilization indexes, combined with the overall heat.
In a second aspect, an embodiment of the present application provides a progressive elastic caching apparatus for accelerating fragmented video, comprising
a client module, a gateway module, and a video cache module;
the client module is used to initiate a fragmented-video header file request and a fragmented-video fragment request, respectively;
the video cache module is used to cache the video fragments;
the gateway module is used to acquire the mapping relation of the fragment files according to the header file when the client module requests the header file, to calculate the number of prefetchable fragments when the client module initiates a fragment request, and to control the video cache module to preload the subsequent corresponding fragments according to the mapping relation and the prefetchable fragment count.
Specifically, the gateway module comprises
a parsing unit, used to parse the header file to obtain fragment information;
and a mapping establishing unit, used to establish the mapping relation of the fragmented video files according to the fragment information.
Specifically, the apparatus further comprises a sequence cache module for caching the mapping relation;
and the gateway module further comprises
a lookup unit, used to look up the mapping relation corresponding to the fragment file in the sequence cache module;
and a prefetching unit, used to control the video cache module to prefetch the corresponding fragment files according to the mapping relation combined with the prefetchable fragment count.
Specifically, the gateway module further comprises
an index acquisition unit, used to acquire the bandwidth, queries-per-second, and storage-utilization indexes of the current resource pool or node;
a heat acquisition unit, used to acquire the overall heat of the fragment file;
and a calculating unit, used to calculate the number of prefetchable fragments from the bandwidth, queries-per-second, and storage-utilization indexes, combined with the overall heat.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor executes the computer program to implement the steps of the method as provided in the first aspect or any one of the possible implementation manners of the first aspect.
In a fourth aspect, the present application provides a computer-readable storage medium, on which a computer program is stored, which, when executed by a processor, implements the steps of the method as provided in the first aspect or any one of the possible implementations of the first aspect.
The beneficial effects of the invention are as follows: by obtaining the mapping relation of the fragment files from the header file after receiving the header-file request, calculating the number of prefetchable fragments after receiving a fragment request, and controlling the video cache module to preload the subsequent corresponding fragments according to the mapping relation and the prefetchable fragment count, the caching of rarely used video is reduced, video storage is optimized, video retrieval efficiency is improved, and the user's video access experience is improved.
Drawings
To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the embodiments are briefly described below. The drawings described below show only some embodiments of the present application; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of a progressive elastic caching method for accelerating fragmented video according to an embodiment of the present application;
fig. 2 is a schematic structural diagram of a progressive elastic caching apparatus for accelerating fragmented video according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 4 is a schematic flowchart of establishing the mapping relation of the fragment files in a progressive elastic caching method for accelerating fragmented video according to an embodiment of the present application;
fig. 5 is a schematic flowchart of the gateway dynamically calculating the prefetching policy in a progressive elastic caching method for accelerating fragmented video according to an embodiment of the present application;
fig. 6 is a schematic flowchart of an implementation of a progressive elastic caching apparatus for accelerating fragmented video according to an embodiment of the present application;
fig. 7 is a schematic flowchart of conventional preloading, described in a progressive elastic caching method for accelerating fragmented video according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
In the following description, the terms "first" and "second" are used for descriptive purposes only and are not intended to indicate or imply relative importance. The description below provides embodiments of the present application, which may be combined or interchanged with one another; the present application should therefore be construed as encompassing all possible combinations of the same and/or different embodiments described. Thus, if one embodiment includes features A, B, and C, and another embodiment includes features B and D, the present application should also be construed to include embodiments containing any other possible combination of one or more of A, B, C, and D, even though such embodiments may not be explicitly recited in the following text.
The following description provides examples, and does not limit the scope, applicability, or examples set forth in the claims. Changes may be made in the function and arrangement of elements described without departing from the scope of the disclosure. Various examples may omit, substitute, or add various procedures or components as appropriate. For example, the described methods may be performed in an order different than the order described, and various steps may be added, omitted, or combined. Furthermore, features described with respect to some examples may be combined into other examples.
Please refer to fig. 1, which is a schematic flowchart of a progressive elastic caching method for accelerating fragmented video according to an embodiment of the present application. In an embodiment of the present application, the method includes the following steps:
S1: initiating a fragmented-video header file request;
S2: acquiring the mapping relation of the fragment files according to the header file;
S3: initiating a video fragment request;
S4: calculating the number of prefetchable fragments according to the requested fragment file;
S5: controlling the video cache module to preload the subsequent corresponding fragments according to the mapping relation and the prefetchable fragment count.
In this embodiment, HLS preloading currently faces the following problem: existing methods cannot reconcile improving CDN access speed with optimizing cache space, yet customers demand fast responses while vendors face cost pressure. The present method combines the user request, network quality, and preloading: it preloads further fragment files based on the fragment the client has requested, and dynamically calculates how many fragment files to load according to network quality. Compared with full-fragment caching, this reduces preloading redundancy and improves storage-space utilization; compared with first-N caching, the fragments to preload are calculated from the request and the network quality, which reduces buffering time, guarantees access speed when users request the CDN service, and wastes as little storage space as possible on that basis.
In one implementation, step S2 specifically includes:
S21: parsing the header file to obtain fragment information;
S22: establishing the mapping relation of the fragmented video files according to the fragment information.
In one possible implementation, between steps S2 and S3 the method further includes: caching the mapping relation;
and step S5 specifically includes:
S51: looking up the mapping relation corresponding to the fragment file;
S52: controlling the video cache module to prefetch the corresponding fragment files according to the mapping relation combined with the prefetchable fragment count.
In one possible implementation, step S4 specifically includes:
S41: acquiring the bandwidth, queries-per-second, and storage-utilization indexes of the current resource pool or node;
S42: acquiring the overall heat of the fragment file;
S43: calculating the number of prefetchable fragments from the bandwidth, queries-per-second, and storage-utilization indexes, combined with the overall heat.
In the embodiment of the present application, a preloading component is designed to remedy the defects of current industry fragment-video preloading methods: it responds quickly to user requests while improving node storage utilization. The component parses the content of the fragmented-video header file to obtain the mapping relation among the video fragments. When a user then requests a certain fragment, it dynamically calculates the number N of fragments to preload from indexes such as the CDN's network bandwidth, qps (queries per second), storage utilization, and fragment-file heat. For example, if the heat is very low, only 1 to 2 fragments are cached for a short time; if the heat is high, more fragments are cached for a longer time; a machine with low resource utilization can cache more, while under resource shortage the policy caches less. The component then checks whether the N fragment files following that fragment have been preloaded and, if not, preloads them immediately, striking a balance between overhead and quality of service.
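The dynamic calculation of N described above can be illustrated as follows. This is a sketch under assumptions: the patent does not disclose a concrete formula, so the linear blend of node-load headroom and fragment heat below is purely illustrative.

```python
def prefetch_count(bandwidth_util, qps_util, storage_util, heat,
                   n_min=1, n_max=10):
    """S41-S43 sketch: the busier the node (bandwidth, qps, storage
    utilization, each in [0, 1]), the smaller the prefetch window;
    the hotter the fragment file (heat in [0, 1]), the larger it is."""
    headroom = 1.0 - max(bandwidth_util, qps_util, storage_util)
    n = round(n_min + (n_max - n_min) * headroom * heat)
    return max(n_min, min(n_max, n))

# A hot file on a lightly loaded node gets a large window; a cold file
# on a busy node falls back to caching just one fragment.
hot_idle = prefetch_count(0.2, 0.3, 0.4, heat=1.0)
cold_busy = prefetch_count(0.9, 0.9, 0.9, heat=0.1)
```

Taking the maximum of the three utilization indexes means the single most loaded resource caps the window, which matches the text's intent that a resource-starved node should cache less regardless of which resource is short.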
In the embodiment of the present application, referring to fig. 7, fig. 7 is a schematic flowchart of conventional preloading. In the conventional preloading mode, after a client request triggers preloading, the CDN node creates a thread to parse the fragment-file information in the response, then requests all fragment files according to that information, so that the index file and all fragment files in it are cached on the CDN node.
In the embodiments of the present application, the improved preloading scheme has two stages. Referring to fig. 4, a client request triggers the establishment of the fragment-file mapping relation: the client initiates a fragmented-video header file request, the mapping relation of the fragment files is established after the header-file content is parsed, and the mapping relation is cached on the CDN node. Referring to fig. 5, a fragmented-video request from the client then triggers preloading, and the gateway dynamically calculates the prefetching policy: it computes the number of fragments to prefetch from indexes such as the bandwidth, qps, and storage utilization of the current resource pool or node, combined with the overall heat of the fragment file, and then completes the prefetching of the subsequent resources.
The traditional preloading scheme is triggered once per video request and caches all fragment files. The improved scheme automatically calculates the number of fragments to preload for each fragment request from the client, combined with the indexes above, and progressively preloads the video fragments in order, which speeds up preloading while avoiding the bandwidth waste of unnecessary preloading.
The corresponding preloading information is determined from the fragment currently requested by the client, the interrelation of the video fragments, the cache state of the files, the overall heat of the fragments, and the quality indexes of the CDN. The preloading information comprises the number of fragments to preload (dynamically calculated), the video links of the fragments to preload, and the address of the content delivery network node on which to preload them. Compared with the prior art, determining the preloading information from the currently requested fragment, the interrelation of the fragments, and indexes such as CDN bandwidth, qps, and storage utilization realizes dynamic preloading of video fragments: within the limited storage space of CDN nodes, it reduces the caching of rarely used video, optimizes the video storage of the CDN nodes, improves the efficiency with which users obtain video, and improves the user's video access experience.
Traditional preloading comprises two modes, preloading all fragment files and preloading the first N fragments, both triggered once per request. Both can improve fragment access speed to some extent, and both have defects. Caching all fragment files wastes a large amount of cache space when some fragments are never accessed. Caching only the first N fragments improves access speed only for those fragments; when a fragment that was not preloaded is accessed, the back-to-origin request makes the access slow.
According to the present application, the preloading information is determined from the fragment currently requested by the client, the interrelation of the fragments, the file cache state, the quality indexes of the CDN, and the heat information of the fragment files, and the caching system in the CDN dynamically loads the video fragments according to this preloading information. This realizes dynamic preloading of video fragments, shortens fragment loading time, and reduces the caching of rarely used video. Compared with the prior art, the main advantages are: the progressive preloading method combines on-demand caching with pre-caching, saving cache space to the greatest extent while guaranteeing fast service; and by monitoring indexes such as the bandwidth, qps, and storage utilization of the node or resource pool in real time, the number of resources to prefetch is calculated, avoiding the bandwidth and node computing resources wasted by preloading unnecessary files. The present application is the first to propose dynamically preloading video fragments triggered by client requests in combination with the current state of the CDN.
Compared with the prior-art mode of triggering the preloading of all fragment files at once, the method provided by the present application effectively improves the utilization of system resources. The mapping relation of the fragment files is established and cached on the CDN node, and the preloading information triggered by each fragment request is obtained from the mapping relation of the current file, so lookup is efficient. Ordinary CDN preloading is combined with caching at request time: preloading improves response efficiency, while caching at request time saves storage space, improving CDN service efficiency.
Through progressive caching, the method effectively reduces the content-delivery-network cache space occupied by inefficient resources while guaranteeing fast responses to clients. The method is applied to CDN nodes as a plug-in and enabled or disabled by a switch, giving it good compatibility, portability, and extensibility. The fragment-file mapping relation established by the method can also directly support scenarios beyond preloading, such as querying the follow-up fragment files of the current file. By applying the method, CDN nodes can support customers' more demanding fragmented-video preloading scenarios without wasting node storage or computing resources.
A progressive elastic caching apparatus for accelerating fragmented video according to an embodiment of the present application will be described in detail with reference to fig. 2. It should be noted that the apparatus shown in fig. 2 is used to execute the method of the embodiment shown in fig. 1; for convenience of description, only the parts relevant to the embodiment of the present application are shown. For implementation details that are not disclosed, please refer to the embodiment shown in fig. 1.
Referring to fig. 2, fig. 2 is a schematic structural diagram of a progressive elastic caching apparatus for accelerating fragmented video according to an embodiment of the present application. As shown in fig. 2, the apparatus comprises
a client module 201, a video cache module 202, and a gateway module 203;
the client module 201 is configured to initiate a fragmented-video header file request and a fragmented-video fragment request, respectively;
the video cache module 202 is configured to cache the video fragments;
the gateway module 203 is configured to obtain the mapping relation of the fragmented video files according to the header file when the client module 201 requests the header file, to calculate the number of fragments to prefetch when the client module 201 initiates a fragment request, and to control the video cache module 202 to preload the subsequent corresponding fragments according to the mapping relation and the prefetch count.
In one embodiment, the gateway module 203 comprises
a parsing unit, used to parse the header file to obtain fragment information;
and a mapping establishing unit, used to establish the mapping relation of the fragmented video files according to the fragment information.
In one implementation, the apparatus further includes a sequence cache module 204 for caching the mapping relation;
and the gateway module 203 further comprises
a lookup unit, used to look up the mapping relation corresponding to the fragment file in the sequence cache module;
and a prefetching unit, used to control the video cache module to prefetch the corresponding fragment files according to the mapping relation combined with the prefetch count.
In one embodiment, the gateway module 203 further comprises
an index acquisition unit, used to acquire the bandwidth, queries-per-second, and storage-utilization indexes of the current resource pool or node;
a heat acquisition unit, used to acquire the overall heat of the fragment file;
and a calculating unit, used to calculate the number of fragments to prefetch from the bandwidth, queries-per-second, and storage-utilization indexes, combined with the overall heat.
In the embodiment of the present application, referring to fig. 6, fig. 6 is a schematic flowchart of an implementation of the progressive elastic caching apparatus, which mainly comprises the gateway module 203, the video cache module 202, the client module 201, and the sequence cache module 204.
The gateway module 203 receives requests from the client module 201, establishes the mapping relation, manages the prefetching policy, and controls the overall flow of the method. The gateways of the nodes cooperate to count the number and heat of fragment-video requests and to monitor the qps and storage state of the nodes, maximizing the utilization of CDN resources.
The video cache module 202 caches the fragment video files at the gateway's request.
The sequence cache module 204 caches, in a linked-list-like structure, the mapping relations of the fragment files parsed by the gateway, together with the cache state of the video files. The mapping relation among the fragments is obtained by the gateway parsing the header-file content; the sequence cache uses the video filename plus fragment position as the key, and the value is the fragment's back-to-origin URL and cache state.
The gateway part is developed with OpenResty and the method is implemented as a plug-in; the mapping-relation cache is implemented with OpenResty shared memory, and the video cache is implemented with Apache Traffic Server. Through progressive caching, the method effectively reduces the content-delivery-network cache space occupied by inefficient resources while guaranteeing fast responses to clients.
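A minimal sketch of the sequence cache's key/value layout and the follow-up lookup it supports (the class and method names are assumptions; the real module uses OpenResty shared memory rather than a Python dict):

```python
class SequenceCache:
    """(video filename, fragment position) -> back-to-origin URL + cache state."""

    def __init__(self):
        self._entries = {}

    def put(self, name, pos, origin_url):
        self._entries[(name, pos)] = {"url": origin_url,
                                      "state": "not_prefetched"}

    def mark(self, name, pos, state):
        self._entries[(name, pos)]["state"] = state

    def next_unfetched(self, name, pos, n):
        """Return up to n fragments after `pos` that still need prefetching,
        walking the sequence in order like a linked list."""
        out, p = [], pos + 1
        while len(out) < n and (name, p) in self._entries:
            if self._entries[(name, p)]["state"] == "not_prefetched":
                out.append((p, self._entries[(name, p)]["url"]))
            p += 1
        return out

cache = SequenceCache()
for i in range(5):
    cache.put("demo", i, f"seg{i}.ts")
cache.mark("demo", 1, "prefetched")
todo = cache.next_unfetched("demo", 0, 2)   # skips fragment 1, already cached
```

Keeping the cache state next to the URL is what lets the gateway answer "are the next N fragments already preloaded?" with a single ordered walk, as the flow in the next paragraph requires.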
The client module 201 sends a header-file request for the fragmented video to the gateway module 203. The gateway module 203 forwards the request to the video cache module 202, which returns the result. Meanwhile, the gateway module 203 establishes the fragment sequence mapping relation and sends it to the sequence cache module 204, which initializes the cache state. The gateway module 203 may then control the video cache module 202 to prefetch the first few fragments; the video cache module 202 returns the result, and the gateway module 203 sends the updated prefetch state to the sequence cache module 204. Later, the client module 201 sends a fragment request to the gateway module 203. The gateway module 203 calculates the dynamic prefetch count N and looks up the sequence mapping relation containing the fragment file; the sequence cache module 204 returns the mapping relation. For fragments whose state is not-prefetched, the gateway module 203 triggers prefetching and updates their state in the sequence cache module 204 to prefetching; it controls the video cache module 202 to prefetch the fragment video files, the video cache module 202 returns the prefetch results, and after prefetching completes the gateway module 203 sends the updated prefetch state (prefetched / not prefetched) to the sequence cache module 204.
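The message flow above can be condensed into a runnable walk-through. This is an illustrative sketch only: the gateway, video cache, and sequence cache modules are collapsed into plain dicts and functions, and every name is an assumption rather than the patent's code.

```python
seq_cache = {}        # sequence cache: (video, pos) -> {"url": ..., "state": ...}
video_cache = set()   # URLs held by the video cache module

def on_header_request(video, fragment_urls):
    """Header-file request: build the sequence mapping and
    initialize each fragment's cache state."""
    for i, url in enumerate(fragment_urls):
        seq_cache[(video, i)] = {"url": url, "state": "not_prefetched"}

def _prefetch(video, pos):
    entry = seq_cache[(video, pos)]
    video_cache.add(entry["url"])      # stand-in for the real fetch
    entry["state"] = "prefetched"      # gateway updates the sequence cache

def on_fragment_request(video, pos, n):
    """Fragment request: prefetch up to n follow-up fragments
    that are not yet in the video cache."""
    p = pos + 1
    while n > 0 and (video, p) in seq_cache:
        if seq_cache[(video, p)]["state"] == "not_prefetched":
            _prefetch(video, p)
            n -= 1
        p += 1

on_header_request("demo", ["seg0.ts", "seg1.ts", "seg2.ts", "seg3.ts"])
on_fragment_request("demo", 0, n=2)    # client requests fragment 0
```

After the fragment request, only the two fragments inside the dynamic window are cached; fragment 3 stays on the origin until a later request moves the window forward — the progressive behavior the scheme is named for.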
The method parses the content of the HLS files to obtain the mapping relation among the HLS video fragments; when a user requests a certain fragment, it checks whether the N fragment files following that fragment have been preloaded and, if not, preloads them immediately. HLS (HTTP Live Streaming) is Apple's adaptive-bitrate streaming technology, a streaming-media transport protocol based on HTTP.
It is clear to those skilled in the art that the solutions of the embodiments of the present application can be implemented by software and/or hardware. The terms "unit" and "module" in this specification refer to software and/or hardware that can perform a specific function independently or in cooperation with other components; the hardware may be, for example, a Field-Programmable Gate Array (FPGA) or an Integrated Circuit (IC).
Each processing unit and/or module in the embodiments of the present application may be implemented by an analog circuit that implements the functions described in the embodiments of the present application, or may be implemented by software that executes the functions described in the embodiments of the present application.
Referring to fig. 3, a schematic structural diagram of an electronic device according to an embodiment of the present application is shown, where the electronic device may be used to implement the method in the embodiment shown in fig. 1. As shown in fig. 3, the electronic device 300 may include: at least one central processor 301, at least one network interface 304, a user interface 303, a memory 305, at least one communication bus 302.
Wherein the communication bus 302 is used to enable connection communication between these components.
The user interface 303 may include a display screen (Display) and a camera (Camera); optionally, the user interface 303 may further include a standard wired interface and a wireless interface.
The network interface 304 may optionally include a standard wired interface and a wireless interface (e.g., a WI-FI interface).
The central processor 301 may include one or more processing cores. The central processor 301 connects various parts within the entire electronic device 300 using various interfaces and lines, and performs various functions of the terminal 300 and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory 305 and calling data stored in the memory 305. Alternatively, the central processor 301 may be implemented in at least one hardware form of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA), or Programmable Logic Array (PLA). The central processor 301 may integrate one or a combination of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like. The CPU mainly handles the operating system, the user interface, application programs, and the like; the GPU is responsible for rendering and drawing the content to be displayed by the display screen; and the modem handles wireless communication. It is understood that the modem may not be integrated into the central processor 301, but may instead be implemented by a separate chip.
The memory 305 may include a Random Access Memory (RAM) or a Read-Only Memory (ROM). Optionally, the memory 305 includes a non-transitory computer-readable medium. The memory 305 may be used to store instructions, programs, code sets, or instruction sets. The memory 305 may include a stored program area and a stored data area, wherein the stored program area may store instructions for implementing an operating system, instructions for at least one function (such as a touch function, a sound playing function, an image playing function, etc.), instructions for implementing the various method embodiments described above, and the like; the stored data area may store the data referred to in the above method embodiments. The memory 305 may alternatively be at least one storage device located remotely from the central processor 301. As shown in fig. 3, the memory 305, as a computer storage medium, may include an operating system, a network communication module, a user interface module, and program instructions.
In the electronic device 300 shown in fig. 3, the user interface 303 is mainly used as an interface for receiving user input and acquiring data input by the user; the central processor 301 may be configured to invoke the progressive elastic caching application for fragmented video acceleration stored in the memory 305 and specifically perform the following operations:
S1: initiating a fragment video header file request;
S2: acquiring the mapping relation of the fragment file according to the header file;
S3: initiating a fragment video request;
S4: calculating the prefetchable fragment number according to the fragment file;
S5: controlling the video cache module to preload the subsequent corresponding fragments according to the mapping relation and the prefetchable fragment number.
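One way the dynamic prefetch count of step S4 might be derived from the node indexes (bandwidth, query rate per second, storage utilization) and the fragment file's overall heat is sketched below. The normalization, the headroom formula, and the cap `n_max` are illustrative assumptions, not values specified in this disclosure:

```python
def prefetch_count(bandwidth_util, qps_util, storage_util, heat, n_max=10):
    """Dynamically size the prefetch window (step S4).

    bandwidth_util, qps_util, storage_util: current load of the resource
    pool or node, each normalized to [0, 1].
    heat: overall popularity ("heat") of the fragment file, in [0, 1].
    Hotter content and lighter load both allow a larger window, so that
    rarely watched videos are cached less and storage is used efficiently.
    """
    # The most loaded index determines the remaining headroom.
    headroom = 1.0 - max(bandwidth_util, qps_util, storage_util)
    if headroom <= 0:
        return 0  # node saturated: prefetch nothing
    return max(0, min(n_max, round(n_max * headroom * heat)))
```

Under this sketch a popular video on a lightly loaded node prefetches many fragments ahead, while a cold video, or any video on a saturated node, prefetches few or none.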
The present application also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the above method. The computer-readable storage medium may include, but is not limited to, any type of disk, including floppy disks, optical disks, DVDs, CD-ROMs, and magneto-optical disks, as well as microdrives, ROMs, RAMs, EPROMs, EEPROMs, DRAMs, VRAMs, flash memory devices, magnetic or optical cards, nanosystems (including molecular memory ICs), or any type of media or device suitable for storing instructions and/or data.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts described, as some steps may occur in other orders or concurrently depending on the application. Further, those skilled in the art will recognize that the embodiments described in this specification are preferred embodiments and that acts or modules referred to are not necessarily required for this application.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one type of division of logical functions, and there may be other divisions when actually implementing, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of some service interfaces, devices or units, and may be an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one position, or may be distributed on multiple network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable memory. Based on such understanding, the technical solutions of the present application, in essence, the part contributing to the prior art, or all or part of the technical solutions, can be embodied in the form of a software product, which is stored in a memory and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned memory includes various media capable of storing program code, such as a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by a program instructing associated hardware; the program may be stored in a computer-readable memory, and the memory may include a flash disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, and the like.
The above description is only an exemplary embodiment of the present disclosure, and the scope of the present disclosure should not be limited thereby. It is intended that all equivalent variations and modifications made in accordance with the teachings of the present disclosure be covered thereby. Embodiments of the present disclosure will be readily apparent to those skilled in the art from consideration of the specification and practice of the disclosure herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.

Claims (10)

1. A progressive elastic caching method for fragmented video acceleration, characterized by comprising the following steps:
S1: initiating a fragment video header file request;
S2: acquiring the mapping relation of the fragment file according to the header file;
S3: initiating a fragment video request;
S4: calculating the prefetchable fragment number according to the fragment file;
S5: controlling the video cache module to preload the subsequent corresponding fragments according to the mapping relation and the prefetchable fragment number.
2. The progressive elastic caching method for fragmented video acceleration according to claim 1, characterized in that step S2 specifically includes:
S21: parsing the header file to obtain fragment information;
S22: establishing the mapping relation of the fragment video file according to the fragment information.
3. The progressive elastic caching method for fragmented video acceleration according to claim 1 or 2, characterized in that the method further comprises, between steps S2 and S3: caching the mapping relation;
and step S5 specifically includes:
S51: searching for the mapping relation corresponding to the fragment file;
S52: controlling the video cache module to prefetch the corresponding fragment files according to the mapping relation in combination with the prefetchable fragment number.
4. The progressive elastic caching method for fragmented video acceleration according to claim 3, characterized in that step S4 specifically includes:
S41: acquiring the bandwidth, query rate per second and storage utilization indexes of the current resource pool or node;
S42: acquiring the overall heat of the fragment file;
S43: calculating the prefetchable fragment number according to the bandwidth, query rate per second and storage utilization indexes, in combination with the overall heat.
5. A progressive elastic caching apparatus for fragmented video acceleration, characterized by comprising:
a client module, a gateway module and a video cache module;
the client module is used for initiating a fragment video header file request and a fragment video request, respectively;
the video cache module is used for caching the fragment video;
and the gateway module is used for acquiring the mapping relation of the fragment file according to the header file when the client module initiates the fragment video header file request, calculating the prefetchable fragment number according to the fragment file when the client module initiates the fragment video request, and controlling the video cache module to preload the subsequent corresponding fragments according to the mapping relation and the prefetchable fragment number.
6. The progressive elastic caching apparatus for fragmented video acceleration according to claim 5, characterized in that the gateway module comprises:
a parsing unit, used for parsing the header file to obtain fragment information;
and a mapping establishing unit, used for establishing the mapping relation of the fragment video file according to the fragment information.
7. The progressive elastic caching apparatus for fragmented video acceleration according to claim 5 or 6, characterized by further comprising a sequence cache module used for caching the mapping relation;
and the gateway module further comprises:
a searching unit, used for searching the sequence cache module for the mapping relation corresponding to the fragment file;
and a prefetching unit, used for controlling the video cache module to prefetch the corresponding fragment files according to the mapping relation in combination with the prefetchable fragment number.
8. The progressive elastic caching apparatus for fragmented video acceleration according to claim 6 or 7, characterized in that the gateway module further comprises:
an index acquisition unit, used for acquiring the bandwidth, query rate per second and storage utilization indexes of the current resource pool or node;
a heat acquisition unit, used for acquiring the overall heat of the fragment file;
and a calculating unit, used for calculating the prefetchable fragment number according to the bandwidth, query rate per second and storage utilization indexes, in combination with the overall heat.
9. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the steps of the method according to any of claims 1-4 are implemented when the computer program is executed by the processor.
10. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 4.
CN202211681973.7A 2022-12-27 2022-12-27 Progressive elastic caching method and device for fragmented video acceleration Pending CN115883910A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211681973.7A CN115883910A (en) 2022-12-27 2022-12-27 Progressive elastic caching method and device for fragmented video acceleration


Publications (1)

Publication Number Publication Date
CN115883910A true CN115883910A (en) 2023-03-31

Family

ID=85754722

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211681973.7A Pending CN115883910A (en) 2022-12-27 2022-12-27 Progressive elastic caching method and device for fragmented video acceleration

Country Status (1)

Country Link
CN (1) CN115883910A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103096126A (en) * 2012-12-28 2013-05-08 中国科学院计算技术研究所 Method and system of collaborative type cache for video-on-demand service in collaborative type cache cluster
CN105279105A (en) * 2014-07-17 2016-01-27 三星电子株式会社 Adaptive mechanism used for adjusting the degree of pre-fetches streams
CN110427582A (en) * 2018-04-28 2019-11-08 华为技术有限公司 The read method and device of file cache
CN110545460A (en) * 2018-05-29 2019-12-06 北京字节跳动网络技术有限公司 Media file preloading method and device and storage medium
CN113079386A (en) * 2021-03-19 2021-07-06 北京百度网讯科技有限公司 Video online playing method and device, electronic equipment and storage medium



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination