CN113473172A - VR video caching method and device, caching service device and storage medium - Google Patents
- Publication number
- CN113473172A (application CN202010238749.5A)
- Authority
- CN
- China
- Prior art keywords
- heat
- content
- view
- caching
- future
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/231—Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers, prioritizing data for deletion
- H04N21/23106—Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers, prioritizing data for deletion involving caching operations
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/106—Processing image signals
- H04N13/111—Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation
- H04N13/117—Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation the virtual viewpoint locations being selected by the viewers or determined by viewer tracking
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/282—Image signal generators for generating image signals corresponding to three or more geometrical viewpoints, e.g. multi-view systems
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/60—Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client
- H04N21/65—Transmission of management data between client and server
- H04N21/658—Transmission by the client directed to the server
- H04N21/6587—Control parameters, e.g. trick play commands, viewpoint selection
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/81—Monomedia components thereof
- H04N21/816—Monomedia components thereof involving special video data, e.g. 3D video
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
- Processing Or Creating Images (AREA)
Abstract
The present disclosure provides a VR video caching method and apparatus, a caching service apparatus, and a storage medium, relating to the field of computer technology. The method includes: acquiring access heat information corresponding to second view content cached in the caching device and synthesis heat information reflecting use of the second view content in synthesizing other view content; obtaining future heat information of the second view content according to a preset heat prediction rule based on the access heat information and the synthesis heat information; deleting the second view content based on the future heat information; and caching view content according to a preset hit-rate-gain maximization rule. The VR video caching method and apparatus, caching service apparatus, and storage medium avoid degrading the user's quality of experience, improve the hit rate of cached VR content, optimize network resource utilization, and save construction and maintenance costs.
Description
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a VR video caching method and apparatus, a caching service apparatus, and a storage medium.
Background
Virtual Reality (VR) is a human-machine interface technology that lets users interact with a three-dimensional space in a virtual environment. VR video is built on VR and, unlike the single viewing angle of traditional video, presents a 360-degree panoramic scene that immerses the user. A VR video is captured from multiple angles (multiple views) with a multi-camera array; after a series of processing steps, the views are stored on an origin server. When the video is to be played, a VR terminal requests the video stream, and the view content the user needs is presented through tracking and rendering operations. Existing schemes for caching multi-view video require large storage space and increase the user's access delay, which degrades the user's quality of experience.
Disclosure of Invention
In view of the above, an object of the present disclosure is to provide a VR video caching method and apparatus, a caching service apparatus, and a storage medium.
According to an aspect of the present disclosure, there is provided a VR video caching method, including: receiving a view content request sent by a terminal, and determining whether the requested first view content is not cached and the cache space is insufficient; if so, acquiring access heat information corresponding to second view content cached in the caching device and synthesis heat information reflecting use of the second view content in synthesizing other view content; obtaining future heat information of the second view content according to a preset heat prediction rule based on the access heat information and the synthesis heat information; deleting the second view content based on the future heat information; and caching view content according to a preset hit-rate-gain maximization rule.
Optionally, caching view content according to the preset hit-rate-gain maximization rule includes: acquiring a maximum allowable view interval threshold of the adjacent view content needed to synthesize the first view content; and caching view content based on the maximum allowable view interval threshold.
Optionally, caching view content based on the maximum allowable view interval threshold includes: acquiring a view content set corresponding to the first view content; acquiring, from the view content set and based on the maximum allowable view interval threshold, a left candidate cache set and a right candidate cache set located on the left and right sides of the first view content, respectively; caching the first view content if second view content in the left candidate cache set and second view content in the right candidate cache set are already cached; and, if second view content in the left candidate cache set and/or the right candidate cache set is not cached, determining left maximum-interval view content and/or right maximum-interval view content corresponding to the first view content based on the maximum allowable view interval threshold, and caching the left maximum-interval view content and/or the right maximum-interval view content.
Optionally, the access heat information includes access statistics, and the synthesis heat information includes synthesis statistics; obtaining the future heat information of the second view content according to the preset heat prediction rule based on the access heat information and the synthesis heat information includes: calculating a comprehensive weighted heat based on the access statistics and the synthesis statistics; and obtaining the future heat information according to the heat prediction rule based on the comprehensive weighted heat.
Optionally, obtaining the future heat information according to the heat prediction rule based on the comprehensive weighted heat includes: generating training samples based on historical comprehensive weighted heat and historical future heat; training a preset deep learning model with a deep learning method based on the training samples to obtain a heat prediction model; and replacing the preset deep learning model with the heat prediction model, and inputting the comprehensive weighted heat into the heat prediction model to obtain the future heat.
Optionally, calculating the comprehensive weighted heat based on the access statistics and the synthesis statistics includes: calculating comprehensive weighted heat = α × access statistics + (1 - α) × synthesis statistics, where α is a weighting parameter and α ∈ [0, 1].
According to another aspect of the present disclosure, there is provided a VR video caching apparatus, including: a cache space determining module, configured to receive a view content request sent by a terminal and determine whether the requested first view content is not cached and the cache space is insufficient; a heat data acquisition module, configured to, if so, acquire access heat information corresponding to second view content cached in the caching device and synthesis heat information reflecting use of the second view content in synthesizing other view content; a heat prediction module, configured to obtain future heat information of the second view content according to a preset heat prediction rule based on the access heat information and the synthesis heat information; a cache content deletion module, configured to delete the second view content based on the future heat information; and a view content caching module, configured to cache view content according to a preset hit-rate-gain maximization rule.
Optionally, the view content caching module includes: a hit rate gain processing unit, configured to acquire a maximum allowable view interval threshold of the adjacent view content needed to synthesize the first view content; and a cache processing unit, configured to cache view content based on the maximum allowable view interval threshold.
Optionally, the hit rate gain processing unit is configured to acquire a view content set corresponding to the first view content and to acquire, from the view content set and based on the maximum allowable view interval threshold, a left candidate cache set and a right candidate cache set located on the left and right sides of the first view content, respectively; the cache processing unit is configured to cache the first view content if second view content in the left candidate cache set and second view content in the right candidate cache set are already cached, and, if second view content in the left candidate cache set and/or the right candidate cache set is not cached, to determine left maximum-interval view content and/or right maximum-interval view content corresponding to the first view content based on the maximum allowable view interval threshold and cache the left maximum-interval view content and/or the right maximum-interval view content.
Optionally, the access heat information includes access statistics, and the synthesis heat information includes synthesis statistics; the heat data acquisition module includes: a weighted heat determination unit, configured to calculate a comprehensive weighted heat based on the access statistics and the synthesis statistics; and a future heat prediction unit, configured to obtain the future heat information according to the heat prediction rule based on the comprehensive weighted heat.
Optionally, the future heat prediction unit is configured to generate training samples based on historical comprehensive weighted heat and historical future heat; train a preset deep learning model with a deep learning method based on the training samples to obtain a heat prediction model; and replace the preset deep learning model with the heat prediction model and input the comprehensive weighted heat into the heat prediction model to obtain the future heat.
Optionally, the weighted heat determination unit is configured to calculate comprehensive weighted heat = α × access statistics + (1 - α) × synthesis statistics, where α is a weighting parameter and α ∈ [0, 1].
According to still another aspect of the present disclosure, there is provided a VR video caching apparatus, including: a memory; and a processor coupled to the memory, the processor configured to perform the method described above based on instructions stored in the memory.
According to still another aspect of the present disclosure, there is provided a cache service apparatus including the VR video caching apparatus described above.
According to yet another aspect of the present disclosure, a computer-readable storage medium is provided, which stores computer instructions for execution by a processor to perform the method as described above.
The VR video caching method and apparatus, caching service apparatus, and storage medium of the present disclosure calculate a comprehensive weighted heat based on access statistics and synthesis statistics, obtain future heat through a heat prediction model, delete second view content based on the future heat, and cache view content according to a hit-rate-gain maximization rule; this avoids degrading the user's quality of experience, improves the hit rate of cached VR content, optimizes network resource utilization, and saves construction and maintenance costs.
Drawings
In order to more clearly illustrate the embodiments of the present disclosure or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly described below. It is obvious that the drawings in the following description show only some embodiments of the present disclosure, and other drawings can be obtained from them by those skilled in the art without inventive effort.
Fig. 1 is a schematic flow chart diagram of one embodiment of a VR video caching method according to the present disclosure;
fig. 2 is a schematic flow chart illustrating buffering of view content in an embodiment of a VR video buffering method according to the present disclosure;
fig. 3 is a schematic flow chart of caching view content based on a maximum allowed view interval threshold in an embodiment of a VR video caching method according to the present disclosure;
fig. 4 is a schematic flow chart illustrating obtaining future heat information in an embodiment of a VR video caching method according to the present disclosure;
fig. 5 is a schematic flow chart illustrating obtaining future heat information based on the integrated weighted heat in an embodiment of a VR video caching method according to the present disclosure;
fig. 6 is a block diagram of one embodiment of a VR video caching device according to the present disclosure;
fig. 7 is a block diagram of a view content caching module in an embodiment of a VR video caching apparatus according to the present disclosure;
fig. 8 is a block diagram of a hotness data acquisition module in an embodiment of a VR video caching device according to the present disclosure;
fig. 9 is a block diagram of another embodiment of a VR video caching apparatus according to the present disclosure.
Detailed Description
The present disclosure now will be described more fully hereinafter with reference to the accompanying drawings, in which exemplary embodiments of the disclosure are shown. The technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the drawings in the embodiments of the present disclosure, and it is obvious that the described embodiments are only a part of the embodiments of the present disclosure, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments disclosed herein without making any creative effort, shall fall within the protection scope of the present disclosure. The technical solution of the present disclosure is described in various aspects below with reference to various figures and embodiments.
The terms "first", "second", and the like are used hereinafter only for descriptive distinction and not for other specific meanings.
Video service is a new service built on the broadband and mobile internet and open to the public; it is a multimedia interactive technology that integrates images, data, and the like. For multi-view content, adjacent viewpoints share many similar parts, so a given view content can be synthesized from the nearby left and right views (within a certain interval range).
Fig. 1 is a schematic flowchart of an embodiment of a VR video caching method according to the present disclosure, as shown in fig. 1:
In step 101, a view content request sent by a terminal is received, and it is determined whether the requested first view content is not cached and the cache space is insufficient. In step 102, if so, access heat information corresponding to the second view content cached in the caching device and synthesis heat information reflecting use of the second view content in synthesizing other view content are acquired. The access heat information may include access statistics and the like, and the synthesis heat information may include synthesis statistics and the like.
In step 103, future heat information of the second view content is obtained according to a preset heat prediction rule based on the access heat information and the synthesis heat information. Various heat prediction rules may be used.
In step 104, the second view content is deleted based on the future heat information. When the cache server space is insufficient, the cached second view content with the lowest future heat can be deleted, which avoids affecting the user experience.
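To make the deletion step concrete, the following minimal sketch evicts cached view contents in ascending order of predicted future heat until enough space is free for the requested content. The data structures and the predict_future_heat callable are illustrative assumptions, not structures taken from the patent.

```python
# Minimal sketch (illustrative names): evict cached view contents with the lowest
# predicted future heat until the requested content fits in the cache.
def make_room(cache, needed_bytes, capacity_bytes, predict_future_heat):
    """cache: dict view_id -> {'size': int, 'weighted_heat': float}."""
    used = sum(item['size'] for item in cache.values())
    # Rank cached views by predicted future heat, lowest first.
    ranked = sorted(cache, key=lambda v: predict_future_heat(cache[v]['weighted_heat']))
    for view_id in ranked:
        if capacity_bytes - used >= needed_bytes:
            break
        used -= cache[view_id]['size']
        del cache[view_id]  # delete the cached view content with the lowest future heat
    return capacity_bytes - used >= needed_bytes
```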
In step 105, view content is cached according to a preset hit-rate-gain maximization rule. Various hit-rate-gain maximization rules may be used.
Fig. 2 is a schematic flow chart of buffering view content in an embodiment of a VR video buffering method according to the present disclosure, as shown in fig. 2:
A VR video is usually three-dimensional and multi-view; 6, 9, or more different view contents can be extracted from one VR video. Each view (content) of the VR video may be synthesized from its left and right adjacent views (content); that is, the current view content may be synthesized from the left and right adjacent view contents.
In step 202, view content is cached based on a maximum allowed view interval threshold.
Fig. 3 is a schematic flowchart of buffering view content based on a maximum allowable view interval threshold in an embodiment of a VR video buffering method according to the present disclosure, as shown in fig. 3:
In step 303, if second view content in the left candidate cache set and second view content in the right candidate cache set are already cached, the first view content is cached; the first view content is stored in the caching device.
Fig. 4 is a schematic flowchart of obtaining future heat information in an embodiment of a VR video caching method according to the present disclosure, as shown in fig. 4:
There are various ways to calculate the comprehensive weighted heat. For example, comprehensive weighted heat = α × access statistics + (1 - α) × synthesis statistics, where α is a weighting parameter and α ∈ [0, 1].
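As a small illustration of this weighting, and assuming per-period access and synthesis counts are available, the calculation could be written as follows; the function name and the default value of α are illustrative assumptions.

```python
# Minimal sketch: comprehensive weighted heat = alpha * access statistics
#                 + (1 - alpha) * synthesis statistics, with alpha in [0, 1].
def comprehensive_weighted_heat(access_count, synthesis_count, alpha=0.5):
    assert 0.0 <= alpha <= 1.0
    return alpha * access_count + (1.0 - alpha) * synthesis_count
```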
In step 402, the future heat information is obtained according to the heat prediction rule based on the comprehensive weighted heat. Various heat prediction rules may be used.
Fig. 5 is a schematic flowchart of obtaining future heat information based on the comprehensive weighted heat in an embodiment of the VR video caching method according to the present disclosure, as shown in fig. 5:
In step 503, the preset deep learning model is replaced with the heat prediction model, and the comprehensive weighted heat is input into the heat prediction model to obtain the future heat.
In the VR video caching method of the above embodiment, the comprehensive weighted heat is calculated based on the access statistics and the synthesis statistics, the future heat is obtained through the heat prediction model, and the second view content with the lowest future heat is deleted. This solves the problem that conventional two-dimensional video cache replacement methods, which consider only content access heat, mistakenly delete VR view content that contributes to the synthesis of other view content.
The VR video caching method of the present disclosure does not directly cache the VR view content requested by the user; instead, it adopts a strategy of synthesizing the current view from nearby view content within a certain interval range and applies a cache-hit-rate maximization principle, overcoming the extra VR viewpoint rendering and processing delay, transmission delay, and bandwidth demanded by conventional two-dimensional video cache replacement methods.
In one embodiment, a request for first view content sent by a user's VR terminal is received. For the second view content cached in the caching device, statistics are collected in each time period on the heat of user requests for that view content (access statistics) and on the heat of that view content being used to synthesize other view content (synthesis statistics).
The comprehensive weighted heat of the cached second view content is then computed as: comprehensive weighted heat = α × user access heat (access statistics) + (1 - α) × heat of synthesizing other views (synthesis statistics), where α is an adjustable parameter and α ∈ [0, 1]. The obtained comprehensive weighted heat is taken as the input of the heat prediction model, and the heat prediction model outputs the future heat.
The deep learning regression model may take various forms, such as an RNN or an LSTM. The deep learning model typically includes three layers of neurons: an input layer, an intermediate (hidden) layer, and an output layer, with the output of each layer fed to the next layer. Training samples are generated based on historical comprehensive weighted heat and historical future heat, and the deep learning model is trained on these samples with an existing deep learning training method to obtain the trained heat prediction model.
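As one possible realization of such a model, the sketch below trains an LSTM regression network (PyTorch) that maps a window of historical comprehensive weighted heat values to a future heat value. The window length, layer sizes, and training loop are illustrative assumptions, not the implementation specified by the patent.

```python
# Minimal sketch: LSTM regression from a history of comprehensive weighted heat
# values to a predicted future heat value.
import torch
import torch.nn as nn

class HeatPredictor(nn.Module):
    def __init__(self, hidden_size=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden_size, batch_first=True)
        self.out = nn.Linear(hidden_size, 1)

    def forward(self, x):            # x: (batch, window, 1) weighted-heat history
        _, (h, _) = self.lstm(x)
        return self.out(h[-1])       # predicted future heat, shape (batch, 1)

def train(model, samples, targets, epochs=50, lr=1e-3):
    """samples: (N, window, 1) historical weighted heat; targets: (N, 1) future heat."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(samples), targets)
        loss.backward()
        opt.step()
    return model
```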
The cached second view content with the lowest predicted future heat is deleted. Within a certain interval range, first view content can be synthesized from its left and right adjacent view content. For example, assume that the maximum allowable view interval threshold for synthesizing the current view content from left and right adjacent view content is L = 3.
Suppose only view content V2 is currently cached in the cache server, and the first view content the user requests to play is V3. The view interval may be taken as the difference of view content indices; for example, the interval between the first view content V3 and view content V2 is 3 - 2 = 1, and the interval between V3 and view content V5 is 5 - 3 = 2. To maximize the hit rate (i.e., to satisfy the hit-rate-gain maximization rule), the cache server exploits left-right view synthesis and caches view content V5 rather than V3: the interval requirement |V5 - V2| = 3 ≤ L = 3 is met, and with V2 and V5 cached the server can provide view contents V3, V4, and V5 (V3 and V4 by synthesis). More view content is therefore hit in the local cache, which significantly improves the cache hit rate and reduces access delay. In this way, the new view content that maximizes the hit-rate gain is cached.
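The caching decision in this example can be sketched as follows. The reading that, when only one side has a cached neighbour within L, the view cached is the one at the maximum allowable interval from that neighbour (so that every view between them becomes synthesizable) is an interpretation consistent with the V2/V3/V5 example above; the function and variable names are illustrative.

```python
# Minimal sketch: choose which view index to cache when `requested` is not cached,
# given the set of already cached view indices and the maximum allowable interval L.
def choose_view_to_cache(requested, cached, L):
    left = [v for v in cached if 0 < requested - v <= L]   # left candidate cache set
    right = [v for v in cached if 0 < v - requested <= L]  # right candidate cache set
    if left and right:
        return requested        # neighbours on both sides: cache the requested view itself
    if left:
        return max(left) + L    # e.g. V2 cached, L = 3, request V3 -> cache V5
    if right:
        return min(right) - L   # mirror case when only a right neighbour is cached
    return requested            # no usable neighbour within L: fall back to the requested view
```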
Compared with ordinary video, VR video has the following characteristics: it is usually three-dimensional and multi-view, and 6, 9, or more different view contents can be extracted from one VR video; and each view of the VR video may be synthesized from its left and right adjacent views. For ordinary two-dimensional video caching, the content requested by the user is a single data file; for multi-view video based on depth-image-based rendering, adjacent views share many similar parts, so one view content can be synthesized from nearby left and right views. If the cache server does not store the viewpoint requested by the user, it can deliver the adjacent left and right viewpoints and synthesize the content the user needs. To guarantee the quality of the synthesized view, the interval between the left and right views used for synthesis cannot exceed L.
View synthesis exploits the high correlation between left and right adjacent viewpoints in multi-view video coding; with this technique, an image of any virtual viewpoint between two viewpoints can be synthesized. View synthesis combines the image information and disparity information captured at each viewpoint, according to the camera placement and the camera parameters, to produce the image seen from any virtual viewpoint between the two viewpoints.
Multi-view video data is typically represented in a depth-map-plus-texture-map format. Depth information describes the distance of objects in the scene from the camera, and this structure is advantageous for applications that must process depth and texture information simultaneously. At the decoding end, based on the depth and texture information, a virtual viewpoint can be synthesized from an existing viewpoint by warping.
A new view (virtual view) may be synthesized from the left and right views with various existing methods. For example, two viewpoint images (left and right adjacent viewpoints) provided by a first camera and a second camera are taken as reference viewpoint images; three-dimensional image warping, small-hole filling, image complementation, and image fusion are applied to them to obtain a primary virtual viewpoint image; depth-map processing, three-dimensional image warping, small-hole filling, image complementation, and image fusion are applied to them to obtain a secondary virtual viewpoint image; and the residual holes in the primary virtual viewpoint image are filled from the secondary virtual viewpoint image to obtain the final synthesized virtual viewpoint image. In other words, the left and right neighbours of the target viewpoint and their depth maps are first 3D-warped to obtain two synthesized viewpoint images, which are then fused with existing formulas to obtain the new synthesized image.
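To illustrate only the fusion and residual-hole step described above, the sketch below assumes the left and right reference views have already been warped to the target virtual viewpoint and that boolean masks mark valid pixels; it is not a full depth-image-based rendering pipeline, and the names and blending weight are assumptions.

```python
# Minimal sketch: merge two warped reference views into one virtual viewpoint image.
import numpy as np

def fuse_warped_views(warp_left, warp_right, mask_left, mask_right, w_left=0.5):
    """warp_*: HxWx3 images already warped to the target viewpoint;
    mask_*: HxW booleans, True where the warped pixel is valid."""
    fused = np.zeros_like(warp_left, dtype=np.float32)
    both = mask_left & mask_right
    only_l = mask_left & ~mask_right
    only_r = mask_right & ~mask_left
    # Blend where both views are valid (the weight may depend on baseline distance).
    fused[both] = w_left * warp_left[both] + (1.0 - w_left) * warp_right[both]
    # Copy directly where only one view is valid.
    fused[only_l] = warp_left[only_l]
    fused[only_r] = warp_right[only_r]
    # Pixels valid in neither view remain holes; a secondary virtual viewpoint image
    # or inpainting would fill them, as described above.
    return fused
```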
In one embodiment, as shown in fig. 6, the present disclosure provides a VR video caching apparatus 60, comprising: a cache space determining module 61, a heat data acquisition module 62, a heat prediction module 63, a cache content deletion module 64, and a view content caching module 65.
The cache space determining module 61 receives a view content request sent by a terminal and determines whether the requested first view content is not cached and the cache space is insufficient; if so, the heat data acquisition module 62 acquires the access heat information corresponding to the second view content cached in the caching device and the synthesis heat information reflecting use of the second view content in synthesizing other view content.
The heat prediction module 63 obtains future heat information of the second view content according to a preset heat prediction rule based on the access heat information and the synthesis heat information. The cache content deletion module 64 deletes the second view content based on the future heat information. The view content caching module 65 caches view content according to a preset hit-rate-gain maximization rule.
In one embodiment, as shown in fig. 7, the view content caching module 65 includes a hit rate gain processing unit 651 and a cache processing unit 652. The hit rate gain processing unit 651 acquires a maximum allowable view interval threshold of the adjacent view content needed to synthesize the first view content; the cache processing unit 652 caches view content based on the maximum allowable view interval threshold.
The hit rate gain processing unit 651 acquires a view content set corresponding to the first view content, and obtains from it, based on the maximum allowable view interval threshold, a left candidate cache set and a right candidate cache set located on the left and right sides of the first view content, respectively.
If second view content in the left candidate cache set and second view content in the right candidate cache set are already cached, the cache processing unit 652 caches the first view content; if second view content in the left candidate cache set and/or the right candidate cache set is not cached, the cache processing unit 652 determines the left maximum-interval view content and/or the right maximum-interval view content corresponding to the first view content based on the maximum allowable view interval threshold and caches the left maximum-interval view content and/or the right maximum-interval view content.
In one embodiment, the access heat information includes access statistics and the like, and the synthesis heat information includes synthesis statistics and the like. As shown in fig. 8, the heat data acquisition module 62 includes a weighted heat determination unit 621 and a future heat prediction unit 622. The weighted heat determination unit 621 calculates a comprehensive weighted heat based on the access statistics and the synthesis statistics. The future heat prediction unit 622 obtains future heat information according to the heat prediction rule based on the comprehensive weighted heat.
The future heat prediction unit 622 generates training samples based on historical comprehensive weighted heat and historical future heat, trains a preset deep learning model with a deep learning method based on the training samples to obtain a heat prediction model, replaces the preset deep learning model with the heat prediction model, and obtains the future heat by inputting the comprehensive weighted heat into the heat prediction model. The weighted heat determination unit 621 calculates comprehensive weighted heat = α × access statistics + (1 - α) × synthesis statistics, where α is a weighting parameter and α ∈ [0, 1].
Fig. 9 is a block diagram of another embodiment of a VR video caching apparatus according to the present disclosure. As shown in fig. 9, the apparatus may include a memory 91, a processor 92, a communication interface 93, and a bus 94. The memory 91 stores instructions; the processor 92 is coupled to the memory 91 and is configured to execute the VR video caching method based on the instructions stored in the memory 91.
The memory 91 may be a high-speed RAM, a non-volatile memory, or the like, and may be a memory array. The memory 91 may also be partitioned into blocks, and the blocks may be combined into virtual volumes according to certain rules. The processor 92 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the VR video caching method of the present disclosure.
In one embodiment, the present disclosure provides a cache service apparatus, including the VR video caching apparatus of any preceding embodiment.
In one embodiment, the present disclosure provides a computer-readable storage medium storing computer instructions that, when executed by a processor, implement a VR video caching method as in any one of the above embodiments.
The VR video caching method and apparatus, caching service apparatus, and storage medium of the above embodiments can delete the cached view content with the lowest heat, avoiding degradation of the user's quality of experience; they improve the hit rate of the VR content distribution network, optimize its resource utilization, save construction and maintenance costs, and guarantee the user's video quality of experience.
The method and system of the present disclosure may be implemented in a number of ways. For example, the methods and systems of the present disclosure may be implemented by software, hardware, firmware, or any combination of software, hardware, and firmware. The above-described order for the steps of the method is for illustration only, and the steps of the method of the present disclosure are not limited to the order specifically described above unless specifically stated otherwise. Further, in some embodiments, the present disclosure may also be embodied as programs recorded in a recording medium, the programs including machine-readable instructions for implementing the methods according to the present disclosure. Thus, the present disclosure also covers a recording medium storing a program for executing the method according to the present disclosure.
The description of the present disclosure has been presented for purposes of illustration and description, and is not intended to be exhaustive or limited to the disclosure in the form disclosed. Many modifications and variations will be apparent to practitioners skilled in this art. The embodiment was chosen and described in order to best explain the principles of the disclosure and the practical application, and to enable others of ordinary skill in the art to understand the disclosure for various embodiments with various modifications as are suited to the particular use contemplated.
Claims (15)
1. A VR video caching method, comprising:
receiving a view content request sent by a terminal, and determining whether the requested first view content is not cached and the cache space is insufficient;
if so, acquiring access heat information corresponding to second view content cached in the caching device and synthesis heat information reflecting use of the second view content in synthesizing other view content;
obtaining future heat information of the second view content according to a preset heat prediction rule based on the access heat information and the synthesis heat information;
deleting the second view content based on the future heat information;
and caching view content according to a preset hit-rate-gain maximization rule.
2. The method of claim 1, wherein caching view content according to the preset hit-rate-gain maximization rule comprises:
acquiring a maximum allowable view interval threshold of the adjacent view content needed to synthesize the first view content;
and caching view content based on the maximum allowable view interval threshold.
3. The method of claim 2, wherein caching view content based on the maximum allowable view interval threshold comprises:
acquiring a view content set corresponding to the first view content;
acquiring, from the view content set and based on the maximum allowable view interval threshold, a left candidate cache set and a right candidate cache set located on the left and right sides of the first view content, respectively;
caching the first view content if second view content in the left candidate cache set and second view content in the right candidate cache set are already cached;
and, if second view content in the left candidate cache set and/or the right candidate cache set is not cached, determining left maximum-interval view content and/or right maximum-interval view content corresponding to the first view content based on the maximum allowable view interval threshold, and caching the left maximum-interval view content and/or the right maximum-interval view content.
4. The method of claim 1, wherein the access heat information comprises access statistical data and the synthesis heat information comprises synthesis statistical data; and obtaining the future heat information of the second view content according to the preset heat prediction rule based on the access heat information and the synthesis heat information comprises:
calculating a comprehensive weighted heat based on the access statistical data and the synthesis statistical data;
and obtaining the future heat information according to the heat prediction rule based on the comprehensive weighted heat.
5. The method of claim 4, wherein obtaining the future heat information according to the heat prediction rule based on the comprehensive weighted heat comprises:
generating training samples based on historical comprehensive weighted heat and historical future heat;
training a preset deep learning model with a deep learning method based on the training samples to obtain a heat prediction model;
and replacing the preset deep learning model with the heat prediction model, and inputting the comprehensive weighted heat into the heat prediction model to obtain the future heat.
6. The method of claim 4, wherein calculating the comprehensive weighted heat based on the access statistical data and the synthesis statistical data comprises:
calculating comprehensive weighted heat = α × access statistical data + (1 - α) × synthesis statistical data, where α is a weighting parameter and α ∈ [0, 1].
7. A VR video caching apparatus, comprising:
a cache space determining module, configured to receive a view content request sent by a terminal and determine whether the requested first view content is not cached and the cache space is insufficient;
a heat data acquisition module, configured to, if so, acquire access heat information corresponding to second view content cached in the caching device and synthesis heat information reflecting use of the second view content in synthesizing other view content;
a heat prediction module, configured to obtain future heat information of the second view content according to a preset heat prediction rule based on the access heat information and the synthesis heat information;
a cache content deletion module, configured to delete the second view content based on the future heat information;
and a view content caching module, configured to cache view content according to a preset hit-rate-gain maximization rule.
8. The apparatus of claim 7, wherein,
the view content caching module comprises:
a hit rate gain processing unit, configured to acquire a maximum allowable view interval threshold of the adjacent view content needed to synthesize the first view content;
and a cache processing unit, configured to cache view content based on the maximum allowable view interval threshold.
9. The apparatus of claim 8, wherein,
the hit rate gain processing unit is configured to acquire a view content set corresponding to the first view content and to acquire, from the view content set and based on the maximum allowable view interval threshold, a left candidate cache set and a right candidate cache set located on the left and right sides of the first view content, respectively;
the cache processing unit is configured to cache the first view content if second view content in the left candidate cache set and second view content in the right candidate cache set are already cached, and, if second view content in the left candidate cache set and/or the right candidate cache set is not cached, to determine left maximum-interval view content and/or right maximum-interval view content corresponding to the first view content based on the maximum allowable view interval threshold and cache the left maximum-interval view content and/or the right maximum-interval view content.
10. The apparatus of claim 7, wherein the access heat information comprises access statistical data and the synthesis heat information comprises synthesis statistical data;
the heat data acquisition module comprises:
a weighted heat determination unit, configured to calculate a comprehensive weighted heat based on the access statistical data and the synthesis statistical data;
and a future heat prediction unit, configured to obtain the future heat information according to the heat prediction rule based on the comprehensive weighted heat.
11. The apparatus of claim 10, wherein,
the future heat prediction unit is configured to generate training samples based on historical comprehensive weighted heat and historical future heat; train a preset deep learning model with a deep learning method based on the training samples to obtain a heat prediction model; and replace the preset deep learning model with the heat prediction model and input the comprehensive weighted heat into the heat prediction model to obtain the future heat.
12. The apparatus of claim 10, wherein,
the weighted heat determination unit is configured to calculate comprehensive weighted heat = α × access statistical data + (1 - α) × synthesis statistical data, where α is a weighting parameter and α ∈ [0, 1].
13. A VR video caching apparatus, comprising:
a memory; and a processor coupled to the memory, the processor configured to perform the method of any of claims 1-6 based on instructions stored in the memory.
14. A cache service apparatus, comprising:
the VR video caching apparatus of any one of claims 7 to 13.
15. A computer-readable storage medium having stored thereon computer instructions which, when executed by a processor, implement the method of any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010238749.5A CN113473172B (en) | 2020-03-30 | 2020-03-30 | VR video caching method and device, caching service device and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010238749.5A CN113473172B (en) | 2020-03-30 | 2020-03-30 | VR video caching method and device, caching service device and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113473172A | 2021-10-01 |
CN113473172B CN113473172B (en) | 2023-03-24 |
Family
ID=77866137
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010238749.5A Active CN113473172B (en) | 2020-03-30 | 2020-03-30 | VR video caching method and device, caching service device and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113473172B (en) |
- 2020-03-30: application CN202010238749.5A filed in China; granted as patent CN113473172B (status: Active)
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103312776A (en) * | 2013-05-08 | 2013-09-18 | 青岛海信传媒网络技术有限公司 | Method and device for caching contents of videos by edge node server |
US20190362151A1 (en) * | 2016-09-14 | 2019-11-28 | Koninklijke Kpn N.V. | Streaming virtual reality video |
CN108777802A (en) * | 2018-06-05 | 2018-11-09 | 网宿科技股份有限公司 | A kind of method and apparatus of caching VR videos |
CN113766269A (en) * | 2020-06-02 | 2021-12-07 | 中国移动通信有限公司研究院 | Video caching strategy determination method, video data processing method, device and storage medium |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114866797A (en) * | 2022-05-07 | 2022-08-05 | 湖南正好物联网科技有限公司 | 360-degree video caching method and device |
CN114866797B (en) * | 2022-05-07 | 2023-10-27 | 湖南正好物联网科技有限公司 | 360-degree video caching method and device |
CN115103023A (en) * | 2022-06-14 | 2022-09-23 | 北京字节跳动网络技术有限公司 | Video caching method, device, equipment and storage medium |
CN115103023B (en) * | 2022-06-14 | 2024-04-05 | 北京字节跳动网络技术有限公司 | Video caching method, device, equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN113473172B (en) | 2023-03-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10979663B2 (en) | Methods and apparatuses for image processing to optimize image resolution and for optimizing video streaming bandwidth for VR videos | |
JP7522259B2 (en) | Three-dimensional model encoding device, three-dimensional model decoding device, three-dimensional model encoding method, and three-dimensional model decoding method | |
US11706403B2 (en) | Positional zero latency | |
EP0675462B1 (en) | System and method of generating compressed video graphics images | |
CN113811920A (en) | Distributed pose estimation | |
CN108492322B (en) | Method for predicting user view field based on deep learning | |
Hamza et al. | Adaptive streaming of interactive free viewpoint videos to heterogeneous clients | |
US20140292751A1 (en) | Rate control bit allocation for video streaming based on an attention area of a gamer | |
CA2821830A1 (en) | Moving image distribution server, moving image playback apparatus, control method, program, and recording medium | |
US11670039B2 (en) | Temporal hole filling for depth image based video rendering | |
JP7493496B2 (en) | Image Composition | |
CN113473172B (en) | VR video caching method and device, caching service device and storage medium | |
CN107211081A (en) | The transmission of video of context update based on absolute coding | |
CN113905221A (en) | Stereo panoramic video asymmetric transmission stream self-adaption method and system | |
KR20200102507A (en) | Apparatus and method for generating image data bitstream | |
EP2391135B1 (en) | Method and device for processing depth image sequence | |
CN112584119A (en) | Self-adaptive panoramic video transmission method and system based on reinforcement learning | |
CN113766269A (en) | Video caching strategy determination method, video data processing method, device and storage medium | |
Pan et al. | 5g mobile edge assisted metaverse light field video system: Prototype design and empirical evaluation | |
Li et al. | Utility-driven joint caching and bitrate allocation for real-time immersive videos | |
CN114900506B (en) | User experience quality-oriented 360-degree video viewport prediction method | |
JP6758268B2 (en) | Clients, programs and methods to determine delivery profiles based on displayable resolution | |
Ozcinar et al. | Delivery of omnidirectional video using saliency prediction and optimal bitrate allocation | |
US20240064360A1 (en) | Distribution control apparatus, distribution control system, distribution control method and program | |
Pan et al. | Mobile edge assisted multi-view light field video system: Prototype design and empirical evaluation |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |