CN113242447B - Video data processing method and device


Info

Publication number
CN113242447B
Authority
CN
China
Prior art keywords
video
data
video data
identification information
file
Prior art date
Legal status
Active
Application number
CN202110504829.5A
Other languages
Chinese (zh)
Other versions
CN113242447A
Inventor
娄志云 (Lou Zhiyun)
Current Assignee
Beijing QIYI Century Science and Technology Co Ltd
Original Assignee
Beijing QIYI Century Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing QIYI Century Science and Technology Co Ltd
Priority to CN202110504829.5A
Publication of CN113242447A
Application granted
Publication of CN113242447B
Legal status: Active


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20: Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23: Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/232: Content retrieval operation locally within server, e.g. reading video streams from disk arrays
    • H04N21/239: Interfacing the upstream path of the transmission network, e.g. prioritizing client content requests
    • H04N21/2393: Interfacing the upstream path of the transmission network involving handling client requests
    • H04N21/25: Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/266: Channel or content management, e.g. generation and management of keys and entitlement messages in a conditional access system, merging a VOD unicast channel into a multicast channel
    • H04N21/26603: Channel or content management for automatically generating descriptors from content, e.g. when it is not made available by its provider, using content analysis techniques
    • H04N21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/437: Interfacing the upstream path of the transmission network, e.g. for transmitting client requests to a VOD server
    • H04N21/80: Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83: Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/84: Generation or processing of descriptive data, e.g. content descriptors

Abstract

The application provides a video data processing method and a video data processing device. A video data playing request sent by a terminal is received; based on identification information in the request, description data matching the identification information is obtained from a video file; using that description data, the video data matching the identification information is obtained from the video file; and the matching video data is sent to the terminal. The video file stores a plurality of pieces of video data together with the description data of each piece, and is distributed in advance to each video server that provides video data to the terminal, so compared with distributing each piece of video data as a separate unit, the difficulty of data distribution is reduced. Because the video file contains multiple pieces of video data and the video files on different video servers are the same, the terminal can send its requests for video data to the same video server and read the video data from that server's video file, which improves loading efficiency.

Description

Video data processing method and device
Technical Field
The present application relates to the field of data processing technologies, and in particular to a video data processing method and apparatus.
Background
In the related art, the more viewing angles used to shoot the same object, the more video data is produced; for example, when a program is shot, a wider coverage angle requires more camera positions and therefore yields more video data. Distributing each piece of video data individually to a plurality of video servers is highly complex, which increases the difficulty of data distribution. When a terminal plays video data, the currently requested video data must be looked up across a plurality of video servers, and because different video servers process requests at different rates and with different efficiency, the loading efficiency of the video data suffers.
Disclosure of Invention
In view of the above, an object of the present application is to provide a video data processing method and apparatus, which are used to reduce data distribution difficulty and improve loading efficiency.
In a first aspect, the present application provides a method for processing video data, the method comprising:
receiving a video data playing request sent by a terminal;
obtaining description data matched with the identification information from a video file based on the identification information in the video data playing request, wherein the video file stores a plurality of video data and the description data of each video data in the plurality of video data, and is pre-distributed to each video server providing the video data for the terminal;
acquiring video data matched with the identification information from the video file by using the description data matched with the identification information;
and sending the video data matched with the identification information to the terminal.
Optionally, the obtaining, from the video file, the video data matched with the identification information by using the description data matched with the identification information includes:
determining the position of the video data matched with the identification information in the video file by using the description data matched with the identification information;
and reading the video data matched with the identification information from the video file based on the position of the video data matched with the identification information in the video file.
Optionally, the determining, by using the description data matched with the identification information, the position of the video data matched with the identification information in the video file includes:
determining the starting position of the video data matched with the identification information in a video file and the length of the video data matched with the identification information by using the description data matched with the identification information;
determining the end position of the video data matched with the identification information in the video file according to the start position and the length;
the reading the video data matched with the identification information from the video file based on the position of the video data matched with the identification information in the video file comprises: and reading data from the starting position to the ending position to obtain the video data matched with the identification information.
Optionally, the obtaining, based on the identification information in the video data playing request, the description data matched with the identification information from the video file includes:
determining an index matched with the identification information based on the identification information in the video data playing request; and obtaining the description data containing the index from the video file based on the index matched with the identification information, and determining the description data containing the index as the description data matched with the identification information.
In a second aspect, the present application provides a method for processing video data, the method comprising:
obtaining at least two pieces of video data to be distributed;
determining description data of each piece of video data;
at least packaging the description data of each piece of video data and each piece of video data to obtain a video file;
and sending the video file to each video server providing video data for the terminal, wherein the video file is used for enabling the video server to obtain the video data requested by the terminal by using the description data.
Optionally, the at least encapsulating the description data of each piece of video data and each piece of video data to obtain a video file includes:
determining an index identifier of the video file, wherein the index identifier is used for indicating the type of the video data and the number of the video data;
determining description data of each piece of video data, wherein the description data of the video data is used for positioning the video data;
writing the index identification and the description data of each video data in a file header of a video file;
and writing each piece of video data in the file body of the video file according to the sequence of the description data of each piece of video data in the file header.
Optionally, the method further includes: determining the length of all video data in the video file, wherein the length of all video data is written into a file header of the video file, and the length of all video data is used for indicating the data length in the video file.
Optionally, the determining description data of each piece of video data includes: determining an index of each piece of video data in the video file, a starting position of the video data in the video file, and a data size of the video data; wherein an order of the description data of each piece of video data in the header is determined based on an index of each piece of video data in the video file.
Optionally, the at least two video data correspond to different viewing angles, and the at least two video data are videos of the same object at different viewing angles.
In a third aspect, the present application provides a video data processing apparatus, the apparatus comprising:
the receiving unit is used for receiving a video data playing request sent by a terminal;
a first obtaining unit, configured to obtain, based on identification information in the video data play request, description data that matches the identification information from a video file in which a plurality of pieces of video data and the description data of each piece are stored, the video file being pre-distributed to each video server that provides video data to the terminal;
a second obtaining unit, configured to obtain, from the video file, video data matching the identification information by using description data matching the identification information;
and the sending unit is used for sending the video data matched with the identification information to the terminal.
In a fourth aspect, the present application provides a video data processing apparatus, the apparatus comprising:
an obtaining unit configured to obtain at least two pieces of video data to be distributed;
a determining unit configured to determine description data of each piece of video data;
the packaging unit is used for packaging at least the description data of each piece of video data and each piece of video data to obtain a video file;
and the sending unit is used for sending the video file to each video server providing video data for the terminal, and the video file is used for enabling the video server to obtain the video data requested by the terminal by using the description data.
In a fifth aspect, the present application provides a video server, comprising: a processor and a memory for storing processor-executable instructions; wherein the processor is configured to execute the instructions to implement the video data processing method of the first aspect.
In a sixth aspect, the present application provides a data source server, including: a processor and a memory for storing processor-executable instructions; wherein the processor is configured to execute the instructions to implement the video data processing method of the second aspect.
In a seventh aspect, the present application provides a computer-readable storage medium, wherein instructions in the computer-readable storage medium, when executed, implement the above-mentioned video data processing method.
With the video data processing method and apparatus of the present application, a video data playing request sent by a terminal is received; description data matching the identification information in the request is obtained from a video file; using that description data, the video data matching the identification information is obtained from the video file; and the matching video data is sent to the terminal. The video file stores a plurality of pieces of video data together with the description data of each piece and is distributed in advance to every video server that provides video data to the terminal, so the video files on the video servers are identical and the same video file can be distributed to all of them; compared with distributing each piece of video data as a separate unit, this reduces the difficulty of data distribution. Because one video file contains multiple pieces of video data and the video files on different video servers are the same, the terminal can direct its requests for video data to the same video server and read several pieces of video data from that server's video file, which improves loading efficiency.
Drawings
To illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. The drawings described below show only some embodiments of the present application, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic diagram of an implementation environment of a video data processing method according to an embodiment of the present application;
fig. 2 is a signaling diagram of a video data processing method according to an embodiment of the present application;
fig. 3 is a schematic diagram of a video data processing method according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of a video data processing apparatus according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of another video data processing apparatus according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The embodiment of the application provides a video data processing method, which can be applied to the implementation environment shown in fig. 1. In fig. 1, the system comprises at least one terminal 11, at least one video server 12 and a data source server 13, wherein the terminal 11 can be in communication connection with the video server 12, and the video server 12 can provide video data to the terminal 11 so as to feed back the requested video data to the terminal 11 after receiving a request of the terminal 11; the data source server 13 may be in communication connection with the video servers 12, and the data source server 13 distributes the video data to each video server 12 so that the video servers 12 can feed back the video data to the terminals 11.
The terminal 11 may be any electronic product that can perform human-computer interaction with a user through one or more modes such as a mouse, a touch pad, or a touch screen, for example a PC (Personal Computer), a smart phone, a wearable terminal, a Pocket PC, a tablet computer, a smart car, a smart television, and the like. The video server 12 may be a server, a server cluster composed of a plurality of servers, or a cloud computing service center. The data source server 13 may be a server, a server cluster composed of multiple servers, or a cloud computing service center; for example, the data source server 13 is a server with a CDN (Content Delivery Network) cloud service function.
It should be understood by those skilled in the art that the above-mentioned terminal 11, video server 12 and data source server 13 are only examples, and other existing or future terminals or video servers or data source servers may be suitable for the present application and are included within the scope of the present application and are incorporated herein by reference.
In the implementation environment shown in fig. 1, the data source server 13 serves as the source of video data: it can obtain video data shot from different viewing angles and distributes the video data to each video server 12 according to a certain data distribution rule. Because tens or even hundreds of pieces of video data are stored in the data source server 13, the data source server 13 needs to reduce distribution conflicts and ensure the accuracy of video data distribution while sending the data; distributing hundreds of pieces of video data therefore requires coordination, such as coordinating the distribution time of each piece of video data with the video server to which it is distributed. This increases the difficulty of data distribution, and the difficulty grows as more video data is distributed. In addition, the terminal 11 requests video data from different video servers 12, and those servers process requests at different rates and with different efficiency because their hardware and currently remaining resources differ, so different video servers 12 feed the video data back to the terminal 11 at different times, which affects the loading efficiency of the video data.
To address the problems of the video server 12 feeding back video data and the data source server 13 distributing video data, in the video data processing method provided in the embodiment of the present application the data source server 13 may encapsulate multiple pieces of video data in one video file and provide the same video file to each video server 12. Compared with distributing the multiple pieces of video data to each video server 12 individually, this reduces the number of files the data source server 13 distributes and avoids conflicts during the distribution of the multiple pieces of video data, thereby reducing the difficulty of data distribution. Because the multiple pieces of video data are packaged in one video file, the terminal 11 can request the video data from one video server 12 and does not need to request video data from different video servers 12, which resolves the problem that loading efficiency is affected by the different rates and efficiencies with which different video servers process requests.
The following describes a video data processing method provided by an embodiment of the present application in detail with reference to the accompanying drawings. Referring to fig. 2, a signaling diagram of a video data processing method according to an embodiment of the present application is shown, which may include the following steps:
101: the data source server obtains at least two pieces of video data to be distributed.
In this embodiment, the at least two pieces of video data may be uploaded to the data source server after being captured by an image capture device, or the data source server may request the video data from at least one device such as a cloud service center or a terminal equipped with an image capture device; this embodiment does not limit how the data source server obtains the video data. For example, the at least two pieces of video data correspond to different viewing angles and are videos of the same object at those different viewing angles, which can be obtained by configuring camera devices at different viewing angles around the same object. As another example, the at least two pieces of video data are obtained by one camera device shooting different objects at the same angle; this embodiment does not limit the relationship between the at least two pieces of video data.
The data source server can traverse all the video data it stores to obtain all the video data to be distributed, that is, the video data that is stored in the data source server but has not yet been sent to a video server providing video data to terminals. For example, the data source server and the video servers form a CDN, and the distance between the data source server and the terminal is greater than the distance between a video server and the terminal; the farther the distance, the greater the possibility of data loss and collisions, and the closer the distance, the smaller that possibility. To reduce data loss and collisions, this embodiment therefore sends the video data to be distributed from the data source server to the video servers that form the CDN, and those video servers, located close to the terminal, then provide the video data to the terminal.
102: the data source server determines description data of each piece of video data.
In this embodiment, after the data source server traverses all the video data to be distributed, the video data is encapsulated into one video file, so that all the video data to be distributed are transmitted simultaneously through one video file.
Although packaging all the video data into one video file reduces the data distribution difficulty compared with distributing each piece of video data as a separate unit, it raises the question of how to read each piece of video data accurately back out of the video file. The description data is used to locate the video data within the video file: it must both distinguish the different pieces of video data and identify the position of each piece in the video file. For each piece of video data, its description data is therefore used to determine its position in the video file, so that the video data can be obtained based on its description data.
Since one video file holds many pieces of video data, the description data needs to satisfy two requirements: it must identify the video data, so that different pieces of video data can be distinguished, and it must identify the location of the video data in the video file, so that the video data can be located. In this embodiment, to distinguish different pieces of video data, the description data of any piece of video data includes: the index of the video data in the video file, the start position of the video data in the video file, and the data size of the video data. Determining the description data of each piece of video data accordingly means determining, for each piece, its index in the video file, its start position in the video file, and its data size. The description data may be determined for each piece of video data as that piece is obtained, or for every piece after all the pieces of video data have been obtained through traversal.
The index of the video data in the video file serves as a reference number for the video data within the file; for example, one of the numbers 1 to N is used as the index, where N is a natural number greater than 1. The index corresponds to the identification information of the video data, so that when video data is requested through its identification information, the video data carrying that identification information can be found. The identification information of the video data can be represented by, but is not limited to, the name and the viewing angle of the video data.
The starting position of the video data in the video file may represent the absolute offset of the video data in the video file, which is related to the length of the data that precedes the video data in the video file. For example, if the preceding piece of video data is M bytes long, the start position of the current piece in the video file is the (M × 8 + 1)-th bit, where M is a natural number greater than 1; this embodiment does not limit how the start position is obtained or represented. Of course, the starting position may also directly record the bit number of the first bit of the video data within the video file, so that this bit number is used to represent the starting position of the video data.
The data size of the video data may indicate the length/file size (e.g., the number of bytes included) of the video data, so as to determine the end position of the video data in the video file by the start position of the video data in the video file and the length of the video data, thereby locating the position of the video data in the video file.
In the present embodiment, one form of the description data is shown in table 1:
Table 1. One form of the description data
Index | Data size | Starting position
Index: 16 bits, a reference number identifying the currently indexed video data.
Data size: 16 bits, identifying the length/file size of the currently indexed video data.
Starting position: 32 bits, identifies the absolute offset of the currently indexed video data in the video file.
An index can be allocated to each piece of video data by selecting numbers from 1 to N in sequence; the order of the pieces of video data in the video file is then set, so that the starting position of each piece in the video file is obtained from this order and from the data size of each piece.
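As an illustration of the layout in Table 1, the following Python sketch packs one description-data entry into the 64-bit form described above (16-bit index, 16-bit data size, 32-bit starting position) and derives each starting position from the sizes of the preceding pieces of video data. The function names are illustrative, and byte offsets are used instead of the bit offsets discussed above purely to keep the sketch simple; neither choice is prescribed by the embodiment.

    import struct

    # One description-data entry per Table 1: 16-bit index, 16-bit data size,
    # 32-bit starting position (big-endian assumed), 64 bits in total.
    ENTRY_FORMAT = ">HHI"

    def pack_entry(index, data_size, start):
        """Pack a single description-data entry into its 64-bit form."""
        return struct.pack(ENTRY_FORMAT, index, data_size, start)

    def build_entries(data_sizes, body_offset):
        """Allocate indices 1..N in sequence and compute each starting position
        from the cumulative sizes of the video data written before it."""
        entries, offset = [], body_offset
        for index, size in enumerate(data_sizes, start=1):
            entries.append(pack_entry(index, size, offset))
            offset += size
        return entries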
In this embodiment, the description data may also take other forms. For example, the description data may include two fields: the identification information of the video data and the position of the video data in the video file. The identification information may be obtained from the name and viewing angle of the video data, for example by using the name and viewing angle directly as the identification information, and the position field may directly record the start position and the end position of the video data. Like the description data shown in Table 1, this form locates the video data through its position in the video file, while omitting the step of determining the end position from the start position and the data size.
103: the data source server at least encapsulates the description data of each piece of video data and each piece of video data to obtain a video file, wherein the video file at least comprises the description data of each piece of video data and each piece of video data.
In the process of packaging the description data and the video data, the data source server firstly writes the description data of each piece of video data in sequence, and then sequentially writes each piece of video data according to the sequence of the description data of each piece of video data in all the description data, so that the sequence of any piece of video data in all the video data is consistent with the sequence of the description data in all the description data.
In the process of writing the description data of each piece of video data in sequence, the writing of the description data can be controlled according to the indexes in the description data, for example, the description data of each piece of video data is written in sequence according to the sequence indicated by the indexes in each piece of description data; taking a number in indexes 1 to N as an example, the description data of each piece of video data is written in sequence from 1 to N according to the size order of the indexes, so that the description data of the index with the smaller number is positioned before the description data of the index with the larger number.
In this embodiment, the video file may include other types of data besides the description data and the video data, and the video file may be divided into two parts, namely a header part and a body part, and at least the description data of each piece of video data and each piece of video data are correspondingly encapsulated, so that one possible way to obtain the video file is as follows:
determining an index identifier of the video file, wherein the index identifier is used for indicating the type of the video data and the number of the video data; determining description data of each piece of video data, wherein the description data of the video data is used for positioning the video data, and the description data is used for positioning the position of the video data in a video file; writing an index identification and description data of each video data in a file header of a video file; and writing each piece of video data in a file body of the video file according to the sequence of the description data of each piece of video data in the file header.
The video data packaged in one video file can be of the same type; for example, the packaged video data is MP4 data. MP4 data needs no additional information during playback, such as control information for the playing process, so the description data of each piece of video data is written into the file header of the video file. In this way related video data (for example, video data of the same target shot from different viewing angles) is concentrated in one video file: related MP4 data is gathered into a single video file, the corresponding description data is added when the data is concentrated, and the original MP4 data does not need to be adjusted. This preserves the integrity of the MP4 data, and the MP4 data can be located through the description data, read, and thus restored. If the video data encapsulated in the video file does need additional information during playback, the information used for playing, such as the control information required to play the video data, is written into the file header or at the position of the encapsulated video data at the same time; how to encapsulate such video data is not described in detail here.
For an explanation of the description data, see the explanation above: the description data includes the index of the video data in the video file, the starting position of the video data in the video file, and the data size of the video data, by which the video data is located. Determining the description data of each piece of video data accordingly includes determining the index of each piece of video data in the video file, the starting position of the video data in the video file, and the data size of the video data, wherein the order of the description data of each piece of video data in the header is determined based on the index of each piece of video data in the video file. The indexes can indicate the order; taking an index that is one of the numbers 1 to N as an example, an index with a smaller number is arranged before an index with a larger number, so during writing the description data of the smaller index is positioned before the description data of the larger index, and the video data can be written correspondingly in the order indicated by the indexes.
In the present embodiment, one form of the file header is shown in table 2:
Table 2. One form of the file header
Index identification | Description data
The index identification includes a type (8 bits) and a number (8 bits). The type indicates the type of the currently packaged video data, such as the MP4 type or another custom type, so the format can be extended to package other types of video data; the number indicates how many pieces of video data are encapsulated in the video file.
Description data: x is 64 bits, X is the number of video data encapsulated in a video file, and description data of one piece of video data occupies 64 bits.
In addition, another form of the header is shown in table 3:
Table 3. Another form of the file header
Index identification | Length | Description data
Compared with the header shown in Table 2, a length field is added to indicate the length of the data in the video file.
Correspondingly, one form of video file is shown in table 4:
Table 4. One form of the video file
File header | Video data 1 | Video data 2 | …… | Video data X
The file size of the video file equals the index identification (16 bits) plus the length field (64 bits) plus the description data (X × 64 bits) plus the total size of the X pieces of video data.
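As a concrete sketch of the writer side, the Python function below assembles a video file according to Tables 2 to 4: an 8-bit type, an 8-bit number, a 64-bit length field, one 64-bit description entry per piece of video data, followed by the concatenated video data. The type code 1 for MP4, the use of byte offsets, and the function name are assumptions made for illustration only.

    import struct

    MP4_TYPE = 1  # assumed type code; the embodiment only fixes the field width (8 bits)

    def encapsulate(videos, video_type=MP4_TYPE):
        """Encapsulate several pieces of video data into one video file
        (header per Table 3, body per Table 4)."""
        count = len(videos)
        header_size = 1 + 1 + 8 + 8 * count         # type + number + length + entries
        total_length = sum(len(v) for v in videos)  # value of the "length" field

        entries, offset = b"", header_size          # first piece starts right after the header
        for index, v in enumerate(videos, start=1):
            entries += struct.pack(">HHI", index, len(v), offset)
            offset += len(v)

        header = struct.pack(">BBQ", video_type, count, total_length) + entries
        return header + b"".join(videos)

A call such as encapsulate([mp4_view1, mp4_view2]) then yields a single file that can be distributed unchanged to every video server 12.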
The video data processing method executed by the data source server may further include: determining the length of all the video data in the video file and writing it into the file header of the video file, as in the "length" field of Table 3, to indicate the length of the data in the video file; the description data can then be checked against this "length" field. The data size in each piece of description data indicates the length of the corresponding video data, so the total size of all the video data can be obtained by summing the data sizes in the description data of all the video data. If this total equals the value of the length field in the file header, the description data is correct; if the two differ, the description data may contain an error.
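That check can be sketched as follows, under the same assumed byte-level layout as above: the data sizes recorded in the description entries are summed and compared with the value of the length field read from the file header.

    import struct

    def verify_description_data(video_file):
        """Return True when the summed data sizes in the description entries
        equal the 64-bit length field in the file header (layout per Table 3)."""
        _type, count = struct.unpack_from(">BB", video_file, 0)
        (length_field,) = struct.unpack_from(">Q", video_file, 2)
        total = 0
        for i in range(count):
            _index, size, _start = struct.unpack_from(">HHI", video_file, 10 + 8 * i)
            total += size
        return total == length_field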
104: the data source server transmits a video file to each video server providing video data for the terminal, and the video file is used for enabling the video server to obtain the video data requested by the terminal by using the description data. The data source server distributes data by taking a video file packaged with a plurality of pieces of video data as a unit, reduces the number of distributed files, reduces conflicts in the distribution process and saves a file distribution link compared with the situation that each piece of video data is taken as a unit, and therefore the data distribution difficulty is reduced.
105: and the video server receives a video data playing request sent by the terminal.
One video data playing request can carry the identification information of a plurality of pieces of video data, so that several pieces of video data can be requested from the video server through a single request. Because all the video data traversed by the data source server is encapsulated in the video file received by the video server, the video data stored on the video server is relatively comprehensive, and the several pieces of video data named in one playing request can therefore all be obtained from the same video server; accordingly, the video data playing request carries the identification information of those pieces of video data.
Because one video server can feed back all of the requested pieces of video data to one terminal, the requested pieces can be fed back to the terminal sequentially or simultaneously, which shortens the time difference between the feedback of the different pieces of video data and improves the loading efficiency of the video data.
106: and the video server obtains the description data matched with the identification information from the video file based on the identification information in the video data playing request.
In this embodiment, a plurality of pieces of video data and the description data of each piece are stored in a video file, and the video file is distributed in advance to each video server that provides video data to terminals; for the video data and the description data in the video file, refer to the description above. The description data is used to locate the video data, and the description data of each piece of video data corresponds to the identification information of that piece, for example through the index in the description data, so one possible way to obtain the description data matching the identification information is as follows:
the method comprises the steps of determining an index matched with identification information based on the identification information in a video data playing request, obtaining description data containing the index matched with the identification information from a video file based on the index matched with the identification information, and determining the description data containing the index matched with the identification information as the description data matched with the identification information.
107: and the video server acquires the video data matched with the identification information from the video file by using the description data matched with the identification information.
In the embodiment, the description data is used not only for distinguishing the video data but also for locating the video data, such as locating the position of the video data in the video file, so that the video data with the matching identification information can be accurately read from the video file through the description data.
One possible way to obtain video data with matching identification information may be, but is not limited to: determining the position of the video data matched with the identification information in the video file by using the description data matched with the identification information; and reading the video data matched with the identification information from the video file based on the position of the video data matched with the identification information in the video file.
For example, if the description data includes a start position and a data size, the position in the video file of the video data matching the identification information may be determined as follows: using the description data matching the identification information, the starting position of that video data in the video file and its length are determined, and the end position of the video data in the video file is then determined from the start position and the length. The start position and the end position together give the position of the matching video data in the video file, and the data from the start position to the end position is read to obtain that video data. As another example, if the description data includes a position field, the position of the matching video data in the video file can be determined directly from that field.
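Given a matched description entry, the reading step can be sketched as follows: the end position is computed from the start position and the length, and exactly that range is read from the video file. Seek-based file access and byte offsets are assumptions for illustration; the embodiment itself does not prescribe them.

    def read_video_data(path, start, data_size):
        """Read the video data matching the identification information:
        end position = start position + length, then return the bytes
        from the start position up to the end position."""
        end = start + data_size
        with open(path, "rb") as f:
            f.seek(start)
            return f.read(end - start)  # i.e. data_size bytes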
108: and the video server sends the video data matched with the identification information to the terminal.
The video data processing method receives a video data playing request sent by a terminal; obtains, based on the identification information in the request, description data matching the identification information from the video file; obtains, using that description data, the video data matching the identification information from the video file; and sends the matching video data to the terminal. Because the video file stores a plurality of pieces of video data together with the description data of each piece and is distributed in advance to every video server that provides video data to the terminal, the video files on the video servers are identical and the same video file can be distributed to all of them, which reduces the difficulty of data distribution compared with distributing each piece of video data as a separate unit. Because one video file contains multiple pieces of video data and the video files on different video servers are the same, the terminal can direct its requests for video data to the same video server and read several pieces of video data from that server's video file, which improves loading efficiency.
Taking MP4 data as the example of the video data, as shown in fig. 3, the data source server 13 obtains at least two pieces of MP4 data to be distributed; the MP4 data may be obtained by shooting the same object, with different pieces of MP4 data corresponding to different viewing angles. The data source server 13 determines the description data of each piece of MP4 data, and the start positions of different pieces of MP4 data may differ; for example, the data source server 13 assigns an index to each piece of MP4 data and determines its data size. The MP4 data is written into the file body of the video file. For the piece of MP4 data written first, the start position is the first bit of the file body; for example, if the file header of the video file occupies index identification (16 bits) + length (64 bits) + description data (N × 64 bits), then the start position of the first piece of MP4 data written into the video file is index identification (16 bits) + length (64 bits) + description data (N × 64 bits) + 1. For every other piece of MP4 data written into the video file, the start position can be determined from the data sizes of the MP4 data preceding it, the start position being represented as an absolute offset.
The data source server 13 writes the MP4 data into the video file in the order of the written description data, so that several scattered but associated pieces of MP4 data are merged into one video file; this resolves the problem of distributing many pieces of MP4 data separately and improves storage efficiency. Because of MP4's playback characteristics, MP4 data can be played on its own without control information, so in this embodiment the description data is added to the file header and the MP4 data itself does not need to be modified, which preserves the integrity of the MP4 data. The description data in the file header is simple, easy to operate on, and highly extensible, so the information carried by the file header can be extended continuously. The MP4 data merged into a video file is divided sequentially, which conforms to the rules for streaming media transmitted over the Internet; the file can be loaded starting from the file header, and the pieces of video data can then be read one by one.
The data source server 13 distributes the video file to the respective video servers 12. The terminal 11 sends a video data playing request to a video server 12; the video server 12 extracts the identification information from the request, obtains the MP4 data matching the identification information by using the identification information and the video file, and feeds the MP4 data back. Since the description data is obtained from the file header and the video data is looked up according to it, once the MP4 data has been combined into one video file the MP4 data can be found through the file header, which improves usage efficiency and greatly facilitates online use and playing. One video server can feed back all of the MP4 data requested by a terminal, so the several requested pieces of video data can be fed back to the terminal sequentially or simultaneously; this shortens the time gap between feeding back the different pieces of video data and improves the loading efficiency of the video data.
Corresponding to the foregoing method embodiments, an embodiment of the present application provides a video data processing apparatus, which may have an optional structure as shown in fig. 4, and may include: a receiving unit 10, a first obtaining unit 20, a second obtaining unit 30 and a transmitting unit 40.
The receiving unit 10 is configured to receive a video data playing request sent by a terminal.
A first obtaining unit 20, configured to obtain, based on the identification information in the video data play request, description data that matches the identification information from a video file in which a plurality of pieces of video data and the description data of each piece are stored, the video file being distributed in advance to each video server that provides video data to the terminal.
One way for the first obtaining unit 20 to obtain the description data may be: determining an index matched with the identification information based on the identification information in the video data playing request; and obtaining the description data containing the index from the video file based on the index matched with the identification information, and determining the description data containing the index as the description data matched with the identification information.
A second obtaining unit 30, configured to obtain video data matching the identification information from the video file by using the description data matching the identification information. One way in which the second obtaining unit 30 obtains the video data may be: determining the position of the video data matched with the identification information in the video file by using the description data matched with the identification information; and reading the video data matched with the identification information from the video file based on the position of the video data matched with the identification information in the video file.
For example, using the description data matching the identification information, the starting position of the matching video data in the video file and the length of that video data are determined; the end position of the matching video data in the video file is determined from the start position and the length; and the data from the start position to the end position is read to obtain the video data matching the identification information.
And a sending unit 40, configured to send the video data with the matching identification information to the terminal.
Referring to fig. 5, an alternative structure of another video data processing apparatus provided in the embodiment of the present application is shown, which may include: an obtaining unit 100, a determining unit 200, a packaging unit 300 and a sending unit 400.
An obtaining unit 100 is configured to obtain at least two pieces of video data to be distributed. The at least two video data correspond to different visual angles, and the at least two video data are videos of the same object under different visual angles.
A determining unit 200 for determining the description data of each piece of video data. Such as determining the index of each piece of video data in the video file, the starting position of the video data in the video file, and the data size of the video data; wherein the order of the description data of each piece of video data in the header is determined based on the index of each piece of video data in the video file.
The encapsulating unit 300 is configured to encapsulate at least the description data of each piece of video data and each piece of video data to obtain a video file. The packaging process is as follows:
determining an index identifier of the video file, wherein the index identifier is used for indicating the type of the video data and the number of the video data; determining description data of each piece of video data, wherein the description data of the video data is used for positioning the video data; writing an index identification and description data of each video data in a file header of a video file; and writing each piece of video data in a file body of the video file according to the sequence of the description data of each piece of video data in the file header.
In the present embodiment, the determining unit 200 is further configured to determine lengths of all video data in the video file, the lengths of all video data being written into a header of the video file, the lengths of all video data being used to indicate data lengths in the video file.
A sending unit 400, configured to send a video file to each video server providing video data for the terminal, where the video file is used for the video server to obtain the video data requested by the terminal by using the description data.
The video data processing device receives a video data playing request sent by a terminal; obtains, based on the identification information in the request, description data matching the identification information from the video file; obtains, using that description data, the video data matching the identification information from the video file; and sends the matching video data to the terminal. Because the video file stores a plurality of pieces of video data together with the description data of each piece and is distributed in advance to every video server that provides video data to the terminal, the video files on the video servers are identical and the same video file can be distributed to all of them, reducing the difficulty of data distribution compared with distributing each piece of video data as a separate unit. Because one video file contains multiple pieces of video data and the video files on different video servers are the same, the terminal can direct its requests for video data to the same video server and read several pieces of video data from that server's video file, which improves loading efficiency.
For the video data processing apparatus shown in fig. 4 and 5, please refer to the method embodiment for the description of each unit in the video data processing apparatus, which is not repeated herein.
An embodiment of the present application further provides a video server, including: a processor and a memory for storing processor-executable instructions. Wherein the processor is configured to execute the instructions to implement the video data processing method described above.
An embodiment of the present application further provides a data source server, including: a processor and a memory for storing processor-executable instructions. Wherein the processor is configured to execute the instructions to implement the video data processing method described above.
The embodiment of the present application also provides a computer-readable storage medium, and when instructions in the computer-readable storage medium are executed, the video data processing method is implemented.
It should be noted that, various embodiments in this specification may be described in a progressive manner, and features described in various embodiments in this specification may be replaced with or combined with each other, each embodiment focuses on differences from other embodiments, and similar parts between various embodiments may be referred to each other. For the device-like embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
Finally, it should also be noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing is only a preferred embodiment of the present application. It should be noted that those skilled in the art can make several improvements and modifications without departing from the principle of the present application, and such improvements and modifications should also fall within the protection scope of the present application.

Claims (13)

1. A method of video data processing, the method comprising:
the video server receives a video data playing request sent by a terminal, wherein the video data playing request carries identification information of a plurality of pieces of video data, and the plurality of pieces of video data are video data of the same object at different viewing angles, captured by camera devices arranged at the different viewing angles of the same object;
obtaining description data matched with the identification information from a video file based on the identification information in the video data playing request, wherein the video file stores a plurality of pieces of video data and description data of each piece of video data, the description data is used for positioning the video data, the description data of each piece of video data corresponds to the identification information of the video data, a data source server pre-distributes the video file to each video server providing the video data for the terminal, the distance between the data source server and the terminal is greater than the distance between the video server and the terminal, and the data source server performs data distribution by taking one video file packaged with a plurality of pieces of video data as a unit;
acquiring a plurality of pieces of video data matched with the identification information from the video file by using the description data matched with the identification information;
and sending the plurality of pieces of video data matched with the identification information to the terminal.
2. The method according to claim 1, wherein the obtaining, from the video file, the plurality of pieces of video data matched with the identification information by using the description data matched with the identification information comprises:
determining the position of the video data matched with the identification information in the video file by using the description data matched with the identification information;
and reading a plurality of pieces of video data matched with the identification information from the video file based on the position of the video data matched with the identification information in the video file.
3. The method according to claim 2, wherein the determining, by using the description data matching the identification information, a position of the video data matching the identification information in the video file comprises:
determining the starting position of the video data matched with the identification information in the video file and the length of the video data matched with the identification information by using the description data matched with the identification information;
determining the end position of the video data matched with the identification information in the video file according to the start position and the length;
the reading the video data matched with the identification information from the video file based on the position of the video data matched with the identification information in the video file comprises: and reading data from the starting position to the ending position to obtain the video data matched with the identification information.
4. The method of claim 2, wherein obtaining the description data matching with the identification information from the video file based on the identification information in the video data playing request comprises:
determining an index matched with the identification information based on the identification information in the video data playing request; and obtaining the description data containing the index from the video file based on the index matched with the identification information, and determining the description data containing the index as the description data matched with the identification information.
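By way of illustration only, the following is a minimal sketch, in Python, of the locating steps recited in claims 2 to 4: the description data matched with the identification information is selected by its index, the end position is derived as start position plus length, and the bytes between the start and end positions are read. The dictionary field names (index, start, length) are assumptions made for the example; the claims do not prescribe a concrete data layout.

```python
from typing import List, Optional, Tuple


def find_description(header: List[dict], index: int) -> Optional[dict]:
    """Claim 4: select the description data containing the index matched with the identification information."""
    for description in header:
        if description["index"] == index:
            return description
    return None


def locate(description: dict) -> Tuple[int, int]:
    """Claim 3: end position = start position + length of the piece of video data."""
    start = description["start"]
    return start, start + description["length"]


def read_piece(path: str, description: dict) -> bytes:
    """Claim 2: read the data between the start position and the end position."""
    start, end = locate(description)
    with open(path, "rb") as f:
        f.seek(start)
        return f.read(end - start)
```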
5. A method of video data processing, the method comprising:
a data source server obtains at least two pieces of video data to be distributed;
determining description data of each piece of video data, wherein the description data is used for positioning the video data, and the description data of each piece of video data corresponds to the identification information of the video data;
at least packaging the description data of each piece of video data and each piece of video data to obtain a video file, wherein the data source server distributes data by taking one video file packaged with a plurality of pieces of video data as a unit, and the distance between the data source server and a terminal is greater than the distance between the video server and the terminal;
and sending the video file to each video server providing the video data for the terminal, wherein the video file is used for enabling the video server to obtain, by using the description data, the plurality of pieces of video data requested by the terminal, a video data playing request sent by the terminal to the video server carries identification information of the plurality of pieces of video data, and the plurality of pieces of video data are video data of the same object at different viewing angles, captured by camera devices arranged at the different viewing angles of the same object.
6. The method according to claim 5, wherein said encapsulating at least the description data of each piece of video data and each piece of video data to obtain a video file comprises:
determining an index identifier of the video file, wherein the index identifier is used for indicating the type of the video data and the number of the video data;
determining description data of each piece of the video data, wherein the description data of the video data is used for positioning the video data;
writing the index identification and the description data of each piece of video data in a file header of a video file;
and writing each piece of video data in the file body of the video file according to the sequence of the description data of each piece of video data in the file header.
7. The method of claim 6, further comprising: determining the length of all the video data in the video file, wherein the length of all the video data is written into a file header of the video file, and the length of all the video data is used for indicating the data length in the video file.
8. The method of claim 6, wherein said determining description data for each piece of said video data comprises: determining an index of each piece of the video data in the video file, a start position of the video data in the video file, and a data size of the video data; wherein an order of description data of each piece of the video data in the header is determined based on an index of each piece of the video data in the video file.
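By way of illustration only, the following is a minimal sketch, in Python, of the encapsulation recited in claims 5 to 8: an index identifier (type and number of the video data), the total length of all the video data, and per-piece description data (index, start position in the video file, data size) are written into the file header, and the pieces of video data are written into the file body in the same order as their description data. The concrete byte layout (fixed-width big-endian fields via struct) is an assumption made for the example and is not specified by the claims.

```python
import struct
from typing import List


def build_video_file(pieces: List[bytes], video_type: int) -> bytes:
    """Encapsulate several pieces of video data together with their description data."""
    count = len(pieces)
    # Header size: index identifier (8 bytes) + total length (8 bytes)
    # + one 20-byte description record per piece of video data.
    header_size = 16 + 20 * count
    # Index identifier: type of the video data and number of pieces (claim 6).
    header = struct.pack(">II", video_type, count)
    # Total length of all the video data in the file body (claim 7).
    header += struct.pack(">Q", sum(len(p) for p in pieces))
    # Description data per piece: index, start position in the video file, data size (claim 8).
    offset = header_size
    for index, piece in enumerate(pieces):
        header += struct.pack(">IQQ", index, offset, len(piece))
        offset += len(piece)
    # File body: each piece written in the order of its description data in the header (claim 6).
    return header + b"".join(pieces)
```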
9. A video data processing apparatus, for use in a video server, the apparatus comprising:
the receiving unit is used for receiving a video data playing request sent by a terminal, wherein the video data playing request carries identification information of a plurality of pieces of video data, and the plurality of pieces of video data are video data of the same object at different viewing angles, captured by camera devices arranged at the different viewing angles of the same object;
a first obtaining unit, configured to obtain description data matched with identification information from a video file based on the identification information in the video data playing request, where the video file stores a plurality of pieces of video data and description data of each piece of video data, the description data is used to locate the video data, the description data of each piece of video data corresponds to the identification information of the video data, and a data source server pre-distributes the video file to each video server that provides the video data to the terminal, a distance between the data source server and the terminal is greater than a distance between the video server and the terminal, and the data source server performs data distribution in units of one video file in which a plurality of pieces of video data are encapsulated;
a second obtaining unit, configured to obtain, from the video file, the plurality of pieces of video data matched with the identification information by using the description data matched with the identification information;
and the sending unit is used for sending the video data matched with the identification information to the terminal.
10. A video data processing apparatus, for use in a data source server, the apparatus comprising:
an obtaining unit configured to obtain at least two pieces of video data to be distributed;
a determining unit, configured to determine description data of each piece of the video data, where the description data is used to locate the video data, and the description data of each piece of the video data corresponds to identification information of the video data;
an encapsulation unit, configured to encapsulate at least the description data of each piece of the video data and each piece of the video data to obtain a video file, where the data source server distributes data by taking one video file in which a plurality of pieces of video data are packaged as a unit, and the distance between the data source server and the terminal is greater than the distance between the video server and the terminal;
a sending unit, configured to send the video file to each video server that provides the video data for the terminal, where the video file is used to enable the video server to obtain, by using the description data, the plurality of pieces of video data requested by the terminal, a video data play request sent by the terminal to the video server carries identification information of the plurality of pieces of video data, and the plurality of pieces of video data are video data of the same object at different viewing angles, captured by camera devices arranged at the different viewing angles of the same object.
11. A video server, comprising: a processor and a memory for storing processor-executable instructions; wherein the processor is configured to execute the instructions to implement the video data processing method of any of claims 1 to 4.
12. A data source server, comprising: a processor and a memory for storing processor-executable instructions; wherein the processor is configured to execute the instructions to implement the video data processing method of any of claims 5 to 8.
13. A computer-readable storage medium, wherein instructions in the computer-readable storage medium, when executed, implement the video data processing method of any one of claims 1 to 8.
CN202110504829.5A 2021-05-10 2021-05-10 Video data processing method and device Active CN113242447B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110504829.5A CN113242447B (en) 2021-05-10 2021-05-10 Video data processing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110504829.5A CN113242447B (en) 2021-05-10 2021-05-10 Video data processing method and device

Publications (2)

Publication Number Publication Date
CN113242447A CN113242447A (en) 2021-08-10
CN113242447B true CN113242447B (en) 2022-05-17

Family

ID=77133145

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110504829.5A Active CN113242447B (en) 2021-05-10 2021-05-10 Video data processing method and device

Country Status (1)

Country Link
CN (1) CN113242447B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5568181A (en) * 1993-12-09 1996-10-22 International Business Machines Corporation Multimedia distribution over wide area networks
CN103279474A (en) * 2013-04-10 2013-09-04 深圳康佳通信科技有限公司 Video file index method and system
CN105228001A (en) * 2015-09-26 2016-01-06 北京暴风科技股份有限公司 The method and system that a kind of FLV format video is play online

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8122236B2 (en) * 2001-10-24 2012-02-21 Aol Inc. Method of disseminating advertisements using an embedded media player page
TW201540057A (en) * 2014-04-03 2015-10-16 Primax Electronics Ltd Method for playing video media of video network in area network and video media playing system

Also Published As

Publication number Publication date
CN113242447A (en) 2021-08-10

Similar Documents

Publication Publication Date Title
CN109640113B (en) Processing method for dragging video data and proxy server
US9917916B2 (en) Media delivery service protocol to support large numbers of client with error failover processes
US9456230B1 (en) Real time overlays on live streams
WO2017201980A1 (en) Video recording method, apparatus and system
US11593448B2 (en) Extension for targeted invalidation of cached assets
CN112822560B (en) Virtual gift giving method, system, computer device and storage medium
CN103796046B (en) A kind of video source address detection method and device
US20200359080A1 (en) Content-Modification System with Issue Detection and Responsive Action Feature
EP3981165A1 (en) Content-modification system with system resource request feature
CN108076385B (en) Method and device for reporting promotion information monitoring data
CN113242447B (en) Video data processing method and device
CN101742247B (en) Method and system for interactive web TV service authentication and EPG server
US10462236B2 (en) Coordinating metgadata
US20240064357A1 (en) Content-modification system with probability-based selection feature
CN111506747B (en) File analysis method, device, electronic equipment and storage medium
CN111026912B (en) IPTV-based collaborative recommendation method, device, computer equipment and storage medium
CN110166823B (en) Screen projection method and related device
CN109150927A (en) File delivery method and device for document storage system
US11386696B2 (en) Content-modification system with fingerprint data mismatch and responsive action feature
CN101877722A (en) Electronic program guide (EPG) system and file downloading method
US20210195289A1 (en) Content-Modification System with Transmission Delay-Based Feature
US20200356755A1 (en) Content-modification system with geographic area-based feature
CN114465989A (en) Streaming media data processing method, server, electronic device and readable storage medium
US20070050825A1 (en) VOD transaction error correlator
WO2018112804A1 (en) Handling a content user request in a content delivery network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant