CN110049348B - Video analysis method and system and video analysis server - Google Patents

Video analysis method and system and video analysis server

Info

Publication number
CN110049348B
CN110049348B (application CN201910266997.8A)
Authority
CN
China
Prior art keywords
video
information
server
analysis
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201910266997.8A
Other languages
Chinese (zh)
Other versions
CN110049348A (en)
Inventor
蒋龙威
林伟强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wangsu Science and Technology Co Ltd
Original Assignee
Wangsu Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wangsu Science and Technology Co Ltd filed Critical Wangsu Science and Technology Co Ltd
Priority to CN201910266997.8A priority Critical patent/CN110049348B/en
Publication of CN110049348A publication Critical patent/CN110049348A/en
Application granted granted Critical
Publication of CN110049348B publication Critical patent/CN110049348B/en

Classifications

    • H ELECTRICITY
        • H04 ELECTRIC COMMUNICATION TECHNIQUE
            • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
                • H04L 67/00 Network arrangements or protocols for supporting network services or applications
                    • H04L 67/01 Protocols
                        • H04L 67/10 Protocols in which an application is distributed across nodes in the network
                            • H04L 67/1097 Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
            • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
                • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
                    • H04N 21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
                        • H04N 21/21 Server components or server architectures
                        • H04N 21/23 Processing of content or additional data; Elementary server operations; Server middleware
                            • H04N 21/231 Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers, prioritizing data for deletion
                            • H04N 21/234 Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
                                • H04N 21/23418 Processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The invention discloses a video analysis method, a video analysis system and a video analysis server. The method comprises the following steps: acquiring parent node information; determining, based on the parent node information, an unparsed target video stored in a parent node server; acquiring video information of the target video from the corresponding parent node server and parsing the video information of the target video to generate analysis information of the target video; and storing the generated analysis information of the target video in the video analysis server. With this technical solution, the efficiency of video analysis can be improved without affecting user experience.

Description

Video analysis method and system and video analysis server
Technical Field
The present invention relates to the field of Internet technologies, and in particular to a video analysis method, a video analysis system and a video analysis server.
Background
At present, in order to better serve users, a CDN (Content Delivery Network) system is usually used to accelerate video resources. For example, a single CDN system may simultaneously provide video acceleration services for many customers, such as Youku, Tencent Video and iQIYI.
For the CDN system, it is often necessary to collect statistics on the video information of different customers; by analyzing this video information, resources inside the CDN system can be optimized and scheduled. At present, video parsing is usually performed by edge node servers in the CDN system.
Specifically, an edge node server needs to download a complete video file from the customer's source station server and then parse the downloaded video file to obtain information such as its playing duration and bitrate. The parsed information may be stored in the cache of the edge node server, and the parsed information cached by each edge node server may then be analyzed in a unified manner.
However, parsing videos on edge node servers has several drawbacks. On the one hand, the parsing process consumes considerable resources of the edge node server, which also needs to interact with user clients, so user experience may be affected. On the other hand, different edge node servers often repeatedly parse the same video file; because the number of edge node servers is large, a large number of parsing operations are repeated every day across the whole CDN system, resulting in low video parsing efficiency.
Disclosure of Invention
The purpose of the present application is to provide a video analysis method, a video analysis system and a video analysis server that can improve video parsing efficiency without affecting user experience.
In order to achieve the above object, an aspect of the present application provides a video parsing method applied in a video parsing server. The method includes: acquiring parent node information, where the parent node information is used to represent a storage relationship between video files and parent node servers; determining, based on the parent node information, an unparsed target video stored in a parent node server; acquiring video information of the target video from the corresponding parent node server and parsing the video information of the target video to generate analysis information of the target video; and storing the generated analysis information of the target video in the video parsing server.
In order to achieve the above object, another aspect of the present application further provides a video parsing server, including: a target video determining unit, used to acquire parent node information, where the parent node information is used to represent the storage relationship between video files and parent node servers, and to determine, based on the parent node information, an unparsed target video stored in a parent node server; a video analysis unit, used to acquire video information of the target video from the corresponding parent node server and parse the video information of the target video to generate analysis information of the target video; and an analysis information storage unit, used to store the generated analysis information of the target video in the video parsing server.
In order to achieve the above object, another aspect of the present application further provides a video parsing server that includes a memory and a processor, where the memory is used to store a computer program which, when executed by the processor, implements the video parsing method described above.
In order to achieve the above object, another aspect of the present application further provides a video parsing system, where the system includes a video parsing server, a scheduling system and a parent node server. The scheduling system is used to store parent node information, where the parent node information is used to represent the storage relationship between video files and parent node servers; the parent node server is used to store video files; and the video parsing server is used to acquire video information of an unparsed target video from the parent node server and parse the video information of the target video to generate and store analysis information of the target video.
Therefore, according to the above technical solution, videos are parsed by an independent video parsing server, which reduces the load on edge node servers and avoids affecting user experience. Video files in the network can be stored in the parent node servers of the CDN system; these may include the latest video files for which the prefetch service has been enabled as well as other video files. The video parsing server may then obtain the stored video files from the parent node servers. Some of the video files stored in a parent node server may already have been parsed, so the video parsing server needs to identify the target videos stored in the parent node server that have not yet been parsed. For each such target video, parsing can be performed on the acquired video information to generate its analysis information. The generated analysis information can be stored in the video parsing server and later used as a reference for resource allocation and video monitoring analysis in the CDN system. Because the video parsing server determines which video files have not been parsed before parsing them, repeated parsing is avoided and video parsing efficiency is greatly improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
FIG. 1 is a schematic diagram of a video analytics system in an embodiment of the present invention;
FIG. 2 is a schematic diagram illustrating steps of a video parsing method according to an embodiment of the present invention;
FIG. 3 is an interaction diagram of a video parsing method according to an embodiment of the present invention;
FIG. 4 is a functional block diagram of a video analytics server in an embodiment of the present invention;
FIG. 5 is a schematic structural diagram of a video parsing server in an embodiment of the present invention;
FIG. 6 is a schematic structural diagram of a computer terminal in the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention will be described in detail with reference to the accompanying drawings.
The present application provides a video parsing method, which can be applied to a video parsing server in a video parsing system as shown in fig. 1. Referring to fig. 1, the video parsing system may include a video parsing server, a content management platform, a scheduling system, and a parent node server. In the present application, the number of servers is not limited. For example, the video parsing server may be a single server or a server cluster in actual application. Similarly, the parent node server may be a single server or a server cluster in practical applications.
In this application, the content management platform may store the latest videos corresponding to the prefetch service. Specifically, a client may send a prefetch instruction to the content management platform for the latest videos that need to be prefetched, where the prefetch instruction may carry identifiers of the latest videos. After receiving the prefetch instruction, the content management platform can identify the one or more identifiers of the latest videos carried in it and download the corresponding latest videos from the client's source station server.
The scheduling system can store parent node information, and the parent node information can be used to represent the storage relationship between video files and parent node servers. Specifically, video files other than the latest videos may be stored in the parent node servers. A storage list of video files can be maintained in the scheduling system, in which the identifier of a parent node server serves as the index key and the identifiers of the video files stored in that parent node server serve as the index result. In this way, the video files stored in a parent node server can be queried by the identifier of that parent node server; conversely, the identifier of a video file can be used to query which parent node server or servers store that video file, as illustrated in the sketch below.
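As a non-limiting editorial illustration (not part of the patent text), the storage list described above can be modeled as a mapping from parent node server identifiers to the identifiers of the video files they store, with a reverse lookup built on top of it. All names and sample values below are assumptions made purely for illustration.

```python
from collections import defaultdict

# Hypothetical in-memory model of the scheduling system's storage list:
# parent node server identifier -> identifiers of the video files it stores.
storage_list = {
    "parent-node-01": ["video-a", "video-b"],
    "parent-node-02": ["video-b", "video-c"],
}

def videos_on_parent(parent_id):
    """Query which video files a given parent node server stores."""
    return storage_list.get(parent_id, [])

def parents_holding(video_id):
    """Reverse lookup: query which parent node server(s) store a given video file."""
    index = defaultdict(list)
    for parent_id, video_ids in storage_list.items():
        for vid in video_ids:
            index[vid].append(parent_id)
    return index.get(video_id, [])

print(videos_on_parent("parent-node-01"))  # ['video-a', 'video-b']
print(parents_holding("video-b"))          # ['parent-node-01', 'parent-node-02']
```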
Of course, in practical applications, the video parsing system may also include only the video parsing server, the scheduling system and the parent node server. In that case, the parent node server can centrally store both the video files for which the prefetch service has been enabled and those for which it has not. For the video files with the prefetch service enabled, the parent node server can obtain the latest videos from the content management platform, so that the latest videos and the other videos are all stored in the parent node server. The video parsing server then only needs to acquire the video information of the unparsed target videos from the parent node server.
Referring to fig. 2, the video parsing method applied in the video parsing server may include the following steps.
S1: acquiring parent node information, where the parent node information is used to represent the storage relationship between video files and parent node servers; and determining, based on the parent node information, an unparsed target video stored in a parent node server.
In this embodiment, the video parsing server may obtain video information of video files from the parent node servers. A video file may be one for which the prefetch service has been enabled or one for which it has not. Specifically, the video parsing server may first obtain the parent node information from the scheduling system, where the parent node information may be the storage list in the scheduling system. Through the storage list, the video parsing server can obtain the identifiers of the video files currently stored in each parent node server.
In this embodiment, the video parsing server may generate a file list of the video files stored in the parent node servers according to the storage relationship, represented by the parent node information, between video files and parent node servers; the file list may include the identifier of each video file. Before parsing, the video parsing server can determine which video files in the file list have not been parsed and then parse only those files, thereby avoiding repeated parsing.
Specifically, in the video parsing server, for video files whose parsing has been completed, the identifiers of these video files and the corresponding analysis information may be stored in association. In particular, the identifier of a parsed video file may be used as a key and the corresponding analysis information as a value, so that the parsed results are stored as key-value pairs. In this way, for each video file in the file list generated by the video parsing server, the server may query in turn whether analysis information associated with that video file exists in the video parsing server. Finally, the video files with no associated analysis information may be taken as the unparsed target videos.
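The unparsed-target selection just described can be sketched as a simple lookup against such a key-value store. The dictionary layout, field names and sample values below are editorial assumptions, not data structures required by the patent.

```python
# Hypothetical key-value store of completed parses: video identifier -> analysis information.
parsed_results = {
    "video-a": {"bitrate_kbps": 2400, "duration_s": 600},
}

def find_unparsed_targets(file_list, results):
    """Return the video files in the file list that have no associated analysis information."""
    return [video_id for video_id in file_list if video_id not in results]

file_list = ["video-a", "video-b", "video-c"]
print(find_unparsed_targets(file_list, parsed_results))  # ['video-b', 'video-c']
```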
Of course, in practical applications, there are many ways to mark whether a video file has been parsed. For example, in the parent node information stored by the scheduling system, a parsing flag may be added for each video file; the parsing flag indicates whether that video file has been parsed. The scheduling system can be notified whenever the video parsing server completes parsing a video file, so that the scheduling system can update that file's parsing flag. Subsequently, after acquiring the parent node information, the video parsing server can determine the unparsed target videos directly from the parent node information.
Referring to fig. 3, in one embodiment, the latest videos for which the prefetch service is enabled may be stored centrally on the content management platform. The video parsing server can obtain the video information of a latest video from the content management platform, where the content management platform can respond to a prefetch instruction of a client and download the latest video pointed to by the prefetch instruction from the client's source station server.
In this embodiment, the video parsing server may obtain the video information of the latest video from the content management platform at a fixed time period or at a time agreed upon with the client. The video information acquired by the video parsing server is not the entire video data but only part of it. The video information may include the data size, encoding format, playing duration and similar attributes of the latest video, or other information calculated from them. In practical applications, different video information can be obtained from the content management platform through requests of different formats.
In particular, in one embodiment, the video parsing server may send a header request pointing to the latest video to the content management platform; this may be, for example, a HEAD request in HTTP. Unlike an HTTP GET request, a HEAD request does not retrieve the actual data body, only some descriptive information about it. Specifically, after receiving the header request sent by the video parsing server, the content management platform may identify the identifier of the latest video carried in it and feed back response information for the latest video to the video parsing server.
In this embodiment, the response information may carry several items of descriptive information about the latest video. For example, the descriptive information may include the compression format, data size, data type and last buffering time of the latest video. Each piece of descriptive information can be assigned a corresponding field; for example, the data size can be identified by the value of the content-length field. In this way, by identifying the content-length field in the response information, the video parsing server can use its value as the data size of the latest video.
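Assuming standard HTTP semantics and the Python requests library, a minimal sketch of obtaining the data size through a HEAD request might look as follows. The URL and function name are placeholders introduced for illustration and are not part of the patent.

```python
import requests

def fetch_data_size(video_url):
    """Send an HTTP HEAD request and read the Content-Length header as the video's data size."""
    response = requests.head(video_url, timeout=10)
    response.raise_for_status()
    content_length = response.headers.get("Content-Length")
    return int(content_length) if content_length is not None else None

# Placeholder URL; a real deployment would point at the content management platform.
size_bytes = fetch_data_size("http://cmp.example.com/videos/latest.mp4")
print(size_bytes)
```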
In one embodiment, in addition to the data size of the latest video, some other information about it (for example, encoding format and playing duration) also needs to be obtained. In practical applications, this other information is usually recorded in the header field and/or the trailer field of the latest video. Therefore, the video parsing server can obtain the header field and/or the trailer field of the latest video from the content management platform. Specifically, the video parsing server may acquire partial data of the latest video by sending a range data acquisition request to the content management platform. The range data acquisition request may be an HTTP range request, in which the position of the required partial data within the entire data of the latest video is determined by a range parameter. For example, a range parameter expressed as 0-10 (in HTTP, Range: bytes=0-10) indicates that the first 11 bytes of the latest video's data need to be retrieved.
In practical applications, the ways of acquiring the header field and the trailer field differ slightly. Specifically, for the header field, the data start position of the latest video may be taken as the start position of the data to be acquired. Generally, the offset of the data start position is 0; in some application scenarios, however, it may instead be another known value. After the data start position is determined, the end position of the data to be acquired can be calculated from the length of the header field. For a video of fixed format, the length of the header field is usually also fixed and may be represented by a first preset data length; for example, if the header field is typically 100 KB long, the first preset data length may be 100 KB. The end position of the data to be acquired can therefore be determined from the data start position and the first preset data length. In this way, the start and end positions of the data to be acquired define a range parameter, and a range data acquisition request representing the data to be acquired can be constructed from this range parameter.
For the trailer field, the data end position of the latest video may be used as the end position of the data to be acquired. The data end position may be determined from the data size of the latest video obtained as described above; for example, if the data size of the latest video is 100000 KB, the data end position may be 99999 KB (with the data start position counted from 0). After the data end position is determined, the start position of the data to be acquired can be generated from a second preset data length representing the length of the trailer field. A range data acquisition request representing the data to be acquired can then be constructed from the range parameter defined by the start and end positions of the data to be acquired.
In practical applications, whether the header field, the trailer field, or both are acquired depends on the actual situation; the number of fields acquired is mainly determined by the format of the video.
In this embodiment, after the video parsing server sends the range data acquisition request pointing to the latest video to the content management platform, it may receive the range data fed back by the content management platform in response to the request. By parsing the range data, the other information of the latest video can be obtained; for example, the playing duration and encoding format of the latest video can be obtained through parsing.
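Again assuming HTTP range requests and the Python requests library, the header-field and trailer-field acquisition described above might be sketched as follows. The 100 KB header length echoes the example in the text, while the 64 KB trailer length and all names are illustrative assumptions.

```python
import requests

HEADER_FIELD_LENGTH = 100 * 1024   # first preset data length (header field, per the 100 KB example)
TRAILER_FIELD_LENGTH = 64 * 1024   # second preset data length (trailer field, assumed value)

def fetch_header_field(video_url, start=0, length=HEADER_FIELD_LENGTH):
    """Fetch the header field: from the data start position to start + length - 1."""
    headers = {"Range": f"bytes={start}-{start + length - 1}"}
    response = requests.get(video_url, headers=headers, timeout=10)
    response.raise_for_status()
    return response.content

def fetch_trailer_field(video_url, data_size, length=TRAILER_FIELD_LENGTH):
    """Fetch the trailer field: the last `length` bytes ending at the data end position."""
    end = data_size - 1
    start = max(0, data_size - length)
    headers = {"Range": f"bytes={start}-{end}"}
    response = requests.get(video_url, headers=headers, timeout=10)
    response.raise_for_status()
    return response.content
```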
S3: acquiring the video information of the target video from the corresponding parent node server, and parsing the video information of the target video to generate analysis information of the target video.
In this embodiment, after the unparsed target videos are determined, the target parent node servers storing the target videos may be determined according to the storage relationship represented by the parent node information. The video information of the corresponding target videos can then be acquired from these target parent node servers. The way video information is acquired from a parent node server is the same as the way the video information of the latest video is acquired from the content management platform: different video information can be obtained through header requests and range data acquisition requests. Specifically, the video parsing server may send a header request pointing to the target video to the target parent node server and receive the response information fed back by the target parent node server for that request; by identifying the content-length field in the response information, its value can be used as the data size of the target video.
In addition, the video parsing server may further send a range data acquisition request pointing to the target video to the target parent node server and receive the range data fed back by the target parent node server for that request. The range data is used at least to represent the playing duration of the target video and may also represent its encoding mode and other attributes.
The range parameter carried in the range data acquisition request sent to the target parent node server may also be generated in the manner described above and is not described again here.
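Reusing the illustrative helpers from the two sketches above (fetch_data_size, fetch_header_field, fetch_trailer_field), acquiring the same kind of video information from a target parent node server could look like the following usage example; the server address is purely hypothetical.

```python
# Hypothetical target parent node server address holding the unparsed target video.
target_url = "http://parent-node-01.example.com/videos/video-b.mp4"

data_size = fetch_data_size(target_url)                    # data size via HEAD request
header_bytes = fetch_header_field(target_url)              # header field via Range request
trailer_bytes = fetch_trailer_field(target_url, data_size) # trailer field via Range request
# header_bytes / trailer_bytes would then be parsed (e.g. with a container-format parser)
# to recover the playing duration and encoding mode of the target video.
```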
In this embodiment, after the video information of the latest video and the video information of the target video are acquired, the analysis information of the latest video and of the target video may be generated respectively. The playing bitrate of a video has the greatest influence on resource allocation in the CDN system, so the analysis information at least needs to reflect the playing bitrate of the video.
Specifically, from the video information of the latest video and of the target video, the data size and playing duration of the latest video and of the target video may be determined respectively. The data size can be obtained through the header request, and the playing duration can be obtained from the range data returned for the range data acquisition request. The playing bitrate of the latest video can then be determined from its data size and playing duration, and likewise the playing bitrate of the target video from its data size and playing duration; the playing bitrate may be the ratio of data size to playing duration. In this way, the video parsing server may use the playing bitrate of the latest video as one item of its analysis information and the playing bitrate of the target video as one item of its analysis information.
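A minimal sketch of the bitrate calculation described above, assuming the data size is expressed in bytes and the playing duration in seconds; expressing the result in kilobits per second is an editorial choice, not something the patent prescribes.

```python
def playing_bitrate_kbps(data_size_bytes, playing_duration_seconds):
    """Playing bitrate as the ratio of data size to playing duration, in kilobits per second."""
    if playing_duration_seconds <= 0:
        raise ValueError("playing duration must be positive")
    return (data_size_bytes * 8 / 1000) / playing_duration_seconds

# Example: a 100000 KB video that plays for 600 seconds.
print(round(playing_bitrate_kbps(100_000 * 1024, 600)))  # 1365
```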
Of course, other analysis information, such as the encoding mode and video version, can also be obtained by parsing other aspects of the video information, which is not limited here.
S5: storing the generated analysis information of the target video in the video parsing server.
In this embodiment, after generating the analysis information of the latest video and of the target video, the video parsing server may identify the video identifiers of the latest video and of the target video respectively and store each identified video identifier in association with the corresponding analysis information in the video parsing server. Specifically, the video identifier of the latest video or of the target video may be used as a key and the corresponding analysis information as a value, so that they are stored in the video parsing server as key-value pairs. In practical applications, the video identifier may be the URL (Uniform Resource Locator) of the latest video or target video, or a string computed from the URL by a hash algorithm; because the video identifier uniquely represents the corresponding latest video or target video, unique analysis information can be queried by video identifier.
In this embodiment, the video identifiers and analysis information stored in the video parsing server may serve as the basis for determining whether a video file has been parsed. If the corresponding analysis information can be found in the video parsing server by video identifier, the video corresponding to that identifier has been parsed; if it cannot be found, the video has not yet been parsed.
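A minimal sketch of the key-value storage and the parsed/unparsed check described above, assuming (purely for illustration) that the video identifier is an MD5 hex digest of the URL; the patent allows either the URL itself or a hash of it.

```python
import hashlib

analysis_store = {}  # hypothetical key-value store inside the video parsing server

def video_key(url):
    """Derive a video identifier from the URL with a hash algorithm (assumed: MD5 hex digest)."""
    return hashlib.md5(url.encode("utf-8")).hexdigest()

def save_analysis(url, analysis):
    """Store the analysis information in association with the video identifier."""
    analysis_store[video_key(url)] = analysis

def is_parsed(url):
    """A video counts as parsed if analysis information can be found for its identifier."""
    return video_key(url) in analysis_store

save_analysis("http://cdn.example.com/video-b.mp4", {"bitrate_kbps": 1365, "duration_s": 600})
print(is_parsed("http://cdn.example.com/video-b.mp4"))  # True
print(is_parsed("http://cdn.example.com/video-c.mp4"))  # False
```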
In this embodiment, the analysis information stored in the video parsing server may subsequently be accessed by other servers in the CDN system, so that resource allocation in the CDN system can be adjusted and optimized according to it. For example, the video parsing server may store the bitrate of each video, and videos with different bitrates have different transmission bandwidth requirements. Therefore, when providing video resources to users, the CDN system may look up the video bitrates stored in the video parsing server and allocate higher bandwidth to high-bitrate videos, thereby providing users with a good viewing experience.
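As a rough editorial illustration of how a stored bitrate might inform bandwidth allocation, the headroom factor below is an assumption and not something specified by the patent.

```python
def allocate_bandwidth_kbps(bitrate_kbps, headroom=1.5):
    """Reserve delivery bandwidth proportional to the stored playing bitrate (assumed headroom factor)."""
    return int(bitrate_kbps * headroom)

# Looking up the stored bitrate of a high-bitrate video and reserving bandwidth accordingly.
stored_bitrate = 4800
print(allocate_bandwidth_kbps(stored_bitrate))  # 7200
```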
Referring to fig. 4, the present application further provides a video parsing server, including:
the target video determining unit is used for acquiring parent node information from the scheduling system, the parent node information being used for representing the storage relationship between video files and parent node servers, and for determining, based on the parent node information, an unparsed target video stored in a parent node server;
the video analysis unit is used for acquiring the video information of the target video from the corresponding parent node server and parsing the video information of the target video to generate analysis information of the target video;
and the analysis information storage unit is used for storing the generated analysis information of the target video in the video analysis server.
In one embodiment, the video parsing server further comprises:
the system comprises a latest video synchronization unit, a content management platform and a client, wherein the latest video synchronization unit is used for acquiring video information of a latest video from the content management platform, and the content management platform is used for responding to a prefetching instruction of a client and downloading the latest video pointed by the prefetching instruction from a source station server of the client;
correspondingly, the video analysis unit is further used for parsing the video information of the latest video to generate analysis information of the latest video;
the analysis information storage unit is further configured to store the generated analysis information of the latest video in the video analysis server.
In one embodiment, the latest video synchronization unit includes:
a header request sending module, configured to send a header request pointing to the latest video to the content management platform, and receive response information fed back by the content management platform for the header request;
and the data size identification module is used for identifying the content length field in the response information and taking the assignment of the content length field as the data size of the latest video.
In one embodiment, the latest video synchronization unit further comprises:
the range data acquisition module is used for sending a range data acquisition request pointing to the latest video to the content management platform and receiving range data fed back by the content management platform according to the range data acquisition request; wherein the range data is at least used for representing the playing time length of the latest video.
Referring to fig. 5, the present application further provides a video parsing server, where the video parsing server includes a memory and a processor, where the memory is used to store a computer program, and when the computer program is executed by the processor, the video parsing server can implement the video parsing method described above.
The present application further provides a video parsing system, the system includes a video parsing server, a scheduling system and a parent node server, wherein:
the scheduling system is used for storing parent node information, and the parent node information is used for representing the storage relationship between video files and parent node servers;
the parent node server is used for storing the video files;
the video parsing server is used for acquiring video information of an unparsed target video from the parent node server and parsing the video information of the target video to generate and store analysis information of the target video.
In practical applications, the video parsing system may further include a content management platform as shown in fig. 1, which may store the latest videos for which the prefetch service has been enabled. Of course, the latest videos for which the prefetch service is enabled may also be obtained in advance by the parent node server from the content management platform, so that the video parsing server only needs to communicate with the parent node server to obtain them.
Referring to fig. 6, in the present application, the technical solution in the above embodiment can be applied to the computer terminal 10 shown in fig. 6. The computer terminal 10 may include one or more (only one shown) processors 102 (the processor 102 may include, but is not limited to, a processing device such as a microprocessor MCU or a programmable logic device FPGA), a memory 104 for storing data, and a transmission module 106 for communication functions. It will be understood by those skilled in the art that the structure shown in fig. 6 is only an illustration and is not intended to limit the structure of the electronic device. For example, the computer terminal 10 may also include more or fewer components than shown in FIG. 6, or have a different configuration than shown in FIG. 6.
The memory 104 may be used to store software programs and modules of application software, and the processor 102 executes various functional applications and data processing by executing the software programs and modules stored in the memory 104. The memory 104 may include high speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory located remotely from the processor 102, which may be connected to the computer terminal 10 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 106 is used for receiving or transmitting data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of the computer terminal 10. In one example, the transmission device 106 includes a Network adapter (NIC) that can be connected to other Network devices through a base station to communicate with the internet. In one example, the transmission device 106 can be a Radio Frequency (RF) module, which is used to communicate with the internet in a wireless manner.
Therefore, according to the above technical solution, videos are parsed by an independent video parsing server, which reduces the load on edge node servers and avoids affecting user experience. Video files in the network can be stored in the parent node servers of the CDN system; these may include the latest video files for which the prefetch service has been enabled as well as other video files. The video parsing server may then obtain the stored video files from the parent node servers. Some of the video files stored in a parent node server may already have been parsed, so the video parsing server needs to identify the target videos stored in the parent node server that have not yet been parsed. For each such target video, parsing can be performed on the acquired video information to generate its analysis information. The generated analysis information can be stored in the video parsing server and later used as a reference for resource allocation and video monitoring analysis in the CDN system. Because the video parsing server determines which video files have not been parsed before parsing them, repeated parsing is avoided and video parsing efficiency is greatly improved.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and can also be implemented by hardware. With this understanding in mind, the above-described technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the embodiments or some parts of the embodiments.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (15)

1. A video analysis method, applied to a video analysis server, wherein the video analysis server is connected to a scheduling system and parent node servers in a CDN system, and the method comprises the following steps:
acquiring parent node information from the scheduling system, wherein the parent node information is used for representing a storage relationship between video files and the parent node servers; determining, based on the parent node information, video file identifiers of the video files stored in the parent node servers, and determining an unparsed target video according to an association relationship between video file identifiers and analysis information in the video analysis server; which specifically comprises:
generating a file list of the video files stored in the parent node servers according to the storage relationship represented by the parent node information; querying, for each video file in the file list in turn, whether analysis information associated with the video file exists in the video analysis server; and taking a video file having no associated analysis information as the unparsed target video;
acquiring video information of the target video from the corresponding parent node server, and parsing the video information of the target video to generate analysis information of the target video, wherein the analysis information at least comprises a playing bitrate of the target video; and
storing the generated analysis information of the target video in the video analysis server in association with its video file identifier.
2. The method of claim 1, further comprising:
acquiring video information of a latest video from a content management platform, wherein the content management platform is used for responding to a prefetch instruction of a client and downloading the latest video pointed to by the prefetch instruction from a source station server of the client, the latest video representing a video file in the content management platform for which the prefetch service has been enabled;
analyzing the video information of the latest video to generate analysis information of the latest video;
and storing the generated analysis information of the latest video in the video analysis server.
3. The method of claim 2, wherein obtaining video information of the latest video from the content management platform comprises:
sending a head request pointing to the latest video to the content management platform, and receiving response information fed back by the content management platform aiming at the head request;
and identifying a content length field in the response information, and using the assignment of the content length field as the data size of the latest video.
4. The method of claim 3, wherein obtaining video information of the latest video from the content management platform further comprises:
sending a range data acquisition request pointing to the latest video to the content management platform, and receiving range data fed back by the content management platform according to the range data acquisition request; wherein the range data is at least used for representing the playing time length of the latest video.
5. The method of claim 4, wherein sending a range data get request directed to the latest video to the content management platform comprises:
taking the data starting position of the latest video as the starting position of the data to be acquired, and generating the ending position of the data to be acquired according to the data starting position and a first preset data length;
constructing a range data acquisition request for representing the data to be acquired according to range parameters limited by the initial position and the end position of the data to be acquired;
and/or
Taking the data termination position of the latest video as the termination position of the data to be acquired, and generating the initial position of the data to be acquired according to the data termination position and a second preset data length;
and constructing a range data acquisition request for representing the data to be acquired according to range parameters limited by the starting position and the ending position of the data to be acquired.
6. The method of claim 1, wherein obtaining video information of the target video from a corresponding parent node server comprises:
determining a target parent node server storing the target video according to the storage relationship represented by the parent node information;
sending a head request pointing to the target video to the target parent node server, and receiving response information fed back by the target parent node server for the head request;
and identifying a content length field in the response information, and using the assignment of the content length field as the data size of the target video.
7. The method of claim 6, wherein obtaining video information of the target video from the corresponding parent node server further comprises:
sending a range data acquisition request pointing to the target video to the target parent node server, and receiving range data fed back by the target parent node server for the range data acquisition request; wherein the range data is at least used for representing the playing time length of the target video.
8. The method of claim 1, wherein generating parsing information for the target video comprises:
determining the data size and the playing time length of the target video according to the video information of the target video;
determining the playing bitrate of the target video according to the data size and the playing time length of the target video;
and taking the playing bitrate of the target video as analysis information of the target video.
9. The method of claim 1, wherein storing the generated parsing information of the target video in the video parsing server comprises:
and identifying the video identification of the target video, and storing the identified video identification and the corresponding analysis information in the video analysis server in an associated manner.
10. A video analysis server, wherein the video analysis server is connected to a scheduling system and parent node servers in a CDN system, and the video analysis server comprises:
a target video determining unit, which is used for acquiring parent node information from the scheduling system, the parent node information being used for representing a storage relationship between video files and the parent node servers; determining, based on the parent node information, video file identifiers of the video files stored in the parent node servers; and determining an unparsed target video according to an association relationship between video file identifiers and analysis information in the video analysis server; the unit being specifically used for:
generating a file list of the video files stored in the parent node servers according to the storage relationship represented by the parent node information; querying, for each video file in the file list in turn, whether analysis information associated with the video file exists in the video analysis server; and taking a video file having no associated analysis information as the unparsed target video;
a video analysis unit, which is used for acquiring video information of the target video from the corresponding parent node server and parsing the video information of the target video to generate analysis information of the target video, wherein the analysis information at least comprises a playing bitrate of the target video; and
an analysis information storage unit, which is used for associating the generated analysis information of the target video with its video file identifier and storing them in the video analysis server.
11. The video analysis server of claim 10, wherein the video analysis server further comprises:
the system comprises a latest video synchronization unit, a content management platform and a client, wherein the latest video synchronization unit is used for acquiring video information of a latest video from the content management platform, and the content management platform is used for responding to a prefetching instruction of a client and downloading the latest video pointed by the prefetching instruction from a source station server of the client; the latest video is used for representing a video file with opened pre-fetching service in the content management platform;
correspondingly, the video analysis unit is further used for parsing the video information of the latest video to generate analysis information of the latest video;
the analysis information storage unit is further configured to store the generated analysis information of the latest video in the video analysis server.
12. The video analysis server according to claim 11, wherein the latest video synchronization unit comprises:
a header request sending module, configured to send a header request pointing to the latest video to the content management platform, and receive response information fed back by the content management platform for the header request;
and the data size identification module is used for identifying the content length field in the response information and taking the assignment of the content length field as the data size of the latest video.
13. The video analysis server of claim 12, wherein the latest video synchronization unit further comprises:
the range data acquisition module is used for sending a range data acquisition request pointing to the latest video to the content management platform and receiving range data fed back by the content management platform according to the range data acquisition request; wherein the range data is at least used for representing the playing time length of the latest video.
14. A video analysis server, comprising a memory and a processor, wherein the memory is used for storing a computer program which, when executed by the processor, implements the method according to any one of claims 1 to 9.
15. A video analysis system, the system comprising a video analysis server, a scheduling system and parent node servers, wherein:
the scheduling system is used for storing parent node information, the parent node information being used for representing a storage relationship between video files and the parent node servers;
the parent node servers are used for storing the video files; and
the video analysis server is used for acquiring, based on the parent node information and an association relationship between video file identifiers and analysis information in the video analysis server, video information of an unparsed target video from a parent node server, and for parsing the video information of the target video to generate and store analysis information of the target video, wherein the analysis information is stored in association with the video file identifier of the target video and at least comprises a playing bitrate of the target video, and the unparsed target video in the parent node servers is determined in the following manner:
generating a file list of the video files stored in the parent node servers according to the storage relationship represented by the parent node information; querying, for each video file in the file list in turn, whether analysis information associated with the video file exists in the video analysis server; and taking a video file having no associated analysis information as the unparsed target video.
CN201910266997.8A 2019-04-03 2019-04-03 Video analysis method and system and video analysis server Expired - Fee Related CN110049348B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910266997.8A CN110049348B (en) 2019-04-03 2019-04-03 Video analysis method and system and video analysis server

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910266997.8A CN110049348B (en) 2019-04-03 2019-04-03 Video analysis method and system and video analysis server

Publications (2)

Publication Number Publication Date
CN110049348A CN110049348A (en) 2019-07-23
CN110049348B true CN110049348B (en) 2022-04-05

Family

ID=67275969

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910266997.8A Expired - Fee Related CN110049348B (en) 2019-04-03 2019-04-03 Video analysis method and system and video analysis server

Country Status (1)

Country Link
CN (1) CN110049348B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112383686B (en) * 2020-11-02 2023-01-13 浙江大华技术股份有限公司 Video processing method, video processing device, storage medium and electronic device

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104811740A (en) * 2015-04-29 2015-07-29 北京奇艺世纪科技有限公司 Video file distribution method, system and device
CN105516739A (en) * 2015-12-22 2016-04-20 腾讯科技(深圳)有限公司 Video live broadcasting method and system, transcoding server and webpage client
CN105871972A (en) * 2015-11-13 2016-08-17 乐视云计算有限公司 Video resource distributed cathe method, device and system
CN106851343A (en) * 2017-01-23 2017-06-13 百度在线网络技术(北京)有限公司 For the method and apparatus of net cast
CN108574685A (en) * 2017-03-14 2018-09-25 华为技术有限公司 A kind of Streaming Media method for pushing, apparatus and system
CN109218430A (en) * 2018-09-26 2019-01-15 深圳市网心科技有限公司 A kind of video file transfer method, system and electronic equipment and storage medium

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9635398B2 (en) * 2013-11-01 2017-04-25 Adobe Systems Incorporated Real-time tracking collection for video experiences
US10009247B2 (en) * 2014-04-17 2018-06-26 Netscout Systems Texas, Llc Streaming video monitoring using CDN data feeds
CN107846454A (en) * 2017-10-25 2018-03-27 暴风集团股份有限公司 A kind of resource regulating method, device and CDN system

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104811740A (en) * 2015-04-29 2015-07-29 北京奇艺世纪科技有限公司 Video file distribution method, system and device
CN105871972A (en) * 2015-11-13 2016-08-17 乐视云计算有限公司 Video resource distributed cathe method, device and system
CN105516739A (en) * 2015-12-22 2016-04-20 腾讯科技(深圳)有限公司 Video live broadcasting method and system, transcoding server and webpage client
CN106851343A (en) * 2017-01-23 2017-06-13 百度在线网络技术(北京)有限公司 For the method and apparatus of net cast
CN108574685A (en) * 2017-03-14 2018-09-25 华为技术有限公司 A kind of Streaming Media method for pushing, apparatus and system
CN109218430A (en) * 2018-09-26 2019-01-15 深圳市网心科技有限公司 A kind of video file transfer method, system and electronic equipment and storage medium

Also Published As

Publication number Publication date
CN110049348A (en) 2019-07-23

Similar Documents

Publication Publication Date Title
CN108306877B (en) NODE JS-based user identity information verification method and device and storage medium
US11356748B2 (en) Method, apparatus and system for slicing live streaming
US10116572B2 (en) Method, device, and system for acquiring streaming media data
CN108848060B (en) Multimedia file processing method, processing system and computer readable storage medium
US9774642B2 (en) Method and device for pushing multimedia resource and display terminal
US9917916B2 (en) Media delivery service protocol to support large numbers of client with error failover processes
CN111431813B (en) Access current limiting method, device and storage medium
EP3734927A1 (en) Content service implementation method and device, and content delivery network node
CN109640113B (en) Processing method for dragging video data and proxy server
CN110557689B (en) Video playing method and device
CN110493321B (en) Resource acquisition method, edge scheduling system and server
CN107566477B (en) Method and device for acquiring files in distributed file system cluster
CN110324405B (en) Message sending method, device, system and computer readable storage medium
CN110267117B (en) Streaming media data processing method and streaming media processing server
CN108228625B (en) Push message processing method and device
CN109525622B (en) Fragment resource ID generation method, resource sharing method, device and electronic equipment
CN111212301B (en) Video code rate matching method, storage medium and terminal equipment
CN107040615B (en) Downloading method of media fragment, terminal and computer readable storage medium
CN110049348B (en) Video analysis method and system and video analysis server
CN111859127A (en) Subscription method and device of consumption data and storage medium
CN108134811B (en) Method, device and system for distributing or downloading target file
WO2019196225A1 (en) Resource file feedback method and apparatus
CN112954013B (en) Network file information acquisition method, device, equipment and storage medium
CN113726801A (en) AB experiment method, device, equipment and medium applied to server
CN114222086A (en) Method, system, medium and electronic device for scheduling audio and video code stream

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee (granted publication date: 20220405)