CN112039680A - Video service processing method, system and device - Google Patents

Video service processing method, system and device

Info

Publication number
CN112039680A
Authority
CN
China
Prior art keywords
video
video service
data
acquisition
unit
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910476215.3A
Other languages
Chinese (zh)
Inventor
何国圆
智亚丹
魏超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
ZTE Corp
Original Assignee
ZTE Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ZTE Corp filed Critical ZTE Corp
Priority to CN201910476215.3A priority Critical patent/CN112039680A/en
Priority to PCT/CN2020/080899 priority patent/WO2020244283A1/en
Publication of CN112039680A publication Critical patent/CN112039680A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/08Configuration management of networks or network elements
    • H04L41/0803Configuration setting
    • H04L41/0823Configuration setting characterised by the purposes of a change of settings, e.g. optimising configuration for enhancing reliability
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/50Network service management, e.g. ensuring proper service fulfilment according to agreements
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N17/00Diagnosis, testing or measuring for television systems or their details
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N17/00Diagnosis, testing or measuring for television systems or their details
    • H04N17/004Diagnosis, testing or measuring for television systems or their details for digital television systems
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/442Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Mobile Radio Communication Systems (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The video service processing method identifies video services based on deep packet inspection (DPI), so that video services and applications in the network are identified more finely and accurately. In addition, the method and device effectively integrate the wireless network with internet technology and provide a basis for network optimization, so that the user's experience of the video service is effectively improved.

Description

Video service processing method, system and device
Technical Field
The present application relates to, but is not limited to, wireless network technologies, and in particular to a video service processing method, system and apparatus.
Background
With the rapid development of mobile communication technology and the continuous upgrading of mobile terminals, especially smart phones, the number of video users grows year by year; video services are used more and more in people's daily life and have become the main form of service in mobile communication systems.
Against this big-video background, video services have become basic services for video service providers, operators and equipment vendors, and a firm guarantee for leading the future. High definition, multi-screen viewing, interactivity, social features and real-time playback are the main demands of consumer video consumption; however, the contradiction between the scarcity of wireless resources and the instability of wireless links on the one hand, and users' ever-increasing demands on video service quality on the other, poses challenges to the development of wireless video services.
For operators, the purpose of a big-video strategy is to establish an end-to-end standard for video experience, to predict video traffic in the existing network based on video performance evaluation, to schedule and allocate wireless resources reasonably so as to provide high-quality video services, and to reduce the network delay experienced by mobile users, thereby improving the user experience.
Disclosure of Invention
The application provides a video service processing method, system and device, which can provide a basis for network optimization, so that the user's experience of the video service is effectively improved.
The application provides a video service processing system, including: a management unit, an identification unit, an acquisition unit and an analysis unit; wherein,
the management unit is used for issuing video acquisition tasks to one or more acquisition units needing video service quality evaluation;
the identification unit is used for detecting the data message in the network by adopting a Deep Packet Inspection (DPI) technology, and informing the acquisition unit associated with the identification unit when real video service data is detected in the data message;
the acquisition unit is used for receiving the video service acquisition notice from the identification unit, detecting that a video service acquisition task exists, acquiring according to the parameters of the video service acquisition task, and reporting an acquisition result to the analysis unit;
and the analysis unit is used for analyzing and counting the reported data and periodically outputting a video service quality analysis result according to the configured model.
In one illustrative example, the identification unit includes: a shunting service module, a DPI module, wherein,
the shunting service module is used for copying the data message and then sending the data message to the DPI module; receiving an identification result fed back by the DPI module, and sending the identification result meeting a preset rule and a corresponding user port carried in a video service acquisition notice to the acquisition unit;
and the DPI module is used for detecting the received data message in real time, identifying whether the data message contains real video service data according to a pre-configured detection strategy, determining that the data message contains the real video service data if the video feature code is detected in the data message, and feeding back an identification result to the shunting service module.
In an exemplary embodiment, the acquisition unit is specifically configured to:
receiving a video service acquisition notice from the identification unit, matching according to the ID of a video provider, and finding out videos with the same APPID; then, identifying according to the video ID, and considering the videos with the same APPID and video ID as the same video;
establishing a linked list node for each terminal IP identified as having video service;
and according to the parameters carried by the video acquisition task, acquiring event reported data and periodically reported data in the video service process of the terminal identified as having the video service, and reporting the data of the terminal to the analysis unit configured in the video acquisition task when the video of the terminal identified as having the video service is finished or the terminal is released.
In an exemplary embodiment, the analysis unit is specifically configured to:
and analyzing the video service data of the acquisition unit within the specified time according to the data reporting period and the task time carried in the video acquisition task to obtain the video service quality analysis result.
In an exemplary instance, the analyzing, by the analyzing unit, the video service data of the acquiring unit in a specified time to obtain the video service quality analysis result includes:
and acquiring the total video service quality score as the video service quality analysis result according to the video source quality score, the interactive experience quality score and the watching experience quality score of the video service data.
In an exemplary embodiment, the management unit is further configured to: and stopping the issued video acquisition task.
In an exemplary embodiment, the management unit is further configured to: and changing the parameters of the video acquisition task.
In an exemplary instance, one of the identification units corresponds to one of the acquisition units; or, one of the identification units serves as a sink node in a network and corresponds to one or more acquisition units.
In an exemplary embodiment, the management unit is an element management system EMS, the identification unit is a mobile edge computing MEC, the acquisition unit is a base station, and the analysis unit is a big data server.
The application also provides a video service processing method, which comprises the following steps:
the base station receives the video service acquisition notice and detects whether a video service acquisition task exists;
and the base station detects that a video service acquisition task exists, acquires according to the parameters of the video service acquisition task, and reports an acquisition result for analyzing to obtain a video service quality analysis result.
In an exemplary embodiment, the acquiring according to the parameters of the video service acquisition task includes:
matching according to the video APPID to find out videos with the same APPID; identifying according to the video ID, wherein videos with the same APPID and video ID are regarded as the same video;
establishing a linked list node aiming at each terminal IP identified as having video service;
and according to parameters carried by the video acquisition task, acquiring event reported data and periodically reported data in the video service process of the terminal identified as having the video service, and reporting the data of the terminal when the video of the terminal identified as having the video service is finished or the terminal is released for analyzing to obtain a video service quality analysis result.
In one illustrative example, the parameters of the video acquisition task include an acquisition object; the acquisition object comprises any combination of:
video initial buffering phase data, whose reporting mode is event-triggered reporting;
periodic video data information, whose reporting mode is periodic reporting;
video stalling start and stop times, whose reporting mode is event-triggered reporting;
the video end time, whose reporting mode is event-triggered reporting;
the video acquisition period, whose reporting mode is periodic reporting.
The present application further provides a computer-readable storage medium storing computer-executable instructions for performing any one of the video service processing methods described above.
The application further provides a device for realizing video service processing, which comprises a processor and a memory; wherein the memory stores a computer program operable on the processor for performing the steps of any one of the video service processing methods described above.
The application also provides another video service processing method, which comprises the following steps:
the MEC detects the data message in the network by adopting a DPI technology;
when the real video service data is detected in the data message, the base station associated with the MEC is notified to collect the video service.
In an exemplary embodiment, the detecting a data packet in a network by using a DPI technology includes:
and copying the data message, then carrying out real-time detection, identifying whether the data message contains real video service data according to a pre-configured detection strategy, if the video feature code is detected in the data message, determining that the data message contains the real video service data, and obtaining an identification result.
In one illustrative example, the notifying a base station associated with the MEC to collect video traffic includes:
and carrying the identification result which accords with a preset rule and corresponding user port information in the video service acquisition notice and sending the identification result and the corresponding user port information to a base station associated with the MEC.
The present application further provides a computer-readable storage medium storing computer-executable instructions for performing another video service processing method as described in any one of the above.
The application further provides a device for realizing video service processing, which comprises a processor and a memory; wherein the memory stores a computer program operable on the processor for performing the steps of another video service processing method as described above.
The video service processing method provided by the application identifies video services based on DPI, so that video services and applications in the network are identified more finely and accurately. In addition, the method and device effectively integrate the wireless network with internet technology and provide a basis for network optimization, so that the user's experience of the video service is effectively improved.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
The accompanying drawings are included to provide a further understanding of the claimed subject matter and are incorporated in and constitute a part of this specification, illustrate embodiments of the subject matter and together with the description serve to explain the principles of the subject matter and not to limit the subject matter.
Fig. 1 is a schematic diagram of a composition architecture of a video service processing system according to the present application;
fig. 2 is a schematic flowchart of an embodiment of a video service processing method according to the present application;
fig. 3 is a schematic flowchart of another embodiment of a video service processing method according to the present application.
Detailed Description
In one exemplary configuration of the present application, a computing device includes one or more processors (CPUs), input/output interfaces, a network interface, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory media, such as modulated data signals and carrier waves.
To make the objects, technical solutions and advantages of the present application more apparent, embodiments of the present application will be described in detail below with reference to the accompanying drawings. It should be noted that the embodiments and features of the embodiments in the present application may be arbitrarily combined with each other without conflict.
Fig. 1 is a schematic diagram of the composition architecture of a video service processing system according to the present application; as shown in fig. 1, it includes: a management unit, an identification unit, an acquisition unit and an analysis unit; wherein,
and the management unit is used for issuing video acquisition tasks to one or more acquisition units needing video service quality evaluation.
In one illustrative example, the parameters carried by the video acquisition task include any combination of:
the task time, including for example a video acquisition start time and a video acquisition stop time, and optionally also busy-hour and idle-hour time period settings, and the like;
task parameters, such as the data reporting period and the number of sampled UEs in a cell;
analysis information, such as the name of the analysis unit, the data-receiving IP address, and the File Transfer Protocol (FTP) information for the reported data;
the acquisition object, that is, the types of data the video acquisition task collects, which may be any combination of the acquisition items in Table 1 (a minimal configuration sketch is given after Table 1).
Table 1 (acquisition items; reproduced as an image in the original publication)
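To make the structure of such a task concrete, the following is a minimal sketch of a video acquisition task as a plain data structure; all field names, default values, addresses and item identifiers are assumptions of this sketch and are not defined by the application.

```python
# A minimal sketch of a video acquisition task; field names and values are
# illustrative assumptions, not a schema defined by the application.
from dataclasses import dataclass, field
from typing import List


@dataclass
class VideoAcquisitionTask:
    # Task time: acquisition start/stop times, optional busy/idle-hour windows
    start_time: str = "2020-12-01T00:00:00"
    stop_time: str = "2020-12-02T00:00:00"
    busy_idle_windows: List[str] = field(default_factory=list)
    # Task parameters: data reporting period and number of sampled UEs per cell
    reporting_period_s: int = 300
    ue_samples_per_cell: int = 50
    # Analysis information: analysis unit name, data-receiving IP, FTP details
    analysis_unit_name: str = "big-data-server-1"
    data_receive_ip: str = "192.0.2.10"
    ftp_info: str = "sftp://192.0.2.10/upload"
    # Acquisition objects (items chosen from Table 1), with their reporting mode
    acquisition_items: List[str] = field(default_factory=lambda: [
        "initial_buffering_phase_data",  # event-triggered reporting
        "periodic_video_data",           # periodic reporting
        "stall_start_stop_times",        # event-triggered reporting
        "video_end_time",                # event-triggered reporting
    ])


task = VideoAcquisitionTask()
print(task.reporting_period_s, len(task.acquisition_items))
```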
In an exemplary embodiment, the management unit is further configured to: and stopping the issued video acquisition task.
When the video acquisition stop time is reached, the issued video acquisition task can be stopped automatically. The acquisition can also be stopped manually before the stop time is reached, for example by the user issuing a stop command through an interface provided by the management unit.
In an exemplary embodiment, the management unit is further configured to: and changing the parameters of the video acquisition task.
It should be noted that before a change the video acquisition task needs to be stopped; the task is then modified, for example its task time or task parameters, and re-issued.
In an exemplary embodiment, the Management unit may be an Element Management System (EMS).
And the identification unit is used for detecting the data message in the network by adopting a Deep Packet Inspection (DPI) technology, and informing the acquisition unit associated with the identification unit when the real video service data is detected in the data message.
In an exemplary embodiment, the identification unit is deployed at the edge of the mobile network; it is a network architecture that provides the services required by users, together with cloud computing capabilities, close to the edge of the wireless network, and thereby converges the service provider with the wireless network.
In an exemplary embodiment, one identification unit corresponds to one acquisition unit; alternatively, one identification unit may also serve as a sink node in the network and correspond to one or more acquisition units.
In an exemplary embodiment, DPI packet identification is based on DPI rules, which fall into two broad categories: primary rules and secondary rules. A primary rule performs L3/L4 message identification and is keyed on the message five-tuple; a secondary rule performs L7 message identification and additionally refers to the Layer 7 payload of the message. With DPI, network-layer data (such as the IP address and protocol type) and transport-layer data (such as the TCP/UDP port number) are matched quickly, the TCP/UDP payload is analyzed in depth, accurate protocol identification and application-protocol event identification are achieved, and both single-packet feature identification and flow-based multi-packet stateful feature identification are supported, which greatly improves the accuracy of protocol identification.
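To make the two-level rule matching concrete, here is a minimal sketch in Python. It simplifies the primary L3/L4 rule to a protocol/destination-port check standing in for the full five-tuple entry, and the secondary L7 rule to a byte-signature search in the payload; the rule tables and signatures are assumptions for illustration only.

```python
# A minimal sketch of two-level DPI matching; rule contents are assumptions.
from typing import NamedTuple, Optional


class Packet(NamedTuple):
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int
    protocol: str   # "TCP" or "UDP"
    payload: bytes


# Primary (L3/L4) rule: simplified here to protocol + destination port,
# standing in for a full five-tuple entry.
PRIMARY_RULES = {("TCP", 443), ("TCP", 80)}

# Secondary (L7) rule: payload byte signatures ("video feature codes").
VIDEO_SIGNATURES = {
    b"ftypisom": "MP4 segment",
    b"#EXTM3U": "HLS playlist",
}


def identify_video(packet: Packet) -> Optional[str]:
    """Return a label when both rule levels match, else None."""
    if (packet.protocol, packet.dst_port) not in PRIMARY_RULES:
        return None                       # primary (L3/L4) rule miss
    for signature, label in VIDEO_SIGNATURES.items():
        if signature in packet.payload:   # secondary (L7) rule hit
            return label
    return None


pkt = Packet("203.0.113.5", "198.51.100.7", 52341, 443, "TCP", b"....ftypisom....")
print(identify_video(pkt))  # -> "MP4 segment"
```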
In one illustrative example, the identification unit includes: a shunting service module, a DPI module, wherein,
the shunting service module is used for copying the data message, that is, the user's downlink service flow message, and sending the copied message to the DPI module; and for receiving the identification result fed back by the DPI module, carrying the identification result that meets a preset rule, together with information such as the corresponding user port, in a video service acquisition notice, and sending the notice to the acquisition unit;
table 2 shows an embodiment of the preset rule, it should be noted that the content of the preset rule includes, but is not limited to, APPID and protocol name.
Table 2 (preset rule example; reproduced as an image in the original publication)
In an exemplary embodiment, the preset rule may further include, but is not limited to, information such as the initial buffering duration and the rebuffering (secondary buffering) duration shown in Table 3:
Table 3 (reproduced as an image in the original publication)
The DPI module is used for detecting the received data messages, that is, the user's data, in real time and identifying, according to a pre-configured detection strategy, whether a data message contains real video service data; if a video feature code is detected in the data message, the message is determined to contain real video service data, and the identification result is fed back to the shunting service module. In one illustrative example, the identification result may include, but is not limited to: the video provider ID (APPID), the video ID, the bandwidth, the resolution, and the like.
In an exemplary embodiment, the identification unit may notify the acquisition unit of the identification result through a special message. In an exemplary embodiment, the special message may be a private message between the recognition unit and the acquisition unit, such as a user plane GPRS tunneling protocol (GTP-U) message.
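A minimal sketch of how the shunting service module might filter DPI identification results against the preset rule and build the video service acquisition notice for the acquisition unit. The rule entries, field names and the print-based transport are assumptions; the application only states that a matching identification result and the corresponding user port are carried in the notice.

```python
# A minimal sketch of preset-rule filtering and notification; all rule entries
# and field names are illustrative assumptions.
from typing import Optional

PRESET_RULES = {
    # APPID -> protocol name (illustrative entries)
    "app-001": "HLS",
    "app-002": "DASH",
}


def build_notice(result: dict, user_port: int) -> Optional[dict]:
    """Return a video service acquisition notice when the result matches a preset rule."""
    if result.get("appid") not in PRESET_RULES:
        return None
    return {
        "appid": result["appid"],
        "video_id": result["video_id"],
        "bandwidth": result.get("bandwidth"),
        "resolution": result.get("resolution"),
        "user_port": user_port,
    }


def notify_acquisition_unit(notice: dict) -> None:
    # Placeholder transport; in the system described above this would be a
    # private message (for example over the user-plane tunnel) to the base station.
    print("video service acquisition notice:", notice)


result = {"appid": "app-001", "video_id": "v42", "bandwidth": 4_000_000, "resolution": "1080p"}
notice = build_notice(result, user_port=2152)
if notice:
    notify_acquisition_unit(notice)
```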
In an exemplary instance, the identification unit may be a Mobile Edge Computing (MEC) node.
The application also provides an MEC comprising the functionality of any one of the identification units.
And the acquisition unit is used for receiving the video service acquisition notice from the identification unit, detecting that a video service acquisition task exists, acquiring according to the parameters of the video service acquisition task, and reporting the acquisition result to the analysis unit.
In an exemplary embodiment, the acquisition unit performs all of its packet statistics when the uplink status report for a downlink packet is returned, and performs video merging calculation when it receives a video service acquisition notice from the identification unit: it matches by video provider ID (APPID) to find videos with the same APPID, and then identifies by video ID, videos with the same APPID and the same video ID being regarded as the same video;
a linked list node is established for each terminal IP identified as having video service;
and, according to the parameters carried by the video acquisition task, event-reported data and periodically reported data are collected during the video service of the terminal identified as having video service, and the terminal's data are reported to the analysis unit configured in the video acquisition task when that terminal's video ends or the terminal is released.
In an exemplary embodiment, the acquisition unit builds its index information mainly from the terminal's IP address, that is, a linked list node is established for each terminal IP identified as having video service. The IP address node information contains APPID linked list information, each APPID entry containing the video ID and the related statistics linked list information. Linked list nodes are established when a video is identified as starting, and each node includes, but is not limited to, the following (a minimal data-structure sketch is given after this list):
APPID and video ID;
a total data volume and port number list;
a video start time and a video end time;
the current video playing time;
the video playable time length and the video played time length;
a stalling start time and a stalling end time;
current video bitrate, current video resolution, and the like.
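The sketch below mirrors the index structure described above: one node per terminal IP, an APPID list under it, and per-video statistics under each APPID, with videos sharing the same APPID and video ID merged into one entry. All field names are assumptions of this sketch.

```python
# A minimal sketch of the per-terminal video index; field names are assumptions.
from dataclasses import dataclass, field
from typing import Dict, List, Optional


@dataclass
class VideoStats:
    appid: str
    video_id: str
    total_bytes: int = 0
    port_numbers: List[int] = field(default_factory=list)
    video_start_time: Optional[float] = None
    video_end_time: Optional[float] = None
    current_play_time: float = 0.0
    playable_duration_s: float = 0.0
    played_duration_s: float = 0.0
    stall_start_time: Optional[float] = None
    stall_end_time: Optional[float] = None
    current_bitrate_bps: int = 0
    current_resolution: str = ""


@dataclass
class TerminalNode:
    terminal_ip: str
    # APPID -> per-video statistics for that provider
    videos: Dict[str, List[VideoStats]] = field(default_factory=dict)


index: Dict[str, TerminalNode] = {}   # terminal IP -> node


def on_video_start(terminal_ip: str, appid: str, video_id: str) -> VideoStats:
    """Create or reuse the statistics entry for one (terminal, APPID, video ID)."""
    node = index.setdefault(terminal_ip, TerminalNode(terminal_ip))
    for stats in node.videos.setdefault(appid, []):
        if stats.video_id == video_id:    # same APPID + video ID: same video
            return stats
    stats = VideoStats(appid, video_id)
    node.videos[appid].append(stats)
    return stats


print(on_video_start("10.0.0.7", "app-001", "v42").video_id)  # -> "v42"
```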
In an exemplary example, the terminal's data may be compressed and reported to the analysis unit configured in the video acquisition task via a file transfer protocol, for example a secure file transfer protocol such as SFTP (SSH File Transfer Protocol).
In the method and device of the present application, video services are identified based on DPI; compared with traditional deep flow inspection (DFI), video services and applications in the network are identified more accurately, and the identified traffic is forwarded and processed according to pre-configured rules. Moreover, through the identification unit of the application, wireless network technology and internet technology are effectively integrated: computing, storage and processing functions are added on the wireless network side, information interaction between the wireless network and the service server is opened up, network delay is effectively reduced, the utilization of the wireless link is improved, and the traditional wireless base station is upgraded into an intelligent base station.
In one illustrative example, the acquisition unit may be a base station.
The present application also provides a base station comprising the functionality of any of the acquisition units.
And the analysis unit is used for analyzing and counting the reported data and periodically outputting a video service quality analysis result according to the parameters carried in the video acquisition task issued by the management unit.
In an exemplary embodiment, the analysis unit receives data reported by the acquisition unit, and analyzes video service data of the acquisition unit within a specified time according to a data reporting period and task time carried in a video acquisition task to obtain the video service quality analysis result. For example, the video service data may be evaluated from different dimensions, such as a video source quality score, an interactive experience quality score, and a viewing experience quality score, to obtain a total video service quality score as the video service quality analysis result.
In an exemplary embodiment, the quality of a video source depends mainly on factors such as its definition, fluency and fidelity, which can for example be covered by indexes in six dimensions: the resolution, frame rate, code rate, content, coding and terminal of the video source. In an embodiment of the present application, the video source quality is represented by the scores of the average code rate and the resolution of the video source, which serve as the video source quality scoring result; for the setting of the scoring criteria, reference can be made to Table 2.
In an exemplary example, the average code rate may be computed as shown in equation (1):

average code rate = (L1 + L2 + … + LN) / (T1 + T2 + … + TN)    (1)

In equation (1), Lk is the total traffic at the k-th code rate, in bits; Tk is the playable duration of the data at the corresponding code rate, in seconds; and N is the number of code rate changes during video playback.
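A small worked example of equation (1) under this reading: the total traffic summed over all code rates divided by the total playable duration. The numeric values are made up for illustration.

```python
# Average code rate per equation (1): sum of L_k divided by sum of T_k.
def average_code_rate(segments):
    """segments: list of (L_k total bits, T_k playable seconds) pairs."""
    total_bits = sum(l for l, _ in segments)
    total_seconds = sum(t for _, t in segments)
    return total_bits / total_seconds if total_seconds else 0.0


# Three code-rate changes during playback (N = 3), illustrative numbers:
segments = [(90_000_000, 30.0), (240_000_000, 60.0), (45_000_000, 15.0)]
print(f"average code rate: {average_code_rate(segments):.0f} bit/s")  # -> 3571429 bit/s
```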
In an exemplary embodiment, the interaction experience quality depends mainly on how quickly the system responds to the user's interactive operations, that is, on how convenient and efficient service operations are for the user while using the video service. In an embodiment of the present application, the interaction experience quality is represented by the score of the initial buffering duration, that is, the interaction experience quality scoring result is the initial buffering duration scoring result; an embodiment of the scoring criteria is given in Table 3 below.
In an exemplary embodiment, the method for calculating the initial buffering duration T1 may include:
let the time of the user's video playback request be T0; the network side (for example the base station side) detects the total data traffic L, at the corresponding code rate, successfully received by the user terminal, and the playable duration of the video is T;
when no code rate is reported in the period, T = L / default code rate; when code rates are reported in the period, T = L1/R1 + L2/R2 + … + Ln/Rn, where Ln denotes the total data traffic at code rate Rn, and n is an integer greater than 1 indexing the different code rates and traffic volumes;
when the playable duration satisfies T >= Tmin, that time is recorded as the playback start time Tp;
then the initial buffering duration is T1 = Tp - T0. For how T1 is scored, reference is made to Table 5 below.
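A minimal sketch of the T1 computation just described, under explicit assumptions: playback is deemed to start once the playable duration T reaches a threshold Tmin, and a default code rate is used when none has been reported. The threshold, default rate and sample values are illustrative.

```python
# Initial buffering duration T1 = Tp - T0; threshold and rates are assumptions.
DEFAULT_RATE_BPS = 2_000_000   # assumed default code rate when none is reported
T_MIN_S = 2.0                  # assumed minimum playable duration before playback


def playable_duration(traffic_by_rate):
    """traffic_by_rate: list of (L_n bits, R_n bit/s); R_n is None when unreported."""
    return sum(l / (r or DEFAULT_RATE_BPS) for l, r in traffic_by_rate)


def initial_buffering_duration(t0, samples):
    """samples: time-ordered list of (timestamp, traffic_by_rate snapshot)."""
    for tp, traffic in samples:
        if playable_duration(traffic) >= T_MIN_S:
            return tp - t0        # T1 = Tp - T0
    return None                   # playback never started in the window


samples = [
    (10.2, [(1_500_000, 1_500_000)]),                          # ~1.0 s playable
    (10.9, [(1_500_000, 1_500_000), (3_000_000, 1_500_000)]),  # ~3.0 s playable
]
print(initial_buffering_duration(t0=10.0, samples=samples))  # -> approximately 0.9
```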
In an exemplary embodiment, the viewing experience quality depends mainly on the quality of the program signal during video playback, that is, on whether quality degradation such as discontinuous or abnormal video images occurs. In an embodiment of the present application, the viewing experience quality is expressed by evaluating two indexes, the stalling recovery duration ratio and the stalling frequency; an embodiment of the score setting can be seen in Table 3.
In one illustrative example, the stalling recovery duration ratio may be computed as shown in equation (2):

stalling recovery duration ratio = (Ts1 + Ts2 + … + TsN) / video playback duration    (2)

In equation (2), Tsk is the duration of the k-th stalling recovery, in seconds; N is the number of stalls during video playback; and the video playback duration is in seconds.
In one illustrative example, the stalling frequency may be computed as shown in equation (3):

stalling frequency = 10 × N / video playback duration (in minutes)    (3)

Equation (3) gives the number of stalls per 10 minutes, where N is the number of stalls during video playback and the video playback duration is converted to minutes.
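A small worked example of equations (2) and (3), computing the stalling recovery duration ratio and the number of stalls per 10 minutes; the input values are made up.

```python
# Stalling recovery duration ratio (equation (2)) and stalls per 10 minutes
# (equation (3)); input values are illustrative.
def stall_recovery_ratio(stall_durations_s, play_duration_s):
    """Equation (2): sum of stalling recovery durations over playback duration."""
    return sum(stall_durations_s) / play_duration_s if play_duration_s else 0.0


def stall_frequency_per_10min(num_stalls, play_duration_s):
    """Equation (3): number of stalls per 10 minutes of playback."""
    play_minutes = play_duration_s / 60.0
    return 10.0 * num_stalls / play_minutes if play_minutes else 0.0


stalls = [1.5, 2.0, 0.8]   # three stalls, recovery durations in seconds
play_s = 600.0             # 10 minutes of playback
print(stall_recovery_ratio(stalls, play_s))             # -> ~0.00717
print(stall_frequency_per_10min(len(stalls), play_s))   # -> 3.0
```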
For the scoring criteria mentioned above, reference may be made to industry scoring mechanisms for video services; example settings are shown in Tables 4 and 5 below.
Table 4 (reproduced as an image in the original publication)
Table 4 shows an example of a video source quality score setting according to the present application.
Table 5 (reproduced as an image in the original publication)
Table 5 shows an example of the interactive quality of experience and viewing quality score settings according to the present application.
It should be noted that the above-mentioned scoring criteria may also be set according to the specific requirements of the video service, and the specific setting manner is not used to limit the scope of the present application.
The total video service quality score can then be computed from the scoring results for the video source quality, the interaction experience quality and the viewing experience quality.
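The application does not specify how the three component scores are combined, so the sketch below simply uses an equal-weight average as an assumed aggregation rule.

```python
# Total video service quality score as a weighted combination of the three
# component scores; equal weights are an assumption of this sketch.
def total_quality_score(source_score, interaction_score, viewing_score,
                        weights=(1 / 3, 1 / 3, 1 / 3)):
    w_src, w_int, w_view = weights
    return w_src * source_score + w_int * interaction_score + w_view * viewing_score


print(total_quality_score(4.2, 3.8, 4.5))   # -> about 4.17
```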
In an exemplary embodiment, the analysis result may be sent to the wireless operation and maintenance personnel periodically by mail, short message or the like. The operation and maintenance personnel can then optimize the base station's network in a targeted manner according to the analysis result, for example by checking the problem indexes of base stations with poor video service quality one by one, locating faults and optimizing the network, thereby improving the user's video experience.
In one illustrative example, the analysis unit may be a big data server.
The video service processing system identifies video services based on DPI, so that video services and applications in the network are identified more finely and accurately. Moreover, through the identification unit of the application, wireless network technology and internet technology are effectively integrated, and a basis is provided for network optimization, so that the user's experience of video services is effectively improved.
According to the method and device, the video service data of users under a specified base station, including information such as the video start/end time, the video code rate/resolution and the initial buffering duration/stalling start and stop times, are collected and reported to the data analysis center. The data analysis center intelligently analyzes the data reported by the base station and evaluates the base station's video service quality within the specified time period in a real-time and effective manner; the evaluation result is notified to the operation and maintenance personnel periodically, and they can optimize the network in a targeted manner according to the content of the analysis report, so that the quality of the user's video service experience is effectively improved.
The method and device realize automatic operation and maintenance of the video service and greatly reduce the workload of wireless operation and maintenance personnel: network optimization no longer has to be carried out every time a user complaint appears, but is performed as a continuous, periodic task. In short, the video service processing system offers highly automated processing, little intervention by operation and maintenance personnel and good long-term user experience, and plays a major role in the intelligent operation and maintenance of the wireless network.
Fig. 2 is a schematic flowchart of an embodiment of a video service processing method according to the present application, as shown in fig. 2, including:
step 200: and the base station receives the video service acquisition notice and detects whether a video service acquisition task exists.
In an illustrative example, the video traffic collection task is from a management unit, such as an EMS.
In one illustrative example, the parameters carried by the video acquisition task include any combination of:
the task time, including for example a video acquisition start time and a video acquisition stop time, and optionally also busy-hour and idle-hour time period settings, and the like;
task parameters, such as the data reporting period and the number of sampled UEs in a cell;
analysis information, such as the name of the analysis unit, the data-receiving IP address, and the File Transfer Protocol (FTP) information for the reported data;
the acquisition object, that is, the types of data the video acquisition task collects, which may be seen in Table 1.
In an illustrative example, the video traffic capture notification is from an identification unit, such as an MEC node.
In one illustrative example, the video traffic collection notification is carried with the following: and identifying results which accord with preset rules, and corresponding information such as user ports and the like.
Step 201: and the base station detects that a video service acquisition task exists, acquires according to the parameters of the video service acquisition task, and reports an acquisition result for analyzing to obtain a video service quality analysis result.
In one illustrative example, the step may include:
matching according to the video APPID to find out videos with the same APPID; identifying according to the video ID, wherein videos with the same APPID and video ID are regarded as the same video;
establishing a linked list node for each terminal IP identified as having video service;
and according to parameters carried by the video acquisition task, acquiring event reported data and periodically reported data in the video service process of the terminal identified as having the video service, and reporting the data of the terminal when the video of the terminal identified as having the video service is finished or the terminal is released for analyzing to obtain a video service quality analysis result.
In an exemplary embodiment, establishing a linked list node for each terminal IP identified as having video traffic comprises:
index information is established from the terminal's IP address, that is, a linked list node is established for each terminal IP identified as having video service; the IP address node information contains APPID linked list information, which includes the video ID and the related statistics linked list information; linked list nodes are established when a video is identified as starting, and each node includes, but is not limited to:
APPID and video ID;
a total data volume and port number list;
a video start time and a video end time;
the current video playing time;
the video playable time length and the video played time length;
a stalling start time and a stalling end time;
current video bit rate and current video resolution; and so on.
In an exemplary embodiment, the reporting the acquisition result may include:
and the data compression of the terminal is reported in a mode of a file transfer protocol, such as an SFTP (secure file transfer protocol) and other secure file transfer protocols.
An embodiment of the present invention further provides a computer-readable storage medium, in which computer-executable instructions are stored, where the computer-executable instructions are used to execute any one of the video service processing methods shown in fig. 2.
The embodiment of the invention also provides a device for realizing video service processing, which comprises a processor and a memory; wherein the memory has stored thereon a computer program operable on the processor to: for performing the steps of the video service processing method of any one of fig. 2.
The method and device realize automatic operation and maintenance of the video service and greatly reduce the workload of wireless operation and maintenance personnel: network optimization no longer has to be carried out every time a user complaint appears, but is performed as a continuous, periodic task. In short, the video service processing system offers highly automated processing, little intervention by operation and maintenance personnel and good long-term user experience, and plays a major role in the intelligent operation and maintenance of the wireless network.
Fig. 3 is a schematic flowchart of another embodiment of a video service processing method according to the present application, as shown in fig. 3, including:
step 300: and the MEC detects the data message in the network by adopting a DPI technology.
In an exemplary embodiment, DPI packet identification is based on DPI rules, which fall into two broad categories: primary rules and secondary rules. A primary rule performs L3/L4 message identification and is keyed on the message five-tuple; a secondary rule performs L7 message identification and additionally refers to the Layer 7 payload of the message. With DPI, network-layer data (such as the IP address and protocol type) and transport-layer data (such as the TCP/UDP port number) are matched quickly, the TCP/UDP payload is analyzed in depth, accurate protocol identification and application-protocol event identification are achieved, and both single-packet feature identification and flow-based multi-packet stateful feature identification are supported, which greatly improves the accuracy of protocol identification.
In an exemplary embodiment, the detecting, by the MEC, a data packet in a network by using a DPI technology may include:
and copying the data message, namely the downlink service flow message of the user, then carrying out real-time detection, identifying whether the data message, namely the data of the user, contains real video service data or not according to a pre-configured detection strategy, if the video feature code is detected in the data message, determining that the data message contains the real video service data, and obtaining an identification result. In one illustrative example, the recognition results may include, but are not limited to, such as: video provider ID (appid), video ID, bandwidth, resolution, etc.
Step 301: when the real video service data is detected in the data message, the base station associated with the MEC is notified to collect the video service.
In one illustrative example, the step may include:
and carrying the identification result which accords with the preset rule, the corresponding information such as the user port and the like in the video service acquisition notice and sending the information to the base station associated with the MEC.
In an exemplary embodiment, the identification result may be notified by a special message.
The video service processing method provided by the application identifies video services based on DPI, so that video services and applications in the network are identified more finely and accurately. In addition, the method and device effectively integrate the wireless network with internet technology and provide a basis for network optimization, so that the user's experience of the video service is effectively improved.
An embodiment of the present invention further provides a computer-readable storage medium, in which computer-executable instructions are stored, where the computer-executable instructions are used to execute any one of the video service processing methods shown in fig. 3.
The embodiment of the invention also provides a device for realizing video service processing, which comprises a processor and a memory; wherein the memory has stored thereon a computer program operable on the processor to: for performing the steps of the video service processing method of any one of fig. 3.
The video service processing method provided by the application can be applied to the precise delivery of personalized content recommendations to users. For example, a user video habit model is built for an individually customized user group (such as VIP users of a video service provider, or users who have enabled customization functions such as video subscription or hot-video recommendation). The video service task management part issues to the base station a task for acquiring relevant indexes such as the user's viewing time periods, viewing behavior, content preferences, viewing market and region; the base station, as the sensing side, identifies the user habit model and collects video data over a period of time. That is, the management unit issues the acquisition task, the collected content changes with the application scenario, and the acquisition process is identified through the interaction between the identification unit and the acquisition unit. The base station then reports the data to the data analysis center; after intelligently analyzing the video data, the big data analysis center analyzes the user data through the analysis unit of the application, identifies the network platform with the highest video quality, and feeds the statistics back to the video service provider to build content stickiness and explore a social operation mode differentiated by video content type, identifying all video resources related to the videos and comprehensively analyzing the network platform with the highest video quality through group management of the subdivided users. Later, when the user opens a browser to search for related content, the high-quality network platform is recommended preferentially, and whichever network platform is used, the related video resources are recommended preferentially, so that the user's individual subscription is satisfied and the commercial value of advertisement placement and the like is improved according to the user habit model.
The above description is only a preferred example of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (19)

1. A video traffic processing system, comprising: a management unit, an identification unit, an acquisition unit and an analysis unit; wherein,
the management unit is used for issuing video acquisition tasks to one or more acquisition units needing video service quality evaluation;
the identification unit is used for detecting the data message in the network by adopting a Deep Packet Inspection (DPI) technology, and informing the acquisition unit associated with the identification unit when real video service data is detected in the data message;
the acquisition unit is used for receiving the video service acquisition notice from the identification unit, detecting that a video service acquisition task exists, acquiring according to the parameters of the video service acquisition task, and reporting an acquisition result to the analysis unit;
and the analysis unit is used for analyzing and counting the reported data and periodically outputting a video service quality analysis result according to the configured model.
2. The video traffic processing system according to claim 1, wherein the identifying unit comprises: a shunting service module, a DPI module, wherein,
the shunting service module is used for copying the data message and then sending the data message to the DPI module; receiving an identification result fed back by the DPI module, and sending the identification result meeting a preset rule and a corresponding user port carried in a video service acquisition notice to the acquisition unit;
and the DPI module is used for detecting the received data message in real time, identifying whether the data message contains real video service data according to a pre-configured detection strategy, determining that the data message contains the real video service data if the video feature code is detected in the data message, and feeding back an identification result to the shunting service module.
3. The video service processing system according to claim 1, wherein the acquisition unit is specifically configured to:
receiving a video service acquisition notice from the identification unit, matching according to the ID of a video provider, and finding out videos with the same APPID; then, identifying according to the video ID, and considering the videos with the same APPID and video ID as the same video;
establishing a linked list node for each terminal IP identified as having video service;
and according to the parameters carried by the video acquisition task, acquiring event reported data and periodically reported data in the video service process of the terminal identified as having the video service, and reporting the data of the terminal to the analysis unit configured in the video acquisition task when the video of the terminal identified as having the video service is finished or the terminal is released.
4. The video service processing system according to claim 1, wherein the analysis unit is specifically configured to:
and analyzing the video service data of the acquisition unit within the specified time according to the data reporting period and the task time carried in the video acquisition task to obtain the video service quality analysis result.
5. The video service processing system according to claim 4, wherein the analyzing unit analyzes the video service data of the acquisition unit in a specified time to obtain the video service quality analysis result, and comprises:
and acquiring the total video service quality score as the video service quality analysis result according to the video source quality score, the interactive experience quality score and the watching experience quality score of the video service data.
6. The video service processing system according to any of claims 1 to 4, wherein the management unit is further configured to: and stopping the issued video acquisition task.
7. The video service processing system according to any of claims 1 to 4, wherein the management unit is further configured to: and changing the parameters of the video acquisition task.
8. The video service processing system according to any of claims 1 to 4, wherein one of said identification units corresponds to one of said acquisition units; or, one of the identification units serves as a sink node in a network and corresponds to one or more acquisition units.
9. The video service processing system according to any one of claims 1 to 4, wherein the management unit is an element management system EMS, the identification unit is a mobile edge computing MEC, the acquisition unit is a base station, and the analysis unit is a big data server.
10. A video service processing method comprises the following steps:
the base station receives the video service acquisition notice and detects whether a video service acquisition task exists;
and the base station detects that a video service acquisition task exists, acquires according to the parameters of the video service acquisition task, and reports an acquisition result for analyzing to obtain a video service quality analysis result.
11. The video service processing method according to claim 10, wherein said acquiring according to the parameters of the video service acquisition task includes:
matching according to the video APPID to find out videos with the same APPID; identifying according to the video ID, wherein videos with the same APPID and video ID are regarded as the same video;
establishing a linked list node for each terminal IP identified as having video service;
and according to parameters carried by the video acquisition task, acquiring event reported data and periodically reported data in the video service process of the terminal identified as having the video service, and reporting the data of the terminal when the video of the terminal identified as having the video service is finished or the terminal is released for analyzing to obtain a video service quality analysis result.
12. The video service processing method according to claim 10, wherein the parameters of the video acquisition task include an acquisition object; the acquisition object comprises any combination of:
video initial buffering phase data, whose reporting mode is event-triggered reporting;
periodic video data information, whose reporting mode is periodic reporting;
video stalling start and stop times, whose reporting mode is event-triggered reporting;
the video end time, whose reporting mode is event-triggered reporting;
the video acquisition period, whose reporting mode is periodic reporting.
13. A computer-readable storage medium storing computer-executable instructions for performing the video service processing method of any one of claims 10 to 12.
14. An apparatus for implementing video service processing includes a processor, a memory; wherein the memory has stored thereon a computer program operable on the processor to: steps for performing the video traffic processing method of any one of claims 10 to 12.
15. A video service processing method comprises the following steps:
the MEC detects the data message in the network by adopting a DPI technology;
when the real video service data is detected in the data message, the base station associated with the MEC is notified to collect the video service.
16. The video service processing method according to claim 15, wherein the detecting the data packets in the network by using the DPI technology includes:
and copying the data message, then carrying out real-time detection, identifying whether the data message contains real video service data according to a pre-configured detection strategy, if the video feature code is detected in the data message, determining that the data message contains the real video service data, and obtaining an identification result.
17. The video traffic processing method of claim 15, wherein the notifying a base station associated with the MEC to collect the video traffic comprises:
and carrying the identification result which accords with a preset rule and corresponding user port information in the video service acquisition notice and sending the identification result and the corresponding user port information to a base station associated with the MEC.
18. A computer-readable storage medium storing computer-executable instructions for performing the video service processing method of any one of claims 15 to 17.
19. An apparatus for implementing video service processing includes a processor, a memory; wherein the memory has stored thereon a computer program operable on the processor to: steps for performing the video traffic processing method of any one of claims 15 to 17.
CN201910476215.3A 2019-06-03 2019-06-03 Video service processing method, system and device Pending CN112039680A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201910476215.3A CN112039680A (en) 2019-06-03 2019-06-03 Video service processing method, system and device
PCT/CN2020/080899 WO2020244283A1 (en) 2019-06-03 2020-03-24 Video service processing method, system, and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910476215.3A CN112039680A (en) 2019-06-03 2019-06-03 Video service processing method, system and device

Publications (1)

Publication Number Publication Date
CN112039680A true CN112039680A (en) 2020-12-04

Family

ID=73576499

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910476215.3A Pending CN112039680A (en) 2019-06-03 2019-06-03 Video service processing method, system and device

Country Status (2)

Country Link
CN (1) CN112039680A (en)
WO (1) WO2020244283A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114640884B (en) * 2022-03-21 2023-01-31 广东易教优培教育科技有限公司 Online video playing quality analysis method, system and computer storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090252148A1 (en) * 2008-04-03 2009-10-08 Alcatel Lucent Use of DPI to extract and forward application characteristics
CN107493519A (en) * 2016-06-13 2017-12-19 中兴通讯股份有限公司 A kind of network quality appraisal procedure and device based on user video experience
CN107659856A (en) * 2017-07-04 2018-02-02 中国科学技术大学 The acquisition method of mobile video business experience qualitative data collection based on user feedback
CN109644199A (en) * 2016-10-18 2019-04-16 华为技术有限公司 Virtual network condition managing in mobile edge calculations

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003037630A (en) * 2001-07-23 2003-02-07 Matsushita Electric Ind Co Ltd Method and device for measuring service quality in transmission of digital contents, and method and device for controlling service quality
CN105978910A (en) * 2016-07-14 2016-09-28 中国联合网络通信集团有限公司 Video service quality index generating method, device and system
CN107889126B (en) * 2016-09-30 2021-04-27 中国电信股份有限公司 Network state identification method, DPI monitoring and analyzing equipment and network system
CN106961632B (en) * 2017-04-12 2020-03-06 四川九鼎瑞信软件开发有限公司 Video quality analysis method and device
CN109039775A (en) * 2018-09-12 2018-12-18 网宿科技股份有限公司 Quality of service monitoring method, apparatus and system

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090252148A1 (en) * 2008-04-03 2009-10-08 Alcatel Lucent Use of DPI to extract and forward application characteristics
CN107493519A (en) * 2016-06-13 2017-12-19 中兴通讯股份有限公司 A kind of network quality appraisal procedure and device based on user video experience
CN109644199A (en) * 2016-10-18 2019-04-16 华为技术有限公司 Virtual network condition managing in mobile edge calculations
CN107659856A (en) * 2017-07-04 2018-02-02 中国科学技术大学 The acquisition method of mobile video business experience qualitative data collection based on user feedback

Also Published As

Publication number Publication date
WO2020244283A1 (en) 2020-12-10

Similar Documents

Publication Publication Date Title
US10674387B2 (en) Video quality monitoring
US9032427B2 (en) System for monitoring a video network and methods for use therewith
WO2019223553A1 (en) Network traffic identification method and related device
CN104410516B (en) A kind of customer service perceptibility appraisal procedure and device
CN102006605A (en) Methods and apparatus to identify wireless carrier performance effects
EP3425909B1 (en) Video quality monitoring
US20210168049A1 (en) Quality of service monitoring method, device, and system
CN103650440A (en) Systems and methods for detection for prioritizing and scheduling packets in a communication network
CN102142990A (en) Traffic monitoring method and device
US11349731B2 (en) Data collection for the evaluation of the quality of experience of a service over a communications network
US20220353314A1 (en) Network data scheduling method and edge node thereof
CN109982293A (en) Flow product method for pushing, system, electronic equipment and storage medium
CN104753812A (en) Systems and methods for cooperative applications in communication systems
FR3016108B1 (en) MANAGING THE QUALITY OF APPLICATIONS IN A COOPERATIVE COMMUNICATION SYSTEM
CN110972199B (en) Flow congestion monitoring method and device
CN112039680A (en) Video service processing method, system and device
CN112752111B (en) Live stream processing method and device, computer readable storage medium and electronic equipment
CN108271189A (en) A kind of quality of service monitoring method and device
WO2022152230A1 (en) Information flow identification method, network chip, and network device
US11240544B1 (en) System, device, and method of differentiating between streaming live-video flows and streaming non-live-video flows
WO2015027860A1 (en) Video service processing method and apparatus, and network device
Tang et al. Analysis on the state of mobile HTTP video streaming at the client-side
CN109121073A (en) Mobile communication business quality monitoring method, device and equipment
KR101329864B1 (en) Method and apparatus for monitoring iptv service quality
CN117997744A (en) Network acceleration method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20201204