CN117241071B - Method for sensing video stutter quality degradation based on a machine learning algorithm
- Publication number: CN117241071B (application CN202311514598.1A)
- Authority: CN (China)
- Legal status: Active
Abstract
The invention discloses a method for perceiving video stutter quality degradation based on a machine learning algorithm. Building on the traditional internet log analysis method based on deep packet inspection, it integrates a machine learning algorithm so that video stutter can be detected efficiently, comprehensively and realistically, avoiding the problems caused by changes in video service characteristics.
Description
Technical Field
The invention relates to the technical field of machine learning, and in particular to a method for sensing video stutter quality degradation based on a machine learning algorithm.
Background
With the development of 5G, video service applications are becoming more widespread, and this rapid growth has led mobile network operators to fundamentally reconsider and optimize their networks. To carry out such optimization and capacity planning, operators must develop a deep understanding of, and continuously monitor, the quality of experience (QoE) of video delivery. Given limited mobile network bandwidth, network service providers need objective QoE evaluation in order to provision network access reasonably, deliver a high-quality mobile video experience, and improve network utilization.
Factors influencing video QoE have been studied at home and abroad, with three main influencing factors selected: bit-rate change, initial delay and stuttering. By comparing the degree to which each factor affects video QoE, researchers have concluded that stuttering affects video QoE more severely than indexes such as initial delay. Research on video stutter detection methods has therefore become one of the research hotspots in recent years.
The main approaches to video stutter detection include quality metrics based on image features, quality metrics based on dial-test measurement, and user internet log analysis based on deep packet inspection (DPI) technology.
Methods based on image-feature quality metrics evaluate the user experience of a stuttering video by extracting image features from the decoded video and estimating the number and duration of stutter events. However, this type of method has the following problems: on the one hand, it is difficult for an operator to decode a video played on a client at a network access point and extract image features; on the other hand, it is difficult for the operator to reproduce the real playback scene of the user terminal at the network access point, so the availability of this scheme for network operators requires further study.
In the dial-test-based approach, dial-test probes deployed at each dial-test node perform timed, quantitative dial tests on video packets, simulating the behavior of a user watching videos, and count quality evaluation indexes strongly perceived on the user side that occur during the dial test, such as the number of stutters, stutter duration, download time, throughput rate and playback rate. However, the indexes obtained by dial testing are always collected under specific conditions and cannot comprehensively and realistically estimate user perception.
The internet log analysis method based on deep packet inspection collects network packets through probe equipment to generate an XDR (External Data Representation) ticket, and obtains index data reflecting video stutter through feature extraction over the full XDR ticket data and a quality index algorithm model. However, as video services develop, their features change continuously, which requires a great deal of manual service analysis work; moreover, with the popularization of HTTPS, the features may disappear altogether.
Disclosure of Invention
The invention aims to solve these problems, and designs a method for sensing video stutter quality degradation based on a machine learning algorithm, which is used within an internet log analysis method based on deep packet inspection to detect video stutter, and comprises the following steps:
s1, collecting network traffic through a DPI technology to generate an XDR ticket, and completing preprocessing work;
s2, performing feature engineering on the XDR ticket;
s3, performing model training by using a GBDT algorithm; taking the labeled samples generated in step S2 after feature extraction as a training set, and adopting the GBDT algorithm with GridSearchCV parameter tuning to construct a prediction model for predicting stutter data in a future time period;
s4, predicting the test set by using the trained model; taking the unlabeled samples generated in step S2 after feature extraction as a test set, calling the prediction model generated in step S3 to predict, evaluating the model, and determining whether to perform feature optimization or model optimization according to the model evaluation result;
s5, repeating S2-S4 in a loop: constructing data from periodic XDR tickets and performing feature engineering, constructing a data set, obtaining labeled training samples and training a model, predicting unlabeled test samples with the trained model, and evaluating the model;
if the evaluation result is poor, training and evaluating the model again by optimizing the feature engineering and/or the model, forming a closed loop;
if the evaluation result is good, ending the flow and outputting the stutter data prediction result.
Further, S1 further includes:
s11, collecting network flow data in real time, and generating an XDR ticket;
s12, performing abnormal-value and null-value processing on the ticket data;
s13, constructing sample labels for video stutter using the currently effective video service characteristics, where the video service characteristics comprise the access domain name, access URL and UserAgent fields obtained after capturing, analyzing and summarizing video service communication traffic.
Further, in S11, the network traffic data includes at least one of source IP, destination IP, source port, destination port, transport layer protocol, user internet access account, access domain name, access URL, userAgent, uplink and downlink traffic, uplink and downlink packet number, link establishment delay, first packet delay, and response delay.
Further, in S12, records whose key attribute fields (source IP, destination IP, source port, destination port and access domain name) are null are deleted;
for the index fields of link establishment delay, first packet delay and response delay, abnormal values are detected with the interquartile-range (IQR) box-plot method: the data are sorted from low to high by the value of the field under inspection and divided into four equal parts; the lower quartile QL, the upper quartile QU and the interquartile range IQR = QU - QL are calculated; when a field value is smaller than QL - 1.5*IQR or larger than QU + 1.5*IQR, it is marked as abnormal and the record is deleted.
Further, S2 at least includes:
s21, feature selection: selecting, from the original features, the relevant features that meet the initial setting requirements by removing irrelevant, redundant or noisy features;
s22, feature construction: based on an understanding of the video service, using the feature fields already present in the XDR ticket to construct new feature fields through basic transformations between feature fields and feature statistics over a unit period;
s23, feature extraction: adopting a support vector machine recursive feature elimination algorithm. A combination of variables that achieves the required model performance is obtained by adding or removing specific feature variables: samples are trained through the model, each feature is then ranked by score, the feature with the lowest score is removed, the model is trained again with the remaining features, and the next iteration proceeds until the specified number of features is finally selected.
Further, S23 at least includes:
s231, training all the features by using a logistic regression classifier as a base model, sorting all the features by using the returned weight coefficients, and deleting the features with the minimum coefficients;
and S232, repeating the training, ranking and deletion steps of S231 on the features remaining after deletion until the specified number of features remain.
Further, S3 at least includes:
s31, using the GBDT algorithm as a classifier, and adopting the GridSearchCV method to automatically tune the GBDT parameters (the maximum number of weak learners, the maximum depth, the maximum number of features and the minimum number of samples per leaf node) to obtain the optimal parameter values;
s32, training a prediction model by using the optimal parameter value.
Further, S4 at least includes:
s41, constructing a ticket test set to be predicted;
s42, inputting the ticket test set to be predicted into the prediction model, and outputting the stutter data prediction result.
The method for sensing video stutter quality degradation based on a machine learning algorithm, produced with the above technical scheme, achieves the following beneficial effects:
On the basis of the traditional internet log analysis method based on deep packet inspection, a machine learning algorithm is integrated, so that video stutter can be detected efficiently, comprehensively and realistically, and the problems caused by changes in video service characteristics are avoided.
The method takes an XDR ticket as input, labels stutter tickets through the currently effective video service characteristics, and adopts the GBDT (Gradient Boosting Decision Tree) algorithm and the RFE (Recursive Feature Elimination) algorithm to complete feature selection and model training. It analyzes and predicts video stutter from multiple influencing factors, reflects the quality of the video service experience more accurately, and helps operators discover and solve problems in advance, facilitating service-side optimization so as to realize cross-layer intelligent optimization of user experience and improve perceived service quality.
Drawings
FIG. 1 is a flow chart of the method for sensing video stutter quality degradation based on a machine learning algorithm according to the present invention;
FIG. 2 is a flow chart of S1 according to the present invention;
FIG. 3 is a flow chart of S2 according to the present invention;
FIG. 4 is a flow chart of S3 according to the present invention;
fig. 5 is a flowchart of S4 according to the present invention.
Detailed Description
For a better understanding of the present invention, the present invention is further described below with reference to specific examples and drawings.
Examples
As shown in fig. 1, a method for sensing video stutter quality degradation based on a machine learning algorithm is used within an internet log analysis method based on deep packet inspection, and the method includes:
s1, collecting network traffic through a DPI technology to generate an XDR ticket, and completing preprocessing work;
as shown in fig. 2, in particular, S1 further includes:
s11, collecting network flow data in real time, and generating an XDR ticket;
specifically, in S11, the network traffic data includes at least one of source IP, destination IP, source port, destination port, transport layer protocol, user internet access account, access domain name, access URL, userAgent, uplink and downlink traffic, uplink and downlink packet number, link establishment delay, first packet delay, and response delay.
S12, performing abnormal-value and null-value processing on the ticket data;
specifically, in S12, records whose key attribute fields (source IP, destination IP, source port, destination port and access domain name) are null are deleted;
for the index fields of link establishment delay, first packet delay and response delay, abnormal values are detected with the interquartile-range (IQR) box-plot method: the data are sorted from low to high by the value of the field under inspection and divided into four equal parts; the lower quartile QL, the upper quartile QU and the interquartile range IQR = QU - QL are calculated; when a field value is smaller than QL - 1.5*IQR or larger than QU + 1.5*IQR, it is marked as abnormal and the record is deleted.
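The quartile rule above can be sketched in a few lines of Python. The delay values below are invented for illustration, and the simple interpolated quantile stands in for whatever quantile convention a production system would use.

```python
# Sketch of the IQR (interquartile-range) outlier rule described above,
# applied to a hypothetical list of first-packet delays in milliseconds.
def iqr_bounds(values):
    """Return (lower, upper) cut-offs: QL - 1.5*IQR and QU + 1.5*IQR."""
    s = sorted(values)
    n = len(s)

    def quartile(q):
        # simple linear-interpolation quantile over the sorted values
        pos = q * (n - 1)
        lo = int(pos)
        frac = pos - lo
        return s[lo] + (s[min(lo + 1, n - 1)] - s[lo]) * frac

    ql, qu = quartile(0.25), quartile(0.75)
    iqr = qu - ql
    return ql - 1.5 * iqr, qu + 1.5 * iqr

delays = [12, 15, 14, 13, 16, 15, 14, 180]  # 180 ms is an obvious outlier
lower, upper = iqr_bounds(delays)
cleaned = [v for v in delays if lower <= v <= upper]
```

Records whose delay falls outside the `[lower, upper]` band would be dropped from the ticket, exactly as S12 prescribes.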
S13, constructing sample labels for video stutter using the currently effective video service characteristics, where the video service characteristics comprise the access domain name, access URL and UserAgent fields obtained after capturing, analyzing and summarizing video service communication traffic.
Specifically, the recognition rules are derived from capturing, analyzing and summarizing video service communication traffic; for example, a video website sends a message to a specific domain name when playing a video clip, and its request URI contains certain fixed character strings. These rules can be expressed as regular expressions, such as the access domain name rule "btrace\(qq|play\aisetj atianqi) \com" and the URI rule "((bossil=9032.& step=2) | (& step=2..bossil=9032))".
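As a hypothetical illustration of such rule-based labelling, the sketch below matches a domain rule and a URI rule against one XDR record; the patterns and field names are placeholders, not the patent's actual rules.

```python
import re

# Placeholder rules for illustration only; real rules come from analysing
# the communication traffic of the specific video service.
DOMAIN_RULE = re.compile(r"btrace\.(example|video)\.com")
URI_RULE = re.compile(r"(BossId=9032.*&step=2)|(step=2)")

def label_stutter(record):
    """Label an XDR record as a stutter sample (1) if both rules match."""
    if DOMAIN_RULE.search(record["domain"]) and URI_RULE.search(record["uri"]):
        return 1
    return 0

sample = {"domain": "btrace.video.com",
          "uri": "/kvcollect?BossId=9032&Pwd=1&step=2"}
label = label_stutter(sample)
```

Records labelled 1 become the positive (stutter) training samples of S13; everything else stays unlabeled until it reaches the test set.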
The method further comprises: S2, performing feature engineering on the XDR ticket;
as shown in fig. 3, specifically, S2 includes at least:
s21, feature selection: selecting, from the original features, the relevant features that meet the initial setting requirements by removing irrelevant, redundant or noisy features;
The specific process is as follows: first, value statistics are computed for all feature fields in the data; second, feature fields whose missing-value proportion exceeds 50%, whose single constant value accounts for more than 90% of records, or which are obviously irrelevant in the statistical result are deleted.
S22, feature construction: based on an understanding of the video service, using the feature fields already present in the XDR ticket to construct new feature fields through basic transformations between feature fields and feature statistics over a unit period;
specifically, a stream duration field is constructed from the stream start time and stream end time feature fields, its value being the stream end time minus the stream start time; a downlink rate field is constructed from the downlink byte count and the stream duration field, its value being the downlink byte count divided by the stream duration.
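For instance, the two derived fields just described might be computed as follows; the field names are assumptions for illustration, not the ticket's actual schema.

```python
# Derived-field construction: stream duration from start/end timestamps,
# downlink rate from byte count and duration.
def build_features(record):
    duration = record["stream_end"] - record["stream_start"]  # seconds
    record["stream_duration"] = duration
    # guard against zero-length flows before dividing
    record["downlink_rate"] = record["dl_bytes"] / duration if duration else 0.0
    return record

rec = build_features({"stream_start": 100.0, "stream_end": 104.0,
                      "dl_bytes": 8000})
```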
s23, feature extraction: adopting a support vector machine recursive feature elimination algorithm. A combination of variables that achieves the required model performance is obtained by adding or removing specific feature variables: samples are trained through the model, each feature is ranked by score, the feature with the lowest score is removed, the model is trained again with the remaining features, and the next iteration proceeds until the specified number of features is finally selected. This is a sequential backward selection algorithm based on the maximum-margin principle of SVMs.
Specifically, S23 at least includes:
s231, training all the features by using a logistic regression classifier as a base model, sorting all the features by using the returned weight coefficients, and deleting the features with the minimum coefficients;
and S232, repeating the training, ranking and deletion steps of S231 on the features remaining after deletion until the specified number of features remain.
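The elimination loop of S231-S232 can be sketched in pure Python. Note that a simple covariance score stands in here for the logistic-regression weight magnitudes the text describes, so this is a structural sketch of recursive feature elimination, not the patent's actual scorer.

```python
# Recursive feature elimination: repeatedly drop the lowest-scoring
# feature until only n_keep features remain.
def score(xs, ys):
    """Absolute covariance of one feature column with the labels
    (a stand-in for a classifier's weight magnitude)."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return abs(sum((x - mx) * (y - my) for x, y in zip(xs, ys)))

def rfe(columns, labels, n_keep):
    remaining = dict(columns)
    while len(remaining) > n_keep:
        # rank surviving features, drop the lowest-scoring one
        worst = min(remaining, key=lambda f: score(remaining[f], labels))
        del remaining[worst]
    return set(remaining)

columns = {
    "dl_rate": [1.0, 2.0, 3.0, 4.0],   # tracks the label strongly
    "noise":   [5.0, 5.0, 5.0, 5.0],   # constant, no signal
    "delay":   [0.0, 1.0, 1.0, 2.0],
}
labels = [0, 0, 1, 1]
kept = rfe(columns, labels, n_keep=2)
```

The constant `noise` column is eliminated first, leaving the two informative features, mirroring how S232 whittles the set down to the specified size.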
The method further comprises: S3, performing model training using the GBDT algorithm; taking the labeled samples generated in step S2 after feature extraction as a training set, and adopting the GBDT algorithm with GridSearchCV parameter tuning to construct a prediction model for predicting stutter data in a future time period.
GBDT is an ensemble algorithm based on decision trees, consisting of many CART (classification and regression) trees whose outputs are accumulated to form the final answer. Each tree learns the residual of the sum of all previous trees' outputs (approximated by the negative gradient), where the residual is the difference between the true value and the accumulated predicted value. The algorithm has a wide application range (regression and classification), is insensitive to feature scaling (no normalization of feature values required), needs relatively little parameter tuning, and achieves high prediction accuracy, while GridSearchCV performs automatic parameter tuning and returns the optimal parameter combination.
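The residual-fitting idea can be illustrated numerically with depth-1 "stumps" as the weak learners; this is a toy regression on invented data, not the patent's classifier.

```python
# Each boosting round fits a one-split "stump" to the residuals of the
# running prediction; the final answer is the accumulated sum of rounds.
def fit_stump(xs, residuals):
    """Best single-threshold split minimising squared error (brute force)."""
    best = None
    for t in xs:
        left = [r for x, r in zip(xs, residuals) if x <= t]
        right = [r for x, r in zip(xs, residuals) if x > t]
        if not left or not right:
            continue
        lv, rv = sum(left) / len(left), sum(right) / len(right)
        err = (sum((r - lv) ** 2 for r in left)
               + sum((r - rv) ** 2 for r in right))
        if best is None or err < best[0]:
            best = (err, t, lv, rv)
    _, t, lv, rv = best
    return lambda x: lv if x <= t else rv

def boost(xs, ys, rounds=20, lr=0.5):
    pred = [0.0] * len(xs)
    for _ in range(rounds):
        residuals = [y - p for y, p in zip(ys, pred)]  # what is still unexplained
        stump = fit_stump(xs, residuals)
        pred = [p + lr * stump(x) for p, x in zip(pred, xs)]
    return pred

xs = [1.0, 2.0, 3.0, 4.0]
ys = [0.0, 0.0, 1.0, 1.0]
pred = boost(xs, ys)
```

After a few rounds the accumulated predictions converge to the targets, which is exactly the "conclusions of all trees accumulated" behaviour described above.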
As shown in fig. 4, specifically, S3 includes at least:
s31, using the GBDT algorithm as a classifier, and adopting the GridSearchCV method to automatically tune the GBDT parameters (the maximum number of weak learners, the maximum depth, the maximum number of features and the minimum number of samples per leaf node) to obtain the optimal parameter values;
s32, training a prediction model by using the optimal parameter value.
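Assuming a scikit-learn-style implementation, S31-S32 might look like the following; the grid values are illustrative, and `make_classification` stands in for the labelled XDR training set.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV

# synthetic stand-in for the labelled training samples from S2
X, y = make_classification(n_samples=200, n_features=8, random_state=0)

# the four tuned hyper-parameters named in S31: number of weak learners,
# maximum depth, maximum feature count, minimum samples per leaf
param_grid = {
    "n_estimators": [50, 100],
    "max_depth": [2, 3],
    "max_features": ["sqrt", None],
    "min_samples_leaf": [1, 5],
}
search = GridSearchCV(GradientBoostingClassifier(random_state=0),
                      param_grid, cv=3, scoring="f1")
search.fit(X, y)
best_model = search.best_estimator_  # S32: model trained with optimal values
```

`search.best_params_` holds the optimal combination that S31 refers to, and `best_model` is the prediction model passed on to S4.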
The method further comprises: S4, predicting the test set using the trained model; the unlabeled samples generated in step S2 after feature extraction are taken as the test set, the prediction model generated in step S3 is called to predict them, the model is evaluated, and whether to perform feature optimization or model optimization is determined according to the model evaluation result.
As shown in fig. 5, specifically, S4 includes at least:
s41, constructing a ticket test set to be predicted;
s42, inputting the ticket test set to be predicted into the prediction model, and outputting the stutter data prediction result.
The method further comprises: S5, repeating S2-S4 in a loop: constructing data from periodic XDR tickets and performing feature engineering, constructing a data set, obtaining labeled training samples and training a model, predicting unlabeled test samples with the trained model, and evaluating the model;
if the evaluation result is poor, training and evaluating the model again by optimizing the feature engineering and/or the model, forming a closed loop;
if the evaluation result is good, ending the flow and outputting the stutter data prediction result.
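One way to make the "evaluate, then decide whether to loop" step concrete is precision and recall against a manually verified subset of predictions; the labels below are invented for illustration.

```python
# Compare predicted stutter labels against a verified subset; low precision
# or recall would trigger another round of feature or model optimisation.
def precision_recall(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

p, r = precision_recall([1, 0, 1, 1, 0], [1, 0, 0, 1, 1])
```

Whether these scores count as a "good" evaluation result is a threshold choice left to the operator; the patent itself does not fix the metric.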
The method takes an XDR ticket as input, labels stutter tickets through the currently effective video service characteristics, and adopts the GBDT (Gradient Boosting Decision Tree) algorithm and the RFE (Recursive Feature Elimination) algorithm to complete feature selection and model training. It analyzes and predicts video stutter from multiple influencing factors, reflects the quality of the video service experience more accurately, and helps operators discover and solve problems in advance, facilitating service-side optimization so as to realize cross-layer intelligent optimization of user experience and improve perceived service quality.
The above represents only a preferred embodiment of the present invention; changes that those skilled in the art may make to parts of the technical solution in keeping with its principles also fall within the scope of the present invention.
Claims (5)
1. A method for sensing video stutter quality degradation based on a machine learning algorithm, characterized in that the method is used within an internet log analysis method based on deep packet inspection to detect video stutter, and comprises the following steps:
s1, collecting network traffic through a DPI technology to generate an XDR ticket, and completing preprocessing work;
wherein S1 further comprises:
s11, collecting network flow data in real time, and generating an XDR ticket;
in S11, the network traffic data includes at least one of source IP, destination IP, source port, destination port, transport layer protocol, user internet access account, access domain name, access URL, UserAgent, uplink and downlink traffic, uplink and downlink packet count, link establishment delay, first packet delay, and response delay;
s12, performing abnormal-value and null-value processing on the ticket data;
in S12, records whose key attribute fields (source IP, destination IP, source port, destination port and access domain name) are null are deleted;
for the index fields of link establishment delay, first packet delay and response delay, abnormal values are detected with the interquartile-range (IQR) box-plot method: the data are sorted from low to high by the value of the field under inspection and divided into four equal parts; the lower quartile QL, the upper quartile QU and the interquartile range IQR = QU - QL are calculated; when a field value is smaller than QL - 1.5*IQR or larger than QU + 1.5*IQR, it is marked as abnormal and the record is deleted;
s13, constructing sample labels for video stutter using the currently effective video service characteristics, where the video service characteristics comprise the access domain name, access URL and UserAgent fields obtained after capturing, analyzing and summarizing video service communication traffic;
s2, performing feature engineering on the XDR ticket;
s3, performing model training by using a GBDT algorithm; taking the labeled samples generated in step S2 after feature extraction as a training set, and adopting the GBDT algorithm with GridSearchCV parameter tuning to construct a prediction model for predicting stutter data in a future time period;
s4, predicting the test set by using the trained model; taking the unlabeled samples generated in step S2 after feature extraction as a test set, calling the prediction model generated in step S3 to predict, evaluating the model, and determining whether to perform feature optimization or model optimization according to the model evaluation result;
s5, repeating S2-S4 in a loop: constructing data from periodic XDR tickets and performing feature engineering, constructing a data set, obtaining labeled training samples and training a model, predicting unlabeled test samples with the trained model, and evaluating the model;
if the evaluation result is poor, training and evaluating the model again by optimizing the feature engineering and/or the model, forming a closed loop;
if the evaluation result is good, ending the flow and outputting the stutter data prediction result.
2. The method for sensing video stutter quality degradation based on a machine learning algorithm of claim 1, wherein S2 at least comprises:
s21, feature selection: selecting, from the original features, the relevant features that meet the initial setting requirements by removing irrelevant, redundant or noisy features;
s22, feature construction: based on an understanding of the video service, using the feature fields already present in the XDR ticket to construct new feature fields through basic transformations between feature fields and feature statistics over a unit period;
s23, feature extraction: adopting a support vector machine recursive feature elimination algorithm; a combination of variables that achieves the required model performance is obtained by adding or removing specific feature variables: samples are trained through the model, each feature is ranked by score, the feature with the lowest score is removed, the model is trained again with the remaining features, and the next iteration proceeds until the specified number of features is finally selected.
3. The method of claim 2, wherein S23 at least comprises:
s231, training all the features by using a logistic regression classifier as a base model, sorting all the features by using the returned weight coefficients, and deleting the features with the minimum coefficients;
and S232, repeating the training, ranking and deletion steps of S231 on the features remaining after deletion until the specified number of features remain.
4. The method for sensing video stutter quality degradation based on a machine learning algorithm of claim 1, wherein S3 at least comprises:
s31, using the GBDT algorithm as a classifier, and adopting the GridSearchCV method to automatically tune the GBDT parameters (the maximum number of weak learners, the maximum depth, the maximum number of features and the minimum number of samples per leaf node) to obtain the optimal parameter values;
s32, training a prediction model by using the optimal parameter value.
5. The method for sensing video stutter quality degradation based on a machine learning algorithm of claim 1, wherein S4 at least comprises:
s41, constructing a ticket test set to be predicted;
s42, inputting the ticket test set to be predicted into the prediction model, and outputting the stutter data prediction result.
Priority Applications (1)
- CN202311514598.1A (CN117241071B), priority date 2023-11-15, filing date 2023-11-15: Method for sensing video stutter quality degradation based on a machine learning algorithm
Publications (2)
- CN117241071A, published 2023-12-15
- CN117241071B, granted 2024-02-06
Family ID: 89089775
Citations (10)
- CN108768695A, 2018-04-27, Huawei Technologies Co., Ltd.: KQI problem localization method and device
- CN110958491A, 2018-09-27, ZTE Corporation: Video stutter model training method, video stutter model prediction method, server and storage medium
- CN111107423A, 2018-10-29, China Mobile Group Zhejiang Co., Ltd.: Video service playback stutter identification method and device
- CN112995652A, 2021-02-01, Tencent Technology (Shenzhen) Co., Ltd.: Video quality evaluation method and device
- CN113099475A, 2021-04-20, China Mobile Group Shaanxi Co., Ltd.: Network quality detection method and device, electronic equipment and readable storage medium
- CN113453076A, 2020-03-24, China Mobile Group Hebei Co., Ltd.: User video service quality evaluation method and device, computing equipment and storage medium
- CN113780398A, 2021-09-02, Keda Guochuang Cloud Network Technology Co., Ltd.: Wireless network link quality prediction method and system
- CN115905619A, 2022-10-09, Shanghai Bilibili Technology Co., Ltd.: Scheme for evaluating video quality of experience
- WO2023116233A1, 2021-12-20, Beijing ByteDance Network Technology Co., Ltd.: Video stutter prediction method and apparatus, device and medium
- CN116915630A, 2022-12-02, China Mobile Group Hebei Co., Ltd.: Network stutter prediction method, device, electronic equipment, medium and program product
Non-Patent Citations (2)
Title |
---|
万仁辉 et al. Network complaint prediction analysis based on machine learning. 电信工程技术与标准化 (Telecom Engineering Technics and Standardization), 2020, No. 8, full text. *
Also Published As
Publication number | Publication date |
---|---|
CN117241071A (en) | 2023-12-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN102420701B (en) | Method for extracting internet service flow characteristics | |
CN109873726B (en) | Robust service quality prediction and guarantee method based on deep learning in SDN | |
CN106656629B (en) | Method for predicting streaming media playing quality | |
CN111107423A (en) | Video service playback stutter identification method and device | |
CN103152599A (en) | Mobile video service user experience quality evaluation method based on ordinal regression | |
CN112003869B (en) | Vulnerability identification method based on flow | |
CN115967504A (en) | Encrypted malicious traffic detection method and device, storage medium and electronic device | |
Manzoor et al. | How HTTP/2 is changing web traffic and how to detect it | |
CN106535240A (en) | Mobile APP centralized performance analysis method based on cloud platform | |
CN117241071B (en) | Method for sensing video katon quality difference based on machine learning algorithm | |
Shaikh et al. | Modeling and analysis of web usage and experience based on link-level measurements | |
CN111310796B (en) | Web user click recognition method oriented to encrypted network flow | |
De Masi et al. | Predicting quality of experience of popular mobile applications from a living lab study | |
CN115174961B (en) | High-speed network-oriented multi-platform video flow early identification method | |
CN112235254A (en) | Rapid identification method for Tor network bridge in high-speed backbone network | |
CN115314407B (en) | Network game QoE detection method based on network traffic | |
CN114679318B (en) | Lightweight Internet of things equipment identification method in high-speed network | |
CN113765738B (en) | Encrypted traffic QoE detection method and system based on multi-task learning and hierarchical classification | |
CN116915630A (en) | Network stutter prediction method, device, electronic equipment, medium and program product | |
Belmoukadam et al. | Unveiling the end-user viewport resolution from encrypted video traces | |
Xu et al. | Rtd: the road to encrypted video traffic identification in the heterogeneous network environments | |
CN117313004B (en) | QoS flow classification method based on deep learning in Internet of things | |
CN115580603A (en) | Method for predicting live broadcast pause of webpage under RTP | |
CN114143301B (en) | Mobile traffic application identification feature extraction method and system | |
Maggi et al. | Online detection of stalling and scrubbing in adaptive video streaming |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||