CN116846803A - QoE evaluation model training method, QoE evaluation method and QoE evaluation equipment - Google Patents


Info

Publication number
CN116846803A
CN116846803A
Authority
CN
China
Prior art keywords: QoE, time, offline, data, final
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210283827.2A
Other languages
Chinese (zh)
Inventor
李锦波
李锡民
秦晓卫
许小东
杨景淇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Science and Technology of China USTC
Honor Device Co Ltd
Original Assignee
University of Science and Technology of China USTC
Honor Device Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Science and Technology of China USTC, Honor Device Co Ltd filed Critical University of Science and Technology of China USTC
Priority to CN202210283827.2A
Publication of CN116846803A

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00: Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/80: Responding to QoS
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00: Machine learning
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00: Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/16: Arrangements for maintenance, administration or management of data switching networks using machine learning or artificial intelligence

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Medical Informatics (AREA)
  • Signal Processing (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Databases & Information Systems (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)

Abstract

The application provides a QoE evaluation model training method, a QoE evaluation method, and QoE evaluation equipment, relating to the technical field of artificial intelligence. In the scheme of the application, a machine learning model and a score memory unit are trained on offline data, so that a QoE evaluation model for evaluating streaming media video in real time is constructed in advance. When a user watches streaming media video online, QoE can be evaluated accurately and in real time by collecting terminal-side bottom-layer parameters in real time and inputting them into the QoE evaluation model.

Description

QoE evaluation model training method, QoE evaluation method and QoE evaluation equipment
Technical Field
The application relates to the technical field of artificial intelligence (AI), and in particular to a training method of a quality of experience (QoE) evaluation model, a QoE evaluation method, and related equipment.
Background
By adopting streaming media technology, a server can transmit a video file to a terminal device continuously and without interruption, so that a user can watch the video file in real time. In order to characterize the user's satisfaction with a streaming media video service, it is often necessary to evaluate the quality of the service.
QoE, as defined by the International Telecommunication Union Telecommunication Standardization Sector (ITU-T), is typically adopted as the evaluation criterion for user satisfaction with streaming media video services. On the one hand, streaming media video provides video-on-demand and live services, among which HTTP Live Streaming (HLS), an adaptive bitrate streaming protocol based on the hypertext transfer protocol (HTTP), is common. Because HLS adopts a progressive, adaptive downloading strategy, simple network performance metrics cannot be mapped to the QoE of streaming media video; that is, QoE evaluation of streaming media video cannot be realized from simple network performance alone. On the other hand, the traditional QoE evaluation method models QoE influence factors extracted offline, after the user has completely watched the video, to obtain a QoE score, which is not real-time.
Therefore, how to perform real-time QoE evaluation on the streaming video being played becomes a problem to be solved.
Disclosure of Invention
The application provides a training method of a QoE evaluation model, a QoE evaluation method, and QoE evaluation equipment, which address the technical problem of performing real-time QoE evaluation on a streaming media video being played.
In order to achieve the above purpose, the application adopts the following technical scheme:
In a first aspect, an embodiment of the present application provides a method for training a QoE evaluation model. The QoE evaluation model includes a machine learning classifier and a score memory unit. The method comprises the following steps:
extracting target offline features from offline data, wherein the offline data are terminal-side parameters from the initial playing time to time t, collected from a log file of a streaming media video whose playback has finished, and the target offline features are used for evaluating QoE at time t;
training a machine learning classifier according to the target offline features to obtain a mapping relationship between the target offline features and a QoE preliminary evaluation result at time t;
training a score memory unit according to a plurality of QoE preliminary evaluation results to obtain a mapping relationship between the plurality of QoE preliminary evaluation results and a QoE final evaluation result at time t; wherein the plurality of QoE preliminary evaluation results include: the QoE preliminary evaluation result at time t, and a QoE preliminary evaluation result at each of at least one time before time t.
It can be understood that, since time t is an arbitrary time, for other times before time t the machine learning classifier may likewise be trained on the offline features used to evaluate QoE at those times, so as to obtain the mapping relationship between those offline features and the corresponding QoE preliminary evaluation results.
In this scheme, by training a machine learning classifier, QoE preliminary evaluation results at a plurality of adjacent times can be obtained; the score memory unit is then trained on these QoE preliminary evaluation results to obtain the mapping relationship between the plurality of QoE preliminary evaluation results and the QoE final evaluation result at time t. Thus, the machine learning classifier and the score memory unit are combined into a QoE evaluation model. Through this pre-constructed QoE evaluation model, real-time QoE evaluation of subsequently played streaming media video can be realized.
In one possible implementation, extracting the target offline feature from the offline data includes:
normalizing the offline data;
extracting all offline features from the normalized offline data;
and removing redundant offline features from all the offline features to obtain the target offline features.
In one possible implementation, all offline features include at least one of the following:
global features, which are features extracted from all of the offline data;
window features, which are features extracted from partial data of the offline data, the partial data comprising the data at time t and the data within a preset duration before time t;
other features that are unrelated to network data.
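The distinction between global and window features can be illustrated with a small sketch. The parameter (per-second throughput samples) and the window length are illustrative assumptions, not values taken from the application:

```python
# Hypothetical illustration of the two feature types described above.
# The choice of parameter (throughput) and window length are assumptions.

def global_feature(samples):
    """Global feature: computed over all data from the initial playing
    time to time t, here the mean of a terminal-side parameter."""
    return sum(samples) / len(samples)

def window_feature(samples, window):
    """Window feature: computed over the data at time t plus a preset
    duration before t (here, the last `window` samples)."""
    recent = samples[-window:]
    return sum(recent) / len(recent)

throughput = [5.0, 4.8, 4.9, 1.2, 0.8, 0.9]  # Mbit/s, one sample per second
g = global_feature(throughput)     # mean over the whole session so far
w = window_feature(throughput, 3)  # mean over the last 3 seconds only
```

A window feature reacts quickly to the recent throughput drop, while the global feature averages it out; both views are useful to the classifier.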
In one possible implementation, removing redundant offline features from all offline features to obtain target offline features includes:
iterative training is carried out on all offline features:
in each round of iterative training, the offline feature with the lowest importance is deleted according to the importance of each of the remaining offline features, until a preset number of iterations is reached or the number of remaining offline features is less than or equal to a preset number;
and taking the rest offline characteristics as target offline characteristics.
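The elimination loop above can be sketched as follows. The importance measure used here (absolute correlation between a feature column and the labels) is only a stand-in for the classifier's own feature importances (the application's drawings refer to an LGB-RFE classifier); all names are illustrative:

```python
# Minimal sketch of the recursive feature elimination described above.
# The importance function is a placeholder for classifier-derived
# feature importances; it is NOT the method used in the application.

def importance(column, labels):
    """Placeholder importance: |Pearson correlation| with the labels."""
    n = len(column)
    mx, my = sum(column) / n, sum(labels) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(column, labels))
    sx = sum((x - mx) ** 2 for x in column) ** 0.5
    sy = sum((y - my) ** 2 for y in labels) ** 0.5
    if sx == 0 or sy == 0:
        return 0.0
    return abs(cov / (sx * sy))

def rfe_select(features, labels, max_rounds, target_count):
    """Each round, delete the least important feature, stopping at the
    preset round limit or when few enough features remain."""
    remaining = dict(features)  # feature name -> column of values
    for _ in range(max_rounds):
        if len(remaining) <= target_count:
            break
        worst = min(remaining, key=lambda name: importance(remaining[name], labels))
        del remaining[worst]
    return sorted(remaining)
```

The surviving feature names are then used as the target offline features for training.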
In one possible implementation, the plurality of QoE preliminary evaluation results include a plurality of preliminary prediction probabilities, where each preliminary prediction probability represents the probability, predicted by the machine learning classifier, that the streaming media video is stuck (i.e., stalling) at one time, the one time being time t or one of the at least one time before time t.
Accordingly, the optimization objective of the machine learning classifier is to minimize a first loss function, which represents the loss corresponding to the degree of difference between the preliminary prediction probability at the one time and the initial QoE label at the one time. The initial QoE label at the one time indicates whether the streaming media video is stuck or smooth at that time.
In one possible implementation, the QoE final evaluation result at time t includes: the final prediction probability at time t, which represents the probability, predicted by the score memory unit, that the streaming media video is stuck at time t.
Accordingly, the optimization objective of the score memory unit is to minimize a second loss function, which represents the loss corresponding to the degree of difference between the final prediction probability at time t and the initial QoE label at time t.
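Assuming the two loss functions take the standard binary cross-entropy form (the application's drawings refer to cross-entropy loss functions, though the exact form is not given here), a per-sample loss between a predicted stall probability p and the initial QoE label y might look like this:

```python
import math

# Assumed binary cross-entropy form of the losses described above;
# y is the initial QoE label (1 = stuck, 0 = smooth), p the predicted
# stall probability.

def bce_loss(p, y, eps=1e-12):
    p = min(max(p, eps), 1.0 - eps)  # clamp to avoid log(0)
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

near = bce_loss(0.9, 1)  # confident and correct: small loss
far = bce_loss(0.1, 1)   # confident and wrong: large loss
```

The loss grows as the prediction diverges from the label, which is exactly the "degree of difference" the first and second loss functions are described as capturing.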
In one possible implementation, the mapping relationship between the plurality of QoE preliminary evaluation results and the QoE final evaluation result at time t is represented by the following relational expression:

p_output(t_i) = Σ_{j=0}^{T-1} a_j · p(t_{i-j})

wherein t_i denotes time t, a_j denotes the weight factor of each preliminary prediction probability, T denotes the time window length, and p_output(t_i) denotes the final prediction probability at time t;
when j = 0, p(t_i) denotes the preliminary prediction probability at time t;
when j ≠ 0, p(t_{i-j}) denotes the preliminary prediction probability at time t_{i-j} before time t.
In one possible implementation, the QoE final evaluation result at time t further includes: the final prediction label at time t.
If the final prediction probability at time t is greater than or equal to 0.5, the final prediction label at time t indicates that the streaming media video is stuck at time t.
If the final prediction probability at time t is less than 0.5, the final prediction label at time t indicates that the streaming media video is smooth at time t.
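The score memory unit's weighted combination and the 0.5 threshold above can be sketched together. The weight values here are illustrative assumptions (in the application they would be obtained by training):

```python
# Sketch of the score memory unit's final decision: a weighted sum of
# preliminary probabilities over a time window of length T, followed by
# the 0.5 threshold. The weights are illustrative, not learned values.

def final_probability(prelim_probs, weights):
    """prelim_probs[j] is the preliminary probability at time t_{i-j}
    (j = 0 is time t itself); weights[j] is the factor a_j."""
    return sum(a * p for a, p in zip(weights, prelim_probs))

def final_label(p_output):
    """1 = stuck at time t, 0 = smooth at time t."""
    return 1 if p_output >= 0.5 else 0

probs = [0.9, 0.2, 0.1]    # window length T = 3, newest first
weights = [0.6, 0.3, 0.1]  # hypothetical weight factors a_j, summing to 1
p = final_probability(probs, weights)  # 0.6*0.9 + 0.3*0.2 + 0.1*0.1
label = final_label(p)
```

Because older preliminary results carry smaller weights, a single high probability at one earlier time is damped, which is how isolated misjudgments can be smoothed out.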
In one possible implementation, if the initial QoE tag is 0, it represents that the streaming video is smooth; if the initial QoE label is 1, the representative streaming video is a clip.
In one possible implementation, if the initial QoE label is 1, it represents that the streaming video is smooth; if the initial QoE tag is 0, it represents that the streaming video is a clip.
In one possible implementation, the initial QoE label is manually annotated, or automatically generated based on a log file of the streaming video.
In one possible implementation, the terminal-side parameters include at least one of:
a transport layer parameter, which reflects the network transmission condition when the streaming media video is played;
a QoS parameter, which is used for evaluating the ability of the network to provide services for the streaming media video;
a terminal parameter, which is a parameter of the terminal device itself when the streaming media video is played.
In a second aspect, an embodiment of the present application provides a QoE evaluation method. The method comprises the following steps:
extracting target online features from online data, wherein the online data are terminal-side parameters from the initial playing time to the current time, collected in real time while a target streaming media video is playing, and the target online features are used for evaluating QoE at the current time in real time;
inputting the target online features into the machine learning classifier of the QoE evaluation model to obtain a QoE preliminary evaluation result at the current time;
inputting a plurality of QoE preliminary evaluation results into the score memory unit of the QoE evaluation model to obtain a QoE final evaluation result at the current time; wherein the plurality of QoE preliminary evaluation results include: the QoE preliminary evaluation result at the current time, and a QoE preliminary evaluation result at each of at least one time before the current time.
It can be understood that, since the current time is an arbitrary time, for other times before the current time, the online features used for real-time QoE evaluation at those times may likewise be input into the machine learning classifier to obtain the corresponding QoE preliminary evaluation results.
In the above scheme, because the QoE evaluation model is pre-built, QoE preliminary evaluation results at a plurality of adjacent times can be obtained during playback of the target streaming media video by inputting the terminal-side parameters collected in real time into the machine learning classifier of the QoE evaluation model. Furthermore, by inputting the QoE preliminary evaluation results into the score memory unit of the QoE evaluation model, the "spur" samples misjudged as stuck among the QoE preliminary evaluation results are filtered out, so that accurate and real-time QoE evaluation of the streaming media video being played is realized.
In one possible implementation, extracting target online features from online data includes:
normalizing the online data;
and extracting the target online features from the normalized online data.
In one possible implementation, the target online feature includes at least one of:
a global feature, which is extracted from all of the online data;
a window feature, which is extracted from partial data of the online data, the partial data comprising the data at the current time and the data within a preset duration before the current time;
other features that are unrelated to network data.
In one possible implementation, the target online feature comprises a global feature;
the global features are extracted according to the following parameters:
the online data collected at the current time;
the global feature used for evaluating QoE at the time immediately preceding the current time;
and an intermediate variable corresponding to the current time and the time immediately preceding the current time.
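One concrete instance of such an incremental update, offered here only as an assumption, is a running mean with the sample count as the intermediate variable: the global feature at the current time is derived from the newly collected sample, the previous feature value, and that count, without re-scanning all earlier data.

```python
# Assumed example of the incremental global-feature update described
# above: a running mean, with the sample count as the intermediate
# variable carried between adjacent times.

def update_global_mean(prev_mean, prev_count, new_sample):
    """Update the global mean feature from the previous feature value,
    the intermediate variable (count), and the new sample."""
    count = prev_count + 1
    mean = prev_mean + (new_sample - prev_mean) / count
    return mean, count

mean, count = 0.0, 0
for sample in [4.0, 6.0, 5.0]:  # per-second terminal-side samples
    mean, count = update_global_mean(mean, count, sample)
```

Each per-second update is O(1), which matters for online evaluation where the feature must be refreshed continuously during playback.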
In one possible implementation, the plurality of QoE preliminary assessment results includes: the system comprises a plurality of preliminary prediction probabilities, wherein each preliminary prediction probability in the plurality of preliminary prediction probabilities is used for representing the probability that a machine learning classifier predicts that a target streaming media video is stuck at one moment, and the moment is the current moment or one moment in at least one moment.
In one possible implementation, the QoE final evaluation result at the current time includes: the final prediction probability of the current moment is used for indicating the probability that the score memory unit predicts that the target streaming media video is stuck at the current moment.
Inputting a plurality of QoE preliminary evaluation results into a score memory unit of a QoE evaluation model to obtain a QoE final evaluation result at the current moment, wherein the QoE final evaluation result comprises the following steps:
and the score memory unit performs weighted summation on the plurality of preliminary prediction probabilities according to the weight factor of each preliminary prediction probability to obtain the final prediction probability at the current moment.
In one possible implementation, the QoE final evaluation result at the current time further includes: the final prediction label at the current time;
inputting the plurality of QoE preliminary evaluation results into the score memory unit of the QoE evaluation model to obtain the QoE final evaluation result at the current time further comprises:
determining the final prediction label at the current time according to the final prediction probability at the current time:
if the final prediction probability at the current time is greater than or equal to 0.5, the final prediction label at the current time indicates that the target streaming media video is stuck at the current time; if the final prediction probability at the current time is less than 0.5, the final prediction label at the current time indicates that the target streaming media video is smooth at the current time.
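In an online evaluator, the window of preliminary probabilities fed to the score memory unit each second might be maintained with a fixed-length deque, as sketched below. All names, the window length, and the weight values are illustrative assumptions:

```python
from collections import deque

# Sketch of the per-second online step: push the newest preliminary
# probability, then compute the final probability and label for the
# current time. Window length and weights are illustrative.

T = 3  # assumed time window length
window = deque(maxlen=T)  # oldest entries fall out automatically

def evaluate_step(prelim_prob, weights):
    """Append this second's preliminary probability, then weight the
    window so weights[0] multiplies the newest entry."""
    window.append(prelim_prob)
    p_out = sum(a * p for a, p in zip(weights, reversed(window)))
    return p_out, 1 if p_out >= 0.5 else 0
```

In the first few seconds the window holds fewer than T entries, so fewer terms contribute; once full, an isolated spike in one preliminary result is down-weighted as it ages, which is the spur-filtering behavior described above.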
In one possible implementation, the terminal-side parameters include at least one of:
a transport layer parameter, which reflects the network transmission condition when the target streaming media video is played;
a QoS parameter, which is used for evaluating the ability of the network to provide services for the target streaming media video;
a terminal parameter, which is a parameter of the terminal device itself when the target streaming media video is played.
In one possible implementation, at the current time, the data of the target streaming media video are obtained through interaction with the server over a first network;
after the QoE final evaluation result at the current time is obtained, the method further includes:
when the QoE final evaluation result at the current time indicates that the communication quality of the first network does not meet the communication requirement, interacting the data of the target streaming media video with the server through a second network;
wherein the communication quality of the second network is better than the communication quality of the first network.
In one possible implementation, the QoE final evaluation result at the current time includes: the final prediction probability of the blocking of the target streaming media video at the current moment;
the QoE final evaluation result at the current time indicates that the communication quality of the first network does not meet the communication requirement, including: if the final prediction probability of the target streaming media video at the current moment is greater than or equal to the preset probability, the QoE final evaluation result at the current moment indicates that the communication quality of the first network does not meet the communication requirement.
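The acceleration decision described above reduces to a simple check. The threshold value and network names are illustrative assumptions:

```python
# Minimal sketch of the network-switch decision described above: if the
# final stall probability on the current (first) network reaches the
# preset probability, interact with the server over the better second
# network instead. Threshold and names are illustrative assumptions.

PRESET_PROBABILITY = 0.5  # assumed preset probability

def pick_network(final_stall_prob, current="Wi-Fi", fallback="cellular"):
    """Return the network to use for the next data interaction."""
    if final_stall_prob >= PRESET_PROBABILITY:
        return fallback  # first network no longer meets the requirement
    return current
```

In practice the decision would also weigh factors such as data cost and handover latency; this sketch only captures the probability threshold stated above.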
In a third aspect, the application provides a training apparatus for a QoE evaluation model, the apparatus comprising means for performing the method of the first aspect. The apparatus may correspond to performing the method of the first aspect; for the relevant descriptions of the units/modules in the apparatus, refer to the description of the first aspect, which is omitted here for brevity.
In a fourth aspect, the application provides a QoE evaluation apparatus comprising means for performing the method of the second aspect. The apparatus may correspond to performing the method of the second aspect; for the relevant descriptions of the units/modules in the apparatus, refer to the description of the second aspect, which is omitted here for brevity.
In a fifth aspect, there is provided a terminal device comprising a processor coupled to a memory, the processor being configured to execute a computer program or instructions stored in the memory, to cause the terminal device to implement a training method of a QoE evaluation model as in any of the first aspects, or to implement a QoE evaluation method as in any of the second aspects.
In a sixth aspect, a chip is provided, the chip being coupled to a memory, the chip being configured to read and execute a computer program stored in the memory, to implement the training method of the QoE evaluation model as in any of the first aspects, or to implement the QoE evaluation method as in any of the second aspects.
In a seventh aspect, there is provided a computer readable storage medium storing a computer program which, when run on a terminal device, causes the terminal device to perform the training method of the QoE evaluation model as in any of the first aspects, or to perform the QoE evaluation method as in any of the second aspects.
In an eighth aspect, there is provided a computer program product, which when run on a computer, causes the computer to perform the training method of the QoE evaluation model as in any of the first aspects, or to perform the QoE evaluation method as in any of the second aspects.
It will be appreciated that the advantages of the third to eighth aspects may be found in the relevant description of the first and second aspects, and are not described here again.
Drawings
Fig. 1 is a schematic diagram of a communication system according to an embodiment of the present application;
fig. 2 is a general flow chart of a QoE evaluation method according to an embodiment of the present application;
FIG. 3 is a schematic flow chart of an offline training flow according to an embodiment of the present application;
FIG. 4 is a schematic diagram of partial feature ranking results after feature ranking according to an embodiment of the present application;
FIG. 5 is a schematic flow chart of selecting features based on an LGB-RFE classifier according to an embodiment of the present application;
FIG. 6 is a schematic diagram of a cross entropy loss function according to an embodiment of the present application;
FIG. 7 is a schematic diagram of another cross entropy loss function provided by an embodiment of the present application;
FIG. 8 is a schematic flow chart of training a machine learning classifier based on confidence learning according to an embodiment of the present application;
FIG. 9 is a schematic flow chart of an online evaluation flow provided in an embodiment of the present application;
fig. 10 is a schematic flow chart of network acceleration according to QoE real-time evaluation results according to an embodiment of the present application;
FIG. 11 is a schematic diagram of a network acceleration scenario provided by an embodiment of the present application;
FIG. 12 is a schematic diagram of an interface for network acceleration according to an embodiment of the present application;
fig. 13 is a schematic structural diagram of a training device of a QoE evaluation model according to an embodiment of the present application;
fig. 14 is a schematic structural diagram of a QoE evaluation device according to an embodiment of the present application;
fig. 15 is a schematic structural diagram of a terminal device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application more apparent, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments of the present application.
In the description of the present application, "/" means "or" unless otherwise indicated; for example, A/B may mean A or B. "And/or" merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, A and/or B may mean: A exists alone, both A and B exist, or B exists alone.
The terms "first", "second", and the like in the description and claims are used for distinguishing between different objects, not for describing a particular sequential or chronological order of the objects. For example, "first network" and "second network" merely distinguish different networks and do not describe a particular order of networks.
Reference in the specification to "one embodiment" or "some embodiments" or the like means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," and the like in the specification are not necessarily all referring to the same embodiment, but mean "one or more but not all embodiments" unless expressly specified otherwise. The terms "comprising," "including," "having," and variations thereof mean "including but not limited to," unless expressly specified otherwise.
First, some terms or terms involved in the present application will be explained.
1. Streaming media refers to a media format, such as audio, video, or multimedia, that is played continuously and in real time over a network using streaming media technology. Streaming media technology is a network transmission technology in which continuous video and/or audio information is compressed and uploaded to a server, and the server then transmits each compressed packet to the terminal device in real time, so that the user can view the streaming media file while downloading it, without waiting for all compressed packets of the entire file to finish downloading. In the embodiments of the application, when the media format of the streaming media is video, it is called streaming media video or video streaming media. For convenience of explanation, the following embodiments are described by taking streaming media video as an example, which does not limit the embodiments of the application.
2. QoE refers to the overall acceptance degree of a user toward a network service or application used in a certain objective environment. QoE is the overall quality of a network service as perceived from the user's perspective; it reflects the user's satisfaction with the performance of the service by capturing the user's subjective perception of the quality and performance of the device, network, system, application, or service.
In some embodiments, the parameters used to assess QoE are simply referred to as QoE parameters. During video playing, video QoE evaluation is affected by the frequency and total duration of video buffering. For example, if video buffering is frequent and/or its total duration is long, that is, a stuck (stalling) event occurs during video playing, the user's QoE is reduced. In addition, factors affecting QoE assessment may include other factors such as user expectations, user preferences and privacy, fees paid by the user, and the type of service, among others.
3. Quality of service (QoS) is a technical indicator applied on a network in order to guarantee or improve the service quality of the network and thus the QoE. QoS may be used to address network delay and congestion issues.
In some embodiments, the parameters used to evaluate QoS are simply referred to as QoS parameters. The QoS parameters may include parameters such as system throughput, signal strength, stability of network transmission, reliability, transmission delay, delay jitter, packet loss rate, transmission code rate, error rate, transmission failure rate, and security. Based on these QoS parameters, it can be assessed whether the network communication quality meets the traffic communication requirements.
It should be understood that the QoE parameters and QoS parameters are listed as examples, and any other parameters may be included, which may be specifically determined according to actual use requirements, and embodiments of the present application are not limited.
It should be noted that both QoE and QoS may be used to measure the overall quality of network services. QoE is associated with a specific service, and the QoE of different services imposes different QoS requirements. For example, some services are sensitive to the delay indicator in QoS, such as voice over internet protocol (VoIP) services based on the internet protocol (IP); other services are more sensitive to the packet loss rate indicator in QoS, such as file transfer services.
According to the above description, QoE can more completely describe the overall acceptance degree of the user toward a network service in a certain objective environment, so the embodiments of the application adopt QoE as the evaluation criterion of user satisfaction with streaming media video services.
In general, in practical applications, there are the following QoE evaluation methods:
One is an offline QoE assessment method. While a user watches a streaming media video, the log file of the terminal device records the interaction between the system and the user, automatically capturing data such as the type, content, and time of the interaction. After the user has watched the video completely, QoE influence factors are extracted from the log file and then modeled to obtain a QoE score. However, this offline method can only make QoE predictions and assessments after the user has completely watched the video, and thus suffers from hysteresis.
The other is a real-time QoE assessment method. The method comprises: first, extracting round-trip delay information of uplink data packets in the video stream and constructing an input vector; constructing a neural network model comprising a convolution layer and a fully connected layer; then inputting the constructed input vector into the neural network model, extracting features, executing the fully connected layer, and predicting video QoE indicators; and finally, inputting the round-trip time (RTT) information of the encrypted traffic of the video to be estimated into the trained neural network model to predict the video's QoE indicator. These steps contain a large number of repeated operations, which wastes resources. In addition, this mode uses only RTT as an input parameter, so the parameter dimension is relatively single; moreover, it is limited to the transmission control protocol (TCP) and is not applicable to, for example, QUIC (quick UDP internet connections), a low-latency transport layer protocol based on the user datagram protocol (UDP). Furthermore, because this mode does not consider the correlation of QoE scores at adjacent times in the time dimension, the prediction results contain "spurs" and the like, that is, more false alarms are generated, which reduces the accuracy of QoE prediction.
In view of this, the embodiment of the application provides a training method of a QoE evaluation model and a QoE evaluation method. In scenarios where streaming media video is played, such as video on demand, online live broadcast, real-time video conferencing, multimedia news release, network advertising, electronic commerce, remote education, telemedicine, or network radio, the machine learning model and the score memory unit are trained with offline data through a layered training mode, so that a QoE evaluation model for QoE real-time evaluation is built in advance. Therefore, when a user watches streaming media video online, the terminal device can acquire the bottom-layer parameters in real time and input them into the QoE evaluation model, so that QoE can be evaluated and predicted in real time, for example, predicting whether the streaming media video will stall at a future moment while it is being watched.
Fig. 1 illustrates a schematic architecture of a communication system according to various embodiments of the present application. As shown in fig. 1, the communication system includes a terminal device 1 and a server 2. The terminal device 1 may access a wireless local area network through an Access Point (AP) device 3, for example, access a wireless-fidelity (Wi-Fi) network through a router, and further establish a connection with the server 2 and perform data interaction. Alternatively, the terminal device 1 may access a mobile network (also called cellular network, or mobile data network) via the network device 4, for example via a base station, and thus establish a connection with the server 2 and perform data interaction.
The terminal device 1 may be a mobile terminal, a non-mobile terminal, a user equipment, or other devices or apparatuses capable of QoE real-time assessment, etc. By way of example, the mobile terminal may be a cell phone, tablet computer, notebook computer, palm computer, car mounted terminal, wearable device, ultra mobile personal computer (ultra-mobile personal computer, UMPC), netbook or personal digital assistant (personal digital assistant, PDA), etc., and the non-mobile terminal may be a personal computer (personal computer, PC), smart screen, television (TV), teller machine or self-service machine, etc. As an example, the terminal device 1 may be a smart phone with a Wi-Fi module built in, or may be a computer equipped with a wireless network card. The embodiment of the present application is not limited in any way as to the specific type of terminal device.
The server 2 may be a wide area network (Web) server and/or a streaming server, etc. Wherein the streaming server is also called streaming server, or audio/video (a/V) server. Embodiments of the present application are not limited with respect to the specific type of server.
In some embodiments, data transmission between the terminal device 1 and the server 2 may be performed through the connection-oriented TCP, or may be performed through the connectionless user datagram protocol (user datagram protocol, UDP).
In some embodiments, the interaction of uplink data and/or downlink data may be performed between the terminal device 1 and the server 2. For example, the terminal device 1 may transmit play request information of the streaming media file to the server 2. As another example, the server 2 may send the streaming media file to the terminal device 1 in real time. For another example, the terminal device 1 may send data packets of the streaming media file to other devices through the server 2, and receive data packets of the media file sent by the other devices through the server 2.
In some embodiments, the server 2 may be used to provide a streaming media file and the terminal device 1 may be used to play the streaming media file. Taking the case where the servers include a Web server and an A/V server as an example, the transmission process of the streaming media specifically includes:
after the A/V server acquires the original video file, the A/V server preprocesses the original video file to compress it into a file in a streaming format, i.e., a streaming file. After the user selects the play service of a certain streaming media file through the terminal device 1, control information is exchanged between the Web browser and the Web server using HTTP/TCP so as to retrieve the real-time data to be transmitted. The Web browser then launches an audio/video helper (A/V helper) program, which is initialized using HTTP to retrieve relevant parameters from the A/V server, including directory information, the encoding type of the A/V data, or the server address associated with A/V retrieval, etc. Then, the A/V helper program and the A/V server run a streaming media protocol to exchange the control information required for A/V transmission. The streaming protocol may be HTTP live streaming (HLS) or dynamic adaptive streaming over HTTP (dynamic adaptive streaming over HTTP, DASH); the streaming protocol provides methods for manipulating commands such as play, fast forward, rewind, pause, and record. The A/V server transmits the A/V data to the A/V client (i.e., the A/V helper) using the real-time transport protocol (real-time transport protocol, RTP) over UDP. When the terminal device 1 receives the A/V data, the A/V client can play the A/V data. While the A/V client program plays the A/V data, the bottom-layer parameters can be acquired in real time and input into the pre-constructed QoE evaluation model, so that QoE can be evaluated and predicted in real time.
It should be noted that the streaming media file may be an audio file, a video file, a multimedia file, or the like. The embodiment of the application is illustrated by taking the example that the server 2 issues each data packet of the streaming media video to the terminal device 1, and performs QoE real-time assessment through a pre-constructed QoE assessment model in the process of playing the data packet of the streaming media video by the terminal device 1.
The training method of the QoE evaluation model and the specific implementation manner of the QoE evaluation method according to the embodiments of the present application will be described in detail below with reference to the accompanying drawings.
Fig. 2 is a general flow chart of a QoE evaluation method according to an embodiment of the present application.
As shown in fig. 2, the QoE evaluation method may include two parts:
the first part is an offline training (offline) procedure.
The off-line training process is mainly used for training a QoE evaluation model for QoE real-time evaluation according to the off-line data set.
The offline training process may include: the method comprises the following sub-processes of offline data set acquisition, parameter normalization, offline feature calculation, feature selection, offline training of a machine learning model, offline training of a score memory unit and the like.
Illustratively, while a user uses a video class application (application, APP) of a terminal device (for example, APP1 or APP2) for video on demand, online live broadcast, real-time video conferencing, multimedia news release, network advertising, electronic commerce, remote education, telemedicine, or a network radio station, the log file of the terminal device can record the information exchanged between the system and its user. After the streaming video playing has finished, the terminal device can acquire an offline data set according to the log file. The offline data set includes a plurality of offline parameters, which are the bottom-layer parameters input when playing the streaming video, such as transport layer parameters, QoS parameters, and/or terminal parameters. Then, the terminal device sequentially performs parameter normalization, offline feature calculation, offline feature selection, machine learning model training, score memory unit training, and other processing operations on the offline data set, so as to construct a QoE evaluation model for QoE real-time evaluation.
The QoE evaluation model mainly comprises two sub-models, namely a machine learning model and a score memory unit. The offline training of the machine learning model is mainly used for training the parameter weights of the machine learning model, and the offline training of the score memory unit is mainly used for training the parameter weights of the score memory unit.
Specifically, the machine learning model, also referred to as a machine learning classifier, is mainly used to: obtain, through offline training, the mapping from the feature vector to the QoE preliminary evaluation result at time t. Offline training of the score memory unit is mainly used to: model the machine learning model outputs at time t and over a period of time before time t, based on the correlation between QoE preliminary evaluation results at adjacent times in the time dimension.
In some embodiments, a confident learning (also referred to as confidence learning) algorithm is employed to assist in training the machine learning model, so as to improve the generalization ability of the machine learning model.
In some embodiments, the offline training process trains the parameter weights of the machine learning model and the parameter weights in the score memory unit through a layered training mode.
For example, as shown in fig. 3, the terminal device may continuously iterate the steps of offline feature calculation, offline feature selection, and offline training of the machine learning model until an optimal modeling result of the machine learning model is reached, that is, training of the parameter weights in the machine learning model is completed. After training the machine learning model is completed, the modeling result of the machine learning model is input into the score memory unit for training, so that training of the parameter weights in the score memory unit is completed. Finally, a QoE evaluation model for QoE real-time evaluation is formed by the machine learning model and the score memory unit.
The second part is an online evaluation (online) flow.
The online evaluation flow is mainly used for inputting an online data set acquired in real time in the streaming media playing process into a pre-constructed QoE evaluation model so as to output a QoE evaluation result at the current moment.
The online evaluation flow may include: and (3) collecting an online data set, normalizing parameters, calculating online characteristics, online reasoning of a machine learning model, online updating of a score memory unit and the like.
For example, when a user uses a video class APP of a terminal device to play a target streaming video, the terminal device may acquire an online data set in real time. The online data set comprises a plurality of online parameters, and the online parameters are bottom layer parameters, such as transmission layer parameters, qoS parameters and/or terminal parameters, which are collected online by the terminal equipment when the target streaming media video is played. Then, the terminal equipment sequentially performs parameter normalization, online feature calculation, online reasoning of a machine learning model, online updating of a score memory unit and the like on the online data set.
The parameter normalization of the online evaluation flow is identical to the parameter normalization of the offline training flow; both normalize the parameters in the same way.
For online feature computation, only the feature types selected during the offline training process need to be computed online; feature types not selected during offline training need not be computed online.
The machine learning model online reasoning is used for: and (3) utilizing a machine learning model trained through an offline training process to realize the mapping from the characteristics to the QoE preliminary evaluation result.
The online update of the score memory unit is used to: input the QoE preliminary evaluation result into the score memory unit, which linearly weights the preliminary evaluation results from the current time t and a period of time before t to output the QoE final evaluation result, i.e., the final QoE evaluation result.
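As an illustrative sketch only, the linear weighting performed by the score memory unit might look as follows in Python; the window length and the weight values here are assumptions chosen for illustration, whereas in the embodiment they are obtained by offline training:

```python
from collections import deque

class ScoreMemoryUnit:
    """Linearly weights the preliminary QoE scores of the current time t and
    a window of preceding times to produce the final QoE score."""

    def __init__(self, weights):
        self.weights = list(weights)            # weights[0] applies to the oldest score
        self.history = deque(maxlen=len(weights))

    def update(self, preliminary_score):
        self.history.append(preliminary_score)
        # Until the window is full, renormalize over the scores seen so far.
        w = self.weights[-len(self.history):]
        total = sum(w)
        return sum(wi * si for wi, si in zip(w, self.history)) / total

# Assumed weights favoring recent times; a lone 0.0 (stall) score is smoothed
# instead of producing a "spike" in the final evaluation.
smu = ScoreMemoryUnit(weights=[0.1, 0.2, 0.3, 0.4])
outputs = [smu.update(s) for s in [1.0, 1.0, 0.0, 1.0, 1.0]]
```

Because each output blends the current preliminary score with recent ones, an isolated outlier no longer flips the final evaluation result, which is the behavior the time-dimension correlation is meant to capture.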
It can be appreciated that by training the machine learning model and the score memory unit separately in a hierarchical training mode, a QoE evaluation model for QoE real-time evaluation can be constructed in advance. Thus, when a user watches the streaming media video online, the terminal equipment can accurately make real-time evaluation and prediction on QoE by inputting the bottom layer parameters into the QoE evaluation model.
It should be noted that, in the embodiment of the present application, the QoE preliminary evaluation result is used to determine a QoE preliminary score, and the QoE final evaluation result is used to determine a QoE final score. For example, inputting the QoE final evaluation results from time t_1 to time t_i into a preset model yields a final QoE score of 1, 2, 3, 4, or 5.
Specific implementations of each sub-process of the offline training process and the online evaluation process will be described below, respectively.
Fig. 3 is a flow chart of an offline training flow according to an embodiment of the present application.
As shown in fig. 3, the method may include S101 to S106 described below.
S101, the terminal equipment acquires an offline data set.
Wherein the offline data set may comprise a plurality of offline parameters, also referred to as offline data. The offline parameters are bottom parameters recorded by log files in the process of playing at least one streaming media video and acquired by the terminal equipment after the at least one streaming media video is played. That is, the offline parameter is a terminal-side parameter obtained from the initial playing time to time t of the at least one streaming video according to the log file of the at least one streaming video.
For example, assume that the terminal device continuously or intermittently plays at least one streaming video during the period from time t_1 to time t_i. After the last streaming video has finished playing, the terminal device may acquire an offline data set X = {x_{t_1}, x_{t_2}, …, x_{t_i}} corresponding to the at least one streaming video. Wherein, sample data x_{t_i} can be used to indicate the bottom-layer parameters recorded by the log file at time t_i, sample data x_{t_{i-1}} can be used to indicate the bottom-layer parameters recorded by the log file at time t_{i-1}, …, and sample data x_{t_1} can be used to indicate the bottom-layer parameters recorded by the log file at time t_1.
Specifically, in one example, the terminal device may obtain the offline parameters from the initial time t_1 at which the streaming video service was first played using the video class APP to the time t_i at which streaming media was last played; these offline parameters may constitute an offline data set. As another example, the terminal device may obtain the offline parameters from a certain time t_1 before the last playing time t_i up to the time t_i, during which the streaming video service was played (e.g., continuously or intermittently) using the video class APP; for example, the offline parameters of streaming video services played multiple times using the video class APP within approximately the last 1 day, 1 week, or 1 month may be obtained. These offline parameters may form an offline data set.
It should be noted that, for data collection in a continuous-play scenario, the time interval between any two adjacent time points among the above times may be the same, for example, the interval between time t_{i-1} and time t_i is 1 second. Alternatively, for data collection in an intermittent-play scenario, the time intervals between some adjacent time points are different.
Optionally, the terminal-side parameter may include at least one of the following parameters:
the first is a transport layer (transport layer) parameter. The transport layer parameters are the main parameters for training the machine learning model. The transmission layer parameter is used for reflecting network transmission conditions when the terminal equipment plays the streaming media video, such as various statistical characteristics of the data packet in unit time.
For example, the transport layer parameters may include the number of uplink TCP/UDP packets per unit time and the total packet length. For another example, the transport layer parameters may include the number of downlink TCP/UDP packets per unit time, the total packet length, the RTT value per unit time, and the number and length of retransmitted packets per unit time, among other transport layer data. The unit time refers to the period between two adjacent, equally spaced time points. Of course, the transport layer parameters may also include other parameters, which are not limited by the embodiments of the present application.
The second is QoS parameters. The QoS parameter is also a parameter used to train the machine learning model. The QoS parameters are used to evaluate the ability of the network to service streaming video.
For example, QoS parameters mainly include several important environmental parameters, such as the signal strength of the cellular network and Wi-Fi, the Wi-Fi negotiation rate, etc. Of course, QoS parameters may also include availability, throughput, stability of network transmission, latency and latency variation (jitter), transmission rate, bit error rate, transmission failure rate, reliability, security, guaranteed flow bit rate (guaranteed flow bit rate, GFBR), maximum flow bit rate (maximum flow bit rate, MFBR), averaging window, aggregate maximum bit rate (aggregate maximum bit rate, AMBR), allocation and retention priority (allocation and retention priority, ARP), and the like.
The third is the terminal parameters. The terminal parameter is a parameter of the terminal equipment when the streaming media video is played.
For example, the terminal parameters may include parameters such as audio track information, video track information, make and model, operating system, identification number (identity document, ID), network operator, network access mode, and/or IP address. The audio track information and the video track information are irrelevant to network parameters and are used for representing the playing progress of the audio and video when the streaming media video is played. In the embodiment of the application, the initial QoE label is generated according to the audio track information and the video track information.
As an optional implementation manner, for each video class APP, the terminal device performs an offline training process according to offline parameters of the streaming media video played by each video class APP within a preset period, so as to determine QoE evaluation models corresponding to each video class APP respectively, that is, create different QoE evaluation models in advance for different video classes APP. When a user plays a certain streaming media video online by using a certain video class APP, qoE real-time evaluation can be performed by adopting a QoE evaluation model corresponding to the video class APP.
As another optional implementation manner, for a plurality of video APP, the terminal device performs an offline training process according to offline parameters of the plurality of video APP playing streaming media video within a preset period, so as to determine a QoE evaluation model corresponding to the plurality of video APP, that is, create a QoE evaluation model in advance for the plurality of video APP. When a user uses any video class APP in the video classes APP to play a certain streaming media video online, the QoE evaluation model can be adopted to perform QoE real-time evaluation.
S102, the terminal equipment normalizes the offline parameters in the offline data set.
For the offline data set X = {x_{t_1}, x_{t_2}, …, x_{t_i}}, each offline parameter x_t is normalized as:

x̃_t = G(x_t)

wherein G(·) is a normalization function.

After the terminal device normalizes each offline parameter in the offline data set, the offline data set is updated to X̃ = {x̃_{t_1}, x̃_{t_2}, …, x̃_{t_i}}. Wherein, sample data x̃_{t_i} can be used to represent the parameter obtained after normalizing the offline parameter x_{t_i}, sample data x̃_{t_{i-1}} can be used to represent the parameter obtained after normalizing the offline parameter x_{t_{i-1}}, …, and sample data x̃_{t_1} can be used to represent the parameter obtained after normalizing the offline parameter x_{t_1}.
Alternatively, the normalization method may include, but is not limited to: normalization, interval scaling, discretization, and the like.
1) Normalization method
If the terminal equipment adopts the standardization method:

x̃_t = (x_t - μ) / σ

the offline parameters are normalized so that the processed offline parameters conform to the standard normal distribution N(0, 1). Where μ is the mathematical expectation (mean) of all sample data and σ is the standard deviation of all sample data.
2) Interval scaling method
The interval scaling method is applicable to sample data which does not conform to a normal distribution. Specifically, it comprises the following variants:

the interval scaling method: x̃_t = (x_t - x_min) / (x_max - x_min)

the logarithmic method: x̃_t = lg(x_t + 1)

the root extraction method: x̃_t = sqrt(x_t)

If the terminal equipment adopts the interval scaling method x̃_t = (x_t - x_min) / (x_max - x_min), offline parameters whose values do not differ too greatly can be scaled into the interval [0, 1]. Here x_max is the maximum value among all sample data and x_min is the minimum value among all sample data.

If the terminal equipment adopts the logarithmic method lg(x_t + 1) or the root extraction method sqrt(x_t), offline parameters with widely differing, unbalanced values can be scaled into a certain interval.
3) Discretization method
If the terminal equipment adopts the discretization method, the continuously valued offline parameter x_t can be binned to obtain a discrete parameter x̃_t. This increases the frequency of identical parameter values, reduces outliers, and effectively filters abnormal values.
It should be understood that no matter which normalization method the terminal equipment adopts to normalize the offline parameters, the dimensional offline parameters can be converted into dimensionless offline parameters, i.e., scalar parameters, so that the distributions of different parameters become closer, improving the training efficiency and robustness of the model.
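For illustration, minimal Python sketches of the three normalization families described above (standardization, interval scaling with its logarithmic and root variants, and discretization) might look as follows; the bin count and the base-10 logarithm are assumptions for the example, not values fixed by the embodiment:

```python
import math

def standardize(xs):
    """Z-score standardization: x̃ = (x - μ) / σ (assumes non-constant data)."""
    mu = sum(xs) / len(xs)
    sigma = math.sqrt(sum((x - mu) ** 2 for x in xs) / len(xs))
    return [(x - mu) / sigma for x in xs]

def min_max_scale(xs):
    """Interval scaling: x̃ = (x - x_min) / (x_max - x_min), into [0, 1]."""
    lo, hi = min(xs), max(xs)
    return [(x - lo) / (hi - lo) for x in xs]

def log_scale(xs):
    """Logarithmic scaling lg(x + 1) for unbalanced data (assumes x >= 0)."""
    return [math.log10(x + 1) for x in xs]

def discretize(xs, n_bins=4):
    """Equal-width binning of a continuous parameter into bin indices."""
    lo, hi = min(xs), max(xs)
    width = (hi - lo) / n_bins
    return [min(int((x - lo) / width), n_bins - 1) for x in xs]
```

Each function turns a dimensional parameter series into dimensionless values, which is the stated purpose of the normalization step.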
S103, the terminal equipment calculates the offline characteristics of the normalized offline parameters.
In the embodiment of the application, the offline characteristics of the offline parameters can be represented by characteristic vectors.
After normalizing the offline parameters of the offline data set, the terminal device may extract, from the offline parameters from time t_1 to time t_i, a feature vector d_{t_i} ∈ R^N for evaluating the QoE at time t_i. Wherein R represents the real number domain, and N represents the number of features (dimensions) contained in each sample data. The feature vector d_{t_i} can be understood as an N-dimensional vector.

For example, the offline feature d_{t_i} for evaluating the QoE at time t_i is expressed as:

d_{t_i} = D(x̃_{t_1}, x̃_{t_2}, …, x̃_{t_i})

wherein D(·) is an offline feature extraction function.
Alternatively, offline characteristics of offline parameters may include, but are not limited to: at least one of window features, global features, and other features.
1) Window features
The window features for evaluating the QoE at time t_i can be expressed as:

d_{t_i}^window = D_window(x̃_{t_i - T_1 + 1}, …, x̃_{t_i})

wherein D_window(·) is a window feature extraction function.

In an embodiment of the application, the window features d_{t_i}^window are correlated only with the offline parameters input within the time window [t_i - T_1 + 1, t_i]. Wherein the duration of the time window is T_1, and the time window lies within [t_1, t_i].
Alternatively, the terminal device may employ the (weighted) average, median, maximum, minimum, standard deviation, and other more complex statistical methods to calculate the window features from the offline parameters input within the time window [t_i - T_1 + 1, t_i].
It will be appreciated that since the duration T_1 of the time window can be regarded as a constant, both the time complexity and the space complexity of computing window features are O(1), i.e., constant level. Time complexity refers to the computational workload required to execute the algorithm, and space complexity refers to the memory space required to execute the algorithm.
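A minimal sketch of the window feature computation described above, assuming a simple list of per-unit-time parameter values and a constant window length T_1, might look as follows in Python:

```python
import statistics

def window_features(params, t_index, window_len):
    """Window features for time t_i depend only on the parameters inside
    [t_i - T_1 + 1, t_i]; since T_1 is constant, the cost is O(1)."""
    window = params[max(0, t_index - window_len + 1): t_index + 1]
    return {
        "mean": statistics.mean(window),
        "median": statistics.median(window),
        "max": max(window),
        "min": min(window),
        "std": statistics.pstdev(window),   # population standard deviation
    }
```

For instance, with a window length T_1 = 3, the features at the fifth time point are computed from the last three parameter values only.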
2) Global features
The global features for evaluating the QoE at time t_i can be expressed as:

d_{t_i}^whole = D_whole(x̃_{t_1}, …, x̃_{t_i})

wherein D_whole(·) is a global feature extraction function.

In an embodiment of the application, the global features d_{t_i}^whole are correlated with the offline parameters input within the time period [t_1, t_i]. Wherein the period [t_1, t_i] is the total period corresponding to the offline data set.
Alternatively, the terminal device may employ the (weighted) average, maximum, minimum, skewness (skewness), kurtosis (kurtosis), etc., to calculate the global features from the offline parameters input within the period [t_1, t_i]. Skewness is a statistic characterizing the degree of asymmetry of a probability density curve relative to its mean, and kurtosis is a statistic characterizing the peakedness of the probability density curve at the mean.
It should be noted that, in theory, the time complexity and the space complexity of computing the global features are both O(t): the longer the service duration (t_i - t_1) of the streaming video service, the higher the time and space cost of computing the global features.
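A sketch of the global feature computation described above, implementing the mean, extrema, skewness, and (excess) kurtosis over the whole period [t_1, t_i] and hence scanning all t samples (the O(t) cost), might be:

```python
def global_features(params):
    """Global features for time t_i depend on all parameters in [t_1, t_i],
    so the time/space cost grows as O(t) with the service duration."""
    n = len(params)
    mu = sum(params) / n
    sd = (sum((x - mu) ** 2 for x in params) / n) ** 0.5
    skew = sum(((x - mu) / sd) ** 3 for x in params) / n
    kurt = sum(((x - mu) / sd) ** 4 for x in params) / n - 3.0  # excess kurtosis
    return {"mean": mu, "max": max(params), "min": min(params),
            "skewness": skew, "kurtosis": kurt}
```

A symmetric parameter series yields zero skewness, matching the definition of skewness as the asymmetry of the distribution around its mean.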
3) Other features
Features that are independent of network parameters may be referred to as other features. Such as streaming video service session duration, etc.
It should be noted that the time complexity and space complexity of recording such real-time attributes over the whole streaming media video service are both O(1), i.e., constant level.
S104, the terminal equipment performs feature selection on the offline features.
Since the extracted feature vector d_{t_i} contains some features with high redundancy, and such highly redundant features make the training of the machine learning model and the score memory unit more complex and time-consuming, the terminal device can adopt a feature selection algorithm to remove the highly redundant features and obtain a reduced feature vector d̃_{t_i} ∈ R^{N'}, thereby reducing the training complexity and accelerating the training. Wherein N' denotes the reduced dimension, N' < N; i.e., the feature vector d̃_{t_i} can be understood as an N'-dimensional vector.
Alternatively, the feature selection algorithm may include, but is not limited to: mutual information-based methods, maximum correlation-minimum redundancy-based methods, packaging method (Wrapper) -based feature selection methods, and the like.
1) Method based on mutual information
The mutual information (mutual information) between each of the N features and the label is counted, and the N' features with the highest mutual information are selected as the reduced feature vector d̃_{t_i}. Mutual information is an information metric used in information theory, statistics, and machine learning; it represents the amount of information one random variable contains about another, or equivalently the reduction in uncertainty of one random variable due to knowledge of another.
It should be noted that, the "label" in the embodiment of the present application refers to a QoE label for evaluating streaming video. For example, when the QoE tag is 0, the representative streaming video is smooth; when the QoE label is 1, the representative streaming video is stuck. For another example, when the QoE label is 1, the representative streaming video is smooth; when the QoE tag is 0, the representative streaming video is stuck.
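As an illustrative sketch of the mutual-information-based selection, the following uses scikit-learn's `mutual_info_classif` as an assumed stand-in for whatever MI estimator the embodiment employs; the toy data (a feature that copies the QoE label and a pure-noise feature) is likewise invented for the example:

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def select_by_mutual_information(X, y, n_keep):
    """Rank features by their mutual information with the QoE label y
    and keep the n_keep highest-scoring ones."""
    mi = mutual_info_classif(X, y, random_state=0)
    keep = np.argsort(mi)[::-1][:n_keep]
    return np.sort(keep), mi

# Toy data: feature 0 is a copy of the label, feature 1 is pure noise.
rng = np.random.default_rng(0)
y = np.array([0] * 100 + [1] * 100)
X = np.column_stack([y.astype(float), rng.normal(size=200)])
keep, mi = select_by_mutual_information(X, y, n_keep=1)
```

The label-copying feature carries maximal information about y and is selected; the noise feature's mutual information with y is near zero.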
2) Method based on maximum correlation-minimum redundancy
In this approach, a one-dimensional feature is selected from the set of candidate features each round, such that the feature is most relevant to the label (i.e., has the highest mutual information with it) and least redundant with the set of already selected features (i.e., has the lowest mutual information with them). The feature is then moved into the selected feature set. This step is repeated N' times to obtain the selected feature vector d̃_{t_i}.
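The greedy loop described above can be sketched as follows; the use of scikit-learn's MI estimators and the "relevance minus mean redundancy" selection score are assumptions for illustration, since the embodiment does not fix a particular estimator or combination rule:

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif, mutual_info_regression

def mrmr_select(X, y, n_keep):
    """Greedy max-relevance min-redundancy: each round picks the feature
    maximizing MI(feature, label) minus its mean MI with the features
    already selected."""
    relevance = mutual_info_classif(X, y, random_state=0)
    selected, remaining = [], list(range(X.shape[1]))
    for _ in range(n_keep):
        best, best_score = None, -np.inf
        for j in remaining:
            if selected:
                redundancy = np.mean([
                    mutual_info_regression(X[:, [k]], X[:, j], random_state=0)[0]
                    for k in selected])
            else:
                redundancy = 0.0
            score = relevance[j] - redundancy
            if score > best_score:
                best, best_score = j, score
        selected.append(best)
        remaining.remove(best)
    return selected
```

The first pick is simply the most label-relevant feature; later picks are penalized for duplicating information already in the selected set.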
3) Feature selection method based on Wrapper
The feature selection method based on the Wrapper comprises the following steps:
window features, global features, and other features are computed from the offline parameters. Then, a classifier is selected as the base classifier, and this classifier is used to learn the mapping from the feature vector d_{t_i} to the label. Then, according to the contribution (i.e., importance) of each feature to the label as given by the classifier, the one-dimensional feature with the lowest contribution is deleted from the feature vector. The above steps are repeated until the number of features is reduced to the expected number N', and finally the selected feature vector d̃_{t_i} is obtained.
In some embodiments, a light gradient boosting machine (light gradient boosting machine, LGB), a gradient boosting framework based on decision tree algorithms, may be employed as the base classifier for multiple rounds of training. In each round of training, all features are ranked according to their importance, and the feature with the lowest importance is removed from the optimal feature set S. The rounds are then iterated continuously until only v features remain in the optimal feature set S, or until a preset number of iterations is reached. Wherein v is a preset value.
Calculating the feature importance degree (also referred to as the importance level) in an LGB model may be done by two methods:

Split (split) method: the importance of a feature is calculated as the total number of times the feature is used as a splitting attribute across all trees;

Gain (gain) method: the importance of a feature is calculated as the total gain brought by the feature when used as a splitting attribute.
In particular, an LGB classification model-based recursive feature elimination (recursive feature elimination, RFE) method may be employed to select offline features.
Illustratively, the LGB-RFE feature based selection algorithm is as follows:
input: training data sets (X, Y) and numbering features. Wherein X and Y may be used to represent different types of training data. Specifically, x= (X 0 ,X 1 ,…,X n ),Y=(Y 0 ,Y 1 ,…,Y n ). Wherein each data in set X is m-dimensional, e.g., X 0 =(x 0 ,x 1 ,…,x m ) The method comprises the steps of carrying out a first treatment on the surface of the Each data in set Y is 1-dimensional.
And (3) outputting: and the feature selection result is a sorted feature list.
The process comprises the following steps:
1. Initialization: the existing feature subset S = [1, 2, …, m] and the feature ordering storage list R = [ ];
2. for i = 1 to m do  // incremental for loop; 1 is the initial value and m is the final value
   i. train the LGB model using the existing feature subset X = X(:, S) of the full training data;
   ii. calculate the importance degree a_i of each feature in the LGB model trained in step i; the importance degree a_i may be calculated by the split method or the gain method, and i is a positive integer less than or equal to m;
   iii. find the feature f = argmin(a) with the lowest importance in step ii, where argmin denotes the value of the variable that minimizes the objective function;
   iv. update the feature ordering storage list R = [S(f), R];
   v. remove the least important feature and update the existing feature subset list S = [1, …, f-1, f+1, …, m];
   end for  // end for indicates the end of the loop
3. Restore the final ordered feature list to feature names according to the feature numbers.
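The steps above can be sketched in Python. This is a minimal sketch: the hypothetical importances_fn stands in for training an LGB model on the current subset and reading back one importance score per remaining feature, which the real algorithm does each round.

```python
# Minimal sketch of the LGB-RFE loop. importances_fn(subset) must return one
# importance score per feature number in subset (e.g., gain sums from a trained
# LightGBM model); here it is a placeholder supplied by the caller.
def lgb_rfe(importances_fn, m):
    """Return the feature numbers ordered from most to least important."""
    S = list(range(1, m + 1))   # existing feature subset, numbered 1..m
    R = []                      # feature ordering storage list
    for _ in range(m):
        a = importances_fn(S)                        # steps i-ii: train, get importances
        f = min(range(len(a)), key=a.__getitem__)    # step iii: least important index
        R.insert(0, S[f])                            # step iv: R = [S(f), R]
        S.pop(f)                                     # step v: remove it from the subset
    return R

# Toy stand-in: feature k always has importance k, so feature 1 is removed first.
ranked = lgb_rfe(lambda subset: list(subset), 5)
print(ranked)  # most important first: [5, 4, 3, 2, 1]
```

Because R is built by prepending the eliminated feature, the last feature to survive ends up first, i.e., the output list is sorted from most to least important.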
Taking a live video as an example, the method for selecting offline features based on the LGB-RFE classifier will be described with reference to fig. 4 and 5.
Fig. 4 is a schematic diagram of the sorting result of some features after one round of feature sorting according to an embodiment of the present application. As shown in fig. 4, after the terminal device ranks the features from large to small according to the gain sum of feature splitting, the first 15 features are, in order: feature 'dlinkpktlen_window_6_mean', feature 'uplinkretransrate_window_10_md', feature 'uplinkretransrate_window_18_md', …, and feature 'iterstsi'. The horizontal axis represents the gain magnitude and the vertical axis represents the features.
Fig. 5 is a schematic flow chart of selecting features based on the LGB-RFE classifier according to an embodiment of the present application. After the original features are structured, a total of 70 dimensions of features, including the original features, can be calculated, as shown in fig. 5. The LGB-RFE classifier then sorts all the features, taking either the sum of the information gains of feature splitting or the total number of feature splits as the basis for calculating the importance of the different features to the label. The feature with the lowest importance degree is then eliminated from the optimal feature set.
In the first round of feature sorting, an ordering storage list ['uplinkpktnum_window_18_mean', 'uplinkpktlen_window_18_mean', …, 'uplinkPktLen'] is obtained according to the importance degree of each of the 70 dimensions of features. Since the importance of the feature 'uplinkPktLen' is the lowest in this round, the feature 'uplinkPktLen' is removed and the existing feature subset list is updated. The updated existing feature subset list is then trained again using the LGB-RFE classifier and the importance degrees of the features are recalculated.
In the second round of feature sorting, an ordered list ['uplinkpktnum_window_18_mean', 'uplinkpktlen_window_18_mean', …, 'synNum'] is obtained according to the importance degree of each of the remaining 69 dimensions of features. Since the feature 'synNum' is the least important in this round, the feature 'synNum' is removed and the existing feature subset list is updated. The updated existing feature subset list is then trained again using the LGB-RFE classifier and the importance degrees of the features are recalculated.
Then, the LGB-RFE classifier repeats the above process until the feature set is empty and outputs the ordered features.
It should be noted that S101 to S104 are described above taking the feature vector used for evaluating the QoE at time t_i as an example. It can be appreciated that, for any time before t_i, the corresponding feature vector can also be obtained by the same method, which is not repeated in the application.
S105, the terminal equipment trains a machine learning model.
The machine learning model described above is also referred to as a machine learning classifier.
After the feature vector X_{t_i} is obtained, the terminal device may train the machine learning classifier to obtain the mapping relation F_ml(·) between the feature vector X_{t_i} and the QoE preliminary evaluation result at time t_i.
The QoE preliminary assessment result at time t_i may be represented by at least one of:
the preliminarily predicted probability that the streaming media video service stalls at time t_i;
the preliminarily predicted label of the streaming media video service at time t_i;
the time delay of the streaming media video service at time t_i;
the number of times and total duration of buffering of the streaming media video service in a period of time before time t_i;
the packet loss rate of the streaming media video service in a period of time before time t_i; and the like.
Taking the preliminarily predicted probability p̂_{t_i} that the streaming media video service stalls at time t_i as an example of the QoE preliminary evaluation result predicted and output by the machine learning classifier, the relationship is:

p̂_{t_i} = F_ml(X_{t_i}), where p̂_{t_i} ∈ [0, 1]

When p̂_{t_i} = 0, the machine learning classifier predicts that the streaming media video service does not stall at time t_i; when p̂_{t_i} = 1, the machine learning classifier predicts that the streaming media video service is certain to stall at time t_i. That is, the larger the value of p̂_{t_i}, the higher the probability that the machine learning classifier predicts the streaming media video service stalls.
Alternatively, the machine learning classifier for training may be any one of the following: decision trees, support vector machines (support vector machines, SVM), random Forest (RF), reinforcement learning or boosting (AdaBoost), gradient boosting decision trees (gradient boosting decision tree, GBDT), and the like.
In the embodiment of the application, the optimization target of training the machine learning classifier is:

min L(p̂_{t_i}, y_{t_i})

where y_{t_i} is the initial QoE label annotated for the streaming media video played at time t_i. The initial QoE label is manually annotated or automatically generated based on the log files of the streaming media video. This initial QoE label is also referred to as the original QoE label.
Taking a manually annotated initial QoE label as an example, one possibility is that when the initial QoE label is 0, the streaming media video is manually marked as smooth, and when the initial QoE label is 1, the streaming media video is manually marked as stalled. Another possibility is the reverse: when the initial QoE label is 0, the streaming media video is manually marked as stalled, and when the initial QoE label is 1, the streaming media video is manually marked as smooth.
L(·) is a loss function that represents the "risk" or "loss" of a random event. L(p̂_{t_i}, y_{t_i}) represents the loss corresponding to the degree of difference between the preliminary prediction probability p̂_{t_i} and the initial QoE label y_{t_i}. It should be understood that the smaller L(p̂_{t_i}, y_{t_i}) is, the smaller the loss or risk is.
s.t. is an abbreviation of "subject to", indicating that a constraint condition is satisfied. In the embodiment of the application, the optimization target of training the machine learning classifier is that the loss function L(p̂_{t_i}, y_{t_i}) takes the minimum value, that is, the loss corresponding to the degree of difference between the preliminary prediction probability p̂_{t_i} and the initial QoE label y_{t_i} is minimized, so that the loss or risk approaches the minimum.
Illustratively, the above-mentioned loss function may be a cross entropy loss function, a hinge loss function, or an exponential loss function, etc.
The cross entropy loss function of one sample may be:

L = -[y_{t_i} · log_a(p̂_{t_i}) + (1 - y_{t_i}) · log_a(1 - p̂_{t_i})]

where a is the base of the logarithmic function; for example, a is the irrational number e.
Accordingly, the total loss function over N samples may be:

L = -(1/N) · Σ_{i=1}^{N} [y_{t_i} · log_a(p̂_{t_i}) + (1 - y_{t_i}) · log_a(1 - p̂_{t_i})]
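As a quick numerical check of the single-sample formula (a minimal sketch assuming the natural logarithm, a = e):

```python
import math

# Single-sample cross entropy loss with a = e.
# y is the initial QoE label (0 smooth, 1 stalled); p is the predicted stalling probability.
def cross_entropy(y, p):
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

# The loss grows as the prediction disagrees with the label:
print(round(cross_entropy(0, 0.6), 4))  # 0.9163  (label smooth, high stall probability)
print(round(cross_entropy(1, 0.6), 4))  # 0.5108
print(round(cross_entropy(1, 0.9), 4))  # 0.1054  (label stalled, confident stall prediction)
```

These values also match the cross entropy column of Table 1 later in this section.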
it should be noted that the cross entropy loss function described above is applicable to two kinds of scenes, such as a scene where the streaming video is smooth or stuck. It should be understood that when classifying the streaming media video by using other multi-classification scenes besides the bi-classification, other forms of loss functions may also be used, which is not limited by the embodiment of the present application.
In order to understand the cross entropy loss function more intuitively, let a = e; an exemplary description is given below in connection with fig. 6 and 7.
As shown in fig. 6, when the initial QoE label y_{t_i} = 1, the cross entropy loss function is L = -ln(p̂_{t_i}). The abscissa represents the predicted stalling probability p̂_{t_i} output by the machine learning classifier, and the ordinate represents the cross entropy loss function L. When the predicted stalling probability p̂_{t_i} is closer to 1, the cross entropy loss function L is smaller; when the predicted stalling probability p̂_{t_i} is closer to 0, the cross entropy loss function L is greater.
As shown in fig. 7, when the initial QoE label y_{t_i} = 0, the cross entropy loss function is L = -ln(1 - p̂_{t_i}). The abscissa represents the predicted stalling probability p̂_{t_i} output by the machine learning classifier, and the ordinate represents the cross entropy loss function L. When the predicted stalling probability p̂_{t_i} is closer to 0, the cross entropy loss function L is smaller; when the predicted stalling probability p̂_{t_i} is closer to 1, the cross entropy loss function L is greater.
The cross entropy loss function L in fig. 6 and 7 characterizes the difference between the predicted stalling probability p̂_{t_i} and the initial QoE label y_{t_i}. Since the cross entropy loss function L is smaller, and the penalty on the machine learning model is smaller, when the predicted stalling probability p̂_{t_i} differs less from the initial QoE label y_{t_i}, training the machine learning classifier reduces the difference between the predicted stalling probability and the initial QoE label as much as possible, so as to obtain a final machine learning classifier that minimizes the loss function.
In S105 of the above embodiment, the initial QoE label is treated as the true QoE label. In actual implementation, however, since the initial QoE label is manually annotated or automatically generated based on log files, manual annotation is error-prone and log files are not real-time, so there is a delay error of a certain time between the initial QoE label and the true QoE label; that is, there are noisy labels, and these noisy labels interfere with the learning of the classifier. In addition, even with true QoE labels, the differences in the real-time network parameters before and after a sudden QoE change are small (for example, the QoE label at time t is stalled while the QoE label at time t-1 is smooth, or the reverse), but the QoE evaluation results corresponding to these times may differ greatly, so similar feature inputs may be mapped to different QoE evaluation results, thereby interfering with the learning of the machine learning classifier. Therefore, a machine learning classifier trained in such a context does not have good robustness and generalization.
In order to solve the problem of poor robustness and generalization of the machine learning classifier, the embodiment of the application provides an algorithm based on confidence learning to assist in training of the machine learning classifier.
As shown in fig. 8, the machine learning classifier training process based on confidence learning may include:
step 1, inputting a feature vector setQoE setWherein the QoE set is composed of the sum time { t } 1 ,t 2 ,…t i An initial QoE tag for each time instant in …, the initial QoE tag being manually annotated or automatically generated based on log files.
Step 2: train the machine learning classifier and learn the mapping relation F_ml(·) from the feature vectors to the QoE labels.
Step 3: from the times {t_1, t_2, … t_i, …}, extract all times t_{i'} at which the QoE label mutates, the feature vector sets corresponding to the times [t_{i'-T}, …, t_{i'+T}] before and after t_{i'}, and the QoE sets corresponding to these mutation times. Through the mapping relation F_ml(·), calculate the set of predicted stalling probabilities output by the machine learning classifier for these times.
Step 4: traverse the QoE set and the predicted stalling probability set, and calculate, for each time t, the cross entropy of the initial QoE label y_t and the predicted stalling probability p̂_t. Then, sort the QoE set in descending order of cross entropy.
Step 5: change the QoE labels of the first X% of samples in the sorting result to the opposite QoE labels, for example, change a stall label into a smooth label or a smooth label into a stall label, and update and store the changed labels in the QoE set. X is a preset value, such as X = 5, 10, or 20.
Step 6: repeat steps 2-5 according to the preset number of iterations. The machine learning classifier of the last round is then taken as the final machine learning classifier.
Illustratively, take the first iteration of the training process of the machine learning classifier based on confidence learning as an example. Assume that the QoE label mutates at time t_{i'} and that T = 5. Table 1 below shows the correspondence among the initial QoE label, the predicted stalling probability, and the cross entropy at each time around the mutation time t_{i'}.
TABLE 1

Time | Initial QoE label | Predicted stalling probability | Cross entropy | Modified QoE label
t_{i'-5} | 0 | 0.3 | 0.3567 | 0
t_{i'-4} | 0 | 0.2 | 0.2231 | 0
t_{i'-3} | 0 | 0.1 | 0.1054 | 0
t_{i'-2} | 0 | 0.3 | 0.3567 | 0
t_{i'-1} | 0 | 0.6 | 0.9163 | 1
t_{i'} | 1 | 0.6 | 0.5108 | 1
t_{i'+1} | 1 | 0.9 | 0.1054 | 1
t_{i'+2} | 1 | 0.7 | 0.3567 | 1
t_{i'+3} | 1 | 0.7 | 0.3567 | 1
t_{i'+4} | 0 | 0.5 | 0.6931 | 0
t_{i'+5} | 0 | 0.1 | 0.1054 | 0
Referring to Table 1, the machine learning classifier may utilize the mapping relation F_ml(·) to output the predicted stalling probability corresponding to each of the times t_{i'-5}, t_{i'-4}, …, t_{i'}, …, t_{i'+4}, t_{i'+5}. Each time is then traversed, and the cross entropy of the initial QoE label and the predicted stalling probability at each time is calculated.
Specifically, the cross entropy of the initial QoE label and the predicted stalling probability at each time may be calculated according to the calculation manner provided in fig. 6 and fig. 7 of the above embodiment: when the initial QoE label y_t = 1, the cross entropy loss function is L = -ln(p̂_t); when the initial QoE label y_t = 0, the cross entropy loss function is L = -ln(1 - p̂_t). The cross entropy of the initial QoE label and the predicted stalling probability at each time can thus be calculated and sorted in descending order. Suppose the QoE labels of the first 10% of the samples in the sorting result are changed, i.e., 1 sample in Table 1. Since the cross entropy 0.9163 corresponding to time t_{i'-1} is the highest among all cross entropies, the initial QoE label at time t_{i'-1} is modified from "0" to "1". The changed label is then updated and saved in the QoE set, i.e., the label y_{t_{i'-1}} in the set is modified to "1". The machine learning classifier may then iterate the training process until the final machine learning classifier is obtained.
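One iteration of steps 4-5 on the data of Table 1 can be sketched as follows; the 10% flip ratio and the sample values are taken from the example above, while the code itself is a minimal re-implementation rather than the patent's own code.

```python
import math

def cross_entropy(y, p):
    # Binary cross entropy with natural log, as in figs. 6 and 7.
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

# (initial QoE label, predicted stalling probability) for t_{i'-5} .. t_{i'+5}, from Table 1
samples = [(0, 0.3), (0, 0.2), (0, 0.1), (0, 0.3), (0, 0.6),
           (1, 0.6), (1, 0.9), (1, 0.7), (1, 0.7), (0, 0.5), (0, 0.1)]

# Step 4: cross entropy per time; step 5: flip the labels of the first X% = 10%
losses = [cross_entropy(y, p) for y, p in samples]
k = max(1, round(len(samples) * 0.10))           # 10% of 11 samples -> 1 sample
worst = sorted(range(len(samples)), key=lambda i: losses[i], reverse=True)[:k]
labels = [1 - y if i in worst else y for i, (y, _) in enumerate(samples)]
print(worst)   # [4] -> t_{i'-1} has the highest cross entropy (0.9163)
print(labels)  # its label flips from 0 to 1, matching the "Modified QoE label" column
```

The flipped label set would then be fed back into step 2 for the next training round.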
It can be appreciated that by changing the QoE labels before and after some mutation times, these labels are generated iteratively by the classifier itself rather than being fixed as manually annotated labels, thus enhancing the generalization capability of the machine learning classifier.
It should be noted that S102-S105 above adopt an iterative training method based on feature engineering plus a machine learning classifier, and the precondition of this method is the assumption that the training samples are independent in time. In practice, however, there is time dependency between training samples, so the output of the machine learning classifier exhibits "spikes", that is, more false alarms may be generated, reducing the accuracy of QoE prediction.
In order to improve the accuracy of the overall QoE evaluation model, based on the relevance in the time dimension between the prediction results output by the machine learning model at adjacent times, the embodiment of the application further provides training a score memory unit according to the prediction results output by the machine learning model at a plurality of adjacent times.
S106, the terminal equipment trains the score memory unit according to the output result of the machine learning model.
Assume that the stalling probability predicted by the machine learning model for the streaming media video played at time t_i is p̂_{t_i}, and that the stalling probability predicted at each time in a period before t_i is p̂_{t_{i-j}}. The training process of the score memory unit is a process of modeling these stalling probabilities.
The QoE preliminary evaluation result may be a preliminary prediction probability, and the QoE final evaluation result may be a final prediction probability.
The final prediction probability output by the score memory unit for time t_i is:

p_output(t_i) = Σ_{j=0}^{T_2} a_j · p̂_{t_{i-j}}

where a_j is the weight factor of each preliminary prediction probability, T_2 is the time window length, and p_output(t_i) is the final prediction probability at time t_i. When j = 0, p̂_{t_i} is the preliminary prediction probability at time t_i; when j ≠ 0, p̂_{t_{i-j}} is the preliminary prediction probability at time t_{i-j} before time t_i.
Further, the QoE final assessment result may also include a final prediction label. Illustratively, if the final prediction probability at time t_i is greater than or equal to 0.5, the final prediction label indicates that the streaming media video stalls at time t_i; if the final prediction probability at time t_i is less than 0.5, the final prediction label indicates that the streaming media video is smooth at time t_i.
The time window length T_2 is a predefined hyper-parameter. Typically, the time window length T_2 takes the value 2 or 3.
For example, when T 2 When the number of the codes is =2,
for another example, when T 3 When the number of the samples is =3,
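The linear weighting can be sketched as follows; the weight values a_j below are hypothetical (in practice they are learned during offline training), chosen only to show how an isolated "spike" in the preliminary probabilities is smoothed below the 0.5 decision threshold.

```python
# Sketch of the score memory unit's linear weighting for T_2 = 2.
def final_probability(prelim, weights):
    """prelim: preliminary stalling probabilities [p_{t_i}, p_{t_{i-1}}, p_{t_{i-2}}]."""
    return sum(a * p for a, p in zip(weights, prelim))

a = [0.4, 0.3, 0.3]                       # hypothetical weight factors a_0, a_1, a_2
p = [0.9, 0.1, 0.1]                       # an isolated "spike" at the current time
print(round(final_probability(p, a), 2))  # 0.42 -> below 0.5, the spike is filtered out
```

A single high preliminary probability surrounded by low ones no longer triggers a stall decision, which is exactly the false-alarm filtering the score memory unit is intended to provide.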
The training process of the score memory unit has the following optimization target:

min L(p_output(t_i), y_{t_i})

where y_{t_i} is the initial QoE label annotated for the streaming media video played at time t_i.
It should be noted that, the initial QoE label for training the score memory unit and the initial QoE label for training the machine learning classifier are the same label, which are both manually labeled or automatically generated based on the log file of the streaming video.
L(·) is a loss function representing the "risk" or "loss" of a random event. L(p_output(t_i), y_{t_i}) represents the loss corresponding to the degree of difference between the final prediction probability p_output(t_i) and the initial QoE label y_{t_i}. It should be understood that the smaller L(p_output(t_i), y_{t_i}) is, the smaller the loss or risk is.
s.t. is an abbreviation of "subject to", indicating that a constraint condition is satisfied. In the embodiment of the application, the optimization target of training the score memory unit is that the loss function L(p_output(t_i), y_{t_i}) takes the minimum value, that is, the loss corresponding to the degree of difference between the final prediction probability p_output(t_i) and the initial QoE label y_{t_i} is minimized, so that the loss or risk approaches the minimum. Further, since the final prediction probability p_output(t_i) is obtained by weighting and summing the preliminary prediction probabilities with their corresponding weight factors, the optimization target of training the score memory unit is to obtain the optimal weight factors such that the loss or risk approaches the minimum.
It can be understood that by training the score memory unit, many "spike" samples misjudged as stalls are filtered out, so that the accuracy of the overall QoE evaluation model is improved and the sensitivity of stall prediction is reduced.
It should be noted that, in the foregoing embodiments, the training method and the QoE evaluation method of the score memory unit are described by taking the terminal device as an example, and it should be understood that, in actual implementation, the training method and the QoE evaluation method of the score memory unit may also be performed by the server, which is not limited by the embodiments of the present application.
In the above embodiment, the terminal device pre-builds a QoE evaluation model for QoE evaluation on the streaming media video according to the offline data set corresponding to the at least one streaming media video, and further saves the QoE evaluation model. In this way, in the process of playing the streaming media video by the terminal equipment, the QoE real-time evaluation can be performed on the streaming media video being played based on the QoE evaluation model. The online evaluation flow of the target streaming video being played will be described with reference to fig. 9.
Fig. 9 is a flow chart of an online evaluation flow provided in an embodiment of the present application. As shown in fig. 9, the method may include S201-S205 described below.
S201, the terminal equipment acquires the online parameters.
Unlike the offline parameters obtained by the terminal device after the streaming media video finishes playing in the above embodiment, the online parameters are terminal-side parameters collected in real time during the playing of the target streaming media video, from the initial playing time to the current time. The online parameters are also referred to as online data.
Denote the current time by t_r. The terminal device may start to collect online parameters in real time from the initial time of playing the target streaming media: for example, online parameter 1 is collected at the initial playing time t_0, online parameter 2 is collected at the time following t_0, …, and the online parameter x_{t_r} is collected at the current time t_r.
Optionally, the terminal-side parameter may include at least one of the following parameters:
the transmission layer parameter is used for reflecting the network transmission condition when the target streaming media video is played;
QoS parameters that may be used to evaluate the ability of a network to provide services for a target streaming video;
and the terminal parameter is a self parameter of the terminal equipment when the target streaming media video is played.
For the description of the online parameters, reference may be made to the specific description of the offline parameters in the foregoing embodiments, which is not repeated herein.
S202, normalizing the online parameters by the terminal equipment.
After the terminal device acquires the online parameter x_{t_r} input at the current time t_r, the online parameter may be normalized:

x̃_{t_r} = G(x_{t_r})

where G(·) is a normalization function.
The method for normalizing the online parameter is consistent with the method for normalizing the offline parameter. For example, the online normalization method may include, but is not limited to: normalization, interval scaling, discretization, and the like. Reference may be made to the detailed description of the above embodiments, which are not repeated here.
It should be understood that in the process of playing the target streaming media video, converting the online parameters, which are collected in real time and carry physical dimensions, into dimensionless scalar parameters makes the distributions of the different parameters more similar, thereby improving the prediction accuracy and robustness of the model.
S203, the terminal equipment calculates the online characteristics of the normalized online parameters.
In the embodiment of the application, the online characteristic of the online parameter can be represented by a characteristic vector.
Online features may include, but are not limited to: at least one of window features, global features, and other features.
The window feature is a feature extracted from partial data, namely the online data of a preset duration before the current time t_r, including t_r itself. A global feature is a feature that relates to all data, including the online data from the initial time at which the video application starts playing the target streaming media video to the current time t_r. Other features are features that are independent of the network data.
In the offline training process in the above embodiment, the terminal device first obtains all types of feature vectors corresponding to the entire period, and then selects a part of types of feature vectors from all the feature vectors. Unlike the offline training process, the online evaluation process calculates only some types of feature vectors selected by the offline training process, without calculating all types of feature vectors. For example, in the offline training process, the terminal device normalizes the offline data, calculates all offline features of the normalized offline data, and then removes redundant offline features from all offline features to obtain the target offline features. In the online evaluation flow, the terminal equipment normalizes online data first, and then directly calculates target online characteristics of the normalized online data without calculating redundant offline characteristics.
Illustratively, assume that in the offline training procedure, the terminal device eventually selects the 15 types of features shown in fig. 5 from the 70 dimensions of features: feature 'dlinkpktlen_window_6_mean', feature 'uplinkretrans_window_18_mad', …, feature 'dlinkretrans_window_18_mean', and feature 'iterstsi'. When performing online evaluation, the terminal device only needs to calculate these 15 types of features, without calculating the deleted redundant features, thereby reducing the computational complexity and resource consumption of the terminal device.
In addition, in the offline training process, the time complexity and space complexity of the window features and other features are both constant, but the time complexity and space complexity of the global features are both O(t), linear in the service duration (t_i - t_1). Unlike the offline training process, for the global features of the online evaluation process, the embodiment of the application provides an incremental feature update method: according to the online data collected at the current time t_r, the global feature used for evaluating the QoE at the time immediately before t_r, and the intermediate variable corresponding to the time immediately before t_r, the global feature used for evaluating the QoE at the current time t_r is obtained.
Specifically, this can be represented by the following relation:

(X_{t_r}, M_{t_r}) = D(X_{t_{r-1}}, M_{t_{r-1}}, x_{t_r})

where D(·) is an incremental function, t_r is the current time, and t_{r-1} is the time immediately before the current time. M_{t_{r-1}} is the intermediate variable corresponding to time t_{r-1}, and M_{t_r} is the intermediate variable corresponding to the current time t_r. X_{t_{r-1}} is the online feature corresponding to time t_{r-1}, and X_{t_r} is the online feature corresponding to time t_r.
Illustratively, a specific incremental feature update method is provided below:
It should be appreciated that in the incremental feature update algorithm, by introducing the intermediate variable M_{t_{r-1}} corresponding to time t_{r-1}, the terminal device can calculate the online feature X_{t_r} and the intermediate variable M_{t_r} corresponding to time t_r from the online feature X_{t_{r-1}} corresponding to time t_{r-1}, the intermediate variable M_{t_{r-1}}, and the online parameter x_{t_r} input at time t_r, without storing all historical online features, thereby reducing both the time complexity and the space complexity of the global features to O(1), i.e., a constant level.
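As a hedged sketch of the incremental function D(·): suppose one global feature is the running mean of an online parameter over the whole session. The sample count then serves as the intermediate variable, and each update touches only the previous feature value, the count, and the newly arrived parameter, giving O(1) time and space. The feature choice and names here are illustrative, not taken from the patent.

```python
# Incremental update of a running mean: D(prev_feature, prev_count, x) -> (feature, count).
# prev_count plays the role of the intermediate variable M_{t_{r-1}}.
def incremental_mean(prev_feature, prev_count, x):
    count = prev_count + 1
    feature = prev_feature + (x - prev_feature) / count  # O(1) time and space per update
    return feature, count

feature, count = 0.0, 0
for x in [10.0, 20.0, 30.0, 40.0]:       # online parameters arriving in real time
    feature, count = incremental_mean(feature, count, x)
print(feature)  # 25.0 == the batch mean of all data, without storing the history
```

A batch recomputation would need all past samples (O(t) space); the incremental form keeps only the current feature value and one intermediate variable.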
S204, the terminal equipment predicts the blocking probability on line through a machine learning model.
The machine learning model is obtained through training in the offline training process, i.e., the mapping relation F_ml(·) between the feature vector and the QoE preliminary evaluation result. Therefore, after the online feature X_{t_r} corresponding to the current time t_r is calculated in S203 above, the QoE at the current time t_r can be estimated in real time using this online feature. For example, the online feature X_{t_r} is input into the machine learning model, so that the model can output the preliminary stalling probability p̂_{t_r} that the streaming media video service stalls at the current time t_r.
The mapping relation F_ml(·), the online feature X_{t_r} corresponding to the current time t_r, and the stalling probability p̂_{t_r} of the streaming media video service at the current time t_r have the following relationship:

p̂_{t_r} = F_ml(X_{t_r}), where p̂_{t_r} ∈ [0, 1]

When p̂_{t_r} = 0, the machine learning classifier predicts that the streaming media video service does not stall at the current time t_r; when p̂_{t_r} = 1, the machine learning classifier predicts that the streaming media video service is certain to stall at the current time t_r. That is, the larger the value of p̂_{t_r}, the higher the probability that the machine learning classifier predicts the streaming media video service stalls.
S205, the terminal device updates the stalling probability predicted online by the machine learning model into the final stalling probability through the score memory unit.
The score memory unit, i.e., the weighting coefficients of the predicted stalling probabilities, has already been trained in the offline training process. Therefore, given the stalling probability p̂_{t_r} of the streaming media video service predicted by the machine learning model at the current time t_r, the stalling probability p̂_{t_{r-1}} at time t_{r-1}, the stalling probability p̂_{t_{r-2}} at time t_{r-2}, …, these predicted stalling probabilities are output in a linearly weighted manner, resulting in the final stalling probability:

p_output(t_r) = Σ_{j=0}^{T_2} a_j · p̂_{t_{r-j}}
where a_j is the weight factor of each QoE preliminary evaluation result, T_2 is the time window length, and p_output(t_r) is the final stalling probability at the current time t_r.
When j = 0, p̂_{t_r} is the preliminary stalling probability at the current time t_r; when j ≠ 0, p̂_{t_{r-j}} is the preliminary stalling probability at time t_{r-j} before the current time t_r.
The time window length T_2 is a predefined hyper-parameter. Typically, the time window length T_2 takes the value 2 or 3.
For example, when T 2 When the number of the codes is =2,
for another example, when T 3 When the number of the samples is =3,
Optionally, the terminal device may convert the final stalling probability into a stalled or smooth output.
If p_output(t_r) < 0.5, the final prediction label indicates that the target streaming media video output is smooth.
If p_output(t_r) ≥ 0.5, the final prediction label indicates that the target streaming media video output is stalled.
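The threshold rule above can be sketched as follows (a minimal sketch assuming label 1 denotes stalled and 0 denotes smooth, consistent with the earlier labeling example):

```python
# Convert the final stalling probability into a final prediction label
# using the 0.5 threshold described above (1 = stalled, 0 = smooth).
def final_label(p_output):
    return 1 if p_output >= 0.5 else 0

print(final_label(0.42))  # 0 -> the target streaming media video output is smooth
print(final_label(0.50))  # 1 -> the target streaming media video output is stalled
```

Note that the boundary value 0.5 maps to "stalled", matching the "greater than or equal to 0.5" condition in the text.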
It should be noted that S201 to S205 above are described taking the prediction of the final stalling probability at the current time t_r as an example. It can be understood that, as the streaming media video plays, the terminal device may predict the final stalling probability at times after t_r in real time, so as to determine whether the streaming media video stalls.
It should be understood that when a user views a streaming video online, the terminal device may acquire a QoE evaluation result in real time by inputting a bottom layer parameter into a QoE evaluation model constructed in advance. Further, the terminal device may determine, according to the QoE real-time evaluation result, whether the network quality of the current network meets the communication requirement, and further determine whether to trigger network acceleration.
Fig. 10 is a flowchart of network acceleration according to QoE real-time evaluation results according to an embodiment of the present application.
S301, the terminal device exchanges service data of the streaming media video with the server through a first network.
The first network may be, for example, a Wi-Fi network as shown in fig. 1 or a cellular network.
In actual implementation, after the terminal device receives a streaming-video playing operation from the user, it may send a playing request message to the server in response to the operation, requesting the resources of the streaming video. The server may then send service data packets of the streaming video to the terminal device in response to the request message. Accordingly, the terminal device receives the service data packets of the streaming video, where each data packet carries a corresponding sequence number.
S302, in the process of playing the streaming media video, the terminal equipment carries out QoE real-time evaluation on the streaming media video.
While the streaming media video plays, the terminal device can collect underlying parameters such as transport-layer parameters, QoS parameters, and terminal device parameters in real time, and input them into the pre-constructed QoE evaluation model to output the QoE evaluation result at the current time.
For the online evaluation flow of the streaming video, reference may be made to the description of the foregoing embodiments, which is not repeated herein.
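A minimal sketch of the window-feature step of this real-time evaluation (the parameter name rtt_ms and the window length are illustrative assumptions, not the patent's feature set):

```python
def window_features(samples, window=5):
    """Compute simple window features over the last `window` samples of an
    underlying parameter series (e.g. per-second RTT measurements)."""
    recent = samples[-window:]
    return {
        "mean": sum(recent) / len(recent),
        "max": max(recent),
        "min": min(recent),
    }


# Illustrative transport-layer series; rising RTT often precedes a stall
rtt_ms = [40, 42, 41, 90, 95, 120, 130]
print(window_features(rtt_ms))
```

The resulting feature dictionary would then be concatenated with global features and fed to the machine learning classifier.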
S303, the terminal device determines, according to the real-time QoE evaluation result, whether the playing requirement of the streaming media video is met. If yes, execute S304 below; if not, execute S305 below.
Illustratively, assume that the final QoE evaluation result at the current time is the final prediction probability that the target streaming media video stalls at the current time.
If the final prediction probability that the target streaming media video stalls at the current time is smaller than a preset probability, i.e. the real-time QoE evaluation result is smooth, the playing requirement of the streaming media video is met and network acceleration does not need to be triggered; the terminal device can therefore continue to exchange the service data of the streaming media video with the server through the first network.

If the final prediction probability that the target streaming media video stalls at the current time is greater than or equal to the preset probability, i.e. the real-time QoE evaluation result is stalled, the playing requirement of the streaming media video is not met and network acceleration needs to be triggered; the terminal device then exchanges the service data of the streaming media video with the server through a second network, where the communication quality of the second network is better than that of the first network.
S304, the terminal device continues to exchange service data of the streaming media video with the server through the first network.
S305, the terminal device exchanges service data of the streaming media video with the server through the second network.
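A hedged sketch of the S303–S305 decision logic (the 0.5 threshold value and the network names below are assumptions for illustration; the patent only speaks of a "preset probability" and first/second networks):

```python
PRESET_PROBABILITY = 0.5  # illustrative; the patent does not fix this value


def choose_network(final_stall_prob, first_network="Wi-Fi",
                   second_network="Wi-Fi+cellular"):
    """Return the network over which to exchange streaming service data."""
    if final_stall_prob < PRESET_PROBABILITY:
        # QoE result is "smooth": keep using the first network (S304)
        return first_network
    # QoE result is "stalled": trigger network acceleration (S305)
    return second_network


print(choose_network(0.2))  # Wi-Fi
print(choose_network(0.7))  # Wi-Fi+cellular
```

The second network can be any of the acceleration scenarios listed below (multiple Wi-Fi links, a cellular link, or multi-network cooperation).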
Embodiments of the present application illustratively provide several possible scenarios for achieving network acceleration as follows.
Scene 1: the first network is a single Wi-Fi network, and the second network comprises a plurality of Wi-Fi networks.
For example, when application service data is transmitted between the terminal device and the server through a single Wi-Fi network (for example, a 2.4 GHz Wi-Fi network or a 5 GHz Wi-Fi network), if the communication quality of that Wi-Fi network does not meet the service communication requirement, then, as shown in (a) of fig. 11, the application service data may be transmitted between the terminal device and the server through multiple Wi-Fi networks (for example, a 2.4 GHz Wi-Fi network plus a 5 GHz Wi-Fi network), so as to perform network acceleration.
Scene 2: the first network may be a Wi-Fi network and the second network may be a cellular network.
For example, when application service data is transmitted between the terminal device and the server through a Wi-Fi network (e.g., a 2.4 GHz Wi-Fi network or a 5 GHz Wi-Fi network), if the communication quality of the Wi-Fi network does not meet the service communication requirement, then, as shown in (b) of fig. 11, the terminal device and the server may switch to transmitting the application service data through a cellular network (e.g., a 5G network and/or a 4G network), so as to perform network acceleration.
Scene 3: the first network may be a Wi-Fi network and the second network may include a Wi-Fi network and a cellular network. Network acceleration is achieved through multi-network collaboration.
For example, when application service data is transmitted between the terminal device and the server through a Wi-Fi network, if the network communication quality does not meet the service communication requirement, as shown in (c) of fig. 11, the application service data may be cooperatively transmitted between the terminal device and the server through a Wi-Fi network (e.g., a 2.4GHz Wi-Fi network and/or a 5GHz Wi-Fi network) and a cellular network (e.g., a 5G network and/or a 4G network), so as to implement network acceleration.
Scene 4: the first network may be a cellular network, and the second network may include a Wi-Fi network and a cellular network; network acceleration is achieved through multi-network cooperation.
Scene 5: the first network may be a first Wi-Fi network, and the second network may be a second Wi-Fi network, where an operating frequency of the second Wi-Fi network is higher than an operating frequency of the first Wi-Fi network.
Illustratively, the first network may be a 2.4GHz Wi-Fi network and the second network may be a 5GHz Wi-Fi network.
Scene 6: the first network may be a first cellular network, and the second network may be a second cellular network, where the network generation of the second cellular network is higher than that of the first cellular network.
Illustratively, the first network may be a 4G cellular network and the second network may be a 5G cellular network.
Optionally, in the embodiment of the present application, Link Turbo technology may be adopted to implement cooperative acceleration of Wi-Fi and cellular networks. When the network communication quality does not meet the service communication requirement, Link Turbo supports cooperative acceleration across four networks: 2.4 GHz WLAN, 5 GHz WLAN, 5G mobile data, and 4G mobile data.
For example, when application service data is transmitted between the terminal device and the server through a Wi-Fi network or a cellular network, if the network communication quality does not meet the service communication requirement, then, as shown in (d) of fig. 11, network acceleration may be performed between the terminal device and the server simultaneously through the 2.4 GHz Wi-Fi network, the 5 GHz Wi-Fi network, the primary-SIM 5G network, and the secondary-SIM 4G network.
In addition, when performing network acceleration, the terminal device may also display prompt information.
For example, as shown in fig. 12, during live streaming, the mobile phone displays the live interface 11 and automatically starts the real-time QoE evaluation function. When the real-time QoE evaluation result indicates that the playing requirement of the streaming media video is not met, the icon 12 in the notification bar is updated to remind the user that network acceleration is in progress; the icon 12 indicates that the phone is in the 5G network + 5 GHz Wi-Fi state. In addition, the phone may display prompt information 13 ("WLAN and mobile data are being used simultaneously") to remind the user that network acceleration is being performed while the streaming media video plays. Network acceleration improves the network stability of the streaming media video.
The scheme provided by the embodiment of the application is mainly described from the perspective of the terminal equipment. It will be appreciated that the terminal device, in order to implement the above-described functions, may comprise a corresponding hardware structure or software module, or a combination of both, for performing each function. Those of skill in the art will readily appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as hardware or combinations of hardware and computer software. Whether a function is implemented as hardware or computer software driven hardware depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The embodiment of the application can divide the functional modules of the terminal equipment according to the method example, for example, each functional module can be divided corresponding to each function, and two or more functions can be integrated in one processing module. The integrated modules may be implemented in hardware or in software functional modules. It should be noted that, in the embodiment of the present application, the division of the modules is schematic, which is merely a logic function division, and other division manners may be implemented in actual implementation. The following description will take an example of dividing each function module into corresponding functions.
Fig. 13 is a schematic structural diagram of a training device of a QoE evaluation model according to an embodiment of the present application. The training device is used for training a QoE evaluation model for real-time evaluation. As shown in fig. 13, the training apparatus 130 may include an acquisition module 131, a feature extraction module 132, and a training module 133.
The obtaining module 131 may be configured to obtain offline data, where the offline data is a terminal-side parameter obtained according to a log file of a streaming video that has finished playing.
The feature extraction module 132 may be configured to extract a target offline feature from the offline data, where the target offline feature is used to evaluate QoE at a time t, and the time t is a time when the streaming video is played.
The training module 133 may be configured to train a machine learning classifier of the QoE evaluation model according to the target offline feature, to obtain a mapping relationship between the target offline feature and the QoE preliminary evaluation result at time t.
The training module 133 may be further configured to train the score memory unit of the QoE evaluation model according to the QoE preliminary evaluation results, to obtain a mapping relationship between a plurality of QoE preliminary evaluation results and the QoE final evaluation result at time t. Wherein the plurality of QoE preliminary evaluation results include: a QoE preliminary evaluation result at time t, and a QoE preliminary evaluation result for at least one time before time t.
It should be understood that, by training and constructing the QoE evaluation model in advance, the training device facilitates real-time QoE evaluation of the streaming media video being played.
The training device in the embodiment of the present application may correspond to a method for executing the description related to the offline training process in the embodiment of the present application, and for brevity, will not be described herein again.
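As an illustrative sketch only (the patent leaves the classifier type and the score-memory optimization unspecified; the squared-error loss, learning rate, and toy data below are assumptions), the second training stage — fitting the score memory unit's weights a_j from sequences of preliminary probabilities — could look like:

```python
def train_score_memory(prelim_seqs, labels, T2=2, lr=0.1, epochs=500):
    """Fit weights a_j so that sum_j a_j * p(t_{i-j}) approximates the 0/1
    stall label, by minimising a squared-error loss with gradient descent.

    prelim_seqs: list of newest-first windows [p(t_i), ..., p(t_{i-T2+1})]
    labels:      0 = smooth, 1 = stalled at t_i
    """
    w = [1.0 / T2] * T2  # start from a uniform average
    for _ in range(epochs):
        for window, y in zip(prelim_seqs, labels):
            pred = sum(a * p for a, p in zip(w, window))
            err = pred - y
            for j in range(T2):
                w[j] -= lr * err * window[j]  # dE/da_j = err * p(t_{i-j})
    return w


# Toy data: stalls correlate most with the newest preliminary probability
data = [([0.9, 0.2], 1), ([0.8, 0.3], 1), ([0.1, 0.7], 0), ([0.2, 0.6], 0)]
w = train_score_memory([d[0] for d in data], [d[1] for d in data])
print([round(a, 2) for a in w])  # newest sample gets the larger weight
```

On this toy data the learned weight on the newest preliminary probability dominates, which matches the intuition that the most recent classifier output carries the most information about the current stall state.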
Fig. 14 is a schematic structural diagram of a QoE evaluation device according to an embodiment of the present application. As shown in fig. 14, the apparatus 140 may include an acquisition module 141, a feature extraction module 142, and an evaluation module 143.
The acquiring module 141 is configured to acquire online data, where the online data is a terminal-side parameter from an initial playing time to a time t, which is acquired in real time during a process of playing a target streaming media video;
The feature extraction module 142 may be configured to extract a target online feature from online data, where the target online feature is used to evaluate QoE at a current time t in real time, where the time t is a time when the target streaming video is played;
the evaluation module 143 may be configured to input the target online feature into a machine learning classifier of a QoE evaluation model, to obtain a QoE preliminary evaluation result at the current time t;
the evaluation module 143 may be further configured to input a plurality of QoE preliminary evaluation results into a score memory unit of the QoE evaluation model, to obtain a QoE final evaluation result at the current time t; wherein, the plurality of QoE preliminary evaluation results include: a QoE preliminary evaluation result at the current time t, and a QoE preliminary evaluation result at least one time before the current time t.
It should be appreciated that when a user views a streaming video online, qoE evaluation results may be obtained in real time by inputting online data into a pre-built QoE evaluation model.
The QoE evaluation device of the embodiments of the present application may correspond to a method for executing the description related to the online evaluation flow in the embodiments of the present application, which is not described herein for brevity.
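The online flow through the acquisition, feature-extraction, and evaluation modules of fig. 14 can be sketched as follows; the classifier below is a stand-in stub (the mapping from a retransmission-rate feature to a probability is an assumption, since the patent does not fix the classifier):

```python
from collections import deque


class QoEEvaluator:
    """Illustrative online QoE pipeline: features -> classifier -> score memory."""

    def __init__(self, weights):
        self.weights = list(weights)              # trained a_j, newest first
        self.history = deque(maxlen=len(weights)) # recent preliminary results

    def classifier(self, features):
        # Stand-in for the trained machine learning classifier: an assumed
        # mapping from a retransmission-rate feature to a stall probability.
        return min(1.0, max(0.0, features["retrans_rate"] * 5))

    def evaluate(self, features):
        prelim = self.classifier(features)  # preliminary result at time t
        self.history.appendleft(prelim)
        # Score memory unit: weighted sum over the available history
        return sum(a * p for a, p in zip(self.weights, self.history))


ev = QoEEvaluator(weights=[0.7, 0.3])
print(ev.evaluate({"retrans_rate": 0.02}))  # only one sample in history yet
print(ev.evaluate({"retrans_rate": 0.15}))
```

Calling evaluate once per time step mirrors the patent's per-moment flow: each call produces a preliminary result, appends it to the history window, and returns the final evaluation result for the current moment.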
Fig. 15 is a schematic structural diagram of a terminal device according to an embodiment of the present application. As shown in fig. 15, the terminal device may include a processor 401, where the processor 401 is coupled to a memory 403, and the processor 401 is configured to execute a computer program or instructions stored in the memory, so that the terminal device implements the method in the foregoing embodiments.
The terminal device may also include a communication bus 402, a communication interface 404, an output device 405, and an input device 406.
The number of processors 401 may be one or more. One processor 401 may include at least one processing unit. For example, the processor may include at least one central processing unit (central processing unit, CPU) as shown in fig. 15. As another example, the processors may also include an image signal processor (image signal processor, ISP), a digital signal processor (digital signal processor, DSP), a video codec, a neural network processor (neural-network processing unit, NPU), a graphics processor (graphics processing unit, GPU), an application processor (application processor, AP), a modem processor, and/or a baseband processor, among others. In some embodiments, the different processing units may be separate devices or may be integrated in one or more processors.
Communication bus 402 may include a path for transferring information between processor 401, memory 403, and communication interface 404.
The communication interface 404 uses any transceiver-like means for communicating with other devices or communication networks, such as ethernet, radio access network (radio access network, RAN) or wireless local area network (wireless local area networks, WLAN), etc. In the embodiment of the present application, the communication interface 404 is mainly used for communicating with a server, for example, transmitting data packets of streaming video, etc.
The memory 403 may be, but is not limited to, a read-only memory (ROM) or other type of static storage device that can store static information and instructions, a random access memory (random access memory, RAM) or other type of dynamic storage device that can store information and instructions, or an electrically erasable programmable read-only memory (electrically erasable programmable read-only memory, EEPROM), a compact disc (compact disk read only memory, CD-ROM) or other optical disk storage, optical disk storage (including compact disc, laser disc, optical disc, digital versatile disc, blu-ray disc, etc.), magnetic disk storage media or other magnetic storage devices, or any other medium that can be used to carry or store the desired program code in the form of instructions or data structures and that can be accessed by a computer. The memory may be stand alone and coupled to the processor via a bus. The memory may also be integrated with the processor.
The memory 403 is used to store application code, such as code of a video application, and its execution is controlled by the processor 401. The processor 401 executes the application code stored in the memory 403, thereby implementing the QoE evaluation model training method and the QoE evaluation method in the above embodiments. The memory 403 may also be used to store log files, the machine learning model, the score memory unit, and the like.
The output device 405 communicates with the processor 401 and may display information in a variety of ways, such as displaying a playback interface for streaming video. The output device 405 may include a display panel, such as a liquid crystal display (liquid crystal display, LCD), an organic light-emitting diode (OLED), an active-matrix or active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a quantum dot light-emitting diode (quantum dot light emitting diodes, QLED), or the like.
The input device 406 is in communication with the processor 401 and may receive user input in a variety of ways, such as receiving user input streaming video playback operations. Wherein the input device 406 may be a mouse, keyboard, touch screen, or sensing device, among others.
It should be understood that the terminal device shown in fig. 15 may correspond to the training apparatus shown in fig. 13. The processor 401 in the terminal device shown in fig. 15 may correspond to the acquisition module 131, the feature extraction module 132, and the training module 133 in the training apparatus in fig. 13. The terminal device shown in fig. 15 may also correspond to the QoE evaluation device shown in fig. 14. The processor 401 in the terminal device shown in fig. 15 may correspond to the acquisition module 141, the feature extraction module 142, and the evaluation module 143 in the QoE evaluation device in fig. 14.
The embodiment of the application also provides a computer-readable storage medium storing computer instructions. When the computer instructions run on a terminal device, they cause the terminal device to perform the methods shown above. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center in a wired (e.g., coaxial cable, optical fiber, digital subscriber line (digital subscriber line, DSL)) or wireless (e.g., infrared, radio, microwave) manner. The computer-readable storage medium may be any usable medium that can be accessed by a computer, or a data storage device, such as a server or data center, integrating one or more usable media. The usable medium may be a magnetic medium (e.g., a floppy disk, a hard disk, or a magnetic tape), an optical medium, or a semiconductor medium (e.g., a solid state disk (solid state disk, SSD)), or the like.
Embodiments of the present application also provide a computer program product comprising computer program code for causing a computer to perform the method of the embodiments described above when the computer program code is run on a computer.
The embodiment of the application also provides a chip, which is coupled with the memory and is used for reading and executing the computer program or the instructions stored in the memory to execute the method in each embodiment. The chip may be a general-purpose processor or a special-purpose processor.
It should be noted that the chip may be implemented using the following circuits or devices: one or more field programmable gate arrays (field programmable gate array, FPGA), programmable logic devices (programmable logic device, PLD), controllers, state machines, gate logic, discrete hardware components, any other suitable circuit or combination of circuits capable of performing the various functions described throughout this application.
The terminal device, the training device, the QoE evaluation device, the computer-readable storage medium, the computer program product and the chip provided by the embodiments of the present application are all configured to execute the method provided above, so that the beneficial effects achieved by the method provided above can be referred to the beneficial effects corresponding to the method provided above, and are not repeated herein.
It should be understood that the above description is only intended to help those skilled in the art better understand the embodiments of the present application, and is not intended to limit the scope of the embodiments of the present application. It will be apparent to those skilled in the art that various equivalent modifications or variations can be made based on the foregoing examples; for example, certain steps in the various embodiments of the methods described above may be unnecessary, certain steps may be newly added, or any two or more of the above embodiments may be combined. Such modified, varied, or combined solutions also fall within the scope of the embodiments of the present application.
It should also be understood that the foregoing description of embodiments of the present application focuses on highlighting differences between the various embodiments and that the same or similar elements not mentioned may be referred to each other and are not repeated herein for brevity.
It should be further understood that the sequence numbers of the above processes do not mean the order of execution, and the order of execution of the processes should be determined by the functions and internal logic of the processes, and should not be construed as limiting the implementation process of the embodiments of the present application.
It should be further understood that, in the embodiments of the present application, the "preset" and "predefined" may be implemented by pre-storing corresponding codes, tables or other manners that may be used to indicate relevant information in a device (including, for example, a terminal device), and the present application is not limited to a specific implementation manner thereof.
It should also be understood that the manner, the case, the category, and the division of the embodiments in the embodiments of the present application are merely for convenience of description, should not be construed as a particular limitation, and the features in the various manners, the categories, the cases, and the embodiments may be combined without contradiction.
It is also to be understood that in the various embodiments of the application, where no special description or logic conflict exists, the terms and/or descriptions between the various embodiments are consistent and may reference each other, and features of the various embodiments may be combined to form new embodiments in accordance with their inherent logic relationships.
Finally, it should be noted that: the foregoing description is merely illustrative of specific embodiments of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions within the technical scope of the present application should be covered by the scope of the present application. Therefore, the protection scope of the application is subject to the protection scope of the claims.

Claims (20)

1. A method of training a quality of experience, qoE, assessment model, the QoE assessment model comprising a machine learning classifier and a score memory unit, the method comprising:
extracting target offline characteristics from offline data, wherein the offline data are terminal side parameters from an initial playing time to a time t, which are acquired according to a log file of a streaming media video which is finished to be played, and the target offline characteristics are used for evaluating QoE at the time t;
Training the machine learning classifier according to the target offline features to obtain a mapping relation between the target offline features and QoE preliminary evaluation results at the moment t;
training the score memory unit according to the QoE preliminary evaluation results to obtain a mapping relation between the QoE preliminary evaluation results and the QoE final evaluation result at the moment t; wherein the plurality of QoE preliminary evaluation results includes: a QoE preliminary evaluation result at the time t, and a QoE preliminary evaluation result at least one time before the time t.
2. The method of claim 1, wherein extracting the target offline feature from the offline data comprises:
normalizing the offline data;
extracting all offline features of the normalized offline data;
and removing redundant offline features from all the offline features to obtain the target offline features.
3. The method of claim 2, wherein the all offline features include at least one of:
global features, wherein the global features are extracted from all data of the offline data;
window characteristics, wherein the window characteristics are extracted from partial data of the offline data, and the partial data comprise data of the time t and data of a preset duration before the time t;
Other features that are not related to network data.
4. The method of claim 2, wherein said removing redundant offline features from said all offline features results in said target offline feature, comprising:
performing iterative training on all offline features:
in each round of iterative training, deleting the offline features with the lowest importance level according to the importance level of each offline feature in all offline features until the preset iteration times are reached or until the number of the rest offline features is smaller than or equal to the preset number;
and taking the rest offline characteristics as the target offline characteristics.
5. The method according to any one of claims 1 to 4, wherein
the plurality of QoE preliminary assessment results includes: a plurality of preliminary prediction probabilities, each preliminary prediction probability of the plurality of preliminary prediction probabilities being used to represent a probability that the machine-learned classifier predicts that the streaming video is stuck at a time instant, the time instant being the time instant t or a time instant of the at least one time instant; the optimization objective of the machine learning classifier is as follows: minimizing a first loss function, wherein the first loss function is used for representing the loss corresponding to the preliminary prediction probability of the moment and the difference degree of the initial QoE label of the moment; the initial QoE tag at the one time is used for indicating that the streaming media video is stuck or smooth at the one time;
The final QoE evaluation result at the time t includes: the final prediction probability of the moment t is used for indicating the probability that the score memory unit predicts that the streaming media video is stuck at the moment t; the optimization targets of the score memory unit are as follows: and minimizing a second loss function, wherein the second loss function is used for representing the loss corresponding to the final prediction probability of the moment t and the difference degree of the initial QoE label of the moment t.
6. The method of claim 5, wherein the mapping relationship between the plurality of QoE preliminary evaluation results and the QoE final evaluation result at the time t is represented by the following relationship:

$$p_{output}(t_i) = \sum_{j=0}^{T-1} a_j\,\hat{p}(t_{i-j})$$

wherein t_i represents the time t, a_j represents the weight factor of each preliminary prediction probability, T represents the time window length, and p_output(t_i) represents the final prediction probability at the time t;

when j = 0, \hat{p}(t_i) represents the preliminary prediction probability at the time t;

when j ≠ 0, \hat{p}(t_{i-j}) represents the preliminary prediction probability at the time t_{i-j} before the time t.
7. The method according to claim 5 or 6, wherein the QoE final assessment result at time t further comprises: a final predictive label at said time t;
If the final prediction probability of the time t is greater than or equal to 0.5, the final prediction label of the time t is used for indicating that the streaming media video is stuck at the time t;
and if the final prediction probability of the time t is smaller than 0.5, the final prediction label of the time t is used for indicating that the streaming media video is smooth at the time t.
8. The method according to any of claims 5 to 7, wherein the initial QoE label is manually annotated or automatically generated based on a log file of the streaming video.
9. The method according to any one of claims 1 to 8, wherein the terminal side parameters comprise at least one of:
the transmission layer parameter is used for reflecting the network transmission condition when the streaming media video is played;
a quality of service parameter, the quality of service parameter being used to evaluate the ability of a network to provide services for the streaming video;
and the terminal parameter is a self parameter of the terminal equipment when the streaming media video is played.
10. A QoE evaluation method, wherein the method comprises:
extracting target online characteristics from online data, wherein the online data are terminal side parameters from an initial playing time to a current time, which are acquired in real time in the process of playing target streaming media video, and the target online characteristics are used for evaluating QoE at the current time in real time;
Inputting the target online characteristics into a machine learning classifier of a QoE evaluation model to obtain a QoE preliminary evaluation result at the current moment;
inputting a plurality of QoE preliminary evaluation results into a score memory unit of the QoE evaluation model to obtain a QoE final evaluation result at the current moment; wherein the plurality of QoE preliminary evaluation results includes: the QoE preliminary evaluation result of the current moment and the QoE preliminary evaluation result of at least one moment before the current moment.
11. The method of claim 10, wherein extracting the target online feature from the online data comprises:
normalizing the online data;
and extracting target online characteristics of the normalized online data.
12. The method of claim 10 or 11, wherein the target online feature comprises at least one of:
global features, which are features extracted from all data of the online data;
window characteristics, wherein the window characteristics are extracted from partial data of the online data, and the partial data comprise the data of the current moment and the data of a preset time length before the current moment;
Other features that are not related to network data.
13. The method of claim 12, wherein the target online feature comprises a global feature;
the global feature is extracted according to the following:
the online data collected at the current moment;
the global feature used to evaluate QoE at the moment immediately preceding the current moment; and
an intermediate variable corresponding to the current moment and the moment immediately preceding the current moment.
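One concrete instance of the recursion described in claim 13 is a running mean: the new global feature is computed from its value at the immediately preceding moment plus the newest sample, with the sample count carried along as the intermediate variable, so no re-scan of all past data is needed. The class and names are a hypothetical sketch, not taken from the patent.

```python
class RunningMean:
    """Global feature updated recursively from its previous value."""

    def __init__(self):
        self.mean = 0.0
        self.count = 0   # intermediate variable carried between moments

    def update(self, new_sample):
        self.count += 1
        # new mean from previous mean + current sample only
        self.mean += (new_sample - self.mean) / self.count
        return self.mean

rm = RunningMean()
for x in [2.0, 4.0, 6.0]:
    latest = rm.update(x)
```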
14. The method of any of claims 10 to 13, wherein the plurality of preliminary QoE evaluation results comprises a plurality of preliminary prediction probabilities, each of the plurality of preliminary prediction probabilities representing a probability, predicted by the machine learning classifier, that the target streaming media video stalls at one moment, the one moment being the current moment or one of the at least one moment;
the final QoE evaluation result at the current moment comprises a final prediction probability of the current moment, the final prediction probability representing the probability, predicted by the score memory unit, that the target streaming media video stalls at the current moment; and
inputting the plurality of preliminary QoE evaluation results into the score memory unit of the QoE evaluation model to obtain the final QoE evaluation result at the current moment comprises:
performing, by the score memory unit, a weighted summation of the plurality of preliminary prediction probabilities according to a weight factor of each preliminary prediction probability to obtain the final prediction probability of the current moment.
15. The method of claim 14, wherein the final QoE evaluation result at the current moment further comprises a final prediction label of the current moment;
inputting the plurality of preliminary QoE evaluation results into the score memory unit of the QoE evaluation model to obtain the final QoE evaluation result at the current moment further comprises:
determining the final prediction label of the current moment according to the final prediction probability of the current moment; and
if the final prediction probability of the current moment is greater than or equal to 0.5, the final prediction label indicates that the target streaming media video stalls at the current moment; if the final prediction probability of the current moment is less than 0.5, the final prediction label indicates that the target streaming media video plays smoothly at the current moment.
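A minimal sketch of the score memory unit behavior described in claims 14 and 15: a weighted summation of the preliminary prediction probabilities followed by the 0.5 threshold. The specific weight values are assumptions; the claims do not fix them.

```python
def score_memory_unit(prelim_probs, weights):
    """Weighted summation of preliminary stall probabilities, then label at 0.5."""
    assert len(prelim_probs) == len(weights)
    p_final = sum(p * w for p, w in zip(prelim_probs, weights))
    label = "stall" if p_final >= 0.5 else "smooth"
    return p_final, label

# earlier moments first, current moment last; weights (assumed) favor the present
probs = [0.2, 0.4, 0.9]
weights = [0.2, 0.3, 0.5]
p, label = score_memory_unit(probs, weights)
```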
16. The method of any of claims 10 to 15, wherein the terminal-side parameters comprise at least one of:
a transport layer parameter, the transport layer parameter being used to reflect the network transmission condition while the target streaming media video is played;
a quality of service parameter, the quality of service parameter being used to evaluate the ability of the network to provide services for the target streaming media video; or
a terminal parameter, the terminal parameter being a parameter of the terminal device itself while the target streaming media video is played.
17. The method of any of claims 10 to 16, wherein data of the target streaming media video is obtained at the current moment by interacting with a server through a first network;
after the final QoE evaluation result at the current moment is obtained, the method further comprises:
in a case where the final QoE evaluation result at the current moment indicates that the communication quality of the first network does not meet the communication requirement, exchanging the data of the target streaming media video with the server through a second network;
wherein the communication quality of the second network is better than the communication quality of the first network.
18. The method of claim 17, wherein the final QoE evaluation result at the current moment comprises a final prediction probability that the target streaming media video stalls at the current moment; and
the final QoE evaluation result at the current moment indicates that the communication quality of the first network does not meet the communication requirement if the final prediction probability that the target streaming media video stalls at the current moment is greater than or equal to a preset probability.
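The network-switching decision of claims 17 and 18 reduces to a threshold test on the final stall probability. The sketch below assumes a preset probability of 0.5 and placeholder network handles; both are illustrative choices the claims leave open.

```python
PRESET_PROBABILITY = 0.5   # assumed value; the claims leave it unspecified

def choose_network(p_final_stall, first_network, second_network):
    """Pick the network for the next chunk of video data based on the final probability."""
    if p_final_stall >= PRESET_PROBABILITY:
        # first network fails the communication requirement; use the better one
        return second_network
    return first_network

net = choose_network(0.7, "wifi", "cellular")   # placeholder network handles
```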
19. A terminal device, comprising a processor coupled to a memory, the processor being configured to execute a computer program or instructions stored in the memory to cause the terminal device to implement the training method of the QoE evaluation model according to any of claims 1 to 9, or to implement the QoE evaluation method according to any of claims 10 to 18.
20. A computer readable storage medium, characterized in that the computer readable storage medium stores a computer program which, when run on a terminal device, causes the terminal device to perform the training method of the QoE evaluation model according to any of claims 1 to 9, or to perform the QoE evaluation method according to any of claims 10 to 18.
CN202210283827.2A 2022-03-22 2022-03-22 QoE evaluation model training method, QoE evaluation method and QoE evaluation equipment Pending CN116846803A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210283827.2A CN116846803A (en) 2022-03-22 2022-03-22 QoE evaluation model training method, QoE evaluation method and QoE evaluation equipment

Publications (1)

Publication Number Publication Date
CN116846803A (en) 2023-10-03

Family

ID=88160364

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210283827.2A Pending CN116846803A (en) 2022-03-22 2022-03-22 QoE evaluation model training method, qoE evaluation method and QoE evaluation equipment

Country Status (1)

Country Link
CN (1) CN116846803A (en)

Similar Documents

Publication Publication Date Title
CN109344884B (en) Media information classification method, method and device for training picture classification model
US11645571B2 (en) Scheduling in a dataset management system
US11531867B2 (en) User behavior prediction method and apparatus, and behavior prediction model training method and apparatus
US11538064B2 (en) System and method of providing a platform for managing data content campaign on social networks
CN109544396B (en) Account recommendation method and device, server, terminal and storage medium
CN111026971B (en) Content pushing method and device and computer storage medium
US10812358B2 (en) Performance-based content delivery
US20220277020A1 (en) Selectively identifying and recommending digital content items for synchronization
CN111460294B (en) Message pushing method, device, computer equipment and storage medium
CN109993627B (en) Recommendation method, recommendation model training device and storage medium
WO2017152734A1 (en) Data processing method and relevant devices and systems
CN108959319B (en) Information pushing method and device
CN110909182A (en) Multimedia resource searching method and device, computer equipment and storage medium
JP2007317068A (en) Recommending device and recommending system
CN110417867B (en) Web service QoS monitoring method under mobile edge environment
US11470370B2 (en) Crowdsourcing platform for on-demand media content creation and sharing
Qiao et al. Trace-driven optimization on bitrate adaptation for mobile video streaming
US20230004776A1 (en) Moderator for identifying deficient nodes in federated learning
US20220167034A1 (en) Device topological signatures for identifying and classifying mobile device users based on mobile browsing patterns
US20220167051A1 (en) Automatic classification of households based on content consumption
Lee et al. Machine learning and deep learning for throughput prediction
CN113204699A (en) Information recommendation method and device, electronic equipment and storage medium
CN111612783A (en) Data quality evaluation method and system
CN116846803A (en) QoE evaluation model training method, QoE evaluation method and QoE evaluation equipment
CN113297417B (en) Video pushing method, device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination