CN110266611B - Method, device and system for processing buffered data

Method, device and system for processing buffered data

Info

Publication number
CN110266611B
CN110266611B
Authority
CN
China
Prior art keywords
probability
delay
client
data
stall
Prior art date
Legal status
Active
Application number
CN201910610756.0A
Other languages
Chinese (zh)
Other versions
CN110266611A (en)
Inventor
薛德义
陶文质
Current Assignee
Tencent Technology Shanghai Co Ltd
Original Assignee
Tencent Technology Shanghai Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shanghai Co Ltd
Priority to CN201910610756.0A
Publication of CN110266611A
Application granted
Publication of CN110266611B
Legal status: Active

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00: Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/14: Network analysis or design
    • H04L41/145: Network analysis or design involving simulating, designing, planning or modelling of a network
    • H04L41/147: Network analysis or design for predicting network behaviour
    • H04L47/00: Traffic control in data switching networks
    • H04L47/10: Flow control; Congestion control
    • H04L47/28: Flow control; Congestion control in relation to timing considerations
    • H04L47/283: Flow control; Congestion control in relation to timing considerations in response to processing delays, e.g. caused by jitter or round trip time [RTT]
    • H04L49/00: Packet switching elements
    • H04L49/90: Buffering arrangements
    • H04L49/9005: Buffering arrangements using dynamic buffer space allocation

Abstract

The present application relates to a method, apparatus, system, service operation state synchronization system, computer readable storage medium and computer device for processing buffered data. The method comprises: acquiring a macroscopic probability distribution model; determining a stall prior probability according to the macroscopic probability distribution model; correcting the stall prior probability according to the individual network delay recorded by the client to obtain a stall posterior probability; and controlling the speed at which the client consumes the buffered data according to the stall posterior probability. According to the scheme provided by the application, a low operation delay is ensured while the smoothness of service operation state synchronization is improved, and by accurately predicting stalls a balance is effectively achieved between minimizing operation delay and improving smoothness.

Description

Method, device and system for processing buffered data
Technical Field
The present invention relates to the field of data processing technologies, and in particular, to a method, an apparatus, a system, a service operation state synchronization system, a computer readable storage medium, and a computer device for processing buffered data.
Background
With the development of data processing technology, frame synchronization techniques have emerged; they are often used in scenarios that require service operation state synchronization. For example, a user performs a service operation on one client, which displays the corresponding service operation state, and the client at the other end also needs to display that state synchronously. With frame synchronization, the service operation data can be sent to the client at the other end, which displays the corresponding service operation state according to the service operation data, thereby achieving state synchronization.
When data frames are transmitted, severe network jitter may cause packet loss or large network delay, so the client at the other end cannot synchronize its state in time and a stall occurs, which affects the smoothness of state synchronization. Therefore, network jitter is usually handled with a buffering technique during state synchronization. Buffering means that received data is not consumed immediately but is held in a buffer and consumed after a delay, which effectively absorbs network jitter. However, if the buffer length is large, that is, the buffer holds too much data, a large operation delay is introduced, which affects the timeliness of state synchronization.
To balance a small operation delay against guaranteed smoothness, the usual current approach is to predict, from the network delays that have already occurred at the client, the larger network delays that may occur, and thereby the likelihood of network jitter, so that the data consumption speed can be adjusted accordingly to avoid stalls.
However, the network delay actually occurring at the client may be affected by many factors, and the network delays that have occurred may have no correlation with those that occur in the future, so the prediction accuracy is low and it is difficult to balance operation delay and smoothness effectively.
Therefore, current methods for processing buffered data suffer from low stall prediction accuracy and cannot effectively balance operation delay and smoothness.
Disclosure of Invention
Based on this, in view of the problem that stall prediction accuracy is low and operation delay and smoothness cannot be effectively balanced, it is necessary to provide a method, apparatus, system, service operation state synchronization system, computer readable storage medium and computer device for processing buffered data.
A method of processing buffered data, comprising:
acquiring a macroscopic probability distribution model; the macroscopic probability distribution model is obtained by learning a plurality of network delay samples; the macroscopic probability distribution model comprises network delays and corresponding occurrence probabilities;
determining a stall prior probability according to the macroscopic probability distribution model; the stall prior probability is the probability that the client has no consumable buffered data and therefore stalls;
correcting the stall prior probability according to the individual network delay recorded by the client to obtain a stall posterior probability; the stall posterior probability is the probability of a stall when the client experiences the individual network delay;
and controlling the speed at which the client consumes the buffered data according to the stall posterior probability.
A method of processing buffered data, comprising:
acquiring a plurality of network delay samples;
learning the plurality of network delay samples to obtain a macroscopic probability distribution model; the macroscopic probability distribution model comprises network delays and corresponding occurrence probabilities;
issuing the macroscopic probability distribution model to a client, for the client to determine a stall prior probability according to the macroscopic probability distribution model; correct the stall prior probability according to the individual network delay recorded by the client to obtain a stall posterior probability; and control the speed at which the client consumes the buffered data according to the stall posterior probability; the stall prior probability is the probability that the client has no consumable buffered data and therefore stalls; the stall posterior probability is the probability of a stall when the client experiences the individual network delay.
A processing apparatus for buffering data, comprising:
the model acquisition module is used for acquiring a macroscopic probability distribution model; the macroscopic probability distribution model is obtained by learning a plurality of network delay samples; the macroscopic probability distribution model comprises network delays and corresponding occurrence probabilities;
the prior probability module is used for determining a stall prior probability according to the macroscopic probability distribution model; the stall prior probability is the probability that the client has no consumable buffered data and therefore stalls;
the posterior probability module is used for correcting the stall prior probability according to the individual network delay recorded by the client to obtain a stall posterior probability; the stall posterior probability is the probability of a stall when the client experiences the individual network delay;
and the consumption control module is used for controlling the speed at which the client consumes the buffered data according to the stall posterior probability.
A processing apparatus for buffering data, comprising:
the sample acquisition module is used for acquiring a plurality of network delay samples;
the model construction module is used for learning the plurality of network delay samples to obtain a macroscopic probability distribution model; the macroscopic probability distribution model comprises network delays and corresponding occurrence probabilities;
the model issuing module is used for issuing the macroscopic probability distribution model to a client, for the client to determine a stall prior probability according to the macroscopic probability distribution model; correct the stall prior probability according to the individual network delay recorded by the client to obtain a stall posterior probability; and control the speed at which the client consumes the buffered data according to the stall posterior probability; the stall prior probability is the probability that the client has no consumable buffered data and therefore stalls; the stall posterior probability is the probability of a stall when the client experiences the individual network delay.
According to the above method, apparatus, system, service operation state synchronization system, computer readable storage medium and computer device for processing buffered data, a macroscopic probability distribution model is obtained by learning a plurality of network delay samples. Using the macroscopic probability distribution model as prior knowledge, a stall prior probability of the client is determined; the stall prior probability is then corrected with the individual network delay that actually occurs locally at the client to obtain a stall posterior probability, and the consumption speed is controlled according to the stall posterior probability. This solves the problem in the prior art that, when predicting the stall probability of a client, the prediction is inaccurate for lack of prior knowledge; it improves the prediction accuracy of the client's stall probability and, by controlling the consumption speed based on an accurately predicted stall probability, keeps enough buffered data in the buffer to resist network jitter. By controlling the consumption speed, the amount of buffered data is adaptively adjusted, minimizing the buffered data while avoiding stalls caused by insufficient buffered data. When this method of processing buffered data is applied to a service operation state synchronization scenario, the smoothness of service operation state synchronization is improved while a low operation delay is ensured, and by accurately predicting stalls a balance is effectively achieved between minimizing operation delay and improving smoothness.
Drawings
FIG. 1A is an application environment diagram of a method of processing buffered data in one embodiment;
FIG. 1B is a schematic diagram of a scenario of a multiplayer online real-time interactive game service in one embodiment;
FIG. 1C is a schematic diagram of a scenario in which a network delay sample is obtained, in one embodiment;
FIG. 2 is a flow chart of a method for processing buffered data in one embodiment;
FIG. 3 is a timing diagram of client-server communication in one embodiment;
FIG. 4 is a schematic diagram of a stall delay likelihood probability update in one embodiment;
FIG. 5 is a schematic diagram of a data processing framework in one embodiment;
FIG. 6 is a flow diagram of the processing steps of buffering data in one embodiment;
FIG. 7 is a schematic diagram of a threshold adaptive update manner in one embodiment;
FIG. 8 is a flow chart of a method for processing buffered data in another embodiment;
FIG. 9 is a schematic diagram of a server process flow in one embodiment;
FIG. 10 is a flow diagram of a buffer management module maintaining data in one embodiment;
FIG. 11 is a schematic diagram of a consumption decision flow in one embodiment;
FIG. 12 is a timing diagram of processing buffered data in one embodiment;
FIG. 13 is a block diagram of a processing system that buffers data in one embodiment;
FIG. 14 is a block diagram of a service operation state synchronization system in one embodiment;
FIG. 15 is a block diagram of a processing device that buffers data in one embodiment;
FIG. 16 is a block diagram showing a structure of a processing apparatus for buffering data in another embodiment;
FIG. 17 is a block diagram of a computer device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
FIG. 1A is an application environment diagram of a method of processing buffered data in one embodiment. Referring to FIG. 1A, the method is applied to a data processing system. The data processing system includes a first client 110, a second client 120, and a server 130.
The first client 110 and the second client 120 are connected to the server 130 through a network. The first client 110 and the second client 120 may be specifically desktop terminals or mobile terminals, and the mobile terminals may be specifically at least one of mobile phones, tablet computers, notebook computers, and the like. The server 130 may be implemented as a stand-alone server or as a server cluster composed of a plurality of servers.
It should be noted that the data processing system may be applied in a scenario of service operation state synchronization based on a C/S (Client/Server) architecture. In the scenario of service operation state synchronization, for a service with a higher operation frequency, synchronization of service operation states is generally implemented based on a frame synchronization technology.
Specifically, the user may perform a service operation at the second client 120, and the second client 120 generates service operation data of the service operation, and displays a corresponding service operation state according to the service operation data.
The user performs a plurality of consecutive service operations in sequence, and the second client 120 generates service operation data of consecutive multiframes, and the second client 120 sends the service operation data of consecutive multiframes to the server 130.
The server 130 may assemble the service operation data of consecutive multiframes in sequence and broadcast it to the first client 110 at a fixed frequency. After receiving the service operation data, the first client 110 displays the corresponding service operation state according to the service operation data.
Thus, in the scenario of synchronization of the service operation states of the C/S architecture, through the frame synchronization technique described above, the first client 110 displays the corresponding service operation states simultaneously during the service operation performed by the user at the second client 120.
For example, referring to the schematic view of the scenario of the multiplayer online real-time interactive game service of fig. 1B, the first client 110 and the second client 120 perform a game service through the server 130, the user performs a movement operation on the game character at the second client 120, and the second client 120 displays a game state in which the game character moves according to the movement operation data. The user sequentially moves the game character forward and leftward at the second client 120, and the second client 120 generates forward movement operation data and leftward movement operation data. The second client 120 transmits the forward movement operation data and the leftward movement operation data to the server 130. The server 130 sequentially forwards the forward movement operation data and the leftward movement operation data to the first client 110, and the first client 110 sequentially receives the forward movement operation data and the leftward movement operation data, and the first client 110 displays the game state in which the game character moves forward and then displays the game state in which the game character moves leftward.
In the above service operation state synchronization scenario, if the first client 110 uses the service operation data to display the service operation state immediately after receiving it, a large delay between the service operation at the second client 120 and the service operation state displayed by the first client 110 is avoided, reducing operation delay and ensuring real-time performance.
However, in an actual service operation state synchronization scenario, because network jitter exists, reducing the operation delay in this way can harm the smoothness of service operation state synchronization.
Network jitter is essentially a sudden increase in network delay, or packet loss in the network. If the network delay is plotted over time, short-lived delays appear as glitches in the curve, and a sudden increase in delay at some point in time appears as a spike.
Network delay and packet loss are phenomena which objectively exist in network transmission and are difficult to avoid, and factors in aspects of network equipment, network use condition, data quantity and the like can influence the network delay or the packet loss. When network jitter occurs, the service operation data of the server 130 cannot be timely transmitted to the first client 110, the first client 110 cannot display the corresponding service operation state by adopting the service operation data, and the update of the service operation state is stopped, so that a jam occurs, and the smoothness of the synchronization of the service operation state is affected.
For example, after displaying the game state in which the game character has moved to the left, the first client 110 may leave the character standing in place because there is no data in the buffer with which to update its game operation state; that is, a stall occurs.
Currently, buffering techniques are commonly employed to promote smoothness. Specifically, a buffer may be provided on the first client 110, and the received service operation data may be stored in the buffer. The first client 110 reads one frame of service operation data in the buffer in sequence at a certain speed, instead of reading all the service operation data at a time, so that a certain amount of service operation data is reserved in the buffer.
Therefore, even if network jitter occurs, since the first client 110 still has a certain amount of service operation data in the buffer, the first client 110 can display the service operation state by using the service operation data reserved in the buffer, thereby avoiding jamming and ensuring the smoothness of synchronization of the service operation state.
However, if the first client 110 reads the service operation data of the buffer at a slower speed, although the buffer can retain a certain amount of service operation data, the service operation data is continuously accumulated in the buffer, the length of the buffer is increased, and the operation delay is larger, that is, the service operation performed by the user on the second client 120 takes a longer time to display the corresponding service operation state on the first client 110.
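To make the buffering scheme above concrete, the following minimal Python sketch (illustrative only; the class and field names are hypothetical and not part of the patent) shows a client-side buffer that the network layer fills and the service layer drains one frame per consumption interval:

from collections import deque

class FrameBuffer:
    """Client-side buffer: the network layer pushes frames as they arrive,
    and the service layer pops one frame per consumption interval."""

    def __init__(self):
        self.frames = deque()

    def push(self, frame):
        """Called by the network layer when downlink data arrives."""
        self.frames.append(frame)

    def consume(self):
        """Called by the service layer once per consumption interval.
        Returns None when the buffer is empty, i.e. a stall."""
        return self.frames.popleft() if self.frames else None

buf = FrameBuffer()
buf.push({"seq": 1, "op": "move forward"})
buf.push({"seq": 2, "op": "move left"})
print(buf.consume()["op"])   # 'move forward'
print(buf.consume()["op"])   # 'move left'
print(buf.consume())         # None: a stall would occur here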
Therefore, balancing the minimization of the operation delay and the improvement of the smoothness is the key of the business operation state synchronization scene.
As shown in FIG. 2, in one embodiment, a method of processing buffered data is provided. The present embodiment is mainly exemplified by the method applied to the first client 110 in FIG. 1A. Referring to FIG. 2, the method for processing buffered data specifically includes the following steps:
s202, acquiring a macroscopic probability distribution model; the macroscopic probability distribution model is obtained by learning a plurality of network delay samples; the macroscopic probability distribution model includes individual network delays and corresponding occurrence probabilities.
Here the network delay may be the time interval between the client receiving two consecutive pieces of data sent by the server 130. For example, the first client 110 receives the first piece of data from the server 130 with a timestamp of 10 ms (milliseconds) and the second piece with a timestamp of 12 ms; the network delay is then 12 ms - 10 ms = 2 ms.
The macroscopic probability distribution model can be a mathematical model constructed by machine learning over a large number of actually occurring network delays. It may take the form of a probability function for a discrete random variable that expresses the probability distribution of each different network delay. For example, the probability function may be

P_B{X = k}, k ∈ {1, 2, …, 3000}

where k is a network delay in the interval [1, 3000] ms. In the macroscopic probability distribution model, each such delay has a corresponding occurrence probability; for example, the probability of a 1 ms network delay is 1.8%, the probability of a 10 ms network delay is 2.5%, and so on.
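As a toy illustration, the discretized model can be held as a simple mapping from delay to occurrence probability; only the 1 ms and 10 ms values below come from the example above, and a real model would be filled in from the learned distribution:

# Hypothetical discretized model P_B{X}: delay in ms -> occurrence probability.
P_B = {1: 0.018, 10: 0.025}

def occurrence_probability(delay_ms):
    """Look up the occurrence probability of a k ms network delay."""
    return P_B.get(delay_ms, 0.0)

print(occurrence_probability(1))    # 0.018
print(occurrence_probability(10))   # 0.025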
The network delay sample may be a learning sample for machine learning.
In a specific implementation, the first client 110 and the second client 120 receive the downlink data of the server 130, record the network delay actually occurring in the process of receiving the downlink data, and the first client 110 and the second client 120 may report the recorded network delay to the server 130, where the server 130 obtains a plurality of network delays as network delay samples.
Fig. 1C is a schematic diagram of a scenario in which a network delay sample is obtained in one embodiment. As shown in fig. 1C, in an actual application, the server 130 may further send downlink data to the plurality of third clients 140, where the plurality of third clients 140 may record the actual network delay occurring during the process of receiving the downlink data, and report the recorded network delay to the server 130, so that the server 130 may obtain a large number of network delay samples.
The server 130 may machine learn the plurality of network delay samples to construct a macroscopic probability distribution model. The server 130 may issue the macroscopic probability distribution model to the first client 110. The first client 110 may store the macroscopic probability distribution model, load the macroscopic probability distribution model when service operation state synchronization is required, predict a katon posterior probability based on the macroscopic probability distribution model, and control a consumption rate of the buffered data according to the katon posterior probability.
There are a number of ways in which the server 130 can build the macroscopic probability distribution model by learning the network delay samples. One of them is kernel density estimation (Kernel Density Estimation). Kernel density estimation is a non-parametric estimation method: it does not presuppose the distribution of the data samples but studies the distribution starting from the samples themselves. In this approach, the network delay samples form a delay sample sequence, from which a delay probability density function to be estimated is formed. An optimal window width for the delay probability density function is determined using the asymptotic mean squared error (Asymptotic Mean Squared Error) criterion, yielding a delay probability density function over a continuous random variable. This density function is then discretized, the occurrence probability of each network delay within a specific delay interval is extracted, and the resulting probability distribution function serves as the final macroscopic probability distribution model.
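The following sketch shows one way such a model could be built with off-the-shelf kernel density estimation. Note that scipy's gaussian_kde selects its bandwidth by Scott's rule rather than the asymptotic mean squared error criterion described above, so this approximates the approach and is not the patent's procedure; the synthetic samples are illustrative:

import numpy as np
from scipy.stats import gaussian_kde

def build_macroscopic_model(delay_samples_ms, max_delay_ms=3000):
    """Fit a kernel density estimate to the delay samples, then discretize
    it to whole milliseconds over [1, max_delay_ms] and normalize so the
    probabilities sum to 1."""
    kde = gaussian_kde(np.asarray(delay_samples_ms, dtype=float))
    grid = np.arange(1, max_delay_ms + 1)
    density = kde(grid)
    probs = density / density.sum()
    return dict(zip(grid.tolist(), probs.tolist()))

# Synthetic samples stand in for delays reported by many clients.
samples = np.random.lognormal(mean=2.5, sigma=0.6, size=10_000)
P_B = build_macroscopic_model(samples)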
Of course, in practical applications, those skilled in the art may also learn the network delay samples in other manners to construct a macroscopic probability distribution model, and the specific learning and model constructing manners are not limited in this embodiment. For example, it is also possible to learn the network delay samples, form a histogram, and construct a macroscopic probability distribution model based on the histogram.
It should be noted that, in practical application, the server 130 may be a server cluster, where the server cluster may be composed of a plurality of sub-servers, and a part of the sub-servers are responsible for processing service operation state synchronization, and a part of the sub-servers are responsible for collecting and learning network delay samples and constructing a macroscopic probability distribution model. The synchronization and collection of traffic operating states, learning of network delay samples, and building of macroscopic probability distribution models are not limited to processing by the same server.
S204, determining a stall prior probability according to the macroscopic probability distribution model; the stall prior probability is the probability that the client has no consumable buffered data and therefore stalls.
The buffered data may be the data to be consumed in the buffer of the first client 110. Consumption, for the first client 110, is the process of reading data in the buffer and deleting the read data from the buffer. For example, the first client 110 reads one frame of service operation data from the buffer to display a service operation state with it, and deletes that frame from the buffer; this is consumption of the data.
A stall may be the state in which, when the first client 110 needs to consume data, the buffer has no data available for consumption.
The probability of the first client 110 stalling may be obtained from the macroscopic probability distribution model, used as prior knowledge. A prior probability is typically used to express the assumed probability that a certain event occurs.
In a specific implementation, when the first client 110 has consumed one or more frames of buffered data, the amount of data remaining in the buffer is detected to obtain a buffered data amount, and from the consumption speed and the buffered data amount, the network delays under which the first client 110 would not stall are derived, yielding a plurality of no-stall delays.
Then, in the macroscopic probability distribution model, the occurrence probabilities corresponding to these no-stall delays are looked up to obtain a no-stall delay probability distribution, that is, the distribution of the network delays that do not cause the first client 110 to stall.
From the no-stall delay probability distribution, the no-stall probability can be calculated; finally, from the no-stall probability, the stall probability can be determined and used as the stall prior probability.
It should be noted that this stall probability is obtained from the macroscopic probability distribution model alone and not from the network delays actually occurring at the first client 110; the macroscopic probability distribution model therefore acts as prior knowledge in the prediction, and accordingly the resulting stall probability is called the stall prior probability.
Of course, those skilled in the art may determine the stall prior probability from the macroscopic probability distribution model in other ways; the above implementation is only an illustrative example. For example, in the macroscopic probability distribution model, the occurrence probabilities corresponding to several larger network delays may be looked up and summed to obtain the stall prior probability.
S206, correcting the stall prior probability according to the individual network delay recorded by the client to obtain a stall posterior probability; the stall posterior probability is the probability of a stall when the client experiences the individual network delay.
The individual network delay may be the sequence of network delays that the first client 110 actually experiences while receiving the downlink data of the server 130.
The stall posterior probability may be the stall probability of the first client 110 obtained by correcting the prior probability according to the individual network delay that actually occurs at the first client 110. The stall posterior probability represents the probability that the first client 110 stalls when it experiences that individual network delay.
In a specific implementation, the first client 110 may receive service operation data sent by the server 130, where the service operation data sent by the server 130 is downlink data. The first client 110 may record the network delay of receiving the downstream data as an individual network delay of the first client 110 during the process of receiving the downstream data.
To help those skilled in the art understand how the first client 110 records the individual network delay, the following description refers to the client-server communication timing diagram of fig. 3. Referring to fig. 3, the server (Server) transmits downlink data to the client (Client) at a fixed frequency; the network bottom layer of the client (Client Network) receives the downlink data, and the service application of the client (Client App) consumes it. Each piece of downlink data carries a SeqID and ST_i, where SeqID is a unique sequence number incremented in transmission order and ST_i is the timestamp at which the server sent the i-th piece of data. When the network bottom layer of the client receives the i-th piece of data, it records RT_i, the timestamp at which the i-th piece of downlink data was received. The difference between ST_i and RT_i gives n_i, the time the i-th piece of data spent in transit in the network. RT_{i+1}, ST_{i+1} and n_{i+1} are obtained in the same way.
Thus, the time interval between the network bottom layer of the client receiving the i-th piece of data and receiving the (i+1)-th piece can be calculated by the following formula:

D_{i+1} = RT_{i+1} - RT_i = (n_{i+1} - n_i) + (ST_{i+1} - ST_i) = V^n_{i+1} + V^S_{i+1}

where V^n_{i+1} = n_{i+1} - n_i represents the delay variation in the network transmission process, and V^S_{i+1} = ST_{i+1} - ST_i reflects the transmission frequency of the server.
The time interval D_{i+1} between receiving the i-th and the (i+1)-th pieces of data is the network delay with which the client receives the (i+1)-th piece of data, also commonly called the downlink delay.
Since the server sends at a fixed frequency, the formula shows that the network delay D_{i+1} is mainly affected by the delay variation in the network transmission process.
Using the above formula, the network delay with which the first client 110 receives the downlink data can be obtained. The first client 110 records successive network delays, forming a network delay sequence: the individual network delay of the first client 110. For example, one particular individual network delay may be {2 ms, 3 ms, 1 ms, …, N ms}.
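A sketch of this recording step (the function name is illustrative): the client keeps the receive timestamps RT_i and stores the inter-arrival gaps as its individual network delay sequence:

def record_individual_delays(receive_timestamps_ms):
    """D_{i+1} = RT_{i+1} - RT_i for each consecutive pair of receive
    timestamps; the resulting sequence is the individual network delay D_c."""
    return [b - a for a, b in zip(receive_timestamps_ms, receive_timestamps_ms[1:])]

RT = [100, 102, 105, 106]            # illustrative receive timestamps (ms)
print(record_individual_delays(RT))  # [2, 3, 1], as in the example above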
The first client 110 may correct the stall prior probability using the actual individual network delay to obtain the stall posterior probability, thereby individually predicting the probability of the first client 110 stalling.
One specific way to correct the stall prior probability is Bayesian correction. Specifically, the individual network delay of the first client 110 may be used to obtain a stall delay likelihood probability, i.e., the probability that the individual network delay of the first client 110 occurs at the time of a stall. The macroscopic probability distribution model and the individual network delay are then used to derive an individual delay observation probability, i.e., the probability that the first client 110 experiences the recorded individual network delay. Finally, using the stall delay likelihood probability and the individual delay observation probability, the stall prior probability is corrected through the Bayesian formula, yielding the stall posterior probability, i.e., the probability that a stall occurs when the first client 110 experiences that individual network delay.
With the Bayesian correction method, the macroscopic probability distribution model serves as prior knowledge from which the stall prior probability is obtained, and the stall prior probability can then be corrected with the individual network delay that actually occurs at the first client 110 to obtain the stall posterior probability.
S208, controlling the speed at which the client consumes the buffered data according to the stall posterior probability.
The consumption speed may be the speed at which the first client 110 consumes the data in the buffer. It may be expressed through the consumption interval of the buffered data. For example, if the first client 110 consumes one frame of data every 30 ms, the consumption speed is 1 frame / 30 ms.
In a specific implementation, the first client 110 may compare the stall posterior probability with a preset stall probability threshold to obtain a comparison result, and control the consumption speed of the buffered data accordingly.
When the comparison shows that the stall posterior probability is smaller than the stall probability threshold, the data retained in the buffer of the first client 110 is sufficient to resist network jitter and there is currently no stall risk. The first client 110 can therefore increase the consumption speed of the buffered data, reducing the operation delay of service operation state synchronization.
When the comparison shows that the stall posterior probability is larger than the stall probability threshold, the data retained in the buffer of the first client 110 is insufficient to resist network jitter and there is a current stall risk. The first client 110 can therefore reduce the consumption speed of the buffered data, avoiding stalls and preserving the smoothness of service operation state synchronization.
In practical application, when the stall posterior probability is larger than the stall probability threshold, the first client 110 may also simply keep the normal consumption speed rather than increase it, so as to avoid raising the stall risk, balancing the smoothness of service operation state synchronization against operation delay.
In another specific implementation, after the comparison result is obtained, the difference between the stall posterior probability and the preset stall probability threshold may further be calculated, and the consumption speed controlled according to that difference, with speed adjustment values preset for different differences. For example, if the current stall posterior probability is 60%, the stall probability threshold is 50%, the difference is +10%, the speed adjustment value corresponding to +10% is 10 ms, and the current consumption speed is 1 frame / 30 ms, then the consumption speed can be adjusted to 1 frame / 40 ms, reducing the consumption speed. Likewise, if the current stall posterior probability is 45%, the threshold is 50%, the difference is -5%, and the adjustment value corresponding to -5% is -5 ms, then a current consumption speed of 1 frame / 30 ms is adjusted to 1 frame / 25 ms, increasing the consumption speed.
Of course, beyond the consumption speed control examples above, those skilled in the art may control the consumption speed according to the stall posterior probability in other ways; the specific manner is not limited in this embodiment.
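As one concrete reading of the difference-based variant above (the linear mapping of 1 ms per percentage point reproduces the two worked examples but is otherwise an assumption):

def adjust_consumption_interval(interval_ms, stall_posterior, threshold=0.50):
    """Map the difference between the stall posterior probability and the
    stall probability threshold to a consumption-interval adjustment:
    here, 1 ms per percentage point of difference."""
    diff = stall_posterior - threshold
    return interval_ms + round(diff * 100)

print(adjust_consumption_interval(30, 0.60))  # 40 -> 1 frame / 40 ms (slower)
print(adjust_consumption_interval(30, 0.45))  # 25 -> 1 frame / 25 ms (faster)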
In one embodiment, the buffered data is at least one of service operation data and voice data.
The method for processing buffered data in the above embodiments has been described taking a service operation state synchronization scenario based on frame synchronization as an example. In practice, it may also be applied in various scenarios where the network environment is unstable but low latency is required.
For example, the processing method of buffered data of the above embodiment may also be applied to a voice synchronization scenario based on VoIP (Voice over Internet Protocol, voice over IP) technology. The VoIP technology can realize voice call and multimedia conference through IP network protocol, and can be applied to Internet equipment such as VoIP telephone, smart phone, personal computer and the like, and can also carry out call or send short message through cellular network and Wi-Fi. When the method for processing buffered data in the foregoing embodiment is applied to a voice synchronization scenario based on VoIP technology, the buffered data in the embodiment may be voice data, and specific implementation steps are similar to each step in the foregoing embodiment and are not repeated herein.
It should also be noted that steps S202 to S208 of the above method for processing buffered data may be executed by the server 130. For example, the server 130 may obtain the macroscopic probability distribution model, determine the stall prior probability according to it, receive the individual network delay reported by the first client 110, correct the stall prior probability according to that delay to obtain the stall posterior probability, and then control the consumption speed of the first client 110 according to the stall posterior probability. Steps S202 to S208 may also be performed cooperatively by the first client 110 and the server 130. For example, the server 130 obtains the macroscopic probability distribution model and determines the stall prior probability from it, the server 130 issues the stall prior probability to the first client 110, the first client 110 corrects it according to the individual network delay to obtain the stall posterior probability, and the consumption speed is then controlled according to the stall posterior probability.
Therefore, based on the technical ideas provided in the present embodiment, a person skilled in the art can flexibly set the execution subject of each step according to the actual needs, and the above embodiment is not limited to the execution subject of each step.
According to this method for processing buffered data, a plurality of network delay samples are learned to obtain a macroscopic probability distribution model, the macroscopic probability distribution model is used as prior knowledge to determine the stall prior probability of the client, the individual network delay occurring locally at the client is used to correct the stall prior probability into a stall posterior probability, and the consumption speed is controlled according to the stall posterior probability. This solves the prior-art problem that stall probability prediction for a client is inaccurate for lack of prior knowledge, improves the prediction accuracy of the client's stall probability, and, by controlling the consumption speed based on an accurately predicted stall probability, keeps enough buffered data in the buffer to resist network jitter. By controlling the consumption speed, the amount of buffered data is adaptively adjusted, minimizing the buffered data while avoiding stalls caused by insufficient buffered data. Applied to a service operation state synchronization scenario, the method improves the smoothness of service operation state synchronization while ensuring a low operation delay, and by accurately predicting stalls it effectively balances minimizing operation delay against improving smoothness.
Moreover, because the individual network delay occurring locally at the client is used to correct the stall prior probability, the stall probability can be predicted individually for the network environment of each client, so the predicted stall probability is closer to the client's actual stalling behaviour, improving the accuracy of stall prediction. When the client's local individual network delay changes, the predicted stall probability is dynamically updated, making the stall prediction real-time.
In one embodiment, the step S204 may specifically include:
detecting the amount of buffered data to obtain a buffered data amount; obtaining no-stall delays according to the consumption speed and the buffered data amount, a no-stall delay being a network delay under which the client does not stall; obtaining a no-stall delay probability distribution through the macroscopic probability distribution model, the no-stall delay probability distribution comprising the no-stall delays and their corresponding occurrence probabilities; calculating the sum of the occurrence probabilities corresponding to the no-stall delays to obtain the no-stall probability; and determining the stall prior probability from the no-stall probability, the stall prior probability and the no-stall probability summing to 1.
Wherein the buffered data amount may be the amount of buffered data in the buffer of the first client 110.
A no-stall delay may be a network delay that does not cause the first client 110 to stall.
The no-stall delay probability distribution may be the distribution of the occurrence probabilities of a plurality of no-stall delays.
The no-stall probability may be the probability that the first client 110 does not stall.
In a specific implementation, after the service layer consumes data, the first client 110 may detect the amount of buffered data currently remaining in the buffer to obtain the buffered data amount CB_i. From the consumption speed, the corresponding consumption interval ΔP can be determined.
Multiplying the buffered data amount CB_i by the consumption interval ΔP gives the buffer tolerance duration, which represents how long the currently buffered data lasts when consumed at the current consumption speed. If the network delay about to occur does not exceed the buffer tolerance duration, no stall occurs as a result of the buffered data running out. It follows that, to keep the buffered data amount minimal, the condition for no stall at a time i is:

CB_i * ΔP ≥ D_{i+1}

If CB_i * ΔP ≥ D_{i+1} cannot be satisfied, the buffered data amount cannot resist the upcoming network delay D_{i+1}. Conversely, any network delay D_{i+1} smaller than CB_i * ΔP is a no-stall delay. Once the no-stall delays are determined, the occurrence probability corresponding to each of them is looked up in the macroscopic probability distribution model P_B{X}, giving the no-stall delay probability distribution P_B(D), i.e., the occurrence probability P_B of every no-stall delay D_{i+1} smaller than CB_i * ΔP. The sum of these occurrence probabilities is the no-stall probability, and the stall prior probability P(L) follows from the fact that the no-stall probability and the stall probability sum to 1.
In practical application, the stall prior probability P(L) can be calculated by the following formula:

P(L) = 1 - Σ P_B(D_{i+1}), summed over all D_{i+1} < CB_i * ΔP
it should be noted that, after receiving the downlink data from the server 130, the first client 110 first puts the data into the buffer, and the data is consumed after waiting a certain period of time in the buffer. Referring to fig. 3, the duration from the sending of the data from the server 130 to the actual consumption by the first client 110 is P i P can be calculated by the following formula i
P i =PT i -ST i =(PT i -RT i )+(RT i -ST i )=B i +n i
Wherein PT i Recording a time stamp of consumption of the ith data for the first client 110 when the data is consumed, B i Representing the latency of the ith data in the buffer, the consumption interval can then be calculated by the following formula:
ΔP=P i+1 -P i
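Combining these quantities, a sketch of the prior computation (reusing the dictionary form of P_B from the earlier sketches; names are illustrative):

def stall_prior_probability(P_B, buffered_frames, consumption_interval_ms):
    """P(L) = 1 - sum of P_B(D) over all delays D below the buffer
    tolerance duration CB_i * dP, per the formula above."""
    tolerance_ms = buffered_frames * consumption_interval_ms
    no_stall_probability = sum(p for delay_ms, p in P_B.items()
                               if delay_ms < tolerance_ms)
    return 1.0 - no_stall_probability

# e.g. 3 buffered frames consumed at 1 frame / 30 ms tolerate delays < 90 ms:
# prior = stall_prior_probability(P_B, buffered_frames=3, consumption_interval_ms=30)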
According to this method for processing buffered data, after the service layer consumes data, the amount of remaining buffered data is detected to obtain the buffered data amount, the no-stall delays are determined from the buffered data amount and the consumption speed, and the stall prior probability is then obtained from the no-stall delays and the macroscopic probability distribution model. The stall probability can thus be predicted from the latest buffered data amount and the real-time consumption speed of the buffer, improving the timeliness of the prediction. Controlling the consumption speed based on this more timely prediction avoids the inaccuracy of predictions that are not real-time.
In one embodiment, the step S206 may specifically include:
determining a stall delay likelihood probability according to the individual network delay, the stall delay likelihood probability being the probability that the individual network delay occurs when the client actually stalls; determining an individual delay observation probability according to the individual network delay and the macroscopic probability distribution model, the individual delay observation probability being the probability that the client experiences the individual network delay; and applying Bayesian correction to the stall prior probability using the stall delay likelihood probability and the individual delay observation probability to obtain the stall posterior probability.
The stall delay likelihood probability may be the probability that the individual network delay occurs when the first client 110 actually stalls. A likelihood probability describes the likelihood of an unknown event given that a known event has occurred; here it describes the likelihood of a certain network delay occurring given that an actual stall is known to occur.
The individual delay observation probability may be the probability that the first client 110 exhibits the individual network delay. An observation probability describes the probability of an actually observed event that has occurred.
In a specific implementation, the stall delay likelihood probability P(D_c|L) may be determined from the individual network delay, and the individual delay observation probability P(D_c) from the individual network delay and the macroscopic probability distribution model. The Bayesian formula then multiplies the stall delay likelihood probability by the stall prior probability and divides by the individual delay observation probability, yielding the Bayes-corrected stall probability as the stall posterior probability P(L|D_c):

P(L|D_c) = P(D_c|L) * P(L) / P(D_c)
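The correction itself is a single expression; the sketch below assumes the three inputs have already been computed as described, and the numeric values are illustrative only:

def stall_posterior_probability(prior, likelihood, observation):
    """P(L|D_c) = P(D_c|L) * P(L) / P(D_c)."""
    return likelihood * prior / observation

# Illustrative values only:
print(stall_posterior_probability(prior=0.3, likelihood=0.02, observation=0.01))
# ~0.6 -> roughly a 60% stall posterior probability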
according to the buffer data processing method, the likelihood probability of the cartoon delay is determined according to the individual network delay, the individual delay observation probability is determined according to the individual network delay and the macroscopic probability distribution model, after the likelihood probability and the observation probability are obtained, the cartoon prior probability is corrected in a Bayesian correction mode, and the cartoon posterior probability is obtained, so that the consumption speed is controlled based on the cartoon posterior probability. Therefore, the probability is corrected by using a Bayesian correction mode, a large amount of calculation is not needed, the calculation resources in the correction process are saved, and the probability prediction efficiency is improved.
In one embodiment, determining the stall delay likelihood probability according to the individual network delay may specifically include:
when the client actually stalls, determining the actual stall delays, an actual stall delay being an individual network delay that occurred before the actual stall; calculating the ratio of the number of occurrences of each actual stall delay to the total number of individual network delays to obtain the actual stall delay probabilities; and calculating the cumulative product of the actual stall delay probabilities to obtain the stall delay likelihood probability; the stall delay likelihood probability is the probability that the individual network delay occurs when the client actually stalls.
In a specific implementation, the first client 110 may determine whether an actual stall is currently occurring and, when one occurs, take one or more individual network delays that occurred before the stall as the actual stall delays. The number of occurrences of each actual stall delay and the total number of individual network delays are then counted, and their ratio gives the actual stall delay probability P_c(D).
For example, the first client 110 records the set D_c of individual network delays that actually occurred. The occurrence probability of a particular individual network delay D is P_c(D) = |D| / |D_c|, i.e., the ratio of the number of occurrences of D to the total number of delays in D_c. Suppose that when an actual stall occurs, the actual stall delays are determined to be D_1 and D_2, D_1 occurred 5 times, D_2 occurred 10 times, and the total size of D_c is 50; then the actual stall delay probabilities are P_c(D_1) = 5/50 = 10% and P_c(D_2) = 10/50 = 20%.
Finally, the stall delay likelihood probability P(D_c|L) is obtained as the cumulative product of the actual stall delay probabilities P_c(D):

P(D_c|L) = Π P_c(D), taken over the actual stall delays D

Based on the above example, the stall delay likelihood probability is P(D_c|L) = P_c(D_1) * P_c(D_2) = 10% * 20% = 2%.
In practical application, before the first client 110 has experienced any actual stall, P(D_c|L) is taken to be P(D_c). When an actual stall L_i occurs upon receiving the i-th frame of data, P_c(D|L) is updated to P_c(D|L_i). When a further actual stall L_j occurs upon receiving the j-th frame of data, P_c(D|L_i) is updated to P_c(D|L_j).
Fig. 4 is a schematic diagram of a stall delay likelihood probability update in one embodiment. As shown, on a two-dimensional plot the horizontal axis is the sequence number SeqID of the data and the vertical axis is the network delay recorded when the data with SeqID = x was received; the height of each bar reflects the magnitude of the network delay.
Before the actual stall L_i occurs, P_c(D|L) = P(D_c). When the actual stall L_i occurs, the network delays recorded before L_i are used to update P_c(D|L) to P_c(D|L_i):

P_c(D|L_i) = |D| / |D_c|, counted over the individual network delays recorded before L_i

P_c(D|L_i) is the updated stall delay likelihood probability and is used in the subsequent correction of the stall prior probability. When the next actual stall L_j occurs, the network delays before L_j are used in the same way to update P_c(D|L_i) to P_c(D|L_j), which becomes the stall delay likelihood probability used thereafter.
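A sketch of the frequency-based likelihood and its stall-time update; which delays count as the actual stall delays is left to the caller here, which is a simplification of the above:

from collections import Counter

def stall_delay_likelihood(recorded_delays_ms, actual_stall_delays_ms):
    """P_c(D) = |D| / |D_c| over the delays recorded so far, and
    P(D_c|L) = product of P_c(D) over the actual stall delays."""
    counts = Counter(recorded_delays_ms)
    total = len(recorded_delays_ms)
    likelihood = 1.0
    for d in actual_stall_delays_ms:
        likelihood *= counts[d] / total
    return likelihood

# Reproducing the worked example: D_1 occurred 5 times and D_2 occurred 10
# times among 50 recorded delays, so P(D_c|L) = 10% * 20% = 2%.
delays = [2] * 5 + [3] * 10 + [1] * 35         # 50 delays; 2 ms is D_1, 3 ms is D_2
print(stall_delay_likelihood(delays, [2, 3]))  # ~0.02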
According to this method for processing buffered data, when an actual stall occurs, the actual stall delays are determined and the stall delay likelihood probability is determined from them, so the stall probability can be predicted according to the client's own stalling behaviour, improving prediction accuracy.
In one embodiment, determining the individual delay observation probability according to the individual network delay and the macroscopic probability distribution model may specifically include:
obtaining an individual delay probability distribution through the macroscopic probability distribution model, the individual delay probability distribution comprising the individual network delays and their corresponding occurrence probabilities; and calculating the cumulative product of the occurrence probabilities corresponding to the individual network delays to obtain the individual delay observation probability.
The individual delay probability distribution may be a distribution of a plurality of individual network delays and occurrence probabilities of the first client 110 actually occurring.
In a specific implementation, the macroscopic probability distribution model P_B{X} can be used to look up the occurrence probability corresponding to each network delay D among the individual network delays D_c, yielding the individual delay probability distribution P_B(D). From P_B(D), the cumulative product of the occurrence probabilities of the individual network delays is computed by the following formula to obtain the individual delay observation probability P(D_c):

P(D_c) = ∏ P_B(D), the product taken over all delays D in D_c.
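A minimal sketch of this lookup-and-multiply step, assuming the macroscopic model is available as a plain mapping from delay value to occurrence probability; the fallback probability for delays the model never saw is an assumption of the sketch.

```python
def observation_probability(macro_model, individual_delays, unseen_prob=1e-9):
    """P(D_c): cumulative product of the occurrence probabilities that the
    macroscopic model P_B{X} assigns to each recorded individual delay."""
    p = 1.0
    for d in individual_delays:
        p *= macro_model.get(d, unseen_prob)  # fall back for unseen delays
    return p

macro_model = {10: 0.5, 20: 0.3, 50: 0.2}     # toy P_B{X = k}
print(observation_probability(macro_model, [10, 10, 20]))  # 0.5 * 0.5 * 0.3 = 0.075
```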
According to this method of processing buffered data, the individual delay observation probability of the client is obtained by combining the macroscopic probability distribution model with the individual network delays, which avoids the lack of prior knowledge when calculating the individual delay observation probability and improves the prediction accuracy of the stutter probability.
In one embodiment, the step S208 may specifically include:
when the stutter posterior probability is larger than a preset stutter probability threshold, notifying the business layer of the client to consume the buffered data at a first consumption rate; when the stutter posterior probability is smaller than the stutter probability threshold, notifying the business layer of the client to consume the buffered data at a second consumption rate; the first consumption rate is lower than the second consumption rate.
The stutter probability threshold can be a probability threshold preset from empirical values and used to judge whether a stutter risk exists.
It should be noted that the data processing framework of a current client generally includes a network bottom layer and a business layer. The network bottom layer is responsible for receiving the downlink data issued by the server over the network; the received downlink data is stored in a buffer of the client, and the business layer of the client consumes the data in the buffer.
For the above method of processing buffered data, a suitable data processing framework is provided for this embodiment: referring to fig. 5, a network jitter smoothing module is added between the network bottom layer and the business layer of the client. The network jitter smoothing module may be composed of a buffer management module, an online learning module, and a consumption decision module.
The network bottom layer receives the downlink data from the server and pushes the received data to the buffer management module, which stores the data in the buffer. The online learning module calculates the stutter posterior probability and provides it to the consumption decision module.
The consumption decision module compares the stutter posterior probability with the stutter probability threshold and notifies the business layer of the adjusted consumption rate according to the comparison result.
More specifically, when the stutter posterior probability is greater than the preset stutter probability threshold, the consumption decision module of the first client 110 notifies the business layer to consume the buffered data at the slower first consumption rate. When the stutter posterior probability is less than the stutter probability threshold, the consumption decision module of the first client 110 notifies the business layer to consume the buffered data at the faster second consumption rate.
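A minimal sketch of this decision step, assuming the business layer exposes a rate-setting callback; the callback and the rate values are illustrative assumptions.

```python
FIRST_RATE_FPS = 20    # slower first consumption rate (illustrative values)
SECOND_RATE_FPS = 40   # faster second consumption rate

def notify_consumption_rate(posterior, threshold, set_rate):
    """Above the threshold: stutter risk, so slow down and let the buffer refill.
    Below it: no risk, so speed up to drain the buffer and cut operation delay."""
    if posterior > threshold:
        set_rate(FIRST_RATE_FPS)
    else:
        set_rate(SECOND_RATE_FPS)

notify_consumption_rate(0.3, 0.1, set_rate=lambda fps: print("consume at", fps, "fps"))
```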
In this data processing framework, the business layer therefore does not need to perform stutter probability prediction or consumption decisions itself; it only needs to connect to the network jitter smoothing module and receive its notifications in order to adjust the consumption rate of the buffered data accordingly. The processing logic of the business layer and that of the network jitter smoothing module are isolated from each other, giving the data processing framework a low degree of coupling.
According to this method of processing buffered data, the business layer is notified to consume the buffered data at the relatively slower first consumption rate when the stutter posterior probability is larger than the stutter probability threshold, and at the relatively faster second consumption rate when it is smaller. The business layer therefore only needs to consume the buffered data at whichever rate the received notification specifies; within the data processing framework it need not be coupled to the network jitter smoothing module that performs the probability prediction and consumption decisions, i.e., the business layer need not adapt its own processing logic to the prediction and decision logic of that module. This low coupling between the two sets of processing logic allows business software running on the client to connect to the network jitter smoothing module at low cost, reducing the access cost of the business software and improving the generality of the stutter prediction.
FIG. 6 illustrates a method of processing buffered data according to an embodiment. As shown, the method of processing buffered data may further include the following steps:
S610: determining that the business layer of the client has made a data consumption request;

S612: when the client has buffered data, extracting target buffer data from the buffered data and returning the target buffer data to the business layer of the client;
Notifying the business layer of the client to consume the buffered data at the first consumption rate specifically includes: generating an end mark and returning it to the business layer of the client, so that when the target buffer data has been consumed, the business layer waits for a preset consumption interval before making the next data consumption request.
Notifying the business layer of the client to consume the buffered data at the second consumption rate specifically includes: generating a continuation mark and returning it to the business layer of the client, so that the business layer makes the next data consumption request as soon as the target buffer data has been consumed.
The data consumption request may be the message by which the business layer requests to consume buffered data.
The target buffer data may be the data extracted from the buffered data by the first client 110 and returned to the business layer for consumption.
In a specific implementation, referring to fig. 5, the business layer may access the consumption decision module at certain intervals and make a data consumption request to it. On receiving the request, the consumption decision module judges whether the buffer is empty; when it is not, i.e., buffered data exists, the module extracts one frame of buffered data as the target buffer data and returns it to the business layer.
The consumption decision module can notify the online learning module that the business layer is currently consuming data. The online learning module detects how much buffered data remains in the buffer to obtain the buffered data amount, calculates the current stutter posterior probability from it, and returns the posterior probability to the consumption decision module, which then judges whether a stutter risk currently exists.
When the stutter posterior probability is larger than the stutter probability threshold, a stutter risk exists; the consumption decision module therefore generates an end mark and sends it to the business layer. After receiving the end mark, the business layer waits for the preset consumption interval once the target buffer data has been consumed before making the next data consumption request to the consumption decision module, thereby lowering the consumption rate.
When the stutter posterior probability is smaller than the stutter probability threshold, no stutter risk exists; the consumption decision module therefore generates a continuation mark and sends it to the business layer. After receiving the continuation mark, the business layer makes the next data consumption request to the consumption decision module immediately once the target buffer data has been consumed, without waiting, thereby raising the consumption rate.
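The business-layer side of this mark protocol reduces to a small polling loop. The sketch below assumes a `decision_module.request_data()` call returning a (mark, frame) pair; both the call and the mark constants are illustrative assumptions.

```python
import time

END, CONTINUE, NO_DATA = "end", "continue", "no-data"

def business_layer_loop(decision_module, consume_frame, consumption_interval=0.05):
    """Consume frames, pausing after an end mark and polling again at once
    after a continuation mark."""
    while True:
        mark, frame = decision_module.request_data()
        if mark == NO_DATA:
            time.sleep(consumption_interval)   # buffer empty: poll again later
            continue
        consume_frame(frame)
        if mark == END:                        # stutter risk: slow the pace
            time.sleep(consumption_interval)
        # CONTINUE: request the next frame immediately (faster rate)
```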
According to this method of processing buffered data, the business layer only needs to receive the end mark or the continuation mark to determine the interval at which to make its next data consumption request, and can connect to the network jitter smoothing module with very simple processing logic, reducing the intrusiveness of the stutter probability prediction into the business layer's own logic.
Referring to fig. 6, in one embodiment, the processing method of buffered data may further include the steps of:
S614: when the stutter posterior probability is smaller than the stutter probability threshold but an actual stutter occurs at the client, updating the stutter probability threshold;
Updating the stutter probability threshold may specifically include:
incrementing the stutter probability threshold by a small step to obtain a first updated threshold; obtaining an updated stutter probability, the updated stutter probability being the stutter posterior probability obtained after the consumption rate has been controlled according to the first updated threshold; and, when the updated stutter probability is smaller than the first updated threshold and the first client experiences no actual stutter, decrementing the first updated threshold by a small step to obtain a second updated threshold.
A small-step increment can be the operation of adding a minimal step to the current value.

A small-step decrement can be the operation of subtracting the minimal step from the current value.
In a specific implementation, when the stutter posterior probability is smaller than the stutter probability threshold and it is therefore determined that no stutter risk exists, yet the first client 110 experiences an actual stutter, the current stutter prediction is wrong and the stutter probability threshold needs to be updated. In practical applications, the stutter probability threshold can be updated adaptively in a sliding-window manner: the current value is adjusted stepwise by small increments or decrements, much like sliding a window across different intervals in small steps.
Fig. 7 is a schematic diagram of the threshold adaptation update of an embodiment. As shown, on a two-dimensional coordinate axis, the horizontal axis represents the serial number SeqID of the data and the vertical axis represents the corresponding stutter probability threshold. The initial stutter probability threshold is η₀. When a stutter prediction error occurs, η₀ is incremented by a small step and updated to the first updated threshold η₁ = η₀ + Δη₊, where Δη₊ is the minimal increment step.
After obtaining a new stutter posterior probability, the first client 110 compares it with the first updated threshold and controls the consumption rate according to the comparison result.
When, within a certain time period T_η, the comparison shows that the new stutter posterior probability is smaller than the first updated threshold, it is determined that no stutter risk exists, and the first client 110 experiences no actual stutter, the first updated threshold η₁ may be decremented by a small step and updated to the second updated threshold η₂ = η₁ − Δη₋, where Δη₋ is the minimal decrement step. The second updated threshold η₂ then serves as the new stutter probability threshold for subsequent stutter prediction.
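A minimal sketch of this sliding-window adaptation, with the step sizes Δη₊ and Δη₋ as illustrative parameters:

```python
def update_threshold(eta, predicted_no_risk, actually_stuttered,
                     step_up=0.02, step_down=0.01):
    """Missed stutter (predicted safe but stuttered): raise eta by a small step.
    Correct safe prediction over the observation period: lower eta by a small step."""
    if predicted_no_risk and actually_stuttered:
        return min(1.0, eta + step_up)      # first updated threshold eta_1
    if predicted_no_risk and not actually_stuttered:
        return max(0.0, eta - step_down)    # second updated threshold eta_2
    return eta

eta = 0.1
eta = update_threshold(eta, predicted_no_risk=True, actually_stuttered=True)   # 0.12
eta = update_threshold(eta, predicted_no_risk=True, actually_stuttered=False)  # 0.11
```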
According to this method of processing buffered data, the stutter probability threshold is updated whenever a stutter prediction error occurs: the first updated threshold is obtained by a small-step increment, and once predictions are again correct the second updated threshold is obtained by a small-step decrement. Gradually raising the threshold reduces the stutter risk and improves smoothness, while gradually lowering it when no stutter risk remains raises the consumption rate and reduces operation delay, achieving a balance between low operation delay and smoothness.
As shown in fig. 8, in one embodiment a method of processing buffered data is provided; this embodiment is described as applied to the server 130 in fig. 1. Referring to fig. 8, the method specifically includes the following steps:
S802, a plurality of network delay samples are acquired.
In a specific implementation, the first client 110, the second client 120, and the third client 140 may receive the downlink data of the server 130, record the network delay actually occurring in the process of receiving the downlink data, and the plurality of clients may report the recorded network delay to the server 130, where the server 130 obtains a plurality of network delays as network delay samples.
S804, learning a plurality of network delay samples to obtain a macroscopic probability distribution model; the macroscopic probability distribution model includes individual network delays and corresponding occurrence probabilities.
In a specific implementation, the server 130 may perform machine learning on a plurality of network delay samples to construct a macroscopic probability distribution model.
S806, issuing the macroscopic probability distribution model to the client, so that the client determines the stutter prior probability according to the macroscopic probability distribution model, corrects the stutter prior probability according to the individual network delays recorded by the client to obtain the stutter posterior probability, and controls the consumption rate of the buffered data according to the stutter posterior probability; the stutter prior probability is the probability of a stutter caused by the client having no consumable buffered data; the stutter posterior probability is the probability of a stutter when the client experiences the individual network delays.
In a specific implementation, the server 130 may issue the macroscopic probability distribution model to the first client 110. The first client 110 obtains the stutter posterior probability from the macroscopic probability distribution model and the individual network delays it has recorded, and controls the consumption rate of the buffered data according to the stutter posterior probability.
The process by which the first client 110 obtains the stutter posterior probability and controls the consumption rate according to it is described in detail in the above embodiments and is not repeated here.
According to this method of processing buffered data, the server acquires network delay samples, learns from them to obtain the macroscopic probability distribution model, and provides the model to the client. The client can then use the macroscopic probability distribution model as prior knowledge to obtain the stutter posterior probability and control the consumption rate accordingly. This solves the inaccuracy caused by the lack of prior knowledge when predicting the client's stutter probability in the prior art, improves the prediction accuracy, and, by controlling the consumption rate on the basis of an accurately predicted stutter probability, keeps just enough buffered data in the buffer to resist network jitter. Controlling the consumption rate adaptively adjusts the amount of buffered data, minimizing it while avoiding stutters caused by insufficient buffered data. When applied to a service operation state synchronization scenario, the method improves the smoothness of the synchronization while keeping operation delay low, effectively balancing minimal operation delay against smoothness through accurate stutter prediction.
Moreover, learning over many network delay samples requires storing a large amount of data and consumes considerable computing resources; if the learning were performed by the client, its normal operation would be affected. In this method, therefore, the server learns the network delay samples to obtain the macroscopic probability distribution model and issues it to the client for use, avoiding any impact on the client's normal operation.
In one embodiment, the step S804 may specifically include:
generating an initial kernel density estimation function, the initial kernel density estimation function comprising the plurality of network delay samples and a candidate density estimation window width; evaluating the initial kernel density estimation function with the asymptotic mean squared error criterion to obtain an optimal window width, the optimal window width being the preferred candidate density estimation window width; generating a delay probability density estimation function comprising the plurality of network delay samples and the optimal window width; and discretizing the delay probability density estimation function to obtain the macroscopic probability distribution model, which includes the individual network delays and their corresponding occurrence probabilities.
Wherein the initial kernel density estimation function may be a kernel density estimation function for which the density estimation window width is not optimized. The kernel density estimation function is used to estimate the probability density of the continuous random variable.
The candidate density estimation window width may be the selected density estimation window width to be optimized.
Wherein the optimal window width may be a preferred density estimation window width.
In a specific implementation, the server 130 may first generate an initial kernel density estimation function

f̂_h(x) = (1 / (m·h)) · Σ_{i=1..m} K((x − X_i) / h)

where m represents the number of network delay samples, X_i represents the i-th sample in the sequence, K(u) represents the kernel function, and h represents the density estimation window width used for the probability density estimation.
Since the specific value of the density estimation window width directly affects the accuracy of the final result, each candidate density estimation window width needs to be evaluated to determine the optimal density estimation window width.
In practical applications, the evaluation can be performed with the asymptotic mean squared error (AMISE) criterion:

AMISE(h) = R(L) / (m·h) + (1/4) · σ⁴ · h⁴ · R(f″)

where σ represents the noise variance, R(L) = ∫ L²(x) dx, and f″ is the second derivative of the density being estimated. When L is a Gaussian kernel, R(L) can be calculated as

R(L) = 1 / (2√π).
The problem of evaluating the initial kernel density estimation function f̂_h is thus converted into finding the optimal density estimation window width h_opt that minimizes AMISE(h). The optimal density estimation window width h_opt can be calculated by Silverman's Rule of Thumb:

h_opt = 0.9 · min(σ̂, (X_[0.75m] − X_[0.25m]) / 1.34) · m^(−1/5)

where X_[0.75m] represents the network delay at the 0.75 quantile of the network delay sample sequence, X_[0.25m] the network delay at the 0.25 quantile, and σ̂ the sample standard deviation.
With the optimal density estimation window width h_opt determined, the optimal delay probability density estimation function is obtained:

f̂_{h_opt}(x) = (1 / (m·h_opt)) · Σ_{i=1..m} K((x − X_i) / h_opt)
Then, the delay probability density estimation function f̂_{h_opt} for the continuous random variable is converted into a probability distribution function P_B{X = k} for a discrete random variable:

P_B{X = k} = ∫_{k−1/2}^{k+1/2} f̂_{h_opt}(x) dx
Through the probability distribution function P_B{X = k}, each network delay and its corresponding occurrence probability, i.e., the distribution of network delays and occurrence probabilities, can be calculated. The probability distribution function P_B{X = k} is issued to the first client 110 as the macroscopic probability distribution model.
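One way to realize this offline learning step is sketched below in Python with NumPy, using a Gaussian kernel, a Silverman rule-of-thumb bandwidth, and a simple per-unit-delay discretization; the bin range, the normalization, and the sample generator are assumptions of the sketch.

```python
import numpy as np

def build_macro_model(samples, max_delay=500):
    """Learn P_B{X = k} from network delay samples (delays in ms)."""
    x = np.asarray(samples, dtype=float)
    m = len(x)
    iqr = np.percentile(x, 75) - np.percentile(x, 25)
    h = 0.9 * min(x.std(ddof=1), iqr / 1.34) * m ** (-0.2)  # Silverman bandwidth

    def f_hat(t):  # Gaussian-kernel density estimate at t
        u = (t - x) / h
        return np.exp(-0.5 * u * u).sum() / (m * h * np.sqrt(2.0 * np.pi))

    # Discretize the continuous density onto integer delays and normalize.
    probs = np.array([f_hat(k) for k in range(max_delay + 1)])
    probs /= probs.sum()
    return {k: float(p) for k, p in enumerate(probs) if p > 0.0}

model = build_macro_model(np.random.gamma(shape=4.0, scale=10.0, size=5000))
```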
It should be noted that the foregoing embodiment gives one specific way of constructing the macroscopic probability distribution model by learning the network delay samples; in practical applications, those skilled in the art may learn the network delay samples in other ways to construct the model.
In order to facilitate a thorough understanding of the various embodiments described above by those skilled in the art, the following description will be provided in connection with specific examples. Referring to fig. 1 and 5, the first client 110, the second client 120, and the third client 140 each have a network jitter smoothing module, and an online learning module of the network jitter smoothing module may record an actually occurring network delay and report the network delay to the server 130.
FIG. 9 is a schematic diagram of a server processing flow of one embodiment. As shown, the server 130 may store the received network delays in a database as network delay samples. The offline learning module of the server 130 may fetch the network delay samples from the database and learn from them: it forms a sequence of network delay samples, determines the optimal window width from the sequence, obtains the optimal delay probability density estimation function from the optimal window width, and then discretizes the estimation function to obtain the macroscopic probability distribution model, which is stored in the database. When the first client 110 needs the macroscopic probability distribution model to synchronize service operation states, the model is issued to the first client 110 through the database.
In performing the service operation state synchronization, the second client 120 transmits the service operation data to the server 130, and the server 130 transmits the service operation data as downlink data to the first client 110.
The network bottom layer of the first client 110 receives the service operation data from the server 130 and pushes it to the network jitter smoothing module of the first client 110. The buffer management module in the network jitter smoothing module maintains the data according to the received service operation data.
FIG. 10 is a flow diagram of the buffer management module performing data maintenance, according to one embodiment. The service operation data includes a serial number SeqID, a timestamp ST_i at which the server 130 sent the i-th data, and a data packet containing the data content body. The buffer management module preprocesses the summary information of the service operation data.
In general, the complete summary information includes the sequence number SeqID, the data transmission time ST, the data reception time RT, the network delay D, the data consumption time PT, the buffer waiting time B, and the processing duration P. For stutter prediction, the buffer management module only needs to extract RT and D from the complete summary information.
The buffer management module can push the summary information RT and D extracted during preprocessing to the summary buffer, and push the data packet to the packet body buffer.

The online learning module needs the summary information in the summary buffer when calculating the stutter probability, so entries in the summary buffer can only be enqueued, never dequeued. The packet body buffer adopts a First In, First Out (FIFO) mechanism: the earliest-enqueued data packet is extracted for use and deleted from the packet body buffer afterwards.

To extract and update the summary information of a specified data packet accurately, the summary buffer supports indexing by serial number: the SeqID serves as the key, and a hash-based mapping between SeqID and summary information is established.

When a data packet is dequeued, the data consumption time PT can be determined; B and P can then be calculated from PT and recorded in the summary buffer.
The summary buffer internally maintains the received summary information and provides the required entries to the online learning module and to the offline learning module of the server 130. For example, the network delay D is provided to the offline learning module of the server 130 to form network delay samples, and to the online learning module to record the individual network delays of the first client 110.

Summary information can be provided in several ways. For example, when new summary information SeqID, RT, ST, D is enqueued, it is actively pushed; after the summary buffer obtains a packet's updated PT, B, P, the updated summary information is actively pushed; and when the online learning module needs the summary information D of a particular data packet, an index query is performed via the packet's SeqID and the result is fed back to the online learning module.
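The dual-buffer layout described above — an enqueue-only, SeqID-indexed summary store next to a FIFO packet-body queue — can be sketched as follows; the field names mirror the summary fields above, while the class and method names are illustrative.

```python
from collections import deque

class BufferManager:
    """Summary entries live in a SeqID-keyed map (enqueue-only);
    packet bodies live in a FIFO queue."""
    def __init__(self):
        self.summaries = {}       # SeqID -> {RT, D, PT, B, ...} (hash index)
        self.bodies = deque()     # (SeqID, payload), first in first out

    def push(self, seq_id, send_time, recv_time, payload):
        self.summaries[seq_id] = {"RT": recv_time, "D": recv_time - send_time}
        self.bodies.append((seq_id, payload))

    def pop(self, consume_time):
        seq_id, payload = self.bodies.popleft()   # earliest-enqueued packet
        entry = self.summaries[seq_id]            # summary stays enqueued
        entry["PT"] = consume_time
        entry["B"] = consume_time - entry["RT"]   # buffer waiting time
        return payload
```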
FIG. 11 is a schematic diagram of a consumption decision flow of an embodiment. The business layer of the first client 110 accesses the consumption decision module at regular intervals; the consumption decision module judges whether the buffer is empty, and if so, an actual stutter is currently occurring, so the consumption decision module notifies the online learning module to update the relevant parameters.
The online learning module updates the actual stutter delays according to the stutter that actually occurred, updates the stutter delay likelihood probability from the updated actual stutter delays, and then updates the stutter posterior probability using the updated likelihood probability. In addition, the consumption decision module returns a no-data mark to the business layer to indicate that no buffered data is currently available for consumption.
If the buffer is not empty, the consumption decision module extracts the buffered data with the earliest reception time, deletes it from the buffer, and notifies the online learning module. Once notified, the online learning module detects how much buffered data remains in the buffer to obtain the updated buffered data amount, updates the stutter prior probability accordingly, and then updates the stutter posterior probability using the updated prior probability.
The online learning module judges whether the stutter posterior probability P(L | D_c) is smaller than the stutter probability threshold η. If so, no stutter risk currently exists: the consumption decision module returns the extracted data together with a continuation mark to the business layer, which accesses the consumption decision module again without waiting after using the data, raising the consumption rate. If not, a stutter risk currently exists: the consumption decision module returns the extracted data together with an end mark to the business layer, which waits for a certain time after using the data before accessing the consumption decision module again, lowering the consumption rate.
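Putting the flow of Fig. 11 together with the BufferManager sketched earlier, the decision-module side might look like the following sketch; the `learner` object bundling the probability updates is an assumption, as are the mark strings.

```python
import time

def on_consumption_request(buffer_mgr, learner, eta):
    """Empty buffer: an actual stutter has occurred, so refresh the likelihood.
    Otherwise dequeue the oldest frame, refresh the posterior, pick a mark."""
    if not buffer_mgr.bodies:
        learner.record_actual_stutter()        # updates likelihood and posterior
        return "no-data", None
    frame = buffer_mgr.pop(consume_time=time.time())
    learner.update_prior(remaining=len(buffer_mgr.bodies))
    mark = "continue" if learner.posterior() < eta else "end"
    return mark, frame
```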
FIG. 12 is a timing diagram of one embodiment of processing buffered data. As shown, the process of processing buffered data may include the steps of:
S1202, the third client 140 records the network delay and reports the network delay to the server 130.
The server 130 forms a network delay sample and generates a macroscopic probability distribution model from the network delay sample S1204.
S1206, the server 130 issues the macroscopic probability distribution model to the first client 110.
S1208, the first client 110 calculates the stutter prior probability according to the macroscopic probability distribution model and records the individual network delays.

S1210, the first client 110 performs a Bayesian correction of the stutter prior probability according to the individual network delays to obtain the stutter posterior probability.

S1212, the first client 110 controls the consumption rate of the buffered data according to the stutter posterior probability and consumes the buffered data at that rate.
It should be understood that, although the steps in the above-described flowcharts are shown in order as indicated by the arrows, these steps are not necessarily performed in order as indicated by the arrows. The steps are not strictly limited to the order of execution unless explicitly recited herein, and the steps may be executed in other orders. Moreover, at least some of the steps of the flowcharts described above may include a plurality of sub-steps or stages that are not necessarily performed at the same time, but may be performed at different times, the order in which the sub-steps or stages are performed is not necessarily sequential, and may be performed in turn or alternately with at least a portion of the sub-steps or stages of other steps or other steps.
In one embodiment, as shown in FIG. 13, there is provided a processing system 1300 for buffering data, comprising: a server 1310 and a client 1320;
a server 1310 for obtaining a plurality of network delay samples; learning a plurality of network delay samples to obtain a macroscopic probability distribution model; the macroscopic probability distribution model comprises network delays and corresponding occurrence probabilities; issuing a macroscopic probability distribution model to the client 1320;
the client 1320 is configured to determine the stutter prior probability according to the macroscopic probability distribution model, correct the stutter prior probability according to the individual network delays recorded by the client to obtain the stutter posterior probability, and control the consumption rate of the buffered data by the client 1320 according to the stutter posterior probability; the stutter prior probability is the probability of a stutter caused by the client 1320 having no consumable buffered data; the stutter posterior probability is the probability of a stutter when the client 1320 experiences the individual network delays.
Since the steps performed by the server 1310 and the client 1320 are described in detail in the above embodiments, they are not described in detail herein.
According to this system for processing buffered data, the server learns a plurality of network delay samples to obtain the macroscopic probability distribution model and sends it to the client; the model is used as prior knowledge to determine the client's stutter prior probability, the individual network delays occurring locally at the client are used to correct the prior probability into the stutter posterior probability, and the consumption rate is controlled according to the posterior probability. This solves the inaccuracy caused by the lack of prior knowledge when predicting the client's stutter probability in the prior art, improves the prediction accuracy, and, by controlling the consumption rate on the basis of an accurately predicted stutter probability, keeps just enough buffered data in the buffer to resist network jitter. Controlling the consumption rate adaptively adjusts the amount of buffered data, minimizing it while avoiding stutters caused by insufficient buffered data. When applied to a service operation state synchronization scenario, this improves the smoothness of the synchronization while keeping operation delay low, effectively balancing minimal operation delay against smoothness through accurate stutter prediction.
Moreover, because the individual network delays occurring locally at the client are used to correct the stutter prior probability, the stutter probability can be predicted individually for each client's network environment, bringing the predicted probability closer to the client's actual stutter behaviour and improving prediction accuracy. When the client's local individual network delays change, the predicted stutter probability is updated dynamically, making the stutter prediction real-time.
Furthermore, the server learns the network delay samples to obtain the macroscopic probability distribution model and issues it to the client for use, avoiding any impact on the client's normal operation.
In one embodiment, as shown in fig. 14, a business operational state synchronization system 1400 is provided, comprising: a first server 1410, a second server 1420, a first client 1430, and a second client 1440;
a first server 1410, configured to obtain a plurality of network delay samples, learn the plurality of network delay samples, obtain a macroscopic probability distribution model, and send the macroscopic probability distribution model to a first client 1430; the macroscopic probability distribution model comprises network delays and corresponding occurrence probabilities;
A second client 1440 for displaying a service operation state and transmitting service operation data of the service operation state to the second server 1420;
a second server 1420 for forwarding the business operation data to the first client 1430;
a first client 1430 for storing the service operation data in the buffer and recording the network delay of receiving the service operation data to obtain an individual network delay;
the first client 1430 is further configured to determine the stutter prior probability according to the macroscopic probability distribution model, correct the stutter prior probability according to the individual network delays recorded by the first client 1430 to obtain the stutter posterior probability, control the consumption rate of the buffered data by the first client 1430 according to the stutter posterior probability, and consume the service operation data in the buffer at that rate so as to synchronously display the service operation state according to the service operation data;

the stutter prior probability is the probability of a stutter caused by the first client 1430 having no consumable buffered data;

the stutter posterior probability is the probability of a stutter when the first client 1430 experiences the individual network delays.
It should be noted that, the service operation state synchronization system 1400 may be a C/S architecture-based system, and applies a frame synchronization technique to synchronize service operation states.
In a specific implementation, the first server 1410 learns a plurality of network delay samples to obtain a macroscopic probability distribution model, and issues the macroscopic probability distribution model to the first client 1430; the macroscopic probability distribution model includes individual network delays and corresponding occurrence probabilities. When the service operation state synchronization is performed, the user performs the service operation through the second client 1440, and the second client 1440 displays the corresponding service operation state and transmits the service operation data of the service operation state to the second server 1420. The second server 1420 forwards the service operation data to the first client 1430, and the first client 1430 stores the service operation data to the buffer and records the network delay of receiving the service operation data, resulting in an individual network delay.
The first client 1430 loads the macroscopic probability distribution model to determine the stutter prior probability, corrects it according to the individual network delays recorded by the first client 1430 to obtain the stutter posterior probability, and controls the consumption rate of the buffered data according to the posterior probability. The first client 1430 consumes the service operation data in the buffer at that rate and displays the corresponding service operation state, thereby synchronizing with the service operation state displayed by the second client 1440.
Since the steps performed by each server and client are described in detail in the above embodiments, they are not described in detail herein.
According to this service operation state synchronization system, the first server learns a plurality of network delay samples to obtain the macroscopic probability distribution model; the model serves as prior knowledge to determine the client's stutter prior probability, the individual network delays occurring locally at the first client are used to correct the prior probability into the stutter posterior probability, and the consumption rate is controlled according to the posterior probability. This solves the inaccuracy caused by the lack of prior knowledge when predicting the client's stutter probability in the prior art, improves the prediction accuracy, and, by controlling the consumption rate on the basis of an accurately predicted stutter probability, keeps just enough buffered data in the buffer to resist network jitter. Controlling the consumption rate adaptively adjusts the amount of buffered data, minimizing it while avoiding stutters caused by insufficient buffered data. Applied to the service operation state synchronization scenario, this improves the smoothness of the synchronization while keeping operation delay low, effectively balancing minimal operation delay against smoothness through accurate stutter prediction.
Moreover, because the individual network delays occurring locally at the client are used to correct the stutter prior probability, the stutter probability can be predicted individually for each client's network environment, bringing the predicted probability closer to the client's actual stutter behaviour and improving prediction accuracy. When the client's local individual network delays change, the predicted stutter probability is updated dynamically, making the stutter prediction real-time.
In one embodiment, as shown in fig. 15, there is provided a processing apparatus 1500 for buffering data, including:
a model acquisition module 1502 for acquiring a macroscopic probability distribution model; the macroscopic probability distribution model is obtained by learning a plurality of network delay samples; the macroscopic probability distribution model comprises network delays and corresponding occurrence probabilities;
the prior probability module 1504 is configured to determine the stutter prior probability according to the macroscopic probability distribution model; the stutter prior probability is the probability of a stutter caused by the client having no consumable buffered data;

the posterior probability module 1506 is configured to correct the stutter prior probability according to the individual network delays recorded by the client to obtain the stutter posterior probability; the stutter posterior probability is the probability of a stutter when the client experiences the individual network delays;

and the consumption control module 1508 is configured to control the consumption rate of the buffered data by the client according to the stutter posterior probability.
In one embodiment, the consumption control module 1508 is specifically configured to: when the stutter posterior probability is larger than the preset stutter probability threshold, notify the business layer of the client to consume the buffered data at the first consumption rate; when the stutter posterior probability is smaller than the stutter probability threshold, notify the business layer of the client to consume the buffered data at the second consumption rate; the first consumption rate is lower than the second consumption rate.
In one embodiment, the processing apparatus 1500 for buffering data further includes:
the request determining module is used for determining that a business layer of the client side makes a data consumption request;
the data return module is used for extracting the target buffer data from the buffered data when buffered data exists at the client, and returning the target buffer data to the business layer of the client;
the consumption control module 1508 is specifically configured to:
generating an end mark and returning it to the business layer of the client, so that when the target buffer data has been consumed, the business layer waits for the preset consumption interval before making the next data consumption request;
The consumption control module 1508 is specifically configured to:
generating a continuation mark and returning it to the business layer of the client, so that the business layer makes the next data consumption request as soon as the target buffer data has been consumed.
In one embodiment, the prior probability module 1504 is specifically configured to:
detecting the amount of buffered data to obtain the buffered data amount; obtaining the no-stutter delay from the consumption rate and the buffered data amount, the no-stutter delay being the network delay at which the client does not stutter; obtaining the no-stutter delay probability distribution through the macroscopic probability distribution model, the distribution comprising the no-stutter delays and their corresponding occurrence probabilities; calculating the sum of the occurrence probabilities corresponding to the no-stutter delays to obtain the no-stutter probability; and determining the stutter prior probability from the no-stutter probability, the sum of the stutter prior probability and the no-stutter probability being 1.
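A minimal sketch of this prior computation, assuming buffered data is consumed one frame per fixed interval so that the buffered amount translates into a tolerable delay horizon; the frame-interval parameter is an assumption of the sketch.

```python
def stutter_prior(macro_model, buffered_frames, frame_interval_ms=50):
    """P(L): probability mass the macroscopic model puts on delays the
    current buffer cannot absorb."""
    # Delays up to this horizon can be bridged by the buffered frames.
    no_stutter_horizon = buffered_frames * frame_interval_ms
    p_no_stutter = sum(p for d, p in macro_model.items() if d <= no_stutter_horizon)
    return 1.0 - p_no_stutter   # prior + no-stutter probability = 1

macro_model = {40: 0.6, 90: 0.3, 200: 0.1}
print(stutter_prior(macro_model, buffered_frames=2))  # 1 - (0.6 + 0.3) = 0.1
```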
In one embodiment, the posterior probability module 1506 is specifically configured to:
determining the stutter delay likelihood probability according to the individual network delays, the likelihood probability being the probability of the individual network delays given that an actual stutter occurs at the client; determining the individual delay observation probability according to the individual network delays and the macroscopic probability distribution model, the observation probability being the probability that the client experiences the individual network delays; and performing a Bayesian correction of the stutter prior probability with the stutter delay likelihood probability and the individual delay observation probability to obtain the stutter posterior probability.
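The Bayesian correction itself is one line; a hedged sketch follows, in which the guard against a zero observation probability and the clamp to 1 are assumptions of the sketch, not part of the method.

```python
def stutter_posterior(prior, likelihood, observation):
    """Bayes correction: P(L | D_c) = P(L) * P(D_c | L) / P(D_c)."""
    if observation == 0.0:
        return prior                      # degenerate case handled defensively
    return min(1.0, prior * likelihood / observation)

print(stutter_posterior(prior=0.1, likelihood=0.02, observation=0.01))  # 0.2
```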
In one embodiment, the posterior probability module 1506 is specifically configured to:
determining the actual stutter delays when an actual stutter occurs at the client, the actual stutter delays being the individual network delays occurring before the actual stutter; calculating the ratio of the number of occurrences of each actual stutter delay to the total number of individual network delays to obtain the actual stutter delay probabilities; and calculating the cumulative product of the actual stutter delay probabilities to obtain the stutter delay likelihood probability, the likelihood probability being the probability of the individual network delays given that an actual stutter occurs at the client.
In one embodiment, the posterior probability module 1506 is specifically configured to:
obtaining individual delay probability distribution through the macroscopic probability distribution model; the individual delay probability distribution comprises the individual network delays and corresponding occurrence probabilities; and calculating the cumulative product of the occurrence probabilities corresponding to the individual network delays to obtain the individual delay observation probability.
In one embodiment, the processing apparatus 1500 for buffering data further includes:
the threshold updating module is configured to update the stutter probability threshold when the stutter posterior probability is smaller than the stutter probability threshold but an actual stutter occurs at the client;
The threshold updating module is specifically configured to:
incrementing the stutter probability threshold by a small step to obtain the first updated threshold; obtaining the updated stutter probability, which is the stutter posterior probability obtained after the consumption rate has been controlled according to the first updated threshold; and, when the updated stutter probability is smaller than the first updated threshold and the first client experiences no actual stutter, decrementing the first updated threshold by a small step to obtain the second updated threshold.
In one embodiment, the buffer data is at least one of business operation data and voice data.
According to this apparatus for processing buffered data, a plurality of network delay samples are learned to obtain the macroscopic probability distribution model, which serves as prior knowledge to determine the client's stutter prior probability; the individual network delays occurring locally at the client are used to correct the prior probability into the stutter posterior probability, and the consumption rate is controlled according to the posterior probability. This solves the inaccuracy caused by the lack of prior knowledge when predicting the client's stutter probability in the prior art, improves the prediction accuracy, and, by controlling the consumption rate on the basis of an accurately predicted stutter probability, keeps just enough buffered data in the buffer to resist network jitter. Controlling the consumption rate adaptively adjusts the amount of buffered data, minimizing it while avoiding stutters caused by insufficient buffered data. When applied to a service operation state synchronization scenario, this improves the smoothness of the synchronization while keeping operation delay low, effectively balancing minimal operation delay against smoothness through accurate stutter prediction.
Moreover, because the individual network delays occurring locally at the client are used to correct the stutter prior probability, the stutter probability can be predicted individually for each client's network environment, bringing the predicted probability closer to the client's actual stutter behaviour and improving prediction accuracy. When the client's local individual network delays change, the predicted stutter probability is updated dynamically, making the stutter prediction real-time.
In one embodiment, as shown in fig. 16, there is provided a processing apparatus 1600 for buffering data, comprising:
a sample acquisition module 1602 for acquiring a plurality of network delay samples;
the model building module 1604 is configured to learn the plurality of network delay samples to obtain a macroscopic probability distribution model; the macroscopic probability distribution model comprises network delays and corresponding occurrence probabilities;
a model issuing module 1606, configured to issue the macroscopic probability distribution model to the client, so that the client determines the stutter prior probability according to the macroscopic probability distribution model, corrects the stutter prior probability according to the individual network delays recorded by the client to obtain the stutter posterior probability, and controls the consumption rate of the buffered data according to the stutter posterior probability; the stutter prior probability is the probability of a stutter caused by the client having no consumable buffered data; the stutter posterior probability is the probability of a stutter when the client experiences the individual network delays.
In one embodiment, the model building module 1604 is specifically configured to:
generating the initial kernel density estimation function, which includes the plurality of network delay samples and a candidate density estimation window width; evaluating the initial kernel density estimation function with the asymptotic mean squared error criterion to obtain the optimal window width, the optimal window width being the preferred candidate density estimation window width; generating the delay probability density estimation function, which includes the plurality of network delay samples and the optimal window width; and discretizing the delay probability density estimation function to obtain the macroscopic probability distribution model, which includes the individual network delays and their corresponding occurrence probabilities.
According to this apparatus for processing buffered data, network delay samples are acquired and learned to obtain the macroscopic probability distribution model, which is sent to the client; the client can use the model as prior knowledge to obtain the stutter posterior probability and control the consumption rate accordingly. This solves the inaccuracy caused by the lack of prior knowledge when predicting the client's stutter probability in the prior art, improves the prediction accuracy, and, by controlling the consumption rate on the basis of an accurately predicted stutter probability, keeps enough buffered data in the buffer to resist network jitter. Controlling the consumption rate adaptively adjusts the amount of buffered data, minimizing it while avoiding stutters caused by insufficient buffered data. When applied to a service operation state synchronization scenario, this improves the smoothness of the synchronization while keeping operation delay low, effectively balancing minimal operation delay against smoothness through accurate stutter prediction.
Moreover, the network delay samples are learned to obtain the macroscopic probability distribution model before the model is issued to the client for use, avoiding any impact on the client's normal operation.
FIG. 17 illustrates an internal block diagram of a computer device in one embodiment. The computer device may be specifically the client 110 or the server 130 in fig. 1. As shown in fig. 17, the computer device includes a processor, a memory, a network interface, an input device, and a display screen connected by a system bus. The memory includes a nonvolatile storage medium and an internal memory. The non-volatile storage medium of the computer device stores an operating system, and may also store a computer program that, when executed by a processor, causes the processor to implement a method of processing buffered data. The internal memory may also store a computer program which, when executed by the processor, causes the processor to perform a method of buffering data. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, the input device of the computer equipment can be a touch layer covered on the display screen, can also be keys, a track ball or a touch pad arranged on the shell of the computer equipment, and can also be an external keyboard, a touch pad or a mouse and the like.
It will be appreciated by those skilled in the art that the structure shown in fig. 17 is merely a block diagram of a portion of the structure associated with the present application and is not limiting of the computer device to which the present application applies, and that a particular computer device may include more or fewer components than shown, or may combine certain components, or have a different arrangement of components.
In one embodiment, the processing apparatus for buffering data provided in the present application may be implemented in the form of a computer program, which may be executed on a computer device as shown in fig. 17. The memory of the computer device may store various program modules constituting the processing means of the buffered data, such as the model acquisition module 1502, the prior probability module 1504, the posterior probability module 1506, and the consumption control module 1508 shown in fig. 15. The computer program constituted by the respective program modules causes the processor to execute the steps in the processing method of buffered data of the respective embodiments of the present application described in the present specification.
For example, the computer apparatus shown in fig. 17 may execute step S202 by the model acquisition module 1502 in the processing device for buffering data as shown in fig. 15.
In one embodiment, a computer device is provided, including a memory and a processor, the memory storing a computer program that, when executed by the processor, causes the processor to perform the steps of the above method of processing buffered data. The steps here may be the steps of the method of processing buffered data of any of the embodiments described above.
In one embodiment, a computer-readable storage medium is provided, storing a computer program that, when executed by a processor, causes the processor to perform the steps of the above method of processing buffered data. The steps here may be the steps of the method of processing buffered data of any of the embodiments described above.
Those skilled in the art will appreciate that all or part of the processes in the methods of the above embodiments may be implemented by a computer program instructing relevant hardware. The program may be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; nevertheless, as long as a combination of technical features contains no contradiction, it should be considered within the scope of this specification.
The above embodiments express only a few implementations of the present application, and although their description is relatively specific and detailed, they should not be construed as limiting the scope of the present application. It should be noted that those of ordinary skill in the art may make various modifications and improvements without departing from the concept of the present application, and these fall within the protection scope of the present application. Accordingly, the protection scope of the present application shall be subject to the appended claims.

Claims (24)

1. A method of processing buffered data, comprising:
acquiring a macroscopic probability distribution model; the macroscopic probability distribution model is obtained by learning a plurality of network delay samples; the macroscopic probability distribution model comprises network delays and corresponding occurrence probabilities;
determining a katon prior probability according to the macroscopic probability distribution model; the katon prior probability is the probability that a katon occurs because the client has no consumable buffered data;
correcting the katon prior probability according to individual network delays recorded by the client to obtain a katon posterior probability; the katon posterior probability is the probability that a katon occurs given that the client experiences the individual network delays;
and controlling the speed at which the client consumes the buffered data according to the katon posterior probability.
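For orientation, the correction recited in claim 1 is an application of Bayes' rule. In compact, non-patent notation, with D standing for the individual network delays recorded by the client:

```latex
P(\mathrm{katon} \mid D) = \frac{P(D \mid \mathrm{katon})\, P(\mathrm{katon})}{P(D)}
```

Here P(katon) is the katon prior probability, P(D | katon) is the katon delay likelihood probability detailed in claim 6, P(D) is the individual delay observation probability detailed in claim 7, and P(katon | D) is the resulting katon posterior probability.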
2. The method of claim 1, wherein controlling the speed at which the client consumes the buffered data according to the katon posterior probability comprises:
when the katon posterior probability is greater than a preset katon probability threshold, notifying a service layer of the client to consume the buffered data at a first consumption speed;
when the katon posterior probability is less than the katon probability threshold, notifying the service layer of the client to consume the buffered data at a second consumption speed; the first consumption speed is lower than the second consumption speed.
3. The method of claim 2, further comprising:
determining that the service layer of the client has initiated a data consumption request;
when buffered data exists at the client, extracting target buffered data from the buffered data and returning the target buffered data to the service layer of the client;
wherein notifying the service layer of the client to consume the buffered data at the first consumption speed comprises:
generating an end mark and returning the end mark to the service layer of the client, so that the service layer of the client, upon finishing consuming the target buffered data, waits for a preset consumption interval before initiating the next data consumption request;
and notifying the service layer of the client to consume the buffered data at the second consumption speed comprises:
generating a continuation mark and returning the continuation mark to the service layer of the client, so that the service layer of the client initiates the next data consumption request as soon as the target buffered data is consumed.
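Purely as an illustration of the end-mark/continuation-mark handshake in claims 2 and 3 (the function names, threshold value, and consumption interval below are invented, not taken from the patent), a minimal Python sketch:

```python
import time

KATON_THRESHOLD = 0.05       # preset katon probability threshold (assumed value)
CONSUMPTION_INTERVAL = 0.02  # preset consumption interval in seconds (assumed value)
END_MARK, CONTINUE_MARK = "end", "continue"

def next_chunk(buffer, katon_posterior):
    """Buffer layer: hand out target buffered data plus a pacing mark."""
    data = buffer.pop(0) if buffer else None
    # High katon risk: the slower first consumption speed, signalled by the end mark.
    mark = END_MARK if katon_posterior > KATON_THRESHOLD else CONTINUE_MARK
    return data, mark

def service_layer_loop(buffer, posterior_fn):
    """Service layer: re-request data immediately, or after a short wait."""
    while buffer:
        data, mark = next_chunk(buffer, posterior_fn())
        print("consumed", data)               # stand-in for real consumption
        if mark == END_MARK:
            time.sleep(CONSUMPTION_INTERVAL)  # wait before the next request

service_layer_loop(list(range(5)), lambda: 0.08)  # posterior above threshold: paced
```

Returning a mark with each chunk lets the buffer layer pace the service layer without the service layer ever inspecting the probability itself.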
4. The method of claim 1, wherein determining the katon prior probability according to the macroscopic probability distribution model comprises:
detecting the buffered data to obtain a buffered data amount;
obtaining non-katon delays according to the consumption speed and the buffered data amount; a non-katon delay is a network delay under which no katon occurs at the client;
obtaining a non-katon delay probability distribution through the macroscopic probability distribution model; the non-katon delay probability distribution comprises the non-katon delays and the corresponding occurrence probabilities;
calculating the sum of the occurrence probabilities corresponding to the non-katon delays to obtain a non-katon probability;
and determining the katon prior probability according to the non-katon probability; the sum of the katon prior probability and the non-katon probability is 1.
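A minimal sketch of the prior computation in claim 4, assuming the macroscopic model is a plain mapping from discretized delays in milliseconds to occurrence probabilities (all identifiers and numbers are illustrative):

```python
def katon_prior(macro_model, buffered_amount, consumption_speed):
    """macro_model: {delay_ms: occurrence_probability}.
    buffered_amount: units of consumable data currently buffered.
    consumption_speed: units consumed per second."""
    # Any delay shorter than the time the buffer takes to drain is a
    # non-katon delay: the buffer absorbs it without starving.
    cover_time_ms = 1000.0 * buffered_amount / consumption_speed
    non_katon_prob = sum(p for delay, p in macro_model.items()
                         if delay <= cover_time_ms)
    return 1.0 - non_katon_prob  # prior and non-katon probability sum to 1

model = {20: 0.5, 50: 0.3, 120: 0.15, 400: 0.05}
print(katon_prior(model, buffered_amount=6, consumption_speed=60))  # ~0.2
```

With this sample model, a buffer covering 100 ms of consumption absorbs the 20 ms and 50 ms delays, so the katon prior is 1 - 0.8 = 0.2.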
5. The method of claim 1, wherein correcting the katon prior probability according to the individual network delays recorded by the client to obtain the katon posterior probability comprises:
determining a katon delay likelihood probability according to the individual network delays; the katon delay likelihood probability is the probability of the individual network delays given that an actual katon occurs at the client;
determining an individual delay observation probability according to the individual network delays and the macroscopic probability distribution model; the individual delay observation probability is the probability that the client experiences the individual network delays;
and performing Bayesian correction on the katon prior probability using the katon delay likelihood probability and the individual delay observation probability to obtain the katon posterior probability.
6. The method of claim 5, wherein determining the katon delay likelihood probability according to the individual network delays comprises:
when an actual katon occurs at the client, determining actual katon delays; an actual katon delay is an individual network delay that occurred before the actual katon;
calculating the ratio of the number of occurrences of each actual katon delay to the total number of individual network delays to obtain actual katon delay probabilities;
and calculating the cumulative product of the actual katon delay probabilities to obtain the katon delay likelihood probability; the katon delay likelihood probability is the probability of the individual network delays given that an actual katon occurs at the client.
7. The method of claim 5, wherein determining the individual delay observation probability according to the individual network delays and the macroscopic probability distribution model comprises:
obtaining an individual delay probability distribution through the macroscopic probability distribution model; the individual delay probability distribution comprises the individual network delays and the corresponding occurrence probabilities;
and calculating the cumulative product of the occurrence probabilities corresponding to the individual network delays to obtain the individual delay observation probability.
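Claims 5 to 7 combine into a single Bayesian update. The sketch below is illustrative only: the normalization in the translated claim 6 is ambiguous, so the likelihood is normalized here over the recorded pre-katon delays, and a small floor stands in for delays absent from the model:

```python
from collections import Counter

def katon_posterior(prior, individual_delays, pre_katon_delays, macro_model):
    """prior: katon prior probability from the macroscopic model.
    individual_delays: individual network delays recorded by the client.
    pre_katon_delays: delays observed just before actual katon events.
    macro_model: {delay_ms: occurrence_probability}."""
    pre_counts = Counter(pre_katon_delays)
    likelihood = 1.0  # katon delay likelihood probability (claim 6)
    evidence = 1.0    # individual delay observation probability (claim 7)
    for d in individual_delays:
        likelihood *= pre_counts[d] / len(pre_katon_delays)
        evidence *= macro_model.get(d, 1e-9)  # floor for unseen delays
    return min(1.0, likelihood * prior / evidence)  # Bayes correction (claim 5)

model = {20: 0.5, 50: 0.3, 120: 0.15, 400: 0.05}
print(katon_posterior(0.2, [120], [120, 400, 400, 120], model))  # ~0.67
```

A 120 ms delay is twice as probable before a katon (0.5) as in general (0.15), so the prior of 0.2 is corrected upward to about 0.67.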
8. The method of claim 3, further comprising:
updating the katon probability threshold when the katon posterior probability is less than the katon probability threshold but an actual katon occurs at the client;
wherein updating the katon probability threshold comprises:
increasing the katon probability threshold by a small step to obtain a first updated threshold;
obtaining an updated katon probability; the updated katon probability is the katon posterior probability obtained after the consumption speed is controlled according to the first updated threshold;
and when the updated katon probability is less than the first updated threshold and no actual katon occurs at the client, decreasing the first updated threshold by a small step to obtain a second updated threshold.
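The small-step adaptation of claim 8 can be stated in a few lines; the step size is an assumed value and the function mirrors only the two cases the claim recites:

```python
STEP = 0.01  # assumed small step size

def update_threshold(threshold, posterior, actual_katon):
    """Small-step katon probability threshold adaptation, per claim 8."""
    if posterior < threshold and actual_katon:
        return threshold + STEP  # first updated threshold: prediction missed a katon
    if posterior < threshold and not actual_katon:
        return threshold - STEP  # second updated threshold: predictions safe again
    return threshold
```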
9. The method of claim 1, wherein the buffered data is at least one of service operation data and voice data.
10. A method of processing buffered data, comprising:
acquiring a plurality of network delay samples;
learning the plurality of network delay samples to obtain a macroscopic probability distribution model; the macroscopic probability distribution model comprises network delays and corresponding occurrence probabilities;
issuing the macroscopic probability distribution model to a client, so that the client determines a katon prior probability according to the macroscopic probability distribution model, corrects the katon prior probability according to individual network delays recorded by the client to obtain a katon posterior probability, and controls the speed at which the client consumes buffered data according to the katon posterior probability; the katon prior probability is the probability that a katon occurs because the client has no consumable buffered data; the katon posterior probability is the probability that a katon occurs given that the client experiences the individual network delays.
11. The method of claim 10, wherein learning the plurality of network delay samples results in a macroscopic probability distribution model, comprising:
generating an initial kernel density estimation function; the initial kernel density estimation function includes the plurality of network delay samples and a candidate density estimation window width;
evaluating the initial kernel density estimation function using an asymptotic mean square error criterion to obtain an optimal window width; the optimal window width is the best of the candidate density estimation window widths;
generating a delay probability density estimation function; the delay probability density estimation function includes the plurality of network delay samples and the optimal window width;
discretizing the delay probability density estimation function to obtain the macroscopic probability distribution model; the macroscopic probability distribution model comprises the network delays and the corresponding occurrence probabilities.
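A sketch of the model construction in claim 11 using SciPy's Gaussian kernel density estimate. Silverman's rule-of-thumb bandwidth is used here as a stand-in for the window width selected by the asymptotic mean square error criterion, and the grid step is an assumed discretization:

```python
import numpy as np
from scipy.stats import gaussian_kde

def build_macro_model(delay_samples_ms, grid_step=10):
    """Learn a delay density from samples and discretize it into a
    {delay_ms: occurrence_probability} table that sums to 1."""
    kde = gaussian_kde(delay_samples_ms, bw_method="silverman")
    grid = np.arange(0.0, max(delay_samples_ms) + grid_step, grid_step)
    density = kde(grid)
    probabilities = density / density.sum()  # discretization + normalization
    return {int(d): float(p) for d, p in zip(grid, probabilities)}

samples = np.random.lognormal(mean=3.5, sigma=0.6, size=5000)  # synthetic delays
macro_model = build_macro_model(samples)
```

The resulting table is what a server would issue to clients; any serialization (for example JSON) would do.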
12. An apparatus for processing buffered data, comprising:
a model acquisition module, configured to acquire a macroscopic probability distribution model; the macroscopic probability distribution model is obtained by learning a plurality of network delay samples; the macroscopic probability distribution model comprises network delays and corresponding occurrence probabilities;
a prior probability module, configured to determine a katon prior probability according to the macroscopic probability distribution model; the katon prior probability is the probability that a katon occurs because the client has no consumable buffered data;
a posterior probability module, configured to correct the katon prior probability according to individual network delays recorded by the client to obtain a katon posterior probability; the katon posterior probability is the probability that a katon occurs given that the client experiences the individual network delays;
and a consumption control module, configured to control the speed at which the client consumes the buffered data according to the katon posterior probability.
13. The apparatus of claim 12, wherein the consumption control module is configured to: notify a service layer of the client to consume the buffered data at a first consumption speed when the katon posterior probability is greater than a preset katon probability threshold; and notify the service layer of the client to consume the buffered data at a second consumption speed when the katon posterior probability is less than the katon probability threshold; the first consumption speed is lower than the second consumption speed.
14. The apparatus of claim 13, further comprising:
a request determination module, configured to determine that the service layer of the client has initiated a data consumption request;
a data return module, configured to extract target buffered data from the buffered data when buffered data exists at the client, and return the target buffered data to the service layer of the client;
wherein the consumption control module is configured to generate an end mark and return the end mark to the service layer of the client, so that the service layer of the client, upon finishing consuming the target buffered data, waits for a preset consumption interval before initiating the next data consumption request; and to generate a continuation mark and return the continuation mark to the service layer of the client, so that the service layer of the client initiates the next data consumption request as soon as the target buffered data is consumed.
15. The apparatus of claim 12, wherein the prior probability module is configured to: detect the buffered data to obtain a buffered data amount; obtain non-katon delays according to the consumption speed and the buffered data amount, a non-katon delay being a network delay under which no katon occurs at the client; obtain a non-katon delay probability distribution through the macroscopic probability distribution model, the non-katon delay probability distribution comprising the non-katon delays and the corresponding occurrence probabilities; calculate the sum of the occurrence probabilities corresponding to the non-katon delays to obtain a non-katon probability; and determine the katon prior probability according to the non-katon probability, the sum of the katon prior probability and the non-katon probability being 1.
16. The apparatus of claim 12, wherein the posterior probability module is configured to: determine a katon delay likelihood probability according to the individual network delays, the katon delay likelihood probability being the probability of the individual network delays given that an actual katon occurs at the client; determine an individual delay observation probability according to the individual network delays and the macroscopic probability distribution model, the individual delay observation probability being the probability that the client experiences the individual network delays; and perform Bayesian correction on the katon prior probability using the katon delay likelihood probability and the individual delay observation probability to obtain the katon posterior probability.
17. The apparatus of claim 16, wherein the posterior probability module is configured to: determine actual katon delays when an actual katon occurs at the client, an actual katon delay being an individual network delay that occurred before the actual katon; calculate the ratio of the number of occurrences of each actual katon delay to the total number of individual network delays to obtain actual katon delay probabilities; and calculate the cumulative product of the actual katon delay probabilities to obtain the katon delay likelihood probability, the katon delay likelihood probability being the probability of the individual network delays given that an actual katon occurs at the client.
18. The apparatus of claim 16, wherein the posterior probability module is configured to obtain an individual delay probability distribution from the macroscopic probability distribution model; the individual delay probability distribution comprises the individual network delays and corresponding occurrence probabilities; and calculating the cumulative product of the occurrence probabilities corresponding to the individual network delays to obtain the individual delay observation probability.
19. The apparatus of claim 14, further comprising:
a threshold updating module, configured to update the katon probability threshold when the katon posterior probability is less than the katon probability threshold but an actual katon occurs at the client;
wherein the threshold updating module is specifically configured to:
increase the katon probability threshold by a small step to obtain a first updated threshold; obtain an updated katon probability, the updated katon probability being the katon posterior probability obtained after the consumption speed is controlled according to the first updated threshold; and when the updated katon probability is less than the first updated threshold and no actual katon occurs at the client, decrease the first updated threshold by a small step to obtain a second updated threshold.
20. The apparatus of claim 12, wherein the buffered data is at least one of service operation data and voice data.
21. An apparatus for processing buffered data, comprising:
a sample acquisition module, configured to acquire a plurality of network delay samples;
a model construction module, configured to learn the plurality of network delay samples to obtain a macroscopic probability distribution model; the macroscopic probability distribution model comprises network delays and corresponding occurrence probabilities;
and a model issuing module, configured to issue the macroscopic probability distribution model to a client, so that the client determines a katon prior probability according to the macroscopic probability distribution model, corrects the katon prior probability according to individual network delays recorded by the client to obtain a katon posterior probability, and controls the speed at which the client consumes buffered data according to the katon posterior probability; the katon prior probability is the probability that a katon occurs because the client has no consumable buffered data; the katon posterior probability is the probability that a katon occurs given that the client experiences the individual network delays.
22. The apparatus of claim 21, wherein the model construction module is configured to: generate an initial kernel density estimation function, the initial kernel density estimation function including the plurality of network delay samples and candidate density estimation window widths; evaluate the initial kernel density estimation function using an asymptotic mean square error criterion to obtain an optimal window width, the optimal window width being the best of the candidate density estimation window widths; generate a delay probability density estimation function, the delay probability density estimation function including the plurality of network delay samples and the optimal window width; and discretize the delay probability density estimation function to obtain the macroscopic probability distribution model, the macroscopic probability distribution model comprising the network delays and the corresponding occurrence probabilities.
23. A computer readable storage medium storing a computer program which, when executed by a processor, causes the processor to perform the steps of the method of any one of claims 1 to 11.
24. A computer device comprising a memory and a processor, the memory storing a computer program that, when executed by the processor, causes the processor to perform the steps of the method of any of claims 1 to 11.
CN201910610756.0A 2019-07-08 2019-07-08 Method, device and system for processing buffered data Active CN110266611B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910610756.0A CN110266611B (en) 2019-07-08 2019-07-08 Method, device and system for processing buffered data

Publications (2)

Publication Number Publication Date
CN110266611A CN110266611A (en) 2019-09-20
CN110266611B true CN110266611B (en) 2023-06-23

Family

ID=67924975

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910610756.0A Active CN110266611B (en) 2019-07-08 2019-07-08 Method, device and system for processing buffered data

Country Status (1)

Country Link
CN (1) CN110266611B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021197832A1 (en) * 2020-03-30 2021-10-07 British Telecommunications Public Limited Company Low latency content delivery
CN112888062B (en) * 2021-03-16 2023-01-31 芯原微电子(成都)有限公司 Data synchronization method and device, electronic equipment and computer readable storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1996025989A2 (en) * 1995-02-24 1996-08-29 Velocity, Inc. Method and apparatus for minimizing the impact of network delays
CN104156947A (en) * 2014-07-23 2014-11-19 小米科技有限责任公司 Image segmentation method, mechanism and device
CN104580006A (en) * 2014-12-24 2015-04-29 无锡儒安科技有限公司 Mobile network sending rate control method, device and system
CN105142002A (en) * 2015-08-07 2015-12-09 广州博冠信息科技有限公司 Audio/video live broadcasting method and device as well as control method and device
CN108600790A (en) * 2018-05-17 2018-09-28 北京奇艺世纪科技有限公司 A kind of detection method and device of interim card failure
CN109343997A (en) * 2018-10-31 2019-02-15 Oppo广东移动通信有限公司 Caton detection method, device, terminal and storage medium
CN109921941A (en) * 2019-03-18 2019-06-21 腾讯科技(深圳)有限公司 Network servicequality evaluates and optimizes method, apparatus, medium and electronic equipment

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9380096B2 (en) * 2006-06-09 2016-06-28 Qualcomm Incorporated Enhanced block-request streaming system for handling low-latency streaming
US9756142B2 (en) * 2013-03-14 2017-09-05 The Regents Of The University Of California System and method for delivering video data from a server in a wireless network by caching the video data

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Zhang Yuqi; Wu Yenan; He Yingyu. Research on network video transmission and user experience based on data analysis. Modern Information Technology, 2018, No. 12, full text. *
Shen Yong; Zhang Xinrong. Buffer design of client terminal systems in real-time streaming media transmission. Microprocessors, 2007, No. 5, full text. *

Also Published As

Publication number Publication date
CN110266611A (en) 2019-09-20


Legal Events

PB01: Publication
TA01: Transfer of patent application right
- Effective date of registration: 20210112
- Applicant after: Tencent Technology (Shanghai) Co., Ltd., 5/F, Area C, 1801 Hongmei Road, Xuhui District, Shanghai, 201200
- Applicant before: Tencent Technology (Shenzhen) Co., Ltd., 35/F, Tencent Building, No. 1 High-tech Zone, Nanshan District, Shenzhen, Guangdong, 518000
SE01: Entry into force of request for substantive examination
GR01: Patent grant