CN114640754A - Video jitter detection method and device, computer equipment and storage medium - Google Patents

Video jitter detection method and device, computer equipment and storage medium

Info

Publication number
CN114640754A
CN114640754A (application CN202210220933.6A)
Authority
CN
China
Prior art keywords
jitter
value
determining
video
frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210220933.6A
Other languages
Chinese (zh)
Inventor
刘则林
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jingdong Technology Information Technology Co Ltd
Original Assignee
Jingdong Technology Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jingdong Technology Information Technology Co Ltd filed Critical Jingdong Technology Information Technology Co Ltd
Priority to CN202210220933.6A priority Critical patent/CN114640754A/en
Publication of CN114640754A publication Critical patent/CN114640754A/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/14Picture signal circuitry for video frequency region
    • H04N5/21Circuitry for suppressing or minimising disturbance, e.g. moiré or halo
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N17/00Diagnosis, testing or measuring for television systems or their details
    • H04N17/02Diagnosis, testing or measuring for television systems or their details for colour television signals

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The present disclosure provides a video jitter detection method and apparatus, a computer device, and a storage medium. The method includes: determining the frame receiving time of a current video frame; determining jitter distribution information of buffered video frames; and determining a video jitter value according to the frame receiving time and the jitter distribution information. Because both network jitter and retransmission affect the frame receiving time of the current video frame, a jitter value analyzed with the aid of the detected frame receiving time effectively represents their influence on the jitter value. The detection accuracy of the video jitter value is therefore effectively improved, and when the video jitter value is used as the basis for video network jitter analysis, it effectively helps improve the video playing effect.

Description

Video jitter detection method and device, computer equipment and storage medium
Technical Field
The present disclosure relates to the field of video processing technologies, and in particular, to a video jitter detection method and apparatus, a computer device, and a storage medium.
Background
When a video is played, network jitter can cause playback to stall or be interrupted, which greatly degrades the user experience.
In the related art, Kalman filtering is usually used to detect a video jitter value, which assists in enlarging the jitter buffer to reduce the influence of network jitter and thereby improve the video playing effect.
In this way, video jitter detection is not accurate enough, so the detected video jitter value offers little reference value and the playing effect of subsequent video suffers.
BRIEF SUMMARY OF THE PRESENT DISCLOSURE
The present disclosure aims to solve, at least to some extent, one of the technical problems in the related art.
Therefore, an object of the present disclosure is to provide a video jitter detection method, apparatus, computer device and storage medium. Because both network jitter and retransmission affect the frame receiving time of the current video frame, a jitter value analyzed with the aid of the detected frame receiving time effectively represents their influence on the jitter value. This effectively improves the detection accuracy of the video jitter value, and when the video jitter value is used as the basis for video network jitter analysis, it effectively helps improve the video playing effect.
In order to achieve the above object, an embodiment of the first aspect of the present disclosure provides a video jitter detection method, including: determining the frame receiving time of the current video frame; determining jitter distribution information of the cached video frame; and determining a video jitter value according to the frame receiving time and the jitter distribution information.
According to the video jitter detection method provided by the embodiment of the first aspect of the present disclosure, the frame receiving time of the current video frame is determined, the jitter distribution information of the buffered video frames is determined, and the video jitter value is determined according to the frame receiving time and the jitter distribution information. Because both network jitter and retransmission may affect the frame receiving time of the current video frame, a jitter value analyzed with the aid of that frame receiving time effectively represents their influence on the jitter value. The detection accuracy of the video jitter value is therefore effectively improved, and when the video jitter value is used as the basis for video network jitter analysis, it effectively helps improve the video playing effect.
In order to achieve the above object, a video jitter detection apparatus according to an embodiment of the second aspect of the present disclosure includes: the first determining module is used for determining the frame receiving time of the current video frame; the second determining module is used for determining the jitter distribution information of the cached video frame; and the third determining module is used for determining the video jitter value according to the frame receiving time and the jitter distribution information.
The video jitter detection apparatus provided in the embodiment of the second aspect of the present disclosure determines the frame receiving time of the current video frame, determines the jitter distribution information of the buffered video frames, and determines the video jitter value according to the frame receiving time and the jitter distribution information. Because both network jitter and retransmission may affect the frame receiving time of the current video frame, a jitter value analyzed with the aid of the detected frame receiving time effectively represents their influence on the jitter value. The detection accuracy of the video jitter value is therefore effectively improved, and when the video jitter value is used as the basis for video network jitter analysis, it effectively helps improve the video playing effect.
An embodiment of a third aspect of the present disclosure provides a computer device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the computer program, the video jitter detection method set forth in the embodiment of the first aspect of the present disclosure is implemented.
An embodiment of a fourth aspect of the present disclosure provides a non-transitory computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, it implements the video jitter detection method set forth in the first aspect of the present disclosure.
An embodiment of a fifth aspect of the present disclosure provides a computer program product; when its instructions are executed by a processor, the video jitter detection method set forth in the first aspect of the present disclosure is performed.
Additional aspects and advantages of the disclosure will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the disclosure.
Drawings
The foregoing and/or additional aspects and advantages of the present disclosure will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
fig. 1 is a flowchart illustrating a video jitter detection method according to an embodiment of the present disclosure;
FIG. 2 is a flow diagram of video processing in an embodiment of the disclosure;
FIG. 3 is a block diagram of a computing module in an embodiment of the disclosure;
fig. 4 is a flowchart illustrating a video jitter detection method according to another embodiment of the present disclosure;
fig. 5 is a flowchart illustrating a video jitter detection method according to another embodiment of the present disclosure;
FIG. 6 is a schematic diagram illustrating a structure of an apparatus for detecting video jitter according to an embodiment of the present disclosure;
fig. 7 is a schematic structural diagram of a video jitter detection apparatus according to another embodiment of the present disclosure;
fig. 8 is a schematic structural diagram of a video jitter detection apparatus according to another embodiment of the present disclosure;
FIG. 9 illustrates a block diagram of an exemplary electronic device suitable for use in implementing embodiments of the present disclosure.
Detailed Description
Reference will now be made in detail to the embodiments of the present disclosure, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or to elements having the same or similar functions throughout. The embodiments described below with reference to the drawings are exemplary, serve only to illustrate the present disclosure, and should not be construed as limiting it. On the contrary, the embodiments of the disclosure include all changes, modifications and equivalents coming within the spirit and scope of the appended claims.
Fig. 1 is a flowchart illustrating a video jitter detection method according to an embodiment of the disclosure.
It should be noted that the video jitter detection method of this embodiment is executed by a video jitter detection apparatus, which may be implemented in software and/or hardware. The apparatus may be configured in a computer device, which may include, but is not limited to, a terminal, a server, and the like.
As shown in fig. 1, the video jitter detection method includes:
S101: The frame receiving time of the current video frame is determined.
The video frame to be subjected to jitter analysis at present may be referred to as a current video frame.
The frame receiving time refers to the time taken by the computer device playing the video to receive the complete current video frame from the server.
That is, when the server compresses each video frame, it splits the frame into multiple data packets in advance. On receiving a video frame acquisition request from the computer device, the server sends the data packets of that frame to the computer device one by one, so the computer device receives the packets one by one and performs statistical analysis on the receiving times of all the data packets that form the complete video frame to obtain the frame receiving time.
The computer device may perform the statistical analysis progressively as each data packet is received, or trigger it when the last data packet arrives. Alternatively, the computer device may receive the frame receiving time from a third-party timing device, which obtains it by statistically analyzing the times at which it observes the data packets of the current video frame; this is not limited here.
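The per-packet statistical analysis described above can be sketched as follows (a minimal illustration, not the disclosure's implementation; the class and method names are hypothetical):

```python
class FrameTimer:
    """Track the frame receiving time of one video frame incrementally,
    updating as each message data packet of the frame arrives."""

    def __init__(self):
        self.first_ts = None   # arrival time of the earliest packet, in ms
        self.last_ts = None    # arrival time of the latest packet, in ms

    def on_packet(self, recv_ts_ms):
        # Update the running min/max of packet arrival times.
        if self.first_ts is None or recv_ts_ms < self.first_ts:
            self.first_ts = recv_ts_ms
        if self.last_ts is None or recv_ts_ms > self.last_ts:
            self.last_ts = recv_ts_ms

    def frame_receiving_time(self):
        """Span between the first and the last received packet of the frame."""
        return self.last_ts - self.first_ts


timer = FrameTimer()
for ts in [100, 104, 131]:   # packet arrival times in ms
    timer.on_packet(ts)
print(timer.frame_receiving_time())  # 31
```

The same result is obtained whether the analysis runs per packet, as here, or once when the last packet arrives, since only the minimum and maximum arrival times matter.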
The application scenario in the embodiment of the present disclosure may be specifically described as follows:
the video jitter detection method in the embodiment of the present disclosure may be applied to a video processing scene. As shown in fig. 2, which is a flowchart of video processing in the embodiment of the present disclosure, the processing includes: a sending end collects the user's sound and video pictures, encodes and compresses the collected original video, and packs the encoded video data according to the Real-time Transport Protocol (RTP); when the encoded video data exceeds the Maximum Transmission Unit (MTU), the video data is split into a plurality of RTP data packets, which are then stored on a server and pushed by the server to the computer device receiving the video. The data packets travel from the sending end to the receiving device over the network, and most real-time transmission scenarios use the User Datagram Protocol (UDP); due to the unreliability of UDP, the receiving device may lose packets while receiving them.
It can be understood that, in a video processing scene, a data transmission protocol is used to transmit the data packets of a video frame. To mitigate packet loss during transmission, a retransmission mechanism may be configured: when a packet loss event occurs, the lost data packet is retransmitted to ensure that all of the video's data packets are delivered correctly.
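A retransmission mechanism of this kind typically detects loss from gaps in the RTP sequence-number space. A minimal sketch under that assumption (the function name is hypothetical, and real RTP sequence numbers wrap at 2**16, which is ignored here):

```python
def missing_sequence_numbers(received_seqs):
    """Return the gaps in the received sequence numbers, i.e. packets that
    were presumably lost and should be requested for retransmission."""
    received = sorted(set(received_seqs))
    lost = []
    for prev, cur in zip(received, received[1:]):
        # Any number strictly between two received neighbors is missing.
        lost.extend(range(prev + 1, cur))
    return lost


print(missing_sequence_numbers([10, 11, 14, 15]))  # [12, 13]
```

The receiver would then request packets 12 and 13 again, so the frame's receiving time also reflects the extra round trip caused by retransmission.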
In the embodiment of the present disclosure, statistical analysis may be performed on the receiving times of the video frame by the computer device in the above scenario to obtain the frame receiving time of the current video frame, which may then be used to analyze the subsequent video jitter value. Because network jitter and retransmission both affect the frame receiving time of the current video frame, a jitter value analyzed with the aid of the detected frame receiving time effectively represents their influence on the jitter value, so the detection accuracy of the video jitter value is improved. For the specific implementation of determining the video jitter value according to the frame receiving time, reference may be made to the following embodiments.
S102: jitter distribution information for the buffered video frames is determined.
A cached video frame refers to a video frame that the computer device playing the video has already received from the server and cached.
The jitter distribution information may be used to describe information related to video jitter, such as the recorded video jitter times and the specific jitter value corresponding to each jitter time during the process in which the computer device playing the video receives and caches video frames from the server.
For example, the jitter distribution information may be represented by a histogram or by a probability distribution function. When a histogram is used, the granularity of the histogram and the position and value of each jitter entry may be recorded; alternatively, any one or more kinds of data capable of representing the jitter distribution (e.g., video jitter times and specific jitter values) may be used. This is not limited here.
In the embodiment of the present disclosure, the jitter distribution information of the cached video frames may be determined by inputting the video frames into a pre-trained video parsing model (the model is trained in advance so that it can parse input video frames into corresponding jitter distribution information; its form is not limited), or by mathematical operation, or by an engineering method; this is not limited here.
That is to say, after the frame receiving time of the current video frame is determined, the jitter distribution information of the cached video frames may be determined. Both are used to determine the subsequent video jitter value, which further ensures the reference value of the determined video jitter value and improves its accuracy.
S103: and determining a video jitter value according to the frame receiving time and the jitter distribution information.
The degree of change in the delay of video frames caused by one or more factors may be referred to as the video jitter value. These factors may be, for example, end-to-end delay of the video information due to network congestion, packet-loss retransmission, and the like during real-time communication, or the delay of packetized video frames transmitted over the same link due to the same causes.
After the frame receiving time of the current video frame and the jitter distribution information of the cached video frames are determined, the frame receiving time and the jitter distribution information may be input into a pre-trained jitter detection model to obtain the video jitter value output by the model; the jitter detection model may be trained in advance based on an artificial intelligence method. Alternatively, the video jitter value may be determined from the frame receiving time and the jitter distribution information by mathematical operation or by an engineering method; this is not limited here.
In the embodiment of the present disclosure, a computing module may be used to assist in determining the video jitter value according to the frame receiving time and the jitter distribution information. As shown in fig. 3, which is a structural diagram of the computing module in the embodiment of the present disclosure, the computing module includes: a packet buffer (PacketBuffer) for grouping RTP packets, which can be used to ensure that all RTP packets of a video frame packetized by the sending end are received; a frame reference finder (FrameReferenceFinder) for obtaining the reference frames on which decoding of the current video frame depends, ensuring that the current video frame can be correctly decoded; a frame buffer (FrameBuffer) for preventing the frame completion time of a video frame from fluctuating due to network jitter or retransmission; and a jitter estimator (JitterEstimator) for calculating the frame receiving time of a video frame, using a histogram to ensure the jitter covers a threshold proportion of frames, calculating the jitter value, and providing the jitter value to the frame buffer as the basis for handling video network jitter.
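The jitter-estimator stage can be sketched as follows. This is an assumed simplification, not the patent's exact algorithm: it keeps a histogram of frame receiving times and reports the smallest duration whose buckets cover a threshold proportion of recent frames (all names and the coverage rule are illustrative):

```python
class JitterEstimator:
    """Histogram-based jitter estimate over frame receiving times."""

    def __init__(self, bucket_ms=10, num_buckets=300, coverage=0.95):
        self.bucket_ms = bucket_ms          # granularity of one bucket
        self.counts = [0] * num_buckets     # bucket heights
        self.total = 0
        self.coverage = coverage            # threshold proportion to cover

    def add_frame_duration(self, duration_ms):
        # Durations beyond the covered range fall into the last bucket.
        idx = min(duration_ms // self.bucket_ms, len(self.counts) - 1)
        self.counts[idx] += 1
        self.total += 1

    def jitter_ms(self):
        # Smallest duration whose cumulative share reaches the threshold.
        cum = 0
        for idx, count in enumerate(self.counts):
            cum += count
            if cum >= self.coverage * self.total:
                return (idx + 1) * self.bucket_ms
        return len(self.counts) * self.bucket_ms


est = JitterEstimator()
for d in [10] * 95 + [100] * 5:  # mostly 10 ms frames, a few 100 ms outliers
    est.add_frame_duration(d)
print(est.jitter_ms())  # 20
```

A coverage-based estimate of this kind reacts to retransmission-induced outliers without letting a single slow frame dictate the buffer size.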
In this embodiment, the frame receiving time of the current video frame is determined, the jitter distribution information of the cached video frames is determined, and the video jitter value is determined according to the frame receiving time and the jitter distribution information. Because network jitter and retransmission both affect the frame receiving time of the current video frame, a jitter value analyzed with the aid of that frame receiving time effectively represents their influence on the jitter value. The detection accuracy of the video jitter value is therefore effectively improved, and when the video jitter value is used as the basis for video network jitter analysis, it effectively helps improve the video playing effect.
Fig. 4 is a flowchart illustrating a video jitter detection method according to another embodiment of the present disclosure.
As shown in fig. 4, the video jitter detection method includes:
S401: And determining the first frame receiving time of the first message data packet of the current video frame.
S402: and determining the second frame receiving time of the last message data packet of the current video frame.
When the server compresses and encodes a video frame, it usually splits the frame into a plurality of message data packets. The message data packet carrying the initial video semantic information of the frame may be referred to as the first message data packet; correspondingly, the message data packet carrying the last video semantic information may be referred to as the last message data packet. Between them there may be one or more message data packets carrying the intermediate video semantic information of the frame; together, the first message data packet, these intermediate packets, and the last message data packet form a complete video frame.
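The splitting described above can be sketched as a simple MTU-based partition of the encoded payload (illustrative only; real RTP packetization also adds headers and sequence numbers, which are omitted, and the function name is hypothetical):

```python
def packetize(frame_bytes, mtu=1200):
    """Split an encoded frame into MTU-sized payload chunks. The first
    chunk carries the initial data of the frame and the last chunk
    carries the final data."""
    return [frame_bytes[i:i + mtu] for i in range(0, len(frame_bytes), mtu)]


packets = packetize(b"\x00" * 2500, mtu=1200)
print(len(packets))      # 3
print(len(packets[-1]))  # 100: the last packet holds the remainder
```

The receiver's first and second frame receiving times correspond to the arrival of `packets[0]` and `packets[-1]` respectively.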
When the computer device requests the plurality of message data packets of the current video frame from the server, the time at which the computer device receives the first message data packet may be referred to as the first frame receiving time; correspondingly, the time at which the computer device receives the last message data packet may be referred to as the second frame receiving time.
In the embodiment of the present disclosure, the first frame receiving time may be marked and confirmed when the computer device receives the first message data packet, or the marking and confirmation may be triggered when the computer device receives the last message data packet, or the first frame receiving time may be received from a third-party timing device; this is not limited here.
That is to say, after the first frame receiving time and the second frame receiving time of the current video frame are determined, both are used to confirm the frame receiving time. This improves the rationality and convenience of analyzing the reception of the current video frame and the efficiency of confirming frame reception, thereby helping improve the efficiency of the video jitter detection method.
S403: and determining the frame receiving time according to the first frame receiving time and the second frame receiving time.
In the embodiment of the present disclosure, after the first frame receiving time and the second frame receiving time at which the computer device receives the current video frame are determined, statistical analysis may be performed on them to obtain the frame receiving time of the plurality of message data packets of the current video frame.
Optionally, in some embodiments, when the frame receiving time is determined according to the first frame receiving time and the second frame receiving time, the time difference between the two may be determined and used as the frame receiving time. This quickly and conveniently captures the influence of both network jitter and retransmission on the frame receiving time of the current video frame, effectively helping improve the execution efficiency of the video jitter detection method.
For example, when the video frame is framed in the packet buffer, the following formula may be used to calculate the frame receiving time of the current video frame:
frame_duration=packet_recv_ts_max-packet_recv_ts_min;
where packet_recv_ts_max is the second frame receiving time, packet_recv_ts_min is the first frame receiving time, and frame_duration is the frame receiving time of the current video frame.
Of course, a timing device may also be used, taking the first frame receiving time and the second frame receiving time as the start and end of timing, respectively, to determine the frame receiving time. Any other feasible mathematical operation may likewise be used; for example, a ratio between the first frame receiving time and the second frame receiving time may assist in determining the frame receiving time. This is not limited here.
In the embodiment of the present disclosure, after the frame receiving time is obtained from the first and second frame receiving times of a video frame, subsequent steps may be triggered in order to determine the video jitter value as the basis for video network jitter analysis and to assist in improving the video playing effect.
S404: jitter distribution information for the buffered video frames is determined.
S405: and determining the video jitter value according to the frame receiving time and the jitter distribution information.
For the description of S404-S405, reference may be made to the above embodiments, which are not described herein again.
In this embodiment, the first frame receiving time of the first message data packet of the current video frame and the second frame receiving time of the last message data packet are determined, the frame receiving time is determined from them, the jitter distribution information of the cached video frames is then determined, and the video jitter value is determined according to the frame receiving time and the jitter distribution information. Determining the frame receiving time from the first and second frame receiving times quickly and conveniently captures the influence of network jitter and retransmission on the frame receiving time of the current video frame, effectively helping improve the execution efficiency of the video jitter detection method. Because a jitter value analyzed with the aid of the detected frame receiving time effectively represents the influence of network jitter and retransmission on the jitter value, the detection accuracy of the video jitter value is effectively improved, and when the video jitter value is used as the basis for video network jitter analysis, it effectively helps improve the video playing effect.
Fig. 5 is a flowchart illustrating a video jitter detection method according to another embodiment of the present disclosure.
As shown in fig. 5, the video jitter detection method includes:
S501: The frame receiving time of the current video frame is determined.
S502: jitter distribution information for the buffered video frames is determined.
For the description of S501 and S502, reference may be made to the above embodiments, and details are not repeated here.
The jitter distribution information in the embodiment of the present disclosure includes a plurality of marking times and a jitter value corresponding to each marking time; subsequent steps may be triggered after the plurality of marking times of the cached video frames are determined.
The plurality of marking times may be a plurality of identical time ranges obtained by equally dividing the jitter duration covered by the jitter distribution information, and the jitter value may be the degree of video jitter corresponding to each time range.
For example, when a histogram is used to represent the jitter distribution information, with 300 "buckets" in the histogram and a granularity of 10 ms each, the jitter duration that the jitter distribution information can cover is 3000 ms. Each 10 ms granularity may be referred to as a marking time, so the plurality of marking times correspond to a plurality of 10 ms time ranges. Each marking time corresponds to a jitter value, which may be represented by the height of the corresponding "bucket"; the jitter values and their corresponding marking times constitute the jitter distribution information. This is not limited here.
S503: and when the frame is received, determining target marking time from the plurality of marking times, wherein the target marking time corresponds to a target jitter value, and the target jitter value belongs to the plurality of jitter values.
The marking time determined from the plurality of marking times that matches the frame receiving time may be referred to as the target marking time; it is the marking time at which video jitter is detected among the plurality of marking times. The jitter value corresponding to the target marking time may be referred to as the target jitter value, which can be understood as the jitter value "hit" by the current video frame.
Optionally, in some embodiments, when the target marking time is determined from the plurality of marking times according to the frame receiving time, the division value of the frame receiving time by the time range may be determined, the marking time matching that division value is found among the plurality of marking times, and that marking time is taken as the target marking time. This assists in quickly determining the corresponding target jitter value, so that the jitter value hit by the current video frame is quickly determined in combination with the jitter distribution information of the cached video frames, assisting the subsequent detection of the actual video jitter condition and ensuring the accuracy of determining the video jitter value.
For example, the computer device may record jitter distribution information covering 3000 ms; the 3000 ms may be divided into 300 ranges of 10 ms each, so in this example the number of mark times is 300 and the time range corresponding to each mark time is 10 ms. When the computer device receives a video frame with a frame receiving time of 1000 ms, the frame receiving time of 1000 ms may be divided by the 10 ms time range of the mark times to obtain a division value of 100, and the 100th mark time may then be referred to as the target mark time. Correspondingly, the jitter value describing the degree of jitter that this target mark time corresponds to in the jitter distribution information may be referred to as the target jitter value.
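The bucket lookup in this example can be sketched as follows. This is a minimal illustration assuming the 300-bucket, 10 ms-granularity histogram described above; the names and the clamp for out-of-range times are assumptions, not the patent's actual implementation:

```python
# Illustrative sketch: map a frame receiving time to the index of its
# "target mark time" bucket. Constants follow the example above.
BUCKET_COUNT = 300      # number of mark times in the histogram
BUCKET_RANGE_MS = 10    # time range covered by each mark time

def target_bucket_index(frame_receiving_time_ms: float) -> int:
    # Division value between the frame receiving time and the time range;
    # clamped to the last bucket so very long jitters are still counted.
    index = int(frame_receiving_time_ms // BUCKET_RANGE_MS)
    return min(index, BUCKET_COUNT - 1)

print(target_bucket_index(1000))  # division value 100, as in the example
```

A 1000 ms frame receiving time thus selects the mark time whose jitter value becomes the target jitter value.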
In other embodiments, when the target mark time is determined from the plurality of mark times according to the frame receiving time, the frame receiving time and the jitter distribution information corresponding to the plurality of mark times may be input into a pre-trained artificial intelligence model (which may be obtained by training on multiple sets of sample data in advance), so that the model matches the frame receiving time against the jitter distribution information to obtain the target mark time and the target jitter value; alternatively, any other possible manner may be adopted to determine the target mark time and the corresponding target jitter value from the plurality of mark times according to the frame receiving time, which is not limited herein.
S504: and adjusting the target jitter value to obtain a reference jitter value.
The reference jitter value may be a jitter value obtained by tuning the target jitter value with a certain numerical adjustment rule; tuning the jitter value can facilitate subsequent statistical analysis of the jitter distribution information.
The embodiments of the present disclosure may introduce a forgetting algorithm in the process of adjusting the target jitter value to obtain the reference jitter value; that is, the forgetting algorithm may be used to update the data related to the jitter distribution information of the cached video frames. Of course, any other possible algorithm may also be used to assist in updating this data.
For example, each time the computer device receives a video frame, the related module may be triggered to perform a statistical update, and the update formula is as follows:
vector_sum = Σ_{k=0}^{buckets.size()−1} buckets[k] × forget_factor
Here, vector_sum refers to the sum of the jitter values of all mark times after being updated by the forgetting algorithm, buckets.size() refers to the total number of mark times, buckets[k] refers to the jitter value of the k-th mark time, and forget_factor refers to the forgetting factor in the forgetting algorithm.
It should be noted that the forgetting factor is a weighting factor in the error measure function. It is introduced to give different weights to old data and new data, so that the algorithm can respond quickly to changes in the characteristics of the input process: the smaller the forgetting factor, the smaller the weight of the old data and the larger the weight of the new data. The sum of the jitter values of all mark times after the update can be calculated with the above forgetting-algorithm formula.
buckets[value]=buckets[value]+(1-forget_factor);
Here, buckets[value] refers to the target jitter value corresponding to the target mark time.
In the embodiments of the present disclosure, the weight of the target jitter value may be adjusted correspondingly through the above formula, and the adjusted target jitter value is used as the reference jitter value to assist in determining the video jitter value, as detailed in the following embodiments.
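A minimal sketch of the two-step update above (decay every bucket by the forgetting factor, then raise the hit bucket by 1 - forget_factor) might look as follows; the function name and the 0.99 default are assumptions for illustration:

```python
def update_buckets(buckets, hit_index, forget_factor=0.99):
    """Apply the forgetting-algorithm update: every mark time's jitter
    value is decayed by forget_factor, then the bucket hit by the current
    frame gains (1 - forget_factor). Returns vector_sum, the sum of all
    jitter values after the update."""
    for k in range(len(buckets)):
        buckets[k] *= forget_factor        # decay old data
    buckets[hit_index] += 1.0 - forget_factor  # buckets[value] update
    return sum(buckets)                    # vector_sum

buckets = [0.0] * 300
buckets[0] = 1.0  # start with all weight in the first mark time
vector_sum = update_buckets(buckets, hit_index=100)
print(round(vector_sum, 6))  # a normalized histogram stays at 1.0
```

Because the decay removes exactly the total weight that the hit bucket regains, a histogram that starts normalized remains normalized up to floating-point error, which is what the normalization condition in the following steps checks.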
S505: a reference jitter sum value of a plurality of other jitter values and the reference jitter value is determined, wherein the other jitter values are jitter values other than the target jitter value among the plurality of jitter values.
In the embodiments of the present disclosure, the plurality of other jitter values and the reference jitter value may be summed, and the result is used as the reference jitter sum. The reference jitter sum may be used to assist in subsequently judging whether the normalization condition is satisfied, thereby assisting in determining the video jitter value.
S506: a video jitter value is determined from the plurality of other jitter values and the reference jitter value.
Optionally, in some embodiments, when the video jitter value is determined according to the plurality of other jitter values and the reference jitter value, the plurality of other jitter values and the reference jitter value may be summed to obtain a jitter sum. If the jitter sum satisfies the normalization condition, the video jitter value is determined directly according to the plurality of other jitter values and the reference jitter value; if the jitter sum does not satisfy the normalization condition, the reference jitter value is corrected, and the video jitter value is determined according to the plurality of other jitter values and the corrected reference jitter value. This improves the reliability of the reference jitter value, which is subsequently combined with the plurality of other jitter values as the basis for determining the video jitter value, effectively assisting in improving the accuracy of the video jitter value.
That is to say, when determining the video jitter value according to the plurality of other jitter values and the reference jitter value, the embodiments of the present disclosure may first analyze these jitter values and, in consideration of possible deviation, correct them, and then use the corrected jitter values to assist in determining the video jitter value.
In the embodiments of the present disclosure, after the plurality of other jitter values and the reference jitter value are summed to obtain the jitter sum, whether the jitter sum satisfies the normalization condition may be judged to analyze whether a deviation has occurred. The normalization condition may be preset, and requires that the total, like a probability distribution, is equal to 1. This process can be characterized by the following equations:
vector_sum = Σ_{k=0}^{buckets.size()−1} buckets[k]
diff_sum=vector_sum-1
Here, diff_sum can be regarded as the deviation of the reference jitter sum vector_sum: when its value is 0, there is no deviation; when its value is not 0, a deviation has occurred.
In this embodiment, it may be judged whether the jitter sum obtained by summing the plurality of other jitter values and the reference jitter value is equal to 1. If the jitter sum is equal to 1, it is determined that the jitter sum satisfies the normalization condition; if the jitter sum is not equal to 1, it is determined that the jitter sum does not satisfy the normalization condition.
That is to say, after determining the plurality of other jitter values and the reference jitter value, the embodiments of the present disclosure add them to determine the jitter sum, judge from the jitter sum whether the reference jitter value needs to be corrected, and correct the reference jitter value accordingly. This improves the reliability of the reference jitter value, which is subsequently combined with the plurality of other jitter values as the basis for determining the video jitter value, effectively assisting in improving the accuracy of the video jitter value.
For example, the plurality of other jitter values a1, a2, a3 and the reference jitter value b determined as above may be summed to obtain a jitter sum c, and c is then compared with 1 to judge whether the normalization condition is satisfied. If c is equal to 1, the normalization condition is satisfied and the video jitter value may be determined directly according to a1, a2, a3 and b; if c is not equal to 1, b may be corrected to obtain b1, and the video jitter value is then determined according to a1, a2, a3 and b1.
Of course, the video jitter value may also be determined by inputting the plurality of other jitter values and the reference jitter value into a pre-trained Artificial Intelligence (AI) model (which may be obtained by pre-training on a large amount of sample data), or by determining a functional relationship among the other jitter values, the reference jitter value, and the video jitter value, which is not limited thereto.
Optionally, in some embodiments, the reference jitter value may be corrected if the jitter sum value does not satisfy the normalization condition.
Optionally, in some embodiments, the reference jitter value is corrected as follows: a difference threshold is determined, a reference difference between the jitter sum and the normalization threshold is determined, and the smaller value between the difference threshold and the reference difference is determined, where the smaller value is whichever of the difference threshold and the reference difference has the smaller magnitude. If the reference difference is greater than or equal to zero, the difference between the reference jitter value and the smaller value is taken as the corrected reference jitter value; if the reference difference is less than zero, the sum of the reference jitter value and the smaller value is taken as the corrected reference jitter value. The correction process may be represented by the following formula:
buckets[n] = buckets[n] − Min(|diff_sum|, buckets[n]/16), if diff_sum ≥ 0
buckets[n] = buckets[n] + Min(|diff_sum|, buckets[n]/16), if diff_sum < 0
Here, Min(|diff_sum|, buckets[n]/16) selects the smaller of |diff_sum| and buckets[n]/16 as the output value, and buckets[n]/16 can be regarded as the difference threshold for buckets[n].
In order to preserve the numerical characteristics of the actual reference jitter value as much as possible when correcting the deviation, a threshold may be set for each reference jitter value to restrict its correction range; this threshold may be referred to as the difference threshold.
That is to say, after determining that the jitter sum does not satisfy the normalization condition, the embodiments of the present disclosure may determine the difference threshold and the reference difference, and then use the smaller value between them to correct the reference jitter value. This improves the reliability of the reference jitter value while preserving its numerical characteristics, and further improves the rationality of subsequently determining the video jitter value.
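The correction rule can be sketched as follows, assuming the diff_sum and the buckets[n]/16 difference threshold defined earlier; the function name is hypothetical:

```python
def correct_bucket(bucket_value, vector_sum):
    """Correct one reference jitter value so the histogram moves back
    toward the normalization condition, capping the correction step at
    the difference threshold bucket_value / 16."""
    diff_sum = vector_sum - 1.0                      # deviation from 1
    step = min(abs(diff_sum), bucket_value / 16.0)   # the smaller value
    if diff_sum >= 0:
        return bucket_value - step   # sum too large: shrink the bucket
    return bucket_value + step       # sum too small: grow the bucket

print(round(correct_bucket(0.32, 1.05), 4))  # step capped at 0.02 -> 0.3
print(round(correct_bucket(0.32, 0.99), 4))  # |diff_sum| = 0.01 -> 0.33
```

Capping the step at one sixteenth of the bucket's own value is what keeps the correction from erasing the bucket's numerical characteristics.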
Of course, a data-statistics method may also be used to determine a scaling factor suitable for most correction requirements, and the reference jitter value may be corrected by multiplying or dividing it by the scaling factor; alternatively, multiple devices may process the video frames and the resulting reference jitter values may be averaged to correct errors, which is not limited thereto.
Optionally, in some embodiments, when the video jitter value is determined according to the plurality of other jitter values and the reference jitter value, a first number of first jitter values may be counted from among the plurality of other jitter values and the reference jitter value, where a first jitter value is an other jitter value or the reference jitter value, the sum of the first number of first jitter values is greater than the jitter threshold, and the product of the first number and the time range is used as the video jitter value. When the computer device covers all the jitter of the video frames, mark times with small weights might otherwise be counted into the first number; setting a jitter threshold to constrain the first number improves the reliability of the obtained first number, helps obtain the video jitter value in combination with the time range, effectively improves the rationality of video jitter value detection, and improves how well the video jitter value represents the jitter condition of the current video frame.
In the embodiments of the present disclosure, in order to cover as much of the video jitter recorded by the computer device as possible while preventing a small amount of slight jitter from unduly influencing the video jitter value, a threshold may be set to restrict the range of jitter to be covered; this threshold may be referred to as the jitter threshold.
For example, a computer device receiving video frames may cover up to 3000 ms of jitter. For convenience of statistics, the 3000 ms of jitter may be divided into 300 time ranges m, and the jitter threshold may be set to 97%. Then, through the formula:
Σ_{k=0}^{n} buckets[k] ≥ 97%
where buckets[n] refers to the reference jitter value.
In the embodiments of the present disclosure, it can thus be calculated that n time ranges m are needed for the covered jitter to reach 97% of the video's jitter; the product of the two, l = m × n, may be referred to as the video jitter value.
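Put together, this final step can be sketched as follows: accumulate bucket weights until the jitter threshold (97% in the example) is reached, and multiply the number of buckets used by the per-bucket time range m. This is an illustrative reading of the formula above, not the patent's actual code:

```python
def video_jitter_value(buckets, bucket_range_ms=10, threshold=0.97):
    """Return l = m * n, where n is the smallest number of mark times
    whose cumulative jitter weight reaches the jitter threshold."""
    total = 0.0
    for n, weight in enumerate(buckets, start=1):
        total += weight
        if total >= threshold:
            return n * bucket_range_ms
    return len(buckets) * bucket_range_ms  # threshold never reached

buckets = [0.5, 0.3, 0.15, 0.05] + [0.0] * 296
print(video_jitter_value(buckets))  # three buckets cover only 95%, so n = 4 -> 40
```

Raising the threshold widens the reported jitter value; lowering it discards more of the slight-jitter tail, which is the trade-off the jitter threshold is meant to control.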
Of course, the video jitter value may also be determined according to a plurality of other jitter values and the reference jitter value by using a probability density function, or a function relation among a plurality of other jitter values, the reference jitter value and the video jitter value may also be obtained by using a mathematical statistics method, and the video jitter value may be determined according to a plurality of other jitter values and the reference jitter value, which is not limited thereto.
In this embodiment, the frame receiving time of the current video frame is determined, the jitter distribution information of the cached video frames is determined, a target mark time corresponding to a target jitter value is determined from the plurality of mark times according to the frame receiving time, the target jitter value is adjusted to obtain a reference jitter value, a reference jitter sum of a plurality of other jitter values and the reference jitter value is determined, and the video jitter value is determined according to the plurality of other jitter values and the reference jitter value. Because the number of mark times may be large, the target mark time can be quickly determined from the plurality of mark times in combination with the frame receiving time. The many mark times covered by the computer device may dilute the weight of the target mark time during data processing, and adjusting the target jitter value can effectively increase that weight, improving the reliability of the obtained reference jitter value. The reference jitter sum is obtained from the reference jitter value and the plurality of other jitter values, whether a deviation has occurred is judged according to whether the reference jitter sum satisfies the normalization condition, and whether to correct the reference jitter value can be chosen accordingly, improving the rationality of the reference jitter value, so that the subsequently obtained video jitter value can assist in improving the video playing effect.
Fig. 6 is a schematic structural diagram of a video jitter detection apparatus according to an embodiment of the present disclosure.
As shown in fig. 6, the video jitter detection apparatus 60 includes:
a first determining module 601, configured to determine frame receiving time of a current video frame;
a second determining module 602, configured to determine jitter distribution information of the buffered video frames;
a third determining module 603, configured to determine a video jitter value according to the frame receiving time and the jitter distribution information.
In some embodiments of the present disclosure, as shown in fig. 7, fig. 7 is a schematic structural diagram of a video jitter detection apparatus according to another embodiment of the present disclosure, where the jitter distribution information includes: a plurality of mark times, and jitter values corresponding to the respective mark times, the third determining module 603 includes:
a first determining sub-module 6031, configured to determine, according to frame receiving time, a target mark time from the multiple mark times, where the target mark time corresponds to a target jitter value, and the target jitter value belongs to the multiple jitter values;
an adjusting submodule 6032, configured to adjust the target jitter value to obtain a reference jitter value;
a second determination sub-module 6033 for determining a plurality of other jitter values and a reference jitter sum of the reference jitter values, wherein the other jitter values are jitter values other than the target jitter value among the plurality of jitter values;
a third determining sub-module 6034 for determining a video jitter value from the plurality of other jitter values and the reference jitter value.
In some embodiments of the present disclosure, the third determining sub-module 6034 is specifically configured to:
adding the plurality of other jitter values and the reference jitter value to obtain a jitter sum value;
when the jitter sum value meets the normalization condition, determining a video jitter value directly according to a plurality of other jitter values and a reference jitter value;
and when the jitter sum value does not meet the normalization condition, correcting the reference jitter value, and determining the video jitter value according to a plurality of other jitter values and the corrected reference jitter value.
In some embodiments of the present disclosure, the third determination sub-module 6034 is further configured to:
determining a difference threshold;
determining a reference difference between the jitter sum value and the normalized threshold value;
determining a smaller value between the difference threshold and the reference difference, wherein the smaller value is the difference threshold or the magnitude of the reference difference;
when the reference difference is greater than or equal to zero, determining the difference between the reference jitter value and the smaller value as the corrected reference jitter value;
and when the reference difference value is less than zero, determining the sum of the reference jitter value and the smaller value as the corrected reference jitter value.
In some embodiments of the present disclosure, the first determining sub-module 6031 is specifically configured to:
determining a division value between a frame receiving time and a time range;
a mark time matching the division value is determined from the plurality of mark times, and the matching mark time is set as a target mark time.
In some embodiments of the present disclosure, as shown in fig. 8, fig. 8 is a schematic structural diagram of a video jitter detection apparatus according to another embodiment of the present disclosure, where the first determining module 601 includes:
a fourth determining submodule 6011, configured to determine a first frame receiving time of a first packet of a current video frame;
a fifth determining submodule 6012, configured to determine a second frame receiving time of a last packet of the current video frame;
a sixth determining submodule 6013, configured to determine, according to the first frame receiving time and the second frame receiving time, a frame receiving time.
In some embodiments of the present disclosure, a sixth determining submodule 6013 is configured to:
and determining a time difference value between the first frame receiving time and the second frame receiving time, and taking the time difference value as the frame receiving time.
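As a hedged sketch, the sixth determining submodule's computation reduces to a single subtraction between the two packet arrival times; the names below are assumptions for illustration:

```python
def frame_receiving_time(first_packet_ms: float, last_packet_ms: float) -> float:
    """Frame receiving time = arrival time of the frame's last packet
    minus arrival time of its first packet."""
    return last_packet_ms - first_packet_ms

print(frame_receiving_time(120.0, 1120.0))  # packets spanning 1 s -> 1000.0
```

This is the value that S503 later divides by the mark-time range to locate the target bucket.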
It should be noted that the foregoing explanation of the video jitter detection method is also applicable to the video jitter detection apparatus of this embodiment, and is not repeated herein.
In this embodiment, the frame receiving time of the current video frame is determined, the jitter distribution information of the cached video frames is determined, and the video jitter value is determined according to the frame receiving time and the jitter distribution information. Since the frame receiving time of the current video frame may be influenced by network jitter and retransmission, a video jitter value analyzed on the basis of the frame receiving time can effectively represent the influence of network jitter and retransmission on the jitter value. The detection accuracy of the video jitter value can thereby be effectively improved, and when the video jitter value is used as the basis of video network jitter analysis, it can effectively assist in improving the video playing effect.
In order to implement the foregoing embodiments, the present disclosure also provides a computer device, including: a memory, a processor, and a computer program stored in the memory and runnable on the processor, wherein the processor, when executing the program, implements the video jitter detection method as proposed by the foregoing embodiments of the present disclosure.
In order to implement the above embodiments, the present disclosure also proposes a non-transitory computer-readable storage medium on which a computer program is stored, which, when executed by a processor, implements the video jitter detection method as proposed by the foregoing embodiments of the present disclosure.
In order to implement the foregoing embodiments, the present disclosure also provides a computer program product; when instructions in the computer program product are executed by a processor, the video jitter detection method as set forth in the foregoing embodiments of the present disclosure is performed.
FIG. 9 illustrates a block diagram of an exemplary computer device suitable for use in implementing embodiments of the present disclosure. The computer device 12 shown in fig. 9 is only an example and should not bring any limitations to the functionality or scope of use of the embodiments of the present disclosure.
As shown in FIG. 9, computer device 12 is in the form of a general purpose computing device. The components of computer device 12 may include, but are not limited to: one or more processors or processing units 16, a system memory 28, and a bus 18 that couples various system components including the system memory 28 and the processing unit 16.
Bus 18 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. These architectures include, but are not limited to, Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus, to name a few.
Computer device 12 typically includes a variety of computer system readable media. Such media may be any available media that is accessible by computer device 12 and includes both volatile and nonvolatile media, removable and non-removable media.
Memory 28 may include computer system readable media in the form of volatile memory, such as Random Access Memory (RAM) 30 and/or cache memory 32. Computer device 12 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 34 may be used to read from and write to non-removable, nonvolatile magnetic media (not shown in FIG. 9, and commonly referred to as a "hard drive").
Although not shown in FIG. 9, a disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk (e.g., a compact disk read Only memory (CD-ROM), a digital versatile disk read Only memory (DVD-ROM), or other optical media) may be provided. In these cases, each drive may be connected to bus 18 by one or more data media interfaces. Memory 28 may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the disclosure.
A program/utility 40 having a set (at least one) of program modules 42 may be stored, for example, in memory 28, such program modules 42 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each of which examples or some combination thereof may comprise an implementation of a network environment. Program modules 42 generally perform the functions and/or methodologies of the embodiments described in this disclosure.
Computer device 12 may also communicate with one or more external devices 14 (e.g., keyboard, pointing device, display 24, etc.), with one or more devices that enable a user to interact with computer device 12, and/or with any devices (e.g., network card, modem, etc.) that enable computer device 12 to communicate with one or more other computing devices. Such communication may be through an input/output (I/O) interface 22. Moreover, computer device 12 may also communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public Network such as the Internet) via Network adapter 20. As shown, the network adapter 20 communicates with the other modules of the computer device 12 via the bus 18. It should be understood that although not shown in the figures, other hardware and/or software modules may be used in conjunction with computer device 12, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
The processing unit 16 executes various functional applications and data processing, such as implementing the video jitter detection method mentioned in the foregoing embodiments, by running programs stored in the system memory 28.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.
It should be noted that, in the description of the present disclosure, the terms "first", "second", etc. are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. In addition, in the description of the present disclosure, the meaning of "a plurality" is two or more unless otherwise specified.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and the scope of the preferred embodiments of the present disclosure includes other implementations in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the embodiments of the present disclosure.
It should be understood that portions of the present disclosure may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and when the program is executed, the program includes one or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present disclosure may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic or optical disk, etc.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present disclosure. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Although embodiments of the present disclosure have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present disclosure, and that changes, modifications, substitutions and alterations may be made to the above embodiments by those of ordinary skill in the art within the scope of the present disclosure.

Claims (18)

1. A video jitter detection method, the method comprising:
determining the frame receiving time of the current video frame;
determining jitter distribution information of the cached video frame;
and determining a video jitter value according to the frame receiving time and the jitter distribution information.
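Read together with the dependent claims, the three steps of claim 1 can be sketched as a small class. This is a hedged illustration only: the fixed-width histogram standing in for the "jitter distribution information", the bucket size, and the threshold are assumptions, not details stated in the claims.

```python
class JitterDetector:
    """Minimal sketch of the three steps in claim 1. The fixed-width
    jitter histogram and bucket size are assumptions, not details
    taken from the patent text."""

    def __init__(self, bucket_ms=50, num_buckets=20):
        self.bucket_ms = bucket_ms                  # time range of each marking time
        self.jitter_values = [0.0] * num_buckets    # one jitter value per marking time

    def on_frame(self, first_pkt_ms, last_pkt_ms, jitter_threshold=0.5):
        # Step 1: frame receiving time of the current video frame
        frame_recv_ms = last_pkt_ms - first_pkt_ms
        # Step 2: update the jitter distribution of the cached video frames
        bucket = min(int(frame_recv_ms // self.bucket_ms),
                     len(self.jitter_values) - 1)
        self.jitter_values[bucket] += 1.0
        # Step 3: derive a video jitter value from the distribution
        # (claim 4 style: count buckets above a threshold, times the bucket width)
        total = sum(self.jitter_values)
        count = sum(1 for v in self.jitter_values if v / total > jitter_threshold)
        return count * self.bucket_ms
```

A frame whose packets span 30 ms falls in the first bucket; with only one frame cached, that bucket dominates the distribution and the returned jitter value is one bucket width.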
2. The method of claim 1, wherein the jitter distribution information comprises: a plurality of marking times, and a jitter value corresponding to each of the marking times,
wherein, the determining the video jitter value according to the frame receiving time and the jitter distribution information comprises:
determining target marking time from the plurality of marking times according to the frame receiving time, wherein the target marking time corresponds to a target jitter value, and the target jitter value belongs to the plurality of jitter values;
adjusting the target jitter value to obtain a reference jitter value;
determining a jitter sum value of a plurality of other jitter values and the reference jitter value, wherein the other jitter values are jitter values other than the target jitter value among the plurality of jitter values;
determining the video jitter value from the plurality of other jitter values and the reference jitter value.
3. The method of claim 2, wherein determining the video jitter value based on the plurality of other jitter values and the reference jitter value comprises:
summing the plurality of other jitter values and the reference jitter value to obtain a jitter sum value;
if the jitter sum value meets a normalization condition, determining the video jitter value directly according to the plurality of other jitter values and the reference jitter value;
and if the jitter sum value does not meet the normalization condition, correcting the reference jitter value, and determining the video jitter value according to the other jitter values and the corrected reference jitter value.
4. The method of claim 3, wherein the plurality of marking times correspond to time ranges of equal length, and wherein the determining the video jitter value based on the plurality of other jitter values and the reference jitter value comprises:
counting a first number of first jitter values from among the plurality of other jitter values and the reference jitter value, wherein each first jitter value is one of the other jitter values or the reference jitter value and is greater than a jitter threshold;
and taking the product of the first number and the time range as the video jitter value.
5. The method of claim 4, wherein said determining a target marker time from a plurality of said marker times based on said time to receive a frame comprises:
determining a division value by dividing the frame receiving time by the time range;
determining, from the plurality of marking times, the marking time matching the division value, and taking that marking time as the target marking time.
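Claim 5's lookup amounts to integer bucketing of the frame receiving time. A sketch, where the bucket layout (each marking time covering one fixed-width range, stored in a list) is an assumption:

```python
def find_target_mark_time(frame_recv_ms, mark_times_ms, time_range_ms):
    """Claim 5 sketch: divide the frame receiving time by the per-bucket
    time range, then pick the marking time matching that quotient.
    Assumes mark_times_ms[i] covers [i*time_range_ms, (i+1)*time_range_ms)."""
    index = int(frame_recv_ms // time_range_ms)     # the "division value"
    index = min(index, len(mark_times_ms) - 1)      # clamp to the last bucket
    return mark_times_ms[index]
```

For example, with 50 ms buckets and marking times at bucket starts, a 120 ms frame receiving time maps to the marking time that covers 100–150 ms.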
6. The method of claim 3, wherein said modifying the reference jitter value comprises:
determining a difference threshold;
determining a reference difference between the jitter sum value and a normalized threshold value;
determining a smaller value between the difference threshold and the reference difference, wherein the smaller value is whichever of the difference threshold and the reference difference has the smaller magnitude;
if the reference difference is greater than or equal to zero, determining the difference between the reference jitter value and the smaller value as the corrected reference jitter value;
and if the reference difference value is smaller than zero, determining the sum of the reference jitter value and the smaller value as the corrected reference jitter value.
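The correction in claim 6 nudges the reference jitter value toward the normalization target, with the step size capped by the difference threshold. A sketch under assumed parameter names; the magnitude comparison follows the claim's "smaller value" wording:

```python
def correct_reference_jitter(ref_jitter, jitter_sum, norm_threshold, diff_threshold):
    """Claim 6 sketch: move ref_jitter toward the normalization threshold
    by at most diff_threshold. All parameter names are assumptions."""
    ref_diff = jitter_sum - norm_threshold         # the reference difference
    step = min(diff_threshold, abs(ref_diff))      # the "smaller value", by magnitude
    if ref_diff >= 0:
        return ref_jitter - step   # jitter sum too large: decrease reference value
    return ref_jitter + step       # jitter sum too small: increase reference value
```

Capping the step keeps a single out-of-range jitter sum from over-correcting the distribution in one update.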
7. The method of claim 1, wherein said determining the frame receiving time of the current video frame comprises:
determining first frame receiving time of a first message data packet of the current video frame;
determining a second frame receiving time of a last message data packet of the current video frame;
and determining the frame receiving time according to the first frame receiving time and the second frame receiving time.
8. The method of claim 7, wherein said determining said frame receiving time based on said first frame receiving time and said second frame receiving time comprises:
and determining a time difference value between the first frame receiving time and the second frame receiving time, and taking the time difference value as the frame receiving time.
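Claims 7 and 8 define the frame receiving time as the gap between the arrival of a frame's first and last message packets. A minimal sketch; the list-of-timestamps input shape is an assumption:

```python
def frame_receiving_time(packet_arrival_times):
    """Claims 7-8 sketch: frame receiving time = arrival time of the
    frame's last message packet minus that of its first packet.
    packet_arrival_times: arrival timestamps (e.g. in ms) of the packets
    making up one video frame, in arrival order (assumed input format)."""
    first_recv_time = packet_arrival_times[0]    # first packet of the frame
    second_recv_time = packet_arrival_times[-1]  # last packet of the frame
    return second_recv_time - first_recv_time
```

A frame whose packets arrive at 100, 104, and 112 ms thus has a frame receiving time of 12 ms.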
9. A video jitter detection apparatus, the apparatus comprising:
the first determining module is used for determining the frame receiving time of the current video frame;
the second determining module is used for determining the jitter distribution information of the cached video frame;
and the third determining module is used for determining a video jitter value according to the frame receiving time and the jitter distribution information.
10. The apparatus of claim 9, wherein the jitter distribution information comprises: a plurality of marking times, and a jitter value corresponding to each of the marking times,
wherein the third determining module comprises:
a first determining sub-module, configured to determine a target marking time from the multiple marking times according to the frame receiving time, where the target marking time corresponds to a target jitter value, and the target jitter value belongs to the multiple jitter values;
the adjusting submodule is used for adjusting the target jitter value to obtain a reference jitter value;
a second determination sub-module configured to determine a jitter sum value of a plurality of other jitter values and the reference jitter value, wherein the other jitter values are jitter values other than the target jitter value among the plurality of jitter values;
a third determining sub-module, configured to determine the video jitter value according to the multiple other jitter values and the reference jitter value.
11. The apparatus of claim 10, wherein the third determination submodule is specifically configured to:
summing the plurality of other jitter values and the reference jitter value to obtain a jitter sum value;
determining the video jitter value directly according to the plurality of other jitter values and the reference jitter value when the jitter sum value meets a normalization condition;
and when the jitter sum value does not meet the normalization condition, correcting the reference jitter value, and determining the video jitter value according to the other jitter values and the corrected reference jitter value.
12. The apparatus of claim 11, wherein the plurality of marking times correspond to time ranges of equal length, and wherein the third determining submodule is further configured to:
counting a first number of first jitter values from among the plurality of other jitter values and the reference jitter value, wherein each first jitter value is one of the other jitter values or the reference jitter value and is greater than a jitter threshold;
and taking the product of the first number and the time range as the video jitter value.
13. The apparatus of claim 12, wherein the first determining module is specifically configured to:
determining a division value by dividing the frame receiving time by the time range;
determining, from the plurality of marking times, the marking time matching the division value, and taking that marking time as the target marking time.
14. The apparatus of claim 11, wherein the third determination submodule is further configured to:
determining a difference threshold;
determining a reference difference between the jitter sum value and a normalized threshold value;
determining a smaller value between the difference threshold and the reference difference, wherein the smaller value is whichever of the difference threshold and the reference difference has the smaller magnitude;
when the reference difference is greater than or equal to zero, determining the difference between the reference jitter value and the smaller value as the corrected reference jitter value;
and when the reference difference value is less than zero, determining the sum of the reference jitter value and the smaller value as the corrected reference jitter value.
15. The apparatus of claim 9, wherein the first determining module comprises:
a fourth determining sub-module, configured to determine a first frame receiving time of a first packet of the current video frame;
a fifth determining submodule, configured to determine a second frame receiving time of a last packet of the current video frame;
and the sixth determining submodule is used for determining the frame receiving time according to the first frame receiving time and the second frame receiving time.
16. The apparatus of claim 15, wherein the sixth determination submodule is specifically configured to:
and determining a time difference value between the first frame receiving time and the second frame receiving time, and taking the time difference value as the frame receiving time.
17. A computer device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-8.
18. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any one of claims 1-8.
CN202210220933.6A 2022-03-08 2022-03-08 Video jitter detection method and device, computer equipment and storage medium Pending CN114640754A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210220933.6A CN114640754A (en) 2022-03-08 2022-03-08 Video jitter detection method and device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210220933.6A CN114640754A (en) 2022-03-08 2022-03-08 Video jitter detection method and device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114640754A true CN114640754A (en) 2022-06-17

Family

ID=81948206

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210220933.6A Pending CN114640754A (en) 2022-03-08 2022-03-08 Video jitter detection method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114640754A (en)

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005037273A (en) * 2003-07-16 2005-02-10 Nippon Hoso Kyokai <Nhk> Video image shake detecting device, method and program thereof, and camera video image selecting device
JP2005064873A (en) * 2003-08-12 2005-03-10 Matsushita Electric Ind Co Ltd Jitter buffer control method, and ip telephone set
CN102065221A (en) * 2009-11-11 2011-05-18 美商豪威科技股份有限公司 Image sensor with shaking compensation
CN103530893A (en) * 2013-10-25 2014-01-22 南京大学 Foreground detection method in camera shake scene based on background subtraction and motion information
CN103763461A (en) * 2013-12-31 2014-04-30 合一网络技术(北京)有限公司 Video jitter detection method and device
EP2739044A1 (en) * 2012-11-29 2014-06-04 Alcatel-Lucent A video conferencing server with camera shake detection
US20140355895A1 (en) * 2013-05-31 2014-12-04 Lidong Xu Adaptive motion instability detection in video
CN107743228A (en) * 2017-11-24 2018-02-27 深圳市创维软件有限公司 Video quality detection method, monitoring device and storage medium
CN108492287A (en) * 2018-03-14 2018-09-04 罗普特(厦门)科技集团有限公司 A kind of video jitter detection method, terminal device and storage medium
CN110363748A (en) * 2019-06-19 2019-10-22 平安科技(深圳)有限公司 Dithering process method, apparatus, medium and the electronic equipment of key point
CN111585829A (en) * 2019-02-15 2020-08-25 泰雷兹集团 Electronic device and method for receiving data via an asynchronous communication network, related communication system and computer program
CN113691733A (en) * 2021-09-18 2021-11-23 北京百度网讯科技有限公司 Video jitter detection method and device, electronic equipment and storage medium
WO2021233032A1 (en) * 2020-05-19 2021-11-25 Oppo广东移动通信有限公司 Video processing method, video processing apparatus, and electronic device
US20210400338A1 (en) * 2020-06-19 2021-12-23 Apple Inc. Systems and methods of video jitter estimation

Similar Documents

Publication Publication Date Title
US20230188578A1 (en) Data transmission method and apparatus
US11350150B2 (en) Method for estimation of quality of experience (QoE) metrics for video streaming using passive measurements
US8441943B2 (en) Information processing apparatus and method, program, and recording medium
US7864695B2 (en) Traffic load density measuring system, traffic load density measuring method, transmitter, receiver, and recording medium
US20150341248A1 (en) Method for detecting network transmission status and related device
CN102804714A (en) Controlling packet transmission
CN110012324B (en) Code rate self-adaption method, WIFI camera, control device and code rate self-adaption system for embedded wireless video transmission
WO2014177023A1 (en) Method and device for determining service type
CN110636283B (en) Video transmission test method and device and terminal equipment
WO2018024497A1 (en) Estimation of losses in a video stream
JP7444247B2 (en) Burst traffic detection device, burst traffic detection method, and burst traffic detection program
CN114640754A (en) Video jitter detection method and device, computer equipment and storage medium
US20240056370A1 (en) Data transmission control method and apparatus, electronic device, and storage medium
US8289868B2 (en) Network device and method of measuring upstream bandwidth employed thereby
JP7003467B2 (en) Packet classification program, packet classification method and packet classification device
US20200220794A1 (en) Method and system for monitoing communication in a network
WO2022105706A1 (en) Voice call quality determination method and apparatus
CN115801639A (en) Bandwidth detection method and device, electronic equipment and storage medium
WO2020164457A1 (en) Abnormality handling method and apparatus, and device
CN112688824B (en) RTP packet loss detection method, device, equipment and computer readable storage medium
US20100050028A1 (en) Network device and method for simultaneously calculating network throughput and packet error rate
CN115514686A (en) Flow acquisition method and device, electronic equipment and storage medium
CN114124754B (en) Method for processing media data packets in a multimedia network and related products
CN115150283B (en) Network bandwidth detection method and device, computer equipment and storage medium
CN115002513B (en) Audio and video scheduling method and device, electronic equipment and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination