CN117135415A - Video data transmission method and device and electronic equipment - Google Patents
Video data transmission method and device and electronic equipment
- Publication number
- CN117135415A (application number CN202210551382.1A)
- Authority
- CN
- China
- Prior art keywords
- encoder
- time
- frame
- request
- frequency
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/60—Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client
- H04N21/63—Control signaling related to video distribution between client, server and network components; Network processes for video distribution between server and clients or between remote clients, e.g. transmitting basic layer and enhancement layers over different transmission paths, setting up a peer-to-peer communication via Internet between remote STB's; Communication protocols; Addressing
- H04N21/637—Control signals issued by the client directed to the server or network components
- H04N21/6375—Control signals issued by the client directed to the server or network components for requesting retransmission, e.g. of data packets lost or corrupted during transmission from server
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/134—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
- H04N19/146—Data rate or code amount at the encoder output
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/234—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
- H04N21/2343—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
- H04N21/234381—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements by altering the temporal resolution, e.g. decreasing the frame rate by frame skipping
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/60—Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client
- H04N21/63—Control signaling related to video distribution between client, server and network components; Network processes for video distribution between server and clients or between remote clients, e.g. transmitting basic layer and enhancement layers over different transmission paths, setting up a peer-to-peer communication via Internet between remote STB's; Communication protocols; Addressing
- H04N21/637—Control signals issued by the client directed to the server or network components
- H04N21/6373—Control signals issued by the client directed to the server or network components for rate control, e.g. request to the server to modify its transmission rate
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/60—Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client
- H04N21/63—Control signaling related to video distribution between client, server and network components; Network processes for video distribution between server and clients or between remote clients, e.g. transmitting basic layer and enhancement layers over different transmission paths, setting up a peer-to-peer communication via Internet between remote STB's; Communication protocols; Addressing
- H04N21/637—Control signals issued by the client directed to the server or network components
- H04N21/6377—Control signals issued by the client directed to the server or network components directed to server
- H04N21/6379—Control signals issued by the client directed to the server or network components directed to server directed to encoder, e.g. for requesting a lower encoding rate
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
Abstract
The disclosure relates to a video data transmission method, a video data transmission device, and an electronic device in the technical field of data transmission. The method comprises the following steps: first, while transmitting the encoded frame data output by an encoder in real time to a receiving end, acquiring the time point at which the receiving end requests retransmission of lost data each time; determining, based on these time points, the time interval between successive requests; and then dynamically adjusting the frequency at which the encoder outputs I frames according to the time interval and the video frame rate, so that the encoder outputs encoded frame data according to the frequency information after each adjustment. By applying the disclosed technical solution, the need for retransmission of lost data can be met automatically, thereby reducing the network delay caused by packet loss and improving streaming efficiency.
Description
Technical Field
The disclosure relates to the technical field of data transmission, and in particular to a video data transmission method, a video data transmission device, and an electronic device.
Background
A frame is the smallest visual unit of a video: a single static image. A sequence of temporally consecutive frames, played together, forms a moving video. During the transmission of video frame data, data packets are sometimes lost.
At present, the conventional packet-loss handling method is forward error correction (FEC). For example, when transmitting data packets A and B, an additional packet C equal to the exclusive-OR (XOR) of A and B is also sent. If the receiver receives any two of these three packets, the third can be recovered by XOR.
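As a minimal sketch of the XOR-based FEC scheme just described (function names are illustrative, not from the disclosure): packets A and B are sent together with a parity packet C = A XOR B, and any two of the three suffice to recover the third.

```python
# XOR-based forward error correction over two equal-length packets.
# C = A XOR B; losing any one of A, B, C is recoverable from the
# other two, because X XOR (X XOR Y) = Y.

def xor_bytes(x: bytes, y: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(x, y))

def recover_missing(received: dict) -> bytes:
    """Recover the missing packet given any two of {'A', 'B', 'C'}."""
    keys = set(received)
    if keys == {"A", "B"}:   # parity C was lost
        return xor_bytes(received["A"], received["B"])
    if keys == {"A", "C"}:   # B was lost: B = A XOR C
        return xor_bytes(received["A"], received["C"])
    if keys == {"B", "C"}:   # A was lost: A = B XOR C
        return xor_bytes(received["B"], received["C"])
    raise ValueError("need exactly two of the three packets")

a, b = b"\x01\x02\x03", b"\x10\x20\x30"
c = xor_bytes(a, b)                       # parity packet
print(recover_missing({"B": b, "C": c}))  # recovers A
```

The same recovery works regardless of which of the three packets is lost, which is why the receiver only needs any two of them.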
However, when the packet loss rate fluctuates widely, this conventional approach resists packet loss poorly, which degrades the video data transmission quality.
Disclosure of Invention
In view of this, the present disclosure provides a video data transmission method, apparatus, and electronic device, mainly aiming to solve the technical problem that the conventional packet-loss handling method resists packet loss poorly when the packet loss rate fluctuates widely, degrading video data transmission.
In a first aspect, the present disclosure provides a method for transmitting video data, including:
while transmitting the encoded frame data output by the encoder in real time to a receiving end, acquiring the time point at which the receiving end requests retransmission of lost data each time;
determining, based on these time points, the time interval between successive requests;
and dynamically adjusting the frequency at which the encoder outputs I frames according to the time interval and the video frame rate, so that the encoder outputs encoded frame data according to the frequency information after each adjustment.
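The three steps of the first aspect can be sketched as follows (a hypothetical illustration; the class and method names are not from the disclosure, and the mapping GOP length = interval × frame rate follows the embodiments described later):

```python
from typing import Optional

# Sketch of the claimed method: (1) record the time of each lost-data
# retransmission request, (2) derive the interval between consecutive
# requests, (3) set the I-frame period (GOP length, in frames) to
# interval * frame_rate.

class GopController:
    def __init__(self, frame_rate: int):
        self.frame_rate = frame_rate
        self.last_request_time: Optional[float] = None
        self.gop_length: Optional[int] = None  # frames between forced I frames

    def on_retransmission_request(self, now: float) -> Optional[int]:
        """Called whenever the receiving end requests retransmission."""
        if self.last_request_time is not None:
            interval = now - self.last_request_time                      # step 2
            self.gop_length = max(1, round(interval * self.frame_rate))  # step 3
        self.last_request_time = now                                     # step 1
        return self.gop_length

ctrl = GopController(frame_rate=60)
ctrl.on_retransmission_request(0.0)         # first request: interval unknown
print(ctrl.on_retransmission_request(2.0))  # 2 s interval -> GOP of 120 frames
```

Frequent requests (poor network) shrink the GOP so I frames arrive often; rare requests (good network) grow it, saving bitrate.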
In a second aspect, the present disclosure provides a transmission apparatus for video data, including:
an acquisition module configured to acquire, while the encoded frame data output by the encoder is transmitted in real time to a receiving end, the time point at which the receiving end requests retransmission of lost data each time;
a determining module configured to determine, based on these time points, the time interval between successive requests;
and an adjusting module configured to dynamically adjust, according to the time interval and the video frame rate, the frequency at which the encoder outputs I frames, so that the encoder outputs encoded frame data according to the frequency information after each adjustment.
In a third aspect, the present disclosure provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the method for transmitting video data according to the first aspect.
In a fourth aspect, the present disclosure provides an electronic device including a storage medium, a processor, and a computer program stored on the storage medium and executable on the processor, wherein the processor implements the video data transmission method according to the first aspect when executing the computer program.
By means of the above technical solution, compared with the conventional packet-loss handling approach, the video data transmission method, device, and electronic equipment of the present disclosure can dynamically adjust the frequency at which the encoder outputs I frames according to actual network conditions. Specifically, while transmitting the encoded frame data output by the encoder in real time to the receiving end, the time interval between successive requests is determined from the time points at which the receiving end requests retransmission of lost data; the frequency at which the encoder outputs I frames is then dynamically adjusted according to this time interval and the video frame rate, so that the encoder outputs encoded frame data according to the frequency information after each adjustment. By applying the disclosed technical solution, the need for retransmission of lost data can be met automatically, thereby reducing the network delay caused by packet loss and improving streaming efficiency. This effectively mitigates the technical problem that the conventional packet-loss handling approach resists packet loss poorly and degrades video data transmission when the packet loss rate fluctuates widely.
The foregoing is merely an overview of the technical solutions of the present disclosure, which may be implemented according to the content of the specification. To make the technical means of the present disclosure more clearly understood, and to make the above and other objects, features, and advantages of the present disclosure more comprehensible, specific embodiments of the present disclosure are described below.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure.
In order to more clearly illustrate the embodiments of the present disclosure or the solutions in the prior art, the drawings that are required for the description of the embodiments or the prior art will be briefly described below, and it will be obvious to those skilled in the art that other drawings can be obtained from these drawings without inventive effort.
Fig. 1 is a schematic flow chart of a video data transmission method according to an embodiment of the disclosure;
fig. 2 is a flowchart illustrating another method for transmitting video data according to an embodiment of the present disclosure;
FIG. 3 shows a schematic diagram of a setup phase provided by an embodiment of the present disclosure;
FIG. 4 illustrates a schematic diagram of a dynamic adjustment phase provided by an embodiment of the present disclosure;
fig. 5 shows a schematic flow chart of an encoder dynamic configuration GOP provided by an embodiment of the present disclosure;
FIG. 6 shows a schematic diagram of the interaction flow before improvement, provided by an embodiment of the present disclosure;
FIG. 7 shows a dynamic diagram of GopLength provided by an embodiment of the present disclosure;
FIG. 8 illustrates a schematic diagram of an improved interaction flow provided by an embodiment of the present disclosure;
fig. 9 is a schematic diagram illustrating a dynamic adjustment process of GopLength in an ideal network state according to an embodiment of the present disclosure;
fig. 10 is a schematic diagram illustrating a dynamic adjustment process of GopLength in a limited network state according to an embodiment of the present disclosure;
fig. 11 is a schematic structural diagram of a video data transmission apparatus according to an embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. It should be noted that, without conflict, the embodiments of the present disclosure and features of the embodiments may be combined with each other.
To solve the technical problem that the conventional packet-loss handling method resists packet loss poorly when the packet loss rate fluctuates widely, thereby degrading video data transmission, the present embodiment provides a video data transmission method, shown in fig. 1, which can be applied on the transmitting-end side of video data. The method includes:
step 101, acquiring a time point when the receiving end requests to lose data retransmission each time in the process that the transmitting end transmits the encoded frame data output by the encoder in real time to the receiving end.
Video relies on the persistence of human vision: playing a series of pictures in quick succession creates the perception of motion. Transmitting raw video pictures directly would involve a data volume far too large for existing networks and storage. Because video contains a large amount of repeated information, removing that redundancy at the transmitting end and restoring it at the receiving end greatly reduces the size of the video data; the H.264 video compression standard is built on this idea. In the H.264 compression standard, I, P, and B frames are used to represent the transmitted video pictures. In this embodiment, the encoded frame data output in real time by the encoder (a video encoder) may include I frames, P frames, and B frames. The transmitting end transmits this real-time encoded frame data to the receiving end; under poor network conditions, the receiving end requests retransmission of lost data from the transmitting end. The transmitting end in this embodiment may record the time point of each such retransmission request, for analyzing the network conditions of the data transmission.
Step 102, determining the time interval between successive requests, based on the time points at which the receiving end requests retransmission of lost data.
The receiving end requests retransmission whenever data is lost while receiving the video data sent by the transmitting end. In this embodiment, the network condition of the data transmission can be judged from the time interval between successive retransmission requests, and accurate dynamic adjustment can then be performed based on that condition to ensure efficient video data transmission, as in the process shown in step 103.
Step 103, dynamically adjusting the frequency at which the encoder outputs I frames, according to the video frame rate and the time interval between successive requests.
Further, the encoder is caused to output encoded frame data according to the frequency information after each adjustment. The video frame rate here is the frame rate of the transmitted video data.
In a video coding sequence, the group of pictures (Group of Pictures, GOP) of a video encoder refers to the distance between two I frames, and the reference period refers to the distance between two P frames. An I frame occupies more bytes than a P frame, and a P frame occupies more bytes than a B frame. Therefore, at a fixed bit rate, a larger GOP value means more P and B frames and a larger average byte budget for each I, P, and B frame, making better image quality easier to obtain; similarly, a larger reference period means more B frames and likewise makes better image quality easier to obtain.
However, there is a limit to improving picture quality by increasing the GOP value. When a scene change is encountered, the H.264 encoder automatically forces an I frame, shortening the actual GOP. Moreover, within a GOP, P and B frames are predicted from the I frame; when the I frame's image quality is poor, the quality of the following P and B frames in that GOP suffers until the next GOP starts. It is therefore inadvisable to set the GOP value too large.
In this embodiment, the GOP length of the encoder can be dynamically adjusted according to the video frame rate and the time interval between successive requests; that is, the encoder's GOP is adjusted dynamically based on network conditions, and thereby the frequency at which the encoder outputs I frames is adjusted dynamically based on network conditions. This effectively mitigates the technical problem that the conventional packet-loss handling approach resists packet loss poorly and degrades video transmission when the packet loss rate fluctuates widely.
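As a small worked illustration of how GOP length relates to I-frame frequency (the numbers are illustrative, not from the disclosure): at a fixed frame rate, the GOP length in frames divided by the frame rate gives the time between I frames.

```python
# With GopLength frames between I frames at frame_rate frames/second,
# one I frame is emitted every GopLength / frame_rate seconds.

def i_frame_period_seconds(gop_length: int, frame_rate: int) -> float:
    return gop_length / frame_rate

print(i_frame_period_seconds(120, 60))  # 2.0: one I frame every 2 seconds
print(i_frame_period_seconds(240, 60))  # 4.0: larger GOP, rarer I frames
```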
Further, as a refinement and extension of the foregoing embodiment, and to fully describe the specific implementation of the method, this embodiment provides the specific method shown in fig. 2, which includes:
step 201, in the process that the transmitting end transmits the encoded frame data output by the encoder in real time to the receiving end, a time point when the receiving end requests for lost data retransmission is obtained.
Optionally, the request for retransmission of lost data sent by the receiving end may specifically be a packet-loss retransmission request, an I-frame application request, or the like. The two kinds of request suit different application scenarios (in different scenarios, a receiving end experiencing poor network conditions may send either a packet-loss retransmission request or an I-frame application request), thereby meeting different application requirements.
Step 202, determining the time interval between successive requests, based on the time point of each retransmission request from the receiving end.
Step 203, dynamically adjusting, over at least one adjustment period, the frequency at which the encoder outputs I frames, according to the video frame rate and the time interval between successive requests.
Further, the encoder is caused to output encoded frame data according to the frequency information after each adjustment. In a specific application, each time the transmitting end receives such a request (e.g., a packet-loss retransmission request or an I-frame application request from the receiving end), it may force the current frame to be encoded as an I frame and transmit it to the receiving end.
The existing packet-loss retransmission or I-frame application mechanism has the following defects: (a) the transmitting end responds passively; for example, the data transmitting end can only decide to retransmit a data packet or an I frame after judging from the acknowledgement characters (Acknowledge character, ACK) of several consecutive data packets; (b) retransmission delay: both timeout retransmission and fast retransmission involve a waiting period and an ACK round trip.
To address these problems, the method of this embodiment provides a scheme for dynamically adjusting the encoder's GOP based on network conditions: the GOP configuration of the encoder is adjusted in real time as network conditions change, automatically meeting the need for packet-loss retransmission (I-frame retransmission), thereby reducing the network delay caused by packet loss and improving streaming efficiency. The encoder actively encodes and sends I frames according to the adjusted GOP setting, rather than sending them only after a packet-loss retransmission request is received. In other words, instead of retransmitting packets or sending I frames in response to retransmission requests, this embodiment predicts the need for I frames from the network condition and sends them proactively; the approach is anticipatory rather than reactive, eliminating the delay of the request-response-retransmission cycle.
This embodiment may be divided into at least one adjustment period. Illustratively, within a single adjustment period, the processing flow is divided into two phases: a setup phase and a dynamic adjustment phase.
Taking the current adjustment period as an example, step 203 may optionally include the following. In the setup phase, a first frequency at which the encoder outputs I frames is determined according to the video frame rate and the time interval between the first two requests (e.g., packet-loss retransmission requests or I-frame application requests sent by the receiving end) of the current adjustment period, so that the encoder outputs encoded frame data at the first frequency; the number of encoded frames sent to the receiving end after the second request is recorded, and the time point of the second of those two requests is recorded as the reference time point. In the dynamic adjustment phase, a second frequency at which the encoder outputs I frames can be determined from the number of encoded frames and the reference time point, so that the encoder outputs encoded frame data at the second frequency.
For example, determining the first frequency from the time interval between the first two requests of the current adjustment period and the video frame rate may specifically include: first, multiplying that time interval by the video frame rate; then, according to the resulting first product, adjusting the encoder's group-of-pictures length (GopLength) to a first length, and determining the first frequency of I-frame output from that first length.
Optionally, the first frequency and the second frequency may be realized by taking the remainder of the number of encoded frames divided by the encoder's group-of-pictures length (GopLength).
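The remainder rule can be sketched as follows (names are illustrative): the encoder forces an I frame whenever the running frame counter is an exact multiple of GopLength.

```python
# encoded_frame_index % gop_length == 0 marks the frames that are
# forced to be I frames; with GopLength = 200 this yields one I frame
# every 200 frames, matching the GopLength = 200 example in the text.

def should_force_i_frame(encoded_frame_index: int, gop_length: int) -> bool:
    return encoded_frame_index % gop_length == 0

i_frames = [i for i in range(601) if should_force_i_frame(i, 200)]
print(i_frames)  # [0, 200, 400, 600]
```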
For example, as shown in fig. 3, the setup phase proceeds as follows: upon a request for packet-loss retransmission or an I-frame application, dynamic GOP adjustment of the encoder begins to be established. Specifically, on the receiving-end side, when network packet loss occurs, the receiving end sends a packet-loss retransmission or I-frame application request to the transmitting end. On the transmitting-end side, after receiving this request, an I-frame request is sent to the local encoder; a timer is started to record the time interval Δt from the second reception of a packet-loss retransmission or I-frame application request; and after receiving the I-frame data produced by the encoder, the transmitting end sends it to the receiving end. On the encoder side, after an I-frame request is received, the current frame is forced to be encoded as an I frame, and a counter inside the encoder is started to record the number of encoded frames (encoded_frame_index).
As shown in fig. 4, the encoder recalculates the group-of-pictures length (GopLength) each time it outputs an I frame, i.e., it adjusts the time at which the next I frame will be encoded. Specifically, at this stage the encoder has received at least two I-frame requests, and after the setup phase completes, the timer is cleared and restarted. GopLength is recalculated from the time interval Δt at which an I-frame application is next received, e.g., GopLength = Δt × frame rate; the frequency of I-frame output is then governed by a remainder calculation, encoded_frame_index % GopLength. Taking GopLength = 200 as an example, the encoder is forced to output an I frame every 200 frames, so the actual effect obtained is an encoder GOP of 200. The data structure set in this process is the encoder's input-parameter structure; after it is set and updated, the encoder's API is called again so that the updated input parameters take effect in the encoder.
Fig. 5 shows the specific flow of the two processing phases. The node sequence of the setup phase is:
①->②->③->④->⑤->⑥->①->②->③->④->⑦->⑧->⑨->①
The node sequence of the dynamic adjustment phase falls into the following cases:
case a: no I-frame application is received within the threshold range: (1) - > (2) - > (8) - > (9) - > (1) in the cavity
Case B: no I-frame application is received outside the threshold range: (1) - > (2) - > (ja) of the first aspect
Case C: the threshold range receives an I-frame application: (1) - > (2) - > (3) - > (1) self-absorption
It should be noted that each time the encoder is forced to generate an I frame, the value of GopLength must be recalculated and updated, so that the encoder performs the next I-frame insertion according to the new GopLength value.
Corresponding to case A above, determining the second frequency of I-frame output from the number of encoded frames and the reference time point may specifically include: counting encoded frames from the start until reaching the target time range corresponding to a first-length group of pictures; if no new request is received within the target time range, subtracting the reference time point from the end of the target time range and multiplying the resulting difference by the video frame rate; then, according to the resulting second product, adjusting the encoder's group-of-pictures length to a second length, and determining the second frequency of I-frame output from that second length.
For example, assume the frame rate is 60 and the time threshold is 60 seconds; mark the time point of the first I-frame request as 0, and suppose the second I-frame request of the setup phase arrives at second 2. The first GopLength value obtained in the setup phase is (2-0)×60 = 120, and once it is established, the current time point is set to T0 = second 2. If no I-frame request is received within the next 120-frame range, then, per the last setting GopLength = 120, the encoder emits an I frame at second 4 and readjusts GopLength = (4-T0)×60 = (4-2)×60 = 120.
If no I-frame request has been received within the next 120 frame time range, then the encoder will readjust the value of goldenght while the encoder issues the I-frame at 6 th, goldenght= (6-T0) ×60= (6-2) ×60=240, according to the last goldenght=120 setting. Next, if the I-frame request has not been received, it is sequentially available that:
GopLength = (4 − T0) × 60 = (4 − 2) × 60 = 120
GopLength = (6 − T0) × 60 = (6 − 2) × 60 = 240
GopLength = (10 − T0) × 60 = (10 − 2) × 60 = 480
GopLength = (18 − T0) × 60 = (18 − 2) × 60 = 960
……
GopLength = Δt × 60 (note: Δt represents the time difference between the current moment and the moment when the first GopLength was established.)
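The stepwise growth above can be reproduced with a short simulation. The sketch below assumes the example's values (frame rate 60, T0 = 2 s) and uses illustrative variable names; it is not taken from the patent itself.

```python
FRAME_RATE = 60          # frames per second, as in the example
T0 = 2.0                 # reference time point: second I-frame request (seconds)

gop_length = 120         # first GopLength obtained in the setup stage
t = T0                   # current time in seconds
history = []

# Each round: a full GOP of frames elapses with no new I-frame request,
# then GopLength is recomputed as (current time - T0) * frame rate.
for _ in range(4):
    t += gop_length / FRAME_RATE
    gop_length = int((t - T0) * FRAME_RATE)
    history.append(gop_length)

print(history)  # [120, 240, 480, 960], matching the sequence above
```

Because each window lasts GopLength / frame rate seconds, the recomputed value doubles once the first window has passed, which is the stepwise growth the text describes.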
Through this alternative, when the network condition recovers from a poor state, the image group length (GopLength) of the video encoder can be increased step by step, thereby saving resources progressively while ensuring video data transmission quality.
Corresponding to case B above, determining the second frequency of I frames output by the encoder according to the number of encoded frames and the reference time point may further include: if no new request is received within the preset time range, adjusting the encoder image group length to an infinite length, and determining the second frequency of I frames output by the encoder according to the infinite length.
For example, assume that the frame rate = 60, the time threshold is 60 seconds, the time point when the I-frame request is received for the first time is marked as 0, and the I-frame request is received for the second time at the 2nd second of the setup phase. The first value of GopLength obtained in the setup phase is (2 − 0) × 60 = 120, and after it is established the current time point is recorded as T0 = the 2nd second. If no new I-frame request is received within the following 60-second range, i.e., the time interval exceeds the preset time threshold, the value of GopLength may be adjusted to an infinite GOP, i.e., no further I frames are output.
Through the above alternative, when the network condition is extremely good (no packet loss, sufficient bandwidth, and very small delay), the GOP of the encoder can be set to infinite, i.e., only the first frame needs to be an I frame, and the remaining frames are P frames or B frames. In actual operation, a threshold can be set: if the value of Δt exceeds the set time threshold, it means that no packet loss has occurred for a sufficiently long time and the network state is excellent, so the encoder can set the value of GopLength to unlimited, thereby saving resources to the greatest extent while ensuring video data transmission quality.
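A minimal sketch of this rule, assuming the example's 60-second threshold; `math.inf` stands in for the "infinite GOP" setting, and the function name is illustrative rather than from the patent:

```python
import math

TIME_THRESHOLD = 60.0    # seconds, as in the example
FRAME_RATE = 60

def next_gop_length(delta_t):
    """delta_t: seconds elapsed since the reference point T0 without a new request."""
    if delta_t > TIME_THRESHOLD:
        return math.inf  # infinite GOP: only the first frame is an I-frame
    return max(1, int(delta_t * FRAME_RATE))

print(next_gop_length(2))    # 120, as in the setup example
print(next_gop_length(61))   # inf: threshold exceeded, stop emitting I-frames
```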
In addition to the above, optionally, determining the second frequency of I frames output by the encoder according to the number of encoded frames and the reference time point may specifically include: if a new request is received within a preset time range after the reference time point, subtracting the reference time point from the time point when the new request is received and multiplying the resulting difference by the video frame rate; and then adjusting the encoder image group length to a third length according to the resulting third product, and determining the second frequency of I frames output by the encoder according to the third length.
For example, assume that the frame rate = 60, the time threshold is 60 seconds, the time point when the I-frame request is received for the first time is marked as 0, and the I-frame request is received for the second time at the 2nd second of the setup phase. The first value of GopLength obtained in the setup phase is (2 − 0) × 60 = 120, and after it is established the current time point is recorded as T0 = the 2nd second. If the I-frame request is received for the third time at the 15th second, which does not exceed the set time threshold and is still within the effective range of the current round of adjustment, the value of GopLength is adjusted to (15 − 2) × 60 = 780.
Compared with the adjustment mode of case A, this mode can adjust the encoder image group length over a large span when the network condition is good but not extremely good, and can effectively save resources while ensuring video data transmission quality.
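The in-range adjustment reduces to a single multiplication; the sketch below restates the worked example (all values assumed from the text above):

```python
FRAME_RATE = 60
T0 = 2            # reference time point (seconds)
t_request = 15    # third I-frame request, received within the 60 s threshold

# One large-span jump, instead of the stepwise doubling of case A.
third_length = (t_request - T0) * FRAME_RATE
print(third_length)  # 780, matching the example
```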
Corresponding to case C above, step 203 may specifically further include: if a new request is received after the preset time range, determining to enter the next adjustment period following the current adjustment period, and dynamically adjusting the frequency information of I frames output by the encoder.
For example, assume that the frame rate = 60, the time threshold is 60 seconds, the time point when the I-frame request is received for the first time is marked as 0, and the I-frame request is received for the second time at the 2nd second of the setup phase. The first value of GopLength obtained in the setup phase is (2 − 0) × 60 = 120, and after it is established the current time point is recorded as T0 = the 2nd second. If the I-frame request is received for the third time at the 65th second, the set 60-second time threshold is exceeded and the next round of adjustment begins: the "setup phase" must be entered again, the timer is cleared, the received I-frame request is recorded as the first one, and its time point is marked as 0. It is then necessary to wait for a second I-frame request to complete the setup of this round.
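The round-based behavior can be sketched as a small state machine. This is a sketch under the example's assumptions (60 s threshold, 60 fps); the class and attribute names are illustrative, not from the patent:

```python
import math

TIME_THRESHOLD = 60.0  # seconds; the example's preset threshold
FRAME_RATE = 60

class GopController:
    """Sketch of the setup / dynamic-adjustment rounds described in the text."""
    def __init__(self):
        self.first_request_time = None  # set by the first request of a round
        self.reference_time = None      # T0: set by the second request of a round
        self.gop_length = math.inf

    def on_iframe_request(self, t):
        # Case C: a request after the threshold ends the round and restarts setup.
        if self.reference_time is not None and t - self.reference_time > TIME_THRESHOLD:
            self.first_request_time = None
            self.reference_time = None
        if self.first_request_time is None:
            self.first_request_time = t  # recorded as time 0 of the new round
        elif self.reference_time is None:
            self.reference_time = t
            self.gop_length = max(1, int((t - self.first_request_time) * FRAME_RATE))
        else:
            # an in-range request: re-derive GopLength from the new interval
            self.gop_length = max(1, int((t - self.reference_time) * FRAME_RATE))

c = GopController()
c.on_iframe_request(0)    # first request of the round
c.on_iframe_request(2)    # second request: GopLength = (2 - 0) * 60 = 120
print(c.gop_length)       # 120
c.on_iframe_request(65)   # 65 - 2 > 60: the round restarts at its setup stage
print(c.reference_time)   # None -- waiting for this round's second request
```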
In the process of dynamically adjusting the encoder image group length, limit situations may arise. Further optionally, the method of this embodiment may also include: if the adjusted encoder image group length is smaller than a preset threshold, setting the adjusted encoder image group length to the preset threshold. Under extremely bad network conditions, the network is congested, the bandwidth is extremely small, and packet loss is severe, so every frame output by the encoder is an I frame, i.e., the encoding degenerates to intra-only coding. In actual operation, if the network situation is extremely bad, the obtained Δt is extremely small; when the GopLength calculated from Δt is smaller than 1, the value of GopLength must be forcibly set to 1.
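The limit case amounts to putting a floor on the computed value; a sketch with illustrative constant names:

```python
FRAME_RATE = 60
MIN_GOP = 1          # preset lower threshold: at worst, every frame is an I-frame

delta_t = 0.01       # extremely poor network: requests arrive almost every frame
raw = int(delta_t * FRAME_RATE)   # computes to 0, below the valid minimum
gop_length = max(MIN_GOP, raw)    # forcibly set to the preset threshold
print(gop_length)    # 1
```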
To illustrate the specific implementation of the above embodiments, the following application scenario is given by way of example and not limitation:
In a scenario where a Virtual Reality (VR) all-in-one headset is used, VR video data may be transmitted between the VR device and a server. In the prior art, the GOP is a fixed constant: the GOP of the encoder is already determined when the encoder is created, and the encoder does not support dynamic configuration of this parameter. In an infinite-GOP scenario, an I frame is inserted upon a terminal request: if the encoder is configured with an infinite GOP in an ultra-low-delay scenario and packet loss occurs, the receiving end sends a packet-loss retransmission request or an I-frame application; after the sending end receives the request, it forces the encoder to encode the current frame as an I frame for output.
The GopLength in the prior art can be described by the following formula (1):
GopLength = C (a fixed constant)    formula (1)
The interaction flow in the prior art can be shown in Fig. 6.
The video data transmission method provided by this embodiment can be divided into two stages: a setup stage and a dynamic adjustment stage. In the setup stage, an I frame is inserted at the encoder end according to the request from the receiving end, and an encoded-frame counter and a time-interval timer are started. In the dynamic adjustment stage, the updated value of GopLength is obtained from the time difference between I-frame requests through a calculation formula, and an I frame is then inserted every GopLength frames by a "remainder" (modulo) calculation, thereby achieving the effect of setting the GOP of the encoder. Fig. 7 shows a dynamic diagram of GopLength. As can be seen from Fig. 7, the curve portion corresponding to values less than t0 can be expressed by the following formula (2):
GopLength = Δt × frame rate    formula (2)
Note that the start value of the curve portion less than t0 is not 0; in general, I-frame applications do not occur in two consecutive frames, i.e., I-frame applications are separated by a certain interval.
The portion of the curve corresponding to values greater than t0 can be expressed by the following formula (3):
GopLength = e^(t − Δt)    formula (3)
With t0 as a threshold, the blue sloped line before t0 indicates that GopLength increases in direct proportion, with the frame rate as the growth coefficient. The red curve after t0 indicates that GopLength increases exponentially; the growth is not limited to an exponential and may also be another power function. Once the set threshold is exceeded, GopLength is set to an infinite length in as short a time as possible.
Finally, the combined formula (4) is as follows:
GopLength = Δt × frame rate (Δt ≤ t0); GopLength = e^(t − Δt) (Δt > t0)    formula (4)
The interaction flow after adopting the method of this embodiment can be shown in Fig. 8.
(1) Stage T0 to T1: the setup stage;
(2) Stage T1 to Tn: the dynamic adjustment stage;
(3) In the dynamic adjustment stage, the encoder forcibly outputs I frames at the frequency given by GopLength = Δt × frame rate;
(4) If the value of Δt is greater than the preset threshold t0, the GopLength of the encoder output is set to an infinite GOP.
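Steps (1)-(4) rely on the "remainder" calculation mentioned earlier to decide when an I frame is actually inserted. A sketch, with `math.inf` standing in for the infinite-GOP setting and an illustrative function name:

```python
import math

def should_force_iframe(frame_count, gop_length):
    """Remainder method: force an I-frame every gop_length encoded frames."""
    if math.isinf(gop_length):
        return frame_count == 0  # infinite GOP: only the very first frame
    return frame_count % gop_length == 0

# With GopLength = 120, frames 0, 120, 240, ... are forced I-frames.
print([n for n in range(361) if should_force_iframe(n, 120)])  # [0, 120, 240, 360]
```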
Because the network state constantly changes with environmental factors, such as the number of users online in a cell, the number of network access devices, downloads of large files such as videos, and popular live or on-demand video, the dynamic adjustment of GopLength realized by this scheme is in practice a cyclically repeating process. Network conditions also differ between users; taking 10000 MB and 100 MB bandwidth as examples, the GOP dynamic adjustment effect is shown in Figs. 9 and 10. As shown in Fig. 9, when the network state is good and data packet loss occurs only occasionally, the GopLength dynamic adjustment process shows the following trend and can reach the ideal state of an infinite GOP. As shown in Fig. 10, when the network is congested and bandwidth is insufficient, packet loss occurs frequently, and the GopLength dynamic adjustment process shows the following trend.
Compared with the prior art, this embodiment can dynamically adjust the frequency information of I frames output by the encoder according to actual network conditions. It can automatically meet the demand for retransmission of lost data, reduce the network delay caused by packet loss, and improve streaming efficiency. It thus effectively alleviates the technical problems of the traditional packet-loss processing mode, namely poor resistance to packet loss when the packet loss rate fluctuates greatly, which affects the video data transmission effect.
Further, as a specific implementation of the methods shown in Fig. 1 and Fig. 2, this embodiment provides a video data transmission apparatus. As shown in Fig. 11, the apparatus includes: an acquisition module 31, a determination module 32, and an adjustment module 33.
The acquisition module 31 is configured to acquire, in the process of transmitting the encoded frame data output by the encoder to the receiving end in real time, the time point at which the receiving end requests lost-data retransmission each time;
the determination module 32 is configured to determine, based on the time points, the time interval each time the request is received;
and the adjustment module 33 is configured to dynamically adjust the frequency information of I frames output by the encoder according to the time interval and the video frame rate, so that the encoder outputs the encoded frame data according to the frequency information after each adjustment.
In a specific application scenario, the adjustment module 33 is specifically configured to dynamically adjust the frequency information of the I-frame output by the encoder according to the time interval and the video frame rate, through at least one adjustment period.
In a specific application scenario, the adjusting module 33 is specifically further configured to determine a first frequency of the output I frame of the encoder according to the time interval when the request is received twice before in the current adjusting period and the video frame rate, so that the encoder outputs the encoded frame data according to the first frequency; recording the number of the coded frames sent to the receiving end after the request is received for the second time, and recording the time point when the request is received for the second time in the previous two times as a reference time point; and determining a second frequency of the I frame output by the encoder according to the number of the encoded frames and the reference time point, so that the encoder outputs the encoded frame data according to the second frequency.
In a specific application scenario, the adjustment module 33 is specifically further configured to multiply the video frame rate by the time interval when the request was received twice before in the current adjustment period; and adjusting the length of the encoder image group to be a first length according to the obtained first product, and determining the first frequency according to the first length.
In a specific application scenario, the adjustment module 33 is specifically further configured to determine a target time range when the number of encoded frames is counted from the beginning until the number of encoded frames corresponding to the first length of the image group is reached; if the new request is not received in the target time range, subtracting the reference time point from the time end point of the target time range, and multiplying the obtained difference by the video frame rate; and adjusting the length of the encoder image group to be a second length according to the obtained second product, and determining the second frequency according to the second length.
In a specific application scenario, the adjusting module 33 is specifically further configured to subtract the reference time point from the time point when the new request is received if the new request is received within a preset time range after the reference time point, and multiply the obtained difference by the video frame rate; and adjusting the length of the encoder image group to be a third length according to the obtained third product, and determining the second frequency according to the third length.
In a specific application scenario, the adjusting module 33 is specifically further configured to adjust the encoder image group length to an infinite length if the new request is not received within the preset time range, and determine the second frequency according to the infinite length.
In a specific application scenario, the adjusting module 33 is specifically further configured to determine to enter a next adjusting period of the current adjusting period, and dynamically adjust the frequency information of the output I frame of the encoder, if a new request is received within a time after the preset time range.
In a specific application scenario, optionally, the first frequency and the second frequency are obtained by performing a remainder calculation on the number of the encoded frames and the length of the encoder image group.
In a specific application scenario, the adjusting module 33 is further configured to set the adjusted encoder image group length to the preset threshold value if the encoder image group length is adjusted to be smaller than the preset threshold value.
In a specific application scenario, the determining module 32 is further configured to determine, each time the request is received, the encoded frame to be transmitted in real time as an I frame to be transmitted to the receiving end.
In a specific application scenario, optionally, the request includes: a packet loss retransmission request, or an I-frame application request.
It should be noted that for other corresponding descriptions of the functional units of the video data transmission apparatus provided in this embodiment, reference may be made to the corresponding descriptions in Fig. 1 and Fig. 2, which are not repeated here.
Based on the above-described methods shown in fig. 1 and 2, correspondingly, the present embodiment further provides a computer-readable storage medium having a computer program stored thereon, which when executed by a processor, implements the above-described video data transmission method shown in fig. 1 and 2.
Based on such understanding, the technical solution of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (may be a CD-ROM, a U-disk, a mobile hard disk, etc.), and includes several instructions for causing a computer device (may be a personal computer, a server, or a network device, etc.) to execute the method of each implementation scenario of the present disclosure.
Based on the methods shown in fig. 1 and fig. 2 and the virtual device embodiment shown in fig. 11, in order to achieve the above objects, the disclosed embodiment further provides an electronic device, which includes a storage medium and a processor; a storage medium storing a computer program; a processor for executing a computer program to implement the above-described video data transmission method as shown in fig. 1 and 2.
Optionally, the entity device may further include a user interface, a network interface, a camera, a Radio Frequency (RF) circuit, a sensor, an audio circuit, a WI-FI module, and so on. The user interface may include a Display screen (Display), an input unit such as a Keyboard (Keyboard), etc., and the optional user interface may also include a USB interface, a card reader interface, etc. The network interface may optionally include a standard wired interface, a wireless interface (e.g., WI-FI interface), etc.
It will be appreciated by those skilled in the art that the physical device structure provided in this embodiment does not constitute a limitation on the physical device, which may include more or fewer components, combine certain components, or arrange the components differently.
The storage medium may also include an operating system, a network communication module. The operating system is a program that manages the physical device hardware and software resources described above, supporting the execution of information handling programs and other software and/or programs. The network communication module is used for realizing communication among all components in the storage medium and communication with other hardware and software in the information processing entity equipment.
From the above description of embodiments, it will be apparent to those skilled in the art that the present disclosure may be implemented by means of software plus necessary general hardware platforms, or may be implemented by hardware. By applying the scheme of the embodiment, compared with the prior art, the frequency information of the I frame output by the encoder can be dynamically adjusted according to different network actual conditions. The method can automatically meet the requirement of retransmission of lost data, further reduce network delay caused by packet loss and improve streaming efficiency. Therefore, the technical problems that when the fluctuation of the packet loss rate is large, the packet loss resistance is poor and the video data transmission effect is affected in the traditional data packet loss processing mode can be effectively improved.
It should be noted that in this document, relational terms such as "first" and "second" and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The foregoing is merely a specific embodiment of the disclosure to enable one skilled in the art to understand or practice the disclosure. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the disclosure. Thus, the present disclosure is not intended to be limited to the embodiments shown and described herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Claims (15)
1. A method of transmitting video data, comprising:
in the process of transmitting the encoded frame data output by the encoder to the receiving end in real time, acquiring a time point at which the receiving end requests lost-data retransmission each time;
determining a time interval each time the request is received based on the point in time;
and dynamically adjusting the frequency information of the I frame output by the encoder according to the time interval and the video frame rate, so that the encoder outputs the encoded frame data according to the frequency information after each adjustment.
2. The method of claim 1, wherein dynamically adjusting the frequency information of the encoder output I-frames according to the time interval and video frame rate comprises:
and dynamically adjusting the frequency information of the output I frame of the encoder according to the time interval and the video frame rate through at least one adjustment period.
3. The method according to claim 2, wherein the dynamically adjusting the frequency information of the encoder output I-frame over at least one adjustment period according to the time interval and video frame rate, comprises:
determining a first frequency of the I frame output by the encoder according to the time interval when the request is received twice before the current adjustment period and the video frame rate, so that the encoder outputs the encoded frame data according to the first frequency;
recording the number of the coded frames sent to the receiving end after the request is received for the second time, and recording the time point when the request is received for the second time in the previous two times as a reference time point;
and determining a second frequency of the I frame output by the encoder according to the number of the encoded frames and the reference time point, so that the encoder outputs the encoded frame data according to the second frequency.
4. A method according to claim 3, wherein said determining a first frequency of the encoder output I-frames based on the video frame rate and a time interval when the request was received twice before a current adjustment period, in particular comprises:
multiplying the video frame rate by the time interval when the request is received twice before the current adjustment period;
and adjusting the length of the encoder image group to be a first length according to the obtained first product, and determining the first frequency according to the first length.
5. The method according to claim 4, wherein determining the second frequency of the I-frame output by the encoder according to the number of encoded frames and the reference time point comprises:
counting the number of the encoded frames from the beginning until the count reaches the number of encoded frames corresponding to the first image group length, thereby determining a target time range;
if the new request is not received in the target time range, subtracting the reference time point from the time end point of the target time range, and multiplying the obtained difference by the video frame rate;
and adjusting the length of the encoder image group to be a second length according to the obtained second product, and determining the second frequency according to the second length.
6. A method according to claim 3, wherein said determining a second frequency of the I-frames output by the encoder in dependence on the number of encoded frames and the reference point in time comprises:
if a new request is received within a preset time range after the reference time point, subtracting the reference time point from the time point when the new request is received, and multiplying the obtained difference by the video frame rate;
and adjusting the length of the encoder image group to be a third length according to the obtained third product, and determining the second frequency according to the third length.
7. The method according to claim 6, wherein determining the second frequency of the I-frame output by the encoder according to the number of encoded frames and the reference time point, in particular further comprises:
and if the new request is not received within the preset time range, adjusting the length of the encoder image group to be an infinite length, and determining the second frequency according to the infinite length.
8. The method according to claim 7, wherein the dynamically adjusting the frequency information of the encoder output I-frame over at least one adjustment period according to the time interval and video frame rate, in particular further comprises:
and if the new request is received within the time after the preset time range, determining to enter the next adjustment period of the current adjustment period, and dynamically adjusting the frequency information of the I frame output by the encoder.
9. The method according to any one of claims 3-8, wherein the first frequency and the second frequency are obtained by taking the remainder of the number of encoded frames and the encoder image group length.
10. The method according to claim 9, wherein the method further comprises:
and if the adjusted length of the encoder image group is smaller than a preset threshold value, setting the adjusted length of the encoder image group as the preset threshold value.
11. The method according to claim 1, wherein the method further comprises:
and when the request is received each time, determining the coding frame to be transmitted in real time as an I frame and transmitting the I frame to the receiving end.
12. The method of claim 1, wherein the request comprises: a packet loss retransmission request, or an I-frame application request.
13. A transmission apparatus for video data, comprising:
the acquisition module is configured to acquire a time point when the receiving end requests to lose data retransmission each time in the process of transmitting the encoded frame data output by the encoder in real time to the receiving end;
a determining module configured to determine a time interval each time the request is received based on the point in time;
and the adjusting module is configured to dynamically adjust the frequency information of the I frame output by the encoder according to the time interval and the video frame rate, so that the encoder outputs the encoded frame data according to the frequency information after each adjustment.
14. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the method of any one of claims 1 to 12.
15. An electronic device comprising a storage medium, a processor and a computer program stored on the storage medium and executable on the processor, characterized in that the processor implements the method of any one of claims 1 to 12 when executing the computer program.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210551382.1A CN117135415A (en) | 2022-05-18 | 2022-05-18 | Video data transmission method and device and electronic equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117135415A true CN117135415A (en) | 2023-11-28 |
Family
ID=88858755
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination |