Dynamic optimization of wireless real-time video data flow
Field of the invention
The present invention relates to quality of service (QoS) management for mobile video applications. More specifically, the invention relates to dynamic optimization of wireless real-time video data flow, based on general predictions of mobile link characteristics.
Background of the invention
By now, the standardization of 3rd-generation wireless systems is almost completed in most major economic regions of the world. These systems are known under names such as IMT-2000 (ITU), UMTS (ETSI 3GPP), EDGE and ANSI 3GPP2. A large number of suitable 3G wireless devices, such as cell phones or personal digital assistants (PDAs), are commercially available. The transfer of data using the above-mentioned systems is predominantly packet-switched, mostly using the well-known internet protocol (IP). The transport of IP packets over the air interface not only extends the reach of the internet, it also opens the opportunity to migrate all communication to a packet-switched environment.
Using the IP protocol for mobile radio networks has its challenges, which are mainly due to frequent changes in the quality of the connection between the mobile user and the corresponding base stations. These changes are a result of a number of complex factors, such as geographical factors, meteorological factors or the movement of the mobile user, resulting in frequent network cell changes.
The impact of these frequent changes in the quality of connection, resulting, e.g., in a frequent change of the bit error rate (BER) or the frame loss rate (FLR), on typical real-time applications (mostly including voice and/or video packet transfer) strongly depends on the application itself: In email data transfer, e.g., the reliability of the packet transfer is an essential factor, whereas, e.g., the speed of the transfer is of minor importance. For real-time audio and video applications, on the other hand, the delay of the data packets has to be minimized, since the mobile user regards delayed sequences as highly disturbing, whereas missing packets (resulting, e.g., in "crackling" voice transfer) are less noticeable.
In PCT/EP 02/03018 and DE 102 47 581, quality of service state predictors are described, which use a method for predicting link quality parameters in 2.5G and 3G mobile access networks. The predicted link quality parameters are used for controlling lower layer corrective mechanisms such as transmission power control, to aid QoS systems and applications in their quality management process. E.g., the codec mode and the bit rate can be adapted according to the current and predicted link quality.
Nevertheless, since video encoding is largely different from the encoding schemes of voice packets, the method for adapting the data transfer described in PCT/EP 02/03018 or DE 102 47 581 is rather limited with regard to video streaming.
Modern video encoders and decoders, called codecs, such as the H.264 video codec, include features for adapting a video stream to a wide range of different links. Note that the terms "video" as well as "video streaming", in this context, are used in a technical meaning: They do not include audio and audio transmission but only the (moving) pictures.
The features for adapting a video stream also include features for adapting to links with low bandwidth and/or high bit error rate. In prior art solutions, this adaptation is static during the video session, i.e., once the parameters are fixed for a transmission session, they are not changed again until the transmission is over. This means that the video stream will be adapted to the underlying link only at the beginning of the video transmission.
Summary of the invention
It is therefore an object of the present invention to provide a method and a system for dynamic optimization of wireless real-time video data flow.
The invention is based on the finding that the quality of the video transmission can be improved dramatically if the adaptation is done dynamically during the transmission, knowing the actual status of the underlying link (e.g., bandwidth, bandwidth variations, delay, jitter, bit error rate) or even predicting the (near) future development of these parameters.
Other objects and advantages of the present invention may be ascertained from a reading of the specification and appended claims in conjunction with the drawings.
Part of the invention is a method for dynamically adapting a video stream during the transmission. The preferred embodiments of the invention are set forth in the dependent claims.
A method for dynamic optimization of real-time video data flow between a mobile device, such as a personal digital assistant (PDA) or a 3G cellular phone, and a wireless communication network is disclosed. The mobile device is assumed to comprise at least one application generating and encoding video data using at least one codec. Further, a hardware system is disclosed, enabling the realization of the method in one of the listed variations.
As is standard for video data encoding, the encoded video data is assumed to comprise P-frames and I-frames. I-frames include the full (compressed) video picture. For a device receiving and decoding video data, the decoder does not need information on previously sent frames in order to decode an I-frame. The picture used for encoding this I-frame is the starting point for P-frame generation. P-frames include information on how parts of previous I-frames are changing (e.g., shifts, size variations, etc.). This information is mainly vector based. Without the previously generated I-frame it is not possible to decode a video picture using only a P-frame.
Generally, the amount of data of a P-frame is considerably smaller than the amount of data of an I-frame. Consequently, the transmission of P-frames requires a lower bandwidth than the transmission of I-frames.
In more or less regular temporal intervals, I-frames are sent, whereas in between, only P-frames are transmitted, in order to reduce the overall amount of transmitted data. Thus, e.g., every 200 P-frames, one I-frame is generated and transmitted.
The loss of a P-frame results in the loss of one decoded video picture, thus reducing the decoded video frame rate and deteriorating the quality of the video. The loss of an I-frame results in the loss of a whole sequence of video pictures.
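The I-frame/P-frame dependency described above can be sketched as follows. This is an illustrative model only, not H.264 decoding; the frame representation (tuples of kind and payload) and string concatenation standing in for motion compensation are assumptions made for the sketch.

```python
def decode_stream(frames):
    """Decode a list of ('I', data) / ('P', delta) tuples.

    A P-frame can only be decoded relative to a previously received
    I-frame; P-frames arriving before any I-frame (or after a lost one)
    are undecodable, so the whole dependent sequence is lost.
    """
    reference = None          # last successfully decoded I-frame
    decoded = []
    for kind, payload in frames:
        if kind == 'I':
            reference = payload                  # full picture: new sync point
            decoded.append(payload)
        elif reference is not None:
            decoded.append(reference + payload)  # apply vector-based delta
        # else: P-frame without its I-frame -> dropped
    return decoded

# The P-frame before the I-frame is undecodable and silently lost:
print(decode_stream([('P', '+d1'), ('I', 'pic'), ('P', '+d2')]))  # -> ['pic', 'pic+d2']
```

This makes concrete why losing a single I-frame costs a whole sequence of pictures, while losing a P-frame costs only one.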
The method described in the following comprises several steps. These steps do not necessarily have to be taken in the given order. One or more steps can be performed in parallel. Additional steps not listed can be performed.
First, a set of actual parameters indicating the current state of the data flow is acquired. These parameters preferably comprise one or more of the following:
- parameters of the air link, such as the coding scheme, and/or
- parameters of a transmission protocol stack, and/or
- available bandwidth, and/or
- maximum buffer sizes, and/or
- buffer fill levels, and/or
- information about PDP (Packet Data Protocol) contexts, e.g. quality of service (QoS) settings for the PDP context, and/or
- radio resource management information, and/or
- received signal code power (RSCP), and/or
- signal to interference ratio (SIR), and/or
- received signal strength indicator (RSSI), and/or
- signal strength of the wireless connection, and/or
- traffic volume measurement, and/or
- position of the mobile device, and/or
- altitude of the mobile device, and/or
- direction of the mobile device, and/or
- velocity of the mobile device, and/or
- block size (i.e. the size of the Data Link Layer transmission blocks; IP packets are cut into blocks, and the blocks are transferred to the receiving side), and/or
- block error rate (similar to the bit error rate, but seen for the whole Data Link Layer transmission block), and/or
- the codec employed, and/or
- the compression of header data, and/or
- bit error rate, and/or
- frame loss rate, and/or
- transmission delay.
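A subset of the parameters listed above could be gathered into a single structure, for example as in the following sketch. The field names, the sample values and the `acquire()` helper are invented for illustration; in a real system the values would be read from the control modules of the respective OSI layers.

```python
from dataclasses import dataclass

@dataclass
class ActualParameters:
    """A (partial) snapshot of the actual parameters of the data flow."""
    coding_scheme: int          # e.g. GPRS CS-1 .. CS-4
    available_bandwidth: float  # kbit/s
    buffer_fill_level: float    # 0.0 .. 1.0
    rssi_dbm: float             # received signal strength indicator
    bit_error_rate: float
    frame_loss_rate: float
    transmission_delay_ms: float

def acquire():
    # Fixed sample values standing in for a readout of the layer
    # control modules.
    return ActualParameters(
        coding_scheme=2, available_bandwidth=64.0, buffer_fill_level=0.3,
        rssi_dbm=-85.0, bit_error_rate=1e-4, frame_loss_rate=0.01,
        transmission_delay_ms=120.0)

print(acquire().available_bandwidth)  # -> 64.0
```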
In a next step, on the basis of these actual parameters and other necessary data, a prediction of a future state of the data flow is made for a given time-interval. As an example, this prediction may refer to one or more of the following types of information on the data flow:
- predictions related to cell reselections, and/or
- predictions related to throughput, and/or
- predictions related to signal to interference ratio (SIR), and/or
- predictions related to bit error rate, and/or
- predictions related to the suitable coding scheme (in GPRS and EDGE, at least, a certain number of bits, i.e. a transmission block, can be transferred in one timeslot. Some of the bits can be used for the data transfer, others are used to secure these data bits. There are different coding schemes (4 in GPRS, 9 in EDGE) which stand for different ratios of data bits to protection bits. This means: using coding scheme 1, you have the highest protection for the data bits but the lowest number of usable bits for data transfer; coding scheme 4 provides less protection but more bits for data transfer, resulting in a higher bandwidth), and/or
- predictions related to transmission delay, and/or
- predictions related to block error rate (an error in a transmission block, if the above-mentioned protection failed, leads to an error in the IP packet, which may lead to the loss of an IP packet; predicting this rate means predicting packet losses or damaged packets), and/or
- predictions related to round trip time, and/or
- predictions related to the increased and decreased bandwidth available for transmission of the video data.
Algorithms for calculating predictions of this type are state of the art and are disclosed, e.g., in PCT/EP 02/03018 or DE 102 47 581. The meaning of the expression "time-interval" is not necessarily restricted to actual time; it can, e.g., equally well designate an internal clock of a computer. Other time scales, not necessarily having a continuous and steady succession in time, but indicating, e.g., the progress of a transmission, might be used. A widely used time scale is the TTI time scale (transmission time interval, i.e. 10 or 20 ms per interval).
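The predictor algorithms of PCT/EP 02/03018 and DE 102 47 581 are not reproduced here; as a generic stand-in, a simple exponentially weighted moving average over recent per-TTI measurements can illustrate the idea of forecasting a link parameter for the next interval:

```python
def predict_ewma(samples, alpha=0.5):
    """Estimate the next value of a link parameter (e.g. the BER per
    TTI) via an exponentially weighted moving average of past samples.
    This is an illustrative stand-in, not the predictor of the cited
    applications; alpha is an assumed smoothing factor."""
    estimate = samples[0]
    for s in samples[1:]:
        estimate = alpha * s + (1 - alpha) * estimate
    return estimate

# Predicted BER for the upcoming TTI from the last three measured TTIs:
print(predict_ewma([1e-5, 2e-5, 4e-5]))
```

A rising sequence of measurements pulls the estimate upward, so a worsening link is anticipated before it fully materializes.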
In a next step, one or more measures are taken in order to dynamically adapt the video data flow to the predicted state of the data flow during the given time-interval in the near future, especially in order to reduce the risk of loss of important video data. The goal of this adaptation is to provide the best video quality on the receiving side for each situation, i.e. for each possible state of the link or data flow quality. The measures may comprise one or more of the following steps:
- dynamic I-frame forward error correction (FEC)
- dynamic I-frame generation
- dynamic I-frame generation delaying
- dynamic P-frame rate adjustment
- dynamic quantization adaptation
These steps are described in detail in the following. Some of the steps may be performed in one or more of the described variations.
Dynamic I-frame forward error correction (FEC)
In a first preferred embodiment, the method of dynamic I-frame forward error correction (FEC) is employed. Dynamic I-frame FEC is typically used in cases of high bit error rates, with the goal to prevent the loss of an I-frame, which would result in the loss of a whole sequence of frames in a row on the decoding side. Note that P-frames based on a lost I-frame cannot be decoded.
Two possible cases may be considered for this method:
1. The loss of an I-frame during transmission is detected.
2. It can be predicted (or estimated) that the probability to lose a certain I-frame during transmission is high.
Especially (but not solely) in the first case, a copy of each recently sent I-frame is buffered, and if a loss of this I-frame during transmission is detected, the I-frame is re-transmitted. There is no direct acknowledgment for receiving an I-frame, but the RTCP packets (RTP Control Protocol or Real Time Control Protocol), used for quality feedback information for the RTP streams, can be used to give feedback from the receiver to the sender.
In a preferred embodiment, the I-frame is re-transmitted with additional forward error correction (FEC) information. This step may be combined with the additional condition that the time that has passed between the generation of the respective I-frame and the detection of its loss is not too high, i.e. remains below a given threshold.
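The buffer-and-retransmit step with the age condition can be sketched as follows. The class, its method names and the 0.5 s threshold are illustrative assumptions; the loss report would in practice arrive via RTCP feedback.

```python
class IFrameBuffer:
    """Keep a copy of the most recently sent I-frame and return it for
    retransmission (e.g. with added FEC) when a loss is reported, but
    only while the frame is still recent enough to be worth resending."""

    def __init__(self, max_age_s=0.5):   # threshold value is an assumption
        self.max_age_s = max_age_s
        self.frame = None
        self.sent_at = None

    def on_sent(self, frame, now):
        self.frame, self.sent_at = frame, now

    def on_loss_reported(self, now):
        if self.frame is not None and now - self.sent_at <= self.max_age_s:
            return self.frame    # retransmit
        return None              # too old: better to force a fresh I-frame

buf = IFrameBuffer(max_age_s=0.5)
buf.on_sent('I-frame-41', now=10.0)
print(buf.on_loss_reported(now=10.2))   # -> I-frame-41 (within threshold)
print(buf.on_loss_reported(now=11.0))   # -> None (too old)
```

Returning `None` for an aged frame corresponds to falling back to dynamic I-frame generation (a new synchronization point) instead of retransmission.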
Several forward error correction means are known to the person skilled in the art and may be used. These forward error correction algorithms include adding additional data allowing for the detection of transmission failures and the reconstruction of certain data packets by the receiving device, even if part of the transmitted data is lost during transmission. Besides forward error correction, other means of error correction usually employed in wireless data transfer can be employed accordingly. These are, e.g., convolutional coding or bit coding according to the used coding scheme (see above).
Especially (but not solely) in the second case listed above, the following method is proposed: If the prediction of the state of the data flow for a given time-interval in the future indicates a risk of loss of the frames to be transmitted within this time-interval above a pre-defined risk level, the I-frames to be transmitted within this time-interval are sent with additional FEC information. This additional FEC information increases the chance that, even if part of the transmitted data is lost, the I-frame can be reconstructed by the addressee.
In a preferred embodiment, when using the method of dynamic I-frame forward error correction, the generation of P-frames as well as the transmission of these P-frames remains independent of the predicted future state of the data flow.
Dynamic I-frame generation
In a second preferred embodiment, the method of dynamic I-frame generation is used, with the goal to send a new I-frame, i.e. a synchronization point, to the decoder of the frames in the receiving device.
Dynamic I-frame generation might especially be useful or even necessary in one of the following cases:
1. The loss of an I-frame could not be prevented and there was no possibility to re-transmit a copy of the lost I-frame.
2. The last I-frame was generated more than a given time-interval ago. In this case the video pictures are decoded on the basis of the P-frames combined with a rather "old" I-frame. This will reduce the accuracy of the P-frames, so that the difference between encoded and decoded picture increases. One reason for this situation may be a delayed I-frame generation as a consequence of a reduced available bandwidth (dynamic I-frame generation delaying, see below). If more bandwidth becomes available, the generation of an I-frame may be enforced.
Thus, the following step is proposed: If, since the last successful transmission of an I-frame, a time-period longer than a pre-defined time-period has passed, one or more control signals are generated. These control signals trigger a codec to create an I-frame at the nearest possible point in time.
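The trigger condition above reduces to a simple age check; a sketch, where the function name and the 4-second default interval are assumptions, not values from this description:

```python
def needs_forced_iframe(last_iframe_time, now, max_interval_s=4.0):
    """Return True when the last successfully transmitted I-frame is
    older than the pre-defined interval, i.e. when a control signal
    should force the codec to emit a new synchronization point."""
    return (now - last_iframe_time) > max_interval_s

print(needs_forced_iframe(last_iframe_time=0.0, now=2.0))   # -> False (recent enough)
print(needs_forced_iframe(last_iframe_time=0.0, now=5.0))   # -> True (force new sync point)
```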
In a preferred embodiment, when using the method of dynamic I-frame generation, the generation of P-frames as well as the transmission of P-frames remains independent of the predicted future state of the data flow.
Dynamic I-frame generation delaying
In a third preferred embodiment, dynamic I-frame generation delaying is used. This method is especially useful in cases where the bandwidth available for data transmission is reduced.
Thus, if the prediction of the state of the data flow for a given time-interval in the future indicates a risk of loss of the frames to be transmitted within this time-interval above a pre-defined risk level, the following measures may be taken: First, one or more control signals may be generated controlling the at least one codec not to create an I-frame within this time-interval. Alternatively or additionally, I-frames to be sent within this time-interval are buffered, and the transmission of these I-frames is delayed until the time-interval has passed.
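The buffering variant can be sketched as a small gatekeeper in front of the transmit path. The class, the risk threshold value and the string-prefix frame labels are illustrative assumptions.

```python
from collections import deque

class IFrameDelayer:
    """While the predicted loss risk is above a threshold, hold I-frames
    back in a buffer instead of transmitting them; once the risky
    interval has passed, release the delayed I-frames."""

    def __init__(self, risk_threshold=0.1):   # threshold is an assumption
        self.risk_threshold = risk_threshold
        self.held = deque()

    def submit(self, frame, predicted_risk):
        """Return the list of frames to transmit now for this frame."""
        risky = predicted_risk > self.risk_threshold
        if risky and frame.startswith('I'):
            self.held.append(frame)     # delay: only P-frames go out now
            return []
        released = []
        if not risky:
            released = list(self.held)  # risky interval over: flush backlog
            self.held.clear()
        return released + [frame]

d = IFrameDelayer()
print(d.submit('I1', predicted_risk=0.4))    # -> [] (I-frame held back)
print(d.submit('P1', predicted_risk=0.4))    # -> ['P1']
print(d.submit('P2', predicted_risk=0.05))   # -> ['I1', 'P2'] (delayed I-frame released)
```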
During this period of delay, only P-frames are transmitted. As described above, the transmission of P-frames typically requires a lower bandwidth. Nevertheless, the vector-based information encoded in the P-frames becomes more inaccurate with the rising "age" of the corresponding I-frame, resulting in an inaccuracy of the decoded video picture.
In a preferred embodiment, when using the method of dynamic I-frame generation delaying, the generation of P-frames as well as the transmission of P-frames remains independent of the predicted future state of the data flow.
Dynamic P-frame rate adjustment
In a fourth preferred embodiment, dynamic P-frame rate adjustment is used, preferably in those cases where the bandwidth available for data transfer is too low for a transfer of all P-frames.
Thus, if the prediction of the state of the data flow for a given time-interval in the future indicates a limitation of the amount of data that can be transmitted at a given quality below a given level, the transmission of one or more P-frames to be sent within this time-interval may be suppressed. In this case single P-frames are dropped (i.e. erased from a transmission buffer) rather than transmitted, in order to reduce the number of P-frames. Preferably, P-frames are not dropped "in sequence" but out of sequence, stochastically, in order to prevent "jumps" in the decoded video stream.
Alternatively, instead of simply suppressing the transmission of these P-frames, one or more control signals may be generated controlling the at least one codec not to create a P-frame within this time-interval or to reduce the number of P-frames to be generated within this time-interval below a certain number or rate.
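The stochastic (out-of-sequence) dropping of P-frames can be sketched as follows; the function name, the keep-ratio parameter and the fixed seed are assumptions for the illustration only.

```python
import random

def thin_p_frames(frames, keep_ratio, seed=0):
    """Drop P-frames stochastically rather than as a contiguous run, so
    the decoded video degrades gracefully instead of "jumping".
    I-frames are never dropped here. The fixed seed only makes the
    sketch reproducible."""
    rng = random.Random(seed)
    kept = []
    for f in frames:
        if f.startswith('I') or rng.random() < keep_ratio:
            kept.append(f)
    return kept

stream = ['I0'] + ['P%d' % i for i in range(8)]
print(thin_p_frames(stream, keep_ratio=0.5))
```

With a keep ratio derived from the predicted available bandwidth, the number of transmitted P-frames scales down while the surviving frames stay spread over the interval.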
Dynamic quantization adaptation
In a fifth preferred embodiment, dynamic quantization adaptation is used to adapt the amount of transferred data according to the available bandwidth. Note that there is a negative relationship between the quantization and the quality of a transmitted video picture received and decoded by an addressee: Increasing the quantization results in a reduced quality of the decoded video picture and vice versa. Quantization is data reduction.
Quantization typically is the "lossy" part of the video picture compression. It can be compared with the JPEG compression of a (still) picture, as known to the person skilled in the art. With higher quantization, the quality of the decoded picture is lower, but the bandwidth needed to transfer this video picture is also reduced. The video codec H.264, e.g., provides 52 quantization levels.
Thus, if the prediction for the state of the data flow for a given time-interval in the future indicates that the amount of data that can be transmitted will be above a given level (i.e. in the case of high available bandwidth), one or more control signals may be generated controlling the at least one codec to reduce the quantization of the data, thus increasing the quality of the video data. On the other hand, if the predictions indicate a low available bandwidth, the quantization of the data may be increased.
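This control rule maps a bandwidth prediction onto the codec's quantization parameter (for H.264, QP 0..51, where a higher QP means coarser quantization and less data). The bandwidth thresholds and the step size of 2 in this sketch are illustrative assumptions, not values from this description.

```python
def adapt_quantization(predicted_bw_kbps, current_qp):
    """Return a new H.264-style quantization parameter (0..51) for the
    upcoming time-interval, based on the predicted available bandwidth.
    Thresholds and step size are assumed for illustration."""
    if predicted_bw_kbps > 128:       # plenty of bandwidth: finer quantization
        return max(0, current_qp - 2)
    if predicted_bw_kbps < 64:        # scarce bandwidth: coarser quantization
        return min(51, current_qp + 2)
    return current_qp                 # in between: leave the codec as-is

print(adapt_quantization(200, current_qp=30))   # -> 28 (better quality)
print(adapt_quantization(40, current_qp=30))    # -> 32 (less data)
```

Clamping to the 0..51 range keeps the control signal within the codec's valid quantization levels.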
Note that whereas the first four methods and embodiments work on a (very fine-grained) frame level, quantization adaptation will have an influence on a whole group of frames, according to slower but larger bandwidth changes (on a temporal timescale of approx. 500 ms). The first four methods typically might be of special use in cases of very fast but rather small short-time changes of the state of the data flow.
Also note that, if besides the video stream an audio stream is to be transmitted, the audio stream typically will have the higher priority. As described above, the mobile device user typically is willing to accept some lost video pictures rather than lost voice fragments. Thus, in this document, the term "available bandwidth" for video streaming is defined as:
available bandwidth = provided bandwidth - audio bandwidth.
Furthermore, the present invention includes:
- a computer loadable data structure that is adapted to perform the method according to one of the embodiments described in this description while the data structure is being executed on a computer,
- a computer program, wherein the computer program is adapted to perform the method according to one of the embodiments described in this description while the program is being executed on a computer,
- a computer program comprising program means for performing the method according to one of the embodiments described in this description while the computer program is being executed on a computer or on a computer network,
- a computer program comprising such program means, wherein the program means are stored on a storage medium readable by a computer,
- a storage medium, wherein a data structure is stored on the storage medium and wherein the data structure is adapted to perform the method according to one of the embodiments described in this description after having been loaded into a main and/or working storage of a computer or of a computer network, and
- a computer program product having program code means, wherein the program code means can be stored or are stored on a storage medium, for performing the method according to one of the embodiments described in this description, if the program code means are executed on a computer or on a computer network.
Brief description of the drawings
For a more complete understanding of the present invention, reference is made to the following description made in connec¬ tion with accompanying drawings in which:
Fig. 1 shows a schematic overview of the method for dynamic optimization of real-time video data flow between a mobile device and a wireless communication network; and

Fig. 2 shows a schematic diagram of a system for performing the method depicted in Fig. 1 in one of its embodiments.
Detailed description of preferred embodiments
In Fig. 1, a schematic overview of the method for dynamic optimization of real-time video data flow between mobile devices and a mobile network is depicted. In Fig. 2, a physical and/or embedded system is depicted, adapted for realizing the method of Fig. 1 in one or more of its variations. The arrows in Fig. 2 indicate the direction of data flow. In the following, Fig. 2 will be described in conjunction with the respective steps depicted in Fig. 1.
On the left hand side, Fig. 2 shows the typical system of functional layers for mobile device real-time video streaming, as known to the person skilled in the art from the OSI reference model. Out of the seven OSI layers, only the Application Layer 210, the Transport Layer 212, the Data Link Layer 214, and the Physical Layer 216 are depicted in Fig. 2. The Network Layer, the Session Layer and the Presentation Layer are omitted for the sake of simplicity, but may be controlled in a similar way.
On the level of the Application Layer 210, several applications 218, comprising applications generating video data, may be run. As an example, video data acquisition using a cell phone equipped with a video camera may be named, including the respective application software. Also included in the Application Layer are one or more codec modules 220, 222, including codecs for video encoding 220 and for voice encoding 222. As an example, the implementation of the video codec H.264 and the audio codec AMR (adaptive multi-rate) is assumed in the following. The codecs 220, 222 transform the data streams generated by the various applications 218 into encoded data frames 224, which are passed down from the Application Layer 210 via the various other layers to the Physical Layer 216 to be transmitted via the wireless network.
On their way down to the Physical Layer 216, in each of the various OSI layers, the frames 224 may be modified (symbolically depicted by the frames 226 in Fig. 2), especially equipped with additional information (e.g., additional headers), according to the respective protocols used for the type of information to be transmitted. Thus, in the example depicted in Fig. 2, in the Transport Layer 212, the Real-Time Transport Protocol (RTP) 228, the User Datagram Protocol (UDP) 230, and the Internet Protocol (IP) 232 are employed. In the Data Link Layer 214, the Logical Link Control Layer (LLC) 248, the Radio Link Control Layer (RLC) 250, and the Medium Access Control Layer (MAC) 251 are employed in this embodiment.
Each layer is equipped with one or more control modules 234 - 246. These control modules control the functionality of the layers in various ways, depending on the layer itself. Thus, e.g., the Media Control Module 234 controls the settings of the audio and/or video codec(s) of the Application Layer (e.g., the quantization, see above). The RTP control module 236 controls the FEC (forward error correction) packet generation, the I-frame buffering and retransmission, as well as the reading out of RTCP (RTP Control Protocol or Real Time Control Protocol) quality feedback information.
In the example depicted in Fig. 2, in the Data Link Layer 214, the Logical Link Control Layer (LLC) 248 and the Radio Link Control Layer (RLC) 250 are controlled by a common control module 242 (RRC, Radio Resource Control).
Further, most or all of the layers have one or more buffers 252, 254 (symbolically depicted by the hatched boxes in Fig. 2) at their disposal. These buffers may be used for different purposes, such as for storing I-frames for delayed transmission when the prediction of the future state of the data flow indicates a high risk of loss (see above).
Besides controlling the settings and parameters of the single OSI layers, the control modules 234 - 246 allow for easy access to actual parameters of the data flow. Thus, e.g., by accessing the control modules 242 and 244 of the Data Link Layer 214, information on the quality of the transmission (e.g., the bit error rate BER) can be gained. Further, information on the available resources in each layer, e.g., the fill levels of the various buffers 252, 254, can be obtained. Acquiring these Actual Parameters 256 is the first step 110 of the method depicted in Fig. 1.
These Actual Parameters are passed on to a State Predictor Module 258. The State Predictor Module, based on the information 256, estimates the development of one or more relevant variables indicating the state of the data transfer for the near future (step 112 in Fig. 1). In the preferred embodiment, for this purpose, the algorithm disclosed in DE 102 47 581 is used. Thus, the State Predictor Module 258 may predict that the bit error rate (BER) will be below a level of 10^-9 for the upcoming 10 TTIs (transmission time intervals). These predictions 260 are passed on to a Decider Module 262.
The Decider Module 262 compares the predicted parameters 260 with a set of parameters stored in a Lookup-Table 264. In this Lookup-Table, which preferably consists of a multi-dimensional matrix, the possible states of the future flow control predicted by the State Predictor 258 are divided into a number of "cases", i.e. into a number of intervals for each relevant predicted parameter. Thus, for each case, a set of control parameters is referenced in this Lookup-Table.
Thus, in this embodiment, the "decision" on flow optimization taken by the Decider Module 262 in step 114 basically has the form of a certain set of control parameters 266, which are picked from the Lookup-Table 264 according to the Predictions 260 of the State Predictor 258. These Control Parameters 266 are passed on to the respective control modules 234 - 246, in order to adjust the data flow.
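The Lookup-Table step of the Decider Module can be sketched as binning the predicted parameters into cases and mapping each case to a set of control parameters. The bin edges, case labels and control-parameter contents below are invented for illustration; only the overall structure follows the description above.

```python
# Each "case" (one interval per predicted parameter) maps to a set of
# control parameters -- a two-dimensional slice of the multi-dimensional
# matrix described in the text.
LOOKUP_TABLE = {
    ('ber_high', 'bw_high'): {'measure': 'iframe_fec', 'redundancy': 1.30},
    ('ber_high', 'bw_low'):  {'measure': 'delay_iframes'},
    ('ber_low',  'bw_low'):  {'measure': 'raise_quantization'},
    ('ber_low',  'bw_high'): {'measure': 'lower_quantization'},
}

def decide(predicted_ber, predicted_bw_kbps):
    """Bin the predictions into a case and look up the corresponding
    control parameters. Thresholds are assumed values."""
    case = ('ber_high' if predicted_ber > 1e-3 else 'ber_low',
            'bw_high' if predicted_bw_kbps > 64 else 'bw_low')
    return LOOKUP_TABLE[case]

print(decide(predicted_ber=5e-3, predicted_bw_kbps=128))
# -> {'measure': 'iframe_fec', 'redundancy': 1.3}
```

Because the decision is a table lookup rather than an online computation, it is cheap enough to be re-evaluated every prediction interval (e.g., every few TTIs).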
As disclosed above, several ways of controlling the data flow are possible (steps 116 - 124 in Fig. 1). The steps disclosed in this invention all refer to the optimization of video streaming of encoded video data, but measures for optimizing the data flow of encoded audio data may be taken in parallel.
First, the Decider may decide that, according to the predictions of the future data flow, the method of dynamic forward error correction (FEC) 116 is to be employed. As explained above, this method will preferably be chosen when high BERs are predicted for the near future.
When dynamic I-frame forward error correction (FEC) 116 is chosen, control parameters for several control modules may be generated. First, control parameters controlling the Transport Layer 212 or the Data Link Layer 214 to store the most recently transmitted I-frame in one of the buffers 252, 254 may be generated and passed on to one of the control modules 236 - 244. In the preferred embodiment, the I-frame is buffered in the buffer 252 of the RTP 228. Thus, the I-frame can be re-transmitted in case a loss during transmission occurs. Besides the RTP 228, there is an RTCP (RTP Control Protocol or Real Time Control Protocol) and an FEC for RTP module (both not shown). The RTCP will get the quality feedback packet from the receiving side. The Decider 262 uses these pieces of information to decide that the I-frame should be retransmitted with additional FEC, and signals to the RTP and FEC for RTP modules to retransmit the packet and to create an FEC packet.
In this context, when using the term "I-frames", it is obvious that not only the actual I-frames (224 in Fig. 2) are meant, but that the term also may include "modified" I-frames 226, i.e. after having passed layers below the Application Layer 210. These frames, as described above, also include additional information, such as additional headers.
Further, control parameters controlling the Physical Layer 216 or the Data Link Layer 214 to apply a certain schedule of error correction, especially forward error correction (FEC), may be passed on to the control modules 236 - 246. Thus, as an example, in case a BER above 10^-3 is predicted for the upcoming 10 TTIs, the Decider Module 262 may control the Data Link Layer Control Module RRC 242 to increase a redundancy factor (i.e. the factor controlling the error correction information) from 1.15 to 1.30 or change the coding scheme.
As explained above, the methods may be combined. E.g., the control parameters may first control the layers to buffer an I-frame and then, in case a loss of this I-frame is detected, to re-transmit it with increased redundancy factor.
Secondly, the Decider Module 262 may decide that the method of dynamic I-frame generation (118 in Fig. 1) is to be applied. As explained above, this method is especially useful in case the Actual Parameters 256 indicate that the loss of an I-frame could not be prevented and there was no possibility to re-transmit a copy of the lost I-frame, or if since the last I-frame generation more than a pre-defined time-interval has passed. In case of dynamic I-frame generation 118, control parameters 266 for the Media Control Module 234 are generated controlling the codec 220 to create an I-frame at the nearest possible point in time.
Thirdly, the Decider Module 262 may decide that the method of dynamic I-frame generation delaying 120 is to be employed, which is especially useful if the Actual Parameters 256 indicate that the bandwidth available for data transmission is below a given level. This information can be gained from a readout of the parameters of the Control Modules 242 - 246, by determining the used coding scheme in layer 1, the Physical Layer, as well as the allocated timeslots in the Data Link Layer and the allocated transmission blocks in these timeslots, which can be collected from the RLC 250.
In this case, i.e. if the prediction indicates a high risk of loss of the frames to be transmitted in the near future, the Decider may provide control parameters 266 to the Media Control Module 234 preventing the codec 220 from creating an I-frame within a pre-defined time-interval. Alternatively or additionally, the Decider may provide control parameters 266 to the control modules 234 - 246 of one of the layers, preferably of the Transport Layer 212 or of the Data Link Layer 214, to store the I-frames to be sent within this temporal interval in one of their buffers 252, 254 rather than to transmit them.
As soon as the temporal interval has passed, the fill-level of the buffers may be checked as part of the Actual Parameters 256, in order for the Decider 262 to make a new decision about transmission of the buffered data.
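The buffering variant of this method can be illustrated by a small sketch (class and method names are hypothetical; the specification leaves the buffer management to the respective layer):

```python
from collections import deque

# Illustrative buffer for deferring I-frames during a predicted
# bad-channel interval (simplified sketch, not the specified design).

class IFrameDelayBuffer:
    def __init__(self) -> None:
        self.buffer = deque()
        self.hold_until_tti = 0  # transmit normally once past this TTI

    def hold(self, current_tti: int, n_ttis: int) -> None:
        """Start a hold interval of n_ttis, as instructed by the Decider."""
        self.hold_until_tti = current_tti + n_ttis

    def submit(self, frame: str, current_tti: int):
        """Either transmit the frame (return it) or store it (return None)."""
        if current_tti < self.hold_until_tti:
            self.buffer.append(frame)   # defer instead of sending
            return None
        return frame                    # transmit immediately

    def fill_level(self) -> int:
        """Fill level, readable as part of the Actual Parameters."""
        return len(self.buffer)

    def flush(self):
        """Release buffered frames after the Decider's new decision."""
        frames, self.buffer = list(self.buffer), deque()
        return frames
```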
Fourthly, the Decider Module 262 may decide that the method of dynamic P-frame rate adjustment 122 is to be applied. As mentioned above, this might be the case especially when the Actual Parameters 256, in particular control parameters of the Data Link Layer 214 or the Physical Layer 216, indicate that the bandwidth available for data transfer is too low for a transfer of all P-frames.
In this case, the Decider Module 262 may generate two types of control parameters 266: First, control parameters controlling the Application Layer 210, the Transport Layer 212, the Data Link Layer 214, the Physical Layer 216, or another layer not depicted in Fig. 2, to erase a certain number of P-frames upcoming for transmission from one or more of the buffers, e.g., from the buffers 252, 254. As indicated above, preferably, these P-frames to be erased are selected stochastically rather than in sequence.
Further, the Decider Module 262 may generate control parameters for the Media Control Module 234 of the Application Layer 210 controlling the codec 220 not to create a P-frame within a given time-interval in the future (e.g., the upcoming 10 TTIs)
or to reduce the number of P-frames to be generated within this time-interval below a certain number or rate.
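The stochastic selection of P-frames to erase, as preferred above, might be sketched as follows (function name and frame representation are hypothetical):

```python
import random

# Illustrative stochastic P-frame dropping: frames to erase are
# picked at random positions rather than in sequence, so that the
# motion information lost is spread over the video stream.

def drop_pframes_stochastically(frames, n_to_drop, rng=None):
    """Return the frame list with n_to_drop P-frames removed at
    randomly chosen positions; I-frames are never dropped."""
    rng = rng or random.Random()
    p_indices = [i for i, f in enumerate(frames) if f.startswith("P")]
    dropped = set(rng.sample(p_indices, min(n_to_drop, len(p_indices))))
    return [f for i, f in enumerate(frames) if i not in dropped]
```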
Fifthly, the Decider Module 262 may decide that the method of dynamic quantization adaptation 124 is to be applied. As mentioned above, this method may be chosen in cases where the State Predictor 258 indicates that the state of the data flow for a given time-interval in the future, e.g., the upcoming 10 TTIs, will be such that the amount of data that can be transmitted will be above a given level. In this case, the Decider Module 262 may generate one or more control parameters 266 controlling the Media Control Module 234 to operate the codec 220 in such a way that the quantization of the data for this time-interval is reduced, thus increasing the quality of the video data.
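This last rule can likewise be illustrated by a minimal sketch (the capacity threshold, step sizes and names are assumptions; a finer quantization step means more data and better video quality):

```python
# Illustrative quantization adaptation: when the predicted channel
# capacity for the upcoming interval exceeds a threshold, reduce the
# quantization step so the codec produces higher-quality video.
# All numeric values are hypothetical.

def choose_quantizer_step(predicted_capacity_kbps: float,
                          current_step: int,
                          high_capacity_kbps: float = 384.0,
                          min_step: int = 2) -> int:
    """Return the quantization step the Media Control Module should
    request from the codec for the upcoming time-interval."""
    if predicted_capacity_kbps > high_capacity_kbps:
        # Reduce quantization (finer steps), bounded below by min_step.
        return max(min_step, current_step - 1)
    return current_step
```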
While the present inventions have been described and illustrated in conjunction with a number of specific embodiments, those skilled in the art will appreciate that variations and modifications may be made without departing from the principles of the inventions as herein illustrated, described and claimed. The present inventions may be embodied in other specific forms without departing from their spirit or essential characteristics. The described embodiments are considered in all respects to be illustrative and not restrictive. The scope of the inventions is, therefore, indicated by the appended claims rather than by the foregoing description. All changes which come within the meaning and range of equivalence of the claims are to be embraced within their scope.
Reference List
110 Acquisition of Actual Parameters of the data flow
112 Prediction of the future state of the data flow
114 Decision on flow optimization
116 Dynamic I-frame forward error correction
118 Dynamic I-frame generation
120 Dynamic I-frame generation delaying
122 Dynamic P-frame rate adjustment
124 Dynamic quantization adaptation
210 Application Layer
212 Transport Layer
214 Data Link Layer
216 Physical Layer
218 application
220 video codec
222 voice codec
224 data frames
226 modified data frames
228 Real Time Protocol, RTP
230 User Datagram Protocol, UDP
232 Internet Protocol, IP
234 Media Control Module
236 RTP Control Module
238 UDP Control Module
240 IP Control Module
242 Common Control Module RRC (Radio Resource Control) for LLC (Logical Link Control Layer) and RLC (Radio Link Control Layer)
244 Control Module for MAC (Medium Access Control Layer)
246 Control Module for Physical Layer
248 Logical Link Control Layer (LLC)
250 Radio Link Control Layer (RLC)
251 Medium Access Control Layer (MAC)
252 buffer
254 buffer
256 Actual Parameters of the data flow
258 State Predictor Module
260 Predictions on the future state of the data flow
262 Decider Module
264 Lookup-Table
266 Set of Control Parameters