WO2010133158A1 - Method and apparatus for determining the priority of a scheduling packet
- Publication number
- WO2010133158A1 (PCT/CN2010/072852, CN2010072852W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- frame
- image group
- distortion
- image
- code stream
- Prior art date
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/172—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a picture, frame or field
- H04N19/177—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a group of pictures [GOP]
- H04N19/89—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression involving methods or arrangements for detection of transmission errors at the decoder
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/238—Interfacing the downstream path of the transmission network, e.g. adapting the transmission rate of a video stream to network bandwidth; Processing of multiplex streams
Definitions
- the present invention relates to the field of communications technologies, and in particular, to a method and apparatus for determining the priority of a scheduling packet.
Background Art
- the transmission of video information has become one of the important services of network transmission.
- on the one hand, network transmission congestion and channel noise may cause unstable transmission bandwidth; on the other hand, the decoding capability of terminal devices and differences in application requirements must be considered; therefore, the encoding and transmission of video information are required to be scalable.
- SVC Scalable Video Coding
- H.264/AVC High compression efficiency video standard
- SVC can provide not only spatial (resolution), quality, and temporal (frame-rate) scalability, but also accurate packet-level interception of the code stream, with high scalable-coding efficiency whose compression ratio is close to that of traditional non-scalable video coding schemes.
- the quality scalable coding of SVC achieves quality (SNR) scalability mainly by repeated quantization (layered coding) of transform coefficients, block coding of transform coefficients, and bit plane coding.
- Video quality scalability can be achieved through technologies such as coarse-grained scalability (CGS) and medium-grained scalability (MGS).
- CGS coarse-grained scalability
- MGS medium-grained scalability
- the basic idea is to divide each frame of video into a base layer (BL) code stream that can be decoded separately and one or more enhancement layer (EL) code streams, each enhancement layer including one or more scheduling packets.
- the base layer adopts a hybrid coding method; its code rate is relatively low and can only guarantee the most basic quality requirements, which ensures that the decoder has sufficient capability to receive and decode the base-layer code stream.
- the enhancement layer has two coding modes: CGS and MGS.
- the video image resolution of the base layer and the enhancement layer is usually the same.
- in the CGS coding mode, each coding layer must be obtained in full to achieve better enhancement-layer quality.
- in the MGS coding mode, through the key-frame technique, multiple scheduling packets of each frame of a group of pictures (GOP, Group Of Pictures) can be arbitrarily intercepted, which greatly improves the flexibility of quality scalable coding.
- GOP Group Of Pictures
- truncation of multiple MGS scheduling packets of each layer of MGS code stream can be realized, and the fine granularity of quality scalable coding is greatly improved.
- MGS mainly adopts the mechanism of multi-level coding and extracting MGS scheduling packets. By retaining and discarding different MGS scheduling packets, various code rate constraints can be realized.
- SVC adopts a hierarchical B-frame structure within each GOP, so prediction between frames at different levels is strongly correlated, and the coding efficiency after discarding different MGS scheduling packets differs greatly. Therefore, it is necessary to first assign priorities to the MGS scheduling packets, and then perform unequal protection and scheduling according to the priority of each MGS scheduling packet.
- the image group shown in FIG. 1 includes 9 frames, and each frame of the image group includes 1 basic layer and 2 enhancement layers, and each enhancement layer further includes two MGS scheduling packets.
- the reference association relationship between the frames of the above image group may be: the 0th frame and the 8th frame are key frames; the 4th frame references the 0th frame and the 8th frame; the 2nd frame references the 0th frame and the 4th frame; the 6th frame references the 4th frame and the 8th frame; the 1st frame references the 0th frame and the 2nd frame; the 3rd frame references the 2nd frame and the 4th frame; the 5th frame references the 4th frame and the 6th frame; the 7th frame references the 6th frame and the 8th frame.
- frames referenced by other frames may be referred to as reference frames, frames referring to other frames may be referred to as predicted frames, and reference frames and predicted frames exhibit relative relationships.
- the 0th frame and the 8th frame may be referred to as the reference frame of the 4th frame, and the 4th frame may be referred to as the predicted frame of the 0th frame and the 8th frame;
- the 2nd frame and the 4th frame may be referred to as reference frames of the 3rd frame, and the 3rd frame may be referred to as a predicted frame of the 2nd frame and the 4th frame, and so on.
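The reference relationships described above can be sketched as a small lookup table. This is an illustrative Python model (names such as `REFS` and `predicted_frames` are ours, not the patent's), useful only for checking which frames are predicted from a given reference frame.

```python
# Illustrative sketch of the 9-frame GOP reference structure described above:
# frame -> the frames it directly references (hypothetical representation,
# not taken from the patent text).
REFS = {
    0: [], 8: [],                      # key frames
    4: [0, 8],
    2: [0, 4], 6: [4, 8],
    1: [0, 2], 3: [2, 4], 5: [4, 6], 7: [6, 8],
}

def predicted_frames(ref_frame, refs=REFS):
    """Frames that directly reference `ref_frame` (its predicted frames)."""
    return sorted(f for f, r in refs.items() if ref_frame in r)
```

For instance, `predicted_frames(4)` lists the frames that use frame 4 as a direct reference.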
- in the prior art, the distortion of the image group under different packet-loss patterns is generally obtained by multiple decodings, and the image group distortions under the different packet-loss patterns are then compared to determine the priority of each scheduling packet of each frame of the image group.
- the technical problem to be solved by the embodiments of the present invention is to provide a method and apparatus for determining the priority of a scheduling packet that can relatively reduce the processing complexity of the process of determining the priority of each scheduling packet of an image group.
- the embodiment of the present invention provides the following technical solutions:
- a method for determining a priority of a scheduling packet comprising:
- acquiring, for each frame of a first image group, the frame distortion of the frame image caused by the absence of each enhancement layer of the frame; intercepting the first image group code stream according to a first intercepting manner and a second intercepting manner, where the total number of scheduling packets of any one frame of the first image group intercepted by the first intercepting manner is greater than or less than the total number of scheduling packets of that frame intercepted by the second intercepting manner;
- decoding the first image group code stream intercepted according to the first intercepting manner, to obtain a first total distortion of each frame image of the first image group in the first intercept mode; decoding the first image group code stream intercepted according to the second intercepting manner, to obtain a second total distortion of each frame image of the first image group in the second intercept mode; using the first total distortion, the second total distortion, and the frame distortion of each frame image caused by the absence of each enhancement layer of each frame of the first image group, obtaining the influence weight of each scheduling packet of the first image group on the first image group; and determining the priority of each scheduling packet of the first image group according to the influence weights.
- a device for determining a priority of a scheduling packet comprising:
- the frame distortion acquiring module is configured to separately acquire the frame distortion of each frame image caused by the absence of each enhancement layer of each frame of the first image group; the code stream intercepting module is configured to intercept the first image group code stream according to the first intercepting manner and the second intercepting manner, where the total number of scheduling packets of any one frame of the first image group intercepted by the first intercepting manner is greater than or less than the total number of scheduling packets of that frame intercepted by the second intercepting manner;
- the total distortion acquiring module is configured to decode the first image group code stream intercepted by the code stream intercepting module according to the first intercepting manner, to obtain the first total distortion of each frame image of the first image group in the first intercept mode, and to decode the first image group code stream intercepted by the code stream intercepting module according to the second intercepting manner, to obtain the second total distortion of each frame image of the first image group in the second intercept mode;
- the weight acquiring module is configured to use the first total distortion and the second total distortion of each frame image acquired by the total distortion acquiring module, together with the frame distortion of each frame image caused by the absence of each enhancement layer of each frame acquired by the frame distortion acquiring module, to obtain the influence weight of each scheduling packet of the first image group on the first image group;
- the priority determining module is configured to determine the priority of each scheduling packet of the first image group according to the influence weight, acquired by the weight acquiring module, of each scheduling packet of the first image group on the first image group.
- by intercepting the image group code stream in two different intercepting manners, the total distortion of each frame of the image group in the two intercept modes can be obtained with only two decodings; the total distortion of each frame in the two intercept modes and the frame distortion of each frame are then used to obtain the influence weight of each scheduling packet of each frame of the image group. The number of decodings is relatively small, which can greatly reduce the complexity of the process of determining scheduling packet priorities.
- FIG. 1 is a schematic diagram of an association structure of each frame of an image group provided by the prior art
- FIG. 2 is a flowchart of a method for determining a priority of a scheduling packet according to Embodiment 1 of the present invention
- FIG. 3 is a flowchart of a method for determining a priority of a scheduling packet according to Embodiment 2 of the present invention
- FIG. 4 is a schematic structural diagram of an apparatus for determining a priority of a scheduling packet according to Embodiment 3 of the present invention;
- FIG. 5 is a schematic structural diagram of an inter-frame weight obtaining sub-module according to Embodiment 3 of the present invention.
Detailed Description
- Embodiments of the present invention provide a method and apparatus for determining a priority of a scheduling packet, which can relatively reduce processing complexity of determining a priority process of each scheduling packet of an image group.
- a method for determining a priority of a scheduling packet may specifically include:
- the current frame distortion of a certain frame image refers to the distortion of the frame image due to the loss of the frame's own data in the absence of drift error.
- during the encoding of the first image group, the first image group code stream before each enhancement layer of each frame is discarded may be decoded separately, to obtain the current frame distortion of each frame image caused by the absence of each enhancement layer of each frame of the first image group; alternatively, after the first image group is encoded, the first image group code stream after each enhancement layer of each frame is discarded may be decoded separately, to obtain the current frame distortion of each frame image caused by the absence of each enhancement layer of each frame of the first image group.
- the first image group code stream is intercepted according to the first intercepting manner and the second intercepting manner, where the total number of scheduling packets of any one frame of the first image group intercepted by the first intercepting manner is greater than or less than the total number of scheduling packets of that frame intercepted by the second intercepting manner.
- decoding the first image group code stream intercepted according to the first intercepting manner obtains the first total distortion of each frame image of the first image group in the first intercept mode, and decoding the code stream intercepted according to the second intercepting manner obtains the second total distortion of each frame image of the first image group in the second intercept mode.
- the first total distortion of each frame image of the first image group, the second total distortion, and the frame distortion of each frame image caused by the absence of each enhancement layer of each frame of the first image group may be utilized to obtain the influence weight of each scheduling packet of the first image group on the first image group.
- the acquired influence weights of the scheduling packets of the first image group on the first image group may be compared, and the priority of each scheduling packet of each frame of the image group set according to the comparison result.
- the number of priority levels can be determined according to actual needs; scheduling packets with different influence weights on the first image group can be set to different priorities, and scheduling packets whose influence weights on the image group are close or the same can of course be set to the same priority. The greater the influence weight of a scheduling packet on the image group, the higher the priority set for it.
- unequal protection and scheduling may then be performed on each scheduling packet according to its priority.
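The priority-setting rule just described (close or equal weights share a priority; larger weight means higher priority) might be sketched as follows. The tolerance-based grouping and all names are illustrative assumptions of ours, not the patent's exact rule.

```python
# Hedged sketch: rank packets by influence weight and group near-equal
# weights into the same priority level (0 = highest). The tolerance `tol`
# is an assumed parameter, not from the patent.
def assign_priorities(weights, tol=0.05):
    """weights: packet id -> influence weight on the image group.
    Returns packet id -> priority level (0 = highest)."""
    ranked = sorted(weights, key=weights.get, reverse=True)
    prios, level, leader = {}, 0, None
    for p in ranked:
        if leader is None:
            leader = weights[p]            # first (heaviest) packet
        elif leader - weights[p] > tol:
            level += 1                     # weight dropped: open a new level
            leader = weights[p]
        prios[p] = level
    return prios
```

Packets whose weights differ from the current group's leader by at most `tol` share a priority.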
- two different intercepting manners are used to intercept the image group code stream, so the total distortion of each frame of the image group in the two intercept modes can be obtained with only two decodings; using these total distortions together with the frame distortion of each frame of the image group, the influence weight of each scheduling packet of each frame is obtained. The number of decodings is relatively small, which can greatly reduce the complexity of the process of determining scheduling packet priorities.
- taking the image group shown in FIG. 1 as an example, determining the weight of each scheduling packet of each frame by decoding once per scheduling packet would require 36 decodings, whereas the technical solution of the embodiment of the present invention acquires the frame distortion caused by the absence of each enhancement layer of each frame at far lower decoding cost.
- a method for determining a priority of a scheduling packet may specifically include:
- the absence of the jth enhancement layer of the ith frame may mean that the ith frame is missing only the jth enhancement layer without drift error, or that the ith frame is missing the jth enhancement layer together with some or all of the enhancement layers above the jth enhancement layer; that is, when an enhancement layer of a frame of the image group is discarded, some or all of the enhancement layers above it may also be discarded.
- the frame distortion of the ith frame image caused by the absence of each enhancement layer of the ith frame is obtained separately; since data not referenced by the ith frame (for example, the data of the mth frame) does not cause drift in the ith frame, data of the image group not referenced by the ith frame may be partially or completely missing at the same time, which allows the current frame distortion of the mth frame image caused by the absence of each of its enhancement layers to be acquired simultaneously.
- during the process of encoding the code stream of the image group, the image group code stream before each enhancement layer of each frame is discarded may be decoded separately, to obtain the frame distortion caused by the absence of each enhancement layer of each frame of the image group.
- when encoding the code stream of the jth layer of the ith frame, the code stream encoded up to the (j-1)th layer (base layer or enhancement layer) of the ith frame needs to be decoded first, and the jth layer of the ith frame is encoded with reference to the image decoded when the ith frame was encoded to the (j-1)th layer. Since the code stream encoded up to the (j-1)th layer of the ith frame is decoded when encoding the jth layer, the current frame distortion of the ith frame when encoded to the (j-1)th layer, and the corresponding code stream size of the ith frame, can be acquired at that point; the frame distortion of the ith frame when encoded to the (j-1)th layer equals the frame distortion of the ith frame image when the jth layer of the ith frame is individually missing. By analogy, in the process of encoding each frame of the image group, the frame distortion caused by the individual absence of each enhancement layer of each frame, and the code stream size of each layer of each frame of the image group, can be acquired separately.
- the decoding step inside the encoding process thus directly yields the frame distortion caused by the absence of each enhancement layer of each frame without adding extra processing overhead, which is relatively simple to implement.
- alternatively, the frame distortion of each frame image caused by the absence of each enhancement layer of each frame of the image group may be obtained by decoding the image group code stream after each enhancement layer of each frame is discarded.
- for example, the frame distortion of the ith frame caused by the individual absence of the jth layer of the ith frame of the image group may be obtained as follows: intercept the code stream up to the (j-1)th layer of the ith frame together with all layer code streams of the other frames of the image group, decode the intercepted code stream to obtain the frame distortion of the ith frame image caused by the absence of the jth layer of the ith frame, and also record the code stream size of the ith frame from the base layer to the (j-1)th layer. By analogy, the frame distortion caused by the absence of each enhancement layer of each frame of the image group, and the code stream size of each layer of each frame of the image group, can be obtained separately.
- here, intercepting up to the (j-1)th layer of the ith frame means that the intercepted code stream is the entire code stream from the base layer to the (j-1)th layer of the ith frame; by analogy, intercepting up to the kth scheduling packet of the jth layer of the ith frame means that the intercepted code stream is the entire code stream from the base layer of the ith frame up to the kth scheduling packet of the jth layer of the ith frame.
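The interception semantics just defined can be illustrated with a toy model of one frame's stream units; the tuple representation and all names are ours, not the patent's.

```python
# Hedged sketch: keep, for one frame, everything from the base layer up to
# packet k of layer j (layer 0 = base layer; packets numbered from 1).
def intercept(frame_units, j, k):
    """frame_units: list of (layer, packet) ids in stream order.
    Returns the units kept when truncating after packet k of layer j."""
    return [(l, p) for (l, p) in frame_units if l < j or (l == j and p <= k)]
```

Truncating after packet 1 of layer 1 keeps the base layer plus that one packet; truncating after the last packet of the top layer keeps everything.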
- the current frame distortion of the ith frame of the image group represents the distortion of the ith frame due to the loss of the ith frame's own data, without drift error.
- the frame distortion of the ith frame image, obtained by decoding the code stream intercepted at the jth layer of the ith frame together with all code streams of the other frames of the image group, may be marked MSE_i^j; the size of all code streams of the ith frame from the base layer to the jth layer may be marked R_i^j.
- MSE_i(R) may indicate the frame distortion of the ith frame, without drift error, corresponding to an intercepted code stream of the ith frame of size R.
- each frame may be intercepted in any manner (that is, the intercept point can be adjusted as needed and is not limited to intercepting just one layer or packet).
- MSE_i^{j,k} indicates the frame distortion of the ith frame, without drift error, obtained by decoding the code stream intercepted at the kth scheduling packet of the jth layer of the ith frame.
- the separately obtained frame distortion caused by the absence of each enhancement layer of each frame of the image group can be used to determine the frame distortion MSE_i^{j,k} of the frame image caused by the absence of each scheduling packet of each enhancement layer of each frame.
- (MSE_i^{j-1} - MSE_i^j) / (R_i^j - R_i^{j-1}) may express the distortion-to-code-rate ratio of the jth enhancement layer of the ith frame without drift error (referred to as the rate distortion), marked RDO_i^j;
- (MSE_i^{j,k-1} - MSE_i^{j,k}) / (R_i^{j,k} - R_i^{j,k-1}) may express the rate distortion of the kth MGS scheduling packet of the jth layer of the ith frame without drift error, marked RDO_i^{j,k}.
- RDO_i^{j,k} reflects the rate-distortion (RD) characteristic of the kth scheduling packet of the jth layer of the ith frame.
- RD rate-distortion function
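The rate-distortion ratio of a single scheduling packet, as reconstructed above, might be computed like this. This is a sketch with illustrative names; the guard against a zero rate difference is our addition, not part of the patent.

```python
# Hedged sketch of RDO_i^{j,k}: the MSE reduction the packet buys per unit
# of code stream it costs, measured without drift error.
def rdo(mse_without, mse_with, rate_without, rate_with):
    """mse_without / rate_without: frame MSE and cumulative stream size when
    the packet is discarded; mse_with / rate_with: the same quantities when
    the packet is kept."""
    dr = rate_with - rate_without
    return (mse_without - mse_with) / dr if dr > 0 else 0.0
```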
- the image group code stream is intercepted according to the first intercept mode and the second intercept mode, and the image group code streams intercepted by the two intercepting manners are respectively decoded, obtaining the total distortion of each frame image of the image group in the first intercept mode and in the second intercept mode respectively.
- the number of layers per frame and/or the number of scheduling packets per frame of the image group intercepted by the first intercept mode and the second intercept mode are different; that is, the total number of scheduling packets of any one frame of the image group intercepted by the first intercepting manner is greater than or less than the total number of scheduling packets of that frame intercepted by the second intercepting manner.
- for example, the first interception manner may intercept up to the k1th scheduling packet of the j1th layer of the ith frame of the image group, and the second interception manner up to the k2th scheduling packet of the j2th layer of the ith frame, where the values of j1, j2, k1, k2 satisfy at least one of the following conditions: j1 is greater than or less than j2, or k1 is greater than or less than k2.
- the total distortion of each frame image of the image group in each intercept mode can be obtained separately, and the code stream size of each frame of the image group intercepted by each of the two intercept modes can also be obtained separately.
- the total distortion of the ith frame of the image group can represent the distortion of the ith frame caused both by the data loss of the ith frame itself and by the data loss of the reference frames of the ith frame, and thus reflects to some extent the drift error within the image group.
- the total distortion of the ith frame image obtained by decoding the code stream of each frame of the image group intercepted according to the first intercepting manner may be marked E_1(D_i), and the code stream size of the ith frame of the image group intercepted according to the first intercepting manner may be marked R_1,i; the total distortion of the ith frame image obtained by decoding the code stream intercepted according to the second intercepting manner may be marked E_2(D_i), and the code stream size of the ith frame of the image group intercepted according to the second intercepting manner may be marked R_2,i.
- using the total distortion of each frame of the image group in the first intercept mode and the second intercept mode, together with the frame distortion of each frame image caused by the absence of each enhancement layer of each frame of the image group, the influence weight of each scheduling packet of each frame of the image group on the image group is acquired.
- the influence weights between the frames of the image group may be obtained first, and then used to obtain the influence weight of each frame of the image group on the image group.
- the influence weight of a reference frame (the mth frame) of the ith frame (a predicted frame) in the image group on the ith frame can be obtained by formula (2), in which ΔE(D_i) represents the difference between the total distortions of the ith frame in the two different intercept modes and Δε(D_i) represents the difference between its frame distortions.
- for example, the 4th frame references the 0th frame and the 8th frame, and the drift error contributed by the 0th frame and the 8th frame to the 4th frame can be considered linear, so the 0th frame and the 8th frame have the same drift-error weight for the 4th frame.
- ΔE(D_4) indicates the difference in the total distortion of the 4th frame, and Δε(D_4) indicates the difference in its frame distortion.
- F_0^4 (equally F_8^4) can represent the influence weight of a reference frame (the 0th frame or the 8th frame) of the 4th frame on the 4th frame.
- in this way, the influence weight of each reference frame in the image group on its predicted frame can be obtained simply.
- the influence weights of the reference frames on their predicted frames may then be utilized to obtain the influence weight of each frame of the image group on the other frames, that is, the influence weights between the frames can be obtained.
- the hierarchical reference relationship in the image group may be, for example, as follows: the n1th frame references the n2th frame and the n3th frame, and the n3th frame references the n4th frame.
- the influence weight of an upper-level reference frame on a predicted frame is the product of the influence weight of the upper-level reference frame (the reference frame of the direct reference frame) on the direct reference frame and the influence weight of the direct reference frame on the predicted frame. For example, with the n1th frame taken as the predicted frame, the n2th frame and the n3th frame are direct reference frames of the n1th frame, and the n4th frame is an upper-level reference frame of the n3th frame (a direct reference frame).
- if N represents the set of all direct reference frames of the nth frame, then F_i^n, the influence weight of the ith frame on the nth frame, is the sum, over all direct reference frames j of the nth frame, of the influence weight of the ith frame on the jth frame multiplied by the influence weight of the jth frame on the nth frame.
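The propagation just described (direct weights, plus products along reference chains) might be sketched as follows. All names are illustrative, and the direct weights are taken as given inputs rather than derived from formula (2), so this is a sketch under stated assumptions, not the patent's exact procedure.

```python
# Hedged sketch: accumulate each frame's influence weight on every other
# frame through the reference hierarchy. refs: frame -> direct reference
# frames; direct_weight: (ref, pred) -> weight of ref on pred.
def influence_weights(refs, direct_weight):
    """Returns (i, n) -> total influence weight of frame i on frame n."""
    total = dict(direct_weight)
    # Handle a predicted frame only after all of its reference frames.
    order = sorted(refs, key=lambda f: len(_ancestors(refs, f)))
    for n in order:
        for j in refs[n]:                      # direct reference frames of n
            for i in _ancestors(refs, j):      # upper-level reference frames
                w = total.get((i, j), 0.0) * direct_weight[(j, n)]
                total[(i, n)] = total.get((i, n), 0.0) + w
    return total

def _ancestors(refs, f):
    """All frames reachable from f through reference links."""
    seen, stack = set(), list(refs[f])
    while stack:
        g = stack.pop()
        if g not in seen:
            seen.add(g)
            stack.extend(refs[g])
    return seen
```

In a chain where frame 1 references frames 0 and 2 while frame 2 references frame 0, frame 0's weight on frame 1 accumulates both its direct weight and the product through frame 2.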
- the predicted frame and the reference frame in the image group may also be a partial reference relationship, that is, the predicted frame may refer only to a part of the pixels of the reference frame.
- for example, suppose the m2th frame and the m3th frame reference the ith frame, and the m1th frame references the m2th frame and the m3th frame.
- the influence weight of the ith frame on the m1th frame then accumulates through the m2th frame and the m3th frame, with each path scaled by the pixel ratio actually referenced: the pixel ratio of the m2th frame referenced by the m1th frame, and the pixel ratio of the m3th frame referenced by the m1th frame.
- the value of each pixel ratio may be greater than or equal to 0 and less than or equal to 1.
- the influence weight of a reference frame on a predicted frame can be obtained using formula (2). Taking the image group shown in FIG. 1 as an example, if F_0^n is used to indicate the influence weight of the 0th frame on the nth frame, the influence weight of the 0th frame on each frame in FIG. 1 can be obtained accordingly.
- the influence weights of the 1st, 2nd, 3rd, 4th, 5th, 6th, 7th, and 8th frames on the other frames in the GOP are obtained in turn, so that the influence weights between all the frames can be obtained.
- for example, if the 3rd frame only partially references its reference frames, P2 is the pixel ratio of the 2nd frame referenced by the 3rd frame, and P4 is the pixel ratio of the 4th frame referenced by the 3rd frame; if other frames also involve partial reference, the same reasoning can be applied.
- an influence weight array may be generated for each frame of the image group, the array recording the influence weight of that frame on each frame in the image group.
- each element in the influence weight array of the ith frame is the influence weight of the ith frame on one frame of the image group; for an image group containing f frames in total, the influence weight array of the ith frame may be denoted FWeight_i[f], but is not limited thereto.
- the weight array of the ith frame clearly records the influence weight of the ith frame on each frame in the image group, which facilitates subsequent calculation.
- the influence weights between frames can then be used to obtain the influence weight of each frame of the image group on the image group.
- the influence weight FW_i of the ith frame on the image group can be obtained by formula (4), but is not limited thereto, where F denotes the set of all frames of the image group. For example, in the image group shown in FIG. 1, if FW_0 indicates the influence weight of the 0th frame on the entire image group, all the elements of the influence weight array FWeight_0[9] of the 0th frame can be summed by formula (4) to obtain FW_0.
- with formula (4), the influence weight of each frame of the image group on the image group can be obtained; afterwards, the influence weight of each scheduling packet of each frame of the image group on the image group may further be obtained.
- if MW_i^{j,k} denotes the influence weight of the k-th scheduling packet of the j-th layer of the i-th frame on the image group, its relationship to the frame weight FW_i may be as shown in formula (5), but is not limited thereto:
- MW_i^{j,k} = FW_i * RDO_i^{j,k} ( 5 )
- where RDO_i^{j,k} denotes the distortion-to-rate ratio (i.e. rate-distortion) of the k-th MGS scheduling packet of the j-th layer of the i-th frame in the absence of drift error: RDO_i^{j,k} = (M_i^{j,k-1} - M_i^{j,k}) / (R_i^{j,k} - R_i^{j,k-1})
- further, the weight can also be computed as in formula (6), but is not limited thereto:
- MW_i^{j,k} = D_i^{j,k} * FW_i * RDO_i^{j,k} ( 6 )
- in formula (6), D_i^{j,k} denotes the code stream size of the k-th scheduling packet of the j-th layer of the i-th frame. Alternatively, the weight array of each frame may be used to obtain a weight array for each scheduling packet of each frame of the image group.
- for an image group including f frames in total, if MWeight_i^{j,k}[f] denotes the influence weight array of the k-th scheduling packet of the j-th layer of the i-th frame, its relationship with the weight array of the i-th frame may be as shown in formula (7), but is not limited thereto:
- MWeight_i^{j,k}[f] = FWeight_i[f] * RDO_i^{j,k} ( 7 )
- each element of MWeight_i^{j,k}[f] can characterize the influence weight of the k-th scheduling packet of the j-th layer of the i-th frame on the corresponding frame of the image group.
- to make each element of the scheduling packet's influence weight array characterize that influence more intuitively, the relationship between the influence weight array FWeight_i[f] of the i-th frame and the influence weight array MWeight_i^{j,k}[f] of the k-th scheduling packet of the j-th layer of the i-th frame may also be as shown in formula (8), but is not limited thereto:
- MWeight_i^{j,k}[f] = D_i^{j,k} * FWeight_i[f] * RDO_i^{j,k} ( 8 )
- by summing the elements of MWeight_i^{j,k}[f], the influence weight of the k-th scheduling packet of the j-th layer of the i-th frame on the image group is obtained.
- the priority of each scheduling packet of each frame of the image group can then be determined based on the influence weight of each scheduling packet on the image group: the scheduling packets may be sorted by the magnitude of their influence weights on the image group, and the priority of each scheduling packet set according to the sorting result.
- the number of priority levels can be determined according to actual needs. Scheduling packets with different influence weights on the image group can be set to different priorities; of course, scheduling packets with close or identical influence weights on the image group can be set to the same priority. The higher the priority of a scheduling packet, the greater its influence weight on the image group.
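The sort-and-bucket step described above can be sketched in Python (the packet identifiers, weights and level count below are illustrative, not taken from the patent):

```python
def assign_priorities(weights, levels):
    """Sort scheduling packets by their influence weight on the image
    group (e.g. MW from formula (6)) and bucket them into `levels`
    priority classes; a higher priority value means a larger weight."""
    ranked = sorted(weights, key=weights.get, reverse=True)
    bucket = max(1, -(-len(ranked) // levels))  # ceil(len / levels)
    # Highest-weight packets land in the highest priority class.
    return {pkt: max(0, levels - 1 - pos // bucket)
            for pos, pkt in enumerate(ranked)}

# Illustrative weights: packet 'a' hurts the image group most when dropped.
prio = assign_priorities({'a': 5.0, 'b': 3.0, 'c': 1.0, 'd': 0.5}, levels=2)
```

Packets with close weights naturally share a class when fewer levels are chosen, matching the note that the number of levels is set per actual needs.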
- after the priorities are determined, unequal protection and/or scheduling may further be performed on each scheduling packet according to its priority. For example, when the currently allowed link rate is small, scheduling packets with lower priority can be discarded while those with higher priority are kept; when channel quality is unstable, higher-priority packets can be transmitted on links with better channel quality and lower-priority packets on links with poorer channel quality; when protecting data with different degrees of redundancy, high-redundancy forward error correction (FEC) coding can be applied to higher-priority packets and low-redundancy FEC coding to lower-priority packets; and when performing unequal retransmission protection, higher-priority packets may be retransmitted once or several times while lower-priority packets are retransmitted less often or not at all.
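As a hedged sketch, the unequal-protection choices listed above could be mapped to per-packet parameters like this (the threshold and the redundancy and retransmission figures are invented for illustration; the patent does not fix any values):

```python
def protection_plan(priority, max_priority):
    """Pick FEC redundancy, retransmission count and link preference
    from a packet's priority: higher priority -> stronger protection.
    All concrete numbers here are illustrative assumptions."""
    high = priority >= max_priority // 2
    return {
        "fec_redundancy": 0.5 if high else 0.1,  # high- vs low-redundancy FEC
        "retransmissions": 3 if high else 0,     # retransmit or not at all
        "prefer_good_link": high,                # better channel for high priority
    }
```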
- in summary, two different truncation modes are used to truncate the image group code stream, and the total distortion of each frame of the image group under the two truncation modes is obtained by two decodings; the influence weight of each scheduling packet of each frame on the image group is then obtained from these total distortions and from the own-frame distortion of each frame. The number of decodings is relatively small, which greatly reduces the complexity of determining scheduling packet priorities.
- further, using the influence weight array of a frame to record its influence weight on each frame of the image group, and the influence weight array of a scheduling packet to record its influence weight on each frame of the image group, can further simplify the computation. Embodiment 3 is described below.
- to better implement the above method, an apparatus for determining the priority of scheduling packets is also provided in an embodiment of the present invention. The apparatus according to Embodiment 3 of the present invention may specifically include: an own-frame distortion acquiring module 410, a code stream truncating module 420, a total distortion acquiring module 430, a weight acquiring module 440, and a priority determining module 450.
- the own-frame distortion acquiring module 410 is configured to respectively acquire the own-frame distortion of each frame image caused by the absence of each enhancement layer of each frame of the first image group.
- the code stream truncating module 420 is configured to truncate the first image group code stream according to a first truncation mode and a second truncation mode, where the total number of scheduling packets of any frame of the first image group truncated in the first mode is greater than or smaller than the total number of scheduling packets of that frame truncated in the second mode.
- the total distortion acquiring module 430 is configured to decode the code stream truncated by module 420 in the first mode to obtain the first total distortion of each frame image of the first image group under the first mode, and to decode the code stream truncated by module 420 in the second mode to obtain the second total distortion of each frame image under the second mode.
- the weight acquiring module 440 is configured to use the first total distortion and the second total distortion of each frame image acquired by module 430, together with the own-frame distortion of each frame image caused by the absence of each enhancement layer acquired by module 410, to respectively acquire the influence weight of each scheduling packet of the first image group on the first image group.
- the priority determining module 450 is configured to determine the priority of each scheduling packet of the first image group based on the influence weights acquired by the weight acquiring module 440.
- the own-frame distortion acquiring module 410 may include: a first own-frame distortion acquiring sub-module 411, configured to decode, during encoding of the first image group, the first image group code stream produced before each enhancement layer of each frame is encoded, respectively obtaining the own-frame distortion of each frame image caused by the absence of each enhancement layer.
- the own-frame distortion acquiring module 410 may also include: a second own-frame distortion acquiring sub-module 412, configured to decode, after the first image group has been encoded, the first image group code stream with each enhancement layer of each frame discarded in turn, respectively obtaining the own-frame distortion of each frame image caused by the absence of each enhancement layer.
- the weight acquiring module 440 can include:
- an inter-frame weight acquiring sub-module 441, configured to use the first total distortion and the second total distortion of each frame image acquired by module 430, together with the own-frame distortion of each frame image corresponding to the code stream size of that frame truncated in the first and second truncation modes, to acquire the influence weights between the frames of the first image group;
- a frame weight acquiring sub-module 442, configured to use the inter-frame influence weights acquired by sub-module 441 to respectively acquire the influence weight of each frame of the first image group on the first image group;
- a rate-distortion acquiring sub-module 443, configured to use the code stream size of each enhancement layer of each frame of the first image group and the own-frame distortion caused by the absence of each enhancement layer, acquired by module 410, to respectively estimate the rate-distortion of each scheduling packet of the first image group;
- a packet weight acquiring sub-module 444, configured to use the per-frame influence weights acquired by sub-module 442 and the per-packet rate-distortion acquired by sub-module 443 to respectively acquire the influence weight of each scheduling packet of the first image group on the first image group; or to use the per-frame influence weights, the per-packet rate-distortion, and the code stream size of each scheduling packet to respectively acquire those influence weights.
- the inter-frame weight acquiring sub-module 441 may include:
- a first acquiring sub-module 4411, configured to use the own-frame distortions acquired by module 410 and the code stream size of each frame truncated by module 420 in the first truncation mode to obtain the first own-frame distortion of each frame image corresponding to that code stream size;
- a second acquiring sub-module 4412, configured to use the own-frame distortions acquired by module 410 and the code stream size of each frame truncated by module 420 in the second truncation mode to obtain the second own-frame distortion of each frame image corresponding to that code stream size;
- a third acquiring sub-module 4413, configured to respectively acquire the total distortion difference between the first and second total distortions of each frame image of the first image group, and the own-frame distortion difference between the first and second own-frame distortions of each frame image;
- a fourth acquiring sub-module 4414, configured to obtain the influence weights between the frames of the first image group using the total distortion differences and the own-frame distortion differences acquired by sub-module 4413.
- the foregoing apparatus may further include: a packet processing module 460, configured to perform unequal protection and/or scheduling on each scheduling packet of the first image group according to the priorities determined by the priority determining module 450.
- when the currently allowed link rate is small, the packet processing module 460 may discard lower-priority scheduling packets and keep higher-priority ones; when channel quality is unstable, it may transmit higher-priority packets on links with better channel quality and lower-priority packets on links with poorer channel quality.
- when protecting data with different degrees of redundancy, the packet processing module 460 can apply high-redundancy FEC coding to higher-priority packets and low-redundancy FEC coding to lower-priority packets; when performing unequal retransmission protection, it may retransmit higher-priority packets once or several times while retransmitting lower-priority packets less often or not at all.
- as can be seen, in this embodiment two different truncation modes are used to truncate the image group code stream, the total distortion of each frame under the two modes is obtained by two decodings, and the influence weight of each scheduling packet of each frame on the image group is obtained from these total distortions and from the own-frame distortion of each frame; the number of decodings is relatively small, which greatly reduces the complexity of determining scheduling packet priorities.
- the functions of each functional module of the apparatus for determining scheduling packet priority can be specifically implemented according to the method in Embodiment 2; for the specific implementation process, refer to the related description in Embodiment 2, which is not repeated here.
- further, using the influence weight array of a frame to record its influence weight on each frame of the image group, and the influence weight array of a scheduling packet to record its influence weight on each frame of the image group, can further simplify the computation.
- all or part of the steps of the above methods can be completed by a program instructing the relevant hardware; the program can be stored in a computer-readable storage medium, which can include: read-only memory, random access memory, magnetic disk, optical disk, and the like.
Description
Method and apparatus for determining the priority of scheduling packets. This application claims priority to Chinese patent application No. 200910203202.5, filed on May 20, 2009 and entitled "Method and apparatus for determining the priority of scheduling packets", the entire contents of which are incorporated herein by reference. Technical Field
The present invention relates to the field of communication technologies, and in particular to a method and apparatus for determining the priority of scheduling packets. Background
With the development and widespread application of network technologies, transmitting video information has become one of the important services of network transmission. When video is transmitted over the Internet or a mobile network, on the one hand, unstable transmission bandwidth caused by network congestion, channel noise and the like must be considered; on the other hand, differences in the decoding and playback capabilities of terminal devices and in application requirements must be considered. Therefore, during the transmission of video information, the encoding and transmission of the video are required to be scalable.
Scalable Video Coding (SVC) is a video coding scheme developed from the high-compression-efficiency video standard H.264/AVC. SVC can provide scalability of video in the spatial (resolution), quality and temporal (frame rate) dimensions, and can also provide packet-accurate truncation of the code stream; it achieves high scalable-video-coding efficiency, close to the compression ratio of conventional non-scalable video coding schemes.
The quality-scalable coding of SVC achieves quality (SNR) scalability mainly through repeated quantization of transform coefficients (layered coding), grouped coding of transform coefficients and bit-plane coding. Quality scalability can be realized specifically through techniques such as coarse-grain scalability (CGS) and medium-grain scalability (MGS). The basic idea is: each frame of the video is divided into an independently decodable base layer (BL) code stream and one or more enhancement layer (EL) code streams, each enhancement layer including one or more scheduling packets. The base layer uses a hybrid coding method and usually has a relatively low bit rate; it only guarantees the most basic quality requirement and ensures that the decoding end has sufficient capability to receive and decode the base layer code stream. The enhancement layers can be coded in the CGS or MGS mode, and the video resolution of the base layer and the enhancement layers is usually the same.
In the CGS coding mode, each coding layer must be obtained completely to achieve good enhancement layer quality. In the MGS coding mode, through the key-frame technique, the multiple scheduling packets of each frame of a group of pictures (GOP) can be truncated arbitrarily, which greatly improves the flexibility of quality-scalable coding; at the same time, by grouping the transform coefficients of each MGS enhancement layer, the MGS code stream of each layer can be truncated at multiple MGS scheduling packets, which greatly improves the fine granularity of quality-scalable coding.
MGS mainly uses a mechanism of multi-level coding and extraction of MGS scheduling packets; by keeping and discarding different MGS scheduling packets, various bit-rate constraints can be met. SVC uses a hierarchical B-frame structure in each GOP, and the prediction between frames at different levels is strongly correlated, so the coding efficiency after discarding different MGS scheduling packets differs greatly. Therefore, a priority first needs to be set for each MGS scheduling packet, and unequal protection and scheduling are then performed according to the priority of each MGS scheduling packet.
Referring to FIG. 1, the image group shown in FIG. 1 includes 9 frames; each frame of the image group includes one base layer and two enhancement layers, and each enhancement layer includes two MGS scheduling packets. The reference relationships among the frames of the above image group may be: frames 0 and 8 are key frames; frame 4 references frames 0 and 8; frame 2 references frames 4 and 0; frame 6 references frames 4 and 8; frame 1 references frames 2 and 0; frame 3 references frames 2 and 4; frame 5 references frames 4 and 6; frame 7 references frames 6 and 8.
In an image group, a frame referenced by other frames can be called a reference frame, and a frame that references other frames can be called a predicted frame; reference frame and predicted frame are relative notions. According to the reference relationships shown in FIG. 1, frames 0 and 8 can be called the reference frames of frame 4, and frame 4 can be called the predicted frame of frames 0 and 8; frames 2 and 4 can be called the reference frames of frame 3, and frame 3 the predicted frame of frames 2 and 4; and so on.
In the prior art, the distortion of an image group under different packet-loss patterns is generally obtained through multiple decodings, and the priority of each scheduling packet of each frame of the image group is determined by comparing the image group distortions under different packet-loss patterns. Taking the image group shown in FIG. 1 as an example, the image group includes 9 frames, each frame includes 4 scheduling packets, and the image group includes 9*4=36 scheduling packets in total; the prior art generally needs 36 decodings to obtain the image group distortion when each scheduling packet is dropped in turn, and determines the priority of each scheduling packet of each frame by comparing those distortions.
In the course of implementing the present invention, the inventors found that in the prior-art way of determining the priority of each scheduling packet of each frame of an image group, the importance of each scheduling packet is obtained by exhaustively decoding the different packet-loss patterns; many decodings are needed before the priority of each scheduling packet of each frame can finally be determined, so the process of determining scheduling packet priorities is relatively complex. Summary
The technical problem to be solved by the embodiments of the present invention is to provide a method and apparatus for determining the priority of scheduling packets, which can relatively reduce the processing complexity of the process of determining the priority of each scheduling packet of an image group.
To solve the above technical problem, the embodiments of the present invention provide the following technical solutions:
A method for determining the priority of scheduling packets includes:
respectively acquiring the own-frame distortion of each frame image caused by the absence of each enhancement layer of each frame of a first image group; truncating the first image group code stream according to a first truncation mode and a second truncation mode, the total number of scheduling packets of any frame of the first image group truncated in the first truncation mode being greater than or smaller than the total number of scheduling packets of that frame truncated in the second truncation mode; decoding the first image group code stream truncated in the first truncation mode to obtain a first total distortion of each frame image of the first image group under the first truncation mode, and decoding the first image group code stream truncated in the second truncation mode to obtain a second total distortion of each frame image under the second truncation mode; using the first total distortion, the second total distortion and the own-frame distortion of each frame image caused by the absence of each enhancement layer of each frame, respectively acquiring the influence weight of each scheduling packet of the first image group on the first image group; and determining the priority of each scheduling packet of the first image group based on the influence weight of each scheduling packet on the first image group.
An apparatus for determining the priority of scheduling packets includes:
an own-frame distortion acquiring module, configured to respectively acquire the own-frame distortion of each frame image caused by the absence of each enhancement layer of each frame of a first image group; a code stream truncating module, configured to truncate the first image group code stream according to a first truncation mode and a second truncation mode, the total number of scheduling packets of any frame truncated in the first truncation mode being greater than or smaller than the total number of scheduling packets of that frame truncated in the second truncation mode; a total distortion acquiring module, configured to decode the first image group code stream truncated by the code stream truncating module in the first truncation mode to obtain a first total distortion of each frame image under the first truncation mode, and to decode the first image group code stream truncated by the code stream truncating module in the second truncation mode to obtain a second total distortion of each frame image under the second truncation mode; a weight acquiring module, configured to use the first total distortion and the second total distortion of each frame image acquired by the total distortion acquiring module and the own-frame distortion of each frame image acquired by the own-frame distortion acquiring module to respectively acquire the influence weight of each scheduling packet of the first image group on the first image group; and a priority determining module, configured to determine the priority of each scheduling packet of the first image group based on the influence weights acquired by the weight acquiring module.
As can be seen from the above technical solutions, the technical solutions adopted in the embodiments of the present invention have the following advantages: two different truncation modes are used to truncate the image group code stream; the total distortion of each frame of the image group under the two truncation modes is obtained by two decodings; and the influence weight of each scheduling packet of each frame on the image group is obtained from these total distortions and from the own-frame distortion of each frame image. The number of decodings is relatively small, which can greatly reduce the complexity of the process of determining scheduling packet priorities. Brief Description of the Drawings
To describe the technical solutions of the embodiments of the present invention and of the prior art more clearly, the drawings needed in the description of the embodiments and the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention, and a person of ordinary skill in the art can obtain other drawings from these drawings without creative effort.
FIG. 1 is a schematic diagram of the reference structure of the frames of an image group provided by the prior art;
FIG. 2 is a flowchart of a method for determining the priority of scheduling packets according to Embodiment 1 of the present invention; FIG. 3 is a flowchart of a method for determining the priority of scheduling packets according to Embodiment 2 of the present invention; FIG. 4 is a schematic diagram of an apparatus for determining the priority of scheduling packets according to Embodiment 3 of the present invention; FIG. 5 is a schematic structural diagram of an inter-frame weight acquiring sub-module according to Embodiment 3 of the present invention. Detailed Description
The embodiments of the present invention provide a method and apparatus for determining the priority of scheduling packets, which can relatively reduce the processing complexity of the process of determining the priority of each scheduling packet of an image group.
Detailed descriptions are given below through specific embodiments. Referring to FIG. 2, a first embodiment of a method for determining the priority of scheduling packets according to the present invention may specifically include:
210. Respectively acquire the own-frame distortion of each frame image caused by the absence of each enhancement layer of each frame of a first image group.
The own-frame distortion of a frame image refers to the distortion of that frame image caused by the loss of the frame's own data, in the absence of drift error.
Various ways can be used to acquire the own-frame distortion of each frame image caused by the absence of each enhancement layer of each frame of the first image group. For example, during encoding of the first image group, the first image group code stream produced before each enhancement layer of each frame is encoded can be decoded, respectively obtaining the own-frame distortion of each frame image caused by the absence of each enhancement layer; alternatively, after the first image group has been encoded, the first image group code stream with each enhancement layer of each frame discarded in turn can be decoded, respectively obtaining those own-frame distortions.
220. Truncate the first image group code stream according to a first truncation mode and a second truncation mode, where the total number of scheduling packets of any frame of the first image group truncated in the first truncation mode is greater than or smaller than the total number of scheduling packets of that frame truncated in the second truncation mode.
230. Decode the first image group code stream truncated in the first truncation mode to obtain the first total distortion of each frame image of the first image group under the first truncation mode, and decode the first image group code stream truncated in the second truncation mode to obtain the second total distortion of each frame image under the second truncation mode.
240. Using the first total distortion, the second total distortion and the own-frame distortion of each frame image caused by the absence of each enhancement layer of each frame, respectively acquire the influence weight of each scheduling packet of the first image group on the first image group.
In one application scenario, the first total distortion, the second total distortion and the own-frame distortions of each frame image can be used to acquire the influence weights between the frames of the first image group; the influence weights between the frames are then used to respectively acquire the influence weight of each frame on the first image group; the own-frame distortion caused by the absence of each enhancement layer of each frame and the code stream size of each enhancement layer of each frame are used to respectively estimate the rate-distortion of each scheduling packet of the first image group; and the influence weight of each frame on the first image group and the rate-distortion of each scheduling packet are used to respectively acquire the influence weight of each scheduling packet on the first image group.
250. Determine the priority of each scheduling packet of the first image group based on the influence weight of each scheduling packet on the first image group.
In one application scenario, the acquired influence weights of the scheduling packets of the first image group on the first image group can be compared in magnitude, and the priority of each scheduling packet of each frame of the image group set according to the comparison result.
The number of priority levels can be determined according to actual needs. Scheduling packets with different influence weights on the first image group can be set to different priorities; of course, scheduling packets with close or identical influence weights on the image group can be set to the same priority. The higher the priority of a scheduling packet, the greater its influence weight on the image group.
After the priority of each scheduling packet of each frame of the image group is determined, processing such as unequal protection and scheduling can be performed on each scheduling packet according to its priority.
As can be seen from the above technical solution, in this embodiment two different truncation modes are used to truncate the image group code stream; the total distortion of each frame under the two truncation modes is obtained by two decodings; and the influence weight of each scheduling packet of each frame on the image group is obtained from these total distortions and from the own-frame distortion of each frame image. The number of decodings is relatively small, which can greatly reduce the complexity of the process of determining scheduling packet priorities.
Specifically, taking the image group shown in FIG. 1 as an example, the image group includes 9 frames, each frame includes 2 enhancement layers, each enhancement layer includes 2 scheduling packets, and the image group includes 9*4=36 scheduling packets in total. The prior art needs 36 decodings to determine the influence weight of each scheduling packet of each frame on the image group. In the technical solution of this embodiment, acquiring the own-frame distortion caused by the absence of each enhancement layer of each frame requires 9*2=18 decodings, and acquiring the first total distortion and the second total distortion requires 2 decodings, so only 18+2=20 decodings in total are needed to determine the influence weight of each scheduling packet on the image group. Moreover, if the own-frame distortions are acquired during encoding of the image group, the decoding steps already present in the encoding process can be used to obtain them directly, and no additional decodings for the own-frame distortions are needed (the extra 9*2=18 decodings are unnecessary; only 2 additional decodings are required). Therefore, the number of decodings of the technical solution of the embodiments of the present invention is far smaller than that of the prior art. For ease of understanding, a more specific embodiment is described below. Referring to FIG. 3, a second embodiment of a method for determining the priority of scheduling packets according to the present invention may specifically include:
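The decode-count arithmetic of this comparison can be checked with a small Python helper (a hypothetical function written for illustration, not part of the claimed method):

```python
def decode_counts(frames, layers, packets_per_layer):
    """Compare decoding effort: the prior art decodes once per dropped
    scheduling packet; the proposed scheme needs one decode per
    enhancement layer for the own-frame distortions (skippable when they
    are read out of the encoder's own decoding loop) plus two decodes
    for the two truncation modes."""
    prior_art = frames * layers * packets_per_layer
    proposed = frames * layers + 2   # standalone decoding
    proposed_in_encoder = 2          # reusing the encoder's decodes
    return prior_art, proposed, proposed_in_encoder

# FIG. 1: 9 frames, 2 enhancement layers, 2 packets per layer.
counts = decode_counts(9, 2, 2)  # (36, 20, 2)
```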
301. Respectively acquire the own-frame distortion of each frame image caused by the absence of each enhancement layer of each frame of the image group.
It should be noted that the absence of the j-th enhancement layer of the i-th frame can mean that, in the absence of drift error, only the j-th enhancement layer of the i-th frame is missing, or that the j-th enhancement layer and some or all of the enhancement layers above it are missing, and so on. That is, the absence of each enhancement layer of each frame of the image group can mean that, when an enhancement layer is missing, some or all of the enhancement layers above it are discarded together.
The own-frame distortion of the i-th frame image caused by the absence of each enhancement layer can be obtained while the reference data of the i-th frame (including directly referenced data and indirectly referenced data) is complete. When an enhancement layer of the i-th frame is missing, data not referenced by the i-th frame (for example, data of the m-th frame) can be partly or entirely missing (since the loss of data not referenced by the i-th frame causes no drift error in the i-th frame, such data can be partly or entirely missing when the own-frame distortion of the i-th frame image is being obtained); in this way, the own-frame distortion of the m-th frame image caused by the absence of each enhancement layer of the m-th frame can be acquired at the same time.
In one application scenario, during encoding of the code stream of the image group, the image group code stream produced before each enhancement layer of each frame is encoded can be decoded, respectively obtaining the own-frame distortion of each frame image caused by the absence of each enhancement layer of each frame.
For example, in the process of encoding the i-th frame of the image group, when encoding the j-th layer of the i-th frame, the code stream encoded up to the (j-1)-th layer (base layer or enhancement layer) of the i-th frame must first be decoded, and the decoded image of the i-th frame encoded up to the (j-1)-th layer is used as the reference for encoding the code stream of the j-th layer. Since the code stream encoded up to the (j-1)-th layer of the i-th frame is decoded when the j-th layer is encoded, the own-frame distortion of the i-th frame when encoded up to the (j-1)-th layer, as well as the code stream size at that point, can be acquired; and the own-frame distortion of the i-th frame encoded up to the (j-1)-th layer equals the own-frame distortion of the i-th frame image when the j-th layer of the i-th frame alone is missing from the image group. By analogy, in the process of encoding each frame of the image group, the own-frame distortion caused by the absence of each enhancement layer of each frame alone, as well as the code stream size of each layer of each frame, can be acquired respectively.
It can be seen that, since the encoding process contains decoding steps, the own-frame distortion corresponding to each enhancement layer of each frame can be acquired directly from those decoding steps, without additional processing overhead, and the implementation is relatively simple.
In another application scenario, after the image group has been encoded, the image group code stream with each enhancement layer of each frame discarded in turn can be decoded, respectively obtaining the own-frame distortion of each frame image caused by the absence of each enhancement layer of each frame.
For example, the own-frame distortion of the i-th frame caused by the absence of the j-th layer of the i-th frame alone can be obtained by decoding a truncated code stream as follows: truncate the code stream of the i-th frame at the (j-1)-th layer and keep all layer code streams of the other frames of the image group; by decoding this truncated code stream, the own-frame distortion of the i-th frame image caused by the absence of the j-th layer of the i-th frame alone can be obtained, and the code stream size from the base layer to the (j-1)-th layer of the i-th frame can be obtained at the same time. By analogy, the own-frame distortion caused by the absence of each enhancement layer of each frame alone, as well as the code stream size of each layer of each frame, can be acquired respectively.
It should be noted that truncating the i-th frame at the (j-1)-th layer means that the truncated code stream is the entire code stream of the i-th frame from the base layer to the (j-1)-th layer; by analogy, truncating the i-th frame at the k-th scheduling packet of the j-th layer means that the truncated code stream is the entire code stream of the i-th frame from the base layer to the k-th scheduling packet of the j-th layer.
It can be understood that the own-frame distortion of the i-th frame of the image group can characterize the distortion of the i-th frame caused by the loss of the i-th frame's own data, in the absence of drift error.
It can be understood that, if no direct or indirect reference relationship exists between certain frames of the image group, one enhancement layer of each of these mutually non-referencing frames can be missing at the same time and the stream decoded once, simultaneously obtaining the own-frame distortions of those frames under that loss.
In the following, for ease of description, the own-frame distortion of the i-th frame image obtained by decoding the truncated code stream consisting of the j-th layer of the i-th frame of the image group together with the entire code streams of the other frames is denoted M_i^j, and the size of the entire code stream from the base layer to the j-th layer of the i-th frame is denoted R_i^j.
In one application scenario, it can be assumed that, within one coding layer of the i-th frame and in the absence of drift error (Drift), the decrease of the distortion MSE is linear in the increase of the bit rate.
The effect of the absence (discarding) of scheduling packets of different coding layers of the i-th frame on the image quality of the i-th frame can then be described by the piecewise-linear relationship shown in formula (1):
MSE(R_i) = M_i^{j-1} - (M_i^{j-1} - M_i^j) * (R_i - R_i^{j-1}) / (R_i^j - R_i^{j-1}), for R_i^{j-1} <= R_i <= R_i^j ( 1 )
where R_i denotes the truncated code stream size of the i-th frame of the currently truncated image group, and MSE(R_i) can denote the own-frame distortion of the i-th frame corresponding to the truncated code stream size R_i, in the absence of drift error.
It can be seen that formula (1) makes it simple to obtain the own-frame distortion of each frame under any truncation mode in the absence of drift error (that is, R_i can be adjusted as needed, and is not limited to truncating exactly one complete enhancement layer or complete scheduling packet; part of an enhancement layer or scheduling packet may also be truncated). For example, referring to formula (1), when the value of R_i satisfies R_i^0 <= R_i <= R_i^1, then MSE(R_i) = M_i^0 - (M_i^0 - M_i^1) * (R_i - R_i^0) / (R_i^1 - R_i^0).
For example, if R_i^{j,k} denotes the truncated code stream size of the i-th frame consisting of the entire code stream from the base layer to the k-th scheduling packet of the j-th layer, then setting R_i = R_i^{j,k} in formula (1) yields M_i^{j,k} (= MSE(R_i^{j,k})), where M_i^{j,k} denotes the own-frame distortion of the i-th frame corresponding to decoding the truncation at the k-th scheduling packet of the j-th layer of the i-th frame, in the absence of drift error. In this way, the own-frame distortions caused by the absence of each enhancement layer of each frame, acquired respectively as above, can be used to determine the own-frame distortion M_i^{j,k} caused by the absence of each scheduling packet of each enhancement layer of each frame.
In addition, (M_i^0 - M_i^1) / (R_i^1 - R_i^0) can denote, in the absence of drift error, the distortion-to-rate ratio of the first enhancement layer of the i-th frame, and can be denoted RDO_i^1. By analogy, (M_i^{j-1} - M_i^j) / (R_i^j - R_i^{j-1}) can denote the distortion-to-rate ratio of the j-th enhancement layer of the i-th frame in the absence of drift error (rate-distortion for short), denoted RDO_i^j; and (M_i^{j,k-1} - M_i^{j,k}) / (R_i^{j,k} - R_i^{j,k-1}) can denote the rate-distortion of the k-th MGS scheduling packet of the j-th layer of the i-th frame in the absence of drift error, denoted RDO_i^{j,k}. RDO_i^{j,k} embodies the rate-distortion relationship (R-D, Rate-Distortion function) of the k-th scheduling packet of the j-th layer of the i-th frame, and can characterize the importance of each bit of that scheduling packet.
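The piecewise-linear model of formula (1) and the per-packet rate-distortion RDO_i^{j,k} can be sketched in Python (the function names and the sample rate/MSE values are illustrative; real values come from the decodings described above):

```python
def own_frame_distortion(r, layer_rates, layer_mses):
    """Formula (1): interpolate the drift-free own-frame distortion of a
    frame at truncated stream size r between the measured (R_i^j, M_i^j)
    points of successive layers (rates ascending, MSEs descending)."""
    if r <= layer_rates[0]:
        return layer_mses[0]
    if r >= layer_rates[-1]:
        return layer_mses[-1]
    for j in range(1, len(layer_rates)):
        if r <= layer_rates[j]:
            r0, r1 = layer_rates[j - 1], layer_rates[j]
            m0, m1 = layer_mses[j - 1], layer_mses[j]
            return m0 - (m0 - m1) * (r - r0) / (r1 - r0)

def packet_rdo(m_prev, m_cur, r_prev, r_cur):
    """RDO_i^{j,k} = (M_i^{j,k-1} - M_i^{j,k}) / (R_i^{j,k} - R_i^{j,k-1}):
    the distortion reduction contributed by each bit of the packet."""
    return (m_prev - m_cur) / (r_cur - r_prev)
```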
302. Truncate the image group code stream according to a first truncation mode and a second truncation mode, decode the image group code streams truncated in the two modes respectively, and respectively obtain the total distortion of each frame image of the image group under the first and second truncation modes.
In one application scenario, the number of layers and/or the number of scheduling packets of each frame truncated in the first and second truncation modes are required to differ; that is, the total number of scheduling packets of any frame of the image group truncated in the first mode is greater than or smaller than the total number of scheduling packets of that frame truncated in the second mode.
For example, the first truncation mode truncates the i-th frame of the image group at the k1-th scheduling packet of the j1-th layer (if k1=0, only up to the (j1-1)-th layer of the i-th frame is truncated, and so on); the second truncation mode truncates the i-th frame at the k2-th scheduling packet of the j2-th layer, where the values of j1, j2, k1, k2 must satisfy at least one of the following conditions: j1 greater than or smaller than j2, or k1 greater than or smaller than k2. The other frames of the image group are truncated analogously, to ensure that the total number of scheduling packets of each frame differs between the two truncation modes.
By decoding the image group code streams truncated in the two modes respectively, the total distortion of each frame image of the image group under each truncation mode can be obtained, and the code stream size of each frame under each of the two truncation modes can also be obtained.
It can be understood that the total distortion of the i-th frame of the image group can characterize the distortion of the i-th frame caused both by the loss of the i-th frame's own data and by the loss of data of the reference frames of the i-th frame, and thus reflects, to some extent, the reference relationships among the frames of the image group.
In the following, for ease of description, the total distortion of the i-th frame image obtained by decoding the code stream truncated in the first mode is denoted ε1_i, the code stream size of the i-th frame truncated in the first mode is denoted R1_i, the total distortion of the i-th frame image obtained by decoding the code stream truncated in the second mode is denoted ε2_i, and the code stream size of the i-th frame truncated in the second mode is denoted R2_i.
303. Using the total distortion of each frame image of the image group under the first and second truncation modes and the own-frame distortion of each frame image caused by the absence of each enhancement layer of each frame, acquire the influence weight of each scheduling packet of each frame on the image group.
In one application scenario, the influence weights between the frames of the image group can be acquired first, and the influence weights between the frames can then be used to obtain the influence weight of each frame on the image group.
Formula (2) can be used to acquire the influence weight of a reference frame (the m-th frame) of the i-th frame (a predicted frame) of the image group on the i-th frame:
μ_i = (ΔE(ε_i) - ΔE(M_i)) / Σ_{m∈S} ΔE(ε_m) ( 2 )
In formula (2), ΔE(ε_i) denotes the difference between the total distortions of the i-th frame under the two different truncation modes, i.e. ΔE(ε_i) = ε1_i - ε2_i; ΔE(M_i) denotes the difference, in the absence of drift error, between the own-frame distortions of the i-th frame corresponding to the code stream sizes of the i-th frame truncated in the two modes, i.e. ΔE(M_i) = MSE(R1_i) - MSE(R2_i); S denotes the set of all reference frames of the i-th frame, so the denominator of formula (2) is the sum of the total distortion differences of all reference frames of the i-th frame; and μ_i denotes the influence weight of a reference frame of the i-th frame on the i-th frame. Here ε1_i and ε2_i can be obtained by decoding the truncated code streams, and MSE(R1_i) and MSE(R2_i) can be obtained during encoding of the image group code stream or by truncating the encoded image group code stream (see formula (1)).
For example, in the image group shown in FIG. 1, frame 4 references frames 0 and 8. It can be assumed that the drift-error influences of frames 0 and 8 on frame 4 are linear, and that frames 0 and 8 have the same drift-error weight on frame 4. Formula (2) can be used to acquire the influence weight μ4 of frame 0 or frame 8 on frame 4:
μ_{0→4} = μ_{8→4} = μ4 = (ΔE(ε4) - ΔE(M4)) / (ΔE(ε0) + ΔE(ε8))
where ΔE(ε0), ΔE(ε4) and ΔE(ε8) respectively denote the differences between the total distortions of frames 0, 4 and 8 under the two truncation modes, and ΔE(M4) denotes the own-frame distortion difference of frame 4. μ4 then denotes the influence weight of a reference frame of frame 4 (frame 0 or frame 8) on frame 4.
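The computation of formula (2), as in the frame-4 example above, can be sketched as follows (a hypothetical helper; the ΔE values below are illustrative and would in practice come from the two truncation decodings):

```python
def reference_weight(d_total_pred, d_own_pred, d_total_refs):
    """Formula (2): influence weight mu of a reference frame on a
    predicted frame i, assuming the drift contributions of all
    references are linear and equal.
    d_total_pred: total-distortion difference dE(eps_i) of frame i
                  between the two truncation modes;
    d_own_pred:   own-frame distortion difference dE(M_i) of frame i;
    d_total_refs: total-distortion differences of frame i's references."""
    return (d_total_pred - d_own_pred) / sum(d_total_refs)

# Frame-4 example of FIG. 1 with made-up distortion differences:
mu4 = reference_weight(10.0, 4.0, [4.0, 4.0])  # (10-4)/(4+4) = 0.75
```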
It can be seen that formula (2) makes it simple to obtain the influence weight of each reference frame on its predicted frame in the image group. After the influence weight of each reference frame on its predicted frame is obtained with formula (2), these weights can be used to obtain the influence weight of each frame of the image group on the other frames, i.e. the influence weights between all frames. In one application scenario, level-by-level reference relationships may exist between the frames of the image group; for example, the level-by-level references may be: frame n1 references frame n2, frame n1 references frame n3, and frame n3 references frame n4.
In an image group with level-by-level references, the influence weight contributed through a direct reference frame on a predicted frame is the product of the total influence weight of the upper-level reference frames (i.e. the reference frames of that direct reference frame) on the direct reference frame and the influence weight of the direct reference frame on the predicted frame. For example, when frame n1 directly references frames n2 and n3, and frame n3 references frame n4, then with frame n1 as the predicted frame, frames n2 and n3 are the direct reference frames of frame n1 (the predicted frame), and frame n4 is the upper-level reference frame of frame n3 (a direct reference frame).
For example, if F_i^n denotes the influence weight of the i-th frame on the n-th frame, and μ_n denotes the influence weight of a reference frame of the n-th frame on the n-th frame, the influence weight F_i^n of the i-th frame on the n-th frame can be obtained as shown in formula (3):
F_i^n = μ_n * Σ_{j∈N} F_i^j ( 3 )
In formula (3), N denotes the set of all direct reference frames of the n-th frame, and Σ_{j∈N} F_i^j denotes the sum of the influence weights of the i-th frame on all direct reference frames (the j-th frames) of the n-th frame. If the k-th frame is a key frame, i.e. the k-th frame references no other frame, then F_k^k = 1; if the i-th frame is an upper-level frame of the n-th frame but is not a direct reference frame or an indirect level-by-level reference frame of the n-th frame, i.e. there is no direct or indirect reference relationship between the i-th frame and the n-th frame, then F_i^n = 0. In particular, a partial reference relationship may also exist between a predicted frame and a reference frame in the image group, i.e. the predicted frame may reference only some pixels of the reference frame.
For example, if frames m2 and m3 reference the i-th frame, frame m1 references frames m2 and m3, and when referencing frames m2 and m3 frame m1 references only some pixels (blocks, macroblocks) of frames m2 and m3, then the influence weight of the i-th frame on frame m1 is F_i^{m1} = (P_{m2} * F_i^{m2} + P_{m3} * F_i^{m3}) * μ_{m1}, where P_{m2} is the proportion of pixels of frame m2 referenced by frame m1, P_{m3} is the proportion of pixels of frame m3 referenced by frame m1, and the values of P_{m2} and P_{m3} can be greater than or equal to 0 and smaller than or equal to 1.
Using formula (2), the influence weight of each reference frame on its predicted frame can be acquired. Taking the image group shown in FIG. 1 as an example, if F_0^n denotes the influence weight of frame 0 on frame n, the influence weights of frame 0 in FIG. 1 on each frame can be as follows:
1. Influence weight of frame 0 on frame 0: F_0^0 = 1;
2. Influence weight of frame 0 on frame 4 (frames 0 and 8 being the reference frames of frame 4): F_0^4 = μ4 * (F_0^0 + F_0^8) = μ4 * (1 + 0) = μ4;
3. Influence weight of frame 0 on frame 2: F_0^2 = μ2 * (F_0^4 + F_0^0) = μ2 * (μ4 + 1);
4. Influence weight of frame 0 on frame 6: F_0^6 = μ6 * (F_0^4 + F_0^8) = μ6 * (μ4 + 0) = μ6 * μ4;
5. Influence weight of frame 0 on frame 1: F_0^1 = μ1 * (F_0^2 + F_0^0) = μ1 * (1 + μ2 * (1 + μ4));
6. Influence weight of frame 0 on frame 3: F_0^3 = μ3 * (F_0^2 + F_0^4) = μ3 * (μ2 * (1 + μ4) + μ4);
7. Influence weight of frame 0 on frame 5: F_0^5 = μ5 * (F_0^4 + F_0^6) = μ5 * (μ4 + μ6 * μ4);
8. Influence weight of frame 0 on frame 7: F_0^7 = μ7 * (F_0^6 + F_0^8) = μ7 * μ6 * μ4;
9. Influence weight of frame 0 on frame 8: F_0^8 = 0.
By analogy, in the above way, the influence weights of frames 1, 2, 3, 4, 5, 6, 7 and 8 on the other frames of the GOP are obtained in turn, so that the influence weights between all frames can be obtained.
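The level-by-level propagation of formula (3) over the reference structure of FIG. 1 can be sketched recursively (the reference lists follow FIG. 1; the uniform μ value used in the check is illustrative only):

```python
def influence(i, n, refs, mu):
    """Formula (3): weight F_i^n of frame i on frame n. refs[n] lists
    frame n's direct reference frames; mu[n] is the per-reference weight
    from formula (2). A key frame is influenced only by itself."""
    if i == n:
        return 1.0
    if not refs[n]:  # n is a key frame: no influence propagates to it
        return 0.0
    return mu[n] * sum(influence(i, j, refs, mu) for j in refs[n])

# Reference structure of the GOP in FIG. 1 (frames 0 and 8 are key frames).
REFS = {0: [], 8: [], 4: [0, 8], 2: [4, 0], 6: [4, 8],
        1: [2, 0], 3: [2, 4], 5: [4, 6], 7: [6, 8]}
```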
In particular, if frame 3 references only some pixels of frames 4 and 2, the influence weight of frame 0 on frame 3 is F_0^3 = (P2 * (1 + μ4) * μ2 + P4 * μ4) * μ3, where P2 is the proportion of pixels of frame 2 referenced by frame 3, and P4 is the proportion of pixels of frame 4 referenced by frame 3. If other frames also involve partial references, the same applies by analogy.
Further, an influence weight array can be generated for each frame of the image group; the influence weight array records the influence weight of that frame on each frame of the image group. For example, each element of the influence weight array of the i-th frame can be the influence weight of the i-th frame on the corresponding frame of the image group.
For example, the influence weight array of frame 0 of the image group shown in FIG. 1 can be as follows:
FWeight_0[9] = [1, μ1(1 + μ2(1 + μ4)), μ2(1 + μ4), μ3(μ2(1 + μ4) + μ4), μ4, μ5(μ4 + μ6μ4), μ6μ4, μ7μ6μ4, 0]
It can be seen that generating the influence weight array of the i-th frame in this form clearly records the influence weight of the i-th frame on each frame of the image group, which facilitates subsequent calculation. After the influence weight of each frame on the other frames is obtained with formula (3), these weights can be used to obtain the influence weight of each frame on the image group.
The influence weight of the i-th frame on the image group can be obtained with formula (4), but is not limited thereto:
FW_i = Σ_{n∈F} F_i^n ( 4 )
In formula (4), F denotes the set of all frames of the image group. For example, in the image group shown in FIG. 1, if FW_0 denotes the influence weight of frame 0 on the entire image group, all elements of the influence weight array FWeight_0[9] of frame 0 can be summed with formula (4) to obtain FW_0.
It can be seen that formula (4) yields the influence weight of each frame of the image group on the image group. After the influence weight of each frame on the image group is obtained with formula (4), the influence weight of each scheduling packet of each frame on the image group can further be obtained. In one application scenario, if MW_i^{j,k} denotes the influence weight of the k-th scheduling packet of the j-th layer of the i-th frame on the image group, the relationship between MW_i^{j,k} and FW_i can be as shown in formula (5), but is not limited thereto:
MW_i^{j,k} = FW_i * RDO_i^{j,k} ( 5 )
In formula (5), RDO_i^{j,k} denotes, in the absence of drift error, the distortion-to-rate ratio (i.e. rate-distortion) of the k-th MGS scheduling packet of the j-th layer of the i-th frame, RDO_i^{j,k} = (M_i^{j,k-1} - M_i^{j,k}) / (R_i^{j,k} - R_i^{j,k-1}); reference can be made to the related content of step 301. If the scheduling packets of the same enhancement layer are equally important, RDO_i^{j,k} can be taken as the rate-distortion RDO_i^j of that layer.
Further, to make the influence weight of a scheduling packet on the image group more intuitive, if MW_i^{j,k} still denotes the influence weight of the k-th scheduling packet of the j-th layer of the i-th frame on the image group, the relationship between MW_i^{j,k} and FW_i can also be as shown in formula (6), but is not limited thereto:
MW_i^{j,k} = D_i^{j,k} * FW_i * RDO_i^{j,k} ( 6 )
In formula (6), D_i^{j,k} denotes the code stream size of the k-th scheduling packet of the j-th layer of the i-th frame. In another application scenario, the weight array of each frame of the image group can also be used to obtain a weight array for each scheduling packet of each frame of the image group.
The relationship can be as shown in formula (7), but is not limited thereto:
MWeight_i^{j,k}[f] = FWeight_i[f] * RDO_i^{j,k} ( 7 )
where each element of MWeight_i^{j,k}[f] can characterize the influence weight of the k-th scheduling packet of the j-th layer of the i-th frame on the corresponding frame of the image group.
Further, to make each element of the scheduling packet's influence weight array characterize that influence more intuitively, if the image group includes f frames in total, the relationship between the influence weight array FWeight_i[f] of the i-th frame and the influence weight array MWeight_i^{j,k}[f] of the k-th scheduling packet of the j-th layer of the i-th frame can also be as shown in formula (8), but is not limited thereto:
MWeight_i^{j,k}[f] = D_i^{j,k} * FWeight_i[f] * RDO_i^{j,k} ( 8 )
By summing the elements of MWeight_i^{j,k}[f], the influence weight of the k-th scheduling packet of the j-th layer of the i-th frame on the image group can be obtained.
It can be seen that, using the above formulas, the influence weight of each scheduling packet of each frame on the image group can be obtained.
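Formulas (4) and (6) can be sketched with two small helpers (hypothetical names; the inputs would come from the influence weight arrays and the rate-distortion values derived above):

```python
def frame_weight(fweight_row):
    """Formula (4): FW_i is the sum of frame i's influence weights over
    every frame of the image group (one influence weight array)."""
    return sum(fweight_row)

def packet_weight(d, fw, rdo):
    """Formula (6): MW_i^{j,k} = D_i^{j,k} * FW_i * RDO_i^{j,k} --
    packet stream size times the frame's weight on the image group
    times the packet's distortion-per-bit ratio."""
    return d * fw * rdo

# Illustrative numbers: a 100-byte packet of a frame with FW = 1.75
# and per-bit distortion reduction 0.2.
mw = packet_weight(100, frame_weight([1.0, 0.5, 0.25]), 0.2)
```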
304. Determine the priority of each scheduling packet of each frame of the image group.
The priority of each scheduling packet of each frame of the image group can be determined according to the influence weight of each scheduling packet on the image group.
In one application scenario, the influence weights of the scheduling packets of each frame on the image group can be sorted by magnitude, and the priority of each scheduling packet set according to the sorting result.
The number of priority levels can be determined according to actual needs. Scheduling packets with different influence weights on the image group can be set to different priorities; of course, scheduling packets with close or identical influence weights can be set to the same priority. The higher the priority of a scheduling packet, the greater its influence weight on the image group.
305. Perform unequal protection and/or scheduling on each scheduling packet of the image group according to the priority of each scheduling packet.
After the priority of each scheduling packet of each frame of the image group is determined, processing such as unequal protection and/or scheduling can further be performed on each scheduling packet according to its priority.
For example, when the currently allowed link rate is small, lower-priority scheduling packets can be discarded and higher-priority scheduling packets kept. When channel quality is unstable, higher-priority packets can be transmitted on links with better channel quality and lower-priority packets on links with poorer channel quality. When protecting data with different degrees of redundancy, high-redundancy forward error correction (FEC, Forward Error Correction) coding can be applied to higher-priority packets and low-redundancy FEC coding to lower-priority packets; when performing unequal retransmission protection, higher-priority packets can be retransmitted once or several times while lower-priority packets are not retransmitted, or are retransmitted less often.
As can be seen from the above technical solution, in this embodiment two different truncation modes are used to truncate the image group code stream; the total distortion of each frame under the two truncation modes is obtained by two decodings; and the influence weight of each scheduling packet of each frame on the image group is obtained from these total distortions and from the own-frame distortion of each frame image. The number of decodings is relatively small, which can greatly reduce the complexity of the process of determining scheduling packet priorities.
Further, using the influence weight array of a frame to record its influence weight on each frame of the image group, and the influence weight array of a scheduling packet to record its influence weight on each frame of the image group, can further simplify the computation. Embodiment 3:
To better implement the above method, an embodiment of the present invention also provides an apparatus for determining the priority of scheduling packets. Referring to FIG. 4, an apparatus for determining the priority of scheduling packets according to Embodiment 3 of the present invention may specifically include: an own-frame distortion acquiring module 410, a code stream truncating module 420, a total distortion acquiring module 430, a weight acquiring module 440 and a priority determining module 450.
The own-frame distortion acquiring module 410 is configured to respectively acquire the own-frame distortion of each frame image caused by the absence of each enhancement layer of each frame of a first image group.
The code stream truncating module 420 is configured to truncate the first image group code stream according to a first truncation mode and a second truncation mode, where the total number of scheduling packets of any frame of the first image group truncated in the first truncation mode is greater than or smaller than the total number of scheduling packets of that frame truncated in the second truncation mode.
The total distortion acquiring module 430 is configured to decode the first image group code stream truncated by the code stream truncating module 420 in the first truncation mode to obtain the first total distortion of each frame image of the first image group under the first truncation mode, and to decode the first image group code stream truncated by the code stream truncating module 420 in the second truncation mode to obtain the second total distortion of each frame image under the second truncation mode.
The weight acquiring module 440 is configured to use the first total distortion and the second total distortion of each frame image acquired by the total distortion acquiring module 430, together with the own-frame distortion of each frame image caused by the absence of each enhancement layer acquired by the own-frame distortion acquiring module 410, to respectively acquire the influence weight of each scheduling packet of the first image group on the first image group.
The priority determining module 450 is configured to determine the priority of each scheduling packet of the first image group based on the influence weight of each scheduling packet of the first image group acquired by the weight acquiring module 440.
In one application scenario, the own-frame distortion acquiring module 410 may include:
A first own-frame distortion acquiring sub-module 411, configured to decode, during encoding of the first image group, the first image group code stream produced before each enhancement layer of each frame is encoded, respectively obtaining the own-frame distortion of each frame image caused by the absence of each enhancement layer.
In one application scenario, the own-frame distortion acquiring module 410 may also include:
A second own-frame distortion acquiring sub-module 412, configured to decode, after the first image group has been encoded, the first image group code stream with each enhancement layer of each frame discarded in turn, respectively obtaining the own-frame distortion of each frame image caused by the absence of each enhancement layer.
In one application scenario, the weight acquiring module 440 may include:
An inter-frame weight acquiring sub-module 441, configured to use the first total distortion and the second total distortion of each frame image of the first image group acquired by the total distortion acquiring module 430, together with the own-frame distortion of each frame image corresponding to the code stream size of that frame truncated in the first and second truncation modes, to acquire the influence weights between the frames of the first image group.
A frame weight acquiring sub-module 442, configured to use the inter-frame influence weights acquired by sub-module 441 to respectively acquire the influence weight of each frame of the first image group on the first image group.
A rate-distortion acquiring sub-module 443, configured to use the code stream size of each enhancement layer of each frame of the first image group and the own-frame distortion of each frame image caused by the absence of each enhancement layer, acquired by the own-frame distortion acquiring module 410, to respectively estimate the rate-distortion of each scheduling packet of the first image group.
A packet weight acquiring sub-module 444, configured to use the influence weight of each frame on the first image group acquired by sub-module 442 and the rate-distortion of each scheduling packet acquired by sub-module 443 to respectively acquire the influence weight of each scheduling packet of the first image group on the first image group; or to use the per-frame influence weights, the per-packet rate-distortion and the code stream size of each scheduling packet to respectively acquire those influence weights.
Referring to FIG. 5, in one application scenario, the inter-frame weight acquiring sub-module 441 may include:
A first acquiring sub-module 4411, configured to use the own-frame distortions caused by the absence of each enhancement layer of each frame acquired by the own-frame distortion acquiring module 410 and the code stream size of each frame truncated by the code stream truncating module 420 in the first truncation mode, to obtain the first own-frame distortion of each frame image of the first image group corresponding to that code stream size.
A second acquiring sub-module 4412, configured to use the own-frame distortions acquired by module 410 and the code stream size of each frame truncated by module 420 in the second truncation mode, to obtain the second own-frame distortion of each frame image corresponding to that code stream size.
A third acquiring sub-module 4413, configured to respectively acquire the total distortion difference between the first total distortion and the second total distortion of each frame image of the first image group, and the own-frame distortion difference between the first own-frame distortion and the second own-frame distortion of each frame image;
A fourth acquiring sub-module 4414, configured to obtain the influence weights between the frames of the first image group using the total distortion differences and own-frame distortion differences of each frame image acquired by sub-module 4413.
In one application scenario, the above apparatus may further include:
A packet processing module 460, configured to perform unequal protection and/or scheduling on each scheduling packet of the first image group according to the priority of each scheduling packet determined by the priority determining module 450.
In one application scenario, when the currently allowed link rate is small, the packet processing module 460 may discard lower-priority scheduling packets and keep higher-priority ones. When channel quality is unstable, the packet processing module 460 may transmit higher-priority packets on links with better channel quality and lower-priority packets on links with poorer channel quality.
When protecting data with different degrees of redundancy, the packet processing module 460 can apply high-redundancy FEC coding to higher-priority packets and low-redundancy FEC coding to lower-priority packets; when performing unequal retransmission protection, it may retransmit higher-priority packets once or several times while retransmitting lower-priority packets less often or not at all.
As can be seen from the above technical solution, in this embodiment two different truncation modes are used to truncate the image group code stream; the total distortion of each frame under the two truncation modes is obtained by two decodings; and the influence weight of each scheduling packet of each frame on the image group is obtained from these total distortions and from the own-frame distortion of each frame image. The number of decodings is relatively small, which can greatly reduce the complexity of the process of determining scheduling packet priorities.
It can be understood that the functions of each functional module of the apparatus for determining scheduling packet priority in this embodiment can be specifically implemented according to the method in Embodiment 2; for the specific implementation process, refer to the related description in Embodiment 2, which is not repeated here.
It should be noted that, for brevity, each of the foregoing method embodiments is described as a series of action combinations; however, those skilled in the art should know that the present invention is not limited by the described order of actions, because according to the present invention some steps may be performed in another order or simultaneously. Furthermore, those skilled in the art should also know that the embodiments described in the specification are all preferred embodiments, and the actions and modules involved are not necessarily required by the present invention.
In the above embodiments, the description of each embodiment has its own emphasis; for parts not detailed in one embodiment, refer to the related descriptions of other embodiments.
In summary, in the embodiments of the present invention, the bitstream of a group of pictures is truncated in two different manners, the total distortion of each frame under the two truncation manners is obtained through two decoding passes, and the influence weight of each scheduling packet of each frame on the group of pictures is obtained by using these total distortions together with the own-frame distortion of each frame. The number of decoding passes is relatively small, which can greatly reduce the complexity of the process of determining scheduling packet priorities.
Further, recording the influence weight of a frame on each frame of the group of pictures in an influence-weight array of that frame, and recording the influence weight of a scheduling packet on each frame of the group of pictures in an influence-weight array of that packet, can further simplify the computation process.
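The influence-weight arrays mentioned here can be pictured as rows of a matrix: row i holds frame i's influence weight on every frame j of the group of pictures, so a frame's weight on the whole group is just a row sum. A minimal sketch with plain lists and illustrative numbers:

```python
def frame_gop_weights(influence):
    """influence[i][j] is the influence weight of frame i on frame j.
    A frame's weight on the whole group of pictures is its row sum."""
    return [sum(row) for row in influence]

# illustrative 3-frame group: each frame fully affects itself and
# partially affects later frames
W = [[1.0, 0.6, 0.3],
     [0.0, 1.0, 0.5],
     [0.0, 0.0, 1.0]]
```

The same layout extends to scheduling packets: a packet's array records its influence on each frame, so combining weights becomes elementwise arithmetic instead of re-deriving them per packet.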
Those of ordinary skill in the art can understand that all or part of the steps in the various methods of the above embodiments may be completed by a program instructing the relevant hardware; the program may be stored in a computer-readable storage medium, and the storage medium may include a read-only memory, a random access memory, a magnetic disk, an optical disc, or the like.
The method and apparatus for determining scheduling packet priority provided by the embodiments of the present invention have been described in detail above. The description of the above embodiments is only intended to help understand the method of the present invention and its core idea; meanwhile, for those of ordinary skill in the art, changes may be made to the specific implementation and scope of application according to the idea of the present invention. In summary, the contents of this specification should not be construed as limiting the present invention.
Claims
1. A method for determining scheduling packet priority, comprising:

obtaining, for each frame of a first group of pictures, the own-frame distortion of that frame caused by the loss of each enhancement layer of the frame;

truncating a bitstream of the first group of pictures in a first truncation manner and in a second truncation manner, wherein the total number of scheduling packets of any frame of the first group of pictures truncated in the first truncation manner is greater than or less than the total number of scheduling packets of that frame truncated in the second truncation manner;

decoding the bitstream of the first group of pictures truncated in the first truncation manner to obtain a first total distortion of each frame of the first group of pictures under the first truncation manner, and decoding the bitstream of the first group of pictures truncated in the second truncation manner to obtain a second total distortion of each frame of the first group of pictures under the second truncation manner;

obtaining the influence weight of each scheduling packet of the first group of pictures on the first group of pictures by using the first total distortion and the second total distortion of each frame of the first group of pictures and the own-frame distortion of each frame caused by the loss of each enhancement layer of the frame; and

determining the priority of each scheduling packet of the first group of pictures based on the influence weight of each scheduling packet on the first group of pictures.
2. The method according to claim 1, wherein the obtaining, for each frame of the first group of pictures, the own-frame distortion of that frame caused by the loss of each enhancement layer of the frame comprises:

during encoding of the first group of pictures, decoding the bitstream of the first group of pictures as it stands before each enhancement layer of each frame is encoded, to obtain the own-frame distortion of that frame caused by the loss of each enhancement layer.
3. The method according to claim 1, wherein the obtaining, for each frame of the first group of pictures, the own-frame distortion of that frame caused by the loss of each enhancement layer of the frame comprises:

after encoding of the first group of pictures is completed, decoding the bitstream of the first group of pictures with each enhancement layer of each frame discarded in turn, to obtain the own-frame distortion of that frame caused by the loss of each enhancement layer.
4. The method according to any one of claims 1 to 3, wherein the obtaining the influence weight of each scheduling packet of the first group of pictures on the first group of pictures by using the first total distortion, the second total distortion and the own-frame distortion of each frame comprises:

obtaining the influence weights between the frames of the first group of pictures by using the first total distortion and the second total distortion of each frame and the own-frame distortion of each frame corresponding to the bitstream size of each frame truncated in the first truncation manner and in the second truncation manner;

obtaining, for each frame of the first group of pictures, the influence weight of that frame on the first group of pictures by using the inter-frame influence weights;

estimating the rate-distortion of each scheduling packet of the first group of pictures by using the own-frame distortion of each frame caused by the loss of each enhancement layer and the bitstream size of each enhancement layer of each frame; and

obtaining the influence weight of each scheduling packet on the first group of pictures by using the influence weight of each frame on the first group of pictures and the rate-distortion of each scheduling packet; or obtaining the influence weight of each scheduling packet on the first group of pictures by using the influence weight of each frame on the first group of pictures together with the rate-distortion and the bitstream size of each scheduling packet.
5. The method according to claim 4, wherein the obtaining the influence weights between the frames of the first group of pictures comprises:

obtaining the first own-frame distortion of each frame of the first group of pictures corresponding to the bitstream size of that frame, by using the own-frame distortion of each frame caused by the loss of each enhancement layer and the bitstream size of each frame truncated in the first truncation manner;

obtaining the second own-frame distortion of each frame of the first group of pictures corresponding to the bitstream size of that frame, by using the own-frame distortion of each frame caused by the loss of each enhancement layer and the bitstream size of each frame truncated in the second truncation manner;

obtaining, for each frame, the total distortion difference between the first total distortion and the second total distortion, and the own-frame distortion difference between the first own-frame distortion and the second own-frame distortion; and obtaining the influence weights between the frames of the first group of pictures by using the obtained total distortion difference and own-frame distortion difference of each frame.
6. The method according to any one of claims 1 to 3, further comprising: applying unequal protection and/or scheduling to each scheduling packet of the first group of pictures according to the priority of each scheduling packet.
7. An apparatus for determining scheduling packet priority, comprising:

an own-frame distortion acquisition module, configured to obtain, for each frame of a first group of pictures, the own-frame distortion of that frame caused by the loss of each enhancement layer of the frame;

a bitstream truncation module, configured to truncate a bitstream of the first group of pictures in a first truncation manner and in a second truncation manner, wherein the total number of scheduling packets of any frame truncated in the first truncation manner is greater than or less than the total number of scheduling packets of that frame truncated in the second truncation manner;

a total distortion acquisition module, configured to decode the bitstream truncated by the bitstream truncation module in the first truncation manner to obtain a first total distortion of each frame under the first truncation manner, and to decode the bitstream truncated by the bitstream truncation module in the second truncation manner to obtain a second total distortion of each frame under the second truncation manner;

a weight acquisition module, configured to obtain the influence weight of each scheduling packet of the first group of pictures on the first group of pictures by using the first total distortion and the second total distortion of each frame obtained by the total distortion acquisition module and the own-frame distortion of each frame, caused by the loss of each enhancement layer, obtained by the own-frame distortion acquisition module; and

a priority determination module, configured to determine the priority of each scheduling packet of the first group of pictures based on the influence weight of each scheduling packet obtained by the weight acquisition module.
8. The apparatus according to claim 7, wherein the own-frame distortion acquisition module comprises:

a first own-frame distortion acquisition submodule, configured to, during encoding of the first group of pictures, decode the bitstream of the first group of pictures as it stands before each enhancement layer of each frame is encoded, to obtain the own-frame distortion of that frame caused by the loss of each enhancement layer; and/or

a second own-frame distortion acquisition submodule, configured to, after encoding of the first group of pictures is completed, decode the bitstream with each enhancement layer of each frame discarded in turn, to obtain the own-frame distortion of that frame caused by the loss of each enhancement layer.
9. The apparatus according to claim 7 or 8, wherein the weight acquisition module comprises:

an inter-frame weight acquisition submodule, configured to obtain the influence weights between the frames of the first group of pictures by using the first total distortion and the second total distortion of each frame obtained by the total distortion acquisition module, together with the own-frame distortion of each frame corresponding to the bitstream size of each frame truncated in the first truncation manner and in the second truncation manner;

a frame weight acquisition submodule, configured to obtain, for each frame of the first group of pictures, the influence weight of that frame on the first group of pictures by using the inter-frame influence weights obtained by the inter-frame weight acquisition submodule;

a rate-distortion acquisition module, configured to estimate the rate-distortion of each scheduling packet of the first group of pictures by using the bitstream size of each enhancement layer of each frame and the own-frame distortion of each frame, caused by the loss of each enhancement layer, obtained by the own-frame distortion acquisition module; and

a packet weight acquisition submodule, configured to obtain the influence weight of each scheduling packet on the first group of pictures by using the influence weight of each frame obtained by the frame weight acquisition submodule and the rate-distortion of each scheduling packet obtained by the rate-distortion acquisition module; or by using the influence weight of each frame obtained by the frame weight acquisition submodule together with the rate-distortion and the bitstream size of each scheduling packet obtained by the rate-distortion acquisition module.
10. The apparatus according to claim 9, wherein the inter-frame weight acquisition submodule comprises:

a first acquisition submodule, configured to obtain the first own-frame distortion of each frame corresponding to the bitstream size of that frame, by using the own-frame distortion of each frame, caused by the loss of each enhancement layer, obtained by the own-frame distortion acquisition module and the bitstream size of each frame truncated by the bitstream truncation module in the first truncation manner;

a second acquisition submodule, configured to obtain the second own-frame distortion of each frame corresponding to the bitstream size of that frame, by using the own-frame distortion of each frame, caused by the loss of each enhancement layer, obtained by the own-frame distortion acquisition module and the bitstream size of each frame truncated by the bitstream truncation module in the second truncation manner;

a third acquisition submodule, configured to obtain, for each frame, the total distortion difference between the first total distortion and the second total distortion, and the own-frame distortion difference between the first own-frame distortion and the second own-frame distortion; and

a fourth acquisition submodule, configured to obtain the influence weights between the frames of the first group of pictures by using the total distortion difference and the own-frame distortion difference of each frame obtained by the third acquisition submodule.
11. The apparatus according to claim 7 or 8, further comprising: a packet processing module, configured to apply unequal protection and/or scheduling to each scheduling packet of the first group of pictures according to the priority of each scheduling packet determined by the priority determination module.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN200910203202.5 | 2009-05-20 | ||
CN 200910203202 CN101895461B (zh) | 2009-05-20 | 2009-05-20 | Method and apparatus for determining scheduling packet priority
Publications (1)
Publication Number | Publication Date |
---|---|
WO2010133158A1 true WO2010133158A1 (zh) | 2010-11-25 |
Family
ID=43104534
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2010/072852 WO2010133158A1 (zh) | 2010-05-17 | Method and apparatus for determining scheduling packet priority
Country Status (2)
Country | Link |
---|---|
CN (1) | CN101895461B (zh) |
WO (1) | WO2010133158A1 (zh) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103269457B (zh) * | 2013-05-15 | 2016-03-30 | Xi'an Jiaotong University | H.264/AVC video packet priority scheduling method based on distortion estimation |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1620782A (zh) * | 2002-02-22 | 2005-05-25 | LinkAir Communications Co., Ltd. | Priority control method in wireless packet data communication
CN101102495A (zh) * | 2007-07-26 | 2008-01-09 | Wuhan University | Region-based video image encoding and decoding method and apparatus
CN101426133A (zh) * | 2007-10-29 | 2009-05-06 | Canon Inc. | Method and communication apparatus for transmitting moving image data
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1180577C (zh) * | 2002-04-15 | 2004-12-15 | Huawei Technologies Co., Ltd. | Method for implementing traffic shaping |
CN101146229B (zh) * | 2007-10-29 | 2010-06-02 | Peking University | SVC video FGS priority scheduling method |
-
2009
- 2009-05-20 CN CN 200910203202 patent/CN101895461B/zh active Active
-
2010
- 2010-05-17 WO PCT/CN2010/072852 patent/WO2010133158A1/zh active Application Filing
Also Published As
Publication number | Publication date |
---|---|
CN101895461B (zh) | 2012-10-17 |
CN101895461A (zh) | 2010-11-24 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 10777361 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 10777361 Country of ref document: EP Kind code of ref document: A1 |