TW201444342A - Complexity aware video encoding for power aware video streaming - Google Patents


Info

Publication number
TW201444342A
TW201444342A (Application TW103107626A)
Authority
TW
Taiwan
Prior art keywords
complexity
cost
encoder
interpolation
deblocking
Prior art date
Application number
TW103107626A
Other languages
Chinese (zh)
Inventor
Yu-Wen He
Markus Kunstner
Yan Ye
Eun Ryu
Original Assignee
Vid Scale Inc
Priority date
Filing date
Publication date
Application filed by Vid Scale Inc filed Critical Vid Scale Inc
Publication of TW201444342A publication Critical patent/TW201444342A/en


Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N19/174 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a slice, e.g. a line of blocks or a group of blocks
    • H04N19/102 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/103 - Selection of coding mode or of prediction mode
    • H04N19/134 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/136 - Incoming video signal characteristics or properties
    • H04N19/14 - Coding unit complexity, e.g. amount of activity or edge presence estimation
    • H04N19/156 - Availability of hardware or computational resources, e.g. encoding based on power-saving criteria

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computing Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

Video encoding and decoding may take into account decoding complexities, e.g., the complexity levels that may be employed by decoders in user devices, such as wireless transmit/receive units (WTRUs). Complexity-aware encoding may take into account decoding complexity that may be related to motion compensation, coding mode, and/or deblocking. An encoder (e.g., a server) may encode a video stream by taking decoding complexity into account. The decoding complexity may include one or more of a motion estimation, a mode decision, or a deblocking option. For example, an encoder may encode video data by receiving an input video signal. The encoder may generate a prediction signal based on the input video signal. The encoder may generate an encoded bitstream as a function of the prediction signal. The prediction signal, the encoded bitstream, or both may be generated as a function of a decoding complexity.

Description

Complexity-Aware Video Encoding for Power-Aware Video Streaming

CROSS-REFERENCE TO RELATED APPLICATIONS

This application claims the benefit of U.S. Provisional Patent Application No. 61/773,528, filed on March 6, 2013.

With improvements in systems-on-chip (SoCs) and wireless network technologies (e.g., 4G and/or WiFi), the computing power (e.g., CPU frequency and/or multiple cores) and/or bandwidth of mobile devices have improved greatly. With the rapid growth in the number of mobile users, for example, there has been a tremendous increase in the generation and delivery of mobile video content. Providing high-quality mobile video services on resource-constrained and diverse mobile devices can be challenging due to variations in display size, processing power, network conditions, and/or battery capacity. The content production, processing, distribution, and consumption methods used at a server (e.g., an encoder) and/or at a user device (e.g., a decoder) may not be suitable.

Systems, methods, and means are provided to implement video encoding and decoding by taking decoding complexity into account, e.g., the complexity levels that may be employed by a decoder in a user device such as a wireless transmit/receive unit (WTRU). Complexity-aware coding may take into account decoding complexity, which may be related to motion compensation, coding mode, and/or deblocking. An encoder, such as a server, may encode a video stream by considering decoding complexity. The decoding complexity may include one or more of a motion estimation, a mode decision, or a deblocking option. In an example encoding, the encoder may receive an input video signal. The encoder may generate a prediction signal based on the input video signal. The encoder may generate an encoded bitstream as a function of the prediction signal. The prediction signal, the encoded bitstream, or both may be generated as a function of the decoding complexity. For example, motion estimation may be performed as a function of the decoding complexity. This may include selecting the motion vector associated with the minimum motion vector cost for a prediction unit. The motion vector cost may be a function of the decoding complexity; for example, the motion vector cost may include a cost element related to decoding complexity. The coding mode may be selected as a function of the decoding complexity, e.g., by determining the cost associated with the coding mode as a function of the decoding complexity. The deblocking process may be selectively enabled or disabled based on a deblocking cost, where the deblocking cost is determined as a function of the decoding complexity.

A detailed description of illustrative embodiments will now be described with reference to the various figures. While this description provides detailed examples of possible implementations, it should be noted that the details are intended to be exemplary and in no way limit the scope of the application. In addition, the figures may illustrate flow charts, which are meant to be exemplary. Other embodiments may be used. The order of messages may be varied where appropriate. Messages may be omitted if not needed, and additional flows may be added.

FIG. 1 illustrates an exemplary Hypertext Transfer Protocol (HTTP)-based video streaming system 100. Service providers such as Netflix, Amazon, etc., may use the Internet infrastructure to deploy their services over the top (OTT). OTT deployment may reduce deployment cost and/or time. In the exemplary video streaming system illustrated in FIG. 1, captured content may be compressed and cut into small segments. The segment period in a streaming system may be, for example, between 2 and 10 seconds. The segments may be stored in an HTTP streaming server and distributed, for example, via a content delivery network (CDN). Information on segment properties, such as bit rate, byte range, and/or uniform resource locator (URL), may be assembled, for example, in a media presentation description (MPD) manifest file. At the beginning of a streaming session, the client 102 may request the MPD file. The client 102 may determine the segments it may need, for example, according to its capabilities (e.g., resolution, available bandwidth, etc.). For example, upon the client's request, the server may send the data to the client. Segments delivered via HTTP may be cached in an HTTP cache server 104, which may allow the segments to be reused for other users. The system 100 may provide streaming services on a large scale.

As more applications use mobile network platforms, the power endurance of mobile devices may become a factor to consider. The power consumption rate may vary for each application. Video decoding may be an application with high power consumption, as it may involve intensive computation and memory access. A video playback application may display pictures at a sufficient brightness level, which may be power hungry. FIG. 3 is a block diagram illustrating an exemplary video playback system 300. The video playback system 300 may include a receiver 302, a decoder 304, and/or a display 306 (e.g., a renderer).

FIG. 4 shows an exemplary block diagram of a block-based single-layer decoder 400 that may receive a video bitstream, such as one produced by the encoder 600 illustrated by way of example in FIG. 6, and may reconstruct the video signal to be displayed. At the video decoder 400, the bitstream may be parsed by an entropy decoder 402. The residual coefficients may be inverse quantized at 404 and inverse transformed at 406 to obtain the reconstructed residual. The coding mode and prediction information may be used to obtain the prediction signal, for example using spatial prediction at 408 and/or temporal prediction at 410. The prediction signal and the reconstructed residual may be added together at 412 to obtain the reconstructed video. The reconstructed video may pass through loop filtering at 414 and may be stored in a reference picture store 416, to be displayed and/or to be used to decode future video signals.

FIG. 6 is a diagram illustrating an exemplary block-based single-layer video encoder 600 that may be used to generate bitstreams for a streaming system such as the system 200 shown by way of example in FIG. 2. The single-layer video encoder 600 may employ, for example, spatial prediction 602 (e.g., which may be referred to as intra prediction) and/or temporal prediction 604 (e.g., which may be referred to as inter prediction and/or motion-compensated prediction) to predict the input video signal and achieve efficient compression. The encoder 600 may have mode decision logic 606 that may select the form of prediction, for example based on criteria or conditions such as a combination of rate and/or distortion considerations. The encoder 600 may transform at 608 and/or quantize at 610 the prediction residual (e.g., the difference signal between the input signal and the prediction signal). The quantized residual, together with the mode information (e.g., intra and/or inter prediction) and prediction information (e.g., motion vectors, reference picture indexes, intra prediction modes, etc.), may be compressed at an entropy coder 612 and packed into the output video bitstream. The encoder 600 may generate the reconstructed video signal, for example, by applying inverse quantization at 614 and an inverse transform at 616 to the quantized residual to obtain the reconstructed residual, and adding the reconstructed residual to the prediction signal at 618. The reconstructed video signal may pass through loop filtering (e.g., deblocking filtering, sample adaptive offset, and/or adaptive loop filtering) at 620 and may be stored in a reference picture store 622. The reconstructed video signal may be used to predict future video signals.

High Efficiency Video Coding (HEVC) may be a video compression standard being developed by the ITU-T Video Coding Experts Group (VCEG) and the ISO/IEC Moving Picture Experts Group (MPEG). HEVC may use block-based hybrid video coding. Encoders and decoders using HEVC may operate as shown in the examples of FIGS. 4 and 6. HEVC may allow the use of video blocks (e.g., large blocks) and may use quadtree partitioning to signal block coding information. A picture or slice may be partitioned into coding tree blocks (CTBs) that may have the same size (e.g., 64x64). A CTB may be partitioned into coding units (CUs) with one or more quadtrees, and a CU may be partitioned into prediction units (PUs) and/or transform units (TUs) with one or more quadtrees. An inter-coded CU and its PUs may use any of a number of partition modes. FIG. 7 shows eight example partition modes. Temporal prediction (e.g., motion compensation) may be applied to reconstruct an inter-coded PU. Depending on the precision of the motion vectors (e.g., up to quarter-pixel in HEVC), linear filtering may be applied to obtain pixel values at fractional positions. In HEVC, the interpolation filters may have seven or eight taps for luma and four taps for chroma. The deblocking filter in HEVC may be content based. Different deblocking filter operations may be applied at the TU and/or PU boundaries depending on a number of factors, such as coding mode difference, motion difference, reference picture difference, pixel value difference, and the like. For entropy coding, HEVC may employ context-adaptive binary arithmetic coding (CABAC) for some or most of the block-level syntax elements. Examples of bins that may be used in CABAC coding may include context-based regular coded bins and bypass coded bins that do not use context. Among the modules of an HEVC decoder, motion compensation, deblocking, and entropy coding may involve power-consuming operations.

FIG. 2 illustrates an exemplary complexity-aware streaming system 200. On the server side, multiple versions of a video, for example with different decoding complexities, may be stored (e.g., at similar bit rates). The decoding complexity information may be embedded in an MPD file 202. When a streaming session begins, the MPD file 202 may be obtained by a client 204 (e.g., a user device such as a WTRU). The client 204 may request a video version (e.g., the video bitstream segments identified in the MPD file 202) with a quality according to the available bit rate and with a decoding complexity level according to the remaining battery power and/or the remaining video playback time. The complexity-aware streaming system 200 may ensure that the entire length of the video can be played back before the device's battery power is fully drained. If decoding the bitstream segments at the current complexity level consumes too much power to sustain playback of the entire length with the remaining power available on the device, the client 204 may switch to bitstream segments with a lower complexity level. If battery power is sufficient, the client 204 may switch to a bitstream with a higher complexity level for better quality. The methods, systems, and means disclosed herein may allow an encoder to generate bitstreams with different decoding complexities.

A streaming system employing, for example, the H.264 coding standard may address power considerations for mobile platforms without considering decoding complexity on the server side. An encoder (e.g., the HEVC reference software encoder, or HM encoder) may improve (e.g., optimize) coding efficiency while maintaining a certain video quality. Video quality may be measured with objective quality metrics, such as peak signal-to-noise ratio (PSNR), the video quality metric (VQM), structural similarity (SSIM), etc., and/or may be measured subjectively by human observers. A generic encoder may not consider optimizing the complexity of the bitstream at the decoder side. Complexity-constrained encoding for mobile phones may take the phone's decoding capability into account. Such a constraint may allow the phone to decode the bitstream in real time. Some power-aware streaming systems may encode video content, for example, at variable complexity levels, resolutions, and/or bit rates. To support a power-aware streaming system, the encoder may consider the rate-distortion performance, the decoding complexity, and/or the power consumption at the decoder side. The methods, systems, and means disclosed herein may provide encoding that can take decoding complexity into account.

FIG. 5 shows exemplary profiling results (e.g., the decoding time of a given task as a percentage of the total decoding time) obtained using, for example, an HEVC decoder implementation (e.g., an optimized HEVC decoder implementation). The results cover various modules involved in decoding, including motion compensation (MC), intra reconstruction (Intra_Rec), inter reconstruction (Inter_Rec), loop filtering (e.g., deblocking (DB) filtering, or LF), entropy decoding, sample adaptive offset (SAO), etc. WVGA (e.g., 832x480) bitstream decoding may be performed on a tablet; example profiling results for WVGA bitstream decoding are shown at 502. HD (e.g., 720p) bitstream decoding may be performed on a personal computer (PC); example profiling results for HD bitstream decoding are shown at 504. As shown in FIG. 5, motion compensation, in-loop filtering, and entropy decoding may take much of the processing time and may consume much of the power.

Complexity-aware coding techniques may take decoding complexity into account, for example the decoding complexity related to motion compensation (MC), coding mode, and/or deblocking filtering. The encoder may adjust its decisions during motion estimation, mode decision, and/or in-loop filtering, for example to take complexity-related performance into account. A balance between performance and complexity may be achieved.

A motion vector may have fractional pixel precision (e.g., up to quarter-pixel in HEVC and/or H.264). Motion compensation may use one or more interpolation filters to obtain pixel values at fractional pixel positions. The complexity of motion compensation may depend on the fractional position of the motion vector (MV). FIG. 8 shows exemplary possible fractional positions, for example where the motion vector has quarter-pixel precision. As shown in FIG. 8, there may be, for example, sixteen cases, which may be set according to the values of the fractional part of the MV. One factor that may affect MC complexity may be the direction in which the interpolation is applied. The interpolation filters may be the same for horizontally and/or vertically symmetric positions. For example, the interpolation filter may be the same for fractional positions 802 and 804, the same for fractional positions 806 and 808, and so on. The computational complexity may remain the same, but the memory access efficiency may differ between vertical and/or horizontal interpolation. For horizontal interpolation, the decoder may fetch multiple reference pixels together, because the memory layout may be organized by lines. For vertical interpolation, the reference pixels may be placed far from each other, e.g., in terms of memory location. This arrangement may affect the fetch speed and may increase the memory bandwidth. The interpolation filter itself may be another factor that may affect MC complexity. If an interpolation filter has symmetric coefficients, the computational complexity may be reduced, because fewer multiplication operations may be needed. Table 1 shows exemplary interpolation filter characteristics for the positions shown in FIG. 8. Table 2 shows exemplary memory sizes for interpolation at the positions in FIG. 8. Based on these characteristics, a 4x4 complexity matrix may be defined for motion estimation at the encoder side. In the complexity matrix, vertical interpolation may be more complex than horizontal interpolation, and asymmetric interpolation may be more complex than symmetric interpolation.

Within the search window, motion estimation (ME) may find, for each prediction unit, the motion vector associated with, for example, the minimum motion cost. Equation (1) may be used to determine the motion vector cost:

    J(mv) = D(mv) + λ · R(mv)        (1)

The motion vector cost may be used in the ME process. As shown in equation (1), the cost may be computed without taking decoding complexity into account. J(mv) may be, for example, the cost of the motion vector mv for the current PU. D(mv) may be the distortion between the motion-compensated prediction signal and the original signal. D(mv) may be measured by metrics such as the sum of absolute differences (SAD), the sum of absolute transformed differences (SATD), and/or the sum of squared errors (SSE). R(mv) may be the number of bits used to encode the value of the motion vector mv. λ may be a lambda factor related to the encoding bit rate. Equation (2) may provide an exemplary motion vector cost calculation in which the encoder may take decoding complexity into account:

    J(mv) = D(mv) + λ1 · R(mv) + λ2 · CM(fx, fy) · Size(PU)        (2)

In addition to the terms of equation (1) that contribute to the motion vector cost, equation (2) may also consider decoding complexity in determining the motion vector cost. In equation (2), (fx, fy) may represent the fractional part of the motion vector mv. CM(fx, fy) may be an element of the complexity matrix CM. Size(PU) may be the size of the prediction unit, for example in pixels. Equation (2) may use the same or different lambda factor values for the second and third terms of the motion vector cost calculation. For example, a first lambda factor λ1 may be used as the multiplier of the rate term R(mv), and a second lambda factor λ2 may be used as the multiplier of the complexity term. The motion estimation may select a trade-off (e.g., the best trade-off) between prediction efficiency and motion compensation complexity. For example, if two motion vectors mv1 and mv2 have similar prediction errors, the encoder may select the motion vector with the lower interpolation complexity (e.g., lower computational complexity and/or lower memory fetch complexity).
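The tie-breaking behavior of equation (2) can be sketched as follows; the distortion, bit count, lambda values, and CM entries are placeholder inputs chosen for illustration, not values from the patent:

```python
def mv_cost(distortion, mv_bits, cm, fx, fy, pu_size, lam1, lam2):
    """J(mv) = D(mv) + lam1*R(mv) + lam2*CM(fx, fy)*Size(PU), per equation (2)."""
    return distortion + lam1 * mv_bits + lam2 * cm[fy][fx] * pu_size

# Illustrative complexity matrix (indexed CM[fy][fx]); vertical interpolation
# is assumed more complex than horizontal, as described in the text.
CM = [[0, 2, 1, 2],
      [4, 8, 6, 8],
      [2, 6, 4, 6],
      [4, 8, 6, 8]]

# Two candidate MVs with identical prediction error and rate: the
# complexity term steers the choice toward the cheaper interpolation case.
cost_h = mv_cost(100.0, 6, CM, fx=1, fy=0, pu_size=256, lam1=4.0, lam2=0.01)
cost_v = mv_cost(100.0, 6, CM, fx=0, fy=1, pu_size=256, lam1=4.0, lam2=0.01)
assert cost_h < cost_v  # horizontal-only interpolation is preferred
```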

As provided in equation (3), CM may be expressed as the Hadamard (e.g., entrywise) product of an interpolation complexity matrix (IC) and a scaling matrix (S):

    CM = IC ∘ S        (3)

IC may be a base complexity matrix that accounts for the interpolation complexity of, for example, the different fractional interpolation positions in FIG. 8. IC may reflect the different complexities at the different fractional positions according to Tables 1 and 2. Table 3 shows an exemplary IC matrix.

Table 4 shows exemplary non-flat S matrices, for example for a medium complexity level (Table 4a) and a low complexity level (Table 4b). If S is flat, it may be filled, for example, with the value 2 for the medium complexity level and the value 16 for the low complexity level.
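The construction of CM in equation (3) can be sketched as an entrywise product. The IC entries below are illustrative stand-ins for Table 3, which is not reproduced in this text; the flat S uses the value 2 described for the medium complexity level:

```python
def hadamard(a, b):
    """Entrywise (Hadamard) product of two equally sized matrices."""
    return [[x * y for x, y in zip(row_a, row_b)]
            for row_a, row_b in zip(a, b)]

IC = [[0, 2, 1, 2],
      [4, 8, 6, 8],
      [2, 6, 4, 6],
      [4, 8, 6, 8]]                     # illustrative base complexities
S_flat_medium = [[2] * 4 for _ in range(4)]  # flat S, medium complexity level

CM = hadamard(IC, S_flat_medium)
assert CM[1][0] == 8  # every IC entry is scaled by 2
```

A non-flat S (Table 4) would weight individual fractional positions differently, e.g., penalizing asymmetric or vertical positions more heavily.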

HEVC may support an inter coding mode (e.g., merge mode). In merge mode, the current PU may reuse motion information from spatially and temporally neighboring PUs. An HEVC encoder may select, from a list of merge candidates (e.g., merge candidates in HEVC may include motion from different spatially and/or temporally neighboring PUs), the candidate (e.g., the best candidate) with the minimum cost estimated using equation (4):

    J(m) = D(m) + λ · R(m)        (4)

A complexity-aware encoder may use the merge candidate cost calculation of equation (5), which may include a term based on complexity considerations:

    J(m) = D(m) + λ1 · R(m) + λ2 · CM(fx_m, fy_m) · Size(PU)        (5)

In equations (4) and (5), D(m) may be the prediction distortion between the original signal and the motion-compensated signal, for example when the motion information of the merge candidate m is applied. R(m) may be the number of bits used to encode the merge candidate m. CM(fx_m, fy_m) and Size(PU) may be, for example, the complexity at the fractional position (fx_m, fy_m) of the merge candidate's motion vector and the size of the prediction unit in pixels.
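The complexity-aware merge selection of equation (5) can be sketched as follows; the candidate list, distortions, and bit counts are hypothetical values for illustration:

```python
def merge_cost(cand, cm, pu_size, lam1, lam2):
    """J(m) = D(m) + lam1*R(m) + lam2*CM(fx, fy)*Size(PU), per equation (5)."""
    return (cand["D"] + lam1 * cand["R"]
            + lam2 * cm[cand["fy"]][cand["fx"]] * pu_size)

def best_merge_candidate(candidates, cm, pu_size, lam1, lam2):
    """Pick the candidate with the minimum complexity-aware cost."""
    return min(candidates, key=lambda c: merge_cost(c, cm, pu_size, lam1, lam2))

CM = [[0, 2, 1, 2], [4, 8, 6, 8], [2, 6, 4, 6], [4, 8, 6, 8]]
candidates = [
    {"name": "spatial_left", "D": 120.0, "R": 1, "fx": 2, "fy": 2},
    {"name": "temporal",     "D": 121.0, "R": 2, "fx": 0, "fy": 0},
]
best = best_merge_candidate(candidates, CM, pu_size=256, lam1=4.0, lam2=0.01)
# Integer-pel motion wins despite slightly higher distortion and rate,
# because it needs no interpolation at the decoder.
assert best["name"] == "temporal"
```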

The mode decision process may select the coding mode based on the rate-distortion cost estimated using equation (6):

    J(mode) = D(mode) + λ_mode · R(mode)        (6)

D(mode) may be the distortion between the signal reconstructed using the coding mode mode and the original signal. λ_mode may be the lambda factor used in the mode decision. R(mode) may be the number of bits used to encode the information of the current coding unit CU if the CU is coded in coding mode mode. A complexity-aware encoder may use a mode cost such as, for example, that provided by equation (7):

    J(mode) = D(mode) + λ_mode · R(mode) + λ_C · C(mode)        (7)

C(mode) may be related to the coding mode complexity. For example, equation (8) may be used to measure C(mode):

    C(mode) = Σ over the PUs of the CU of CM(fx_PU, fy_PU) · Size(PU)        (8)

A factor may be used to balance complexity and compression efficiency; a larger value of this factor may correspond to lower decoding complexity. For example, in equation (8), the inter-picture coding mode complexity may be taken into account because it may be computed by adding up the motion-compensation complexity of the PUs in the CU. Coding mode complexity may also be considered for intra-picture coding modes. Different intra-picture prediction modes may be considered to have different complexity. For example, an intra-picture block coded using the DC prediction mode may be considered less complex than an intra-picture block coded using a directional (angular) prediction mode that may use interpolation of neighboring pixel values. When computing the mode complexity, the complexity difference between inter-picture and intra-picture coding modes may be considered and reflected.
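The mode-complexity estimate just described can be sketched as follows. Since equation (8) is not reproduced in this text, this is only an illustration of the stated idea: inter modes accumulate per-PU motion-compensation complexity (fractional-pel motion requires interpolation), while intra modes receive predefined weights with DC cheaper than angular prediction. The weights and the quarter-pel convention are assumptions, not values from the patent.

```python
# Hedged sketch of a per-CU mode-complexity estimate in the spirit
# of equation (8). All weights are illustrative placeholders.

INTRA_WEIGHT = {"DC": 1.0, "PLANAR": 1.5, "ANGULAR": 2.0}

def interp_complexity(mv):
    """Per-pixel MC complexity for a quarter-pel motion vector:
    each fractional component adds one interpolation pass."""
    frac_x, frac_y = mv[0] % 4, mv[1] % 4
    return 1.0 + (frac_x != 0) + (frac_y != 0)

def mode_complexity(mode, pus):
    """Estimate C_mode for a CU.

    mode -- "inter" or "intra"
    pus  -- for inter: list of (motion_vector, size_in_pixels);
            for intra: list of (intra_mode_name, size_in_pixels)
    """
    if mode == "inter":
        return sum(interp_complexity(mv) * size for mv, size in pus)
    return sum(INTRA_WEIGHT[m] * size for m, size in pus)
```

With weights of this shape, a 16-pixel PU with integer-pel motion scores 16.0, the same PU with fractional motion in both components scores 48.0, and a DC-predicted intra PU scores less than an angular one, reflecting the inter/intra complexity difference the text describes.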

Video coding standards (e.g., HEVC and H.264) may support in-loop deblocking filtering, which can effectively suppress blocking artifacts. Deblocking may consume a significant percentage of decoding time and device power (e.g., as shown in the example in FIG. 5). A complexity-aware encoder may decide whether to enable and/or disable the deblocking process, for example, based on deblocking costs that may include decoding complexity considerations. The costs without and with deblocking may be estimated, for example, as provided by equations (9) and (10), respectively.

P may be the original picture. One reconstructed picture may be the picture reconstructed with deblocking; another may be the picture reconstructed without deblocking. The distortion term may be the distortion between the pictures P and R. The deblocking complexity term may be predefined according to different decoding complexity levels. A λ factor may be used to balance complexity and reconstruction quality. Factors that may affect its value may include, for example, whether the picture is used as a reference picture, the coding bit rate, and the like. If the picture is a reference picture, the value may be small, so that the distortion term in equation (10) may outweigh the complexity term. The prediction structure may include, for example, a hierarchical coding structure 900 as shown in FIG. 9. The bit rates for the layers 902, 904, 906, 908 of the hierarchy may be allocated differently. If pictures in a lower layer are referenced by pictures in a higher layer, a lower quantization parameter may be used for the lower layer, and the bit rate of lower-layer pictures may be higher. The value of the λ factor used for lower-layer pictures may be smaller than the value used for higher-layer pictures. If the cost with deblocking computed by equation (10) is smaller than the cost without deblocking computed via equation (9), the encoder may enable deblocking. Otherwise, the encoder may disable deblocking.
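The deblocking on/off decision described above can be sketched as follows. This is a minimal illustration of the stated comparison, not the patent's implementation: the cost without deblocking is the distortion alone, the cost with deblocking adds a weighted complexity term, and deblocking is enabled only when the latter is smaller. Function names, the squared-error distortion, and the sample values are assumptions.

```python
# Hedged sketch of the deblocking decision in the spirit of
# equations (9) and (10).

def distortion(p, r):
    """Sum of squared errors between original p and reconstruction r."""
    return sum((a - b) ** 2 for a, b in zip(p, r))

def enable_deblocking(p, r_db, r_nodb, c_db, lam_c):
    """Return True if deblocking should be enabled.

    p      -- original picture (flat pixel list)
    r_db   -- reconstruction with deblocking
    r_nodb -- reconstruction without deblocking
    c_db   -- predefined deblocking complexity
    lam_c  -- lambda factor balancing complexity and quality
    """
    cost_nodb = distortion(p, r_nodb)                # equation (9)
    cost_db = distortion(p, r_db) + lam_c * c_db     # equation (10)
    return cost_db < cost_nodb
```

Note how a small λ (e.g., for a reference picture) lets the distortion reduction from deblocking dominate, while a large λ makes the encoder more willing to disable deblocking to save decoder power.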

第10A圖是示例通信系統1000的圖式,在該通信系統1000中一個或多個揭露的實施方式可以被實施。通信系統1000可以是向多個無線使用者提供諸如語音、資料、視訊、訊息傳送、廣播等內容的多重存取系統。通信系統1000可以經由共用包括無線頻寬的系統資源來使得多個無線使用者能夠存取這類內容。例如,通信系統1000可以採用一個或多個頻道存取方法,諸如分碼多重存取(CDMA)、分時多重存取(TDMA)、分頻多重存取(FDMA)、正交FDMA(OFDMA)、單載波FDMA(SC-FDMA)等等。 如第10A圖所示,通信系統1000可以包括無線傳輸/接收單元(WTRU)1002a、1002b、1002c、及/或1002d(這些通常或共同地被稱為WTRU 1002)、無線電存取網路(RAN)1003/1004/1005、核心網路1006/1007/1009、公共交換電話網路(PSTN)1008、網際網路1010以及其他網路1012,但是應該瞭解,所揭露的實施方式設想了任何數量的WTRU、基地台、網路及/或網路元件。每一個WTRU 1002a、1002b、1002c、1002d可以是被配置為在無線環境中操作及/或通信的任一類型的裝置。例如,WTRU 1002a、1002b、1002c、1002d可以被配置為傳輸及/或接收無線信號、並且可以包括無線傳輸/接收單元(WTRU)、行動站、固定或行動用戶單元、呼叫器、行動電話、個人數位助理(PDA)、智慧型電話、膝上型電腦、隨身型易網機、個人電腦、無線感測器、消費類電子裝置等等。 通信系統1000可以包括基地台1014a和基地台1014b。每一個基地台1014a、1014b可以是被配置為經由與WTRU 1002a、1002b、1002c、1002d中的至少一個進行無線介接來促使存取一個或多個通信網路的任一類型的裝置,該網路可以是核心網路1006/1007/1009、網際網路1010及/或網路1012。作為示例,基地台1014a、1014b可以是基地收發站(BTS)、節點B、e節點B、家用節點B、家用e節點B、網站控制器、存取點(AP)、無線路由器等等。雖然每一個基地台1014a、1014b都被描述為是單一元件,但是應該瞭解,基地台1014a、1014b可以包括任何數量的互連基地台及/或網路元件。 基地台1014a可以是RAN 1003/1004/1005的一部分,該RAN 1003/1004/1005也可以包括其他基地台及/或網路元件(未顯示),例如基地台控制器(BSC)、無線電網路控制器(RNC)、中繼節點等等。基地台1014a及/或基地台1014b可以被配置為在稱為胞元(未顯示)的特定地理區域內部傳輸及/或接收無線信號。胞元可被進一步劃分成胞元扇區。例如,與基地台1014a關聯的胞元可分為三個扇區。由此,在一個實施方式中,基地台1014a可以包括三個收發器,例如,每一個收發器對應於胞元的一個扇區。在一實施方式中,基地台1014a可以使用多輸入多輸出(MIMO)技術,由此可以為胞元的每個扇區使用多個收發器。 基地台1014a、1014b可以經由空中介面1015/1016/1017來與一個或多個WTRU 1002a、1002b、1002c、1002d進行通信,該空中介面1015/1016/1017可以是任一適當的無線通訊鏈路(例如,射頻(RF)、微波、紅外線(IR)、紫外線(UV)、可見光等等)。該空中介面1015/1016/1017可以用任一適當的無線電存取技術(RAT)來建立。 更具體地說,如上所述,通信系統1000可以是多重存取系統、並且可以使用一種或多種頻道存取方案,例如CDMA、TDMA、FDMA、OFDMA、SC-FDMA等等。舉例來說,RAN 1003/1004/1005中的基地台1014a與WTRU 1002a、1002b、1002c可以實施諸如通用行動電信系統(UMTS)陸地無線電存取(UTRA)之類的無線電技術,並且該技術可以使用寬頻CDMA(WCDMA)來建立空中介面1015/1016/1017。WCDMA可以包括諸如高速封包存取(HSPA)及/或演進型HSPA(HSPA+)之類的通信協定。HSPA可以包括高速下鏈封包存取(HSDPA)及/或高速上鏈封包存取(HSUPA)。 在一實施方式中,基地台1014a與WTRU 1002a、1002b、1002c可以實施演進型UMTS陸地無線電存取(E-UTRA)之類的無線電技術,該技術可以使用長期演進(LTE)及/或高級LTE(LTE-A)來建立空中介面1015/1016/1017。 在一實施方式中,基地台1014a與WTRU 1002a、1002b、1002c可以實施IEEE 
802.16(例如,全球互通微波存取(WiMAX))、CDMA2000、CDMA2000 1X、CDMA2000 EV-DO、臨時標準2000(IS-2000)、臨時標準95(IS-95)、臨時標準856(IS-856)、全球行動通信系統(GSM)、用於GSM增強資料速率演進(EDGE)、GSM EDGE(GERAN)等無線電存取技術。 作為示例,第10A圖中的基地台1014b可以是無線路由器、家用節點B、家用e節點B或存取點、並且可以使用任一適當的RAT來促成例如營業場所、住宅、交通工具、校園等等的局部區域中的無線連接。在一實施方式中,基地台1014b與WTRU 1002c、1002d可以藉由實施諸如IEEE 802.11之類的無線電技術來建立無線區域網路(WLAN)。在一實施方式中,基地台1014b與WTRU 1002c、1002d可以藉由實施諸如IEEE 802.15之類的無線電技術來建立無線個人區域網路(WPAN)。在一實施方式中,基地台1014b和WTRU 1002c、1002d可以使用基於蜂巢的RAT(例如,WCDMA、CDMA2000、GSM、LTE、LTE-A等等)來建立微微胞元或毫微微胞元。如第10A圖所示,基地台1014b可以直接連接到網際網路1010。由此,基地台1014b可以不需要經由核心網路1006/1007/1009來存取網際網路1010。 RAN 1003/1004/1005可以與核心網路1006/1007/1009通信,該核心網路1006/1007/1009可以是被配置為向一個或多個WTRU 1002a、1002b、1002c、1002d提供語音、資料、應用及/或經由網際網路協定語音(VoIP)服務的任一類型的網路。例如,核心網路1006/1007/1009可以提供呼叫控制、記帳服務、基於移動位置的服務、預付費呼叫、網際網路連接、視訊分配等等、及/或執行使用者驗證之類的高階安全功能。雖然在第10A圖中沒有顯示,但是應該瞭解,RAN 1003/1004/1005及/或核心網路1006/1007/1009可以直接或間接地和使用與RAN 1003/1004/1005使用相同RAT或不同RAT的其他RAN進行通信。例如,除了與可以使用E-UTRA無線電技術的RAN 1003/1004/1005連接之外,核心網路1006/1007/1009還可以與使用GSM無線電技術的RAN(未顯示)進行通信。 核心網路1006/1007/1009也可以充當供WTRU 1002a、1002b、1002c、1002d存取PSTN 1008、網際網路1010及/或其他網路1012的閘道。PSTN 1008可以包括提供簡易老式電話服務(POTS)的電路交換電話網路。網際網路1010可以包括使用公共通信協定的全球性互連電腦網路和裝置系統,該協定可以是傳輸控制協定(TCP)/網際網路協定(IP)網際網路協定族中的TCP、使用者資料包通訊協定(UDP)和IP。網路1012可以包括由其他服務供應者擁有及/或操作的有線或無線通訊網路。例如,網路1012可以包括與一個或多個RAN相連的核心網路,該一個或多個RAN可以與RAN 1003/1004/1005使用相同RAT或不同RAT。 通信系統1000中一些或所有WTRU 1002a、1002b、1002c、1002d可以包括多模能力,例如,WTRU 1002a、1002b、1002c、1002d可以包括在不同無線鏈路上與不同無線網路通信的多個收發器。例如,第10A圖所示的WTRU 1002c可以被配置為與可以使用基於蜂巢的無線電技術的基地台1014a通信,以及與可以使用IEEE 802無線電技術的基地台1014b通信。 第10B圖是例示WTRU 1002的系統圖。如第10B圖所示,WTRU 1002可以包括處理器1018、收發器1020、傳輸/接收元件1022、揚聲器/麥克風1024、鍵盤1026、顯示器/觸控板1028、不可移式記憶體1030、可移式記憶體1032、電源1034、全球定位系統(GPS)晶片組1036以及其他無線麥克風週邊裝置1038。應該瞭解的是,在保持符合實施方式的同時,WTRU 1002還可以包括前述元件的任一子組合。而且,實施方式考慮了基地台1014a和1014b、及/或基地台1014a和1014b可以表示的節點可以包括第10B圖中描繪的及於此描述的某些或所有元件,節點諸如但不限於收發站(BTS)、節點B、網站控制器、存取點(AP)、家用節點B、演進型家用節點B(e節點B)、家用演進型節點B(HeNB)、家用演進型節點B閘道、及代理節點等。 
處理器1018可以是通用處理器、專用處理器、常規處理器、數位訊號處理器(DSP)、多個微處理器、與DSP核心關聯的一或多個微處理器、控制器、微控制器、專用積體電路(ASIC)、現場可程式設計閘陣列(FPGA)電路、其他任一類型的積體電路(IC)、狀態機等等。處理器1018可以執行信號編碼、資料處理、功率控制、輸入/輸出處理及/或其他任何能使WTRU 1002在無線環境中操作的功能。處理器1018可以耦合至收發器1020,收發器1020可以耦合至傳輸/接收元件1022。雖然第10B圖將處理器1018和收發器1020描述為是獨立元件,但是應該瞭解,處理器1018和收發器1020可以集成在一個電子封裝或晶片中。 傳輸/接收元件1022可以被配置為經由空中介面1015/1016/1017來傳輸或接收至或來自基地台(例如,基地台1014a)的信號。舉個例子,在一個實施方式中,傳輸/接收元件1022可以是被配置為傳輸及/或接收RF信號的天線。在一實施方式中,作為示例,傳輸/接收元件1022可以是被配置為傳輸及/或接收IR、UV或可見光信號的發射器/偵測器。在一實施方式中,傳輸/接收元件1022可以被配置為傳輸和接收RF和光信號兩者。應該瞭解的是,傳輸/接收元件1022可以被配置為傳輸及/或接收無線信號的任一組合。 此外,雖然在第10B圖中將傳輸/接收元件1022描述為是單一元件,但是WTRU 1002可以包括任何數量的傳輸/接收元件1022。更具體地說,WTRU 1002可以使用MIMO技術。因此,在一個實施方式中,WTRU 1002可以包括經由空中介面1015/1016/1017來傳輸和接收無線信號的兩個或更多個傳輸/接收元件1022(例如,多個天線)。 收發器1020可以被配置為對傳輸/接收元件1022將要傳輸的信號進行調變、以及對傳輸/接收元件1022接收的信號進行解調。如上所述,WTRU 1002可以具有多模能力。因此,收發器1020可以包括允許WTRU 1002經由諸如UTRA和IEEE 802.11之類的多種RAT來進行通信的多個收發器。 WTRU 1002的處理器1018可以耦合至揚聲器/麥克風1024、鍵盤1026及/或顯示器/觸控板1028(例如,液晶顯示器(LCD)顯示單元或有機發光二極體(OLED)顯示單元)、並且可以接收來自這些元件的使用者輸入資料。處理器1018還可以向揚聲器/麥克風1024、鍵盤1026及/或顯示器/觸控板1028輸出使用者資料。此外,處理器1018可以從任一適當的記憶體(例如不可移式記憶體1030及/或可移式記憶體1032)中存取資訊、以及將資料儲存在這些記憶體。該不可移式記憶體1030可以包括隨機存取記憶體(RAM)、唯讀記憶體(ROM)、硬碟或是其他任一類型的記憶體儲存裝置。可移式記憶體1032可以包括用戶身份模組(SIM)卡、記憶條、安全數位(SD)記憶卡等等。在其他實施方式中,處理器1018可以從那些並非實體上位於WTRU 1002上的記憶體存取資訊、以及將資料存入這些記憶體,其中舉例來說,該記憶體可以位於伺服器或家用電腦(未顯示)上。 處理器1018可以接收來自電源1034的電力、並且可以被配置為分配及/或控制用於WTRU 1002中的其他元件的電力。電源1034可以是為WTRU 1002供電的任一適當的裝置。舉例來說,電源134可以包括一個或多個乾電池(如鎳鎘(Ni-Cd)、鎳鋅(Ni-Zn)、鎳氫(NiMH)、鋰離子(Li-ion)等等)、太陽能電池、燃料電池等等。 處理器1018還可以與GPS晶片組1036耦合,該晶片組可以被配置為提供與WTRU 1002的目前位置相關的位置資訊(例如,經度和緯度)。作為來自GPS晶片組1036的資訊的補充或替代,WTRU 1002可以經由空中介面1015/1016/1017接收來自基地台(例如,基地台1014a、1014b)的位置資訊、及/或根據從兩個或多個附近基地台接收的信號時序來確定其位置。應該瞭解的是,在保持符合實施方式的同時,WTRU 1002可以用任何適當的位置確定方法來獲取位置資訊。 處理器1018也可以耦合到其他週邊裝置1038,這其中可以包括提供附加特徵、功能及/或有線或無線連接的一個或多個軟體及/或硬體模組。例如,週邊裝置1038可以包括加速度計、電子指南針、衛星收發器、數位相機(用於照片和視訊)、通用序列匯流排(USB)埠、振動裝置、電視收發器、免持耳機、藍芽R模組、調頻(FM)無線電單元、數位音樂播放機、媒體播放機、視訊遊戲機模組、網際網路瀏覽器等等。 第10C圖是根據一個實施方式的RAN 1003和核心網路1006的系統圖。如上所述,RAN 1003可以使用UTRA無線電技術並經由空中介面1015來與WTRU 
1002a、1002b、1002c進行通信。RAN 1003還可以與核心網路1006通信。如第10C圖所示,RAN 1003可以包括節點B 1040a、1040b、1040c,其中每一個節點B都可以包括經由空中介面1015以與WTRU 1002a、1002b、1002c通信的一個或多個收發器。節點B 1040a、1040b、1040c中的每一個都可以與RAN 1003內的特定胞元(未顯示)相關聯。RAN 1003還可以包括RNC 1042a、1042b。應該瞭解的是,在保持與實施方式相符的同時,RAN 1003可以包括任何數量的節點B和RNC。 如第10C圖所示,節點B 1040a、1040b可以與RNC 1042a進行通信。此外,節點B 1040c可以與RNC 1042b進行通信。節點B 1040a、1040b、1040c可以經由Iub介面來與各自的RNC 1042a、1042b進行通信。RNC 1042a、1042b可以經由Iur介面彼此通信。每一個RNC 1042a、1042b都可以被配置為控制與之相連的各自的節點B 1040a、1040b、1040c。另外,每一個RNC 1042a、1042b可被配置為執行或支援其他功能,例如外環功率控制、負載控制、准許控制、封包排程、切換控制、巨集分集、安全功能、資料加密等等。 第10C圖所示的核心網路1006可以包括媒體閘道(MGW)1044、行動交換中心(MSC)1046、服務GPRS支援節點(SGSN)1048、及/或閘道GPRS支援節點(GGSN)1050。雖然前述每個元件都被描述為是核心網路1006的一部分,但是應該瞭解,核心網路操作者之外的其他實體也可以擁有及/或操作這其中的任一元件。 RAN 1003中的RNC 1042a可以經由IuCS介面被連接到核心網路1006中的MSC 1046。MSC 1046可以連接到MGW 1044。MSC 1046和MGW 1044可以為WTRU 1002a、1002b、1002c提供至PSTN 1008之類的電路切換式網路的存取,以便促成WTRU 1002a、1002b、1002c與傳統陸線通信裝置間的通信。 RAN 1003中的RNC 1042a還可以經由IuPS介面連接到核心網路1006中的SGSN 1048。該SGSN 1048可以連接到GGSN 1050。SGSN 1048和GGSN 1050可以為WTRU 1002a、1002b、1002c提供至網際網路1010之類的封包交換網路的存取,以便促成WTRU 1002a、1002b、1002c與IP賦能裝置之間的通信。 如上所述,核心網路1006還可以連接到網路1012,該網路1012可以包括由其他服務供應者擁有及/或操作的其他有線或無線網路。 第10D圖是根據一個實施方式的RAN 1004以及核心網路1006的系統圖。如上所述,RAN 1004可以使用E-UTRA無線電技術並經由空中介面1016來與WTRU 1002a、1002b、1002c進行通信。此外,RAN 1004還可以與核心網路1007通信。 RAN 1004可以包括e節點B 1060a、1060b、1060c,但是應該瞭解,在保持與實施方式相符的同時,RAN 1004可以包括任何數量的e節點B。每一個e節點B 1060a、1060b、1060c可以包括一個或多個收發器,以便經由空中介面1016來與WTRU 1002a、1002b、1002c通信。在一個實施方式中,e節點B 1060a、1060b、1060c可以實施MIMO技術。由此,舉例來說,e節點B 1060a可以使用多個天線來向WTRU 1002a傳輸無線信號、以及接收來自WTRU 1002a的無線信號。 每一個e節點B 1060a、1060b、1060c可以與特定胞元(未顯示)相關聯、並且可以被配置為處理無線電資源管理決策、切換決策、上鏈及/或下鏈中的使用者排程等等。如第10D圖所示,e節點B 1060a、1060b、1060c可以經由X2介面彼此通信。 第10D圖所示的核心網路1007可以包括移動性管理閘道(MME)1062、服務閘道1064以及封包資料網路(PDN)閘道1066。雖然上述每一個元件都被描述為是核心網路1007的一部分,但是應該瞭解,核心網路操作者之外的其他實體同樣可以擁有及/或操作這其中的任一元件 MME 1062可以經由S1介面來與RAN 1004中的每一個e節點B 1060a、1060b、1060c相連、並且可以充當控制節點。例如,MME 1062可以負責認證WTRU 1002a、1002b、1002c的使用者、承載啟動/停用、在WTRU 1002a、1002b、1002c的初始連結期間選擇特定服務閘道等等。該MME 
1062還可以提供控制平面功能,以便在RAN 1004與使用了GSM或WCDMA之類的其他無線電技術的其他RAN(未顯示)之間執行切換。 服務閘道1064可以經由S1介面被連接到RAN 1004中的每一個e節點B 1060a、1060b、1060c。該服務閘道1064通常可以路由和轉發至/來自WTRU 1002a、1002b、1002c的使用者資料封包。服務閘道1064還可以執行其他功能,例如在e節點B間的切換過程中錨定使用者平面、在下鏈資料可供WTRU 1002a、1002b、1002c使用時觸發傳呼、管理和儲存WTRU 1002a、1002b、1002c的上下文等等。 服務閘道1064還可以連接到PDN閘道1066,可以為WTRU 1002a、1002b、1002c提供至諸如網際網路1010之類的封包交換網路的存取,以便促成WTRU 1002a、1002b、1002c與IP賦能裝置之間的通信。 核心網路1007可以促成與其他網路的通信。例如,核心網路1007可以為WTRU 1002a、1002b、1002c提供至PSTN 1008之類的電路切換式網路的存取,以便促成WTRU 1002a、1002b、1002c與傳統陸線通信裝置之間的通信。作為示例,核心網路1007可以包括IP閘道(例如,IP多媒體子系統(IMS)伺服器)或與之通信,其中該IP閘道充當了核心網路1007與PSTN 1008之間的介面。此外,核心網路1007可以為WTRU 1002a、1002b、1002c提供至網路1012的存取,其中該網路可以包括其他服務供應者擁有及/或操作的其他有線或無線網路。 第10E圖是根據一個實施方式的RAN 1005和核心網路1009的系統圖。RAN 1005可以是使用IEEE 802.16無線電技術以經由空中介面1017來與WTRU 1002a、1002b、1002c通信的存取服務網路(ASN)。如以下進一步論述的那樣,WTRU 1002a、1002b、1002c,RAN 1005以及核心網路1009的不同功能實體之間的通信鏈路可被定義為參考點。 如第10E圖所示,RAN 1005可以包括基地台1080a、1080b、1080c以及ASN閘道1082,但是應該瞭解,在保持與實施方式相符的同時,RAN 1005可以包括任何數量的基地台及ASN閘道。每一個基地台1080a、1080b、1080c可以與RAN 1005中的特定胞元(未顯示)相關聯,並且每個基地台可以包括一個或多個收發器,以便經由空中介面1017來與WTRU 1002a、1002b、1002c進行通信。在一個實施方式中,基地台1080a、1080b、1080c可以實施MIMO技術。由此,舉例來說,基地台1080a可以使用多個天線來向WTRU 102a傳輸無線信號、以及接收來自WTRU 1002a的無線信號。基地台1080a、1080b、1080c還可以提供移動性管理功能,例如切換觸發、隧道建立、無線電資源管理、訊務分類、服務品質(QoS)策略執行等等。ASN閘道1082可以充當訊務聚合點、並且可以負責傳呼、用戶設定檔快取、至核心網路1009的路由等等。 WTRU 1002a、1002b、1002c與RAN 1005之間的空中介面1017可被定義為是實施IEEE 802.16規範的R1參考點。另外,每一個WTRU 1002a、1002b、1002c可以與核心網路1009建立邏輯介面(未顯示)。WTRU 1002a、1002b、1002c與核心網路1009之間的邏輯介面可被定義為R2參考點,該參考點可以用於認證、授權、IP主機配置管理及/或移動性管理。 每一個基地台1080a、1080b、1080c之間的通信鏈路可被定義為R8參考點,該參考點包括了用於促成WTRU切換以及基地台之間的資料傳送的協定。基地台1080a、1080b、1080c與ASN閘道1082之間的通信鏈路可被定義為R6參考點。該R6參考點可以包括用於促成基於與每一個WTRU 1002a、1002b、1002c相關聯的移動性事件的移動性管理的協定。 如第10E圖所示,RAN 1005可以連接到核心網路1009。RAN 1005與核心網路1009之間的通信鏈路可以被定義為R3參考點,作為示例,該參考點包括了用於促成資料傳送和移動性管理能力的協定。核心網路1009可以包括行動IP本地代理(MIP-HA)1084、認證、授權、記帳(AAA)伺服器1086以及閘道1088。雖然前述每個元件都被描述為是核心網路1009的一部分,但是應該瞭解,核心網路操作者以外的實體也可以擁有及/或操作這其中的任一元件。 MIP-HA可以負責IP位址管理、並且可以使WTRU 
1002a、1002b、1002c能在不同的ASN及/或不同的核心網路之間漫遊。MIP-HA 1084可以為WTRU 1002a、1002b、1002c提供至網際網路1010之類的封包交換網路的存取,以便促成WTRU 1002a、1002b、1002c與IP賦能裝置之間的通信。AAA伺服器1086可以負責使用者認證以及支援使用者服務。閘道1088可以促成與其他網路的互通。例如,閘道1088可以為WTRU 1002a、1002b、1002c提供至PSTN 1008之類的電路切換式網路的存取,以便促成WTRU 1002a、1002b、1002c與傳統陸線通信裝置之間的通信。另外,閘道1088可以為WTRU 1002a、1002b、1002c提供至網路1012的存取,其中該網路可以包括由其他服務供應者擁有及/或操作的其他有線或無線網路。 雖然在第10E圖中沒有顯示,但是應該瞭解,RAN 1005可以連接到其他ASN,並且核心網路1009可以連接到其他核心網路。RAN 1005與其他ASN之間的通信鏈路可被定義為R4參考點,該參考點可以包括用於協調WTRU 1002a、1002b、1002c在RAN 1005與其他ASN之間的移動的協定。核心網路1009與其他核心網路之間的通信鏈路可以被定義為R5參考點,該參考點可以包括用於促成歸屬核心網路與被訪核心網路之間互通的協定。 於此揭露的過程和手段可以以任何組合來應用,可以應用至其他無線技術,和用於其他服務。 WTRU可以涉及實體裝置的識別碼、或如用戶相關的識別碼(如MSISDN、SIR PRL等)的使用者的識別碼。WTRU可以涉及基於應用的識別碼,例如,每個應用可以使用的使用者名稱。 雖然在上文中描述了採用特定組合的特徵和元素,但是本領域中具有通常知識者將會瞭解,每一個特徵既可以單獨使用,也可以與其他特徵和元素進行任一組合。此外,這裡描述的方法可以在引入到電腦可讀媒體中並供電腦或處理器運行的電腦程式、軟體或韌體中實施。電腦可讀媒體的示例包括電信號(經由有線或無線連接傳送)以及電腦可讀儲存媒體。電腦可讀儲存媒體的示例包括但不限於唯讀記憶體(ROM)、隨機存取記憶體(RAM)、暫存器、快取記憶體、半導體存放裝置、如內部硬碟和可移式磁片之類的磁媒體、磁光媒體、以及CD-ROM碟片和數位多功能光碟(DVD)之類的光學媒體。與軟體相關聯的處理器可以用於實施在WTRU、UE、終端、基地台、RNC或任一主機電腦中使用的射頻收發器。FIG. 10A is a diagram of an example communication system 1000 in which one or more disclosed embodiments may be implemented. Communication system 1000 can be a multiple access system that provides content, such as voice, data, video, messaging, broadcast, etc., to multiple wireless users. Communication system 1000 can enable multiple wireless users to access such content via sharing system resources including wireless bandwidth. For example, communication system 1000 can employ one or more channel access methods, such as code division multiple access (CDMA), time division multiple access (TDMA), frequency division multiple access (FDMA), orthogonal FDMA (OFDMA). Single carrier FDMA (SC-FDMA) and the like. As shown in FIG. 10A, communication system 1000 can include wireless transmit/receive units (WTRUs) 1002a, 1002b, 1002c, and/or 1002d (these are commonly or collectively referred to as WTRUs 1002), radio access networks (RAN). 
1003/1004/1005, core network 1006/1007/1009, Public Switched Telephone Network (PSTN) 1008, Internet 1010, and other networks 1012, but it should be understood that the disclosed embodiments contemplate any number of WTRU, base station, network, and/or network element. Each of the WTRUs 1002a, 1002b, 1002c, 1002d may be any type of device configured to operate and/or communicate in a wireless environment. For example, the WTRUs 1002a, 1002b, 1002c, 1002d may be configured to transmit and/or receive wireless signals, and may include wireless transmit/receive units (WTRUs), mobile stations, fixed or mobile subscriber units, pagers, mobile phones, individuals Digital assistants (PDAs), smart phones, laptops, portable Internet devices, personal computers, wireless sensors, consumer electronics devices, and more. Communication system 1000 can include base station 1014a and base station 1014b. Each of the base stations 1014a, 1014b may be any type of device configured to facilitate access to one or more communication networks via wireless interfacing with at least one of the WTRUs 1002a, 1002b, 1002c, 1002d, the network The road can be the core network 1006/1007/1009, the Internet 1010, and/or the network 1012. By way of example, base stations 1014a, 1014b may be base transceiver stations (BTS), Node Bs, eNodeBs, home Node Bs, home eNodeBs, website controllers, access points (APs), wireless routers, and the like. While each base station 1014a, 1014b is described as a single component, it should be understood that base stations 1014a, 1014b can include any number of interconnected base stations and/or network elements. The base station 1014a may be part of the RAN 1003/1004/1005, which may also include other base stations and/or network elements (not shown), such as a base station controller (BSC), radio network. Controller (RNC), relay node, and so on. 
Base station 1014a and/or base station 1014b may be configured to transmit and/or receive wireless signals within a particular geographic area known as a cell (not shown). The cell can be further divided into cell sectors. For example, a cell associated with base station 1014a can be divided into three sectors. Thus, in one embodiment, base station 1014a may include three transceivers, for example, each transceiver corresponding to one sector of a cell. In an embodiment, base station 1014a may use multiple input multiple output (MIMO) technology whereby multiple transceivers may be used for each sector of a cell. The base stations 1014a, 1014b may communicate with one or more WTRUs 1002a, 1002b, 1002c, 1002d via an empty intermediation plane 1015/1016/1017, which may be any suitable wireless communication link ( For example, radio frequency (RF), microwave, infrared (IR), ultraviolet (UV), visible light, etc.). The null intermediate plane 1015/1016/1017 can be established using any suitable radio access technology (RAT). More specifically, as noted above, communication system 1000 can be a multiple access system and can utilize one or more channel access schemes such as CDMA, TDMA, FDMA, OFDMA, SC-FDMA, and the like. For example, base station 1014a and WTRUs 1002a, 1002b, 1002c in RAN 1003/1004/1005 may implement a radio technology such as Universal Mobile Telecommunications System (UMTS) Terrestrial Radio Access (UTRA) and the technology may be used Broadband CDMA (WCDMA) is used to establish an empty intermediate plane 1015/1016/1017. WCDMA may include communication protocols such as High Speed Packet Access (HSPA) and/or Evolved HSPA (HSPA+). HSPA may include High Speed Downlink Packet Access (HSDPA) and/or High Speed Uplink Packet Access (HSUPA). 
In an embodiment, base station 1014a and WTRUs 1002a, 1002b, 1002c may implement a radio technology such as Evolved UMTS Terrestrial Radio Access (E-UTRA), which may use Long Term Evolution (LTE) and/or LTE-Advanced (LTE-A) to establish an empty mediator 1015/1016/1017. In an embodiment, base station 1014a and WTRUs 1002a, 1002b, 1002c may implement IEEE 802.16 (eg, Worldwide Interoperability for Microwave Access (WiMAX)), CDMA2000, CDMA2000 1X, CDMA2000 EV-DO, Provisional Standard 2000 (IS-2000) ), Temporary Standard 95 (IS-95), Provisional Standard 856 (IS-856), Global System for Mobile Communications (GSM), Radio Access Technology for GSM Enhanced Data Rate Evolution (EDGE), GSM EDGE (GERAN). As an example, base station 1014b in FIG. 10A may be a wireless router, home node B, home eNodeB or access point, and may use any suitable RAT to facilitate, for example, a business location, a home, a vehicle, a campus, etc. Wireless connections in local areas such as. In an embodiment, base station 1014b and WTRUs 1002c, 1002d may establish a wireless local area network (WLAN) by implementing a radio technology such as IEEE 802.11. In an embodiment, base station 1014b and WTRUs 1002c, 1002d may establish a wireless personal area network (WPAN) by implementing a radio technology such as IEEE 802.15. In an embodiment, base station 1014b and WTRUs 1002c, 1002d may use a cellular based RAT (eg, WCDMA, CDMA2000, GSM, LTE, LTE-A, etc.) to establish picocells or femtocells. As shown in FIG. 10A, the base station 1014b can be directly connected to the Internet 1010. Thus, base station 1014b may not need to access Internet 1010 via core network 1006/1007/1009. The RAN 1003/1004/1005 can communicate with a core network 1006/1007/1009, which can be configured to provide voice, data, to one or more WTRUs 1002a, 1002b, 1002c, 1002d, Any type of network that applies and/or via Voice over Internet Protocol (VoIP) services. 
For example, the core network 1006/1007/1009 can provide call control, billing services, mobile location based services, prepaid calling, internet connectivity, video distribution, etc., and/or high level security for user authentication. Features. Although not shown in FIG. 10A, it should be appreciated that the RAN 1003/1004/1005 and/or the core network 1006/1007/1009 may use the same RAT or a different RAT as the RAN 1003/1004/1005, either directly or indirectly. The other RANs communicate. For example, in addition to being connected to the RAN 1003/1004/1005, which may use E-UTRA radio technology, the core network 1006/1007/1009 may also be in communication with a RAN (not shown) that uses GSM radio technology. The core network 1006/1007/1009 may also serve as a gateway for the WTRUs 1002a, 1002b, 1002c, 1002d to access the PSTN 1008, the Internet 1010, and/or other networks 1012. The PSTN 1008 may include a circuit switched telephone network that provides Plain Old Telephone Service (POTS). The Internet 1010 may include a globally interconnected computer network and device system using a public communication protocol, which may be TCP in the Transmission Control Protocol (TCP)/Internet Protocol (IP) Internet Protocol suite, use Data Packet Protocol (UDP) and IP. Network 1012 may include a wired or wireless communication network that is owned and/or operated by other service providers. For example, network 1012 can include a core network connected to one or more RANs that can use the same RAT or a different RAT as RAN 1003/1004/1005. Some or all of the WTRUs 1002a, 1002b, 1002c, 1002d in the communication system 1000 may include multi-mode capabilities, for example, the WTRUs 1002a, 1002b, 1002c, 1002d may include multiple transceivers that communicate with different wireless networks over different wireless links. For example, the WTRU 1002c shown in FIG. 
10A can be configured to communicate with a base station 1014a that can use a cellular-based radio technology, and with a base station 1014b that can use an IEEE 802 radio technology. FIG. 10B is a system diagram illustrating the WTRU 1002. As shown in FIG. 10B, the WTRU 1002 may include a processor 1018, a transceiver 1020, a transmit/receive element 1022, a speaker/microphone 1024, a keyboard 1026, a display/touchpad 1028, a non-removable memory 1030, and a removable type. Memory 1032, power supply 1034, global positioning system (GPS) chipset 1036, and other wireless microphone peripherals 1038. It should be appreciated that the WTRU 1002 may also include any sub-combination of the aforementioned elements while remaining consistent with the embodiments. Moreover, embodiments contemplate that base stations 1014a and 1014b, and/or base stations 1014a and 1014b may represent nodes that may include some or all of the elements depicted in FIG. 10B and described herein, such as but not limited to transceiver stations (BTS), Node B, Website Controller, Access Point (AP), Home Node B, Evolved Home Node B (eNode B), Home Evolved Node B (HeNB), Home Evolved Node B Gate, And proxy nodes, etc. The processor 1018 can be a general purpose processor, a special purpose processor, a conventional processor, a digital signal processor (DSP), a plurality of microprocessors, one or more microprocessors associated with the DSP core, a controller, a microcontroller Dedicated integrated circuit (ASIC), field programmable gate array (FPGA) circuit, any other type of integrated circuit (IC), state machine, and so on. The processor 1018 can perform signal coding, data processing, power control, input/output processing, and/or any other functionality that enables the WTRU 1002 to operate in a wireless environment. The processor 1018 can be coupled to a transceiver 1020 that can be coupled to the transmit/receive element 1022. Although FIG. 
10B depicts processor 1018 and transceiver 1020 as separate components, it should be appreciated that processor 1018 and transceiver 1020 can be integrated into an electronic package or wafer. The transmit/receive element 1022 can be configured to transmit or receive signals to or from a base station (e.g., base station 1014a) via the null intermediate plane 1015/1016/1017. For example, in one embodiment, the transmit/receive element 1022 can be an antenna configured to transmit and/or receive RF signals. In an embodiment, as an example, the transmit/receive element 1022 can be a transmitter/detector configured to transmit and/or receive IR, UV, or visible light signals. In an embodiment, the transmit/receive element 1022 can be configured to transmit and receive both RF and optical signals. It should be appreciated that the transmit/receive element 1022 can be configured to transmit and/or receive any combination of wireless signals. Moreover, although transmission/reception element 1022 is depicted as a single element in FIG. 10B, WTRU 1002 may include any number of transmission/reception elements 1022. More specifically, the WTRU 1002 may use MIMO technology. Thus, in one embodiment, the WTRU 1002 may include two or more transmit/receive elements 1022 (e.g., multiple antennas) that transmit and receive wireless signals via the null intermediaries 1015/1016/1017. The transceiver 1020 can be configured to modulate a signal to be transmitted by the transmission/reception element 1022 and to demodulate a signal received by the transmission/reception element 1022. As noted above, the WTRU 1002 may have multi-mode capabilities. Thus, transceiver 1020 can include multiple transceivers that allow WTRU 1002 to communicate via multiple RATs, such as UTRA and IEEE 802.11. 
The processor 1018 of the WTRU 1002 can be coupled to a speaker/microphone 1024, a keyboard 1026, and/or a display/touchpad 1028 (eg, a liquid crystal display (LCD) display unit or an organic light emitting diode (OLED) display unit), and can Receive user input from these components. The processor 1018 can also output user profiles to the speaker/microphone 1024, the keyboard 1026, and/or the display/trackpad 1028. In addition, the processor 1018 can access information from any suitable memory (eg, the non-removable memory 1030 and/or the removable memory 1032) and store the data in the memory. The non-removable memory 1030 can include random access memory (RAM), read only memory (ROM), hard disk, or any other type of memory storage device. The removable memory 1032 can include a Subscriber Identity Module (SIM) card, a memory stick, a secure digital (SD) memory card, and the like. In other embodiments, the processor 1018 can access information from, and store data in, memory that is not physically located on the WTRU 1002, where the memory can be located, for example, on a server or a home computer. (not shown) on. The processor 1018 can receive power from the power source 1034 and can be configured to allocate and/or control power for other elements in the WTRU 1002. Power supply 1034 can be any suitable device that powers WTRU 1002. For example, the power source 134 may include one or more dry cells (such as nickel-cadmium (Ni-Cd), nickel-zinc (Ni-Zn), nickel-hydrogen (NiMH), lithium-ion (Li-ion), etc.), solar cells. , fuel cells, etc. The processor 1018 can also be coupled to a GPS chipset 1036 that can be configured to provide location information (eg, longitude and latitude) related to the current location of the WTRU 1002. 
Additionally or alternatively to the information from the GPS chipset 1036, the WTRU 1002 may receive location information from base stations (e.g., base stations 1014a, 1014b) via the null mediator 1015/1016/1017, and/or based on two or more The timing of the signals received by nearby base stations determines their position. It should be appreciated that the WTRU 1002 can obtain location information using any suitable location determination method while remaining consistent with the implementation. Processor 1018 may also be coupled to other peripheral devices 1038, which may include one or more software and/or hardware modules that provide additional features, functionality, and/or wired or wireless connections. For example, peripheral device 1038 can include an accelerometer, an electronic compass, a satellite transceiver, a digital camera (for photos and video), a universal serial bus (USB) port, a vibrating device, a television transceiver, hands-free headset, Bluetooth R Modules, FM radio units, digital music players, media players, video game console modules, Internet browsers, and more. Figure 10C is a system diagram of RAN 1003 and core network 1006, in accordance with one embodiment. As described above, the RAN 1003 can communicate with the WTRUs 1002a, 1002b, 1002c via the null plane 1015 using UTRA radio technology. The RAN 1003 can also communicate with the core network 1006. As shown in FIG. 10C, the RAN 1003 can include Node Bs 1040a, 1040b, 1040c, each of which can include one or more transceivers that communicate with the WTRUs 1002a, 1002b, 1002c via the null plane 1015. Each of Node Bs 1040a, 1040b, 1040c can be associated with a particular cell (not shown) within RAN 1003. The RAN 1003 may also include RNCs 1042a, 1042b. It should be appreciated that the RAN 1003 may include any number of Node Bs and RNCs while remaining consistent with the implementation. As shown in FIG. 10C, Node Bs 1040a, 1040b can communicate with RNC 1042a. 
Additionally, Node B 1040c can communicate with RNC 1042b. Node Bs 1040a, 1040b, 1040c can communicate with respective RNCs 1042a, 1042b via an Iub interface. The RNCs 1042a, 1042b can communicate with each other via the Iur interface. Each RNC 1042a, 1042b can be configured to control a respective Node B 1040a, 1040b, 1040c connected thereto. In addition, each RNC 1042a, 1042b can be configured to perform or support other functions, such as outer loop power control, load control, admission control, packet scheduling, handover control, macro diversity, security functions, data encryption, and the like. The core network 1006 shown in FIG. 10C may include a media gateway (MGW) 1044, a mobile switching center (MSC) 1046, a serving GPRS support node (SGSN) 1048, and/or a gateway GPRS support node (GGSN) 1050. While each of the foregoing elements is described as being part of core network 1006, it should be understood that other entities other than the core network operator may also own and/or operate any of these elements. The RNC 1042a in the RAN 1003 can be connected to the MSC 1046 in the core network 1006 via the IuCS interface. The MSC 1046 can be connected to the MGW 1044. MSC 1046 and MGW 1044 may provide WTRUs 1002a, 1002b, 1002c with access to a circuit switched network, such as PSTN 1008, to facilitate communication between WTRUs 1002a, 1002b, 1002c and conventional landline communication devices. The RNC 1042a in the RAN 1003 can also be connected to the SGSN 1048 in the core network 1006 via an IuPS interface. The SGSN 1048 can be connected to the GGSN 1050. The SGSN 1048 and GGSN 1050 may provide WTRUs 1002a, 1002b, 1002c with access to a packet switched network, such as the Internet 1010, to facilitate communication between the WTRUs 1002a, 1002b, 1002c and IP-enabled devices. As noted above, core network 1006 can also be coupled to network 1012, which can include other wired or wireless networks that are owned and/or operated by other service providers. 
Figure 10D is a system diagram of the RAN 1004 and the core network 1007, in accordance with one embodiment. As described above, the RAN 1004 can communicate with the WTRUs 1002a, 1002b, 1002c via the air interface 1016 using E-UTRA radio technology. The RAN 1004 can also communicate with the core network 1007.

The RAN 1004 may include eNodeBs 1060a, 1060b, 1060c, although it should be appreciated that the RAN 1004 may include any number of eNodeBs while remaining consistent with an embodiment. Each eNodeB 1060a, 1060b, 1060c may include one or more transceivers to communicate with the WTRUs 1002a, 1002b, 1002c via the air interface 1016. In one embodiment, the eNodeBs 1060a, 1060b, 1060c may implement MIMO technology. Thus, for example, the eNodeB 1060a may use multiple antennas to transmit wireless signals to, and receive wireless signals from, the WTRU 1002a. Each eNodeB 1060a, 1060b, 1060c may be associated with a particular cell (not shown) and may be configured to handle radio resource management decisions, handover decisions, scheduling of users in the uplink and/or downlink, and the like. As shown in FIG. 10D, the eNodeBs 1060a, 1060b, 1060c can communicate with each other via an X2 interface.

The core network 1007 shown in FIG. 10D may include a mobility management gateway (MME) 1062, a serving gateway 1064, and a packet data network (PDN) gateway 1066. While each of the foregoing elements is described as being part of the core network 1007, it should be understood that entities other than the core network operator may also own and/or operate any of these elements.

The MME 1062 may be connected to each of the eNodeBs 1060a, 1060b, 1060c in the RAN 1004 via an S1 interface and may act as a control node. For example, the MME 1062 may be responsible for authenticating the users of the WTRUs 1002a, 1002b, 1002c, bearer activation/deactivation, selecting a particular serving gateway during the initial attach of the WTRUs 1002a, 1002b, 1002c, and the like.
The MME 1062 may also provide a control-plane function for switching between the RAN 1004 and other RANs (not shown) that employ other radio technologies, such as GSM or WCDMA.

The serving gateway 1064 can be connected to each of the eNodeBs 1060a, 1060b, 1060c in the RAN 1004 via the S1 interface. The serving gateway 1064 can generally route and forward user data packets to/from the WTRUs 1002a, 1002b, 1002c. The serving gateway 1064 can also perform other functions, such as anchoring user planes during inter-eNodeB handovers, triggering paging when downlink data is available for the WTRUs 1002a, 1002b, 1002c, managing and storing contexts of the WTRUs 1002a, 1002b, 1002c, and the like.

The serving gateway 1064 can also be connected to the PDN gateway 1066, which can provide the WTRUs 1002a, 1002b, 1002c with access to a packet-switched network, such as the Internet 1010, to facilitate communication between the WTRUs 1002a, 1002b, 1002c and IP-enabled devices.

The core network 1007 can facilitate communication with other networks. For example, the core network 1007 may provide the WTRUs 1002a, 1002b, 1002c with access to a circuit-switched network, such as the PSTN 1008, to facilitate communication between the WTRUs 1002a, 1002b, 1002c and conventional landline communication devices. As an example, the core network 1007 can include, or can communicate with, an IP gateway (e.g., an IP Multimedia Subsystem (IMS) server) that serves as an interface between the core network 1007 and the PSTN 1008. In addition, the core network 1007 can provide the WTRUs 1002a, 1002b, 1002c with access to the network 1012, which can include other wired or wireless networks owned and/or operated by other service providers.

Figure 10E is a system diagram of the RAN 1005 and the core network 1009, in accordance with one embodiment. The RAN 1005 may be an Access Service Network (ASN) that communicates with the WTRUs 1002a, 1002b, 1002c via the air interface 1017 using IEEE 802.16 radio technology.
As discussed further below, the communication links between the different functional entities of the WTRUs 1002a, 1002b, 1002c, the RAN 1005, and the core network 1009 may be defined as reference points.

As shown in FIG. 10E, the RAN 1005 may include base stations 1080a, 1080b, 1080c and an ASN gateway 1082, although it should be appreciated that the RAN 1005 may include any number of base stations and ASN gateways while remaining consistent with an embodiment. Each base station 1080a, 1080b, 1080c may be associated with a particular cell (not shown) in the RAN 1005, and each may include one or more transceivers to communicate with the WTRUs 1002a, 1002b, 1002c via the air interface 1017. In one embodiment, the base stations 1080a, 1080b, 1080c may implement MIMO technology. Thus, for example, the base station 1080a can use multiple antennas to transmit wireless signals to, and receive wireless signals from, the WTRU 1002a. The base stations 1080a, 1080b, 1080c can also provide mobility management functions, such as handover triggering, tunnel establishment, radio resource management, traffic classification, quality of service (QoS) policy enforcement, and the like. The ASN gateway 1082 can serve as a traffic aggregation point and can be responsible for paging, caching of subscriber profiles, routing to the core network 1009, and the like.

The air interface 1017 between the WTRUs 1002a, 1002b, 1002c and the RAN 1005 may be defined as an R1 reference point that implements the IEEE 802.16 specification. In addition, each of the WTRUs 1002a, 1002b, 1002c can establish a logical interface (not shown) with the core network 1009. The logical interface between the WTRUs 1002a, 1002b, 1002c and the core network 1009 can be defined as an R2 reference point, which can be used for authentication, authorization, IP host configuration management, and/or mobility management.
The communication link between each of the base stations 1080a, 1080b, 1080c can be defined as an R8 reference point, which includes protocols for facilitating WTRU handovers and the transfer of data between base stations. The communication link between the base stations 1080a, 1080b, 1080c and the ASN gateway 1082 can be defined as an R6 reference point. The R6 reference point can include protocols for facilitating mobility management based on mobility events associated with each of the WTRUs 1002a, 1002b, 1002c.

As shown in FIG. 10E, the RAN 1005 can be connected to the core network 1009. The communication link between the RAN 1005 and the core network 1009 can be defined as an R3 reference point, which, by way of example, includes protocols for facilitating data transfer and mobility management capabilities. The core network 1009 may include a Mobile IP Home Agent (MIP-HA) 1084, an Authentication, Authorization, Accounting (AAA) server 1086, and a gateway 1088. While each of the foregoing elements is described as being part of the core network 1009, it should be understood that entities other than the core network operator may also own and/or operate any of these elements.

The MIP-HA 1084 may be responsible for IP address management and may enable the WTRUs 1002a, 1002b, 1002c to roam between different ASNs and/or different core networks. The MIP-HA 1084 may provide the WTRUs 1002a, 1002b, 1002c with access to a packet-switched network, such as the Internet 1010, to facilitate communication between the WTRUs 1002a, 1002b, 1002c and IP-enabled devices. The AAA server 1086 can be responsible for user authentication and for supporting user services. The gateway 1088 can facilitate interworking with other networks. For example, the gateway 1088 can provide the WTRUs 1002a, 1002b, 1002c with access to a circuit-switched network, such as the PSTN 1008, to facilitate communication between the WTRUs 1002a, 1002b, 1002c and conventional landline communication devices.
In addition, the gateway 1088 can provide the WTRUs 1002a, 1002b, 1002c with access to the network 1012, which can include other wired or wireless networks that are owned and/or operated by other service providers.

Although not shown in FIG. 10E, it should be understood that the RAN 1005 can be connected to other ASNs and that the core network 1009 can be connected to other core networks. The communication link between the RAN 1005 and other ASNs may be defined as an R4 reference point, which may include protocols for coordinating the mobility of the WTRUs 1002a, 1002b, 1002c between the RAN 1005 and the other ASNs. The communication link between the core network 1009 and other core networks may be defined as an R5 reference point, which may include protocols for facilitating interworking between home core networks and visited core networks.

The processes and instrumentalities disclosed herein may apply in any combination and may be applied to other wireless technologies and other services. A WTRU may refer to an identity of the physical device, or to an identity of the user, such as a subscription-related identity (e.g., MSISDN, SIP URI, etc.). A WTRU may also refer to an application-based identity, such as a user name that may be used per application.

Although features and elements are described above in particular combinations, one of ordinary skill in the art will appreciate that each feature or element can be used alone or in any combination with the other features and elements. In addition, the methods described herein may be implemented in a computer program, software, or firmware incorporated in a computer-readable medium for execution by a computer or processor. Examples of computer-readable media include electronic signals (transmitted over wired or wireless connections) and computer-readable storage media.
Examples of computer-readable storage media include, but are not limited to, read-only memory (ROM), random access memory (RAM), registers, cache memory, semiconductor memory devices, magnetic media such as internal hard disks and removable disks, magneto-optical media, and optical media such as CD-ROM disks and digital versatile disks (DVDs). A processor in association with software may be used to implement a radio frequency transceiver for use in a WTRU, UE, terminal, base station, RNC, or any host computer.

100 ... Video streaming system
102, 204 ... Client
104 ... Cache server
200 ... Streaming system
202 ... MPD file
300 ... Video playback system
302 ... Receiver
304 ... Decoder
306 ... Display
400 ... Single-layer decoder
402 ... Entropy decoder
416, 622 ... Reference picture store
600 ... Single-layer video encoder
602 ... Spatial prediction
604 ... Temporal prediction
606 ... Mode decision logic
612 ... Entropy encoder
802, 804, 806, 808 ... Fractional pixel positions
900 ... Hierarchical coding structure
902, 904, 906, 908 ... Layers
1000 ... Communication system
1002, 1002a, 1002b, 1002c, 1002d ... WTRU
1003, 1004, 1005 ... RAN
1006, 1007, 1009 ... Core network
1008 ... PSTN
1010 ... Internet
1012 ... Other networks
1014a, 1014b, 1080a, 1080b, 1080c ... Base station
1015, 1016, 1017 ... Air interface
1018 ... Processor
1020 ... Transceiver
1022 ... Transmit/receive element
1024 ... Speaker/microphone
1026 ... Keypad
1028 ... Display/touchpad
1030 ... Non-removable memory
1032 ... Removable memory
1034 ... Power source
1036 ... GPS chipset
1038 ... Peripherals
1040a, 1040b, 1040c ... Node B
1042a, 1042b ... RNC
1044 ... MGW
1046 ... MSC
1048 ... SGSN
1050 ... GGSN
1062 ... MME
1064 ... Serving gateway
1066 ... PDN gateway
1084 ... MIP-HA
1086 ... AAA server
1088 ... Gateway
AAA ... Authentication, Authorization, Accounting
ASN ... Access Service Network
DB ... Deblocking
GGSN ... Gateway GPRS Support Node
GPS ... Global Positioning System
HEVC ... High Efficiency Video Coding
HTTP ... Hypertext Transfer Protocol
Intra_rec ... Intra-picture reconstruction
Inter_Rec ... Inter-picture reconstruction
IP ... Internet Protocol
Iub, IuCS, IuPS, Iur, S1, X2 ... Interfaces
MC ... Motion compensation
MGW ... Media gateway
MIP-HA ... Mobile IP Home Agent
MME ... Mobility management gateway
MPD ... Media Presentation Description
MSC ... Mobile Switching Center
PDN ... Packet Data Network
PSTN ... Public Switched Telephone Network
R1, R3, R6, R8 ... Reference points
RAN ... Radio Access Network
SAO ... Sample adaptive offset
SGSN ... Serving GPRS Support Node
WTRU ... Wireless transmit/receive unit

Figure 1 shows an exemplary Hypertext Transfer Protocol (HTTP)-based video streaming system.
Figure 2 shows an exemplary complexity-aware streaming architecture.
Figure 3 shows various modules of an exemplary video playback system.
Figure 4 shows a block diagram of an exemplary video decoder.
Figure 5 shows exemplary profiling results for High Efficiency Video Coding (HEVC) decoding of, for example, a WVGA sequence on a tablet and a high-definition (HD) sequence on a personal computer (PC).
Figure 6 shows a block diagram of an exemplary video encoder.
Figure 7 shows exemplary prediction unit (PU) modes in HEVC.
Figure 8 shows exemplary pixel positions for luma motion compensation (MC) in HEVC.
Figure 9 shows an exemplary hierarchical coding structure in HEVC.
Figure 10A is a system diagram of an example communication system in which one or more disclosed embodiments may be implemented.
Figure 10B is a system diagram of an example wireless transmit/receive unit (WTRU) that may be used within the communication system shown in Figure 10A.
Figure 10C is a system diagram of an example radio access network and an example core network that may be used within the communication system shown in Figure 10A.
Figure 10D is a system diagram of an example radio access network and an example core network that may be used within the communication system shown in Figure 10A.
Figure 10E is a system diagram of an example radio access network and an example core network that may be used within the communication system shown in Figure 10A.


Claims (20)

1. A method of encoding video data, the method comprising: receiving an input video signal; generating a prediction signal based on the input video signal; and generating an encoded bitstream according to the prediction signal, wherein at least one of the prediction signal and the encoded bitstream is generated according to a decoding complexity.

2. The method of claim 1, wherein generating the prediction signal comprises performing a motion estimation according to the decoding complexity.

3. The method of claim 2, wherein performing the motion estimation comprises selecting a motion vector associated with a minimum motion vector cost of a prediction unit.

4. The method of claim 3, further comprising determining a motion vector cost according to an interpolation complexity.

5. The method of claim 4, wherein determining the motion vector cost comprises considering whether at least one of a horizontal interpolation, a vertical interpolation, an asymmetric interpolation, and a symmetric interpolation is performed.

6. The method of claim 1, wherein generating the prediction signal comprises selecting an encoding mode according to the decoding complexity.

7. The method of claim 6, wherein selecting the encoding mode comprises determining a cost associated with the encoding mode according to the decoding complexity.

8. The method of claim 7, wherein determining the cost associated with the encoding mode comprises determining an encoding mode complexity according to an interpolation complexity.
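Claims 2 through 8 describe folding a decoding-complexity term into the encoder's motion vector and mode costs, where the interpolation complexity depends on whether horizontal-only, vertical-only, or two-dimensional fractional-pel interpolation is needed. The sketch below illustrates one way such a cost could be formed; the weights, the per-case interpolation costs, and the function names are illustrative assumptions, not values or APIs from the patent.

```python
# Hypothetical sketch of a complexity-aware motion vector cost
# (claims 3-5). All numeric weights below are invented for
# illustration; the patent specifies no particular values.

# Relative interpolation cost implied by a quarter-pel motion vector:
# integer-pel needs none, one-dimensional (horizontal- or vertical-only)
# interpolation is cheaper than full two-dimensional interpolation.
INTERP_COST = {
    (False, False): 0.0,  # integer-pel: no interpolation
    (True,  False): 1.0,  # horizontal interpolation only
    (False, True):  1.0,  # vertical interpolation only
    (True,  True):  2.5,  # both: two-stage (2-D) interpolation
}

LAMBDA_R = 0.85   # rate weight (illustrative)
LAMBDA_C = 0.30   # decoding-complexity weight (illustrative)

def interpolation_complexity(mv_x, mv_y):
    """Complexity of the fractional-pel interpolation implied by a
    motion vector (mv_x, mv_y) given in quarter-pel units."""
    needs_h = (mv_x % 4) != 0
    needs_v = (mv_y % 4) != 0
    return INTERP_COST[(needs_h, needs_v)]

def mv_cost(distortion, rate_bits, mv_x, mv_y):
    """J = D + lambda_r * R + lambda_c * C: the usual rate-distortion
    cost extended with an interpolation-complexity term."""
    return (distortion
            + LAMBDA_R * rate_bits
            + LAMBDA_C * interpolation_complexity(mv_x, mv_y))

def best_motion_vector(candidates):
    """Select the candidate with the minimum complexity-aware cost
    (claim 3). candidates: iterable of (distortion, rate_bits, mv_x, mv_y)."""
    return min(candidates, key=lambda c: mv_cost(*c))
```

With this cost, two candidates of equal distortion and rate are broken in favor of the one that avoids fractional-pel interpolation, which is how such an encoder could bias its search toward lower decoding complexity. The same weighted-sum idea extends to the mode decision of claims 6-8, with the mode's interpolation complexity folded into its cost.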
9. The method of claim 1, wherein generating the encoded bitstream comprises selectively enabling or disabling a deblocking process according to a deblocking cost, the deblocking cost being determined according to the decoding complexity.

10. The method of claim 9, wherein the deblocking cost is predefined for a decoding complexity level.

11. An apparatus configured to encode video data, the apparatus comprising: a memory; and an encoder, wherein the apparatus is configured, at least in part, to: receive an input video signal; generate a prediction signal based on the input video signal; and generate an encoded bitstream according to the prediction signal, wherein at least one of the prediction signal and the encoded bitstream is generated according to a decoding complexity.

12. The apparatus of claim 11, wherein the encoder is configured to generate the prediction signal at least by performing a motion estimation according to the decoding complexity.

13. The apparatus of claim 12, wherein the encoder is configured to perform the motion estimation at least by selecting a motion vector associated with a minimum motion vector cost of a prediction unit.

14. The apparatus of claim 13, wherein the encoder is further configured to determine a motion vector cost according to an interpolation complexity.

15. The apparatus of claim 14, wherein the encoder is configured to determine the motion vector cost at least by considering whether at least one of a horizontal interpolation, a vertical interpolation, an asymmetric interpolation, and a symmetric interpolation is performed.

16. The apparatus of claim 11, wherein the encoder is configured to generate the prediction signal at least by selecting an encoding mode according to the decoding complexity.

17. The apparatus of claim 16, wherein the encoder is configured to select the encoding mode at least by determining a cost associated with the encoding mode according to the decoding complexity.

18. The apparatus of claim 17, wherein the encoder is configured to determine the cost associated with the encoding mode at least by determining an encoding mode complexity according to an interpolation complexity.

19. The apparatus of claim 11, wherein the encoder is configured to generate the encoded bitstream at least by selectively enabling or disabling a deblocking process according to a deblocking cost, the deblocking cost being determined according to the decoding complexity.

20. The apparatus of claim 19, wherein the deblocking cost is predefined for a decoding complexity level.
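Claims 9-10 and 19-20 describe selectively enabling or disabling the deblocking filter based on a deblocking cost that is predefined per decoding-complexity level. A minimal sketch of that decision is below; the level names, the cost table, and the budget model are invented for illustration (the patent does not specify them), and the returned flag is only modeled on the semantics of HEVC's slice-level deblocking disable flag.

```python
# Hypothetical sketch of cost-driven deblocking control (claims 9-10,
# 19-20): the deblocking cost is predefined per decoding-complexity
# level (claim 10), and the encoder disables deblocking when that cost
# no longer fits the remaining complexity budget. All numbers are
# made up for illustration.

# Predefined deblocking cost per decoding-complexity level.
DEBLOCK_COST = {"low": 5.0, "medium": 12.0, "high": 25.0}

def deblocking_enabled(level, complexity_budget, complexity_used):
    """Enable deblocking only if its predefined cost for this
    complexity level still fits within the remaining budget."""
    return complexity_used + DEBLOCK_COST[level] <= complexity_budget

def encode_picture_flags(level, complexity_budget, complexity_used):
    """Return the flag the encoder would signal for the current
    picture/slice (cf. HEVC's slice-level deblocking disable flag)."""
    enable = deblocking_enabled(level, complexity_budget, complexity_used)
    return {"deblocking_filter_disabled": not enable}
```

Under this model, a stream encoded for a "high" complexity level carries a larger deblocking cost, so the filter is the first tool to be switched off when the complexity budget tightens.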
TW103107626A 2013-03-06 2014-03-06 Complexity aware video encoding for power aware video streaming TW201444342A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US201361773528P 2013-03-06 2013-03-06

Publications (1)

Publication Number Publication Date
TW201444342A (en) 2014-11-16

Family

ID=50440814

Family Applications (1)

Application Number Title Priority Date Filing Date
TW103107626A TW201444342A (en) 2013-03-06 2014-03-06 Complexity aware video encoding for power aware video streaming

Country Status (2)

Country Link
TW (1) TW201444342A (en)
WO (1) WO2014138325A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10834384B2 (en) 2017-05-15 2020-11-10 City University Of Hong Kong HEVC with complexity control based on dynamic CTU depth range adjustment

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US20090304085A1 (en) * 2008-06-04 2009-12-10 Novafora, Inc. Adaptive Deblocking Complexity Control Apparatus and Method

Also Published As

Publication number Publication date
WO2014138325A1 (en) 2014-09-12
