TW202029121A - Motion estimation through input perturbation - Google Patents

Motion estimation through input perturbation

Info

Publication number
TW202029121A
Authority
TW
Taiwan
Prior art keywords
motion vector
image data
subset
frame
motion
Prior art date
Application number
TW108144969A
Other languages
Chinese (zh)
Inventor
塞繆爾 班杰明 荷姆斯
馬汀 雷恩斯區勒
強那森 維克斯
羅伯特 約翰 凡里寧
Original Assignee
Qualcomm Incorporated (美商高通公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qualcomm Incorporated (美商高通公司)
Publication of TW202029121A

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/50 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N 19/503 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N 19/51 Motion estimation or motion compensation
    • H04N 19/513 Processing of motion vectors
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/005 General purpose rendering architectures
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N 19/169 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N 19/186 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding, the unit being a colour or a chrominance component
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/42 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
    • H04N 19/436 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation, using parallelised computational arrangements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10024 Color image

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Computer Graphics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Image Generation (AREA)

Abstract

The present disclosure relates to methods and devices for motion estimation, which may be performed by a GPU. In one aspect, the GPU may generate at least one first motion vector in a first subset of a frame, the first motion vector providing a first motion estimation for image data in the first subset of the frame. The GPU may also perturb the image data. The GPU may then generate at least one second motion vector based on the perturbed image data, the second motion vector providing a second motion estimation for the image data. Moreover, the GPU may compare the first motion vector and the second motion vector. Further, the GPU may determine at least one third motion vector for the motion estimation of the image data based on the comparison between the first motion vector and the second motion vector.

Description

Motion estimation through input perturbation

The present disclosure relates generally to processing systems and, more particularly, to one or more techniques for graphics processing in processing systems.

Computing devices often utilize a graphics processing unit (GPU) to accelerate the rendering of graphical data for display. Such computing devices may include, for example, computer workstations, mobile phones such as so-called smartphones, embedded systems, personal computers, tablet computers, and video game consoles. A GPU executes a graphics processing pipeline that includes a plurality of processing stages which operate together to execute graphics processing commands and output a frame. A central processing unit (CPU) may control the operation of the GPU by issuing one or more graphics processing commands to the GPU. Modern-day CPUs are typically capable of executing multiple applications concurrently, each of which may need to utilize the GPU during execution. A device that provides content for visual presentation on a display generally includes a graphics processing unit (GPU).

Typically, a GPU of a device is configured to perform every process in a graphics processing pipeline. However, with the advent of wireless communication and the streaming of content (e.g., game content or any other content that is rendered using a GPU), a need for distributed graphics processing has arisen. For example, a need has arisen to offload processing performed by the GPU of a first device (e.g., a client device, such as a game console, a virtual reality device, or any other device) to a second device (e.g., a server, such as a server hosting a mobile game).

The following presents a simplified summary of one or more aspects in order to provide a basic understanding of such aspects. This summary is not an extensive overview of all contemplated aspects, and is intended neither to identify key or critical elements of all aspects nor to delineate the scope of any or all aspects. Its sole purpose is to present some concepts of one or more aspects in a simplified form as a prelude to the more detailed description that is presented later.

In an aspect of the disclosure, a method, a computer-readable medium, and a first apparatus are provided. The apparatus may be a GPU. In one aspect, the GPU may generate at least one first motion vector in a first subset of a frame, the first motion vector providing a first motion estimation for image data in the first subset of the frame. The GPU may also perturb the image data in the first subset of the frame. In addition, the GPU may generate at least one second motion vector based on the perturbed image data, the second motion vector providing a second motion estimation for the image data in the first subset of the frame. Further, the GPU may compare the first motion vector and the second motion vector. Moreover, the GPU may determine, based on the comparison between the first motion vector and the second motion vector, at least one third motion vector for the motion estimation of the image data in the first subset of the frame.
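The sequence of operations in this aspect (estimate a first vector, perturb the input, estimate a second vector, compare, then decide on a third vector) can be illustrated with a short sketch. The disclosure does not specify the search method, the form of the perturbation, or the comparison rule; the exhaustive block-matching search, the uniform sample offset, and the agreement threshold below are illustrative assumptions, shown here on 1-D rows of samples rather than 2-D blocks.

```python
def estimate_motion(block, reference):
    """Toy stand-in for a GPU motion search: return the offset in the
    reference row at which the block matches best (minimum absolute error)."""
    best_offset, best_cost = 0, float("inf")
    for offset in range(len(reference) - len(block) + 1):
        window = reference[offset:offset + len(block)]
        cost = sum(abs(b - r) for b, r in zip(block, window))
        if cost < best_cost:
            best_offset, best_cost = offset, cost
    return best_offset

def perturb(block, delta=1):
    """Perturb the input image data slightly (here, a uniform brightness bump)."""
    return [sample + delta for sample in block]

def robust_motion_vector(block, reference, threshold=0):
    """Determine a third motion vector from the comparison of the vector for
    the original data (mv1) and the vector for the perturbed data (mv2)."""
    mv1 = estimate_motion(block, reference)
    mv2 = estimate_motion(perturb(block), reference)
    # If the two estimates agree, the match is treated as stable;
    # otherwise fall back to a zero vector rather than trust it.
    return mv1 if abs(mv1 - mv2) <= threshold else 0
```

In this sketch a motion vector that survives a small input perturbation is kept, while an estimate that flips under perturbation is discarded as unreliable.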

The details of one or more examples of the disclosure are set forth in the accompanying drawings and the description below. Other features, objects, and advantages of the disclosure will be apparent from the description and drawings, and from the claims.

This application claims the benefit of U.S. Non-Provisional Application No. 16/215,547, filed December 10, 2018, the entire content of which is incorporated herein by reference.

Various aspects of systems, apparatuses, computer program products, and methods are described more fully hereinafter with reference to the accompanying drawings. This disclosure may, however, be embodied in many different forms and should not be construed as limited to any specific structure or function presented throughout this disclosure. Rather, these aspects are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art. Based on the teachings herein, one skilled in the art should appreciate that the scope of the disclosure is intended to cover any aspect of the systems, apparatuses, computer program products, and methods disclosed herein, whether implemented independently of, or combined with, other aspects of the disclosure. For example, an apparatus may be implemented, or a method may be practiced, using any number of the aspects set forth herein. In addition, the scope of the disclosure is intended to cover such an apparatus or method which is practiced using other structure, functionality, or structure and functionality in addition to or other than the various aspects of the disclosure set forth herein. Any aspect disclosed herein may be embodied by one or more elements of a claim.

Although various aspects are described herein, many variations and permutations of these aspects fall within the scope of this disclosure. Although some potential benefits and advantages of aspects of this disclosure are mentioned, the scope of this disclosure is not intended to be limited to particular benefits, uses, or objectives. Rather, aspects of this disclosure are intended to be broadly applicable to different wireless technologies, system configurations, networks, and transmission protocols, some of which are illustrated by way of example in the figures and in the following description. The detailed description and drawings are merely illustrative of this disclosure rather than limiting; the scope of this disclosure is defined by the appended claims and equivalents thereof.

Several aspects are presented with reference to various apparatuses and methods. These apparatuses and methods are described in the following detailed description and illustrated in the accompanying drawings by various blocks, components, circuits, processes, algorithms, and the like (collectively referred to as "elements"). These elements may be implemented using electronic hardware, computer software, or any combination thereof. Whether such elements are implemented as hardware or software depends upon the particular application and the design constraints imposed on the overall system.

By way of example, an element, or any portion of an element, or any combination of elements may be implemented as a "processing system" that includes one or more processors (which may also be referred to as processing units). Examples of processors include microprocessors, microcontrollers, graphics processing units (GPUs), general purpose GPUs (GPGPUs), central processing units (CPUs), application processors, digital signal processors (DSPs), reduced instruction set computing (RISC) processors, systems-on-chip (SoCs), baseband processors, application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), programmable logic devices (PLDs), state machines, gated logic, discrete hardware circuits, and other suitable hardware configured to perform the various functionality described throughout this disclosure. One or more processors in the processing system may execute software. Software shall be construed broadly to mean instructions, instruction sets, code, code segments, program code, programs, subprograms, software components, applications, software applications, software packages, routines, subroutines, objects, executables, threads of execution, procedures, functions, and the like, whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise. The term application may refer to software. As described herein, one or more techniques may refer to an application (i.e., software) being configured to perform one or more functions. In such examples, the application may be stored on a memory (e.g., on-chip memory of a processor, system memory, or any other memory). Hardware described herein, such as a processor, may be configured to execute the application. For example, the application may be described as including code that, when executed by the hardware, causes the hardware to perform one or more techniques described herein. As an example, the hardware may access the code from a memory and execute the code accessed from the memory to perform one or more techniques described herein. In some examples, components are identified in this disclosure. In such examples, the components may be hardware, software, or a combination thereof. The components may be separate components or sub-components of a single component.

Accordingly, in one or more examples described herein, the functions described may be implemented in hardware, software, or any combination thereof. If implemented in software, the functions may be stored on, or encoded as one or more instructions or code on, a computer-readable medium. Computer-readable media includes computer storage media. Storage media may be any available media that can be accessed by a computer. By way of example, and not limitation, such computer-readable media can comprise random-access memory (RAM), read-only memory (ROM), electrically erasable programmable ROM (EEPROM), optical disk storage, magnetic disk storage, other magnetic storage devices, combinations of the aforementioned types of computer-readable media, or any other medium that can be used to store computer-executable code in the form of instructions or data structures that can be accessed by a computer.

In general, this disclosure describes techniques for having a distributed graphics processing pipeline across multiple devices, improving the coding of graphical content, and/or reducing the load of a processing unit (i.e., any processing unit configured to perform one or more techniques described herein, such as a graphics processing unit (GPU)). For example, this disclosure describes graphics processing techniques for use in communication systems. Other example benefits are described throughout this disclosure.

As used herein, the term "coder" may generically refer to an encoder and/or a decoder. For example, a reference to a "content coder" may include a reference to a content encoder and/or a content decoder. Similarly, as used herein, the term "coding" may generically refer to encoding and/or decoding. As used herein, the terms "encode" and "compress" may be used interchangeably. Similarly, the terms "decode" and "decompress" may be used interchangeably.

As used herein, instances of the term "content" may refer to the term "video", "graphical content", or "image", and vice versa. This is true regardless of whether the terms are being used as adjectives, nouns, or other parts of speech. For example, a reference to a "content coder" may include a reference to a "video coder", a "graphical content coder", or an "image coder"; and a reference to a "video coder", a "graphical content coder", or an "image coder" may include a reference to a "content coder". As another example, a reference to a processing unit providing content to a content coder may include a reference to the processing unit providing graphical content to a video encoder. In some examples, as used herein, the term "graphical content" may refer to content produced by one or more processes of a graphics processing pipeline. In some examples, as used herein, the term "graphical content" may refer to content produced by a processing unit configured to perform graphics processing. In some examples, as used herein, the term "graphical content" may refer to content produced by a graphics processing unit.

As used herein, instances of the term "content" may refer to graphical content or display content. In some examples, as used herein, the term "graphical content" may refer to content produced by a processing unit configured to perform graphics processing. For example, the term "graphical content" may refer to content produced by one or more processes of a graphics processing pipeline. In some examples, as used herein, the term "graphical content" may refer to content produced by a graphics processing unit. In some examples, as used herein, the term "display content" may refer to content produced by a processing unit configured to perform display processing. In some examples, as used herein, the term "display content" may refer to content produced by a display processing unit. Graphical content may be processed to become display content. For example, a graphics processing unit may output graphical content, such as a frame, to a buffer (which may be referred to as a frame buffer). A display processing unit may read the graphical content, such as one or more frames, from the buffer, and perform one or more display processing techniques thereon to produce display content. For example, a display processing unit may be configured to perform composition on one or more rendered layers to produce a frame. As another example, a display processing unit may be configured to compose, blend, or otherwise combine two or more layers together into a single frame. A display processing unit may be configured to perform scaling (e.g., upscaling or downscaling) of a frame. In some examples, a frame may refer to a layer. In other examples, a frame may refer to two or more layers that have already been blended together to form the frame (i.e., the frame includes two or more layers, and the frame that includes two or more layers may subsequently be blended).
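As a loose illustration of the composition and scaling operations attributed to the display processing unit above, the following sketch blends two layers into a single frame and then upscales it. The single-alpha "over" blend and nearest-neighbour upscaling on 1-D rows of samples are simplifying assumptions for illustration, not the display processing unit's actual algorithms.

```python
def composite(bottom, top, alpha):
    """Blend (compose) a top layer over a bottom layer into a single
    frame: each output sample mixes the two layers by the blend factor."""
    return [round(alpha * t + (1 - alpha) * b) for b, t in zip(bottom, top)]

def scale_up(frame, factor):
    """Upscale a 1-D row of frame samples by an integer factor using
    nearest-neighbour repetition of each sample."""
    return [sample for sample in frame for _ in range(factor)]
```

A real display processing unit would perform these steps per pixel, typically with per-pixel alpha and higher-quality filtering, but the data flow (layers in, one composed and scaled frame out) is the same.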

As referenced herein, a first component (e.g., a processing unit) may provide content, such as graphical content, to a second component (e.g., a content coder). In some examples, the first component may provide content to the second component by storing the content in a memory accessible to the second component. In such examples, the second component may be configured to read the content stored in the memory by the first component. In other examples, the first component may provide content to the second component without any intermediary components (e.g., without a memory or another component). In such examples, the first component may be described as providing the content directly to the second component. For example, the first component may output the content to the second component, and the second component may be configured to store the content received from the first component in a memory, such as a buffer.

FIG. 1 is a block diagram that illustrates an example content generation and coding system 100 configured to implement one or more techniques of this disclosure. The content generation and coding system 100 includes a source device 102 and a destination device 104. In accordance with the techniques described herein, the source device 102 may be configured to encode, using the content encoder 108, graphical content generated by the processing unit 106 prior to transmission to the destination device 104. The content encoder 108 may be configured to output a bitstream having a bit rate. The processing unit 106 may be configured to control and/or influence the bit rate of the content encoder 108 based on how the processing unit 106 generates graphical content.

The source device 102 may include one or more components (or circuits) for performing various functions described herein. The destination device 104 may include one or more components (or circuits) for performing various functions described herein. In some examples, one or more components of the source device 102 may be components of a system-on-chip (SOC). Similarly, in some examples, one or more components of the destination device 104 may be components of an SOC.

The source device 102 may include one or more components configured to perform one or more techniques of this disclosure. In the example shown, the source device 102 may include a processing unit 106, a content encoder 108, a system memory 110, and a communication interface 112. The processing unit 106 may include an internal memory 109. The processing unit 106 may be configured to perform graphics processing, such as in a graphics processing pipeline 107-1. The content encoder 108 may include an internal memory 111.

Memory external to the processing unit 106 and the content encoder 108, such as the system memory 110, may be accessible to the processing unit 106 and the content encoder 108. For example, the processing unit 106 and the content encoder 108 may be configured to read from and/or write to external memory, such as the system memory 110. The processing unit 106 and the content encoder 108 may be communicatively coupled to the system memory 110 over a bus. In some examples, the processing unit 106 and the content encoder 108 may be communicatively coupled to each other over the bus or a different connection.

The content encoder 108 may be configured to receive graphical content from any source, such as the system memory 110 and/or the processing unit 106. The system memory 110 may be configured to store graphical content generated by the processing unit 106. For example, the processing unit 106 may be configured to store graphical content in the system memory 110. The content encoder 108 may be configured to receive graphical content in the form of pixel data (e.g., from the system memory 110 and/or the processing unit 106). Otherwise described, the content encoder 108 may be configured to receive pixel data of graphical content produced by the processing unit 106. For example, the content encoder 108 may be configured to receive a value for each component (e.g., each color component) of one or more pixels of graphical content. As an example, a pixel in the red (R), green (G), blue (B) (RGB) color space may include a first value for the red component, a second value for the green component, and a third value for the blue component.
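The per-component pixel values described above might be modelled as follows. The `RGBPixel` type, the 8-bit component depth, and the packing order are illustrative assumptions for this sketch, not details given in the disclosure.

```python
from typing import NamedTuple

class RGBPixel(NamedTuple):
    """One pixel in the RGB color space: a first value for the red
    component, a second for green, and a third for blue (8 bits each)."""
    r: int
    g: int
    b: int

def to_packed(pixel: RGBPixel) -> int:
    """Pack the three 8-bit components into a single 24-bit word, one way
    pixel data might be laid out when handed to a content encoder."""
    return (pixel.r << 16) | (pixel.g << 8) | pixel.b
```

For example, a fully saturated red pixel is `RGBPixel(255, 0, 0)`, whose packed form is `0xFF0000` under this assumed layout.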

The internal memory 109, the system memory 110, and/or the internal memory 111 may include one or more volatile or non-volatile memories or storage devices. In some examples, the internal memory 109, the system memory 110, and/or the internal memory 111 may include random access memory (RAM), static RAM (SRAM), dynamic RAM (DRAM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), flash memory, a magnetic data medium or an optical storage medium, or any other type of memory.

The internal memory 109, the system memory 110, and/or the internal memory 111 may be a non-transitory storage medium according to some examples. The term "non-transitory" may indicate that the storage medium is not embodied in a carrier wave or a propagated signal. However, the term "non-transitory" should not be interpreted to mean that the internal memory 109, the system memory 110, and/or the internal memory 111 is non-movable or that its contents are static. As one example, the system memory 110 may be removed from the source device 102 and moved to another device. As another example, the system memory 110 may not be removable from the source device 102.

The processing unit 106 may be a central processing unit (CPU), a graphics processing unit (GPU), a general purpose GPU (GPGPU), or any other processing unit that may be configured to perform graphics processing. In some examples, the processing unit 106 may be integrated into a motherboard of the source device 102. In some examples, the processing unit 106 may be present on a graphics card that is installed in a port in a motherboard of the source device 102, or may be otherwise incorporated within a peripheral device configured to interoperate with the source device 102.

The processing unit 106 may include one or more processors, such as one or more microprocessors, application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), arithmetic logic units (ALUs), digital signal processors (DSPs), discrete logic, software, hardware, firmware, other equivalent integrated or discrete logic circuitry, or any combinations thereof. If the techniques are implemented partially in software, the processing unit 106 may store instructions for the software in a suitable, non-transitory computer-readable storage medium (e.g., the internal memory 109), and may execute the instructions in hardware using one or more processors to perform the techniques of this disclosure. Any of the foregoing (including hardware, software, a combination of hardware and software, etc.) may be considered to be one or more processors.

The content encoder 108 may be any processing unit configured to perform content encoding. In some examples, the content encoder 108 may be integrated into a motherboard of the source device 102. The content encoder 108 may include one or more processors, such as one or more microprocessors, application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), arithmetic logic units (ALUs), digital signal processors (DSPs), discrete logic, software, hardware, firmware, other equivalent integrated or discrete logic circuitry, or any combinations thereof. If the techniques are implemented partially in software, the content encoder 108 may store instructions for the software in a suitable, non-transitory computer-readable storage medium (e.g., the internal memory 111), and may execute the instructions in hardware using one or more processors to perform the techniques of this disclosure. Any of the foregoing (including hardware, software, a combination of hardware and software, etc.) may be considered to be one or more processors.

The communication interface 112 may include a receiver 114 and a transmitter 116. The receiver 114 may be configured to perform any receiving function described herein with respect to the source device 102. For example, the receiver 114 may be configured to receive information from the destination device 104, which may include a request for content. In some examples, in response to receiving the request for content, the source device 102 may be configured to perform one or more techniques described herein, such as producing or otherwise generating graphical content for delivery to the destination device 104. The transmitter 116 may be configured to perform any transmitting function described herein with respect to the source device 102. For example, the transmitter 116 may be configured to transmit encoded content to the destination device 104, such as encoded graphical content produced by the processing unit 106 and the content encoder 108 (i.e., the graphical content is produced by the processing unit 106, which the content encoder 108 receives as input to produce or otherwise generate the encoded graphical content). The receiver 114 and the transmitter 116 may be combined into a transceiver 118. In such examples, the transceiver 118 may be configured to perform any receiving function and/or transmitting function described herein with respect to the source device 102.

The destination device 104 may include one or more components configured to perform one or more techniques of this disclosure. In the example shown, the destination device 104 may include a processing unit 120, a content decoder 122, a system memory 124, a communication interface 126, and one or more displays 131. Reference to the display 131 may refer to the one or more displays 131. For example, the display 131 may include a single display or a plurality of displays. The display 131 may include a first display and a second display. The first display may be a left-eye display and the second display may be a right-eye display. In some examples, the first and second displays may receive different frames for presentment thereon. In other examples, the first and second displays may receive the same frames for presentment thereon.

The processing unit 120 may include an internal memory 121. The processing unit 120 may be configured to perform graphics processing, such as in the graphics processing pipeline 107-2. The content decoder 122 may include an internal memory 123. In some examples, the destination device 104 may include a display processor, such as a display processor 127, to perform one or more display processing techniques on one or more frames generated by the processing unit 120 before presentment by the one or more displays 131. The display processor 127 may be configured to perform display processing. For example, the display processor 127 may be configured to perform one or more display processing techniques on one or more frames generated by the processing unit 120. The one or more displays 131 may be configured to display content that was generated using decoded content. For example, the display processor 127 may be configured to process one or more frames generated by the processing unit 120, where the one or more frames are generated by the processing unit 120 by using decoded content that was derived from encoded content received from the source device 102. In turn, the display processor 127 may be configured to perform display processing on the one or more frames generated by the processing unit 120. The one or more displays 131 may be configured to display or otherwise present the frames processed by the display processor 127. In some examples, the one or more display devices may include one or more of the following: a liquid crystal display (LCD), a plasma display, an organic light-emitting diode (OLED) display, a projection display device, an augmented reality display device, a virtual reality display device, a head-mounted display, or any other type of display device.

Memory external to the processing unit 120 and the content decoder 122, such as the system memory 124, may be accessible to the processing unit 120 and the content decoder 122. For example, the processing unit 120 and the content decoder 122 may be configured to read from and/or write to external memory, such as the system memory 124. The processing unit 120 and the content decoder 122 may be communicatively coupled to the system memory 124 over a bus. In some examples, the processing unit 120 and the content decoder 122 may be communicatively coupled to each other over the bus or over a different connection.

The content decoder 122 may be configured to receive graphical content from any source, such as the system memory 124 and/or the communication interface 126. The system memory 124 may be configured to store received encoded graphical content, such as encoded graphical content received from the source device 102. The content decoder 122 may be configured to receive encoded graphical content (e.g., from the system memory 124 and/or the communication interface 126) in the form of encoded pixel data. The content decoder 122 may be configured to decode encoded graphical content.

The internal memory 121, the system memory 124, and/or the internal memory 123 may include one or more volatile or non-volatile memories or storage devices. In some examples, the internal memory 121, the system memory 124, and/or the internal memory 123 may include random access memory (RAM), static RAM (SRAM), dynamic RAM (DRAM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), flash memory, a magnetic data media or an optical storage media, or any other type of memory.

According to some examples, the internal memory 121, the system memory 124, and/or the internal memory 123 may be non-transitory storage media. The term "non-transitory" may indicate that the storage medium is not embodied in a carrier wave or a propagated signal. However, the term "non-transitory" should not be interpreted to mean that the internal memory 121, the system memory 124, and/or the internal memory 123 is non-movable or that its contents are static. As one example, the system memory 124 may be removed from the destination device 104 and moved to another device. As another example, the system memory 124 may not be removable from the destination device 104.

The processing unit 120 may be a central processing unit (CPU), a graphics processing unit (GPU), a general-purpose GPU (GPGPU), or any other processing unit that may be configured to perform graphics processing. In some examples, the processing unit 120 may be integrated into a motherboard of the destination device 104. In some examples, the processing unit 120 may be present on a graphics card that is installed in a port in the motherboard of the destination device 104, or may be otherwise incorporated within a peripheral device configured to interoperate with the destination device 104.

The processing unit 120 may include one or more processors, such as one or more microprocessors, application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), arithmetic logic units (ALUs), digital signal processors (DSPs), discrete logic, software, hardware, firmware, other equivalent integrated or discrete logic circuitry, or any combinations thereof. If the techniques are implemented partially in software, the processing unit 120 may store instructions for the software in a suitable, non-transitory computer-readable storage medium (e.g., the internal memory 121), and may execute the instructions in hardware using one or more processors to perform the techniques of this disclosure. Any of the foregoing (including hardware, software, a combination of hardware and software, etc.) may be considered to be one or more processors.

The content decoder 122 may be any processing unit configured to perform content decoding. In some examples, the content decoder 122 may be integrated into a motherboard of the destination device 104. The content decoder 122 may include one or more processors, such as one or more microprocessors, application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), arithmetic logic units (ALUs), digital signal processors (DSPs), discrete logic, software, hardware, firmware, other equivalent integrated or discrete logic circuitry, or any combinations thereof. If the techniques are implemented partially in software, the content decoder 122 may store instructions for the software in a suitable, non-transitory computer-readable storage medium (e.g., the internal memory 123), and may execute the instructions in hardware using one or more processors to perform the techniques of this disclosure. Any of the foregoing (including hardware, software, a combination of hardware and software, etc.) may be considered to be one or more processors.

The communication interface 126 may include a receiver 128 and a transmitter 130. The receiver 128 may be configured to perform any receiving function described herein with respect to the destination device 104. For example, the receiver 128 may be configured to receive information from the source device 102, which may include encoded content, such as encoded graphical content produced or otherwise generated by the processing unit 106 and the content encoder 108 of the source device 102 (i.e., the graphical content is produced by the processing unit 106, which the content encoder 108 receives as input to produce or otherwise generate the encoded graphical content). As another example, the receiver 114 may be configured to receive position information from the destination device 104, which may be encoded or unencoded (i.e., not encoded). Additionally, the receiver 128 may be configured to receive position information from the source device 102. In some examples, the destination device 104 may be configured to decode encoded graphical content received from the source device 102 in accordance with the techniques described herein. For example, the content decoder 122 may be configured to decode encoded graphical content to produce or otherwise generate decoded graphical content. The processing unit 120 may be configured to use the decoded graphical content to produce or otherwise generate one or more frames for presentment on the one or more displays 131. The transmitter 130 may be configured to perform any transmitting function described herein with respect to the destination device 104. For example, the transmitter 130 may be configured to transmit information to the source device 102, which may include a request for content. The receiver 128 and the transmitter 130 may be combined into a transceiver 132. In such examples, the transceiver 132 may be configured to perform any receiving function and/or transmitting function described herein with respect to the destination device 104.

The content encoder 108 and the content decoder 122 of the content generation and coding system 100 represent examples of computing components (e.g., processing units) that may be configured to perform one or more techniques for encoding content and decoding content, respectively, in accordance with various examples described in this disclosure. In some examples, the content encoder 108 and the content decoder 122 may be configured to operate in accordance with a content coding standard, such as a video coding standard, a display stream compression standard, or an image compression standard.

As shown in FIG. 1, the source device 102 may be configured to generate encoded content. Accordingly, the source device 102 may be referred to as a content encoding device or a content encoding apparatus. The destination device 104 may be configured to decode the encoded content generated by the source device 102. Accordingly, the destination device 104 may be referred to as a content decoding device or a content decoding apparatus. In some examples, as shown, the source device 102 and the destination device 104 may be separate devices. In other examples, the source device 102 and the destination device 104 may be on or part of the same computing device. In either example, a graphics processing pipeline may be distributed between the two devices. For example, a single graphics processing pipeline may include a plurality of graphics processes. The graphics processing pipeline 107-1 may include one or more graphics processes of the plurality of graphics processes. Similarly, the graphics processing pipeline 107-2 may include one or more graphics processes of the plurality of graphics processes. In this regard, the graphics processing pipeline 107-1 concatenated with or otherwise followed by the graphics processing pipeline 107-2 may form a full graphics processing pipeline. Otherwise described, the graphics processing pipeline 107-1 may be a partial graphics processing pipeline and the graphics processing pipeline 107-2 may be a partial graphics processing pipeline that, when combined, result in a distributed graphics processing pipeline.
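The concatenation property described above (partial pipeline 107-1 followed by partial pipeline 107-2 forms the full pipeline) can be modeled in a few lines. This is a toy sketch for illustration only; the stage names below are assumptions introduced here, not stages defined by this disclosure.

```python
# Toy model of a graphics processing pipeline distributed across two devices.
# The stage names are illustrative assumptions, not stages named by the disclosure.

FULL_PIPELINE = ["render", "shade", "encode", "decode", "reproject", "display"]

# Partial pipeline 107-1 runs on the source device; 107-2 runs on the destination.
PIPELINE_107_1 = FULL_PIPELINE[:3]
PIPELINE_107_2 = FULL_PIPELINE[3:]

def run(stages, payload):
    """Apply each stage in order; stages are modeled as simple labels."""
    for stage in stages:
        payload = f"{stage}({payload})"
    return payload

def run_distributed(payload):
    """Running 107-1 and then 107-2 is equivalent to running the full pipeline."""
    return run(PIPELINE_107_2, run(PIPELINE_107_1, payload))
```

The equivalence holds by construction: composing the two partial stage lists reproduces the full stage list.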

Referring again to FIG. 1, in certain aspects, the graphics processing pipeline 107-2 may include a generation component configured to generate at least one first motion vector in a first subset of frames, where the first motion vector provides a first motion estimation for image data in the first subset of frames. The graphics processing pipeline 107-2 may also include a perturbation component configured to perturb the image data in the first subset of frames. Additionally, the generation component may be configured to generate at least one second motion vector based on the perturbed image data in the first subset of frames, where the at least one second motion vector provides a second motion estimation for the image data in the first subset of frames. The graphics processing pipeline 107-2 may also include a comparison component configured to compare the first motion vector with the second motion vector. Further, the graphics processing pipeline 107-2 may include a determination component 198 configured to determine, based on the comparison between the first motion vector and the second motion vector, at least one third motion vector for the motion estimation of the image data in the first subset of frames. By distributing the graphics processing pipeline between the source device 102 and the destination device 104, the destination device may, in some examples, be able to present graphical content that it otherwise would not be able to render and, therefore, could not present. Other example benefits are described throughout this disclosure.
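The perturb-and-compare flow attributed to the generation, perturbation, comparison, and determination components above can be sketched as follows. This is a minimal illustration under stated assumptions, not the disclosed implementation: the exhaustive SAD block search, the additive-noise perturbation, the agreement tolerance, and the zero-vector fallback chosen as the "third motion vector" are all concrete choices made here, and every function name is hypothetical.

```python
import numpy as np

def estimate_motion(prev_block, cur_frame, y, x, search=4):
    """Exhaustive block matching: return the (dy, dx) offset minimizing SAD."""
    h, w = prev_block.shape
    best_sad, best_mv = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            if yy < 0 or xx < 0 or yy + h > cur_frame.shape[0] or xx + w > cur_frame.shape[1]:
                continue
            candidate = cur_frame[yy:yy + h, xx:xx + w].astype(int)
            sad = np.abs(candidate - prev_block.astype(int)).sum()
            if sad < best_sad:
                best_sad, best_mv = sad, (dy, dx)
    return best_mv

def perturb(block, rng, amplitude=2):
    """One possible perturbation: add small bounded noise to the image data."""
    noise = rng.integers(-amplitude, amplitude + 1, size=block.shape)
    return np.clip(block.astype(int) + noise, 0, 255).astype(np.uint8)

def robust_motion_vector(prev_frame, cur_frame, y, x, size=8, tol=1):
    """First MV from the original data, second MV from the perturbed data;
    keep the estimate only if the two agree, else fall back to a third MV."""
    rng = np.random.default_rng(0)
    block = prev_frame[y:y + size, x:x + size]
    mv1 = estimate_motion(block, cur_frame, y, x)                # first motion vector
    mv2 = estimate_motion(perturb(block, rng), cur_frame, y, x)  # second motion vector
    if abs(mv1[0] - mv2[0]) <= tol and abs(mv1[1] - mv2[1]) <= tol:
        return mv1      # estimates agree: the match is trusted
    return (0, 0)       # third motion vector: a conservative fallback
```

The intuition is that a true motion match survives a small perturbation of the input, whereas a spurious match (e.g., on flat or repetitive texture) is likely to jump to a different offset.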

As described herein, a device, such as the source device 102 and/or the destination device 104, may refer to any device, apparatus, or system configured to perform one or more techniques described herein. For example, a device may be a server, a base station, user equipment, a client device, a station, an access point, a computer (e.g., a personal computer, a desktop computer, a laptop computer, a tablet computer, a computer workstation, or a mainframe computer), an end product, an apparatus, a phone, a smart phone, a server, a video game platform or console, a handheld device (e.g., a portable video game device or a personal digital assistant (PDA)), a wearable computing device (e.g., a smart watch, an augmented reality device, or a virtual reality device), a non-wearable device, an augmented reality device, a virtual reality device, a display (e.g., a display device), a television, a television set-top box, an intermediate network device, a digital media player, a video streaming device, a content streaming device, an in-car computer, any mobile device, any device configured to generate graphical content, or any device configured to perform one or more techniques described herein.

The source device 102 may be configured to communicate with the destination device 104. For example, the destination device 104 may be configured to receive encoded content from the source device 102. In some examples, the communication coupling between the source device 102 and the destination device 104 is shown as a link 134. The link 134 may comprise any type of medium or device capable of moving the encoded content from the source device 102 to the destination device 104.

In the example of FIG. 1, the link 134 may comprise a communication medium to enable the source device 102 to transmit encoded content to the destination device 104 in real time. The encoded content may be modulated according to a communication standard, such as a wireless communication protocol, and transmitted to the destination device 104. The communication medium may comprise any wireless or wired communication medium, such as a radio frequency (RF) spectrum or one or more physical transmission lines. The communication medium may form part of a packet-based network, such as a local area network, a wide-area network, or a global network such as the Internet. The communication medium may include routers, switches, base stations, or any other equipment that may be useful to facilitate communication from the source device 102 to the destination device 104. In other examples, the link 134 may be a point-to-point connection between the source device 102 and the destination device 104, such as a wired or wireless display link connection (e.g., an HDMI link, a DisplayPort link, an MIPI DSI link, or another link over which encoded content may traverse from the source device 102 to the destination device 104).

In another example, the link 134 may include a storage medium configured to store encoded content generated by the source device 102. In this example, the destination device 104 may be configured to access the storage medium. The storage medium may include a variety of locally-accessed data storage media, such as Blu-ray discs, DVDs, CD-ROMs, flash memory, or other suitable digital storage media for storing encoded content.

In another example, the link 134 may include a server or another intermediate storage device configured to store encoded content generated by the source device 102. In this example, the destination device 104 may be configured to access encoded content stored at the server or other intermediate storage device. The server may be a type of server capable of storing encoded content and transmitting the encoded content to the destination device 104.

Devices described herein, such as the source device 102 and the destination device 104, may be configured to communicate with each other. Communication may include the transmission and/or reception of information. The information may be carried in one or more messages. As an example, a first device in communication with a second device may be described as being communicatively coupled to or otherwise with the second device. For example, a client device and a server may be communicatively coupled. As another example, a server may be communicatively coupled to a plurality of client devices. As another example, any device described herein configured to perform one or more techniques of this disclosure may be communicatively coupled to one or more other devices configured to perform one or more techniques of this disclosure. In some examples, when communicatively coupled, two devices may be actively transmitting or receiving information, or may be configured to transmit or receive information. If not communicatively coupled, any two devices may be configured to communicatively couple with each other, such as in accordance with one or more communication protocols compliant with one or more communication standards. Reference to "any two devices" does not mean that only two devices may be configured to communicatively couple with each other; rather, any two devices is inclusive of more than two devices. For example, a first device may be communicatively coupled with a second device and the first device may be communicatively coupled with a third device. In such an example, the first device may be a server.

With reference to FIG. 1, the source device 102 may be described as being communicatively coupled to the destination device 104. In some examples, the term "communicatively coupled" may refer to a communication connection, which may be direct or indirect. In some examples, the link 134 may represent the communication coupling between the source device 102 and the destination device 104. A communication connection may be wired and/or wireless. A wired connection may refer to a conductive path, a trace, or a physical medium (excluding wireless physical media) over which information may travel. A conductive path may refer to any conductor of any length, such as a conductive pad, a conductive via, a conductive plane, a conductive trace, or any conductive medium. A direct communication connection may refer to a connection in which no intermediary component resides between the two communicatively coupled components. An indirect communication connection may refer to a connection in which at least one intermediary component resides between the two communicatively coupled components. Two devices that are communicatively coupled may communicate with each other over one or more different types of networks (e.g., a wireless network and/or a wired network) in accordance with one or more communication protocols. In some examples, two devices that are communicatively coupled may associate with one another through an association process. In other examples, two devices that are communicatively coupled may communicate with each other without engaging in an association process. For example, a device, such as the source device 102, may be configured to unicast, broadcast, multicast, or otherwise transmit information (e.g., encoded content) to one or more other devices (e.g., one or more destination devices, which includes the destination device 104). The destination device 104 in this example may be described as being communicatively coupled with each of the one or more other devices. In some examples, a communication connection may enable the transmission and/or receipt of information. For example, a first device communicatively coupled to a second device may be configured to transmit information to the second device and/or receive information from the second device in accordance with the techniques of this disclosure. Similarly, the second device in this example may be configured to transmit information to the first device and/or receive information from the first device in accordance with the techniques of this disclosure. In some examples, the term "communicatively coupled" may refer to a temporary, intermittent, or permanent communication connection.

Any device described herein, such as the source device 102 and the destination device 104, may be configured to operate in accordance with one or more communication protocols. For example, the source device 102 may be configured to communicate with (e.g., receive information from and/or transmit information to) the destination device 104 using one or more communication protocols. In such an example, the source device 102 may be described as communicating with the destination device 104 over a connection. The connection may be compliant or otherwise in accordance with a communication protocol. Similarly, the destination device 104 may be configured to communicate with (e.g., receive information from and/or transmit information to) the source device 102 using one or more communication protocols. In such an example, the destination device 104 may be described as communicating with the source device 102 over a connection. The connection may be compliant or otherwise in accordance with a communication protocol.

As used herein, the term "communication protocol" can refer to any communication protocol, such as a communication protocol compliant with a communication standard. As used herein, the term "communication standard" may include any communication standard, such as a wireless communication standard and/or a wired communication standard. A wireless communication standard may correspond to a wireless network. As an example, a communication standard may include any wireless communication standard corresponding to a wireless personal area network (WPAN) standard, such as Bluetooth (e.g., IEEE 802.15) or Bluetooth Low Energy (BLE) (e.g., IEEE 802.15.4). As another example, a communication standard may include any wireless communication standard corresponding to a wireless local area network (WLAN) standard, such as Wi-Fi (e.g., any 802.11 standard, such as 802.11a, 802.11b, 802.11c, 802.11n, or 802.11ax). As another example, a communication standard may include any wireless communication standard corresponding to a wireless wide area network (WWAN) standard, such as 3G, 4G, 4G LTE, or 5G.

With reference to Figure 1, the content encoder 108 may be configured to encode graphical content. In some examples, the content encoder 108 may be configured to encode graphical content as one or more video frames. When the content encoder 108 encodes content, the content encoder 108 may generate a bitstream. The bitstream may have a bit rate, such as bits/time unit, where the time unit is any unit of time, such as seconds or minutes. The bitstream may include a sequence of bits that forms a coded representation of the graphical content and associated data. To generate the bitstream, the content encoder 108 may be configured to perform encoding operations on pixel data, such as pixel data corresponding to a shaded texture atlas. For example, when the content encoder 108 performs an encoding operation on image data (e.g., one or more blocks of a shaded texture atlas) provided as input to the content encoder 108, the content encoder 108 may generate a series of coded images and associated data. The associated data may include a set of coding parameters, such as a quantization parameter (QP).

Motion estimation is the process of analyzing multiple two-dimensional (2D) images and producing motion vectors that describe the movement of regions from one image to another. Essentially, motion estimation produces motion vectors that can describe how objects move within certain portions of an image. Motion vectors have a variety of uses, including video compression, post-processing effects (such as motion blur), and frame extrapolation or interpolation. To reduce the rendering workload on a GPU, virtual reality (VR) or augmented reality (AR) systems can use motion estimation to extrapolate frames from previously rendered content. This can allow the GPU to render frames at a reduced rate, with extrapolated frames displayed to the user in place of rendered content. Motion estimation can be useful because, in VR or AR systems, there is a strong driver to reduce the rendering workload, e.g., on the GPU. The present disclosure can reduce the rendering workload by rendering fewer frames on the GPU and using motion estimation to fill in the gaps in the images or motion vectors. In addition, although motion estimation can be performed on video content, the motion estimation described herein can operate on rendered content rendered in real time on the GPU.
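To make the process concrete, the following is a minimal block-matching sketch, not the disclosed implementation; the frame size, block size, search range, and function names are illustrative assumptions. A motion vector is taken to be the offset that minimizes the sum of absolute differences (SAD) between a block in one frame and the candidate blocks in the next frame.

```c
#include <stdlib.h>
#include <limits.h>

#define W 16          /* hypothetical frame width  */
#define H 16          /* hypothetical frame height */
#define BLOCK 4       /* block size in pixels      */
#define RANGE 3       /* search range in pixels    */

/* Sum of absolute differences between a BLOCKxBLOCK region of two frames. */
static int sad(const unsigned char *a, const unsigned char *b,
               int ax, int ay, int bx, int by)
{
    int sum = 0;
    for (int y = 0; y < BLOCK; y++)
        for (int x = 0; x < BLOCK; x++)
            sum += abs(a[(ay + y) * W + (ax + x)] - b[(by + y) * W + (bx + x)]);
    return sum;
}

/* Find the motion vector (dx, dy) that best matches the block at (bx, by)
 * of frame `cur` within frame `next`. */
void estimate_motion(const unsigned char *cur, const unsigned char *next,
                     int bx, int by, int *dx, int *dy)
{
    int best = INT_MAX;
    *dx = *dy = 0;
    for (int oy = -RANGE; oy <= RANGE; oy++) {
        for (int ox = -RANGE; ox <= RANGE; ox++) {
            int nx = bx + ox, ny = by + oy;
            if (nx < 0 || ny < 0 || nx + BLOCK > W || ny + BLOCK > H)
                continue;   /* candidate block falls outside the frame */
            int s = sad(cur, next, bx, by, nx, ny);
            if (s < best) { best = s; *dx = ox; *dy = oy; }
        }
    }
}
```

Note that such an exhaustive search matches purely on pixel values, which is exactly why the repeating-pattern problem described below arises: several candidate offsets can produce an equally low SAD.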

In some cases, repeating patterns in the input image can be difficult for motion estimation to handle. For example, rendered content has more frequent repeating patterns than other content (e.g., light content), because rendered content uses texture maps with repeating patterns. In some aspects, motion estimation techniques have problems with repeating patterns because these techniques attempt to match objects that move from one frame to another. Because rendered content can, in some cases, repeat a pattern exactly, there can be false motion matches, where motion estimation incorrectly attempts to match the motion. For example, due to a repeating pattern, motion estimation can skip a period of the motion, leading to an incorrect mapping onto the next element and producing an incorrect motion estimate. In addition, because many regions of the image can match equally well, some systems may have difficulty identifying motion correctly. Erroneous motion vectors can be produced in these regions of incorrect motion estimation, which can lead to severe corruption for use cases that rely on accurate motion identification. Because rendered content can make liberal use of repeating structures and patterns, incorrect motion estimation may become an increasingly serious problem as motion estimation use cases for rendered content grow.

Some aspects of the present disclosure can provide a method for identifying erroneous motion vectors. By correctly identifying erroneous motion vectors, they can be removed to produce a more accurate motion estimate. In some aspects, motion estimation according to the present disclosure can be in addition to, or on top of, a general motion estimation process. Accordingly, some aspects of the present disclosure can perform the motion estimation process multiple times (e.g., twice). In these cases, motion estimation can be performed via input perturbation, which can lead to an overall improvement in motion estimation. This motion estimation with input perturbation can be performed twice, where one pass can be performed on the original input image and a second pass can be performed on a perturbed version of the input image. In some aspects, the first pass can be an unperturbed or strict pass, in which the input image is submitted and the resulting motion vectors are obtained. The second pass can be a perturbed pass, in which the input image is perturbed in some way. When both passes are complete, the resulting motion vectors can be compared to distinguish, for example, real motion from invalid motion caused by repeating patterns. In some cases, the first pass and the second pass can be performed simultaneously or in parallel. Resulting vectors that match between the original pass and the perturbed pass are identified as valid, and vectors that do not match are discarded as invalid.

The present disclosure can perform input perturbation in a variety of ways. In some aspects, the present disclosure can introduce enough new irregularity into the image that it can disrupt the motion estimation. For example, by introducing new irregularities, the perturbation can affect the motion vectors in certain regions of the input image (e.g., regions with repeating patterns). In some cases, this irregularity can manifest as a delta or difference value in the motion vectors. In these cases, there should not be too much delta or difference, so as not to lose motion tracking of real objects. Therefore, the present disclosure can find a balance: adding enough noise to perturb the input image, but not so much as to affect real motion. As mentioned above, the present disclosure can compare two passes, e.g., an unperturbed pass and a perturbed pass. Regions where the motion estimates do not agree between the two passes can be identified as uncorrelated or erroneous motion vectors. In some instances, there can be too much perturbation, such that the motion estimate changes drastically, to the point of affecting the actual motion. In fact, too much perturbation can cause the motion in some regions of the input image not to follow the real motion, instead producing artifacts of spurious repeating patterns.

Figure 2 illustrates an example of motion estimation 200 according to the present disclosure. Figure 2 shows that the rendered content block 202 and the RGB perturbation texture block 204 can both be inputs to the color space conversion passes 206 and 208. As shown in Figure 2, the color space conversions 206 and 208 can lead to motion estimations 210 and 212, which can lead to a delta or difference analysis 214, which can in turn lead to a motion vector calculation 216. The color space conversions 206/208 in Figure 2 can account for the aforementioned perturbation when converting into luminance (Y), first chrominance (U), and second chrominance (V) (YUV) values. The conversion from RGB to YUV can be an efficient place to perform the perturbation on the GPU. As shown in Figure 2, some aspects can first sample the rendered content, then perturb the RGB values, and then perform the YUV conversion based on those values. Some aspects of the present disclosure may not be concerned with which type of YUV conversion is used. For example, the type of YUV conversion may depend on the YUV standard being followed. In other aspects, this can be done outside of the color space conversion, by computing the noise pixel by pixel. Furthermore, an efficient way to perform the perturbation can be to feed in the noise as another texture.
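As an illustration of perturbing the RGB values ahead of the color space conversion, the following sketch uses one common full-range BT.601 form of the conversion; the function name, normalized [0, 1] channel values, and additive per-channel noise are assumptions for illustration, not the disclosed implementation.

```c
/* Apply an additive per-channel perturbation, then convert the modified
 * RGB values to YUV using one common full-range BT.601 formulation.
 * All channel values are assumed normalized to [0, 1]; `noise` is the
 * sampled perturbation value and `factor` scales its magnitude. */
void perturb_and_convert(double r, double g, double b,
                         double noise, double factor,
                         double *y, double *u, double *v)
{
    /* Perturb each color channel before the conversion. */
    r += noise * factor;
    g += noise * factor;
    b += noise * factor;

    /* BT.601 luma, with chroma as scaled (B - Y) and (R - Y). */
    *y = 0.299 * r + 0.587 * g + 0.114 * b;
    *u = 0.492 * (b - *y);
    *v = 0.877 * (r - *y);
}
```

With a zero noise value this reduces to the plain conversion, which is the unperturbed pass; the perturbed pass simply supplies a nonzero noise sample per pixel.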

In some aspects, the perturbation can allow the GPU to determine motion vectors and then render subsequent frames. An effective perturbation can modify the input enough to disrupt false region or feature matches attributable to repeating patterns, but may not affect the input to the extent of impairing the identification of real motion. In some aspects of the present disclosure, the motion estimation path of a VR or AR framework for performing frame extrapolation can be implemented on certain platforms, such as the SDM845 platform. As mentioned above, the method used to perform the perturbation can be the color space conversion. Rendered content can typically be in the RGB color space, and motion estimation can operate in the YUV color space. Therefore, before the present disclosure performs motion estimation, a color space conversion can be performed. For example, the color space conversion can be performed on the GPU during the first or unperturbed pass. The present disclosure can also sample a texture containing values specifying how the input is to be perturbed. Therefore, the present disclosure can perform this modification during the color space conversion and then proceed to motion estimation.

In some cases, the aforementioned perturbation can be implemented by passing an additional noise texture to the existing RGB-to-YUV conversion, e.g., by using a conversion shader that applies uniformly distributed random RGB values of a certain magnitude. In some aspects, the magnitude can be added to each color channel of the input image before the color space conversion. In some aspects, the noise addition can be performed within the color space conversion, with the color space conversion then performed on the modified values. The conversion shader can apply a number of different magnitudes (e.g., a 5% magnitude). The magnitude can be determined in advance or through experimentation, by determining when the noise eliminates too many legitimate vectors. In practice, this magnitude can be flexible, because the same magnitude may not be suitable for every image.

In some aspects, rather than applying a constant noise magnitude, the magnitude can also be varied based on the variance of the input image. For example, if the input image has high-contrast regions in a particular area, the present disclosure can apply a higher degree of noise texture. In other examples, if the image has low variance and softer features, the present disclosure can apply a lower level of noise in the perturbed pass. Some aspects of the present disclosure can obtain improved results when the noise magnitude is varied based on the variance of the image. For example, if the present disclosure determines the variance of a region and uses that variance to determine the amount of noise to apply, an improved motion estimate can be obtained. Therefore, by measuring the variance of each local region of the image and using that variance to determine how much noise to apply, the present disclosure can improve the ability to detect erroneous motion vectors and reduce the likelihood of accidentally discarding real motion. In practice, the present disclosure can obtain more effective results with more or less noise based on the actual input image.
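A minimal sketch of this variance-adaptive scaling follows; the window, the proportional scaling rule, and the function names are hypothetical, not the disclosed implementation. The idea is to measure the variance of a local region and scale the base noise magnitude by it, so flat regions receive little or no noise.

```c
/* Compute the variance of an n-pixel region (population variance). */
double region_variance(const double *px, int n)
{
    double mean = 0.0, var = 0.0;
    for (int i = 0; i < n; i++)
        mean += px[i];
    mean /= n;
    for (int i = 0; i < n; i++)
        var += (px[i] - mean) * (px[i] - mean);
    return var / n;
}

/* Scale a base noise magnitude by the local variance, so that
 * high-contrast regions receive proportionally more noise. */
double adaptive_noise_magnitude(const double *px, int n, double base)
{
    return base * region_variance(px, n);
}
```

A perfectly flat region has zero variance and therefore receives no noise, while a high-contrast region raises the applied magnitude, matching the behavior described above.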

As mentioned above, the amount of variance applied by the present disclosure can be content dependent. If too much noise is applied, feature identification of actual features can be disrupted. In some instances, depending on the content of the image, a noise magnitude of 10% can be too high. For example, applying too much noise in certain regions can result in too many erroneous motion vectors in those regions. When the noise begins to interfere with actual object tracking, some legitimate motion vectors can also be identified as erroneous. Therefore, if too much perturbation is applied, actual object tracking can be disrupted. For example, applying too much noise can cause some regions of the input image with legitimate movement to fail to match, e.g., the delta or difference may be too high, which can lead to the false conclusion that a legitimate motion vector is an erroneous vector. In short, if the amount of noise is too high and modifies the input too much, the noise can differ too much from the actual vectors, and the noise will drown out the real features. Therefore, the present disclosure can evaluate the input image in order to properly vary the amount of perturbation to be applied. In fact, some aspects of the present disclosure can adjust the amount of noise or perturbation applied based on the specific content of the input image.

Figures 3A and 3B illustrate images 300 and 310, respectively, on which motion estimation is performed. Because motion estimation can be performed on image 300 or 310, these images can be referred to as motion estimation images. As shown in Figure 3A, the image 300 includes a repeating background 302 and an inserted texture image 304. In some aspects, the repeating background 302 can be static. The inserted texture image 304 can move in a number of different directions, e.g., in a direction across the repeating background 302. Image 300 illustrates one example of motion estimation according to the present disclosure, which can be performed on a hardware platform such as the SDM845 platform.

In some aspects, the image 300 can undergo a strict or unperturbed pass, in which the input image or inserted texture image 304 is processed and the resulting motion vectors are obtained. In these aspects, an "X" in the repeating background 302 can appear to be shifting right or left, because it will match other "X"s to the right or left. In other aspects, the image 300 can undergo a perturbed pass, in which the input image or inserted texture image 304 is perturbed in some way, resulting in a perturbed image. In these aspects, the input image or inserted texture image 304 can, for example, be shifted slightly to the right or left to produce the perturbed image, making it unlikely to get the same "X" matches in the repeating background 302. For example, when the image 304 is perturbed to produce a perturbed image, there can be incorrect "X" matches that differ from those of the strict or unperturbed pass.

Figure 3B shows an image 310 that is the result of a perturbed pass. For example, the image 310 can be the result of applying a perturbation to the image 300 in Figure 3A. In Figure 3B, the perturbation in the image 310 is shown by the repeating background 312 and the inserted texture image 314 appearing gray. By contrast, Figure 3A shows the repeating background 302 and the inserted texture image 304 appearing black before the perturbed pass. In fact, during the perturbed pass, the entire image 300 can be perturbed, which can result in the entire image 310 being perturbed, e.g., appearing gray rather than black. The result of perturbing an image can manifest in a variety of ways (e.g., image changes or color fading). For example, compared with the image 300, the image 310 can be color faded. In some aspects, the result of the perturbation can be applied to the entire image. In other aspects, the result of the perturbation can be applied to a specific portion of the image.

In some aspects, when the "X" values in the image are disturbed or jittered by the perturbation, the background "X"s are more likely to move in different directions and fail to match the previous pass. Thereby, the present disclosure can compare the delta or difference between the strict pass and the perturbed pass. Essentially, the present disclosure can compare the motion vectors that the strict pass and the perturbed pass have in common. These common motion vectors are likely to correspond to real object motion. The present disclosure can also perform more than two passes, such that there can be multiple passes in addition to the strict pass and the perturbed pass. Furthermore, when more than two passes are performed, the multiple passes can be executed in parallel. Although performing more than two passes can provide a better estimate, the overhead can limit the number of passes that can be performed.

As mentioned above, some examples of the present disclosure may not render every frame. For example, a GPU according to the present disclosure can render every other frame and still make use of the motion vectors of the rendered content. In some aspects, the GPU can be resource constrained due to the demands of VR rendering, so not rendering every frame can save GPU resources and achieve a particular frame rate by offloading rendering work from the GPU. As previously mentioned, the present disclosure can use motion vectors and the resulting motion estimates as a substitute or replacement for rendering every frame. Thereby, the present disclosure can save GPU power and improve performance. Moreover, this can allow the present disclosure to render higher quality frames on the GPU.
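As a rough sketch of how a frame might be extrapolated from motion vectors in place of rendering it, the following displaces each pixel of the previous frame along its per-pixel motion vector; the dimensions and names are hypothetical, and real systems must also handle occlusions and sub-pixel motion.

```c
#include <string.h>

#define FW 8   /* hypothetical frame width  */
#define FH 8   /* hypothetical frame height */

/* Extrapolate a frame by moving each pixel of `prev` along the motion
 * vector (dx[i], dy[i]) assigned to it. Destinations outside the frame
 * are dropped, and uncovered pixels keep the previous frame's value. */
void extrapolate_frame(const unsigned char *prev,
                       const int *dx, const int *dy,
                       unsigned char *out)
{
    memcpy(out, prev, FW * FH);   /* fallback: repeat the previous frame */
    for (int y = 0; y < FH; y++) {
        for (int x = 0; x < FW; x++) {
            int nx = x + dx[y * FW + x];
            int ny = y + dy[y * FW + x];
            if (nx >= 0 && nx < FW && ny >= 0 && ny < FH)
                out[ny * FW + nx] = prev[y * FW + x];
        }
    }
}
```

Displaying such an extrapolated frame between rendered frames is what allows the GPU to render at a reduced rate, which is why discarding erroneous vectors beforehand matters: a bad vector moves pixels where no real motion occurred.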

In some aspects of the present disclosure, when motion estimation is performed, the result can be a single vector solution. In these cases, a threshold can be set for the amount of deviation between the unperturbed or strict pass and the perturbed pass, to determine whether a given vector is legitimate or erroneous. As mentioned above, this can be referred to as computing the delta or difference, and the delta (δ) or difference value can be set to any preferred value. In one example of using δ to compute the motion estimate, if the difference between the passes for a given vector is less than δ, the vector can be taken to be the motion vector from the unperturbed pass. In another example, if the difference between the passes for a given vector is greater than δ, the motion can be considered to be zero. Aspects of the present disclosure can also compare neighboring vectors and determine their motion. Some aspects of the present disclosure can also assume that having no motion estimate is preferable to having an incorrect one. In essence, it can be assumed that no motion estimate is better than an incorrect estimate.

The present disclosure can provide a number of different methods of applying perturbation to motion estimation. For example, as mentioned above, the variance can comprise the local image variance computed from the input frame data. In some aspects, the variance can be produced as an additional data stream during motion estimation. Accordingly, the variance data for the perturbed motion estimation pass can be taken from the unperturbed motion estimation pass. Furthermore, the variance data for the perturbed motion estimation pass can be approximated by using the previous frame's variance data. In some aspects, doing so can avoid losing parallelism across the motion estimation passes for the current frame.

Some aspects of the present disclosure can provide a shader that applies the aforementioned noise perturbation. For example, the present disclosure can use the following code for these types of shaders:

#version 300 es
#extension GL_EXT_YUV_target : require
uniform sampler2DArray texArray;
uniform sampler2D noiseTexture;
uniform float perturbationFactor;
in vec3 tc;
layout (yuv) out vec3 fragColor;
void main(void)
{
    vec3 color = texture(texArray, tc).xyz;
    fragColor = rgb_2_yuv(color + (texture(noiseTexture, tc.xy).xyz * perturbationFactor), itu_601);
}

In the above example code, texArray can be the input image, noiseTexture can be the noise perturbation texture, perturbationFactor can be a scale factor for the amount of perturbation applied, vec3 tc can be the input image texture coordinates, and fragColor can be the output fragment color.

The present disclosure can also provide a shader that applies the noise perturbation while taking local variance into account. For example, the present disclosure can use the following code for these types of shaders (the loop bounds are cast to int and the loop separators corrected so the shader compiles under GLSL ES 3.00):

#version 300 es
#extension GL_EXT_YUV_target : require
uniform sampler2DArray texArray;
uniform sampler2D noiseTexture;
uniform sampler2D varianceTexture;
uniform vec2 varianceSampleRange;
uniform vec2 varianceSampleSpread;
uniform float perturbationFactor;
in vec3 tc;
layout (yuv) out vec3 fragColor;
void main(void)
{
    vec3 color = texture(texArray, tc).xyz;
    float varianceFactor = 0.0;
    for (int i = -int(varianceSampleRange.x); i < int(varianceSampleRange.x); i++) {
        for (int j = -int(varianceSampleRange.y); j < int(varianceSampleRange.y); j++) {
            vec2 sampleLocation;
            sampleLocation.x = tc.x + (varianceSampleSpread.x * float(i));
            sampleLocation.y = tc.y + (varianceSampleSpread.y * float(j));
            varianceFactor += texture(varianceTexture, sampleLocation.xy).x;
        }
    }
    varianceFactor = varianceFactor / ((varianceSampleRange.x * 2.0) * (varianceSampleRange.y * 2.0));
    fragColor = rgb_2_yuv(color + (texture(noiseTexture, tc.xy).xyz * perturbationFactor * varianceFactor), itu_601);
}

In the above example code, texArray can be the input image, noiseTexture can be the noise perturbation texture, varianceTexture can be the variance data texture, varianceSampleRange can be the size of the region of the variance texture to sample, varianceSampleSpread can be the distance between variance texture sample locations, perturbationFactor can be a scale factor for the amount of perturbation applied, vec3 tc can be the input image texture coordinates, and fragColor can be the output fragment color.

The present disclosure can also process the two arrays of motion vectors to detect deviations. For example, the resulting motion vectors from the aforementioned two passes can be compared and used. In some aspects, the threshold can be a use-case-specific tuning parameter, set based on the use case's tolerance for incorrect motion estimates. As mentioned above, in the prototype VR use case, producing no motion estimate is better than producing an incorrect one. Therefore, some aspects of the present disclosure can set the threshold to a very small value, e.g., 0.001. The present disclosure can use the following code to process the two arrays of motion vectors to detect deviations (the deviation test uses the absolute difference |a - b|, consistent with the comparison function described below with respect to Figure 4A):

for (int loopY = 0; loopY < arrayHeight; loopY++) {
    for (int loopX = 0; loopX < arrayWidth; loopX++) {
        float Xa = motionVectorsUnperturbed[(loopY * arrayWidth) + loopX].xMagnitude;
        float Xb = motionVectorsPerturbed[(loopY * arrayWidth) + loopX].xMagnitude;
        float Ya = motionVectorsUnperturbed[(loopY * arrayWidth) + loopX].yMagnitude;
        float Yb = motionVectorsPerturbed[(loopY * arrayWidth) + loopX].yMagnitude;
        /* Vectors that deviate between the passes by more than the
         * threshold are discarded (zeroed). */
        if ((fabs(Xa - Xb) > threshold) || (fabs(Ya - Yb) > threshold)) {
            finalMotionVectors[(loopY * arrayWidth) + loopX].xMagnitude = 0;
            finalMotionVectors[(loopY * arrayWidth) + loopX].yMagnitude = 0;
        } else {
            /* Matching vectors are kept from the unperturbed pass. */
            finalMotionVectors[(loopY * arrayWidth) + loopX].xMagnitude = Xa;
            finalMotionVectors[(loopY * arrayWidth) + loopX].yMagnitude = Ya;
        }
    }
}

FIG. 4A illustrates another example of motion estimation 400 according to the present disclosure. As shown in FIG. 4A, the motion estimation 400 includes a frame 402, a repeating background 404, an inserted texture image 410, at least one first motion vector 412, at least one second motion vector 414, and a difference 416 between the at least one first motion vector 412 and the at least one second motion vector 414. As shown in FIG. 4A, in some aspects, at least one first motion vector 412 can be generated in a first frame subset of the frame 402. In some aspects, the first frame subset can be positioned over the inserted texture image 410 in a portion of the repeating background 404. In other aspects, the first frame subset can be positioned in another portion of the frame 402. The at least one first motion vector 412 can provide a first motion estimate for the image data in the first frame subset. In some aspects, the image data in the first frame subset can be perturbed. Further, at least one second motion vector 414 can be generated based on the perturbed image data in the first frame subset. The at least one second motion vector 414 can provide a second motion estimate for the image data in the first frame subset. Additionally, the at least one first motion vector 412 and the at least one second motion vector 414 can be compared. In some aspects, comparing the at least one first motion vector 412 with the at least one second motion vector 414 can include determining a difference 416 between the at least one first motion vector 412 and the at least one second motion vector 414. The difference 416 can be determined to be less than or greater than a threshold. In some aspects, the above comparison can be expressed as a function, e.g., the formula f(v1, v2) = |v1 - v2| < threshold.
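As a hedged illustration, the comparison f(v1, v2) = |v1 - v2| < threshold can be sketched in C as follows; the struct layout and function name are illustrative assumptions rather than part of the disclosure:

```c
#include <math.h>
#include <stdbool.h>

typedef struct {
    float xMagnitude;
    float yMagnitude;
} MotionVector;

/* Returns true when the unperturbed and perturbed vectors agree to within
 * the threshold in both components, i.e. |v1 - v2| < threshold. */
bool vectors_agree(MotionVector v1, MotionVector v2, float threshold) {
    return fabsf(v1.xMagnitude - v2.xMagnitude) < threshold &&
           fabsf(v1.yMagnitude - v2.yMagnitude) < threshold;
}
```

The per-component comparison mirrors the x/y handling in the code listing above, where each axis of the vector is tested against the threshold separately.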

In some aspects, at least one third motion vector for the motion estimation of the image data in the first frame subset can be determined based on the comparison between the at least one first motion vector 412 and the at least one second motion vector 414. Further, when the difference 416 is less than the threshold, the at least one third motion vector can be set to the at least one first motion vector 412. The difference 416 can also be referred to as a delta analysis. As mentioned herein, in some aspects, the difference 416 can be an absolute value. Based on the above, the at least one third vector can be expressed by the formula v3 = v1, when |v1 - v2| < threshold. In some aspects, when the difference 416 is greater than or equal to the threshold, the at least one third motion vector can be set to have a zero motion value. Based on this, the at least one third vector can also be expressed by the formula v3 = 0, when |v1 - v2| >= threshold.
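Putting the two cases together, a minimal sketch of the selection rule for the third motion vector (with illustrative names, not the disclosure's implementation) could be:

```c
#include <math.h>

typedef struct {
    float xMagnitude;
    float yMagnitude;
} MotionVector;

/* v3 = v1 when |v1 - v2| < threshold (per component); otherwise v3 = 0. */
MotionVector select_third_vector(MotionVector v1, MotionVector v2,
                                 float threshold) {
    MotionVector v3 = {0.0f, 0.0f};
    if (fabsf(v1.xMagnitude - v2.xMagnitude) < threshold &&
        fabsf(v1.yMagnitude - v2.yMagnitude) < threshold) {
        v3 = v1;  /* passes agree: keep the unperturbed estimate */
    }
    return v3;    /* passes disagree: report zero motion */
}
```

Reporting zero motion on disagreement matches the stated preference of generating no motion estimate rather than an incorrect one.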

In some cases, a second difference between the at least one third motion vector and one or more neighboring vectors surrounding or adjacent to the at least one third motion vector can be determined. In such cases, as shown in FIG. 4A, the one or more neighboring vectors adjacent to or surrounding the third motion vector can be used to determine the second difference between the third motion vector and the neighboring vectors. Additionally, if the difference is greater than the threshold and/or the second difference is less than the threshold, the at least one third motion vector can be set based on the one or more neighboring vectors. Further, the aforementioned threshold can be based on a motion estimation tolerance of the image data in the first frame subset.
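One plausible reading of this neighbor fallback, sketched under the assumption that "set based on the neighboring vectors" means averaging them (the disclosure does not fix a specific combination), is:

```c
#include <stddef.h>

typedef struct {
    float xMagnitude;
    float yMagnitude;
} MotionVector;

/* If the perturbed/unperturbed passes disagreed (firstDiff > threshold) but
 * the candidate is close to its neighborhood (secondDiff < threshold),
 * replace the candidate with the mean of its neighbors. */
MotionVector neighbor_fallback(MotionVector candidate,
                               const MotionVector *neighbors, size_t count,
                               float firstDiff, float secondDiff,
                               float threshold) {
    if (count == 0 || firstDiff <= threshold || secondDiff >= threshold) {
        return candidate;  /* fallback conditions not met */
    }
    MotionVector mean = {0.0f, 0.0f};
    for (size_t i = 0; i < count; i++) {
        mean.xMagnitude += neighbors[i].xMagnitude;
        mean.yMagnitude += neighbors[i].yMagnitude;
    }
    mean.xMagnitude /= (float)count;
    mean.yMagnitude /= (float)count;
    return mean;
}
```

The averaging here is a hypothetical choice; any smoothing over the surrounding vectors would satisfy the description equally well.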

FIG. 4B illustrates another example of motion estimation 450 according to the present disclosure. As shown in FIG. 4B, the motion estimation 450 includes a frame 452, a repeating background 454, an inserted texture image 460, at least one first motion vector 462, at least one second motion vector 464, a difference 466 between the at least one first motion vector 462 and the at least one second motion vector 464, and at least one fourth motion vector 472. In FIG. 4B, as described in connection with FIG. 4A, at least one first motion vector 462 can be generated in a first frame subset of the frame 452, where the at least one first motion vector 462 can provide a first motion estimate for the image data in the first frame subset. As shown in FIG. 4B, in some aspects, the first frame subset can be positioned over the inserted texture image 460 in a portion of the repeating background 454. In other aspects, the first frame subset can be positioned in another portion of the frame 452. In some cases, the image data in the first frame subset can be perturbed. Further, at least one second motion vector 464 can be generated based on the perturbed image data in the first frame subset, where the at least one second motion vector 464 can provide a second motion estimate for the image data in the first frame subset.
Additionally, the at least one first motion vector 462 and the at least one second motion vector 464 can be compared. In some aspects, comparing the at least one first motion vector 462 with the at least one second motion vector 464 can include determining a difference 466 between the at least one first motion vector 462 and the at least one second motion vector 464. The difference 466 can be determined to be less than or greater than the threshold. In some aspects, at least one third motion vector for the motion estimation of the image data in the first frame subset can be determined based on the comparison between the at least one first motion vector 462 and the at least one second motion vector 464.

As shown in FIG. 4B, at least one fourth motion vector 472 can be determined for the motion estimation of a second frame subset of the frame 452. As shown in FIG. 4B, in some aspects, the second frame subset can be positioned below the first frame subset and over the inserted texture image 460 in a portion of the repeating background 454. In other aspects, the second frame subset can be positioned in another portion of the frame 452. In some aspects, the at least one fourth motion vector 472 can be determined when the difference 466 is greater than or equal to the threshold. As illustrated in FIG. 4B, in some aspects, the second frame subset can be different from the first frame subset. Further, the at least one third motion vector can be determined based on the at least one first motion vector 462, the at least one second motion vector 464, and the at least one fourth motion vector 472.

In some aspects, perturbing the image data in the first frame subset can include modifying the magnitudes of the red (R), green (G), and blue (B) (RGB) values of the image data by, e.g., a value of m. In some cases, m may not be equal to zero. Further, m can be less than or equal to 5% and greater than or equal to -5%. Additionally, the image data in the first frame subset can be perturbed by a perturbation amount, and the perturbation amount can be adjusted based on a local variance of the RGB values of the image data. The local variance of the RGB values of the image data can also be based on the first motion estimate for the image data in the first frame subset. Further, the local variance of the RGB values of the image data can be approximated based on a previous variance of the RGB values. In other aspects, the image data in the first frame subset is perturbed by a perturbation amount, and the perturbation amount is adjusted based on a local variance of the luminance (Y), first chrominance (U), and second chrominance (V) (YUV) values of the image data. In addition, the image data can be converted from RGB image data into YUV image data, where the image data can be perturbed before the RGB image data is converted into YUV image data.
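A minimal sketch of such a perturbation, assuming m is applied as a relative scale factor to each channel (the disclosure leaves the exact application of m open), might look like:

```c
typedef struct {
    float r, g, b;  /* normalized channel values in [0, 1] */
} RgbPixel;

/* Scales each channel by (1 + m), where m is a small nonzero fraction,
 * e.g. -0.05 <= m <= 0.05 (within +/-5%), clamping the result to [0, 1]. */
RgbPixel perturb_rgb(RgbPixel p, float m) {
    float s = 1.0f + m;
    RgbPixel out = {p.r * s, p.g * s, p.b * s};
    if (out.r > 1.0f) out.r = 1.0f;
    if (out.g > 1.0f) out.g = 1.0f;
    if (out.b > 1.0f) out.b = 1.0f;
    if (out.r < 0.0f) out.r = 0.0f;
    if (out.g < 0.0f) out.g = 0.0f;
    if (out.b < 0.0f) out.b = 0.0f;
    return out;
}
```

A variance-adjusted variant would simply compute m per region from the local RGB variance before calling this helper.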

FIGs. 5A and 5B illustrate examples of motion estimation 500 and 550 according to the present disclosure. As shown in FIGs. 5A and 5B, the motion estimation 500 and the motion estimation 550 include corresponding motion vectors to assist with motion estimation. More specifically, FIG. 5A illustrates motion estimation 500 without a perturbation pass. Accordingly, the motion estimation 500 can include incorrect motion regions 502 and 504. FIG. 5B shows motion estimation 550 including a perturbation pass. As such, the motion estimation 550 may not include erroneous motion regions, as these motion vectors have been identified as incorrect and removed.

FIG. 6 illustrates an example flowchart 600 of an example method in accordance with one or more techniques of the present disclosure. The method can be performed by a device or a GPU. The method can assist a GPU in processing motion estimation.

At 602, as described in connection with the examples in FIGs. 2, 3, 4A, and 5B, the GPU can generate at least one first motion vector in a first subset of a frame. For example, the first motion vector can provide a first motion estimate for the image data in the first subset of the frame. At 604, as described in connection with the examples in FIGs. 2, 3, 4A, and 5B, the GPU can perturb the image data. In some aspects, the GPU can perturb all of the image data in the frame. In other aspects, the GPU can perturb the image data in the first subset of the frame. In yet other aspects, the GPU can perturb the image data outside of the first subset of the frame. Additionally, at 606, as described in connection with the examples in FIGs. 2, 3, 4A, and 5B, the GPU can generate at least one second motion vector based on the perturbed image data. In some aspects, the second motion vector can provide a second motion estimate for the image data. At 608, as described in connection with the examples in FIGs. 2, 3, 4A, and 5B, the GPU can compare the first motion vector with the second motion vector. Further, at 610, as described in connection with the examples in FIGs. 2, 3, 4A, 4B, and 5B, the GPU can determine at least one third motion vector for the motion estimation of the image data based on the comparison between the first motion vector and the second motion vector.
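The steps 602-610 can be tied together in a single driver loop over the whole vector field; the sketch below reuses the array layout of the earlier code listing, and the function name is an illustrative assumption:

```c
#include <math.h>

typedef struct {
    float xMagnitude;
    float yMagnitude;
} MotionVector;

/* Resolves a final motion field from the unperturbed pass (602) and the
 * perturbed pass (604-606): compare per vector (608) and select (610). */
void resolve_motion_field(const MotionVector *unperturbed,
                          const MotionVector *perturbed,
                          MotionVector *final,
                          int arrayWidth, int arrayHeight, float threshold) {
    for (int y = 0; y < arrayHeight; y++) {
        for (int x = 0; x < arrayWidth; x++) {
            int idx = (y * arrayWidth) + x;
            float dx = fabsf(unperturbed[idx].xMagnitude
                             - perturbed[idx].xMagnitude);
            float dy = fabsf(unperturbed[idx].yMagnitude
                             - perturbed[idx].yMagnitude);
            if (dx > threshold || dy > threshold) {
                final[idx].xMagnitude = 0.0f;  /* passes disagree: drop */
                final[idx].yMagnitude = 0.0f;
            } else {
                final[idx] = unperturbed[idx]; /* passes agree: keep */
            }
        }
    }
}
```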

In some aspects, as described in connection with the example in FIG. 2, when comparing the first motion vector with the second motion vector, the GPU can determine a difference between the first motion vector and the second motion vector. The difference can also be referred to as a delta analysis. In some cases, the difference can be an absolute value. Additionally, as further described in connection with the example in FIG. 2, the GPU can determine whether the difference is less than a threshold. Further, as described in connection with the examples in FIGs. 2, 3, 4A, 4B, and 5B, when the difference is less than the threshold, the third motion vector can be set to the first motion vector. As described in connection with the example in FIG. 2, when the difference is greater than the threshold, the GPU can also determine at least one fourth motion vector for the motion estimation of a second subset of the frame. Further, the second subset of the frame can be different from the first subset of the frame. In some aspects, as described in connection with the examples in FIGs. 2, 3, 4A, 4B, and 5B, the third motion vector can be determined based on the determined first motion vector, second motion vector, and fourth motion vector.

Additionally, in some aspects, as described in connection with the examples in FIGs. 2, 3, 4A, 4B, and 5B, when the difference is greater than the threshold, the third motion vector can be set to have a zero motion value. In some aspects, as described in connection with the examples in FIGs. 2, 3, 4A, 4B, and 5B, the GPU can determine a second difference between the third motion vector and one or more neighboring vectors surrounding the third motion vector. In these aspects, when the difference is greater than the threshold and the second difference is less than the threshold, the third motion vector can be set based on the one or more neighboring vectors.

In other aspects, as described in connection with the examples in FIGs. 2, 3, 4A, 4B, and 5B, the aforementioned threshold can also be based on a motion estimation tolerance of the image data in the first subset of the frame. Additionally, as described in connection with the examples in FIGs. 2, 3, 4A, 4B, and 5B, perturbing the image data in the first subset of the frame can include modifying the magnitudes of the RGB values of the image data by m. In some cases, m may not be equal to zero. Further, the magnitude of m can be at most 5%, such that m is less than or equal to 5% and greater than or equal to -5%.

In other aspects, as described in connection with the examples in FIGs. 2, 3, 4A, 4B, and 5B, the image data in the first subset of the frame can be perturbed by a perturbation amount, where the perturbation amount can be adjusted based on a local variance of the RGB values of the image data. As described in connection with the examples in FIGs. 2, 3, 4A, 4B, and 5B, the local variance of the RGB values of the image data can be based on the first motion estimate for the image data in the first subset of the frame. Further, as described in connection with the examples in FIGs. 2, 3, 4A, 4B, and 5B, the local variance of the RGB values of the image data can be approximated based on a previous variance of the RGB values. In other aspects, as described in connection with the examples in FIGs. 2, 3, 4A, 4B, and 5B, the image data in the first subset of the frame can be perturbed by a perturbation amount, where the perturbation amount can be adjusted based on a local variance of the YUV values of the image data. In other aspects, as described in connection with the examples in FIGs. 2, 3, 4A, 4B, and 5B, the GPU can convert the image data from RGB image data into YUV image data, where the image data is perturbed before the RGB image data is converted into YUV image data.
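To illustrate the ordering, perturbation applied before an RGB-to-YUV conversion can be sketched as follows; the full-range BT.601 conversion coefficients are an assumption for illustration, as the disclosure does not name a specific color-space matrix:

```c
typedef struct { float r, g, b; } Rgb;
typedef struct { float y, u, v; } Yuv;

/* Full-range BT.601 RGB -> YUV conversion (assumed for illustration). */
Yuv rgb_to_yuv(Rgb p) {
    Yuv out;
    out.y =  0.299f * p.r + 0.587f * p.g + 0.114f * p.b;
    out.u = -0.169f * p.r - 0.331f * p.g + 0.500f * p.b;
    out.v =  0.500f * p.r - 0.419f * p.g - 0.081f * p.b;
    return out;
}

/* Perturb first, then convert: the perturbation is applied in RGB space
 * before any YUV data exists, matching the ordering described above. */
Yuv perturb_then_convert(Rgb p, float m) {
    Rgb q = {p.r * (1.0f + m), p.g * (1.0f + m), p.b * (1.0f + m)};
    return rgb_to_yuv(q);
}
```

Because the conversion is linear, a uniform relative perturbation in RGB scales the luminance proportionally while leaving neutral-gray chroma near zero.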

In one configuration, an apparatus for motion estimation is provided. The apparatus can be a motion estimation device in a GPU. In one aspect, the motion estimation device can be the processing unit 120 within the device 104, or can be some other hardware within the device 104 or another device. The apparatus can include means for generating at least one first motion vector in a first subset of a frame. The first motion vector can provide a first motion estimate for the image data in the first subset of the frame. The apparatus can also include means for perturbing the image data in the first subset of the frame. Additionally, the apparatus can include means for generating at least one second motion vector based on the perturbed image data in the first subset of the frame. In some aspects, the second motion vector can provide a second motion estimate for the image data in the first subset of the frame. The apparatus can also include means for comparing the first motion vector with the second motion vector. Further, the apparatus can include means for determining, based on the comparison between the first motion vector and the second motion vector, at least one third motion vector for the motion estimation of the image data in the first subset of the frame.

In some aspects, the means for comparing the first motion vector with the second motion vector can be configured to determine a difference between the first motion vector and the second motion vector, and to determine whether the difference is less than a threshold. The apparatus can also include means for determining, when the difference is greater than the threshold, at least one fourth motion vector for the motion estimation of a second subset of the frame. The second subset of the frame can be different from the first subset of the frame, where the third motion vector is determined based on the determined first motion vector, second motion vector, and fourth motion vector. Additionally, the apparatus can include means for determining a second difference between the third motion vector and one or more neighboring vectors surrounding the third motion vector. When the difference is greater than the threshold and the second difference is less than the threshold, the third motion vector can be set based on the one or more neighboring vectors. Further, the means for perturbing the image data in the first subset of the frame can be configured to modify the magnitudes of the RGB values of the image data by m, where m is not equal to zero. In addition, the apparatus can include means for converting the image data from RGB image data into YUV image data, where the image data is perturbed before the RGB image data is converted into YUV image data.

The subject matter described herein can be implemented to realize one or more potential benefits or advantages. For example, the described techniques can be used by a GPU to reduce the amount of rendering workload. The systems described herein can utilize motion estimation in order to extrapolate frames from previously rendered content. Thereby, this can allow the GPU to render frames at a reduced rate, with extrapolated frames displayed in place of rendered content. Accordingly, the present disclosure can reduce the rendering workload by rendering fewer frames on the GPU and using motion estimation to fill the gaps in the images or motion vectors. As a result, the present disclosure can save power and improve performance at the GPU, as well as render higher quality frames at the GPU. In addition, the present disclosure can also reduce the cost of rendering content.

In accordance with this disclosure, the term "or" can be interpreted as "and/or" where context does not dictate otherwise. Additionally, while phrases such as "one or more" or "at least one" may have been used for some features disclosed herein but not others, the features for which such language was not used can be interpreted to have such a meaning implied, where context does not dictate otherwise.

In one or more examples, the functions described herein can be implemented in hardware, software, firmware, or any combination thereof. For example, although the term "processing unit" has been used throughout this disclosure, such a processing unit can be implemented in hardware, software, firmware, or any combination thereof. If any function, processing unit, technique described herein, or other module is implemented in software, the function, processing unit, technique described herein, or other module can be stored on, or transmitted over, a computer-readable medium as one or more instructions or code. Computer-readable media can include both computer data storage media and communication media, including any medium that facilitates transfer of a computer program from one place to another. In this manner, computer-readable media generally can correspond to (1) a tangible computer-readable storage medium, which is non-transitory, or (2) a communication medium such as a signal or carrier wave. Data storage media can be any available media that can be accessed by one or more computers or one or more processors to retrieve instructions, code, and/or data structures for implementation of the techniques described in this disclosure. By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage, or other magnetic storage devices.
Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media. A computer program product can include a computer-readable medium.

The code can be executed by one or more processors, such as one or more digital signal processors (DSPs), general purpose microprocessors, application specific integrated circuits (ASICs), arithmetic logic units (ALUs), field programmable logic arrays (FPGAs), or other equivalent integrated or discrete logic circuitry. Accordingly, the term "processor", as used herein, can refer to any of the foregoing structures or any other structure suitable for implementation of the techniques described herein. Also, the techniques can be fully implemented in one or more circuits or logic elements.

The techniques of this disclosure can be implemented in a wide variety of devices or apparatuses, including a wireless handset, an integrated circuit (IC), or a set of ICs (e.g., a chip set). Various components, modules, or units are described in this disclosure to emphasize functional aspects of devices configured to perform the disclosed techniques, but such components, modules, or units do not necessarily require realization by different hardware units. Rather, as described above, various units can be combined in any hardware unit or provided by a collection of interoperative hardware units, including one or more processors as described above, in conjunction with suitable software and/or firmware.

Various examples have been described. These and other examples are within the scope of the following claims.

100: content generation and coding system
102: source device
104: destination device
106: processing unit
107-1: graphics processing pipeline
107-2: graphics processing pipeline
108: content encoder
109: internal memory
110: system memory
111: internal memory
112: communication interface
114: receiver
116: transmitter
118: transceiver
120: processing unit
121: internal memory
122: content decoder
123: internal memory
124: system memory
126: communication interface
127: display processor
128: receiver
130: transmitter
131: display
132: transceiver
134: link
198: determination component
200: motion estimation
202: rendered content block
204: RGB perturbed texture block
206: color space conversion pass
208: color space conversion pass
210: motion estimation
212: motion estimation
214: delta or difference analysis
216: motion vector computation
300: image
302: repeating background
304: inserted texture image
310: image
312: repeating background
314: inserted texture image
400: motion estimation
402: frame
404: repeating background
410: inserted texture image
412: first motion vector
414: second motion vector
416: difference
450: motion estimation
452: frame
454: repeating background
460: inserted texture image
462: first motion vector
464: second motion vector
466: difference
472: fourth motion vector
500: motion estimation
502: incorrect motion region
504: incorrect motion region
550: motion estimation
600: flowchart
602: step
604: step
606: step
608: step
610: step

FIG. 1 is a block diagram illustrating an example content generation and coding system in accordance with the techniques of this disclosure.

FIG. 2 illustrates an example of motion estimation in accordance with this disclosure.

FIG. 3A and FIG. 3B illustrate examples of images on which motion estimation is performed in accordance with this disclosure.

FIG. 4A illustrates another example of motion estimation in accordance with this disclosure.

FIG. 4B illustrates another example of motion estimation in accordance with this disclosure.

FIG. 5A and FIG. 5B illustrate other examples of motion estimation in accordance with this disclosure.

FIG. 6 illustrates an example flowchart of an example method in accordance with one or more techniques of this disclosure.


Claims (30)

1. A method of motion estimation in a graphics processing unit (GPU), comprising:
generating at least one first motion vector in a first subset of a frame, the at least one first motion vector providing a first motion estimate for image data in the first subset of the frame;
perturbing the image data in the first subset of the frame;
generating at least one second motion vector based on the perturbed image data in the first subset of the frame, the at least one second motion vector providing a second motion estimate for the image data in the first subset of the frame;
comparing the at least one first motion vector with the at least one second motion vector; and
determining, based on the comparison between the at least one first motion vector and the at least one second motion vector, at least one third motion vector for the motion estimation of the image data in the first subset of the frame.

2. The method of claim 1, wherein comparing the at least one first motion vector with the at least one second motion vector comprises:
determining a difference between the at least one first motion vector and the at least one second motion vector; and
determining whether the difference is less than a threshold.

3. The method of claim 2, wherein, when the difference is less than the threshold, the at least one third motion vector is set to the at least one first motion vector.
4. The method of claim 3, further comprising, when the difference is greater than the threshold, determining at least one fourth motion vector for motion estimation of a second subset of the frame, the second subset of the frame being different from the first subset of the frame, wherein the at least one third motion vector is determined based on the determined at least one first motion vector, at least one second motion vector, and at least one fourth motion vector.

5. The method of claim 2, wherein, when the difference is greater than the threshold, the at least one third motion vector is set to have a zero motion value.

6. The method of claim 2, further comprising: determining a second difference between the at least one third motion vector and one or more neighboring vectors surrounding the at least one third motion vector; wherein, when the difference is greater than the threshold and the second difference is less than the threshold, the at least one third motion vector is set based on the one or more neighboring vectors.

7. The method of claim 2, wherein the threshold is based on a motion estimation tolerance for the image data in the first subset of the frame.

8. The method of claim 1, wherein perturbing the image data in the first subset of the frame comprises modifying a magnitude of red (R), green (G), and blue (B) (RGB) values of the image data by m, wherein m is not equal to 0.

9. The method of claim 8, wherein m is less than or equal to 5% and greater than or equal to -5%.
10. The method of claim 1, wherein the image data in the first subset of the frame is perturbed by a perturbation amount, and the perturbation amount is adjusted based on a local variance of red (R), green (G), and blue (B) (RGB) values of the image data.

11. The method of claim 10, wherein the local variance of the RGB values of the image data is based on the first motion estimation for the image data in the first subset of the frame.

12. The method of claim 10, wherein the local variance of the RGB values of the image data is approximated based on a previous variance of the RGB values.

13. The method of claim 1, wherein the image data in the first subset of the frame is perturbed by a perturbation amount, and the perturbation amount is adjusted based on a local variance of luminance (Y), first chrominance (U), and second chrominance (V) (YUV) values of the image data.

14. The method of claim 1, further comprising converting the image data from red (R), green (G), blue (B) (RGB) image data to luminance (Y), first chrominance (U), second chrominance (V) (YUV) image data, wherein the image data is perturbed before the RGB image data is converted to the YUV image data.
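The perturbation of claims 8-13 — a nonzero magnitude m bounded to ±5%, optionally adjusted by local variance — might look like the following. This is a hedged sketch: the claims do not specify the functional form, so the variance-based damping used here (perturb less where the data already varies strongly) is one illustrative choice, and `perturb_rgb` is a hypothetical name.

```python
import numpy as np

def perturb_rgb(rgb, m=0.03, local_variance=None):
    """Scale RGB values by a nonzero magnitude m (|m| <= 5% per claim 9),
    optionally attenuated by the local variance of the data (claim 10)."""
    assert m != 0 and abs(m) <= 0.05, "claims 8-9: m nonzero, within +/-5%"
    scale = 1.0 + m
    if local_variance is not None:
        # Illustrative adjustment: high-variance regions receive a smaller
        # perturbation, since they are already rich in matching features.
        scale = 1.0 + m / (1.0 + local_variance)
    return np.clip(rgb.astype(np.float32) * scale, 0, 255)
```

Claim 12's approximation from a previous variance would let an implementation reuse statistics from the prior frame instead of recomputing them per subset.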
15. An apparatus for motion estimation in a graphics processing unit (GPU), comprising: a memory; and at least one processor coupled to the memory and configured to: generate at least one first motion vector in a first subset of a frame, the at least one first motion vector providing a first motion estimation for image data in the first subset of the frame; perturb the image data in the first subset of the frame; generate at least one second motion vector based on the perturbed image data in the first subset of the frame, the at least one second motion vector providing a second motion estimation for the image data in the first subset of the frame; compare the at least one first motion vector with the at least one second motion vector; and determine, based on the comparison between the at least one first motion vector and the at least one second motion vector, at least one third motion vector for the motion estimation of the image data in the first subset of the frame.

16. The apparatus of claim 15, wherein, to compare the at least one first motion vector with the at least one second motion vector, the at least one processor is further configured to: determine a difference between the at least one first motion vector and the at least one second motion vector; and determine whether the difference is less than a threshold.
17. The apparatus of claim 16, wherein, when the difference is less than the threshold, the at least one third motion vector is set to the at least one first motion vector.

18. The apparatus of claim 17, wherein the at least one processor is further configured to: when the difference is greater than the threshold, determine at least one fourth motion vector for motion estimation of a second subset of the frame, the second subset of the frame being different from the first subset of the frame, wherein the at least one third motion vector is determined based on the determined at least one first motion vector, at least one second motion vector, and at least one fourth motion vector.

19. The apparatus of claim 16, wherein, when the difference is greater than the threshold, the at least one third motion vector is set to have a zero motion value.

20. The apparatus of claim 16, wherein the at least one processor is further configured to: determine a second difference between the at least one third motion vector and one or more neighboring vectors surrounding the at least one third motion vector; wherein, when the difference is greater than the threshold and the second difference is less than the threshold, the at least one third motion vector is set based on the one or more neighboring vectors.

21. The apparatus of claim 16, wherein the threshold is based on a motion estimation tolerance for the image data in the first subset of the frame.
22. The apparatus of claim 15, wherein, to perturb the image data in the first subset of the frame, the at least one processor is further configured to modify a magnitude of red (R), green (G), and blue (B) (RGB) values of the image data by m, wherein m is not equal to 0.

23. The apparatus of claim 22, wherein m is less than or equal to 5% and greater than or equal to -5%.

24. The apparatus of claim 15, wherein the image data in the first subset of the frame is perturbed by a perturbation amount, and the perturbation amount is adjusted based on a local variance of red (R), green (G), and blue (B) (RGB) values of the image data.

25. The apparatus of claim 24, wherein the local variance of the RGB values of the image data is based on the first motion estimation for the image data in the first subset of the frame.

26. The apparatus of claim 24, wherein the local variance of the RGB values of the image data is approximated based on a previous variance of the RGB values.

27. The apparatus of claim 15, wherein the image data in the first subset of the frame is perturbed by a perturbation amount, and the perturbation amount is adjusted based on a local variance of luminance (Y), first chrominance (U), and second chrominance (V) (YUV) values of the image data.
28. The apparatus of claim 15, wherein the at least one processor is further configured to convert the image data from red (R), green (G), blue (B) (RGB) image data to luminance (Y), first chrominance (U), second chrominance (V) (YUV) image data, wherein the image data is perturbed before the RGB image data is converted to the YUV image data.

29. An apparatus for motion estimation in a graphics processing unit (GPU), comprising: means for generating at least one first motion vector in a first subset of a frame, the at least one first motion vector providing a first motion estimation for image data in the first subset of the frame; means for perturbing the image data in the first subset of the frame; means for generating at least one second motion vector based on the perturbed image data in the first subset of the frame, the at least one second motion vector providing a second motion estimation for the image data in the first subset of the frame; means for comparing the at least one first motion vector with the at least one second motion vector; and means for determining, based on the comparison between the at least one first motion vector and the at least one second motion vector, at least one third motion vector for the motion estimation of the image data in the first subset of the frame.
30. A computer-readable medium storing computer-executable code for motion estimation in a graphics processing unit (GPU), comprising code to: generate at least one first motion vector in a first subset of a frame, the at least one first motion vector providing a first motion estimation for image data in the first subset of the frame; perturb the image data in the first subset of the frame; generate at least one second motion vector based on the perturbed image data in the first subset of the frame, the at least one second motion vector providing a second motion estimation for the image data in the first subset of the frame; compare the at least one first motion vector with the at least one second motion vector; and determine, based on the comparison between the at least one first motion vector and the at least one second motion vector, at least one third motion vector for the motion estimation of the image data in the first subset of the frame.
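Claims 14 and 28 fix an ordering: perturb first, convert RGB to YUV second. A minimal sketch of that ordering follows; the BT.601 full-range conversion matrix used here is an assumption (the claims do not name a color-conversion standard), and the uniform scaling by m is again only an illustrative perturbation.

```python
import numpy as np

def rgb_to_yuv_with_perturbation(rgb, m=0.02):
    """Perturb the RGB image data, then convert it to YUV (claim 14 order).
    Uses the BT.601 full-range matrix as an illustrative conversion."""
    # Perturbation happens before the color-space conversion.
    perturbed = rgb.astype(np.float32) * (1.0 + m)
    r, g, b = perturbed[..., 0], perturbed[..., 1], perturbed[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b          # luminance (Y)
    u = -0.14713 * r - 0.28886 * g + 0.436 * b     # first chrominance (U)
    v = 0.615 * r - 0.51499 * g - 0.10001 * b      # second chrominance (V)
    return np.stack([y, u, v], axis=-1)
```

Perturbing before conversion keeps the perturbation in the domain the claims bound (RGB, claims 8-9) while letting the motion estimator consume the YUV data it typically operates on.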
TW108144969A 2018-12-10 2019-12-09 Motion estimation through input perturbation TW202029121A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US16/215,547 2018-12-10
US16/215,547 US11388432B2 (en) 2018-12-10 2018-12-10 Motion estimation through input perturbation

Publications (1)

Publication Number Publication Date
TW202029121A true TW202029121A (en) 2020-08-01

Family

ID=69160257

Family Applications (1)

Application Number Title Priority Date Filing Date
TW108144969A TW202029121A (en) 2018-12-10 2019-12-09 Motion estimation through input perturbation

Country Status (5)

Country Link
US (1) US11388432B2 (en)
EP (1) EP3895124A1 (en)
CN (1) CN113168702A (en)
TW (1) TW202029121A (en)
WO (1) WO2020123339A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11615537B2 (en) * 2020-11-02 2023-03-28 Qualcomm Incorporated Methods and apparatus for motion estimation based on region discontinuity

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6157677A (en) 1995-03-22 2000-12-05 Idt International Digital Technologies Deutschland Gmbh Method and apparatus for coordination of motion determination over multiple frames
US6546117B1 (en) * 1999-06-10 2003-04-08 University Of Washington Video object segmentation using active contour modelling with global relaxation
US7536031B2 (en) 2003-09-02 2009-05-19 Nxp B.V. Temporal interpolation of a pixel on basis of occlusion detection
JP2006128920A (en) * 2004-10-27 2006-05-18 Matsushita Electric Ind Co Ltd Method and device for concealing error in image signal decoding system
US8824831B2 (en) * 2007-05-25 2014-09-02 Qualcomm Technologies, Inc. Advanced noise reduction in digital cameras
JP2011191973A (en) * 2010-03-15 2011-09-29 Fujifilm Corp Device and method for measurement of motion vector
JP5387520B2 (en) * 2010-06-25 2014-01-15 ソニー株式会社 Information processing apparatus and information processing method
JP5816511B2 (en) * 2011-10-04 2015-11-18 オリンパス株式会社 Image processing apparatus, endoscope apparatus, and operation method of image processing apparatus
US9692939B2 (en) * 2013-05-29 2017-06-27 Yeda Research And Development Co. Ltd. Device, system, and method of blind deblurring and blind super-resolution utilizing internal patch recurrence
US10051274B2 (en) * 2013-10-25 2018-08-14 Canon Kabushiki Kaisha Image processing apparatus, method of calculating information according to motion of frame, and storage medium
US10055852B2 (en) 2016-04-04 2018-08-21 Sony Corporation Image processing system and method for detection of objects in motion
US11750832B2 (en) * 2017-11-02 2023-09-05 Hfi Innovation Inc. Method and apparatus for video coding
CN107767343B (en) * 2017-11-09 2021-08-31 京东方科技集团股份有限公司 Image processing method, processing device and processing equipment

Also Published As

Publication number Publication date
US20200186817A1 (en) 2020-06-11
US11388432B2 (en) 2022-07-12
EP3895124A1 (en) 2021-10-20
CN113168702A (en) 2021-07-23
WO2020123339A1 (en) 2020-06-18
