TW201719570A - Apparatus and method for pattern driven self-adaptive virtual graphics processor units - Google Patents

Apparatus and method for pattern driven self-adaptive virtual graphics processor units

Info

Publication number
TW201719570A
Authority
TW
Taiwan
Prior art keywords: gpu, command, cost, data, processor
Prior art date
Application number
TW105125322A
Other languages: Chinese (zh)
Other versions: TWI706373B (en)
Inventor
鄭曉
董耀祖
張玉磊
Original Assignee
英特爾股份有限公司
Priority date
Filing date
Publication date
Application filed by 英特爾股份有限公司
Publication of TW201719570A
Application granted
Publication of TWI706373B


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5061Partitioning or combining of resources
    • G06F9/5077Logical partitioning of resources; Management or configuration of virtualized resources

Abstract

An apparatus and method are described for a pattern driven self-adaptive graphics processing unit (GPU). For example, one embodiment of an apparatus comprises: a graphics processing unit (GPU) to process graphics commands and responsively render a plurality of image frames; a hypervisor to virtualize the GPU to share the GPU among a plurality of virtual machines (VMs), the hypervisor subdividing GPU processing for each VM into a plurality of quanta; and scheduling logic to monitor GPU utilization including GPU busy time and/or GPU idle time for each VM during its allocated quantum and to store utilization data reflecting the GPU utilization during each quantum; the scheduling logic to predict a cost of waiting within a given quantum of a first VM based on the utilization data and to further predict a cost of yielding to a second VM, the scheduling logic to yield the GPU to the second VM if the cost of waiting is greater than the cost of yielding to the second VM.
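By way of illustration only, the following sketch (not part of the patent disclosure) shows the wait-versus-yield comparison described in the abstract. The function names and the simple mean-based cost predictor are assumptions introduced here.

```python
# Minimal sketch of the yield-vs-wait decision described in the abstract.
# All names and the cost model are illustrative assumptions, not the patent's code.

def should_yield(predicted_wait_cost_us: float, predicted_yield_cost_us: float) -> bool:
    """Yield the GPU to the next VM when waiting is predicted to cost more."""
    return predicted_wait_cost_us > predicted_yield_cost_us

def predict_wait_cost(idle_samples_us: list[float]) -> float:
    """Estimate how long the current VM is likely to keep the GPU idle,
    using the mean of recently observed idle periods as a crude predictor."""
    return sum(idle_samples_us) / len(idle_samples_us) if idle_samples_us else 0.0

# Example: recent idle periods suggest roughly 800 us of idling, while the
# assumed cost of yielding (a context switch to the second VM) is 300 us.
if should_yield(predict_wait_cost([750.0, 820.0, 830.0]), 300.0):
    print("yield GPU to next VM")
else:
    print("keep waiting within the current quantum")
```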

Description

Apparatus and method for pattern driven self-adaptive virtual graphics processing units

The present invention relates generally to the field of computer processors. More specifically, the present invention relates to an apparatus and method for pattern driven self-adaptive virtual graphics processing units (vGPUs).

Various approaches have been developed for graphics virtualization. For example, one approach assigns the power of an entire GPU directly to a single user, passing native driver functionality through the hypervisor without any limitation. The common name for this version of graphics virtualization is "Direct Graphics Adaptor" (vDGA). Another virtualization approach requires a virtual graphics driver in the virtual machine and uses API forwarding techniques to interface with the graphics hardware. In a more recent approach, in which the GPU is shared among multiple concurrent users, each virtual desktop machine maintains a copy of the native graphics driver. On a time-sliced basis, an agent in the hypervisor assigns the full GPU resource directly to each virtual machine. Thus, during its time slice, each virtual machine receives a fully dedicated GPU, while from the system-wide perspective multiple virtual machines share a single GPU.
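As an illustrative sketch of the time-sliced sharing described above (not the patented scheduler), the following assumed round-robin loop hands the full GPU to each virtual machine for a fixed quantum; the class names and quantum length are invented for the example.

```python
from collections import deque
from dataclasses import dataclass, field

@dataclass
class VirtualMachine:
    name: str
    pending_commands: deque = field(default_factory=deque)

def run_quantum(vm: VirtualMachine, quantum_ms: float) -> None:
    # Placeholder for "assign the full physical GPU to this VM for its time slice".
    print(f"{vm.name}: owns the physical GPU for {quantum_ms} ms")

def round_robin(vms: list[VirtualMachine], quantum_ms: float, rounds: int) -> None:
    """Each VM sees a dedicated GPU during its slice; system-wide the GPU is shared."""
    for _ in range(rounds):
        for vm in vms:
            run_quantum(vm, quantum_ms)

round_robin([VirtualMachine("VM-A"), VirtualMachine("VM-B")], quantum_ms=16.0, rounds=2)
```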

100‧‧‧Processing system
102‧‧‧Processor
104‧‧‧Cache memory
106‧‧‧Register file
107‧‧‧Processor core
108‧‧‧Graphics processor
109‧‧‧Instruction set
110‧‧‧Processor bus
112‧‧‧External graphics processor
116‧‧‧Memory controller hub
120‧‧‧Memory device
121‧‧‧Instructions
122‧‧‧Data
124‧‧‧Data storage device
126‧‧‧Wireless transceiver
128‧‧‧Firmware interface
130‧‧‧Input/output (I/O) controller hub
134‧‧‧Network controller
140‧‧‧Legacy I/O controller
142‧‧‧Universal Serial Bus (USB) controller
144‧‧‧Keyboard and mouse
146‧‧‧Audio controller
200‧‧‧Processor
202A-202N‧‧‧Processor cores
204A-204N‧‧‧Internal cache units
206‧‧‧Shared cache unit
208‧‧‧Graphics processor
210‧‧‧System agent core
211‧‧‧Display controller
212‧‧‧Ring-based interconnect unit/ring interconnect
213‧‧‧I/O link
214‧‧‧Memory controller
216‧‧‧Bus controller units
218‧‧‧Embedded memory module
300‧‧‧Graphics processor
302‧‧‧Display controller
304‧‧‧Block image transfer (BLIT) engine
306‧‧‧Video codec engine
310‧‧‧Graphics processing engine (GPE)
312‧‧‧3D pipeline
314‧‧‧Memory interface
315‧‧‧3D/media subsystem
316‧‧‧Media pipeline
320‧‧‧Display device
410‧‧‧Graphics processing engine
403‧‧‧Command streamer
412‧‧‧3D pipeline
414‧‧‧Execution unit array
416‧‧‧Media pipeline
430‧‧‧Sampling engine
432‧‧‧De-noise/de-interlace module
434‧‧‧Motion estimation module
436‧‧‧Image scaling and filtering module
444‧‧‧Data port
500‧‧‧Graphics processor
502‧‧‧Ring interconnect
503‧‧‧Command streamer
504‧‧‧Pipeline front-end
534‧‧‧Video front end
536‧‧‧Geometry pipeline
530‧‧‧Video Quality Engine (VQE)
533‧‧‧Multi-format encode/decode (MFX)
537‧‧‧Media engine
550A-550N‧‧‧Sub-cores
552A-552N‧‧‧Execution units
554A-554N‧‧‧Media/texture samplers
570A-570N‧‧‧Shared resources
560A-560N‧‧‧Sub-cores
562A-562N‧‧‧Execution units
564A-564N‧‧‧Samplers
580A-580N‧‧‧Graphics cores
600‧‧‧Thread execution logic
602‧‧‧Pixel shader
604‧‧‧Thread dispatcher
606‧‧‧Instruction cache
608A-608N‧‧‧Execution units
610‧‧‧Sampler
612‧‧‧Data cache
614‧‧‧Data port
800‧‧‧Graphics processor
802‧‧‧Ring interconnect
803‧‧‧Command streamer
805‧‧‧Vertex fetcher
807‧‧‧Vertex shader
811‧‧‧Hull shader
813‧‧‧Tessellator
817‧‧‧Domain shader
819‧‧‧Geometry shader
820‧‧‧Graphics pipeline
823‧‧‧Stream output unit
829‧‧‧Clipper
830‧‧‧Media pipeline
834‧‧‧Video front end
837‧‧‧Media engine
831‧‧‧Thread dispatcher
840‧‧‧Display engine
841‧‧‧2D engine
843‧‧‧Display controller
850‧‧‧Thread execution logic
851‧‧‧L1 cache
852A, 852B‧‧‧Execution units
854‧‧‧Texture and media sampler
856‧‧‧Data port
858‧‧‧Texture/sampler cache
870‧‧‧Render output pipeline
873‧‧‧Rasterization and depth test component
875‧‧‧L3 cache
877‧‧‧Pixel operation component
878‧‧‧Render cache
879‧‧‧Depth cache
1000‧‧‧Data processing system
1010‧‧‧3D graphics application
1012‧‧‧Shader instructions
1014‧‧‧Executable instructions
1016‧‧‧Graphics objects
1020‧‧‧Operating system
1022‧‧‧Graphics API
1024‧‧‧Front-end shader compiler
1026‧‧‧User-mode graphics driver
1027‧‧‧Back-end shader compiler
1028‧‧‧Operating system kernel-mode functions
1029‧‧‧Kernel-mode graphics driver
1030‧‧‧Processor
1032‧‧‧Graphics processor
1034‧‧‧General-purpose processor cores
1050‧‧‧System memory
1100‧‧‧IP core development system
1110‧‧‧Software simulation
1115‧‧‧RTL design
1120‧‧‧Hardware model
1130‧‧‧Design facility
1140‧‧‧Non-volatile memory
1150‧‧‧Wired connection
1160‧‧‧Wireless connection
1165‧‧‧Fabrication facility
1200‧‧‧System-on-chip integrated circuit
1205‧‧‧Application processor
1210‧‧‧Graphics processor
1215‧‧‧Image processor
1220‧‧‧Video processor
1225‧‧‧USB controller
1230‧‧‧UART controller
1235‧‧‧SPI/SDIO controller
1240‧‧‧I2S/I2C controller
1245‧‧‧Display device
1250‧‧‧High-Definition Multimedia Interface (HDMI) controller
1255‧‧‧Mobile Industry Processor Interface (MIPI) display interface
1260‧‧‧Flash memory subsystem
1265‧‧‧Memory controller
1270‧‧‧Embedded security engine
1400‧‧‧Exemplary embodiment
1410‧‧‧Hypervisor
1412‧‧‧GPU scheduler
1470‧‧‧PDSAS logic
1418‧‧‧Command parser
1420‧‧‧GPU
1460A, 1460B‧‧‧Virtual GPUs (vGPUs)
1430‧‧‧VM
1440‧‧‧VM

A better understanding of the present invention can be obtained from the following detailed description taken in conjunction with the accompanying drawings, in which: FIG. 1 is a block diagram of an embodiment of a computer system having a processor with one or more processor cores and graphics processors; FIG. 2 is a block diagram of one embodiment of a processor having one or more processor cores, an integrated memory controller, and an integrated graphics processor; FIG. 3 is a block diagram of one embodiment of a graphics processor, which may be a discrete graphics processing unit or a graphics processor integrated with a plurality of processing cores; FIG. 4 is a block diagram of an embodiment of a graphics processing engine for a graphics processor; FIG. 5 is a block diagram of another embodiment of a graphics processor; FIG. 6 is a block diagram of thread execution logic including an array of processing elements; FIG. 7 illustrates a graphics processor execution unit instruction format according to an embodiment; FIG. 8 is a block diagram of another embodiment of a graphics processor which includes a graphics pipeline, a media pipeline, a display engine, thread execution logic, and a render output pipeline; FIG. 9A is a block diagram illustrating a graphics processor command format according to an embodiment; FIG. 9B is a block diagram illustrating a graphics processor command sequence according to an embodiment; FIG. 10 illustrates an exemplary graphics software architecture for a data processing system according to an embodiment; FIG. 11 illustrates an exemplary IP core development system according to an embodiment that may be used to manufacture an integrated circuit to perform operations; FIG. 12 illustrates an exemplary system-on-chip integrated circuit according to an embodiment that may be fabricated using one or more IP cores; FIG. 13 illustrates exemplary GPU resource utilization under a round-robin scheduler; FIG. 14 illustrates one embodiment of a GPU scheduler which implements a pattern driven self-adaptive scheme (PDSAS); FIG. 15 illustrates an exemplary probability curve for idleness from an exemplary 3D workload; FIG. 16 illustrates an exemplary distribution curve of possible wait times; FIGS. 17A-B illustrate a method in accordance with one embodiment of the invention; FIG. 18 illustrates the operation of one embodiment of the invention for different time values, switch cost values, and thresholds; and FIG. 19 illustrates an exemplary probability curve indicating the probability at different time values and including a balance coefficient.

SUMMARY OF THE INVENTION AND EMBODIMENTS

In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the invention described below. It will be apparent, however, to one skilled in the art that the embodiments of the invention may be practiced without some of these specific details. In other instances, well-known structures and devices are shown in block diagram form to avoid obscuring the underlying principles of the embodiments of the invention.

Exemplary graphics processor architectures and data types
System overview

FIG. 1 is a block diagram of a processing system 100 in accordance with an embodiment. In various embodiments, the system 100 includes one or more processors 102 and one or more graphics processors 108, and may be a single processor desktop system, a multiprocessor workstation system, or a server system having a large number of processors 102 or processor cores 107. In one embodiment, the system 100 is a processing platform incorporated within a system-on-a-chip (SoC) integrated circuit for use in mobile, handheld, or embedded devices.

An embodiment of system 100 can include, or be incorporated within, a server-based gaming platform or a game console, including a game and media console, a mobile gaming console, a handheld game console, or an online game console. In some embodiments, system 100 is a mobile phone, smart phone, tablet computing device, or mobile Internet device. Data processing system 100 can also include, couple with, or be integrated within a wearable device, such as a smart watch wearable device, smart eyewear device, augmented reality device, or virtual reality device. In some embodiments, data processing system 100 is a television or set-top box device having one or more processors 102 and a graphical interface generated by one or more graphics processors 108.

In some embodiments, the one or more processors 102 each include one or more processor cores 107 to process instructions which, when executed, perform operations for system and user software. In some embodiments, each of the one or more processor cores 107 is configured to process a specific instruction set 109. In some embodiments, the instruction set 109 may facilitate Complex Instruction Set Computing (CISC), Reduced Instruction Set Computing (RISC), or computing via a Very Long Instruction Word (VLIW). Multiple processor cores 107 may each process a different instruction set 109, which may include instructions to facilitate the emulation of other instruction sets. Processor core 107 may also include other processing devices, such as a Digital Signal Processor (DSP).

In some embodiments, the processor 102 includes cache memory 104. Depending on the architecture, the processor 102 can have a single internal cache or multiple levels of internal cache. In some embodiments, the cache memory is shared among various components of the processor 102. In some embodiments, the processor 102 also uses an external cache (e.g., a Level-3 (L3) cache or Last Level Cache (LLC)) (not shown), which may be shared among processor cores 107 using known cache coherency techniques. A register file 106 is additionally included in processor 102 and may include different types of registers for storing different types of data (e.g., integer registers, floating point registers, status registers, and an instruction pointer register). Some registers may be general-purpose registers, while other registers may be specific to the design of the processor 102.

In some embodiments, processor 102 is coupled to a processor bus 110 to transmit communication signals such as address, data, or control signals between the processor 102 and other components in the system 100. In one embodiment, the system 100 uses an exemplary "hub" system architecture, including a memory controller hub 116 and an input output (I/O) controller hub 130. The memory controller hub 116 facilitates communication between a memory device and other components of the system 100, while the I/O controller hub (ICH) 130 provides connections to I/O devices via an I/O bus. In one embodiment, the logic of the memory controller hub 116 is integrated within the processor.

The memory device 120 can be a dynamic random access memory (DRAM) device, a static random access memory (SRAM) device, a cache memory device, a phase-change memory device, or some other memory device having suitable performance to serve as process memory. In one embodiment, the memory device 120 can operate as system memory for the system 100, to store data 122 and instructions 121 for use when the one or more processors 102 execute an application or process. The memory controller hub 116 also couples with an optional external graphics processor 112, which may communicate with the one or more graphics processors 108 in the processor 102 to perform graphics and media operations.

In some embodiments, the ICH 130 enables peripherals to connect to the memory device 120 and processor 102 via a high-speed I/O bus. The I/O peripherals include, but are not limited to, an audio controller 146, a firmware interface 128, a wireless transceiver 126 (e.g., Wi-Fi, Bluetooth), a data storage device 124 (e.g., a hard disk drive, cache memory, etc.), and a legacy I/O controller 140 for coupling legacy (e.g., Personal System 2 (PS/2)) devices to the system. One or more Universal Serial Bus (USB) controllers 142 connect input devices, such as keyboard and mouse 144 combinations. A network controller 134 may also couple to the ICH 130. In some embodiments, a high-performance network controller (not shown) couples to the processor bus 110. It will be appreciated that the system 100 shown is exemplary and not limiting, as other types of data processing systems that are differently configured may also be used. For example, the I/O controller hub 130 may be integrated within the one or more processors 102, or the memory controller hub 116 and I/O controller hub 130 may be integrated into a discrete external graphics processor, such as the external graphics processor 112.

FIG. 2 is a block diagram of an embodiment of a processor 200 having one or more processor cores 202A-202N, an integrated memory controller 214, and an integrated graphics processor 208. Those elements of FIG. 2 having the same reference numbers (or names) as the elements of any other figure herein can operate or function in any manner similar to that described elsewhere herein, but are not limited to such. The processor 200 can include additional cores up to and including the additional core 202N represented by the dashed boxes. Each of the processor cores 202A-202N includes one or more internal cache units 204A-204N. In some embodiments, each processor core also has access to one or more shared cache units 206.

The internal cache units 204A-204N and shared cache units 206 represent a cache memory hierarchy within the processor 200. The cache memory hierarchy may include at least one level of instruction and data cache within each processor core and one or more levels of shared mid-level cache, such as a Level 2 (L2), Level 3 (L3), Level 4 (L4), or other level of cache, where the highest level of cache before external memory is classified as the LLC. In some embodiments, cache coherency logic maintains coherency between the various cache units 206 and 204A-204N.
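A rough, assumed model of such a multi-level cache lookup is sketched below; the level names and latencies are invented for illustration and are not figures from this disclosure.

```python
# Illustrative model of a multi-level cache lookup (L1 -> L2 -> LLC -> memory).
LEVELS = [("L1", 4), ("L2", 12), ("LLC", 40)]  # (name, assumed latency in cycles)
MEMORY_LATENCY = 200

def lookup(address: int, caches: dict[str, set[int]]) -> tuple[str, int]:
    """Return the first level that holds the address, and its assumed latency."""
    for name, latency in LEVELS:
        if address in caches[name]:
            return name, latency
    return "memory", MEMORY_LATENCY

caches = {"L1": {0x100}, "L2": {0x100, 0x200}, "LLC": {0x100, 0x200, 0x300}}
print(lookup(0x200, caches))   # served by the shared L2 in this toy example
print(lookup(0x400, caches))   # misses every level and falls through to memory
```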

In some embodiments, the processor 200 may also include a set of one or more bus controller units 216 and a system agent core 210. The one or more bus controller units 216 manage a set of peripheral buses, such as one or more Peripheral Component Interconnect buses (e.g., PCI, PCI Express). The system agent core 210 provides management functionality for the various processor components. In some embodiments, the system agent core 210 includes one or more integrated memory controllers 214 to manage access to various external memory devices (not shown).

In some embodiments, one or more of the processor cores 202A-202N include support for simultaneous multi-threading. In such an embodiment, the system agent core 210 includes components for coordinating and operating the cores 202A-202N during multi-threaded processing. The system agent core 210 may additionally include a power control unit (PCU), which includes logic and components to regulate the power state of the processor cores 202A-202N and the graphics processor 208.

In some embodiments, the processor 200 additionally includes a graphics processor 208 to execute graphics processing operations. In some embodiments, the graphics processor 208 couples with the set of shared cache units 206 and with the system agent core 210, which includes the one or more integrated memory controllers 214. In some embodiments, a display controller 211 is coupled with the graphics processor 208 to drive graphics processor output to one or more coupled displays. In some embodiments, the display controller 211 may be a separate module coupled with the graphics processor via at least one interconnect, or may be integrated within the graphics processor 208 or the system agent core 210.

In some embodiments, a ring-based interconnect unit 212 is used to couple the internal components of the processor 200. However, an alternative interconnect unit may be used, such as a point-to-point interconnect, a switched interconnect, or other techniques, including techniques well known in the art. In some embodiments, the graphics processor 208 couples with the ring interconnect 212 via an I/O link 213.

The exemplary I/O link 213 represents at least one of multiple varieties of I/O interconnects, including an on-package I/O interconnect which facilitates communication between various processor components and a high-performance embedded memory module 218, such as an eDRAM module. In some embodiments, each of the processor cores 202A-202N and the graphics processor 208 use the embedded memory module 218 as a shared last-level cache.

In some embodiments, the processor cores 202A-202N are homogeneous cores executing the same instruction set architecture. In another embodiment, the processor cores 202A-202N are heterogeneous in terms of instruction set architecture (ISA), where one or more of the processor cores 202A-202N execute a first instruction set, while at least one of the other cores executes a subset of the first instruction set or a different instruction set. In one embodiment, the processor cores 202A-202N are heterogeneous in terms of microarchitecture, where one or more cores having relatively higher power consumption couple with one or more cores having lower power consumption. Additionally, the processor 200 can be implemented on one or more chips or as an SoC integrated circuit having the illustrated components, in addition to other components.

FIG. 3 is a block diagram of a graphics processor 300, which may be a discrete graphics processing unit, or may be a graphics processor integrated with a plurality of processing cores. In some embodiments, the graphics processor communicates via a memory-mapped I/O interface to registers on the graphics processor and with commands placed into processor memory. In some embodiments, the graphics processor 300 includes a memory interface 314 to access memory. The memory interface 314 can be an interface to local memory, one or more internal caches, one or more shared external caches, and/or to system memory.
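The following toy sketch (all register offsets, names, and structures invented here, not taken from this disclosure) illustrates the two communication paths mentioned above: a command placed into processor memory plus a write to a memory-mapped register.

```python
# Toy model of driver-to-GPU communication via memory-mapped registers and
# commands placed into memory. The layout is purely illustrative.

mmio_registers: dict[int, int] = {}   # stands in for the mapped register file
command_memory: list[dict] = []       # stands in for commands in system memory

DOORBELL_OFFSET = 0x10                # hypothetical "doorbell" register offset

def submit_command(command: dict) -> None:
    """Place a command in memory, then write a register so the device notices it."""
    command_memory.append(command)
    mmio_registers[DOORBELL_OFFSET] = len(command_memory)

submit_command({"opcode": "DRAW", "vertex_count": 3})
print(mmio_registers, command_memory)
```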

In some embodiments, the graphics processor 300 also includes a display controller 302 to drive display output data to a display device 320. The display controller 302 includes hardware for one or more overlay planes for the display and composition of multiple layers of video or user interface elements. In some embodiments, the graphics processor 300 includes a video codec engine 306 to encode, decode, or transcode media to, from, or between one or more media encoding formats, including, but not limited to, Moving Picture Experts Group (MPEG) formats such as MPEG-2, Advanced Video Coding (AVC) formats such as H.264/MPEG-4 AVC, as well as the Society of Motion Picture & Television Engineers (SMPTE) 421M/VC-1, Joint Photographic Experts Group (JPEG) formats such as JPEG, and Motion JPEG (MJPEG) formats.

In some embodiments, the graphics processor 300 includes a block image transfer (BLIT) engine 304 to perform two-dimensional (2D) rasterizer operations including, for example, bit-boundary block transfers. However, in one embodiment, 2D graphics operations are performed using one or more components of the graphics processing engine (GPE) 310. In some embodiments, the graphics processing engine 310 is a compute engine for performing graphics operations, including three-dimensional (3D) graphics operations and media operations.

In some embodiments, the GPE 310 includes a 3D pipeline 312 for performing 3D operations, such as rendering three-dimensional images and scenes using processing functions that act upon 3D primitive shapes (e.g., rectangle, triangle, etc.). The 3D pipeline 312 includes programmable and fixed function elements that perform various tasks within the element and/or spawn execution threads to a 3D/media subsystem 315. While the 3D pipeline 312 can be used to perform media operations, an embodiment of the GPE 310 also includes a media pipeline 316 that is specifically used to perform media operations, such as video post-processing and image enhancement.

In some embodiments, the media pipeline 316 includes fixed function or programmable logic units to perform one or more specialized media operations, such as video decode acceleration, video de-interlacing, and video encode acceleration, in place of or on behalf of the video codec engine 306. In some embodiments, the media pipeline 316 additionally includes a thread spawning unit to spawn threads for execution on the 3D/media subsystem 315. The spawned threads perform computations for the media operations on one or more graphics execution units included in the 3D/media subsystem 315.

In some embodiments, the 3D/media subsystem 315 includes logic for executing threads spawned by the 3D pipeline 312 and media pipeline 316. In one embodiment, the pipelines send thread execution requests to the 3D/media subsystem 315, which includes thread dispatch logic for arbitrating and dispatching the various requests to available thread execution resources. The execution resources include an array of graphics execution units to process the 3D and media threads. In some embodiments, the 3D/media subsystem 315 includes one or more internal caches for thread instructions and data. In some embodiments, the subsystem also includes shared memory, including registers and addressable memory, to share data between threads and to store output data.

3D/media processing

FIG. 4 is a block diagram of a graphics processing engine 410 of a graphics processor in accordance with some embodiments. In one embodiment, the GPE 410 is a version of the GPE 310 shown in FIG. 3. Elements of FIG. 4 having the same reference numbers (or names) as the elements of any other figure herein can operate or function in any manner similar to that described elsewhere herein, but are not limited to such.

In some embodiments, the GPE 410 couples with a command streamer 403, which provides a command stream to the GPE 3D and media pipelines 412, 416. In some embodiments, the command streamer 403 is coupled to memory, which can be system memory, or one or more of internal cache memory and shared cache memory. In some embodiments, the command streamer 403 receives commands from the memory and sends the commands to the 3D pipeline 412 and/or media pipeline 416. The commands are directives fetched from a ring buffer, which stores commands for the 3D and media pipelines 412, 416. In one embodiment, the ring buffer can additionally include batch command buffers storing batches of multiple commands. The 3D and media pipelines 412, 416 process the commands by performing operations via logic within the respective pipelines or by dispatching one or more execution threads to an execution unit array 414. In some embodiments, the execution unit array 414 is scalable, such that the array includes a variable number of execution units based on the target power and performance level of the GPE 410.
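As a minimal illustration of the mechanism described above, the sketch below models a ring buffer of commands with one entry pointing at a batch of commands; the sizes, field names, and batch indirection are assumptions made for the example.

```python
# Minimal ring-buffer sketch for a command stream with a batch-buffer indirection.
class CommandRing:
    def __init__(self, size: int) -> None:
        self.slots = [None] * size
        self.head = 0      # consumer (command streamer) position
        self.tail = 0      # producer (driver) position

    def push(self, cmd: dict) -> None:
        next_tail = (self.tail + 1) % len(self.slots)
        if next_tail == self.head:
            raise RuntimeError("ring full")
        self.slots[self.tail] = cmd
        self.tail = next_tail

    def pop(self):
        if self.head == self.tail:
            return None    # ring empty
        cmd = self.slots[self.head]
        self.head = (self.head + 1) % len(self.slots)
        return cmd

ring = CommandRing(8)
batch = [{"op": "STATE"}, {"op": "DRAW"}]          # a batch buffer of commands
ring.push({"op": "BATCH_START", "buffer": batch})  # ring entry references the batch
ring.push({"op": "FLUSH"})                         # a direct ring command

while (cmd := ring.pop()) is not None:
    if cmd["op"] == "BATCH_START":
        for sub in cmd["buffer"]:
            print("batch:", sub["op"])
    else:
        print(cmd["op"])
```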

In some embodiments, a sampling engine 430 couples with memory (e.g., cache memory or system memory) and the execution unit array 414. In some embodiments, the sampling engine 430 provides a memory access mechanism for the execution unit array 414 that allows the execution array 414 to read graphics and media data from memory. In some embodiments, the sampling engine 430 includes logic to perform specialized image sampling operations for media.

In some embodiments, the specialized media sampling logic in the sampling engine 430 includes a de-noise/de-interlace module 432, a motion estimation module 434, and an image scaling and filtering module 436. In some embodiments, the de-noise/de-interlace module 432 includes logic to perform one or more of a de-noise or a de-interlace algorithm on decoded video data. The de-interlace logic combines alternating fields of interlaced video content into a single frame of video. The de-noise logic reduces or removes data noise from video and image data. In some embodiments, the de-noise logic and de-interlace logic are dynamically adaptive and use spatial or temporal filtering based on the amount of motion detected in the video data. In some embodiments, the de-noise/de-interlace module 432 includes dedicated motion detection logic (e.g., within the motion estimation engine 434).
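As an illustration of the de-interlace step described above, a simple "weave" of two alternating fields into one progressive frame might look like the following sketch; it is a simplification introduced here, not the module's actual algorithm.

```python
# Illustrative "weave" de-interlacing: interleave the rows of the even and odd
# fields back into a single progressive frame.

def weave(even_field: list[list[int]], odd_field: list[list[int]]) -> list[list[int]]:
    frame = []
    for even_line, odd_line in zip(even_field, odd_field):
        frame.append(even_line)  # rows 0, 2, 4, ...
        frame.append(odd_line)   # rows 1, 3, 5, ...
    return frame

even = [[1, 1], [3, 3]]   # rows 0 and 2 of the original frame
odd = [[2, 2], [4, 4]]    # rows 1 and 3
print(weave(even, odd))   # [[1, 1], [2, 2], [3, 3], [4, 4]]
```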

In some embodiments, the motion estimation engine 434 provides hardware acceleration for video operations by performing video acceleration functions such as motion vector estimation and prediction on video data. The motion estimation engine determines motion vectors that describe the transformation of image data between successive video frames. In some embodiments, a graphics processor media codec uses the video motion estimation engine 434 to perform operations on video at the macro-block level that may otherwise be too computationally intensive to perform with a general-purpose processor. In some embodiments, the motion estimation engine 434 is generally available to graphics processor components to assist with video decode and processing functions that are sensitive or adaptive to the direction or magnitude of the motion within video data.
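The following sketch illustrates block-matching motion estimation with a sum-of-absolute-differences (SAD) cost as a simplified stand-in for the motion vector search described above; the block size, search range, and frames are arbitrary choices for the example.

```python
# Toy block-matching motion estimation: find the offset in the previous frame
# that minimizes the SAD for one block of the current frame.

def sad(block_a, block_b) -> int:
    return sum(abs(a - b) for row_a, row_b in zip(block_a, block_b)
               for a, b in zip(row_a, row_b))

def block(frame, x, y, size):
    return [row[x:x + size] for row in frame[y:y + size]]

def motion_vector(prev, curr, x, y, size=2, search=1):
    h, w = len(prev), len(prev[0])
    target = block(curr, x, y, size)
    best = None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            px, py = x + dx, y + dy
            if px < 0 or py < 0 or px + size > w or py + size > h:
                continue  # candidate block would fall outside the previous frame
            cost = sad(target, block(prev, px, py, size))
            if best is None or cost < best[0]:
                best = (cost, (dx, dy))
    return best[1]

prev = [[0, 0, 0, 0], [0, 9, 9, 0], [0, 9, 9, 0], [0, 0, 0, 0]]
curr = [[9, 9, 0, 0], [9, 9, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]
print(motion_vector(prev, curr, x=0, y=0))  # the bright block is found at offset (1, 1)
```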

In some embodiments, the image scaling and filtering module 436 performs image-processing operations to enhance the visual quality of generated images and video. In some embodiments, the scaling and filtering module 436 processes image and video data during the sampling operation before providing the data to the execution unit array 414.

In some embodiments, the GPE 410 includes a data port 444, which provides an additional mechanism for graphics subsystems to access memory. In some embodiments, the data port 444 facilitates memory access for operations including render target writes, constant buffer reads, scratch memory space reads/writes, and media surface accesses. In some embodiments, the data port 444 includes cache memory space to cache accesses to memory. The cache memory can be a single data cache or separated into multiple caches for the multiple subsystems that access memory via the data port (e.g., a render buffer cache, a constant buffer cache, etc.). In some embodiments, threads executing on an execution unit in the execution unit array 414 communicate with the data port by exchanging messages via a data distribution interconnect that couples each of the subsystems of the GPE 410.

Execution units

FIG. 5 is a block diagram of another embodiment of a graphics processor 500. Elements of FIG. 5 having the same reference numbers (or names) as the elements of any other figure herein can operate or function in any manner similar to that described elsewhere herein, but are not limited to such.

In some embodiments, the graphics processor 500 includes a ring interconnect 502, a pipeline front-end 504, a media engine 537, and graphics cores 580A-580N. In some embodiments, the ring interconnect 502 couples the graphics processor to other processing units, including other graphics processors or one or more general-purpose processor cores. In some embodiments, the graphics processor is one of many processors integrated within a multi-core processing system.

In some embodiments, the graphics processor 500 receives batches of commands via the ring interconnect 502. The incoming commands are interpreted by a command streamer 503 in the pipeline front-end 504. In some embodiments, the graphics processor 500 includes scalable execution logic to perform 3D geometry processing and media processing via the graphics cores 580A-580N. For 3D geometry processing commands, the command streamer 503 supplies commands to the geometry pipeline 536. For at least some media processing commands, the command streamer 503 supplies the commands to a video front end 534, which couples with a media engine 537. In some embodiments, the media engine 537 includes a Video Quality Engine (VQE) 530 for video and image post-processing and a multi-format encode/decode (MFX) 533 engine to provide hardware-accelerated media data encode and decode. In some embodiments, the geometry pipeline 536 and media engine 537 each generate execution threads for the thread execution resources provided by at least one graphics core 580A.

In some embodiments, the graphics processor 500 includes scalable thread execution resources featuring modular cores 580A-580N (sometimes referred to as core slices), each having multiple sub-cores 550A-550N, 560A-560N (sometimes referred to as core sub-slices). In some embodiments, the graphics processor 500 can have any number of graphics cores 580A through 580N. In some embodiments, the graphics processor 500 includes a graphics core 580A having at least a first sub-core 550A and a second sub-core 560A. In other embodiments, the graphics processor is a low-power processor with a single sub-core (e.g., 550A). In some embodiments, the graphics processor 500 includes multiple graphics cores 580A-580N, each including a set of first sub-cores 550A-550N and a set of second sub-cores 560A-560N. Each sub-core in the set of first sub-cores 550A-550N includes at least a first set of execution units 552A-552N and media/texture samplers 554A-554N. Each sub-core in the set of second sub-cores 560A-560N includes at least a second set of execution units 562A-562N and samplers 564A-564N. In some embodiments, each sub-core 550A-550N, 560A-560N shares a set of shared resources 570A-570N. In some embodiments, the shared resources include shared cache memory and pixel operation logic. Other shared resources may also be included in the various embodiments of the graphics processor.

FIG. 6 illustrates thread execution logic 600 including an array of processing elements employed in some embodiments of a GPE. Elements of FIG. 6 having the same reference numbers (or names) as the elements of any other figure herein can operate or function in any manner similar to that described elsewhere herein, but are not limited to such.

In some embodiments, the thread execution logic 600 includes a pixel shader 602, a thread dispatcher 604, an instruction cache 606, a scalable execution unit array including a plurality of execution units 608A-608N, a sampler 610, a data cache 612, and a data port 614. In one embodiment, the included components are interconnected via an interconnect fabric that links to each of the components. In some embodiments, the thread execution logic 600 includes one or more connections to memory, such as system memory or cache memory, through one or more of the instruction cache 606, the data port 614, the sampler 610, and the execution unit array 608A-608N. In some embodiments, each execution unit (e.g., 608A) is an individual vector processor capable of executing multiple simultaneous threads and processing multiple data elements in parallel for each thread. In some embodiments, the execution unit array 608A-608N includes any number of individual execution units.

In some embodiments, the execution unit array 608A-608N is primarily used to execute "shader" programs. In some embodiments, the execution units in the array 608A-608N execute an instruction set that includes native support for many standard 3D graphics shader instructions, such that shader programs from graphics libraries (e.g., Direct 3D and OpenGL) are executed with a minimal translation. The execution units support vertex and geometry processing (e.g., vertex programs, geometry programs, vertex shaders), pixel processing (e.g., pixel shaders, fragment shaders), and general-purpose processing (e.g., compute and media shaders).

Each execution unit in the execution unit array 608A-608N operates on arrays of data elements. The number of data elements is the "execution size," or the number of channels for the instruction. An execution channel is a logical unit of execution for data element access, masking, and flow control within instructions. The number of channels may be independent of the number of physical Arithmetic Logic Units (ALUs) or Floating Point Units (FPUs) for a particular graphics processor. In some embodiments, the execution units 608A-608N support integer and floating-point data types.

The execution unit instruction set includes single instruction multiple data (SIMD) instructions. The various data elements can be stored as a packed data type in a register, and the execution unit will process the various elements based on the data size of the elements. For example, when operating on a 256-bit wide vector, the 256 bits of the vector are stored in a register and the execution unit operates on the vector as four separate 64-bit packed data elements (Quad-Word (QW) size data elements), eight separate 32-bit packed data elements (Double Word (DW) size data elements), sixteen separate 16-bit packed data elements (Word (W) size data elements), or thirty-two separate 8-bit data elements (byte (B) size data elements). However, different vector widths and register sizes are possible.
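A small sketch showing how one 256-bit value can be viewed as packed elements of the widths listed above; the test pattern is arbitrary and the sketch is illustrative only.

```python
# Interpreting one 256-bit register value as packed elements of different widths,
# mirroring the QW/DW/W/B breakdown described above.

def unpack(value: int, total_bits: int, element_bits: int) -> list[int]:
    mask = (1 << element_bits) - 1
    return [(value >> shift) & mask for shift in range(0, total_bits, element_bits)]

reg = int.from_bytes(bytes(range(32)), "little")   # a 256-bit test pattern

print(len(unpack(reg, 256, 64)))  # 4 quad-word (QW) elements
print(len(unpack(reg, 256, 32)))  # 8 double-word (DW) elements
print(len(unpack(reg, 256, 16)))  # 16 word (W) elements
print(len(unpack(reg, 256, 8)))   # 32 byte (B) elements
```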

One or more internal instruction caches (e.g., 606) are included in the thread execution logic 600 to cache thread instructions for the execution units. In some embodiments, one or more data caches (e.g., 612) are included to cache thread data during thread execution. In some embodiments, a sampler 610 is included to provide texture sampling for 3D operations and media sampling for media operations. In some embodiments, the sampler 610 includes specialized texture or media sampling functionality to process texture or media data during the sampling process before providing the sampled data to an execution unit.

During execution, the graphics and media pipelines send thread initiation requests to the thread execution logic 600 via thread spawning and dispatch logic. In some embodiments, the thread execution logic 600 includes a local thread dispatcher 604 that arbitrates thread initiation requests from the graphics and media pipelines and instantiates the requested threads on one or more execution units 608A-608N. For example, the geometry pipeline (e.g., 536 of FIG. 5) dispatches vertex processing, tessellation, or geometry processing threads to the thread execution logic 600 (FIG. 6). In some embodiments, the thread dispatcher 604 can also process runtime thread spawning requests from the executing shader programs.

Once a group of geometric objects has been processed and rasterized into pixel data, the pixel shader 602 is invoked to further compute output information and cause results to be written to output surfaces (e.g., color buffers, depth buffers, stencil buffers, etc.). In some embodiments, the pixel shader 602 calculates the values of the various vertex attributes that are to be interpolated across the rasterized object. In some embodiments, the pixel shader 602 then executes an application programming interface (API)-supplied pixel shader program. To execute the pixel shader program, the pixel shader 602 dispatches threads to an execution unit (e.g., 608A) via the thread dispatcher 604. In some embodiments, the pixel shader 602 uses texture sampling logic in the sampler 610 to access texture data in texture maps stored in memory. Arithmetic operations on the texture data and the input geometry data compute pixel color data for each geometric fragment, or discard one or more pixels from further processing.
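As a simplified illustration of the per-pixel flow described above (interpolate attributes, sample a texture, then write a color or discard), consider the following sketch; the texture, interpolation weights, and discard threshold are invented for the example.

```python
# Simplified per-pixel flow: interpolate texture coordinates from vertex values,
# sample a tiny texture, and either produce a color or discard the pixel.

def interpolate(a: float, b: float, c: float, w: tuple[float, float, float]) -> float:
    return a * w[0] + b * w[1] + c * w[2]     # barycentric blend of vertex values

def sample(texture: list[list[float]], u: float, v: float) -> float:
    x = min(int(u * (len(texture[0]) - 1)), len(texture[0]) - 1)
    y = min(int(v * (len(texture) - 1)), len(texture) - 1)
    return texture[y][x]

def shade_pixel(texture, uvs, weights):
    u = interpolate(uvs[0][0], uvs[1][0], uvs[2][0], weights)
    v = interpolate(uvs[0][1], uvs[1][1], uvs[2][1], weights)
    alpha = sample(texture, u, v)
    if alpha < 0.1:
        return None               # discard: skip further processing for this pixel
    return (alpha, alpha, alpha)  # grayscale color written to the render target

texture = [[0.0, 0.5], [0.5, 1.0]]
triangle_uvs = [(0, 0), (1, 0), (0, 1)]
print(shade_pixel(texture, triangle_uvs, weights=(0.0, 1.0, 0.0)))  # shaded color
print(shade_pixel(texture, triangle_uvs, weights=(1.0, 0.0, 0.0)))  # discarded pixel
```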

In some embodiments, the data port 614 provides a memory access mechanism for the thread execution logic 600 to output processed data to memory for processing on a graphics processor output pipeline. In some embodiments, the data port 614 includes or couples to one or more cache memories (e.g., data cache 612) to cache data for memory access via the data port.

FIG. 7 is a block diagram illustrating a graphics processor instruction format 700 according to some embodiments. In one or more embodiments, the graphics processor execution units support an instruction set having instructions in multiple formats. The solid boxes illustrate the components that are generally included in an execution unit instruction, while the dashed lines include components that are optional or that are only included in a subset of the instructions. In some embodiments, the instruction format 700 described and illustrated comprises macro-instructions, in that they are instructions supplied to the execution unit, as opposed to micro-operations resulting from instruction decode once the instruction is processed.

In some embodiments, the graphics processor execution units natively support instructions in a 128-bit format 710. A 64-bit compacted instruction format 730 is available for some instructions based on the selected instruction, instruction options, and number of operands. The native 128-bit format 710 provides access to all instruction options, while some options and operations are restricted in the 64-bit format 730. The native instructions available in the 64-bit format 730 vary by embodiment. In some embodiments, the instruction is compacted in part using a set of index values in an index field 713. The execution unit hardware references a set of compaction tables based on the index values and uses the compaction table outputs to reconstruct a native instruction in the 128-bit format 710.
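A toy sketch of field compaction through an index table, illustrating the reconstruction step described above; the table contents and the choice of field are hypothetical and not taken from this disclosure.

```python
# Sketch of field compaction via index tables: instead of encoding a full field,
# the compact form stores a small index into a table of common field values.

CONTROL_TABLE = [0b0000_0000, 0b0001_0000, 0b0010_0000, 0b0011_0000]  # common patterns

def compact(control_field: int) -> int | None:
    """Return the table index if the field is compactable, else None."""
    return CONTROL_TABLE.index(control_field) if control_field in CONTROL_TABLE else None

def expand(index: int) -> int:
    """Reconstruct the native field from the compaction-table index."""
    return CONTROL_TABLE[index]

idx = compact(0b0010_0000)
print(idx, bin(expand(idx)))   # a field found in the table compacts to index 2
print(compact(0b1111_1111))    # a field outside the table cannot be compacted
```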

For each format, the instruction opcode 712 defines the operation that the execution unit is to perform. The execution units execute each instruction in parallel across the multiple data elements of each operand. For example, in response to an add instruction, the execution unit performs a simultaneous add operation across each color channel representing a texture element or picture element. By default, the execution unit performs each instruction across all data channels of the operands. In some embodiments, an instruction control field 714 enables control over certain execution options, such as channel selection (e.g., predication) and data channel order (e.g., swizzle). For 128-bit instructions 710, an exec-size field 716 limits the number of data channels that will be executed in parallel. In some embodiments, the exec-size field 716 is not available for use in the 64-bit compacted instruction format 730.

Some execution unit instructions have up to three operands, including two source operands, src0 722 and src1 722, and one destination 718. In some embodiments, the execution units support dual destination instructions, where one of the destinations is implied. Data manipulation instructions can have a third source operand (e.g., SRC2 724), where the instruction opcode 712 determines the number of source operands. An instruction's last source operand can be an immediate (e.g., hard-coded) value passed with the instruction.

In some embodiments, the 128-bit instruction format 710 includes access/address mode information 726 specifying, for example, whether a direct register addressing mode or an indirect register addressing mode is used. When the direct register addressing mode is used, the register address of one or more operands is directly provided by bits in the instruction 710.

在一些實施例中,128位元指令格式710包括存取/定址模式欄位726,其指定用於指令的定址模式及/或存取模式。在一個實施例中,存取模式用以定義指令的資料存取對齊。一些實施例支援存取模式包括16位元組對齊的存取模式及1位元組對齊的存取模式,其中存取模式的位元組對齊決定指令運算元的存取對齊。例如,當在第一模式時,指令710可將位元組對齊的定址用於來源及目的地運算元,且在第二模式時,指令710可將16位元組對齊的定址用於所有的來源及目的地運算元。 In some embodiments, the 128-bit instruction format 710 includes an access/addressing mode field 726 that specifies an addressing mode and/or an access mode for the instruction. In one embodiment, the access mode is used to define the data access alignment of the instructions. Some embodiments support access modes including a 16-byte aligned access mode and a 1-byte aligned access mode, wherein the byte alignment of the access mode determines the access alignment of the instruction operand. For example, when in the first mode, the instructions 710 can use byte-aligned addressing for source and destination operands, and in the second mode, the instructions 710 can use 16-byte aligned addressing for all Source and destination operands.
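
（以下為僅供說明的C++示意碼，並非本揭露之一部分；欄位的位元寬度與排列僅為假設。）For illustration only and not part of this disclosure, the following C++ sketch models the fields named above for the 128-bit instruction format 710 and the direct/indirect addressing distinction; the field widths and layout are assumptions, not the actual hardware encoding.

#include <cstdint>

// Hypothetical, simplified representation of the fields described for the
// 128-bit instruction format 710. Field names follow the text; the widths
// and ordering are illustrative assumptions only.
struct GfxInstruction128 {
    uint8_t opcode;       // operation to perform (712)
    uint8_t control;      // execution options, e.g. predication/swizzle (714)
    uint8_t exec_size;    // number of data channels executed in parallel (716)
    uint8_t dest;         // destination operand (718)
    uint8_t src0, src1;   // source operands (720, 722)
    uint8_t src2;         // optional third source operand (724)
    uint8_t access_mode;  // access/addressing mode information (726)
};

enum class AddressingMode { Direct, Indirect };

// In direct register addressing the register address comes straight from the
// instruction bits; in indirect addressing it is computed from an address
// register value plus an immediate field, as described above.
inline AddressingMode addressing_mode(const GfxInstruction128& in) {
    return (in.access_mode & 0x1) ? AddressingMode::Indirect
                                  : AddressingMode::Direct;
}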

在一個實施例中,存取/定址模式欄位726的定址模式部分決定指令是使用直接或間接定址。當使用直接暫存器定址模式時,指令710中的位元直接地提供一或多個運 算元的暫存器位址。當使用間接暫存器定址模式時,可基於定址暫存器值及指令中的定址立即欄位來計算一或多個運算元的暫存器位址。 In one embodiment, the addressing mode portion of the access/addressing mode field 726 determines whether the instruction is to use direct or indirect addressing. When using the direct register addressing mode, the bits in instruction 710 provide one or more operations directly The scratchpad address of the operand. When the indirect scratchpad addressing mode is used, the scratchpad address of one or more operands can be calculated based on the addressed scratchpad value and the addressed immediate field in the instruction.

在一些實施例中,基於操作碼712位元欄位將指令分組以簡化操作碼解碼740。針對8位元操作碼,位元4、5、及6允許執行單元決定操作碼的類型。所示之精確的操作碼分組僅是示例性的。在一些實施例中,移動及邏輯操作碼群組742包括資料移動及邏輯指令(例如,移動(mov)、比較(cmp))。在一些實施例中,移動及邏輯群組742共用五個最高有效位元(MSB),其中移動(mov)指令是0000xxxxb的形式,且邏輯指令是0001xxxxb的形式。流量控制指令群組744(例如,呼叫、跳越(jmp))包括0010xxxxb(例如,0x20)之形式的指令。雜項指令群組746包括指令的混合,包括0011xxxxb(例如,0x30)之形式的同步指令(例如,等待、發送)。平行數學指令群組748包括0100xxxxb(例如,0x40)之形式的分組件的算術指令(例如,加、乘(mul))。平行數學群組748跨資料通道並行地執行算數運算。向量數學群組750包括0101xxxxb(例如,0x50)之形式的算數指令(例如,dp4)。向量數學群組對向量運算元執行諸如點積計算的算術。 In some embodiments, the instructions are grouped to simplify the opcode decoding 740 based on the opcode 712 bit field. For 8-bit opcodes, bits 4, 5, and 6 allow the execution unit to determine the type of opcode. The precise opcode grouping shown is merely exemplary. In some embodiments, the mobile and logical opcode group 742 includes data movement and logic instructions (eg, move (mov), compare (cmp)). In some embodiments, the move and logical group 742 shares five most significant bits (MSBs), where the move (mov) instruction is in the form of 0000xxxxb and the logical instruction is in the form of 0001xxxxb. Flow control command group 744 (e.g., call, skip (jmp)) includes instructions in the form of 0010xxxxb (e.g., 0x20). Miscellaneous instruction group 746 includes a mix of instructions, including synchronization instructions (eg, wait, send) in the form of 0011xxxxb (eg, 0x30). The parallel math instruction group 748 includes arithmetic instructions (eg, add, multiply (mul)) of the sub-components in the form of 0100xxxxb (eg, 0x40). Parallel math group 748 performs arithmetic operations in parallel across data channels. The vector math group 750 includes an arithmetic instruction (eg, dp4) in the form of 0101xxxxb (eg, 0x50). Vector math groups perform arithmetic such as dot product calculations on vector operands.
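
（以下為僅供說明的C++示意碼，非本揭露之一部分。）As an illustration only, the following C++ sketch classifies an 8-bit opcode into the groups quoted above using bits 4 through 6, which suffices because the listed group encodings differ only in those bits.

#include <cstdint>

enum class OpcodeGroup {
    MoveLogic, FlowControl, Miscellaneous, ParallelMath, VectorMath, Unknown
};

// Classify an 8-bit opcode by bits 4-6, following the encodings given in the
// text: move/logic 0000xxxxb-0001xxxxb, flow control 0010xxxxb, miscellaneous
// 0011xxxxb, parallel math 0100xxxxb, vector math 0101xxxxb.
OpcodeGroup classify_opcode(uint8_t opcode) {
    switch ((opcode >> 4) & 0x7) {
        case 0x0: case 0x1: return OpcodeGroup::MoveLogic;
        case 0x2:           return OpcodeGroup::FlowControl;
        case 0x3:           return OpcodeGroup::Miscellaneous;
        case 0x4:           return OpcodeGroup::ParallelMath;
        case 0x5:           return OpcodeGroup::VectorMath;
        default:            return OpcodeGroup::Unknown;
    }
}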

繪圖管線 Drawing pipeline

圖8為繪圖處理器800之另一實施例的方塊圖。具有與本文任何其他圖式之元件相同標號(或名稱)的圖8的元 件,可以類似於本文他處所描述的任何方式操作或發生功用,但不限於此。 FIG. 8 is a block diagram of another embodiment of a graphics processor 800. Element of Figure 8 having the same reference number (or name) as the elements of any other figures herein The device may operate or function in any manner similar to that described elsewhere herein, but is not limited thereto.

在一些實施例中,繪圖處理器800包括繪圖管線820、媒體管線830、顯示引擎840、線程執行邏輯850、及渲染輸出管線870。在一些實施例中,繪圖處理器800是在包括一或多個通用處理核心之多核處理系統內的繪圖處理器。繪圖處理器係透過至一或多個控制暫存器(未示出)的暫存器寫入或經由環形互連802發佈至繪圖處理器800的命令而被控制。在一些實施例中,環形互連802將繪圖處理器800耦接到其他處理元件,諸如其他繪圖處理器或通用處理器。來自環形互連802的命令係由命令串流器803解譯,其將指令提供至繪圖管線820或媒體管線830的個別元件。 In some embodiments, graphics processor 800 includes a graphics pipeline 820, a media pipeline 830, a display engine 840, thread execution logic 850, and a rendering output pipeline 870. In some embodiments, graphics processor 800 is a graphics processor within a multi-core processing system that includes one or more general purpose processing cores. The graphics processor is controlled by a scratchpad write to one or more control registers (not shown) or via a ring interconnect 802 to the graphics processor 800. In some embodiments, ring interconnect 802 couples graphics processor 800 to other processing elements, such as other graphics processors or general purpose processors. Commands from ring interconnect 802 are interpreted by command streamer 803, which provides instructions to individual elements of drawing pipeline 820 or media pipeline 830.

在一些實施例中,命令串流器803指示頂點擷取器805的操作,其從記憶體讀取頂點資料並且執行由命令串流器803所提供的頂點處理命令。在一些實施例中,頂點擷取器805將頂點資料提供給頂點著色器807,其對各個頂點執行座標空間轉換及照明操作。在一些實施例中,頂點擷取器805及頂點著色器807透過經由線程分派器831將執行線程分派至執行單元852A、852B來執行頂點處理指令。 In some embodiments, command streamer 803 instructs the operation of vertex skimmer 805, which reads vertex material from memory and executes vertex processing commands provided by command streamer 803. In some embodiments, vertex skimmer 805 provides vertex material to vertex shader 807, which performs coordinate space conversion and illumination operations on the various vertices. In some embodiments, vertex skimmer 805 and vertex shader 807 execute vertex processing instructions by dispatching execution threads to execution units 852A, 852B via thread dispatcher 831.

在一些實施例中,執行單元852A、852B為向量處理器的陣列,其具有用於執行繪圖和媒體操作的指令集。在一些實施例中,執行單元852A、852B具有特定用於各個 陣列或者在陣列之間共用的附加的L1快取851。該快取可被配置成資料快取、指令快取、或單一快取,其被區分成在不同的分區中包含資料及指令。 In some embodiments, execution units 852A, 852B are arrays of vector processors having a set of instructions for performing drawing and media operations. In some embodiments, execution units 852A, 852B have specificities for each An array or an additional L1 cache 851 that is shared between the arrays. The cache can be configured as a data cache, an instruction cache, or a single cache, which is divided into data and instructions in different partitions.

在一些實施例中，繪圖管線820包括曲面細分元件，用以執行3D物件之硬體加速的曲面細分。在一些實施例中，可程式化的外殼著色器811配置曲面細分操作。可程式化的域著色器817提供曲面細分輸出的後端評估。曲面細分器813在外殼著色器811的指示下操作，並且包含專用邏輯以基於被提供作為繪圖管線820之輸入的粗略幾何模型來產生一組細緻的幾何物件。在一些實施例中，若未使用曲面細分，則可繞過曲面細分元件811、813、817。 In some embodiments, the graphics pipeline 820 includes tessellation components to perform hardware-accelerated tessellation of 3D objects. In some embodiments, a programmable hull shader 811 configures the tessellation operations. A programmable domain shader 817 provides back-end evaluation of the tessellation output. A tessellator 813 operates at the direction of the hull shader 811 and contains special-purpose logic to generate a set of detailed geometric objects based on a coarse geometric model that is provided as input to the graphics pipeline 820. In some embodiments, the tessellation components 811, 813, 817 can be bypassed if tessellation is not used.

在一些實施例中,完整的幾何物件可透過幾何著色器819經由被分派至執行單元852A、852B的一或多個線程來處理,或者可直接前進到剪裁器829。在一些實施例中,幾何著色器對整個幾何物件進行操作,而不是如在繪圖管線的先前階段中對頂點或是頂點的面片(patches)進行操作。若曲面細分被停用,則幾何著色器819從頂點著色器807接收輸入。在一些實施例中,幾何著色器819可透過幾何著色器程式而被程式化以在曲面細分單元被停用的情況下執行幾何曲面細分。 In some embodiments, the complete geometry may be processed by geometry shader 819 via one or more threads assigned to execution units 852A, 852B, or may proceed directly to cutter 829. In some embodiments, the geometry shader operates on the entire geometric object, rather than operating on patches of vertices or vertices as in previous stages of the drawing pipeline. Geometry shader 819 receives input from vertex shader 807 if tessellation is disabled. In some embodiments, geometry shader 819 can be programmed through a geometry shader program to perform geometric tessellation if the tessellation unit is deactivated.

在光柵化之前,剪裁器829處理頂點資料。剪裁器829可以是固定功能剪裁器或是具有剪裁及幾何著色器功能的可程式化剪裁器。在一些實施例中,渲染輸出管線870中的光柵化及深度測試元件873分派像素著色器以將 幾何物件轉換成其之每像素表示。在一些實施例中,像素著色器邏輯包括在線程執行邏輯850中。在一些實施例中,應用程式可繞過光柵化873,且經由串流輸出單元823存取未被光柵化的頂點資料。 The trimmer 829 processes the vertex data prior to rasterization. The cutter 829 can be a fixed function cutter or a programmable cutter with a crop and geometry shader function. In some embodiments, the rasterization and depth test component 873 in the render output pipeline 870 dispatches a pixel shader to The geometric object is converted to its per pixel representation. In some embodiments, pixel shader logic is included in thread execution logic 850. In some embodiments, the application can bypass rasterization 873 and access the unrasterized vertex data via stream output unit 823.
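
（以下為僅供說明的C++示意碼，非本揭露之一部分；各階段函式為假設的空殼。）For illustration only, the following C++ sketch mirrors the stage ordering and bypass rules described in the preceding paragraphs (tessellation components 811/813/817 and geometry shader 819 are optional, and stream output 823 may skip rasterization); the stage functions are hypothetical stubs.

// Hypothetical stage stubs; only the ordering and the bypass rules follow the text.
struct Draw { bool tessellation_enabled; bool has_geometry_shader; bool stream_out_only; };
static void vertex_fetch(Draw&) {}         // vertex fetcher 805
static void vertex_shade(Draw&) {}         // vertex shader 807
static void hull_shade(Draw&) {}           // hull shader 811
static void tessellate(Draw&) {}           // tessellator 813
static void domain_shade(Draw&) {}         // domain shader 817
static void geometry_shade(Draw&) {}       // geometry shader 819
static void stream_out(Draw&) {}           // stream output unit 823
static void clip(Draw&) {}                 // clipper 829
static void rasterize_and_shade(Draw&) {}  // rasterizer/depth test 873 plus pixel shading

void run_geometry_pipeline(Draw& d) {
    vertex_fetch(d);
    vertex_shade(d);
    if (d.tessellation_enabled) {          // otherwise 811/813/817 are bypassed
        hull_shade(d);
        tessellate(d);
        domain_shade(d);
    }
    if (d.has_geometry_shader)             // objects may also proceed directly to the clipper
        geometry_shade(d);
    if (d.stream_out_only) {               // an application may skip rasterization and
        stream_out(d);                     // read back unrasterized vertex data
        return;
    }
    clip(d);
    rasterize_and_shade(d);
}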

繪圖處理器800具有允許資料及訊息在處理器的主要元件之間傳遞的互連匯流排、互連結構、或一些其他互連機構。在一些實施例中,執行單元852A、852B及相關聯的快取851、紋理及媒體取樣器854、及紋理/取樣器快取858經由資料埠856互連以執行記憶體存取,並與處理器的渲染輸出管線元件通訊。在一些實施例中,取樣器854、快取851、858及執行單元852A、852B各具有獨立的記憶體存取路徑。 The graphics processor 800 has an interconnect bus, interconnect structure, or some other interconnect mechanism that allows data and messages to be transferred between the main components of the processor. In some embodiments, execution units 852A, 852B and associated cache 851, texture and media sampler 854, and texture/sampler cache 858 are interconnected via data 856 to perform memory access, and processing Rendering output pipeline component communication. In some embodiments, sampler 854, cache 851, 858, and execution units 852A, 852B each have separate memory access paths.

在一些實施例中,渲染輸出管線870包含光柵化(rasterizer)及深度測試元件873,其將基於頂點的物件轉換成關聯的基於像素的表示。在一些實施例中,該光柵化邏輯包括分窗器(windower)/遮蔽器(masker)單元以執行固定功能三角形及線光柵化。相關聯的渲染快取878及深度快取879在一些實施例中亦為可用的。像素操作元件877對資料執行基於像素的操作,然而在一些實例中,與2D操作相關聯的像素操作(例如,具有混色的位元區塊圖像傳送)是由2D引擎841執行,或者由顯示控制器843使用疊加顯示平面在顯示時間代替。在一些實施例中,共用的L3快取875可用於所有的繪圖元件,允許資料在不使用主系統記憶體之情況下的共用。 In some embodiments, the render output pipeline 870 includes a rasterizer and depth test component 873 that converts the vertice-based object into an associated pixel-based representation. In some embodiments, the rasterization logic includes a windower/masker unit to perform fixed function triangles and line rasterization. The associated render cache 878 and depth cache 879 are also available in some embodiments. Pixel manipulation element 877 performs pixel-based operations on the material, however in some examples, pixel operations associated with 2D operations (eg, bit-block image transfer with color mixing) are performed by 2D engine 841, or by display The controller 843 replaces the display time with the superimposed display plane. In some embodiments, the shared L3 cache 875 can be used for all of the drawing elements, allowing for sharing of data without the use of main system memory.

在一些實施例中,繪圖處理器媒體管線830包括媒體引擎837及視訊前端834。在一些實施例中,視訊前端834接收來自命令串流器803的管線命令。在一些實施例中,媒體管線830包括獨立的命令串流器。在一些實施例中,視訊前端834在將媒體命令傳送至媒體引擎837之前處理該等命令。在一些實施例中,媒體引擎837包括線程產生功能性以產生線程用於經由線程分派器831分派至線程執行邏輯850。 In some embodiments, graphics processor media pipeline 830 includes media engine 837 and video front end 834. In some embodiments, video front end 834 receives a pipeline command from command streamer 803. In some embodiments, media pipeline 830 includes a separate command stream. In some embodiments, video front end 834 processes the commands prior to transmitting the media commands to media engine 837. In some embodiments, media engine 837 includes thread generation functionality to generate threads for dispatching to thread execution logic 850 via thread dispatcher 831.

在一些實施例中,繪圖處理器800包括顯示引擎840。在一些實施例中,顯示引擎840在處理器800外部,並且經由環形互連802、或一些其他互連匯流排或結構與繪圖處理器耦接。在一些實施例中,顯示引擎840包括2D引擎841及顯示控制器843。在一些實施例中,顯示引擎840包含能夠獨立於3D管線進行操作的專用邏輯。在一些實施例中,顯示控制器843與顯示裝置(未示出)耦接,該顯示裝置可如在膝上型電腦中為系統整合式顯示裝置,或者為經由顯示裝置連接器附接的外部顯示裝置。 In some embodiments, graphics processor 800 includes display engine 840. In some embodiments, display engine 840 is external to processor 800 and coupled to the graphics processor via ring interconnect 802, or some other interconnect bus or structure. In some embodiments, display engine 840 includes a 2D engine 841 and a display controller 843. In some embodiments, display engine 840 includes dedicated logic that is capable of operating independently of the 3D pipeline. In some embodiments, display controller 843 is coupled to a display device (not shown), such as a system-integrated display device in a laptop or an external device attached via a display device connector Display device.

在一些實施例中,繪圖管線820及媒體管線830可被配置成基於多個繪圖及媒體程式介面執行操作,並且非特定於任一個應用程式介面(API)。在一些實施例中,用於繪圖處理器的驅動軟體將特定於特定繪圖或媒體程式庫的API調用轉譯成可由繪圖處理器處理的命令。在一些實施例中,提供對來自科納斯組織(Khronos Group)的開放圖形 程式庫(OpenGL)及開放計算語言(OpenCL)、來自微軟公司的Direct3D程式庫的支援,或可提供對OpenGL及D3D兩者的支援。亦可提供對開放源電腦視覺程式庫(OpenCV)的支援。具有相容3D管線的未來API亦可在能將未來API之管線映射至繪圖處理器之管線的情況下得到支援。 In some embodiments, graphics pipeline 820 and media pipeline 830 can be configured to perform operations based on multiple graphics and media programming interfaces, and are not specific to any one application interface (API). In some embodiments, the driver software for the graphics processor translates API calls specific to a particular drawing or media library into commands that can be processed by the graphics processor. In some embodiments, providing open graphics from the Khronos Group Support for OpenGL and OpenCL, Open3D libraries from Microsoft, or support for both OpenGL and D3D. Support for the Open Source Computer Vision Library (OpenCV) is also available. Future APIs with compatible 3D pipelines can also be supported in the case of pipelines that map future APIs to the pipeline of the graphics processor.

繪圖管線程式設計 Drawing pipeline programming

圖9A為繪示依據一些實施例之繪圖處理器命令格式900的方塊圖。圖9B為繪示依據一實施例之繪圖處理器命令序列910的方塊圖。圖9A中實線框繪示了在繪圖命令中通常包括的元件，而虛線則包括可選的或僅包括在繪圖命令之子集中的元件。圖9A的示例性的繪圖處理器命令格式900包括用以識別命令之目標客戶端902的資料欄位、命令操作碼(opcode)904、及用於命令的相關資料906。子操作碼905及命令大小908亦包括在一些命令中。 FIG. 9A is a block diagram illustrating a graphics processor command format 900 in accordance with some embodiments. FIG. 9B is a block diagram illustrating a graphics processor command sequence 910 in accordance with an embodiment. The solid-lined boxes in FIG. 9A illustrate the components that are generally included in a graphics command, while the dashed lines include components that are optional or that are only included in a subset of the graphics commands. The exemplary graphics processor command format 900 of FIG. 9A includes data fields to identify a target client 902 of the command, a command operation code (opcode) 904, and the relevant data 906 for the command. A sub-opcode 905 and a command size 908 are also included in some commands.

在一些實施例中,客戶端902指定繪圖裝置之處理命令資料的客戶端單元。在一些實施例中,繪圖處理器命令剖析器檢驗每個命令的客戶端欄位以調節命令的進一步處理,並將命令資料路由到適當的客戶端單元。在一些實施例中,繪圖處理器客戶端單元包括記憶體介面單元、渲染單元、2D單元、3D單元、及媒體單元。各個客戶端單元具有處理命令的相對應的處理管線。一旦命令由客戶端單 元接收,該客戶端單元讀取操作碼904,及若存在的話,讀取子操作碼905,以判定待執行的操作。該客戶端單元使用在資料欄位906中的資訊來執行命令。針對某些命令,需要顯式命令大小908來指定命令的大小。在一些實施例中,命令剖析器基於命令操作碼自動地判定命令中之至少一些命令的大小。在一些實施例中,經由雙字的倍數來對齊命令。 In some embodiments, the client 902 specifies a client unit of the drawing device that processes the command material. In some embodiments, the graphics processor command parser verifies the client field of each command to adjust the further processing of the command and route the command material to the appropriate client unit. In some embodiments, the graphics processor client unit includes a memory interface unit, a rendering unit, a 2D unit, a 3D unit, and a media unit. Each client unit has a corresponding processing pipeline that processes the commands. Once the order is ordered by the client Upon receipt, the client unit reads the opcode 904 and, if present, reads the sub-opcode 905 to determine the operation to be performed. The client unit uses the information in the data field 906 to execute the command. For some commands, an explicit command size 908 is required to specify the size of the command. In some embodiments, the command parser automatically determines the size of at least some of the commands based on the command opcode. In some embodiments, the commands are aligned via a multiple of a double word.
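
（以下為僅供說明的C++示意碼，非本揭露之一部分；標頭中的位元位置與客戶端編號皆為假設。）For illustration only, the following C++ sketch decodes the command fields named above (client 902, opcode 904, sub-opcode 905, command size 908) and routes by the client field; the bit positions and the client-to-unit numbering are assumptions, not the actual command encoding.

#include <cstdint>

// Illustrative decode of the command fields named in the text.
struct GfxCommand {
    uint32_t client;
    uint32_t opcode;
    uint32_t sub_opcode;
    uint32_t dword_count;   // size in double words
};

GfxCommand decode_header(uint32_t header) {
    GfxCommand c;
    c.client     = (header >> 29) & 0x7;   // client field 902 (assumed position)
    c.opcode     = (header >> 23) & 0x3f;  // opcode 904 (assumed position)
    c.sub_opcode = (header >> 16) & 0x7f;  // sub-opcode 905 (assumed position)
    // Some commands carry an explicit size 908; otherwise the parser derives it
    // from the opcode. Commands are aligned to multiples of a double word.
    c.dword_count = (header & 0xff) + 2;
    return c;
}

// The parser examines the client field to route the command data to the
// appropriate client unit (memory interface, render, 2D, 3D, or media unit).
enum class ClientUnit { Memory, Render, TwoD, ThreeD, Media, Unknown };

ClientUnit route(const GfxCommand& c) {
    switch (c.client) {                    // numbering is an assumption
        case 0: return ClientUnit::Memory;
        case 1: return ClientUnit::Render;
        case 2: return ClientUnit::TwoD;
        case 3: return ClientUnit::ThreeD;
        case 4: return ClientUnit::Media;
        default: return ClientUnit::Unknown;
    }
}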

圖9B中的流程圖示出示例性的繪圖處理器命令序列910。在一些實施例中,表徵繪圖處理器之實施例的資料處理系統之軟體或韌體使用所示之命令序列的版本來設定、執行、及終止一組繪圖操作。僅出於示例之目的示出及描述範例命令序列,因為實施例不限於這些特定命令或此命令序列。此外,命令可作為命令序列中的命令批次而被發佈,使得繪圖處理器將至少部分地同時處理命令序列。 The flowchart in FIG. 9B shows an exemplary drawing processor command sequence 910. In some embodiments, the software or firmware of the data processing system characterizing the embodiment of the graphics processor uses the version of the command sequence shown to set, execute, and terminate a set of drawing operations. The example command sequences are shown and described for purposes of example only, as embodiments are not limited to these particular commands or sequences of such commands. In addition, commands can be issued as command batches in a sequence of commands such that the graphics processor will process the sequence of commands at least partially simultaneously.

在一些實施例中,繪圖處理器命令序列910可以管線排清命令912開始,以使任何作用中的繪圖管線完成用於管線的當前未決的命令。在一些實施例中,3D管線922及媒體管線924不會同時地操作。執行管線排清以使得作用中的繪圖管線完成任何未決的命令。回應於管線排清,用於繪圖處理器的命令剖析器將暫停命令處理直到作用中的製圖引擎完成未決的操作,且相關的讀取快取是無效的。可選地,渲染快取中被標記為「髒(dirty)」的任何資料可被排清至記憶體。在一些實施例中,管線排清命令 912可被用於管線同步或在將繪圖處理器置於低功率狀態之前使用。 In some embodiments, the drawing processor command sequence 910 can begin with a pipeline clearing command 912 to cause any active drawing pipeline to complete the currently pending command for the pipeline. In some embodiments, 3D pipeline 922 and media pipeline 924 do not operate simultaneously. Execution pipeline clearing causes the active drawing pipeline to complete any pending commands. In response to the pipeline clearing, the command parser for the graphics processor will pause the command processing until the active graphics engine completes the pending operation and the associated read cache is invalid. Optionally, any material marked as "dirty" in the render cache can be flushed to memory. In some embodiments, the pipeline clearing command 912 can be used for pipeline synchronization or prior to placing the graphics processor in a low power state.

在一些實施例中,當命令序列要求繪圖處理器顯式地在管線之間切換時,使用管線選擇命令913。在一些實施例中,管線選擇命令913在發佈管線命令之前在執行上下文內僅被需要一次,除非該上下文是用於發佈針對兩個管線的命令。在一些實施例中,在經由管線選擇命令913的管線切換之前立即需要管線排清命令912。 In some embodiments, the pipeline selection command 913 is used when the command sequence requires the graphics processor to explicitly switch between pipelines. In some embodiments, the pipeline select command 913 is only needed once within the execution context before issuing the pipeline command, unless the context is for issuing commands for both pipelines. In some embodiments, a pipeline clearing command 912 is required immediately prior to pipeline switching via pipeline select command 913.

在一些實施例中,管線控制命令914配置繪圖管線以供操作,並且被用以編程3D管線922和媒體管線924。在一些實施例中,管線控制命令914配置用於作用中的管線的管線狀態。在一個實施例中,管線控制命令914被用於管線同步,並且在處理命令批次之前被用以清除作用中的管線內的一或多個快取記憶體的資料。 In some embodiments, pipeline control command 914 configures the graphics pipeline for operation and is used to program 3D pipeline 922 and media pipeline 924. In some embodiments, the pipeline control command 914 configures the pipeline status for the active pipeline. In one embodiment, the pipeline control command 914 is used for pipeline synchronization and is used to clear data for one or more cache memories within the active pipeline before processing the command batch.

在一些實施例中,返回緩衝狀態命令916被用以配置一組返回緩衝器以供個別的管線寫入資料。一些管線操作需要一或多個返回緩衝器的分配、選擇、或組態,操作在處理期間將中間資料寫入該一或多個返回緩衝器中。在一些實施例中,繪圖處理器亦使用一或多個返回緩衝器來儲存輸出資料,並執行跨線程通訊。在一些實施例中,返回緩衝狀態916包括選擇要用於一組管線操作的返回緩衝器的大小及數量。 In some embodiments, the return buffer status command 916 is used to configure a set of return buffers for individual pipelines to write data. Some pipeline operations require the allocation, selection, or configuration of one or more return buffers that write intermediate data into the one or more return buffers during processing. In some embodiments, the graphics processor also uses one or more return buffers to store the output data and perform cross-thread communication. In some embodiments, return buffer status 916 includes selecting the size and number of return buffers to be used for a set of pipeline operations.

命令序列中的剩餘命令基於用於操作的作用中的管線而不同。基於管線判定920,命令序列係針對從3D管線 狀態930開始的3D管線922設計,或者針對從媒體管線狀態940開始的媒體管線924設計。 The remaining commands in the command sequence differ based on the pipeline in effect for the operation. Based on pipeline decision 920, the command sequence is for the slave 3D pipeline The 3D pipeline 922 design begins with state 930 or is designed for media pipeline 924 starting from media pipeline state 940.

用於3D管線狀態930的命令包括3D狀態設定命令,用於頂點緩衝狀態、頂點元件狀態、恆定色彩狀態、深度緩衝狀態、以及將在處理3D基元命令之前被配置的其他狀態變數。這些命令的值係至少部份地依據使用中的特定3D API來判定。在一些實施例中,3D管線狀態930命令亦能夠選擇性地停用或者繞過某些管線元件,若那些元件將不被使用。 The commands for the 3D pipeline state 930 include 3D state setting commands for vertex buffer states, vertex component states, constant color states, depth buffer states, and other state variables that will be configured prior to processing the 3D primitive commands. The values of these commands are determined, at least in part, by the particular 3D API in use. In some embodiments, the 3D pipeline state 930 command can also selectively disable or bypass certain pipeline components if those components will not be used.

在一些實施例中,3D基元932命令係用以提交3D基元以由3D管線處理。經由3D基元932命令被傳遞至繪圖處理器的命令及相關聯參數被轉送至繪圖管線中的頂點提取功能。該頂點提取功能使用3D基元932命令資料來產生頂點資料結構。該頂點資料結構被儲存在一或多個返回緩衝器中。在一些實施例中,3D基元932命令被用以經由頂點著色器對3D基元執行頂點操作。為了處理頂點著色器,3D管線922將著色器執行線程分派至繪圖處理器執行單元。 In some embodiments, the 3D primitive 932 command is used to submit 3D primitives for processing by the 3D pipeline. Commands and associated parameters that are passed to the graphics processor via the 3D primitive 932 command are forwarded to the vertex extraction function in the drawing pipeline. The vertex extraction function uses the 3D primitive 932 command material to generate a vertex data structure. The vertex data structure is stored in one or more return buffers. In some embodiments, the 3D primitive 932 command is used to perform vertex operations on the 3D primitive via the vertex shader. To process the vertex shader, the 3D pipeline 922 dispatches the shader execution thread to the graphics processor execution unit.

在一些實施例中,3D管線922係經由執行934命令或事件而被觸發。在一些實施例中,暫存器寫入觸發命令執行。在一些實施例中,執行係經由命令序列中的「前進(go)」或「啟動(kick)」命令而被觸發。在一個實施例中,命令執行係使用管線同步命令來從繪圖管線中排清命令序列而被觸發。3D管線將針對3D基元執行幾何處理。 一旦操作完成,所得到的幾何物件將被光柵化,且像素引擎將所得到的像素上色。用以控制像素著色及像素後端操作的額外命令亦可被包括用於那些操作。 In some embodiments, the 3D pipeline 922 is triggered by executing a 934 command or event. In some embodiments, the scratchpad write triggers command execution. In some embodiments, execution is triggered via a "go" or "kick" command in the command sequence. In one embodiment, command execution is triggered using a pipeline synchronization command to clear a sequence of commands from the drawing pipeline. The 3D pipeline will perform geometric processing for the 3D primitives. Once the operation is complete, the resulting geometry will be rasterized and the pixel engine will color the resulting pixels. Additional commands to control pixel shading and pixel back end operations may also be included for those operations.

在一些實施例中,繪圖處理器命令序列910在執行媒體操作時遵循媒體管線924路徑。一般而言,用於媒體管線924之編程的具體用途和方式取決於待執行的媒體或計算操作。具體的媒體解碼操作可在媒體解碼期間被卸載至媒體管線。在一些實施例中,媒體管線亦可被繞過,且可使用由一或多個通用處理核心所提供的資源來整體或部份地執行媒體解碼。在一個實施例中,媒體管線亦包括用於通用繪圖處理器單元(GPGPU)操作的元件,其中繪圖處理器係用以使用並非顯式地與繪圖基元之渲染相關的計算著色器程式來執行SIMD向量操作。 In some embodiments, the graphics processor command sequence 910 follows the media pipeline 924 path when performing media operations. In general, the particular use and manner of programming for media pipeline 924 depends on the media or computing operations to be performed. Specific media decoding operations may be offloaded to the media pipeline during media decoding. In some embodiments, the media pipeline can also be bypassed, and media decoding can be performed in whole or in part using resources provided by one or more general processing cores. In one embodiment, the media pipeline also includes components for general purpose graphics processor unit (GPGPU) operations, wherein the graphics processor is configured to execute using a computation shader program that is not explicitly associated with rendering of the graphics primitives SIMD vector operation.

在一些實施例中,媒體管線924以與3D管線922類似的方式被組態。一組媒體管線狀態命令940被分配或放置在命令序列中的媒體物件命令942之前。在一些實施例中,媒體管線狀態命令940包括資料,用以組態將被用以處理媒體物件之媒體管線元件。此包括資料,用以組態媒體管線內之視訊解碼及視訊編碼邏輯,諸如編碼或解碼格式。在一些實施例中,媒體管線狀態命令940亦支援使用一或多個指標指向「間接」狀態元件,其包含狀態設定之批次。 In some embodiments, media pipeline 924 is configured in a similar manner as 3D pipeline 922. A set of media pipeline status commands 940 are assigned or placed before the media object command 942 in the command sequence. In some embodiments, media pipeline status command 940 includes data to configure media pipeline elements that will be used to process media objects. This includes data for configuring video decoding and video encoding logic within the media pipeline, such as encoding or decoding formats. In some embodiments, the media pipeline status command 940 also supports the use of one or more indicators to point to an "indirect" status element that includes a batch of status settings.

在一些實施例中,媒體物件命令942將指標供應至媒體物件以供媒體管線處理。媒體物件包括記憶體緩衝器, 其包含待處理的視訊資料。在一些實施例中,在發出媒體物件命令942之前,所有的媒體管線狀態必須是有效的。一旦管線狀態被組態且媒體物件命令942被排進佇列,媒體管線924經由執行命令944或等效的執行事件(例如,暫存器寫入)而被觸發。來自媒體管線924的輸出可接著由3D管線922或媒體管線924所提供的操作進行後處理。在一些實施例中,GPGPU操作以與媒體操作相似的方式而被組態及執行。 In some embodiments, the media item command 942 supplies the metrics to the media item for processing by the media pipeline. Media objects include a memory buffer, It contains the video material to be processed. In some embodiments, all media pipeline states must be valid before the media object command 942 is issued. Once the pipeline status is configured and the media object command 942 is queued, the media pipeline 924 is triggered via an execution command 944 or an equivalent execution event (eg, a scratchpad write). The output from media pipeline 924 can then be post-processed by operations provided by 3D pipeline 922 or media pipeline 924. In some embodiments, GPGPU operations are configured and executed in a manner similar to media operations.
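
（以下為僅供說明的C++示意碼，非本揭露之一部分；命令符記的數值為佔位值。）For illustration only, the following C++ sketch emits a command sequence in the order of Figure 9B (pipeline flush 912, pipeline select 913, pipeline control 914, return buffer state 916, then either the 3D path or the media path followed by an execute command); the token values are placeholders.

#include <cstdint>
#include <vector>

// Hypothetical command tokens; only the ordering follows command sequence 910.
enum Cmd : uint32_t {
    PIPELINE_FLUSH = 1, PIPELINE_SELECT, PIPELINE_CONTROL, RETURN_BUFFER_STATE,
    STATE_3D, PRIMITIVE_3D, MEDIA_STATE, MEDIA_OBJECT, EXECUTE
};

enum class Pipeline { ThreeD, Media };

std::vector<uint32_t> build_sequence(Pipeline target) {
    std::vector<uint32_t> seq;
    seq.push_back(PIPELINE_FLUSH);       // 912: let the active pipeline drain
    seq.push_back(PIPELINE_SELECT);      // 913: only needed when switching pipelines
    seq.push_back(PIPELINE_CONTROL);     // 914: configure/synchronize the pipeline
    seq.push_back(RETURN_BUFFER_STATE);  // 916: configure the return buffers
    if (target == Pipeline::ThreeD) {
        seq.push_back(STATE_3D);         // 930: vertex buffer/element state, etc.
        seq.push_back(PRIMITIVE_3D);     // 932: submit 3D primitives
    } else {
        seq.push_back(MEDIA_STATE);      // 940: codec/format configuration
        seq.push_back(MEDIA_OBJECT);     // 942: pointers to media objects
    }
    seq.push_back(EXECUTE);              // 934/944: trigger execution
    return seq;
}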

繪圖軟體架構 Drawing software architecture

圖10繪示依據一些實施例之用於資料處理系統1000的示例性繪圖軟體結構。在一些實施例中,軟體架構包括3D繪圖應用程式1010、作業系統1020、及至少一個處理器1030。在一些實施例中,處理器1030包括繪圖處理器1032及一或多個通用處理器核心1034。繪圖應用程式1010及作業系統1020各在資料處理系統之系統記憶體1050中執行。 FIG. 10 illustrates an exemplary drawing software structure for data processing system 1000 in accordance with some embodiments. In some embodiments, the software architecture includes a 3D graphics application 1010, an operating system 1020, and at least one processor 1030. In some embodiments, processor 1030 includes a graphics processor 1032 and one or more general purpose processor cores 1034. The drawing application 1010 and the operating system 1020 are each executed in a system memory 1050 of the data processing system.

在一些實施例中,3D繪圖應用程式1010包含一或多個著色器程式,其包括著色器指令1012。著色器語言指令可以是高階著色器語言,諸如高階著色器語言(HLSL)或是OpenGL著色器語言(GLSL)。應用程式亦包括適於通用處理器核心1034執行之機器語言形式的可執行指令1014。應用程式亦包括由頂點資料所界定的繪圖物件1016。 In some embodiments, the 3D drawing application 1010 includes one or more shader programs that include shader instructions 1012. The shader language instructions can be high order shader languages such as High Order Shader Language (HLSL) or OpenGL Shader Language (GLSL). The application also includes executable instructions 1014 in a machine language format suitable for execution by general purpose processor core 1034. The application also includes a drawing object 1016 defined by vertex data.

在一些實施例中,作業系統1020為來自微軟公司的Microsoft® Windows®作業系統、專有的類UNIX作業系統、或使用Linux核心之變體的開源類UNIX作業系統。當使用Direct3D API時,作業系統1020使用前端著色器編譯器1024來將HLSL之形式的任何著色器指令1012編譯成較低階的著色器語言。該編譯可以是即時(JIT)編譯或者該應用程式可執行著色器預編譯。在一些實施例中,高階著色器在3D繪圖應用程式1010的編譯期間被編譯成低階著色器。 In some embodiments, operating system 1020 is a Microsoft® Windows® operating system from Microsoft Corporation, a proprietary UNIX-like operating system, or an open source UNIX operating system that uses a variant of the Linux kernel. When using the Direct3D API, the operating system 1020 uses the front end shader compiler 1024 to compile any shader instructions 1012 in the form of HLSL into lower order color shader languages. The compilation can be a just-in-time (JIT) compilation or the application can perform a shader precompilation. In some embodiments, the high order shader is compiled into a low order shader during compilation of the 3D drawing application 1010.

在一些實施例中,使用者模式繪圖驅動器1026包含後端著色器編譯器1027,用以將著色器指令1012轉換成硬體特定表示。當使用OpenGL API時,GLSL高階語言形式的著色器指令1012被傳遞至使用者模式繪圖驅動器1026用於編譯。在一些實施例中,使用者模式繪圖驅動器1026使用作業系統核心模式功能1028來與核心模式繪圖驅動器1029通訊。在一些實施例中,核心模式繪圖驅動器1029與繪圖處理器1032通訊以分派命令及指令。 In some embodiments, the user mode drawing driver 1026 includes a back end shader compiler 1027 for converting the shader instructions 1012 into a hardware specific representation. When the OpenGL API is used, the GLSL high-level language form of the color wheel instructions 1012 is passed to the user mode drawing driver 1026 for compilation. In some embodiments, the user mode drawing driver 1026 uses the operating system core mode function 1028 to communicate with the core mode drawing driver 1029. In some embodiments, core mode drawing driver 1029 communicates with graphics processor 1032 to dispatch commands and instructions.
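
（以下為僅供說明的C++示意碼，非本揭露之一部分；介面為假設的空殼。）For illustration only, the following C++ sketch traces the compilation paths described above: HLSL is first lowered by the OS front-end compiler 1024 (JIT or application-side precompilation), GLSL is handed to the user-mode driver's back-end compiler 1027, and the kernel-mode driver 1029 ultimately dispatches the result to the graphics processor; the interfaces are hypothetical stubs.

#include <string>

// Hypothetical stand-ins for the components named above; bodies are stubs.
struct ShaderBinary { std::string code; };
static ShaderBinary frontend_compile_hlsl(const std::string& hlsl) { return {hlsl}; } // front-end compiler 1024
static ShaderBinary backend_compile(const std::string& src)        { return {src};  } // back-end compiler 1027
static void kernel_mode_dispatch(const ShaderBinary&)              {}                 // kernel-mode driver 1029

enum class Api { Direct3D, OpenGL };

void compile_and_dispatch(Api api, const std::string& source) {
    ShaderBinary lowered = (api == Api::Direct3D)
        ? frontend_compile_hlsl(source)   // HLSL goes through the OS front end first
        : ShaderBinary{source};           // GLSL goes straight to the user-mode driver
    kernel_mode_dispatch(backend_compile(lowered.code));
}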

IP核心實作 IP core implementation

至少一個實施例的一或多個態樣可由儲存在機器可讀取媒體上的代表性程式碼實施,該機器可讀取媒體表示及/或定義在積體電路內的邏輯,諸如處理器。例如,機器可讀取媒體可包括表示處理器內之各種邏輯的指令。當由機器讀取時,指令可使機器製造邏輯來執行本文所述之技 術。被稱為「IP核心」的此種表示為用於積體電路的可重複使用的邏輯單元,其可被儲存在有形的、機器可讀取媒體上作為描述積體電路之結構的硬體模型。硬體模型可被供應給各種客戶或製造設施,其將硬體模型加載在製造積體電路的製造機器上。可製造積體電路使得電路執行與本文所述之任何實施例相關聯地描述的操作。 One or more aspects of at least one embodiment can be implemented by a representative code stored on a machine readable medium, the machine readable media representation and/or logic defined within the integrated circuit, such as a processor. For example, machine readable media can include instructions that represent various logic within the processor. When read by a machine, the instructions cause the machine to make logic to perform the techniques described herein Surgery. This representation, referred to as an "IP core," is a reusable logic unit for an integrated circuit that can be stored on a tangible, machine-readable medium as a hardware model that describes the structure of the integrated circuit. . The hardware model can be supplied to various customers or manufacturing facilities that load the hardware model onto the manufacturing machine that manufactures the integrated circuit. The integrated circuit can be fabricated such that the circuit performs the operations described in association with any of the embodiments described herein.

圖11為繪示依據一實施例之可被用於製造積體電路以執行操作之IP核心開發系統1100的方塊圖。可使用IP核心開發系統1100來產生可被整合進較大型設計或被用來建構整個積體電路(例如,SOC積體電路)的模組化的、可重複使用的設計。設計設施1130可以高階程式語言(例如,C/C++)來產生IP核心設計的軟體模擬1110。軟體模擬1110可被用來設計、測試、及驗證IP核心的行為。可接著從模擬模型1100建立或合成暫存器傳送層級(RTL)設計。該RTL設計1115為模型化硬體暫存器之間的數位信號流之積體電路的行為的抽象化,包括使用模型化的數位信號所執行的相關聯的邏輯。除了RTL設計1115之外,亦可建立、設計、或合成在邏輯層級或電晶體層級的較低階設計。因此,初始設計和模擬的特定細節可不同。 11 is a block diagram of an IP core development system 1100 that can be used to fabricate integrated circuits to perform operations in accordance with an embodiment. The IP core development system 1100 can be used to create a modular, reusable design that can be integrated into a larger design or used to construct an entire integrated circuit (eg, a SOC integrated circuit). The design facility 1130 can generate a software simulation 1110 of the IP core design in a high-level programming language (eg, C/C++). Software Simulation 1110 can be used to design, test, and verify the behavior of IP cores. A scratchpad transfer level (RTL) design can then be built or synthesized from the simulation model 1100. The RTL design 1115 is an abstraction of the behavior of the integrated circuit of the digital signal stream between the modeled hardware registers, including the associated logic performed using the modeled digital signals. In addition to the RTL design 1115, lower order designs at the logic level or the transistor level can be created, designed, or synthesized. Therefore, the specific details of the initial design and simulation can be different.

RTL設計1115或等效物可進一步由設計設施合成為硬體模型1120,其可以是硬體描述語言(HDL)格式或實體設計資料的某些其他表示。可進一步模擬或測試HDL,以驗證該IP核心設計。可使用非揮發性記憶體1140(例如,硬碟、快閃記憶體、或任何非揮發性儲存媒體)來儲存IP 核心設計用以傳遞至第三方製造設施1165。替代地,可經由有線連接1150或無線連接1160來傳輸(例如,經由網際網路)IP核心設計。製造設施1165可接著製造至少部分依據該IP核心設計的積體電路。所製造的積體電路可被組態成依據本文所述之至少一個實施例執行操作。 The RTL design 1115 or equivalent may be further synthesized by the design facility as a hardware model 1120, which may be a hardware description language (HDL) format or some other representation of the physical design material. The HDL can be further simulated or tested to verify the IP core design. Non-volatile memory 1140 (eg, hard drive, flash memory, or any non-volatile storage medium) can be used to store IP The core design is passed to a third party manufacturing facility 1165. Alternatively, the IP core design can be transmitted (eg, via the internet) via wired connection 1150 or wireless connection 1160. Manufacturing facility 1165 can then fabricate an integrated circuit that is at least partially designed in accordance with the IP core. The fabricated integrated circuit can be configured to perform operations in accordance with at least one embodiment described herein.

圖12為繪出依據一實施例之可使用一或多個IP核心來製造的示例性系統單晶片積體電路1200的方塊圖。示例性積體電路包括一或多個應用程式處理器1205(例如,CPU)、至少一個繪圖處理器1210、且可額外地包括影像處理器1215及/或視訊處理器1220,其中任一者可為來自相同或多個不同是設計設施的模組化IP核心。積體電路包括周邊或匯流排邏輯,其包括USB控制器1225、UART控制器1230、SPI/SDIO控制器1235、及I2S/I2C控制器1240。另外,積體電路可包括顯示裝置1245,其耦接至高解析度多媒體介面(HDMI)控制器1250及行動產業處理器介面(MIPI)顯示介面1255中之一或多者。儲存可由包括快閃記憶體及快閃記憶體控制器的快閃記憶體子系統1260提供。記憶體介面可經由記憶體控制器1265提供,用於存取SDRAM或SRAM記憶體裝置。一些積體電路另外包括嵌入式安全引擎1270。 12 is a block diagram depicting an exemplary system single-chip integrated circuit 1200 that can be fabricated using one or more IP cores in accordance with an embodiment. The exemplary integrated circuit includes one or more application processors 1205 (eg, CPUs), at least one graphics processor 1210, and may additionally include an image processor 1215 and/or a video processor 1220, any of which may A modular IP core that is designed from the same or multiple different facilities. The integrated circuit includes peripheral or busbar logic including a USB controller 1225, a UART controller 1230, an SPI/SDIO controller 1235, and an I 2 S/I 2 C controller 1240. In addition, the integrated circuit can include a display device 1245 coupled to one or more of a high resolution multimedia interface (HDMI) controller 1250 and a mobile industry processor interface (MIPI) display interface 1255. The storage may be provided by a flash memory subsystem 1260 that includes a flash memory and a flash memory controller. The memory interface can be provided via memory controller 1265 for accessing SDRAM or SRAM memory devices. Some integrated circuits additionally include an embedded security engine 1270.

此外，其他邏輯及電路可被包括在積體電路1200的處理器中，包括額外的繪圖處理器/核心、周邊介面控制器、或通用處理器核心。 In addition, other logic and circuitry may be included in the processor of integrated circuit 1200, including additional graphics processors/cores, peripheral interface controllers, or general purpose processor cores.

用於型樣驅動、自適應虛擬繪圖處理單元的裝置及方法 Apparatus and method for a pattern-driven, self-adaptive virtual graphics processing unit

在當前的繪圖處理器單元(GPU)虛擬化實現中，虛擬機監視器(VMM)，特別是虛擬GPU(vGPU)裝置模型，為了安全性及多工而捕獲並模擬訪客(guest)對特權GPU資源的存取，同時讓對性能關鍵資源的CPU存取直接通過(pass through)（例如，CPU對繪圖記憶體的存取）。GPU命令一旦被提交，即（在GPU中）被直接執行而不需VMM干預。因此，能達到接近原生(native)的效能。 In current graphics processing unit (GPU) virtualization implementations, the virtual machine monitor (VMM), and in particular the virtual GPU (vGPU) device model, traps and emulates guest accesses to privileged GPU resources for security and multiplexing, while passing through CPU accesses to performance-critical resources (for example, CPU access to graphics memory). GPU commands, once submitted, are executed directly (by the GPU) without VMM intervention. Therefore, near-native performance can be achieved.

在GPU執行來自多個vGPU訪客的工作負載的架構中,從vGPU訪客的其中一者切換到下一個vGPU訪客將導致GPU上下文切換並且所需的硬體級環型引擎上下文將被保存/恢復。不同於上下文係非常輕量的CPU,GPU上下文是非常重的。因此,GPU上下文切換比CPU上下文切換花費顯著地更多的時間。分析資料顯示GPU上下文切換可能需要100~300微秒(us),而CPU上下文切換可能僅需要數百奈秒(ns)。 In an architecture where the GPU performs workloads from multiple vGPU guests, switching from one of the vGPU guests to the next vGPU guest will result in a GPU context switch and the required hardware level ring engine context will be saved/restored. Unlike a very lightweight CPU with a context, the GPU context is very heavy. Therefore, GPU context switching takes significantly more time than CPU context switching. Analysis data shows that GPU context switching may take 100 to 300 microseconds (us), while CPU context switching may only take hundreds of nanoseconds (ns).

因此,GPU排程器策略將不如CPU中那樣頻繁地觸發上下文切換。通常,它每幾毫秒(ms)發生。典型的GPU排程器提供基於配額(quantum)的策略來週期性地排程vGPU實例。應注意的是,這裡的週期可以是固定的,或是加權的,或者甚至是每個附加策略不同的。各個vGPU,一旦經排程,被提供一配額,其中執行由CPU準備的命令(CMD)直到該配額被消耗光,及/或直到vGPU被鎖住(諸如由於旗號、等待事件等等)。雖然此策略在CPU 側執行良好,但它對GPU側帶來了額外的挑戰,因為vGPU實例的執行依賴於由CPU準備的可用CMD。也就是說,vGPU實例(VM)可能具有用以運行GPU CMD的配額,但在給定的時間點可能沒有準備好的CMD(即,GPU必須等待來自CPU的CMD準備就緒,這可能需要時間)。因此,GPU可保持在閒置狀態中,直到下一個CMD可用,在此期間GPU循環被浪費了。圖13繪示VM1的GPU利用率大約為20%的示例性情況,其中大約80%的GPU資源被浪費在等待下一個可用命令。 Therefore, the GPU scheduler policy will not trigger context switching as frequently as in the CPU. Usually, it happens every few milliseconds (ms). A typical GPU scheduler provides a quantum based policy to schedule vGPU instances periodically. It should be noted that the period here can be fixed, or weighted, or even different for each additional strategy. Each vGPU, once scheduled, is provided with a quota in which commands prepared by the CPU (CMD) are executed until the quota is consumed, and/or until the vGPU is locked (such as due to a flag, a wait event, etc.). Although this strategy is in the CPU The side performs well, but it poses an additional challenge to the GPU side because the execution of the vGPU instance depends on the available CMD prepared by the CPU. That is, a vGPU instance (VM) might have a quota to run GPU CMD, but there may not be a ready CMD at a given point in time (ie, the GPU must wait for the CMD from the CPU to be ready, which may take time) . Thus, the GPU can remain in an idle state until the next CMD is available, during which time the GPU loop is wasted. Figure 13 illustrates an exemplary scenario where VM1's GPU utilization is approximately 20%, with approximately 80% of GPU resources being wasted waiting for the next available command.

一個選項是當偵測到沒有可用CMD時,立即將vGPU實例排程出去。然而,因為GPU中的上下文切換成本相較於CPU是如此高,其可能導致頻繁的GPU上下文切換有非常高的切換成本。因此對GPU排程器而言,判斷在沒有可用CMD時讓出(排程出)vGPU實例的條件集合是一個挑戰。 One option is to schedule the vGPU instance out immediately when it detects that no CMD is available. However, because the context switching cost in the GPU is so high compared to the CPU, it can result in very high switching costs for frequent GPU context switching. So for the GPU scheduler, it is a challenge to determine the set of conditions for vending (routing out) a vGPU instance when no CMD is available.

本文所描述之發明的實施例實現用於高效讓出(yielding)的型樣驅動、自適應方案(PDSAS)。如圖14中所繪示,在一個實施例中,PDSAS邏輯1470係建立在現有的GPU排程器1412內或者在其頂部,且實際上,可以任何現有的GPU排程器來實施。 Embodiments of the invention described herein implement a pattern driven, adaptive scheme (PDSAS) for efficient yielding. As depicted in FIG. 14, in one embodiment, PDSAS logic 1470 is built into or on top of an existing GPU scheduler 1412, and indeed, can be implemented by any existing GPU scheduler.

A. 虛擬繪圖處理總覽 A. Virtual Drawing Processing Overview

現在將提供圖14中所示的示例性實施例1400的額外細節,隨後(在B部分)是由PDSAS邏輯1470執行的操作 的詳細描述。所示的實施例包括多個VM,例如,VM 1430和VM 1440,由超管理器1410(有時稱為虛擬機監視器(VMM))管理,其可存取GPU 1420中的GPU特徵的全部陣列。虛擬GPU(vGPU)1460A-B可基於GPU虛擬化技術來存取由GPU硬體1420提供的全部功能。在各種實施例中,超管理器1410可追蹤、管理一或多個vGPU的資源及生命週期。雖然圖14中僅示出兩個vGPU 1460A-B,超管理器1410可包括其他的vGPU。在一些實施例中,vGPU可與原生的GPU驅動器交互作用。VM 1430或VM 1440可透過vGPU 1460A-B存取GPU特徵的全部陣列。可每配額或每事件切換vGPU上下文。在一些實施例中,上下文切換可每GPU渲染引擎(例如,3D渲染引擎或blitter渲染引擎)發生。週期性的切換允許多個VM以對VM 1430、1440的工作負載透明的方式共用實體GPU 1420。 Additional details of the exemplary embodiment 1400 shown in FIG. 14 will now be provided, followed by (in section B) the operations performed by the PDSAS logic 1470. Detailed description. The illustrated embodiment includes a plurality of VMs, such as VM 1430 and VM 1440, managed by a hypervisor 1410 (sometimes referred to as a virtual machine monitor (VMM)) that can access all of the GPU features in GPU 1420. Array. Virtual GPU (vGPU) 1460A-B can access all of the functionality provided by GPU hardware 1420 based on GPU virtualization technology. In various embodiments, hypervisor 1410 can track and manage the resources and lifecycle of one or more vGPUs. Although only two vGPUs 1460A-B are shown in FIG. 14, hypervisor 1410 may include other vGPUs. In some embodiments, the vGPU can interact with a native GPU driver. The VM 1430 or VM 1440 can access the entire array of GPU features through the vGPU 1460A-B. The vGPU context can be switched per quota or per event. In some embodiments, context switching may occur per GPU rendering engine (eg, a 3D rendering engine or a blitter rendering engine). Periodic switching allows multiple VMs to share physical GPU 1420 in a manner that is transparent to the workload of VMs 1430, 1440.

很像單一中央處理單元(CPU)核心,可由VM 1430分配有限的時間,GPU 1420亦可由VM 1430分配有限的時間。另一虛擬化模型是分時,其中GPU 1420或其一部分可由多個VM,例如,VM 1430和VM 1440,以多工的方式共享。在其他的實施例中亦可使用其他的GPU虛擬化模型。在各種實施例中,與GPU 1420相關聯的繪圖記憶體可被劃分,並被分配給超管理器1410中的各種vGPU。 Much like a single central processing unit (CPU) core, which can be allocated a limited amount of time by the VM 1430, the GPU 1420 can also be allocated a limited amount of time by the VM 1430. Another virtualization model is time sharing, where GPU 1420 or a portion thereof can be shared by multiple VMs, such as VM 1430 and VM 1440, in a multiplexed manner. Other GPU virtualization models may also be used in other embodiments. In various embodiments, the graphics memory associated with GPU 1420 can be partitioned and assigned to various vGPUs in hypervisor 1410.

在各種實施例中,圖形轉換表(GTT)可被VM及/或GPU 1420用來將繪圖處理器記憶體映射至系統記憶體, 或用來將GPU虛擬位址轉換為實體位址。在一些實施例中,超管理器1410可經由影射(shadow)GTT管理繪圖記憶體映射,且影射GTT可被保持在vGPU實例,例如,vGPU 1460A-B中。在各種實施例中,各個VM可具有一對應的影射GTT來保持繪圖記憶體位址與實體記憶體位址之間的映射。在一些實施例中,影射GTT可被共用並且維護多個VM的映射。在一些實施例中,各個VM(例如,VM 1430和VM 1440),可包括每程序(per-process)和全域GTT二者。 In various embodiments, a graphics conversion table (GTT) may be used by the VM and/or GPU 1420 to map graphics processor memory to system memory. Or used to convert a GPU virtual address to a physical address. In some embodiments, hypervisor 1410 can manage the mapping memory map via a shadow GTT, and the mapping GTT can be maintained in a vGPU instance, such as vGPU 1460A-B. In various embodiments, each VM may have a corresponding mapping GTT to maintain a mapping between the mapped memory address and the physical memory address. In some embodiments, the mapping GTT can be shared and maintain a mapping of multiple VMs. In some embodiments, individual VMs (eg, VM 1430 and VM 1440) may include both per-process and global GTT.

一些實施例可使用系統記憶體作為繪圖記憶體。系統記憶體可由GPU頁表被映射至多個虛擬位址空間。不同的實施例可支援全域繪圖記憶體空間和每程序繪圖記憶體位址空間。全域繪圖記憶體空間可以是虛擬位址空間,例如,2GB,透過全域圖形轉換表(GGTT)映射。此位址空間的較低部分可被稱為「孔(aperture)」,可由GPU 1420和CPU(未示出)二者存取。此位址空間的較高部分被稱為高繪圖記憶體空間或隱藏繪圖記憶體空間,其可僅由GPU 1420存取。在各種實施例中,影射全域圖形轉換表(SGGTT)可被VM 1430、VM 1440、超管理器1410、或GPU 1420用來依據全域記憶體位址空間將繪圖記憶體位址轉換成個別的系統記憶體位址。 Some embodiments may use system memory as the graphics memory. System memory can be mapped to multiple virtual address spaces by GPU page tables. Different embodiments can support global graphics memory space and per program graphics memory address space. The global graphics memory space can be a virtual address space, for example, 2 GB, mapped through a Global Graphics Transformation Table (GGTT). The lower portion of this address space may be referred to as an "aperture" and may be accessed by both GPU 1420 and a CPU (not shown). The upper portion of this address space is referred to as a high graphics memory space or a hidden graphics memory space, which can be accessed only by GPU 1420. In various embodiments, a mapped global graphics conversion table (SGGTT) can be used by VM 1430, VM 1440, hypervisor 1410, or GPU 1420 to convert a drawing memory address into individual system memory locations in accordance with a global memory address space. site.
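
（以下為僅供說明的C++示意碼，非本揭露之一部分；頁面大小與孔徑分界值皆為假設。）For illustration only, the following C++ sketch models a (shadow) global graphics translation table that maps graphics memory pages to system physical pages and distinguishes the CPU-visible aperture from the hidden upper space; the page size, table layout, and aperture boundary are assumptions of this sketch.

#include <cstdint>
#include <unordered_map>

// Simplified model of a (shadow) global graphics translation table. A 4 KiB
// page size and a plain map are illustrative assumptions; real GGTTs are
// hardware page tables.
class ShadowGGTT {
public:
    static constexpr uint64_t kPageSize = 4096;
    static constexpr uint64_t kApertureLimit = 512ull << 20;  // assumed aperture/hidden split

    void map(uint64_t gfx_addr, uint64_t sys_phys_page) {
        entries_[gfx_addr / kPageSize] = sys_phys_page;
    }

    // Translate a global graphics memory address to a system physical address.
    bool translate(uint64_t gfx_addr, uint64_t* out_phys) const {
        auto it = entries_.find(gfx_addr / kPageSize);
        if (it == entries_.end()) return false;
        *out_phys = it->second * kPageSize + gfx_addr % kPageSize;
        return true;
    }

    // The lower part of the global space (the "aperture") is reachable by both
    // the CPU and the GPU; the upper ("hidden") part only by the GPU.
    bool cpu_accessible(uint64_t gfx_addr) const { return gfx_addr < kApertureLimit; }

private:
    std::unordered_map<uint64_t, uint64_t> entries_;  // page index -> physical page
};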

超管理器1410可使用命令剖析器1418來偵測用於由VM 1430或VM 1440所提交之命令的GPU渲染引擎的潛在記憶體工作集。在各種實施例中,VM 1430可具有個別 的命令緩衝器(未示出),以保存來自不同工作負載(例如,3D工作負載和媒體工作負載)的命令。相似地,VM 1440可具有個別的命令緩衝器(未示出),以保存來自這些不同工作負載的命令。 The hypervisor 1410 can use the command parser 1418 to detect potential memory working sets of the GPU rendering engine for commands submitted by the VM 1430 or VM 1440. In various embodiments, VM 1430 can have individual Command buffers (not shown) to hold commands from different workloads (eg, 3D workloads and media workloads). Similarly, VM 1440 can have individual command buffers (not shown) to hold commands from these different workloads.

在各種實施例中,命令剖析器1418可掃描來自VM的命令,並判斷該命令是否包含記憶體運算元。若其包含記憶體運算元,則命令剖析器1418可從,例如,用於VM的GTT,讀取相關的繪圖記憶體空間映射,然後將其寫入SGGTT的工作負載特定部分。在工作負載的命令緩衝器整個被掃描了之後,可產生或更新保存與此工作負載相關聯的記憶體位址空間映射的SGGTT。此外,藉由掃描來自VM 1430或VM 1440的待執行命令,命令剖析器1418還可諸如透過減輕惡意的操作來提高GPU操作的安全性。 In various embodiments, the command parser 1418 can scan for commands from the VM and determine if the command includes a memory operand. If it contains a memory operand, the command parser 1418 can read the associated drawing memory space map from, for example, the GTT for the VM and then write it to the workload specific portion of the SGGTT. After the workload's command buffer has been scanned in its entirety, the SGGTT that holds the memory address space map associated with this workload can be generated or updated. Moreover, by scanning pending commands from VM 1430 or VM 1440, command parser 1418 can also increase the security of GPU operations, such as by mitigating malicious operations.

在一些實施例中，可產生一個SGGTT來保存來自所有VM的所有工作負載的轉換。在一些實施例中，可產生一個SGGTT來保存例如僅來自一個VM的所有工作負載的轉換。可由命令剖析器1418根據需要來構建工作負載特定SGGTT部分，以保存特定工作負載（例如，來自VM 1430的3D工作負載或來自VM 1440的媒體工作負載）的轉換。 In some embodiments, one SGGTT may be generated to hold the translations for all workloads from all VMs. In some embodiments, one SGGTT may be generated to hold the translations for all workloads from, for example, only one VM. The workload-specific SGGTT portion may be constructed on demand by the command parser 1418 to hold the translations for a specific workload (e.g., a 3D workload from VM 1430 or a media workload from VM 1440).
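
（以下為僅供說明的C++示意碼，非本揭露之一部分；資料結構為假設。）For illustration only, the following C++ sketch shows how a command-buffer scan of the kind described above might mirror the guest's GTT mappings for memory operands into a workload-specific SGGTT portion and reject unmapped accesses; the data structures and 4 KiB page granularity are hypothetical.

#include <cstdint>
#include <optional>
#include <unordered_map>
#include <vector>

// Minimal stand-ins for the structures involved in the scan described above.
struct GuestCmd { bool has_memory_operand; uint64_t gfx_addr; };
using GuestGTT  = std::unordered_map<uint64_t, uint64_t>;  // graphics page -> physical page
using SGGTTPart = std::unordered_map<uint64_t, uint64_t>;  // per-workload shadow portion

// Scan a workload's command buffer: for every command that carries a memory
// operand, look up the guest's own GTT mapping and mirror it into the
// workload-specific SGGTT portion. Unmapped operands are reported so the
// hypervisor can reject potentially malicious submissions.
std::optional<SGGTTPart> scan_workload(const std::vector<GuestCmd>& cmds,
                                       const GuestGTT& guest_gtt) {
    SGGTTPart part;
    for (const GuestCmd& c : cmds) {
        if (!c.has_memory_operand) continue;
        uint64_t page = c.gfx_addr >> 12;               // assumed 4 KiB pages
        auto it = guest_gtt.find(page);
        if (it == guest_gtt.end()) return std::nullopt; // invalid access
        part[page] = it->second;
    }
    return part;  // merged into the SGGTT once the whole buffer has been scanned
}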

B. 用於高效讓出的型樣驅動、自適應方案(PDSAS) B. Pattern Driven, Adaptive Solutions (PDSAS) for efficient yielding

如上所述,本發明的實施例實現用於高效讓出的型樣 驅動、自適應方案(PDSAS)。如圖14中所示,在一實施例中,PDSAS邏輯1470係建立在現有的GPU排程器1412內或者在其頂部,且實際上,可以任何現有的GPU排程器來實施。 As described above, embodiments of the present invention implement a pattern for efficient yielding Drive, Adaptive Solutions (PDSAS). As shown in FIG. 14, in an embodiment, PDSAS logic 1470 is built into or on top of an existing GPU scheduler 1412, and in fact, can be implemented by any existing GPU scheduler.

本發明的一個實施例監控活動並產生運行期間GPU執行狀態的分析型樣,包括針對給定的VM,GPU可維持在閒置狀態中多長時間。在一個實施例中,這是透過判斷從最後CMD完成的時間直到新的CMD可用的時間的時間差來完成。然後可使用VM的這些過去的執行型樣來預測VM的行為,並擴展GPU排程器1412來對是否讓出GPU做出更好的決定,從而提高效率。 One embodiment of the present invention monitors activities and produces an analytic pattern of GPU execution states during operation, including how long the GPU can remain in an idle state for a given VM. In one embodiment, this is done by judging the time difference from the time the last CMD was completed until the time the new CMD is available. These past execution patterns of the VM can then be used to predict the behavior of the VM and extend the GPU scheduler 1412 to improve the efficiency of whether to make better decisions on the GPU.

為了執行這些操作,PDSAS邏輯1470的一個實施例將每VM一批GPU命令的執行加上戳記(stamp),以確定如何使用GPU資源的型樣。特別是,PDSAS邏輯1470收集從GPU完成先前提交的CMD(可能是批次格式)的執行的時間點(其中最初沒有可提交的CMD)到新的CMD可用的時間點的持續時間的統計。 To perform these operations, one embodiment of PDSAS Logic 1470 adds a stamp to the execution of each VM GPU command to determine how to use the GPU resource's pattern. In particular, PDSAS Logic 1470 collects statistics on the duration of the point in time at which the GPU completes the execution of the previously submitted CMD (possibly batch format) (where there is initially no committable CMD) to the new CMD available.

在一個實施例中,分析資料包括針對各個VM之GPU忙碌時間(busy time)及/或GPU閒置時間,如圖13中所示。在此範例中,各個VM被提供有15ms的配額。VM1在其配額內的GPU資源利用率為大約20%(即,有顯著的閒置週期),且VM2在其配額內的GPU資源利用率為大約80%(有顯著較少的閒置週期)。 In one embodiment, the analytics data includes GPU busy time and/or GPU idle time for each VM, as shown in FIG. In this example, each VM is offered a quota of 15ms. VM1 has a GPU resource utilization of approximately 20% within its quota (ie, there is a significant idle period), and VM2 has a GPU resource utilization of approximately 80% within its quota (with significantly less idle periods).
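
（以下為僅供說明的C++示意碼，非本揭露之一部分。）For illustration only, the following C++ sketch records the per-VM samples described above: the gap between the point at which the previously submitted CMDs complete with nothing left to run and the point at which the next CMD becomes available.

#include <cstdint>
#include <vector>

// Per-VM record of idle-wait samples: the time from the completion of the last
// submitted CMD batch (with no further CMD pending) until a new CMD arrives.
struct VmIdleStats {
    uint64_t idle_start_us = 0;   // stamped when the command queue runs dry
    bool     waiting = false;
    std::vector<uint64_t> samples_us;

    void on_commands_drained(uint64_t now_us) {  // GPU went idle for this VM
        idle_start_us = now_us;
        waiting = true;
    }
    void on_new_command(uint64_t now_us) {       // CPU produced the next CMD
        if (waiting) {
            samples_us.push_back(now_us - idle_start_us);
            waiting = false;
        }
    }
};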

CPU排程器和GPU排程器可能是不可知的。VM一 次可具有四種排程可能性:1)僅由CPU排程器排程(scheduled-in),2)僅由GPU排程器排程,3)僅由CPU和GPU排程器二者排程,4)非由GPU或CPU排程器排程。 CPU schedulers and GPU schedulers may be unknown. VM one There are four scheduling possibilities: 1) Scheduled-in only by CPU scheduler, 2) Scheduled by GPU scheduler only, 3) Only by CPU and GPU scheduler Cheng, 4) is not scheduled by the GPU or CPU scheduler.

在一個實施例中,PDSAS邏輯1470接著執行所確定的利用型樣的分析以獲得可能等待時間(即,每VM發生GPU閒置)的分佈曲線。一個實施例應用函數P(t),其中t是等待持續時間,且P(t)是機率(例如,0.01%)。圖15中示出P(t)的此種機率分佈曲線之一個範例,其強調平均時間(Taverage)和T80%(即,意謂著下一CMD將在指定時間週期內到達的機率為80%)之曲線上的點。此範例衍生自3Dmark工作負載。 In one embodiment, PDSAS logic 1470 then performs the analysis of the determined utilization patterns to obtain a profile of possible latency (ie, GPU idle per VM). One embodiment applies a function P(t), where t is the wait duration and P(t) is the probability (eg, 0.01%). An example of such a probability distribution curve for P(t) is shown in Figure 15, which emphasizes the mean time (T average ) and T 80% (i.e., the probability that the next CMD will arrive within a specified time period). 80%) The point on the curve. This example is derived from a 3Dmark workload.

一旦機率曲線被確定,PDSAS邏輯1470可以計算並預測在一定機率下的等待成本(“W-Cost”)。在一個實施例中,PDSAS邏輯1470使用平均等待時間Taverage簡單地執行其評估:W-Cost=Taverage,其中Taverage被計算為: Once the probability curve is determined, the PDSAS logic 1470 can calculate and predict the waiting cost ("W-Cost") at a certain probability. In one embodiment, PDSAS logic 1470 simply performs its evaluation using the average latency T average : W-Cost = T average , where T average is calculated as:

在一個實施例中，因為資料是以諸如10微秒(0.01ms)的小時間間隔被取樣，上述公式成為如下的離散函數： In one embodiment, since the data is sampled in small time intervals such as 10 microseconds (0.01 ms), the above formula becomes the following discrete function:

N = Taverage / 0.01

當然,亦可將其他的值用於W-Cost。例如,在一實施例中,W-Cost被設定為下一命令(T0)的最大可能等待持續時間,如圖16所指示。在一個實施例中,W-Cost是在 運行時間期間依據指定的策略動態調整的變數(下面將討論其之一些範例)。 Of course, other values can also be used for W-Cost. For example, in an embodiment, W-Cost is set to the maximum possible wait duration of the next command (T0), as indicated in FIG. In one embodiment, W-Cost is in Variables that are dynamically adjusted during runtime based on the specified policy (some examples of which are discussed below).
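
（以下為僅供說明的C++示意碼，非本揭露之一部分；以樣本平均值與最大值作為估計式是本示意碼的假設，並非本說明書的確切公式。）For illustration only, the following C++ sketch estimates Taverage (and hence W-Cost under the W-Cost = Taverage policy) from the collected samples, quantized to the 0.01 ms granularity mentioned above, and also computes the maximum observed wait as an approximation of T0; using the empirical mean and maximum as estimators is an assumption of this sketch.

#include <cstdint>
#include <vector>

// Estimate Taverage from the collected idle-wait samples (in microseconds),
// quantizing each sample to the 0.01 ms bins mentioned in the text.
double estimate_taverage_ms(const std::vector<uint64_t>& samples_us) {
    if (samples_us.empty()) return 0.0;
    const double bin_ms = 0.01;
    double sum_ms = 0.0;
    for (uint64_t s : samples_us) {
        double bins = static_cast<double>(s) / 1000.0 / bin_ms;  // us -> bins
        sum_ms += static_cast<uint64_t>(bins + 0.5) * bin_ms;    // round to a bin
    }
    return sum_ms / samples_us.size();
}

// Alternative policy mentioned in the text: W-Cost as the maximum possible
// wait duration (an approximation of T0 in Figure 16).
double estimate_t0_ms(const std::vector<uint64_t>& samples_us) {
    uint64_t max_us = 0;
    for (uint64_t s : samples_us) if (s > max_us) max_us = s;
    return max_us / 1000.0;
}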

上述的取樣技術可持續例如,諸如10分鐘的一定時間。在一個實施例中,可同等地使用取樣週期中的型樣及/或可使用不同的權重來區分最新的資料和舊的資料(例如,有更多的權重被施加至更多的當前資料)。例如,最新的1分鐘的取樣資料可被認為具有最佳的試探法(heuristics),並且,因此,可有80%的權重,而剩餘9分鐘的資料可有20%的權重。在此實現中,最終P(t)=Pa(t) * 0.8+Pb(t) * 0.2,其中P(t)是可能等待時間的分佈曲線。當然,本發明的基本原理不限於判定權重及/或在其內施加權重的時間週期的任何特定方式。 The sampling technique described above can last, for example, for a certain period of time of 10 minutes. In one embodiment, the patterns in the sampling period may be used equally and/or different weights may be used to distinguish between the most recent data and the old data (eg, more weight is applied to more current data) . For example, the latest 1-minute sampling data can be considered to have the best heuristics, and, therefore, can have an 80% weight, while the remaining 9 minutes of data can have a 20% weight. In this implementation, the final P(t) = Pa(t) * 0.8 + Pb(t) * 0.2, where P(t) is the distribution curve of possible latency. Of course, the underlying principles of the invention are not limited to any particular manner of determining the weight and/or the time period during which the weight is applied.
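
（以下為僅供說明的C++示意碼，非本揭露之一部分。）For illustration only, the following C++ sketch combines the distribution of the most recent sampling window with that of the older window using the 80%/20% weighting from the example, P(t) = Pa(t) * 0.8 + Pb(t) * 0.2, assuming both inputs are histograms over the same time bins.

#include <cstddef>
#include <vector>

// Weighted combination of the recent-window distribution Pa and the
// older-window distribution Pb, per the 0.8 / 0.2 example above.
std::vector<double> combine_distributions(const std::vector<double>& pa,
                                          const std::vector<double>& pb,
                                          double w_recent = 0.8) {
    std::size_t n = pa.size() < pb.size() ? pa.size() : pb.size();
    std::vector<double> p(n);
    for (std::size_t i = 0; i < n; ++i)
        p[i] = w_recent * pa[i] + (1.0 - w_recent) * pb[i];
    return p;
}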

PDSAS邏輯1470的一個實施例藉由預測讓出vGPU實例的成本和增益來增強GPU排程器1412。例如,PDSAS邏輯1470可執行上述評估,然後比較兩種方法的增益和損耗。在一個實施例中,若等待的成本大於或等於切換的成本,則PDSAS邏輯1470讓出vGPU。即: One embodiment of PDSAS Logic 1470 enhances GPU scheduler 1412 by predicting the cost and gain of vGPU instances. For example, PDSAS Logic 1470 can perform the above evaluation and then compare the gain and loss of the two methods. In one embodiment, the PDSAS logic 1470 yields the vGPU if the cost of waiting is greater than or equal to the cost of the handover. which is:

若W-Cost>=SW-Cost(切換成本)+Thres0,則讓出vGPU。 If W-Cost>=SW-Cost (switching cost) + Thres0, the vGPU is given out.

如上所述，W-Cost是若GPU尚未被讓出時，GPU閒置等待的時間。SW-Cost是切換成本，且Thres0是讓出GPU所需的最小增益限制。如上所述，在一實施例中，W-Cost是等待vGPU獲得可運行的新的CMD的平均成本，SW-Cost是vGPU上下文切換（例如，0.3ms）的成本，且Thres0是可被設置以使策略保持彈性或保守的閾值。 As described above, W-Cost is the time the GPU would sit idle waiting if the GPU is not yielded. SW-Cost is the switching cost, and Thres0 is the minimum gain required to justify yielding the GPU. As described above, in one embodiment, W-Cost is the average cost of waiting for the vGPU to obtain a new runnable CMD, SW-Cost is the cost of a vGPU context switch (e.g., 0.3 ms), and Thres0 is a threshold that can be set to keep the policy flexible or conservative.
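
（以下為僅供說明的C++示意碼，非本揭露之一部分；預設值僅為內文引用的範例值。）For illustration only, the following C++ sketch expresses the yield rule above as a function; the default arguments are merely the example values quoted in the text.

// Decision rule from the text: yield the GPU to the next vGPU when the
// predicted cost of waiting is at least the switching cost plus Thres0.
bool should_yield(double w_cost_ms,
                  double sw_cost_ms = 0.3,   // example vGPU context switch cost
                  double thres0_ms  = 0.0) { // policy-dependent margin
    return w_cost_ms >= sw_cost_ms + thres0_ms;
}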

在一個實施例中,利用實現上述方案的PDSAS邏輯1470,提高了GPU利用率,且因此提高了總系統GPU吞吐量。然而,在一些情況中,可能在VM之間引進不平衡的GPU資源分配。即,特定VM可被分配到比甚至在長時間觀點中其應被分配到的資源更少的資源。在一個實施例中,藉由說明針對各個VM的各個GPU的使用,以及擴展排程器以提高VM的優先權來解決此問題,該VM將接著具有更多機會成為將被排程進來的下一個VM,及/或若其被排程進來且具有要執行的CMD則將被給定較長的時間配額。亦可使用不同的排程器策略,諸如基於優先權的排程器、或循環排程器或其之任意組合。可使用上述方法來使具有較低資源分配的VM具有較高機率在下一個週期被排程進來。 In one embodiment, utilizing PDSAS logic 1470 that implements the above scheme, GPU utilization is increased, and thus overall system GPU throughput is increased. However, in some cases, unbalanced GPU resource allocation may be introduced between VMs. That is, a specific VM can be allocated to resources that are less than resources that should be allocated even in a long-term perspective. In one embodiment, by addressing the use of individual GPUs for individual VMs and extending the scheduler to increase the priority of the VM, the VM will then have more opportunities to be scheduled to come in. A VM, and/or if it is scheduled to come in and has a CMD to execute, will be given a longer time quota. Different scheduler policies can also be used, such as priority based schedulers, or loop schedulers, or any combination thereof. The above method can be used to make a VM with a lower resource allocation have a higher probability of being scheduled in the next cycle.
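
（以下為僅供說明的C++示意碼，非本揭露之一部分；公平份額的計算方式與配額數值為假設。）For illustration only, the following C++ sketch shows one hypothetical way to rebalance as described above: a VM whose accumulated GPU busy time falls below its fair share is given higher priority and a longer quantum; the fair-share computation and the quantum values are assumptions of this sketch.

#include <cstdint>
#include <vector>

// Hypothetical per-VM accounting used to rebalance GPU time.
struct VmAccount {
    uint64_t busy_us = 0;        // GPU time actually consumed
    int      priority = 0;       // higher means scheduled sooner
    uint64_t quantum_us = 15000; // baseline quantum (example value)
};

void rebalance(std::vector<VmAccount>& vms) {
    if (vms.empty()) return;
    uint64_t total = 0;
    for (const VmAccount& v : vms) total += v.busy_us;
    const uint64_t fair_share = total / vms.size();
    for (VmAccount& v : vms) {
        if (v.busy_us < fair_share) {
            v.priority += 1;       // more likely to be the next VM scheduled in
            v.quantum_us = 20000;  // assumed longer quantum when it has CMDs to run
        } else {
            v.priority = 0;
            v.quantum_us = 15000;  // back to the baseline quantum
        }
    }
}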

圖17A-B中示出依據本發明之實施例的一方法。該方法被細分為由vCPU執行所實施的那些操作(圖17A),以及由GPU排程器所實施的那些操作(圖17B)。在1701,VM的CPU開始GPU工作負載的執行,並且,在1702,GPU工作負載產生一命令序列,其被放置在命令緩衝器1705中。若在取樣週期中(在1703判斷),則更新P(t)曲線並在1704結束取樣。 A method in accordance with an embodiment of the present invention is illustrated in Figures 17A-B. The method is subdivided into those operations performed by the vCPU (Fig. 17A), as well as those implemented by the GPU scheduler (Fig. 17B). At 1701, the CPU of the VM begins execution of the GPU workload, and, at 1702, the GPU workload generates a sequence of commands that are placed in the command buffer 1705. If during the sampling period (determined at 1703), the P(t) curve is updated and sampling begins at 1704.

轉向圖17B,在1706,GPU排程器選擇用於執行的下一vGPU(例如,針對諸如15ms的給定的配額)。若在配額內(在1707判斷),則在1708,vGPU當前正執行工作負 載。當在1709完成vGPU執行,在1710做出關於新命令是否可從命令緩衝器獲得的判定。若是的話,則處理返回到1707。若否的話,則在1711開始取樣。若在其他的VM中有未決的工作負載(在1712判斷),則在1713施用PDSAS邏輯。若否,則處理返回1707用於當前的vGPU實例。在1714做出關於是否讓出另一vGPU實例(例如,使用本文所述之PDSAS技術)的判定。若是的話,則處理返回至1706。若否的話,則處理返回至1707用於當前的vGPU實例。 Turning to Figure 17B, at 1706, the GPU scheduler selects the next vGPU for execution (e.g., for a given quota such as 15ms). If it is within the quota (determined at 1707), then at 1708, the vGPU is currently performing work negative. Loaded. When the vGPU execution is completed at 1709, a determination is made at 1710 as to whether the new command is available from the command buffer. If so, the process returns to 1707. If not, start sampling at 1711. If there are pending workloads in other VMs (as judged at 1712), the PDSAS logic is applied at 1713. If not, the process returns 1707 for the current vGPU instance. A determination is made at 1714 as to whether to give up another vGPU instance (eg, using the PDSAS techniques described herein). If so, the process returns to 1706. If not, processing returns to 1707 for the current vGPU instance.
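
（以下為僅供說明的C++示意碼，非本揭露之一部分；介面為假設的空殼。）For illustration only, the following C++ sketch mirrors the scheduler-side flow of Figure 17B with hypothetical stub interfaces: run commands within the quantum, start sampling when the command buffer runs dry, and apply the PDSAS yield decision only when other VMs have pending work.

// Hypothetical interfaces mirroring Figure 17B; the bodies are stubs so the
// sketch stays self-contained (a real scheduler would block or poll here).
struct VGpu {
    bool quantum_expired() const { return true; }       // 1707 (stub)
    bool has_new_command() const { return false; }      // 1710 (stub)
    void run_next_command() {}                          // 1708/1709 (stub)
    void start_idle_sampling() {}                       // 1711 (stub)
    double predicted_w_cost_ms() const { return 0.0; }  // from the per-VM statistics (stub)
};
static bool other_vms_have_pending_work() { return false; }  // 1712 (stub)
static bool pdsas_should_yield(double) { return false; }     // W-Cost >= SW-Cost + Thres0 (stub)
static void context_switch_to_next_vgpu() {}                 // 1706 (stub)

void schedule_quantum(VGpu& current) {
    while (!current.quantum_expired()) {                  // stay within the quantum (1707)
        if (current.has_new_command()) {                  // 1710
            current.run_next_command();                   // 1708/1709
            continue;
        }
        current.start_idle_sampling();                    // 1711
        if (!other_vms_have_pending_work())               // 1712: nothing else to run,
            continue;                                     // keep the GPU and keep waiting
        if (pdsas_should_yield(current.predicted_w_cost_ms())) {  // 1713/1714
            context_switch_to_next_vgpu();                // yield, back to 1706
            return;
        }
    }
    context_switch_to_next_vgpu();                        // quantum consumed
}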

Thus, some embodiments of the invention described herein define policies for computing W-Cost, which is compared against SW-Cost and Thres0. On some architectures, SW-Cost has been measured at roughly 0.1-0.3 ms, sampled at runtime. There are also several ways to choose Thres0. In one embodiment, Thres0 = 0. In another embodiment, Thres0 is a fixed value, such as 0.2 ms as a barrier for conservative scheduling (e.g., for an RT VM/vGPU), or -0.1 ms as an aggressive factor to encourage switching. In yet another embodiment, the value is determined from the average GPU execution time of the next vGPU (i.e., just as idle time can be determined per VM, busy time can also be determined per VM). Here, the next VM means the next guest the scheduler will schedule in. For example, for a next vGPU running a 3D workload with an average GPU busy time of 0.5 ms per command buffer, Thres0 may be set to 0.5 ms. The current VM would then switch to the next VM at a total switch cost of SW-Cost + 0.5 ms = 0.8 ms, meaning it must wait 0.8 ms on average until GPU ownership is switched back again (i.e., in a two-VM implementation). It should be noted, however, that the underlying principles of the invention are not limited to any particular way of computing Thres0.
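
The yield test and the Thres0 choices listed above can be expressed compactly as shown below. This is an illustrative sketch, not the patented implementation: the function names and policy labels are invented for the example, and the numbers simply mirror the 0.3 ms and 0.5 ms figures in the text.

```python
def choose_thres0(policy, next_vgpu_avg_busy_ms=None):
    if policy == "neutral":
        return 0.0
    if policy == "conservative":       # e.g., protect an RT VM/vGPU
        return 0.2
    if policy == "aggressive":         # encourage switching
        return -0.1
    if policy == "next_vgpu":          # derive from the next vGPU's busy pattern
        return next_vgpu_avg_busy_ms
    raise ValueError(policy)

def should_yield(w_cost_ms, sw_cost_ms=0.3, thres0_ms=0.0):
    """Yield when the predicted idle wait exceeds the switch cost plus the threshold."""
    return w_cost_ms > sw_cost_ms + thres0_ms

# Example from the text: SW-Cost = 0.3 ms, next vGPU averages 0.5 ms per CMD buffer,
# so yielding only pays off once the predicted wait exceeds 0.8 ms.
print(should_yield(w_cost_ms=1.0, thres0_ms=choose_thres0("next_vgpu", 0.5)))  # True
```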

Three exemplary policies will now be described: (1) W-Cost = Taverage, (2) W-Cost = T80%, and (3) W-Cost = (1-D) * Taverage (where D is a balance coefficient described below).

1. W-Cost = Taverage

In this embodiment, the cost of not yielding the GPU and leaving it idle is the Taverage waiting time. If the switching cost SW-Cost = 0.3 ms and Thres0 is the next vGPU's average workload execution time of 0.5 ms, the total cost of yielding is 0.8 ms. Under this policy, W-Cost remains relatively consistent as long as the vGPU keeps running the same workload, since the workload may have a fixed pattern. Thus, if Taverage > 0.8 ms, the current vGPU may yield the GPU each time a CMD completes.
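
A minimal sketch of policy 1 follows, assuming the idle gaps sampled at 1703/1711 are available as a plain list of millisecond values; the helper name and the sample numbers are hypothetical.

```python
def t_average(idle_gap_samples_ms):
    """Mean observed idle gap between completing a CMD buffer and the next CMD arriving."""
    return sum(idle_gap_samples_ms) / len(idle_gap_samples_ms) if idle_gap_samples_ms else 0.0

samples = [0.6, 1.1, 0.9, 0.7]      # hypothetical per-CMD idle gaps (ms)
w_cost = t_average(samples)         # 0.825 ms
print(w_cost > 0.3 + 0.5)           # True: yield, as in the 0.8 ms example above
```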

As shown in the flowcharts of Figures 17A-B, the advantage of this policy is that it maximizes the use of GPU cycles, but it may affect vGPU1 (the one that yields). In other embodiments, the vGPU1 workload is high priority and should not be affected even if the GPU is yielded to vGPU2, since the physical GPU is scheduled back to vGPU1 sooner or later (e.g., before the average wait time expires). This is very important for high-priority or real-time workloads (e.g., live streaming, video conferencing, etc.). The drawback is that the policy either keeps the GPU with one VM or chooses to yield it on every cycle; a VM with small Taverage and small Thres0 values (indicating a busy workload) will therefore tend to consume most of the GPU resources, and the scheduler's help is needed to balance GPU resources among the VMs.

2. W-Cost = T80%

As noted above, T80% denotes the duration within which the next command will arrive with 80% probability; that is, T80% = min{ t : P(t) >= 0.8 }, the smallest time at which the cumulative arrival probability P(t) reaches 80%. Although a value of 80% is used in this exemplary embodiment, various other percentages may be used while still complying with the underlying principles of the invention (e.g., 90%, 95%, 99%, Tmax, etc.).

In one embodiment, this implementation has the same effect as the W-Cost = Taverage policy described above, the main difference being that W-Cost can take a larger value when T80% > Taverage. Figure 18 illustrates an exemplary embodiment for W-Cost = T80%, using the same VM1 and VM2 workloads as above. As shown in Figure 18, the larger W-Cost value makes it easier for VM1 to be scheduled out in favor of VM2. The switching at 1801 is similar to the switching described above, but the switching at 1802 is quite different: given the larger W-Cost value, the GPU alternately processes CMDs from VM1 and VM2. In one embodiment the idle time in each CMD buffer may not be large enough, so the workload itself may be slightly affected/delayed; this implementation is therefore better suited to non-real-time workloads. However, as shown at 1802, it can lead to improved overall GPU utilization.
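
A minimal sketch of policy 2 is shown below, assuming P(t) is approximated by the empirical distribution of the sampled idle gaps; the percentile helper and the sample values are illustrative assumptions, not part of the disclosure.

```python
import math

def t_percentile(idle_gap_samples_ms, fraction=0.8):
    """Smallest sampled time t such that at least `fraction` of observed next-CMD arrivals fall within t."""
    if not idle_gap_samples_ms:
        return 0.0
    ordered = sorted(idle_gap_samples_ms)
    idx = min(len(ordered) - 1, max(0, math.ceil(fraction * len(ordered)) - 1))
    return ordered[idx]

samples = [0.6, 1.1, 0.9, 0.7, 2.0]     # hypothetical idle gaps (ms)
print(t_percentile(samples, 0.8))       # 1.1 ms: 80% of next CMDs arrived within this time
```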

Two further examples of W-Cost are W-Cost = Max and W-Cost = 0. In the first case, W-Cost = Max, the GPU is yielded immediately. In the second case, W-Cost = 0, the GPU is never yielded during the VM's quota.

Figure 19 depicts an exemplary probability curve indicating the probability at different time values, together with the balance coefficient.

3. W-Cost = (1-D) * Taverage

In this embodiment, D is a balance coefficient in the range (-1.0 to 1.0). A negative D value means the PDSAS logic 1470 leans toward yielding GPU execution time, while a positive D value means it leans toward staying within its GPU quota (e.g., 15 ms). The D factor is introduced to influence the balance of GPU resource allocation among multiple VMs, and is adjusted at runtime according to each VM's GPU utilization. As already noted, without the scheduler's help, policy #1 or policy #2 may let some VMs take up most of the GPU time.

The following is an example with 2 VMs. The coefficient D is calculated as D = 1 - VM% * TotalVM, where VM% is the current VM's allocated percentage of GPU time. For the case of 2 guest VMs, TotalVM = 2. If VM% is 30% (where 50% would be the fair share for 2 VMs), then D = 0.4 and W-Cost = (1-D) * Taverage = 0.6 * Taverage. With these techniques, the smaller W-Cost makes the VM that previously received only 30% of the GPU time more likely to be allocated additional GPU resources.
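
The same two-VM example can be reproduced as a short calculation, shown below; the function name and the Taverage value of 1.0 ms are assumptions made for illustration.

```python
def balanced_w_cost(vm_share, total_vms, t_average_ms):
    d = 1.0 - vm_share * total_vms            # balance coefficient in (-1.0, 1.0)
    return (1.0 - d) * t_average_ms, d

w_cost, d = balanced_w_cost(vm_share=0.30, total_vms=2, t_average_ms=1.0)
print(d)        # 0.4: this VM is under-served, so it leans toward keeping the GPU
print(w_cost)   # 0.6: i.e., 0.6 * Taverage, a smaller W-Cost than policy 1 would give
```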

It should be noted that the technique described above for predicting when the next CMD will arrive can also be used to predict when an interrupt will arrive. In such an implementation, the IRQ handler can wait for that predicted time to combine incoming interrupts and process them together, which essentially reduces the total number of traps injected into the guest VM.
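
Below is a hedged sketch of that interrupt-coalescing idea. The callbacks poll_pending and inject are hypothetical stand-ins for the IRQ handling path, and the 1 ms cap and 0.1 ms polling granularity are arbitrary values chosen for the example.

```python
import time

def coalesce_interrupts(poll_pending, inject, predicted_gap_ms, max_wait_ms=1.0):
    """Hold interrupt delivery for roughly the predicted arrival gap, then inject one batched trap."""
    deadline = time.monotonic() + min(predicted_gap_ms, max_wait_ms) / 1000.0
    batch = list(poll_pending())
    while time.monotonic() < deadline:
        time.sleep(0.0001)                 # 0.1 ms polling granularity
        batch.extend(poll_pending())
    if batch:
        inject(batch)                      # a single trap covers the whole batch
```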

In this detailed description, reference is made to the accompanying drawings, which form a part hereof, wherein like numerals designate like parts, and in which embodiments that may be practiced are shown by way of illustration. It is to be understood that other embodiments may be utilized and structural or logical changes may be made without departing from the scope of the present disclosure. Therefore, the following detailed description is not to be taken in a limiting sense, and the scope of the embodiments is defined by the appended claims and their equivalents.

Various operations may be described as multiple discrete actions or operations in turn, in a manner that is most helpful in understanding the claimed subject matter. However, the order of description should not be construed as implying that these operations are necessarily order dependent. In particular, these operations may not be performed in the order of presentation; described operations may be performed in a different order than the described embodiment. Various additional operations may be performed and/or described operations may be omitted in additional embodiments.

For the purposes of the present disclosure, the phrase "A and/or B" means (A), (B), or (A and B). For the purposes of the present disclosure, the phrase "A, B, and/or C" means (A), (B), (C), (A and B), (A and C), (B and C), or (A, B, and C). Where the disclosure recites "a" or "a first" element or the equivalent thereof, such disclosure includes one or more such elements, neither requiring nor excluding two or more such elements. Further, ordinal indicators (e.g., first, second, or third) for identified elements are used to distinguish between the elements, and do not indicate or imply a required or limited number of such elements, nor do they indicate a particular position or order of such elements unless otherwise specifically stated.

Reference in the description to one embodiment or an embodiment means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the invention. The description may use the phrases "in one embodiment," "in another embodiment," "in some embodiments," "in an embodiment," "in various embodiments," or the like, which may each refer to one or more of the same or different embodiments. Furthermore, the terms "comprising," "including," "having," and the like, as used with respect to embodiments of the present disclosure, are synonymous.

In embodiments, the terms "engine," "module," or "logic" may refer to, be part of, or include an application-specific integrated circuit (ASIC), an electronic circuit, a processor (shared, dedicated, or group) and/or memory (shared, dedicated, or group) that executes one or more software or firmware programs, a combinational logic circuit, and/or other suitable components that provide the described functionality. In embodiments, an engine or module may be implemented in firmware, hardware, software, or any combination of firmware, hardware, and software. Embodiments of the invention may include various steps, which have been described above. The steps may be embodied in machine-executable instructions that may be used to cause a general-purpose or special-purpose processor to perform the steps. Alternatively, these steps may be performed by specific hardware components that contain hardwired logic for performing the steps, or by any combination of programmed computer components and custom hardware components.

As described herein, instructions may refer to specific configurations of hardware, such as application-specific integrated circuits (ASICs), configured to perform certain operations or having predetermined functionality, or to software instructions stored in memory embodied in a non-transitory computer-readable medium. Thus, the techniques shown in the figures can be implemented using code and data stored and executed on one or more electronic devices (e.g., end stations, network elements, etc.). Such electronic devices store and communicate (internally and/or with other electronic devices over a network) code and data using computer machine-readable media, such as non-transitory computer machine-readable storage media (e.g., magnetic disks; optical disks; random access memory; read-only memory; flash memory devices; phase-change memory) and transitory computer machine-readable communication media (e.g., electrical, optical, acoustical, or other forms of propagated signals, such as carrier waves, infrared signals, digital signals, etc.).

In addition, such electronic devices typically include a set of one or more processors coupled to one or more other components, such as one or more storage devices (non-transitory machine-readable storage media), user input/output devices (e.g., a keyboard, a touchscreen, and/or a display), and network connections. The coupling of the set of processors to the other components is typically through one or more buses and bridges (also termed bus controllers). The storage device and the signals carrying the network traffic respectively represent one or more machine-readable storage media and machine-readable communication media. Thus, the storage device of a given electronic device typically stores code and/or data for execution on the set of one or more processors of that electronic device. Of course, one or more parts of an embodiment of the invention may be implemented using different combinations of software, firmware, and/or hardware. Throughout this detailed description, for the purposes of explanation, numerous specific details were set forth in order to provide a thorough understanding of the present invention. It will be apparent, however, to one skilled in the art that the invention may be practiced without some of these specific details. In certain instances, well-known structures and functions were not described in elaborate detail to avoid obscuring the subject matter of the present invention. Accordingly, the scope and spirit of the invention should be judged in terms of the claims that follow.

1400‧‧‧Exemplary embodiment
1410‧‧‧Hypervisor
1412‧‧‧GPU scheduler
1418‧‧‧Command parser
1420‧‧‧GPU
1430‧‧‧VM
1440‧‧‧VM
1460A, 1460B‧‧‧Virtual GPU (vGPU)
1470‧‧‧PDSAS logic

Claims (25)

1. An apparatus comprising: a graphics processing unit (GPU) to process graphics commands and responsively render a plurality of image frames; a hypervisor to virtualize the GPU so as to share the GPU among a plurality of virtual machines (VMs), the hypervisor subdividing the GPU processing for each VM into a plurality of quotas (quanta); and scheduling logic to monitor GPU utilization, including GPU busy time and/or GPU idle time of each VM during its allocated quota, and to store utilization data reflecting the GPU utilization during each quota; the scheduling logic to predict, based on the utilization data, a cost of waiting within a given quota of a first VM and to further predict a cost of yielding to a second VM, the scheduling logic to yield the GPU to the second VM if the cost of waiting is greater than the cost of yielding to the second VM.

2. The apparatus of claim 1, wherein the scheduling logic is to yield the GPU to the second VM if the cost of waiting is greater than the sum of the cost of yielding to the second VM and a specified threshold.

3. The apparatus of claim 2, wherein the threshold comprises a value of 0 or a value greater than 0.

4. The apparatus of claim 1, wherein, to monitor the GPU busy time and/or GPU idle time, the scheduling logic stamps commands within a set of GPU commands to be executed per VM within each quota.

5. The apparatus of claim 1, wherein the scheduling logic is configured to store the utilization data reflecting GPU utilization as a probability curve, the probability curve reflecting the probability that a new command will be received within a given quota at different time periods.

6. The apparatus of claim 5, wherein at least one time period comprises an average amount of time within which a new command is expected based on the probability curve.

7. The apparatus of claim 5, wherein at least one time period is associated with a specified percentage, the specified percentage comprising a likelihood that a command will be received within the specified time period.

8. The apparatus of claim 1, wherein the scheduling logic is configured to determine whether commands are pending for the second VM and to yield the GPU to the second VM only if commands are pending for the second VM.

9. The apparatus of claim 2, wherein the scheduling logic dynamically adjusts the specified threshold in response to workload conditions on each of the VMs.
10. A method comprising: selecting a first virtual machine (VM) to execute commands on a graphics processor unit (GPU); monitoring GPU utilization data, including GPU busy time and/or GPU idle time of the first VM during one or more allocated quotas; storing utilization data reflecting the GPU utilization; predicting, based on the utilization data, a cost of waiting within a first quota of the first VM and a cost of yielding to a second VM; and yielding the GPU to the second VM if the cost of waiting is greater than the cost of yielding to the second VM.

11. The method of claim 10, wherein the GPU is yielded to the second VM if the cost of waiting is greater than the sum of the cost of yielding to the second VM and a specified threshold.

12. The method of claim 11, wherein the threshold comprises a value of 0 or a value greater than 0.

13. The method of claim 10, wherein monitoring the GPU busy time and/or GPU idle time comprises stamping a set of GPU commands to be executed per VM within each quota.

14. The method of claim 10, further comprising storing the utilization data reflecting GPU utilization as a probability curve, the probability curve reflecting the probability that a new command will be received within a given quota at different time periods.

15. The method of claim 14, wherein at least one time period comprises an average amount of time within which a new command is expected based on the probability curve.

16. The method of claim 14, wherein at least one time period is associated with a specified percentage, the specified percentage comprising a likelihood that a command will be received within the specified time period.

17. The method of claim 10, further comprising: determining whether commands are pending for the second VM; and yielding the GPU to the second VM only if commands are pending for the second VM.

18. The method of claim 11, further comprising: dynamically adjusting the specified threshold in response to workload conditions on each of the VMs.
19. A system comprising: a memory to store data and program code; a central processing unit (CPU) comprising an instruction cache to cache a portion of the program code and a data cache to cache a portion of the data, the CPU further comprising execution logic to execute at least some of the program code and responsively process at least some of the data, at least a portion of the program code comprising graphics commands; a graphics processing unit (GPU) to process the graphics commands and responsively render a plurality of image frames; a hypervisor to virtualize the GPU so as to share the GPU among a plurality of virtual machines (VMs), the hypervisor subdividing the GPU processing for each VM into a plurality of quotas; and scheduling logic to monitor GPU utilization, including GPU busy time and/or GPU idle time of each VM during its allocated quota, and to store utilization data reflecting the GPU utilization during each quota; the scheduling logic to predict, based on the utilization data, a cost of waiting within a given quota of a first VM and to further predict a cost of yielding to a second VM, the scheduling logic to yield the GPU to the second VM if the cost of waiting is greater than the cost of yielding to the second VM.

20. The system of claim 19, wherein the scheduling logic is to yield the GPU to the second VM if the cost of waiting is greater than the sum of the cost of yielding to the second VM and a specified threshold.

21. The system of claim 20, wherein the threshold comprises a value of 0 or a value greater than 0.

22. The system of claim 19, wherein, to monitor the GPU busy time and/or GPU idle time, the scheduling logic stamps commands within a set of GPU commands to be executed per VM within each quota.

23. The system of claim 19, wherein the scheduling logic is configured to store the utilization data reflecting GPU utilization as a probability curve, the probability curve reflecting the probability that a new command will be received within a given quota at different time periods.

24. The system of claim 23, wherein at least one time period comprises an average amount of time within which a new command is expected based on the probability curve.

25. The system of claim 23, wherein at least one time period is associated with a specified percentage, the specified percentage comprising a likelihood that a command will be received within the specified time period.
TW105125322A 2015-09-24 2016-08-09 Apparatus, method and system for pattern driven self-adaptive virtual graphics processor units TWI706373B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
PCT/CN2015/090571 WO2017049538A1 (en) 2015-09-24 2015-09-24 Apparatus and method for pattern driven self-adaptive virtual graphics processor units
WOPCT/CN2015/090571 2015-09-24

Publications (2)

Publication Number Publication Date
TW201719570A (en) 2017-06-01
TWI706373B TWI706373B (en) 2020-10-01

Family

ID=58385714

Family Applications (1)

Application Number Title Priority Date Filing Date
TW105125322A TWI706373B (en) 2015-09-24 2016-08-09 Apparatus, method and system for pattern driven self-adaptive virtual graphics processor units

Country Status (2)

Country Link
TW (1) TWI706373B (en)
WO (1) WO2017049538A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI641950B (en) * 2017-08-07 2018-11-21 上海兆芯集成電路有限公司 Balancing devices and methods thereof

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10521271B2 (en) * 2017-04-01 2019-12-31 Intel Corporation Hybrid low power homogenous grapics processing units
US10649956B2 (en) 2017-04-01 2020-05-12 Intel Corporation Engine to enable high speed context switching via on-die storage
US10474490B2 (en) * 2017-06-29 2019-11-12 Advanced Micro Devices, Inc. Early virtualization context switch for virtualized accelerated processing device
US10796472B2 (en) * 2018-06-30 2020-10-06 Intel Corporation Method and apparatus for simultaneously executing multiple contexts on a graphics engine

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9069622B2 (en) * 2010-09-30 2015-06-30 Microsoft Technology Licensing, Llc Techniques for load balancing GPU enabled virtual machines
CN104216783B (en) * 2014-08-20 2017-07-11 上海交通大学 Virtual GPU resource autonomous management and control method in cloud game
CN104660711A (en) * 2015-03-13 2015-05-27 华存数据信息技术有限公司 Remote visualized application method based on virtualization of graphic processor

Also Published As

Publication number Publication date
WO2017049538A1 (en) 2017-03-30
TWI706373B (en) 2020-10-01


Legal Events

Date Code Title Description
MM4A Annulment or lapse of patent due to non-payment of fees