TWI313563B - A display mixer and the method thereof - Google Patents

A display mixer and the method thereof

Info

Publication number
TWI313563B
Authority
TW
Taiwan
Prior art keywords
layer
window
data
graphic
video
Prior art date
Application number
TW95117538A
Other languages
Chinese (zh)
Other versions
TW200744373A (en)
Inventor
Jenya Chou
Yan Yuan
Original Assignee
Magima Digital Information Co Ltd
Priority date
Filing date
Publication date
Application filed by Magima Digital Information Co Ltd filed Critical Magima Digital Information Co Ltd
Priority to TW95117538A priority Critical patent/TWI313563B/en
Publication of TW200744373A publication Critical patent/TW200744373A/en
Application granted granted Critical
Publication of TWI313563B publication Critical patent/TWI313563B/en

Landscapes

  • Digital Computer Display Output (AREA)
  • Controls And Circuits For Display Device (AREA)

Description

BACKGROUND OF THE INVENTION

Field of the Invention

The present invention relates to an on-screen display mixing system and method, and in particular to an on-screen display mixing system and method for blending multiple layers of data such as a video layer and a graphics layer.

Prior Art

Digital Video Broadcasting (DVB) is a digital service architecture for the market. The DVB standard adopts the Moving Picture Experts Group-2 (MPEG-2) standard as the coding and compression scheme for audio and video. Compressed MPEG-2 bit-stream packets form a transport stream (TS); after several transport streams are multiplexed, they can be delivered over different media such as satellite, cable television and terrestrial television.

Before the MPEG-2 bit stream is decoded and sent to the TV encoder for display, it usually has to be pre-mixed with the on-screen display (OSD), cursor, background, graphical user interface (GUI), subtitles and so on, so that together they form one picture, which is then sent to the TV encoder and displayed at the field scan rate. The MPEG standard defines a hierarchical mode in which different data layers are represented by display levels. Fig. 1 is a schematic diagram of the hierarchical mode of the display planes in the prior art. Referring to Fig. 1, the display levels usually include a background layer 11, a video layer 12, a GUI layer 13, an OSD layer 14, a subtitle layer 15 and a cursor layer 16. The background layer 11 is often a monochrome background. The video layer 12 is usually the video stream generated by the video decoder and can be coupled into the mixing system through hardware. The GUI layer 13 often contains windows, menus and the like. The subtitle layer 15 refers to information decoded from the packetized elementary stream (PES) of the transport stream, such as subtitles and palette contents. The OSD (On Screen Display) layer 14 is usually used for hardware settings, adjustment information display and so on. The cursor layer 16 is a hardware cursor and controls the cursor position.

In general, the background layer, GUI layer, OSD layer, subtitle layer and cursor layer use pixel formats in the RGB domain, whereas the video layer uses a pixel format in the YUV domain, such as YUV 4:2:2. During mixing, the background, GUI, OSD, subtitle and cursor layers therefore all have to be converted from RGB to YUV first. The main function of the mixing system is to perform the RGB-to-YUV conversion of the hierarchical mode and blend the layers, finally outputting one mixed picture to the TV encoder. The prior art generally converts the GUI layer, OSD layer, subtitle layer and so on from RGB to YUV separately and mixes each of them with the video layer in turn. For example, the GUI layer is first converted from RGB to YUV and mixed with the video layer to form a second layer; the OSD layer is then converted from RGB to YUV and mixed with the previously obtained second layer to form a third layer; the background layer is converted from RGB to YUV and mixed with the third layer to form a fourth layer; and so on, until finally the cursor layer is converted from RGB to YUV and mixed with the previously obtained layer to form the final layer. Only after many RGB-to-YUV conversions and many mixing passes is the final mixed picture obtained and output to the TV encoder. The prior art provides one such conversion module for each layer's RGB-to-YUV conversion, but an RGB-to-YUV conversion module consumes a relatively large hardware area, so the hardware area of the whole system cannot be reduced and, correspondingly, neither can its cost.

SUMMARY OF THE INVENTION

The object of the present invention is to overcome the deficiencies of the prior art and to provide an improved on-screen display mixing system that reduces the hardware area of the system and lowers its overall cost.

In one aspect, the present invention provides an on-screen display mixing system, comprising:

a screen coordinate scanning device, which scans the screen and generates the current scan pixel in the form of a coordinate scan stream;

a first rectangular window device, which determines from the coordinate scan stream that the current scan pixel lies in one of a plurality of windows including a graphics window and a cursor window, and performs a one-of-many selection among data including the graphics layer and the cursor layer to form a temporary layer in the RGB domain;

a second rectangular window device, which determines from the coordinate scan stream that the current scan pixel lies in a video window;

a read buffer device, which, according to the window in which the current scan pixel lies, receives the graphics layer data transferred from the first rectangular window device and the video layer data transferred from the second rectangular window device;

a conversion device, which converts the temporary layer data from the RGB domain to the YUV domain; and

an alpha mixing device, which alpha-blends the temporary layer data converted into the YUV domain with the video layer data.

In another aspect, the present invention provides an on-screen display mixing method, comprising the following steps:

for the display region of each layer of data to be shown on the screen, setting in a first rectangular window device a graphics window corresponding to the graphics layer data and a cursor window corresponding to the cursor layer data, setting in a second rectangular window device a video window corresponding to the video layer data, and defining window position coordinates for each of these windows;

scanning the screen and sending the resulting current scan pixel, in the form of a coordinate scan stream, to the first rectangular window device and the second rectangular window device;

comparing the current scan pixel with the window position coordinates of each window to determine the window in which the current scan pixel lies;

according to the window in which the current scan pixel lies, sending each layer of data to a read buffer device, wherein a one-of-many selection is made among the data including the graphics layer data and the cursor layer data before it is sent to the read buffer device, forming a temporary layer;

when the data in the read buffer device is judged to be temporary layer data, converting the temporary layer data from RGB format to YUV format; and

alpha-blending the video layer data and the format-converted temporary layer data in the YUV color domain.

EMBODIMENTS

In MPEG-2 decoding, the GUI layer, OSD layer, subtitle layer and so on are usually stored in scattered form in synchronous dynamic random access memory (SDRAM). To simplify the hierarchical mode of the DVB standard, such scattered content as the GUI layer, OSD layer and subtitle layer can be pre-mixed before being fed into the on-screen display mixing system (hereinafter, the mixing system) to form a single full-screen graphics layer, which is stored back into SDRAM. In one embodiment of the invention, the scattered GUI, OSD and subtitle content in SDRAM is pre-mixed by alpha blending in a 2D module engine to form the full-screen graphics layer. In a preferred implementation, the graphics layer may be allowed to support several small-bit-width pixel formats, which reduces the bandwidth requirement to some extent. After pre-mixing, the original mixing model is simplified to four layers, as shown in Fig. 2: the background layer 11, the graphics layer 17, the video layer 12 and the cursor layer 16. This provides a simplified model as a first step toward simplifying the design of the mixing system.

Fig. 3 is a schematic diagram of the system engine of the mixing system according to one embodiment of the invention. The system engine comprises a screen coordinate scanning device 31, a first rectangular window device 33, a second rectangular window device 34, a read buffer device 35, a conversion device 36 and an ALPHA mixing device 37. The screen coordinate scanning device 31 scans the whole screen and emits a coordinate scan stream to the first rectangular window device 33 and the second rectangular window device 34. Using the window mechanism, the first rectangular window device 33 and the second rectangular window device 34 determine from the coordinate scan stream which window the current scan pixel lies in. The read buffer device 35 reads the current screen data according to the judgments of the first rectangular window device 33 and the second rectangular window device 34. The data sent out by the first rectangular window device 33 is defined as the temporary layer. The conversion device 36 converts the temporary layer data from RGB to YUV, and the ALPHA mixing device 37 alpha-blends the video pixel and the temporary layer pixel at the current coordinate.

Fig. 4 illustrates the working mode of the screen coordinate scanning device. The whole screen can usually be defined as a coordinate system, as shown in Fig. 4, with the origin (0,0) at the top-left corner, the positive X axis pointing right and the positive Y axis pointing down. A "window" in the window mechanism is a region in this screen coordinate system. In one embodiment of the invention it is a rectangular region that can be specified by the position coordinates of its top-left and bottom-right corners. For example, the display region corresponding to the video layer, i.e. the video window, is determined by the two position coordinates (Xv1, Yv1) and (Xv2, Yv2); the display region of the background, i.e. the background window, is determined by the two position coordinates (Xb1, Yb1) and (Xb2, Yb2); and so on.
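As a concrete illustration of the kind of operation the conversion device 36 performs, the following C sketch converts one RGB pixel to YUV. The patent does not specify the conversion coefficients, so standard ITU-R BT.601 full-range coefficients are assumed here, and the type and function names are illustrative only; a hardware implementation would typically use fixed-point arithmetic rather than floating point.

```c
#include <stdint.h>

typedef struct { uint8_t r, g, b; } rgb_t;
typedef struct { uint8_t y, u, v; } yuv_t;

static uint8_t clamp_u8(int v) { return (uint8_t)(v < 0 ? 0 : (v > 255 ? 255 : v)); }

/* RGB -> YUV using BT.601 full-range coefficients (an assumption; the patent
 * only states that the temporary layer is converted from RGB to YUV). */
static yuv_t rgb_to_yuv(rgb_t p)
{
    yuv_t out;
    out.y = clamp_u8((int)( 0.299 * p.r + 0.587 * p.g + 0.114 * p.b));
    out.u = clamp_u8((int)(-0.169 * p.r - 0.331 * p.g + 0.500 * p.b + 128));
    out.v = clamp_u8((int)( 0.500 * p.r - 0.419 * p.g - 0.081 * p.b + 128));
    return out;
}
```

Because every RGB-domain layer selected into the temporary layer is funneled through this single conversion stage, only one such converter is needed in hardware, which is the source of the area saving described below.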

Scanning starts from the top-left corner of the screen and proceeds until the end of the screen. Odd lines are scanned during the odd scan field and even lines during the even scan field. The screen coordinate scanning device scans the screen according to the video scan timing and steadily emits the coordinate scan stream.

The screen coordinate scanning device contains a horizontal counter and a vertical counter (not shown). The horizontal counter counts across the screen in the horizontal direction and the vertical counter counts down the screen in the vertical direction. A 640*480 scan area is taken here as an example. When the mixing system counts in frame mode, the horizontal counter is incremented by 1 from 0 to 639 and the vertical counter is incremented by 1 from 0 to 479. When the mixing system counts in field mode, the whole screen is divided into an odd field and an even field; the odd scan field is counted first and then the even scan field. When counting the odd scan field, the horizontal counter is incremented by 1 from 0 to 639 and the vertical counter is incremented by 2 from 0 to 478; when counting the even scan field, the horizontal counter is incremented by 1 from 0 to 639 and the vertical counter is incremented by 2 from 1 to 479.

Fig. 5 is a block diagram of the mixing system according to one embodiment of the invention. The control register 40 receives command signals from the CPU (not shown) and controls the operation of the system engine.
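The counting scheme just described can be summarized in a short C sketch that emits the coordinate scan stream for a 640*480 screen in either frame mode or field mode. The emit_pixel callback stands in for the downstream window devices and is a hypothetical name, not something defined in the patent.

```c
typedef void (*emit_fn)(int x, int y);  /* downstream consumer of the coordinate scan stream */

/* Frame mode: every line of the 640*480 screen in order. */
static void scan_frame(emit_fn emit_pixel)
{
    for (int y = 0; y <= 479; y += 1)
        for (int x = 0; x <= 639; x += 1)
            emit_pixel(x, y);
}

/* Field mode: odd scan field (lines 0, 2, ..., 478) first,
 * then even scan field (lines 1, 3, ..., 479). */
static void scan_fields(emit_fn emit_pixel)
{
    for (int y = 0; y <= 478; y += 2)        /* odd scan field  */
        for (int x = 0; x <= 639; x += 1)
            emit_pixel(x, y);
    for (int y = 1; y <= 479; y += 2)        /* even scan field */
        for (int x = 0; x <= 639; x += 1)
            emit_pixel(x, y);
}
```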
The working state machine of the control register 40 is shown in Fig. 6 and comprises four states: a start state 61, an idle state 63, a work state 65 and an error handling state 67. When the mixing system starts operating, the CPU first writes the control register and assigns 1 to its start register (not shown), and the mixing system moves from the start state 61 to the idle state 63. In the idle state 63, when the mixing system detects the synchronization signal VSync, it reloads the shadow registers (not shown) and enters the work state 65, in which the system engine begins mixing. When the mixing system has finished moving the data of the current frame, it receives a signal with DMA_finish equal to 1 and enters the error handling state 67. In this state all input data is flushed in first-in, first-out (FIFO) fashion, so that no data left over from an erroneous previous frame remains; each frame is thereby isolated from the others, which serves the purpose of error detection. The error handling state 67 returns automatically to the idle state 63 after five cycles.

Referring to Figs. 5 and 6, in one embodiment of the invention the first rectangular window device 33 makes a one-of-many selection among the background layer, the graphics layer, the cursor layer and so on to obtain the temporary layer in the RGB domain. As described above, in one embodiment of the invention the scattered GUI, OSD and subtitle content can be pre-mixed into a full-screen graphics layer and stored back into SDRAM before being fed to the mixing system, so the first rectangular window device performs a three-way selection among the background layer, the graphics layer and the cursor layer. The first rectangular window device 33 and the second rectangular window device 34 provide a window judgment device 70 (see Fig. 7) for each of their layers; in other words, in this embodiment a graphics layer window judgment device is configured for the graphics layer and a cursor layer window judgment device for the cursor layer. Before the screen is scanned, a window is set for the display region of each of the graphics layer and the cursor layer, referred to respectively as the graphics window 331 and the cursor window 332, and each window has such a window judgment device. As shown in Fig. 2, the graphics window is determined by the two position coordinates (Xg1, Yg1) and (Xg2, Yg2), and the cursor window by the two position coordinates (Xc1, Yc1) and (Xc2, Yc2). The position coordinates of the graphics window and the cursor window are loaded into the window judgment devices at the start of each frame image.

Fig. 7 is a schematic diagram of the structure of a window judgment device. Each window judgment device in Fig. 7 has a window position coordinate register 701 and a window comparison device 702. The window position coordinate register 701 stores the position coordinates that define the window, such as the two position coordinates (Xg1, Yg1) and (Xg2, Yg2) of the graphics window 331, or (Xc1, Yc1) and (Xc2, Yc2) of the cursor window 332.

Referring to Figs. 5 and 7, the second rectangular window device 34 judges the video layer data on the same principle as the first rectangular window device 33. Its video window 341 contains a video layer window judgment device 70 (as shown in Fig. 7), and the video window 341 is determined by the two position coordinates (Xv1, Yv1) and (Xv2, Yv2). As shown in Fig. 7, the video layer window judgment device 70 likewise has a window position coordinate register; (Xv1, Yv1) and (Xv2, Yv2) are loaded into the window position coordinate register 701 at the start of each frame image.

The coordinate scan stream emitted by the screen coordinate scanning device 31 passes in parallel through the video layer window judgment device, the graphics layer window judgment device and the cursor layer window judgment device. The window comparison device in each window judgment device compares the coordinate scan stream with the position coordinates of its own window. For example, the video layer window judgment device compares the coordinate scan stream with the position coordinates (Xv1, Yv1) and (Xv2, Yv2) of the video window to determine whether the current scan pixel lies inside the video window. If the current scan pixel (X, Y) satisfies the containment condition, falling between the two position coordinates (Xv1, Yv1) and (Xv2, Yv2), i.e.:

Xv1 < X ≤ Xv2;
Yv1 ≤ Y < Yv2

then the current scan pixel falls within that window.
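The comparison performed by the window comparison device 702 amounts to the following C check. The bounds mirror the inequalities above exactly; the struct and function names are illustrative only.

```c
typedef struct {
    int x1, y1;   /* top-left corner,     e.g. (Xv1, Yv1) */
    int x2, y2;   /* bottom-right corner, e.g. (Xv2, Yv2) */
} window_t;

/* Returns 1 when scan pixel (x, y) falls inside the window, using the
 * bounds given in the description: Xv1 < X <= Xv2 and Yv1 <= Y < Yv2. */
static int in_window(const window_t *w, int x, int y)
{
    return (w->x1 <  x && x <= w->x2) &&
           (w->y1 <= y && y <  w->y2);
}
```

One such check is evaluated in parallel for the video, graphics and cursor windows on every pixel of the coordinate scan stream, and the results drive the xy_in_video, xy_in_graphic and xy_in_cursor signals described below.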

Referring to Figs. 5 and 8, the second rectangular window device 34 is provided with a video FIFO 342 that receives video data from the video decoder. When the video layer window judgment device determines that the current scan pixel falls inside the video window 341, it generates the video window valid signal xy_in_video and sends this signal to the read buffer device 35. The read buffer device 35 then loads the video pixel from the video FIFO 342 accordingly.

The first rectangular window device 33 is provided with a graphics FIFO 334 and a graphics DMA (Direct Memory Access) 333. The graphics DMA 333 moves graphics data from external memory, which may be SDRAM or the like, and places it in the graphics FIFO 334. When the graphics layer window judgment device determines that the current scan pixel falls inside the graphics window 331, it generates the graphics window valid signal xy_in_graphic and sends this signal to the read buffer device 35, which then loads the graphics pixel from the graphics FIFO 334.

The cursor data and the background data are both built into the system engine of the mixing system. In one embodiment of the invention, the read buffer device 35 contains small-capacity registers that hold the cursor data and the background data; that is, the cursor data register and the background data register reside inside the read buffer device 35. The background data in the register can be adjusted to change the color of the background. When the cursor layer window judgment device determines that the current scan pixel falls inside the cursor window, it generates the cursor window valid signal xy_in_cursor and sends this signal to the read buffer device 35, which then loads the cursor data from the internal module of the system engine. Otherwise, the read buffer device 35 loads the background data from the internal module of the system engine.

In the system definition of one embodiment of the invention, the background is used as filler whenever no other data is present. The background layer is not given a dedicated window judgment device; when the graphics, cursor and video windows are all invalid for the current scan pixel, the pixel is considered to fall in the background window and the background layer input is selected. In other embodiments of the invention, the first rectangular window device may perform only a two-way selection between the graphics layer and the cursor layer, with the background handled in the second rectangular window device: when the video layer window judgment device in the second rectangular window device finds that the current coordinate scan pixel does not fall in the video window, the pixel is considered to be in the background window and the background layer input is selected.

Fig. 8 shows the internal structure of the read buffer device and its interaction signals with the surrounding blocks. The read buffer device 35 has an input register 351 and an output register 353, and is further provided with a protocol parsing device 355 that coordinates the reading and sending of data between the input register and the output register. The input register 351 temporarily stores the signals received from the preceding first rectangular window device 33 and second rectangular window device 34, such as the video window valid signal xy_in_video and the cursor window valid signal xy_in_cursor. For the graphics FIFO 334 and the video FIFO 342, when the data that currently needs to be output has been placed on the FIFO port and is waiting to be read by the downstream module, the graphics waiting signal G_fifo_lasting of the graphics FIFO 334 is high, or the video waiting signal V_fifo_lasting of the video FIFO 342 is high. If the protocol parsing device 355 receives the graphics window valid signal xy_in_graphic and a valid graphics waiting signal G_fifo_lasting at the same time, it issues a read signal G_fifo_reading for the graphics FIFO and takes the data off the graphics FIFO port. If the protocol parsing device 355 receives the video window valid signal xy_in_video and the video waiting signal V_fifo_lasting at the same time, it issues a read signal V_fifo_reading for the video FIFO and takes the data off the video FIFO port.

The read buffer device 35 is connected to both the conversion device 36 and the ALPHA mixing device 37. The conversion device 36 converts data in RGB format into YUV format. In one embodiment of the invention, when the data in the read buffer device belongs to the temporary layer, i.e. the background layer, the graphics layer or the cursor layer, the data is first sent to the conversion device 36, converted from RGB to YUV, and then sent to the ALPHA mixing device 37. When the data in the read buffer device 35 belongs to the video layer, it is sent directly to the ALPHA mixing device 37.

The ALPHA mixing device 37 performs the ALPHA blending operation on the video pixel and the temporary layer pixel at the current coordinate. The blend is usually computed with the following formula:

dst = A * src1 + (1 - A) * src2 = A * (src1 - src2) + src2

where src1 and src2 denote the two layers taking part in the blend, src1 being assumed to lie on top of src2, and A is the ALPHA value representing "opacity", with a range of 0 to 1: a value of 0 means fully transparent and a value of 1 means fully opaque. The ALPHA mixing device 37 also has an alpha value adjustment device (not shown) for adjusting the ALPHA value.
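The blending formula can be written directly as a small C helper. This is a per-component sketch with the ALPHA value expressed as an 8-bit quantity (0-255) instead of 0-1, a common fixed-point choice that the patent itself does not mandate.

```c
#include <stdint.h>

/* dst = A*src1 + (1-A)*src2, rewritten as A*(src1-src2) + src2.
 * alpha is 0..255: 0 = src1 fully transparent, 255 = src1 fully opaque. */
static uint8_t alpha_blend(uint8_t src1, uint8_t src2, uint8_t alpha)
{
    int diff = (int)src1 - (int)src2;
    return (uint8_t)(src2 + (alpha * diff) / 255);
}
```

In the YUV domain this helper would be applied to each of the Y, U and V components of the video pixel and of the converted temporary layer pixel.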
For example, for a given pixel, let src1 be the graphics layer and src2 the video layer. When the video layer data should appear above the graphics layer, the alpha value adjustment device sets the ALPHA value to 0, so that after blending the video layer completely covers the graphics layer at that pixel.

Fig. 9 is a flowchart of the on-screen display mixing method according to one embodiment of the invention. As shown in Fig. 9, the mixing method of the invention includes the following steps.

Step 901: Set up a plurality of windows and define window position coordinates for each window. The windows include a video window, a graphics window and a cursor window.

Step 902: Scan the screen and send the coordinate scan stream to the first and second rectangular window devices. In this embodiment, the screen coordinate scanning device scans the screen and sends the resulting scan pixels, in the form of a coordinate scan stream, to the first rectangular window device and the second rectangular window device.

Step 903: Compare the current scan pixel with the window position coordinates of the windows in the first and second rectangular window devices to determine the window in which the current scan pixel lies. In this embodiment, the first and second rectangular window devices each compare the current scan pixel with the windows configured in them; the first rectangular window device makes a one-of-many selection to form the temporary layer, and the second rectangular window device compares the scan pixel with the video window to determine whether it lies in the video window.

Step 904: Transfer data to the read buffer device according to the window in which the current scan pixel lies. In this embodiment, if the current scan pixel is in the graphics window, the first rectangular window device transfers graphics data from the graphics FIFO to the read buffer device; if it is in the video window, the second rectangular window device transfers video data from the video FIFO to the read buffer device; if it is in the cursor window, the cursor data is read directly from the system engine into the read buffer device; and if the current scan pixel does not fall in any defined coordinate window, the background data is read from the system engine into the read buffer device.

Step 905: Determine whether the data in the read buffer device belongs to the temporary layer. In this embodiment, when the data in the read buffer device is judged to be temporary layer data, i.e. background, graphics or cursor data, step 906 is performed; when it is not temporary layer data, i.e. it is video layer data, the data is sent directly to the ALPHA mixing device (step 907).

Step 906: Send the data to the conversion device, so that it is first converted from RGB format to YUV format and then sent to the ALPHA mixing device.

Step 907: ALPHA blending. In this embodiment, the YUV-domain temporary layer data is ALPHA-blended with the video layer data.

In the above, before being fed into the mixing system, the graphics layer may be formed by pre-mixing scattered content such as the GUI layer, the OSD layer and the subtitle layer into a single full-screen graphics layer, which is stored back into SDRAM. In step 902, the screen coordinate scanning device steadily emits the coordinate scan stream according to the odd/even scan field timing of the video, and the coordinate scan stream is processed by the rectangular window devices in parallel, so that the window in which the current scan pixel lies is determined. In step 904, the background data in the system engine is first set by software in RGB format and then read from the system engine into the read buffer device. In step 906, the background layer, the graphics layer and the cursor layer share a single RGB-to-YUV color domain conversion device on the hardware path.
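Putting steps 901-907 together, the per-pixel behaviour of the mixing engine can be modelled as follows. This is only a software sketch of the data flow under the stated assumptions: it reuses the rgb_t/yuv_t/window_t types and the in_window, rgb_to_yuv and alpha_blend helpers from the earlier sketches, the layer fetch functions are hypothetical stand-ins for the FIFOs and internal registers, and the priority order cursor > graphics > background is assumed from the layer ordering of Fig. 2.

```c
/* Hypothetical data sources standing in for the FIFOs and internal registers. */
extern rgb_t  read_cursor_pixel(int x, int y);
extern rgb_t  read_graphics_pixel(int x, int y);
extern rgb_t  read_background_pixel(void);
extern yuv_t  read_video_pixel(int x, int y);
extern uint8_t current_alpha(int x, int y);   /* from the alpha value adjustment device */

/* One output pixel of the mixed picture, for scan position (x, y). */
static yuv_t mix_pixel(int x, int y,
                       const window_t *video_win,
                       const window_t *gfx_win,
                       const window_t *cur_win)
{
    /* Steps 903/904: one-of-many selection of the temporary layer source. */
    rgb_t tmp;
    if (in_window(cur_win, x, y))
        tmp = read_cursor_pixel(x, y);       /* internal cursor register   */
    else if (in_window(gfx_win, x, y))
        tmp = read_graphics_pixel(x, y);     /* graphics FIFO              */
    else
        tmp = read_background_pixel();       /* internal background colour */

    /* Step 906: the single shared RGB -> YUV conversion for the temporary layer. */
    yuv_t tmp_yuv = rgb_to_yuv(tmp);

    /* Step 907: blend with the video layer when the pixel lies in the video window. */
    if (in_window(video_win, x, y)) {
        yuv_t vid = read_video_pixel(x, y);  /* video FIFO */
        uint8_t a = current_alpha(x, y);
        yuv_t out;
        out.y = alpha_blend(vid.y, tmp_yuv.y, a);
        out.u = alpha_blend(vid.u, tmp_yuv.u, a);
        out.v = alpha_blend(vid.v, tmp_yuv.v, a);
        return out;
    }
    return tmp_yuv;
}
```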
Fig. 10 is a flowchart of the on-screen display mixing method according to another embodiment of the invention. It differs from the embodiment of Fig. 9 in that the background layer is not treated as part of the temporary layer; instead it is set directly in YUV format by software in the system engine, so it needs no RGB-to-YUV conversion and, after being read from the system engine into the read buffer device, takes part in the ALPHA blending operation directly. As shown in Fig. 10, in step 903' of this embodiment the first rectangular window device makes the one-of-many selection only among the graphics data, the cursor data and the like to form the temporary layer.

In yet another embodiment, the above-mentioned OSD layer, GUI layer and subtitle layer may be sent to the on-screen display mixing system and processed directly, without pre-mixing; in that case the graphics window may comprise individual windows such as an OSD window, a GUI window and a subtitle window.

As mentioned above, in one embodiment of the invention the scattered GUI, OSD and subtitle content in SDRAM can be pre-mixed by alpha blending in a 2D module system engine to form a full-screen graphics layer. The following is an embodiment of the invention that uses a 2D module to perform this mixing and form the graphics layer; the pre-mixing can, of course, also be done with other conventional modules.

Fig. 11 is a hardware block diagram of the 2D module according to one embodiment of the invention. The 2D module uses the data DMA 111 to fetch data from SDRAM and feed it into the source buffers. The 2D module in the figure is provided with a first source buffer 113 and a second source buffer 114, so that when the data DMA 111 fetches data from SDRAM it can fetch two layers at a time and place them into the first source buffer 113 and the second source buffer 114 respectively.

113和第二源緩衝器1丨4。第一源緩衝器113和第二源緩衝 器U4中的資料隨後進入2Da丨pha混合單元116中進行預 先混合。在本發明的實施例中,第一源緩衝器丨丨3和第二 源緩衝器114與2Dalpha混合單元之間可以設置格式轉換 早το 120,以支援各種像素格式的資料層。這樣可以允許圖 形層支援多種小存儲位元寬的像素格式,從而在一定程度 上降低頻寬的需求。根據本發明的一個實施例,如第u圖 所不,格式轉換單元12〇包括格式轉換器121和123,可以 把如 RGB565,ARGB3454,ARGB4444,ARGB32 等像素 格式在内的多種格式轉換成統一格式例如ARGB32像素格 式。廷樣’ 2Dalpha混合單元可以在例如ARGB32這—種格 式下進行預先混合。圖11所示的實施例中,調色板122與 ,式轉換H 121並行設置在第__源緩衝器與2Dalpha混^ 早兀之間,同樣地,調色板122可在第一源緩衝器和第二 源緩衝器114肖2Daipha混合單元U6之間進行一些格式 種類的像素格式轉換,例如把8位元ARGB32索引、8位 元AYUV32索引等轉換成ARGB32格式。可以理解的是, 與2Dalpha混合單元之間的格式轉換單元除了本實施例中 列出的格式轉換器12卜123和調色板122外,還可以配置 其他的裝置來進行各種像素格式的轉換。 2Dalpha混合單元對第一源緩衝器和第二源緩衝器中 經過格式轉換的像素進行ALPHA混合操作。通常可以採取 如下公式進行計算:113 and a second source buffer 1丨4. The data in the first source buffer 113 and the second source buffer U4 are then entered into the 2Da 丨pha mixing unit 116 for pre-mixing. In an embodiment of the present invention, a format conversion early το 120 may be set between the first source buffer 丨丨3 and the second source buffer 114 and the 2Dalpha mixing unit to support data layers of various pixel formats. This allows the graphics layer to support multiple pixel formats with small memory bit widths, thereby reducing the bandwidth requirements to a certain extent. According to an embodiment of the present invention, as shown in FIG. 5, the format conversion unit 12 includes format converters 121 and 123, which can convert various formats such as RGB565, ARGB3454, ARGB4444, ARGB32, etc. into a unified format. For example, the ARGB32 pixel format. The sample-like 2Dalpha mixing unit can be pre-mixed in a format such as ARGB32. In the embodiment shown in FIG. 11, the palette 122 and the pattern conversion H 121 are disposed in parallel between the __source buffer and the 2Dalpha buffer, and similarly, the palette 122 can be buffered at the first source. And the second source buffer 114, the 2Daipha mixing unit U6 performs some format type pixel format conversion, for example, converting an 8-bit ARGB32 index, an 8-bit AYUV32 index, etc. into an ARGB32 format. It can be understood that the format conversion unit with the 2Dalpha mixing unit can configure other devices to perform conversion of various pixel formats in addition to the format converters 12 and 123 of the present embodiment. The 2Dalpha blending unit performs an ALPHA blending operation on the format converted pixels in the first source buffer and the second source buffer. It can usually be calculated by the following formula:

The blend is usually computed with the same formula as before:

dst = A * src1 + (1 - A) * src2 = A * (src1 - src2) + src2

where src1 and src2 denote the layers taking part in the blend, here the format-converted pixel layers from the first source buffer and the second source buffer. src1 is assumed to lie on top of src2, and A is the alpha value representing "opacity", with a range of 0 to 1: a value of 0 means fully transparent and a value of 1 means fully opaque.

The 2D alpha mixing unit may contain 2D alpha value adjustment logic for adjusting the ALPHA value. For example, for a given pixel, let src1 be the GUI layer and src2 the OSD layer; when the OSD layer data should appear above the GUI layer, the 2D alpha value adjustment logic sets the ALPHA value to 0, so that after blending the OSD layer completely covers the GUI layer at that pixel.

The 2D alpha mixing unit sends the blended data to the 2D output FIFO 117. The 2D output FIFO 117 is connected to the 2D return FIFO 112, and the data in the 2D return FIFO is sent back to SDRAM through the data DMA. When the 2D module is used for multi-layer mixing, for example three or more layers, the data in the 2D output FIFO can be sent through the 2D return FIFO 112 back to SDRAM and then fed by the data DMA into the 2D module again to be mixed with further layers in SDRAM; by repeatedly using the 2D module, which supports two-layer ALPHA blending, multi-layer mixing is achieved.
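Because the hardware blends exactly two source buffers per pass, an N-layer pre-mix is built from repeated two-layer passes through SDRAM, as described above. The following C sketch models that loop for one pixel position; blend_2d stands for one pass of the 2D alpha mixing unit, and the per-layer alpha array is an assumption the patent leaves open.

```c
#include <stdint.h>

/* One pass of the 2D alpha mixing unit on a single ARGB32 pixel:
 * top is blended over bottom with the given alpha (0..255). */
static uint32_t blend_2d(uint32_t top, uint32_t bottom, uint8_t alpha)
{
    uint32_t out = 0;
    for (int shift = 0; shift <= 24; shift += 8) {    /* B, G, R, A channels */
        int t = (int)((top    >> shift) & 0xFF);
        int b = (int)((bottom >> shift) & 0xFF);
        int c = b + (alpha * (t - b)) / 255;          /* A*(src1-src2) + src2 */
        out |= (uint32_t)(c & 0xFF) << shift;
    }
    return out;
}

/* Pre-mix n layers (bottom first) into one graphics-layer pixel by repeated
 * two-layer passes, mimicking the output FIFO -> return FIFO -> SDRAM loop. */
static uint32_t premix_layers(const uint32_t *layers, const uint8_t *alphas, int n)
{
    uint32_t acc = layers[0];                 /* e.g. the GUI layer           */
    for (int i = 1; i < n; i++)               /* e.g. OSD, then subtitle      */
        acc = blend_2d(layers[i], acc, alphas[i]);
    return acc;                               /* stored back to SDRAM         */
}
```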
The system engine of the 2D module is controlled by the 2D configuration register 200, which receives control commands from the CPU and controls the 2D module system engine through an interrupt communication mechanism. The working state machine of the 2D configuration register 200 is shown in Fig. 12 and comprises four states: a start state 21, an idle state 23, a work state 25 and an interrupt service routine (ISR) state 27. In the start state 21, the CPU issues a start command to the 2D configuration register 200 and configures all the other 2D registers; the 2D module system engine then enters the idle state 23. In the idle state 23, if the configuration register 200 receives a signal from the CPU setting the engine_start register to 1, the 2D configuration register 200 enters the work state and the 2D module system engine starts working. In the work state 25, if the 2D module system engine receives an interrupt command from the CPU, it enters the interrupt service routine (ISR) state 27 and waits. In the ISR state, if the 2D configuration register receives a signal setting the engine_start register to 1, it returns to the work state and continues working; if it receives a signal that the engine_start register is 0, it returns to the idle state 23.

Although the invention has been disclosed above by way of preferred embodiments, they are not intended to limit the invention. Anyone skilled in the art may make various changes and modifications without departing from the spirit and scope of the invention; the scope of protection of the invention is therefore defined by the appended claims.

BRIEF DESCRIPTION OF THE DRAWINGS

The following drawings serve as supplementary illustration of exemplary embodiments of the invention. Together with the description of the embodiments, they further reveal the features of the invention, but do not limit the corresponding elements, components or steps of the invention. The same reference numerals in the drawings denote the same items.

Fig. 1 is a schematic diagram showing the hierarchical mode of the display planes in DVB technology.

Fig. 2 is a schematic diagram showing the simplified hierarchical mode of the display planes in one embodiment of the invention.

Fig. 3 is a schematic diagram of the system engine structure of the mixing system according to one embodiment of the invention.

Fig. 4 is a schematic diagram of the working mode of the screen coordinate scanning device shown in Fig. 3.

Fig. 5 is a block diagram of the mixing system according to one embodiment of the invention.

Fig. 6 is a schematic diagram of the working state machine of the mixing system shown in Fig. 5.

Fig. 7 is a schematic diagram of the structure of the window judgment device in the mixing system of the invention.

Fig. 8 is a schematic diagram of the internal structure of a read buffer device in the mixing system of the invention and of its interaction signals with the surrounding blocks.

Fig. 9 is a flowchart of one embodiment of the on-screen display mixing method of the invention.

Fig. 10 is a flowchart of another embodiment of the on-screen display mixing method of the invention.

Fig. 11 is a hardware block diagram of the 2D module according to one embodiment of the invention.

Fig. 12 is a schematic diagram of the working state machine of the 2D module shown in Fig. 11.

DESCRIPTION OF THE MAIN REFERENCE NUMERALS

11: background layer; 12: video layer; 13: GUI layer; 14: OSD layer; 15: subtitle layer; 16: cursor layer; 17: graphics layer
21: start state; 23: idle state; 25: work state; 27: interrupt service routine state
31: screen coordinate scanning device; 33: first rectangular window device; 34: second rectangular window device; 35: read buffer device; 36: conversion device; 37: ALPHA mixing device; 40: control register
61: start state; 63: idle state; 65: work state; 67: error handling state
70: window judgment device
111: data DMA; 112: 2D return FIFO; 113: first source buffer; 114: second source buffer; 116: 2D alpha mixing unit; 117: 2D output FIFO; 120: format conversion unit; 121, 123: format converters; 122: palette
200: 2D configuration register
331: graphics window; 332: cursor window; 333: graphics DMA; 334: graphics FIFO; 341: video window; 342: video FIFO
351: input register; 353: output register; 355: protocol parsing device
701: window position coordinate register; 702: window comparison device
901, 902, 903, 904, 905, 906, 907, 903': steps
(Xb1, Yb1) and (Xb2, Yb2); (Xc1, Yc1) and (Xc2, Yc2); (Xg1, Yg1) and (Xg2, Yg2); (Xv1, Yv1) and (Xv2, Yv2): position coordinates

1313563 私乡日修正替換頁 冷〆 圍 45»利專請中 1.一種螢幕顯示混合系統,包括: • 一螢幕座標掃描裝置,掃描螢幕而産生掃描像素,該 掃描像素以一座標掃描流形式形成; 一第一矩形視窗裝置,根據該座標掃描流判斷該掃描 像素所坐落之一視窗,並形成一 RGB域之臨時層,其中該 視函為包括一圖形層視窗及一游標視窗之複數個視窗中之 其中-個,而該臨時層為包括一背景層、一圖形層及—游 標層之複數個資料中之其中一個; 第一矩形視窗裝置,根據該座標掃描流判斷該掃描 像素位於一視訊視窗; 一讀緩衝裝置,分別接收該第一矩形視窗裝置及該第 二矩形視窗裝置之判斷結I,㈣取該掃描像素所在之該 視窗; 一轉換裝置,對該RGB域之臨時層資料進行rgb域 至YUV域之格式轉換;以及 一 Alpha混合裝置,對一已轉為Yuv域之臨時層資料 與一視訊層資料進行Alpha混合。 2.如申請專利範圍帛μ所述之螢幕顯示混合系統, 其中該圖形層由以零散形式存貯在同步冑態隨機二取纪憶 體之圖形使用者介面層1幕顯示操作介面層及字幕層 預先alpha混合而成後,又再度存回同步動態隨機存取記憶 體中0 25 1313563 复.如申請專利範圍帛"員所述之營幕顯示混合系統, ”中,該圖形層支援小位元寬之像素格式。 :·如申請專利麵1項所述之螢幕顯示混合系統, 、Λ者景層、該圖开》層和該游標層於一硬體路徑上共用 轉換裝置,該轉換裝置進行rgb域至Yuv域之格式轉 5. 如中請專利範圍帛丨項所述之螢幕顯示混合系統, 帛-矩形視窗裳置更設有—圖形先進先出單元及一 接記憶體存取,該圖形直接記憶體存取從外部記憶 %取-圖形資料’並存放_形先進先出單元中。 6. 如中請專利範圍帛!項所述之㈣顯示混合系統, 窗判斷较:了別設置對應之一視窗判斷裝置,該視 置,2 視窗位置座標暫存器及-視窗比較裝 視窗位置座標暫存11存有該視窗之位置座標,在該 '較裝置:該座標掃描流與該視窗位置座標進行比 X以判斷該抑插像素是否落於一輸入視窗判斷農置中。 7. -種螢幕顯示混合方法,它包括如下步驟. 設置複數個視窗,並爲每一視窗定義視窗位置座標, 以對應於螢幕顯示的每一層資料的顯示區域,在—第一矩 形祝窗裝置設置對應於一圖形層資料之圖形視窗及對應於 26 1313563 游標層資料之游標視窗,而在__第二矩形視窗裝置設置 對應於一視訊層資料的視訊視窗; 叹 、、婦描榮幕,並將得到之掃描像素以座標掃描流的形式 4至該第一矩形視窗裝置及該第二矩形視窗裝置; 比較該掃描像素與每-視窗之視窗位置座標,以確定 該掃描像素所在之視窗; 根據該掃描像素所位於之視窗,將每一層資料送至— 讀緩衝裝置’其中,該圖形層資料及㈣標層資料在送至 讀緩衝裝置前先進行多選—的選擇,以形成—臨時層; 當判斷該讀緩衝裝置中的資料爲一臨時層資料時,對 該臨時層資料進行RGB格式到γυν格式之轉換; 對視訊層資料及已經過格式轉換後之該臨時層資料在 YUV顏色域進行Alpha混合。 8.如申請專利範圍第7項所述之營幕顯示混合方法, 其中’當該掃描像素位於該圖形視f,則該第—矩形視窗 裝置從—圖形單元中傳輪圖形資料至該讀緩衝裝 置:而當該掃描像素位在視訊視f,_第三㈣視窗裝 置'缺視訊先進先出單(中傳輪視訊資料至該讀緩衝裝 置’ ^當該掃描像素在該游標視t,❹接讀取游標資 料至該讀緩«置,另夕卜,#該掃㈣已定義之 座標視窗,則讀取背景資料到讀緩衝裝置。 9·:申請專利麵7項所述之螢幕顯示混合方法, 其中1幕座標掃描裝置根據視訊的奇、偶數掃_場時 27 1313563 η 序’穩定地發出座標掃描、士 ^ r- 4» 4^ , 坪拖机’泫座標掃描流並行地通過每 一視窗裝置處理,從而主丨齡山為η 士垣 1斷出當時知描像素位於何視窗之 内。 10.如申請專利範圍第7項所述之營幕顯示混合方 法其中,彦'7、層資料先由軟體設置為Rgb格式,再讀 取至該讀緩衝裝置。 I 、11.如申請專利範圍第7項所述之螢幕顯示混合方 法,其中’一背景層、圖形層及游標層共用一個RGB域到 YUV域之轉換裝置。 、I2.如申請專利範圍第7項所述之螢幕顯示混合方 法’其中’―背景層由軟體直接設置爲YUV格式,不為該 ej寺層之部分,因❿經讀取到該讀緩衝裝置後,直接參 與ALPHA >昆合操作。 、I3.如申請專利範圍第7項所述之螢幕顯示混合方 去,其中,該圖形層由一圖形使用者介面層、—發幕顯示 操作介面層及一字幕層混合而成。 28 1313563 七、指定代表圖·· (一) 、本案指定代表圖為. (二) 、本案押為·弟(3)圖 31:螢幕座標掃描裝置 ^ 儿間早呪明· 33 :第一矩形視窗裝置 34:第二矩形視窗裝置 3 5 :讀緩衝裝置 田衣置 36:轉換裝置 37 : ALPHA混合裝置 八、本案若有化學式時 特徵的化學式·· 5月揭不最能顯示發日月1313563 Private Day Correction Replacement Page Cold Rolling 45»利Special 1. A screen display mixing system, including: • A screen coordinate scanning device that scans the screen to produce scanning pixels that are formed as a standard scanning stream a first rectangular window device, determining a window in which the scanning pixel is located according to the coordinate scanning stream, and forming a temporary layer of an RGB domain, wherein the visual function is a plurality of windows including a graphic layer window and a cursor window One of the plurality, and the temporary layer is one of a plurality of materials including a background layer, a graphics layer, and a vernier layer; the first rectangular window device determines that the scanning pixel is located in a video according to the coordinate scanning stream a first reading buffer device, respectively receiving the judgment node I of the first rectangular window device and the second rectangular window device, (4) taking the window where the scanning pixel is located; and a converting device, performing the temporary layer data of the RGB domain Format conversion from rgb domain to YUV domain; and an alpha blending device for temporary layer data and a view of a Yuv domain Alpha layer data were mixed. 2. 
The screen display hybrid system as described in the patent application scope, wherein the graphic layer is displayed in a form of a user interface layer and a subtitle of a graphical user interface layer stored in a random state in a random state. After the layer is pre-alpha blended, it is stored again in the synchronous dynamic random access memory (0 25 1313563). If the patent application scope " the staff described the camp display hybrid system,", the graphics layer supports small The pixel format of the bit width. The image display hybrid system, the layer of the image, the layer of the image, and the layer of the cursor share a conversion device on a hardware path, the conversion is performed. The device performs the format conversion from the rgb domain to the Yuv domain. 5. The screen display hybrid system described in the scope of the patent application, the 帛-rectangular window skirt is further provided with a graphic first-in first-out unit and a memory access. The graphic direct memory access is taken from the external memory %-graphic data' and stored in the _-shaped first-in first-out unit. 6. As shown in the patent scope 帛! (4) display hybrid system, the window is judged: Don't set One of the window determining means, the view, the 2 window position coordinate register and the - window comparison loading window position coordinate temporary storage 11 storing the position coordinates of the window, in the 'comparison device: the coordinate scanning stream and the window The position coordinate is compared with X to determine whether the suppression pixel falls within an input window to determine the farming position. 7. - A screen display mixing method, which includes the following steps. Setting a plurality of windows and defining a window position coordinate for each window , in the display area corresponding to each layer of data displayed on the screen, the first rectangular window device is provided with a graphic window corresponding to a graphic layer data and a cursor window corresponding to the cursor layer data of 26 1313563, and in the __ second The rectangular window device sets a video window corresponding to a video layer data; sighs, and displays the scanned pixels in the form of a coordinate scanning stream 4 to the first rectangular window device and the second rectangular window device; Comparing the scan pixel with the window position coordinate of each window to determine a window in which the scan pixel is located; according to the window in which the scan pixel is located, Each layer of data is sent to a read buffer device, wherein the graphic layer data and (4) the layer data are first selected before being sent to the read buffer device to form a temporary layer; when determining the read buffer device When the data is a temporary layer of data, the temporary layer data is converted into an RGB format to a γυν format; the video layer data and the temporary layer data that has been format converted are alpha-blended in the YUV color field. 
The camp display mixing method according to Item 7 of the patent scope, wherein 'when the scanning pixel is located in the graphic view f, the first rectangular window device transmits the graphic data from the graphic unit to the read buffer device: The scanning pixel is in the video view f, the third (four) window device 'missing the video first-in first-out order (the middle-passing video information to the read buffer device) ^ when the scanning pixel is in the cursor, t is connected to the read cursor The data is read to the reading buffer, and the other is to read the background data to the reading buffer device. 9: Applying the screen display mixing method described in item 7 of the patent, wherein the 1-screen coordinate scanning device stably emits coordinate scans according to the odd and even scans of the video 27 1313563 η sequence, and the ^r- 4» 4 ^, the flat tractor's coordinate scanning stream is processed in parallel through each window device, so that the main age mountain is η 士垣1 to break out the window in which the known pixel is located. 10. The camping display mixing method as described in claim 7 wherein the data of the layer is first set by the software to the Rgb format and then read to the read buffer device. I. 11. The screen display mixing method of claim 7, wherein the 'background layer, the graphics layer and the vernier layer share a conversion device from the RGB domain to the YUV domain. I2. The screen display mixing method described in item 7 of the patent application scope is as follows: 'the background layer is directly set to the YUV format by the software, and is not part of the ej temple layer, because the reading buffer device is read. After that, directly participate in ALPHA > Kunhe operation. I3. The screen display mixing party according to item 7 of the patent application scope, wherein the graphic layer is composed of a graphic user interface layer, a screen display operation interface layer and a subtitle layer. 28 1313563 VII. Designation of Representative Representatives (1) The representative representative of the case is (2), the case is escorted to the younger brother (3) Figure 31: Screen coordinate scanning device ^ 儿间早明· 33: First rectangle Window device 34: second rectangular window device 3 5 : read buffer device field clothing set 36: conversion device 37: ALPHA hybrid device 8. The chemical formula of the case if there is a chemical formula in this case ·· May not reveal the most

Priority Applications (1)

Application: TW95117538A — Priority date: 2006-05-17 — Filing date: 2006-05-17 — Title: A display mixer and the method thereof (TWI313563B, en)

Applications Claiming Priority (1)

Application: TW95117538A — Priority date: 2006-05-17 — Filing date: 2006-05-17 — Title: A display mixer and the method thereof (TWI313563B, en)

Publications (2)

Publication Number Publication Date
TW200744373A TW200744373A (en) 2007-12-01
TWI313563B true TWI313563B (en) 2009-08-11

Family

ID=45072791

Family Applications (1)

Application Number Title Priority Date Filing Date
TW95117538A TWI313563B (en) 2006-05-17 2006-05-17 A display mixer and the method thereof

Country Status (1)

Country Link
TW (1) TWI313563B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI391907B (en) * 2007-12-25 2013-04-01 Mstar Semiconductor Inc Method for setting caption window attributes and associated tv system

Also Published As

Publication number Publication date
TW200744373A (en) 2007-12-01

Similar Documents

Publication Publication Date Title
CN102981793B (en) Screen synchronization method and device
CN103139517B (en) Projection system and information processing apparatus
US6493008B1 (en) Multi-screen display system and method
CN100556085C (en) Screen display hybrid system and mixed method
US8723891B2 (en) System and method for efficiently processing digital video
US8275031B2 (en) System and method for analyzing multiple display data rates in a video system
CN103974007B (en) The stacking method and device of screen menu type regulative mode information
US8421921B1 (en) Post processing displays with on-screen displays
CN100414981C (en) A control system for screen display
EP1589521A2 (en) Compositing multiple full-motion video streams for display on a video monitor
CA2286194A1 (en) Multi-format audio/video production system with frame-rate conversion
JP2007264141A (en) Video display apparatus
KR100464421B1 (en) Method and apparatus to process On-Screen Display data really-displayed on screen
WO2009138015A1 (en) Image displaying method and its device
US8284209B2 (en) System and method for optimizing display bandwidth
US8184137B2 (en) System and method for ordering of scaling and capturing in a video system
US6919929B1 (en) Method and system for implementing a video and graphics interface signaling protocol
TW200924529A (en) Image processing apparatus and method
US20060017850A1 (en) Video combining apparatus and method thereof
TWI313563B (en) A display mixer and the method thereof
TWI540499B (en) Video processing system and video processing method
CN101986384A (en) Method for processing display of multi-layer picture in picture for DLP multi-screen splicing display wall
TWI707581B (en) Video processing circuit and method for handling multiple videos using single video processing path
CN101742161A (en) Method and system for displaying operation interface as well as television and display device
US6714256B2 (en) Video signal processing system

Legal Events

Date Code Title Description
MM4A Annulment or lapse of patent due to non-payment of fees