TW201901620A - Rendering method and device - Google Patents

Rendering method and device

Info

Publication number
TW201901620A
TW201901620A
Authority
TW
Taiwan
Prior art keywords
picture
texture data
pictures
index
processor
Prior art date
Application number
TW107107019A
Other languages
Chinese (zh)
Inventor
李利民
鄭劍杰
Original Assignee
香港商阿里巴巴集團服務有限公司
Application filed by 香港商阿里巴巴集團服務有限公司
Publication of TW201901620A

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41 Structure of client; Structure of client peripherals
    • H04N21/422 Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431 Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 End-user applications
    • H04N21/482 End-user interface for program selection

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Image Generation (AREA)
  • Processing Or Creating Images (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

A rendering method and device are disclosed in the present application. An electronic device provided in the present application comprises a first processor and a second processor. The first processor obtains N first images that are used for interface display, combines the N first images into a second image, and sends texture data of the second image to the second processor; on the basis of a third image to be rendered, the second processor obtains texture data of the third image from the texture data of the second image and, on the basis of the obtained texture data, renders the third image, the third image being at least one image from among the N first images. The present application enables batch rendering of images, thereby improving rendering efficiency.

Description

Rendering method and device

The present application relates to the field of computer image processing, and in particular to a rendering method and apparatus.

To ensure rendering performance and interface clarity, interface drawing on a smart TV is usually performed by the graphics processing unit (GPU), the processor dedicated to image processing. The GPU draws the interface through the Open Graphics Library (OpenGL): OpenGL provides a variety of graphics-processing functions, and graphics software performs interface drawing by calling those functions.

An interface usually contains multiple display objects, each occupying a certain display area. Taking an interface that presents a movie list as an example, it may contain promotional pictures for several movies, each picture being one display object. When the interface is drawn, each picture must be drawn separately. The drawing process for one picture generally runs as follows: the CPU in the smart TV obtains, from the network side, the picture to be shown in the movie list interface; based on the obtained picture and the preset interface layout, the CPU generates the basic data needed to draw the picture and transfers that data to the GPU in the smart TV; the GPU, following the drawing logic provided by the graphics software, calls OpenGL to draw the movie list interface from the basic data. The above is referred to as one drawing pass. The basic data used to draw a picture may include the vertex data of the picture's display area within the interface, the texture coordinate data of that area, and the picture's texture data, and may further include lighting data, scene matrices, and the like.

Every drawing pass requires the CPU to transfer basic data to the GPU. Because the data transfer bandwidth between the CPU and the GPU is limited, the GPU cannot deliver its full processing performance, and the transfer becomes the bottleneck of interface drawing. The prior art therefore proposes a batching scheme: for multiple display objects whose basic data are identical except for the vertex data and the texture coordinate data, the CPU transfers the basic data of all those display objects to the GPU at once, letting the GPU merge multiple drawing passes into one, which reduces the impact of the CPU-GPU transfer bandwidth on rendering efficiency.

This batching scheme, however, only works under a specific condition: the drawing passes of multiple display objects can be merged only if all basic data other than the vertex data and the texture coordinate data are identical. In many application scenarios this condition cannot be met (for example, in a smart TV application scenario, every picture in an interface has a different texture), so the batching scheme cannot be adopted.

How to improve rendering efficiency is therefore a technical problem in urgent need of a solution.
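The batching constraint described above can be made concrete with a short sketch. The field names (`vertices`, `texcoords`, `texture`) are illustrative assumptions, not part of the patent text: objects fall into separate batches as soon as any datum other than the vertex and texture-coordinate data differs.

```python
# Hypothetical model of the prior-art batching condition: display objects may
# share one draw call only if all fields other than "vertices" and "texcoords"
# are identical.

def batch_key(obj):
    """Key built from every field except the per-object vertex/texcoord data."""
    return tuple(sorted((k, v) for k, v in obj.items()
                        if k not in ("vertices", "texcoords")))

def group_into_batches(objects):
    batches = {}
    for obj in objects:
        batches.setdefault(batch_key(obj), []).append(obj)
    return list(batches.values())

objects = [
    {"vertices": (0, 0), "texcoords": (0, 0), "texture": "movie_a.png"},
    {"vertices": (1, 0), "texcoords": (0, 0), "texture": "movie_b.png"},
    {"vertices": (2, 0), "texcoords": (0, 0), "texture": "movie_b.png"},
]
# Two distinct textures yield two batches: differing texture data defeats batching.
print(len(group_into_batches(objects)))
```

Under this model, a movie-list interface in which every picture has its own texture degenerates to one batch per picture, which is exactly the inefficiency the application addresses.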

Embodiments of the present application provide a rendering method and device for improving rendering efficiency.

In a first aspect, an electronic device is provided, comprising: a first processor that obtains N first pictures for interface display (N being an integer greater than or equal to 1), combines the N first pictures into a second picture, and sends the texture data of the second picture to a second processor; and a second processor that obtains, according to a third picture to be drawn, the texture data of the third picture from the texture data of the second picture and draws the third picture according to the obtained texture data, the third picture being at least one of the N first pictures.

In a second aspect, a processing apparatus is provided, comprising: an obtaining unit that obtains N first pictures for interface display, N being an integer greater than or equal to 1; a data processing unit that combines the N first pictures into a second picture; and a sending unit that sends the texture data of the second picture.

In a third aspect, a processing apparatus is provided, comprising: a receiving unit that receives a second picture, the second picture being obtained by combining N first pictures, N being an integer greater than or equal to 1; a data processing unit that obtains, according to a third picture to be drawn, the texture data of the third picture from the texture data of the second picture; and a drawing unit that draws the third picture according to the obtained texture data.

In a fourth aspect, a rendering method is provided, comprising: obtaining N first pictures for interface display, N being an integer greater than or equal to 1; combining the N first pictures into a second picture; and sending the texture data of the second picture.

In a fifth aspect, a rendering method is provided, comprising: receiving a second picture, the second picture being obtained by combining N first pictures, N being an integer greater than or equal to 1; obtaining, according to a third picture to be drawn, the texture data of the third picture from the texture data of the second picture; and drawing the third picture according to the obtained texture data.

In a sixth aspect, one or more computer-readable media are provided, storing instructions that, when executed by one or more processors, cause an electronic device to perform the method of the fourth aspect.

In a seventh aspect, one or more computer-readable media are provided, storing instructions that, when executed by one or more processors, cause an electronic device to perform the method of the fifth aspect.

In the above embodiments of the present application, the first processor combines the N first pictures for interface display into a second picture and sends the texture data of the second picture to the second processor, so that the texture data of multiple first pictures is consolidated into the texture data of a single picture. This satisfies the requirement of the batching scheme, so batching can be adopted: the texture data associated with multiple pictures is sent to the second processor in one pass for picture drawing, which improves rendering efficiency.
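The combine-then-extract flow of the aspects above can be sketched minimally as follows. Nested Python lists stand in for texture data (two-dimensional arrays of color values), and all function names are illustrative assumptions rather than part of the claimed device:

```python
# Minimal sketch of the claimed flow: stitch several "first pictures" into one
# "second picture", record each picture's position, then recover one picture's
# texture data from the combined array by its position.

def combine(pictures):
    """Stitch same-height pictures side by side into one 'second picture',
    returning the combined array and each picture's (x, width) position."""
    height = len(pictures[0])
    combined = [[] for _ in range(height)]
    positions = []
    x = 0
    for pic in pictures:
        w = len(pic[0])
        positions.append((x, w))
        for row_out, row_in in zip(combined, pic):
            row_out.extend(row_in)
        x += w
    return combined, positions

def extract(combined, positions, index):
    """Recover one 'first picture' from the second picture by its position."""
    x, w = positions[index]
    return [row[x:x + w] for row in combined]

pic_a = [[1, 1], [1, 1]]          # 2x2 picture of color value 1
pic_b = [[2, 2, 2], [2, 2, 2]]    # 2x3 picture of color value 2
atlas, positions = combine([pic_a, pic_b])
assert extract(atlas, positions, 1) == pic_b
```

A real implementation would upload `atlas` once as a single OpenGL texture; `positions` corresponds to the per-picture position information the first processor saves after composing the second picture.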

While the concepts of the present application are susceptible of various modifications and alternative forms, specific embodiments thereof have been shown by way of example in the drawings and are described herein in detail. It should be understood, however, that there is no intent to limit the concepts of the present application to the particular forms disclosed; on the contrary, the intent is to cover all modifications, equivalents, and alternatives consistent with the present application and the appended claims.

References in the specification to "one embodiment", "an embodiment", "an illustrative embodiment", and the like indicate that the described embodiment may include a particular feature, structure, or characteristic, but every embodiment may or may not necessarily include that particular feature, structure, or characteristic. Moreover, such phrases do not necessarily refer to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is considered within the knowledge of those skilled in the art to effect such a feature, structure, or characteristic in connection with other embodiments, whether or not explicitly described. In addition, it should be understood that items included in a list in the form of "at least one of A, B, and C" can mean (A); (B); (C); (A and B); (A and C); (B and C); or (A, B, and C). Similarly, items listed in the form of "at least one of A, B, or C" can mean (A); (B); (C); (A and B); (A and C); (B and C); or (A, B, and C).

In some cases, the disclosed embodiments may be implemented in hardware, firmware, software, or any combination thereof. The disclosed embodiments may also be implemented as instructions carried by or stored on one or more transitory or non-transitory machine-readable (e.g., computer-readable) storage media, which may be read and executed by one or more processors. A machine-readable storage medium may be embodied as any storage device, mechanism, or other physical structure for storing or transmitting information in a machine-readable form (for example, volatile or non-volatile memory, a media disc, or other media).

In the drawings, some structural or method features may be shown in specific arrangements and/or orderings. It should be understood, however, that such specific arrangements and/or orderings may not be required; rather, in some embodiments, such features may be arranged in a manner and/or order different from that shown in the illustrative figures. In addition, the inclusion of a structural or method feature in a particular figure is not meant to imply that the feature is required in all embodiments; in some embodiments it may not be included, or it may be combined with other features.

As described in the background, adopting a batching scheme when drawing objects can improve rendering efficiency. Batching, however, requires a specific condition: the drawing passes of multiple objects can be merged only when all data other than the vertex data and the texture coordinate data are identical, which requires the objects to be drawn to differ only in position while all other data (such as texture data and scene matrices) are the same. In some scenarios, and in the smart TV field in particular, the regions of an interface displayed by a smart TV (such as a resource list interface) have identical attributes except for the pictures they display. Precisely because the pictures displayed in different regions differ, the OpenGL texture data of the different regions differ, and the above batching scheme cannot be adopted. That is, drawing the picture in each region requires its own CPU-to-GPU data transfer; in every frame of the interface, the picture in each display region needs one CPU-to-GPU transfer, resulting in low rendering efficiency.

For this reason, the embodiments of the present application combine multiple pictures into one picture, i.e., combine the texture data of multiple pictures into the texture data of one picture. The texture data of multiple pictures can thus be consolidated into the texture data of a single picture, satisfying the batching requirement, so drawing can use the batching technique and rendering efficiency is improved. The texture data may be 2D texture data or texture data of other dimensions; the embodiments of the present application place no limitation on this.

The embodiments of the present application are described in detail below with reference to the accompanying drawings.

FIG. 1 shows an exemplary system architecture to which the embodiments of the present application apply. As shown, the architecture may include a television device 101 and a server 103, which can exchange information over a network 104.

The television device 101 may be a smart TV with digital signal processing capability. The user can control the television device 101 through a remote control, or directly through function keys provided on the television device 101. The server 103 stores a resource list (such as an electronic program guide) and can provide the resource list to the television device 101, so that the television device 101 draws an interface according to the resource list, from which the user selects a program to watch.

Considering that some television devices lack digital signal codec capability, a set-top box (STB, a device that connects a television set to an external signal source) is needed to receive digital content (which may include, for example, an electronic program guide, web pages, and subtitles), decode it, and transmit it to the television device for interface drawing and display; the set-top box can also send the user's interaction information to the network side to enable interactive services. FIG. 2 shows an exemplary system architecture, including a set-top box, to which the embodiments of the present application apply. In this architecture, the television device 101' is connected to the set-top box 102, and the set-top box 102 is connected to the server 103 through the network 104. The set-top box may also convert a received digital video signal into an analog signal and transmit it to the television device for playback.

It should be noted that FIG. 1 and FIG. 2 are described using a television device as an example; the television device may be replaced with any other electronic device having similar functionality.
Taking the system architecture of FIG. 1 as an example, in one application scenario a user requests a resource list, such as a movie list, through the remote control. Upon receiving the request, the television device 101 obtains the movie list from the server 103 over the network 104 (the list includes content such as movie pictures) and draws an interface according to the obtained list, the drawn interface containing the movie pictures. The television device 101 displays the drawn interface on its screen for the user to select a movie to play.

At present, in the resource list interfaces displayed by television devices, the pictures corresponding to the resources are arranged quite regularly: the sizes are largely uniform, the layout is tidy, and the categorization is clear. FIGS. 3A to 3G illustrate several interface layouts currently used by smart TVs. Taking the interface of FIG. 3A as an example, it contains nine object display regions (object display region 1 to object display region 9); the display objects in object display regions 1 and 2 have the same size, and object display regions 4 to 9 each display a picture (picture 1 to picture 7) of identical size. The correspondence between pictures 1 to 7 and the object display regions can be set in advance. These layouts essentially cover the display styles of current smart TV interfaces, and the embodiments of the present application are applicable to the drawing of all of them.

FIG. 4 schematically shows the structure of a television device as it relates to the embodiments of the present application. As noted above, the television device may be replaced with another electronic device of similar functionality; the description below uses a television device only as an example. The television device of FIG. 4 includes at least two processors, shown as a first processor 401 and a second processor 402; it may further include components such as a display apparatus. The first processor 401 is mainly responsible for logic control: it can send interface drawing data to the second processor 402 and control the second processor to perform interface drawing. The second processor performs the interface drawing, and the drawn interface can be output to the display apparatus for display.

Specifically, the first processor 401 obtains N first pictures (N being an integer greater than or equal to 1) sent by the server for interface display, combines the N first pictures into a second picture, and sends the texture data of the second picture to the second processor 402. The second processor 402 obtains, according to a third picture to be drawn, the texture data of the third picture from the texture data of the second picture and draws the third picture according to the obtained texture data, the third picture being at least one of the N first pictures. The terms "first picture", "second picture", and "third picture" are used only for ease of distinction. For example, a "first picture" may be a picture the television device receives from the server, and the picture obtained by combining multiple first pictures is called the second picture. There are usually multiple first pictures, and their sizes may differ.

The first processor may be a CPU and the second processor a GPU. With the CPU as first processor and the GPU as second processor, the CPU runs the OpenGL main program to control the interface drawing process, while the GPU runs the shader programs that carry out the interface drawing.

Based on the principle of FIG. 4, FIG. 5 shows an exemplary structure of an electronic device (such as a television device). The electronic device may include at least two processors; this example uses two (5021, 5022), where the first processor 5021 may be a CPU and the second processor 5022 may be a GPU. System control logic 501 is coupled to at least one of the processors (5021, 5022); non-volatile memory (NVM)/storage 504 is coupled to the system control logic 501, and a network module or network interface 506 is coupled to the system control logic 501.

In one embodiment, the system control logic 501 may include any appropriate interface controller to provide any suitable interface to at least one of the first processor 5021 and the second processor 5022, and/or to any suitable device or component in communication with the system control logic 501. In one embodiment, the system control logic 501 may include one or more memory controllers to provide an interface to system memory 503. The system memory 503 is used to load and store data and/or instructions; for the electronic device, in one embodiment, the system memory 503 may include any suitable volatile memory.

The NVM/storage 504 may include one or more tangible, non-transitory computer-readable media for storing data and/or instructions. For example, the NVM/storage 504 may include any suitable non-volatile memory, such as one or more hard disk drives (HDDs), one or more compact discs (CDs), and/or one or more digital versatile discs (DVDs). The NVM/storage 504 may include storage resources that are physically part of a device on which the system is installed or that can be accessed, though not necessarily part of the device; for example, the NVM/storage 504 may be accessed over the network via the network module or network interface 506.
The system memory 503 and the NVM/storage 504 may each include temporary or persistent copies of instructions 510. The instructions 510 may include instructions that, when executed by at least one of the first processor 5021 and the second processor 5022, cause the electronic device to implement one of, or a combination of, the methods described in the embodiments of the present application. In various embodiments, the instructions 510, or hardware, firmware, and/or software components, may additionally or alternatively reside in the system control logic 501, in the network module or network interface 506, and/or in the processors (5021, 5022).

The network module or network interface 506 may include a receiver to provide the electronic device with a wireless interface for communicating with one or more networks and/or any suitable devices. The network module or network interface 506 may include any suitable hardware and/or firmware, and may include multiple antennas to provide a multiple-input multiple-output wireless interface. In one embodiment, the network module or network interface 506 may include a network interface card, a wireless network interface card, a telephone modem, and/or a wireless modem.

In one embodiment, at least one of the processors (5021, 5022) may be packaged together with logic of one or more controllers of the system control logic. In one embodiment, at least one of the processors may be packaged together with logic of one or more controllers of the system control logic to form a system in package. In one embodiment, at least one of the processors may be integrated on the same die with logic of one or more controllers of the system control logic. In one embodiment, at least one of the processors may be integrated on the same die with logic of one or more controllers of the system control logic to form a system on chip.

The electronic device may further include an input/output apparatus 505. The input/output apparatus 505 may include a user interface intended to let the user interact with the electronic device, may include a peripheral component interface designed to let peripheral components interact with the system, and/or may include sensors intended to determine environmental conditions and/or location information concerning the electronic device.

Based on the structure shown in FIG. 4 or FIG. 5, FIG. 6 shows an exemplary structure of a processing apparatus. The processing apparatus may be an application running on the first processor. As shown, the processing apparatus may include an obtaining unit 601, a data processing unit 602, and a sending unit 603. The obtaining unit 601 obtains N first pictures for interface display; the data processing unit 602 combines the N first pictures into a second picture; the sending unit 603 sends the texture data of the second picture to the second processor.

Based on the structure shown in FIG. 4 or FIG. 5, FIG. 7 shows an exemplary structure of another processing apparatus. This processing apparatus may be an application running on the second processor. As shown, it may include a receiving unit 701, a data processing unit 702, and a drawing unit 703. The receiving unit 701 receives a second picture, the second picture being obtained by combining N first pictures. The data processing unit 702 obtains, according to a third picture to be drawn, the texture data of the third picture from the texture data of the second picture. The drawing unit 703 draws the third picture according to the obtained texture data.

FIG. 8 shows an exemplary interface drawing flow provided by an embodiment of the present application. As shown, after the television device starts up, it can obtain pictures for interface display from a server on the network side, draw the interface, and display the drawn interface. The interface displayed after startup is generally called the desktop; it may include multiple object display regions, each of which can display one picture, and each region, when triggered, executes a corresponding function. For example, in the interface of FIG. 3A, when an object in an object display region (in this example a picture) is triggered (for instance, the user selects the picture in that region with the remote control), the movie corresponding to that object is played.

As shown in FIG. 8, the interface drawing process may include the following steps.

S801: the first processor obtains pictures for interface display; the obtained pictures may include N first pictures (N being an integer greater than or equal to 1).

In a startup scenario, in some embodiments based on the architecture of FIG. 1, the OpenGL main program running on the first processor establishes a connection with the server on the network side through the television device's network module and requests the server to send the pictures for interface display. In some embodiments based on the architecture of FIG. 2, the set-top box establishes the connection with the server on the network side and forwards the pictures the server sends for interface display to the first processor in the television device.

Normally, the interface displayed by a television device may comprise multiple pages (the user can turn pages with function keys such as "previous page" and "next page"). The interface may also comprise a single page whose length exceeds the height of the screen display area, shown by scrolling (for instance, the user can scroll the interface content back and forth with the remote control's up/down or left/right keys). Since an interface may contain a large number of pictures, sending them all at once would consume considerable network resources and affect picture transmission efficiency; likewise, drawing the interface would consume considerable system overhead and take a long time, degrading the user experience. In the embodiments of the present application, therefore, the server may first send only some of the pictures for interface display to the television device, and later, when the user requests another interface or another part of the interface, send the corresponding pictures according to the user's request. Specifically, the server may first send the pictures of the interface displayed by default after startup, or the first N pictures (N being an integer greater than or equal to 1) in that interface's display order. The value of N can be preset, generally according to the processing capability (such as drawing capability) of the television device.
After receiving the pictures sent by the server for interface display, the first processor may store each picture and/or the picture's index in the first processor's memory, where a picture's index uniquely identifies the picture.

S801 may be performed by the first processor, or by a processing apparatus (such as an application) running on the first processor; more specifically, by the obtaining unit 601 of FIG. 6.

S802: the first processor combines the N first pictures for interface display into a second picture.

In this step, the pictures may be combined by stitching, i.e., the N first pictures are stitched into one second picture. When stitching, the first processor may follow the principle of keeping the resulting second picture as small as possible.

Following this principle, in some embodiments the first processor may group the N pictures by picture size, the pictures in one group having the same size, and stitch the pictures of a group side by side horizontally, aligned on picture height. FIG. 9 illustrates stitching 17 pictures of unequal sizes into one picture. In a specific implementation, the size of the stitched picture can be computed from the widths and heights of the 17 pictures: first, the sum of the widths of the larger pictures is taken as the width of the stitched picture; the remaining pictures are then laid out row by row, and the height of the stitched picture is finally computed from the number of rows and the height of each row. As shown in FIG. 9, grouping by size yields three picture groups: pictures 1-4 form one group, pictures 5-8 another, and pictures 9-17 a third. The pictures of each group are laid out along the row direction (the s-axis direction in the figure, i.e., horizontally), and different groups occupy different rows. Stitching in this way keeps the size of the stitched picture as small as possible and thus saves memory. Following the same principle, in other embodiments the first processor may compute the stitching layout of the N first pictures with a more sophisticated stitching algorithm, so as to make the second picture as small as possible.

Further, as shown in FIG. 9, a gap of a set size may be left between pictures, for example a gap of one or two pixels, which eases clipping at picture boundaries.

Further, to facilitate subsequent picture drawing operations, the texture width (the s-axis direction in the figure) and height (the t-axis direction in the figure) coordinates of the assembled picture may be normalized. FIG. 10 shows the stitched picture of FIG. 9 after normalization: the stitched picture has length 1 along the s-axis and length 1 along the t-axis, and the widths and heights of pictures 1 to 17 are as indicated by the numbers in FIG. 10.

Further, after composing the second picture, the first processor may compute the position information of each first picture within the second picture and save that position information. The position information may be the first picture's coordinates in the second picture; for a rectangular picture, for example, the coordinates of its four vertices.

Taking 2D texture data as an example, 2D texture data is in effect a two-dimensional array whose elements are color values. The first processor can generate a first picture's 2D texture data from the color values of the pixels of the received first picture, and generate the second picture's 2D texture data from the stitched second picture. By combining multiple first pictures into one second picture, the texture data of the second picture contains the texture data of the multiple first pictures; the texture data of multiple first pictures is thus consolidated into the texture data of one second picture, so the drawing of those first pictures can be implemented with batching.

Further, interface layout information is pre-stored in the first processor's memory. The interface layout information defines the attributes of the interface (such as its size and aspect ratio) and the position of each object display region in the interface, and may also define the object to be displayed in each object display region. Taking the interface of FIG. 3A as an example, its layout information defines the position of each object display region (object display region 1 to object display region 9) within the interface and the index of the picture displayed in each region. From the layout information the first processor can derive the vertex data of each object display region in the interface (the object display regions in FIG. 3A are rectangles, each with four vertices).

Further, the texture coordinate data of each object display region in the interface is pre-stored in the first processor's memory. Texture coordinates are, simply put, the mapping of texture data onto the surface of the target primitive.

S802 may be performed by the first processor, or by a processing apparatus (such as an application) running on the first processor; more specifically, by the data processing unit 602 of FIG. 6.

S803: the first processor sends the texture data of the second picture to the second processor.

In this step, the first processor may merge the texture data of the second picture obtained in S802 with the other data used for interface drawing (such as the vertex data of each object display region, texture coordinate data, lighting data, and scene matrices) and transfer the merged data to the second processor according to the data transfer protocol between the two processors. Data such as lighting data and scene matrices can be preset in the first processor's memory.

Optionally, the data the first processor sends to the second processor may further include the position information of each first picture's texture data within the texture data of the second picture.

S803 may be performed by the first processor, or by a processing apparatus (such as an application) running on the first processor; more specifically, by the sending unit 603 of FIG. 6.
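The grouping, row stitching, inter-picture gap, and normalization described for S802 above can be sketched as follows. This is a simplified model with hypothetical names; the patent leaves the concrete stitching algorithm open, so this is one possible layout, not the claimed one:

```python
# Sketch of the FIG. 9 style layout: group pictures by identical size, give each
# group one horizontal row, leave an optional pixel gap between pictures, then
# normalize a placement to [0, 1] s/t texture coordinates as in FIG. 10.

def layout(sizes, gap=1):
    """sizes: list of (w, h) per picture. Returns (atlas_w, atlas_h, rects),
    where rects[i] = (x, y, w, h) is picture i's placement in pixels."""
    groups = {}                              # group picture indexes by size
    for i, wh in enumerate(sizes):
        groups.setdefault(wh, []).append(i)
    rects = [None] * len(sizes)
    atlas_w, y = 0, 0
    for (w, h), members in groups.items():   # one horizontal row per group
        x = 0
        for i in members:
            rects[i] = (x, y, w, h)
            x += w + gap
        atlas_w = max(atlas_w, x - gap)      # drop the trailing gap
        y += h + gap
    return atlas_w, y - gap, rects

def normalize(atlas_w, atlas_h, rect):
    """Map a pixel rectangle to normalized (s0, t0, s1, t1) coordinates."""
    x, y, w, h = rect
    return (x / atlas_w, y / atlas_h, (x + w) / atlas_w, (y + h) / atlas_h)

sizes = [(4, 2), (4, 2), (2, 1)]             # two groups: 4x2 pictures and 2x1
aw, ah, rects = layout(sizes, gap=0)
assert (aw, ah) == (8, 3)
assert normalize(aw, ah, rects[0]) == (0.0, 0.0, 0.5, 2 / 3)
```

The normalized rectangles play the role of the saved position information: they are exactly what a shader needs to sample one first picture out of the second picture's texture.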
S804: after receiving the second picture's 2D texture data and the other interface drawing data from the first processor, the second processor obtains, according to the third picture to be drawn, the texture data of the third picture from the texture data of the second picture, and draws the third picture according to the obtained texture data, the third picture being at least one of the N first pictures. The second processor may obtain the third picture's texture data from the second picture's texture data according to the position information of the third picture's texture data within the second picture's texture data.

Optionally, in a specific implementation, after transferring the second picture's texture data and the other interface drawing data to the second processor, the first processor may send an interface drawing instruction to the second processor. The drawing instruction can indicate to the second processor the interface to be displayed, or the third picture to be drawn in that interface, and may include indication information of the third picture to be drawn; from this indication information the second processor determines the picture that needs to be drawn in each region. The indication information of the third picture may be the index of the picture to be drawn; in this case, the second processor looks up, by the third picture's index, the correspondence between picture indexes and the positions of first pictures within the second picture, obtaining the picture position information corresponding to the index of the third picture to be drawn. The indication information may instead be the index of the object display region corresponding to the third picture; in this case, the second processor looks up, by the region index, the correspondence between object display regions and first pictures (which can be preset according to the interface layout), obtaining the index of the third picture corresponding to that region index; it then looks up, by the third picture's index, the correspondence between first-picture indexes and positions within the second picture (determined by the first processor in S802), obtaining the position information corresponding to the index of the third picture to be drawn, and fetches the third picture's texture data from the corresponding location within the second picture's texture data. The indication information may also be the picture's position information within the second picture itself; in this case, the second processor fetches the third picture's texture data from the second picture's texture data directly according to that position information.

Further, the second processor may draw the third picture in the object display region to be drawn according to the third picture's texture data, in combination with other data (such as the vertex data and texture coordinate data of the region, as well as lighting data, scene matrices, and the like).

Taking the interface of FIG. 3A as an example, pictures 1 to 9 can be stitched into one picture and the stitched picture's 2D texture data sent to the second processor, so that the second processor can draw pictures 1 to 9 in their respective object display regions in a single drawing pass based on that 2D texture data, i.e., achieve batching.

S804 may be performed by the second processor, or by a processing apparatus (such as an application) running on the second processor; more specifically, by the receiving unit 701, the data processing unit 702, and the drawing unit 703 of FIG. 7, respectively.

Through the above flow, when the television device starts up, it can obtain the pictures for interface drawing from the server, draw the interface, and display the drawn interface on the television device's screen. Subsequently, if the user issues a request to page up or down, to scroll the interface, or to open another interface, the television device can respond to the request and update the currently displayed content.

For example, taking a page-down request: after receiving it, the first processor determines the first pictures contained in the target page and checks whether the texture data of the picture combined from those pictures has already been sent to the second processor. If it has, the first processor sends the second processor a drawing instruction instructing it to draw those pictures; otherwise, it sends the server a picture request to obtain the pictures, combines the obtained first pictures into a second picture in the manner of the foregoing embodiments, sends the second picture's texture data to the second processor, and sends the second processor a drawing instruction instructing it to draw the target page.

As another example, taking a request for a new interface: with the movie list interface currently displayed, after receiving a request to switch to a TV series list interface, the first processor determines whether the pictures for displaying the requested interface have already been obtained from the server. If not, it sends the server a picture request for the pictures of the requested interface; otherwise, it sends the second processor a drawing instruction instructing it to draw the requested interface.

In some cases, the server on the network side updates a picture used for interface drawing, for example the first movie picture in the movie list interface; in such cases the server sends the updated first picture to the television device. For convenience of description, "fourth picture" is used below for the updated version of one or more of the N first pictures. The first processor in the television device receives the fourth picture sent by the server, obtains the second picture from before the update, updates the second picture according to the fourth picture, and sends the updated second picture's texture data to the second processor. The first processor may further send the second processor a drawing instruction that includes the fourth picture's indication information, instructing it to update the corresponding picture. According to the indication information of the fourth picture in the drawing instruction, the second processor obtains the fourth picture's texture data from the updated second picture's texture data and draws the fourth picture according to that texture data.
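The three forms of indication information described in S804 above can be sketched as a small lookup. The table contents and names below are made-up examples, not data from the patent:

```python
# Sketch of resolving a drawing instruction's indication information into a
# position inside the second picture. The three cases mirror the text: a picture
# index, an object-display-region index (two-step lookup), or an explicit
# position passed through unchanged.

pos_by_picture = {"pic1": (0.0, 0.0, 0.25, 0.5)}   # picture index -> position
picture_by_region = {"region4": "pic1"}            # region index  -> picture index

def resolve(indication):
    kind, value = indication
    if kind == "picture_index":
        return pos_by_picture[value]
    if kind == "region_index":                     # region -> picture -> position
        return pos_by_picture[picture_by_region[value]]
    if kind == "position":                         # already a position
        return value
    raise ValueError(kind)

assert resolve(("picture_index", "pic1")) == (0.0, 0.0, 0.25, 0.5)
```

All three paths converge on the same position information, from which the second processor fetches the third picture's texture data out of the second picture's texture data.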
Based on the same technical concept, embodiments of the present application further provide one or more computer-readable media storing instructions that, when executed by one or more processors, cause an electronic device to perform the rendering methods of the foregoing embodiments; for example, the method performed by the first processor, the method performed by the second processor, or the methods performed by both the first processor and the second processor.

As can be seen from the above description, in the above embodiments of the present application, on the one hand, the first processor combines the N first pictures for interface display into one second picture and sends the second picture's texture data to the second processor, so that the texture data of multiple first pictures is consolidated into the texture data of a single picture; this satisfies the requirement of the batching scheme, so batching can be adopted and the texture data associated with multiple pictures is sent to the second processor in one pass for picture drawing. On the other hand, the second processor can obtain, according to the picture to be drawn, that picture's texture data from within the second picture's texture data and draw the picture according to the obtained texture data, achieving batched drawing of pictures and thus improving rendering efficiency.
The disclosed embodiments can also be implemented as instructions carried or stored by one or more transitory or non-transitory machine readable (eg, computer readable) storage media, which can be read by one or more processors And execution. A machine-readable storage medium may be embodied as any storage device, mechanism, or other physical structure for storing or transmitting information in a machine readable form (eg, volatile or non-volatile memory, media disk, or other media). . In the figures, some structural or method features may be shown in a particular arrangement and/or order. However, it should be understood that such specific arrangements and/or ordering may not be required. Rather, in some embodiments, these features may be arranged in a different and/or sequential manner than illustrated in the illustrative figures. In addition, the inclusion of structural or method features in a particular figure is not meant to imply that such a feature is required in all embodiments, and may not include or be combined with other features in some embodiments. As described in the background art, a batch processing scheme is adopted when object drawing is performed to improve drawing efficiency. However, batch processing needs to meet certain conditions, that is, the drawing process of multiple objects can be combined except that the vertex data and the texture coordinate data are the same, which requires that the objects to be drawn be different in position. Other materials (such as texture data, scene matrices, etc.) must be the same. In some scenarios, especially in the field of smart TV, in the interface displayed by smart TV (such as the resource list interface), other attributes of different areas are the same except for the pictures displayed in different areas. However, because the images displayed in different regions are different, the OpenGL texture data of different regions is different, and the above batch processing scheme cannot be adopted. 
That is to say, the drawing process of each region requires a data from the CPU to the GPU. In the transmission, each picture in the display area in each frame of the interface needs to perform data transmission from the CPU to the GPU, resulting in low rendering efficiency. To this end, the embodiment of the present application combines multiple pictures into one picture, that is, combines texture data of multiple pictures into texture data of one picture, so that the texture data can be integrated into one picture for multiple pictures. The texture data is such that it meets the batch processing requirements, so that it can be drawn by batch processing technology, thereby improving the drawing efficiency. The texture data may be a 2D texture data, or may be a texture data of other dimensions, which is not limited in the embodiment of the present application. The embodiments of the present application are described in detail below with reference to the accompanying drawings. FIG. 1 exemplarily shows a system architecture to which the embodiment of the present application is applied. As shown in the figure, the architecture may include: a television device 101 and a server 103. The television device 101 and the server 103 can perform information interaction via the network 104. The television device 101 can be a smart television with digital signal processing functionality. The user can control the television device 101 through the remote controller, and the user also directly controls the television device through the function keys provided by the television device 101. A list of resources (such as an electronic program guide) is stored in the server 103, and the resource list can be provided to the television device 101 to cause the television device 101 to draw an interface according to the resource list for the user to select a program to be viewed according to the interface. 
Considering that some TV devices do not have digital signal encoding and decoding functions, they need to receive digital digits through a digital video conversion box (STB, commonly referred to as a set-top box or set-top box, which is a device that connects a TV to an external signal source). The content (for example, may include an electronic program guide, an internet webpage, a subtitle, etc.), is decoded and transmitted to a television device for interface drawing and display, and can also send interactive information of the user to the network side to realize an interactive service. FIG. 2 exemplarily shows a system architecture including a digital video conversion box to which the embodiment of the present application is applied. In this architecture, the television device 101' is coupled to the digital video conversion box 102, and the digital video conversion box 102 is coupled to the server 103 via the network 104. The digital video conversion box can also convert the received video digital signal into an analog signal and transmit it to the television device for playback. It should be noted that the above FIG. 1 and FIG. 2 are described by taking a television device as an example, and the above television device can be replaced with any other electronic device having similar functions. Taking the system architecture shown in FIG. 1 as an example, in an application scenario, a user requests a list of resources, such as a movie list, through a remote controller. After receiving the request, the television device 101 acquires a movie list from the server 103 via the network 104 (the movie list includes content such as a movie picture), and performs interface drawing according to the obtained movie list, and the drawn interface includes a movie picture. . The television device 101 displays the drawn interface containing the movie picture on the screen of the television device 101 for the user to select the movie to be played. 
At present, in the resource list interface displayed by the television device, the pictures corresponding to the resources are arranged in a regular manner, for example, the sizes are basically the same, the arrangement is relatively neat, and the classification is relatively clear. 3A to 3G exemplarily show several interface forms adopted by current smart televisions. Taking the interface shown in FIG. 3A as an example, the interface includes nine object display areas (object display area 1 to object display area 9), and the object display area 1 and the object display area 2 have the same display object size, and the object display area 4 Pictures (picture 1 to picture 7) are respectively displayed in the object display area 9, and the pictures are the same size. The correspondence between the pictures 1 to 7 and the object display area can be set in advance. These types of interfaces basically cover the display style of the current smart TV interface. The embodiments of the present application are applicable to the drawing process of the above various forms of interfaces. FIG. 4 exemplarily shows a schematic structural diagram of a television device related to an embodiment of the present application. As mentioned previously, the television device can also be replaced with other electronic devices having similar functions. The following is only a description of a television device. The television device shown in FIG. 4 includes at least two processors, specifically a first processor 401 and a second processor 402 as shown in the figure. Further, the television device may further include a display device or the like. The first processor 401 is mainly used for logic control, and can send data for interface drawing to the second processor 402, and can control the second processor to perform interface drawing. The second processor can perform interface drawing. The drawn interface can be output to the display device for display. 
Specifically, the first processor 401 is configured to acquire N (N is an integer greater than or equal to 1) first picture sent by the server for interface display, and combine the N first pictures into a second picture, The texture data of the two pictures is sent to the second processor 402. The second processor 1002 is configured to obtain, according to the third picture to be drawn, the texture data of the third picture from the texture data of the second picture, and draw the third picture according to the acquired texture data, where The third picture is at least one of the N first pictures. Among them, the “first picture”, the “second picture”, and the “third picture” are named only for the convenience of distinction. For example, the “first picture” may be a picture received by the television device from the server, and the picture obtained by combining the plurality of first pictures is referred to as a second picture. The number of first pictures is usually multiple and the sizes may not be the same. The first processor may be a CPU, and the second processor may be a GPU. Taking the first processor as the CPU and the second processor as the GPU, the CPU runs the OpenGL main program to control the interface drawing process, and the GPU runs the shader program to implement the interface drawing process. Based on the principle shown in FIG. 4, FIG. 5 exemplarily shows the structure of an electronic device such as a television device. At least 2 processors may be included in the electronic device. In this example, two processors (5021, 5022) are taken as an example, wherein the first processor 5021 may be a CPU, and the second processor 5022 may be a GPU. The system control logic 501 is coupled to at least one processor (5021, 5022), the non-volatile memory (NMV)/memory 504 is coupled to the system control logic 501, and the network module or network interface 506 is coupled. In system control logic 501. 
System control logic 501 in one embodiment may include any suitable interface controller to provide any suitable interface to at least one of the first processor 5021 or the second processor 5022, and/or to any suitable device or component communicating with the system control logic 501. System control logic 501 in one embodiment may include one or more memory controllers to provide an interface to system memory 503. System memory 503 is used to load and store data and/or instructions. In one embodiment, system memory 503 can include any suitable volatile memory. NVM/memory 504 can include one or more tangible, non-transitory computer-readable media for storing data and/or instructions. For example, NVM/memory 504 can include any suitable non-volatile storage, such as one or more hard disk drives (HDDs), one or more compact discs (CDs), and/or one or more digital versatile discs (DVDs). The NVM/memory 504 can include a storage resource that is physically part of the device on which the system is installed, or that can be accessed by the device without necessarily being part of it; for example, NVM/memory 504 can be accessed over the network via the network module or network interface 506. System memory 503 and NVM/memory 504 can each include a copy of temporary or persistent instructions 510. The instructions 510 can include instructions that, when executed by at least one of the first processor 5021 or the second processor 5022, cause the electronic device to implement one or a combination of the methods described in the embodiments of the present application. In various embodiments, the instructions 510, or hardware, firmware, and/or software components thereof, may additionally or alternatively be placed in the system control logic 501, the network module or network interface 506, and/or the processors (5021, 5022). 
The network module or network interface 506 can include a receiver to provide a wireless interface for the electronic device to communicate with one or more networks and/or any suitable device, and can include any suitable hardware and/or firmware. The network module or network interface 506 can include multiple antennas to provide a multiple-input multiple-output wireless interface. In one embodiment, the network module or network interface 506 can include a network interface card, a wireless network interface card, a telephone modem, and/or a wireless modem. In one embodiment, at least one of the processors (5021, 5022) may be packaged together with logic of one or more controllers of the system control logic; in one embodiment, such packaging forms a system in package. In one embodiment, at least one of the processors can be integrated on the same die as the logic of one or more controllers of the system control logic; in one embodiment, such integration forms a system on chip. The electronic device can further include an input/output device 505. The input/output device 505 can include a user interface intended to enable a user to interact with the electronic device, can include a peripheral component interface designed to enable peripheral components to interact with the system, and/or can include sensors for determining environmental conditions and/or location information about the electronic device. Based on the structure shown in FIG. 4 or FIG. 5, FIG. 6 exemplarily shows the structure of a processing apparatus. The processing apparatus can be an application applied to the first processor. As shown, the processing apparatus may include an obtaining unit 601, a data processing unit 602, and a sending unit 603. 
The obtaining unit 601 is configured to acquire the N first pictures for interface display, the data processing unit 602 is configured to combine the N first pictures into the second picture, and the sending unit 603 is configured to send the texture data of the second picture to the second processor. Based on the structure shown in FIG. 4 or FIG. 5, FIG. 7 exemplarily shows the structure of another processing apparatus. This processing apparatus can be an application applied to the second processor. As shown, it can include a receiving unit 701, a data processing unit 702, and a drawing unit 703. The receiving unit 701 is configured to receive the second picture, where the second picture is obtained by combining the N first pictures. The data processing unit 702 is configured to acquire, according to the third picture to be drawn, the texture data of the third picture from the texture data of the second picture. The drawing unit 703 is configured to draw the third picture according to the acquired texture data. FIG. 8 exemplarily shows an interface drawing process provided by an embodiment of the present application. As shown in the figure, after the TV device is started, the pictures for interface display can be obtained from the server on the network side, the interface is drawn, and the drawn interface is displayed. The interface displayed after the TV device is started is generally referred to as a desktop; it may include a plurality of object display areas, each object display area may display a picture, and each object display area may be triggered to perform a corresponding function. For example, taking the interface shown in FIG. 3A as an example, when an object in an object display area (a picture in this example) is triggered (for example, the user selects a picture in the display area by using a remote controller), the film corresponding to that object is played. As shown in FIG. 
8, the interface drawing process may include the following steps. S801: The first processor acquires pictures for interface display; the acquired pictures may include N first pictures (N being an integer greater than or equal to 1). In a scenario where the television device is started, in some embodiments based on the architecture shown in FIG. 1, the OpenGL main program running in the first processor establishes a connection with the network-side server through the network module of the television device and requests the server to send pictures for interface display. In some embodiments based on the architecture shown in FIG. 2, the digital video conversion box establishes the connection with the network-side server and forwards the pictures for interface display sent by the network-side server to the first processor in the television device. Usually, the interface displayed by the television device can include multiple pages (the user can turn pages through page-turning function keys such as "previous page" and "next page" in the interface). The interface displayed by the TV device can also contain only one page whose length exceeds the height of the screen display area, in which case it can be scrolled (for example, the user can scroll the interface content forward and backward via the "Up/Down" or "Left/Right" buttons of the remote control). Since a large number of pictures can be included in the interface, the process of sending the pictures by the network-side server consumes considerable network resources and affects picture transmission efficiency. In addition, the process of drawing the interface on the TV device also incurs a large system overhead and takes a long time, affecting the user experience. 
Therefore, in the embodiment of the present application, the server may first send a part of the pictures for interface display to the television device, and then, when the user requests other interfaces or other parts of the current interface, send the corresponding pictures to the television device according to the user's request. The server may first send the pictures in the interface displayed by default after the television device is started, or N (N being an integer greater than or equal to 1) pictures in that interface, to the television device. The value of N can be preset, and can generally be set according to the processing capability of the television device (such as its drawing capability). After receiving the pictures sent by the server for interface display, the first processor may store the pictures and/or the indexes of the pictures in the memory of the first processor, where the index of a picture is used to uniquely identify that picture. The above S801 may be executed by the first processor or by a processing apparatus (such as an application) applied to the first processor; more specifically, it can be performed by the obtaining unit 601 in FIG. 6. S802: The first processor combines the N first pictures used for interface display into the second picture. In this step, a picture combination algorithm may be used, that is, the N first pictures are spliced into one second picture. When splicing the N first pictures into one second picture, the first processor may follow the principle that the size of the second picture obtained by the splicing is as small as possible. Based on this principle, in some embodiments, the first processor may group the N pictures according to picture size, with pictures in the same group having the same size; the first processor then splices the pictures of each group in sequence in the horizontal direction, a group of pictures being aligned in picture height. FIG. 
9 exemplarily shows a schematic diagram of splicing 17 pictures of different sizes into one picture. In a specific implementation, the size of the spliced picture can be calculated according to the widths and heights of the 17 pictures: the sum of the widths of the larger pictures is first used as the width of the spliced picture, the other pictures are then arranged in sequence line by line, and finally the height of the spliced picture is calculated from the number of lines and the height of each line of pictures. As shown in FIG. 9, grouping according to picture size yields three picture groups: pictures 1 to 4 form one group, pictures 5 to 8 form one group, and pictures 9 to 17 form one group. Each group of pictures is arranged in the row direction (that is, the s-axis direction or horizontal direction in the figure), and different groups of pictures occupy different lines. This splicing minimizes the size of the spliced picture, thereby saving memory. Based on the foregoing principle, in other embodiments, the first processor may instead calculate the splicing arrangement of the N first pictures with a more sophisticated packing algorithm, so that the size of the second picture is as small as possible. Further, as shown in FIG. 9, an interval of a set size may be reserved between the pictures, for example a gap of 1 to 2 pixels, which facilitates clean clipping at picture boundaries. Further, to facilitate the subsequent picture drawing operation, the texture width (the s-axis direction in the figure) and height (the t-axis direction in the figure) of the assembled picture may be normalized. After the spliced picture shown in FIG. 9 is normalized, it can be as shown in FIG. 10: the spliced picture has a length of 1 in the s-axis direction and a length of 1 in the t-axis direction, and the widths and heights of pictures 1 to 17 can be as shown by the digits in FIG. 10. 
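The grouping-and-row-splicing and the subsequent normalization described above can be sketched in a few lines of Python. This is a simplified illustration under assumed conventions (pictures given as an index-to-size dictionary, a `gap` parameter for the reserved interval; the function names are not from the specification), and the embodiments also allow more sophisticated packing algorithms.

```python
from collections import defaultdict

def pack_by_size(pictures, gap=1):
    """pictures: {index: (w, h)} in pixels. Pictures of identical size are
    grouped, each group is spliced left-to-right on its own line(s), and
    different groups occupy different lines, as in the FIG. 9 arrangement.
    Returns (placements, atlas_w, atlas_h), placements = {index: (x, y, w, h)}."""
    groups = defaultdict(list)
    for index, (w, h) in pictures.items():
        groups[(w, h)].append(index)
    # Width of the spliced picture: the widest single-group row.
    atlas_w = max((w + gap) * len(idxs) for (w, h), idxs in groups.items())
    placements, y = {}, 0
    for (w, h), idxs in sorted(groups.items(), key=lambda g: -g[0][1]):
        x = 0
        for index in idxs:
            if x + w > atlas_w:          # wrap within the group to a new line
                x, y = 0, y + h + gap
            placements[index] = (x, y, w, h)
            x += w + gap
        y += h + gap                     # the next group starts on a fresh line
    return placements, atlas_w, y

def normalize(placements, atlas_w, atlas_h):
    """Scale pixel positions to the [0, 1] s/t texture axes, as in FIG. 10."""
    return {i: (x / atlas_w, y / atlas_h, w / atlas_w, h / atlas_h)
            for i, (x, y, w, h) in placements.items()}
```

The normalized entries are exactly the kind of per-picture location information that, in the embodiments, accompanies the texture data of the second picture.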
Further, after combining the pictures into the second picture, the first processor may calculate location information of each first picture within the second picture, and save the location information. The location information may be the coordinates of the first picture in the second picture; taking a rectangular picture as an example, it can be the coordinates of the four vertices of the picture. Taking 2D texture data as an example, 2D texture data is in fact a two-dimensional array whose elements are color values. The first processor may generate the 2D texture data of a first picture according to the color value of each pixel in the received first picture, and generate the 2D texture data of the second picture according to the spliced second picture. By combining the plurality of first pictures into one second picture, the texture data of the plurality of first pictures is contained in the texture data of the second picture; since the texture data of the plurality of first pictures is thus integrated into the texture data of one picture, batch processing can be used when the plurality of first pictures are drawn. Further, the memory of the first processor stores interface layout related information in advance. The interface layout related information defines the attributes of the interface (such as its size and aspect ratio) and the position of each object display area in the interface, and may also define, for each object display area in the interface, the object that needs to be displayed in that area. For example, taking the interface shown in FIG. 3A as an example, the interface layout related information of the interface defines the position, within the interface, of each object display area (object display area 1 to object display area 9), and the index of the picture displayed by each object display area. 
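Treating 2D texture data as a two-dimensional array of color values, as described above, the step of embedding each first picture's texture data into the second picture's texture data can be sketched as follows. This is an illustrative sketch only; `compose_atlas` and the nested-list pixel representation are assumptions of the example, not the specification.

```python
def compose_atlas(pictures, placements, atlas_w, atlas_h, background=0):
    """pictures: {index: 2D list of color values (rows of pixels)}.
    placements: {index: (x, y, w, h)} from the packing step.
    Returns the second picture's texture data: one 2D array that embeds
    the texture data of every first picture at its assigned position."""
    atlas = [[background] * atlas_w for _ in range(atlas_h)]
    for index, (x, y, w, h) in placements.items():
        pixels = pictures[index]
        for row in range(h):
            atlas[y + row][x:x + w] = pixels[row][:w]
        # Reserved gap pixels around each picture keep the background value,
        # so neighbouring pictures do not bleed into each other when sampled.
    return atlas
```

A single upload of this one array to the GPU then carries the texture data of all N first pictures at once, which is the precondition for the batch drawing described below.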
The first processor may obtain the vertex data of each object display area in the interface according to the interface layout related information (each object display area in FIG. 3A is a rectangle, including 4 vertices). Further, the texture coordinate data of each object display area in the interface is stored in advance in the memory of the first processor; texture coordinates simply describe how texture data is mapped onto the surface of the target primitive. The above S802 may be executed by the first processor or by a processing apparatus (such as an application) applied to the first processor; more specifically, it can be performed by the data processing unit 602 in FIG. 6. S803: The first processor sends the texture data of the second picture to the second processor. In this step, the first processor may merge the texture data of the second picture processed in S802 with the other data used for interface drawing (such as the vertex data of each object display area, texture coordinate data, illumination data, and scene matrix), and transmit the merged data to the second processor according to the data transfer protocol between the two processors. Data such as the illumination data and the scene matrix may be preset in the memory of the first processor. Optionally, the data sent by the first processor to the second processor may further include the location information of the texture data of each first picture within the texture data of the second picture. The above S803 may be performed by the first processor or by a processing apparatus (such as an application) applied to the first processor; more specifically, it can be performed by the sending unit 603 in FIG. 6. S804: After receiving the data for interface drawing, such as the 2D texture data of the second picture, the second processor, according to the third picture to be drawn, acquires from the texture data of the second picture the to-be-drawn 
texture data of the third picture, and draws the third picture according to the acquired texture data. The third picture is at least one of the N first pictures. The second processor may acquire the texture data of the third picture from the texture data of the second picture according to the location information of the texture data of the third picture within the texture data of the second picture. Optionally, in a specific implementation, after the first processor transmits the texture data of the second picture to the second processor, it may send an interface drawing instruction to the second processor; the third picture to be drawn in the interface to be displayed is indicated to the second processor by the drawing instruction. The drawing instruction may include indication information of the third picture to be drawn in the interface to be displayed, and the second processor may determine the picture to be drawn according to this indication information. The indication information of the third picture to be drawn may be the index of the picture to be drawn. In this case, the second processor may query, according to the index of the third picture, the correspondence between picture indexes and the location information of the first pictures in the second picture, to obtain the picture location information corresponding to the index of the third picture to be drawn. The indication information of the third picture may also be the index of the to-be-drawn object display area corresponding to the third picture. In this case, the second processor may query, according to the index of the object display area, the correspondence between object display areas and first pictures (this correspondence may be preset according to the interface layout), to obtain the index of the third picture corresponding to the index of the to-be-drawn object display area, and then query, according to the index of the third picture, the correspondence between picture indexes and the location information of the first pictures in the second picture (this correspondence may be determined by the first processor in S802), to obtain the location information corresponding to the index of the third picture to be drawn, so that the texture data of the third picture is obtained from the corresponding position in the texture data of the second picture according to the location information. The indication information of the third picture to be drawn may also be the location information of the picture in the second picture; in this case, the second processor may directly obtain the texture data of the third picture from the texture data of the second picture according to the location information. Further, the second processor may draw the object in the corresponding display area according to the texture data of the third picture to be drawn, combined with other data (such as the vertex data of the to-be-drawn object display area, texture coordinate data, illumination data, and scene matrix). Taking the interface shown in FIG. 3A as an example, pictures 1 to 9 can be spliced into one picture, and the 2D texture data of the spliced picture is sent to the second processor, so that the second processor can, in one drawing pass, draw picture 1 to picture 9 in the corresponding object display areas based on the 2D texture data of the second picture; that is, batch processing is implemented. The above S804 may be executed by the second processor or by a processing apparatus (such as an application) applied to the second processor. 
More specifically, it can be performed by the receiving unit 701, the data processing unit 702, and the drawing unit 703 in FIG. 7, respectively. Through the above process, after the television device is started, the pictures for drawing the interface can be obtained from the server, the interface is drawn, and the drawn interface is displayed on the screen of the television device. Subsequently, if the user issues a request to page up or down, a request to scroll the interface, or a request for another interface, the television device can respond to the user's request and update the currently displayed content. For example, in the case that the user issues a page-down request, the first processor may, after receiving the request, determine the first pictures included in the target page, and determine whether the texture data of the picture obtained by combining these pictures has already been sent to the second processor. If it has been sent, the first processor sends a drawing instruction to the second processor instructing it to draw these pictures; otherwise, the first processor sends a picture acquisition request to the server to acquire these pictures, combines the acquired first pictures into a second picture, sends the texture data of the second picture to the second processor, and, as in the foregoing embodiment, sends a drawing instruction to the second processor to instruct it to draw the target page. As another example, in the case of requesting a new interface, where the current interface is a movie list interface and a request to switch to a TV drama list interface is received, the first processor may determine whether the pictures for displaying the requested interface have already been obtained from the server. 
If the determination is no, the first processor sends a picture acquisition request to the server to request the pictures in the requested interface; otherwise, it sends a drawing instruction to the second processor, the drawing instruction being used to instruct the second processor to draw the requested interface. In some cases, the server on the network side updates a picture used for interface drawing, for example updating the first movie picture in the movie list interface. In this case, the server sends the updated picture to the television device. For convenience of description, "fourth picture" is used to denote the updated picture corresponding to one or more of the N first pictures. The first processor in the television device receives the fourth picture sent by the server, obtains the second picture before the update, updates the second picture according to the fourth picture, and sends the texture data of the updated second picture to the second processor. The first processor may further send a drawing instruction to the second processor, the drawing instruction including indication information of the fourth picture, to instruct the second processor to update the corresponding picture. The second processor may obtain the texture data of the fourth picture from the texture data of the updated second picture according to the indication information of the fourth picture in the drawing instruction, and draw the fourth picture according to that texture data. Based on the same technical concept, the embodiments of the present application further provide one or more computer-readable media storing instructions which, when executed by one or more processors, cause an electronic device to perform the drawing method in the foregoing embodiments. 
For example, the electronic device may be caused to execute the method performed by the first processor in the foregoing embodiments, or the method performed by the second processor, or the methods performed by both the first processor and the second processor. As can be seen from the above description, in the foregoing embodiments of the present application, on the one hand, the first processor combines the N first pictures for interface display into a second picture and sends the texture data of the second picture to the second processor, so that the texture data of the plurality of first pictures is integrated into the texture data of one picture; this satisfies the requirements of the batch processing scheme, so batch processing can be adopted and the texture data of multiple pictures can be sent to the second processor at one time for picture drawing. On the other hand, the second processor may obtain the texture data of the picture to be drawn from the texture data of the second picture according to the picture to be drawn, and draw the picture to be drawn according to the obtained texture data, implementing batched picture drawing and improving drawing efficiency.
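The S804 lookup described in the foregoing embodiments — resolving a drawing instruction's indication information, possibly through the object-display-area correspondence, to a position in the second picture and then to the third picture's texture data — can be sketched as follows. The function name and the tuple-based indication format are illustrative assumptions, not part of the specification.

```python
def texture_for_draw(atlas, locations, area_to_picture, indication):
    """Resolve a drawing instruction's indication information to sub-texture data.
    indication is either ('picture', picture_index) or ('area', area_index);
    locations maps picture index -> (x, y, w, h) inside the second picture, and
    area_to_picture maps object-display-area index -> picture index (the
    correspondence preset according to the interface layout)."""
    kind, value = indication
    if kind == 'area':                   # two-step lookup via the layout table
        value = area_to_picture[value]
    x, y, w, h = locations[value]        # picture index -> position in atlas
    return [row[x:x + w] for row in atlas[y:y + h]]
```

Both indication forms end at the same place: a rectangle of the second picture's texture data, which the shader then maps onto the object display area being drawn.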

101‧‧‧TV equipment
103‧‧‧Server
104‧‧‧Network
102‧‧‧Digital video converter box
401‧‧‧First processor
402‧‧‧Second processor
5021‧‧‧First processor
5022‧‧‧Second processor
501‧‧‧System control logic
503‧‧‧System memory
504‧‧‧NVM/memory
505‧‧‧Input/output device
506‧‧‧Network module or network interface
510‧‧‧Instructions
601‧‧‧Obtaining unit
602‧‧‧Data processing unit
603‧‧‧Sending unit
701‧‧‧Receiving unit
702‧‧‧Data processing unit
703‧‧‧Drawing unit
S801~S804‧‧‧Process

The embodiments of the present application are illustrated by way of example, and not limitation, in the accompanying drawings, in which like reference numerals denote like elements. FIG. 1 and FIG. 2 respectively show schematic diagrams of system architectures to which embodiments of the present application are applicable; FIGS. 3A to 3G respectively show schematic diagrams of interfaces in the present application; FIG. 4 shows a schematic diagram of the principle and structure of a television device provided by an embodiment of the present application; FIG. 5 shows a schematic structural diagram of a television device according to an embodiment of the present application; FIG. 6 and FIG. 7 respectively show schematic structural diagrams of processing apparatuses provided by embodiments of the present application; FIG. 8 shows a schematic diagram of an interface drawing process provided by an embodiment of the present application; FIG. 9 shows a schematic diagram of the picture splicing principle in an embodiment of the present application; FIG. 10 shows a schematic diagram of normalizing the coordinates of the spliced picture in an embodiment of the present application.

Claims (49)

一種電子設備,其中,包括:   第一處理器,獲取用於介面展示的N個第一圖片,將所述N個第一圖片組合為第二圖片,將所述第二圖片的紋理資料發送給第二處理器,N為大於等於1的整數;   第二處理器,根據待繪製的第三圖片,從所述第二圖片的紋理資料中獲取所述第三圖片的紋理資料,根據獲取到的紋理資料繪製所述第三圖片,所述第三圖片為所述N個第一圖片中的至少一個圖片。An electronic device, comprising: a first processor, acquiring N first pictures for interface display, combining the N first pictures into a second picture, and sending texture information of the second picture to The second processor, N is an integer greater than or equal to 1; the second processor obtains the texture data of the third image from the texture data of the second image according to the third picture to be drawn, according to the obtained The texture data is used to draw the third picture, and the third picture is at least one of the N first pictures. 如申請專利範圍第1項所述的電子設備,其中,   第一處理器進一步確定所述N個第一圖片的紋理資料在所述第二圖片的紋理資料中的位置資訊,並將所述位置資訊發送給第二處理器;   所述第二處理器進一步根據所述第三圖片的紋理資料在所述第二圖片的紋理資料中的位置資訊,從所述第二圖片的紋理資料中獲取所述第三圖片的紋理資料。The electronic device of claim 1, wherein the first processor further determines location information of the texture data of the N first pictures in the texture data of the second picture, and the location Sending information to the second processor; the second processor further acquiring, according to location information in the texture data of the second image, the texture data of the second image from the texture data of the second image The texture data of the third picture. 如申請專利範圍第1項所述的電子設備,其中,所述第一處理器根據圖片尺寸對所述N個第一圖片進行分組,同一組中的圖片的尺寸相同;以及,將同一組圖片沿水平方向依次拼接,一組圖片在圖片高度上對齊。The electronic device of claim 1, wherein the first processor groups the N first pictures according to a picture size, and the pictures in the same group have the same size; and the same group of pictures Splicing in the horizontal direction, a group of pictures are aligned on the height of the picture. 如申請專利範圍第1項所述的電子設備,其中,所述第二圖片中,相鄰的第一圖片之間間隔有設定距離。The electronic device according to claim 1, wherein in the second picture, a distance between the adjacent first pictures is set. 
如申請專利範圍第1項所述的電子設備,其中,   所述第一處理器進一步向第二處理器發送第一繪製指令,所述第一繪製指令中包括所述第三圖片的指示資訊;   所述第二處理器根據所述第一繪製指令中的所述第三圖片的指示資訊,確定所述待繪製的第三圖片。The electronic device of claim 1, wherein the first processor further sends a first drawing instruction to the second processor, where the first drawing instruction includes indication information of the third picture; The second processor determines the third picture to be drawn according to the indication information of the third picture in the first drawing instruction. 如申請專利範圍第5項所述的電子設備,其中,所述第三圖片的指示資訊包括:第三圖片的索引,或者第三圖片對應的待繪製對象顯示區域的索引,或者第三圖片在第二圖片中的位置資訊。The electronic device of claim 5, wherein the indication information of the third picture comprises: an index of the third picture, or an index of the display area of the object to be drawn corresponding to the third picture, or the third picture is Location information in the second picture. 如申請專利範圍第6項所述的電子設備,其中,   所述指示資訊為第三圖片的索引時,所述第二處理器根據所述第三圖片的索引,查詢圖片索引與第一圖片在第二圖片中的位置資訊之間的對應關係,得到與所述第三圖片的索引對應的圖片位置資訊;或者,   所述第三圖片的指示資訊為第三圖片對應的待繪製對象顯示區域的索引時,所述第二處理器根據所述待繪製對象顯示區域的索引,查詢對象顯示區域與圖片的對應關係,得到與所述待繪製對象顯示區域的索引對應的第三圖片的索引,根據所述第三圖片的索引,查詢圖片索引與第一圖片在第二圖片中的位置資訊之間的對應關係,得到與所述第三圖片的索引對應的圖片位置資訊。The electronic device of claim 6, wherein, when the indication information is an index of the third picture, the second processor queries the picture index and the first picture according to the index of the third picture. 
Corresponding relationship between the location information in the second picture, the picture location information corresponding to the index of the third picture is obtained; or the indication information of the third picture is the display area of the object to be drawn corresponding to the third picture When indexing, the second processor queries the corresponding relationship between the object display area and the image according to the index of the display area of the object to be drawn, and obtains an index of the third picture corresponding to the index of the display area of the object to be drawn, according to An index of the third picture, a correspondence between the picture index and the location information of the first picture in the second picture, to obtain picture location information corresponding to the index of the third picture. 如申請專利範圍第1項所述的電子設備,其中,第一處理器進一步:   接收更新的第四圖片,根據所述第四圖片的紋理資料更新所述第二圖片的紋理資料,將更新後的第二圖片的紋理資料發送給第二處理器;其中,所述第四圖片為所述N個第一圖片中的至少一個圖片所對應的更新後的圖片。The electronic device of claim 1, wherein the first processor further: receiving the updated fourth picture, updating the texture data of the second picture according to the texture data of the fourth picture, and updating The texture data of the second picture is sent to the second processor, where the fourth picture is an updated picture corresponding to at least one of the N first pictures. 
如申請專利範圍第8項所述的電子設備,其中,所述第一處理器進一步向第二處理器發送第二繪製指令,所述第二繪製指令中包括所述第四圖片的指示資訊;   第二處理器進一步根據所述第二繪製指令中的所述第四圖片的指示資訊,從所述更新後的第二圖片的紋理資料中獲取所述第四圖片的紋理資料,根據獲取到的紋理資料繪製所述第四圖片。The electronic device of claim 8, wherein the first processor further sends a second drawing instruction to the second processor, where the second drawing instruction includes indication information of the fourth picture; The second processor further acquires the texture data of the fourth picture from the texture data of the updated second picture according to the indication information of the fourth picture in the second drawing instruction, according to the obtained The texture data draws the fourth picture. 如申請專利範圍第1項所述的電子設備,其中,所述第一處理器在所述電子設備啟動後或在接收到介面請求後,向伺服器發送圖片獲取請求;以及,接收所述伺服器根據所述獲取請求發送的用於介面展示的所述N個第一圖片。The electronic device of claim 1, wherein the first processor sends a picture acquisition request to the server after the electronic device is started or after receiving the interface request; and receiving the servo The N first pictures for interface display sent according to the obtaining request. 如申請專利範圍第10項所述的電子設備,其中, 所述第一處理器在接收到介面請求後,針對所請求的介面確定用於該介面展示的第一圖片是否已經從伺服器獲取得到;   若判定為否,則向伺服器發送所述圖片獲取請求,所述圖片獲取請求用於請求獲取所請求的介面中的第一圖片;否則,向第二處理器發送第三繪製指令,所述第三繪製指令用於指示第二處理器繪製所請求的介面。The electronic device of claim 10, wherein the first processor determines, after receiving the interface request, whether the first picture for the interface display has been obtained from the server for the requested interface. If the determination is no, the image acquisition request is sent to the server, where the image acquisition request is used to request to acquire the first picture in the requested interface; otherwise, the third drawing instruction is sent to the second processor. The third drawing instruction is used to instruct the second processor to draw the requested interface. 
The electronic device of any one of claims 1 to 11, wherein the first processor is a central processing unit and the second processor is a graphics processing unit.

The electronic device of any one of claims 1 to 11, wherein the electronic device is a television device.

A processing apparatus, comprising: an acquiring unit that acquires N first pictures for interface display, N being an integer greater than or equal to 1; a data processing unit that combines the N first pictures into a second picture; and a sending unit that sends the texture data of the second picture.

The apparatus of claim 14, wherein the data processing unit further determines location information of the texture data of the N first pictures within the texture data of the second picture, and the sending unit further sends the location information.

The apparatus of claim 14, wherein the data processing unit groups the N first pictures by picture size so that pictures in the same group have the same size, and stitches the pictures of each group side by side in the horizontal direction, the pictures of a group being aligned in height.

The apparatus of claim 14, wherein, in the second picture, adjacent first pictures are separated by a set distance.

The apparatus of claim 14, wherein the sending unit further sends a first drawing instruction, the first drawing instruction including indication information of a third picture.
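The combining scheme recited above (group the first pictures by size, stitch each group horizontally with height alignment, and leave a set distance between neighbours) can be sketched like this. It is a simplified layout computation only, assuming one row per size group; the function name and tuple layout are illustrative, not from the patent:

```python
def pack_atlas(pictures, gap=2):
    """pictures: list of (picture_id, width, height).
    Returns (atlas_width, atlas_height, positions) where positions maps
    picture_id -> (x, y, width, height) inside the combined second picture."""
    # Group by size so each row holds same-size pictures, aligned in height.
    groups = {}
    for pid, w, h in pictures:
        groups.setdefault((w, h), []).append(pid)

    positions = {}
    atlas_w, y = 0, 0
    for (w, h), pids in groups.items():
        x = 0
        for pid in pids:
            positions[pid] = (x, y, w, h)
            x += w + gap            # set distance between adjacent pictures
        atlas_w = max(atlas_w, x - gap)
        y += h + gap                # next row for the next size group
    return atlas_w, y - gap, positions
```

The gap keeps texels of neighbouring pictures from bleeding into each other when the combined texture is sampled with filtering.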
The apparatus of claim 18, wherein the indication information of the third picture comprises: an index of the third picture, an index of the display area of the object to be drawn that corresponds to the third picture, or location information of the third picture in the second picture.

The apparatus of claim 14, wherein the acquiring unit further receives an updated fourth picture, the fourth picture being an updated picture corresponding to at least one of the N first pictures; the data processing unit further updates the texture data of the second picture according to the texture data of the fourth picture; and the sending unit further sends the updated texture data of the second picture.

The apparatus of claim 20, wherein the sending unit further sends a second drawing instruction, the second drawing instruction including indication information of the fourth picture.

The apparatus of claim 14, wherein the acquiring unit sends a picture acquisition request to a server after the electronic device starts up or after an interface request is received, and receives the N first pictures for interface display that the server sends in response to the acquisition request.
The apparatus of claim 22, wherein the acquiring unit, after receiving an interface request, determines whether the first pictures for displaying the requested interface have already been obtained from the server; if not, it sends the picture acquisition request to the server, the picture acquisition request requesting the first pictures of the requested interface; otherwise, it sends, through the sending unit, a third drawing instruction instructing the second processor to draw the requested interface.

A processing apparatus, comprising: a receiving unit that receives a second picture, the second picture being obtained by combining N first pictures, N being an integer greater than or equal to 1; a data processing unit that obtains, according to a third picture to be drawn, the texture data of the third picture from the texture data of the second picture; and a drawing unit that draws the third picture according to the obtained texture data.

The apparatus of claim 24, wherein the receiving unit further receives location information of the texture data of the N first pictures within the texture data of the second picture, and the data processing unit obtains the texture data of the third picture from the texture data of the second picture according to the location information of the texture data of the third picture within the texture data of the second picture.
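Obtaining one picture's texture data from the combined picture by its location information, as recited above, amounts to slicing a sub-rectangle out of the atlas (or, on a GPU, mapping the location to normalized texture coordinates). A minimal sketch, with illustrative names and a plain 2D pixel list standing in for real texture data:

```python
def get_sub_texture(atlas_pixels, position):
    """atlas_pixels: 2D row-major list of pixel values of the second picture.
    position: (x, y, w, h) of the wanted picture inside it."""
    x, y, w, h = position
    return [row[x:x + w] for row in atlas_pixels[y:y + h]]


def sub_texture_uv(position, atlas_w, atlas_h):
    """Map the same pixel-space position to the normalized (u0, v0, u1, v1)
    rectangle a GPU would sample from the combined texture."""
    x, y, w, h = position
    return (x / atlas_w, y / atlas_h, (x + w) / atlas_w, (y + h) / atlas_h)
```

In practice the second variant is what a drawing unit would use: the atlas is uploaded once, and each third picture is drawn by sampling only its UV rectangle.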
The apparatus of claim 24, wherein the receiving unit further receives a first drawing instruction, the first drawing instruction including indication information of the third picture, and the data processing unit determines the third picture to be drawn according to the indication information of the third picture in the first drawing instruction.

The apparatus of claim 26, wherein the indication information of the third picture comprises: an index of the third picture, an index of the display area of the object to be drawn that corresponds to the third picture, or location information of the third picture in the second picture.

The apparatus of claim 27, wherein, when the indication information is the index of the third picture, the data processing unit queries, according to the index of the third picture, the correspondence between picture indexes and the location information of the first pictures in the second picture, to obtain the picture location information corresponding to the index of the third picture; or, when the indication information of the third picture is the index of the display area of the object to be drawn that corresponds to the third picture, the data processing unit queries, according to that index, the correspondence between object display areas and pictures to obtain the index of the third picture corresponding to the index of the display area, and then queries, according to the index of the third picture, the correspondence between picture indexes and the location information of the first pictures in the second picture, to obtain the picture location information corresponding to the index of the third picture.
The apparatus of claim 24, wherein the receiving unit further receives updated texture data of the second picture, the updated texture data of the second picture including the texture data of a fourth picture, the fourth picture being an updated picture of at least one of the N first pictures.

The apparatus of claim 29, wherein the receiving unit further receives a second drawing instruction, the second drawing instruction including indication information of the fourth picture; and the data processing unit obtains the texture data of the fourth picture from the updated texture data of the second picture according to the indication information of the fourth picture in the second drawing instruction, and draws the fourth picture according to the obtained texture data.
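The three forms of indication information and the two-step lookup recited in the claims (picture index, display-area index resolved to a picture index, or direct location information) can be sketched as a single resolver. The dictionary shapes and the `("picture" | "area" | "pos", value)` encoding are assumptions for the example only:

```python
def locate_picture(indication, index_to_pos, area_to_index):
    """Resolve a drawing instruction's indication info to picture location info.
    indication: ("picture", idx), ("area", area_idx), or ("pos", (x, y, w, h)).
    index_to_pos: picture index -> (x, y, w, h) in the second picture.
    area_to_index: display-area index -> picture index."""
    kind, value = indication
    if kind == "pos":                  # location information given directly
        return value
    if kind == "area":                 # display area -> picture index first
        value = area_to_index[value]
    return index_to_pos[value]         # picture index -> location information
```

Either correspondence table is built once when the combined picture is assembled, so each drawing instruction needs only dictionary lookups at render time.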
A drawing method, comprising: acquiring N first pictures for interface display, N being an integer greater than or equal to 1; combining the N first pictures into a second picture; and sending the texture data of the second picture.

The method of claim 31, further comprising: determining location information of the texture data of the N first pictures within the texture data of the second picture; and sending the location information.

The method of claim 31, wherein combining the N first pictures into the second picture comprises: grouping the N first pictures by picture size so that pictures in the same group have the same size; and stitching the pictures of each group side by side in the horizontal direction, the pictures of a group being aligned in height.

The method of claim 31, wherein, in the second picture, adjacent first pictures are separated by a set distance.

The method of claim 31, further comprising: sending a first drawing instruction, the first drawing instruction including indication information of a third picture.

The method of claim 35, wherein the indication information of the third picture comprises: an index of the third picture, an index of the display area of the object to be drawn that corresponds to the third picture, or location information of the third picture in the second picture.
The method of claim 31, further comprising: receiving an updated fourth picture, the fourth picture being an updated picture corresponding to at least one of the N first pictures; updating the texture data of the second picture according to the texture data of the fourth picture; and sending the updated texture data of the second picture.

The method of claim 37, further comprising: sending a second drawing instruction, the second drawing instruction including indication information of the fourth picture.

The method of claim 31, wherein acquiring the N first pictures for interface display comprises: sending a picture acquisition request to a server after the electronic device starts up or after an interface request is received; and receiving the N first pictures for interface display that the server sends in response to the acquisition request.

The method of claim 39, wherein sending the picture acquisition request to the server after the electronic device starts up or after an interface request is received comprises: after receiving an interface request, determining whether the first pictures for displaying the requested interface have already been obtained from the server; if not, sending the picture acquisition request to the server, the picture acquisition request requesting the first pictures of the requested interface; otherwise, sending a third drawing instruction, the third drawing instruction instructing a second processor to draw the requested interface.
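The update step recited above (replace one first picture's region of the combined picture with the updated fourth picture, then resend the atlas) can be sketched as an in-place sub-region overwrite, analogous to a partial texture upload on a GPU. Names and the 2D-list pixel representation are illustrative assumptions:

```python
def update_atlas(atlas_pixels, position, new_pixels):
    """Overwrite one first picture's region of the combined second picture
    with the pixels of its updated (fourth) picture, in place.
    position: (x, y, w, h) of that picture inside the atlas."""
    x, y, w, h = position
    for dy in range(h):
        atlas_pixels[y + dy][x:x + w] = new_pixels[dy][:w]
    return atlas_pixels
```

Because the updated picture keeps its slot and size, the location information sent earlier stays valid and the drawing side needs no new lookup tables.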
A drawing method, comprising: receiving a second picture, the second picture being obtained by combining N first pictures, N being an integer greater than or equal to 1; obtaining, according to a third picture to be drawn, the texture data of the third picture from the texture data of the second picture; and drawing the third picture according to the obtained texture data.

The method of claim 41, further comprising: receiving location information of the texture data of the N first pictures within the texture data of the second picture; wherein obtaining the texture data of the third picture from the texture data of the second picture comprises: obtaining the texture data of the third picture from the texture data of the second picture according to the location information of the texture data of the third picture within the texture data of the second picture.

The method of claim 41, further comprising: receiving a first drawing instruction, the first drawing instruction including indication information of the third picture; and determining the third picture to be drawn according to the indication information of the third picture in the first drawing instruction.
The method of claim 43, wherein the indication information of the third picture comprises: an index of the third picture, an index of the display area of the object to be drawn that corresponds to the third picture, or location information of the third picture in the second picture.

The method of claim 44, wherein determining the third picture to be drawn according to the indication information of the third picture in the first drawing instruction comprises: when the indication information is the index of the third picture, querying, according to the index of the third picture, the correspondence between picture indexes and the location information of the first pictures in the second picture, to obtain the picture location information corresponding to the index of the third picture; or, when the indication information of the third picture is the index of the display area of the object to be drawn that corresponds to the third picture, querying, according to that index, the correspondence between object display areas and pictures to obtain the index of the third picture corresponding to the index of the display area, and then querying, according to the index of the third picture, the correspondence between picture indexes and the location information of the first pictures in the second picture, to obtain the picture location information corresponding to the index of the third picture.
The method of claim 41, further comprising: receiving updated texture data of the second picture, the updated texture data of the second picture including the texture data of a fourth picture, the fourth picture being an updated picture of at least one of the N first pictures.

The method of claim 46, further comprising: receiving a second drawing instruction, the second drawing instruction including indication information of the fourth picture; obtaining the texture data of the fourth picture from the updated texture data of the second picture according to the indication information of the fourth picture in the second drawing instruction; and drawing the fourth picture according to the obtained texture data.

One or more computer-readable media storing instructions that, when executed by one or more processors, cause an electronic device to perform the method of any one of claims 31 to 40.

One or more computer-readable media storing instructions that, when executed by one or more processors, cause an electronic device to perform the method of any one of claims 41 to 47.
TW107107019A 2017-05-24 2018-03-02 Rendering method and device TW201901620A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710375345.9 2017-05-24
CN201710375345.9A CN108933955A (en) 2017-05-24 2017-05-24 Rendering method and device

Publications (1)

Publication Number Publication Date
TW201901620A true TW201901620A (en) 2019-01-01

Family

ID=64396165

Family Applications (1)

Application Number Title Priority Date Filing Date
TW107107019A TW201901620A (en) 2017-05-24 2018-03-02 Rendering method and device

Country Status (3)

Country Link
CN (1) CN108933955A (en)
TW (1) TW201901620A (en)
WO (1) WO2018214768A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109636724A (en) * 2018-12-11 2019-04-16 北京微播视界科技有限公司 List interface display method, device, equipment and storage medium
CN112581557A (en) * 2019-09-30 2021-03-30 Oppo广东移动通信有限公司 Layer drawing method and electronic equipment

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8380005B1 (en) * 2009-02-02 2013-02-19 Adobe Systems Incorporated System and method for image composition using non-destructive editing model and fast gradient solver
US8773448B2 (en) * 2010-04-09 2014-07-08 Intel Corporation List texture
CN102855132B (en) * 2011-06-30 2016-01-20 大族激光科技产业集团股份有限公司 Graphical object selection method and system
CN102332151B (en) * 2011-09-13 2015-01-07 深圳Tcl新技术有限公司 Processing method and system for numbers of pictures
US20130106887A1 (en) * 2011-10-31 2013-05-02 Christopher Tremblay Texture generation using a transformation matrix
CN102999946B (en) * 2012-09-17 2016-08-03 Tcl集团股份有限公司 3D graphics data processing method, device and equipment
CN104461480B (en) * 2013-09-16 2018-04-27 联想(北京)有限公司 A kind of information processing method and electronic equipment
CN103677828B (en) * 2013-12-10 2017-02-22 华为技术有限公司 Coverage drawing method, drawing engine and terminal equipment
CN106296622B (en) * 2015-05-27 2020-04-28 阿里巴巴集团控股有限公司 Automatic layout jigsaw method and device
CN106548501B (en) * 2015-09-21 2019-12-24 阿里巴巴集团控股有限公司 Image drawing method and device
CN105426191B (en) * 2015-11-23 2019-01-18 深圳创维-Rgb电子有限公司 user interface display processing method and device
CN105719331A (en) * 2016-01-15 2016-06-29 网易(杭州)网络有限公司 Sprite drawing method, device and game system

Also Published As

Publication number Publication date
CN108933955A (en) 2018-12-04
WO2018214768A1 (en) 2018-11-29

Similar Documents

Publication Publication Date Title
US11175818B2 (en) Method and apparatus for controlling display of video content
WO2022110903A1 (en) Method and system for rendering panoramic video
KR102366752B1 (en) Reducing latency in map interfaces
US9426476B2 (en) Video stream
CN107103890B (en) The method and apparatus of application is shown on fixed-direction display
US10528998B2 (en) Systems and methods for presenting information related to products or services being shown on a second display device on a first display device using augmented reality technology
JP2018534607A (en) Efficient display processing using prefetch
JP2013531830A (en) Zoom display navigation
AU2017317839B2 (en) Panoramic video compression method and device
JP5323260B2 (en) Control terminal device and remote control system
US9538231B2 (en) Systems and methods for rendering multiple applications on television screens
US20210274262A1 (en) Multi-subtitle display method, intelligent terminal and storage medium
WO2018214768A1 (en) Rendering method and device
TWI669958B (en) Method, processing device, and computer system for video preview
WO2022095858A1 (en) Data transmission method, and device and medium
US20120218292A1 (en) System and method for multistage optimized jpeg output
TW201915710A (en) Display device and image display method thereof based on Android platform
WO2018214779A1 (en) Rendering method and device
CN112153459A (en) Method and device for screen projection display
CN109214977B (en) Image processing apparatus and control method thereof
CN117557701A (en) Image rendering method and electronic equipment
CN116700943A (en) Video playing system and method and electronic equipment
CN111885417B (en) VR video playing method, device, equipment and storage medium
US8972877B2 (en) Information processing device for displaying control panel image and information image on a display
CN113268302B (en) Display mode switching method and device of head-mounted display equipment