GB2602027A - Display apparatus - Google Patents


Publication number
GB2602027A
Authority
GB
United Kingdom
Prior art keywords
area
primitives
rendering
tiles
graphic object
Prior art date
Legal status
Pending
Application number
GB2019816.4A
Other versions
GB202019816D0 (en)
Inventor
Bialogonski Adam
Current Assignee
Samsung Electronics Co Ltd
Original Assignee
Samsung Electronics Co Ltd
Priority date
Filing date
Publication date
Application filed by Samsung Electronics Co Ltd filed Critical Samsung Electronics Co Ltd
Priority to GB2019816.4A priority Critical patent/GB2602027A/en
Publication of GB202019816D0 publication Critical patent/GB202019816D0/en
Priority to PCT/KR2021/003105 priority patent/WO2022131449A1/en
Publication of GB2602027A publication Critical patent/GB2602027A/en

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 - 3D [Three Dimensional] image rendering
    • G06T 15/50 - Lighting effects
    • G06T 15/503 - Blending, e.g. for anti-aliasing

Abstract

A display apparatus arranged to render an image comprising a graphic object drawn on a base layer, the graphic object comprising a first area to be drawn opaque such that the base layer is not visible therethrough, and a second area to be drawn such that the base layer is at least partially visible therethrough. The display apparatus comprises a processor that is arranged to perform a method comprising: analysing the graphic object to thereby generate primitives corresponding to the first area and primitives corresponding to the second area; sending the primitives corresponding to the first area for rendering without alpha-blending; sending the primitives corresponding to the second area for rendering with alpha-blending; and rendering the primitives, wherein the rendering is performed such that the first area is drawn on the base layer before the second area is drawn on the base layer.

Description

Display Apparatus
Technical Field
Exemplary embodiments relate to display apparatuses, and methods of controlling the same, for efficient rendering of alpha-blended images.
Background
It is common for display apparatuses to perform alpha-blending when overlaying graphics, for example backgrounds, icons and similar components of an on-screen display, on a base video image, to give the appearance of transparency to the overlaid image. Alpha-blending mixes the base video image with the overlaid graphic in a ratio according to an alpha value set for each of the overlaying pixels.
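By way of illustration only, the per-pixel mix can be sketched as follows (a minimal sketch, assuming straight, non-premultiplied alpha normalised to [0, 1]; not the apparatus's actual implementation):

```python
def alpha_blend(graphic_rgb, video_rgb, alpha):
    """Mix one overlay pixel with one base-video pixel.

    alpha is normalised to [0, 1]: 0 means fully transparent (the video
    shows through), 1 means fully opaque (the graphic covers the video).
    """
    return tuple(alpha * g + (1.0 - alpha) * v
                 for g, v in zip(graphic_rgb, video_rgb))

# A half-transparent red graphic pixel over a blue video pixel:
print(alpha_blend((255, 0, 0), (0, 0, 255), 0.5))  # (127.5, 0.0, 127.5)
```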
It is known to alpha-blend graphic objects in two or more layers. Typically, the lowermost layer is first blended with the layer above. Then, a higher layer, i.e. a layer which is to appear in front of the lower layers, is blended with the output of the lower layers' blending operation. Multiple layers require multiple alpha-blending operations of this nature, which increases the amount of resource needed to produce the final output.
FIG. 1 shows a schematic illustration of an alpha-blending process performed by a display apparatus. Video RGB data 11, graphic object RGB data 12, and alpha data 13 are delivered from storage 10 as inputs to a rendering unit 20. The rendering unit 20 performs an alpha-blending operation on the video RGB data 11 and graphic object RGB data 12 according to the alpha data 13, formatting its output for delivery to a display unit 30 for display.
Using the procedure illustrated in FIG. 1, producing an alpha-blended output at a suitable resolution for display on an HD display unit requires significant computational resources when the graphic object RGB data 12 is to overlie the whole display area. At a resolution of 1920x1080, if all pixels are transmitted with an alpha value of 8 bits, a graphic signal of 8 bits per colour, and a video signal of 16 bits per colour, a data signal of 1920x1080x{8+(8x3)+(16x3)} bits needs to be transmitted from storage 10 to the rendering unit 20 to perform alpha-blending for the full display area, and a correspondingly large amount of computational resource is needed for the rendering unit 20 to produce its output for display.
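The figure quoted above can be checked with simple arithmetic (a sketch using the bit depths stated in the text):

```python
# Bit depths as stated in the text: 8-bit alpha, 8 bits per colour for
# the graphic signal, 16 bits per colour for the video signal.
WIDTH, HEIGHT = 1920, 1080
bits_per_pixel = 8 + (8 * 3) + (16 * 3)       # alpha + graphic RGB + video RGB

total_bits = WIDTH * HEIGHT * bits_per_pixel  # per frame, full display area
print(bits_per_pixel)   # 80
print(total_bits)       # 165888000 bits, i.e. about 19.8 MiB per frame
```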
When rendering multiple layers, some optimisation is possible by examining pixels of higher layers that are to be rendered above those of lower layers. If it can be determined that a pixel of a highest layer is to be rendered as opaque, it will cover any blending that is performed on the corresponding pixels in lower layers, meaning that there is no value in performing the blending operation on the lower layers. This approach requires an understanding of which graphic objects belong in which layer, and a knowledge of what is to come in the layers above, at a time before alpha-blending of lower layers is carried out. In a standard GPU architecture, operating the processes of FIG. 1 on a layer-by-layer basis, there is no straightforward way of knowing, before alpha-blending of a lower layer is carried out, what may, or may not, be present in higher layers such that lower-layer alpha-blending operations become redundant. Overdrawing of lower alpha-blended pixels by opaque higher pixels wastes resources needed to render the final output for display.
As the process of alpha-blending lower layers takes some finite time, it may sometimes take place in a period of time that partially overlaps the time in which the content of a higher layer can be determined by the rendering unit. In that period of time there is an opportunity to kill further alpha-blending operations on lower layers, if it is determined that opaque pixels of a higher layer will overdraw the alpha-blended lower pixels. In this way a reduction in redundant operations can be achieved. Some GPU hardware such as the "Mali" hardware produced by ARM Ltd operates in this way. However, in this approach dedicated hardware is needed to manage pixel queues of the various layers for rendering, and other optimisation processes employed in producing the final output for display may be compromised.
Exemplary embodiments aim to address the disadvantages associated with the techniques described above.
Summary of the Invention
In one example, there is provided a display apparatus arranged to render an image comprising a graphic object drawn on a base layer, the graphic object comprising a first area to be drawn opaque such that the base layer is not visible therethrough, and a second area to be drawn such that the base layer is at least partially visible therethrough. The display apparatus comprises a processor that is arranged to carry out a method that comprises: analysing the graphic object to thereby generate primitives corresponding to the first area and primitives corresponding to the second area; sending the primitives corresponding to the first area for rendering without alpha-blending; sending the primitives corresponding to the second area for rendering with alpha-blending; and rendering the primitives; wherein the rendering is performed such that the first area is drawn on the base layer before the second area is drawn on the base layer. In one example, the display apparatus is arranged to carry out a method according to any one of the exemplary embodiments set out herein.
In one example, there is provided a method of rendering an image comprising a graphic object drawn on a base layer, the graphic object comprising a first area to be drawn opaque such that the base layer is not visible therethrough, and a second area to be drawn such that the base layer is at least partially visible therethrough. The method comprises: analysing the graphic object to thereby generate primitives corresponding to the first area and primitives corresponding to the second area; sending the primitives corresponding to the first area for rendering without alpha-blending; sending the primitives corresponding to the second area for rendering with alpha-blending; and rendering the primitives; wherein the rendering is performed such that the first area is drawn on the base layer before the second area is drawn on the base layer.
By rendering the image in this way, alpha-blending is performed only on areas of the graphic object through which the base layer is to be at least partially visible. Furthermore, standard hardware can administer the analysis of the graphic object and perform the rendering in an efficient manner. For example, the primitives can be stored in two sections of an index buffer, and two separate appropriate draw calls made for the two different rendering operations for the first and second primitives.
In one example, the first area comprises opaque pixels. In one example, the second area comprises transparent and/or semi-transparent pixels.
In one example, the graphic object comprises an image file, for example a JPEG file or a PNG file. In one example the graphic object comprises a texture. In one example the graphic object may include data corresponding to R, G, and B colour values of each pixel of the graphic object, and an alpha value for each pixel indicating a degree of transparency. In one example, when an alpha value is 0, the corresponding pixel is to be transparently displayed. In one example, when an alpha value is 1, the corresponding pixel is to be opaquely displayed. In one example, when an alpha value is between 0 and 1, the corresponding pixel is to be semi-transparently displayed.
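These alpha-value conventions can be sketched as a simple per-pixel classification (illustrative only; the function name is hypothetical):

```python
def classify_pixel(alpha):
    """Classify one pixel by its alpha value (normalised to [0, 1])."""
    if alpha == 0:
        return "transparent"       # base layer fully visible
    if alpha == 1:
        return "opaque"            # base layer hidden
    return "semi-transparent"      # base layer partially visible

print([classify_pixel(a) for a in (0, 0.4, 1)])
# ['transparent', 'semi-transparent', 'opaque']
```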
In one example, the analysing step comprises a tiling process, in which tiles making up the graphic object are analysed. In one example the tiling process comprises dividing the graphic object into a plurality of tiles, and the analysing is performed based on the tiles. In one example, the tiles comprise divisions of the graphic object, for example tessellating sub-units across the area of the graphic object.
In one example the tiles are rectangular, for example square. In one example, the tiles comprise a plurality of pixels of the graphic object. In one example the tiles comprise elements that are at least 8 pixels in one dimension, for example at least 16 pixels. In one example the tiles comprise elements that are at least 8 pixels in two dimensions, for example at least 16 pixels. In one example, the tiles are sized with predetermined dimensions corresponding to the hardware that is to be used to render the primitives.
In one example, the analysis comprises a thresholding process that classifies the graphic object based on the distribution of opaque pixels, transparent pixels and semi-transparent pixels. In one example the analysis comprises a thresholding process that classifies the graphic object based on the distribution of opaque pixels, transparent pixels and semi-transparent pixels within each tile. In one example, the thresholding process classifies each tile as opaque, transparent or semi-transparent.
In one example, the analysing step comprises a mapping step, in which transparent parts of the graphic object are discarded. In one example, the analysing step comprises a mapping step, in which transparent parts of the graphic object are not used in generating primitives, such that the primitives corresponding to the second area are primitives to be rendered as semi-transparent.
In one example, the analysing step comprises a mapping step that operates on the tiles. In one example, the analysing step comprises a mapping step that discards tiles classified as transparent. In one example, the mapping step comprises forming an initial opacity map for the graphic object, containing information on which tiles are opaque and therefore correspond to the first area. In one example, the mapping step comprises forming an initial opacity map for the graphic object, containing information on which tiles are semi-transparent and therefore correspond to the second area. In one example the mapping step comprises forming an initial opacity map which includes no information on transparent tiles.
In one example, the mapping step comprises a merging operation, such as an operation of merging tiles of the initial opacity map. In one example, the merging operation comprises merging tiles of the initial opacity map according to opacity state. In one example, the merging operation comprises merging opaque tiles with one another. In one example, the merging operation comprises merging semi-transparent tiles into one another. In one example, the merging operation comprises a two-dimensional process. In a first dimension, tiles are compared to their adjacent tile and, if there is correspondence, the tiles are merged. If not, the comparison moves on one tile and is repeated. Once all tiles in the first row of the initial opacity map have been compared in the first dimension, the next row of tiles is compared in the same manner, and so on. Then, in a second dimension, this comparison and merging operation is performed. In one example, the mapping step comprises producing a final opacity map of merged tiles.
In one example, the mapping step produces an opacity map of tiles that are used to create a geometry comprising primitives suitable for rendering.
In one example, the analysing step comprises transforming tiles of the final opacity map into triangle primitives. In one example, the tiles of the final opacity map, including the merged tiles, are transformed into primitives, such as triangle primitives. In one example the primitives tessellate to cover the final opacity map. In one example, the method comprises storing data of the primitives in a buffer ready to be sent for rendering, such as a geometry buffer or an index buffer. In one example the primitives are sorted so that primitives of the opaque tiles and the semi-transparent tiles are stored separately. In one example the method comprises storing the primitives of the opaque tiles first in a geometry buffer, with the primitives of the semi-transparent tiles following thereafter.
In one example, the method comprises arranging the primitives of the opaque tiles and the semi-transparent tiles in such a way that allows the rendering process for the graphic object to be performed in two separate routines. In one example, rendering the primitives is performed by delivering one of two different draw calls, according to whether the rendering is to be performed for the primitives corresponding to the first area or for the primitives corresponding to the second area. In one example, for primitives of the opaque tiles the method comprises making a draw call to a routine that does not perform alpha-blending when rendering the respective primitives. In one example, for primitives of the semi-transparent tiles the method comprises making a draw call to a routine that performs alpha-blending when rendering the respective primitives.
A computer program comprising instructions which, when the program is executed by a computer, cause the computer to carry out the steps of a method according to any one of the exemplary embodiments set out herein.
A computer-readable storage medium comprising instructions which, when executed, cause a computer to carry out the steps of a method according to any one of the exemplary embodiments set out herein.
Additional and/or other aspects and advantages of exemplary embodiments will be set forth in part in the description which follows and, in part, can be appreciated from the description, or learned by practice of the exemplary embodiments.
Introduction to the Drawings
For a better understanding of the invention, and to show how exemplary embodiments of the same may be carried into effect, reference will now be made, by way of example, to the accompanying diagrammatic drawings in which:
FIG. 1 shows a schematic illustration of an alpha-blending process performed by a display apparatus;
FIG. 2A is a block diagram illustrating a configuration of a display apparatus according to an exemplary embodiment;
FIG. 2B is an example flow diagram illustrating a method according to an exemplary embodiment;
FIG. 3 shows an example of a graphic object 300 that may be included in a graphic signal;
FIG. 4 shows steps in a tiling process;
FIG. 5 shows steps in an alternative tiling process;
FIG. 6A and FIG. 6B are diagrams illustrating a display image according to an exemplary embodiment;
FIG. 7 shows tiles of the final opacity map transformed into triangle primitives;
FIG. 8A - FIG. 8E show operations performed in rendering components of an example on-screen display in layers;
FIG. 9A and FIG. 9B illustrate an example of the difference in the amount of alpha-blending required when using processes described herein, in contrast to a related art process;
FIG. 10 shows an example whole frame that includes a typical on-screen display rendered in layers; and
FIG. 11A and FIG. 11B illustrate another example of the difference in the amount of alpha-blending required when using processes described herein, in contrast to a related art process.
Description of Exemplary Embodiments
Hereinafter, the exemplary embodiments will be described in detail with reference to the attached drawings. In the description of the exemplary embodiments, certain detailed explanations of the related art are omitted where they might unnecessarily obscure features of the exemplary embodiments.
FIG. 2A is a block diagram illustrating a configuration of a display apparatus 100 according to an exemplary embodiment. As illustrated in FIG. 2A, the display apparatus 100 includes an input unit 110, a processor 120, a rendering unit 130, and a display unit 140. In this embodiment, the display apparatus 100 comprises a television, but this is merely exemplary. In other embodiments the display apparatus 100 may be embodied as any of various electronic apparatuses including the display unit 140, such as a cellular phone, a laptop, a tablet computer, a digital camera, a camcorder, a notebook, a personal digital assistant, an MP3 player, a portable multimedia player, a smart phone, a digital photo frame, digital signage, and the like.
The input unit 110 is arranged to receive a display signal including a video signal, a plurality of graphic signals, and a plurality of alpha values.
That is, the input unit 110 is arranged to receive a display signal transmitted from a device external to the display apparatus 100. For example, the input unit 110 is arranged to receive a display signal of an image contained in broadcast signals that are transmitted from a broadcaster through a radio frequency (RF) communication network.
In alternative embodiments, the input unit 110 is alternatively, or additionally, arranged to receive a display signal contained in content that is received from a server through a wide area network such as an IP network, and/or to receive a display signal through an external device that is directly physically connected to the display apparatus.
Alternatively, or in addition, according to a user command for reproduction of image content stored in a storage medium inside or outside the display apparatus 100, the input unit 110 is arranged to receive a display signal for displaying an image from a storage medium inside or outside the display apparatus 100.
The input unit 110 is arranged to receive and operate on a video signal including R, G, and B colour values of video data, per pixel. Each graphic signal included in the plurality of graphic signals includes R, G, and B colour values of the graphic data per pixel. The associated alpha values are data values corresponding to transmittance of the graphic data, on a per pixel basis, for blending the video signal and the graphic signal. The associated depth information corresponds to the order in which the graphic signals are to be rendered in layers in the output on the display unit 140.
FIG. 2B outlines the steps in a method performed by the apparatus of FIG. 2A for rendering an image comprising a graphic object drawn on a base layer, the graphic object comprising a first area to be drawn opaque such that the base layer is not visible therethrough, and a second area to be drawn such that the base layer is at least partially visible therethrough. At step S101 the method comprises analysing the graphic object to thereby generate primitives corresponding to the first area and primitives corresponding to the second area. At step S102 the primitives corresponding to the first area are sent for rendering without alpha-blending. At step S103 the primitives corresponding to the second area are sent for rendering with alpha-blending. At step S104 the primitives are rendered such that the first area is drawn on the base layer before the second area is drawn on the base layer. The method steps will be explained in more detail below.
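The ordering of steps S101-S104 can be sketched as follows (the three callables are hypothetical stand-ins, used only to show the call order; they are not part of the described apparatus):

```python
def render_graphic_object(obj, base_layer, analyse, draw_opaque, draw_blended):
    """Steps S101-S104 in order: analyse, then draw the opaque area
    without blending, then draw the semi-transparent area with blending."""
    opaque_prims, semi_prims = analyse(obj)   # S101: two sets of primitives
    draw_opaque(base_layer, opaque_prims)     # S102 + S104: first pass
    draw_blended(base_layer, semi_prims)      # S103 + S104: second pass

# Trace the call order with trivial stand-ins:
order = []
render_graphic_object(
    obj=None, base_layer=None,
    analyse=lambda o: (["opaque"], ["semi"]),
    draw_opaque=lambda b, p: order.append(("no-blend", p)),
    draw_blended=lambda b, p: order.append(("blend", p)),
)
print(order)  # [('no-blend', ['opaque']), ('blend', ['semi'])]
```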
FIG. 3 shows an example of a graphic object 300 that may be included in a graphic signal as described. The graphic object 300 comprises an icon that is to be displayed over a video image. This is a common scenario when rendering an on-screen display or GUI of a display apparatus. The graphic object 300 may be represented as an image file or, as in this example, may be represented by a texture. The graphic object comprises pixels with different opacity characteristics, including opaque pixels OP, transparent pixels TP and semi-transparent pixels STP.
The processor 120 is arranged to analyse the graphic object 300. As described below, the result of the analysis performed by the processor 120 is a representation of the graphic object in which the opaque pixels OP, transparent pixels TP and semi-transparent pixels STP are grouped together in tiles according to their opacity characteristics.
FIG. 4 shows steps in a tiling process performed by the processor 120. The tiling process results in a map of tiles that are classified as opaque tiles OT, transparent tiles TT and semi-transparent tiles STT, according to the pixels that make up the graphic object 300 that is being analysed. The tiling process comprises dividing the graphic object into a plurality of tiles, and then using a thresholding analysis to classify each tile based on the distribution of opaque pixels OP, transparent pixels TP and semi-transparent pixels STP within the tile. Various known thresholding techniques can be used to set a threshold and to reach a decision for each tile, for example taking a weighted average of the opacity of the pixels within the tile.
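One possible classification rule can be sketched as follows (illustrative only; the text leaves the exact threshold technique open, and a weighted average of pixel opacities could equally be used):

```python
def classify_tile(alphas):
    """Classify a tile from the alpha values of its pixels.

    A conservative rule: a tile is opaque only if every pixel is opaque,
    transparent only if every pixel is transparent, and semi-transparent
    otherwise.
    """
    if all(a == 1 for a in alphas):
        return "opaque"
    if all(a == 0 for a in alphas):
        return "transparent"
    return "semi-transparent"

print(classify_tile([1, 1, 1, 1]))     # opaque
print(classify_tile([0, 0, 0, 0]))     # transparent
print(classify_tile([1, 0.5, 0, 1]))   # semi-transparent
```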
FIG. 5 shows steps in an alternative tiling process performed by the processor 120. The tiling process in this example uses Compute Shader work groups to parallelise tiling operations across the pixels that make up the graphic object, in a manner that can be performed efficiently using GPU hardware. Schematically, the NxM work groups are shown as operating over N threads, CSThread_0,...,CSThread_N, performing the tiling operations in parallel, with the resulting outputs 0...N sent to the GPU buffer as the tiled representation of the graphic object 300. The Compute Shader approach allows the tiling process to take advantage of GPU functionality, using a programming model that is transferrable across any GPU hardware having suitable APIs. Further flexibility can be provided with Compute Shaders in terms of the format of graphic objects to be tiled. Compute Shaders can be established so that the tiling operations can be performed efficiently with available GPU hardware for various input formats of the graphic objects to be tiled, including starting from input graphic objects that are provided in compressed form, which may advantageously be decompressed using available GPU hardware.
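The property the work groups exploit is that each tile can be classified independently of every other tile. That independence can be sketched on the CPU with a thread pool (an analogy only; the described process runs as GPU Compute Shader threads, not CPU threads):

```python
from concurrent.futures import ThreadPoolExecutor

def classify_tile(alphas):
    """Same conservative per-tile rule as before; no cross-tile state."""
    if all(a == 1 for a in alphas):
        return "opaque"
    if all(a == 0 for a in alphas):
        return "transparent"
    return "semi-transparent"

# Each "work group" handles one tile; with no dependencies between
# tiles, the classifications can all run in parallel.
tiles = [[1, 1, 1, 1], [0, 0, 0, 0], [1, 0.5, 0, 0]]
with ThreadPoolExecutor() as pool:
    tile_map = list(pool.map(classify_tile, tiles))
print(tile_map)  # ['opaque', 'transparent', 'semi-transparent']
```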
The example graphic object 300 comprises a texture of size 256x256 pixels, and the tile size is set at 16x16. The tile size may be set at predetermined dimensions to correspond to processing hardware in the display apparatus. For example, some GPUs include a tile-based rendering engine. In display apparatuses according to exemplary embodiments which include processing hardware that operates on tiles, the processor 120 is arranged to produce tiles that correspond to those used in such processing hardware.
FIG. 6A and FIG. 6B show steps performed in generating a tile map. The steps illustrated are suitably performed by the processor 120 in converting the tiles that represent the graphic object 300 into a tile map. The first step is to discard the transparent tiles, leaving only semi-transparent tiles STT and opaque tiles OT. The semi-transparent tiles STT and opaque tiles OT form the basis for an initial opacity map 610, containing information on which tiles are semi-transparent and therefore must be alpha-blended, and which tiles are opaque and can therefore be drawn without alpha-blending.
The opacity map for the tiles that are not discarded as transparent is stored as an array of bits. For example, 0s in the array may represent opaque tiles and 1s may represent semi-transparent tiles.
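The bit-array packing can be sketched as follows (assuming, as one example, that 0 marks an opaque tile and 1 a semi-transparent tile, the transparent tiles having already been discarded):

```python
def pack_opacity_map(tile_states):
    """Pack non-transparent tiles into a bit array: 0 marks an opaque
    tile, 1 a semi-transparent tile; transparent tiles are dropped."""
    return [0 if s == "opaque" else 1
            for s in tile_states if s != "transparent"]

states = ["opaque", "transparent", "semi-transparent", "opaque"]
print(pack_opacity_map(states))  # [0, 1, 0]
```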
To reduce the map complexity, the tiles of the initial opacity map 610 are merged based on their opacity state. The merging algorithm comprises two steps. Firstly, tiles are merged horizontally, as shown at 620. Then, the horizontally merged tiles are themselves merged vertically, as shown at 630. The final opacity map 640 is thus produced. Tiles of the final opacity map 640 are used to create a geometry suitable for rendering.
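The two-step merge can be sketched as follows (a simplified greedy sketch; the tile-state encoding and the rectangle representation are illustrative assumptions, not the patent's data layout):

```python
def merge_tiles(grid):
    """Greedy two-step merge of an opacity map.

    grid[r][c] holds a tile state ('O' opaque, 'S' semi-transparent,
    None for a discarded transparent tile). Step 1 merges equal
    neighbours along each row into runs; step 2 stacks identical runs
    from consecutive rows into rectangles (row, col, width, height, state).
    """
    # Step 1: horizontal runs per row.
    runs = []
    for r, row in enumerate(grid):
        c = 0
        while c < len(row):
            if row[c] is None:
                c += 1
                continue
            start, state = c, row[c]
            while c < len(row) and row[c] == state:
                c += 1
            runs.append([r, start, c - start, 1, state])
    # Step 2: vertical merge of identical runs in consecutive rows.
    merged = []
    for run in runs:
        prev = next((m for m in merged
                     if m[0] + m[3] == run[0] and m[1] == run[1]
                     and m[2] == run[2] and m[4] == run[4]), None)
        if prev:
            prev[3] += 1
        else:
            merged.append(run)
    return [tuple(m) for m in merged]

grid = [["O", "O", "S"],
        ["O", "O", "S"],
        [None, None, "S"]]
print(merge_tiles(grid))
# [(0, 0, 2, 2, 'O'), (0, 2, 1, 3, 'S')]
```

Nine tiles collapse to two rectangles, so far fewer primitives need to be generated and drawn.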
FIG. 7 shows tiles of the final opacity map transformed into triangle primitives. Both the semi-transparent tiles STT and the opaque tiles OT are transformed into primitives suitable for rendering. In this example, the primitives comprise triangle primitives, which tessellate to cover the final opacity map 640. A geometry buffer stores the data that defines the primitives, sorted so that primitives of the semi-transparent tiles STT and the opaque tiles OT are stored separately, for example in separate contiguous blocks of data. The primitives of the opaque tiles OT appear first, and the primitives of the semi-transparent tiles STT follow thereafter.
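The transformation of tiles into sorted triangle primitives can be sketched as follows (illustrative; the vertex layout and buffer representation are assumptions, not the patent's actual geometry format):

```python
def tile_to_triangles(x, y, w, h):
    """Split one rectangular tile into the two triangles that cover it;
    vertices are listed counter-clockwise."""
    return [((x, y), (x + w, y), (x + w, y + h)),
            ((x, y), (x + w, y + h), (x, y + h))]

def build_geometry(opaque_tiles, semi_tiles):
    """Emit one sorted primitive list: opaque triangles first, then the
    semi-transparent ones, so each group occupies a contiguous block
    that can be drawn with its own draw call."""
    prims = [t for tile in opaque_tiles for t in tile_to_triangles(*tile)]
    opaque_count = len(prims)
    prims += [t for tile in semi_tiles for t in tile_to_triangles(*tile)]
    return prims, opaque_count

prims, n_opaque = build_geometry([(0, 0, 32, 16)], [(0, 16, 32, 16)])
print(len(prims), n_opaque)  # 4 2
```

The `opaque_count` split point is what lets the renderer issue the no-blend call over the first block and the blended call over the remainder.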
The separate arrangement of the primitives of the opaque tiles OT and the semi-transparent tiles STT allows the rendering process for the graphic object 300 to be performed in two separate routines by the rendering unit 130. For the semi-transparent tiles STT, a draw call is made to a routine that performs alpha-blending when rendering the respective primitives. For the opaque tiles OT, a different draw call is made, to a routine that does not perform alpha-blending when rendering the respective primitives.
When graphic objects are to be rendered, in a plurality of layers from the lowest layer through higher layers, the processes described above are performed first for the graphic object in the lowest layer. Opaque parts of the graphic object in the lowest layer are drawn first by the rendering unit 130, and then semi-transparent parts of the graphic object are drawn by the rendering unit 130 with alpha-blending. The process then repeats for graphic objects in the next, higher, layer. This may involve overlapping graphic objects in lower layers, as shown in the example of FIG. 8A -FIG. 8E.
FIG. 8A - FIG. 8E show operations performed in rendering components of an example on-screen display in layers. The components of the example on-screen display are in three layers and comprise graphic objects that have opaque, semi-transparent and transparent pixels. The lowermost layer comprises a graphic object in the form of a background 810. In this example the background 810 comprises only opaque pixels. A graphic object in the form of a menu bar 820 is to be rendered as a higher layer. The menu bar 820 comprises opaque pixels and semi-transparent pixels. Opaque pixels of the menu bar 820 make up the central region of the menu bar 820, and in this example semi-transparent pixels of the menu bar 820 make up an upper edge region and a lower edge region of the menu bar 820. A graphic object in the form of an icon 830 is to be rendered as a next higher layer. The icon 830 comprises opaque pixels, semi-transparent pixels and transparent pixels. Opaque pixels of the icon 830 make up the central region of the icon 830, and in this example semi-transparent pixels of the icon 830 surround the central region of the icon, and transparent pixels of the icon 830 surround those.
Rendering in this example is to be performed from back to front, that is, starting with the lowermost layer. As the lowermost layer has nothing beneath it, it is rendered as normal. This stage is shown at FIG. 8A. For the next-lowermost layer, where blending may be required, the processes of tiling, generating an opacity map and generating two sets of primitives for separate rendering are performed for the graphic object(s) in that layer. In this example these processes are performed for the menu bar 820. The opaque parts of the menu bar 820 are drawn first, as shown at FIG. 8B, and then semi-transparent parts of the menu bar 820 are drawn with alpha-blending, as shown at FIG. 8C.
For the next layer above, where blending may again be required, the processes of tiling, generating an opacity map and generating two sets of primitives for separate rendering are performed for the graphic object(s) in that layer. In this example these processes are performed for the icon 830. The transparent parts of the icon 830 are discarded, and the opaque parts of the icon 830 are drawn first, as shown at FIG. 8D, and then semi-transparent parts of the icon 830 are drawn with alpha-blending, as shown at FIG. 8E.
FIG. 9A and 9B illustrate an example of the difference in the amount of alpha-blending required when using processes described herein, in contrast to a related art process in which no consideration is given to the composition of the graphic objects to be rendered in the respective layers.
FIG. 9A illustrates, for the components of FIG. 8A - 8E, the number of times the pixels in each part of the output are overdrawn when using a standard back-to-front rendering of layers on top of a video signal, with alpha-blending for all pixels of graphic objects that are added onto lower layers. The background 810 is overdrawn once, as indicated by the dark shaded area "1x". The menu bar 820 is overdrawn twice, as indicated by the light shaded area "2x". The icon 830 is overdrawn three times, as indicated by the white area "3x".
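The overdraw counts of FIG. 9A can be reproduced with a small sketch (the layer rectangles and dimensions below are illustrative, not taken from the figures):

```python
def overdraw_counts(layers, width, height):
    """Count how many times each pixel is written when every layer is
    naively drawn back to front over a full-screen base. Each layer is
    a rectangle (x, y, w, h)."""
    counts = [[1] * width for _ in range(height)]   # base drawn once
    for (x, y, w, h) in layers:
        for r in range(y, y + h):
            for c in range(x, x + w):
                counts[r][c] += 1
    return counts

# A full-width "menu bar" with a small "icon" on top of it:
counts = overdraw_counts([(0, 2, 4, 2), (1, 3, 2, 1)], width=4, height=6)
print(counts[3][1])  # 3  (base + menu bar + icon)
print(counts[0][0])  # 1  (base only)
```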
FIG. 9B illustrates, for the components of FIG. 8A - 8E, the way overdrawing with redundant alpha-blending is reduced using the processes described herein when rendering layers on top of a video signal. As can be seen, the only area in which 3x overdrawing, with the expense of multiple alpha-blending operations, is required is the area of the semi-transparent parts of the icon 830, around the edges of the icon 830. Most of the area of the icon 830 is overdrawn only 1x, the same as the background.
FIG. 10 shows an example whole frame that includes a typical on-screen display rendered in layers. The components of the example on-screen display are in a plurality of layers and comprise graphic objects that have opaque, semi-transparent and transparent pixels. The lowermost layer comprises a graphic object in the form of a background, and there are a plurality of menu bars and icons to be rendered as higher layers. Between them, the menu bars and icons comprise opaque pixels, semi-transparent pixels and transparent pixels. The frame resolution is 1280x720, with the UI area using about 50% of the screen area.
FIG. 11A and FIG. 11B show the amount of overdrawing required for this example, in the same manner as FIG. 9A and FIG. 9B.
TABLE 1 shows the results of an overdrawing complexity analysis for the example of FIG. 10, using a standard back-to-front rendering of layers on top of a video signal with alpha-blending for all pixels of graphic objects that are added onto lower layers, corresponding to the representations of FIG. 11A and FIG. 11B. The numbers in TABLE 1 correspond to the areas shown in FIG. 11A. The value for Total Fragment Cost represents a performance cost, in terms of GPU cycles, of running a shader as part of the rendering. For each cost, the lower the number the better.
TABLE 1
In contrast, using the processes described herein, the numbers in TABLE 2 show a reduction in overdrawing of about 16% and a reduction in Total Fragment Cost.
TABLE 2
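Since the table contents are not reproduced here, the cost metric can still be illustrated with a simple model. This is an assumed formulation, not the patent's exact accounting: total fragment cost is taken as the sum, over screen regions, of pixel area multiplied by overdraw count multiplied by a per-fragment shader cost in GPU cycles.

```python
# Illustrative fragment-cost model (an assumption): each region contributes
# area * overdraw * per-fragment shader cost (GPU cycles).  Lower is better.

def total_fragment_cost(regions, cycles_per_fragment):
    """regions: iterable of (area_in_pixels, overdraw_count) pairs."""
    return sum(area * overdraw * cycles_per_fragment
               for area, overdraw in regions)
```

Under this model, shrinking the 2x and 3x regions to thin semi-transparent edges, as the described processes do, directly lowers the total, which is the effect the comparison of TABLE 1 and TABLE 2 reports.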
The display unit 140 is a component for displaying an image. That is, the display unit 140 may display an image signal processed through a signal processor (not shown).
A signal processor (not shown) is a component for signal-processing image information and sound information that constitute content. In response to a stream signal being received, the signal processor (not shown) may demultiplex the stream signal to separate an image signal, a sound signal, and a data signal. The signal processor (not shown) may perform decoding using a decoder when the demultiplexed image signal is an encoded image signal. For example, an MPEG-2 standard-encoded image signal may be decoded by an MPEG-2 decoder, and an H.264 standard image signal of digital multimedia broadcasting (DMB) or DVB-H may be decoded by an H.264 decoder. In addition, the signal processor (not shown) may process the brightness, tint, tone, and so on of an image signal.
The signal processor (not shown) may process the demultiplexed voice signal. For example, an MPEG-2 standard-encoded voice signal may be decoded by an MPEG-2 decoder, and an MPEG-4 bit-sliced arithmetic coding (BSAC) standard-encoded voice signal of terrestrial digital multimedia broadcasting (DMB) may be decoded by an MPEG-4 decoder. An MPEG-2 advanced audio codec (AAC) standard-encoded voice signal of a DMB or DVB-H method may be decoded by an AAC decoder. In addition, bass, treble, sound, and so on may be adjusted.
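The codec-to-decoder routing described in the two paragraphs above amounts to a dispatch table. The sketch below is purely illustrative; the codec keys and decoder labels are assumptions, not identifiers from any real decoder API.

```python
# Hedged sketch of decoder selection: each demultiplexed elementary stream
# is routed to a decoder matching its codec.  Keys and labels are
# illustrative placeholders only.

DECODERS = {
    "mpeg2-video": "MPEG-2 video decoder",
    "h264":        "H.264 decoder",
    "mpeg2-audio": "MPEG-2 audio decoder",
    "bsac":        "MPEG-4 BSAC decoder",
    "aac":         "AAC decoder",
}

def pick_decoder(codec):
    """Look up the decoder for a codec; unknown codecs are an error."""
    try:
        return DECODERS[codec]
    except KeyError:
        raise ValueError(f"no decoder registered for codec {codec!r}")
```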
The signal processor (not shown) may data-process the demultiplexed data signal.
Encoded data may be decoded and may include an electronic program guide (EPG) indicating information about a program broadcast in each channel. In the case of the ATSC method, the EPG may include ATSC-program and system information protocol (ATSC-PSIP) information, and in the case of the DVB method, the EPG may include DVB-service information (DVB-SI).
The display unit 140 includes a scaler (not shown), a frame rate converter (not shown), and a video enhancer (not shown). The scaler adjusts a picture ratio of an image. The video enhancer removes degradation or noise of an image and stores the processed image data in a frame buffer. The frame rate converter adjusts a frame rate and transmits the image data of the frame buffer to a display module according to the set frame rate.
The display unit may be designed according to various technologies. That is, the display panel may be configured as one of an organic light emitting diode display, a liquid crystal display panel, a plasma display panel, a vacuum fluorescent display, a field emission display, and an electroluminescence display. In addition, the display panel may be configured as an emissive-type display panel or a reflective display such as E-ink. Alternatively, the display panel may be embodied as a flexible display, a transparent display, or the like.
As set out above, example embodiments of the invention can reduce overdrawing when rendering typical OSD menus, leading to a reduction in the resources needed for such rendering.
The methods described herein may be coded in software and stored in a non-transitory readable medium as instructions to be executed on a suitable computer. The non-transitory readable medium may be installed and used in various apparatuses.
The non-transitory computer-readable media refers to a medium that permanently or semi-permanently stores data and is readable by a device, as opposed to a medium that stores data for a short time period, such as a register, a cache, a memory, etc. In detail, the programs may be stored and provided in non-transitory computer-readable media such as a CD, DVD, hard disc, Blu-ray disc, USB device, memory card, ROM, etc. Although a few exemplary embodiments have been shown and described, it will be appreciated by those skilled in the art that various changes and modifications might be made without departing from the scope of the invention, as defined in the appended claims.
Attention is directed to all papers and documents which are filed concurrently with or before this specification in connection with this application and which are open to public inspection with this specification, and the contents of all such papers and documents are incorporated herein by reference.
All the features disclosed in this specification (including any accompanying claims, abstract and drawings), and/or all the steps of any method or process so disclosed, may be combined in any combination, except combinations where at least some of such features and/or steps are mutually exclusive.
Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise. Thus, unless expressly stated otherwise, each feature disclosed is one example only of a generic series of equivalent or similar features.
The invention is not restricted to the details of the foregoing exemplary embodiments. The invention extends to any novel one, or any novel combination, of the features disclosed in this specification (including any accompanying claims, abstract and drawings), or to any novel one, or any novel combination, of the steps of any method or process so disclosed.

Claims (15)

  1. A method of rendering an image comprising a graphic object drawn on a base layer, the graphic object comprising a first area to be drawn opaque such that the base layer is not visible therethrough, and a second area to be drawn such that the base layer is at least partially visible therethrough, the method comprising: analysing the graphic object to thereby generate primitives corresponding to the first area and primitives corresponding to the second area; sending the primitives corresponding to the first area for rendering without alpha-blending; sending the primitives corresponding to the second area for rendering with alpha-blending; and rendering the primitives; wherein the rendering is performed such that the first area is drawn on the base layer before the second area is drawn on the base layer.
  2. The method of claim 1, wherein the graphic object includes data corresponding to R, G, and B colour values of each pixel of the graphic object and an alpha value for each pixel indicating a degree of transparency, and wherein the first area comprises opaque pixels and the second area comprises transparent and/or semi-transparent pixels.
  3. The method of claim 2, wherein the analysing step comprises a mapping step, in which transparent parts of the graphic object are discarded, such that the primitives corresponding to the second area are primitives to be rendered as semi-transparent.
  4. The method of claim 1, 2, or 3, wherein the analysing step comprises a tiling process, in which tiles making up the graphic object are analysed, the tiling process comprising dividing the graphic object into a plurality of tiles that comprise tessellating sub-units across the area of the graphic object, and wherein the analysing is performed based on the tiles.
  5. The method of claim 3 or 4, wherein the analysis comprises a thresholding process that classifies each tile as opaque, transparent or semi-transparent based on the distribution of opaque pixels, transparent pixels and semi-transparent pixels within each tile.
  6. The method of claim 4 or 5, wherein the tiles comprise a plurality of pixels of the graphic object and are sized with predetermined dimensions corresponding to the hardware that is to be used to render the primitives.
  7. The method of claim 6, wherein the analysing step comprises a mapping step that operates on the tiles, and comprises discarding tiles classified as transparent to form an initial opacity map for the graphic object that contains information on which tiles are opaque and therefore correspond to the first area, and information on which tiles are semi-transparent and therefore correspond to the second area.
  8. The method of claim 7, wherein the analysing step comprises merging tiles of the initial opacity map, according to opacity state, to thereby merge semi-transparent tiles into one another to produce a final opacity map of merged tiles, and further comprises using the final opacity map to create a geometry comprising primitives suitable for rendering.
  9. The method of claim 8, wherein the analysing step comprises transforming tiles of the final opacity map into triangle primitives.
  10. The method of any preceding claim, wherein the method comprises storing data of the primitives in a buffer ready to be sent for rendering, the primitives sorted so that primitives of opaque pixels and semi-transparent tiles are stored separately in respective contiguous areas of the buffer.
  11. The method of any preceding claim, wherein the method comprises arranging the primitives of the opaque and semi-transparent pixels ready for rendering to be performed in two separate routines.
  12. The method of any preceding claim, wherein for primitives of opaque pixels the method comprises making a draw call to a routine that does not perform alpha-blending when rendering the respective primitives, and for primitives of semi-transparent pixels the method comprises making a draw call to a routine that performs alpha-blending when rendering the respective primitives.
  13. A display apparatus, comprising a processor arranged to carry out the method of any one of claims 1 to 12.
  14. An apparatus arranged to render an image comprising a graphic object drawn on a base layer, the graphic object comprising a first area to be drawn opaque such that the base layer is not visible therethrough, and a second area to be drawn such that the base layer is at least partially visible therethrough, the apparatus comprising a processor arranged to: analyse the graphic object to thereby generate primitives corresponding to the first area and primitives corresponding to the second area; send the primitives corresponding to the first area for rendering without alpha-blending; send the primitives corresponding to the second area for rendering with alpha-blending; and render the primitives; wherein the processor is arranged to control the rendering such that the first area is drawn on the base layer before the second area is drawn on the base layer.
  15. A computer-readable storage medium comprising instructions which, when executed, cause a computer to carry out the method of any one of claims 1 to 12.
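The two-pass submission described in claims 10 to 12 can be sketched as follows. This is an illustrative toy (a one-dimensional framebuffer, function names invented here), not the claimed apparatus: opaque primitives are written without blending in a first pass, and semi-transparent primitives are alpha-blended over the result in a second pass, using the standard "over" blend out = src * alpha + dst * (1 - alpha).

```python
# Toy two-pass renderer.  Primitives are (x, colour, alpha) tuples on a
# 1-D framebuffer of single-channel colours; names are hypothetical.

def blend(src, dst, alpha):
    """Standard 'over' alpha-blending of one colour channel."""
    return src * alpha + dst * (1.0 - alpha)

def render(opaque_prims, blended_prims, framebuffer):
    # Pass 1: opaque primitives, no blending -- plain overwrite.
    for x, colour, _ in opaque_prims:
        framebuffer[x] = colour
    # Pass 2: semi-transparent primitives, alpha-blended over pass 1.
    for x, colour, alpha in blended_prims:
        framebuffer[x] = blend(colour, framebuffer[x], alpha)
    return framebuffer
```

Keeping the two primitive groups in separate contiguous buffer regions, as claim 10 requires, is what allows each pass to be issued as a single draw call against a fixed blending state.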
GB2019816.4A 2020-12-15 2020-12-15 Display apparatus Pending GB2602027A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
GB2019816.4A GB2602027A (en) 2020-12-15 2020-12-15 Display apparatus
PCT/KR2021/003105 WO2022131449A1 (en) 2020-12-15 2021-03-12 Display apparatus


Publications (2)

Publication Number Publication Date
GB202019816D0 GB202019816D0 (en) 2021-01-27
GB2602027A true GB2602027A (en) 2022-06-22

Family

ID=74188889


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070211048A1 (en) * 2003-07-25 2007-09-13 Imagination Technologies Limited Method and apparatus for rendering computer graphic images of translucent and opaque objects
US20090167785A1 (en) * 2007-12-31 2009-07-02 Daniel Wong Device and method for compositing video planes
US20150221127A1 (en) * 2014-02-06 2015-08-06 Imagination Technologies Limited Opacity Testing For Processing Primitives In A 3D Graphics Processing System

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011131230A1 (en) * 2010-04-20 2011-10-27 Trident Microsystems, Inc. System and method to display a user interface in a three-dimensional display
US9019275B2 (en) * 2010-10-01 2015-04-28 Lucid Software, Inc. Manipulating graphical objects
US9349213B2 (en) * 2013-09-09 2016-05-24 Vivante Corporation Tile-based accumulative multi-layer alpha blending systems and methods
JP6494249B2 (en) * 2014-11-12 2019-04-03 キヤノン株式会社 Image forming apparatus, image forming method, and program
KR101843411B1 (en) * 2015-05-07 2018-05-14 에스케이테크엑스 주식회사 System for cloud streaming service, method of image cloud streaming service based on transparency of image and apparatus for the same


