AU2009243440A1 - Optimised rendering of palette images - Google Patents


Info

Publication number
AU2009243440A1
Authority
AU
Australia
Prior art keywords
colour
pixel
data
rendering
run
Prior art date
Legal status
Abandoned
Application number
AU2009243440A
Inventor
Ian Richard Beaumont
Current Assignee
Canon Inc
Original Assignee
Canon Inc
Priority date
Filing date
Publication date
Application filed by Canon Inc
Priority to AU2009243440A
Publication of AU2009243440A1
Current status: Abandoned


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/40 Filling a planar surface by adding surface attributes, e.g. colour or texture

Description

S&F Ref: 921408

AUSTRALIA
PATENTS ACT 1990
COMPLETE SPECIFICATION FOR A STANDARD PATENT

Name and Address of Applicant: Canon Kabushiki Kaisha, of 30-2, Shimomaruko 3-chome, Ohta-ku, Tokyo, 146, Japan
Actual Inventor(s): Ian Richard Beaumont
Address for Service: Spruson & Ferguson, St Martins Tower, Level 35, 31 Market Street, Sydney NSW 2000 (CCN 3710000177)
Invention Title: Optimised rendering of palette images

The following statement is a full description of this invention, including the best method of performing it known to me/us:

OPTIMISED RENDERING OF PALETTE IMAGES

TECHNICAL FIELD

The current invention relates to high speed rendering and, in particular, to the processing of palette images within the rendering architecture.

BACKGROUND

In high speed rendering, the time taken to composite colour data is often a performance limitation. Depending on the required compositing operations, a compositor may have to spend many processing cycles to calculate a final pixel value. Rendering systems use techniques to reduce the amount of redundant compositing that takes place. For instance, background objects hidden behind foreground objects may be culled from a compositing list, to reduce the processing load on the compositor.

A common method of rendering uses the Painter's Algorithm to render each object one at a time. The Painter's Algorithm has also been called object-sequential rendering. This type of rendering method utilises a frame store and paints or renders each object into the frame store as that object becomes available. Typically, objects are arranged in levels, such that an object rendered at a relatively lower level may be subsequently painted over by an object at a higher level. Where the object currently being rendered has some transparency, the rendering further necessitates a compositing operation with underlying pixel values already recorded in the frame store, for example from one or more previously rendered objects at lower levels. A technique used to reduce compositing requirements with the Painter's Algorithm rendering method includes saving composited data from a previous run and reusing the result for a next run, if the next run has the same objects as the previous run.

Another method of rendering is a region rendering method, where all objects to be drawn are retained in a display list. The method of region rendering involves a first stage of tracking the edges of objects without generating colour data, but forming a plurality of non-overlapping contiguous regions, and assigning to each contiguous region a unique set of fills which define the region. Fills describe how to generate colour data for each object contributing colour to the contiguous region. This description of the unique contiguous regions is typically stored in a display list. The display list is then passed to a renderer where the display list is used to generate pixel data. The renderer generates pixel data for each contiguous region by determining the current region run length and generating pixel data for the contiguous region using the corresponding fill data. A technique used to reduce compositing requirements within region rendering systems involves grouping fills for a given region into sequential runs of constant colour and performing the compositing operation once per run rather than once per pixel.

However, renderers typically composite each pixel, or each run comprising pixels of the same colour, separately. 
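By way of illustration, the per-run compositing technique mentioned above can be sketched in software. The fragment below is a minimal sketch only and is not part of the specification: the RGBA colour format and the "over" operator are assumptions made for the example, and a compositing result is shared only across a contiguous run of identical input colours.

#include <stdint.h>
#include <string.h>
#include <stddef.h>

typedef struct { uint8_t r, g, b, a; } Colour;   /* assumed RGBA8 format */

/* Stand-in "source over destination" operator; the real renderer may use
 * any compositing operation. */
static Colour composite_over(Colour src, Colour dst)
{
    Colour out;
    out.r = (uint8_t)((src.r * src.a + dst.r * (255 - src.a)) / 255);
    out.g = (uint8_t)((src.g * src.a + dst.g * (255 - src.a)) / 255);
    out.b = (uint8_t)((src.b * src.a + dst.b * (255 - src.a)) / 255);
    out.a = (uint8_t)(src.a + (dst.a * (255 - src.a)) / 255);
    return out;
}

/* Composite once per run of constant colour rather than once per pixel:
 * extend each run while both inputs repeat the same colour pair, evaluate
 * the operator once, then replicate the single result across the run. */
static void composite_per_run(const Colour *src, const Colour *dst,
                              Colour *out, size_t len)
{
    size_t i = 0;
    while (i < len) {
        size_t j = i + 1;
        while (j < len &&
               memcmp(&src[j], &src[i], sizeof(Colour)) == 0 &&
               memcmp(&dst[j], &dst[i], sizeof(Colour)) == 0)
            j++;
        Colour result = composite_over(src[i], dst[i]);   /* one composite */
        for (size_t k = i; k < j; k++)
            out[k] = result;                              /* many writes   */
        i = j;
    }
}

Note that identical colour pairs recurring in separate, non-adjacent runs are still composited again under this scheme; avoiding that residual redundancy is what the arrangements described below address.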
Further, where one or more objects are filled with or otherwise contain a bitmap image, there may be many pixels in the bitmap image that have the same colour as each other, potentially giving rise to redundant compositing. This redundant compositing requires extraneous computational resources that is unnecessarily wasted. 20 SUMMARY In accordance with one aspect of the present disclosure, there is provided rendering apparatus for rendering a run of a palette image data representation comprising at least one level requiring compositing, said rendering apparatus comprising: 2414524_1 921408_specilodge -3 at least one Colour and Mask Generator for receiving the palette image data representation comprising a plurality of input colour data to be composited and at least one set of indices associated with one of the plurality of input colour data, said Colour and Mask Generator for generating a set of mask bits for each of the plurality of input 5 colour data; a Position Generator for generating a position mask for an output colour data based on the generated sets of mask bits, such that there is at least one pixel in the run with a different colour data from the output colour data separating two pixels with the output colour data; 10 at least one Output Generator for generating the output colour data from the plurality of input colour data of the palette image data representation; and a Pixel Duplicator for duplicating the output colour data in the run based on the generated position mask. Typically, the Output Generator composites the plurality of input colour data to 15 generate the output colour data. In accordance with another aspect of the present disclosure, there is provided a method of rendering a run of input pixel data, said method comprising the steps of: selecting a pixel in the run, said pixel having a first output colour; determining, for the run, a plurality of pixels with the first output colour, at least 20 one of the plurality of pixels being separated from another of the determined plurality of pixels by a further pixel with a second output colour; and writing the first output colour to the determined plurality of pixels. In accordance with another aspect of the present disclosure, there is provided a Data Processing Architecture for rendering a run of indexed pixel data, said run of 2414524_1 921408_speci_lodge -4 indexed pixel data comprising at least one level, said Data Processing Architecture comprising: a Position Generator for generating a set of masks for the run using the indexed pixel data, each of said set of masks being associated with a colour value; 5 a Colour and Mask Generator for compositing a plurality of colours referenced by the indexed pixel data to obtain the colour values; and a Pixel Duplicator for writing the colour values according to the associated mask. Other aspects are also disclosed. BRIEF DESCRIPTION OF THE DRAWINGS 10 At least one embodiment of the present invention will now be described with reference to the following drawings, in which: Figs. IA and lB collectively form a schematic block diagram of a general purpose computing system and printing system in which the arrangements to be described may be implemented; 15 Fig. 2 is a schematic block diagram of a pixel rendering apparatus in which the arrangements to be described may be implemented; Fig. 3 is a diagram showing an example of two runs of colour data to be composited; Fig. 4 is deliberately omitted; 20 Figs. 
5A to 5C show an example of the result of rendering according to the present disclosure; Fig. 6 is a schematic block diagram of a data processing architecture for rendering palette images; 2414524_1 921408_specilodge -5 Fig. 7 is a schematic block diagram of an exemplary architecture of a colour and mask generator (CMG) as used in Fig. 6; Fig. 8 is a schematic block diagram of an exemplary architecture of a position generator as used in Fig. 6; 5 Fig. 9 is a schematic flow diagram illustrating a method of rendering a run of pixels according to the present disclosure; Fig. 10 is a schematic diagram illustrating an example of colour input to the colour and mask generators of Fig. 6; Fig. 11 is a schematic diagram illustrating an example input into the position 10 generator of Fig. 8; Fig. 12A is a diagram illustrating an example of an internal copy operation as carried out by a pixel duplicator of the data processing architecture of Fig. 6; Fig. 12B is a diagram illustrating an example of an internal copy operation as carried out by a pixel duplicator of the data processing architecture of Fig. 6; 15 Fig. 12C is a diagram illustrating an example of an internal copy operation as carried out by a pixel duplicator of the data processing architecture of Fig. 6. DETAILED DESCRIPTION INCLUDING BEST MODE Overview The present disclosure is generally directed to reducing redundant compositing 20 within a rendering system. A "palette image" is a known encoded representation that can be used to store digital images, such as bitmap images. A "palette image" is more accurately referred to as a palette image data representation, to distinguish from other forms of encoded image representation. A palette image data representation comprises of an array of indices and 2414524_1 921408_speci_lodge -6 a lookup table (LUT). Each index in the array maps to an entry in the LUT containing a single colour value, e.g. a four-channel colour value, such as a colour having RGBA (Red, Green Blue and Alpha) components. The number of bits used to represent each index in the array is typically one, two, four or eight bits. These numbers of bits 5 correspond with LUT sizes of two, four, sixteen, and two hundred and fifty six entries respectively. The number of bits used to store each colour channel value in the LUT is typically eight bits or more. Since, in a palette image data representation, the colour data is stored separately from the image data, the colour data being the LUT and the image data being a two-dimensional map of the indices, such an image representation is 10 inherently compressed, providing a compact and efficient method of representing images within a rendering system. When a palette image is to be composited, each index of the palette image data representation is "decompressed" as needed, by substituting the index value with the colour value of the palette located at the index value. As each index value is decompressed into a colour value, the colour value is composited with other levels in 15 the pixel as needed. However, because there is a generally limited (small) set of colours in the palette image representation, it is likely that the same compositing operation is carried out multiple times when compositing a number of pixels of a palette image to produce a final pixel image output. The arrangements to be described provide a method of efficiently processing 20 palette and palette-like images in a region rendering system. 
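As a concrete, purely illustrative model of the palette image data representation described above, the structure below pairs an index array with a colour look-up table. The 2-bit index width, the four-entry LUT, the 64-pixel maximum run length and the field names are assumptions made for the sketch rather than requirements of the arrangement.

#include <stdint.h>
#include <stddef.h>

typedef struct { uint8_t r, g, b, a; } Colour;   /* e.g. a four-channel RGBA value */

/* A palette image data representation for one run of pixels: the colour data
 * (the LUT) is stored separately from the image data (the indices), so the
 * representation is inherently compressed. */
typedef struct {
    Colour  lut[4];          /* 2-bit indices address a four-entry palette */
    uint8_t indices[64];     /* one small index per pixel position         */
    size_t  run_length;
} PaletteRun;

/* "Decompressing" a pixel is a single substitution of its index value with
 * the colour value stored in the palette at that index. */
static Colour palette_lookup(const PaletteRun *image, size_t position)
{
    return image->lut[image->indices[position]];
}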
Particularly, the rendering of a palette image data representation may be mixed with the rendering of other image data representations, such as a raw bitmap (e.g. photographic) image, for example. As seen in Fig. 1 A, the computer system 100 is formed by a computer module 101, input devices such as a keyboard 102, a mouse pointer device 103, a 2414524_1 921408_speci_lodge -7 scanner 126, a camera 127, and a microphone 180, and output devices including a printer 115, a display device 114 and loudspeakers 117. An external Modulator Demodulator (Modem) transceiver device 116 may be used by the computer module 101 for communicating to and from a communications network 120 via a connection 121. 5 The network 120 may be a wide-area network (WAN), such as the Internet or a private WAN. Where the connection 121 is a telephone line, the modem 116 may be a traditional "dial-up" modem. Alternatively, where the connection 121 is a high capacity (eg: cable) connection, the modem 116 may be a broadband modem. A wireless modem may also be used for wireless connection to the network 120. 10 The computer module 101 typically includes at least one processor unit 105, and a memory unit 106 for example formed from semiconductor random access memory (RAM) and semiconductor read only memory (ROM). The module 101 also includes an number of input/output (I/O) interfaces including an audio-video interface 107 that couples to the video display 114, loudspeakers 117 and microphone 180, an 1/0 15 interface 113 for the keyboard 102, mouse 103, scanner 126, camera 127 and optionally a joystick (not illustrated), and an interface 108 for the external modem 116 and printer 115. In some implementations, the modem 116 may be incorporated within the computer module 101, for example within the interface 108. The computer module 101 also has a local network interface Ill which, via a connection 123, permits coupling of 20 the computer system 100 to a local computer network 122, known as a Local Area Network (LAN). As also illustrated, the local network 122 may also couple to the wide network 120 via a connection 124, which would typically include a so-called "firewall" device or device of similar functionality. The interface 111 may be formed by an 2414524_1 921408_speci_lodge -8 Etherneti" circuit card, a BluetoothTM wireless arrangement or an IEEE 802.11 wireless arrangement. The interfaces 108 and 113 may afford either or both of serial and parallel connectivity, the former typically being implemented according to the Universal Serial 5 Bus (USB) standards and having corresponding USB connectors (not illustrated). Storage devices 109 are provided and typically include a hard disk drive (HDD) 110. Other storage devices such as a floppy disk drive and a magnetic tape drive (not illustrated) may also be used. An optical disk drive 112 is typically provided to act as a non-volatile source of data. Portable memory devices, such optical disks (eg: CD-ROM, 10 DVD), USB-RAM, and floppy disks for example may then be used as appropriate sources of data to the system 100. The components 105 to 113 of the computer module 101 typically communicate via an interconnected bus 104 and in a manner which results in a conventional mode of operation of the computer system 100 known to those in the relevant art. The storage 15 devices 109, memory 106 and optical disk drive 112 are connected to other components of the computer system 100 via the connection 119. 
Examples of computers on which the described arrangements can be practised include IBM-PC's and compatibles, Sun Sparcstations, Apple Mac M or alike computer systems evolved therefrom. The method of processing and rendering a bitmap object may be implemented 20 using the computer system 100 wherein the processes of Figs. 2 - 10, to be described, may be implemented as one or more software application programs 133 executable within the computer system 100. In particular, the steps of the method of rendering a bitmap object are effected by instructions 131 in the software 133 that are carried out within the computer system 100. The software instructions 131 may be formed as one or 2414524_1 921408_specilodge -9 more code modules, each for performing one or more particular tasks. The software may also be divided into two separate parts, in which a first part and the corresponding code modules performs the described methods and a second part and the corresponding code modules manage a user interface between the first part and the user. 5 The software 133 is generally loaded into the computer system 100 from a computer readable medium, and is then typically stored in the HDD 110, as illustrated in Fig. 1A, or the memory 106, after which the software 133 can be executed by the computer system 100. In some instances, the application programs 133 may be supplied to the user encoded on one or more CD-ROM 125 and read via the corresponding 10 drive 112 prior to storage in the memory 110 or 106. Alternatively the software 133 may be read by the computer system 100 from the networks 120 or 122 or loaded into the computer system 100 from other computer readable media. Computer readable storage media refers to any storage medium that participates in providing instructions and/or data to the computer system 100 for execution and/or processing. Examples of such storage 15 media include floppy disks, magnetic tape, CD-ROM, a hard disk drive, a ROM or integrated circuit, USB memory, a magneto-optical disk, or a computer readable card such as a PCMCIA card and the like, whether or not such devices are internal or external of the computer module 101. Examples of computer readable transmission media that may also participate in the provision of software, application programs, instructions 20 and/or data to the computer module 101 include radio or infra-red transmission channels as well as a network connection to another computer or networked device, and the Internet or Intranets including e-mail transmissions and information recorded on Websites and the like. 2414524_1 921408_speci_lodge - 10 The second part of the application programs 133 and the corresponding code modules mentioned above may be executed to implement one or more graphical user interfaces (GUls) to be rendered or otherwise represented upon the display 114. Through manipulation of typically the keyboard 102 and the mouse 103, a user of the computer 5 system 100 and the application may manipulate the interface in a functionally adaptable manner to provide controlling commands and/or input to the applications associated with the GUI(s). Other forms of functionally adaptable user interfaces may also be implemented, such as an audio interface utilizing speech prompts output via the loudspeakers 117 and user voice commands input via the microphone 180. 10 The printer 115 comprises a controller processor 1151 for executing a controlling program 1152, a pixel rendering apparatus 1153, and a printer engine 1154, coupled via a bus 1155. 
The printer 115 may also have a resident memory (not illustrated). The pixel rendering apparatus 1153 is preferably in the form of an application specific integrated circuit (ASIC) coupled via the bus 1155 to the controller processor 1151, and 15 the printer engine 1154. However, the pixel rendering apparatus 1153 may alternatively be implemented in software as part of the controlling program and executable by the controller processor 1151. In the computer system 100, the processor 105 executes the software application 133 to create page-based documents where each page contains objects such as text, lines, 20 fill regions, and image data. The software application 133 as executed, causes the processor 105 to send a high level description of the page (for example a Page Description Language (PDL) file) via the interface 108 to the controlling program 1152 as executed in the controller processor 1151 of the printer 115. The controlling program 1152 then interprets this high level description of the page, and causes the controller 2414524_1 921408_specilodge -11 processor 1151 to send rendering instructions to the pixel rendering apparatus 1153. The program 1152 executing on the controller processor 1151 is also responsible for providing memory for the pixel rendering apparatus 1153, initialising the pixel rendering apparatus 1153, and instructing the pixel rendering apparatus 1153 to start rendering the 5 page description. The pixel rendering apparatus 1153 then uses the rendering instructions to render the page description to pixels. The output of the pixel rendering apparatus 1153 is colour pixel data, which may be used by the printer engine 1154 to print a hard-copy image on a medium such as paper. The colour pixel data may also be directly 10 reproduced upon the display 114. When the printer 115 via the controlling program 1152 receives the description of the page from the computer module 1010 via the software application 133, the controlling program 1152 first converts objects in the page description into an intermediate page representation, called a display list. Each object in the display list 15 generally contains a rendering instruction or fill. The fill of an object indicates to the pixel rendering apparatus 1153 how to generate colour information for pixels activated by the object. Examples of types of fills are flat colours, bitmaps, linear blends and radial blends. In order to generate a fill, the controlling program 1152 will convert the object specified in the page description into a fill instruction that can be used by the pixel 20 rendering apparatus 1153 to generate pixel colour data. Each fill instruction must be executed for each pixel in which its object is active. Therefore each fill instruction may be executed a large number of times. It is crucial to the performance of the pixel rendering system that the fill instruction be executed as 2414524_1 921408_specilodge -12 efficiently as possible. For this reason, the controlling program 1152 will generate fill instructions which are efficient for the pixel rendering apparatus 1153 to evaluate. Fig. I B is a detailed schematic block diagram of the processor 105 and a "memory" 134. The memory 134 represents a logical aggregation of all the memory 5 devices (including the HDD 110 and semiconductor memory 106) that can be accessed by the computer module 101 in Fig. IA. When the computer module 101 is initially powered up, a power-on self-test (POST) program 150 executes. 
The POST program 150 is typically stored in a ROM 149 of the semiconductor memory 106. A program permanently stored in a 10 hardware device such as the ROM 149 is sometimes referred to as firmware. The POST program 150 examines hardware within the computer module 101 to ensure proper functioning, and typically checks the processor 105, the memory (109, 106), and a basic input-output systems software (BIOS) module 151, also typically stored in the ROM 149, for correct operation. Once the POST program 150 has run successfully, the 15 BIOS 151 activates the hard disk drive 110. Activation of the hard disk drive 110 causes a bootstrap loader program 152 that is resident on the hard disk drive 110 to execute via the processor 105. This loads an operating system 153 into the RAM memory 106 upon which the operating system 153 commences operation. The operating system 153 is a system level application, executable by the processor 105, to fulfil various high level 20 functions, including processor management, memory management, device management, storage management, software application interface, and generic user interface. The operating system 153 manages the memory (109, 106) in order to ensure that each process or application running on the computer module 101 has sufficient memory in which to execute without colliding with memory allocated to another process. 2414524_1 921408_speci_lodge - 13 Furthermore, the different types of memory available in the system 100 must be used properly so that each process can run effectively. Accordingly, the aggregated memory 134 is not intended to illustrate how particular segments of memory are allocated (unless otherwise stated), but rather to provide a general view of the memory 5 accessible by the computer system 100 and how such is used. The processor 105 includes a number of functional modules including a control unit 139, an arithmetic logic unit (ALU) 140, and a local or internal memory 148, sometimes called a cache memory. The cache memory 148 typically includes a number of storage registers 144 - 146 in a register section. One or more internal buses 141 10 functionally interconnect these functional modules. The processor 105 typically also has one or more interfaces 142 for communicating with external devices via the system bus 104, using a connection 118. The application program 133 includes a sequence of instructions 131 that may include conditional branch and loop instructions. The program 133 may also include 15 data 132 which is used in execution of the program 133. The instructions 131 and the data 132 are stored in memory locations 128-130 and 135-137 respectively. Depending upon the relative size of the instructions 131 and the memory locations 128-130, a particular instruction may be stored in a single memory location as depicted by the instruction shown in the memory location 130. Alternately, an instruction may be 20 segmented into a number of parts each of which is stored in a separate memory location, as depicted by the instruction segments shown in the memory locations 128-129. In general, the processor 105 is given a set of instructions which are executed therein. The processor 105 then waits for a subsequent input, to which it reacts to by executing another set of instructions. 
Each input may be provided from one or more of a 2414524_1 921408_specilodge -14 number of sources, including data generated by one or more of the input devices 102, 103, data received from an external source across one of the networks 120, 122, data retrieved from one of the storage devices 106, 109 or data retrieved from a storage medium 125 inserted into the corresponding reader 112. The 5 execution of a set of the instructions may in some cases result in output of data. Execution may also involve storing data or variables to the memory 134. The disclosed rendering arrangements can use input variables 154, that are stored in the memory 134 in corresponding memory locations 155-158. The bitmap rendering arrangements produce output variables 161, which are stored in the memory 134 in 10 corresponding memory locations 162-165. Intermediate variables may be stored in memory locations 159, 160, 166 and 167. The register section 144-146, the arithmetic logic unit (ALU) 140, and the control unit 139 of the processor 105 work together to perform sequences of micro operations needed to perform "fetch, decode, and execute" cycles for every instruction in 15 the instruction set making up the program 133. Each fetch, decode, and execute cycle comprises: (a) a fetch operation, which fetches or reads an instruction 131 from a memory location 128; (b) a decode operation in which the control unit 139 determines which 20 instruction has been fetched; and (c) an execute operation in which the control unit 139 and/or the ALU 140 execute the instruction. 2414524_1 921408 speci_lodge - 15 Thereafter, a further fetch, decode, and execute cycle for the next instruction may be executed. Similarly, a store cycle may be performed by which the control unit 139 stores or writes a value to a memory location 132. Each step or sub-process in the processes of Figs. 2 - 12 is associated with one or 5 more segments of the program 133, and is performed by the register section 144-147, the ALU 140, and the control unit 139 in the processor 105 working together to perform the fetch, decode, and execute cycles for every instruction in the instruction set for the noted segments of the program 133. Similar functionality of the arrangement shown in Fig. I B also applies to the 10 controller processor 1151 and controlling program 1152 of the printer 115. The methods of rendering may alternatively be implemented in dedicated hardware such as one or more integrated circuits performing the functions or sub functions to be described. Such dedicated hardware may include graphic processors, digital signal processors, or one or more microprocessors and associated memories. 15 Fig. 2 shows a schematic overview of a Region Rendering System 200 in which the arrangements presently disclosed can be performed. The Region Rendering System 200 can form part of the Pixel Rendering Apparatus 1153 or the Printer Engine 1154, and be implemented in hardware (for example using an ASIC), in software executing on a processor, or a combination of the two. The Region Rendering System 200 contains an 20 Instruction Controller 210, a Fill Controller 220 and a Data Processing Architecture 600. The Instruction Controller 210 interprets rendering instructions 212 received from the controller processor 1151, for example, and provides fill instructions 214 to the Fill Controller 220. 
In response to the fill instructions 214 from the Instruction Controller 210, the Fill Controller 220 generates colour data 222 for each active object within a 2414524_1 921408_specilodge - 16 current run of pixels being rendered. Colour data generated by the Fill controller 220 is then processed by the Data Processing Architecture 600 to form rendered output pixel data 650 for the current run. The Data Processing Architecture 600 provides means to efficiently render colour data passed to it from the Fill Controller 220. 5 Shown in Fig. 3 is an example of a First Run 310 of colour data and a Second Run 320 of colour data that is to be composited into a Result Buffer 330. The First Run 310 of colour data contains the colours, Colour A and Colour B, and may for example be considered as colours for an object at a first (lower) level. The Second Run 320 of colour data contains the colours, Colour C and Colour D, which may for example be 10 considered as colours for a another object at a second (higher) level. The compositing results for each combination, being individual colours CO, Cl and C2, are indicated. Each of the runs 310 and 320 has a run length of 8 pixels. Shown in Figs. 5A, 5B and 5C are the steps taken to composite the First Run 310 of colour data with the Second Run 320 of colour data according to the present disclosure, each having 8 pixel positions 15 PO - P7. In Fig. 5A, Result CO 510 is calculated by compositing Colour A with Colour C. The Result CO 510 is copied into the first and second positions of the Result Buffer 330 as the Data Processing Architecture 600 determines that these positions contain the same source colours to be composited, and hence the same result. In Fig. 5B, Result Cl 520 is calculated by compositing Colour A with Colour D. The Result C1 520 is copied 20 into the third, fourth and sixth position of the Result Buffer 330 as the Data Processing Architecture 600 determines that these positions contain the same source colours to be composited, and hence the same result. By pre-processing the region being rendered before a compositing operation, pixels of the same color are determined even though the pixels are located within a different contiguous region. Thus, the architecture 600 2414524_1 921408_specilodge - 17 provides the ability to avoid a redundant compositing operation for the sixth pixel in the run. In Fig. 5C, Result C2 530 is calculated by compositing Colour B with Colour C. The Result C2 530 is copied into the fifth, seventh and eighth positions of the Result Buffer 330 as the Data Processing Architecture 600 determines that these positions 5 contain the same source colours to be composited. Again, it is noted that the fifth, seventh and eighth pixels are not located within a contiguous run. As such, a redundant compositing operation for the seventh and the eighth pixels is avoided. As will be apparent from Fig. 5C, there may be considered a selected pixel position (e.g. P2) having a first output colour (e.g. CI) and at least one further pixel position (e.g. P5) with the 10 first output colour (C1) such that there is at least another pixel position (e.g. P4) in the run with a second output colour (e.g. C2) separating the selected and the further pixel positions (P2, P5). As it can be seen in the example of Figs. 5A, 5B and 5C, the minimum number of compositing operations, three, are carried out in order to fill the Result Buffer 330. 
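The three compositing operations of this example can be traced with a short, purely illustrative program. The character codes stand in for the colours of Fig. 3, the composite() stub stands in for the real operator, and the run layouts are one arrangement consistent with the result positions described above.

#include <stdio.h>

enum { RUN_LEN = 8 };

/* Stand-in for the real compositing operation: maps the three colour pairs
 * that occur in Figs. 3 and 5A to 5C to the results C0, C1 and C2. */
static char composite(char lower, char upper)
{
    if (lower == 'A' && upper == 'C') return '0';   /* Result C0 */
    if (lower == 'A' && upper == 'D') return '1';   /* Result C1 */
    return '2';                                     /* Result C2 (B with C) */
}

int main(void)
{
    /* First Run 310 (lower level) and Second Run 320 (higher level),
     * positions P0..P7 as in Fig. 3. */
    const char lower[RUN_LEN] = {'A','A','A','A','B','A','B','B'};
    const char upper[RUN_LEN] = {'C','C','D','D','C','D','C','C'};

    char result_buffer[RUN_LEN + 1] = {0};
    int  done[RUN_LEN] = {0};

    for (int i = 0; i < RUN_LEN; i++) {
        if (done[i])
            continue;
        char r = composite(lower[i], upper[i]);    /* one composite per pair */
        for (int j = i; j < RUN_LEN; j++) {        /* copy to every match    */
            if (!done[j] && lower[j] == lower[i] && upper[j] == upper[i]) {
                result_buffer[j] = r;
                done[j] = 1;
            }
        }
    }
    printf("%s\n", result_buffer);   /* prints 00112122: only three composites */
    return 0;
}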
The Data Processing Architecture 600 of the Region Rendering 15 System 200 provides a means to determine the minimum number of composites required to be carried out for a run of colour data to be composited. It is to be observed that, in some compositing systems, the result of compositing colour A (at a higher level) with colour C (at a lower level), may not be the same as compositing colour C (at the higher level) with colour A (at the lower level), in view of the particular compositing operation 20 being performed (eg. A over C, C over A) and the relative opacity/transparency value (Alpha) of each of the colours. Hence, these alternatives may require an additional compositing. Fig. 6 shows the details of an exemplary implementation of the Data Processing Architecture 600. The Data Processing Architecture 600 comprises four different 2414524_1 921408_speci_lodge - 18 components, including a Pixel Duplicator 610, a Position Generator 800, two Output Generators 630, and three Colour and Mask Generators (CMG) 601, 602 and 603. The Data Processing Architecture 600 processes a plurality of colour data 222, each colour data representing a run (array) of colour values, received from the Fill 5 Controller 220. The Region Rendering System 200 may use the Data Processing Architecture 600 to process a region. Alternatively, if the region rendering system 200, and notably the instruction controller 210, determines that the fills within a given region are not suitable for efficient rendering by the Data Processing Architecture 600, then an alternate method of rendering the region may be used. For example, some linear ramp 10 fills may not be suitable because every colour may different. In this case, since every colour is different, every pixel will need to be composited separately. Bitmaps with a large number of bits per pixel, like a photograph may also be not suitable for rendering by the data processing architecture 600, as most colours in the photograph are different. However, a ramp fill that changes very slowly may be suitable for processing by the data 15 processing architecture 600. Those regions deemed not suited to rendering by the data processing architecture 600 may be rendered by other apparatus (not illustrated or described) of the Region Rendering System 200. Each of the plurality of colour data 222 is derived from a unique fill within a region to be rendered. Colour data from each fill is sent to a different input designated 20 by a first input 633, a second input 635 and a third input 637 of the Data Processing Architecture 600, as described later. The Data Processing Architecture 600 of the specific implementation to be described can process colour data from up to, say, three unique fills, each representing a run length of up to sixty-four (64) colour values, at any one time. 2414524_1 921408_specilodge - 19 For a given run length, a palette image colour data representation for each fill is desirably represented as two separate components: - a set of indices for each pixel position in the run length; and - a colour LUT associated with the set of indices. 5 The output of the Data Processing Architecture 600 is a run length of pixels representing the composited input colour data. The Region Rendering System 200 may use the Data Processing Architecture 600 multiple times throughout the rendering of a display list to completely composite a run length of colour data. The Data Processing Architecture 600 utilises a process 900, to be described with reference to Fig. 
9, to 10 determine the minimum number of compositing operations to be carried out for a given run length of colour data. As each combination of compositing data is determined, the result of the compositing operation is copied into one or more output positions representing the positions for which colour data for the given run length is the same as that currently being composited. By utilising the Data Processing Architecture 600, the 15 Region Rendering System 200 can reduce the amount of redundant compositing that occurs during rendering. In Figs. 7 and 10, a short 45 degree line is used to denote the exemplary width of a bus connecting two components. A numeral is placed close to the line and provides the width of the bus in units of 'number of bits'. 20 Detailed Operation A current run length is set to the width of a region to be rendered on a given scan line, as defined by Region Rendering System 200. Fig. 6 shows a first input 633, second input 635 and third input 637 of the Data Processing Architecture 600. Each of the inputs 633, 635 and 637 receives input colour 2414524_1 921408_speci_lodge -20 data of a different fill within the region currently being rendered and a set of indices which collectively form the palette image data representation from which the final image is to be ultimately rendered. For each of the inputs 633, 635 and 637, the set of indices is routed to a 5 corresponding Palette Indices input 710 (seen in Fig. 7) of the respective Colour and Mask Generator (CMG) 601, 602 and 603. For each of the inputs 633, 635 and 637 of the Data Processing Architecture 600, the input colour data is routed to the Colour Values input 750 of the respective Colour and Mask Generator (CMG) 601, 602 and 603. Also shown in Fig. 7 is a Pixel Select Value 720 which is routed from the Position 10 Generator 800 to each of the Colour and Mask Generators 601, 602 and 603. For a given run within a region to be composited, if any fill to be composited can not be represented as a palette image, then the Region Rendering System 200 breaks down the run length until each fill within the new run length can be represented as a palette image. If the Region Rendering System 200 determines that breaking down the 15 run lengths is inefficient or futile, then an alternate method of rendering the region may be used. Fig. 6 shows the connections of the Data Processing Architecture 600, where: (i) The input colour data is connected directly to the Colour and Mask Generators 601, 602 and 603; 20 (ii) Mask Bits outputs 605, 606 and 607 of the Colour and Mask Generators 601, 602 and 603 are connected to the Position Generator 800; (iii) Output Colours 790 of each of the Colour and Mask Generator 601, 602 and 603 are connected to the two Output Generators 630; 2414524_1 921408_specilodge -21 (iv) Pixel Positions 830 formed by the Position Generator 800 are connected to the Pixel Duplicator 610; (v) The Pixel Select Value 720 generated by the Position Generator 800 are connected to each of the Colour and Mask Generators 700; and 5 (vi) The Data Processing Architecture 600 has an Output Bus 650. Fig. 7 shows further detail of the Colour and Mask Generators (CMG) 601, 602 and 603. The Colour and Mask Generators 601, 602 and 603 are each used to generate a corresponding Output Colour 790 and a set of Mask Bits 730, representing the mask outputs 605, 606 and 607. 
The positions where the Mask Bits 730 are set to '1' to 10 represent the positions within the current run length of colour data, having a length of (n+1) pixels (i.e. pixel positions PO, Pl, P2, ... Pn), for which a palette index from the set of Palettes Indices 710 is the same as the palette index selected by the Pixel Select Value 720. Fig. 7 shows that the Colour and Mask Generator 700 presently described uses Palette Indices 710, which has, for example, 2 bits per index. 15 All fills input into the Data Processing Architecture 600 are input as a palette image representation. Some fills, although not inherently stored as a palette image representation, can be and are mapped by the Region Rendering System 200 into a palette image representation. For example, a fill with a constant colour value is mapped to a palette image representation by setting the Palette Indices 710 input value to 0 and a 20 single Colour Value 750 to the colour of the fill. The format of the Colour Values 750 is not important to the Data Processing Architecture 600, but in the exemplary implementation the colour values 750 comprises a five 12-bit channels of colour data. 2414524_1 921408_specilodge -22 The Palette Indices 710 and Colour Values 750 represent a palette image as previously described. The Palette Indices 710 input receives a set of indices numbering the current run length, n + 1. As seen in Fig. 7, the Colour Values 750 representing the colour data of the associated Palette Indices 710 are loaded into and populate a look-up 5 table (LUT) 740. For a palette image representation containing a single constant colour, the colour is loaded into position 0 of the LUT 740. In Fig. 7, Palette Indices 710 conform to the following convention: (i) Ioo and 10, represent the 2 bit index of position 0 of the set of Palette Indices 710 for the current run length of colour data; 10 (ii) 11o and 11, represent the 2 bit index of position 1 of the set of Palette Indices 710 for the current run length of colour data; and (iii) Ino and In, represent the 2 bit index of position n of the set of Palette Indices 710 for the current run length of colour data. Fig. 7 also shows other connections within the Colour and Mask Generator 700 15 where: (i) The Pixel Select Value 720 is connected to Multiplexer 760. The Pixel Select Value 720 is generated by the output of the Position Generator 800 as shown in Fig. I and described in more detail with reference to Fig. 4. The Pixel Select Value 720 is used by the Colour and Mask Generator 700 to determine a current position within the 20 current run length of colour data for which to process. (ii) The Multiplexer 760 outputs an Index Value 725 associated with the currently selected position within the run length of colour data. (iii) The Index Value 725 is connected into an Inverter 770 which drives a set of XOR gates 780 and AND gates 785 which are used to produce the Mask Bits 605, 606 2414524_1 921408_specilodge -23 and 607 at the output of the Colour and Mask Generator 700, as shown in Fig. 7. The XOR gates 780 and AND gates 785 have inputs from both the Inverter 770 and Palette Indices 710. The XOR gates 780 and AND gates 785 perform the logical 'xor' and 'and' operations as are well known in the art. 5 (iv) An Output Colour 790 of the LUT 740 is the result of a lookup into the LUT 740 as indexed by the Index Value 725. (v) The Mask Bits 605, 606 and 607 comprise, in an exemplary implementation, a single mask bit for each position MO, M1, ... 
Mn, where the number "n" represents the position within the mask. That is, MO represents the value of the 10 Mask Bits 605, 606 and 607 at position 0, and Mn represents the value of the Mask Bits 605, 606 and 607 at position n. The Position Generator 800 is shown in Fig. 8. The Position Generator 800 resolves the Mask Bits 730 from the Colour and Mask Generators 700 to produce a Position Mask 830. Each of the Mask Bits 730 from the Colour and Mask Generators 15 700 are input, grouped by their position, into the Position Generator 800 as shown in Fig. 8. The Position Generator 800 also contains the following, where: (i) The Position Mask 830 contains '1's at the positions at which each Colour and Mask Generator 700 has the same index value as that selected by Pixel Select 20 Value 720. (ii) The input Mask Bits 730 are routed through Combinatorial Logic 810 and then connected to a Priority Encoder 820. (iii) An Output 822 of the Priority Encoder 820, which represents a next pixel select value, is connected to a Register 825, 2414524_1 921408_speci_lodge - 24 (iv) The output of the Register 825 represents the current Pixel Select Value 720. Referring to Fig. 3, the exemplary palette image representation comprises a First Run 310 of colour data and associated set of indices. For example, colour A is associated 5 with a exemplary set of indices 11110100, with '1' representing that colour A is to be rendered for a particular pixel in the run. As mentioned before, the colour data A and B and the set of indices associated with colour data A are input into Colour and Mask Generator (CMG) 601, 602 and 603. On the other hand, the palette image representation of the Second Run 320 comprises colour data C and D, and associated with colour C, 10 for example, a set of indices 11001011, again with '1' representing that colour C is to be rendered for a particular pixel in the run. Colour data C and D and the set of indices associated with colour data C are fed into the Colour and Mask Generator (CMG) 601, 602 and 603 so that these modules can composite the colours to produce the output colour 790. The Colour and Mask Generators 601, 602 and 603 also generates the set of 15 indices associated with colour data B and D using the input set of indices of colour data A and C respectively. Then, the Position Generator 800 receives the input and generated set of indices of the Second Run 320 to produce the positions of different output colours called the Position Mask 830. For example, for an Output Colour formed from the composited colour data A and C, the Position Generator 800 outputs a Position Mask 20 830 of 11000000, with '1' indicating where the composited colour of A and C should go in the rendered run 330. The Output Generator 630 combines colour data present on the corresponding inputs 790 to generate a Pixel Result 620, being a pixel output, each time a new Output Colour 790 is selected or otherwise generated by the CMGs 601-603. In the illustrated 2414524_1 921408_specilodge -25 implementation there is more than one Output Generator 630, thereby permitting the Output Generators 620 to share the load of processing the colour data. Colour data is combined according to the requirements of the Region Rendering System 200. The Output Generator 630 may also optionally perform other operations such as colour space 5 conversion on the input colour data, before outputting a final pixel result. The result, Pixel Result 620, of Output Generator 630 is connected to the Pixel Duplicator 610. 
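Before turning to the Pixel Duplicator 610, the mask logic of Figs. 7 and 8 can be summarised in software terms. The sketch below is illustrative only: it assumes a run length of at most 64 so that each mask fits in a 64-bit word (bit 0 corresponding to position P0), and it replaces the inverter, XOR and AND gate network with an equality comparison.

#include <stdint.h>

/* Role of one Colour and Mask Generator (Fig. 7): for the pixel chosen by the
 * pixel select value, set a mask bit at every position of the run whose
 * palette index equals that pixel's index. */
static uint64_t cmg_mask_bits(const uint8_t *palette_indices, int run_length,
                              int pixel_select)
{
    uint64_t mask = 0;
    uint8_t selected_index = palette_indices[pixel_select];
    for (int pos = 0; pos < run_length; pos++) {
        if (palette_indices[pos] == selected_index)
            mask |= (uint64_t)1 << pos;
    }
    return mask;
}

/* Role of the Position Generator (Fig. 8): a position survives only where
 * every level (every Colour and Mask Generator) sees the same input colour
 * as at the selected pixel, i.e. the bitwise AND of the per-level masks. */
static uint64_t position_mask(const uint64_t *level_masks, int num_levels)
{
    uint64_t mask = ~(uint64_t)0;
    for (int level = 0; level < num_levels; level++)
        mask &= level_masks[level];
    return mask;
}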
The Pixel Duplicator 610 is responsible for duplicating the Pixel Result 620 across the positions as determined by the Position Mask 830. At the positions where the Position Mask 830 is set to 'I', the Pixel Result 620 is 10 copied. Desirably, this operation is performed sequentially, although may be performed in parallel. The Pixel Duplicator 610 contains a Destination buffer 640 used to store the run length of pixels generated by a given input Pixel Result 620 and Position Mask 830. Once all the positions within the buffer 640 contain composited pixel data, the Pixel Duplicator 610 outputs the pixel data on Output Bus 650. 15 Other information used by the Data Processing Architecture 600, such as compositing operation type, control flow logic, colour space conversion information and the like, have been omitted from Fig. 6 for clarity as such do not impact upon the salient features of rendering according to the present disclosure. Process 20 The architecture of Fig. 6 is now described with reference to the flow chart shown in Fig. 9, and to the example data shown in Figs. 10, 11, 12A, 12B and Fig. 12C. Fig. 9 is a flow chart of a process 900 describing the steps taken to composite a run length of colour data using the Data Processing Architecture 600 and generating an output run length of pixel data on Output Bus 650. The process 900 may be 2414524_1 921408_specilodge -26 implemented in hardware, such as with electronic components arranged akin to Figs. 6 8, or as an application/software program, able to be stored in a memory 106 or HDD 110 and executable by the processor 105. The process 900 starts at a Receive Colours step 910 where the Data Processing 5 Architecture 600 receives the Inputs 633, 635 and 637, including palette indices and colour values pertaining to a span of pixels for an object to be rendered. Each Input 633, 635 and 637 of the Data Processing Architecture 600 is connected to a separate Colour and Mask Generator 700, as shown in Fig. 6. In the Receive Colours step 910 of the process 900, colour values present on the Inputs 633, 635 and 637 are loaded into the 10 LUT 740 of an instance of a Colour and Mask Generator 700 via the Colour Values 750 input. Palette indices present on the Inputs 633, 635 and 637 are directed to the Inverter 770 and the set of XOR gates 780 of an instance of the Colour and Mask Generator 700 via the Palette Indices 710 input. As shown in Fig. 6 and described with reference to Fig. 3, the Colour and Mask 15 Generators 700 also receive a Pixel Select Value 720 from the Position Generator 800. Once all the Colour Values 750 are loaded into the LUT 740 in the step 910, the process 900 moves to a Determine Active Colours step 930. In the Determine Active Colours step 930, the Colour and Mask Generators 601, 602 and 603 use the Pixel Select Value 720 input to generate an Output Colour 790 and 20 a set of Masks 605, 606 and 607. The set of Masks 605, 606 and 607 represent the positions within the run length where the Output Colour 790 is active, and thus requires compositing, for the given input. After the Determine Active Colours step 930, a Calculate Active Positions step 940 is executed. During the Calculate Active Positions step 940, the Position Generator 2414524_1 921408_specilodge - 27 800 uses the Masks Bits 730 to determine the Pixel Positions 830 for which the currently selected position, Pixel Select 720, selects exactly the same input colour values for each Colour and Mask Generator 700. 
The Pixel Positions 830 are fed to the input of the Pixel Duplicator 610 for use in a Copy Pixel step 960, to be described. 5 After the Calculate Active Positions step 940, a Calculate Pixel Result step 950 is executed where an available Output Generator 630 composites the Output Colour 790 to form a Pixel Result 620. Only one (1) Output Generator 630 operates on the current set of Output Colours 790. In an exemplary implementation, the operation of the Output Generator 630 reads in Colour data sequentially and uses multiple ones of the Output 10 Generators 630 to enable high speed compositing. In this case, a next free Output Generator 630 accepts Output Colours 790 at the Calculate Pixel Result step 950, allowing the process 900 to continue without waiting for the calculation of a Pixel Result 620 from the Output Generator 630 that accepted the Output Colours 790. After the Calculate Pixel Result step 950, the Pixel Result 620 from the Output 15 Generator 630 is fed into the Pixel Duplicator 610 in the Copy Pixel step 960. The Pixel Duplicator 610 uses the Pixel Positions 830 as generated in the Calculate Active Positions step 940 to copy the Pixel Result 620 into the Destination Buffer 640. The Destination Buffer 640 is used as a place to store the Pixel Results 620 for the current span. 20 Fig. 10 shows as example data flow 1002, 1004 and 1006 of a first, second and third Colour and Mask Generator 700 respectively. The example data flow 1002, 1004 and 1006 shows the processing of Palette Indices 1010, 1012 and 1014 along with Colour Values 1040, 1042 and 1044 respectively. In the example of Fig. 10, a Pixel Select Value 720 is set to zero (0). 2414524_1 921408_specilodge -28 Shown in Fig. 10 are the Inverters 770, XOR gates 780 and AND gates 785 of the individual data flows 1002, 1004 and 1006, along with intermediate results 1020, 1022 and 1024 respectively. Mask Bits 1030, 1032, 1034 and Output Colours 1050, 1052 and 1054 are generated by each of the example date flows 1002, 1004 and 1006 respectively. 5 In the example, the Palette Indices 1010, 1012 and 1014 inputs are four (4) indices wide (representing a run length of four (4)). Each index is represented by a two (2) bit number (separated by a dotted line, as shown in Fig. 10). The positions at which the Mask Bits 1030, 1032 and 1034 of each colour and mask generator are set to '1', represents the position where the corresponding Palette 10 Indices 1010, 1012 and 1014 are the same as that which is indexed by the Pixel Select Value 720 of zero (0). For instance, Mask Positions 0, 1 and 2 of Mask Bits 1030 all have the same value, "1", meaning the first three indices of Palette Indices 1010 are the same, and in this case are all the two (2) bit number "01". Mask Position 0 represents the position of the left most mask bits of a set of mask bits, Mask Position I represents 15 the position of the adjacent mask bit of a set of mask bits, and so forth. The XOR gates 1080, 1082 and 1084 of the example shown in Fig. 10 have two (2) inputs, each carrying two (2) bits of data. The intermediate results 1020, 1022 and 1024 from the output of the XOR gates 1080, 1082 and 1084 (representing the gates 780) are connected to the AND gates 1085, 1087 and 1089 (representing the gates 785) 20 respectively. Each position of the intermediate results 1020, 1022 and 1024 is represented as two (2) bits of data (separated by a dotted line, as shown in Fig. 10). 
Each position of intermediate results 1020, 1022 and 1024 is input into a separate AND gate, as shown in Fig. 10. 2414524_1 921408_specilodge -29 For the given input shown in the example of Fig. 10, a first colour and mask generator 1002 produces Output Colour 1050 Colour B and Mask Bits 1030 with binary values "I110". This means that, for the given input, Colour B is active at Mask Position 0, 1 and 2 of Mask Bits 1030. A second colour and mask generator 1004 produces 5 Output Colour 1052 Colour E and Mask bits 1030 "1011". This means that, for the given input, Colour E is active at Mask Position 0, 2 and 3 of Mask Bits 1030. A third colour and mask generator 1006 produces Output Colour 1054 Colour I and Mask Bits 1030 "1111". This means that, for the given input, Colour 1 is active at Mask Position 0, 1, 2 and 3 of Mask Bits 1030. 10 Fig. 11 continues the example of Fig. 10 and shows the Mask Bits 1030, 1032 and 1034 for the given input Palette Indices 1010, 1012 and 1014 for a Pixel Select Value 720 of zero (0). The Mask Bits 1030, 1032 and 1034 are connected to the Combinatorial Logic block 810 which generates Position Mask 830. In the example of Fig. 11, the Position Mask 830 has a value of "1010", and this represents the fact that 15 Mask Positions 0 and 2 of Position Mask 830 have the same input colours from all the Colour and Mask Generators 700. That is, the colours Colour B, Colour E and Colour I will be used to generate a pixel result for a pixel at positions 0 and 2 within the given run length. This enables a compositing operation to be performed once and the result copied to multiple positions by the Pixel Duplicator 610. Also shown in Fig. 11 is the Priority 20 Encoder 820 and the output of the priority encoder 822 which will be used to generate a next Pixel Select Value 830. Fig. 12A shows a Pixel Result Cr0 1210, which is copied into a portion of the Destination Buffer 140 using the current Position Mask 830, as generated by the example shown in Fig. 10 and Fig. 11. 2414524_1 921408_specilodge -30 Fig. 12B shows a further Pixel Result Cr1 1212, which is copied into a portion of the Destination Buffer 640 using a current Position Mask 830, as would be generated in a next step of the examples shown in Fig. 10 and Fig. 11. Fig. 12C shows yet a further Pixel Result Cr2 1214, which is copied into a 5 portion of the Destination Buffer 640 using a current Position Mask 830, as would be generated in a next step of the examples shown in Fig. 10 and Fig. I1 after carrying out the step shown in Fig. 12B. Returning to Fig. 9, after the Copy Pixel step 960, the process 900 executes All Positions Finished step 965, where the processor 105 checks if all positions have been 10 calculated, and if so, then the process 900 proceeds to an Output Pixel step 980. If, in the All Positions Finished step 965, all positions have not been calculated, then the processor 105 causes the process 900 to proceed to a Next Pixel step 970. In the Next Pixel step 970 the process 900 determines a next Pixel Select Value 720, after which the process 900 proceeds to Determine Active Colours step 930 and 15 repeats the process 900 as previously described with reference to Figs. 6 to 11. Upon entry to the Output Pixel step 980, the Destination Buffer 640 in the Pixel Duplicator 610 will contain all the Pixel Results 620 for the current span of pixels to be rendered. 
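For the example of Figs. 10 to 12, the first of these passes can be checked with a few lines of code. The sketch below is illustrative only and simply reproduces the mask values already given; mask position 0 is written here as the most significant of the four bits, matching the left-to-right order of the figures, and later passes fill the remaining positions of the buffer.

#include <assert.h>
#include <stdio.h>

int main(void)
{
    /* Mask Bits produced for Pixel Select Value 0 in Fig. 10. */
    unsigned mask_b = 0xE;  /* "1110": Colour B active at positions 0, 1, 2 */
    unsigned mask_e = 0xB;  /* "1011": Colour E active at positions 0, 2, 3 */
    unsigned mask_i = 0xF;  /* "1111": Colour I active at positions 0 to 3  */

    /* Combinatorial Logic 810: positions where all three generators select
     * the same input colours. */
    unsigned position_mask = mask_b & mask_e & mask_i;
    assert(position_mask == 0xA);           /* "1010", as in Fig. 11 */

    /* Copy Pixel step 960 for this pass: the single Pixel Result Cr0 is
     * duplicated into positions 0 and 2 of the Destination Buffer, as shown
     * in Fig. 12A. */
    char destination_buffer[5] = "....";
    for (int pos = 0; pos < 4; pos++) {
        if (position_mask & (1u << (3 - pos)))
            destination_buffer[pos] = '0';  /* '0' stands in for Cr0 */
    }
    printf("%s\n", destination_buffer);     /* prints 0.0. */
    return 0;
}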
During the Output Pixel step 980, the Pixel Duplicator 610 outputs the Pixel Results 620 from the Destination Buffer 640 on the Output Bus 650, where further operations may be carried out on the Pixel Results 620 before they are sent to an output device, such as the printer 115 or display 114. The process 900 then ends at the End step 995.

By using the Data Processing Architecture 600, a minimum set of compositing operations can be achieved for a given run length of input colour data.

Extensions and Advantages

The arrangements described can be expanded to accommodate a larger number of fills (by adding more Colour and Mask Generators 700), greater run lengths and a larger number of bits per index for the palette indices. FIFO buffers may also be used within the Data Processing Architecture 600 to increase performance.

INDUSTRIAL APPLICABILITY

The arrangements described are applicable to the computer and data processing industries and particularly for the rendering and printing of bitmap objects.

The foregoing describes only some embodiments of the present invention, and modifications and/or changes can be made thereto without departing from the scope and spirit of the invention, the embodiments being illustrative and not restrictive.

In the context of this specification, the word "comprising" means "including principally but not necessarily solely" or "having" or "including", and not "consisting only of". Variations of the word "comprising", such as "comprise" and "comprises", have correspondingly varied meanings.

Claims (5)

  3. A method of rendering a run of input pixel data, said method comprising the steps of:
selecting a pixel in the run, said pixel having a first output colour;
determining, for the run, a plurality of pixels with the first output colour, at least one of the plurality of pixels being separated from another of the determined plurality of pixels by a further pixel with a second output colour; and
writing the first output colour to the determined plurality of pixels.
  4. A Data Processing Architecture for rendering a run of indexed pixel data, said run of indexed pixel data comprising at least one level, said Data Processing Architecture comprising:
a Position Generator for generating a set of masks for the run using the indexed pixel data, each of said set of masks being associated with a colour value;
a Colour and Mask Generator for compositing a plurality of colours referenced by the indexed pixel data to obtain the colour values; and
a Pixel Duplicator for writing the colour values according to the associated mask.
  5. A Data Processing Architecture according to claim 4, wherein the Position Generator determines a next pixel position with a different colour that has not been processed.

  6. A method of rendering a run of input pixel data substantially as described herein with reference to the drawings.
  7. Rendering apparatus for rendering a run of a palette image data representation, said apparatus being substantially as described herein with reference to the drawings.

Dated this 30th day of November 2009
CANON KABUSHIKI KAISHA
Patent Attorneys for the Applicant
Spruson & Ferguson
AU2009243440A 2009-11-30 2009-11-30 Optimised rendering of palette images Abandoned AU2009243440A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2009243440A AU2009243440A1 (en) 2009-11-30 2009-11-30 Optimised rendering of palette images

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
AU2009243440A AU2009243440A1 (en) 2009-11-30 2009-11-30 Optimised rendering of palette images

Publications (1)

Publication Number Publication Date
AU2009243440A1 true AU2009243440A1 (en) 2011-06-16

Family

ID=44153215

Family Applications (1)

Application Number Title Priority Date Filing Date
AU2009243440A Abandoned AU2009243440A1 (en) 2009-11-30 2009-11-30 Optimised rendering of palette images

Country Status (1)

Country Link
AU (1) AU2009243440A1 (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10469870B2 (en) 2014-09-26 2019-11-05 Kt Corporation Method and apparatus for predicting and restoring a video signal using palette entry
US10477218B2 (en) 2014-10-20 2019-11-12 Kt Corporation Method and apparatus for predicting and restoring a video signal using palette entry
US10477227B2 (en) 2015-01-15 2019-11-12 Kt Corporation Method and apparatus for predicting and restoring a video signal using palette entry and palette mode
US10477244B2 (en) 2015-01-29 2019-11-12 Kt Corporation Method and apparatus for predicting and restoring a video signal using palette entry and palette mode
US10477243B2 (en) 2015-01-29 2019-11-12 Kt Corporation Method and apparatus for predicting and restoring a video signal using palette entry and palette mode
CN107455007A (en) * 2015-04-02 2017-12-08 株式会社Kt Method and apparatus for handling vision signal
US10484713B2 (en) 2015-04-02 2019-11-19 Kt Corporation Method and device for predicting and restoring a video signal using palette entry and palette escape mode
CN107455007B (en) * 2015-04-02 2021-02-09 株式会社Kt Method for encoding and decoding video signal
CN111402380A (en) * 2020-03-12 2020-07-10 杭州趣维科技有限公司 GPU (graphics processing Unit) compressed texture processing method
CN111402380B (en) * 2020-03-12 2023-06-30 杭州小影创新科技股份有限公司 GPU compressed texture processing method

Similar Documents

Publication Publication Date Title
AU2009243440A1 (en) Optimised rendering of palette images
US10068518B2 (en) Method, apparatus and system for dithering an image
US6429950B1 (en) Method and apparatus for applying object characterization pixel tags to image data in a digital imaging device
US9715356B2 (en) Method, apparatus and system for determining a merged intermediate representation of a page
US9183645B2 (en) System and method for fast manipulation of graphical objects
AU2016273973A1 (en) Transcode PCL delta-row compressed image to edges
CN101790749B (en) Multi-sample rendering of 2d vector images
AU2011205085B2 (en) 2D region rendering
JP2013505854A (en) How to create a printable raster image file
JP5374567B2 (en) Image processing apparatus, image processing system, and image processing method
JP2005135415A (en) Graphic decoder including command based graphic output accelerating function, graphic output accelerating method therefor, and image reproducing apparatus
JP4143613B2 (en) Drawing method and drawing apparatus
JP2008228168A (en) Image processing apparatus and program
KR100879896B1 (en) Format Conversion Apparatus from Band Interleave Format to Band Separate Format
JP6821924B2 (en) Image processing device, image processing method
AU2012202491A1 (en) Method, system and apparatus for rendering an image on a page
US8395630B2 (en) Format conversion apparatus from band interleave format to band separate format
AU2012232989A1 (en) A method of rendering an overlapping region
AU2013248237A1 (en) Image scaling process and apparatus
US6903748B1 (en) Mechanism for color-space neutral (video) effects scripting engine
US10262386B2 (en) Method, apparatus and system for rendering regions of an image
US20240112377A1 (en) Method and apparatus for transforming input image based on target style and target color information
AU2015202676A1 (en) Systems and methods for efficient halftone screening
JP5752323B2 (en) Display device and computer
AU2008260056A1 (en) System and method for accelerated colour compositing

Legal Events

Date Code Title Description
MK4 Application lapsed section 142(2)(d) - no continuation fee paid for the application