AU2013248237A1 - Image scaling process and apparatus - Google Patents


Info

Publication number
AU2013248237A1
Authority
AU
Australia
Prior art keywords
segment
refinement
region
image
reusable part
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
AU2013248237A
Inventor
Bin LIAO
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Canon Inc
Original Assignee
Canon Inc
Priority date
Filing date
Publication date
Application filed by Canon Inc
Priority to AU2013248237A
Publication of AU2013248237A1


Landscapes

  • Compression Of Band Width Or Redundancy In Fax (AREA)

Abstract

A method for generating a transformed image (210) associated with an original image, the original image being encoded (1305) into a plurality of segments (1310), comprises identifying a refinement region (212) using at least one refinement segment (160) from the plurality of segments, and determining, based on a spatial arrangement (intersect, overlap) of the identified refinement region (212) relative to a reference region (320) associated with a reference segment (340), a reusable part (310) of the reference region to be decoded from the reference segment. The method decodes and stores (1020, 1040) the reusable part of the reference region in an untransformed form to be used to decode the refinement segment, and generates the transformed image by transforming a refinement portion in accordance with a transformation factor, wherein the refinement portion is produced by decoding the refinement segment using the decoded reusable part of the reference region stored in the untransformed form (240). (Fig. 1)

Description

IMAGE SCALING PROCESS AND APPARATUS

TECHNICAL FIELD

[001] The present invention relates generally to image scaling and, in particular, to scaling JBIG2 compressed images. The present invention also relates to a method and apparatus for scaling images, and to a computer program product including a computer readable medium having recorded thereon a computer program for scaling images.

BACKGROUND

[002] When scaling or transforming an image containing different types of content, such as an image containing both text and a natural scene photo, a single scaling or transform method is typically not able to give the best result. For example, bilinear interpolation can scale natural images well, but can add blurriness to text. A vectorization technique is more suitable for scaling text, but can cause loss of detail in a natural image.

[003] It is possible to achieve high quality scaling for multiple content types, but at a high computational cost. For example, one such approach separated the image into different layers based on the content type, and then applied different techniques to scale the layers separately. The results were then composited together to form the final scaled-up image.

[004] Some mixed-content encoding formats, such as JBIG2, encode image segments according to the type of the contents. After decompression, the different decoded segments are combined together to form a final result image. The useful content-type separation is lost during the decompression process.

[005] Thus a need exists for a decompression and scaling method for decoding a mixed-content encoded image that produces a high quality result at a scale factor other than 1.

SUMMARY

[006] Disclosed is an integrated method to decompress and scale a JBIG2 encoded image more efficiently, by utilizing information created during the encoding process and minimising the memory consumption and computational cost.
[007] According to one aspect of the present disclosure there is provided a method for generating a transformed image associated with an original image, the original image being encoded into a plurality of segments, the method comprising: identifying a refinement region using at least one refinement segment from the plurality of segments; determining, based on a spatial arrangement of the identified refinement region relative to a reference region associated with a reference segment, a reusable part of the reference region to be decoded from the reference segment; decoding and storing the reusable part of the reference region in an untransformed form to be used to decode the refinement segment; and generating the transformed image by transforming a refinement portion in accordance with a transformation factor, wherein the refinement portion is produced by decoding the refinement segment using the decoded reusable part of the reference region stored in the untransformed form.

[008] Typically each segment is associated with a decoding process and a pre-determined transformation factor.

[009] Preferably the reusable part of the reference region is determined based on a position of the refinement region within the image relative to regions associated with other segments from the plurality of segments, the reusable part of the reference region corresponding to an area of overlap between the refinement region and the reference region.

[0010] Typically the transformation comprises scaling.

[0011] Desirably the method further comprises storing the decoded reusable part associated with the reference segment. In this manner, the method may further comprise discarding the stored reusable part once all segments affected by said reusable part have been transformed.

[0012] The method may further comprise determining a type of the segment, being one of text, image, and graphics.
Preferably the transforming is performed using a transformation method associated with the determined segment type.
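For rectangular regions, the reusable part described in paragraph [009] reduces to computing the rectangle of overlap between the refinement region and the reference region. A minimal sketch of that computation follows; the names are hypothetical and are not drawn from the specification:

```python
from dataclasses import dataclass

@dataclass
class Rect:
    """Axis-aligned region in page coordinates (left, top, width, height)."""
    x: int
    y: int
    w: int
    h: int

def reusable_part(refinement: Rect, reference: Rect):
    """Return the overlap of the two regions, or None if they are disjoint.

    Only this overlap needs to be decoded from the reference segment and
    retained in untransformed (unscaled) form for refinement decoding.
    """
    x1 = max(refinement.x, reference.x)
    y1 = max(refinement.y, reference.y)
    x2 = min(refinement.x + refinement.w, reference.x + reference.w)
    y2 = min(refinement.y + refinement.h, reference.y + reference.h)
    if x2 <= x1 or y2 <= y1:
        return None  # no overlap: nothing reusable
    return Rect(x1, y1, x2 - x1, y2 - y1)
```

Once every segment affected by the stored overlap has been transformed, the buffer holding it can be discarded, consistent with paragraph [0011].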
[0013] In another implementation, the method further comprises converting decoded bitmaps to a bit depth greater than 1, transforming the converted bitmaps, and combining the transformed bitmaps.

[0014] Preferably a text type segment is decoded and scaled considering both the transformation factor and sub-pixel placement.

[0015] According to another aspect of the present disclosure there is provided a method for generating an image transformed using a pre-determined transformation, the transformed image being associated with an original image, the method comprising: receiving the original image associated with a plurality of portions, the plurality of portions comprising at least a reference portion and a refinement portion; determining a reusable part of the reference portion based on a region of overlap in an output image between the regions in the output image associated with the reference portion and the refinement portion; and generating the transformed image by transforming the refinement portion in accordance with the pre-determined transformation using the determined reusable part of the reference portion stored in an untransformed form.

[0016] Other aspects are also disclosed.

BRIEF DESCRIPTION OF THE DRAWINGS

[0017] At least one embodiment of the invention will now be described with reference to the following drawings, in which:

[0018] Fig. 1 is a diagram showing the different types of refinement segments;

[0019] Fig. 2 is a schematic diagram illustrating the refinement region, refinement region list and refinement region buffer;

[0020] Fig. 3 is a schematic diagram illustrating an alternate representation of the refinement region, refinement region data and refinement region buffer;

[0021] Fig. 4 is a diagram showing the relationship between a refinement segment and the segment being refined;

[0022] Fig. 5 is a schematic block diagram representation of a method of image transformation with a scaled output page buffer and refinement region buffer;

[0023] Fig. 6 is a schematic flow diagram showing a method of processing a segment using the scaled output page buffer and refinement region buffer;

[0024] Fig. 7 is a schematic flow diagram showing a method of pre-processing the segments, as executed in the method of Fig. 6;

[0025] Fig. 8 is a schematic flow diagram showing a method of decoding and scaling the segments, as executed in the method of Fig. 6;

[0026] Fig. 9 is a schematic flow diagram showing a method of processing a segment which will be refined later, as executed in the method of Fig. 8;

[0027] Fig. 10 is a schematic flow diagram showing a method of processing a segment whose output overlaps a refinement region, as executed in the method of Fig. 8;

[0028] Fig. 11 is a schematic diagram illustrating the refinement segment decoding process;

[0029] Fig. 12 is a diagram illustrating the refinement result;

[0030] Fig. 13 is a schematic block diagram representation of a scaling implementation with two output page buffers;

[0031] Fig. 14 is a schematic flow diagram illustrating a method of decoding and scaling the segment according to the implementation of Fig. 13 with two output page buffers; and

[0032] Figs. 15A and 15B form a schematic block diagram of a general purpose computer system upon which arrangements described can be practiced.

DETAILED DESCRIPTION INCLUDING BEST MODE

Context

[0033] JBIG2 is an advanced compression format for 1-bit per pixel (bi-level) black and white images, developed by the Joint Bi-level Image Experts Group and published as ITU-T T.88 as well as ISO/IEC 14492:2001. A typical JBIG2 encoder decomposes an input bi-level original image into several regions according to content, the regions having three basic content types: halftone regions (for example halftone images), text regions and generic regions. A generic region is a region with content that is not found to clearly fit into the other two region types, and can include graphics.
A well-implemented JBIG2 encoder will identify the different types of content in an original image and compress each region type using a method most favourable to effectively compress that type of image data. These compressed regions are stored inside the JBIG2 file format as segments which, when decoded into sub-bitmaps, are composited with a cumulative bitmap using one of the OR, AND, XOR, XNOR or REPLACE combination operators. In the present description, the cumulative bitmap is called an output page buffer and represents intermediate and final outputs of decoding/decompression. The encoding of an original image into JBIG2 format involves the formation of segments associated with each of the content types, as well as so-called refinement segments that are used in decoding to refine the content segments to facilitate improved quality decompression, which necessarily involves determining a type of the segment to be decoded, being one of text, image or other, such as graphics. The sub-bitmaps may also be kept as or in intermediate buffers which are further refined before combining with the cumulative bitmap.

[0034] Fig. 11 shows a process 1100 of decoding using a refinement segment, in which the encoded data of a text segment 1111 is decoded and is refined by a refinement segment. A text segment decoding process 1121 takes the encoded text segment data 1111 as input and decodes the segment data 1111 into decoded text segment data 1112. Encoded data of a refinement segment 1131 is received and is input to a refinement segment decoding process 1122, to which the decoded text segment data 1112 is also input. The process 1122 outputs decoded data of the refinement segment 1132. The decoded refinement segment data 1132 and the decoded text segment data 1112 are then each input to a refining process 1123 which forms and outputs refined data of the text segment 1113.

[0035] Fig. 12 shows an example set of pixel data for refining a text segment.
In this example, a region 1212 represents the decoded pixel data of a text segment 1112 before refinement, a region 1232 represents the decoded pixel data of a refinement segment 1132, and a region 1213 represents the final refined text pixel data 1113. According to the JBIG2 standard, an encoded segment decodes into a region which includes one or more text characters against a uniform fill background. The example of Fig. 12 operates to produce two characters "i" and "n", shown in the region 1212. Because this text segment region 1112 will be refined later by a refinement segment 1131, the decoded text segment data 1112 is stored in an intermediate buffer. As described in the JBIG2 specification, the process to decode a refinement segment requires the decoded pixel data from the text segment being refined. The refinement segment decoding process 1122 takes both the encoded refinement segment data 1131 and the decoded text segment data 1112, which is stored in the intermediate buffer, as inputs, and decodes the data 1112 and 1131 into the decoded refinement segment data 1132. In the example given in Fig. 12, the decoded refinement segment data 1232, which is itself a region, contains two dots 1233. The decoded text segment data 1112, which is input into the refinement segment decoding process 1122, must be the original decoded text data without any transformation (i.e. untransformed), as any variation will lead to erroneous decoding of the refinement segment. After the refinement segment data 1131 is decoded, the refining process 1123 takes both the decoded text segment data 1112 and the decoded refinement segment data 1132 as inputs and combines them using a given operation to generate the refined text segment data 1113. In the example given in Fig. 12, the refinement operation 1123 is an "XOR" refinement operation.
The combination of decoded text segment pixel data 1212 and refinement segment pixel data 1232 with the "XOR" refinement operation 1223 is shown as a refined decoded region 1213. The refined text data 1213 can then be composited with the cumulative bitmap in the output page buffer.

[0036] Fig. 1 shows different types of refinement segments. A segments list 120 is shown listing all segments, including non-refinement segments and refinement segments, that relate to a page, represented by an output page buffer 110. A segment K 170 is a non-refinement segment. A segment B 140 is a non-refinement segment which is refined by a segment E 150, this relationship being indicated by the arrowed line 152. The segment E 150 is a refinement segment which refines the segment B 140. A segment G 160 is a refinement segment which refines a region 212 of the output page buffer 110. As such, Fig. 1 shows two types of refinement segments: firstly, a type A refinement segment, such as the segment E 150, which refines the sub-bitmap produced by another segment; and secondly, a type B refinement segment, such as the segment G 160, which refines a region of the output page buffer. The segment data of a type A refinement segment contains the information about the segment which it refines. For example, the segment data of the segment E 150 contains the segment ID (identity) of the segment B 140, indicating that the segment E 150 is used to refine the segment B 140. The segment data of a type B refinement segment contains the information about the region which it refines. For example, the segment data of the segment G 160 contains the coordinates of the region 212, indicating that the segment G 160 is used to refine the region 212 of the output page buffer 110. However, the particular segment which would contribute to the region 212 cannot be determined from the segment data of the (refinement) segment G 160.
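The refining step of Fig. 12 is a pixel-wise combination of the decoded text bitmap with the decoded refinement bitmap. The sketch below illustrates the XOR case on 1-bit bitmaps represented as rows of 0/1 integers; it is illustrative only and is not the JBIG2 refinement decoding procedure itself:

```python
def refine_xor(decoded, refinement):
    """XOR-combine two equal-sized 1-bit bitmaps (rows of 0/1 ints).

    XOR flips exactly the pixels set in the refinement data, e.g. adding
    the missing dot of an "i" that the text segment decoded without it.
    """
    return [
        [d ^ r for d, r in zip(drow, rrow)]
        for drow, rrow in zip(decoded, refinement)
    ]

# Tiny stand-in for region 1212: an "i" stem with its dot missing.
stem = [
    [0, 0, 0],
    [0, 1, 0],
    [0, 1, 0],
]
# Stand-in for region 1232: the refinement data carries only the dot.
dot = [
    [0, 1, 0],
    [0, 0, 0],
    [0, 0, 0],
]
refined = refine_xor(stem, dot)  # dot plus stem: the complete "i"
```

Note that XOR is self-inverse: applying the same refinement data a second time restores the original decoded bitmap.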
Overview

[0037] According to the present disclosure, scaling or transformation processes are carried out for each decoded segment, before the decoded segment pixel data in a sub-bitmap is composited with the cumulative bitmap in the output page buffer. As such, the type information of the segment which was generated during the compression process can be used to determine a suitable scaling method which applies to the content type of the segment being decoded. Furthermore, a refinement region list and associated buffer is used to minimise the memory usage and computation cost. This is done by storing only the part of the unscaled output page buffer which is needed by a refinement segment.

Hardware Implementation

[0038] Figs. 15A and 15B depict a general-purpose computer system 1500, upon which the various arrangements described can be practiced.

[0039] As seen in Fig. 15A, the computer system 1500 includes: a computer module 1501; input devices such as a keyboard 1502, a mouse pointer device 1503, a scanner 1526, a camera 1527, and a microphone 1580; and output devices including a printer 1515, a display device 1514 and loudspeakers 1517. An external Modulator-Demodulator (Modem) transceiver device 1516 may be used by the computer module 1501 for communicating to and from a communications network 1520 via a connection 1521. The communications network 1520 may be a wide-area network (WAN), such as the Internet, a cellular telecommunications network, or a private WAN. Where the connection 1521 is a telephone line, the modem 1516 may be a traditional "dial-up" modem. Alternatively, where the connection 1521 is a high capacity (e.g., cable) connection, the modem 1516 may be a broadband modem. A wireless modem may also be used for wireless connection to the communications network 1520.
[0040] The computer module 1501 typically includes at least one processor unit 1505, and a memory unit 1506. For example, the memory unit 1506 may have semiconductor random access memory (RAM) and semiconductor read only memory (ROM). The computer module 1501 also includes a number of input/output (I/O) interfaces including: an audio-video interface 1507 that couples to the video display 1514, loudspeakers 1517 and microphone 1580; an I/O interface 1513 that couples to the keyboard 1502, mouse 1503, scanner 1526, camera 1527 and optionally a joystick or other human interface device (not illustrated); and an interface 1508 for the external modem 1516 and printer 1515. In some implementations, the modem 1516 may be incorporated within the computer module 1501, for example within the interface 1508. The computer module 1501 also has a local network interface 1511, which permits coupling of the computer system 1500 via a connection 1523 to a local-area communications network 1522, known as a Local Area Network (LAN). As illustrated in Fig. 15A, the local communications network 1522 may also couple to the wide network 1520 via a connection 1524, which would typically include a so-called "firewall" device or device of similar functionality. The local network interface 1511 may comprise an Ethernet circuit card, a Bluetooth™ wireless arrangement or an IEEE 802.11 wireless arrangement; however, numerous other types of interfaces may be practiced for the interface 1511. With such a configuration, the networks 1520 and/or 1522 and devices connected thereto can form sources of JBIG2 compressed images intended for decoding and scaling by the computer 1501 according to the processes described herein.

[0041] The I/O interfaces 1508 and 1513 may afford either or both of serial and parallel connectivity, the former typically being implemented according to the Universal Serial Bus (USB) standards and having corresponding USB connectors (not illustrated).
Storage devices 1509 are provided and typically include a hard disk drive (HDD) 1510. Other storage devices such as a floppy disk drive and a magnetic tape drive (not illustrated) may also be used. An optical disk drive 1512 is typically provided to act as a non-volatile source of data. Portable memory devices, such as optical disks (e.g., CD-ROM, DVD, Blu-ray Disc™), USB-RAM, portable external hard drives, and floppy disks, for example, may be used as appropriate sources of data to the system 1500. Such storage devices may also represent sources of JBIG2 compressed images intended for processing within the computer 1501.
[0042] The components 1505 to 1513 of the computer module 1501 typically communicate via an interconnected bus 1504 and in a manner that results in a conventional mode of operation of the computer system 1500 known to those in the relevant art. For example, the processor 1505 is coupled to the system bus 1504 using a connection 1518. Likewise, the memory 1506 and optical disk drive 1512 are coupled to the system bus 1504 by connections 1519. Examples of computers on which the described arrangements can be practised include IBM-PCs and compatibles, Sun Sparcstations, Apple Mac™ or like computer systems.

[0043] The methods of image decoding and scaling may be implemented using the computer system 1500, wherein the processes of Figs. 1 to 14 may be implemented as one or more software application programs 1533 executable within the computer system 1500. In particular, the steps of the method of image decoding and scaling are effected by instructions 1531 (see Fig. 15B) in the software 1533 that are carried out within the computer system 1500. The software instructions 1531 may be formed as one or more code modules, each for performing one or more particular tasks. The software may also be divided into two separate parts, in which a first part and the corresponding code modules perform the image decoding and scaling methods and a second part and the corresponding code modules manage a user interface between the first part and the user.

[0044] The software may be stored in a computer readable medium, including the storage devices described below, for example. The software is loaded into the computer system 1500 from the computer readable medium, and then executed by the computer system 1500. A computer readable medium having such software or computer program recorded on the computer readable medium is a computer program product.
The use of the computer program product in the computer system 1500 preferably effects an advantageous apparatus for image decoding and scaling.

[0045] The software 1533 is typically stored in the HDD 1510 or the memory 1506. The software is loaded into the computer system 1500 from a computer readable medium, and executed by the computer system 1500. Thus, for example, the software 1533 may be stored on an optically readable disk storage medium (e.g., CD-ROM) 1525 that is read by the optical disk drive 1512. A computer readable medium having such software or computer program recorded on it is a computer program product.
The use of the computer program product in the computer system 1500 preferably effects an apparatus for image decoding and scaling.

[0046] In some instances, the application programs 1533 may be supplied to the user encoded on one or more CD-ROMs 1525 and read via the corresponding drive 1512, or alternatively may be read by the user from the networks 1520 or 1522. Still further, the software can also be loaded into the computer system 1500 from other computer readable media. Computer readable storage media refers to any non-transitory tangible storage medium that provides recorded instructions and/or data to the computer system 1500 for execution and/or processing. Examples of such storage media include floppy disks, magnetic tape, CD-ROM, DVD, Blu-ray™ Disc, a hard disk drive, a ROM or integrated circuit, USB memory, a magneto-optical disk, or a computer readable card such as a PCMCIA card and the like, whether or not such devices are internal or external of the computer module 1501. Examples of transitory or non-tangible computer readable transmission media that may also participate in the provision of software, application programs, instructions and/or data to the computer module 1501 include radio or infra-red transmission channels as well as a network connection to another computer or networked device, and the Internet or Intranets including e-mail transmissions and information recorded on Websites and the like.

[0047] The second part of the application programs 1533 and the corresponding code modules mentioned above may be executed to implement one or more graphical user interfaces (GUIs) to be rendered or otherwise represented upon the display 1514. Through manipulation of typically the keyboard 1502 and the mouse 1503, a user of the computer system 1500 and the application may manipulate the interface in a functionally adaptable manner to provide controlling commands and/or input to the applications associated with the GUI(s).
Other forms of functionally adaptable user interfaces may also be implemented, such as an audio interface utilizing speech prompts output via the loudspeakers 1517 and user voice commands input via the microphone 1580. [0048] Fig. 15B is a detailed schematic block diagram of the processor 1505 and a "memory" 1534. The memory 1534 represents a logical aggregation of all the memory modules (including the HDD 1509 and semiconductor memory 1506) that can be accessed by the computer module 1501 in Fig. 15A.
[0049] When the computer module 1501 is initially powered up, a power-on self-test (POST) program 1550 executes. The POST program 1550 is typically stored in a ROM 1549 of the semiconductor memory 1506 of Fig. 15A. A hardware device such as the ROM 1549 storing software is sometimes referred to as firmware. The POST program 1550 examines hardware within the computer module 1501 to ensure proper functioning and typically checks the processor 1505, the memory 1534 (1509, 1506), and a basic input-output system software (BIOS) module 1551, also typically stored in the ROM 1549, for correct operation. Once the POST program 1550 has run successfully, the BIOS 1551 activates the hard disk drive 1510 of Fig. 15A. Activation of the hard disk drive 1510 causes a bootstrap loader program 1552 that is resident on the hard disk drive 1510 to execute via the processor 1505. This loads an operating system 1553 into the RAM memory 1506, upon which the operating system 1553 commences operation. The operating system 1553 is a system level application, executable by the processor 1505, to fulfil various high level functions, including processor management, memory management, device management, storage management, software application interface, and generic user interface.

[0050] The operating system 1553 manages the memory 1534 (1509, 1506) to ensure that each process or application running on the computer module 1501 has sufficient memory in which to execute without colliding with memory allocated to another process. Furthermore, the different types of memory available in the system 1500 of Fig. 15A must be used properly so that each process can run effectively. Accordingly, the aggregated memory 1534 is not intended to illustrate how particular segments of memory are allocated (unless otherwise stated), but rather to provide a general view of the memory accessible by the computer system 1500 and how such is used.

[0051] As shown in Fig.
15B, the processor 1505 includes a number of functional modules including a control unit 1539, an arithmetic logic unit (ALU) 1540, and a local or internal memory 1548, sometimes called a cache memory. The cache memory 1548 typically includes a number of storage registers 1544-1546 in a register section. One or more internal busses 1541 functionally interconnect these functional modules. The processor 1505 typically also has one or more interfaces 1542 for communicating with external devices via the system bus 1504, using a connection 1518. The memory 1534 is coupled to the bus 1504 using a connection 1519.
[0052] The application program 1533 includes a sequence of instructions 1531 that may include conditional branch and loop instructions. The program 1533 may also include data 1532 which is used in execution of the program 1533. The instructions 1531 and the data 1532 are stored in memory locations 1528, 1529, 1530 and 1535, 1536, 1537, respectively. Depending upon the relative size of the instructions 1531 and the memory locations 1528-1530, a particular instruction may be stored in a single memory location as depicted by the instruction shown in the memory location 1530. Alternately, an instruction may be segmented into a number of parts each of which is stored in a separate memory location, as depicted by the instruction segments shown in the memory locations 1528 and 1529.

[0053] In general, the processor 1505 is given a set of instructions which are executed therein. The processor 1505 waits for a subsequent input, to which the processor 1505 reacts by executing another set of instructions. Each input may be provided from one or more of a number of sources, including data generated by one or more of the input devices 1502, 1503, data received from an external source across one of the networks 1520, 1522, data retrieved from one of the storage devices 1506, 1509 or data retrieved from a storage medium 1525 inserted into the corresponding reader 1512, all depicted in Fig. 15A. The execution of a set of the instructions may in some cases result in output of data. Execution may also involve storing data or variables to the memory 1534.

[0054] The disclosed image decoding and scaling arrangements use input variables 1554, which are stored in the memory 1534 in corresponding memory locations 1555, 1556, 1557. The image decoding and scaling arrangements produce output variables 1561, which are stored in the memory 1534 in corresponding memory locations 1562, 1563, 1564. Intermediate variables 1558 may be stored in memory locations 1559, 1560, 1566 and 1567.
[0055] Referring to the processor 1505 of Fig. 15B, the registers 1544, 1545, 1546, the arithmetic logic unit (ALU) 1540, and the control unit 1539 work together to perform sequences of micro-operations needed to perform "fetch, decode, and execute" cycles for every instruction in the instruction set making up the program 1533. Each fetch, decode, and execute cycle comprises:

(i) a fetch operation, which fetches or reads an instruction 1531 from a memory location 1528, 1529, 1530;

(ii) a decode operation in which the control unit 1539 determines which instruction has been fetched; and

(iii) an execute operation in which the control unit 1539 and/or the ALU 1540 execute the instruction.

[0056] Thereafter, a further fetch, decode, and execute cycle for the next instruction may be executed. Similarly, a store cycle may be performed by which the control unit 1539 stores or writes a value to a memory location 1532.

[0057] Each step or sub-process in the processes of Figs. 1 to 14 is associated with one or more segments of the program 1533 and is performed by the register section 1544, 1545, 1546, the ALU 1540, and the control unit 1539 in the processor 1505 working together to perform the fetch, decode, and execute cycles for every instruction in the instruction set for the noted segments of the program 1533.

First Implementation

[0058] Fig. 13 shows a schematic block diagram representation of data flow for a first implementation 1300 for decoding and scaling according to the present disclosure. A JBIG2 encoded image 1305 contains encoded segments 1310. Decoding a segment 1310 produces an unscaled sub-bitmap 1320. Scaling an unscaled sub-bitmap 1320 produces a scaled sub-bitmap 1330.

[0059] In order to meet the requirement of decoding the refinement segment, two output page buffers are used. The first of these is an unscaled output page buffer 1340 (unscaled cumulative bitmap) which stores a cumulative final bitmap in unscaled form.
The unscaled output page buffer 1340 is used to assist in the decoding of a subsequent type B refinement segment (a segment which refines a region of the output page, or indirect refinement segment) in the set of encoded segments 1310 forming the encoded image 1305. Type B refinement segment decoding will need unscaled data from the unscaled output page buffer 1340. A type B refinement segment cannot be decoded correctly without data from the unscaled output buffer 1340. The second page buffer is a scaled output page buffer 1350 (scaled cumulative bitmap) which stores a scaled cumulative final bitmap. The detailed processing of decoding and scaling a segment is shown in Fig. 14. The various buffers described herein are most typically implemented as virtual buffers generally formed within the dynamic memory 1506 of the computer 1501. In some implementations, where such memory is limited, one or more of the buffers may be formed or mirrored in the HDD 1510. Some buffers may also be formed in cache memory (not illustrated) of the processor 1505, for which skilled persons will appreciate that copies are retained in the memory 1506.

[0060] Fig. 14 is a flow diagram showing a method 1400 of decoding and scaling a segment 1310, the result of which, for a plurality of segments, is the generation of a transformed image. Typically a compressed image 1305 is received into the computer 1501 and stored in permanent memory, such as the HDD 1510, for subsequent use or manipulation. The method 1400 starts at decoding step 1420, where data of an encoded segment 1310 is read into the memory 1506 in anticipation of processing, typically from the HDD 1510, or possibly directly from an external source, such as the networks 1520, 1522, or from portable media, such as the disk 1525.

[0061] If the current segment is a refinement segment, extra data is also read into the memory 1506.
For example, if the segment 1310 is a type A refinement segment (a segment which refines another segment in the segment list 120, or direct refinement segment), such as the segment E 150 in Fig. 1, which refines the segment B 140 in Fig. 1, then an unscaled intermediate buffer 1360 corresponding to the reference segment B 140, previously stored in step 1470 (to be described) during decoding of the segment B 140, is also read into the memory 1506. If the segment 1310 is a type B refinement segment, such as the segment G 160 in Fig. 1, which refines a region 212 of the output page 110, then the corresponding region data from the unscaled output page buffer 1340 is read into the memory 1506. After all required data is read into the memory 1506, the processor 1505 operates to decode the segment 1310 according to the JBIG2 specification. In some implementations, a dedicated JBIG2 hardware decoder chip device may be used for the decoding operation. The pixel data resulting from decoding the segment 1310 is stored in the memory 1506 as an unscaled sub-bitmap 1320. The method 1400 then goes to the next, scaling step 1430.

[0062] At the scaling step 1430, the unscaled sub-bitmap 1320 is scaled to the required size. The scaling step 1430 is preferably implemented by the processor 1505 to achieve the desired bitmap size. The result is stored as a scaled sub-bitmap 1330 in the memory 1506. The specific transformation method used for the scaling process is determined by the type of the segment 1310. Each type of segment uses an appropriate scaling method which is most suitable to its type. For example, a vectorization scaling technique can be used to scale text segments, while nearest neighbour interpolation can be used for other types of segments. The scaling method of a refinement segment is chosen based on the type of the reference segment which the refinement segment is going to refine.
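As one concrete illustration of the nearest neighbour interpolation mentioned above, the following is a minimal sketch only; the row-list bitmap representation and function name are assumptions, not the patent's implementation.

```python
# Minimal nearest-neighbour scaling of a 1-bit sub-bitmap, represented as a
# list of rows of 0/1 values. Each destination pixel copies the nearest
# source pixel, so the routine works for both up- and down-scaling.

def scale_nearest(bitmap, factor):
    src_h, src_w = len(bitmap), len(bitmap[0])
    dst_h, dst_w = int(src_h * factor), int(src_w * factor)
    return [[bitmap[int(y / factor)][int(x / factor)]
             for x in range(dst_w)]
            for y in range(dst_h)]
```

Scaling a 2x2 checkerboard by a factor of 2 simply replicates each pixel into a 2x2 block, which is the behaviour a refinement segment would subsequently sharpen.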
[0063] If the current segment 1310 is a type A refinement segment, a further refinement process is also carried out in step 1430. For example, if the current segment which is being decoded is the segment E 150, the scaled sub-bitmap 1330 of the segment E 150 is then combined with the scaled intermediate buffer 1370 of the reference segment B 140, which was previously stored in step 1470 during the segment B decoding. The result is a refined scaled sub-bitmap which is stored back to the scaled sub-bitmap buffer 1330. Then the intermediate buffers 1360 and 1370 of the reference segment B 140 are released, meaning that any data stored in the buffers is discarded (i.e. the buffers are reset, erased, expunged or nullified).

[0064] A decision step 1440 determines whether the segment 1310 will be refined later by a type A refinement segment. If the segment will be so refined, the process 1400 goes to step 1470. Otherwise, if the segment will not be refined, the process 1400 goes to a rendering step 1450.

[0065] At the rendering step 1450, the unscaled sub-bitmap 1320 is rendered to the unscaled output page buffer 1340, whereupon the process 1400 goes to a further rendering step 1460.

[0066] At step 1460, the scaled sub-bitmap 1330 is rendered to the scaled output page buffer 1350. Then the unscaled sub-bitmap 1320 and the scaled sub-bitmap 1330 are both released. The process then finishes.

[0067] At step 1470, the unscaled and scaled sub-bitmaps of the segment 1310 are both stored separately as intermediate buffers. For example, if the current segment which is being decoded is the segment B 140, as the segment B 140 will be refined later by the segment E 150 as shown in Fig. 1, the unscaled sub-bitmap 1320 is stored as the unscaled intermediate buffer 1360 and the scaled sub-bitmap 1330 is stored as the scaled intermediate buffer 1370. Those intermediate buffers 1360 and 1370 are for later use by refinement segment decoding, for example by the segment E 150 in Fig. 1. The process 1400 then finishes.
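The per-segment flow of steps 1420-1470 can be sketched as follows. This is an illustrative sketch only: bitmaps are modelled as sets of (x, y) pixel coordinates, the decode/scale/combine operations are toy stand-ins for the real JBIG2 operations, and all names are assumptions.

```python
# Toy model of the flow of Fig. 14: decode, scale, optionally refine, then
# either render to the page buffers or park both forms as intermediates.

def rescale(bitmap, factor):
    # stand-in for step 1430: scale pixel coordinates by an integer factor
    return {(x * factor, y * factor) for (x, y) in bitmap}

def process_segment(seg, unscaled_page, scaled_page, intermediates, factor):
    sub = set(seg["pixels"])                        # step 1420: decode (unscaled)
    scaled = rescale(sub, factor)                   # step 1430: scale
    if seg.get("refines") is not None:              # type A refinement in step 1430
        _unscaled_ref, scaled_ref = intermediates.pop(seg["refines"])
        scaled ^= scaled_ref                        # refine (XOR as the operator)
    if seg.get("will_be_refined"):                  # decision step 1440
        intermediates[seg["id"]] = (sub, scaled)    # step 1470: keep both forms
    else:
        unscaled_page |= sub                        # step 1450: render unscaled
        scaled_page |= scaled                       # step 1460: render scaled
```

Running segment B (marked for later refinement) followed by refinement segment E shows both forms of B being stored at step 1470 and then consumed and released when E is decoded.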
Because unscaled bitmaps for the segments used by the refinement procedure are required for decompression of a refinement segment, the unscaled bitmaps (e.g. 1360) have to be stored in the memory 1506. The method 1400 advantageously can identify for which particular segments unscaled bitmaps have to be stored, thereby minimising memory utilisation during decompression and scaling.

Second Implementation

[0068] As described in the first implementation with reference to Fig. 13 and Fig. 14, two output page buffers are allocated, for both the unscaled output page 1340 and the scaled output page 1350. The unscaled output page buffer 1340 is used for the decoding process of type B refinement segments which refine a region in the output page at step 1420. For example, the refinement segment G 160 refines the region 212 in the output page buffer 110. When decoding the segment G 160, the data corresponding to the region 212 is read from the unscaled output page buffer 1340 at step 1420. In most cases, the region being refined 212 is smaller than the output page 110, thereby affording room for improvement or optimisation. An implementation is now described to eliminate the use of the full-page unscaled output buffer 1340, thereby affording a significant advantage of reduced cost and processing time.

[0069] Fig. 2 shows the relationship between the refinement region buffer and the refinement region. In the example of Fig. 2, there are three refinement regions in an output page 210, being refinement regions 211, 212 and 213. The refinement regions 211, 212 and 213 will be refined by type B refinement segments, such as the segment G 160, during the refinement segment decoding process. Each refinement region 211, 212, 213 within the output page 210 has a corresponding refinement region buffer 240 which contains a cumulative bitmap in unscaled form corresponding to the refinement region in the output page 210.
A particular refinement region buffer 241 corresponds to the refinement region 211, a refinement region buffer 242 corresponds to the refinement region 212 and a refinement region buffer 243 corresponds to the refinement region 213.

[0070] Every output page has one refinement region list. In the example given in Fig. 2, a refinement region list 220 contains an output page ID 221 which indicates the output page 210, and three refinement region data 222, 223 and 224 containing information for the corresponding refinement regions 211, 212 and 213. The refinement region list 220 is built during segment pre-processing, for which more details will be described later in method 700 with reference to Fig. 7.

[0071] Each of the refinement region data 222, 223, 224 is a data structure shown by the structures 230 in Fig. 2. The refinement region data 230 contains information about the refinement region. The data 230 includes the coordinates 231-234 of the region, (X0, Y0) and (X1, Y1), on the output page 210, the refinement segment id SID 236 and the address A 235 of the refinement region buffer. Coordinates X0 231 and Y0 232 define the top left corner of the refinement region 211, 212, 213 and X1 233 and Y1 234 define the bottom right corner of the refinement region 211, 212, 213 with respect to the output page region 210. The buffer address A 235 points to the corresponding refinement region buffer 240. The refinement segment ID 236 indicates which refinement segment is associated with the corresponding refinement region.

[0072] For example, if the refinement region 211 in Fig. 2 is the same as the region 212 in Fig. 1, then the refinement segment ID 236 of the corresponding refinement region data 222 is the ID of the segment G 160 and the refinement region buffer 241 contains the cumulative bitmap data in unscaled form corresponding to the refinement region 212.
The refinement region buffer 241 is rendered during the segment decoding process before the refinement segment G 160 is decoded. When decoding the type B refinement segment G 160, which refines the output page region 212, the data from the refinement region buffer 241 is then used. Details of rendering into the refinement region buffer and decoding the refinement segment are described later.

[0073] Fig. 5 schematically shows an example of the second implementation. A JBIG2 encoded segment 510 is decoded to produce an unscaled sub-bitmap 520. Scaling the unscaled sub-bitmap 520 produces a scaled sub-bitmap 530. The scaled sub-bitmap 530 is then rendered to the scaled output page buffer 550 (scaled cumulative bitmap) which cumulatively stores the scaled final bitmap, effectively performing a composite operation. Refinement region buffer(s) 240 are provided, each containing a cumulatively stored bitmap in unscaled form corresponding to a refinement region in the unscaled output page region. The refinement region buffer(s) 240 are used for decoding the type B refinement segments and contrast with the unscaled output page buffer 1340 of the first implementation. An unscaled intermediate buffer 560 and a scaled intermediate buffer 570 are used to decode the type A refinement segment. For those segments which are not going to be refined, the unscaled sub-bitmap 520 is released straight away after scaling (see step 1060 to be described), while for other segments which are going to be refined by a type A refinement segment, the unscaled sub-bitmap 520 is stored as the unscaled intermediate buffer 560 (a reusable part) until the corresponding refinement process (and scaling) is finished. For other segments which output a region intersecting with a region of a type B refinement segment, a portion of the unscaled sub-bitmap 520 is composited to the refinement region buffer 240 before the unscaled sub-bitmap 520, and preferably the buffer 560, are released.
The refinement region buffer 240 is kept until the corresponding refinement process is finished. The details of processing a segment are described below with reference to Figs. 6, 7, 8, 9 and 10.

[0074] Fig. 6 is a flow diagram showing a method 600 of processing a set of encoded segments, which together comprise an encoded image such as a JBIG2 image. The method 600 is preferably implemented as software, for example as part of the program 1533 executable by the processor 1505. The process 600 starts at pre-processing step 610, in which all segments 510 are pre-processed (1) to identify those segments which are to be refined later by type A refinement segments, and (2) to create the refinement region list 220 which is used for decoding type B refinement segments for those regions on the output page which are to be refined later. These processes may be performed in either order, or alternately as required for the particular segment being pre-processed. A method of performing step 610 in detail is described below with reference to Fig. 7. At step 620, each of the segments 510 is decoded and scaled, and each scaled sub-bitmap 530 is rendered to the scaled output page buffer 550 to form the final scaled image. A method of decoding and scaling each segment is described below with reference to Fig. 8.

[0075] The method 700 of pre-processing segments, as executed at step 610, will be described with reference to Fig. 7. The method 700 begins at reading step 720, where the segment data 510 is read, for example from the HDD 1510, into the memory 1506. Next, at decision step 730, the segment data 510 is partially decoded and the segment header is checked by the processor 1505 to determine whether the segment is a refinement segment or not. If the segment is not a refinement segment, for example the segment K 170 in Fig. 1, the method 700 goes to decision step 770. If the segment is a refinement segment, for example the segment G 160 in Fig. 1, the method proceeds to decision step 740.
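Under the assumption that partially decoded segment headers expose their refinement targets, the two outcomes of pre-processing step 610 might be sketched as follows; the segment record layout and field names are illustrative, not the JBIG2 wire format.

```python
# Illustrative single pass over the segments: mark reference segments that
# will be refined later (step 750) and build the refinement region list
# (step 760). Buffer addresses are left NULL until allocation (step 780).

def preprocess(segments):
    will_be_refined = set()   # IDs of segments marked "will be refined later"
    region_list = []          # refinement region list 220
    for seg in segments:
        if seg.get("refines_segment") is not None:        # type A refinement
            will_be_refined.add(seg["refines_segment"])
        elif seg.get("refines_region") is not None:       # type B refinement
            x0, y0, x1, y1 = seg["refines_region"]
            region_list.append({"sid": seg["id"],
                                "x0": x0, "y0": y0, "x1": x1, "y1": y1,
                                "buffer": None})          # address A set later
    return will_be_refined, region_list
```

For the Fig. 1 example, segment E marks segment B, while segment G contributes a region entry whose buffer address remains NULL until step 780.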
[0076] At step 740, the partially decoded segment data is checked to determine whether the segment is a type A refinement segment, such as the segment E 150, or a type B refinement segment, such as the segment G 160. If the partially decoded segment data indicates the segment is going to refine another segment, for example the data of the segment E 150 indicates the segment E 150 is going to refine the segment B 140, then the segment is a type A refinement segment. The method 700 then proceeds to a marking step 750, where the corresponding reference segment, for example the segment B 140, is marked as "will be refined later". The method 700 then goes to step 770.

[0077] Back at step 740, if the partially decoded segment data indicates the segment is going to refine a region on the output page, for example the data of the segment G 160 indicates the segment G 160 is going to refine the region 212 on the output page 110, then the segment is a type B refinement segment. If decision step 740 determines that the segment is a type B refinement segment, for example the segment G 160, the method 700 goes to step 760. At step 760, a new refinement region data 230 is created by the processor 1505 to represent the region which will be refined by the type B refinement segment. For example, if the refinement segment is the segment G 160, the following operations are performed:

SID (236) <- id of segment G (160)
(X0 (231), Y0 (232)) <- coordinates of the top-left corner of the region 212
(X1 (233), Y1 (234)) <- coordinates of the bottom-right corner of the region 212
A (235) <- NULL

[0078] The ID of the segment G 160 is stored as SID 236. (X0, Y0) in the refinement region data 230 are stored as the coordinates of the top-left corner of the region 212, and (X1, Y1) in the refinement region data are stored as the coordinates of the bottom-right corner of the region 212. The address of buffer A 235 is set to NULL.
This address will be set to a new refinement region buffer (240) later at step 780. The newly created refinement region data 230 is added to the end of refinement region list 220. [0079] Next at decision step 770, if there are more segments in the encoded image, the method 700 goes back to reading step 720 to process the next segment. If there are no more segments to be processed, the method 700 goes to allocation step 780, where a refinement region buffer 240 for the refinement regions 211, 212, 213 is allocated. For example, a new refinement region buffer 240/241 is allocated for refinement region 211.
The buffer address A 235 of the refinement region data 222 is set to the new refinement region buffer 240/241.

[0080] There are two ways to allocate the refinement region buffer 240. One way is to allocate an individual buffer for each refinement region separately. Thus the buffer address A 235 of each refinement region data 230 points to the corresponding refinement region buffer 240. For example, as shown in Fig. 2, three refinement region buffers 240 are allocated corresponding to the three refinement regions 211, 212 and 213. The buffer address A 235 of the refinement region data 230 of each different refinement region, such as the first refinement region data 222, the second refinement region data 223 and the third refinement region data 224, points to the corresponding buffer 241, 242 and 243.

[0081] Another way is to allocate a single buffer which is large enough to hold all of the refinement regions 211, 212, 213 at the same time. Fig. 3 shows how a single buffer is allocated. As shown in Fig. 3, the single buffer 240 is a buffer with a spatial size the same as the spatial size of the combined region of all the refinement regions when unioned together. That means the top-left coordinates (x, y) of the output region corresponding to the buffer 240 are the minimum X0 and Y0 values among all X0 and Y0 of all the refinement regions, and the bottom-right coordinates (x, y) of the output region corresponding to the buffer 240 are the maximum X1 and Y1 values among all X1 and Y1 of all the refinement regions. Within the single refinement region buffer 240, the portion 241 is the buffer corresponding to the refinement region 211, the portion 242 corresponds to the refinement region 212 and the portion 243 corresponds to the refinement region 213. The manner of allocation of the buffer 240 is decided by comparing the size of the single large buffer 240 with the sum of the sizes of all the individual buffers 241, 242, 243.
If the memory size of the single buffer is smaller, then a single refinement region buffer 240 is preferably allocated. Otherwise, a set of individual refinement region buffers 240 is allocated. The method 700 ends.

[0082] Fig. 4 is a diagram showing the output region of a non-refinement segment intersecting, and therefore defining a spatial arrangement, with the refinement region of a type B refinement segment. As shown in Fig. 4, a segment list 330 contains a set of segments. Among the segments, a segment C 340 is a non-refinement reference segment which, when decoded, produces pixels which are to be composited with a reference output region 320 on the output page 210. The segment G 160 is a type B refinement segment which refines the refinement region 212 on the output page 210. A region 310 represents the intersection of the reference output region 320 of the reference segment C 340 and the refinement region 212 of the refinement segment G 160, and may be considered as a reusable part of the reference region 320 to be decoded from the reference segment 340.

[0083] Fig. 8 is a flow diagram showing a method 800 of decoding and scaling segments, as executed at step 620 discussed with reference to Fig. 6. The method 800 starts at reading step 820, where the segment data 510 is read by the processor 1505 into the memory 1506. If the segment is a refinement segment, extra data, such as the previously stored intermediate buffer of the corresponding reference segment of a type A refinement segment, or the data of the refinement region which corresponds to the current type B refinement segment, is also read into the memory 1506.

[0084] If this segment is a type A refinement segment, for example the segment E 150 in Fig. 1, which refines a reference segment, for example the segment B 140 in Fig.
1, then an unscaled intermediate buffer 560 which corresponds to the reference segment B 140, previously stored in step 840 (to be described) during decoding of the segment B 140, is also read into the memory 1506.

[0085] If this segment 510 is a type B refinement segment which refines a region of the output page, then the region data from the corresponding refinement region buffer 240 is read into the memory 1506. The corresponding refinement region buffer 240 is located by first searching for the segment ID SID 236 in the refinement region list 220 to find the corresponding refinement region data 230, then using the buffer address A 235 of the refinement region data 230 to locate the refinement region buffer 240. For example, as shown in Fig. 4, if the current segment is the type B refinement segment G 160, after the data of the segment G 160 is read into the memory 1506, the refinement region list 220 is searched to find the refinement region data 230 which contains the ID of the segment G 160. As the refinement region data 223 has the ID of the segment G 160, which was set at step 760, the refinement region data 223 is used to locate the refinement region buffer 240/242 via the buffer address A 235. Then the data from the corresponding refinement region buffer 240/242 is also read into the memory 1506. After all required data is read into the memory 1506, the method 800 goes to the next step 830.
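The refinement region data 230 and the SID-based lookup just described might be modelled as follows; the field names mirror the labels of Fig. 2, but the Python types and the lookup helper are assumptions for illustration.

```python
# Hypothetical rendering of refinement region data 230 and the lookup of
# [0085]: search the region list by refinement segment ID, then follow the
# buffer address to the refinement region buffer.
from dataclasses import dataclass

@dataclass
class RefinementRegionData:
    x0: int                 # 231: top-left x on the output page
    y0: int                 # 232: top-left y
    x1: int                 # 233: bottom-right x
    y1: int                 # 234: bottom-right y
    sid: object             # 236: ID of the associated refinement segment
    buffer: object = None   # 235: address A of the refinement region buffer

def find_region_for_segment(region_list, segment_id):
    """Return the refinement region data whose SID matches, or None."""
    for data in region_list:
        if data.sid == segment_id:
            return data
    return None
```

In the Fig. 4 example this corresponds to finding data 223 for segment G 160 and following its address A 235 to buffer 240/242.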
[0086] Next, at decision step 830, the segment data are checked to determine whether the segment is marked as "will be refined later" or not. If the segment is not to be refined later by a type A refinement segment, the method 800 continues at step 850.

[0087] At decision step 850, the output region of the segment is compared by the processor 1505 to the regions in the refinement region list 220. If the output region of the segment does not intersect with any of the refinement regions in the refinement region list 220, the method 800 proceeds to step 870.

[0088] At step 870, a scaled sub-bitmap 530 is generated by the processor 1505 decoding and scaling the segment data which was read into the memory 1506 at step 820. The decoding is typically performed according to the JBIG2 standard, which decodes to original size, and the scaling is performed as discussed above based on an input scale factor as determined by the user or a destination application calling for the decoding.

[0089] If the segment is a text type segment, the text symbols contained in the dictionary defined by the JBIG2 specification referred to by this text segment are scaled by a required transformation factor. The transformation factor may be pre-determined, for example by an application seeking to decompress the image into a bounding box of user determined size. Alternatively, the transformation factor may be selected directly according to user requirements. Then the placement coordinates decoded from the text segment are also scaled by the required transformation factor. The scaled text symbols are then placed in the scaled sub-bitmap 530 using the transformed placement coordinates. When a text segment is decoded and scaled directly by scaling the symbols and then placing the scaled symbols at scaled placement positions, it is possible that a symbol is required to be placed at a sub-pixel position.
To handle this situation, preferably an adjusted scaled symbol is generated according to the sub-pixel position to compensate for the sub-pixel placement. The scaled symbol is generated by considering both the scale factor and the translation factor determined by the sub-pixel placement position. For example, if the scaled symbol is to be placed at coordinates of (100.3, 100.7) where 100.3 is the x coordinate and 100.7 is the y coordinate, then the scaled symbol is generated using the required scaling factor and a translation factor of (0.3, 0.7). The adjusted scaled symbol is then placed at the integer position (100, 100).
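The split of a fractional placement coordinate into an integer position and a translation factor, as in the (100.3, 100.7) example above, can be expressed as a short sketch (the function name is illustrative):

```python
# Split a sub-pixel placement coordinate into the integer position where the
# adjusted symbol is placed and the fractional translation factor used when
# generating the adjusted scaled symbol.
import math

def split_placement(x, y):
    ix, iy = math.floor(x), math.floor(y)
    return (ix, iy), (x - ix, y - iy)
```

For (100.3, 100.7) this yields the integer position (100, 100) and the translation factor (0.3, 0.7), matching the worked example in the text.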
[0090] For other types of segments, an unscaled sub-bitmap 520 is decoded first and then scaled by the required transformation factor. The result is stored as the scaled sub-bitmap 530.

[0091] If the segment is a type A refinement segment, the scaled sub-bitmap 530 is then combined with the previously stored scaled intermediate buffer 570 corresponding to the reference segment which the refinement segment refines. For example, if the current segment which is being decoded is the segment E 150, the scaled sub-bitmap 530 of the segment E 150 is then combined with the scaled intermediate buffer 570 of the reference segment B 140, which was previously stored in step 960 (to be described) during decoding of the segment B 140. The result is a refined scaled sub-bitmap which is stored back to the scaled sub-bitmap 530. Then the intermediate buffers 560 and 570 of the reference segment B 140 are released.

[0092] If the segment is a type B refinement segment 160, the corresponding refinement region buffer 240/242 is released after the scaled sub-bitmap 530 is generated.

[0093] Next, at rendering step 880, the scaled sub-bitmap 530 is rendered to the scaled output page buffer 550. The scaled sub-bitmap 530 is released after rendering. The method 800 then goes to step 890.

[0094] Returning to the decision step 830, if the segment will be refined later by a type A refinement segment, the method 800 goes to type A refinement segment processing step 840, where the segment is decoded and scaled by method 900, which will be described later with reference to Fig. 9. After the method 900 returns, the method 800 continues at step 890.

[0095] At step 850, if the output region of the segment intersects or overlaps one or more refinement regions as shown in Fig. 4, the method 800 proceeds to type B refinement segment processing step 860, where the segment is decoded and scaled by method 1000, which will be described later with reference to Fig. 10.
After the method 1000 returns, the method 800 goes to step 890.

[0096] At step 890, if there are more segments, the method 800 goes back to reading step 820 to process the next segment. If there are no more segments to be processed, the method 800 finishes.
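The region comparison performed at decision step 850 (and again at step 940 described below) is a rectangle intersection test; a minimal sketch, with regions as (x0, y0, x1, y1) tuples, is:

```python
# Intersection of two axis-aligned regions. The returned rectangle is the
# reusable part (region 310 in Fig. 4) when a segment's output region
# overlaps a refinement region; None means no overlap.

def intersect(a, b):
    x0, y0 = max(a[0], b[0]), max(a[1], b[1])
    x1, y1 = min(a[2], b[2]), min(a[3], b[3])
    if x0 >= x1 or y0 >= y1:
        return None
    return (x0, y0, x1, y1)
```

In the Fig. 4 example, intersecting the reference output region 320 with the refinement region 212 yields the reusable part 310 that is rendered into the refinement region buffer.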
[0097] The method 900 of decoding and scaling a segment which will be refined later by a type A refinement segment, as executed at step 840, will be described with reference to Fig. 9. The method 900 begins at decoding step 920, where an unscaled sub-bitmap 520 is generated by the processor 1505 decoding the segment data which was read into the memory 1506 at step 820.

[0098] At a scaling step 930, the unscaled sub-bitmap 520 is scaled to the required size. The result is stored as a scaled sub-bitmap 530. The transformation method of the scaling process is determined by the type of the segment. For example, a vectorization scaling technique can be used to scale text segments, while nearest neighbour interpolation can be used for other types of segments. The scaling method of a refinement segment is chosen based on the type of the reference segment which the refinement segment is going to refine. If the current segment 510 is a type A refinement segment, the scaled sub-bitmap 530 is then combined, by a compositing or cumulative operation, with the previously stored scaled intermediate buffer 570 corresponding to the reference segment which the refinement segment refines. The result of the refinement is then the new scaled sub-bitmap 530. For example, if the current segment which is being decoded is the segment E 150, the scaled sub-bitmap 530 of the segment E 150 is then combined with the scaled intermediate buffer 570 of the reference segment B 140, which was previously stored in step 960 during the segment B decoding. The result is a refined scaled sub-bitmap which is stored back to the scaled sub-bitmap 530. Then, the intermediate buffers 560 and 570 of the reference segment B 140 are released.

[0099] Next, at decision step 940, the unscaled output region of the current segment is compared to the regions in the refinement region list 220.
If the output region of the current segment does not intersect any of the refinement regions in the list, the method 900 proceeds to step 960. Otherwise, if the output region of the segment 510 intersects with at least one refinement region, the method 900 proceeds to rendering step 950, where a portion of the unscaled sub-bitmap 520 is rendered to one or more refinement region buffers 240 corresponding to refinement segments whose regions intersect the output region of the current segment 510. For example, if the currently decoded segment 510 is the segment C 340 shown in Fig. 4, its output region 320 intersects the refinement region 212. Then a portion of the unscaled sub-bitmap 520 of the segment C 340 is rendered and stored in a corresponding refinement region buffer 240/242 which is located by the buffer address 235 of the refinement region data 223. The rendered and stored portion of the unscaled sub-bitmap 520 is the part covering the intersection region 310. The method 900 then proceeds to step 960.

[00100] At buffering step 960, the unscaled sub-bitmap 520 is saved as an unscaled intermediate buffer 560 and the scaled sub-bitmap 530 is saved as a scaled intermediate buffer 570. Those intermediate buffers are for use in decoding and scaling a type A refinement segment later. The method 900 then finishes.

[00101] The method 1000 of decoding and scaling a segment whose output region intersects a refinement region, as executed at step 860, will be described with reference to Fig. 10. The method 1000 begins at decoding step 1020, where an unscaled sub-bitmap 520 is generated by decoding the segment data which was read into memory at step 820.

[00102] If the segment is a text segment, only a portion of the sub-bitmap is decoded. As shown in Fig.
4, if the segment C 340 is a text segment and has been decoded at step 1020 of the method 1000, then the only portion that needs to be decoded is the region 310, which is the intersection between the refinement region 212 and the output reference region 320 associated with the segment C 340, which is the currently decoded segment 510.

[00103] At scaling step 1030, a scaled sub-bitmap 530 for the segment 510 is generated. If the segment is a text type segment, the text symbols contained in the dictionary referred to by this text segment are scaled by the required transformation factor. Then the placement coordinates decoded from the text segment are also scaled by the required transformation factor. The scaled text symbols are then placed in the scaled sub-bitmap 530 using the transformed placement coordinates. When a text segment is decoded and scaled directly by scaling the symbols and then placing the scaled symbols at scaled placement positions, it is possible that a symbol is required to be placed at a sub-pixel position. To handle this situation, an adjusted scaled symbol is generated according to the sub-pixel position to compensate for the sub-pixel placement. The scaled symbol is preferably generated by considering both the scale factor and the translation factor determined by the sub-pixel placement position. For example, if the scaled symbol will be placed at coordinates of (100.3, 100.7), where 100.3 is the x coordinate and 100.7 is the y coordinate, then the scaled symbol is generated using the required scaling factor and a translation factor of (0.3, 0.7). The adjusted scaled symbol is then placed at the integer position (100, 100).

[00104] For other types of segments, the unscaled sub-bitmap 520 is scaled to the required size. The result is stored as a scaled sub-bitmap 530. The scaling process used in step 1030 is determined by the type of the segment. For example, nearest neighbour interpolation can be used for other types of segments.
The scaling method of a refinement segment is chosen based on the type of the reference segment which the refinement segment is going to refine. If the current segment 510 is a type A refinement segment, the scaled sub-bitmap 530 is then combined with the previously stored scaled intermediate buffer 570 corresponding to the segment which the refinement segment refines. For example, if the current segment which is being decoded is a type A refinement segment such as the segment E 150, the scaled sub-bitmap 530 of the segment E 150 is then combined with the scaled intermediate buffer 570 of the reference segment B 140, which was previously stored in step 960 during the segment B decoding. The combination is performed by the processor 1505 according to the predetermined refinement operation, for example "XOR". The result is a refined scaled sub-bitmap which is stored back to the scaled sub-bitmap 530. Then the intermediate buffer 570 of the reference segment B 140 is released.

[00105] Next, at rendering step 1040, a portion of the unscaled sub-bitmap 520 is rendered to the corresponding refinement region buffer 240. For example, if the currently decoded segment 510 is the segment C 340 shown in Fig. 4, the output region 320 of the segment C 340 intersects with the refinement region 212. Then a portion of the unscaled sub-bitmap 520 of the segment is rendered and stored in the corresponding refinement region buffer 240/242, which is located in the memory 1506 by the buffer address 235 of the refinement region data 223. The rendered and stored portion of the unscaled sub-bitmap 520 is the part covering the intersection region 310. The method 1000 then proceeds to step 1050.

[00106] At rendering step 1050, the scaled sub-bitmap 530 is composited to the scaled output page buffer 550. At the next, releasing step 1060, the unscaled sub-bitmap 520 and the scaled sub-bitmap 530 are released. The method 1000 then finishes.
Rotation implementation

[00107] The previously described implementations describe the method of decompressing and scaling the JBIG2 encoded image during the decoding process. The same technique can be applied to combine scaling and rotation during the decoding process, to produce a scaled and rotated JBIG2 image.

Bit depth conversion implementation

[00108] The JBIG2 compression format encodes an image with a bit depth of 1 bit per pixel. To achieve a high quality scaling result, the 1-bit depth pixel data may be converted to a higher bit depth, such as 8 bits per pixel, before scaling. Because scaling is performed before combining sub-bitmaps, sub-bitmaps may require combination at a higher bit depth.

[00109] The JBIG2 binary combination operators are only defined for 1-bit operations. The following are the formulae for simulating the effect of the JBIG2 binary operators on multi-bit channels whose values are normalized to lie between 0 and 1:

a AND b = ab
a OR b = a + b - ab
a XOR b = a + b - 2ab
a XNOR b = 1 + 2ab - a - b
a REPLACE b = a

INDUSTRIAL APPLICABILITY

[00110] The arrangements described are applicable to the computer and data processing industries and particularly for the decompression of images, particularly JBIG2 images, where there is a need for scaling the decompressed result.

[00111] The foregoing describes only some embodiments of the present invention, and modifications and/or changes can be made thereto without departing from the scope and spirit of the invention, the embodiments being illustrative and not restrictive.

[00112] (Australia Only) In the context of this specification, the word "comprising" means "including principally but not necessarily solely" or "having" or "including", and not "consisting only of". Variations of the word "comprising", such as "comprise" and "comprises", have correspondingly varied meanings.
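The multi-bit generalizations of the JBIG2 binary operators given in paragraph [00109] can be expressed directly in code. This is an illustrative sketch: channel values are assumed to be normalized to the range [0, 1] (for 8-bit channels, divide by 255 first), and the function name is an assumption, not part of the specification.

```python
def multibit_op(op, a, b):
    """JBIG2 binary combination operators generalized to channel values
    in [0, 1], per the formulae in paragraph [00109].  For 1-bit inputs
    (a, b in {0, 1}) each formula reduces to the exact binary operator."""
    if op == "AND":
        return a * b
    if op == "OR":
        return a + b - a * b
    if op == "XOR":
        return a + b - 2 * a * b
    if op == "XNOR":
        return 1 + 2 * a * b - a - b
    if op == "REPLACE":
        return a
    raise ValueError("unknown operator: %s" % op)
```

On binary inputs these formulae agree with the 1-bit operators (for example, 1 XOR 1 = 1 + 1 - 2 = 0), while intermediate values such as 0.5 blend smoothly, which is what makes the higher-bit-depth combination of scaled sub-bitmaps possible.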

Claims (20)

1. Method for generating a transformed image associated with an original image, the original image being encoded into a plurality of segments, the method comprising: identifying a refinement region using at least one refinement segment from the plurality of segments; determining, based on a spatial arrangement of the identified refinement region relative to a reference region associated with a reference segment, a reusable part of the reference region to be decoded from the reference segment; decoding and storing the reusable part of the reference region in an untransformed form to be used to decode the refinement segment; and generating the transformed image by transforming a refinement portion in accordance with a transformation factor, wherein the refinement portion is produced by decoding the refinement segment using the decoded reusable part of the reference region stored in the untransformed form.
2. A method according to claim 1, wherein each segment is associated with a decoding process and a pre-determined transformation factor.
3. A method according to claim 1, wherein the reusable part of the reference region is determined based on a position of the refinement region within the image relative to regions associated with other segments from the plurality of segments, the reusable part of the reference region corresponding to an area of overlap between the refinement region and the reference region.
4. A method according to claim 1, wherein the transformation comprises scaling.
5. A method according to claim 1, further comprising storing the decoded reusable part associated with the reference segment.
6. A method according to claim 5, further comprising discarding the stored reusable part once all segments affected by said reusable part have been transformed.
7. A method according to claim 1, further comprising determining a type of the segment, being one of text, image, and graphics.
8. A method according to claim 7, wherein the transforming is performed using a transformation method associated with the determined segment type.
9. A method according to claim 1 further comprising converting decoded bitmaps to a bit depth greater than 1, transforming the converted bitmaps, and combining the transformed bitmaps.
10. A method according to claim 1, wherein a text type segment is decoded and scaled considering both the transformation factor and sub-pixel-placement.
11. Method for generating an image transformed using a pre-determined transformation, the transformed image being associated with an original image, the method comprising: receiving the original image associated with a plurality of portions, the plurality of portions comprising at least a reference portion and a refinement portion; determining a reusable part of the reference portion based on a region of overlap in an output image between the regions in the output image associated with the reference portion and the refinement portion; and generating the transformed image by transforming the refinement portion in accordance with the pre-determined transformation using the determined reusable part of the reference portion stored in an untransformed form.
12. A non-transitory computer readable storage medium having a program recorded thereon, the program being executable by a processor to generate an image transformed using a pre-determined transformation, the transformed image being associated with an original image, the program comprising: code for receiving the original image associated with a plurality of portions, the plurality of portions comprising at least a reference portion and a refinement portion; code for determining a reusable part of the reference portion based on a region of overlap in an output image between the regions in the output image associated with the reference portion and the refinement portion; and code for generating the transformed image by transforming the refinement portion in accordance with the pre-determined transformation using the determined reusable part of the reference portion stored in an untransformed form.
13. A non-transitory computer readable storage medium according to claim 12, wherein the program generates a transformed image associated with an original image, the original image being encoded into a plurality of segments, and comprises: code for identifying a refinement region using at least one refinement segment from the plurality of segments; code for determining, based on a spatial arrangement of the identified refinement region relative to a reference region associated with a reference segment, a reusable part of the reference region to be decoded from the reference segment; code for decoding and storing the reusable part of the reference region in an untransformed form to be used to decode the refinement segment; and code for generating the transformed image by transforming a refinement portion in accordance with a transformation factor, wherein the refinement portion is produced by decoding the refinement segment using the decoded reusable part of the reference region stored in the untransformed form.
14. A non-transitory computer readable storage medium according to claim 13, wherein each segment is associated with a decoding process and a pre-determined scaling factor and the reusable part of the reference region is determined based on a position of the refinement region within the image relative to regions associated with other segments from the plurality of segments, the reusable part of the reference region corresponding to an area of overlap between the refinement region and the reference region.
15. A non-transitory computer readable storage medium according to claim 13, further comprising storing the decoded reusable part associated with the reference segment.
16. A non-transitory computer readable storage medium according to claim 15, further comprising discarding the stored reusable part once all segments affected by said reusable part have been transformed.
17. A non-transitory computer readable storage medium according to claim 16, further comprising determining a type of the segment, being one of text, image, and graphics, wherein the transforming is performed using a transformation method associated with the determined segment type.
18. A non-transitory computer readable storage medium according to claim 13 further comprising converting decoded bitmaps to a bit depth greater than 1, transforming the converted bitmaps, and combining the transformed bitmaps.
19. A non-transitory computer readable storage medium according to claim 13, wherein a text type segment is decoded and scaled considering both the transformation factor and sub-pixel-placement.
20. Computer apparatus for generating a transformed image associated with an original image, the original image being encoded into a plurality of segments, the apparatus comprising: means for identifying a refinement region using at least one refinement segment from the plurality of segments; means for determining, based on a spatial arrangement of the identified refinement region relative to a reference region associated with a reference segment, a reusable part of the reference region to be decoded from the reference segment; means for decoding and storing the reusable part of the reference region in an untransformed form to be used to decode the refinement segment; and means for generating the transformed image by transforming a refinement portion in accordance with a transformation factor, wherein the refinement portion is produced by decoding the refinement segment using the decoded reusable part of the reference region stored in the untransformed form.

CANON KABUSHIKI KAISHA
Patent Attorneys for the Applicant
Spruson & Ferguson
AU2013248237A 2013-10-25 2013-10-25 Image scaling process and apparatus Abandoned AU2013248237A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2013248237A AU2013248237A1 (en) 2013-10-25 2013-10-25 Image scaling process and apparatus

Publications (1)

Publication Number Publication Date
AU2013248237A1 2015-05-14

Family

ID=53054278

Family Applications (1)

Application Number Title Priority Date Filing Date
AU2013248237A Abandoned AU2013248237A1 (en) 2013-10-25 2013-10-25 Image scaling process and apparatus

Country Status (1)

Country Link
AU (1) AU2013248237A1 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110049326A (en) * 2019-05-28 2019-07-23 广州酷狗计算机科技有限公司 Method for video coding and device, storage medium
CN110049326B (en) * 2019-05-28 2022-06-28 广州酷狗计算机科技有限公司 Video coding method and device and storage medium
CN110177275A (en) * 2019-05-30 2019-08-27 广州酷狗计算机科技有限公司 Method for video coding and device, storage medium
CN110177275B (en) * 2019-05-30 2022-09-30 广州酷狗计算机科技有限公司 Video encoding method and apparatus, and storage medium

Legal Events

Date Code Title Description
MK4 Application lapsed section 142(2)(d) - no continuation fee paid for the application