US20130039596A1 - Optimized method and system for entropy coding - Google Patents
Optimized method and system for entropy coding
- Publication number
- US20130039596A1 (U.S. application Ser. No. 13/594,082)
- Authority
- US
- United States
- Prior art keywords
- dct
- zero
- coefficient
- dct coefficients
- statistics
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H03—ELECTRONIC CIRCUITRY
- H03M—CODING; DECODING; CODE CONVERSION IN GENERAL
- H03M7/00—Conversion of a code where information is represented by a given sequence or number of digits to a code where the same, similar or subset of information is represented by a different sequence or number of digits
- H03M7/30—Compression; Expansion; Suppression of unnecessary data, e.g. redundancy reduction
- H03M7/40—Conversion to or from variable length codes, e.g. Shannon-Fano code, Huffman code, Morse code
- H03M7/4031—Fixed length to variable length coding
- H03M7/4037—Prefix coding
- H03M7/4043—Adaptive prefix coding
- H03M7/4062—Coding table adaptation
-
- H—ELECTRICITY
- H03—ELECTRONIC CIRCUITRY
- H03M—CODING; DECODING; CODE CONVERSION IN GENERAL
- H03M7/00—Conversion of a code where information is represented by a given sequence or number of digits to a code where the same, similar or subset of information is represented by a different sequence or number of digits
- H03M7/30—Compression; Expansion; Suppression of unnecessary data, e.g. redundancy reduction
- H03M7/46—Conversion to or from run-length codes, i.e. by representing the number of consecutive digits, or groups of digits, of the same kind by a code word and a digit indicative of that kind
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/13—Adaptive entropy coding, e.g. adaptive variable length coding [AVLC] or context adaptive binary arithmetic coding [CABAC]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/134—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
- H04N19/136—Incoming video signal characteristics or properties
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/18—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a set of transform coefficients
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/46—Embedding additional information in the video signal during the compression process
- H04N19/463—Embedding additional information in the video signal during the compression process by compressing encoding parameters before transmission
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/60—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/90—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using coding techniques not provided for in groups H04N19/10-H04N19/85, e.g. fractals
- H04N19/93—Run-length coding
Definitions
- This application relates to image compression and, in particular, to an optimized method and system for entropy coding.
- Digital images are commonly used in several applications such as, for example, in digital still cameras (DSC), printers, scanners, etc.
- A digital image includes a matrix of elements, commonly referred to as a bit map. Each element of the matrix, which represents an elemental area of the image (a pixel or pel), is formed by several digital values indicating corresponding components of the pixel.
- Digital images are typically subjected to a compression process to increase the number of digital images which can be stored simultaneously, such as in the memory of a digital camera. This may also make transmission of digital images easier and less time consuming.
- A compression method commonly used in standard applications is the JPEG (Joint Photographic Experts Group) algorithm, described in CCITT T.81, 1992.
- FIG. 1 illustrates an example architecture, within which the system to improve performance during a process of entropy coding may be implemented, in accordance with an example embodiment
- FIG. 2 is a block diagram of an encoder, in accordance with an example embodiment
- FIG. 3 is a flow chart illustrating a method, in accordance with an example embodiment, to generate a compressed image
- FIG. 4 is a block diagram illustrating a statistics generator, in accordance with an example embodiment
- FIG. 5 is a flow chart illustrating a method to generate and store statistics, in accordance with an example embodiment
- FIG. 6 illustrates a diagrammatic representation of a DCT coefficient storage, in accordance with an example embodiment
- FIG. 7 illustrates a diagrammatic representation of a machine in the example form of a computer system within which a set of instructions, for causing the machine to perform any one or more of the methodologies discussed herein, may be executed.
- In order to perform entropy coding, an encoder (e.g., a JPEG encoder) may be configured to build a probability table based on the image statistics.
- the image statistics may be collected for the whole image or, where an image is first divided into smaller portions, for each portion of the image. Each portion may be encoded independently.
- a portion of the image may be, for example, one image block of 8×8 pixels or a subset of the set of all 8×8 blocks of the image.
- a subset of the set of all 8×8 blocks of the image may be referred to as a sub-image.
- for the purposes of this description, references to an image will be understood to encompass embodiments where the image is a sub-image.
- the image statistics for Discrete Cosine Transform (DCT) coefficients (or merely coefficients) associated with a digital image may be stored intelligently in the available bits of the DCT coefficients themselves and then accessed and utilized during an operation of entropy coding.
- the bit-length of a non-zero DCT coefficient may be stored in some of the bits of that coefficient.
- the length of a zero-run may be stored in the first zero coefficient of the run.
- a zero-run is a set of consecutive DCT coefficients, listed in a zigzag order, that have a zero value. If the last DCT coefficient is zero, the position of the last non-zero coefficient may be stored in the last DCT coefficient.
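The three placement rules above can be illustrated with a single statistics pass over a zigzag-ordered coefficient list. The `annotate` helper below is a hypothetical sketch that records the values in a side array for clarity; the described scheme instead packs them into spare bits of the coefficients themselves.

```python
def annotate(coeffs):
    """Record per-position statistics for a zigzag-ordered coefficient list.

    stats[i] holds the bit-length of a non-zero coefficient, or the length
    of a zero-run if i is the first zero of that run; when the block ends
    in zeros, the last slot holds the position of the last non-zero
    coefficient.
    """
    stats = [None] * len(coeffs)
    i = 0
    while i < len(coeffs):
        if coeffs[i] != 0:
            stats[i] = abs(coeffs[i]).bit_length()   # bit-length statistic
            i += 1
        else:
            j = i
            while j < len(coeffs) and coeffs[j] == 0:
                j += 1
            stats[i] = j - i          # run length, stored at the first zero
            i = j
    if coeffs[-1] == 0:
        # position of the last non-zero coefficient, kept in the last slot
        stats[-1] = max((k for k, c in enumerate(coeffs) if c != 0),
                        default=-1)
    return stats
```

For example, `annotate([5, 0, 0, 3, 0, 0, 0, 0])` yields `[3, 2, None, 2, 4, None, None, 3]`: the run of two zeros is noted at index 1, the final run of four at index 4, and the last non-zero position (3) lands in the last slot.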
- FIG. 1 illustrates an example architecture 100 , within which the system to improve performance during a process of entropy coding may be implemented.
- an input digital image 110 is provided to an encoder 200 .
- the encoder 200 may be configured to generate a compressed image 130 , e.g., a JPEG file, that corresponds to the input digital image 110 .
- the compressed image 130 may include headers 132 , tables 134 , and data 136 .
- the architecture 100 may include a decoder to recreate the original input digital image 110 from the compressed image 130 .
- the input digital image 110 may be a raw digital image or a compressed digital image. If the input digital image 110 is a compressed digital image, the encoder 200 may uncompress the compressed digital image to the DCT level and perform operations to generate an improved compressed image (e.g., a compressed image that is smaller in size than the original compressed digital image).
- One example embodiment of the encoder 200 may be described with reference to FIG. 2 .
- FIG. 2 is a block diagram of an encoder 200 , according to one example embodiment.
- the encoder 200 may comprise an input module 202 , a DCT module 206 , a quantizer 240 , an entropy coder 214 and an output module 216 .
- the encoder 200 may receive image pixels, or input digital image, at an input module 202 .
- the input module 202 may provide the input digital image data to the DCT module 206 .
- the DCT module 206 may cooperate with an image blocks module 204 to divide the input digital image into non-overlapping, fixed length image blocks.
- the DCT module 206 may then transform each image block into a corresponding block of DCT coefficients.
- the DCT coefficients may be referred to as frequency domain coefficients, and a block of DCT coefficients may be referred to as a frequency domain image block (or a frequency domain image).
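For reference, the block transform can be written directly from the DCT-II definition. The `dct2` name is illustrative, and this naive O(n^4) form is a sketch only; real encoders use fast factorizations.

```python
import math

def dct2(block):
    """Naive 2-D DCT-II of an n*n block (illustration, not production code)."""
    n = len(block)

    def alpha(k):                      # orthonormal scale factors
        return math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)

    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = sum(block[x][y]
                    * math.cos((2 * x + 1) * u * math.pi / (2 * n))
                    * math.cos((2 * y + 1) * v * math.pi / (2 * n))
                    for x in range(n) for y in range(n))
            out[u][v] = alpha(u) * alpha(v) * s
    return out
```

A constant 8×8 block transforms to a single DC term with all AC terms (numerically) zero, which is why smooth image regions produce the long zero-runs of coefficients discussed in this description.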
- the quantizer 240 may be configured to receive the DCT coefficients generated by the DCT module 206 and to quantize the corresponding set of values utilizing quantization tables. Quantization tables may provide factors indicating how much of the image quality can be reduced for a given DCT coefficient before the image deterioration is perceptible.
- the entropy coder 214 is configured to receive the quantized DCT coefficients from the quantizer 240 and to rearrange the DCT coefficients in a zigzag order. The zigzag output is then compressed using runlength encoding.
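The zigzag rearrangement is a fixed permutation of the 64 raster positions, walking the anti-diagonals of the block so that low-frequency coefficients come first. A sketch (the `zigzag_order` name is illustrative):

```python
def zigzag_order(n=8):
    """Return the zigzag scan as a permutation of raster indices (row*n+col)."""
    order = []
    for s in range(2 * n - 1):         # s indexes the anti-diagonals
        rows = range(max(0, s - n + 1), min(s, n - 1) + 1)
        # alternate the traversal direction on every diagonal
        for i in (rows if s % 2 else reversed(rows)):
            order.append(i * n + (s - i))
    return order
```

The first entries are 0, 1, 8, 16, 9, 2, ..., matching the standard JPEG scan order.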
- the entropy coder 214 may be configured to generate uniquely decodable (UD) codes (e.g., entropy codes). A code is said to be uniquely decodable if the original symbols can be recovered uniquely from sequences of encoded symbols.
- the output module 216 may be configured to generate a compressed version of the input digital image utilizing the generated UD codes.
- the entropy coder 214 uses probability tables to generate entropy codes.
- the probability tables may be Huffman tables and the entropy codes may be Huffman codes.
- a probability tables generator 212 may be configured to generate two sets of probability tables for an image, one for the luminance or grayscale components and one for the chrominance or color components.
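As a sketch of how such probability tables turn into codes, the classic heap-based Huffman construction below assigns shorter codewords to more frequent symbols. It is illustrative only: JPEG itself derives canonical, length-limited code tables from the symbol counts (per the procedure in ITU-T T.81), and the function name is hypothetical.

```python
import heapq
from itertools import count

def huffman_codes(freqs):
    """Textbook Huffman construction from a symbol -> frequency map."""
    tie = count()  # tie-breaker so heap entries stay comparable
    heap = [(f, next(tie), {s: ""}) for s, f in freqs.items()]
    heapq.heapify(heap)
    if len(heap) == 1:                       # degenerate single-symbol case
        return {s: "0" for s in heap[0][2]}
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)      # two least frequent subtrees
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (f1 + f2, next(tie), merged))
    return heap[0][2]
```

For counts {a: 5, b: 2, c: 1} this yields a one-bit code for a and two-bit codes for b and c, and the resulting code set is prefix-free, hence uniquely decodable.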
- the entropy coder 214 may be configured to examine each DCT coefficient associated with an input image or sub image to determine whether a DCT coefficient is zero. The examination of coefficients may stop at the last non-zero entry rather than spanning the entire set of the DCT coefficients. The process of entropy coding may stop after signaling the final zero-run (if any).
- the probability tables generator 212 may generate probability tables utilizing statistics associated with the input image provided by a statistics generator 400 .
- the encoder 200 may be configured such that at least some of the DCT coefficients statistics collected by the statistics generator 400 may be saved for later use by the entropy coder 214 .
- the encoder 200 may utilize a placement module 210 to store the DCT coefficients statistics in the available bits of the associated DCT coefficients. The example operations performed by the encoder 200 may be described with reference to FIG. 3 .
- FIG. 3 is a flow chart illustrating a method 300 , in accordance with an example embodiment, to generate a compressed image.
- the method 300 may be performed by processing logic that may comprise hardware (e.g., dedicated logic, programmable logic, microcode, etc.), software (such as software run on a general purpose computer system or a dedicated machine), or a combination of both. It will be noted that, in an example embodiment, the processing logic may reside in any of the modules shown in FIG. 2 and described above.
- the method 300 commences at operation 302 .
- DCT module 206 receives an image block.
- the DCT module 206 performs a Discrete Cosine Transform (DCT) on the image block at operation 306 to obtain DCT coefficients associated with the image block.
- the statistics generator 400 may generate statistics associated with the DCT coefficients. As the statistics generator 400 obtains a particular statistics value associated with a DCT coefficient, the placement module 210 may store the particular statistics value in the available bits of the DCT coefficient at operation 310 .
- the probability tables generator 212 utilizes the statistics generated at operation 308 to generate probability tables (operation 312 ).
- the entropy coder 214 then accesses and utilizes the statistics stored at operation 310 in the DCT coefficients themselves to perform entropy coding, thereby avoiding at least some of the repeated computations.
- the entropy coding is performed at operation 314 to generate uniquely decodable (UD) codes (e.g., Huffman codes) for the DCT coefficients, utilizing the statistics stored in the DCT coefficients.
- a compressed version of the input block is generated utilizing the UD codes.
- An example statistics generator may be described with reference to FIG. 4 .
- FIG. 4 is a block diagram illustrating some modules of an example statistics generator 400 , in accordance with an example embodiment.
- the statistics generator 400 may comprise a bit-length module 402 , a zero-run detector 404 and a runlength module 406 .
- the bit depth of an image to be encoded may be 8 bits, in which case the maximum bit-length of any coefficient after a DCT equals 12. Because, in one example embodiment, the DCT coefficients may be represented as 16-bit quantities, there may be four additional bits available for storing data per DCT coefficient. This additional space may be utilized to store the bit-length of the DCT coefficients.
- the bit-length module 402 may be configured to calculate the bit-length for a non-zero DCT coefficient and to provide the calculated bit-length to the placement module 210 of FIG. 2 . The placement module 210 may then store the bit-length in the first four bits of the associated DCT coefficient.
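The bit-length of a coefficient is simply the number of bits needed to represent its magnitude. A one-line sketch with a hypothetical helper name:

```python
def coeff_bit_length(c):
    """Bits needed for |c|: 1 for +/-1, 2 for magnitudes 2-3, ..., 11 for 1024."""
    return abs(c).bit_length()
```

Recomputing this value for every coefficient in both the statistics pass and the coding pass is exactly the repeated work the storage scheme avoids.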
- the information pertaining to the run lengths of zero coefficients may be stored in some of the coefficients that are zero.
- the zero-run detector 404 may be configured to detect any zero-run that may be present in the set of DCT coefficients associated with the 8×8 block of the input image or sub-image.
- the runlength module 406 may be configured to calculate the runlength of the detected zero-run.
- the placement module 210 may be configured to store the calculated runlength of the zero-run in the first zero coefficient of the run.
- the statistics generator 400 may further comprise a last non-zero module 408 to determine whether the last DCT coefficient of the input 8×8 block is zero. If the last DCT coefficient from the plurality of DCT coefficients associated with the input 8×8 block is zero, the placement module 210 may store the position of the last non-zero coefficient for future use, e.g., in the last coefficient of the 8×8 block (the 64th coefficient). The operations performed by the statistics generator 400 in cooperation with the placement module 210 may be described with reference to FIG. 5.
- FIG. 5 is a flow chart illustrating a method 500 , in accordance with an example embodiment, to generate and store statistics associated with a plurality of DCT coefficients.
- the method 500 may be performed by processing logic that may comprise hardware (e.g., dedicated logic, programmable logic, microcode, etc.), software (such as software run on a general purpose computer system or a dedicated machine), or a combination of both. It will be noted that, in an example embodiment, the processing logic may reside in any of the modules shown in FIG. 2 and FIG. 4 described above.
- the method 500 commences at operation 502 .
- the statistics generator 400 receives DCT coefficients associated with the 8×8 blocks of the image or sub-image.
- the bit-length module 402 may calculate the bit-lengths for non-zero DCT coefficients at operation 506 and provide the calculated bit-lengths to the placement module 210 of FIG. 2.
- the placement module 210 may store the bit-lengths in some of the available bits of the associated DCT coefficients at operation 506, as shown in FIG. 6.
- FIG. 6 is a diagrammatic representation of a DCT coefficient storage 600.
- bits 0 through 3 of the DCT coefficient storage 600 may be used to store the bit-length of a non-zero DCT coefficient.
- Bits 4 through 15 of the DCT coefficient storage 600 may be used to store the value of a non-zero DCT coefficient.
- the bit-length may be stored in the last four bits of a DCT coefficient storage, or in the middle bits of a DCT coefficient storage.
- the bit-length does not need to be stored in consecutive bits of a DCT coefficient storage.
- the bit-length may be “broken up” and stored, according to a rule, in various locations of a DCT coefficient storage.
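Using the layout of FIG. 6 (bit-length in bits 0 through 3, value in bits 4 through 15), one possible packing looks as follows. This is a sketch that assumes coefficient values fit 12-bit two's complement (-2048..2047); `pack` and `unpack` are hypothetical names.

```python
def pack(c):
    """Pack a non-zero coefficient into 16 bits: 12-bit two's-complement
    value in bits 4-15, its bit-length in bits 0-3 (the FIG. 6 layout)."""
    return ((c & 0xFFF) << 4) | abs(c).bit_length()

def unpack(word):
    """Recover (value, bit_length) from a packed 16-bit word."""
    length = word & 0xF
    value = word >> 4
    if value & 0x800:          # sign-extend the 12-bit value
        value -= 0x1000
    return value, length
```

During the coding pass the bit-length is then a four-bit mask-and-read, rather than a fresh magnitude computation per coefficient.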
- the zero-run detector 404 may detect a zero-run present in the DCT coefficients and obtain the runlength of the detected zero-run.
- the placement module 210 may store the runlength of the zero-run in the first zero coefficient of the run, at operation 512 . If the last DCT coefficient from the received DCT coefficients is zero, the placement module 210 may store the position of the last non-zero coefficient in the first zero coefficient of the associated zero-run, at operation 514 .
- the method and system described above may reduce the amount of redundant computation because some of the statistics information used in the process of entropy coding may be made available from the DCT coefficients themselves. Furthermore, in one example embodiment, the zero coefficients may be skipped over during the entropy coding process, because the stored runlength value may provide information regarding how many subsequent DCT coefficients are zero.
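The second pass can then hop over entire zero-runs instead of testing each coefficient. A sketch: the symbol names and the side array `runlens` are illustrative (the described scheme reads the run length out of the zero coefficient itself, and a real JPEG coder would emit run/size pairs and an end-of-block marker).

```python
def entropy_pass(coeffs, runlens):
    """Walk zigzag-ordered coefficients, skipping whole zero-runs.

    runlens[i] is the run length stored at the first zero of each run.
    """
    symbols = []
    i = 0
    while i < len(coeffs):
        if coeffs[i] == 0:
            symbols.append(("RUN", runlens[i]))
            i += runlens[i]    # jump over the run without testing each zero
        else:
            symbols.append(("VAL", coeffs[i]))
            i += 1
    return symbols
```

For example, `entropy_pass([7, 0, 0, 0, 2, 0, 0], [None, 3, None, None, None, 2, None])` returns `[('VAL', 7), ('RUN', 3), ('VAL', 2), ('RUN', 2)]`, having crossed the five zeros in two jumps.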
- Pass 1 refers to operations performed to generate statistics for DCT coefficients associated with an input digital image and to store the generated statistics in the available bits of the DCT coefficients.
- Pass 2 refers to operations performed to entropy code the DCT coefficients utilizing the stored statistics information.
- the techniques described herein may be utilized in a variety of applications, including desktop applications for image processing, built-in logic for digital cameras, etc.
- While the system and method for entropy coding have been described in the context of JPEG compression, they may be utilized advantageously with any image compression technique that performs run-length coding of zeros and performs two similar passes over an image: first to gather the run-length statistics, and second to use the statistics to generate entropy codes.
- the method and system described herein may be utilized with a coding scheme that operates using the following steps:
- FIG. 7 shows a diagrammatic representation of a machine in the example electronic form of a computer system 700 within which a set of instructions, for causing the machine to perform any one or more of the methodologies discussed herein, may be executed.
- the machine may operate as a standalone device or may be connected (e.g., networked) to other machines.
- the machine may operate in the capacity of a server or a client machine in server-client network environment, or as a peer machine in a peer-to-peer (or distributed) network environment.
- the machine may be a personal computer (PC), a tablet PC, a set-top box (STB), a Personal Digital Assistant (PDA), a cellular telephone, a portable music player (e.g., a portable hard drive audio device such as a “Moving Picture Experts Group (MPEG) Layer 3” (MP3) player), a web appliance, a network router, switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine.
- the example computer system 700 includes a processor 702 (e.g., a central processing unit (CPU), a graphics processing unit (GPU) or both), a main memory 704 and a static memory 706 , which communicate with each other via a bus 708 .
- the computer system 700 may further include a video display unit 710 (e.g., a liquid crystal display (LCD) or a cathode ray tube (CRT)).
- the computer system 700 also includes an alphanumeric input device 712 (e.g., a keyboard), a user interface (UI) navigation device 714 (e.g., a mouse), a disk drive unit 716 , a signal generation device 718 (e.g., a speaker) and a network interface device 720 .
- the disk drive unit 716 includes a machine-readable medium 722 on which is stored one or more sets of instructions and data structures (e.g., software 724 ) embodying or utilized by any one or more of the methodologies or functions described herein.
- the software 724 may also reside, completely or at least partially, within the main memory 704 and/or within the processor 702 during execution thereof by the computer system 700 , the main memory 704 and the processor 702 also constituting machine-readable media.
- the software 724 may further be transmitted or received over a network 726 via the network interface device 720 utilizing any one of a number of well-known transfer protocols (e.g., Hyper Text Transfer Protocol (HTTP)).
- While the machine-readable medium 722 is shown in an example embodiment to be a single medium, the term “machine-readable medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions.
- the term “machine-readable medium” shall also be taken to include any medium that is capable of storing, encoding or carrying a set of instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the present invention, or that is capable of storing, encoding or carrying data structures utilized by or associated with such a set of instructions.
- The term “machine-readable medium” shall accordingly be taken to include, but not be limited to, solid-state memories, optical and magnetic media, and carrier wave signals. Such media may also include, without limitation, hard disks, floppy disks, flash memory cards, digital video disks, random access memories (RAMs), read only memories (ROMs), and the like.
- the embodiments described herein may be implemented in an operating environment comprising software installed on a computer, in hardware, or in a combination of software and hardware.
Abstract
Description
- This application is a continuation of U.S. patent application Ser. No. 11/543,586, entitled “AN OPTIMIZED METHOD AND SYSTEM FOR ENTROPY CODING,” filed Oct. 3, 2006, which is incorporated herein by reference in its entirety.
- In a basic JPEG algorithm, 8×8 pixel blocks are extracted from the digital image. Discrete Cosine Transform (DCT) coefficients are then calculated for the components of each block. The DCT coefficients are rounded off using corresponding quantization tables. The quantized DCT coefficients are encoded utilizing entropy coding to obtain a compressed digital image. Entropy coding may be performed using arithmetic coding or Huffman coding. The original digital image can be extracted later from the compressed version of the image by a decompression process.
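The quantization step mentioned above amounts to dividing each DCT coefficient by its table entry and rounding; the decoder multiplies back, which is where the controlled loss occurs. A minimal sketch with hypothetical function names and toy table values:

```python
def quantize(coeffs, qtable):
    """Quantize DCT coefficients: divide by the table entry and round."""
    return [[round(c / q) for c, q in zip(crow, qrow)]
            for crow, qrow in zip(coeffs, qtable)]

def dequantize(qcoeffs, qtable):
    """Approximate reconstruction performed by the decoder."""
    return [[c * q for c, q in zip(crow, qrow)]
            for crow, qrow in zip(qcoeffs, qtable)]
```

For instance, `quantize([[100, 33]], [[16, 11]])` gives `[[6, 3]]`, and dequantizing returns `[[96, 33]]`, so the coefficient 100 is recovered only approximately while 33 survives exactly.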
- In the process of entropy coding and the associated steps, some operations, e.g., the computation required for finding the bit-length of a DCT coefficient, may be computationally expensive. Furthermore, the check to determine whether a DCT coefficient is zero may also be expensive. This problem may be addressed by utilizing additional memory to store these values for later use. However, this approach may result in storing data that is larger than the input image that is being coded. Thus, existing encoding techniques may require additional memory usage in order to improve performance.
- Embodiments are illustrated by way of example and not limitation in the figures of the accompanying drawings, in which like references indicate similar elements and in which:
-
FIG. 1 illustrates an example architecture, within which the system to improve performance during a process of entropy coding may be implemented, in accordance with an example embodiment; -
FIG. 2 is a block diagram of an encoder, in accordance with an example embodiment; -
FIG. 3 is a flow chart illustrating a method, in accordance with an example embodiment, to generate a compressed image; -
FIG. 4 is a block diagram illustrating a statistics generator, in accordance with an example embodiment; -
FIG. 5 is a flow chart illustrating a method to generate and store statistics, in accordance with an example embodiment; -
FIG. 6 illustrates a diagrammatic representation of a DCT coefficient storage, in accordance with an example embodiment; and -
FIG. 7 illustrates a diagrammatic representation of a machine in the example form of a computer system within which a set of instructions, for causing the machine to perform any one or more of the methodologies discussed herein, may be executed. - In order to perform entropy coding, an encoder (e.g., JPEG encoder) may be configured to build a probability table based on the image statistics. The image statistics may be collected for the whole image or, where an image is first divided into smaller portions, for each portion of the image. Each portion may be encoded independently. In one example embodiment, a portion of the image may be, for example, one image block of 8×8 pixels or a subset of the set of all 8×8 blocks of the image. A subset of the set of all 8×8 blocks of the image may be referred to as a sub-image. For the purposes of this description, references to an image will be understood to encompass embodiments where the image is a sub image.
- A method and system are provided to improve performance during a process of entropy coding. In one example embodiment, the image statistics for Discrete Cosine Transform (DCT) coefficients (or merely coefficients) associated with a digital image may be stored intelligently in the available bits of the DCT coefficients themselves and then accessed and utilized during an operation of entropy coding,
- In one example embodiment, the bit-length of a non-zero DCT coefficient may be stored in some of the bits of that coefficient. The length of a zero-run may be stored in the first zero coefficient of the run. A zero-run is a set of consecutive DCT coefficients listed in a zigzag order that have a zero value. If the last DCT coefficient is zero, the position of the last non-zero coefficient may he stored in the last DCT coefficient. An example system to improve performance during a process of entropy coding may be implemented, in one embodiment, in the context of an architecture illustrated in
FIG. 1 . -
FIG. 1 illustrates anexample architecture 100, within which the system to improve performance during a process of entropy coding may be implemented. In theexample architecture 100, an inputdigital image 110 is provided to anencoder 200. Theencoder 200 may be configured to generate acompressed image 130, e.g., a JPEG file, that corresponds to the inputdigital image 110. Thecompressed image 130 may includeheaders 132, tables 134, anddata 136. - The
architecture 100, in one example embodiment, may include a decoder to recreate the original inputdigital image 110 from thecompressed image 130. - The input
digital image 110, in one example embodiment, may be a raw digital image or a compressed digital image. If the inputdigital image 110 is a compressed digital image, theencoder 200 may uncompress the compressed digital image to the DCT level and perform operations to generate an improved compressed image (e.g., a compressed image that is smaller in size than the original compressed digital image). One example embodiment of theencoder 200 may be described with reference toFIG. 2 . -
FIG. 2 is a block diagram of anencoder 200, according to one example embodiment. Theencoder 200 may comprise aninput module 202, aDCT module 206, aquantizer 240, anentropy coder 214 and anoutput module 216. - The
encoder 200 may receive image pixels, or input digital image, at aninput module 202. Theinput module 202 may provide the input digital image data to theDCT module 206. TheDCT module 206 may cooperate with animage blocks module 204 to divide the input digital image into non-overlapping, fixed length image blocks. TheDCT module 206 may then transform each image block into a corresponding block of DCT coefficients. The DCT coefficients may be referred to as frequency domain coefficients, and a block of DCT coefficients may be referred to as a frequency domain image block (or a frequency domain image). - The
quantizer 240 may be configured to receive the DCT coefficients generated by the DCT module 206 and to quantize the corresponding set of values utilizing quantization tables. Quantization tables may provide factors indicating how much of the image quality can be reduced for a given DCT coefficient before the image deterioration is perceptible. - In an example embodiment, the
entropy coder 214 is configured to receive the quantized DCT coefficients from the quantizer 240 and to rearrange the DCT coefficients in a zigzag order. The zigzag output is then compressed using runlength encoding. The entropy coder 214, in one example embodiment, may be configured to generate uniquely decodable (UD) codes (e.g., entropy codes). A code is said to be uniquely decodable if the original symbols can be recovered uniquely from sequences of encoded symbols. The output module 216 may be configured to generate a compressed version of the input digital image utilizing the generated UD codes. - The
entropy coder 214, in one example embodiment, uses probability tables to generate entropy codes. In one embodiment, the probability tables may be Huffman tables and the entropy codes may be Huffman codes. A probability tables generator 212 may be configured to generate two sets of probability tables for an image: one for the luminance or grayscale components and one for the chrominance or color components. - In one example embodiment, the
entropy coder 214 may be configured to examine each DCT coefficient associated with an input image or sub-image to determine whether that DCT coefficient is zero. The examination of coefficients may stop at the last non-zero entry rather than spanning the entire set of the DCT coefficients. The process of entropy coding may stop after signaling the final zero-run (if any). - The
probability tables generator 212, in one example embodiment, may generate probability tables utilizing statistics associated with the input image provided by a statistics generator 400. In one example embodiment, the encoder 200 may be configured such that at least some of the DCT coefficient statistics collected by the statistics generator 400 may be saved for later use by the entropy coder 214. The encoder 200 may utilize a placement module 210 to store the DCT coefficient statistics in the available bits of the associated DCT coefficients. The example operations performed by the encoder 200 may be described with reference to FIG. 3. -
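As background for the zigzag rearrangement performed by the entropy coder 214 (described above), the scan order can be generated programmatically. The sketch below is purely illustrative; `zigzag_order` is a hypothetical helper, not part of the described system:

```python
def zigzag_order(n=8):
    """Return (row, col) pairs of an n x n block in zigzag scan order:
    walk the anti-diagonals, alternating direction, starting at (0, 0)."""
    order = []
    for d in range(2 * n - 1):                     # d = row + col
        diag = [(r, d - r) for r in range(n) if 0 <= d - r < n]
        # Odd diagonals run top-right to bottom-left; even ones the reverse.
        order.extend(diag if d % 2 else reversed(diag))
    return order

print(zigzag_order()[:6])   # [(0, 0), (0, 1), (1, 0), (2, 0), (1, 1), (0, 2)]
```

Ordering the coefficients this way groups the high-frequency entries, which are most often zero after quantization, at the tail of the scan, which is what makes the long zero-runs possible.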
FIG. 3 is a flow chart illustrating a method 300, in accordance with an example embodiment, to generate a compressed image. The method 300 may be performed by processing logic that may comprise hardware (e.g., dedicated logic, programmable logic, microcode, etc.), software (such as software run on a general-purpose computer system or a dedicated machine), or a combination of both. It will be noted that, in an example embodiment, the processing logic may reside in any of the modules shown in FIG. 2 described above. - As shown in
FIG. 3, the method 300 commences at operation 302. At operation 304, the DCT module 206 receives an image block. The DCT module 206 performs a Discrete Cosine Transform (DCT) on the image block at operation 306 to obtain DCT coefficients associated with the image block. At operation 308, the statistics generator 400 may generate statistics associated with the DCT coefficients. As the statistics generator 400 obtains a particular statistics value associated with a DCT coefficient, the placement module 210 may store that value in the available bits of the DCT coefficient at operation 310. - At
operation 312, the probability tables generator 212 utilizes the statistics generated at operation 308 to generate probability tables. The entropy coder 214 then accesses and utilizes the statistics stored, at operation 310, in the DCT coefficients themselves to perform entropy coding, thereby avoiding at least some of the repeated computations. The entropy coding is performed at operation 314 to generate uniquely decodable (UD) codes (e.g., Huffman codes) for the DCT coefficients, utilizing the statistics stored in the DCT coefficients. At operation 316, a compressed version of the input block is generated utilizing the UD codes. An example statistics generator may be described with reference to FIG. 4. -
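The probability tables mentioned above can be derived from symbol frequencies in the standard Huffman fashion. The following sketch computes Huffman code lengths from a frequency map; it is a generic illustration (assuming at least two symbols), not the patent's table-construction procedure, and `huffman_code_lengths` is a hypothetical name:

```python
import heapq

def huffman_code_lengths(freqs):
    """Compute Huffman code lengths from a {symbol: frequency} map by
    repeatedly merging the two lightest subtrees (assumes >= 2 symbols)."""
    lengths = {sym: 0 for sym in freqs}
    # Heap entries: (weight, tie-breaker, symbols in this subtree).
    heap = [(f, i, (sym,)) for i, (sym, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        w1, _, s1 = heapq.heappop(heap)
        w2, _, s2 = heapq.heappop(heap)
        for sym in s1 + s2:        # every symbol in the merge gains one bit
            lengths[sym] += 1
        heapq.heappush(heap, (w1 + w2, tie, s1 + s2))
        tie += 1
    return lengths

print(huffman_code_lengths({'a': 5, 'b': 2, 'c': 1, 'd': 1}))
# {'a': 1, 'b': 2, 'c': 3, 'd': 3}
```

From the code lengths, canonical codewords can then be assigned; more frequent symbols (here the run/size values gathered during the statistics pass) receive shorter codes.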
FIG. 4 is a block diagram illustrating some modules of an example statistics generator 400, in accordance with an example embodiment. The statistics generator 400 may comprise a bit-length module 402, a zero-run detector 404, and a runlength module 406. - In one example embodiment, the bit depth of an image to be encoded may be 8 bits, in which case the maximum bit-length of any coefficient after a DCT equals 12. Because, in one example embodiment, the DCT coefficients may be represented as 16-bit quantities, there may be four additional bits available for storing data per DCT coefficient. This additional space may be utilized to store the bit-lengths of the DCT coefficients. The bit-length module 402 may be configured to calculate the bit-length for a non-zero DCT coefficient and to provide the calculated bit-length to the placement module 210 of FIG. 2. The placement module 210 may then store the bit-length in the first four bits of the associated DCT coefficient. - Furthermore, in one example embodiment, the information pertaining to the run lengths of zero coefficients may be stored in some of the coefficients that are zero. The zero-run detector 404 may be configured to detect any zero-run that may be present in the set of DCT coefficients associated with the 8×8 block of the input image or sub-image. The runlength module 406 may be configured to calculate the runlength of the detected zero-run. In one example embodiment, the placement module 210 may be configured to store the calculated runlength of the zero-run in the first zero coefficient of the run. - In one example embodiment, the statistics generator 400 may further comprise a last non-zero module 408 to determine whether the last DCT coefficient of the input 8×8 block is zero. If the last DCT coefficient from the plurality of DCT coefficients associated with the input 8×8 block is zero, the placement module 210 may store the position of the last non-zero coefficient for future use, e.g., in the last coefficient of the 8×8 block (the 64th coefficient). The operations performed by the statistics generator 400 in cooperation with the placement module 210 may be described with reference to FIG. 5. -
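The behavior of the zero-run detector 404, the runlength module 406, and the last non-zero module 408 can be sketched together as a single pass over a coefficient list. `zero_run_stats` is a hypothetical helper used only for illustration:

```python
def zero_run_stats(coeffs):
    """One pass over zigzag-ordered coefficients: collect the (start,
    runlength) of every zero-run and the index of the last non-zero
    coefficient (None if all coefficients are zero)."""
    runs, last_nonzero, i = [], None, 0
    while i < len(coeffs):
        if coeffs[i] == 0:
            start = i
            while i < len(coeffs) and coeffs[i] == 0:
                i += 1
            runs.append((start, i - start))
        else:
            last_nonzero = i
            i += 1
    return runs, last_nonzero

print(zero_run_stats([5, 0, 0, 3, 0]))   # ([(1, 2), (4, 1)], 3)
```

In the described system these values would not be returned separately; the placement module 210 would write each runlength into the run's first zero coefficient, and the last non-zero position into the block's final coefficient.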
FIG. 5 is a flow chart illustrating a method 500, in accordance with an example embodiment, to generate and store statistics associated with a plurality of DCT coefficients. The method 500 may be performed by processing logic that may comprise hardware (e.g., dedicated logic, programmable logic, microcode, etc.), software (such as software run on a general-purpose computer system or a dedicated machine), or a combination of both. It will be noted that, in an example embodiment, the processing logic may reside in any of the modules shown in FIG. 2 and FIG. 4 described above. - As shown in
FIG. 5, the method 500 commences at operation 502. At operation 504, the statistics generator 400 receives DCT coefficients associated with the 8×8 blocks of the image or sub-image. The bit-length module 402 may calculate the bit-lengths for non-zero DCT coefficients at operation 506 and provide the calculated bit-lengths to the placement module 210 of FIG. 2. The placement module 210 may store the bit-lengths in some of the available bits of the associated DCT coefficients at operation 508, as shown in FIG. 6. - FIG. 6 is a diagrammatic representation of a
DCT coefficient storage 600. In one example embodiment, bits 0 through 3 of the DCT coefficient storage 600 may be used to store the bit-length of a non-zero DCT coefficient. Bits 4 through 15 of the DCT coefficient storage 600 may be used to store the value of a non-zero DCT coefficient. It will be noted that, in some embodiments, the bit-length may be stored in the last four bits of a DCT coefficient storage, or in the middle bits of a DCT coefficient storage. Furthermore, the bit-length does not need to be stored in consecutive bits of a DCT coefficient storage. For example, the bit-length may be "broken up" and stored, according to a rule, in various locations of a DCT coefficient storage. - Returning to
FIG. 5, at operation 510, the zero-run detector 404 may detect a zero-run present in the DCT coefficients and obtain the runlength of the detected zero-run. The placement module 210 may store the runlength of the zero-run in the first zero coefficient of the run, at operation 512. If the last DCT coefficient from the received DCT coefficients is zero, the placement module 210 may store the position of the last non-zero coefficient in the last DCT coefficient, at operation 514. - It will be noted that, although the operations of the
method 500 are shown as sequential, the order of operations may be changed in some example embodiments. For example, an operation to detect a zero-run may precede an operation to calculate the bit-length of a non-zero DCT coefficient. - In some embodiments, the method and system described above may reduce the amount of redundant computation, because some of the statistics information that may be used in the process of entropy coding may be made available from the DCT coefficients themselves. Furthermore, in one example embodiment, the zero coefficients may be skipped over during the entropy coding process, because the stored runlength value may provide information regarding how many subsequent DCT coefficients are zero.
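The storage scheme behind operations 506 through 512 amounts to simple bit packing. The sketch below places the bit-length in the top 4 bits of a 16-bit word and the magnitude in the low 12 bits, one of the placements permitted above (FIG. 6 shows an equally valid low-bit placement); sign handling is omitted for brevity, and `pack`/`unpack` are hypothetical helper names:

```python
VALUE_BITS = 12   # max bit-length of a DCT coefficient for an 8-bit image

def pack(coeff):
    """Pack a non-zero coefficient: its bit-length goes in the top 4 bits
    of a 16-bit word, its magnitude in the low 12 bits."""
    m = abs(coeff)
    b = m.bit_length()            # 1..12 for an 8-bit source image
    return (b << VALUE_BITS) | m

def unpack(word):
    """Recover (bit-length, magnitude) from a packed coefficient."""
    return word >> VALUE_BITS, word & 0x0FFF

print(unpack(pack(100)))          # (7, 100)
```

A packed word whose top 4 bits are zero can therefore be recognized as a coefficient that was originally zero, which is how the entropy coder distinguishes stored runlengths from real coefficient values.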
- The following pseudo-code describes, at a high level, the optimized process of entropy coding, according to one example embodiment.
Pass 1 refers to operations performed to generate statistics for the DCT coefficients associated with an input digital image and to store the generated statistics in the available bits of the DCT coefficients. Pass 2 refers to operations performed to entropy code the DCT coefficients utilizing the stored statistics information. - PASS 1:
-
For each block in image/sub-image {
    Perform DCT
    // Let us assume that c[i] holds the ith DCT coefficient of the block.
    // Each c[i] is a 16-bit entity.
    RunLengthOfZero = 0
    For (i = 1; i <= 63; i++) {
        if (c[i] == 0) {
            RunLengthOfZero++
        } else {
            if (RunLengthOfZero != 0) {
                // The runlength of the zero-run is stored in the first zero
                // coefficient of the current run.
                c[i - RunLengthOfZero] = RunLengthOfZero
                Update the RunLengthOfZero statistics
            }
            b = bit-length(c[i])
            Update the bit-length statistics
            // The bit-length is stored in the top 4 bits of the coefficient.
            c[i] |= b << 12
            RunLengthOfZero = 0
        }
    }
    // Store the last run, if any. In this case the last coefficient c[63]
    // is zero, so the position of the last non-zero coefficient is stored
    // in c[63].
    if (RunLengthOfZero != 0) {
        c[64 - RunLengthOfZero] = RunLengthOfZero
        c[63] = 63 - RunLengthOfZero
    }
}
Generate the Huffman tables using the statistics just gathered. - PASS 2:
-
For each block in image/sub-image {
    // Find the position of the last non-zero coefficient. If the stored
    // bit-length of c[63] is zero, then c[63] was originally zero and now
    // holds that position.
    if ((c[63] >> 12) == 0)
        lastNonZero = c[63]
    else
        lastNonZero = 63
    RunLengthOfZero = 0
    For (i = 1; i <= lastNonZero; i++) {
        // A stored bit-length of zero marks the first coefficient of a
        // zero-run, since some zero-valued coefficients were modified.
        if ((c[i] >> 12) == 0) {
            RunLengthOfZero = c[i]
            // Move the index to the next non-zero coefficient.
            i += RunLengthOfZero
        }
        Write the Huffman code for RunLengthOfZero
        // Read the bit-length from the coefficient c[i] itself.
        b = c[i] >> 12
        Write the Huffman code for b
        // The value of the coefficient is held in the low 12 bits.
        v = c[i] & 0x0FFF
        Write the actual value v
        RunLengthOfZero = 0
    }
    Write the last zero-run code, if any
} - It will be noted that, in one example embodiment, the techniques described herein may be utilized in a variety of applications, including desktop applications for image processing, built-in logic for digital cameras, etc. It will also be noted that, although the system and method for entropy coding have been described in the context of JPEG compression, the system and method may be utilized advantageously with any image compression technique that performs run-length coding of zeros and that performs two similar passes over an image: a first pass to gather the run-length statistics and a second pass to use the statistics to generate entropy codes.
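The two passes above can be translated into a short runnable sketch. This illustration works on coefficient magnitudes only (sign handling is omitted), replaces the Huffman output of Pass 2 with (runlength, bit-length, magnitude) triples, and uses hypothetical function names; it is not the patent's implementation:

```python
def pass1(c):
    """Annotate one 64-coefficient block in place: each zero-run's length
    goes into the run's first zero coefficient, and each non-zero
    coefficient's bit-length goes into its top 4 bits."""
    run = 0
    for i in range(1, 64):            # AC coefficients; DC is coded separately
        if c[i] == 0:
            run += 1
            continue
        if run:
            c[i - run] = run          # runlength into the run's first zero
        m = abs(c[i])
        c[i] = (m.bit_length() << 12) | m
        run = 0
    if run:                           # trailing zero-run
        c[64 - run] = run
        c[63] = 63 - run              # position of the last non-zero coefficient
    return c

def pass2(c):
    """Walk an annotated block, skipping zero-runs via the stored
    runlengths; emit (runlength, bit-length, magnitude) triples in place
    of Huffman codes."""
    last = c[63] if (c[63] >> 12) == 0 else 63
    out, run, i = [], 0, 1
    while i <= last:
        if (c[i] >> 12) == 0:         # bit-length 0 marks a stored run marker
            run = c[i]
            i += run                  # jump to the next non-zero coefficient
            continue
        out.append((run, c[i] >> 12, c[i] & 0x0FFF))
        run = 0
        i += 1
    return out

block = [50, 12, 0, 0, 7] + [0] * 59
print(pass2(pass1(block)))            # [(0, 4, 12), (2, 3, 7)]
```

Note how Pass 2 never rescans the zeros: each run is consumed in a single jump, and the trailing 59 zeros are never visited at all because the walk stops at the stored last non-zero position.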
- In one example embodiment, the method and system described herein may be utilized with a coding scheme that operates using the following steps:
- 1) dividing an image into blocks (the blocks may be any size, not necessarily 8×8 blocks as in JPEG);
- 2) optionally applying a transform (the transform is not necessarily the DCT; it could be, e.g., a wavelet transform, a Hadamard transform, some approximation of the DCT, or an identity transform, in which no transform is applied); and
- 3) using an entropy coding method that relies on the statistics of the image being coded, which are generated in a separate pass from the coding pass.
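Of the alternative transforms mentioned in step 2), the Hadamard transform is particularly simple to sketch, since it needs only additions and subtractions. The recursive helper below is an unnormalized illustration for power-of-two block sizes; `hadamard_2d` is a hypothetical name:

```python
def hadamard_2d(block):
    """Unnormalized 2D Hadamard transform of a 2^k x 2^k block, applied
    as a 1D fast transform along the rows and then along the columns."""
    def h1d(v):
        if len(v) == 1:
            return v
        half = len(v) // 2
        sums = h1d([v[i] + v[i + half] for i in range(half)])
        diffs = h1d([v[i] - v[i + half] for i in range(half)])
        return sums + diffs
    rows = [h1d(row) for row in block]
    cols = [h1d(list(col)) for col in zip(*rows)]
    return [list(row) for row in zip(*cols)]

print(hadamard_2d([[1, 2], [3, 4]]))   # [[10, -2], [-4, 0]]
```

Like the DCT, this transform concentrates a block's energy in its low-frequency coefficients, so the same statistics-gathering and entropy coding passes apply unchanged.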
-
FIG. 7 shows a diagrammatic representation of a machine in the example electronic form of a computer system 700 within which a set of instructions, for causing the machine to perform any one or more of the methodologies discussed herein, may be executed. In various embodiments, the machine operates as a standalone device or may be connected (e.g., networked) to other machines. In a networked deployment, the machine may operate in the capacity of a server or a client machine in a server-client network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. The machine may be a personal computer (PC), a tablet PC, a set-top box (STB), a Personal Digital Assistant (PDA), a cellular telephone, a portable music player (e.g., a portable hard drive audio device such as a "Moving Picture Experts Group (MPEG) Layer 3" (MP3) player), a web appliance, a network router, switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while only a single machine is illustrated, the term "machine" shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein. - The
example computer system 700 includes a processor 702 (e.g., a central processing unit (CPU), a graphics processing unit (GPU), or both), a main memory 704, and a static memory 706, which communicate with each other via a bus 708. The computer system 700 may further include a video display unit 710 (e.g., a liquid crystal display (LCD) or a cathode ray tube (CRT)). The computer system 700 also includes an alphanumeric input device 712 (e.g., a keyboard), a user interface (UI) navigation device 714 (e.g., a mouse), a disk drive unit 716, a signal generation device 718 (e.g., a speaker), and a network interface device 720. - The
disk drive unit 716 includes a machine-readable medium 722 on which is stored one or more sets of instructions and data structures (e.g., software 724) embodying or utilized by any one or more of the methodologies or functions described herein. The software 724 may also reside, completely or at least partially, within the main memory 704 and/or within the processor 702 during execution thereof by the computer system 700, the main memory 704 and the processor 702 also constituting machine-readable media. - The
software 724 may further be transmitted or received over a network 726 via the network interface device 720 utilizing any one of a number of well-known transfer protocols (e.g., Hyper Text Transfer Protocol (HTTP)). - While the machine-readable medium 722 is shown in an example embodiment to be a single medium, the term "machine-readable medium" should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions. The term "machine-readable medium" shall also be taken to include any medium that is capable of storing, encoding or carrying a set of instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the present invention, or that is capable of storing, encoding or carrying data structures utilized by or associated with such a set of instructions. The term "machine-readable medium" shall accordingly be taken to include, but not be limited to, solid-state memories, optical and magnetic media, and carrier wave signals. Such media may also include, without limitation, hard disks, floppy disks, flash memory cards, digital video disks, random access memories (RAMs), read-only memories (ROMs), and the like. - The embodiments described herein may be implemented in an operating environment comprising software installed on a computer, in hardware, or in a combination of software and hardware.
- Thus, a method and system for generating a compressed image have been described. Although embodiments have been described with reference to specific example embodiments, it will be evident that various modifications and changes may be made to these embodiments without departing from the broader spirit and scope of the invention. Accordingly, the specification and drawings are to be regarded in an illustrative rather than a restrictive sense.
Claims (20)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/594,082 US8600183B2 (en) | 2006-10-03 | 2012-08-24 | Optimized method and system for entropy coding |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US11/543,586 US8254700B1 (en) | 2006-10-03 | 2006-10-03 | Optimized method and system for entropy coding |
US13/594,082 US8600183B2 (en) | 2006-10-03 | 2012-08-24 | Optimized method and system for entropy coding |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/543,586 Continuation US8254700B1 (en) | 2006-10-03 | 2006-10-03 | Optimized method and system for entropy coding |
Publications (2)
Publication Number | Publication Date |
---|---|
US20130039596A1 true US20130039596A1 (en) | 2013-02-14 |
US8600183B2 US8600183B2 (en) | 2013-12-03 |
Family
ID=46689835
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/543,586 Active 2029-06-07 US8254700B1 (en) | 2006-10-03 | 2006-10-03 | Optimized method and system for entropy coding |
US13/594,082 Active US8600183B2 (en) | 2006-10-03 | 2012-08-24 | Optimized method and system for entropy coding |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/543,586 Active 2029-06-07 US8254700B1 (en) | 2006-10-03 | 2006-10-03 | Optimized method and system for entropy coding |
Country Status (1)
Country | Link |
---|---|
US (2) | US8254700B1 (en) |
Families Citing this family (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8254700B1 (en) | 2006-10-03 | 2012-08-28 | Adobe Systems Incorporated | Optimized method and system for entropy coding |
US8755619B2 (en) * | 2009-11-19 | 2014-06-17 | Samsung Electronics Co., Ltd. | Method and apparatus for encoding and decoding image data using run of the image data |
US9172967B2 (en) | 2010-10-05 | 2015-10-27 | Google Technology Holdings LLC | Coding and decoding utilizing adaptive context model selection with zigzag scan |
US8891616B1 (en) | 2011-07-27 | 2014-11-18 | Google Inc. | Method and apparatus for entropy encoding based on encoding cost |
US9774856B1 (en) | 2012-07-02 | 2017-09-26 | Google Inc. | Adaptive stochastic entropy coding |
US9509998B1 (en) | 2013-04-04 | 2016-11-29 | Google Inc. | Conditional predictive multi-symbol run-length coding |
US9392288B2 (en) | 2013-10-17 | 2016-07-12 | Google Inc. | Video coding using scatter-based scan tables |
US9179151B2 (en) | 2013-10-18 | 2015-11-03 | Google Inc. | Spatial proximity context entropy coding |
US11350015B2 (en) | 2014-01-06 | 2022-05-31 | Panamorph, Inc. | Image processing system and method |
US9584701B2 (en) * | 2014-01-06 | 2017-02-28 | Panamorph, Inc. | Image processing system and method |
CN111933162B (en) * | 2020-08-08 | 2024-03-26 | 北京百瑞互联技术股份有限公司 | Method for optimizing LC3 encoder residual error coding and noise estimation coding |
Family Cites Families (22)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5184229A (en) * | 1988-12-09 | 1993-02-02 | Fuji Photo Film Co., Ltd. | Compression coding device and expansion decoding device for picture signal |
JP3199292B2 (en) | 1993-06-28 | 2001-08-13 | 日本電信電話株式会社 | Run-length extraction method, Huffman code conversion method, and MH coding processing method in Huffman code coding |
JP3210996B2 (en) * | 1993-07-30 | 2001-09-25 | 三菱電機株式会社 | High efficiency coding device and high efficiency decoding device |
JPH0936748A (en) | 1995-07-19 | 1997-02-07 | Toshiba Corp | Huffman coding method, its device, huffman decoding method and its device |
US6101276A (en) | 1996-06-21 | 2000-08-08 | Compaq Computer Corporation | Method and apparatus for performing two pass quality video compression through pipelining and buffer management |
US6311258B1 (en) | 1997-04-03 | 2001-10-30 | Canon Kabushiki Kaisha | Data buffer apparatus and method for storing graphical data using data encoders and decoders |
US6081211A (en) * | 1998-04-08 | 2000-06-27 | Xerox Corporation | Minimal buffering method and system for optimized encoding tables in JPEG compression |
JP2000125295A (en) | 1998-10-13 | 2000-04-28 | Canon Inc | Moving picture coder, its method, moving picture decoder, its method and storage medium |
US6351760B1 (en) * | 1999-01-29 | 2002-02-26 | Sun Microsystems, Inc. | Division unit in a processor using a piece-wise quadratic approximation technique |
US6947874B2 (en) * | 2000-11-16 | 2005-09-20 | Canon Kabushiki Kaisha | Entropy coding |
AUPR192700A0 (en) * | 2000-12-06 | 2001-01-04 | Canon Kabushiki Kaisha | Storing coding image data in storage of fixed memory size |
JP2002330279A (en) * | 2001-05-07 | 2002-11-15 | Techno Mathematical Co Ltd | Method for embedding data in image and method for extracting the data |
US7024441B2 (en) | 2001-10-03 | 2006-04-04 | Intel Corporation | Performance optimized approach for efficient numerical computations |
US6563440B1 (en) | 2001-10-19 | 2003-05-13 | Nokia Corporation | Apparatus and method for decoding Huffman codes using leading one/zero string length detection |
US20030133619A1 (en) | 2002-01-17 | 2003-07-17 | Wong Daniel W. | System for handling multiple discrete cosine transform modes and method thereof |
US20030202603A1 (en) * | 2002-04-12 | 2003-10-30 | William Chen | Method and apparatus for fast inverse motion compensation using factorization and integer approximation |
US7016547B1 (en) | 2002-06-28 | 2006-03-21 | Microsoft Corporation | Adaptive entropy encoding/decoding for screen capture content |
US7164802B2 (en) | 2002-11-14 | 2007-01-16 | Zoran Corporation | Method for image compression by modified Huffman coding |
US20050036548A1 (en) | 2003-08-12 | 2005-02-17 | Yong He | Method and apparatus for selection of bit budget adjustment in dual pass encoding |
US7463775B1 (en) * | 2004-05-18 | 2008-12-09 | Adobe Systems Incorporated | Estimating compressed storage size of digital data |
US7242328B1 (en) * | 2006-02-03 | 2007-07-10 | Cisco Technology, Inc. | Variable length coding for sparse coefficients |
US8254700B1 (en) | 2006-10-03 | 2012-08-28 | Adobe Systems Incorporated | Optimized method and system for entropy coding |
- 2006
- 2006-10-03: US US11/543,586 patent/US8254700B1/en, status Active
- 2012
- 2012-08-24: US US13/594,082 patent/US8600183B2/en, status Active
Also Published As
Publication number | Publication date |
---|---|
US8254700B1 (en) | 2012-08-28 |
US8600183B2 (en) | 2013-12-03 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8600183B2 (en) | Optimized method and system for entropy coding | |
US7003168B1 (en) | Image compression and decompression based on an integer wavelet transform using a lifting scheme and a correction method | |
JP5937206B2 (en) | Context adaptive coding of video data | |
US6041143A (en) | Multiresolution compressed image management system and method | |
JP4365957B2 (en) | Image processing method and apparatus and storage medium | |
US7302105B2 (en) | Moving image coding apparatus, moving image decoding apparatus, and methods therefor | |
US8326059B2 (en) | Method and apparatus for progressive JPEG image decoding | |
JP2010251946A (en) | Image encoding apparatus, method, and program, and image decoding apparatus, method, and program | |
JP2008113374A (en) | Entropy coding apparatus | |
US10785493B2 (en) | Method of compressing and decompressing image data | |
US8457428B2 (en) | Image coding apparatus, control method thereof, and storage medium | |
JP2007537644A (en) | Method and apparatus for encoding a block of values | |
KR100733949B1 (en) | Lossless adaptive encoding of finite alphabet data | |
CN110324639B (en) | Techniques for efficient entropy encoding of video data | |
US8305244B2 (en) | Coding data using different coding alphabets | |
US8427348B2 (en) | Parallel processing of sequentially dependent digital data | |
US10356410B2 (en) | Image processing system with joint encoding and method of operation thereof | |
JP2008271039A (en) | Image encoder and image decoder | |
Hussin et al. | A comparative study on improvement of image compression method using hybrid DCT-DWT techniques with huffman encoding for wireless sensor network application | |
US8260070B1 (en) | Method and system to generate a compressed image utilizing custom probability tables | |
US8229236B2 (en) | Method for progressive JPEG image decoding | |
JP2006005478A (en) | Image encoder and image decoder | |
JP2004253889A (en) | Image processing apparatus and method | |
JPH0918350A (en) | Coding/decoding device and coding/decoding method | |
US20110091119A1 (en) | Coding apparatus and coding method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
FEPP | Fee payment procedure |
Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
FPAY | Fee payment |
Year of fee payment: 4 |
|
AS | Assignment |
Owner name: ADOBE INC., CALIFORNIA Free format text: CHANGE OF NAME;ASSIGNOR:ADOBE SYSTEMS INCORPORATED;REEL/FRAME:048867/0882 Effective date: 20181008 |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 8 |