USRE43647E1 - Region-based information compaction as for digital images - Google Patents
- Publication number: USRE43647E1 (US application 12/841,862)
- Authority: US (United States)
- Legal status: Expired - Fee Related
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/90—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using coding techniques not provided for in groups H04N19/10-H04N19/85, e.g. fractals
- H04N19/98—Adaptive-dynamic-range coding [ADRC]
- The remaining classifications fall under the same H04N hierarchy:
- H04N19/46—Embedding additional information in the video signal during the compression process
- H04N19/61—Transform coding in combination with predictive coding
- H04N19/85—Pre-processing or post-processing specially adapted for video compression
- H04N21/234345—Reformatting of video elementary streams performed only on part of the stream, e.g. a region of the image or a time segment
- H04N21/23614—Multiplexing of additional data and video streams
- H04N21/4348—Demultiplexing of additional data and video streams
- H04N21/440245—Reformatting of video elementary streams for household redistribution, storage or real-time display, performed only on part of the stream, e.g. a region of the image or a time segment
Definitions
- the invention relates to information processing systems in general, and, more particularly, the invention relates to a method and apparatus for preserving a relatively high dynamic range of an information signal, such as a video information signal, processed via a relatively low dynamic range information processing system.
- MPEG Moving Pictures Experts Group
- MPEG-1 refers to ISO/IEC standards 11172 and is incorporated herein by reference.
- MPEG-2 refers to ISO/IEC standards 13818 and is incorporated herein by reference.
- a compressed digital video system is described in the Advanced Television Systems Committee (ATSC) digital television standard document A/53, and is incorporated herein by reference.
- the above-referenced standards describe data processing and manipulation techniques that are well suited to the compression and delivery of video, audio and other information using fixed or variable length digital communications systems.
- the above-referenced standards, and other “MPEG-like” standards and techniques compress, illustratively, video information using intra-frame coding techniques (such as run-length coding, Huffman coding and the like) and inter-frame coding techniques (such as forward and backward predictive coding, motion compensation and the like).
- MPEG and MPEG-like video processing systems are characterized by prediction-based compression encoding of video frames with or without intra- and/or inter-frame motion compensation encoding.
- JPEG refers to the Joint Photographic Experts Group still-picture coding standard, ISO/IEC 10918-1, which is incorporated herein by reference.
- information such as pixel intensity and pixel color depth of a digital image is encoded as a binary integer between 0 and 2^n−1.
- film makers and television studios typically utilize video information having 10-bit pixel intensity and pixel color depth, which produces luminance and chrominance values between zero and 1023.
- 10-bit dynamic range of the video information may be preserved on film and in the studio
- the above-referenced standards typically utilize a dynamic range of only 8-bits.
- the quality of a film, video or other information source provided to an ultimate information consumer is degraded by dynamic range constraints of the information encoding methodologies and communication networks used to provide such information to a consumer.
- the invention comprises a method and apparatus for preserving the dynamic range of a relatively high dynamic range information stream, illustratively a high resolution video signal, subjected to a relatively low dynamic range encoding and/or transport process(es).
- the invention subjects the relatively high dynamic range information stream to a segmentation and remapping process whereby each segment is remapped to the relatively low dynamic range appropriate to the encoding and/or transport process(es) utilized.
- An auxiliary information stream includes segment and associated remapping information such that the initial, relatively high dynamic range information stream may be recovered in a post-encoding (i.e. decoding) or post-transport (i.e., receiving) process.
- a method for encoding an information frame comprises the steps of: dividing the information frame into a plurality of information regions, at least one of the information regions comprising at least one information parameter having associated with it a plurality of intra-region values bounded by upper and lower value limits defining a dynamic range of the information parameter; determining, for each of the at least one information region, a respective maximal value and a minimal value of the at least one information parameter; remapping, for each of the at least one information regions and according to the respective determined maximal and minimal values, the respective plurality of intra-region values of the at least one information parameter; and encoding each information region.
- FIG. 1 depicts an information distribution system
- FIG. 2 is a flow diagram of a combined information stream encoding method and decoding method
- FIG. 3A depicts an image that has been divided into a plurality of regions using a pixel coordinate technique
- FIG. 3B depicts an image that has been divided into a plurality of single macroblock regions defined by row and column;
- FIG. 4A depicts a diagram illustrative of a non-linear encoding function
- FIG. 4B depicts a diagram illustrative of a non-linear decoding function associated with the encoding function of FIG. 4A ;
- FIG. 5 depicts a high level function block diagram of an encoding and decoding method and apparatus.
- the teachings of the present invention are readily applicable to single or still image processing (e.g., JPEG-like image processing). More generally, the teachings of the present invention are applicable to any form of information comprising one or more information parameters having associated with them a relatively high dynamic range.
- the invention provides the capability to reduce that dynamic range for, e.g., processing or transport, and subsequently restore that dynamic range.
- FIG. 1 depicts an information distribution system 100 that encodes, illustratively, a 10-bit dynamic range information stream using a pre-processing function to produce a range enhancement information stream, and an 8-bit encoding process, illustratively an MPEG-like encoding process, to produce an 8-bit encoded information stream.
- the 8-bit encoded information stream and the range enhancement information stream are transported to, e.g., a receiver.
- the 8-bit encoded information stream is subjected to a decoding process, illustratively an MPEG-like decoding process, to produce an 8-bit decoded information stream.
- a post-processing function utilizes the range enhancement information stream to enhance the dynamic range of the 8-bit decoded information stream such that the original 10-bit dynamic range is substantially restored.
- the system 100 of FIG. 1 comprises an information coding section (10-30) suitable for use by, illustratively, an information provider such as a television studio; an information distribution section (35), illustratively a communication channel such as a terrestrial broadcast channel; and an information decoding section (40-60), suitable for use by, illustratively, an information consumer having an appropriate decoding device.
- the information coding section comprises a region map and scale unit 10 that receives a relatively high dynamic range information signal S 1 , illustratively a 10-bit dynamic range video signal, from an information source such as a video source (not shown).
- the region map and scale unit 10 divides each picture-representative, frame-representative or field-representative portion of the 10-bit video signal S 1 into a plurality of, respectively, sub-picture regions, sub-frame regions or sub-field regions.
- the operation of region map and scale unit 10 will be described in more detail below with respect to FIG. 2 .
- each of the plurality of regions is processed to identify, illustratively, a maximum luminance level (Y max ) and a minimum luminance level (Y min ) utilized by pixels within the processed region.
- the luminance information within each region is then scaled (i.e., remapped) from the original 10-bit dynamic range (i.e., 0 to 1023) to an 8-bit dynamic range having upper and lower limits corresponding to the identified minimum luminance level (Y min ) and maximum luminance level (Y max ) of the respective region to produce, at an output, an 8-bit baseband video signal S 3 .
- the maximum and minimum values associated with each region, and information identifying the region are coupled to an output as a map region ID signal S 4 .
- the map region ID signal may comprise an empty set.
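The per-region scan and remap performed by the region map and scale unit can be sketched as follows. This is a minimal illustration under our own assumptions, not the patent's implementation: the frame is a list of rows of 10-bit luminance values, regions are fixed-size tiles, and values are scaled to target_range − 1 so the remapped maximum fits in 8 bits (the patent's equations scale by the full target range TR). All function and variable names are ours.

```python
def encode_frame(frame, region_h, region_w, target_range=256):
    """Divide a frame into regions; for each region record (Ymin, Ymax)
    and remap its values into [0, target_range - 1]."""
    rows, cols = len(frame), len(frame[0])
    remapped = [row[:] for row in frame]
    region_info = []  # (top, left, Ymin, Ymax) -- the map region ID data
    for top in range(0, rows, region_h):
        for left in range(0, cols, region_w):
            cells = [(r, c)
                     for r in range(top, min(top + region_h, rows))
                     for c in range(left, min(left + region_w, cols))]
            lo = min(frame[r][c] for r, c in cells)
            hi = max(frame[r][c] for r, c in cells)
            region_info.append((top, left, lo, hi))
            # Flat regions (hi == lo) carry no spread; map them to 0.
            scale = (target_range - 1) / (hi - lo) if hi > lo else 0.0
            for r, c in cells:
                remapped[r][c] = int((frame[r][c] - lo) * scale + 0.5)
    return remapped, region_info
```

The returned region_info list plays the role of the map region ID signal S 4: without it, the 8-bit stream cannot be restored to its original 10-bit range.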
- An encoder 15 receives the remapped, 8-bit baseband video (or image) signal S 3 from the region map and scale unit 10 .
- the encoder 15 encodes the 8-bit baseband video signal to produce a compressed video signal S 5 , illustratively an MPEG-like video elementary stream.
- An audio encoder 20 receives a baseband audio signal S 2 from an audio source (not shown).
- the baseband audio signal S 2 is, typically, temporally related to the baseband video signal S 3 .
- the audio encoder 20 encodes the baseband audio signal to produce a compressed audio signal S 16 , illustratively an MPEG-like audio elementary stream. It must be noted that audio encoder 20 , and other audio functionality to be described later, is not strictly necessary to the practice of the invention.
- a service multiplexer 25 wraps the map region ID signal S 4 , the elementary stream S 5 and the audio elementary stream S 16 into respective variable-length or fixed length packet structures known as packetized elementary streams.
- the packetized elementary streams (PES) are combined to form a multiplexed PES S 6 .
- the PES structure provides, e.g., functionality for identification and synchronization of decoding and presentation of the video, audio and other information.
- a transport encoder 30 converts the PES packets of multiplexed PES S 6 into fixed-length transport packets in a known manner to produce a transport stream S 7 .
- the map region ID signal S 4 may be communicated to an end user (e.g., a decoder) via a plurality of means within the context of, e.g., the various communications standards.
- User private data tables and private data or message descriptors incorporating the map region ID signal S 4 may be placed in designated locations throughout messages as described in the MPEG and ATSC standards. The use of such data, and other MPEG, ATSC, DVB and similar private, user or auxiliary data communication formats is contemplated by the inventors.
- the map region ID signal S 4 includes information corresponding to encoded region information within the elementary stream S 5
- the map region ID signal is included as private data within the multiplexed elementary stream S 5 .
- the map region ID signal S 4 may be communicated as an auxiliary data stream, an MPEG-like data stream or a user private data or message stream.
- Private data may comprise a data stream associated with a particular packet identification (PID), private or user data inserted into, e.g., a payload or header portion of another data stream (e.g., a packetized elementary stream including the elementary stream S 5 ) or other portions of an information stream.
- the map region ID signal S 4 is optionally incorporated into a transport stream private section.
- the transport encoder includes, in a private data section of the transport stream being formed, the dynamic range enhancement stream.
- the transport encoder associates the encoded information stream and the associated dynamic range enhancement stream with respective packet identification (PID) values.
- the transport encoder incorporates, into a packetized stream, the encoded information stream. Additionally, the transport encoder includes, within a header portion of the packetized stream incorporating the encoded information stream, the associated dynamic range enhancement stream.
- the information distribution section comprises a communications network 35 , illustratively a terrestrial broadcast, fiber optic, telecommunications or other public or private data communications network.
- the communications network receives the transport stream S 7 produced by the information coding section; modulates or encodes the transport stream S 7 to conform to the requirements of the communications network (e.g., converting the MPEG transport stream S 7 into an asynchronous transfer mode (ATM) format); transmits the modulated or encoded transport stream to, e.g., a receiver; and demodulates or decodes the modulated or encoded transport stream to produce an output transport stream S 8 .
- the information decoding section comprises a transport decoder 40 that converts the received transport stream S 8 into a multiplexed PES S 9 .
- the multiplexed PES S 9 is demultiplexed by a service demultiplexer 45 to produce a map region ID signal S 14 , a video elementary stream S 12 and an audio elementary stream S 10 corresponding to, respectively, map region ID signal S 4 , elementary stream S 5 and audio elementary stream S 16 .
- the video elementary stream S 12 is decoded in a known manner by a video decoder 55 to produce an 8-bit baseband video signal S 13 corresponding to the remapped 8-bit baseband video signal S 3 .
- the audio elementary stream S 10 is decoded in a known manner by an audio decoder 50 to produce a baseband audio output signal S 11 , corresponding to the baseband audio signal S 2 , which is coupled to an audio processor (not shown) for further processing.
- An inverse region map and scale unit 60 receives the 8-bit baseband video signal S 13 and the map region ID signal S 14 .
- the inverse region map and scale unit 60 remaps the 8-bit baseband video signal S 13 , on a region by region basis, to produce a 10-bit video signal S 15 corresponding to the original 10-bit dynamic range video signal S 1 .
- the produced 10-bit video signal is coupled to a video processor (not shown) for further processing.
- the operation of inverse region map and scale unit 60 will be described in more detail below with respect to FIG. 2 .
- the inverse region map and scale unit 60 retrieves, from the map region ID signal S 14 , the previously identified maximum luminance level (Y max ) and minimum luminance level (Y min ) associated with each picture, frame or field region, and any identifying information necessary to associate the retrieved maximum and minimum values with a particular region within the 8-bit baseband video signal S 13 .
- the luminance information associated with each region is then scaled (i.e., remapped) from the 8-bit dynamic range bounded by the identified minimum luminance level (Y min ) and maximum luminance level (Y max ) associated with the region to the original 10-bit (i.e., 0-1023) dynamic range to produce the 10-bit video signal S 15 .
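A sketch of the inverse remapping performed by the inverse region map and scale unit 60 , under our own conventions (8-bit values are scaled from target_range − 1 back into the region's [Y min, Y max] sub-range of the 10-bit scale; the names are ours, not the patent's):

```python
def inverse_remap(values, y_min, y_max, target_range=256):
    """Restore 8-bit remapped values to the region's original
    [y_min, y_max] sub-range of the 10-bit scale."""
    if y_max == y_min:
        return [y_min for _ in values]  # flat region: every pixel is y_min
    scale = (y_max - y_min) / (target_range - 1)
    return [int(v * scale + y_min + 0.5) for v in values]
```

For example, inverse_remap([0, 255], 400, 600) restores the region's extreme luminance values 400 and 600.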
- other high dynamic range parameters associated with an information signal e.g., chrominance components, high dynamic range audio information and the like
- the map region ID signal S 4 is recovered from a private data section of said transport stream.
- the map region ID signal is associated with a respective packet identification (PID) value and recovered using that value.
- the encoded information stream is recovered from a packetized stream associated with a predefined packet identification (PID) value, while the map region ID signal is retrieved from a header portion of the packetized stream associated with the predefined packet identification (PID) value.
- FIG. 2 is a flow diagram of a combined information stream encoding method and decoding method.
- the method 200 is entered at step 210 when a relatively high dynamic range information stream comprising a plurality of logical information frames is received by, e.g., region map and scale unit 10 .
- the method 200 proceeds to step 215 , where each logical information frame of the received information stream is divided into regions according to, illustratively, the criteria depicted in box 205 , which include: fixed or variable coordinate regions based on picture, frame, field, slice, macroblock, block and pixel location, related motion vector information and the like.
- an exemplary region comprises a single macroblock.
- a parameter of interest may comprise a luminance parameter (Y), color difference parameter (U, V), motion vector and the like.
- the method 200 then proceeds to step 225 , where the parameters of interest in each pixel of each region are remapped to a parameter value range bounded by respective maximum and minimum parameter values. That is, if the parameter of interest of a pixel is a luminance parameter, all the luminance parameters within a particular region are remapped to a range determined by the maximum luminance value and the minimum luminance value within the particular region as previously determined in step 220 .
- the steps of regional division of logical frames, maximum and minimum parameter determination, and remapping comprise the steps necessary to generate an information stream and an associated dynamic range enhancement stream.
- dynamic range degradation visited upon the information stream due to a subsequent, relatively low dynamic range processing step may be largely corrected by a second, subsequent processing step (e.g., steps 240 - 245 below). This concept is critical to the understanding of the invention.
- step 230 the information within the region is encoded, to produce an encoded information stream.
- encoding may comprise one of the MPEG-like encoding standards referenced above.
- step 235 the encoded information stream, maximum and minimum data associated with each region of the encoded information stream, and information sufficient to associate each region with its respective maximum and minimum parameter(s) information are transported to, e.g., a receiver.
- step 240 the encoded information stream is decoded to produce a decoded information stream.
- the dynamic range of the decoded information stream will not exceed the dynamic range of the encoding or processing methodology employed in, e.g., steps 230 - 235 .
- MPEG-like encoding and decoding methodology which utilizes an eight bit dynamic range will produce, at the decoder output, a video information stream having only an eight bit dynamic range luminance parameter.
- step 240 After decoding the transported information stream (step 240 ), the method 200 proceeds to step 245 , where the eight bit dynamic range decoded information stream is remapped on a region by region basis using the respective maximum and minimum values associated with the parameter or parameters of interest in each region. The resulting relatively high dynamic range information stream is then utilized at step 250 .
- Information streams are typically segmented or framed according to a logical constraint.
- Each logical segment or frame comprises a plurality of information elements, and each information element is typically associated with one or more parameters.
- video information streams are typically segmented in terms of a picture, frame or field.
- the picture, frame or field comprises a plurality of information elements known as picture elements (pixels).
- Each pixel is associated with parameters such as luminance information and chrominance information.
- pixels are grouped into blocks or macroblocks. Pixels, blocks and macroblocks may also have associated with them motion parameters and other parameters.
- Each of the parameters associated with a pixel, block or macroblock is accurate to the extent that the dynamic range of the information defining the parameter is accurate.
- preservation of the dynamic range of some parameters is more critical than preservation of the dynamic range of other parameters, such as block motion.
- degradation of some parameters due to dynamic range constraints may be acceptable, while other parameters should be preserved with as high a fidelity as possible.
- the dynamic range of the luminance information representing the image may be fully utilized. That is, the value of luminance parameters associated with pixels in the image may range (in a 10-bit dynamic range representation) from zero (black) to 1023 (white).
- when the dynamic range of the luminance information representing the image (illustratively, a 10-bit studio image) exceeds the dynamic range of an information processing operation used to process the image (illustratively, an 8-bit MPEG encoding operation), quantization errors will necessarily degrade the resulting processed image.
- as the size of a region decreases, the probability that the full 10-bit dynamic range of the luminance information is utilized within that region also decreases.
- Regions may be selected according to any intra-frame selection criteria.
- appropriate criteria include scan lines, regions defined by pixel coordinates, blocks, macroblocks, slices and the like.
- the smaller the region selected the greater the probability of preserving the full dynamic range of the information element parameter.
- FIG. 3A depicts an image 300 that has been divided into a plurality of regions 301 - 307 using a pixel coordinate technique.
- identifying indicia of region location comprise pixel coordinates defining, e.g., corners or edges of the regions.
- FIG. 3B depicts an image 300 that has been divided into a plurality of single macroblock regions defined by row (R 1 -R N ) and column (C 1 -C N ). Since the regions defined in FIG. 3B are much smaller than the regions defined in FIG. 3A , there is a greater probability of preserving the dynamic range of the parameters of interest forming the image.
- identifying indicia of region location comprise macroblock address, as defined by row (i.e., slice) number and column number.
- a simpler method of region identification comprises identifying each region (i.e., macroblock) by a macroblock offset value representing the number of macroblocks from the start of a picture (i.e., the number of macroblocks from the top left, or first, macroblock).
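Assuming macroblocks are numbered in raster order, the offset-based identification can be sketched as follows (mb_columns is the number of macroblocks per row, e.g. 45 for a 720-pixel-wide picture with 16×16 macroblocks; the names are ours):

```python
def macroblock_offset(row, col, mb_columns):
    """Offset of a macroblock from the top-left (first) macroblock."""
    return row * mb_columns + col

def macroblock_position(offset, mb_columns):
    """Recover (row, col) -- i.e. (slice number, column) -- from an offset."""
    return divmod(offset, mb_columns)
```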
- a linear remapping of an original parameter value (OP) having an original dynamic range (OR) to a target parameter value (TP) having a target dynamic range (TR) may be performed per equation 1. For a 10-bit to 8-bit remapping (OR=1024, TR=256), equation 1 becomes equation 2; the corresponding 8-bit to 10-bit inverse remapping is given by equation 3. In equations 1 through 3, the quantities or results within the floor function operators ⌊ ⌋ are rounded down to the nearest integer value.
- TP=⌊OP*(TR/OR)+0.5⌋ (eq. 1)
- TP=⌊OP*(256/1024)+0.5⌋ (eq. 2)
- TP=⌊OP*(1024/256)+0.5⌋ (eq. 3)
- when the remapping is performed relative to the minimum (MIN) and maximum (MAX) parameter values determined for a region, equation 1 becomes equation 4; for an illustrative region having MIN=400 and MAX=600, equation 4 becomes equation 5.
- TP=⌊(OP−MIN)*(TR/(MAX−MIN))+0.5⌋ (eq. 4)
- TP=⌊(OP−400)*(TR/(600−400))+0.5⌋ (eq. 5)
- a function such as equation 4 will be able to preserve the relatively high dynamic range of the original pixel parameter as long as the difference between the maximum and minimum parameter values does not exceed a range defined by the ratio of the original dynamic range to the target dynamic range. That is, in the case of a 10-bit original dynamic range and an 8-bit target dynamic range, where the ratio is 1024:256 (i.e., 4:1), the difference between the maximum and minimum values must not be greater than one fourth of the original dynamic range.
- a threshold level of dynamic range for each region is established that determines if the full, original dynamic range of the parameter will be preserved by the invention. Since, in equation 5, the difference between the maximum (600) and minimum (400) is less than one fourth of the 10-bit dynamic range (256), full 10-bit dynamic range will be preserved.
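The region-based remap of equations 4 and 5 can be verified exhaustively; the inverse helper below is my own eq.-3-style sketch (the patent does not spell out the decoder formula here), and 255 is used as the top of the 8-bit target range.

```python
import math

def remap_region(op, tr, mn, mx):
    # eq. 4: remap OP using the region's own MIN/MAX rather than the
    # full original range
    return math.floor((op - mn) * (tr / (mx - mn)) + 0.5)

def inverse_remap_region(tp, tr, mn, mx):
    # Assumed decoder-side inverse: scale back up and re-add the minimum.
    return math.floor(tp * ((mx - mn) / tr) + 0.5) + mn

# eq. 5: a region whose 10-bit pixels all lie in [400, 600].
# MAX-MIN = 200 is below the ~1/4-range threshold, so every one of the
# 201 original values survives the trip through 8 bits.
lossless = all(
    inverse_remap_region(remap_region(op, 255, 400, 600), 255, 400, 600) == op
    for op in range(400, 601)
)
```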
- equations 4 and 5 should not in any way be construed as limiting the scope of the invention. Rather, equations 4 and 5 are presented as only one of a plurality of linear functions suitable for use in the invention.
- the invention may also be practiced using non-linear functions (such as gamma correction and companding functions). Moreover, the invention may be practiced using a combination of linear and non-linear functions to optimize data compaction.
- the linear and/or non-linear functions selected will vary depending on the type of information stream being processed, the typical distribution of parameters of interest within the information elements of that stream, the amount of dynamic range allowed for a given application, the processing constraints of the encoder and/or decoder operating on the information streams and other criteria.
- the above-described method advantageously provides substantially full dynamic range preservation of selected information element parameters in an information frame.
- the cost, in terms of extra bits necessary to implement the invention, e.g., the overhead due to transporting minimum and maximum pixel values for each region of a picture over the communications network 35 of FIG. 1, will now be briefly discussed.
- each of the luminance (Y) and color difference (U, V) signals has a 10-bit dynamic range.
- a small region size is selected, such as a 16 ⁇ 16 block of 8-bit pixels.
- adding six 10-bit values, a minimum and a maximum for each of the luminance (Y) and color difference (U, V) signals, to this block increases the number of bits by 60 to 6204 bits, or an increase of about 1%.
- the effective dynamic range of each of the luminance (Y) and color difference (U, V) signals is never worse than 8 bits, and may be as high as 10 bits, a factor of four improvement in the respective intensity and color depth resolutions.
- the method provides a substantial improvement in dynamic range without a correspondingly substantial increase in bit count.
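The overhead figures above check out arithmetically; this sketch assumes the block carries 8-bit Y, U and V samples for every pixel (24 bits per pixel), which is what the 6144-bit baseline implies.

```python
# Back-of-the-envelope check of the ~1% overhead claim for a 16x16 region.
block_bits = 16 * 16 * 8 * 3        # 6144 bits: 8-bit Y, U, V per pixel
overhead = 6 * 10                   # a 10-bit min and max for each of Y, U, V
total = block_bits + overhead       # 6204 bits, as stated in the text
increase = overhead / block_bits    # ~0.0098, i.e. about 1%
```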
- the method leverages the cost-savings of existing 8-bit chipsets to provide a 10-bit (or higher) effective dynamic range.
- Non-linear mapping methods may be used to implement gamma correction and other functions while preserving the dynamic range of the underlying signal.
- linear and non-linear methods may be used together.
- non-linear mapping is used to preserve the original pixel values (i.e., dynamic range) over some part of the range, e.g., a lower range (0-131), while compressing the remainder, e.g., an upper range (132-1023).
- FIG. 4A depicts a diagram illustrative of a non-linear encoding function.
- the diagram comprises an original dynamic range 410A of 1024 levels and a target dynamic range 420A of 255 levels.
- a signal 430A, 440A having a 1024-level dynamic range is remapped into the 255-level dynamic range space in two segments.
- the first segment 430 A utilizes a substantially linear transfer function, while the second segment 440 A utilizes a compressed transfer function. That is, the range of 0-131 in the original map is retained in the target map, while the range of 132 to 1023 in the original map is compressed into the 132-255 range of the target map.
- FIG. 4B depicts a diagram illustrative of a non-linear decoding function associated with the encoding function of FIG. 4A .
- the decoder implements a remapping function having the transfer function depicted in FIG. 4B .
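The two-segment transfer functions of FIGS. 4A and 4B can be sketched as below. The 0-131 / 132-1023 breakpoints come from the text; treating the compressed upper segment as linear is my assumption (the figure only requires it to be monotonic).

```python
import math

def encode(op):
    # FIG. 4A: values 0-131 pass through unchanged (segment 430A);
    # 132-1023 are squeezed linearly into 132-255 (segment 440A).
    if op <= 131:
        return op
    return 132 + math.floor((op - 132) * (255 - 132) / (1023 - 132) + 0.5)

def decode(tp):
    # FIG. 4B: the inverse remapping applied at the decoder. Only the
    # compressed upper segment loses precision on the round trip.
    if tp <= 131:
        return tp
    return 132 + math.floor((tp - 132) * (1023 - 132) / (255 - 132) + 0.5)

low = decode(encode(100))     # lower segment: exact round trip
top = encode(1023)            # top of original range maps to 255
```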
- FIG. 5 depicts a high level function block diagram of an encoding and decoding method and apparatus according to the invention.
- the encoding and decoding method and process comprises a function mapper 530 , which is responsive to an information stream S 1 received from, illustratively, a pixel source 510 .
- the function mapper remaps the information stream S 1 according to various function criteria f c provided by a function criteria source 520 to produce a remapped information stream S 3 and an associated map information stream S 4 .
- the remapped information stream S 3 is coupled to an encoder 540 that encodes the remapped information stream S 3 to produce an encoded information stream S 5 .
- the encoded information stream S 5 and the map information stream S 4 are transported to, respectively, a decoder 550 and an inverse function mapper 560 .
- the decoder 550 decodes the transported and encoded information stream to retrieve an information stream substantially corresponding to the initial remapped information stream.
- the inverse function mapper 560 performs, in accordance with the transported map information stream S 4 , an inverse function mapping operation on the retrieved stream to produce an information stream substantially corresponding to the original information stream. It must be noted that the information stream produced by the inverse function mapper 560 may advantageously include linear and/or non-linear modifications in furtherance of the specific application (e.g., gamma correction and the like).
- function mapper 530 and inverse function mapper 560 may be operated in substantially the same manner as the region map and scale unit 10 and inverse region map and scale unit 60 depicted in FIG. 1 .
- the remapping function performed by, e.g., the function mapper 530 or the region map and scale unit 10 may be an arbitrary function, as in equation 6.
- TP=F(OP,MAX,MIN,TR) (eq. 6)
- the function F may take a number of forms and be implemented in a number of ways.
- the function F may implement: 1) a simple linear function such as described above with respect to FIGS. 1-2; 2) a gamma correction function that varies input video intensity levels such that they correspond to the intensity response levels of a display device; 3) an arbitrary polynomial; or 4) a tabulated function (i.e., a function described purely in terms of a lookup table, where each input value addresses the table to retrieve the contents stored therein).
- TP=└F[(OP−MIN)^γ*TR/(MAX−MIN)^γ]+0.5┘ (eq. 7)
- TP=└[(OP−MIN)^2+(OP−MIN)]*TR/[(MAX−MIN)^2+(MAX−MIN)]+0.5┘ (eq. 8)
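A gamma-style remap per equation 7 can be sketched as follows; the outer F is taken as the identity, and the exponent value 0.45 is an illustrative choice, not one fixed by the text.

```python
import math

def gamma_remap(op, mn, mx, tr, gamma=0.45):
    # eq. 7 with F = identity: apply a power-law (gamma) curve to the
    # min-subtracted value before scaling into the target range.
    return math.floor((op - mn) ** gamma * tr / (mx - mn) ** gamma + 0.5)

# A region with MIN=400, MAX=600 remapped into an 8-bit (255-level) range.
mid = gamma_remap(500, 400, 600, 255)   # midpoint lands above 127 because
                                        # gamma < 1 expands the dark end
```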
- the table comprises an indexable array of values, where the index values span the original range and the stored values lie in the target range. This allows any arbitrary mapping between the two ranges. Unless the mapping is one-way only (as with gamma correction, where the remapping is not intended to be "unmapped"), an inverse table at the decoder 550 or inverse map and scale unit 60 restores the original information values.
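A minimal sketch of the tabulated-function idea: a forward table indexed by original values, with an inverse table built for the decoder side. The ranges and the 2:1 mapping here are illustrative, not taken from the patent.

```python
ORIG_RANGE, TARGET_RANGE = 16, 8

# Forward table: any arbitrary mapping from original to target values.
forward = [v // 2 for v in range(ORIG_RANGE)]

# Inverse table for the decoder: record, for each target value, one
# original value that maps to it (the mapping is lossy, so decoding is
# only approximate).
inverse = [None] * TARGET_RANGE
for ov, tv in enumerate(forward):
    inverse[tv] = ov            # last original value seen wins

tp = forward[9]                 # encode an original value of 9
op = inverse[tp]                # decode: lands on a nearby value, not 9
```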
- the terms dynamic range enhancement stream and map region identification stream are used in substantially the same manner, namely to describe information streams carrying auxiliary or other data suitable for use in recovering at least a portion of the dynamic range of an information stream processed according to the invention.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
- Compression, Expansion, Code Conversion, And Decoders (AREA)
- Facsimile Image Signal Circuits (AREA)
- Compression Of Band Width Or Redundancy In Fax (AREA)
Abstract
Description
TP=└OP*(TR/OR)+0.5┘ (eq. 1)
TP=└OP*(256/1024)+0.5┘ (eq. 2)
TP=└OP*(1024/256)+0.5┘ (eq. 3)
TP=└(OP−MIN)*(TR/(MAX−MIN))+0.5┘ (eq. 4)
TP=└(OP−400)*(TR/(600−400))+0.5┘ (eq. 5)
TP=F(OP,MAX,MIN,TR) (eq. 6)
TP=└F[(OP−MIN)^γ*TR/(MAX−MIN)^γ]+0.5┘ (eq. 7)
TP=└[(OP−MIN)^2+(OP−MIN)]*TR/[(MAX−MIN)^2+(MAX−MIN)]+0.5┘ (eq. 8)
Claims (31)
TP=[OP*(TR/OR)+0.5]
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US12/841,862 USRE43647E1 (en) | 1998-03-30 | 2010-07-22 | Region-based information compaction as for digital images |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US09/050,304 US6118820A (en) | 1998-01-16 | 1998-03-30 | Region-based information compaction as for digital images |
US09/292,693 US6560285B1 (en) | 1998-03-30 | 1999-04-15 | Region-based information compaction as for digital images |
US10/429,985 US7403565B2 (en) | 1998-03-30 | 2003-05-06 | Region-based information compaction as for digital images |
US12/841,862 USRE43647E1 (en) | 1998-03-30 | 2010-07-22 | Region-based information compaction as for digital images |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/429,985 Reissue US7403565B2 (en) | 1998-03-30 | 2003-05-06 | Region-based information compaction as for digital images |
Publications (1)
Publication Number | Publication Date |
---|---|
USRE43647E1 true USRE43647E1 (en) | 2012-09-11 |
Family
ID=23125789
Family Applications (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US09/292,693 Expired - Fee Related US6560285B1 (en) | 1998-03-30 | 1999-04-15 | Region-based information compaction as for digital images |
US10/429,985 Ceased US7403565B2 (en) | 1998-03-30 | 2003-05-06 | Region-based information compaction as for digital images |
US12/841,862 Expired - Fee Related USRE43647E1 (en) | 1998-03-30 | 2010-07-22 | Region-based information compaction as for digital images |
Family Applications Before (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US09/292,693 Expired - Fee Related US6560285B1 (en) | 1998-03-30 | 1999-04-15 | Region-based information compaction as for digital images |
US10/429,985 Ceased US7403565B2 (en) | 1998-03-30 | 2003-05-06 | Region-based information compaction as for digital images |
Country Status (4)
Country | Link |
---|---|
US (3) | US6560285B1 (en) |
EP (1) | EP1169863A1 (en) |
JP (1) | JP4554087B2 (en) |
WO (1) | WO2000064185A1 (en) |
Families Citing this family (57)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6829301B1 (en) | 1998-01-16 | 2004-12-07 | Sarnoff Corporation | Enhanced MPEG information distribution apparatus and method |
US6560285B1 (en) | 1998-03-30 | 2003-05-06 | Sarnoff Corporation | Region-based information compaction as for digital images |
US7099348B1 (en) * | 1998-11-03 | 2006-08-29 | Agere Systems Inc. | Digital audio broadcast system with local information |
EP1305691A4 (en) * | 2000-06-02 | 2003-07-23 | Radisys Corp | Voice-over ip communication without echo cancellation |
US7103668B1 (en) * | 2000-08-29 | 2006-09-05 | Inetcam, Inc. | Method and apparatus for distributing multimedia to remote clients |
US8290062B1 (en) | 2000-09-27 | 2012-10-16 | Intel Corporation | Method and apparatus for manipulating MPEG video |
US20020140834A1 (en) * | 2001-03-30 | 2002-10-03 | Olding Benjamin P. | Method and apparatus for companding pixel data in a digital pixel sensor |
US6987536B2 (en) * | 2001-03-30 | 2006-01-17 | Pixim, Inc. | Method and apparatus for storing image information for multiple sampling operations in a digital pixel sensor |
US7340605B2 (en) * | 2001-10-31 | 2008-03-04 | Agilent Technologies, Inc. | System and method for secure download of waveforms to signal generators |
AU2002951574A0 (en) * | 2002-09-20 | 2002-10-03 | Unisearch Limited | Method of signalling motion information for efficient scalable video compression |
AU2003260192B2 (en) * | 2002-09-20 | 2010-03-04 | Newsouth Innovations Pty Limited | Method of signalling motion information for efficient scalable video compression |
US6879731B2 (en) * | 2003-04-29 | 2005-04-12 | Microsoft Corporation | System and process for generating high dynamic range video |
US7142723B2 (en) * | 2003-07-18 | 2006-11-28 | Microsoft Corporation | System and process for generating high dynamic range images from multiple exposures of a moving scene |
US8032645B2 (en) * | 2003-11-13 | 2011-10-04 | Panasonic Corporation | Coding method and coding apparatus |
US7492375B2 (en) * | 2003-11-14 | 2009-02-17 | Microsoft Corporation | High dynamic range image viewing on low dynamic range displays |
US7649539B2 (en) * | 2004-03-10 | 2010-01-19 | Microsoft Corporation | Image formats for video capture, processing and display |
JP4385841B2 (en) * | 2004-04-22 | 2009-12-16 | ソニー株式会社 | Image processing device |
US20050265459A1 (en) * | 2004-06-01 | 2005-12-01 | Bhattacharjya Anoop K | Fixed budget frame buffer compression using block-adaptive spatio-temporal dispersed dither |
US7567897B2 (en) * | 2004-08-12 | 2009-07-28 | International Business Machines Corporation | Method for dynamic selection of optimized codec for streaming audio content |
US8050511B2 (en) * | 2004-11-16 | 2011-11-01 | Sharp Laboratories Of America, Inc. | High dynamic range images from low dynamic range images |
US7932914B1 (en) * | 2005-10-20 | 2011-04-26 | Nvidia Corporation | Storing high dynamic range data in a low dynamic range format |
US7713773B2 (en) * | 2005-11-02 | 2010-05-11 | Solopower, Inc. | Contact layers for thin film solar cells employing group IBIIIAVIA compound absorbers |
US7684626B1 (en) * | 2005-12-01 | 2010-03-23 | Maxim Integrated Products | Method and apparatus for image decoder post-processing using image pre-processing and image encoding information |
CN100452090C (en) * | 2006-03-14 | 2009-01-14 | 腾讯科技(深圳)有限公司 | Method and system for implementing high dynamic light range |
US8880571B2 (en) * | 2006-05-05 | 2014-11-04 | Microsoft Corporation | High dynamic range data format conversions for digital media |
CN101449586B (en) * | 2006-05-25 | 2012-08-29 | 汤姆逊许可证公司 | Method and system for weighted coding |
US8253752B2 (en) * | 2006-07-20 | 2012-08-28 | Qualcomm Incorporated | Method and apparatus for encoder assisted pre-processing |
US8155454B2 (en) * | 2006-07-20 | 2012-04-10 | Qualcomm Incorporated | Method and apparatus for encoder assisted post-processing |
US8054886B2 (en) * | 2007-02-21 | 2011-11-08 | Microsoft Corporation | Signaling and use of chroma sample positioning information |
US7825938B2 (en) * | 2007-09-06 | 2010-11-02 | Himax Technologies Limited | Method and apparatus for processing digital image to be displayed on display device with backlight module |
GB2452765A (en) * | 2007-09-14 | 2009-03-18 | Dooworks Fz Co | Combining multiple image streams by reducing the colour depth of each bit stream combining them onto a single stream |
US8165393B2 (en) | 2008-06-05 | 2012-04-24 | Microsoft Corp. | High dynamic range texture compression |
US20090322777A1 (en) * | 2008-06-26 | 2009-12-31 | Microsoft Corporation | Unified texture compression framework |
JP5697301B2 (en) * | 2008-10-01 | 2015-04-08 | 株式会社Nttドコモ | Moving picture encoding apparatus, moving picture decoding apparatus, moving picture encoding method, moving picture decoding method, moving picture encoding program, moving picture decoding program, and moving picture encoding / decoding system |
US9036693B2 (en) * | 2009-01-08 | 2015-05-19 | Sri International | Method and system for providing region-of-interest video compression |
KR101885258B1 (en) | 2010-05-14 | 2018-08-06 | 삼성전자주식회사 | Method and apparatus for video encoding, and method and apparatus for video decoding |
EP2445214A1 (en) * | 2010-10-19 | 2012-04-25 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Video coding using temporally coherent dynamic range mapping |
US9024951B2 (en) * | 2011-02-16 | 2015-05-05 | Apple Inc. | Devices and methods for obtaining high-local-contrast image data |
DK3324622T3 (en) | 2011-04-14 | 2019-10-14 | Dolby Laboratories Licensing Corp | INDICATOR WITH MULTIPLE REGRESSIONS AND MULTIPLE COLOR CHANNELS |
JP2013055615A (en) * | 2011-09-06 | 2013-03-21 | Toshiba Corp | Moving image coding device, method of the same, moving image decoding device, and method of the same |
TWI586150B (en) * | 2012-06-29 | 2017-06-01 | 新力股份有限公司 | Image processing device and non-transitory computer readable storage medium |
CN105324997B (en) | 2013-06-17 | 2018-06-29 | 杜比实验室特许公司 | For enhancing the adaptive shaping of the hierarchical coding of dynamic range signal |
JP5639228B2 (en) * | 2013-06-18 | 2014-12-10 | トムソン ライセンシングThomson Licensing | Weighted encoding method and system |
JP2015008361A (en) * | 2013-06-24 | 2015-01-15 | ソニー株式会社 | Reproducing apparatuses, reproducing method and recording medium |
EP2894857A1 (en) * | 2014-01-10 | 2015-07-15 | Thomson Licensing | Method and apparatus for encoding image data and method and apparatus for decoding image data |
EP3096509A4 (en) * | 2014-01-14 | 2016-12-28 | Fujitsu Ltd | Image processing program, display program, image processing method, display method, image processing device, and information processing device |
US10701403B2 (en) | 2014-01-24 | 2020-06-30 | Sony Corporation | Transmission device, transmission method, reception device, and reception method |
CN111526350B (en) * | 2014-02-25 | 2022-04-05 | 苹果公司 | Adaptive video processing |
WO2015128268A1 (en) * | 2014-02-25 | 2015-09-03 | Thomson Licensing | Method for generating a bitstream relative to image/video signal, bitstream carrying specific information data and method for obtaining such specific information |
EP3304881B1 (en) * | 2015-06-05 | 2022-08-10 | Apple Inc. | Rendering and displaying high dynamic range content |
JP6628400B2 (en) * | 2015-09-25 | 2020-01-08 | シャープ株式会社 | Video processing apparatus, video processing method and program |
EP3343913B1 (en) * | 2015-09-30 | 2022-05-25 | Samsung Electronics Co., Ltd. | Display device and method for controlling same |
RU2614576C1 (en) * | 2016-03-11 | 2017-03-28 | Федеральное государственное казенное военное образовательное учреждение высшего образования "Академия Федеральной службы охраны Российской Федерации" (Академия ФСО России) | Method for encoding images based on nonlinear forming system |
US10185718B1 (en) * | 2017-08-23 | 2019-01-22 | The Nielsen Company (Us), Llc | Index compression and decompression |
US11039092B2 (en) * | 2017-11-15 | 2021-06-15 | Nvidia Corporation | Sparse scanout for image sensors |
US10986354B2 (en) * | 2018-04-16 | 2021-04-20 | Panasonic Intellectual Property Corporation Of America | Encoder, decoder, encoding method, and decoding method |
US10652512B1 (en) * | 2018-11-20 | 2020-05-12 | Qualcomm Incorporated | Enhancement of high dynamic range content |
- 1999
  - 1999-04-15 US US09/292,693 patent/US6560285B1/en not_active Expired - Fee Related
- 2000
  - 2000-04-17 WO PCT/US2000/010295 patent/WO2000064185A1/en active Application Filing
  - 2000-04-17 EP EP00926054A patent/EP1169863A1/en not_active Withdrawn
  - 2000-04-17 JP JP2000613198A patent/JP4554087B2/en not_active Expired - Fee Related
- 2003
  - 2003-05-06 US US10/429,985 patent/US7403565B2/en not_active Ceased
- 2010
  - 2010-07-22 US US12/841,862 patent/USRE43647E1/en not_active Expired - Fee Related
Patent Citations (44)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4947447A (en) | 1986-04-24 | 1990-08-07 | Hitachi, Ltd. | Method for data coding |
US4845560A (en) | 1987-05-29 | 1989-07-04 | Sony Corp. | High efficiency coding apparatus |
US5070402A (en) | 1987-11-27 | 1991-12-03 | Canon Kabushiki Kaisha | Encoding image information transmission apparatus |
US4982290A (en) * | 1988-01-26 | 1991-01-01 | Fuji Photo Film Co., Ltd. | Digital electronic still camera effecting analog-to-digital conversion after color balance adjustment and gradation correction |
US5023710A (en) | 1988-12-16 | 1991-06-11 | Sony Corporation | Highly efficient coding apparatus |
US5049990A (en) * | 1989-07-21 | 1991-09-17 | Sony Corporation | Highly efficient coding apparatus |
US5121205A (en) | 1989-12-12 | 1992-06-09 | General Electric Company | Apparatus for synchronizing main and auxiliary video signals |
US5541739A (en) | 1990-06-15 | 1996-07-30 | Canon Kabushiki Kaisha | Audio signal recording apparatus |
JPH04185172A (en) | 1990-11-20 | 1992-07-02 | Matsushita Electric Ind Co Ltd | High-efficiency coding device for digital image signal |
US5307177A (en) | 1990-11-20 | 1994-04-26 | Matsushita Electric Industrial Co., Ltd. | High-efficiency coding apparatus for compressing a digital video signal while controlling the coding bit rate of the compressed digital data so as to keep it constant |
US5612748A (en) | 1991-06-27 | 1997-03-18 | Nippon Hoso Kyokai | Sub-sample transmission system for improving picture quality in motional picture region of wide-band color picture signal |
US5374958A (en) | 1992-06-30 | 1994-12-20 | Sony Corporation | Image compression based on pattern fineness and edge presence |
US5392072A (en) | 1992-10-23 | 1995-02-21 | International Business Machines Inc. | Hybrid video compression system and method capable of software-only decompression in selected multimedia systems |
US5526131A (en) | 1992-12-01 | 1996-06-11 | Samsung Electronics Co., Ltd | Data coding for a digital video tape recorder suitable for high speed picture playback |
US5412428A (en) | 1992-12-28 | 1995-05-02 | Sony Corporation | Encoding method and decoding method of color signal component of picture signal having plurality resolutions |
US5589993A (en) | 1993-02-23 | 1996-12-31 | Matsushita Electric Corporation Of America | Digital high definition television video recorder with trick-play features |
EP0630158A1 (en) | 1993-06-17 | 1994-12-21 | Sony Corporation | Coding of analog image signals |
US5809175A (en) | 1993-06-17 | 1998-09-15 | Sony Corporation | Apparatus for effecting A/D conversation on image signal |
US5610998A (en) | 1993-06-17 | 1997-03-11 | Sony Corporation | Apparatus for effecting A/D conversion on image signal |
US5497246A (en) | 1993-07-15 | 1996-03-05 | Asahi Kogaku Kogyo Kabushiki Kaisha | Image signal processing device |
US5486929A (en) | 1993-09-03 | 1996-01-23 | Apple Computer, Inc. | Time division multiplexed video recording and playback system |
EP0649261A2 (en) | 1993-10-18 | 1995-04-19 | Canon Kabushiki Kaisha | Image data processing and encrypting apparatus |
US5790206A (en) | 1994-09-02 | 1998-08-04 | David Sarnoff Research Center, Inc. | Method and apparatus for global-to-local block motion estimation |
US6025878A (en) | 1994-10-11 | 2000-02-15 | Hitachi America Ltd. | Method and apparatus for decoding both high and standard definition video signals using a single video decoder |
US6084908A (en) | 1995-10-25 | 2000-07-04 | Sarnoff Corporation | Apparatus and method for quadtree based variable block size motion estimation |
US5764805A (en) | 1995-10-25 | 1998-06-09 | David Sarnoff Research Center, Inc. | Low bit rate video encoder using overlapping block motion compensation and zerotree wavelet coding |
US5995668A (en) * | 1995-10-25 | 1999-11-30 | U.S. Philips Corporation | Segmented picture coding method and system, and corresponding decoding method and system |
WO1997017669A1 (en) | 1995-11-08 | 1997-05-15 | Storm Technology, Inc. | Method and format for storing and selectively retrieving image data |
US5757855A (en) | 1995-11-29 | 1998-05-26 | David Sarnoff Research Center, Inc. | Data detection for partial response channels |
US6040867A (en) * | 1996-02-20 | 2000-03-21 | Hitachi, Ltd. | Television signal receiving apparatus and method specification |
US5848220A (en) | 1996-02-21 | 1998-12-08 | Sony Corporation | High definition digital video recorder |
US6125146A (en) | 1996-06-05 | 2000-09-26 | U.S. Philips Corporation | Method and device for decoding coded digital video signals |
WO1997047139A2 (en) | 1996-06-05 | 1997-12-11 | Philips Electronics N.V. | Method and device for decoding coded digital video signals |
US6084912A (en) | 1996-06-28 | 2000-07-04 | Sarnoff Corporation | Very low bit rate video coding/decoding method and apparatus |
US6037984A (en) | 1997-12-24 | 2000-03-14 | Sarnoff Corporation | Method and apparatus for embedding a watermark into a digital image or image sequence |
US6208745B1 (en) | 1997-12-30 | 2001-03-27 | Sarnoff Corporation | Method and apparatus for imbedding a watermark into a bitstream representation of a digital image sequence |
US6118820A (en) | 1998-01-16 | 2000-09-12 | Sarnoff Corporation | Region-based information compaction as for digital images |
WO1999037096A1 (en) | 1998-01-16 | 1999-07-22 | Sarnoff Corporation | Region-based information compaction for digital images |
EP1050167A1 (en) | 1998-01-16 | 2000-11-08 | Sarnoff Corporation | Region-based information compaction for digital images |
WO1999037097A1 (en) | 1998-01-16 | 1999-07-22 | Sarnoff Corporation | Layered mpeg encoder |
US6829301B1 (en) | 1998-01-16 | 2004-12-07 | Sarnoff Corporation | Enhanced MPEG information distribution apparatus and method |
US6560285B1 (en) | 1998-03-30 | 2003-05-06 | Sarnoff Corporation | Region-based information compaction as for digital images |
US7403565B2 (en) | 1998-03-30 | 2008-07-22 | Akikaze Technologies, Inc. | Region-based information compaction as for digital images |
WO2000064185A1 (en) | 1999-04-15 | 2000-10-26 | Sarnoff Corporation | Standard compression with dynamic range enhancement of image regions |
Non-Patent Citations (9)
Title |
---|
Chen, et al.: "Coding of Subregions Content-Based Scalable Video": IEEE Transactions on Circuits and Systems for Video Technology, 7(1), Feb. 1, 1997, pp. 256-260. |
Chen, T. et al.: "Coding of Subregions Content-Based Scalable Video": IEEE Transactions on Circuits and Systems for Video Technology, vol. 7, No. 1, Feb. 1997, pp. 256-260, XP000678899; see p. 258, paragraph IV.A; figure 4. |
EP Communication issued by the Examining Division on Apr. 20, 2004 from corresponding EP Application No. EP99902105.8. |
EP Communication issued by the Examining Division on May 30, 2003 from corresponding EP Application No. EP99902105.8. |
PCT International Preliminary Examination Report dated Jun. 6, 2000 from corresponding International Application No. PCT/US99/00352. |
PCT International Search Report dated Apr. 16, 1999 in corresponding International Application No. PCT/US99/00351. |
PCT International Search Report dated Apr. 16, 1999 in corresponding International Application No. PCT/US99/00352. |
PCT International Search Report, dated Apr. 16, 1999, for International Patent Application No. PCT/US99/00351, 3 pgs. |
U.S. Appl. No. 11/635,063, filed Dec. 7, 2006, Tinker et al. |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10225539B2 (en) | 2014-08-28 | 2019-03-05 | Sony Corporation | Transmitting apparatus, transmitting method, receiving apparatus, and receiving method |
US10791311B2 (en) | 2014-08-28 | 2020-09-29 | Sony Corporation | Transmitting apparatus, transmitting method, receiving apparatus, and receiving method |
US11272149B2 (en) | 2014-08-28 | 2022-03-08 | Sony Corporation | Transmitting apparatus, transmitting method, receiving apparatus, and receiving method |
US11367223B2 (en) * | 2017-04-10 | 2022-06-21 | Intel Corporation | Region based processing |
Also Published As
Publication number | Publication date |
---|---|
US20030202589A1 (en) | 2003-10-30 |
JP4554087B2 (en) | 2010-09-29 |
EP1169863A1 (en) | 2002-01-09 |
US6560285B1 (en) | 2003-05-06 |
WO2000064185A1 (en) | 2000-10-26 |
US7403565B2 (en) | 2008-07-22 |
JP2002542739A (en) | 2002-12-10 |
Similar Documents
Publication | Title |
---|---|
USRE43647E1 (en) | Region-based information compaction as for digital images |
EP1050167B1 (en) | Region-based information compaction for digital images |
US12028523B2 (en) | System and method for reshaping and adaptation of high dynamic range video data |
CN113170218B (en) | Video signal enhancement decoder with multi-level enhancement and scalable coding formats |
US9743100B2 (en) | Image processing apparatus and image processing method |
US9667957B2 (en) | High precision encoding and decoding of video images |
US9571838B2 (en) | Image processing apparatus and image processing method |
KR101223983B1 (en) | Bitrate reduction techniques for image transcoding |
IL305463A (en) | Image reshaping in video coding using rate distortion optimization |
US9838716B2 (en) | Image processing apparatus and image processing method |
US10542265B2 (en) | Self-adaptive prediction method for multi-layer codec |
CN112042202B (en) | Decoded picture buffer management and dynamic range adjustment |
KR102496345B1 (en) | Method and Apparatus for Color Correction During HDR to SDR Conversion |
WO2019203973A1 (en) | Method and device for encoding an image or video with optimized compression efficiency preserving image or video fidelity |
CN110087072B (en) | Image processing apparatus and method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| AS | Assignment | Owner name: AKIKAZE TECHNOLOGIES, LLC, DELAWARE. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; ASSIGNOR: SARNOFF CORPORATION; REEL/FRAME: 025536/0424. Effective date: 20070122 |
| FEPP | Fee payment procedure | Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
| CC | Certificate of correction | |
| FPAY | Fee payment | Year of fee payment: 8 |
| AS | Assignment | Owner name: INTELLECTUAL VENTURES ASSETS 145 LLC, DELAWARE. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; ASSIGNOR: AKIKAZE TECHNOLOGIES, LLC; REEL/FRAME: 050963/0739. Effective date: 20191029 |
| AS | Assignment | Owner name: DIGIMEDIA TECH, LLC, GEORGIA. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; ASSIGNOR: INTELLECTUAL VENTURES ASSETS 145 LLC; REEL/FRAME: 051408/0730. Effective date: 20191115 |
| FEPP | Fee payment procedure | Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
| LAPS | Lapse for failure to pay maintenance fees | Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |