WO2001001697A1 - Time-varying randomization for data synchronization and implicit information transmission - Google Patents

Time-varying randomization for data synchronization and implicit information transmission

Info

Publication number
WO2001001697A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
value
block
transforming
current block
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/US2000/014331
Other languages
English (en)
French (fr)
Inventor
Tetsujiro Kondo
Yasuhiro Fujimori
William Knox Carey
James J. Carrig
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sony Electronics Inc
Original Assignee
Sony Electronics Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Electronics Inc filed Critical Sony Electronics Inc
Priority to AU54435/00A priority Critical patent/AU5443500A/en
Priority to DE10084741T priority patent/DE10084741T1/de
Priority to JP2001506240A priority patent/JP2003503915A/ja
Publication of WO2001001697A1 publication Critical patent/WO2001001697A1/en
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00Details of colour television systems
    • H04N9/79Processing of colour television signals in connection with recording
    • H04N9/87Regeneration of colour television signals
    • H04N9/88Signal drop-out compensation
    • H04N9/888Signal drop-out compensation for signals recorded by pulse code modulation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/593Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial prediction techniques
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/85Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression
    • H04N19/89Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression involving methods or arrangements for detection of transmission errors at the decoder
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00Details of colour television systems
    • H04N9/79Processing of colour television signals in connection with recording
    • H04N9/80Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback
    • H04N9/804Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback involving pulse code modulation of the colour picture signal components
    • H04N9/8042Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback involving pulse code modulation of the colour picture signal components involving data reduction
    • H04N9/8047Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback involving pulse code modulation of the colour picture signal components involving data reduction using transform coding
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10TECHNICAL SUBJECTS COVERED BY FORMER USPC
    • Y10STECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10S707/00Data processing: database and file management or data structures
    • Y10S707/99941Database schema or data structure
    • Y10S707/99942Manipulating data structure, e.g. compression, compaction, compilation
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10TECHNICAL SUBJECTS COVERED BY FORMER USPC
    • Y10STECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10S707/00Data processing: database and file management or data structures
    • Y10S707/99951File or database maintenance
    • Y10S707/99952Coherency, e.g. same view to multiple users

Definitions

  • the present invention relates to providing a robust error recovery due to data losses incurred during transmission of signals. More particularly, the present invention relates to a method of time-varying randomization of data used in facilitating a robust error recovery.
  • HDTV high definition television
  • NTSC National Television Systems Committee
  • when a color television signal is converted for digital use, it is common that the luminance and chrominance signals are digitized using eight bits.
  • Digital transmission of NTSC color television signals may require a nominal bit rate of about two-hundred and sixteen megabits per second.
  • the transmission rate is greater for HDTV, which may nominally require about 1200 megabits per second.
  • Such high transmission rates may be well beyond the bandwidths supported by current wireless standards. Accordingly, an efficient compression methodology is required.
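  • As a rough check of the bit rates quoted above, the nominal 216 megabit per second figure follows from 8-bit sampling of the luminance and two chrominance components (a sketch; the 13.5 MHz and 6.75 MHz sampling rates are an assumption based on conventional 4:2:2 digitization and are not stated in this description):

        # Nominal bit rate for a digitized NTSC color signal (assumed 4:2:2 sampling).
        luma_rate_hz = 13.5e6        # luminance samples per second (assumption)
        chroma_rate_hz = 6.75e6      # samples per second for each color-difference signal
        bits_per_sample = 8

        samples_per_second = luma_rate_hz + 2 * chroma_rate_hz   # 27 Msamples/s
        bit_rate = samples_per_second * bits_per_sample          # 216 Mbit/s
        print(f"nominal NTSC rate: {bit_rate / 1e6:.0f} Mbit/s")
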
  • Compression methodologies also play an important role in mobile telecommunication applications.
  • packets of data are communicated between remote terminals in mobile telecommunication applications.
  • the limited number of transmission channels in mobile communications requires an effective compression methodology prior to the transmission of packets.
  • a number of compression techniques are available to facilitate high transmission rates.
  • ADRC Adaptive Dynamic Range Coding
  • DCT Discrete Cosine Transform
  • the present invention includes a system and method to encode data to maximize subsequent recovery of lost or damaged encoded data.
  • a current block of data is randomized in accordance with data from the current block and data from at least one temporally adjacent block of data.
  • randomized data is derandomized using the current block of data and data from at least one temporally adjacent block.
  • decoding of the current block and the adjacent block is delayed in order to facilitate recovery of lost or damaged compression parameters of encoded data.
  • Figure 1a illustrates an embodiment of the processes of signal encoding, transmission, and decoding.
  • Figures 1b and 1c illustrate embodiments of signal encoding, transmission, and decoding implemented as software executed by a processor.
  • Figures 1d and 1e illustrate embodiments of signal encoding, transmission, and decoding implemented as hardware logic.
  • Figure 2 illustrates one embodiment of a packet structure.
  • Figure 3 is a flow diagram illustrating one embodiment of the encoding process.
  • Figure 4 is a flow diagram illustrating one embodiment of the decoding process.
  • Figure 5 illustrates one embodiment of image-to-block mapping.
  • Figure 5a illustrates one embodiment of a shuffling pattern used in image-to-block mapping.
  • Figure 6 is an illustration of exemplary complementary and interlocking block structures.
  • Figures 7a, 7b, 7c, 7d illustrate one embodiment of shuffling patterns for Y blocks within a frame set.
  • Figure 8 is an illustration of one embodiment of cumulative DR distribution for Buffer 0.
  • Figure 8a is an illustration of one embodiment of a partial buffering process.
  • Figure 9 illustrates one embodiment of the intra buffer YUV block shuffling process.
  • Figure 10 illustrates one embodiment of the intra group VL-data shuffling process.
  • Figure 11 illustrates one embodiment of Q code concatenation within a 3-block group.
  • Figure 11a illustrates one embodiment of Q code concatenation for frame pairs including motion blocks.
  • Figure 12 illustrates one embodiment of pixel data error caused by a 1/6 burst error loss.
  • Figure 12a illustrates one embodiment of shuffling Q codes and distributing Q code bits.
  • Figure 12b illustrates one embodiment of pixel data error caused by a 1/6 burst error loss of redistributed Q codes.
  • Figure 12c illustrates one embodiment of pixel data error caused by a 1/6 burst error loss of reassigned Q codes.
  • Figure 12d is a flowchart of one embodiment for a time-varying randomization of Q codes.
  • Figure 13 illustrates one embodiment of MIN shuffling.
  • Figure 13a illustrates one embodiment of Motion Flag shuffling and of a fixed length data loss in one frame pair.
  • Figure 14 illustrates one embodiment of a modular shuffling.
  • Figure 14a illustrates one embodiment of a modular shuffling result and the fixed length data loss associated with the modular shuffling.
  • Figure 14b illustrates an alternate embodiment of a modular shuffling result and the fixed length data loss associated with the modular shuffling.
  • Figure 14c illustrates an alternate embodiment of a modular shuffling result and the fixed length data loss associated with the modular shuffling.
  • Figure 15 illustrates one embodiment of variable length data buffering in a frame set.
  • Figure 16 illustrates one embodiment of inter segment VL-data shuffling.
  • Figure 17 is a flowchart of one embodiment for a delayed-decision, time-varying derandomization of Q codes.
  • the present invention provides a system and method for the time-varying randomization of a signal stream to provide for a robust error recovery.
  • the present invention provides a system and method for time-varying derandomization of a randomized signal stream and alternately delayed-decoding of the signal stream.
  • ADRC Adaptive Dynamic Range Coding
  • DR dynamic range
  • MIN minimum value
  • the present invention is not limited to ADRC encoding and the particular compression constants generated; rather it will be apparent that the present invention is applicable to different compression technologies, different types of correlated data, including, but not limited to, sound data and the like, and different compression constants including, but not limited to, the maximum value (MAX) and central value (CEN) which may be used in ADRC processes.
  • MAX maximum value
  • CEN central value
  • ADRC Adaptive Dynamic Range Coding Scheme for Future HDTV Digital VTR
  • Kondo, Fujimori, Nakaya, Fourth International Workshop on HDTV and Beyond, September 4-6, 1991, Turin, Italy.
  • ADRC has been established as a feasible real-time technique for coding and compressing images in preparation for constant bit-rate transmission.
  • Signal 100 is a data stream input to Encoder 110.
  • Encoder 110 follows the Adaptive Dynamic Range Coding ("ADRC") compression algorithm and generates Packets 1, . . . N for transmission along Transmission Media 135.
  • Decoder 120 receives Packets 1, . . . N from Transmission Media 135 and generates Signal 130.
  • Signal 130 is a reconstruction of Signal 100.
  • ADRC Adaptive Dynamic Range Coding
  • Encoder 110 and Decoder 120 can be implemented in a variety of ways to perform the functionality described herein.
  • Encoder 110 and/or Decoder 120 may be embodied as software stored on media and executed by a general purpose or specifically configured computer system, typically including a central processing unit, memory and one or more input/output devices and co-processors, as shown in Figures 1b and 1c.
  • the Encoder 110 and/or Decoder 120 may be implemented as logic to perform the functionality described herein, as shown in Figures 1d and 1e.
  • Encoder 110 and/or Decoder 120 can be implemented as a combination of hardware, software or firmware.
  • Embodiments of the circuits for coding, arranging, and the time-varying randomization of a signal stream to provide for a robust error recovery are shown in Figures 1b and 1c.
  • the methods described herein may be implemented on a specially configured or general purpose processor system 170. Instructions are stored in memory 190 and accessed by processor 175 to perform many of the steps described herein.
  • Input 180 receives the input bitstream and forwards the data to processor 175.
  • Output 185 outputs the data.
  • the output may consist of the encoded data.
  • the output may consist of the decoded data, such as image data decoded according to the methods described, sufficient to drive an external device such as display 195.
  • Signal 100 may be a color video image comprising a sequence of video frames, each frame including information representative of an image in an interlaced video system.
  • Each frame is composed of two fields, wherein one field contains data of the even lines of the image and the other field contains data of the odd lines of the image.
  • the data includes pixel values that describe the color components of a corresponding location in the image.
  • the color components consist of the luminance signal Y, and color difference signals U, and V. It is readily apparent the process of the present invention can be applied to signals other than interlaced video signals. Furthermore, it is apparent that the present invention is not limited to implementations in the Y, U, V color space, but can be applied to images represented in other color spaces.
  • Signal 100 may be, for example, two-dimensional static images, hologram images, three-dimensional static images, video, two- dimensional moving images, three dimensional moving images, monaural sound, or N- channel sound.
  • Encoder 110 divides the Y, U, and V signals and processes each group of signals independently in accordance with the ADRC algorithm.
  • the following description, for purposes of simplifying the discussion, describes the processing of the Y signal; however, the encoding steps may be replicated for the U and V signals.
  • Encoder 110 groups Y signals across two subsequent frames, referred to herein as a frame pair, of Signal 100 into three-dimensional ("3D") blocks.
  • 3D three dimensional blocks
  • a 3D block is generated from grouping two 2D blocks from the same localized area across a given frame pair, wherein a two-dimensional ("2D") block is created by grouping localized pixels within a frame or a field. It is contemplated that the process described herein can be applied to different block structures. The grouping of signals will be further described in the image-to-block mapping section below.
  • Encoder 110 calculates whether there is a change in pixel values between the 2D blocks forming the 3D block.
  • a Motion Flag is set if there are substantial changes in values. As is known in the art, use of a Motion Flag allows Encoder 110 to reduce the number of quantization codes when there is localized image repetition within each frame pair.
  • DR = MAX - MIN.
  • the encoder may also determine a central value (CEN) that has a value between MAX and MIN.
  • Encoder 110 encodes signals on a frame by frame basis for a stream of frames representing a sequence of video frames. In another embodiment, Encoder 110 encodes signals on a field by field basis for a stream of fields representing a sequence of video fields. Accordingly, Motion Flags are not used and 2D blocks may be used to calculate the MIN, MAX, CEN and DR values.
  • Encoder 110 references the calculated DR against a threshold table of DR threshold values and corresponding Qbit values to determine the number of quantization bits ("Qbits") used to encode pixels within the block corresponding to the DR. Encoding of a pixel results in a quantization code ("Q code").
  • the Q codes are the relevant compressed image data used for storage or transmission purposes.
  • the Qbit selection is derived from the DR of a 3D block. Accordingly, all pixels within a given 3D block are encoded using the same Qbit, resulting in a 3D encoded block.
  • the collection of Q codes, MIN, Motion Flag, and DR for a 3D encoded block is referred to as a 3D ADRC block.
  • 2D blocks are encoded and the collection of Q codes, MIN, and DR for a given 2D block results in 2D ADRC blocks.
  • the MAX value and CEN value may be used in place of the MIN value.
  • the threshold table consists of a row of DR threshold values.
  • a Qbit corresponds to the number of quantization bits used to encode a range of DR values between two adjacent DRs within a row of the threshold table.
  • the threshold table includes multiple rows and selection of a row depends on the desired transmission rate. Each row in the threshold table is identified by a threshold index.
  • a detailed description of one embodiment of threshold selection is described below in the discussion of partial buffering.
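  • A minimal sketch of the DR-driven quantization described above (the threshold values, the quantization rule, and the helper names are illustrative assumptions, not the threshold table of this embodiment):

        # ADRC-style encoding of one block: the DR selects a Qbit value from a
        # threshold set, and every pixel in the block is quantized with that many bits.
        def encode_block(pixels, thresholds=(230, 180, 100, 50)):   # illustrative L4..L1
            mn, mx = min(pixels), max(pixels)
            dr = mx - mn
            # Qbit value = number of thresholds the DR meets or exceeds (0..4).
            qbits = sum(dr >= t for t in thresholds)
            if qbits == 0:
                return dr, mn, qbits, []          # zero block: no Q codes are produced
            q_codes = [((p - mn) * (1 << qbits)) // (dr + 1) for p in pixels]
            return dr, mn, qbits, q_codes

        dr, mn, qbits, q_codes = encode_block([100, 120, 90, 200, 60, 75, 130, 110])
        print(dr, mn, qbits, q_codes)             # DR=140, MIN=60, Qbit=2
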
  • a further description of an example of ADRC encoding and buffering is disclosed in US Patent No. 4,722,003 entitled "High Efficiency Coding Apparatus" and US Patent No.
  • VL-data variable length data
  • FL-data fixed length data
  • block attribute describes a parameter associated with a component of a signal element, wherein a signal element includes multiple components.
  • An advantage of not including the Qbit code value in the FL-data is that no additional bits need be transmitted for each ADRC block.
  • a disadvantage of not including the Qbit value is that, if the DR is lost or damaged during transmission or storage, the Q codes cannot be easily recovered. The ADRC decoder must determine how many bits were used to quantize the block without relying on any DR information.
  • the Qbit value may be sent implicitly by time-varying randomization of the VL-data.
  • the Qbit value of a current block of data, together with the Qbit values of a number of previous blocks, may be used as a randomizing or seed value for a pseudorandom number generator (PNG).
  • PNG pseudorandom number generator
  • the three previous Qbit values may be used.
  • any number of temporally adjacent values (either prior or subsequent) may be used to generate the seed value. For purposes of discussion herein, temporally adjacent may be construed to include any prior or subsequent block of data.
  • each successive Qbit value is concatenated to the right of the current seed value.
  • the PNG creates a statistically distinct pseudorandom number sequence for a unique seed value and creates the same statistically distinct sequence for each application of the same seed value.
  • the pseudorandom number sequence may then be used to transform the VL-data.
  • the FL-data may be transformed or both the VL-data and FL-data may be transformed.
  • the transformation T of the VL-data is achieved by applying a bitwise XOR (exclusive OR) function to the pseudorandom number sequence (y) and the VL-data (x).
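  • A minimal sketch of the XOR transformation described above (Python's random module stands in for the PNG; the helper names are assumptions):

        import random

        def transform(vl_data: bytes, seed: int) -> bytes:
            # Generate the pseudorandom sequence y from the seed and XOR it with x.
            png = random.Random(seed)
            y = bytes(png.randrange(256) for _ in vl_data)
            return bytes(a ^ b for a, b in zip(vl_data, y))

        x = bytes([0b101, 0b011, 0b110, 0b000])              # example VL-data (Q codes)
        randomized = transform(x, seed=0b00000010)
        restored = transform(randomized, seed=0b00000010)    # same seed, same y: XOR undoes itself
        assert restored == x
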
  • a variety of sets of transformations may be used to generate the statistically distinct sequences.
  • a table of pre-defined sequences may be used.
  • a similar process may be used to decode the Qbit value from the DR of the current block. If the DR arrives undamaged, the Qbit value may be determined by using the threshold table as was used for the Q code encoding. The DR is used to look-up the Qbit value in the table and the Qbit value is then used as a seed value to the PNG to produce the pseudorandom number sequence. The decoder transforms the randomized VL-data by applying a bitwise XOR function to the pseudorandom number sequence and the randomized VL-data to produce the original, non-randomized VL-data. In this embodiment, because the same PNG and seed value are used, the same pseudorandom number sequence is produced.
  • the decoder attempts to decode the block with all possible Qbit values and associated possible seed values.
  • a local correlation metric is applied to each candidate decoding, and a confidence metric is computed for the block.
  • the decoder implements a delayed-decision decoder that delays the dequantization by four blocks.
  • the decoder may conclude that the decoding of the oldest block was incorrect.
  • the decoder may then return to the candidate seed value used for the oldest block and try the next-most-likely decoding of the oldest block.
  • the decoder may then re-derandomize the three most recent blocks using a second guess at a seed value. This process may continue until the decoder produces a sequence of four decoded blocks in which the most recent block's confidence metric is large.
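  • A heavily simplified sketch of candidate ranking for the delayed-decision derandomization described above (the correlation measure, the confidence margin, and the helper names are assumptions for illustration only):

        import random

        def derandomize(data: bytes, seed: int) -> bytes:
            png = random.Random(seed)
            return bytes(b ^ png.randrange(256) for b in data)

        def local_correlation(pixels: bytes) -> int:
            # Proxy for spatial correlation: small neighbor differences score higher.
            return -sum(abs(a - b) for a, b in zip(pixels, pixels[1:]))

        def rank_candidates(randomized: bytes, prev_seed: int):
            # Try each possible non-zero Qbit value, form the candidate seed as the
            # encoder would, derandomize, and score the result.
            scored = []
            for qbits in (1, 2, 3, 4):
                seed = ((prev_seed << 2) | (qbits - 1)) & 0xFF
                scored.append((local_correlation(derandomize(randomized, seed)), qbits, seed))
            scored.sort(reverse=True)
            confidence = scored[0][0] - scored[1][0]   # margin over the runner-up
            # A delayed-decision decoder keeps these ranked candidates for several
            # blocks and backtracks to the next-most-likely seed when the most
            # recent block's confidence is small.
            return scored, confidence
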
  • the Qbit value may be implicitly transmitted by means of the time-varying randomization.
  • any data may be implicitly transmitted.
  • the Motion Flag or a combination of the Qbit value and the Motion Flag may be used to generate the pseudorandom number sequence and, thus, be implicitly transmitted.
  • One embodiment of a circuit for coding, arranging, and the time-varying randomization of a signal stream to provide for a robust error recovery is shown in Figure 1d.
  • An input signal is received and time-varying VL-data randomization logic 144 generates randomized Q codes from the encoded and shuffled data.
  • the output from the time-varying VL-data randomization logic 144 may be further encoded as discussed herein.
  • Figure 1e illustrates an embodiment of a circuit for recovering lost or damaged DR values.
  • An input signal is received and time-varying VL-data derandomization logic 150 derandomizes the Q codes from the input bitstream and recovers lost or damaged dynamic range constants.
  • the output signal from the time-varying VL-data derandomization logic 150 may be further decoded and deshuffled as described herein.
  • Frames, block attributes, and VL-data describe a variety of components within a video signal.
  • the boundaries, location, and quantity of these components are dependent on the transmission and compression properties of a video signal.
  • these components are varied, shuffled, and randomized within a bitstream of the video signal to ensure a robust error recovery during transmission losses.
  • a data set may include a partition of data of a video or other type of data signal.
  • a frame set may be a type of data set that includes one or more consecutive frames.
  • a segment may include a memory with the capacity to store a one-sixth division of the Q codes and block attributes included in a frame set.
  • a buffer may include a memory with the capacity to store a one-sixtieth division of the Q codes and block attributes included in a frame set.
  • the shuffling of data may be performed by interchanging components within segments and/or buffers. Subsequently, the data stored in a segment may be used to generate packets of data for transmission. Thus, in the following description, if a segment is lost all the packets generated from the segment are lost during transmission. Similarly, if a fraction of a segment is lost then a corresponding number of packets generated from the segment are lost during transmission.
  • Packet Structure 200 is used for the transmission of the data across point-to-point connections as well as networks.
  • Packet Structure 200 is generated by Encoder 110 and is transmitted across Transmission Media 135.
  • Packet Structure 200 comprises five bytes of header information, eight DR bits, eight MIN bits, a Motion Flag bit, a five bit threshold index, and 354 bits of Q codes.
  • the MIN bits may be replaced with CEN bits.
  • the packet structure described herein is illustrative and may typically be implemented for transmission in an asynchronous transfer mode ("ATM") network.
  • ATM asynchronous transfer mode
  • the present invention is not limited to the packet structure described and a variety of packet structures that are used in a variety of networks can be utilized.
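  • The bit budget of Packet Structure 200 can be tallied directly from the field widths listed above (a sketch; only the widths come from this description):

        # Field widths, in bits, of Packet Structure 200.
        PACKET_FIELDS = {
            "header":          5 * 8,   # five bytes of header information
            "DR":              8,
            "MIN":             8,       # may instead carry CEN bits in one embodiment
            "motion_flag":     1,
            "threshold_index": 5,
            "q_codes":         354,
        }

        total_bits = sum(PACKET_FIELDS.values())
        print(total_bits, "bits,", total_bits // 8, "bytes")   # 416 bits = 52 bytes
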
  • Transmission Media (e.g., media) 135 is not assumed to provide error-free transmission and therefore packets may be lost or damaged.
  • conventional methods exist for detecting such loss or damage, but substantial image degradation will generally occur.
  • the system and methods of the present invention therefore teach source coding to provide robust recovery from such loss or damage. It is assumed throughout the following discussion that a burst loss, that is the loss of several consecutive packets, is the most probable form of error, but some random packet losses might also occur.
  • the system and methods of the present invention provide multiple level shuffling.
  • the FL-data included in a transmitted packet comprise data from spatially and temporally disjointed locations of an image.
  • Shuffling data ensures that any burst error is scattered and facilitates error recovery.
  • the shuffling allows recovery of block attributes and Qbit values.
  • Figure 3 is a flow diagram illustrating one embodiment of the encoding process performed by Encoder 110. Figure 3 further describes an overview of the shuffling process used to ensure against image degradation and to facilitate a robust error recovery.
  • an input frame set, also referred to as a display component, may be decimated to reduce the transmission requirements.
  • the Y signal is decimated horizontally to three-quarters of its original width and the U and V signals are each decimated to one-half of their original height and one-half of their original width.
  • the discussion will describe the processing of Y signals; however, the process is applicable to the U and V signals.
  • the two Y frame images are mapped to 3D blocks.
  • 3D blocks are shuffled.
  • ADRC buffering and encoding is used.
  • encoded Y, U and V blocks are shuffled within a buffer.
  • the VL-data for a group of encoded 3D blocks and their corresponding block attributes are shuffled.
  • the FL-data is shuffled across different segments.
  • post-amble filling is performed in which variable space at the end of a buffer is filled with a predetermined bitstream.
  • the VL-data is shuffled across different segments.
  • the following shuffling description provides a method for manipulation of pixel data before and after encoding.
  • independent data values are shuffled/deshuffled via hardware.
  • the hardware maps the address of block values to different addresses to implement the shuffling/deshuffling process.
  • address mapping may not be possible for data dependent values because shuffling may follow the processing of data.
  • the intra group VL-data shuffling described below includes the data dependent values.
  • the following shuffling description occurs on discrete sets of data.
  • a signal may be defined based on multiple data levels ranging from bits, to pixels, and to frames. Shuffling may be possible for each level defined in the signal and across different data levels of the signal.
  • Figure 4 is a flow diagram illustrating one embodiment of decoding process performed by Decoder 120.
  • the conversion and de-shuffling processes may be the inverse of the processes represented in Figure 3.
  • time-varying derandomization of Q codes and delayed-decision decoding may be performed within step 435.
  • a single frame typically may comprise 5280 2D blocks wherein each 2D block comprises 64 pixels.
  • a frame pair may comprise 5280 3D blocks as a 2D block from a first frame and a 2D block from a subsequent frame are collected to form a 3D block.
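  • The block counts quoted above are consistent with, for example, an 8 x 8 block size and a 704 x 480 frame (both values are assumptions used only to illustrate the arithmetic):

        block_w = block_h = 8            # a 2D block of 64 pixels, assumed to be 8 x 8
        frame_w, frame_h = 704, 480      # illustrative frame dimensions

        blocks_per_frame = (frame_w // block_w) * (frame_h // block_h)
        print(blocks_per_frame)          # 5280 2D blocks per frame
        # Pairing each 2D block with the co-located 2D block of the next frame
        # yields 5280 3D blocks per frame pair.
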
  • Image-to-block mapping is performed for the purpose of dividing a frame or frame set of data into 2D blocks or 3D blocks respectively. Moreover, image-to-block mapping includes using a complementary and/or interlocking pattern to divide pixels in a frame to facilitate robust error recovery during transmission losses. However, to improve the probability that a given DR value is not too large, each 2D block is constructed from pixels in a localized area.
  • Figure 5 illustrates one embodiment of an image-to-block mapping process for an exemplary 16-pixel section of an image.
  • Image 500 comprises 16 pixels forming a localized area of a single frame.
  • Each pixel in Image 500 is represented by an intensity value. For example, the pixel in the top left-hand side of the image has an intensity value equal to 100 whereas the pixel in the bottom right hand side of the image has an intensity value of 10.
  • pixels from different areas of Image 500 are used to create 2D Blocks 510, 520, 530, and 540.
  • 2D Blocks 510, 520, 530, and 540 are encoded, shuffled (as illustrated below), and transmitted. Subsequent to transmission, 2D Blocks 510, 520, 530, and 540 are recombined and used to form Image 550.
  • Image 550 is a reconstruction of Image 500.
  • an interlocking complementary block structure one embodiment of which is illustrated in Figure 5, is used to reconstruct Image 550.
  • the pixel selection used to create 2D Blocks 510, 520, 530, and 540 ensures that a complementary and/or interlocking pattern is used to recombine the blocks when Image 550 is reconstructed. Accordingly, when a particular 2D block's attribute is lost during transmission, contiguous sections of Image 550 are not distorted during reconstruction.
  • the DR of 2D Block 540 is lost during data transmission.
  • the decoder utilizes multiple neighboring pixels of neighboring blocks through which a DR can be recovered for the missing DR of 2D Block 540.
  • the combination of complementary patterns and shifting increases the number of neighboring pixels, preferably maximizing the number of neighboring pixels that originate from other blocks, significantly improving DR and MIN recovery.
  • Figure 5a illustrates one embodiment of a shuffling pattern used to form 2D blocks in one embodiment of the image-to-block mapping process.
  • An image is decomposed into two sub-images, Sub-Image 560 and Sub-Image 570, based on alternating pixels. Rectangular shapes are formed in Sub-Image 560 to delineate the 2D block boundaries.
  • the 2D blocks are numbered 0, 2, 4, 7, 9, 11, 12, 14, 16, 19, 21, and 23.
  • Tile 565 illustrates the pixel distribution for a 2D block within Sub-Image 560.
  • In Sub-Image 570, the 2D block assignment is shifted by eight pixels horizontally and four pixels vertically. This results in a wrap-around 2D block assignment and overlap when Sub-Images 560 and 570 are combined during reconstruction.
  • the 2D blocks are numbered 1, 3, 5, 6, 8, 10, 13, 15, 17, 18, 20, and 22.
  • Tile 575 illustrates the pixel distribution for a 2D block within Sub-Image 570.
  • Tile 575 is the complementary structure of Tile 565. Accordingly, when a particular block's attribute is lost during transmission, neighboring pixels through which a block attribute can be recovered for the missing 2D block exist. Additionally, an overlapping 2D block of pixels with a similar set of block attributes exists. Therefore, during reconstruction of the image the decoder has multiple neighboring pixels from adjacent 2D blocks through which a lost block attribute can be recovered.
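  • A simplified sketch of the shifted, wrap-around block assignment described for Sub-Image 570 (the block dimensions and grid layout are assumptions; only the eight-pixel horizontal and four-pixel vertical shift comes from the description above):

        def block_id(x, y, width, height, block_w=16, block_h=8, shift_x=0, shift_y=0):
            # Map a pixel to a block index on a grid that may be shifted with wrap-around.
            xs = (x + shift_x) % width
            ys = (y + shift_y) % height
            return (ys // block_h) * (width // block_w) + (xs // block_w)

        # Sub-Image 560 uses the unshifted grid; Sub-Image 570 uses the same grid
        # shifted by 8 pixels horizontally and 4 pixels vertically, so its blocks
        # interlock with those of Sub-Image 560.
        pixel = (37, 21)
        print(block_id(*pixel, width=64, height=32),
              block_id(*pixel, width=64, height=32, shift_x=8, shift_y=4))
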
  • Figure 6 illustrates other complementary and interlocking 2D block structures. Other structures may also be utilized. Similar to Figure 5, these 2D block structures illustrated in Figure 6, ensure surrounding 2D blocks are present despite transmission losses for a given 2D block.
  • Patterns 610a, 610b, and 610d use horizontal and/or vertical shifting during the mapping of pixels to subsequent 2D blocks.
  • Horizontal shifting describes shifting the tile structure in the horizontal direction a predetermined number of pixels prior to beginning a new 2D block boundary.
  • Vertical shifting describes shifting the tile structure in the vertical direction a predetermined number of pixels prior to beginning a new 2D block boundary. In application, only horizontal shifting may be applied, only vertical shifting may be applied, or a combination of horizontal and vertical shifting may be applied.
  • Pattern 610a illustrates a spiral pattern used for image-to-block mapping.
  • the spiral pattern follows a horizontal shifting to create subsequent 2D blocks during the image-to-block mapping process.
  • Patterns 610b and 610d illustrate complementary patterns wherein pixel selection is moved by a horizontal and vertical shifting to create subsequent 2D blocks during the image-to-block mapping process.
  • Patterns 610b and 610d illustrate alternating offsets in pixel selection between 2D blocks.
  • Pattern 610c illustrates using an irregular sampling of pixels to create a 2D block for image-to-block mapping. Accordingly, the image-to-block mapping follows any mapping structure provided a pixel is mapped to a 2D block only once.
  • Figure 5a and Figure 6 describe image-to-block mapping for 2D block generation. It is readily apparent that the processes are applicable to 3D blocks. As described above, 3D block generation follows the same boundary definition as a 2D block, however the boundary division extends across a subsequent frame resulting in a 3D block.
  • a 3D block is created by collecting the pixels used to define a 2D block in a first frame together with pixels from a 2D block in a subsequent frame. In one embodiment, both pixels in the 2D block from the first frame and the 2D block from the subsequent frame are from the exact same location.
  • Intra Frame Set Block Shuffling
  • the pixel values for a given image are closely related for a localized area. However, in another area of the same image the pixel values may be significantly different. Thus, subsequent to encoding, the DR and MIN values for spatially close 2D or 3D blocks in a section of an image have similar values, whereas the DR and MIN values for blocks in another section of the image may be significantly different. Accordingly, when buffers are sequentially filled with encoded data from spatially close 2D or 3D blocks of an image, a disproportionate usage of buffer space occurs. Intra frame set block shuffling occurs prior to ADRC encoding and includes shuffling the 2D or 3D blocks generated during the image-to-block mapping process. This shuffling process ensures an equalized buffer usage during a subsequent ADRC encoding.
  • Figures 7a - 7d illustrate one embodiment of shuffling 3D Y-blocks.
  • the 3D Y-blocks in Figures 7a-7d are generated from applying the image-to-block mapping process described above to a frame pair containing only Y signals.
  • the 3D Y-blocks are shuffled to ensure that the buffers used to store the encoded frame pair contain 3D Y-blocks from different parts of the frame pair. This leads to similar DR distribution during ADRC encoding. A similar DR distribution within each buffer leads to consistent buffer utilization.
  • Figure 7a -7d also illustrate 3D block shuffling using physically disjointed 3D blocks to ensure that transmission loss of consecutive packets results in damaged block attributes scattered across the image, as opposed to a localized area of the image.
  • the block shuffling is designed to widely distribute block attributes in the event that small, medium, or large burst packet losses occur.
  • a small burst loss is one in which a few packets are lost;
  • a medium loss is one in which the amount of data that can be held in one buffer is lost;
  • a large loss is one in which the amount of data that can be held in one segment is lost.
  • each group of three adjacent blocks is selected from relatively remote parts of the image. Accordingly, during the subsequent intra group VL-data shuffling (to be detailed later), each group is formed from 3D blocks that have differing statistical characteristics. Distributed block attribute losses allow for a robust error recovery because a damaged 3D block is surrounded by undamaged 3D blocks and the undamaged 3D blocks can be used to recover lost data.
  • Figure 7a illustrates a frame pair containing 66 3D Y-blocks in the horizontal direction and 60 3D Y-blocks in the vertical direction.
  • the 3D Y-blocks are allocated into Segments 0 - 5.
  • the 3D Y-block assignment follows a two by three column section such that one 3D Y-block from each section is associated with a segment.
  • FL-data shuffling is performed to further disperse block attribute losses.
  • Figure 7b illustrates the scanning order of 3D Y-blocks numbered "0" used to enter into Segment 0.
  • Each "0" 3D Y-block of Figure 7a is numbered 0, 1, 2, 3, . . . 659 to illustrate their location in the stream that is inputted into Segment 0.
  • Using the block numbering to allocate segment assignment, the remaining 3D Y-blocks are inputted into Segments 1 - 5, thus resulting in a frame pair shuffled across multiple segments.
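  • One way to realize the two-by-three section assignment described above (the exact mapping of positions within a section to segment numbers is an assumption; the description only requires that each section contribute one 3D Y-block to every segment):

        def segment_of(block_row: int, block_col: int) -> int:
            # Each 2-row x 3-column section of 3D Y-blocks contributes exactly one
            # block to each of Segments 0 - 5.
            return (block_row % 2) * 3 + (block_col % 3)

        # 60 block rows and 66 block columns per frame pair.
        counts = [0] * 6
        for r in range(60):
            for c in range(66):
                counts[segment_of(r, c)] += 1
        print(counts)   # every segment receives 660 3D Y-blocks
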
  • Figure 7c illustrates the 660 3D Y-blocks comprising one segment.
  • the 3D Y-blocks numbered 0 - 65 are inputted into Buffer 0.
  • the 3D Y-blocks adjacent to the numbered 3D Y-blocks are inputted into Buffer 1.
  • the process is repeated to fill Buffers 2 - 9. Accordingly, damage to a buffer during data transmission results in missing 3D Y-blocks from different parts of the image.
  • Figure 7d illustrates the final ordering of the "0" 3D Y-blocks across a buffer.
  • 3D Y-blocks 0, 1, and 2 occupy the first three positions in the buffer. The process is repeated for the rest of the buffer. Accordingly, loss of three 3D Y-blocks during data transmission results in missing 3D Y-blocks from distant locations within the image.
  • Figures 7a-d illustrate one embodiment of 3D block distributions for 3D Y-blocks of a frame set.
  • 3D block distributions for 3D U- blocks and 3D V-blocks are available.
  • the 3D U-blocks are generated from applying the image-to-block mapping process, described above, to a frame set containing only U signals.
  • 3D V-blocks are generated from applying the image-to-block mapping process to a frame set containing only V signals. Both the 3D U-block and the 3D V-block follow the 3D Y-block distribution described above. However, as previously described, the number of 3D U-blocks and 3D V-blocks each have a 1:6 proportion to 3D Y-blocks.
  • Figures 7a-d are used to illustrate one embodiment of intra frame set block shuffling for a Y signal such that burst error of up to 1/6 of the packets lost during transmission is tolerated and further ensures equalized buffer use. It will be appreciated by one skilled in the art that segment, buffer, and ADRC block assignments can be varied to ensure against 1/n burst error loss or to modify buffer utilization.
  • the ADRC encoding and buffering processes occur in step four.
  • 2D or 3D blocks generated during the image-to-block mapping process are encoded resulting in 2D or 3D ADRC blocks.
  • a 3D ADRC block contains Q codes, a MIN value, a Motion Flag, and a DR.
  • a 2D ADRC block contains Q codes, a MIN, and a DR.
  • a 2D ADRC block does not include a Motion Flag because the encoding is performed on a single frame or a single field.
  • partial buffering describes an innovative method for determining the encoding bits used in ADRC encoding.
  • partial buffering describes a method of selecting threshold values from a threshold table designed to provide a constant transmission rate between remote terminals while restricting error propagation.
  • the threshold table is further designed to provide maximum buffer utilization.
  • a buffer is a memory that stores a one-sixtieth division of encoded data from a given frame set. The threshold values are used to determine the number of Qbits used to encode the pixels in 2D or 3D blocks generated from the image-to-block mapping process previously described.
  • the threshold table includes rows of threshold values, also referred to as a threshold set, and each row in the threshold table is indexed by a threshold index.
  • the threshold table is organized with threshold sets that generate a higher number of Q code bits located in the upper rows of the threshold table. Accordingly, for a given buffer having a predetermined number of bits available, Encoder 110 moves down the threshold table until a threshold set that generates less than a predetermined number of bits is encountered. The appropriate threshold values are used to encode the pixel data in the buffer.
  • Figure 8 illustrates one embodiment of selected threshold values and the DR distribution for Buffer 0.
  • the vertical axis of Figure 8 includes the cumulative DR distribution.
  • the value "b" is equal to the number of 3D or 2D blocks whose DR is greater than or equal to L3.
  • the horizontal axis includes the possible DR values. In one embodiment, DR values range from 0 to 255.
  • Threshold values L4, L3, L2, and L1 describe a threshold set used to determine the encoding of a buffer.
  • all blocks stored in Buffer 0 are encoded using threshold values L4, L3, L2, and L1. Accordingly, blocks with DR values greater than L4 have their pixel values encoded using four bits. Similarly, all pixels belonging to blocks with DR values between L3 and L4 are encoded using three bits. All pixels belonging to blocks with DR values between L2 and L3 are encoded using two bits. All pixels belonging to blocks with DR values between L1 and L2 are encoded using one bit. Finally, all pixels belonging to blocks with DR values smaller than L1 are encoded using zero bits. L4, L3, L2, and L1 are selected such that the total number of bits used to encode all the blocks in Buffer 0 is as close as possible to a limit of 31,152 bits without exceeding that limit.
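  • A minimal sketch of the partial buffering selection rule (the threshold table contents are illustrative; only the 31,152-bit limit and the walk down the table from higher-rate to lower-rate threshold sets come from the description above):

        BUFFER_BIT_LIMIT = 31_152

        def bits_for_threshold_set(block_drs, thresholds, pixels_per_block=64):
            # Each block's Qbit value is the number of thresholds its DR meets or exceeds.
            return sum(sum(dr >= t for t in thresholds) * pixels_per_block
                       for dr in block_drs)

        def select_threshold_set(block_drs, threshold_table):
            # Walk down the table (highest-rate sets first) until the buffer fits.
            for index, thresholds in enumerate(threshold_table):
                if bits_for_threshold_set(block_drs, thresholds) <= BUFFER_BIT_LIMIT:
                    return index, thresholds
            return len(threshold_table) - 1, threshold_table[-1]
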
  • Figure 8a illustrates the use of partial buffering in one embodiment.
  • Frame 800 is encoded and stored in Buffers 0 - 59.
  • if a transmission error inhibits data recovery, the decoding process is stalled for Frame 800 until error recovery is performed on the lost data.
  • partial buffering restricts the error propagation within a buffer, thus allowing decoding of the remaining buffers.
  • a transmission error inhibits the Qbit and Motion Flag recovery for Block 80 in Buffer 0.
  • Partial buffering limits the error propagation to the remaining blocks within Buffer 0. Error propagation is limited to Buffer 0 because the end of Buffer 0 and the beginning of Buffer 1 are known due to the fixed buffer length. Accordingly, Decoder 120 can begin processing of blocks within Buffer 1 without delay.
  • the use of different threshold sets to encode different buffers allows Encoder 110 to maximize/control the number of Q code bits included in a given buffer, thus allowing a higher compression ratio. Furthermore, the partial buffering process allows for a constant transmission rate because Buffers 0 - 59 consist of a fixed length.
  • a buffer's variable space is not completely filled with Q code bits because a limited number of threshold sets exist. Accordingly, the remaining bits in the fixed length buffer are filled with a predetermined bitstream pattern referred to as a post-amble. As will be described subsequently, the post-amble enables bidirectional data recovery because the post-amble delineates the end of the VL-data prior to the end of the buffer.
  • Y, U, and V signals each have unique statistical properties.
  • the Y, U, and V signals are multiplexed within a buffer. Accordingly, transmission loss does not have a substantial effect on a specific signal.
  • FIG. 9 illustrates one embodiment of the intra buffer YUV block shuffling process in which YUV ADRC blocks are derived from the Y, U, and V signals respectively.
  • Buffer 900 illustrates the ADRC block assignments after intra frame set block shuffling.
  • Buffer 900 comprises 66 Y-ADRC blocks followed by 11 U-ADRC blocks which are in turn followed by 11 V-ADRC blocks.
  • Buffer 910 shows the YUV ADRC block organization after intra buffer YUV block shuffling. As illustrated, three Y-ADRC blocks are followed by a U-ADRC block or three Y-ADRC blocks are followed by a V-ADRC block.
  • Intra buffer YUV block shuffling reduces similarity between adjacent block's bitstreams within the buffer.
  • Alternative embodiments of intra buffer YUV block shuffling with different signal ratios (i.e., Y:U:V ratios) or other color spaces are possible depending on the initial image format.
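  • A sketch of the intra buffer YUV block shuffling of Figure 9 (the repeating pattern of three Y-ADRC blocks followed by a U- or V-ADRC block follows the description above; the helper names are assumptions):

        def shuffle_yuv(y_blocks, u_blocks, v_blocks):
            # Buffer 900: 66 Y-ADRC blocks, then 11 U-ADRC blocks, then 11 V-ADRC blocks.
            # Buffer 910: the repeating pattern Y Y Y U Y Y Y V.
            assert len(y_blocks) == 6 * len(u_blocks) == 6 * len(v_blocks)
            out = []
            for i in range(len(u_blocks)):
                out.extend(y_blocks[6 * i: 6 * i + 3]); out.append(u_blocks[i])
                out.extend(y_blocks[6 * i + 3: 6 * i + 6]); out.append(v_blocks[i])
            return out

        buffer_910 = shuffle_yuv([f"Y{i}" for i in range(66)],
                                 [f"U{i}" for i in range(11)],
                                 [f"V{i}" for i in range(11)])
        print(buffer_910[:8])   # ['Y0', 'Y1', 'Y2', 'U0', 'Y3', 'Y4', 'Y5', 'V0']
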
  • Intra group VL-data shuffling comprises three processing steps.
  • the three processing steps include Q code concatenation, Q code reassignment, and time-varying randomizing of Q codes.
  • Figure 10 illustrates one embodiment of intra group VL-data shuffling wherein three processing steps are applied consecutively to Q codes stored in a buffer.
  • in one embodiment, only the time-varying randomization processing step may be applied to perform intra group VL-data shuffling.
  • Each processing step independently assists in the error recovery of data lost during transmission. Accordingly, each processing step is described independently.
  • Q code concatenation ensures that groups of ADRC blocks are decoded together. Group decoding facilitates error recovery because additional information is available from neighboring blocks during the data recovery process detailed below.
  • Q code concatenation may be applied independently to each group of three ADRC blocks stored in a buffer.
  • a group includes ADRC block(s) from different buffers. The concatenation of Q codes across three ADRC blocks is described as generating one concatenated ADRC tile.
  • Figure 11 and Figure 11a illustrate one embodiment of generating concatenated ADRC tiles.
  • Figure 11 illustrates one embodiment of generating a concatenated ADRC tile from 2D ADRC blocks.
  • the concatenation is performed for each Q code (q0 - q63) included in 2D ADRC Blocks 0, 1, and 2, resulting in the sixty-four Q codes of Concatenated ADRC Tile A.
  • the first Q code q0,0 (0th quantized value) of 2D ADRC Block 0 is concatenated to the first Q code q0,1 of 2D ADRC Block 1.
  • the two concatenated Q codes are in turn concatenated to the first Q code q0,2 of 2D ADRC Block 2, thus resulting in Q0 of Concatenated ADRC Tile A.
  • the process is repeated until Q63 is generated.
  • the generation of Qi in Concatenated ADRC Tile A is described by the equation Qi = [qi,0, qi,1, qi,2], i = 0, 1, . . . , 63, where the brackets denote bit-wise concatenation.
  • associated with each Qi in Concatenated ADRC Tile A there is a corresponding number Ni that represents the total number of bits concatenated to generate a single Qi.
  • Figure 11a illustrates one embodiment of generating a concatenated ADRC tile from frame pairs including motion blocks.
  • a motion block is a 3D ADRC block with a set Motion Flag.
  • the Motion Flag may be set when a predetermined number of pixels within the two 2D block structures created by the image-to-block mapping process described earlier change in value between a first frame and a subsequent frame.
  • the Motion Flag may be set when the maximum value of each pixel change between the 2D block of a first frame and a subsequent frame exceeds a predetermined value.
  • a non-motion (i.e., stationary) block includes a 3D ADRC block with a Motion Flag that is not set.
  • the Motion Flag remains un-set when a predetermined number of pixels within the two 2D blocks of a first frame and a subsequent frame do not change in value. In an alternative embodiment, the Motion Flag remains un-set when the maximum value of each pixel change between a first frame and a subsequent frame does not exceed a predetermined value.
  • a motion block includes Q codes from an encoded 2D block in a first frame and an encoded 2D block in a subsequent frame.
  • the collection of Q codes corresponding to a single encoded 2D block are referred to as an ADRC tile. Accordingly, a motion block generates two ADRC tiles.
  • a stationary block need only include one-half of the number of Q codes of a motion block, thus generating only one ADRC tile.
  • the Q codes of a stationary block are generated by averaging corresponding pixel values between a 2D block in a first frame and a corresponding 2D block in a subsequent frame. Each averaged pixel value is subsequently encoded, resulting in the collection of Q codes forming a single ADRC tile. Accordingly, Motion Blocks 1110 and 1130 generate ADRC Tiles 0, 1, 3, and 4. Stationary Block 1120 generates ADRC Tile 2.
  • the concatenated ADRC tile generation of Figure 11a concatenates the Q codes for ADRC Tiles 0 - 4 into Concatenated ADRC Tile B. Specifically, the concatenation may be performed for each Q code (q0 - q63) included in ADRC Tiles 0, 1, 2, 3 and 4, resulting in the sixty-four Q codes of Concatenated ADRC Tile B. Alternatively, the generation of each Q code, Qi, in Concatenated ADRC Tile B may be described by the equation Qi = [qi,0, qi,1, qi,2, qi,3, qi,4], i = 0, 1, . . . , 63.
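  • A minimal sketch of the Q code concatenation of Figures 11 and 11a (the toy Q code values and bit widths are assumptions; each Qi is simply the bit-wise concatenation of the i-th Q code of every block or tile in the group, carried together with its bit count Ni):

        def concatenate_q_codes(blocks, qbits_per_block):
            # blocks: one list of Q codes per ADRC block or tile in the group.
            # qbits_per_block: number of bits used for each block's Q codes.
            tile = []
            for i in range(len(blocks[0])):
                q_i, n_i = 0, 0
                for q_codes, nb in zip(blocks, qbits_per_block):
                    q_i = (q_i << nb) | q_codes[i]   # append this block's i-th Q code
                    n_i += nb
                tile.append((q_i, n_i))              # Qi together with its bit count Ni
            return tile

        tile_a = concatenate_q_codes(
            blocks=[[1, 2, 3], [0, 1, 2], [3, 3, 0]],   # toy Q codes, three per block
            qbits_per_block=[2, 2, 2])
        print(tile_a)   # [(19, 6), (39, 6), (56, 6)]: Q0, Q1, Q2 with Ni = 6 bits each
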
  • Q code reassignment ensures that bit errors caused by transmission losses are localized within spatially disjointed pixels.
  • Q codes are redistributed and the bits of the redistributed Q codes are shuffled. Accordingly, Q code reassignment facilitates error recovery because undamaged pixels surround each damaged pixel. Furthermore, DR and MIN recovery is aided because pixel damage is distributed evenly throughout an ADRC block.
  • Figure 12 illustrates one embodiment of pixel corruption caused by a 1/6 burst error loss during transmission.
  • 2D ADRC Blocks 1210, 1220, and 1230 each include sixty-four pixels encoded using three bits. Accordingly, each pixel, P0 through P63, of a 2D ADRC block is represented by three bits.
  • 2D ADRC Block 1210 shows the bit loss pattern, indicated by a darkened square, when the first bit of every six bits is lost. Similarly, the bit loss patterns when the second bit or fourth bit of every six bits is lost are shown in 2D ADRC Blocks 1220 and 1230, respectively.
  • Figure 12 illustrates that without Q code reassignment one-half of all the pixels in 2D ADRC Blocks 1210, 1220, and 1230 are corrupted for a 1/6 burst error loss.
  • Q code reassignment may be applied independently to each concatenated ADRC tile stored in a buffer, thus ensuring that bit errors are localized within spatially disjointed pixels upon deshuffling.
  • Q code reassignment may be applied to each ADRC block stored in a buffer.
  • Figure 12a illustrates one embodiment of Q code reassignment that generates a bitstream of shuffled Q code bits from a concatenated ADRC tile.
  • Table 122 and Table 132 illustrate the Q code redistribution.
  • Bitstreams 130 and 140 illustrate the shuffling of Q code bits.
  • Table 122 shows the concatenated Q codes for Concatenated ADRC Tile A.
  • Q0 is the first concatenated Q code and Q63 is the final concatenated Q code.
  • Table 132 illustrates the redistribution of Q codes. For one embodiment, Q0, Q6, Q12, Q18, Q24, Q30, Q36, Q42, Q48, Q54, and Q60 are included in a first set, partition 0. Following Table 132, the following eleven concatenated Q codes are included in partition 1. The steps are repeated for partitions 2 - 5. The boundary of a partition is delineated by a vertical line in Table 132.
  • Figure 12b illustrates one embodiment of the bit pattern loss created by the 1/6 burst error loss of redistributed Q codes.
  • 2D ADRC Blocks 1215, 1225, and 1235 each include sixty-four pixels encoded using three bits. Accordingly, each pixel, P0 through P63, of each 2D ADRC block is represented by three bits.
  • the bit loss pattern, indicated by a darkened square, is localized across a group of consecutive pixels.
  • Q code assignment to partitions includes Q codes from different motion blocks, thus providing both a disjointed spatial and temporal assignment of Q codes to six segments. This results in additional undamaged spatial-temporal pixels during a 1/6 burst error loss and further facilitates a more robust error recovery.
  • bits of the redistributed Q codes in Table 132 are shuffled across a generated bitstream so that adjacent bits in the bitstream are from adjacent partitions.
  • the Q code bits for all the partitions in Table 132 are concatenated into Bitstream 130.
  • adjacent bits in Bitstream 130 are scattered to every sixth bit location in the generated Bitstream 140. Accordingly, bits number zero through five, of Bitstream 140, include the first bit from the first Q code in each partition. Similarly, bits number six through eleven, of Bitstream 140, include the second bit from the first Q code in each partition.
  • the process is repeated for all Q code bits. Accordingly, a 1/6 burst error loss will result in a spatially disjointed pixel loss.
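  • A sketch of the bit shuffling step of the Q code reassignment described above (the partition count of six and the every-sixth-bit scattering follow the description; the helper names and the toy input are assumptions):

        def reassign_bits(bitstream_130, num_partitions=6):
            # Bitstream 130 holds the bits of partition 0, then partition 1, and so on.
            # Bitstream 140 interleaves them so that adjacent output bits come from
            # adjacent partitions, i.e. consecutive input bits land every sixth position.
            k = len(bitstream_130) // num_partitions        # bits per partition
            bitstream_140 = [0] * len(bitstream_130)
            for p in range(num_partitions):
                for i in range(k):
                    bitstream_140[i * num_partitions + p] = bitstream_130[p * k + i]
            return bitstream_140

        # Toy example with two bits per partition:
        b130 = [0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1]
        print(reassign_bits(b130))   # [0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1]
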
  • Figure 12c illustrates one embodiment of the bit pattern loss created by the 1/6 burst error loss of reassigned (i.e., redistributed and shuffled) Q codes.
  • 2D ADRC Blocks 1217, 1227, and 1237 each include sixty-four pixels encoded using three bits. Accordingly, each pixel, P0 through P63, of each 2D ADRC Block is represented by three bits.
  • the bit loss pattern, indicated by a darkened square, is distributed across spatially disjointed pixels, thus facilitating pixel error recovery.
  • 3. Time-varying Randomization of Q Codes
  • if the DR of a block is lost or damaged during transmission, the ADRC decoder must determine how many bits were used to quantize that block without relying on the DR. In one embodiment, this process may be accomplished by applying time-varying randomization to each VL-data block.
  • Randomization may be applied to destroy the correlation of incorrect candidate decodings that may be generated during a subsequent data decoding process in order to estimate lost or damaged data.
  • the randomization process does not change the properties of the correct candidate decoding, as it is restored to its original condition.
  • subsequent derandomized data will tend to result in candidate decodings that exhibit highly correlated properties indicative that the corresponding candidate decoding is a good selection.
  • the randomization process is chosen such that a correct derandomization results in candidate decoding exhibiting highly correlated properties and an incorrect derandomization results in a decoding exhibiting uncorrelated properties.
  • the time-varying randomization advantageously handles zero blocks.
  • time-varying randomization may decrease the likelihood that the decoder will miss data errors by resynchronization (i.e., the decoder incorrectly decoding a set of blocks then correctly decoding subsequent blocks without recognizing the error).
  • Encoding parameters may be used to perform the randomization and derandomization processes. For example, a randomization pattern may be chosen based on the values of the compression parameters.
  • Qi is the Qbit value used to quantize a given VL-data block xi. In this embodiment, this number may be 0, 1, 2, 3, or 4.
  • a seed value may be used to initialize a pseudorandom number generator (PNG) to create a pseudorandom number sequence. This seed value may vary with the current Qi on a block-by-block basis. In alternate embodiments, the seed value may be used to generate any suitable mathematical transformation sequence.
  • PNG pseudorandom number generator
  • the seed value may be generated by combining a variety of the compression constants used to encode the block of data.
  • compression constants include, but are not limited to, Qbit value, Motion Flag (MF), MIN, MAX, CEN, DR, and block address (BA), in which BA identifies a particular pixel location within the block of data.
  • MF Motion Flag
  • BA block address
  • the seed value may be generated as follows:
  • seed generating combinations may be summed over a number of blocks to generate time-varying seed values.
  • seed may be defined as follows:
  • Figure 12d illustrates one embodiment of a method for encoding VL-data blocks by time-varying randomization.
  • the seed value may be set to zero. Other initial values may also be used.
  • the seed value is an 8-bit binary number (e.g., 00000000).
  • the next VL-data block is retrieved.
  • the Qbit value for the VL-data block is determined.
  • the Qbit value may be determined directly from the DR.
  • a Qbit value previously determined by the encoder may be used and stored in a data buffer.
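  • As a rough sketch of a threshold-based determination of the Qbit value from the DR, with purely hypothetical threshold values (the actual threshold table is defined by the ADRC algorithm in use):

        def qbit_from_dr(dr, thresholds=(8, 16, 32, 64)):
            # Count how many thresholds the dynamic range reaches; with four
            # thresholds this yields a Qbit value of 0, 1, 2, 3 or 4, matching
            # the range of Qbit values described in the text.
            return sum(1 for t in thresholds if dr >= t)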
  • at step 1283, if the Qbit value is not equal to zero, the process continues at step 1285; if the Qbit value is equal to zero, the process continues at step 1289.
  • at step 1285, the seed value is combined with the Qbit value.
  • the seed value is shifted left by a number of bits, e.g., two bits.
  • the seed value may be combined, for example, concatenated, with the binary equivalent of the Qbit value minus one. (For example, if the current seed value is 00000010 and the binary equivalent of Qbit value minus one is 11, the two steps result in a seed value of 00001011.) Processing then continues at step 1291.
  • the seed value is manipulated to indicate a zero block.
  • the seed value is shifted right one bit. (For example, if the current seed value is 00001011, the result of the right shift is a seed value of 00000101.)
  • the seed value may be set to a specified constant, left shifted in some manner, or manipulated in any advantageous manner.
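  • A minimal sketch of the per-block seed update just described, assuming an 8-bit seed as in the example values above (the helper name and the choice of a simple right shift for zero blocks follow the text; other manipulations are equally possible):

        def update_seed(seed, qbit, width=8):
            # Non-zero block: shift the seed left two bits and append the two-bit
            # binary value of (Qbit - 1).  Zero block (Qbit == 0): shift right one bit.
            mask = (1 << width) - 1
            if qbit == 0:
                return (seed >> 1) & mask
            return ((seed << 2) | ((qbit - 1) & 0b11)) & mask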
  • the VL-data is randomized in accordance with the seed value.
  • the seed value is used to generate a pseudorandom number sequence using the PNG.
  • a given PNG always generates the same pseudorandom number sequence using the same seed value.
  • the pseudorandom number sequence is used as a transformation function of the VL-data block.
  • the VL-data may be randomized by applying a bitwise XOR (exclusive OR) function to the VL-data and the pseudorandom number sequence.
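  • The randomization step can be sketched as follows; the particular pseudorandom number generator is not specified in the text, so the simple 8-bit generator below is a hypothetical stand-in, and the function names are illustrative:

        def pseudorandom_bits(seed, nbits):
            # Hypothetical PNG: any generator that is fully determined by the seed
            # would do, since encoder and decoder must produce identical sequences.
            state = seed & 0xFF
            bits = []
            for _ in range(nbits):
                state = (state * 37 + 11) & 0xFF     # illustrative 8-bit recurrence
                bits.append((state >> 7) & 1)
            return bits

        def randomize_vl_block(vl_bits, seed):
            # XOR the VL-data bits with the seed-determined pseudorandom sequence.
            # Because XOR is its own inverse, applying the same function with the
            # same seed at the decoder restores the original VL-data.
            prn = pseudorandom_bits(seed, len(vl_bits))
            return [b ^ p for b, p in zip(vl_bits, prn)]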
  • a sequence of Qbit values for successive temporally adjacent blocks of data may, for example, be 3, 2, 1, and 0, as used in the following illustration:
  • the seed value is initially set to 00000000, (corresponding to step 1277).
  • the first VL-data block, x_1, is retrieved and Q_1 is determined.
  • Q_1 has a value of 3.
  • the Qbit value is not zero, therefore, steps 1285 and 1287 are executed.
  • the seed value is shifted left two bits, resulting in the seed value 00000000.
  • Q_1 - 1 = 2, which has a binary value of 10.
  • the two values are concatenated resulting in a seed value of 00000010.
  • the seed value is then used to generate the pseudorandom number sequence y_1, which is bitwise XORed with x_1.
  • the next VL-data block, x_2, and its Qbit value, Q_2 (value 2), are retrieved.
  • Q_2 - 1 = 1, which has a binary value of 01.
  • the current seed value is shifted left two bits, resulting in 00001000.
  • the two values are concatenated, resulting in a new seed value of 00001001.
  • the new seed value is then used to generate the pseudorandom number sequence y_2, which is bitwise XORed with x_2.
  • the next VL-data block, x_3, and its Qbit value, Q_3 (value 1), are retrieved.
  • Q_3 - 1 = 0, which has a binary value of 00.
  • the current seed value is shifted left two bits, resulting in 00100100.
  • the two values are concatenated, resulting in a new seed value of 00100100.
  • the new seed value is then used to generate the pseudorandom number sequence y_3, which is bitwise XORed with x_3.
  • the next VL-data block, x_4, and its Qbit value, Q_4 (value 0), are retrieved. Because the Qbit value is 0 (a zero block), the seed value is shifted to the right one bit, corresponding to step 1289. This results in a new seed value of 00010010. The new seed value is then used to generate the pseudorandom number sequence y_4, which is bitwise XORed with x_4.
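  • The seed trace in this example can be reproduced with a few lines of code (a sketch only; the Qbit sequence 3, 2, 1, 0 and the 8-bit seed width are taken from the text):

        seed = 0b00000000
        for qbit in (3, 2, 1, 0):            # Qbit values of blocks x_1 .. x_4
            if qbit == 0:                    # zero block: shift right one bit
                seed = (seed >> 1) & 0xFF
            else:                            # shift left two bits, append (Qbit - 1)
                seed = ((seed << 2) | (qbit - 1)) & 0xFF
            print(format(seed, "08b"))       # 00000010, 00001001, 00100100, 00010010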
  • Figures 10 - 12 illustrate intra group VL-data shuffling that tolerates up to a 1/6 packet data loss during transmission. It will be appreciated by one skilled in the art that the number of total partitions and the bit separation can be varied to protect against a 1/n burst error loss.
  • Inter segment FL-data shuffling describes rearranging block attributes among different segments. Rearranging block attributes provides for a distributed loss of data. In particular, when FL-data from a segment is lost during transmission, the DR value, MIN value, and Motion Flag value that are lost do not belong to the same block.
  • Figures 13 and 14 illustrate one embodiment of inter segment FL-data shuffling.
  • FIG. 13 illustrates the contents of Segments 0 to 5.
  • each segment comprises 880 DRs, 880 MINs, 880 Motion Flags, and VL-data corresponding to 660 Y-blocks, 110 U-blocks, and 110 V-blocks.
  • in MIN Shuffling 1300, the MIN values for Segment 0 are moved to Segment 2
  • the MIN values for Segment 2 are moved to Segment 4
  • the MIN values for Segment 4 are moved to Segment 0.
  • the MIN values for Segment 1 are moved to Segment 3
  • the MIN values for Segment 3 are moved to Segment 5
  • the MIN values for Segment 5 are moved to Segment 1.
  • Figure 13a illustrates Motion Flag shuffling.
  • both Figure 13 and Figure 13a illustrate shuffling all instances of the specific block attribute between segments.
  • the 880 MIN values from Segment 0 are collectively exchanged with the 880 MIN values in Segment 2.
  • the 880 Motion Flags for Segment 0 are collectively exchanged with the 880 Motion Flags in Segment 4.
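  • A sketch of this collective exchange, assuming each segment's block attributes are held in simple per-attribute lists (the data layout and names are hypothetical):

        def swap_attribute(segments, attr, a, b):
            # Collectively exchange all values of one block attribute (e.g. the
            # 880 MIN values) between two segments, as in MIN Shuffling 1300.
            segments[a][attr], segments[b][attr] = segments[b][attr], segments[a][attr]

        # e.g. swap_attribute(segments, "MIN", 0, 2) exchanges the MIN values of
        # Segments 0 and 2; swap_attribute(segments, "MF", 0, 4) exchanges the
        # Motion Flags of Segments 0 and 4.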
  • this collective shuffling of block attributes results in a disproportionate loss of a specific block attribute for a block group.
  • a block group includes three ADRC blocks.
  • Figure 14 illustrates one embodiment of a modular three shuffling process for DR, MIN, and Motion Flag values.
  • CEN may also be used in the shuffling process.
  • a modular three shuffling describes a shuffling pattern shared across three blocks (i.e., a block group) in three different segments. The shuffling pattern is repeated for all block groups within the three different segments. However, a different shuffling pattern is used for different block attributes. Accordingly, the modular three shuffling process distributes block attributes over all three segments. In particular, for a given block group a modular three shuffling ensures that only one instance of a specific block attribute is lost during the transmission loss of a segment. Thus, during the data recovery process, described below, a reduced number of candidate decodings are generated to recover data loss within a block.
  • a segment stores 880 DR values. Accordingly, the DR values are numbered 0 - 879 dependent on the block from which a given DR value is derived.
  • the FL-data contents of three segments are shuffled.
  • a count of 0 - 2 is used to identify each DR value in the three segments identified for a modular shuffling. Accordingly, DR's belonging to blocks numbered 0, 3, 6, 9 . . . belong to Count 0.
  • DR's belonging to blocks numbered 1, 4, 7, 10, . . . belong to Count 1 and DR's belonging to blocks numbered 2, 5, 8, 11 . . . belong to Count 2.
  • the DR values associated with that count are shuffled across Segments 0, 2, and 4.
  • the DR values associated with the same count are shuffled across Segments 1, 3, and 5.
  • in DR Modular Shuffle 1410, the DR values belonging to Count 0 are left un-shuffled.
  • the DR values belonging to Count 1 are shuffled.
  • the Count 1 DR values in Segment A are moved to Segment B
  • the Count 1 DR values in Segment B are moved to Segment C
  • the Count 1 DR values in Segment C are moved to Segment A.
  • the DR values belonging to Count 2 are also shuffled.
  • the Count 2 DR values in Segment A are moved to Segment C
  • the Count 2 DR values in Segment B are moved to Segment A
  • the Count 2 DR values in Segment C are moved to Segment B.
  • MIN Modular Shuffle 1420 illustrates one embodiment of a modular three block attribute shuffling process for MIN values.
  • a segment includes 880 MIN values.
  • the shuffling patterns used for Count 1 and Count 2 in DR Modular Shuffle 1410 are shifted to Count 0 and Count 1, respectively.
  • the shuffling pattern used for Count 1 in DR Modular Shuffle 1410 is applied to Count 0.
  • the shuffling pattern used for Count 2 in DR Modular Shuffle 1410 is applied to Count 1 and the MIN values belonging to Count 2 are left un-shuffled.
  • Motion Flag Modular Shuffle 1430 illustrates one embodiment of a modular three block attribute shuffling process for Motion Flag values.
  • a segment includes 880 Motion Flag values.
  • the shuffling pattern used for Count 1 and Count 2 in DR Modular Shuffle 1410 are shifted to Count 2 and Count 0 respectively.
  • the shuffling pattern used for Count 2 in DR Modular Shuffle 1410 is applied to Count 0.
  • the shuffling pattern used for Count 1 in DR Modular Shuffle 1410 is applied to Count 2 and the Motion Flag values belonging to Count 1 are left un-shuffled.
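  • One compact way to express the three patterns above in code is sketched below; the per-attribute offset (0 for DR, 1 for MIN, 2 for Motion Flag) is a restatement of the pattern shifts described in the text, and the segment names A, B, C stand for, e.g., Segments 0, 2, and 4:

        def modular_three_shuffle(segments, attr_offset):
            # segments: {"A": [values...], "B": [...], "C": [...]} for one attribute.
            # For each block index i, the count is i % 3; adding the per-attribute
            # offset selects the rotation: 0 keeps values in place, 1 rotates
            # A->B, B->C, C->A, and 2 rotates A->C, B->A, C->B.
            names = ("A", "B", "C")
            out = {k: list(v) for k, v in segments.items()}
            for i in range(len(segments["A"])):
                rotate = (i % 3 + attr_offset) % 3
                for s, name in enumerate(names):
                    out[names[(s + rotate) % 3]][i] = segments[name][i]
            return out

    Because each count value is rotated differently for each attribute, losing one segment removes at most one of the three attributes of any block group, which is the property relied on for data recovery.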
  • Figure 14a illustrates the modular shuffling result of Modular Shuffles 1410, 1420, and 1430.
  • Modular Shuffle Result 1416 shows each attribute destination of blocks belonging to Segment 0.
  • Segment 0 corresponds to Segment A of Figure 14. This destination is defined according to Modular Shuffles 1410, 1420, and 1430 of Figure 14.
  • Figure 14a also illustrates the distribution loss of block attributes after Segment 0 is lost during transmission.
  • Loss Pattern 1415 shows the DR, Motion Flag, and MIN value loss across six segments after a subsequent deshuffling is applied to the received data that was initially shuffled using Modular Shuffles 1410, 1420, and 1430.
  • CEN value may also be used in the shuffling and deshuffling process.
  • the block attribute loss is distributed periodically across Segments 0, 2, and 4 while Segments 1, 3, and 5 have no block attribute loss.
  • Spatial Loss Pattern 1417 illustrates the deshuffled spatial distribution of damaged FL-data after Segment 0 is lost during transmission.
  • Spatial Loss Pattern 1417 shows the DR, Motion Flag, and MIN value loss after a subsequent deshuffling is applied to the received data.
  • in Spatial Loss Pattern 1417, a damaged block is surrounded by undamaged blocks, and damaged block attributes can be recovered using the surrounding undamaged blocks.
  • Figure 14 and Figure 14a illustrate a modular three shuffling pattern and the distribution loss of block attributes after a segment is lost during transmission. In alternative embodiments, the count variables or the number of segments are varied to alternate the distribution of lost block attributes.
  • Figure 14b illustrates Modular Shuffle Result 1421 and Loss Pattern 1420.
  • Figure 14c illustrates Modular Shuffle Result 1426 and Loss Pattern 1425. Both Loss Pattern 1420 and Loss Pattern 1425 illustrate the distribution loss of block attributes across six segments, as opposed to three segments as previously described.
  • FIG. 15 and 16 illustrate one embodiment of the inter segment VL-data shuffling process.
  • a transmission rate approaching 30 Mbps is desired. Accordingly, the desired transmission rate results in 31,152 bits available for the VL-data in each of the 60 buffers. The remaining space is used by FL-data for the eighty-eight blocks included in a buffer.
  • Figure 15 includes the VL-data buffer organization within a frame set for a transmission rate approaching 30 Mbps. As previously described, partial buffering is used to maximize the usage of available VL-data space within each buffer, and the unused VL-data space is filled with a post-amble.
  • Figure 16 illustrates one embodiment of the shuffling process to ensure a spatially separated and periodic VL-data loss.
  • the first row illustrates the VL-data from the 60 buffers in Figure 15 rearranged into a concatenated stream of 1,869,120 bits, Stream 1.
  • the second row illustrates the collection of every sixth bit of Stream 1 into a new stream of bits, Stream 2.
  • the third row illustrates grouping every 10 bits of Stream 2 into a new stream of bits, Stream 3.
  • the boundary of a grouping is also defined by the number of bits in a segment.
  • Grouping of Stream 2 for every tenth bit ensures that a 1/60 data loss results in fifty-nine undamaged bits between every set of two damaged bits. This provides for a spatially separated and periodic VL-data loss in the event that 88 consecutive packets of data are lost.
  • the fourth row illustrates grouping every 11 bits of Stream 3 into Stream 4.
  • the boundary of a grouping is also defined by the number of bits in a segment.
  • Grouping of Stream 3 for every eleventh bit ensures that a 1/660 data loss results in 659 undamaged bits between two damaged bits, resulting in a spatially separated and periodic VL-data loss during a transmission loss of 8 consecutive packets.
  • Each group of 31,152 bits within Stream 4 is consecutively re-stored in Buffers 0 - 59, with the first group of bits stored in Buffer 0 and the last group of bits stored in Buffer 59.
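  • One plausible reading of the every-sixth / every-tenth / every-eleventh bit regrouping is the uniform interleave sketched below; the grouping boundaries imposed by the segment size are not modelled, and the function name is illustrative:

        def regroup_every_nth(stream, n):
            # Collect bits 0, n, 2n, ... first, then bits 1, n+1, 2n+1, ..., and so on.
            return [stream[j] for i in range(n) for j in range(i, len(stream), n)]

        # stream_2 = regroup_every_nth(stream_1, 6)     # Figure 16, second row
        # stream_3 = regroup_every_nth(stream_2, 10)    # third row
        # stream_4 = regroup_every_nth(stream_3, 11)    # fourth row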
  • the previously described shuffling process creates buffers with intermixed FL-data.
  • packets are generated from each buffer, according to packet structure 200, and transmitted across Transmission media 135.
  • the data received is subsequently decoded. Lost or damaged data may be recovered using data recovery processes.
  • a flow diagram illustrating one embodiment of the decoding process performed by Decoder 120 is shown.
  • the conversion and de-shuffling processes are the inverse of the processes represented in Figure 3.
  • time-varying derandomization of Q codes and delayed decision decoding may be performed within step 435 as discussed below.
  • the ADRC decoder must determine how many bits were used to quantize that block without relying on the DR. In one embodiment, this process may be accomplished by applying time-varying derandomization to each VL-data block as it is received at the decoder.
  • Randomization, and the subsequent derandomization of data, may be applied to destroy the correlation of incorrect candidate decodings that may be generated during the data decoding process in order to estimate lost or damaged data.
  • the derandomization process does not change the properties of the correct candidate decoding, as it is restored to its original condition. Derandomized data will tend to result in a candidate decoding that exhibits highly correlated properties indicating that the corresponding candidate decoding is a good selection.
  • a correct derandomization may result in candidate decodings exhibiting highly correlated properties, and an incorrect derandomization may result in a decoding exhibiting uncorrelated properties.
  • the time-varying derandomization advantageously handles zero blocks.
  • the time-varying randomization may decrease the likelihood that the decoder will miss data errors by resynchronization (i.e., the decoder incorrectly decoding a set of blocks then correctly decoding subsequent blocks without recognizing the error).
  • Encoding parameters may be used to perform the derandomization processes. For example, a derandomization pattern may be chosen based on the values of the compression parameters.
  • the Qbit value is determined by the given threshold table defined for the ADRC algorithm. In this embodiment, the decoder can easily determine the proper update for its copy of the randomizing or seed value. In one embodiment, if DR is damaged, the decoder attempts to decode the block with all possible Qbit values and associated possible randomizing or seed values to generate candidate decodings. In this embodiment, a local correlation metric is applied to each candidate decoding and a confidence metric is computed for the block.
  • the block may not be dequantized yet as the decoder implements a delayed-decision decoder.
  • the delayed-decision decoder delays the decoding of the data by four blocks. If the decoder calculates four consecutive low confidence metrics, it concludes that the decoding of the oldest block was incorrect. In that case, an alternate decoding, for example, the next most likely decoding, is then evaluated. In one embodiment, the three more recent blocks are derandomized using the alternate guess at the seed value used for derandomization. This continues until a sequence of four decoded blocks is produced in which the most recent block's confidence metric is greater than a given threshold value T.
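  • A minimal sketch of the four-block delayed-decision rule, assuming the caller supplies one confidence metric per decoded block (the class and method names are hypothetical):

        from collections import deque

        class DelayedDecisionWindow:
            # Holds the confidence metrics of the last `window` candidate decodings.
            # When all of them fall below the threshold T, the caller should choose
            # the next most likely decoding for the oldest block and re-derandomize
            # the three more recent blocks, as described in the text.
            def __init__(self, threshold, window=4):
                self.threshold = threshold
                self.metrics = deque(maxlen=window)

            def push(self, confidence_metric):
                self.metrics.append(confidence_metric)

            def oldest_block_suspect(self):
                return (len(self.metrics) == self.metrics.maxlen
                        and all(m < self.threshold for m in self.metrics))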
  • FIG. 17 is a flowchart of one embodiment for the time-varying derandomization of VL-data blocks using a seed value. Initially at step 1705, a seed value is set to zero. In one embodiment, the seed value is an 8-bit binary number (e.g., 00000000).
  • at step 1710, the next VL-data block is retrieved. Then at step 1715, it is determined whether the DR of the VL-data block is lost or damaged. If the DR is intact, processing continues at step 1720. If the DR is not intact (either lost or damaged), processing continues at step 1755. If at step 1715, the DR for the current VL-data block is intact, steps 1720 through 1750 are performed to derandomize the VL-data. The steps are similar to steps 1281 through 1293 described above in reference to Figure 12d.
  • at step 1755, all possible candidate seed values for the current block are computed.
  • all five possible candidate seed values are computed from the current seed value for the current VL-data block.
  • at step 1760, the current block is derandomized using each of the possible seed values.
  • the derandomization of each possible seed value is similar to processing steps 1720 through 1750.
  • at step 1765, the correlations for the possible seed values are computed.
  • correlation values may be determined using a variety of methods including, but not limited to, least squares estimates, linear regression, or any suitable method.
  • One method of determining correlation values is described in more detail in "Source Coding To Provide For Robust Error Recovery During Transmission Losses," PCT application No. PCTUS98/22347 assigned to the assignee of the present invention.
  • the confidence metric for the block is then determined. If, at step 1775, the confidence metric c_i is above a threshold T, the candidate Qbit value is used to derandomize the current VL-data block beginning at step 1725. However, if the confidence metric c_i is below the threshold T, then processing continues at step 1780. At step 1780, the confidence metric for the oldest block retained in memory is examined. In one embodiment, up to four blocks may be maintained. Thus, in this embodiment, the confidence metric c_(i-3) is examined. If the confidence metric for the oldest block is less than T, then, at step 1780, an alternate or next-best decoding for the oldest block is chosen and the oldest block is derandomized.
  • at step 1785, the remaining three blocks in memory are re-derandomized based on the new alternate seed value obtained in step 1780.
  • the re-derandomizing of the remaining blocks is similar to processing steps 1725 through 1750. Processing then returns to step 1755 and repeats steps 1780 through 1785 until the confidence metric of the most recent block, c_i, is greater than T.
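  • The candidate-decoding search for a block whose DR (and hence Qbit value) is unknown can be sketched as follows; update_seed, derandomize, and correlation are assumed helpers supplied by the caller (for instance the seed-update and XOR sketches given earlier, plus any suitable local correlation measure), and using the gap between the two best correlations as the confidence metric is only one simple choice:

        def best_candidate_decoding(current_seed, vl_bits,
                                    update_seed, derandomize, correlation):
            # Try every possible Qbit value (0..4 per the text), derive the candidate
            # seed, derandomize the block with it, and rank candidates by correlation.
            candidates = []
            for qbit in range(5):
                seed = update_seed(current_seed, qbit)
                decoding = derandomize(vl_bits, seed)
                candidates.append((correlation(decoding), qbit, seed, decoding))
            candidates.sort(key=lambda c: c[0], reverse=True)
            best, runner_up = candidates[0], candidates[1]
            confidence = best[0] - runner_up[0]    # how clearly the best candidate wins
            return best, confidence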
  • a confidence metric determines when the local correlation metric has failed to produce the correct decoding from among the possible candidate decodings.
  • the most likely decoding candidate for correlation- based decoding exhibits higher correlation properties as compared to the next-most- likely decoding candidate.
  • the confidence metric is a numerical measurement of the degree to which the best candidate exhibits the higher correlation for any given block.
  • the decoder performs every possible candidate decoding and then attempts to determine the appropriate decoding based on local correlation.
  • the decoder determines a confidence metric based on the two most likely decodings, i.e., the two decodings that exhibit the largest local correlation. This metric indicates the degree to which the most likely decoding is superior to the next-most- likely decoding.
  • a decoding that produces no clearly superior choice based on the local correlation structure in the block would have a low confidence metric.
  • Blocks in which there is one decoding that produces a much larger correlation than any of the other possible decodings would have a large confidence metric.
  • if the decoder computes n consecutive low confidence metrics, it concludes that a decoding error occurred in the decoding of the oldest block.
  • the decoder may assume that block -3 was correctly derandomized.
  • the decoder determines the correlations of the four derandomized blocks as follows:
  • the decoder may not make a determination whether blocks -2, -1, and 0 are correctly decoded until the decoder has derandomized the next block.
  • the correlations of the four derandomized blocks may be as follows:
  • the decoder may assume that the three low correlation blocks (-3, -2, -1) were derandomized correctly.
  • the correlations of the four derandomized blocks may be as follows:
  • the decoder may assume that the oldest block (-3) was incorrectly derandomized and will explore the oldest block's alternative derandomizations to find the next-most-likely candidate for derandomization. In one embodiment, it is only when all four blocks have low correlation values that the alternatives for the oldest block may be examined. In alternate embodiments, a greater or lesser number of low correlation blocks may be used or a combination of low and high correlations of varying number.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Compression, Expansion, Code Conversion, And Decoders (AREA)
  • Detection And Prevention Of Errors In Transmission (AREA)
  • Editing Of Facsimile Originals (AREA)
PCT/US2000/014331 1999-06-29 2000-05-24 Time-varying randomization for data synchronization and implicit information transmission Ceased WO2001001697A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
AU54435/00A AU5443500A (en) 1999-06-29 2000-05-24 Time-varying randomization for data synchronization and implicit information transmission
DE10084741T DE10084741T1 (de) 1999-06-29 2000-05-24 Zeitlich variierende Randomisierung für eine Datensynchronisation und implizite Informationsübertragung
JP2001506240A JP2003503915A (ja) 1999-06-29 2000-05-24 データ同期のための時間変化ランダム化及び間接的情報伝送

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US09/342,275 US6493842B1 (en) 1999-06-29 1999-06-29 Time-varying randomization for data synchronization and implicit information transmission
US09/342,275 1999-06-29

Publications (1)

Publication Number Publication Date
WO2001001697A1 true WO2001001697A1 (en) 2001-01-04

Family

ID=23341117

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2000/014331 Ceased WO2001001697A1 (en) 1999-06-29 2000-05-24 Time-varying randomization for data synchronization and implicit information transmission

Country Status (6)

Country Link
US (2) US6493842B1 (en)
JP (1) JP2003503915A (en)
AU (1) AU5443500A (en)
DE (1) DE10084741T1 (en)
TW (1) TW496091B (en)
WO (1) WO2001001697A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102170568A (zh) * 2011-03-11 2011-08-31 山东大学 高光谱遥感图像的无损压缩编码器及其译码器

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7035412B2 (en) * 2002-07-03 2006-04-25 Infineon Technologies Ag WLAN error control
AU2007237313A1 (en) * 2007-12-03 2009-06-18 Canon Kabushiki Kaisha Improvement for error correction in distributed vdeo coding
KR102133542B1 (ko) * 2013-12-03 2020-07-14 에스케이하이닉스 주식회사 랜더마이저 및 디랜더마이저를 포함하는 메모리 시스템
US10942909B2 (en) * 2018-09-25 2021-03-09 Salesforce.Com, Inc. Efficient production and consumption for data changes in a database under high concurrency
US11392922B2 (en) 2019-06-20 2022-07-19 Advanced New Technologies Co., Ltd. Validating transactions using information transmitted through magnetic fields
US10681044B1 (en) 2019-06-20 2020-06-09 Alibaba Group Holding Limited Authentication by transmitting information through magnetic fields
US11544615B2 (en) * 2021-05-27 2023-01-03 Red Hat, Inc. Managing runtime qubit allocation for executing quantum services

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4722003A (en) * 1985-11-29 1988-01-26 Sony Corporation High efficiency coding apparatus
US4845560A (en) * 1987-05-29 1989-07-04 Sony Corp. High efficiency coding apparatus
EP0851679A2 (en) * 1996-12-25 1998-07-01 Nec Corporation Identification data insertion and detection system for digital data
WO1999021372A1 (en) * 1997-10-23 1999-04-29 Sony Electronics, Inc. Source coding to provide for robust error recovery during transmission losses

Family Cites Families (140)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3311879A (en) 1963-04-18 1967-03-28 Ibm Error checking system for variable length data
US3805232A (en) 1972-01-24 1974-04-16 Honeywell Inf Systems Encoder/decoder for code words of variable length
US3811108A (en) 1973-05-29 1974-05-14 Honeywell Inf Systems Reverse cyclic code error correction
FR2387557A1 (fr) 1977-04-14 1978-11-10 Telediffusion Fse Systemes de reduction de visibilite du bruit sur des images de television
GB2073534B (en) 1980-04-02 1984-04-04 Sony Corp Error concealment in digital television signals
GB2084432A (en) 1980-09-18 1982-04-07 Sony Corp Error concealment in digital television signals
US4394642A (en) 1981-09-21 1983-07-19 Sperry Corporation Apparatus for interleaving and de-interleaving data
US4532628A (en) 1983-02-28 1985-07-30 The Perkin-Elmer Corporation System for periodically reading all memory locations to detect errors
US4574393A (en) 1983-04-14 1986-03-04 Blackwell George F Gray scale image processor
JPH0746864B2 (ja) 1984-08-22 1995-05-17 ソニー株式会社 高能率符号化装置
DE3582314D1 (de) 1984-12-19 1991-05-02 Sony Corp Hochleistungsfaehige technik zur kodierung eines digitalen videosignals.
JPH0793724B2 (ja) 1984-12-21 1995-10-09 ソニー株式会社 テレビジョン信号の高能率符号化装置及び符号化方法
US4796299A (en) 1985-08-22 1989-01-03 Itt Corporation Video encoder apparatus
JP2512894B2 (ja) 1985-11-05 1996-07-03 ソニー株式会社 高能率符号化/復号装置
JPH0746862B2 (ja) 1985-11-30 1995-05-17 ソニー株式会社 駒落とし圧縮符号化及び復号化方法
EP0597556B1 (en) 1985-12-13 2001-10-17 Canon Kabushiki Kaisha Image processing apparatus
US4797945A (en) 1985-12-13 1989-01-10 Canon Kabushiki Kaisha Image data coding apparatus
JP2612557B2 (ja) 1985-12-18 1997-05-21 ソニー株式会社 データ伝送受信システム及びデータ復号装置
JPS62231569A (ja) 1986-03-31 1987-10-12 Fuji Photo Film Co Ltd 予測誤差の量子化方法
JP2751201B2 (ja) 1988-04-19 1998-05-18 ソニー株式会社 データ伝送装置及び受信装置
ATE74219T1 (de) 1987-06-02 1992-04-15 Siemens Ag Verfahren zur ermittlung von bewegungsvektorfeldern aus digitalen bildsequenzen.
US5122873A (en) 1987-10-05 1992-06-16 Intel Corporation Method and apparatus for selectively encoding and decoding a digital motion video signal at multiple resolution levels
US5093872A (en) 1987-11-09 1992-03-03 Interand Corporation Electronic image compression method and apparatus using interlocking digitate geometric sub-areas to improve the quality of reconstructed images
JP2629238B2 (ja) 1988-02-05 1997-07-09 ソニー株式会社 復号装置及び復号方法
SE503549C2 (sv) 1988-09-15 1996-07-01 Telia Ab Kryptering med efterföljande källkodning
US4953023A (en) 1988-09-29 1990-08-28 Sony Corporation Coding apparatus for encoding and compressing video data
JP2900385B2 (ja) 1988-12-16 1999-06-02 ソニー株式会社 フレーム化回路及び方法
US5150210A (en) 1988-12-26 1992-09-22 Canon Kabushiki Kaisha Image signal restoring apparatus
JP3018366B2 (ja) 1989-02-08 2000-03-13 ソニー株式会社 ビデオ信号処理回路
JPH02248161A (ja) 1989-03-20 1990-10-03 Fujitsu Ltd データ伝送方式
US5185746A (en) 1989-04-14 1993-02-09 Mitsubishi Denki Kabushiki Kaisha Optical recording system with error correction and data recording distributed across multiple disk drives
JPH02280462A (ja) 1989-04-20 1990-11-16 Fuji Photo Film Co Ltd 画像データ圧縮方法
DE69031638T2 (de) 1989-05-19 1998-03-19 Canon Kk System zum Übertragen von Bildinformation
US5208816A (en) 1989-08-18 1993-05-04 At&T Bell Laboratories Generalized viterbi decoding algorithms
JPH03141752A (ja) 1989-10-27 1991-06-17 Hitachi Ltd 画像信号伝送方法
JP2533393B2 (ja) 1990-02-16 1996-09-11 シャープ株式会社 Ntsc―hdコンバ―タ
US5166987A (en) 1990-04-04 1992-11-24 Sony Corporation Encoding apparatus with two stages of data compression
US5101446A (en) 1990-05-31 1992-03-31 Aware, Inc. Method and apparatus for coding an image
JPH0474063A (ja) 1990-07-13 1992-03-09 Matsushita Electric Ind Co Ltd 画像の符号化方法
JP2650472B2 (ja) 1990-07-30 1997-09-03 松下電器産業株式会社 ディジタル信号記録装置およびディジタル信号記録方法
JP2969867B2 (ja) 1990-08-31 1999-11-02 ソニー株式会社 ディジタル画像信号の高能率符号化装置
GB9019538D0 (en) 1990-09-07 1990-10-24 Philips Electronic Associated Tracking a moving object
DE69121829T2 (de) 1990-10-09 1997-03-20 Philips Electronics Nv Kodier/Dekodier-Einrichtung und Verfahren für durch kodierte Modulation übertragene, digitale Signale
US5416651A (en) 1990-10-31 1995-05-16 Sony Corporation Apparatus for magnetically recording digital data
US5243428A (en) 1991-01-29 1993-09-07 North American Philips Corporation Method and apparatus for concealing errors in a digital television
US5636316A (en) 1990-12-05 1997-06-03 Hitachi, Ltd. Picture signal digital processing unit
JP2906671B2 (ja) 1990-12-28 1999-06-21 ソニー株式会社 ディジタルビデオ信号の高能率符号化装置およびその方法
EP0495501B1 (en) 1991-01-17 1998-07-08 Sharp Kabushiki Kaisha Image coding and decoding system using an orthogonal transform and bit allocation method suitable therefore
EP0495490B1 (en) 1991-01-17 1998-05-27 Mitsubishi Denki Kabushiki Kaisha Video signal encoding apparatus
TW223690B (enExample) 1991-02-13 1994-05-11 Ampex
US5455629A (en) 1991-02-27 1995-10-03 Rca Thomson Licensing Corporation Apparatus for concealing errors in a digital video processing system
JP3125451B2 (ja) 1991-11-05 2001-01-15 ソニー株式会社 信号処理方法
JPH04358486A (ja) 1991-06-04 1992-12-11 Toshiba Corp 高能率符号化信号処理装置
JP2766919B2 (ja) 1991-06-07 1998-06-18 三菱電機株式会社 ディジタル信号記録再生装置、ディジタル信号記録装置、ディジタル信号再生装置
US5263026A (en) 1991-06-27 1993-11-16 Hughes Aircraft Company Maximum likelihood sequence estimation based equalization within a mobile digital cellular receiver
JP3141896B2 (ja) 1991-08-09 2001-03-07 ソニー株式会社 ディジタルビデオ信号の記録装置
ATE148607T1 (de) 1991-09-30 1997-02-15 Philips Electronics Nv Bewegungsvektorschätzung, bewegungsbildkodierung- und -speicherung
JPH05103309A (ja) 1991-10-04 1993-04-23 Canon Inc 情報伝送方法及び装置
US5398078A (en) 1991-10-31 1995-03-14 Kabushiki Kaisha Toshiba Method of detecting a motion vector in an image coding apparatus
JP3278881B2 (ja) 1991-12-13 2002-04-30 ソニー株式会社 画像信号生成装置
US5473479A (en) 1992-01-17 1995-12-05 Sharp Kabushiki Kaisha Digital recording and/or reproduction apparatus of video signal rearranging components within a fixed length block
JP3360844B2 (ja) 1992-02-04 2003-01-07 ソニー株式会社 ディジタル画像信号の伝送装置およびフレーム化方法
JPH05236427A (ja) 1992-02-25 1993-09-10 Sony Corp 画像信号の符号化装置及び符号化方法
JPH05268594A (ja) 1992-03-18 1993-10-15 Sony Corp 動画像の動き検出装置
US5307175A (en) 1992-03-27 1994-04-26 Xerox Corporation Optical image defocus correction
JP3259323B2 (ja) 1992-04-13 2002-02-25 ソニー株式会社 デ・インターリーブ回路
US5325203A (en) 1992-04-16 1994-06-28 Sony Corporation Adaptively controlled noise reduction device for producing a continuous output
US5440344A (en) 1992-04-28 1995-08-08 Mitsubishi Denki Kabushiki Kaisha Video encoder using adjacent pixel difference for quantizer control
JP3438233B2 (ja) 1992-05-22 2003-08-18 ソニー株式会社 画像変換装置および方法
JP2976701B2 (ja) 1992-06-24 1999-11-10 日本電気株式会社 量子化ビット数割当方法
US5321748A (en) 1992-07-02 1994-06-14 General Instrument Corporation, Jerrold Communications Method and apparatus for television signal scrambling using block shuffling
US5359694A (en) 1992-07-27 1994-10-25 Teknekron Communications Systems, Inc. Method and apparatus for converting image data
US5438369A (en) 1992-08-17 1995-08-01 Zenith Electronics Corporation Digital data interleaving system with improved error correctability for vertically correlated interference
US5481554A (en) 1992-09-02 1996-01-02 Sony Corporation Data transmission apparatus for transmitting code data
JPH06153180A (ja) 1992-09-16 1994-05-31 Fujitsu Ltd 画像データ符号化方法及び装置
JPH06121192A (ja) 1992-10-08 1994-04-28 Sony Corp ノイズ除去回路
DE596826T1 (de) 1992-11-06 1994-10-06 Gold Star Co Mischungsverfahren für ein digitales Videobandaufzeichnungsgerät.
US5689302A (en) 1992-12-10 1997-11-18 British Broadcasting Corp. Higher definition video signals from lower definition sources
US5477276A (en) 1992-12-17 1995-12-19 Sony Corporation Digital signal processing apparatus for achieving fade-in and fade-out effects on digital video signals
JPH06205386A (ja) 1992-12-28 1994-07-22 Canon Inc 画像再生装置
US5805762A (en) 1993-01-13 1998-09-08 Hitachi America, Ltd. Video recording device compatible transmitter
US5416847A (en) 1993-02-12 1995-05-16 The Walt Disney Company Multi-band, digital audio noise filter
US5737022A (en) 1993-02-26 1998-04-07 Kabushiki Kaisha Toshiba Motion picture error concealment using simplified motion compensation
JP3259428B2 (ja) 1993-03-24 2002-02-25 ソニー株式会社 ディジタル画像信号のコンシール装置及び方法
US5429403A (en) 1993-04-27 1995-07-04 Brasher; Andrew J. Automated pivotable cargo box extensions
KR100261072B1 (ko) 1993-04-30 2000-07-01 윤종용 디지털 신호처리시스템
KR940026915A (ko) 1993-05-24 1994-12-10 오오가 노리오 디지탈 비디오신호 기록장치 및 재생장치 및 기록방법
GB2284495B (en) 1993-05-28 1998-04-08 Sony Corp Error correction processing method and apparatus for digital data
US5499057A (en) 1993-08-27 1996-03-12 Sony Corporation Apparatus for producing a noise-reducded image signal from an input image signal
US5406334A (en) 1993-08-30 1995-04-11 Sony Corporation Apparatus and method for producing a zoomed image signal
KR960012931B1 (ko) 1993-08-31 1996-09-25 대우전자 주식회사 분류 벡터 양자화된 영상의 채널 오류 은폐 방법
US5663764A (en) 1993-09-30 1997-09-02 Sony Corporation Hierarchical encoding and decoding apparatus for a digital image signal
JP3590996B2 (ja) 1993-09-30 2004-11-17 ソニー株式会社 ディジタル画像信号の階層符号化および復号装置
JP3495766B2 (ja) 1993-10-01 2004-02-09 テキサス インスツルメンツ インコーポレイテツド 画像処理方法
JP2862064B2 (ja) 1993-10-29 1999-02-24 三菱電機株式会社 データ復号装置及びデータ受信装置及びデータ受信方法
KR100269213B1 (ko) 1993-10-30 2000-10-16 윤종용 오디오신호의부호화방법
US5617333A (en) 1993-11-29 1997-04-01 Kokusai Electric Co., Ltd. Method and apparatus for transmission of image data
JP3271108B2 (ja) 1993-12-03 2002-04-02 ソニー株式会社 ディジタル画像信号の処理装置および方法
JPH07203428A (ja) 1993-12-28 1995-08-04 Canon Inc 画像処理方法及び装置
JP3321972B2 (ja) 1994-02-15 2002-09-09 ソニー株式会社 ディジタル信号記録装置
JP3161217B2 (ja) 1994-04-28 2001-04-25 松下電器産業株式会社 画像符号化記録装置および記録再生装置
JP3336754B2 (ja) 1994-08-19 2002-10-21 ソニー株式会社 デジタルビデオ信号の記録方法及び記録装置
JP3845870B2 (ja) 1994-09-09 2006-11-15 ソニー株式会社 ディジタル信号処理用集積回路
US5577053A (en) 1994-09-14 1996-11-19 Ericsson Inc. Method and apparatus for decoder optimization
US6026190A (en) 1994-10-31 2000-02-15 Intel Corporation Image signal encoding with variable low-pass filter
JPH08140091A (ja) 1994-11-07 1996-05-31 Kokusai Electric Co Ltd 画像伝送システム
US5594807A (en) 1994-12-22 1997-01-14 Siemens Medical Systems, Inc. System and method for adaptive filtering of images based on similarity between histograms
US5852470A (en) 1995-05-31 1998-12-22 Sony Corporation Signal converting apparatus and signal converting method
US5710815A (en) * 1995-06-07 1998-01-20 Vtech Communications, Ltd. Encoder apparatus and decoder apparatus for a television signal having embedded viewer access control data
US5946044A (en) 1995-06-30 1999-08-31 Sony Corporation Image signal converting method and image signal converting apparatus
JPH0918357A (ja) 1995-06-30 1997-01-17 Sony Corp データシャフリング方法およびその装置
FR2736743B1 (fr) 1995-07-10 1997-09-12 France Telecom Procede de controle de debit de sortie d'un codeur de donnees numeriques representatives de sequences d'images
US5991450A (en) 1995-09-06 1999-11-23 Canon Kabushiki Kaisha Image encoding and decoding apparatus
JP3617879B2 (ja) 1995-09-12 2005-02-09 株式会社東芝 実時間ストリームサーバのディスク修復方法及びディスク修復装置
KR0155900B1 (ko) 1995-10-18 1998-11-16 김광호 위상에러검출방법 및 위상 트래킹 루프회로
US5724369A (en) 1995-10-26 1998-03-03 Motorola Inc. Method and device for concealment and containment of errors in a macroblock-based video codec
KR100196872B1 (ko) 1995-12-23 1999-06-15 전주범 영상 복화화 시스템의 영상 에러 복구 장치
KR100197366B1 (ko) 1995-12-23 1999-06-15 전주범 영상 에러 복구 장치
JPH09214952A (ja) * 1996-01-30 1997-08-15 Kokusai Electric Co Ltd 画像データの並べ替え方法、画像データの伝送方法
US5931968A (en) * 1996-02-09 1999-08-03 Overland Data, Inc. Digital data recording channel
US5751862A (en) 1996-05-08 1998-05-12 Xerox Corporation Self-timed two-dimensional filter
DE69712676T2 (de) 1996-07-08 2003-01-02 Hyundai Curitel, Inc. Verfahren zur Videokodierung
JP3352887B2 (ja) 1996-09-09 2002-12-03 株式会社東芝 クランプ付除算器、このクランプ付除算器を備えた情報処理装置及び除算処理におけるクランプ方法
US6134269A (en) 1996-09-25 2000-10-17 At&T Corp Fixed or adaptive deinterleaved transform coding for image coding and intra coding of video
US5751865A (en) 1996-09-26 1998-05-12 Xerox Corporation Method and apparatus for image rotation with reduced memory using JPEG compression
KR100196840B1 (ko) 1996-12-27 1999-06-15 전주범 영상복호화시스템에 있어서 비트에러복원장치
WO1998047259A2 (en) * 1997-03-10 1998-10-22 Fielder Guy L File encryption method and system
US5938318A (en) 1997-08-19 1999-08-17 Mattsen; Gregory Paul Novelty shadow projection lamp
US6198851B1 (en) 1997-09-22 2001-03-06 Sony Corporation Apparatus and method for encoding/decoding
US6070174A (en) * 1997-09-30 2000-05-30 Infraworks Corporation Method and apparatus for real-time secure file deletion
EP1027651B1 (en) * 1997-10-23 2013-08-07 Sony Electronics, Inc. Apparatus and method for providing robust error recovery for errors that occur in a lossy transmission environment
EP1025648B1 (en) * 1997-10-23 2012-01-11 Sony Electronics, Inc. Apparatus and method for localizing transmission errors to provide robust error recovery in a lossy transmission environment
AU1115599A (en) * 1997-10-23 1999-05-10 Sony Electronics Inc. Apparatus and method for partial buffering transmitted data to provide robust error recovery in a lossy transmission environment
ID26503A (id) * 1997-10-23 2001-01-11 Sony Electronics Inc Peralatan dan metode untuk memetakan gambar pada blok-blok untuk memberikan perolehan kesalahan yang kuat dalam suatu pemancaran kehilangan lingkungan
JP4558192B2 (ja) * 1997-10-23 2010-10-06 ソニー エレクトロニクス インク 復号方法及び装置並びに記録媒体
WO1999021285A1 (en) * 1997-10-23 1999-04-29 Sony Electronics, Inc. Apparatus and method for recovery of lost/damaged data in a bitstream of data based on compatibility
AU1362999A (en) * 1997-10-23 1999-05-10 Sony Electronics Inc. Apparatus and method for recovery of data in a lossy transmission environment
US6229929B1 (en) 1998-05-14 2001-05-08 Interval Research Corporation Border filtering of video signal blocks
US6363118B1 (en) * 1999-02-12 2002-03-26 Sony Corporation Apparatus and method for the recovery of compression constants in the encoded domain
US6377955B1 (en) * 1999-03-30 2002-04-23 Cisco Technology, Inc. Method and apparatus for generating user-specified reports from radius information

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4722003A (en) * 1985-11-29 1988-01-26 Sony Corporation High efficiency coding apparatus
US4845560A (en) * 1987-05-29 1989-07-04 Sony Corp. High efficiency coding apparatus
EP0851679A2 (en) * 1996-12-25 1998-07-01 Nec Corporation Identification data insertion and detection system for digital data
WO1999021372A1 (en) * 1997-10-23 1999-04-29 Sony Electronics, Inc. Source coding to provide for robust error recovery during transmission losses

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
KONDO T ET AL: "ADAPTIVE DYNAMIC RANGE CODING SCHEME FOR FUTURE HDTV DIGITAL VTR", PROCEEDINGS OF THE INTERNATIONAL WORKSHOP ON HDTV AND BEYOND,NL,AMSTERDAM, ELSEVIER, vol. WORKSHOP 4, 4 September 1991 (1991-09-04), pages 43 - 50, XP000379937 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102170568A (zh) * 2011-03-11 2011-08-31 山东大学 高光谱遥感图像的无损压缩编码器及其译码器

Also Published As

Publication number Publication date
JP2003503915A (ja) 2003-01-28
US20020059550A1 (en) 2002-05-16
US6553381B2 (en) 2003-04-22
AU5443500A (en) 2001-01-31
DE10084741T1 (de) 2002-07-11
US6493842B1 (en) 2002-12-10
TW496091B (en) 2002-07-21

Similar Documents

Publication Publication Date Title
US6389562B1 (en) Source code shuffling to provide for robust error recovery
US6332042B1 (en) Apparatus and method for encoding and decoding data in a lossy transmission environment
US6553381B2 (en) Time-varying randomization for data synchronization and implicit information transmission
WO1999021090A1 (en) Apparatus and method for providing robust error recovery for errors that occur in a lossy transmission environment
CA2308223C (en) Apparatus and method for mapping and image to blocks to provide for robust error recovery in a lossy transmission environment
WO1999021285A1 (en) Apparatus and method for recovery of lost/damaged data in a bitstream of data based on compatibility
US7080312B2 (en) Data transformation for explicit transmission of control information
KR100577091B1 (ko) 손실이 있는 송신 환경에서 강력한 에러 회복을 실행하기 위한 데이터 블록의 생성 방법, 장치 및 프로세서, 및 컴퓨터 판독 가능한 매체
US6170074B1 (en) Source coding to provide for robust error recovery
CA2308220C (en) Apparatus and method for partial buffering transmitted data to provide robust error recovery in a lossy transmission environment
EP1040444A1 (en) Apparatus and method for recovery of quantization codes in a lossy transmission environment
EP1025648A1 (en) Apparatus and method for localizing transmission errors to provide robust error recovery in a lossy transmission environment
EP1025538A1 (en) Apparatus and method for recovery of data in a lossy transmission environment

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY CA CH CN CR CU CZ DE DK DM DZ EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
RET De translation (de og part 6b)

Ref document number: 10084741

Country of ref document: DE

Date of ref document: 20020711

WWE Wipo information: entry into national phase

Ref document number: 10084741

Country of ref document: DE

122 Ep: pct application non-entry in european phase
REG Reference to national code

Ref country code: DE

Ref legal event code: 8607