EP1074091A2 - Method and apparatus of supporting a video protocol in a network environment - Google Patents

Method and apparatus of supporting a video protocol in a network environment

Info

Publication number
EP1074091A2
Authority
EP
European Patent Office
Prior art keywords
video data
packets
video
receiver
transmitter
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
EP99918718A
Other languages
German (de)
English (en)
Inventor
Alan T. Ruberg
James G. Hanko
J. Duane Northcutt
Gerard A. Wall
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sun Microsystems Inc
Original Assignee
Sun Microsystems Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sun Microsystems Inc filed Critical Sun Microsystems Inc
Publication of EP1074091A2
Legal status: Ceased

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N11/00Colour television systems
    • H04N11/04Colour television systems using pulse code modulation
    • H04N11/042Codec means
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L69/00Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
    • H04L69/08Protocols for interworking; Protocol conversion
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/124Quantisation
    • H04N19/126Details of normalisation or weighting functions, e.g. normalisation matrices or variable uniform quantisers
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/186Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a colour or a chrominance component
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/40Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using video transcoding, i.e. partial or full decoding of a coded input stream followed by re-encoding of the decoded output stream
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/59Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial sub-sampling or interpolation, e.g. alteration of picture size or resolution
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/593Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial prediction techniques
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs
    • H04N21/2343Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • H04N21/234354Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements by altering signal-to-noise ratio parameters, e.g. requantization
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41Structure of client; Structure of client peripherals
    • H04N21/414Specialised client platforms, e.g. receiver in car or embedded in a mobile appliance
    • H04N21/4143Specialised client platforms, e.g. receiver in car or embedded in a mobile appliance embedded in a Personal Computer [PC]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/4402Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H04N21/440254Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display by altering signal-to-noise parameters, e.g. requantization
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00Details of colour television systems
    • H04N9/64Circuits for processing colour signals
    • H04N9/641Multi-purpose receivers, e.g. for auxiliary information
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N11/00Colour television systems
    • H04N11/04Colour television systems using pulse code modulation
    • H04N11/042Codec means
    • H04N11/044Codec means involving transform coding
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/124Quantisation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/60Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41Structure of client; Structure of client peripherals
    • H04N21/426Internal components of the client ; Characteristics thereof
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/478Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N21/4788Supplemental services, e.g. displaying phone caller identification, shopping application communicating with other users, e.g. chatting
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00Details of colour television systems
    • H04N9/64Circuits for processing colour signals
    • H04N9/67Circuits for processing colour signals for matrixing

Definitions

  • This invention relates to the field of digital video, and, more specifically, to digital video applications in a network environment.
  • Computers and computer networks are used to exchange information in many fields such as media, commerce, and telecommunications, for example.
  • One form of information that is commonly exchanged is video data (or image data), i.e., data representing a digitized image or sequence of images.
  • A video conferencing feed is an example of telecommunication information which includes video data.
  • Other examples of video data include video streams or files associated with scanned images, digitized television performances, and animation sequences, or portions thereof, as well as other forms of visual information that are displayed on a display device. It is also possible to synthesize video information by artificially rendering video data from two or three-dimensional computer models.
  • The exchange of information between computers on a network occurs between a "transmitter" and a "receiver." Where the information contains video data, the services provided by the transmitter are associated with the processing and transmission of the video data.
  • A problem with current network systems is that multiple services provided by one or more transmitters may provide video data using different video protocols. The complexity of the receiver is necessarily increased by the need to accommodate each of the different video protocols.
  • Further, the amount of data associated with video applications is very large. The transmission of such large amounts of data over a network can result in bandwidth utilization concerns.
  • The following description of video technology and an example network scheme is given to provide a better understanding of the problems involved in transmitting video data over a network.
  • A display is composed of a two-dimensional array of picture elements, or "pixels," which form a viewing plane.
  • Each pixel has associated visual characteristics that determine how a pixel appears to a viewer. These visual characteristics may be limited to the perceived brightness, or "luminance,” for monochrome displays, or the visual characteristics may include color, or "chrominance,” information.
  • Video data is commonly provided as a set of data values mapped to an array of pixels. The set of data values specify the visual characteristics for those pixels that result in the display of a desired image.
  • A variety of color models exist for representing the visual characteristics of a pixel as one or more data values.
  • RGB color is a commonly used color model for display systems. RGB color is based on a "color model" system. A color model allows convenient specification of colors within a color range, such as the RGB (red, green, blue) primary colors.
  • A color model is a specification of a three-dimensional color coordinate system and a three-dimensional subspace or "color space" in the coordinate system within which each displayable color is represented by a point in space.
  • Computer and graphic display systems are typically three-phosphor systems with a red, green and blue phosphor at each pixel location. The intensities of the red, green and blue phosphors are varied so that the combination of the three primary colors results in a desired output color.
  • The RGB color model uses a Cartesian coordinate system. The subspace of interest in this coordinate system is known as the "RGB color cube" and is illustrated in Figure 1. Each corner of the cube represents a color that is theoretically one-hundred percent pure — that is, the color at that location contains only the color specified, and contains no amount of other colors.
  • The corners are defined to be black, white, red, green, blue, magenta, cyan, and yellow. Red, green and blue are the primary colors, black is the absence of color, white is the combination of all colors, and cyan, magenta and yellow are the complements of red, green and blue.
  • The origin of the coordinate system corresponds to the black corner of the color cube. The cube is a unit cube so that the distance between the origin and adjacent corners is 1. The red corner is thus at location (1,0,0), and the axis between the origin (black) and the red corner is referred to as the red axis 110. The green corner is at location (0,1,0) and the axis 120 between the black origin and the green corner is referred to as the green axis. The blue corner is at location (0,0,1) and the blue axis 130 is the axis between the blue corner and the origin. Cyan is at corner (0,1,1), magenta is at corner (1,0,1) and yellow is at corner (1,1,0). The corner opposite the origin on the cube's diagonal at location (1,1,1) is the white corner.
  • A color is defined in the color cube by a vector having red, green and blue components. For example, vector 180 is the resultant of vector 180R (along the red axis), vector 180G (along the green axis) and vector 180B (along the blue axis). The end point of vector 180 can be described mathematically by 0.25R + 0.50G + 0.75B; the end of this vector defines a point in color space represented by the sum of its red, green and blue components.
  • A refresh buffer 140, also known as a video RAM or VRAM, is used to store color information for each pixel on a video display, such as CRT display 160. A DRAM can also be used as buffer 140.
  • The VRAM 140 contains one memory location for each pixel location on the display 160. For example, pixel 190 at screen location XNYN corresponds to memory location 150 in the VRAM 140.
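  • A minimal illustration of this one-location-per-pixel mapping is sketched below; the helper name and the row-major layout are assumptions, not taken from the patent:

    /* Row-major frame buffer addressing: one memory location per pixel. */
    #include <stdint.h>

    uint32_t *pixel_location(uint32_t *vram, int screen_w, int x, int y)
    {
        return &vram[y * screen_w + x];
    }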
  • The number of bits stored at each memory location for each display pixel varies depending on the amount of color resolution required. For example, for word processing applications or display of text, two intensity values are acceptable, so that only a single bit need be stored at each memory location (since the screen pixel is either "on" or "off").
  • For color display, the bits corresponding to the R component are provided to the R driver, the bits representing the green component are provided to the G driver, and the bits representing the blue component are provided to the B driver. These drivers activate the red, green and blue phosphors at the pixel location 190. The bit values for each color, red, green and blue, determine the intensity of that color in the display pixel. By varying the intensities of the red, green and blue components, different colors may be produced at that pixel location.
  • Color information may be represented by color models other than RGB.
  • One such color model is known as the YUV (or Y'CbCr, as specified in ITU-R BT.601) color space, which is used in the commercial color TV broadcasting system. The YUV color space is a recoding of the RGB color space, and can be mapped into the RGB color cube.
  • The RGB to YUV conversion that performs the mapping is defined by the following matrix equation:

    \begin{bmatrix} Y' \\ U \\ V \end{bmatrix} =
    \begin{bmatrix}
       0.299 &  0.587 &  0.114 \\
      -0.147 & -0.289 &  0.436 \\
       0.615 & -0.515 & -0.100
    \end{bmatrix}
    \begin{bmatrix} R' \\ G' \\ B' \end{bmatrix}

  • The inverse of the matrix is used for the reverse conversion.
  • The Y axis of the YUV color model represents the luminance of the display pixel, and matches the luminosity response curve for the human eye. U and V are chrominance values. In a monochrome receiver, only the Y value is used; in a color receiver, all three axes are used to provide display information.
  • For example, an image may be recorded with a color camera, which is an RGB system, and converted to YUV for transmission. The YUV information is then retransformed into RGB information to drive the color display.
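  • For concreteness, the forward conversion above and its inverse can be sketched as follows. This is an illustrative, full-range floating point rendering; the function names and value ranges are assumptions, and ITU-R BT.601 proper applies offsets and scaling to integer component ranges:

    /* Illustrative BT.601-style RGB' <-> Y'UV conversion sketch. */
    typedef struct { double r, g, b; } RGB;   /* components in [0, 1] */
    typedef struct { double y, u, v; } YUV;

    YUV rgb_to_yuv(RGB c)
    {
        YUV o;
        o.y =  0.299 * c.r + 0.587 * c.g + 0.114 * c.b;   /* luma row above */
        o.u = -0.147 * c.r - 0.289 * c.g + 0.436 * c.b;   /* proportional to B' - Y' */
        o.v =  0.615 * c.r - 0.515 * c.g - 0.100 * c.b;   /* proportional to R' - Y' */
        return o;
    }

    /* Inverse of the matrix, as used to drive an RGB display. */
    RGB yuv_to_rgb(YUV c)
    {
        RGB o;
        o.r = c.y + 1.140 * c.v;
        o.g = c.y - 0.395 * c.u - 0.581 * c.v;
        o.b = c.y + 2.032 * c.u;
        return o;
    }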
  • A variety of color formats are used in the prior art for transmitting image and video data over networks.
  • Some examples of existing color formats are H.261 and H.263, which are used in digital telephony, and MPEG1, MPEG2 and MJPEG.
  • These color formats use compression schemes to reduce the amount of data being transmitted.
  • Many color formats use a variation of DCT (discrete cosine transform) compression to perform compression in the frequency domain.
  • A form of variable-length Huffman encoding may also be implemented to reduce bandwidth requirements.
  • Specialized compression/decompression hardware or software is often used to perform the non-trivial conversion of data in these color formats to data for display.
  • FIG. 3 illustrates a sample network system comprising multiple transmitters 300A-300C for sourcing video data and a single receiver 303.
  • Receiver 303 is equipped with one or more display devices for providing video output associated with received video data.
  • Transmitters 300A, 300B and 300C, and receiver 303, are coupled together via network 302, which may be, for example, a local area network (LAN).
  • Transmitter 300A transmits video data along network connection 301A to network 302 using video protocol A. Transmitter 300B transmits video data along network connection 301B to network 302 using video protocol B. Transmitter 300C transmits video data along network connection 301C to network 302 using video protocol C.
  • Receiver 303 may receive video data over network connection 305 from network 302 under any of video protocols A, B or C, as well as any other protocols used by other transmitters connected to network 302, or used by multiple services embodied within one of transmitters 300A-300C.
  • Receiver 303 may be equipped with different video cards (i.e., specialized hardware for video processing) or software plug-ins to support each video protocol, but this increases the complexity of the receiver, and necessitates hardware or software upgrades when new video protocols are developed. For systems wherein it is a goal to minimize processing and hardware requirements for a receiver, the added complexity of supporting multiple protocols is undesirable.
  • A method and apparatus of supporting a video protocol in a network environment is described. In an embodiment of the invention, video processing and hardware requirements associated with a receiver are minimized by specifying a single video protocol for transmission of video data between transmitters and receivers on a network. The protocol specifies a color format that allows for high video quality and minimizes the complexity of the receiver.
  • Transmitters are equipped with transformation mechanisms that provide for conversion of video data into the designated protocol as needed. Compression of the components of the color format is provided to reduce transmission bandwidth requirements.
  • Aspects of the designated protocol compensate for problems associated with transmitting video data over a network. The designated protocol specifies a color format including a luminance value and two chrominance values. Quantized differential coding is applied to the luminance value and subsampling is performed on the chrominance values to reduce transmission bandwidth requirements.
  • Upscaling of video data is performed at the receiver, whereas downscaling is performed at the transmitter.
  • Various display sizes can thus be accommodated with efficient use of network bandwidth.
  • Figure 1 is a diagram of an RGB color space.
  • Figure 2 is a block diagram of a video display apparatus.
  • Figure 3 is a block diagram of a network system having a single receiver and multiple transmitters.
  • Figure 4A is a flow diagram illustrating a luma compression scheme in accordance with an embodiment of the invention.
  • Figure 4B is a flow diagram illustrating a luma decompression scheme in accordance with an embodiment of the invention.
  • Figure 5 is a diagram illustrating a subsampling and upsampling process in accordance with an embodiment of the invention.
  • Figure 6A is a flow diagram illustrating a video data transmission process in accordance with an embodiment of the invention.
  • Figure 6B is a flow diagram illustrating a video data reception process in accordance with an embodiment of the invention.
  • Figure 7 is a block diagram of a computer execution environment.
  • Figure 8 is a block diagram of a human interface device computer system.
  • Figure 9 is a block diagram of an embodiment of a human interface device.

DETAILED DESCRIPTION OF THE INVENTION
  • The invention is a method and apparatus of supporting a video protocol in a network environment. In the following description, numerous specific details are set forth to provide a more thorough description of embodiments of the invention. It will be apparent, however, to one skilled in the art, that the invention may be practiced without these specific details. In other instances, well known features have not been described in detail so as not to obscure the invention.
  • In an embodiment of the invention, a single video protocol is used for transmission of video data between a transmitter and a receiver. The transmitter of the video data is responsible for supplying video data in accordance with the designated protocol. A transmitter and its internal video services are configured to perform any necessary protocol transformations to bring video data into conformance with the designated protocol before transmission to a receiver. Hardware and processing requirements of the receiver are minimized, as only one video protocol need be supported at the receiver.
  • Video data may also be transmitted from the receiver to the transmitter using the designated protocol. The transmitter may then process the video data in the form of the designated protocol, or the transmitter may convert the video data into another video protocol for further processing.
  • The designated protocol is chosen to give high video quality, to satisfactorily encompass all other video protocols while permitting strategic compression based on knowledge of human perception of luminance and chrominance. The high video quality of the designated protocol ensures that any necessary protocol transformations by a transmitter do not result in a significant loss of video quality from the original video data.
  • An example of a protocol that provides high video quality with compression is a protocol specifying a color format with quantized differential coding of the luminance value and subsampling of chrominance values.
  • A transmitter may support video applications using the designated protocol, and the transmitter may be configured with mechanisms, such as hardware cards, software plug-ins or drivers, to convert between other video protocols and the designated protocol, for example using color model matrix transformations.
  • In an embodiment of the invention, data packets are used to transmit variably sized blocks of video data between a transmitter and a receiver using a connectionless datagram scheme. A connectionless scheme means that each packet of video data, i.e., each video block, is processed as an independent unit, and the loss of a data packet does not affect the processing of other data packets. This independence provides for robust video processing even on unreliable networks where packet loss may be commonplace.
  • Some networks are prone to periodic packet loss, i.e., packet loss at regular intervals. This periodic behavior can result in the stagnation of portions of the video display as the same video blocks are repeatedly lost.
  • The spatial order in which video blocks are sent to the receiver for display may be pseudo-randomly determined to disrupt any periodicity in packet performance.
  • The data packets containing video data are provided with a sequence number. The receiver can note when a sequence number is skipped, indicating that the packet was lost during transmission. The receiver can then return to the transmitter a list or range of sequence numbers identifying the lost packet or packets. The transmitter can decide whether to ignore the missed packets, resend the missed packets (such as for still images), or send updated packets (such as for streaming video that may have changed since the packet was lost).
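  • A minimal sketch of the receiver side of this loss detection follows; the report format and function names are assumptions (the protocol only requires that skipped sequence numbers be reported back), and sequence-number wraparound is ignored for brevity:

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical report back to the transmitter, which may then ignore,
       resend, or send updated packets for the missing range. */
    static void report_lost_range(uint32_t first, uint32_t last)
    {
        printf("lost packets %u-%u\n", (unsigned)first, (unsigned)last);
    }

    static uint32_t expected_seq = 0;

    /* Called for each arriving video block packet. */
    void on_packet(uint32_t seq)
    {
        if (seq > expected_seq)                /* gap: packets were skipped */
            report_lost_range(expected_seq, seq - 1);
        expected_seq = seq + 1;
    }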
  • The video data packet comprises the following information:
  • Sequence number - A video stream is processed as a series of blocks of video data. The sequence number provides a mechanism for the receiver to tell the transmitter what sequence numbers have been missed (e.g., due to packet loss), so that the transmitter may determine whether to resend, update or ignore the associated video block.
  • X - The X field designates the x-coordinate of the receiver's display device wherein the first pixel of the video block is to be displayed.
  • Y - The Y field designates the y-coordinate of the receiver's display device wherein the first pixel of the video block is to be displayed.
  • Width - The width field specifies the width of the destination rectangle on the receiver's display device wherein the video block is to be displayed.
  • Height - The height field specifies the height of the destination rectangle on the receiver's display device wherein the video block is to be displayed.
  • Source_w - The source width specifies the width of the video block in pixels. Note that the source width may be smaller than the width of the destination rectangle on the receiver's display device. If this is so, the receiver will upscale the video block horizontally to fill the width of the destination rectangle. The source width should not be larger than the width of the destination rectangle, as this implies downscaling, which should be performed by the transmitter for efficiency.
  • Source_h - The source height specifies the height of the video block in pixels. Note that, as with source_w, the source height may be smaller than the height of the destination rectangle on the receiver's display device. As above, the receiver will upscale the video block vertically to fill the height of the destination rectangle. The source height should not be larger than the height of the destination rectangle, as this implies downscaling, which should be performed by the transmitter for efficiency.
  • Luma encoding - The luma encoding field allows the transmitter to designate a particular luma encoding scheme from a set of specified luma encoding schemes.
  • Chroma_sub_X - This field allows the transmitter to designate the degree of horizontal subsampling performed on the video data chroma values.
  • Chroma_sub_Y - This field allows the transmitter to designate the degree of vertical subsampling performed on the video data chroma values.
  • Video data - The video data includes (source_w * source_h) pixel luma values (Y), and ((source_w / chroma_sub_x) * (source_h / chroma_sub_y)) subsampled chroma value pairs (U, V).
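  • A hypothetical C rendering of this packet layout is given below; the patent names the fields but does not fix their widths, order or packing, so the integer sizes here are assumptions:

    #include <stdint.h>

    typedef struct {
        uint32_t sequence;       /* sequence number of this video block */
        uint16_t x, y;           /* top-left corner of destination rectangle */
        uint16_t width, height;  /* destination rectangle size in pixels */
        uint16_t source_w;       /* source block width;  <= width  */
        uint16_t source_h;       /* source block height; <= height */
        uint8_t  luma_encoding;  /* selected luma encoding scheme */
        uint8_t  chroma_sub_x;   /* horizontal chroma subsampling code */
        uint8_t  chroma_sub_y;   /* vertical chroma subsampling code */
        /* Followed by source_w * source_h luma values and the
           subsampled chroma (U, V) values. */
    } VideoBlockHeader;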
  • The color model of the chosen protocol is specified by the International Telecommunications Union in ITU-R BT.601, an international standard for digital coding of television pictures using video data components Y'CbCr, where Y' is a luminance or "luma" value, Cb (or U) is a first chromaticity or "chroma" value represented as a blue color difference proportional to (B'-Y'), and Cr (or V) is a second chroma value represented as a red color difference proportional to (R'-Y') (note that primed values such as Y' indicate gamma corrected values).
  • The transmitter performs any transformations required to convert the video data into the YUV format. This may include performing the RGB to YUV matrix conversion shown above to convert RGB data. Transformations may also include decompression from other color formats (e.g., H.261, MPEG1, etc.).
  • The receiver can drive an RGB display device by performing the above matrix operation to convert incoming YUV (Y'CbCr) data received from a transmitter into RGB data for display at the display rectangle identified in the data packet. No other color transformations are necessary at the receiver.
  • The receiver is also able to accept RGB data in the same video block format, because RGB data is directly supported in the receiver. For transmission efficiency, however, any sizable video data transfers between a transmitter and receiver should be performed in the YUV color format to take advantage of the compression schemes described below.
  • In each data packet containing a video block there are (source_w * source_h) luma values, one for each pixel. If the luma encoding field indicates that no encoding is being performed, the luma values are unsigned eight-bit values. If, however, luma encoding is indicated in the luma encoding field, the luma values are encoded to achieve a compression ratio of 2:1. In an embodiment of the invention, the luma value "Y" is compressed using a quantized differential coding (QDC) scheme described below. In other embodiments, other compression schemes may be specified in the luma encoding field.
  • The luma difference values can be satisfactorily quantized to one of sixteen quantization levels, each of which is identified by a four-bit code word. The quantization is non-linear, with more quantization levels near zero, where luma differences between consecutive pixels are more likely to occur. The luma difference quantization is performed according to a fixed quantization table that maps each four-bit code word to a quantization level.
  • Figure 4A is a flow diagram describing how the luma compression process is performed in accordance with an embodiment of the invention.
  • The scheme is based on a subtraction of a "last_value" from the current pixel luma value to generate the luma difference. "Last_value" is used to model the luma value of the preceding pixel. The "last_value" is modeled to account for the previous quantized luma difference rather than to match the actual luma value of the last pixel. The modeled "last_value" in the compression process therefore matches the corresponding modeled "last_value" extracted in the decompression process.
  • The first luma value in each row has no preceding luma value with which to form a difference. Therefore, in step 400, an initial "last_value" is assigned from the middle of the luma range. In step 401, the first luma value in the row of pixels is set as the current luma value.
  • In step 402, the "last_value" is subtracted from the current luma value to generate a current luma difference value. The current luma difference is applied to a quantization function, in step 403, that outputs the quantized difference code. In step 404, the difference code is placed in the video block data packet.
  • In step 405, the quantized difference level corresponding to the difference code is determined, and, in step 406, the "last_value" is updated by incrementing it by the quantized difference level. In step 407, "last_value" is clamped to prevent overflow; the clamping function restricts "last_value" to the range of valid unsigned eight-bit luma values, [0, 255].
  • In step 408, if there are more pixel luma values in the row, the process flows to step 409, wherein the next luma value is set as the current luma value. After step 409, the process returns to step 402. If there is no further pixel luma value in the row at step 408, then, in step 410, a determination is made whether there are further rows to process in the video block. If there are further rows to compress, the next row is designated in step 411 and the process returns to step 400. If there are no further rows at step 410, the luma compression is completed for the current video block.
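  • The row-oriented flow of Figures 4A and 4B can be sketched as follows. The patent text does not reproduce its sixteen-entry quantization table, so the levels below are invented purely for illustration (non-linear, denser near zero); everything else follows the steps above, including the shared "last_value" model that keeps encoder and decoder in lockstep:

    #include <stdint.h>
    #include <stdlib.h>

    /* Illustrative 16-level table only; the patent's actual levels differ. */
    static const int qdc_level[16] = {
        -84, -57, -35, -20, -11, -5, -2, 0,
          1,   3,   7,  14,  26, 44, 69, 100
    };

    static int clamp255(int v) { return v < 0 ? 0 : (v > 255 ? 255 : v); }

    /* Step 403: pick the four-bit code whose level best matches the difference. */
    static int quantize(int diff)
    {
        int best = 0;
        for (int i = 1; i < 16; i++)
            if (abs(diff - qdc_level[i]) < abs(diff - qdc_level[best]))
                best = i;
        return best;
    }

    /* Steps 400-411 for one row. Codes are stored one per byte here for
       clarity; a real packer would store two 4-bit codes per byte (2:1). */
    void qdc_encode_row(const uint8_t *luma, uint8_t *codes, int n)
    {
        int last_value = 128;                    /* middle of the luma range */
        for (int i = 0; i < n; i++) {
            int code = quantize((int)luma[i] - last_value);
            codes[i] = (uint8_t)code;
            last_value = clamp255(last_value + qdc_level[code]);
        }
    }

    /* Steps 412-421 for one row; mirrors the encoder's "last_value" model. */
    void qdc_decode_row(const uint8_t *codes, uint8_t *luma, int n)
    {
        int last_value = 128;
        for (int i = 0; i < n; i++) {
            last_value = clamp255(last_value + qdc_level[codes[i]]);
            luma[i] = (uint8_t)last_value;
        }
    }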
  • A luma decompression process is illustrated in the flow diagram of Figure 4B. In step 412, the "last_value" is set to the same midrange value as is used at the beginning of a row in the compression scheme. In step 413, the first luma difference code is set as the current luma difference code.
  • The quantized difference value is determined from the current luma difference code in step 414. In step 415, the "last_value" is incremented by the quantized difference value, and, in step 416, "last_value" is clamped to prevent overflow. In step 417, "last_value," now representing the decompressed current luma value, is written to a buffer.
  • If, in step 418, there are further luma difference codes in the current row of the video block, the next difference code is set as the current luma difference code in step 419, and the process returns to step 414. If there are no further luma difference codes in the current row, the process continues to step 420. In step 420, if there are no further rows in the video block, decompression is complete for the current video block. If there are further rows of luma difference codes, the next row of luma difference codes is set as the current row in step 421, and the process returns to step 412.

Chroma Compression
  • The human eye is less sensitive to chroma information than to luma information, particularly in a spatial sense. For example, if, in generating an image, some of the chroma information is spread beyond the actual edges of an object in the image, the human eye will typically pick up on the edge cues provided by the luma information and overlook the inaccuracies in the spatial location of the chroma information. For this reason, some latitude can be taken with the manner in which chroma information is provided. Specifically, subsampling may be performed without significantly degrading visual quality. Subsampling may consist of sampling a single chroma value from an array of neighboring pixels, for example by averaging the chroma values over the array as described below.
  • The amount of chroma information is specified by the chroma_sub_X and chroma_sub_Y fields in the video block data packet. If the values for both of those fields are zero, then there is no chroma information and the video block is monochrome, i.e., luma only.
  • One embodiment of chroma subsampling is shown in the table below. Chroma_sub_X and chroma_sub_Y independently specify subsampling along respective axes. Several subsampling arrangements achieved by different combinations of chroma_sub_X and chroma_sub_Y, as defined above, are:

    chroma_sub_X   chroma_sub_Y   one chroma value per    compression
         0              0         no chroma data
         0              1         not permitted
         1              0         not permitted
         1              1         pixel                       1:1
         2              1         1 x 2 pixel array           2:1
         1              2         2 x 1 pixel array           2:1
         3              1         1 x 4 pixel array           4:1
         1              3         4 x 1 pixel array           4:1
         3              2         2 x 4 pixel array           8:1
         2              3         4 x 2 pixel array           8:1
         3              3         4 x 4 pixel array          16:1
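  • Reading the table, field values 1, 2 and 3 select subsampling factors of 1, 2 and 4 along an axis (a power-of-two progression), and 0 means no chroma data at all. This mapping is an inference from the table rather than a formula stated in the text; a sketch under that reading:

    /* Inferred mapping of a chroma_sub field value to a subsampling factor. */
    int chroma_factor(int field)      /* field: chroma_sub_X or chroma_sub_Y */
    {
        return field ? 1 << (field - 1) : 0;   /* 0 means no chroma data */
    }

    /* Example: chroma_sub_X = 3, chroma_sub_Y = 2 gives one chroma value
       per 2 x 4 pixel array, i.e. the 8:1 row of the table. */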
  • Subsampling is performed as the video block data packet is being packed, and may occur before or after luma compression, since luma and chroma compression are substantially independent.
  • At the receiver, the chroma subsamples are upsampled prior to being converted to RGB. Upsampling may be accomplished by taking the subsampled chroma information and duplicating the chroma values for each pixel in the associated subsampling matrix.
  • In the example of Figure 5, 8 x 8 pixel array 500 represents the original video data prior to subsampling. 4 x 2 pixel array 501 represents the video data after subsampling by the transmitter, and includes the data that would be transmitted to the receiver. 8 x 8 pixel array 502 represents the video data after upsampling at the receiver.
  • The subsampling matrices are identified in array 500 as those pixel cells having the same index number. For example, all of the pixel cells containing a "1" are in the same subsampling matrix.
  • 4 x 2 array 501 contains the subsampled data from array 500. The chroma values associated with those pixels with index "1" are averaged into chroma average value A1 (A1 comprises an averaged U value and an averaged V value), which is placed into the first cell of array 501. The chroma values for those pixels with index "2" are averaged into chroma average value A2 and placed into the second location in array 501. The other subsampling matrices, indexed "3" through "8", are averaged similarly. The compression ratio seen between array 500 and array 501 is 8:1.
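  • The averaging just described, and the duplication-based upsampling described next, might look as follows for a single chroma plane; the row-major layout and truncating average are assumptions. For the Figure 5 example, w = h = 8 with 4-wide by 2-tall subsampling matrices (fx = 4, fy = 2), giving eight averages and an 8:1 ratio:

    #include <stdint.h>

    /* Average each fx-wide by fy-tall subsampling matrix into one value. */
    void chroma_subsample(const uint8_t *src, int w, int h,
                          uint8_t *dst, int fx, int fy)
    {
        for (int by = 0; by < h / fy; by++)
            for (int bx = 0; bx < w / fx; bx++) {
                int sum = 0;
                for (int y = 0; y < fy; y++)
                    for (int x = 0; x < fx; x++)
                        sum += src[(by * fy + y) * w + (bx * fx + x)];
                dst[by * (w / fx) + bx] = (uint8_t)(sum / (fx * fy));
            }
    }

    /* Upsample by duplicating each average over its original matrix. */
    void chroma_upsample(const uint8_t *src, int w, int h,
                         uint8_t *dst, int fx, int fy)
    {
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
                dst[y * w + x] = src[(y / fy) * (w / fx) + (x / fx)];
    }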
  • Array 501 is upsampled into array 502 by placing the averaged chroma values A1-A8 into the positions corresponding to the respective original subsampling matrices. For example, averaged chroma value A1 is placed into each of the pixels in the upper left corner of array 502 shown as containing "A1." The insensitivity of the human eye to spatial errors in chroma information allows the averaged chroma values to provide satisfactory viewing results.

Upscaling And Downscaling Of Video Data
  • The pixel array size of the source video block may differ from the size of the destination rectangle on the receiver's display. This size variation allows a receiver with a large display to "blow up," or upscale, a small video scene to make better use of the display resources. For example, a receiver may wish to upscale a 640 x 480 video stream to fill a 1024 x 1024 area on a large display device. Also, a receiver may have a smaller display than the size of a video stream. In this case, the video stream should be scaled down to be fully visible on the small display.
  • In an embodiment of the invention, upscaling is performed by the receiver, whereas downscaling is performed by the transmitter. One reason for this segregation of scaling duties is that scaled-down video data requires lower network bandwidth to transmit. By downscaling before transmission, the transmitter avoids sending video data that would later be discarded by the receiver. This also permits some simplification of the receiver, in that resources, such as software code for downscaling video data, are not needed at the receiver.
  • Upscaling typically involves duplication of video data. It would be inefficient to send duplicated video data over a network. Therefore, the receiver performs all upscaling operations after receipt of the video data in its smaller form. Upscaling of video data is supported in the fields associated with the video data packet. Specifically, the video protocol provides separate fields for specifying the video source pixel array size and the destination display rectangle size. The amount of horizontal scaling is (width / source_w), and the amount of vertical scaling is (height / source_h).
  • Upscaling is performed after the video data has been decompressed and transformed into RGB format, though in certain embodiments upscaling may precede, or be combined with, the decompression steps. The receiver expands the video data vertically, horizontally or both as needed to make the video data fill the designated display rectangle. Expanding video data may be performed as simply as doubling pixel values, but more advanced image filtering techniques may be used to effect re-sampling of the image for better display quality.
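  • The simplest such expansion, pixel duplication, can be sketched as below (nearest-neighbour sampling; the packed 32-bit RGB pixel format is an assumption). The scale factors width/source_w and height/source_h from the packet fields appear as the index ratios:

    #include <stdint.h>

    /* Fill a width x height destination rectangle from a source_w x source_h
       block by duplicating pixels (upscaling: width >= source_w, etc.). */
    void upscale(const uint32_t *src, int source_w, int source_h,
                 uint32_t *dst, int width, int height)
    {
        for (int y = 0; y < height; y++) {
            int sy = y * source_h / height;     /* inverse of vertical scale */
            for (int x = 0; x < width; x++) {
                int sx = x * source_w / width;  /* inverse of horizontal scale */
                dst[y * width + x] = src[sy * source_w + sx];
            }
        }
    }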
  • FIG. 6A is a flow diagram illustrating how a transmitter processes video data in accordance with an embodiment of the invention.
  • In step 600, the transmitter acquires video data for transmission to a receiver. The video data may be acquired by any mechanism, such as capture of a video signal using a hardware capture board, generation of video data by a video service, or input of video data from a video input device such as a video camera.
  • In step 601, if necessary, the video data is decompressed or converted into YUV color format in accordance with the established protocol. In step 602, the transmitter downscales the video data if it determines that downscaling is needed.
  • The luma values of the YUV video data are compressed, in step 603, using the quantized differential coding (QDC) scheme described herein, and loaded into a data packet. In step 604, the transmitter subsamples the chroma values of the YUV video data and loads the subsampled chroma values into the data packet.
  • The completed data packet containing the video data for a video block is sent to a receiver in step 605. After transmitting the data packet, the process returns to step 600.
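  • Steps 603 and 604 can be strung together from the earlier sketches as shown below; the output layout (per-row luma codes followed by the two subsampled chroma planes) and the function names are assumptions, not the patent's packet format:

    #include <stdint.h>

    /* Illustrative routines sketched earlier. */
    void qdc_encode_row(const uint8_t *luma, uint8_t *codes, int n);
    void chroma_subsample(const uint8_t *src, int w, int h,
                          uint8_t *dst, int fx, int fy);

    /* Pack one video block: QDC luma (step 603), then box-averaged
       U and V planes (step 604). Sending the packet is step 605. */
    void pack_video_block(const uint8_t *y_plane, const uint8_t *u_plane,
                          const uint8_t *v_plane, int w, int h,
                          int fx, int fy, uint8_t *out)
    {
        for (int row = 0; row < h; row++)
            qdc_encode_row(y_plane + row * w, out + row * w, w);
        uint8_t *chroma = out + w * h;
        chroma_subsample(u_plane, w, h, chroma, fx, fy);
        chroma_subsample(v_plane, w, h,
                         chroma + (w / fx) * (h / fy), fx, fy);
    }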
  • FIG. 6B is a flow diagram illustrating how a receiver processes video data in accordance with an embodiment of the invention.
  • In step 606, the receiver receives a compressed/subsampled YUV video block data packet from the transmitter. The receiver decompresses the luma values of the data packet in step 607, and upsamples the chroma values in step 608.
  • With full YUV video data, the receiver performs a color transformation to convert the YUV video data to RGB data. If the destination display rectangle noted in the data packet header is larger than the source video data, the receiver performs any necessary upscaling to fill the designated display rectangle with the source video data.
  • The video data is loaded into a video buffer for display on the receiver's display device. After loading the video data into the video buffer, the process returns to step 606.
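  • The receive side mirrors the transmit path. A sketch using the same illustrative routines is given below (steps 607 and 608); afterwards, the full YUV data would be converted to RGB and upscaled as sketched earlier:

    #include <stdint.h>

    void qdc_decode_row(const uint8_t *codes, uint8_t *luma, int n);
    void chroma_upsample(const uint8_t *src, int w, int h,
                         uint8_t *dst, int fx, int fy);

    /* Unpack one video block: decode luma rows (step 607), then duplicate
       the subsampled U and V planes back to full size (step 608). */
    void unpack_video_block(const uint8_t *in, int w, int h, int fx, int fy,
                            uint8_t *y_plane, uint8_t *u_plane,
                            uint8_t *v_plane)
    {
        for (int row = 0; row < h; row++)
            qdc_decode_row(in + row * w, y_plane + row * w, w);
        const uint8_t *chroma = in + w * h;
        chroma_upsample(chroma, w, h, u_plane, fx, fy);
        chroma_upsample(chroma + (w / fx) * (h / fy), w, h, v_plane, fx, fy);
    }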
  • An embodiment of the invention can be implemented as computer software in the form of computer readable code executed on a general purpose computer such as computer 700 illustrated in Figure 7, or in the form of bytecode class files executable within a Java™ runtime environment running on such a computer.
  • A keyboard 710 and mouse 711 are coupled to a bi-directional system bus 718. The keyboard and mouse are for introducing user input to the computer system and communicating that user input to processor 713. Other suitable input devices may be used in addition to, or in place of, the mouse 711 and keyboard 710.
  • I/O (input/output) unit 719, coupled to bi-directional system bus 718, represents such I/O elements as a printer, A/V (audio/video) I/O, etc.
  • Computer 700 includes a video memory 714, main memory 715 and mass storage 712, all coupled to bi-directional system bus 718 along with keyboard 710, mouse 711 and processor 713.
  • The mass storage 712 may include both fixed and removable media, such as magnetic, optical or magneto-optical storage systems or any other available mass storage technology.
  • Bus 718 may contain, for example, thirty-two address lines for addressing video memory 714 or main memory 715. The system bus 718 also includes, for example, a 32-bit data bus for transferring data between and among the components, such as processor 713, main memory 715, video memory 714 and mass storage 712. Alternatively, multiplexed data/address lines may be used instead of separate data and address lines.
  • The processor 713 is a microprocessor manufactured by Motorola, such as a 680X0 processor, a microprocessor manufactured by Intel, such as an 80X86 or Pentium processor, or a SPARC™ microprocessor from Sun Microsystems™, Inc.
  • Main memory 715 is comprised of dynamic random access memory (DRAM).
  • Video memory 714 is a dual-ported video random access memory. One port of the video memory 714 is coupled to video amplifier 716. The video amplifier 716 is used to drive the cathode ray tube (CRT) raster monitor 717.
  • Video amplifier 716 is well known in the art and may be implemented by any suitable apparatus.
  • This circuitry converts pixel data stored in video memory 714 to a raster signal suitable for use by monitor 717.
  • Monitor 717 is a type of monitor suitable for displaying graphic images.
  • Alternatively, the video memory could be used to drive a flat panel or liquid crystal display (LCD), or any other suitable data presentation device.
  • Computer 700 may also include a communication interface 720 coupled to bus 718.
  • Communication interface 720 provides a two-way data communication coupling via a network link 721 to a local network 722.
  • For example, if communication interface 720 is an integrated services digital network (ISDN) card, communication interface 720 provides a data communication connection to the corresponding type of telephone line, which comprises part of network link 721. If communication interface 720 is a local area network (LAN) card, communication interface 720 provides a data communication connection via network link 721 to a compatible LAN.
  • Communication interface 720 could also be a cable modem or wireless interface. In any such implementation, communication interface 720 sends and receives electrical, electromagnetic or optical signals which carry digital data streams representing various types of information.
  • Network link 721 typically provides data communication through one or more networks to other data devices.
  • Network link 721 may provide a connection through local network 722 to local server computer 723 or to data equipment operated by an Internet Service Provider (ISP) 724.
  • ISP 724 in turn provides data communication services through the world wide packet data communication network now commonly referred to as the "Internet" 725.
  • Internet 725 uses electrical, electromagnetic or optical signals which carry digital data streams.
  • Computer 700 can send messages and receive data, including program code, through the network(s), network link 721, and communication interface 720.
  • Remote server computer 726 might transmit a requested code for an application program through Internet 725, ISP 724, local network 722 and communication interface 720.
  • The received code may be executed by processor 713 as it is received, and/or stored in mass storage 712 or other non-volatile storage for later execution. In this manner, computer 700 may obtain application code in the form of a carrier wave.
  • Application code may be embodied in any form of computer program product.
  • A computer program product comprises a medium configured to store or transport computer readable code or data, or in which computer readable code or data may be embedded.
  • Some examples of computer program products are CD-ROM disks, ROM cards, floppy disks, magnetic tapes, computer hard drives, servers on a network, and carrier waves.
  • The invention has application to computer systems where the data is provided through a network. The network can be a local area network, a wide area network, the Internet, the World Wide Web, or any other suitable network configuration.
  • One embodiment of the invention is used in the computer system configuration referred to herein as a human interface device computer system. In this system, the functionality is partitioned between a display and input device, and data sources or services. The display and input device is a human interface device (HID). The partitioning of this system is such that state and computation functions have been removed from the HID and reside on data sources or services. In operation, one or more services communicate with one or more HIDs through some interconnect fabric, such as a network.
  • An example of such a system is illustrated in Figure 8. Referring to Figure 8, the system consists of computational service providers 800 communicating data through interconnect fabric 801 to HIDs 802.
Computational Service Providers

  • In the HID system, the computational power and state maintenance are found in the service providers, or services.
  • The services are not tied to a specific computer, but may be distributed over one or more traditional desktop systems such as described in connection with Figure 7, or over traditional servers. One computer may have one or more services, or a service may be implemented by one or more computers.
  • The service provides computation, state, and data to the HIDs, and the service is under the control of a common authority or manager. In Figure 8, the services are found on computers 810, 811, 812, 813, and 814. In an embodiment of the invention, any of computers 810-814 could be implemented as a transmitter.
  • Examples of services include X11/Unix services, archived video services, Windows NT services, Java™ program execution services, and others. A service herein is a process that provides output data and responds to user requests and input.
  • The interconnection fabric is any of multiple suitable communication paths for carrying data between the services and the HIDs. In one embodiment, the interconnect fabric is a local area network implemented as an Ethernet network. Any other local network may also be utilized. The invention also contemplates the use of wide area networks, the Internet, the World Wide Web, and others. The interconnect fabric may be implemented with a physical medium such as a wire or fiber optic cable, or it may be implemented in a wireless environment.
  • HIDs - The HID is the means by which users access the computational services provided by the services.
  • Figure 8 illustrates HIDs 821, 822, and 823.
  • An HID consists of a display 826, a keyboard 824, a mouse 825, and audio speakers 827. The HID includes the electronics needed to interface these devices to the interconnection fabric and to transmit data to and receive data from the services. In an embodiment of the invention, an HID is implemented as a receiver.
  • A block diagram of the HID is illustrated in Figure 9. The components of the HID are coupled internally to a PCI bus 912. A network control block 902 communicates with the interconnect fabric, such as an Ethernet, through line 914. An audio codec 903 receives audio data on interface 916 and is coupled to block 902. USB data communication is provided on lines 913 to USB controller 901.
  • An embedded processor 904 may be, for example, a Sparc2ep with coupled flash memory 905 and DRAM 906. The USB controller 901, network controller 902 and embedded processor 904 are all coupled to the PCI bus 912. The video controller 909 may be, for example, an ATI RagePro+ frame buffer controller that provides SVGA output on line 915. NTSC data is provided into and out of the video controller through video decoder 910 and video encoder 911, respectively. A smartcard interface 908 may also be coupled to the video controller 909.

Abstract

A method and apparatus of supporting a video protocol in a network environment is described. In one embodiment of the invention, the video processing and hardware requirements associated with a receiver are minimized by specifying a single video protocol for the transmission of video data between transmitters and receivers on a network. This protocol specifies a color format that allows for high video quality and minimizes the complexity of the receiver. Transmitters are equipped with transformation mechanisms that provide for conversion of video data into the designated protocol as needed. Compression of the components of the color format is provided to reduce transmission bandwidth requirements. In one embodiment, aspects of the designated protocol compensate for problems associated with transmitting video data over a network. The designated protocol specifies a color format comprising a luminance value and two chrominance values. Quantized differential coding is applied to the luminance value, and subsampling is performed on the chrominance values, to reduce transmission bandwidth requirements. Upscaling of video data is performed at the receiver, and downscaling at the transmitter. Various display sizes can thus be accommodated with efficient use of network bandwidth.
EP99918718A 1998-04-20 1999-04-20 Method and apparatus of supporting a video protocol in a network environment Ceased EP1074091A2 (fr)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US6349298A 1998-04-20 1998-04-20
US63492 1998-04-20
PCT/US1999/008673 WO1999055013A2 (fr) 1998-04-20 1999-04-20 Method and apparatus of supporting a video protocol in a network environment

Publications (1)

Publication Number Publication Date
EP1074091A2 (fr) 2001-02-07

Family

ID=22049572

Family Applications (1)

Application Number Title Priority Date Filing Date
EP99918718A 1998-04-20 1999-04-20 Method and apparatus of supporting a video protocol in a network environment Ceased EP1074091A2 (fr)

Country Status (5)

Country Link
EP (1) EP1074091A2 (fr)
JP (1) JP2002512470A (fr)
AU (1) AU760841B2 (fr)
CA (1) CA2329426A1 (fr)
WO (1) WO1999055013A2 (fr)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4288398B2 (ja) 2000-01-18 2009-07-01 株式会社ニコン Image recording apparatus, image reproducing apparatus, and recording medium storing an image processing program
JP4834917B2 (ja) * 2001-05-14 2011-12-14 株式会社ニコン Image encoding apparatus and image server apparatus
US20040161039A1 (en) * 2003-02-14 2004-08-19 Patrik Grundstrom Methods, systems and computer program products for encoding video data including conversion from a first to a second format
FI115587B (fi) * 2003-12-03 2005-05-31 Nokia Corp Method and apparatus for downscaling a digital matrix image
KR100754388B1 (ko) 2003-12-27 2007-08-31 삼성전자주식회사 Residue image down/up sampling method and apparatus, and image encoding/decoding method and apparatus using the same
EP2224724A3 (fr) * 2003-12-27 2012-04-11 Samsung Electronics Co., Ltd. Procédé de codage et de décodage utilisant un échantillonnage du résidue
WO2008044637A1 (fr) 2006-10-10 2008-04-17 Nippon Telegraph And Telephone Corporation Video encoding and decoding methods, apparatus therefor, programs therefor, and storage medium containing the programs
JP6282763B2 (ja) * 2012-09-21 2018-02-21 株式会社東芝 Decoding device, encoding device, decoding method, and encoding method
US20220148133A1 (en) * 2019-03-25 2022-05-12 Sony Interactive Entertainment Inc. Image display control device, transmitting device, image display control method, and program

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5488570A (en) * 1993-11-24 1996-01-30 Intel Corporation Encoding and decoding video signals using adaptive filter switching criteria
EP1024664B1 (fr) * 1993-12-24 2002-05-29 Sharp Kabushiki Kaisha Dispositif de stockage et récupération de données d'image
US5550847A (en) * 1994-10-11 1996-08-27 Motorola, Inc. Device and method of signal loss recovery for realtime and/or interactive communications
CA2168641C (fr) * 1995-02-03 2000-03-28 Tetsuya Kitamura Systeme de codage-decodage de donnees d'imagerie
US6389174B1 (en) * 1996-05-03 2002-05-14 Intel Corporation Video transcoding with interim encoding format

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See references of WO9955013A2 *

Also Published As

Publication number Publication date
AU3656799A (en) 1999-11-08
AU760841B2 (en) 2003-05-22
WO1999055013A3 (fr) 2000-04-27
WO1999055013A2 (fr) 1999-10-28
CA2329426A1 (fr) 1999-10-28
JP2002512470A (ja) 2002-04-23

Similar Documents

Publication Publication Date Title
US11973982B2 (en) Color volume transforms in coding of high dynamic range and wide color gamut sequences
US7215345B1 (en) Method and apparatus for clipping video information before scaling
US5587928A (en) Computer teleconferencing method and apparatus
US6141693A (en) Method and apparatus for extracting digital data from a video stream and using the digital data to configure the video stream for display on a television set
KR101130422B1 (ko) 이미지 정보를 처리하기 위한 방법 및 컴퓨터-판독가능 메모리 장치
US4897799A (en) Format independent visual communications
US7184057B2 (en) Systems and methods for providing color management
US5724450A (en) Method and system for color image compression in conjunction with color transformation techniques
US20050099434A1 (en) Compositing images from multiple sources
US20210377542A1 (en) Video encoding and decoding method, device, and system, and storage medium
US6694379B1 (en) Method and apparatus for providing distributed clip-list management
US20060050076A1 (en) Apparatus for and method of generating graphic data, and information recording medium
JP2001103331A (ja) Color image data management method and apparatus
AU760841B2 (en) Method and apparatus of supporting a video protocol in a network environment
US7430327B2 (en) Image processing apparatus, image processing program, and storage medium
US5519439A (en) Method and apparatus for generating preview images
US8411740B2 (en) System and method for low bandwidth display information transport
EP1036375B1 (fr) Traitement video dans des ordinateurs personnels au moyen d'un cube couleur accorde de maniere statistique
Liou et al. A new microcomputer based imaging system with C³ technique
JP3070683B2 (ja) Image transmission method and image transmission apparatus using the method
Hofmann The modelling of images for communication in multimedia environments and the evolution from the image signal to the image document
Furht et al. The Problem of Video Compression
Umemura et al. Real-time Transmission and Software Decompression of Digital Video in a Workstation
JPH11331853A (ja) Video distribution apparatus and system

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20001113

AK Designated contracting states

Kind code of ref document: A2

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE

17Q First examination report despatched

Effective date: 20010205

RAP1 Party data changed (applicant data changed or rights of an application transferred)

Owner name: SUN MICROSYSTEMS, INC.

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN REFUSED

18R Application refused

Effective date: 20040516

REG Reference to a national code

Ref country code: HK

Ref legal event code: WD

Ref document number: 1035972

Country of ref document: HK