GB2331649A - Image compresssion system - Google Patents

Image compresssion system

Info

Publication number
GB2331649A
GB2331649A (application GB9724807A)
Authority
GB
United Kingdom
Prior art keywords
intensity
pixels
run length
areas
spatial
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB9724807A
Other versions
GB9724807D0 (en)
Inventor
Dilip Daniel James
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to GB9724807A priority Critical patent/GB2331649A/en
Publication of GB9724807D0 publication Critical patent/GB9724807D0/en
Publication of GB2331649A publication Critical patent/GB2331649A/en
Withdrawn legal-status Critical Current


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/90 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using coding techniques not provided for in groups H04N19/10-H04N19/85, e.g. fractals
    • H04N19/93 Run-length coding
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

A digital image compression system, capable of supporting full motion digital video, encodes the signals within a frame and further compresses by comparison with adjacent frames to eliminate temporal redundancy. In the encoding stage, an input signal is inspected and pixels identified as one of: 1) High frequency, low intensity elements of small spatial areas (1-2 pixels) (pixels 'c'); 2) Intermediate frequency elements of low intensity, larger spatial area than 1) (pixels 'd'); 3) Low frequency, high intensity elements represented by large spatial areas (pixels 'a'); 4) Isolated high intensity areas representing boundaries such as at the edge of objects (pixels 'b'). The four types of pixel are encoded as follows: 1) As zero (or, if a row of zeroes is present, by run length encoding); 2) By run length encoding (for example with 8 bits for intensity and 4 bits for duration); 3) By run length encoding; 4) By 8 bits per pixel. Temporal redundancy reduction may be effected by the apparatus of Figure 3 (not shown), where the pixels in a current (b) and preceding (a) frame memory are compared by simultaneous clocking and the co-ordinates and nature of any differences are temporarily recorded (c). 1) Area marked (a), representative of large spatial dimension and high intensity: run length encode. 2) Area marked (b), representative of boundary areas with small spatial dimensions and high intensity: encode as 8 bit intensity value. 3) Area marked (c), representative of low intensity and small spatial dimension: encode in 1 bit as zero. 4) Area marked (d), representative of medium intensity and intermediate spatial dimension: run length encode.

Description

DIGITAL IMAGE COMPRESSION SYSTEM
This is an application for grant of patent rights for an invention capable of supporting full motion digital video. The system may be implemented using hardware or firmware/software.
Background of the invention:
In order to make possible the use of full motion digital video, either for transmission purposes or for use on a desk-top computer, it is necessary that effective means are found for compressing the visual data. This is because the bandwidth needed to transmit uncompressed digital data would be too large and unwieldy to handle. Typically, an 8 bit NTSC system sampling at 3 times the chrominance sub-carrier frequency of 3.579 MHz, i.e. at about 10.74 MHz, would need a bit rate of 8 x 10.74 MHz, or about 85.9 Mb/s. If we take the Nyquist rate of half the bit-rate, the bandwidth required would be about 42.95 MHz; if we add to this the bits needed for checking etc., the figure can be rounded off to 50 MHz. An analog system, on the other hand, can handle the same information using a bandwidth of only 5 MHz. Because of this extremely large bandwidth it is not possible to transmit uncompressed digital video signals over normal transmission channels, although dedicated satellite channels or fibre optic cables may be used for the purpose. The storage of this uncompressed video information in computer memory is also precluded by the sheer volume of information and the storage space needed. Thus if digital video transmissions are to become a reality, some efficient methods of compression of data have to be utilised.
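As a rough arithmetic check of the figures quoted above (a minimal sketch in Python; the exact sub-carrier value of 3.579545 MHz and the rounding allowance for checking bits are illustrative assumptions):

```python
# Rough check of the uncompressed NTSC bit-rate figures quoted above.
subcarrier_mhz = 3.579545            # NTSC colour sub-carrier
sampling_mhz = 3 * subcarrier_mhz    # 3 x sub-carrier sampling   ~= 10.74 MHz
bit_rate_mbps = 8 * sampling_mhz     # 8 bits per sample          ~= 85.9 Mb/s
nyquist_bw_mhz = bit_rate_mbps / 2   # Nyquist bandwidth          ~= 42.95 MHz

print(f"sampling  ~ {sampling_mhz:.2f} MHz")
print(f"bit rate  ~ {bit_rate_mbps:.1f} Mb/s")
print(f"bandwidth ~ {nyquist_bw_mhz:.2f} MHz (rounded to ~50 MHz with overhead)")
```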
Several methods have been introduced which support full motion digital video. The most widely used systems at present utilise methods of block encoding to implement the Discrete Cosine Transform (DCT) method of compression. In this system the screen is divided into blocks, the most usual configuration being 8 x 8 pixel blocks; this is necessary because smaller blocks of data can be handled faster and more efficiently than trying to process data a whole screen at a time. The block encoding technique based on the DCT first transforms the time domain signal, which represents different magnitudes of the signal strength at different times, into a set of frequency values that represent exactly the same signal; the process is reversible. The compression, termed lossy compression since it involves a certain loss of data, is achieved by eliminating, or representing as zero, all high frequency components of the signal. The high frequency component of the video signal has been found to represent low intensity values which are of less importance when describing the picture. By the process of turning all values representative of higher frequencies to zero, it is possible to achieve fairly high rates of compression, at an average of 60%, without appreciable loss in the picture quality. A built-in quality factor can be utilised which improves compression at the cost of picture quality. The above enumerated system works well but has certain inherent drawbacks due to the complexity of the hardware/software needed to implement it; another major shortcoming arises due to the presence of artifacts and distortions at the boundaries of the blocks. An embodiment of this system as applied to full motion digital video may be found in EP 0222592, and a full description of the process is given in 'The Data Compression Book' by Mark Nelson & Jean-Loup Gailly. 'Image Processing' by M. A. Sid-Ahmed gives a fuller description with source code and possible hardware configurations.
Another system dealing with the support of full motion digital video is enumerated in US 5298992 and GB 2291553 A. These systems take advantage of the high degree of similarity between one frame of full motion digital video and the next. According to this system there are four memory areas containing, respectively, a preceding image, a current image (received from an image capture device such as a video camera), an intermediate image (the result of an XOR operation carried out between the preceding image and the current image) and the encoded current image. The system works as follows. A pixel by pixel comparison is carried out between the preceding image and the current image, approximating to an XOR operation. The result of the XOR operation is stored as a temporary intermediate image, in which areas of similarity between the preceding image and the current image are represented by zeros and areas of difference are represented by pixel information. In the final stage, compression of data is achieved by using a run length encoding system to compress the large runs of zeros in the intermediate image, and the resultant encoded image is stored to memory. During decoding the process is reversed: the encoded current image is decoded and temporarily stored as the intermediate image, an XOR operation is carried out between the preceding image and the intermediate image, the resultant image, representative of the unencoded original current image, is sent to the output, and the process is repeated with the next current encoded image. The system outlined above has the advantage of being fast and uncomplicated; however, compression is relatively poor, and it has the disadvantage, in common with the block encoding systems enumerated earlier, that individual frames cannot be accessed independently, nor can data be inserted or manipulated.
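The following is a minimal sketch of the kind of frame-differencing described above; the frame representation (a flat list of pixel values), the equality test standing in for the XOR approximation, and the (value, run) pair format are illustrative assumptions, not the actual coding of US 5298992 or GB 2291553 A.

```python
from typing import List, Tuple

def difference_image(prev: List[int], curr: List[int]) -> List[int]:
    """Approximate the XOR stage: identical pixels become zero,
    differing pixels keep the current-frame value."""
    return [0 if p == c else c for p, c in zip(prev, curr)]

def rle_zero_runs(inter: List[int]) -> List[Tuple[int, int]]:
    """Run-length encode the intermediate image as (value, run_length) pairs,
    so the long zero runs produced by similar frames compress well."""
    out: List[Tuple[int, int]] = []
    for v in inter:
        if out and out[-1][0] == v:
            out[-1] = (v, out[-1][1] + 1)
        else:
            out.append((v, 1))
    return out

# Example: two nearly identical scan lines
prev = [10, 10, 10, 10, 10, 10, 10, 10]
curr = [10, 10, 10, 97, 98, 10, 10, 10]
encoded = rle_zero_runs(difference_image(prev, curr))
# -> [(0, 3), (97, 1), (98, 1), (0, 3)]
```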
The present application overcomes this disadvantage by utilising synchronised clock counters to track co-ordinates simultaneously in both the preceding and the current frame by means of incremental/decremental registers. During the pixel by pixel comparison of the preceding image and the current image, variations between the two images are marked by co-ordinates marking the beginning and end points on each scan line where such variations take place, together with the nature of the variations occurring between these co-ordinates. During decoding the same embodiment is used. The processor reads information from the preceding image, while checking after each increment/decrement of the counter registers to see if the current co-ordinate is present in the current frame. If 'yes', the processor switches to reading information from the current frame and writes this information to the preceding frame at the appropriate location. At the end of the variation the processor switches back to reading from the preceding image. In this way the original unencoded picture is built up. This is output and the process continues with the next current encoded image. The system has several advantages since it utilises frame numbering and allows for the accessing of individual frames as well as for the manipulation of data contained in the frame.
DETAILED DESCRIPTION OF THE INVENTION:
The system enumerated herein, and for which this patent application is being made, has the advantage of offering a fast, simple and efficient method of image compression for full motion digital video, while at the same time allowing for manipulation of data such as change or insertion of data at a given location in a frame, accessing of individual frames and so on.
Compression of data as applied to digital images is based on three main methods which exploit the redundancies in the analog signal. These are: 1) Spectral redundancy: based on the correlation and attributes of various colours. 2) Temporal redundancy: based on the fact that contiguous digital images, such as are found in full motion video, vary by only small amounts. 3) Spatial redundancy: based on the correlation between neighbouring pixels.
The system described herein utilises all three forms of compression. It uses the fact that the information content of a signal can be described in terms of the picture which the signal might elicit from a reference producer, showing thereby that picture elements can be associated with signals as well as with pictures. Thus the signal may be thought of as a series of discrete picture element values; this is in effect a digital signal. A crystal oscillator tuned to the chrominance sub-carrier frequency of 3.579 MHz is used to achieve this, thereby recreating an almost exact facsimile of the signal as first received from the phototube transmitting device. Higher frequency oscillators may be used, but to little purpose, since the frequency of the incoming signal is accurately known. Of course it is necessary, prior to digitizing the incoming signal, to split it up into its component parts of Red, Green and Blue so that it is the equivalent of three separate monochrome signals. These signals can then be passed to the crystal oscillator circuit for detection and amplification.
After recovering and digitizing the signal it is necessary to further process the signal information in order to remove redundant information. The analog video signal contains much information that is redundant, for instance the horizontal and vertical synchronisation signals can be processed at the receiver instead of being transmitted, the same applies for various other timing signals.
Other redundancies in the analog video signal relate to the variation of colour acuity characteristics of human vision. The colour acuity of the human eye decreases as the size of the viewed object is decreased and thereby occupies a small part of the field of view. Thus small objects on the screen/monitor are defined in terms of luminance rather than in terms of chrominance. Slightly larger spatial areas can be defined in terms of two instead of three colours and so on.
In order to remove these redundancies, a series of checks are carried out on the incoming digitized signal values. The first check measures the amplitude of the incoming signal against a predetermined minimum amplitude value; all pixels having less than this predetermined value are stored as zero. This is because noise in the chrominance bands will cause only chromatic variations, as opposed to luminance variations, and will thereby be less visible. In other words these very low amplitude signals are treated as noise. If the amplitude is marginally higher, a check is carried out to determine whether the pixel is in isolation or whether neighbouring pixels have similar amplitude values. If the pixel is found to be in isolation its value is once again stored as zero. This applies to all pixels in groups of 1-2 pixels. This is because, as has already been shown, for small spatial areas luminance is more important than chromaticity, since chrominance is less visible in small spatial areas when viewed by the human eye. The amplitude or luminance value of the pixel being already low, approaching the surrounding grey level brightness, its value can be given as zero with no loss to the clarity of the picture. In other words, isolated small spatial areas (1 to 2 pixels) may be conveniently valued at zero; the value of zero is given in order to preserve the spatial configuration of the original signal. This results in considerable savings in memory, since such low amplitude isolated pixel values form a considerable percentage of the picture and can be stored in one bit per zero or, in the event of a run of zeros, as a run length encoded integer pair. Apart from the amplitude of the signal, a check is also carried out on the adjacent pixels to see whether a set of pixels occupying adjacent positions have similar values. In the system enumerated herein 8 bits are allotted per colour intensity, thereby giving 256 possible intensities. Checks are made giving tolerances of plus 1 or minus 1 over the initial intensity value. Therefore if the first pixel in a line is found to have an intensity of 128, subsequent pixels are checked to see if they have values of 127, 128 or 129. If they have these values they are given a value of 128 and stored together as a run length encoded integer pair, that is, one integer value representing the intensity and the other representing the number of adjacent pixels for which that intensity is maintained. Run length encoding results in considerable savings of memory, as long as the average number of pixels being represented is greater than 3. Scattered signals represented by high intensity isolated pixels are recorded in 8 bits and represent boundary elements such as are found at the edges of objects. Thus the signal can be divided into four distinct categories, each of which uses its own coding system:
1) High frequency, low intensity elements representing small spatial areas of 1 to 2 pixels are recorded in one bit as zero or, in the event that a row of zeros is present, such as may be found in a chrominance noise signal, run length encoding may be used.
2) Intermediate frequency elements, of low intensity and slightly larger spatial area, represented by 3 to 8 pixels, can be run length encoded, using any convenient unit but preferably 8 bits for intensity and 4 bits for duration (number of adjacent pixels).
3) Low frequency, high intensity elements, represented by large spatial areas of 8-15 pixels or more, are run length encoded.
4) Isolated high intensity amplitudes are recorded in 8 bits and represent boundary areas such as may be found at the edges of objects.
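A minimal sketch of the four-way classification and encoding just listed, written in Python for illustration; the noise threshold, the run limit and the token format are assumptions (the specification only names 8 bits for intensity and 4 bits for duration as a preferred choice), not values fixed by this application.

```python
from typing import List, Tuple, Union

# Illustrative thresholds; the specification leaves the exact cut-off values open.
LOW_INTENSITY = 16      # amplitudes below this are treated as noise-level
MAX_RUN = 15            # a 4-bit duration field limits runs to 15 pixels

Token = Union[Tuple[str, int], Tuple[str, int, int]]

def encode_scan_line(pixels: List[int], tol: int = 1) -> List[Token]:
    """Classify and encode one scan line into the four categories:
    ZERO (1-bit noise pixels), RLE (intensity, run) pairs and
    LITERAL (8-bit boundary pixels)."""
    tokens: List[Token] = []
    i = 0
    while i < len(pixels):
        base = pixels[i]
        # length of the run of pixels within +/- tol of the first pixel
        j = i
        while j < len(pixels) and abs(pixels[j] - base) <= tol and j - i < MAX_RUN:
            j += 1
        run = j - i
        if base < LOW_INTENSITY and run <= 2:
            tokens.extend([("ZERO", 0)] * run)   # category 1: stored as zero
            i += run
        elif run >= 3:
            tokens.append(("RLE", base, run))    # categories 2 and 3: RLE pair
            i += run
        else:
            tokens.append(("LITERAL", base))     # category 4: isolated 8-bit pixel
            i += 1
    return tokens

# Example: a noisy pair, an isolated bright pixel, then a smooth run
print(encode_scan_line([3, 2, 200, 120, 121, 119, 120, 121, 120]))
# -> [('ZERO', 0), ('ZERO', 0), ('LITERAL', 200), ('RLE', 120, 6)]
```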
Thus it can be seen that the above system offers a very fast (real time) compression system based on a time domain representation of the Discrete Cosine Transform or the Fast Fourier Transform. However, the system herein enumerated is more accurate than the DCT, since each individual value is checked and decisions are not made on a broad basis such as frequency. Thus both the DCT and the present system identify and eliminate (store as zero) all high frequency, low amplitude signals occupying small spatial areas. However, the run length encoding as employed in the system enumerated herein works at optimum levels, since it is only used when the need for storing a series of adjacent pixels with similar intensity values arises. Information which does not fit into this category is stored differently; zero values are stored as one bit or as run length encoded information, depending on whether a run of zeros is present. Thus the system makes maximum use of all the types of compression it employs, while still maintaining the information needed for screen display in the form of an encoded linked list. The system offers good reproduction with high resolution since an interleaved signal can be used. The second form of compression enumerated in this system uses the fact that contiguous video images vary by only small amounts to bring about further compression. The system can work in tandem with the XOR system as enumerated in US 5298992 and GB 2291553 A, or with the system enumerated herein. The latter is preferable since it allows for optimum control over the whole system of recording, playback and editing.
The second part of the digital compression system as enumerated herein works as follows. Three memory areas (a), (b) and (c) are utilised. Area (a) contains the preceding image, area (b) contains the current image (as received from an image capture device such as a video camera) and area (c) contains the encoded current image.
During encoding a pixel by pixel comparison is carried out between the preceding image and the current image; during this comparison the co-ordinates in the preceding image and the current image are synchronously and simultaneously tracked through incremental/decremental registers, in synchronisation with the master clock counter, after taking into account the time needed for horizontal and vertical retrace. When a variation is encountered between the preceding image and the current image, the beginning and end-point co-ordinates on each scan line where the variation has been noted, together with the nature of the variation between these co-ordinates, are encoded and noted to memory (c). This continues until all the variations between the preceding image and the current image have been encoded and noted to memory (c), which is then assigned a frame number and stored to memory; (c) is erased, (b) replaces (a), and the process continues with the next current image being received at (b).
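A minimal sketch of this temporal-redundancy encoding step; the frame layout (a list of scan lines), the variation record format and the simple equality test are assumptions made for illustration, and the clocked counter/register hardware of the specification is reduced here to ordinary loop indices.

```python
from typing import List, Tuple

# Each variation record: (row, start_col, end_col, replacement_pixels)
Variation = Tuple[int, int, int, List[int]]

def encode_frame_differences(prev: List[List[int]],
                             curr: List[List[int]]) -> List[Variation]:
    """Scan both frames in lock-step and record, for each scan line, the
    begin/end co-ordinates of every variation plus the new pixel values."""
    variations: List[Variation] = []
    for row, (p_line, c_line) in enumerate(zip(prev, curr)):
        col = 0
        while col < len(c_line):
            if p_line[col] != c_line[col]:
                start = col
                while col < len(c_line) and p_line[col] != c_line[col]:
                    col += 1
                variations.append((row, start, col - 1, c_line[start:col]))
            else:
                col += 1
    return variations
```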
During decoding the same configuration is used: the area (a) containing the preceding image and the area (b) containing the current encoded image are simultaneously and synchronously tracked by clock counters, through registers keeping track of the co-ordinates in the two areas being tracked. The system initially reads from the preceding image; after each increment/decrement of the registers the system checks to see if the current co-ordinate is present in the area (b). If 'no' the processor continues to read from (a); if 'yes' the processor switches to the information contained in (b) and proceeds to decode and write the information from (b) into the appropriate location at (a). At the end of the variation a flag present in (b) directs the processor back to the information in (a). This continues until all the information in (b) has been decoded and overwritten into (a) at the appropriate locations. The information in (a) now represents the original unencoded image represented by the encoded information in (b). Thus, by switching the accessing of information between the preceding frame and the current encoded frame, an accurate representation of the unencoded current image is reproduced. Frames are sequentially numbered and designated as either key frames or in-between frames. Key frames represent key locations along the recording process, from which point any in-between frames can be built up and accessed.
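And a matching sketch of the decoding step described above: the preceding frame is read out and, wherever a recorded variation begins, the decoder switches to the stored difference data and overwrites the corresponding locations. The record format is the illustrative one used in the encoding sketch, not a format defined by the specification.

```python
from typing import List, Tuple

Variation = Tuple[int, int, int, List[int]]     # (row, start_col, end_col, pixels)

def decode_frame_differences(prev: List[List[int]],
                             variations: List[Variation]) -> List[List[int]]:
    """Rebuild the current frame by overwriting a copy of the preceding frame
    with the recorded variations, then return it as the new preceding image."""
    frame = [line[:] for line in prev]          # copy of the preceding image (a)
    for row, start, end, pixels in variations:
        frame[row][start:end + 1] = pixels      # switch from (a) to (b) data here
    return frame
```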
A significant departure from GB 9717443.7 is made by the fact that only one co-ordinate is used to mark a variation on a scan line. This is made possible because information for the RGB colours is synchronised during reproduction. Thus, if we take it in Red major order, a single co-ordinate is needed in the information for the Red colour, and the same co-ordinate can be represented for the other colours (Blue and Green) by flags such as 00. After the variation has been read by the processor, the flag will read 01, to indicate that at the next variation the processor should enter the memory at the 00 flag immediately succeeding the 01 flag, and so on. At the end of the frame all flags are automatically reset to 00. The use of this system results in considerable savings in memory exceeding those enumerated in US 5298992, while at the same time offering considerable control over all elements of recording, playback and editing.
Description of the drawings:
A reference to Fig 1, Page 1/1, and Table 1 therein will illustrate how the terms low, high, intermediate and boundary signals have been interpreted according to this system. Figure 1 shows the monochrome component (i.e., either Red, Blue or Green) of a picture of a house near a lake with a tree nearby, some open sky and so on. In this picture the stippled effect as seen in Table 1 (a) represents the high frequency component; as can be seen, this is fairly widespread and of little importance to the picture, the signals being of low amplitude and also representing very small areas spatially. These signals are all represented as zero without much affecting the picture, because the low intensity of the signal means that it is very close to the normal grey level background brightness of the screen, and any chrominance will be lost because of the small spatial area occupied by the signal. The areas represented by light barring as in Table 1 (b) represent the intermediate frequencies; such frequencies can be found in areas of the clouds, the leaves of the tree, the tree trunk and the rock beside the lake. These blocks of information are quantized (i.e. brought to an average value from plus or minus 1 of the original value) and run length encoded. The areas with the cross hatched designs represent the low frequency high intensity component, which is very important to the overall picture; such areas can be seen in the roof and walls of the house, some areas of the tree trunk etc. These areas are treated identically to the intermediate frequencies, the intensity information being approximated and run length encoded. This is made possible by the fact that, although human visual acuity can discern about 128 shades of brightness, it is unlikely that human vision will be able to differentiate between such small levels of brightness over small spatial areas. Finally there are small high intensity areas; these are boundary areas as are formed, for instance, at the edge of the lake, and these are recorded using 8 bits. The boundary areas are represented by Table 1 (d), while the low intensity information is represented by Table 1 (c). As can be seen from the picture, the high frequency low intensity signals represented by Table 1 (a) are almost at the same intensity as the background grey level, Table 1 (e).
Further, the areas are so small and scattered that it is almost impossible to attach any chromatic significance to them; therefore the picture does not suffer at all if these signals are represented as zero. The intermediate areas are distinguished more by their size than by any other criterion; however, by recognising the existence of an intermediate size, the efficiency of the coding system is further improved and wastage of a large memory space on a small area is avoided. The same holds true of the high intensity low frequency signals as described in Table 1 (c). The boundary signals represented by Table 1 (d) represent small high intensity areas and are necessary for the definition of the picture; these areas are stored as 8 bits per pixel. Table 1 (e) represents the background brightness level.
Reference to Figure 2, Page 2/2, shows how intensity values for adjacent pixels are approximated or quantized. Figure 2 represents individual pixels on a scan line; the numbers in the pixels (boxes) represent the individual intensity of each pixel. Figure 2 shows how these numbers are approximated to a value of 128 and the pixels encoded using run length encoding. Under the heading RLE, the first integer in the RLE box represents a pixel intensity of 128 and the second integer value represents how many adjacent pixels that intensity is sustained for, in this case 10 pixels.
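A minimal sketch of the quantized run-length encoding that Figure 2 illustrates; the (intensity, run) pair format and the plus or minus 1 tolerance come from the description, while the function name and the tolerance parameter are illustrative.

```python
from typing import List, Tuple

def quantized_rle(pixels: List[int], tol: int = 1) -> List[Tuple[int, int]]:
    """Run-length encode a scan line, treating any pixel within +/- tol of the
    run's first pixel as having that first pixel's intensity."""
    runs: List[Tuple[int, int]] = []
    i = 0
    while i < len(pixels):
        base = pixels[i]
        j = i
        while j < len(pixels) and abs(pixels[j] - base) <= tol:
            j += 1
        runs.append((base, j - i))      # (intensity, number of adjacent pixels)
        i = j
    return runs

# The Figure 2 example: ten adjacent pixels within one level of 128
line = [128, 127, 129, 128, 128, 127, 129, 128, 127, 128]
print(quantized_rle(line))              # -> [(128, 10)]
```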
Reference to Figure 3, Page 3/3, shows an embodiment of the invention for enabling only the differences between two frames to be recorded to memory, along with the co-ordinates marking the beginning and end-points of such differences on each scan line. Areas (a), (b) and (c) represent memory areas. Area (a) represents the preceding image, area (b) represents the current image (as received through an image capture device such as a video camera) and area (c) contains the encoded current image. A pixel by pixel comparison is carried out between the preceding image and the current image; during this comparison the two areas (a) and (b) are simultaneously and synchronously tracked by the clock counters (d) and (e) through the incremental registers (x) and (y). (Two separate clocks are used for greater accuracy, although in theory the incremental/decremental registers can be run by the master clock signal.) When a variation is noted between areas (a) and (b), the co-ordinates marking the beginning and end points where variations take place, together with the nature of the variation, are encoded and noted to memory (c). Once all the variations between areas (a) and (b) have been noted, encoded and written to (c), the contents of (c) are sent to memory and the temporary record in (c) erased. The contents of (b) overwrite the contents of (a), making the contents of (b), now stored in (a), the preceding image. The process is repeated for succeeding images.
During decoding of images the same embodiment is used. The preceding image is read by the processor; during this process the clock counters (d) and (e) track the co-ordinates in (a) and (b) through (x) and (y), synchronously with the rate at which the processor is reading data from (a). After each increment/decrement by (x) and (y) the processor checks to see whether the current co-ordinate is present in memory area (b). If 'no' the processor continues to read from (a); if 'yes' the processor switches to the information in area (b) and decodes and overwrites the information found there to the appropriate location in area (a). At the end of the variation a flag in area (b) marking the end of the variation sends the processor back to area (a), where it proceeds to read from the appropriate location. The processor continues to check after each increment/decrement by the registers to see if the current co-ordinate is present in area (b) at the appropriate flag location. If 'yes' the processor switches to the information in (b) and decodes and overwrites the same to (a) at the appropriate location. This process continues until all the information in (b) has been decoded and written to (a). At the end of this process (a) contains the original unencoded image represented by the encoded information in (b). This is output and now represents the preceding image. (b) is deleted, the next current encoded image is received at (b), and the process enumerated above is repeated.
Quality Factor:
By adjusting the quantization levels, an automatic quality factor can be built into the system. The wider the quantization level, the greater the compression: for instance, with a swing of 5 intensity levels either way, if we take the median level to be an intensity of 125, all the intensities from 120 to 130 become eligible for approximation to 125. Thus 120, 121, 122, 123, 124, 125, 126, 127, 128, 129 and 130, representing a range of intensities from 120 to 130, would all be stored as 125, thereby increasing compression but reducing quality.
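In terms of the illustrative quantized_rle sketch given after the Figure 2 description above, the quality factor simply amounts to widening that sketch's tolerance parameter; the pixel values below restate the 120-130 example and are not part of the specification.

```python
# Widening the quantization swing trades picture quality for compression.
# Reuses quantized_rle from the earlier illustrative sketch.
line = [125, 121, 130, 124, 128, 120, 126, 129, 122, 127]

print(quantized_rle(line, tol=1))   # little is merged: many short runs
print(quantized_rle(line, tol=5))   # -> [(125, 10)]: all of 120..130 stored as 125
```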
The system described herein can find widespread application wherever computers are used or video imagery is needed, such as in the broadcast and reception of television signals directly onto computer memory, in cellular phones where both audio and video can be provided, and in such areas as teleconferencing, video games, virtual reality and so on.
FRAMES:
In the system enumerated herein a frame is the equivalent of one full sweep of the CRT beam across the screen, from the top right hand corner to the bottom left hand corner or vice-versa. Frames are numbered sequentially; frames which are merely refreshed are not numbered. The system utilises or recognises two types of frames, viz. key-frames and in-between frames. Key-frames represent points of radical change in the course of the stream of contiguous video images which, when measured against a predetermined level, are found to be uneconomical to encode using the temporal redundancy method described earlier. Instead, when an image received in area (b) is identified by the system as a key-frame, it is encoded using the signal processing methods described earlier and temporarily stored in memory area (c), from where it is assigned a frame number and stored to memory. During decoding the key-frame is stored first in (b), from where it is decoded directly into memory area (a), overwriting the contents, if any, in (a). The initial image is a key frame.
In-between frames are those frames that can be encoded using the temporal redundancy method. In-between frames contain only the variations between a preceding image and a current image, together with the spatial co-ordinates of where these variations occur; they are therefore highly compressed.
Each key-frame, together with the number of in-between frames which it can generate, forms a set. Key-frames are always numbered with integers (e.g., 1.0); in-between frames are always numbered with decimals (e.g., 1.1, 1.2 etc.). Thus the set of key frame 1.0 might run from frame 1.1 to frame 1.25, after which the next key-frame would be numbered 2.0, and so on.
The system allows for any in-between frame to be built up from its key-frame; the image can then be accessed and the information present can be changed by entering the relevant co-ordinates and data. Thus when a frame is required to be accessed, the key-frame is accessed first and the in-between frame which is required is built up from that point.

Claims (5)

CLAIMS 1) A digital image compression system capable of supporting full motion digital video uses two stages of compression: the first stage capitalizes on redundancies in the analog video signal to divide the incoming signal into characteristics representative of spatial areas and intensities which result in efficient encoding, while the second stage utilises the temporal redundancy present in contiguous video images to record only the differences between two consecutive and contiguous images to memory.
Amendments to the claims have been filed as follows:
1) A digital image compression system, capable of supporting full motion digital video, which can work in conjunction with compression systems utilising temporal redundancy methods that perform an XOR approximation between a preceding image and a current image in a series of contiguous video images (as described more fully in the specification hereto attached), uses a method of dividing the incoming digitized video signal into characteristics representative of spatial areas and intensities such that inessential information is eliminated and essential information is compressed; as for instance, low intensity signals occupying small spatial areas are recorded in one bit as zero, while high intensity signals occupying small spatial areas are recorded in 8 bits, and intermediate and large spatial areas of high intensity are run length encoded using a different run length value for the intermediate spatial areas from that used for the large spatial areas, the whole of the encoded information forming an encoded linked list from which the original picture information can be reproduced, with losses; encoding is done scan line by scan line.
2) The division of the digitized video signal as claimed in Claim 1 into characteristics representing spatial areas and intensities divides the digitized video signal, which has been split into its monochrome components, into 4 or more categories; these are: isolated pixels, represented by 1 to 2 consecutive and adjacent pixels on a scan line, possessing a predetermined low intensity, which are recorded in 1 bit as zero or, in the event of a run of zeros being present as in a chrominance noise signal, are run length encoded; small spatial areas, 1 to 2 adjacent and consecutive pixels on a scan line, possessing high intensity, which are recorded in 8 bits; intermediate spatial areas of 3 to 15 pixels which occupy adjacent and consecutive positions on a scan line and possess high intensity, which are run length encoded using 4 bits for the run length value; and large spatial areas, 16 or more consecutive and adjacent pixels on a scan line, which are run length encoded using any chosen value of bits easily identifiable in the resulting encoded linked list of information; bit count codes may be used to aid in identification of the different types of information.
3) The division of the digitized video signal, as claimed in Claim 1 and Claim 2 above, makes use of the characteristics of human spectral vision during encoding; for instance, human visual perception cannot readily discern chromaticity in a small spatial area, and if in addition the intensity of this small spatial area is near the background grey level brightness, it is possible to record the information for this low intensity small spatial area (1 to 2 pixels occupying adjacent and consecutive positions on a scan line) in 1 bit as zero, thereby preserving the spatial configuration of the original signal without appreciably affecting the overall quality of the picture as perceived by the human eye.
4) The use of a different run length value for intermediate and large spatial areas, as claimed in Claim 1 and Claim 2 above, means that the efficiency of the run length encoding is greatly enhanced, since run length encoding is used only when it results in positive compression of data, and also because wastage of memory through the use of a large run length value for a small run of pixel intensities is avoided; spatial area as claimed in Claim 1, Claim 2 and Claim 3 above refers to adjacent and consecutive pixels on a scan line.
5) The run length encoding as claimed in Claim 1, Claim 2 and Claim 4 above uses a system of quantization or approximation whereby subsequent pixel intensities, representative of consecutive and adjacent pixels on a scan line following an initial pixel intensity value on the same scan line, are approximated to plus one or minus one of the initial pixel value; therefore, if the initial pixel value is 128, subsequent adjacent and consecutive pixels having intensity values of 127 or 129 would be treated as if they had an intensity value of 128 and be included in the run length; by increasing the approximation unit it is possible to increase compression of data at the cost of picture quality.
6) Change of the minimum predetermined intensity for calculating the cut-off value for low intensity signals, as claimed in Claim 1, Claim 2 and Claim 3 above, can serve as a quality control, a higher intensity cut-off value resulting in better compression at the cost of picture quality.
7) A digital image compression system, capable of supporting full motion digital video, substantially as described herein.
GB9724807A 1997-11-25 1997-11-25 Image compresssion system Withdrawn GB2331649A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB9724807A GB2331649A (en) 1997-11-25 1997-11-25 Image compresssion system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB9724807A GB2331649A (en) 1997-11-25 1997-11-25 Image compresssion system

Publications (2)

Publication Number Publication Date
GB9724807D0 GB9724807D0 (en) 1998-01-21
GB2331649A true GB2331649A (en) 1999-05-26

Family

ID=10822554

Family Applications (1)

Application Number Title Priority Date Filing Date
GB9724807A Withdrawn GB2331649A (en) 1997-11-25 1997-11-25 Image compresssion system

Country Status (1)

Country Link
GB (1) GB2331649A (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5544286A (en) * 1993-01-29 1996-08-06 Microsoft Corporation Digital video data compression technique
GB2306271A (en) * 1994-06-22 1997-04-30 Microsoft Corp Data compression analyser

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5544286A (en) * 1993-01-29 1996-08-06 Microsoft Corporation Digital video data compression technique
GB2306271A (en) * 1994-06-22 1997-04-30 Microsoft Corp Data compression analyser

Also Published As

Publication number Publication date
GB9724807D0 (en) 1998-01-21

Similar Documents

Publication Publication Date Title
US8416847B2 (en) Separate plane compression using plurality of compression methods including ZLN and ZLD methods
JP2619091B2 (en) Apparatus and method for compressing and decompressing digital color video data
EP0711487B1 (en) A method for specifying a video window's boundary coordinates to partition a video signal and compress its components
US8170095B2 (en) Faster image processing
US6006276A (en) Enhanced video data compression in intelligent video information management system
US4799677A (en) Video game having video disk read only memory
US5053861A (en) Compression method and apparatus for single-sensor color imaging systems
AU653877B2 (en) Apparatus for compression encoding video signals
US7526029B2 (en) General purpose compression for video images (RHN)
US6941021B2 (en) Video conferencing
CA1205178A (en) Method and apparatus for encoding and decoding video
US5930390A (en) Encoding/decoding signals using a remap table
EP0527245A1 (en) Method and system for coding and compressing video signals
US20020196848A1 (en) Separate plane compression
CN1078841A (en) The method and apparatus that the luminance/chrominance code that use mixes is compressed pictorial data
US6037982A (en) Multi-pass video compression
EP0711486B1 (en) High resolution digital screen recorder and method
JPH0759058A (en) Transmission device of digital image signal
Limb et al. Plateau coding of the chrominance component of color picture signals
JP2583627B2 (en) Method and apparatus for compressing and decompressing statistically encoded data for digital color video
GB2512825A (en) Transmitting and receiving a composite image
GB2331649A (en) Image compresssion system
EP1110408B1 (en) Compression and decompression system for digital video signals
JP2925047B2 (en) Data compression device and data decompression device
EP0973337A3 (en) System for deriving a decoded reduced-resolution video signal from a coded high-definition video signal

Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)