
WO1993015587A1 - Digital audiovisual signal for 35mm equivalent television



Publication number: WO1993015587A1
Application number: PCT/US1993/000980 (US 9300980 W)
Inventor: Denyse Dubrucq
Original Assignee: Scabbard Technology, Inc.
Other languages: French (fr)
Priority date: (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)



    • H04N11/00 Colour television systems
    • H04N11/04 Colour television systems using pulse code modulation
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/507 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction using conditional replenishment
    • H04N9/00 Details of colour television systems
    • H04N9/79 Processing of colour television signals in connection with recording
    • H04N9/80 Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback
    • H04N9/804 Transformation of the television signal for recording involving pulse code modulation of the colour picture signal components
    • H04N9/8042 Transformation of the television signal for recording involving pulse code modulation of the colour picture signal components involving data reduction


The receiver signal processor fills the image array (18) and sends the signal through the chromadecoder (413) as data bundles for each segment of the display, which in practice has 525 lines and 700-750 crosspoints. The chromadecoder (413) sends datastreams for each color component for each unit in the many data bundles simultaneously. The receiver signal accommodates other receiver types by consistent cutting of the data array bundles to fit the line and crosspoint number of the lower-resolution receiver, for both digital and analog display. 3DTV sends signal to two displays (600, 662).



This application is a continuation-in-part of pending applications: Serial No. 97/459,140 filed December 29, 1989, and Serial No. 07/581,503 filed September 12, 1990.


1. Field of the Invention

This invention is directed to a digital audiovisual signal for television broadcast, video recording, and multimedia and computer applications. It provides a digital signal for image and for sound. The signal undergoes compression to enable its use in the present standard broadcast and recording bands for television, i.e. the 6MHz bandwidth, allowing both 30Hz and 59.94Hz screen refreshment, the latter being a type of High Definition Television (HDTV).

2. Description of Prior Art

Current reported resolutions in HDTV industrial efforts range from 850 to 1250 lines in the image, with many systems having lower chromatic resolution. Others have high resolution in the center portion of the image and less resolution in side panels. Some current HDTV systems use wide-screen imaging: in place of the standard aspect image of 4:3 proportions, they use 16:9 proportions. Cameras supporting HDTV and higher resolutions have a range of problems, including the inability to make micron and submicron circuit components, line outages that take out ribbons of image down or across a screen due to wire imperfections or breakage, and high light-level requirements to activate the light sensors in the moment of sensor exposure for the frame.

Digital techniques in imaging require much development, since the majority of television work up through the end of the 1980s has been with analog signals. Some practices in digital imaging use blocks of pixels or color units and do configurational math to achieve a code representing the image area. Others allow color substitution, that is, the ability to isolate or recolor areas.

The videodisc, the compact disc interactive (CDI) version of the compact disc, and computer manipulation of the television image all digitize a frame or series of frames of video. Special analog/digital converters provided by video boards convert an analog signal to a digital signal, which allows video overlays, image manipulation and resynthesis of an image. With fully digital systems, this can be done more routinely, applying these capabilities more frequently in broadcast television and recorded video operations.

Digitizing the image creates applications for the system in fields where detailed imaging is required, as in medical X-rays, CAT scan displays, microscopy and electron microscopy, and industrial, advertising, display and engineering applications. Two types of high definition television signal are possible: an augmentative type signal, where the current standard signal is broadcast on one band and a second band carries resolution-enhancing information; or a simulcast signal, where the current standard signal is generated and transmitted on one band and the high definition television signal is broadcast on a second band. The Federal Communications Commission (FCC) is preparing for a second US Television Broadcast Standard which requires selection of a simulcast system, so this application will address this type of signal. This conserves transmission bands until full conversion to the High Definition Television Standard, which is believed to take twenty-five years, is complete.


A digital audiovisual signal system for supporting both standard and wide screen television images, which operates at 59.94Hz and fits in the 6MHz bandwidth used for television, is disclosed. The system can accommodate receivers of any resolution up to the 2625 line standard or 2520 line widescreen display or receiver.

The applications entitled "Mosaic Monitor for Higher Resolution and Privacy Means for Audio Accommodation" filed December 29, 1989, Serial No. 97/459,140, and "Super-CCD with Default Distribution and its Fabrication" filed September 12, 1990, Serial No. 07/581,503, both of which are incorporated by reference herein, describe the hardware basis for the software of the signal compression presented in this application. The major signal compression means is provided by blocks of signal emerging from a bundle of rows of charge coupled device (CCD) type output, with each crosspoint carried on a separate lead. This presents a square of signals simultaneously, allowing comparison of light intensities to determine whether all outputs are the same intensity or vary in intensity. Further techniques to compress the signal include comparing monochrome areas in larger squares, coding the color components, and, in motion sequences, carrying only changes in the image in the signal for frames following the initial picture. The system provides a digitized data stream for transmitting and recording an image. The system further provides a conversion of the digital data stream back into an image, either for a high definition television receiver designed for this signal system or, with an appropriate converter, for a television receiver of lower resolution using either an analog or digital signal of another design.

It is the object of this invention to provide a digital audiovisual signal for broadcast, recording and computer use.

It is yet another object of this invention to implement means to correct for signal omissions or defaults, and to make complete images from a flawed signal by, first, having signal output for any given area carried on a multiplicity of circuits to prevent block outages, and, second, having software manipulation of data using the intensity levels of surrounding points, and in three-color systems the other chroma intensity levels, to fill in missing signal points.

It is yet another object of this invention to feed data off of an optical unit from multiple segments of the unit and from both left and right directions, in bundles of signal such that a square area of optical receivers presents a square of signal levels at one time, allowing rapid and simultaneous processing of image signal components and enabling instant image transmission without time-blurs.

It is yet another object of this invention to analyze each square of signal coming from the square of optical receivers on the optical unit to determine if the square of signal levels are at one intensity or are at multiple intensities and, in the event of one intensity, representing the single intensity for the entire square by one unit of signal at that time.

It is yet another object of this invention to consider squares of squares of signal levels of single intensity to determine if the larger square area is of the same signal intensity and, if so, to transmit the whole larger square area as one data point.

It is yet another object of this invention, for multi-color systems, to code the light intensity levels of a number of color-specific optical units for each optical receiver location so as to digitally represent each intensity level set in the image, to transmit the code and color-specific light intensity level sets, and, in the receiver, to reinstate the color-specific intensity levels from the coded image to display the identical image or image motion sequence as received by the camera.

It is yet another object of this invention, for motion sequences, to transmit the full image for the first frame in the motion sequence and to transmit the following frames by transmitting only image changes throughout the motion sequence.

It is yet another object of this invention to use the special images derived from the following three signal compression operations: a) mapping patterns of multi-intensity squares, thereby making a "coloring book" image; b) mapping patterns of common color, and even overlaying this over image "a"; and c) mapping areas of change in sequential images, and even overlaying this over image "a". [These image variations allow viewing or computer input of select information for use in robotic control, event determination, and image manipulation.]

It is yet another object of this invention to provide a means for cropping images at camera resolution and dimensions to standard or wide screen aspect images, and to provide a means of cropping which can be done manually or by using the above special images to automate the centering of the cropped image.

It is yet another object of this invention to present a transmittable and recordable signal as a combination of digital signals comprising a chromacode, a digital data stream for the visual image component, a digital data stream for the sound component, and additional signals as appropriate, all separated by key signals to direct output in receiver units for proper use.

It is yet another object of this invention to apply the recorded or transmitted signal coming to a receiver by separating the sound and image components, striking the full image electronically using the code for light intensity sets, taking portions of the image for each mosaic display unit, and releasing the image components in the scan sequence for each mosaic display unit, simultaneously passing the data streams through the dechromator and directly to corresponding emitters for either:

a) a defined monochrome image for presentation;

b) a defined multicolor image equal to the camera color level, whereby the corresponding brightness of each color is activated on the display by an electron scan/phosphor or by a light emitting diode (LED) or other chromatically specific display means; or,

c) a defined multicolor image expanded beyond the camera color level whereby, for example, a black/white presenting unit is added allowing for red, green and blue camera output to be presented as white, red, green, and blue thereby attributing a base brightness level to the white component and providing a balance of the other colors to hue-out the image.
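The four-color expansion in (c) can be sketched in code. This is a minimal illustration, assuming the white emitter takes the common base brightness of the three camera colors and the remaining red, green and blue balance hues out the image; the min() split and the function name are this sketch's assumptions, not details fixed by the text above.

```python
def rgb_to_wrgb(r, g, b):
    """Expand a three-color camera output to a four-color (white,
    red, green, blue) display signal: attribute the shared base
    brightness to the white component (assumed here to be the
    minimum of the three), leaving a balance of the other colors
    to hue-out the image."""
    w = min(r, g, b)              # base brightness -> white emitter
    return w, r - w, g - w, b - w # residual color balance
```

A pure gray input, for example, would drive only the white emitter, with all three color residuals at zero.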

It is yet another object of this invention to alter the electronic image struck on a frame by frame basis with the changes transmitted for sequential images in a motion sequence.

It is yet another object of this invention to allow further alteration of the electronic image struck by editing to provide computer overlays for enhancing the image or for presenting writing or for presenting graphics over the image or for altering color components and altering the proportions of image components.

It is yet another object of this invention to enable the reduction of the transmitted image to accommodate the resolution levels and digital/analog characteristics of other display and receiver types so as to allow the transmission to serve all other sets to the resolution level of the signal with complete and reliable reception of transmitted, recorded or computer images.

It is yet another object of this invention to systematically reduce the size of the image for use in a partial screen, to rotate the image, and to edit the image with text, graphics and color substitutions and parameter alterations to develop composites of camera images or composites of components of camera images to create new images and image sequences.

It is yet another object of this invention to compensate for image displacement in pan and scan motions of a camera by progressing the image on a frame by frame basis so as to preserve, at least partially, signal compression capabilities during these motions.

And, finally, it is yet another object of this invention to provide one or more signal compression operations in the sequence of signal generation to reduce as much as possible the overall in-process data load of the system without disturbing the quality of the final signal output.

DESCRIPTION OF THE DRAWINGS

The nature of the signal will be more readily understood from the following description taken in connection with the accompanying drawings forming part hereof in which:

FIG. 1 shows the physical segmenting of an optical unit. Each segment has an array of optical receivers giving squares of output to either side of each segment of the image area, thereby providing line group output in alternating directions thus feeding more signal information per time period.

FIG. 2 shows a cross section of an array of optical receivers having sensors, timing units, and output leads.

FIG. 3 shows an array of optical receivers in three dimensions, expanded so the construction of the sensor unit is seen, with the wiring direction of timers and vertical signal carriers and components in place.

FIG. 4A shows the units of a camera, including the camera lens and the dichroic beam splitters on the prism separating red, then blue, and having the green transmit on the light path; the three color-specific optical units are shown flat as they connect to a series of processors for combining the signals of the three color-specific optical receiver arrays, coding the colors, creating the image and determining changes in sequential images.

FIG. 4B shows the cropping mechanism which reduces the camera image to either the standard 4:3 aspect image or the wide screen 16:9 aspect image.

FIG. 5A shows light intensity levels as squares of data coming from the optical receiver arrays, having default areas lacking in number values; defaults are determined mathematically and are presented in circles; having in one instance multi-intensities and in the other one common intensity signal; and having the current standard signal feed to the right and the new HDTV signal feeding downward.

FIG. 5B shows patterns of multi-intensity output areas with mono-intensity areas and larger areas of mono-intensity.

FIG. 6 shows one sequence of image and voice compression from the camera and microphone to the transmission of the signal with output for current standard television, robotic tasks and sound receivers, and advanced definition for both standard and wide screen television.

FIG. 7A shows the data, prior to processing, in one segment of an optical array and the handling sequence out of the array unit.

FIG. 7B shows data feed including some mono-intensity and some multi-intensity pixels.

FIG. 8A shows the processing of data in which the outputs of a specific area in all three color-specific optical receiver arrays are combined and coded.

FIG. 8B shows the Chromacode Signal for the combined color intensity code numbers.

FIG. 8C shows the image signal.

FIG. 9 shows the transmission of the image code.

FIG. 10A shows the raw signal as it arrives in the receiver.

FIG. 10B shows the chromacode separated from the image signal.

FIG. 11A shows the decoding result with three color output integrated.

FIG. 11B shows the decoding result separated by color.

FIG. 12A shows the integrated decoding result for a four color system.

FIG. 12B shows the decoding result for a four color system including white separated by color.

FIG. 13A shows the data for the white component of the data grouped by rows.

FIG. 13B shows the image equivalents for each intensity number.

FIG. 13C shows the full image as interpreted by image equivalents.

FIG. 14A shows reduction of the image to a square 1050 line system in numerical equivalents.

FIG. 14B shows reduction of the image in image equivalents.

FIG. 15 shows an array of image reductions using a full resolution pixel and a series of square and rectangular image accommodations for displays and receivers of lower resolution.

FIG. 16 is a table setting the binary signal parameters and markers for all seven phases of signal compression.

DESCRIPTION OF THE PREFERRED EMBODIMENTS

A digital signal can emerge as purely a computerized chain of bit manipulations, or it can be the emanation of structured hardware and firmware combined with a software component. To the extent that applications Serial No. 97/459,140 filed December 29, 1989, and Serial No. 07/581,503 filed September 12, 1990 define the circuit design of camera and receiver, aid in the production of signal and provide advantageous characteristics for its generation, such disclosure is included in the description of the present invention.

Referring now more particularly to the drawings and FIG. 1 thereof, one color of an optical unit or sensor array unit 2 is shown, being the green sensor array 18 with segments 181-186 having leads 24 outletting simultaneous output, here in blocks of twenty-five optical receivers in arrays 25, emerging from the left as 2511, 2521, 2531, 2541, 2551, and 2561, and from the right as 2512, 2522, 2532, 2542, 2552, and 2562. This configuration provides a whole square area of signal output simultaneously. It also provides simultaneous signal output per segment to each side of the array. It also provides simultaneous signal output from each of a given number of segments of the green optical unit 18. The design in FIG. 1 provides simultaneous signals from 300 optical receivers at one time, allowing the same or faster data output per time interval than current technology in the National Television System Committee (NTSC) standard or the PAL and SECAM European standards. The design can be described as providing an interwoven mosaic optical sensor array for each segment, having mosaic features in the segmenting of the sensor array entirety.

The optical receiver array circuitry, referring to FIG. 2 and FIG. 3, shows circuits in optical unit 2 for the same green array 18, with sensors 21 doped for color specificity 20, with timers 22 providing gate control for output/input to sensors 21, and with lead transistors 23 controlling the sequencing of voltage transference to leads 24 which carry output signal out of the array. Notice that each of five signal units in a row feeds off separate leads 241-245. Referring to the expanded array as presented in FIG. 3, the layering used to build the circuit can be recognized, where the vertical sequencing of circles shows vertical wiring, and components evaporated in the sequence are indicated in engineering format. Block sensors 21 have space insulation between each sensor unit. Sets of five sensors in a line, as part of a twenty-five unit array area, are 211, 212, 213, 214, which release voltage in sequence so the areas each come to the computer unit for analysis simultaneously to determine whether the illustrated twenty-five signals have identical or multiple intensity output levels. Sensor arrays for red 16 and blue 17 are constructed similarly, though they may have doping substance on the exposure surface of the photodiode appropriate for their color bands.

A camera 1 is shown conforming to optical elements in FIG. 4A and in planar configuration in FIG. 4B. Here is shown a simplified image 11 as seen through lens 12 as image 111, which on entering a red dichroic filter 13 sends the red component of the image 116 to the red sensor array 16, while the remaining color components pass through prism 14 and to the blue dichroic filter 15, which deflects the blue image 117 to the blue sensor array 17 and passes the green image 118 to the green sensor array 18, located in line with the optical path.

The signal output from the arrays 16-18 passes through the array leads 25 to the image coding unit 5 containing multiple processors 34, 41 including cropping processor 66 before transmission 61.

To identify the processing components in the signal processing: we have data compression units for each color, 36-38, which each identify the mono-intensity square areas and the multi-intensity square areas. The data from color compression units 36-38 feed into the combined color compressor 34. The chromacoder 41 codes the intensity levels for each color in the image, then sends for transmission the code and related intensity levels for each color, and releases the color code output to be carried through the image processing. Image array 5 holds the current image as it stands and passes it through to the Delta processor 59, which carries only changes in the image from the previous image. The change in image from the previous image comprises the Delta Compression. When there is a new field of view, the percentage of units transferred to the Delta processor image 59 exceeds the allowance for classification as a dependent frame, and the new image, an independent image, is transmitted with the full chromacode. All frames after this in a single scene or motion sequence are transmitted with changes in the image and new chromacode only. For instance, a forest scene with a late-arriving male cardinal flying into view will cause the bird image movement and red chromacode units to be added to the dependent frame sequence.

The image cropper 66 presents a full camera image which, as illustrated here, is 1/5 taller and either 20 or 1,000 units wider than the wide screen and standard aspect displays. Cropping uses either an autocropper 661, which cuts the image to size, or a manual cropper, with which a person can select the transmitted image by moving the frame, as from location 662 to 663, to get the best picture. Following the cropping, which puts out a reduced data load, power 26 is added and the signal is transmitted. In practice, the signal is carried on a transmission carrier, for example a system designed by the David Sarnoff Research Center, a subsidiary of SRI International in Princeton, New Jersey, which meets the airwaves criteria and carries signal in 32 bit words. The display associated with the cropper is a camera display which in our work has 3150 lines with 4500 crosspoints per line. Cropping is done to form a standard 4:3 aspect display of 2625 lines by 3500 crosspoints or a wide screen 16:9 aspect display of 2520 lines with 4480 crosspoints.

The initial image coding of the signals from the optical receiver arrays is illustrated in FIGS. 5A and 5B, where FIG. 5A shows raw data with, as examples, intensity levels "22" and "35" in two output square arrays 5, both of which have defaults 54, i.e. empty places where data is missing. This may occur with faulty wiring. On large sensor arrays of other designs, such breakages remove rows or columns of data or both. Here, with the multiple lead design, the default points are dispersed. To save computing, if all of the accounted-for intensity levels are the same, the square area 52 is presented as one intensity level for the area. If, on the other hand, the intensity levels vary within the area, 51, then the defaults are corrected 55 using preset computer extrapolations and the resulting numbers complete the intensity level pattern.
Note that to create a current standard signal, NTSC in the United States, the signal of the monochrome unit 52 is fed raw into the signal, and the multichrome unit 51 is represented by the majority intensity: not the average intensity, but the one most represented among the units making up the area. The optical receiver array signals 51 and 52 combine into the new signal 43 which has, in practice here, a 5x5 area, 25 times the detail of the current standard NTSC signal resolution. The NTSC signal prior to transmission is converted to an analog signal.
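The mono/multi-intensity reduction for the lower-resolution feed described above can be sketched as follows. This is a minimal illustration: the 5x5 block size and the sample intensity levels "22" and "35" follow the figures, while the function and variable names are illustrative.

```python
from collections import Counter

def block_to_ntsc_value(block):
    """Reduce one square output area to a single value for the
    lower-resolution feed: a mono-intensity block passes its level
    through raw; a multi-intensity block is represented by its
    majority intensity (the most represented, not the average)."""
    flat = [v for row in block for v in row]
    counts = Counter(flat)
    if len(counts) == 1:                # monochrome unit 52: raw value
        return flat[0]
    return counts.most_common(1)[0][0]  # multichrome unit 51: majority

mono  = [[22] * 5 for _ in range(5)]    # uniform 5x5 area
multi = [[35, 35, 35, 22, 22],          # mixed 5x5 area: 35 dominates
         [35, 35, 22, 22, 22],
         [35, 35, 35, 35, 22],
         [35, 22, 22, 35, 35],
         [35, 35, 35, 22, 35]]
```

Using the majority rather than the mean keeps the reduced pixel at an intensity that actually occurs in the area, so sharp color boundaries are not smeared.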

Extending the image coding in FIG. 5B to a second cycle testing for common intensity square areas, one gets squares of squares 53, which in this system have 625 optical receivers in 25x25 squares where all are the same intensity. A pattern of data transmission is shown with multi-intensity areas 51, monochrome areas 52 and the larger monochrome areas 53, which can extend further, taking areas of 25 larger monochrome areas, until a whole monochrome screen can be represented with two data points and one chromacode signal unit, where one square covering the left portion of the screen is represented by one image signal and the remainder of the screen is covered by a second.
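The squares-of-squares coding above amounts to a recursive uniformity test over 5x5 groupings (25 receivers, then 625, then 15,625, and so on). A sketch, assuming square arrays whose side is a power of five; the names and the run format are illustrative:

```python
def encode_uniform(grid, r=0, c=0, side=None):
    """Recursive squares-of-squares test: if a square area is all one
    intensity it is emitted as a single run (row, col, side, value);
    otherwise it is subdivided into a 5x5 arrangement of smaller
    squares, down to single receivers.  `grid` is a square
    list-of-lists whose side is a power of five."""
    if side is None:
        side = len(grid)
    values = {grid[r + i][c + j] for i in range(side) for j in range(side)}
    if len(values) == 1:                 # whole area one intensity
        return [(r, c, side, values.pop())]
    sub = side // 5                      # subdivide 25x25 -> 5x5 -> 1x1
    runs = []
    for i in range(5):
        for j in range(5):
            runs.extend(encode_uniform(grid, r + i * sub, c + j * sub, sub))
    return runs
```

A fully monochrome 25x25 array collapses to a single run, while a lone differing receiver forces only its own 5x5 neighborhood down to individual pixels.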

The formulas for filling in a default space 54 include these computer-programmed extrapolations:

1) matching the majority surrounding color or intensity.

2) if on the margin of two color zones, half being each of two colors, then the fill which forms a straight line is used:

   x x x x o        x x x x o
   x x x x o        x x x x o
   x x _ o o        x x x o o
   x x o o o        x x o o o
   x o o o o        x o o o o

This is a dilemma: the default (shown "_") filled with "x" looks like the right-hand square, yet filled with "o" it also forms a straight boundary. Both fit.

3) if the pattern is forming a point, where rows have five, three and then the default, then the intersect is preserved, with the default being the color of the wedge or triangle.

4) for tricolor cameras, the easiest placement is made by matching the other two color intensities of a coded color and using that code, which automatically fills the void. The default system in tricolor cameras uses step 4 above wherever possible. If two voids exist and the remaining color is strange to the chromacode, then the methods in 1-3 are used for one set of colors, matching the whole chromacode to the selected pattern.
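Rule 1 above, matching the majority of the surrounding points, can be sketched as follows. Rules 2-4 (straight-line boundaries, wedge points and cross-color matching) would refine the result; `None` marks a default point, and the names are illustrative:

```python
from collections import Counter

def fill_default(grid, r, c):
    """Fill a missing (default) intensity at (r, c) with the majority
    intensity among its up-to-eight surrounding points, per
    extrapolation rule 1.  Other default points (None) in the
    neighbourhood are skipped."""
    neighbours = []
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == dc == 0:
                continue                      # skip the default itself
            rr, cc = r + dr, c + dc
            if 0 <= rr < len(grid) and 0 <= cc < len(grid[0]):
                v = grid[rr][cc]
                if v is not None:             # ignore other defaults
                    neighbours.append(v)
    return Counter(neighbours).most_common(1)[0][0]
```

Because the multiple-lead design disperses default points rather than dropping whole rows, each default usually has a full ring of valid neighbours to vote from.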

Default correction in this multilead design tricolor system supports high default rates and still makes perfect pictures.

The signal 6 development from camera through transmission is illustrated in FIG. 6. It includes the various possible transmissions and all the data compression means included in the signal processing. Note here that the Delta Compression 59 is located at the end of the compression cycle. This separation, if made in the early stages, say just after the mono/multi-intensity determination, compares the current intensity with the intensity of the previous voltage output. If it is the same, it is not processed further. This greatly reduces the data levels throughout the system. This concept is expanded further later in the text.

The scene 11 has stable 110 and moving 111 components and a commentator 490 provides voice input. The camera 1 carries the image sequence and the microphone 491 carries the voice sequence.

The voice and sound digital presentations are selected from an array of choices available today. The converting unit 492 processes the voice to provide an analog or digital signal generator 493 which is broadcast and recorded in the current standard as NTSC television. It is broadcast in sound transmission, radio through shortwave, with generator 494 from antennae 495, and it is digitized for inclusion in the new signal in generator 496 on sound cable 497 for broadcast and recording in the new signal which is received by new standard receivers with advanced speakers 499.

The image processing is shown starting with the camera 1, with lens 12 focusing the image on the dichroic filter/prism 14 shown in FIG. 4A, which presents square area intensity levels for each of the red 16, green 18 and blue 17 sensor array areas, here being 5x5 squares. The initial processors 36, 37, 38 for the three colors, respectively, determine if the intensity throughout the area for each color is the same throughout, 361, 371 and 381, or if it varies, 362, 372 and 382.

If the Delta Compression analysis is inserted as a second comparison immediately upon emergence of the data, rather than later in the process at 59, a second comparison is made determining if the present intensity pattern is the same as the previous intensity pattern for the optical receiver array area output. If the output for all colors is the same, no further signal is transmitted. If it is different, the new intensity pattern is transmitted and processed through the system.
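The Delta Compression comparison, transmitting a block only when its intensity pattern differs from the previous frame's, can be sketched like this. The independent-frame threshold value is an assumption for the sketch; the text does not fix a number, and the names are illustrative.

```python
def delta_blocks(prev, curr, block=5):
    """Compare the current frame with the previous frame block by
    block, keeping only changed blocks (the dependent-frame case).
    If the fraction of changed blocks exceeds a threshold, the frame
    should instead be sent whole as a new independent image with its
    full chromacode."""
    changed = []
    rows, cols = len(curr), len(curr[0])
    for r in range(0, rows, block):
        for c in range(0, cols, block):
            new = [row[c:c + block] for row in curr[r:r + block]]
            old = [row[c:c + block] for row in prev[r:r + block]]
            if new != old:
                changed.append((r, c, new))   # only changes are carried
    total = (rows // block) * (cols // block)
    independent = len(changed) / total > 0.5  # assumed threshold
    return changed, independent
```

A stable scene with one small moving element (the late-arriving cardinal of the example above) thus costs only the few blocks the bird crosses per frame.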

The remaining data is combined in the combined color processor 34, where all the mono-intensity components of an area are combined into a monochrome 341 or multichrome 342 status. The multichrome units 342 are processed to select and convey the majority color for the in-place standard analog signal, as NTSC, at feed 32; and the monochrome units 341 convey their chroma levels for the in-place standard analog signal at feed 31. The combined signals provide the image signal, which undergoes a digital to analog conversion in box 312, adding sound signal 493 and transmitting the signal via antenna 313. This 525 line signal is compatible with NTSC, PAL or SECAM television standard equipment, as illustrated with antenna 314, receiver 315 and speaker 498.

The multichrome 32 and monochrome 31 signals can be used to produce a black line drawing-type image. By coloring all multichrome signal units 32 black with processor 320 and all monochrome signal units 31 white with processor 310, image 317 emerges with margins between color blocks 321 in black and color block centers 311 in white. It makes a motion coloring book image.

The image 317 input provides one type of information for neural network calculations for robotic controls.

Output of multichrome segment 342 is a digitized signal with 8-bits representing 256-scale unit intensity levels for each of three colors. The signal feeds along transmission wire 344 to chromacoder 41 where its normally three colors are fed into the coding system. Intensity levels for the three color inputs 413 are recorded and assigned an eight bit code 414 which is used for all further image processing. The intensity levels 413 and code 414 are transmitted via signal processor 415 and are included in recording and broadcast signal for the new resolution signal 60.

In the case that a scene has more than 256 shades of color in its parameters, the chromacode bit number increases one bit at a time: for example, if up to 512 shades are included, the chromacode is nine bits; if up to 1024, it is ten bits. If the image is changing rapidly, then to allow for the speed of change and maximum detail of motion the color scale unit number may be reduced to seven-bit, 128-scale units of intensity, thus reducing the number of shades in the image and here allowing an eight bit chromacode to be used. This color cutting mode is one which can be selected by the cameraman or editor prior to signal preparation for transmission.
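The chromacode growth rule, eight bits for up to 256 shades and one additional bit each time the shade count doubles, can be stated compactly. A sketch; the function name is illustrative:

```python
import math

def chromacode_bits(n_shades, minimum=8):
    """Bits needed for the chromacode: at least `minimum` (eight in
    the scheme above), growing one bit at a time as the number of
    distinct intensity-level sets rises past each power of two
    (512 shades -> 9 bits, 1024 -> 10 bits)."""
    if n_shades <= 1:
        return minimum
    return max(minimum, math.ceil(math.log2(n_shades)))
```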

Output of the monochrome segments 341 is sent via wire 345 to the pixel and multipixel comparator 530. Here blocks of 25 monochrome units, in the illustrated case, are compared. If they are all the same, one signal represents the whole 625-unit area of the comparator 53. If they are not the same color, then the colors of each of the areas are sent through collector 531 to chromacoder 41 and processed as described above for multicolor areas, with one color representing each pixel, in sets of 25 optical array outputs for each color.

Taking the monochrome squares of 625 units, if a square of 25 of these is compared and the whole of 15,625 units is the same color, then it is stored as one unit. If the group is not the same color, then the separate squares of 625 units are represented as area 533, and the color is recorded through chromacoder 41 and processed as described above.

Taking squares of 15,625 units in the square of 25 (comparator 53), if they are the same color, then the square of 390,625 units is stored. If they are not the same color, then the individual squares of 15,625 units are processed, represented as area 535, and carried to chromacoder 41 as described above.

Taking squares of 390,625 units in a square of 25 (comparator 53), if they are the same color, then one unit is represented as area 539, covering over half the screen area with 9,765,625 units. It is processed through chromacoder 41 as described above. If the square of 25 units is not the same color, then the separate units of 390,625 are processed through processor 537 to chromacoder 41 as described above.
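One stage of the square-of-25 comparator described above can be sketched as follows. This is a minimal illustration under assumed names: 25 sub-square colors either collapse to a single signal for the larger area, or each sub-square is passed on to the chromacoder individually. The same test applies at every level (25, 625, 15,625, 390,625 units).

```python
# Minimal sketch of one comparator stage: a uniform square of 25
# sub-squares is represented by a single color signal; a non-uniform
# square passes each sub-square color on separately.
def compare_square(colors):
    """colors: the 25 sub-square colors of one comparator square."""
    if all(c == colors[0] for c in colors):
        return ("mono", colors[0])      # one signal for the whole area
    return ("multi", list(colors))      # each area goes to the chromacoder

uniform = compare_square([7] * 25)
mixed = compare_square([7] * 24 + [3])
```

Applied recursively, uniform regions collapse geometrically: one value can stand in for 25, then 625, then 15,625 units, which is what makes large monochrome screen areas so cheap to transmit.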

With the 9,765,625 units the same color, if the remaining screen is the same color, a similar part of a square of the same color is represented by a single color unit processed through processor 539 to chromacoder 41, making the complete signal for a monochrome screen defined by one chromacode signal and two color points covering the 4:3 aspect normal display with 9,187,500 units or the 16:9 aspect wide-screen display with 11,289,600 units. For the camera display with 3,150 lines, this square fills 3,125 lines, so four color points are needed to color the screen in monochrome for this camera display with 14,175,000 units. The chromacoder 41 conveys all color intensity components 413 for each color with its chromacode 414 into the high resolution video signal. It also conveys the chromacode 414, along with a sequence component for each point, to the imager 45.

The Delta Compressor has two possible locations: one in the intensity comparators 36, 37, 38 at the photoreceptor outputs, and the other following the independent image array 45, at the dependent image array 59. In the first configuration, the imager 45 is complete for independent frames at the beginning of a new scene. If the Delta Compressor is at the photoreceptor, then subsequent images on imager 45 represent only changes in the scene, as shown in dependent frame imager 59. In this configuration only one imager is needed. In the second configuration, as illustrated in FIG. 6, the imager 45 always has the complete image defined, and the sequence comparison is done such that the changes in sequenced frames are displayed on dependent imager 59.

In the second location, Delta Compression at dependent imager 59, if the previous image for any point is the same as the current image, processor 595 discards the data 597. If it differs from the previous signal, the comparator 595 sends the signal 596 to the Delta Compression dependent imager 59. In the case that a preset percentage of the image changes, frames that equal or exceed that level of change are transmitted as independent frames. Independent frames include all locations defined by color and the full chromacode signal. An intermediate frame type, which defines all locations but does not repeat the chromacode, adding only the new chromacodes, can also be included in the signal. To supply signal redundancy to cover transmission interference, long action sequences are interrupted by providing independent frames every few seconds, depending on a setting determined either by standards requirements or the editor's preference. The chromacode signal can be repeated in the unused transmission space as a general practice. For videotape and videodisc recording, the scanner starts the image at an independent frame to ensure full color definition. In the case of still-frame display from a motion sequence, the scanner will reference the independent frame preceding the frame to display and apply the dependent-frame changes up to the point of the desired frame.

The sequence of independent and dependent frames comprises the camera image, which is processed in the cropper 66. The cropper cuts the image to the 4:3 aspect size 660, 661 or the 16:9 aspect wide-screen size 662, 663, depending on the choice to be transmitted. Besides manual cropping, there are several means to automate the task, for example, centering a specified part of the image.

Transmission signal generator 60 combines voice 497; other signals 487 such as captioning, stock quotations, and other paid transmissions which are transparent to normal receivers; chromacode 415; and the cropped 4:3 aspect image 660, 661 or 16:9 aspect image 662, 663. It is transmitted using antenna 665 or recorded. The receiver antenna 765 collects the signal, which displays video on the display 760 and presents sound with speaker 499.

To understand the nature of the signal itself, FIGS. 7-15 show numerical and image-equivalent representations of the signal, simplified, for a six-across by five-row pixel image area. These thirty pixels of 5 x 5 color units are part of the whole video signal and are followed from the image light input in the camera to the photoarray output, through compression, transmission, and reception for three kinds of receivers: tri-color such as RGB, four-color such as the LED displays, and receivers of resolutions less than the 35mm equivalent provided in this video system.

Looking at FIG. 7A, camera 1 is represented with lens 12, filter 13, prism 14, filter 15, and green sensor array 18. Array 18 contains thirty squares of 25 color units. Numbers represent voltage levels with full voltage at "9" and strong light dissipation at "0". Squares 250-256 are followed through the process. Leads 24 from the rows that flow left 241 and that flow right 242 make sets of 25 intensity levels of green light which represent squares of voltage levels.

FIG. 7B shows the same configuration with signal processing taking place. The fifth progression is shown: in array 18, squares having released voltage 259 are recharged to "9", and data progresses with the intensities of squares 251 and 252 having flowed to the left and right, squares 253 and 254 awaiting analysis, and squares 255 and 256 analyzed, 255 being a monochrome square and 256 a multichrome square. Indicators 381 and 382 select the active output. Square 250 is left on the inactive output of the left-flowing display, having been sent during the previous pulse.

FIG. 8A shows the processor when data from the three color arrays are combined with strings of 25 data points from the red 16 and blue 17 arrays. The voltage levels "0-9" are presented for each data point for all three arrays. Each three-intensity-level color is represented and coded in chromacoder 41, with 413 being the intensity-level trio and 414 being the code for that color.

Data for squares 250, 255 and 256, 253 and 254, and 251 and 252 are represented. For monochrome square 255, the other two signals on arrays 16 and 17 are also monochrome, allowing one signal to represent all 25 color units, with chromacode 41-255 being that one number. In contrast, multichrome square 256 has the output of each of the three arrays 16, 17, and 18, each having 25 color units defined in intensities and chromacodes 41-256. Were one of the three arrays in the monochrome sequence not monochrome, then the full 25 color units would be defined for that square; the output from that square would then be like 41-256 to accommodate the multiple intensities in that square. FIG. 8B shows the chromacode portion of the transmission 41, with the three intensity data levels 413 and the chromacode assigned 414.

FIG. 8C shows image signal 50 for the 6 x 5 array of 25-unit-square pixels, showing the chromacode for areas 250, 251, 253, and 255, with one number given for areas of monochrome signal 52, as for pixel 255, and 25 numbers given for multichrome areas 51, as for pixels 251 and 253. Chromacode information 41 is packaged covering 21 of 25 spaces, with three zeros and chromacode 414 followed by intensity levels 413, packing three sets of codes per 25-unit block. The chromacode signal is set in areas of monochrome 52, and data is transmitted as shown in FIG. 9.

The transmission signal 62 shown in FIG. 9 takes the data as presented in FIG. 8C and reads off each pixel vertically starting at the lead edge, left column for the left-flowing set and right column for the right-flowing set, forming chains of chromacodes. Multichrome pixels 51, such as 250, 253, 254, and 256, have twenty-five numbers; monochrome pixels 52, such as 255, have one number. In some monochrome sets, the chromacoding 410 is inserted, with the code 414 and the three color intensity levels 413 represented.

This line of data defines the video portion of the signal. Sound data can be carried in the 24 empty spaces of the monochrome pixels just as the chromacode 410 is carried. Other transmitted data is carried on the signal in this manner as well.

What happens in receivers is shown in FIGS. 10-15. All receivers of this high resolution signal have the signal deciphering function as shown in FIGS. 10A and 10B.

FIG. 10A shows the raw signal input, which presents the signal stream as it was prepared for transmission as shown in FIG. 8C. There are the video image 63 and the chromacode signal 41 interspersed in monochrome pixels, as shown in FIG. 10A.

In FIG. 10B, the chromacode signal 41 is separated from the image 63. The raw data in chromacode signal 41 is interpreted, separating the code 414 from the three intensity levels 413.

The processing shown in FIGS. 10A and 10B is common among all types of receiver displays: the three-color RGB, the four-color, and receivers with less resolution than this signal, such as those serving 1050-line systems.

FIG. 11 shows three-color processing, which is used for both electron-scan and some Light Emitting Diode (LED) displays built to handle the signal resolution. The chroma processor interprets the chromacode 41 for the three colors, red, green and blue, in RGB systems. The exact voltage for the three processors may be changed to get the correct chromatic levels, depending on the type of phosphor used in the screen, and to trigger the appropriate light intensity output for the LEDs to match the color viewed by the camera.

FIG. 11A shows the array defined by intensities of the three colors. Activating the leads to the LEDs or the emitters for the colors, red 76 goes to red leads, green 78 goes to green leads, and blue 77 goes to blue leads. Not shown in the drawings, but described here, is the means of distributing the signal to the display. The mosaic display segments the whole image into regions of 525 lines by 700 or 750 crosspoint units. These have five-row bundles with signal feeds in one direction and the next five rows with signal feeds in the other direction. Twenty-five units of 700-crosspoint segments are used for 4:3 aspect displays. Thirty units of 750-crosspoint segments are used for the wide-screen display. Data feeds to these segments from the imager simultaneously, so it takes no more time to feed one size display than another, excepting the 700-unit to 750-unit crosspoint expansion. The chromaprocessor converts the chromacode streams from all segments simultaneously.

FIG. 11B has arrow indicators 75 showing the alternating direction of the five-row bundles for each color. These five-row bundles of the image comprise 1/125th of a display segment. Each of the five display segments aligned vertically has the same bundle fed simultaneously over the area. The bundles of five rows of intensity-level voltages feed simultaneously across the rows for the crosswise segment of the display they are in. The image data 63 is fed in segments through the chromadecoder, with ten feeds for each segment of the display, or, for 4:3 aspect displays, 1,250 simultaneous data streams being chromadecoded at one time. For wide screen there are 1,500 data streams being chromadecoded at one time. The display refreshing rate of 59.94 Hz requires 67 or 68 row bundles to activate 700 or 750 crosspoints in just over 1/60th of a second, compared to current standard sets where 350 row sets are interlaced so that 175 rows of about 500 crosspoints maximum are activated. The data point output in a video frame for each CCD in this signal system is either 61,200 or 60,300. In camcorders, home video, the interleaved output is 87,500 for 350-line resolution. For full NTSC 525-line resolution, the equivalent is 184,100 or 183,400, depending on flow direction. This invention has one third the output per CCD unit per frame of conventional cameras.

The data is digital but can be converted to an analog signal to activate electron-emitter design displays. In this case each color, each lead, would have a digital-to-analog converter to stream the electrons that activate the phosphors which create the electronic image.

In contrast, the LED display receivers preserve the digital character of the signal so that the proper voltage level is fed to illuminate the LED set at the specific row/crosspoint intersection at the proper time.

Referring to Patent Serial No. 07/459,140, the bundles of five rows of left-feeding signal and of right-feeding signal in each row of display segments are activated at one time. Then five sets of crosspoints for each direction for each segment across the display face are activated, which makes 50 units per segment, with the 25 segments of the 4:3 aspect receiver receiving data at one time. This means 1,250 of the 9,187,500 units of the display are activated at any one time. To activate the whole display requires 7,350 impulse groups in 1/59.94th of a second.

Comparing the mosaic monitor signal dispersion to an interlaced display of 350 lines with 500-crosspoint resolution at 30 Hz, which has 43,750 color points per dispersion, the mosaic monitor, with 7,350 pulses needed, has a pulse ratio of 1:5.952 relative to the interlaced design. This is a major drop in pulse demand for the image segments.

Because the current standard displays are analog, comparison of a digital display with them may not be entirely appropriate. Engineers developing digital systems, such as the General Electric Digital Video System (DVI), have found digital imaging is slower: it produces smaller images per unit time. Perhaps the large margin is what is needed to make a digital system workable.

In the mosaic displays with electron scanners exciting phosphors, a calibration system is used to ensure that each segment keeps its dimensionality and to prevent gaps or loss of edge image. This system has five vertical photodiodes, as used in the camera, at five locations on the image edge: one at each corner, top and bottom, left and right, and one at the left of a row in the center of the display segment. The scan lines at these positions are extended one color unit and output 30% white light. When all five units in all five locations are illuminated at a constant intensity level, the segment is aligned. If some units of any column are not excited, then the image is displaced and a correction is made. If any position gets a vertical column of excitation varying in intensity, the scan is too broad, and it is narrowed to excite the calibration column with the 30% intensity light at the ends of the electron streams. If no excitation occurs, the segment is too narrow, and it is widened until the 30% light excites the sensors.

The processor for the pixel for each image segment is identical to that in the camera comparator 37 (FIG. 6): if all sensors have the 30% light level, the mono-intensity output indicates the image is in correct alignment. If the multi-intensity output occurs, the column(s) with variance are noted and corrections are made. If a column is dark, either the scan is too narrow or the image is displaced up or down. If its crosswise match is dark also, the image is broadened. If some units in the column are excited, say the lower three, then the vertical alignment is corrected so all five locations are excited. If the column of sensors has variable intensities, then the scan is too wide, and the image is reduced in width until the constant 30% intensity scan is received.

To ensure that the color balance is in place among the 25 segments of the display, the processor comparator 53 is applied to the whole array of calibration pixels: five columns of five optical receivers placed at the four corners and at one center side location of each segment in the display. If the whole set is monochrome, then the color balance among segments is correct. If there is variation, the segments lighter or darker than the majority are brought into conformity.

With wide-screen displays and camera displays, the array for comparator 53 in the receiver units is expanded to take the number of pixels needed to represent each segment in the display.

The correction routines developed in receiver control software are applied in the chromadecoder and in the scan locators for emitters which control the beam scan ranges.

A second alignment configuration is in the camera, where three points of parallel white light at 30% intensity are fed to the peripheral area directly adjacent to the image. These three points appear on the three color images, red, green and blue, and must appear in identical photosensor locations consistently in time and consistently in location among the three color arrays. Focusing and zoom lens motion tend to cause misalignment of the sensor arrays. The correction can be made hydraulically or by heating and cooling the sensor array stems or pedestals until the 30% white light illuminates the same three sensors in each array at the same time.

The white light level must be consistent among sensors or alignment is off. It also can preserve color balance among the sensors. To adjust for indoor or outdoor light, the balance of white light components can be changed to have a color correction made in the system. The 0-255 scale is adjusted to the white light intensity by setting its position at the "76" unit in the scale range. The scale segments 0-75, 76, and 77-255 set the intensities for below 30%, at 30%, and above 30% light intensity, respectively.

This arrangement needs attention by artists and lighting specialists to adjust the controls properly in each camera. Multiple camera taping requires that the color balance among the cameras match. Using the white light calibrators, all cameras can have the same 30% positioning for each of the three colors, red, green and blue.

Returning to the discussion of FIG. 11B, the three sets of electron streams are defined where five leads are fed in each of the left and right directions at one time for the segment of the display this signal feeds. Here three sets of rows feed to the right and two sets to the left. Electronically, using gate transistors, the row sets are activated one set at a time. Columns are activated five at a time or one at a time, depending on lead redundancy. If only one lead feeds a row, one column at a time is activated. However, if the display has one lead for each five locations, like the camera array wiring shown in FIGS. 2 and 3, all five locations can be activated at once. This uses timer arrays similar to those shown for the photoarrays 2.

With the chromadecoder, color changes can be made using a computer, which can call up any chromacode and change one or more of the three intensities. This new color assignment will affect all instances of that color in the image, or it can be regionalized using a "windows" routine common in computer programming.

Also, to identify a specific item in the image, all colors but the point(s) of interest can be cancelled or made the same. Overlaying this image on the "coloring book image" 311 (see FIG. 6), where the monochrome pixels are white and the multichrome pixels are black, will locate the point of interest through a motion sequence.

A signal for a second type of monitor, one with four colors, white, red, green, and blue, which is to be applied to most of the LED displays, is shown in FIG. 12A. The chromadecoder takes the three-color intensity signal 413 and interprets it to produce a four-color output. The method illustrated here takes the lowest common intensity level of the three colors in signal 413 and uses it for the white signal 85. The difference between the white signal level and each of the three color units is retained to complete the four-color signal. This eliminates one or more of the remaining three colors. For balanced color, such as the 30% white light, black, or bright white, only the white LED is active, unless the program calls for extreme white light, which can activate all four colors, thus stretching the gray-scale spectrum considerably. For pastels, white plus a little glow of other colors is used. For vivid colors, two colored LEDs make the color emission and white light is not used. Black occurs when all colors are off. The four-color chromadecoder signal image is shown in FIG. 12A carrying the whole four-color array 81, with white carried by signal 85, red on 86, green on 88, and blue on 87. FIG. 12B shows each color separately: white 85, red 86, green 88, and blue 87. This data is handled similarly to that fed into the RGB displays.
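The lowest-common-intensity method described above can be written out directly. This sketch follows the stated rule (white takes the minimum of the three intensities, the remainders drive the colored LEDs); the function name is an illustrative assumption.

```python
# Sketch of the RGB-to-WRGB conversion: the lowest common intensity
# of the three colors drives the white LED, and the remainders drive
# the colored LEDs.  At least one colored channel always becomes zero.
def to_wrgb(r, g, b):
    w = min(r, g, b)
    return (w, r - w, g - w, b - w)

balanced = to_wrgb(80, 80, 80)    # balanced light: white LED only
vivid = to_wrgb(200, 120, 40)     # vivid color: white reduced, blue off
```

Note that the minimum channel always subtracts to zero, which is why the text can say the method "eliminates one or more of the remaining three colors."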

Expansion of the white signal has a digital array 85 in FIG. 13A, a digital-graphic interpretation in FIG. 13B, and a graphic image 850 for the white signal intensity levels in FIG. 13C for the five rows of six segments of pixels in the model array. Two types of pixels, the multi-intensity 9151 and the mono-intensity 9150, are shown. The multi-intensity pixel 9151 is used to illustrate the resolution changes in FIG. 15 needed to accommodate displays of lower resolution. The whole digital image 85 and graphic image 850 are used to illustrate resolution accommodation to a 1050-line system in FIGS. 14A and 14B.

Cutting the data to fit a 1050-line display, shown in FIG. 14A, eliminates the 1st, 3rd and 5th rows and the 1st, 3rd and 5th columns of the pixel to reduce the resolution of the picture so it can be shown on a 1050-line display rather than a 2625-line display. The remaining 2/5 image cut consistently leaves four data points for each twenty-five points in the transmission. FIG. 14A data array 9 is the reduction of FIG. 13A data array 85 to 1050-line resolution, assuming a square pixel system. This would give 1050 rows by 1400 crosspoints, which is possible for a single-system scan display receiver. Specific pixel 9051 is reduced to data array 9021, and 9050 is reduced to 9020, these being the multicolor and monochrome squares respectively. In the graphic rendition, FIG. 13C data array 850 is reduced to FIG. 14B array 91, multicolor pixel 9151 is reduced to 9121, and monochrome pixel 9150 is reduced to 9120. As one can see, the reduction in resolution cuts the detail level of the image. No attention is given to the reduction that eliminates multicolor in these reductions, since it is beyond the compression stage once the signal is transmitted.
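The decimation just described, dropping the 1st, 3rd, and 5th rows and columns of each 5 x 5 pixel to keep four of twenty-five points, can be sketched as follows; the function name and list-of-rows representation are illustrative assumptions.

```python
# Sketch of the 2/5 reduction: dropping the 1st, 3rd and 5th rows and
# columns of a 5x5 pixel keeps only the 2nd and 4th (indices 1 and 3),
# leaving a 2x2 block -- four of the original twenty-five data points.
def reduce_pixel(px):
    """px: 5x5 pixel as a list of five rows; returns the 2x2 reduction."""
    keep = (1, 3)                 # 0-based indices of retained rows/columns
    return [[px[r][c] for c in keep] for r in keep]

# A 5x5 pixel whose value encodes its own (row, column) position:
pixel = [[r * 10 + c for c in range(5)] for r in range(5)]
reduced = reduce_pixel(pixel)
```

Applied uniformly, this 2/5 linear cut turns the 2625-line image into the 1050 x 1400 grid the text derives.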

Other reductions are illustrated at the pixel level in FIG. 15 using multicolor pixel 9151 from FIG. 13C, represented here as 915. These reductions can apply to partial-screen displays on receivers of this system or on receivers with lesser resolution. The reduction means can be mixed to meet the specific line requirements of the display, considering that pixels in this system are square. Pixel 914 is a 4/5 or 80% reduction. In the displays in this system it can fill 80% of the screen, with nine 20% reduction images forming a margin across and down from any of the four corners. This allows ten-channel viewing when selecting a program, editing, or selecting the right image for transmitting from those on ten cameras. Pixel 913 is a 60% reduction, with 3/5 linear reduction fitting a 1575-line display with square pixels. Pixel 912, matching pixel 9121 in FIG. 14B, is a 2/5 or 40% reduction. Pixel 911 is a 1/5 or 20% image fitting a 525-line display. This size allows 25 channels to be viewed simultaneously. It can replace current standard viewing on maximized NTSC standard sets if the transmission signal has better quality. All pixels illustrated in the series 911-915 are square pixels.

Examples of reductions to rectangular pixels are shown: a 2/5 × 3/5 reduction in rectangular pixel 9523, and a 3/5 × 2/5 reduction in pixel 9532. These are used to accommodate rectangular-pixel displays or to display warped images. Were a display to have a number of lines not a multiple of 525, rows of these pixel dimensions can be interspersed in construction and expansion to accommodate a system such as the 1175-line display now used in the NHK System of Japan. The display accommodator for this set, if it had square pixels in the analog system, would intersperse rows of pixel 9523 and columns of pixel size 9532 on a regular basis to pattern out with the least warp of image possible. The receiver accommodator would include the row and column constructors and a digital-to-analog converter to enable the signal for the NHK display to be generated from the received 2520-line system here described, since the NHK System is wide screen.

FIG. 16 is a table of binary signals and markers for the real digital system modeled in the series represented by FIGS. 7-15. Its use of bytes, eight bits or units which can be off "0" or on "1", makes the signal output a 4-byte or 32-bit system. This is compatible with the David Sarnoff Research Center carrier, Advanced Digital Television (ADTV), for their high-definition signal, which would allow an improvement over their 1050-line digital system by substituting this signal providing up to a 2625-line signal.

PHASE I presents data coming off the sensor arrays, marking the mono-intensity and multi-intensity pixels for each of the red, blue, and green arrays. It provides sensor data as sensor unit number, pixel number, binary marker, and meaning.

PHASE II uses the binary marker output in grouping the output for applications in current-resolution generation, in the black-and-white graphic display, and in the high-definition signal. These correspond to sections in FIG. 6: combining outputs 31 and 32 to generate NTSC signal 493-312; creating image 317 using outputs 310 and 320; and creating the high-definition signal following the outputs 344 and 345 into the further compression and chromacoding systems, respectively.

PHASE III takes the three bytes of intensity, one each for red, green, and blue, for each array address and assigns a byte-sized chromacode for the specific intensity levels. Chromacode sequences are marked with "11" in transmission so receiver circuits place the data that follows in the chromadecoder.

PHASE IV marks independent and dependent frames, providing marker "01" for independent frames and "00" for dependent frames. Receiver circuitry directs the signal following "01" to paint the whole screen and that following "00" to the image-altering function. It also allows for a change-range setting for initiating independent frames. This phase can also insert independent frames on a set cadence to prevent long streams of dependent frames from delaying the full image after transmission interference or on turning on the channel.

PHASE V integrates the chromacode signal and the image signal with independent and dependent frames in transmission and in the cropping routines, automated and manual.

PHASE VI considers cropping implications of the normal and wide screen dimensions.

PHASE VII loads the 6 MHz bandwidth for signal transmission. It requires the receiver to display the previous image until the full independent image is complete for display. It facilitates the addition of other data, sound, and graphics, which can be included in the display or be transparent, needing a special receiver.

This ends the discussion of the included figures. There are several concepts presented in the text of the disclosure which must be considered and can be fully described without illustration. The first is the locator component in the dependent frames, and the second is the duration of free-time scanning in the signal progression.

First, as illustrated at 59 of FIG. 6, the dependent frame has randomly located image components and large areas of continuing image which are eliminated in the frame. Were the non-changing image given a single chromacode number, the compression routines could be activated, expanding monochrome areas to reduce the data level for the frame. Several other routines which can also be included are:

Segment cancellation, where no change is reported for a given segment of the camera image: each image array has six segments, and any of them reporting no change can be left out of the data transmission. Of course, the cropping takes portions of the top and bottom segments, if not eliminating one or the other of them; thus the top cut from this can be 525 lines or less, depending on the part of the segment included in the cropped image.

Margination, where the image is addressed starting at the image-location x,y coordinates where changes are included. If, for example, no changes occur until row 80 and crosspoint 200, then that is where the dependent image begins; the upper-left starting point of the image is 200/80. In similar fashion, a lower-right point can be determined, to the right of or below which no changes occur. This reduces the data field to the center when image changes are centered.
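The margination routine above reduces to finding the bounding rectangle of the changed points. A minimal sketch, with assumed names and a (row, crosspoint) coordinate convention:

```python
# Sketch of margination: find the smallest rectangle enclosing all
# changed points, so the dependent frame can start at its upper-left
# corner and stop at its lower-right corner.
def margins(changed):
    """changed: iterable of (row, crosspoint) coordinates of changes."""
    rows = [r for r, _ in changed]
    cols = [c for _, c in changed]
    return (min(rows), min(cols)), (max(rows), max(cols))

# The text's example: no changes until row 80, crosspoint 200.
upper_left, lower_right = margins([(80, 200), (95, 260), (120, 340)])
```

Everything outside this rectangle is known to be unchanged and need not be transmitted in the dependent frame.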

Change component addressing allows each component in the field of the dependent frame to have an x,y coordinate for the upper-left initial encounter, or the isolation of the rectangle encompassing it using a margination routine surrounding each window of change. This would make several windows within the parameters of the image for the frame changes.

Monochrome compression would block out areas of no change, reducing the image data for dependent frames considerably from that for independent frames. If the Delta Compression were in the individual photoarray processors 36, 37, and 38, then this compression would happen with no further requirements for special handling of the dependent frame processing. Here the discussion is embellished with the numbering of the no-change area, say with "0", reducing the byte number of colors to 255 actual shades rather than the previously described 256 shades.

Applying the black/white-type image 317 analysis for change, creating a new/old pixel image into which the changes defined by the data stream fit, reduces the data stream to a minimum. This way, with early Delta Compression, no-change areas are transparent to the signal and change areas are opaque and receive the signal.

Justifications of the systems for presenting dependent frames include the following. Monochrome compression requires no deviation in the method of frame presentation. For true efficiency, with some embellishment of method, the new/old pixel mapping of the signal includes the transmission of a 1/25th-resolution two-color image carrying the change locations, followed by an uninterrupted data stream filling in all changes. Margination and segment cancellation can be included with the others to advantage in some circumstances.

A modified Delta Compression is used in pan and scan camera motions, where portions of the image in the direction of movement are new and part of the previous image is removed. Using the margination routine described above, activated by motion sensors to get the proper alignment in the x,y field, the previous frame information is shifted to enable a dependent-frame signal for the repeated part of the image and full definition of the new part. Since the scene is the same, only new chromacode signal is included in the signal stream until the next independent frame.

Finally, the question of frame rate regulation needs to be addressed. Frame rate variation can substitute for F-stop adjustment. The only restriction on frame rate is in the receiver/display, where the image refreshing occurs every 30th, 60th or 59.94th of a second. This necessitates that an image be available for presentation, unchanged, at each time interval. If the transmission to the receiver came at any cadence, from once per second up to 1/120th or even 1/240th of a second, the display refreshing would always occur on schedule. In the once-per-second update, thirty to sixty presentations of the image would occur before any change in image was seen. At the faster frame transmission rates, the frame which is complete at the time of scan initiation is used; some frames received are not presented. This provides very clean animation in that it cuts down motion smear in the frame. Take, for example, sports motion. If a runner is moving quickly, a 1/30th second image makes his body extended and smeared in the direction of motion. A 1/60th second image halves the smear length, a 1/120th second image halves the smear again, and a 1/240th second image reduces it again by half. Were the image sequence to be every fourth 1/240th second image, the runner would be moving with clearer image character and less distorted features than even using every other frame of a 1/120th second image.
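The rule above, that the display always refreshes on schedule and presents the most recent complete frame while skipping the rest, can be sketched with integer time units (1/240ths of a second) to keep the arithmetic exact; all names are hypothetical.

```python
def frames_presented(arrival_ticks, refresh_period, n_refreshes):
    """Index of the frame shown at each display refresh.

    Times are in integer ticks (e.g. 1/240ths of a second). The frame
    complete at scan initiation is shown, so frames arriving between
    refreshes are skipped and slow updates are simply repeated.
    """
    shown = []
    for r in range(n_refreshes):
        t = r * refresh_period
        latest = None
        for i, arrive in enumerate(arrival_ticks):
            if arrive <= t:
                latest = i
        shown.append(latest)
    return shown
```

With frames captured at 240 Hz (one tick apart) and a 60 Hz display (refresh period of four ticks), the receiver presents every fourth 1/240th second image, exactly the case discussed in the text.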

Allowing the transmission of open frame rates, the particular receiver characteristics can determine the actual presentation frame rate. Highest quality sets can scan at 240 Hz and be served at that frame rate whenever it can be achieved. Low-priced sets can support a 30 Hz frame rate while still having the 2625-line resolution.

If the 59.94 Hz frame rate is required in the transmission, then the Delta Compression must account for progression over time during the 1/60th of a second plus interval. No set quality range based on screen refresh rate applies, and there would need to be considerable adjustment in the signal accommodation box for receivers scanning at 30 Hz.

In the case that there are more than 256 colors coded in an image, there are two ways of ensuring the signal covers the whole set of colors. The first means is to expand the chromacode bit number: by one, allowing for 512 colors; by two, allowing for 1,024 colors; and so forth, which is excellent for fine art and detailed still frames. The second means reduces the number of bits in one, two or all three colors, giving their range 128 gradients of intensity rather than 256. The full tricolor set of 256-gradient intensities yields a 16,777,216 color palette. With one color restricted to 128 gradient shades, 8,388,608 colors are possible. With two colors restricted, 4,194,304 colors are possible. And with all three colors restricted, 2,097,152 colors are possible. These restricted palettes may be fine for fast motion scenes.
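The palette arithmetic above can be checked directly: restricting a color to 128 gradients drops one bit from that component, halving the palette each time (the all-restricted case is 128³ = 2,097,152). A minimal sketch, with a hypothetical function name:

```python
def palette_size(bits_per_color):
    """Number of colors for the given per-component bit depths."""
    n = 1
    for b in bits_per_color:
        n *= 2 ** b
    return n


assert palette_size((8, 8, 8)) == 16_777_216   # full 256-gradient tricolor
assert palette_size((7, 8, 8)) == 8_388_608    # one color at 128 gradients
assert palette_size((7, 7, 8)) == 4_194_304    # two colors restricted
assert palette_size((7, 7, 7)) == 2_097_152    # all three restricted
```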

The signal described here can be created in software; some parts are convertible to firmware and hardware design. The signal produces nearly 70mm film resolution in the camera image. In the cropped image, 35mm film resolution is achieved, making this video system the only standard contender that matches the movies in resolution, and the state of the art in motion rates for sports, dance, and most industrial motion analysis, provided the light levels permit taping at a 1/240th second frame rate.

The preferred recording medium is the compact disk, since it allows editing by jumping between circular track segments. In contrast, tape requires editing along a linear carrier, with winding and rewinding between desired segments.

Finally, the accommodation of the anomalies of the photodiodes in the camera and of the light emitting diodes in receivers must be accounted for. This is a camera-specific or receiver-specific factor which is programmed at the completion of manufacturing and set in the memory of the unit, cycling to match the perByte error rate: the unit's variation over the 256 scale in photoreceptivity or in brightness. Using the perByte factor rather than a percentage allows a block conditional addition or subtraction to accommodate the extent of variation, rather than a multiply, round-off and add process.

Diodes have a linear anomaly, making this correction a simple addition or subtraction of the proportional digital amount for the level of excitation taking place on each photodiode in the camera, or for the excitation level required for display LEDs.

Expected range of correction is within plus or minus two percent, which is ±5.12 in 256 at full range. If the error rate is 2%, or 5 p/B (perBytes), then for output above 213 add five, between 170 and 212 add four, between 127 and 169 add three, between 84 and 126 add two, between 42 and 83 add one, and output below this is used as is. Output is increased or decreased according to level.
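The stepped table above can be expressed as a small lookup. This is a hypothetical sketch of the described block addition/subtraction for a 5-perByte anomaly; the thresholds come from the text, the function name and clamping to 0-255 are assumptions.

```python
def anomaly_correction(output, sign=+1):
    """Stepped correction for a unit with a 2% (5-perByte) anomaly.

    Thresholds follow the table in the text: add five above 213,
    four for 170-212, three for 127-169, two for 84-126, one for
    42-83, and nothing below. sign=-1 subtracts instead, for units
    that read high rather than low.
    """
    thresholds = (213, 170, 127, 84, 42)   # from the description above
    level = 0
    for i, t in enumerate(thresholds):
        if output >= t:
            level = 5 - i
            break
    # block addition/subtraction, clamped to the 0-255 byte range
    return max(0, min(255, output + sign * level))
```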

In the case of the camera, the signal emerging from the segments of the array is processed in the following ways: data from each area is digitized in a flash comparator outputting the signal on a 0-255 scale, and each output is then corrected for the anomalies of its respective photodiode unit; the mono- or multi-intensity determination is made; and the set is compared to that for the same area in the previous frame. The memory acts like a large wheel containing all the anomaly patterns for each area output set, imprinting the correction on the digitized output, followed by a second large wheel (or the same one with a changeable set of data) giving the just-previous-frame output, enabling immediate Delta Compression processing.

In the case of the receiver, the characteristic balance of the red, green and blue components of the display is measured to determine the voltage levels required to produce specific light levels, as compared to the calibrated amount used in all camera outputs and in relation to other LED components of the system.

This precautionary measure may be done in top-of-the-line sets more often than in economy models. The correction is made as the voltage per color comes through the chromadecoder, where the memory corrects the output for each LED unit with its anomaly factor according to excitation level, as described above.

This signal can support stereo television or 3DTV by having two sets of optical receiver arrays of, for example, three by five or four by four 700 by 525 array sizes, recording or broadcasting them simultaneously on the same video frame. This gives either an upright rectangular monofocal image of 2100 optical receiver units wide by 2625 rows or, in the second configuration, 2800 units wide by 2100 rows. The chromacode and the chromadecoder in the receivers apply to both images.

The three by five configuration fits without distortion using 4200 crosspoints by 2625 rows. The four by four configuration requires the displacement of the outer two rows placing them in pairs along the lower margin of the image.

The only difference in the signal frame is that in the margins between the stereo images there would be no intersecting common color expansions. This keeps the separation of the two sets of images discrete, allowing unencumbered dual reception of the frame segment for each side of the image. In the stereo receivers there needs to be only one set of processors, including the chromadecoder, but the leads cover two displays.

DESCRIPTION OF TABLES

Tables describing the whole signal system, from the camera light intensity input through display in several types of receivers, have been created to give an example of the data stream for this television system. These tables use the numbers 0-9 in place of the actual bytes of information which the signal processors use. They help people developing the signal software understand the system.

The Tables are as follows:

TABLE 1 - shows one optical receiver array exposed to light in the camera with lens, dichroic filters, and prism determining the light path and, following the array, the processors which prepare to analyze whether the signal bundles are mono-intensity or multi-intensity areas. The array used here as an example has six square areas of 25 outputs across in five rows. The Table is used as FIG. 7A.

TABLE 2 - shows the first movement of the data stream from the optical arrays having the voltage values of the far left area of the first set of five rows and the far right area in the second set of five rows in the first level for processing. Note that the area is recharged having voltage values of 9 throughout both areas.

TABLE 3 - shows progression of the data stream to the second stage of the processor, with the next-to-last area on the left in the first set of rows and the next-to-last area on the right in the second set of rows moved to the initial processor.

TABLE 4 - shows one more step in the data stream progression. Note that the areas whose voltages are in process are recharged to the "9" voltage level. The first sets of data bundles are in the sending position. The pointers indicate which signal is passed on through the signal processors.

TABLE 5 - shows one more step, with the left data bundle being all at voltage level "3".

TABLE 6 - shows the next step in progression with the area with a common voltage level throughout being sent on as one number, 3, rather than the whole data bundle. The previous data set remains in the left processor, but the indicator selects what is sent. This Table serves as FIG. 7B.

TABLE 7 - shows the next step with the lone "3" in the mono-intensity send area, but the pointer selecting the data bundle for transmission. Note the top ten lines of the optical receiver array are all charged at the "9" level.

TABLE 8 - shows the initial data flow from row bundles three and four, with the furthest left area in the third row and the furthest right area in the fourth row having voltage recharged. Note one output has a common intensity level.

TABLE 9 - shows continuing data flow with the mono-intensity area used on the left with the pointer selecting it to be sent and replacing the previous number. Note also exposure of the recharged area occurring on the right, second row bundle.

TABLE 10 - shows the signal progression with two multi-intensity data bundles sent to further processing.

TABLE 11 - shows more signal progression and increasing exposure of the second set of rows.

TABLE 12 - shows signal progression through the next-to-last array area in the third and fourth rows.

TABLE 13 - shows further progression, with only the fifth set of rows left to be processed. A mono-intensity area is coming from the right side outlets.

TABLE 14 - shows end of right side data stream which occurs in the full sized photoreceptor arrays since they have odd numbers of lines. The first area bundle starts in the data flow and photoresponse is occurring further in areas previously processed.

TABLE 15 - shows the right hand output set out as mono-intensity with pointer selection. The fifth row output is progressing in the processing sequence. The top row begins to respond to light.

TABLE 16 - shows the right side shut down and the first mono-intensity square from the fifth row sent on by pointer.

TABLE 17 - shows continuing flow from left side of the fifth row data. Exposure progressing in the optical photoreceptor array.

TABLE 18 - shows continuing flow from left side and increasing exposure on the optical photoreceptor array.

TABLE 19 - shows the final data array leaving the photoreceptor array and increasing exposure of the optical photoreceptor array.

TABLES 20-22 - take the data stream to the pixel integrator, where the data stream from the photoreceptor array illustrated is combined with output from the other two arrays in the red, green and blue (RGB) system. The three voltage levels are coded with a number, the code, which numbers new colors in order. The output from the left side of the photoreceptor data processors is combined on the left side of the page and the output from the right side on the right side of the page. The order of output is shown as one goes down the page. The page reference for output is given at each section. Note the few code numbers needed for the area. Intensity levels of the RGB system are expected to be consistent based on color commonness in most scenes. Note that the mono-intensity areas have mono-intensity areas in the other color arrays and have a single code number to describe the square of data. Note the color code list increasing through the thirty data area outputs. TABLE 20 is used as FIG. 8A.

TABLE 23 - shows the color code arrays as Transmission 1, and the image array color coded in a data array as Transmission 2. Note that the monochrome areas have their color represented by one code number. The color codes in the transmission array are shown at the right of the image array. This table is used as FIG. 8B for the color codes and intensity levels, and as FIG. 8C representing the image array with its color code and intensity values.

TABLE 24 - shows the string of data used in transmission and recording of the image. The top row ends in the completion of the second row. In the first data stream, representing the top row bundles of the photoreceptor array, the mono-intensity area is carried by one number followed by dashes. In contrast, the mono-intensity area signal space can be filled with chromacode, as in the case of the first row where the first three codes are carried. The initial data point in any 25 unit bundle is underlined. The constantly monochrome fifth row finishes carrying the chromacode, and the remaining areas have only the single chromacode number. This table is used as FIG. 9.

TABLE 25 - shows the first data arrays in the receiver, constructed from the data stream, which is handled consistently with the patterns used in creating the transmission data stream. The top image, used as FIG. 10A, shows the raw data stream in array. The lower image, used as FIG. 10B, shows the image array of color codes with the color code and the set of intensity levels separated.

TABLE 26 - shows the display array having translated the color code for the red, green, and blue voltage levels. The voltage may be inverted for the display. This table serves as FIG. 11A.

TABLE 27 - shows the separated colors having one color intensity for each point in the display. In displays of this type, the row bundles feed to the active transistor gates and excite the phosphor or the LED. This table is FIG. 11B.

TABLE 28 - shows the actual color groupings and the direction of flow to excite either the electron scan or LED display.

TABLE 29 - shows another color interpretation making a four color system. The black/white gradient is carried by a white emitter. The chromadecoder for this type display translates the three color group into a four color output by subtracting the lowest intensity level from all three colors and carrying the common intensity on the white emitter. This table serves as FIG. 12A.
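The three-color to four-color translation performed by this type of chromadecoder reduces to subtracting the lowest of the three intensities and carrying it on the white emitter. A minimal sketch, with a hypothetical function name:

```python
def rgb_to_rgbw(r, g, b):
    """Translate a three-color group into the four-color output
    described: the lowest intensity moves to the white emitter and
    is subtracted from all three colors."""
    w = min(r, g, b)
    return r - w, g - w, b - w, w
```

Note that the sum of each returned component with the white channel reconstructs the original color exactly, so no information is lost in the translation.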

TABLE 30 divides the row bundles for the four color image.

TABLE 31 separates the colors, leaving space between row bundles. It is included in FIG. 12B along with the first section of the following table.

TABLE 32 shows the blue image used in FIG. 12B, and the white image expanded in patterns of numbers, which comprises FIG. 13A. The lower row has a white level code and an image equivalent. The numbers key the computer input to create the images representing the intensity levels of white. This segment of the table serves as FIG. 13B.

TABLE 33 shows the image equivalent for the white intensity levels plotted for the camera output. This table serves as FIG. 13C.

TABLE 34 shows the reduction of the image from the 2625-line 5×5 array to the requirement for an image at 1050 lines, achieved by retaining only the second and fourth positions in the second and fourth rows of the 25 unit arrays. First is the number equivalent version, which is FIG. 14A. Second is the reduced image with image equivalent units; this serves as FIG. 14B. Last is a series of reductions of image arrays starting with the 25 unit square, then the 16 unit square, 9 unit square, 4 unit square, and one unit square. All are formed by removing fixed patterns to preserve the greatest diversity for arrays smaller than 25 units. The two patterns at the far right are rectangles of 3 × 2 and 2 × 3 which serve to fill where needed when reducing the image to one with other than a multiple of 525 lines. This shape pixel may be required for accommodating the signal to provide an image on displays with rectangular images. This serves as FIG. 15.
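The 2625-line to 1050-line reduction described for TABLE 34, keeping the second and fourth positions of the second and fourth rows within each 5×5 unit array, can be sketched as below; the function name and list-of-lists representation are assumptions for illustration.

```python
def reduce_5x5_to_2x2(image):
    """Reduce a 2625-line image to 1050 lines as described: within
    each 5x5 unit array, keep only the second and fourth positions
    of the second and fourth rows (indices 1 and 3, zero-based)."""
    keep = (1, 3)
    return [[image[by + r][bx + c]
             for bx in range(0, len(image[0]), 5) for c in keep]
            for by in range(0, len(image), 5) for r in keep]
```

Each 5×5 block collapses to 2×2, so a 2625-line image becomes 2625 × 2/5 = 1050 lines, matching the figure in the text.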


CLAIMS

I claim:
1. A system for processing a digital signal of an image comprising:
representing the color intensity of a unit area of the image with a unit data number;
representing squares of unit areas having common color intensity with a square data number; and
representing multiple squares of squares of unit areas having common color intensity with a multiple square data number.
2. The system according to claim 1, further comprising:
in a motion sequence of consecutive frames, comparing the data numbers of a frame with the data numbers of the previous frame; and
transmitting the data numbers of the frame which are changed from the data numbers of the previous frame.
3. The system according to claim 1, further comprising:
for an image comprised of three or more component colors, combining data numbers from corresponding areas of each component color; and
coding the combined data numbers with an image signal data number and a color code data number.
4. The system according to claim 3, further comprising decoding the image code data number and the color code data number to reinstate the data numbers for each component color and
interpreting the data numbers with image equivalents therefor.
5. The system according to claim 1, further comprising, in the event that the unit data number of a particular unit area of the image is missing, then representing the color intensity of that particular unit area with a unit data number which matches the majority of unit data numbers of surrounding unit areas of the image.
6. The system according to claim 1, further comprising, in the event that a unit data number of a particular unit area for one color of an image comprising three or more component colors is missing, then representing the color intensity of that particular unit area with a unit data number which matches the unit data numbers of the corresponding unit areas of the other component colors.
7. The system according to claim 1, further comprising:
deriving a special image by mapping a first pattern of squares of unit areas having multiple color intensities thereby making a coloring book image;
mapping a second pattern of squares of unit areas having common color intensities;
overlaying the second pattern over the first pattern;
mapping a third pattern of change in color intensities of an image from those in a previous image; and
overlaying the third pattern over the first pattern.
8. The system according to claim 1, further comprising
providing a means for cropping the image to correspond to a standard or to a wide screen aspect.
9. The system according to claim 7, further comprising
providing a means for cropping the image by using the special image to center the cropped image.
10. The system according to claim 3, further comprising:
representing sound with sound data numbers;
combining image signal data numbers and color code data numbers with sound data numbers; and
adding key signals to direct output in receiver units to decode the data numbers to reinstate the image and sound.
11. The system according to claim 10, further comprising further combining the data numbers with digital data numbers representing graphics over the image or for altering color components of the image or for altering portions of the image.
12. The system according to claim 1, further comprising
selecting systematically a set of data numbers to reinstate the image with a resolution corresponding to a resolution of a receiver.
13. The system according to claim 2, further comprising
transmitting at certain time intervals the data numbers
representing a complete image.
14. A system of processing a digital audiovisual signal comprising:
representing each unit area of each component color of an image with a color intensity data number;
compressing the data numbers by:
comparing the color intensity data numbers for square areas of the image, and representing any square area of mono-intensity color with a square data number;
combining and coding the color intensity data numbers of corresponding areas of each component color; and
comparing the data numbers of one frame in a motion sequence with the data numbers in a previous frame and selecting for transmission only those data numbers which are changed from the previous frame; and
transmitting the compressed data numbers.
15. The system according to claim 14, further comprising receiving, decoding and interpreting the compressed data numbers to reinstate the image.
16. The systems according to claim 15, further comprising selecting systematically a set of data numbers to alter the image.
17. A digital audiovisual signal comprising:
a digital data stream for the visual component of the signal;
a digital data stream for the audio component of the signal; a chromacode; and
key signals to direct output in receiver units for decoding and interpreting the signal.
18. The digital audiovisual signal according to claim 17, wherein the digital data stream comprises a compressed set of data numbers comprising:
a color code data number to represent combined color intensity data numbers from corresponding optical receivers for three or more component colors;
a square data number to represent a square area of mono-intensity color of the image, and data numbers in one frame which are different from corresponding data numbers of a previous frame in a motion sequence.
19. The digital audiovisual signal according to claim 18, wherein the chromacode comprises chromacode data numbers coded from the combined color intensity numbers.
PCT/US1993/000980 1992-01-29 1993-01-28 Digital audiovisual signal for 35mm equivalent television WO1993015587A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US82745892 true 1992-01-29 1992-01-29
US07/827,458 1992-01-29

Publications (1)

Publication Number Publication Date
WO1993015587A1 true true WO1993015587A1 (en) 1993-08-05



Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1993/000980 WO1993015587A1 (en) 1992-01-29 1993-01-28 Digital audiovisual signal for 35mm equivalent television

Country Status (1)

Country Link
WO (1) WO1993015587A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4656500A (en) * 1983-04-27 1987-04-07 Fuji Photo Film Co., Ltd. Adaptive type compression method for compressing a color image by imparting predetermined variable-length codes to combinations of quantized values based upon quantized prediction error components
US5047853A (en) * 1990-03-16 1991-09-10 Apple Computer, Inc. Method for compressing and decompressing color video data that uses luminance partitioning
US5073820A (en) * 1989-10-31 1991-12-17 Olympus Optical Co., Ltd. Image data coding apparatus and coding method
US5122873A (en) * 1987-10-05 1992-06-16 Intel Corporation Method and apparatus for selectively encoding and decoding a digital motion video signal at multiple resolution levels



Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AU BG CA CZ FI HU JP KP KR NO NZ PL RO RU SK UA

AL Designated countries for regional patents

Kind code of ref document: A1


COP Corrected version of pamphlet


DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase in:

Ref country code: CA