WO1991013519A1 - Improvements in or relating to colour television - Google Patents

Improvements in or relating to colour television

Info

Publication number
WO1991013519A1
WO1991013519A1 (PCT/GB1991/000327, GB9100327W)
Authority
WO
WIPO (PCT)
Prior art keywords
lines
signal
composite video
component
video signal
Prior art date
Application number
PCT/GB1991/000327
Other languages
French (fr)
Inventor
Paola Fabrizi
Original Assignee
National Transcommunications Limited
Priority date
Filing date
Publication date
Application filed by National Transcommunications Limited filed Critical National Transcommunications Limited
Publication of WO1991013519A1 publication Critical patent/WO1991013519A1/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00Details of colour television systems
    • H04N9/77Circuits for processing the brightness signal and the chrominance signal relative to each other, e.g. adjusting the phase of the brightness signal relative to the colour signal, correcting differential gain or differential phase
    • H04N9/78Circuits for processing the brightness signal and the chrominance signal relative to each other, e.g. adjusting the phase of the brightness signal relative to the colour signal, correcting differential gain or differential phase for separating the brightness signal or the chrominance signal from the colour television signal, e.g. using comb filter

Abstract

Chrominance and luminance component signals relatively free from cross-effects may be extracted from a composite video signal by arithmetically combining two lines of the composite video signal, the subcarrier phase difference between the two lines approximating to π and the two lines being selected so as to contain component information relating to two lines in the image where the magnitude of each component signal will be approximately the same. Preferably the composite video signal is encoded so that each line thereof contains averaged component information relating to said two lines of the image.

Description

Improvements in or relating to Colour Television.
The present invention concerns methods and apparatus for improving the quality of colour television signals and, in particular, such methods and apparatus applied to composite television signals (ie signals in which the luminance and chrominance components are transmitted in a frequency-division multiplex) such as PAL, NTSC and SECAM.
As is well-known, composite colour television signals suffer from the defects of "cross-colour" and "cross-luminance" due to the spectrum sharing of the luminance and chrominance components of the colour TV signal. In addition to causing irritating visible effects when the colour television signal is displayed, these defects can also prevent the satisfactory performance of various image processing tasks, such as chroma-keying.
Various proposals have been made for the elimination of cross-colour and cross-luminance; however, the present invention provides new methods and apparatus for tackling these problems which are particularly simple and effective.
The following discussion of the principle underlying the invention is given in terms of a PAL signal; however, it is to be understood that the principle may be extended to other composite colour television signals.
A television signal S_K(t) representing a line K in a frame r of a PAL signal may be expressed as follows:
S_K(t) = Y_K(t) + U_K(t) sin(ω_sc t) + V_K(t) cos(ω_sc t)    (1)
where Y, U and V are the usual luminance and respective chrominance components, and ω_sc is the sub-carrier angular frequency.
A spatially-neighbouring line (K+312) in the succeeding field of frame r will be 312 lines away from line K, since a frame of PAL comprises 625 lines (the corresponding distance for an NTSC signal is 262 lines), and the sub-carrier phase will have changed by approximately π radians. Thus line K+312 may be expressed as follows:
S_{K+312}(t) = Y_{K+312}(t) + U_{K+312}(t) sin(ω_sc t + π + δφ) + V_{K+312}(t) cos(ω_sc t + π + δφ)    (2)
where δφ is the phase error, ie the angle by which the sub-carrier phase at line K+312 varies from π.
Now, if it is assumed that the change of a television image from the first field of a frame to the next field is negligible, then:
Y_K = Y_{K+312},  U_K = U_{K+312},  and  V_K = V_{K+312}    (3)
Substituting these values into equation (2) and subtracting equation (2) from equation (1) (assuming that the phase error δφ is negligible) gives:
S_K(t) - S_{K+312}(t) = 2U_K(t) sin(ω_sc t) + 2V_K(t) cos(ω_sc t)    (4)
(since sin θ = -sin(θ + π) and cos θ = -cos(θ + π))
Equation (4) contains no luminance term; thus it is possible to extract colour difference information free from cross-luminance effects from a signal corresponding to the difference between neighbouring lines in successive fields of a frame.
It may be seen that subtraction of 1/2 of equation (4) from equation (1) gives:
S_K(t) - 1/2(S_K(t) - S_{K+312}(t)) = Y_K(t)    (5)
Equation (5) contains no chrominance term; thus it is possible to extract luminance information free from cross-colour effects from a signal corresponding to a line, K, in a first field of a frame minus half the difference between that line, K, and the neighbouring line, K+312, in the succeeding field of the frame.
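The cancellations in equations (4) and (5) can be checked with a short numerical sketch; the sub-carrier frequency, time axis and component values below are arbitrary illustrative choices rather than real PAL parameters, and the phase error δφ is neglected as above.

```python
import numpy as np

t = np.arange(0.0, 1e-6, 1e-9)        # arbitrary time axis (s)
w_sc = 2 * np.pi * 4.43e6             # illustrative sub-carrier angular frequency (rad/s)
Y, U, V = 0.6, 0.2, -0.1              # assumed identical component values on lines K and K+312

s_k    = Y + U * np.sin(w_sc * t) + V * np.cos(w_sc * t)                  # equation (1)
s_k312 = Y + U * np.sin(w_sc * t + np.pi) + V * np.cos(w_sc * t + np.pi)  # equation (2), phase error neglected

diff = s_k - s_k312                   # equation (4): chrominance only, at twice amplitude
luma = s_k - 0.5 * diff               # equation (5): luminance only

assert np.allclose(diff, 2 * (U * np.sin(w_sc * t) + V * np.cos(w_sc * t)))
assert np.allclose(luma, Y)
```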
A subjective improvement in the quality of a displayed television image derived from a conventional initial composite signal may be seen if, at the television signal decoder, circuitry is included to produce a signal according to equation (4), halve it (with the effect as shown in Fig. 1 ), then feed the resultant signal to the chrominance input of the decoder, and circuitry is included to subtract said resultant signal from the initial received signal to produce a signal according to equation (5) and feed this signal to the luminance input of the decoder. This results in a small reduction in cross-colour, and a substantial reduction in cross-luminance without the use of a notch filter (thereby improving horizontal resolution compared to standard PAL). Thus a new-style television receiver according to the invention, or a conventional receiver with an add-on unit according to the invention, would be completely compatible with conventional broadcast signals.
In practice there will usually be differences between the television images of successive fields of a frame, the size of these differences being related to the amount of movement occurring in the scene represented by the television signal. Thus the equalities given in equations (3) will not be exactly correct and the precise cancellation of terms involved in producing equations (4) and (5) will not occur. This problem may be overcome by using in the television signals for lines K and K+312 the average values of Y, U and V, ie by replacing Y_K and Y_{K+312} with (Y_K + Y_{K+312})/2 and making the corresponding replacements for U and V. This may be done as a pre-processing step at the television signal encoder. The effect of this intra-frame averaging on the television signal is shown in Fig. 2.
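A minimal sketch of this encoder-side intra-frame averaging, assuming each component field is held as a NumPy array of shape (lines, samples) and that row i of the second field is the line 312 lines after row i of the first field; the function name is illustrative only.

```python
import numpy as np

def intra_frame_average(component_field1, component_field2):
    """Average line K with line K+312 so that both carry identical component values."""
    averaged = 0.5 * (component_field1 + component_field2)
    return averaged, averaged.copy()   # both fields of the frame now hold the averaged lines
```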
It has been found that a small reduction in both cross-effects results when the new encoding is carried out but a simple-PAL decoder is used to decode the signal. This may be further improved if a delay-line decoder (PAL-D) is used. Thus a new-style transmission would be completely compatible with existing receivers. Greater benefits are obtained if both the new encoding and the new decoding are applied to a composite television signal. In such a case cross-effects in the television picture are eliminated using only two paired intra-frame processors, one at the encoder and one at the decoder. Compared with recently developed so-called "clean-PAL" methods this is very simple. The above method is applicable in both moving and stationary areas because the processing does not cross frame boundaries.
In the above discussion the new encoding and decoding processes have been described in terms of one particular algorithm for producing (at the receiver) chrominance and luminance free of cross-colour and cross-luminance, namely: a) for chrominance, subtract input signal at the receiver from that input a field ago and divide by 2; and b) for luminance, subtract from the input signal at the receiver the signal produced by step a).
It is to be understood that manipulation of equations (1) to (3) can result in a number of different algorithms for producing chrominance and luminance; for example another suitable algorithm would be: a) for luminance, add the input signal at the receiver to that received a field ago and divide by two (ie 1/2(S_K(t) + S_{K+312}(t)) = Y_K(t)); and b) for chrominance, subtract from the input signal at the receiver the signal produced by step a).
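A sketch of this alternative decoder-side algorithm, under the same toy assumptions as the numerical check above (two corresponding composite lines one field apart, carrying identical pre-conditioned components):

```python
def decode_sum_variant(s_k, s_k312):
    luma = 0.5 * (s_k + s_k312)   # step a): add the line received a field ago and divide by two
    chroma = s_k - luma           # step b): subtract the luminance estimate from the input line
    return luma, chroma
```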
This is only one variation of the basic algorithm, and many others exist. One method could use frame (rather than field) delays, and an inferior algorithm using line delays is also possible.
The success of the preferred method (in which new encoding and new decoding are used) and its variations depends on two factors: that the colour subcarrier is shifted by 180° between the two lines to be processed, and that the luminance and colour difference components are preconditioned to have the same values on the two lines to be processed.
In theory any pair of lines satisfying these two conditions may be used for the preferred method. However the selection of image portions used at the encoder to produce the luminance and colour difference component values for the two lines to be processed will influence the resolution of the image displayed at the receiver.
When the luminance and colour difference components of the two lines to be processed are conditioned to be the average of the values for a line and its neighbour a field away then for most picture material the vertical-temporal filtering this involves does not introduce significant defects. In particular the method does not result in the resolution losses associated with other, "clean-PAL", methods. However the temporal processing may introduce judder and loss of resolution in moving areas of some scenes (at certain critical spatial frequencies and movement speeds). This can be overcome by providing an additional step such that low and high frequencies of the luminance signal are processed differently. Addition of split processing of high and low frequencies removes the unwanted temporal artifacts while still resulting in the elimination of cross-colour and cross-luminance. The additional complexity of the split method amounts to an extra horizontal filter and further addition/subtraction units at both the coder and decoder.
The processing according to the present invention is also compatible with video recorders using standard component or composite input signals, and does not restrict future uses which may be made of currently unused spectral spaces in standard composite signals (eg spaces in a PAL spectrum).
Further features and advantages of the present invention will become clear from the following description of embodiments thereof, given by way of example, in which:
Fig. 1 illustrates the effect of intra-frame differencing on the composite video signal;
Fig. 2 illustrates the effect of intra-frame averaging on the luminance component of a video signal;
Fig. 3 is a block diagram representing a coder/decoder pair according to one embodiment of the invention, for a PAL television signal;
Fig. 4 is a block diagram illustrating the structure of the intra-frame averaging unit of Fig. 3;
Fig. 5 is a block diagram illustrating the structure of the intra-frame differencing and ÷2 unit of Fig. 3;
Fig. 6 is a block diagram representing the luminance path through a coder according to another embodiment of the invention, for a PAL television signal;
Fig. 7 is an example of a coder (luminance section) according to Fig. 6;
Fig. 8 is a block diagram representing a decoder according to a further embodiment of the invention, for a PAL television signal, which may be used in conjunction with the coder of Fig. 4; and
Fig. 9 is an example of a decoder according to Fig. 8.
As described above greater benefits are obtained if a composite video signal is subjected both to pre-processing before encoding, and to appropriate, preferably intra-frame, processing before decoding. Fig. 3 illustrates, in block diagrammatic form, the elements of a coder/decoder pair for implementing such a processing method in relation to PAL television signals.
In the encoder portion of the system of Fig. 3 the luminance and chrominance component signals, Y, U and V respectively, are produced in a conventional manner and are then fed to an intra-frame averaging unit 1. In the intra-frame averaging unit 1 the luminance signal and each of the chrominance signals are separately subjected to an intra-frame averaging process, ie each component for a line K from the first field is averaged with the corresponding component from a line spaced 312 lines away from it in time. The effect of this process on the luminance component signal is as shown in Fig. 2; a corresponding effect is produced on each of the chrominance component signals.
The averaged signals are then encoded in the PAL coder unit 3 in a conventional manner (eg the averaged chrominance component signals U, V are modulated onto a sub-carrier in quadrature, the modulated chrominance signal is combined with the averaged luminance signal, a colour burst and synchronisation signals, and the resultant signal is modulated onto a carrier).
The structure of the intra-frame averaging unit 1 is illustrated diagrammatically in Fig. 4. The structure and function of unit 1 will be described in relation to processing of the luminance component only; extension to all three components is straightforward.
The unit 1 includes a field delay 5 into which luminance data is written and from which data is read out under control of a controller 10. The incoming luminance component signal is added to the output of the field delay 5 in an adder 6 to produce a sum signal. A first gate, G1 , is provided to selectively feed either the incoming luminance component signal, or the sum signal output from adder 6, to the field delay 5 for storage. A second gate, G2, is provided to selectively feed either the output of the field delay 5, or the sum signal output from adder 6, to a divide by two unit 7. The operation of the first and second gates of unit 1 is controlled by the controller 10. The output of the divide by two unit 7 forms the output of the intra-frame averaging unit 1.
The operation of the unit 1 shown in Fig. 4 will now be described. Considering a frame r, composed of a first and a second field, being input to the unit 1: first of all, the luminance data being input to unit 1 on line a represents successive lines of the first field. These lines are fed through gate G1 and written into successive memory locations in the field delay 5, under the control of the controller 10. When all of the lines of the first field have been received and stored in the field delay 5 the operating condition of gate G1 is changed by the controller 10 so as to prevent further incoming data on line a from passing to the field delay 5.
The next input luminance data corresponds to the first line of the second field and it is fed via line b to an input of an adder 6. At the same time, luminance data corresponding to the first line of the first field is read out from field delay 5 and fed to another input of adder 6. Adder 6 calculates the sum of the two lines and outputs a sum signal. This sum signal is fed to gate G2 on line c and gate G2 is conditioned to pass the signal to the divide by two unit 7. The divide by two unit 7 produces and outputs a signal representing the intra-frame average.
The sum signal output by the adder 6 is also fed to gate G1 on line d and gate G1 is conditioned to pass the signal to the field memory 5. The sum signal is written into the field delay 5 in the place vacated by the last read out line (ie the sum of lines K and K+312 is written into the space which previously stored line K).
The above procedure is repeated for the luminance data of each successive line of the second field as it is input to the unit 1 until all of the intra-frame averages have been calculated and output from unit 1 via the divide by two unit 7. At the end of this process the field delay 5 contains data corresponding to the set of intra-frame sums for frame r. The operating condition of gate G1 is then changed back by the controller 10 so as to prevent further intra-frame sum data on line d from being written into the field delay 5, and to enable input data on line a to be written into the field delay 5. The operating condition of gate G2 is also changed by controller 10 so as to prevent further data from adder 6 from passing to the divide by two unit 7 and to enable data read out from the field memory 5 to pass to the divide by two unit.
When the next luminance data is input to the unit 1 it will represent the first line of the first field of frame r+1. As this data is input to unit 1 it is written into field memory 5 through gate G1. Meanwhile data corresponding to the first intra-frame sum for frame r is read out of the field delay 5 and passes through gate G2 to the divide by two unit 7. The space in field delay 5 vacated by the first intra-frame sum of frame r is occupied by the newly written luminance data on the first line of frame r+1. Similarly, each successively input line of the first field of frame r+1 is written into field delay 5 and replaces the previously stored intra-frame sums for frame r (which are read out, divided by two in unit 7, and output) .
Thus for each first field of a frame which is input to unit 1 the corresponding output will be a set of newly-calculated intra-frame averages, whereas for each second field of a frame which is input to unit 1 the corresponding output will be a set of stored intra-frame sums ÷2. In each case the output signal is delayed with respect to the input signal by a time equivalent to one field.
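The gate sequencing described above can be condensed into a functional software model; this is an illustrative sketch only (not cycle-accurate), it handles a single frame, and it ignores the one-field output delay. Fields are assumed to be NumPy arrays of shape (lines, samples).

```python
import numpy as np

def averaging_unit_model(first_field, second_field):
    """Functional model of the Fig. 4 intra-frame averaging unit for one component."""
    field_delay = first_field.copy()            # first field written into the field store via G1
    sums = field_delay + second_field           # adder 6: line K plus line K+312
    out_during_second_field = sums / 2.0        # routed through G2 to the divide-by-two unit 7
    field_delay = sums                          # sums written back into the store via G1 (line d)
    out_during_next_first_field = field_delay / 2.0   # stored sums read out and halved as frame r+1 arrives
    return out_during_second_field, out_during_next_first_field
```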
In the decoder portion of the system of Fig. 3 a composite PAL signal is fed to an intra-frame differencing and ÷2 unit 11 and to a subtractor 12. In the intra-frame differencing and ÷2 unit 11 a signal representing U_av(t) sin(ω_sc t) + V_av(t) cos(ω_sc t) is produced (where U_av = 1/2(U_K + U_{K+312}) and V_av = 1/2(V_K + V_{K+312})) in accordance with equation (4) above, by a method that is explained in greater detail below. The output of the intra-frame differencing and ÷2 unit 11 is fed to subtractor 12 which outputs a signal representing Y_av(t) (where Y_av = 1/2(Y_K + Y_{K+312})) in accordance with equation (5) above. The outputs of the intra-frame differencing and ÷2 unit 11 and the subtractor 12 are then decoded in a conventional manner, using chrominance demodulator 13, to produce luminance and chrominance components Y_av, U_av and V_av.
The structure of the intra-frame differencing and ÷2 unit 11 is illustrated diagrammatically in Fig. 5. The unit 11 includes a field delay 15 into which data is written and from which data is read out under the control of a controller 20. The incoming PAL composite signal is subtracted from the output of the field delay 15 in a subtractor 17 to produce a difference signal. A first gate, G3, is provided to selectively feed either the incoming PAL composite signal, or the difference signal output by the subtractor 17, to the field delay 15 for storage. A second gate, G4, is provided to selectively feed either the output of the field delay 15, or the difference signal output by the subtractor 17, to a divide by two unit 22. The operation of the first and second gates is controlled by the controller 20. The output of the divide by two unit 22 forms the output of the intra-frame differencing and ÷2 unit 11.
The operation of the unit 11 shown in Fig. 5 will now be described. Considering a frame r, composed of a first and a second field, being input to the unit 11: first of all, the data being input to unit 11 on line e represents successive lines of the first field. These lines are fed through gate G3 and written into successive memory locations in the field delay 15, under the control of the controller 20. When all of the lines of the first field have been received and stored in the field delay 15 the operating condition of gate G3 is changed by the controller 20 so as to prevent further incoming data on line e from passing to the field delay 15.
The next input data corresponds to the first line of the second field and it is fed via line f to the negative input of subtractor 17. At the same time, data corresponding to the first line of the first field is read out from field delay 15 and fed to the positive input of subtractor 17. Subtractor 17 calculates the difference between the two lines (intra-frame difference) and outputs an intra-frame difference signal. This intra-frame difference signal is fed to gate G4 on line g and gate G4 is conditioned to pass the signal to the divide by two unit 22. The divide by two unit 22 produces and outputs a signal representing 1/2 the intra-frame difference. The intra-frame difference signal output by the subtractor 17 is also fed to gate G3 on line h and gate G3 is conditioned to pass the signal to the field memory 15. The intra-frame difference signal is written into the field delay 15 in the place vacated by the last read out line (ie the difference between line K and line K+312 is written into the space which previously stored line K) .
The above procedure is repeated for each successive line of the second field as it is input to the unit 11 until all of the intra-frame differences have been calculated and output from unit 11 via the divide by two unit 22. At the end of this process the field delay 15 contains data corresponding to the set of intra-frame differences for frame r. The operating condition of gate G3 is then changed back by the controller 20 so as to prevent further intra-frame difference data on line h from being written into the field delay 15, and to enable input data on line e to be written into the field delay 15. The operating condition of gate G4 is also changed by controller 20 so as to prevent further data from subtractor 17 from passing to the divide by two unit 22 and to enable data read out from the field memory 15 to pass to the divide by two unit.
When the next data is input to the unit 11 it will be the first line of the first field of frame r + 1. As this data is input to unit 11 it is written into field memory 15 through gate G3. Meanwhile data corresponding to the first intra-frame difference for frame r is read out of the field delay 15 and passes through gate G4 to the divide by two unit 22. The space in field delay 15 vacated by the first intra-frame difference of frame r is occupied by the newly written data on the first line of frame r+1. Similarly, each successively input line of the first field of frame r+1 is written into field delay 15 and replaces the previously stored intra-frame differences for frame r (which are read out, divided by two in unit 22, and output) .
Thus for each first field of a frame which is input to unit 11 the corresponding output will be a set of newly-calculated intra-frame differences ÷2, whereas for each second field of a frame which is input to unit 11 the corresponding output will be a set of stored intra-frame differences ÷2. In each case the output signal is delayed with respect to the input signal by a time equivalent to one field.
The signals output from unit 11 are fed to chrominance demodulator 13, and to subtractor 12 to produce a pure luminance component signal in accordance with equation (5).
It may be seen from the above that effectively the same set of signals is output from unit 11 in respect of the first field of a frame as is output in respect of the second field of that frame, namely 1/2 (intra-frame difference between line K and line K+312). This signal represents U_av(t) sin(ω_sc t) + V_av(t) cos(ω_sc t), as mentioned above, but the polarity of these components will be the same for both fields of the frame. However in a composite PAL signal the polarity of each chrominance component of the signal normally reverses from one field to the next (since the phase of the sub-carrier will have changed by approximately π radians and cos(θ + π) = -cos θ while sin(θ + π) = -sin θ). Thus the chrominance demodulator 13 will misinterpret the true polarity of the chrominance components for one field of each frame. Furthermore in subtractor 12, instead of obtaining a cancellation of chrominance components the subtraction will produce a reinforcement thereof for one field of each frame.
The above problem may be overcome by performing an inversion of polarity for one field of each frame. This may be carried out, for example, on the intra-frame difference signals fed to divide by two unit 22 (e.g. using a sign inverter as shown in broken lines in Fig. 5) or on the output of the unit 11. Alternatively measures may be taken at the subtractor 12 and the chrominance demodulator 13 to take into account the anomalous polarity of the chrominance components from unit 11 in one field of each frame.
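A corresponding functional sketch of unit 11 together with subtractor 12, with the polarity inversion discussed above applied to one of the two output fields; composite fields are assumed to be NumPy arrays of shape (lines, samples), with row i of the second field lying 312 lines after row i of the first.

```python
import numpy as np

def differencing_unit_model(composite_field1, composite_field2):
    """Functional model of the Fig. 5 unit 11 plus subtractor 12, for one frame."""
    diff = composite_field1 - composite_field2        # subtractor 17: S_K - S_(K+312)
    half_diff = diff / 2.0                            # divide-by-two unit 22, per equation (4)
    chroma_field1 = half_diff                         # modulated chrominance for the first field
    chroma_field2 = -half_diff                        # sign inverted to restore normal PAL field alternation
    luma_field1 = composite_field1 - chroma_field1    # subtractor 12, per equation (5)
    luma_field2 = composite_field2 - chroma_field2
    return (luma_field1, luma_field2), (chroma_field1, chroma_field2)
```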
The controller 20 is synchronised to the composite PAL signal which is input to the decoder. This synchronisation may be achieved through the provision of a sync detector, either separate from or associated with the controller 20, which detects the line and field sync signals in the input composite PAL signal and outputs appropriate synchronisation signals to the controller 20.
Since the processing carried out in the coder/decoder pair of Fig. 3 is intra-frame, ie it does not cross frame boundaries, only relatively small temporal artifacts are introduced. Also, since only two lines are averaged at a time, and vertical processing within a field is not performed, vertical resolution is only slightly reduced compared with standard PAL.
As mentioned above the temporal processing can introduce subjectively annoying judder and loss of resolution at certain critical spatial frequencies and movement speeds. To overcome this, some additional processing, requiring a one-dimensional filter at each of the encoder and decoder, can be added to the system of Fig. 3.
Fig. 6 shows the luminance path in a coder incorporating the modified system. In this encoder the luminance component of the source is separated into components typically below 2.5MHz, Y_LP, and those above 2.5MHz, Y_HP. The Y_LP components are not processed, but the Y_HP components are intra-frame averaged and added back to Y_LP. This is because the Y_LP components do not produce cross-colour and therefore do not need pre-processing. U and V are intra-frame averaged as before, without requiring split processing, and the recombined YUV source is PAL coded using a standard PAL coder. Note that the split low pass/high pass processing does not necessitate the use of extra field delays - see the implementation of a luminance pre-processor shown in Fig. 7. A variety of hardware implementations are possible; for example the Fig. 6 pre-processor could alternatively be provided by a structure using two field stores and one low pass filter.
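One way to sketch this split-band pre-processing in software is shown below; the 13.5 MHz sampling rate, the 63-tap zero-phase FIR filter and the function name are illustrative assumptions standing in for whatever filtering the hardware of Figs. 6 and 7 actually uses.

```python
import numpy as np
from scipy.signal import firwin, filtfilt

FS = 13.5e6                              # assumed luminance sampling rate (samples per second)
LP_TAPS = firwin(63, 2.5e6, fs=FS)       # low-pass FIR with a cut-off of about 2.5MHz

def split_band_preprocess(y_field1, y_field2):
    """Intra-frame average only the high-frequency luminance, as in Fig. 6."""
    y_lp1 = filtfilt(LP_TAPS, [1.0], y_field1)            # Y_LP: left untouched
    y_lp2 = filtfilt(LP_TAPS, [1.0], y_field2)
    y_hp1, y_hp2 = y_field1 - y_lp1, y_field2 - y_lp2     # Y_HP: complement of the low band
    y_hp_avg = 0.5 * (y_hp1 + y_hp2)                      # intra-frame average of Y_HP only
    return y_lp1 + y_hp_avg, y_lp2 + y_hp_avg             # recombined luminance for each field
```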
Fig. 8 shows a decoder incorporating the modified system, which is preferably used with the coder of Fig. 6. At the receiver, the composite signal is separated into components above and below 2.5MHz, C_HP and C_LP respectively. The low pass components contain luminance only, Y_LP, and no further processing is necessary. The high pass components, C_HP, are intra-frame differenced, divided by two and subtracted as before to give Y_HP and colour difference signals. Demodulation of the colour difference signal gives U and V, and Y_HP is added to Y_LP to recover the full luminance bandwidth.
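A companion sketch of the receiver side, reusing LP_TAPS from the encoder sketch above; demodulation of the colour difference signal is omitted, and the same caveats about the illustrative filter and names apply.

```python
from scipy.signal import filtfilt

def split_band_decode(c_field1, c_field2):
    """Fig. 8 style decoding: intra-frame processing applied to the high band only."""
    c_lp1 = filtfilt(LP_TAPS, [1.0], c_field1)            # C_LP: low-band luminance only (Y_LP)
    c_lp2 = filtfilt(LP_TAPS, [1.0], c_field2)
    c_hp1, c_hp2 = c_field1 - c_lp1, c_field2 - c_lp2     # C_HP: high-band luminance plus chrominance
    half_diff = 0.5 * (c_hp1 - c_hp2)                     # intra-frame difference divided by two, equation (4)
    y_hp1 = c_hp1 - half_diff                             # equation (5) applied to the high band
    y_hp2 = c_hp2 + half_diff                             # opposite chrominance polarity in the other field
    return c_lp1 + y_hp1, c_lp2 + y_hp2, (half_diff, -half_diff)   # full-band Y per field, modulated chroma
```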
The modified decoder of Fig. 8 may be implemented in a number of ways, such as using the apparatus of Fig. 9. Other structures are possible, for example moving the low pass filter to the input and using two field stores after that to store the low pass and high pass elements.
The additional filtering and split processing removes the judder and improves resolution in moving areas. Since frequencies below about 2.5MHz do not contribute to cross-colour, the processing still eliminates all cross-colour and cross-luminance. No additional field delays are required to supplement the split processing in either encoder or decoder.
The above description of detailed embodiments of the invention has dealt with apparatus implementing only one of the various possible algorithms arising from manipulation of equations (1) to (3). Clearly there are further embodiments of the invention, adapted for implementation of others of the possible algorithms, which have not been described in detail. Similarly there are various different arrangements of hardware suitable for implementing each of the algorithms, only a few of which have been discussed in depth.
Also it may be remembered that in equations (1) to (5) above the phase error term δφ was neglected. It should be clear from the above text that correct cancellation of terms relies on the phase of the colour sub-carrier changing by exactly 180° between lines spaced a field apart. If the δφ term is included, this phase change is not exactly 180°, and equation (4) becomes:
S_K(t) - S_{K+312}(t) = U_K(t) sin(ω_sc t)[1 + cos δφ] + V_K(t) cos(ω_sc t)[1 + cos δφ] + U_K(t) cos(ω_sc t) sin δφ - V_K(t) sin(ω_sc t) sin δφ    (A1)
From equation (A1) it is seen that the 2U_K(t) and 2V_K(t) terms of equation (4) are reduced to [1 + cos δφ]U_K(t) and [1 + cos δφ]V_K(t) respectively. δφ may be calculated as the difference between the actual phase shift and π, ie for a standard composite PAL signal:
δφ = 2π × [0.5 - (312 × 64 × 10^-6 × 25)]    (number of lines offset × line time (s) × 25 Hz)
   = 2π × 8 × 10^-4    (A2)
Therefore, the reduction in amplitude of U_K(t) and V_K(t) is 2 - [1 + cos(16π × 10^-4)] = 1.26 × 10^-5, which is negligible.
From (A1) it may also be seen that additional components are introduced which will cause cross-talk between U_K(t) and V_K(t), and cross-talk into luminance. These components will have amplitude -46dB referenced to the maximum colour difference excursions, and can therefore be neglected.
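The arithmetic behind equation (A2) and the two figures quoted above can be reproduced directly; the 312-line offset, 64 microsecond line time and 25 Hz factor are taken from the formula in the text.

```python
import numpy as np

line_time = 64e-6       # PAL line period in seconds
lines_offset = 312      # spacing between the two processed lines
offset_hz = 25.0        # 25 Hz term of the PAL sub-carrier offset

delta_phi = 2 * np.pi * (0.5 - lines_offset * line_time * offset_hz)   # equation (A2): 2*pi*8e-4 rad

amplitude_reduction = 2 - (1 + np.cos(delta_phi))    # loss in the [1 + cos(dphi)] factor, about 1.26e-5
crosstalk_db = 20 * np.log10(np.sin(delta_phi))      # level of the sin(dphi) cross-talk terms, about -46 dB

print(delta_phi, amplitude_reduction, round(crosstalk_db, 1))
```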
In the preferred embodiments of the invention described above, component signals relatively free from cross-effects are extracted from composite video signals by combining two lines of the composite video signal, the subcarrier phase difference between the two lines of the composite video signal approximating to π and the two lines containing component information relating to two lines of the image which are spatially-neighbouring but one field apart. As mentioned above, the lines to be combined could contain component information relating to spatially-coincident lines (i.e. lines one or more frames apart) or to other spatially-neighbouring lines (i.e. lines in the same or different fields but close to one another); the important point is that the respective component information relating to the two image lines should have approximately the same magnitude for each of the two image lines.
The invention has been described in terms of combining two lines of a composite video signal containing information relating to two lines of an image. It is possible to envisage the situation where, with a careful choice of image lines and lines of the composite signal according to the principles of the invention, more than two lines of the composite signal, and/or information relating to more than two lines of the image, may be used.

Claims

CLAIMS:
1. A method of extracting a component signal from a composite video signal whereby to reduce cross-effects in the extracted component signal, the method comprising arithmetically combining a plurality of lines of the composite video signal so as to substantially cancel the unwanted component, the subcarrier phase difference between lines of said plurality of lines approximating to π, and said plurality of lines of the composite video signal containing component information relating to spatially-coincident or spatially-neighbouring lines of the image.
2. A method according to claim 1, wherein the component signals in each of said plurality of lines of the composite video signal contain averaged component information relating to a plurality of lines of the image.
3. A method according to claim 2, wherein the luminance component signal in each of said plurality of lines of the composite video signal contains averaged high frequency luminance information relating to a plurality of lines of the image.
4. A method according to any previous claim, in which another component signal with reduced cross-effects is extracted from the composite video signal by effecting a different arithmetical combination of said plurality of lines of the composite video signal.
5. A method according to claim 1, 2 or 3, in which another component signal with reduced cross-effects is extracted from the composite video signal by effecting an arithmetical combination of said plurality of lines of the composite video signal with the first extracted component signal.
6. Apparatus for extracting from a composite video signal a component signal with reduced cross-effects, the apparatus comprising: means for arithmetically combining a plurality of lines of the composite video signal so as to substantially cancel the unwanted component, the subcarrier phase difference between lines of said plurality of lines approximating to π, and the plurality of lines containing component information relating to spatially-coincident or spatially-neighbouring lines of the image.
7. Apparatus according to claim 6, wherein the luminance component signal in each of said plurality of lines of the composite video signal contains averaged high frequency luminance information relating to a plurality of lines of the image, the apparatus further comprises means for separating the high frequency component of the composite video signal from the low frequency component thereof, and the combining means is adapted to operate on the high frequency component of the composite video signal.
8. Apparatus according to claim 6 or 7 , and further comprising means for effecting a different arithmetical combination of said plurality of lines of the composite video signal, whereby to extract from said composite video signal another component signal with reduced cross-effects.
9. Apparatus according to claim 6 or 7, and further comprising means for effecting an arithmetical combination of said plurality of lines of the composite video signal with the first extracted component signal, whereby to extract another component signal with reduced cross-effects.
10. A method of encoding a composite video signal so as to facilitate extraction therefrom of a component signal with reduced cross-effects, the method comprising averaging component information relating to a plurality of lines of the image and using the averaged component information to form the relevant component signal of a line of the composite video signal, the subcarrier phase difference between lines of said plurality of lines of the image approximating to π, and said plurality of lines being spatially-coincident or spatially-neighbouring.
11. An encoding method according to claim 10, wherein the averaging step comprises averaging the high frequency components of the luminance signals relating to said lines of the image.
12. Apparatus for encoding a composite video signal so as to facilitate extraction therefrom of a component signal with reduced cross-effects, the apparatus comprising: means for averaging component information relating to a plurality of lines of the image, the subcarrier phase difference between lines of said plurality of lines of the image approximating to π, and said plurality of lines being spatially-coincident or spatially-neighbouring; and means for outputting the averaged component information so as to form the relevant component signal of a line of the composite video signal.
13. Encoding apparatus according to claim 12, wherein there is further provided means for separating the high frequency components of the luminance signals from the low frequency components thereof, and the averaging means comprises means for averaging the high frequency components of the luminance signals relating to said plurality of lines of the image.
PCT/GB1991/000327 1990-03-02 1991-03-04 Improvements in or relating to colour television WO1991013519A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GB909004687A GB9004687D0 (en) 1990-03-02 1990-03-02 Improvements in or relating to colour television
GB9004687.1 1990-03-02

Publications (1)

Publication Number Publication Date
WO1991013519A1

Family

ID=10671889

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/GB1991/000327 WO1991013519A1 (en) 1990-03-02 1991-03-04 Improvements in or relating to colour television

Country Status (2)

Country Link
GB (1) GB9004687D0 (en)
WO (1) WO1991013519A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0124280A2 (en) * 1983-04-04 1984-11-07 Ampex Corporation Method and apparatus for the extraction of luminance information from a video signal
EP0199964A2 (en) * 1985-04-29 1986-11-05 International Business Machines Corporation Method and system for decomposition of NTSC color video signals
US4636841A (en) * 1984-05-31 1987-01-13 Rca Corporation Field comb for luminance separation of NTSC signals
WO1990004311A1 (en) * 1988-10-03 1990-04-19 General Electric Company Apparatus for combining and separating components of a video signal

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0124280A2 (en) * 1983-04-04 1984-11-07 Ampex Corporation Method and apparatus for the extraction of luminance information from a video signal
US4636841A (en) * 1984-05-31 1987-01-13 Rca Corporation Field comb for luminance separation of NTSC signals
EP0199964A2 (en) * 1985-04-29 1986-11-05 International Business Machines Corporation Method and system for decomposition of NTSC color video signals
WO1990004311A1 (en) * 1988-10-03 1990-04-19 General Electric Company Apparatus for combining and separating components of a video signal

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
International Conference on Consumer Electronics, Digest of Technical Papers, Rosemont, Illinois, 6-9 June 1989, IEEE, N. Hurst: "Quadruplexing: an NTSC-compatible encoding technique that assures crosstalk-free transmission of luminance, chrominance, and two new signals", pages 192-193 *

Also Published As

Publication number Publication date
GB9004687D0 (en) 1990-04-25

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): CA JP US

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE CH DE DK ES FR GB GR IT LU NL SE

NENP Non-entry into the national phase

Ref country code: CA