GB2263600A - Motion dependent video signal processing - Google Patents
Motion dependent video signal processing
- Publication number
- GB2263600A
- Authority
- GB
- United Kingdom
- Prior art keywords
- motion vectors
- frame
- field
- frames
- motion
- Prior art date
- Legal status
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/01—Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level
- H04N7/0135—Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level involving interpolation processes
- H04N7/014—Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level involving interpolation processes involving the use of motion vectors
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/223—Analysis of motion using block-matching
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/14—Picture signal circuitry for video frequency region
- H04N5/144—Movement detection
- H04N5/145—Movement estimation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Television Systems (AREA)
Abstract
A method of processing an input digital video signal comprises: deriving motion vectors representing the motion of the content of respective blocks of pixels in a first field or frame of the input video signal between the said first field or frame and the following field or frame of the input video signal; assigning to each block one or more additional motion vectors selected from the motion vectors derived for the other blocks and the zero motion vector; selecting from the motion vectors associated with the said blocks output pixel motion vectors to be associated with respective pixels of an output field or frame to be produced; and producing an output field or frame from the input fields or frames by motion compensated temporal interpolation. The derived motion vectors are analysed to determine a plurality of statistical quantities (N, M, G) relating to said motion vectors and one or more of the processes of deriving motion vectors, assigning additional motion vectors, selecting output pixel motion vectors, and producing output fields or frames, are automatically modified in dependence upon one or more of said statistical quantities (N, M, G). The amount of operator intervention required during the processing operation is therefore substantially reduced. <IMAGE>
Description
MOTION DEPENDENT VIDEO SIGNAL PROCESSING
This invention relates to motion dependent processing of digital video signals.
Processing systems for digital video signals are known in which the processing operation depends upon the motion in the video material being processed. In frame rate conversion systems, for example, in order to ensure smooth motion portrayal when the converted video signal is displayed, a technique involving "motion compensated temporal interpolation" can be employed. According to this technique, output fields or frames are temporally interpolated between pairs of input frames such that the image data in an output field or frame is temporally offset with respect to that in the input frames from which it is formed in dependence upon the particular processing operation being performed. Thus, interpolated output fields corresponding to a different frame rate to the input fields may be produced, the image data in the output fields being temporally offset with respect to that of the input fields to provide for smooth motion portrayal in the new format.
The technique of motion compensated temporal interpolation is described in detail in UK patent application GB-A-2231749 (the content of which is incorporated herein by reference) in relation to a video standards converter. Briefly, the stages of the conversion system disclosed in GB-A-2231749 are as follows.
Firstly, the fields of the input video signal are supplied to a progressive scan converter which produces from the input fields a series of progressive scan format frames, one for each of the input fields. The progressive scan format frames are then supplied to a direct block matcher which compares the content of blocks of pixels in a progressive scan frame with the content of the following progressive scan frame and produces correlation surfaces representing the difference in the contents so compared in the two frames. These correlation surfaces are then analysed by a motion vector estimator which derives motion vectors for the respective blocks representing the motion of the content of the block between the two frames. The derived motion vectors are then supplied to a motion vector reducer which assigns further motion vectors to the blocks. The motion vectors associated with each block are then passed to a motion vector selector which selects from the supplied motion vectors motion vectors to be associated with respective pixels of the output field or frame to be interpolated. Any irregularity in the selection of the motion vectors by the motion vector selector is removed by a motion vector post-processor from which the processed motion vectors are supplied to an interpolator. The interpolator generates the pixels of the output field/frame by combining, with appropriate weighting, parts of the two progressive scan frames in dependence upon the motion vector for each output pixel and the temporal offset of the output field/frame with respect to the progressive scan frames from which it is formed. Thus, the interpolator interpolates along the direction of movement to produce motion compensated output fields/frames corresponding to a different frame rate to the input video signal.
The operation of such processing systems involving motion compensated temporal interpolation requires the careful setting of control parameters in various parts of the system in order to cope with the demands of differences in the content of program sequences. As the program content changes, adjustment of the control parameters may be necessary in order to ensure a satisfactory result when the processed video signal is displayed. For example, in the process of progressive scan conversion referred to above, a motion adaptive interpolation technique is used to produce the progressive scan frames by a combination of intra-field interpolation, in which the missing interlace lines of pixels in the corresponding input field are produced from the values of pixels in that input field, and inter-field interpolation, wherein the values of the missing pixels are derived from the values of pixels in the immediately preceding and immediately succeeding input fields. An algorithm is applied to estimate the amount of local motion present in the picture, and this is then used to mix together different proportions of inter- and intra-field interpolation. The concept is to use inter-field interpolation in wholly static picture areas to maintain as much vertical information as possible, and to use intra-field interpolation when significant motion is present to avoid interlace smear in the final output. In between these two extremes, a combination of inter-field and intra-field interpolation is used. There is therefore a trade-off between sharp static images, which are achieved using inter-field interpolation, and avoiding interlace smear on moving objects, which is achieved using intra-field interpolation. Although the algorithms implemented detect motion in images and adapt the interpolation process accordingly, these algorithms are not as sensitive to motion as the subsequent processing stages. If artifacts are introduced during progressive scan conversion, experience has shown that this confuses the motion estimation and vector selection processes with the result that inappropriate motion vectors may be assigned to output pixels and the quality of the displayed output will be reduced accordingly. Thus, some operator control is still required. An operator, periodically monitoring the results of the processing, may therefore modify the progressive scan conversion process, on the basis of experience, to vary the proportions of intra- and inter-field interpolation, for example to limit the process to solely intra-field or inter-field interpolation, to improve the resulting picture quality.
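By way of illustration only, the motion adaptive mix described above might be sketched as follows. The function and variable names, the use of a local motion measure normalised between 0 and 1, and the simple field indexing are assumptions made for the sketch and are not taken from the specification.

```python
# Illustrative sketch of motion adaptive progressive scan conversion.
# The specification only states that inter-field and intra-field results are
# mixed in proportion to an estimate of local motion; everything else here
# (names, normalisation, indexing) is assumed.

def fill_missing_pixel(prev_field, cur_field, next_field, x, y, motion):
    """Produce one missing interlace-line pixel at (x, y) for cur_field.

    motion is a local motion estimate normalised to 0.0 (static) .. 1.0 (moving).
    """
    # Inter-field estimate: average of the co-sited pixels in the preceding and
    # succeeding fields (preserves vertical detail in static areas).
    inter = 0.5 * (prev_field[y][x] + next_field[y][x])

    # Intra-field estimate: average of the lines above and below in the current
    # field (avoids interlace smear on moving objects).
    intra = 0.5 * (cur_field[y - 1][x] + cur_field[y + 1][x])

    # Mix the two estimates according to the amount of local motion.
    return (1.0 - motion) * inter + motion * intra
```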
Similarly, the process of motion vector estimation is controlled by a number of parameters which are normally pre-set within the system.
If periodic monitoring of the results of processing indicates to the skilled operator that problems are arising in motion vector estimation, the operator must intervene to modify the motion vector estimation process by adjusting the appropriate parameters in a manner based on subjective judgement and experience. The process of motion vector estimation is described in detail in UK patent application no GB-A-2231752, the content of which is incorporated herein by reference. The process involves threshold testing a correlation surface for a motion vector represented by a minimum which differs from the next smallest minimum by more than a predetermined threshold. The next smallest minimum may be prevented from originating within a certain area of the minimum under test. The correlation surfaces are then grown and the grown correlation surfaces retested for a minimum which satisfies the threshold test. The correlation surfaces are grown by adding together the elements of the correlation surfaces of neighbouring blocks so that the grown correlation surfaces are derived from a larger area. This process may be repeated until certain limitations are reached, for example the edge of the video frame is reached or the correlation surface has already been grown a predetermined number of times. Of those motion vectors, if any, which pass the threshold test, the best is selected for a particular block and supplied to the motion vector reducer. If no good motion vectors are found by this process, the correlation surfaces are weighted and the weighted surfaces retested.
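The threshold test just described might be sketched as follows, purely for illustration; the array representation of a correlation surface, the exclusion radius and the percentage convention are assumptions rather than details of GB-A-2231752.

```python
import numpy as np

def threshold_test(surface, diff_threshold_pct, exclusion_radius=2):
    """Illustrative threshold test of one correlation surface.

    Returns the (row, column) offset of the minimum as a candidate motion vector
    if the minimum differs from the next smallest minimum (found outside an
    exclusion area around it) by more than diff_threshold_pct percent of the
    surface's minimum-to-maximum range; otherwise returns None.
    """
    surface = np.asarray(surface, dtype=float)
    y0, x0 = np.unravel_index(np.argmin(surface), surface.shape)
    min_val = surface[y0, x0]

    # Prevent the next smallest minimum from originating within a certain area
    # of the minimum under test.
    masked = surface.copy()
    masked[max(0, y0 - exclusion_radius): y0 + exclusion_radius + 1,
           max(0, x0 - exclusion_radius): x0 + exclusion_radius + 1] = np.inf
    next_min = masked.min()

    value_range = surface.max() - min_val
    if value_range > 0 and (next_min - min_val) > (diff_threshold_pct / 100.0) * value_range:
        return (y0, x0)   # candidate motion vector passes the test
    return None           # no valid motion vector at this stage of growth
```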
If, even after weighting, no motion vector passing the test is located for a given block, the best available motion vector is passed on to the motion vector selector with a flag to indicate that it is a bad motion vector. Parameters such as the difference threshold used in the threshold test, and those controlling selection of the best motion vector from the grown correlation surfaces, are pre-set for a particular processing operation and are thus necessarily a compromise set to yield the best results with the average scene content of the program material. Variations in the scene content may render these parameters inappropriate. For example, if there is poor contrast or significant noise in the program material, the threshold test may be too stringent and the correct motion vector for a block may fail the threshold test. Thus, subsequent processing stages may be forced into use of an inappropriate motion vector. As a further example, a vector selected for supply to the motion vector reducer from tested motion vectors derived from correlation surfaces at various stages of growth may not be the most appropriate. Again, therefore, it is left to the skilled operator to recognise such problems and intervene to modify the motion vector estimation process as he sees fit.
Operator intervention may also be required in the process of motion vector reduction. This process involves application of an algorithm to assign additional motion vectors to each block, which vectors are passed to the motion vector selector, in each case along with any good motion vector derived by the motion vector estimator, to ensure the selector is not forced into selection of an inappropriate motion vector. The additional motion vectors are assigned to each block until up to a predetermined number of unique motion vectors, for example five, (including the original threshold tested motion vector for that block, if such exists, and the zero motion vector which is always supplied) are obtained. The additional vectors are obtained by initially searching for a sufficient number of unique motion vectors among the blocks which surround the particular block under consideration. If a total of five unique vectors is not obtained by this process, then the difference is made up with "global" motion vectors. The process of deriving global motion vectors is described in detail in UK patent application GB-A-2231227, the content of which is incorporated herein by reference. Briefly, these global vectors are determined in the vector reducer by ranking all good motion vectors (ie motion vectors which passed the threshold test) supplied by the motion vector estimator for a given input frame in order of frequency of occurrence. The four most frequently occurring motion vectors which represent sufficiently different motion are termed global motion vectors. The vector reduction algorithm requires considerable operator intervention in practice to ensure good results over varying scene content of the program material. For example, the more extreme the motion in the program sequence, the fewer the vectors that can be usefully supplied to the vector selector. Thus, the operator may be required to use subjective judgement to periodically modify the vector reduction process as the program content varies.
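A minimal sketch of the global vector ranking, assuming the valid vectors are available as (dx, dy) tuples and using a simple separation test for "sufficiently different motion" (the actual test used in GB-A-2231227 is not reproduced here):

```python
from collections import Counter

def global_motion_vectors(valid_vectors, max_globals=4, min_separation=2.0):
    """Illustrative derivation of global motion vectors for one input frame."""
    ranked = Counter(valid_vectors).most_common()   # [((dx, dy), count), ...]
    selected = []
    for (dx, dy), _count in ranked:
        # Keep only vectors representing sufficiently different motion from
        # those already selected.
        if all(abs(dx - gx) + abs(dy - gy) >= min_separation for gx, gy in selected):
            selected.append((dx, dy))
        if len(selected) == max_globals:
            break
    return selected
```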
Finally, in cases where extreme motion is involved, the best results may be achieved by simply outputting the temporally nearest progressive scan frame to the required output frame rather than temporally interpolating the output frames using motion compensation.
Again, therefore, the operator may be required to intervene to switch the system output to "nearest progressive scan mode".
It will be appreciated from the above that processing operations using motion compensated temporal interpolation require adjustment of various preset parameters to modify different stages of the process on a dynamic basis in response to changing program content. Such modifications require substantial intervention by a skilled operator with sufficient experience to recognise the source of different problems, and even then involve the subjective judgement of the operator to effect the modifications. The processing operation is therefore repeatedly disrupted, and the subjective nature of the modification process means that inappropriate adjustments may be made which further delay the processing operation.
According to the present invention there is provided a method of processing an input digital video signal, the method comprising: deriving motion vectors representing the motion of the content of respective blocks of pixels in a first field or frame of the input video signal between the said first field or frame and the following field or frame of the input video signal; assigning to each block one or more additional motion vectors selected from the motion vectors derived for the other blocks and the zero motion vector; selecting from the motion vectors associated with the said blocks output pixel motion vectors to be associated with respective pixels of an output field or frame to be produced; and producing an output field or frame from the input fields or frames by motion compensated temporal interpolation; characterised by analysing the derived motion vectors to determine a plurality of statistical quantities relating to said motion vectors and automatically modifying one or more of the processes of deriving motion vectors, assigning additional motion vectors, selecting output pixel motion vectors, and producing output fields or frames, in dependence upon one or more of said statistical quantities.
Thus, the derived motion vectors are used as a source of useful statistical information about the content of the program material being processed, the said statistical quantities being used to trigger automatic modification of appropriate parts of the processing operation in response to changes in the program content. Since adjustments are made automatically, the requirement for operator intervention in the processing operation is substantially reduced.
The statistical analysis may be performed at the vector reduction stage and the statistical quantities may be supplied to a system controller which then modifies the various processing stages as required. The analysis may be performed for each field or frame of the input signal and adjustments made on a field/frame basis.
Alternatively, for example, the said statistical quantities may be averaged in some cases over a predetermined number of input fields/frames and the average values used to trigger process modification if required.
Where the process of deriving motion vectors involves threshold testing correlation surfaces as previously described, the said statistical quantities preferably include: the proportion N of derived motion vectors for an input field or frame which passed the threshold test; and/or the proportion M of those derived vectors which passed the threshold test which have a magnitude greater than a threshold magnitude; and/or the proportion G of those derived motion vectors which passed the threshold test for the input field/frame which contribute to the global motion vectors for that field or frame. N serves as an indication of the ease or difficulty in deriving good, or valid, motion vectors. M serves as an indication of the amount of fast motion in the program material. G provides an indication of the amount of similar motion in the content of an input field/frame. In general, these statistical quantities can be used to indicate, for example, difficulty in deriving good motion vectors from the scene, and the possibility of motion blurring in the program sequence, or extreme movement, or scene changes which may give rise to noticeable errors at the vector selection stage and hence in the final output. Thus, the quantities can be used to optimise the various stages of the processing operation in dependence upon the content of a program sequence.
For example, the difference threshold used in the threshold test at the motion vector estimation stage may be increased to provide a more stringent test, or may be reduced to ensure a sufficient number of valid motion vectors is passed to the motion vector reducer.
As a further example, at the motion vector estimation stage, motion vectors may be derived from only the grown, or the furthest grown, correlation surfaces.
When analysis of the derived motion vectors indicates it is necessary, the process of producing output fields or frames may be automatically modified such that each output field or frame is produced directly from the temporally closest input field or frame.
Further, where the processing operation involves progressive scan conversion as previously described, the process of producing progressive scan frames may be automatically modified in dependence upon one or more of said statistical quantities to vary the proportions of intra-field and inter-field interpolation, for example to limit the process solely to intra-field interpolation.
As a further example, the process of assigning additional motion vectors (at the motion vector reduction stage) may be automatically modified such that the additional motion vectors exclude global motion vectors where the statistical analysis indicates this would be beneficial. A more extreme modification would be to restrict the vector reduction process to the point where the zero motion vector is the only additional motion vector which may be passed on, with any valid, non-zero, derived motion vector for a block, to the vector selector.
It will be appreciated that the invention extends to apparatus adapted to perform the method of processing an input digital video signal as hereinbefore described.
An embodiment of the invention will now be described, by way of example, with reference to the accompanying drawings in which:
Figure 1 is a block diagram of a frame rate converter embodying the present invention;
Figure 2 is a block diagram of the frame rate converter of Figure 1 in which specific control routes are shown as being separate for clarity;
Figure 3 is a data/control flow diagram for part of the apparatus of Figures 1 and 2;
Figure 4 is a flow diagram illustrating an operation performed by the apparatus of Figures 1 and 2;
Figure 5 is a flow diagram illustrating a further operation performed by the apparatus of Figures 1 and 2; and
Figure 6 is a flow diagram illustrating a third operation performed by the apparatus of Figures 1 and 2.
The frame rate converter of Figure 1 comprises a progressive scan converter 1, a time-base corrector 2, a direct block matcher 3, a motion vector estimator 4, a motion vector reducer 5, a motion vector selector 6, a motion vector post-processor 7, and an interpolator 8, connected as shown and all operating under the control of a system controller 9. The system controller 9 controls the apparatus via a bidirectional control bus 11 and a dual-port RAM (not shown).
The operation of the progressive scan converter 1, direct block matcher 3, motion vector estimator 4, motion vector reducer 5, motion vector selector 6, motion vector post-processor 7, and interpolator 8 is described in detail in UK patent applications nos GB-A-2231749, GB-A-2231752 and GB-A-2231227 referred to above. Briefly, however, the operation is as follows.
The video signal to be processed is supplied to the input 10 of the progressive scan converter 1. Since the frame rate converter operates at less than normal video rate, the video signal to be processed is reproduced at one-eighth speed by a high-definition digital VTR (not shown) and the video fields are supplied to the progressive scan converter. The progressive scan converter 1 converts the video fields into progressive scan format frames by intra-field and/or inter-field interpolation in the usual case where the image data in successive fields of the video signal is temporally offset. Where there is no temporal offset between the image content of pairs of fields supplied to the input 10, the progressive scan converter 1 may be bypassed or operated in previous field replacement mode. In any case, progressive scan frames are supplied as the input signal to the time-base corrector 2 where they are temporarily stored so that a number of progressive scan frames are available at the same time.
A pair of these progressive scan frames is supplied to the direct block matcher 3 which compares the content of successive blocks of pixels in the first frame of the pair with the content of areas of the second frame of the pair and produces correlation surfaces representing the difference between the contents so compared. These correlation surfaces are analysed by the motion vector estimator 4 which derives and supplies motion vectors, one per block, to the motion vector reducer 5. Motion vector estimation involves threshold testing the correlation surfaces to determine the minimum value of the difference represented thereby which differs from the next smallest minimum by more than a difference threshold. For example, the absolute value of the difference between the minimum and the next smallest minimum may be required to be greater than a given percentage (the difference threshold) of the absolute difference between the minimum and the maximum of the correlation surface. The next smallest minimum may be prevented from originating within a certain area of the minimum under test. The correlation surfaces are then grown and retested a number of times and the best motion vector selected from those, if any, which passed the threshold test. Correlation surfaces may be grown by adding together the surfaces from: a horizontal or vertical line of 3x1 blocks centred on the initial block; a group of 3x3 blocks centred on the initial block; and a group of 5x3 blocks centred on the initial block.
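The growing of correlation surfaces might be sketched as follows; the grid layout of per-block surfaces and the stage numbering are assumptions made for the sketch:

```python
import numpy as np

def grow_correlation_surface(surfaces, bx, by, stage):
    """Illustrative growing of the correlation surface for the block at (bx, by).

    surfaces is a 2-D grid of equally sized correlation surfaces, one per block.
    stage 0 returns the block's own surface; stages 1-3 sum the surfaces of a
    3x1 line, a 3x3 group and a 5x3 group of blocks centred on the initial
    block, so the grown surface is derived from a progressively larger area.
    """
    patterns = {
        0: [(0, 0)],
        1: [(dx, 0) for dx in (-1, 0, 1)],                                # 3x1 line
        2: [(dx, dy) for dy in (-1, 0, 1) for dx in (-1, 0, 1)],          # 3x3 group
        3: [(dx, dy) for dy in (-1, 0, 1) for dx in (-2, -1, 0, 1, 2)],   # 5x3 group
    }
    rows, cols = len(surfaces), len(surfaces[0])
    grown = np.zeros_like(np.asarray(surfaces[by][bx], dtype=float))
    for dx, dy in patterns[stage]:
        x, y = bx + dx, by + dy
        if 0 <= x < cols and 0 <= y < rows:   # growth stops at the edge of the frame
            grown += np.asarray(surfaces[y][x], dtype=float)
    return grown
```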
If there are still no motion vectors which pass the threshold test, the correlation surfaces are weighted towards the stationary, that is zero, motion vector and retested. Any motion vectors having passed the threshold test are considered good, or valid, motion vectors and are flagged as such to the motion vector reducer. If no valid motion vector is determined for a block, the best available motion vector is supplied but without a valid flag. (Of course, rather than flagging valid motion vectors, invalid motion vectors may be flagged as such.)
The motion vector reducer 5 then assigns additional motion vectors to each block in accordance with a vector reduction algorithm until five unique motion vectors are associated with each block. These consist of any valid motion vector derived for the block and the zero motion vector, the difference being made up first with valid motion vectors derived for the blocks surrounding the block under consideration, and finally with global motion vectors.
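A minimal sketch of this vector reduction for a single block, with the ordering of candidates and the data types assumed rather than taken from the specification:

```python
def reduce_vectors(block_vector, block_vector_valid, neighbour_vectors,
                   global_vectors, max_vectors=5):
    """Illustratively build up to max_vectors unique motion vectors for one block."""
    candidates = []
    if block_vector_valid and block_vector is not None:
        candidates.append(block_vector)      # the block's own valid motion vector
    candidates.append((0, 0))                # the zero motion vector, always supplied
    candidates.extend(neighbour_vectors)     # valid vectors from surrounding blocks
    candidates.extend(global_vectors)        # global motion vectors make up the rest

    reduced = []
    for vector in candidates:
        if vector not in reduced:
            reduced.append(vector)
        if len(reduced) == max_vectors:
            break
    return reduced
```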
The vectors associated with each block are then passed to the motion vector selector 6 which also receives an input from the timebase corrector 2. The motion vector selector 6 performs a two-stage selection process to select a motion vector to be associated with each pixel of an output field to be produced. The first stage of the selection process involves testing the motion vectors supplied from the vector reducer for both of the frames of the initial frame pair used in motion vector estimation against the immediately preceding and immediately succeeding progressive scan frames. Thus, this stage utilises four progressive scan frames supplied by the time-base corrector 2. The second stage involves testing using only the motion vectors supplied for the two progressive scan frames of the initial pair to select the appropriate output pixel motion vectors.
Any irregularities in the motion vectors selected are removed by the motion vector post-processor 7 from where the processed motion vectors for the output pixels are supplied to the interpolator 8. The interpolator 8 generates the pixels of each output field by interpolation between the two frames of the original progressive scan frame pair which are supplied by the time-base corrector 2 after a time which takes into account the delay introduced by the intervening processes. For each output pixel, the interpolator 8 uses the motion vector supplied for that output pixel and the correct temporal position along the motion vector for output pixels in that field to determine which parts of the frames of the progressive scan frame pair should be combined, with appropriate weighting, to produce the output pixel. The correct temporal position for output pixels is determined by the system controller 9 in dependence upon the particular frame rate conversion being performed and will vary for different output fields. The output signal is then supplied via a frame recorder (not shown) to a high definition digital VTR (not shown) for recording.
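A simplified sketch of the interpolation of a single output pixel, using nearest-pixel sampling for brevity (a practical interpolator would use sub-pixel filtering); the parameterisation of the temporal position as a fraction t between the two frames is an assumption of the sketch:

```python
def interpolate_output_pixel(frame_a, frame_b, x, y, vector, t):
    """Illustrative motion compensated interpolation of one output pixel.

    frame_a and frame_b are the two progressive scan frames of the pair, (x, y)
    the output pixel position, vector = (vx, vy) the motion vector selected for
    that pixel and t the temporal position of the output field between the two
    frames (0.0 at frame_a, 1.0 at frame_b).
    """
    vx, vy = vector
    # Project the output pixel back along the motion vector into frame A and
    # forward along it into frame B, then blend the two samples with weights
    # determined by the temporal position of the output field.
    ax, ay = int(round(x - t * vx)), int(round(y - t * vy))
    bx, by = int(round(x + (1.0 - t) * vx)), int(round(y + (1.0 - t) * vy))
    return (1.0 - t) * frame_a[ay][ax] + t * frame_b[by][bx]
```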
Figure 2 is a diagram similar to Figure 1 in which only specific control routes provided by the control bus 11 with which the present embodiment is concerned are indicated by the broken lines. Where more than one control operation may be performed in connection with a particular processing stage, these control routes are identified separately for clarity and will be described hereinafter.
Figure 3 is a schematic data/control flow diagram for part of the motion vector reducer circuitry which analyses the derived motion vectors supplied by the motion vector estimator to determine a plurality of statistical quantities relating thereto in accordance with this embodiment of the invention. The motion vectors derived by the motion vector estimator 4 in Figures 1 and 2 for a given progressive scan frame are stored in a vector store 15 of the motion vector reducer 5. As previously described, those of the derived motion vectors which have passed the threshold test are flagged as valid motion vectors, the valid flags being stored in relation to their corresponding motion vector in a vector valid flag store 16. A calculator 17 accesses the vector store 15 and vector valid flag store 16 to calculate, as a percentage, the proportion N of derived vectors for the frame which are valid. This is achieved by counting the number of vector valid flags stored in the valid flag store 16 and the number of vectors in the vector store 15, dividing the former number by the latter number and multiplying by 100.
A further calculator 18 also accesses the vector store 15 and valid flag store 16 to determine the global vectors for the input frame in question. Determination of global vectors is described in detail in UK patent application no GB-A-2231227 referred to above. Briefly, however, the global vectors are determined by ranking all valid derived motion vectors in order of frequency of occurrence and selecting as global vectors the four most frequently occurring valid vectors which represent sufficiently different motion from one another. These global vectors are supplied to a third calculator 19 which, by accessing the vector store 15 and valid flag store 16, calculates, as a percentage, the proportion G of valid motion vectors which contributed to the global vectors. This is achieved by comparing each valid vector with each global motion vector and counting the number of times a match is obtained. The number of valid vectors is also counted and this latter number divided into the former, the result being multiplied by 100 to give the value of G.
A fourth calculator 20 calculates, as a percentage, the proportion M of valid vectors which have a magnitude greater than a predetermined magnitude threshold. The magnitude threshold is supplied to the M calculator 20 from the system controller 9 via the control bus 11, a specific magnitude threshold control route being shown separately in Figures 2 and 3 for clarity. The M calculator 20 calculates the value of M for the frame by accessing the vector store 15 and valid flag store 16 and comparing the magnitude of each valid vector with the magnitude threshold. The number of such vectors which exceed the threshold is counted, as is the number of valid vectors, the latter number being divided into the former and the result multiplied by 100 to give the value of M.
Various implementations of the N calculator 17, G calculator 19 and M calculator 20, for example by a microprocessor operating in accordance with appropriate software, or by logic circuitry, will be apparent to those skilled in the art from the foregoing description of the operation of these calculators.
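For illustration, the three calculations might be combined as follows; the list-based data layout and the use of Euclidean vector magnitude are assumptions of the sketch:

```python
import math

def frame_statistics(vectors, valid_flags, global_vectors, magnitude_threshold):
    """Illustrative calculation of N, M and G for one input frame, as percentages.

    vectors     : (vx, vy) motion vector derived for each block of the frame
    valid_flags : True where the corresponding vector passed the threshold test
    """
    valid = [v for v, flag in zip(vectors, valid_flags) if flag]
    if not vectors or not valid:
        return 0.0, 0.0, 0.0

    # N: proportion of derived vectors which are valid (calculator 17).
    N = 100.0 * len(valid) / len(vectors)

    # M: proportion of valid vectors whose magnitude exceeds the magnitude
    # threshold supplied by the system controller (calculator 20).
    large = sum(1 for vx, vy in valid if math.hypot(vx, vy) > magnitude_threshold)
    M = 100.0 * large / len(valid)

    # G: proportion of valid vectors which match one of the global motion
    # vectors for the frame (calculator 19).
    matches = sum(1 for v in valid if v in global_vectors)
    G = 100.0 * matches / len(valid)

    return N, M, G
```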
After calculation, the statistical quantities N, M and G are supplied to the system controller 9 along the control bus 11 as indicated in Figure 2 by the separately identified N, M, G control route.
The value of the magnitude threshold supplied by the system controller 9 may be entered by a system operator prior to processing via a keyboard associated with the system controller 9, or may be pre-stored, for example in a disk file. The appropriate value of the magnitude threshold for a given process may vary in dependence upon the overall type of program material being processed and is determined for particular program types after appropriate system testing.
The statistical quantities N, M and G are provided to the system controller 9 on a frame-by-frame basis, and are monitored by the system controller as useful indicators of the scene content of the program sequence being processed. The values of, and variations in, these quantities, either alone or in combination, can be used to optimise various stages in the processing operation automatically without requiring repeated intervention by, and the subjective judgement of, a skilled operator to effect the various modifications during a processing operation. For example, if the percentage of valid vectors falls whilst the magnitude of the vectors remains moderate and the percentage of vectors contributing to global vectors remains roughly constant, then the program material may be becoming more difficult for motion estimation. Such may be the case for example where there is a lack of sharp detail in scene content, or a drop in contrast, or significant noise in the input video signal. The difference threshold used by the motion vector estimator 4 for testing for valid motion vectors may then be too high, in which case the system controller reduces the difference threshold to ensure sufficient numbers of valid motion vectors are supplied to the motion vector reducer 5. In addition, it may also help to limit the range of correlation surfaces analysed by the motion vector estimator when selecting the most appropriate threshold tested motion vector for a particular block such that only the grown surfaces, or the furthest grown surfaces, are considered. (Where for example there is significant noise in the input signal, correlation is likely to be more accurate the larger the area of the input frames over which correlation is assessed.) An example of one such operation performed by the system controller 9 is provided by the schematic flow diagram of Figure 4.
In the example of Figure 4, the system controller 9 waits for the start of a new frame, ie a new set of vectors being supplied to the motion vector reducer 5, which then calculates the values of N, M and G as described with reference to Figure 3 and supplies these to the system controller. The controller 9 then compares the supplied value of N with that for the previous frame (old N) to see whether this percentage has fallen between the two frames. (Some tolerance may be included at this stage so that a reduction in N is only registered if the amount of the reduction exceeds a certain pre-set threshold, for example a difference of 1 in the percentage values.) If a reduction in N is registered, the system controller then checks to see whether the value of M is within certain pre-set limits, for example 0 < M < 30, indicating that the number of large motion vectors is moderate.
Assuming this to be the case, the system controller then calculates the change ΔG in the percentage of valid vectors contributing to global vectors as between the new and old frames (ie G - old G) and checks to see whether this change is less than a further pre-set threshold, for example ΔG < 2, indicating that G has remained roughly constant for the two frames. If this is the case, then, as previously described, the material is becoming more difficult for motion estimation and the system controller 9 then modifies this process by limiting motion vector estimation to consideration of the grown correlation surfaces, via the correlation surface assessment control route of Figure 2, and by reducing the difference threshold, and hence the severity of the test for valid motion vectors, via the difference threshold control route of Figure 2. The extent to which consideration of correlation surfaces is limited and the amount by which the difference threshold is reduced may be pre-set in the system controller and effected within certain limits. These parameters, and the thresholds and limits used in the preceding steps of the flow diagram, will have been determined after appropriate system testing, and a different set of parameters may be used for different overall types of program material. By way of example, however, the difference threshold may be varied in steps of 0.5% between limits of 1% and 10%. Consideration of correlation surfaces may be limited to those grown as previously described by adding surfaces from at least 3 blocks, or at least 9 blocks, or 15 blocks. The parameters may be entered by the system operator prior to processing. Alternatively, such parameters may be pre-stored within the system controller, or separately for example in a disk file.
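The test of Figure 4 might be sketched as follows; the threshold values are the examples quoted above and the returned action names are purely illustrative:

```python
def figure4_decision(N, M, G, old_N, old_G,
                     n_tolerance=1.0, m_limits=(0.0, 30.0), g_tolerance=2.0):
    """Illustrative sketch of the Figure 4 decision for one new frame."""
    n_fell = (old_N - N) > n_tolerance              # proportion of valid vectors has fallen
    m_moderate = m_limits[0] < M < m_limits[1]      # number of large motion vectors is moderate
    g_steady = abs(G - old_G) < g_tolerance         # global-vector contribution roughly constant

    if n_fell and m_moderate and g_steady:
        # Material is becoming harder for motion estimation: restrict estimation
        # to the grown correlation surfaces and relax the valid-vector test.
        return {"limit_to_grown_surfaces": True,
                "reduce_difference_threshold_by": 0.5}   # %, within the 1%..10% limits
    return {}   # no modification; old N and old G are simply updated
```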
In the event that any of the tests performed in the operation of Figure 4 do not satisfy the relevant criteria, the system controller simply updates the values of old N and old G in preparation for the next frame as indicated. Of course, continued monitoring of the values of N, M and G by the system controller 9 may indicate that increasing the difference threshold and/or the range of correlation surfaces considered during vector estimation is appropriate and the system controller will act accordingly.
As a further example, if the percentage of valid vectors falls suddenly for one or a small number of frames, and/or the percentage of valid vectors contributing to the global vectors drops to a small value, then it may be concluded that a scene change, rapid crossfade, or similar disruption has occurred. Under such circumstances, all derived motion vectors should be regarded as suspect and motion compensated temporal interpolation of output fields is inappropriate.
Accordingly, in such circumstances, the process of producing output fields may be modified so that the interpolator outputs a field obtained directly from the temporally closest progressive scan input frame. Thus, motion compensation is disabled, all motion vectors being ignored, thereby ensuring that incorrect motion vectors cannot disrupt the output image. Such an operation performed by the system controller 9 is shown in the flow chart of Figure 5.
In Figure 5, the system controller 9 again waits for supply of the values of N, M and G from the motion vector reducer 5 at the start of a new frame. The controller 9 then calculates the difference between N and the value of N for the preceding frame (old N) and checks to see whether the change ΔN is greater than a predetermined threshold. If so, the value of G is tested to see whether it is below a predetermined low threshold, in which case the system controller 9 switches the interpolator 8 to nearest progressive scan mode as previously described, via the output mode control route shown in Figure 2. Again, the various thresholds will have been determined from experience after appropriate system testing and may be pre-set by an operator prior to commencement of processing or may be pre-stored either within the system controller or separately. By way of example, however, the threshold for ΔN in this case may be set at 40 and the threshold for G at 20. In the event that the criteria for switching to nearest progressive scan mode are not met, no action is taken and the values of old N and old G are simply updated in preparation for the new frame as indicated in the figure. Of course, after switching to nearest progressive scan mode, the system controller 9 continues to monitor the values of N and G and switches the interpolator 8 back to motion compensated temporal interpolation mode when appropriate.
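The corresponding test might be sketched as below, using the example thresholds of 40 and 20 quoted above; the mode names returned are assumptions for the purpose of the sketch:

```python
def figure5_decision(N, G, old_N, delta_n_threshold=40.0, g_threshold=20.0):
    """Illustrative sketch of the Figure 5 scene-change test for one new frame.

    A sharp fall in the proportion of valid vectors combined with a low
    contribution to the global vectors suggests a scene change or rapid
    crossfade, in which case motion compensation is disabled.
    """
    delta_n = old_N - N
    if delta_n > delta_n_threshold and G < g_threshold:
        return "nearest_progressive_scan_mode"
    return "motion_compensated_interpolation_mode"
```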
If the percentage of valid vectors falls and the magnitude of a significant number of the valid vectors grows large, it may be concluded that material containing significant motion is being processed. This may cause problems at all stages of the processing operation. Depending on the severity of the situation the system controller 9 may make one or more of the following adjustments.
Firstly, the process of progressive scan conversion may be modified to reduce the contribution by inter-field interpolation, if necessary to the point where interpolation is solely intra-field interpolation.
Secondly, as previously described, motion vector estimation may be modified to limit consideration of correlation surfaces to the grown or furthest grown surfaces to increase the likelihood of correct motion vectors being passed to the motion vector reducer. Further, the difference threshold used to test for valid motion vectors in the motion vector estimator may be reduced as previously described to ensure a sufficient number of motion vectors are derived. Further, the algorithm controlling assignment of additional motion vectors in the motion vector reducer may be simplified from a full algorithm supplying a maximum number of additional vectors per block to the vector selector to a simpler algorithm providing fewer vectors. For example, assignment of global vectors may be eliminated to improve reproduction of details in the output image. In a more extreme case, the vector reduction algorithm may be simplified to the point where only the original valid motion vector, if such exists, for a block and the zero motion vector are supplied to the motion vector selector. (Where motion is extreme, assignment of motion vectors from surrounding blocks to the block under consideration may be inappropriate since these may be significantly different from the derived motion vector for the block under consideration.) Beyond this point, as determined by experiment, the system controller may switch the interpolator to default to nearest progressive scan mode as previously described.
The above situation is represented by the schematic flow diagram of Figure 6. As before, the system controller 9 waits for calculation of the statistical quantities for the new frame by the motion vector reducer 5 and then checks to see whether the percentage N of valid vectors is less than a predetermined threshold, for example N < 70%.
If so, the value of M is checked to see whether it exceeds another predetermined threshold, eg M > 50% where the magnitude threshold is 16 pixels per field, indicating a significant number of motion vectors of large magnitude. If this threshold is also exceeded, the controller 9, as indicated in Figure 6, adjusts progressive scan conversion to intra-field interpolation only via the interpolation control route shown in Figure 2. Depending upon the severity of the situation shown by the actual values of N and M (which will be assessed separately by comparison with further, increasingly more stringent, predetermined thresholds), the controller 9 will also effect the other processing modifications indicated in Figure 6, namely: constrain the motion vector estimator 4 to consider only the grown correlation surfaces; reduce the difference threshold in the motion vector estimator 4 to a lower value; simplify the vector reduction algorithm (via the vector reduction control route of Figure 2) to supply fewer vectors to the motion vector selector as previously described; or switch the output to nearest progressive scan mode. Again, the various thresholds are determined through experience after the necessary testing and may be entered by the system operator prior to processing or pre-stored either in the system controller 9 or separately. Of course, continued monitoring of the statistical quantities derived by the motion vector reducer will indicate to the system controller 9 when the situation has improved to a point where less stringent processing criteria are required, and the system controller will adjust the various processes accordingly.
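A sketch of this graded response is given below; how the actual values of N and M are mapped onto increasingly severe modifications is an assumption of the sketch, the description stating only that further, more stringent thresholds are applied:

```python
def figure6_decision(N, M, n_threshold=70.0, m_threshold=50.0):
    """Illustrative sketch of the graded Figure 6 response for one new frame.

    Returns the list of modifications to apply, ordered from least to most
    severe, using the example thresholds N < 70% and M > 50%.
    """
    if not (N < n_threshold and M > m_threshold):
        return []

    actions = ["intra_field_interpolation_only"]        # always applied in this situation
    severity = (n_threshold - N) + (M - m_threshold)    # crude combined severity measure
    if severity > 20:
        actions += ["consider_grown_surfaces_only", "reduce_difference_threshold"]
    if severity > 40:
        actions.append("simplify_vector_reduction")     # e.g. omit global vectors
    if severity > 60:
        actions.append("nearest_progressive_scan_mode")
    return actions
```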
It will be appreciated that it may be desirable for the system controller 9 to implement the processing modifications over a number of input frames to avoid a noticeable sudden change in output image quality. Further, it may be desirable in certain cases for the system controller 9 to average the values of the statistical quantities over a predetermined number of input frames and analyse the average values to determine what, if any, modification of the processing operation is required.
It will be apparent that other statistical quantities relating to the derived motion vectors apart from N, M and G referred to above may be useful to the system controller 9, and provision may be made in the motion vector reducer 5 for calculating such quantities. As an example, the average magnitude of all valid motion vectors may be useful in certain circumstances. Further, the operations of the system controller described above are provided as examples only, and other modification operations which can usefully be effected by the system controller based on statistical quantities relating to the derived motion vectors will be apparent to those skilled in the art. Finally, while an embodiment of the invention has been described with particular reference to a frame rate converter, the invention is equally applicable to other processing operations involving motion compensated temporal interpolation.
Claims (24)
1. A method of processing an input digital video signal, the method comprising:
deriving motion vectors representing the motion of the content of respective blocks of pixels in a first field or frame of the input video signal between the said first field or frame and the following field or frame of the input video signal;
assigning to each block one or more additional motion vectors selected from the motion vectors derived for the other blocks and the zero motion vector;
selecting from the motion vectors associated with the said blocks output pixel motion vectors to be associated with respective pixels of an output field or frame to be produced; and
producing an output field or frame from the input fields or frames by motion compensated temporal interpolation;
characterised by analysing the derived motion vectors to determine a plurality of statistical quantities relating to said motion vectors and automatically modifying one or more of the processes of deriving motion vectors, assigning additional motion vectors, selecting output pixel motion vectors, and producing output fields or frames, in dependence upon one or more of said statistical quantities.
2. A method as claimed in claim 1, including analysing the derived motion vectors to determine the said statistical quantities for each field or frame of the input signal.
3. A method as claimed in claim 1 or claim 2, wherein the step of deriving motion vectors includes generating correlation surfaces each of which correspond to the difference between the content of a respective block in the first field or frame and the content of an area in the second field or frame with which the block is compared, and threshold testing the correlation surface for a motion vector corresponding to the minimum value of said difference which differs from the next smallest minimum by more than a difference threshold.
4. A method as claimed in claim 3, wherein the said statistical quantities include the proportion M of those derived vectors which passed the threshold test for the input field or frame which have a magnitude greater than a magnitude threshold.
5. A method as claimed in claim 3 or claim 4, wherein the said statistical quantities include the proportion G of those derived motion vectors which passed the threshold test for the input field or frame which contribute to the global motion vectors for that field or frame.
6. A method as claimed in any one of claims 3 to 5, wherein the said statistical quantities include the proportion N of derived motion vectors for an input field or frame which passed the threshold test.
7. A method as claimed in any one of claims 3 to 6, wherein the process of deriving motion vectors is automatically modified by reducing the said difference threshold.
8. A method as claimed in any one of claims 3 to 6, wherein the correlation surfaces are grown and the grown correlation surfaces are threshold tested, and wherein the process of deriving motion vectors is automatically modified by deriving motion vectors from only the grown correlation surfaces.
9. A method as claimed in claim 7, when dependent upon claim 6, or claim 8, when dependent upon claim 6, wherein the process of deriving motion vectors is automatically modified in response to a reduction in N.
10. A method as claimed in claim 7, when dependent upon claim 4 and claim 6, or claim 8, when dependent upon claim 4 and claim 6, wherein the process of deriving motion vectors is automatically modified in response to a reduction in N in combination with an increase in M.
11. A method as claimed in any preceding claim wherein the input signal has a frame format and the process of producing output fields or frames is automatically modified such that each output field or frame is produced directly from the temporally closest input frame.
12. A method as claimed in claim 11, when dependent upon claim 6, wherein the process of producing output fields or frames is automatically modified in response to a reduction in N.
13. A method as claimed in claim 11, when dependent upon claim 5, wherein the process of producing output fields or frames is automatically modified in response to a reduction in G.
14. A method as claimed in claim 11, when dependent upon claim 5 and claim 6, wherein the process of producing output fields or frames is automatically modified in response to a reduction in N in combination with a reduction in G.
15. A method as claimed in claim 11, when dependent upon claim 4 and claim 6, wherein the process of producing output fields or frames is automatically modified in response to a reduction in N in combination with an increase in M.
16. A method as claimed in any preceding claim, wherein the input video signal comprises a series of progressive scan format frames, one corresponding to each of the fields of an original video signal in field format, each of the progressive scan frames being produced by one or a combination of intra-field interpolation from a single field of the original video signal and inter-field interpolation from a plurality of fields of the original video signal, and wherein the process of producing progressive scan frames is automatically modified in dependence upon one or more of said statistical quantities such that said progressive scan frames are produced by intra-field interpolation only.
17. A method as claimed in claim 16, when dependent upon claim 4 and claim 6, wherein the process of producing progressive scan frames is automatically modified in response to a reduction in N in combination with an increase in M.
18. A method as claimed in any preceding claim, wherein the process of assigning additional motion vectors is automatically modified such that the additional motion vectors exclude global motion vectors.
19. A method as claimed in any preceding claim, wherein the process of assigning additional motion vectors is automatically modified such that the zero motion vector is the only additional motion vector.
20. A method as claimed in claim 18, when dependent upon claim 4 and claim 6, or claim 19, when dependent upon claim 4 and claim 6, wherein the process of assigning additional motion vectors is modified in response to a reduction in N and an increase in M.
21. A method as claimed in any preceding claim, wherein automatic modification of a said process is effected gradually over a predetermined number of input fields or frames.
22. A method of processing an input digital video signal substantially as hereinbefore described with reference to the accompanying drawings.
23. Apparatus adapted to perform the method of any of the preceding claims.
24. Apparatus for processing an input digital video signal substantially as hereinbefore described with reference to the accompanying drawings.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB9201611A GB2263600B (en) | 1992-01-24 | 1992-01-24 | Motion dependent video signal processing |
JP5010209A JPH05268580A (en) | 1992-01-24 | 1993-01-25 | Method and device for processing motion dependent type video signal |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB9201611A GB2263600B (en) | 1992-01-24 | 1992-01-24 | Motion dependent video signal processing |
Publications (3)
Publication Number | Publication Date |
---|---|
GB9201611D0 (en) | 1992-03-11 |
GB2263600A (en) | 1993-07-28 |
GB2263600B (en) | 1995-06-07 |
Family
ID=10709233
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
GB9201611A Expired - Fee Related GB2263600B (en) | 1992-01-24 | 1992-01-24 | Motion dependent video signal processing |
Country Status (2)
Country | Link |
---|---|
JP (1) | JPH05268580A (en) |
GB (1) | GB2263600B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP5023780B2 (en) * | 2007-04-13 | 2012-09-12 | ソニー株式会社 | Image processing apparatus, image processing method, and program |
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB2231227A (en) * | 1989-04-27 | 1990-11-07 | Sony Corp | Motion dependent video signal processing |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB2277005A (en) * | 1993-04-08 | 1994-10-12 | Sony Uk Ltd | Detecting motion vectors in video signal processing; comparison with threshold. |
GB2277005B (en) * | 1993-04-08 | 1997-10-08 | Sony Uk Ltd | Motion compensated video signal processing |
EP2094007A1 (en) * | 2006-12-22 | 2009-08-26 | Sharp Kabushiki Kaisha | Image display device and method, and image processing device and method |
EP2094007A4 (en) * | 2006-12-22 | 2011-11-02 | Sharp Kk | Image display device and method, and image processing device and method |
CN101573972B (en) * | 2006-12-22 | 2012-11-21 | 夏普株式会社 | Image display device and method, and image processing device and method |
US8358373B2 (en) | 2006-12-22 | 2013-01-22 | Sharp Kabushiki Kaisha | Image displaying device and method, and image processing device and method |
Also Published As
Publication number | Publication date |
---|---|
GB9201611D0 (en) | 1992-03-11 |
JPH05268580A (en) | 1993-10-15 |
GB2263600B (en) | 1995-06-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US5631706A (en) | Converter and method for converting video signals of interlace format to video signals of progressive format | |
KR100272582B1 (en) | Scan converter | |
US6104755A (en) | Motion detection using field-difference measurements | |
US7440033B2 (en) | Vector based motion compensation at image borders | |
US5157732A (en) | Motion vector detector employing image subregions and median values | |
US8340186B2 (en) | Method for interpolating a previous and subsequent image of an input image sequence | |
KR100579493B1 (en) | Motion vector generation apparatus and method | |
US5095354A (en) | Scanning format converter with motion compensation | |
EP0241854B1 (en) | Video signal processing circuit of motion adaptive type | |
US20120207218A1 (en) | Motion detection device and method, video signal processing device and method and video display device | |
WO2005081524A1 (en) | Reducing artefacts in scan-rate conversion of image signals by combining interpolation and extrapolation of images | |
AU8624591A (en) | pideo image processing | |
KR950000440B1 (en) | Apparatus and method for doubling the scanning line | |
JPH05347752A (en) | Video signal processing device | |
US5515114A (en) | Motion-vector detection apparatus for a video signal | |
US6452972B1 (en) | Motion detection using field-difference measurements | |
US8018530B2 (en) | Adaptive video de-interlacing | |
GB2263602A (en) | Motion compensated video signal processing | |
JPH0799619A (en) | Image processor | |
GB2263600A (en) | Motion dependent video signal processing | |
GB2277005A (en) | Detecting motion vectors in video signal processing; comparison with threshold. | |
JPH1098695A (en) | Image information converter and its device and product sum arithmetic unit | |
JP3871360B2 (en) | Video signal processing apparatus and method | |
GB2312806A (en) | Motion compensated video signal interpolation | |
JPH077721A (en) | Movement corrected video signal processing system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PCNP | Patent ceased through non-payment of renewal fee | Effective date: 20060124 |