AU2003266803B2 - Method of determining repeated pattern, frame interpolation method, and frame interpolation appartus - Google Patents

Method of determining repeated pattern, frame interpolation method, and frame interpolation appartus

Info

Publication number
AU2003266803B2
Authority
AU
Australia
Prior art keywords
repeated pattern
mae
block
blocks
map
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
AU2003266803A
Other versions
AU2003266803A1 (en)
Inventor
Jeong-Woo Kang
Jong-Sul Min
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Samsung Electronics Co Ltd
Original Assignee
Samsung Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Samsung Electronics Co Ltd filed Critical Samsung Electronics Co Ltd
Publication of AU2003266803A1 publication Critical patent/AU2003266803A1/en
Application granted granted Critical
Publication of AU2003266803B2 publication Critical patent/AU2003266803B2/en
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Links

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/01Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level
    • H04N7/0135Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level involving interpolation processes
    • H04N7/0147Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level involving interpolation processes the interpolation using an indication of film mode or an indication of a specific pattern, e.g. 3:2 pull-down pattern
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/01Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level
    • H04N7/0127Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level by changing the field or frame frequency of the incoming video signal, e.g. frame rate converter
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/01Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level
    • H04N7/0135Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level involving interpolation processes
    • H04N7/014Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level involving interpolation processes involving the use of motion vectors

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Television Systems (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Description

AUSTRALIA
Patents Act COMPLETE SPECIFICATION
(ORIGINAL)
Class: Int. Class:
Application Number: Lodged:
Complete Specification Lodged: Accepted: Published:
Priority: Related Art:
Name of Applicant: Samsung Electronics Co., Ltd
Actual Inventor(s): Jong-sul Min, Jeong-woo Kang
Address for Service and Correspondence: PHILLIPS ORMONDE FITZPATRICK, Patent and Trade Mark Attorneys, 367 Collins Street, Melbourne 3000, AUSTRALIA
Invention Title: METHOD OF DETERMINING REPEATED PATTERN, FRAME INTERPOLATION METHOD, AND FRAME INTERPOLATION APPARATUS
Our Ref: 708033
POF Code: 460249/462172
The following statement is a full description of this invention, including the best method of performing it known to applicant(s):

METHOD OF DETERMINING REPEATED PATTERN, FRAME INTERPOLATION METHOD, AND FRAME INTERPOLATION APPARATUS

CROSS REFERENCE TO RELATED APPLICATION

This application claims the priority of Korean Patent Application No.
2002-79745, filed on 13 December 2002, in the Korean Intellectual Property Office, the disclosure of which is hereby incorporated by reference in its entirety.
BACKGROUND OF THE INVENTION

1. Field of the Invention

The present invention relates to a frame rate converter, and more particularly, to a method of determining whether a block is included in a repeated pattern, for effective frame interpolation of an image with a repeated pattern, a frame interpolation method thereof, and a frame interpolation apparatus therefor.
2. Description of the Related Art

Resolution of an image is determined by the number of pixels contained in a frame. At a maximum resolution of 1,920 x 1,080, a frame is composed of 1,920 pixels on each horizontal line and 1,080 pixels on each vertical line. Frame rate indicates the number of frames transmitted per second. When an image signal, such as a TV signal, is transmitted, the frame rate is determined based on human visual characteristics.
In general, image signals output from image output devices are broadcast at various frequencies, according to local requirements. For example, an image signal with a vertical frequency of 50 Hz should be output in Europe and China, and an image signal with a vertical frequency of 60 Hz should be output in the Republic of Korea and North America.
Image output devices require frequency conversion when outputting image signals with various frequencies. Such frequency conversion is referred to as frame rate conversion. In particular, when converting a low frequency to a high frequency, the number of frames must be increased.
In the past, the number of frames was increased either by repeating adjacent frames or by creating new frames using a motion vector estimated from the difference between adjacent frames.
A high-resolution system verifies the motion of the current frame according to the moving tendency of the image to construct natural-looking images, using a motion vector correction technique. The motion vector correction technique comprises extracting a block with a plurality of motion vectors, including a motion vector targeted for correction, and correcting a motion vector that is in a direction different from those of adjacent motion vectors in this block, so that the corrected motion vector is in the same direction as the adjacent motion vectors.
As described above, the conventional frame interpolation method uses the correlation among frames to provide good performance for normal moving images or still images. However, the conventional frame interpolation method cannot ensure sufficient frame interpolation performance when an image with a repeated pattern moves between frames, because it is difficult to accurately estimate the motion vector.
For example, when a periodically-repeated pattern moves, such as a shirt with stripes, a table cloth with stripes, or a building with a series of windows, it is difficult to accurately estimate the motion vector between frames, in contrast to normal moving images or still images. This is because the correlation among frames changes dramatically.
Accordingly, there exists a need for a method of determining the case in which a repeated pattern moves between frames and for an effective frame interpolation method therefor.
The discussion of the background to the invention herein is included to explain the context of the invention. This is not to be taken as an admission that any of the material referred to was published, known or part of the common general knowledge in Australia as at the priority date of any of the claims.
SUMMARY OF THE INVENTION

The present invention may provide a method of determining a repeated pattern when an image with a repeated pattern moves. The present invention may also provide a frame interpolation method used when an image with a repeated pattern moves. The present invention may also provide a frame interpolation apparatus for use with the frame interpolation method.
According to one aspect of the present invention, there is provided a method of determining whether a reference block of M x N is included in a repeated pattern when an image with the repeated pattern moves, the method including obtaining an error for a standard block and a reference block using a full search, each reference block included in a search area of (M+2P) x (N+2P) that belongs to a frame referred to during motion vector search, arranging x errors in the form of a map of x according to the order of the reference blocks, obtaining each deviation between the current block and adjacent blocks in a left diagonal and a right diagonal of the map, and separately accumulating the obtained deviations in the left diagonal and the obtained deviations in the right diagonal, comparing the accumulated deviation in the left diagonal to the accumulated deviation in the right diagonal, and selecting a greater deviation, comparing the selected deviation with a threshold of 1, and if the selected deviation is more than the threshold of 1, determining that the reference block is included in the repeated pattern.
The method may further comprise segmenting the map into subblocks with identical sizes, calculating a ratio of a maximum error to a minimum error for each subblock, counting a total number of subblocks having a ratio greater than a threshold of 2, and if the total number is more than a threshold of 3, determining that the subblocks are included in the repeated pattern.
The method may further comprise checking a distribution of a subblock with a ratio greater than the threshold of 2 and determining whether the subblock is included in a pseudo repeated pattern.
In the method, it may be determined whether the subblock is included in the pseudo repeated pattern by comparing the pseudo repeated pattern of subblocks that are concentrically distributed over horizontal, vertical, and diagonal directions of the map.
The method may further comprise checking the degree of repetition of the block which is determined as included in a repeated pattern in reference blocks and adjacent blocks and determining that a reference block is included in the repeated pattern based on a degree of repetitions.
According to another aspect of the present invention, there is provided a frame interpolation method including obtaining an error for a standard block and a reference block using a full search, each reference block included in a search area of (M+2P) x (N+2P) that belongs to a frame referred to during motion vector search, (b) estimating a motion vector as location information of a reference block with minimum error, determining whether the standard block and reference blocks are included in the repeated pattern, based on obtained errors, calculating a correlation between a current block to be interpolated and adjacent blocks surrounding the current block, and obtaining an interpolated image by mixing blocks formed by linear interpolation and blocks formed by motion estimation and motion compensation (ME/MC), based on the calculated correlation.
Step may further comprise counting the number of all adjacent blocks that surround the reference blocks and are included in the repeated pattern and calculating a ratio of the total number of adjacent blocks surrounding the reference blocks to the counted number.
The current block to be formed by interpolation is expressed as follows:

f(i, j) = [ a x linear(i, j) + (total - a) x MC(i, j) ] / total

where f(i, j) denotes a pixel location of the current block, a denotes the number of adjacent blocks included in the repeated pattern, and total denotes the total number of adjacent blocks.
According to yet another aspect of the present invention, there is provided a frame interpolation apparatus including a mean absolute error (MAE) calculating unit, which calculates MAEs between a standard block and reference blocks using a full search, each reference block included in a search area of (M+2P) x (N+2P) that belongs to a frame referred to during motion vector search, an MAE map storing unit, which stores the errors in the form of a map according to the order of the reference blocks, a motion vector extracting unit, which recognizes a moving direction of a current block as a location (x, y) of a reference block with a minimum MAE, among MAEs stored in the MAE map, a repeated pattern determining unit, which determines whether a block is included in a repeated pattern, referring to the MAE map stored in the MAE map storing unit, and a frame interpolating unit, which interpolates a frame using an extracted motion vector and linear interpolation.
BRIEF DESCRIPTION OF THE DRAWINGS

The above and other aspects and advantages of the present invention will become more apparent by describing in detail exemplary embodiments thereof with reference to the attached drawings in which:

FIG. 1 schematically illustrates the symmetric block matching method;
FIG. 2 is a block diagram of a frame interpolation apparatus using conventional motion estimation/motion compensation (ME/MC);
FIG. 3 is a flowchart of a method of determining a repeated pattern according to the present invention;
FIG. 4 illustrates a graph formed of MAEs present in the right and left diagonals of a MAE map, for a block included in a repeated pattern;
FIG. 5 illustrates a graph of a MAE map for a normal image;
FIG. 6 illustrates MAE maps for blocks included in the repeated pattern, in which each MAE map is segmented into 16 subblocks and regions with a MAE ratio that is more than 3.0 are marked with black dots;
FIG. 7 illustrates an image with a pseudo repeated pattern;
FIG. 8 illustrates a MAE map in which regions with a MAE ratio that is more than 3.0 are marked;
FIG. 9 illustrates narrow band classification for a subblock with a MAE ratio that is more than 3.0, which is concentrically present in a certain direction;
FIG. 10 is a flowchart describing frame interpolation according to the present invention;
FIG. 11 is a view for explaining the correlation among the current block and adjacent blocks;
FIG. 12 is a block diagram illustrating a frame interpolation apparatus according to the present invention;
FIG. 13 illustrates adjacent blocks for correlation checking;
FIGS. 14A through 14D illustrate test images with repeated patterns;
FIG. 15 illustrates images processed by conventional MC, in which the image with the repeated pattern is scattered; and
FIG. 16 illustrates images processed by a method of interpolating a frame according to the present invention, in which motion artifacts are suppressed visually.
DETAILED DESCRIPTION OF THE INVENTION

The present invention will now be described more fully with reference to the accompanying drawings, in which preferred embodiments of the invention are shown.
A block matching method used for motion estimation is classified into a forward block matching method, a backward block matching method, and a symmetric block matching method. Since a motion's locus is important to frame rate conversion, the symmetric block matching method, as shown in FIG. 1, is generally used for frame rate conversion.
FIG. 1 schematically illustrates the symmetric block matching method. In the symmetric block matching method shown in FIG. 1, respective errors are obtained for the standard block of a current frame and each reference block located in a motion vector search area of (M+2P) x (N+2P) of a reference frame. Here, the reference frame denotes the previous frame or the next frame.
Referring to FIG. 1, a frame k-1 represents the previous frame, a frame k+1 represents the next frame, and a frame k represents the current frame to be interpolated. The standard block of the current frame is represented by reference numeral 102. The motion vector search area of the previous frame is represented by reference numeral 110, and the motion vector search area of the next frame is represented by reference numeral 120. Assuming that the area of the standard block 102 is equal to M x N, the motion vector search area of the standard block 102 would be equal to (M+2P) x (N+2P), where P represents the number of pixels extending beyond the standard block in the x and y directions.
Generally, the area of the standard block 102 is equal to 8 x 8 pixels (including a total of 64 pixels), and the motion vector search area of the standard block 102, which extends 4 pixels beyond the block in the x direction and 4 pixels in the y direction, is equal to 16 x 16 pixels (including a total of 256 pixels).
Here, searches are classified into a full search or a diamond search. The full search is more widely used.
In a full search, a reference block with an area identical to that of the standard block is moved in units of one pixel and compared to the standard block. Thus, there is a total of 256 reference blocks.
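As an illustration of the full search just described, the following Python/NumPy sketch (not part of the patent; the function name, arguments, and the -8 to +7 displacement range taken from the embodiment described later are assumptions) enumerates every one-pixel displacement and keeps the reference block with the smallest error.

```python
import numpy as np

def full_search(std_block, ref_frame, y0, x0, search_range=(-8, 8)):
    """Exhaustive (full) search: the reference block is shifted one pixel at a
    time over the search range and compared with the standard block; the
    displacement with the smallest error is kept.  With the -8..+7 range this
    visits 16 x 16 = 256 reference blocks.  The caller must keep (y0, x0) far
    enough from the frame border for every shifted block to stay inside."""
    M, N = std_block.shape
    lo, hi = search_range
    best_err, best_disp = np.inf, (0, 0)
    for dy in range(lo, hi):
        for dx in range(lo, hi):
            ref = ref_frame[y0 + dy:y0 + dy + M, x0 + dx:x0 + dx + N]
            err = np.abs(std_block.astype(np.int32) - ref.astype(np.int32)).sum()
            if err < best_err:
                best_err, best_disp = err, (dx, dy)
    return best_disp, best_err  # displacement of the best match and its error
```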
The symmetric block matching method uses a sum of absolute difference (SAD) to obtain respective errors for the reference block in the motion vector search area of the previous frame and the reference block in the motion vector search area of the next frame.
According to the symmetric block matching method, the reference block 112 at the top left side of the motion vector search area 110 in the frame k-1 and the reference block 122 at the bottom right side of the motion vector search area 120 in the frame k+1 are compared to each other. Also, a reference block 114, separated by one pixel from the right side of the reference block 112, and a reference block 124, separated by one pixel from the left side of the reference block 122, are compared to each other.
SADs are obtained by comparing symmetric reference blocks with each other, and the results are stored in a SAD map.

FIG. 2 is a block diagram of a frame interpolation apparatus using conventional motion estimation/motion compensation (ME/MC).
A SAD calculating unit 202 calculates an SAD between symmetric reference blocks in the motion vector search area 110 of the frame k-1 and the motion vector search area 120 of the frame k+1, using a symmetric motion vector estimation method.
SAD = Σ_{i=0}^{M-1} Σ_{j=0}^{N-1} | f2(i, j) - f1(i, j) |    (1)

where M and N denote the width and the height of the standard block, and f1(i, j) and f2(i, j) denote a pixel location in the previous frame and a pixel location in the next frame.
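A minimal sketch of Equation 1 and of the symmetric SAD map described above, assuming Python/NumPy, an 8 x 8 block, and a -8 to +7 displacement range; the function names and the mirroring convention are illustrative, not the patent's own code.

```python
import numpy as np

def sad(b1, b2):
    """Equation (1): sum of absolute differences between two equally sized blocks."""
    return int(np.abs(b1.astype(np.int32) - b2.astype(np.int32)).sum())

def symmetric_sad_map(prev_frame, next_frame, y0, x0, M=8, N=8, R=8):
    """Symmetric block matching: for every displacement (dx, dy) the block
    shifted by (-dx, -dy) in the previous frame is compared with the mirrored
    block shifted by (+dx, +dy) in the next frame, giving a 2R x 2R SAD map
    (16 x 16 = 256 entries when R = 8).  (y0, x0) is assumed to lie far enough
    from the frame border for every shifted block to stay inside the frame."""
    sad_map = np.zeros((2 * R, 2 * R), dtype=np.int64)
    for iy, dy in enumerate(range(-R, R)):
        for ix, dx in enumerate(range(-R, R)):
            b_prev = prev_frame[y0 - dy:y0 - dy + M, x0 - dx:x0 - dx + N]
            b_next = next_frame[y0 + dy:y0 + dy + M, x0 + dx:x0 + dx + N]
            sad_map[iy, ix] = sad(b_prev, b_next)
    return sad_map
```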
Since the number of reference blocks in each motion vector search area is equal to 256, the number of SADs obtained is equal to 256.
A SAD map storing unit 204 stores the SAD map. The SAD map stores the 256 SADs obtained by the SAD calculating unit 202.
A motion vector extracting unit 206 determines the location (x, y) of the reference block with the minimum SAD among the SADs (motion candidates) stored in the SAD map storing unit 204, and recognizes the location (x, y) as the direction of image movement in a current block.
A motion filter 208 is used to consider the correlation between the current block and adjacent blocks and prevent occurrence of a motion vector error. In general, a median filter or an average filter is used as the motion filter 208. The frame interpolating unit 210 interpolates a current frame using a final motion vector processed by the motion filter 208, where the final motion vector is the average of motion vectors of the previous frame and the next frame.
As described above, with reference to FIGS. 1 and 2, conventional motion estimation used in frame rate conversion recognizes a motion vector with the minimum SAD in the motion vector search area as the optimal motion vector.
However, even the optimal motion vector may include error regarding an amount or direction of motion. Thus, the motion filter 208 verifies the correlation between the current block and adjacent blocks.
A median filter or average filter may be used as the motion filter 208. The median filter arranges motion vectors of a current block and adjacent blocks in order, from minimum to maximum, and extracts the middle value among the arranged motion vectors. The average filter calculates the average of all motion vectors.
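The following sketch illustrates one way the motion filter could be realized, assuming the median is taken component-wise over the motion vectors of the current block and its neighbours; the function names and the nine-vector example are assumptions, not the patent's code.

```python
import numpy as np

def median_motion_filter(motion_vectors):
    """Median filter over the motion vectors of the current block and its
    adjacent blocks; the median is taken component-wise here, which is one
    common way to realise the behaviour described above."""
    v = np.asarray(motion_vectors)              # shape (k, 2), rows are (dx, dy)
    return float(np.median(v[:, 0])), float(np.median(v[:, 1]))

def average_motion_filter(motion_vectors):
    """Average filter: simple mean of all candidate motion vectors."""
    v = np.asarray(motion_vectors, dtype=float)
    return tuple(v.mean(axis=0))

# Example: the current block's vector plus eight neighbours
vectors = [(1, 0), (1, 1), (2, 0), (1, 0), (0, 0), (1, 1), (1, 0), (2, 1), (1, 0)]
print(median_motion_filter(vectors))   # -> (1.0, 0.0)
print(average_motion_filter(vectors))  # -> (1.11..., 0.33...)
```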
The final motion vector processed by the motion filter 208 is regarded as having the optimal motion information of the current block and incorporating the average (dx, dy) of the motion vectors of the previous frame and the next frame.
Equation 2 below represents the frame interpolation method.
f_k(i, j) = [ f_{k-1}(i + dx, j + dy) + f_{k+1}(i - dx, j - dy) ] / 2    (2)

where f_k(i, j) denotes a pixel location of the interpolated frame, and (dx, dy) denotes the final motion vector.
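A hedged sketch of Equation 2, assuming the sign convention that the previous frame is read at (i + dx, j + dy) and the next frame at (i - dx, j - dy); the function name and block-size defaults are illustrative only.

```python
import numpy as np

def mc_interpolate_block(prev_frame, next_frame, y0, x0, dx, dy, M=8, N=8):
    """Equation (2) as reconstructed here: the block of the interpolated frame k
    is the average of the block displaced by (+dx, +dy) in frame k-1 and the
    block displaced by (-dx, -dy) in frame k+1.  The displaced blocks are
    assumed to lie inside the frame."""
    b_prev = prev_frame[y0 + dy:y0 + dy + M, x0 + dx:x0 + dx + N].astype(np.float32)
    b_next = next_frame[y0 - dy:y0 - dy + M, x0 - dx:x0 - dx + N].astype(np.float32)
    return 0.5 * (b_prev + b_next)
```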
Using the motion vector obtained by conventional motion estimation, the conventional frame interpolation method exhibits sufficient performance for normal moving or still images. However, when an image with a repeated pattern moves between frames, it is difficult to obtain an accurate motion vector using the conventional motion estimation algorithm.
For example, when an image with a periodically-repeated pattern moves, such as a shirt with stripes, a table cloth with stripes, or a building with a series of windows, it is difficult to accurately estimate the motion vector. This is because the correlation between frames changes dramatically.
In short, it is difficult to accurately estimate motion information of an image with a repeated pattern using conventional motion estimation, which causes motion artifacts such as image shattering.
Accordingly, the present invention proposes an algorithm which allows the determination of a repeated pattern. Also, the present invention proposes a frame interpolation method, by which a block included in a repeated pattern is processed by linear interpolation and the other blocks are processed by conventional ME/MC using a motion vector. In addition, the present invention proposes a frame interpolation apparatus for use with the frame interpolation method.
FIG. 3 is a flowchart of a method of determining a repeated pattern according to the present invention.
The algorithm for determining a repeated pattern according to the present invention proceeds in three steps.
The following two assumptions can be made, based on the fact that the correlation between the current block and adjacent blocks changes to a greater extent for a repeated pattern.
First, in the repeated pattern, the correlation among mean absolute errors (MAE) in a MAE map changes significantly. The deviation between MAEs in the right and left diagonals of the block included in the repeated pattern is greater than in the case of normal moving or still images.
Second, significant change in the correlation among MAEs is evenly distributed over the entire motion vector search area of the block.
Under such assumptions, the algorithm for determining a repeated pattern proceeds in three steps. In the following description, a MAE map distance indicates the deviation between MAEs, and respective block errors indicate MAEs or SADs.
The MAE, which indicates the average of the sum of differences between pixels, can be obtained using Equation 3 below:

MAE = (1 / (M x N)) Σ_{i=0}^{M-1} Σ_{j=0}^{N-1} | f2(i, j) - f1(i, j) |    (3)

where M and N denote the width and the height of a block, and f1 and f2 denote the pixel location of the previous frame and the pixel location of the next frame.
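Equation 3 can be sketched directly in Python/NumPy as follows (illustrative only; the MAE is simply the SAD of Equation 1 normalized by the M x N block size).

```python
import numpy as np

def mae(f1_block, f2_block):
    """Equation (3): mean absolute error between co-located M x N blocks of the
    previous (f1) and next (f2) frames -- the SAD divided by M x N pixels."""
    diff = np.abs(f2_block.astype(np.int32) - f1_block.astype(np.int32))
    return float(diff.mean())
```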
In a step of determining the MAE map distance, according to the first assumption, the MAE map is obtained from the motion vector search areas of the previous frame and the next frame (step S302). Then each motion vector search area is segmented into subblocks (step S304). The maximum MAE and the minimum MAE are obtained from each subblock and a ratio of the maximum MAE to the minimum MAE, a MAE ratio (Max/Min), is calculated (step S306).
Compared to normal moving or still images, in an image with a repeated pattern the correlation among MAEs in the MAE map changes to a greater extent. The movement of the repeated pattern can be recognized by searching for the movement in the right and left diagonals of the motion vector search area.
The extent to which the correlation among MAEs in the MAE map changes can be measured using the following method.
The MAEs are arranged in the form of a MAE map according to the order of the reference blocks. In the MAE map, deviations between adjacent MAEs in a right diagonal and a left diagonal are obtained and accumulated. The accumulated value for the right diagonal and the accumulated value for the left diagonal are compared to each other, and the greater value is taken and compared to a threshold 1.
Then the MAE map is segmented into subblocks. The MAE ratio is obtained from each subblock. The number of subblocks with an MAE ratio greater than a threshold of 2 is added up. The count value is compared to a threshold of 3.
In this embodiment of the present invention, the motion vector search area ranges from -8 to +7 in the x and y directions. The number of MAEs in the MAE map is equal to 256, including 16 MAEs in each horizontal line and 16 MAEs in each vertical line. The number of subblocks is equal to 16, including 4 subblocks in each horizontal line and 4 subblocks in each vertical line.
The differences between adjacent MAEs in the right diagonal and the left diagonal are obtained and accumulated (step S308). The accumulated value represents the total MAE map distance (step S310).
After the total MAE map distance is obtained, the maximum total MAE map distance is set as the MAE map distance for the current block. If the maximum total MAE map distance is more than the threshold of 1, the process proceeds according to the second assumption (step S312).
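A sketch of the MAE map distance step (S308 to S312), under the assumption that the deviations are accumulated along the two main diagonals of the 16 x 16 MAE map; the exact diagonals traversed and the function name are interpretations of the text, not the patent's code.

```python
import numpy as np

def mae_map_distance(mae_map, threshold_1):
    """Accumulate the absolute deviations between adjacent MAEs along the left
    and right diagonals of the MAE map, take the larger accumulation as the MAE
    map distance, and flag the block when that distance exceeds threshold 1."""
    right = np.diag(mae_map)                 # top-left to bottom-right diagonal
    left = np.diag(np.fliplr(mae_map))       # top-right to bottom-left diagonal
    dist_right = float(np.abs(np.diff(right)).sum())
    dist_left = float(np.abs(np.diff(left)).sum())
    distance = max(dist_right, dist_left)
    return distance, distance > threshold_1
```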
FIG. 4 illustrates a graph formed of MAEs present in right and left diagonals of a MAE map, for a block included in a repeated pattern. As shown in FIG. 4, the correlation among MAEs changes significantly.
FIG. 5 illustrates a graph of a MAE map for a normal image. As shown in FIG. 5, the correlation among MAEs present in the right and left diagonals does not change significantly.
In the second step of the algorithm for determining a repeated pattern, MAE map classification, a pseudo repeated pattern is discarded. The pseudo repeated pattern is not an actual repeated pattern, but contains characteristics similar to the repeated pattern. If the distribution of the regions with a MAE ratio that is more than 3.0 is skewed, that is, if such regions are concentrated in a particular direction, it is determined that the subblocks are included in the pseudo repeated pattern.
If the subblocks having a MAE ratio that is more than the threshold of 2 (3.0 in the present invention) occupy more than 50% of the total subblocks, then the subblocks are determined to be included in the repeated pattern and the process proceeds to the distribution check in step S314.
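The subblock ratio check can be sketched as follows, assuming a 16 x 16 MAE map split into sixteen 4 x 4 subblocks with threshold 2 = 3.0 and threshold 3 = 7 as in the embodiment; the small epsilon guard and the function name are assumptions, not the patent's code.

```python
import numpy as np

def subblock_ratio_check(mae_map, threshold_2=3.0, threshold_3=7, sub=4):
    """Segment the MAE map into subblocks, compute the Max/Min MAE ratio for
    each, and count how many exceed threshold 2; the block passes this stage
    when the count exceeds threshold 3."""
    n = mae_map.shape[0] // sub
    flags = np.zeros((n, n), dtype=bool)
    for r in range(n):
        for c in range(n):
            s = mae_map[r * sub:(r + 1) * sub, c * sub:(c + 1) * sub]
            ratio = s.max() / max(s.min(), 1e-6)   # guard against a zero minimum MAE
            flags[r, c] = ratio > threshold_2
    return flags, int(flags.sum()) > threshold_3
```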
As shown in Equation 4, the regions whose MAE ratio is more than the threshold of 2 are referred to as the majority region, and the other regions are referred to as the minority region. The respective numbers of subblocks in the majority region and the minority region are added up.
majority_count = Σ_{i ∈ majority region} f(i)
minority_count = Σ_{i ∈ minority region} f(i)    (4)

where f(i) denotes the result of comparing the MAE ratio of subblock i with the threshold of 2.
In step S314, the count value is used with Equation 5 to determine the pseudo repeated pattern.
(minority_count ≥ α x region_count) ∧ (minority_count ≥ β x (16 - region_count))    (5)

where region_count denotes the number of subblocks in the majority region. The coefficient α is set to 0.5, and the coefficient β is set to 0.25.
FIG. 6 illustrates MAE maps for blocks included in the repeated pattern, in which each MAE map is segmented into 16 subblocks and regions with a MAE ratio that is more than 3.0 are marked with black dots. Referring to FIG. 6, the subblocks with a MAE ratio that is more than the threshold of 2 (3.0) are evenly distributed over the MAE map; the threshold of 3 is set to 7. According to the experiment, since the MAE ratio of a normal image is less than 2.0, the threshold of 2 allows for distinct classification of normal images and images with a repeated pattern.
The distribution of a MAE ratio is checked in step S316, and a block with a narrow band is determined to be included in the pseudo repeated pattern in step S318.
FIG. 7 illustrates an image with a pseudo repeated pattern. In FIG. 7, the part marked with a circle is the pseudo repeated pattern. As shown in FIG. 8, regions with a MAE ratio that is more than 3.0 are concentrically present in the cross direction. This result does not correspond to the assumption about characteristics of images with repeated patterns. Hence, the pseudo repeated patterns must be excluded.
FIG. 9 illustrates narrow band classification for a subblock with a MAE ratio that is more than 3.0, which is concentrically present in a certain direction. Regions with a MAE ratio that is more than 3.0 are marked with black dots, and the other regions are left unmarked.

Determination of a repeated pattern ends with modified median filtering. In modified median filtering, the repetition of adjacent blocks is checked, based on the fact that the repeated pattern is not independent but concentrated, thereby accepting repetition of the current block only when adjacent blocks exhibit repetition at more than a predetermined level. In other words, the number of blocks included in the repeated pattern, among the 24 adjacent blocks including 5 blocks in each horizontal line and 5 blocks in each vertical line, is added up. If the number of blocks included in the repeated pattern adds up to more than a threshold of 4 (6 in the present invention), the current block is determined to be included in the repeated pattern.
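A sketch of the modified median filtering step, assuming a per-block boolean map of preliminary repeated-pattern decisions; the handling of blocks near the frame border is an assumption not specified in the text.

```python
import numpy as np

def modified_median_filter(repeat_flags, r, c, threshold_4=6):
    """Among the 24 adjacent blocks in the 5 x 5 neighbourhood of the current
    block (the centre block itself is excluded), count how many are flagged as
    included in the repeated pattern; the current block is finally accepted as
    repeated only when the count is more than threshold 4 (6 here)."""
    H, W = repeat_flags.shape
    window = repeat_flags[max(r - 2, 0):min(r + 3, H), max(c - 2, 0):min(c + 3, W)]
    count = int(window.sum()) - int(repeat_flags[r, c])
    return count > threshold_4
```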
In the present invention, it is possible to use the MAE ratios of adjacent blocks instead of the MAE ratios of subblocks. In other words, the MAE ratios of adjacent blocks in a diagonal are obtained, the number of MAE ratios more than the threshold of 4 is added up, and the count value is compared to the threshold of 2.
Symmetrical block matching is used in the present invention, but other methods such as forward block matching or backward block matching can also be used.
In addition, the method of determining the repeated pattern of the present invention can be used in motion vector estimation as well as frame interpolation.
FIG. 10 is a flowchart describing a frame interpolation method according to the present invention.
In the first step S1002, a MAE map is obtained.
In the next step S1004, a motion vector is estimated based on the MAE map.
Then, in step S1006, it is determined whether a subblock of the MAE map is included in the repeated pattern. Determination of the repeated pattern has already been described in the flowchart of FIG. 3.
In step S1008, the adjacent blocks surrounding the current block are checked to obtain the ratio of blocks included in the repeated pattern to blocks without the repeated pattern, that is, the correlation between the current block and adjacent blocks.
FIG. 11 is a view for explaining the correlation between the current block and adjacent blocks. As shown in FIG. 11, a block 1102 interposed between a repeated pattern region and a non-repeated region is affected by both the repeated pattern and the non-repeated pattern. Thus, it is necessary to mix a block generated by linear interpolation with a block generated by ME/MC for properly interpolating the block 1102.
In step S1010, an interpolated image is obtained by mixing a block generated by linear interpolation with a block generated by ME/MC, based on the obtained ratio.
In other words, frame interpolation is performed by soft switching.
FIG. 12 is a block diagram illustrating a frame interpolation apparatus, according to the present invention.
A MAE calculating unit 1202 calculates MAEs between reference blocks in the search areas of the previous frame k-1 and the next frame k+1, using symmetrical motion vector estimation.
Since the number of reference blocks in each motion vector search area is 256, the number of obtained MAEs is 256.
A MAE map storing unit 1204 stores a MAE map. The MAE map includes 256 MAEs obtained by the MAE calculating unit 1202.
The motion vector extracting unit 1206 recognizes the moving direction of the current block as the location (x, y) of the reference block with the minimum MAE among the MAEs (motion candidates) stored in the MAE map storing unit 1204.
A motion filter 1208 is used to consider the correlation between the current block and adjacent blocks and prevent motion vector error. A median filter or average filter is widely used as the motion filter 1208.
A repeated pattern determining unit 1210 determines whether a block is included in the repeated pattern, referring to the MAE map stored in the MAE map storing unit 1204. The repeated pattern determining unit 1210 includes a MAE map distance calculating unit 1210a, a MAE map classifying unit 1210b, and a modified median filter 1210c.
The MAE map distance calculating unit 1210a calculates the differences between MAEs of adjacent blocks surrounding the current block in the right and left diagonals, and the results are accumulated to calculate the distance between MAEs present in the right and left diagonals.
The maximum distance among the calculated distances is set to the MAE map distance and compared to the threshold of 1. If the MAE map distance is more than the threshold of 1, the MAE map distance calculating unit 1210a determines that the first assumption is satisfied.
Then the MAE map distance calculating unit 1210a segments the MAE map into subblocks, and calculates the minimum MAE and the maximum MAE in each subblock to obtain a MAE ratio (Max/Min).
In the present invention, the motion vector search area ranges from -8 to +7 in the x and y directions. The total number of MAEs in the MAE map is 256, including 16 in each horizontal line and 16 in each vertical line. The MAE map is segmented into 16 subblocks, including 4 blocks in each horizontal line and 4 blocks in each vertical line.
The MAE map classifying unit 1210b excludes the pseudo repeated pattern. If the subblocks having a MAE ratio that is more than the threshold of 2 (3.0 in the present invention) occupy more than 50% of the total subblocks, then the MAE map classifying unit 1210b determines that such subblocks are included in the repeated pattern; the threshold of 3 is set to 7 in the present invention.
Then the MAE map classifying unit 1210b checks the distribution of the subblocks that have a MAE ratio more than the threshold of 2. If a subblock has a narrow band, it is determined to be included in the pseudo repeated pattern.
The modified median filter 1210c checks the repetition of adjacent blocks, accepting repetition of the current block only when adjacent blocks exhibit repetition at more than a predetermined level. In other words, the number of blocks with the repeated pattern among the 24 adjacent blocks, including 5 blocks in each horizontal line and 5 blocks in each vertical line, is added up. If the number of blocks included in the repeated pattern adds up to more than the threshold of 4 (6 in the present invention), the current block is determined to be included in the repeated pattern.
In the present invention, it is possible to use the MAE ratios of adjacent blocks instead of the MAE ratios of subblocks. In other words, the MAE ratios of adjacent blocks in right and left diagonals are obtained, the number of MAE ratios more than the threshold of 4 is added up, and the count value is compared to the threshold of 2.
A frame interpolating unit 1212 interpolates a frame using a final motion vector processed by the motion filter 1208 and using linear interpolation.
The frame interpolating unit 1212 checks adjacent blocks surrounding the current block to obtain the ratio of blocks included in the repeated pattern to blocks without the repeated pattern, that is, the correlation between the current block and adjacent blocks. Then the frame interpolating unit 1212 obtains the interpolated image by mixing a block generated by linear interpolation with a block generated by ME/MC.
For example, the correlation between the current block and adjacent blocks is checked by adding up the repetition of adjacent blocks surrounding the current block.
FIG. 13 illustrates adjacent blocks for correlation checking. In FIG. 13, twenty-five adjacent blocks surround the current block. The number of adjacent blocks included in the repeated pattern is added up.
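Counting the flagged neighbours can be sketched as follows; whether the current block itself is included in the count of the 5 x 5 window shown in FIG. 13 is left open by the text, and here the whole window is summed as an assumption.

```python
import numpy as np

def count_repeated_neighbours(repeat_flags, r, c):
    """Count how many blocks in the 5 x 5 neighbourhood of the current block
    (FIG. 13) are flagged as included in the repeated pattern; this count is
    the value a used in Equation (6) below."""
    H, W = repeat_flags.shape
    window = repeat_flags[max(r - 2, 0):min(r + 3, H), max(c - 2, 0):min(c + 3, W)]
    return int(window.sum())
```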
Frame interpolation based on the obtained correlation is expressed by Equation 6.
f(i, j) = [ a x linear(i, j) + (total - a) x MC(i, j) ] / total    (6)

where f(i, j) denotes the pixel location of the current block, a denotes the number of adjacent blocks included in the repeated pattern, and total denotes the total number of adjacent blocks, with a range of 0 ≤ a ≤ 25 in the present invention.
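Equation 6 can be sketched as a simple weighted mix (illustrative Python/NumPy; total = 25 follows FIG. 13 and is only a default, and the function name is an assumption).

```python
import numpy as np

def soft_switch_block(linear_block, mc_block, a, total=25):
    """Equation (6): soft switching between the linearly interpolated block and
    the ME/MC block, weighted by the fraction a/total of adjacent blocks that
    are included in the repeated pattern."""
    w = a / float(total)
    return w * linear_block.astype(np.float32) + (1.0 - w) * mc_block.astype(np.float32)
```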
According to the present invention, an image with the repeated pattern is processed by linear interpolation, and another image without the repeated pattern is processed by ME/MC.
However, simple hard switching cannot demonstrate good characteristics at a boundary between the repeated pattern region and the non-repeated pattern region. Thus, soft switching is used, based on the correlation between the current block and adjacent blocks.
In Equation 6, a denotes the number of adjacent blocks included in the repeated pattern. If a is large, that is, if the image has many repeated patterns, linear interpolation is used. Otherwise, MC-based interpolation is used with the average of motion vectors from the previous frame and the next frame.
FIGS. 14A through 14D illustrate test images with repeated patterns. FIG. 14A illustrates a grates image with repeated patterns at the center. FIG. 14B illustrates a Snell Wilcox image with radial repeated patterns. FIG. 14C illustrates a Melco image showing a building with repeated windows. FIG. 14D illustrates a restaurant image showing a table cloth with repeated stripes.
In order to compare the present invention with conventional MC, a mean square error (MSE) or a peak signal to noise ratio (PSNR), defined by Equations 7 and 8, is used.
MSE = (1 / (M x N)) Σ_{i=0}^{M-1} Σ_{j=0}^{N-1} ( f1(i, j) - f2(i, j) )^2    (7)

PSNR = 20 log10 ( 255 / √MSE )    (8)

where M and N denote the width and the height of a block, and f1(i, j) and f2(i, j) denote the pixel location of the previous frame and the pixel location of the next frame.
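Equations 7 and 8 can be sketched as follows for 8-bit images; the function name and the handling of a zero MSE are assumptions.

```python
import numpy as np

def mse_psnr(f1, f2):
    """Equations (7) and (8): mean square error between two images and the
    corresponding PSNR for 8-bit pixel data (PSNR is infinite when MSE is 0)."""
    diff = f1.astype(np.float64) - f2.astype(np.float64)
    mse = float(np.mean(diff ** 2))
    psnr = float('inf') if mse == 0 else 20.0 * np.log10(255.0 / np.sqrt(mse))
    return mse, psnr
```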
Table 1 shows PSNRs obtained using Equations 7 and 8. The obtained PSNRs are the average of PSNRs from the frames.
[Table 1]

Image          Conventional MC   Proposed algorithm   Improvement
Grates         23.6 dB           26.2 dB              2.6 dB
Snell Wilcox   15.8 dB           24.9 dB              9.1 dB
Melco          26.1 dB           26.5 dB              0.4 dB
Restaurant     28.9 dB           30.1 dB              1.2 dB

As implied in Table 1 above, the PSNRs vary with the form of the repeated pattern, the amount of movement between frames, and the extent to which the repeated pattern occupies the image.
Next, the visual difference obtained when using a frame interpolation method according to the present invention and using a conventional MC will be described with reference to FIGS. 15 and 16.
FIG. 15 illustrates images processed by conventional MC, in which the image with the repeated pattern is scattered.
FIG. 16 illustrates images processed by a method of interpolating a frame according to the present invention, in which motion artifacts are suppressed visually.
As a result of comparison, the frame interpolation method according to the present invention eliminates serious motion artifacts exhibited by conventional MC.
As described above, a method of determining a repeated pattern according to the present invention allows for effective frame interpolation when an image with a repeated pattern moves.
A frame interpolation method according to the present invention allows for the use of linear interpolation and conventional MC when an image with a repeated pattern moves, thereby providing an interpolated image without motion artifacts.
In addition, a frame interpolation apparatus according to the present invention mixes a frame processed by linear interpolation with a frame processed by conventional MC using motion compensation, when an image with a repeated pattern moves, thereby providing an interpolated image without motion artifacts.
While the present invention has been particularly shown and described with reference to exemplary embodiments thereof, it will be understood by those of ordinary skill in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims and their equivalents.

Claims (24)

1. A method of determining whether a reference block of M x N is included in a repeated pattern when an image with the repeated pattern moves, the method including: obtaining an error for a standard block and a reference block using a full search, each reference block included in a search area of (M+2P) x (N+2P) that belongs to a frame referred to during motion vector search; arranging x errors in the form of a map of x according to the order of the reference blocks; obtaining each deviation between the current block and adjacent blocks in a left diagonal and a right diagonal of the map, and separately accumulating the obtained deviations in the left diagonal and the obtained deviations in the right diagonal; comparing the accumulated deviation in the left diagonal to the accumulated deviation in the right diagonal, and selecting a greater deviation; comparing the selected deviation with a threshold of 1; and if the selected deviation is more than the threshold of 1, determining that the reference block is included in the repeated pattern.
2. The method of claim 1, wherein the error for the standard block and the reference block is referred to as a mean absolute error (MAE) because it is obtained by averaging a sum of absolute differences between reference blocks.
3. The method of claim 1 or 2 further comprising: segmenting the map into subblocks with identical sizes; calculating a ratio of a maximum error to a minimum error for each subblock; counting a total number of subblocks having a ratio greater than a threshold of 2; and if the total number is more than a threshold of 3, determining that the subblocks are included in the repeated pattern.
4. The method of claim 3 further comprising: checking a distribution of a subblock with a ratio greater than the threshold of 2 and determining whether the subblock is included in a pseudo repeated pattern.

5. The method of claim 4, wherein it is determined whether the subblock is included in the pseudo repeated pattern by comparing the pseudo repeated pattern of subblocks that are concentrically distributed over horizontal, vertical, and diagonal directions of the map.
6. The method of claim 5 further comprising: checking the degree of repetition of the block which is determined as included in a repeated pattern in reference blocks and adjacent blocks; and determining that a reference block is included in the repeated pattern based on a degree of repetitions.
7. The method of claim 6, wherein the degree of repetition is expressed by a total number of blocks which are determined as included in a repeated pattern in reference blocks, and adjacent blocks.
8. A frame interpolation method including: obtaining an error for a standard block and a reference block using a full search, each reference block included in a search area of (M+2P) x (N+2P) that belongs to a frame referred to during motion vector search; estimating a motion vector as location information of a reference block with minimum error; determining whether the standard block and reference blocks are included in the repeated pattern, based on obtained errors; calculating a correlation between a current block to be interpolated and adjacent blocks surrounding the current block; and obtaining an interpolated image by mixing blocks formed by linear interpolation and blocks formed by motion estimation and motion compensation (ME/MC), based on the calculated correlation.
9. The frame interpolation method of claim 8, wherein step further comprises: counting the number of all adjacent blocks that surround the reference blocks and are included in the repeated pattern; and calculating a ratio of the total number of adjacent blocks surrounding the reference blocks to the counted number.

10. The frame interpolation method of claim 9, wherein the current block to be formed by interpolation is expressed as follows:

f(i, j) = [ a x linear(i, j) + (total - a) x MC(i, j) ] / total

where f(i, j) denotes a pixel location of the current block, a denotes the number of adjacent blocks included in the repeated pattern, and total denotes the total number of adjacent blocks.
11. The method of claim 8, 9 or 10, wherein step further comprises: obtaining an error for a standard block and a reference block while using a full search, each reference block included in a search area of (M+2P) x (N+2P) that belongs to a frame referred to during motion vector search; arranging x errors in the form of a map of x according to the order of the reference blocks; obtaining each deviation between the current block and adjacent blocks surrounding the current block in a left diagonal and a right diagonal of the map, and separately accumulating the obtained deviation in the left diagonal and the obtained deviations in the right diagonal; comparing the accumulated deviation in the left diagonal to the accumulated deviation in the right diagonal, and selecting a greater deviation; comparing the selected deviation with a threshold of 1; and if the selected deviation is more than the threshold of 1, determining that the reference block is included in the repeated pattern.
12. The frame interpolation method of claim 11, wherein the error for the standard block and the reference block is referred to as a mean absolute error (MAE) because it is obtained by averaging a sum of absolute differences between reference 3o blocks.
13. The frame interpolation method of claim 11 or 12 further comprising: segmenting the map into subblocks with identical sizes; calculating a ratio of a maximum error to a minimum error for each subblock; counting the total number of subblocks having a ratio greater than a threshold of 2; and if the total number is more than a threshold of 3, determining that the subblocks are included in the repeated pattern.
14. The frame interpolation method of claim 13 further comprising: checking a distribution of a subblock with a ratio greater than the threshold of 2 and determining whether the subblock is included in a pseudo repeated pattern.

15. The frame interpolation method of claim 14, wherein it is determined whether the subblock is included in the pseudo repeated pattern by comparing the pseudo repeated pattern of subblocks that are concentrically distributed over horizontal, vertical, and diagonal directions of the map.
16. The frame interpolation method of claim 15 further comprising: checking the degree of repetition of the block which is determined as included in a repeated pattern in reference blocks and adjacent blocks; and determining that a reference block is included in the repeated pattern based on a degree of repetitions.
17. The method of claim 16, wherein the degree of repetition is expressed by the total number of blocks which are determined as included in a repeated pattern in reference blocks and adjacent blocks.
18. A frame interpolation apparatus including: a mean absolute error (MAE) calculating unit, which calculates MAEs between a standard block and reference blocks using a full search, each reference block included in a search area of (M+2P) x (N+2P) that belongs to a frame referred to during motion vector search; an MAE map storing unit, which stores the errors in the form of a map according to the order of the reference blocks; a motion vector extracting unit, which recognizes a moving direction of a current block as a location (x, y) of a reference block with a minimum MAE, among MAEs stored in the MAE map; a repeated pattern determining unit, which determines whether a block is included in a repeated pattern, referring to the MAE map stored in the MAE map storing unit; and a frame interpolating unit, which interpolates a frame using an extracted motion vector and linear interpolation.
19. The frame interpolation apparatus of claim 18, wherein the repeated pattern determining unit further comprises: a MAE map distance calculating unit, which segments the MAE map into subblocks and calculates the minimum MAE and a maximum MAE in each subblock to obtain a MAE ratio (Max/Min); calculates a difference between MAEs of adjacent blocks in a right diagonal and a left diagonal; accumulates the calculated differences to calculate a distance between MAEs present in the right and left diagonals; selects a maximum distance among the calculated distances; and determines that a reference block is included in the repeated pattern if the maximum distance is more than the threshold of 1.

20. The apparatus of claim 19, wherein the repeated pattern determining unit further comprises: a MAE map classifying unit, which counts a total number of subblocks with a MAE ratio greater than a threshold 2 in the right and left diagonals; and determines whether a subblock is included in the repeated pattern if the subblock has a MAE ratio that is more than the threshold 2 and occupies more than 50% of total subblocks.
21. The frame interpolation apparatus of claim 20, wherein the MAE map classifying unit checks a distribution of the subblock with a MAE ratio that is greater than the threshold of 2.
22. The frame interpolation apparatus of claim 21, wherein the MAE map classifying unit determines whether the subblock is included in a pseudo repeated pattern by comparing pseudo repeated patterns of subblocks that are concentrically distributed over horizontal, vertical, and diagonal directions of the MAE map.
23. The frame interpolation apparatus of claim 20, 21 or 22 wherein the repeated pattern determining unit further comprises: a median filter, which checks repetition of reference blocks and adjacent blocks that are included in the repeated pattern and determines that a reference block is included in the repeated pattern based on the degree of repetitions.
24. The frame interpolation apparatus of any one of claims 18 to 23, wherein the frame interpolating unit calculates a correlation among the current block to be interpolated and adjacent blocks surrounding the current block and, based on the calculated correlation, obtains an interpolated image by mixing blocks formed by linear interpolation and blocks formed by motion compensation (MC).

25. The frame interpolation apparatus of claim 24, wherein the frame interpolating unit counts the number of all surrounding adjacent blocks that are included in the repeated pattern and surround the standard block, calculates a ratio of the total number of adjacent blocks surrounding the reference blocks to the counted number, and uses the ratio to calculate a correlation among the current block to be interpolated and adjacent blocks surrounding the current block.
26. The apparatus of claim 25, wherein the current block to be formed by interpolation is expressed as follows:

f(i, j) = [ a x linear(i, j) + (total - a) x MC(i, j) ] / total

where f(i, j) denotes a pixel location of the current block, a denotes the number of adjacent blocks included in the repeated pattern, and total denotes the total number of adjacent blocks.
27. A method of determining whether a reference block is included in a repeated pattern when an image with the repeated pattern moves substantially as herein described with reference to Figs. 3 to 16 of the accompanying drawings.
28. A frame interpolation method substantially as herein described with reference to Figs. 3 to 16 of the accompanying drawings.
29. A frame interpolation apparatus substantially as herein described with reference to Figs. 3 to 16 of the accompanying drawings.

DATED: 4 December, 2003
PHILLIPS ORMONDE FITZPATRICK
Attorneys for: SAMSUNG ELECTRONICS CO. LTD.
AU2003266803A 2002-12-13 2003-12-05 Method of determining repeated pattern, frame interpolation method, and frame interpolation appartus Ceased AU2003266803B2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR2002-79745 2002-12-13
KR10-2002-0079745A KR100462629B1 (en) 2002-12-13 2002-12-13 Method for determining a repeated pattern, a frame interpolation method thereof and a frame interpolation apparatus thereof

Publications (2)

Publication Number Publication Date
AU2003266803A1 AU2003266803A1 (en) 2004-07-01
AU2003266803B2 true AU2003266803B2 (en) 2005-01-06

Family

ID=34270545

Family Applications (1)

Application Number Title Priority Date Filing Date
AU2003266803A Ceased AU2003266803B2 (en) 2002-12-13 2003-12-05 Method of determining repeated pattern, frame interpolation method, and frame interpolation appartus

Country Status (3)

Country Link
KR (1) KR100462629B1 (en)
CN (1) CN1250000C (en)
AU (1) AU2003266803B2 (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4398925B2 (en) * 2005-03-31 2010-01-13 株式会社東芝 Interpolation frame generation method, interpolation frame generation apparatus, and interpolation frame generation program
JP4165580B2 (en) * 2006-06-29 2008-10-15 トヨタ自動車株式会社 Image processing apparatus and image processing program
WO2011021915A2 (en) * 2009-08-21 2011-02-24 에스케이텔레콤 주식회사 Method and apparatus for encoding/decoding images using adaptive motion vector resolution
KR101356613B1 (en) 2009-08-21 2014-02-06 에스케이텔레콤 주식회사 Video Coding Method and Apparatus by Using Adaptive Motion Vector Resolution
KR102229192B1 (en) * 2019-12-06 2021-03-17 국방과학연구소 Control device that estimates the synchronous signal of a frame with variable length and control method there of

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2001074072A1 (en) * 2000-03-27 2001-10-04 Teranex, Inc. Processing sequential video images to detect image motion among interlaced video fields or progressive video images
US20040022320A1 (en) * 2002-08-02 2004-02-05 Kddi Corporation Image matching device and method for motion pictures
US20040227851A1 (en) * 2003-05-13 2004-11-18 Samsung Electronics Co., Ltd. Frame interpolating method and apparatus thereof at frame rate conversion

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100220834B1 (en) * 1996-11-15 1999-09-15 전주범 Apparatus and method of target tracking in image telephone
KR100301835B1 (en) * 1998-09-03 2001-09-06 구자홍 Method for block matching motion estimation and apparatus for the same
KR100317279B1 (en) * 1998-11-04 2002-01-15 구자홍 Lossless entropy coder for image coder
US6377297B1 (en) * 1999-12-07 2002-04-23 Tektronix, Inc. Detection of repeated and frozen frames in a video signal

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2001074072A1 (en) * 2000-03-27 2001-10-04 Teranex, Inc. Processing sequential video images to detect image motion among interlaced video fields or progressive video images
US20040022320A1 (en) * 2002-08-02 2004-02-05 Kddi Corporation Image matching device and method for motion pictures
US20040227851A1 (en) * 2003-05-13 2004-11-18 Samsung Electronics Co., Ltd. Frame interpolating method and apparatus thereof at frame rate conversion

Also Published As

Publication number Publication date
CN1250000C (en) 2006-04-05
KR20040052026A (en) 2004-06-19
CN1507274A (en) 2004-06-23
KR100462629B1 (en) 2004-12-23
AU2003266803A1 (en) 2004-07-01

Similar Documents

Publication Publication Date Title
US8199252B2 (en) Image-processing method and device
CN101189871B (en) Spatial and temporal de-interlacing with error criterion
US6636565B1 (en) Method for concealing error
US7893993B2 (en) Method for video deinterlacing and format conversion
US8498495B2 (en) Border region processing in images
US7345708B2 (en) Method and apparatus for video deinterlacing and format conversion
US7203234B1 (en) Method of directional filtering for post-processing compressed video
US9497468B2 (en) Blur measurement in a block-based compressed image
KR20050032893A (en) Image adaptive deinterlacing method based on edge
US20050243928A1 (en) Motion vector estimation employing line and column vectors
US6614485B2 (en) Deinterlacing apparatus
EP2175641B1 (en) Apparatus and method for low angle interpolation
KR20070116717A (en) Method and device for measuring mpeg noise strength of compressed digital image
KR20100103838A (en) Motion estimation with an adaptive search range
EP2378775A1 (en) Image decoding device and image coding device
US8023765B2 (en) Block noise removal device
US20130128979A1 (en) Video signal compression coding
AU2003266803B2 (en) Method of determining repeated pattern, frame interpolation method, and frame interpolation appartus
US7199819B2 (en) Device for automatically detecting picture degradation
KR100999371B1 (en) Apparatus and method for interference cancellation in boradband wireless access communication system
CN101309376B (en) Method and device for eliminating alternate line
EP1107609A1 (en) Method of processing motion vectors histograms to detect interleaved or progressive picture structures
JP2007501561A (en) Block artifact detection
CN102497524A (en) Edge adaptive de-interlacing interpolation method
CN105611214A (en) Method for de-interlacing through intra-field linear interpolation based on multidirectional detection

Legal Events

Date Code Title Description
FGA Letters patent sealed or granted (standard patent)
MK14 Patent ceased section 143(a) (annual fees not paid) or expired