AU2006252189B2 - Method and apparatus for determining quality appearance of weaved blocks of pixels - Google Patents

Method and apparatus for determining quality appearance of weaved blocks of pixels

Info

Publication number
AU2006252189B2
Authority
AU
Australia
Prior art keywords
pixels
block
weaved
blocks
candidate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
AU2006252189A
Other versions
AU2006252189A1 (en)
Inventor
Andrew James Dorrell
Nicholas James Seow
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Canon Inc
Original Assignee
Canon Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Canon Inc filed Critical Canon Inc
Priority to AU2006252189A priority Critical patent/AU2006252189B2/en
Publication of AU2006252189A1 publication Critical patent/AU2006252189A1/en
Application granted granted Critical
Publication of AU2006252189B2 publication Critical patent/AU2006252189B2/en
Ceased legal-status Critical Current
Anticipated expiration legal-status Critical


Description

S&F Ref: 790947

AUSTRALIA
PATENTS ACT 1990
COMPLETE SPECIFICATION FOR A STANDARD PATENT

Name and Address of Applicant: Canon Kabushiki Kaisha, of 30-2, Shimomaruko 3-chome, Ohta-ku, Tokyo, 146, Japan
Actual Inventor(s): Andrew James Dorrell, Nicholas James Seow
Address for Service: Spruson & Ferguson, St Martins Tower Level 35, 31 Market Street, Sydney NSW 2000 (CCN 3710000177)
Invention Title: Method and apparatus for determining quality appearance of weaved blocks of pixels

The following statement is a full description of this invention, including the best method of performing it known to me/us:

METHOD AND APPARATUS FOR DETERMINING QUALITY APPEARANCE OF WEAVED BLOCKS OF PIXELS

Field of the Invention

The current invention relates to the field of video processing and, in particular, to a method and apparatus for determining which of two weaved blocks of pixels has a higher quality appearance. The current invention also relates to a computer program product including a computer readable medium having recorded thereon a computer program for determining which of two weaved blocks of pixels has a higher quality appearance.

Background

Video data is acquired (and/or encoded) and transmitted in one of two different formats, referred to as "progressive" and "interlaced". Video data sequences captured on film or generated for computer display typically use the progressive format, whereas television is typically interlaced.

In progressive video, a sequence of frames of video data is displayed on a display screen at a specified frame-rate, measured in frames per second (fps). "Np" is a commonly used abbreviation for "N frames per second progressive video". For example, 24p means "24 frames per second progressive video". Similarly, "Ni" is a commonly used abbreviation for "N fields per second interlaced video", such as 60i for "60 fields per second interlaced video".

In "interlaced" video, odd and even rows of a frame of video data displayed on the display screen are updated separately. A frame of interlaced video typically includes two "fields". A first field consists of all of the odd rows of the frame of video data. The second field consists of all of the even rows of the frame of video data. Only one field is updated every new frame (or screen refresh) on an "interlaced display", alternating between even and odd. For television, it is also common that the odd and even rows are acquired at different times.

Commonly the two fields are referred to as "top" and "bottom". The top field is displayed so that the first row of pixel data of the top field is displayed in the top row of the display screen. The bottom field is displayed so that the last row of pixel data of the bottom field is displayed in the bottom row of the display screen. Displaying the top field and the bottom field in such a manner avoids any confusion about how the rows are actually numbered.

For digital encoding purposes, top and bottom field pairs are "packed" into a frame, meaning that the odd rows of the frame come from the bottom field and the even rows of the frame come from the top field. The term "dominance" is used to describe which of the fields contained in the frame is to be displayed first in time. Specifically, "top dominant" describes a field packing method where the top field is displayed before the bottom field in any given encoded frame. Similarly, "bottom dominant" describes a field packing method where the bottom field is displayed before the top field in any given encoded frame.
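The field packing described above can be illustrated directly. The following is a minimal sketch, assuming fields are held as NumPy arrays of equal shape; the function name weave and the array layout are illustrative assumptions rather than anything prescribed by the specification.

```python
import numpy as np

def weave(top_field: np.ndarray, bottom_field: np.ndarray) -> np.ndarray:
    """Pack a top/bottom field pair into a frame: even frame rows come
    from the top field, odd frame rows from the bottom field."""
    rows, cols = top_field.shape
    frame = np.empty((2 * rows, cols), dtype=top_field.dtype)
    frame[0::2] = top_field       # even rows from the top field
    frame[1::2] = bottom_field    # odd rows from the bottom field
    return frame
```

Unpacking a frame back into fields is the reverse slicing: frame[0::2] recovers the top field and frame[1::2] the bottom field.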
When film content, which is typically acquired at 24p, is converted for television to a standard format such as NTSC, which comprises sixty (60) fields per second, a conversion is required. This conversion, known as "telecine conversion" or "3:2 pull-down", involves scanning each film frame to produce a digital image and separating the odd and even scan lines for use as fields. To effect the conversion from twenty four (24) frames to sixty (60) fields it is necessary to repeat some fields.

Referring to Fig. 1, 24p film frames 101, 102 and 103 are converted into fields (e.g., 104, 105, 106) by extracting the odd and even rows from the original progressive frames 101, 102 and 103. To effect a frame rate conversion from twenty four (24) frames per second to sixty (60) fields per second, the fields (e.g., 104, 105, 106, 107 and 108) are drawn from successive film frames in a repeating 3:2 pattern known as a "pull-down pattern", as depicted in Fig. 1. As also shown in Fig. 1, three fields 104, 105 and 106 are drawn from a first film frame 101, two fields 107 and 108 from the next film frame 102, three fields 109, 110 and 111 from the next film frame 103, and so on. This leads to a pattern where the first field (e.g., 104) and the third field (e.g., 106) of each group of five (5) fields (e.g., 104, 105, 106, 107 and 108) are identical. Again, for encoding purposes the fields are packed into encoding frames (e.g., 112). Due to the repetition of some fields, some encoding frames (e.g., 113) contain data from multiple film frames (e.g., 101, 102). If the encoding frames (e.g., 112) are displayed directly on progressive display equipment then any moving objects will have a visible artefact referred to variously as combing, sawtooth or motion artefact.

The "visual quality" (or "quality appearance") of telecine converted video data can be improved during display (or playback) on progressive display equipment if the original film frames are reconstructed from the video data signal, in an "inverse telecine" process, and displayed. Referring again to Fig. 1, the fields (e.g., 104, 105 and 106) are extracted from the encoded frame (e.g., 117) and recombined to reproduce the original 24p frames 114, 115 and 116. The original 24p frames 114, 115 and 116 may then be displayed using a 3:2 repeat pattern. This method of recombining top and bottom fields is often referred to as "weaving". The inverse telecine processing results in a higher quality progressive display than vertically interpolating the fields, as the original vertical resolution is fully recovered. Inverse telecine processing also prevents the appearance of any wobble that results from the original film frames not being vertically antialiased prior to extraction of the fields.
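The 3:2 repetition just described can be sketched as follows, assuming each 24p frame is a NumPy array and that a field is taken as the even (top) or odd (bottom) rows of a frame. The helper names and the strict top/bottom alternation are illustrative assumptions consistent with Fig. 1.

```python
import numpy as np

def top_field(frame: np.ndarray) -> np.ndarray:
    return frame[0::2]    # even rows form the top field

def bottom_field(frame: np.ndarray) -> np.ndarray:
    return frame[1::2]    # odd rows form the bottom field

def pulldown_32(frames, top_first=True):
    """Convert 24p frames to a 60i field sequence by 3:2 pull-down.

    Three fields are drawn from the first frame, two from the next, and
    so on, with top/bottom parity strictly alternating, so that the first
    and third fields of each group of five are identical (cf. Fig. 1).
    """
    fields = []
    take_top = top_first
    for index, frame in enumerate(frames):
        for _ in range(3 if index % 2 == 0 else 2):
            fields.append(top_field(frame) if take_top else bottom_field(frame))
            take_top = not take_top
    return fields
```

Twenty four input frames yield twelve 3:2 groups, that is, sixty fields, as required.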
Inverse telecine processing may be performed based on metadata included with an encoded video data stream. For a variety of reasons, however, such metadata is often unreliable, and most high quality video playback equipment performs some analysis of the video fields in order to infer that the fields were generated by a telecine process. Such analysis is often referred to as "film content detection", "pull-down detection" or "24p detection". A common method of pull-down detection, especially for a 3:2 repeat pattern, is to look for regular repetition of one field in every five (5) fields.

The image processing challenges for inverse telecine processing are numerous and the prior art contains many examples of processors that perform this task. An important consideration in inverse telecine processing is to minimise latency and buffering. Typically this means that reconstructed frame data can only lag corresponding input pixel data by a single frame or less. In addition, a limited number of input fields (typically three (3) or four (4)) are typically available for determining whether inverse telecine processing should be applied (based on a film content detection). If film content detection is only based on the detection of a repeated field then the detection method may require many frames of output to achieve a positive detection result. For example, delays of up to fifteen (15) frames can occur in prior art methods that use repeated field detection. All prior art methods of film detection require at least five (5) frames to be output before the 3:2 pull-down pattern can be detected and original film frames recovered. During this time, display systems using such methods often output reduced quality frames.

Less latency in film detection is possible if a pattern in the presence or absence of motion between fields in the input video data sequence can be detected. Ideally there should be no motion between fields that are derived from the same film frame. In practice, however, this is difficult to detect reliably due to naturally varying degrees of contrast and motion present in the video frame sequence, as well as the different sampling offsets used in the top and bottom fields.

For example, a common measure of motion is the sum of absolute differences in pixel intensity. Large differences in such a motion measure may result from motion and/or complex scene structure. Scenes containing low contrast also present a problem. Setting a detection threshold high may prevent some scene detail being confused as motion; setting the threshold too high, however, will result in poor detection of real scene motion during low motion or low contrast scenes. If a motion detection result is required to achieve and maintain efficient film detection then large latencies may result, as detectable motion must be present in a long enough sequence of input fields for a pattern to be identified. If motion detection is intermittent, a full pattern detection state may be difficult to achieve. Alternatively, if pattern detection is periodically lost then the resulting differences in display quality are noticeable and degrade the perceived quality of the display (or playback).
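As a concrete illustration of the difficulty described above, the sum-of-absolute-differences (SAD) measure can be sketched as follows. The threshold argument is a deliberate placeholder: as the passage notes, no single fixed value cleanly separates real motion from complex scene structure across low-contrast and highly detailed scenes.

```python
import numpy as np

def sad(field_a: np.ndarray, field_b: np.ndarray) -> int:
    """Sum of absolute differences in pixel intensity between two fields."""
    return int(np.abs(field_a.astype(np.int64) - field_b.astype(np.int64)).sum())

def has_motion(field_a: np.ndarray, field_b: np.ndarray, threshold: int) -> bool:
    # A large SAD may reflect true motion or merely complex scene structure;
    # a small SAD may hide true motion in a low-contrast scene.
    return sad(field_a, field_b) > threshold
```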
One known prior art motion detection method retains a history of motion levels detected over time and adapts the detection threshold used in accord with the motion level present in prior frames. A problem with such a method is that prior motion may not be a good indicator of the motion threshold for the current frame. This would be the case if the level of motion changes suddenly or if there is a scene change. Prior motion also may not be a good indicator of the motion threshold for a current film frame if the contrast levels of an area undergoing motion change. The performance of history-based methods of motion detection is reduced if there are sudden changes in scene structure due to lighting changes. Finally, the dependence on historical data for setting detection thresholds means that there is a delay after a scene change before an optimal threshold is achieved and reliable film pattern detection can be achieved.

Summary

It is an object of the present invention to substantially overcome, or at least ameliorate, one or more disadvantages of existing arrangements.

According to one aspect of the present invention there is provided a method of determining which of two weaved blocks of pixels has a higher quality appearance, said method comprising the steps of:

a) generating said two weaved blocks of pixels from at least three input fields of video data;

b) generating an interpolated block of pixels from at least one of said input fields; and

c) comparing local analysis of pixels from said weaved blocks of pixels with local analysis of spatially corresponding pixels in the interpolated block of pixels to determine which of said weaved blocks of pixels has a higher quality appearance.

According to another aspect of the present invention there is provided an apparatus for determining which of two weaved blocks of pixels has a higher quality appearance, said apparatus comprising:

first generating means for generating said two weaved blocks of pixels from at least three input fields of video data;

second generating means for generating an interpolated block of pixels from at least one of said input fields; and

comparing means for comparing local analysis of pixels from said weaved blocks of pixels with local analysis of spatially corresponding pixels in the interpolated block of pixels to determine which of said weaved blocks of pixels has a higher quality appearance.

According to still another aspect of the present invention there is provided a computer readable medium, having a program recorded thereon, where the program is configured to make a computer execute a procedure to determine which of two weaved blocks of pixels has a higher quality appearance, said program comprising:

code for generating said two weaved blocks of pixels from at least three input fields of video data;

code for generating an interpolated block of pixels from at least one of said input fields; and

code for comparing local analysis of pixels from said weaved blocks of pixels with local analysis of spatially corresponding pixels in the interpolated block of pixels to determine which of said weaved blocks of pixels has a higher quality appearance.

Other aspects of the invention are also disclosed.

Brief Description of the Drawings

Some aspects of the prior art and one or more embodiments of the present invention will now be described with reference to the drawings and appendices, in which:

Fig. 1 shows the relationship between 24 fps film frames, 3:2 pull-down fields and reconstructed original film frames;

Fig. 2 is a schematic block diagram of a general purpose computer upon which arrangements described can be practiced;

Fig. 3 shows a software architecture suitable for implementing the described methods according to one embodiment;
Fig. 4 is a flow diagram showing a method of generating a per-pixel output frame;

Fig. 5 is a flow diagram showing a method of selecting a candidate output pixel;

Figs. 6A, 6B and 6C show spatiotemporal relationships of pixels used to construct weave candidate blocks of pixels and to construct an interpolated candidate block of pixels;

Fig. 7 is a flow diagram showing a method of determining a block-artefact score for a candidate 3x5 block of pixels;

Fig. 8 is a flow diagram showing a method of determining a sub-block artefact score;

Fig. 9 is a block diagram representing determination of the sub-block artefact scores of Fig. 8 according to one embodiment;

Fig. 10 is a flow diagram showing a method of determining a preferred candidate;

Fig. 11 is a flow diagram showing a method of updating a pattern detecting finite state machine depending on a preferred candidate;

Fig. 12 shows the states of the pattern detecting finite state machine (FSM) of Fig. 11; and

Fig. 13 is a flow diagram showing a method of detecting a 3:2 pull-down pattern in video data according to one embodiment.

Detailed Description including Best Mode

Where reference is made in any one or more of the accompanying drawings to steps and/or features which have the same reference numerals, those steps and/or features have for the purposes of this description the same function(s) or operation(s), unless the contrary intention appears.

It is to be noted that the discussions contained in the "Background" section and that above relating to prior art arrangements relate to discussions of documents or devices which form public knowledge through their respective publication and/or use. Such discussions should not be interpreted as a representation by the present inventor(s) or patent applicant that such documents or devices in any way form part of the common general knowledge in the art.

Methods of detecting a 3:2 pull-down (or film) pattern in video data will be described below with reference to Figs. 2 to 12. The described methods use a staged pattern detecting finite state machine (FSM). The described methods provide reduced detection latency and improved detection reliability in the presence of low motion, complex scene structure and/or low contrast content. The described methods also provide improved response time.

The methods described herein may be implemented using a computer system 200, such as that shown in Fig. 2, wherein the processes of Figs. 1 and 2 to 12 may be implemented as software, such as one or more application programs executable within the computer system 200. In particular, the steps of the described methods are effected by instructions in the software that are carried out within the computer system 200. The instructions may be formed as one or more code modules, each for performing one or more particular tasks. The software may also be divided into two separate parts, in which a first part and the corresponding code modules perform the described methods, and a second part and the corresponding code modules manage a user interface between the first part and the user. The software may be stored in a computer readable medium, including the storage devices described below, for example. The software is loaded into the computer system 200 from the computer readable medium, and then executed by the computer system 200. A computer readable medium having such software or computer program recorded on it is a computer program product.
The use of the computer program product in the computer system 200 preferably effects an advantageous apparatus for implementing the described methods.

As seen in Fig. 2, the computer system 200 is formed by a computer module 201, input devices such as a keyboard 202 and a mouse pointer device 203, and output devices including a printer 215, a display device 214 and loudspeakers 217. An external Modulator-Demodulator (Modem) transceiver device 216 may be used by the computer module 201 for communicating to and from a communications network 220 via a connection 221. The network 220 may be a wide-area network (WAN), such as the Internet or a private WAN. Where the connection 221 is a telephone line, the modem 216 may be a traditional "dial-up" modem. Alternatively, where the connection 221 is a high capacity (e.g., cable) connection, the modem 216 may be a broadband modem. A wireless modem may also be used for wireless connection to the network 220.

The computer module 201 typically includes at least one processor unit 205, and a memory unit 206, for example formed from semiconductor random access memory (RAM) and read only memory (ROM). The module 201 also includes a number of input/output (I/O) interfaces including an audio-video interface 207 that couples to the video display 214 and loudspeakers 217, an I/O interface 213 for the keyboard 202 and mouse 203 and optionally a joystick (not illustrated), and an interface 208 for the external modem 216 and printer 215. In some implementations, the modem 216 may be incorporated within the computer module 201, for example within the interface 208. The computer module 201 also has a local network interface 211 which, via a connection 223, permits coupling of the computer system 200 to a local computer network 222, known as a Local Area Network (LAN). As also illustrated, the local network 222 may also couple to the wide network 220 via a connection 224, which would typically include a so-called "firewall" device or similar functionality. The interface 211 may be formed by an Ethernet™ circuit card, a wireless Bluetooth™ or an IEEE 802.11 wireless arrangement.

The interfaces 208 and 213 may afford both serial and parallel connectivity, the former typically being implemented according to the Universal Serial Bus (USB) standards and having corresponding USB connectors (not illustrated). Storage devices 209 are provided and typically include a hard disk drive (HDD) 210. Other devices such as a floppy disk drive and a magnetic tape drive (not illustrated) may also be used. An optical disk drive 212 is typically provided to act as a non-volatile source of data. Portable memory devices, such as optical disks (e.g., CD-ROM, DVD), USB-RAM and floppy disks, for example, may then be used as appropriate sources of data to the system 200.

The components 205 to 213 of the computer module 201 typically communicate via an interconnected bus 204 and in a manner which results in a conventional mode of operation of the computer system 200 known to those in the relevant art. Examples of computers on which the described arrangements can be practised include IBM-PCs and compatibles, Sun Sparcstations, Apple Mac™ or alike computer systems evolved therefrom.

Typically, the application programs discussed above are resident on the hard disk drive 210 and read and controlled in execution by the processor 205.
Intermediate storage of such programs and any data fetched from the networks 220 and 222 may be accomplished using the semiconductor memory 206, possibly in concert with the hard disk drive 210. In some instances, the application programs may be supplied to the user encoded on one or more CD-ROMs and read via the corresponding drive 212, or alternatively may be read by the user from the networks 220 or 222. Still further, the software can also be loaded into the computer system 200 from other computer readable media. Computer readable media refers to any storage medium that participates in providing instructions and/or data to the computer system 200 for execution and/or processing. Examples of such media include floppy disks, magnetic tape, CD-ROM, a hard disk drive, a ROM or integrated circuit, a magneto-optical disk, or a computer readable card such as a PCMCIA card and the like, whether or not such devices are internal or external to the computer module 201. Examples of computer readable transmission media that may also participate in the provision of instructions and/or data include radio or infra-red transmission channels as well as a network connection to another computer or networked device, and the Internet or Intranets, including e-mail transmissions and information recorded on Websites and the like.

The second part of the application programs and the corresponding code modules mentioned above may be executed to implement one or more graphical user interfaces (GUIs) to be rendered or otherwise represented upon the display 214. Through manipulation of the keyboard 202 and the mouse 203, a user of the computer system 200 and the application may manipulate the interface to provide controlling commands and/or input to the applications associated with the GUI(s).

The described methods may alternatively be implemented in dedicated hardware such as one or more integrated circuits performing the functions or sub-functions of the methods. Such dedicated hardware may include graphic processors, digital signal processors, or one or more microprocessors and associated memories.

A software architecture 300 suitable for implementing the described methods according to one exemplary embodiment is shown in Fig. 3. As seen in Fig. 3, video fields (e.g., 310) are input to a First-In-First-Out (FIFO) field buffer 315, for example, configured within memory 206. The FIFO field buffer 315 stores the most recent three consecutive fields (e.g., 310) of the video data at any point in time. The fields are input to three independent processing paths: 320, 330 and 340. Each path 320, 330 and 340 generates, for each output pixel location, a "candidate block of pixels", according to a predetermined rule, and a score that indicates the degree of interlace artefact present in the associated generated candidate block of pixels. The score indicating the degree of interlace artefact will be referred to below as a "block-artefact score". The center pixel of each candidate block of pixels is referred to as the "candidate output pixel".

The first two paths, 320 and 330, generate candidate blocks of pixels which are weave reconstructions (or weaved blocks of pixels) of the input fields (e.g., 310). The path 320 weaves the first two fields 316 and 317 in the input FIFO field buffer 315, and the second path 330 weaves the second two fields 317 and 318 in the input FIFO field buffer 315.
Collectively, the candidate blocks of pixels output by the paths 320 and 330 may be referred to as "weave candidate blocks of pixels". The third path 340 generates a candidate block of pixels using a spatiotemporal method. The candidate block of pixels output by the third processing path 340 is referred to as an "interpolated candidate block of pixels". The set of all candidate output pixels from one of the paths 320, 330 or 340, for one set of three (3) input fields, is referred to as a "candidate" or "candidate frame". The candidates associated with paths 320 and 330 are referred to as "weave candidates". The candidate associated with the path 340 is referred to as the "interpolated candidate".

The three candidate output pixels, along with the block-artefact scores corresponding to their candidate blocks of pixels, are input to an analysis unit 380 which uses data in a stored state 385 to determine a "preferred candidate output pixel" and update the stored state 385. The analysis unit 380 outputs a reconstructed frame 395 containing, at each pixel location, the candidate output pixel from the processing path 320, 330 or 340 having the best block-artefact score. That is, the reconstructed frame 395 contains, at each pixel location, the preferred candidate output pixel. The reconstructed frame 395 will be hereinafter referred to as the "per-pixel output frame".

The per-pixel output frame 395 may be used directly as output. Alternatively, in one embodiment, the per-pixel output frame 395 can be buffered in memory 206 for conditional use pending the state of a pattern detecting finite state machine (FSM), the state of which is updated with each per-pixel output frame 395 generated. The analysis unit 380 includes such a pattern detecting finite state machine, which uses the pattern of "preferred candidate" selections across whole per-pixel output frames to identify when the input fields (e.g., 310) were derived from progressive (film) content via a pull-down process such as a 3:2 pull-down pattern process.

As will be described in detail below, the analysis unit 380 uses the relative frame-artefact scores for the weave candidates and the interpolated candidate to determine a "feature measurement" in the form of a preferred weave candidate, based on whether each of the weave candidates is considered "good" or "bad". This results in a more robust assessment of the weave candidates without the need to maintain historical threshold data. The analysis unit 380 is able to use this more robust knowledge to more confidently predict the position in a 3:2 pull-down (or film) pattern even before the complete 3:2 pull-down pattern has been detected. Even when a 3:2 pull-down pattern has not been fully detected, any predicted position in a 3:2 pull-down pattern that can be generated is used by the analysis unit 380 to assist in deciding the preferred candidate in cases where the frame-artefact scores do not provide a clear choice. This results in a high level of detection sensitivity that provides reliable film content detection (i.e., reliable detection of a 3:2 pull-down pattern) even in the presence of low motion and/or low contrast.
A method 400 of generating a per-pixel output frame will now be described with reference to Fig. 4. The method 400 may be implemented as software, in accordance with the architecture 300 described above, resident on the hard disk drive 210 and being controlled in its execution by the processor 205.

The method 400 begins at step 410, where the processor 205 updates the FIFO field buffer 315 to contain the most recent three (consecutive) fields (e.g., 310) of video data. At the next step 420, the pattern detecting finite state machine is updated with information from the processing of a previous video frame, before the frame-artefact scores are initialised at step 425. A method 1100 of updating the pattern detecting finite state machine, as executed at step 420, will be described in detail below with reference to Figs. 10 and 11.

The three input fields in the FIFO field buffer 315 are then processed by the processing paths 320, 330 and 340 and the analysis unit 380 to generate a per-pixel output frame and an updated frame-artefact score for each of the candidates at step 430. The candidate output pixels corresponding to each of the candidate blocks of pixels and the corresponding block-artefact scores are input to the analysis unit 380.

At the next step 435, if, after the analysis is completed by the analysis unit 380, the processor 205 determines that both the weave candidates were of "bad" appearance, then in step 440 the per-pixel output frame is selected. Otherwise, at step 450, the processor 205 selects the weave candidate block of pixels with the lowest overall frame-artefact score (i.e., as determined at step 430). The method 400 concludes at the next step 460, where the processor 205 writes the output frame to memory 206 or to the display 214.
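The frame-level decision of steps 435 to 450 can be sketched as follows, assuming the per-pixel pass of step 430 has already produced the per-pixel output frame, whole-frame weaves of the two field pairs, and an accumulated frame-artefact score for each weave candidate; all names are illustrative.

```python
def select_output_frame(per_pixel_frame, weave1_frame, weave2_frame,
                        frame_score1, frame_score2, both_weaves_bad):
    """Frame-level selection of method 400 (steps 435-450), sketched."""
    if both_weaves_bad:
        # Step 440: neither weave looked clean, so fall back to the frame
        # assembled from the per-pixel preferred candidate output pixels.
        return per_pixel_frame
    # Step 450: otherwise output the weave candidate with the lowest
    # overall frame-artefact score.
    return weave1_frame if frame_score1 <= frame_score2 else weave2_frame
```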
A method 500 of selecting a candidate output pixel will now be described with reference to Fig. 5. The method 500 may be implemented as software, in accordance with the architecture 300 described above, resident on the hard disk drive 210 and being controlled in its execution by the processor 205.

As described in detail below, in the method 500 the processor 205 performs the steps of generating two weave candidate blocks of pixels (or two weaved blocks of pixels) from at least three input fields of video data and generating an interpolated block of pixels from at least one of the input fields. The method 500 begins at the first step 510, where a first 3x5 weave candidate block of pixels (or first weaved block of pixels) centered about a current output pixel location is generated as a weave of the first two fields (e.g., 316 and 317) in the FIFO field buffer 315. Accordingly, at step 510, the processor 205 performs the step of weaving a first pair of the fields in the FIFO field buffer 315 to determine a first weave candidate block of pixels (or a first one of the weaved blocks of pixels). If the first weave candidate block of pixels centered about the current output pixel location overlaps a boundary of the first two input fields, then the first two fields are extended by repeating boundary samples in order to complete the first weave candidate block of pixels.

At the next step 520, the processor 205 generates a second 3x5 weave candidate block of pixels (or weaved block of pixels) centered about the current output pixel location as a weave of the second two fields (e.g., 317, 318) in the FIFO field buffer 315. Accordingly, at step 520, the processor 205 performs the step of weaving a second pair of the fields in the FIFO field buffer 315 to determine a second one of the two weave candidate blocks of pixels (or a second one of the two weaved blocks of pixels). If the second weave candidate block of pixels centered about the current output pixel location overlaps the boundary of the second two input fields, the input is extended by repeating boundary samples in order to complete the second weave candidate block of pixels.

At the next step 525, the processor 205 generates a 3x5 pixel interpolated candidate block of pixels using at least one of the input fields (e.g., 310). Accordingly, at step 525, the processor 205 performs the step of interpolating one or more of the fields to determine an interpolated candidate block of pixels. The interpolated candidate block of pixels is centered about the current output pixel location and is generated using a "temporal median" method. If the interpolated candidate block of pixels centered about the current output pixel location overlaps the boundary of the input field, the input field is extended by repeating boundary samples in order to complete the interpolated candidate block of pixels. The generation of the candidate blocks of pixels is described in further detail below, and a sketch of the boundary handling follows.
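One way to realise the block extraction with boundary-sample repetition is sketched below, reusing the weave() helper sketched earlier; clamping the row and column indices has the same effect as extending the input by repeating boundary samples. The orientation (five rows by three columns) and the parameter names are assumptions.

```python
import numpy as np

def candidate_block(frame: np.ndarray, x: int, y: int) -> np.ndarray:
    """Extract the 3x5 block (5 rows by 3 columns) centred on (x, y),
    repeating boundary samples where the block overlaps the frame edge."""
    height, width = frame.shape
    rows = np.clip(np.arange(y - 2, y + 3), 0, height - 1)
    cols = np.clip(np.arange(x - 1, x + 2), 0, width - 1)
    return frame[np.ix_(rows, cols)]

def weave_candidate_block(field_a, field_b, x, y, a_is_top=True):
    """Steps 510/520, sketched: weave a field pair, then take the block."""
    frame = weave(field_a, field_b) if a_is_top else weave(field_b, field_a)
    return candidate_block(frame, x, y)
```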
Subsequently, at step 530, the candidate blocks of pixels are analyzed and a "block-artefact score" is assigned to each of the weave candidate blocks of pixels. Accordingly, at step 530, the processor 205 performs the step of assigning a score to each of the first weaved block of pixels and the second weaved block of pixels. As described above, the block-artefact score is a measure of the degree of interlace artefact found to be present in the block. Alternatively, the block-artefact score may be considered a measure of the "quality appearance" of the block. That is, the lower the block-artefact score assigned to one of the candidate blocks of pixels, the higher the quality appearance of that candidate block of pixels.

Then at step 532, the frame-artefact score for each weave candidate is incremented if the block-artefact score for the candidate block of pixels exceeds a predetermined threshold. The frame-artefact score may also be referred to below as a "bad-block-count". In the exemplary embodiment, the predetermined threshold is set to fifty (50) for 8-bit video data but is dependent on the measure used. The measure used in the exemplary embodiment is described in detail below.

The method 500 continues at the next step 535, where if a control flag has been set to force a "film prediction mode" and the pattern detecting finite state machine is in a "locked" or "committed" state (as will be described in detail below), then the method 500 proceeds to step 540. At step 540, one of the weave candidate blocks of pixels is selected. The selected weave candidate block of pixels is the weave candidate block of pixels that is predicted by a current position in the pull-down pattern, as will be described in detail below.

If the pattern detecting finite state machine is not in a "locked" or "committed" state, or the control flag to force a film prediction mode has not been set at step 535, then the method 500 proceeds to step 550. At step 550, the processor 205 selects the candidate block of output pixels with the lowest block-artefact score. Accordingly, the processor 205 performs the step of comparing local analysis of pixels from the weaved blocks of pixels (i.e., as represented by the block-artefact score assigned to each of the weaved blocks of pixels) with local analysis of spatially corresponding pixels in the interpolated block of pixels (i.e., as represented by the block-artefact score assigned to the interpolated block of pixels) to determine which of the blocks of pixels has a lower block-artefact score. The block of pixels with the lowest block-artefact score has a higher "quality appearance". The method 500 concludes at the next step 560, where the processor 205 outputs the value of the center pixel of the selected candidate block of pixels (i.e., the preferred candidate output pixel).
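Steps 530 to 560 reduce to the following sketch, given a scoring function for a candidate block (the measure of method 700, developed below). The bad-block-count bookkeeping of step 532 and the centre-pixel indexing for a five-row by three-column block are assumptions consistent with the description.

```python
BAD_BLOCK_THRESHOLD = 50   # exemplary threshold for 8-bit video data

def select_output_pixel(weave1, weave2, interpolated,
                        bad_block_counts, block_artefact_score):
    """Steps 530-560, sketched: score the three candidate blocks, update
    the per-weave bad-block-counts, and output the best centre pixel."""
    blocks = (weave1, weave2, interpolated)
    scores = [block_artefact_score(b) for b in blocks]
    for i in (0, 1):   # step 532: only the weave candidates accumulate
        if scores[i] > BAD_BLOCK_THRESHOLD:
            bad_block_counts[i] += 1
    best = min(range(3), key=scores.__getitem__)   # step 550
    return blocks[best][2, 1]   # step 560: centre pixel of a 5x3 block
```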
In alternative embodiments, at step 550, instead of selecting the candidate block of pixels with the lowest block-artefact score, the block-artefact scores corresponding to each of the candidate blocks of pixels may be compared with each other to determine a preferred candidate block of pixels. In this case, if the weave candidate block of pixels predicted by a current "locked" or "committed" state is not significantly worse than the reference interpolated candidate block of pixels, then that weave candidate block of pixels may be selected. This results in reduced alternation between different candidate blocks of pixels.

The organization of the candidate blocks of pixels and the generation of the weave candidate blocks of pixels is now described in detail with reference to Figs. 6A and 6B. By convention, the first field (e.g., 316) in the FIFO field buffer 315 is referred to as the "previous input field", the second field (e.g., 317) is referred to as the "current input field" and the third field is referred to as the "next input field". This naming convention is used as it is informative about the temporal order of the fields. If the current location in the per-pixel output frame is on a row that is a member of a top field and the current input field is a top field, then the configuration of Fig. 6A applies. The configuration of Fig. 6A also applies if the location in the per-pixel output frame is on a row that is a member of a bottom field and the current input field is a bottom field. Otherwise, the configuration of Fig. 6B applies.

Referring to Fig. 6A, an area covered by the weave candidate block of pixels in the per-pixel output frame is projected onto the previous input field as represented at 610 of Fig. 6A, onto the current input field as 620 and onto the next input field as 630. The area 610 in the previous input field includes two rows of pixel data 640 which are present in the previous input field as well as three rows of pixel data 641 which are missing from the previous input field. Similarly, an area 620 in the current input field has present pixel data rows 642 and missing pixel rows 643, and an area 630 in the next input field has present data rows 644 and missing data rows 645.

In order to generate the first weave candidate block of pixels (or a first weaved block of pixels), as at step 510 of the method 500, it is necessary to determine which data configuration is applicable to the current output location (i.e., Fig. 6A or Fig. 6B). If the configuration of Fig. 6A is applicable, the rows 640 that are present in the previous input field and the rows 642 that are present in the current input field are copied to a candidate block of pixels configured within memory 206. That is, the first weave candidate block of pixels 691 comprises the pixel data 640 and 642 and the candidate output pixel is 694. If on the other hand the output location corresponds to the configuration of Fig. 6B, the first weave candidate block of pixels 691 is likewise generated by copying the rows 640 that are present in the previous input field and the rows 642 that are present in the current input field to the candidate block of pixels 691 configured within memory 206. That is, the first weave candidate block of pixels 691 comprises the pixel data 640 and 642 and the candidate output pixel is 694.
Similarly, in order to generate the second weave candidate block of pixels, as at step 520 of the method 500, it is necessary to first determine which data configuration is applicable to the current output location (i.e., Fig. 6A or Fig. 6B). If the output location corresponds to the configuration of Fig. 6A, the second weave candidate block of pixels is generated by copying the rows of pixels 642 that are present in the current input field and the rows of pixels 644 that are present in the next input field to a candidate block of pixels 692 configured within memory 206. That is, the second weave candidate block of pixels 692 comprises the pixel data 642 and 644 and the candidate output pixel is 693.
If on the other hand the output location corresponds to the configuration of Fig. 6B, the second weave candidate block of pixels 692 is generated by copying the rows of pixels 642 that are present in the current input field and the rows of pixels 644 that are present in the next input field to the candidate block of pixels 692 configured within memory 206. That is, the second weave candidate block of pixels 692 comprises the pixel data 642 and 644 and the candidate output pixel is 693.

The generation of the interpolated block of pixels, as at step 525 of the method 500, will now be described in detail with reference to Fig. 6C. An area covered by the candidate block of pixels in the per-pixel output frame is projected onto the previous input field as 610, onto the current input field as 620 and onto the next input field as 630. For the interpolated block of pixels, pixel data for the rows 643 that are not present in the current input field are generated. In the exemplary embodiment, as seen in Fig. 6C, the immediate spatial and temporal neighbors 650 of a missing sample 645 are used to determine that sample 645. Labeling the missing sample c, its immediate spatial neighbors c1 and c2, and its temporal neighbors p and n, the missing sample c may be determined as follows:

c = median(p, (c1 + c2) / 2, n)

Other methods may also be used to determine the missing sample. In an alternative embodiment, a missing sample c is determined as follows:

c = (p + c1 + c2 + n) / 4

Typically, the interpolated candidate block of pixels will have less detail and high frequency information, and will appear more blurred, but the interpolated candidate block of pixels is adequate as an indicator of how a "normal" block-artefact score should be for the per-pixel output frame.
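Both interpolation rules can be written directly from the formulas above. This sketch assumes the four neighbours are supplied as NumPy arrays (or scalars); the function names are illustrative.

```python
import numpy as np

def temporal_median(p, c1, c2, n):
    """Exemplary embodiment: c = median(p, (c1 + c2) / 2, n)."""
    return np.median(np.stack([p, (c1 + c2) / 2.0, n]), axis=0)

def temporal_average(p, c1, c2, n):
    """Alternative embodiment: c = (p + c1 + c2 + n) / 4."""
    return (p + c1 + c2 + n) / 4.0
```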
A further advantage is that the interpolated candidate block of pixels provides a reference which can be used locally with the individual block measures from spatially corresponding positions in each candidate block of pixels, with weave candidate block of pixels output decisions made on a per-pixel basis. This is useful when processing video with mixed content, where parts of a screen have different sources (e.g., 3:2 pull-down video with native progressive text overlays added).

A method 700 of determining a block-artefact score for a candidate 3x5 block of pixels will now be described with reference to Fig. 7. The method 700 may be implemented as software resident on the hard disk drive 210 and being controlled in its execution by the processor 205. The method 700 measures the degree of interlace artefact within the candidate 3x5 block of pixels.

The method 700 begins at a first step 710, where an input candidate block of pixels is divided into three (3) vertically adjacent and overlapped 3x3 sub-blocks containing rows 1-3, 2-4 and 3-5 respectively of the candidate block of pixels. For each sub-block, n ∈ (1..3), a separate "sub-block artefact score" Sn is determined by the processor 205 at step 720. A method 800 of determining a sub-block artefact score Sn, as executed at step 720, will be described in detail below with reference to Fig. 8.

The sub-block scores Sn are refined at step 730 by applying a vertical minimum filter to the three (vertically adjacent) sub-block scores. At step 730, the processor 205 generates two (2) filtered scores, SF1 and SF2, according to equations (1) and (2) below:

SF1 = min(S1, S2)    (1)
SF2 = min(S2, S3)    (2)

Subsequently, at step 740, the processor 205 determines a block-artefact score A for the candidate 3x5 block of pixels by applying a vertical maximum filter to the two filtered scores according to equation (3) as follows:

A = max(SF1, SF2)    (3)

The method 700 concludes at the next step 750, where the block-artefact score A for the candidate 3x5 block of pixels is written to memory 206.

The method 800 of determining a sub-block artefact score Sn will now be described with reference to Fig. 8. The method 800 may be implemented as software resident on the hard disk drive 210 and being controlled in its execution by the processor 205.

The method 800 begins at step 810, where the processor 205 determines a vector of three (3) row differences [D1, D2, D3] from the pixel values p(x,y), for x ∈ (1..3) and y ∈ (1..3), of the input sub-block according to equations (4), (5) and (6):

D1 = max(|p(1,1) - p(1,2)|, |p(2,1) - p(2,2)|, |p(3,1) - p(3,2)|)    (4)

D2 = max(|p(1,3) - p(1,2)|, |p(2,3) - p(2,2)|, |p(3,3) - p(3,2)|)    (5)

D3 = max(|p(1,1) - p(1,3)|, |p(2,1) - p(2,3)|, |p(3,1) - p(3,3)|)    (6)

Subsequently, at step 820, the processor 205 determines the artefact score for the sub-block according to equation (7) as follows:

Sn = D1 + D2 - D3    (7)

The above method 700 has been described in terms of a single block operation applied to a three (3) pixel by five (5) pixel block of pixels. This results in a number of computations being performed more than once as the processing location progresses through the data in a raster scan. In another advantageous embodiment, if an appropriate memory is available, the artefact scores may be determined using a series of smaller filters and caching the results of intermediate computations. A block diagram for such an embodiment is depicted in Fig. 9. In Fig. 9, the input to block 910 is a 3x3 candidate block of pixels. Block 910 performs the sub-block artefact score determinations described in equations (4)-(7) above to generate samples of the artefact score S. Subsequently, the block 920 operates on 1x2 sample blocks of pixels output from the block 910 to generate samples of the filtered scores SF. Subsequently, the block 930 operates on 1x2 sample blocks of pixels output from the block 920 to generate samples of the block-artefact score A. More generally, a combination 940 of blocks 920 and 930 can be seen to be a morphological cleaning process. The purpose of the morphological process is to ensure that high values from the artefact measurement stage correspond to at least two (2) vertically adjacent high values of the sub-block measure of equation (7). In alternative embodiments that employ the filter architecture of Fig. 9, more sophisticated morphological cleaning may be practical.
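Equations (1) to (7) combine into the following sketch of the block-artefact score, assuming the candidate block is a NumPy array with five rows and three columns, so that p(x, y) indexes column x of row y.

```python
import numpy as np

def sub_block_score(sub_block: np.ndarray) -> int:
    """Equations (4)-(7) for one 3x3 sub-block."""
    d1 = int(np.abs(sub_block[0] - sub_block[1]).max())   # rows 1 vs 2, eq. (4)
    d2 = int(np.abs(sub_block[2] - sub_block[1]).max())   # rows 3 vs 2, eq. (5)
    d3 = int(np.abs(sub_block[0] - sub_block[2]).max())   # rows 1 vs 3, eq. (6)
    return d1 + d2 - d3                                   # eq. (7)

def block_artefact_score(block: np.ndarray) -> int:
    """Method 700 for a 5-row by 3-column candidate block."""
    block = block.astype(np.int32)   # avoid unsigned wrap-around
    s = [sub_block_score(block[i:i + 3]) for i in range(3)]  # rows 1-3, 2-4, 3-5
    sf1 = min(s[0], s[1])            # eq. (1)
    sf2 = min(s[1], s[2])            # eq. (2)
    return max(sf1, sf2)             # eq. (3)
```

A combed (interlaced) block scores highly because adjacent rows differ strongly (large D1 and D2) while alternate rows remain similar (small D3).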
A method 1100 of updating the pattern detecting finite state - 25 machine, as executed at step 420, will be described in detail below with reference to Fig. 11. The method 1000 may be implemented as software resident on the hard disk drive 210 and being controlled in its execution by the processor 205. The method 1000 begins at a 5 first step 1010, where the processor 205 counts how many of the frame-artefact-scores associated with the candidates are more than a predetermined threshold bigger than a smallest artefact score. The smallest artefact score defines a "normal" level. determining which candidates have "abnormally" high frame-artefact-scores. If only one of the frame-artefact-scores is high at step 1020, then the method 1000 10 assumes that only one of the weave candidates was bad, and the method 1000 proceeds to step 1030. Then at step 1030, the processor 205 selects the weave candidate with the lowest frame-artefact-score as the preferred candidate. If two frame-artefact-scores are determined to be high at step 1020, then the method 1000 assumes that both weave candidates were bad and the method 1000 proceeds to step 15 1040. At step 1040, the processor 205 selects the interpolated candidate as the preferred candidate. Step 1040 should not occur during video produced by 3:2 pull-down but may occur at edit points or in native interlaced video. If none of the frame-artefact-scores are determined to be high at step 1020, then the method 1000 proceeds to step 1050. None of the frame-artefact-scores may be high if both weave candidates in the 3:2 pull-down 20 pattern are good. Alternatively, there may be low motion or low contrast, for example, which reduces the severity of the artefacts making it harder to detect the artefacts. Either way, at step 1050, none of the candidates are selected as the preferred candidate at step 1050. After the above described analysis of the frame-artefact scores in accordance with the method 1000, the pattern detecting finite state machine may be updated in accordance - 26 with the method 1100 which will now be described with reference to Fig. 11. The method 1100 may be implemented as software resident on the hard disk drive 210 and being controlled in its execution by the processor 205. The method 1100 begins at step 1110, where if the processor 205 determines that both weave candidates are bad (i.e the 5 interpolated candidate was the preferred candidate), then there is not a 3:2 pull-down pattern and the method 1100 proceeds to step 1130. At step 1140, there is not a 3:2 pull down pattern and the pattern detecting finite state machine is reset to its initial state as step 1140. If one of the weave candidates was bad and the other good, then the method 1100 10 proceeds to step 1150. At step 1150, the processor 205 updates the pattern detecting finite state machine with a "1" if the first weave candidate was the preferred candidate or a "2" if the second weave candidate was the preferred candidate. If none of the candidate blocks of pixels were considered to be clearly preferred, then the method 1100 proceeds to step 1120. At step 1120, if the pattern detecting finite state 15 machine is in a "committed" or "locked" state then the method 1100 proceeds to step 1130. Otherwise, if the pattern detecting finite state machine is in neither a locked nor committed state, the method 1100 proceeds to step 1140. 
After the above described analysis of the frame-artefact scores in accordance with the method 1000, the pattern detecting finite state machine may be updated in accordance with the method 1100, which will now be described with reference to Fig. 11. The method 1100 may be implemented as software resident on the hard disk drive 210 and being controlled in its execution by the processor 205. The method 1100 begins at step 1110, where if the processor 205 determines that both weave candidates were bad (i.e., the interpolated candidate was the preferred candidate), then there is not a 3:2 pull-down pattern and the method 1100 proceeds to step 1140, where the pattern detecting finite state machine is reset to its initial state.

If one of the weave candidates was bad and the other good, then the method 1100 proceeds to step 1150. At step 1150, the processor 205 updates the pattern detecting finite state machine with a "1" if the first weave candidate was the preferred candidate or a "2" if the second weave candidate was the preferred candidate.

If none of the candidate blocks of pixels was considered to be clearly preferred, then the method 1100 proceeds to step 1120. At step 1120, if the pattern detecting finite state machine is in a "committed" or "locked" state, then the method 1100 proceeds to step 1130. Otherwise, if the pattern detecting finite state machine is in neither a locked nor a committed state, the method 1100 proceeds to step 1140. If the pattern detecting finite state machine is in a committed or locked state, then there is only one (1) possible update for the pattern detecting finite state machine which will preserve the 3:2 pull-down pattern. Accordingly, at the next step 1150, the pattern detecting finite state machine is updated with the expected weave candidate.

Fig. 12 shows the pattern detecting finite state machine (FSM) 1200 comprising states representing the most recent set of field weave candidates which satisfy the 3:2 pull-down pattern. The labels "1" and "2" of the states represent a history of which weave (i.e., "weave candidate block of pixels 1" or "weave candidate block of pixels 2") has been selected during the method 1100. In Fig. 12, states (e.g., the state 1210) are represented by ellipses, and regular transitions by arrows from the source state to the destination state, with the input which triggers the transition labelling the arrow. Transitions which cause the pattern detecting finite state machine to regress a "stage" are indicated by boxes (e.g., 12160), with the destination state labelling the box. Updates to the pattern detecting finite state machine are either a "1", a "2" or a reset. A reset takes the finite state machine 1200 back to the "?" state 1210, representing no usable past data. For clarity, some transitions are represented as "Go to" boxes instead of edges.

As seen in Fig. 12, state 1210 represents the pattern detecting finite state machine 1200 having no information about which past weave candidates were good. States 1220, 1230, 1240, 1250, 1260 and 1270 represent "uncertain" states where the processor 205 has not accumulated enough information to know where in the 3:2 pull-down pattern detection might be. This set of states 1210, 1220, 1230, 1240, 1250, 1260, 1270 is called the "uncertain stage".

States 1280, 1290, 12100, 12110, 12120, 12130, 12140 and 12150 represent the pattern detecting finite state machine 1200 possessing enough information about past weave candidates to fix a unique point in the 3:2 pull-down pattern. This set of states is called the "committed stage". The states 1280, 1290, 12100, 12110, 12120, 12130, 12140 and 12150 do not represent a complete 3:2 pull-down pattern, since none of the states contains a full five fields. However, each of the states 1280, 1290, 12100, 12110, 12120, 12130, 12140 and 12150 represents a "portion" of the 3:2 pull-down pattern.

Transitions 12160, 12170, 12180, 12190, 12200, 12210 and 12220 cause a regression to the uncertain stage and are taken when a new input cannot represent the next step in the 3:2 pull-down pattern given past results. As many of the past results as possible are retained within memory 206, for example, when determining to which state to regress.

States 12230, 12240, 12250, 12260 and 12270 are reached when the processor 205 has accumulated enough information about past weave candidates to fit the past weave candidates to the complete 3:2 pull-down pattern. As long as reliable weave analysis information is received, the transitions remain in this group, called the "locked stage". In the locked stage, the pattern detecting finite state machine 1200 can predict the next weave candidate in the complete pull-down pattern. Transitions 12280, 12290, 12300, 12310 and 12320 cause a regression to the uncertain stage or committed stage and are taken when a new input cannot represent the next step in the 3:2 pull-down pattern given past results. Again, as many of the past results as possible are retained in memory 206, for example, when determining to which state to regress.

The advantages of updating the staged pattern detecting finite state machine (FSM) using a predicted weave candidate selection are that film frames can be recovered more quickly, that the state machine does not have to be reset, and that suboptimal frame recovery or deinterlacing does not have to be performed in the low motion/low contrast periods of video. A simplified sketch of such a staged state machine follows.
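Rather than enumerating the explicit states of Fig. 12, the staged behaviour can be modelled compactly by matching the recent history of weave selections against the periodic selection sequence that 3:2 pull-down produces (1, 1, 2, 1, 2, repeating, consistent with the worked example below). This is an illustrative simplification, not the FSM of Fig. 12 itself: the stage boundaries and the regression policy shown here are assumptions.

```python
PATTERN = [1, 1, 2, 1, 2]   # periodic weave selections under 3:2 pull-down

class PullDownFSM:
    def __init__(self):
        self.history = []   # most recent weave selections, oldest first

    def _phases(self):
        """Phases of the 3:2 cycle consistent with the current history."""
        return [p for p in range(5)
                if all(PATTERN[(p + i) % 5] == s
                       for i, s in enumerate(self.history))]

    def stage(self):
        if len(self._phases()) != 1 or len(self.history) < 3:
            return 'uncertain'
        return 'locked' if len(self.history) == 5 else 'committed'

    def predict(self):
        """In the committed/locked stages the next selection is forced."""
        phases = self._phases()
        if len(phases) == 1:
            return PATTERN[(phases[0] + len(self.history)) % 5]
        return None

    def update(self, selection):
        """selection is 1, 2, or None (both weaves bad: reset)."""
        if selection is None:
            self.history = []                 # reset, as at step 1140
            return
        self.history = (self.history + [selection])[-5:]
        while self.history and not self._phases():
            self.history = self.history[1:]   # regress, keeping as many of
                                              # the past results as possible
```

Feeding this model the selections of the worked example below (2, then 1, then 2, starting from a history of 1, 1) drives it from the uncertain stage through committed to locked, after which predict() returns the forced next selection, mirroring step 1150.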
Once the pattern detecting finite state machine is in a committed or locked state, then for each window of three (3) fields the processor 205 can predict which pair of fields to weave together to achieve the best deinterlaced per-pixel output frame, since the fields are from the same original film frame. Also, during transcoding or encoding, this information may be used to set a metadata flag to indicate which frames come from a motion picture film source. Such information can aid processing in a decoder.

A method 1300 of detecting a 3:2 pull-down pattern in video data, according to one embodiment, will now be described by way of example with reference to Figs. 12 and 13. The method 1300 may be implemented as software resident on the hard disk drive 210 and being controlled in its execution by the processor 205.

The method 1300 begins at step 1301, where the processor 205 performs the step of detecting at least a portion of the 3:2 pull-down pattern within a sequence of fields of the video data. For example, the pattern detecting finite state machine 1200 may be in the uncertain state 1240 (i.e., "11") in analysing the video data. The FIFO field buffer 315 is then updated to contain a plurality of consecutive fields (i.e., the most recent three consecutive fields) of video data at a particular point in time. In the example, "weave candidate 2" of the resulting weave candidates may then be selected by the analysis unit 380 as the preferred candidate block of pixels in accordance with the method 1000. As such, the pattern detecting finite state machine is updated in accordance with the method 1100 to take the pattern detecting finite state machine 1200 into the state 1280 (i.e., "112"). The pattern detecting finite state machine 1200 is now in the "committed stage". The state 1280 represents the pattern detecting finite state machine 1200 possessing enough information about past weave candidates to fix a unique point in the 3:2 pull-down pattern.
Accordingly, the processor 205 has detected a portion of the 3:2 pull-down pattern. The method 1300 continues at the next step 1303, where the processor 205 performs the step of determining a feature measurement for a current location in the sequence of fields. Continuing the example, the FIFO field buffer 315 is again updated with a further input field. That is, the FIFO field buffer 315 contains the most recent three consecutive fields of video data at a next particular point in time. The analysis unit 380 uses the relative frame-artefact scores for the resulting weave candidates and the interpolated candidate to determine a feature measurement for the input fields in the FIFO field buffer 315. Accordingly, the feature measurement is determined based on a plurality of consecutive fields. In particular, the analysis unit 380 performs the method 1000, determining a preferred candidate depending on whether only one of the weave candidates is "bad" (i.e., only one of the frame-artefact scores is high); whether both weave candidates are bad (i.e., two frame-artefact scores are determined to be high); or whether both weave candidates are good (i.e., none of the frame-artefact scores is determined to be high). In the present example, the analysis unit 380 determines that only "weave candidate 2" is bad, and the pattern detecting finite state machine 1200 is updated in accordance with the method 1100 to take the pattern detecting state machine into state 12130 (i.e., "1121").

Again continuing the example, the FIFO field buffer 315 is again updated with a further input field. The analysis unit 380 uses the relative frame-artefact scores for the resulting weave candidates and the interpolated candidate to determine a further feature measurement for the input fields in the FIFO field buffer 315. In the present example, the analysis unit 380 performs the method 1000 to determine that only "weave candidate 1" is bad, and the pattern detecting finite state machine 1200 is updated in accordance with the method 1100 to take the pattern detecting state machine into state 12230 (i.e., "11212"). Accordingly, the pattern detecting finite state machine is in the locked stage and has accumulated enough information about past weave candidates to fit the past weave candidates to the complete 3:2 pull-down pattern. In the present example, when the FIFO field buffer 315 is again updated with a further input field, none of the resulting candidates is considered to be clearly preferred.
However, since the pattern detecting finite state machine is in the locked stage, there is only one (1) possible update for the pattern detecting finite state machine which will preserve the 3:2 pull-down pattern. Accordingly, at the next step 1305 of the method 1300, the processor 205 performs the step of predicting a further feature measurement for the current location based on the detected portion. In particular, since the pattern detecting state machine is in state 12230 (i.e., "11212"), the analysis unit 380 predicts that the next transition for the pattern detecting finite state machine 1200 is "weave candidate 1".

The method 1300 then concludes at the next step 1307, where the processor 205 performs the step of detecting the 3:2 pull-down pattern using the predicted feature measurement. In the present example, the pattern detecting finite state machine 1200 is updated in accordance with the method 1100 to take the pattern detecting state machine into state 12250. That is, the detected 3:2 pull-down pattern is "12121".

As described above, once the pattern detecting finite state machine 1200 is in a committed or locked state, then for each window of three (3) fields, the processor 205 can predict which pair of fields to weave together to achieve the best deinterlaced per-pixel output frame, since the fields are from the same original film frame. Accordingly, the detected portion (e.g., state 1290 ("1211")) of the 3:2 pull-down pattern uniquely determines a next feature measurement in the pull-down pattern.

Industrial Applicability

It is apparent from the above that the arrangements described are applicable to the computer and data processing industries.

The foregoing describes only some embodiments of the present invention, and modifications and/or changes can be made thereto without departing from the scope and spirit of the invention, the embodiments being illustrative and not restrictive.
In the context of this specification, the word "comprising" means "including principally but not necessarily solely" or "having" or "including", and not "consisting only of". Variations of the word "comprising", such as "comprise" and "comprises", have correspondingly varied meanings.
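To make the generation of the candidates concrete, the sketch below forms two weave candidates from a three-field window and an interpolated block from a single field. The vertical-averaging ("bob") interpolation, the assumed field parities and the helper names weave and interpolate_field are illustrative choices, not the patent's apparatus.

    import numpy as np


    def weave(field_a, field_b, a_is_top=True):
        """Interleave two half-height fields into a full frame: even frame
        rows from the top field, odd frame rows from the bottom field."""
        h, w = field_a.shape
        frame = np.empty((2 * h, w), dtype=field_a.dtype)
        top, bottom = (field_a, field_b) if a_is_top else (field_b, field_a)
        frame[0::2] = top
        frame[1::2] = bottom
        return frame


    def interpolate_field(field, is_top=True):
        """Build a frame from one field by averaging vertically adjacent
        field rows (a simple intra-field interpolation, an assumed choice)."""
        h, w = field.shape
        frame = np.empty((2 * h, w), dtype=np.float64)
        if is_top:
            frame[0::2] = field
            padded = np.vstack([field, field[-1:]])  # replicate last row
            frame[1::2] = 0.5 * (padded[:-1] + padded[1:])
        else:
            frame[1::2] = field
            padded = np.vstack([field[:1], field])   # replicate first row
            frame[0::2] = 0.5 * (padded[:-1] + padded[1:])
        return frame


    rng = np.random.default_rng(0)
    f0, f1, f2 = (rng.random((240, 720)) for _ in range(3))  # stand-in fields
    candidate_1 = weave(f0, f1)                      # weave the first pair
    candidate_2 = weave(f2, f1)                      # weave the second pair
    reference = interpolate_field(f1, is_top=False)  # interpolated candidate
    print(candidate_1.shape, candidate_2.shape, reference.shape)

A block-wise frame-artefact score could then be formed by comparing local analysis of pixels in each weave candidate with local analysis of the spatially corresponding pixels in the interpolated block, the candidate with the lower score being taken to have the higher quality appearance.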

Claims (7)

1. A method of determining which of two weaved blocks of pixels has a higher quality appearance, said method comprising the steps of:
generating said two weaved blocks of pixels from at least three input fields of video data;
generating an interpolated block of pixels from at least one of said input fields; and
comparing local analysis of pixels from said weaved blocks of pixels with local analysis of spatially corresponding pixels in the interpolated block of pixels to determine which of said weaved blocks of pixels has a higher quality appearance.
2. The method according to claim 1, further comprising the steps of:
weaving a first pair of said fields to determine a first one of said two weaved blocks of pixels; and
weaving a second pair of said fields to determine a second one of said two weaved blocks of pixels.
3. The method of claim 2, further comprising the step of assigning a score to each of said first weaved block of pixels and said second weaved block of pixels.
4. The method of claim 3, further comprising the step of comparing the scores assigned to each of said first weaved block of pixels and said second weaved block of pixels to determine which of said weaved blocks of pixels has a higher quality appearance.
5. An apparatus for determining which of two weaved blocks of pixels has a higher quality appearance, said apparatus comprising:
first generating means for generating said two weaved blocks of pixels from at least three input fields of video data;
second generating means for generating an interpolated block of pixels from at least one of said input fields; and
comparing means for comparing local analysis of pixels from said weaved blocks of pixels with local analysis of spatially corresponding pixels in the interpolated block of pixels to determine which of said weaved blocks of pixels has a higher quality appearance.
6. A computer readable medium, having a program recorded thereon, where the program is configured to make a computer execute a procedure to determine which of two weaved blocks of pixels has a higher quality appearance, said program comprising:
code for generating said two weaved blocks of pixels from at least three input fields of video data;
code for generating an interpolated block of pixels from at least one of said input fields; and
code for comparing local analysis of pixels from said weaved blocks of pixels with local analysis of spatially corresponding pixels in the interpolated block of pixels to determine which of said weaved blocks of pixels has a higher quality appearance.
7. A method of determining which of two weaved blocks of pixels has a higher quality appearance, said method being substantially as hereinbefore described with reference to any one of the embodiments as that embodiment is shown in Figs. 2 to 13.

DATED this Twentieth Day of December 2006
CANON KABUSHIKI KAISHA
Patent Attorneys for the Applicant
SPRUSON & FERGUSON
AU2006252189A 2006-12-21 2006-12-21 Method and apparatus for determining quality appearance of weaved blocks of pixels Ceased AU2006252189B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2006252189A 2006-12-21 2006-12-21 Method and apparatus for determining quality appearance of weaved blocks of pixels

Publications (2)

Publication Number Publication Date
AU2006252189A1 (en) 2008-07-10
AU2006252189B2 (en) 2009-06-04

Family

ID=39665867

Family Applications (1)

Application Number Title Priority Date Filing Date
AU2006252189A Ceased 2006-12-21 2006-12-21 Method and apparatus for determining quality appearance of weaved blocks of pixels

Country Status (1)

Country Link
AU (1) AU2006252189B2 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2001080559A2 (en) * 2000-04-18 2001-10-25 Silicon Image Method, system and apparatus for identifying the source type and quality level of a video sequence
US6618439B1 (en) * 1999-07-06 2003-09-09 Industrial Technology Research Institute Fast motion-compensated video frame interpolator
US20060256237A1 (en) * 2005-04-22 2006-11-16 Stmicroelectronics Sa Deinterlacing of a sequence of moving images

Also Published As

Publication number Publication date
AU2006252189A1 (en) 2008-07-10

Legal Events

Date Code Title Description
FGA Letters patent sealed or granted (standard patent)
MK14 Patent ceased section 143(a) (annual fees not paid) or expired