US20040233328A1 - Method and apparatus of adaptive de-interlacing of dynamic image
- Publication number
- US20040233328A1 (application US10/851,240; US85124004A)
- Authority
- US
- United States
- Prior art keywords
- line segment
- image
- interlacing
- frame
- shift value
- Prior art date
- Legal status
- Granted
Classifications
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B20/00—Signal processing not specific to the method of recording or reproducing; Circuits therefor
- G11B20/10—Digital recording or reproducing
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/01—Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level
- H04N7/0117—Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level involving conversion of the spatial resolution of the incoming video signal
- H04N7/012—Conversion between an interlaced and a progressive signal
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T1/00—General purpose image data processing
- G06T1/60—Memory management
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G5/00—Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
- G09G5/02—Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the way in which colour is displayed
- G09G5/06—Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the way in which colour is displayed using colour palettes, e.g. look-up tables
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G5/00—Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
- G09G5/36—Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
- G09G5/39—Control of the bit-mapped memory
- G09G5/395—Arrangements specially adapted for transferring the contents of the bit-mapped memory to the screen
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H1/00—Details of electrophonic musical instruments
- G10H1/0033—Recording/reproducing or transmission of music for electrophonic musical instruments
- G10H1/0041—Recording/reproducing or transmission of music for electrophonic musical instruments in coded form
- G10H1/0058—Transmission between separate instruments or between individual components of a musical system
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
- H04N5/265—Mixing
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/01—Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level
- H04N7/0135—Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level involving interpolation processes
- H04N7/014—Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level involving interpolation processes involving the use of motion vectors
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G2340/00—Aspects of display data processing
- G09G2340/10—Mixing of images, i.e. displayed pixel being the result of an operation, e.g. adding, on the corresponding input pixels
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G5/00—Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
- G09G5/36—Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
- G09G5/363—Graphics controllers
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/14—Picture signal circuitry for video frequency region
- H04N5/144—Movement detection
- H04N5/145—Movement estimation
Definitions
- This present invention generally relates to a method and apparatus for de-interlacing of a dynamic image, and more particularly to a method and apparatus of adaptive de-interlacing of a dynamic image, wherein the dynamic image takes a line segment composed of pixels as the unit for calculation and determination of the process.
- NTSC National Television System Committee
- PAL Phase Alternation by Line
- Spatially redundant information is usually removed, according to the characteristics of human vision, by a spatial transform (such as the discrete cosine transform, i.e. DCT, or a wavelet transform) followed by quantization, which filters out the high-frequency part to achieve compression.
- The principle of motion estimation is used to find and remove temporally redundant information to achieve compression.
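As an illustration of the transform-and-quantize idea above, the following sketch applies a hand-written 1-D orthonormal DCT-II to an eight-pixel luminance segment and zeroes the high-frequency coefficients. The function names, the 1-D transform, and the simple coefficient cutoff are illustrative choices, not details taken from the patent.

```python
import math

def dct2(x):
    # Orthonormal DCT-II of a 1-D signal.
    N = len(x)
    out = []
    for k in range(N):
        s = sum(x[n] * math.cos(math.pi * (n + 0.5) * k / N) for n in range(N))
        scale = math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)
        out.append(scale * s)
    return out

def idct2(X):
    # Inverse (DCT-III) of the orthonormal DCT-II above.
    N = len(X)
    out = []
    for n in range(N):
        s = X[0] * math.sqrt(1.0 / N)
        s += sum(X[k] * math.sqrt(2.0 / N) * math.cos(math.pi * (n + 0.5) * k / N)
                 for k in range(1, N))
        out.append(s)
    return out

def compress(x, keep):
    # Quantization stand-in: zero all but the first `keep` low-frequency
    # coefficients, discarding the high-frequency part.
    X = dct2(x)
    return [c if k < keep else 0.0 for k, c in enumerate(X)]

# A smooth 8-pixel luminance ramp survives aggressive truncation almost intact,
# which is why this kind of compression works well for natural images.
pixels = [10, 20, 30, 40, 50, 60, 70, 80]
recovered = idct2(compress(pixels, keep=4))
```

Because the ramp's energy is concentrated in the low-frequency coefficients, dropping half of them changes each reconstructed pixel only slightly.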
- I-frame Intra-frame
- B-frame Bi-directional frame
- P-frame Predicted frame
- An I-frame is saved as a complete frame, so its relation to other frames need not be considered.
- A P-frame takes the former I-frame as its reference frame; the redundant parts of the frame are not saved, and only the differing parts are saved.
- The principle of a B-frame is the same as that of a P-frame, the only difference being that a B-frame can take either the former I-frame or P-frame, or the latter P-frame, as its reference.
- For an I-frame, the frame is usually cut into 16×16-pixel macro blocks for processing.
- Each macro block is composed of four 8×8-pixel luminance (Y) blocks, one 8×8-pixel Cr block, and one 8×8-pixel Cb block.
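The sample count of one macro block follows directly from this layout. The arithmetic below assumes the standard 4:2:0 arrangement of four 8×8 Y blocks plus one 8×8 Cr block and one 8×8 Cb block:

```python
# Sample count of one macro block, assuming the standard 4:2:0 layout:
# four 8x8 luminance (Y) blocks plus one 8x8 Cr and one 8x8 Cb block.
y_samples = 4 * 8 * 8    # 256 luminance samples
cr_samples = 8 * 8       # 64 Cr samples
cb_samples = 8 * 8       # 64 Cb samples
total = y_samples + cr_samples + cb_samples  # 384 samples per macro block
```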
- For a P-frame, the data to be saved is mainly the difference between the current frame and the reference frame, where the reference frame is the former I-frame or P-frame. Because parts of a frame can usually be found at some position in the former frame, only the displacement of those parts from their positions in the former frame is recorded, greatly reducing the amount of frame information that must be saved; this technique is called motion compensation.
- A P-frame also takes the macro block as its unit. Normally, each macro block searches for its closest matching block within some region of the reference frame, a procedure called block matching, with the comparative position in the current frame taken as coordinates (0, 0). When the nearest block is found by comparison, only the displacement of the macro block's coordinates between the two frames is recorded, i.e., (displacement of x, displacement of y), shown as (dx, dy) and called the motion vector (MV).
- MV motion vector
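The block-matching procedure described above can be sketched as an exhaustive search that minimizes the sum of absolute differences (SAD). The function names, the SAD cost, and the search-window size are illustrative choices, not details from the patent:

```python
def sad(block_a, block_b):
    # Sum of absolute differences between two equally sized luminance blocks.
    return sum(abs(a - b)
               for row_a, row_b in zip(block_a, block_b)
               for a, b in zip(row_a, row_b))

def block_at(frame, x, y, size):
    # The size x size sub-block of `frame` whose top-left corner is (x, y).
    return [row[x:x + size] for row in frame[y:y + size]]

def block_match(ref, cur, bx, by, size=4, search=2):
    # Exhaustive block matching: find the motion vector (dx, dy) such that
    # the block of `cur` anchored at (bx, by) best matches the block of
    # `ref` at (bx + dx, by + dy), searching a +/- `search` pixel window.
    target = block_at(cur, bx, by, size)
    best_mv, best_cost = (0, 0), float("inf")
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            x, y = bx + dx, by + dy
            if x < 0 or y < 0 or x + size > len(ref[0]) or y + size > len(ref):
                continue  # candidate block would fall outside the frame
            cost = sad(block_at(ref, x, y, size), target)
            if cost < best_cost:
                best_mv, best_cost = (dx, dy), cost
    return best_mv
```

Only the winning (dx, dy) pair is recorded for the macro block, which is what makes this representation so much smaller than storing the block itself.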
- When the image is decompressed (i.e., when it is played back), it is processed in the reverse order of encoding: the frequency-domain signal is converted back to a spatial signal to recover the data stream that existed before compression, and these data streams are then integrated and sent to a video reconstructed-image buffer (or video image buffer), which restores the received data to the original frames. When the first frame is an I-frame, it is restored directly and kept in memory for the frames that follow; finally, the frames that record only differences are restored to their pre-compression state by motion compensation.
- In de-interlacing, motion compensation shifts the pixels of two temporally offset fields to a common point at one instant to form a frame.
- Motion estimation decides the amount each pixel is shifted, identifying and tracking motion vectors from one field (for example, the field of odd lines) to another (for example, the field of even lines); alternatively, the fields are further cut into a plurality of macro blocks and processed by the block-matching procedure.
- When a macro block is used for motion-vector identification, only its luminance (Y) block is actually used; the chrominance blocks (the Cr and Cb blocks) are discarded.
- However, a hardware system with greater resource constraints may be unable to execute such a de-interlacing algorithm, and without a mechanism for selecting among de-interlacing algorithms the system cannot display the best image quality.
- The present invention provides a method and apparatus of adaptive de-interlacing of dynamic image, wherein the dynamic image takes a line segment composed of pixels as its unit, comprising the steps of: calculating the characteristic values of each line segment of the current frame in sequence, taking the width of the line segment (in pixels) as the processing unit; calculating an image shift value for each line segment in sequence from the characteristic values of the line segments of the current frame and of the corresponding line segments of a reference frame; comparing each image shift value with a threshold to determine a de-interlacing algorithm for the dynamic image; and executing the determined de-interlacing algorithm to constitute the output image of the dynamic image.
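The four steps of the method can be sketched as follows. The concrete characteristic-value formula here is an assumption (the sum of absolute luminance differences between adjacent pixels), since the patent's Equation 1 is not reproduced in this text:

```python
def segment_characteristic(y_values):
    # Illustrative characteristic value of a line segment: the sum of
    # absolute luminance differences between adjacent pixels. This is a
    # stand-in for the patent's Equation 1, which is not reproduced here.
    return sum(abs(b - a) for a, b in zip(y_values, y_values[1:]))

def choose_algorithm(cur_segment, ref_characteristic, threshold):
    # Steps 2-3 of the method: compute the segment's image shift value
    # against the reference frame, then compare it with the threshold to
    # pick the de-interlacing algorithm (Bob for motion, Weave for still).
    cur = segment_characteristic(cur_segment)
    shift = abs(cur - ref_characteristic)
    return ("bob" if shift > threshold else "weave"), cur

# A segment whose luminance variation differs strongly from its stored
# reference value is treated as moving and handled with Bob; the new
# characteristic value replaces the reference entry for the next frame.
algo, new_reference = choose_algorithm([10, 65, 70, 83],
                                       ref_characteristic=30, threshold=10)
```

The returned characteristic value is written back over the reference entry, which is exactly the replacement scheme the embodiment uses for its characteristic-value buffer.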
- The present invention also provides an apparatus of adaptive de-interlacing of dynamic image comprising: a calculating unit of characteristic values, which receives the width of a line segment and calculates and outputs the characteristic value of a line segment of the current frame of the dynamic image; a calculating unit of image shift value of line segment, which receives the characteristic value of the line segment of the current frame and that of the corresponding line segment of a reference frame and calculates and outputs an image shift value of the line segment; a determining unit, which receives and compares the image shift value of the line segment with a threshold and outputs the determination; and a processing unit of video images, which receives the determination, chooses and executes a de-interlacing algorithm accordingly, and outputs the de-interlaced dynamic image.
- The method and apparatus of adaptive de-interlacing of dynamic image of the present invention can solve several problems. For instance, they preserve the image quality of both moving frames and still frames when de-interlacing between a current video/audio player system (for instance, a VCD or DVD player) and a digital display system (for instance, an HDTV or plasma TV), thereby producing higher-resolution images and satisfying users' requirements for video/audio playback quality.
- FIG. 1 schematically shows the diagram of de-interlacing of prior art, wherein FIG. 1A is the diagram of de-interlacing images without motion and FIG. 1B is the diagram of de-interlacing images with motion;
- FIG. 2 schematically shows the flow chart of adaptive de-interlacing of the present invention
- FIG. 3 schematically shows the flow chart of one embodiment of adaptive de-interlacing of the present invention
- FIG. 4 schematically shows the diagram of the calculation method of image shift value of line segment
- FIG. 5 schematically shows the block diagram of executing the adaptive de-interlacing.
- FIG. 2 schematically shows the flow chart of adaptive de-interlacing of the present invention.
- In step 210, image information of a dynamic image is read and the width of a line segment is received as a common basis for image processing, wherein a line segment is composed of the Y values of pixels within the image.
- In step 220, the differences between pixels within each line segment of the current frame (i.e., differences of Y values) are calculated in sequence, according to the line-segment width, as the characteristic value (Δi,n,m) of the line segment.
- i: the current frame
- n: the scanning line of the frame
- m: the m-th line segment of the line
- j: the width of the line segment (in pixels), which is adjustable
- k: the position of a pixel in the line segment
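With these indices, the per-frame calculation can be sketched as below. The per-segment formula itself is an illustrative stand-in for Equation 1 (a sum of absolute adjacent-pixel Y differences), not the patent's exact definition:

```python
def frame_characteristics(frame, j):
    # Characteristic values for every width-j line segment of every scanning
    # line of frame i. The per-segment formula (sum of absolute adjacent-pixel
    # Y differences) is an illustrative stand-in for Equation 1.
    values = {}
    for n, line in enumerate(frame):        # n: scanning line of the frame
        for m in range(len(line) // j):     # m: the m-th segment of the line
            seg = line[m * j:(m + 1) * j]   # pixels k = 0 .. j-1
            values[(n, m)] = sum(abs(seg[k + 1] - seg[k]) for k in range(j - 1))
    return values
```

For the example Y values of FIG. 4A with j = 4, the two segments of the first line yield 73 and 57 under this stand-in formula (the patent's own Equation 1 produces 45 for the first segment, so the formulas clearly differ).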
- After the characteristic value (Δi,n,m) of the line segment is acquired, the image shift value of the line segment is calculated in step 230 by subtracting the characteristic value (Δi-1,n,m) of the line segment at the same line-segment width and pixel position in the former frame from the characteristic value (Δi,n,m) of the current frame's line segment acquired in step 220.
- The image shift value of the line segment is then compared with a threshold, wherein the threshold is adjusted according to the required image quality, the performance of the player system, and the available memory space.
- The threshold is programmable.
- When the image shift value of the line segment is greater than the threshold, step 250 chooses the Bob algorithm for the de-interlacing process.
- Otherwise, step 250 chooses the Weave algorithm for the de-interlacing process.
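The two candidate algorithms themselves are simple. The following is a minimal sketch, assuming the odd field holds the first line of the frame:

```python
def weave(odd_field, even_field):
    # Weave: interleave the two fields line by line. Ideal for still content,
    # but moving edges show feathering because the two fields were captured
    # at different instants.
    frame = []
    for odd_line, even_line in zip(odd_field, even_field):
        frame.append(odd_line)
        frame.append(even_line)
    return frame

def bob(field):
    # Bob: build the frame from a single field, doubling each line. This
    # halves vertical resolution but avoids feathering on motion.
    frame = []
    for line in field:
        frame.append(line)
        frame.append(list(line))
    return frame
```

Choosing per line segment, rather than per frame, is what lets the method keep Weave's full resolution in still regions while applying Bob only where motion is detected.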
- The characteristic value (Δi,n,m) of the line segment is then saved, and each line segment is processed in sequence by the determined de-interlacing algorithm to constitute a dynamic image whose de-interlacing process is complete.
- FIG. 3 and FIGS. 4A-4F depict one specific embodiment of the present invention: FIG. 3 schematically shows the flow chart of line segment-based adaptive de-interlacing of the present invention, while FIGS. 4A-4F schematically show the calculation procedure of line segment-based adaptive de-interlacing. The image resolution of the frame in FIG. 4 is 8×8 pixels.
- A player system provides readable image information and the width of a line segment; for instance, the image information of the current frame is read with a line-segment width of 4. The Y values of the 1st to 4th pixels of a line segment in the odd field of the frame are (10, 65, 70, 83), and those of a line segment in the even field are (13, 40, 65, 60). The Y values of the pixels of the other line segments in the odd and even fields of the frame are as shown in FIG. 4A.
- In step 310, the characteristic value (Δi,n,m) of the 1st line segment of the current frame is calculated by the formula of Equation 1, and the result is 45. The reference frame in FIG. 4B is composed of the characteristic values of each line segment of the former frame.
- In step 320, the absolute value of the difference between the characteristic value (Δi-1,n,m) of the 1st line segment of the former reference frame and the characteristic value (Δi,n,m) of the 1st line segment of the current frame is taken as the image shift value of the line segment.
- The image shift value of the 1st line segment of the current frame is therefore 12 (i.e., |45 - 33|), referring to FIG. 4C.
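With the figures of this example, the shift-value step reduces to a single absolute difference:

```python
current_characteristic = 45    # 1st line segment of the current frame (FIG. 4B)
reference_characteristic = 33  # same segment stored for the reference frame
shift_value = abs(current_characteristic - reference_characteristic)

# shift_value is 12, matching FIG. 4C; per step 350 the reference entry is
# then overwritten with 45 for use with the next frame.
reference_characteristic = current_characteristic
```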
- In step 350, once the image shift value of a line segment has been calculated, the characteristic value (Δi,n,m) of that line segment of the current frame is saved into the position of the corresponding line segment's characteristic value in the former reference frame; in other words, Δi,n,m serves as the former frame's Δi-1,n,m when the image shift value of the next frame's line segment is calculated.
- Thus the characteristic value of the 1st line segment of the reference frame is replaced by 45.
- The characteristic values of each line segment of the current frame are calculated as shown in FIG. 4E.
- The de-interlacing method applied to each line segment in FIG. 4E yields the current frame shown in FIG. 4F; in other words, the images of the current frame in FIG. 4F are constituted according to the de-interlacing method applied to each line segment in FIG. 4E.
- The characteristic values of each line segment of the reference frame are then replaced by the characteristic values of the current frame, as shown in the reference-frame diagram of FIG. 4F.
- Step 380 reads the next frame or stops the de-interlacing process. Otherwise, when the image information does not contain an end-of-field signal, the image information is examined again to determine whether the end of the scanning line has been reached.
- step 360 reads the image information of the next scanning line.
- Step 370 reads the image information of the next line segment for calculation of its characteristic value.
- FIG. 5 schematically shows the block diagram of executing the adaptive de-interlacing of the present invention.
- the apparatus comprises a processing unit of adaptive de-interlacing 10 , configured for connecting with a micro-processing unit 20 , a memory unit 30 (which includes a buffer unit of video images 32 and a buffer unit of characteristic values 34 ) and a display unit 40 .
- the processing unit of adaptive de-interlacing 10 further comprises a calculating unit of characteristic values 12 , a calculating unit of image shift value of line segment 14 , a determining unit 16 and a processing unit of video images 18 .
- The calculating unit of characteristic values 12 in the processing unit of adaptive de-interlacing 10 receives and reads the image information from the buffer unit of video images 32. The image information in the buffer unit of video images 32 can be stored in the memory unit 30 by saving each motion frame after an input unit (not shown in FIG. 5) decodes the image information of a disc (for instance, a DVD disc).
- the micro-processing unit 20 simultaneously delivers a signal with width of a line segment to the processing unit of adaptive de-interlacing 10 .
- Thus the calculating unit of characteristic values 12, the calculating unit of image shift value of line segment 14, the determining unit 16, and the processing unit of video images 18 in the processing unit of adaptive de-interlacing 10 know how many pixels wide the line segment is.
- The calculating unit of characteristic values 12 executes the calculation of the characteristic value of the line segment in accordance with Equation 1 and then delivers the result (Δi,n,m) to the calculating unit of image shift value of line segment 14.
- The calculating unit of image shift value of line segment 14 receives the characteristic value (Δi,n,m) of a line segment of the current frame and simultaneously reads the characteristic value (Δi-1,n,m) of the same image position of a reference frame (for instance, the former frame) from the buffer unit of characteristic values 34 in the memory unit 30. It then takes the absolute value of the difference between the two characteristic values as the image shift value of the line segment and delivers it to the determining unit 16.
- After the determining unit 16 receives the threshold signal delivered from the micro-processing unit 20, it compares the image shift value of the line segment with the threshold and delivers the comparison result to the processing unit of video images 18. When the processing unit of video images 18 receives a comparison result from the determining unit 16 showing that the image shift value of the line segment is substantially greater than the threshold, it delivers the image address currently requiring de-interlacing to the buffer unit of video images 32. The contents of that image address include the encoded contents of the odd and even fields.
- the buffer unit of video images 32 delivers the encoding information of each image (from the memory unit 30 ) to the processing unit of video images 18 in sequence
- The Bob algorithm is then chosen to complete the image de-interlacing of the line segment of the current frame.
- the processing unit of video images 18 delivers the image information (provided from the memory unit 30 ) to the calculating unit of characteristic values 12 for reading the image information of the next line segment.
- When the processing unit of video images 18 receives a comparison result from the determining unit 16 showing that the image shift value of the line segment is substantially less than the threshold, it likewise delivers the image address currently requiring de-interlacing to the buffer unit of video images 32.
- the buffer unit of video images 32 delivers the encoding information of each image (from the memory unit 30 ) to the processing unit of video images 18 .
- The Weave algorithm is then chosen to complete the de-interlacing of the line segment of the current frame.
- The processed image is delivered to the display unit 40 for display.
- While the processing unit of video images 18 executes the de-interlacing process, the image information read by the calculating unit of characteristic values 12 is continuously examined.
- When the encoded information is found to contain the end of a field, the de-interlacing process stops; otherwise, adaptive de-interlacing of each line segment of the next frame continues.
- The apparatus of adaptive de-interlacing of the present invention must access the characteristic values of each line segment, so it needs memory (for instance, the buffer unit of characteristic values 34) with enough space to hold those characteristic values.
- For example, the image resolution of the frame is 331,200 pixels.
- Access of the characteristic values uses the replacement method, i.e., each stored characteristic value of the reference frame is overwritten by the corresponding value of the current frame once its image shift value has been calculated.
- The size of the buffer unit of characteristic values 34 in the apparatus of adaptive de-interlacing of the present invention is adjustable according to the width of a line segment, and its maximum is less than the frame resolution.
- The space used by the buffer unit of characteristic values 34 is extremely low compared with a memory space of 256 MB for image processing.
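A rough estimate of the buffer size, assuming one stored characteristic value per line segment (the exact buffer layout is not specified in this text):

```python
frame_pixels = 331_200   # frame resolution quoted in the description
segment_width = 4        # width of a line segment in pixels (adjustable)

# One stored characteristic value per segment instead of one per pixel;
# widening the segments shrinks the buffer further, at the cost of coarser
# motion detection.
buffer_entries = frame_pixels // segment_width
```

Under this assumption the buffer holds 82,800 entries, well below one value per pixel, consistent with the statement that its maximum is less than the frame resolution.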
- As is well known, the smaller the processing unit of an image in a de-interlacing process, the higher the image quality achieved; correspondingly, the number of calculations and determinations required by the de-interlacing process increases substantially. If de-interlacing is performed by software alone, image playback is delayed and unnatural frames result. The rapid access of hardware should therefore be used to solve the image-delay problem.
- The apparatus of the present invention, using limited memory space for access of characteristic values, not only solves the image-delay problem but also acquires high-resolution frames of the dynamic image.
- FIG. 5 schematically shows the block diagram of adaptive de-interlacing of the present invention. Although it is divided into different units, this does not mean that these units (except for those configured for input and output) must exist as independent devices. The units can be configured and combined according to interface specifications and product requirements. For instance, in a high-end image-processing workstation or a personal computer (PC) able to play DVD films, the processing unit of de-interlacing 10 can be embedded into the CPU of the system or manufactured as an individual device (for instance, a chip) connected to the CPU.
- PC personal computer
- The processing unit of de-interlacing 10, the memory unit 30 and the micro-processing unit 20 can be integrated into a chip.
- SOC i.e. System on a Chip
- The processing unit of de-interlacing of the present invention can also be integrated into different application systems.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Physics & Mathematics (AREA)
- Signal Processing (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Computer Hardware Design (AREA)
- Computer Graphics (AREA)
- Acoustics & Sound (AREA)
- Television Systems (AREA)
- Image Processing (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
- Controls And Circuits For Display Device (AREA)
- Selective Calling Equipment (AREA)
- Television Signal Processing For Recording (AREA)
- Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
- Image Generation (AREA)
- Processing Of Color Television Signals (AREA)
- Digital Computer Display Output (AREA)
Abstract
Description
- 1. Field of the Invention
- This present invention generally relates to a method and apparatus for de-interlacing of a dynamic image, and more particularly to a method and apparatus of adaptive de-interlacing of a dynamic image, wherein the dynamic image takes a line segment composed of pixels as the unit for calculation and determination of the process.
- 2. Description of the Prior Art
- As information technology develops, analog products have gradually been replaced by digital products, so more and more video/audio players and display systems include the function of converting analog signals to digital signals. There are two analog-television scanning standards at present: National Television System Committee (NTSC) and Phase Alternation by Line (PAL). The NTSC standard, used in Japan and the US, forms a frame from 525 scanning lines and displays frames at a rate of 30 frames per second. Yet the 525 scanning lines that form a frame are not produced in a single scan: after the first line is scanned, the third line follows rather than the second, then the fifth, seventh, and so on to the 525th line; the scan then returns to the second line, followed by the fourth, sixth, eighth, etc. The smooth, clear frame displayed is therefore actually constituted by the odd-numbered lines and the even-numbered lines in alternation, a formatting method called "double-space scanning" or "interlacing".
- An interlaced video signal is composed of two fields, one containing the odd lines of the image and the other the even lines. During image capture, the camera outputs the odd lines of the image immediately and the even lines 16.7 milliseconds later. Since a temporal shift occurs between the output of the odd and even lines, that shift must be accounted for in a frame-based processing system. For a still frame this method yields a good image, but for a frame containing motion the image becomes blurred, because serration appears on the edges of the image, an artifact called feathering. Besides, since each field is formed by only half the scanning lines (262.5 lines), each field has only half the resolution of the original image. Moreover, even though the fields are displayed at 60 fields per second and the frame may not appear to have motion artifacts, if the frame is enlarged the scanning lines appear thick and the frame becomes blurred.
- The disadvantages of interlaced scanning described above can be eliminated by a technique called "progressive scan". In progressive scan, the first, second, third, through the 525th line are scanned in order and displayed at 60 frames per second. Its scanning speed is therefore twice that of interlacing, and the frame is displayed on the monitor with all 525 scanning lines, which makes the frame fine and clear, the chief merit of progressive scan. Consequently, most video/audio equipment developed at present uses this method for scanning and displaying.
- However, the current NTSC video signal still mainly uses interlacing. Therefore, if a frame constituted by interlacing is displayed by a display system using progressive scan, for instance when a DVD film edited with interlacing is broadcast directly on an HDTV, only the odd-line field or the even-line field can be displayed at a time and the resolution of the image suffers. To solve this problem, the technique of "de-interlacing" is used; in other words, de-interlacing is a method of converting interlaced video to progressive scan. For example, to convert standard definition TV (SDTV) to high definition TV (HDTV), the scanning lines are raised from 480i to 720p by the steps of de-interlacing and resampling, and the misalignment of the image that occurs when combining the odd and even scanning lines must be corrected so that the progressive image satisfies the demands of the audience.
- Although the technique of de-interlacing solves the problem of displaying an interlaced signal on a progressive-scan system with poor resolution, another problem that cannot be neglected is that a clear image is obtained only in the case of a still frame, while a frame with motion yields a blurred image and motion artifacts, so high image quality cannot be displayed. In general, there are non-motion compensated algorithms and motion-compensated algorithms for solving this problem.
- 1. Non-Motion Compensated De-Interlacing Algorithm
- Two basic linear techniques among the non-motion compensated de-interlacing algorithms are called "Weave" and "Bob". "Weave" overlays (or weaves together) two input fields to produce a progressive frame. Although the different image fields are fully aligned when processing a still image with this technique, so that a clear de-interlaced image can be produced, obvious serration or feathering will occur for a moving image, because the image shifts as time goes by. When the odd-line image and the even-line image are woven into one frame, misalignment occurs because of the temporal shift between them, producing serration or feathering and thus a blurred frame, as shown in FIG. 1. "Bob", on the other hand, accepts only one of the input fields (for example, the even-line image) and discards the other (i.e. the odd-line image), so the vertical resolution of the image decreases from 720×486 to 720×243 pixels. The voids left by the discarded lines are filled in from adjacent scanning lines of this half-resolution image in order to regain the 720×486-pixel format. The merit of the Bob algorithm is that it eliminates the motion artifacts of the image and reduces the calculation demand; the disadvantage is that the vertical resolution of the input image is still half that of the original image after interpolation, which decreases the detail of the progressive-scan image.
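The trade-off between the two baseline methods can be sketched in a few lines of Python. This is an illustrative sketch, not code from the patent: the list-of-rows field layout and the averaging interpolation for Bob are assumptions.

```python
# Frames are lists of pixel rows. The odd field supplies rows 0, 2, 4, ...
# of the progressive frame; the even field supplies rows 1, 3, 5, ...

def weave(odd_field, even_field):
    """Weave: interleave the two fields line by line into one frame."""
    frame = []
    for odd_row, even_row in zip(odd_field, even_field):
        frame.append(odd_row)
        frame.append(even_row)
    return frame

def bob(field):
    """Bob: keep one field and fill the missing lines from adjacent
    lines, regaining full height at half the vertical detail."""
    frame = []
    for i, row in enumerate(field):
        frame.append(row)
        if i + 1 < len(field):
            # simple average of the neighbouring kept lines
            frame.append([(a + b) // 2 for a, b in zip(row, field[i + 1])])
        else:
            frame.append(row)  # last line: duplicate
    return frame

odd = [[10, 64], [20, 30]]
even = [[13, 40], [22, 28]]
woven = weave(odd, even)   # full detail, but feathers if the fields moved
bobbed = bob(odd)          # no feathering, but half the vertical detail
```

Weave preserves every captured pixel (hence the still-image quality), while Bob discards one field entirely, which is exactly the resolution loss described above.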
- 2. Motion Compensated De-Interlacing Algorithm
- Because the motion compensation method of the MPEG compression technique is used for the motion compensation of the de-interlacing algorithm, the technique is first illustrated briefly. In dynamic video compression, the MPEG compression method is used in practice at present, MPEG being the abbreviation of Motion Picture Experts Group. Its editing standards can be divided into three parts: video, audio, and system. In a continuously broadcast motion picture there is high relevance between one picture and the next; therefore, in a sequence of original motion pictures, there is high spatial relevance and high temporal relevance between successive pictures, and video compression proceeds by removing redundant information according to these two kinds of relevance. Spatially redundant information is usually removed, according to the characteristics of human vision, with a spatial transformation (such as the discrete cosine transform, i.e. DCT, or the wavelet transform) and quantization that filter out the high-frequency part to achieve compression. As for temporally redundant information, the principle of motion estimation is used to find and remove it to achieve compression.
- In the process of MPEG compression (or encoding), three different methods are used to compress each frame: Intra-frame (I-frame), Bi-directional frame (B-frame) and Predicted frame (P-frame). The I-frame does not need to consider its relation to other frames, since a complete frame is saved. A P-frame takes a former I-frame as its reference frame; the redundant part of the frame is not saved and only the differing part is saved. The principle of the B-frame is the same as that of the P-frame, the only difference being that a B-frame can take a former I-frame or P-frame as reference and can also take a latter P-frame as reference.
- In an I-frame, the frame is usually cut into macro blocks of 16×16 pixels for processing. Each macro block is composed of four 8×8-pixel luminance blocks (i.e. Y blocks), one 8×8-pixel Cr block and one 8×8-pixel Cb block.
- In a P-frame, the data to be saved is mainly the difference between the current frame and the reference frame, wherein the reference frame is the former I-frame or P-frame. This is because most parts of a frame can often be found at some position in the former frame; only how those parts shifted from their positions in the former frame is recorded, which greatly reduces the frame information that must be saved. This technique is called motion compensation. A P-frame also takes the macro block as its unit. Normally, for each macro block the closest-matching block can be found within some region; this procedure is called block matching, and the coordinates of the block's position in the current frame are taken as (0, 0). When the nearest block is found by comparison, only the displacement of the macro block's coordinates between the two frames is recorded, i.e. (displacement in x, displacement in y), shown as (dx, dy), the so-called motion vector (MV). The principle of the B-frame is the same as the principle of the P-frame, except that a B-frame can take a former I-frame or P-frame as a reference (it can also take a latter P-frame as a reference, or take the average of both as a reference).
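The block-matching procedure just described can be sketched as an exhaustive search minimising the sum of absolute differences (SAD). This is an illustrative sketch only: the 4×4 block size, the ±2-pixel search window and the SAD cost are common choices assumed here, not values fixed by the text.

```python
def sad(ref, cur, rx, ry, cx, cy, size):
    """Sum of absolute differences between a size x size block at (ry, rx)
    in the reference frame and at (cy, cx) in the current frame."""
    total = 0
    for y in range(size):
        for x in range(size):
            total += abs(ref[ry + y][rx + x] - cur[cy + y][cx + x])
    return total

def motion_vector(ref, cur, bx, by, size=4, search=2):
    """Return the (dx, dy) minimising SAD within +/- search pixels, i.e.
    the displacement of the current block's best match in the reference."""
    best_cost, best_mv = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            ry, rx = by + dy, bx + dx
            if 0 <= ry and 0 <= rx and ry + size <= len(ref) and rx + size <= len(ref[0]):
                cost = sad(ref, cur, rx, ry, bx, by, size)
                if best_cost is None or cost < best_cost:
                    best_cost, best_mv = cost, (dx, dy)
    return best_mv

# Toy 8x8 luminance frames: the current frame is the reference shifted
# one pixel to the left, so the block at (2, 2) should match at dx = +1.
ref = [[16 * y + x for x in range(8)] for y in range(8)]
cur = [[ref[y][min(x + 1, 7)] for x in range(8)] for y in range(8)]
mv = motion_vector(ref, cur, 2, 2)
```

Real encoders replace the exhaustive loop with faster search patterns, but the recorded result is the same (dx, dy) pair the text calls the motion vector.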
- When the image is de-compressed (i.e., when it is broadcast), it is processed in the order opposite to that of encoding: the frequency-domain signal is converted back to a spatial signal to recover the data stream from before compression, and these data streams are then integrated and sent to the video reconstructed image buffer (or video image buffer), which restores the received data to the original frames. Besides, when the first frame is an I-frame, it is restored directly and the later frames are saved in memory. Finally, the frames that record only differences are restored to their pre-compression state by the method of motion compensation.
- As described above, motion compensation comprises shifting the pixels of two temporally shifted fields to a common point in time to form a frame. Motion estimation is consulted to decide the shift amount of each pixel, wherein a motion vector is identified and tracked from one field (for example, the odd-line field) to another field (for example, the even-line field). The fields are further cut into a plurality of macro blocks, and the procedure of block matching is executed. Moreover, when a macro block is used for identification of the motion vector, actually only the luminance blocks (i.e. Y blocks) of the macro block are used, and the saturation blocks (i.e. the Cr block and Cb block) are discarded. The main reason is that the human eye is sensitive to changes of luminance and less sensitive to changes of saturation. Therefore, to reduce the amount of data to be processed, in MPEG compression (or encoding) only the luminance blocks are taken as the basis for identification of the motion vector.
- Current multi-function DVDs are edited from film images using interlaced scanning; therefore an interlaced signal constitutes the frame when the disc is broadcast. Thus, when a film is broadcast by a hi-fi digital TV, a method such as Weave or Bob must be chosen to convert the interlaced scanning to progressive scan. However, when the Weave method is chosen, misalignment of the image will occur since there is a temporal shift between the odd-line image and the even-line image; serration or feathering will appear and a blurred frame is produced. When the Bob method is chosen, although the misalignment is overcome and a clear, natural dynamic image can be produced, the vertical resolution of a still image is sacrificed. Therefore, between the current video/audio player system and digital display system, the image quality of a moving frame and of a still frame cannot both be served when processing de-interlacing.
- Besides, in the process of editing a VCD or DVD film, for video/audio that uses the Joint Photographic Experts Group (JPEG) standard, for films edited using only I-frames of the MPEG compression standard on one disc, and for films without compression, the dynamic image might include only the encoding information of I-frames or only the dynamic image information itself. The player system then cannot detect motion vectors when playing such a film, and an encoding incompatibility problem occurs. Consequently, such a film without motion vectors cannot be played in the player system, which is inconvenient for users. Besides, for a video/audio player system that provides no selection mechanism, when hardware performance is limited, for instance by insufficient memory or bandwidth, a de-interlacing algorithm with higher requirements cannot be executed in the hardware system, and for lack of a selection mechanism among de-interlacing algorithms the best image quality cannot be displayed.
- The present invention provides a method and apparatus of adaptive de-interlacing of dynamic image, wherein the dynamic image takes a line segment composed of pixels as its unit, comprising the steps of: calculating characteristic values of each line segment of the current frame in sequence, taking the width of the line segment (in pixels) as the process unit; calculating an image shift value of each line segment in sequence in accordance with the characteristic values of each line segment of the current frame and the characteristic values of each corresponding line segment of the reference frame; comparing the image shift value of the line segment with a threshold to determine a de-interlacing algorithm for the dynamic image; and executing the determined de-interlacing algorithm to constitute the output image of the dynamic image.
- The present invention also provides an apparatus of adaptive de-interlacing of dynamic image, comprising: a calculating unit of characteristic values for receiving the width of a line segment, then calculating and outputting the characteristic value of a line segment of the current frame of the dynamic image; a calculating unit of image shift value of line segment for receiving the characteristic value of the line segment of the current frame and the characteristic value of a corresponding line segment of a reference frame, then calculating and outputting an image shift value of the line segment; a determining unit for receiving and comparing the image shift value of the line segment with a threshold, then outputting determination information; and a processing unit of video images for receiving the determination information, choosing and executing a de-interlacing algorithm according to it, and then outputting a dynamic image with the de-interlacing process completed.
- Accordingly, the method and apparatus of adaptive de-interlacing of dynamic image of the present invention can solve several problems. For instance, the image quality of both a moving frame and a still frame can be served when processing de-interlacing between a current video/audio player system (for instance, a VCD or DVD player) and a digital display system (for instance, an HDTV or plasma TV), thereby producing higher-resolution images and satisfying users' requirements for audio/video player quality.
- FIG. 1 schematically shows the diagram of de-interlacing of prior art, wherein FIG. 1A is the diagram of de-interlacing images without motion and FIG. 1B is the diagram of de-interlacing images with motion;
- FIG. 2 schematically shows the flow chart of adaptive de-interlacing of the present invention;
- FIG. 3 schematically shows the flow chart of one embodiment of adaptive de-interlacing of the present invention;
- FIG. 4 schematically shows the diagram of the calculation method of image shift value of line segment; and
- FIG. 5 schematically shows the block chart of executing the adaptive de-interlacing.
- Since the related techniques and methods of compression standards and encoding have been described in detail in the prior art, the complete process of these techniques and methods is not included in the following description. Moreover, the encoding and decoding art used in the present invention, adapted from the MPEG compression technique, is quoted only in summary here to support the description of the invention. The block diagrams in the following text are not drawn according to actual relative positions or as complete connection diagrams; their function is only to illustrate the features of the invention.
- FIG. 2 schematically shows the flow chart of adaptive de-interlacing of the present invention. In step 210, the image information of a dynamic image is read and the width of a line segment is received as a common base of image processing, wherein a line segment of that width is composed of the Y values of pixels within the image. Next, in step 220, the sum of the absolute differences of Y values between each line segment of the current frame and the corresponding line segment of the adjacent line is calculated in sequence to be the characteristic value (δi,n,m) of the line segment. The calculation formula is shown as equation 1:
- δi,n,m = Σk=1..j |Y(i, n, m, k) − Y(i, n+1, m, k)|  (equation 1)
- wherein i represents the current frame; n represents the scanning line of the frame; m represents the mth line segment of the line; j represents the width of the line segment (taking pixels as a unit), which is adjustable; k represents the position of a pixel in the line segment; and Y(i, n, m, k) represents the Y value of the kth pixel of the mth line segment of line n of frame i.
- The calculation method of equation 1 is as follows: between a line segment of a line within the odd field of a frame (for instance, the 1st line, i.e. the nth line) and the line segment of corresponding width in the 2nd line (i.e. the n+1th line) within the even field (for instance, a line segment width of 4 pixels, i.e. j=4 and k=1 to 4), the differences of the Y values at corresponding pixel positions of the adjacent lines of the two fields are calculated in sequence and their absolute values taken. These absolute differences of Y values are then summed to be the characteristic value (δi,n,m) of the line segment. After the characteristic value (δi,n,m) of the line segment is acquired, the image shift value of the line segment is calculated in step 230: the characteristic value (δi-1,n,m) of the line segment with the same width and pixel positions in the former frame is subtracted from the characteristic value (δi,n,m) of the line segment of the current frame acquired by step 220, and the absolute value of the difference is taken as the image shift value of the line segment. Next, the image shift value of the line segment is compared with a threshold, wherein the threshold is adjusted according to the requirement of image quality, the performance of the player system and the requirement of memory space; in other words, the threshold is programmable. Further, when the comparison result in step 240 shows that the image shift value of the line segment is substantially greater than the threshold, step 250 chooses the Bob algorithm for the de-interlacing process. Similarly, when the comparison result in step 240 shows that the image shift value of the line segment is substantially less than the threshold, step 250 chooses the Weave algorithm for the de-interlacing process. Moreover, the characteristic value (δi,n,m) of the line segment is saved.
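Steps 220 through 250 can be summarized in a short Python sketch. This is an illustrative sketch under the assumptions above; the sample Y values and the threshold are example numbers, not values fixed by the method.

```python
def segment_characteristic(odd_line, even_line, start, width):
    """Equation 1: sum of |Y| differences between a segment of an odd-field
    line and the same pixel positions of the adjacent even-field line."""
    return sum(abs(odd_line[start + k] - even_line[start + k])
               for k in range(width))

def choose_algorithm(delta_current, delta_reference, threshold):
    """Steps 230-250: the image shift value of the segment is the absolute
    difference of the two characteristic values; Bob for motion segments,
    Weave for still ones."""
    shift = abs(delta_current - delta_reference)
    return "bob" if shift > threshold else "weave"

# Example segment of width 4 (illustrative Y values):
delta = segment_characteristic([10, 64, 70, 83], [13, 40, 65, 70], 0, 4)
decision = choose_algorithm(delta, 33, 10)
```

Because the decision is made per segment rather than per frame, moving regions can take the Bob path while still regions of the same frame keep Weave's full vertical resolution.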
Consequently, each line segment is continually processed by the chosen de-interlacing algorithm in sequence to constitute a dynamic image with the de-interlacing process completed.
- FIG. 3 and FIGS. 4A-4F depict one specific embodiment of the present invention, wherein FIG. 3 schematically shows the flow chart of the line segment-based adaptive de-interlacing of the present invention, while FIGS. 4A-4F schematically show the diagram of the calculation procedure of the line segment-based adaptive de-interlacing of the present invention. The image resolution of the frame in FIG. 4 is 8×8 pixels. A player system provides readable image information and the width of a line segment; for instance, the image information of the current frame is read with a line segment width of 4, and the Y values of the 1st to the 4th pixels of a line segment in the odd field of the frame and the Y values of the 1st to the 4th pixels of the corresponding line segment in the even field of the frame are (10, 64, 70, 83) and (13, 40, 65, 70) respectively. The Y values of pixels of the other line segments in the odd and even fields of the frame are as shown in FIG. 4A. Next, in step 310, the characteristic value (δi,n,m) of the 1st line segment of the current frame is calculated by the formula in equation 1, wherein the calculation process and result are:
- δi,n,m = |10−13| + |64−40| + |70−65| + |83−70| = 45
- Referring to FIG. 4B, the characteristic value (δi,n,m) of the 1st line segment of the current frame is 45, wherein the reference frame in FIG. 4B is composed of the characteristic values of each line segment of the former frame. Next, in step 320, the absolute value of subtracting the characteristic value (δi-1,n,m) of the 1st line segment of the former reference frame from the characteristic value (δi,n,m) of the 1st line segment of the current frame is taken as the image shift value of the line segment. At this time, the image shift value of the 1st line segment of the current frame is 12 (i.e. 45−33), referring to FIG. 4C. Next, the image shift value of the line segment is compared with a programmable threshold. With a threshold of 10, the image shift value of the 1st line segment of the current frame is greater than the threshold, so the line segment is taken as a motion line segment with displacement, and the Bob algorithm is chosen for the de-interlacing process in step 330. Next, in step 350, the characteristic value (δi,n,m) of the line segment of the current frame is saved into the position of the characteristic value of the corresponding line segment of the former reference frame; in other words, it serves as the characteristic value (δi-1,n,m) of the former frame when calculating the image shift value of the corresponding line segment of the next frame. Referring to FIG. 4D, the characteristic value of the 1st line segment of the reference frame is replaced by 45. By the same process, the characteristic values of each line segment of the current frame are calculated, as shown in FIG. 4E. When the image shift value of the 2nd line segment of the current frame is less than the threshold, the line segment is taken as a still line segment without displacement, and the Weave algorithm is chosen for the de-interlacing process in step 340. The de-interlacing method applied to each line segment in FIG. 4E is as shown in the current frame in FIG. 4F; in other words, the images of the current frame in FIG. 4F are constituted according to the de-interlacing method applied to each line segment in FIG. 4E.
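The bookkeeping of steps 320 through 350 can be sketched as follows. This is illustrative Python, not the patent's implementation; the reference value 33 follows the worked example, while the second entry of the reference buffer is an arbitrary illustrative value.

```python
def process_segment(delta_new, reference, index, threshold):
    """Compute the segment's shift value against the stored reference
    characteristic, choose an algorithm, then overwrite the reference
    entry with the new value (the replacement of step 350)."""
    shift = abs(delta_new - reference[index])
    algorithm = "bob" if shift > threshold else "weave"
    reference[index] = delta_new  # becomes delta(i-1,n,m) for the next frame
    return algorithm, shift

reference = [33, 5]  # characteristic values saved from the former frame
algorithm, shift = process_segment(45, reference, 0, threshold=10)
```

After the call, the shift value is 12 and the reference buffer holds 45 in place of 33, matching the replacement shown between FIG. 4B and FIG. 4D.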
At this time, the characteristic values of each line segment of the reference frame are replaced by the characteristic values of the current frame, referring to the diagram of the reference frame in FIG. 4F.
- During the de-interlacing process of each line segment of the current frame, the player system continuously checks whether the image information read is at the end of the odd and even fields. When the image information contains the field-end signal, step 380 reads the next frame or stops the de-interlacing process. Otherwise, when the image information does not contain the field-end signal, the image information is checked again to see whether it is at the end of the scanning line. When the image information is at the end of the scanning line, step 360 reads the image information of the next scanning line. Similarly, when the image information is not at the end of the scanning line, step 370 reads the image information of the next line segment for calculating the characteristic value of the line segment. According to this flow chart, the de-interlacing process of each line segment of the current frame is enforced to acquire a dynamic image.
- In the following, one embodiment of an apparatus of adaptive de-interlacing of the present invention is depicted. FIG. 5 schematically shows the block chart of executing the adaptive de-interlacing of the present invention. The apparatus comprises a processing unit of adaptive de-interlacing 10, configured for connecting with a micro-processing unit 20, a memory unit 30 (which includes a buffer unit of video images 32 and a buffer unit of characteristic values 34) and a display unit 40. Moreover, the processing unit of adaptive de-interlacing 10 further comprises a calculating unit of characteristic values 12, a calculating unit of image shift value of line segment 14, a determining unit 16 and a processing unit of video images 18. First, the calculating unit of characteristic values 12 in the processing unit of adaptive de-interlacing 10 receives and reads the image information from the buffer unit of video images 32. Further, the image information in the buffer unit of video images 32 can be saved in the memory unit 30 by an input unit (not shown in FIG. 5) that saves each motion frame after decoding the image information of a disc (for instance, a DVD disc). When the calculating unit of characteristic values 12 reads the image information, the micro-processing unit 20 simultaneously delivers a signal with the width of a line segment to the processing unit of adaptive de-interlacing 10, so that the calculating unit of characteristic values 12, the calculating unit of image shift value of line segment 14, the determining unit 16 and the processing unit of video images 18 know how many pixels wide the line segment is. In the following, the calculating unit of characteristic values 12 executes the calculation of the characteristic value of the line segment in accordance with equation 1 and then delivers the calculation result (δi,n,m) to the calculating unit of image shift value of line segment 14.
When the calculating unit of image shift value of line segment 14 receives the characteristic value (δi,n,m) of a line segment of the current frame, it simultaneously reads the characteristic value (δi-1,n,m) of the same image position of a reference frame (for instance, the former frame) from the buffer unit of characteristic values 34 in the memory unit 30. Next, it takes the absolute value of the difference between the two characteristic values to acquire the image shift value of the line segment and then delivers it to the determining unit 16.
- After the determining unit 16 receives the threshold signal delivered from the micro-processing unit 20, it compares the image shift value of the line segment with the threshold and then delivers the comparison result to the processing unit of video images 18. Further, when the processing unit of video images 18 receives the comparison result from the determining unit 16 and it shows that the image shift value of the line segment is substantially greater than the threshold, the image address currently requiring de-interlacing is delivered to the buffer unit of video images 32. Moreover, the contents of the image address include the encoding contents of the odd and even fields. After the buffer unit of video images 32 delivers the encoding information of each image (from the memory unit 30) to the processing unit of video images 18 in sequence, the Bob algorithm is chosen for completing the image de-interlacing process of the line segment of the current frame. Finally, the processed image is delivered to the display unit 40 (for instance, an HDTV, PDP or LCD TV) for displaying. On the other hand, the calculating unit of characteristic values 12 saves the characteristic value (δi,n,m) of the current frame (acquired by the previous calculation) into the memory unit 30, to serve as the characteristic value (δi-1,n,m) of the former frame when calculating the image shift value of the line segment of the next frame. In the following, the processing unit of video images 18 delivers the image information (provided from the memory unit 30) to the calculating unit of characteristic values 12 for reading the image information of the next line segment.
- On the other hand, when the processing unit of video images 18 receives the comparison result from the determining unit 16 and it shows that the image shift value of the line segment is substantially less than the threshold, the image address currently requiring de-interlacing is likewise delivered to the buffer unit of video images 32. After the buffer unit of video images 32 delivers the encoding information of each image (from the memory unit 30) to the processing unit of video images 18, the Weave algorithm is chosen for completing the de-interlacing process of the line segment of the current frame. Finally, the processed image is delivered to the display unit 40 for displaying. Further, while the processing unit of video images 18 is executing the de-interlacing process continuously, the image information read by the calculating unit of characteristic values 12 is detected continuously. When contents of the encoding information containing the end of a field are detected, the de-interlacing process stops; otherwise, the adaptive de-interlacing process of each line segment of the next frame keeps executing.
- Since the apparatus of adaptive de-interlacing of the present invention must access the characteristic values of each line segment, it needs memory (for instance, the buffer unit of characteristic values 34) with enough space for accessing the characteristic values. For a frame composed of 720×460 pixels, the image resolution of the frame is 331,200 pixels. When one pixel is taken as the width of a line segment (i.e. the minimum width of a line segment) for processing the adaptive de-interlacing of the present invention, a memory space of 340K Bytes is required. Meanwhile, since the access of characteristic values uses the replacement method, i.e. reading the characteristic value (δi-1,n,m) of a line segment of the former frame, acquiring the image shift value of the line segment by calculation and then taking the characteristic value (δi,n,m) of the line segment of the current frame as the characteristic value (δi-1,n,m) of the reference frame when calculating the image shift value of the line segment of the next frame, the buffer unit of characteristic values 34 in the apparatus of adaptive de-interlacing of the present invention is adjustable according to the width of a line segment and its maximum is less than the value of the frame resolution. However, the space used by the buffer unit of characteristic values 34 is extremely low compared with a memory space of 256M Bytes for image processing. Thus, no matter whether the memory space for processing is provided by the memory unit 30 or is embedded in the processing unit of de-interlacing 10, it will not increase the hardware requirement much and good performance can therefore be acquired.
- As is well known, the smaller the image process unit of a de-interlacing process, the higher the image quality achieved. Conversely, it substantially increases the number of calculations and determinations required by the de-interlacing process. If de-interlacing is processed only by software, delay of the image broadcast will occur and an unnatural frame is produced. As a result, the rapid access characteristic of hardware should be used to solve the image delay problem. The apparatus of the present invention, using limited memory space for access of characteristic values, not only solves the image delay problem but also acquires a high-resolution frame of the dynamic image.
- Moreover, FIG. 5 schematically shows the block diagram of adaptive de-interlacing of the present invention. Although it is divided into different units, this does not indicate that these units (except for the micro-processing unit 20 and the display unit 40, configured for input and output respectively) must exist as independent devices. These units can be configured and combined in accordance with interface specifications and product requirements. For instance, when used in a high-level image processing workstation or a personal computer (PC) able to broadcast DVD films, the processing unit of de-interlacing 10 can be embedded into the CPU of the high-level system or be manufactured as an individual device (for instance, a chip) and then connected to the CPU. When used in a player (for instance, a DVD player), the processing unit of de-interlacing 10, the memory unit 30 and the micro-processing unit 20 can be integrated into one chip. As semiconductor manufacturing develops, the SOC (i.e. System on a Chip) technique is also well developed; therefore the processing unit of de-interlacing of the present invention can also be integrated into different application systems.
- While this invention has been described with reference to illustrative embodiments, this description is not intended to be construed in a limiting sense. Various modifications and combinations of the illustrative embodiments, as well as other embodiments of the invention, will be apparent to persons skilled in the art upon reference to the description. It is therefore intended that the appended claims encompass any such modifications or embodiments.
Claims (20)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US10/851,240 US7196731B2 (en) | 2003-05-23 | 2004-05-24 | Method and apparatus of adaptive de-interlacing of dynamic image |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US47273203P | 2003-05-23 | 2003-05-23 | |
US10/851,240 US7196731B2 (en) | 2003-05-23 | 2004-05-24 | Method and apparatus of adaptive de-interlacing of dynamic image |
Publications (2)
Publication Number | Publication Date |
---|---|
US20040233328A1 true US20040233328A1 (en) | 2004-11-25 |
US7196731B2 US7196731B2 (en) | 2007-03-27 |
Family
ID=33098338
Family Applications (10)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/851,239 Active 2025-10-29 US7206026B2 (en) | 2003-05-23 | 2004-05-24 | Method and apparatus for adaptive frame rate conversion |
US10/851,220 Abandoned US20040233204A1 (en) | 2003-05-23 | 2004-05-24 | Method and apparatus for pattern ram sharing color look up table |
US10/851,101 Active 2026-04-08 US7420569B2 (en) | 2003-05-23 | 2004-05-24 | Adaptive pixel-based blending method and system |
US10/851,241 Active 2025-09-29 US7190405B2 (en) | 2003-05-23 | 2004-05-24 | Method and apparatus of adaptive de-interlacing of dynamic image |
US10/851,224 Active 2025-10-14 US7206028B2 (en) | 2003-05-23 | 2004-05-24 | Method and apparatus of adaptive de-interlacing of dynamic image |
US10/851,242 Active 2025-10-08 US7242436B2 (en) | 2003-05-23 | 2004-05-24 | Selection methodology of de-interlacing algorithm of dynamic image |
US10/851,240 Active 2025-09-29 US7196731B2 (en) | 2003-05-23 | 2004-05-24 | Method and apparatus of adaptive de-interlacing of dynamic image |
US10/851,223 Abandoned US20040233217A1 (en) | 2003-05-23 | 2004-05-24 | Adaptive pixel-based blending method and system |
US10/851,222 Active 2029-08-12 US7812890B2 (en) | 2003-05-23 | 2004-05-24 | Auto-configuration for instrument setting |
US11/335,597 Abandoned US20060119605A1 (en) | 2003-05-23 | 2006-01-20 | Method and apparatus for pattern ram sharing color look up table |
Family Applications Before (6)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/851,239 Active 2025-10-29 US7206026B2 (en) | 2003-05-23 | 2004-05-24 | Method and apparatus for adaptive frame rate conversion |
US10/851,220 Abandoned US20040233204A1 (en) | 2003-05-23 | 2004-05-24 | Method and apparatus for pattern ram sharing color look up table |
US10/851,101 Active 2026-04-08 US7420569B2 (en) | 2003-05-23 | 2004-05-24 | Adaptive pixel-based blending method and system |
US10/851,241 Active 2025-09-29 US7190405B2 (en) | 2003-05-23 | 2004-05-24 | Method and apparatus of adaptive de-interlacing of dynamic image |
US10/851,224 Active 2025-10-14 US7206028B2 (en) | 2003-05-23 | 2004-05-24 | Method and apparatus of adaptive de-interlacing of dynamic image |
US10/851,242 Active 2025-10-08 US7242436B2 (en) | 2003-05-23 | 2004-05-24 | Selection methodology of de-interlacing algorithm of dynamic image |
Family Applications After (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/851,223 Abandoned US20040233217A1 (en) | 2003-05-23 | 2004-05-24 | Adaptive pixel-based blending method and system |
US10/851,222 Active 2029-08-12 US7812890B2 (en) | 2003-05-23 | 2004-05-24 | Auto-configuration for instrument setting |
US11/335,597 Abandoned US20060119605A1 (en) | 2003-05-23 | 2006-01-20 | Method and apparatus for pattern ram sharing color look up table |
Country Status (6)
Country | Link |
---|---|
US (10) | US7206026B2 (en) |
EP (1) | EP1480198A1 (en) |
JP (1) | JP4365728B2 (en) |
KR (1) | KR100541333B1 (en) |
CN (10) | CN1291593C (en) |
TW (9) | TWI332652B (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050046742A1 (en) * | 2003-09-02 | 2005-03-03 | Satoru Saito | Image signal processing circuit |
US20050270418A1 (en) * | 2004-05-18 | 2005-12-08 | Makoto Kondo | Image processing device and image processing method |
US20080136963A1 (en) * | 2006-12-08 | 2008-06-12 | Palfner Torsten | Method and apparatus for reconstructing image |
US20080158414A1 (en) * | 2006-12-29 | 2008-07-03 | Texas Instruments Incorporated | Method for detecting film pulldown cadences |
Families Citing this family (63)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6489998B1 (en) * | 1998-08-11 | 2002-12-03 | Dvdo, Inc. | Method and apparatus for deinterlacing digital video images |
US8698840B2 (en) * | 1999-03-05 | 2014-04-15 | Csr Technology Inc. | Method and apparatus for processing video and graphics data to create a composite output image having independent and separate layers of video and graphics display planes |
US7319754B2 (en) * | 2002-06-28 | 2008-01-15 | Stmicroelectronics S.A. | Insertion of binary messages in video pictures |
DE10340546B4 (en) | 2003-09-01 | 2006-04-20 | Siemens Ag | Method and apparatus for visually assisting electrophysiology catheter application in the heart |
DE10340544B4 (en) * | 2003-09-01 | 2006-08-03 | Siemens Ag | Device for visual support of electrophysiology catheter application in the heart |
GB2411784B (en) * | 2004-03-02 | 2006-05-10 | Imagination Tech Ltd | Motion compensation deinterlacer protection |
US7483577B2 (en) * | 2004-03-02 | 2009-01-27 | Mitsubishi Electric Research Laboratories, Inc. | System and method for joint de-interlacing and down-sampling using adaptive frame and field filtering |
US20050268226A1 (en) * | 2004-05-28 | 2005-12-01 | Lipsky Scott E | Method and system for displaying image information |
KR100631685B1 (en) * | 2004-06-15 | 2006-10-09 | 삼성전자주식회사 | Image processing apparatus and method |
TWI245198B (en) * | 2004-09-01 | 2005-12-11 | Via Tech Inc | Deinterlace method and method for generating deinterlace algorithm of display system |
KR20060021446A (en) * | 2004-09-03 | 2006-03-08 | 삼성전자주식회사 | Method for deinterlacing and apparatus thereof |
CN100411422C (en) * | 2004-09-13 | 2008-08-13 | 威盛电子股份有限公司 | Input-output regulating device and method of audio-visual system |
JP4366277B2 (en) * | 2004-09-21 | 2009-11-18 | キヤノン株式会社 | Imaging apparatus and control method thereof |
TWI280798B (en) * | 2004-09-22 | 2007-05-01 | Via Tech Inc | Apparatus and method of adaptive de-interlace of image |
US20060268978A1 (en) * | 2005-05-31 | 2006-11-30 | Yang Genkun J | Synchronized control scheme in a parallel multi-client two-way handshake system |
US7657255B2 (en) * | 2005-06-23 | 2010-02-02 | Microsoft Corporation | Provisioning of wireless connectivity for devices using NFC |
US7450184B1 (en) * | 2005-06-27 | 2008-11-11 | Magnum Semiconductor, Inc. | Circuits and methods for detecting 2:2 encoded video and systems utilizing the same |
US7420626B1 (en) * | 2005-06-27 | 2008-09-02 | Magnum Semiconductor, Inc. | Systems and methods for detecting a change in a sequence of interlaced data fields generated from a progressive scan source |
US7522214B1 (en) * | 2005-06-27 | 2009-04-21 | Magnum Semiconductor, Inc. | Circuits and methods for deinterlacing video display data and systems using the same |
US7623183B2 (en) * | 2005-06-29 | 2009-11-24 | Novatek Microelectronics Corp. | Frame rate adjusting method and apparatus for displaying video on interlace display devices |
US7456904B2 (en) * | 2005-09-22 | 2008-11-25 | Pelco, Inc. | Method and apparatus for superimposing characters on video |
US7924345B2 (en) * | 2005-10-20 | 2011-04-12 | Broadcom Corp. | Method and system for deinterlacing using polarity change count |
JP4687404B2 (en) * | 2005-11-10 | 2011-05-25 | ソニー株式会社 | Image signal processing apparatus, imaging apparatus, and image signal processing method |
US8659704B2 (en) | 2005-12-20 | 2014-02-25 | Savant Systems, Llc | Apparatus and method for mixing graphics with video images |
US20070143801A1 (en) * | 2005-12-20 | 2007-06-21 | Madonna Robert P | System and method for a programmable multimedia controller |
CA2636858C (en) | 2006-01-27 | 2015-11-24 | Imax Corporation | Methods and systems for digitally re-mastering of 2d and 3d motion pictures for exhibition with enhanced visual quality |
WO2007129257A1 (en) * | 2006-05-04 | 2007-11-15 | Koninklijke Philips Electronics N.V. | Controlled frame rate conversion |
CA2653815C (en) | 2006-06-23 | 2016-10-04 | Imax Corporation | Methods and systems for converting 2d motion pictures for stereoscopic 3d exhibition |
JP5053373B2 (en) | 2006-06-29 | 2012-10-17 | トムソン ライセンシング | Adaptive pixel-based filtering |
US7940973B2 (en) * | 2006-09-19 | 2011-05-10 | Capso Vision Inc. | Capture control for in vivo camera |
EP2070042A2 (en) * | 2006-09-29 | 2009-06-17 | THOMSON Licensing | Automatic parameter estimation for adaptive pixel-based filtering |
US8233087B2 (en) * | 2006-11-08 | 2012-07-31 | Marvell International Ltd. | Systems and methods for deinterlacing high-definition and standard-definition video |
JP4270270B2 (en) * | 2006-12-05 | 2009-05-27 | ソニー株式会社 | Electronic device, imaging apparatus, electronic device control method and program |
US8612857B2 (en) * | 2007-01-08 | 2013-12-17 | Apple Inc. | Monitor configuration for media device |
US8607144B2 (en) * | 2007-01-08 | 2013-12-10 | Apple Inc. | Monitor configuration for media device |
US8233086B2 (en) * | 2007-06-08 | 2012-07-31 | Nintendo Co., Ltd. | Process for digitizing video over analog component video cables |
MX2010002657A (en) * | 2007-09-05 | 2010-04-09 | Savant Systems Llc | Multimedia control and distribution architechture. |
US20090161011A1 (en) * | 2007-12-21 | 2009-06-25 | Barak Hurwitz | Frame rate conversion method based on global motion estimation |
TWI384884B (en) * | 2008-01-11 | 2013-02-01 | Ultrachip Inc | Method for displaying dynamical colorful picture frame |
US8275033B2 (en) * | 2008-01-15 | 2012-09-25 | Sony Corporation | Picture mode selection for video transcoding |
US9204086B2 (en) * | 2008-07-17 | 2015-12-01 | Broadcom Corporation | Method and apparatus for transmitting and using picture descriptive information in a frame rate conversion processor |
KR101467875B1 (en) * | 2008-09-04 | 2014-12-02 | 삼성전자주식회사 | Digital camera for varying frame rate and the controlling method thereof |
US10075670B2 (en) * | 2008-09-30 | 2018-09-11 | Entropic Communications, Llc | Profile for frame rate conversion |
TWI384865B (en) * | 2009-03-18 | 2013-02-01 | Mstar Semiconductor Inc | Image processing method and circuit |
TWI452909B (en) * | 2009-06-29 | 2014-09-11 | Silicon Integrated Sys Corp | Circuit for correcting motion vectors, image generating device and method thereof |
CN102165779B (en) * | 2009-07-29 | 2015-02-25 | 松下电器(美国)知识产权公司 | Image processing apparatus, image coding method, program and integrated circuit |
JP5641743B2 (en) * | 2010-02-02 | 2014-12-17 | キヤノン株式会社 | Image processing apparatus and image processing apparatus control method |
CN102214427A (en) * | 2010-04-02 | 2011-10-12 | 宏碁股份有限公司 | Displayer and display method thereof |
TWI412278B (en) * | 2010-05-03 | 2013-10-11 | Himax Tech Ltd | Film-mode frame rate up conversion system and method |
JP5810307B2 (en) * | 2010-05-10 | 2015-11-11 | パナソニックIpマネジメント株式会社 | Imaging device |
JP2013026727A (en) * | 2011-07-19 | 2013-02-04 | Sony Corp | Display device and display method |
US9324170B2 (en) | 2011-08-18 | 2016-04-26 | Hewlett-Packard Development Company, L.P. | Creating a blended image |
JP5925957B2 (en) * | 2013-04-05 | 2016-05-25 | 株式会社東芝 | Electronic device and handwritten data processing method |
CN106412647B (en) * | 2015-07-29 | 2019-05-31 | 国基电子(上海)有限公司 | The set-top box of signal switching system and application the signal switching system |
US9552623B1 (en) * | 2015-11-04 | 2017-01-24 | Pixelworks, Inc. | Variable frame rate interpolation |
CN106569766A (en) * | 2016-11-08 | 2017-04-19 | 惠州Tcl移动通信有限公司 | Method and system for performing virtual dynamic processing based on display interface |
JP6958249B2 (en) * | 2017-11-06 | 2021-11-02 | セイコーエプソン株式会社 | Profile adjustment system, profile adjustment device, profile adjustment method, and profile adjustment program |
US10230920B1 (en) * | 2017-12-06 | 2019-03-12 | Pixelworks, Inc. | Adjusting interpolation phase for MEMC using image analysis |
US10977809B2 (en) | 2017-12-11 | 2021-04-13 | Dolby Laboratories Licensing Corporation | Detecting motion dragging artifacts for dynamic adjustment of frame rate conversion settings |
JP6877643B2 (en) * | 2018-06-22 | 2021-05-26 | 三菱電機株式会社 | Video display device |
EP3648059B1 (en) * | 2018-10-29 | 2021-02-24 | Axis AB | Video processing device and method for determining motion metadata for an encoded video |
US11064108B2 (en) * | 2019-08-21 | 2021-07-13 | Sony Corporation | Frame rate control for media capture based on rendered object speed |
US11593061B2 (en) | 2021-03-19 | 2023-02-28 | International Business Machines Corporation | Internet of things enable operated aerial vehicle to operated sound intensity detector |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6108047A (en) * | 1997-10-28 | 2000-08-22 | Stream Machine Company | Variable-size spatial and temporal video scaler |
US6330032B1 (en) * | 1999-09-30 | 2001-12-11 | Focus Enhancements, Inc. | Motion adaptive de-interlace filter |
US7129987B1 (en) * | 2003-07-02 | 2006-10-31 | Raymond John Westwater | Method for converting the resolution and frame rate of video data using Discrete Cosine Transforms |
Family Cites Families (54)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPS628193A (en) * | 1985-07-04 | 1987-01-16 | インタ−ナショナル ビジネス マシ−ンズ コ−ポレ−ション | Color image display system |
US4829455A (en) * | 1986-04-11 | 1989-05-09 | Quantel Limited | Graphics system for video and printed images |
US4916301A (en) * | 1987-02-12 | 1990-04-10 | International Business Machines Corporation | Graphics function controller for a high performance video display system |
US5175811A (en) * | 1987-05-20 | 1992-12-29 | Hitachi, Ltd. | Font data processor using addresses calculated on the basis of access parameters |
US5128776A (en) * | 1989-06-16 | 1992-07-07 | Harris Corporation | Prioritized image transmission system and method |
US5631850A (en) * | 1992-09-11 | 1997-05-20 | Sony Corporation | Audio visual equipment with a digital bus system and method for initializing and confirming connection |
US5565929A (en) * | 1992-10-13 | 1996-10-15 | Sony Corporation | Audio-visual control apparatus for determining a connection of appliances and controlling functions of appliances |
US5557733A (en) * | 1993-04-02 | 1996-09-17 | Vlsi Technology, Inc. | Caching FIFO and method therefor |
JP3272463B2 (en) * | 1993-04-15 | 2002-04-08 | 株式会社ソニー・コンピュータエンタテインメント | Image forming apparatus and method of using the same |
US5586236A (en) * | 1993-08-11 | 1996-12-17 | Object Technology Licensing Corp. | Universal color look up table and method of generation |
US5444835A (en) * | 1993-09-02 | 1995-08-22 | Apple Computer, Inc. | Apparatus and method for forming a composite image pixel through pixel blending |
JP3228381B2 (en) * | 1993-10-29 | 2001-11-12 | ソニー株式会社 | AV selector |
US5557734A (en) * | 1994-06-17 | 1996-09-17 | Applied Intelligent Systems, Inc. | Cache burst architecture for parallel processing, such as for image processing |
US5521644A (en) * | 1994-06-30 | 1996-05-28 | Eastman Kodak Company | Mechanism for controllably deinterlacing sequential lines of video data field based upon pixel signals associated with four successive interlaced video fields |
US5742298A (en) * | 1994-12-30 | 1998-04-21 | Cirrus Logic, Inc. | 64 bit wide video front cache |
JPH0944693A (en) * | 1995-08-02 | 1997-02-14 | Victor Co Of Japan Ltd | Graphic display device |
US5721842A (en) * | 1995-08-25 | 1998-02-24 | Apex Pc Solutions, Inc. | Interconnection system for viewing and controlling remotely connected computers with on-screen video overlay for controlling of the interconnection switch |
JP4191246B2 (en) * | 1995-11-08 | 2008-12-03 | ジェネシス マイクロチップ インク | Method and apparatus for non-interlaced scanning of video fields into progressively scanned video frames |
US6023302A (en) * | 1996-03-07 | 2000-02-08 | Powertv, Inc. | Blending of video images in a home communications terminal |
US5787466A (en) * | 1996-05-01 | 1998-07-28 | Sun Microsystems, Inc. | Multi-tier cache and method for implementing such a system |
KR100203264B1 (en) * | 1996-06-29 | 1999-06-15 | 윤종용 | Method and device for subpicture decoding in a digital video disc system |
JPH10126702A (en) * | 1996-10-23 | 1998-05-15 | Kokusai Electric Co Ltd | Signal connection switching device |
JP3996955B2 (en) * | 1996-12-06 | 2007-10-24 | コーニンクレッカ フィリップス エレクトロニクス エヌ ヴィ | Mixing graphics and video signals |
KR100238579B1 (en) * | 1997-04-15 | 2000-01-15 | 윤종용 | Method and apparatus for automatically selecting bnc/d-sub signal of display device having dpms function |
US5864369A (en) * | 1997-06-16 | 1999-01-26 | Ati International Srl | Method and apparatus for providing interlaced video on a progressive display |
KR100249228B1 (en) * | 1997-08-28 | 2000-03-15 | 구자홍 | Aspect Ratio Conversion Apparatus in Digital Television |
KR100287850B1 (en) * | 1997-12-31 | 2001-05-02 | 구자홍 | Deinterlacing system and method of digital tv |
US6198543B1 (en) * | 1998-02-05 | 2001-03-06 | Canon Kabushiki Kaisha | Color table look-up using compressed image data |
JPH11355585A (en) * | 1998-06-04 | 1999-12-24 | Toshiba Corp | Color image processor |
US6489998B1 (en) * | 1998-08-11 | 2002-12-03 | Dvdo, Inc. | Method and apparatus for deinterlacing digital video images |
US6515706B1 (en) * | 1998-09-15 | 2003-02-04 | Dvdo, Inc. | Method and apparatus for detecting and smoothing diagonal features video images |
WO2000065571A1 (en) * | 1999-04-26 | 2000-11-02 | Gibson Guitar Corp. | Universal audio communications and control system and method |
JP2003500944A (en) * | 1999-05-25 | 2003-01-07 | コーニンクレッカ フィリップス エレクトロニクス エヌ ヴィ | Method for converting an interlaced image signal to a progressively scanned image signal |
US6331874B1 (en) * | 1999-06-29 | 2001-12-18 | Lsi Logic Corporation | Motion compensated de-interlacing |
US6421090B1 (en) * | 1999-08-27 | 2002-07-16 | Trident Microsystems, Inc. | Motion and edge adaptive deinterlacing |
US6459455B1 (en) * | 1999-08-31 | 2002-10-01 | Intel Corporation | Motion adaptive deinterlacing |
JP3587113B2 (en) * | 2000-01-17 | 2004-11-10 | ヤマハ株式会社 | Connection setting device and medium |
US6480231B1 (en) * | 2000-03-24 | 2002-11-12 | Flashpoint Technology, Inc. | Efficiently de-interlacing a buffer of image data |
WO2001080219A2 (en) * | 2000-04-14 | 2001-10-25 | Realnetworks, Inc. | A system and method of providing music items to music renderers |
US6690425B1 (en) * | 2000-06-22 | 2004-02-10 | Thomson Licensing S.A. | Aspect ratio control arrangement in a video display |
US6661464B1 (en) * | 2000-11-21 | 2003-12-09 | Dell Products L.P. | Dynamic video de-interlacing |
KR100351160B1 (en) * | 2000-12-06 | 2002-09-05 | 엘지전자 주식회사 | Apparatus and method for compensating video motions |
EP1356670A1 (en) * | 2000-12-11 | 2003-10-29 | Koninklijke Philips Electronics N.V. | Motion compensated de-interlacing in video signal processing |
US7123212B2 (en) * | 2000-12-22 | 2006-10-17 | Harman International Industries, Inc. | Information transmission and display method and system for a handheld computing device |
US7020213B2 (en) * | 2000-12-28 | 2006-03-28 | Teac Corporation | Method and apparatus for selectively providing different electric signal paths between circuits |
US7030930B2 (en) * | 2001-03-06 | 2006-04-18 | Ati Technologies, Inc. | System for digitized audio stream synchronization and method thereof |
US6859235B2 (en) * | 2001-05-14 | 2005-02-22 | Webtv Networks Inc. | Adaptively deinterlacing video on a per pixel basis |
JP4596222B2 (en) * | 2001-06-26 | 2010-12-08 | ソニー株式会社 | Image processing apparatus and method, recording medium, and program |
KR100412503B1 (en) * | 2001-12-13 | 2003-12-31 | 삼성전자주식회사 | SetTop Box capable of setting easily resolution of digital broadcast signal |
US7061540B2 (en) * | 2001-12-19 | 2006-06-13 | Texas Instruments Incorporated | Programmable display timing generator |
KR100902315B1 (en) * | 2002-07-25 | 2009-06-12 | 삼성전자주식회사 | Apparatus and method for deinterlacing |
CN1175378C (en) * | 2002-07-26 | 2004-11-10 | 威盛电子股份有限公司 | Device and method for processing covered picture to become transparent one |
US7113597B2 (en) * | 2002-10-24 | 2006-09-26 | Hewlett-Packard Development Company, L.P. | System and method for protection of video signals |
US7034888B2 (en) * | 2003-03-26 | 2006-04-25 | Silicon Integrated Systems Corp. | Method for motion pixel detection |
-
2003
- 2003-08-01 TW TW092121178A patent/TWI332652B/en not_active IP Right Cessation
- 2003-12-11 CN CNB2003101204385A patent/CN1291593C/en not_active Expired - Lifetime
- 2003-12-31 TW TW092137847A patent/TWI229560B/en not_active IP Right Cessation
- 2003-12-31 TW TW092137803A patent/TWI238002B/en not_active IP Right Cessation
- 2003-12-31 TW TW092137801A patent/TWI256598B/en not_active IP Right Cessation
- 2003-12-31 TW TW092137846A patent/TWI236290B/en not_active IP Right Cessation
- 2003-12-31 TW TW092137849A patent/TWI240562B/en active
-
2004
- 2004-02-05 CN CNB2004100036527A patent/CN1278551C/en not_active Expired - Lifetime
- 2004-02-05 CN CNB2004100036512A patent/CN1271854C/en not_active Expired - Lifetime
- 2004-02-10 CN CNB2004100038950A patent/CN1324890C/en not_active Expired - Lifetime
- 2004-02-10 CN CNB2004100038931A patent/CN1272963C/en not_active Expired - Lifetime
- 2004-02-10 CN CNB2004100038946A patent/CN1266935C/en not_active Expired - Lifetime
- 2004-03-25 TW TW093108190A patent/TWI266521B/en not_active IP Right Cessation
- 2004-04-06 CN CNB200410033536XA patent/CN1302373C/en not_active Expired - Lifetime
- 2004-05-19 TW TW093114095A patent/TWI254892B/en active
- 2004-05-19 TW TW093114096A patent/TWI289993B/en active
- 2004-05-21 EP EP04012090A patent/EP1480198A1/en not_active Ceased
- 2004-05-24 US US10/851,239 patent/US7206026B2/en active Active
- 2004-05-24 CN CN201110048892.9A patent/CN102123250B/en not_active Expired - Lifetime
- 2004-05-24 US US10/851,220 patent/US20040233204A1/en not_active Abandoned
- 2004-05-24 US US10/851,101 patent/US7420569B2/en active Active
- 2004-05-24 CN CNB2004100457719A patent/CN1324903C/en not_active Expired - Lifetime
- 2004-05-24 KR KR1020040036992A patent/KR100541333B1/en active IP Right Grant
- 2004-05-24 CN CNA2004100457723A patent/CN1545309A/en active Pending
- 2004-05-24 US US10/851,241 patent/US7190405B2/en active Active
- 2004-05-24 US US10/851,224 patent/US7206028B2/en active Active
- 2004-05-24 JP JP2004153876A patent/JP4365728B2/en not_active Expired - Lifetime
- 2004-05-24 US US10/851,242 patent/US7242436B2/en active Active
- 2004-05-24 US US10/851,240 patent/US7196731B2/en active Active
- 2004-05-24 US US10/851,223 patent/US20040233217A1/en not_active Abandoned
- 2004-05-24 US US10/851,222 patent/US7812890B2/en active Active
-
2006
- 2006-01-20 US US11/335,597 patent/US20060119605A1/en not_active Abandoned
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6108047A (en) * | 1997-10-28 | 2000-08-22 | Stream Machine Company | Variable-size spatial and temporal video scaler |
US6330032B1 (en) * | 1999-09-30 | 2001-12-11 | Focus Enhancements, Inc. | Motion adaptive de-interlace filter |
US7129987B1 (en) * | 2003-07-02 | 2006-10-31 | Raymond John Westwater | Method for converting the resolution and frame rate of video data using Discrete Cosine Transforms |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050046742A1 (en) * | 2003-09-02 | 2005-03-03 | Satoru Saito | Image signal processing circuit |
US20050270418A1 (en) * | 2004-05-18 | 2005-12-08 | Makoto Kondo | Image processing device and image processing method |
US7379120B2 (en) * | 2004-05-18 | 2008-05-27 | Sony Corporation | Image processing device and image processing method |
US20080136963A1 (en) * | 2006-12-08 | 2008-06-12 | Palfner Torsten | Method and apparatus for reconstructing image |
US8115864B2 (en) * | 2006-12-08 | 2012-02-14 | Panasonic Corporation | Method and apparatus for reconstructing image |
US20080158414A1 (en) * | 2006-12-29 | 2008-07-03 | Texas Instruments Incorporated | Method for detecting film pulldown cadences |
US8115866B2 (en) * | 2006-12-29 | 2012-02-14 | Texas Instruments Incorporated | Method for detecting film pulldown cadences |
Also Published As
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US7196731B2 (en) | Method and apparatus of adaptive de-interlacing of dynamic image | |
US6295089B1 (en) | Unsampled hd MPEG video and half-pel motion compensation | |
US6151075A (en) | Device and method for converting frame rate | |
US6104753A (en) | Device and method for decoding HDTV video | |
US6385248B1 (en) | Methods and apparatus for processing luminance and chrominance image data | |
US5793435A (en) | Deinterlacing of video using a variable coefficient spatio-temporal filter | |
US7471834B2 (en) | Rapid production of reduced-size images from compressed video streams | |
JP3092610B2 (en) | Moving picture decoding method, computer-readable recording medium on which the method is recorded, and moving picture decoding apparatus | |
US6442206B1 (en) | Anti-flicker logic for MPEG video decoder with integrated scaling and display functions | |
JP2000224591A (en) | Overall video decoding system, frame buffer, coding stream processing method, frame buffer assignment method and storage medium | |
JP2003501902A (en) | Video signal encoding | |
US20060072668A1 (en) | Adaptive vertical macroblock alignment for mixed frame video sequences | |
JP2002290876A (en) | Method for presenting motion image sequences | |
US7129987B1 (en) | Method for converting the resolution and frame rate of video data using Discrete Cosine Transforms | |
JPH0937243A (en) | Moving image coder and decoder | |
US6243140B1 (en) | Methods and apparatus for reducing the amount of buffer memory required for decoding MPEG data and for performing scan conversion | |
US6909752B2 (en) | Circuit and method for generating filler pixels from the original pixels in a video stream | |
JP2998741B2 (en) | Moving picture encoding method, computer-readable recording medium on which the method is recorded, and moving picture encoding apparatus | |
KR20020011247A (en) | Apparatus and method for increasing definition of digital television | |
US20050025462A1 (en) | Method and apparatus for equipping personal digital product with functions of recording and displaying of the digital video/audio multi-media | |
US7050494B1 (en) | Frame display method and apparatus using single field data | |
US20040066466A1 (en) | Progressive conversion of interlaced video based on coded bitstream analysis | |
Martins et al. | A unified approach to restoration, deinterlacing and resolution enhancement in decoding MPEG-2 video | |
US7423692B2 (en) | De-interlace method and method for generating de-interlace algorithm | |
JP2007336239A (en) | Digital broadcast receiver, and storing/reproducing method of digital broadcast signal |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: VIA TECHNOLOGIES, INC., TAIWAN Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:TSAO, SHENG-CHE;HSIUNG, JACKIE;CHIU, AN-TE;REEL/FRAME:015378/0283;SIGNING DATES FROM 20031119 TO 20031120 |
|
AS | Assignment |
Owner name: VIA TECHNOLOGIES, INC., TAIWAN Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:TSAO, SHENG-CHE;HSIUNG, JACKIE;CHIU, AN-TE;REEL/FRAME:018724/0982;SIGNING DATES FROM 20031119 TO 20031120 |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
FPAY | Fee payment |
Year of fee payment: 4 |
|
FPAY | Fee payment |
Year of fee payment: 8 |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 12TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1553); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 12 |