US20100026897A1 - Method, Apparatus, and Computer Software for Modifying Moving Images Via Motion Compensation Vectors, Degrain/Denoise, and Superresolution


Info

Publication number
US20100026897A1
US20100026897A1 (Application No. US 12/413,093)
Authority
US
United States
Prior art keywords
software
artifacts
frames
video stream
filter
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US12/413,093
Other languages
English (en)
Inventor
Dillon Sharlet
Lance Maurer
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Cinnafilm Inc
Original Assignee
Cinnafilm Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Cinnafilm Inc filed Critical Cinnafilm Inc
Priority to US12/413,093 priority Critical patent/US20100026897A1/en
Priority to EP09803294.9A priority patent/EP2556488A4/fr
Priority to PCT/US2009/038769 priority patent/WO2010014271A1/fr
Assigned to CINNAFILM, INC. reassignment CINNAFILM, INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: MAURER, LANCE, SHARLET, DILLON
Publication of US20100026897A1 publication Critical patent/US20100026897A1/en
Abandoned legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/14Picture signal circuitry for video frequency region
    • H04N5/144Movement detection
    • H04N5/145Movement estimation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4053Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/223Analysis of motion using block-matching
    • G06T7/238Analysis of motion using block-matching using non-full search, e.g. three-step search
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/14Picture signal circuitry for video frequency region
    • H04N5/21Circuitry for suppressing or minimising disturbance, e.g. moiré or halo
    • H04N5/213Circuitry for suppressing or minimising impulsive noise
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/01Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level
    • H04N7/0112Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level one of the standards corresponding to a cinematograph film standard
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20016Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20021Dividing image into blocks, subimages or windows

Definitions

  • the present invention relates to methods, apparatuses, and software for substantially removing artifacts from motion picture footage, such as film grain and noise effects, including impulsive noise such as dust.
  • Cinnafilm® streamlines current production processes for professional producers, editors, and filmmakers who use digital video to create their media projects.
  • the invention permits conversion of old film stock to digital formats without the need for long rendering times and extensive operator intervention associated with current technologies.
  • the present invention is of a video processing method and concomitant computer software stored on a computer-readable medium, comprising: receiving a video stream comprising a plurality of frames; removing via one or more GPU operations a plurality of artifacts from the video stream; outputting the video stream with the removed artifacts; and tracking artifacts between an adjacent subset of the plurality of frames prior to the removing step.
  • tracking comprises computing motion vectors for the tracked artifacts, including computing motion vectors for the tracked artifacts with at least a primary vector field and a secondary vector field with double the resolution of the primary vector field, computing motion vectors for the tracked artifacts via subpixel interpolation without favoring integer pixel lengths, and/or computing motion vectors for the tracked artifacts with a hierarchical set of resolutions of frames of the video stream.
  • Removing comprises removing artifacts that are identified via assumption that a motion compensated image signal is relatively constant compared to the artifacts, including employing a temporal wavelet filter by motion compensating a plurality of frames to be at a same point in time, performing an undecimated wavelet transform of each temporal frame, and applying a filter to each band of the wavelet transform and/or employing a Wiener filter using as an input a film grain profile image sequence extracted from the plurality of frames to remove film grain artifacts. Artifacts are prevented from being introduced into the video stream via a motion compensated temporal median filter employing confidence values. Superresolution analysis is performed on the video stream that is constant in time with respect to a number of frames used in the analysis.
  • FIG. 1 illustrates an inefficient standard implementation of transforming S into HH, HL, LH, and LL subbands of the wavelet transform;
  • FIG. 2 illustrates a preferred efficient GPU implementation using multiple render targets;
  • FIGS. 3(a) and 3(b) are diagrams illustrating colored vector candidates in M for the corresponding motion vectors in field M+1; dashed vectors identify improvements in accuracy along the edge of an object;
  • FIG. 4 is an illustration of subpixel interpolation; the original motion vector is dotted, and the shifted vector to compensate for subpixel interpolation is solid; bold grid lines are integer pixel locations, lighter grid lines are fractional pixel locations; in this example, the original vector had interpolation factors of (0,0)→(0.75,0.5); the adjusted vector has interpolation factors of (0.125,0.75)→(0.875,0.25), both of which are equally distant from the nearest of 0 or 1;
  • FIG. 7 is an illustration of the preferred temporal median calculation steps of the invention.
  • Embodiments of the present invention relate to methods, apparatuses, and software to enhance moving video images at the coded level to remove (and/or add) artifacts (such as film grain and other noise), preferably in real time (processing speed equal to or greater than approximately 30 frames per second). Accordingly, with the invention, processed digital video can be viewed "live" as the source video is fed in. So, for example, the invention is useful with video "streamed" from the Internet, as well as in converting motion pictures stored on physical film.
  • Internal Video Processing Hardware preferably comprises a general purpose CPU (Pentium4®, Core2 Duo®, Core2 Quad® class), graphics card (DX9 PS3.0 or better capable), system board with expandability for video I/O cards (preferably PCI compatible), system memory, power supply, and hard drive.
  • a Front Panel User Interface preferably comprises a standard keyboard and mouse usable menu for access to image-modification features of the invention, along with three dials to assist in the fine tuning of the input levels. The menu is most preferably displayed on a standard video monitor. With the menu, the user can access at least some features and more preferably the entire set of features at any time, and can adjust subsets of those features.
  • the invention can also or alternatively be implemented with a panel display that includes a touchscreen.
  • the apparatus of the invention is preferably built into a sturdy, thermally proficient mechanical chassis, and conforms to common industry rack-mount standards.
  • the apparatus preferably has two sturdy handles for ease of installation.
  • I/O ports are preferably located in the front of the device on opposite ends.
  • Power on/off is preferably located in the front of the device, in addition to all user interfaces and removable storage devices (e.g., DVD drives, CD-ROM drives, USB inputs, Firewire inputs, and the like).
  • the power cord preferably protrudes from the unit at the rear.
  • An Ethernet port is preferably located anywhere on the box for convenience, but hidden using a removable panel.
  • the box is preferably anodized black wherever possible, and constructed in such a manner as to cool itself via convection only.
  • the apparatus of the invention is preferably locked down and secured to prevent tampering.
  • An apparatus takes in a digital video/audio stream on an input port (preferably SDI) or from a video data file or files, and optionally uses a digital video compression-decompression software module (CODEC) to decompress the video frames and the audio buffers to separate paths (channels).
  • the video is preferably decompressed to a two dimensional (2D) array of pixel interleaved luminance-chrominance (YCbCr) data in either 4:4:4 or 4:2:2 sampling, or, optionally, red, green, and blue color components (RGB image, 8-bits per component).
  • the RGB image is optionally converted to a red, green, blue, and alpha component (RGBA, 8-bits per component) buffer.
  • the audio and video is then processed by a sequence of operations, and then can be output to a second output port (SDI) or video data file or files.
  • one embodiment of the present invention preferably utilizes commodity x86 platform hardware, high end graphics hardware, and highly pipelined, buffered, and optimized software to achieve the process in realtime (or near realtime with advanced processing).
  • This configuration is highly reconfigurable, can rapidly adopt new video standards, and leverages the rapid advances occurring in the graphics hardware industry.
  • the video processing methods can work with any uncompressed video frame (YCbCr or RGB 2D array) that is interlaced or non-interlaced and at any frame rate, including 50 or 60 fields per second interlaced (50i, 60i), 25 or 30 frames per second progressive (25p, 30p), and 24 frames per second progressive, optionally encoded in the 2:3 pulldown or 2:3:3:2 pulldown formats.
  • There are numerous CODECs that exist to convert compressed video to uncompressed YCbCr or RGB 2D array frames, and this embodiment of the present invention can work with any of them.
  • an ‘operation’ is the fundamental building block of the Cinnafilm engine.
  • An operation has one critical function, ‘Frame’, which has the index of the frame to be processed as an argument.
  • the operation queries upstream operations until an input operation is reached; an input operation implements 'Frame' in isolation by reading frames from an outside source (instead of processing existing frames).
  • There are preferably four types of operations: (1) Video operations, (2) GPU (Graphics Processing Unit) operations, (3) Audio operations, and (4) Interleaved operations.
  • the type of operation indicates what type of frame that operation operates on. Interleaved frames are frames that possess both a video and an audio frame.
  • GPU frames are video frames that are stored in video memory on a graphics card. GPU operations transform one video memory frame into another video memory frame.
  • The GPU operation converts video frames to GPU frames and back. It is technically a video operation, but it accepts GPU operations as its child nodes. Video frames go into the GPU operation, are processed by GPU operations on the GPU, and then the GPU operation downloads the frames back to the CPU for further processing.
  • AudioVideo converts interleaved frames into separate audio and video frames, which can then be processed by audio and video operations (a minimal sketch of this operation graph follows).
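The operation graph just described can be illustrated with a minimal sketch; this is not the Cinnafilm engine's actual code, and the class names (Operation, InputOperation, BrightnessOperation) are hypothetical, but it shows how a node's 'Frame' function pulls frames from its upstream operations until an input operation reads them from an outside source.

```python
# Minimal sketch (not the actual engine code) of the operation graph described
# above: each operation exposes a frame(index) function and pulls the frames it
# needs from its upstream (child) operations. Names here are hypothetical.

class Operation:
    def __init__(self, *upstream):
        self.upstream = upstream

    def frame(self, index):
        raise NotImplementedError


class InputOperation(Operation):
    """Leaf node: implements frame() in isolation by reading from a source."""
    def __init__(self, source_frames):
        super().__init__()
        self.source_frames = source_frames  # e.g. frames decoded by a CODEC

    def frame(self, index):
        return self.source_frames[index]


class BrightnessOperation(Operation):
    """Example processing node: queries its upstream operation, then transforms."""
    def __init__(self, upstream, gain=1.1):
        super().__init__(upstream)
        self.gain = gain

    def frame(self, index):
        pixels = self.upstream[0].frame(index)   # pull from upstream
        return [min(255, int(p * self.gain)) for p in pixels]


# Usage: a two-node chain; requesting frame 1 walks the graph upstream.
source = InputOperation([[10, 20, 30], [40, 50, 60]])
chain = BrightnessOperation(source, gain=1.5)
print(chain.frame(1))   # -> [60, 75, 90]
```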
  • the present application next describes the preferred GPU implementation details of the Fast Fourier Transform using the Cooley-Tukey algorithm with Stockham autosort.
  • the 2D (two-dimensional) Fourier transform is performed on the GPU in six steps: Load, transform X, transform Y, inverse transform X, inverse transform Y, and Save.
  • The forward transform comprises the first three steps and the inverse transform the last three steps; between them, any number of filters and frequency domain operations can be applied.
  • Each group of six steps plus the filtering operations operates on a number of variably overlapping blocks in the input frame.
  • the load operation handles any windowing, zero padding, and block overlapping necessary to make the frame fit into a set of uniformly sized blocks whose dimensions are a power of two.
  • transform X and transform Y are performed with the Stockham autosort algorithm for performing Cooley-Tukey decimation-in-time. These two steps are identical except for the axis on which they operate. The inverse transforms in X and Y are performed using the same algorithm as transform X and transform Y, except for negating the sign of the twiddle factors and applying a normalization constant.
  • the save operation uses the graphics card's geometry and alpha blending capability to overlap and sum the blocks, again with a weighted window function. This is accomplished by drawing the blocks as a set of overlapping quads. Alternatively, a shader program can be employed to compute the addresses of the pixels within the required blocks.
  • Fourier transforms on the GPU are performed with two complex transforms in parallel, vector-wise.
  • Two complex numbers x1 + iy1 and x2 + iy2 are stored in a 4-component vector as (x1, y1, x2, y2).
  • GPUs typically operate most efficiently on four component vectors due to their design for handling RGBA data.
  • many transforms are performed in parallel by putting many blocks into a single texture. For example, a frame broken up into M blocks × N blocks would be processed in one call by putting M × N blocks in a single texture.
  • the parallelism is realized by having many instances of the Fourier transform program processing all of the blocks at once. The more blocks, and by extension more image pixels or frequency bins available to the GPU, the more effectively the GPU will be able to parallelize its operations.
  • the present invention provides an adjustable value W from 0 to 1; the analysis window function is then defined to be WeightedHann(x, W), and the synthesis window function is defined to be WeightedHann(x, 1 - W). This provides user adjustability of the frequency domain algorithms without requiring advanced knowledge (one possible realization is sketched below).
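The text does not give the exact definition of WeightedHann, so the sketch below assumes WeightedHann(x, W) = Hann(x)^W; under that assumption the analysis window (power W) times the synthesis window (power 1 - W) is always the plain Hann window, so a single user knob W shifts weight between analysis and synthesis without breaking overlap-add reconstruction.

```python
# Hedged sketch: the definition of WeightedHann is assumed, not taken from the
# text. With WeightedHann(x, W) = hann(x)**W, the product of the analysis window
# (power W) and the synthesis window (power 1 - W) is always hann(x).
import numpy as np

def weighted_hann(n, w):
    x = np.arange(n)
    hann = 0.5 - 0.5 * np.cos(2.0 * np.pi * x / n)   # periodic Hann window
    return hann ** w

W = 0.3                                   # user adjustable, 0..1
analysis = weighted_hann(64, W)           # applied before the forward FFT
synthesis = weighted_hann(64, 1.0 - W)    # applied before overlap-add of blocks

# Their product is the plain Hann window regardless of W:
assert np.allclose(analysis * synthesis, weighted_hann(64, 1.0))
```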
  • the present invention also provides an efficient implementation of Discrete Wavelet Transforms on the GPU, preferably following techniques disclosed in Starck, et al., “The Undecimated Wavelet Decomposition and its Reconstruction”, IEEE Transactions on Image Processing 16:2, February 2007, pp. 297-309.
  • multiple render targets allow for the computation of all 4 sub-bands (HH, HL, LH, LL) of a given level of the transform from scaling coefficients (S) in one pass, whereas the standard implementations or implementations on other hardware may require 4 independent passes to produce each sub-band. This reduces the memory bandwidth used for input data by a factor of 4. This applies at least to undecimated wavelet transforms, and can be applied to decimated wavelet transforms.
  • the present invention preferably employs improvements to motion estimation algorithms, including those disclosed in the related applications.
  • the invention provides a method for efficiently improving the resolution of a motion vector field, as follows: Suppose that sufficiently accurate motion vectors have been determined and are stored in a motion vector field M. To inexpensively improve the resolution of these accurate motion vectors, consider a new motion vector field M+1 with double the resolution of M. Each vector in M+1 has only four candidate vectors: the nearest four vectors in M, as shown in FIGS. 3(a) and 3(b). The reasoning is that if a block straddles a border of motion, the block must choose one of the areas of motion to represent. However, in a further subdivided level, the blocks may land entirely in one region of motion or the other, which may differ from the choice of the coarser block.
  • One of the four neighbors of the coarser block should be the correct vector because one of the neighbors lies entirely within the area of motion which this new subdivided block entirely belongs to as well.
  • This candidate vector should be the result of the motion estimation optimization. Note that this technique will never produce new vectors, so it is only suitable for refinement after coarse but accurate motion vectors are found. This technique vastly improves motion vector accuracy near sharp edges in the original image.
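A minimal sketch of this refinement step follows; block_error is a hypothetical stand-in for whatever matching cost the estimator uses, and the block geometry is simplified, but it shows the key constraint that each vector in the doubled-resolution field M+1 chooses only among its four nearest vectors in M.

```python
# Minimal sketch: a coarse field M is refined into a field M+1 with double the
# resolution, and each new vector may only choose among the four nearest coarse
# vectors (no new vectors are ever created).
import numpy as np

def block_error(prev, cur, bx, by, vec, block=8):
    """Sum of absolute differences for block (bx, by) displaced by vec."""
    h, w = cur.shape
    x0, y0 = bx * block, by * block
    x1, y1 = x0 + int(vec[0]), y0 + int(vec[1])
    if x1 < 0 or y1 < 0 or x1 + block > w or y1 + block > h:
        return np.inf
    a = cur[y0:y0 + block, x0:x0 + block].astype(np.float32)
    b = prev[y1:y1 + block, x1:x1 + block].astype(np.float32)
    return np.abs(a - b).sum()

def refine_field(prev, cur, field, block=8):
    """field: (H, W, 2) coarse vectors; returns a (2H, 2W, 2) refined field."""
    H, W, _ = field.shape
    out = np.zeros((2 * H, 2 * W, 2), dtype=field.dtype)
    for y in range(2 * H):
        for x in range(2 * W):
            cy, cx = y // 2, x // 2
            # The four nearest coarse vectors are the only candidates.
            ny = min(max(cy + (1 if y % 2 else -1), 0), H - 1)
            nx = min(max(cx + (1 if x % 2 else -1), 0), W - 1)
            candidates = [field[cy, cx], field[cy, nx], field[ny, cx], field[ny, nx]]
            scores = [block_error(prev, cur, x, y, v, block // 2) for v in candidates]
            out[y, x] = candidates[int(np.argmin(scores))]
    return out

# Usage: refine a 2x2 coarse field against two 32x32 frames (shapes only).
prev = np.random.rand(32, 32).astype(np.float32); cur = prev.copy()
coarse = np.zeros((2, 2, 2))            # a 2x2 field of zero vectors
fine = refine_field(prev, cur, coarse)  # -> (4, 4, 2) refined field
```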
  • the invention provides a method for improving accuracy of subpixel accurate motion estimation.
  • a problem that is encountered when attempting to perform sub pixel accurate motion estimation is that when evaluating a candidate vector, the origin of the vector is placed at an integer pixel location, while the end of the vector can end up at a fractional pixel location, requiring subsampling of the image. This problem will be referred to as unbalanced interpolation.
  • That is, for a candidate vector v from x to x′, x has an integer pixel location while x′ may have a subpixel component.
  • The preferred method of the invention solves this unbalanced interpolation problem with a relatively cheap operation, as follows.
  • The inventive solution is to displace both x and x′ by a carefully computed value that represents the subpixel component of v.
  • Define v_i = round(v) and v_f = v - v_i; v_f is the subpixel component of v while v_i is the integer component.
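The exact displacement is not spelled out in the text above, but the FIG. 4 example is consistent with shifting both endpoints by -v_f/2, which the hedged sketch below assumes; after the shift, the two interpolation factors are equally distant from the nearest integer sample.

```python
# Hedged sketch of the balanced-interpolation adjustment. The displacement
# d = -v_f / 2 is an assumption consistent with FIG. 4, where (0,0)->(0.75,0.5)
# becomes (0.125,0.75)->(0.875,0.25), both ends equally far from integer samples.
import numpy as np

def balance_vector(x, v):
    """Return displaced endpoints (x + d, x + v + d) with d = -v_f / 2."""
    v = np.asarray(v, dtype=np.float64)
    v_i = np.round(v)          # integer component of the vector
    v_f = v - v_i              # subpixel component of the vector
    d = -v_f / 2.0             # shared displacement for both endpoints
    origin = np.asarray(x, dtype=np.float64) + d
    end = np.asarray(x, dtype=np.float64) + v + d
    return origin, end

# FIG. 4 example: vector (0.75, 0.5) anchored at an integer pixel.
origin, end = balance_vector((0.0, 0.0), (0.75, 0.5))
print(origin % 1.0, end % 1.0)   # fractional parts: (0.125, 0.75) and (0.875, 0.25)
```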
  • the invention provides a method for improving motion estimator robustness against image noise in hierarchical block matching motion estimation algorithms.
  • Standard block matching algorithms work by forming a candidate vector set C. Each vector in the set is then scored by computing the pixel differences that would result if the vector were applied to the images. In areas of low image information (low signal-to-noise ratio) the motion vector data can be very noisy due to the influence of noise in the image. This noise reduces the precision and reliability of some algorithms relying on motion compensation, for example temporal noise filtering or cut detection. Therefore, it is important to combat this noise produced by the algorithm.
  • the inventive method applies to hierarchical motion estimators as follows.
  • the first step is to form an image resolution pyramid, where the bottom of the pyramid is the full resolution image, and the subsequent levels are repeatedly downsampled by a factor of two.
  • Motion estimation begins by performing the search described above of the immediate neighbors at the top of the pyramid, and feeding the results to the subsequent levels which incrementally increase the accuracy of the motion vector data.
  • the candidate vector set is the set of immediately neighboring pixels in every direction.
  • the invention defines a constant e.
  • When optimizing over the candidate vector set C and the current best vector v taken from the previous level in the hierarchy, define c_m to be the minimally scoring vector candidate.
  • The standard behavior is to select argmin{c_m, v}.
  • The inventive solution against noise is to select argmin{c_m + e, v}. This way, a candidate vector is only selected if it is decisively better than the current vector (from the previous level). This preserves the existing standard behavior in areas of detail (high SNR), where motion vectors can reliably be determined, while in areas of low detail (low SNR) the vectors are not noisy.
  • e is adjusted per level of the hierarchy to be small for the highly filtered levels of the image pyramid and large at the lowest level.
  • Preferably, e_i = e/(i + 1), where e is a user-defined parameter and e_i is the constant value used for level i.
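A minimal sketch of this biased selection rule, with score standing in for the block-matching error of a vector:

```python
# Minimal sketch of the noise-robust selection rule: a candidate from the new
# set C only replaces the vector v carried down from the previous level if its
# score beats v's score by more than the per-level bias e_i = e / (i + 1).

def select_vector(candidates, v_prev, score, e, level):
    """Pick between the best new candidate and the previous level's vector.

    level 0 is the full-resolution bottom of the pyramid, so e_i = e / (level+1)
    is largest there and shrinks for the coarser, heavily filtered levels.
    """
    e_i = e / (level + 1)
    c_m = min(candidates, key=score)        # minimally scoring candidate
    # Standard behavior would be: c_m if score(c_m) < score(v_prev) else v_prev.
    # Biased behavior: the candidate must win decisively.
    return c_m if score(c_m) + e_i < score(v_prev) else v_prev


# Usage with a toy score function (squared vector length as a stand-in):
score = lambda v: v[0] ** 2 + v[1] ** 2
print(select_vector([(1, 0), (2, 1)], (1, 1), score, e=5.0, level=0))  # keeps (1, 1)
print(select_vector([(1, 0), (2, 1)], (1, 1), score, e=0.1, level=0))  # picks (1, 0)
```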
  • the invention also provides for reducing noise and grain in moving images using motion compensation.
  • the inventive method exploits the fact that the motion compensated image signal is relatively constant compared to random additive noise.
  • Film grain is additive noise when the data is measured in film density.
  • the data should preferably be transformed to be in linear density space, resulting in the grain being additive noise.
  • the Temporal Wavelet filter of the invention works by motion compensating several frames (TEMPORAL_SIZE frames) to be at the same point in time, performing the undecimated wavelet transform of each temporal frame, and by applying a filter to each band of the wavelet transform. Additionally, the scale coefficients of the lowest level of the wavelet transform are also preferably temporally filtered to reduce the lowest frequency noise. Two operations implement temporal wavelet filtering: WaveletTemporal and NoiseFilter.
  • Each filter starts by collecting TEMPORAL_SIZE frames surrounding the frame to be filtered. This forms the input set.
  • the frames in the input set are motion compensated to align with the output frame (temporally in the center of the input set).
  • an undecimated wavelet transform is applied using the above-described efficient implementation of discrete wavelet transforms on the GPU, using an appropriate set of low and high pass filters.
  • one possible set of filters is [1 2 1]/4 (a three-tap Gaussian filter) as the low pass filter, and [0 1 0] - [1 2 1]/4 (a delta function minus a three-tap Gaussian) as the high pass filter.
  • the undecimated wavelet transform is performed using the "à trous" algorithm.
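A minimal CPU sketch (numpy, not the GPU implementation) of one "à trous" level with the filters named above:

```python
# Minimal sketch of one undecimated ("a trous") wavelet level using low pass
# [1 2 1]/4 and high pass [0 1 0] - [1 2 1]/4; at level j the kernel taps are
# spaced 2**j samples apart and no decimation takes place.
import numpy as np

def atrous_level(scaling, level):
    """One level along rows; a full 2D version would apply the same kernel along
    both axes. Boundaries are circular for brevity."""
    step = 2 ** level                           # hole spacing doubles per level
    left = np.roll(scaling, step, axis=1)
    right = np.roll(scaling, -step, axis=1)
    low = (left + 2.0 * scaling + right) / 4.0  # [1 2 1]/4 with holes
    detail = scaling - low                      # [0 1 0] - [1 2 1]/4
    return low, detail

# Usage: a 2-level transform of random rows; for this filter bank the sum of
# the detail bands plus the final scaling coefficients restores the input.
original = np.random.rand(4, 16)
scaling, bands = original, []
for j in range(2):
    scaling, detail = atrous_level(scaling, j)
    bands.append(detail)
assert np.allclose(scaling + sum(bands), original)
```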
  • the detail coefficients are filtered preferably using a filtering method robust against motion compensation artifacts.
  • In the NoiseFilter operation, the detail coefficients of a one-level wavelet transform are filtered using a hierarchical 3D median filter, and the scaling coefficients are filtered using a temporal Wiener filter.
  • In the WaveletTemporal operation, all coefficients from a selectable number of levels are filtered using a temporal Wiener filter. The Wiener filter in this application is robust against motion compensation artifacts.
  • the invention preferably employs a motion compensated Wiener filter to reduce noise in a video sequence.
  • the preferred Wiener filter of the invention (compare to U.S. Pat. No. 5,500,685, to Korkoram) uses the Fourier transform operation outlined above.
  • the Wiener filter has several inputs: SPATIAL_SIZE (block size), TEMPORAL_SIZE (number of frames), AMOUNT, strength in individual RGB channels, and most importantly, a grain profile image sequence.
  • the grain profile can either be user selected or found with an automatic algorithm. An algorithm is given below with a method to eliminate the need for a sequence.
  • the grain profile image is a clean sample of the grain, which is at least SPATIAL_SIZE × SPATIAL_SIZE pixels, and lasts for TEMPORAL_SIZE frames.
  • the image data is offset to be zero mean, and the 3D Fourier transform is performed to produce a SPATIAL_SIZE × SPATIAL_SIZE × TEMPORAL_SIZE set of frequency bins.
  • the power spectrum is then found from this information. This power spectrum is then uploaded to the graphics card for use within the filter.
  • the filter step begins by collecting TEMPORAL_SIZE frames. This forms the input set. These frames are then motion compensated to align image details temporally in the same spatial position.
  • the output frame is the middle frame of the input set; if the set is of even size, the output frame is the later of the two middle frames.
  • each one is split into overlapping blocks and the Fourier transform is applied as above.
  • the 3D (three dimensional) Fourier transform is produced by taking the Fourier transform across the temporal bins in each 2D transform. Once the 3D transform is found, then the power spectrum is computed.
  • the filter gain for the power spectrum bin x, y, t is defined by: F(x, y, t)/(F(x, y, t)+AMOUNT*G(x, y, t)), where F is the power spectrum of the video image, and G is the power spectrum of the grain profile image sequence.
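A minimal numpy sketch of this per-bin gain applied to a motion compensated block stack; the block collection, windowing, and overlap-add steps are assumed to have been done elsewhere, and the small epsilon is added only to avoid division by zero:

```python
# Minimal sketch of the per-bin Wiener gain: gain = F / (F + AMOUNT * G), where
# F is the power spectrum of the motion compensated block stack and G the power
# spectrum of the grain profile sequence.
import numpy as np

def wiener_filter_block(block_stack, grain_power, amount):
    """block_stack: (TEMPORAL_SIZE, SPATIAL_SIZE, SPATIAL_SIZE) aligned blocks.
    grain_power: power spectrum of the grain profile, same shape.
    Returns the filtered stack (the caller keeps the temporally central frame)."""
    spectrum = np.fft.fftn(block_stack)              # 3D transform (t, y, x)
    F = np.abs(spectrum) ** 2                        # power spectrum of the video
    gain = F / (F + amount * grain_power + 1e-12)    # epsilon avoids divide by zero
    return np.real(np.fft.ifftn(spectrum * gain))

# Usage with toy data: an 8-frame stack of 16x16 blocks and a flat grain spectrum.
stack = np.random.rand(8, 16, 16)
grain = np.full((8, 16, 16), 10.0)
filtered = wiener_filter_block(stack, grain, amount=1.0)
```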
  • To reduce memory usage, this task can be split up into several sets of blocks, such as [0, M/2) × [0, N/2), [M/2, M) × [0, N/2), [0, M/2) × [N/2, N), and [M/2, M) × [N/2, N).
  • This reduces the memory usage by 75% (to 1/4 of the original footprint), because only one of the sets of blocks is required in memory at once. This process can be done at more than just a factor of two; a factor of four, for example, would reduce memory usage by 15/16 (to 1/16th of the original footprint).
  • the geometry processing and alpha blending capability of the GPU is exploited to perform overlapped window calculations over multiple passes (one for each chunk of blocks).
  • the invention further provides for automatically locating a suitable grain profile image for the Wiener filter.
  • To find a suitable profile image, define a cost function as the impact of the filter kernel described above; therefore, the goal is to minimize G.
  • a significant part of this algorithm is determining the candidate block set.
  • a small candidate block set is important for efficiency purposes. To optimize this candidate block set, observe that in the vast majority of footage, motion is relatively low. This means that a block at some point x, y in one frame is likely very similar to the block at the same x, y in the nearby neighboring frames. This fact is exploited to reduce computational load: split each frame into a grid aligned on the desired block size (SPATIAL_SIZE in the Wiener filter). A full search would define the candidate block set as every block in this grid. Instead, define a quality parameter Q in (0, 1). A given block in the grid is then tested in only about a fraction Q of the frames (one out of every ceil(1/Q) frames).
  • a block is defined to belong to the candidate block set if x + y + i ≡ 0 (mod ceil(1/Q)), where x and y are the block coordinates in the grid, and i is the frame index.
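A minimal sketch of this candidate-block schedule:

```python
# Minimal sketch: with quality Q in (0, 1), a grid block (x, y) is considered in
# frame i only when (x + y + i) mod ceil(1/Q) == 0, so each block is tested in
# roughly a fraction Q of the frames while every frame contributes some blocks.
import math

def candidate_blocks(grid_w, grid_h, frame_index, q):
    period = math.ceil(1.0 / q)
    return [(x, y) for y in range(grid_h) for x in range(grid_w)
            if (x + y + frame_index) % period == 0]

# Usage: a 4x3 block grid, frame 5, Q = 0.25 -> blocks with (x+y+5) % 4 == 0.
print(candidate_blocks(4, 3, 5, 0.25))   # -> [(3, 0), (2, 1), (1, 2)]
```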
  • the invention also provides for improving usability of selecting profile images.
  • the power spectrum of the noise is required for the full temporal window.
  • this implies a requirement of a sequence of TEMPORAL_SIZE frames to profile from. In practice, this is difficult to accomplish and places another dimension of constraints on the profile images, which are already difficult to find in real, practical video.
  • the invention further provides for reducing noise with an intraframe wavelet shrinkage filter.
  • the preferred filter employs a wavelet based spatial filtering technique, and is derived from profiling a sample of the grain to determine appropriate filter thresholds.
  • the filter thresholds are then adjustable in the user interface with live feedback.
  • the filter begins by performing the undecimated wavelet transform using three tap Gaussian filters up to some predefined number of levels, presumably enough levels to adequately isolate the detail coefficients responsible for the presence of noise.
  • the preferred implementation is variable up to four levels.
  • the detail coefficients of each level are then thresholded using soft thresholding or another thresholding method.
  • the filter thresholds are determined by profiling a sample designated by the user to be a mostly uniform region without much detail (a region with low signal to noise ratio). For each level, the filter thresholds are determined using the 1st and 2nd quartiles of the magnitude of the detail coefficients. This statistical analysis was chosen for its property that some image detail can be present in the wavelet transform of the area being profiled without affecting the lower quartiles. Therefore it is robust against user error for selecting inappropriate profiles, or allows for suboptimal profiles to be selected if no ideal profile is available.
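The exact mapping from the two quartiles to a threshold is not given in the text, so the sketch below uses a simple scaled average of the 1st and 2nd quartiles as a placeholder, followed by standard soft thresholding:

```python
# Minimal sketch of threshold selection and soft thresholding: per wavelet
# level, the threshold is derived from the 1st and 2nd quartiles of |detail|
# in a user-selected low-detail region. The quartile-to-threshold mapping here
# (a scaled average) is a placeholder assumption, not the patent's formula.
import numpy as np

def level_threshold(profile_detail, scale=3.0):
    mags = np.abs(profile_detail).ravel()
    q1, q2 = np.percentile(mags, [25, 50])       # 1st and 2nd quartiles
    return scale * 0.5 * (q1 + q2)               # placeholder combination

def soft_threshold(detail, t):
    return np.sign(detail) * np.maximum(np.abs(detail) - t, 0.0)

# Usage: a noisy detail band thresholded with a value profiled from a flat patch.
rng = np.random.default_rng(0)
profile = rng.normal(0.0, 2.0, (32, 32))         # "grain only" region
detail = rng.normal(0.0, 2.0, (64, 64)) + 10.0 * (rng.random((64, 64)) > 0.99)
clean = soft_threshold(detail, level_threshold(profile))
```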
  • the grain is not transformed into additive white Gaussian noise at all, but the filter thresholds adapt to the luminance at the particular location as if the noise were transformed to be additive.
  • Let T be the transformation from log density to density, and F be the filter function.
  • the invention also provides for reducing impulsive noise in a moving image sequence via an artifact-resistant motion compensated temporal median.
  • the motion compensated temporal median filter works by first finding forward and backward motion vectors, using Cinnafilm's block motion estimation engine. Once motion vectors are known, some odd number N of contiguous frames, centered on the frame desired for output, are motion compensated to produce N images of the same frame (the center frame has no motion compensation). Then a median operation is applied to the N samples to produce the output sample.
  • the temporal median filter is very effective for removing impulsive noise such as dust and dirt in film originated material. Note in FIG. 7 how the large black particle of dust was eliminated from the ball because the median selects the majority color present—red. If the black dot were real image detail, it would have been present in all three frames in the same location, and the median filter would not have filtered it.
  • the motion estimator produces a confidence value which is used by the temporal median filter to prevent artifacts caused by incorrect motion vectors.
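A minimal sketch of the median step; how the confidence value gates the filter is not detailed above, so falling back to the uncompensated center frame for low-confidence pixels is an assumption:

```python
# Minimal sketch: N motion compensated versions of the same frame are combined
# per pixel by a median. As a hedged reading of the confidence mechanism, pixels
# whose motion vectors scored a low confidence fall back to the uncompensated
# center frame so that bad vectors cannot introduce artifacts.
import numpy as np

def temporal_median(aligned, confidence, threshold=0.5):
    """aligned: (N, H, W) motion compensated frames, N odd, center = original.
    confidence: (N, H, W) per-pixel motion confidence in [0, 1]."""
    n = aligned.shape[0]
    center = aligned[n // 2]
    # Replace low-confidence samples with the center frame before the median.
    safe = np.where(confidence >= threshold, aligned, center[None, :, :])
    return np.median(safe, axis=0)

# Usage: 3 aligned frames where the center one carries an impulsive "dust" pixel.
frames = np.full((3, 4, 4), 200.0)
frames[1, 2, 2] = 0.0                        # dust on the center frame
conf = np.ones((3, 4, 4))
print(temporal_median(frames, conf)[2, 2])   # -> 200.0, the dust is removed
```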
  • the invention further provides for a practical technique for performing superresolution on moving images using motion vector solutions.
  • Standard superresolution algorithms work by finding subpixel accurate motion vectors in a sequence of frames, and then using fractional motion vectors to reconstruct pixels that are missing in one frame, but can be found in another. This is a very processor intensive task that can be made practical by keeping a history of the previous frames available for finding the appropriate sampling of a pixel.
  • Let F0, F1, . . . be a sequence of frames at some resolution M1 × N1.
  • The superresolution image is at some resolution M2 × N2 which is greater than the original resolution.
  • Let S be the superresolution history image, which has resolution M2 × N2.
  • This image should have two components, the image color data (for example, RGB or YCbCr), and a second component which is the rating for that pixel. Note that this can be efficiently implemented on standard graphics hardware which typically has 4 channels for a texture: the first three channels store the image data (capable of holding most standard image color spaces), and the fourth stores the rating.
  • the score value is a rating of how well that pixel matches the pixel at that resolution, where zero is a perfect score. For example, suppose M2 × N2 is exactly twice M1 × N1. Then for the first frame, the superresolution image pixels (0, 2, 4, 6, . . .) × (0, 2, 4, 6, . . .) should have a perfect score because they are exactly represented by the original image. Pixels not exactly sampled in the original image must be found in previous images using motion vector analysis. If a motion vector has a fractional part of 0.5, then it is a perfect match for the odd pixels in the previous example. This is because that pixel in the previous image moved to exactly half way between the two neighboring pixels in the subsequent (current) image.
  • the score is some norm (length, squared length, etc.) of the difference of the vector's fractional part from the ideal. In this case, 0.5 is the ideal, and if the motion vector has a fractional part of 0.5, then it is a perfect match and the score is 0.
  • the image S is updated such that each pixel is the minimum score of either the current value, or the new value.
  • the scores of S are always incremented by some decay value. This prevents a perfect match from persisting for too long, and favors temporally closer frames in the case of near ties.
  • superresolution analysis becomes a constant time algorithm with respect to the number of frames used in the analysis.
  • the number of frames used is controlled by the decay parameter. High values of decay mean a smaller number of frames will be used to search for the best subsample match.
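A minimal sketch of the per-pixel history update, with the decay parameter aging stored scores so that only the recent best matches survive:

```python
# Minimal sketch of the history update: the superresolution image S keeps, per
# pixel, (color, score). A new frame's candidate sample replaces a pixel only
# when its score (distance of the motion vector's fractional part from the ideal
# subpixel offset) beats the stored score, and all stored scores are aged by a
# decay constant so stale matches fade out.
import numpy as np

def update_history(S_color, S_score, cand_color, cand_score, decay):
    """All arrays are (M2, N2[, C]); returns the updated (color, score) pair."""
    S_score = S_score + decay                      # age every stored sample
    take_new = cand_score < S_score                # keep the minimum score
    color = np.where(take_new[..., None], cand_color, S_color)
    score = np.where(take_new, cand_score, S_score)
    return color, score

# Usage: a stored sample survives until a better-scoring candidate arrives.
S_color = np.zeros((2, 2, 3)); S_score = np.full((2, 2), 0.1)
cand_color = np.ones((2, 2, 3)); cand_score = np.full((2, 2), 0.3)
S_color, S_score = update_history(S_color, S_score, cand_color, cand_score, 0.05)
print(S_score)   # old samples aged to 0.15 and kept, since 0.3 is worse
```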
  • this algorithm demands more accuracy from the motion estimation algorithm due to the potential for error to accumulate.
  • the invention employs a multipass improvement in superresolution algorithm quality.
  • accuracy can be improved by a multipass method.
  • the complete S image (i.e., including the score values) should be stored for each frame.
  • a third pass is performed which minimizes the score values from each pass. This results in a complete neighborhood of frames being analyzed and used for the superresolution algorithm results, as opposed to only frames in one direction as in a single pass.
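The direction of the second pass is not stated explicitly; assuming the two passes traverse the sequence in opposite directions, the per-frame results can be merged by keeping, per pixel, whichever pass achieved the lower score:

```python
# Hedged sketch of the multipass combination: the stored (color, score) images
# from two passes (assumed here to run in opposite temporal directions) are
# merged per pixel by taking whichever pass achieved the lower score.
import numpy as np

def merge_passes(color_a, score_a, color_b, score_b):
    take_b = score_b < score_a
    color = np.where(take_b[..., None], color_b, color_a)
    score = np.where(take_b, score_b, score_a)
    return color, score

# Usage: per-pixel selection between a forward-pass and a backward-pass result.
fwd_c, fwd_s = np.zeros((2, 2, 3)), np.array([[0.2, 0.6], [0.1, 0.9]])
bwd_c, bwd_s = np.ones((2, 2, 3)), np.array([[0.5, 0.1], [0.3, 0.2]])
merged_c, merged_s = merge_passes(fwd_c, fwd_s, bwd_c, bwd_s)
print(merged_s)   # per-pixel minima: [[0.2, 0.1], [0.1, 0.2]]
```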

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Television Systems (AREA)
  • Image Processing (AREA)
US12/413,093 2008-07-30 2009-03-27 Method, Apparatus, and Computer Software for Modifying Moving Images Via Motion Compensation Vectors, Degrain/Denoise, and Superresolution Abandoned US20100026897A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
US12/413,093 US20100026897A1 (en) 2008-07-30 2009-03-27 Method, Apparatus, and Computer Software for Modifying Moving Images Via Motion Compensation Vectors, Degrain/Denoise, and Superresolution
EP09803294.9A EP2556488A4 (fr) 2008-07-30 2009-03-30 Procédé, dispositif et logiciel informatique pour modifier des images animées avec des vecteurs de compensation de mouvement et par dégranulation/débruitage et super-résolution
PCT/US2009/038769 WO2010014271A1 (fr) 2008-07-30 2009-03-30 Procédé, dispositif et logiciel informatique pour modifier des images animées avec des vecteurs de compensation de mouvement et par dégranulation/débruitage et super-résolution

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US8482808P 2008-07-30 2008-07-30
US14130408P 2008-12-30 2008-12-30
US12/413,093 US20100026897A1 (en) 2008-07-30 2009-03-27 Method, Apparatus, and Computer Software for Modifying Moving Images Via Motion Compensation Vectors, Degrain/Denoise, and Superresolution

Publications (1)

Publication Number Publication Date
US20100026897A1 true US20100026897A1 (en) 2010-02-04

Family

ID=41607954

Family Applications (2)

Application Number Title Priority Date Filing Date
US12/413,146 Active 2030-12-08 US8208065B2 (en) 2008-07-30 2009-03-27 Method, apparatus, and computer software for digital video scan rate conversions with minimization of artifacts
US12/413,093 Abandoned US20100026897A1 (en) 2008-07-30 2009-03-27 Method, Apparatus, and Computer Software for Modifying Moving Images Via Motion Compensation Vectors, Degrain/Denoise, and Superresolution

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US12/413,146 Active 2030-12-08 US8208065B2 (en) 2008-07-30 2009-03-27 Method, apparatus, and computer software for digital video scan rate conversions with minimization of artifacts

Country Status (3)

Country Link
US (2) US8208065B2 (fr)
EP (2) EP2556488A4 (fr)
WO (2) WO2010014271A1 (fr)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103778641A (zh) * 2012-10-25 2014-05-07 西安电子科技大学 基于小波描述子的目标跟踪方法
US20160284092A1 (en) * 2015-03-23 2016-09-29 Canon Kabushiki Kaisha Image processing apparatus, image processing method, and non-transitory computer-readable storage medium
US20180343448A1 (en) * 2017-05-23 2018-11-29 Intel Corporation Content adaptive motion compensated temporal filtering for denoising of noisy video for efficient coding
US20190246138A1 (en) * 2016-09-14 2019-08-08 Beamr Imaging Ltd. Method of pre-processing of video information for optimized video encoding
CN110418604A (zh) * 2017-03-22 2019-11-05 赛佛欧普手术有限公司 用于检测电生理诱发电位变化的医疗系统和方法

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100763917B1 (ko) * 2006-06-21 2007-10-05 삼성전자주식회사 고속으로 움직임을 추정하는 방법 및 장치
JP4785678B2 (ja) * 2006-09-01 2011-10-05 キヤノン株式会社 画像符号化装置および画像符号化方法
US8850054B2 (en) * 2012-01-17 2014-09-30 International Business Machines Corporation Hypertext transfer protocol live streaming
KR20140126195A (ko) * 2013-04-22 2014-10-30 삼성전자주식회사 배치 쓰레드 처리 기반의 프로세서, 그 프로세서를 이용한 배치 쓰레드 처리 방법 및 배치 쓰레드 처리를 위한 코드 생성 장치
GB201407665D0 (en) 2014-05-01 2014-06-18 Imagination Tech Ltd Cadence analysis for a video signal having an interlaced format
TWI721816B (zh) * 2017-04-21 2021-03-11 美商時美媒體公司 用於產生遊戲的運動向量的系統及方法
TWI664852B (zh) * 2018-03-19 2019-07-01 瑞昱半導體股份有限公司 影像處理裝置及影像處理方法

Citations (58)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5276513A (en) * 1992-06-10 1994-01-04 Rca Thomson Licensing Corporation Implementation architecture for performing hierarchical motion analysis of video images in real time
US5500685A (en) * 1993-10-15 1996-03-19 Avt Communications Limited Wiener filter for filtering noise from a video signal
US5600731A (en) * 1991-05-09 1997-02-04 Eastman Kodak Company Method for temporally adaptive filtering of frames of a noisy image sequence using motion estimation
US5696848A (en) * 1995-03-09 1997-12-09 Eastman Kodak Company System for creating a high resolution image from a sequence of lower resolution motion images
US5742710A (en) * 1994-02-23 1998-04-21 Rca Thomson Licensing Corporation Computationally-efficient method for estimating image motion
US5771316A (en) * 1995-12-26 1998-06-23 C-Cube Microsystems Fade detection
US5831673A (en) * 1994-01-25 1998-11-03 Przyborski; Glenn B. Method and apparatus for storing and displaying images provided by a video signal that emulates the look of motion picture film
US20010000779A1 (en) * 1996-11-07 2001-05-03 Kabushiki Kaisha Sega Enterprises. Image processing device, image processing method and recording medium
US6268863B1 (en) * 1997-10-02 2001-07-31 National Research Council Canada Method of simulating a photographic camera
US20010030709A1 (en) * 1999-12-23 2001-10-18 Tarnoff Harry L. Method and apparatus for a digital parallel processor for film conversion
US6363117B1 (en) * 1998-12-31 2002-03-26 Sony Corporation Video compression using fast block motion estimation
US20030169820A1 (en) * 2000-05-31 2003-09-11 Jean- Yves Babonneau Device and method for motion-compensated recursive filtering of video images prior to coding and corresponding coding system
US20030206242A1 (en) * 2000-03-24 2003-11-06 Choi Seung Jong Device and method for converting format in digital TV receiver
US6661470B1 (en) * 1997-03-31 2003-12-09 Matsushita Electric Industrial Co., Ltd. Moving picture display method and apparatus
US20040001705A1 (en) * 2002-06-28 2004-01-01 Andreas Soupliotis Video processing system and method for automatic enhancement of digital video
US20040008904A1 (en) * 2003-07-10 2004-01-15 Samsung Electronics Co., Ltd. Method and apparatus for noise reduction using discrete wavelet transform
US20040012673A1 (en) * 2001-08-24 2004-01-22 Susumu Tanase Telecine converting method
US20040095511A1 (en) * 2002-11-20 2004-05-20 Amara Foued Ben Trailing artifact avoidance system and method
US20040105029A1 (en) * 2002-11-06 2004-06-03 Patrick Law Method and system for converting interlaced formatted video to progressive scan video
US20040135924A1 (en) * 2003-01-10 2004-07-15 Conklin Gregory J. Automatic deinterlacing and inverse telecine
US20040170330A1 (en) * 1998-08-12 2004-09-02 Pixonics, Inc. Video coding reconstruction apparatus and methods
US20040179602A1 (en) * 2001-08-21 2004-09-16 Olivier Le Meur Device and process for estimating noise level, noise reduction system and coding system comprising such a device
US20040213349A1 (en) * 2003-04-24 2004-10-28 Zador Andrew Michael Methods and apparatus for efficient encoding of image edges, motion, velocity, and detail
US20050024532A1 (en) * 2003-06-25 2005-02-03 Choi Seung Jong Apparatus for converting video format
US6868190B1 (en) * 2000-10-19 2005-03-15 Eastman Kodak Company Methods for automatically and semi-automatically transforming digital image data to provide a desired image look
US20050078176A1 (en) * 2003-09-25 2005-04-14 Wis Technologies, Inc. System and method for efficiently performing an inverse telecine procedure
US20050089196A1 (en) * 2003-10-24 2005-04-28 Wei-Hsin Gu Method for detecting sub-pixel motion for optical navigation device
US20050276323A1 (en) * 2002-09-27 2005-12-15 Vanguard Software Solutions, Inc. Real-time video coding/decoding
US20060056724A1 (en) * 2004-07-30 2006-03-16 Le Dinh Chon T Apparatus and method for adaptive 3D noise reduction
US20060110062A1 (en) * 2004-11-23 2006-05-25 Stmicroelectronics Asia Pacific Pte. Ltd. Edge adaptive filtering system for reducing artifacts and method
US20060114358A1 (en) * 2004-12-01 2006-06-01 Silverstein D Amnon Artifact reduction in a digital video
US20060262202A1 (en) * 2005-05-17 2006-11-23 Canon Kabushiki Kaisha Image processing apparatus and image processing method
US20060267539A1 (en) * 2005-05-30 2006-11-30 Yoshisuke Kuramoto Telecine device that utilizes standard video camera circuits
US20070003156A1 (en) * 2005-07-01 2007-01-04 Ali Corporation Image enhancing system
US20070019114A1 (en) * 2005-04-11 2007-01-25 De Garrido Diego P Systems, methods, and apparatus for noise reduction
US20070035621A1 (en) * 2004-04-22 2007-02-15 The Circle For The Promotion Of Science And Engine Movement decision method for acquiring sub-pixel motion image appropriate for super resolution processing and imaging device using the same
US20070047647A1 (en) * 2005-08-24 2007-03-01 Samsung Electronics Co., Ltd. Apparatus and method for enhancing image using motion estimation
US20070058716A1 (en) * 2005-09-09 2007-03-15 Broadcast International, Inc. Bit-rate reduction for multimedia data streams
US20070071344A1 (en) * 2005-09-29 2007-03-29 Ouzilevski Alexei V Video acquisition with integrated GPU processing
US20070074115A1 (en) * 2005-09-23 2007-03-29 Microsoft Corporation Automatic capturing and editing of a video
US20070097259A1 (en) * 2005-10-20 2007-05-03 Macinnis Alexander Method and system for inverse telecine and field pairing
US20070104273A1 (en) * 2005-11-10 2007-05-10 Lsi Logic Corporation Method for robust inverse telecine
US20070115298A1 (en) * 2003-03-04 2007-05-24 Clairvoyante, Inc Systems and Methods for Motion Adaptive Filtering
US20070171280A1 (en) * 2005-10-24 2007-07-26 Qualcomm Incorporated Inverse telecine algorithm based on state machine
US20070189635A1 (en) * 2006-02-08 2007-08-16 Anja Borsdorf Method for noise reduction in imaging methods
US20070206117A1 (en) * 2005-10-17 2007-09-06 Qualcomm Incorporated Motion and apparatus for spatio-temporal deinterlacing aided by motion compensation for field-based video
US20070229704A1 (en) * 2006-03-30 2007-10-04 Satyajit Mohapatra Pipelining techniques for deinterlacing video information
US20070247546A1 (en) * 2006-02-02 2007-10-25 Hyung-Jun Lim Apparatus and methods for processing video signals
US20070247457A1 (en) * 2004-06-21 2007-10-25 Torbjorn Gustafsson Device and Method for Presenting an Image of the Surrounding World
US20080123740A1 (en) * 2003-09-23 2008-05-29 Ye Jong C Video De-Noising Algorithm Using Inband Motion-Compensated Temporal Filtering
US7420618B2 (en) * 2003-12-23 2008-09-02 Genesis Microchip Inc. Single chip multi-function display controller and method of use thereof
US20080232665A1 (en) * 2007-03-21 2008-09-25 Anja Borsdorf Method for noise reduction in digital images with locally different and directional noise
US20080310509A1 (en) * 2007-06-13 2008-12-18 Nvidia Corporation Sub-pixel Interpolation and its Application in Motion Compensated Encoding of a Video Signal
US20080309680A1 (en) * 2007-06-13 2008-12-18 Teng-Yi Lin Noise Cancellation Device for an Image Signal Processing System
US20080317132A1 (en) * 2004-11-12 2008-12-25 Industrial Technology Research Institute And University Of Washington System and Method for Fast Variable-Size Motion Estimation
US7535517B2 (en) * 2005-04-14 2009-05-19 Samsung Electronics Co., Ltd. Method of motion compensated temporal noise reduction
US7813570B2 (en) * 2004-09-13 2010-10-12 Microsoft Corporation Accelerated video encoding using a graphics processing unit
US8238420B1 (en) * 2008-01-24 2012-08-07 Adobe Systems Incorporated Video content transcoding for mobile devices

Family Cites Families (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7034862B1 (en) * 2000-11-14 2006-04-25 Eastman Kodak Company System and method for processing electronically captured images to emulate film tonescale and color
DE20112122U1 (de) * 2001-07-26 2001-11-29 Britax Teutonia Kinderwagen Zusammenklappbares Kinderwagenfahrgestell
CN1650622B (zh) * 2002-03-13 2012-09-05 图象公司 用于数字重新灌录或修改电影或其他图像序列数据的系统和方法
US7620109B2 (en) 2002-04-10 2009-11-17 Microsoft Corporation Sub-pixel interpolation in motion estimation and compensation
US7408988B2 (en) * 2002-12-20 2008-08-05 Lsi Corporation Motion estimation engine with parallel interpolation and search hardware
CN100493191C (zh) * 2003-07-09 2009-05-27 汤姆森许可贸易公司 具有低复杂度噪声消减的视频编码器及视频编码方法
US7346109B2 (en) 2003-12-23 2008-03-18 Genesis Microchip Inc. Motion vector computation for video sequences
JP2005191890A (ja) 2003-12-25 2005-07-14 Sharp Corp 画像処理装置、画像処理方法、画像処理プログラム、画像処理プログラムを記録した記録媒体、および画像処理装置を備えた画像形成装置
US7236170B2 (en) * 2004-01-29 2007-06-26 Dreamworks Llc Wrap deformation using subdivision surfaces
US7468757B2 (en) * 2004-10-05 2008-12-23 Broadcom Corporation Detection and correction of irregularities while performing inverse telecine deinterlacing of video
DE102004049676A1 (de) * 2004-10-12 2006-04-20 Infineon Technologies Ag Verfahren zur rechnergestützten Bewegungsschätzung in einer Vielzahl von zeitlich aufeinander folgenden digitalen Bildern, Anordnung zur rechnergestützten Bewegungsschätzung, Computerprogramm-Element und computerlesbares Speichermedium
US20060109899A1 (en) * 2004-11-24 2006-05-25 Joshua Kablotsky Video data encoder employing telecine detection
US7274428B2 (en) * 2005-03-24 2007-09-25 Eastman Kodak Company System and method for processing images to emulate film tonescale and color
EP1734767A1 (fr) * 2005-06-13 2006-12-20 SONY DEUTSCHLAND GmbH Procédé pour traiter des données digitales d'image
US7570309B2 (en) * 2005-09-27 2009-08-04 Samsung Electronics Co., Ltd. Methods for adaptive noise reduction based on global motion estimation
JP5013040B2 (ja) * 2005-09-29 2012-08-29 株式会社メガチップス 動き探索方法
ES2460541T3 (es) 2005-11-30 2014-05-13 Entropic Communications, Inc. Corrección de campo de vectores de movimiento
CN101379835B (zh) * 2006-02-02 2011-08-24 汤姆逊许可公司 使用组合参考双向预测进行运动估计的方法和设备
US8116576B2 (en) * 2006-03-03 2012-02-14 Panasonic Corporation Image processing method and image processing device for reconstructing a high-resolution picture from a captured low-resolution picture
US7701509B2 (en) * 2006-04-25 2010-04-20 Nokia Corporation Motion compensated video spatial up-conversion
JP4973031B2 (ja) * 2006-07-03 2012-07-11 ソニー株式会社 ノイズ抑圧方法、ノイズ抑圧方法のプログラム、ノイズ抑圧方法のプログラムを記録した記録媒体及びノイズ抑圧装置
US20080055477A1 (en) * 2006-08-31 2008-03-06 Dongsheng Wu Method and System for Motion Compensated Noise Reduction
US20080204598A1 (en) 2006-12-11 2008-08-28 Lance Maurer Real-time film effects processing for digital video
US20090051679A1 (en) * 2007-08-24 2009-02-26 Simon Robinson Local motion estimation using four-corner transforms
US8023562B2 (en) * 2007-09-07 2011-09-20 Vanguard Software Solutions, Inc. Real-time video coding/decoding
US8165209B2 (en) * 2007-09-24 2012-04-24 General Instrument Corporation Method and apparatus for providing a fast motion estimation process
US8654833B2 (en) * 2007-09-26 2014-02-18 Qualcomm Incorporated Efficient transformation techniques for video coding

Patent Citations (60)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5600731A (en) * 1991-05-09 1997-02-04 Eastman Kodak Company Method for temporally adaptive filtering of frames of a noisy image sequence using motion estimation
US5276513A (en) * 1992-06-10 1994-01-04 Rca Thomson Licensing Corporation Implementation architecture for performing hierarchical motion analysis of video images in real time
US5500685A (en) * 1993-10-15 1996-03-19 Avt Communications Limited Wiener filter for filtering noise from a video signal
US5831673A (en) * 1994-01-25 1998-11-03 Przyborski; Glenn B. Method and apparatus for storing and displaying images provided by a video signal that emulates the look of motion picture film
US5742710A (en) * 1994-02-23 1998-04-21 Rca Thomson Licensing Corporation Computationally-efficient method for estimating image motion
US5696848A (en) * 1995-03-09 1997-12-09 Eastman Kodak Company System for creating a high resolution image from a sequence of lower resolution motion images
US5771316A (en) * 1995-12-26 1998-06-23 C-Cube Microsystems Fade detection
US20010000779A1 (en) * 1996-11-07 2001-05-03 Kabushiki Kaisha Sega Enterprises. Image processing device, image processing method and recording medium
US6661470B1 (en) * 1997-03-31 2003-12-09 Matsushita Electric Industrial Co., Ltd. Moving picture display method and apparatus
US6268863B1 (en) * 1997-10-02 2001-07-31 National Research Council Canada Method of simulating a photographic camera
US20040170330A1 (en) * 1998-08-12 2004-09-02 Pixonics, Inc. Video coding reconstruction apparatus and methods
US6363117B1 (en) * 1998-12-31 2002-03-26 Sony Corporation Video compression using fast block motion estimation
US20010030709A1 (en) * 1999-12-23 2001-10-18 Tarnoff Harry L. Method and apparatus for a digital parallel processor for film conversion
US20030206242A1 (en) * 2000-03-24 2003-11-06 Choi Seung Jong Device and method for converting format in digital TV receiver
US20030169820A1 (en) * 2000-05-31 2003-09-11 Jean- Yves Babonneau Device and method for motion-compensated recursive filtering of video images prior to coding and corresponding coding system
US6868190B1 (en) * 2000-10-19 2005-03-15 Eastman Kodak Company Methods for automatically and semi-automatically transforming digital image data to provide a desired image look
US20040179602A1 (en) * 2001-08-21 2004-09-16 Olivier Le Meur Device and process for estimating noise level, noise reduction system and coding system comprising such a device
US20040012673A1 (en) * 2001-08-24 2004-01-22 Susumu Tanase Telecine converting method
US20040001705A1 (en) * 2002-06-28 2004-01-01 Andreas Soupliotis Video processing system and method for automatic enhancement of digital video
US20060290821A1 (en) * 2002-06-28 2006-12-28 Microsoft Corporation Video processing system and method for automatic enhancement of digital video
US20050276323A1 (en) * 2002-09-27 2005-12-15 Vanguard Software Solutions, Inc. Real-time video coding/decoding
US20040105029A1 (en) * 2002-11-06 2004-06-03 Patrick Law Method and system for converting interlaced formatted video to progressive scan video
US20040095511A1 (en) * 2002-11-20 2004-05-20 Amara Foued Ben Trailing artifact avoidance system and method
US20040135924A1 (en) * 2003-01-10 2004-07-15 Conklin Gregory J. Automatic deinterlacing and inverse telecine
US20070024703A1 (en) * 2003-01-10 2007-02-01 Conklin Gregory J Automatic deinterlacing and inverse telecine
US20070115298A1 (en) * 2003-03-04 2007-05-24 Clairvoyante, Inc Systems and Methods for Motion Adaptive Filtering
US20040213349A1 (en) * 2003-04-24 2004-10-28 Zador Andrew Michael Methods and apparatus for efficient encoding of image edges, motion, velocity, and detail
US20050024532A1 (en) * 2003-06-25 2005-02-03 Choi Seung Jong Apparatus for converting video format
US20040008904A1 (en) * 2003-07-10 2004-01-15 Samsung Electronics Co., Ltd. Method and apparatus for noise reduction using discrete wavelet transform
US20080123740A1 (en) * 2003-09-23 2008-05-29 Ye Jong C Video De-Noising Algorithm Using Inband Motion-Compensated Temporal Filtering
US20050078176A1 (en) * 2003-09-25 2005-04-14 Wis Technologies, Inc. System and method for efficiently performing an inverse telecine procedure
US20050089196A1 (en) * 2003-10-24 2005-04-28 Wei-Hsin Gu Method for detecting sub-pixel motion for optical navigation device
US7420618B2 (en) * 2003-12-23 2008-09-02 Genesis Microchip Inc. Single chip multi-function display controller and method of use thereof
US20070035621A1 (en) * 2004-04-22 2007-02-15 The Circle For The Promotion Of Science And Engine Movement decision method for acquiring sub-pixel motion image appropriate for super resolution processing and imaging device using the same
US20070247457A1 (en) * 2004-06-21 2007-10-25 Torbjorn Gustafsson Device and Method for Presenting an Image of the Surrounding World
US20060056724A1 (en) * 2004-07-30 2006-03-16 Le Dinh Chon T Apparatus and method for adaptive 3D noise reduction
US7813570B2 (en) * 2004-09-13 2010-10-12 Microsoft Corporation Accelerated video encoding using a graphics processing unit
US20080317132A1 (en) * 2004-11-12 2008-12-25 Industrial Technology Research Institute And University Of Washington System and Method for Fast Variable-Size Motion Estimation
US20060110062A1 (en) * 2004-11-23 2006-05-25 Stmicroelectronics Asia Pacific Pte. Ltd. Edge adaptive filtering system for reducing artifacts and method
US20060114358A1 (en) * 2004-12-01 2006-06-01 Silverstein D Amnon Artifact reduction in a digital video
US20070019114A1 (en) * 2005-04-11 2007-01-25 De Garrido Diego P Systems, methods, and apparatus for noise reduction
US7535517B2 (en) * 2005-04-14 2009-05-19 Samsung Electronics Co., Ltd. Method of motion compensated temporal noise reduction
US20060262202A1 (en) * 2005-05-17 2006-11-23 Canon Kabushiki Kaisha Image processing apparatus and image processing method
US20060267539A1 (en) * 2005-05-30 2006-11-30 Yoshisuke Kuramoto Telecine device that utilizes standard video camera circuits
US20070003156A1 (en) * 2005-07-01 2007-01-04 Ali Corporation Image enhancing system
US20070047647A1 (en) * 2005-08-24 2007-03-01 Samsung Electronics Co., Ltd. Apparatus and method for enhancing image using motion estimation
US20070058716A1 (en) * 2005-09-09 2007-03-15 Broadcast International, Inc. Bit-rate reduction for multimedia data streams
US20070074115A1 (en) * 2005-09-23 2007-03-29 Microsoft Corporation Automatic capturing and editing of a video
US20070071344A1 (en) * 2005-09-29 2007-03-29 Ouzilevski Alexei V Video acquisition with integrated GPU processing
US20070206117A1 (en) * 2005-10-17 2007-09-06 Qualcomm Incorporated Motion and apparatus for spatio-temporal deinterlacing aided by motion compensation for field-based video
US20070097259A1 (en) * 2005-10-20 2007-05-03 Macinnis Alexander Method and system for inverse telecine and field pairing
US20070171280A1 (en) * 2005-10-24 2007-07-26 Qualcomm Incorporated Inverse telecine algorithm based on state machine
US20070104273A1 (en) * 2005-11-10 2007-05-10 Lsi Logic Corporation Method for robust inverse telecine
US20070247546A1 (en) * 2006-02-02 2007-10-25 Hyung-Jun Lim Apparatus and methods for processing video signals
US20070189635A1 (en) * 2006-02-08 2007-08-16 Anja Borsdorf Method for noise reduction in imaging methods
US20070229704A1 (en) * 2006-03-30 2007-10-04 Satyajit Mohapatra Pipelining techniques for deinterlacing video information
US20080232665A1 (en) * 2007-03-21 2008-09-25 Anja Borsdorf Method for noise reduction in digital images with locally different and directional noise
US20080310509A1 (en) * 2007-06-13 2008-12-18 Nvidia Corporation Sub-pixel Interpolation and its Application in Motion Compensated Encoding of a Video Signal
US20080309680A1 (en) * 2007-06-13 2008-12-18 Teng-Yi Lin Noise Cancellation Device for an Image Signal Processing System
US8238420B1 (en) * 2008-01-24 2012-08-07 Adobe Systems Incorporated Video content transcoding for mobile devices

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Starck, J.-L., et al., "The Undecimated Wavelet Decomposition and its Reconstruction," IEEE Transactions on Image Processing, Vol. 16, No. 2, February 2007, pp. 297-309. *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103778641A (zh) * 2012-10-25 2014-05-07 西安电子科技大学 基于小波描述子的目标跟踪方法
US20160284092A1 (en) * 2015-03-23 2016-09-29 Canon Kabushiki Kaisha Image processing apparatus, image processing method, and non-transitory computer-readable storage medium
US9916521B2 (en) * 2015-03-23 2018-03-13 Canon Kabushiki Kaisha Depth normalization transformation of pixels
US20190246138A1 (en) * 2016-09-14 2019-08-08 Beamr Imaging Ltd. Method of pre-processing of video information for optimized video encoding
US10986363B2 (en) * 2016-09-14 2021-04-20 Beamr Imaging Ltd. Method of pre-processing of video information for optimized video encoding
CN110418604A (zh) * 2017-03-22 2019-11-05 赛佛欧普手术有限公司 用于检测电生理诱发电位变化的医疗系统和方法
US20180343448A1 (en) * 2017-05-23 2018-11-29 Intel Corporation Content adaptive motion compensated temporal filtering for denoising of noisy video for efficient coding
US10448014B2 (en) * 2017-05-23 2019-10-15 Intel Corporation Content adaptive motion compensated temporal filtering for denoising of noisy video for efficient coding

Also Published As

Publication number Publication date
EP2556664A4 (fr) 2014-01-22
US20100026886A1 (en) 2010-02-04
WO2010014270A1 (fr) 2010-02-04
WO2010014271A1 (fr) 2010-02-04
EP2556488A4 (fr) 2014-01-22
EP2556488A1 (fr) 2013-02-13
US8208065B2 (en) 2012-06-26
EP2556664A1 (fr) 2013-02-13

Similar Documents

Publication Publication Date Title
US20100026897A1 (en) Method, Apparatus, and Computer Software for Modifying Moving Images Via Motion Compensation Vectors, Degrain/Denoise, and Superresolution
US9342204B2 (en) Scene change detection and handling for preprocessing video with overlapped 3D transforms
US8265158B2 (en) Motion estimation with an adaptive search range
US7860167B2 (en) Apparatus and method for adaptive 3D artifact reducing for encoded image signal
US9635308B2 (en) Preprocessing of interlaced video with overlapped 3D transforms
US9628674B2 (en) Staggered motion compensation for preprocessing video with overlapped 3D transforms
EP0961229A2 (fr) Filtre non linéaire adaptatif destiné a réduire les artefacts de block
JP4982471B2 (ja) 信号処理方法、装置およびプログラム
US20130022288A1 (en) Image processing apparatus and method for reducing edge-induced artefacts
US20140023149A1 (en) Sparse geometry for super resolution video processing
JP2006146926A (ja) 2次元画像の表現方法、画像表現、画像の比較方法、画像シーケンスを処理する方法、動き表現を導出する方法、動き表現、画像の位置を求める方法、表現の使用、制御デバイス、装置、コンピュータプログラム、システム、及びコンピュータ読み取り可能な記憶媒体
Jaiswal et al. Exploitation of inter-color correlation for color image demosaicking
Dane et al. Optimal temporal interpolation filter for motion-compensated frame rate up conversion
Maalouf et al. Colour image super-resolution using geometric grouplets
US7129987B1 (en) Method for converting the resolution and frame rate of video data using Discrete Cosine Transforms
CN101087436A (zh) 视频信号的时间噪声分析
US8576926B2 (en) Single frame artifact filtration and motion estimation
JP5331643B2 (ja) 動きベクトル検出装置及びプログラム
Pham et al. Resolution enhancement of low-quality videos using a high-resolution frame
Andris et al. JPEG meets PDE-based Image Compression
KR101428531B1 (ko) 움직임 벡터의 정규화 및 윤곽선의 패턴 분석을 이용한 복수 영상 기반 초해상도 영상 생성 방법
JP5963166B2 (ja) 画像復元装置、方法、及びプログラム
Chen et al. High quality spatial interpolation of video frames using an adaptive warping method
Xiao et al. Robust orientation diffusion via PCA method and application to image super-resolution reconstruction
Venkatesan et al. Video deinterlacing with control grid interpolation

Legal Events

Date Code Title Description
AS Assignment

Owner name: CINNAFILM, INC.,NEW MEXICO

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:SHARLET, DILLON;MAURER, LANCE;REEL/FRAME:022988/0456

Effective date: 20090518

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION