WO2012109528A1 - Edge-based video interpolation for video and image upsampling - Google Patents

Edge-based video interpolation for video and image upsampling

Info

Publication number
WO2012109528A1
WO2012109528A1 (PCT/US2012/024630)
Authority
WO
WIPO (PCT)
Prior art keywords
pixels
interpolation
edge
filter
sharpening
Prior art date
Application number
PCT/US2012/024630
Other languages
English (en)
Inventor
Rahul VANAM
Yan Ye
Serhad Doken
Original Assignee
Vid Scale, Inc.
Priority date
Filing date
Publication date
Application filed by Vid Scale, Inc. filed Critical Vid Scale, Inc.
Publication of WO2012109528A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformation in the plane of the image
    • G06T 3/40 Scaling the whole image or part thereof
    • G06T 3/403 Edge-driven scaling

Definitions

  • Upsampling, or zoom, refers to the process of increasing the resolution of a digitized image or video. Upsampling is found in most video players, such as VLC and Windows Media Player. Online video hosting websites such as YouTube, Hulu, and Daily Motion provide users with the option to upsample the video to full-screen resolution. Many current TVs and DVD players come equipped with a built-in upsampling module.
  • Upsampling generally involves generating new image pixels from existing pixels.
  • In a video frame, a pixel is often coherent with its neighboring pixels. This property is used by most upsampling methods to generate new pixels.
  • In a video or image transcoder pipeline, upsampling can be challenging because the input is a low-resolution video containing coding artifacts.
  • The simplest approach to upsampling is pixel replication, which simply replicates pixels along both rows and columns, as sketched below.
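  • For illustration, a minimal pixel-replication sketch (a factor of 2 and NumPy are assumed here) might look like the following:

```python
import numpy as np

def upsample_by_replication(frame: np.ndarray, factor: int = 2) -> np.ndarray:
    """Nearest-neighbor upsampling: repeat each pixel along rows and columns."""
    return np.repeat(np.repeat(frame, factor, axis=0), factor, axis=1)

# A 2x2 frame becomes 4x4, with each pixel duplicated into a 2x2 block.
frame = np.array([[10, 20],
                  [30, 40]], dtype=np.uint8)
print(upsample_by_replication(frame))
```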
  • pixel replication results in severe blockiness.
  • the blockiness and aliasing artifacts seen in pixel replication can be mitigated by applying a low pass filter to the upsampled frame.
  • Some techniques of reducing blockiness include applying a low pass filter.
  • Other techniques generally referred to as interpolation may include averaging of neighboring pixels to determine the new pixel values.
  • One technique of interpolation utilizes "zero stuffing" followed by low-pass filtering. This can be described as enlarging the dimensions of an image by placing zero-valued pixels at intermediate locations, and then applying a lowpass filter to the enlarged image.
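  • A hedged sketch of zero stuffing followed by low-pass filtering is shown below; the bilinear kernel is an illustrative choice rather than a filter taken from this disclosure:

```python
import numpy as np
from scipy.ndimage import convolve

def upsample_zero_stuff(frame: np.ndarray, factor: int = 2) -> np.ndarray:
    """Enlarge the frame by inserting zero-valued pixels, then low-pass filter."""
    h, w = frame.shape
    stuffed = np.zeros((h * factor, w * factor), dtype=np.float64)
    stuffed[::factor, ::factor] = frame  # original pixels kept, zeros elsewhere

    # Illustrative bilinear low-pass kernel; a real system would choose its own filter.
    kernel = np.array([[0.25, 0.5, 0.25],
                       [0.5,  1.0, 0.5],
                       [0.25, 0.5, 0.25]])
    return convolve(stuffed, kernel, mode='nearest')
```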
  • videos commonly found on video hosting websites are sometimes heavily compressed to a low bitrate, and may have low resolution.
  • a commonly noticed artifact in these videos is blurriness.
  • An upsampling process usually enhances the visibility of the blurriness in the video. Therefore, sharpening filters may be used to restore some of the details to the video.
  • sharpening involves high pass filtering the blurred image and adding the weighted high pass image to the original image. This is referred to as unsharp masking.
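  • A minimal unsharp-masking sketch follows; the Gaussian low-pass filter and the weight and sigma values are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(image: np.ndarray, weight: float = 0.5, sigma: float = 1.0) -> np.ndarray:
    """Sharpen by adding a weighted high-pass image back to the original image."""
    img = image.astype(np.float64)
    low_pass = gaussian_filter(img, sigma=sigma)
    high_pass = img - low_pass             # high-frequency detail
    sharpened = img + weight * high_pass   # add weighted detail back
    return np.clip(sharpened, 0, 255).astype(np.uint8)
```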
  • Examples of high-pass filters include the Laplacian filter and the Laplacian of Gaussian (LoG) filter.
  • One embodiment described herein is a method for interpolating an image comprising: determining an edge characteristic associated with an interpolation point, the edge characteristic having an edge magnitude and an edge angle; selecting an interpolation filter in response to the edge angle; and determining a pixel value at the interpolation point using the selected interpolation filter.
  • the edge characteristic may be based on determining horizontal gradients and vertical gradients of pixel values in neighboring regions associated with the interpolation point.
  • the neighboring regions associated with the interpolation point may be horizontal rectangular regions, vertical rectangular regions or square regions. In one embodiment, the neighboring regions are determined in response to the interpolation point being a row interpolation point, a column interpolation point or a center interpolation point.
  • the edge characteristic is determined using a first order gradient filter (which may be referred to as a "mask", since a convolution operation is not being performed).
  • One such first order gradient filter is a modified Sobel operator.
  • the method may also include selecting an interpolation filter in response to the edge characteristic.
  • the selection of the interpolation filter may be made in response to the edge magnitude. That is, image points having edge magnitudes (or estimated edge magnitudes) above a threshold may use one set of interpolation filters (e.g., implemented as hardware circuits or software running on a microprocessor, or a combination thereof), while those below the threshold may use another set of interpolation filters.
  • the threshold may also be adapted depending on characteristics of the image and/or the edge characteristics.
  • the method may utilize interpolation filters that apply greater weighting to the pixels located along the direction of the edge angle, and less or no weight to the pixels located along the direction orthogonal to the edge angle.
  • the interpolation filter applies greater weighting to nearest-neighbor pixels, an intermediate weighting to the pixels located along the direction of the edge angle, and the least or no weight to the pixels located along the direction orthogonal to the edge angle.
  • the nearest-neighbor pixels are either pixels in the same row or in the same column as the interpolation point.
  • a computer readable medium may be used for storing instructions that when executed by the processor will cause the processor to: obtain a plurality of edge characteristics each associated with a respective one of a plurality of interpolation points, each of the edge characteristics having an edge magnitude and an edge angle; select an interpolation filter for each one of the plurality of interpolation points in response to the respective edge angle; and determine a pixel value for each of the plurality of interpolation points using the respective selected interpolation filter.
  • the method may be implemented in dedicated hardware or using hardware to accelerate a subset of the calculations.
  • the interpolation device comprises: an edge characteristic calculator configured to determine edge characteristics for each of a plurality of interpolation points; an interpolation filter selector configured to operate on the edge characteristics and to responsively generate interpolation filter identifiers for each of the plurality of interpolation points; and, an interpolation filter circuit configured to apply one of a plurality of interpolation filters in response to the interpolation filter identifiers and to output interpolated values for the plurality of interpolation points.
  • an edge characteristic calculator configured to determine edge characteristics for each of a plurality of interpolation points
  • an interpolation filter selector configured to operate on the edge characteristics and to responsively generate interpolation filter identifiers for each of the plurality of interpolation points
  • an interpolation filter circuit configured to apply one of a plurality of interpolation filters in response to the interpolation filter identifiers and to output interpolated values for the plurality of interpolation points.
  • an edge-based interpolation for upsampling followed by an adaptive sharpening filter is described herein.
  • the sharpening filter is controlled by the edge-based interpolation parameters that determine the pixels to be sharpened and the sharpening strength.
  • the method comprises:
  • the edge-based interpolation filter operation is combined with an adaptive sharpening filter into a single filter called a joint filter.
  • the method comprises: determining gradient data for image pixels to be interpolated; selectively sharpening neighboring original pixels; selectively identifying neighboring pixels that have yet to be interpolated in response to the pixel category of center row or column pixel; and, determining interpolated and sharpened pixel values using neighboring pixels according to a joint sharpening and interpolation filter.
  • FIG. 1 is a graphical depiction of pixels in an image
  • FIGS. 2A-2C are graphical depictions of center, column, and row interpolation points, respectively;
  • FIG. 3 is a frame-level PSNR comparison between the edge-based interpolation and the five-tap filter approach for the vid01_2X sequence;
  • FIG. 4 A is a block diagram of an edge-based interpolation and adaptive sharpening system using two separate filters
  • FIG. 4B is a more detailed block diagram of an edge-based interpolation and adaptive sharpening system using two separate filters
  • FIG. 5 is a flow chart of the interpolation process
  • FIG. 6 is a flow chart of the sharpening process
  • FIG. 7 is a block diagram of joint edge-based interpolation and adaptive sharpening filter.
  • FIG. 8 is a pixel diagram of flagged pixels
  • FIG. 9 is a pixel diagram of center, row, and column pixels
  • FIG. 10 illustrates the span of pixels over which the filter coefficients are applied
  • FIG. 11 is a flow chart for one embodiment of a combined filtering method
  • FIG. 12 is a pixel map showing the assignment of sharpening parameters for neighboring pixels of pixels that are (a) center, (b) row, and (c) column pixels;
  • FIGS. 13-16 are pixel diagrams identifying original pixels used to derive the interpolated pixels in the bounding box;
  • FIG. 17 identifies original pixels and estimated interpolated pixels for sharpening
  • FIG. 18A is a system diagram of an example communications system in which one or more disclosed embodiments may be implemented.
  • FIG. 18B is a system diagram of an example wireless transmit/receive unit (WTRU) that may be used within the communications system illustrated in FIG. 18A; and,
  • FIGS. 18C-18E are system diagrams of example radio access network and example core networks that may be used within the communications system illustrated in FIG. 18A.
  • the interpolation methods and devices described herein may be used in wired or wireless networks.
  • Devices including handheld devices, desktop, laptop or other computers may be used to perform the methods.
  • This includes cell phones, PDAs, tablet computers and/or displays, as well as cable TV set top boxes, televisions, and the like.
  • the interpolation, or video upsampling scheme, described herein retains edge fidelity and has a computational complexity lower than typical FIR filtering techniques.
  • FIG. 1 illustrates a pixel grid containing both original pixels (designated by squares) and estimated pixels, or pixel values to be estimated at interpolation points
  • Pixel A has diagonal original pixels
  • pixel B has two neighboring original pixels along the same column
  • pixel C has two neighboring original pixels along the same row.
  • the estimated pixels can be categorized into three groups: (a) pixels having diagonal neighboring original pixels (labeled as 'A' in FIG. 1), which may be referred to as center pixels; (b) pixels having neighboring original column pixels (labeled as 'B' in FIG. 1) as the nearest-neighbor pixels, which may be referred to as column pixels; and (c) pixels having neighboring original row pixels (labeled as 'C' in FIG. 1) as the nearest-neighbor pixels, which may be referred to as row pixels.
  • the method includes the following aspects: edge detection; edge angle determination; and pixel estimation.
  • the method may include determining an edge characteristic associated with an interpolation point, where the edge characteristic includes an edge magnitude and an edge angle; selecting an interpolation filter in response to the edge angle; and determining a pixel value at the interpolation point using the selected interpolation filter.
  • Any software or hardware that computes the horizontal and vertical gradients of pixels may be used for edge detection.
  • a modified Sobel operator is used due to its low computational complexity.
  • a Sobel operator includes two square masks that compute the horizontal and vertical gradients. These gradients may then be used to compute or otherwise obtain the angle of the gradient and the angle of the edge, or estimates thereof.
  • the standard Sobel operator is applied to a square grid of pixels, while the modified Sobel operator may be applied to a rectangular grid of pixels.
  • the estimated pixel category, or interpolation point category, may be used to select a different modified Sobel mask G_x and G_y.
  • G_x and G_y are masks that may be pointwise multiplied with the image pixels, with the products then summed to compute the horizontal (Δx) and vertical (Δy) gradients (or estimates thereof), respectively, as sketched below.
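  • A minimal sketch of this gradient computation, using the standard 3x3 Sobel masks as stand-ins for the category-dependent modified (rectangular) masks described above:

```python
import numpy as np

# Standard 3x3 Sobel masks, used here only as placeholders; the modified masks
# G_x and G_y depend on the interpolation-point category (center, row, column).
G_X = np.array([[-1, 0, 1],
                [-2, 0, 2],
                [-1, 0, 1]], dtype=np.float64)
G_Y = G_X.T

def edge_characteristic(neighborhood: np.ndarray):
    """Pointwise-multiply each mask with a 3x3 pixel neighborhood and sum the
    products to estimate the horizontal (dx) and vertical (dy) gradients."""
    assert neighborhood.shape == G_X.shape
    dx = float(np.sum(G_X * neighborhood))
    dy = float(np.sum(G_Y * neighborhood))
    magnitude = np.hypot(dx, dy)
    # The edge runs perpendicular to the gradient direction.
    edge_angle = (np.degrees(np.arctan2(dy, dx)) + 90.0) % 180.0
    return magnitude, edge_angle
```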
  • the value of the threshold T_edge may depend on the resolution of the video, the pixel category, and other factors. Threshold values may be determined empirically, and some that have been found to perform well are set forth below in Table 1. The choice of threshold values can differ: in some embodiments a constant threshold may be used, while in other embodiments the threshold may be adapted on a frame-by-frame basis or on a block-of-pixels basis.
  • An adaptive threshold can be computed automatically based on the pixel characteristics within a block. At lower resolutions, neighboring pixels are less likely to be coherent, resulting in larger gradients. Using a smaller threshold would then result in many pixels being classified as edges. Therefore, to reduce incorrect classification, larger thresholds may be used for smaller-resolution videos.
  • Table 1 List of thresholds corresponding to different frame sizes and pixel categories.
  • the interpolation filter may be selected in response to the edge (or gradient) magnitude, edge (or gradient) angle, or both.
  • one interpolation filter may be selected for interpolation points where the corresponding edge magnitude G is less than T_edge.
  • one interpolation filter may be selected for interpolation points where the corresponding edge has either Δx or Δy equal to zero. In each of these scenarios, an edge angle need not be determined or provided, and the interpolation filter used to interpolate the new pixel at the interpolation point is as follows: a. For a center pixel: (a0 + a1 + b0 + b1)/4.
  • the interpolation filter to use at respective interpolation points is determined in response to the pixel category and edge angle.
  • the interpolation assumes that the edges are linear. For curved edges, additional angles may be checked during pixel estimation. Alternatively, edge pixels belonging to a curve may be detected using a Hough transform-based method, and these pixels can be used for interpolating new pixels along a curved edge.
  • an interpolation filter is selected based on whether the edge angle is approximately 45 degrees.
  • One embodiment uses one center-interpolation filter for edge angles in the range of 45 to 135 degrees. Alternative embodiments may utilize other ranges of angles.
  • the interpolation may be performed as follows:
  • new pixel = (a0 + b1)/2
  • new pixel = (b0 + 2*(a1 + b1) + a2)/6
  • new pixel = (c0 + 2*(b0 + b1) + a1)/6
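  • As an illustration of how such rules might be dispatched in code, the sketch below uses the pixel labels from FIG. 1; the assignment of each formula to a particular angle range is an assumption made for illustration and is not taken verbatim from this description:

```python
def interpolate_center_pixel(a0, a1, b0, b1, a2, c0,
                             edge_angle, edge_mag, t_edge):
    """Hedged sketch of center-pixel interpolation dispatch."""
    if edge_mag < t_edge:
        # No significant edge: plain average of the four diagonal neighbors.
        return (a0 + a1 + b0 + b1) / 4.0
    if 45.0 <= edge_angle <= 135.0:
        # Edge roughly along one diagonal: average the two pixels lying on it
        # (illustrative mapping of this formula to this angle range).
        return (a0 + b1) / 2.0
    # Otherwise, weight pixels lying along the edge direction more heavily
    # (again an illustrative mapping, not the exact rule from the text).
    return (b0 + 2.0 * (a1 + b1) + a2) / 6.0
```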
  • the method may therefore utilize interpolation filters that apply greater weighting to the pixels located along the direction of the edge angle, and less or no weight to the pixels located along the direction orthogonal to the edge angle.
  • the interpolation filter applies greater weighting to nearest-neighbor pixels, an intermediate weighting to the pixels located along the direction of the edge angle, and the least or no weight to the pixels located along the direction orthogonal to the edge angle.
  • the nearest-neighbor pixels are either pixels in the same row or in the same column as the interpolation point.
  • the gradient measurements Δx and Δy may be used in conjunction with a look-up table, or LUT, to determine an appropriate interpolation filter.
  • the LUT may store a desired filter impulse response, or may simply provide an interpolation filter identifier that may be used to determine and apply the appropriate interpolation filter.
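  • A minimal sketch of LUT-based filter selection; the category names, angle bins, and filter identifiers below are hypothetical labels introduced only for illustration:

```python
import numpy as np

# Hypothetical LUT keyed by (pixel category, coarse edge-angle bin); the entries
# are interpolation-filter identifiers, not an actual table from this disclosure.
FILTER_LUT = {
    ("center", "no_edge"): "avg4",
    ("center", "diag_1"): "diag_filter_1",
    ("center", "diag_2"): "diag_filter_2",
    ("row", "no_edge"): "avg2_row",
    ("column", "no_edge"): "avg2_col",
}

def select_filter(category: str, dx: float, dy: float, t_edge: float) -> str:
    """Map gradient measurements to an interpolation-filter identifier."""
    if np.hypot(dx, dy) < t_edge or dx == 0 or dy == 0:
        angle_bin = "no_edge"
    else:
        edge_angle = (np.degrees(np.arctan2(dy, dx)) + 90.0) % 180.0
        angle_bin = "diag_1" if 45.0 <= edge_angle <= 135.0 else "diag_2"
    return FILTER_LUT.get((category, angle_bin), "avg4")
```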
  • One embodiment described herein is a method for interpolating an image comprising: determining an edge characteristic associated with an interpolation point, the edge characteristic having an edge magnitude and an edge angle; selecting an interpolation filter in response to the edge angle; and determining a pixel value at the interpolation point using the selected interpolation filter.
  • the edge characteristic may be based on determining horizontal gradients and vertical gradients of pixel values in neighboring regions associated with the interpolation point.
  • the neighboring regions associated with the interpolation point may be horizontal rectangular regions as shown in FIG. 2B, vertical rectangular regions as shown in FIG. 2C or square regions as shown in FIG. 2A.
  • the neighboring regions are determined in response to the interpolation point being a row interpolation point, a column interpolation point or a center interpolation point.
  • the edge characteristic is determined using a first order gradient filter (which may be referred to as a "mask", when a convolution operation is not being performed).
  • a first order gradient filter is a modified Sobel operator.
  • VQM scores close to zero indicate no artifacts/impairments, while scores close to one indicate heavy impairment.
  • VQM uses only 15 seconds of video for comparison.
  • the upsampling schemes are tested on different videos and transcoder bitrates and list the results in Table 3.
  • the terms '2X' and '3X' in Table 3 refer to the transcoded video being encoded at half and one third of the original bitrate, respectively.
  • both upsampling methods yield low impairments, with the edge-based interpolation performing slightly better than the 5-tap filter. It should be noted that when viewing the two processed videos, the edge-based interpolation appears significantly better. This is expected, as most objective measurement schemes have limitations.
  • the 3X videos often result in a slightly higher VQM score compared to the 2X videos, since lowering the bitrate increases coding distortions.
  • the edge-based interpolation has small perceptual improvement over the 5-tap method.
  • the VQM score for Book of Eli is greater than that for the 'vid' videos, since the downsampled video was encoded at a bitrate 50 times lower than that of the original video.
  • FIG. 3 illustrates a snapshot of PSNR vs. frame for vid01_2X sequence. It is clear that even on a frame-by-frame basis there is PSNR improvement when using the edge- based interpolation approach.
  • VQM scores. 2X and 3X indicate that the videos have been encoded at half and one third the original bitrate.
  • the two upsampling methods are compared by looking at the upsampled videos.
  • Experimental results obtained by subjecting video frames to the two upsampling schemes indicate that the edge-based interpolation method has sharper edges and more details compared to the five-tap filter approach.
  • the method is performed using a two-step filter approach, where an interpolation filter upsamples the input video to a higher spatial resolution, followed by a sharpening filter that enhances the details in the video.
  • the process is performed by a joint filter approach where one filter is applied to the input video signal to upsample it to a higher resolution and to enhance the signal details simultaneously.
  • the joint filter performs both interpolation and sharpening together, and may have associated processing gains.
  • the memory access may also be reduced.
  • In FIG. 4A, two separate filters - an edge-based interpolation filter and an adaptive sharpening filter - are combined to generate upsampled and sharpened video frames from a raw video frame source.
  • FIG. 4B depicts one embodiment where a YUV frame is first upsampled by a factor of two in both dimensions using an edge-based interpolator.
  • the edge based interpolator described above may be used.
  • Edge information captured during the interpolation of the luminance component (Y) of the frame may be used for determining pixels to be sharpened, and for controlling the strength of the adaptive sharpening filter.
  • the edge information may be obtained from edge detection filters selected in part on the location of the interpolated pixel as either a center pixel A, a row pixel B or a column pixel C. Sharpening may be performed on the luma component because it usually contains more details or edges. The sharpened luma component is then combined with the chroma components to result in an upsampled sharpened frame. Edge information may also be extracted from one or more of the chroma components, and a weighted combination of the edge information from luma and from one or more of the chroma components may be used. Further, one or more of the chroma components may also go through adaptive sharpening.
  • The flow charts in FIG. 5 and FIG. 6 illustrate one algorithm for the edge-based interpolation and sharpening stages, respectively.
  • strength[i][j] for the interpolated pixel and its neighboring pixels is set to grad_ij. For example, if the interpolated pixel at location (i, j) has a gradient grad_ij ≥ sharp_thresh, then map[i][j] and strength[i][j] for that pixel and its neighbors are set accordingly.
  • a sharpness threshold of 100 is suitable, but the choice of threshold can differ. In other embodiments, the sharpness threshold may also be determined from the local image characteristics and adapted to different values throughout the image; a sketch of this flagging step follows.
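  • A hedged sketch of how the sharpening map and strength buffers might be populated; the array layout, boundary handling, and use of the maximum gradient for overlapping neighborhoods are assumptions:

```python
import numpy as np

def build_sharpening_maps(grad: np.ndarray, sharp_thresh: float = 100.0):
    """Flag edge pixels and their 3x3 neighborhoods for sharpening, recording
    the controlling gradient as the sharpening strength."""
    h, w = grad.shape
    sharp_map = np.zeros((h, w), dtype=bool)
    strength = np.zeros((h, w), dtype=np.float64)
    for i, j in zip(*np.nonzero(grad >= sharp_thresh)):
        i0, i1 = max(i - 1, 0), min(i + 2, h)
        j0, j1 = max(j - 1, 0), min(j + 2, w)
        sharp_map[i0:i1, j0:j1] = True
        # Keep the largest gradient where neighborhoods overlap (an assumption).
        strength[i0:i1, j0:j1] = np.maximum(strength[i0:i1, j0:j1], grad[i, j])
    return sharp_map, strength
```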
  • Sharpening is applied to the neighboring pixels of an edge pixel to remove artifacts that might appear as speckles due to non-uniform variation in luminance.
  • the map[i][j] and strength[i][j] information from the interpolation stage is reused. This may lead to lower computational complexity.
  • the discrete LoG filter of size K×L is defined by sampling the Laplacian of Gaussian function over a K×L grid.
  • the parameter σ determines the filtering strength. A larger σ (> 1) may be used for weak sharpening, while a smaller σ (< 1) may be used for stronger sharpening.
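  • For reference, the textbook continuous Laplacian of Gaussian kernel, of which a K×L discrete filter is a sampled version (this is the standard definition rather than an equation reproduced from this description):

$$\mathrm{LoG}(x, y) = -\frac{1}{\pi\sigma^{4}}\left[1 - \frac{x^{2}+y^{2}}{2\sigma^{2}}\right]e^{-\frac{x^{2}+y^{2}}{2\sigma^{2}}}$$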
  • the filter size determines the number of neighboring pixels considered during sharpening. In other embodiments, different filter sizes can be chosen based on both video resolution and content. Further, filter sizes may be adapted throughout an image frame. As an example, the 5x5 LoG filter operation may be expressed as a weighted sum of the pixels within the 5x5 window.
  • FIG. 10 illustrates the span of pixels over which the filter coefficients are applied, where the black dot indicates the interpolated pixel (called the center pixel) that is to be sharpened using a 5x5 LoG filter.
  • the LoG filter coefficients are applied to the pixels within the bounding box.
  • σ is selected based on the edge strength as follows:
  • no sharpening is applied when strength[i][j] < 100
  • local image characteristics may also be used for determining σ. For example, over-sharpening of strong edges (that is, use of small σ values on strong edges) may result in ringing artifacts. To avoid this, larger values of σ may be used as the edge strength strength[i][j] increases.
  • the sharpening filter can also be made sensitive to noise by detecting noise before the interpolation stage and using this information to choose a LoG filter with a larger σ (σ > 1). This ensures that noise detected as edges is not amplified.
  • the sharpening process can be made sensitive to false edges, such as blocking artifacts. A blocking artifact detection algorithm can be used prior to interpolation. During the sharpening process, pixels classified as false edges can be smoothed instead of being sharpened.
  • FIG. 7 depicts a joint edge-based interpolation and adaptive sharpening filter embodiment.
  • interpolated pixels are computed within a filter window before the sharpening filter is applied.
  • the edge- based interpolation is combined with sharpening filter to provide a joint filter.
  • a flow chart for the combined filtering approach is given in FIG. 11.
  • the gradient grad_ij is computed for the pixel located at (i, j) using a modified Sobel operator. Two buffers - sharp_flag and future_grad - are used to flag the sharpening of neighboring pixels located at (k, n) that have not been interpolated yet. If the gradient grad_ij is greater than or equal to a predefined sharpness threshold sharp_thresh, then the following two operations are performed on the neighboring 3x3 pixels, as illustrated in FIG. 12:
  • In one embodiment, sharp_thresh = 100.
  • pixel sharpening candidates are the neighboring 3x3 pixels.
  • a larger neighborhood may be considered, and a different sharp_thresh may be used; for example, the value of sharp_thresh may be determined based on local image characteristics. If grad_ij is less than sharp_thresh, then check whether the associated sharp_flag[i][j] is equal to 1. If sharp_flag[i][j] is 1, then use its associated future_grad[i][j] value to choose a suitable joint filter; otherwise, continue to use grad_ij for selecting the joint filter. A sketch of the flagging operation follows.
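  • A hedged sketch of the flagging operation; the buffer names follow the text, while the raster-scan notion of which neighbors count as 'future' (not yet interpolated) is an assumption:

```python
import numpy as np

def flag_future_neighbors(i, j, grad_ij, sharp_flag, future_grad,
                          sharp_thresh=100.0):
    """If the pixel being interpolated at (i, j) is an edge pixel, flag its 3x3
    neighbors that have not been interpolated yet so that they are later
    sharpened with the same gradient."""
    if grad_ij < sharp_thresh:
        return
    h, w = sharp_flag.shape
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            k, n = i + di, j + dj
            # Neighbors after (i, j) in raster order are assumed 'future' pixels.
            if 0 <= k < h and 0 <= n < w and (di > 0 or (di == 0 and dj > 0)):
                sharp_flag[k, n] = 1
                future_grad[k, n] = grad_ij
```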
  • the system uses the gradient of a previously interpolated neighboring pixel at (i-a, j-b) whose grad_{i-a,j-b} > sharp_thresh, with 0 ≤ a, b ≤ 1.
  • the black dot is a pixel being interpolated and sharpened, and has a gradient greater than the sharpening threshold.
  • the 'x' marks are original pixels to be sharpened, and the gray dots are pixels yet to be interpolated that are flagged for sharpening.
  • the joint filter operates partially on pixels that have already been interpolated in addition to the original pixels.
  • pixels within win_size are used for sharpening, as illustrated by the box in FIG. 13.
  • all original pixels that were used to derive the interpolated pixels in the bounding box are identified, as illustrated by the bold X's in FIG. 13.
  • the sharpening filter operates on these original pixels. Because these pixels were used in deriving the interpolated pixels in the bounding box, the high-frequency component computed from these original pixels would be close to the high-frequency component computed from all pixels in the bounding box.
  • the joint filter can be written as the sum of the two filters given by
  • h(grad, edge angle, win_size, category) = h_EI(grad, edge angle, filter_size(win_size), category) + λ · LoG(σ(grad), filter_size(win_size), category)
  • LoG is the sharpening filter applied only to the original pixels
  • h_EI is the edge-based interpolator.
  • In one embodiment, the weighting factor λ = 1; in other embodiments, different λ values can be chosen.
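  • A minimal sketch of forming such a joint kernel as the sum of an interpolation kernel and a weighted sharpening kernel; the kernel values and the weight lam are placeholders rather than tabulated coefficients from this description:

```python
import numpy as np

def joint_kernel(h_ei: np.ndarray, log_kernel: np.ndarray, lam: float = 1.0) -> np.ndarray:
    """Combine an edge-based interpolation kernel with a scaled LoG sharpening
    kernel of the same size so that one filtering pass interpolates and sharpens."""
    assert h_ei.shape == log_kernel.shape
    return h_ei + lam * log_kernel

# Toy usage with placeholder 4x4 kernels.
h_ei = np.full((4, 4), 1.0 / 16.0)   # placeholder interpolation weights
log_k = np.zeros((4, 4))
log_k[1, 1] = 0.1                    # placeholder sharpening weights
k = joint_kernel(h_ei, log_k)
```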
  • the edge-based interpolator h_EI is a function of the gradient, edge angle, filter size, and pixel category. As described above with respect to FIG. 9, for edge-based interpolation, interpolated pixels are categorized as center, row, and column pixels based on their position in the pixel grid. Therefore, including the original pixels, the upsampled frame has four pixel categories.
  • Each interpolated pixel category (e.g., center, row, or column) is associated with three edge directions.
  • the three edge directions are no edge, edge angles between 0 and 90 degrees, and edge angles between 90 and 180 degrees.
  • the three edge directions are no edge, edge angles between 35 and 55 degrees, and edge angles between 125 and 145 degrees.
  • filtering is performed on the 4x4 original pixels. Therefore, a 4x4 LoG filter is considered, resulting in a 4x4 joint filter in Equation (7).
  • filter_size(5x5) for row, column, and original pixels are 5x4, 4x5, and 5x5, respectively, as illustrated in FIGS. 14, 15, and 16.
  • FIG. 13 interpolation and sharpening of the center pixel indicated by the black dot is shown.
  • the interpolated pixels within win size are derived from the original pixels represented as bold 'x'.
  • a LoG filter of 4x4 is applied to these original pixels.
  • FIG. 14 interpolation and sharpening of row pixel indicated by the black dot is shown.
  • a LoG filter of 5x4 is applied to the original pixels indicated by bold 'x'.
  • FIG. 15 interpolation and sharpening of column pixel indicated by the black dot is shown.
  • a LoG filter of 4x5 is applied to the original pixels indicated by bold 'x'.
  • sharpening of original pixel labeled 'A' is depicted.
  • a LoG filter of 5x5 is applied to the original pixels indicated by bold 'x'.
  • the joint filter_size(win_size) for center pixels is 4x4.
  • h_center(gradient, edge angle, filter_size(win_size)) = filter 1, if the edge is in the first direction and gradient > edge threshold; filter 2, if the edge is in the second direction (125° ≤ edge angle ≤ 145°) and gradient > edge threshold; filter 3, else; where the edge threshold determines whether an interpolated pixel is an edge pixel or not.
  • In one embodiment, the edge threshold > sharp_thresh.
  • the filter h_center corresponding to no sharpening is used when only interpolation is applied and sharpening is not.
  • the joint filters for all other pixel categories may be derived in a similar manner. In one embodiment, four different σ values, as listed in Equation (5), and two different win_size values are used. Therefore, the number of joint filters for interpolated pixels is:
  • N_joint_filters_interp_pixels = Number of pixel categories × Number of interpolation directions × Number of σ × Number of win_size
  • the original pixels need sharpening only, and therefore edge-based interpolation is not included in their filter design.
  • the number of joint filters for original pixels is:
  • N_joint_filters_orig_pixels = Number of σ × Number of win_size
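  • As a worked count, assuming the three interpolated-pixel categories, three edge directions, four σ values, and two win_size values described above:

$$N_{\text{joint\_filters\_interp\_pixels}} = 3 \times 3 \times 4 \times 2 = 72, \qquad N_{\text{joint\_filters\_orig\_pixels}} = 4 \times 2 = 8$$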
  • These filters may be stored in look-up-tables, and may be chosen based on pixel category, gradient, edge angle, and win_size.
  • the joint filter is the same as Equation (7), but it uses a different sharpening component.
  • One example of the alternative joint filter is described for center pixels for small-screen video resolution (e.g., 480x204).
  • the same design approach extends to row, column, and original pixels as well.
  • In this example, win_size = 5x5.
  • Original pixels and estimated interpolated pixels for sharpening are illustrated in FIG. 17, where all pixels within win_size are used for sharpening, with the interpolated pixels (gray dots) estimated from the original pixels (indicated by bold 'x').
  • the interpolated pixels are estimated using edge-based interpolation for the case where the gradients are smaller than the edge threshold. This corresponds to estimation filters that average the nearest original neighbors; for a center pixel, for example, each of the four diagonal neighbors receives a weight of 0.25.
  • FIG. 17 can be represented in table form in Table 6 below, where C_ij are pixels.
  • the 5x5 sharpening filter is represented below.
  • the sharpening operation is:
  • the sharpening filter g that operates only on original pixels, and yet provides the same high-frequency component C_h as in Equation (16), may be used according to:
  • Equation (16) is expressed in terms of the original pixels as follows.
  • In Equation (17), the sum of the coefficients corresponding to C_00 is g_00, and the sum of the coefficients corresponding to C_02 is g_01. Combining the terms yields Equation (18).
  • h_EI,center is as defined in Equation (8).
  • an adaptive approach can be used to switch between the two joint filter designs; for example, the adaptive model may be based on local image characteristics.
  • interpolation and sharpening filters described herein may be incorporated into any of a wide variety of terminals, such as, without limitation, digital televisions, wireless communication devices, wireless broadcast systems, personal digital assistants (PDAs), laptop or desktop computers, tablet computers, digital cameras, digital recording devices, video gaming devices, video game consoles, cellular or satellite radio telephones, digital media players, and the like.
  • FIG. 18A is a diagram of an example communications system 100 in which one or more disclosed embodiments may be implemented.
  • the communications system 100 may be a multiple access system that provides content, such as voice, data, video, messaging, broadcast, etc., to multiple wireless users.
  • the communications system 100 may enable multiple wireless users to access such content through the sharing of system resources, including wireless bandwidth.
  • the communications systems 100 may employ one or more channel access methods, such as code division multiple access (CDMA), time division multiple access (TDMA), frequency division multiple access (FDMA), orthogonal FDMA (OFDMA), single-carrier FDMA (SC-FDMA), and the like.
  • the communications system 100 may include wireless transmit/receive units (WTRUs) 102a, 102b, 102c, 102d, a radio access network (RAN) 104, a core network 106, a public switched telephone network (PSTN) 108, the Internet 110, and other networks 112, though it will be appreciated that the disclosed embodiments contemplate any number of WTRUs, base stations, networks, and/or network elements.
  • WTRUs 102a, 102b, 102c, 102d may be any type of device configured to operate and/or communicate in a wireless environment.
  • the WTRUs 102a, 102b, 102c, 102d may be configured to transmit and/or receive wireless signals and may include user equipment (UE), a mobile station, a fixed or mobile subscriber unit, a pager, a cellular telephone, a personal digital assistant (PDA), a smartphone, a laptop, a netbook, a personal computer, a wireless sensor, consumer electronics, and the like.
  • the communications systems 100 may also include a base station 114a and a base station 114b.
  • Each of the base stations 114a, 114b may be any type of device configured to wirelessly interface with at least one of the WTRUs 102a, 102b, 102c, 102d to facilitate access to one or more communication networks, such as the core network 106, the Internet 110, and/or the networks 112.
  • the base stations 114a, 114b may be a base transceiver station (BTS), a Node-B, an eNode B, a Home Node B, a Home eNode B, a site controller, an access point (AP), a wireless router, and the like. While the base stations 114a, 114b are each depicted as a single element, it will be appreciated that the base stations 114a, 114b may include any number of interconnected base stations and/or network elements.
  • the base station 114a may be part of the RAN 104, which may also include other base stations and/or network elements (not shown), such as a base station controller (BSC), a radio network controller (RNC), relay nodes, etc.
  • the base station 114a and/or the base station 114b may be configured to transmit and/or receive wireless signals within a particular geographic region, which may be referred to as a cell (not shown).
  • the cell may further be divided into cell sectors.
  • the cell associated with the base station 114a may be divided into three sectors.
  • the base station 114a may include three transceivers, i.e., one for each sector of the cell.
  • the base station 114a may employ multiple-input multiple-output (MIMO) technology and, therefore, may utilize multiple transceivers for each sector of the cell.
  • the base stations 114a, 114b may communicate with one or more of the WTRUs 102a, 102b, 102c, 102d over an air interface 116, which may be any suitable wireless communication link (e.g., radio frequency (RF), microwave, infrared (IR), ultraviolet (UV), visible light, etc.).
  • the air interface 116 may be established using any suitable radio access technology (RAT).
  • the communications system 100 may be a multiple access system and may employ one or more channel access schemes, such as CDMA, TDMA, FDMA, OFDMA, SC-FDMA, and the like.
  • the base station 114a in the RAN 104 and the WTRUs 102a, 102b, 102c may implement a radio technology such as Universal Mobile Telecommunications System (UMTS) Terrestrial Radio Access (UTRA), which may establish the air interface 116 using wideband CDMA (WCDMA).
  • WCDMA may include communication protocols such as High-Speed Packet Access (HSPA) and/or Evolved HSPA (HSPA+).
  • HSPA may include High-Speed Downlink Packet Access (HSDPA) and/or High-Speed Uplink Packet Access (HSUPA).
  • In another embodiment, the base station 114a and the WTRUs 102a, 102b, 102c may implement a radio technology such as Evolved UMTS Terrestrial Radio Access (E-UTRA), which may establish the air interface 116 using Long Term Evolution (LTE) and/or LTE-Advanced (LTE-A).
  • In other embodiments, the base station 114a and the WTRUs 102a, 102b, 102c may implement radio technologies such as IEEE 802.16 (i.e., Worldwide Interoperability for Microwave Access (WiMAX)), CDMA2000, CDMA2000 1X, CDMA2000 EV-DO, Interim Standard 2000 (IS-2000), Interim Standard 95 (IS-95), Interim Standard 856 (IS-856), Global System for Mobile communications (GSM), Enhanced Data rates for GSM Evolution (EDGE), GSM EDGE (GERAN), and the like.
  • The base station 114b in FIG. 18A may be a wireless router, Home Node B, Home eNode B, or access point, and may utilize any suitable RAT for facilitating wireless connectivity in a localized area, such as a place of business, a home, a vehicle, a campus, and the like.
  • the base station 114b and the WTRUs 102c, 102d may implement a radio technology such as IEEE 802.11 to establish a wireless local area network (WLAN).
  • the base station 114b and the WTRUs 102c, 102d may implement a radio technology such as IEEE 802.15 to establish a wireless personal area network (WPAN).
  • the base station 114b and the WTRUs 102c, 102d may utilize a cellular-based RAT (e.g., WCDMA, CDMA2000, GSM, LTE, LTE-A, etc.) to establish a picocell or femtocell.
  • the base station 114b may have a direct connection to the Internet 110.
  • the base station 114b may not be required to access the Internet 110 via the core network 106.
  • the RAN 104 may be in communication with the core network 106, which may be any type of network configured to provide voice, data, applications, and/or voice over internet protocol (VoIP) services to one or more of the WTRUs 102a, 102b, 102c, 102d.
  • the core network 106 may provide call control, billing services, mobile location-based services, pre-paid calling, Internet connectivity, video distribution, etc., and/or perform high-level security functions, such as user authentication.
  • the RAN 104 and/or the core network 106 may be in direct or indirect communication with other RANs that employ the same RAT as the RAN 104 or a different RAT.
  • the core network 106 may also be in communication with another RAN (not shown) employing a GSM radio technology.
  • the core network 106 may also serve as a gateway for the WTRUs 102a, 102b, 102c, 102d to access the PSTN 108, the Internet 110, and/or other networks 112.
  • the PSTN 108 may include circuit-switched telephone networks that provide plain old telephone service (POTS).
  • the Internet 110 may include a global system of interconnected computer networks and devices that use common communication protocols, such as the transmission control protocol (TCP), user datagram protocol (UDP) and the internet protocol (IP) in the TCP/IP internet protocol suite.
  • the networks 112 may include wired or wireless communications networks owned and/or operated by other service providers.
  • the networks 112 may include another core network connected to one or more RANs, which may employ the same RAT as the RAN 104 or a different RAT.
  • the communications system 100 may include multi-mode capabilities, i.e., the WTRUs 102a, 102b, 102c, 102d may include multiple transceivers for communicating with different wireless networks over different wireless links.
  • the WTRU 102c shown in FIG. 18A may be configured to communicate with the base station 114a, which may employ a cellular-based radio technology, and with the base station 114b, which may employ an IEEE 802 radio technology.
  • FIG. 18B is a system diagram of an example WTRU 102. As shown in FIG. 18B, the WTRU 102 may include a processor 118, a transceiver 120, a transmit/receive element 122, a speaker/microphone 124, a keypad 126, a display/touchpad 128, non-removable memory 130, removable memory 132, a power source 134, a global positioning system (GPS) chipset 136, and other peripherals 138. It will be appreciated that the WTRU 102 may include any sub-combination of the foregoing elements while remaining consistent with an embodiment.
  • the processor 118 may be a general purpose processor, a special purpose processor, a conventional processor, a digital signal processor (DSP), a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuit (ASIC) circuits, Field Programmable Gate Array (FPGA) circuits, any other type of integrated circuit (IC), a state machine, and the like.
  • the processor 118 may perform signal coding, data processing, power control, input/output processing, and/or any other functionality that enables the WTRU 102 to operate in a wireless environment.
  • the processor 118 may be coupled to the transceiver 120, which may be coupled to the transmit/receive element 122. While FIG. 18B depicts the processor 118 and the transceiver 120 as separate components, it will be appreciated that the processor 118 and the transceiver 120 may be integrated together in an electronic package or chip.
  • the transmit/receive element 122 may be configured to transmit signals to, or receive signals from, a base station (e.g., the base station 114a) over the air interface 116.
  • the transmit/receive element 122 may be an antenna configured to transmit and/or receive RF signals.
  • the transmit/receive element 122 may be an emitter/detector configured to transmit and/or receive IR, UV, or visible light signals, for example.
  • the transmit/receive element 122 may be configured to transmit and receive both RF and light signals. It will be appreciated that the transmit/receive element 122 may be configured to transmit and/or receive any combination of wireless signals.
  • the WTRU 102 may include any number of transmit/receive elements 122. More specifically, the WTRU 102 may employ MIMO technology. Thus, in one embodiment, the WTRU 102 may include two or more transmit/receive elements 122 (e.g., multiple antennas) for transmitting and receiving wireless signals over the air interface 116.
  • the WTRU 102 may include two or more transmit/receive elements 122 (e.g., multiple antennas) for transmitting and receiving wireless signals over the air interface 116.
  • the transceiver 120 may be configured to modulate the signals that are to be transmitted by the transmit/receive element 122 and to demodulate the signals that are received by the transmit/receive element 122.
  • the WTRU 102 may have multi-mode capabilities.
  • the transceiver 120 may include multiple transceivers for enabling the WTRU 102 to communicate via multiple RATs, such as UTRA and IEEE 802.11 , for example.
  • the processor 118 of the WTRU 102 may be coupled to, and may receive user input data from, the speaker/microphone 124, the keypad 126, and/or the display/touchpad 128 (e.g., a liquid crystal display (LCD) display unit or organic light- emitting diode (OLED) display unit).
  • the processor 118 may also output user data to the speaker/microphone 124, the keypad 126, and/or the display/touchpad 128.
  • the processor 118 may access information from, and store data in, any type of suitable memory, such as the non-removable memory 130 and/or the removable memory 132.
  • the nonremovable memory 130 may include random-access memory (RAM), read-only memory (ROM), a hard disk, or any other type of memory storage device.
  • the removable memory 132 may include a subscriber identity module (SIM) card, a memory stick, a secure digital (SD) memory card, and the like.
  • the processor 118 may access information from, and store data in, memory that is not physically located on the WTRU 102, such as on a server or a home computer (not shown).
  • the processor 118 may receive power from the power source 134, and may be configured to distribute and/or control the power to the other components in the WTRU 102.
  • the power source 134 may be any suitable device for powering the WTRU 102.
  • the power source 134 may include one or more dry cell batteries (e.g., nickel- cadmium (NiCd), nickel-zinc (NiZn), nickel metal hydride (NiMH), lithium-ion (Li-ion), etc.), solar cells, fuel cells, and the like.
  • the processor 118 may also be coupled to the GPS chipset 136, which may be configured to provide location information (e.g., longitude and latitude) regarding the current location of the WTRU 102.
  • the WTRU 102 may receive location information over the air interface 116 from a base station (e.g., base stations 114a, 114b) and/or determine its location based on the timing of the signals being received from two or more nearby base stations. It will be appreciated that the WTRU 102 may acquire location information by way of any suitable location-determination method while remaining consistent with an embodiment.
  • the processor 118 may further be coupled to other peripherals 138, which may include one or more software and/or hardware modules that provide additional features, functionality and/or wired or wireless connectivity.
  • the peripherals 138 may include an accelerometer, an e-compass, a satellite transceiver, a digital camera (for photographs or video), a universal serial bus (USB) port, a vibration device, a television transceiver, a hands free headset, a Bluetooth® module, a frequency modulated (FM) radio unit, a digital music player, a media player, a video game player module, an Internet browser, and the like.
  • FIG. 18C is a system diagram of the RAN 104 and the core network 106 according to an embodiment.
  • the RAN 104 may employ a UTRA radio technology to communicate with the WTRUs 102a, 102b, 102c over the air interface 116.
  • the RAN 104 may also be in communication with the core network 106.
  • the RAN 104 may include Node-Bs 140a, 140b, 140c, which may each include one or more transceivers for communicating with the WTRUs 102a, 102b, 102c over the air interface 116.
  • the Node-Bs 140a, 140b, 140c may each be associated with a particular cell (not shown) within the RAN 104.
  • the RAN 104 may also include RNCs 142a, 142b. It will be appreciated that the RAN 104 may include any number of Node-Bs and RNCs while remaining consistent with an embodiment.
  • the Node-Bs 140a, 140b may be in communication with the RNC 142a. Additionally, the Node-B 140c may be in communication with the RNC 142b.
  • the Node-Bs 140a, 140b, 140c may communicate with the respective RNCs 142a, 142b via an Iub interface.
  • the RNCs 142a, 142b may be in communication with one another via an Iur interface.
  • Each of the RNCs 142a, 142b may be configured to control the respective Node-Bs 140a, 140b, 140c to which it is connected.
  • each of the RNCs 142a, 142b may be configured to carry out or support other functionality, such as outer loop power control, load control, admission control, packet scheduling, handover control, macrodiversity, security functions, data encryption, and the like.
  • The core network 106 shown in FIG. 18C may include a media gateway (MGW) 144, a mobile switching center (MSC) 146, a serving GPRS support node (SGSN) 148, and/or a gateway GPRS support node (GGSN) 150.
  • the RNC 142a in the RAN 104 may be connected to the MSC 146 in the core network 106 via an IuCS interface.
  • the MSC 146 may be connected to the MGW 144.
  • the MSC 146 and the MGW 144 may provide the WTRUs 102a, 102b, 102c with access to circuit-switched networks, such as the PSTN 108, to facilitate communications between the WTRUs 102a, 102b, 102c and traditional land-line communications devices.
  • the RNC 142a in the RAN 104 may also be connected to the SGSN 148 in the core network 106 via an IuPS interface.
  • the SGSN 148 may be connected to the GGSN 150.
  • the SGSN 148 and the GGSN 150 may provide the WTRUs 102a, 102b, 102c with access to packet-switched networks, such as the Internet 110, to facilitate communications between the WTRUs 102a, 102b, 102c and IP-enabled devices.
  • the core network 106 may also be connected to the networks 112, which may include other wired or wireless networks that are owned and/or operated by other service providers.
  • FIG. 18D is a system diagram of the RAN 104 and the core network 106 according to another embodiment.
  • the RAN 104 may employ an E-UTRA radio technology to communicate with the WTRUs 102a, 102b, 102c over the air interface 116.
  • the RAN 104 may also be in communication with the core network 106.
  • the RAN 104 may include eNode-Bs 160a, 160b, 160c, though it will be appreciated that the RAN 104 may include any number of eNode-Bs while remaining consistent with an embodiment.
  • the eNode-Bs 160a, 160b, 160c may each include one or more transceivers for communicating with the WTRUs 102a, 102b, 102c over the air interface 116.
  • the eNode-Bs may implement MIMO technology.
  • the eNode-B 160a for example, may use multiple antennas to transmit wireless signals to, and receive wireless signals from, the WTRU 102a.
  • Each of the eNode-Bs may be associated with a particular cell (not shown) and may be configured to handle radio resource management decisions, handover decisions, scheduling of users in the uplink and/or downlink, and the like. As shown in FIG. 18D, the eNode-Bs may communicate with one another over an X2 interface.
  • the core network 106 shown in FIG. 18D may include a mobility management gateway (MME) 162, a serving gateway 164, and a packet data network (PDN) gateway 166. While each of the foregoing elements are depicted as part of the core network 106, it will be appreciated that any one of these elements may be owned and/or operated by an entity other than the core network operator.
  • the MME 162 may be connected to each of the eNode-Bs 160a, 160b, 160c in the RAN 104 via an S1 interface and may serve as a control node.
  • the MME 162 may be responsible for authenticating users of the WTRUs 102a, 102b, 102c, bearer activation/deactivation, selecting a particular serving gateway during an initial attach of the WTRUs 102a, 102b, 102c, and the like.
  • the MME 162 may also provide a control plane function for switching between the RAN 104 and other RANs (not shown) that employ other radio technologies, such as GSM or WCDMA.
  • the serving gateway 164 may be connected to each of the eNode-Bs in the RAN 104 via the S1 interface.
  • the serving gateway 164 may generally route and forward user data packets to/from the WTRUs 102a, 102b, 102c.
  • the serving gateway 164 may also perform other functions, such as anchoring user planes during inter-eNode B handovers, triggering paging when downlink data is available for the WTRUs 102a, 102b, 102c, managing and storing contexts of the WTRUs 102a, 102b, 102c, and the like.
  • the serving gateway 164 may also be connected to the PDN gateway 166, which may provide the WTRUs 102a, 102b, 102c with access to packet-switched networks, such as the Internet 110, to facilitate communications between the WTRUs 102a, 102b, 102c and IP-enabled devices.
  • the core network 106 may facilitate communications with other networks.
  • the core network 106 may provide the WTRUs 102a, 102b, 102c with access to circuit-switched networks, such as the PSTN 108, to facilitate communications between the WTRUs 102a, 102b, 102c and traditional land- line communications devices.
  • the core network 106 may include, or may communicate with, an IP gateway (e.g., an IP multimedia subsystem (IMS) server) that serves as an interface between the core network 106 and the PSTN 108.
  • the core network 106 may provide the WTRUs 102a, 102b, 102c with access to the networks 112, which may include other wired or wireless networks that are owned and/or operated by other service providers.
  • FIG. 18E is a system diagram of the RAN 104 and the core network 106 according to yet another embodiment.
  • the RAN 104 may be an access service network (ASN) that employs IEEE 802.16 radio technology to communicate with the WTRUs 102a, 102b, 102c over the air interface 116.
  • the communication links between the different functional entities of the WTRUs 102a, 102b, 102c, the RAN 104, and the core network 106 may be defined as reference points.
  • the RAN 104 may include base stations 170a, 170b, 170c, and an ASN gateway 172, though it will be appreciated that the RAN 104 may include any number of base stations and ASN gateways while remaining consistent with an embodiment.
  • the base stations 170a, 170b, 170c may each be associated with a particular cell (not shown) in the RAN 104 and may each include one or more transceivers for communicating with the WTRUs 102a, 102b, 102c over the air interface 116.
  • the base stations 170a, 170b, 170c may implement MIMO technology.
  • the base station 170a for example, may use multiple antennas to transmit wireless signals to, and receive wireless signals from, the WTRU 102a.
  • the base stations 170a, 170b, 170c may also provide mobility management functions, such as handoff triggering, tunnel establishment, radio resource management, traffic classification, quality of service (QoS) policy enforcement, and the like.
  • the ASN gateway 172 may serve as a traffic aggregation point and may be responsible for paging, caching of subscriber profiles, routing to the core network 106, and the like.
  • the air interface 116 between the WTRUs 102a, 102b, 102c and the RAN 104 may be defined as an R1 reference point that implements the IEEE 802.16 specification.
  • each of the WTRUs 102a, 102b, 102c may establish a logical interface (not shown) with the core network 106.
  • the logical interface between the WTRUs 102a, 102b, 102c and the core network 106 may be defined as an R2 reference point, which may be used for authentication, authorization, IP host configuration management, and/or mobility management.
  • the communication link between each of the base stations 170a, 170b, 170c may be defined as an R8 reference point that includes protocols for facilitating WTRU handovers and the transfer of data between base stations.
  • the communication link between the base stations and the ASN gateway 172 may be defined as an R6 reference point.
  • the R6 reference point may include protocols for facilitating mobility management based on mobility events associated with each of the WTRUs 102a, 102b, 102c.
  • the RAN 104 may be connected to the core network 106.
  • the communication link between the RAN 104 and the core network 106 may be defined as an R3 reference point that includes protocols for facilitating data transfer and mobility management capabilities, for example.
  • the core network 106 may include a mobile IP home agent (MIP-HA) 174, an authentication, authorization, accounting (AAA) server 176, and a gateway 178. While each of the foregoing elements are depicted as part of the core network 106, it will be appreciated that any one of these elements may be owned and/or operated by an entity other than the core network operator.
  • the MIP-HA 174 may be responsible for IP address management, and may enable the WTRUs 102a, 102b, 102c to roam between different ASNs and/or different core networks.
  • the MIP-HA 174 may provide the WTRUs 102a, 102b, 102c with access to packet-switched networks, such as the Internet 110, to facilitate communications between the WTRUs 102a, 102b, 102c and IP-enabled devices.
  • the AAA server 176 may be responsible for user authentication and for supporting user services.
  • the gateway 178 may facilitate interworking with other networks.
  • the gateway 178 may provide the WTRUs 102a, 102b, 102c with access to circuit-switched networks, such as the PSTN 108, to facilitate communications between the WTRUs 102a, 102b, 102c and traditional land-line communications devices.
  • the gateway 178 may provide the WTRUs 102a, 102b, 102c with access to the networks 112, which may include other wired or wireless networks that are owned and/or operated by other service providers.
  • the communication link between the RAN 104 and the other ASNs may be defined as an R4 reference point, which may include protocols for coordinating the mobility of the WTRUs 102a, 102b, 102c between the RAN 104 and the other ASNs.
  • the communication link between the core network 106 and the other core networks may be defined as an R5 reference point, which may include protocols for facilitating interworking between home core networks and visited core networks (these reference points are summarized in the sketch following this list).
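For readers who prefer a compact summary, the following sketch restates the reference points described above as a simple Python mapping. The structure, names, and wording are purely illustrative (they only paraphrase this description) and are not part of the claimed subject matter.

```python
# Illustrative summary of the IEEE 802.16 (WiMAX) reference points described above.
REFERENCE_POINTS = {
    "R1": ("WTRU <-> RAN (air interface 116)",
           "implements the IEEE 802.16 specification"),
    "R2": ("WTRU <-> core network (logical interface)",
           "authentication, authorization, IP host configuration management, mobility management"),
    "R3": ("RAN <-> core network",
           "data transfer and mobility management capabilities"),
    "R4": ("RAN <-> other ASNs",
           "coordinating WTRU mobility between the RAN and the other ASNs"),
    "R5": ("home core network <-> visited core network",
           "interworking between home and visited core networks"),
    "R6": ("base stations <-> ASN gateway",
           "mobility management based on mobility events"),
    "R8": ("base station <-> base station",
           "WTRU handovers and transfer of data between base stations"),
}

for name, (endpoints, purpose) in REFERENCE_POINTS.items():
    print(f"{name}: {endpoints} -- {purpose}")
```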
  • Examples of computer-readable storage media include, but are not limited to, a read only memory (ROM), a random access memory (RAM), a register, cache memory, semiconductor memory devices, magnetic media such as internal hard disks and removable disks, magneto-optical media, and optical media such as CD-ROM disks, and digital versatile disks (DVDs).
  • a processor in association with software may be used to implement a radio frequency transceiver for use in a WTRU, UE, terminal, base station, RNC, or any host computer.
  • processing platforms, computing systems, controllers, and other devices containing processors are noted. These devices may contain at least one Central Processing Unit (“CPU”) and memory.
  • FIG. 1 is a block diagram illustrating an exemplary computing system.
  • the memory may contain executing circuitry and/or non-executing circuitry.
  • acts and symbolic representations of operations or instructions may be performed by the various CPUs and memories. Such acts and operations or instructions may be referred to as being “executed,” “computer executed” or “CPU executed.”
  • the acts and symbolically represented operations or instructions include the manipulation of electrical signals by the CPU.
  • An electrical system represents data bits that can cause a resulting transformation or reduction of the electrical signals and the maintenance of data bits at memory locations in a memory system to thereby reconfigure or otherwise alter the CPU's operation, as well as other processing of signals.
  • the memory locations where data bits are maintained are physical locations that have particular electrical, magnetic, optical, or organic properties corresponding to or representative of the data bits. It should be understood that the exemplary embodiments are not limited to the above-mentioned platforms or CPUs and that other platforms and CPUs may support the described methods.
  • the data bits may also be maintained on a computer readable medium including magnetic disks, optical disks, and any other volatile (e.g., Random Access Memory (“RAM”)) or non-volatile (e.g., Read-Only Memory (“ROM”)) mass storage system readable by the CPU.
  • the computer readable medium may include cooperating or interconnected computer readable medium, which exist exclusively on the processing system or are distributed among multiple interconnected processing systems that may be local or remote to the processing system. It should be understood that the exemplary embodiments are not limited to the above-mentioned memories and that other platforms and memories may support the described methods.
  • the term “set” is intended to include any number of items, including zero. Further, as used herein, the term “number” is intended to include any number, including zero.

Abstract

Edge-based interpolation for upsampling is disclosed. A method may include determining an edge characteristic associated with an interpolation point, the edge characteristic having an edge magnitude and an edge angle; selecting an interpolation filter in response to the edge angle; and determining a pixel value at the interpolation point using the selected interpolation filter. Other embodiments include edge-based interpolation followed by an adaptive sharpening filter. The sharpening filter is controlled by the edge-based interpolation parameters, which determine the pixels to be sharpened and the sharpening strength.
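As a rough illustration of the two-stage approach summarized above, the following Python sketch performs a simple edge-directed 2x interpolation (the interpolation filter is chosen from the local edge direction) and then applies unsharp masking whose strength is driven by the edge magnitude found during interpolation. The function name, the diagonal-difference edge test, the fixed Gaussian blur, and all thresholds are illustrative assumptions standing in for the claimed filters; this is not the patented method itself.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def edge_directed_upsample_2x(img, edge_thresh=20.0, sharpen_gain=0.4):
    """Upsample a grayscale image (2D float array) by roughly 2x using a
    direction-aware interpolation step, then sharpen only the pixels that the
    interpolation step identified as lying on strong edges."""
    img = np.asarray(img, dtype=np.float64)
    h, w = img.shape
    up = np.zeros((2 * h - 1, 2 * w - 1))
    up[::2, ::2] = img                                  # keep the original pixels

    # Interpolation points surrounded by four original pixels: compare the two
    # diagonal differences and average along the diagonal that varies least,
    # i.e. interpolate along (not across) the local edge direction.
    a, b = img[:-1, :-1], img[:-1, 1:]                  # top-left, top-right
    c, d = img[1:, :-1], img[1:, 1:]                    # bottom-left, bottom-right
    d45, d135 = np.abs(a - d), np.abs(b - c)
    up[1::2, 1::2] = np.where(d45 <= d135, (a + d) / 2.0, (b + c) / 2.0)
    edge_mag = np.maximum(d45, d135)                    # crude edge-magnitude map

    # Remaining interpolation points: plain two-tap averages in this sketch; a
    # full implementation would also select these filters from the edge angle.
    up[::2, 1::2] = (img[:, :-1] + img[:, 1:]) / 2.0
    up[1::2, ::2] = (img[:-1, :] + img[1:, :]) / 2.0

    # Adaptive unsharp masking driven by the interpolation-stage edge map:
    # weak-edge pixels are left untouched, strong edges receive more gain.
    mag_up = zoom(edge_mag, (up.shape[0] / edge_mag.shape[0],
                             up.shape[1] / edge_mag.shape[1]), order=1)
    weight = np.where(mag_up >= edge_thresh,
                      sharpen_gain * mag_up / (mag_up.max() + 1e-9), 0.0)
    high_pass = up - gaussian_filter(up, sigma=1.0)
    return np.clip(up + weight * high_pass, 0.0, 255.0)
```

Calling edge_directed_upsample_2x on an 8-bit luma plane converted to float yields a roughly double-resolution frame in which diagonal edges are interpolated along their orientation and then selectively sharpened according to their estimated strength.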
PCT/US2012/024630 2011-02-11 2012-02-10 Edge-based video interpolation for video and image upsampling WO2012109528A1 (fr)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US201161442069P 2011-02-11 2011-02-11
US61/442,069 2011-02-11
US201161535353P 2011-09-15 2011-09-15
US61/535,353 2011-09-15

Publications (1)

Publication Number Publication Date
WO2012109528A1 true WO2012109528A1 (fr) 2012-08-16

Family

ID=45689053

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2012/024630 WO2012109528A1 (fr) 2012-02-10 Edge-based video interpolation for video and image upsampling

Country Status (2)

Country Link
TW (1) TW201301199A (fr)
WO (1) WO2012109528A1 (fr)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016137309A1 (fr) 2015-02-27 2016-09-01 Samsung Electronics Co., Ltd. Image processing device and method
WO2017052405A1 (fr) * 2015-09-25 2017-03-30 Huawei Technologies Co., Ltd. Apparatus and method for video motion compensation
WO2017052409A1 (fr) * 2015-09-25 2017-03-30 Huawei Technologies Co., Ltd. Apparatus and method for video motion compensation with selectable interpolation filter
US10007970B2 (en) 2015-05-15 2018-06-26 Samsung Electronics Co., Ltd. Image up-sampling with relative edge growth rate priors
EP3503018A1 (fr) * 2017-12-22 2019-06-26 Vestel Elektronik Sanayi ve Ticaret A.S. Adaptive local contrast enhancement area for scaled videos
US10834416B2 (en) 2015-09-25 2020-11-10 Huawei Technologies Co., Ltd. Apparatus and method for video motion compensation
US10848784B2 (en) 2015-09-25 2020-11-24 Huawei Technologies Co., Ltd. Apparatus and method for video motion compensation
US10863205B2 (en) 2015-09-25 2020-12-08 Huawei Technologies Co., Ltd. Adaptive sharpening filter for predictive coding

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104346770A (zh) 2013-07-24 2015-02-11 Novatek Microelectronics Corp. Data interpolation method and data interpolation system
TW201510934A (zh) 2013-09-13 2015-03-16 Novatek Microelectronics Corp. Image sharpening method and image processing device
TWI511088B (zh) * 2014-07-25 2015-12-01 Altek Autotronics Corp. Method for generating an orientation image
TWI720513B (zh) * 2019-06-14 2021-03-01 Yuan Ze University Image enlargement method

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070171287A1 (en) * 2004-05-12 2007-07-26 Satoru Takeuchi Image enlarging device and program

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070171287A1 (en) * 2004-05-12 2007-07-26 Satoru Takeuchi Image enlarging device and program

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
ALGAZI V R ET AL: "Directional interpolation of images based on visual properties and rank order filtering", SPEECH PROCESSING 1. TORONTO, MAY 14 - 17, 1991; [INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH & SIGNAL PROCESSING. ICASSP], NEW YORK, IEEE, US, vol. CONF. 16, 14 April 1991 (1991-04-14), pages 3005 - 3008, XP010043639, ISBN: 978-0-7803-0003-3, DOI: 10.1109/ICASSP.1991.151035 *
CHI-SHING WONG ET AL: "Adaptive Directional Window Selection for Edge-Directed Interpolation", COMPUTER COMMUNICATIONS AND NETWORKS (ICCCN), 2010 PROCEEDINGS OF 19TH INTERNATIONAL CONFERENCE ON, IEEE, PISCATAWAY, NJ, USA, 2 August 2010 (2010-08-02), pages 1 - 6, XP031744360, ISBN: 978-1-4244-7114-0 *
JAKHETIYA V ET AL: "Image interpolation by adaptive 2-D autoregressive modeling", PROCEEDINGS OF THE SPIE - THE INTERNATIONAL SOCIETY FOR OPTICAL ENGINEERING SPIE - THE INTERNATIONAL SOCIETY FOR OPTICAL ENGINEERING USA, vol. 7546, 2010, XP002675776, ISSN: 0277-786X *
PING YANG ET AL: "A Gradient-Based Adaptive Interpolation Filter for Multiple View Synthesis", 15 December 2009, ADVANCES IN MULTIMEDIA INFORMATION PROCESSING - PCM 2009, SPRINGER BERLIN HEIDELBERG, BERLIN, HEIDELBERG, PAGE(S) 551 - 560, ISBN: 978-3-642-10466-4, XP019134930 *
SOONJONG JIN ET AL: "Fine directional de-interlacing algorithm using modified Sobel operation", IEEE TRANSACTIONS ON CONSUMER ELECTRONICS, IEEE SERVICE CENTER, NEW YORK, NY, US, vol. 54, no. 2, 1 May 2008 (2008-05-01), pages 587 - 862, XP011229976, ISSN: 0098-3063, DOI: 10.1109/TCE.2008.4560171 *

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3262844A4 (fr) 2015-02-27 2018-01-17 Samsung Electronics Co., Ltd. Image processing device and method
WO2016137309A1 (fr) 2015-02-27 2016-09-01 Samsung Electronics Co., Ltd. Image processing device and method
US10007970B2 (en) 2015-05-15 2018-06-26 Samsung Electronics Co., Ltd. Image up-sampling with relative edge growth rate priors
US10834416B2 (en) 2015-09-25 2020-11-10 Huawei Technologies Co., Ltd. Apparatus and method for video motion compensation
CN107925772A (zh) * 2015-09-25 2018-04-17 Huawei Technologies Co., Ltd. Apparatus and method for video motion compensation with selectable interpolation filter
WO2017052409A1 (fr) * 2015-09-25 2017-03-30 Huawei Technologies Co., Ltd. Apparatus and method for video motion compensation with selectable interpolation filter
CN107925772B (zh) * 2015-09-25 2020-04-14 Huawei Technologies Co., Ltd. Apparatus and method for video motion compensation with selectable interpolation filter
US10820008B2 (en) 2015-09-25 2020-10-27 Huawei Technologies Co., Ltd. Apparatus and method for video motion compensation
WO2017052405A1 (fr) * 2015-09-25 2017-03-30 Huawei Technologies Co., Ltd. Apparatus and method for video motion compensation
US10841605B2 (en) 2015-09-25 2020-11-17 Huawei Technologies Co., Ltd. Apparatus and method for video motion compensation with selectable interpolation filter
US10848784B2 (en) 2015-09-25 2020-11-24 Huawei Technologies Co., Ltd. Apparatus and method for video motion compensation
US10863205B2 (en) 2015-09-25 2020-12-08 Huawei Technologies Co., Ltd. Adaptive sharpening filter for predictive coding
EP3503018A1 (fr) * 2017-12-22 2019-06-26 Vestel Elektronik Sanayi ve Ticaret A.S. Adaptive local contrast enhancement area for scaled videos

Also Published As

Publication number Publication date
TW201301199A (zh) 2013-01-01

Similar Documents

Publication Publication Date Title
WO2012109528A1 (fr) Edge-based video interpolation for video and image upsampling
US11405621B2 (en) Sampling grid information for spatial layers in multi-layer video coding
US11356708B2 (en) Cross-plane filtering for chroma signal enhancement in video coding
US20230412839A1 (en) Geometry Conversion for 360-degree Video Coding
US20230388553A1 (en) Methods for simplifying adaptive loop filter in video coding
US20220385942A1 (en) Face discontinuity filtering for 360-degree video coding
US10321130B2 (en) Enhanced deblocking filters for video coding
US20150304685A1 (en) Perceptual preprocessing filter for viewing-conditions-aware video coding
US10045050B2 (en) Perceptual preprocessing filter for viewing-conditions-aware video coding
US10044913B2 (en) Temporal filter for denoising a high dynamic range video
TW200926763A (en) Converting video and image signal bit depths
EP3105922B1 (fr) Inverse telecine filter
Vanam et al. Adaptive bilateral filter for video and image upsampling
WO2023122077A1 (fr) Temporal attention-based neural networks for video compression
JP2014178742A (ja) Image processing device, image processing method, and image processing program
JP4910839B2 (ja) Image processing device and method, and program
Vanam et al. Joint edge-directed interpolation and adaptive sharpening filter
US20150264368A1 (en) Method to bypass re-sampling process in shvc with bit-depth and 1x scalability
JP2015035698A (ja) Image processing device, image processing method, and image processing program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 12705032

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 09.12.2013)

122 Ep: pct application non-entry in european phase

Ref document number: 12705032

Country of ref document: EP

Kind code of ref document: A1