WO2013095864A1 - Displayed image improvement - Google Patents

Displayed image improvement

Info

Publication number
WO2013095864A1
WO2013095864A1 (PCT/US2012/066335)
Authority
WO
WIPO (PCT)
Prior art keywords
image
source image
pixel values
display
shifted
Prior art date
Application number
PCT/US2012/066335
Other languages
French (fr)
Inventor
Michael L. Schmit
Shivashankar Gurumurthy
William Herz
Original Assignee
Advanced Micro Devices, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Advanced Micro Devices, Inc. filed Critical Advanced Micro Devices, Inc.
Priority to KR1020147020469A priority Critical patent/KR20140105030A/en
Priority to CN201280067772.6A priority patent/CN104067310A/en
Priority to EP12798117.3A priority patent/EP2795573A1/en
Publication of WO2013095864A1 publication Critical patent/WO2013095864A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting

Definitions

  • the present invention relates in general to image processing technology. In one aspect, the present invention relates to the display of digital image information.
  • Digital images are typically displayed on display screen devices (e.g., PC monitors and TV displays) that have a fixed number of pixels and color depth.
  • the digital image and video source content typically has more information (e.g., resolution and color space depth) than can be displayed on existing display screen devices.
  • a variety of image sources, including videos streamed from the Internet and still image photographs taken by cameras or provided over the Internet, have a source resolution (e.g., a total of 5M pixels or even 10-20M pixels) that is far higher than the available resolution of a display screen. Even YouTube videos can have a source resolution that is larger than most screens.
  • the decoder 2 receives the input image at input buffer 3, performs JPEG-type image decoding at JPEG decoder 4 to generate high resolution image pixels 1, and then downscales the high resolution image pixels 1 at downscaler 5 to generate the downsampled output image 6 that is stored in the current frame buffer 7 for output to the display 8.
  • the downscaler 5 takes a group of pixels from the high resolution image pixels 1 (e.g., a 2 x 2 pixel group), computes the average value of that group, and uses the computed average value as a single output pixel for the output image 6.
  • more complex scaling algorithms may be used, but they still generate output images with reduced resolution and contrast, as demonstrated by the gray transition shading shown in the downsampled output image 6.
  • the present invention provides a display device, architecture, system, and method of operation for increasing the perceived spatial resolution of a displayed image beyond the physical number of pixels in the display device and/or for displaying an output image with greater color depth than is included in the source image content.
  • the perceived spatial resolution is increased by generating, downscaling, and displaying a plurality of slightly shifted images over time, thereby rendering a series of images rendered with different shifts that effectively convey visual features which are lost with conventional image downscaling.
  • the processing of the source image generates a downsampled output image with greater color depth than is in the source image by logically promoting each smaller bit depth pixel value (e.g., 8-bit values) from the source image to a larger bit depth pixel value (e.g., 10-bit values) and then using the computed average of the surrounding smaller bit depth pixel values as a larger bit depth pixel bit value that can be sent to the display.
  • spatial averaging is used to replace an 8-bit pixel with 10-bits or more of color depth information per sub-pixel for display.
  • the image shift and spatial averaging techniques are combined to provide higher resolution and greater color depth.
  • an image processing system and method of operation are disclosed for displaying a source image using a memory, media, acceleration hardware, and a display.
  • the memory is provided for storing a source image having a first relatively high resolution digital format, and a display screen is provided for displaying images.
  • the source image may be a 2D image, paused video image, or 3D image, in which case the media acceleration hardware may be adapted to transform three-dimensional input source information into a two-dimensional source image by converging left and right images from the three-dimensional input source information into co-located pixel values for the two- dimensional source image.
  • the media acceleration hardware unit is adapted to generate a plurality of temporally shifted images from the source image by sequentially shifting an origin point for a frame applied to the source image by less than an interpixel spacing distance, thereby generating the plurality of temporally shifted images for display at the display screen.
  • the media acceleration hardware may also be adapted to generate the temporally shifted images by scaling source image pixel values from each shifted frame, thereby generating the plurality of temporally shifted images having a second, relatively low resolution for display at the display screen.
  • the scaling may be implemented with a downsampler configured to downsample the source image pixel values to reduce pixel density by a predetermined factor in a vertical and/or horizontal direction.
  • the media acceleration hardware may be implemented with a graphics processing unit (GPU) hardware decoder that includes an image decoder, origin shifter, and image scaler.
  • the image decoder receives the source image having the first digital format and produces a decoded image having an RGB format, YUV format, or any desired color space.
  • the origin shifter applies a plurality of frames to the decoded image, where each frame is shifted by less than an interpixel spacing distance and is used to generate a shifted decoded image.
  • the image scaler changes a picture size of each decoded image to produce a plurality of scaled shifted decoded images having the second, relatively low resolution for display at the display screen.
  • the media acceleration hardware is adapted to increase color depth values from the source image by selecting a pixel value and an associated plurality of pixel values from the source image having a first, relatively small bit depth, and computing from the plurality of pixel values an average color depth value having a second, relatively large bit depth to replace the selected pixel value.
  • each 8-bit pixel value in the source image may be selected and replaced by the media acceleration hardware with a 10-bit pixel value that is computed by averaging the plurality of pixel values associated with the selected 8-bit pixel value.
  • Figure 1 shows a block diagram representation of a conventional display or playback device
  • Figure 2 shows a block diagram representation of an exemplary image processing system constructed in accordance with selected embodiments of the present invention.
  • Figures 3a-3d depict a source image and an associated sequence of shifted image frames that are shifted less than a pixel in order to illustrate selected embodiments of the present invention.
  • Figure 4 depicts a first image frame and a second shifted image frame that is shifted less than a pixel in order to illustrate selected embodiments of the present invention.
  • Figure 5 depicts a process flow sequence for implementing selected embodiments of the present invention.
  • An improved display device and associated method of operation are described for image processing which takes advantage of increased computing power provided by graphics processing units.
  • high resolution source images are processed to increase apparent visual resolution by introducing a temporal display factor whereby slightly shifted versions of a source image are re-scaled and displayed over time by using CPU and GPU components to perform computing tasks.
  • the image shifting techniques may also be applied to 3D source image information by first subtracting the 3D information and then applying the image shifting technique. Any apparent motion of the displayed image may be controlled by increasing or decreasing the time between image shifts and increasing or decreasing the fractional distance shifted per time period.
  • the high resolution source images may be processed to provide greater color depth using downscaling to replace each 8-bit pixel value at the center of a window with a 10-bit pixel value that is the weighted average of all the pixels in the window.
  • the image processing system 100 may be implemented in any graphics or video playback device, such as a desktop or laptop computer, television, wireless or mobile device, personal digital assistants, mobile or cellular phones, DVRs, DVD and Blu-Ray players, handheld video players, digital picture frames, console game machines, projectors, tablets, digital book readers, and any other display device that processes and displays images on a fixed display screen.
  • the image processing system 100 may be implemented as a host or applications processing unit that includes a bus 95 coupled to one or more processors or processing units 20 and a video or media acceleration hardware unit 30.
  • the image processing system 100 may include a main memory system having a large DDR SDRAM 62, 64 that is accessed through a DDR controller 60.
  • one or more memories (e.g., IDE 72, flash memory unit 74, ROM 76, etc.) may also be included.
  • DDR SDRAM or other memories may be integrated with or external to the image processing system 100.
  • Other input/output devices may also be accessed via one or more controllers, including peripheral devices 82, 84, 86 accessed by I/O controller 80, and display device 92 which is accessed through the display controller 90.
  • the display device may be a computer monitor or television screen having a fixed resolution pixel count and color depth, where the resolution defines the smallest noticeable detail or line that can be perceived by the human visual system, and the color depth (or bit depth) is the number of bits used to represent the color of a single pixel in a bitmapped image or video frame buffer.
  • each pixel is typically formed with a plurality of sub-pixels (e.g., 3 or more sub-pixels) which each provide single-color regions that contribute to the displayed or sensed color when viewed at a distance. In addition, the number of bits used to define the range of intensity levels of each sub-pixel on the display 92 is fixed, such as 8 bits or 10 bits.
  • the image processing system 100 may include other buses, devices, and/or subsystems, depending on the implementation desired.
  • the image processing system 100 may include caches, modems, parallel or serial interfaces, SCSI interfaces, network interface cards, and the like.
  • the CPU 20 executes software stored in the flash memory 74 and/or SDRAM 62, 64.
  • the image processing system 100 may be implemented with a processing platform having at least one central processing unit (CPU) 20 and at least one media acceleration hardware 30, such as a graphics processing unit (GPU).
  • CPU 20 and media acceleration hardware 30 may be discrete (e.g., separate components) or may be combined into a single package or die.
  • the image processing system 100 receives an input or source image 101 that may be a high resolution JPEG image or other encoded bitstream that is encoded in a first data format and has a first relatively high resolution and fixed color depth.
  • the input image 101 may be received over a communication network or retrieved from system memory (e.g., 62, 64), and then stored at the input buffer 31.
  • the input image is then decoded using a predetermined image decode process, such as the JPEG decoding process at JPEG decoder 32.
  • other data formats may be used, including but not limited to PNG, JPEG-2000, JPEG-XR, and TIFF decoders.
  • the disclosed technique works with bitmaps that have no compression, such as RAW camera formats.
  • the color depth of the decoded image may then be converted using the color space converter 33 which uses spatial averaging techniques to increase the color depth of the input image.
  • the decoded image data may be processed to increase the perceived spatial resolution by sequentially shifting the origin of the decoded image with the origin shifter 34 and then scaling the shifted image using image scaler 35 to thereby generate a downscaled sequence of shifted output images 102-105 that may be further encoded and formatted into a second data format for display on the display 92.
  • the color space converter 33 increases the color depth of the input image 101 using spatial averaging of surrounding sub-pixels.
  • the color space converter 33 logically promotes the initial bit depth (e.g., 8 bits per sub-pixel) to a larger bit depth (e.g., 10-bits per sub-pixel), such as by adding bit positions or converting the input image information to floating point format.
  • a plurality of sub-pixels is then selected using a window to group pixels around a center pixel, and then used to compute an average value as the true or computed 10-bit value for the center pixel that can be sent to the display 92.
  • the color space converter 33 may compute a simple average of the surrounding source pixels, but other types of averaging computations may be performed. For example, each pixel at the center of the window may be replaced by a weighted average of all the pixels in the window.
  • the quantity of significant data obtained is mathematically defined by the ratio of the downsampling. For example, a downsample of 2 in both the x and y directions (a ratio of 4:1) provides two doublings, where each doubling provides one additional bit of significance to be added to the per pixel color depth.
  • the color depth could be increased from 8 bits to 11 bits.
  • the color depth conversion is independent from actually having a display technology that is capable of displaying this many bits of color depth, though many TV and computer monitor makers now support 10- or 12-bit color.
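The spatial-averaging idea above can be illustrated with a minimal Python sketch (an illustration only, not the patent's implementation; grayscale pixels are stored as plain lists of rows and the function name is an assumption). The key observation is that summing the four 8-bit values of a 2x2 block yields a value in the range 0..1020, which is already a 10-bit quantity, so keeping the sum instead of dividing back down to 8 bits preserves the extra two bits of significance:

```python
def downscale_with_depth_gain(image):
    """Downscale 8-bit pixels by 2 in each direction (a 4:1 ratio)
    while keeping the averaged result at 10-bit precision: the raw
    sum of four 8-bit values is a 10-bit quantity, so no precision
    is discarded by the averaging step."""
    out = []
    for y in range(0, len(image) - 1, 2):
        row = []
        for x in range(0, len(image[0]) - 1, 2):
            s = (image[y][x] + image[y][x + 1] +
                 image[y + 1][x] + image[y + 1][x + 1])
            # s is in 0..1020 (10 bits); keep it rather than s // 4.
            row.append(s)
        out.append(row)
    return out

print(downscale_with_depth_gain([[255, 255], [255, 255]]))  # [[1020]]
```

Dividing the 10-bit sum by 4 would recover an ordinary 8-bit average; the gain comes from sending the full-precision value to a display that accepts the larger bit depth.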
  • the high resolution input image 101 may be processed to increase apparent visual resolution by rescaling a plurality of slightly shifted versions of the input image for display over time, thereby generating a sequence of output images 102-105 that are routed across the bus 95 and controller 90 for the display 92.
  • each input image 101 is processed by the origin shifter 34 to shift the decoded input image by a predetermined amount, typically less than one pixel in distance, and the resulting shifted images are then rescaled using the downscaler 35.
  • the origin shifter 34 may be configured to shift the frame a fractional pixel distance to the right, and the downscaler 35 may be configured to process every block of 2x2 pixels as an average value that is used to generate one output pixel, though any desired scaling algorithm may be used.
  • an initial or reference frame would be applied by the origin shifter 34 to the decoded input image having an image detail portion 101a, and then rescaled by downscaler 35 to generate a first scaled output image 102 having a corresponding image detail portion 102a.
  • the processing of the initial frame is indicated by the first downscale block grouping 110 applied to the input image detail portion 101a which generates the corresponding image detail portion 102a.
  • the origin shifter 34 shifts the origin of the applied frame slightly to the right (e.g., by 0.3 times a single pixel distance) and the downscaler 35 rescales the shifted image to generate a second scaled output image 103 having a corresponding image detail portion 103a.
  • the processing of the first shifted frame is indicated by the second downscale block grouping 111 applied to the input image detail portion 101a which generates the corresponding image detail portion 103a.
  • the shift-and-rescale processing can be repeated to generate additional scaled output images 104, 105 having corresponding image detail portions 104a, 105a, as indicated by downscale block groupings 112, 113.
  • the shift-and-rescale processing generates output images that will be perceived over time to have sharper, higher resolution contrast than static image processing such as that shown in Figure 1.
  • origin shifter 34 may be adjusted and controlled as desired when generating the sequence of output images. If desired, the origin may be shifted randomly or in a predetermined pattern, such as an Archimedean spiral, a diagonal line, a square spiral, or alternating between two or more points.
  • Figures 3a-3d depict a source image 300 and an associated sequence of shifted image frames 311-314 that are shifted in two dimensions by less than a pixel distance.
  • in the example image 300, there is depicted an image of a soccer ball 301 in a first position, and a group of four pixels 302 is shown in the upper left corner or origin of the image 300, though it will be appreciated that the pixel group 302 is not drawn to scale.
  • the initial or reference frame 311 is shown in Figure 3a as being initially positioned (e.g., at time 0) with reference to the [0, 0] origin point, or the upper left pixel of the pixel group 302.
  • the image is then rescaled using any desired scaling algorithm, such as a bi-linear filter, to generate a first downscaled output image.
  • the origin point is changed to create a second shifted frame that is located a predetermined fractional distance in the x and/or y direction from the original reference frame 311 (shown with gray lines in Figure 3(b)).
  • the second reference frame 312 is shown with dashed lines as being positioned (e.g., at time 1) with reference to the shifted origin point [0, 0.5] so that the frame 312 is shifted down less than one pixel distance.
  • the image is then rescaled using any desired scaling algorithm to generate a second downscaled output image.
  • the sequence of processing steps may be repeated to shift the origin point to create one or more shifted frames 313, 314 shown with dashed lines as being subsequently positioned (e.g., at time 2 and time 3) with reference to the shifted origin point so that the frame 313 (in Figure 3(c)) is shifted to new origin point [0.5, 0.5] and the frame 314 (in Figure 3(d)) is shifted to new origin point [0.5, 0].
  • the origin shifter 34 may use the source image pixel values to compute the shifted pixel values.
  • Figure 4 depicts a first image frame 402, and a second shifted image frame 404 that is shifted less than a pixel in length.
  • the pixels P1, P2, P3, P4 define the source image, and are spaced apart by a defined minimum pixel distance.
  • the first image frame 402 is applied to the source pixels P1, P2, P3, P4 such that pixel P1 is the origin point for the first image frame 402.
  • the second image frame 404 is shifted in relation to the first image frame 402 by fractional lateral distance x and fractional vertical distance y, where both x and y are less than the defined minimum pixel distance.
  • other bilinear interpolation weight computations may be used to compute shifted pixel values.
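One such bilinear interpolation can be sketched as follows (a minimal illustration under the naming of Figure 4, not the patent's actual circuit; the function name and list-of-rows image layout are assumptions). For a frame shifted by fractional distances (dx, dy) inside the cell formed by neighboring pixels P1 (top-left), P2 (top-right), P3 (bottom-left), and P4 (bottom-right), each shifted pixel value is the standard bilinear blend of those four source pixels:

```python
def shift_subpixel(image, dx, dy):
    """Compute pixel values for a frame shifted by a fractional
    (dx, dy) distance (0 <= dx, dy < 1) using bilinear interpolation
    of the four surrounding source pixels."""
    h, w = len(image), len(image[0])
    out = []
    for y in range(h - 1):
        row = []
        for x in range(w - 1):
            p1 = image[y][x]          # top-left
            p2 = image[y][x + 1]      # top-right
            p3 = image[y + 1][x]      # bottom-left
            p4 = image[y + 1][x + 1]  # bottom-right
            # Standard bilinear weights for a point (dx, dy) in the cell.
            v = (p1 * (1 - dx) * (1 - dy) + p2 * dx * (1 - dy) +
                 p3 * (1 - dx) * dy + p4 * dx * dy)
            row.append(v)
        out.append(row)
    return out

print(shift_subpixel([[0, 10], [20, 30]], 0.5, 0.5))  # [[15.0]]
```

With dx = dy = 0 the weights collapse onto P1 and the output reproduces the unshifted source, which is the expected degenerate case.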
  • the downscaling operation performed by the downscaler 35 may be adjusted and controlled to embody any desired scaling process or circuit that takes an input image and resizes it according to a defined ratio.
  • the image scaling function changes the picture size of a video and/or adjusts the image for playback between different types of devices.
  • the downscaling function uses a window to select a group of pixels in a source image, and then replaces a pixel at the center of a window with a weighted average of all the pixels in the window.
  • image scaling may be implemented using an averaging filter and triangle filter for downscaling.
  • the scaling process may be implemented as a CPU scaler or a GPU scaler, which may be a hardware scaler or programmable pixel shader.
  • the amount of image motion may be controlled by increasing or decreasing the time between image shifts and/or by increasing or decreasing the fractional distance moved per time period.
  • one or more test images may be evaluated against a proposed shift pattern using different frame rates to determine if there is an improvement in perceived image resolution.
  • a frame rate of 10-20 frames/sec was found to create minimal artifacts, and a shift pattern of alternating the origin between two points (0,0) and (0.5,0.5) was found to provide the best improvement in perceived image resolution with the least amount of side effects.
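The alternating two-point pattern described above can be sketched as a simple generator (illustrative only; `itertools.cycle` stands in for whatever sequencing logic the origin shifter hardware actually uses, and the default points are the ones reported above):

```python
import itertools

def alternating_shift_pattern(points=((0.0, 0.0), (0.5, 0.5))):
    """Yield an endless sequence of frame origin points that alternates
    between the given sub-pixel offsets -- the pattern reported to give
    the best perceived-resolution improvement with minimal artifacts."""
    return itertools.cycle(points)

pattern = alternating_shift_pattern()
first_four = [next(pattern) for _ in range(4)]
# first_four == [(0.0, 0.0), (0.5, 0.5), (0.0, 0.0), (0.5, 0.5)]
```

Other patterns mentioned in the document (Archimedean spiral, diagonal line, square spiral, random) would simply swap in a different point sequence.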
  • Different image types and display types may have different characteristics.
  • selected embodiments of the present invention may also be applied to render 3D stereoscopic video as 2D images having greater color depth and apparent resolution than the baseline image. This may be achieved by processing the stereoscopic 3D video image information in which there are two images (one for each eye) so as to remove the 3D information, and then averaging the co-located pixels in order to generate an increase in sub-pixel color depth.
  • apparent spatial resolution may be increased by subtracting out the 3D information, and then using the shift-and-rescale processing as described herein.
  • the 3D information may be removed using any desired image subtraction technique.
  • a motion estimation algorithm may be applied between the two complete left and right eye images such that the images from the two eyes are now converged back to a 2D image.
  • the "motion" is actually the parallax difference between the two eyes.
  • unless the image is at "infinity" (in camera terminology), there will be motion, and these small motion vectors can be "removed."
  • Another subtraction technique may be used with a 3D source stream that includes a base video stream plus a delta. In this case, the "delta" information can be used directly without the need to find the motion vectors.
  • a source image is received (step 504), such as by retrieving a high resolution image from memory or pausing a high resolution video stream.
  • the retrieved image is a high resolution JPEG image, such as a photo from a camera with a resolution of 3840 x 2160 and an 8-bit color depth, and the pixel values are identified for the retrieved image.
  • at step 506, it is determined if the image is to be processed to increase its color depth. If not (negative outcome to decision 508), then the image processing flow sequence proceeds to step 510. However, if the image color depth is to be increased (affirmative outcome to decision 508), the RGB image values are processed to convert or increase their color depth values.
  • the color depth conversion processing may be implemented by first logically promoting the source pixels (or sub-pixels), such as by converting the 8-bit color depth values to 10-bit color depth values.
  • the color depth for each (sub)pixel is then recomputed as the average of the surrounding pixels (or sub- pixels), thereby increasing the color depth for each (sub)pixel.
  • at step 510, it is determined if the image processing flow sequence includes image shifting. If not (negative outcome to decision 510), then the image processing flow sequence proceeds to step 512 where the source image is rescaled and then displayed as a static image. However, if the image processing flow sequence does include image shifting (affirmative outcome to decision 510), then the source image is rescaled and displayed (step 514), and for so long as the image shifting is required (negative outcome to decision 516), the image frame is shifted to compute shifted pixel values (step 518), and then the shifted image pixels are rescaled and displayed (step 514) in a process loop.
  • a sequence of slightly shifted output images is generated and displayed until such time as the image shifting is done (affirmative outcome to decision 516), at which point the sequence ends (step 520).
  • any desired image scaling algorithm may be used at step 514, including but not limited to simple pixel averaging, bilinear filtering, etc.
  • the determination at step 516 may use a predetermined shift pattern applied at a specified output frame rate, a timer clock, or even an externally provided "finish" signal to decide when the shifting is finished.
  • the computation of the shifted pixel values at step 518 may be performed on the RGB image pixel values using any desired interpolation computation technique.
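The shift-rescale-display loop of steps 514-518 can be sketched schematically as follows (an outline only; `shift_fn`, `rescale_fn`, and `display_fn` are hypothetical caller-supplied stand-ins for the origin shifter, scaler, and display path, not components named in the document):

```python
def display_with_shifts(source, shifts, shift_fn, rescale_fn, display_fn):
    """Sketch of the Figure 5 loop: rescale and display the source
    image (step 514), then for each pending shift (decision 516)
    compute shifted pixel values (step 518) and rescale/display the
    result (step 514 again); the loop ends when shifts are exhausted
    (step 520)."""
    display_fn(rescale_fn(source))
    for dx, dy in shifts:
        display_fn(rescale_fn(shift_fn(source, dx, dy)))

# Usage with trivial stand-ins that just record what was "displayed":
frames = []
display_with_shifts(
    source="img",
    shifts=[(0.5, 0.5), (0.0, 0.0)],
    shift_fn=lambda img, dx, dy: f"{img}@({dx},{dy})",
    rescale_fn=lambda img: f"scaled({img})",
    display_fn=frames.append,
)
# frames == ['scaled(img)', 'scaled(img@(0.5,0.5))', 'scaled(img@(0.0,0.0))']
```

In a real implementation the shift sequence would be paced at the output frame rate (e.g., 10-20 frames/sec as noted above) rather than iterated as fast as possible.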
  • a decoded input source image is generated that has a first image resolution, such as by applying a JPEG, PNG, JPEG-2000, JPEG-XR, or TIFF decoding process to an input image.
  • the decoded input source image may be generated from a paused video image, or transformed from 3D input source information by converging left and right images from the 3D input source information into co-located pixel values for a two-dimensional input source image.
  • temporally shifted and scaled images are generated for display at a display screen having a second, lower image resolution by applying a plurality of shifted image frames to the decoded input source image and scaling pixel values in each frame to match the second, lower image resolution.
  • the temporally shifted and scaled images are generated by sequentially shifting an origin point for a frame applied to the decoded input source image by less than an interpixel spacing distance.
  • the origin point may be shifted randomly or using an Archimedean spiral pattern, a diagonal line pattern, a square spiral pattern, or by alternating between two or more points.
  • color depth values from the decoded input source image may be increased by replacing each pixel value from the decoded input source image having a first, relatively small bit depth with an average color depth value having a second, relatively large bit depth that is computed from a plurality of surrounding pixel values from the decoded input source image.
  • the temporally shifted and scaled images are displayed on the display screen having a second, lower image resolution.
  • an input source image is generated that has a first plurality of pixel values with a first color depth.
  • the first plurality of pixel values are converted to a second plurality of pixel values with an increased color depth by replacing each pixel value from the first plurality of pixel values with an average color depth value having a second larger color depth that is computed from a plurality of surrounding pixel values from the first plurality of pixel values.
  • the first plurality of pixel values are converted by selecting and replacing each 8-bit pixel value in the input source image with a 10-bit pixel value that is computed by averaging a plurality of pixel values surrounding the selected 8-bit pixel value, though other bit depths may be generated.
  • the second plurality of pixel values are processed for display on a display screen having the second larger color depth.
  • the display processing may include scaling the second plurality of pixel values for display on the display screen having a resolution that is lower than the input source image resolution.
  • the display processing may include generating temporally shifted and scaled images from the second plurality of pixel values by applying a plurality of shifted image frames to the second plurality of pixel values which are shifted by less than an interpixel spacing distance and scaling pixel values in each image frame for display on the display screen.
  • the temporally shifted and scaled images may be displayed on the display screen having a resolution that is lower than the input source image resolution.
  • aspects of the embodiments may be implemented in programmable logic devices (PLDs), field programmable gate arrays (FPGAs), programmable array logic (PAL) devices, application specific integrated circuits (ASICs), microcontrollers with memory (such as electronically erasable programmable read only memory (EEPROM) or Flash memory), embedded microprocessors, firmware, software, etc.
  • aspects of the embodiments may be embodied in microprocessors having software-based circuit emulation, discrete logic (sequential and combinatorial), custom devices, fuzzy (neural) logic, quantum devices, and hybrids of any of the above device types.
  • the underlying device technologies may be provided in a variety of component types, e.g., metal- oxide semiconductor field-effect transistor (MOSFET) technologies such as complementary metal-oxide semiconductor (CMOS), bipolar technologies such as emitter-coupled logic (ECL), polymer technologies (e.g., silicon-conjugated polymer and metal-conjugated polymer-metal structures), mixed analog and digital, etc.
  • Hardware embodiments of the invention may be fabricated based upon software code (e.g., Verilog, HDL, RTL or GDSII data) that is used to configure (e.g., through specific maskworks) a fabrication facility so as to manufacture a device embodying aspects of the present invention.
  • a computer program embodied on a computer-readable medium that stores instructions operable to control operation of one or more processors or circuits to perform image processing on a source image for display by generating a plurality of temporally shifted images from a source image by sequentially shifting an origin point for a frame applied to the source image by less than an interpixel spacing distance, thereby generating the plurality of temporally shifted images for display at a display screen.
  • any software-implemented aspects may be encoded on some form of program storage medium or implemented over some type of tangible transmission medium.
  • the program storage medium may be magnetic (e.g., a floppy disk or a hard drive) or optical (e.g., a compact disk read only memory, or CD ROM), and may be read only or random access.
  • the transmission medium may be twisted wire pairs, coaxial cable, optical fiber, or some other suitable transmission medium known to the art.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

An image processing system for displaying a source image includes a memory for storing a source image having a relatively high resolution digital format, a display screen having a relatively low resolution digital format, and a media acceleration hardware unit adapted to increase color depth values from the source image and generate a plurality of temporally shifted images from the source image by sequentially shifting an origin point for a frame applied to the source image and scaling source image pixel values from each frame, thereby generating the plurality of shifted images having the relatively low resolution for display at the display screen.

Description

DISPLAYED IMAGE IMPROVEMENT
Michael L. Schmit, Shivashankar Gurumurthy, William Herz
BACKGROUND OF THE INVENTION
Field of the Invention
[001] The present invention relates in general to image processing technology. In one aspect, the present invention relates to the display of digital image information.
Description of the Related Art
[002] Digital images are typically displayed on display screen devices (e.g., PC monitors and TV displays) that have a fixed number of pixels and color depth. For example, consumer video formats (such as ATSC for broadcast HDTV, DVD and Blu-ray discs) specify the use of the MPEG-2 standard at main profile high level, which has a maximum resolution of 1920 x 1080 and a 4:2:0 color space that has 8-bits per sub-pixel. Even high definition (HD) displays using the HD 1080p format can display only 2M pixels. However, the digital image and video source content typically has more information (e.g., resolution and color space depth) than can be displayed on existing display screen devices. For example, a variety of image sources, including videos streamed from the Internet and still image photographs taken by cameras or provided over the Internet, have a source resolution (e.g., a total of 5M pixels or even 10-20M pixels) that is far higher than the available resolution of a display screen. Even YouTube videos can have source resolution that is larger than most screens.
[003] To display high resolution images on lower resolution displays, heavy downscaling is typically used whereby the image is scaled to reduce the number of pixels from the source image to fit the display size. In the example of a conventional display or playback device 10 shown in Figure 1, an input image is received that might be part of a digital photo taken by a camera with a resolution of 3840 x 2160 or some other high resolution format. The input image is processed by image decoder 2 by scaling the input image for display on the display 8 which has a lower resolution and different color depth capability (e.g., 1920 x 1080 pixels with a color depth of 10-bits). The decoder 2 receives the input image at input buffer 3, performs JPEG-type image decoding at JPEG decoder 4 to generate high resolution image pixels 1, and then downscales the high resolution image pixels 1 at downscaler 5 to generate the downsampled output image 6 that is stored in the current frame buffer 7 for output to the display 8. With a conventional image scaling approach such as shown in Figure 1, the downscaler 5 takes a group of pixels from the high resolution image pixels 1 (e.g., a 2 x 2 pixel group), computes the average value of that group, and uses the computed average value as a single output pixel for the output image 6. Of course, more complex scaling algorithms may be used, but they still generate output images with reduced resolution and contrast, as demonstrated by the gray transition shading shown in the downsampled output image 6.
[004] Accordingly, a need exists for an improved image display scheme which addresses various problems in the art that have been discovered by the above-named inventors where various limitations and disadvantages of conventional solutions and technologies will become apparent to one of skill in the art after reviewing the remainder of the present application with reference to the drawings and detailed description which follow, though it should be understood that this description of the related art section is not intended to serve as an admission that the described subject matter is prior art.
SUMMARY OF EMBODIMENTS OF THE INVENTION
[005] Broadly speaking, the present invention provides a display device, architecture, system, and method of operation for increasing the perceived spatial resolution of a displayed image beyond the physical number of pixels in the display device and/or for displaying an output image with greater color depth than is included in the source image content. In selected embodiments, the perceived spatial resolution is increased by generating, downscaling, and displaying a plurality of slightly shifted images over time, thereby rendering a series of images with different shifts that effectively convey visual features which are lost with conventional image downscaling. By carefully controlling the shift distance and amount of time between displaying the shifted images and selecting appropriate image scaling algorithms, very fine edges in the original source image will appear sharper in the display image than they would otherwise in a purely static image without appearing that the image is moving. In other embodiments, the processing of the source image generates a downsampled output image with greater color depth than is in the source image by logically promoting each smaller bit depth pixel value (e.g., 8-bit values) from the source image to a larger bit depth pixel value (e.g., 10-bit values) and then using the computed average of the surrounding smaller bit depth pixel values as a larger bit depth pixel value that can be sent to the display. In this way, spatial averaging is used to replace an 8-bit pixel with 10-bits or more of color depth information per sub-pixel for display. In selected embodiments, the image shift and spatial averaging techniques are combined to provide higher resolution and greater color depth. [006] In selected example embodiments, an image processing system and method of operation are disclosed for displaying a source image using a memory, media acceleration hardware, and a display. 
The memory is provided for storing a source image having a first relatively high resolution digital format, and a display screen is provided for displaying images. The source image may be a 2D image, paused video image, or 3D image, in which case the media acceleration hardware may be adapted to transform three-dimensional input source information into a two-dimensional source image by converging left and right images from the three-dimensional input source information into co-located pixel values for the two-dimensional source image. In selected embodiments, the media acceleration hardware unit is adapted to generate a plurality of temporally shifted images from the source image by sequentially shifting an origin point for a frame applied to the source image by less than an interpixel spacing distance, thereby generating the plurality of temporally shifted images for display at the display screen. The media acceleration hardware may also be adapted to generate the temporally shifted images by scaling source image pixel values from each shifted frame, thereby generating the plurality of temporally shifted images having a second, relatively low resolution for display at the display screen. The scaling may be implemented with a downsampler configured to downsample the source image pixel values to reduce pixel density by a predetermined factor in a vertical and/or horizontal direction. In selected embodiments, the media acceleration hardware may be implemented with a graphics processing unit (GPU) hardware decoder that includes an image decoder, origin shifter, and image scaler. The image decoder receives the source image having the first digital format and produces a decoded image having an RGB format, YUV format, or any desired color space. The origin shifter applies a plurality of frames to the decoded image, where each frame is shifted by less than an interpixel spacing distance and is used to generate a shifted decoded image. 
The image scaler changes a picture size of each decoded image to produce a plurality of scaled shifted decoded images having the second, relatively low resolution for display at the display screen. In other embodiments, the media acceleration hardware is adapted to increase color depth values from the source image by selecting a pixel value and an associated plurality of pixel values from the source image having a first, relatively small bit depth, and computing from the plurality of pixel values an average color depth value having a second, relatively large bit depth to replace the selected pixel value. For example, each 8-bit pixel value in the source image may be selected and replaced by the media acceleration hardware with a 10-bit pixel value that is computed by averaging the plurality of pixel values associated with the selected 8-bit pixel value.
BRIEF DESCRIPTION OF THE DRAWINGS
[007] The present invention may be better understood, and its numerous objects, features and advantages made apparent to those skilled in the art by referencing the accompanying drawings. The use of the same reference number throughout the several figures designates a like or similar element.
[008] Figure 1 shows a block diagram representation of a conventional display or playback device.
[009] Figure 2 shows a block diagram representation of an exemplary image processing system constructed in accordance with selected embodiments of the present invention.
[010] Figures 3a-3d depict a source image and an associated sequence of shifted image frames that are shifted less than a pixel in order to illustrate selected embodiments of the present invention.
[011] Figure 4 depicts a first image frame and a second shifted image frame that is shifted less than a pixel in order to illustrate selected embodiments of the present invention.
[012] Figure 5 depicts a process flow sequence for implementing selected embodiments of the present invention.
DETAILED DESCRIPTION
[013] An improved display device and associated method of operation are described for image processing which takes advantage of increased computing power provided by graphics processing units. In selected embodiments, high resolution source images are processed to increase apparent visual resolution by introducing a temporal display factor whereby slightly shifted versions of a source image are re-scaled and displayed over time by using CPU and GPU components to perform computing tasks. The image shifting techniques may also be applied to 3D source image information by first subtracting the 3D information and then applying the image shifting technique. Any apparent motion of the displayed image may be controlled by increasing or decreasing the time between image shifts and increasing or decreasing the fractional distance shifted per time period. In addition or in the alternative, the high resolution source images may be processed to provide greater color depth using downscaling to replace each 8-bit pixel value at the center of a window with a 10-bit pixel value that is the weighted average of all the pixels in the window. [014] Various illustrative embodiments of the present invention will now be described in detail with reference to the accompanying figures. While various details are set forth in the following description, it will be appreciated that the present invention may be practiced without these specific details, and that numerous implementation-specific decisions may be made to the invention described herein to achieve the device designer's specific goals, such as compliance with process technology or design-related constraints, which will vary from one implementation to another. While such a development effort might be complex and time-consuming, it would nevertheless be a routine undertaking for those of ordinary skill in the art having the benefit of this disclosure. 
For example, selected aspects are depicted with reference to simplified block diagram depictions rather than in detail in order to avoid limiting or obscuring the present invention.
[015] Turning now to Figure 2, there is depicted a block diagram representation of an exemplary image processing system 100 constructed in accordance with selected embodiments of the present invention. As depicted, the image processing system 100 may be implemented in any graphics or video playback device, such as desktop or laptop computers, televisions, wireless or mobile devices, personal digital assistants, mobile or cellular phones, DVRs, DVD and Blu-ray players, handheld video players, digital picture frames, console game machines, projectors, tablets, digital book readers, and any other display device that processes and displays images on a fixed display screen. As depicted in Figure 2, the image processing system 100 may be implemented as a host or applications processing unit that includes a bus 95 coupled to one or more processors or processing units 20 and a video or media acceleration hardware unit 30. In addition, the image processing system 100 may include a main memory system having a large DDR SDRAM 62, 64 that is accessed through a DDR controller 60. In addition or in the alternative, one or more memories (e.g., IDE 72, flash memory unit 74, ROM 76, etc.) are accessed through the static memory controller 70. Either or both of the DDR SDRAM or other memories may be integrated with or external to the image processing system 100. Other input/output devices may also be accessed via one or more controllers, including peripheral devices 82, 84, 86 accessed by I/O controller 80, and display device 92 which is accessed through the display controller 90. The display device may be a computer monitor or television screen having a fixed resolution pixel count and color depth, where the resolution defines the smallest noticeable detail or line that can be perceived by the human visual system, and the color depth (or bit depth) is the number of bits used to represent the color of a single pixel in a bitmapped image or video frame buffer. 
On the display 92, each pixel is typically formed with a plurality of sub-pixels (e.g., 3 or more sub-pixels) which each provide single-color regions that contribute to the displayed or sensed color when viewed at a distance. In addition, the number of bits used to define the range of intensity levels of each sub-pixel on the display 92 is fixed, such as 8 bits or 10 bits.
[016] For clarity and ease of understanding, not all of the elements making up the image processing system 100 are described in detail. Such details are well known to those of ordinary skill in the art, and may vary based on the particular computer vendor and microprocessor type. Moreover, the image processing system 100 may include other buses, devices, and/or subsystems, depending on the implementation desired. For example, the image processing system 100 may include caches, modems, parallel or serial interfaces, SCSI interfaces, network interface cards, and the like. In the illustrated embodiment, the CPU 20 executes software stored in the flash memory 74 and/or SDRAM 62, 64.
[017] As shown in Figure 2, the image processing system 100 may be implemented with a processing platform having at least one central processing unit (CPU) 20 and at least one media acceleration hardware 30, such as a graphics processing unit (GPU). As will be appreciated, aspects of the invention could operate using only CPU 20, media acceleration hardware (e.g., GPU) 30, or a combination thereof. Additionally, CPU 20 and media acceleration hardware 30 may be discrete (e.g., separate components) or may be combined into a single package or die. The image processing system 100 receives an input or source image 101 that may be a high resolution JPEG image or other encoded bitstream that is encoded in a first data format and has a first relatively high resolution and fixed color depth. Under control of the CPU 20 and/or media acceleration hardware 30, the input image 101 may be received over a communication network or retrieved from system memory (e.g., 62, 64), and then stored at the input buffer 31. The input image is then decoded using a predetermined image decode process, such as the JPEG decoding process at JPEG decoder 32. It should be noted that other data formats and corresponding decoders may be used, including but not limited to PNG, JPEG-2000, JPEG-XR, and TIFF. In addition, the disclosed technique works with bitmaps that have no compression, such as RAW camera formats. In selected embodiments, the color depth of the decoded image may then be converted using the color space converter 33 which uses spatial averaging techniques to increase the color depth of the input image. 
In addition or in the alternative, the decoded image data may be processed to increase the perceived spatial resolution by sequentially shifting the origin of the decoded image with the origin shifter 34 and then scaling the shifted image using image scaler 35 to thereby generate a downscaled sequence of shifted output images 102-105 that may be further encoded and formatted into a second data format for display on the display 92.
[018] In selected embodiments, the color space converter 33 increases the color depth of the input image 101 using spatial averaging of surrounding sub-pixels. To provide increased color depth, the color space converter 33 logically promotes the initial bit depth (e.g., 8 bits per sub-pixel) to a larger bit depth (e.g., 10-bits per sub-pixel), such as by adding bit positions or converting the input image information to floating point format. A plurality of sub-pixels is then selected using a window to group pixels around a center pixel, and then used to compute an average value as the true or computed 10-bit value for the center pixel that can be sent to the display 92. For example, if four source pixels having 8-bit values of 10, 11, 12 and 13 are selected in a window surrounding a center pixel, the computed 8-bit average on a scale of 0 to 255 would be 11 since the true 11.5 value is effectively truncated by the 8-bit length. However, by computing the average on a 10-bit scale of 0 to 1023, the computed 10-bit average of 46 is a truer value since it does not discard the 0.5 fraction. In this way, the color space converter 33 may compute a simple average of the surrounding source pixels, but other types of averaging computations may be performed. For example, each pixel at the center of the window may be replaced by a weighted average of all the pixels in the window. Though described with reference to an example process which selects and replaces each 8-bit pixel value in the source image with an averaged 10-bit pixel value, it will be appreciated that this is just an example, and the color depth could be increased by any desired amount, such as converting from 6 to 8 bits, 8 to 9 bits, 8 to 12 bits, etc. The quantity of significant data obtained is mathematically defined by the ratio of the downsampling. 
For example, a downsample of 2 in both the x and y directions (a ratio of 4:1) provides two doublings, where each doubling provides one additional bit of significance to be added to the per pixel color depth. With a 16 megapixel image downscaled to a 2MP display using three doublings (2->4->8->16), the color depth could be increased from 8 bits to 11 bits. As will be appreciated, the color depth conversion is independent from actually having a display technology that is capable of displaying this many bits of color depth, though many TV and computer monitor makers now support 10 or 12 bit color.
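As an illustrative sketch of the spatial-averaging promotion described in paragraph [018] (the function and variable names here are ours, not part of the disclosure), the 8-bit to 10-bit example can be computed as follows:

```python
def promote_color_depth(values, extra_bits=2):
    """Average a group of 8-bit values on a promoted (10-bit) scale.

    Promoting each value by `extra_bits` before averaging preserves
    fractional precision that a plain 8-bit average would truncate.
    """
    promoted = [v << extra_bits for v in values]   # 8-bit -> 10-bit scale
    return sum(promoted) // len(promoted)          # 10-bit average

# Paragraph [018]'s example: four 8-bit pixels 10, 11, 12 and 13.
pixels = [10, 11, 12, 13]
avg_8bit = sum(pixels) // len(pixels)      # 11 -- the 0.5 fraction is lost
avg_10bit = promote_color_depth(pixels)    # 46 -- i.e., 11.5 on a 10-bit scale
```

The 10-bit result 46 corresponds to 11.5 on the 8-bit scale, matching the "truer value" described above.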
[019] In addition, the high resolution input image 101 may be processed to increase apparent visual resolution by rescaling a plurality of slightly shifted versions of the input image for display over time, thereby generating a sequence of output images 102-105 that are routed across the bus 95 and controller 90 for the display 92. To this end, each input image 101 is processed by the origin shifter 34 to shift the decoded input image by a predetermined amount, typically less than one pixel in distance, and the resulting shifted images are then rescaled using the downscaler 35. In an example scenario where the input image 101 has a resolution of 3840 x 2160 and the display 92 has the capability to show 1920 x 1080 pixels, the origin shifter 34 may be configured to shift the frame a fractional pixel distance to the right, and the downscaler 35 may be configured to process every block of 2x2 pixels as an average value that is used to generate one output pixel, though any desired scaling algorithm may be used. In this scenario, an initial or reference frame would be applied by the origin shifter 34 to the decoded input image having an image detail portion 101a, and then rescaled by downscaler 35 to generate a first scaled output image 102 having a corresponding image detail portion 102a. The processing of the initial frame is indicated by the first downscale block grouping 110 applied to the input image detail portion 101a which generates the corresponding image detail portion 102a. Subsequently, the origin shifter 34 shifts the origin of the applied frame slightly to the right (e.g., by 0.3 times a single pixel distance) and the downscaler 35 rescales the shifted image to generate a second scaled output image 103 having a corresponding image detail portion 103a. 
The processing of the first shifted frame is indicated by the second downscale block grouping 111 applied to the input image detail portion 101a which generates the corresponding image detail portion 103a. The shift-and-rescale processing can be repeated to generate additional scaled output images 104, 105 having corresponding image detail portions 104a, 105a, as indicated by downscale block groupings 112, 113. As shown with the sequence of image detail portions 102a-105a, the shift-and-rescale processing generates output images that will be perceived over time to have sharper, higher resolution contrast than the static image processing such as shown in Figure 1.
[020] It will be appreciated that the timing and pattern of image shifting provided by origin shifter 34 may be adjusted and controlled as desired when generating the sequence of output images. If desired, the origin may be shifted randomly or in a predetermined pattern, such as an Archimedean spiral, a diagonal line, a square spiral, or alternating between two or more points. For example, Figures 3a-3d depict a source image 300 and an associated sequence of shifted image frames 311-314 that are shifted in two dimensions by less than a pixel distance. In the example image 300, there is depicted an image of a soccer ball 301 in a first position, and a group of four pixels 302 is shown in the upper left corner or origin of the image 300, though it will be appreciated that the pixel group 302 is not drawn to scale.
Under control of the origin shifter, the initial or reference frame 311 is shown in Figure 3a as being initially positioned (e.g., at time 0) with reference to the [0, 0] origin point, or the upper left pixel of the pixel group 302. Using the initial reference frame 311, the image is then rescaled using any desired scaling algorithm, such as a bi-linear filter, to generate a first downscaled output image. At a subsequent time interval (e.g., time 1), the origin point is changed to create a second shifted frame that is located a predetermined fractional distance in the x and/or y direction from the original reference frame 311 (shown with gray lines in Figure 3(b)). In Figure 3b, the second reference frame 312 is shown with dashed lines as being positioned (e.g., at time 1) with reference to the shifted origin point [0, 0.5] so that the frame 312 is shifted down less than one pixel distance. Using the shifted reference frame 312, the image is then rescaled using any desired scaling algorithm to generate a second downscaled output image. The sequence of processing steps may be repeated to shift the origin point to create one or more shifted frames 313, 314 shown with dashed lines as being subsequently positioned (e.g., at time 2 and time 3) with reference to the shifted origin point so that the frame 313 (in Figure 3(c)) is shifted to new origin point [0.5, 0.5] and the frame 314 (in Figure 3(d)) is shifted to new origin point [0.5, 0].
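The square shift pattern of Figures 3a-3d can be sketched as a repeating sequence of sub-pixel origin offsets. This is a minimal illustration only; the generator and its names are ours:

```python
from itertools import cycle

# Origin offsets matching Figures 3a-3d:
# [0, 0] -> [0, 0.5] -> [0.5, 0.5] -> [0.5, 0], then repeat.
SQUARE_PATTERN = [(0.0, 0.0), (0.0, 0.5), (0.5, 0.5), (0.5, 0.0)]

def origin_sequence(pattern=SQUARE_PATTERN):
    """Yield (x, y) origin offsets, each less than one interpixel distance."""
    return cycle(pattern)

shifts = origin_sequence()
frame0 = next(shifts)   # (0.0, 0.0) -- reference frame at time 0
frame1 = next(shifts)   # (0.0, 0.5) -- shifted down half a pixel at time 1
```

Other patterns named in paragraph [020] (spirals, diagonal lines, alternating points) would simply substitute a different offset list.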
[021] As described herein, the origin shifter 34 may use the source image pixel values to compute the shifted pixel values. To illustrate an example bilinear interpolation computation used to shift the source image, reference is now made to Figure 4, which depicts a first image frame 402 and a second shifted image frame 404 that is shifted less than a pixel in length. As depicted, the pixels P1, P2, P3, P4 define the source image, and are spaced apart by a defined minimum pixel distance. The first image frame 402 is applied to the source pixels P1, P2, P3, P4 such that pixel P1 is the origin point for the first image frame 402. The second image frame 404 is shifted in relation to the first image frame 402 by fractional lateral distance x and fractional vertical distance y, where both x and y are less than the defined minimum pixel distance. Using bilinear interpolation, the shifted pixel value P1' may be computed as P1' = (1-x)(1-y)P1 + x(1-y)P2 + (1-x)yP3 + xyP4, with corresponding computations for shifted pixels P2', P3', P4'. Of course, other bilinear interpolation weight computations may be used to compute shifted pixel values.
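The bilinear weight computation above can be written out directly. This is an illustrative sketch, assuming P1..P4 are scalar pixel values and x, y are the fractional offsets:

```python
def bilinear_shift(p1, p2, p3, p4, x, y):
    """Shifted pixel value P1' per the bilinear weights of paragraph [021]:
    P1' = (1-x)(1-y)P1 + x(1-y)P2 + (1-x)yP3 + xyP4, with 0 <= x, y < 1.
    """
    return ((1 - x) * (1 - y) * p1
            + x * (1 - y) * p2
            + (1 - x) * y * p3
            + x * y * p4)

# A zero shift returns the source pixel unchanged; a half-pixel shift in
# both directions returns the plain average of the 2x2 neighborhood.
bilinear_shift(10, 20, 30, 40, 0.0, 0.0)   # 10.0
bilinear_shift(10, 20, 30, 40, 0.5, 0.5)   # 25.0
```

Note that the four weights always sum to 1, so the shifted value stays within the range of the source pixels.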
[022] It will also be appreciated that the downscaling operation performed by the downscaler 35 may be adjusted and controlled to embody any desired scaling process or circuit that takes an input image and resizes it according to a defined ratio. In general, an image scaling function changes the picture size of a video and/or adjusts the image for playback between different types of devices. In operation, the downscaling function uses a window to select a group of pixels in a source image, and then replaces a pixel at the center of the window with a weighted average of all the pixels in the window. In selected embodiments, image scaling may be implemented using an averaging filter and triangle filter for downscaling. The scaling process may be implemented as a CPU scaler or a GPU scaler, which may be a hardware scaler or programmable pixel shader.
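A minimal sketch of the simple averaging filter described above, assuming a 2:1 downscale in each direction over a row-major list of rows with even dimensions (function name is ours):

```python
def downscale_2x2(pixels):
    """Downscale by 2 in each direction: replace each 2x2 block of pixel
    values with its (integer) average, as in a plain averaging filter.
    """
    out = []
    for r in range(0, len(pixels), 2):
        row = []
        for c in range(0, len(pixels[0]), 2):
            block_sum = (pixels[r][c] + pixels[r][c + 1]
                         + pixels[r + 1][c] + pixels[r + 1][c + 1])
            row.append(block_sum // 4)   # one output pixel per 2x2 block
        out.append(row)
    return out

downscale_2x2([[10, 12], [14, 16]])   # [[13]]
```

A triangle (tent) filter or pixel-shader implementation would replace the uniform 1/4 weights with distance-based weights, but the window-then-average structure is the same.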
[023] The subtle re-positioning of the origin point within the image may cause some very fine edges in the image to appear sharper than they would otherwise in a purely static image, but the motion is very slight. This same effect is observed with videos that are panning across a fine texture. If the video is paused, the texture appears a bit blurry and is not easily recognized, but as the video runs, temporal averaging allows the human visual system to clearly see and understand what the texture is. Using the image shifting technique disclosed herein, a video paused on a single frame would not be statically shown, but would instead be displayed over and over with very subtle movement created by slightly shifting or re-positioning the origin. The amount of image motion may be controlled by increasing or decreasing the time between image shifts and/or by increasing or decreasing the fractional distance moved per time period. In order to evaluate the effectiveness of image shifting patterns, one or more test images may be evaluated against a proposed shift pattern using different frame rates to determine if there is an improvement in perceived image resolution. In selected embodiments, a frame rate of 10-20 frames/sec was found to create minimal artifacts, and a shift pattern of alternating the origin between two points (0,0) and (0.5,0.5) was found to provide the best improvement in perceived image resolution with the least amount of side effects. Different image types and display types may have different characteristics.
[024] In addition to processing two-dimensional source images and video still images as described hereinabove, selected embodiments of the present invention may also be applied to render 3D stereoscopic video as 2D images having greater color depth and apparent resolution than the baseline image. This may be achieved by processing the stereoscopic 3D video image information in which there are two images (one for each eye) so as to remove the 3D information, and then averaging the co-located pixels in order to generate an increase in sub-pixel color depth. In addition, apparent spatial resolution may be increased by subtracting out the 3D information, and then using the shift-and-rescale processing as described herein. In an example scenario where 3D video information is provided having a defined video resolution (e.g., 1920 x 1080), the 3D information may be removed using any desired image subtraction technique. For example, a motion estimation algorithm may be applied between the two complete left and right eye images such that the images from the two eyes are now converged back to a 2D image. The "motion" is actually the parallax difference between the two eyes. In areas where the image is at "infinity" (in camera terminology), there will be no parallax, and thus no motion. In other areas, there will be motion, and these small motion vectors can be "removed." Another subtraction technique may be used with a 3D source stream that includes a base video stream plus a delta. In this case, the "delta" information can be used directly without the need to find the motion vectors.
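As an illustrative sketch of the converge-and-average idea (the parallax-removal step via motion estimation is not modeled here, and all names are ours), summing co-located 8-bit left/right pixels onto a 9-bit scale preserves the half-step precision that a plain 8-bit average would discard:

```python
def converge_stereo(left, right):
    """Collapse a stereo pair into one 2D image by summing co-located
    8-bit pixels onto a 9-bit scale, so each averaged value keeps its
    0.5 step of extra color depth. Assumes the parallax has already
    been removed, so left and right pixels are co-located.
    """
    return [[l + r for l, r in zip(lrow, rrow)]   # 8-bit + 8-bit -> 9-bit
            for lrow, rrow in zip(left, right)]

converge_stereo([[100]], [[101]])   # [[201]], i.e. 100.5 on a 9-bit scale
```

Further bit-depth gains would come from combining this with the spatial-averaging promotion of paragraph [018].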
[025] Turning now to Figure 5, an exemplary image processing flow sequence 500 is illustrated for displaying an image with increased color depth and apparent visual resolution. After the method begins at step 502, a source image is received (step 504), such as by retrieving a high resolution image from memory or pausing a high resolution video stream. In an example embodiment, the retrieved image is a high resolution JPEG image, such as a photo from a camera with a resolution of 3840 x 2160 and an 8-bit color depth, and the pixel values are identified for the retrieved image.
[026] At step 506, it is determined if the image is to be processed to increase its color depth. If not (negative outcome to decision 508), then the image processing flow sequence proceeds to step 510. However, if the image color depth is to be increased
(affirmative outcome to decision 508), then the RGB image values are processed to convert or increase their color depth values. The color depth conversion processing may be implemented by first logically promoting the source pixels (or sub-pixels), such as by converting the 8-bit color depth values to 10-bit color depth values. In addition, the color depth for each (sub)pixel is then recomputed as the average of the surrounding pixels (or sub-pixels), thereby increasing the color depth for each (sub)pixel.
[027] At step 510, it is determined if the image processing flow sequence includes image shifting. If not (negative outcome to decision 510), then the image processing flow sequence proceeds to step 512 where the source image is rescaled and then displayed as a static image. However, if the image processing flow sequence does include image shifting (affirmative outcome to decision 510), then the source image is rescaled and displayed (step 514), and for so long as the image shifting is required (negative outcome to decision 516), the image frame is shifted to compute shifted pixel values (step 518), and then the shifted image pixels are rescaled and displayed (step 514) in a process loop. By looping the image shift steps back through step 514, a sequence of slightly shifted output images are generated and displayed until such time as the image shifting is done (affirmative outcome to decision 516), at which point the sequence ends (step 520). As will be appreciated, any desired image scaling algorithm may be used at step 514, including but not limited to simple pixel averaging, bilinear filtering, etc. In addition, the determination at step 516 may use a predetermined shift pattern applied at a specified output frame rate, a timer clock, or even an externally provided "finish" signal to decide when the shifting is finished. Finally, the computation of the shifted pixel values at step 518 may be performed on the RGB image pixel values using any desired interpolation computation technique.
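The loop through steps 514-518 can be sketched with placeholder callables standing in for the shift, rescale, and display operations of Figure 5. This is a hypothetical skeleton, not the claimed implementation:

```python
def shift_and_display(source, shifts, shift_fn, scale_fn, display_fn):
    """Step 514: rescale and display the source; decision 516: loop while
    shifts remain; step 518: compute shifted pixel values, then repeat 514.
    """
    display_fn(scale_fn(source))              # step 514: initial frame
    for dx, dy in shifts:                     # decision 516: shifts pending?
        shifted = shift_fn(source, dx, dy)    # step 518: shifted pixel values
        display_fn(scale_fn(shifted))         # back to step 514

# Usage with identity stand-ins; a real pipeline would plug in the
# interpolation and downscaling routines.
shown = []
shift_and_display(
    source=[[1, 2], [3, 4]],
    shifts=[(0.5, 0.5)],
    shift_fn=lambda img, dx, dy: img,
    scale_fn=lambda img: img,
    display_fn=shown.append,
)
# `shown` now holds the initial frame plus one shifted frame.
```

Exhausting the `shifts` iterable plays the role of the "finish" condition at decision 516; a timer or external signal would simply terminate the iterable.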
[028] By now it will be appreciated that there is disclosed herein a method and apparatus for processing an input image for display. In the disclosed methodology, a decoded input source image is generated that has a first image resolution, such as by applying a JPEG, PNG, JPEG-2000, JPEG-XR, or TIFF decoding process to an input image. The decoded input source image may be generated from a paused video image, or transformed from 3D input source information by converging left and right images from the 3D input source information into co-located pixel values for a two-dimensional input source image. From the decoded input source image, temporally shifted and scaled images are generated for display at a display screen having a second, lower image resolution by applying a plurality of shifted image frames to the decoded input source image and scaling pixel values in each frame to match the second, lower image resolution. In selected embodiments, the temporally shifted and scaled images are generated by sequentially shifting an origin point for a frame applied to the decoded input source image by less than an interpixel spacing distance. The origin point may be shifted randomly or using an Archimedean spiral pattern, a diagonal line pattern, a square spiral pattern, or by alternating between two or more points. In addition, color depth values from the decoded input source image may be increased by replacing each pixel value from the decoded input source image having a first, relatively small bit depth with an average color depth value having a second, relatively large bit depth that is computed from a plurality of surrounding pixel values from the decoded input source image. Finally, the temporally shifted and scaled images are displayed on the display screen having a second, lower image resolution.
[029] In another form, there is disclosed a method and associated apparatus for processing an input image for display. As disclosed, an input source image is generated that has a first plurality of pixel values with a first color depth. The first plurality of pixel values are converted to a second plurality of pixel values with an increased color depth by replacing each pixel value from the first plurality of pixel values with an average color depth value having a second larger color depth that is computed from a plurality of surrounding pixel values from the first plurality of pixel values. In an example embodiment, the first plurality of pixel values are converted by selecting and replacing each 8-bit pixel value in the input source image with a 10-bit pixel value that is computed by averaging a plurality of pixel values surrounding the selected 8-bit pixel value, though other bit depths may be generated. Finally, the second plurality of pixel values are processed for display on a display screen having the second larger color depth. The display processing may include scaling the second plurality of pixel values for display on the display screen having a resolution that is lower than the input source image resolution. In addition, the display processing may include generating temporally shifted and scaled images from the second plurality of pixel values by applying a plurality of shifted image frames to the second plurality of pixel values which are shifted by less than an interpixel spacing distance and scaling pixel values in each image frame for display on the display screen. In this way, the temporally shifted and scaled images may be displayed on the display screen having a resolution that is lower than the input source image resolution.
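The 8-bit to 10-bit conversion described here can be sketched as follows. This Python fragment is an illustration rather than the patented implementation: it assumes a 3x3 neighbourhood (the disclosure does not fix the neighbourhood size) and a grayscale image stored as a list of rows, and it rescales the fractional 0-255 average into the 0-1023 range of a 10-bit display.

```python
def expand_bit_depth(img):
    """Replace each 8-bit pixel with a 10-bit value computed by
    averaging the pixel and its in-bounds 3x3 neighbours, then
    rescaling the 0-255 average into the 0-1023 range."""
    h, w = len(img), len(img[0])
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            vals = [img[ny][nx]
                    for ny in range(max(0, y - 1), min(h, y + 2))
                    for nx in range(max(0, x - 1), min(w, x + 2))]
            avg = sum(vals) / len(vals)          # fractional 8-bit average
            row.append(round(avg * 1023 / 255))  # rescale to 10 bits
        out.append(row)
    return out
```

The averaging step is what recovers the fractional precision that makes the extra two bits meaningful; a pure left-shift of each 8-bit value would add no information.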
[030] As described herein, selected aspects of the invention as disclosed above may be implemented in hardware or software. For example, selected aspects of the embodiments described above may be implemented as functionality programmed into any of a variety of circuitry, including but not limited to programmable logic devices (PLDs), such as field programmable gate arrays (FPGAs), programmable array logic (PAL) devices, electrically programmable logic and memory devices, and standard cell-based devices, as well as application specific integrated circuits (ASICs) and fully custom integrated circuits. Some other possibilities for implementing aspects of the embodiments include microcontrollers with memory (such as electronically erasable programmable read only memory (EEPROM), Flash memory, etc.), embedded microprocessors, firmware, software, etc. Furthermore, aspects of the embodiments may be embodied in microprocessors having software-based circuit emulation, discrete logic (sequential and combinatorial), custom devices, fuzzy (neural) logic, quantum devices, and hybrids of any of the above device types. The underlying device technologies may be provided in a variety of component types, e.g., metal-oxide semiconductor field-effect transistor (MOSFET) technologies such as complementary metal-oxide semiconductor (CMOS), bipolar technologies such as emitter-coupled logic (ECL), polymer technologies (e.g., silicon-conjugated polymer and metal-conjugated polymer-metal structures), mixed analog and digital, etc. Thus, some portions of the detailed descriptions herein are consequently presented in terms of a hardware-implemented process and some portions of the detailed descriptions herein are consequently presented in terms of a software-implemented process involving symbolic representations of operations on data bits within a memory of a computing system or computing device.
Generally speaking, computer hardware is the physical part of a computer, including its digital circuitry, as distinguished from the computer software that executes within the hardware. The hardware of a computer is infrequently changed, in comparison with software and data, which are "soft" in the sense that they are readily created, modified or erased on the computer. These descriptions and representations are the means used by those in the art to convey most effectively the substance of their work to others skilled in the art using both hardware and software. Hardware embodiments of the invention may be fabricated based upon software code (e.g., Verilog, HDL, RTL or GDSII data) that is used to configure (e.g., through specific maskworks) a fabrication facility so as to manufacture a device embodying aspects of the present invention.
[031] In other embodiments, there is disclosed a computer program embodied on a computer-readable medium that stores instructions operable to control operation of one or more processors or circuits to perform image processing on a source image for display by generating a plurality of temporally shifted images from a source image by sequentially shifting an origin point for a frame applied to the source image by less than an interpixel spacing distance, thereby generating the plurality of temporally shifted images for display at a display screen. As will be appreciated, any software-implemented aspects may be encoded on some form of program storage medium or implemented over some type of tangible transmission medium. The program storage medium may be magnetic (e.g., a floppy disk or a hard drive) or optical (e.g., a compact disk read only memory, or CD ROM), and may be read only or random access. Similarly, the transmission medium may be twisted wire pairs, coaxial cable, optical fiber, or some other suitable transmission medium known to the art.
[032] The particular embodiments disclosed above are illustrative only and should not be taken as limitations upon the present invention, as the invention may be modified and practiced in different but equivalent manners apparent to those skilled in the art having the benefit of the teachings herein. Accordingly, the foregoing description is not intended to limit the invention to the particular form set forth, but on the contrary, is intended to cover such alternatives, modifications and equivalents as may be included within the spirit and scope of the invention as defined by the appended claims so that those skilled in the art should understand that they can make various changes, substitutions and alterations without departing from the spirit and scope of the invention in its broadest form.

Claims

WHAT IS CLAIMED IS:
1. An image processing system for displaying a source image, comprising:
a media acceleration hardware unit adapted to generate a plurality of temporally shifted images from a source image having a first digital format with a first resolution by sequentially shifting an origin point for a frame applied to the source image by less than an interpixel spacing distance, thereby generating the plurality of temporally shifted images for display at a display screen.

2. The image processing system of claim 1, where the media acceleration hardware comprises:
an image decoder for receiving the source image having the first digital format and producing a decoded image;
an origin shifter for applying a plurality of frames to the decoded image, where each frame is shifted by less than an interpixel spacing distance and is used to generate a shifted decoded image; and
a scaler for changing a picture size of each decoded image to produce a plurality of scaled shifted decoded images having the second, relatively low resolution for display at the display screen.

3. The image processing system of claim 1, where the media acceleration hardware is adapted to generate the plurality of temporally shifted images from the source image by scaling source image pixel values from each shifted frame, thereby generating the plurality of temporally shifted images having a second, relatively low resolution for display at the display screen.

4. The image processing system of claim 3, where the media acceleration hardware comprises a downsampler configured to downsample the source image pixel values to reduce pixel density by a predetermined factor in a vertical and/or horizontal direction.

5. The image processing system of claim 1, where the source image is a paused video image.
6. The image processing system of claim 1, where the media acceleration hardware is adapted to increase color depth values from the source image by selecting a pixel value and an associated plurality of pixel values from the source image having a first, relatively small bit depth, and computing from the plurality of pixel values an average color depth value having a second, relatively large bit depth to replace the selected pixel value.

7. The image processing system of claim 6, where the media acceleration hardware is adapted to select and replace each 8-bit pixel value in the source image with a 10-bit pixel value that is computed by averaging the plurality of pixel values associated with the selected 8-bit pixel value.

8. The image processing system of claim 1, where the media acceleration hardware is adapted to transform three-dimensional input source information into a two-dimensional source image by converging left and right images from the three-dimensional input source information into co-located pixel values for the two-dimensional source image.

9. The image processing system of claim 1, further comprising:
a memory for storing a source image having a first digital format with a first, relatively high resolution; and
a display screen for displaying images.

10. A method of processing an input image for display, comprising:
generating a decoded input source image having a first image resolution;

generating from the decoded input source image a plurality of temporally shifted and scaled images for display at a display screen having a second, lower image resolution by applying a plurality of shifted image frames to the decoded input source image and scaling pixel values in each frame to match the second, lower image resolution.

11. The method of claim 10, where generating the decoded input source image comprises applying a JPEG, PNG, JPEG-2000, JPEG-XR, or TIFF decoding process to an input image.

12. The method of claim 10, where generating the plurality of temporally shifted and scaled images comprises sequentially shifting an origin point for a frame applied to the decoded input source image by less than an interpixel spacing distance.

13. The method of claim 12, where sequentially shifting the origin point comprises randomly shifting the origin point for each frame by less than an interpixel spacing distance from a previous frame.

14. The method of claim 12, where sequentially shifting the origin point comprises shifting the origin point for each frame by less than an interpixel spacing distance using an Archimedean spiral pattern, a diagonal line pattern, a square spiral pattern, or by alternating between two or more points.

15. The method of claim 10, further comprising increasing color depth values from the decoded input source image by replacing each pixel value from the decoded input source image having a first, relatively small bit depth with an average color depth value having a second, relatively large bit depth that is computed from a plurality of surrounding pixel values from the decoded input source image.

16. The method of claim 10, where the decoded input source image is generated from a paused video image.

17. The method of claim 10, further comprising transforming three-dimensional input source information into a two-dimensional input source image by converging left and right images from the three-dimensional input source information into co-located pixel values for the two-dimensional input source image.
18. The method of claim 10, further comprising displaying the plurality of temporally shifted and scaled images on the display screen having a second, lower image resolution.

19. A method of processing an input image for display, comprising:
generating an input source image with a first plurality of pixel values having a first color depth;
converting the first plurality of pixel values to a second plurality of pixel values having an increased color depth by replacing each pixel value from the first plurality of pixel values with an average color depth value having a second larger color depth that is computed from a plurality of surrounding pixel values from the first plurality of pixel values; and
processing the second plurality of pixel values for display on a display screen having the second larger color depth.

20. The method of claim 19, where converting the first plurality of pixel values comprises selecting and replacing each 8-bit pixel value in the input source image with a 10-bit pixel value that is computed by averaging a plurality of pixel values surrounding the selected 8-bit pixel value.

21. The method of claim 19, where processing the second plurality of pixel values for display comprises scaling the second plurality of pixel values for display on the display screen having a resolution that is lower than the input source image resolution.

22. The method of claim 19, where processing the second plurality of pixel values for display comprises:
generating from the second plurality of pixel values a plurality of temporally shifted and scaled images by applying a plurality of shifted image frames to the second plurality of pixel values which are shifted by less than an interpixel spacing distance and scaling pixel values in each image frame for display on the display screen; and
displaying the plurality of temporally shifted and scaled images on the display screen having a resolution that is lower than the input source image resolution.
23. A computer program embodied on a computer-readable medium, the computer program configured to control a processor to perform image processing on a source image for display by generating a plurality of temporally shifted images from a source image by sequentially shifting an origin point for a frame applied to the source image by less than an interpixel spacing distance, thereby generating the plurality of temporally shifted images for display at a display screen.
PCT/US2012/066335 2011-12-23 2012-11-21 Displayed image improvement WO2013095864A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
KR1020147020469A KR20140105030A (en) 2011-12-23 2012-11-21 Displayed image improvement
CN201280067772.6A CN104067310A (en) 2011-12-23 2012-11-21 Displayed image improvement
EP12798117.3A EP2795573A1 (en) 2011-12-23 2012-11-21 Displayed image improvement

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US13/336,465 US20130162625A1 (en) 2011-12-23 2011-12-23 Displayed Image Improvement
US13/336,465 2011-12-23

Publications (1)

Publication Number Publication Date
WO2013095864A1 true WO2013095864A1 (en) 2013-06-27

Family

ID=47297473

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2012/066335 WO2013095864A1 (en) 2011-12-23 2012-11-21 Displayed image improvement

Country Status (5)

Country Link
US (1) US20130162625A1 (en)
EP (1) EP2795573A1 (en)
KR (1) KR20140105030A (en)
CN (1) CN104067310A (en)
WO (1) WO2013095864A1 (en)

Families Citing this family (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2525581A3 (en) * 2011-05-17 2013-10-23 Samsung Electronics Co., Ltd. Apparatus and Method for Converting 2D Content into 3D Content, and Computer-Readable Storage Medium Thereof
JP6008298B2 (en) * 2012-05-28 2016-10-19 パナソニックIpマネジメント株式会社 Image processing apparatus, imaging apparatus, image processing method, and program
US9538077B1 (en) * 2013-07-26 2017-01-03 Ambarella, Inc. Surround camera to generate a parking video signal and a recorder video signal from a single sensor
US20150036942A1 (en) * 2013-07-31 2015-02-05 Lsi Corporation Object recognition and tracking using a classifier comprising cascaded stages of multiple decision trees
JP6312487B2 (en) * 2014-03-26 2018-04-18 キヤノン株式会社 Image processing apparatus, control method therefor, and program
US9560310B2 (en) * 2014-03-27 2017-01-31 Ctaccel Limited Method and system for rescaling image files
JP6270597B2 (en) * 2014-04-04 2018-01-31 キヤノン株式会社 Image forming apparatus
JP6334358B2 (en) * 2014-10-08 2018-05-30 エルジー ディスプレイ カンパニー リミテッド Image signal processing apparatus and bit extension calculation processing method
US9489710B2 (en) * 2015-02-10 2016-11-08 Qualcomm Incorporated Hybrid rendering in graphics processing
KR102440941B1 (en) * 2015-03-03 2022-09-05 삼성전자주식회사 Image processing devices for computing initial phase having magnitude and direction based on image processing information
CN106610806B (en) * 2015-10-27 2020-02-07 北京国双科技有限公司 Page information display method and device
US10187584B2 (en) 2016-12-20 2019-01-22 Microsoft Technology Licensing, Llc Dynamic range extension to produce high dynamic range images
CN106713922B (en) * 2017-01-13 2020-03-06 京东方科技集团股份有限公司 Image processing method and electronic device
CN108347647B (en) * 2018-02-12 2019-09-10 深圳创维-Rgb电子有限公司 Video picture displaying method, device, television set and storage medium
CN110502954B (en) * 2018-05-17 2023-06-16 杭州海康威视数字技术股份有限公司 Video analysis method and device
KR20210078218A (en) * 2019-12-18 2021-06-28 삼성전자주식회사 Electronic apparatus and method of controlling the same
CN113656623B (en) * 2021-08-17 2024-09-27 安徽大学 Time sequence shift and multi-branch space-time enhancement network-based lap-forming footprint image retrieval method

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6456340B1 (en) * 1998-08-12 2002-09-24 Pixonics, Llc Apparatus and method for performing image transforms in a digital display system
EP1833042A2 (en) * 2006-03-08 2007-09-12 Kabushiki Kaisha Toshiba Image processing apparatus and image display method
WO2008060818A2 (en) * 2006-10-24 2008-05-22 Hewlett-Packard Development Company, L.P. Generating and displaying spatially offset sub-frames
US20090097763A1 (en) * 2007-10-15 2009-04-16 Yi-Jen Chiu Converting video and image signal bit depths
US7548662B2 (en) * 2005-01-21 2009-06-16 Microsoft Corporation System and process for increasing the apparent resolution of a display
EP2383695A1 (en) * 2010-04-28 2011-11-02 Max-Planck-Gesellschaft zur Förderung der Wissenschaften e.V. Apparent display resolution enhancement for moving images

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7190380B2 (en) * 2003-09-26 2007-03-13 Hewlett-Packard Development Company, L.P. Generating and displaying spatially offset sub-frames
ATE507552T1 (en) * 2005-09-28 2011-05-15 Sony Ericsson Mobile Comm Ab METHOD FOR INCREASE THE RESOLUTION OF A COLOR REPRESENTATION AND APPARATUS CARRYING OUT SUCH METHOD
US7545385B2 (en) * 2005-12-22 2009-06-09 Samsung Electronics Co., Ltd. Increased color depth, dynamic range and temporal response on electronic displays
US8248660B2 (en) * 2007-12-14 2012-08-21 Qualcomm Incorporated Efficient diffusion dithering using dyadic rationals


Also Published As

Publication number Publication date
US20130162625A1 (en) 2013-06-27
CN104067310A (en) 2014-09-24
EP2795573A1 (en) 2014-10-29
KR20140105030A (en) 2014-08-29

Similar Documents

Publication Publication Date Title
US20130162625A1 (en) Displayed Image Improvement
US11595653B2 (en) Processing of motion information in multidimensional signals through motion zones and auxiliary information through auxiliary zones
US10855966B2 (en) View interpolation of multi-camera array images with flow estimation and image super resolution using deep learning
CN109716766B (en) Method and device for filtering 360-degree video boundary
TWI739937B (en) Method, device and machine-readable medium for image mapping
US20170236252A1 (en) Foveated video rendering
TWI751261B (en) Deblock filtering for 360 video
JP6163674B2 (en) Content adaptive bi-directional or functional predictive multi-pass pictures for highly efficient next-generation video coding
JP5722761B2 (en) Video compression apparatus, image processing apparatus, video compression method, image processing method, and data structure of video compression file
CN112399178A (en) Visual quality optimized video compression
Tsai et al. A real-time 1080p 2D-to-3D video conversion system
KR102676093B1 (en) Electronic apparatus and control method thereof
US20140286588A1 (en) Image processing system and method
US9020273B2 (en) Image processing method, image processor, integrated circuit, and program
CN102572359B (en) Auto-regressive edge-directed interpolation with backward projection constraint
CN114514746A (en) System and method for motion adaptive filtering as a pre-process for video coding
JP2016511962A (en) Interpolation method and corresponding apparatus
RU2732989C2 (en) Method, device and system for generating a video signal
JP2023538828A (en) Antialiasing for distance field graphics rendering
US20240031543A1 (en) Processing of extended dimension light field images
US20210358091A1 (en) Method and apparatus for geometric smoothing
CN118053092A (en) Video processing method and device, chip, storage medium and electronic equipment
CN116648903A (en) Processing of extended dimension light field images

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 12798117

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 20147020469

Country of ref document: KR

Kind code of ref document: A

WWE Wipo information: entry into national phase

Ref document number: 2012798117

Country of ref document: EP