WO2014074361A1 - Method and system for accelerating video preview digital camera - Google Patents

Method and system for accelerating video preview digital camera

Info

Publication number
WO2014074361A1
WO2014074361A1 · PCT/US2013/067444
Authority
WO
WIPO (PCT)
Prior art keywords
host computer
imaging device
images
recited
digital
Prior art date
Application number
PCT/US2013/067444
Other languages
French (fr)
Inventor
Ji Shen
Bruce Barnes
Hamid Kharrati
Original Assignee
Pathway Innovations And Technologies, Inc.
Priority date
Filing date
Publication date
Application filed by Pathway Innovations And Technologies, Inc. filed Critical Pathway Innovations And Technologies, Inc.
Priority to US14/382,181 priority Critical patent/US10402940B2/en
Publication of WO2014074361A1 publication Critical patent/WO2014074361A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14Digital output to display device ; Cooperation and interconnection of the display device with other functional units
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/66Remote control of cameras or camera parts, e.g. by remote control devices
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/40Extracting pixel data from image sensors by controlling scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled
    • H04N25/44Extracting pixel data from image sensors by controlling scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled by partially reading an SSIS array
    • H04N25/443Extracting pixel data from image sensors by controlling scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled by partially reading an SSIS array by reading pixels from selected 2D regions of the array, e.g. for windowing or digital zooming
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2340/00Aspects of display data processing
    • G09G2340/02Handling of images in compressed format, e.g. JPEG, MPEG
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2340/00Aspects of display data processing
    • G09G2340/04Changes in size, position or resolution of an image
    • G09G2340/0407Resolution change, inclusive of the use of different resolutions for different screen areas
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2350/00Solving problems of bandwidth in display systems

Definitions

  • The RTS performs scaling or cropping on the raw image frame acquired from the high resolution sensor, down to a smaller frame, for each frame. Alternatively, a group of frames can be processed at once.
  • Raw frames from the high resolution sensor can be stored and processed immediately before being sent to a host computer, or processed immediately upon being acquired in order to conserve storage space in the frame buffer and/or any associated storage device in the camera device 450. If a USB2.0 connection is used and a slow fps rate is acceptable to a user in exchange for high quality images, compression is not necessary. Therefore, whether to perform compression at the digital camera side of the connection can be completely controlled by the user. Further, if the connection is slow, with a USB2.0 connection, for example, and a high frame rate is desirable, then compression can be set automatically upon the host computer's detection of the slower USB2.0 connection. Application software running on the host computer coordinates the subsequent rendering of the scaled/processed image frames.
  • A host computer monitor scaling factor and panning offset of the image can be applied via interaction with host computer peripherals such as a mouse, a track pad, and keyboard events.
  • A yet further method of transferring images between a digital imaging device and a host computer includes the steps of transferring uncompressed or compressed image data between a digital imaging device and a host computer at a predetermined frame rate; and receiving video frames at the host computer from the digital camera, the frames having been scaled or cropped within the digital imaging device before receipt thereof by the host computer, and executing commands to display the video frames in the host computer's video display window. Any compression, although unnecessary in this system, would occur in the camera.
  • A device using the present method can transfer images of very high resolution quality, e.g., 10MP, at close to full motion (30fps), over a (relatively slow) data link like USB3.0, which is the fastest standards-based data link today but can normally only transmit the same resolution (10MP here) at less than 10fps.
  • Assume a camera sensor's resolution is 10MPixel and the display is a 1080P resolution computer monitor, which is equivalent to 2MPixels on a screen having an aspect ratio of 16:9. A 10MPixel image of a single video frame acquired from the sensor must therefore be scaled down or cropped to be displayed on the smaller resolution screen.
  • The needed scaling or cropping can be performed by the host computer or by the camera device. If the scaling and cropping are performed on the host computer, the camera device can output at full resolution, while transferring a large amount of data across a bandwidth limited USB connection. However, this greatly reduces the transfer rate of the image frames.
  • When scaling and cropping are done in real time for every frame in the camera device, before the images are transferred across the USB connection to the host computer, only a fraction of the total image size needs to be transferred. This results in a much reduced bandwidth requirement on the USB connection and increases the video frame rate, while maintaining the same high resolution visual clarity.
  • Scaling and cropping algorithms are well supported in FPGAs, DSPs, and some ARM processors. The required processing in the camera device does not cause discernible delays in transmitting video frames to the host computer.
  • A scaling algorithm is depicted in Fig. 5a. RTS software within the digital camera device performs a scaling down operation to create an SDF of the same size as the host computer monitor screen. Because the SDF is smaller than the raw sensor frame, the data volume transferred across the USB connection is advantageously reduced.
  • With reference to Fig. 5b, the RTSS performs a cropping operation first, then scales down the cropped image to create an SDF. Because the host computer monitor screen size is smaller than the cropped frame, a bandwidth saving in transferring the SDF is achieved.
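The bandwidth saving described for the SDF can be put in rough numbers. A minimal sketch, assuming 3 bytes per pixel (24-bit color) and illustrative sensor dimensions for a ~10MPixel sensor feeding a 1080P (2MPixel) monitor:

```python
def frame_bytes(width, height, bytes_per_pixel=3):
    """Uncompressed frame size; 3 bytes/pixel (24-bit color) is an assumption."""
    return width * height * bytes_per_pixel

# Hypothetical 10MPixel sensor frame vs. a 1080P-sized SDF.
raw = frame_bytes(3648, 2736)    # ~10MPixel (dimensions are illustrative)
sdf = frame_bytes(1920, 1080)    # 2MPixel, matching the monitor
print(round(raw / sdf, 1))       # ~4.8x less data per frame over the USB link
```

The same ratio carries over directly to frame rate: sending the SDF instead of the raw frame supports roughly 4.8x more frames per second over the same link.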

Abstract

A method for processing images in an imaging device includes the steps of using real time scalar software (RTSS) for: receiving scaling input data (SID) from video preview application software (VPAS) within a host computer; and performing scaling and cropping operations within the imaging device on raw image frame data to create a scaled down frame (SDF) within the imaging device. As a result, images of high resolution can be transmitted efficiently, with significantly reduced amounts of data over the data links, and achieve a high number of frames per second.

Description

METHOD AND SYSTEM FOR ACCELERATING VIDEO PREVIEW DIGITAL CAMERA
This application claims benefit under 35 U.S.C. §119(e) as a non-provisional application of provisional patent application no. 61/722,966 filed on November 6, 2012, the content of which is hereby incorporated by reference in its entirety.
FIELD
The following subject matter relates to the general field of camera imaging.
BACKGROUND
Document cameras and other digital cameras increasingly have higher resolution digital imaging sensors, ranging from 2MPixel, 5MPixel, 10MPixel, to 30MPixel and even 40MPixel, and will have even higher resolutions in the future. When a live preview of the video feed from these cameras is to be shown on a display of a host computer, several factors significantly limit the video display speed (often referred to as the "smoothness" of the video), measured in frames per second (fps), on the host computer.
With reference to Fig. 1, a document camera 2 or other digital camera has been composed of a system of embedded electronics which include CMOS or CCD sensors working in conjunction with Digital Signal Processing (DSP) processors, field programmable gate array (FPGA) processors, or ARM processors. Cameras based on these processors are capable of sending out high definition (HD) video at very high frame rates, such as 1080P HD video at sixty frames per second. However, 1080P, which is today's highest display resolution on consumer-oriented HD TVs, is only roughly 2MPixel in resolution, while sensor resolution is going far beyond, to 10x or even 20x the 1080P resolution, and is ever increasing. So video displays on HDTV screens can often deliver 60fps but are unable to take full advantage of the camera sensor's high resolution.
Live video feed from a digital camera transmitted to a host computer normally requires a USB2.0 or USB3.0 connector and can deliver full frame images at the maximum sensor resolution; however, the live video feed often suffers from the limited bandwidth of USB channels, which results in very low video frame rates.
A host computer's display screen is most likely to have less resolution than the imaging sensor. In the past, a digital camera sent images directly to a host computer at whatever resolution the digital camera received the images from the digital camera's sensor outputs. The previous method jams up the data link easily and causes low frame rates when displaying video images on the host computer's monitor. Scalar software is currently present on digital camera devices. Images are scaled to a set of predefined resolutions on the camera device. Image output is one directional, to the host computer or other display terminals. The "predefined" resolution sets severely limit the dexterity of these systems. Moreover, the predefined resolutions may not maximize the resolution of the display due to mismatched resolution values of the predefined resolution versus the display resolution of the host computer's display terminal.
Further, large sized video frame data must be transferred across limited bandwidth connections between the digital imaging device and the host computer. When frame resolution is high, this results in unacceptably low frame rate transfer for visual display on the host computer. To overcome such USB bandwidth limitations, image compression using Motion JPEG or H.264 encoding techniques is often employed. However, image compression often results in loss of image clarity due to the nature of most compression algorithms. Encoding of video on the camera and decoding on the host computer can also require significant processing overhead, which increases cost and causes low frame rates while displaying.
SUMMARY
In the case of visual presentation, it is highly desirable to display uncompressed video with the highest resolution possible, while maintaining a high frame rate, so that any movement of objects within a video image does not exhibit jerkiness, choppiness or stutter in the video. Such a desirable video characteristic is highly challenging to practitioners in the art.
One of the advantages of the inventive subject matter is that transmission of high quality video images without loss of video quality requires significantly less data to be transmitted through a narrow data pipe, because the host computer receives only those pixels that will be visible on the host computer's display. The imaging device accomplishes this by receiving real time input from the host computer in between each frame and dynamically calculating each frame to forward only the pixels needed by the host computer, achieving the least transmission delay over data links so that high resolution images can be viewed in real time on a host monitor with smooth motion. For example, "windows" on desktops, i.e., on-screen images of applications running on a processor, are often of a smaller size than the full screen. For an image that is minimized from a full screen image to a window that takes up only a portion of the monitor's screen, less than all available pixels need be used. In a reduced sized window appearing on a monitor, the image appearing in the window needs only the number of pixels used by the window; i.e., a window one tenth the size of the screen will use one tenth the total number of available pixels. Today, the most common high resolution display monitors for PCs or Macs have less than three megapixels, with two megapixel resolution monitors being the overwhelming majority. The latest model of an iMac® computer has 5MPixels using Apple's Retina Display technology. Five megapixels is still relatively rare. Since there are fewer pixels on the host computer's monitor in most cases, even though the desire is to display video at the sensor's maximum resolution, it is basically not necessary to transfer every frame of video at the sensor's full resolution. Instead, a scaled or cropped frame can deliver the same visual clarity at a fraction of the total bandwidth required.
The disclosed method of transferring images between a digital imaging device and a host computer includes the steps of transferring image data between a digital imaging device and a host computer at a predetermined frame rate; and receiving video frames at the host computer from the digital camera, the frames having been scaled or cropped within the digital imaging device before receipt thereof by the host computer, and executing commands to display the video frames in the host computer's video display window.
The present system uses a unique lossless data reduction technique. The system includes a real time interactive/adaptive scalar (RTS), which runs in a digital imaging device. It takes input from a host computer once connected via USB3.0 or USB2.0 high speed interface (or any other high speed data link) and adapts the output of a digital camera to the host computer's capability for every video frame before it is transmitted to the computer and rendered. Input parameters include: current computer monitor's screen resolution/size capability in pixels, current window size, current zoom factor, panning offset values, and other values.
The RTS performs scaling or cropping on raw image frame data acquired from a high resolution sensor of the digital imaging device for each frame. More enhanced data reduction of the scaled frames can be optionally performed for USB2.0 connections due to the highly limited bandwidth on such connections. Application software running on the host computer coordinates rendering of the scaled/processed image frame in conjunction with the host operating system (OS), CPU, graphics processing unit (GPU), and Graphics Card.
The host computer is surveyed for graphical display resolution in pixel sizes. Dimension data for the host computer's video display window is collected, and scaling input data (SID) is sent from the host computer to the digital imaging device. The SID includes display dimensions, window dimensions, a scaling factor, and panning offset data for the digital camera. Digital images are transferred with resolution not exceeding a maximum resolution of the host computer's video display window, in uncompressed or compressed format. Digital images are transferred over a data link from the digital imaging device to the host computer via one or a combination of a USB2.0 connection, a USB3.0 connection, or another limited transmission bandwidth connection. Use of compression techniques in this system is possible but not required: data reduction alone is sufficient to accomplish the objective of this system without using compression techniques.
BRIEF DESCRIPTION OF THE SEVERAL DRAWINGS
Fig. 1 shows a prior art method of data transmission;
Fig. 2 shows a system using the inventive method;
Fig. 3 shows the electronics of the host computer;
Fig. 4 is a flow chart showing steps performed in the inventive method;
Fig. 5a is an example of scaling a frame in a camera device; and
Fig. 5b is an example of a cropped frame in a camera device.
DETAILED DESCRIPTION
A USB3.0 connection can enable transmission of 320MB/s. A ten megapixel image frame takes about 30MB of data, so transmission of roughly 10fps over USB3.0 is possible. USB2.0 has a practical throughput of 35MB/s, which means it can send about 1fps at 10MP, or 2.5fps at 5MP. The present subject matter achieves 10MP at close to 30fps, uncompressed. Compressing the video before sending over the connection allows an even higher frame transfer rate when transferred over USB3.0. However, avoiding compression is preferable to preserve the quality of the image. If images are sent over USB2.0, then the images can be compressed to achieve 2MP at 30fps. Without compression, sending 2MP at only 6fps is possible with a USB2.0 connection.
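The frame rates above follow directly from link throughput and frame size. A quick sketch of the arithmetic, assuming 3 bytes per pixel (consistent with the ~30MB figure for a 10MP frame):

```python
def max_fps(link_mb_per_s, megapixels, bytes_per_pixel=3):
    """Upper bound on uncompressed frame rate for a given data link.

    3 bytes/pixel (24-bit color) is an assumption; it matches the
    text's ~30MB per 10 megapixel frame.
    """
    frame_mb = megapixels * bytes_per_pixel  # MB per uncompressed frame
    return link_mb_per_s / frame_mb

# USB3.0 at 320MB/s moving 10MP frames: roughly 10fps, as stated.
print(round(max_fps(320, 10), 1))   # ~10.7
# USB2.0 at 35MB/s practical throughput: ~1fps at 10MP, ~2.3fps at 5MP.
print(round(max_fps(35, 10), 2))    # ~1.17
print(round(max_fps(35, 5), 2))     # ~2.33
```

The same arithmetic explains why scaling in the camera helps: shrinking a frame to the 2MPixel monitor size cuts `frame_mb` by roughly 5x, raising the attainable frame rate by the same factor.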
With reference to Fig. 2, the present system is a real time interactive/adaptive scalar (RTS) that runs in a camera device 202 (throughout this document digital imaging device and camera device are synonymous). The system receives input parameters from a host computer 204 once connected via USB3.0 or USB2.0 high speed interface 206. Input parameters include: a host computer monitor's screen resolution/size in pixels, a host computer's monitor window size, current zoom factor, panning offset values, and other values. The host computer 204 is connected to a monitor 214 or other display means and contains a processor (not shown) as a part of its hardware 212 running video preview application software (VPAS) 208 and an operating system 210. VPAS is not necessary and is only shown in this embodiment as an example of the dexterity of the inventive system.
Host computer electronics are shown in Fig. 3. The host computer 204 includes a random access memory 302, a central processing unit 304, monitor 214 and a GPU card. The camera device 202 contains a lens 224, a processor (not shown) within the camera's hardware 232 running real time scalar software, and a camera device operating system 230.
With reference to Fig. 4, the method of operation within a host computer device 400 is depicted. At step 402, video preview application software within the host computer is initialized. In some embodiments, step 402 is optional, as the video preview application software does not significantly influence the operation of the inventive system. At step 404, the VPAS surveys the screen and window sizes of the monitor 214 and stores the same in random access memory 302 for later reference by the digital camera device. At step 406, the VPAS acquires scaling and panning parameters in preparation for creating scaling input data (SID). With panning, for example, the VPAS establishes a point of reference in the image frame and determines how far in an x-y coordinate system a user has requested that an image be moved (the request being supplied via mouse, keypad or other means). The portion of the image that would no longer be visible on screen is cropped to reduce the number of pixels being sent over the data link. At step 408, the VPAS sends that request in the form of SID, containing an offset measurement, to the digital camera so that an offset image (i.e., a panned image) is returned to the host computer from the digital camera. The SID is created having instructions for the digital camera device. The host computer then waits at step 410 for a single panned frame to be transferred from the digital camera. With scaling, the digital camera, in response to a scaling request, will crop the outer edges of the image and send that cropped image to the host computer. At step 412, the scaled down frame image is displayed on the host computer's screen.
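The host-side steps 404-406 (survey the display, gather zoom/pan parameters, package them as SID) might be sketched as follows. The field names and the `ScalingInputData` container are illustrative assumptions; the disclosure specifies only that the SID carries display dimensions, window dimensions, a scaling factor, and panning offset data:

```python
from dataclasses import dataclass, asdict

@dataclass
class ScalingInputData:
    """SID contents per the disclosure; exact field names are hypothetical."""
    display_w: int   # host monitor resolution in pixels
    display_h: int
    window_w: int    # current video display window size
    window_h: int
    zoom: float      # current zoom (scaling) factor
    pan_x: int       # panning offset requested via mouse/keypad
    pan_y: int

def build_sid(display, window, zoom=1.0, pan=(0, 0)):
    """Steps 404-406: survey screen and window sizes, then acquire
    scaling and panning parameters to create the SID."""
    return ScalingInputData(display[0], display[1],
                            window[0], window[1],
                            zoom, pan[0], pan[1])

# Host with a 1080P monitor, a half-size preview window, 2x zoom,
# and a pan request of (100, 50) pixels.
sid = build_sid((1920, 1080), (960, 540), zoom=2.0, pan=(100, 50))
print(asdict(sid))
```

At step 408 this structure would be serialized and sent over the USB link; the camera then returns a single frame already cropped and scaled to these parameters.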
Any portion of the complete set of pixels of an image from a digital camera will always be available to the host computer in successive frames. The host computer is only limited in that it cannot receive all of the pixels of a digital camera in a single frame. As such, a wide range of on-screen viewing options is available. For example, any section of an image can be zoomed in on to view very small details of an image on a host computer's display screen. The host computer receives only those portions of the image that are visible on screen or of most interest to the user.
Zooming in on an image causes the digital camera to omit the outer edges of the image from transmission, thereby reducing the pixel size of the frame being sent. Similarly, zooming out from an image eliminates the need for high detail of less important features within a frame. Several commonly known techniques or algorithms can be used to crop or scale an image, such as discarding every other pixel, averaging every four pixels to form a new pixel, and so on.
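The two scaling techniques named above, pixel discarding and four-pixel averaging, can be sketched on a plain 2-D list of pixel values (a minimal stand-in for a real frame buffer; the function names are illustrative):

```python
def downscale_discard(frame):
    """Scale down 2x in each dimension by discarding every other pixel."""
    return [row[::2] for row in frame[::2]]

def downscale_average(frame):
    """Scale down 2x by averaging each 2x2 block of pixels into one pixel."""
    h, w = len(frame), len(frame[0])
    return [[(frame[y][x] + frame[y][x + 1] +
              frame[y + 1][x] + frame[y + 1][x + 1]) // 4
             for x in range(0, w, 2)]
            for y in range(0, h, 2)]

frame = [[0, 4],
         [8, 12]]
small_discard = downscale_discard(frame)   # keeps the top-left pixel: [[0]]
small_average = downscale_average(frame)   # (0+4+8+12)//4 = 6:        [[6]]
```

Discarding is cheaper but can alias fine detail; averaging costs a few additions per output pixel but preserves more information, which is why both are offered as options.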
With further reference to Fig. 4, digital camera device 450 contains a processor running RTS software, which at step 452, is initialized. At step 454, high resolution images are acquired from the camera device's imaging sensor 224. At step 456, the images are stored in a frame buffer. At step 458, the camera device 450 listens for and receives SID information from the host computer device 400. "Listening" can be continuous, done at predetermined intervals, or done in response to a host computer event, for example, when a new host computer window is opened on screen. At step 460, the processor within the camera device 450 performs scaling and/or cropping based on the SID information that is received from the host computer device 400. At step 462, if desired, the image data is compressed; however, this step is not required.
Compression typically depends on the type of data link between the camera and the host computer. Compression can be used for USB2.0 but is unnecessary for USB3.0. At step 464, a scaled down frame is created.
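The camera-side sequence of steps 452 through 464 can be outlined as a single loop iteration. This is a hedged sketch under stated assumptions: the callable parameters, the SID field names, and the crop-only processing are placeholders for the camera's actual acquisition, communication, and RTS scaling machinery.

```python
# Illustrative sketch of one pass through the camera-side flow of Fig. 4
# (steps 454-464).  The callables and SID keys are assumptions, standing in
# for the sensor driver, the USB link, and the RTS processing pipeline.

def camera_pass(acquire_frame, receive_sid, send_frame, compress=None):
    frame = acquire_frame()                  # step 454: grab high-res image
    sid = receive_sid()                      # step 458: listen for SID
    # step 460: crop to the region the host actually needs to display
    x, y = sid["crop_x"], sid["crop_y"]
    w, h = sid["crop_w"], sid["crop_h"]
    cropped = [row[x:x + w] for row in frame[y:y + h]]
    if compress is not None:                 # step 462: optional compression
        cropped = compress(cropped)
    send_frame(cropped)                      # step 464: scaled-down frame out

# Exercise the loop with a 4x4 synthetic frame and a 2x2 crop request.
frame = [[r * 10 + c for c in range(4)] for r in range(4)]
sent = []
camera_pass(lambda: frame,
            lambda: {"crop_x": 1, "crop_y": 1, "crop_w": 2, "crop_h": 2},
            sent.append)
```

Note that compression is passed in as an optional hook, mirroring the description above: it can be enabled per link type or per user preference without changing the rest of the pipeline.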
The RTS performs scaling or cropping on each raw image frame acquired from the high resolution sensor, down to a smaller frame. Alternatively, a group of frames can be processed at once. Images from the high resolution sensor can be stored and processed immediately before being sent to a host computer, or they can be processed immediately upon acquisition in order to conserve storage space in the frame buffer and/or any associated storage device in the camera device 450. If a USB2.0 connection is used and a slow fps rate is acceptable to a user in exchange for high quality images, compression is not necessary. Therefore, whether to perform compression at the digital camera side of the connection can be completely controlled by the user. Further, if the connection is slow, with a USB2.0 connection, for example, and a high frame rate is desirable, then compression can be enabled automatically upon the host computer's detection of the slower USB2.0 connection. Application software running on the host computer coordinates the subsequent rendering of the scaled/processed image frame, in conjunction with the host OS, CPU, GPU, and graphics card, on the host computer's monitor. A host computer monitor scaling factor and panning offset of the image can be applied via interaction with host computer peripherals such as a mouse, a track pad and keyboard events.
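The compression policy described above — user-controlled in general, but enabled automatically when a slower USB2.0 link is detected and a high frame rate is preferred — reduces to a small decision function. This is a sketch under assumed inputs; the string labels are illustrative, not part of the specification.

```python
# Hedged sketch of the compression policy described in the text.
# link: "usb2" or "usb3"; user_pref: "quality" or "framerate".
# Both label sets are illustrative assumptions.

def should_compress(link, user_pref):
    """Decide whether the camera should compress before transfer."""
    if link == "usb3":
        return False                      # USB3.0 bandwidth makes it unnecessary
    # On USB2.0, compress only when the user prioritizes frame rate
    # over maximum image quality.
    return user_pref == "framerate"
```

The point of isolating the decision this way is that the same camera-side pipeline serves both link types; only this flag changes.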
A yet further method of transferring images between a digital imaging device and a host computer includes the steps of transferring uncompressed or compressed image data between a digital imaging device and a host computer at a predetermined frame rate; receiving video frames at the host computer from the digital camera, the frames having been scaled or cropped within the digital imaging device before receipt thereof by the host computer; and executing commands to display the video frames in the host computer's video display window. Any compression, although unnecessary in this system, would occur in the camera. A device using the present method can transfer images of very high resolution quality, e.g., 10MP, at close to full motion (30fps) over a data link like USB3.0, which, although the fastest standards-based data link available today, can normally only transmit the same resolution (10MP here) at less than 10fps.
EXAMPLE 1
Assume a camera sensor's resolution is 10MPixel. A 1080P computer monitor is equivalent to about 2MPixels on a screen having an aspect ratio of 16:9, so a 10MPixel image of a single video frame acquired from the sensor must be scaled down or cropped to be displayed on the lower resolution screen. The needed scaling or cropping can be performed by the host computer, or it can be performed by the camera device. If the scaling and cropping is performed on the host computer, the camera device must output at full resolution, transferring a large amount of data across a bandwidth limited USB connection. However, this greatly reduces the transfer rate of the image frames.
If scaling and cropping is done in real time for every frame in the camera device before the images are transferred across the USB connection to the host computer, only a fraction of the total image size needs to be transferred. This results in a much reduced bandwidth requirement on the USB connection and an increased video frame rate, while maintaining the same high resolution visual clarity. Scaling and cropping algorithms are well supported in FPGAs, DSPs, and some ARM processors. The required processing in the camera device does not cause discernible delays in transmitting video frames to the host computer.
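The bandwidth arithmetic behind Example 1 can be made concrete. The figures below are assumptions for illustration, not from the specification: 3 bytes per pixel (24-bit RGB) and roughly 400 MB/s of practical USB3.0 throughput.

```python
# Back-of-the-envelope arithmetic for Example 1.  Both constants are
# assumptions: 24-bit RGB pixels and ~400 MB/s of practical USB3.0
# throughput (well below the 5 Gbit/s signaling rate).

BYTES_PER_PIXEL = 3
LINK_MB_PER_S = 400  # assumed practical USB3.0 throughput, MB/s

def max_fps(megapixels):
    """Highest uncompressed frame rate the link can sustain."""
    frame_mb = megapixels * BYTES_PER_PIXEL  # MB per uncompressed frame
    return LINK_MB_PER_S / frame_mb

full_rate = max_fps(10)    # full 10MP frames: roughly 13 fps
scaled_rate = max_fps(2)   # 2MP frames after camera-side scaling
```

Under these assumptions, full 10MP frames cannot reach 30fps, while the camera-side-scaled 2MP frames comfortably can, which matches the Example 1 rationale for scaling before transfer.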
EXAMPLE 2
A scaling algorithm is depicted in Fig. 5a. RTS software within the digital camera device performs a scaling down operation to create an SDF sized to the host computer monitor screen. When the host computer monitor screen size is smaller than the sensor full frame size, the data volume transferred across the USB connection is advantageously reduced.
Example 3
A cropping and scaling algorithm is depicted in Fig. 5b. RTSS performs a cropping operation first, then scales down the cropped image to create an SDF. When the host computer monitor screen size is smaller than the cropped frame, a bandwidth saving is achieved in transferring the SDF.
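The Fig. 5b order of operations, crop first and then scale the cropped region to the output size, can be sketched as below. Nearest-neighbour resampling is used here as one of several possible scaling algorithms (the specification does not mandate a particular one), and the function name is illustrative.

```python
# Sketch of the Fig. 5b pipeline: crop a region of interest, then scale
# the cropped region to the output window size.  Nearest-neighbour
# resampling is an assumed choice of scaling algorithm.

def crop_then_scale(frame, crop, out_w, out_h):
    """crop is (x, y, w, h); output is an out_h x out_w pixel grid."""
    x, y, w, h = crop
    region = [row[x:x + w] for row in frame[y:y + h]]
    # Nearest-neighbour resample of the cropped region.
    return [[region[(ry * h) // out_h][(rx * w) // out_w]
             for rx in range(out_w)]
            for ry in range(out_h)]

frame = [[r * 4 + c for c in range(4)] for r in range(4)]
sdf = crop_then_scale(frame, (0, 0, 4, 4), 2, 2)  # samples rows/cols 0 and 2
```

Cropping before scaling means the scaler only touches the pixels that will actually reach the host, which is the source of the bandwidth saving noted above.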
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. For example, one or more elements can be rearranged and/or combined, or additional elements may be added. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims

I claim:
1. A method for transferring digital images from an imaging device to a graphical display device comprising the steps of:
within a digital video imaging device:
receiving a graphical display resolution in pixel sizes from a host computer;
storing pixel size dimension data for the host computer's video display window; and receiving scaling input data (SID) from the host computer via a system driver level interface; and
transferring to the host computer digital images with a resolution of each digital image not exceeding a maximum number of pixels visible to the user on the host computer's video display window.
2. The method as recited in claim 1 wherein pixelation of a transferred digital image does not exceed pixelation of the video display window.
3. The method as recited in claim 1 further comprising interacting with peripherals to receive mouse, track pad, keyboard events to determine a scaling factor and panning offset of the image.
4. The method as recited in claim 3, wherein the SID is selected from the group consisting of display dimensions, window dimensions, scaling factor, panning offset data and any combination thereof.
5. The method as recited in claim 3 further comprising transferring the digital images to the host computer and displaying the digital images on the host computer's video display window at a high frame rate of about thirty frames per second.
6. The method as recited in claim 4 further comprising transferring digital images over a high speed data link from the imaging device to the host computer, wherein the high speed data link is selected from the group consisting of a USB2.0 connection, a USB3.0 connection and a limited transmission bandwidth connection.
7. The method as recited in claim 1 further comprising transferring images from the imaging device to the processor, the images comprising a resolution that is equal to the display window resolution or the resolution of image visible to the user at the time of display.
8. A method for processing images in an imaging device comprising
using real time scalar software (RTSS) for:
a. receiving scalar input data (SID) in the imaging device from video preview application software (VPAS) executing on a host computer;
b. performing a scaling and cropping operation within the imaging device on raw image frame data to create a scaled down frame (SDF) or cropped frame confined by the SID; and c. storing the scaled or cropped frame within a non-transitory medium in preparation for sending to the host computer.
9. The method as recited in claim 8 further comprising transferring the SDF to the host computer via a high speed data link.
10. The method as recited in claim 9 wherein the SDF is uncompressed.
11. A method of transferring images between a digital imaging device and a host computer comprising:
transferring image data between a digital imaging device and a host computer having a video display window at a predetermined resolution; and
receiving video frames at the host computer from the digital imaging device, the frames having been scaled or cropped within the digital imaging device before receipt by the host computer, and
executing commands to display the video frames in the host computer's video display window.
12. The method of transferring images as recited in claim 11 wherein the transferred video frames have a resolution of 1080P transferred at a frame rate consisting of twenty, thirty, or higher frames per second.
13. The method of transferring images as recited in claim 12 wherein successive video frames are displayed on a host computer in real time.
14. The method of transferring images as recited in claim 12 wherein successive frames are displayed on a host computer with a constant time span between each successive frame.
15. The method of transferring images as recited in claim 11 wherein the scaling or cropping of the frame is executed such that a number of pixels transferred from the digital imaging device corresponds to a maximum number of pixels of the host computer.
16. The method of transferring images as recited in claim 11 wherein a number of pixels transferred from the digital imaging device is less than a maximum number of pixels stored at the digital imaging device.
PCT/US2013/067444 2010-01-28 2013-10-30 Method and system for accelerating video preview digital camera WO2014074361A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US14/382,181 US10402940B2 (en) 2010-01-28 2013-10-30 Method and system for accelerating video preview digital camera

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201261722966P 2012-11-06 2012-11-06
US61/722,966 2012-11-06

Publications (1)

Publication Number Publication Date
WO2014074361A1 true WO2014074361A1 (en) 2014-05-15

Family

ID=49063942

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2013/067444 WO2014074361A1 (en) 2010-01-28 2013-10-30 Method and system for accelerating video preview digital camera

Country Status (2)

Country Link
CN (1) CN103281510A (en)
WO (1) WO2014074361A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116095355A (en) * 2023-01-18 2023-05-09 百果园技术(新加坡)有限公司 Video display control method and device, equipment, medium and product thereof

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040205546A1 (en) * 1999-03-11 2004-10-14 Robert Blumberg Method and system for viewing scalable documents
JP2005191949A (en) * 2003-12-25 2005-07-14 Fujitsu Ltd Video image distribution apparatus and video image browsing apparatus
US20060277393A1 (en) * 2005-06-01 2006-12-07 Avermedia Technologies, Inc. Multi-image-source document camera
US20060290792A1 (en) * 2005-06-27 2006-12-28 Nokia Corporation Digital camera devices and methods for implementing digital zoom in digital camera devices and corresponding program products
US20070174489A1 (en) * 2005-10-28 2007-07-26 Yoshitsugu Iwabuchi Image distribution system and client terminal and control method thereof

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7433546B2 (en) * 2004-10-25 2008-10-07 Apple Inc. Image scaling arrangement
KR101527037B1 (en) * 2009-06-23 2015-06-16 엘지전자 주식회사 Mobile terminal and method for controlling the same

Also Published As

Publication number Publication date
CN103281510A (en) 2013-09-04


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 13852769; Country of ref document: EP; Kind code of ref document: A1)
WWE Wipo information: entry into national phase (Ref document number: 14382181; Country of ref document: US)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 13852769; Country of ref document: EP; Kind code of ref document: A1)