WO2002080526A1 - System and method for a software steerable web camera - Google Patents


Info

Publication number
WO2002080526A1
Authority
WO
WIPO (PCT)
Prior art keywords
scene
subset
data
image
digitized
Application number
PCT/US2002/006680
Other languages
French (fr)
Inventor
Robert Novak
Original Assignee
Digeo, Inc.
Application filed by Digeo, Inc. filed Critical Digeo, Inc.
Publication of WO2002080526A1 publication Critical patent/WO2002080526A1/en


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00 General purpose image data processing
    • G06T1/0007 Image acquisition
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/58 Means for changing the camera field of view without moving the camera body, e.g. nutating or panning of optics or image sensors
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/698 Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture

Definitions

  • This disclosure relates generally to digital imaging, digital video or web cameras, and more particularly but not exclusively, to systems and methods for capturing camera images by use of software control.
  • Conventional digital imaging, digital video or web cameras ("webcams") can be used for teleconferencing, surveillance, and other purposes.
  • One of the problems with conventional webcams is that they have a very restricted field of vision. This restricted vision field is due to limitations in the mechanism used to control the webcam and in the optics and other components in the webcam.
  • The user might manually control the webcam to pan and/or tilt in various directions (e.g., side-to-side or up-and-down) and/or to zoom in or away from an image to be captured.
  • However, this manual technique is inconvenient, as it requires the user to stop whatever he/she is doing, readjust the webcam, and then resume his/her previous activity.
  • Figure 1 is a block diagram showing a webcam coupled to a set top box according to an embodiment of the invention.
  • Figure 2 is a block diagram of an embodiment of the webcam of Figure 1.
  • Figure 4 is a block diagram of one example of a memory device of the set top box.
  • Figure 5A is an illustrative example block diagram showing a function of the webcam of Figure 1 in response to particular pan and/or tilt commands.
  • Figure 5B is an illustrative example block diagram of selected subsets in a digitized scene image data in response to particular pan and/or tilt commands.
  • Figure 6A is an illustrative example block diagram of a selected subset image data with distortions.
  • Figure 6B is an illustrative example block diagram of a selected subset image data that has been distortion compensated.
  • Figure 7 is a flowchart diagram of a method according to an embodiment of the invention.
  • Figure 8A is an illustrative example block diagram showing a function of the webcam of Figure 1 in response to particular pan and zoom commands.
  • Figure 8B is an illustrative example block diagram of a selected subset in the digitized scene image data in response to a particular pan command
  • Figure 8C is an illustrative example block diagram of the selected subset in Figure 8B in response to a particular zoom command.
  • Figure 9 is an illustrative example block diagram of the selected subset in Figure 8B in response to another particular zoom command.
  • Figure 10 is a flowchart diagram of a method according to another embodiment of the invention.
  • Figure 11 is another diagram shown to further assist in describing an operation of an embodiment of the invention.
  • an embodiment of the invention provides a system and method that capture camera images by use of software control.
  • The camera may be a web camera or another type of camera that can support a wide angle lens.
  • the wide angle lens is used to capture a scene or image in the wide field of vision.
  • the captured scene or image data is then stored in an image collection array and then digitized and stored in memory.
  • the image collection array is a relatively larger sized array to permit the array to store image data from the wide vision field.
  • Processing is performed for user commands to effectively pan the webcam in particular directions and/or to zoom the webcam toward or away from an object to be captured as an image.
  • a particular subset of the digitized data is selected and processed so that selected subset data provides a simulated panning and/or zooming of the image of the captured object.
  • a compression/correction engine can then compensate the selected subset data for distortion and compress the selected subset data for transmission.
  • the invention advantageously permits a camera, such as a webcam, to have a wide vision field.
  • the invention may also advantageously provide a wide vision field for cameras that have short depth fields.
  • the invention also advantageously avoids the use of stepper motors to obtain particular images based on pan and zoom commands from the user.
  • numerous specific details are provided, such as the description of system components in Figures 1 through 10, to provide a thorough understanding of embodiments of the invention.
  • One skilled in the relevant art will recognize, however, that the invention can be practiced without one or more of the specific details, or with other methods, components, materials, parts, and the like.
  • well-known structures, materials, or operations are not shown or described in detail to avoid obscuring aspects of the invention.
  • FIG. 1 is a block diagram showing a webcam 100 communicatively coupled to a set top box (“STB") 140 according to an embodiment of the invention.
  • the webcam 100 can capture an image of an object 130 that is in the webcam field of vision.
  • Webcam 100 is communicatively coupled to STB 140 via, for example, a cable 110.
  • Webcam 100 may also be communicatively coupled to STB 140 by use of other suitable connections or methods, such as IR beams, radio signals, suitable wireless transmission techniques, and the like.
  • STB 140 is communicatively coupled to a cable network 160 and receives TV broadcasts, as well as other data, from the cable network 160.
  • STB 140 is also communicatively coupled to the Internet 150 or other networks for sending and receiving data.
  • Data received from the Internet 150 or cable network 160 may be displayed on a display 120.
  • STB 140 may also transmit images that are captured by the webcam 100 to other computers via the Internet 150.
  • STB may also transmit the captured webcam images to a printer 165 and/or to other devices 170 such as a computer in a local area network.
  • embodiments of the invention may also be implemented in other types of suitable cameras that can support a wide angle lens.
  • an embodiment of the invention may be implemented in, for example, security cameras, ATM cash machine cameras, spy cameras, portable cameras, or pin-hole type cameras.
  • the invention is not limited to the use of STB 140.
  • Other processing devices may be used according to embodiments of the invention to perform image distortion compensation, image compression, and/or other functions that will be described below.
  • FIG. 2 is a block diagram of an embodiment of the webcam 100 of Figure 1.
  • Webcam 100 comprises a lens 210; a shutter 220; a filter 230; an image collection array 240; a sample stage 245; and an analog to digital converter ("ADC") 250.
  • The lens 210 may be a wide angle lens, such as a fish-eye lens, that has an angular field of, for example, at least about 140 degrees, as indicated by lines 200. Using a wide-angle lens allows webcam 100 to capture a larger image area than a conventional webcam.
  • Shutter 220 opens and closes at a pre-specified rate, allowing light into the interior of webcam 100 and onto a filter 230.
  • Filter 230 allows for image collection array 240 to capture different colors of an image and may include a static filter, such as a Bayer filter, or may include a spinning disk filter. In another embodiment, the filter may be replaced with a beam splitter or other color differentiation device. In another embodiment, webcam 100 does not include a filter or other color differentiation device.
  • the image collection array 240 can include charge coupled device (“CCD”) sensors or complementary metal oxide semiconductor (“CMOS”) sensors, which are generally much less expensive than CCD sensors but may be more susceptible to noise. Other types of sensors may be used in the image collection array 240.
  • The image collection array 240 is relatively large, such as, for example, 1024 by 768, 1200 by 768, or 2000 by 1000 sensors. The large array permits the array 240 to capture images in the wide vision field 200 that is viewed by the webcam 100.
  • A sample stage 245 reads the image data from the image collection array 240 when shutter 220 is closed, and an analog-to-digital converter (ADC) 250 converts the image data from analog to digital form and feeds the digitized image data to STB 140 via cable 110 for processing and/or transmission.
  • the image data may be processed entirely by components of the webcam 100 and transmitted from webcam 100 to other devices such as the printer 165 or computer 170.
  • FIG. 3 is a block diagram of an embodiment of the set top box (STB) 140.
  • STB 140 includes a network interface 300; a processor 310; a memory device 320; a frame buffer 330; a converter 340; a modem 350; a webcam interface 360, and an input device 365, all interconnected for communication by system bus 370.
  • Network interface 300 connects the STB 140 to the cable network 160 (Figure 1) to receive videocasts from the cable network 160.
  • the modem 350 or converter 340 may provide some or all of the functionality of the network interface 300.
  • Processor 310 executes instructions stored in memory 320, which will be discussed in further detail in conjunction with Figure 4.
  • Frame buffer 330 holds preprocessed data received from webcam 100 via webcam interface 360.
  • the frame buffer 330 is omitted since the data from webcam 100 may be loaded into memory 320 instead of loading the data into the frame buffer 330.
  • Modem 350 may be a conventional modem for communicating with the Internet 150 via a public switched telephone network. The modem 350 can transmit and receive digital information, such as television scheduling information, the webcam 100 output images, or other information, to the Internet 150.
  • Modem 350 may also be a cable modem or a wireless modem for sending and receiving data from the Internet 150 or other network.
  • Webcam interface 360 is communicatively coupled to webcam 100 and receives image output from the webcam 100.
  • Webcam interface 360 may include, for example, a universal serial bus (USB) port, a parallel port, an infrared (IR) receiver, or other suitable device for receiving data.
  • Input device 365 may include, for example, a keyboard, mouse, joystick, or other device or combination of devices that a user (local or remote) uses to control the pan, tilt, and/or zoom of webcam 100 by use of software control according to embodiments of the invention.
  • Input device 365 may include a wireless device, such as an infrared (IR) remote control device that is separate from the STB 140.
  • The STB 140 also may include an IR receiver communicatively coupled to the system bus 370 to receive IR signals from the remote control input device.
  • Figure 4 is a block diagram of an example of a memory device 320 of the set top box 140.
  • Memory device 320 may be, for example, a hard drive, a disk drive, random access memory ("RAM"), read only memory ("ROM"), flash memory, or any other suitable memory device, or any combination thereof.
  • Memory device 320 stores, for example, a compression/correction engine 400 that performs compression and distortion compensation on the image data received from webcam 100.
  • Memory device 320 also stores, for example, a webcam engine 410 that accepts and processes user commands relating to the pan, tilt, and/or zoom functions of the webcam 100, as described below. It is also noted that the compression/correction engine 400 and/or the webcam engine 410 may be stored in other storage areas that are accessible by the processor 310. It is noted that either one of the compression/correction engine 400 or webcam engine 410 may be implemented, for example, as a program, module, instruction, or the like.
  • Compression/correction engine 400 uses, for example, any known suitable skew correction algorithm that compresses a subset of the image output from webcam 100 and that compensates the subset image output for distortion.
  • the distortion compensation of the subset image output may be performed before the compression of the subset image output.
  • The distortion is automatically corrected in the subset image output when performing the compression of the subset image output, and this leads to a saving in processor resources.
  • Webcam engine 410 accepts input from a user including instructions to pan the webcam 100 in particular directions and/or to zoom the webcam 100 toward or away from an obj ect to be captured as an image.
  • Figures 5A and 5B illustrate exampl es of operations of embodiments of the invention.
  • Figure 5A is a block diagram illustrating a top view of webcam 100.
  • the vision field 200 of the wide angle lens 210 of webcam 100 captures a wide scene area including the three objects 480, 482, and 484.
  • a conventional webcam may only be able to capture the scene area in the limited vision field 481.
  • a conventional webcam may need manual adjustment or movement by stepper motors to capture the objects 480 or 484 that are outside the limited vision field 481.
  • The entire scene captured in the vision field 200 is stored as an image in the image collection array 240 (Figure 2) and processed by stages 245 and 250, and the image data of the entire scene is stored as digitized scene image data 485 in frame buffer 330 (or memory 320).
  • Each position in the scene area that is covered by vision field 200 corresponds to a position in the image collection array 240 (Figure 2).
  • the values in the positions in the image collection array 240 are then digitized as values of the digitized scene image data 485.
  • The webcam engine 410 (Figure 4) allows a user to select a subset area in the vision field 200 for display or transmission, so as to simulate a panning/tilting feature of conventional webcams that use stepper motors.
  • the digitized image data 485 was captured in response to a user directly or remotely sending a command 486 via input device 365 to pan the webcam 100 to the left in order to permit the capture of an image of the object 480.
  • the webcam engine 410 receives the pan left command 486 and accordingly samples an area 487 that contains an image of the object 480 in the digitized scene image data 485.
  • the webcam engine 410 selects an area (subset) 489 that contains an image of the object 484 in the digitized scene image data 485.
  • the webcam engine 410 selects a subset 496 that contains an image of the bottom portion 498 of object 484 in the digitized scene image data 485.
  • Webcam engine 410 then passes a selected area (e.g., selected area 487, 489, or 496) to the compression/correction engine 400 (Figure 4).
  • The compression/correction engine 400 then performs the compression operation and distortion compensation.
  • The selected area 487 shows distortions 490 in the image of object 480 as a result of using the wide angle lens 210.
  • the compression/correction engine 400 can perform distortion compensation to reverse the distortion caused by the wide angle lens 210 on the captured image of object 480. Typically, this compensation is performed by changing the curved surface of an image into a straight surface.
  • Figure 6B shows an image of the object 480 without distortions after applying distortion compensation on the selected area 487.
  • the image of the object 480 is shown as a normal rectilinear image.
  • the selected area 487 can then be compressed by the compression/correction engine 400.
  • the compression and distortion compensation for selected area 487 can be performed concurrently.
  • the distortion compensation for selected area 487 can be performed before compression of the selected area 487.
  • The webcam engine 410 then passes the compressed, distortion-compensated selected image data 487 to an output device, such as display 120 (Figure 1) for viewing, or to the printer 165 or other devices such as computer 170.
  • webcam engine 410 may transmit the data 487 to another device coupled to the Internet 150.
  • FIG. 7 is a flowchart diagram of a method 600 to perform a panning, tilting or zooming function according to an embodiment of the invention.
  • A user first sends (605) a pan/tilt command indicating a direction of an object to be captured in an image by a webcam.
  • A scene in the field of vision of a lens of the webcam is then captured (610).
  • The captured scene is in the vision field 200 (Figure 2) of a wide angle lens 210 of the webcam 100.
  • the captured scene in the vision field is then stored (615) as scene image data in an image collection array.
  • the image collection array may, for example, include charge coupled devices or complementary metal oxide semiconductor sensors.
  • the scene image data in the image collection array is then processed and stored (620) as a digitized scene image data.
  • The digitized scene data may be stored in, for example, the frame buffer 330 in the set top box 140 or other processing device. Based on the pan/tilt/zoom command(s), a subset of the digitized scene image data is selected (625). In one embodiment, the webcam engine 410 processes the pan/tilt/zoom command(s) and selects the subset of the digitized scene image data based on the pan/tilt/zoom command(s).
  • Distortion compensation and compression are then performed (630) on the subset of the digitized scene image data.
  • The compression/correction engine 400 performs (630) the distortion compensation and compression of the subset of the digitized scene image data.
  • The distortion-compensated and compressed subset is then transmitted (635) to a selected destination such as display 120, to another device via Internet 150 or cable network 160, to printer 165, and/or to computer 170.
  • Figures 8A and 8B illustrate an example of another operation of embodiments of the invention.
  • the user sends a command 700 in order to capture an image of the object 710 and another command 705 to zoom the image of the object 710.
  • A conventional webcam will require a physical pan movement to the left to capture the image of the object 710 and to capture a zoomed image of the object 710.
  • the digitized scene image data 485 of the scene in the vision field 200 was captured in the manner described above.
  • the webcam engine 410 receives the pan left command 700 and accordingly selects an area 715 that contains an image of the object 710 in the digitized scene image data 485.
  • The compression/correction engine 400 can perform distortion compensation to reverse the distortion caused by the wide angle lens 210 on the captured image of object 710. Typically, this compensation is performed by changing the curved surface of an image into a straight surface. Also, as shown in Figure 8C, in response to the zoom command 705, the webcam engine 410 can enlarge an image of the selected area 715 in, for example, the frame buffer 330. The compression/correction engine 400 can then compress the image of selected area 715 and transmit the compressed image to a destination such as the display 120 or other suitable devices.
  • Figures 8A and 9 are used to describe another function according to an embodiment of the invention.
  • FIG. 10 is a flowchart diagram of a method 800 to perform a zooming function according to an embodiment of the invention.
  • a user first sends (805) a zoom command indicating whether to zoom in or away from an object to be captured in an image by a webcam.
  • a scene in the field of vision of the lens of the webcam is then captured (810).
  • the captured scene in the vision field is then stored (815) as scene image data in an image collection array.
  • the scene image data in the image collection array is then processed and stored (820) as a digitized scene image data.
  • a subset of the digitized scene image data is selected (825). Processing of the subset of the digitized scene image data is then performed (827) based on the zoom command. For example, if the zoom command is for zooming the image of the captured object, then the subset of the digitized scene image data is enlarged. As another example, if the zoom command is for zooming away from the captured object, then the selected subset will cover a greater area in the digitized scene image data.
  • Distortion compensation and compression are then performed (830) on the subset of the digitized scene image data.
  • The distortion-compensated and compressed subset is then transmitted (835) to a selected destination such as display 120, to another device via Internet 150 or cable network 160, to printer 165, and/or to computer 170.
  • Figure 11 is another diagram shown to further assist in describing an operation of an embodiment of the invention.
  • a scene 900 falls within the vision field 905 of a wide angle lens 910 of a camera 915.
  • the captured scene is digitized and processed into a digitized scene data 920.
  • a subset 925 of the digitized scene data 920 is selected based on a pan, tilt, and/or zoom command(s) that can be transmitted from an input device by the user.
  • the selected subset 925 is then skew corrected (e.g., distortion compensated) into scene data 930 that can be transmitted to a destination.
  • The scene data 930 is also typically compressed in order to optimize the data transmission.
  • Webcam 100 may comprise a processor and perform the selection of the subset of the digitized scene image data and the distortion compensation and compression of the subset, instead of STB 140.
  • the webcam 100 can send the digitized scene image output to a processing device, such as a personal computer instead of the STB 140, and the processing device can select the subset of the digitized scene image data and perform the distortion compensation and compression of the subset.
  • the webcam 100 can instead send the digitized scene image output to an optional companion box device 175 ( Figure 1) instead of sending the digitized scene image output to the set top box 140.
  • the companion box 175 may include, for example, the functionality of an Interactive Companion Box, as described in U.S. Patent Application No. / , filed on March 22, 2001, entitled "Interactive
  • Functions of the Interactive Companion Box may include Internet access, Video-on-Demand, an electronic programming guide, videoconferencing, and/or other functions.
  • Sample stage 245 in Figure 2 may instead perform the selection of the image subset to be compressed and compensated for distortion, instead of the webcam engine 410.
  • the components of this invention may be implemented by using a programmed general purpose digital computer, by using application specific integrated circuits or field programmable gate arrays, or by using a network of interconnected components and circuits. Connections may be wired, wireless, by modem, and the like.
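The skew-correction step described above, which "changes the curved surface of an image into a straight surface", can be sketched as an inverse radial mapping. The sketch below is an illustration only, not the algorithm the patent names: the one-term barrel-distortion model r_d = r_u(1 + k·r_u²), the coefficient k, and the function names `undistort_point` and `undistort` are all assumptions.

```python
import math

def undistort_point(x, y, cx, cy, k):
    """Map a pixel (x, y) of the corrected (rectilinear) output back to its
    source coordinate in the distorted wide-angle image, using a one-term
    radial model: r_distorted = r_undistorted * (1 + k * r_undistorted**2)."""
    dx, dy = x - cx, y - cy
    r_u = math.hypot(dx, dy)         # radius from the optical center
    scale = 1.0 + k * r_u * r_u      # radial stretch applied by the lens
    return cx + dx * scale, cy + dy * scale

def undistort(image, k):
    """Nearest-neighbour distortion compensation for a 2D list of pixel
    values; pixels that map outside the source image are left at 0."""
    h, w = len(image), len(image[0])
    cx, cy = (w - 1) / 2.0, (h - 1) / 2.0
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            sx, sy = undistort_point(x, y, cx, cy, k)
            ix, iy = int(round(sx)), int(round(sy))
            if 0 <= ix < w and 0 <= iy < h:
                out[y][x] = image[iy][ix]
    return out
```

A real implementation would interpolate between source pixels and calibrate k for the particular lens; with k = 0 the mapping is the identity, and larger k pulls edge pixels outward, straightening the lines curved by the fish-eye lens.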

Abstract

An apparatus for controlling the capture of an image of an object in a camera field of vision includes: a camera (100) including a wide angle lens capable of capturing a scene within a field of vision of the wide angle lens; an image collection array (240) communicatively coupled to the wide angle lens and capable of storing data of the scene within the field of vision; a memory (320) communicatively coupled to the image collection array and capable of storing digitized data of the scene within the field of vision; and a webcam engine (410) communicatively coupled to the memory and capable of selecting, based upon a user command, a subset of the digitized data of the scene to simulate an image captured by a panning, tilting or zooming function of the camera (100).

Description

SYSTEM AND METHOD FOR A SOFTWARE STEERABLE WEB CAMERA
Inventor: Robert Novak
Technical Field
This disclosure relates generally to digital imaging, digital video or web cameras, and more particularly but not exclusively, to systems and methods for capturing camera images by use of software control.
Background
Conventional digital imaging, digital video or web cameras ("webcams") can be used for teleconferencing, surveillance, and other purposes. One of the problems with conventional webcams is that they have a very restricted field of vision. This restricted vision field is due to limitations in the mechanism used to control the webcam and in the optics and other components in the webcam.
In order to increase the vision field of a webcam, the user might manually control the webcam to pan and/or tilt in various directions (e.g., side-to-side or up-and-down) and/or to zoom in or away from an image to be captured. However, this manual technique is inconvenient, as it requires the user to stop whatever he/she is doing, readjust the webcam, and then resume his/her previous activity.
Various other schemes have been proposed to increase the webcam vision field, such as adding complex lens assemblies and stepper motors to the webcams to permit the camera to perform the pan and zoom functions. However, complex lens assemblies are expensive and will make webcams unaffordable for many consumers. Additionally, stepper motors use moving or mechanical parts that may fail after a certain amount of time, thus requiring expensive repairs or the purchase of a new webcam. Stepper motors may also disadvantageously suffer from hysteresis, in which repeated pan, tilt or zoom operations lead to slightly inconsistent settings during each operation.
Furthermore, repairs for webcams on set top boxes (STBs) are particularly expensive because of the service call required to repair the STB webcam. Accordingly, there is a need for a new system and method to allow webcams to increase their vision field. There is also a need for a new system and method to permit webcams to perform particular operations, such as panning, tilting, and/or zooming, without using stepper motors or requiring the user to physically adjust the webcam.
BRIEF DESCRIPTION OF THE DRAWINGS
Non-limiting and non-exhaustive embodiments of the present invention are described with reference to the following figures, wherein like reference numerals refer to like parts throughout the various views unless otherwise specified.
Figure 1 is a block diagram showing a webcam coupled to a set top box according to an embodiment of the invention. Figure 2 is a block diagram of an embodiment of the webcam of Figure 1.
Figure 3 is a block diagram of an embodiment of the set top box of Figure 1.
Figure 4 is a block diagram of one example of a memory device of the set top box.
Figure 5A is an illustrative example block diagram showing a function of the webcam of Figure 1 in response to particular pan and/or tilt commands. Figure 5B is an illustrative example block diagram of selected subsets in a digitized scene image data in response to particular pan and/or tilt commands.
Figure 6A is an illustrative example block diagram of a selected subset image data with distortions.
Figure 6B is an illustrative example block diagram of a selected subset image data that has been distortion compensated.
Figure 7 is a flowchart diagram of a method according to an embodiment of the invention.
Figure 8A is an illustrative example block diagram showing a function of the webcam of Figure 1 in response to particular pan and zoom commands.
Figure 8B is an illustrative example block diagram of a selected subset in the digitized scene image data in response to a particular pan command;
Figure 8C is an illustrative example block diagram of the selected subset in Figure 8B in response to a particular zoom command. Figure 9 is an illustrative example block diagram of the selected subset in Figure 8B in response to another particular zoom command.
Figure 10 is a flowchart diagram of a method according to another embodiment of the invention. Figure 11 is another diagram shown to further assist in describing an operation of an embodiment of the invention.
DETAILED DESCRIPTION OF THE ILLUSTRATED EMBODIMENTS
Embodiments of a system and method for a software steerable camera are disclosed herein. As an overview, an embodiment of the invention provides a system and method that capture camera images by use of software control. As an example, the camera may be a web camera or another type of camera that can support a wide angle lens. The wide angle lens is used to capture a scene or image in the wide field of vision. The captured scene or image data is stored in an image collection array and then digitized and stored in memory. In one embodiment, the image collection array is a relatively large array, to permit the array to store image data from the wide vision field. Processing is performed for user commands to effectively pan the webcam in particular directions and/or to zoom the webcam toward or away from an object to be captured as an image. However, instead of physically moving the webcam in response to the user commands, a particular subset of the digitized data is selected and processed so that the selected subset data provides a simulated panning and/or zooming of the image of the captured object. A compression/correction engine can then compensate the selected subset data for distortion and compress the selected subset data for transmission.
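The overview above, selecting and processing a subset of the digitized data rather than physically moving the camera, can be illustrated with a short sketch. The function name, parameters, and clamping policy below are hypothetical, chosen only to show the idea; the array size echoes the 1024-by-768 example given for the image collection array.

```python
def select_subset(frame_w, frame_h, out_w, out_h, pan=0, tilt=0, zoom=1.0):
    """Return the crop window (left, top, width, height) that simulates a
    pan/tilt/zoom without moving the camera.

    pan/tilt are pixel offsets from the frame center; zoom > 1.0 shrinks
    the window (zoom in), zoom < 1.0 enlarges it (zoom out). The window is
    clamped so it never leaves the digitized scene data.
    """
    # Window size: nominal output size scaled by the zoom factor.
    win_w = min(frame_w, max(1, round(out_w / zoom)))
    win_h = min(frame_h, max(1, round(out_h / zoom)))
    # Window center: frame center shifted by the pan/tilt command.
    cx = frame_w // 2 + pan
    cy = frame_h // 2 + tilt
    # Clamp the window inside the frame.
    left = max(0, min(frame_w - win_w, cx - win_w // 2))
    top = max(0, min(frame_h - win_h, cy - win_h // 2))
    return left, top, win_w, win_h

# "Pan left" on a 1024x768 array: the window slides toward x = 0 while the
# sensor and lens stay fixed, with no stepper motor involved.
window = select_subset(1024, 768, 320, 240, pan=-300)
# "Zoom in" halves the window; enlarging that smaller subset for display
# simulates the zoom, as described for the frame buffer.
zoomed = select_subset(1024, 768, 320, 240, zoom=2.0)
```

In the scheme described here, the selected window would then be handed to the compression/correction engine for distortion compensation and compression before transmission.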
The invention advantageously permits a camera, such as a webcam, to have a wide vision field. The invention may also advantageously provide a wide vision field for cameras that have short depth fields. The invention also advantageously avoids the use of stepper motors to obtain particular images based on pan and zoom commands from the user. In the description herein, numerous specific details are provided, such as the description of system components in Figures 1 through 10, to provide a thorough understanding of embodiments of the invention. One skilled in the relevant art will recognize, however, that the invention can be practiced without one or more of the specific details, or with other methods, components, materials, parts, and the like. In other instances, well-known structures, materials, or operations are not shown or described in detail to avoid obscuring aspects of the invention.
Reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
Figure 1 is a block diagram showing a webcam 100 communicatively coupled to a set top box ("STB") 140 according to an embodiment of the invention. The webcam 100 can capture an image of an object 130 that is in the webcam field of vision. Webcam 100 is communicatively coupled to STB 140 via, for example, a cable 110. Webcam 100 may also be communicatively coupled to STB 140 by use of other suitable connections or methods, such as IR beams, radio signals, suitable wireless transmission techniques, and the like. Typically, STB 140 is communicatively coupled to a cable network 160 and receives TV broadcasts, as well as other data, from the cable network 160. Typically, STB 140 is also communicatively coupled to the Internet 150 or other networks for sending and receiving data. Data received from the Internet 150 or cable network 160 may be displayed on a display 120. STB 140 may also transmit images that are captured by the webcam 100 to other computers via the Internet 150. STB 140 may also transmit the captured webcam images to a printer 165 and/or to other devices 170 such as a computer in a local area network. It is noted that embodiments of the invention may also be implemented in other types of suitable cameras that can support a wide angle lens. For example, an embodiment of the invention may be implemented in security cameras, ATM cash machine cameras, spy cameras, portable cameras, or pin-hole type cameras. It is further noted that the invention is not limited to the use of STB 140. Other processing devices may be used according to embodiments of the invention to perform image distortion compensation, image compression, and/or other functions that will be described below.
Figure 2 is a block diagram of an embodiment of the webcam 100 of Figure 1. Webcam 100 comprises a lens 210; a shutter 220; a filter 230; an image collection array 240; a sample stage 245; and an analog-to-digital converter ("ADC") 250. The lens 210 may be a wide angle lens, such as a fish-eye lens, that has an angular field of, for example, at least about 140 degrees, as indicated by lines 200. Using a wide-angle lens allows webcam 100 to capture a larger image area than a conventional webcam. Shutter 220 opens and closes at a pre-specified rate, allowing light into the interior of webcam 100 and onto a filter 230. Filter 230 allows the image collection array 240 to capture different colors of an image and may include a static filter, such as a Bayer filter, or may include a spinning disk filter. In another embodiment, the filter may be replaced with a beam splitter or other color differentiation device. In another embodiment, webcam 100 does not include a filter or other color differentiation device. In one embodiment, the image collection array 240 can include charge coupled device ("CCD") sensors or complementary metal oxide semiconductor ("CMOS") sensors, which are generally much less expensive than CCD sensors but may be more susceptible to noise. Other types of sensors may be used in the image collection array 240. The image collection array 240 is relatively large in size, such as, for example, 1024 by 768, 1200 by 768, or 2000 by 1000 sensors. The large array permits the array 240 to capture images in the wide vision field 200 that is viewed by the webcam 100.
A sample stage 245 reads the image data from the image collection array 240 when shutter 220 is closed, and an analog-to-digital converter (ADC) 250 converts the image data from analog to digital form and feeds the digitized image data to STB 140 via cable 110 for processing and/or transmission. In an alternative embodiment, the image data may be processed entirely by components of the webcam 100 and transmitted from webcam 100 to other devices such as the printer 165 or computer 170.
For purposes of explaining the functionality of embodiments of the invention, other conventional components that are included in the webcam 100 have been omitted in the figures and are not discussed herein. Figure 3 is a block diagram of an embodiment of the set top box (STB) 140. STB 140 includes a network interface 300; a processor 310; a memory device 320; a frame buffer 330; a converter 340; a modem 350; a webcam interface 360; and an input device 365, all interconnected for communication by system bus 370. Network interface 300 connects the STB 140 to the cable network 160 (Figure 1) to receive videocasts from the cable network 160. In alternative embodiments, the modem 350 or converter 340 may provide some or all of the functionality of the network interface 300. Processor 310 executes instructions stored in memory 320, which will be discussed in further detail in conjunction with
Figure 4. Frame buffer 330 holds preprocessed data received from webcam 100 via webcam interface 360. In another embodiment, the frame buffer 330 is omitted since the data from webcam 100 may be loaded into memory 320 instead of loading the data into the frame buffer 330.
Converter 340 can convert, if necessary, digitally encoded broadcasts to a format usable by display 120 (Figure 1). Modem 350 may be a conventional modem for communicating with the Internet 150 via a public switched telephone network. The modem 350 can transmit and receive digital information, such as television scheduling information, the webcam 100 output images, or other information to Internet 150. Alternatively, modem 350 may be a cable modem or a wireless modem for sending and receiving data from the Internet 150 or other network.
Webcam interface 360 is communicatively coupled to webcam 100 and receives image output from the webcam 100. Webcam interface 360 may include, for example, a universal serial bus (USB) port, a parallel port, an infrared (IR) receiver, or other suitable device for receiving data. Input device 365 may include, for example, a keyboard, mouse, joystick, or other device or combination of devices that a user (local or remote) uses to control the pan, tilt, and/or zoom of webcam 100 by use of software control according to embodiments of the invention. Alternatively, input device 365 may include a wireless device, such as an infrared (IR) remote control device that is separate from the STB 140. In this particular alternative embodiment, the STB 140 also may include an IR receiver communicatively coupled to the system bus 370 to receive IR signals from the remote control input device.
The components shown in Figure 3 may be configured in other ways, and in addition, the components may also be integrated. Thus, the configuration of the STB 140 in Figure 3 is not intended to be limiting. Figure 4 is a block diagram of an example of a memory device 320 of the set top box 140. Memory device 320 may be, for example, a hard drive, a disk drive, random access memory ("RAM"), read only memory ("ROM"), flash memory, or any other suitable memory device, or any combination thereof. Memory device 320 stores, for example, a compression/correction engine 400 that performs compression and distortion compensation on the image data received from webcam 100. Memory device 320 also stores, for example, a webcam engine 410 that accepts and processes user commands relating to the pan, tilt, and/or zoom functions of the webcam 100, as described below. It is also noted that the compression/correction engine 400 and/or the webcam engine 410 may be stored in other storage areas that are accessible by the processor 310. It is noted that either one of the compression/correction engine 400 or webcam engine 410 may be implemented, for example, as a program, module, instruction, or the like.
Compression/correction engine 400 uses, for example, any known suitable skew correction algorithm that compresses a subset of the image output from webcam 100 and that compensates the subset image output for distortion. The distortion compensation of the subset image output may be performed before the compression of the subset image output. In another embodiment, the distortion is automatically corrected in the subset image output when performing the compression of the subset image output, and this leads to savings in processor resources.
Webcam engine 410 accepts input from a user including instructions to pan the webcam 100 in particular directions and/or to zoom the webcam 100 toward or away from an object to be captured as an image.
Figures 5A and 5B illustrate examples of operations of embodiments of the invention. For example, Figure 5A is a block diagram illustrating a top view of webcam 100. The vision field 200 of the wide angle lens 210 of webcam 100 captures a wide scene area including the three objects 480, 482, and 484. In contrast, a conventional webcam may only be able to capture the scene area in the limited vision field 481. As a result, a conventional webcam may need manual adjustment or movement by stepper motors to capture the objects 480 or 484 that are outside the limited vision field 481. For the webcam 100, the entire scene captured in the vision field 200 is stored as an image in the image collection array 240 (Figure 2) and processed by stages 245 and 250, and the image data of the entire scene is stored as digitized scene image data 485 in frame buffer 330 (or memory 320). Thus, each position in the scene area that is covered by vision field 200 corresponds to a position in the image collection array 240 (Figure 2). The values in the positions in the image collection array 240 are then digitized as values of the digitized scene image data 485. The webcam engine 410 (Figure 4) allows a user to select a subset area in the vision field 200 for display or transmission, so as to simulate a panning/tilting feature of conventional webcams that use stepper motors. For example, assume that the digitized image data 485 was captured in response to a user directly or remotely sending a command 486 via input device 365 to pan the webcam 100 to the left in order to permit the capture of an image of the object 480. The webcam engine 410 receives the pan left command 486 and accordingly samples an area 487 that contains an image of the object 480 in the digitized scene image data 485.
As another example, if the user were to send a pan right command 488 to webcam 100, then the webcam engine 410 selects an area (subset) 489 that contains an image of the object 484 in the digitized scene image data 485.
As another example, if the user were to send a tilt down command 495 to webcam 100, then the webcam engine 410 selects a subset 496 that contains an image of the bottom portion 498 of object 484 in the digitized scene image data 485.
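The pan and tilt examples above all reduce to moving a fixed-size window over the full digitized scene image data 485 instead of moving the camera. The sketch below illustrates that idea in Python; the array dimensions, window size, and per-command step are illustrative assumptions, not values taken from this disclosure.

```python
import numpy as np

def select_subset(scene, center, pan=0, tilt=0, window=(240, 320), step=40):
    """Simulate a pan/tilt command by moving a window over the scene.

    scene  : 2-D array holding the entire wide-angle capture
    center : (row, col) of the current virtual camera center
    pan    : -1 (pan left), 0, or +1 (pan right)
    tilt   : -1 (tilt up), 0, or +1 (tilt down)
    """
    rows, cols = scene.shape[:2]
    h, w = window
    # Move the virtual center by one step per command, as the webcam
    # engine would on receiving a pan/tilt command from the input device.
    r = center[0] + tilt * step
    c = center[1] + pan * step
    # Clamp so the selected window never leaves the captured scene.
    r = min(max(r, h // 2), rows - h // 2)
    c = min(max(c, w // 2), cols - w // 2)
    return (r, c), scene[r - h // 2:r + h // 2, c - w // 2:c + w // 2]

# A 768 x 1024 digitized scene; a "pan left" command shifts the window left.
scene = np.arange(768 * 1024).reshape(768, 1024)
center, subset = select_subset(scene, (384, 512), pan=-1)
```

Note that the subset is a view into the stored scene data, so no sensor movement (and no copying) is needed; repeated commands simply shift the virtual center until the window reaches the edge of the captured field, where the clamp stops it.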
Webcam engine 410 then passes a selected area (e.g., selected area 487, 489, 496) to the compression/correction engine 400 (Figure 4). The compression/correction engine 400 then performs a compression operation and distortion compensation. For example, in Figure 6A, assume that the selected area 487 shows distortions 490 in the image of object 480 as a result of using the wide angle lens 210. For images captured by a wide angle lens, the distortions become more pronounced toward the edges of the images. The compression/correction engine 400 can perform distortion compensation to reverse the distortion caused by the wide angle lens 210 on the captured image of object 480. Typically, this compensation is performed by changing the curved surface of an image into a straight surface.
Figure 6B shows an image of the object 480 without distortions after applying distortion compensation on the selected area 487. Thus, the image of the object 480 is shown as a normal rectilinear image. The selected area 487 can then be compressed by the compression/correction engine 400. In another embodiment, the compression and distortion compensation for selected area 487 can be performed concurrently. In yet another embodiment, the distortion compensation for selected area 487 can be performed before compression of the selected area 487.
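The disclosure leaves the choice of skew correction algorithm open ("any known suitable skew correction algorithm"). One common way to approximate the reversal of wide-angle barrel distortion on a selected area is a single-coefficient radial model, sketched below; the coefficient `k` and the nearest-neighbor resampling are illustrative assumptions, not part of the disclosed method.

```python
import numpy as np

def undistort(sub, k=2e-5):
    """Approximately reverse barrel distortion on a selected sub-image.

    Uses the simple radial model r_src = r_out * (1 + k * r_out**2):
    for each output pixel we look up the input pixel it was warped to,
    so straight lines bowed outward by the lens are pulled straight.
    """
    h, w = sub.shape[:2]
    cy, cx = h / 2.0, w / 2.0
    ys, xs = np.mgrid[0:h, 0:w]
    dy, dx = ys - cy, xs - cx
    factor = 1.0 + k * (dx * dx + dy * dy)   # grows toward the edges
    # Nearest-neighbor lookup, clamped to the sub-image bounds.
    src_y = np.clip((cy + dy * factor).astype(int), 0, h - 1)
    src_x = np.clip((cx + dx * factor).astype(int), 0, w - 1)
    return sub[src_y, src_x]

corrected = undistort(np.arange(240 * 320, dtype=float).reshape(240, 320))
```

Because the modeled distortion grows with radius, pixels near the center are nearly unchanged while pixels near the edges are pulled inward, matching the observation above that wide-angle distortion is most pronounced toward the edges of the image.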
The webcam engine 410 then passes the compressed distortion-compensated selected image data 487 to an output device, such as display 120 (Figure 1) for viewing, or to the printer 165 or other devices such as computer 170. In addition to or instead of passing the compressed distortion-compensated selected image data 487 to an output device, webcam engine 410 may transmit the data 487 to another device coupled to the Internet 150.
Figure 7 is a flowchart diagram of a method 600 to perform a panning, tilting or zooming function according to an embodiment of the invention. A user first sends (605) a pan/tilt command indicating a direction of an object to be captured in an image by a webcam. A scene in the field of vision of a lens of the webcam is then captured (610). In one embodiment, the captured scene is in the vision field 200 (Figure 2) of a wide angle lens 210 of the webcam 100. The captured scene in the vision field is then stored (615) as scene image data in an image collection array. The image collection array may, for example, include charge coupled devices or complementary metal oxide semiconductor sensors. The scene image data in the image collection array is then processed and stored (620) as digitized scene image data. The digitized scene data may be stored in, for example, the frame buffer 330 in the set top box 140 or other processing device. Based on the pan/tilt/zoom command(s), a subset of the digitized scene image data is selected (625). In one embodiment, the webcam engine 410 processes the pan/tilt/zoom command(s) and selects the subset of the digitized scene image data based on the pan/tilt/zoom command(s).
Distortion compensation and compression are then performed (630) on the subset of the digitized scene image data. In one embodiment, the compression/correction engine 400 performs (630) the distortion compensation and compression of the subset of the digitized scene image data. The distortion-compensated and compressed subset is then transmitted (635) to a selected destination such as display 120, to another device via Internet 150 or cable network 160, to printer 165, and/or to computer 170.
Figures 8A and 8B illustrate an example of another operation of embodiments of the invention. Assume the user sends a command 700 in order to capture an image of the object 710 and another command 705 to zoom the image of the object 710. A conventional webcam will require a physical pan movement to the left to capture the image of the object 710 and to capture a zoomed image of the object 710. Assume in this example that the digitized scene image data 485 of the scene in the vision field 200 was captured in the manner described above. The webcam engine 410 receives the pan left command 700 and accordingly selects an area 715 that contains an image of the object 710 in the digitized scene image data 485. The compression/correction engine 400 can perform distortion compensation to reverse the distortion caused by the wide angle lens 210 on the captured image of object 710. Typically, this compensation is performed by changing the curved surface of an image into a straight surface. Also, as shown in Figure 8C, in response to the zoom command 705, the webcam engine 410 can enlarge an image of the selected area 715 in, for example, the frame buffer 330. The compression/correction engine 400 can then compress the image of selected area 715 and transmit the compressed image to a destination such as the display 120 or other suitable devices. Reference is now made to Figures 8A and 9 to describe another function according to an embodiment of the invention. Assume the user sends a command 700 in order to capture an image of the object 710 and another command 740 to zoom away from the object 710. The webcam engine 410 receives the pan left command 700 and accordingly selects an area 750 that contains an image of the object 710 in the digitized scene image data 485.
However, since the webcam engine 410 also received the zoom away command 740, the selected area 750 will be larger in size and cover a greater selected area portion in the digitized scene image area 485 than the selected area 715 in Figure 8B. Figure 10 is a flowchart diagram of a method 800 to perform a zooming function according to an embodiment of the invention. A user first sends (805) a zoom command indicating whether to zoom in on or away from an object to be captured in an image by a webcam. A scene in the field of vision of the lens of the webcam is then captured (810). The captured scene in the vision field is then stored (815) as scene image data in an image collection array. The scene image data in the image collection array is then processed and stored (820) as digitized scene image data. Based on the zoom command, a subset of the digitized scene image data is selected (825). Processing of the subset of the digitized scene image data is then performed (827) based on the zoom command. For example, if the zoom command is for zooming in on the image of the captured object, then the subset of the digitized scene image data is enlarged. As another example, if the zoom command is for zooming away from the captured object, then the selected subset will cover a greater area in the digitized scene image data.
Distortion compensation and compression are then performed (830) on the subset of the digitized scene image data. The distortion-compensated and compressed subset is then transmitted (835) to a selected destination such as display 120, to another device via Internet 150 or cable network 160, to printer 165, and/or to computer 170. Figure 11 is another diagram shown to further assist in describing an operation of an embodiment of the invention. A scene 900 falls within the vision field 905 of a wide angle lens 910 of a camera 915. The captured scene is digitized and processed into digitized scene data 920. A subset 925 of the digitized scene data 920 is selected based on a pan, tilt, and/or zoom command(s) that can be transmitted from an input device by the user. The selected subset 925 is then skew corrected (e.g., distortion compensated) into scene data 930 that can be transmitted to a destination. The scene data 930 is also typically compressed in order to optimize the data transmission across a network.
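The zoom handling in method 800 can be summarized as scaling the size of the selected window: zooming in selects a smaller region of the digitized scene data (which is then enlarged for display), while zooming away selects a larger region, as with selected areas 715 and 750 above. A minimal sketch, with illustrative base window and scene sizes not taken from this disclosure:

```python
def zoom_window(scene_shape, center, zoom=1.0, base=(240, 320)):
    """Return (top, bottom, left, right) bounds of the selected region.

    zoom > 1 simulates zooming in: a smaller region is selected and
    would then be enlarged. zoom < 1 simulates zooming away: a larger
    region of the digitized scene data is selected.
    """
    rows, cols = scene_shape
    h, w = int(base[0] / zoom), int(base[1] / zoom)
    # Keep the window centered on the target but inside the scene.
    r = min(max(center[0], h // 2), rows - h // 2)
    c = min(max(center[1], w // 2), cols - w // 2)
    return (r - h // 2, r + h // 2, c - w // 2, c + w // 2)

zoom_in = zoom_window((768, 1024), (384, 512), zoom=2.0)   # smaller region
zoom_out = zoom_window((768, 1024), (384, 512), zoom=0.5)  # larger region
```

In a full pipeline the zoomed-in region would then be upsampled to the display resolution, much as the webcam engine enlarges the selected area in the frame buffer.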
Other variations and modifications of the above-described embodiments and methods are possible in light of the foregoing teaching. For example, webcam 100 may comprise a processor and perform the selection of the subset of the digitized scene image data and the distortion compensation and compression of the subset instead of STB 140. As another example, the webcam 100 can send the digitized scene image output to a processing device, such as a personal computer instead of the STB 140, and the processing device can select the subset of the digitized scene image data and perform the distortion compensation and compression of the subset. As another example, the webcam 100 can send the digitized scene image output to an optional companion box device 175 (Figure 1) instead of sending it to the set top box 140. The companion box 175 may include, for example, the functionality of an Interactive Companion Box, as described in U.S. Patent Application No. / , filed on March 22, 2001, entitled "Interactive
Companion Set Top Box," by inventors Ted M. Tsuchida and James A. Billmaier, the disclosure of which is hereby incorporated by reference. Functions of the Interactive Companion Box may include Internet access, Video-on-Demand, an electronic programming guide, videoconferencing, and/or other functions.
As another example, the sample stage 245 in Figure 2 may instead perform the selection of the image subset to be compressed and compensated for distortion, instead of the webcam engine 410.
Further, at least some of the components of this invention may be implemented by using a programmed general purpose digital computer, by using application specific integrated circuits or field programmable gate arrays, or by using a network of interconnected components and circuits. Connections may be wired, wireless, by modem, and the like. The above description of illustrated embodiments of the invention, including what is described in the Abstract, is not intended to be exhaustive or to limit the invention to the precise forms disclosed. While specific embodiments of, and examples for, the invention are described herein for illustrative purposes, various equivalent modifications are possible within the scope of the invention, as those skilled in the relevant art will recognize.
These modifications can be made to the invention in light of the above detailed description. The terms used in the following claims should not be construed to limit the invention to the specific embodiments disclosed in the specification and the claims. Rather, the scope of the invention is to be determined entirely by the following claims, which are to be construed in accordance with established doctrines of claim interpretation.

Claims

WHAT IS CLAIMED IS:
1. A method of capturing an image by use of a camera, the method comprising: placing a scene within a field of vision of a wide angle lens coupled to the camera; storing image data of the scene in an image collection array; digitizing the scene image data into a digitized scene image data and storing the digitized scene image data in memory; based on a command, selecting a subset of the digitized scene image data; and performing additional processing on the selected subset of the digitized scene image data.
2. The method of claim 1 wherein the camera is used to transmit images on a network.
3. The method of claim 1 wherein the camera is communicatively coupled to a set top box that is capable of transmitting images over data streams in a network.
4. The method of claim 1 wherein the selecting the subset is controlled by a set top box that is capable to transmit images across a network.
5. The method of claim 1 wherein the performing the additional processing is controlled by a set top box that is capable to transmit images across a network.
6. The method of claim 1 wherein the camera is communicatively coupled to a companion box that is capable to control a set top box for transmitting images across a network.
7. The method of claim 1 wherein the selecting the subset is controlled by a companion box that is capable to control a set top box for transmitting images across a network.
8. The method of claim 1 wherein the performing the additional processing is controlled by a companion box that is capable to control a set top box for transmitting images across a network.
9. The method of claim 1 wherein the additional processing comprises: performing distortion compensation on the selected subset of the digitized scene image data.
10. The method of claim 1 wherein the additional processing comprises: performing compression on the selected subset of the digitized scene image data.
11. The method of claim 10, further comprising: transmitting the compressed selected subset of the digitized scene image data to a destination device.
12. The method of claim 1 wherein the selected subset of the digitized scene image data is based on a pan command.
13. The method of claim 1 wherein the selected subset of the digitized scene image data is based on a tilt command.
14. The method of claim 1 wherein the additional processing includes : enlarging the image in the selected subset of the digitized scene image data in response to a zoom command.
15. The method of claim 1 wherein the additional processing includes: enlarging an area of the selected subset of the digitized scene image data in response to a zoom command.
16. The method of claim 1 wherein the camera is connected to a processor device.
17. The method of claim 1 wherein the selecting the subset is controlled by a processor device.
18. The method of claim 1 wherein the performing the additional processing is controlled by a processor device.
19. A method of controlling the capture of an image of an object in a camera field of vision, the method comprising: storing, in an image collection array, data of a scene within the field of vision; storing, in memory, digitized data of the scene within the field of vision; based upon a user command, selecting a subset of the digitized data of the scene to simulate an image captured by at least one of panning, tilting and zooming functions of a camera; and performing additional processing on the subset of the digitized data of the scene.
20. The method of claim 19 wherein the camera is used to transmit images in a network.
21. The method of claim 19 wherein the camera is communicatively coupled to a first unit that is capable to transmit images in a network.
22. The method of claim 19 wherein the selecting the subset is controlled by a first unit that is capable to transmit images in a network.
23. The method of claim 19 wherein the performing the additional processing is controlled by a first unit that is capable to transmit images in a network.
24. The method of claim 19 wherein the camera is communicatively coupled to a companion unit that is capable of being communicatively coupled to a first unit for transmitting images in a network.
25. The method of claim 19 wherein the selecting the subset is controlled by a companion unit that is capable of being communicatively coupled to a first unit for transmitting images in a network.
26. The method of claim 19 wherein the performing the additional processing is controlled by a companion unit that is capable of being communicatively coupled to a first unit for transmitting images in a network.
27. The method of claim 19 wherein the camera is communicatively coupled to a processing device.
28. The method of claim 19 wherein the selecting the subset is controlled by a processing device.
29. The method of claim 19 wherein the performing the additional processing is controlled by a processing device.
30. The method of claim 19 wherein the selected subset of the digitized data is based on a pan command.
31. The method of claim 19 wherein the selected subset of the digitized data is based on a tilt command.
32. The method of claim 19 wherein the additional processing includes: enlarging the image in the selected subset of the digitized data in response to a zoom command.
33. The method of claim 19 wherein the additional processing includes: enlarging an area of the selected subset of the digitized data in response to a zoom command.
34. The method of claim 19 wherein the additional processing comprises: performing distortion compensation on the selected subset of the digitized data of the scene.
35. The method of claim 19 wherein the additional processing comprises: performing compression on the selected subset of the digitized data of the scene.
36. The method of claim 35, further comprising: transmitting the compressed selected subset of the digitized data to a destination device.
37. An article of manufacture, comprising: a machine-readable medium having stored thereon instructions to: store image data of a scene in an image collection array; digitize the scene image data into a digitized scene image data and store the digitized scene image data in memory; based on a command, select a subset of the digitized scene image data; and perform additional processing on the selected subset of the digitized scene image data.
38. An article of manufacture, comprising: a machine-readable medium having stored thereon instructions to: store, in an image collection array, data of a scene within a field of vision of a wide angle lens of a camera; store, in memory, digitized data of the scene within the field of vision; based upon a user command, select a subset of the digitized data of the scene to simulate an image captured by at least one of panning, tilting and zooming functions of the camera; and perform additional processing on the subset of the digitized data of the scene.
39. An apparatus for capturing an image by use of a camera, the apparatus comprising: means for placing a scene within a field of vision of a wide angle lens coupled to the camera; communicatively coupled to the placing means, means for storing image data of the scene in an image collection array; communicatively coupled to the storing means, means for digitizing the scene image data into a digitized scene image data and for storing the digitized scene image data in memory; communicatively coupled to the digitizing and storing means, means for selecting a subset of the digitized scene image data based on a user command, where the user can be local or remote to the camera location (remote access is optionally allowed); and communicatively coupled to the selecting means, means for performing additional processing on the selected subset of the digitized scene image data.
40. An apparatus for controlling the capture of an image of an object in a camera field of vision, the apparatus comprising: first means for storing, in an image collection array, data of a scene within the field of vision; communicatively coupled to the first storing means, second means for storing, in memory, digitized data of the scene within the field of vision; communicatively coupled to the second storing means, means for selecting a subset of the digitized data of the scene to simulate an image captured by at least one of panning, tilting and zooming functions of a camera, based upon a user command; and communicatively coupled to the selecting means, means for performing additional processing on the subset of the digitized data of the scene.
41. An apparatus for controlling the capture of an image of an object in a camera field of vision, the apparatus comprising: a camera including a wide angle lens capable to capture a scene within a field of vision of the wide angle lens; an image collection array communicatively coupled to the wide angle lens and capable to store data of the scene within the field of vision; a memory communicatively coupled to the image collection array and capable to store digitized data of the scene within the field of vision; and a webcam engine communicatively coupled to the memory and capable to select, based upon a user command, a subset of the digitized data of the scene to simulate an image captured by at least one of panning, tilting and zooming functions of the camera.
42. The apparatus of claim 41 further comprising: a compression/correction engine communicatively coupled to the memory and capable to perform compression and distortion compensation on the subset of the digitized data of the scene.
43. The apparatus of claim 41 wherein the camera is capable to transmit images over a network.
44. The apparatus of claim 41 wherein the image collection array is capable to store data of an entire scene within the field of vision.
45. The apparatus of claim 41 wherein the subset of the digitized data is transmitted to a destination device.
46. The apparatus of claim 41 wherein the webcam engine is included in a set top box unit that is capable to transmit images across a network.
47. The apparatus of claim 41 wherein the webcam engine is included in a companion unit for controlling a set top box unit for transmitting images across a network.
48. An apparatus for controlling the capture of an image of an object in a camera field of vision, the apparatus comprising: a camera including a wide angle lens capable to capture a scene within a field of vision of the wide angle lens; an image collection array communicatively coupled to the wide angle lens and capable to store data of the scene within the field of vision; a processor device including a memory communicatively coupled to the image collection array and capable to store digitized data of the scene within the field of vision, the processor device capable to select a subset of the digitized data of the scene to simulate an image captured by at least one of panning, tilting and zooming functions of the camera.
49. The apparatus of claim 48 wherein the processor device further includes a webcam engine communicatively coupled to the memory and executable by the processor device to select, based upon a user command, the subset of the digitized data of the scene.
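Claims 41 and 48 above describe simulating pan, tilt, and zoom in software by selecting a subset of the digitized wide-angle scene data held in memory. The sketch below illustrates that selection step only; the function and parameter names are illustrative assumptions, not language from the claims:

```python
def select_subset(frame, pan, tilt, zoom, out_w, out_h):
    """Simulate pan/tilt/zoom by cropping a window from a wide-angle frame.

    frame -- 2-D list of pixel values, the full digitized scene in memory
    pan   -- horizontal offset of the view window, in pixels
    tilt  -- vertical offset of the view window, in pixels
    zoom  -- magnification factor (1 = full window, 2 = half-size window)
    """
    win_w = out_w // zoom  # a higher zoom reads a smaller source window
    win_h = out_h // zoom
    # Clamp the window so it never falls outside the captured scene.
    x0 = max(0, min(pan, len(frame[0]) - win_w))
    y0 = max(0, min(tilt, len(frame) - win_h))
    return [row[x0:x0 + win_w] for row in frame[y0:y0 + win_h]]
```

Because the entire scene is already stored, "steering" the camera is just choosing different `pan`/`tilt`/`zoom` values on subsequent reads; no mechanical movement is involved.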
50. An apparatus for controlling the capture of an image by a camera, the apparatus comprising: a camera having a wide angle lens capable to capture a scene within a wide vision field; an image collection array communicatively coupled to the wide angle lens and capable to store image data of the entire scene within the wide vision field; a sampling and digitizing stage communicatively coupled to the image collection array and capable to read and digitize the image data stored in the image collection array; a memory communicatively coupled to the sampling and digitizing stage and capable to store digitized image data of the entire scene within the wide vision field; and a webcam module communicatively coupled to the memory and capable to select a subset of the stored digitized image data based upon user commands.
51. The apparatus of claim 50 wherein the camera is used to transmit images across a network.
52. The apparatus of claim 50, further comprising: a compression/correction module communicatively coupled to the memory and capable to perform compression and distortion compensation on the subset of the stored digitized data.
53. The apparatus of claim 50 wherein the image collection array is capable to store data of an entire scene within the wide vision field.
54. The apparatus of claim 50 wherein the subset of the stored digitized data is transmitted to a destination device.
55. The apparatus of claim 50 wherein the webcam module is included in a unit that is capable to transmit images across a network.
56. The apparatus of claim 50 wherein the webcam module is included in a companion unit for controlling a set top box unit for transmitting images across a network.
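Claims 52 and 54 contemplate compressing the selected subset before it is transmitted to a destination device. The claims do not specify a codec, so the sketch below uses zlib purely as a stand-in to show the compress-then-transmit-then-restore flow:

```python
import zlib


def compress_subset(subset):
    """Flatten the selected pixel subset and compress it for transmission.

    zlib stands in for whatever codec an actual compression module would
    apply; the claims leave the compression scheme unspecified.
    """
    raw = bytes(p for row in subset for p in row)
    return zlib.compress(raw)


def decompress_subset(payload, width):
    """Reverse the compression at the destination device."""
    raw = zlib.decompress(payload)
    return [list(raw[i:i + width]) for i in range(0, len(raw), width)]
```

Sending only the compressed subset, rather than the full wide-angle frame, is what keeps the bandwidth cost of this software-steerable approach comparable to a mechanically steered camera.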
57. An apparatus for controlling the image capture by a camera, the apparatus comprising: a unit capable of being communicatively coupled to the camera, and capable to store digitized data of a scene within a field of vision of the camera; the unit including a webcam engine capable to select, based upon a user command, a subset of the stored digitized data of the scene to simulate an image captured by at least one of panning, tilting and zooming functions of the camera; the unit further including a processor communicatively coupled to the webcam engine and capable to execute the webcam engine to permit the selection of the subset of the stored digitized data.
58. The apparatus of claim 57 wherein the unit further comprises: an image correction module communicatively coupled to the processor and capable to perform distortion compensation on the selected subset.
59. An apparatus for controlling the capture of an image of an object, the apparatus comprising: a lens capable to capture a scene within a wide field of vision of the lens; an image collection array communicatively coupled to the lens and capable to store data of the scene within the wide field of vision; a memory communicatively coupled to the image collection array and capable to store digitized data of the scene within the wide field of vision; and a processing stage communicatively coupled to the memory and capable to select a subset of the digitized data of the scene in response to a user command for controlling the capture of the image.
60. The apparatus of claim 59 wherein the processing stage further includes a webcam engine communicatively coupled to the memory and capable to select the subset of the digitized data of the scene.
61. The apparatus of claim 59 wherein the processing stage further includes an image correction engine communicatively coupled to the processor and capable to perform distortion compensation on the selected subset.
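Several of the claims above (42, 52, 58, 61) recite distortion compensation on the selected subset, since a wide angle lens typically introduces barrel distortion. The single-coefficient radial remap below is one common illustrative approach; the claims themselves do not fix a correction model, and a real correction stage would use a calibrated lens profile:

```python
def dewarp(image, k):
    """Approximate barrel-distortion compensation by an inverse radial remap.

    For each output pixel, sample the source pixel at a radially scaled
    position r_src = r * (1 + k * r^2), nearest-neighbour.  k is a single
    illustrative distortion coefficient; k = 0 leaves the image unchanged.
    """
    h, w = len(image), len(image[0])
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0  # optical centre of the frame
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            dx, dy = x - cx, y - cy
            scale = 1.0 + k * (dx * dx + dy * dy)
            sx = int(round(cx + dx * scale))
            sy = int(round(cy + dy * scale))
            if 0 <= sx < w and 0 <= sy < h:  # skip samples outside the scene
                out[y][x] = image[sy][sx]
    return out
```

Applying the correction only to the selected subset, as claims 58 and 61 describe, avoids dewarping the portions of the wide-angle frame that are never transmitted.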
PCT/US2002/006680 2001-03-30 2002-03-04 System and method for a software steerable web camera WO2002080526A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US09/823,804 2001-03-30
US09/823,804 US20020141657A1 (en) 2001-03-30 2001-03-30 System and method for a software steerable web Camera

Publications (1)

Publication Number Publication Date
WO2002080526A1 true WO2002080526A1 (en) 2002-10-10

Family

ID=25239771

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2002/006680 WO2002080526A1 (en) 2001-03-30 2002-03-04 System and method for a software steerable web camera

Country Status (3)

Country Link
US (2) US20020141657A1 (en)
AU (1) AU2002244239A1 (en)
WO (1) WO2002080526A1 (en)

Families Citing this family (40)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2002080521A2 (en) * 2001-03-30 2002-10-10 Digeo, Inc. System and method for a software steerable web camera with multiple image subset capture
JP3960092B2 (en) * 2001-07-12 2007-08-15 日産自動車株式会社 Image processing apparatus for vehicle
US20030037341A1 (en) * 2001-08-17 2003-02-20 Van Der Meulen Pieter Sierd System for remotely controlling consumer electronics using a web-cam image
US7280149B2 (en) * 2001-12-21 2007-10-09 Flextronics Sales & Marketing (A-P) Ltd. Method and apparatus for detecting optimum lens focus position
US20030174146A1 (en) * 2002-02-04 2003-09-18 Michael Kenoyer Apparatus and method for providing electronic image manipulation in video conferencing applications
US20040008213A1 (en) * 2002-07-11 2004-01-15 Sun Microsystems, Inc., A Delaware Corporation Tagging multicolor images for improved compression
US20040008214A1 (en) * 2002-07-11 2004-01-15 Sun Microsystems, Inc., A Delaware Corporation Tagging repeating images for improved compression
KR100846449B1 (en) * 2003-03-27 2008-07-16 삼성전자주식회사 Method for setting web camera mode of mobile composition device
US20050120128A1 (en) * 2003-12-02 2005-06-02 Wilife, Inc. Method and system of bandwidth management for streaming data
US7599002B2 (en) * 2003-12-02 2009-10-06 Logitech Europe S.A. Network camera mounting system
US7779361B2 (en) * 2004-02-09 2010-08-17 Malmstrom R Dean Change-alarmed, integrated console apparatus and method
US7353458B2 (en) * 2004-02-09 2008-04-01 Portalis, Lc Computer presentation and command integration method
US7496846B2 (en) * 2004-02-09 2009-02-24 Portalis, Lc Computer presentation and command integration apparatus
US20060005135A1 (en) * 2004-06-30 2006-01-05 Nokia Corporation Method for quantifying a visual media file size in an electronic device, an electronic device utilizing the method and a software program for implementing the method
DE102004047674A1 (en) * 2004-09-30 2006-04-13 Siemens Ag Method and device for multimedia-based display of an object on the Internet
US8121392B2 (en) 2004-10-25 2012-02-21 Parata Systems, Llc Embedded imaging and control system
US20060171453A1 (en) * 2005-01-04 2006-08-03 Rohlfing Thomas R Video surveillance system
US20060255931A1 (en) * 2005-05-12 2006-11-16 Hartsfield Andrew J Modular design for a security system
JP2006345039A (en) * 2005-06-07 2006-12-21 Opt Kk Photographing apparatus
US7620316B2 (en) * 2005-11-28 2009-11-17 Navisense Method and device for touchless control of a camera
US20070219654A1 (en) * 2006-03-14 2007-09-20 Viditotus Llc Internet-based advertising via web camera search contests
US8069461B2 (en) 2006-03-30 2011-11-29 Verizon Services Corp. On-screen program guide with interactive programming recommendations
JP2007318596A (en) * 2006-05-29 2007-12-06 Opt Kk Compression method of image data by wide angle lens, expansion display method, compression apparatus, wide angle camera apparatus, and monitor system
US20080012953A1 (en) * 2006-07-13 2008-01-17 Vimicro Corporation Image Sensors
US7940955B2 (en) * 2006-07-26 2011-05-10 Delphi Technologies, Inc. Vision-based method of determining cargo status by boundary detection
US20080049099A1 (en) * 2006-08-25 2008-02-28 Imay Software Co., Ltd. Entire-view video image process system and method thereof
US8418217B2 (en) 2006-09-06 2013-04-09 Verizon Patent And Licensing Inc. Systems and methods for accessing media content
US8464295B2 (en) 2006-10-03 2013-06-11 Verizon Patent And Licensing Inc. Interactive search graphical user interface systems and methods
US8566874B2 (en) 2006-10-03 2013-10-22 Verizon Patent And Licensing Inc. Control tools for media content access systems and methods
US8510780B2 (en) 2006-12-21 2013-08-13 Verizon Patent And Licensing Inc. Program guide navigation tools for media content access systems and methods
US8015581B2 (en) 2007-01-05 2011-09-06 Verizon Patent And Licensing Inc. Resource data configuration for media content access systems and methods
DE102007013239A1 (en) * 2007-03-15 2008-09-18 Mobotix Ag supervision order
US20080291284A1 (en) * 2007-05-25 2008-11-27 Sony Ericsson Mobile Communications Ab Communication device and image transmission method
US8154578B2 (en) * 2007-05-31 2012-04-10 Eastman Kodak Company Multi-camera residential communication system
US20080313356A1 (en) * 2007-06-15 2008-12-18 Microsoft Corporation Remote control of devices through instant messenger
US8035686B2 (en) 2007-06-15 2011-10-11 At&T Intellectual Property I, L.P. STB/DVR video surveillance
US8103965B2 (en) 2007-06-28 2012-01-24 Verizon Patent And Licensing Inc. Media content recording and healing statuses
US8051447B2 (en) 2007-12-19 2011-11-01 Verizon Patent And Licensing Inc. Condensed program guide for media content access systems and methods
DE102008049921A1 (en) 2008-09-29 2010-04-15 Mobotix Ag Method of video stream generation
JP5872171B2 (en) * 2011-02-17 2016-03-01 クラリオン株式会社 Camera system

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5185667A (en) * 1991-05-13 1993-02-09 Telerobotics International, Inc. Omniview motionless camera orientation system
US6005611A (en) * 1994-05-27 1999-12-21 Be Here Corporation Wide-angle image dewarping method and apparatus
US6043837A (en) * 1997-05-08 2000-03-28 Be Here Corporation Method and apparatus for electronically distributing images from a panoptic camera system
US6128033A (en) * 1996-06-28 2000-10-03 At&T Corporation Audiovisual communications terminal apparatus for teleconferencing and method

Family Cites Families (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4310222A (en) 1976-10-29 1982-01-12 Canon Kabushiki Kaisha Retro-focus type wide angle lens
US4772107A (en) 1986-11-05 1988-09-20 The Perkin-Elmer Corporation Wide angle lens with improved flat field characteristics
US4831438A (en) * 1987-02-25 1989-05-16 Household Data Services Electronic surveillance system
US5128776A (en) * 1989-06-16 1992-07-07 Harris Corporation Prioritized image transmission system and method
FR2661061B1 (en) * 1990-04-11 1992-08-07 Multi Media Tech METHOD AND DEVICE FOR MODIFYING IMAGE AREA.
US5359363A (en) * 1991-05-13 1994-10-25 Telerobotics International, Inc. Omniview motionless camera surveillance system
US6172707B1 (en) * 1992-06-22 2001-01-09 Canon Kabushiki Kaisha Image pickup device
KR940017747A (en) * 1992-12-29 1994-07-27 에프. 제이. 스미트 Image processing device
US5606364A (en) * 1994-03-30 1997-02-25 Samsung Aerospace Industries, Ltd. Surveillance system for processing a plurality of signals with a single processor
US5646677A (en) * 1995-02-23 1997-07-08 Motorola, Inc. Method and apparatus for interactively viewing wide-angle images from terrestrial, space, and underwater viewpoints
US5657073A (en) * 1995-06-01 1997-08-12 Panoramic Viewing Systems, Inc. Seamless multi-camera panoramic imaging with distortion correction and selectable field of view
US6031540A (en) 1995-11-02 2000-02-29 Imove Inc. Method and apparatus for simulating movement in multidimensional space with polygonal projections from subhemispherical imagery
US6144773A (en) 1996-02-27 2000-11-07 Interval Research Corporation Wavelet-based data compression
JPH09261519A (en) * 1996-03-22 1997-10-03 Canon Inc Image pickup device
JPH1051755A (en) * 1996-05-30 1998-02-20 Fujitsu Ltd Screen display controller for video conference terminal equipment
US6366311B1 (en) * 1996-10-11 2002-04-02 David A. Monroe Record and playback system for aircraft
JP3943674B2 (en) * 1996-10-25 2007-07-11 キヤノン株式会社 Camera control system, camera server and control method thereof
US7092012B2 (en) * 1996-11-15 2006-08-15 Canon Kabushiki Kaisha Image processing apparatus and method, storage medium, and communication system
US5877821A (en) * 1997-01-30 1999-03-02 Motorola, Inc. Multimedia input and control apparatus and method for multimedia communications
JP4332231B2 (en) 1997-04-21 2009-09-16 ソニー株式会社 Imaging device controller and imaging system
US6011558A (en) * 1997-09-23 2000-01-04 Industrial Technology Research Institute Intelligent stitcher for panoramic image-based virtual worlds
CN1178467C (en) * 1998-04-16 2004-12-01 三星电子株式会社 Method and apparatus for automatically tracing moving object
JP3762149B2 (en) * 1998-07-31 2006-04-05 キヤノン株式会社 Camera control system, camera server, camera server control method, camera control method, and computer-readable recording medium
US6353848B1 (en) 1998-07-31 2002-03-05 Flashpoint Technology, Inc. Method and system allowing a client computer to access a portable digital image capture unit over a network
US6223213B1 (en) * 1998-07-31 2001-04-24 Webtv Networks, Inc. Browser-based email system with user interface for audio/video capture
EP1047264B1 (en) * 1999-04-22 2007-05-09 Leo Vision Method and device for image processing and restitution with resampling
JP2001004909A (en) * 1999-06-18 2001-01-12 Olympus Optical Co Ltd Camera having automatic focusing device
US6543052B1 (en) * 1999-07-09 2003-04-01 Fujitsu Limited Internet shopping system utilizing set top box and voice recognition


Also Published As

Publication number Publication date
US20020141657A1 (en) 2002-10-03
US7071968B2 (en) 2006-07-04
US20020141658A1 (en) 2002-10-03
AU2002244239A1 (en) 2002-10-15

Similar Documents

Publication Publication Date Title
US20020141657A1 (en) System and method for a software steerable web Camera
US7593041B2 (en) System and method for a software steerable web camera with multiple image subset capture
US8243135B2 (en) Multiple-view processing in wide-angle video camera
US6005613A (en) Multi-mode digital camera with computer interface using data packets combining image and mode data
US20090309973A1 (en) Camera control apparatus and camera control system
CN1662061B (en) Motion targeting system and method
JP3533756B2 (en) Image input device
JP4804378B2 (en) Video display device and video display method
US20040179100A1 (en) Imaging device and a monitoring system
US20070002131A1 (en) Dynamic interactive region-of-interest panoramic/three-dimensional immersive communication system and method
KR101249322B1 (en) Imaging device, information processing device, information processing method, and program recording medium
KR20070092582A (en) Apparatus and method for image processing
JPH0510872B2 (en)
WO2007119712A1 (en) Camera apparatus, and image processing apparatus and image processing method
CN113923362A (en) Control apparatus, control method, and storage medium
US20060290785A1 (en) Image Capturing Apparatus with a Remote Controller
JP5171398B2 (en) Imaging device
WO2006067547A1 (en) Method for extracting of multiple sub-windows of a scanning area by means of a digital video camera
KR101589498B1 (en) Digital camera and controlling method thereof
KR102009988B1 (en) Method for compensating image camera system for compensating distortion of lens using super wide angle camera and Transport Video Interface Apparatus used in it
US11838634B2 (en) Method of generating a digital video image using a wide-angle field of view lens
JPH11261875A (en) Digital camera remote controller
WO2000002382A2 (en) Television camera
JP2003158684A (en) Digital camera
US20070036536A1 (en) Electronic apparatus with adjustable charge couple device

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: JP

WWW Wipo information: withdrawn in national office

Country of ref document: JP