US20130021491A1 - Camera Device Systems and Methods - Google Patents
- Publication number
- US20130021491A1 (application US13/413,863)
- Authority
- US
- United States
- Prior art keywords
- visual
- camera device
- framing
- logic
- indicator
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/73—Deblurring; Sharpening
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/134—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
- H04N19/136—Incoming video signal characteristics or properties
- H04N19/137—Motion inside a coding unit, e.g. average field, frame or block difference
- H04N19/139—Analysis of motion vectors, e.g. their magnitude, direction, variance or reliability
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/189—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the adaptation method, adaptation tool or adaptation type used for the adaptive coding
- H04N19/192—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the adaptation method, adaptation tool or adaptation type used for the adaptive coding the adaptation method, adaptation tool or adaptation type being iterative or recursive
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/42—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
- H04N19/436—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation using parallelised computational arrangements
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/503—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
- H04N19/51—Motion estimation or motion compensation
- H04N19/537—Motion estimation other than block-based
- H04N19/54—Motion estimation other than block-based using feature points or meshes
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/503—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
- H04N19/51—Motion estimation or motion compensation
- H04N19/56—Motion estimation with initialisation of the vector search, e.g. estimating a good candidate to initiate a search
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/68—Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
- H04N23/681—Motion detection
- H04N23/6811—Motion detection based on the image signal
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/68—Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
- H04N23/682—Vibration or motion blur correction
- H04N23/683—Vibration or motion blur correction performed by a processor, e.g. controlling the readout of an image memory
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20172—Image enhancement details
- G06T2207/20201—Motion blur correction
Definitions
- Image capture devices can be employed by users to capture one or more images and/or video of various target scenes. Often, a user operating an image capture device may wish to appear in a target scene. In this and other scenarios, when the image capture device is capturing an image in an autonomous and/or timed mode, there is little or no user feedback regarding framing of the image, particularly once the user is positioned within the target scene. Accordingly, the framing conditions of the captured imagery can be less than ideal. A user and/or subject in an image may appear off-center, a portion of the user may be outside of the current or ideal framing of the image, or other occurrences may negatively impact framing of the image. Additionally, with an image capture device configured to capture video (e.g., multiple image frames stitched together to form a video), vibration and/or movement of the image capture device can result in the captured video becoming blurred.
- FIGS. 1A-1B are drawings of a camera device according to various embodiments of the disclosure.
- FIG. 2 is a drawing of an alternative camera device according to various embodiments of the disclosure.
- FIG. 3 is a drawing of various components of a camera device of FIGS. 1A-1B and FIG. 2 according to various embodiments of the disclosure.
- FIGS. 4A-4D are drawings of a camera device according to embodiments of the disclosure that provide framing feedback.
- FIGS. 5-8 are drawings of a camera device providing framing feedback according to various embodiments of the disclosure.
- FIG. 9 is a flowchart illustrating one example of video capture logic executed in a camera device illustrated in FIGS. 1A-1B and FIG. 2 according to various embodiments of the disclosure.
- Embodiments of the present disclosure relate to systems and methods that can be executed in an image capture device or camera device (e.g., still image capture devices, video cameras, still and video multi-function camera devices, etc.). More specifically, embodiments of the disclosure are directed to providing feedback, or a visual frame indicator, to a subject in a target scene indicating how the camera device is aimed and how the target scene is framed, so that the subject can use this feedback regarding framing conditions and characteristics of one or more images that the camera device is capturing to improve the framing of the target scene.
- Framing conditions can include, but are not limited to, a location within an image along a vertical and horizontal axis (i.e., two dimensional framing conditions) as well as a depth of field of a subject in the image (i.e., third dimension).
- Embodiments of the disclosure are also directed to systems and methods in a camera device that can reduce blur in a video captured by the camera device by employing vibration and/or motion sensing capabilities that can be integrated within a camera device.
- a camera device can include or be incorporated within a camera, video camera, a mobile device with an integrated camera device, set top box, game unit, gaming console, web cameras, wireless or wired access points and routers, laptop computer, modems, tablet computers, or any other mobile or stationary devices suitable to capturing imagery and/or video as can be appreciated.
- a camera device according to an embodiment of the disclosure can be integrated within a device such as a smartphone, tablet computing system, laptop computer, desktop computer, or any other computing device that has the capability to receive and/or capture imagery via image capture hardware.
- camera device hardware can include components such as lenses, image sensors, or imagers, (e.g., charge coupled devices, CMOS image sensor, etc.), processor(s), image signal processor(s) (e.g., digital signal processor(s)), a main processor, memory, mass storage, or any other hardware, processing circuitry or software components that can facilitate capture of imagery and/or video.
- a digital signal processor can be incorporated as a part of a main processor in a camera device module that is in turn incorporated into a device having its own processor, memory and other components.
- a camera device can provide a user interface via a display that is integrated into the camera device and/or housed independently thereof.
- the display can be integrated with a mobile device, such as a smartphone and/or tablet computing device, and can include a touchscreen input device (e.g., a capacitive touchscreen, etc.) with which a user may interact with the user interface that is presented thereon.
- the camera device hardware can also include one or more buttons, dials, toggles, switches, or other input devices with which the user can interact with software or firmware executed in the camera device.
- FIGS. 1A-1B show a mobile device 102 that can comprise and/or incorporate a camera device according to various embodiments of the disclosure.
- the mobile device 102 may comprise, for example, a processor-based system, such as a computer system.
- a computer system may be embodied in the form of a desktop computer, a laptop computer, a personal digital assistant, a mobile device (e.g., cellular telephone, smart phone, etc.), tablet computing system, set-top box, music players, or other devices with like capability.
- the mobile device can include, for example, a camera device 104 , which can further include a lens system 108 as well as other hardware components that can be integrated with the device to facilitate image capture.
- the mobile device 102 can also include a display device 141 upon which various content and other user interfaces may be rendered.
- the mobile device 102 can also include one or more input devices with which a user can interact with a user interface rendered on the display device 141 .
- the mobile device 102 can include or be in communication with a mouse, touch input device (e.g., capacitive and/or resistive touchscreen incorporated with the display device 141 ), keyboard, or other input devices.
- the mobile device 102 may be configured to execute various applications, such as a camera application that can interact with an image capture module that includes various hardware and/or software components that facilitate capture and/or storage of images and/or video.
- the camera application can interact with application programming interfaces (APIs) and/or other software libraries and/or drivers that are provided for the purpose of interacting with image capture hardware, such as the lens system and other image capture hardware.
- the camera application can be a special purpose application, a plug-in or executable library, one or more API's, image control algorithms, camera device firmware, or other software that can facilitate communication with image capture hardware in communication with the mobile device 102 .
- a camera application according to embodiments of the present disclosure can capture imagery and/or video via the various image capture hardware as well as facilitate storage of the captured imagery and/or video in memory and/or mass storage associated with the mobile device 102 .
- the mobile device 102 can also include a visual framing indicator 111 that can provide framing feedback to a user positioned in a target scene to which the lens system 108 is pointed.
- framing feedback can take the form of audible and/or visible indicators or cues that allow a current framing of an image to be determined from a position within or near the target scene to which the lens system 108 is aimed.
- the visual framing indicator 111 can take the form of a projection system or projector component (e.g., pico-projector), laser scanner, holographic optical element, LEDs, or any other components that can generate any type of markers, textual information, visual information, and/or video or images that are visible to the user, or visible on a remote surface relative to the camera device 104 .
- FIG. 2 illustrates an alternative example of a camera device 124 according to an embodiment of the disclosure.
- the depicted standalone camera device 124 can also include processing circuitry such as a digital signal processor, memory, and other components that can execute software logic to facilitate the embodiments described herein.
- the camera device 124 shown in FIG. 2 can also have a visual framing indicator 126 as discussed above with reference to the mobile device 102 of FIGS. 1A-1B to provide feedback regarding framing conditions to a user positioned within or near a target scene.
- the camera device 104 may include a lens system 163 having a fixed focal length or an adjustable focal length (e.g., a zoom lens).
- FIG. 3 illustrates an embodiment of the various image capture components, or one example of a camera device 300 as illustrated in FIGS. 1A-1B and FIG. 2 .
- a camera device according to an embodiment of the disclosure more generally comprises a camera device that can provide images and/or video in digital form.
- the camera device 300 includes a lens system 301 that conveys images of viewed scenes to an image sensor 302 .
- the image sensor 302 comprises a charge-coupled device (CCD) or a complementary metal oxide semiconductor (CMOS) sensor that is driven by one or more sensor drivers.
- the analog image signals captured by the sensor 302 are provided to an analog front end 304 for conversion into binary code that can be processed by a controller 308 or processor.
- the controller 308 executes various types of logic that can be available in program memory 310 accessible to the camera device 300 in order to facilitate the functionality described herein.
- the controller 308 can place the camera device 300 into various modes, such as a video capture mode that allows a user to capture video.
- the video capture mode also allows video to be captured while detecting motion, vibration, and/or movement associated with the camera device 300 and/or the current framing of a target scene, skipping the capture of frames when movement is above a threshold.
- the controller 308 can also place the camera device 300 into a framing feedback mode that allows a user to capture images and/or video of a target scene while providing visual and/or audible framing feedback to a user positioned in or near the target scene.
- the framing feedback mode also allows a user to perform gestures or provide audible commands that are interpreted by the controller to affect framing of a target scene or initiate other operations associated with the camera device 300 .
- the various modes can be selected by the controller 308 manually (e.g., via user input), automatically, and/or semi-automatically. The various modes can also be executed simultaneously by the controller 308 .
- the controller 308 interacts with various hardware components associated with the camera device 300 , such as image sensors, motion sensors, vibration sensors, accelerometers, user input devices, storage media, display devices, serial communication interface(s), and other hardware components as can be appreciated.
- the controller 308 also allows the camera device 300 to be operated in an autonomous and/or hands-free mode in which a user can initiate image and/or video capture, adjustment of a field of view of a lens system, and/or any other adjustments to the camera device 300 as discussed herein.
- a camera device 300 according to the present disclosure can also include one or more visual and/or audio projection systems that can emit visible and/or audible feedback to a user.
- these projection systems can include, but are not limited to, one or more lights, light emitting diode(s) (LEDs), laser, holographic optical elements, video projection systems (e.g., pico projector, etc.), speakers, or other devices that can facilitate communication with a user.
- the video capture logic 315 in the program memory 310 is executed by the controller 308 to facilitate a video capture mode that can reduce blur or other video artifacts that result from vibration or other movement of the camera device 300 .
- the video capture logic 315 can determine a frame rate associated with the video and capture images from the image sensor 302 at a rate consistent with the frame rate and store the video in a mass storage device 341 accessible to the camera device 300.
- the video capture logic 315 receives motion, movement and/or vibration data from one or more motion sensors 313 that are integrated within the camera device 300 .
- the video capture logic 315 can include vibration detection logic 316 , zooming detection logic 317 , tracking detection logic 318 , and/or panning detection logic 319 , which can detect movement of the camera device 300 as well as movement of the framing of a target scene that is caused by zooming and/or panning of the lens system 301 .
- the motion sensors 313 can include, but are not limited to, motion detectors, accelerometers and/or other devices that can detect movement of the camera device 300 .
- the video capture logic 315 skips capture of a frame associated with the video when movement data from the motion sensor(s) 313 indicates that motion of the camera device 300 and/or motion associated with adjustment of a zoom level and/or panning exceeds a threshold.
- the video capture logic 315 captures the video frame when movement of the camera device 300 is less than a threshold.
- the video capture logic 315 can adhere to a requested frame rate by producing an output video having a number of frames per time period consistent with the requested frame rate, but where the video frames may not be evenly spaced because frame capture may be initiated when movement is below a threshold.
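- The motion-gated capture described above can be sketched roughly as follows. This is an illustrative sketch only, not the patent's implementation; the function and parameter names (`capture_gated_frames`, `motion_threshold`) are hypothetical, and per-tick motion magnitudes stand in for motion sensor 313 readings:

```python
def capture_gated_frames(motion_samples, frames_needed, motion_threshold):
    """Return the tick indices at which frames would be captured.

    motion_samples: per-tick motion magnitudes from a motion sensor.
    frames_needed:  frames required for the period (requested frame rate).
    """
    captured = []
    for tick, motion in enumerate(motion_samples):
        if len(captured) == frames_needed:
            break  # requested frame rate already satisfied for this period
        if motion < motion_threshold:
            captured.append(tick)  # low-motion tick: keep this frame
    return captured

# Frames end up unevenly spaced: captures cluster in the low-motion ticks.
ticks = capture_gated_frames([0.9, 0.1, 0.2, 0.8, 0.7, 0.1, 0.05, 0.9],
                             frames_needed=3, motion_threshold=0.5)
# ticks == [1, 2, 5]
```

Note the resulting frames are not evenly spaced in time, which is exactly the condition the encoding step must compensate for.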
- the video capture logic 315 can also adjust the movement threshold so that it is a relative measure of movement during capture of a given video. In this way, the video capture logic 315 can capture video frames over a given time period where the frames are associated with lower levels of movement relative to other portions of the time period. For example, the video capture logic 315 may identify that movement levels associated with the camera device 300 are oscillating. Accordingly, the video capture logic 315 can identify the oscillating pattern and capture video frames when movement levels are at their lowest relative to other portions of a given time period.
- the video capture logic 315 can also recognize an oscillating pattern in movement from the motion sensor(s) 313 and initiate capture of additional video frames before a peak in movement of the camera device 300 as well as subsequent to the peak so that enough frames are captured according to a requested frame rate. Additionally, because captured frames of the video may be unevenly temporally spaced, the video capture logic 315 can employ video encoding techniques that compensate for such a scenario to reduce the impact on the quality of the resultant captured video. Additionally, the video capture logic 315 can force capture of video frames even if movement of the camera device 300 does not drop below a movement threshold. For example, the video capture logic 315 can force capture of video frames in order to produce an output video possessing an acceptable number of frames per time period according to a requested frame rate.
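- The forced-capture fallback can be sketched as below: low-motion ticks are preferred, but if too few frames were gated in, the remaining frames are forced at the end of the period so the requested frame rate is still met. A hypothetical sketch; the names are illustrative and the oscillation-prediction step is omitted:

```python
def capture_with_forced_frames(motion_samples, frames_needed, threshold):
    """Prefer low-motion ticks, but force capture of any remaining frames
    so the output still has frames_needed frames for the period."""
    # Gated pass: keep only low-motion ticks, up to the requested count.
    captured = [t for t, m in enumerate(motion_samples) if m < threshold]
    captured = captured[:frames_needed]
    # Forced pass: fill the shortfall from the final ticks, even though
    # motion there may exceed the threshold.
    tick = len(motion_samples) - 1
    while len(captured) < frames_needed and tick >= 0:
        if tick not in captured:
            captured.append(tick)
        tick -= 1
    return sorted(captured)
```

With uniformly high motion, e.g. `capture_with_forced_frames([0.9, 0.8, 0.9, 0.95], 2, 0.5)`, no tick passes the gate and the last two ticks are forced, yielding `[2, 3]`.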
- the controller 308 also executes framing feedback logic 321 available in the program memory 310 to facilitate a framing feedback mode that provides feedback or a visual framing indicator to a user regarding framing characteristics of an image or video captured (or to be captured) by the camera device 300.
- the feedback includes feedback that is provided to a human subject in or near a target scene, as the feedback is visible and/or audible from a position within or near the target scene, which may not be a position from which a local display 345 of the camera device 300 is visible.
- a user who may be positioned within a target scene to which the lens system 301 is aimed can receive information regarding the current framing of an image, where the information is projected outside of the housing of the camera device 300 , and where the information includes framing conditions, zoom levels, lighting conditions, and other framing characteristics.
- the framing feedback logic 321 can employ a projection-illumination subsystem 323 of the camera device 300 in order to project visual feedback regarding the framing conditions of an image such that it is visible from the target scene.
- a subject of an image can determine, by viewing the feedback generated by the camera device 104 , whether the framing characteristics of the image to be captured are as the user intended or desires.
- the projection-illumination subsystem 323 can include projection systems, such as a microelectromechanical (MEMS) pico-projector 324 , one or more light emitting diodes, laser systems 326 , light source 327 , one or more holographic optical elements (HOE) 325 , or any other system that can emit a visible light or indicator (e.g., a glowing or fluorescent dot).
- the projection-illumination subsystem 323 can also include systems that can modify or control the visibility of the various projection systems, such as a fixed slit barrier 329, adaptable slit barrier 330 and/or light source shroud 328.
- the focal region logic 372 determines the current framing or field of view of a target scene as captured by the lens system 301, and causes the laser scanning system 326 to project a frustum of light having a height and width proportional to the image sensor 302 such that the frustum of light illuminates anything that is within the current framing of the image.
- the focal region logic 372 causes the projection-illumination subsystem 323 to emit a frustum of light that approximates the field of view of the lens system 301 at any moment in time. In this way, a subject can determine whether he or she is within the current framing of an image by determining whether he or she can see or is within the frustum of light.
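- A frustum that approximates the lens field of view follows from the standard pinhole-camera relation between focal length and sensor size. The sketch below is illustrative only (the patent does not give this computation); it assumes the current focal length and the sensor's physical dimensions are known:

```python
import math

def frustum_half_angles(focal_length_mm, sensor_w_mm, sensor_h_mm):
    """Half-angles (degrees) of a light frustum matching the lens field
    of view, from the pinhole relation: half-angle = atan(size / (2 f))."""
    half_h = math.degrees(math.atan(sensor_w_mm / (2.0 * focal_length_mm)))
    half_v = math.degrees(math.atan(sensor_h_mm / (2.0 * focal_length_mm)))
    return half_h, half_v

# A 36 x 24 mm sensor behind an 18 mm lens: 45 degrees horizontal
# half-angle, about 33.7 degrees vertical.
h, v = frustum_half_angles(18.0, 36.0, 24.0)
```

As the zoom level increases (longer focal length), both half-angles shrink, so the emitted frustum narrows with the framing.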
- the HOE 325 is configured to emit a fixed frustum of light that is tuned to approximate the framing of the fixed focal length lens system 301 .
- the MEMS pico-projector 324 and/or laser system 326 that is integrated within the camera device 300 emits a frustum of light and/or a boundary of light that is visible when cast against a background in the target scene that approximates the current framing of the image, or the frame boundary.
- the focal region logic 372 determines a zoom level associated with the lens system 301 and in turn determines a current framing of the image or field of view of the lens system 301 .
- the focal region logic 372 can calculate a zoom level from a user input device 331, such as a zoom input device 333 that allows the user to adjust the focal length of the lens system 301. Accordingly, the focal region logic 372 causes the projection-illumination subsystem 323 to emit a frustum of light corresponding to the current framing of the image.
- the focal region logic 372 can also cause the projection-illumination subsystem 323 to emit ground lines or outlines that are cast against the ground and that correspond to the current framing, a rectangular area corresponding to the current framing and/or any other visible indication that approximates the current framing of an image to be captured by the camera device 300 .
- the framing feedback logic 321 can also employ the projection-illumination subsystem 323 to project a representation of a viewfinder or display of the camera device 104 such that it is visible from a position in the target scene and outside the housing of the camera device 300 . In this way, a user in the target scene can observe the current framing of the image.
- the display projection logic 376 is executed by the framing feedback logic 321 to cause the projection-illumination subsystem 323 to project the display or viewfinder representation on the ground at a position between the camera device 104 and the target scene, on a background of the target scene, behind the camera device 104 such that the camera device 104 is positioned between the projection and the target scene, and/or at any other surface or position such that it is viewable from the target scene.
- the MEMS pico-projector 324 integrated within the camera device 300 can project a representation of the target scene as viewed by the lens system 301 such that it is visible to a subject within the target scene.
- the framing feedback logic 321 can also provide framing feedback or a visual frame indicator in the form of a light source, or other visible source, emitted from the camera device 104.
- the focal region logic 372 executed by the framing feedback logic 321 employs various mechanisms to control whether a light source 327 (e.g., light emitting diode(s), laser(s), glowing or fluorescent component, etc.) emits light (or reflected light) that is visible from various positions within or around the target scene. Accordingly, the focal region logic 372 adjusts visibility of such a light source 327 such that it is visible from a position within the current framing of the image.
- the focal region logic 372 disables visibility of the light source 327 from a position that is outside the current framing of the image.
- the projection-illumination subsystem 323 employs a light source shroud 328 , a fixed slit barrier 329 and/or adjustable slit barrier 330 that limits the viewing angles from which the light source 327 is visible so that a subject can look at the camera device 300 and determine whether he or she is in the current framing or field of view of the lens system 301 based upon whether he or she can see the light source.
- the focal region logic 372 can also determine a field of view of the lens system 301 based upon a zoom level of the lens system 301 and adjust visibility using these various mechanisms as the zoom level of the lens system 301 is adjusted. For example, the focal region logic 372 can adjust visibility of the light source 327 as the zoom level is adjusted so that a viewing angle from which the light source 327 is visible from the target scene increases as the zoom level decreases and decreases as the zoom level increases. It should also be appreciated that embodiments of the disclosure can be configured such that the light source 327 is visible from a position outside the target scene and adjusted such that the light source 327 is not visible from a position inside the target scene.
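The zoom-dependent viewing angle described above can be sketched as follows. This is an illustrative Python sketch, not part of the disclosure; the function names, the assumed 36 mm sensor width, and the optional margin parameter are hypothetical.

```python
import math

def field_of_view_deg(focal_length_mm, sensor_width_mm=36.0):
    """Horizontal field of view of the lens system for a given focal length.

    A longer focal length (higher zoom level) yields a narrower field of view.
    """
    return math.degrees(2.0 * math.atan(sensor_width_mm / (2.0 * focal_length_mm)))

def indicator_viewing_angle_deg(focal_length_mm, margin_deg=0.0,
                                sensor_width_mm=36.0):
    """Viewing angle over which the light source 327 should be visible.

    The angle tracks the lens field of view, optionally narrowed by a
    margin so the indicator disappears slightly inside the frame edge.
    """
    return max(0.0, field_of_view_deg(focal_length_mm, sensor_width_mm) - margin_deg)
```

Under this model, zooming out (shorter focal length) widens the indicator's viewing angle and zooming in narrows it, matching the behavior attributed to the focal region logic 372.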
- the focal region logic 372 can position a visible indicator of the current framing of an image by generating ground lines emitted by the laser system 326 and/or the MEMS pico-projector 324 such that they are initially visible within the frame.
- focal region logic 372 can cause the laser scanning system 326 to emit lines corresponding to left and right ground lines such that they are visible on the ground via the lens system 301 .
- the focal region logic 372 can increase the angle of the left and right ground lines relative to one another until the ground lines are subsequently not visible, which translates into the ground lines being positioned just outside the current framing of the image. Accordingly, a subject can position himself or herself within the ground lines upon their final positioning by the focal region logic 372 and know that he or she is within the current framing of the image.
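The ground-line sweep above can be sketched as a simple search loop. This is an illustrative Python sketch; `line_visible` stands in for a hypothetical check of whether the projected lines are still detected through the lens system 301, and the parameter names are assumptions.

```python
def position_ground_lines(line_visible, start_deg=0.0, step_deg=1.0, max_deg=90.0):
    """Widen the left/right ground lines until they fall out of the frame.

    line_visible(angle) reports whether lines emitted at the given
    half-angle are still seen through the lens system; the returned
    angle is the first one at which the lines landed just outside the
    current framing of the image.
    """
    angle = start_deg
    while angle < max_deg and line_visible(angle):
        angle += step_deg
    return angle
```

A subject standing between lines placed at the returned angle would then be just inside the current framing.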
- the MEMS pico-projector 324 , adaptable light source 327 , HOE 325 , and laser scanning system 326 can provide framing feedback that is visible outside a housing of the camera device 300 (e.g., light projected on the ground, background, or any other surface outside the camera device 300 housing that is visible from the target scene).
- the light source 327 operates in conjunction with the light source shroud 328 , the fixed slit barrier 329 and/or adaptable slit barrier 330 within the camera device 300 housing to enable or disable visibility of the light source 327 from various viewing angles outside of the housing.
- the framing feedback logic 221 can also execute audible feedback logic 375 to identify subjects in a target scene and cause a speaker system integrated within the camera device 300 to emit audible feedback regarding the current framing of an image.
- the audible feedback can provide a subject in a target scene with information regarding the current framing of an image.
- the audible feedback logic 375 can recognize faces, bodies or other objects in a target scene and determine whether the framing of the image can be improved.
- the audible feedback logic 375 can determine whether faces, bodies, and/or objects are centered within the current framing of the image and generate audible feedback via speech synthesis logic directing a user as to how faces, bodies, and/or objects should be moved within the current framing of the image.
- the audible feedback logic 375 can employ speech synthesis logic to instruct a subject within a current framing of the image as to a direction in which to move to appear centered within the current framing.
- the audible feedback logic 375 can also emit an audible message informing a subject in the target scene of when an image is to be captured.
- the audible feedback logic 375 employs speech synthesis logic 388 to generate a voice countdown so that a user is aware of when an image and/or video is going to be captured by the camera device 104 .
- the framing feedback logic 221 also executes automated framing and cropping logic 377 that facilitates automated framing and cropping of a target scene based upon identification of faces, bodies, and/or objects within the target scene.
- the camera device 104 can be placed in a mode by the user that includes automatic image capture of a target scene, perhaps a target scene in which the user is positioned.
- the face-body-object detection logic 374 executed by the framing feedback logic 221 identifies the presence of one or more faces and/or bodies in the image and the automated framing and cropping logic 377 adjusts framing of an image captured by the image sensor 302 such that the identified faces and/or bodies are substantially centered within the current framing of the image.
- the framing feedback logic 221 also executes gesture recognition logic 385 that allows a user to control the camera device 300 via user input that can be detected while the user is in or near a target scene to which the lens system 301 is pointed. For example, by performing a gesture with a hand and/or arm, a user can adjust a zoom level, focus point, flash controls, and other aspects related to capture of an image by the camera device 300 as can be appreciated. Gestures that a user in a target scene can perform can be linked to actions that can be taken by the gesture recognition logic 385 to perform an action and/or alter characteristics of the camera device 300 . The gesture recognition logic 385 can also identify bodies appearing in a target scene to which the lens system 301 is aimed and track the corresponding body parts appearing in the scene.
- the gesture recognition logic 385 can employ time-of-flight camera methods to determine and track a position of a hand and/or arm within a target scene.
- the gesture recognition logic 385 performs the action in the camera device 300 .
- a user in a target scene can perform a gesture identifying a focus point within the current framing of an image.
- a user in the target scene can point at a specific area in the target scene for a predetermined amount of time.
- the gesture recognition logic 385 can cause the controller 308 to identify the area as the focus point in the image, which in turn causes the lens system 301 to focus on the area.
- the user can perform a gesture linked to initiating image or video capture (i.e., a “capture trigger” gesture).
- a gesture can comprise a user simulating pulling down on an object, pressing a button, a “thumbs up” hand signal, or any other gesture motion that can be recognized as can be appreciated.
- Such a gesture can also comprise recognizing when the subjects in the target scene have placed their arms by their side or in a still or ready position. Upon recognizing such a gesture, the gesture recognition logic 385 can, in some embodiments, initiate capture of an image after a predetermined delay or countdown. The gesture recognition logic 385 can also employ other portions of the framing feedback logic 321 to emit feedback to the user so that the user can be aware of the moment that the image and/or video will be captured by the camera device 300 .
- the user can also communicate a desired framing of the image via gestures.
- the user can instruct the gesture recognition logic 385 to frame the image such that the entire body of the user is captured by, for example, pointing at the ground beneath the feet of the user with one hand, indicating a desire for the user to have his or her feet within the framing of the image, and pointing to the head of the user with another hand.
- a user can, in a group shot, identify the subjects in a target scene that the user wishes to be present in a resultant framing of an image.
- the gesture recognition logic 385 can cause the face-object-body framing logic 374 to adjust cropping of a resultant image and/or a zoom level of a lens system 301 to appropriately frame the image and/or video as requested by the user.
- a user can also affect flash settings by performing a gesture that the gesture recognition logic 385 can recognize. For example, a user can toggle a flash between an off-setting, an on-setting, and an automatic setting by performing a gesture linked to modifying flash settings, which the gesture recognition logic 385 can recognize and modify accordingly.
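The gesture-to-action linkage described in the preceding passages can be sketched as a small dispatch routine. This is an illustrative Python sketch only; the gesture names, the `CameraState` fields, and the three-way flash cycle are assumptions rather than details taken from the disclosure.

```python
# Hypothetical gesture-to-action dispatch; the gesture labels and state
# fields are illustrative, not drawn from the patent text.
FLASH_CYCLE = ["off", "on", "auto"]

class CameraState:
    def __init__(self):
        self.flash = "auto"
        self.capture_requested = False
        self.focus_point = None

def handle_gesture(state, gesture, payload=None):
    """Apply a recognized gesture to the camera state."""
    if gesture == "toggle_flash":
        # Cycle the flash between off, on, and automatic settings.
        i = FLASH_CYCLE.index(state.flash)
        state.flash = FLASH_CYCLE[(i + 1) % len(FLASH_CYCLE)]
    elif gesture == "capture_trigger":
        # E.g., a "thumbs up" or simulated button press.
        state.capture_requested = True
    elif gesture == "point_at_area":
        # A point held for a predetermined time selects a focus point.
        state.focus_point = payload  # (x, y) in frame coordinates
    return state
```

Each recognized gesture maps to one action or setting change, mirroring how the gesture recognition logic 385 is described as linking gestures to actions within the camera device 300.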
- the controller 308 can also execute voice recognition logic 386 , which allows the user to speak commands that can be linked to various actions as with the gesture recognition logic 385 .
- the voice recognition logic 386 can allow a user to initiate capture of an image or video, specify whether the camera device 104 should capture one of image or video as well as modify framing, focus point, and flash settings as described with reference to the gesture recognition logic 385 .
- FIG. 4A illustrates one example of a camera device 300 operating in a mode that provides framing feedback to a subject regarding a current framing of an image.
- the camera device 300 is configured with a projection-illumination subsystem 323 that includes a light source 327 as well as a light source shroud 328 configured to modify a viewing angle 401 from which the light source is viewable.
- the framing focal region logic 372 identifies a current framing of a target scene to which the lens system 301 is aimed, or an amount of a target scene in the current framing of an image.
- the focal region logic 372 specifies a viewing angle 401 at which the light source 327 should be visible such that it is visible from a position within the current framing and not visible from a position outside the current framing. Accordingly, the focal region logic 372 can then adjust the light source shroud 328 such that the viewing angle from which the light source 327 is visible corresponds to the current framing of the image.
- FIG. 4B continues the example of FIG. 4A and illustrates how the focal region logic 372 can adjust the viewing angle 501 at which the light source 327 can be viewed.
- the focal region logic 372 can determine a zoom level associated with an adjustable focal length lens system 301 to determine a current framing of the image. Based upon the current framing of the image, the light source shroud 328 is configured to increase the viewing angle 501 when the zoom level of the lens system 301 is decreased (i.e., “zooming out”) and to decrease the viewing angle 501 when the zoom level of the lens system 301 is increased (i.e., “zooming in”).
- FIG. 4C illustrates another example of a camera device 300 according to an embodiment of the disclosure.
- the camera device 300 employs a fixed slit barrier 329 that includes a light source 327 such as an LED array 402 that is positioned behind a fixed barrier 404 .
- the LED array 402 comprises a linear array of a plurality of LEDs positioned behind the fixed barrier 404 relative to a target scene such that the fixed barrier 404 limits the visibility of the LED array 402 from certain viewing angles.
- the fixed barrier 404 provides a slit through which light emanating from the LED array 402 can pass.
- the focal region logic 372 can activate a certain number of LEDs in the LED array 402 that causes light to emanate through the fixed barrier 404 such that the light is visible at a viewing angle that corresponds to the field of view of the lens system 301 .
- the focal region logic 372 can activate an appropriate number of LEDs from the LED array 402 that are laterally offset from the slit in the fixed barrier 404 such that they are visible by a subject 461 a at a viewing angle in the target scene that corresponds to the current field of view of the lens system 301 .
- as the zoom level of the lens system 301 changes, it follows that the field of view or current framing correspondingly changes.
- the focal region logic 372 can activate and/or disable LEDs in the LED array 402 as the zoom level of the lens system 301 changes such that light from the array is visible by a subject 461 a within the field of view of the lens system 301 but not visible by a subject 461 b positioned outside the field of view.
- the fixed slit barrier 329 is also configured to allow light emanating from the LED array 402 to be visible at a viewing angle that is slightly less than a current field of view of the lens system 301 .
- the fixed slit barrier 329 introduces a field of view reduction 481 a , 481 b on opposing sides of the field of view such that the viewing angle of the LED array 402 is less than an angle of the field of view of the lens system 301 .
- This field of view reduction 481 a , 481 b can be chosen such that it is sized similarly to an average lateral distance between a human subject's eyes and shoulders.
- the field of view reduction 481 a , 481 b is introduced so that a user (e.g., subject 461 c ) cannot see light emanating from the LED array 402 when a portion of the subject 461 c that is laterally offset from the subject's 461 c eyes is outside the field of view of the lens system 301 , even though the subject's 461 c face, and therefore eyes, may be within the field of view. Therefore, the LED array 402 is configured to activate LEDs such that light emanating through the fixed barrier 404 is generally visible to the subject 461 c when the subject's entire body is within the field of view of the lens system 301 .
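The slit-barrier geometry above can be sketched numerically: an LED laterally offset behind the slit is visible from a correspondingly offset direction in front of the camera, so selecting which LEDs to light sets the overall viewing angle. This is an illustrative Python sketch under simplified pinhole-slit assumptions; the parameter names are hypothetical.

```python
import math

def leds_to_activate(led_offsets_mm, slit_depth_mm, fov_deg, reduction_deg=0.0):
    """Select which LEDs in the linear array should be lit.

    An LED laterally offset by x behind a slit at depth d is visible
    from a direction of roughly atan(x / d) off the lens axis, so
    lighting LEDs out to a given offset sets the viewing angle.
    reduction_deg narrows the visible cone on each side, modeling the
    field of view reduction 481 a / 481 b.
    """
    half_angle = math.radians(max(0.0, fov_deg / 2.0 - reduction_deg))
    max_offset = slit_depth_mm * math.tan(half_angle)
    return [x for x in led_offsets_mm if abs(x) <= max_offset]
```

As the zoom level narrows the field of view, fewer laterally offset LEDs qualify, which is consistent with the focal region logic enabling and disabling LEDs as the zoom level changes.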
- FIG. 4D illustrates an example of a camera device 300 employing an adaptable slit barrier 330 to emanate light from a light source 327 such as a fixed LED source 493 such that the light is visible within the field of view of the lens system 301 .
- the adaptable slit barrier 330 employs an adaptable barrier 491 that can adjust an aperture through which light from the fixed LED source 493 passes, thereby adjusting the viewing angle of the light from the target scene.
- in the example of FIG. 4D , the adaptable slit barrier 330 can emanate light such that there is a field of view reduction 481 a , 481 b so that the viewing angle of the light emanating from the adaptable slit barrier 330 is less than the field of view of the lens system 301 .
- the adaptable slit barrier 330 can employ techniques similar to those described in U.S. patent application Ser. No. 12/845,409, entitled “Display with Adaptable Parallax Barrier,” filed Jul. 28, 2010 (the '409 application), which is hereby incorporated herein by reference in its entirety.
- the adaptable barrier 491 can comprise a linear barrier element array as disclosed in the '409 application comprising a plurality of barrier elements, each of which is selectable to be substantially opaque or transparent. Accordingly, as the field of view of the lens system 301 changes, the focal region logic 372 can select some of the barrier elements in the linear barrier element array that are laterally offset from the center of the barrier 491 to be transparent.
- the adaptable slit barrier 330 can allow light from the fixed LED source 493 to emanate through the adaptable barrier 491 and to the target scene such that the fixed LED source 493 is visible at a viewing angle corresponding to the field of view of the lens system 301 .
- either of these devices can also include an LED array as well as a barrier oriented in two dimensions so that the viewing angle of light emanating from the LED source can be controlled in both the horizontal and vertical directions.
- the fixed slit barrier 329 and/or adaptable slit barrier 330 can limit the viewing angle of the LED source in the horizontal and vertical directions relative to the target scene.
- FIG. 5 illustrates an example of a camera device 300 providing framing feedback to a subject 501 in a target scene to which the lens system 301 of the camera device 300 is aimed.
- the focal region logic 372 executed by the controller 308 causes the projection-illumination subsystem 323 to project one or more lines 502 a , 502 b with a holographic optical element 325 , laser system 326 , MEMS pico-projector 324 and/or any other mechanism that can project visible lines designating the focal region and/or field of view corresponding to the current framing of the image.
- These lines 502 a , 502 b correspond to the edge of the current framing of an image and are visible from the target scene so that a subject 501 can see whether he or she is within the current framing as well as a location within the current framing.
- the focal region logic 372 can also identify a suggested spot for the subject 501 to position himself within the field of view of the lens system 301 of the camera device 300 .
- the focal region logic 372 can then cause the projection-illumination subsystem 323 to emit an indicator that is cast on the ground in the target scene that provides a suggested position for the subject 501 based on the framing conditions within the field of view of the lens system 301 .
- the suggested position can be based on the size of the subject 501 within the current framing, lighting conditions, background elements in the target scene, or any other framing conditions as can be appreciated.
- FIG. 6 illustrates one way in which the gesture recognition logic 385 executed by the controller 308 can allow the camera device 300 to interpret gestures performed by a human subject 501 visible in the target scene to modify framing conditions or other attributes associated with the camera device 300 .
- the subject 501 can select framing conditions associated with an image captured by the camera device 300 .
- the subject 501 can point with an index finger to indicate the top of the image frame as well as point to the ground to indicate that the subject 501 desires that the image frame extend to the ground beneath the subject 501 .
- the subject 501 can also perform various other gestures that can be recognized by the gesture recognition logic 385 and linked to certain actions within the camera device. For example, as noted above, the subject 501 can perform a gesture identifying a focus point in the current framing of the image, and the gesture recognition logic 385 can adjust the focus point of the lens system 301 in response. The subject 501 can perform another gesture that can be linked with changing the depth of field setting of the camera device 300 , which the gesture recognition logic 385 can identify and act on. The subject 501 can also perform one or more gestures that select various modes in which the camera device 300 can be placed. For example, a gesture can be linked to selection of a still image capture mode while another gesture can be linked to selection of a video capture mode.
- the subject 501 can select a scene mode associated with the camera device 300 , such as a landscape scene mode, a portrait scene mode, a fast-motion video capture mode, a high quality video capture mode, and/or any other mode that can be associated with the camera device 300 .
- the gesture recognition logic 385 also recognizes a gesture that can be linked to selection of an aspect ratio associated with image or video capture.
- the subject 501 can perform a gesture that also selects what or where the subject 501 would like to capture in an image or video captured by the camera device 300 .
- the camera device 300 can be configured with a wide, high resolution field of view of the target scene, and the gesture recognition logic 385 can allow the subject 501 to perform a gesture that selects a subset of the field of view as the current framing of the image.
- the gesture recognition logic 385 can modify the current framing of the image without altering the zoom level associated with the lens system 301 .
- the gesture recognition logic 385 can quickly modify framing conditions without having to modify a zoom level of the lens system 301 .
- the image sensor 302 can comprise an array of imager elements or image sensors, and the gesture recognition logic 385 can modify framing conditions by selecting a subset of the array of imager elements.
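The idea of reframing by selecting a subset of imager elements, without touching the optical zoom, can be sketched as a clamped crop computation. This is an illustrative Python sketch; the function name and parameters are assumptions rather than details from the disclosure.

```python
def crop_region(sensor_w, sensor_h, center, frame_w, frame_h):
    """Return the (left, top, right, bottom) subset of imager elements
    that realizes a gesture-selected framing without changing optical zoom.

    The requested window is clamped so that it always stays on the
    sensor array, even when the chosen center is near an edge.
    """
    cx, cy = center
    left = min(max(cx - frame_w // 2, 0), sensor_w - frame_w)
    top = min(max(cy - frame_h // 2, 0), sensor_h - frame_h)
    return (left, top, left + frame_w, top + frame_h)
```

Because only a readout window changes, such reframing can be applied quickly, consistent with the point that framing conditions can be modified without adjusting the zoom level of the lens system 301.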
- the gesture recognition logic 385 can allow other adjustments to be made via gestures performed by a subject 501 .
- For example, optical zoom adjustments, mechanical panning adjustments (e.g., when the camera device 300 is attached to a motorized tripod), flash settings (e.g., on, off, automatic, etc.), and other camera device 300 settings as can be appreciated can be linked to a gesture performed by the subject 501 , which the gesture recognition logic 385 can recognize and, in turn, cause the requested adjustment to be made.
- the gesture recognition logic 385 can also allow the subject 501 to perform gestures that alter framing feedback provided by the framing feedback logic 321 .
- the display projection logic 376 causes the MEMS pico-projector 324 of the camera device 300 to project a representation of the current framing of the image or video onto the ground within or near the target scene.
- the gesture recognition logic 385 also recognizes gestures that allow the user to modify where the projection appears. For example, the user can perform a gesture that causes the projection to appear on a background, on a surface behind the camera device 300 , or on any other surface within or near the target scene.
- the gesture recognition logic 385 can also recognize a gesture performed by the subject 501 that is linked to changing the size and/or orientation of the projection.
- FIG. 6 also illustrates how the display projection logic 376 can provide framing feedback to a subject 501 in a target scene to which the lens system 301 of the camera device 300 is aimed.
- the display projection logic 376 can cause the MEMS pico-projector 324 to generate a projection 621 of a current framing of an image, or the current field of view of the camera device, on a surface outside the housing of the camera device 300 that is visible from a position in the target scene by the subject 501 .
- the projection 621 is projected towards a ground level near the target scene such that it is visible by the subject 501 .
- the projection 621 generated by the MEMS pico-projector 324 can also include a textual and/or graphics overlay with additional information such as an indicator showing whether image and/or video capture is underway, textual information regarding camera device 300 settings (e.g., aperture, shutter speed, scene mode, etc.), whether there is excessive motion within the camera device 300 hindering image or video capture, or any other information that might be relevant to the subject 501 that is related to framing conditions.
- FIG. 7 illustrates how the projection-illumination subsystem 323 can, via the MEMS pico-projector 324 , project the current framing of an image in various directions and on various surfaces such that it is visible from a position in the target scene.
- the camera device 300 is equipped with an additional MEMS pico-projector 324 that is positioned on an opposing side of the camera device 300 housing. This allows the projection 621 to be cast on any number of surfaces in any number of directions.
- the gesture recognition logic 385 can allow the subject 501 to perform a gesture to alter the positioning of the projection 621 .
- the subject 501 has performed a gesture to cause the gesture recognition logic 385 to request that the display projection logic 376 change a surface upon which the projection 621 is cast.
- the display projection logic 376 can also adjust and/or introduce skew into the projection 621 generated by the MEMS pico-projector 324 in the event that a surface upon which the projection 621 is cast is not normal to the camera device 300 , thereby yielding a proportional rectangular image projected on the surface.
- Such an adjustment can be manually directed with user inputs via an input device integrated within the camera device 300 or via gestures captured by the camera device; electronically, via analysis of a projection which at least in part falls within the field of view of the image sensor and/or a second imager; and/or via triangulation based on infrared emitters and detectors.
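The skew adjustment above can be illustrated with a simplified one-dimensional keystone model: when the projection surface is tilted away from the projector axis, the far edge of the frame lands on a more distant part of the surface and appears wider, so the image can be pre-scaled edge by edge to compensate. This Python sketch is an assumption-laden illustration, not the disclosed method; the formula models only throw-distance growth along the edge rays.

```python
import math

def keystone_prescale(tilt_deg, half_fov_deg):
    """Ratio by which the near edge of the projected frame should be
    pre-scaled relative to the far edge so the image lands roughly
    rectangular on a surface tilted tilt_deg away from being normal
    to the projector axis (simplified 1-D keystone model).
    """
    t = math.radians(tilt_deg)
    h = math.radians(half_fov_deg)
    # Throw distance along each edge ray grows as the ray meets the
    # tilted surface at a shallower angle; scale inversely to distance.
    d_near = math.cos(h) / math.cos(h - t)
    d_far = math.cos(h) / math.cos(h + t)
    return d_near / d_far
```

With no tilt the ratio is 1 (no correction); with positive tilt the near edge is pre-scaled smaller than the far edge, counteracting the trapezoidal stretch.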
- FIG. 7 also illustrates how a gesture performed by the subject 501 can cause the gesture recognition logic 385 to alter the current framing of the image as directed by the subject 501 .
- the subject 501 performs a gesture indicating how a zoom level of the lens system 301 can be changed or how an image can be cropped by the controller 308 .
- the framing feedback logic 321 can direct the projection-illumination subsystem 323 to generate a frustum of light 701 that is visible from a position within the target scene.
- When the frustum of light 701 is generated by a holographic optical element and/or a laser system, it can be configured so that it is substantially invisible from a position outside the target scene, with the exception of a background on which the light falls and assuming there is minimal debris or particulate matter in the air surrounding the target scene. In this way, a subject 501 can know if he or she is in the target scene based upon whether he or she can see the frustum of light 701 and/or whether he or she is within the frustum of light 701 .
- Referring to FIG. 9 , shown is a flowchart that provides one example of the operation of a portion of the video capture logic 315 according to various embodiments. It is understood that the flowchart of FIG. 9 provides merely an example of the many different types of functional arrangements that may be employed to implement the operation of the portion of the video capture logic 315 as described herein. As an alternative, the flowchart of FIG. 9 may be viewed as depicting an example of steps of a method implemented in a camera device 104 according to one or more embodiments.
- the video capture logic 315 initiates video capture according to a requested frame rate.
- the video capture logic 315 can determine, via one or more motion sensors 313 , a level of motion, movement and/or vibration of the camera device 300 .
- the video capture logic 315 can determine whether the level of movement of the camera device 300 exceeds a threshold. As noted above, such a threshold can be a threshold that is relative to movement during capture of a current video or an absolute threshold.
- the video capture logic 315 can skip capture of a video frame if the movement level exceeds the threshold.
- the video capture logic can determine whether capture of a video frame should be forced to comply with a requested frame rate, even if movement levels of the camera device 300 exceed the threshold.
- the video frame can be captured.
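The frame-skipping flow of FIG. 9 can be sketched as a simple capture loop. This is an illustrative Python sketch; the per-frame motion values, the threshold, and the use of a maximum consecutive-skip count to model forced capture are assumptions layered on the described behavior.

```python
def capture_video(frames, motion_levels, threshold, max_consecutive_skips):
    """Sketch of the FIG. 9 flow: skip a frame when measured motion
    exceeds the threshold, but force capture once too many frames in
    a row have been skipped, to honor the requested frame rate.
    """
    captured, skipped = [], 0
    for frame, motion in zip(frames, motion_levels):
        if motion > threshold and skipped < max_consecutive_skips:
            skipped += 1          # too much shake: drop this frame
            continue
        captured.append(frame)    # capture (possibly forced)
        skipped = 0
    return captured
```

The forced-capture branch corresponds to the determination that capture of a video frame should proceed to comply with a requested frame rate even when movement levels exceed the threshold.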
- Embodiments of the present disclosure can be implemented in various devices, for example, having a processor, memory, and image capture hardware.
- the logic described herein can be executable by one or more processors integrated with a device.
- an application executed in a computing device such as a mobile device, can invoke API's that provide the logic described herein as well as facilitate interaction with image capture hardware.
- Where any component discussed herein is implemented in the form of software, any one of a number of programming languages may be employed such as, for example, processor specific assembler languages, C, C++, C#, Objective C, Java, Javascript, Perl, PHP, Visual Basic, Python, Ruby, Delphi, Flash, or other programming languages.
- The term “executable” means a program file that is in a form that can ultimately be run by a processor.
- executable programs may be, for example, a compiled program that can be translated into machine code in a format that can be loaded into a random access portion of memory and run by a processor, source code that may be expressed in proper format such as object code that is capable of being loaded into a random access portion of the memory and executed by the processor, or source code that may be interpreted by another executable program to generate instructions in a random access portion of the memory to be executed by the processor, etc.
- An executable program may be stored in any portion or component of the memory including, for example, random access memory (RAM), read-only memory (ROM), hard drive, solid-state drive, USB flash drive, memory card, optical disc such as compact disc (CD) or digital versatile disc (DVD), floppy disk, magnetic tape, or other memory components.
- each block may represent a module, segment, or portion of code that comprises program instructions to implement the specified logical function(s).
- the program instructions may be embodied in the form of source code that comprises human-readable statements written in a programming language or machine code that comprises numerical instructions recognizable by a suitable execution system such as a processor in a computer system or other system.
- the machine code may be converted from the source code, etc.
- each block may represent a circuit or a number of interconnected circuits to implement the specified logical function(s).
- FIG. 9 shows a specific order of execution, it is understood that the order of execution may differ from that which is depicted. For example, the order of execution of two or more blocks may be scrambled relative to the order shown. Also, two or more blocks shown in succession in FIG. 9 may be executed concurrently or with partial concurrence. Further, in some embodiments, one or more of the blocks shown in FIG. 9 may be skipped or omitted. In addition, any number of counters, state variables, warning semaphores, or messages might be added to the logical flow described herein, for purposes of enhanced utility, accounting, performance measurement, or providing troubleshooting aids, etc. It is understood that all such variations are within the scope of the present disclosure.
- any logic or application described herein that comprises software or code, such as the framing feedback logic 321 and/or the video capture logic 315 can be embodied in any non-transitory computer-readable medium for use by or in connection with an instruction execution system such as, for example, a processor in a computer device or other system.
- the logic may comprise, for example, statements including instructions and declarations that can be fetched from the computer-readable medium and executed by the instruction execution system.
- a “computer-readable medium” can be any medium that can contain, store, or maintain the logic or application described herein for use by or in connection with the instruction execution system.
- the computer-readable medium can comprise any one of many physical media such as, for example, magnetic, optical, or semiconductor media.
- examples of a suitable computer-readable medium include, but are not limited to, magnetic tapes, magnetic floppy diskettes, magnetic hard drives, memory cards, solid-state drives, USB flash drives, or optical discs.
- the computer-readable medium may be a random access memory (RAM) including, for example, static random access memory (SRAM) and dynamic random access memory (DRAM), or magnetic random access memory (MRAM).
- the computer-readable medium may be a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), or other type of memory device.
Description
- This application claims priority to co-pending U.S. provisional application entitled, “Image Capture Device Systems and Methods,” having Ser. No. 61/509,747, filed Jul. 20, 2011, which is entirely incorporated herein by reference.
- Image capture devices (e.g., cameras, camera devices, etc.) can be employed by users to capture one or more images and/or video of various target scenes. Often, a user operating an image capture device may wish to appear in a target scene. In this and other scenarios, when the image capture device is capturing an image in an autonomous and/or timed mode, there is little or no user feedback regarding framing of the image, particularly once the user is positioned within the target scene. Accordingly, the framing conditions of the captured imagery can be less than ideal. A user and/or subject in an image may appear off center, a portion of the user may be outside of the current or ideal framing of the image, or other occurrences may negatively impact framing of the image. Additionally, in an image capture device configured to capture video (e.g., multiple image frames stitched together to form a video), vibration and/or movement of the image capture device can result in the captured video becoming blurred.
- Many aspects of the invention can be better understood with reference to the following drawings. The components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present invention. Moreover, in the drawings, like reference numerals designate corresponding parts throughout the several views.
- FIGS. 1A-1B are drawings of a camera device according to various embodiments of the disclosure.
- FIG. 2 is a drawing of an alternative camera device according to various embodiments of the disclosure.
- FIG. 3 is a drawing of various components of a camera device of FIGS. 1A-1B and FIG. 2 according to various embodiments of the disclosure.
- FIGS. 4A-4D are drawings of a camera device according to embodiments of the disclosure that provide framing feedback.
- FIGS. 5-8 are drawings of a camera device providing framing feedback according to various embodiments of the disclosure.
- FIG. 9 is a flowchart illustrating one example of video capture logic executed in a camera device illustrated in FIGS. 1A-1B and FIG. 2 according to various embodiments of the disclosure.
- Embodiments of the present disclosure relate to systems and methods that can be executed in an image capture device or camera device (e.g., still image capture devices, video cameras, still and video multi-function camera devices, etc.). More specifically, embodiments of the disclosure are directed to providing feedback, or a visual frame indicator, to a subject in a target scene indicating how the camera device is aimed and how the target scene is framed, so that the subject can use this feedback regarding framing conditions and characteristics of one or more images that the camera device is capturing to improve the framing of the target scene. Framing conditions can include, but are not limited to, a location within an image along a vertical and horizontal axis (i.e., two-dimensional framing conditions) as well as a depth of field of a subject in the image (i.e., a third dimension). Embodiments of the disclosure are also directed to systems and methods in a camera device that can reduce blur in a video captured by the camera device by employing vibration and/or motion sensing capabilities that can be integrated within a camera device.
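The blur-reduction approach outlined above, in which a motion sensor gates frame capture, can be illustrated with a short sketch. The function below, its arguments, and the threshold value are illustrative assumptions for explanation only, not code from the disclosure.

```python
def capture_motion_gated(motion_samples, frames, threshold):
    """Keep only the frames whose motion reading is below the threshold.

    motion_samples: per-frame motion magnitudes from a motion sensor.
    frames: candidate frames read from the image sensor.
    threshold: motion level above which a frame is skipped.
    """
    captured = []
    for motion, frame in zip(motion_samples, frames):
        if motion < threshold:
            # Movement is low enough; keep the frame.
            captured.append(frame)
        # Otherwise the frame is skipped to avoid motion blur.
    return captured
```

In a real device the skipped frames would be made up elsewhere in the time period so that a requested frame rate is still met, as discussed later in the disclosure.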
- A camera device can include or be incorporated within a camera, video camera, a mobile device with an integrated camera device, a set-top box, game unit, gaming console, web camera, wireless or wired access point or router, laptop computer, modem, tablet computer, or any other mobile or stationary device suitable for capturing imagery and/or video as can be appreciated. In some embodiments, a camera device according to an embodiment of the disclosure can be integrated within a device such as a smartphone, tablet computing system, laptop computer, desktop computer, or any other computing device that has the capability to receive and/or capture imagery via image capture hardware.
- Accordingly, camera device hardware can include components such as lenses, image sensors or imagers (e.g., charge-coupled devices, CMOS image sensors, etc.), processor(s), image signal processor(s) (e.g., digital signal processor(s)), a main processor, memory, mass storage, or any other hardware, processing circuitry, or software components that can facilitate capture of imagery and/or video. In some embodiments, a digital signal processor can be incorporated as a part of a main processor in a camera device module that is in turn incorporated into a device having its own processor, memory, and other components.
- A camera device according to an embodiment of the disclosure can provide a user interface via a display that is integrated into the camera device and/or housed independently thereof. The display can be integrated with a mobile device, such as a smartphone and/or tablet computing device, and can include a touchscreen input device (e.g., a capacitive touchscreen, etc.) with which a user may interact with the user interface that is presented thereon. The camera device hardware can also include one or more buttons, dials, toggles, switches, or other input devices with which the user can interact with software or firmware executed in the camera device.
- Referring now to the drawings,
FIGS. 1A-1B show a mobile device 102 that can comprise and/or incorporate a camera device according to various embodiments of the disclosure. The mobile device 102 may comprise, for example, a processor-based system, such as a computer system. Such a computer system may be embodied in the form of a desktop computer, a laptop computer, a personal digital assistant, a mobile device (e.g., cellular telephone, smart phone, etc.), a tablet computing system, a set-top box, a music player, or other devices with like capability. The mobile device can include, for example, a camera device 104, which can further include a lens system 108 as well as other hardware components that can be integrated with the device to facilitate image capture. The mobile device 102 can also include a display device 141 upon which various content and other user interfaces may be rendered. The mobile device 102 can also include one or more input devices with which a user can interact with a user interface rendered on the display device 141. For example, the mobile device 102 can include or be in communication with a mouse, touch input device (e.g., capacitive and/or resistive touchscreen incorporated with the display device 141), keyboard, or other input devices. - The
mobile device 102 may be configured to execute various applications, such as a camera application that can interact with an image capture module that includes various hardware and/or software components that facilitate capture and/or storage of images and/or video. In one embodiment, the camera application can interact with application programming interfaces (APIs) and/or other software libraries and/or drivers that are provided for the purpose of interacting with image capture hardware, such as the lens system and other image capture hardware. The camera application can be a special purpose application, a plug-in or executable library, one or more APIs, image control algorithms, camera device firmware, or other software that can facilitate communication with image capture hardware in communication with the mobile device 102. Accordingly, a camera application according to embodiments of the present disclosure can capture imagery and/or video via the various image capture hardware as well as facilitate storage of the captured imagery and/or video in memory and/or mass storage associated with the mobile device 102. - The
mobile device 102 can also include a visual framing indicator 111 that can provide framing feedback to a user positioned in a target scene to which the lens system 108 is pointed. As described herein, framing feedback can take the form of audible and/or visible indicators or cues that allow a current framing of an image to be determined from a position within or near the target scene to which the lens system 108 is aimed. As described below, the visual framing indicator 111 can take the form of a projection system or projector component (e.g., pico-projector), laser scanner, holographic optical element, LEDs, or any other components that can generate any type of markers, textual information, visual information, and/or video or images that are visible to the user, or visible on a remote surface relative to the camera device 104. Such a surface can include a wall, floor, or any other surface that is within or near a target scene. -
FIG. 2 illustrates an alternative example of a camera device 124 according to an embodiment of the disclosure. Like the camera device 104 in the mobile device 102 of FIGS. 1A-1B, the depicted standalone camera device 124 can also include processing circuitry such as a digital signal processor, memory, and other components that can execute software logic to facilitate the embodiments described herein. The camera device 124 shown in FIG. 2 can also have a visual framing indicator 126 as discussed above with reference to the mobile device 102 of FIGS. 1A-1B to provide feedback regarding framing conditions to a user positioned within or near a target scene. Additionally, the camera device 124 may include a lens system 163 having a fixed focal length or an adjustable focal length (e.g., a zoom lens). -
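For a lens system with a known focal length, the field of view that such framing feedback must approximate follows from the standard pinhole relation FOV = 2*atan(d / (2*f)), where d is a sensor dimension and f is the focal length. The helper below is an illustrative sketch using that textbook relation; the function name and example values are assumptions, not part of the disclosure.

```python
import math

def frustum_angles(sensor_w_mm, sensor_h_mm, focal_length_mm):
    """Horizontal and vertical field-of-view angles of a pinhole lens model.

    Returns the angles in degrees; a longer focal length (zooming in)
    yields a narrower field of view.
    """
    horizontal = 2 * math.degrees(math.atan(sensor_w_mm / (2 * focal_length_mm)))
    vertical = 2 * math.degrees(math.atan(sensor_h_mm / (2 * focal_length_mm)))
    return horizontal, vertical
```

For example, a 36 mm by 24 mm sensor behind an 18 mm lens gives a 90 degree horizontal field of view.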
FIG. 3 illustrates an embodiment of the various image capture components, or one example of a camera device 300 as illustrated in FIGS. 1A-1B and FIG. 2. Although one implementation is shown in FIG. 3 and described herein, a camera device according to an embodiment of the disclosure more generally comprises any device that can provide images and/or video in digital form. The camera device 300 includes a lens system 301 that conveys images of viewed scenes to an image sensor 302. By way of example, the image sensor 302 comprises a charge-coupled device (CCD) or a complementary metal oxide semiconductor (CMOS) sensor that is driven by one or more sensor drivers. The analog image signals captured by the sensor 302 are provided to an analog front end 304 for conversion into binary code that can be processed by a controller 308 or processor. - The
controller 308 executes various types of logic that can be available in program memory 310 accessible to the camera device 300 in order to facilitate the functionality described herein. In other words, the controller 308 can place the camera device 300 into various modes, such as a video capture mode that allows a user to capture video. As described herein, the video capture mode also allows video to be captured while detecting motion, vibration, and/or movement associated with the camera device 300 and/or the current framing of a target scene and skipping the capture of frames when movement is above a threshold. The controller 308 can also place the camera device 300 into a framing feedback mode that allows a user to capture images and/or video of a target scene while providing visual and/or audible framing feedback to a user positioned in or near the target scene. The framing feedback mode also allows a user to perform gestures or provide audible commands that are interpreted by the controller to affect framing of a target scene or initiate other operations associated with the camera device 300. The various modes can be selected by the controller 308 manually (e.g., via user input), automatically, and/or semi-automatically. The various modes can also be executed simultaneously by the controller 308. - Additionally, the
controller 308 interacts with various hardware components associated with the camera device 300, such as image sensors, motion sensors, vibration sensors, accelerometers, user input devices, storage media, display devices, serial communication interface(s), and other hardware components as can be appreciated. The controller 308 also allows the camera device 300 to be operated in an autonomous and/or hands-free mode in which a user can initiate image and/or video capture, adjustment of a field of view of a lens system, and/or any other adjustments to the camera device 300 as discussed herein. Additionally, a camera device 300 according to the present disclosure can also include one or more visual and/or audio projection systems that can emit visible and/or audible feedback to a user. As described herein, these projection systems can include, but are not limited to, one or more lights, light emitting diode(s) (LEDs), lasers, holographic optical elements, video projection systems (e.g., pico-projector, etc.), speakers, or other devices that can facilitate communication with a user. - Accordingly, the
video capture logic 315 in the program memory 310 is executed by the controller 308 to facilitate a video capture mode that can reduce blur or other video artifacts that result from vibration or other movement of the camera device 300. Upon receiving a command to initiate capture of video from a capture trigger 335, the video capture logic 315 can determine a frame rate associated with the video, capture images from the image sensor 302 at a rate consistent with the frame rate, and store the video in a mass storage 341 device accessible to the camera device 300. - The
video capture logic 315 receives motion, movement, and/or vibration data from one or more motion sensors 313 that are integrated within the camera device 300. To this end, the video capture logic 315 can include vibration detection logic 316, zooming detection logic 317, tracking detection logic 318, and/or panning detection logic 319, which can detect movement of the camera device 300 as well as movement of the framing of a target scene that is caused by zooming and/or panning of the lens system 301. The motion sensors 313 can include, but are not limited to, motion detectors, accelerometers, and/or other devices that can detect movement of the camera device 300. The video capture logic 315 skips capture of a frame associated with the video when movement data from the motion sensor(s) 313 indicates that motion of the camera device 300 and/or motion associated with adjustment of a zoom level and/or panning exceeds a threshold. The video capture logic 315 captures the video frame when movement of the camera device 300 is less than the threshold. The video capture logic 315 can adhere to a requested frame rate by producing an output video having a number of frames per time period consistent with the requested frame rate, but where the video frames may not be evenly spaced because frame capture may be initiated only when movement is below the threshold. - The
video capture logic 315 can also adjust the movement threshold so that it is a relative measure of movement during capture of a given video. In this way, the video capture logic 315 can capture video frames over a given time period where the frames are associated with lower levels of movement relative to other portions of the time period. For example, the video capture logic 315 may identify that movement levels associated with the camera device 300 are oscillating. Accordingly, the video capture logic 315 can identify the oscillating pattern and capture video frames when movement levels are at their lowest relative to other portions of a given time period. - In such a scenario, the
video capture logic 315 can also recognize an oscillating pattern in movement from the motion sensor(s) 313 and initiate capture of additional video frames before a peak in movement of the camera device 300 as well as subsequent to the peak so that enough frames are captured according to a requested frame rate. Additionally, because captured frames of the video may be unevenly temporally spaced, the video capture logic 315 can employ video encoding techniques that compensate for such a scenario to reduce the impact on the quality of the resultant captured video. Additionally, the video capture logic 315 can force capture of video frames even if movement of the camera device 300 does not drop below a movement threshold. For example, the video capture logic 315 can force capture of video frames in order to produce an output video possessing an acceptable number of frames per time period according to a requested frame rate. - The
controller 308 also executes framing feedback logic 321 available in the program memory 310 to facilitate a framing feedback mode that provides feedback or a visual framing indicator to a user regarding framing characteristics of an image or video captured (or to be captured) by the camera device 300. The feedback includes feedback that is provided to a human subject in or near a target scene, as the feedback is visible and/or audible from a position within or near the target scene, which may not be a position from which a local display 345 of the camera device 300 is visible. In other words, a user who may be positioned within a target scene to which the lens system 301 is aimed can receive information regarding the current framing of an image, where the information is projected outside of the housing of the camera device 300, and where the information includes framing conditions, zoom levels, lighting conditions, and other framing characteristics. - In one embodiment, the framing
feedback logic 321 can employ a projection-illumination subsystem 323 of the camera device 300 in order to project visual feedback regarding the framing conditions of an image such that it is visible from the target scene. In this way, a subject of an image can determine, by viewing the feedback generated by the camera device 300, whether the framing characteristics of the image to be captured are as the user intended or desires. The projection-illumination subsystem 323 can include projection systems, such as a microelectromechanical systems (MEMS) pico-projector 324, one or more light emitting diodes, laser systems 326, a light source 327, one or more holographic optical elements (HOE) 325, or any other system that can emit a visible light or indicator (e.g., a glowing or fluorescent dot). The projection-illumination subsystem 323 can also include systems that can modify or control the visibility of the various projection systems, such as a fixed slit barrier 329, adaptable slit barrier 330, and/or light source shroud 328. - As one example of execution of the framing
feedback logic 321, the focal region logic 372 determines the current framing or field of view of a target scene as captured by the lens system 301 and causes the laser scanning system 326 to project a frustum of light having a height and width proportional to the image sensor 302 such that the frustum of light illuminates anything that is within the current framing of the image. In other words, the focal region logic 372 causes the projection-illumination subsystem 323 to emit a frustum of light that approximates the field of view of the lens system 301 at any moment in time. In this way, a subject can determine whether he or she is within the current framing of an image by determining whether he or she can see or is within the frustum of light. - In the case of a
camera device 300 having a fixed focal length lens system 301, the HOE 325 is configured to emit a fixed frustum of light that is tuned to approximate the framing of the fixed focal length lens system 301. In the case of an adjustable focal length lens system 301, the MEMS pico-projector 324 and/or laser system 326 that is integrated within the camera device 300 emits a frustum of light and/or a boundary of light that is visible when cast against a background in the target scene that approximates the current framing of the image, or the frame boundary. In this scenario, the focal region logic 372 determines a zoom level associated with the lens system 301 and in turn determines a current framing of the image or field of view of the lens system 301. As another example, the focal region logic 372 can calculate a zoom level from a user input device 331, such as a zoom input 333 device that allows the user to adjust the focal length of the lens system 301. Accordingly, the focal region logic 372 causes the projection-illumination subsystem 323 to emit a frustum of light corresponding to the current framing of the image. - Additionally, the
focal region logic 372 can also cause the projection-illumination subsystem 323 to emit ground lines or outlines that are cast against the ground and that correspond to the current framing, a rectangular area corresponding to the current framing, and/or any other visible indication that approximates the current framing of an image to be captured by the camera device 300. - The framing
feedback logic 321 can also employ the projection-illumination subsystem 323 to project a representation of a viewfinder or display of the camera device 300 such that it is visible from a position in the target scene and outside the housing of the camera device 300. In this way, a user in the target scene can observe the current framing of the image. The display projection logic 376 is executed by the framing feedback logic 321 to cause the projection-illumination subsystem 323 to project the display or viewfinder representation on the ground at a position between the camera device 300 and the target scene, on a background of the target scene, behind the camera device 300 such that the camera device 300 is positioned between the projection and the target scene, and/or at any other surface or position such that it is viewable from the target scene. For example, the MEMS pico-projector 324 integrated within the camera device 300 can project a representation of the target scene as viewed by the lens system 301 such that it is visible to a subject within the target scene. - The framing feedback logic 321 can also provide framing feedback or a visual frame indicator in the form of a light source, or other visible source, emitted from the
camera device 300. The focal region logic 372 executed by the framing feedback logic 321 employs various mechanisms to control whether a light source 327 (e.g., light emitting diode(s), laser(s), glowing or fluorescent component, etc.) emits light (or reflected light) that is visible from various positions within or around the target scene. Accordingly, the focal region logic 372 adjusts visibility of such a light source 327 such that it is visible from a position within the current framing of the image. The focal region logic 372 disables visibility of the light source 327 from a position that is outside the current framing of the image. As one example, the projection-illumination subsystem 323 employs a light source shroud 328, a fixed slit barrier 329, and/or adjustable slit barrier 330 that limits the viewing angles from which the light source 327 is visible so that a subject can look at the camera device 300 and determine whether he or she is in the current framing or field of view of the lens system 301 based upon whether he or she can see the light source. - The
focal region logic 372 can also determine a field of view of the lens system 301 based upon a zoom level of the lens system 301 and adjust visibility using these various mechanisms as the zoom level of the lens system 301 is adjusted. For example, the focal region logic 372 can adjust visibility of the light source 327 as the zoom level is adjusted so that a viewing angle from which the light source 327 is visible from the target scene increases as the zoom level decreases, and where the viewing angle decreases as the zoom level increases. It should also be appreciated that embodiments of the disclosure can be configured such that the light source 327 is visible from a position outside the target scene and adjusted such that the light source 327 is not visible from a position inside the target scene. - In one embodiment, the
focal region logic 372 can position a visible indicator of the current framing of an image by generating ground lines emitted by the laser system 326 and/or the MEMS pico-projector 324 such that they are initially visible within the frame. For example, the focal region logic 372 can cause the laser scanning system 326 to emit lines corresponding to left and right ground lines such that they are visible on the ground via the lens system 301. Upon detecting the existence of the visible ground lines, the focal region logic 372 can increase the angle of the left and right ground lines relative to one another until the ground lines are subsequently not visible, which translates into the ground lines being positioned just outside the current framing of the image. Accordingly, a subject can position himself or herself within the ground lines upon their final positioning by the focal region logic 372 and know that he or she is within the current framing of the image. - In the context of the present disclosure, the MEMS pico-
projector 324, adaptable light source 327, HOE 325, and laser scanning system 326 can provide framing feedback that is visible outside a housing of the camera device 300 (e.g., light projected on the ground, background, or any other surface outside the camera device 300 housing that is visible from the target scene). In contrast, the light source 327 operates in conjunction with the light source shroud 328, the fixed slit barrier 329, and/or adaptable slit barrier 330 within the camera device 300 housing to enable or disable visibility of the light source 327 from various viewing angles outside of the housing. - The framing feedback logic 321 can also execute audible feedback logic 375 to identify subjects in a target scene and cause a speaker system integrated within the
camera device 300 to emit audible feedback regarding the current framing of an image. The audible feedback can provide a subject in a target scene with information regarding the current framing of an image. For example, the audible feedback logic 375 can recognize faces, bodies, or other objects in a target scene and determine whether the framing of the image can be improved. For example, the audible feedback logic 375 can determine whether faces, bodies, and/or objects are centered within the current framing of the image and generate audible feedback via speech synthesis logic directing a user how faces, bodies, and/or objects should be moved within the current framing of the image. - In one embodiment, among others, the audible feedback logic 375 can employ speech synthesis logic to instruct a subject within a current framing of the image a direction in which to move to appear centered within the current framing. As an additional example, the audible feedback logic 375 can also emit an audible message informing a subject in the target scene of when an image is to be captured. For example, the audible feedback logic 375 employs
speech synthesis logic 388 to generate a voice countdown so that a user is aware of when an image and/or video is going to be captured by the camera device 300. - The framing feedback logic 321 also executes automated framing and cropping logic 377 that facilitates automated framing and cropping of a target scene based upon identification of faces, bodies, and/or objects within the target scene. For example, the
camera device 300 can be placed in a mode by the user that includes automatic image capture of a target scene, perhaps a target scene in which the user is positioned. Accordingly, the face-body-object detection logic 374 executed by the framing feedback logic 321 identifies the presence of one or more faces and/or bodies in the image, and the automated framing and cropping logic 377 adjusts framing of an image captured by the image sensor 302 such that the identified faces and/or bodies are substantially centered within the current framing of the image. - The framing feedback logic 321 also executes
gesture recognition logic 385 that allows a user to control the camera device 300 via user input that can be detected while the user is in or near a target scene to which the lens system 301 is pointed. For example, by performing a gesture with a hand and/or arm, a user can adjust a zoom level, focus point, flash controls, and other aspects related to capture of an image by the camera device 300 as can be appreciated. Gestures that a user in a target scene can perform can be linked to actions that can be taken by the gesture recognition logic 385 to perform an action and/or alter characteristics of the camera device 300. The gesture recognition logic 385 can also identify bodies appearing in a target scene to which the lens system 301 is aimed and track the corresponding body parts appearing in the scene. In one embodiment, the gesture recognition logic 385 can employ time-of-flight camera methods to determine and track a position of a hand and/or arm within a target scene. When a gesture linked to an action is recognized by the gesture recognition logic 385, the gesture recognition logic 385 performs the action in the camera device 300. - As one example, a user in a target scene can perform a gesture identifying a focus point within the current framing of an image. For example, a user in the target scene can point at a specific area in the target scene for a predetermined amount of time. Accordingly, the
gesture recognition logic 385 can cause the controller 308 to identify the area as the focus point in the image, which in turn causes the lens system 301 to focus on the area. As another example, the user can perform a gesture linked to initiating image or video capture (i.e., a “capture trigger” gesture). Such a gesture can comprise a user simulating pulling down on an object, pressing a button, a “thumbs up” hand signal, or any other gesture motion that can be recognized as can be appreciated. Such a gesture can also comprise recognizing when the subjects in the target scene have placed their arms by their side or in a still or ready position. Upon recognizing such a gesture, the gesture recognition logic 385 can, in some embodiments, initiate capture of an image after a predetermined delay or countdown. The gesture recognition logic 385 can also employ other portions of the framing feedback logic 321 to emit feedback to the user so that the user can be aware of the moment that the image and/or video will be captured by the camera device 300. - As another example of a gesture that can be recognized by the
gesture recognition logic 385, the user can also communicate a desired framing of the image via gestures. For example, the user can instruct the gesture recognition logic 385 to frame the image such that the entire body of the user is captured by, for example, pointing at the ground beneath the feet of the user with one hand, indicating a desire for the user to have his or her feet within the framing of the image, and pointing to the head of the user with another hand. As another example, a user can, in a group shot, identify the subjects in a target scene that the user wishes to be present in a resultant framing of an image. In response, the gesture recognition logic 385 can cause the face-object-body framing logic 374 to adjust cropping of a resultant image and/or a zoom level of a lens system 301 to appropriately frame the image and/or video as requested by the user. - A user can also affect flash settings by performing a gesture that the
gesture recognition logic 385 can recognize. For example, a user can toggle a flash between an off setting, an on setting, and an automatic setting by performing a gesture linked to modifying flash settings, which the gesture recognition logic 385 can recognize and act on accordingly. The controller 308 can also execute voice recognition logic 386, which allows the user to speak commands that can be linked to various actions, as with the gesture recognition logic 385. The voice recognition logic 386 can allow a user to initiate capture of an image or video, specify whether the camera device 104 should capture an image or a video, and modify framing, focus point, and flash settings as described with reference to the gesture recognition logic 385. -
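As a concrete illustration of the command dispatch described above, the following sketch maps recognized gesture or voice commands to camera actions, including cycling the flash setting. The command strings, the `CameraState` fields, and the flash-mode cycle order are illustrative assumptions, not taken from the disclosure.

```python
# Hypothetical dispatch for gesture/voice commands; names are illustrative.
FLASH_MODES = ["off", "on", "auto"]

class CameraState:
    def __init__(self):
        self.flash = "auto"
        self.mode = "still"          # "still" or "video"
        self.capture_requested = False

def handle_command(state, command):
    """Apply one recognized gesture/voice command to the camera state."""
    if command == "capture_trigger":
        # A real device could start a countdown and emit feedback here.
        state.capture_requested = True
    elif command == "toggle_flash":
        # Cycle off -> on -> auto -> off, as described for the flash gesture.
        state.flash = FLASH_MODES[(FLASH_MODES.index(state.flash) + 1) % len(FLASH_MODES)]
    elif command in ("still_mode", "video_mode"):
        state.mode = "still" if command == "still_mode" else "video"
    return state
```

A table-driven dispatch like this keeps gesture and voice recognition symmetric: both front ends reduce their input to the same command strings before any camera setting is touched.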
FIG. 4A illustrates one example of a camera device 300 operating in a mode that provides framing feedback to a subject regarding a current framing of an image. In the depicted example, the camera device 300 is configured with a projection-illumination subsystem 323 that includes a light source 327 as well as a light source shroud 328 configured to modify a viewing angle 401 from which the light source is viewable. In the embodiment of FIGS. 4A and 4B, the focal region logic 372 identifies a current framing of a target scene at which the lens system 301 is aimed, or an amount of the target scene in the current framing of an image. The focal region logic 372 specifies a viewing angle 401 at which the light source 327 should be visible such that it is visible from a position within the current framing and not visible from a position outside the current framing. Accordingly, the focal region logic 372 can then adjust the light source shroud 328 such that the viewing angle from which the light source 327 is visible corresponds to the current framing of the image. -
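The viewing angle the shroud should expose tracks the lens field of view, which for an adjustable focal-length lens follows the standard pinhole relation: a longer focal length (zooming in) means a narrower angle. A minimal sketch, with illustrative sensor and focal-length values:

```python
import math

def field_of_view_deg(sensor_width_mm, focal_length_mm):
    """Horizontal field of view under the usual pinhole/thin-lens approximation."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

# Zooming out (shorter focal length) widens the angle the shroud should expose;
# zooming in narrows it, matching the behavior described for FIGS. 4A and 4B.
wide_angle = field_of_view_deg(36.0, 24.0)   # zoomed out
tele_angle = field_of_view_deg(36.0, 70.0)   # zoomed in
```

The focal region logic would map this angle directly to a shroud position, so the light source is visible only from within the framed region.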
FIG. 4B continues the example of FIG. 4A and illustrates how the focal region logic 372 can adjust the viewing angle 501 at which the light source 327 can be viewed. As noted above, the focal region logic 372 can determine a zoom level associated with an adjustable focal length lens system 301 to determine a current framing of the image. Based upon the current framing of the image, the light source shroud 328 is configured to increase the viewing angle 501 when the zoom level of the lens system 301 is decreased (i.e., “zooming out”) and to decrease the viewing angle 501 when the zoom level of the lens system 301 is increased (i.e., “zooming in”). - Reference is now made to
FIG. 4C, which illustrates another example of a camera device 300 according to an embodiment of the disclosure. In the depicted example, the camera device 300 employs a fixed slit barrier 329 that includes a light source 327 such as an LED array 402 that is positioned behind a fixed barrier 404. In one embodiment, the LED array 402 comprises a linear array of a plurality of LEDs positioned behind the fixed barrier 404 relative to a target scene such that the fixed barrier 404 limits the visibility of the LED array 402 from certain viewing angles. In the depicted embodiment, the fixed barrier 404 provides a slit through which light emanating from the LED array 402 can pass. - The
focal region logic 372 can activate a certain number of LEDs in the LED array 402 that causes light to emanate through the fixed barrier 404 such that the light is visible at a viewing angle that corresponds to the field of view of the lens system 301. In other words, if the zoom level of the lens system 301 is modified, the focal region logic 372 can activate an appropriate number of LEDs from the LED array 402 that are laterally offset from the slit in the fixed barrier 404 such that they are visible by a subject 461a at a viewing angle in the target scene that corresponds to the current field of view relative to the lens system 301. As the zoom level of the lens system 301 is changed, the field of view or current framing correspondingly changes. Accordingly, the focal region logic 372 can activate and/or disable LEDs in the LED array 402 as the zoom level of the lens system 301 changes such that the light is visible by a subject 461a within the field of view of the lens system 301 but not visible by a subject 461b positioned outside the field of view. - In the depicted example, the fixed
slit barrier 329 is also configured to allow light emanating from the LED array 402 to be visible at a viewing angle that is slightly less than a current field of view of the lens system 301. In other words, the fixed slit barrier 329 introduces a field of view reduction such that the viewing angle of light emanating from the LED array 402 is less than the angle of the field of view of the lens system 301. This field of view reduction can prevent a subject 461c from seeing light emanating from the LED array 402 when a portion of the subject 461c that is laterally offset from the subject's 461c eyes is outside the field of view of the lens system 301, even though the subject's 461c face, and therefore eyes, may be within the field of view. Therefore, the LED array 402 is configured to activate LEDs such that light emanating through the fixed barrier 404 is generally visible to the subject 461c when the subject's entire body is within the field of view of the lens system 301. - Reference is now made to
FIG. 4D, which illustrates an example of a camera device 300 employing an adaptable slit barrier 330 to emanate light from a light source 327 such as a fixed LED source 493 such that the light is visible within the field of view of the lens system 301. The adaptable slit barrier 330 employs an adaptable barrier 491 that can adjust an aperture through which light from the fixed LED source 493 passes to adjust the viewing angle of the light from the target scene. As in the example of FIG. 4C, the adaptable slit barrier 330 in the example of FIG. 4D can emanate light such that there is a field of view reduction, whereby the viewing angle of light emanating from the adaptable slit barrier 330 is less than the field of view of the lens system 301. - The
adaptable slit barrier 330 can employ techniques similar to those described in U.S. patent application Ser. No. 12/845,409, entitled “Display with Adaptable Parallax Barrier,” filed Jul. 28, 2010 (the '409 application), which is hereby incorporated herein by reference in its entirety. More specifically, the adaptable barrier 491 can comprise a linear barrier element array as disclosed in the '409 application comprising a plurality of barrier elements, each of which is selectable to be substantially opaque or transparent. Accordingly, as the field of view of the lens system 301 changes, the focal region logic 372 can select some of the barrier elements in the linear barrier element array that are laterally offset from the center of the barrier 491 to be transparent. In this way, the adaptable slit barrier 330 can allow light from the fixed LED source 493 to emanate through the adaptable barrier 491 and to the target scene such that the fixed LED source 493 is visible at a viewing angle corresponding to the field of view of the lens system 301. - While the example discussed with reference to
FIGS. 4C and 4D includes a linear, or one-dimensional, fixed slit barrier 329 and/or adaptable slit barrier 330, either of these devices can also include an LED array as well as a barrier oriented in two dimensions so that the viewing angle of light emanating from the LED source can be controlled in both the horizontal and vertical directions. In this way, in the case of a rectangular image sensor 302, the fixed slit barrier 329 and/or adaptable slit barrier 330 can limit the viewing angle of the LED source in both the horizontal and vertical directions relative to the target scene. - Reference is now made to
FIG. 5, which illustrates an example of a camera device 300 providing framing feedback to a subject 501 in a target scene at which the lens system 301 of the camera device 104 is aimed. In the example of FIG. 5, the focal region logic 372 executed by the controller 308 causes the projection-illumination subsystem 323 to project one or more lines via the holographic optical element 325, laser system 326, MEMS pico-projector 324, and/or any other mechanism that can project visible lines designating the focal region and/or field of view corresponding to the current framing of the image. These lines provide the subject 501 with a visible indication of the boundaries of the current framing. - In one embodiment, the
focal region logic 372 can also identify a suggested spot for the subject 501 to position himself within the field of view of the lens system 301 of the camera device 300. The focal region logic 372 can then cause the projection-illumination subsystem 323 to emit an indicator that is cast on the ground in the target scene and that provides a suggested position for the subject 501 based on the framing conditions within the field of view of the lens system 301. The suggested position can be based on the size of the subject 501 within the current framing, lighting conditions, background elements in the target scene, or any other framing conditions as can be appreciated. - Reference is now made to
FIG. 6, which illustrates one way in which the gesture recognition logic 385 executed by the controller 308 can allow the camera device 300 to interpret gestures performed by a human subject 501 visible in the target scene to modify framing conditions or other attributes associated with the camera device 300. As described above, the subject 501 can select framing conditions associated with an image captured by the camera device 300. In the depicted example, the subject 501 can point with an index finger to indicate the top of the image frame as well as point to the ground to indicate that the subject 501 desires that the image frame extend to the ground beneath the subject 501. - The subject 501 can also perform various other gestures that can be recognized by the
gesture recognition logic 385 and linked to certain actions within the camera device. For example, as noted above, the subject 501 can perform a gesture identifying a focus point in the current framing of the image, and the gesture recognition logic 385 can adjust the focus point of the lens system 301 in response. The subject 501 can perform another gesture that can be linked with changing the depth of field setting of the camera device 300, which the gesture recognition logic 385 can identify and act on. By performing one or more gestures, the subject 501 can also select among various modes in which the camera device 300 can be placed. For example, a gesture can be linked to selection of a still image capture mode while another gesture can be linked to selection of a video capture mode. As another example, the subject 501 can select a scene mode associated with the camera device 300, such as a landscape scene mode, a portrait scene mode, a fast-motion video capture mode, a high quality video capture mode, and/or any other mode that can be associated with the camera device 300. - The
gesture recognition logic 385 also recognizes a gesture that can be linked to selection of an aspect ratio associated with image or video capture. The subject 501 can also perform a gesture that selects what or where the subject 501 would like to capture in an image or video captured by the camera device 300. In one embodiment, the camera device 300 can be configured with a wide, high resolution field of view of the target scene, and the gesture recognition logic 385 can allow the subject 501 to perform a gesture that selects a subset of the field of view as the current framing of the image. The gesture recognition logic 385 can modify the current framing of the image without altering the zoom level associated with the lens system 301. In this way, the gesture recognition logic 385 can quickly modify framing conditions without having to modify a zoom level of the lens system 301. As another example, the image sensor 302 can comprise an array of imager elements or image sensors, and the gesture recognition logic 385 can modify framing conditions by selecting a subset of the array of imager elements. The gesture recognition logic 385 can allow other adjustments to be made via gestures performed by a subject 501. For example, optical zoom adjustments, mechanical panning adjustments (e.g., when the camera device 300 is attached to a motorized tripod), flash settings (e.g., on, off, automatic, etc.), and other camera device 300 settings as can be appreciated can be linked to a gesture performed by the subject 501, which can be recognized by the gesture recognition logic 385, which can in turn cause the requested adjustment to be made. - The
gesture recognition logic 385 can also allow the subject 501 to perform gestures that alter framing feedback provided by the framing feedback logic 321. In the depicted example, the display projection logic 376 causes the MEMS pico-projector 324 of the camera device 300 to project a representation of the current framing of the image or video onto the ground within or near the target scene. The gesture recognition logic 385 also recognizes gestures that allow the user to modify where the projection appears. For example, the user can perform a gesture that causes the projection to appear on a background, on a surface behind the camera device 300, or on any other surface within or near the target scene. The gesture recognition logic 385 can also recognize a gesture performed by the subject 501 that is linked to changing the size and/or orientation of the projection. -
FIG. 6 also illustrates how the display projection logic 376 can provide framing feedback to a subject 501 in a target scene at which the lens system 301 of the camera device 300 is aimed. In the depicted example, the display projection logic 376 can cause the MEMS pico-projector 324 to generate a projection 621 of a current framing of an image, or the current field of view of the camera device, on a surface outside the housing of the camera device 300 that is visible from a position in the target scene by the subject 501. In the depicted example, the projection 621 is projected towards a ground level near the target scene such that it is visible by the subject 501. In some embodiments, the projection 621 generated by the MEMS pico-projector 324 can also include a textual and/or graphics overlay with additional information, such as an indicator showing whether image and/or video capture is underway, textual information regarding camera device 300 settings (e.g., aperture, shutter speed, scene mode, etc.), whether there is excessive motion of the camera device 300 hindering image or video capture, or any other information related to framing conditions that might be relevant to the subject 501. - Reference is now made to
FIG. 7, which continues the example of FIG. 6. FIG. 7 illustrates how the projection-illumination subsystem 323 can, via the MEMS pico-projector 324, project the current framing of an image in various directions and on various surfaces such that it is visible from a position in the target scene. In the depicted example, the camera device 300 is equipped with an additional MEMS pico-projector 324 that is positioned on an opposing side of the camera device 300 housing. This allows the projection 621 to be cast on any number of surfaces in any number of directions. - Additionally, the
gesture recognition logic 385 can allow the subject 501 to perform a gesture to alter the positioning of the projection 621. For example, in FIG. 7, the subject 501 has performed a gesture to cause the gesture recognition logic 385 to request that the display projection logic 376 change the surface upon which the projection 621 is cast. The display projection logic 376 can also adjust and/or introduce skew into the projection 621 generated by the MEMS pico-projector 324 in the event that a surface upon which the projection 621 is cast is not normal to the camera device 300, thereby yielding a proportional rectangular image projected on the surface. Such an adjustment can be directed manually with user inputs via an input device integrated within the camera device 300 or via gestures captured by the camera device, electronically via analysis of a projection which at least in part falls within the field of view of the image sensor and/or a second imager, and/or via triangulation-based infrared emitter detectors. -
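The electronic skew adjustment described above can be illustrated with a minimal one-axis keystone correction: if the image sensor observes that the cast projection is wider along one edge than the other (a trapezoid rather than a rectangle), the source image rows can be pre-scaled toward the narrower edge. The function and the measured edge widths below are illustrative assumptions, not details from the disclosure.

```python
def keystone_prescale(top_width_px, bottom_width_px):
    """Return horizontal pre-scale factors for the top and bottom rows of the
    source image so a trapezoidal projection lands as a rectangle.
    Rows in between would be scaled by linear interpolation of these factors."""
    narrow = min(top_width_px, bottom_width_px)
    return narrow / top_width_px, narrow / bottom_width_px

# If the cast projection measures 1200 px along its top edge but only 1000 px
# along its bottom edge, pre-shrink the top rows and leave the bottom alone.
top_scale, bottom_scale = keystone_prescale(1200, 1000)
```

A full two-axis correction would instead fit a homography from the four observed corner positions, but the one-axis case shows the core idea: measure the cast shape, then warp the source in the opposite direction.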
FIG. 7 also illustrates how a gesture performed by the subject 501 can cause the gesture recognition logic 385 to alter the current framing of the image as directed by the subject 501. In the depicted example, the subject 501 performs a gesture indicating how a zoom level of the lens system 301 can be changed or how an image can be cropped by the controller 308. - Reference is now made to
FIG. 8, which illustrates an example of an alternative way in which the framing feedback logic 321 can generate framing feedback. In the depicted example, the framing feedback logic 321 can direct the projection-illumination subsystem 323 to generate a frustum of light 701 that is visible from a position within the target scene. Additionally, because the frustum of light 701 is generated by a holographic optical element and/or a laser system, it can be configured so that it is substantially invisible from a position outside the target scene, with the exception of a background on which the light falls and assuming there is minimal debris or particulate matter in the air surrounding the target scene. In this way, a subject 501 can know whether he or she is in the target scene based upon whether he or she can see the frustum of light 701 and/or whether he or she is within the frustum of light 701. - Referring next to
FIG. 9, shown is a flowchart that provides one example of the operation of a portion of the video capture logic 315 according to various embodiments. It is understood that the flowchart of FIG. 9 provides merely an example of the many different types of functional arrangements that may be employed to implement the operation of the portion of the video capture logic 315 as described herein. As an alternative, the flowchart of FIG. 9 may be viewed as depicting an example of steps of a method implemented in a camera device 104 according to one or more embodiments. - First, in
box 801, the video capture logic 315 initiates video capture according to a requested frame rate. In box 803, the video capture logic 315 can determine, via one or more motion sensors 313, a level of motion, movement and/or vibration of the camera device 300. In box 805, the video capture logic 315 can determine whether the level of movement of the camera device 300 exceeds a threshold. As noted above, such a threshold can be relative to movement during capture of the current video or an absolute threshold. In box 807, the video capture logic 315 can skip capture of a video frame if the movement level exceeds the threshold. In box 809, the video capture logic can determine whether capture of a video frame should be forced to comply with the requested frame rate, even if movement levels of the camera device 300 exceed the threshold. In box 811, the video frame can be captured. - Embodiments of the present disclosure can be implemented in various devices, for example, having a processor, memory, and image capture hardware. The logic described herein can be executable by one or more processors integrated with a device. In one embodiment, an application executed in a computing device, such as a mobile device, can invoke APIs that provide the logic described herein as well as facilitate interaction with image capture hardware. Where any component discussed herein is implemented in the form of software, any one of a number of programming languages may be employed such as, for example, processor specific assembler languages, C, C++, C#, Objective C, Java, JavaScript, Perl, PHP, Visual Basic, Python, Ruby, Delphi, Flash, or other programming languages.
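The flow of boxes 801 through 811 can be sketched as a simple capture loop. The skip-limit policy used here to force a frame (at most two consecutive skips) is an illustrative assumption about how the requested frame rate could be maintained; the disclosure leaves the forcing criterion open.

```python
def capture_video(frames, motion_levels, threshold, max_consecutive_skips=2):
    """Skip frames while device motion exceeds the threshold (boxes 805-807),
    but force a capture after too many consecutive skips so the requested
    frame rate is still approximated (boxes 809-811)."""
    captured = []
    skipped_in_a_row = 0
    for frame, motion in zip(frames, motion_levels):
        if motion > threshold and skipped_in_a_row < max_consecutive_skips:
            skipped_in_a_row += 1       # box 807: skip this frame
            continue
        captured.append(frame)          # box 811: capture (possibly forced)
        skipped_in_a_row = 0
    return captured
```

For example, with motion readings `[0, 9, 9, 0]` against a threshold of 5, the two middle frames are skipped and the first and last are captured.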
- As such, these software components can be executable by one or more processors in various devices. In this respect, the term “executable” means a program file that is in a form that can ultimately be run by a processor. Examples of executable programs may be, for example, a compiled program that can be translated into machine code in a format that can be loaded into a random access portion of memory and run by a processor, source code that may be expressed in proper format such as object code that is capable of being loaded into a random access portion of the memory and executed by the processor, or source code that may be interpreted by another executable program to generate instructions in a random access portion of the memory to be executed by the processor, etc. An executable program may be stored in any portion or component of the memory including, for example, random access memory (RAM), read-only memory (ROM), hard drive, solid-state drive, USB flash drive, memory card, optical disc such as compact disc (CD) or digital versatile disc (DVD), floppy disk, magnetic tape, or other memory components.
- Although various logic described herein may be embodied in software or code executed by general purpose hardware as discussed above, as an alternative the same may also be embodied in dedicated hardware or a combination of software/general purpose hardware and dedicated hardware. If embodied in dedicated hardware, each can be implemented as a circuit or state machine that employs any one of or a combination of a number of technologies. These technologies may include, but are not limited to, discrete logic circuits having logic gates for implementing various logic functions upon an application of one or more data signals, application specific integrated circuits having appropriate logic gates, or other components, etc. Such technologies are generally well known by those skilled in the art and, consequently, are not described in detail herein.
- The flowchart of
FIG. 9 shows the functionality and operation of an implementation of portions of a camera device according to embodiments of the disclosure. If embodied in software, each block may represent a module, segment, or portion of code that comprises program instructions to implement the specified logical function(s). The program instructions may be embodied in the form of source code that comprises human-readable statements written in a programming language or machine code that comprises numerical instructions recognizable by a suitable execution system such as a processor in a computer system or other system. The machine code may be converted from the source code, etc. If embodied in hardware, each block may represent a circuit or a number of interconnected circuits to implement the specified logical function(s). - Although the flowchart of
FIG. 9 shows a specific order of execution, it is understood that the order of execution may differ from that which is depicted. For example, the order of execution of two or more blocks may be scrambled relative to the order shown. Also, two or more blocks shown in succession inFIG. 9 may be executed concurrently or with partial concurrence. Further, in some embodiments, one or more of the blocks shown inFIG. 9 may be skipped or omitted. In addition, any number of counters, state variables, warning semaphores, or messages might be added to the logical flow described herein, for purposes of enhanced utility, accounting, performance measurement, or providing troubleshooting aids, etc. It is understood that all such variations are within the scope of the present disclosure. - Also, any logic or application described herein that comprises software or code, such as the framing
feedback logic 321 and/or the video capture logic 315 can be embodied in any non-transitory computer-readable medium for use by or in connection with an instruction execution system such as, for example, a processor in a computer device or other system. In this sense, the logic may comprise, for example, statements including instructions and declarations that can be fetched from the computer-readable medium and executed by the instruction execution system. In the context of the present disclosure, a “computer-readable medium” can be any medium that can contain, store, or maintain the logic or application described herein for use by or in connection with the instruction execution system. The computer-readable medium can comprise any one of many physical media such as, for example, magnetic, optical, or semiconductor media. More specific examples of a suitable computer-readable medium would include, but are not limited to, magnetic tapes, magnetic floppy diskettes, magnetic hard drives, memory cards, solid-state drives, USB flash drives, or optical discs. Also, the computer-readable medium may be a random access memory (RAM) including, for example, static random access memory (SRAM) and dynamic random access memory (DRAM), or magnetic random access memory (MRAM). In addition, the computer-readable medium may be a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), or other type of memory device. - It should be emphasized that the above-described embodiments of the present disclosure are merely possible examples of implementations set forth for a clear understanding of the principles of the disclosure. Many variations and modifications may be made to the above-described embodiment(s) without departing substantially from the spirit and principles of the disclosure.
All such modifications and variations are intended to be included herein within the scope of this disclosure and protected by the following claims.
Claims (21)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/413,863 US20130021491A1 (en) | 2011-07-20 | 2012-03-07 | Camera Device Systems and Methods |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201161509747P | 2011-07-20 | 2011-07-20 | |
US13/413,863 US20130021491A1 (en) | 2011-07-20 | 2012-03-07 | Camera Device Systems and Methods |
Publications (1)
Publication Number | Publication Date |
---|---|
US20130021491A1 true US20130021491A1 (en) | 2013-01-24 |
Family
ID=47555520
Family Applications (9)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/232,052 Abandoned US20130021512A1 (en) | 2011-07-20 | 2011-09-14 | Framing of Images in an Image Capture Device |
US13/232,045 Abandoned US20130021488A1 (en) | 2011-07-20 | 2011-09-14 | Adjusting Image Capture Device Settings |
US13/235,975 Abandoned US20130021504A1 (en) | 2011-07-20 | 2011-09-19 | Multiple image processing |
US13/245,941 Abandoned US20130021489A1 (en) | 2011-07-20 | 2011-09-27 | Regional Image Processing in an Image Capture Device |
US13/281,521 Abandoned US20130021490A1 (en) | 2011-07-20 | 2011-10-26 | Facial Image Processing in an Image Capture Device |
US13/313,352 Active 2032-01-11 US9092861B2 (en) | 2011-07-20 | 2011-12-07 | Using motion information to assist in image processing |
US13/313,345 Abandoned US20130022116A1 (en) | 2011-07-20 | 2011-12-07 | Camera tap transcoder architecture with feed forward encode data |
US13/330,047 Abandoned US20130021484A1 (en) | 2011-07-20 | 2011-12-19 | Dynamic computation of lens shading |
US13/413,863 Abandoned US20130021491A1 (en) | 2011-07-20 | 2012-03-07 | Camera Device Systems and Methods |
Family Applications Before (8)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/232,052 Abandoned US20130021512A1 (en) | 2011-07-20 | 2011-09-14 | Framing of Images in an Image Capture Device |
US13/232,045 Abandoned US20130021488A1 (en) | 2011-07-20 | 2011-09-14 | Adjusting Image Capture Device Settings |
US13/235,975 Abandoned US20130021504A1 (en) | 2011-07-20 | 2011-09-19 | Multiple image processing |
US13/245,941 Abandoned US20130021489A1 (en) | 2011-07-20 | 2011-09-27 | Regional Image Processing in an Image Capture Device |
US13/281,521 Abandoned US20130021490A1 (en) | 2011-07-20 | 2011-10-26 | Facial Image Processing in an Image Capture Device |
US13/313,352 Active 2032-01-11 US9092861B2 (en) | 2011-07-20 | 2011-12-07 | Using motion information to assist in image processing |
US13/313,345 Abandoned US20130022116A1 (en) | 2011-07-20 | 2011-12-07 | Camera tap transcoder architecture with feed forward encode data |
US13/330,047 Abandoned US20130021484A1 (en) | 2011-07-20 | 2011-12-19 | Dynamic computation of lens shading |
Country Status (1)
Country | Link |
---|---|
US (9) | US20130021512A1 (en) |
Cited By (32)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130300644A1 (en) * | 2012-05-11 | 2013-11-14 | Comcast Cable Communications, Llc | System and Methods for Controlling a User Experience |
US20130329113A1 (en) * | 2012-06-08 | 2013-12-12 | Sony Mobile Communications, Inc. | Terminal device and image capturing method |
US20130335587A1 (en) * | 2012-06-14 | 2013-12-19 | Sony Mobile Communications, Inc. | Terminal device and image capturing method |
US20140009623A1 (en) * | 2012-07-06 | 2014-01-09 | Pixart Imaging Inc. | Gesture recognition system and glasses with gesture recognition function |
US20140111668A1 (en) * | 2012-10-23 | 2014-04-24 | Sony Corporation | Content acquisition apparatus and storage medium |
US20150040040A1 (en) * | 2013-08-05 | 2015-02-05 | Alexandru Balan | Two-hand interaction with natural user interface |
US20150124115A1 (en) * | 2012-06-11 | 2015-05-07 | Omnivision Technologies, Inc. | Shutter release using secondary camera |
US20150297986A1 (en) * | 2014-04-18 | 2015-10-22 | Aquifi, Inc. | Systems and methods for interactive video games with motion dependent gesture inputs |
US20150341551A1 (en) * | 2014-05-20 | 2015-11-26 | Lenovo (Singapore) Pte. Ltd. | Projecting light at angle corresponding to the field of view of a camera |
US20160048216A1 (en) * | 2014-08-14 | 2016-02-18 | Ryan Fink | Methods for camera movement compensation for gesture detection and object recognition |
US9310667B2 (en) * | 2014-08-06 | 2016-04-12 | Kevin J. WARRIAN | Orientation system for image recording devices |
US20160119552A1 (en) * | 2014-10-24 | 2016-04-28 | Lg Electronics Inc. | Mobile terminal and controlling method thereof |
US20160148648A1 (en) * | 2014-11-20 | 2016-05-26 | Facebook, Inc. | Systems and methods for improving stabilization in time-lapse media content |
US20160236612A1 (en) * | 2013-10-09 | 2016-08-18 | Magna Closures Inc. | Control of display for vehicle window |
US9462255B1 (en) | 2012-04-18 | 2016-10-04 | Amazon Technologies, Inc. | Projection and camera system for augmented reality environment |
US9578221B1 (en) * | 2016-01-05 | 2017-02-21 | International Business Machines Corporation | Camera field of view visualizer |
US9648223B2 (en) * | 2015-09-04 | 2017-05-09 | Microvision, Inc. | Laser beam scanning assisted autofocus |
US20170272647A1 (en) * | 2016-03-17 | 2017-09-21 | Kabushiki Kaisha Toshiba | Imaging support apparatus, imaging support method, and computer program product |
US20170358144A1 (en) * | 2016-06-13 | 2017-12-14 | Julia Schwarz | Altering properties of rendered objects via control points |
WO2018075367A1 (en) * | 2016-10-18 | 2018-04-26 | Light Labs Inc. | Methods and apparatus for receiving, storing and/or using camera settings and/or user preference information |
US10165186B1 (en) * | 2015-06-19 | 2018-12-25 | Amazon Technologies, Inc. | Motion estimation based video stabilization for panoramic video from multi-camera capture device |
US10313552B2 (en) * | 2016-10-26 | 2019-06-04 | Orcam Technologies Ltd. | Systems and methods for providing visual feedback of a field of view |
US10375281B2 (en) | 2014-10-31 | 2019-08-06 | International Business Machines Corporation | Image-capture-scope indication device for image-capture apparatus |
US10447926B1 (en) | 2015-06-19 | 2019-10-15 | Amazon Technologies, Inc. | Motion estimation based video compression and encoding |
US10630893B2 (en) * | 2013-01-23 | 2020-04-21 | Orcam Technologies Ltd. | Apparatus for adjusting image capture settings based on a type of visual trigger |
US20200169663A1 (en) * | 2018-11-26 | 2020-05-28 | Sony Corporation | Physically based camera motion compensation |
US20220060572A1 (en) * | 2018-12-30 | 2022-02-24 | Sang Chul Kwon | Foldable mobile phone |
US11289078B2 (en) * | 2019-06-28 | 2022-03-29 | Intel Corporation | Voice controlled camera with AI scene detection for precise focusing |
US11372244B2 (en) * | 2017-12-25 | 2022-06-28 | Goertek Technology Co., Ltd. | Laser beam scanning display device and augmented reality glasses |
US11410413B2 (en) | 2018-09-10 | 2022-08-09 | Samsung Electronics Co., Ltd. | Electronic device for recognizing object and method for controlling electronic device |
US11509817B2 (en) * | 2014-11-03 | 2022-11-22 | Robert John Gove | Autonomous media capturing |
US11606482B2 (en) | 1997-01-27 | 2023-03-14 | West Texas Technology Partners, Llc | Methods for camera movement compensation |
Families Citing this family (53)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP5781351B2 (en) * | 2011-03-30 | 2015-09-24 | Nippon Avionics Co., Ltd. | Imaging apparatus, pixel output level correction method thereof, infrared camera system, and interchangeable lens system |
JP5778469B2 (en) | 2011-04-28 | 2015-09-16 | Nippon Avionics Co., Ltd. | Imaging apparatus, image generation method, infrared camera system, and interchangeable lens system |
KR101796481B1 (en) * | 2011-11-28 | 2017-12-04 | 삼성전자주식회사 | Method of eliminating shutter-lags with low power consumption, camera module, and mobile device having the same |
US9118876B2 (en) * | 2012-03-30 | 2015-08-25 | Verizon Patent And Licensing Inc. | Automatic skin tone calibration for camera images |
KR101917650B1 (en) * | 2012-08-03 | 2019-01-29 | 삼성전자 주식회사 | Method and apparatus for processing a image in camera device |
US9554042B2 (en) * | 2012-09-24 | 2017-01-24 | Google Technology Holdings LLC | Preventing motion artifacts by intelligently disabling video stabilization |
US9286509B1 (en) * | 2012-10-19 | 2016-03-15 | Google Inc. | Image optimization during facial recognition |
JP2014176034A (en) * | 2013-03-12 | 2014-09-22 | Ricoh Co Ltd | Video transmission device |
US9552630B2 (en) * | 2013-04-09 | 2017-01-24 | Honeywell International Inc. | Motion deblurring |
US9595083B1 (en) * | 2013-04-16 | 2017-03-14 | Lockheed Martin Corporation | Method and apparatus for image producing with predictions of future positions |
US9916367B2 (en) | 2013-05-03 | 2018-03-13 | Splunk Inc. | Processing system search requests from multiple data stores with overlapping data |
US8738629B1 (en) | 2013-05-03 | 2014-05-27 | Splunk Inc. | External Result Provided process for retrieving data stored using a different configuration or protocol |
US10003792B2 (en) | 2013-05-27 | 2018-06-19 | Microsoft Technology Licensing, Llc | Video encoder for images |
US10796617B2 (en) * | 2013-06-12 | 2020-10-06 | Infineon Technologies Ag | Device, method and system for processing an image data stream |
US9270959B2 (en) | 2013-08-07 | 2016-02-23 | Qualcomm Incorporated | Dynamic color shading correction |
EP3067746B1 (en) | 2013-12-06 | 2019-08-21 | Huawei Device Co., Ltd. | Photographing method for dual-camera device and dual-camera device |
US9251594B2 (en) | 2014-01-30 | 2016-02-02 | Adobe Systems Incorporated | Cropping boundary simplicity |
US9245347B2 (en) * | 2014-01-30 | 2016-01-26 | Adobe Systems Incorporated | Image Cropping suggestion |
US10121060B2 (en) * | 2014-02-13 | 2018-11-06 | Oath Inc. | Automatic group formation and group detection through media recognition |
KR102128468B1 (en) * | 2014-02-19 | 2020-06-30 | 삼성전자주식회사 | Image Processing Device and Method including a plurality of image signal processors |
CN103841328B (en) * | 2014-02-27 | 2015-03-11 | 深圳市中兴移动通信有限公司 | Low-speed shutter shooting method and device |
CN105359531B (en) | 2014-03-17 | 2019-08-06 | 微软技术许可有限责任公司 | Method and system for determining for the coder side of screen content coding |
US10104316B2 (en) * | 2014-05-08 | 2018-10-16 | Sony Corporation | Information processing device and information processing method |
WO2016004278A1 (en) * | 2014-07-03 | 2016-01-07 | Brady Worldwide, Inc. | Lockout/tagout device with non-volatile memory and related system |
US10924743B2 (en) | 2015-02-06 | 2021-02-16 | Microsoft Technology Licensing, Llc | Skipping evaluation stages during media encoding |
US11721414B2 (en) * | 2015-03-12 | 2023-08-08 | Walmart Apollo, Llc | Importing structured prescription records from a prescription label on a medication package |
US10853625B2 (en) | 2015-03-21 | 2020-12-01 | Mine One Gmbh | Facial signature methods, systems and software |
WO2016183380A1 (en) * | 2015-05-12 | 2016-11-17 | Mine One Gmbh | Facial signature methods, systems and software |
WO2016154123A2 (en) | 2015-03-21 | 2016-09-29 | Mine One Gmbh | Virtual 3d methods, systems and software |
US20160316220A1 (en) * | 2015-04-21 | 2016-10-27 | Microsoft Technology Licensing, Llc | Video encoder management strategies |
US10136132B2 (en) | 2015-07-21 | 2018-11-20 | Microsoft Technology Licensing, Llc | Adaptive skip or zero block detection combined with transform size decision |
EP3136726B1 (en) * | 2015-08-27 | 2018-03-07 | Axis AB | Pre-processing of digital images |
US9456195B1 (en) | 2015-10-08 | 2016-09-27 | Dual Aperture International Co. Ltd. | Application programming interface for multi-aperture imaging systems |
WO2017205597A1 (en) * | 2016-05-25 | 2017-11-30 | Gopro, Inc. | Image signal processing-based encoding hints for motion estimation |
EP3466051A1 (en) | 2016-05-25 | 2019-04-10 | GoPro, Inc. | Three-dimensional noise reduction |
US9639935B1 (en) * | 2016-05-25 | 2017-05-02 | Gopro, Inc. | Apparatus and methods for camera alignment model calibration |
US9851842B1 (en) * | 2016-08-10 | 2017-12-26 | Rovi Guides, Inc. | Systems and methods for adjusting display characteristics |
US10366122B2 (en) * | 2016-09-14 | 2019-07-30 | Ants Technology (Hk) Limited. | Methods circuits devices systems and functionally associated machine executable code for generating a searchable real-scene database |
CN106550227B (en) * | 2016-10-27 | 2019-02-22 | 成都西纬科技有限公司 | A kind of image saturation method of adjustment and device |
US10477064B2 (en) | 2017-08-21 | 2019-11-12 | Gopro, Inc. | Image stitching with electronic rolling shutter correction |
US10791265B1 (en) | 2017-10-13 | 2020-09-29 | State Farm Mutual Automobile Insurance Company | Systems and methods for model-based analysis of damage to a vehicle |
US11587046B1 (en) | 2017-10-25 | 2023-02-21 | State Farm Mutual Automobile Insurance Company | Systems and methods for performing repairs to a vehicle |
JP7004736B2 (en) * | 2017-10-26 | 2022-01-21 | 京セラ株式会社 | Image processing equipment, imaging equipment, driving support equipment, mobile objects, and image processing methods |
US11676242B2 (en) * | 2018-10-25 | 2023-06-13 | Sony Group Corporation | Image processing apparatus and image processing method |
US10861127B1 (en) * | 2019-09-17 | 2020-12-08 | Gopro, Inc. | Image and video processing using multiple pipelines |
US11064118B1 (en) * | 2019-12-18 | 2021-07-13 | Gopro, Inc. | Systems and methods for dynamic stabilization adjustment |
US11006044B1 (en) * | 2020-03-03 | 2021-05-11 | Qualcomm Incorporated | Power-efficient dynamic electronic image stabilization |
US11284157B2 (en) | 2020-06-11 | 2022-03-22 | Rovi Guides, Inc. | Methods and systems facilitating adjustment of multiple variables via a content guidance application |
TWI774039B (en) * | 2020-08-12 | 2022-08-11 | 瑞昱半導體股份有限公司 | System for compensating image with fixed pattern noise |
US11563899B2 (en) * | 2020-08-14 | 2023-01-24 | Raytheon Company | Parallelization technique for gain map generation using overlapping sub-images |
CN114079735B (en) * | 2020-08-19 | 2024-02-23 | 瑞昱半导体股份有限公司 | Image compensation system for fixed image noise |
US11902671B2 (en) * | 2021-12-09 | 2024-02-13 | Fotonation Limited | Vehicle occupant monitoring system including an image acquisition device with a rolling shutter image sensor |
WO2023150800A1 (en) * | 2022-02-07 | 2023-08-10 | Gopro, Inc. | Methods and apparatus for real-time guided encoding |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080074533A1 (en) * | 2006-09-22 | 2008-03-27 | Kuo-Hung Liao | Digital image capturing device and method of automatic shooting thereof |
US20100013943A1 (en) * | 2008-07-18 | 2010-01-21 | Sony Ericsson Mobile Communications Ab | Arrangement and method relating to an image recording device |
WO2011052506A1 (en) * | 2009-10-28 | 2011-05-05 | Kyocera Corporation | Portable image pickup apparatus |
US8681255B2 (en) * | 2010-09-28 | 2014-03-25 | Microsoft Corporation | Integrated low power depth camera and projection device |
Family Cites Families (44)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR100325253B1 (en) * | 1998-05-19 | 2002-03-04 | 미야즈 준이치롯 | Motion vector search method and apparatus |
US6486908B1 (en) * | 1998-05-27 | 2002-11-26 | Industrial Technology Research Institute | Image-based method and system for building spherical panoramas |
US20010047517A1 (en) * | 2000-02-10 | 2001-11-29 | Charilaos Christopoulos | Method and apparatus for intelligent transcoding of multimedia data |
JP2001245303A (en) * | 2000-02-29 | 2001-09-07 | Toshiba Corp | Moving picture coder and moving picture coding method |
US6407680B1 (en) * | 2000-12-22 | 2002-06-18 | Generic Media, Inc. | Distributed on-demand media transcoding system and method |
US7034848B2 (en) * | 2001-01-05 | 2006-04-25 | Hewlett-Packard Development Company, L.P. | System and method for automatically cropping graphical images |
EP1407620B1 (en) * | 2001-05-31 | 2009-07-15 | Canon Kabushiki Kaisha | Moving image and supplementary information storing method |
US7801215B2 (en) * | 2001-07-24 | 2010-09-21 | Sasken Communication Technologies Limited | Motion estimation technique for digital video encoding applications |
US20030126622A1 (en) * | 2001-12-27 | 2003-07-03 | Koninklijke Philips Electronics N.V. | Method for efficiently storing the trajectory of tracked objects in video |
KR100850705B1 (en) * | 2002-03-09 | 2008-08-06 | 삼성전자주식회사 | Method for adaptive encoding motion image based on the temperal and spatial complexity and apparatus thereof |
JP4275358B2 (en) * | 2002-06-11 | 2009-06-10 | 株式会社日立製作所 | Image information conversion apparatus, bit stream converter, and image information conversion transmission method |
US7259784B2 (en) * | 2002-06-21 | 2007-08-21 | Microsoft Corporation | System and method for camera color calibration and image stitching |
US20040131276A1 (en) * | 2002-12-23 | 2004-07-08 | John Hudson | Region-based image processor |
AU2003296127A1 (en) * | 2002-12-25 | 2004-07-22 | Nikon Corporation | Blur correction camera system |
US20130107938A9 (en) * | 2003-05-28 | 2013-05-02 | Chad Fogg | Method And Apparatus For Scalable Video Decoder Using An Enhancement Stream |
KR100566290B1 (en) * | 2003-09-18 | 2006-03-30 | 삼성전자주식회사 | Image Scanning Method By Using Scan Table and Discrete Cosine Transform Apparatus adapted it |
JP4123171B2 (en) * | 2004-03-08 | 2008-07-23 | ソニー株式会社 | Method for manufacturing vibration type gyro sensor element, vibration type gyro sensor element, and method for adjusting vibration direction |
WO2005094270A2 (en) * | 2004-03-24 | 2005-10-13 | Sharp Laboratories Of America, Inc. | Methods and systems for a/v input device to display networking |
US8315307B2 (en) * | 2004-04-07 | 2012-11-20 | Qualcomm Incorporated | Method and apparatus for frame prediction in hybrid video compression to enable temporal scalability |
US20060109900A1 (en) * | 2004-11-23 | 2006-05-25 | Bo Shen | Image data transcoding |
JP2006203682A (en) * | 2005-01-21 | 2006-08-03 | Nec Corp | Converting device of compression encoding bit stream for moving image at syntax level and moving image communication system |
US7843824B2 (en) * | 2007-01-08 | 2010-11-30 | General Instrument Corporation | Method and apparatus for statistically multiplexing services |
US7924316B2 (en) * | 2007-03-14 | 2011-04-12 | Aptina Imaging Corporation | Image feature identification and motion compensation apparatus, systems, and methods |
JP4983917B2 (en) * | 2007-05-23 | 2012-07-25 | 日本電気株式会社 | Moving image distribution system, conversion device, and moving image distribution method |
CN101755455A (en) * | 2007-07-30 | 2010-06-23 | 日本电气株式会社 | Connection terminal, distribution system, conversion method, and program |
US20090060039A1 (en) * | 2007-09-05 | 2009-03-05 | Yasuharu Tanaka | Method and apparatus for compression-encoding moving image |
US8098732B2 (en) * | 2007-10-10 | 2012-01-17 | Sony Corporation | System for and method of transcoding video sequences from a first format to a second format |
US8063942B2 (en) * | 2007-10-19 | 2011-11-22 | Qualcomm Incorporated | Motion assisted image sensor configuration |
US8170342B2 (en) * | 2007-11-07 | 2012-05-01 | Microsoft Corporation | Image recognition of content |
JP2009152672A (en) * | 2007-12-18 | 2009-07-09 | Samsung Techwin Co Ltd | Recording apparatus, reproducing apparatus, recording method, reproducing method, and program |
JP5242151B2 (en) * | 2007-12-21 | 2013-07-24 | セミコンダクター・コンポーネンツ・インダストリーズ・リミテッド・ライアビリティ・カンパニー | Vibration correction control circuit and imaging apparatus including the same |
JP2009159359A (en) * | 2007-12-27 | 2009-07-16 | Samsung Techwin Co Ltd | Moving image data encoding apparatus, moving image data decoding apparatus, moving image data encoding method, moving image data decoding method and program |
US20090217338A1 (en) * | 2008-02-25 | 2009-08-27 | Broadcom Corporation | Reception verification/non-reception verification of base/enhancement video layers |
US20090323810A1 (en) * | 2008-06-26 | 2009-12-31 | Mediatek Inc. | Video encoding apparatuses and methods with decoupled data dependency |
JP2010039788A (en) * | 2008-08-05 | 2010-02-18 | Toshiba Corp | Image processing apparatus and method thereof, and image processing program |
JP2010147808A (en) * | 2008-12-18 | 2010-07-01 | Olympus Imaging Corp | Imaging apparatus and image processing method in same |
US8311115B2 (en) * | 2009-01-29 | 2012-11-13 | Microsoft Corporation | Video encoding using previously calculated motion information |
US20100194851A1 (en) * | 2009-02-03 | 2010-08-05 | Aricent Inc. | Panorama image stitching |
US20100229206A1 (en) * | 2009-03-03 | 2010-09-09 | Viasat, Inc. | Space shifting over forward satellite communication channels |
US8520083B2 (en) * | 2009-03-27 | 2013-08-27 | Canon Kabushiki Kaisha | Method of removing an artefact from an image |
US20100309987A1 (en) * | 2009-06-05 | 2010-12-09 | Apple Inc. | Image acquisition and encoding system |
US20110170608A1 (en) * | 2010-01-08 | 2011-07-14 | Xun Shi | Method and device for video transcoding using quad-tree based mode selection |
US9007428B2 (en) * | 2011-06-01 | 2015-04-14 | Apple Inc. | Motion-based image stitching |
US8554011B2 (en) * | 2011-06-07 | 2013-10-08 | Microsoft Corporation | Automatic exposure correction of images |
- 2011
- 2011-09-14 US US13/232,052 patent/US20130021512A1/en not_active Abandoned
- 2011-09-14 US US13/232,045 patent/US20130021488A1/en not_active Abandoned
- 2011-09-19 US US13/235,975 patent/US20130021504A1/en not_active Abandoned
- 2011-09-27 US US13/245,941 patent/US20130021489A1/en not_active Abandoned
- 2011-10-26 US US13/281,521 patent/US20130021490A1/en not_active Abandoned
- 2011-12-07 US US13/313,352 patent/US9092861B2/en active Active
- 2011-12-07 US US13/313,345 patent/US20130022116A1/en not_active Abandoned
- 2011-12-19 US US13/330,047 patent/US20130021484A1/en not_active Abandoned
- 2012
- 2012-03-07 US US13/413,863 patent/US20130021491A1/en not_active Abandoned
Cited By (57)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11606482B2 (en) | 1997-01-27 | 2023-03-14 | West Texas Technology Partners, Llc | Methods for camera movement compensation |
US9462255B1 (en) | 2012-04-18 | 2016-10-04 | Amazon Technologies, Inc. | Projection and camera system for augmented reality environment |
US9472005B1 (en) * | 2012-04-18 | 2016-10-18 | Amazon Technologies, Inc. | Projection and camera system for augmented reality environment |
US11093047B2 (en) | 2012-05-11 | 2021-08-17 | Comcast Cable Communications, Llc | System and method for controlling a user experience |
US20130300644A1 (en) * | 2012-05-11 | 2013-11-14 | Comcast Cable Communications, Llc | System and Methods for Controlling a User Experience |
US10664062B2 (en) | 2012-05-11 | 2020-05-26 | Comcast Cable Communications, Llc | System and method for controlling a user experience |
US9619036B2 (en) * | 2012-05-11 | 2017-04-11 | Comcast Cable Communications, Llc | System and methods for controlling a user experience |
US20130329113A1 (en) * | 2012-06-08 | 2013-12-12 | Sony Mobile Communications, Inc. | Terminal device and image capturing method |
US9438805B2 (en) * | 2012-06-08 | 2016-09-06 | Sony Corporation | Terminal device and image capturing method |
US20150124115A1 (en) * | 2012-06-11 | 2015-05-07 | Omnivision Technologies, Inc. | Shutter release using secondary camera |
US9313392B2 (en) * | 2012-06-11 | 2016-04-12 | Omnivision Technologies, Inc. | Shutter release using secondary camera |
US20130335587A1 (en) * | 2012-06-14 | 2013-12-19 | Sony Mobile Communications, Inc. | Terminal device and image capturing method |
US10175769B2 (en) * | 2012-07-06 | 2019-01-08 | Pixart Imaging Inc. | Interactive system and glasses with gesture recognition function |
US9904369B2 (en) * | 2012-07-06 | 2018-02-27 | Pixart Imaging Inc. | Gesture recognition system and glasses with gesture recognition function |
US20140009623A1 (en) * | 2012-07-06 | 2014-01-09 | Pixart Imaging Inc. | Gesture recognition system and glasses with gesture recognition function |
US9179031B2 (en) * | 2012-10-23 | 2015-11-03 | Sony Corporation | Content acquisition apparatus and storage medium |
US20140111668A1 (en) * | 2012-10-23 | 2014-04-24 | Sony Corporation | Content acquisition apparatus and storage medium |
US10630893B2 (en) * | 2013-01-23 | 2020-04-21 | Orcam Technologies Ltd. | Apparatus for adjusting image capture settings based on a type of visual trigger |
US9529513B2 (en) * | 2013-08-05 | 2016-12-27 | Microsoft Technology Licensing, Llc | Two-hand interaction with natural user interface |
US20150040040A1 (en) * | 2013-08-05 | 2015-02-05 | Alexandru Balan | Two-hand interaction with natural user interface |
US10308167B2 (en) * | 2013-10-09 | 2019-06-04 | Magna Closures Inc. | Control of display for vehicle window |
US20160236612A1 (en) * | 2013-10-09 | 2016-08-18 | Magna Closures Inc. | Control of display for vehicle window |
US10931866B2 (en) | 2014-01-05 | 2021-02-23 | Light Labs Inc. | Methods and apparatus for receiving and storing in a camera a user controllable setting that is used to control composite image generation performed after image capture |
US20150297986A1 (en) * | 2014-04-18 | 2015-10-22 | Aquifi, Inc. | Systems and methods for interactive video games with motion dependent gesture inputs |
US10051196B2 (en) * | 2014-05-20 | 2018-08-14 | Lenovo (Singapore) Pte. Ltd. | Projecting light at angle corresponding to the field of view of a camera |
US20150341551A1 (en) * | 2014-05-20 | 2015-11-26 | Lenovo (Singapore) Pte. Ltd. | Projecting light at angle corresponding to the field of view of a camera |
US20170242319A1 (en) * | 2014-08-06 | 2017-08-24 | Kevin J. WARRIAN | Orientation System For Image Recording Device |
US10031400B2 (en) * | 2014-08-06 | 2018-07-24 | Kevin J. WARRIAN | Orientation system for image recording device |
US9310667B2 (en) * | 2014-08-06 | 2016-04-12 | Kevin J. WARRIAN | Orientation system for image recording devices |
US10999480B2 (en) | 2014-08-14 | 2021-05-04 | Atheer, Inc. | Methods for camera movement compensation |
US10412272B2 (en) | 2014-08-14 | 2019-09-10 | Atheer, Inc. | Methods for camera movement compensation |
US20160048216A1 (en) * | 2014-08-14 | 2016-02-18 | Ryan Fink | Methods for camera movement compensation for gesture detection and object recognition |
US10116839B2 (en) * | 2014-08-14 | 2018-10-30 | Atheer Labs, Inc. | Methods for camera movement compensation for gesture detection and object recognition |
US20160119552A1 (en) * | 2014-10-24 | 2016-04-28 | Lg Electronics Inc. | Mobile terminal and controlling method thereof |
US9723222B2 (en) * | 2014-10-24 | 2017-08-01 | Lg Electronics Inc. | Mobile terminal with a camera and method for capturing an image by the mobile terminal in self-photography mode |
US10623619B2 (en) | 2014-10-31 | 2020-04-14 | International Business Machines Corporation | Image-capture-scope indication device for image-capture apparatus |
US10375281B2 (en) | 2014-10-31 | 2019-08-06 | International Business Machines Corporation | Image-capture-scope indication device for image-capture apparatus |
US11509817B2 (en) * | 2014-11-03 | 2022-11-22 | Robert John Gove | Autonomous media capturing |
US20160148648A1 (en) * | 2014-11-20 | 2016-05-26 | Facebook, Inc. | Systems and methods for improving stabilization in time-lapse media content |
US10165186B1 (en) * | 2015-06-19 | 2018-12-25 | Amazon Technologies, Inc. | Motion estimation based video stabilization for panoramic video from multi-camera capture device |
US10447926B1 (en) | 2015-06-19 | 2019-10-15 | Amazon Technologies, Inc. | Motion estimation based video compression and encoding |
US9648223B2 (en) * | 2015-09-04 | 2017-05-09 | Microvision, Inc. | Laser beam scanning assisted autofocus |
US9578221B1 (en) * | 2016-01-05 | 2017-02-21 | International Business Machines Corporation | Camera field of view visualizer |
US20170272647A1 (en) * | 2016-03-17 | 2017-09-21 | Kabushiki Kaisha Toshiba | Imaging support apparatus, imaging support method, and computer program product |
US10212336B2 (en) * | 2016-03-17 | 2019-02-19 | Kabushiki Kaisha Toshiba | Imaging support apparatus, imaging support method, and computer program product |
US20170358144A1 (en) * | 2016-06-13 | 2017-12-14 | Julia Schwarz | Altering properties of rendered objects via control points |
US10140776B2 (en) * | 2016-06-13 | 2018-11-27 | Microsoft Technology Licensing, Llc | Altering properties of rendered objects via control points |
WO2018075367A1 (en) * | 2016-10-18 | 2018-04-26 | Light Labs Inc. | Methods and apparatus for receiving, storing and/or using camera settings and/or user preference information |
CN110084087A (en) * | 2016-10-26 | 2019-08-02 | 奥康科技有限公司 | For analyzing image and providing the wearable device and method of feedback |
US10313552B2 (en) * | 2016-10-26 | 2019-06-04 | Orcam Technologies Ltd. | Systems and methods for providing visual feedback of a field of view |
US11372244B2 (en) * | 2017-12-25 | 2022-06-28 | Goertek Technology Co., Ltd. | Laser beam scanning display device and augmented reality glasses |
US11410413B2 (en) | 2018-09-10 | 2022-08-09 | Samsung Electronics Co., Ltd. | Electronic device for recognizing object and method for controlling electronic device |
US10771696B2 (en) * | 2018-11-26 | 2020-09-08 | Sony Corporation | Physically based camera motion compensation |
US20200169663A1 (en) * | 2018-11-26 | 2020-05-28 | Sony Corporation | Physically based camera motion compensation |
US20220060572A1 (en) * | 2018-12-30 | 2022-02-24 | Sang Chul Kwon | Foldable mobile phone |
US11616867B2 (en) * | 2018-12-30 | 2023-03-28 | Sang Chul Kwon | Foldable mobile phone |
US11289078B2 (en) * | 2019-06-28 | 2022-03-29 | Intel Corporation | Voice controlled camera with AI scene detection for precise focusing |
Also Published As
Publication number | Publication date |
---|---|
US20130021504A1 (en) | 2013-01-24 |
US20130021490A1 (en) | 2013-01-24 |
US9092861B2 (en) | 2015-07-28 |
US20130021483A1 (en) | 2013-01-24 |
US20130021489A1 (en) | 2013-01-24 |
US20130021512A1 (en) | 2013-01-24 |
US20130021484A1 (en) | 2013-01-24 |
US20130021488A1 (en) | 2013-01-24 |
US20130022116A1 (en) | 2013-01-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20130021491A1 (en) | Camera Device Systems and Methods | |
CN111066315B (en) | Apparatus, method and readable medium configured to process and display image data | |
US9638989B2 (en) | Determining motion of projection device | |
JP6075122B2 (en) | System, image projection apparatus, information processing apparatus, information processing method, and program | |
KR101237673B1 (en) | A method in relation to acquiring digital images | |
US11736792B2 (en) | Electronic device including plurality of cameras, and operation method therefor | |
CN105705993A (en) | Controlling a camera with face detection | |
JP6171353B2 (en) | Information processing apparatus, system, information processing method, and program | |
US9323339B2 (en) | Input device, input method and recording medium | |
KR102655625B1 (en) | Method and photographing device for controlling the photographing device according to proximity of a user | |
JP2011211493A (en) | Imaging apparatus, display method, and program | |
JP6892524B2 (en) | Slow motion video capture based on target tracking | |
JP7110443B2 (en) | Shooting method and shooting device, electronic equipment, storage medium | |
JP2010034820A (en) | Projector, control method of projector, and control program | |
JPWO2018150569A1 (en) | Gesture recognition device, gesture recognition method, projector including gesture recognition device, and video signal supply device | |
EP1579894A3 (en) | Gaming machine | |
JP2018061729A (en) | Image processing system and control method thereof | |
CN107852461B (en) | Method and apparatus for performing image capture | |
JP2005078291A (en) | Image projecting and displaying device, pointing position detecting method, program and recording medium | |
JP2018006803A (en) | Imaging apparatus, control method for imaging apparatus, and program | |
JP2018142944A (en) | Information processing apparatus, information processing method, and program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: BROADCOM CORPORATION, CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:GORDON LEE, CHONG MING;JAMES, GERAINT;BENNETT, JAMES D.;AND OTHERS;SIGNING DATES FROM 20120229 TO 20120306;REEL/FRAME:028041/0876 |
|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |
|
AS | Assignment |
Owner name: BANK OF AMERICA, N.A., AS COLLATERAL AGENT, NORTH CAROLINA Free format text: PATENT SECURITY AGREEMENT;ASSIGNOR:BROADCOM CORPORATION;REEL/FRAME:037806/0001 Effective date: 20160201 |
|
AS | Assignment |
Owner name: AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD., SINGAPORE Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:BROADCOM CORPORATION;REEL/FRAME:041706/0001 Effective date: 20170120 |
|
AS | Assignment |
Owner name: BROADCOM CORPORATION, CALIFORNIA Free format text: TERMINATION AND RELEASE OF SECURITY INTEREST IN PATENTS;ASSIGNOR:BANK OF AMERICA, N.A., AS COLLATERAL AGENT;REEL/FRAME:041712/0001 Effective date: 20170119 |