US20130293461A1 - Method And System For Determining How To Handle Processing Of An Image Based On Motion - Google Patents

Method And System For Determining How To Handle Processing Of An Image Based On Motion

Info

Publication number
US20130293461A1
Authority
US
United States
Prior art keywords
image
series
scene
faces
image samples
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US13/932,268
Inventor
Phil Elwell
Naushirwan Patuck
Benjamin Sewell
David Plowman
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Avago Technologies International Sales Pte Ltd
Original Assignee
Broadcom Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
2010-04-01
Filing date
2013-07-01
Publication date
2013-11-07
Application filed by Broadcom Corp
Priority to US13/932,268
Assigned to BROADCOM CORPORATION: ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: PATUCK, NAUSHIRWAN; PLOWMAN, DAVID; SEWELL, BENJAMIN; ELWELL, PHIL
Publication of US20130293461A1
Assigned to BANK OF AMERICA, N.A., AS COLLATERAL AGENT: PATENT SECURITY AGREEMENT. Assignors: BROADCOM CORPORATION
Assigned to AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD.: ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: BROADCOM CORPORATION
Assigned to BROADCOM CORPORATION: TERMINATION AND RELEASE OF SECURITY INTEREST IN PATENTS. Assignors: BANK OF AMERICA, N.A., AS COLLATERAL AGENT
Legal status: Abandoned (Current)

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/61Control of cameras or camera modules based on recognised objects
    • H04N23/611Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Studio Devices (AREA)
  • Image Analysis (AREA)

Abstract

A mobile multimedia device may be operable to initiate capture of a series of image samples of a scene, where the scene may comprise one or more objects that may be identifiable by the mobile multimedia device. An image for the scene may be determined by the mobile multimedia device utilizing the captured image samples based on motion associated with the identifiable objects.

Description

    CROSS REFERENCE TO RELATED APPLICATIONS
  • This application is a continuation of U.S. application Ser. No. 12/763,334, filed Apr. 20, 2010, which claims priority to, and claims benefit from, U.S. Provisional Application Ser. No. 61/319,971, which was filed on Apr. 1, 2010.
  • The above stated applications are hereby incorporated herein by reference in their entirety.
  • TECHNICAL FIELD
  • Certain embodiments of the invention relate to communication systems. More specifically, certain embodiments of the invention relate to a method and system for determining how to handle processing of an image based on motion.
  • BACKGROUND
  • Image and video capabilities may be incorporated into a wide range of devices such as, for example, mobile phones, digital televisions, digital direct broadcast systems, digital recording devices, gaming consoles and the like. Mobile phones with built-in cameras, or camera phones, have become prevalent in the mobile phone market, due to the low cost of CMOS image sensors and the ever increasing customer demand for more advanced mobile phones with image and video capabilities. As camera phones have become more widespread, their usefulness has been demonstrated in many applications, such as casual photography, and they have also been utilized in more serious applications such as crime prevention, recording crimes as they occur, and news reporting.
  • Historically, the resolution of camera phones has been limited in comparison to typical digital cameras, due to the fact that they must be integrated into the small package of a mobile handset, limiting both the image sensor and lens size. In addition, because of the stringent power requirements of mobile handsets, large image sensors with advanced processing have been difficult to incorporate. However, due to advancements in image sensors, multimedia processors, and lens technology, the resolution of camera phones has steadily improved, rivaling that of some digital cameras.
  • Further limitations and disadvantages of conventional and traditional approaches will become apparent to one of skill in the art, through comparison of such systems with the present invention as set forth in the remainder of the present application with reference to the drawings.
  • BRIEF SUMMARY OF THE INVENTION
  • A system and/or method for determining how to handle processing of an image based on motion, substantially as shown in and/or described in connection with at least one of the figures, as set forth more completely in the claims.
  • Various advantages, aspects and novel features of the present invention, as well as details of an illustrated embodiment thereof, will be more fully understood from the following description and drawings.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a block diagram illustrating an exemplary mobile multimedia system that is operable to determine how to handle processing of an image based on motion, in accordance with an embodiment of the invention.
  • FIG. 2 is a block diagram illustrating an exemplary image of a scene that is determined based on a tolerable amount of motion associated with identifiable objects, in accordance with an embodiment of the invention.
  • FIG. 3 is a block diagram illustrating an exemplary image of a scene that is determined based on a gesture received from an identifiable object, in accordance with an embodiment of the invention.
  • FIG. 4 is a flow chart illustrating exemplary steps for determining how to handle processing of an image based on motion, in accordance with an embodiment of the invention.
  • DETAILED DESCRIPTION
  • Certain embodiments of the invention can be found in a method and system for determining how to handle processing of an image based on motion. In various embodiments of the invention, a mobile multimedia device may be operable to initiate capture of a series of image samples of a scene, where the scene may comprise one or more objects that may be identifiable by the mobile multimedia device. An image for the scene may be determined by the mobile multimedia device, from the captured series of image samples, based on motion associated with one or more of the identifiable objects. As soon as the image for the scene has been determined, the capture of the series of image samples may be terminated. In this regard, the mobile multimedia device may be operable to compare a newly captured image sample with a previously captured consecutive image sample during the process of capturing a series of image samples. An amount of motion associated with one or more of the identifiable objects may then be determined based on the result of the comparison. In an exemplary embodiment of the invention, in instances when the determined amount of motion associated with one or more of the identifiable objects is below a particular threshold value, the newly captured image sample may be selected as the image for the scene. In this regard, for example, the particular threshold may be set in such a way that an image of the scene may be determined and/or recorded by the mobile multimedia device while one or more of the identifiable objects in the scene are still or are within a tolerable amount of motion or movement.
  • The identifiable objects may comprise, for example, faces which may be identified utilizing face detection. The motion may be due to, for example, a gesture received from one or more of the identified faces. The gesture may comprise, for example, a wink and/or a smile. The smile may be identified, for example, utilizing smile detection.
  • In another exemplary embodiment of the invention, in instances when the determined amount of motion associated with one or more of the identifiable objects is above a particular threshold value, the newly captured image sample may be selected as the image for the scene. In this regard, for example, the particular threshold may be set in such a way that an image of the scene may be determined and/or recorded by the mobile multimedia device while a gesture such as, for example, a wink or a smile from one or more of the identified faces is detected.
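  • The two embodiments above describe the same capture-and-compare loop, differing only in whether selection is triggered by motion falling below or rising above the threshold. The sketch below is a minimal, purely illustrative Python model of that loop; the names determine_image, motion_between and select_on are assumptions made for this sketch and are not taken from the patent.

```python
from typing import Callable, Iterator
import numpy as np

def determine_image(samples: Iterator[np.ndarray],
                    motion_between: Callable[[np.ndarray, np.ndarray], float],
                    threshold: float,
                    select_on: str = "still") -> np.ndarray:
    """Pick an image for the scene from a stream of consecutively captured samples.

    select_on="still":   select the newest sample once motion drops below the threshold.
    select_on="gesture": select the newest sample once motion rises above the threshold.
    """
    previous = next(samples)                      # first captured image sample
    for current in samples:                       # each newly captured image sample
        amount = motion_between(current, previous)
        if select_on == "still" and amount < threshold:
            return current                        # subjects are still or within tolerable motion
        if select_on == "gesture" and amount > threshold:
            return current                        # a gesture (e.g. a wink or smile) caused the motion
        previous = current                        # keep comparing consecutive samples
    return previous                               # fallback if the stream ends without a selection
```

  Returning from the loop corresponds to terminating the capture of the series of image samples as soon as the image for the scene has been determined.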
  • FIG. 1 is a block diagram illustrating an exemplary mobile multimedia system that is operable to determine how to handle processing of an image based on motion, in accordance with an embodiment of the invention. Referring to FIG. 1, there is shown a mobile multimedia system 100. The mobile multimedia system 100 may comprise a mobile multimedia device 105, a TV 105 h, a PC 105 k, an external camera 105 m, an external memory 105 n, an external LCD display 105 p and a scene 110. The mobile multimedia device 105 may be a mobile phone or other handheld communication device.
  • The mobile multimedia device 105 may comprise suitable logic, circuitry, interfaces and/or code that may be operable to communicate radio signals across a wireless communication network. The mobile multimedia device 105 may be operable to process image, video and/or multimedia data. The mobile multimedia device 105 may comprise a mobile multimedia processor (MMP) 105 a, a memory 105 t, a processor 105 f, an antenna 105 d, an audio block 105 s, a radio frequency (RF) block 105 e, an LCD display 105 b, a keypad 105 c and a camera 105 g.
  • The mobile multimedia processor (MMP) 105 a may comprise suitable logic, circuitry, interfaces, and/or code that may be operable to perform image, video and/or multimedia processing for the mobile multimedia device 105. For example, the MMP 105 a may be designed and optimized for video record/playback, mobile TV and 3D mobile gaming. The MMP 105 a may perform a plurality of image processing techniques such as, for example, filtering, demosaic, lens shading correction, defective pixel correction, white balance, image compensation, Bayer interpolation, color transformation and post filtering. The MMP 105 a may also comprise integrated interfaces, which may be utilized to support one or more external devices coupled to the mobile multimedia device 105. For example, the MMP 105 a may support connections to a TV 105 h, an external camera 105 m, and an external LCD display 105 p. The MMP 105 a may be communicatively coupled to the memory 105 t and/or the external memory 105 n. In an exemplary embodiment of the invention, the MMP 105 a may be operable to determine and/or record an image of the scene 110 utilizing a series of captured image samples of the scene 110 based on motion associated with one or more identifiable objects in the scene 110. The identifiable objects may comprise, for example, the faces 110 a-110 c. The MMP 105 a may comprise a motion detection module 105 u.
  • The motion detection module 105 u may comprise suitable logic, circuitry, interfaces and/or code that may be operable to detect motion such as, for example, a wink 110 e or a smile 110 d in the scene 110. The motion detection may be achieved by comparing the current image with a reference image and counting the number of different pixels.
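  • As a concrete illustration of the pixel-counting comparison described above, the minimal NumPy sketch below counts pixels that differ from a reference image by more than a tolerance; the tolerance value is an assumption made for this sketch, not a figure from the patent.

```python
import numpy as np

def count_different_pixels(current: np.ndarray, reference: np.ndarray, tol: int = 16) -> int:
    """Count pixels whose intensity differs from the reference image by more than tol."""
    diff = np.abs(current.astype(np.int16) - reference.astype(np.int16))
    return int(np.count_nonzero(diff > tol))
```

  Divided by the total number of pixels, a count like this could serve as the motion_between callback in the loop sketched earlier.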
  • The processor 105 f may comprise suitable logic, circuitry, interfaces, and/or code that may be operable to control operations and processes in the mobile multimedia device 105. The processor 105 f may be operable to process signals from the RF block 105 e and/or the MMP 105 a.
  • The memory 105 t may comprise suitable logic, circuitry, interfaces and/or code that may be operable to store information such as executable instructions, data and/or database that may be utilized by the processor 105 f and the multimedia processor 105 a. The memory 105 t may comprise RAM, ROM, low latency nonvolatile memory such as flash memory and/or other suitable electronic data storage.
  • In operation, the mobile multimedia device 105 may receive RF signals via the antenna 105 d. Received RF signals may be processed by the RF block 105 e and the RF signals may be further processed by the processor 105 f. Audio and/or video data may be received from the external camera 105 m, and image data may be received via the integrated camera 105 g. During processing, the MMP 105 a may utilize the external memory 105 n for storing of processed data. Processed audio data may be communicated to the audio block 105 s and processed video data may be communicated to the LCD 105 b, the external LCD 105 p and/or the TV 105 h, for example. The keypad 105 c may be utilized for communicating processing commands and/or other data, which may be required for image, audio or video data processing by the MMP 105 a.
  • In an exemplary embodiment of the invention, the camera 105 g may be operable to initiate capture of a series of image samples of the scene 110. For example, a shutter release button may be pressed to trigger the initiation of capturing the series of image samples of the scene 110. The scene 110 may comprise one or more objects such as the faces 110 a-110 c that may be identifiable by the MMP 105 a. An image for the scene 110 may be determined by the MMP 105 a utilizing the captured image samples based on motion associated with one or more of the identifiable objects such as the faces 110 a-110 c. As soon as the image for the scene 110 has been determined, the capture of the series of image samples may be terminated. In this regard, the MMP 105 a may be operable to compare a newly captured image sample with a previously captured consecutive image sample during the process of capturing a series of image samples. An amount of motion associated with one or more of the identifiable objects such as the faces 110 a-110 c may then be determined by the motion detection module 105 u in the MMP 105 a, based on the result of the comparison.
  • In an exemplary embodiment of the invention, in instances when the determined amount of motion associated with one or more of the identifiable objects such as the faces 110 a-110 c is below a particular threshold value, the newly captured image sample may be selected as the image for the scene 110. In this regard, for example, the particular threshold may be set in such a way that an image of the scene 110 may be determined and/or recorded by the MMP 105 a while one or more of the identifiable objects such as the faces 110 a-110 c in the scene 110 are still or are within a tolerable amount of movement. The camera 105 g may operate in a number of different camera modes such as, for example, shutter priority mode, aperture priority mode, portrait mode, landscape mode or action mode. Accordingly, the threshold may be set differently for different camera modes. For example, instead of operating in the portrait mode, the camera 105 g may operate in the action or sports mode. In this regard, for example, the threshold for the action mode may be set higher than or different from the threshold for the portrait mode.
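  • One simple way to model the mode-dependent threshold described above is a lookup table keyed by camera mode. The numeric values below are illustrative placeholders only; the patent does not specify threshold values.

```python
# Hypothetical thresholds, expressed as the fraction of pixels allowed to differ
# between consecutive image samples (values are placeholders for illustration).
MODE_THRESHOLDS = {
    "portrait": 0.02,    # subjects expected to be nearly still
    "landscape": 0.02,
    "action": 0.10,      # faster-moving subjects tolerate more inter-frame motion
    "sports": 0.10,
}

def threshold_for(mode: str, default: float = 0.05) -> float:
    """Return the motion threshold for the given camera mode."""
    return MODE_THRESHOLDS.get(mode, default)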
  • An identifiable object in the scene 110 may comprise, for example, a face such as the face 110 a, which may be identified employing face detection. The face detection may determine the locations and sizes of the faces 110 a-110 c such as human faces in arbitrary images. The face detection may detect facial features and ignore other items and/or features, such as buildings, trees and bodies. The motion may be due to, for example, a gesture received from one or more of the identified faces 110 a-110 c. The gesture may comprise, for example, a wink 110 e and/or a smile 110 d. The smile 110 d may be identified, for example, employing smile detection. The smile detection may detect open eyes and/or upturned mouth associated with a smile such as the smile 110 d in the scene 110.
  • In another exemplary embodiment of the invention, in instances when the determined amount of motion associated with one or more of the identifiable objects such as the faces 110 a-110 c is above a particular threshold value, the newly captured image sample may be selected as the image for the scene 110. In this regard, for example, the particular threshold may be set in such a way that an image of the scene 110 may be determined and/or recorded by the MMP 105 a while a gesture such as, for example, a wink 110 e or a smile 110 d from one or more of the identified faces 110 a-110 c is detected. Depending on the camera mode in which the camera 105 g may operate and/or the environmental conditions, the threshold may be set differently.
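  • The patent does not name a particular face or smile detector. As one possible realization, the sketch below uses the Haar cascade classifiers bundled with OpenCV (an implementation assumption, requiring the opencv-python package) to find faces and then look for a smile inside each face region.

```python
import cv2

# Haar cascades shipped with OpenCV (implementation assumption; the patent names no detector).
face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
smile_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_smile.xml")

def detect_smiling_faces(gray_image):
    """Return bounding boxes of detected faces in which a smile is also detected."""
    faces = face_cascade.detectMultiScale(gray_image, scaleFactor=1.1, minNeighbors=5)
    smiling = []
    for (x, y, w, h) in faces:
        face_region = gray_image[y:y + h, x:x + w]   # restrict the smile search to the face
        smiles = smile_cascade.detectMultiScale(face_region, scaleFactor=1.7, minNeighbors=20)
        if len(smiles) > 0:
            smiling.append((x, y, w, h))
    return smiling
```

  A gesture-triggered variant of the capture loop could then select the newest sample as soon as a function like detect_smiling_faces reports a smile on one of the identified faces.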
  • FIG. 2 is a block diagram illustrating an exemplary image of a scene that is determined based on a tolerable amount of motion associated with identifiable objects, in accordance with an embodiment of the invention. Referring to FIG. 2, there is shown a series of image samples of a scene such as the scene 210, of which image samples 201, 202, 203 are illustrated and an image 204 of the scene 210. The scene 210 may comprise a plurality of identifiable objects, of which the faces 210 a, 210 b, 210 c are illustrated. The image sample 201 may comprise a plurality of faces, of which the faces 201 a, 201 b, 201 c are illustrated. The image sample 202 may comprise a plurality of faces, of which the faces 202 a, 202 b, 202 c are illustrated. The image sample 203 may comprise a plurality of faces, of which the faces 203 a, 203 b, 203 c are illustrated. The image 204 may comprise a plurality of faces, of which the faces 204 a, 204 b, 204 c are illustrated.
  • After the camera 105 g initiates capture of a series of image samples, the image sample 201 is captured first and the image sample 202 is captured next. In an exemplary embodiment of the invention, the MMP 105 a may be operable to compare the image sample 202 with the image sample 201. For example, the faces 202 a, 202 b, 202 c in the image sample 202 are compared with the faces 201 a, 201 b, 201 c in the image sample 201, respectively. As illustrated in FIG. 2, a large amount of motion, which is above a particular threshold value for a portrait, may be detected or determined by the motion detection module 105 u in the MMP 105 a. The amount of motion may be due to, for example, the eyes opening on the face 202 a and smiles forming on the faces 202 b and 202 c. Since the amount of motion is above the particular threshold value, the image sample 203 is then captured during the process of capturing the series of image samples. The MMP 105 a may then be operable to compare the image sample 203 with the image sample 202. For example, the faces 203 a, 203 b, 203 c in the image sample 203 are compared with the faces 202 a, 202 b, 202 c in the image sample 202, respectively. As illustrated in FIG. 2, the result of the comparison between the image sample 203 and the image sample 202 may indicate that the faces 203 a, 203 b, 203 c may be still or there may be a small amount of motion, which is detected by the motion detection module 105 u. The amount of motion that is detected may be below the particular threshold value for a portrait. Accordingly, the image sample 203 may be selected as the image 204 for the scene 210. The capture of the series of image samples may then be terminated.
  • In the exemplary embodiment of the invention illustrated in FIG. 2, there are shown three faces 210 a-210 c in the scene 210, three image samples 201, 202, 203, and three faces on an image sample such as the faces 201 a-201 c on the image sample 201. Notwithstanding, the invention is not so limited. The number of the image samples and the number of the faces may be different. Different identifiable objects in the scene 210 may be illustrated.
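  • Reusing the hypothetical helpers sketched above (determine_image, count_different_pixels and threshold_for), the FIG. 2 scenario, in which sampling continues until the faces are still, can be exercised with synthetic frames standing in for the image samples 201, 202 and 203.

```python
import numpy as np

frames = iter([
    np.zeros((240, 320), np.uint8),           # stand-in for image sample 201
    np.full((240, 320), 40, np.uint8),        # sample 202: large change (eyes open, smiles form)
    np.full((240, 320), 41, np.uint8),        # sample 203: nearly identical to sample 202
])

def motion(current, previous):
    return count_different_pixels(current, previous) / current.size   # fraction of changed pixels

image = determine_image(frames, motion, threshold_for("portrait"), select_on="still")
# image corresponds to sample 203: its difference from sample 202 falls below the portrait threshold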
  • FIG. 3 is a block diagram illustrating an image of a scene that is determined based on a gesture received from an identifiable object, in accordance with an embodiment of the invention. Referring to FIG. 3, there is shown a series of image samples of a scene such as the scene 310, of which image samples 301, 302, 303 are illustrated and an image 304 of the scene 310. The scene 310 may comprise a plurality of identifiable objects, of which the faces 310 a, 310 b, 310 c are illustrated. The image sample 301 may comprise a plurality of faces, of which the faces 301 a, 301 b, 301 c are illustrated. The image sample 302 may comprise a plurality of faces, of which the faces 302 a, 302 b, 302 c are illustrated. The image sample 303 may comprise a plurality of faces, of which the faces 303 a, 303 b, 303 c are illustrated. The image 304 may comprise a plurality of faces, of which the faces 304 a, 304 b, 304 c are illustrated.
  • After the camera 105 g initiates capture of a series of image samples, the image sample 301 is captured first and the image sample 302 is captured next. In an exemplary embodiment of the invention, the MMP 105 a may be operable to compare the image sample 302 with the image sample 301. For example, the faces 302 a, 302 b, 302 c in the image sample 302 are compared with the faces 301 a, 301 b, 301 c in the image sample 301, respectively. As illustrated in FIG. 3, the result of the comparison between the image sample 302 and the image sample 301 may indicate that the faces 302 a, 302 b, 302 c may be still or possess a tolerable amount of motion. The amount of motion may be detected by the motion detection module 105 u. In this regard, the motion detection module 105 u may be operable to determine that the amount of motion that is detected is below a particular threshold value. Since the amount of motion is below the particular threshold value, a gesture from one of the faces 302 a, 302 b, 302 c may not be detected by the motion detection module 105 u. Accordingly, the image sample 303 is then captured during the process of capturing the series of image samples.
  • The MMP 105 a may then be operable to compare the image sample 303 with the image sample 302. For example, the faces 303 a, 303 b, 303 c in the image sample 303 are compared with the faces 302 a, 302 b, 302 c in the image sample 302, respectively. As illustrated in FIG. 3, a large amount of motion, which is above the particular threshold value for a portrait, may be detected or determined by the motion detection module 105 u in the MMP 105 a. The amount of motion may be due to a gesture such as, for example, the smile 303 d on the face 303 b. Since the amount of motion is above the particular threshold value due to the gesture such as the smile 303 d, the image sample 303 may be selected as the image 304 for the scene 310. The capture of the series of image samples may then be terminated.
  • In the exemplary embodiment of the invention illustrated in FIG. 3, there are shown three faces 310 a-310 c in the scene 310, three image samples 301, 302, 303, and three faces on an image sample such as the faces 301 a-301 c on the image sample 301. Notwithstanding, the invention is not so limited. The number of the image samples and the number of the faces may be different. Different identifiable objects and different gestures in the scene 310 may be illustrated.
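  • The FIG. 3 scenario runs the same hypothetical loop in the gesture-triggered direction, reusing the frames-and-motion helpers from the previous sketch: sampling continues while the faces stay still and stops at the first sample whose motion, here caused by the smile 303 d, exceeds the threshold.

```python
frames = iter([
    np.zeros((240, 320), np.uint8),           # stand-in for image sample 301
    np.full((240, 320), 2, np.uint8),         # sample 302: faces essentially still
    np.full((240, 320), 60, np.uint8),        # sample 303: the smile produces a large change
])

image = determine_image(frames, motion, threshold_for("portrait"), select_on="gesture")
# image corresponds to sample 303, selected because the gesture pushed motion above the threshold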
  • FIG. 4 is a flow chart illustrating exemplary steps for determining how to handle processing of an image based on motion, in accordance with an embodiment of the invention. Referring to FIG. 4, the exemplary steps start at step 401. In step 402, the mobile multimedia device 105 may be operable to identify a scene 110 from a position or particular viewing angle. In step 403, the camera 105 g in the mobile multimedia device 105 may be operable to initiate capture of a series of image samples 201, 202, 203, of the scene 210 from the position or viewing angle, where the scene 210 may comprise one or more identifiable objects such as the faces 210 a-210 c. In step 404, the MMP 105 a in the mobile multimedia device 105 may be operable to determine and/or record an image 204 for the scene 210, from the captured series of image samples 201, 202, 203, based on motion associated with one or more of the identifiable objects such as the faces 210 a-210 c. In step 405, the capture of the series of image samples may be terminated when the image 204 for the scene 210 has been determined. In step 406, the LCD 105 b in the mobile multimedia device 105 may be operable to display the determined image 204 of the scene 210. The exemplary steps may proceed to the end step 407.
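  • The FIG. 4 steps can be summarized as a short driver routine; the camera and display objects and their methods below are hypothetical stand-ins for the camera 105 g and the LCD 105 b, not APIs defined by the patent.

```python
def capture_scene(camera, display, motion_between, threshold, select_on="still"):
    """Mirror the FIG. 4 flow: sample the scene, determine an image, stop sampling, display it."""
    samples = camera.stream_samples()                 # step 403: initiate capture of image samples
    image = determine_image(samples, motion_between,  # step 404: determine the image for the scene
                            threshold, select_on=select_on)
    camera.stop_sampling()                            # step 405: terminate the series of image samples
    display.show(image)                               # step 406: display the determined image
    return image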
  • In various embodiments of the invention, a camera 105 g in a mobile multimedia device 105 may be operable to initiate capture of a series of image samples such as the image samples 201, 202, 203 of a scene 210. The scene 210 may comprise one or more objects that may be identifiable by the MMP 105 a in the mobile multimedia device 105. An image such as the image 204 to be created for the scene 210 may be determined by the MMP 105 a in the mobile multimedia device 105, from the captured series of image samples 201, 202, 203, based on motion associated with one or more of the identifiable objects. As soon as the image 204 for the scene 210 has been determined, the capture of the series of image samples may be terminated. In this regard, the MMP 105 a in the mobile multimedia device 105 may be operable to compare a newly captured image sample such as the image sample 203 with a previously captured consecutive image sample such as the image sample 202 during the process of capturing the series of image samples 201, 202, 203. An amount of motion associated with one or more of the identifiable objects may then be determined by the motion detection module 105 u based on the result of the comparison.
  • In an exemplary embodiment of the invention, in instances when the determined amount of motion associated with one or more of the identifiable objects such as the faces 203 a, 203 b, 203 c is below a particular threshold value, the newly captured image sample 203 may be selected as the image 204 for the scene 210. In this regard, for example, the particular threshold may be set in such a way that an image 204 of the scene 210 may be determined by the MMP 105 a while one or more of the identifiable objects such as the faces 203 a, 203 b, 203 c in the scene 210 are still or are within a tolerable amount of motion or movement.
  • The identifiable objects may comprise, for example, faces 110 a-110 c which may be identified utilizing face detection. The motion may be due to, for example, a gesture received from one or more of the identified faces 110 a-110 c. The gesture may comprise, for example, a wink 110 e and/or a smile 110 d. The smile 110 d may be identified, for example, utilizing smile detection.
  • In another exemplary embodiment of the invention, in instances when the determined amount of motion associated with one or more of the identifiable objects such as the faces 303 a, 303 b, 303 c is above a particular threshold value, the newly captured image sample such as the image sample 303 may be selected as the image 304 for the scene 310. In this regard, for example, the particular threshold may be set in such a way that an image 304 of the scene 310 may be determined by the MMP 105 a in the mobile multimedia device 105 while a gesture such as, for example, a smile 303 d from the identified face 303 b is detected.
  • Other embodiments of the invention may provide a non-transitory computer readable medium and/or storage medium, and/or a non-transitory machine readable medium and/or storage medium, having stored thereon, a machine code and/or a computer program having at least one code section executable by a machine and/or a computer, thereby causing the machine and/or computer to perform the steps as described herein for determining how to handle processing of an image based on motion.
  • Accordingly, the present invention may be realized in hardware, software, or a combination of hardware and software. The present invention may be realized in a centralized fashion in at least one computer system or in a distributed fashion where different elements are spread across several interconnected computer systems. Any kind of computer system or other apparatus adapted for carrying out the methods described herein is suited. A typical combination of hardware and software may be a general-purpose computer system with a computer program that, when being loaded and executed, controls the computer system such that it carries out the methods described herein.
  • The present invention may also be embedded in a computer program product, which comprises all the features enabling the implementation of the methods described herein, and which when loaded in a computer system is able to carry out these methods. Computer program in the present context means any expression, in any language, code or notation, of a set of instructions intended to cause a system having an information processing capability to perform a particular function either directly or after either or both of the following: a) conversion to another language, code or notation; b) reproduction in a different material form.
  • While the present invention has been described with reference to certain embodiments, it will be understood by those skilled in the art that various changes may be made and equivalents may be substituted without departing from the scope of the present invention. In addition, many modifications may be made to adapt a particular situation or material to the teachings of the present invention without departing from its scope. Therefore, it is intended that the present invention not be limited to the particular embodiment disclosed, but that the present invention will include all embodiments falling within the scope of the appended claims.

Claims (20)

What is claimed is:
1. A method for processing images, the method comprising:
in a mobile multimedia device:
initiating capture of a series of image samples of a scene, wherein the scene comprises one or more objects that are identifiable by the mobile multimedia device; and
determining from the captured series of image samples, an image based on motion associated with the one or more identifiable objects.
2. The method according to claim 1, wherein the motion comprises a gesture.
3. The method according to claim 1, comprising recording the image in response to the gesture.
4. The method according to claim 1, wherein the one or more identifiable objects comprises one or more faces.
5. The method according to claim 4, comprising identifying the one or more faces for each of the captured series of image samples utilizing face detection.
6. The method according to claim 5, wherein the gesture comprises a smile.
7. The method according to claim 6, comprising identifying the smile for each of the captured series of image samples utilizing smile detection.
8. The method according to claim 5, wherein the gesture comprises a wink.
9. The method according to claim 1, comprising comparing a newly captured one of the series of image samples with a previously captured consecutive one of the series of image samples during the capture of the series of image samples.
10. The method according to claim 9, comprising determining an amount of motion associated with the one or more identifiable objects based on the comparison.
11. A system for processing images, the system comprising:
one or more processors for use in a mobile multimedia device, the one or more processors being operable to:
initiate capture of a series of image samples of a scene, wherein the scene comprises one or more objects that are identifiable by the mobile multimedia device; and
determine from the captured series of image samples, an image in response to a gesture associated with the one or more identifiable objects.
12. The system according to claim 11, wherein the one or more processors is operable to record an image of the series of image samples in response to the gesture.
13. The system according to claim 11, wherein the scene comprises one or more faces as the identifiable objects.
14. The system according to claim 13, wherein the one or more processors are operable to identify the one or more faces for each of the captured series of image samples utilizing face detection.
15. The system according to claim 14, wherein the one or more processors are operable to compare a current face in newly captured one of the series of image samples with a previous face in previously captured consecutive one of the series of image samples during the capture of the series of image samples.
16. The system according to claim 15, wherein the one or more processors are operable to determine an amount of motion associated with the one or more identifiable objects based on the comparison.
17. The system according to claim 11, wherein the gesture comprises a wink.
18. The system according to claim 11, wherein the gesture comprises a smile.
19. The system according to claim 11, wherein the gesture comprises eyes opening.
20. A system for processing images, the system comprising:
one or more processors for use in a mobile multimedia device, the one or more processors being operable to:
initiate capture of a series of image samples of a scene, wherein the scene comprises one or more faces that are identifiable by the mobile multimedia device;
compare the captured series of image samples;
determine, from the comparison, a gesture associated with the one or more faces; and
record an image in response to the gesture.
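The claims above recite a gesture-triggered capture pipeline: sample a series of images of a scene, identify faces in each sample, compare consecutive samples to detect a gesture (for example a smile or a wink) or to estimate how much the identified objects have moved, and record an image in response to the gesture. The following is an illustrative sketch only, not the claimed implementation: it approximates the flow of claims 1, 9-10 and 20 in Python using OpenCV's stock Haar cascades, and the function name capture_on_smile, the choice of cascades, the thresholds, and the crude motion estimate are all assumptions made for this example.

```python
# Illustrative sketch (assumed OpenCV-based approximation, not the claimed implementation):
# capture a series of image samples, detect faces in each sample, detect a smile gesture,
# estimate how much the faces moved between consecutive samples, and record the image
# in response to the gesture.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
smile_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_smile.xml")


def capture_on_smile(camera_index=0, max_samples=300, out_path="captured.jpg"):
    """Sample frames until a smile is detected inside a detected face, then record that frame."""
    cam = cv2.VideoCapture(camera_index)
    prev_faces = []
    try:
        for _ in range(max_samples):
            ok, frame = cam.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            faces = face_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)

            # Crude motion estimate: compare face centres in the newly captured sample
            # with those in the previously captured sample (no identity matching).
            motion = sum(
                abs((x + w / 2) - (px + pw / 2)) + abs((y + h / 2) - (py + ph / 2))
                for (x, y, w, h), (px, py, pw, ph) in zip(faces, prev_faces))
            prev_faces = list(faces)

            for (x, y, w, h) in faces:
                roi = gray[y:y + h, x:x + w]
                smiles = smile_cascade.detectMultiScale(roi, scaleFactor=1.7, minNeighbors=20)
                if len(smiles) > 0:
                    # Gesture detected: record the image in response to the smile.
                    cv2.imwrite(out_path, frame)
                    return out_path, motion
    finally:
        cam.release()
    return None, 0
```

Calling capture_on_smile() would sample frames from the default camera and save the first frame in which a smile is found inside a detected face, returning the saved path together with the latest motion estimate; a device could use that amount of motion, in the spirit of claims 9-10 and 15-16, to decide whether the sample is steady enough to keep.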
US13/932,268 2010-04-01 2013-07-01 Method And System For Determining How To Handle Processing Of An Image Based On Motion Abandoned US20130293461A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US13/932,268 US20130293461A1 (en) 2010-04-01 2013-07-01 Method And System For Determining How To Handle Processing Of An Image Based On Motion

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US31997110P 2010-04-01 2010-04-01
US12/763,334 US8503722B2 (en) 2010-04-01 2010-04-20 Method and system for determining how to handle processing of an image based on motion
US13/932,268 US20130293461A1 (en) 2010-04-01 2013-07-01 Method And System For Determining How To Handle Processing Of An Image Based On Motion

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US12/763,334 Continuation US8503722B2 (en) 2010-04-01 2010-04-20 Method and system for determining how to handle processing of an image based on motion

Publications (1)

Publication Number Publication Date
US20130293461A1 true US20130293461A1 (en) 2013-11-07

Family

ID=44709238

Family Applications (2)

Application Number Title Priority Date Filing Date
US12/763,334 Active 2032-01-12 US8503722B2 (en) 2010-04-01 2010-04-20 Method and system for determining how to handle processing of an image based on motion
US13/932,268 Abandoned US20130293461A1 (en) 2010-04-01 2013-07-01 Method And System For Determining How To Handle Processing Of An Image Based On Motion

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US12/763,334 Active 2032-01-12 US8503722B2 (en) 2010-04-01 2010-04-20 Method and system for determining how to handle processing of an image based on motion

Country Status (1)

Country Link
US (2) US8503722B2 (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9383814B1 (en) 2008-11-12 2016-07-05 David G. Capper Plug and play wireless video game
US10086262B1 (en) 2008-11-12 2018-10-02 David G. Capper Video motion capture for wireless gaming
US9586135B1 (en) 2008-11-12 2017-03-07 David G. Capper Video motion capture for wireless gaming
US9474956B2 (en) * 2013-07-22 2016-10-25 Misfit, Inc. Methods and systems for displaying representations of facial expressions and activity indicators on devices
US20150201124A1 (en) * 2014-01-15 2015-07-16 Samsung Electronics Co., Ltd. Camera system and method for remotely controlling compositions of self-portrait pictures using hand gestures
CN108737714A * 2018-03-21 2018-11-02 Beijing Orion Star Technology Co., Ltd. Photographing method and device

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7792335B2 (en) * 2006-02-24 2010-09-07 Fotonation Vision Limited Method and apparatus for selective disqualification of digital images
JP2005242567A (en) * 2004-02-25 2005-09-08 Canon Inc Movement evaluation device and method
JP4197019B2 * 2006-08-02 2008-12-17 Sony Corporation Imaging apparatus and facial expression evaluation apparatus
CN101520590B * 2008-02-29 2010-12-08 Hongfujin Precision Industry (Shenzhen) Co., Ltd. Camera and self portrait method
JP4770929B2 * 2009-01-14 2011-09-14 Sony Corporation Imaging apparatus, imaging method, and imaging program

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050200736A1 (en) * 2004-01-21 2005-09-15 Fuji Photo Film Co., Ltd. Photographing apparatus, method and program
US20100245614A1 (en) * 2009-03-31 2010-09-30 Casio Computer Co., Ltd. Image selection device and method for selecting image

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108093170A (en) * 2017-11-30 2018-05-29 Guangdong Oppo Mobile Telecommunications Corp., Ltd. User's photographic method, device and equipment

Also Published As

Publication number Publication date
US8503722B2 (en) 2013-08-06
US20110242344A1 (en) 2011-10-06

Similar Documents

Publication Publication Date Title
US20130293461A1 (en) Method And System For Determining How To Handle Processing Of An Image Based On Motion
JP6198958B2 (en) Method, apparatus, computer program, and computer-readable storage medium for obtaining a photograph
EP3228075B1 (en) Sensor configuration switching for adaptation of video capturing frame rate
US8149280B2 (en) Face detection image processing device, camera device, image processing method, and program
US8284256B2 (en) Imaging apparatus and computer readable recording medium
US7397611B2 (en) Image capturing apparatus, image composing method and storage medium
US12073589B2 (en) Camera device, imaging system, control method, and program
US20110305446A1 (en) Imaging apparatus, focus position detecting method, and computer program product
US20120249729A1 (en) Imaging device capable of combining images
CN110383335A (en) The background subtraction inputted in video content based on light stream and sensor
US10778903B2 (en) Imaging apparatus, imaging method, and program
US20150138076A1 (en) Communication device and method of processing incoming call by facial image
CN101459770B (en) Digital photographing apparatus and method of controlling the same
WO2015128897A1 (en) Digital cameras having reduced startup time, and related devices, methods, and computer program products
US20170328976A1 (en) Operation device, tracking system, operation method, and program
US8041137B2 (en) Tiled output mode for image sensors
US20110235856A1 (en) Method and system for composing an image based on multiple captured images
CN106993138B (en) Time-gradient image shooting device and method
US8593528B2 (en) Method and system for mitigating seesawing effect during autofocus
TW201308219A (en) Ultra-wide-angle imaging method and system using the same
WO2020110710A1 (en) Image-capturing device, image-capturing method, and program
CN109862252B (en) Image shooting method and device
WO2020116102A1 (en) Imaging device, imaging method, and program

Legal Events

Date Code Title Description
AS Assignment

Owner name: BROADCOM CORPORATION, CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:ELWELL, PHIL;PATUCK, NAUSHIRWAN;SEWELL, BENJAMIN;AND OTHERS;SIGNING DATES FROM 20100409 TO 20100420;REEL/FRAME:030816/0501

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION

AS Assignment

Owner name: BANK OF AMERICA, N.A., AS COLLATERAL AGENT, NORTH CAROLINA

Free format text: PATENT SECURITY AGREEMENT;ASSIGNOR:BROADCOM CORPORATION;REEL/FRAME:037806/0001

Effective date: 20160201

AS Assignment

Owner name: AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD., SINGAPORE

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:BROADCOM CORPORATION;REEL/FRAME:041706/0001

Effective date: 20170120

AS Assignment

Owner name: BROADCOM CORPORATION, CALIFORNIA

Free format text: TERMINATION AND RELEASE OF SECURITY INTEREST IN PATENTS;ASSIGNOR:BANK OF AMERICA, N.A., AS COLLATERAL AGENT;REEL/FRAME:041712/0001

Effective date: 20170119