US9898828B2 - Methods and systems for determining frames and photo composition within multiple frames - Google Patents

Methods and systems for determining frames and photo composition within multiple frames Download PDF

Info

Publication number
US9898828B2
US9898828B2 (Application US15/159,825)
Authority
US
United States
Prior art keywords
frames
frame
area corresponding
overlapped area
candidate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
US15/159,825
Other versions
US20160267680A1 (en)
Inventor
Bing-Sheng Lin
Yi-Chi Lin
Tai-Ling Lu
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
HTC Corp
Original Assignee
HTC Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by HTC Corp
Priority to US15/159,825
Assigned to HTC CORPORATION reassignment HTC CORPORATION ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: LIN, YI-CHI, LU, TAI-LING, LIN, BING-SHENG
Publication of US20160267680A1
Application granted
Publication of US9898828B2

Classifications

    • G06T7/2053
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/254 Analysis of motion involving subtraction of images
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04845 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/174 Segmentation; Edge detection involving the use of two or more images
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70 Circuitry for compensating brightness variation in the scene
    • H04N23/73 Circuitry for compensating brightness variation in the scene by influencing the exposure time
    • H04N5/2353
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/2621 Cameras specially adapted for the electronic generation of special effects during image pickup, e.g. digital cameras, camcorders, video cameras having integrated special effects capability
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/2625 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects for obtaining an image which is composed of images from a temporal image sequence, e.g. for a stroboscopic effect
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/76 Television signal recording
    • H04N5/91 Television signal processing therefor
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20224 Image subtraction

Definitions

  • the disclosure relates generally to image frame management, and, more particularly to methods and systems for determining frames and photo composition within multiple frames.
  • a handheld device may have telecommunications capabilities, e-mail message capabilities, image capture capabilities, an advanced address book management system, a media playback system, and various other functions. Due to increased convenience and functions of the devices, these devices have become necessities of life.
  • the image capture unit, such as a camera, takes images immediately one after another in a short amount of time. That is, when the continuous shot function is performed, a continuous image capture process is performed to continuously capture a plurality of images in sequence.
  • an inventive function called “dynamic continuous shot composition” may also be provided on the portable devices.
  • the dynamic continuous shot composition is a technique for image composition.
  • a camera can be set on a tripod, and several images with the same scene are continuously captured by the camera.
  • the moving object within the images is extracted and overlapped onto the last image, thereby presenting the dynamic effect of the track of the moving object.
  • the overlap of the moving object onto the last image can be achieved by using an image composition algorithm. It is understood that image composition algorithms are various and known in the art, so related descriptions are omitted here.
  • the respective images are continuously captured with a fixed time interval. If the time interval is too short or the moving speed of the object is too slow, the objects on the composed image may have a large overlapped portion, as shown in FIG. 1A. On the contrary, if the time interval is too long or the moving speed of the object is too fast, the objects may be scattered on the composed image, as shown in FIG. 1B, and the number of objects on the composed image may not be enough, making it difficult to present the dynamic effect.
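  • For intuition, this tradeoff reduces to simple arithmetic: the overlap between consecutive object positions depends on the product of moving speed and capture interval relative to the object's size. A toy sketch (all numbers hypothetical):

```python
def overlap_fraction(speed_px_per_s, interval_s, object_width_px):
    """Fraction of the object's width that overlaps between two
    consecutive captures (0 = no overlap, 1 = full overlap)."""
    displacement = speed_px_per_s * interval_s
    return max(0.0, 1.0 - displacement / object_width_px)

# A slow object (or short interval) barely moves between shots -> FIG. 1A:
print(overlap_fraction(speed_px_per_s=20, interval_s=0.1, object_width_px=100))   # large overlap
# A fast object (or long interval) leaves gaps -> FIG. 1B:
print(overlap_fraction(speed_px_per_s=2000, interval_s=0.1, object_width_px=100))  # scattered
```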
  • a plurality of frames which are respectively captured with a predefined time interval are obtained. At least one object within at least two of the frames is detected. A moving speed of the object is calculated according to the positions of the object in the respective frames and the predefined time interval, and candidate frames are selected from the frames according to the moving speed of the object.
  • An embodiment of a system for determining frames within multiple frames comprises a storage unit and a processing unit.
  • the storage unit comprises a plurality of frames, which are respectively captured with a predefined time interval.
  • the processing unit detects at least one object within at least two of the frames.
  • the processing unit calculates a moving speed of the object according to the positions of the object in the respective frames and the predefined time interval, and selects candidate frames from the frames according to the moving speed of the object.
  • a plurality of frames which are respectively captured with a predefined time interval are obtained. At least one object within at least two of the frames is detected. A moving speed of the object is calculated according to the positions of the object in the respective frames and the predefined time interval, and candidate frames are selected from the frames according to the moving speed of the object. The candidate frames are composed to generate a composed photo.
  • An embodiment of a system for photo composition within multiple frames comprises a storage unit and a processing unit.
  • the storage unit comprises a plurality of frames, which are respectively captured with a predefined time interval.
  • the processing unit detects at least one object within at least two of the frames.
  • the processing unit calculates a moving speed of the object according to the positions of the object in the respective frames and the predefined time interval, and selects candidate frames from the frames according to the moving speed of the object.
  • the processing unit composes the candidate frames to generate a composed photo.
  • a table is looked up according to the moving speed of the object to obtain a frame gap number, and the candidate frames are selected within the frames at intervals of the frame gap number. In some embodiments, the faster the moving speed is, the smaller the frame gap number is.
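  • A minimal sketch of such a table lookup; the speed thresholds and gap values below are hypothetical, chosen only to illustrate the "faster speed, smaller gap" rule:

```python
# Hypothetical speed thresholds (pixels/second) mapped to frame gap numbers.
# Faster motion -> smaller gap, so the composed objects stay evenly spaced.
GAP_TABLE = [
    (400, 1),   # speed >= 400 px/s: skip only 1 frame between candidates
    (200, 3),
    (100, 5),
    (0,   8),   # very slow motion: skip many frames
]

def frame_gap(speed):
    for threshold, gap in GAP_TABLE:
        if speed >= threshold:
            return gap
    return GAP_TABLE[-1][1]

def select_candidates(frames, speed):
    gap = frame_gap(speed)
    # Keep one frame, skip `gap` frames, and repeat.
    return frames[::gap + 1]

print(select_candidates(list(range(10)), speed=450))  # [0, 2, 4, 6, 8]
```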
  • a plurality of frames which are respectively captured with a predefined time interval are obtained. At least one object within at least two of the frames is detected. An overlapped area corresponding to the object within a first frame and a second frame is calculated, and at least one candidate frame is selected according to the overlapped area corresponding to the object.
  • An embodiment of a system for determining frames within multiple frames comprises a storage unit and a processing unit.
  • the storage unit comprises a plurality of frames, which are respectively captured with a predefined time interval.
  • the processing unit detects at least one object within at least two of the frames.
  • the processing unit calculates an overlapped area corresponding to the object within a first frame and a second frame, and selects at least one candidate frame according to the overlapped area corresponding to the object.
  • a plurality of frames which are respectively captured with a predefined time interval are obtained. At least one object within at least two of the frames is detected. An overlapped area corresponding to the object within a first frame and a second frame is calculated, and at least one candidate frame is selected according to the overlapped area corresponding to the object. The at least one candidate frame is composed to generate a composed photo.
  • An embodiment of a system for photo composition within multiple frames comprises a storage unit and a processing unit.
  • the storage unit comprises a plurality of frames, which are respectively captured with a predefined time interval.
  • the processing unit detects at least one object within at least two of the frames.
  • the processing unit calculates an overlapped area corresponding to the object within a first frame and a second frame, and selects at least one candidate frame according to the overlapped area corresponding to the object.
  • the processing unit composes the at least one candidate frame to generate a composed photo.
  • Methods for determining frames and photo composition within multiple frames may take the form of a program code embodied in tangible media.
  • the program code When the program code is loaded into and executed by a machine, the machine becomes an apparatus for practicing the disclosed method.
  • FIG. 1A is a schematic diagram illustrating an example of a composed image with objects having a large overlapped portion
  • FIG. 1B is a schematic diagram illustrating an example of a composed image with scattered objects
  • FIG. 2 is a schematic diagram illustrating an embodiment of a system for determining frames and photo composition within multiple frames of the invention
  • FIG. 4 is a flowchart of another embodiment of a method for determining frames within multiple frames of the invention.
  • FIG. 5 is a flowchart of an embodiment of a method for selecting at least one candidate frame according to the overlapped area corresponding to the object of the invention
  • FIG. 6 is a flowchart of another embodiment of a method for selecting at least one candidate frame according to the overlapped area corresponding to the object of the invention.
  • FIG. 7 is a flowchart of an embodiment of a method for photo composition within multiple frames of the invention.
  • FIG. 8 is a flowchart of another embodiment of a method for photo composition within multiple frames of the invention.
  • FIG. 2 is a schematic diagram illustrating an embodiment of a system for determining frames and/or photo composition within multiple frames of the invention.
  • the system for determining frames and/or photo composition within multiple frames 100 can be used in an electronic device, such as a computer, or a portable device, such as a digital camera, a handheld device such as a mobile phone, a smart phone, a PDA (Personal Digital Assistant), a GPS (Global Positioning System), or any picture-taking device.
  • the system for determining frames and/or photo composition within multiple frames 100 comprises a storage unit 110 and a processing unit 120 .
  • the storage unit 110 comprises a plurality of frames, which are respectively captured with a time interval. It is understood that, in some embodiments, the time interval may be predefined or dynamically defined. It is understood that, in some embodiments, the frames can be obtained from a video. It is understood that, in some embodiments, the system for determining frames and/or photo composition within multiple frames 100 can also comprise an image capture unit (not shown in FIG. 2 ).
  • the image capture unit may be a CCD (Charge Coupled Device) or a CMOS (Complementary Metal-Oxide Semiconductor), placed at the imaging position for objects inside the electronic device.
  • the image capture unit can continuously capture the frames within a predefined time interval.
  • the system for determining frames and/or photo composition within multiple frames 100 can also comprise a display unit (not shown in FIG. 2 ).
  • the display unit can display related figures and interfaces, and related data, such as the image frames continuously captured by the image capture unit.
  • the display unit may be a screen integrated with a touch-sensitive device (not shown).
  • the touch-sensitive device has a touch-sensitive surface comprising sensors in at least one dimension to detect contact and movement of an input tool, such as a stylus or finger on the touch-sensitive surface. That is, users can directly input related data via the display unit.
  • the processing unit 120 can control related components of the system for determining frames and/or photo composition within multiple frames 100 , process the image frames, and perform the methods for determining frames and/or photo composition within multiple frames, which will be discussed further in the following paragraphs. It is noted that, in some embodiments, the system for determining frames and/or photo composition within multiple frames 100 can further comprise a focus unit (not shown in FIG. 2 ). The processing unit 120 can control the focus unit to perform a focus process for at least one object during the photography process.
  • FIG. 3 is a flowchart of an embodiment of a method for determining frames within multiple frames of the invention.
  • the method for determining frames within multiple frames can be used in an electronic device, such as a computer, or a portable device, such as a digital camera, a handheld device such as a mobile phone, a smart phone, a PDA, a GPS, or any picture-taking device.
  • candidate frames for photo composition can be determined.
  • a plurality of frames are obtained. It is understood that, in some embodiments, the frames are continuously and respectively captured within a predefined time interval. In some embodiments, the frames can be obtained from a video.
  • in step S 320, at least one object within at least two of the frames, such as the first two successive frames, is detected. It is understood that, in some embodiments, the object can be obtained by transforming the at least two frames into grayscale frames, and subtracting the grayscale frames from each other to obtain a contour of the object.
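  • The subtraction-based detection can be sketched as follows, here with frames modeled as plain Python lists of RGB tuples (a real device would use an imaging library; the difference threshold is a hypothetical choice):

```python
def to_grayscale(frame):
    # frame: 2-D list of (r, g, b) tuples -> 2-D list of integer luma values
    return [[int(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in row]
            for row in frame]

def moving_object_mask(frame_a, frame_b, threshold=30):
    """Pixels whose grayscale values differ by more than `threshold`
    between the two frames; the True region approximates the moving
    object's contour."""
    ga, gb = to_grayscale(frame_a), to_grayscale(frame_b)
    return [[abs(pa - pb) > threshold for pa, pb in zip(ra, rb)]
            for ra, rb in zip(ga, gb)]
```

In practice the threshold would be tuned to the sensor's noise level; static background pixels cancel out in the subtraction, leaving only the moving object.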
  • a moving speed of the object is calculated according to the positions of the object in the respective frames and the predefined time interval.
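  • The speed computation itself is displacement over time. A sketch, assuming the object's position in each frame is summarized by the mass center of its detected contour:

```python
import math

def mass_center(mask):
    """Mass center (row, col) of the True pixels in a binary object mask."""
    pts = [(r, c) for r, row in enumerate(mask)
                  for c, v in enumerate(row) if v]
    n = len(pts)
    return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)

def moving_speed(center_a, center_b, interval_s):
    """Pixels per second between two mass centers captured `interval_s` apart."""
    dr = center_b[0] - center_a[0]
    dc = center_b[1] - center_a[1]
    return math.hypot(dr, dc) / interval_s

print(moving_speed((0, 0), (30, 40), interval_s=0.5))  # 100.0 px/s
```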
  • step S 340 candidate frames are selected from the frames according to the moving speed of the object.
  • a table can be looked up according to the moving speed of the object to obtain a frame gap number, and the candidate frames can be selected within the frames at intervals of the frame gap number. In the table, the faster the moving speed is, the smaller the frame gap number is. It is noted that the actual value of the frame gap number can be flexibly designed according to different applications and requirements.
  • the selected candidate frames can be used for photo composition.
  • FIG. 4 is a flowchart of another embodiment of a method for determining frames within multiple frames of the invention.
  • the method for determining frames within multiple frames can be used in an electronic device, such as a computer, or a portable device, such as a digital camera, a handheld device such as a mobile phone, a smart phone, a PDA, a GPS, or any picture-taking device.
  • candidate frames for photo composition can be determined.
  • a plurality of frames are obtained. It is understood that, in some embodiments, the frames are continuously and respectively captured with a predefined time interval. In some embodiments, the frames can be obtained from a video.
  • in step S 420, at least one object within at least two of the frames, such as the first two successive frames, is detected. It is understood that, in some embodiments, the object can be obtained by transforming the at least two frames into grayscale frames, and subtracting the grayscale frames from each other to obtain a contour of the object.
  • an overlapped area corresponding to the object within two frames, such as a first frame and a second frame is calculated.
  • step S 440 at least one candidate frame is selected according to the overlapped area corresponding to the object.
  • the selected candidate frames can be used for photo composition.
  • FIG. 5 is a flowchart of an embodiment of a method for selecting at least one candidate frame according to the overlapped area corresponding to the object of the invention.
  • in step S 510, it is determined whether the overlapped area corresponding to the object within two frames, such as a first frame and a second frame, is less than a specific percentage of a contour area of the object. If the overlapped area corresponding to the object is less than the specific percentage of the contour area of the object (Yes in step S 510 ), in step S 520, the rear frame, such as the second frame, is selected as the candidate frame. If the overlapped area corresponding to the object is not less than the specific percentage of the contour area of the object (No in step S 510 ), the procedure is completed.
  • an overlapped area corresponding to the object within the first frame and a subsequent frame, such as a third frame (wherein the second frame is not selected as a candidate frame), can be calculated, and the determination performed accordingly.
  • an overlapped area corresponding to the object within the second frame and a subsequent frame, such as a third frame (wherein the second frame is selected as a candidate frame), can be calculated, and the determination performed accordingly. The process is repeated until all frames are examined.
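  • The FIG. 5 selection loop can be sketched as a greedy pass over the frames. The `overlap_area` and `contour_area` callables stand in for the measurement routines described in the text, and the 25% threshold is a hypothetical choice:

```python
def select_by_overlap(frames, overlap_area, contour_area, percentage=0.25):
    """Greedy selection: keep a frame when the object's overlap with the
    last kept frame drops below `percentage` of the contour area."""
    candidates = []
    reference = frames[0]                 # compare against the last kept frame
    for frame in frames[1:]:
        if overlap_area(reference, frame) < percentage * contour_area(frame):
            candidates.append(frame)
            reference = frame             # restart comparison from this frame
        # otherwise keep the same reference and examine the next frame
    return candidates

# Toy 1-D model: each "frame" is the object's left edge; it is 10 px wide.
positions = [0, 2, 4, 9, 11, 20]
overlap = lambda a, b: max(0, 10 - abs(a - b))
area = lambda f: 10
print(select_by_overlap(positions, overlap, area))  # [9, 20]
```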
  • FIG. 6 is a flowchart of another embodiment of a method for selecting at least one candidate frame according to the overlapped area corresponding to the object of the invention.
  • in step S 610, it is determined whether the overlapped area corresponding to the object within two frames, such as a first frame and a second frame, equals zero. If the overlapped area corresponding to the object equals zero (Yes in step S 610 ), in step S 620, a moving speed of the object is calculated according to the positions of the object in the two frames and the predefined time interval. It is understood that, in some embodiments, once the contour of the object is detected, the area of the contour, the position of the mass center of the area, and the moving speed of the object can also be calculated.
  • step S 630 candidate frames are selected within the frames according to the moving speed of the object.
  • a table can be looked up according to the moving speed of the object to obtain a frame gap number, and the candidate frames can be selected within the frames at intervals of the frame gap number.
  • the selected candidate frames can be used for photo composition. If the overlapped area corresponding to the object does not equal zero (No in step S 610 ), the procedure is completed.
  • an overlapped area corresponding to the object within the first frame and a subsequent frame, such as a third frame (wherein the overlapped area corresponding to the object does not equal zero), can be calculated, and the determination performed accordingly. The process is repeated until all frames are examined.
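  • The FIG. 6 variant can be sketched as follows; the three helper callables are illustrative stand-ins for the overlap measurement, speed calculation, and table lookup described in the text:

```python
def select_when_scattered(frames, overlap_area, speed_of, gap_for_speed):
    """If the object's overlap between the first two frames is zero
    (objects would be scattered), fall back to speed-based selection
    at intervals of the looked-up frame gap number."""
    first, second = frames[0], frames[1]
    if overlap_area(first, second) == 0:
        speed = speed_of(first, second)    # from positions + time interval
        gap = gap_for_speed(speed)         # table lookup
        return frames[::gap + 1]
    return []                              # non-zero overlap: nothing selected here

# Toy run with stub callables: zero overlap, speed maps to a gap of 1.
print(select_when_scattered(list(range(8)),
                            lambda a, b: 0,
                            lambda a, b: 300.0,
                            lambda s: 1))  # [0, 2, 4, 6]
```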
  • FIG. 7 is a flowchart of an embodiment of a method for photo composition within multiple frames of the invention.
  • the method for photo composition within multiple frames can be used in an electronic device, such as a computer, or a portable device, such as a digital camera, a handheld device such as a mobile phone, a smart phone, a PDA, a GPS, or any picture-taking device.
  • candidate frames can be selected from frames and used for photo composition.
  • a plurality of frames are obtained. It is understood that, in some embodiments, the frames are continuously and respectively captured with a predefined time interval. In some embodiments, the frames can be obtained from a video.
  • in step S 720, at least one object within at least two of the frames, such as the first two successive frames, is detected. It is understood that, in some embodiments, the object can be obtained by transforming the at least two frames into grayscale frames, and subtracting the grayscale frames from each other to obtain a contour of the object.
  • a moving speed of the object is calculated according to the positions of the object in the respective frames and the predefined time interval.
  • in step S 740, candidate frames are selected from the frames according to the moving speed of the object. It is understood that, in some embodiments, a table can be looked up according to the moving speed of the object to obtain a frame gap number, and the candidate frames can be selected within the frames at intervals of the frame gap number. In the table, the faster the moving speed is, the smaller the frame gap number is. It is noted that the actual value of the frame gap number can be flexibly designed according to different applications and requirements.
  • in step S 750, the selected candidate frames are composed to generate a composed photo. It is understood that the composition of candidate frames can be performed by using an image composition algorithm. It is understood that image composition algorithms are various and known in the art, so related descriptions are omitted here.
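  • As a toy illustration of the composition step (not the algorithm actually used, which the text leaves unspecified), each candidate frame's object pixels can be pasted onto a copy of the last frame, using per-frame object masks from the detection step:

```python
def compose(frames, masks):
    """Paste the object pixels of every candidate frame onto the last
    frame. `frames` are 2-D lists of pixel values; `masks[i]` is True
    where frame i contains the moving object."""
    base = [row[:] for row in frames[-1]]   # start from a copy of the last frame
    for frame, mask in zip(frames, masks):
        for r, row in enumerate(mask):
            for c, is_object in enumerate(row):
                if is_object:
                    base[r][c] = frame[r][c]
    return base

# Toy 1x4 frames: an object (value 9) moves left to right across the row.
frames = [[[9, 0, 0, 0]], [[0, 9, 0, 0]], [[0, 0, 0, 9]]]
masks = [[[True, False, False, False]],
         [[False, True, False, False]],
         [[False, False, False, True]]]
print(compose(frames, masks))  # [[9, 9, 0, 9]] -- the object's track
```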
  • FIG. 8 is a flowchart of another embodiment of a method for photo composition within multiple frames of the invention.
  • the method for photo composition can be used in an electronic device, such as a computer, or a portable device, such as a digital camera, a handheld device such as a mobile phone, a smart phone, a PDA, a GPS, or any picture-taking device.
  • candidate frames can be selected from frames and used for photo composition.
  • step S 810 a plurality of frames are obtained. It is understood that, in some embodiments, the frames are continuously and respectively captured with a predefined time interval. In some embodiments, the frames can be obtained from a video.
  • in step S 820, at least one object within at least two of the frames, such as the first two successive frames, is detected. It is understood that, in some embodiments, the object can be obtained by transforming the at least two frames into grayscale frames, and subtracting the grayscale frames from each other to obtain a contour of the object.
  • step S 830 an overlapped area corresponding to the object within two frames, such as a first frame and a second frame is calculated.
  • in step S 840, at least one candidate frame is selected according to the overlapped area corresponding to the object. It is understood that, in some embodiments, it is determined whether the overlapped area corresponding to the object is less than a specific percentage of a contour area of the object. If the overlapped area corresponding to the object is less than the specific percentage of the contour area of the object, the second frame is selected as the candidate frame. In some embodiments, it is determined whether the overlapped area corresponding to the object equals zero.
  • a moving speed of the object is calculated according to the positions of the object in the respective frames and the predefined time interval, and candidate frames are selected within the frames according to the moving speed of the object.
  • the selected candidate frames are composed to generate a composed photo.
  • the composition of candidate frames can be performed by using an image composition algorithm. It is understood that, the image composition algorithm may be various and known in the art, and related descriptions are omitted here.
  • the methods and systems for determining frames and photo composition within multiple frames of the present invention can select appropriate frames from continuously captured frames according to the moving speed of the object, and/or the overlapped area corresponding to the object within frames, and accordingly generate a composed photo, thus improving the dynamic effect of the track of the moving object.
  • Methods for determining frames and photo composition within multiple frames may take the form of a program code (i.e., executable instructions) embodied in tangible media, such as floppy diskettes, CD-ROMS, hard drives, or any other machine-readable storage medium, wherein, when the program code is loaded into and executed by a machine, such as a computer, the machine thereby becomes an apparatus for practicing the methods.
  • the methods may also be embodied in the form of a program code transmitted over some transmission medium, such as electrical wiring or cabling, through fiber optics, or via any other form of transmission, wherein, when the program code is received and loaded into and executed by a machine, such as a computer, the machine becomes an apparatus for practicing the disclosed methods.
  • the program code When implemented on a general-purpose processor, the program code combines with the processor to provide a unique apparatus that operates analogously to application specific logic circuits.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Studio Devices (AREA)
  • Image Analysis (AREA)

Abstract

Methods and systems for determining frames and photo composition within multiple frames are provided. First, a plurality of frames, which are respectively captured with a predefined time interval, are obtained. At least one object within at least two of the frames is detected. In some embodiments, a moving speed of the object is calculated according to the positions of the object in the respective frames and the predefined time interval, and candidate frames are selected from the frames according to the moving speed of the object. In some embodiments, an overlapped area corresponding to the object within a first frame and a second frame is calculated, and at least one candidate frame is selected according to the overlapped area corresponding to the object. The at least one candidate frame is composed to generate a composed photo.

Description

RELATED APPLICATIONS
The present application is a Divisional Application of the U.S. application Ser. No. 14/220,149, filed Mar. 20, 2014.
BACKGROUND OF THE INVENTION
Field of the Invention
The disclosure relates generally to image frame management, and, more particularly to methods and systems for determining frames and photo composition within multiple frames.
Description of the Related Art
Recently, portable devices, such as handheld devices, have become more and more technically advanced and multifunctional. For example, a handheld device may have telecommunications capabilities, e-mail message capabilities, image capture capabilities, an advanced address book management system, a media playback system, and various other functions. Due to increased convenience and functions of the devices, these devices have become necessities of life.
Currently, a function called ‘continuous shot’ is provided on the portable devices. In the continuous shot mode, the image capture unit, such as a camera, takes images immediately one after another in a short amount of time. That is, when the continuous shot function is performed, a continuous image capture process is performed to continuously capture a plurality of images in sequence. Additionally, an inventive function called “dynamic continuous shot composition” may also be provided on the portable devices. Dynamic continuous shot composition is a way of image composition. In dynamic continuous shot composition, a camera can be set on a tripod, and several images of the same scene are continuously captured by the camera. The moving object within the images is extracted and overlaid onto the last image, to present the dynamic effect of the track of the moving object. The overlay of the moving object onto the last image can be achieved by using an image composition algorithm. It is understood that image composition algorithms are various and known in the art, and related descriptions are omitted here.
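The overlay step described above can be sketched roughly as follows. This is an illustrative Python/NumPy sketch, not the patent's implementation: the function name, the boolean-mask representation of the extracted object, and the paste-in-capture-order policy are all assumptions.

```python
import numpy as np

def compose_dynamic_shot(frames, masks):
    """Overlay the moving object from each earlier frame onto the last frame.

    frames: list of H x W x 3 uint8 arrays sharing the same static scene.
    masks:  list of H x W boolean arrays marking the moving object in each
            frame (e.g. obtained by background subtraction).
    """
    composed = frames[-1].copy()
    # Paste each earlier object instance onto the final frame, oldest first,
    # so later positions overwrite earlier ones where they overlap.
    for frame, mask in zip(frames[:-1], masks[:-1]):
        composed[mask] = frame[mask]
    return composed
```

Real implementations would typically blend the pasted regions rather than copy pixels directly, but the copy illustrates how the object's track accumulates onto one photo.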
Conventionally, the respective images are continuously captured with a fixed time interval. If the time interval is too short or the moving speed of the object is too slow, the objects on the composed image may have a large overlapped portion, as shown in FIG. 1A. On the contrary, if the time interval is too long or the moving speed of the object is too fast, the objects may be scattered on the composed image, as shown in FIG. 1B, and the number of the objects on the composed image may not be enough, resulting in difficulties in presenting the dynamic effect.
BRIEF SUMMARY OF THE INVENTION
Methods and systems for determining frames and photo composition within multiple frames are provided.
In an embodiment of a method for determining frames within multiple frames, a plurality of frames, which are respectively captured with a predefined time interval, are obtained. At least one object within at least two of the frames is detected. A moving speed of the object is calculated according to the positions of the object in the respective frames and the predefined time interval, and candidate frames are selected from the frames according to the moving speed of the object.
An embodiment of a system for determining frames within multiple frames comprises a storage unit and a processing unit. The storage unit comprises a plurality of frames, which are respectively captured with a predefined time interval. The processing unit detects at least one object within at least two of the frames. The processing unit calculates a moving speed of the object according to the positions of the object in the respective frames and the predefined time interval, and selects candidate frames from the frames according to the moving speed of the object.
In an embodiment of a method for photo composition within multiple frames, a plurality of frames, which are respectively captured with a predefined time interval, are obtained. At least one object within at least two of the frames is detected. A moving speed of the object is calculated according to the positions of the object in the respective frames and the predefined time interval, and candidate frames are selected from the frames according to the moving speed of the object. The candidate frames are composed to generate a composed photo.
An embodiment of a system for photo composition within multiple frames comprises a storage unit and a processing unit. The storage unit comprises a plurality of frames, which are respectively captured with a predefined time interval. The processing unit detects at least one object within at least two of the frames. The processing unit calculates a moving speed of the object according to the positions of the object in the respective frames and the predefined time interval, and selects candidate frames from the frames according to the moving speed of the object. The processing unit composes the candidate frames to generate a composed photo.
In some embodiments, a table is looked up according to the moving speed of the object to obtain a frame gap number, and the candidate frames are selected within the frames at intervals of the frame gap number. In some embodiments, the faster the moving speed is, the smaller the frame gap number is.
In an embodiment of a method for determining frames within multiple frames, a plurality of frames, which are respectively captured with a predefined time interval, are obtained. At least one object within at least two of the frames is detected. An overlapped area corresponding to the object within a first frame and a second frame is calculated, and at least one candidate frame is selected according to the overlapped area corresponding to the object.
An embodiment of a system for determining frames within multiple frames comprises a storage unit and a processing unit. The storage unit comprises a plurality of frames, which are respectively captured with a predefined time interval. The processing unit detects at least one object within at least two of the frames. The processing unit calculates an overlapped area corresponding to the object within a first frame and a second frame, and selects at least one candidate frame according to the overlapped area corresponding to the object.
In an embodiment of a method for photo composition within multiple frames, a plurality of frames, which are respectively captured with a predefined time interval, are obtained. At least one object within at least two of the frames is detected. An overlapped area corresponding to the object within a first frame and a second frame is calculated, and at least one candidate frame is selected according to the overlapped area corresponding to the object. The at least one candidate frame is composed to generate a composed photo.
An embodiment of a system for photo composition within multiple frames comprises a storage unit and a processing unit. The storage unit comprises a plurality of frames, which are respectively captured with a predefined time interval. The processing unit detects at least one object within at least two of the frames. The processing unit calculates an overlapped area corresponding to the object within a first frame and a second frame, and selects at least one candidate frame according to the overlapped area corresponding to the object. The processing unit composes the at least one candidate frame to generate a composed photo.
In some embodiments, it is determined whether the overlapped area corresponding to the object is less than a specific percentage of a contour area of the object. If the overlapped area corresponding to the object is less than a specific percentage of the contour area of the object, the second frame is selected as the candidate frame.
In some embodiments, it is determined whether the overlapped area corresponding to the object equals zero. If the overlapped area corresponding to the object equals zero, a moving speed of the object is calculated according to the positions of the object in the respective frames and the predefined time interval, and candidate frames are selected within the frames according to the moving speed of the object.
Methods for determining frames and photo composition within multiple frames may take the form of a program code embodied in a tangible media. When the program code is loaded into and executed by a machine, the machine becomes an apparatus for practicing the disclosed method.
BRIEF DESCRIPTION OF THE DRAWINGS
The invention will become more fully understood by referring to the following detailed description with reference to the accompanying drawings, wherein:
FIG. 1A is a schematic diagram illustrating an example of a composed image with objects having a large overlapped portion;
FIG. 1B is a schematic diagram illustrating an example of a composed image with scattered objects;
FIG. 2 is a schematic diagram illustrating an embodiment of a system for determining frames and photo composition within multiple frames of the invention;
FIG. 3 is a flowchart of an embodiment of a method for determining frames within multiple frames of the invention;
FIG. 4 is a flowchart of another embodiment of a method for determining frames within multiple frames of the invention;
FIG. 5 is a flowchart of an embodiment of a method for selecting at least one candidate frame according to the overlapped area corresponding to the object of the invention;
FIG. 6 is a flowchart of another embodiment of a method for selecting at least one candidate frame according to the overlapped area corresponding to the object of the invention;
FIG. 7 is a flowchart of an embodiment of a method for photo composition within multiple frames of the invention; and
FIG. 8 is a flowchart of another embodiment of a method for photo composition within multiple frames of the invention.
DETAILED DESCRIPTION OF THE INVENTION
Methods and systems for determining frames and photo composition within multiple frames are provided.
FIG. 2 is a schematic diagram illustrating an embodiment of a system for determining frames and/or photo composition within multiple frames of the invention. The system for determining frames and/or photo composition within multiple frames 100 can be used in an electronic device, such as a computer, or a portable device, such as a digital camera, a handheld device such as a mobile phone, a smart phone, a PDA (Personal Digital Assistant), a GPS (Global Positioning System), or any picture-taking device.
The system for determining frames and/or photo composition within multiple frames 100 comprises a storage unit 110 and a processing unit 120. The storage unit 110 comprises a plurality of frames, which are respectively captured with a time interval. It is understood that, in some embodiments, the time interval may be predefined or dynamically defined. It is understood that, in some embodiments, the frames can be obtained from a video. It is understood that, in some embodiments, the system for determining frames and/or photo composition within multiple frames 100 can also comprise an image capture unit (not shown in FIG. 2). The image capture unit may be a CCD (Charge Coupled Device) or a CMOS (Complementary Metal-Oxide Semiconductor), placed at the imaging position for objects inside the electronic device. The image capture unit can continuously capture the frames with a predefined time interval. It is also understood that, in some embodiments, the system for determining frames and/or photo composition within multiple frames 100 can also comprise a display unit (not shown in FIG. 2). The display unit can display related figures and interfaces, and related data, such as the image frames continuously captured by the image capture unit. It is understood that, in some embodiments, the display unit may be a screen integrated with a touch-sensitive device (not shown). The touch-sensitive device has a touch-sensitive surface comprising sensors in at least one dimension to detect contact and movement of an input tool, such as a stylus or finger on the touch-sensitive surface. That is, users can directly input related data via the display unit. The processing unit 120 can control related components of the system for determining frames and/or photo composition within multiple frames 100, process the image frames, and perform the methods for determining frames and/or photo composition within multiple frames, which will be discussed further in the following paragraphs.
It is noted that, in some embodiments, the system for determining frames and/or photo composition within multiple frames 100 can further comprise a focus unit (not shown in FIG. 2). The processing unit 120 can control the focus unit to perform a focus process for at least one object during the photography process.
FIG. 3 is a flowchart of an embodiment of a method for determining frames within multiple frames of the invention. The method for determining frames within multiple frames can be used in an electronic device, such as a computer, or a portable device, such as a digital camera, a handheld device such as a mobile phone, a smart phone, a PDA, a GPS, or any picture-taking device. In the embodiment, candidate frames for photo composition can be determined.
In step S310, a plurality of frames are obtained. It is understood that, in some embodiments, the frames are continuously and respectively captured with a predefined time interval. In some embodiments, the frames can be obtained from a video. In step S320, at least one object within at least two of the frames, such as the first two successive frames, is detected. It is understood that, in some embodiments, the object can be obtained by transforming the at least two frames into grayscale frames, and subtracting the grayscale frames from each other to obtain a contour of the object. In step S330, a moving speed of the object is calculated according to the positions of the object in the respective frames and the predefined time interval. It is understood that, in some embodiments, once the contour of the object is detected, the area of the contour is calculated based on the contour of the object, the position of the mass center of the area is calculated based on the area of the contour, and the moving speed of the object can then be calculated based on the position of the mass center of the area. In step S340, candidate frames are selected from the frames according to the moving speed of the object. It is understood that, in some embodiments, a table can be looked up according to the moving speed of the object to obtain a frame gap number, and the candidate frames can be selected within the frames at intervals of the frame gap number. In the table, the faster the moving speed is, the smaller the frame gap number is. It is noted that the actual value of the frame gap number can be flexibly designed according to different applications and requirements. The selected candidate frames can be used for photo composition.
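Steps S320 through S340 can be sketched as follows. This is an illustrative Python/NumPy sketch only: the difference threshold, the speed-to-gap table values, and the reading of "at intervals of the frame gap number" as a slicing step are assumptions, not values from the disclosure.

```python
import math
import numpy as np

# Hypothetical lookup table mapping a minimum speed (pixels/second) to a
# frame gap number: the faster the object, the smaller the gap.
SPEED_GAP_TABLE = [(200.0, 1), (100.0, 2), (50.0, 4)]
DEFAULT_GAP = 8  # fallback gap for very slow objects

def object_center(gray_a, gray_b, threshold=30):
    """Mass center of the pixels that changed between two grayscale frames
    (steps S320/S330: subtract the frames, threshold, take the centroid)."""
    diff = np.abs(gray_a.astype(np.int16) - gray_b.astype(np.int16))
    ys, xs = np.nonzero(diff > threshold)
    if xs.size == 0:
        return None  # no moving object detected
    return float(xs.mean()), float(ys.mean())

def frame_gap_for_speed(speed):
    """Step S340: look up the frame gap number for a moving speed."""
    for min_speed, gap in SPEED_GAP_TABLE:
        if speed >= min_speed:
            return gap
    return DEFAULT_GAP

def select_candidates(frames, center_a, center_b, interval):
    """Pick every gap-th frame, given the object's mass centers in the
    first two frames and the capture interval in seconds."""
    speed = math.hypot(center_b[0] - center_a[0],
                       center_b[1] - center_a[1]) / interval
    return frames[::frame_gap_for_speed(speed)]
```

A production version would track the centroid per frame (e.g. against a background model) rather than from a single pairwise difference, but the table-lookup structure is the same.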
FIG. 4 is a flowchart of another embodiment of a method for determining frames within multiple frames of the invention. The method for determining frames within multiple frames can be used in an electronic device, such as a computer, or a portable device, such as a digital camera, a handheld device such as a mobile phone, a smart phone, a PDA, a GPS, or any picture-taking device. In the embodiment, candidate frames for photo composition can be determined.
In step S410, a plurality of frames are obtained. It is understood that, in some embodiments, the frames are continuously and respectively captured with a predefined time interval. In some embodiments, the frames can be obtained from a video. In step S420, at least one object within at least two of the frames, such as the first two successive frames, is detected. It is understood that, in some embodiments, the object can be obtained by transforming the at least two frames into grayscale frames, and subtracting the grayscale frames from each other to obtain a contour of the object. In step S430, an overlapped area corresponding to the object within two frames, such as a first frame and a second frame, is calculated. It is understood that, in some embodiments, once the contour of the object is detected, the area and the position of the contour of the object can be calculated, and the overlapped area can be accordingly calculated. In step S440, at least one candidate frame is selected according to the overlapped area corresponding to the object. The selected candidate frames can be used for photo composition.
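The overlapped-area calculation of step S430 can be approximated with axis-aligned bounding boxes around the detected contours. This sketch is an assumption for illustration: the disclosure computes the overlap from the contour itself, while a bounding-box intersection is only a cheap stand-in.

```python
def contour_bbox_overlap(box_a, box_b):
    """Overlapped area of two axis-aligned bounding boxes (x, y, w, h).

    Each box approximates the object's contour in one frame; a
    pixel-accurate version would intersect the contour masks instead.
    """
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    dx = min(ax + aw, bx + bw) - max(ax, bx)  # horizontal overlap extent
    dy = min(ay + ah, by + bh) - max(ay, by)  # vertical overlap extent
    return max(dx, 0) * max(dy, 0)            # zero when the boxes are disjoint
```

Clamping `dx` and `dy` at zero is what makes disjoint boxes report an overlapped area of zero, which is exactly the condition tested in the FIG. 6 branch below.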
FIG. 5 is a flowchart of an embodiment of a method for selecting at least one candidate frame according to the overlapped area corresponding to the object of the invention. In step S510, it is determined whether the overlapped area corresponding to the object within two frames, such as a first frame and a second frame, is less than a specific percentage of a contour area of the object. If the overlapped area corresponding to the object is less than a specific percentage of the contour area of the object (Yes in step S510), in step S520, the rear frame, such as the second frame, is selected as the candidate frame. If the overlapped area corresponding to the object is not less than a specific percentage of the contour area of the object (No in step S510), the procedure is completed. It is understood that, thereafter, an overlapped area corresponding to the object within the first frame and a subsequent frame, such as a third frame (wherein the second frame is not selected as a candidate frame), can be calculated, and accordingly determined. Alternatively, an overlapped area corresponding to the object within the second frame and a subsequent frame, such as a third frame (wherein the second frame is selected as a candidate frame), can be calculated, and accordingly determined. The process is repeated until all frames are examined.
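The FIG. 5 selection loop can be sketched with boolean contour masks. This is an illustrative sketch under assumptions: the 20% threshold is hypothetical (the disclosure only says "a specific percentage"), and each frame is compared against the most recently kept frame, matching the alternative path described above.

```python
import numpy as np

def select_by_overlap(masks, percent=0.2):
    """Return indices of the kept frames.

    masks[i] is a boolean array marking the object's contour in frame i.
    A frame is kept when its object overlaps the last kept object by less
    than `percent` of the new object's contour area (FIG. 5 criterion).
    """
    kept = [0]  # the first frame anchors the sequence
    for i in range(1, len(masks)):
        overlap = np.logical_and(masks[kept[-1]], masks[i]).sum()
        if overlap < percent * masks[i].sum():
            kept.append(i)
    return kept
```

With a low threshold the kept objects barely touch on the composed photo, which avoids the large overlapped portions of FIG. 1A without scattering the objects as in FIG. 1B.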
FIG. 6 is a flowchart of another embodiment of a method for selecting at least one candidate frame according to the overlapped area corresponding to the object of the invention. In step S610, it is determined whether the overlapped area corresponding to the object within two frames, such as a first frame and a second frame, equals zero. If the overlapped area corresponding to the object equals zero (Yes in step S610), in step S620, a moving speed of the object is calculated according to the positions of the object in the two frames and the predefined time interval. It is understood that, in some embodiments, once the contour of the object is detected, the area of the contour, the position of the mass center of the area, and the moving speed of the object can also be calculated. In step S630, candidate frames are selected within the frames according to the moving speed of the object. It is understood that, in some embodiments, a table can be looked up according to the moving speed of the object to obtain a frame gap number, and the candidate frames can be selected within the frames at intervals of the frame gap number. In the table, the faster the moving speed is, the smaller the frame gap number is. It is noted that the actual value of the frame gap number can be flexibly designed according to different applications and requirements. The selected candidate frames can be used for photo composition. If the overlapped area corresponding to the object does not equal zero (No in step S610), the procedure is completed. It is understood that, thereafter, an overlapped area corresponding to the object within the first frame and a subsequent frame, such as a third frame (wherein the overlapped area corresponding to the object does not equal zero), can be calculated, and accordingly determined. The process is repeated until all frames are examined.
FIG. 7 is a flowchart of an embodiment of a method for photo composition within multiple frames of the invention. The method for photo composition within multiple frames can be used in an electronic device, such as a computer, or a portable device, such as a digital camera, a handheld device such as a mobile phone, a smart phone, a PDA, a GPS, or any picture-taking device. In the embodiment, candidate frames can be selected from frames and used for photo composition.
In step S710, a plurality of frames are obtained. It is understood that, in some embodiments, the frames are continuously and respectively captured with a predefined time interval. In some embodiments, the frames can be obtained from a video. In step S720, at least one object within at least two of the frames, such as the first two successive frames, is detected. It is understood that, in some embodiments, the object can be obtained by transforming the at least two frames into grayscale frames, and subtracting the grayscale frames from each other to obtain a contour of the object. In step S730, a moving speed of the object is calculated according to the positions of the object in the respective frames and the predefined time interval. It is understood that, in some embodiments, once the contour of the object is detected, the area of the contour, the position of the mass center of the area, and the moving speed of the object can also be calculated. In step S740, candidate frames are selected from the frames according to the moving speed of the object. It is understood that, in some embodiments, a table can be looked up according to the moving speed of the object to obtain a frame gap number, and the candidate frames can be selected within the frames at intervals of the frame gap number. In the table, the faster the moving speed is, the smaller the frame gap number is. It is noted that the actual value of the frame gap number can be flexibly designed according to different applications and requirements. In step S750, the selected candidate frames are composed to generate a composed photo. It is understood that the composition of candidate frames can be performed by using an image composition algorithm, and that image composition algorithms are various and known in the art, so related descriptions are omitted here.
FIG. 8 is a flowchart of another embodiment of a method for photo composition within multiple frames of the invention. The method for photo composition can be used in an electronic device, such as a computer, or a portable device, such as a digital camera, a handheld device such as a mobile phone, a smart phone, a PDA, a GPS, or any picture-taking device. In the embodiment, candidate frames can be selected from frames and used for photo composition.
In step S810, a plurality of frames are obtained. It is understood that, in some embodiments, the frames are continuously and respectively captured with a predefined time interval. In some embodiments, the frames can be obtained from a video. In step S820, at least one object within at least two of the frames, such as the first two successive frames, is detected. It is understood that, in some embodiments, the object can be obtained by transforming the at least two frames into grayscale frames, and subtracting the grayscale frames from each other to obtain a contour of the object. In step S830, an overlapped area corresponding to the object within two frames, such as a first frame and a second frame, is calculated. It is understood that, in some embodiments, once the contour of the object is detected, the area and the position of the contour of the object can be calculated, and the overlapped area can be accordingly calculated. In step S840, at least one candidate frame is selected according to the overlapped area corresponding to the object. It is understood that, in some embodiments, it is determined whether the overlapped area corresponding to the object is less than a specific percentage of a contour area of the object. If the overlapped area corresponding to the object is less than a specific percentage of the contour area of the object, the second frame is selected as the candidate frame. In some embodiments, it is determined whether the overlapped area corresponding to the object equals zero. If the overlapped area corresponding to the object equals zero, a moving speed of the object is calculated according to the positions of the object in the respective frames and the predefined time interval, and candidate frames are selected within the frames according to the moving speed of the object. After all frames are examined, in step S850, the selected candidate frames are composed to generate a composed photo.
It is understood that, the composition of candidate frames can be performed by using an image composition algorithm. It is understood that, the image composition algorithm may be various and known in the art, and related descriptions are omitted here.
Therefore, the methods and systems for determining frames and photo composition within multiple frames of the present invention can select appropriate frames from continuously captured frames according to the moving speed of the object and/or the overlapped area corresponding to the object within frames, and accordingly generate a composed photo, thus improving the dynamic effect of the track of the moving object.
Methods for determining frames and photo composition within multiple frames may take the form of a program code (i.e., executable instructions) embodied in tangible media, such as floppy diskettes, CD-ROMS, hard drives, or any other machine-readable storage medium, wherein, when the program code is loaded into and executed by a machine, such as a computer, the machine thereby becomes an apparatus for practicing the methods. The methods may also be embodied in the form of a program code transmitted over some transmission medium, such as electrical wiring or cabling, through fiber optics, or via any other form of transmission, wherein, when the program code is received and loaded into and executed by a machine, such as a computer, the machine becomes an apparatus for practicing the disclosed methods. When implemented on a general-purpose processor, the program code combines with the processor to provide a unique apparatus that operates analogously to application specific logic circuits.
While the invention has been described by way of example and in terms of preferred embodiment, it is to be understood that the invention is not limited thereto. Those who are skilled in this technology can still make various alterations and modifications without departing from the scope and spirit of this invention. Therefore, the scope of the present invention shall be defined and protected by the following claims and their equivalent.

Claims (7)

What is claimed is:
1. A method for determining frames within multiple frames for use in an electronic device, comprising:
obtaining a plurality of frames, wherein the respective frames are captured with a time interval;
detecting at least one object within at least two of the frames;
calculating an overlapped area corresponding to the object within a first frame and a second frame; and
selecting at least one candidate frame according to the overlapped area corresponding to the object, comprising the steps of:
determining whether the overlapped area corresponding to the object equals to zero;
if the overlapped area corresponding to the object equals to zero, calculating a moving speed of the object according to the positions of the object in the respective frames and the predefined time interval; and
selecting candidate frames within the frames according to the moving speed of the object.
2. The method of claim 1, wherein the step of selecting at least one candidate frame according to the overlapped area corresponding to the object comprises the steps of:
determining whether the overlapped area corresponding to the object is less than a specific percentage of a contour area of the object; and
if the overlapped area corresponding to the object is less than a specific percentage of the contour area of the object, selecting the second frame as the candidate frame.
3. The method of claim 1, wherein the step of detecting at least one object within at least two of the frames comprises the steps of:
transforming the at least two frames into grayscale frames; and
subtracting the grayscale frames with each other to obtain a contour of the object.
4. The method of claim 1, wherein the step of selecting candidate frames within the frames according to the moving speed of the object comprises the steps of:
looking up a table according to the moving speed of the object to obtain a frame gap number; and
selecting the candidates frames within the frames at intervals of the frame gap number.
5. The method of claim 4, wherein the faster the moving speed is, the smaller the frame gap number is.
6. The method of claim 1, further comprising recording a video, and the frames are obtained from the video.
7. A system for determining frames within multiple frames for use in an electronic device, comprising:
a storage unit configured to comprise a plurality of frames, wherein the respective frames are captured with a predefined time interval; and
a processor configured to detect at least one object within at least two of the frames, calculate an overlapped area corresponding to the object within a first frame and a second frame, and select at least one candidate frame according to the overlapped area corresponding to the object;
wherein the processor configured to select at least one candidate frame according to the overlapped area corresponding to the object comprises:
the processor configured to determine whether the overlapped area corresponding to the object equals to zero;
if the overlapped area corresponding to the object equals to zero, the processor configured to calculate a moving speed of the object according to the positions of the object in the respective frames and the predefined time interval; and
the processor configured to select candidate frames within the frames according to the moving speed of the object.
US15/159,825 2014-03-20 2016-05-20 Methods and systems for determining frames and photo composition within multiple frames Expired - Fee Related US9898828B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US15/159,825 US9898828B2 (en) 2014-03-20 2016-05-20 Methods and systems for determining frames and photo composition within multiple frames

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US14/220,149 US20150271381A1 (en) 2014-03-20 2014-03-20 Methods and systems for determining frames and photo composition within multiple frames
US15/159,825 US9898828B2 (en) 2014-03-20 2016-05-20 Methods and systems for determining frames and photo composition within multiple frames

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US14/220,149 Division US20150271381A1 (en) 2014-03-20 2014-03-20 Methods and systems for determining frames and photo composition within multiple frames

Publications (2)

Publication Number Publication Date
US20160267680A1 US20160267680A1 (en) 2016-09-15
US9898828B2 true US9898828B2 (en) 2018-02-20

Family

ID=54120832

Family Applications (2)

Application Number Title Priority Date Filing Date
US14/220,149 Abandoned US20150271381A1 (en) 2014-03-20 2014-03-20 Methods and systems for determining frames and photo composition within multiple frames
US15/159,825 Expired - Fee Related US9898828B2 (en) 2014-03-20 2016-05-20 Methods and systems for determining frames and photo composition within multiple frames

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US14/220,149 Abandoned US20150271381A1 (en) 2014-03-20 2014-03-20 Methods and systems for determining frames and photo composition within multiple frames

Country Status (3)

Country Link
US (2) US20150271381A1 (en)
CN (1) CN104933677B (en)
TW (1) TWI531911B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9036943B1 (en) * 2013-03-14 2015-05-19 Amazon Technologies, Inc. Cloud-based image improvement
TWI647674B (en) * 2018-02-09 2019-01-11 光陽工業股份有限公司 Navigation method and system for presenting different navigation pictures

Patent Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6659344B2 (en) 2000-12-06 2003-12-09 Ncr Corporation Automated monitoring of activity of shoppers in a market
US20060078224A1 (en) * 2002-08-09 2006-04-13 Masashi Hirosawa Image combination device, image combination method, image combination program, and recording medium containing the image combination program
US20090012595A1 (en) 2003-12-15 2009-01-08 Dror Seliktar Therapeutic Drug-Eluting Endoluminal Covering
CN101421759A (en) 2006-04-10 2009-04-29 Nokia Corporation Constructing image panoramas using frame selection
US20090201382A1 (en) * 2008-02-13 2009-08-13 Casio Computer Co., Ltd. Imaging apparatus for generating stroboscopic image
US20090208062A1 (en) * 2008-02-20 2009-08-20 Samsung Electronics Co., Ltd. Method and a handheld device for capturing motion
TW201001338A (en) 2008-06-16 2010-01-01 Huper Lab Co Ltd Method of detecting moving objects
US20100157085A1 (en) * 2008-12-18 2010-06-24 Casio Computer Co., Ltd. Image pickup device, flash image generating method, and computer-readable memory medium
US20100172641A1 (en) * 2009-01-08 2010-07-08 Casio Computer Co., Ltd. Photographing apparatus, photographing method and computer readable storage medium storing program therein
US20110043639A1 (en) * 2009-08-20 2011-02-24 Sanyo Electric Co., Ltd. Image Sensing Apparatus And Image Processing Apparatus
CN101998058A (en) 2009-08-20 2011-03-30 三洋电机株式会社 Image sensing apparatus and image processing apparatus
US8564614B2 (en) * 2009-08-27 2013-10-22 Casio Computer Co., Ltd. Display control apparatus, display control method and recording non-transitory medium
TW201140412A (en) 2010-01-27 2011-11-16 Wacom Co Ltd Position detecting device and method
US20110205397A1 (en) 2010-02-24 2011-08-25 John Christopher Hahn Portable imaging device having display with improved visibility under adverse conditions
US20120002112A1 (en) * 2010-07-02 2012-01-05 Sony Corporation Tail the motion method of generating simulated strobe motion videos and pictures using image cloning
US20120242853A1 (en) * 2011-03-25 2012-09-27 David Wayne Jasinski Digital camera for capturing an image sequence
US20120257071A1 (en) * 2011-04-06 2012-10-11 Prentice Wayne E Digital camera having variable duration burst mode

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Jacques, J.C.S. et al., "Background Subtraction and Shadow Detection in Grayscale Video Sequences," Computer Graphics and Image Processing, SIBGRAPI 2005, 18th Brazilian Symposium, 2005.
Corresponding Chinese office action dated Jul. 3, 2017.
Wang Liang-liang et al., "The Research of Moving Object Detection Algorithm in Video Images," College of Physical Science and Technology, Southwest Jiaotong University, China Academic Journal Electronic Publishing House, 2010, pp. 147-149.

Also Published As

Publication number Publication date
TWI531911B (en) 2016-05-01
CN104933677B (en) 2018-11-09
US20150271381A1 (en) 2015-09-24
US20160267680A1 (en) 2016-09-15
CN104933677A (en) 2015-09-23
TW201537359A (en) 2015-10-01

Similar Documents

Publication Publication Date Title
KR101990073B1 (en) Method and apparatus for shooting and storing multi-focused image in electronic device
US20120098981A1 (en) Image capture methods and systems
US9030577B2 (en) Image processing methods and systems for handheld devices
US20140204263A1 (en) Image capture methods and systems
US9445073B2 (en) Image processing methods and systems in accordance with depth information
EP3109695B1 (en) Method and electronic device for automatically focusing on moving object
CN105391940A (en) Image recommendation method and apparatus
CN114390201A (en) Focusing method and device thereof
US9898828B2 (en) Methods and systems for determining frames and photo composition within multiple frames
CN113873160A (en) Image processing method, image processing device, electronic equipment and computer storage medium
US9071735B2 (en) Name management and group recovery methods and systems for burst shot
US20160373648A1 (en) Methods and systems for capturing frames based on device information
CN114501115A (en) Cutting and reprocessing method, device, equipment and medium for court video
US9420194B2 (en) Methods and systems for generating long shutter frames
US20150355780A1 (en) Methods and systems for intuitively refocusing images
US9307160B2 (en) Methods and systems for generating HDR images
CN113364985B (en) Live broadcast lens tracking method, device and medium
CN115604585A (en) Image processing method, device, equipment and medium
US9202133B2 (en) Methods and systems for scene recognition
CN115278053A (en) Image shooting method and electronic equipment
US8233787B2 (en) Focus method and photographic device using the method

Legal Events

Date Code Title Description
AS Assignment

Owner name: HTC CORPORATION, TAIWAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:LIN, BING-SHENG;LIN, YI-CHI;LU, TAI-LING;SIGNING DATES FROM 20070910 TO 20140513;REEL/FRAME:038690/0488

STCF Information on status: patent grant

Free format text: PATENTED CASE

FEPP Fee payment procedure

Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

LAPS Lapse for failure to pay maintenance fees

Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STCH Information on status: patent discontinuation

Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362

FP Lapsed due to failure to pay maintenance fee

Effective date: 20220220