US20130314511A1 - Image capture device controlled according to image capture quality and related image capture method thereof - Google Patents
- Publication number: US20130314511A1
- Authority: US (United States)
- Prior art keywords: image capture, image, quality metric, controller, metric index
- Legal status: Abandoned (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- H04N5/23248
- H04N5/772—Interface circuits between a recording apparatus and a television camera placed in the same enclosure
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4007—Scaling based on interpolation, e.g. bilinear interpolation
- G06T3/4023—Scaling based on decimating pixels or lines of pixels; based on inserting pixels or lines of pixels
- G06T3/4053—Scaling based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
- G06T5/73—Deblurring; Sharpening
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T2207/10016—Video; Image sequence
- G06T2207/20192—Edge enhancement; Edge preservation
- G06T2207/30168—Image quality inspection
- G06T2207/30196—Human being; Person
- G06T2207/30201—Face
- H04N9/79—Processing of colour television signals in connection with recording
- H04N23/64—Computer-aided capture of images, e.g. check of taken image quality, advice or proposal for image composition or decision on when to take image
- H04N23/68—Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
- H04N23/6811—Motion detection based on the image signal
- H04N23/682—Vibration or motion blur correction
Definitions
- the disclosed embodiments of the present invention relate to an automatic shot scheme, and more particularly, to an image capture device controlled according to the image capture quality and related image capture method thereof.
- Camera modules have become popular elements used in a variety of applications. For example, a smartphone is typically equipped with a camera module, thus allowing a user to easily and conveniently take pictures by using the smartphone.
- However, the smartphone is prone to generating blurred images. First, the camera aperture and/or sensor size of the smartphone is typically small, which leads to a small amount of light arriving at each pixel of the camera sensor. As a result, the image quality may suffer from the small camera aperture and/or sensor size. Second, the smartphone tends to be affected by hand shake. The shake of the smartphone will last for a period of time, and any picture taken during this period would be affected by the hand shake.
- An image deblurring algorithm may be performed upon the blurred images. However, the computational complexity of the image deblurring algorithm is very high, resulting in considerable power consumption, and artifacts will be introduced if the image deblurring algorithm is not perfect.
- a camera module with an optical image stabilizer is expensive, so the conventional smartphone is generally equipped with a digital image stabilizer (i.e., an electronic image stabilizer (EIS)). However, the digital image stabilizer can counteract the motion of images but fails to prevent image blurring.
- Besides hand shake, the movement of a target object within a scene to be captured may cause the captured image to have blurry image contents. For example, when the user takes a picture of a child, the captured image may have a blurry image of the child if the child is still when the user is about to touch the shutter/capture button and then suddenly moves when the user actually touches it.
- In addition, an electronic device (e.g., a smartphone) may be equipped with a stereo camera and a stereo display. The captured image or preview image generated by the stereo camera of the smartphone can be a stereo image (i.e., an image pair including a left-view image and a right-view image) or a single-view image (i.e., one of a left-view image and a right-view image). The smartphone may use the stereo camera to capture a single-view image only, or may send a single-view image selected from a stereo image captured by the smartphone to a two-dimensional (2D) display or a social network (e.g., Facebook).
- However, the conventional design simply selects a single image with a fixed viewing angle from a stereo image. The two images of a stereo image generated by the stereo camera may have different image quality; that is, one viewing angle may be better than the other. Using a fixed viewing angle to select a single image from a stereo image therefore fails to generate a 2D output with optimum image/video quality.
- an image capture device controlled according to the image capture quality and related image capture method thereof are proposed to solve the above-mentioned problem.
- an exemplary image capture device includes an image capture module and a controller.
- the image capture module is arranged for capturing a plurality of consecutive preview images under an automatic shot mode.
- the controller is arranged for analyzing the consecutive preview images to identify an image capture quality metric index, and determining if a target image capture condition is met by referring to at least the image capture quality metric index, wherein a captured image for the automatic shot mode is stored when the controller determines that the target image capture condition is met.
- an exemplary image capture method includes at least the following steps: capturing a plurality of consecutive preview images under an automatic shot mode; analyzing the consecutive preview images to identify an image capture quality metric index; determining if a target image capture condition is met by referring to at least the image capture quality metric index; and when the target image capture condition is met, storing a captured image for the automatic shot mode.
- an exemplary image capture device includes a multi-view image capture module and a controller.
- the multi-view image capture module is arranged for simultaneously generating a plurality of image capture outputs respectively corresponding to a plurality of different viewing angles.
- the controller is arranged for calculating an image capture quality metric index for each of the image capture outputs.
- a specific image capture output generated from the multi-view image capture module is outputted by the image capture device according to a plurality of image capture quality metric indices of the image capture outputs.
- an exemplary image capture method includes at least the following steps: utilizing a multi-view image capture module for simultaneously generating a plurality of image capture outputs respectively corresponding to a plurality of different viewing angles; calculating an image capture quality metric index for each of the image capture outputs; and outputting a specific image capture output generated from the multi-view image capture module according to a plurality of image capture quality metric indices of the image capture outputs.
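The output-selection step summarized above can be sketched as follows. This is an illustrative Python example, not code from the patent; the function name `select_output_view` is hypothetical, and it assumes the per-view image capture quality metric indices have already been computed and that a larger index means better quality.

```python
def select_output_view(quality_indices):
    """Return the index of the image capture output whose quality metric
    index is best (largest), among outputs for different viewing angles."""
    return max(range(len(quality_indices)), key=lambda i: quality_indices[i])

# Example: with a stereo camera, index 0 could be the right view and
# index 1 the left view; the view with the higher index is emitted.
best = select_output_view([0.62, 0.81])
```

Under this convention, `best` identifies the specific image capture output the device would pass to a 2D display or social network.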
- FIG. 1 is a block diagram illustrating an image capture device according to a first embodiment of the present invention.
- FIG. 2 is a diagram illustrating an example of generating a captured image under the automatic shot mode according to an embodiment of the present invention.
- FIG. 3 is a flowchart illustrating an image capture method according to an embodiment of the present invention.
- FIG. 4 is a flowchart illustrating an image capture method according to another embodiment of the present invention.
- FIG. 5 is a block diagram illustrating an image capture device according to a second embodiment of the present invention.
- FIG. 6 is an example illustrating an operation of obtaining the specific image capture output according to an embodiment of the present invention.
- FIG. 7 is an example illustrating an operation of obtaining the specific image capture output according to another embodiment of the present invention.
- FIG. 8 is an example illustrating an operation of obtaining the specific image capture output according to yet another embodiment of the present invention.
- FIG. 9 is a flowchart illustrating an image capture method according to another embodiment of the present invention.
- One technical feature of the present invention is to obtain and store a captured image when a target image capture condition (e.g., a stable image capture condition) is met under an automatic shot mode. For example, it is determined that the target image capture condition (e.g., the stable image capture condition) is met when a region of stable (ROS) is found stable due to having no movement/small movement, having a small blur value, and/or having a better image quality metric index. In this way, a non-blurred image (or better quality image) can be automatically obtained and stored under the automatic shot mode by checking the stable image capture condition.
- Another technical feature of the present invention is to output a specific image capture output generated from a multi-view image capture module according to a plurality of image capture quality metric indices of a plurality of image capture outputs (e.g., image outputs or video outputs) respectively corresponding to a plurality of different viewing angles.
- In this way, a 2D image/video output derived from image capture outputs of the multi-view image capture module would have optimum image/video quality. Further details are described below.
- FIG. 1 is a block diagram illustrating an image capture device according to a first embodiment of the present invention.
- the image capture device 100 may be at least a portion (i.e., part or all) of an electronic device.
- the image capture device 100 may be implemented in a portable device such as a smartphone or a digital camera.
- the image capture device 100 includes, but is not limited to, an image capture module 102 , a controller 104 , a storage device 106 , and a shutter/capture button 108 .
- the shutter/capture button 108 may be a physical button installed on the housing or a virtual button displayed on a touch screen.
- the user may touch/press the shutter/capture button 108 to activate an automatic shot mode for enabling the image capture device 100 to generate and store a captured image automatically.
- the image capture module 102 has the image capture capability, and may be used to generate a captured image when triggered by touch/press of the shutter/capture button 108 .
- As the present invention focuses on the control scheme applied to the image capture module 102 rather than an internal structure of the image capture module 102 , further description of the internal structure of the image capture module 102 is omitted here for brevity.
- the image capture module 102 captures a plurality of consecutive preview images IMG_Pre under the automatic shot mode until the preview images show that a stable image capture condition is met.
- the controller 104 is arranged for analyzing the consecutive preview images IMG_Pre to identify an image capture quality metric index, and determining if the stable image capture condition is met by referring to at least the image capture quality metric index.
- the image capture quality metric index may be indicative of an image blur degree, and the controller 104 may identify the image capture quality metric index by performing a predetermined processing operation upon a region of stable (ROS) in each preview image of the consecutive preview images IMG_Pre.
- the ROS region in each preview image is determined by the controller 104 automatically without user intervention.
- the controller 104 performs face detection upon each preview image to determine a face region which is used as the ROS region in each preview image.
- Each face region may include one or more face images, each defined by a position (x, y) and a size (w, h), where x and y represent the X-coordinate and the Y-coordinate of a center (or a left-top corner) of a face image, and w and h represent the width and the height of the face image.
- a face region found in one preview image may be identical to or different from a face region found in another preview image.
- the face region is not necessarily a fixed image region in each of the consecutive preview images IMG_Pre.
- the controller 104 may use a center region, a focus region determined by auto-focus, a complex texture region determined by edge detection, or an entire image to act as the ROS region in each preview image. It should be noted that position and size of the ROS region in each preview image are fixed when the center region or the entire image is used as the ROS region.
- position and size of the ROS region in each preview image are not necessarily fixed when the focus region (which is dynamically determined by auto-focus performed for capturing each preview image) or the complex texture region (which is dynamically determined by edge detection performed by the controller 104 upon each preview image) is used as the ROS region.
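One of the fixed ROS choices mentioned above (the center region) is simple enough to sketch. The snippet below is a minimal illustration, not the patent's implementation; the helper name `center_ros` and the coverage fraction are assumptions.

```python
def center_ros(width, height, frac=0.5):
    """Return (x, y, w, h) of a fixed center region covering `frac` of
    each image dimension -- one possible ROS when no face region or
    focus region is used. `frac` is an illustrative parameter."""
    w, h = int(width * frac), int(height * frac)
    x, y = (width - w) // 2, (height - h) // 2
    return (x, y, w, h)
```

For a 640x480 preview image, `center_ros(640, 480)` yields a 320x240 region centered in the frame; unlike a face region, its position and size stay fixed across consecutive preview images.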
- Alternatively, the ROS region in each preview image is determined by the controller 104 in response to a user input USER_IN. That is, the ROS region is manually selected by the user. For example, before the image capture device 100 enters the automatic shot mode, the user may determine a touch focus region by entering the user input USER_IN through a touch screen (not shown). After the image capture device 100 enters the automatic shot mode, the controller 104 uses the touch focus region selected by the user input USER_IN to act as the ROS region in each preview image. It should be noted that the position and size of the ROS region in each preview image may be fixed since the touch focus region is determined before the automatic shot mode is activated. Alternatively, the position and size of the ROS region in each preview image may not be fixed since the ROS region can be tracked using object tracking technology.
- the image capture quality metric index may be identified by performing one or more predetermined processing operations upon the ROS region in each preview image of the consecutive preview images IMG_Pre.
- the controller 104 may identify the image capture quality metric index by estimating the image blur degree.
- the image blur degree can be estimated by performing a stable estimation for the ROS region in each preview image of the consecutive preview images IMG_Pre.
- the controller 104 detects a zero image blur degree when the stable estimation result indicates a completely stable state, e.g. no movement, and detects a low image blur degree when the stable estimation result indicates a nearly stable state, e.g. small movement.
- the stable estimation may be implemented using motion estimation performed upon ROS regions of the consecutive preview images IMG_Pre.
- When the motion vector obtained by the motion estimation is zero, the stable estimation result indicates a completely stable state, e.g. no movement; and when the motion vector obtained by the motion estimation is close to zero, the stable estimation result indicates a nearly stable state, e.g. small movement.
- the stable estimation may be implemented by calculating a sum of absolute differences (SAD) or a sum of squared differences (SSD) between ROS regions of two consecutive preview images.
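The SAD/SSD-based stable estimation can be sketched in plain Python as below. This is an illustrative example under assumed thresholds (the patent does not specify numeric values); the function names and the pixel-row representation of an ROS region are likewise assumptions.

```python
def sad(ros_a, ros_b):
    """Sum of absolute differences between two equally sized ROS regions,
    each given as a list of pixel rows."""
    return sum(abs(a - b) for row_a, row_b in zip(ros_a, ros_b)
                          for a, b in zip(row_a, row_b))

def ssd(ros_a, ros_b):
    """Sum of squared differences between the same two regions."""
    return sum((a - b) ** 2 for row_a, row_b in zip(ros_a, ros_b)
                            for a, b in zip(row_a, row_b))

def stable_state(ros_prev, ros_curr, zero_thresh=0, near_thresh=50):
    """Classify the stable-estimation result from the SAD value.
    Thresholds are illustrative, not taken from the patent."""
    d = sad(ros_prev, ros_curr)
    if d <= zero_thresh:
        return "completely stable"   # no movement
    if d <= near_thresh:
        return "nearly stable"       # small movement
    return "unstable"
```

Comparing the ROS regions of two consecutive preview images this way maps a small difference to the "nearly stable" state and an identical pair to the "completely stable" state, matching the motion-vector interpretation above.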
- the stable estimation may be implemented by calculating a difference between positions of ROS regions of two consecutive preview images and calculating a difference between sizes of the ROS regions of the two consecutive preview images. For example, in a case where the ROS region in each preview image is determined by face detection, the position difference and the size difference between ROS regions of two consecutive preview images may be used to determine the stable estimation result.
- When the position difference and the size difference are both zero, the stable estimation result indicates a completely stable state, e.g. no movement. When the position difference and the size difference are both close to zero, the stable estimation result indicates a nearly stable state, e.g. small movement; in such a case, the movement estimation result also indicates a nearly stable state, e.g. small movement.
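For the face-detection case, the position/size comparison can be sketched as follows. This is an illustrative Python example, not the patent's implementation: the tolerances are assumed values, and the (x, y, w, h) tuple follows the face-region definition given earlier.

```python
def face_region_stable(face_prev, face_curr, pos_tol=2, size_tol=2):
    """Compare the (x, y, w, h) face regions of two consecutive preview
    images. Returns 'completely stable' when position and size differences
    are both zero, 'nearly stable' when both are within small (assumed)
    tolerances, and 'unstable' otherwise."""
    (x0, y0, w0, h0) = face_prev
    (x1, y1, w1, h1) = face_curr
    pos_diff = abs(x1 - x0) + abs(y1 - y0)
    size_diff = abs(w1 - w0) + abs(h1 - h0)
    if pos_diff == 0 and size_diff == 0:
        return "completely stable"
    if pos_diff <= pos_tol and size_diff <= size_tol:
        return "nearly stable"
    return "unstable"
```

A face that shifts or resizes by only a pixel or two between previews is treated as nearly stable, while a larger jump (e.g. the person turning their head) is flagged as unstable.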
- the controller 104 may further perform another predetermined processing operation (e.g., a blur value estimation) for each preview image of the consecutive preview images IMG_Pre.
- the controller 104 may be configured to identify the image blur degree by referring to both of the stable estimation result and the blur value estimation result.
- the controller 104 detects a zero image blur degree when the stable estimation result indicates a completely stable state (e.g. no movement) and the blur value indicates no blur, and detects a low image blur degree when the stable estimation result indicates a nearly stable state (e.g. small movement) and the blur value is small.
- the blur value estimation may be implemented by performing edge detection upon the ROS region in each preview image, and then calculating the edge magnitude derived from the edge detection to act as a blur value of the ROS region.
- the blur value estimation may be implemented by calculating the image visual quality assessment metric of the ROS region in each preview image to act as a blur value of the ROS region.
- the blur value estimation may be implemented by obtaining inherent image characteristic information of each of the consecutive preview images by analyzing the consecutive preview images, and determining a blur value estimation result according to the inherent image characteristic information, where the inherent image characteristic information includes at least one of sharpness, blur, brightness, contrast, and color.
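The edge-magnitude flavor of blur value estimation can be sketched with a simple first-difference gradient. This is a minimal illustration, not the patent's edge detector (which could equally be Sobel or similar); the function name and the "larger magnitude means sharper" reading are assumptions.

```python
def edge_magnitude(ros):
    """Average gradient magnitude over an ROS region given as a list of
    pixel rows. A sharp (in-focus) region yields a large magnitude, so a
    blur value can be derived as inversely related to this score."""
    h, w = len(ros), len(ros[0])
    total, count = 0, 0
    for y in range(h - 1):
        for x in range(w - 1):
            gx = ros[y][x + 1] - ros[y][x]   # horizontal difference
            gy = ros[y + 1][x] - ros[y][x]   # vertical difference
            total += abs(gx) + abs(gy)
            count += 1
    return total / count if count else 0.0
```

A flat (featureless or heavily blurred) region scores near zero, while a region with strong edges scores high; thresholding this score gives the "no blur" / "small blur value" decisions used below.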
- Hence, the image capture quality metric index (e.g., the image blur degree) may be determined according to at least the inherent image characteristic information.
- When the image blur degree is zero or low, the controller 104 determines that the stable image capture condition is met. In other words, the controller 104 determines that the stable image capture condition is met when an ROS region is found stable without any change or with a small change.
- the stable image capture condition may be checked by referring to the image blur degree identified using preview images and additional indicator(s) provided by other circuit element(s).
- the controller 104 may further receive a sensor input SENSOR_IN from at least one sensor 101 of the smartphone, where the sensor input SENSOR_IN is indicative of a movement status associated with the image capture device 100 , especially a movement status of the image capture module 102 .
- the sensor 101 may be a G-sensor or a Gyro sensor.
- the controller 104 determines if the stable image capture condition is met by referring to the image blur degree and the movement status. In other words, the controller 104 determines that the stable image capture condition is met when the ROS region is found stable due to zero image blur degree/low image blur degree and the camera is found stable due to zero movement/small movement of the image capture module 102 .
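The combined check described above can be sketched as a single predicate. This is an illustrative example with assumed threshold values and names; the patent only requires that both the ROS and the camera be found stable, not these particular numbers.

```python
def stable_capture_condition(image_blur_degree, sensor_movement,
                             blur_tol=0.1, move_tol=0.05):
    """Combined check: the ROS must be stable (zero/low image blur degree)
    AND the camera must be stable (zero/small movement reported by the
    G-sensor or Gyro sensor). Tolerances are assumed, not from the patent."""
    ros_stable = image_blur_degree <= blur_tol
    camera_stable = sensor_movement <= move_tol
    return ros_stable and camera_stable
```

Either a blurry ROS (moving subject) or a large sensor reading (hand shake) alone is enough to keep the condition unmet, which matches the two failure cases illustrated in FIG. 2.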
- the controller 104 stores a captured image IMG into the storage device (e.g., a non-volatile memory) 106 as an image capture result for the automatic shot mode activated by user's touch/press of the shutter/capture button 108 .
- the controller 104 directly selects one of the consecutive preview images IMG_Pre as the captured image IMG, where the consecutive preview images IMG_Pre are obtained before the stable image capture condition is met. For example, the last preview image which has a stable ROS region and is captured under the condition that the camera is stable may be selected as the captured image IMG.
- the controller 104 controls the image capture module 102 to capture a new image IMG_New as the captured image IMG. That is, none of the preview images generated before the stable image capture condition is met is selected as the captured image IMG, and an image captured immediately after the stable image capture condition is met is the captured image IMG.
- FIG. 2 is a diagram illustrating an example of generating a captured image under the automatic shot mode according to an embodiment of the present invention.
- face detection is used to select the ROS region in each preview image.
- In a first case, the image capture device 100 is affected by hand shake when capturing the preview image, so the face region as well as the remaining parts of this preview image generated under the automatic shot mode are blurry. The controller 104 determines that the stable image capture condition is not met because the ROS region (i.e., the face region) is found unstable due to high image blur degree and the camera is found unstable due to large movement of the image capture module 102 .
- In a second case, the image capture device 100 is not affected by hand shake, but the target object (i.e., the person) moves his head when the image capture device 100 captures the preview image. Hence, the face region of the target object is blurry, but the remaining parts of this preview image generated under the automatic shot mode are clear. The controller 104 also determines that the stable image capture condition is not met because the ROS region (i.e., the face region) is found unstable due to high image blur degree.
- In a third case, the image capture device 100 is not affected by hand shake, and the target object (i.e., the person) is still when the image capture device 100 captures the preview image. The controller 104 determines that the stable image capture condition is met because the ROS region (i.e., the face region) is found stable due to zero image blur degree and the camera is found stable due to zero movement of the image capture module 102 .
- the image capture device 100 can successfully obtain a desired non-blurred image for the automatic shot mode when the stable image capture condition is met.
- the above-mentioned exemplary operation of checking an ROS region (e.g., a face region) to determine if a stable image capture condition is met is performed under an automatic shot mode, and is therefore different from an auto-focus operation performed based on the face region.
- the auto-focus operation checks the face region to adjust the lens position for automatic focus adjustment. After the focus point is successfully set by the auto-focus operation based on the face region, the automatic shot mode is enabled.
- the consecutive preview images IMG_Pre are captured under a fixed focus setting configured by the auto-focus operation.
- no focus adjustment is made to the lens.
- FIG. 3 is a flowchart illustrating an image capture method according to an embodiment of the present invention.
- the image capture method may be employed by the image capture device 100 . Provided that the result is substantially the same, the steps are not required to be executed in the exact order shown in FIG. 3 .
- the image capture method may be briefly summarized by following steps.
- Step 200 Start.
- Step 202 Check if the shutter/capture button 108 is touched/pressed to activate an automatic shot mode. If yes, go to step 204 ; otherwise, perform step 202 again.
- Step 204 Utilize the image capture module 102 to capture preview images.
- Step 206 Utilize the controller 104 to analyze consecutive preview images to identify an image capture quality metric index (e.g., an image blur degree).
- Step 208 Receive a sensor input SENSOR_IN indicative of a movement status associated with the image capture module 102 .
- Step 210 Determine if a target image capture condition (e.g., a stable image capture condition) is met by referring to the image capture quality metric index (e.g., the image blur degree) and the movement status. If yes, go to step 212 ; otherwise, go to step 204 .
- Step 212 Store a captured image for the automatic shot mode into the storage device 106 .
- one of the consecutive preview images obtained before the stable image capture condition is met is directly selected as the captured image to be stored, or a new image captured immediately after the stable image capture condition is met is used as the captured image to be stored.
- Step 214 End.
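The flowchart steps above can be sketched as a simple control loop. This is an illustrative Python example, not the patent's implementation: the four callables are hypothetical stand-ins for the image capture module, the controller's quality analysis, the sensor input, and the storage device, and the thresholds and frame cap are assumed values.

```python
def automatic_shot(capture_preview, analyze_blur, read_sensor, store,
                   max_previews=100):
    """Loop over preview frames until the stable image capture condition
    is met, then store a captured image. Returns the stored image, or
    None if the condition is never met within max_previews frames."""
    previews = []
    for _ in range(max_previews):
        previews.append(capture_preview())        # step 204: capture preview
        blur = analyze_blur(previews)             # step 206: quality metric index
        movement = read_sensor()                  # step 208: sensor input
        if blur <= 0.1 and movement <= 0.05:      # step 210: stable condition
            store(previews[-1])                   # step 212: store last preview
            return previews[-1]
    return None
```

Here the last stable preview is stored directly; per the text above, an alternative design would instead trigger the image capture module to capture a new image at this point.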
- step 208 may be omitted, depending upon actual design consideration/requirement. That is, in an alternative design, a stable image capture condition may be checked without referring to the sensor input SENSOR_IN.
- FIG. 4 is a flowchart illustrating an image capture method according to another embodiment of the present invention. The major difference between the exemplary image capture methods shown in FIG. 3 and FIG. 4 is that step 208 is omitted, and step 210 is replaced by step 310 as below.
- Step 310 Determine if a target image capture condition (e.g., a stable image capture condition) is met by referring to the image capture quality metric index (e.g., the image blur degree). If yes, go to step 212 ; otherwise, go to step 204 .
- FIG. 5 is a block diagram illustrating an image capture device according to a second embodiment of the present invention.
- the image capture device 500 may be at least a portion (i.e., part or all) of an electronic device.
- the image capture device 500 may be implemented in a portable device such as a smartphone or a digital camera.
- the image capture device 500 includes, but is not limited to, a multi-view image capture module 502 , a controller 504 , a storage device 506 , a shutter/capture button 508 , and an optional electronic image stabilization (EIS) module 510 .
- the shutter/capture button 508 may be a physical button installed on the housing or a virtual button displayed on a touch screen.
- the user may touch/press the shutter/capture button 508 to enable the image capture device 500 to output a single-view image or a single-view video sequence.
- the multi-view image capture module 502 has the image capture capability, and is capable of simultaneously generating a plurality of image capture outputs respectively corresponding to a plurality of different viewing angles, where each of the image capture outputs may be a single image or a video sequence composed of consecutive images.
- the multi-view image capture module 502 may be implemented using a camera array or a multi-lens camera, and thus may be regarded as having a plurality of camera units for generating image capture outputs respectively corresponding to different viewing angles.
- the multi-view image capture module 502 shown in FIG. 5 may be a stereo camera configured to have two camera units 512 and 514 , where the camera unit 512 is used to generate a right-view image capture output S_OUT R , and the camera unit 514 is used to generate a left-view image capture output S_OUT L .
- the number of camera units is not meant to be a limitation of the present invention.
- As the present invention focuses on the camera selection scheme applied to the multi-view image capture module 502 and the output selection scheme applied to image capture outputs generated from the multi-view image capture module 502 , further description of the internal structure of the multi-view image capture module 502 is omitted here for brevity.
- the controller 504 is arranged for calculating an image capture quality metric index for each of the image capture outputs.
- the image capture quality metric index may be calculated based on a selected image region (e.g., a face region having one or more face images) or an entire image area of each image.
- the image capture quality metric index is correlated with an image blur degree. For example, the image capture quality metric index would indicate good image capture quality when the image blur degree is low, and the image capture quality metric index would indicate poor image capture quality when the image blur degree is high.
- the aforementioned image capture quality metric index is an image quality metric index.
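As a concrete illustration of such an index, one simple proxy for the image blur degree is local gradient energy: a sharp image has strong intensity differences between neighboring pixels, while a blurred one does not. The sketch below is an assumption for illustration (pure Python, with a grayscale image represented as a 2D list of intensities), not the metric actually computed by the controller 504.

```python
def sharpness_index(img):
    """Mean squared difference between horizontally adjacent pixels.
    A larger value suggests a sharper (less blurred) image, so it can
    serve as an image capture quality metric index."""
    total, count = 0, 0
    for row in img:
        for a, b in zip(row, row[1:]):
            total += (a - b) ** 2
            count += 1
    return total / count if count else 0.0
```

A larger returned value then maps to a larger image capture quality metric index, matching the convention that a larger index indicates better image capture quality.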
- the controller 504 performs face detection upon each image capture output (i.e., each of a left-view image and a right-view image) to obtain a face detection result, and determines the image capture quality metric index (i.e., the image quality metric index) according to the obtained face detection information.
- the left-view image and the right-view image generated from a stereo camera may have different quality.
- the face in one of the left-view image and the right-view image is clear, but the same face in the other of the left-view image and the right-view image may be blurry.
- one of the left-view image and the right-view image is clear, but the other of the left-view image and the right-view image is blurry.
- one of the left-view image and the right-view image may have more human faces, and one of the left-view image and the right-view image may have a better face angle.
- the face detection information is indicative of the image quality of the left-view image and the right-view image.
- the obtained face detection information may include at least one of a face angle, a face number (i.e., the number of human faces detected in an entire image), a face size (e.g., the size of a face region having one or more human faces), a face position (e.g., the position of a face region having one or more human faces), a face symmetry (e.g., a ratio of left face and right face of a face region having one or more human faces), an eye number (i.e., the number of human eyes detected in an entire image), and an eye blink status (i.e., the number of blinking human eyes detected in an entire image).
- the image capture quality metric index (i.e., the image quality metric index) is set to a larger value when an image capture output S_OUT R , S_OUT L (i.e., a left-view image or a right-view image) has larger front faces, more front faces, a larger eye number, and/or fewer blinking eyes.
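A minimal way to fold such face detection information into a single image quality metric index is a weighted sum, where more faces, larger face regions, more detected eyes, and fewer blinking eyes push the score up. The field names and weights below are illustrative assumptions, not values from the disclosure.

```python
def face_quality_score(face_info):
    """Combine face detection information into one image quality metric
    index.  The weights are arbitrary: the only property relied upon is
    that open eyes raise the score and blinking eyes lower it."""
    return (2.0 * face_info["face_number"]       # faces detected in the image
            + 1.0 * face_info["face_size"]       # e.g. normalized region area
            + 0.5 * face_info["eye_number"]      # human eyes detected
            - 1.5 * face_info["blinking_eyes"])  # blinking eyes detected
```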
- the controller 504 receives auto-focus information INF_ 1 of each image capture output (i.e., each of a left-view image and a right-view image) from the multi-view image capture module 502 , and determines the image capture quality metric index according to the auto-focus information INF_ 1 .
- the auto-focus information INF_ 1 is indicative of the image quality of the left-view image and the right-view image.
- the image capture quality metric index (i.e., the image quality metric index) is set to a larger value when an image capture output S_OUT R , S_OUT L (i.e., a left-view image or a right-view image) has a better auto-focus result.
- the controller 504 analyzes at least a portion (i.e., part or all) of each of the image capture outputs (i.e., each of a left-view image and a right-view image) to obtain inherent image characteristic information, and determines the image capture quality metric index (i.e., the image quality metric index) according to the inherent image characteristic information.
- the inherent image characteristic information is indicative of the image quality of the left-view image and the right-view image.
- the inherent image characteristic information may include at least one of sharpness, blur, brightness, contrast, and color.
- the image capture quality metric index (i.e., the image quality metric index) is set to a larger value when an image capture output S_OUT R , S_OUT L (i.e., a left-view image or a right-view image) is sharper or has a more suitable brightness distribution (i.e., a better white balance result).
- the controller 504 determines the image capture quality metric index (i.e., the image quality metric index) of each of the image capture outputs (i.e., each of a left-view image and a right-view image) according to electronic image stabilization (EIS) information INF_ 2 given by the optional EIS module 510 .
- the EIS information INF_ 2 is indicative of the image quality of the left-view image and the right-view image.
- the image capture quality metric index (i.e., the image quality metric index) is set to a larger value when an image capture output S_OUT R , S_OUT L (i.e., a left-view image or a right-view image) is given more image stabilization.
- the controller 504 may employ any combination of above-mentioned face detection information, auto-focus information, inherent image characteristic information and EIS information to determine the image capture quality metric index.
- In a case where each of the image capture outputs S_OUT R and S_OUT L is a video sequence composed of consecutive images, the aforementioned image capture quality metric index is a video quality metric index.
- the controller 504 may employ face detection information, auto-focus information, inherent image characteristic information, and/or EIS information to determine the image capture quality metric index (i.e., the video quality metric index) of each of the image capture outputs S_OUT L and S_OUT R (i.e., a left-view video sequence and a right-view video sequence).
- the video quality metric index may be derived from processing (e.g., summing or averaging) image quality metric indices of images included in the same video sequence.
- other video quality assessment methods may be employed for determining the video quality metric index of each video sequence.
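The summing/averaging derivation mentioned above is straightforward; a small sketch follows, purely for illustration (the function name and `mode` parameter are assumptions, not terms from the disclosure).

```python
def video_quality_index(image_indices, mode="average"):
    """Derive a video quality metric index from the per-image quality
    metric indices of the frames in one video sequence, by summing or
    averaging as the description suggests."""
    if mode == "sum":
        return sum(image_indices)
    return sum(image_indices) / len(image_indices)
```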
- a specific image capture output S_OUT generated from the multi-view image capture module 502 may be saved as a file in the storage device 506 (e.g., a non-volatile memory), and then outputted by the image capture device 500 to a display device (e.g., a 2D display screen) 516 or a network (e.g., a social network) 518 .
- the specific image capture output S_OUT may be used for further processing such as face recognition or image enhancement.
- When an image capture quality metric index is assigned a larger value, it means that the image/video quality is better. Hence, based on comparison of the image capture quality metric indices of the image capture outputs, the controller 504 knows which one of the image capture outputs has the best image/video quality.
- the controller 504 refers to the image capture quality metric indices of the image capture outputs S_OUT R , S_OUT L to directly select one of the image capture outputs S_OUT R , S_OUT L as the specific image capture output S_OUT.
- FIG. 6 is an example illustrating an operation of obtaining the specific image capture output S_OUT according to an embodiment of the present invention. As shown in FIG. 6 , the left-view image S_OUT L has a blurry face region, and the right-view image S_OUT R is clear.
- the image quality metric index of the right-view image S_OUT R is larger than that of the left-view image S_OUT L , which implies that the right-view image S_OUT R has better image quality due to a stable face region in this example.
- Based on the comparison result of the image quality metric indices of the right-view image S_OUT R and the left-view image S_OUT L , the controller 504 directly selects the right-view image S_OUT R as the specific image S_OUT to be stored and outputted.
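The direct-selection scheme amounts to an argmax over the per-output quality metric indices. A sketch, with `quality_index` standing in for whatever metric the controller 504 actually computes (face detection, auto-focus, inherent characteristics, and/or EIS information):

```python
def select_best_output(outputs, quality_index):
    """Compare the image capture quality metric indices of all image
    capture outputs and directly select the one with the largest index
    as the specific image capture output S_OUT."""
    return max(outputs, key=quality_index)
```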
- the controller 504 may refer to the image capture quality metric indices of the image capture outputs S_OUT R , S_OUT L to control the multi-view image capture module 502 to generate a new image capture output S_OUT N corresponding to a selected viewing angle as the specific image capture output S_OUT.
- FIG. 7 is an example illustrating an operation of obtaining the specific image capture output S_OUT according to another embodiment of the present invention. As shown in FIG. 7 , the left-view image S_OUT L has a blurry face region, and the right-view image S_OUT R is clear.
- the image quality metric index of the right-view image S_OUT R is larger than that of the left-view image S_OUT L , which implies that the right-view image S_OUT R has better image quality due to a stable face region in this example.
- the controller 504 selects the camera unit 512 such that a new captured image S_OUT N corresponding to a selected viewing angle is generated as the specific image S_OUT to be stored and outputted.
- In a case where each of the image capture outputs S_OUT R , S_OUT L is a video sequence composed of consecutive images, the controller 504 refers to the image capture quality metric indices of the image capture outputs S_OUT R , S_OUT L to directly select one of the image capture outputs S_OUT R , S_OUT L as the specific image capture output S_OUT.
- FIG. 8 is an example illustrating an operation of obtaining the specific image capture output S_OUT according to yet another embodiment of the present invention.
- the left-view video sequence S_OUT L includes two images each having a blurry face region, whereas each image included in the right-view video sequence S_OUT R is clear.
- the video quality metric index of the right-view video sequence S_OUT R is larger than that of the left-view video sequence S_OUT L , which implies that the right-view video sequence S_OUT R has better video quality due to a stable face region in this example.
- Based on the comparison result of the video quality metric indices of the right-view video sequence S_OUT R and the left-view video sequence S_OUT L , the controller 504 directly selects the right-view video sequence S_OUT R as the specific video sequence S_OUT to be stored and outputted.
- FIG. 9 is a flowchart illustrating an image capture method according to another embodiment of the present invention.
- the image capture method may be employed by the image capture device 500 . Provided that the result is substantially the same, the steps are not required to be executed in the exact order shown in FIG. 9 .
- the image capture method may be briefly summarized by following steps.
- Step 900 Start.
- Step 902 Check if the shutter/capture button 508 is touched/pressed. If yes, go to step 904 ; otherwise, perform step 902 again.
- Step 904 Utilize the multi-view image capture module 502 for simultaneously generating a plurality of image capture outputs respectively corresponding to a plurality of different viewing angles.
- each of the image capture outputs is a single image when the shutter/capture button 508 is touched/pressed to enable a photo mode.
- each of the image capture outputs is a video sequence when the shutter/capture button 508 is touched/pressed to enable a video recording mode.
- Step 906 Utilize the controller 504 for calculating an image capture quality metric index for each of the image capture outputs.
- the image capture quality metric index may be derived from face detection information, auto-focus information, inherent image characteristic information, and/or EIS information.
- Step 908 Utilize the controller 504 to compare image capture quality metric indices of the image capture outputs.
- Step 910 Utilize the controller 504 to decide which one of the image capture outputs has the best image/video quality based on the comparison result.
- Step 912 Output a specific image capture output generated from the multi-view image capture module 502 according to an image capture output identified in step 910 to have the best image/video quality.
- the image capture output which is identified in step 910 to have the best image/video quality is directly selected as the specific image capture output.
- a camera unit used for generating the image capture output which is identified in step 910 to have the best image/video quality is selected to capture a new image capture output as the specific image capture output.
- Step 914 End.
- The first image capture method shown in FIG. 3 / FIG. 4 obtains a captured image based on consecutive preview images generated by a single camera unit in a temporal domain, while the second image capture method shown in FIG. 9 obtains a specific image capture output based on multiple image capture outputs (e.g., multi-view images or multi-view video sequences) generated by different camera units in a spatial domain.
- combining technical features of the first image capture method shown in FIG. 3 / FIG. 4 and the second image capture method shown in FIG. 9 to obtain a captured image based on multiple image capture outputs generated by different camera units under a temporal-spatial domain is feasible. Please refer to FIG. 8 again.
- the controller 504 may be adequately modified to perform the second image capture method shown in FIG. 9 to select the image capture output S_OUT from multiple image capture outputs S_OUT L and S_OUT R , and then perform the first image capture method upon images included in the selected image capture output S_OUT to obtain a captured image for the automatic shot mode.
- the modified controller 504 is configured to treat the images included in the image capture output S_OUT selected based on the second image capture method as the aforementioned consecutive preview images to be processed by the first image capture method. This also falls within the scope of the present invention.
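The combined temporal-spatial scheme described above can be sketched as: first pick the view whose video sequence has the best video quality metric index (the second method, FIG. 9), then scan that view's frames as consecutive preview images until one meets the blur condition (the first method, FIG. 3 / FIG. 4). All names, the `(blur, frame)` sequence representation, and the fallback behavior are illustrative assumptions.

```python
def temporal_spatial_capture(view_sequences, quality_index, blur_threshold):
    """view_sequences maps a view name to a list of (blur, frame) pairs.
    Spatial step: pick the view with the best video quality metric index.
    Temporal step: return that view's first frame whose blur index is low
    enough, falling back to the last frame if none qualifies."""
    best_view = max(view_sequences,
                    key=lambda v: quality_index(view_sequences[v]))
    for frame_blur, frame in view_sequences[best_view]:
        if frame_blur < blur_threshold:
            return best_view, frame
    return best_view, view_sequences[best_view][-1][1]
```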
Abstract
An image capture device has an image capture module and a controller. The image capture module is used for capturing a plurality of consecutive preview images under an automatic shot mode. In addition, the image capture module can be a multi-view image capture module, which is used to capture a plurality of multiple-angle preview images. The controller is used for analyzing the preview images to identify an image capture quality metric index, and determining if a target image capture condition is met by referring to at least the image capture quality metric index. A captured image for the automatic shot mode is stored when the controller determines that the target image capture condition is met.
Description
- This application claims the benefit of U.S. provisional application No. 61/651,499, filed on May 24, 2012 and incorporated herein by reference.
- The disclosed embodiments of the present invention relate to an automatic shot scheme, and more particularly, to an image capture device controlled according to the image capture quality and related image capture method thereof.
- Camera modules have become popular elements used in a variety of applications. For example, a smartphone is typically equipped with a camera module, thus allowing a user to easily and conveniently take pictures by using the smartphone. However, due to inherent characteristics of the smartphone, the smartphone is prone to generate blurred images. For example, the camera aperture and/or sensor size of the smartphone is typically small, which leads to a small amount of light arriving at each pixel in the camera sensor. As a result, the image quality may suffer from the small camera aperture and/or sensor size.
- Besides, due to the light weight and portability of the smartphone, the smartphone tends to be affected by hand shake. In general, the shake of the smartphone will last for a period of time. Hence, any picture taken during this period of time would be affected by the hand shake. An image deblurring algorithm may be performed upon the blurred images. However, the computational complexity of the image deblurring algorithm is very high, resulting in considerable power consumption. Besides, artifacts will be introduced if the image deblurring algorithm is not perfect.
- Moreover, a camera module with an optical image stabilizer (OIS) is expensive. Hence, the conventional smartphone is generally equipped with a digital image stabilizer (i.e., an electronic image stabilizer (EIS)). The digital image stabilizer can counteract the motion of images, but fails to prevent image blurring.
- In addition to the camera shake, the movement of a target object within a scene to be captured may cause the captured image to have blurry image contents. For example, considering a case where the user wants to use the smartphone to take a picture of a child, the captured image may have a blurry image content of the child if the child is still when the user is going to touch the shutter/capture button and then suddenly moves when the user actually touches the shutter/capture button.
- With the development of science and technology, users are pursuing stereoscopic and more realistic image displays rather than just high quality images. Hence, an electronic device (e.g., a smartphone) may be equipped with a stereo camera and a stereo display. The captured image or preview image generated by the stereo camera of the smartphone can be a stereo image (i.e., an image pair including a left-view image and a right-view image) or a single-view image (i.e., one of a left-view image and a right-view image). That is, even though the smartphone is equipped with the stereo camera, the user may use the smartphone to capture a single-view image only, or may send a single-view image selected from a stereo image captured by the smartphone to a two-dimensional (2D) display or a social network (e.g., Facebook). The conventional design simply selects a single image with a fixed viewing angle from a stereo image. However, the stereo images generated by the stereo camera may have different image quality. Sometimes, one viewing angle is better than the other viewing angle. Using a fixed viewing angle to select a single image from a stereo image fails to generate a 2D output with optimum image/video quality.
- In accordance with exemplary embodiments of the present invention, an image capture device controlled according to the image capture quality and related image capture method thereof are proposed to solve the above-mentioned problem.
- According to a first aspect of the present invention, an exemplary image capture device is disclosed. The exemplary image capture device includes an image capture module and a controller. The image capture module is arranged for capturing a plurality of consecutive preview images under an automatic shot mode. The controller is arranged for analyzing the consecutive preview images to identify an image capture quality metric index, and determining if a target image capture condition is met by referring to at least the image capture quality metric index, wherein a captured image for the automatic shot mode is stored when the controller determines that the target image capture condition is met.
- According to a second aspect of the present invention, an exemplary image capture method is disclosed. The exemplary image capture method includes at least the following steps: capturing a plurality of consecutive preview images under an automatic shot mode; analyzing the consecutive preview images to identify an image capture quality metric index; determining if a target image capture condition is met by referring to at least the image capture quality metric index; and when the target image capture condition is met, storing a captured image for the automatic shot mode.
- According to a third aspect of the present invention, an exemplary image capture device is disclosed. The exemplary image capture device includes a multi-view image capture module and a controller. The multi-view image capture module is arranged for simultaneously generating a plurality of image capture outputs respectively corresponding to a plurality of different viewing angles. The controller is arranged for calculating an image capture quality metric index for each of the image capture outputs. A specific image capture output generated from the multi-view image capture module is outputted by the image capture device according to a plurality of image capture quality metric indices of the image capture outputs.
- According to a fourth aspect of the present invention, an exemplary image capture method is disclosed. The exemplary image capture method includes at least the following steps: utilizing a multi-view image capture module for simultaneously generating a plurality of image capture outputs respectively corresponding to a plurality of different viewing angles; calculating an image capture quality metric index for each of the image capture outputs; and outputting a specific image capture output generated from the multi-view image capture module according to a plurality of image capture quality metric indices of the image capture outputs.
- These and other objectives of the present invention will no doubt become obvious to those of ordinary skill in the art after reading the following detailed description of the preferred embodiment that is illustrated in the various figures and drawings.
- FIG. 1 is a block diagram illustrating an image capture device according to a first embodiment of the present invention.
- FIG. 2 is a diagram illustrating an example of generating a captured image under the automatic shot mode according to an embodiment of the present invention.
- FIG. 3 is a flowchart illustrating an image capture method according to an embodiment of the present invention.
- FIG. 4 is a flowchart illustrating an image capture method according to another embodiment of the present invention.
- FIG. 5 is a block diagram illustrating an image capture device according to a second embodiment of the present invention.
- FIG. 6 is an example illustrating an operation of obtaining the specific image capture output according to an embodiment of the present invention.
- FIG. 7 is an example illustrating an operation of obtaining the specific image capture output according to another embodiment of the present invention.
- FIG. 8 is an example illustrating an operation of obtaining the specific image capture output according to yet another embodiment of the present invention.
- FIG. 9 is a flowchart illustrating an image capture method according to another embodiment of the present invention.
- Certain terms are used throughout the description and following claims to refer to particular components. As one skilled in the art will appreciate, manufacturers may refer to a component by different names. This document does not intend to distinguish between components that differ in name but not function. In the following description and in the claims, the terms “include” and “comprise” are used in an open-ended fashion, and thus should be interpreted to mean “include, but not limited to . . . ”. Also, the term “couple” is intended to mean either an indirect or direct electrical connection. Accordingly, if one device is coupled to another device, that connection may be through a direct electrical connection, or through an indirect electrical connection via other devices and connections.
- One technical feature of the present invention is to obtain and store a captured image when a target image capture condition (e.g., a stable image capture condition) is met under an automatic shot mode. For example, it is determined that the target image capture condition (e.g., the stable image capture condition) is met when a region of stable (ROS) is found stable due to having no movement/small movement, having a small blur value, and/or having a better image quality metric index. In this way, a non-blurred image (or better quality image) can be automatically obtained and stored under the automatic shot mode by checking the stable image capture condition. Another technical feature of the present invention is to output a specific image capture output generated from a multi-view image capture module according to a plurality of image capture quality metric indices of a plurality of image capture outputs (e.g., image outputs or video outputs) respectively corresponding to a plurality of different viewing angles. In this way, a 2D image/video output derived from image capture outputs of the multi-view image capture module would have optimum image/video quality. Further details are described as below.
- Please refer to FIG. 1 , which is a block diagram illustrating an image capture device according to a first embodiment of the present invention. The image capture device 100 may be at least a portion (i.e., part or all) of an electronic device. For example, the image capture device 100 may be implemented in a portable device such as a smartphone or a digital camera. In this embodiment, the image capture device 100 includes, but is not limited to, an image capture module 102 , a controller 104 , a storage device 106 , and a shutter/capture button 108 . The shutter/capture button 108 may be a physical button installed on the housing or a virtual button displayed on a touch screen. In this embodiment, the user may touch/press the shutter/capture button 108 to activate an automatic shot mode for enabling the image capture device 100 to generate and store a captured image automatically. The image capture module 102 has the image capture capability, and may be used to generate a captured image when triggered by touch/press of the shutter/capture button 108 . As the present invention focuses on the control scheme applied to the image capture module 102 rather than an internal structure of the image capture module 102 , further description of the internal structure of the image capture module 102 is omitted here for brevity.
- In this embodiment, when the shutter/
capture button 108 is touched/pressed to activate the automatic shot mode, the image capture module 102 captures a plurality of consecutive preview images IMG_Pre under the automatic shot mode until the preview images show that a stable image capture condition is met. Specifically, the controller 104 is arranged for analyzing the consecutive preview images IMG_Pre to identify an image capture quality metric index, and determining if the stable image capture condition is met by referring to at least the image capture quality metric index. By way of example, but not limitation, the image capture quality metric index may be indicative of an image blur degree, and the controller 104 may identify the image capture quality metric index by performing a predetermined processing operation upon a region of stable (ROS) in each preview image of the consecutive preview images IMG_Pre.
- In one exemplary design, the ROS region in each preview image is determined by the
controller 104 automatically without user intervention. For example, the controller 104 performs face detection upon each preview image to determine a face region which is used as the ROS region in each preview image. Each face region may include one or more face images, each defined by a position (x, y) and a size (w, h), where x and y represent the X-coordinate and the Y-coordinate of a center (or a left-top corner) of a face image, and w and h represent the width and the height of the face image. It should be noted that a face region found in one preview image may be identical to or different from a face region found in another preview image. In other words, as a face region is dynamically found in each preview image, the face region is not necessarily a fixed image region in each of the consecutive preview images IMG_Pre. Alternatively, the controller 104 may use a center region, a focus region determined by auto-focus, a complex texture region determined by edge detection, or an entire image to act as the ROS region in each preview image. It should be noted that the position and size of the ROS region in each preview image are fixed when the center region or the entire image is used as the ROS region. However, the position and size of the ROS region in each preview image are not necessarily fixed when the focus region (which is dynamically determined by auto-focus performed for capturing each preview image) or the complex texture region (which is dynamically determined by edge detection performed by the controller 104 upon each preview image) is used as the ROS region.
- In another exemplary design, the ROS region in each preview image is determined by the
controller 104 in response to a user input USER_IN. That is, the ROS region is manually selected by the user. For example, before the image capture device 100 enters the automatic shot mode, the user may determine a touch focus region by entering the user input USER_IN through a touch screen (not shown). After the image capture device 100 enters the automatic shot mode, the controller 104 uses the touch focus region selected by the user input USER_IN to act as the ROS region in each preview image. It should be noted that the position and size of the ROS region in each preview image may be fixed since the touch focus region is determined before the automatic shot mode is activated. Alternatively, the position and size of the ROS region in each preview image may not be fixed since the ROS region can be tracked using object tracking technology.
- The image capture quality metric index may be identified by performing one or more predetermined processing operations upon the ROS region in each preview image of the consecutive preview images IMG_Pre. For example, the
controller 104 may identify the image capture quality metric index by estimating the image blur degree. In one exemplary design, the image blur degree can be estimated by performing a stable estimation for the ROS region in each preview image of the consecutive preview images IMG_Pre. Hence, the controller 104 detects a zero image blur degree when the stable estimation result indicates a completely stable state, e.g. no movement, and detects a low image blur degree when the stable estimation result indicates a nearly stable state, e.g. small movement. - In a first exemplary embodiment, the stable estimation may be implemented using motion estimation performed upon ROS regions of the consecutive preview images IMG_Pre. Regarding an ROS region in one preview image, when the motion vector obtained by the motion estimation is zero, the stable estimation result indicates a completely stable state, e.g. no movement; and when the motion vector obtained by the motion estimation is close to zero, the stable estimation result indicates a nearly stable state, e.g. small movement.
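The motion-estimation variant of the stable estimation can be sketched as follows. This is an illustrative sketch only, not the patented implementation; the function name and the near-zero threshold are assumptions made for the example.

```python
def classify_stability(motion_vector, near_zero_threshold=1.0):
    """Map a motion vector (dx, dy) between ROS regions of two preview
    images to a stable-estimation result. The threshold is an assumed,
    illustrative value."""
    dx, dy = motion_vector
    magnitude = (dx * dx + dy * dy) ** 0.5
    if magnitude == 0:
        return "completely stable"  # zero motion vector -> zero image blur degree
    if magnitude <= near_zero_threshold:
        return "nearly stable"      # motion vector close to zero -> low image blur degree
    return "unstable"
```

In practice the threshold would be tuned to the preview resolution, since a one-pixel shift means less at higher resolutions.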
- In a second exemplary embodiment, the stable estimation may be implemented by calculating a sum of absolute differences (SAD) or a sum of squared differences (SSD) between ROS regions of two consecutive preview images. Regarding ROS regions of two consecutive preview images, when the SAD/SSD value is zero, the stable estimation result indicates a completely stable state, e.g. no movement; and when the SAD/SSD value is close to zero, the stable estimation result indicates a nearly stable state, e.g. small movement.
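The SAD/SSD variant above can be illustrated with a minimal sketch; the helper names and the near-zero threshold are assumptions, and real implementations would operate on camera buffers rather than nested lists.

```python
def sad(region_a, region_b):
    """Sum of absolute differences between two equal-sized grayscale
    ROS regions, each given as a list of pixel rows."""
    return sum(abs(pa - pb)
               for row_a, row_b in zip(region_a, region_b)
               for pa, pb in zip(row_a, row_b))

def ssd(region_a, region_b):
    """Sum of squared differences between the same two regions."""
    return sum((pa - pb) ** 2
               for row_a, row_b in zip(region_a, region_b)
               for pa, pb in zip(row_a, row_b))

def classify_by_sad(region_a, region_b, near_zero_threshold=8):
    # A zero SAD means the two ROS regions are identical (completely stable);
    # a small SAD means only minor pixel changes (nearly stable).
    value = sad(region_a, region_b)
    if value == 0:
        return "completely stable"
    if value <= near_zero_threshold:
        return "nearly stable"
    return "unstable"
```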
- In a third exemplary embodiment, the stable estimation may be implemented by calculating a difference between positions of ROS regions of two consecutive preview images and calculating a difference between sizes of the ROS regions of the two consecutive preview images. For example, in a case where the ROS region in each preview image is determined by face detection, the position difference and the size difference between ROS regions of two consecutive preview images may be used to determine the stable estimation result. When the position difference and the size difference are both zero, the stable estimation result indicates a completely stable state, e.g. no movement. When the position difference and the size difference are both close to zero, the stable estimation result indicates a nearly stable state, e.g. small movement. When one of the position difference and the size difference is zero and the other of the position difference and the size difference is close to zero, the stable estimation result also indicates a nearly stable state, e.g. small movement.
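The position/size comparison in the third embodiment can be sketched as below; the (x, y, w, h) tuple matches the face-region definition given earlier, while the threshold value is an illustrative assumption.

```python
def classify_face_region_stability(region_1, region_2, near_zero_threshold=2):
    """Compare face regions (x, y, w, h) detected in two consecutive
    preview images using position and size differences."""
    x1, y1, w1, h1 = region_1
    x2, y2, w2, h2 = region_2
    position_diff = abs(x1 - x2) + abs(y1 - y2)
    size_diff = abs(w1 - w2) + abs(h1 - h2)
    if position_diff == 0 and size_diff == 0:
        return "completely stable"   # both differences are zero
    if position_diff <= near_zero_threshold and size_diff <= near_zero_threshold:
        # covers "both close to zero" and the mixed zero / close-to-zero case
        return "nearly stable"
    return "unstable"
```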
- In addition to the stable estimation, the
controller 104 may further perform another predetermined processing operation (e.g., a blur value estimation) for each preview image of the consecutive preview images IMG_Pre. In other words, the controller 104 may be configured to identify the image blur degree by referring to both of the stable estimation result and the blur value estimation result. Hence, the controller 104 detects a zero image blur degree when the stable estimation result indicates a completely stable state (e.g. no movement) and the blur value indicates no blur, and detects a low image blur degree when the stable estimation result indicates a nearly stable state (e.g. small movement) and the blur value is small. - In a first exemplary embodiment, the blur value estimation may be implemented by performing edge detection upon the ROS region in each preview image, and then calculating the edge magnitude derived from the edge detection to act as a blur value of the ROS region.
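An edge-magnitude blur value of the kind described above can be sketched with a simple first-difference gradient; the function name is an assumption, and a production implementation would typically use a proper edge operator (e.g. Sobel) instead of this crude gradient.

```python
def edge_magnitude(region):
    """Mean gradient magnitude of a grayscale ROS region (list of pixel
    rows). A larger value indicates stronger edges, i.e. a sharper,
    less blurred region."""
    total, count = 0, 0
    for y in range(len(region) - 1):
        for x in range(len(region[0]) - 1):
            gx = region[y][x + 1] - region[y][x]  # horizontal gradient
            gy = region[y + 1][x] - region[y][x]  # vertical gradient
            total += abs(gx) + abs(gy)
            count += 1
    return total / count if count else 0.0
```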
- In a second exemplary embodiment, the blur value estimation may be implemented by calculating the image visual quality assessment metric of the ROS region in each preview image to act as a blur value of the ROS region.
- In a third exemplary embodiment, the blur value estimation may be implemented by obtaining inherent image characteristic information of each of the consecutive preview images by analyzing the consecutive preview images, and determining a blur value estimation result according to the inherent image characteristic information, where the inherent image characteristic information includes at least one of sharpness, blur, brightness, contrast, and color. To put it another way, the image capture quality metric index (e.g., the image blur degree) may be determined according to at least the inherent image characteristic information.
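The inherent image characteristics named above can be approximated with simple statistics; the formulas below (mean for brightness, standard deviation for contrast, mean gradient as a sharpness proxy) are illustrative stand-ins, not the patent's definitions.

```python
def inherent_characteristics(image):
    """Compute simple stand-ins for inherent image characteristic
    information from a grayscale image given as a list of pixel rows."""
    pixels = [p for row in image for p in row]
    n = len(pixels)
    brightness = sum(pixels) / n  # mean intensity
    contrast = (sum((p - brightness) ** 2 for p in pixels) / n) ** 0.5  # std deviation
    # crude sharpness proxy: mean absolute horizontal gradient
    edges = sum(abs(row[x + 1] - row[x]) for row in image for x in range(len(row) - 1))
    steps = sum(len(row) - 1 for row in image)
    sharpness = edges / steps if steps else 0.0
    return {"brightness": brightness, "contrast": contrast, "sharpness": sharpness}
```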
- When identifying either a zero image blur degree or a low image blur degree (i.e., detecting that the image blur degree is lower than a predetermined threshold) by checking ROS region(s) of one or more preview images of the consecutive preview images IMG_Pre, the
controller 104 determines that the stable image capture condition is met. In other words, the controller 104 determines that the stable image capture condition is met when an ROS region is found stable without any change or with a small change. However, this merely serves as one possible implementation of the present invention. In an alternative design, the stable image capture condition may be checked by referring to the image blur degree identified using preview images and additional indicator(s) provided by other circuit element(s). For example, when the image capture device 100 is employed in a smartphone, the controller 104 may further receive a sensor input SENSOR_IN from at least one sensor 101 of the smartphone, where the sensor input SENSOR_IN is indicative of a movement status associated with the image capture device 100, especially a movement status of the image capture module 102. For example, the sensor 101 may be a G-sensor or a Gyro sensor. Hence, the controller 104 determines if the stable image capture condition is met by referring to the image blur degree and the movement status. In other words, the controller 104 determines that the stable image capture condition is met when the ROS region is found stable due to zero image blur degree/low image blur degree and the camera is found stable due to zero movement/small movement of the image capture module 102. - When the stable image capture condition is met under the automatic shot mode, the
controller 104 stores a captured image IMG into the storage device (e.g., a non-volatile memory) 106 as an image capture result for the automatic shot mode activated by user's touch/press of the shutter/capture button 108. In one exemplary design, the controller 104 directly selects one of the consecutive preview images IMG_Pre as the captured image IMG, where the consecutive preview images IMG_Pre are obtained before the stable image capture condition is met. For example, the last preview image which has a stable ROS region and is captured under the condition that the camera is stable may be selected as the captured image IMG. In another exemplary design, when the stable image capture condition is met, the controller 104 controls the image capture module 102 to capture a new image IMG_New as the captured image IMG. That is, none of the preview images generated before the stable image capture condition is met is selected as the captured image IMG, and an image captured immediately after the stable image capture condition is met is the captured image IMG. - For better understanding of technical features of the present invention, please refer to
FIG. 2, which is a diagram illustrating an example of generating a captured image under the automatic shot mode according to an embodiment of the present invention. Suppose that face detection is used to select the ROS region in each preview image. As shown in the sub-diagram (A) in FIG. 2, the image capture device 100 is affected by hand shake when capturing the preview image. Thus, besides the face region of a target object (i.e., a person), the remaining parts of this preview image generated under the automatic shot mode are blurry. The controller 104 determines that the stable image capture condition is not met because the ROS region (i.e., the face region) is found unstable due to high image blur degree and the camera is found unstable due to large movement of the image capture module 102. - As shown in the sub-diagram (B) in
FIG. 2, the image capture device 100 is not affected by hand shake, but the target object (i.e., the person) moves his head when the image capture device 100 captures the preview image. Thus, the face region of the target object is blurry, but the remaining parts of this preview image generated under the automatic shot mode are clear. Though the camera is found stable due to zero movement of the image capture module 102, the controller 104 also determines that the stable image capture condition is not met because the ROS region (i.e., the face region) is found unstable due to high image blur degree. - As shown in the sub-diagram (C) in
FIG. 2, the image capture device 100 is not affected by hand shake, and the target object (i.e., the person) is still when the image capture device 100 captures the preview image. Thus, the face region and the remaining parts of this preview image generated under the automatic shot mode are clear. At this moment, the controller 104 determines that the stable image capture condition is met because the ROS region (i.e., the face region) is found stable due to zero image blur degree and the camera is found stable due to zero movement of the image capture module 102. In this way, the image capture device 100 can successfully obtain a desired non-blurred image for the automatic shot mode when the stable image capture condition is met. - The above-mentioned exemplary operation of checking an ROS region (e.g., a face region) to determine if a stable image capture condition is met is performed under an automatic shot mode, and is therefore different from an auto-focus operation performed based on the face region. Specifically, the auto-focus operation checks the face region to adjust the lens position for automatic focus adjustment. After the focus point is successfully set by the auto-focus operation based on the face region, the automatic shot mode is enabled. Thus, the consecutive preview images IMG_Pre are captured under a fixed focus setting configured by the auto-focus operation. In other words, during the procedure of checking the ROS region (e.g., the face region) to determine if the stable image capture condition is met, no focus adjustment is made to the lens.
- Please refer to
FIG. 1 in conjunction with FIG. 3. FIG. 3 is a flowchart illustrating an image capture method according to an embodiment of the present invention. The image capture method may be employed by the image capture device 100. Provided that the result is substantially the same, the steps are not required to be executed in the exact order shown in FIG. 3. The image capture method may be briefly summarized by the following steps. - Step 200: Start.
- Step 202: Check if the shutter/
capture button 108 is touched/pressed to activate an automatic shot mode. If yes, go to step 204; otherwise, perform step 202 again. - Step 204: Utilize the
image capture module 102 to capture preview images. - Step 206: Utilize the
controller 104 to analyze consecutive preview images to identify an image capture quality metric index (e.g., an image blur degree). - Step 208: Receive a sensor input SENSOR_IN indicative of a movement status associated with the
image capture module 102. - Step 210: Determine if a target image capture condition (e.g., a stable image capture condition) is met by referring to the image capture quality metric index (e.g., the image blur degree) and the movement status. If yes, go to step 212; otherwise, go to step 204.
- Step 212: Store a captured image for the automatic shot mode into the
storage device 106. For example, one of the consecutive preview images obtained before the stable image capture condition is met is directly selected as the captured image to be stored, or a new image captured immediately after the stable image capture condition is met is used as the captured image to be stored. - Step 214: End.
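Steps 202-212 above can be sketched as a polling loop. The callbacks, thresholds, and attempt limit below are illustrative assumptions introduced for the example, not elements of the patent.

```python
def automatic_shot(capture_preview, estimate_blur_degree, read_movement,
                   blur_threshold=0.1, movement_threshold=0.05, max_attempts=100):
    """Capture preview images (step 204), identify the image blur degree
    (step 206), read the sensor movement status (step 208), and return
    the last preview image once the stable image capture condition is
    met (steps 210-212)."""
    previous, current = None, None
    for _ in range(max_attempts):
        previous, current = current, capture_preview()         # step 204
        blur_degree = estimate_blur_degree(previous, current)  # step 206
        movement = read_movement()                             # step 208
        if blur_degree < blur_threshold and movement < movement_threshold:  # step 210
            return current                                     # step 212
    return None  # condition never met within the attempt budget
```

Dropping the `read_movement` check (always returning zero movement) corresponds to the variant of FIG. 4, where step 208 is omitted.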
- It should be noted that
step 208 may be omitted, depending upon actual design consideration/requirement. That is, in an alternative design, a stable image capture condition may be checked without referring to the sensor input SENSOR_IN. Please refer to FIG. 4, which is a flowchart illustrating an image capture method according to another embodiment of the present invention. The major difference between the exemplary image capture methods shown in FIG. 3 and FIG. 4 is that step 208 is omitted and step 210 is replaced by step 310 as below. - Step 310: Determine if a target image capture condition (e.g., a stable image capture condition) is met by referring to the image capture quality metric index (e.g., the image blur degree). If yes, go to step 212; otherwise, go to step 204.
- As a person skilled in the art can readily understand details of each step shown in
FIG. 3 and FIG. 4 after reading above paragraphs directed to the image capture device 100 shown in FIG. 1, further description is omitted here for brevity. -
FIG. 5 is a block diagram illustrating an image capture device according to a second embodiment of the present invention. The image capture device 500 may be at least a portion (i.e., part or all) of an electronic device. For example, the image capture device 500 may be implemented in a portable device such as a smartphone or a digital camera. In this embodiment, the image capture device 500 includes, but is not limited to, a multi-view image capture module 502, a controller 504, a storage device 506, a shutter/capture button 508, and an optional electronic image stabilization (EIS) module 510. The shutter/capture button 508 may be a physical button installed on the housing or a virtual button displayed on a touch screen. In this embodiment, even though the image capture device 500 is equipped with the multi-view image capture module 502, the user may touch/press the shutter/capture button 508 to enable the image capture device 500 to output a single-view image or a single-view video sequence. The multi-view image capture module 502 has the image capture capability, and is capable of simultaneously generating a plurality of image capture outputs respectively corresponding to a plurality of different viewing angles, where each of the image capture outputs may be a single image or a video sequence composed of consecutive images. In this embodiment, the multi-view image capture module 502 may be implemented using a camera array or a multi-lens camera, and thus may be regarded as having a plurality of camera units for generating image capture outputs respectively corresponding to different viewing angles. By way of example, the multi-view image capture module 502 shown in FIG. 5 may be a stereo camera configured to have two camera units 512 and 514, where the camera unit 512 is used to generate a right-view image capture output S_OUTR, and the camera unit 514 is used to generate a left-view image capture output S_OUTL.
It should be noted that the number of camera units is not meant to be a limitation of the present invention. As the present invention focuses on the camera selection scheme applied to the multi-view image capture module 502 and the output selection scheme applied to image capture outputs generated from the multi-view image capture module 502, further description of the internal structure of the multi-view image capture module 502 is omitted here for brevity. - The
controller 504 is arranged for calculating an image capture quality metric index for each of the image capture outputs. Regarding each of the image capture outputs, the image capture quality metric index may be calculated based on a selected image region (e.g., a face region having one or more face images) or an entire image area of each image. Besides, the image capture quality metric index is correlated with an image blur degree. For example, the image capture quality metric index would indicate good image capture quality when the image blur degree is low, and the image capture quality metric index would indicate poor image capture quality when the image blur degree is high. - In a case where each of the image capture outputs S_OUTR and S_OUTL is a single image, the aforementioned image capture quality metric index is an image quality metric index. In a first exemplary embodiment, the
controller 504 performs face detection upon each image capture output (i.e., each of a left-view image and a right-view image) to obtain face detection information, and determines the image capture quality metric index (i.e., the image quality metric index) according to the face detection information. As mentioned above, the left-view image and the right-view image generated from a stereo camera may have different quality. For example, the face in one of the left-view image and the right-view image is clear, but the same face in the other of the left-view image and the right-view image may be blurry. In other words, one of the left-view image and the right-view image is clear, but the other of the left-view image and the right-view image is blurry. Besides, due to different viewing angles of the left-view image and the right-view image, one of the left-view image and the right-view image may have more human faces, and one of the left-view image and the right-view image may have a better face angle. Thus, the face detection information is indicative of the image quality of the left-view image and the right-view image. By way of example, the obtained face detection information may include at least one of a face angle, a face number (i.e., the number of human faces detected in an entire image), a face size (e.g., the size of a face region having one or more human faces), a face position (e.g., the position of a face region having one or more human faces), a face symmetry (e.g., a ratio of left face and right face of a face region having one or more human faces), an eye number (i.e., the number of human eyes detected in an entire image), and an eye blink status (i.e., the number of blinking human eyes detected in an entire image).
In this embodiment, the image capture quality metric index (i.e., the image quality metric index) is set to a larger value when an image capture output S_OUTR, S_OUTL (i.e., a left-view image or a right-view image) has larger front faces, more front faces, a larger eye number, and/or fewer blinking eyes. - In a second exemplary embodiment, the
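A scoring rule of this kind can be sketched as below. The field names and weights are hypothetical illustrations of the stated preference (larger front faces, more detected eyes, fewer blinks yield a larger index), not values taken from the patent.

```python
def face_quality_metric_index(face_info):
    """Score one image capture output from its face detection information.
    Each entry in face_info is a dict with hypothetical fields describing
    one detected face."""
    index = 0.0
    for face in face_info:
        frontal_weight = 1.5 if face["frontal"] else 1.0  # favor front faces
        index += face["width"] * face["height"] * frontal_weight
        index += 10 * face["eyes_detected"]
        index -= 20 * face["eyes_blinking"]               # penalize eye blinks
    return index
```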
controller 504 receives auto-focus information INF_1 of each image capture output (i.e., each of a left-view image and a right-view image) from the multi-view image capture module 502, and determines the image capture quality metric index according to the auto-focus information INF_1. As the camera units correspond to different viewing angles, the auto-focus information INF_1 obtained for the camera units may differ and is therefore indicative of the image quality of each image capture output. - In a third exemplary embodiment, the
controller 504 analyzes at least a portion (i.e., part or all) of each of the image capture outputs (i.e., each of a left-view image and a right-view image) to obtain inherent image characteristic information, and determines the image capture quality metric index (i.e., the image quality metric index) according to the inherent image characteristic information. The inherent image characteristic information is indicative of the image quality of the left-view image and the right-view image. For example, the inherent image characteristic information may include at least one of sharpness, blur, brightness, contrast, and color. In this embodiment, the image capture quality metric index (i.e., the image quality metric index) is set to a larger value when an image capture output S_OUTR, S_OUTL (i.e., a left-view image or a right-view image) is sharper or has a more suitable brightness distribution (i.e., a better white balance result). - In a fourth exemplary embodiment, the
controller 504 determines the image capture quality metric index (i.e., the image quality metric index) of each of the image capture outputs (i.e., each of a left-view image and a right-view image) according to electronic image stabilization (EIS) information INF_2 given by the optional EIS module 510. When the EIS module 510 is implemented in the image capture device 500, the EIS information INF_2 is indicative of the image quality of the left-view image and the right-view image. In this embodiment, the image capture quality metric index (i.e., the image quality metric index) is set to a larger value when an image capture output S_OUTR, S_OUTL (i.e., a left-view image or a right-view image) is given more image stabilization. - In a fifth exemplary embodiment, the
controller 504 may employ any combination of the above-mentioned face detection information, auto-focus information, inherent image characteristic information, and EIS information to determine the image capture quality metric index. - In another case where each of the image capture outputs S_OUTR and S_OUTL is a video sequence composed of consecutive images, the aforementioned image capture quality metric index is a video quality metric index. Similarly, the
controller 504 may employ face detection information, auto-focus information, inherent image characteristic information, and/or EIS information to determine the image capture quality metric index (i.e., the video quality metric index) of each of the image capture outputs S_OUTL and S_OUTR (i.e., a left-view video sequence and a right-view video sequence). For example, the video quality metric index may be derived from processing (e.g., summing or averaging) image quality metric indices of images included in the same video sequence. Alternatively, other video quality assessment methods may be employed for determining the video quality metric index of each video sequence. - Based on image capture quality metric indices of the image capture outputs S_OUTR and S_OUTL, a specific image capture output S_OUT generated from the multi-view
image capture module 502 may be saved as a file in the storage device 506 (e.g., a non-volatile memory), and then outputted by the image capture device 500 to a display device (e.g., a 2D display screen) 516 or a network (e.g., a social network) 518. For example, the specific image capture output S_OUT may be used for further processing such as face recognition or image enhancement. - In an embodiment of the present invention, when an image capture quality metric index is assigned a larger value, it means that the image/video quality is better. Hence, based on comparison of the image capture quality metric indices of the image capture outputs, the
controller 504 knows which one of the image capture outputs has the best image/video quality. - In a case where each of the image capture outputs S_OUTR, S_OUTL is a single image, the
controller 504 refers to the image capture quality metric indices of the image capture outputs S_OUTR, S_OUTL to directly select one of the image capture outputs S_OUTR, S_OUTL as the specific image capture output S_OUT. Please refer to FIG. 6, which is an example illustrating an operation of obtaining the specific image capture output S_OUT according to an embodiment of the present invention. As shown in FIG. 6, the left-view image S_OUTL has a blurry face region, and the right-view image S_OUTR is clear. Hence, the image quality metric index of the right-view image S_OUTR is larger than that of the left-view image S_OUTL, which implies that the right-view image S_OUTR has better image quality due to a stable face region in this example. Based on the comparison result of the image quality metric indices of the right-view image S_OUTR and the left-view image S_OUTL, the controller 504 directly selects the right-view image S_OUTR as the specific image S_OUT to be stored and outputted. - Alternatively, the
controller 504 may refer to the image capture quality metric indices of the image capture outputs S_OUTR, S_OUTL to control the multi-view image capture module 502 to generate a new image capture output S_OUTN corresponding to a selected viewing angle as the specific image capture output S_OUT. Please refer to FIG. 7, which is an example illustrating an operation of obtaining the specific image capture output S_OUT according to another embodiment of the present invention. As shown in FIG. 7, the left-view image S_OUTL has a blurry face region, and the right-view image S_OUTR is clear. Hence, the image quality metric index of the right-view image S_OUTR is larger than that of the left-view image S_OUTL, which implies that the right-view image S_OUTR has better image quality due to a stable face region in this example. Based on the comparison result of the image quality metric indices of the right-view image S_OUTR and the left-view image S_OUTL, the controller 504 selects the camera unit 512 such that a new captured image S_OUTN corresponding to a selected viewing angle is generated as the specific image S_OUT to be stored and outputted. - In another case where each of the image capture outputs S_OUTR, S_OUTL is a video sequence composed of consecutive images, the
controller 504 refers to the image capture quality metric indices of the image capture outputs S_OUTR, S_OUTL to directly select one of the image capture outputs S_OUTR, S_OUTL as the specific image capture output S_OUT. Please refer to FIG. 8, which is an example illustrating an operation of obtaining the specific image capture output S_OUT according to yet another embodiment of the present invention. As shown in FIG. 8, the left-view video sequence S_OUTL includes two images each having a blurry face region, whereas each image included in the right-view video sequence S_OUTR is clear. Hence, the video quality metric index of the right-view video sequence S_OUTR is larger than that of the left-view video sequence S_OUTL, which implies that the right-view video sequence S_OUTR has better video quality due to a stable face region in this example. Based on the comparison result of the video quality metric indices of the right-view video sequence S_OUTR and the left-view video sequence S_OUTL, the controller 504 directly selects the right-view video sequence S_OUTR as the specific video sequence S_OUT to be stored and outputted. - Please refer to
FIG. 5 in conjunction with FIG. 9. FIG. 9 is a flowchart illustrating an image capture method according to another embodiment of the present invention. The image capture method may be employed by the image capture device 500. Provided that the result is substantially the same, the steps are not required to be executed in the exact order shown in FIG. 9. The image capture method may be briefly summarized by the following steps. - Step 900: Start.
- Step 902: Check if the shutter/
capture button 508 is touched/pressed. If yes, go to step 904; otherwise, perform step 902 again. - Step 904: Utilize the multi-view
image capture module 502 for simultaneously generating a plurality of image capture outputs respectively corresponding to a plurality of different viewing angles. For example, each of the image capture outputs is a single image when the shutter/capture button 508 is touched/pressed to enable a photo mode. For another example, each of the image capture outputs is a video sequence when the shutter/capture button 508 is touched/pressed to enable a video recording mode. - Step 906: Utilize the
controller 504 for calculating an image capture quality metric index for each of the image capture outputs. For example, the image capture quality metric index may be derived from face detection information, auto-focus information, inherent image characteristic information, and/or EIS information. - Step 908: Utilize the
controller 504 to compare image capture quality metric indices of the image capture outputs. - Step 910: Utilize the
controller 504 to decide which one of the image capture outputs has the best image/video quality based on the comparison result. - Step 912: Output a specific image capture output generated from the multi-view
image capture module 502 according to an image capture output identified in step 910 to have the best image/video quality. For example, the image capture output which is identified in step 910 to have the best image/video quality is directly selected as the specific image capture output. For another example, a camera unit used for generating the image capture output which is identified in step 910 to have the best image/video quality is selected to capture a new image capture output as the specific image capture output. - Step 914: End.
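Steps 906-910 amount to computing an index per image capture output and taking the maximum; for video sequences, a per-sequence index can be derived by processing per-image indices as described earlier. The function names and the choice of averaging are illustrative assumptions.

```python
def video_quality_metric_index(frame_indices):
    """Derive a video quality metric index by averaging the image quality
    metric indices of the frames in one video sequence."""
    return sum(frame_indices) / len(frame_indices)

def select_best_view(outputs, quality_index):
    """Compute an image capture quality metric index for each image
    capture output (step 906), compare the indices (step 908), and
    return the view whose output has the best quality (step 910)."""
    indices = {view: quality_index(frames) for view, frames in outputs.items()}
    return max(indices, key=indices.get)
```

The returned view name can then drive step 912 either way: pick that output directly, or re-trigger its camera unit for a new capture.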
- As a person skilled in the art can readily understand details of each step shown in
FIG. 9 after reading above paragraphs directed to the image capture device 500 shown in FIG. 5, further description is omitted here for brevity. - As mentioned above, the first image capture method shown in FIG. 3/
FIG. 4 is to obtain a captured image based on consecutive preview images generated by a single camera unit in a temporal domain, and the second image capture method shown in FIG. 9 is to obtain a specific image capture output based on multiple image capture outputs (e.g., multi-view images or multi-view video sequences) generated by different camera units in a spatial domain. However, combining technical features of the first image capture method shown in FIG. 3/FIG. 4 and the second image capture method shown in FIG. 9 to obtain a captured image based on multiple image capture outputs generated by different camera units under a temporal-spatial domain is feasible. Please refer to FIG. 8 again. In an alternative design, the controller 504 may be adequately modified to perform the second image capture method shown in FIG. 9 to select the image capture output S_OUT from multiple image capture outputs S_OUTL and S_OUTR, and then perform the first image capture method upon images included in the selected image capture output S_OUT to obtain a captured image for the automatic shot mode. In other words, the modified controller 504 is configured to treat the images included in the image capture output S_OUT selected based on the second image capture method as the aforementioned consecutive preview images to be processed by the first image capture method. This also falls within the scope of the present invention. - Those skilled in the art will readily observe that numerous modifications and alterations of the device and method may be made while retaining the teachings of the invention. Accordingly, the above disclosure should be construed as limited only by the metes and bounds of the appended claims.
Claims (34)
1. An image capture device comprising:
an image capture module, arranged for capturing a plurality of consecutive preview images under an automatic shot mode; and
a controller, arranged for analyzing the consecutive preview images to identify an image capture quality metric index, and determining if a target image capture condition is met by referring to at least the image capture quality metric index;
wherein a captured image for the automatic shot mode is stored when the controller determines that the target image capture condition is met.
2. The image capture device of claim 1, wherein the controller analyzes each of the consecutive preview images to obtain inherent image characteristic information, where the inherent image characteristic information includes at least one of sharpness, blur, brightness, contrast, and color; and the controller determines the image capture quality metric index according to at least the inherent image characteristic information.
3. The image capture device of claim 1, wherein the controller identifies the image capture quality metric index by at least performing a stable estimation for each preview image of the consecutive preview images.
4. The image capture device of claim 1, wherein the controller identifies the image capture quality metric index by at least performing a blur value estimation for each preview image of the consecutive preview images.
5. The image capture device of claim 1, wherein the controller identifies the image capture quality metric index by analyzing at least a portion of each preview image of the consecutive preview images.
6. The image capture device of claim 5, wherein the controller performs face detection upon each preview image to determine a face region to act as at least the portion of each preview image.
7. The image capture device of claim 1, wherein the controller further receives a sensor input which is indicative of a movement status associated with the image capture module; and the controller determines if the target image capture condition is met by referring to the image capture quality metric index and the movement status.
8. The image capture device of claim 1, wherein when the target image capture condition is met, the controller directly selects one of the consecutive preview images as the captured image.
9. The image capture device of claim 1, wherein after the target image capture condition is met, the controller controls the image capture module to capture a new image as the captured image.
10. An image capture method comprising:
capturing a plurality of consecutive preview images under an automatic shot mode;
analyzing the consecutive preview images to identify an image capture quality metric index;
determining if a target image capture condition is met by referring to at least the image capture quality metric index; and
when the target image capture condition is met, storing a captured image for the automatic shot mode.
11. The image capture method of claim 10, wherein the step of identifying the image capture quality metric index comprises:
analyzing each of the consecutive preview images to obtain inherent image characteristic information, where the inherent image characteristic information includes at least one of sharpness, blur, brightness, contrast, and color; and
determining the image capture quality metric index according to at least the inherent image characteristic information.
12. The image capture method of claim 10, wherein the image capture quality metric index is identified by at least performing a stable movement estimation for each preview image of the consecutive preview images.
13. The image capture method of claim 10, wherein the image capture quality metric index is identified by at least performing a blur value estimation for each preview image of the consecutive preview images.
14. The image capture method of claim 10, wherein the image capture quality metric index is identified by analyzing at least a portion of each preview image of the consecutive preview images.
15. The image capture method of claim 14, wherein face detection is performed upon each preview image to determine a face region to act as at least the portion of each preview image.
16. The image capture method of claim 10, further comprising:
receiving a sensor input which is indicative of a movement status associated with an image capture module which generates the consecutive preview images;
wherein the step of determining if the target image capture condition is met comprises:
determining if the target image capture condition is met by referring to the image capture quality metric index and the movement status.
17. The image capture method of claim 10, further comprising:
after the target image capture condition is met, directly selecting one of the consecutive preview images as the captured image.
18. The image capture method of claim 10, further comprising:
when the target image capture condition is met, capturing a new image as the captured image.
19. An image capture device comprising:
a multi-view image capture module, arranged for simultaneously generating a plurality of image capture outputs respectively corresponding to a plurality of different viewing angles; and
a controller, arranged for calculating an image capture quality metric index for each of the image capture outputs;
wherein a specific image capture output generated from the multi-view image capture module is outputted by the image capture device according to a plurality of image capture quality metric indices of the image capture outputs.
20. The image capture device of claim 19, wherein each of the image capture outputs is a single image or a video sequence.
21. The image capture device of claim 19, wherein the controller refers to the image capture quality metric indices of the image capture outputs to directly select one of the image capture outputs as the specific image capture output.
22. The image capture device of claim 19, wherein the controller refers to the image capture quality metric indices of the image capture outputs to control the multi-view image capture module to generate a new image capture output corresponding to a selected viewing angle as the specific image capture output.
23. The image capture device of claim 19, wherein the controller performs face detection upon each image capture output to obtain face detection information, and determines the image capture quality metric index according to at least the face detection information.
24. The image capture device of claim 23, wherein the face detection information includes at least one of a face angle, a face number, a face size, a face position, a face symmetry, an eye number, and an eye blink status.
25. The image capture device of claim 19, wherein the controller receives auto-focus information of each image capture output from the multi-view image capture module, and determines the image capture quality metric index according to at least the auto-focus information.
26. The image capture device of claim 19, wherein the controller analyzes each of the image capture outputs to obtain inherent image characteristic information, where the inherent image characteristic information includes at least one of sharpness, blur, brightness, contrast, and color; and the controller determines the image capture quality metric index according to at least the inherent image characteristic information.
27. An image capture method comprising:
utilizing a multi-view image capture module for simultaneously generating a plurality of image capture outputs respectively corresponding to a plurality of different viewing angles;
calculating an image capture quality metric index for each of the image capture outputs; and
outputting a specific image capture output generated from the multi-view image capture module according to a plurality of image capture quality metric indices of the image capture outputs.
28. The image capture method of claim 27, wherein each of the image capture outputs is a single image or a video sequence.
29. The image capture method of claim 27, wherein the step of outputting the specific image capture output comprises:
referring to the image capture quality metric indices of the image capture outputs to directly select one of the image capture outputs as the specific image capture output.
30. The image capture method of claim 27, wherein the step of outputting the specific image capture output comprises:
referring to the image capture quality metric indices of the image capture outputs to control the multi-view image capture module to generate a new image capture output corresponding to a selected viewing angle as the specific image capture output.
31. The image capture method of claim 27, wherein the step of calculating the image capture quality metric index comprises:
performing face detection upon each image capture output to obtain face detection information; and
determining the image capture quality metric index according to at least the face detection information.
32. The image capture method of claim 31, wherein the face detection information includes at least one of a face angle, a face number, a face size, a face position, a face symmetry, an eye number, and an eye blink status.
33. The image capture method of claim 27, wherein the step of calculating the image capture quality metric index comprises:
receiving auto-focus information of each image capture output from the multi-view image capture module; and
determining the image capture quality metric index according to at least the auto-focus information.
34. The image capture method of claim 27, wherein the step of calculating the image capture quality metric index comprises:
analyzing each of the image capture outputs to obtain inherent image characteristic information, where the inherent image characteristic information includes at least one of sharpness, blur, brightness, contrast, and color; and
determining the image capture quality metric index according to at least the inherent image characteristic information.
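Read together, independent claim 10 and dependent claims 12, 16, and 17 describe a loop over consecutive preview images that can be sketched as follows. This is a hypothetical illustration only: the contrast-based stand-in metric, both thresholds, and the sensor movement values are assumptions, not details taken from the claims.

```python
import numpy as np

def automatic_shot(preview_frames, movement_statuses, q_threshold, m_threshold):
    """Sketch of claims 10/16/17: analyze consecutive preview images, and
    treat the target image capture condition as met when the quality metric
    index is high enough and the sensed movement is low enough; the matching
    preview image is then directly kept as the captured image."""
    for frame, movement in zip(preview_frames, movement_statuses):
        # Stand-in quality metric index: global contrast (standard deviation).
        quality = float(np.asarray(frame, dtype=float).std())
        if quality >= q_threshold and movement <= m_threshold:
            return frame  # target image capture condition met
    return None  # condition never met within the preview stream
```

A frame that satisfies the quality threshold can still be rejected when the movement status from the sensor input is too high, which is the behavior claim 16 adds on top of claim 10.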
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/890,254 US20130314511A1 (en) | 2012-05-24 | 2013-05-09 | Image capture device controlled according to image capture quality and related image capture method thereof |
CN201310196539.4A CN103428428B (en) | 2012-05-24 | 2013-05-24 | Image capturing device and image catching method |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201261651499P | 2012-05-24 | 2012-05-24 | |
US13/890,254 US20130314511A1 (en) | 2012-05-24 | 2013-05-09 | Image capture device controlled according to image capture quality and related image capture method thereof |
Publications (1)
Publication Number | Publication Date |
---|---|
US20130314511A1 (en) | 2013-11-28 |
Family
ID=49621289
Family Applications (6)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/868,092 Abandoned US20130314558A1 (en) | 2012-05-24 | 2013-04-22 | Image capture device for starting specific action in advance when determining that specific action is about to be triggered and related image capture method thereof |
US13/868,072 Active US9503645B2 (en) | 2012-05-24 | 2013-04-22 | Preview system for concurrently displaying multiple preview images generated based on input image generated by image capture apparatus and related preview method thereof |
US13/890,254 Abandoned US20130314511A1 (en) | 2012-05-24 | 2013-05-09 | Image capture device controlled according to image capture quality and related image capture method thereof |
US13/891,201 Active 2033-07-20 US9066013B2 (en) | 2012-05-24 | 2013-05-10 | Content-adaptive image resizing method and related apparatus thereof |
US13/891,196 Active US9560276B2 (en) | 2012-05-24 | 2013-05-10 | Video recording method of recording output video sequence for image capture module and related video recording apparatus thereof |
US15/296,002 Active US9681055B2 (en) | 2012-05-24 | 2016-10-17 | Preview system for concurrently displaying multiple preview images generated based on input image generated by image capture apparatus and related preview method thereof |
Family Applications Before (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/868,092 Abandoned US20130314558A1 (en) | 2012-05-24 | 2013-04-22 | Image capture device for starting specific action in advance when determining that specific action is about to be triggered and related image capture method thereof |
US13/868,072 Active US9503645B2 (en) | 2012-05-24 | 2013-04-22 | Preview system for concurrently displaying multiple preview images generated based on input image generated by image capture apparatus and related preview method thereof |
Family Applications After (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/891,201 Active 2033-07-20 US9066013B2 (en) | 2012-05-24 | 2013-05-10 | Content-adaptive image resizing method and related apparatus thereof |
US13/891,196 Active US9560276B2 (en) | 2012-05-24 | 2013-05-10 | Video recording method of recording output video sequence for image capture module and related video recording apparatus thereof |
US15/296,002 Active US9681055B2 (en) | 2012-05-24 | 2016-10-17 | Preview system for concurrently displaying multiple preview images generated based on input image generated by image capture apparatus and related preview method thereof |
Country Status (2)
Country | Link |
---|---|
US (6) | US20130314558A1 (en) |
CN (5) | CN103428423A (en) |
Cited By (31)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103916602A (en) * | 2014-04-17 | 2014-07-09 | 深圳市中兴移动通信有限公司 | Method, first mobile terminal and system for control over remote shooting |
CN104200189A (en) * | 2014-08-27 | 2014-12-10 | 苏州佳世达电通有限公司 | Barcode scanning device and processing method thereof |
US20150015762A1 (en) * | 2013-07-12 | 2015-01-15 | Samsung Electronics Co., Ltd. | Apparatus and method for generating photograph image in electronic device |
US20150156419A1 (en) * | 2013-12-02 | 2015-06-04 | Yahoo! Inc. | Blur aware photo feedback |
US9185285B2 (en) * | 2010-11-24 | 2015-11-10 | Samsung Electronics Co., Ltd. | Method and apparatus for acquiring pre-captured picture of an object to be captured and a captured position of the same |
WO2015179021A1 (en) * | 2014-05-21 | 2015-11-26 | Google Technology Holdings LLC | Enhanced image capture |
WO2015179023A1 (en) * | 2014-05-21 | 2015-11-26 | Google Technology Holdings LLC | Enhanced image capture |
WO2016009199A3 (en) * | 2014-07-18 | 2016-03-10 | Omg Plc | Minimisation of blur in still image capture |
US20160094781A1 (en) * | 2013-06-06 | 2016-03-31 | Fujifilm Corporation | Auto-focus device and method for controlling operation of same |
US20160093028A1 (en) * | 2014-09-25 | 2016-03-31 | Lenovo (Beijing) Co., Ltd. | Image processing method, image processing apparatus and electronic device |
US20160105602A1 (en) * | 2014-10-14 | 2016-04-14 | Nokia Technologies Oy | Method, apparatus and computer program for automatically capturing an image |
US9357127B2 (en) | 2014-03-18 | 2016-05-31 | Google Technology Holdings LLC | System for auto-HDR capture decision making |
US9392322B2 (en) | 2012-05-10 | 2016-07-12 | Google Technology Holdings LLC | Method of visually synchronizing differing camera feeds with common subject |
US9413947B2 (en) | 2014-07-31 | 2016-08-09 | Google Technology Holdings LLC | Capturing images of active subjects according to activity profiles |
US9571727B2 (en) | 2014-05-21 | 2017-02-14 | Google Technology Holdings LLC | Enhanced image capture |
US9582716B2 (en) * | 2013-09-09 | 2017-02-28 | Delta ID Inc. | Apparatuses and methods for iris based biometric recognition |
FR3043233A1 (en) * | 2015-10-30 | 2017-05-05 | Merry Pixel | METHOD OF AUTOMATICALLY SELECTING IMAGES FROM A MOBILE DEVICE |
US9654700B2 (en) | 2014-09-16 | 2017-05-16 | Google Technology Holdings LLC | Computational camera using fusion of image sensors |
US20170169266A1 (en) * | 2015-12-14 | 2017-06-15 | Leadot Innovation, Inc. | Method of Controlling Operation of Cataloged Smart Devices |
US20170180646A1 (en) * | 2015-12-17 | 2017-06-22 | Lg Electronics Inc. | Mobile terminal and method for controlling the same |
WO2017123545A1 (en) * | 2016-01-12 | 2017-07-20 | Echostar Technologies L.L.C | Detection and marking of low quality video content |
US9813611B2 (en) | 2014-05-21 | 2017-11-07 | Google Technology Holdings LLC | Enhanced image capture |
EP3288252A1 (en) * | 2016-08-25 | 2018-02-28 | LG Electronics Inc. | Terminal and controlling method thereof |
CN107809590A (en) * | 2017-11-08 | 2018-03-16 | 青岛海信移动通信技术股份有限公司 | A kind of photographic method and device |
US9936143B2 (en) | 2007-10-31 | 2018-04-03 | Google Technology Holdings LLC | Imager module with electronic shutter |
US20180278839A1 (en) * | 2012-10-23 | 2018-09-27 | Snapaid Ltd. | Real time assessment of picture quality |
US20190116308A1 (en) * | 2017-10-12 | 2019-04-18 | Canon Kabushiki Kaisha | Image pick-up apparatus and control method thereof |
CN109792829A (en) * | 2016-10-11 | 2019-05-21 | 昕诺飞控股有限公司 | Control system, monitoring system and the method for controlling monitoring system of monitoring system |
US20200104601A1 (en) * | 2018-09-28 | 2020-04-02 | Opentv, Inc. | Systems and methods for generating media content |
EP3554070A4 (en) * | 2016-12-07 | 2020-06-17 | ZTE Corporation | Photograph-capture method, apparatus, terminal, and storage medium |
EP4086845A1 (en) * | 2021-05-07 | 2022-11-09 | Nokia Technologies Oy | Image processing |
Families Citing this family (64)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2013145531A1 (en) * | 2012-03-29 | 2013-10-03 | 日本電気株式会社 | Analysis system |
JP5880263B2 (en) * | 2012-05-02 | 2016-03-08 | ソニー株式会社 | Display control device, display control method, program, and recording medium |
US20130314558A1 (en) * | 2012-05-24 | 2013-11-28 | Mediatek Inc. | Image capture device for starting specific action in advance when determining that specific action is about to be triggered and related image capture method thereof |
KR101880636B1 (en) * | 2012-07-25 | 2018-07-20 | 삼성전자주식회사 | Digital photographing apparatus and method for controlling thereof |
US20140099994A1 (en) * | 2012-10-04 | 2014-04-10 | Nvidia Corporation | Electronic camera embodying a proximity sensor |
KR102104497B1 (en) * | 2012-11-12 | 2020-04-24 | 삼성전자주식회사 | Method and apparatus for displaying image |
US9282244B2 (en) | 2013-03-14 | 2016-03-08 | Microsoft Technology Licensing, Llc | Camera non-touch switch |
EP2975995B1 (en) * | 2013-03-20 | 2023-05-31 | Covidien LP | System for enhancing picture-in-picture display for imaging devices used for surgical procedures |
US9430045B2 (en) * | 2013-07-17 | 2016-08-30 | Lenovo (Singapore) Pte. Ltd. | Special gestures for camera control and image processing operations |
KR20150051085A (en) * | 2013-11-01 | 2015-05-11 | 삼성전자주식회사 | Method for obtaining high dynamic range image,Computer readable storage medium of recording the method and a digital photographing apparatus. |
CN104333689A (en) * | 2014-03-05 | 2015-02-04 | 广州三星通信技术研究有限公司 | Method and device for displaying preview image during shooting |
KR102105961B1 (en) | 2014-05-13 | 2020-05-28 | 엘지전자 주식회사 | Mobile terminal and method for controlling the same |
US9503644B2 (en) | 2014-05-22 | 2016-11-22 | Microsoft Technology Licensing, Llc | Using image properties for processing and editing of multiple resolution images |
US9451178B2 (en) | 2014-05-22 | 2016-09-20 | Microsoft Technology Licensing, Llc | Automatic insertion of video into a photo story |
US11184580B2 (en) | 2014-05-22 | 2021-11-23 | Microsoft Technology Licensing, Llc | Automatically curating video to fit display time |
KR102189647B1 (en) * | 2014-09-02 | 2020-12-11 | 삼성전자주식회사 | Display apparatus, system and controlling method thereof |
KR20160029536A (en) * | 2014-09-05 | 2016-03-15 | 엘지전자 주식회사 | Mobile terminal and control method for the mobile terminal |
KR102252448B1 (en) * | 2014-09-12 | 2021-05-14 | 삼성전자주식회사 | Method for controlling and an electronic device thereof |
KR102290287B1 (en) * | 2014-10-24 | 2021-08-17 | 삼성전자주식회사 | IMAGE SENSOR GENERATING IMAGE SIGNAL AND PROXIMITY SIGNAL simultaneously |
CN104581379A (en) * | 2014-12-31 | 2015-04-29 | 乐视网信息技术(北京)股份有限公司 | Video preview image selecting method and device |
TWI565317B (en) * | 2015-01-06 | 2017-01-01 | 緯創資通股份有限公司 | Image processing method and mobile electronic device |
US10277888B2 (en) * | 2015-01-16 | 2019-04-30 | Qualcomm Incorporated | Depth triggered event feature |
CN105872352A (en) * | 2015-12-08 | 2016-08-17 | 乐视移动智能信息技术(北京)有限公司 | Method and device for shooting picture |
CN107800950B (en) * | 2016-09-06 | 2020-07-31 | 东友科技股份有限公司 | Image acquisition method |
CN108713318A (en) * | 2016-10-31 | 2018-10-26 | 华为技术有限公司 | A kind of processing method and equipment of video frame |
CN106453962B (en) * | 2016-11-30 | 2020-01-21 | 珠海市魅族科技有限公司 | Camera shooting control method of double-screen intelligent terminal |
US20180227502A1 (en) * | 2017-02-06 | 2018-08-09 | Qualcomm Incorporated | Systems and methods for reduced power consumption in imaging pipelines |
CN106937045B (en) | 2017-02-23 | 2020-08-14 | 华为机器有限公司 | Display method of preview image, terminal equipment and computer storage medium |
WO2018166069A1 (en) * | 2017-03-14 | 2018-09-20 | 华为技术有限公司 | Photographing preview method, graphical user interface, and terminal |
EP3668077B1 (en) * | 2017-08-09 | 2023-08-02 | FUJIFILM Corporation | Image processing system, server device, image processing method, and image processing program |
CN107731020B (en) * | 2017-11-07 | 2020-05-12 | Oppo广东移动通信有限公司 | Multimedia playing method, device, storage medium and electronic equipment |
AU2018415667B2 (en) * | 2018-03-26 | 2022-05-19 | Beijing Kunshi Intellectual Property Management Co., Ltd. | Video recording method and electronic device |
CN110086905B (en) * | 2018-03-26 | 2020-08-21 | 华为技术有限公司 | Video recording method and electronic equipment |
US10861148B2 (en) * | 2018-04-30 | 2020-12-08 | General Electric Company | Systems and methods for improved component inspection |
CN109005337B (en) * | 2018-07-05 | 2021-08-24 | 维沃移动通信有限公司 | Photographing method and terminal |
CN108600647A (en) * | 2018-07-24 | 2018-09-28 | 努比亚技术有限公司 | Shooting preview method, mobile terminal and storage medium |
CN109257538A (en) * | 2018-09-10 | 2019-01-22 | Oppo(重庆)智能科技有限公司 | Camera control method and relevant apparatus |
KR102637732B1 (en) * | 2018-09-21 | 2024-02-19 | 삼성전자주식회사 | Image signal processor, method of operating the image signal processor, and application processor including the image signal processor |
WO2020072267A1 (en) | 2018-10-05 | 2020-04-09 | Google Llc | Scale-down capture preview for a panorama capture user interface |
CN109194839B (en) * | 2018-10-30 | 2020-10-23 | 维沃移动通信(杭州)有限公司 | Display control method, terminal and computer readable storage medium |
CN109830077A (en) * | 2019-01-15 | 2019-05-31 | 苏州佳世达光电有限公司 | Monitoring device and monitoring method |
CN109922271A (en) * | 2019-04-18 | 2019-06-21 | 珠海格力电器股份有限公司 | Mobile terminal based on folding screen and photographing method thereof |
US10812771B1 (en) * | 2019-06-12 | 2020-10-20 | At&T Intellectual Property I, L.P. | Methods, systems, and devices for adjusting image content for streaming panoramic video content |
JP7210388B2 (en) * | 2019-06-25 | 2023-01-23 | キヤノン株式会社 | IMAGE PROCESSING DEVICE, IMAGING DEVICE, CONTROL METHOD AND PROGRAM FOR IMAGE PROCESSING DEVICE |
KR102665968B1 (en) * | 2019-06-27 | 2024-05-16 | 삼성전자주식회사 | Method and apparatus for blur estimation |
US10984513B1 (en) | 2019-09-30 | 2021-04-20 | Google Llc | Automatic generation of all-in-focus images with a mobile camera |
US12046072B2 (en) | 2019-10-10 | 2024-07-23 | Google Llc | Camera synchronization and image tagging for face authentication |
US11032486B2 (en) | 2019-10-11 | 2021-06-08 | Google Llc | Reducing a flicker effect of multiple light sources in an image |
CN110896451B (en) * | 2019-11-20 | 2022-01-28 | 维沃移动通信有限公司 | Preview picture display method, electronic device and computer readable storage medium |
CN111212235B (en) | 2020-01-23 | 2021-11-19 | 华为技术有限公司 | Long-focus shooting method and electronic equipment |
US11109741B1 (en) | 2020-02-21 | 2021-09-07 | Ambu A/S | Video processing apparatus |
US10835106B1 (en) | 2020-02-21 | 2020-11-17 | Ambu A/S | Portable monitor |
US11166622B2 (en) | 2020-02-21 | 2021-11-09 | Ambu A/S | Video processing apparatus |
US10980397B1 (en) * | 2020-02-21 | 2021-04-20 | Ambu A/S | Video processing device |
US11190689B1 (en) | 2020-07-29 | 2021-11-30 | Google Llc | Multi-camera video stabilization |
CN114125344B (en) * | 2020-08-31 | 2023-06-23 | 京东方科技集团股份有限公司 | Video processing apparatus, video processing method, monitor device, computer device, and medium |
CN114205515B (en) * | 2020-09-18 | 2023-04-07 | 荣耀终端有限公司 | Anti-shake processing method for video and electronic equipment |
FR3118380B1 (en) * | 2020-12-22 | 2024-08-30 | Fond B Com | Method for encoding images of a video sequence to be encoded, decoding method, corresponding devices and system. |
CN112954193B (en) * | 2021-01-27 | 2023-02-10 | 维沃移动通信有限公司 | Shooting method, shooting device, electronic equipment and medium |
US11533427B2 (en) | 2021-03-22 | 2022-12-20 | International Business Machines Corporation | Multimedia quality evaluation |
US11483472B2 (en) * | 2021-03-22 | 2022-10-25 | International Business Machines Corporation | Enhancing quality of multimedia |
US11716531B2 (en) | 2021-03-22 | 2023-08-01 | International Business Machines Corporation | Quality of multimedia |
EP4115789B1 (en) | 2021-07-08 | 2023-12-20 | Ambu A/S | Endoscope image processing device |
CN114422713B (en) * | 2022-03-29 | 2022-06-24 | 湖南航天捷诚电子装备有限责任公司 | Image acquisition and intelligent interpretation processing device and method |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090284585A1 (en) * | 2008-05-15 | 2009-11-19 | Industrial Technology Research Institute | Intelligent multi-view display system and method thereof |
US20110311147A1 (en) * | 2009-02-12 | 2011-12-22 | Dolby Laboratories Licensing Corporation | Quality Evaluation of Sequences of Images |
US20130002814A1 (en) * | 2011-06-30 | 2013-01-03 | Minwoo Park | Method for automatically improving stereo images |
US20130194395A1 (en) * | 2011-06-28 | 2013-08-01 | Nokia Corporation | Method, A System, A Viewing Device and a Computer Program for Picture Rendering |
Family Cites Families (52)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH0754966B2 (en) | 1985-12-09 | 1995-06-07 | 株式会社日立製作所 | Contour correction circuit |
US6075926A (en) | 1997-04-21 | 2000-06-13 | Hewlett-Packard Company | Computerized method for improving data resolution |
JP3564031B2 (en) | 1999-03-16 | 2004-09-08 | オリンパス株式会社 | Electronic still camera |
JP2000278710A (en) * | 1999-03-26 | 2000-10-06 | Ricoh Co Ltd | Device for evaluating binocular stereoscopic vision picture |
JP2001211351A (en) * | 2000-01-27 | 2001-08-03 | Fuji Photo Film Co Ltd | Image pickup device and its operation control method |
US20020056083A1 (en) * | 2000-03-29 | 2002-05-09 | Istvan Anthony F. | System and method for picture-in-browser scaling |
GB0125774D0 (en) * | 2001-10-26 | 2001-12-19 | Cableform Ltd | Method and apparatus for image matching |
JP4198449B2 (en) * | 2002-02-22 | 2008-12-17 | 富士フイルム株式会社 | Digital camera |
JP2004080252A (en) | 2002-08-14 | 2004-03-11 | Toshiba Corp | Video display unit and its method |
US7269300B2 (en) | 2003-10-24 | 2007-09-11 | Eastman Kodak Company | Sharpening a digital image in accordance with magnification values |
CN101959021B (en) * | 2004-05-13 | 2015-02-11 | 索尼株式会社 | Image display apparatus and image display method |
US7545391B2 (en) | 2004-07-30 | 2009-06-09 | Algolith Inc. | Content adaptive resizer |
US7711211B2 (en) | 2005-06-08 | 2010-05-04 | Xerox Corporation | Method for assembling a collection of digital images |
US8045047B2 (en) * | 2005-06-23 | 2011-10-25 | Nokia Corporation | Method and apparatus for digital image processing of an image having different scaling rates |
US7448753B1 (en) | 2005-07-19 | 2008-11-11 | Chinnock Randal B | Portable Digital Medical Camera for Capturing Images of the Retina or the External Auditory Canal, and Methods of Use |
WO2007052572A1 (en) * | 2005-11-02 | 2007-05-10 | Olympus Corporation | Electronic camera |
JP4956988B2 (en) * | 2005-12-19 | 2012-06-20 | カシオ計算機株式会社 | Imaging device |
US20070283269A1 (en) * | 2006-05-31 | 2007-12-06 | Pere Obrador | Method and system for onboard camera video editing |
JP4904108B2 (en) * | 2006-07-25 | 2012-03-28 | 富士フイルム株式会社 | Imaging apparatus and image display control method |
JP4218720B2 (en) | 2006-09-22 | 2009-02-04 | ソニー株式会社 | IMAGING DEVICE, IMAGING DEVICE CONTROL METHOD, AND COMPUTER PROGRAM |
JP2008096868A (en) * | 2006-10-16 | 2008-04-24 | Sony Corp | Imaging display device, and imaging display method |
US7948526B2 (en) | 2006-11-14 | 2011-05-24 | Casio Computer Co., Ltd. | Imaging apparatus, imaging method and program thereof |
US8615112B2 (en) * | 2007-03-30 | 2013-12-24 | Casio Computer Co., Ltd. | Image pickup apparatus equipped with face-recognition function |
JP4139430B1 (en) | 2007-04-27 | 2008-08-27 | シャープ株式会社 | Image processing apparatus and method, image display apparatus and method |
JP2008306236A (en) | 2007-06-05 | 2008-12-18 | Sony Corp | Image display device, image display method, program of image display method, and recording medium where program of image display method is recorded |
US20080304568A1 (en) * | 2007-06-11 | 2008-12-11 | Himax Technologies Limited | Method for motion-compensated frame rate up-conversion |
JP5053731B2 (en) * | 2007-07-03 | 2012-10-17 | キヤノン株式会社 | Image display control device, image display control method, program, and recording medium |
JP4999649B2 (en) * | 2007-11-09 | 2012-08-15 | キヤノン株式会社 | Display device |
JP5003529B2 (en) | 2008-02-25 | 2012-08-15 | 株式会社ニコン | Imaging apparatus and object detection method |
CN101266650A (en) | 2008-03-31 | 2008-09-17 | 北京中星微电子有限公司 | An image storage method based on face detection |
JP4543105B2 (en) | 2008-08-08 | 2010-09-15 | 株式会社東芝 | Information reproduction apparatus and reproduction control method |
EP2207342B1 (en) * | 2009-01-07 | 2017-12-06 | LG Electronics Inc. | Mobile terminal and camera image control method thereof |
JP5294922B2 (en) | 2009-02-26 | 2013-09-18 | キヤノン株式会社 | Playback apparatus and playback method |
JP2011045039A (en) * | 2009-07-21 | 2011-03-03 | Fujifilm Corp | Compound-eye imaging apparatus |
US8373802B1 (en) | 2009-09-01 | 2013-02-12 | Disney Enterprises, Inc. | Art-directable retargeting for streaming video |
US20110084962A1 (en) | 2009-10-12 | 2011-04-14 | Jong Hwan Kim | Mobile terminal and image processing method therein |
JP5116754B2 (en) | 2009-12-10 | 2013-01-09 | シャープ株式会社 | Optical detection device and electronic apparatus |
US8294748B2 (en) | 2009-12-11 | 2012-10-23 | DigitalOptics Corporation Europe Limited | Panorama imaging using a blending map |
US20110149029A1 (en) | 2009-12-17 | 2011-06-23 | Marcus Kellerman | Method and system for pulldown processing for 3d video |
JP5218388B2 (en) * | 2009-12-25 | 2013-06-26 | カシオ計算機株式会社 | IMAGING DEVICE, IMAGING DEVICE CONTROL METHOD, AND PROGRAM |
CN102118560A (en) | 2009-12-30 | 2011-07-06 | 深圳富泰宏精密工业有限公司 | Photographic system and method |
US20110301980A1 (en) | 2010-06-03 | 2011-12-08 | Siemens Medical Solutions Usa, Inc. | Automated Medical Image Storage System |
JP5569206B2 (en) | 2010-07-15 | 2014-08-13 | ソニー株式会社 | Image processing apparatus and method |
US20120019677A1 (en) | 2010-07-26 | 2012-01-26 | Nethra Imaging Inc. | Image stabilization in a digital camera |
CN102457673A (en) * | 2010-10-26 | 2012-05-16 | 宏达国际电子股份有限公司 | Video capturing method and system for same |
JP5779959B2 (en) * | 2011-04-21 | 2015-09-16 | 株式会社リコー | Imaging device |
FR2978894A1 (en) * | 2011-08-02 | 2013-02-08 | St Microelectronics Grenoble 2 | METHOD FOR PREVIEWING IMAGE IN A DIGITAL VIEWING APPARATUS |
WO2013047483A1 (en) * | 2011-09-30 | 2013-04-04 | 富士フイルム株式会社 | Imaging device, imaging method, and program |
US9001255B2 (en) * | 2011-09-30 | 2015-04-07 | Olympus Imaging Corp. | Imaging apparatus, imaging method, and computer-readable storage medium for trimming and enlarging a portion of a subject image based on touch panel inputs |
US9269323B2 (en) | 2011-10-28 | 2016-02-23 | Microsoft Technology Licensing, Llc | Image layout for a display |
US8848068B2 (en) | 2012-05-08 | 2014-09-30 | Oulun Yliopisto | Automated recognition algorithm for detecting facial expressions |
US20130314558A1 (en) * | 2012-05-24 | 2013-11-28 | Mediatek Inc. | Image capture device for starting specific action in advance when determining that specific action is about to be triggered and related image capture method thereof |
2013
- 2013-04-22 US US13/868,092 patent/US20130314558A1/en not_active Abandoned
- 2013-04-22 US US13/868,072 patent/US9503645B2/en active Active
- 2013-05-09 US US13/890,254 patent/US20130314511A1/en not_active Abandoned
- 2013-05-10 US US13/891,201 patent/US9066013B2/en active Active
- 2013-05-10 US US13/891,196 patent/US9560276B2/en active Active
- 2013-05-17 CN CN2013101845833A patent/CN103428423A/en active Pending
- 2013-05-20 CN CN2013101858481A patent/CN103428425A/en active Pending
- 2013-05-24 CN CN201310196195.7A patent/CN103428427B/en active Active
- 2013-05-24 CN CN201310196539.4A patent/CN103428428B/en not_active Expired - Fee Related
- 2013-05-24 CN CN201310196545.XA patent/CN103428460B/en active Active
2016
- 2016-10-17 US US15/296,002 patent/US9681055B2/en active Active
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090284585A1 (en) * | 2008-05-15 | 2009-11-19 | Industrial Technology Research Institute | Intelligent multi-view display system and method thereof |
US20110311147A1 (en) * | 2009-02-12 | 2011-12-22 | Dolby Laboratories Licensing Corporation | Quality Evaluation of Sequences of Images |
US20130194395A1 (en) * | 2011-06-28 | 2013-08-01 | Nokia Corporation | Method, A System, A Viewing Device and a Computer Program for Picture Rendering |
US20130002814A1 (en) * | 2011-06-30 | 2013-01-03 | Minwoo Park | Method for automatically improving stereo images |
Cited By (59)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9936143B2 (en) | 2007-10-31 | 2018-04-03 | Google Technology Holdings LLC | Imager module with electronic shutter |
US9185285B2 (en) * | 2010-11-24 | 2015-11-10 | Samsung Electronics Co., Ltd. | Method and apparatus for acquiring pre-captured picture of an object to be captured and a captured position of the same |
US9392322B2 (en) | 2012-05-10 | 2016-07-12 | Google Technology Holdings LLC | Method of visually synchronizing differing camera feeds with common subject |
US10659682B2 (en) * | 2012-10-23 | 2020-05-19 | Snapaid Ltd. | Real time assessment of picture quality |
US20180278839A1 (en) * | 2012-10-23 | 2018-09-27 | Snapaid Ltd. | Real time assessment of picture quality |
US11252325B2 (en) * | 2012-10-23 | 2022-02-15 | Snapaid Ltd. | Real time assessment of picture quality |
US10944901B2 (en) * | 2012-10-23 | 2021-03-09 | Snapaid Ltd. | Real time assessment of picture quality |
US11671702B2 (en) | 2012-10-23 | 2023-06-06 | Snapaid Ltd. | Real time assessment of picture quality |
US9609204B2 (en) * | 2013-06-06 | 2017-03-28 | Fujifilm Corporation | Auto-focus device and method for controlling operation of same |
US20160094781A1 (en) * | 2013-06-06 | 2016-03-31 | Fujifilm Corporation | Auto-focus device and method for controlling operation of same |
US20150015762A1 (en) * | 2013-07-12 | 2015-01-15 | Samsung Electronics Co., Ltd. | Apparatus and method for generating photograph image in electronic device |
US9992418B2 (en) * | 2013-07-12 | 2018-06-05 | Samsung Electronics Co., Ltd | Apparatus and method for generating photograph image in electronic device |
US9582716B2 (en) * | 2013-09-09 | 2017-02-28 | Delta ID Inc. | Apparatuses and methods for iris based biometric recognition |
US20150156419A1 (en) * | 2013-12-02 | 2015-06-04 | Yahoo! Inc. | Blur aware photo feedback |
US9210327B2 (en) * | 2013-12-02 | 2015-12-08 | Yahoo! Inc. | Blur aware photo feedback |
US9357127B2 (en) | 2014-03-18 | 2016-05-31 | Google Technology Holdings LLC | System for auto-HDR capture decision making |
CN103916602A (en) * | 2014-04-17 | 2014-07-09 | 深圳市中兴移动通信有限公司 | Method, first mobile terminal and system for control over remote shooting |
US9813611B2 (en) | 2014-05-21 | 2017-11-07 | Google Technology Holdings LLC | Enhanced image capture |
US11943532B2 (en) | 2014-05-21 | 2024-03-26 | Google Technology Holdings LLC | Enhanced image capture |
US9628702B2 (en) | 2014-05-21 | 2017-04-18 | Google Technology Holdings LLC | Enhanced image capture |
US11019252B2 (en) | 2014-05-21 | 2021-05-25 | Google Technology Holdings LLC | Enhanced image capture |
WO2015179021A1 (en) * | 2014-05-21 | 2015-11-26 | Google Technology Holdings LLC | Enhanced image capture |
US11290639B2 (en) | 2014-05-21 | 2022-03-29 | Google Llc | Enhanced image capture |
US11575829B2 (en) | 2014-05-21 | 2023-02-07 | Google Llc | Enhanced image capture |
US10250799B2 (en) | 2014-05-21 | 2019-04-02 | Google Technology Holdings LLC | Enhanced image capture |
US9729784B2 (en) | 2014-05-21 | 2017-08-08 | Google Technology Holdings LLC | Enhanced image capture |
WO2015179023A1 (en) * | 2014-05-21 | 2015-11-26 | Google Technology Holdings LLC | Enhanced image capture |
US9774779B2 (en) | 2014-05-21 | 2017-09-26 | Google Technology Holdings LLC | Enhanced image capture |
US9571727B2 (en) | 2014-05-21 | 2017-02-14 | Google Technology Holdings LLC | Enhanced image capture |
WO2016009199A3 (en) * | 2014-07-18 | 2016-03-10 | Omg Plc | Minimisation of blur in still image capture |
US9413947B2 (en) | 2014-07-31 | 2016-08-09 | Google Technology Holdings LLC | Capturing images of active subjects according to activity profiles |
CN104200189A (en) * | 2014-08-27 | 2014-12-10 | 苏州佳世达电通有限公司 | Barcode scanning device and processing method thereof |
US9654700B2 (en) | 2014-09-16 | 2017-05-16 | Google Technology Holdings LLC | Computational camera using fusion of image sensors |
US20160093028A1 (en) * | 2014-09-25 | 2016-03-31 | Lenovo (Beijing) Co., Ltd. | Image processing method, image processing apparatus and electronic device |
US9613404B2 (en) * | 2014-09-25 | 2017-04-04 | Lenovo (Beijing) Co., Ltd. | Image processing method, image processing apparatus and electronic device |
US20160105602A1 (en) * | 2014-10-14 | 2016-04-14 | Nokia Technologies Oy | Method, apparatus and computer program for automatically capturing an image |
US9888169B2 (en) * | 2014-10-14 | 2018-02-06 | Nokia Technologies Oy | Method, apparatus and computer program for automatically capturing an image |
FR3043233A1 (en) * | 2015-10-30 | 2017-05-05 | Merry Pixel | METHOD OF AUTOMATICALLY SELECTING IMAGES FROM A MOBILE DEVICE |
US9881191B2 (en) * | 2015-12-14 | 2018-01-30 | Leadot Innovation, Inc. | Method of controlling operation of cataloged smart devices |
US20170169266A1 (en) * | 2015-12-14 | 2017-06-15 | Leadot Innovation, Inc. | Method of Controlling Operation of Cataloged Smart Devices |
US10015400B2 (en) * | 2015-12-17 | 2018-07-03 | Lg Electronics Inc. | Mobile terminal for capturing an image and associated image capturing method |
US20170180646A1 (en) * | 2015-12-17 | 2017-06-22 | Lg Electronics Inc. | Mobile terminal and method for controlling the same |
WO2017123545A1 (en) * | 2016-01-12 | 2017-07-20 | Echostar Technologies L.L.C | Detection and marking of low quality video content |
US10531077B2 (en) | 2016-01-12 | 2020-01-07 | DISH Technologies L.L.C. | Detection and marking of low quality video content |
US9743077B2 (en) | 2016-01-12 | 2017-08-22 | Sling Media LLC | Detection and marking of low quality video content |
EP3288252A1 (en) * | 2016-08-25 | 2018-02-28 | LG Electronics Inc. | Terminal and controlling method thereof |
US10375308B2 (en) | 2016-08-25 | 2019-08-06 | Lg Electronics Inc. | Terminal and controlling method thereof |
EP3527045B1 (en) | 2016-10-11 | 2020-12-30 | Signify Holding B.V. | Surveillance system and method of controlling a surveillance system |
CN109792829A (en) * | 2016-10-11 | 2019-05-21 | 昕诺飞控股有限公司 | Control system, monitoring system and the method for controlling monitoring system of monitoring system |
US10939035B2 (en) | 2016-12-07 | 2021-03-02 | Zte Corporation | Photograph-capture method, apparatus, terminal, and storage medium |
EP3554070A4 (en) * | 2016-12-07 | 2020-06-17 | ZTE Corporation | Photograph-capture method, apparatus, terminal, and storage medium |
US10567640B2 (en) * | 2017-10-12 | 2020-02-18 | Canon Kabushiki Kaisha | Image pick-up apparatus and control method thereof |
US20190116308A1 (en) * | 2017-10-12 | 2019-04-18 | Canon Kabushiki Kaisha | Image pick-up apparatus and control method thereof |
CN107809590A (en) * | 2017-11-08 | 2018-03-16 | 青岛海信移动通信技术股份有限公司 | A kind of photographic method and device |
US10872240B2 (en) * | 2018-09-28 | 2020-12-22 | Opentv, Inc. | Systems and methods for generating media content |
US20200104601A1 (en) * | 2018-09-28 | 2020-04-02 | Opentv, Inc. | Systems and methods for generating media content |
US11423653B2 (en) | 2018-09-28 | 2022-08-23 | Opentv, Inc. | Systems and methods for generating media content |
US11887369B2 (en) | 2018-09-28 | 2024-01-30 | Opentv, Inc. | Systems and methods for generating media content |
EP4086845A1 (en) * | 2021-05-07 | 2022-11-09 | Nokia Technologies Oy | Image processing |
Also Published As
Publication number | Publication date |
---|---|
CN103428425A (en) | 2013-12-04 |
CN103428427B (en) | 2016-06-08 |
CN103428423A (en) | 2013-12-04 |
US20130315499A1 (en) | 2013-11-28 |
CN103428427A (en) | 2013-12-04 |
US20170034448A1 (en) | 2017-02-02 |
US9503645B2 (en) | 2016-11-22 |
US9066013B2 (en) | 2015-06-23 |
CN103428460A (en) | 2013-12-04 |
US9681055B2 (en) | 2017-06-13 |
CN103428460B (en) | 2018-03-13 |
US9560276B2 (en) | 2017-01-31 |
CN103428428B (en) | 2017-06-16 |
US20130314558A1 (en) | 2013-11-28 |
US20130315556A1 (en) | 2013-11-28 |
US20130314580A1 (en) | 2013-11-28 |
CN103428428A (en) | 2013-12-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20130314511A1 (en) | Image capture device controlled according to image capture quality and related image capture method thereof | |
CN107948519B (en) | Image processing method, device and equipment | |
US10419668B2 (en) | Portable device with adaptive panoramic image processor | |
US9998650B2 (en) | Image processing apparatus and image pickup apparatus for adding blur in an image according to depth map | |
US9906772B2 (en) | Method for performing multi-camera capturing control of an electronic device, and associated apparatus | |
JP5592006B2 (en) | 3D image processing | |
CN107945105B (en) | Background blurring processing method, device and equipment | |
KR102229811B1 (en) | Photographing method for a terminal, and terminal |
KR20200017072A (en) | Electronic device and method for providing notification relative to image displayed via display and image stored in memory based on image analysis | |
KR20200031169A (en) | Image processing method and device | |
US9357205B2 (en) | Stereoscopic image control apparatus to adjust parallax, and method and program for controlling operation of same | |
KR20190075654A (en) | Electronic device comprising plurality of cameras and method for operating therepf | |
US10757326B2 (en) | Image processing apparatus and image processing apparatus control method | |
US11610293B2 (en) | Image processing apparatus and image processing method | |
JP2012160780A (en) | Imaging device, image processing device, and image processing program | |
JP5153021B2 (en) | Imaging apparatus, imaging method, and program | |
WO2012147368A1 (en) | Image capturing apparatus | |
CN108520036B (en) | Image selection method and device, storage medium and electronic equipment | |
KR20110125077A (en) | A digital photographing apparatus, a method for controlling the same, and a computer-readable storage medium | |
JP7204387B2 (en) | Image processing device and its control method | |
JP2013179566A (en) | Image processing apparatus and imaging apparatus | |
JP2013085018A (en) | Imaging apparatus | |
JP2014134723A (en) | Image processing system, image processing method and program | |
KR101567668B1 (en) | Smartphones camera apparatus for generating video signal by multi-focus and method thereof | |
JP5988213B2 (en) | Arithmetic processing unit |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| AS | Assignment | Owner name: MEDIATEK INC., TAIWAN; Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; ASSIGNORS: CHEN, DING-YUN; JU, CHI-CHENG; HO, CHENG-TSAI; REEL/FRAME: 030379/0514; Effective date: 20130423 |
| STCB | Information on status: application discontinuation | Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |