US20080246852A1 - Image pickup device and image pickup method - Google Patents

Image pickup device and image pickup method

Info

Publication number
US20080246852A1
Authority
US
United States
Prior art keywords
image
automatically
specified part
image pickup
shift amount
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US12/058,265
Other versions
US8144235B2 (en)
Inventor
Yukio Mori
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xacti Corp
Original Assignee
Sanyo Electric Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority to JP2007-090923 (filed Mar. 30, 2007; granted as JP4804398B2)
Application filed by Sanyo Electric Co Ltd filed Critical Sanyo Electric Co Ltd
Assigned to SANYO ELECTRIC CO., LTD. reassignment SANYO ELECTRIC CO., LTD. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: MORI, YUKIO
Publication of US20080246852A1 publication Critical patent/US20080246852A1/en
Application granted granted Critical
Publication of US8144235B2 publication Critical patent/US8144235B2/en
Assigned to XACTI CORPORATION reassignment XACTI CORPORATION ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: SANYO ELECTRIC CO., LTD.
Assigned to XACTI CORPORATION reassignment XACTI CORPORATION CORRECTIVE ASSIGNMENT TO CORRECT THE INCORRECT PATENT NUMBER 13/446,454, AND REPLACE WITH 13/466,454 PREVIOUSLY RECORDED ON REEL 032467 FRAME 0095. ASSIGNOR(S) HEREBY CONFIRMS THE ASSIGNMENT OF ASSIGNORS INTEREST. Assignors: SANYO ELECTRIC CO., LTD.
Application status: Active

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment ; Cameras comprising an electronic image sensor, e.g. digital cameras, video cameras, TV cameras, video cameras, camcorders, webcams, camera modules for embedding in other devices, e.g. mobile phones, computers or vehicles
    • H04N5/225Television cameras ; Cameras comprising an electronic image sensor, e.g. digital cameras, video cameras, camcorders, webcams, camera modules specially adapted for being embedded in other devices, e.g. mobile phones, computers or vehicles
    • H04N5/232Devices for controlling television cameras, e.g. remote control ; Control of cameras comprising an electronic image sensor
    • H04N5/23212Focusing based on image signals provided by the electronic image sensor
    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06KRECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K9/00Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
    • G06K9/00221Acquiring or recognising human faces, facial parts, facial sketches, facial expressions
    • G06K9/00228Detection; Localisation; Normalisation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment ; Cameras comprising an electronic image sensor, e.g. digital cameras, video cameras, TV cameras, video cameras, camcorders, webcams, camera modules for embedding in other devices, e.g. mobile phones, computers or vehicles
    • H04N5/225Television cameras ; Cameras comprising an electronic image sensor, e.g. digital cameras, video cameras, camcorders, webcams, camera modules specially adapted for being embedded in other devices, e.g. mobile phones, computers or vehicles
    • H04N5/235Circuitry or methods for compensating for variation in the brightness of the object, e.g. based on electric image signals provided by an electronic image sensor
    • H04N5/2351Circuitry for evaluating the brightness variations of the object

Abstract

An easy-to-carry image pickup device is provided to obtain images with multiple view angles in which a target subject is positioned in an appropriate size. When capturing an image of a person as a target subject, the size and position of the person's face in the image are detected as a specified part, and a zoom magnification ratio for a lens unit and a shift amount of the incident light position for a light-axis shifting unit are automatically controlled based on the detected size and position of the person's face, so that the person's face has a predetermined size and is positioned in a predetermined position in the image.

Description

    CROSS REFERENCE TO RELATED APPLICATIONS
  • This application claims priority based on 35 USC 119 from prior Japanese Patent Application No. P2007-090923 filed on Mar. 30, 2007, the entire contents of which are incorporated herein by reference.
  • BACKGROUND OF THE INVENTION
  • 1. Field of the Invention
  • This invention relates generally to an image pickup device, such as a digital camera, and an image pickup method capable of adjusting the view angles, and more particularly to an image pickup device and an image pickup method for automatically adjusting the view angles relative to a target subject.
  • 2. Description of Related Art
  • With developments in various digital technologies, image pickup devices such as digital cameras and digital camcorders have become widely common. Capturing high-definition images has become possible with the increase in the number of pixels of solid state image sensors such as CCD (Charge Coupled Device) and CMOS (Complementary Metal Oxide Semiconductor) sensors. Moreover, image pickup devices having an optical zoom lens with an automatic focus function are also common, making it possible for people not accustomed to using a camera to easily change the angle of view by setting the zoom magnification ratio anywhere from a wide angle to a telescopic angle, and to easily capture an image that is in focus.
  • However, choosing a framing or a composition for a picture is difficult, particularly for a beginner, and captured images sometimes are not composed in the way the photographer intended: the target subject may be too large or too small, or unrelated background may occupy more of the frame than the target subject.
  • One prior solution, Japanese Laid-Open No. 6-22263, discloses an image capturing method in which images having wide, intermediate, and telescopic view angles are captured at the same time by automatically photographing the subject at multiple zoom magnification ratios in addition to the ratio set at the time of photography, allowing the photographer to select an image with an appropriate angle of view after shooting.
  • Japanese Laid-Open No. 2004-109247 discloses a method in which an image is captured at a zoom magnification ratio wider than the one set at the time of photography; from this wider-angle image, an image having the captured range intended by the photographer and images with the same zoom magnification ratio but with captured ranges displaced from the intended one are generated, so that the photographer can select an image with an appropriate captured range.
  • Japanese Laid-Open No. 2005-117316 discloses a method in which a person in particular is set as the photographic target, and the position and orientation of the camera are controlled by a moving mechanism so that feature portions of the person's face, detected by a detection means for detecting feature points such as the eyes, the ears, and the nose, are positioned in a reference region within the frame, thereby controlling the position and size of the face to be photographed.
  • However, the method disclosed in Japanese Laid-Open No. 6-22263 photographs a target subject only by changing the zoom magnification ratio and does not consider the position of the target subject within the field of view. Therefore, the automatically photographed images may include a target subject disproportionately positioned at the lower side of the image, or the target subject may be cut off at a margin of the image.
  • In the method disclosed in Japanese Laid-Open No. 2004-109247, the obtained image will have inferior resolution if a conventional solid state image sensor is used without modification, because the image is cropped and then enlarged back to the original dimensions. On the other hand, to maintain the fineness of the image, a larger solid state image sensor must be used, which necessitates an increase in the size of the image pickup device.
  • The method disclosed in Japanese Laid-Open No. 2005-117316 uses a camera fixed at a single site together with a large-scale moving mechanism that controls the position and orientation of the camera with multiple motors, such as a rotary motor and a tilt motor, and thus it is not suited for a portable device.
  • SUMMARY OF THE INVENTION
  • This invention was made in view of the above problems, and one object of the invention is to provide an image pickup device that can obtain images with multiple varied view angles in which a target subject is appropriately positioned and sized, that is nevertheless easy to carry, and to provide a corresponding image capturing method.
  • In order to achieve the above objects, one aspect of the invention provides an image pickup device having: an imaging unit for capturing an image; an image processing unit for detecting a specified part of a target subject from a reference image captured by the imaging unit; a first control unit for controlling a size of the specified part of the target subject such that it becomes a predetermined size; and a second control unit for controlling a position of the specified part of the target subject such that it is positioned in a predetermined position. For the imaging unit, a lens and a solid state image sensor that performs photoelectric conversion of the incident light from the lens to electric signals can be used. The first and second control units can be the same control equipment. As a method to set the specified part of the target subject in a predetermined size, a zoom function of the lens can be used. As a method to place the specified part in the predetermined position, a light-axis shifting function or cropping of the image can be adopted.
  • Another aspect of the invention provides an image pickup device having: a lens unit having a lens with an optical zoom function; a solid state image sensor for performing photoelectric conversion of an incident light from the lens to electric signals; an image processing unit for detecting a specified part of a target subject from the image of the electric signals obtained by the solid state image sensor; a light-axis shifting unit for adjusting a position of the light axis of the incident light entering the solid state image sensor through the lens unit; and a control unit for computing a zoom magnification ratio for the lens unit and a shift amount of the position of the incident light for the light-axis shifting unit based on the size and position of the specified part of the target subject detected by the image processing unit, such that the specified part has a predetermined size and is positioned in a predetermined position. When the target subject is photographed, the control unit computes the zoom magnification ratio and the shift amount from the size and position of the specified part of the target subject, and an automatically-set composition image containing the specified part of the target subject with the predetermined size and in the predetermined position is photographed by setting the zoom magnification ratio to the computed ratio and the shift amount to the computed shift amount. As the light-axis shifting unit, a drivable shift lens provided between the imaging lens and the solid state image sensor or a parallel-displaceable solid state image sensor can be used. If the target subject is a person, the specified part may be the face.
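  • A minimal sketch of the computation described above is given below, assuming that the apparent face size scales linearly with the zoom magnification ratio; the function and variable names, and the use of image-height fractions, are illustrative assumptions rather than part of the disclosure.
        def compute_zoom_and_shift(face_h, face_cy, target_h, target_cy, current_zoom):
            """face_h: detected face height as a fraction of the image height;
            face_cy: detected face-center offset from the image center (fraction of image height);
            target_h, target_cy: desired face-height fraction and center offset for the composition;
            current_zoom: zoom magnification ratio at the time of detection."""
            # The face height is assumed to grow in proportion to the zoom ratio.
            new_zoom = current_zoom * (target_h / face_h)
            # The face-center offset scales by the same factor when zooming in;
            # the light-axis shift cancels whatever offset error remains.
            shift = target_cy - face_cy * (target_h / face_h)
            return new_zoom, shift

        # Example: a face 1/10 of the frame height, slightly below center,
        # brought to a composition with the face at 1/4 of the frame height.
        print(compute_zoom_and_shift(0.10, -0.02, 0.25, 0.125, 2.0))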
  • There may be a plurality of predetermined sizes for the specified part as the criteria for computing the zoom magnification ratio for the lens unit, and a plurality of the automatically-set composition images may be captured.
  • Moreover, the automatically-set composition image may place the specified part within the image such that the center of the specified part is positioned in the upper half of the image. The automatically-set composition image also may contain roughly the whole of the target subject, roughly half of the target subject including the specified part, or may have the specified part as the main component of the automatically-set composition image. The distance between the line passing through the center of the image in the horizontal direction and the line passing through the center of the specified part in the horizontal direction may be set larger in a wide angle image than in a telescopic image. There may be a plurality of the predetermined positions for the specified part as the criteria for computing the shift amount of the incident light position for the light-axis shifting unit, and a plurality of the automatically-set composition images may be taken.
  • The automatically-set composition image may contain the specified part at one of the left, center, and right positions of the image in the horizontal direction for the same angle of view.
  • Also, when a plurality of the specified parts are detected, the zoom magnification ratio for the lens and the shift amount of the incident light position for the light-axis shifting unit may be determined based on the height of the region containing all of the plurality of the specified parts, the height of a specified part having the largest height, and the height of the whole image in order to obtain the automatically-set composition image.
  • Also, capturing an automatically-set composition image may be prohibited for an image in which either the computed zoom magnification ratio or the computed shift amount falls outside of a variable range for the zoom magnification ratio for the lens or the shift amount for the light-axis shifting unit.
  • In addition to the automatically-set composition images, an image also may be captured to have a composition with the zoom magnification ratio and the shift amount set at the time of photography.
  • Still another aspect of the invention provides an image pickup method that includes: detecting a specified part of a target subject from an image of the electric signals obtained by a solid state image sensor for performing photoelectric conversion of an incident light to electric signals; computing a zoom magnification ratio for the lens unit and a shift amount of the incident light position for the light-axis shifting unit based on the detected size and position of the specified part of the target subject, such that the specified part has a predetermined size and a predetermined position; setting the zoom magnification ratio of the lens unit to the computed zoom magnification ratio and the shift amount of the incident light position for the light-axis shifting unit to the computed shift amount; and capturing an automatically-set composition image containing the specified part of the target subject having the predetermined size and the predetermined position.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a block diagram showing an internal configuration of the image pickup device according to one embodiment of the invention;
  • FIG. 2 is a schematic view showing a relation among the imaging lens, the shift lens, and the image sensor;
  • FIG. 3 is a flowchart for explaining basic operations of the image pickup device and view angle bracket photography according to the embodiment;
  • FIG. 4 is a block diagram showing a configuration of the face detection unit;
  • FIG. 5 shows one example of hierarchical images obtained by the reduced-size image generating unit;
  • FIG. 6 is a figure for explaining the face detection processing;
  • FIG. 7 shows a relation between the variable range for the focal length of the lens and the computed focal lengths necessary to obtain LS, MS, and TS images;
  • FIG. 8 is a figure showing the relation between the face area and view angles for the LS, MS, and TS images;
  • FIG. 9 shows one example of the LS image;
  • FIG. 10 shows one example of the MS image;
  • FIG. 11 shows one example of the TS image;
  • FIG. 12 is a figure for explaining the face area in the case that a plurality of faces are detected; and
  • FIG. 13 shows one example of the image compositions for the LS, MS, and TS view angles in which the target subject is positioned at the left, center, and right positions of the image in the horizontal direction.
  • DETAILED DESCRIPTION OF EMBODIMENTS
  • Preferred embodiments of the invention will be described below with reference to the accompanying drawings. The same reference numbers are assigned to the same parts in each of the drawings, and overlapping explanations of the same parts are omitted in principle. An image pickup device, such as a digital camera or a digital camcorder, that performs the photography method of the invention will be explained below. The image pickup device may also be a device that performs video recording, as long as it can capture a still image.
  • (Configuration of the Image Pickup Device)
  • First, an internal configuration of the image pickup device will be explained by referring to the drawings. FIG. 1 is a block diagram showing the internal configuration of the image pickup device.
  • The image pickup device of FIG. 1 includes: a solid state image sensor (an image sensor) 1 such as a CCD sensor or a CMOS sensor that converts the incident light to electric signals; a lens unit 18 having a zoom lens for providing the subject's optical image to the image sensor 1 and a motor for changing the focal length of the zoom lens, i.e. the optical zoom magnification ratio; an AFE (Analog Front End) 2 for converting the analog image signals outputted from the image sensor 1 to digital signals; a microphone 3 that converts the voice inputted from outside to electric signals; an image processing unit 4 for performing various image processing on the digital signals from the AFE 2 including face detection processing; a voice processing unit 5 for converting analog voice signals from the microphone 3 to digital signals; a compression processing unit 6 that performs compression coding processing, such as JPEG (Joint Photographic Experts Group) compression system on the image signals from the image processing unit 4 in the case of photography of a still image, or MPEG (Moving Picture Experts Group) compression system on the image signals from the image processing unit 4 and the voice signals from the voice processing unit 5 in the case of video shooting; a driver 7 that saves the compression-encoded signals that were compression-encoded at the compression processing unit 6 to an external memory 20 such as an SD card; a decompression processing unit 8 that decompresses and decodes the compression-encoded signals read out from the external memory 20 at the driver 7; a display unit 9 that displays an image from the image signals decoded and obtained at the decompression processing unit 8; a voice output circuit unit 10 that converts the voice signals from the decompression processing unit 8 to analog signals; a speaker unit 11 for playing back and outputting the voice based on the voice signals from the voice output circuit unit 10; a timing generator (TG) 12 for outputting timing control signals to match operation timings of each of the blocks; a CPU (Central Processing Unit) 13 for controlling driving operations of the entire image pickup device; a memory 14 for storing each program to perform each operation and for temporarily storing data at the time of program execution; an operation unit 15 to which instructions from the user are inputted, including a shutter button for taking a still image; a bus line 16 for exchanging data between the CPU 13 and each of the blocks; and a bus line 17 for exchanging data between the memory 14 and each of the blocks. In response to the image signals detected by the image processing unit 4, the CPU 13 drives the motor to control the focus, aperture, optical zoom-magnification ratio, and optical axis shifting for the lens unit 18.
  • As shown in FIG. 2, the lens unit 18 includes a shift lens 18 b provided between an imaging lens 18 a and the image sensor 1, and the shift lens 18 b is moveable in parallel relative to the acceptance surface of the image sensor 1. The position of the shift lens 18 b can be controlled by the CPU 13, and the shift lens 18 b can be shifted such that the light axis of the incident light from the imaging lens 18 a is positioned at a predetermined position on the image sensor 1.
  • (Basic Operations of the Image Pickup Device at the Time of Still Image Photography)
  • Next, basic operations of the image pickup device according to one embodiment at the time of capturing a still image will be explained by referring to the flow chart of FIG. 3. First, when a user turns on the power of the image pickup device (step 101), the photography mode for the image pickup device, i.e. the operation mode for the image sensor 1 is set automatically to a preview mode (step 102). In the preview mode, image signals which are analog signals obtained by the photoelectric conversion of the image sensor 1 are converted to digital signals at the AFE 2, subjected to image processing at the image processing unit 4, then compressed at the compression processing unit 6, and the image signals for the compressed image are temporarily stored at the external memory 20. These compressed signals are decompressed at the decompression processing unit 8 via the driver 7, and the image with the angle of view having the zoom magnification ratio of the lens unit 18 set at the present moment is displayed at the display unit 9.
  • Then, the user sets the zoom magnification ratio for the optical zoom to a desired angle of view relative to a target subject for photography (step 103). At that time, based on the image signals inputted to the image processing unit 4, the CPU 13 controls the lens unit 18 to perform optimal exposure control (Automatic Exposure: AE) and focus control (Auto Focus: AF). Once the user determines the photography angle of view and the composition and presses the shutter button of the operation unit 15 halfway (step 105), optimization processing for the AE and AF is performed (step 106).
  • Once the AE and AF are set for photography and the shutter button is fully pressed (step 107), the timing generator 12 provides timing control signals to the image sensor 1, the AFE 2, the image processing unit 4, and the compression processing unit 6 so as to synchronize the operation timing of each unit, and the image processing unit 4 detects whether or not a face larger than a predetermined size exists in the inputted image signals (step 108). This face detection processing will be described in more detail below. If no face larger than the predetermined size is detected, normal photography is performed. On the other hand, if a face larger than the predetermined size is detected, view angle bracket photography is performed, which will be described in more detail below.
  • If a face larger than the predetermined size was not detected in the image signals, the driving mode for the image sensor 1 is set to a still image photography mode (step 125), and the raw data of the analog image signals outputted from the image sensor 1 are converted to digital signals at the AFE 2 and written once into the memory 14 (step 126). These digital signals are read from the memory 14, and various image processing, such as signal conversion to generate brightness signals and color difference signals, is performed. After the processed signals are compressed into the JPEG format (step 127), the compressed image is written into the external memory 20 (step 124) and the photography is completed. The device then returns to the preview mode (step 102) as a photography standby mode.
  • (Face Detection Processing)
  • Next, the face detection processing of this image pickup device will be explained. The image processing unit 4 has a face detection device 40 that can detect a person's face from the inputted image signals. The configuration and operation of the face detection device 40 will be explained below.
  • FIG. 4 shows a configuration of the face detection device 40. The face detection device 40 includes a reduced-size image generating unit 42 that generates one or multiple reduced images based on the image data obtained at the AFE 2; a face determination unit 45 that determines whether or not a face exists in the inputted image by using each of the hierarchical images, which are composed of the inputted image and the reduced images, and a weight table for face detection stored in the memory 14; and a detection result output unit 46 that outputs the detection result of the face determination unit 45. When a face is detected, the detection result output unit 46 outputs the size and position of the detected face relative to the inputted image.
  • The weight table stored in the memory 14 is obtained from a large number of training samples (face and non-face sample images). Such a weight table can be prepared by utilizing a known learning method called Adaboost (Yoav Freund, Robert E. Schapire, "A decision-theoretic generalization of on-line learning and an application to boosting", European Conference on Computational Learning Theory, Sep. 20, 1995).
  • Adaboost is an adaptive boosting method that attains a high-accuracy classifier by selecting, out of many weak classifier candidates, multiple weak classifiers that are effective for the distinction, and then weighting and integrating them based on a large number of training samples. Here, a weak classifier is a classifier that classifies better than pure chance but does not by itself have sufficiently high accuracy. When selecting weak classifiers, if some weak classifiers have already been selected, the next most effective weak classifier is chosen from the remaining candidates by prioritizing, in the learning, the training samples that are misclassified by the already selected weak classifiers.
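  • The selection-and-reweighting idea described above can be sketched as follows, with decision stumps on scalar features standing in for the weak classifiers and labels in {-1, +1}; this is a generic illustration, not the weight-table construction actually used by the device.
        import numpy as np

        def train_adaboost(X, y, n_rounds=10):
            """X: (n_samples, n_features) array; y: labels in {-1, +1}."""
            n, d = X.shape
            w = np.full(n, 1.0 / n)            # sample weights, uniform at the start
            ensemble = []                      # (feature, threshold, polarity, alpha)
            for _ in range(n_rounds):
                best = None
                # Select the stump with the lowest weighted error on the current weights.
                for j in range(d):
                    for thr in np.unique(X[:, j]):
                        for pol in (1, -1):
                            pred = pol * np.where(X[:, j] >= thr, 1, -1)
                            err = w[pred != y].sum()
                            if best is None or err < best[0]:
                                best = (err, j, thr, pol, pred)
                err, j, thr, pol, pred = best
                alpha = 0.5 * np.log((1.0 - err) / max(err, 1e-12))
                # Emphasize the training samples the selected weak classifier got wrong.
                w = w * np.exp(-alpha * y * pred)
                w = w / w.sum()
                ensemble.append((j, thr, pol, alpha))
            return ensemble

        def predict(ensemble, X):
            # Weighted vote of the selected weak classifiers.
            score = np.zeros(len(X))
            for j, thr, pol, alpha in ensemble:
                score += alpha * pol * np.where(X[:, j] >= thr, 1, -1)
            return np.sign(score)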
  • FIG. 5 shows one example of the hierarchical images obtained by the reduced-size image generating unit 42. This example shows multiple hierarchical images generated with the reduction ratio R set to 0.8. In FIG. 5, the reference number 50 indicates the input image and the reference numbers 51 to 55 indicate the reduced-size images. The reference number 61 indicates a determination area. In this example, the determination area is set to a size of 24×24 pixels, and the size of the determination area is the same for the input image and each of the reduced-size images. Also, in this example, as shown by the arrows, face images that match the determination area are detected by scanning the determination area horizontally from left to right and proceeding downward from the top of each hierarchical image. The scanning order, however, can be any other order. The multiple reduced-size images 51 to 55 are generated in addition to the input image 50 in order to detect faces of various sizes using a single weight table.
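  • A small sketch of this hierarchical image generation follows, assuming a reduction ratio R = 0.8 and stopping once an image would become smaller than the 24×24 determination area; the nearest-neighbor subsampling with NumPy is used purely for illustration.
        import numpy as np

        def build_pyramid(img, reduction_ratio=0.8, min_side=24):
            """img: HxW (or HxWx3) array. Returns [input image, reduced images...]."""
            levels = [img]
            current = img
            while True:
                h, w = current.shape[:2]
                nh, nw = int(h * reduction_ratio), int(w * reduction_ratio)
                if min(nh, nw) < min_side:
                    break                                    # smaller than the determination area
                rows = (np.arange(nh) / reduction_ratio).astype(int)
                cols = (np.arange(nw) / reduction_ratio).astype(int)
                current = current[rows[:, None], cols]       # nearest-neighbor reduction
                levels.append(current)
            return levels

        # Faces larger than 24x24 pixels in the input image shrink toward the
        # determination-area size in the more strongly reduced levels, so one
        # weight table can cover faces of various sizes.
        print([lvl.shape for lvl in build_pyramid(np.zeros((240, 320)))])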
  • FIG. 6 illustrates the face detection processing. The face detection processing by the face determination unit 45 is performed for each hierarchical image; since the processing method is the same for each, only the face detection processing for the input image 50 is explained here. In FIG. 6, the reference number 50 indicates the input image and the reference number 61 indicates the determination area provided within the input image.
  • The face detection processing for each hierarchical image is performed by using the image corresponding to the determination area set within the image and the weight table. The face determination processing consists of multiple determination steps that proceed sequentially from rough determination to finer determination; when a face is not detected at a certain determination step, the processing does not move to the next determination step, and it is determined that a face does not exist in that determination area. Only when a face is detected in all of the determination steps is it determined that a face exists in that determination area; the determination area is then moved and the process proceeds to the next determination area. The position and size of the detected face are output by the detection result output unit 46. Such face detection processing is described in detail in JP Laid-Open No. 2007-265390 by the assignee of the present application, which is incorporated herein by reference.
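  • The coarse-to-fine determination and the scanning over one hierarchical image can be sketched as follows; the stage functions are placeholders for the weight-table evaluations, and the window size, step, and names are illustrative assumptions.
        import numpy as np

        def detect_in_level(image, stages, window=24, step=2):
            """image: 2D array; stages: list of functions patch -> bool,
            ordered from rough determination to finer determination."""
            h, w = image.shape[:2]
            detections = []
            for top in range(0, h - window + 1, step):
                for left in range(0, w - window + 1, step):
                    patch = image[top:top + window, left:left + window]
                    # all() short-circuits: the first failing step rejects the area
                    # without running the finer determination steps.
                    if all(stage(patch) for stage in stages):
                        detections.append((left, top, window, window))
            return detections

        # Detections from a reduced level are mapped back to input-image coordinates
        # by dividing by the cumulative reduction ratio of that level.
        dummy_stages = [lambda p: p.mean() > 0.4, lambda p: p.std() < 0.5]
        print(len(detect_in_level(np.random.rand(60, 80), dummy_stages)))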
  • (Basic Operation of the Image Pickup Device at the Time of Video Recording)
  • Now, operation at the time of video recording will be explained. When an image capturing operation is instructed through the operation unit 15, analog image signals obtained by the photoelectric conversion of the image sensor 1 are outputted to the AFE 2. At this time, horizontal scanning and vertical scanning are performed at the image sensor 1 according to the timing control signals provided from the TG 12, and image signals are outputted for each pixel. Once the raw data of the analog image signals are converted to digital signals at the AFE 2 and input to the image processing unit 4, various image processing is performed, such as signal conversion processing to generate brightness signals and color-difference signals.
  • Then, the processed image signals are provided to the compression processing unit 6. At the same time, analog voice signals obtained by the microphone 3 are converted to digital signals at the voice processing unit 5 and provided to the compression processing unit 6. At the compression processing unit 6, the digital image signals and the digital voice signals are compression-coded based on the MPEG compression coding system, provided to the driver 7, and stored in the external memory 20. Also, at this time, the compressed signals stored in the external memory 20 are read out by the driver 7 and provided to the decompression processing unit 8, and the image signals are obtained. These image signals are provided to the display unit 9, and the subject image presently being captured through the image sensor 1 is displayed.
  • When the image capturing operations are performed as described above, the timing control signals are given by the timing generator 12 to the AFE 2, the image processing unit 4, the voice processing unit 5, the compression processing unit 6, and the decompression processing unit 8, and the operations of these units are synchronized with the image capturing operation of the image sensor 1 for each frame.
  • When instructions to play back the video or image stored in the external memory 20 are entered through the operation unit 15, the compressed signals stored in the external memory 20 are read out by the driver 7 and provided to the decompression processing unit 8. At the decompression processing unit 8, the signals are decompressed based on the MPEG compression coding system, and the image signals and the voice signals are obtained. The image signals are then provided to the display unit 9 to play back the image, and the voice signals are provided to the speaker unit 11 via the voice output circuit unit 10 to play back the voice. The video based on the compressed signals stored in the external memory 20 is thus played back along with the voice. When the compressed signals include only image signals, only the image is played back at the display unit 9.
  • (View Angle Bracket Photography)
  • Next, the view angle bracket photography will be explained. In the image pickup device according to the invention, one or more images with different view angles, each composed so that a person in the image is captured at a predetermined position and size, can be obtained automatically at the same time by combining the optical zoom, the face detection function, and the light-axis shifting function. This image capturing method is called view angle bracket photography below.
  • If a face larger than a predetermined size is detected in the image signals at step 108 of FIG. 3, the focal length at the time the shutter button was pressed and the ratio of the face height to the image height are computed (step 109).
  • Next, the focal lengths (zoom magnification ratios and view angles) necessary to capture loose shot (LS), middle shot (MS), and tight shot (TS) images, and the corresponding amounts of light-axis shifting, are computed (step 110). Here, LS indicates an angle of view and a composition intended for the entire body; MS indicates an angle of view and a composition intended for the upper body; and TS indicates an angle of view and a composition intended for a face closeup. Shots whose necessary focal lengths fall outside the variable range of the focal length of the lens unit 18 are extracted and excluded from the photography coverage (step 111).
  • FIG. 7 shows the relation between the variable range for the focal length of the lens unit 18 and the computed focal lengths necessary to capture the LS, MS, and TS images. The focal length becomes shorter toward the left and longer toward the right in FIG. 7. In every case, the focal length currently set by the user may be anywhere within the variable range for the focal length of the lens. In case (1), all the focal lengths necessary to capture the LS, MS, and TS images fall within the variable range for the focal length of the lens, and therefore no shot is excluded from the photography coverage. Case (2) is an example in which the person is positioned far away, and the focal length necessary to capture the TS image is longer than the variable range for the focal length of the lens, so the TS image is excluded from the photography coverage. If the person is even further away, the MS image also will be excluded from the photography coverage. Case (3) is an example in which the person is positioned close, and the focal length necessary to capture the LS image is shorter than the variable range for the focal length of the lens, so the LS image is excluded from the photography coverage. If the person is even closer, the MS image also will be excluded from the photography coverage.
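  • The computation of steps 109 to 111 can be sketched as follows, assuming the face height grows roughly in proportion to the focal length; the FH/H targets are the Table 1 values given later, while the function names and the numeric example are illustrative only.
        TARGET_FACE_RATIOS = {"LS": 1 / 8, "MS": 1 / 4, "TS": 1 / 2}   # FH/H targets

        def plan_bracket_shots(current_focal, face_ratio, f_wide, f_tele):
            """face_ratio: face height / image height at the user-set focal length.
            Returns shot -> required focal length, keeping only shots whose
            focal length lies within the lens's variable range."""
            plan = {}
            for shot, target in TARGET_FACE_RATIOS.items():
                needed = current_focal * (target / face_ratio)
                if f_wide <= needed <= f_tele:
                    plan[shot] = needed
                # otherwise the shot is excluded from the photography coverage
            return plan

        # Example corresponding to case (2) of FIG. 7: a distant person, so only
        # the TS shot needs a focal length beyond the telescopic end of the range.
        print(plan_bracket_shots(current_focal=35.0, face_ratio=0.10,
                                 f_wide=28.0, f_tele=105.0))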
  • The driving mode for the image sensor 1 is set to the still image capturing mode (step 112), and the raw data of the image with the angle of view set by the user, retained at the image processing unit 4, are converted to digital signals and written into the memory 14 (step 113).
  • If capturing the LS image is possible (step 114), then the image is captured by automatically setting the zoom magnification ratio to that for the LS image computed at the step 110, and shifting the light axis such that the position of the face comes to a predetermined position (step 115). Raw data for the LS image obtained by the photography step of the step 115 are converted to digital signals and written into the memory 14 (step 116). If the LS image photography is not possible, the process moves to the MS image photography.
  • If capturing the MS image is possible (step 117), then the image is captured by automatically setting the zoom magnification ratio to that for the MS image computed at the step 110, and shifting the light axis such that the position of the face comes to a predetermined position (step 118). Raw data for the MS image obtained by the photography step of the step 118 are converted to digital signals and written into the memory 14 (step 119). If the MS image photography is not possible, the process moves to the TS image photography.
  • If capturing the TS image is possible (step 120), then the image is captured by automatically setting the zoom magnification ratio to that for the TS image computed at the step 110, and shifting the light axis such that the position of the face comes to a predetermined position (step 121). Raw data for the TS image obtained by the photography step of the step 121 are converted to digital signals and written into the memory 14 (step 122). If the TS image photography is not possible, the process moves to generation of the compressed images.
  • Various image processing is performed by the image processing unit 4 on the digital signals of a maximum of four images in total, i.e., the image with the angle of view set by the user and the up to three images with different view angles that were automatically captured, and the digital signals of the respective images are compressed into the JPEG format (step 123). Then, the compressed images are written into the external memory 20 (step 124) and the photography is completed. Thereafter, the process returns to the preview mode to be in a photography standby state (step 102).
  • The step 113 of converting the raw data of the image with the angle of view set by the user to digital signals and writing them into the memory 14 may be omitted. If step 113 is omitted, the number of images captured by one shutter operation becomes at most three. Also, the set of image files photographed at the same time may be managed by the image pickup device as one file group, or they may be managed separately as independent files, and either option may be selected by the user using the operation unit 15. If the set of image files is managed as one file group, all the images in the group can be deleted in a lump when the photograph itself is unwanted. If they are managed separately as independent files, images can be deleted selectively so that only the unwanted shots are removed. As a method to position the face in a predetermined position of the image, trimming or cropping of the captured image may be used instead of shifting the light axis.
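  • A minimal sketch of the file management choice just described, where the images captured by one shutter operation are handled either as one group (deleted in a lump) or as independent files (deleted selectively); the class and file names are illustrative only.
        class CaptureGroup:
            def __init__(self, filenames, grouped=True):
                self.filenames = list(filenames)
                self.grouped = grouped           # user-selectable via the operation unit

            def delete(self, filename=None):
                """Delete the whole group, or one file when managed independently."""
                if self.grouped or filename is None:
                    removed, self.filenames = list(self.filenames), []
                else:
                    self.filenames.remove(filename)
                    removed = [filename]
                return removed

        shots = CaptureGroup(["user.jpg", "ls.jpg", "ms.jpg", "ts.jpg"], grouped=False)
        print(shots.delete("ms.jpg"))            # selectively delete one unwanted shot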
  • Next, the ratios of the face with respect to the height of the image in the LS, MS, and TS images and the positions of the face in the image will be explained. The ratios and the positions in the explanation below are one example and they can be changed arbitrarily.
  • FIG. 8 is a figure showing the relation between the face area and the view angles for the LS, MS, and TS images, and the angle of view set by the user can be either within this range or outside of this range. Also, as described above, LS indicates an angle of view and a composition intended for the entire body; MS indicates an angle of view and a composition intended for the upper body; and TS indicates an angle of view and a composition intended for the face closeup, which are the images shown in FIGS. 9, 10, and 11 respectively. Here, the vertical length of the image, i.e. the height is shown as H; the height of the face area is shown as FH; the distance from the horizontal line that passes the center of the image to the center of the face area is shown as SH.
  • For the LS, the criteria for computing the focal length are set such that, for example, the height of the face area FH falls within the range of H/9 ≤ FH ≤ H/7. The position of the face area is set, for example, as SH>H/6, so that the center of the face area is positioned above the line one-third of the way down from the top of the image and the body fits into as large a range of the image as possible. Moreover, it is set to be H/2>FH/2+SH, i.e. SH<(H−FH)/2, so that the head does not fall outside of the image. In this case, therefore, the position of the face area, i.e. the shift amount of the light axis, is determined by setting the SH to fall within the range of H/6<SH<(H−FH)/2.
  • At the MS, criteria for computing the focal length are set such that for example the height of the face area FH falls within the range of H/5 ≤ FH ≤ H/3. The position of the face area for example is set such that the center of the face area is positioned in the upper half of the image, to be SH>0, so that the body fits in a large range of the image as much as possible. Moreover, it is set to be SH<(H−FH)/2 so that the head does not fall outside of the image. In this case, therefore, the shift amount of the light axis is determined by setting the SH to fall within the range of 0<SH<(H−FH)/2.
  • At the TS, criteria for computing the focal length are set such that for example the height of the face area FH falls within the range of H/3 ≤ FH ≤ 2H/3. The position of the face area for example is set such that the center of the face area is positioned in the upper half of the image, to be SH>0. Moreover, it is set to be SH<(H−FH)/2 so that the head does not fall outside of the image. In this case, therefore, the position of the face area i.e. the shift amount of the light axis is determined by setting the SH to fall within the range of 0<SH<(H−FH)/2.
  • The value of the SH is preferably larger in a shot having a wider angle (such as the LS) and smaller in a shot having a more telescopic angle (such as the TS). This is because the face is desirably positioned at an upper side in a shot with a wider angle, whereas the face desirably is not positioned too high from the center of the image in a shot with a more telescopic angle. Therefore, preferably the value for the SH is SH>H/5 for the LS, and SH<H/8 for the TS, for example. Table 1 shows an example of the LS, MS, and TS settings based on the H, the FH, and the SH.
  • TABLE 1
    FH/H SH/H
    LS 1/8 1/4
    MS 1/4 1/8
    TS 1/2  1/16
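  • The composition criteria above for the single-person case amount to simple range checks on SH; the sketch below verifies that the Table 1 values fall within the stated ranges, with the names and the unit image height being illustrative assumptions.
        def sh_range(shot, H, FH):
            """Allowed (min, max) of SH for a given shot type."""
            upper = (H - FH) / 2                  # keeps the head inside the frame
            if shot == "LS":
                return H / 6, upper               # face center above the 1/3 line
            if shot in ("MS", "TS"):
                return 0.0, upper                 # face center in the upper half
            raise ValueError(shot)

        H = 1.0                                   # work in image-height units
        for shot, fh, sh in [("LS", 1 / 8, 1 / 4), ("MS", 1 / 4, 1 / 8), ("TS", 1 / 2, 1 / 16)]:
            lo, hi = sh_range(shot, H, fh * H)
            print(shot, "SH range:", (round(lo, 3), round(hi, 3)), "Table 1 value OK:", lo < sh * H < hi)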
  • As described above, even when the image captured with the angle of view set by the photographer has a composition in which the person's position or size is disproportionate or the person is only partly framed, multiple images with ideal view angles and compositions can be captured automatically by the view angle bracket photography, and thus photography failures can be reduced.
  • In the above explanations, the case in which the target subject is a single person was described. However, the target subject can also be multiple people. In that case, as shown in FIG. 12, after the multiple faces are detected at step 108, the largest FH value is set as FHmax and the height FHall of the area containing the face areas of all the persons is computed. Then, in order to base the computation on the face area with the largest FH while taking into consideration the difference (FHall−FHmax) between the area containing the face areas of all the persons and the largest face area, the angle of view is computed based on the ratio of FHmax to H−(FHall−FHmax). Here, the SH is based on the center of the area containing the face areas of all the persons.
  • If the target subject is multiple persons, the setting range of FHmax for the LS, MS, and TS can be determined by substituting H−(FHall−FHmax) for the H used in the single-person case described above. The SH is the same as in the case in which the target subject is one person. Table 2 shows an example of the LS, MS, and TS settings based on the H, the FHmax, the FHall, and the SH.
  • TABLE 2
    FHmax/{H − (FHall − FHmax)} SH/H
    LS 1/8 1/4
    MS 1/4 1/8
    TS 1/2  1/16
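  • A sketch of the multi-person adjustment described above: FHmax is the largest face height, FHall the height of the region spanning all face areas, and the single-person criteria are reused with H replaced by H − (FHall − FHmax). The box format and the numeric example are illustrative assumptions.
        def multi_face_metrics(face_boxes):
            """face_boxes: list of (top, height) in pixels for each detected face."""
            fh_max = max(h for _, h in face_boxes)
            top_all = min(t for t, _ in face_boxes)
            bottom_all = max(t + h for t, h in face_boxes)
            return fh_max, bottom_all - top_all          # (FHmax, FHall)

        def effective_height(H, fh_max, fh_all):
            # Height used in place of H when applying the Table 2 ratios.
            return H - (fh_all - fh_max)

        # Example: three faces of heights 120, 100, and 90 pixels spread over the frame.
        fh_max, fh_all = multi_face_metrics([(200, 120), (150, 100), (320, 90)])
        print(fh_max, fh_all, effective_height(1080, fh_max, fh_all))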
  • In the examples described above, the view angle bracket photography was explained with the target subject positioned at the center in the horizontal direction. However, the view angle bracket photography according to the invention is not limited to such examples; it can also use compositions in which the target subject is off center toward the left or right in the horizontal direction by means of light-axis shifting of the shift lens 18 b. Such view angle bracket photography may include, for each view angle, at least one of the compositions having the target subject at the center of the image or off center toward the left or right in the horizontal direction, or it may include all of these compositions. FIG. 13 shows an example of nine images (a) to (i) with compositions having the target subject at the left, center, and right of the image in the horizontal direction for the LS, MS, and TS view angles respectively. In this case, the shift amount for the light axis is computed for each composition at step 110, and the light axis is moved to the left, right, or center and the images are photographed at steps 115, 118, and 121.
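  • The FIG. 13 style bracket can be sketched as an enumeration over the planned view angles and the left/center/right placements; the horizontal offsets (in image-width units) and the simple shift-range check are illustrative simplifications of our own, not values from the embodiment.
        HORIZONTAL_OFFSETS = {"left": -1 / 4, "center": 0.0, "right": 1 / 4}

        def enumerate_compositions(planned_shots, max_abs_shift):
            """planned_shots: shot -> focal length (e.g. the output of the range check above)."""
            compositions = []
            for shot, focal in planned_shots.items():
                for pos, offset in HORIZONTAL_OFFSETS.items():
                    if abs(offset) <= max_abs_shift:     # within the light-axis shifting range
                        compositions.append((shot, pos, focal, offset))
                    # otherwise that composition is excluded from the coverage (step 111)
            return compositions

        print(enumerate_compositions({"LS": 43.75, "MS": 87.5, "TS": 100.0}, max_abs_shift=0.3))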
  • When the angle of view and the composition set by the user are shifted off center toward the left or right in the horizontal direction, there may be cases in which a composition with the subject off center in the direction opposite to the shifted direction cannot be photographed because of the limit of the light-axis shifting range of the shift lens 18 b. For example, if the angle of view and the composition set by the user are similar to image (a) of FIG. 13, the shift amount of the light axis needed to capture images with the compositions of images (c) or (f) may fall outside of the shifting range. In that case, such compositions can be excluded from the photography coverage at step 111.
  • While the view angles automatically set for the view angle bracket photography were described as the three kinds of LS, MS, and TS images, four or more kinds can be used, and which view angles are used for the view angle bracket photography can also be made user-selectable. Shifting of the light axis is not limited to being performed by the shift lens 18 b; it can also be done by displacing the image sensor 1 in parallel with respect to the acceptance surface.
  • As described above, the present invention can be applied to an image pickup device having an optical zoom function, a face detection function, and a light-axis shifting function. According to the invention, images having the target subject placed in an appropriate size and at an appropriate position can be obtained automatically for multiple view angles with an image pickup device that is easy to carry. Therefore, even when the image captured with the angle of view set by the photographer has a composition in which the person's position or size is disproportionate or the person is only partly framed, multiple images with ideal view angles and compositions are captured automatically, and thus photography failures can be reduced.
  • The invention may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. The embodiments therefore are to be considered in all respects as illustrative and not restrictive; the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes that come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein.

Claims (21)

1. An image pickup device, comprising:
an imaging unit for capturing an image;
an image processing unit for detecting a specified part of a target subject from a reference image captured by the imaging unit;
a first control unit for controlling a size of the specified part of the target subject such that it becomes a predetermined size; and
a second control unit for controlling a position of the specified part of the target subject such that it is positioned in a predetermined position.
2. An image pickup device, comprising:
a lens unit having a lens with an optical zoom function;
a solid state image sensor for performing photoelectric conversion of an incident light from the lens to electric signals;
an image processing unit for detecting a specified part of a target subject from an image of the electric signals obtained by the solid state image sensor;
a light-axis shifting unit for adjusting a light axis position of the incident light entering the solid state image sensor through the lens unit; and
a control unit for computing a zoom magnification ratio for the lens unit and a shift amount of the incident light position for the light-axis shifting unit based on the size and position of the specified part of the target subject detected by the image processing unit, such that the specified part has a predetermined size and is positioned in a predetermined position,
wherein the image pickup device is configured such that when the target subject is photographed, the control unit computes the zoom magnification ratio and the shift amount from the size and position of the specified part of the target subject, and the image pickup device captures an automatically-set composition image containing the specified part of the target subject with the predetermined size and in the predetermined position by setting the zoom magnification ratio to the computed ratio and the shift amount to the computed shift amount.
3. The image pickup device according to claim 2, wherein the automatically-set composition image contains the specified part such that the center of the specified part is positioned in the upper half of the image.
4. The image pickup device according to claim 2, wherein the control unit computes the zoom magnification ratio and the shift amount for a plurality of images having different view angles to capture a plurality of the automatically-set composition images with the different view angles.
5. The image pickup device according to claim 2, wherein the automatically-set composition image contains at least one of roughly the whole of the target subject, roughly a half of the target subject including the specified part, and the specified part as a main component of the automatically-set composition image.
6. The image pickup device according to claim 2, wherein a distance between the line passing the center of the image in the horizontal direction and the line passing the center of the specified part in the horizontal direction is set larger in a wide angle image than in a telescopic image.
7. The image pickup device according to claim 2, wherein the image processing unit is capable of detecting a plurality of the specified parts of the target subjects, and a plurality of automatically-set composition images including the plurality of the specified parts of the target subjects are captured.
8. The image pickup device according to claim 2, wherein the automatically-set composition image contains the specified part at least at one of the left, center, and right positions of the image in the horizontal direction for the same angle of view.
9. The image pickup device according to claim 7, wherein when a plurality of the specified parts are detected, the zoom magnification ratio for the lens and the shift amount of the incident light position for the light-axis shifting unit are determined based on a height of a region containing all of the plurality of the specified parts, a height of a specified part having the largest height, and a height of the whole image to obtain the automatically-set composition image.
10. The image pickup device according to claim 2, wherein capturing the automatically-set composition image is prohibited for an image in which either the computed zoom magnification ratio or the computed shift amount falls outside of a variable range for the zoom magnification ratio for the lens or the shift amount for the light-axis shifting unit.
11. The image pickup device according to claim 2, wherein the image is captured to have a composition with the zoom magnification ratio and the shift amount set at the time of photography in addition to the automatically-set composition image.
12. An image pickup method, comprising the steps of:
detecting a specified part of a target subject from an image of electric signals obtained by a solid state image sensor for performing photoelectric conversion of an incident light to electric signals;
computing a zoom magnification ratio for a lens unit and a shift amount of the incident light position for a light-axis shifting unit based on the detected size and position of the specified part of the target subject, such that the specified part has a predetermined size and is positioned in a predetermined position;
setting the zoom magnification ratio of the lens unit to the computed zoom magnification ratio and the shift amount of the incident light position for the light-axis shifting unit to the computed shift amount; and
capturing an automatically-set composition image containing the specified part of the target subject having the predetermined size and the predetermined position.
13. The image pickup method according to claim 12, wherein the automatically-set composition image contains the specified part such that the center of the specified part is positioned in the upper half of the image.
14. The image pickup method according to claim 12, wherein the step of computing a zoom magnification ratio and a shift amount computes the zoom magnification ratio and the shift amount for a plurality of images having different view angles; and
the step of capturing an automatically-set composition image captures a plurality of the automatically-set composition images with the different view angles.
15. The image pickup method according to claim 12, wherein the automatically-set composition image contains at least one of roughly the whole of the target subject, roughly a half of the target subject including the specified part, and the specified part as a main component of the automatically-set composition image.
16. The image pickup method according to claim 12, wherein a distance between the line passing the center of the image in the horizontal direction and the line passing the center of the specified part in the horizontal direction is set larger in a wide angle image than in a telescopic image.
17. The image pickup method according to claim 12, wherein the step of detecting a specified part of a target subject includes detecting a plurality of the specified parts of the target subjects; and
the step of capturing an automatically-set composition image captures a plurality of automatically-set composition images including the plurality of the specified parts of the target subjects.
18. The image pickup method according to claim 12, wherein the automatically-set composition image contains the specified part at least at one of the left, center, and right positions of the image in the horizontal direction for the same angle of view.
19. The image pickup method according to claim 17, wherein when a plurality of the specified parts are detected, the step of computing a zoom magnification ratio and a shift amount determines the zoom magnification ratio for the lens and the shift amount of the incident light position for the light-axis shifting unit based on a height of a region containing all of the plurality of the specified parts, a height of a specified part having the largest height, and a height of the whole image to obtain the automatically-set composition image.
20. The image pickup method according to claim 12, further comprising:
extracting an automatically-set composition image in which either the computed zoom magnification ratio or the computed shift amount falls outside of a variable range for the zoom magnification ratio for the lens or the shift amount for the light-axis shifting unit; and
excluding the extracted automatically-set composition image from a photography coverage.
21. The image pickup method according to claim 12, further comprising:
capturing an image to have a composition with the zoom magnification ratio and the shift amount set at the time of photography in addition to capturing the automatically-set composition image.
US12/058,265 2007-03-30 2008-03-28 Image pickup device and image pickup method Active 2029-09-10 US8144235B2 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
JP2007-090923 2007-03-30
JPJP2007-090923 2007-03-30
JP2007090923A JP4804398B2 (en) 2007-03-30 2007-03-30 Imaging apparatus and imaging method

Publications (2)

Publication Number Publication Date
US20080246852A1 true US20080246852A1 (en) 2008-10-09
US8144235B2 US8144235B2 (en) 2012-03-27

Family

ID=39826550

Family Applications (1)

Application Number Title Priority Date Filing Date
US12/058,265 Active 2029-09-10 US8144235B2 (en) 2007-03-30 2008-03-28 Image pickup device and image pickup method

Country Status (2)

Country Link
US (1) US8144235B2 (en)
JP (1) JP4804398B2 (en)

Cited By (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090135291A1 (en) * 2007-11-28 2009-05-28 Fujifilm Corporation Image pickup apparatus and image pickup method used for the same
US20090262220A1 (en) * 2008-04-21 2009-10-22 Samsung Digital Imaging Co., Ltd. Digital photographing apparatus, method of controlling the same, and recording medium storing computer program for executing the method
US20090295832A1 (en) * 2008-06-02 2009-12-03 Sony Ericsson Mobile Communications Japan, Inc. Display processing device, display processing method, display processing program, and mobile terminal device
US20090322897A1 (en) * 2008-06-26 2009-12-31 Samsung Digital Imaging Co., Ltd. Digital image processing apparatus having self-navigator function, and operation method of the digital image processing apparatus
US20100023467A1 (en) * 2008-07-28 2010-01-28 Fujitsu Limited Rule learning method, program, and device
US20100149402A1 (en) * 2008-12-15 2010-06-17 Panasonic Corporation Imaging apparatus and camera body
US20100165119A1 (en) * 2008-12-31 2010-07-01 Nokia Corporation Method, apparatus and computer program product for automatically taking photos of oneself
US20100194903A1 (en) * 2009-02-03 2010-08-05 Kabushiki Kaisha Toshiba Mobile electronic device having camera
US20110007187A1 (en) * 2008-03-10 2011-01-13 Sanyo Electric Co., Ltd. Imaging Device And Image Playback Device
US20110033092A1 (en) * 2009-08-05 2011-02-10 Seung-Yun Lee Apparatus and method for improving face recognition ratio
US20110085016A1 (en) * 2009-10-14 2011-04-14 Tandberg Telecom As Device, computer program product and method for providing touch control of a video conference
US20110109786A1 (en) * 2009-11-06 2011-05-12 Canon Kabushiki Kaisha Image pickup apparatus
US20110317059A1 (en) * 2010-06-23 2011-12-29 Nikon Corporation Imaging apparatus
US20120019671A1 (en) * 2010-05-14 2012-01-26 Bar-Giora Goldberg Advanced Magnification Device and Method for Low-Power Sensor Systems
US20120062768A1 (en) * 2010-09-13 2012-03-15 Sony Ericsson Mobile Communications Japan, Inc. Image capturing apparatus and image capturing method
US20120062769A1 (en) * 2010-03-30 2012-03-15 Sony Corporation Image processing device and method, and program
CN103108120A (en) * 2011-11-14 2013-05-15 三星电子株式会社 Zoom control method and apparatus
US20140111679A1 (en) * 2010-04-01 2014-04-24 Olympus Imaging Corp. Imaging device, display device, control method, and method for controlling area change
US20150103192A1 (en) * 2013-10-14 2015-04-16 Qualcomm Incorporated Refocusable images
US20150163413A1 (en) * 2012-06-21 2015-06-11 Sony Corporation Image capture controlling device, image capture controlling method, and program
EP2827190A3 (en) * 2013-07-19 2015-09-02 Canon Kabushiki Kaisha AF controller, lens apparatus including the AF controller and image pickup apparatus
US20170034412A1 (en) * 2014-04-15 2017-02-02 Sony Corporation Systems, methods, and media for extracting information and a display image from two captured images
US10271010B2 (en) * 2013-10-31 2019-04-23 Shindig, Inc. Systems and methods for controlling the display of content

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100214445A1 (en) * 2009-02-20 2010-08-26 Sony Ericsson Mobile Communications Ab Image capturing method, image capturing apparatus, and computer program
JP2011095501A (en) * 2009-10-29 2011-05-12 Canon Inc Imaging apparatus
JP5725793B2 (en) * 2010-10-26 2015-05-27 キヤノン株式会社 Imaging apparatus and control method thereof
JP5834232B2 (en) * 2011-01-17 2015-12-16 Panasonic Intellectual Property Management Co., Ltd. Captured image recognition apparatus, captured image recognition system, and captured image recognition method
KR101795601B1 (en) * 2011-08-11 2017-11-08 삼성전자주식회사 Apparatus and method for processing image, and computer-readable storage medium
CN103402058B (en) * 2013-08-22 2016-08-10 Shenzhen Gionee Communication Equipment Co., Ltd. Processing method and image capturing apparatus

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6263162B1 (en) * 1998-08-25 2001-07-17 Canon Kabushiki Kaisha Image-shake preventing apparatus
US20020063779A1 (en) * 1993-12-17 2002-05-30 Kitahiro Kaneda Electronic image movement correcting device with a variable correction step feature
US20040207743A1 (en) * 2003-04-15 2004-10-21 Nikon Corporation Digital camera system
US20070030381A1 (en) * 2005-01-18 2007-02-08 Nikon Corporation Digital camera
US7453506B2 (en) * 2003-08-25 2008-11-18 Fujifilm Corporation Digital camera having a specified portion preview section
US7454130B2 (en) * 2005-12-20 2008-11-18 Hon Hai Precision Industry Co., Ltd. Anti-vibration apparatus for image pickup system and image pickup system having the same
US7627237B2 (en) * 2005-05-16 2009-12-01 Sony Corporation Image processing apparatus and method and program
US7734098B2 (en) * 2004-01-27 2010-06-08 Canon Kabushiki Kaisha Face detecting apparatus and method

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1979000072A1 (en) 1977-07-28 1979-02-22 D Pandres Ratio preserving control system
JP2753495B2 (en) 1988-03-14 1998-05-20 Chinon Co., Ltd. Automatic scaling device for a camera zoom lens
JPH04373371A (en) * 1991-06-24 1992-12-25 Matsushita Electric Ind Co Ltd Video camera system with thermal picture detecting means
JP3478547B2 (en) * 1992-06-30 2003-12-15 キヤノン株式会社 Still image recording device
JPH09189934A (en) * 1996-01-09 1997-07-22 Canon Inc Image pickup device
JP4284949B2 (en) * 2002-09-05 2009-06-24 ソニー株式会社 Mobile imaging system, mobile imaging method, and imaging apparatus
JP2004109247A (en) * 2002-09-13 2004-04-08 Minolta Co Ltd Digital camera, image processor, and program
JP2004320286A (en) * 2003-04-15 2004-11-11 Nikon Corp Digital camera
JP2005117316A (en) 2003-10-07 2005-04-28 Matsushita Electric Ind Co Ltd Apparatus and method for photographing and program
JP3873994B2 (en) * 2004-07-14 2007-01-31 Konica Minolta Photo Imaging, Inc. Imaging device and image acquisition method
JP2006311196A (en) * 2005-04-28 2006-11-09 Sony Corp Imaging apparatus and imaging method
JP4650297B2 (en) * 2006-02-23 2011-03-16 Seiko Epson Corporation Camera, camera control method, program, and recording medium
JP4540661B2 (en) 2006-02-28 2010-09-08 三洋電機株式会社 Object detecting device

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020063779A1 (en) * 1993-12-17 2002-05-30 Kitahiro Kaneda Electronic image movement correcting device with a variable correction step feature
US6263162B1 (en) * 1998-08-25 2001-07-17 Canon Kabushiki Kaisha Image-shake preventing apparatus
US20040207743A1 (en) * 2003-04-15 2004-10-21 Nikon Corporation Digital camera system
US7453506B2 (en) * 2003-08-25 2008-11-18 Fujifilm Corporation Digital camera having a specified portion preview section
US7734098B2 (en) * 2004-01-27 2010-06-08 Canon Kabushiki Kaisha Face detecting apparatus and method
US20070030381A1 (en) * 2005-01-18 2007-02-08 Nikon Corporation Digital camera
US7627237B2 (en) * 2005-05-16 2009-12-01 Sony Corporation Image processing apparatus and method and program
US7454130B2 (en) * 2005-12-20 2008-11-18 Hon Hai Precision Industry Co., Ltd. Anti-vibration apparatus for image pickup system and image pickup system having the same

Cited By (54)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090135291A1 (en) * 2007-11-28 2009-05-28 Fujifilm Corporation Image pickup apparatus and image pickup method used for the same
US8421905B2 (en) * 2007-11-28 2013-04-16 Fujifilm Corporation Image pickup apparatus and image pickup method used for the same
US8269879B2 (en) * 2007-11-28 2012-09-18 Fujifilm Corporation Image pickup apparatus and image pickup method used for the same
US20110007187A1 (en) * 2008-03-10 2011-01-13 Sanyo Electric Co., Ltd. Imaging Device And Image Playback Device
US20090262220A1 (en) * 2008-04-21 2009-10-22 Samsung Digital Imaging Co., Ltd. Digital photographing apparatus, method of controlling the same, and recording medium storing computer program for executing the method
US8159563B2 (en) * 2008-04-21 2012-04-17 Samsung Electronics Co., Ltd. Digital photographing apparatus, method of controlling the same, and recording medium storing computer program for executing the method
US9152229B2 (en) * 2008-06-02 2015-10-06 Sony Corporation Display processing device, display processing method, display processing program, and mobile terminal device
US20090295832A1 (en) * 2008-06-02 2009-12-03 Sony Ericsson Mobile Communications Japan, Inc. Display processing device, display processing method, display processing program, and mobile terminal device
US20090322897A1 (en) * 2008-06-26 2009-12-31 Samsung Digital Imaging Co., Ltd. Digital image processing apparatus having self-navigator function, and operation method of the digital image processing apparatus
US8370276B2 (en) * 2008-07-28 2013-02-05 Fujitsu Limited Rule learning method, program, and device selecting rule for updating weights based on confidence value
US20100023467A1 (en) * 2008-07-28 2010-01-28 Fujitsu Limited Rule learning method, program, and device
US8400556B2 (en) * 2008-12-15 2013-03-19 Panasonic Corporation Display control of imaging apparatus and camera body at focus operation
US20100149402A1 (en) * 2008-12-15 2010-06-17 Panasonic Corporation Imaging apparatus and camera body
US8432455B2 (en) * 2008-12-31 2013-04-30 Nokia Corporation Method, apparatus and computer program product for automatically taking photos of oneself
US20100165119A1 (en) * 2008-12-31 2010-07-01 Nokia Corporation Method, apparatus and computer program product for automatically taking photos of oneself
US8570431B2 (en) * 2009-02-03 2013-10-29 Fujitsu Mobile Communications Limited Mobile electronic device having camera
US20100194903A1 (en) * 2009-02-03 2010-08-05 Kabushiki Kaisha Toshiba Mobile electronic device having camera
US9311522B2 (en) * 2009-08-05 2016-04-12 Samsung Electronics Co., Ltd. Apparatus and method for improving face recognition ratio
US20110033092A1 (en) * 2009-08-05 2011-02-10 Seung-Yun Lee Apparatus and method for improving face recognition ratio
US9172911B2 (en) 2009-10-14 2015-10-27 Cisco Technology, Inc. Touch control of a camera at a remote video device
EP2489182A1 (en) * 2009-10-14 2012-08-22 Cisco Systems International Sarl Device and method for camera control
CN102648626A (en) * 2009-10-14 2012-08-22 思科系统国际公司 Device and method for camera control
US8619112B2 (en) 2009-10-14 2013-12-31 Cisco Technology, Inc. Device, computer program product and method for providing touch control of a video conference
WO2011046448A1 (en) 2009-10-14 2011-04-21 Tandberg Telecom As Device and method for camera control
NO20093142A1 (en) * 2009-10-14 2011-04-15 Tandberg Telecom As Apparatus and method for camera control
US20110085016A1 (en) * 2009-10-14 2011-04-14 Tandberg Telecom As Device, computer program product and method for providing touch control of a video conference
EP2489182A4 (en) * 2009-10-14 2013-10-09 Cisco Systems Int Sarl Device and method for camera control
US20110109786A1 (en) * 2009-11-06 2011-05-12 Canon Kabushiki Kaisha Image pickup apparatus
US9025052B2 (en) * 2009-11-06 2015-05-05 Canon Kabushiki Kaisha Image pickup apparatus that provides for control of angle of view during auto zooming
US9253388B2 (en) * 2010-03-30 2016-02-02 Sony Corporation Image processing device and method, and program
US20120062769A1 (en) * 2010-03-30 2012-03-15 Sony Corporation Image processing device and method, and program
US9277116B2 (en) * 2010-04-01 2016-03-01 Olympus Corporation Imaging device, display device, control method, and method for controlling area change
US20160080641A1 (en) * 2010-04-01 2016-03-17 Olympus Corporation Imaging device, display device, control method, and method for controlling area change
US9485413B2 (en) * 2010-04-01 2016-11-01 Olympus Corporation Imaging device, display device, control method, and method for controlling area change
US20140111679A1 (en) * 2010-04-01 2014-04-24 Olympus Imaging Corp. Imaging device, display device, control method, and method for controlling area change
US20120019671A1 (en) * 2010-05-14 2012-01-26 Bar-Giora Goldberg Advanced Magnification Device and Method for Low-Power Sensor Systems
US8390720B2 (en) * 2010-05-14 2013-03-05 Avaak, Inc. Advanced magnification device and method for low-power sensor systems
US20110317059A1 (en) * 2010-06-23 2011-12-29 Nikon Corporation Imaging apparatus
US9420167B2 (en) * 2010-06-23 2016-08-16 Nikon Corporation Imaging apparatus
US8692907B2 (en) * 2010-09-13 2014-04-08 Sony Corporation Image capturing apparatus and image capturing method
US20120062768A1 (en) * 2010-09-13 2012-03-15 Sony Ericsson Mobile Communications Japan, Inc. Image capturing apparatus and image capturing method
US8823837B2 (en) * 2011-11-14 2014-09-02 Samsung Electronics Co., Ltd. Zoom control method and apparatus, and digital photographing apparatus
CN103108120A (en) * 2011-11-14 2013-05-15 三星电子株式会社 Zoom control method and apparatus
US20130120617A1 (en) * 2011-11-14 2013-05-16 Samsung Electronics Co., Ltd. Zoom control method and apparatus, and digital photographing apparatus
US20150163413A1 (en) * 2012-06-21 2015-06-11 Sony Corporation Image capture controlling device, image capture controlling method, and program
EP2866433A4 (en) * 2012-06-21 2016-04-13 Sony Corp Imaging control device, imaging control method and program
US9456143B2 (en) * 2012-06-21 2016-09-27 Sony Corporation Image capture controlling device, image capture controlling method, and program
EP2827190A3 (en) * 2013-07-19 2015-09-02 Canon Kabushiki Kaisha AF controller, lens apparatus including the AF controller and image pickup apparatus
US9591201B2 (en) 2013-07-19 2017-03-07 Canon Kabushiki Kaisha AF controller, lens apparatus including the AF controller and image pickup apparatus
US20150103192A1 (en) * 2013-10-14 2015-04-16 Qualcomm Incorporated Refocusable images
US9973677B2 (en) * 2013-10-14 2018-05-15 Qualcomm Incorporated Refocusable images
US10271010B2 (en) * 2013-10-31 2019-04-23 Shindig, Inc. Systems and methods for controlling the display of content
US20170034412A1 (en) * 2014-04-15 2017-02-02 Sony Corporation Systems, methods, and media for extracting information and a display image from two captured images
US10070067B2 (en) * 2014-04-15 2018-09-04 Sony Corporation Systems, methods, and media for extracting information and a display image from two captured images

Also Published As

Publication number Publication date
JP4804398B2 (en) 2011-11-02
JP2008252508A (en) 2008-10-16
US8144235B2 (en) 2012-03-27

Similar Documents

Publication Publication Date Title
JP4591325B2 (en) Imaging device and program
US7573504B2 (en) Image recording apparatus, image recording method, and image compressing apparatus processing moving or still images
KR100821801B1 (en) Image capture apparatus and auto focus control method
US7791668B2 (en) Digital camera
US8014661B2 (en) Imaging device and imaging method
US7532235B2 (en) Photographic apparatus
JP4823179B2 (en) Imaging device and imaging control method
EP1844604B1 (en) Image pickup device and image pickup method
US20030164890A1 (en) Image-capturing apparatus
JP4406937B2 (en) Imaging apparatus
US6453124B2 (en) Digital camera
US7456864B2 (en) Digital camera for capturing a panoramic image
US8614752B2 (en) Electronic still camera with peaking function
JP4904108B2 (en) Imaging apparatus and image display control method
JP4702401B2 (en) Camera, camera control program, and camera control method
US20070211153A1 (en) Imaging apparatus
US8416303B2 (en) Imaging apparatus and imaging method
US5740337A (en) Stereoscopic imaging system with electronically controlled convergence angle
US7230648B2 (en) Image sensing apparatus and method of focusing and enlarging/reducing the in-focus image data on a display device
EP1874043B1 (en) Image pick up apparatus
JP3634232B2 (en) Digital still camera
US7706674B2 (en) Device and method for controlling flash
US8325268B2 (en) Image processing apparatus and photographing apparatus
US7834911B2 (en) Imaging device having multiple imaging elements
JP4799511B2 (en) Imaging apparatus and method, and program

Legal Events

Date Code Title Description
AS Assignment

Owner name: SANYO ELECTRIC CO., LTD., JAPAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MORI, YUKIO;REEL/FRAME:021106/0267

Effective date: 20080409

STCF Information on status: patent grant

Free format text: PATENTED CASE

AS Assignment

Owner name: XACTI CORPORATION, JAPAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:SANYO ELECTRIC CO., LTD.;REEL/FRAME:032467/0095

Effective date: 20140305

AS Assignment

Owner name: XACTI CORPORATION, JAPAN

Free format text: CORRECTIVE ASSIGNMENT TO CORRECT THE TO CORRECT THE INCORRECT PATENT NUMBER 13/446,454, AND REPLACE WITH 13/466,454 PREVIOUSLY RECORDED ON REEL 032467 FRAME 0095. ASSIGNOR(S) HEREBY CONFIRMS THE ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:SANYO ELECTRIC CO., LTD.;REEL/FRAME:032601/0646

Effective date: 20140305

FPAY Fee payment

Year of fee payment: 4