US9621797B2 - Image processing apparatus, control method for same, and program - Google Patents

Image processing apparatus, control method for same, and program

Info

Publication number
US9621797B2
Authority
US
United States
Prior art keywords: image data, focus range, trimming area, range, focus
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related, expires
Application number
US14/457,495
Other versions
US20150054982A1 (en)
Inventor
Yuya Ota
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Canon Inc
Original Assignee
Canon Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Canon Inc filed Critical Canon Inc
Assigned to CANON KABUSHIKI KAISHA reassignment CANON KABUSHIKI KAISHA ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: OTA, YUYA
Publication of US20150054982A1
Priority to US15/444,676 (US10109036B2)
Application granted
Publication of US9621797B2
Status: Expired - Fee Related
Adjusted expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformation in the plane of the image
    • G06T3/40 Scaling the whole image or part thereof
    • H04N5/23229
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80 Camera processing pipelines; Components thereof
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G06T7/55 Depth or shape recovery from multiple images
    • G06T7/557 Depth or shape recovery from multiple images from light fields, e.g. from plenoptic cameras
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/74 Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/63 Control of cameras or camera modules by using electronic viewfinders
    • H04N5/23212
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/2628 Alteration of picture size, shape, position or orientation, e.g. zooming, rotation, rolling, perspective, translation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2210/00 Indexing scheme for image generation or computer graphics
    • G06T2210/22 Cropping

Definitions

  • the present invention relates to a technique that performs image processing for image data of which the focus range is changeable after shooting.
  • a camera (light-field camera, hereinafter also referred to as "LF camera") is capable of recording light information (light field) in three-dimensional space. In the LF camera, a micro lens group (micro lens array) is arranged on the front surface of a high-resolution sensor. A plurality of micro lenses corresponds to one light spot, and information about light from different directions at different depths of field is recorded for each micro lens, so that a LF image can be acquired.
  • the focus range and the viewpoint of the LF image captured by the LF camera can be changed after shooting.
  • LF image data includes light beam information indicating the intensity of a light beam relating to the captured image and directional information about the light beam.
  • Japanese Patent Laid-Open No. 2001-208961 discloses a technique for obtaining an image which is in focus on an object or a main object intended by a photographer within a cutout range when the object is photographed by a camera.
  • Japanese Patent Laid-Open No. H10-79882 discloses a technique that performs input adjustment (e.g., focus adjustment, exposure adjustment, and color adjustment) for a cutout image upon cutout execution of the image to be input from an electronic camera to a personal computer.
  • the LF camera may become popular because the LF camera can effectively use pixels by allocating them not only to resolution but also to the plurality of micro lenses.
  • although the focus range or the like of the LF image can be changed after shooting, the focus range of the LF image must be determined upon display thereof.
  • the focus state of a trimming area set as the default focus range may be inappropriate.
  • when the default focus range is set on the background side, such as mountains and buildings located on the far side of the flowers, the flowers on the front side are out of focus.
  • the present invention provides an image processing apparatus that performs image processing for image data of which the focus range is changeable after shooting.
  • the image processing apparatus changes a focus range depending on a target range after trimming processing or after enlargement processing, thereby improving convenience of use.
  • an image processing apparatus that performs image processing for image data of which a focus range is changeable after shooting includes: an analyzing unit configured to analyze the image data in a trimming area specified with respect to the image data; a range determining unit configured to determine a focus range of the image data in the trimming area in accordance with a result analyzed by the analyzing unit; a focus range appending unit configured to append information about the focus range determined by the range determining unit to the image data after trim editing; and a storage unit configured to store the image data after trim editing and the information about the focus range appended by the focus range appending unit.
  • an image processing apparatus that performs image processing for image data of which the focus range is changeable after shooting changes a focus range depending on a target range after trimming processing or after enlargement processing, so that the convenience of use can be enhanced.
  • FIG. 1A is a schematic diagram illustrating an example of the internal configuration of a light-field camera.
  • FIG. 1B is a schematic diagram illustrating an example of the internal configuration of a light-field camera.
  • FIG. 2 is a schematic diagram illustrating the positional relationship between a micro lens array 12 and each pixel of an image sensor 13 .
  • FIG. 3 is a schematic diagram illustrating the relationship between the direction of travel of a light beam incident on a micro lens and the recording area of the image sensor 13 .
  • FIG. 4 is a schematic diagram illustrating information about a light beam incident on the image sensor 13 .
  • FIG. 5 is a schematic diagram illustrating refocusing calculation processing.
  • FIG. 6 is a schematic diagram illustrating the relationship between the difference in angle of incidence on a micro lens and the recording area of the image sensor 13 .
  • FIG. 7 is a schematic diagram illustrating depth of field adjustment processing.
  • FIG. 8 is a block diagram illustrating a configuration of an apparatus according to a first embodiment of the present invention.
  • FIG. 9 is a diagram illustrating how a focus range changes before and after editing according to the first embodiment.
  • FIG. 10 is a diagram illustrating how a focus range changes with focusing on the center coordinate of a trimming area before and after editing according to a second embodiment of the present invention.
  • FIG. 11 is a diagram illustrating how a focus range changes with focusing on a depth position having the highest frequency before and after editing according to the second embodiment.
  • FIG. 12 is a diagram illustrating how a focus range changes before and after editing according to a third embodiment of the present invention.
  • FIG. 13 is a block diagram illustrating a configuration of an apparatus according to a fourth embodiment of the present invention.
  • FIG. 14 is a diagram illustrating how a focus range changes before and after enlargement according to a fourth embodiment.
  • FIGS. 1A and 1B are schematic diagrams illustrating an example of the internal configuration of a LF camera.
  • Light from an object, which is incident on a micro lens array 12 by passing through an imaging lens 11, is photoelectrically converted into an electrical signal by an image sensor 13.
  • the obtained imaging data is referred to as “light-field data” (hereinafter referred to as “LF data”).
  • the imaging lens 11 projects light from an object onto the micro lens array 12 .
  • the imaging lens 11 is an interchangeable lens that is mounted to the main body of an imaging apparatus 10 .
  • a user can change an imaging magnification by the zoom operation of the imaging lens 11 .
  • the micro lens array 12 is constituted by arranging a plurality of micro lenses in a grid and is located between the imaging lens 11 and the image sensor 13 .
  • Each of micro lenses constituting the micro lens array 12 divides incident light from the imaging lens 11 into divided light components and outputs the divided light components to the image sensor 13 .
  • the image sensor 13 constituting the imaging unit is an imaging element having a plurality of pixels, where the intensity of light is detected by each pixel. The light components divided by each micro lens are incident on the pixels of the image sensor 13, which receives light from the object.
  • FIG. 2 is a schematic diagram illustrating the positional relationship between the micro lens array 12 and each pixel of the image sensor 13 .
  • Each micro lens of the micro lens array 12 is arranged so as to correspond to a plurality of pixels of the image sensor 13 .
  • Light divided by each micro lens is incident on each pixel of the image sensor 13 , and the intensity of light (light beam information) from different directions can be detected by each pixel.
  • the incidence direction (directional information) of a light beam incident on each pixel of the image sensor 13 via micro lenses can be found depending on the positional relationship between each micro lens and each pixel of the image sensor 13 . In other words, information about the direction of travel of light is detected in conjunction with the intensity distribution of light.
  • An image in a focal plane having different distances from the lens vertex surfaces of the micro lens array 12 is obtained by combining the outputs of the pixels of the image sensor 13 , which are placed at positions corresponding to the eccentricity from the optical axis of each micro lens.
  • a light beam is represented by a function parameterized by two parallel planes, using parameters such as a position, an orientation, a wavelength, or the like.
  • the direction of light incident on each pixel is determined by the arrangement of a plurality of pixels corresponding to each micro lens.
  • the imaging apparatus 10 acquires light beam information and directional information and then performs light beam rearrangement/calculation processing (hereinafter referred to as “reconstruction”) to thereby be able to generate image data of any focus position or any viewpoint.
  • the light beam information and directional information are included in LF data.
  • FIG. 3 is a schematic diagram illustrating the relationship between the direction of travel of a light beam incident on the micro lens of the micro lens array 12 and the recording area of the image sensor 13 .
  • An object image formed by the imaging lens 11 is focused on the micro lens array 12 , and a light beam incident on the micro lens array 12 is received by the image sensor 13 via the micro lens array 12 .
  • a light beam incident on the micro lens array 12 is received at different positions on the image sensor 13 depending on the direction of travel of light, and thus, an object image having a shape similar to that of the imaging lens 11 is focused for each micro lens.
  • FIG. 4 is a schematic diagram illustrating information about a light beam incident on the image sensor 13 .
  • a description will be given of a light beam received by the image sensor 13 with reference to FIG. 4 .
  • an assumption is made that the orthogonal coordinate system on the lens plane of the imaging lens 11 is given as (u, v), the orthogonal coordinate system on the imaging plane of the image sensor 13 is given as (x, y), and the distance between the lens plane of the imaging lens 11 and the imaging plane of the image sensor 13 is given as F.
  • the intensity of a light beam passing through the imaging lens 11 and incident on the image sensor 13 can be represented by the four-dimensional function L (u, v, x, y).
  • the four-dimensional function L (u, v, x, y), which holds not only light beam positional information but also the direction of travel of the light beam, is recorded on the image sensor 13.
  • FIG. 5 is a schematic diagram illustrating refocusing calculation processing.
  • the intensity L′ (u, v, s, t) of a light beam in the orthogonal coordinate system (s, t) on the refocused plane is represented by Formula (1) given in the Description.
  • refocusing calculation processing is performed by using Formula (2), so that an image set to any focus point (refocused plane) can be reconstructed.
  • weighting is performed by multiplying each image data that forms the image area assigned to each micro lens by a weighting coefficient. For example, when an image with a deep depth of field is to be generated, integration processing is performed only by using information about light beams incident on the light receiving plane of the image sensor 13 at relatively small angles. In other words, a light beam incident on the image sensor 13 at a relatively large angle is multiplied by a weighting coefficient of "0" (zero) and is thereby excluded from the integration processing.
  • FIG. 6 is a schematic diagram illustrating the relationship between the difference in angle of incidence on a micro lens and the recording area of the image sensor 13 .
  • FIG. 7 is a schematic diagram illustrating depth of field adjustment processing. As shown in FIG. 6, a light beam whose angle of incidence on the image sensor 13 is relatively small is incident on a more central area. Thus, as shown in FIG. 7, integration processing is performed only by using pixel data acquired at the central portion (hatched portion in FIG. 7) of the area. With the aid of such processing, an image with a deep depth of field can be expressed as if an aperture diaphragm provided in a typical imaging apparatus were stopped down.
  • a pan-focus image with an even deeper depth of field can also be generated by further reducing the pixel data used at the central portion of the area.
  • the depth of field of an image after shooting can be adjusted based on the actually acquired LF data (light beam information).
  • the system provides a LF image editing function to a user and is configured within an advanced display or a PC (personal computer) including a display or the like.
  • a CPU (Central Processing Unit) implements the processing described below by executing programs read from a memory.
  • a system for performing simple editing after shooting may be provided in an imaging apparatus which is capable of capturing a LF image.
  • in this case, the system described below is configured within the LF camera.
  • the apparatus to which the present invention is applied is not otherwise limited, provided that it is capable of performing development/editing processing for a LF image.
  • the present invention is applicable to, for example, an apparatus that acquires light beam information and directional information included in LF image data (LF data) and then performs rearrangement and reconstruction of the light beam to thereby be able to generate image data of any focus position or any viewpoint.
  • a trimming target specifying unit (hereinafter referred to as “target specifying unit”) 101 is an IF (interface) unit that is used by a user for specifying a trimming area in an image with respect to image data.
  • the target specifying unit 101 is a typical input device such as a pointing device such as a mouse or a touch panel.
  • a trimming target is specified by, for example, the following methods:
  • a trimming area determining unit (hereinafter referred to as "area determining unit") 102 determines the area to be trimmed based on the content specified by the target specifying unit 101 and the LF image data, stored in a storage unit 106, that is the trimming target. For example, when point coordinates for specifying an object are specified by the target specifying unit 101, the area determining unit 102 determines an object area including the specified points as the trimming area by analyzing the LF image stored in the storage unit 106. Alternatively, when the coordinates of the start point and the end point expressing a rectangular frame are specified by the target specifying unit 101, the area determining unit 102 determines the rectangular area defined by the start point and the end point as the trimming area within the image. When any number of point coordinates specifying an arbitrary polygon are specified by the target specifying unit 101, the area determining unit 102 determines the area surrounded by those point coordinates as the trimming area.
  • a trimming area analyzing unit (hereinafter referred to as “area analyzing unit”) 103 acquires information about a trimming area from the area determining unit 102 and then analyzes the content of image data in the trimming area.
  • the depth map is map data indicating the position in the depth direction of each pixel of an image expressed in two dimensions in the vertical and horizontal directions.
  • a histogram-type depth map is used, which counts the number (frequency) of pixels present at each position in the depth direction, regardless of their vertical and horizontal coordinates.
  • FIG. 9 illustrates a depth map according to the present embodiment.
  • a focus range 201, which is a front side area as viewed from the photographer's side, is set as the default focus range.
  • a trimming target area 202 is a far side range which is out of focus.
  • the area analyzing unit 103 analyzes information about the trimming area acquired from the area determining unit 102 . In this manner, it is determined that the trimming target is present in the trimming target area 202 on the depth map.
  • a focus range determining unit 104 determines a focus range based on the result of analysis from the area analyzing unit 103 .
  • the focus range 201 prior to trim editing is a default focus range appended upon shooting.
  • the focus range may also be referred to as a “depth range” over which the object is brought into focus.
  • the focus range is not limited to a depth range having a predetermined width but may also be referred to as a focus position (depth position at which the object is brought into focus).
  • a focus range appending unit 105 appends the range determined by the focus range determining unit 104, as the focus range corresponding to the image obtained as a result of trimming, to the LF image obtained as a result of trimming, and stores the resulting LF image in the storage unit 106.
  • Information about the determined focus range replaces the stored focus range prior to trim editing; that is, it is changed to information about the focus range after trim editing, so the information about the default focus position is overwritten.
  • information indicating the focus range is appended as metadata to, for example, the LF image data after trim editing.
  • metadata allows the LF image after trim editing to be displayed in an appropriate focus range upon display of the LF image.
  • the focus range is expressed as positions (two points) in the depth direction on the depth map. More specifically, when the two points are defined as point A and point B, the range from the position of point A to the position of point B in the depth direction is treated as the focus range.
  • information about the focus range after trim editing may also be additionally recorded in the LF image data prior to trim editing.
  • the entire data prior to trim editing is stored for the LF image itself.
  • the image can be displayed in the focus range that has been changed appropriately in accordance with trim editing.
  • information indicating the default focus position is not overwritten, and thus, the default focus position upon shooting can be used as it is upon display of an image prior to trim editing.
  • the storage unit 106 stores LF image data prior to trim editing, LF image data after trim editing obtained from the area determining unit 102 , focus range data acquired from the focus range appending unit 105 , and associated data such as a thumbnail image.
  • a display unit 107 displays a LF image prior to trim editing and after trim editing.
  • the display unit 107 presents to the user a UI (user interface) screen or the like that is used when specifying a trimming target with the target specifying unit 101.
  • a television apparatus, a PC display, a commercial display, a tablet device, a smart phone, or the like may be used as a display apparatus.
  • image data for display or the like is output from an output unit instead of the display unit 107 to an external device.
  • control is performed such that the focus range is changed appropriately depending on the result of trim editing.
  • a focus range that is convenient for the user is automatically set.
  • the area analyzing unit 103 analyzes the center coordinate of a trimming area.
  • the area analyzing unit 103 also analyzes the position on the depth map (hereinafter referred to as the "mode depth position") having the largest number of pixels in the trimming area, that is, the highest frequency in the trimming area when the number of pixels is taken as a frequency.
  • the area analyzing unit 103 not only performs the processing described in the first embodiment but also sends information about the center coordinate of the trimming area, the mode depth position of the trimming area, or both to the focus range determining unit 104.
  • the focus range determining unit 104 sets the focus range to the entire trimming area.
  • the entire area after trim editing may be unable to be brought into focus depending on LF image data prior to trim editing.
  • the focus range which can be adjusted or changed after shooting is finite and depends on the photographing conditions, such as the optical properties of the micro lens group, the optical properties of the main lens, and photographing parameters such as the aperture value.
  • depending on these conditions, the focus range may be optically narrower than it is with settings in which the aperture is stopped down on the wide-angle side.
  • the focus range determining unit 104 of the present embodiment determines the focus range after trim editing based on either the center coordinate of the trimming area or the mode depth position of the trimming area. First, a description will be given of processing for determining a focus range based on the center coordinate of the trimming area. In this case, the focusable range extending toward the far side and the near side about the depth position of the pixel at the center coordinate of the trimming area is set as the focus range after trim editing.
  • FIG. 10 shows a case where, as a result of trim editing of a LF image, the focus range cannot be set to cover the entire area that remains after a trimming non-target area 301 is excluded.
  • in the second method, the focusable range extending toward the far side and the near side about the mode depth position of the trimming area is set as the focus range after trim editing.
  • FIG. 11 shows a case where, as a result of trim editing of a LF image, the focus range cannot be set to cover the entire area that remains after a trimming non-target area 401 is excluded.
  • a focus range 403 is set as the focusable range extending toward the far side and the near side about a mode depth position 402 of the trimming area. Even when the object to be brought into focus is not at the central portion of the trimmed area, this processing brings the position intended by the trim editing into focus. Note that the user can select whether the focus range is determined from the center coordinate of the trimming area or from the mode depth position of the trimming area (see the sketch following this list).
  • the area analyzing unit 103 analyzes an object (e.g., an image of a face of a person, a face of an animal such as a dog or cat, or the like) in the trimming area.
  • the area analyzing unit 103 analyzes an area in which each object is present.
  • the area analyzing unit 103 not only performs processing described in the first embodiment but also sends information about an area in which each object is present, which is obtained as a result of the analysis described above, to the focus range determining unit 104 .
  • when the focus range determining unit 104 sets the focus range to the entire trimming area, the entire area after trim editing may not be able to be brought into focus, depending on the LF image data prior to trim editing, as described in the second embodiment.
  • the focus range determining unit 104 of the present embodiment therefore determines the focus range after trim editing based on the number of objects present in the trimming area. When the number of objects included in the trimming area is one, the focus range is determined about the depth range over which that object is present. On the other hand, when the trimming area includes a plurality of objects, the focus range is determined such that all the objects are included therein (see the sketch following this list).
  • FIG. 12 shows the case where a trimming area 502 is selected for trim editing with respect to the entire image 501 prior to trim editing.
  • the aforementioned depth map with respect to the entire image prior to trim editing is shown in FIG. 12 .
  • a trimming target area 505 is present on the front side with respect to the objects as viewed from the photographer's side, whereas a focus range 506 prior to trim editing is outside the trimming target area 505 .
  • the area analyzing unit 103 detects the area for the triangle 503 and the circle 504 which are two objects included in the trimming area 502 .
  • the focus range determining unit 104 determines a focus range 510 such that a range 508 on the depth map in which a triangle 513 is present and a range 507 on the depth map in which a circle 514 is present after trim editing are included therein.
  • the focus range 506 is changed to the focus range 510 after trim editing.
  • the triangle 503 and the circle 504 which are blurry prior to trim editing are respectively displayed as the triangle 513 and the circle 514 which are brought into focus in an area 512 .
  • the focus range determining unit 104 defines the priority of the objects in order of size.
  • the focus range is determined such that an object having high priority is preferentially included therein.
  • the focus range determining unit 104 also defines an object which is present at the center of the trimming area with high priority. For the definition of the priority, whether the priority is given to the size of an object or the center coordinate thereof can be selected by a user.
  • a description will be given of a fourth embodiment of the present invention with reference to FIG. 13 . While, in the first to third embodiments, a description has been given by taking an example of trimming as editing processing, the present invention is also applicable to the case where enlargement processing is performed. In other words, enlargement processing after shooting may be regarded as processing for setting a range which is desired to be displayed on an enlarged scale as a trimming area and for removing the remaining area.
  • An enlargement operating unit 601 shown in FIG. 13 is an IF (interface) unit that is used by a user for directing an enlargement operation with respect to LF image data.
  • the enlargement operating unit 601 is a typical input device such as a pointing device such as a mouse or a touch panel.
  • the center coordinate of an area desired to be enlarged or a rectangular frame is typically used for specification.
  • a cross-shaped operating member may also be used for specifying an enlargement area in the vertical and horizontal directions. More specifically, assume that an area enlarged from the original image size is displayed while a digital camera or the like performs preview display or simple editing. Under this state, processing for changing the display area in the vertical and horizontal directions in accordance with a user's operating instruction is typically performed while the image is enlarged.
  • An enlargement area determining unit 602 determines an area to be enlarged based on the content specified by the enlargement operating unit 601 and an enlargement target (LF image data) stored in the storage unit 106 .
  • when the center coordinate is specified, an enlargement area is determined depending on a predetermined fixed enlargement ratio; the enlargement area determining unit 602 determines the area prior to enlargement as the enlargement area in accordance with that ratio.
  • when a rectangular frame is specified as an enlargement area, the rectangular area is determined as the enlargement area.
  • when the cross-shaped button is used, the enlargement area is changed depending on the amount of change defined for each direction, for example a predetermined number of pixels (e.g., 50 pixels); when the enlargement operating unit 601 performs an operation in the right direction, the area is shifted 50 pixels to the right from the enlargement area defined at that time.
  • An enlargement area analyzing unit 603 acquires information about an enlargement area sent from the enlargement area determining unit 602 , and then analyzes the content of LF image data of the enlargement area. In the present embodiment, the enlargement area on the depth map is analyzed.
  • FIG. 14 shows an enlargement case and its depth map according to the present embodiment.
  • processing is executed for a LF image 701 prior to enlargement by taking a target area in which a circle 703 is displayed as an enlargement area 702 .
  • a far side area of a LF image prior to enlargement is in a focus range 705 as viewed from the photographer's side.
  • a range 704 corresponding to the enlargement area 702 is present in a portion which is out of focus on the front side of the focus range 705 .
  • the enlargement area analyzing unit 603 analyzes information about the enlargement area acquired from the enlargement area determining unit 602 . In this manner, the enlargement target is determined to be present in the range 704 on the depth map.
  • a focus range determining unit 604 determines a focus range based on the result of analysis from the enlargement area analyzing unit 603 . A description will be given of how a focus range changes after enlargement processing with respect to a LF image prior to enlargement processing with reference to FIG. 14 .
  • the focus range 705 prior to enlargement processing is changed to the position of the range 704 on the depth map in which the enlargement area 702 is present, and thus, becomes a focus range 708 . Consequently, the circle 703 which is blurry prior to enlargement processing is displayed as a circle 707 which is brought into focus after enlargement processing.
  • with the above processing, focus is automatically adjusted to the enlargement area when the enlargement operation is performed, improving convenience of use for the user.
  • Embodiments of the present invention can also be realized by a computer of a system or apparatus that reads out and executes computer executable instructions recorded on a storage medium (e.g., non-transitory computer-readable storage medium) to perform the functions of one or more of the above-described embodiment(s) of the present invention, and by a method performed by the computer of the system or apparatus by, for example, reading out and executing the computer executable instructions from the storage medium to perform the functions of one or more of the above-described embodiment(s).
  • the computer may comprise one or more of a central processing unit (CPU), micro processing unit (MPU), or other circuitry, and may include a network of separate computers or separate computer processors.
  • the computer executable instructions may be provided to the computer, for example, from a network or the storage medium.
  • the storage medium may include, for example, one or more of a hard disk, a random-access memory (RAM), a read only memory (ROM), a storage of distributed computing systems, an optical disk (such as a compact disc (CD), digital versatile disc (DVD), or Blu-ray Disc (BD)™), a flash memory device, a memory card, and the like.
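As referenced in the second- and third-embodiment bullets above, determining a focus range about a depth position while respecting the finite settable range could be sketched as follows. This is a minimal illustration, not the patent's implementation: the function names and numeric limits are assumptions, and depth is treated as a normalized scalar.

```python
def focus_range_about(position, near_limit, far_limit,
                      max_near_extent, max_far_extent):
    """Second embodiment: focus range extending toward the near and far
    sides about a depth position (the center-coordinate depth or the mode
    depth position), clipped to the finite range that remains optically
    settable after shooting."""
    near = max(near_limit, position - max_near_extent)
    far = min(far_limit, position + max_far_extent)
    return near, far

def focus_range_for_objects(object_ranges):
    """Third embodiment: a focus range that includes the depth ranges of
    all objects detected in the trimming area."""
    nears, fars = zip(*object_ranges)
    return min(nears), max(fars)

# Mode depth position 0.7 with a settable band of +/-0.15 inside an
# overall focusable range [0.0, 1.0]:
print(focus_range_about(0.7, 0.0, 1.0, 0.15, 0.15))           # (0.55, 0.85)
# Two objects occupying depth ranges [0.30, 0.40] and [0.55, 0.65]:
print(focus_range_for_objects([(0.30, 0.40), (0.55, 0.65)]))  # (0.3, 0.65)
```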

Abstract

A trimming target specifying unit specifies a trimming target during editing of image data. A trimming area determining unit determines the specified target as a trimming area in an image. A trimming area analyzing unit analyzes image data in the trimming area determined by the trimming area determining unit. A focus range determining unit determines a focus range in accordance with the result of analysis by the trimming area analyzing unit. A focus range appending unit appends information about the focus range determined by the focus range determining unit to the image data. A storage unit stores the image data before and after trim editing together with the data appended thereto, and a display unit displays the image data before and after trim editing.

Description

BACKGROUND OF THE INVENTION
Field of the Invention
The present invention relates to a technique that performs image processing for image data of which the focus range is changeable after shooting.
Description of the Related Art
There is known a camera (light-field camera, hereinafter also referred to as "LF camera") that is capable of recording light information (light field, hereinafter also referred to as "LF") in three-dimensional space by utilizing a special lens and a high-resolution sensor. In the LF camera, a micro lens group (micro lens array) is arranged on the front surface of the high-resolution sensor. A plurality of micro lenses corresponds to one light spot, and information about light from different directions at different depths of field is recorded for each micro lens, so that a LF image can be acquired. The focus range and the viewpoint of the LF image captured by the LF camera can be changed after shooting. LF image data includes light beam information indicating the intensity of a light beam relating to the captured image and directional information about the light beam.
Japanese Patent Laid-Open No. 2001-208961 discloses a technique for obtaining an image which is in focus on an object or a main object intended by a photographer within a cutout range when the object is photographed by a camera. Japanese Patent Laid-Open No. H10-79882 discloses a technique that performs input adjustment (e.g., focus adjustment, exposure adjustment, and color adjustment) for a cutout image upon cutout execution of the image to be input from an electronic camera to a personal computer.
Meanwhile, the number of pixels of imaging elements has been increasing due to technical advances. In such circumstances, it is expected that the LF camera may become popular because the LF camera can effectively use pixels by allocating them not only to resolution but also to the plurality of micro lenses. Although the focus range or the like of the LF image can be changed after shooting, the focus range of the LF image must be determined upon display thereof. Thus, from the viewpoint of convenience of use, it is typical to append the focus range at the time of shooting, for example, as the initial-setting focus range (hereinafter referred to as the "default focus range").
When an unwanted portion is captured in a photographed image, it is typical for a user to perform trim editing to extract a desired object image and thereby adjust the composition of the image. On the other hand, when trim editing is performed for the LF image, the focus state of the trimming area, given the default focus range, may be inappropriate. For example, assume that a user wants to trim the portion of a landscape image showing flowers. In this case, when the default focus range is set on the background side, such as mountains and buildings located on the far side of the flowers, the flowers on the front side are out of focus. Similarly, when a user performs enlargement processing for enlarging the vicinity of a desired object image in the photographed image in order to confirm the details of the image content, the focus state of the enlargement area may be inappropriate.
SUMMARY OF THE INVENTION
The present invention provides an image processing apparatus that performs image processing for image data of which the focus range is changeable after shooting. The image processing apparatus changes the focus range depending on the target range after trimming processing or after enlargement processing, thereby improving convenience of use.
According to an aspect of the present invention, an image processing apparatus performs image processing for image data of which a focus range is changeable after shooting, and includes: an analyzing unit configured to analyze the image data in a trimming area specified with respect to the image data; a range determining unit configured to determine a focus range of the image data in the trimming area in accordance with a result analyzed by the analyzing unit; a focus range appending unit configured to append information about the focus range determined by the range determining unit to the image data after trim editing; and a storage unit configured to store the image data after trim editing and the information about the focus range appended by the focus range appending unit.
According to the present invention, an image processing apparatus that performs image processing for image data of which the focus range is changeable after shooting changes a focus range depending on a target range after trimming processing or after enlargement processing, so that the convenience of use can be enhanced.
Further features of the present invention will become apparent from the following description of exemplary embodiments with reference to the attached drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1A is a schematic diagram illustrating an example of the internal configuration of a light-field camera.
FIG. 1B is a schematic diagram illustrating an example of the internal configuration of a light-field camera.
FIG. 2 is a schematic diagram illustrating the positional relationship between a micro lens array 12 and each pixel of an image sensor 13.
FIG. 3 is a schematic diagram illustrating the relationship between the direction of travel of a light beam incident on a micro lens and the recording area of the image sensor 13.
FIG. 4 is a schematic diagram illustrating information about a light beam incident on the image sensor 13.
FIG. 5 is a schematic diagram illustrating refocusing calculation processing.
FIG. 6 is a schematic diagram illustrating the relationship between the difference in angle of incidence on a micro lens and the recording area of the image sensor 13.
FIG. 7 is a schematic diagram illustrating depth of field adjustment processing.
FIG. 8 is a block diagram illustrating a configuration of an apparatus according to a first embodiment of the present invention.
FIG. 9 is a diagram illustrating how a focus range changes before and after editing according to the first embodiment.
FIG. 10 is a diagram illustrating how a focus range changes with focusing on the center coordinate of a trimming area before and after editing according to a second embodiment of the present invention.
FIG. 11 is a diagram illustrating how a focus range changes with focusing on a depth position having the highest frequency before and after editing according to the second embodiment.
FIG. 12 is a diagram illustrating how a focus range changes before and after editing according to a third embodiment of the present invention.
FIG. 13 is a block diagram illustrating a configuration of an apparatus according to a fourth embodiment of the present invention.
FIG. 14 is a diagram illustrating how a focus range changes before and after enlargement according to a fourth embodiment.
DESCRIPTION OF THE EMBODIMENTS
Hereinafter, preferred embodiments of the present invention will be described with reference to the drawings. Prior to the description of an image processing apparatus according to the embodiments of the present invention, a description will be given of an imaging apparatus used to capture an image of which the focus state can be specified. FIGS. 1A and 1B are schematic diagrams illustrating an example of the internal configuration of a LF camera. Light from an object, which is incident on a micro lens array 12 by passing through an imaging lens 11, is photoelectrically converted into an electrical signal by an image sensor 13. Note that the obtained imaging data is referred to as "light-field data" (hereinafter referred to as "LF data").
The imaging lens 11 projects light from an object onto the micro lens array 12. The imaging lens 11 is an interchangeable lens that is mounted on the main body of an imaging apparatus 10. A user can change the imaging magnification by the zoom operation of the imaging lens 11. The micro lens array 12 is constituted by arranging a plurality of micro lenses in a grid and is located between the imaging lens 11 and the image sensor 13. Each of the micro lenses constituting the micro lens array 12 divides incident light from the imaging lens 11 into divided light components and outputs them to the image sensor 13. The image sensor 13, which constitutes the imaging unit, is an imaging element having a plurality of pixels, where the intensity of light is detected by each pixel. The light components divided by each micro lens are incident on the pixels of the image sensor 13, which receives light from the object.
FIG. 2 is a schematic diagram illustrating the positional relationship between the micro lens array 12 and each pixel of the image sensor 13. Each micro lens of the micro lens array 12 is arranged so as to correspond to a plurality of pixels of the image sensor 13. Light divided by each micro lens is incident on the pixels of the image sensor 13, and the intensity of light (light beam information) from different directions can be detected by each pixel. Also, the incidence direction (directional information) of a light beam incident on each pixel of the image sensor 13 via the micro lenses can be found from the positional relationship between each micro lens and each pixel of the image sensor 13. In other words, information about the direction of travel of light is detected in conjunction with the intensity distribution of light. An image in a focal plane at a different distance from the lens vertex surfaces of the micro lens array 12 is obtained by combining the outputs of the pixels of the image sensor 13 that are placed at positions corresponding to the eccentricity from the optical axis of each micro lens. Note that a light beam is represented by a function parameterized by two parallel planes, using parameters such as a position, an orientation, a wavelength, or the like. In other words, the direction of light incident on each pixel is determined by the arrangement of the plurality of pixels corresponding to each micro lens.
As described above, the imaging apparatus 10 acquires light beam information and directional information and then performs light beam rearrangement/calculation processing (hereinafter referred to as “reconstruction”) to thereby be able to generate image data of any focus position or any viewpoint. The light beam information and directional information are included in LF data.
FIG. 3 is a schematic diagram illustrating the relationship between the direction of travel of a light beam incident on the micro lens of the micro lens array 12 and the recording area of the image sensor 13. An object image formed by the imaging lens 11 is focused on the micro lens array 12, and a light beam incident on the micro lens array 12 is received by the image sensor 13 via the micro lens array 12. At this time, as shown in FIG. 3, a light beam incident on the micro lens array 12 is received at different positions on the image sensor 13 depending on the direction of travel of light, and thus, an object image having a shape similar to that of the imaging lens 11 is focused for each micro lens.
FIG. 4 is a schematic diagram illustrating information about a light beam incident on the image sensor 13. A description will be given of a light beam received by the image sensor 13 with reference to FIG. 4. Here, an assumption is made that the orthogonal coordinate system on the lens plane of the imaging lens 11 is given as (u, v), the orthogonal coordinate system on the imaging plane of the image sensor 13 is given as (x, y), and the distance between the lens plane of the imaging lens 11 and the imaging plane of the image sensor 13 is given as F. Thus, the intensity of a light beam passing through the imaging lens 11 and incident on the image sensor 13 can be represented by the four-dimensional function L (u, v, x, y). Since a light beam incident on each micro lens is incident on different pixels depending on the direction of travel of light, the four-dimensional function L (u, v, x, y), which holds not only light beam positional information but also the direction of travel of the light beam, is recorded on the image sensor 13.
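As a rough illustration of how LF data encodes the four-dimensional function L (u, v, x, y), the following sketch treats the raw sensor as a grid of micro lens images, where the block index gives the spatial coordinate (x, y) and the pixel offset under each micro lens gives the aperture coordinate (u, v). The array shapes and the helper name are assumptions for illustration, not the patent's implementation.

```python
import numpy as np

def raw_to_lightfield(raw, p):
    """Reshape a raw LF sensor image into L[v, u, y, x], assuming each
    micro lens covers a p x p block of sensor pixels."""
    ny, nx = raw.shape[0] // p, raw.shape[1] // p
    lf = raw.reshape(ny, p, nx, p)   # axes: (y, v, x, u)
    return lf.transpose(1, 3, 0, 2)  # axes: (v, u, y, x)

# Example: a synthetic sensor with 60 x 80 micro lenses, 5 x 5 pixels each.
raw = np.random.rand(60 * 5, 80 * 5)
L = raw_to_lightfield(raw, p=5)
print(L.shape)  # (5, 5, 60, 80)
```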
Next, a description will be given of refocusing calculation processing after imaging. FIG. 5 is a schematic diagram illustrating refocusing calculation processing. When the positional relationship among the imaging lens plane, the imaging plane, and the refocused plane is set as shown in FIG. 5, the intensity L′ (u, v, s, t) of a light beam in the orthogonal coordinate system (s, t) on the refocused plane is represented by the following Formula (1):
[Formula 1]

$$L'(u,\,v,\,s,\,t) = L\left(u,\;v,\;u + \frac{s - u}{\alpha},\;v + \frac{t - v}{\alpha}\right) \tag{1}$$
Also, since the image E′ (s, t) obtained on the refocused plane is the intensity L′ integrated with respect to the lens aperture, the image E′ (s, t) is represented by the following Formula (2):
[Formula 2]

$$E'(s,\,t) = \frac{1}{\alpha^{2}F^{2}} \iint L\left(u,\;v,\;u + \frac{s - u}{\alpha},\;v + \frac{t - v}{\alpha}\right)\,du\,dv \tag{2}$$
Thus, refocusing calculation processing is performed by using Formula (2), so that an image set to any focus point (refocused plane) can be reconstructed. Here, α denotes the ratio of the distance from the lens plane to the refocused plane with respect to the distance F.
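Numerically, Formula (2) reduces to a shift-and-add over the aperture samples: each sub-aperture image is translated in proportion to its aperture offset and the results are averaged. Below is a minimal sketch, assuming the L[v, u, y, x] layout from the earlier example and ignoring the global 1/α magnification for simplicity; it illustrates the formula, not the patent's code.

```python
import numpy as np
from scipy.ndimage import shift

def refocus(L, alpha):
    """Approximate E'(s, t) of Formula (2) by shift-and-add.

    L     : light field indexed as L[v, u, y, x]
    alpha : ratio of the refocused-plane distance to F
    """
    nv, nu, ny, nx = L.shape
    cv, cu = (nv - 1) / 2.0, (nu - 1) / 2.0  # aperture center
    out = np.zeros((ny, nx))
    for v in range(nv):
        for u in range(nu):
            # Each sub-aperture image is shifted by (1 - 1/alpha) times
            # its aperture offset, following u + (s - u)/alpha in (1).
            dy = (v - cv) * (1.0 - 1.0 / alpha)
            dx = (u - cu) * (1.0 - 1.0 / alpha)
            out += shift(L[v, u], (dy, dx), order=1, mode='nearest')
    return out / (nu * nv)  # averaging stands in for 1/(alpha^2 F^2)
```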
Next, a description will be given of depth of field adjustment processing after imaging. Prior to the refocusing calculation processing, weighting is performed by multiplying each image data that forms the image area assigned to each micro lens by a weighting coefficient. For example, when an image with a deep depth of field is to be generated, integration processing is performed only by using information about light beams incident on the light receiving plane of the image sensor 13 at relatively small angles. In other words, a light beam incident on the image sensor 13 at a relatively large angle is multiplied by a weighting coefficient of "0" (zero) and is thereby excluded from the integration processing.
FIG. 6 is a schematic diagram illustrating the relationship between the difference in angle of incidence on a micro lens and the recording area of the image sensor 13. FIG. 7 is a schematic diagram illustrating depth of field adjustment processing. As shown in FIG. 6, a light beam whose angle of incidence on the image sensor 13 is relatively small is incident on a more central area. Thus, as shown in FIG. 7, integration processing is performed only by using pixel data acquired at the central portion (hatched portion in FIG. 7) of the area. With the aid of such processing, an image with a deep depth of field can be expressed as if an aperture diaphragm provided in a typical imaging apparatus were stopped down. A pan-focus image with an even deeper depth of field can also be generated by further reducing the pixel data used at the central portion of the area. As described above, the depth of field of an image can be adjusted after shooting based on the actually acquired LF data (light beam information). In the embodiments, a description will be given of an image processing apparatus for editing a LF image.
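The weighting described here can be expressed as a mask over the aperture coordinates, applied before the integration. The sketch below builds on the L[v, u, y, x] layout above; the helper and its radius parameter are assumptions, not part of the patent.

```python
import numpy as np

def aperture_weights(nv, nu, keep_radius):
    """Weighting mask over (v, u): 1 for small-angle (central) samples,
    0 for large-angle ones. Keeping only central samples deepens the
    depth of field, as if the aperture were stopped down; a smaller
    keep_radius approaches a pan-focus image."""
    v, u = np.mgrid[0:nv, 0:nu]
    cv, cu = (nv - 1) / 2.0, (nu - 1) / 2.0
    r = np.hypot(v - cv, u - cu)
    return (r <= keep_radius).astype(float)

# Usage with the light field L[v, u, y, x] from the earlier sketch:
# w = aperture_weights(nv=5, nu=5, keep_radius=1.0)
# L_weighted = L * w[:, :, None, None]      # zero out large-angle beams
# deep_dof = L_weighted.sum(axis=(0, 1)) / w.sum()
```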
(First Embodiment)
A description will be given of an exemplary configuration of a system according to a first embodiment with reference to FIG. 8. The system provides a LF image editing function to a user and is configured within an advanced display or a PC (personal computer) including a display or the like. A CPU (Central Processing Unit) implements the processing described below by executing programs read from a memory. Alternatively, a system for performing simple editing after shooting may be provided in an imaging apparatus which is capable of capturing a LF image; in this case, the system described below is configured within the LF camera. The apparatus to which the present invention is applied is not otherwise limited, provided that it is capable of performing development/editing processing for a LF image. The present invention is applicable to, for example, an apparatus that acquires the light beam information and directional information included in LF image data (LF data) and then performs rearrangement and reconstruction of the light beams to generate image data of any focus position or any viewpoint.
A trimming target specifying unit (hereinafter referred to as “target specifying unit”) 101 is an IF (interface) unit that is used by a user for specifying a trimming area in an image with respect to image data. The target specifying unit 101 is a typical input device such as a pointing device such as a mouse or a touch panel. A trimming target is specified by, for example, the following methods:
    • Method for specifying an object.
    • Method for specifying a start point and an end point for expressing a rectangular frame.
    • Method for specifying any number of points for specifying any polygon.
A trimming area determining unit (hereinafter referred to as "area determining unit") 102 determines the area to be trimmed based on the content specified by the target specifying unit 101 and the LF image data, stored in a storage unit 106, that is the trimming target. For example, when point coordinates for specifying an object are specified by the target specifying unit 101, the area determining unit 102 determines an object area including the specified points as the trimming area by analyzing the LF image stored in the storage unit 106. Alternatively, when the coordinates of the start point and the end point expressing a rectangular frame are specified by the target specifying unit 101, the area determining unit 102 determines the rectangular area defined by the start point and the end point as the trimming area within the image. When any number of point coordinates specifying an arbitrary polygon are specified by the target specifying unit 101, the area determining unit 102 determines the area surrounded by those point coordinates as the trimming area.
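For illustration, the rectangle and polygon specification methods could map to a trimming area roughly as follows. This is a hedged sketch: the helpers are hypothetical, the object-specification method is omitted, and matplotlib's Path is used as a stand-in for polygon rasterization.

```python
import numpy as np
from matplotlib.path import Path

def rect_trimming_area(start, end):
    """Trimming area from start/end points expressing a rectangular frame."""
    (x0, y0), (x1, y1) = start, end
    return min(x0, x1), min(y0, y1), max(x0, x1), max(y0, y1)

def polygon_trimming_mask(points, height, width):
    """Boolean mask of the pixels surrounded by the specified points."""
    yy, xx = np.mgrid[0:height, 0:width]
    pix = np.column_stack([xx.ravel(), yy.ravel()])
    return Path(points).contains_points(pix).reshape(height, width)

# Example on a 480 x 640 image:
print(rect_trimming_area((50, 40), (300, 200)))   # (50, 40, 300, 200)
mask = polygon_trimming_mask([(100, 50), (400, 80), (250, 300)], 480, 640)
print(int(mask.sum()), "pixels inside the trimming area")
```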
A trimming area analyzing unit (hereinafter referred to as "area analyzing unit") 103 acquires information about the trimming area from the area determining unit 102 and then analyzes the content of the image data in the trimming area. In the present embodiment, the analysis determines where the trimming area is located in the depth direction on a depth map, described below. The depth map is map data indicating the position in the depth direction of each pixel of an image expressed in two dimensions in the vertical and horizontal directions. In the present embodiment, a histogram-type depth map is used, which counts the number (frequency) of pixels present at each position in the depth direction, regardless of their vertical and horizontal coordinates.
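A histogram-type depth map of the kind just described could be built as follows, given a per-pixel depth map and a trimming mask. This is a sketch with assumed names and bin count, not the patent's implementation.

```python
import numpy as np

def depth_histogram(depth_map, mask=None, bins=64):
    """Histogram-type depth map: the number (frequency) of pixels at each
    position in the depth direction, regardless of their vertical and
    horizontal coordinates; optionally restricted to a trimming area."""
    values = depth_map[mask] if mask is not None else depth_map.ravel()
    return np.histogram(values, bins=bins)

# Example: histogram restricted to a rectangular trimming area.
depth_map = np.random.rand(480, 640)      # stand-in per-pixel depths
mask = np.zeros((480, 640), dtype=bool)
mask[100:300, 200:500] = True             # stand-in trimming area
hist, edges = depth_histogram(depth_map, mask)
```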
FIG. 9 illustrates a depth map according to the present embodiment. For a LF image prior to trim editing, a focus range 201, which is a front side area as viewed from the photographer's side, is set as the default focus range. A trimming target area 202 is a far side range which is out of focus. When a user performs trim editing for such a LF image, the image after trim editing is a so-called "blurry image" which is out of focus. The area analyzing unit 103 analyzes information about the trimming area acquired from the area determining unit 102. In this manner, it is determined that the trimming target is present in the trimming target area 202 on the depth map. A focus range determining unit 104 determines a focus range based on the result of analysis from the area analyzing unit 103. A description will be given, with reference to FIG. 9, of how the focus range changes after trim editing with respect to a LF image prior to trim editing. The focus range 201 prior to trim editing is the default focus range appended upon shooting. Note that the focus range may also be referred to as a "depth range" over which the object is brought into focus. The focus range is not limited to a depth range having a predetermined width but may also be a focus position (a depth position at which the object is brought into focus).
During trim editing, a trimming non-target area 203 is excluded, and the focus range is changed from the focus range 201 prior to trim editing to the entire area after trim editing (see a range 204). Consequently, the display image after trim editing becomes a non-blurry image which is in focus. A focus range appending unit 105 appends the range determined by the focus range determining unit 104, as the focus range corresponding to the image obtained as a result of trimming, to the LF image obtained as a result of trimming, and stores the resulting LF image in the storage unit 106. The information about the determined focus range replaces the stored focus range prior to trim editing; that is, it is changed to information about the focus range after trim editing, so the information about the default focus position is overwritten.
As a storage form of the information indicating the focus range, the information is appended as metadata to, for example, the LF image data after trim editing. The use of metadata allows the LF image after trim editing to be displayed in the appropriate focus range whenever the LF image is displayed. For example, the focus range is expressed as two positions in the depth direction on the depth map: when the two points are defined as point A and point B, the range from the position of point A to the position of point B in the depth direction is treated as the focus range.
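One possible metadata layout, assuming a JSON sidecar file; the patent requires only that two depth positions delimiting the focus range accompany the edited LF image, so the file format and field names here are assumptions:

```python
import json

def append_focus_range(metadata_path, point_a, point_b):
    # Store the focus range as the two depth positions A and B; the order
    # is normalized so that near <= far.
    record = {"focus_range": {"near": min(point_a, point_b),
                              "far": max(point_a, point_b)}}
    with open(metadata_path, "w") as f:
        json.dump(record, f)
```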
Note that, instead of overwriting, the information about the focus range after trim editing may be recorded in the LF image data prior to trim editing together with history information obtained upon trim editing. In this case, the entire LF image data prior to trim editing is retained. When a user displays the image before or after trim editing using the history information, it can be displayed in the focus range that was adjusted in accordance with the trim editing. Since the information indicating the default focus position is not overwritten, the default focus position set upon shooting can still be used when displaying the image prior to trim editing.
The storage unit 106 stores the LF image data prior to trim editing, the LF image data after trim editing obtained from the area determining unit 102, the focus range data acquired from the focus range appending unit 105, and associated data such as a thumbnail image. A display unit 107 displays LF images before and after trim editing, and presents the UI (user interface) screen that the user operates when specifying a trimming target with the target specifying unit 101. A television set, a PC display, a commercial display, a tablet device, a smartphone, or the like may serve as the display apparatus. When an apparatus is not provided with the display unit 107, image data for display is output to an external device from an output unit instead.
According to the present embodiment, when trim editing is performed on a LF image, the focus range is automatically changed to one appropriate to the result of the trim editing; in other words, a focus range convenient for the user is set without manual intervention. If the user wishes to fine-tune the focus range changed upon editing, or performs additional trim editing, the user can specify a change to the focus range again.
(Second Embodiment)
Next, a description will be given of a second embodiment of the present invention. In the second embodiment, the operations of the area analyzing unit 103 and the focus range determining unit 104 differ from those in the first embodiment. Structural elements identical to those of the first embodiment therefore keep the reference numerals already used, their detailed explanations are omitted, and only the points of difference are explained in detail. The same manner of omitting explanations applies to the embodiments described below.
The area analyzing unit 103 analyzes the center coordinate of the trimming area. The area analyzing unit 103 also analyzes the position on the depth map having the largest number of pixels in the trimming area, that is, the position with the highest frequency in the trimming area when the number of pixels is treated as a frequency (hereinafter referred to as the "mode depth position"). In addition to the processing described in the first embodiment, the area analyzing unit 103 sends either or both of the center coordinate of the trimming area and the mode depth position of the trimming area to the focus range determining unit 104.
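A minimal sketch of the mode computation under the same assumed depth-map representation as before (the bin count is an illustrative parameter):

```python
import numpy as np

def mode_depth_position(depth_map, mask, num_bins=64, depth_range=(0.0, 1.0)):
    # Histogram of depths inside the trimming area; the mode depth position
    # is taken as the centre of the highest-frequency bin.
    counts, edges = np.histogram(depth_map[mask], bins=num_bins,
                                 range=depth_range)
    i = int(np.argmax(counts))
    return 0.5 * (edges[i] + edges[i + 1])
```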
The focus range determining unit 104 attempts to set the focus range to the entire trimming area. Depending on the LF image data prior to trim editing, however, the entire area after trim editing may not be capable of being brought into focus. In the LF camera, the focus range that can be adjusted or changed after shooting is finite and depends on the photographing conditions: the optical properties of the micro lens group, the optical properties of the main lens, and photographing parameters such as the aperture value. For example, when a distant main object is captured in a not-so-bright environment, a user may set the main lens toward the telephoto side and open the aperture. In this state, the focus range is optically narrower than with settings in which the aperture is stopped down on the wide-angle side.
The focus range determining unit 104 of the present embodiment determines the focus range after trim editing based on either the center coordinate of the trimming area or the mode depth position of the trimming area. First, a description will be given of processing for determining a focus range based on the center coordinate of the trimming area. In this case, the range that is settable toward the far side and the near side about the depth position at which the pixel at the center coordinate of the trimming area is present is set as the focus range after trim editing.
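A sketch of this center-based determination, assuming the focusable extent can be summarized as a single half-width derived from the shooting conditions (a made-up parameter; the real limit follows from the optics as described above):

```python
import numpy as np

def center_pixel_depth(depth_map, mask):
    # Centre coordinate of the trimming area, approximated here by the
    # centroid of the mask (an assumption; concave areas may differ).
    ys, xs = np.nonzero(mask)
    cy, cx = int(round(ys.mean())), int(round(xs.mean()))
    return depth_map[cy, cx]

def focus_range_about(anchor_depth, focusable_half_width,
                      depth_min=0.0, depth_max=1.0):
    # Extend toward the far and near sides about the anchor depth, clamped
    # to the overall depth extent of the map.
    near = max(depth_min, anchor_depth - focusable_half_width)
    far = min(depth_max, anchor_depth + focusable_half_width)
    return near, far
```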
A description will be given of how a focus range changes after trim editing with respect to a LF image prior to trim editing with reference to FIG. 10. FIG. 10 shows a case where, as a result of trim editing a LF image, the focus range cannot be set to the entire area excluding a trimming non-target area 301.
In this case, a focus range 303 is set within the focusable range about the position 302 on the depth map at which the pixel at the center coordinate of the trimmed area is present. As a result, the central portion of the trimming area is brought into focus, which often matches the intention behind the trim editing. As described in the first embodiment, additional fine adjustment or re-specification can be made as required.
Next, a description will be given of processing for determining a focus range based on the mode depth position of the trimming area. In this case, the range that is settable toward the far side and the near side about the mode depth position of the trimming area is set as the focus range after trim editing.
A description will be given of how a focus range changes after trim editing with respect to a LF image prior to trim editing with reference to FIG. 11. FIG. 11 shows a case where, as a result of trim editing a LF image, the focus range cannot be set to the entire area excluding a trimming non-target area 401.
In this case, a focus range 403 is set as the focusable range extending toward the far side and the near side about a mode depth position 402 of the trimming area. With this processing, even when the object the user wants to bring into focus is not at the central portion of the trimmed area, focus is placed at a position matching the intention of the trim editing. Note that whether the focus range is determined from the center coordinate of the trimming area or from the mode depth position of the trimming area can be selected by the user.
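A sketch combining the two strategies of this embodiment, reusing the helpers sketched above; modeling the user-selectable switch as a boolean flag is an assumption:

```python
def focus_range_after_trim(depth_map, mask, half_width, use_mode=True):
    # Anchor the focus range either on the mode depth position or on the
    # depth of the pixel at the centre coordinate, per the user's choice.
    anchor = (mode_depth_position(depth_map, mask) if use_mode
              else center_pixel_depth(depth_map, mask))
    return focus_range_about(anchor, half_width)
```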
(Third Embodiment)
Next, a description will be given of a third embodiment of the present invention. In the third embodiment, the operations of the area analyzing unit 103 and the focus range determining unit 104 differ from those in the first embodiment. The area analyzing unit 103 detects objects (e.g., an image of the face of a person, or the face of an animal such as a dog or a cat) in the trimming area. When a plurality of objects are present in the trimming area, the area analyzing unit 103 analyzes the area in which each object is present. In addition to the processing described in the first embodiment, the area analyzing unit 103 sends the information about each object's area obtained by this analysis to the focus range determining unit 104. The focus range determining unit 104 attempts to set the focus range to the entire trimming area, but as described in the second embodiment, the entire area after trim editing may not be capable of being brought into focus depending on the LF image data prior to trim editing.
The focus range determining unit 104 of the present embodiment determines the focus range after trim editing based on the number of objects present in the trimming area. When only one object is included in the trimming area, the focus range is determined about the depth range over which that object is present. On the other hand, when a plurality of objects are included in the trimming area, the focus range is determined such that all of the objects are included in it.
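A sketch of this rule, assuming each detected object carries its depth extent (near, far) on the depth map and that the focusable width can again be summarized as a single limit; when all objects cannot fit, the caller falls back to the priority-based selection described below:

```python
def focus_range_for_objects(object_depth_ranges, max_width):
    # Span covering every object's depth extent; if it exceeds the
    # focusable width, signal the caller to fall back to priority selection.
    near = min(r[0] for r in object_depth_ranges)
    far = max(r[1] for r in object_depth_ranges)
    if far - near <= max_width:
        return near, far
    return None
```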
A description will be given of how a focus range changes after trim editing with respect to a LF image prior to trim editing with reference to FIG. 12. FIG. 12 shows a case where a trimming area 502 is selected for trim editing from the entire image 501 prior to trim editing. There are two objects in the trimming area 502: a triangle 503 and a circle 504.
The aforementioned depth map for the entire image prior to trim editing is also shown in FIG. 12. A trimming target area 505, in which the objects are present, lies on the near side as viewed from the photographer, whereas the focus range 506 prior to trim editing falls outside the trimming target area 505. When trim editing is performed for the entire image 501, the area analyzing unit 103 detects the areas of the triangle 503 and the circle 504, the two objects included in the trimming area 502.
The focus range determining unit 104 determines a focus range 510 that includes both the range 508 on the depth map in which a triangle 513 is present and the range 507 in which a circle 514 is present after trim editing. The focus range 506 is thus changed to the focus range 510 after trim editing, and the triangle 503 and the circle 504, which were blurry prior to trim editing, are displayed in an area 512 as the in-focus triangle 513 and circle 514, respectively.
When a plurality of objects are present and it is difficult to include all of them in the focus range, the focus range determining unit 104 assigns priority to the objects in order of size and determines the focus range such that objects of higher priority are preferentially included. The focus range determining unit 104 may instead give high priority to an object present at the center of the trimming area. Whether priority is based on the size of an object or on its closeness to the center coordinate can be selected by the user.
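A sketch of this priority fallback, with illustrative field names (the object record layout is an assumption): objects are ranked by size or by closeness to the center, then added greedily while the focus range stays within its limit.

```python
def prioritized_focus_range(objects, max_width, by_size=True):
    # objects: dicts with 'depth' = (near, far), 'size', and 'center_dist'
    # fields (assumed names). Larger size / smaller centre distance wins.
    key = (lambda o: -o["size"]) if by_size else (lambda o: o["center_dist"])
    near = far = None
    for obj in sorted(objects, key=key):
        lo, hi = obj["depth"]
        n = lo if near is None else min(near, lo)
        f = hi if far is None else max(far, hi)
        if f - n > max_width:
            break  # including this object would exceed the focusable width
        near, far = n, f
    return near, far  # (None, None) if not even the top object fits
```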
(Fourth Embodiment)
Next, a description will be given of a fourth embodiment of the present invention with reference to FIG. 13. While the first to third embodiments took trimming as the example of editing processing, the present invention is also applicable to enlargement processing. In other words, enlargement processing after shooting may be regarded as processing that sets the range desired to be displayed on an enlarged scale as a trimming area and removes the remaining area.
An enlargement operating unit 601 shown in FIG. 13 is an IF (interface) unit used by a user to direct an enlargement operation on LF image data. The enlargement operating unit 601 is a typical input device, such as a pointing device like a mouse, or a touch panel.
As a method for specifying an enlargement area, the center coordinate of the area desired to be enlarged, or a rectangular frame, is typically used. Alternatively, a cross-shaped operating member may be used to specify the enlargement area in the vertical and horizontal directions. More specifically, assume that a digital camera or the like is displaying an area enlarged to the original image size during preview display or simple editing. In this state, processing that moves the display area in the vertical and horizontal directions in accordance with the user's operating instructions is typically performed while the image remains enlarged.
An enlargement area determining unit 602 determines the area to be enlarged based on the content specified by the enlargement operating unit 601 and the enlargement target (LF image data) stored in the storage unit 106. When the center coordinate is specified, the enlargement area is determined from that coordinate and a predetermined fixed enlargement ratio: the enlargement area determining unit 602 determines which area of the image prior to enlargement corresponds to the display at that ratio. When a rectangular frame is specified as an enlargement area, the rectangular area is determined as the enlargement area.
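A sketch of the center-plus-fixed-ratio case; the coordinate conventions, tuple layout, and parameter names are assumptions:

```python
def enlargement_area(center, ratio, view_size, image_size):
    # Area of the pre-enlargement image that fills the view at the given
    # enlargement ratio; clamped so that it stays inside the image.
    cx, cy = center
    vw, vh = view_size
    w, h = vw / ratio, vh / ratio  # source area shrinks as the ratio grows
    x0 = min(max(cx - w / 2, 0), image_size[0] - w)
    y0 = min(max(cy - h / 2, 0), image_size[1] - h)
    return x0, y0, w, h
```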
When a user provides an instruction to move the display image by operating a cross-shaped button or the like while the image is enlarged, the enlargement area is changed by the amount of change defined for each direction of the button. For example, assume that pressing the cross-shaped button in the right direction is defined in advance to move the display image by a predetermined number of pixels (e.g., 50 pixels) in terms of the area prior to enlargement. In this case, when the enlargement operating unit 601 is operated in the right direction, the enlargement area defined at that time is shifted 50 pixels to the right.
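A sketch of this pan operation; the 50-pixel step comes from the example above, and the area tuple layout matches the previous sketch:

```python
STEP = 50  # pixels per button press, in pre-enlargement coordinates

def pan_enlargement_area(area, direction, image_size):
    # Shift the enlargement area by one step in the pressed direction,
    # clamped so that it never leaves the image.
    x0, y0, w, h = area
    dx = {"right": STEP, "left": -STEP}.get(direction, 0)
    dy = {"down": STEP, "up": -STEP}.get(direction, 0)
    x0 = min(max(x0 + dx, 0), image_size[0] - w)
    y0 = min(max(y0 + dy, 0), image_size[1] - h)
    return x0, y0, w, h
```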
An enlargement area analyzing unit 603 acquires the information about the enlargement area sent from the enlargement area determining unit 602 and analyzes the content of the LF image data in that area; in the present embodiment, the position of the enlargement area on the depth map is analyzed. FIG. 14 shows an enlargement case and its depth map according to the present embodiment. Here, processing is executed on a LF image 701 prior to enlargement, taking the target area in which a circle 703 is displayed as an enlargement area 702. As the depth map of the image prior to enlargement shows, a far-side area of the LF image is in a focus range 705 as viewed from the photographer, while a range 704 corresponding to the enlargement area 702 lies on the near side of the focus range 705 and is out of focus. Thus, if the enlargement were performed as is, the image after enlargement would be blurry.
The enlargement area analyzing unit 603 analyzes the information about the enlargement area acquired from the enlargement area determining unit 602 and thereby determines that the enlargement target is present in the range 704 on the depth map. A focus range determining unit 604 determines a focus range based on this analysis result. A description will be given of how the focus range changes after enlargement processing with respect to the LF image prior to enlargement with reference to FIG. 14. In the state of a display image 706 after enlargement processing, the focus range 705 prior to enlargement is moved to the position of the range 704 on the depth map in which the enlargement area 702 is present, becoming a focus range 708. Consequently, the circle 703, which was blurry prior to enlargement processing, is displayed as a circle 707 that is in focus after enlargement processing.
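A sketch of this refocus step under the same assumed depth-map representation: the new focus range is taken as the depth span of the enlarged area (clamping to the focusable width, as in the second embodiment, is omitted for brevity):

```python
import numpy as np

def refocus_to_area(depth_map, area):
    # The depth span of the pixels inside the enlargement area becomes the
    # focus range after enlargement.
    x0, y0, w, h = (int(round(v)) for v in area)
    patch = depth_map[y0:y0 + h, x0:x0 + w]
    return float(patch.min()), float(patch.max())
```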
When a user performs an enlargement operation, the user usually wants the enlarged area to be in focus. According to the present embodiment, focus is automatically adjusted to the enlargement area when an enlargement operation is performed, increasing convenience for the user.
(Other Embodiments)
Embodiments of the present invention can also be realized by a computer of a system or apparatus that reads out and executes computer executable instructions recorded on a storage medium (e.g., non-transitory computer-readable storage medium) to perform the functions of one or more of the above-described embodiment(s) of the present invention, and by a method performed by the computer of the system or apparatus by, for example, reading out and executing the computer executable instructions from the storage medium to perform the functions of one or more of the above-described embodiment(s). The computer may comprise one or more of a central processing unit (CPU), micro processing unit (MPU), or other circuitry, and may include a network of separate computers or separate computer processors. The computer executable instructions may be provided to the computer, for example, from a network or the storage medium. The storage medium may include, for example, one or more of a hard disk, a random-access memory (RAM), a read only memory (ROM), a storage of distributed computing systems, an optical disk (such as a compact disc (CD), digital versatile disc (DVD), or Blu-ray Disc (BD)™), a flash memory device, a memory card, and the like.
While the present invention has been described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.
This application claims the benefit of Japanese Patent Application No. 2013-170912, filed on Aug. 21, 2013, which is hereby incorporated by reference herein in its entirety.

Claims (14)

What is claimed is:
1. An image processing apparatus that performs image processing for light-field image data which has been stored after shooting and for which a focus range is changeable after shooting, the image processing apparatus comprising:
at least one processor; and
a memory storing a program which, when executed by the at least one processor, causes the at least one processor to function as:
an analyzing unit configured to analyze a characteristic of partial image data in a trimming area specified with respect to the stored light-field image data;
a range determining unit configured to determine a focus range of the partial image data of the trimming area in accordance with a result of analysis by the analyzing unit;
a focus range appending unit configured to append information about the focus range determined by the range determining unit to edited light-field image data which is generated by trimming a part of the stored light-field image data;
a storage unit configured to store the edited light-field image data after trim editing and the information about the focus range appended to the edited light-field image data; and
an output unit configured to output, to a display unit, an image which is brought into focus on the focus range determined by the range determining unit by reconstructing the edited light-field image data using the appended information.
2. The image processing apparatus according to claim 1, wherein the analyzing unit analyzes the position of a depth direction relating to the trimming area and the range determining unit determines a depth range over which an image in the trimming area is brought into focus as the focus range.
3. The image processing apparatus according to claim 1, wherein the analyzing unit analyzes a center coordinate of the trimming area and the range determining unit acquires the center coordinate of the trimming area from the analyzing unit and then determines the focus range about the position of the depth direction in which a pixel at the center coordinate is present.
4. The image processing apparatus according to claim 1, wherein the analyzing unit analyzes a depth map indicating the position of the depth direction for each pixel constituting an image to thereby analyze the position of the depth direction relating to the trimming area.
5. The image processing apparatus according to claim 1, wherein the analyzing unit analyzes the position of the depth direction having a highest frequency in the trimming area when the number of pixels is set as a frequency and the range determining unit determines the focus range about the position of the depth direction analyzed by the analyzing unit.
6. The image processing apparatus according to claim 1, wherein the analyzing unit analyzes the position of an object in the trimming area and the range determining unit determines the focus range about the position of the object in the trimming area.
7. A non-transitory computer-readable medium storing a program for causing a computer to execute a control method to be executed by an image processing apparatus that performs image processing for light-field image data which has been stored after shooting and for which the focus range is changeable after shooting, the method comprising:
analyzing a characteristic of partial image data of a trimming area specified with respect to the stored light-field image data;
determining a focus range of the partial image data in the trimming area in accordance with a result of analysis in the analyzing;
appending information about the focus range determined in the determining to edited light-field image data which is generated by trimming a part of the stored light-field image data;
storing the edited light-field image data after trim editing and information about the focus range appended to the edited light-field image data; and
outputting, to a display unit, an image which is brought into focus on the determined focus range by reconstructing the edited light-field image data using the appended information.
8. The non-transitory computer-readable medium according to claim 7, wherein the position of a depth direction relating to the trimming area is analyzed in the analyzing and a depth range over which an image in the trimming area is brought into focus is determined as the focus range in the determining.
9. The non-transitory computer-readable medium according to claim 7, wherein the center coordinate of the trimming area is analyzed in the analyzing and the center coordinate of the trimming area is acquired and then the focus range is determined about the position of the depth direction in which a pixel at the center coordinate is present in the determining.
10. The non-transitory computer-readable medium according to claim 7, wherein a depth map indicating the position of the depth direction for each pixel constituting an image is analyzed and then the position of the depth direction relating to the trimming area is analyzed in the analyzing.
11. The non-transitory computer-readable medium according to claim 7, wherein the position of the depth direction having the highest frequency in the trimming area is analyzed in the analyzing when the number of pixels is set as a frequency and the focus range is determined in the determining about the position of the depth direction analyzed in the analyzing.
12. The non-transitory computer-readable medium according to claim 7, wherein the position of an object in the trimming area is analyzed in the analyzing and the focus range is determined in the determining about the position of the object in the trimming area.
13. An image processing apparatus that performs image processing for image data which has been stored after shooting and for which a focus range is changeable after shooting, the image processing apparatus comprising:
at least one processor; and
a memory storing a program which, when executed by the at least one processor, causes the at least one processor to function as:
an analyzing unit configured to analyze a characteristic of partial image data in a trimming area specified with respect to the stored image data;
a range determining unit configured to determine a focus range of the partial image data of the trimming area in accordance with a result of analysis by the analyzing unit;
a focus range appending unit configured to append information about the focus range determined by the range determining unit to edited image data which is generated by trimming a part of the stored image data; and
a storage unit configured to store the edited image data after trim editing and the information about the focus range appended to the edited image data,
wherein the analyzing unit analyzes the position of the depth direction having a highest frequency in the trimming area using a histogram relating to the number of pixels of each position of the depth direction in the trimming area, and the range determining unit determines the focus range about the position of the depth direction analyzed by the analyzing unit.
14. A non-transitory computer-readable medium storing a program for causing a computer to execute a control method to be executed by an image processing apparatus that performs image processing for image data which has been stored after shooting and for which the focus range is changeable after shooting, the method comprising:
analyzing a characteristic of partial image data of a trimming area specified with respect to the stored image data;
determining a focus range of the partial image data in the trimming area in accordance with a result of analysis in the analyzing;
appending information about the focus range determined in the determining to edited image data which is generated by trimming a part of the stored image data; and
storing the edited image data after trim editing and information about the focus range appended to the edited image data,
wherein the position of the depth direction having the highest frequency in the trimming area is analyzed in the analyzing using a histogram relating to the number of pixels of each position of the depth direction in the trimming area, and the focus range is determined in the determining about the position of the depth direction analyzed in the analyzing.
US14/457,495 2013-08-21 2014-08-12 Image processing apparatus, control method for same, and program Expired - Fee Related US9621797B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US15/444,676 US10109036B2 (en) 2013-08-21 2017-02-28 Image processing apparatus, control method for same, and program that performs image processing for image data having a focus state that is changeable

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2013-170912 2013-08-21
JP2013170912A JP6245892B2 (en) 2013-08-21 2013-08-21 Image processing apparatus, control method therefor, and program

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US15/444,676 Division US10109036B2 (en) 2013-08-21 2017-02-28 Image processing apparatus, control method for same, and program that performs image processing for image data having a focus state that is changeable

Publications (2)

Publication Number Publication Date
US20150054982A1 US20150054982A1 (en) 2015-02-26
US9621797B2 true US9621797B2 (en) 2017-04-11

Family

ID=52480028

Family Applications (2)

Application Number Title Priority Date Filing Date
US14/457,495 Expired - Fee Related US9621797B2 (en) 2013-08-21 2014-08-12 Image processing apparatus, control method for same, and program
US15/444,676 Expired - Fee Related US10109036B2 (en) 2013-08-21 2017-02-28 Image processing apparatus, control method for same, and program that performs image processing for image data having a focus state that is changeable

Family Applications After (1)

Application Number Title Priority Date Filing Date
US15/444,676 Expired - Fee Related US10109036B2 (en) 2013-08-21 2017-02-28 Image processing apparatus, control method for same, and program that performs image processing for image data having a focus state that is changeable

Country Status (2)

Country Link
US (2) US9621797B2 (en)
JP (1) JP6245892B2 (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150104101A1 (en) * 2013-10-14 2015-04-16 Apple Inc. Method and ui for z depth image segmentation
US9870058B2 (en) * 2014-04-23 2018-01-16 Sony Corporation Control of a real world object user interface
CN107545586B (en) * 2017-08-04 2020-02-28 中国科学院自动化研究所 Depth obtaining method and system based on light field polar line plane image local part
JP7173841B2 (en) 2018-11-14 2022-11-16 キヤノン株式会社 Image processing device, its control method, and program
JP7198055B2 (en) * 2018-11-16 2022-12-28 キヤノン株式会社 Image processing device, its control method, and program
US11776093B2 (en) * 2019-07-16 2023-10-03 University Of Florida Research Foundation, Incorporated Automatic sharpness adjustment for imaging modalities

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH1079882A (en) 1996-09-02 1998-03-24 Canon Inc Image input device
JP2001208961A (en) 2000-01-26 2001-08-03 Nikon Corp Camera
US20020057847A1 (en) * 2000-11-15 2002-05-16 Nikon Corporation Image-capturing device
US7046290B2 (en) 2000-01-26 2006-05-16 Nikon Corporation Multi-point auto-focus digital camera including electronic zoom
US20100265385A1 (en) * 2009-04-18 2010-10-21 Knight Timothy J Light Field Camera Image, File and Configuration Data, and Methods of Using, Storing and Communicating Same
US20110273471A1 (en) * 2009-01-19 2011-11-10 Sony Corporation Display control device, display control method and program
US20110305446A1 (en) * 2010-06-15 2011-12-15 Kei Itoh Imaging apparatus, focus position detecting method, and computer program product

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5190882B2 (en) * 2008-11-07 2013-04-24 富士フイルム株式会社 Compound eye photographing apparatus, control method therefor, and program
KR101608970B1 (en) 2009-11-27 2016-04-05 삼성전자주식회사 Apparatus and method for processing image using light field data
JP5423893B2 (en) * 2010-06-28 2014-02-19 株式会社ニコン Imaging apparatus, image processing apparatus, and image processing program recording medium
JP5762142B2 (en) * 2011-05-31 2015-08-12 キヤノン株式会社 Imaging apparatus, image processing apparatus and method thereof
JP5947548B2 (en) * 2012-01-13 2016-07-06 キヤノン株式会社 IMAGING DEVICE, ITS CONTROL METHOD, IMAGE PROCESSING DEVICE, IMAGE GENERATION METHOD, PROGRAM
JP2013153375A (en) * 2012-01-26 2013-08-08 Sony Corp Image processing apparatus, image processing method, and recording medium
US9237263B2 (en) * 2012-10-05 2016-01-12 Vidinoti Sa Annotation method and apparatus

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH1079882A (en) 1996-09-02 1998-03-24 Canon Inc Image input device
JP2001208961A (en) 2000-01-26 2001-08-03 Nikon Corp Camera
US7046290B2 (en) 2000-01-26 2006-05-16 Nikon Corporation Multi-point auto-focus digital camera including electronic zoom
US20060139478A1 (en) 2000-01-26 2006-06-29 Nikon Corporation Camera
US8558941B2 (en) 2000-01-26 2013-10-15 Nikon Corporation Digital camera having trimming and focusing ability
US20020057847A1 (en) * 2000-11-15 2002-05-16 Nikon Corporation Image-capturing device
US20110273471A1 (en) * 2009-01-19 2011-11-10 Sony Corporation Display control device, display control method and program
US20100265385A1 (en) * 2009-04-18 2010-10-21 Knight Timothy J Light Field Camera Image, File and Configuration Data, and Methods of Using, Storing and Communicating Same
US20110305446A1 (en) * 2010-06-15 2011-12-15 Kei Itoh Imaging apparatus, focus position detecting method, and computer program product

Also Published As

Publication number Publication date
JP2015041169A (en) 2015-03-02
JP6245892B2 (en) 2017-12-13
US10109036B2 (en) 2018-10-23
US20170169542A1 (en) 2017-06-15
US20150054982A1 (en) 2015-02-26

Similar Documents

Publication Publication Date Title
US10109036B2 (en) Image processing apparatus, control method for same, and program that performs image processing for image data having a focus state that is changeable
US11568517B2 (en) Electronic apparatus, control method, and non- transitory computer readable medium
US10200590B2 (en) Image pickup apparatus and method of controlling the same
US9843787B2 (en) Generation and use of a 3D radon image
JP6548367B2 (en) Image processing apparatus, imaging apparatus, image processing method and program
US9076214B2 (en) Image acquisition apparatus and image processing apparatus using selected in-focus image data
US9208569B2 (en) Image processing apparatus and control method thereof capable of performing refocus calculation processing for light field data
KR20190021138A (en) Electronic device which stores depth information associating with image in accordance with Property of depth information acquired using image and the controlling method thereof
US20170324950A1 (en) Methods and apparatus for controlling light field capture
US9380281B2 (en) Image processing apparatus, control method for same, and program
US9332195B2 (en) Image processing apparatus, imaging apparatus, and image processing method
US10148870B2 (en) Image capturing apparatus
US9955111B2 (en) Electronic apparatus and display control method
US10356381B2 (en) Image output apparatus, control method, image pickup apparatus, and storage medium
US9319579B2 (en) Image processing apparatus, control method, and program for the same with focus state specification and deletion confirmation of image data
JP2013145982A (en) Imaging apparatus, image processing apparatus and method
KR20210101087A (en) An electronic device and method for displaying image at the electronic device
JP6294703B2 (en) Image processing apparatus, image processing method, and program
JP6012396B2 (en) Image processing apparatus, image processing method, and program.
JP5743769B2 (en) Image processing apparatus and image processing method
US9936121B2 (en) Image processing device, control method of an image processing device, and storage medium that stores a program to execute a control method of an image processing device
JP2015198340A (en) Image processing system and control method therefor, and program
WO2021065176A1 (en) Processing device, electronic device, processing method, and program
WO2022004302A1 (en) Image processing device, imaging device, image processing method, and program
JP6120535B2 (en) Information processing apparatus, information processing method, and program

Legal Events

Date Code Title Description
AS Assignment

Owner name: CANON KABUSHIKI KAISHA, JAPAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:OTA, YUYA;REEL/FRAME:034955/0578

Effective date: 20140728

STCF Information on status: patent grant

Free format text: PATENTED CASE

FEPP Fee payment procedure

Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

LAPS Lapse for failure to pay maintenance fees

Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STCH Information on status: patent discontinuation

Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362

FP Lapsed due to failure to pay maintenance fee

Effective date: 20210411