US9621797B2 - Image processing apparatus, control method for same, and program - Google Patents
- Publication number
- US9621797B2 (granted from application US14/457,495)
- Authority
- US
- United States
- Prior art keywords
- image data
- focus range
- trimming area
- range
- focus
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related, expires
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
-
- H04N5/23229—
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
- G06T7/55—Depth or shape recovery from multiple images
- G06T7/557—Depth or shape recovery from multiple images from light fields, e.g. from plenoptic cameras
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
- G06T7/74—Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/63—Control of cameras or camera modules by using electronic viewfinders
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/95—Computational photography systems, e.g. light-field imaging systems
- H04N23/957—Light-field or plenoptic cameras or camera modules
-
- H04N5/23212—
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
- H04N5/2628—Alteration of picture size, shape, position or orientation, e.g. zooming, rotation, rolling, perspective, translation
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2210/00—Indexing scheme for image generation or computer graphics
- G06T2210/22—Cropping
Definitions
- the present invention relates to a technique that performs image processing for image data of which the focus range is changeable after shooting.
- LF camera (light-field camera)
- a micro lens group, each small lens of which is called a “micro lens”
- a plurality of micro lenses corresponds to one light spot, and information about light arriving from different directions at different depths of field is recorded for each micro lens, so that a LF image can be acquired.
- the focus range and the viewpoint of the LF image captured by the LF camera can be changed after shooting.
- LF image data includes light beam information indicating the intensity of a light beam relating to the captured image and directional information about the light beam.
- Japanese Patent Laid-Open No. 2001-208961 discloses a technique for obtaining an image which is in focus on an object or a main object intended by a photographer within a cutout range when the object is photographed by a camera.
- Japanese Patent Laid-Open No. H10-79882 discloses a technique that performs input adjustment (e.g., focus adjustment, exposure adjustment, and color adjustment) for a cutout image upon cutout execution of the image to be input from an electronic camera to a personal computer.
- the LF camera may become popular because it can use pixels effectively, allocating them not only to resolution but also to a plurality of micro lenses.
- although the focus range or the like of the LF image can be changed after shooting, the focus range of the LF image must be determined when the image is displayed.
- the focus state of a trimming area set as the default focus range may be inappropriate.
- the default focus range is set on the background side such as mountains and buildings located on the far side of the flowers.
- the flowers on the front side are out of focus.
- the present invention provides an image processing apparatus that performs image processing for image data of which the focus range is changeable after shooting.
- the image processing apparatus changes a focus range depending on a target range after trimming processing or after enlargement processing, resulting in an increase in convenience of use.
- an image processing apparatus that performs image processing for image data of which a focus range is changeable after shooting and includes an analyzing unit configured to analyze the image data in a trimming area specified with respect to the image data; a range determining unit configured to determine a focus range of the image data in the trimming area in accordance with a result analyzed by the analyzing unit; a focus range appending unit configured to append information about the focus range determined by the range determining unit to image data after trim editing; and a storage unit configured to store the image data after trim editing and the information about the focus range appended by the focus range appending unit.
- an image processing apparatus that performs image processing for image data of which the focus range is changeable after shooting changes a focus range depending on a target range after trimming processing or after enlargement processing, so that the convenience of use can be enhanced.
- FIG. 1A is a schematic diagram illustrating an example of the internal configuration of a light-field camera.
- FIG. 1B is a schematic diagram illustrating an example of the internal configuration of a light-field camera.
- FIG. 2 is a schematic diagram illustrating the positional relationship between a micro lens array 12 and each pixel of an image sensor 13 .
- FIG. 3 is a schematic diagram illustrating the relationship between the direction of travel of a light beam incident on a micro lens and the recording area of the image sensor 13 .
- FIG. 4 is a schematic diagram illustrating information about a light beam incident on the image sensor 13 .
- FIG. 5 is a schematic diagram illustrating refocusing calculation processing.
- FIG. 6 is a schematic diagram illustrating the relationship between the difference in angle of incidence on a micro lens and the recording area of the image sensor 13 .
- FIG. 7 is a schematic diagram illustrating depth of field adjustment processing.
- FIG. 8 is a block diagram illustrating a configuration of an apparatus according to a first embodiment of the present invention.
- FIG. 9 is a diagram illustrating how a focus range changes before and after editing according to the first embodiment.
- FIG. 10 is a diagram illustrating how a focus range changes with focusing on the center coordinate of a trimming area before and after editing according to a second embodiment of the present invention.
- FIG. 11 is a diagram illustrating how a focus range changes with focusing on a depth position having the highest frequency before and after editing according to the second embodiment.
- FIG. 12 is a diagram illustrating how a focus range changes before and after editing according to a third embodiment of the present invention.
- FIG. 13 is a block diagram illustrating a configuration of an apparatus according to a fourth embodiment of the present invention.
- FIG. 14 is a diagram illustrating how a focus range changes before and after enlargement according to a fourth embodiment.
- FIGS. 1A and 1B are schematic diagrams illustrating an example of the internal configuration of a LF camera.
- Light from an object, which is incident on a micro lens array 12 after passing through an imaging lens 11, is photoelectrically converted into an electrical signal by an image sensor 13.
- the obtained imaging data is referred to as “light-field data” (hereinafter referred to as “LF data”).
- the imaging lens 11 projects light from an object onto the micro lens array 12 .
- the imaging lens 11 is an interchangeable lens that is mounted to the main body of an imaging apparatus 10 .
- a user can change an imaging magnification by the zoom operation of the imaging lens 11 .
- the micro lens array 12 is constituted by arranging a plurality of micro lenses in a grid and is located between the imaging lens 11 and the image sensor 13 .
- Each of micro lenses constituting the micro lens array 12 divides incident light from the imaging lens 11 into divided light components and outputs the divided light components to the image sensor 13 .
- the image sensor 13 constituting the imaging unit is an imaging element having a plurality of pixels, where the intensity of light is detected by each pixel. The light components divided by each micro lens are incident on the pixels of the image sensor 13, which receives light from the object.
- FIG. 2 is a schematic diagram illustrating the positional relationship between the micro lens array 12 and each pixel of the image sensor 13 .
- Each micro lens of the micro lens array 12 is arranged so as to correspond to a plurality of pixels of the image sensor 13 .
- Light divided by each micro lens is incident on each pixel of the image sensor 13 , and the intensity of light (light beam information) from different directions can be detected by each pixel.
- the incidence direction (directional information) of a light beam incident on each pixel of the image sensor 13 via micro lenses can be found depending on the positional relationship between each micro lens and each pixel of the image sensor 13 . In other words, information about the direction of travel of light is detected in conjunction with the intensity distribution of light.
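The positional relationship described above can be sketched in code. The following is an illustrative decomposition, assuming a square micro lens grid with a fixed number of sensor pixels behind each lens; the layout and names are assumptions for illustration, not taken from the patent.

```python
# Sketch: recovering positional and directional indices from a raw
# light-field sensor coordinate. Assumes a square micro lens grid with
# PIXELS_PER_LENS x PIXELS_PER_LENS sensor pixels behind each micro lens.

PIXELS_PER_LENS = 5  # hypothetical number of pixels per micro lens side

def decompose(px: int, py: int):
    """Split a sensor pixel coordinate into (lens_x, lens_y) -- the spatial
    sample (x, y) -- and (u, v) -- the directional sample within the lens."""
    lens_x, u = divmod(px, PIXELS_PER_LENS)
    lens_y, v = divmod(py, PIXELS_PER_LENS)
    return (lens_x, lens_y), (u, v)

# Example: sensor pixel (12, 7) lies under micro lens (2, 1),
# direction sample (2, 2).
```

In this toy layout, the lens index carries the light beam positional information and the within-lens offset carries the directional information, mirroring how the intensity distribution and direction of travel are detected together.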
- An image in a focal plane having different distances from the lens vertex surfaces of the micro lens array 12 is obtained by combining the outputs of the pixels of the image sensor 13 , which are placed at positions corresponding to the eccentricity from the optical axis of each micro lens.
- a light beam is represented by a function parameterized by two parallel planes, using parameters such as a position, an orientation, a wavelength, or the like.
- the direction of light incident on each pixel is determined by the arrangement of a plurality of pixels corresponding to each micro lens.
- the imaging apparatus 10 acquires light beam information and directional information and then performs light beam rearrangement/calculation processing (hereinafter referred to as “reconstruction”) to thereby be able to generate image data of any focus position or any viewpoint.
- the light beam information and directional information are included in LF data.
- FIG. 3 is a schematic diagram illustrating the relationship between the direction of travel of a light beam incident on the micro lens of the micro lens array 12 and the recording area of the image sensor 13 .
- An object image formed by the imaging lens 11 is focused on the micro lens array 12 , and a light beam incident on the micro lens array 12 is received by the image sensor 13 via the micro lens array 12 .
- a light beam incident on the micro lens array 12 is received at different positions on the image sensor 13 depending on the direction of travel of light, and thus, an object image having a shape similar to that of the imaging lens 11 is focused for each micro lens.
- FIG. 4 is a schematic diagram illustrating information about a light beam incident on the image sensor 13 .
- a description will be given of a light beam received by the image sensor 13 with reference to FIG. 4 .
- an assumption is made that the orthogonal coordinate system on the lens plane of the imaging lens 11 is given as (u, v), the orthogonal coordinate system on the imaging plane of the image sensor 13 is given as (x, y), and the distance between the lens plane of the imaging lens 11 and the imaging plane of the image sensor 13 is given as F.
- the intensity of a light beam passing through the imaging lens 11 and reaching the image sensor 13 can be represented by the four-dimensional function L (u, v, x, y).
- the four-dimensional function L (u, v, x, y), which holds not only light beam positional information but also the direction of travel of the light beam, is recorded on the image sensor 13.
- FIG. 5 is a schematic diagram illustrating refocusing calculation processing.
- the intensity L′ (u, v, s, t) of a light beam in the orthogonal coordinate system (s, t) on the refocused plane is represented by the following Formula (1):
- refocusing calculation processing is performed by using Formula (2), so that an image set to any focus point (refocused plane) can be reconstructed.
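Formulas (1) and (2) appear as images in the original patent and did not survive extraction. In the standard plenoptic refocusing derivation, and consistent with the surrounding text (L' defined on the refocused plane, then integrated), they would presumably take the following form, where α denotes the relative position of the refocused plane:

```latex
% Formula (1): the light beam intensity re-parameterized on a refocused
% plane placed at distance \alpha F from the lens plane.
L'(u, v, s, t) = L\!\left(u,\; v,\; u + \frac{s - u}{\alpha},\; v + \frac{t - v}{\alpha}\right)

% Formula (2): the refocused image is the integral of Formula (1) over
% the lens aperture coordinates (u, v).
E(s, t) = \frac{1}{\alpha^{2} F^{2}} \iint L'(u, v, s, t)\, \mathrm{d}u\, \mathrm{d}v
```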
- weighting is performed by multiplying each piece of image data that forms the image area assigned to each micro lens by a weighting coefficient. For example, when an image with a deep depth of field is to be generated, integration processing is performed only by using information about light beams incident on the light receiving plane of the image sensor 13 at a relatively small angle. In other words, light beams incident on the image sensor 13 at a relatively large angle are excluded from integration processing by multiplying them by a weighting coefficient of “0” (zero).
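The zero-weighting idea above can be sketched as follows. This is a minimal illustration, assuming directional samples are indexed by their offset from the lens centre; the data layout and names are illustrative, not the patent's implementation.

```python
# Sketch of depth-of-field adjustment by weighted integration. Samples
# with a large angle of incidence (here: a directional index far from the
# lens centre) receive weight 0, so only near-axis light contributes --
# mimicking a stopped-down aperture.

def integrate_pixel(samples, max_radius):
    """samples: list of ((u, v), intensity) for one micro lens, with (u, v)
    directional offsets measured from the lens centre. Returns the weighted
    sum, keeping only samples within max_radius of the optical axis."""
    total = 0.0
    for (u, v), intensity in samples:
        weight = 1.0 if (u * u + v * v) <= max_radius * max_radius else 0.0
        total += weight * intensity
    return total

samples = [((0, 0), 4.0), ((1, 0), 2.0), ((2, 2), 8.0)]
# Wide aperture (radius 3): all three samples counted.
# Stopped down (radius 1): only the two near-axis samples counted.
```

Shrinking `max_radius` further corresponds to the pan-focus case mentioned below, where even fewer central samples are used.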
- FIG. 6 is a schematic diagram illustrating the relationship between the difference in angle of incidence on a micro lens and the recording area of the image sensor 13 .
- FIG. 7 is a schematic diagram illustrating depth of field adjustment processing. As shown in FIG. 6, a light beam whose angle of incidence on the image sensor 13 is relatively small arrives at a more central area of the recording region. Thus, as shown in FIG. 7, integration processing is performed only by using pixel data acquired at the central portion (hatched portion in FIG. 7) of the area. With the aid of such processing, an image with a deep depth of field can be expressed as if the aperture diaphragm provided in a typical imaging apparatus were stopped down.
- a pan focus image with a deeper depth of field can also be generated by further reducing pixel data for use at the central portion of the area.
- the depth of field of an image after shooting can be adjusted based on the actually acquired LF data (light beam information).
- the system provides a LF image editing function to a user and is implemented within an advanced display apparatus or a PC (personal computer) including a display or the like.
- a CPU (Central Processing Unit)
- a system for performing simple editing after shooting may be provided in an imaging apparatus which is capable of capturing a LF image.
- the following system is configured within the LF camera.
- an apparatus to which the present invention is applied is not limited provided that it is capable of performing development/editing processing for a LF image.
- the present invention is applicable to, for example, an apparatus that acquires light beam information and directional information included in LF image data (LF data) and then performs rearrangement and reconstruction of the light beam to thereby be able to generate image data of any focus position or any viewpoint.
- a trimming target specifying unit (hereinafter referred to as “target specifying unit”) 101 is an IF (interface) unit that is used by a user for specifying a trimming area in an image with respect to image data.
- the target specifying unit 101 is a typical input device such as a pointing device such as a mouse or a touch panel.
- a trimming target is specified by, for example, the following methods:
- a trimming area determining unit (hereinafter referred to as “area determining unit”) 102 determines the area to be trimmed based on the content specified by the target specifying unit 101 and the LF image data stored in a storage unit 106. For example, when point coordinates specifying an object are given by the target specifying unit 101, the area determining unit 102 analyzes the LF image stored in the storage unit 106 and determines an object area including the specified points as the trimming area. Alternatively, when the coordinates of the start point and the end point of a rectangular frame are specified, the area determining unit 102 determines the rectangular area defined by those two points as the trimming area within the image. When an arbitrary number of point coordinates specifying a polygon is given, the area determining unit 102 determines the area enclosed by those points as the trimming area.
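Of the three specification methods, the rectangular case is the most mechanical and can be sketched directly. The helper below is an illustrative assumption (the patent does not specify how the two points are normalized):

```python
# Sketch of the rectangular-frame case handled by the area determining
# unit: normalize a start/end point pair into a rectangle regardless of
# the direction in which the user dragged.

def rect_from_points(start, end):
    """Return (left, top, right, bottom) from two corner points."""
    (x0, y0), (x1, y1) = start, end
    return (min(x0, x1), min(y0, y1), max(x0, x1), max(y0, y1))

# Dragging from bottom-right to top-left still yields a valid rectangle.
```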
- a trimming area analyzing unit (hereinafter referred to as “area analyzing unit”) 103 acquires information about a trimming area from the area determining unit 102 and then analyzes the content of image data in the trimming area.
- the depth map is map data indicating the position in the depth direction of each pixel of an image expressed in two dimensions in the vertical and horizontal directions.
- a histogram-type depth map is used, which indicates, for each position in the depth direction, the counted number (frequency) of pixels present there regardless of their vertical and horizontal coordinates.
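The histogram-type depth map described above can be sketched as a simple counting step. This is an illustrative assumption about the data layout (a 2-D grid of per-pixel depth indices), not the patent's internal representation:

```python
# Sketch of the histogram-type depth map: count pixels at each depth
# value regardless of their (x, y) position.

from collections import Counter

def depth_histogram(depth_map):
    """Return {depth_value: pixel_count} for a 2-D depth map
    (a list of rows of per-pixel depth indices)."""
    return Counter(d for row in depth_map for d in row)

depth_map = [
    [0, 0, 1],
    [2, 1, 1],
]
# Depth 1 occurs most often here, so it would be the mode depth position.
```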
- FIG. 9 illustrates a depth map according to the present embodiment.
- a focus range 201, which is a front-side area as viewed from the photographer's side, is set as the default focus range.
- a trimming target area 202 is a far side range which is out of focus.
- the area analyzing unit 103 analyzes information about the trimming area acquired from the area determining unit 102 . In this manner, it is determined that the trimming target is present in the trimming target area 202 on the depth map.
- a focus range determining unit 104 determines a focus range based on the result of analysis from the area analyzing unit 103 .
- the focus range 201 prior to trim editing is a default focus range appended upon shooting.
- the focus range may also be referred to as a “depth range” over which the object is brought into focus.
- the focus range is not limited to a depth range having a predetermined width but may also be referred to as a focus position (depth position at which the object is brought into focus).
- a focus range appending unit 105 appends the range determined by the focus range determining unit 104 as a focus range corresponding to the image obtained as a result of trimming to a LF image obtained as a result of trimming and stores the resulting LF image in the storage unit 106 .
- information about the determined focus range is stored in the same form as the focus range prior to trim editing; that is, the stored focus range information is changed to the focus range after trim editing. Thus, information about the default focus position is overwritten.
- information indicating the focus range is appended as metadata to, for example, the LF image data after trim editing.
- metadata allows the LF image after trim editing to be displayed in an appropriate focus range upon display of the LF image.
- the focus range is expressed as two positions (two points) in the depth direction on the depth map. More specifically, when the two points are defined as point A and point B, the range from the position of point A to the position of point B in the depth direction is treated as the focus range.
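The two-point focus range appended as metadata might be encoded as follows. The metadata key and JSON encoding are assumptions for illustration; the patent does not specify a serialization format:

```python
# Sketch of the focus-range metadata: two positions in the depth
# direction (point A and point B) appended to the edited LF image.

import json

def focus_range_metadata(point_a: float, point_b: float) -> str:
    """Encode a focus range as metadata, normalizing so near <= far."""
    near, far = sorted((point_a, point_b))
    return json.dumps({"focus_range": {"near": near, "far": far}})
```

A display application would read this metadata back to render the LF image in the appropriate focus range, as described above.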
- information about the focus range after trim editing may also be additionally recorded in the LF image data prior to trim editing.
- the entire data prior to trim editing is stored for the LF image itself.
- the image can be displayed in the focus range, which has been changed appropriately in accordance with the trim editing.
- information indicating the default focus position is not overwritten, and thus, the default focus position upon shooting can be used as it is upon display of an image prior to trim editing.
- the storage unit 106 stores LF image data prior to trim editing, LF image data after trim editing obtained from the area determining unit 102 , focus range data acquired from the focus range appending unit 105 , and associated data such as a thumbnail image.
- a display unit 107 displays a LF image prior to trim editing and after trim editing.
- the display unit 107 presents a UI (user interface) screen or the like used by a user upon specifying a trimming target using the target specifying unit 101 to the user.
- a television apparatus, a PC display, a commercial display, a tablet device, a smartphone, or the like may be used as the display apparatus.
- image data for display or the like is output from an output unit instead of the display unit 107 to an external device.
- control is performed such that the focus range is changed appropriately depending on the result of trim editing.
- the focus range with a high convenience of use for the user is automatically set.
- the area analyzing unit 103 analyzes the center coordinate of a trimming area.
- the area analyzing unit 103 also analyzes the position on the depth map having the largest number of pixels in the trimming area (hereinafter referred to as the “mode depth position”), that is, the position with the highest frequency when the number of pixels is taken as the frequency.
- the area analyzing unit 103 not only performs processing described in the first embodiment but also sends information about any one of or both the center coordinate of the trimming area and the mode depth position of the trimming area to the focus range determining unit 104 .
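The two analysis results named above (centre coordinate and mode depth position) can be sketched together. The rectangle convention and names are illustrative assumptions:

```python
# Sketch of the second embodiment's analysis: the centre coordinate of
# the trimming area and the mode depth position (the depth value with
# the highest pixel count inside the area).

from collections import Counter

def analyze_trimming_area(depth_map, rect):
    """rect = (left, top, right, bottom), inclusive. Returns the centre
    coordinate and the mode depth position within the rectangle."""
    left, top, right, bottom = rect
    center = ((left + right) // 2, (top + bottom) // 2)
    counts = Counter(
        depth_map[y][x]
        for y in range(top, bottom + 1)
        for x in range(left, right + 1)
    )
    mode_depth = counts.most_common(1)[0][0]
    return center, mode_depth

depth_map = [
    [0, 0, 3],
    [0, 2, 2],
    [1, 2, 2],
]
```

Either result (or both) would then be passed to the focus range determining unit 104.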
- if the focus range determining unit 104 simply sets the focus range to the entire trimming area, the entire area after trim editing may not be able to be brought into focus, depending on the LF image data prior to trim editing.
- the focus range which can be adjusted or changed after shooting is finite and depends on the photographing conditions, such as the optical properties of the micro lens group, the optical properties of the main lens, and photographing parameters such as the aperture value.
- under some settings, the focus range is optically narrow as compared with settings in which the aperture is stopped down on the wide-angle side.
- the focus range determining unit 104 of the present embodiment determines the focus range after trim editing based on either the center coordinate of the trimming area or the mode depth position of the trimming area. First, a description will be given of processing for determining a focus range based on the center coordinate of the trimming area. In this case, the settable range in the far and near directions about the depth position of the pixel at the center coordinate of the trimming area is set as the focus range after trim editing.
- FIG. 10 shows the case where the focus range cannot be set to the entire area excluding a trimming non-target area 301 as a result of trim editing for a LF image prior to trim editing.
- the settable range in the far and near directions about the mode depth position of the trimming area is set as the focus range after trim editing.
- FIG. 11 shows the case where the focus range cannot be set to the entire area excluding a trimming non-target area 401 as a result of trim editing for a LF image prior to trim editing.
- a focus range 403 is set as the focusable range in the far and near directions about a mode depth position 402 of the trimming area. Even when the object the user wants to bring into focus is not at the central portion of the trimmed area, such processing brings the position intended by the trim editing into focus. Note that the user can select whether the focus range is determined from the center coordinate of the trimming area or from the mode depth position of the trimming area.
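The windowing step common to both variants (FIGS. 10 and 11) can be sketched as centring a settable window of finite width on a reference depth. The width limit and depth bounds are hypothetical parameters standing in for the photographing-condition constraints described above:

```python
# Sketch of determining the post-trim focus range when the full trimming
# area cannot be covered: centre a window of finite settable width on a
# reference depth (the centre-pixel depth or the mode depth position),
# extending in the near and far directions, clipped to physical limits.

def focus_range_about(reference_depth, settable_width,
                      depth_min=0.0, depth_max=100.0):
    """Return a (near, far) focus range of at most settable_width centred
    on reference_depth, clipped to the settable depth interval."""
    half = settable_width / 2.0
    near = max(depth_min, reference_depth - half)
    far = min(depth_max, reference_depth + half)
    return near, far
```

With the centre-coordinate variant, `reference_depth` is the depth of the pixel at the centre of the trimming area; with the mode variant, it is the mode depth position.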
- the area analyzing unit 103 analyzes an object (e.g., an image of a face of a person, a face of an animal such as a dog or cat, or the like) in the trimming area.
- the area analyzing unit 103 analyzes an area in which each object is present.
- the area analyzing unit 103 not only performs processing described in the first embodiment but also sends information about an area in which each object is present, which is obtained as a result of the analysis described above, to the focus range determining unit 104 .
- if the focus range determining unit 104 simply sets the focus range to the entire trimming area, the entire area after trim editing may not be able to be brought into focus, depending on the LF image data prior to trim editing, as described in the second embodiment.
- the focus range determining unit 104 of the present embodiment determines the focus range after trim editing based on the number of objects present in the trimming area. When the trimming area includes one object, the focus range is determined about the depth range over which that object is present. On the other hand, when the trimming area includes a plurality of objects, the focus range is determined such that all the objects are included therein.
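The one-object/many-object rule above reduces to taking the span of the detected objects' depth ranges. A minimal sketch, assuming each object is reported as a (near, far) depth range:

```python
# Sketch of the third embodiment's rule: one object -> focus on its depth
# range; several objects -> a range spanning all of them.

def focus_range_for_objects(object_ranges):
    """object_ranges: list of (near, far) depth ranges, one per detected
    object in the trimming area. Returns the focus range covering them,
    or None if no objects were detected."""
    if not object_ranges:
        return None
    nears, fars = zip(*object_ranges)
    return (min(nears), max(fars))
```

In the FIG. 12 example, the spanning range would cover both the triangle's range 508 and the circle's range 507.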
- FIG. 12 shows the case where a trimming area 502 is selected for trim editing with respect to the entire image 501 prior to trim editing.
- the aforementioned depth map with respect to the entire image prior to trim editing is shown in FIG. 12 .
- a trimming target area 505 is present on the front side with respect to the objects as viewed from the photographer's side, whereas a focus range 506 prior to trim editing is outside the trimming target area 505 .
- the area analyzing unit 103 detects the area for the triangle 503 and the circle 504 which are two objects included in the trimming area 502 .
- the focus range determining unit 104 determines a focus range 510 such that a range 508 on the depth map in which a triangle 513 is present and a range 507 on the depth map in which a circle 514 is present after trim editing are included therein.
- the focus range 506 is changed to the focus range 510 after trim editing.
- the triangle 503 and the circle 504 which are blurry prior to trim editing are respectively displayed as the triangle 513 and the circle 514 which are brought into focus in an area 512 .
- the focus range determining unit 104 defines the priority of the objects in order of size.
- the focus range is determined such that an object having high priority is preferentially included therein.
- the focus range determining unit 104 also gives high priority to an object present at the center of the trimming area. The user can select whether priority is defined by the size of an object or by its proximity to the center coordinate.
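The two user-selectable priority criteria can be sketched as alternative sort keys. The object representation and names are illustrative assumptions:

```python
# Sketch of the priority rule: when not all objects fit in the settable
# focus range, order them by size (area) or by distance from the trimming
# area's centre, according to the user-selected criterion.

def prioritize(objects, by="size", center=(0, 0)):
    """objects: list of dicts with 'area' and 'position' (x, y) keys.
    Returns the list sorted with the highest-priority object first."""
    if by == "size":
        return sorted(objects, key=lambda o: o["area"], reverse=True)
    # "center": closer to the trimming area's centre = higher priority
    cx, cy = center
    return sorted(
        objects,
        key=lambda o: (o["position"][0] - cx) ** 2
                      + (o["position"][1] - cy) ** 2,
    )

objs = [
    {"name": "triangle", "area": 40, "position": (9, 9)},
    {"name": "circle", "area": 15, "position": (1, 1)},
]
```

The focus range determining unit would then include objects in priority order until the settable focus range is exhausted.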
- a description will be given of a fourth embodiment of the present invention with reference to FIG. 13 . While, in the first to third embodiments, a description has been given by taking an example of trimming as editing processing, the present invention is also applicable to the case where enlargement processing is performed. In other words, enlargement processing after shooting may be regarded as processing for setting a range which is desired to be displayed on an enlarged scale as a trimming area and for removing the remaining area.
- An enlargement operating unit 601 shown in FIG. 13 is an IF (interface) unit that is used by a user for directing an enlargement operation with respect to LF image data.
- the enlargement operating unit 601 is a typical input device such as a pointing device such as a mouse or a touch panel.
- the center coordinate of an area desired to be enlarged or a rectangular frame is typically used for specification.
- a cross-shaped operating member may also be used for specifying an enlargement area in the vertical and horizontal directions. More specifically, assume a state in which a digital camera or the like displays an area enlarged to the original image size during preview display or simple editing. In this state, processing for changing the display area in the vertical and horizontal directions in accordance with the user's operating instructions is typically performed while the image remains enlarged.
- An enlargement area determining unit 602 determines an area to be enlarged based on the content specified by the enlargement operating unit 601 and an enlargement target (LF image data) stored in the storage unit 106 .
- an enlargement area is determined depending on a predetermined fixed enlargement ratio.
- the enlargement area determining unit 602 determines the area prior to enlargement as the enlargement area depending on the enlargement ratio.
- a rectangular frame is specified as an enlargement area, the rectangular area is determined as the enlargement area.
- the enlargement area is shifted by a predetermined amount of change defined for each press of the cross-shaped button in a given direction. For example, when the amount of change is a predetermined number of pixels (e.g., 50 pixels) and the enlargement operating unit 601 is operated in the right direction, the area is shifted 50 pixels to the right from the enlargement area defined at that time.
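The cross-key navigation above can be sketched as shifting the current enlargement rectangle by a fixed step, clamped to the image bounds. The step size follows the 50-pixel example in the text; the clamping behaviour and names are illustrative assumptions:

```python
# Sketch of cross-key navigation of the enlargement area: shift the
# current area by a fixed step per key press, clamped so it never leaves
# the original image.

STEP = 50  # pixels per key press, per the example above

def shift_area(rect, direction, image_w, image_h):
    """rect = (left, top, right, bottom), inclusive pixel coordinates.
    direction is one of 'left', 'right', 'up', 'down'."""
    dx = {"left": -STEP, "right": STEP}.get(direction, 0)
    dy = {"up": -STEP, "down": STEP}.get(direction, 0)
    left, top, right, bottom = rect
    # Clamp the shift so the area stays inside the image bounds.
    dx = max(-left, min(dx, image_w - 1 - right))
    dy = max(-top, min(dy, image_h - 1 - bottom))
    return (left + dx, top + dy, right + dx, bottom + dy)
```

Each updated rectangle would then be handed to the enlargement area analyzing unit 603, which re-analyzes the depth map for the new area.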
- An enlargement area analyzing unit 603 acquires information about an enlargement area sent from the enlargement area determining unit 602 , and then analyzes the content of LF image data of the enlargement area. In the present embodiment, the enlargement area on the depth map is analyzed.
- FIG. 14 shows an enlargement case and its depth map according to the present embodiment.
- processing is executed for a LF image 701 prior to enlargement by taking a target area in which a circle 703 is displayed as an enlargement area 702 .
- a far side area of a LF image prior to enlargement is in a focus range 705 as viewed from the photographer's side.
- a range 704 corresponding to the enlargement area 702 is present in a portion which is out of focus on the front side of the focus range 705 .
- the enlargement area analyzing unit 603 analyzes information about the enlargement area acquired from the enlargement area determining unit 602 . In this manner, the enlargement target is determined to be present in the range 704 on the depth map.
- a focus range determining unit 604 determines a focus range based on the result of analysis from the enlargement area analyzing unit 603. A description will now be given, with reference to FIG. 14, of how the focus range changes after enlargement processing relative to the LF image prior to enlargement processing.
- the focus range 705 prior to enlargement processing is changed to the position of the range 704 on the depth map in which the enlargement area 702 is present, and thus, becomes a focus range 708 . Consequently, the circle 703 which is blurry prior to enlargement processing is displayed as a circle 707 which is brought into focus after enlargement processing.
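A minimal sketch of this refocusing decision: re-center the focus range on the depth range where the enlargement target lies (range 704), producing the post-enlargement focus range (range 708). The fixed depth-of-field parameter and all names are assumptions for illustration, not the patent's method.

```python
# Hypothetical sketch of the focus range determining unit 604: move the
# focus range so it is centered on the enlargement target's depth range.

def determine_focus_range(target_depth_range, depth_of_field):
    """target_depth_range: (near, far) depths of the enlargement target
    (e.g., range 704 from the depth-map analysis).
    depth_of_field: total depth one refocused LF rendering keeps sharp
    (assumed fixed here).
    Returns the new (near, far) focus range centered on the target."""
    near, far = target_depth_range
    center = (near + far) / 2.0
    half = depth_of_field / 2.0
    return (center - half, center + half)
```

With a target at depths 2.0 to 3.0 and a depth of field of 4.0, the new focus range becomes (0.5, 4.5), bringing the previously blurry subject (circle 703) into focus, as with circle 707.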
- with the above processing, focus is automatically adjusted to the enlargement area when the enlargement operation is performed, which improves convenience for the user.
- Embodiments of the present invention can also be realized by a computer of a system or apparatus that reads out and executes computer executable instructions recorded on a storage medium (e.g., non-transitory computer-readable storage medium) to perform the functions of one or more of the above-described embodiment(s) of the present invention, and by a method performed by the computer of the system or apparatus by, for example, reading out and executing the computer executable instructions from the storage medium to perform the functions of one or more of the above-described embodiment(s).
- the computer may comprise one or more of a central processing unit (CPU), micro processing unit (MPU), or other circuitry, and may include a network of separate computers or separate computer processors.
- the computer executable instructions may be provided to the computer, for example, from a network or the storage medium.
- the storage medium may include, for example, one or more of a hard disk, a random-access memory (RAM), a read only memory (ROM), a storage of distributed computing systems, an optical disk (such as a compact disc (CD), digital versatile disc (DVD), or Blu-ray Disc (BD)™), a flash memory device, a memory card, and the like.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Computing Systems (AREA)
- Studio Devices (AREA)
- Image Processing (AREA)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US15/444,676 US10109036B2 (en) | 2013-08-21 | 2017-02-28 | Image processing apparatus, control method for same, and program that performs image processing for image data having a focus state that is changeable |
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2013170912A JP6245892B2 (ja) | 2013-08-21 | 2013-08-21 | Image processing apparatus, control method therefor, and program |
| JP2013-170912 | 2013-08-21 |
Related Child Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US15/444,676 Division US10109036B2 (en) | 2013-08-21 | 2017-02-28 | Image processing apparatus, control method for same, and program that performs image processing for image data having a focus state that is changeable |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| US20150054982A1 US20150054982A1 (en) | 2015-02-26 |
| US9621797B2 true US9621797B2 (en) | 2017-04-11 |
Family
ID=52480028
Family Applications (2)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US14/457,495 Expired - Fee Related US9621797B2 (en) | 2013-08-21 | 2014-08-12 | Image processing apparatus, control method for same, and program |
| US15/444,676 Expired - Fee Related US10109036B2 (en) | 2013-08-21 | 2017-02-28 | Image processing apparatus, control method for same, and program that performs image processing for image data having a focus state that is changeable |
Family Applications After (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US15/444,676 Expired - Fee Related US10109036B2 (en) | 2013-08-21 | 2017-02-28 | Image processing apparatus, control method for same, and program that performs image processing for image data having a focus state that is changeable |
Country Status (2)
| Country | Link |
|---|---|
| US (2) | US9621797B2 (en) |
| JP (1) | JP6245892B2 (en) |
Families Citing this family (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20150104101A1 (en) * | 2013-10-14 | 2015-04-16 | Apple Inc. | Method and ui for z depth image segmentation |
| US9870058B2 (en) * | 2014-04-23 | 2018-01-16 | Sony Corporation | Control of a real world object user interface |
| CN107545586B (zh) * | 2017-08-04 | 2020-02-28 | Institute of Automation, Chinese Academy of Sciences | Depth acquisition method and system based on local regions of light-field epipolar-plane images |
| JP7173841B2 (ja) | 2018-11-14 | 2022-11-16 | Canon Inc. | Image processing apparatus, control method therefor, and program |
| JP7198055B2 (ja) * | 2018-11-16 | 2022-12-28 | Canon Inc. | Image processing apparatus, control method therefor, and program |
| US11776093B2 (en) * | 2019-07-16 | 2023-10-03 | University Of Florida Research Foundation, Incorporated | Automatic sharpness adjustment for imaging modalities |
Citations (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JPH1079882A (ja) | 1996-09-02 | 1998-03-24 | Canon Inc | Image input device |
| JP2001208961A (ja) | 2000-01-26 | 2001-08-03 | Nikon Corp | Camera |
| US20020057847A1 (en) * | 2000-11-15 | 2002-05-16 | Nikon Corporation | Image-capturing device |
| US7046290B2 (en) | 2000-01-26 | 2006-05-16 | Nikon Corporation | Multi-point auto-focus digital camera including electronic zoom |
| US20100265385A1 (en) * | 2009-04-18 | 2010-10-21 | Knight Timothy J | Light Field Camera Image, File and Configuration Data, and Methods of Using, Storing and Communicating Same |
| US20110273471A1 (en) * | 2009-01-19 | 2011-11-10 | Sony Corporation | Display control device, display control method and program |
| US20110305446A1 (en) * | 2010-06-15 | 2011-12-15 | Kei Itoh | Imaging apparatus, focus position detecting method, and computer program product |
Family Cites Families (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP5190882B2 (ja) * | 2008-11-07 | 2013-04-24 | Fujifilm Corporation | Compound-eye imaging apparatus, control method therefor, and program |
| KR101608970B1 (ko) | 2009-11-27 | 2016-04-05 | Samsung Electronics Co., Ltd. | Image processing apparatus and method using light field data |
| WO2012001947A1 (ja) * | 2010-06-28 | 2012-01-05 | Nikon Corporation | Imaging device, image processing device, and recording medium storing image processing program |
| JP5762142B2 (ja) * | 2011-05-31 | 2015-08-12 | Canon Inc. | Imaging apparatus, image processing apparatus, and method therefor |
| JP5947548B2 (ja) * | 2012-01-13 | 2016-07-06 | Canon Inc. | Imaging apparatus, control method therefor, image processing apparatus, image generation method, and program |
| JP2013153375A (ja) * | 2012-01-26 | 2013-08-08 | Sony Corp | Image processing apparatus, image processing method, and recording medium |
| US9237263B2 (en) * | 2012-10-05 | 2016-01-12 | Vidinoti Sa | Annotation method and apparatus |
- 2013-08-21: JP application JP2013170912A, granted as JP6245892B2 (not active; Expired - Fee Related)
- 2014-08-12: US application US14/457,495, granted as US9621797B2 (not active; Expired - Fee Related)
- 2017-02-28: US application US15/444,676, granted as US10109036B2 (not active; Expired - Fee Related)
Patent Citations (9)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JPH1079882A (ja) | 1996-09-02 | 1998-03-24 | Canon Inc | Image input device |
| JP2001208961A (ja) | 2000-01-26 | 2001-08-03 | Nikon Corp | Camera |
| US7046290B2 (en) | 2000-01-26 | 2006-05-16 | Nikon Corporation | Multi-point auto-focus digital camera including electronic zoom |
| US20060139478A1 (en) | 2000-01-26 | 2006-06-29 | Nikon Corporation | Camera |
| US8558941B2 (en) | 2000-01-26 | 2013-10-15 | Nikon Corporation | Digital camera having trimming and focusing ability |
| US20020057847A1 (en) * | 2000-11-15 | 2002-05-16 | Nikon Corporation | Image-capturing device |
| US20110273471A1 (en) * | 2009-01-19 | 2011-11-10 | Sony Corporation | Display control device, display control method and program |
| US20100265385A1 (en) * | 2009-04-18 | 2010-10-21 | Knight Timothy J | Light Field Camera Image, File and Configuration Data, and Methods of Using, Storing and Communicating Same |
| US20110305446A1 (en) * | 2010-06-15 | 2011-12-15 | Kei Itoh | Imaging apparatus, focus position detecting method, and computer program product |
Also Published As
| Publication number | Publication date |
|---|---|
| US20170169542A1 (en) | 2017-06-15 |
| US20150054982A1 (en) | 2015-02-26 |
| US10109036B2 (en) | 2018-10-23 |
| JP6245892B2 (ja) | 2017-12-13 |
| JP2015041169A (ja) | 2015-03-02 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US11568517B2 (en) | Electronic apparatus, control method, and non-transitory computer readable medium | |
| US10109036B2 (en) | Image processing apparatus, control method for same, and program that performs image processing for image data having a focus state that is changeable | |
| CN104038690B (zh) | Image processing device, image capturing device, and image processing method | |
| JP6548367B2 (ja) | Image processing apparatus, imaging apparatus, image processing method, and program | |
| US9076214B2 (en) | Image acquisition apparatus and image processing apparatus using selected in-focus image data | |
| US9380281B2 (en) | Image processing apparatus, control method for same, and program | |
| CN106233329A (zh) | Generation and use of 3D Radon images | |
| US9332195B2 (en) | Image processing apparatus, imaging apparatus, and image processing method | |
| JP2016039613A (ja) | Image processing apparatus, control method therefor, program, and storage medium | |
| US9955111B2 (en) | Electronic apparatus and display control method | |
| US10356381B2 (en) | Image output apparatus, control method, image pickup apparatus, and storage medium | |
| KR20250130568A (ko) | Electronic device and method for displaying an image on the electronic device | |
| US12470829B2 (en) | Processing apparatus, electronic apparatus, processing method, and program | |
| US9319579B2 (en) | Image processing apparatus, control method, and program for the same with focus state specification and deletion confirmation of image data | |
| US9936121B2 (en) | Image processing device, control method of an image processing device, and storage medium that stores a program to execute a control method of an image processing device | |
| WO2018235382A1 (ja) | Imaging device, control method for imaging device, and control program for imaging device | |
| JP2014086899A (ja) | Image processing apparatus, image processing method, and program | |
| JP5743769B2 (ja) | Image processing apparatus and image processing method | |
| JP6294703B2 (ja) | Image processing apparatus, image processing method, and program | |
| JP2015198340A (ja) | Image processing apparatus, control method therefor, and program | |
| WO2022004302A1 (ja) | Image processing device, imaging device, image processing method, and program | |
| JP6120535B2 (ja) | Information processing apparatus, information processing method, and program | |
| JP6210717B2 (ja) | Image reproduction apparatus and method | |
| JP6584091B2 (ja) | Electronic apparatus and display control method |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| AS | Assignment |
Owner name: CANON KABUSHIKI KAISHA, JAPAN Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:OTA, YUYA;REEL/FRAME:034955/0578 Effective date: 20140728 |
|
| STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
| FEPP | Fee payment procedure |
Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
| LAPS | Lapse for failure to pay maintenance fees |
Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
| STCH | Information on status: patent discontinuation |
Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362 |
|
| FP | Lapsed due to failure to pay maintenance fee |
Effective date: 20210411 |