WO2024024375A1 - Imaging device, imaging method, and imaging program - Google Patents

Info

Publication number
WO2024024375A1
Authority
WO
WIPO (PCT)
Prior art keywords
subject
optical system
imaging
imaging device
value
Prior art date
Application number
PCT/JP2023/023851
Other languages
French (fr)
Japanese (ja)
Inventor
Yuki Yoshidome (吉留 勇起)
Original Assignee
Sony Group Corporation (ソニーグループ株式会社)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Group Corporation
Publication of WO2024024375A1

Classifications

    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B7/00Mountings, adjusting means, or light-tight connections, for optical elements
    • G02B7/28Systems for automatic generation of focusing signals
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B7/00Mountings, adjusting means, or light-tight connections, for optical elements
    • G02B7/28Systems for automatic generation of focusing signals
    • G02B7/34Systems for automatic generation of focusing signals using different areas in a pupil plane
    • GPHYSICS
    • G03PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03BAPPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B13/00Viewfinders; Focusing aids for cameras; Means for focusing for cameras; Autofocus systems for cameras
    • G03B13/32Means for focusing
    • G03B13/34Power focusing
    • G03B13/36Autofocus systems
    • GPHYSICS
    • G03PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03BAPPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B17/00Details of cameras or camera bodies; Accessories therefor
    • G03B17/18Signals indicating condition of a camera member or suitability of light
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/67Focus control based on electronic image sensor signals

Definitions

  • the present disclosure relates to an imaging device, an imaging method, and an imaging program.
  • As in Patent Document 1, various techniques regarding focus position adjustment when capturing images of a plurality of subjects have been proposed.
  • One aspect of the present disclosure enables focus position adjustment suitable for imaging multiple subjects.
  • An imaging device includes a processing unit that controls an optical system for imaging. The control of the optical system by the processing unit includes adjusting the focusing position of the optical system so that the defocus values of the closest subject and the farthest subject among a plurality of subjects approach an intermediate value of the respective defocus values.
  • An imaging method includes adjusting the focusing position of an optical system for imaging so that the defocus values of the closest subject and the farthest subject among a plurality of subjects approach an intermediate value of the respective defocus values.
  • An imaging program causes a processor to execute a process of adjusting the focusing position of an optical system for imaging so that the defocus values of the closest subject and the farthest subject among a plurality of subjects approach an intermediate value of the respective defocus values.
  • FIG. 1 is a diagram illustrating an example of a schematic configuration of an imaging device according to an embodiment.
  • FIG. 2 is a diagram showing an example of a schematic configuration of an image sensor.
  • FIG. 3 is a diagram illustrating an example of functional blocks of a processing unit.
  • FIGS. 4 to 7 are diagrams illustrating examples of subject selection.
  • FIG. 8 is a diagram showing an example of the focus position and F value after adjustment.
  • FIG. 9 is a diagram for explaining calculation of the F value.
  • FIG. 10 is a flowchart illustrating an example of processing (imaging method) executed by the imaging device.
  • FIG. 11 is a flowchart illustrating another example of processing (imaging method) executed by the imaging device.
  • It is known that the optimal focus position at which multiple subjects can be imaged with the minimum F value may lie in front of the intermediate position of the subjects (on the imaging device side). Techniques that simply adjust the focus position to approximately the middle position of the subjects therefore deviate from the optimal setting, especially when the front-rear positions of the subjects are far apart. On the other hand, attempting to optimize both the focus position and the F value as free parameters increases the amount of calculation, and it becomes difficult to maintain optimal parameters, especially when photographing a fast-moving subject or when shooting a video.
  • In the present embodiment, the focus position is adjusted, and then the minimum F value at which the depth of field includes the plurality of selected subjects is obtained. A subject can be selected, or conversely excluded from selection, through a user operation. It is also possible to select a blur priority mode in which the F value is further reduced while the in-focus position remains the same.
  • FIG. 1 is a diagram showing an example of a schematic configuration of an imaging device according to an embodiment.
  • An object to be imaged by the imaging device 1 is referred to as a subject 9 and illustrated.
  • the subject 9 may be any object that can be imaged by the imaging device 1.
  • The subject 9 may be a part of an object; in that case, the plurality of subjects 9 may be different parts of the same object.
  • the subject 9 may be a moving object; examples of such a subject 9 are a person, an animal, a vehicle, etc.
  • Among the plurality of subjects, the subject 9 closest to the imaging device 1 and the subject 9 farthest from it are referred to as the closest subject 9 near and the farthest subject 9 far, respectively.
  • the imaging device 1 includes an optical system 2, an image sensor 3, a drive section 4, a display operation section 5, a storage section 6, and a processing section 7.
  • the optical system 2 is an imaging optical system provided for the image sensor 3 and guides light from the subject 9 to the image sensor 3.
  • the optical system 2 is configured to be able to adjust the focus position and F value.
  • the optical system 2 is configured to include a lens mechanism for adjusting the focal position, an aperture diaphragm mechanism for adjusting the F value, and the like.
  • the image sensor 3 images the subject 9 via the optical system 2.
  • An example of the image sensor 3 is a solid-state image sensor such as a CMOS image sensor.
  • An image within the field of view of the imaging device 1 is captured by the image sensor 3.
  • the image sensor 3 is configured to obtain the DF value of each of the plurality of subjects 9.
  • the DF value is a value indicating the amount of focus shift (defocus amount) when the subject 9 is out of focus.
  • the image sensor 3 will be explained with reference to FIG. 2 as well.
  • FIG. 2 is a diagram showing an example of a schematic configuration of an image sensor.
  • The image sensor 3 includes a plurality of pixels 31 arranged in an array so as to define an imaging surface 30.
  • one pixel 31 outputs a pixel signal according to the amount of received light of any one of red light, green light, and blue light.
  • The plurality of pixels 31 include a plurality of imaging pixels 311 and a plurality of phase difference detection pixels 312. An image is generated based on at least the pixel signals from the imaging pixels 311. Although not an essential configuration, the illustrated phase difference detection pixel 312 differs from the imaging pixel 311 in that half of its light receiving area is shielded from light. Pairs of phase difference detection pixels 312, shielded on mutually opposite sides in the array direction, are arranged in each portion of the image sensor 3. Based on the amount of light received by each pixel of such a pair, that is, on the pixel signal levels, the DF value of the subject 9 corresponding to that portion can be calculated.
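  As an illustration of the principle just described, the relative shift between the signal profiles of a pair of phase difference detection pixels can be turned into a DF value. The sketch below is a hypothetical simplification: the function name, the sum-of-squared-differences matching, and the `gain` factor converting shift to defocus are all assumptions, not the patent's method.

```python
import numpy as np

def estimate_defocus(left, right, max_shift=8, gain=1.0):
    # Find the relative shift between the two phase-difference signal
    # profiles that minimizes the sum of squared differences (SSD).
    # The shift is proportional to the defocus (DF) amount; its sign
    # distinguishes front focus from back focus.
    n = len(left)
    best_shift, best_err = 0, float("inf")
    for s in range(-max_shift, max_shift + 1):
        a = left[max_shift:n - max_shift]
        b = right[max_shift + s:n - max_shift + s]
        err = float(np.sum((a - b) ** 2))
        if err < best_err:
            best_shift, best_err = s, err
    return gain * best_shift
```

  With `gain=1.0`, two identical profiles offset by three pixels yield a DF value of 3.0.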
  • the image may be an image of one frame of a video, and the image may be read as a video as appropriate as long as there is no contradiction. Imaging may be read as photographing as appropriate.
  • the drive section 4 drives the optical system 2.
  • the drive unit 4 moves the lenses included in the optical system 2 to adjust the focus position of the optical system 2, and moves the aperture to adjust the F value of the optical system 2.
  • the drive unit 4 includes, for example, an actuator and the like.
  • the display operation unit 5 displays information handled by the imaging device 1 and receives operations on the imaging device 1. For example, the display operation unit 5 displays a captured image. The display operation unit 5 also displays an operation screen and receives user operations.
  • the display operation unit 5 includes, for example, a touch panel display.
  • the storage unit 6 stores information used by the imaging device 1.
  • An example of the information stored in the storage unit 6 is an imaging program 61.
  • the imaging program 61 causes the processing unit 7 to execute various processes in the imaging device 1 .
  • the imaging program 61 can be distributed collectively or separately via a network such as the Internet. Further, the imaging program 61 can be recorded collectively or separately on various computer-readable recording media, and can be executed by being read from the recording medium by the computer.
  • the processing unit 7 is configured to include, for example, one or more processors.
  • the imaging program 61 stored in the storage unit 6 is read out and expanded into the memory, so that the processing of the processing unit 7 according to the imaging program 61 is executed.
  • the processing unit 7 directly or indirectly controls other elements in the imaging device 1, including the optical system 2. The processing unit 7 will be further explained with reference to FIG. 3.
  • FIG. 3 is a diagram showing an example of functional blocks of the processing section.
  • The processing unit 7 includes an image generation unit 71, an object recognition unit 72, a subject selection unit 73, a distance measurement unit 74, a DF value calculation unit 75, an F value calculation unit 76, an optical system adjustment unit 77, and a movement detection unit 78.
  • the image generation unit 71 generates an image based on pixel signals from the image sensor 3. Unless otherwise specified, it is assumed that the image includes a plurality of subject candidates located within the field of view of the imaging device 1.
  • the subject candidate is different from the subject 9 in that it is a subject that has not yet been selected by a subject selection unit 73, which will be described later.
  • the object recognition unit 72 recognizes multiple subject candidates in the image generated by the image generation unit 71. Various known image recognition algorithms may be used. By automatically recognizing subject candidates, user operations can be simplified accordingly.
  • the subject selection unit 73 selects a plurality of subjects 9 from the plurality of subject candidates recognized by the object recognition unit 72. Selection may be understood to mean selection of the subject 9 to be focused.
  • the subject 9 may be automatically selected by the subject selection unit 73, or may be manually selected by the user of the imaging device 1. In the case of automatic selection, the user's operation can be simplified, and in the case of manual operation, it is possible to respond to the user's arbitrary wishes.
  • the selection of the subject 9 may be performed by cooperation between the subject selection section 73 and the display operation section 5.
  • the display operation unit 5 of the imaging device 1 displays an image including a plurality of subject candidates.
  • the plurality of subject candidates are heads (including facial parts) of different people.
  • the display operation unit 5 displays the plurality of subject candidates in a manner that allows them to be selected as the subject 9.
  • each of the plurality of subject candidates is highlighted with a rectangular frame to facilitate selection by the user.
  • Some of the subject candidates are selected as the subject 9 by user operations such as touching the screen.
  • two people are selected as the subjects 9. By changing the display mode of the frame before and after selection, it becomes easier to distinguish between them.
  • the subject 9 can be selected by a simple user operation.
  • the subject selection unit 73 may automatically select the subject 9.
  • data such as an image of the subject 9 may be prepared in advance and stored (registered) in the storage unit 6.
  • the subject selection unit 73 may select the registered subject 9 from among the plurality of subject candidates in the image.
  • a plurality of parts of one object may each be selected as the subject 9.
  • For example, the head, neck, belly, elbow, hand, and knee of one person are each selected as a subject 9.
  • a skeleton as shown by lines connecting each part may also be selected as the subject 9.
  • One object may be divided into a large number of parts, and some of these parts may be selected as the subject 9.
  • one dog is subdivided into a number of square frames, and two of these parts are selected as the subject 9.
  • one airplane is subdivided into a large number of rectangular frames, and two of them are selected as the subject 9. Based on the principle described below, it is possible to perform focus position adjustment suitable for imaging an entire object.
  • the subject 9 may be selected by specifying a range in the image.
  • a part of the area in the image is filled in by the user's operation, and a subject candidate that overlaps the area is selected as the subject 9.
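  The overlap test implied by this range-specified selection can be sketched as follows; the rectangle representation `(x0, y0, x1, y1)` and the function name are assumptions introduced for illustration.

```python
def select_by_region(candidates, region):
    # Select every subject-candidate bounding box that overlaps the
    # user-specified region; boxes are (x0, y0, x1, y1) with x0 < x1
    # and y0 < y1. Two axis-aligned boxes overlap unless one lies
    # entirely to the left/right/above/below the other.
    rx0, ry0, rx1, ry1 = region
    return [c for c in candidates
            if not (c[2] <= rx0 or rx1 <= c[0] or
                    c[3] <= ry0 or ry1 <= c[1])]
```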
  • the distance measuring section 74 measures the distance of the subject 9 selected by the subject selecting section 73.
  • the distance from the imaging device 1 to the subject 9, more specifically, for example, the distance from the imaging surface 30 of the image sensor 3 to the subject 9 is measured.
  • Various known ranging techniques may be used. Examples of ranging methods include triangulation, ToF (Time Of Flight), and the like.
  • In the case of triangulation, a plurality of image sensors 3 may be used as a stereo sensor.
  • In the case of ToF, the image sensor 3 may be configured to include pixels that receive distance-measuring light.
  • For example, distance measurement is performed based on the distribution of charges within the pixel 31 and their difference. Among the plurality of subjects 9, the subject 9 with the shortest measured distance and the subject 9 with the longest measured distance are identified as the closest subject 9 near and the farthest subject 9 far, respectively.
  • The DF value calculation unit 75 calculates the DF value of each of the plurality of subjects 9 based on the pixel signals from the phase difference detection pixels 312 of the image sensor 3. More specifically, the DF value calculation unit 75 calculates the DF value of the subject 9 corresponding to a given portion based on the difference in level of the pixel signals from the pair of phase difference detection pixels 312 in that portion. Since DF value calculation using phase difference detection pixels 312 is itself well known, detailed explanation is omitted; for example, a calculation method such as that shown in Patent Document 2 may be used. In the present embodiment, the DF value calculation unit 75 also calculates an intermediate value between the DF value of the closest subject 9 near and the DF value of the farthest subject 9 far.
  • the F value calculation unit 76 calculates the F value of the optical system 2. Details will be described later.
  • The optical system adjustment section 77 adjusts the optical system 2. Adjustment of the optical system 2 includes adjustment of the focus position and adjustment of the F value. For example, a control signal specifying these parameters is transmitted from the optical system adjustment section 77 to the drive section 4, and the drive section 4 drives the optical system 2 according to the control signal. Hereinafter, the focus position is also referred to as focus position FP. Specifically, the optical system adjustment section 77 adjusts the focus position FP of the optical system 2 so that the DF values of the closest subject 9 near and the farthest subject 9 far approach the intermediate value calculated by the DF value calculation section 75 described above. Since the relationship between the amount of change in the DF value and the drive amount of the optical system 2, that is, the amount of movement of the focus position FP, is known in advance, such adjustment of the focus position FP is possible.
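  Because moving the focus position FP shifts every subject's DF value by the same amount, the required drive can be derived directly from the intermediate value. A minimal sketch, assuming a hypothetical linear conversion factor `drive_per_df` (the description only states that the relationship between DF change and drive amount is known in advance):

```python
def focus_drive_to_intermediate(df_near, df_far, drive_per_df=1.0):
    # Intermediate value of the two DF values (closest / farthest subject).
    intermediate = (df_near + df_far) / 2.0
    # Drive so the intermediate value maps to zero defocus; afterwards
    # df_near and df_far are symmetric about the new focus position FP.
    return -intermediate * drive_per_df
```

  For example, DF values of +2 (closest) and -6 (farthest) give an intermediate value of -2, so the lens is driven by +2 units, leaving the two subjects symmetrically defocused at +4 and -4.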
  • The F value calculation section 76 calculates the minimum F value at which the depth of field includes the closest subject 9 near and the farthest subject 9 far.
  • Such an F value is calculated based on, for example, the focus position FP of the optical system 2 adjusted by the optical system adjustment section 77 and the positions of the closest subject 9 near and the farthest subject 9 far (for example, the distance measurement results of the distance measuring section 74).
  • the depth of field will also be referred to as depth of field x.
  • the optical system adjustment unit 77 adjusts the F value of the optical system 2 so that it approaches the F value calculated by the F value calculation unit 76.
  • the adjusted focus position FP and F value will be explained with reference to FIG. 8 as well.
  • FIG. 8 is a diagram showing an example of the focused position and F value after adjustment.
  • the corresponding depth of field x is illustrated.
  • the focus position FP is located between the closest subject 9 near and the farthest subject 9 far , more specifically, in this example, closer to the nearest subject 9 near than the farthest subject 9 far .
  • the depth of field x corresponding to the F value is determined to be the minimum range including the closest subject 9 near and the farthest subject 9 far . That is, the closest subject 9 near and the farthest subject 9 far are located at one end and the other end of the range of depth of field x (field range).
  • Such a focus position FP can be said to be one of the focus positions FP suitable for imaging a plurality of subjects including the closest subject 9 near and the farthest subject 9 far .
  • In contrast, some conventional techniques can only focus on an intermediate position even when the front-rear separation of the subjects 9 is large, making it impossible to obtain the optimal focus position FP.
  • Note that the F value of the optical system 2 may be adjusted to be even smaller than the F value calculated by the F value calculation unit 76 (so that the depth of field x becomes shallower). It is also possible to image in a mode (blur priority mode) that tolerates a certain degree of blur of the subjects and reduces the depth of field x.
  • FIG. 9 is a diagram for explaining calculation of the F value.
  • A lens 21, which is a condensing lens, is exemplified.
  • the diameter of the lens 21 is shown as a diameter D.
  • the focal length of the lens 21 is shown as a focal length f.
  • the distance between the lens 21 and the focus position FP is shown as a distance a.
  • the distance between the lens 21 and the imaging surface 30 of the image sensor 3 is shown as a distance b.
  • Of the depth of field x, the portion from the focus position FP to the closest subject 9 near is referred to as the front depth of field x near and illustrated.
  • the portion from the focus position FP to the farthest subject 9 far is referred to as a rear depth of field x far and is illustrated.
  • the solid line indicates the optical path passing through the focus position FP.
  • the broken line indicates the optical path passing through the closest object 9 near .
  • the dashed line indicates the optical path passing through the farthest object 9 far .
  • The spread, on the imaging surface 30, of the light from the closest subject 9 near (the light along the optical path indicated by the broken line) and of the light from the farthest subject 9 far (the light along the optical path indicated by the dashed-dotted line) is referred to as the permissible circle of confusion diameter ε and illustrated.
  • The permissible circle of confusion diameter ε may be approximately the same as the resolution of the image sensor 3.
  • the distance between the imaging surface 30 of the image sensor 3 and the focal point of the light from the closest subject 9 near is referred to as front depth of focus X near in the drawing.
  • the distance between the imaging surface 30 of the image sensor 3 and the focal point of the light from the farthest object 9 far is referred to as a rear depth of focus X far .
  • Here, ε/D is treated as a value sufficiently smaller than 1 (for example, about 10⁻³).
  • the rear depth of focus X far is calculated in the same way, and has the same value as the front depth of focus X near .
  • The DF value is the amount of blur on the imaging surface 30. Since the front depth of focus X near and the rear depth of focus X far have the same value, the minimum F value (Fno) at which both subjects fall within the depth of field can be calculated from the DF values.
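  A worked numeric version of this calculation, using the textbook thin-lens relations rather than the patent's exact derivation (the function name, the symmetric depth-of-focus assumption, and the blur ≈ depth-of-focus / Fno approximation are a hedged sketch consistent with the description above):

```python
def min_f_number(f, a_near, a_far, eps):
    # Image distances of the nearest and farthest subjects from the
    # thin-lens equation 1/a + 1/b = 1/f (all lengths in millimetres).
    b_near = a_near * f / (a_near - f)
    b_far = a_far * f / (a_far - f)
    # The focus position FP is chosen so the two image planes straddle
    # the imaging surface symmetrically: front depth of focus X_near
    # equals rear depth of focus X_far.
    x_depth_of_focus = (b_near - b_far) / 2.0
    # Blur on the sensor is roughly depth_of_focus / Fno, so the minimum
    # F value keeping the blur within the permissible circle of
    # confusion diameter eps is:
    return x_depth_of_focus / eps
```

  For example, with f = 50 mm, subjects at 2 m and 4 m, and ε = 0.03 mm, this sketch gives an F value of roughly F10.8.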
  • the F value calculation unit 76 of the processing unit 7 calculates the F value as described above.
  • The permissible circle of confusion diameter ε can be said to correspond to the permissible amount of blur, so this value can be changed to select a trade-off between the degree of focus on the plurality of subjects 9 and the degree of blur of the background, as in the blur priority mode.
  • the movement detection section 78 detects the movement of the subject 9 selected by the subject selection section 73.
  • Various known detection algorithms may be used. For example, even if the subject 9 moves, the subject 9 can be tracked. Detection may be performed for each image cycle; in the case of video shooting, the movement of the subject 9 may be detected for each frame, for example. Detection of movement by the movement detection unit 78 may include detection of the movement direction of the subject 9, for example, a direction toward the imaging device 1 or a direction away from the imaging device 1.
  • the optical system adjustment section 77 controls the optical system 2 based on the detection result of the movement detection section 78. For example, each time movement of the subject 9 is detected by the movement detection unit 78, the focus position FP and F value of the optical system 2 may be adjusted (updated) as described above. By tracking the subject 9 and controlling the optical system 2, optimal adjustment of the focal position FP and F value of the optical system 2 can be maintained. For example, while shooting a video, the focus position FP and F value of the optical system 2 can be continuously changed smoothly.
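  A schematic of such per-frame updating; the exponential smoothing factor `alpha` is an assumption added here to illustrate the smooth continuous change during video, not something the description specifies:

```python
def smooth_focus_updates(df_stream, alpha=0.5, drive_per_df=1.0):
    # For each frame's (df_near, df_far) measurement, compute the drive
    # toward the intermediate DF value and low-pass filter it so the
    # focus position FP changes smoothly between frames.
    drive = 0.0
    history = []
    for df_near, df_far in df_stream:
        target = -(df_near + df_far) / 2.0 * drive_per_df
        drive = alpha * target + (1.0 - alpha) * drive
        history.append(drive)
    return history
```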
  • FIG. 10 is a flowchart illustrating an example of processing (imaging method) executed by the imaging device. Since the specific contents of each process have been explained so far, detailed explanation will not be repeated.
  • step S1 multiple subjects are selected.
  • the subject selection unit 73 of the processing unit 7 selects a plurality of subjects 9 from a plurality of subject candidates.
  • step S2 the focus position is adjusted so that the DF value becomes an intermediate value.
  • the distance measuring section 74 of the processing section 7 measures the distances of each of the plurality of selected subjects 9. Based on the distance measurement results, the closest subject 9 near and the farthest subject 9 far are identified.
  • the DF value calculation unit 75 calculates the DF value of the closest subject 9 near and the farthest subject 9 far , and also calculates an intermediate value thereof.
  • the optical system adjustment unit 77 adjusts the focusing position FP of the optical system 2 so that the DF values of the closest subject 9 near and the farthest subject 9 far approach their intermediate values.
  • step S3 the F value is adjusted.
  • For example, the F value calculation unit 76 of the processing unit 7 calculates, based on the position of the closest subject 9 near, the position of the farthest subject 9 far, and the adjusted focus position FP, the minimum F value that yields a depth of field x including the closest subject 9 near and the farthest subject 9 far.
  • the optical system adjustment unit 77 adjusts the F value of the optical system 2 so that it approaches the F value calculated by the F value calculation unit 76.
  • the focal position FP and F value of the optical system 2 are adjusted so that the depth of field x that includes all of the plurality of selected subjects 9 is obtained.
  • the F value is adjusted to the minimum value.
  • imaging is performed with the parameters of the optical system 2 optimally adjusted in this way.
  • Note that the adjustable F value is limited by the configuration of the optical system 2 and the like, so it may not be possible to include all of the selected subjects 9 in the depth of field x. In that case, the subject selection unit 73 of the processing unit 7 may exclude the farthest subject 9 far from the selection targets. This will be explained with reference to FIG. 11.
  • FIG. 11 is a flowchart illustrating an example of processing (imaging method) executed by the imaging device.
  • the process in step S11 is the same as step S1 in FIG. 10 described above, so the explanation will be omitted.
  • In step S12, it is determined whether all of the selected subjects can be included in the depth of field. For example, the subject selection unit 73 of the processing unit 7 determines that all of the selected subjects 9 cannot be included in the depth of field x if the distance between the closest subject 9 near and the farthest subject 9 far is larger than the upper limit of the depth of field x corresponding to the upper limit of the adjustable F value of the optical system 2. If all of the selected subjects 9 can be included (step S12: Yes), the process proceeds to step S14. If not (step S12: No), the process proceeds to step S13.
  • step S13 the farthest subject is excluded from selection. This process is executed, for example, by the subject selection unit 73 of the processing unit 7.
  • the excluded farthest subject 9 far returns to the state before selection, that is, to a subject candidate.
  • the remaining objects 9 remain as selection targets, and the farthest object 9 among them becomes the new farthest object 9 far . After that, the process returns to step S12.
  • By repeating steps S12 and S13, the number of selected subjects 9 is narrowed down until all of the subjects 9 to be selected are included in the depth of field x. In this state, the process proceeds to step S14.
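  The narrowing loop of steps S12 and S13 can be sketched as follows; representing subject positions as plain distances and taking `dof_limit` as the maximum attainable depth of field x are simplifying assumptions for illustration.

```python
def narrow_selection(distances, dof_limit):
    # Repeat step S12 (span check) and step S13 (exclude the current
    # farthest subject 9far) until the near-far span fits within the
    # depth of field achievable at the upper-limit F value.
    selected = sorted(distances)
    while len(selected) > 1 and selected[-1] - selected[0] > dof_limit:
        selected.pop()
    return selected
```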
  • The processing in step S14 and step S15 is the same as the processing in step S2 and step S3 in FIG. 10 described above, so the explanation will be omitted.
  • Note that the adjustment of the focus position FP of the optical system 2 by the optical system adjustment unit 77 of the processing unit 7 is not limited to the adjustment described above (first adjustment), and may also include a second adjustment that adjusts the focus position FP to a different position.
  • The second adjustment may be an adjustment of the focus position FP using various known methods. For example, in the second adjustment, the focus position FP of the optical system 2 is adjusted so that it approaches the farthest subject 9 far or a subject 9 located near the center of the image.
  • The first adjustment and the second adjustment may be switched automatically or by user operation. By switching between a plurality of adjustments, flexibility in adjusting the focus position can be increased. For example, the first adjustment and the second adjustment are switched depending on whether a specific subject 9 enters or exits the field of view. As an example, the first adjustment may be used when a particular subject 9 is within the field of view, and the second adjustment may be used when the subject 9 moves out of the field of view. For example, the switch to the second adjustment may be made only after a certain period of time has passed since the subject 9 left the field of view. Thereby, changes in the focus position FP caused by the subject 9 briefly entering and exiting the field of view can be suppressed.
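  One way to realize such switching with a hold period is sketched below; the frame-count hysteresis and the `hold_frames` threshold are assumptions introduced for illustration, not details from the description.

```python
def choose_adjustment(subject_in_view, frames_since_exit, hold_frames=30):
    # Keep using the first adjustment (intermediate-DF focusing) while a
    # specific subject is in view, and for `hold_frames` frames after it
    # leaves; only then switch to the second adjustment. Brief exits and
    # re-entries therefore do not move the focus position FP.
    if subject_in_view or frames_since_exit < hold_frames:
        return "first"
    return "second"
```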
  • the imaging device 1 includes the processing section 7 that controls the imaging optical system 2.
  • The control of the optical system 2 by the processing unit 7 includes adjusting the focus position FP of the optical system 2 so that the DF values of the closest subject 9 near and the farthest subject 9 far among the plurality of subjects 9 approach the intermediate value of the respective DF values. Thereby, focus position adjustment suitable for imaging a plurality of subjects 9 can be performed.
  • The control of the optical system 2 by the processing unit 7 may include adjusting the F value of the optical system 2 so that it approaches the minimum F value at which a depth of field x including the closest subject 9 near and the farthest subject 9 far is obtained. This enables optimal F value adjustment based on optimal focus position adjustment.
  • The imaging device 1 may include an image sensor 3 that images the subject 9 via the optical system 2, and the image sensor 3 may include phase difference detection pixels 312 for obtaining the DF value of each of the plurality of subjects 9. For example, using such an image sensor 3, the DF values of the closest subject 9 near and the farthest subject 9 far can be obtained.
  • The processing unit 7 may recognize a plurality of subject candidates located within the field of view of the imaging device 1 and select the plurality of subjects 9 from the recognized subject candidates.
  • the imaging device 1 may include a display operation unit 5 that displays a plurality of subject candidates located within the field of view of the imaging device 1 in a manner that allows them to be selected as the subject 9. Thereby, for example, a plurality of subjects 9 to be focused can be appropriately selected.
  • the processing unit 7 may detect the movement of the plurality of subjects 9 and control the optical system 2 based on the detection results. Thereby, even if the subject 9 moves, optimal focus position adjustment can be maintained.
  • each of the plurality of subjects 9 may be a different part of the same object. Thereby, it is possible to adjust the focus position suitable for imaging the entire object.
  • The processing unit 7 may exclude the farthest subject 9far from the selection targets. This makes it possible to deal with the case where, for example, there is a limit to the adjustment of the F value of the optical system 2.
  • The adjustment of the focus position FP of the optical system 2 by the processing unit 7 may include a first adjustment that adjusts the focus position FP so that the DF values of the closest subject 9near and the farthest subject 9far among the plurality of subjects 9 approach the intermediate value of the respective DF values, and a second adjustment that adjusts the focus position FP to a position different from that of the first adjustment. Thereby, flexibility in adjusting the focus position can be increased.
  • The imaging method described with reference to FIG. 10 and elsewhere is also one of the disclosed techniques.
  • The imaging method includes adjusting the focus position FP of the imaging optical system 2 so that the DF values of the closest subject 9near and the farthest subject 9far among the plurality of subjects 9 approach the intermediate value of the respective DF values.
  • The imaging program 61 described with reference to FIG. 1 and elsewhere is also one of the disclosed techniques.
  • The imaging program 61 causes a processor (processing unit 7) to execute a process of adjusting the focus position FP of the imaging optical system 2 so that the DF values of the closest subject 9near and the farthest subject 9far among the plurality of subjects 9 approach the intermediate value of the respective DF values.
  • A computer-readable recording medium (storage unit 6) on which the imaging program 61 is recorded is also one of the disclosed techniques.
  • Note that the present technology can also have the following configurations.
  • (1) An imaging device comprising a processing unit that controls an optical system for imaging, wherein the control of the optical system by the processing unit includes adjusting a focusing position of the optical system so that the defocus values of the closest subject and the farthest subject among a plurality of subjects approach an intermediate value of the respective defocus values.
  • The control of the optical system by the processing unit includes adjusting the F value of the optical system so that it approaches a minimum F value that provides a depth of field including the closest subject and the farthest subject. The imaging device according to (1) or (2).
  • The processing unit recognizes a plurality of subject candidates located within a field of view of the imaging device, and selects the plurality of subjects from the recognized plurality of subject candidates. The imaging device according to any one of (1) to (3).
  • (5) Comprising a display operation unit that displays a plurality of subject candidates located within the field of view of the imaging device in a manner selectable as the subjects. The imaging device according to any one of (1) to (4).
  • The processing unit detects movement of the plurality of subjects and controls the optical system based on the detection results. The imaging device according to any one of (1) to (5).
  • Each of the plurality of subjects is a different part of the same object. The imaging device according to any one of (1) to (6).
  • Adjustment of the focusing position of the optical system by the processing unit includes: a first adjustment that adjusts the focusing position of the optical system so that the defocus values of the closest subject and the farthest subject among the plurality of subjects approach an intermediate value of the respective defocus values; and a second adjustment that adjusts the focusing position of the optical system to a position different from that of the first adjustment. The imaging device according to any one of (1) to (8).
  • 1 imaging device; 2 optical system; 21 lens; 3 image sensor; 31 pixel; 30 imaging surface; 311 imaging pixel; 312 phase difference detection pixel; 4 drive unit; 5 display operation unit; 6 storage unit; 61 imaging program; 7 processing unit; 71 image generation unit; 72 object recognition unit; 73 subject selection unit; 74 distance measurement unit; 75 DF value calculation unit; 76 F value calculation unit; 77 optical system adjustment unit; 78 movement detection unit; 9 subject; 9near closest subject; 9far farthest subject; FP focus position; x depth of field

Abstract

This imaging device comprises a processing unit that controls an optical system for imaging. The control of the optical system by the processing unit includes adjusting the focusing position of the optical system so that the defocus values of a nearest subject and a farthest subject among a plurality of subjects each approach an intermediate value of the respective defocus values.

Description

Imaging device, imaging method, and imaging program
The present disclosure relates to an imaging device, an imaging method, and an imaging program.
For example, as disclosed in Patent Document 1, various techniques have been proposed regarding focus position adjustment when capturing images of a plurality of subjects.
Patent Document 1: Japanese Patent Application Publication No. 2001-116980; Patent Document 2: Japanese Patent Application Publication No. 2022-16546
In order to include all subjects in the depth of field, it is necessary to increase the F value (aperture value), but increasing the F value too much may cause problems. Since the specific F value also changes depending on the focus position, it is necessary to consider adjusting the focus position as well. No specific studies have been conducted so far regarding focus position adjustment that allows a plurality of subjects to be imaged at an appropriate F value.
One aspect of the present disclosure enables focus position adjustment suitable for imaging a plurality of subjects.
An imaging device according to one aspect of the present disclosure includes a processing unit that controls an optical system for imaging, and the control of the optical system by the processing unit includes adjusting the focusing position of the optical system so that the defocus values of the closest subject and the farthest subject among a plurality of subjects approach an intermediate value of the respective defocus values.
An imaging method according to one aspect of the present disclosure includes adjusting the focusing position of an imaging optical system so that the defocus values of the closest subject and the farthest subject among a plurality of subjects approach an intermediate value of the respective defocus values.
An imaging program according to one aspect of the present disclosure causes a processor to execute a process of adjusting the focusing position of an imaging optical system so that the defocus values of the closest subject and the farthest subject among a plurality of subjects approach an intermediate value of the respective defocus values.
FIG. 1 is a diagram illustrating an example of a schematic configuration of an imaging device according to an embodiment. FIG. 2 is a diagram showing an example of a schematic configuration of an image sensor. FIG. 3 is a diagram illustrating an example of functional blocks of a processing unit. FIGS. 4 to 7 are diagrams illustrating examples of subject selection. FIG. 8 is a diagram showing an example of the focus position and F value after adjustment. FIG. 9 is a diagram for explaining calculation of an F value. FIGS. 10 and 11 are flowcharts illustrating examples of processing (imaging method) executed by the imaging device.
Embodiments of the present disclosure will be described in detail below based on the drawings. Note that in each of the following embodiments, the same elements are given the same reference numerals, and redundant explanations are omitted.
The present disclosure will be described in the following order.
0. Introduction
1. Embodiment
2. Modification examples
3. Examples of effects
0. Introduction
When imaging a plurality of subjects, the focus position and F value have often been adjusted manually based on knowledge and experience. Such adjustment is not easy, and is almost impossible when shooting video. Increasing the F value too much in order to include all of the subjects in the depth of field can cause problems. For example, the background may be rendered sharply, or image quality may deteriorate due to noise or the like because the image sensor must operate in a high-sensitivity state. In particular, no concrete countermeasure has been found so far for the case where the distance between subjects is large.
The optimal focus position at which a plurality of subjects can be imaged with the minimum F value may be in front of (on the imaging device side of) the midpoint between the subjects. Attempts to set the focus position to approximately the midpoint between the subjects are also known, but such a setting deviates from the optimum, especially when the subjects are far apart in the depth direction. Optimizing both the focus position and the F value involves a large amount of calculation, and it becomes difficult to maintain optimal parameters, especially for fast-moving subjects or video shooting.
At least some of the issues described above are addressed by the disclosed technology. For example, the focus position is adjusted so that the minimum F value corresponding to a depth of field that includes the plurality of selected subjects is obtained. A subject can be selected by user operation or, conversely, excluded from the selection targets. It is also possible to select a bokeh priority mode in which the F value is made even smaller while the focus position is kept unchanged.
1. Embodiment
FIG. 1 is a diagram showing an example of a schematic configuration of an imaging device according to the embodiment. An object to be imaged by the imaging device 1 is referred to as a subject 9. The subject 9 may be any object that can be imaged by the imaging device 1. The subject 9 may also be a part of an object, in which case a plurality of different parts of the same object can be a plurality of subjects 9. The subject 9 may be a moving object; examples of such subjects 9 include people, animals, and vehicles. A plurality of subjects 9 may be present; four subjects 9 are illustrated in FIG. 1. Among the plurality of subjects 9, the subject 9 closest to the imaging device 1 and the subject 9 farthest from it are referred to as the closest subject 9near and the farthest subject 9far.
The imaging device 1 includes an optical system 2, an image sensor 3, a drive unit 4, a display operation unit 5, a storage unit 6, and a processing unit 7.
The optical system 2 is an imaging optical system provided for the image sensor 3, and guides light from the subject 9 to the image sensor 3. The optical system 2 is configured so that its focus position and F value can be adjusted. For example, the optical system 2 includes a lens mechanism for adjusting the focus position, an aperture (diaphragm) mechanism for adjusting the F value, and the like.
The image sensor 3 images the subject 9 via the optical system 2. An example of the image sensor 3 is a solid-state image sensor such as a CMOS image sensor. An image within the field of view of the imaging device 1 is captured by the image sensor 3. Specifically, as schematically indicated by the white arrow in FIG. 1, light from the subject 9 enters the image sensor 3 via the optical system 2, whereby the subject 9 is imaged. The image sensor 3 is configured so that the DF value of each of the plurality of subjects 9 can be obtained. The DF value is a value indicating the amount of focus shift (defocus amount) when the subject 9 is out of focus. The image sensor 3 will be described with reference also to FIG. 2.
FIG. 2 is a diagram showing an example of a schematic configuration of the image sensor. The image sensor 3 includes a plurality of pixels 31 arranged in an array so as to define an imaging surface 30. For example, one pixel 31 outputs a pixel signal corresponding to the amount of received red, green, or blue light.
The plurality of pixels 31 include a plurality of imaging pixels 311 and a plurality of phase difference detection pixels 312. An image is generated based on at least the pixel signals from the imaging pixels 311. Although not an essential configuration, the illustrated phase difference detection pixel 312 differs from the imaging pixel 311 in that, in particular, half of its light receiving region is shielded from light. Pairs of phase difference detection pixels 312, whose light-shielded portions are on opposite sides in the array direction, are arranged in respective parts of the image sensor 3. Based on the amounts of light received by a pair of phase difference detection pixels 312, that is, their pixel signal levels, the DF value of the subject 9 corresponding to that part can be calculated.
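As a rough illustration of the idea (not the patent's algorithm, whose details follow Patent Document 2), the signals from the two half-shielded pixels of a pair shift apart as the subject defocuses, so the lag that best aligns the two signals is proportional to the DF value. The signal shapes, gain factor, and sign convention below are all illustrative assumptions:

```python
def estimate_defocus(left, right, gain=1.0):
    """Estimate a DF value from one pair of phase-difference signals.

    left/right: sequences of pixel signal levels read from left- and
    right-shielded phase difference detection pixels of one region.
    The lag maximising their cross-correlation measures how far apart
    the two signals are; `gain` is a device-specific conversion factor.
    """
    n = len(left)
    best_lag, best_score = 0, float("-inf")
    for lag in range(-(n - 1), n):
        # Correlation over the overlapping part of the two signals.
        score = sum(left[i] * right[i - lag]
                    for i in range(max(0, lag), min(n, n + lag)))
        if score > best_score:
            best_lag, best_score = lag, score
    return gain * best_lag

# Two copies of the same edge profile, rotated 3 pixels each way,
# mimic the pair's signals for a defocused subject.
base = [max(0.0, 1.0 - abs(i - 32) / 5.0) for i in range(64)]
left = base[-3:] + base[:-3]   # shifted right by 3 pixels
right = base[3:] + base[:3]    # shifted left by 3 pixels
print(estimate_defocus(left, right))  # → 6.0 (the pair is 6 pixels apart)
```

The sign of the result distinguishes front focus from back focus, which is what lets the adjustment drive the lens in the correct direction without hunting.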
Note that an image may be one frame of a video, and to the extent that no contradiction arises, "image" may be read as "video" as appropriate. Likewise, "imaging" may be read as "shooting" as appropriate.
Returning to FIG. 1, the drive unit 4 drives the optical system 2. The drive unit 4 moves the lenses included in the optical system 2 to adjust the focus position of the optical system 2, and moves the aperture to adjust the F value of the optical system 2. The drive unit 4 includes, for example, an actuator and the like.
The display operation unit 5 displays information handled by the imaging device 1 and accepts operations on the imaging device 1. For example, the display operation unit 5 displays captured images. It also displays an operation screen and accepts user operations. The display operation unit 5 includes, for example, a touch panel display.
The storage unit 6 stores information used by the imaging device 1. One example of the information stored in the storage unit 6 is an imaging program 61. The imaging program 61 causes the processing unit 7 to execute various processes in the imaging device 1. The imaging program 61 can be distributed collectively or separately via a network such as the Internet. The imaging program 61 can also be recorded collectively or separately on various computer-readable recording media and executed by being read from the recording medium by a computer.
The processing unit 7 includes, for example, one or more processors. The imaging program 61 stored in the storage unit 6 is read out and loaded into memory, whereby the processing of the processing unit 7 according to the imaging program 61 is executed. The processing unit 7 directly or indirectly controls the other elements of the imaging device 1, including the optical system 2. The processing unit 7 will be further described with reference to FIG. 3.
FIG. 3 is a diagram showing an example of the functional blocks of the processing unit. The processing unit 7 includes an image generation unit 71, an object recognition unit 72, a subject selection unit 73, a distance measurement unit 74, a DF value calculation unit 75, an F value calculation unit 76, an optical system adjustment unit 77, and a movement detection unit 78.
The image generation unit 71 generates an image based on the pixel signals from the image sensor 3. Unless otherwise specified, the image is assumed to include a plurality of subject candidates located within the field of view of the imaging device 1. A subject candidate differs from a subject 9 in that it has not yet been selected by the subject selection unit 73 described later.
The object recognition unit 72 recognizes the plurality of subject candidates in the image generated by the image generation unit 71. Various known image recognition algorithms may be used. Automatically recognizing subject candidates makes it possible to simplify user operations accordingly.
The subject selection unit 73 selects a plurality of subjects 9 from the plurality of subject candidates recognized by the object recognition unit 72. Selection may be understood to mean selection of the subjects 9 to be focused on. The subjects 9 may be selected automatically by the subject selection unit 73, or manually by the user of the imaging device 1. Automatic selection simplifies user operations, while manual selection can accommodate any wishes of the user. The selection of the subjects 9 may be performed through cooperation between the subject selection unit 73 and the display operation unit 5. Some specific examples will be described with reference to FIGS. 4 to 7.
FIGS. 4 to 7 are diagrams showing examples of subject selection. The display operation unit 5 of the imaging device 1 displays an image including a plurality of subject candidates.
In the example shown in FIG. 4, the plurality of subject candidates are the heads (including the faces) of different people. The display operation unit 5 displays these subject candidates in a manner that allows them to be selected as subjects 9. In this example, each subject candidate is highlighted with a rectangular frame to facilitate selection by the user. Some of the subject candidates are selected as subjects 9 by a user operation such as touching the screen. For example, as indicated by the thick rectangular frames in FIG. 4(B), two people are selected as subjects 9. Changing the display mode of the frame before and after selection makes the two states easy to distinguish.
For example, as described above, subjects 9 can be selected with simple user operations. As mentioned earlier, the subject selection unit 73 may also select subjects 9 automatically. For example, data such as images of the subjects 9 may be prepared in advance and stored (registered) in the storage unit 6. The subject selection unit 73 may then select the registered subjects 9 from among the plurality of subject candidates in the image.
Each of a plurality of parts of one object may also be selected as a subject 9. In the example shown in FIG. 5, as indicated by the circular markers, the head, neck, belly, elbow, hand, and knee of one person are each selected as a subject 9. A skeleton, as indicated by the lines connecting the parts, may also be selected as a subject 9.
One object may be finely divided into many parts over its entirety, and some of those parts may be selected as subjects 9. In the example shown in FIG. 6(A), one dog is subdivided into many rectangular frames, and two of those parts are selected as subjects 9. In the example shown in FIG. 6(B), one airplane is subdivided into many rectangular frames, and two of them are selected as subjects 9. By the principle described later, focus position adjustment suitable for imaging the entire object can be performed.
A subject 9 may also be selected by specifying a range in the image. In the example shown in FIG. 7, a partial region of the image is painted over by a user operation, and the subject candidates that overlap that region are selected as subjects 9.
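A minimal sketch of this range-specification selection, under the assumption that both the painted region and the recognized candidates are represented as axis-aligned boxes (the candidate names and box format are illustrative, not from the patent):

```python
def overlaps(box, region):
    """Axis-aligned overlap test; box and region are (x0, y0, x1, y1)."""
    return not (box[2] <= region[0] or region[2] <= box[0] or
                box[3] <= region[1] or region[3] <= box[1])

def select_subjects(candidates, region):
    """Return the names of candidate boxes overlapping the painted region."""
    return [name for name, box in candidates.items() if overlaps(box, region)]

# Hypothetical candidates recognized by the object recognition unit.
candidates = {"person_a": (10, 10, 50, 90), "person_b": (60, 15, 95, 85)}
print(select_subjects(candidates, (40, 30, 70, 60)))  # → ['person_a', 'person_b']
```

In practice the painted region need not be rectangular; the same overlap idea applies with a mask instead of a box.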
Returning to FIG. 3, the distance measurement unit 74 measures the distance to each subject 9 selected by the subject selection unit 73. The distance from the imaging device 1 to the subject 9, more specifically, for example, from the imaging surface 30 of the image sensor 3 to the subject 9, is measured. Various known distance measurement methods may be used, such as triangulation and ToF (Time of Flight). When triangulation is used, the image sensor 3 may be a plurality of image sensors 3 used as a stereo sensor. When ToF is used, the image sensor 3 may include pixels that output light for distance measurement; with the indirect ToF method, distance is measured based on the distribution of, and difference between, the charges within a pixel 31. Among the plurality of subjects 9, the subject 9 with the shortest measured distance and the subject 9 with the longest measured distance are identified as the closest subject 9near and the farthest subject 9far.
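Once the selected subjects 9 have been measured, identifying 9near and 9far reduces to a min/max over the measured distances; a trivial sketch with hypothetical subject labels and distances:

```python
def identify_extremes(distances):
    """Given measured distances {subject: metres}, return (nearest, farthest)."""
    nearest = min(distances, key=distances.get)
    farthest = max(distances, key=distances.get)
    return nearest, farthest

print(identify_extremes({"A": 2.4, "B": 1.1, "C": 5.8}))  # → ('B', 'C')
```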
The DF value calculation unit 75 calculates the DF value of each of the plurality of subjects 9 based on the pixel signals from the phase difference detection pixels 312 of the image sensor 3. More specifically, the DF value calculation unit 75 calculates the DF value of the subject 9 corresponding to a given part based on the difference in level between the pixel signals from a pair of phase difference detection pixels 312. Since DF value calculation using phase difference detection pixels 312 is itself well known, a detailed description is omitted; for example, a calculation method such as that shown in Patent Document 2 may be used. In the present embodiment, the DF value calculation unit 75 also calculates the intermediate value between the DF value of the closest subject 9near and the DF value of the farthest subject 9far.
The F value calculation unit 76 calculates the F value of the optical system 2. Details will be described later.
The optical system adjustment unit 77 adjusts the optical system 2. Adjustment of the optical system 2 includes adjustment of the focus position and adjustment of the F value. For example, control signals specifying these parameters are transmitted from the optical system adjustment unit 77 to the drive unit 4, and the drive unit 4 drives the optical system 2 according to the control signals. Hereinafter, the focus position is also referred to as the focus position FP. Specifically, the optical system adjustment unit 77 adjusts the focus position FP of the optical system 2 so that the DF values of the closest subject 9near and the farthest subject 9far approach the intermediate value calculated by the DF value calculation unit 75 described above. For example, the relationship between the amount of change in DF value and the drive amount of the optical system 2, that is, the amount of movement of the focus position FP, is known in advance, which makes such adjustment of the focus position FP possible.
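As a rough illustration (not the patent's control law): if moving the lens shifts every subject's DF value by approximately the same amount, then driving by minus the midpoint of the two DF readings balances the defocus of the nearest and farthest subjects around the new focus position FP. The readings and the linear-shift assumption are illustrative:

```python
def focus_step(df_near, df_far):
    """Lens drive amount (in DF units) for one control iteration: drive so
    that the DF midpoint of the nearest/farthest subjects moves to zero."""
    midpoint = (df_near + df_far) / 2.0
    return -midpoint

# Assumed DF readings obtained via the phase difference detection pixels 312.
df_near, df_far = -8.0, 2.0
step = focus_step(df_near, df_far)
df_near, df_far = df_near + step, df_far + step
print(df_near, df_far)  # → -5.0 5.0 (balanced around zero)
```

After the step, the two DF values are equal in magnitude and opposite in sign, which is exactly the state in which FP sits between 9near and 9far as shown in FIG. 8.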
Based on the focus position FP of the optical system 2 adjusted by the optical system adjustment unit 77, the F value calculation unit 76 calculates the minimum F value that provides a depth of field including the closest subject 9near and the farthest subject 9far. Such an F value is calculated based on, for example, the adjusted focus position FP of the optical system 2 and the positions of the closest subject 9near and the farthest subject 9far (for example, the distance measurement results of the distance measurement unit 74). Hereinafter, the depth of field is also referred to as the depth of field x.
The optical system adjustment unit 77 adjusts the F value of the optical system 2 so that it approaches the F value calculated by the F value calculation unit 76. The adjusted focus position FP and F value will be described with reference also to FIG. 8.
FIG. 8 is a diagram showing an example of the adjusted focus position and F value. For the F value, the corresponding depth of field x is illustrated. The focus position FP is located between the closest subject 9near and the farthest subject 9far; more specifically, in this example, it is closer to the closest subject 9near than to the farthest subject 9far. The depth of field x corresponding to the F value is set to the minimum range that includes the closest subject 9near and the farthest subject 9far. That is, the closest subject 9near and the farthest subject 9far are located at one end and the other end of the range of the depth of field x. Such a focus position FP can be said to be one focus position FP suitable for imaging a plurality of subjects including the closest subject 9near and the farthest subject 9far.
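The relationship in FIG. 8 can be checked numerically with the standard thin-lens depth-of-field approximations (textbook formulas, not quoted from the patent): requiring the near and far depth-of-field limits to land exactly on the two subjects puts the in-focus distance at their harmonic mean, which lies in front of the arithmetic midpoint, and yields the minimum F value. The focal length and circle-of-confusion figures below are illustrative assumptions:

```python
def optimal_focus_and_f(d_near, d_far, f, c):
    """d_near/d_far: subject distances; f: focal length; c: permissible
    circle of confusion (all in metres). Returns (focus distance, F value)
    under the standard thin-lens DoF approximations."""
    a = 2.0 * d_near * d_far / (d_near + d_far)        # harmonic mean
    n = f * f * (a - d_near) / (c * d_near * (a - f))  # minimum F value
    return a, n

# Subjects at 2 m and 6 m, a 50 mm lens, 30 um circle of confusion.
a, n = optimal_focus_and_f(d_near=2.0, d_far=6.0, f=0.05, c=30e-6)
print(round(a, 3), round(n, 2))  # → 3.0 14.12
```

Note that the optimal focus distance (3.0 m) is in front of the 4.0 m arithmetic midpoint, matching the description of FIG. 8 that FP is closer to 9near than to 9far.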
For example, some conventional techniques can focus only on the midpoint even when the subjects are far apart in the depth direction, so the optimal focus position FP cannot be obtained. According to the present embodiment, under any circumstances, the focus position can be immediately adjusted so that the plurality of subjects 9 fit within the depth of field x at the theoretically minimum F value.
Note that minimizing the F value is not essential. While keeping the same focus position FP, the F value of the optical system 2 may be adjusted to be even smaller than the F value calculated by the F value calculation unit 76 (so that the depth of field x becomes shallower). This also enables imaging in a mode that tolerates a certain degree of blur in exchange for a shallower depth of field x (bokeh priority mode).
 F値算出部76によるF値の算出について、さらに図9を参照して説明する。 The calculation of the F value by the F value calculation unit 76 will be further explained with reference to FIG. 9.
 図9は、F値の算出を説明するための図である。光学系2の構成要素として、集光レンズであるレンズ21が例示される。レンズ21の径を、径Dと称し図示する。レンズ21の焦点距離を、焦点距離fと称し図示する。レンズ21と合焦位置FPとの間の距離を、距離aと称し図示する。レンズ21と撮像素子3の撮像面30との間の距離を、距離bと称し図示する。被写界深度xのうち、合焦位置FPから最至近被写体9nearまでの部分を、前側被写界深度xnearと称し図示する。合焦位置FPから最遠被写体9farまでの部分を、後側被写界深度xfarと称し図示する。 FIG. 9 is a diagram for explaining calculation of the F value. As a component of the optical system 2, a lens 21, which is a condensing lens, is exemplified. The diameter of the lens 21 is shown as a diameter D. The focal length of the lens 21 is shown as a focal length f. The distance between the lens 21 and the focus position FP is shown as a distance a. The distance between the lens 21 and the imaging surface 30 of the image sensor 3 is shown as a distance b. Of the depth of field x, the portion from the focus position FP to the closest subject 9 near is referred to as front depth of field x near and illustrated. The portion from the focus position FP to the farthest subject 9 far is referred to as a rear depth of field x far and is illustrated.
 3種類の光路が、実線、破線及び一点鎖線で模式的に示される。実線は、合焦位置FPを通る光路を示す。破線は、最至近被写体9nearを通る光路を示す。一点鎖線は、最遠被写体9farを通る光路を示す。 Three types of optical paths are schematically shown by solid lines, dashed lines and dash-dotted lines. The solid line indicates the optical path passing through the focus position FP. The broken line indicates the optical path passing through the closest object 9 near . The dashed line indicates the optical path passing through the farthest object 9 far .
 撮像面30における、最至近被写体9nearからの光(破線の光路の光)と最遠被写体9farからの光(一点鎖線の光路の光)との間の距離を、許容錯乱円径δと称し図示する。許容錯乱円径δは、撮像素子3の分解能とほぼ同じであってよい。撮像素子3の撮像面30と、最至近被写体9nearからの光の焦点との間の距離を、前側焦点深度Xnearと称し図示する。撮像素子3の撮像面30と、最遠被写体9farからの光の焦点との間の距離を、後側焦点深度Xfarと称し図示する。 The distance on the imaging surface 30 between the light from the closest subject 9 near (the light along the dashed optical path) and the light from the farthest subject 9 far (the light along the dash-dotted optical path) is referred to as the permissible circle of confusion diameter δ and is illustrated. The permissible circle of confusion diameter δ may be approximately equal to the resolution of the image sensor 3. The distance between the imaging surface 30 of the image sensor 3 and the focal point of the light from the closest subject 9 near is referred to as the front depth of focus X near . The distance between the imaging surface 30 of the image sensor 3 and the focal point of the light from the farthest subject 9 far is referred to as the rear depth of focus X far .
 三角形の相似より、下記の式(1)及び式(2)が成立する。
  δ/D = Xnear/(b + Xnear)   ・・・(1)
  δ/D = Xfar/(b - Xfar)   ・・・(2)
Due to the similarity of triangles, the following equations (1) and (2) hold true.
  δ/D = Xnear/(b + Xnear)   ・・・(1)
  δ/D = Xfar/(b - Xfar)   ・・・(2)
 上記の式(1)及び式(2)に基づいて、前側焦点深度Xnear及び後側焦点深度Xfarの比率を算出すると、下記の式(3)のようになる。なお、δ/Dは1よりも十分に小さい値(例えば10⁻³程度)として扱っている。
  Xnear/Xfar = (b + Xnear)/(b - Xfar) ≈ 1   ・・・(3)
When the ratio of the front depth of focus X near to the rear depth of focus X far is calculated based on the above equations (1) and (2), equation (3) below is obtained. Note that δ/D is treated as a value sufficiently smaller than 1 (for example, on the order of 10⁻³).
  Xnear/Xfar = (b + Xnear)/(b - Xfar) ≈ 1   ・・・(3)
 F値の算出について述べる。以下の式では、F値をFnoとして示す。定義より、Fno=f/Dである。まず、上記の式(1)から、下記の式(4)が導かれる。
  Xnear = δb/(D - δ) = Fno·δ·b/(f - Fno·δ)   ・・・(4)
The calculation of the F value will be described. In the following formula, the F value is indicated as Fno. By definition, Fno=f/D. First, the following equation (4) is derived from the above equation (1).
  Xnear = δb/(D - δ) = Fno·δ·b/(f - Fno·δ)   ・・・(4)
 fがFδよりも十分に大きいので、下記の式(5)のように前側焦点深度Xnearが算出される。
  Xnear ≈ Fno·δ·b/f   ・・・(5)
Since f is sufficiently larger than Fδ, the front depth of focus X near is calculated as shown in equation (5) below.
  Xnear ≈ Fno·δ·b/f   ・・・(5)
 後側焦点深度Xfarも同様に算出され、前側焦点深度Xnearと同じ値になる。DF値は、撮像面30上でのボケ量であり、前側焦点深度Xnear、後側焦点深度Xfar等の値が分かっていることから、下記の式(6)のように、F値であるFnoが算出される。
  Fno = f·Xnear/(δ·b)   ・・・(6)
The rear depth of focus X far is calculated in the same way and takes the same value as the front depth of focus X near . The DF value is the amount of blur on the imaging surface 30, and since values such as the front depth of focus X near and the rear depth of focus X far are known, the F value Fno is calculated as in equation (6) below.
  Fno = f·Xnear/(δ·b)   ・・・(6)
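As a rough numerical illustration of this calculation, the following Python sketch evaluates an F-number from the front depth of focus using the standard thin-lens approximation Fno ≈ f·Xnear/(δ·b). All function names and numeric values here are illustrative assumptions, not taken from the embodiment.

```python
# Illustrative sketch (not the embodiment's actual code): given the focal
# length f, the lens-to-sensor distance b, the permissible circle of
# confusion diameter delta, and the front depth of focus x_near, compute
# the F-number via the approximation Fno ~ f * x_near / (b * delta).

def min_f_number(f, b, x_near, delta):
    """Smallest F-number that keeps a focal-point offset of x_near within
    a blur circle of diameter delta on the imaging surface."""
    return f * x_near / (b * delta)

def blur_diameter(aperture_d, b, x):
    """Blur-circle diameter for a focal point displaced by x behind the
    sensor, from similar triangles: delta / D = x / (b + x)."""
    return aperture_d * x / (b + x)

f = 0.050        # focal length: 50 mm (assumed)
b = 0.052        # lens-to-sensor distance in metres (assumed)
delta = 30e-6    # permissible circle of confusion: 30 micrometres (assumed)
x_near = 0.5e-3  # front depth of focus: 0.5 mm (assumed)

fno = min_f_number(f, b, x_near, delta)
# Consistency check: at this F-number the blur of the x_near offset is
# approximately delta (the approximation drops x_near next to b).
blur = blur_diameter(f / fno, b, x_near)
```

Running the check confirms that the approximation error stays within a few percent for depths of focus that are small relative to b, which is the regime the derivation above assumes.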
 例えば以上のようにして、処理部7のF値算出部76がF値を算出する。なお、許容錯乱円径δは、ボケの許容量に相当するともいえるので、この値を変化させてボケ優先モードなどの複数の被写体9の合焦度合いと背景のぼけ具合のトレードオフを選択することができる。 For example, the F value calculation unit 76 of the processing unit 7 calculates the F value as described above. Note that the permissible circle of confusion diameter δ can be said to correspond to the permissible amount of blur; by varying this value, the trade-off between the degree of focus on the plurality of subjects 9 and the degree of background blur can be selected, as in a blur priority mode.
 図3に戻り、移動検出部78は、被写体選択部73によって選択された被写体9の移動を検出する。種々の公知の検出アルゴリズムが用いられてよい。例えば被写体9が移動しても、その被写体9を追尾することができる。画周期ごとの検出が行われてよい。動画撮影であれば、例えば1フレームごとに被写体9の移動が検出されてよい。移動検出部78による移動の検出は、被写体9の移動方向、例えば撮像装置1に近づく方向及び撮像装置1から離れる方向の検出を含んでよい。 Returning to FIG. 3, the movement detection unit 78 detects the movement of the subject 9 selected by the subject selection unit 73. Various known detection algorithms may be used. For example, even if the subject 9 moves, the subject 9 can be tracked. Detection may be performed for each image period. In the case of video shooting, the movement of the subject 9 may be detected for each frame, for example. The detection of movement by the movement detection unit 78 may include detection of the movement direction of the subject 9, for example, a direction approaching the imaging device 1 and a direction moving away from the imaging device 1.
 一実施形態において、光学系調整部77は、移動検出部78の検出結果に基づいて光学系2を制御する。例えば、移動検出部78によって被写体9の移動が検出されるごとに、これまで説明したように、光学系2の合焦位置FP及びF値が調整されて(更新されて)よい。被写体9を追尾して光学系2を制御することで、光学系2の合焦位置FP及びF値の最適な調整を維持することができる。例えば動画撮影を行っている間、光学系2の合焦位置FP及びF値をスムーズに変更し続けることができる。 In one embodiment, the optical system adjustment section 77 controls the optical system 2 based on the detection result of the movement detection section 78. For example, each time movement of the subject 9 is detected by the movement detection unit 78, the focus position FP and F value of the optical system 2 may be adjusted (updated) as described above. By tracking the subject 9 and controlling the optical system 2, optimal adjustment of the focal position FP and F value of the optical system 2 can be maintained. For example, while shooting a video, the focus position FP and F value of the optical system 2 can be continuously changed smoothly.
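The tracking-driven update described above can be sketched as follows. The class and method names are hypothetical stand-ins (in the embodiment the optical system adjustment unit 77 performs the update), and the two adjustment rules are reduced to placeholders for brevity.

```python
# Hypothetical sketch of tracking-driven re-adjustment: the optics are
# re-adjusted only on frames where some tracked subject actually moved.
from dataclasses import dataclass

@dataclass
class TrackedSubject:
    distance: float  # measured distance to the subject in metres
    moved: bool      # movement detected since the previous frame

class Optics:
    def __init__(self):
        self.focus = None
        self.f_number = None

def update_per_frame(subjects, optics):
    """Return True when an adjustment was performed for this frame."""
    if not any(s.moved for s in subjects):
        return False  # nothing moved: keep the current focus and F-number
    near = min(s.distance for s in subjects)
    far = max(s.distance for s in subjects)
    # Placeholders for the DF-midpoint focus rule and the minimal-F-number
    # rule described in this document:
    optics.focus = (near + far) / 2.0
    optics.f_number = 2.8
    return True

optics = Optics()
update_per_frame([TrackedSubject(1.0, False), TrackedSubject(3.0, False)], optics)
update_per_frame([TrackedSubject(1.0, True), TrackedSubject(3.0, False)], optics)
```

Gating the update on detected movement is what allows the focus position FP and F value to change smoothly during video shooting instead of being recomputed needlessly on static frames.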
 図10は、撮像装置において実行される処理(撮像方法)の例を示すフローチャートである。各処理の具体的な内容はこれまで説明したとおりであるので、詳細な説明は繰り返さない。 FIG. 10 is a flowchart illustrating an example of processing (imaging method) executed by the imaging device. Since the specific contents of each process have been explained so far, detailed explanation will not be repeated.
 ステップS1において、複数の被写体が選択される。処理部7の被写体選択部73は、複数の被写体候補から、複数の被写体9を選択する。 In step S1, multiple subjects are selected. The subject selection unit 73 of the processing unit 7 selects a plurality of subjects 9 from a plurality of subject candidates.
 ステップS2において、DF値が中間値になるように合焦位置が調整される。処理部7の測距部74は、選択された複数の被写体9それぞれを測距する。測距結果に基づいて、最至近被写体9near及び最遠被写体9farが特定される。DF値算出部75は、最至近被写体9nearのDF値及び最遠被写体9farのDF値を算出し、また、それらの中間値を算出する。光学系調整部77は、最至近被写体9near及び最遠被写体9farのDF値が、それらの中間値に近づくように、光学系2の合焦位置FPを調整する。 In step S2, the focus position is adjusted so that the DF value becomes an intermediate value. The distance measuring section 74 of the processing section 7 measures the distances of each of the plurality of selected subjects 9. Based on the distance measurement results, the closest subject 9 near and the farthest subject 9 far are identified. The DF value calculation unit 75 calculates the DF value of the closest subject 9 near and the farthest subject 9 far , and also calculates an intermediate value thereof. The optical system adjustment unit 77 adjusts the focusing position FP of the optical system 2 so that the DF values of the closest subject 9 near and the farthest subject 9 far approach their intermediate values.
 ステップS3において、F値が調整される。処理部7のF値算出部76は、最至近被写体9nearの位置、最遠被写体9farの位置、及び調整後の合焦位置FPに基づいて、最至近被写体9near及び最遠被写体9farを含む被写界深度xが得られる最小のF値を算出する。光学系調整部77は、F値算出部76によって算出されたF値に近づくように、光学系2のF値を調整する。 In step S3, the F value is adjusted. The F value calculation unit 76 of the processing unit 7 calculates, based on the position of the closest subject 9 near , the position of the farthest subject 9 far , and the adjusted focus position FP, the minimum F value at which a depth of field x including the closest subject 9 near and the farthest subject 9 far is obtained. The optical system adjustment unit 77 adjusts the F value of the optical system 2 so that it approaches the F value calculated by the F value calculation unit 76.
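Step S2 can be sketched with a simple thin-lens model; the model, names, and numbers below are illustrative assumptions, not the embodiment's control code. Measuring defocus on the image side, the focus distance at which the DF values of the nearest and farthest subjects become symmetric about their intermediate value lands closer to the nearest subject, consistent with the example of FIG. 8.

```python
# Illustrative sketch of step S2 under a thin-lens model (all names and
# numbers are assumptions): find the focus distance where the image-side
# defocus (DF) values of the nearest and farthest subjects are symmetric
# about zero, i.e. both equal in magnitude to their intermediate value.

def image_distance(f, d):
    """Thin-lens image distance b for an object at distance d: 1/f = 1/d + 1/b."""
    return 1.0 / (1.0 / f - 1.0 / d)

def df_value(f, focus_dist, subject_dist):
    """Image-side offset of the subject's focal point from the sensor,
    with the sensor placed at the image distance of focus_dist."""
    return image_distance(f, subject_dist) - image_distance(f, focus_dist)

def adjust_focus(f, near, far, tol=1e-12):
    """Bisect for the focus distance where DF_near + DF_far = 0."""
    lo, hi = near, far
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if df_value(f, mid, near) + df_value(f, mid, far) < 0:
            lo = mid  # DF skews toward the far subject: focus farther away
        else:
            hi = mid
    return 0.5 * (lo + hi)

f = 0.050             # 50 mm lens (assumed)
near, far = 1.0, 4.0  # nearest / farthest subject distances in metres (assumed)
fp = adjust_focus(f, near, far)
# fp lies closer to the nearest subject than the simple distance midpoint 2.5 m.
```

Step S3 then follows by feeding the residual defocus of the two subjects at fp into the minimal F value calculation of the F value calculation unit 76.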
 例えば以上のようにして、選択した複数の被写体9をすべて含む被写界深度xが得られるように、光学系2の合焦位置FP及びF値が調整される。F値は最小値に調整される。図には表れないが、このように光学系2のパラメータが最適調整された状態で撮像が行われる。 For example, as described above, the focal position FP and F value of the optical system 2 are adjusted so that the depth of field x that includes all of the plurality of selected subjects 9 is obtained. The F value is adjusted to the minimum value. Although not shown in the figure, imaging is performed with the parameters of the optical system 2 optimally adjusted in this way.
2.変形例
 開示される技術は、上記の実施形態に限定されない。いくつかの変形例について述べる。
2. Modifications The disclosed technology is not limited to the above embodiments. Some modifications will be described.
 調整可能なF値は、光学系2の構成等によって制限される。選択した被写体9のすべてを被写界深度xに含めることができない場合がある。その場合、処理部7の被写体選択部73は、最遠被写体9farを選択対象から除外してよい。図11を参照して説明する。 The adjustable F value is limited by the configuration of the optical system 2 and the like. It may not be possible to include all of the selected subjects 9 in the depth of field x. In that case, the subject selection unit 73 of the processing unit 7 may exclude the farthest subject 9 far from the selection targets. This will be explained with reference to FIG.
 図11は、撮像装置において実行される処理(撮像方法)の例を示すフローチャートである。ステップS11の処理は、先の説明した図10のステップS1と同様であるので説明は省略する。 FIG. 11 is a flowchart illustrating an example of processing (imaging method) executed by the imaging device. The process in step S11 is the same as step S1 in FIG. 10 described above, so the explanation will be omitted.
 ステップS12において、選択した被写体のすべてを被写界深度に含めることができるか否かが判断される。例えば、処理部7の被写体選択部73は、最至近被写体9nearと最遠被写体9farとの間の距離が、調整可能な光学系2のF値の上限値すなわち被写界深度xの上限値よりも大きい場合に、選択した被写体9のすべてを被写界深度xに含めることができないと判断する。含めることができると判断された場合(ステップS12:Yes)、ステップS14に処理が進められる。そうでない場合(ステップS12:No)、ステップS13に処理が進められる。 In step S12, it is determined whether all of the selected subjects can be included in the depth of field. For example, the subject selection unit 73 of the processing unit 7 determines that all of the selected subjects 9 cannot be included in the depth of field x when the distance between the closest subject 9 near and the farthest subject 9 far is larger than the upper limit of the depth of field x corresponding to the upper limit of the adjustable F value of the optical system 2. If it is determined that all of the subjects can be included (step S12: Yes), the process proceeds to step S14. Otherwise (step S12: No), the process proceeds to step S13.
 ステップS13において、最遠被写体が選択対象から除外される。この処理は、例えば処理部7の被写体選択部73によって実行される。除外された最遠被写体9farは、選択前の状態、すなわち被写体候補に戻る。残りの被写体9は、選択対象として残り、そのうちの最も最遠に位置する被写体9が、新たな最遠被写体9farとなる。その後、ステップS12に処理が戻される。 In step S13, the farthest subject is excluded from selection. This process is executed, for example, by the subject selection unit 73 of the processing unit 7. The excluded farthest subject 9 far returns to the state before selection, that is, to a subject candidate. The remaining objects 9 remain as selection targets, and the farthest object 9 among them becomes the new farthest object 9 far . After that, the process returns to step S12.
 上記のステップS12~ステップS13の処理を経ることで、選択対象の被写体9がすべて被写界深度xに含まれるようになるまで、選択対象の被写体9の数が絞られる。この状態で、ステップS14に処理が進められることになる。 By going through the processing of steps S12 and S13 above, the number of subjects 9 to be selected is narrowed down until all the subjects 9 to be selected are included in the depth of field x. In this state, the process proceeds to step S14.
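The narrowing loop of steps S12 and S13 can be sketched as follows. The function name, `max_dof`, and the distances are illustrative assumptions; the subject span is compared directly against the largest obtainable depth of field.

```python
# Illustrative sketch of steps S12-S13 (assumed names and values): drop the
# farthest subject until the span of the remaining subjects fits within the
# largest depth of field the optical system can provide.

def narrow_selection(distances, max_dof):
    selected = sorted(distances)
    # S12: can all currently selected subjects fit in the depth of field?
    while len(selected) > 1 and selected[-1] - selected[0] > max_dof:
        selected.pop()  # S13: exclude the farthest subject, then re-check
    return selected

subjects = [1.0, 1.5, 2.2, 6.0]  # subject distances in metres (assumed)
kept = narrow_selection(subjects, max_dof=2.0)
# The outlier at 6.0 m is excluded; the remaining span of 1.2 m fits.
```

Excluded subjects simply revert to being subject candidates, matching the behaviour of step S13.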
 ステップS14及びステップS15の処理は、先に説明した図10のステップS2及びステップS3の処理と同様であるので説明は省略する。 The processing in step S14 and step S15 is the same as the processing in step S2 and step S3 in FIG. 10 described above, so the explanation will be omitted.
 上記のフローチャートの処理によれば、光学系2のF値が制限されるような場合にも対処することができる。 According to the process in the above flowchart, it is possible to deal with the case where the F value of the optical system 2 is limited.
 処理部7の光学系調整部77による光学系2の合焦位置FPの調整は、これまで説明した調整(第1の調整)だけでなく、これとは別の位置に合焦位置FPを調整する第2の調整も含んでよい。第2の調整は、種々の公知の手法による合焦位置FPの調整であってよい。例えば、第2の調整では、光学系2の合焦位置FPが、最遠被写体9far、又は、画像の中央付近に位置する被写体9に近づくように、光学系2の合焦位置FPが調整される。 The adjustment of the focus position FP of the optical system 2 by the optical system adjustment unit 77 of the processing unit 7 may include not only the adjustment described so far (a first adjustment) but also a second adjustment that adjusts the focus position FP to a different position. The second adjustment may adjust the focus position FP by any of various known methods. For example, in the second adjustment, the focus position FP of the optical system 2 is adjusted so that it approaches the farthest subject 9 far or a subject 9 located near the center of the image.
 第1の調整及び第2の調整は、自動的に或いはユーザ操作によって切り替えられてよい。複数の調整を切り替えて用いることで、合焦位置調整の柔軟性を高めることができる。例えば、特定の被写体9が視野内に入ったり視野外に出ていったりしたことに応じて、第1の調整及び第2の調整が切り替えられる。一例として、特定の被写体9が視野内に入っているときには第1の調整が用いられ、その被写体9が視野外に出て行ったときには第2の調整が用いられてよい。例えば、被写体9が視野内から出て行ってから一定時間が経過した後で、第2の調整から第1の調整への切り替えが行われてよい。被写体9の出入りに起因する合焦位置FPの変化を抑制することができる。 The first adjustment and the second adjustment may be switched automatically or by user operation. By switching and using a plurality of adjustments, flexibility in adjusting the focus position can be increased. For example, the first adjustment and the second adjustment are switched depending on whether a specific subject 9 enters or exits the field of view. As an example, the first adjustment may be used when a particular subject 9 is within the field of view, and the second adjustment may be used when the subject 9 moves out of the field of view. For example, the second adjustment may be switched to the first adjustment after a certain period of time has passed since the subject 9 left the field of view. Changes in the focus position FP due to the entrance and exit of the subject 9 can be suppressed.
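One possible reading of this switching logic, with a grace period that suppresses focus jumps when a subject leaves the field of view, is sketched below; the function name, return labels, and grace period are assumptions for illustration.

```python
# Hypothetical sketch of switching between the first (DF-midpoint) and
# second (conventional) focus adjustments, with a grace period after a
# specific subject leaves the field of view.

def select_adjustment(subject_in_view, seconds_since_exit=None, grace=1.0):
    """Return "first" while the subject is in view, keep returning "first"
    for `grace` seconds after it exits to avoid abrupt changes of the
    focus position, and switch to "second" afterwards."""
    if subject_in_view:
        return "first"
    if seconds_since_exit is not None and seconds_since_exit < grace:
        return "first"
    return "second"

select_adjustment(True)        # subject visible
select_adjustment(False, 0.3)  # just left the view: hold the first adjustment
select_adjustment(False, 2.0)  # grace period elapsed: second adjustment
```

Deferring the switch in this way suppresses changes in the focus position FP caused by subjects briefly entering and exiting the view.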
3.効果の例
 以上で説明した技術は、例えば次のように特定される。開示される技術の1つは、撮像装置1である。図1~図9等を参照して説明したように、撮像装置1は、撮像用の光学系2を制御する処理部7、を備える。処理部7による光学系2の制御は、複数の被写体9のうちの最至近被写体9near及び最遠被写体9farそれぞれのDF値が、それぞれのDF値の中間値に近づくように、光学系2の合焦位置FPを調整することを含む。これにより、複数の被写体9の撮像に適した合焦位置調整を行うことができる。
3. Examples of effects The techniques described above are specified, for example, as follows. One of the disclosed techniques is the imaging device 1. As described with reference to FIGS. 1 to 9 and the like, the imaging device 1 includes the processing unit 7 that controls the imaging optical system 2. The control of the optical system 2 by the processing unit 7 includes adjusting the focus position FP of the optical system 2 so that the DF values of the closest subject 9 near and the farthest subject 9 far among the plurality of subjects 9 approach the intermediate value of the respective DF values. This makes it possible to perform focus position adjustment suitable for imaging a plurality of subjects 9.
 図1、図3、図8及び図9等を参照して説明したように、処理部7による光学系2の制御は、最至近被写体9near及び最遠被写体9farを含む被写界深度xが得られる最小のF値に近づくように、光学系のF値を調整することを含んでよい。これにより、最適な合焦位置調整に基づく最適なF値調整が可能になる。 As described with reference to FIG. 1, FIG. 3, FIG. 8, FIG. 9, and the like, the control of the optical system 2 by the processing unit 7 may include adjusting the F value of the optical system so that it approaches the minimum F value at which a depth of field x including the closest subject 9 near and the farthest subject 9 far is obtained. This enables optimal F value adjustment based on optimal focus position adjustment.
 図1及び図2等を参照して説明したように、撮像装置1は、光学系2を介して被写体9を撮像する撮像素子3を備え、撮像素子3は、複数の被写体9それぞれのDF値が得られるように、位相差検波用画素312を含んでよい。例えばこのような撮像素子3を用いて、最至近被写体9near及び最遠被写体9farのDF値を得ることができる。 As described with reference to FIGS. 1, 2, and the like, the imaging device 1 includes the image sensor 3 that images subjects 9 via the optical system 2, and the image sensor 3 may include phase difference detection pixels 312 so that a DF value is obtained for each of the plurality of subjects 9. Using such an image sensor 3, for example, the DF values of the closest subject 9 near and the farthest subject 9 far can be obtained.
 図3~図7等を参照して説明したように、処理部7は、撮像装置1の視野内に位置する複数の被写体候補を認識し、認識した被写体候補から複数の被写体9を選択してよい。例えば、撮像装置1は、撮像装置1の視野内に位置する複数の被写体候補を、被写体9として選択可能な態様で表示する表示操作部5を備えてよい。これにより、例えばフォーカス対象となる複数の被写体9を適切に選択することができる。 As described with reference to FIGS. 3 to 7 and the like, the processing unit 7 may recognize a plurality of subject candidates located within the field of view of the imaging device 1 and select a plurality of subjects 9 from the recognized subject candidates. For example, the imaging device 1 may include a display operation unit 5 that displays a plurality of subject candidates located within the field of view of the imaging device 1 in a manner that allows them to be selected as subjects 9. Thereby, for example, a plurality of subjects 9 to be focused on can be appropriately selected.
 図3等を参照して説明したように、処理部7は、複数の被写体9の移動を検出し、検出結果に基づいて光学系2を制御してよい。これにより、被写体9が移動した場合でも、最適な合焦位置調整を維持することができる。 As described with reference to FIG. 3 and the like, the processing unit 7 may detect the movement of the plurality of subjects 9 and control the optical system 2 based on the detection results. Thereby, even if the subject 9 moves, optimal focus position adjustment can be maintained.
 図5~図7等を参照して説明したように、複数の被写体9それぞれは、同じ物体の異なる部分であってよい。これにより、その物体全体の撮像に適した合焦位置調整を行うことができる。 As described with reference to FIGS. 5 to 7, etc., each of the plurality of subjects 9 may be a different part of the same object. Thereby, it is possible to adjust the focus position suitable for imaging the entire object.
 図11等を参照して説明したように、処理部7は、選択した複数の被写体9のすべてを被写界深度に含めることができない場合には、最遠被写体9farを選択の対象から除外してよい。これにより、例えば光学系2のF値の調整に制限がある場合でも対処することができる。 As described with reference to FIG. 11 and the like, the processing unit 7 may exclude the farthest subject 9 far from the selection when all of the selected subjects 9 cannot be included in the depth of field. This makes it possible to cope with cases where, for example, the adjustment of the F value of the optical system 2 is limited.
 処理部7による光学系2の合焦位置FPの調整は、複数の被写体9のうちの最至近被写体9near及び最遠被写体9farそれぞれのDF値が、それぞれのDF値の中間値に近づくように、光学系2の合焦位置FPを調整する第1の調整と、第1の調整とは別の位置に光学系2の合焦位置FPを調整する第2の調整と、を含んでよい。これにより、例えば合焦位置調整の柔軟性を高めることができる。 The adjustment of the focus position FP of the optical system 2 by the processing unit 7 may include a first adjustment that adjusts the focus position FP of the optical system 2 so that the DF values of the closest subject 9 near and the farthest subject 9 far among the plurality of subjects 9 approach the intermediate value of the respective DF values, and a second adjustment that adjusts the focus position FP of the optical system 2 to a position different from that of the first adjustment. This can increase the flexibility of focus position adjustment, for example.
 図10等を参照して説明した撮像方法も、開示される技術の1つである。撮像方法は、複数の被写体9のうちの最至近被写体9near及び最遠被写体9farそれぞれのDF値が、それぞれのDF値の中間値に近づくように、撮像用の光学系2の合焦位置FPを調整すること、を含む。このような撮像方法によっても、これまで説明したように、複数の被写体9の撮像に適した合焦位置調整を行うことができる。 The imaging method described with reference to FIG. 10 and the like is also one of the disclosed techniques. The imaging method is such that the focusing position of the imaging optical system 2 is adjusted so that the DF values of the closest subject 9 near and the farthest subject 9 far among the plurality of subjects 9 approach the intermediate value of the respective DF values. including adjusting the FP. With such an imaging method, as described above, it is possible to perform focus position adjustment suitable for imaging a plurality of subjects 9.
 図1等を参照して説明した撮像プログラム61も、開示される技術の1つである。撮像プログラム61は、プロセッサ(処理部7)に、複数の被写体9のうちの最至近被写体9near及び最遠被写体9farそれぞれのDF値が、それぞれのDF値の中間値に近づくように、撮像用の光学系2の合焦位置FPを調整する処理、を実行させる。このような撮像プログラム61によっても、これまで説明したように、複数の被写体9の撮像に適した合焦位置調整を行うことができる。撮像プログラム61が記録されたコンピュータ読み取り可能な記録媒体(記憶部6)も、開示される技術の1つである。 The imaging program 61 described with reference to FIG. 1 and the like is also one of the disclosed techniques. The imaging program 61 causes a processor (the processing unit 7) to execute a process of adjusting the focus position FP of the imaging optical system 2 so that the DF values of the closest subject 9 near and the farthest subject 9 far among the plurality of subjects 9 approach the intermediate value of the respective DF values. Such an imaging program 61 also enables focus position adjustment suitable for imaging a plurality of subjects 9, as described above. A computer-readable recording medium (the storage unit 6) on which the imaging program 61 is recorded is also one of the disclosed techniques.
 なお、本開示に記載された効果は、あくまで例示であって、開示された内容に限定されない。他の効果があってもよい。 Note that the effects described in the present disclosure are merely examples and are not limited to the disclosed contents. There may also be other effects.
 以上、本開示の実施形態について説明したが、本開示の技術的範囲は、上述の実施形態そのままに限定されるものではなく、本開示の要旨を逸脱しない範囲において種々の変更が可能である。また、異なる実施形態及び変形例にわたる構成要素を適宜組み合わせてもよい。 Although the embodiments of the present disclosure have been described above, the technical scope of the present disclosure is not limited to the above-described embodiments as they are, and various changes can be made without departing from the gist of the present disclosure. Furthermore, components of different embodiments and modifications may be combined as appropriate.
 なお、本技術は以下のような構成も取ることができる。
(1)
 撮像用の光学系を制御する処理部、
 を備え、
 前記処理部による前記光学系の制御は、複数の被写体のうちの最至近被写体及び最遠被写体それぞれのデフォーカス値が、それぞれのデフォーカス値の中間値に近づくように、前記光学系の合焦位置を調整することを含む、
 撮像装置。
(2)
 前記処理部による前記光学系の制御は、前記最至近被写体及び前記最遠被写体を含む被写界深度が得られる最小のF値に近づくように、前記光学系のF値を調整することを含む、
 (1)に記載の撮像装置。
(3)
 前記光学系を介して被写体を撮像する撮像素子を備え、
 前記撮像素子は、前記複数の被写体それぞれのデフォーカス値が得られるように、位相差検波用画素を含む、
 (1)又は(2)に記載の撮像装置。
(4)
 前記処理部は、前記撮像装置の視野内に位置する複数の被写体候補を認識し、認識した複数の被写体候補から前記複数の被写体を選択する、
 (1)~(3)のいずれかに記載の撮像装置。
(5)
 前記撮像装置の視野内に位置する複数の被写体候補を、前記被写体として選択可能な態様で表示する表示操作部を備える、
 (1)~(4)のいずれかに記載の撮像装置。
(6)
 前記処理部は、前記複数の被写体の移動を検出し、検出結果に基づいて前記光学系を制御する、
 (1)~(5)のいずれかに記載の撮像装置。
(7)
 前記複数の被写体それぞれは、同じ物体の異なる部分である、
 (1)~(6)のいずれかに記載の撮像装置。
(8)
 前記処理部は、選択した前記複数の被写体のすべてを被写界深度に含めることができない場合には、前記最遠被写体を前記選択の対象から除外する、
 (1)~(7)のいずれかに記載の撮像装置。
(9)
 前記処理部による前記光学系の合焦位置の調整は、
  前記複数の被写体のうちの最至近被写体及び最遠被写体それぞれのデフォーカス値が、それぞれのデフォーカス値の中間値に近づくように、前記光学系の合焦位置を調整する第1の調整と、
  前記第1の調整とは別の位置に光学系の合焦位置を調整する第2の調整と、
 を含む、
 (1)~(8)のいずれかに記載の撮像装置。
(10)
 複数の被写体のうちの最至近被写体及び最遠被写体それぞれのデフォーカス値が、それぞれのデフォーカス値の中間値に近づくように、撮像用の光学系の合焦位置を調整すること、
 を含む、
 撮像方法。
(11)
 プロセッサに、
 複数の被写体のうちの最至近被写体及び最遠被写体それぞれのデフォーカス値が、それぞれのデフォーカス値の中間値に近づくように、撮像用の光学系の合焦位置を調整する処理、
 を実行させる、
 撮像プログラム。
Note that the present technology can also have the following configuration.
(1)
a processing unit that controls the optical system for imaging;
Equipped with
The control of the optical system by the processing unit includes adjusting a focus position of the optical system so that defocus values of a closest subject and a farthest subject among a plurality of subjects approach an intermediate value of the respective defocus values,
Imaging device.
(2)
The control of the optical system by the processing unit includes adjusting an F value of the optical system so that the F value approaches a minimum F value at which a depth of field including the closest subject and the farthest subject is obtained,
The imaging device according to (1).
(3)
comprising an image sensor that captures an image of a subject through the optical system,
The image sensor includes a phase difference detection pixel so as to obtain a defocus value for each of the plurality of subjects.
The imaging device according to (1) or (2).
(4)
The processing unit recognizes a plurality of subject candidates located within a field of view of the imaging device, and selects the plurality of subjects from the recognized plurality of subject candidates.
The imaging device according to any one of (1) to (3).
(5)
comprising a display operation unit that displays a plurality of subject candidates located within the field of view of the imaging device in a manner selectable as the subject;
The imaging device according to any one of (1) to (4).
(6)
The processing unit detects movement of the plurality of subjects and controls the optical system based on the detection result.
The imaging device according to any one of (1) to (5).
(7)
Each of the plurality of subjects is a different part of the same object,
The imaging device according to any one of (1) to (6).
(8)
The processing unit excludes the farthest subject from the selection target if all of the selected plurality of subjects cannot be included in the depth of field.
The imaging device according to any one of (1) to (7).
(9)
Adjustment of the focusing position of the optical system by the processing unit includes:
A first adjustment that adjusts the focusing position of the optical system so that the defocus values of the closest object and the farthest object among the plurality of objects approach an intermediate value of the respective defocus values;
a second adjustment that adjusts the focusing position of the optical system to a position different from the first adjustment;
including,
The imaging device according to any one of (1) to (8).
(10)
adjusting the focusing position of the imaging optical system so that the defocus values of the closest object and the farthest object among the plurality of objects approach an intermediate value of the respective defocus values;
including,
Imaging method.
(11)
to the processor,
A process of adjusting the focus position of the imaging optical system so that the defocus values of the closest object and the farthest object among the plurality of objects approach an intermediate value of the respective defocus values;
to execute,
Imaging program.
   1 撮像装置
   2 光学系
  21 レンズ
   3 撮像素子
  31 画素
  30 撮像面
 311 撮像用画素
 312 位相差検波用画素
   4 駆動部
   5 表示操作部
   6 記憶部
  61 撮像プログラム
   7 処理部
  71 画像生成部
  72 物体認識部
  73 被写体選択部
  74 測距部
  75 DF値算出部
  76 F値算出部
  77 光学系調整部
  78 移動検出部
   9 被写体
9near 最至近被写体
 9far 最遠被写体
  FP 合焦位置
   x 被写界深度
1 Imaging device 2 Optical system 21 Lens 3 Imaging element 31 Pixel 30 Imaging surface 311 Imaging pixel 312 Phase difference detection pixel 4 Drive section 5 Display operation section 6 Storage section 61 Imaging program 7 Processing section 71 Image generation section 72 Object recognition section 73 Subject selection section 74 Distance measurement section 75 DF value calculation section 76 F value calculation section 77 Optical system adjustment section 78 Movement detection section 9 Subject 9 nearest subject 9 far farthest subject FP Focusing position x Depth of field

Claims (11)

  1.  撮像用の光学系を制御する処理部、
     を備え、
     前記処理部による前記光学系の制御は、複数の被写体のうちの最至近被写体及び最遠被写体それぞれのデフォーカス値が、それぞれのデフォーカス値の中間値に近づくように、前記光学系の合焦位置を調整することを含む、
     撮像装置。
    a processing unit that controls the optical system for imaging;
    Equipped with
    The control of the optical system by the processing unit includes adjusting a focus position of the optical system so that defocus values of a closest subject and a farthest subject among a plurality of subjects approach an intermediate value of the respective defocus values,
    Imaging device.
  2.  前記処理部による前記光学系の制御は、前記最至近被写体及び前記最遠被写体を含む被写界深度が得られる最小のF値に近づくように、前記光学系のF値を調整することを含む、
     請求項1に記載の撮像装置。
    The control of the optical system by the processing unit includes adjusting an F value of the optical system so that the F value approaches a minimum F value at which a depth of field including the closest subject and the farthest subject is obtained,
    The imaging device according to claim 1.
  3.  前記光学系を介して被写体を撮像する撮像素子を備え、
     前記撮像素子は、前記複数の被写体それぞれのデフォーカス値が得られるように、位相差検波用画素を含む、
     請求項1に記載の撮像装置。
    comprising an image sensor that captures an image of a subject through the optical system,
    The image sensor includes a phase difference detection pixel so as to obtain a defocus value for each of the plurality of subjects.
    The imaging device according to claim 1.
  4.  前記処理部は、前記撮像装置の視野内に位置する複数の被写体候補を認識し、認識した複数の被写体候補から前記複数の被写体を選択する、
     請求項1に記載の撮像装置。
    The processing unit recognizes a plurality of subject candidates located within a field of view of the imaging device, and selects the plurality of subjects from the recognized plurality of subject candidates.
    The imaging device according to claim 1.
  5.  前記撮像装置の視野内に位置する複数の被写体候補を、前記被写体として選択可能な態様で表示する表示操作部を備える、
     請求項1に記載の撮像装置。
    comprising a display operation unit that displays a plurality of subject candidates located within the field of view of the imaging device in a manner selectable as the subject;
    The imaging device according to claim 1.
  6.  前記処理部は、前記複数の被写体の移動を検出し、検出結果に基づいて前記光学系を制御する、
     請求項1に記載の撮像装置。
    The processing unit detects movement of the plurality of subjects and controls the optical system based on the detection result.
    The imaging device according to claim 1.
  7.  前記複数の被写体それぞれは、同じ物体の異なる部分である、
     請求項1に記載の撮像装置。
    Each of the plurality of subjects is a different part of the same object,
    The imaging device according to claim 1.
  8.  前記処理部は、選択した前記複数の被写体のすべてを被写界深度に含めることができない場合には、前記最遠被写体を前記選択の対象から除外する、
     請求項1に記載の撮像装置。
    The processing unit excludes the farthest subject from the selection target if all of the selected plurality of subjects cannot be included in the depth of field.
    The imaging device according to claim 1.
  9.  前記処理部による前記光学系の合焦位置の調整は、
      前記複数の被写体のうちの最至近被写体及び最遠被写体それぞれのデフォーカス値が、それぞれのデフォーカス値の中間値に近づくように、前記光学系の合焦位置を調整する第1の調整と、
      前記第1の調整とは別の位置に光学系の合焦位置を調整する第2の調整と、
     を含む、
     請求項1に記載の撮像装置。
    Adjustment of the focusing position of the optical system by the processing unit includes:
    A first adjustment that adjusts the focusing position of the optical system so that the defocus values of the closest object and the farthest object among the plurality of objects approach an intermediate value of the respective defocus values;
    a second adjustment that adjusts the focusing position of the optical system to a position different from the first adjustment;
    including,
    The imaging device according to claim 1.
  10.  複数の被写体のうちの最至近被写体及び最遠被写体それぞれのデフォーカス値が、それぞれのデフォーカス値の中間値に近づくように、撮像用の光学系の合焦位置を調整すること、
     を含む、
     撮像方法。
    adjusting the focusing position of the imaging optical system so that the defocus values of the closest object and the farthest object among the plurality of objects approach an intermediate value of the respective defocus values;
    including,
    Imaging method.
  11.  プロセッサに、
     複数の被写体のうちの最至近被写体及び最遠被写体それぞれのデフォーカス値が、それぞれのデフォーカス値の中間値に近づくように、撮像用の光学系の合焦位置を調整する処理、
     を実行させる、
     撮像プログラム。
    to the processor,
    A process of adjusting the focus position of the imaging optical system so that the defocus values of the closest object and the farthest object among the plurality of objects approach an intermediate value of the respective defocus values;
    to execute,
    Imaging program.
PCT/JP2023/023851 2022-07-27 2023-06-27 Imaging device, imaging method, and imaging program WO2024024375A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2022-119325 2022-07-27
JP2022119325 2022-07-27

Publications (1)

Publication Number Publication Date
WO2024024375A1 true WO2024024375A1 (en) 2024-02-01

Family

ID=89706073

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2023/023851 WO2024024375A1 (en) 2022-07-27 2023-06-27 Imaging device, imaging method, and imaging program

Country Status (1)

Country Link
WO (1) WO2024024375A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008129470A (en) * 2006-11-22 2008-06-05 Canon Inc Imaging apparatus and portable electronic equipment
JP2012181324A (en) * 2011-03-01 2012-09-20 Nikon Corp Imaging apparatus
JP2015170919A (en) * 2014-03-05 2015-09-28 オリンパス株式会社 Imaging apparatus, and control method of imaging apparatus
JP2017009942A (en) * 2015-06-26 2017-01-12 キヤノン株式会社 Imaging device
JP2020067503A (en) * 2018-10-22 2020-04-30 キヤノン株式会社 Imaging device, monitoring system, method for controlling imaging device, and program


Similar Documents

Publication Publication Date Title
RU2456654C2 (en) Image capturing device, control method thereof and data medium
JP5247044B2 (en) Imaging device
US7826735B2 (en) Auto focus unit and digital camera
US9411128B2 (en) Automatic focusing apparatus with cyclic pattern determination
JP4846004B2 (en) Imaging system and lens apparatus
US8203643B2 (en) Automatic focusing device
US20100007748A1 (en) Imaging apparatus and imaging method
JP2016142925A (en) Imaging apparatus, method of controlling the same, program, and storage medium
JP5845023B2 (en) Focus detection device, lens device having the same, and imaging device
EP1458181A1 (en) Digital camera with distance-dependent focussing method
JP6525813B2 (en) Imaging device, control method, program, and storage medium
US20110044676A1 (en) Image pickup system having ranging function
JP2009265239A (en) Focus detecting apparatus, focus detection method, and camera
JP6812387B2 (en) Image processing equipment and image processing methods, programs, storage media
JP6659100B2 (en) Imaging device
JP2021027523A5 (en)
JP6486098B2 (en) Imaging apparatus and control method thereof
JP6140945B2 (en) Focus adjustment device and imaging device
US9742983B2 (en) Image capturing apparatus with automatic focus adjustment and control method thereof, and storage medium
JP2002365524A (en) Autofocus device and imaging device using the same
JPH09211308A (en) Mechanism for detecting object of automatic focusing image pickup unit
JP5322593B2 (en) Focus adjustment apparatus and method
WO2024024375A1 (en) Imaging device, imaging method, and imaging program
JP2020144158A (en) Imaging device and control device therefor
JP5256847B2 (en) Imaging device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23846109

Country of ref document: EP

Kind code of ref document: A1