WO2018179695A1 - Control device, imaging device, control method, and program - Google Patents
Control device, imaging device, control method, and program
- Publication number
- WO2018179695A1 (PCT/JP2018/001304)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- main subject
- distance
- image
- area
- unit
Classifications
- H04N23/60: Control of cameras or camera modules
- H04N23/61, H04N23/611: Control based on recognised objects, including parts of the human body
- H04N23/63: Control by using electronic viewfinders
- H04N23/67, H04N23/672: Focus control based on electronic image sensor signals, including phase difference signals
- H04N23/10, H04N23/11: Generating image signals from visible and infrared light wavelengths
- H04N25/60, H04N25/616: Noise processing involving a correlated sampling function, e.g. correlated double sampling [CDS]
- G03B13/32, G03B13/34, G03B13/36: Means for focusing; power focusing; autofocus systems
- G03B19/12: Reflex cameras with single objective and a movable reflector or a partly-transmitting mirror
Definitions
- the present disclosure relates to a control device, an imaging device, a control method, and a program.
- This disclosure is intended to provide a control device, an imaging device, a control method, and a program that can accurately detect an object of autofocus.
- a main subject detection unit that detects a first main subject among subjects included in the first image and acquires main subject information indicating the first main subject;
- a distance information calculation unit that detects a distance of a subject included in the first image and acquires first distance information indicating the distance;
- a detection unit that detects a distance including the first main subject based on the main subject information and the first distance information, and acquires main subject distance information indicating the distance including the first main subject.
- The present disclosure is also, for example, an imaging device including the control device described above and an imaging unit.
- The present disclosure is also, for example, a control method in which a main subject detection unit detects a first main subject among subjects included in a first image and acquires main subject information indicating the first main subject,
- a distance information calculation unit detects the distance of a subject included in the first image and acquires first distance information indicating the distance, and
- a detection unit detects a distance including the first main subject based on the main subject information and the first distance information and acquires main subject distance information indicating the distance including the first main subject.
- The present disclosure is also, for example, a program that causes a computer to execute the control method described above.
- According to at least one embodiment of the present disclosure, the target of autofocus can be accurately detected.
- Note that the effects described here are not necessarily limiting, and any of the effects described in the present disclosure may be obtained. The contents of the present disclosure are not to be construed as limited by the exemplified effects.
- FIG. 1 is a diagram illustrating a schematic configuration of an imaging apparatus according to an embodiment of the present disclosure.
- FIG. 2 is a block diagram illustrating a configuration example of an imaging apparatus according to an embodiment of the present disclosure.
- FIG. 3 is a diagram for explaining a problem to be considered when performing autofocus.
- FIG. 4 is a diagram for explaining processing performed by each functional block in the control unit.
- FIG. 5A to FIG. 5C are diagrams for reference in describing the processing performed in the imaging apparatus according to an embodiment of the present disclosure.
- FIG. 6A and FIG. 6B are diagrams for reference in describing the processing performed in the imaging apparatus according to an embodiment of the present disclosure.
- FIG. 7 is a diagram for reference in describing processing performed in the imaging apparatus according to an embodiment of the present disclosure.
- FIG. 8 is a diagram for reference in describing processing performed in the imaging apparatus according to an embodiment of the present disclosure.
- FIG. 9 is a flowchart illustrating a flow of processing performed in the imaging apparatus according to an embodiment of the present disclosure.
- FIG. 10 is a diagram for reference in describing the first control performed in the imaging apparatus according to an embodiment of the present disclosure.
- FIG. 11 is a flowchart illustrating a flow of first control processing performed in the imaging apparatus according to an embodiment of the present disclosure.
- FIG. 12 is a diagram for reference in describing the first control performed in the imaging apparatus according to an embodiment of the present disclosure.
- FIG. 13 is a flowchart illustrating a flow of a second control process performed in the imaging apparatus according to an embodiment of the present disclosure.
- FIG. 14 is a diagram for reference in describing the second control performed in the imaging apparatus according to an embodiment of the present disclosure.
- FIG. 15 is a flowchart illustrating a flow of third control processing performed in the imaging apparatus according to an embodiment of the present disclosure.
- FIG. 16 is a block diagram illustrating an example of a schematic configuration of the vehicle control system.
- FIG. 17 is an explanatory diagram illustrating an example of the installation positions of the outside-vehicle information detection unit and the imaging unit.
- FIG. 1 is a schematic cross-sectional view illustrating a schematic configuration of an imaging apparatus 1 according to an embodiment of the present disclosure.
- The imaging apparatus 1 includes a housing (body) 10, an optical imaging system 20 including a photographing lens 22, a semi-transmissive mirror 11, an imaging element 12A, an image plane phase difference AF sensor 12B, a dedicated phase difference AF sensor 13, an electronic viewfinder 14, and a display 15.
- An imaging unit is configured by the imaging element 12A and the optical imaging system 20.
- The optical imaging system 20 is attached to the housing 10.
- the optical imaging system 20 is, for example, a so-called interchangeable lens unit, and a photographing lens 22 and a diaphragm are provided in a lens barrel 21.
- the photographing lens 22 is driven by a focus drive system (not shown), and AF (Auto-Focus) operation is enabled.
- the optical imaging system 20 may be configured integrally with the housing 10 or the optical imaging system 20 may be detachable from the housing 10 via a predetermined adapter.
- the semi-transmissive mirror 11 is provided in the housing 10 between the photographing lens 22 and the image pickup device 12A in the housing 10. Subject light is incident on the semi-transmissive mirror 11 via the photographing lens 22.
- the semi-transmissive mirror 11 reflects a part of the subject light incident through the photographing lens 22 in the direction of the upper dedicated phase difference AF sensor 13 and transmits a part of the subject light to the image sensor 12A. Note that the transmissivity, reflectance, and the like of the semi-transmissive mirror 11 can be arbitrarily set.
- Inside the housing 10, an image sensor 12A for generating a captured image is provided.
- a CCD (Charge Coupled Device), a CMOS (Complementary Metal Oxide Semiconductor) or the like is used as the image sensor 12A.
- The image sensor 12A photoelectrically converts subject light incident through the photographing lens 22 into a charge amount and generates an image signal.
- The image signal is subjected to predetermined signal processing such as white balance adjustment processing and gamma correction processing, and is finally stored as image data in a storage medium in the imaging apparatus 1 or in a portable memory attachable to and detachable from the imaging apparatus 1.
- The image sensor 12A includes, for example, R (Red) pixels, G (Green) pixels, and B (Blue) pixels, which are normal imaging pixels, and an image plane phase difference AF sensor 12B that performs phase difference focus detection. That is, the image plane phase difference AF sensor 12B is configured by arranging image plane phase difference pixels in a part of the image sensor 12A. Each pixel constituting the image sensor photoelectrically converts incident light from the subject into a charge amount and outputs a pixel signal.
- the dedicated phase difference AF sensor 13 is provided, for example, in the housing 10 so as to be positioned above the semi-transmissive mirror 11 and in front of the image sensor 12A.
- the dedicated phase difference AF sensor 13 is, for example, a phase difference detection type AF dedicated module.
- the subject light collected by the photographing lens 22 is reflected by the semi-transmissive mirror 11 and enters the dedicated phase difference AF sensor 13.
- the focus detection signal detected by the dedicated phase difference AF sensor 13 is supplied to a processing unit that calculates the defocus amount in the imaging apparatus 1.
- the imaging apparatus 1 performs AF using the dedicated phase difference AF sensor 13 and the image plane phase difference AF sensor 12B.
- However, the AF method is not limited to this; the imaging apparatus 1 may perform AF using only one of the dedicated phase difference AF sensor 13 and the image plane phase difference AF sensor 12B, or using another known AF method.
- A hybrid of a plurality of AF methods may also be used. If the imaging apparatus 1 does not include the dedicated phase difference AF sensor 13, AF is performed using the image plane phase difference AF sensor 12B.
- the housing 10 is provided with an electronic view finder (EVF) 14.
- The electronic viewfinder 14 includes, for example, a liquid crystal display (LCD), an organic EL (Electroluminescence) display, or the like.
- the electronic viewfinder 14 is supplied with image data obtained by processing an image signal extracted from the image sensor 12A by a signal processing unit (not shown).
- the electronic viewfinder 14 displays images corresponding to the image data as real-time images (through images).
- the housing 10 is provided with a display 15.
- The display 15 is a flat display such as a liquid crystal display or an organic EL display.
- the display 15 is supplied with image data obtained by processing an image signal extracted from the image sensor 12A by a signal processing unit (not shown), and the display 15 displays them as a real-time image (so-called through image).
- the display 15 is provided on the back side of the housing, but is not limited thereto, and may be provided on the top surface of the housing, or may be movable or removable.
- the display 15 may not be provided in the housing 10, and in this case, a television device or the like connected to the imaging device 1 may function as the display 15.
- An area in which AF corresponding to the set mode is performed (hereinafter referred to as an AF area as appropriate) is displayed superimposed on the real-time image.
- The imaging apparatus 1 includes, for example, a preprocessing circuit 31, a camera processing circuit 32, an image memory 33, a control unit 34, a graphic I/F (Interface) 35, an input unit 36, an R/W (reader/writer) 37, and a storage medium 38.
- The optical imaging system 20 includes the photographing lens 22 (including a focus lens and a zoom lens) for condensing light from a subject on the imaging element 12A, a lens driving mechanism 22A for adjusting the focus by moving the focus lens, a shutter mechanism, and an iris mechanism. These are driven based on control signals from the control unit 34.
- the lens driving mechanism 22A realizes an AF operation by moving the photographing lens 22 along the optical axis direction in accordance with a control signal supplied from a control unit 34 (for example, an AF control unit 34D described later).
- the optical image of the subject obtained through the optical imaging system 20 is formed on the imaging element 12A as an imaging device.
- the dedicated phase difference AF sensor 13 is, for example, a phase difference detection type autofocus dedicated sensor.
- the subject light collected by the photographing lens 22 is reflected by the semi-transmissive mirror 11 and enters the dedicated phase difference AF sensor 13.
- the focus detection signal detected by the dedicated phase difference AF sensor 13 is supplied to the control unit 34.
- The imaging element 12A has normal imaging pixels and phase difference detection pixels.
- the image plane phase difference AF sensor 12B is an autofocus sensor including a plurality of phase difference detection pixels.
- the focus detection signal detected by the image plane phase difference AF sensor 12B is supplied to the control unit 34.
- The preprocessing circuit 31 performs sample-and-hold and the like on the image signal output from the image sensor 12A so as to maintain a good S/N (Signal/Noise) ratio by CDS (Correlated Double Sampling) processing. It further controls the gain by AGC (Auto Gain Control) processing, performs A/D (Analog/Digital) conversion, and outputs a digital image signal.
- the camera processing circuit 32 performs signal processing such as white balance adjustment processing, color correction processing, gamma correction processing, Y / C conversion processing, and AE (Auto-Exposure) processing on the image signal from the preprocessing circuit 31.
- The image memory 33 is a buffer memory composed of a volatile memory, for example, a DRAM (Dynamic Random Access Memory), and temporarily stores image data that has been subjected to predetermined processing by the preprocessing circuit 31 and the camera processing circuit 32.
- the control unit 34 includes, for example, a CPU (Central Processing Unit), a RAM (Random Access Memory), a ROM (Read Only Memory), and the like.
- the ROM stores a program that is read and executed by the CPU.
- the RAM is used as a work memory for the CPU.
- the CPU controls the entire imaging apparatus 1 by executing various processes in accordance with programs stored in the ROM and issuing commands.
- The control unit 34 includes, as functional blocks, for example, a main subject detection unit 34A, a depth detection unit 34B that is an example of a distance information calculation unit, a foreground depth range detection unit 34C that is an example of a detection unit, and an AF control unit 34D. The processing performed by each of these functional blocks will be described later.
- In the following description, the main subject is referred to as the foreground, and a subject that is not the main subject is referred to as the background.
- The main subject is a subject considered important to the user among a plurality of subjects; more specifically, it is a subject including a portion that the user wants to bring into focus.
- the graphic I / F 35 generates an image signal to be displayed on the display 15 from the image signal supplied from the control unit 34 and supplies the signal to the display 15 to display an image.
- the display 15 displays a through image being captured, an image recorded on the storage medium 38, and the like.
- The input unit 36 includes, for example, a power button for switching the power on and off, a release button for instructing the start of recording of a captured image, an operator for zoom adjustment, and a touch screen configured integrally with the display 15.
- When an input is made to the input unit 36, a control signal corresponding to the input is generated and output to the control unit 34.
- The control unit 34 then performs arithmetic processing and control corresponding to the control signal.
- the R / W 37 is an interface to which a storage medium 38 for recording image data generated by imaging is connected.
- the R / W 37 writes the data supplied from the control unit 34 to the storage medium 38, and outputs the data read from the storage medium 38 to the control unit 34.
- the storage medium 38 is a large-capacity storage medium such as a hard disk, a memory stick (registered trademark of Sony Corporation), and an SD memory card, for example.
- Images are stored in a compressed state based on a standard such as JPEG. EXIF (Exchangeable Image File Format) data including additional information, such as information on the saved image and the imaging date and time, is also stored in association with the image.
- the camera processing circuit 32 performs image quality correction processing on the image signal supplied from the preprocessing circuit 31 and supplies the image signal to the graphic I / F 35 via the control unit 34 as a through image signal. Thereby, a through image is displayed on the display 15. The user can adjust the angle of view by looking at the through image displayed on the display 15.
- The control unit 34 outputs a control signal to the optical imaging system 20 to operate the shutter constituting the optical imaging system 20.
- an image signal for one frame is output from the image sensor 12A.
- The camera processing circuit 32 performs image quality correction processing on the image signal for one frame supplied from the image sensor 12A via the preprocessing circuit 31, and supplies the processed image signal to the control unit 34.
- The control unit 34 compresses and encodes the input image signal and supplies the generated encoded data to the R/W 37, whereby the data file of the captured still image is stored in the storage medium 38 via the R/W 37. At the time of moving image shooting, the above-described processing is performed in real time in accordance with a moving image shooting instruction. It is also possible to take a still image during moving image shooting by pressing the shutter button.
- When a stored still image is to be reproduced, the control unit 34 reads the selected still image file from the storage medium 38 via the R/W 37 in response to an operation input from the input unit 36. Decompression decoding processing is performed on the read image file, and the decoded image signal is supplied to the graphic I/F 35 via the control unit 34. As a result, the still image stored in the storage medium 38 is displayed on the display 15.
- The main subject detection process often requires a longer calculation time than the distance information detection process (distance calculation) for AF, and in that case a time lag (delay) arises with respect to the latest distance measurement result.
- When AF is attempted using the latest distance measurement result, the recognition result of the main subject is only the temporally previous (old) result, so the movement of the subject during the delay is not taken into consideration.
- Conversely, if AF is performed after the computation of the main subject detection process is completed, focusing is based on a temporally previous distance measurement result, so AF followability deteriorates when the subject moves in the Z-axis direction (depth direction).
- FIG. 3 shows an example in which the distance information D1 is obtained as the distance measurement result of the player HA, and the distance information D2 is obtained as the distance measurement result of the player HB as another player.
- an image IMb is acquired as an input image at the next time Tb (after time).
- the image IMb is, for example, an image after one frame with respect to the image IMa.
- the image IMb may be an image several frames after the image IMa.
- the image IMb includes a player HC in addition to the players HA and HB described above.
- Distance information detection processing is performed on the image IMb, and distance information on the subject in the image IMb is obtained as a distance measurement result.
- FIG. 3 shows an example in which distance information D1 is obtained as the distance measurement result of the player HA, distance information D2 as that of the player HB, and distance information D3 as that of the player HC.
- As described above, the main subject detection process requires a longer calculation time than the distance information detection process. For this reason, the main subject detection process is not yet completed when the distance information detection process is completed and the distance measurement result is obtained; therefore, when AF is performed using the distance measurement result at time Tb, the movement of the subject between times Ta and Tb is not taken into consideration. On the other hand, if the process waits until the main subject detection process is completed, the player HA moves in the Z-axis direction (depth direction) during that time, a deviation occurs between the actual distance and the distance measurement result, and AF followability with respect to movement in the Z-axis direction becomes worse.
- The foreground depth range is a distance range including the main subject.
- In the present embodiment, the foreground depth range is described as a predetermined distance range including the main subject, but it may also be the distance itself.
- FIG. 4 shows the main subject detection unit 34A, the depth detection unit 34B, the foreground depth range detection unit 34C, and the AF control unit 34D, which are the functional blocks constituting the control unit 34.
- FIG. 5A shows an example of the image IM1.
- The image IM1 is an image in which grass exists on the near side, a person H1 running from right to left across the screen exists behind it, and a forest (indicated by three trees in FIG. 5A) exists in the background.
- the image IM1 is input to each of the main subject detection unit 34A and the depth detection unit 34B.
- the main subject detection unit 34A performs main subject detection processing for detecting a main subject (first main subject) among subjects included in the image IM1. For example, the main subject detection unit 34A detects a motion based on a difference between frames, and regards the subject from which the motion is detected as a main subject.
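- As an illustration, the frame-difference approach mentioned above could be sketched as follows; this is a minimal sketch in Python with hypothetical names and an illustrative threshold, as the patent does not specify an implementation:

```python
import numpy as np

def detect_main_subject_map(prev_frame: np.ndarray,
                            curr_frame: np.ndarray,
                            threshold: float = 12.0) -> np.ndarray:
    """Binary main subject map: 1 marks pixels whose brightness changed
    between two grayscale frames, i.e. candidates for a moving main
    subject. The threshold value is an illustrative assumption."""
    diff = np.abs(curr_frame.astype(np.float32) - prev_frame.astype(np.float32))
    return (diff > threshold).astype(np.uint8)
```

In practice the raw difference map also contains isolated noise pixels, which is consistent with the noise-induced white portions mentioned for the main subject map MA1 below.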
- the main subject detection unit 34A detects a main subject by performing a main subject detection process, and acquires a main subject map MA1 (an example of main subject information) indicating the main subject. That is, the main subject map MA1 is information indicating a region where the main subject exists. An example of the main subject map MA1 is shown in FIG. 5B.
- A white portion in the main subject map MA1 indicates a region identified as the moving main subject.
- In this example, white portions other than those corresponding to the person H1 exist locally due to the influence of noise.
- the main subject map MA1 is supplied to the foreground depth range detection unit 34C.
- the depth detection unit 34B detects the distance of the subject included in the image IM1 by performing distance information detection processing.
- the depth detection unit 34B performs distance information detection processing using sensor information obtained from the image plane phase difference AF sensor 12B and the dedicated phase difference AF sensor 13, for example. It should be noted that distance information detection processing may be performed using sensor information obtained from one of the image plane phase difference AF sensor 12B and the dedicated phase difference AF sensor 13 in accordance with the area where distance information is detected.
- the depth detection unit 34B detects the distance of the subject included in the image IM1, and acquires the depth map DMA1 (an example of first distance information).
- An example of the depth map DMA1 is shown in FIG. 5C. In the depth map DMA1 shown in FIG. 5C, the distance is represented by shades of black and white, with closer distances shown in white and farther distances in black.
- the depth map DMA1 is supplied to the foreground depth range detection unit 34C.
- the foreground depth range detector 34C detects the foreground depth range based on the main subject map MA1 and the depth map DMA1. An example of processing for detecting the foreground depth range performed by the foreground depth range detection unit 34C will be described.
- The foreground depth range detection unit 34C performs clustering (also referred to as cluster analysis or cluster classification) that decomposes the depth map DMA1 into a finite number of clusters.
- As the clustering method, for example, a method called the k-means method can be applied.
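- A minimal sketch of such clustering, assuming the depth map is a NumPy array of distances in meters and six clusters as in the example that follows (function and parameter names are hypothetical):

```python
import numpy as np

def cluster_depth_map(depth_map: np.ndarray, k: int = 6,
                      iterations: int = 10) -> np.ndarray:
    """Assign each pixel of a depth map to one of k distance clusters
    with a plain k-means over the 1-D depth values."""
    values = depth_map.reshape(-1, 1).astype(np.float32)
    # Initialize cluster centers evenly across the observed depth range.
    centers = np.linspace(values.min(), values.max(), k).reshape(-1, 1)
    for _ in range(iterations):
        # Assign each depth value to its nearest center.
        labels = np.argmin(np.abs(values - centers.T), axis=1)
        # Move each center to the mean of its members.
        for j in range(k):
            members = values[labels == j]
            if members.size:
                centers[j] = members.mean()
    return labels.reshape(depth_map.shape)
```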
- An example of a clustered depth map is shown as a depth map CDMA1 in FIG. 6A.
- the distance information indicated by the depth map DMA1 is classified into six clusters (clusters CL1 to CL6) by the clustering of the foreground depth range detection unit 34C.
- cluster CL1 corresponds to 0 to 3 m (meters)
- cluster CL2 corresponds to 3 to 6 m
- cluster CL3 corresponds to 6 to 9 m
- cluster CL4 corresponds to 9 to 12 m
- the cluster CL5 corresponds to 12 to 15 m, and
- the cluster CL6 corresponds to 15 m to infinity.
- the boundary value of each cluster CL may be included in any cluster CL.
- For example, 3 m, which is the boundary value between the cluster CL1 and the cluster CL2, may be included in either the cluster CL1 or the cluster CL2.
- the foreground depth range detection unit 34C performs a matching process for matching the depth map CDMA1 after clustering with the main subject map MA1.
- the result of the matching process is shown in FIG. 6B.
- the foreground depth range detection unit 34C calculates the ratio of the main subject included in each cluster CL. This ratio is indicated, for example, by the ratio of the area (number of pixels) of the main subject to the area (number of pixels) constituting each cluster CL.
- When calculating this ratio, the area of the main subject can be weighted according to the position of the main subject on the screen (map).
- For example, a larger (stronger) weight can be given to a position closer to the center of the screen.
- Alternatively, an evaluation value indicating the likelihood of being the main subject may be acquired based on a known recognition technique or the like, and a larger (stronger) weight given as the evaluation value increases.
- FIG. 7 shows an example of the ratio of main subjects included in each cluster CL.
- the foreground depth range detection unit 34C detects a cluster whose ratio of main subjects is equal to or greater than a certain threshold as the foreground depth range. That is, the foreground depth range is information indicating the area of the detected cluster.
- the ratio of main subjects included in the cluster CL1 is 6%
- the ratio of main subjects included in the cluster CL2 is 0%
- the ratio of main subjects included in the cluster CL3 is 42%.
- the ratio of the main subject included in the cluster CL4 is 0%
- the ratio of the main subject included in the cluster CL5 is 3%
- the ratio of the main subject included in the cluster CL6 is 0%.
- the foreground depth range detection unit 34C detects the distance range (6 to 9 m) corresponding to the cluster CL3 as the foreground depth range.
- the foreground depth range detection unit 34C can also detect a cluster having the largest proportion of main subjects as the foreground depth range.
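- As a worked illustration of this selection step: with the ratios of FIG. 7 (6%, 0%, 42%, 0%, 3%, 0%) and a threshold of, say, 20%, only the cluster CL3 qualifies. The sketch below shows both the threshold rule and the largest-ratio rule; the threshold value and names are assumptions:

```python
import numpy as np

def detect_foreground_clusters(cluster_labels: np.ndarray,
                               subject_map: np.ndarray,
                               k: int = 6,
                               ratio_threshold: float = 0.2):
    """Ratio of main subject pixels per cluster, plus the clusters
    selected by the threshold rule and by the largest-ratio rule."""
    ratios = np.zeros(k)
    for j in range(k):
        in_cluster = cluster_labels == j
        if in_cluster.any():
            # subject_map is binary (1 = main subject), so the mean over
            # the cluster's pixels is exactly the main subject ratio.
            ratios[j] = subject_map[in_cluster].mean()
    above_threshold = np.where(ratios >= ratio_threshold)[0]
    largest = int(np.argmax(ratios))
    return ratios, above_threshold, largest
```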
- the foreground depth range detection unit 34C acquires foreground depth range information MD (an example of main subject distance information) that is information indicating the detected foreground depth range. Then, the foreground depth range detection unit 34C supplies the foreground depth range information MD to the AF control unit 34D.
- the foreground depth range information MD supplied to the AF control unit 34D is held in a memory or the like included in the control unit 34. Note that the process of detecting the foreground depth range is performed periodically, for example, according to the frame period.
- an image IM2 (an example of a second image) is acquired by the imaging device 1 as an input image at time T2 temporally after time T1.
- The image IM2 is, for example, an image one frame after the image IM1, and is the image acquired most recently in time.
- An example of the image IM2 is shown in FIG. 8A.
- the depth detection unit 34B performs distance information detection processing to detect the distance for each subject included in the image IM2, and acquires the depth map DMA2 (an example of second distance information).
- An example of the depth map DMA2 is shown in FIG. 8B.
- In the depth map DMA2 as well, the distance is represented by shades of black and white, with closer distances shown in white and farther distances in black.
- the depth map DMA2 is supplied to the AF control unit 34D.
- The AF control unit 34D detects the main subject included in the depth map DMA2, that is, the main subject (second main subject) included in the image IM2, based on the foreground depth range information MD detected using the previous frame.
- Specifically, the AF control unit 34D performs clustering that decomposes the depth map DMA2 into a finite number of clusters.
- As the clustering method, for example, the k-means method can be applied, as in the clustering described above.
- An example of the clustered depth map is shown in FIG. 8C as a depth map CDMA2.
- Even for a moving subject, the movement in the Z-axis direction during one frame interval and during the distance information detection process is slight. Therefore, the depth map CDMA2 is classified into substantially the same clusters (for example, the six clusters CL1 to CL6) as the depth map CDMA1.
- Based on the foreground depth range information MD, the AF control unit 34D detects that the main subject exists in the range of 6 to 9 m corresponding to the cluster CL3 in the depth map DMA2.
- An example of a range in which the main subject is detected is surrounded by a thick line in FIG. 8C.
- In this way, the foreground depth range obtained using a past frame is used to detect the main subject in the latest frame.
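- Once the foreground depth range is known, detecting the main subject region in the latest depth map reduces to a simple range test, sketched below; the names are hypothetical, and the range (6.0, 9.0) corresponds to the cluster CL3 example:

```python
import numpy as np

def main_subject_region(depth_map_latest: np.ndarray,
                        foreground_range: tuple = (6.0, 9.0)) -> np.ndarray:
    """Mark pixels of the latest depth map whose distance falls inside
    the foreground depth range detected from a past frame."""
    near, far = foreground_range
    return ((depth_map_latest >= near) &
            (depth_map_latest < far)).astype(np.uint8)
```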
- FIG. 9 is a flowchart showing the flow of processing for detecting the foreground depth range.
- In step ST11, the main subject detection unit 34A performs the main subject detection process on the image IM1, which is the input image at time T1, and detects the main subject in the image IM1.
- the main subject map MA1 as the detection result is supplied to the foreground depth range detection unit 34C. Then, the process proceeds to step ST12.
- In step ST12, the depth detection unit 34B performs the distance information detection process on the image IM1 and detects the distance information of each subject in the image IM1.
- the depth map DMA1 as the detection result is supplied to the foreground depth range detection unit 34C. Then, the process proceeds to step ST13.
- In step ST13, the foreground depth range detection unit 34C detects the foreground depth range using the main subject map MA1 and the depth map DMA1. Since the manner in which the foreground depth range detection unit 34C detects the foreground depth range has been described above, a duplicate description is omitted. The foreground depth range information MD indicating the foreground depth range is supplied to the AF control unit 34D. Then, the process proceeds to step ST14.
- In step ST14, the image IM2 is input as the input image at time T2, which is temporally later than time T1.
- the depth detection unit 34B performs distance information detection processing on the image IM2, and detects distance information of each subject in the image IM2.
- The depth map DMA2 as the detection result is supplied to the AF control unit 34D. Then, the process proceeds to step ST15. Thereafter, at each timing when the main subject detection process is performed by the main subject detection unit 34A, the same processing as step ST11 is performed based on the latest main subject information obtained by that process.
- In step ST15, the AF control unit 34D detects the main subject included in the depth map DMA2 based on the foreground depth range information MD, thereby detecting the main subject included in the image IM2. Since the process in which the AF control unit 34D detects the main subject based on the foreground depth range information MD has been described above, a duplicate description is omitted. With the above processing, the main subject in the current frame can be detected even when the main subject detection process by the main subject detection unit 34A for the image IM2 has not been completed.
- The foreground depth range information MD can be used for various processes of the imaging apparatus 1, for example, processes in AF control. Specifically, the foreground depth range information MD can be used when the AF control unit 34D controls the movement of the lens based on the detection result of the main subject (for example, the second main subject included in the image IM2 described above) included in the temporally latest image.
- an example of AF control using the foreground depth range information MD (first to third control examples) will be described. In the following description, the same reference numerals are assigned to the same or similar components as those described in the foreground depth range detection process, and a duplicate description is omitted as appropriate.
- the following control is control performed in a mode in which an AF area is locally selected from among a plurality of AF areas presented as UI (User Interface).
- the AF area is presented to the user, for example, by being displayed on the display 15 by a rectangular frame with a color.
- the user selects, for example, one AF area as an AF area among the plurality of AF areas.
- Such an AF mode is called spot or the like.
- Hereinafter, an example in which spot is selected as the AF mode will be described; however, the processing described below may also be performed in an AF mode different from spot.
- FIG. 10A shows an example of an image IM3 displayed on the display 15 at time T3.
- Time T3 may be the same time as time T2 described above.
- the display 15 shows an AF area AR1 set by the user. AF is performed within the range specified in the AF area AR1.
- The AF area AR1 is an area presented as a UI, and the sensor information used for AF performed in the AF area AR1 is not necessarily limited to the output of a sensor (for example, the image plane phase difference AF sensor 12B) located within the AF area AR1.
- That is, the AF area AR1 used as the UI does not necessarily match the range of the AF sensor used when AF is performed in signal processing.
- a professional photographer is assumed as the user.
- Such a user can perform camera work such that a moving main subject (for example, the face of the person H1) is kept within the single, locally narrow AF area AR1.
- However, the main subject (for example, the face of the person H1) may fall out of the AF area AR1, and AF may then be performed on the background.
- the phenomenon that the focus is inadvertently matched with the background is also referred to as background loss.
- When the person H1 is a player of a sport with intense movement, such as American football or soccer, there is a high possibility that background loss will occur.
- Therefore, in the first control, the effective AF area is expanded to the periphery, and it is determined whether the main subject is present in the enlarged AF area.
- FIG. 11 is a flowchart showing the flow of the first control process.
- FIG. 12 shows a depth map DMA3, which is the depth map of the image IM3 acquired most recently in time.
- The foreground depth range has been acquired by the control described above, using an image acquired temporally before the image IM3.
- In this example, the foreground depth range is the cluster CL3 (for example, 6 to 9 m).
- the following processing is performed by the control unit 34 (for example, the AF control unit 34D).
- In step ST21 in the flowchart of FIG. 11, it is determined whether or not a main subject exists in the set AF area AR1. For example, as shown in FIG. 12, it is determined using the depth map DMA3 whether or not the cluster CL3 is included in the AF area AR1. If the cluster CL3 is included in the AF area AR1, it is considered that the main subject is in the AF area AR1, and the process proceeds to step ST22.
- In step ST22, AF is performed in the AF area AR1, and the position of the lens is controlled so that the area corresponding to the AF area AR1 comes into focus.
- the lens position is determined by the AF control unit 34D controlling the lens driving mechanism 22A.
- In the example shown in FIG. 12, the cluster CL3 is not included in the AF area AR1. That is, since the determination in step ST21 is negative, the process proceeds to step ST23.
- In step ST23, processing for expanding the AF search range, that is, processing for changing the AF area, is performed.
- In FIG. 12, the AF area AR2, which is the AF area after the change, is indicated by a dotted line.
- the size of the AF area AR2 can be set as appropriate.
- In this example, the size of the AF area AR2 is several times the size of the AF area AR1. Then, the process proceeds to step ST24.
- In step ST24, it is determined whether or not the main subject exists in the AF area AR2 after the change.
- If the main subject exists in the AF area AR2, the process proceeds to step ST25.
- In step ST25, AF is performed in the AF area AR2, and the position of the lens is controlled so that the area corresponding to the AF area AR2 comes into focus.
- If, in step ST24, the cluster CL3 is not included in the AF area AR2 after the change, that is, the main subject does not exist in the AF area AR2, the process proceeds to step ST26.
- In step ST26, since there is no main subject around the set AF area AR1, AF is performed in the originally set AF area AR1, and the position of the lens is controlled so that the area corresponding to the AF area AR1 comes into focus.
- In the first control, the range of the AF area is expanded in consideration of the fact that the main subject often exists around the AF area. If a main subject exists in the enlarged AF area, AF is performed in the enlarged AF area. Thereby, when there is no main subject in the AF area set by the user, AF on the background can be prevented.
- Note that, respecting the user's intention (the intention that the set AF area is the AF area AR1), the AF area AR2 after the change is preferably not displayed on the display 15, but it may be displayed.
- The AF area may also be enlarged gradually in a plurality of steps, with a determination in each step of whether a main subject exists in the AF area after the change.
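- A minimal sketch of this first-control flow (steps ST21 to ST26), assuming the main subject region from the range test above and a rectangular AF area given as (top, left, height, width); the enlargement factor is an illustrative assumption:

```python
import numpy as np

def select_af_area(subject_region: np.ndarray,
                   af_area: tuple,
                   scale: int = 3) -> tuple:
    """If no main subject pixel lies in the user-set AF area AR1, search an
    enlarged area AR2 around it; fall back to AR1 if nothing is found."""
    top, left, h, w = af_area
    if subject_region[top:top + h, left:left + w].any():
        return af_area                      # ST21 -> ST22: AF in AR1
    # ST23: expand the search range around the center of AR1.
    cy, cx = top + h // 2, left + w // 2
    h2, w2 = h * scale, w * scale
    top2, left2 = max(cy - h2 // 2, 0), max(cx - w2 // 2, 0)
    if subject_region[top2:top2 + h2, left2:left2 + w2].any():
        return (top2, left2, h2, w2)        # ST24 -> ST25: AF in AR2
    return af_area                          # ST26: no subject nearby, use AR1
```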
- In general AF control, once AF is locked, the position of the lens is held so that AF is not performed on a person who crosses in front of the subject.
- That is, a so-called stabilization process for fixing the lens or adjusting the responsiveness of the lens movement is performed. Because of this stabilization process, AF may not be performed on the main subject even if the user quickly moves the imaging device 1 so that the main subject is included in the AF area AR1.
- On the other hand, if AF is always performed on the near side without the stabilization process, AF may be performed on a subject crossing in front of the main subject, and the stability of AF is impaired.
- the second control example is control that avoids these problems by using the foreground depth range information MD.
- FIG. 13 is a flowchart showing the flow of processing in the second control.
- FIG. 14A shows the depth map DMA5 of the image IM5 acquired at a certain time T5, and FIG. 14B shows the depth map DMA6 of the image IM6 acquired at the latest time T6.
- The foreground depth range has been acquired in the same manner as the processing described above, using images acquired before the images IM5 and IM6; in this example, it is the cluster CL3 (for example, 6 to 9 m).
- the following processing is performed by the control unit 34 (for example, the AF control unit 34D).
- In step ST31 in the flowchart of FIG. 13, it is determined that the background exists in the AF area AR1 at time T5. For example, as shown in FIG. 14A, since the area of the cluster CL3 is not included in the AF area AR1 set by the user, it is determined that the main subject does not exist and the background exists in the AF area AR1. Then, the process proceeds to step ST32.
- In step ST32, an operation of moving the imaging device 1 is performed at time T6, which is temporally later than time T5.
- At time T6, it is determined whether or not a main subject exists in the AF area AR1.
- If the main subject exists in the AF area AR1, the process proceeds to step ST33.
- In step ST33, processing for invalidating the stabilization process is performed so that the stabilization process is not applied. Then, AF is performed in the AF area AR1, and the face of the person H1 as the main subject is brought into focus. As a result, the transition from the state in which AF is performed on the background, which is not the main subject, to the state in which the AF area AR1 matches the main subject can be sped up, without the focus returning to the background or the like.
- In step ST34, stabilization processing is performed as necessary. After the moving operation of the imaging device 1, if the distance included in the AF area AR1 corresponds to the clusters CL5 and CL6, it is not necessary to perform the stabilization process. On the other hand, even if the user's intention is to include the main subject in the AF area AR1 after the moving operation of the imaging apparatus 1, an unintended person may exist or pass in front of the main subject. Such a person, being closer to the imaging device 1, is detected as the cluster CL1 or CL2 in the AF area AR1. Therefore, in such a case, performing the stabilization process prevents the stability of AF from being lost. The process then returns to step ST31, and the processing described above is repeated.
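- The decision of the second control (steps ST31 to ST34) can be sketched as follows, assuming per-frame flags derived from the clustered depth map of the AF area (names are hypothetical):

```python
def stabilization_enabled(subject_in_af_area: bool,
                          near_cluster_in_af_area: bool) -> bool:
    """Return whether the stabilization process should be active.

    subject_in_af_area: foreground depth range (e.g. CL3) found in AR1.
    near_cluster_in_af_area: a closer cluster (e.g. CL1/CL2) found in AR1,
    i.e. an unintended person crossing in front of the main subject.
    """
    if subject_in_af_area:
        return False  # ST33: invalidate stabilization, focus the main subject
    if near_cluster_in_af_area:
        return True   # ST34: crossing person; stabilize to keep AF stable
    return False      # only background distances (e.g. CL5/CL6): not needed
```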
- the third control is an example in which stabilization processing is performed when following the movement of the main subject.
- FIG. 15 is a flowchart showing the flow of processing in the third control.
- In step ST41, it is detected that the main subject exists in the AF area AR1 set by the user. For example, when the area of the cluster CL3 is included in the AF area AR1, it is determined that the main subject exists in the AF area AR1. Then, the process proceeds to step ST42.
- In step ST42, it is determined whether or not the main subject has disappeared from the AF area AR1 in the most recently acquired image. If the main subject continues to exist in the AF area AR1, the process returns to step ST41. If the main subject has left the AF area AR1, the process proceeds to step ST43.
- In step ST43, stabilization processing is performed so that the focus does not shift to the background even when the framing moves off the main subject.
- the lens drive mechanism 22A is controlled by the control unit 34, and the position of the lens is fixed.
- After the stabilization process is started, the area corresponding to the AF area AR1 may be brought into focus when, for example, no main subject has existed in the AF area AR1 over several frames.
- In the third control, during the AF operation, if the foreground depth range exists in the AF area AR1, in other words, if the main subject exists there, it is determined that AF is locked on the main subject, and the focus position is held so that it does not move significantly. Thereby, even when the main subject temporarily leaves the AF area AR1 while being tracked with the imaging apparatus 1, the focus can be prevented from shifting to the background.
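- A minimal sketch of the third control (steps ST41 to ST43), where the focus position stays locked while the main subject is, or was within a short grace period, inside the AF area; the length of the grace period is an illustrative assumption:

```python
def focus_locked(subject_in_af_area_history: list,
                 grace_frames: int = 5) -> bool:
    """True while the focus position should stay locked.

    subject_in_af_area_history: one boolean per frame, most recent last,
    indicating whether the foreground depth range was found in AR1.
    """
    # Keep the lock if the main subject was seen within the grace window,
    # so a momentary framing slip does not shift focus to the background.
    return any(subject_in_af_area_history[-grace_frames:])
```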
- the main subject in the latest frame can be detected using the foreground depth range information MD, and various controls of AF can be realized.
- the first to third controls in the above-described embodiment may be related controls instead of independent controls.
- For example, the second control may be performed after it is determined that there is no main subject in the enlarged AF area. Further, the control to be performed may differ depending on the mode set in the imaging apparatus 1.
- the main subject detection unit 34A detects the movement based on the difference between frames and regards the moving subject as the main subject.
- the present invention is not limited to this.
- For example, the main subject detection unit 34A may detect motion by optical flow.
- subject recognition such as face recognition or person recognition may be used, or subject tracking may be used, and the main subject may be specified based on the result.
- a subject existing in a region designated by the user or a specific color range designated by the user may be regarded as a main subject.
- The main subject may also be learned, and the learned result may be used to identify the main subject.
- the main subject may be specified using saliency, which is a technique for extracting a remarkable area on the screen. Further, the main subject may be specified by combining these methods.
- In the above-described embodiment, the depth detection unit 34B obtains the distance information using the sensor information acquired by the image plane phase difference AF sensor 12B and the dedicated phase difference AF sensor 13, but the present technology is not limited to this.
- For example, the depth detection unit 34B may acquire distance information based on the result of stereo viewing by a multi-view camera, may acquire distance information using a light field camera, or may acquire distance information using a triangulation sensor.
- Alternatively, the distance information may be acquired using images of a plurality of frames (using moving-object parallax), or using the degree of blur. A method of acquiring distance information using the degree of blur is called DFD (Depth from Defocus).
- the clustering method in the above-described embodiment is not limited to the k-means method, and a known method can be applied.
- the distance range in which the main subject exists in the latest frame may be colored or surrounded by a frame and presented to the user as a UI.
- For example, the display 15 may be made to function as a presentation unit, and the range of distances in which the main subject exists in the latest frame may be displayed on the display 15 so as to be distinguished from other areas.
- the display 15 may be separated from the imaging device 1.
- the processing order in the first to third controls described above may be changed, or may be performed in parallel.
- the processes of steps ST11 and ST12 may be performed in parallel.
- the process of fixing the lens position has been described as an example of the stabilization process.
- Fixing the position of the lens includes not only making the movement of the lens position zero but also allowing some movement within a range in which the focus does not shift greatly.
- the lens position may be fixed or variable within a predetermined range according to the set sensitivity.
- The imaging device in the above-described embodiment may be incorporated in medical devices such as microscopes, smartphones, computer devices, game devices, robots, security cameras, and moving bodies (for example, vehicles, trains, airplanes, helicopters, small flying vehicles, construction vehicles, and agricultural vehicles).
- The present disclosure can be realized by a control device (for example, a one-chip microcomputer) having the control unit 34, and can also be realized as an imaging system including a plurality of devices. For example, a program for performing the control described in the embodiment can be made available for download, and an imaging device that does not have the control function described in the embodiment (for example, an imaging device provided in a smartphone) can download and install the program, thereby performing the control described in the embodiment.
- the technology according to the present disclosure can be applied to various products.
- The technology according to the present disclosure may be realized as a device mounted on any type of moving body, such as an automobile, an electric vehicle, a hybrid electric vehicle, a motorcycle, a bicycle, a personal mobility device, an airplane, a drone, a ship, or a robot.
- The distance measurement result of the present disclosure can be used, for example, for setting and maintaining a region of interest in sensing, for automatic driving assistance such as avoiding dangerous driving, and the like.
- FIG. 16 is a block diagram illustrating a schematic configuration example of a vehicle control system that is an example of a mobile control system to which the technology according to the present disclosure can be applied.
- the vehicle control system 12000 includes a plurality of electronic control units connected via a communication network 12001.
- the vehicle control system 12000 includes a drive system control unit 12010, a body system control unit 12020, a vehicle exterior information detection unit 12030, a vehicle interior information detection unit 12040, and an integrated control unit 12050.
- a microcomputer 12051, a sound image output unit 12052, and an in-vehicle network I / F (Interface) 12053 are illustrated as a functional configuration of the integrated control unit 12050.
- the drive system control unit 12010 controls the operation of the device related to the drive system of the vehicle according to various programs.
- the drive system control unit 12010 functions as a control device for a driving force generator that generates the driving force of the vehicle, such as an internal combustion engine or a driving motor, a driving force transmission mechanism that transmits the driving force to the wheels, a steering mechanism that adjusts the steering angle of the vehicle, and a braking device that generates the braking force of the vehicle.
- the body system control unit 12020 controls the operation of various devices mounted on the vehicle body according to various programs.
- the body system control unit 12020 functions as a keyless entry system, a smart key system, a power window device, or a control device for various lamps such as a headlamp, a back lamp, a brake lamp, a blinker, or a fog lamp.
- radio waves transmitted from a portable device that substitutes for a key, or signals from various switches, can be input to the body system control unit 12020.
- the body system control unit 12020 receives input of these radio waves or signals, and controls a door lock device, a power window device, a lamp, and the like of the vehicle.
- the vehicle outside information detection unit 12030 detects information outside the vehicle on which the vehicle control system 12000 is mounted.
- the imaging unit 12031 is connected to the vehicle exterior information detection unit 12030.
- the vehicle exterior information detection unit 12030 causes the imaging unit 12031 to capture an image outside the vehicle and receives the captured image.
- the vehicle exterior information detection unit 12030 may perform object detection processing or distance detection processing for persons, cars, obstacles, signs, characters on the road surface, and the like, based on the received image.
- the imaging unit 12031 is an optical sensor that receives light and outputs an electrical signal corresponding to the amount of received light.
- the imaging unit 12031 can output an electrical signal as an image, or can output it as distance measurement information. Further, the light received by the imaging unit 12031 may be visible light or invisible light such as infrared rays.
- the vehicle interior information detection unit 12040 detects vehicle interior information.
- a driver state detection unit 12041 that detects the driver's state is connected to the vehicle interior information detection unit 12040.
- the driver state detection unit 12041 includes, for example, a camera that images the driver. Based on the detection information input from the driver state detection unit 12041, the vehicle interior information detection unit 12040 may calculate the driver's degree of fatigue or concentration, or may determine whether the driver is dozing off.
- the microcomputer 12051 can calculate a control target value of the driving force generator, the steering mechanism, or the braking device based on the information inside and outside the vehicle acquired by the vehicle exterior information detection unit 12030 or the vehicle interior information detection unit 12040, and can output a control command to the drive system control unit 12010.
- the microcomputer 12051 can perform cooperative control for the purpose of realizing ADAS (Advanced Driver Assistance System) functions, including vehicle collision avoidance or impact mitigation, follow-up traveling based on the inter-vehicle distance, constant-speed traveling, vehicle collision warning, and vehicle lane departure warning.
- the microcomputer 12051 can perform cooperative control aimed at automated driving, in which the vehicle travels autonomously without depending on the driver's operation, by controlling the driving force generator, the steering mechanism, the braking device, and the like based on information around the vehicle acquired by the vehicle exterior information detection unit 12030 or the vehicle interior information detection unit 12040.
- the microcomputer 12051 can output a control command to the body system control unit 12020 based on information outside the vehicle acquired by the vehicle exterior information detection unit 12030.
- the microcomputer 12051 can control the headlamps according to the position of a preceding vehicle or an oncoming vehicle detected by the vehicle exterior information detection unit 12030, performing cooperative control for anti-glare purposes such as switching from high beam to low beam.
- the sound image output unit 12052 transmits an output signal of at least one of sound and image to an output device capable of visually or audibly notifying information to a vehicle occupant or the outside of the vehicle.
- an audio speaker 12061, a display unit 12062, and an instrument panel 12063 are illustrated as output devices.
- the display unit 12062 may include at least one of an on-board display and a head-up display, for example.
- FIG. 17 is a diagram illustrating an example of an installation position of the imaging unit 12031.
- the imaging unit 12031 includes imaging units 12101, 12102, 12103, 12104, and 12105.
- the imaging units 12101, 12102, 12103, 12104, and 12105 are provided, for example, at positions such as a front nose, a side mirror, a rear bumper, a back door, and an upper part of a windshield in the vehicle interior of the vehicle 12100.
- the imaging unit 12101 provided in the front nose and the imaging unit 12105 provided in the upper part of the windshield in the vehicle interior mainly acquire an image in front of the vehicle 12100.
- the imaging units 12102 and 12103 provided in the side mirror mainly acquire an image of the side of the vehicle 12100.
- the imaging unit 12104 provided in the rear bumper or the back door mainly acquires an image behind the vehicle 12100.
- the imaging unit 12105 provided on the upper part of the windshield in the passenger compartment is mainly used for detecting a preceding vehicle or a pedestrian, an obstacle, a traffic light, a traffic sign, a lane, or the like.
- FIG. 17 shows an example of the shooting range of the imaging units 12101 to 12104.
- the imaging range 12111 indicates the imaging range of the imaging unit 12101 provided on the front nose, the imaging ranges 12112 and 12113 indicate the imaging ranges of the imaging units 12102 and 12103 provided on the side mirrors, respectively, and the imaging range 12114 indicates the imaging range of the imaging unit 12104 provided on the rear bumper or the back door. For example, by superimposing the image data captured by the imaging units 12101 to 12104, an overhead image of the vehicle 12100 viewed from above is obtained.
- At least one of the imaging units 12101 to 12104 may have a function of acquiring distance information.
- at least one of the imaging units 12101 to 12104 may be a stereo camera including a plurality of imaging elements, or may be an imaging element having pixels for phase difference detection.
- based on the distance information obtained from the imaging units 12101 to 12104, the microcomputer 12051 can obtain the distance to each three-dimensional object within the imaging ranges 12111 to 12114 and the temporal change of this distance (the relative speed with respect to the vehicle 12100), and can thereby extract, as a preceding vehicle, the nearest three-dimensional object on the traveling path of the vehicle 12100 that travels at a predetermined speed (for example, 0 km/h or more) in substantially the same direction as the vehicle 12100.
- furthermore, the microcomputer 12051 can set in advance an inter-vehicle distance to be secured from the preceding vehicle, and can perform automatic brake control (including follow-up stop control), automatic acceleration control (including follow-up start control), and the like. In this way, cooperative control aimed at automated driving, in which the vehicle travels autonomously without depending on the driver's operation, can be performed.
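As a toy illustration of the gap-keeping behaviour (not the disclosed controller; the gains, limits, and interface are invented for this sketch):

```python
def follow_accel_command(gap_m, rel_speed_mps, target_gap_m):
    """PD-style gap keeping: accelerate when the gap exceeds the target,
    brake when it shrinks or the preceding vehicle is closing in
    (rel_speed_mps > 0 means the gap is opening)."""
    accel = 0.3 * (gap_m - target_gap_m) + 0.8 * rel_speed_mps
    return max(-5.0, min(2.0, accel))  # clamp brake/accel limits in m/s^2
```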
- the microcomputer 12051 can classify three-dimensional object data on three-dimensional objects into two-wheeled vehicles, ordinary vehicles, large vehicles, pedestrians, utility poles, and other three-dimensional objects based on the distance information obtained from the imaging units 12101 to 12104, extract them, and use them for automatic avoidance of obstacles.
- the microcomputer 12051 distinguishes obstacles around the vehicle 12100 into obstacles that are visible to the driver of the vehicle 12100 and obstacles that are difficult for the driver to see.
- the microcomputer 12051 then determines a collision risk indicating the degree of danger of collision with each obstacle; when the collision risk is equal to or higher than a set value and there is a possibility of collision, it can output an alarm to the driver via the audio speaker 12061 or the display unit 12062, or perform forced deceleration or avoidance steering via the drive system control unit 12010, thereby providing driving assistance for collision avoidance.
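One way to picture the collision-risk decision is a time-to-collision test, as in the following hedged sketch (the thresholds and return values are placeholders, not the actual set value):

```python
def collision_response(dist_m, closing_mps, ttc_warn_s=3.0, ttc_brake_s=1.5):
    """Decide the action for one obstacle from a time-to-collision estimate."""
    if closing_mps <= 0:
        return "none"                    # obstacle is not approaching
    ttc = dist_m / closing_mps           # seconds until contact
    if ttc < ttc_brake_s:
        return "forced_deceleration"     # via the drive system control unit
    if ttc < ttc_warn_s:
        return "warn_driver"             # via speaker 12061 / display 12062
    return "none"
```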
- At least one of the imaging units 12101 to 12104 may be an infrared camera that detects infrared rays.
- the microcomputer 12051 can recognize a pedestrian by determining whether a pedestrian is present in the images captured by the imaging units 12101 to 12104. Such pedestrian recognition is performed, for example, by a procedure for extracting feature points from the images captured by the imaging units 12101 to 12104 as infrared cameras, and a procedure for performing pattern matching on the sequence of feature points indicating the outline of an object to determine whether it is a pedestrian.
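A hedged OpenCV sketch of this two-step procedure follows (the edge thresholds, template contour, and matching score are placeholders rather than the disclosed algorithm):

```python
import cv2

def find_pedestrians(ir_image, template_contour, score_thresh=0.3):
    """Step 1: extract outline feature points (edges -> contours).
    Step 2: pattern-match each contour against a pedestrian template."""
    edges = cv2.Canny(ir_image, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    boxes = []
    for c in contours:
        score = cv2.matchShapes(c, template_contour,
                                cv2.CONTOURS_MATCH_I1, 0.0)
        if score < score_thresh:               # smaller score = more similar
            boxes.append(cv2.boundingRect(c))  # rectangle for the overlay
    return boxes
```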
- when the microcomputer 12051 determines that a pedestrian is present in the images captured by the imaging units 12101 to 12104 and recognizes the pedestrian, the sound image output unit 12052 controls the display unit 12062 so that a rectangular contour line for emphasis is superimposed on the recognized pedestrian.
- the sound image output unit 12052 may also control the display unit 12062 so that an icon or the like indicating a pedestrian is displayed at a desired position.
- the technology according to the present disclosure can be applied to the imaging unit 12031 among the configurations described above.
- the imaging device 1 in FIG. 2 can be applied to the imaging unit 12031.
- the distance to obstacles outside the vehicle can be detected appropriately, so that automated driving control systems and automatic braking systems can operate properly, enabling safe and comfortable driving.
- the control executed by the control unit 34 is not limited to lens control corresponding to the distance measurement result.
- the present disclosure can also adopt the following configurations.
- (1) A control device including: a main subject detection unit that detects a first main subject among subjects included in a first image and acquires main subject information indicating the first main subject; a distance information calculation unit that detects distances of the subjects included in the first image and acquires first distance information indicating the distances; and a detection unit that detects, based on the main subject information and the first distance information, a distance that includes the first main subject, and acquires main subject distance information indicating the distance that includes the first main subject.
- (2) The control device according to (1), in which the distance information calculation unit detects a distance of a subject included in a second image acquired temporally later than the first image and acquires second distance information indicating the distance, and the control device further includes a control unit that detects a second main subject among the subjects included in the second image based on the main subject distance information and the second distance information.
- (3) The control device according to (2), in which the control unit controls movement of a lens based on a detection result of the second main subject.
- (4) The control device according to (2) or (3), in which, when the second main subject exists in a first area set in the imaging range of an imaging unit, the control unit controls movement of the lens so that the first area is in focus, and when the second main subject does not exist in the first area, the control unit performs control to set a second area obtained by enlarging the first area.
- (5) The control device according to (4), in which, when the second main subject exists in the second area, the control unit controls movement of the lens so that the second area is in focus, and when the second main subject does not exist in the second area, the control unit controls movement of the lens so that the first area is in focus.
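Read together, configurations (4) and (5) describe a small decision flow; a minimal sketch, with rectangles as (x, y, w, h) tuples and all helper names hypothetical, could be:

```python
def overlaps(a, b):
    """Axis-aligned rectangle overlap; rectangles are (x, y, w, h)."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def enlarge(area, scale):
    """Scale a rectangle about its center."""
    x, y, w, h = area
    return (x - w * (scale - 1) / 2, y - h * (scale - 1) / 2,
            w * scale, h * scale)

def select_focus_area(subject_box, first_area, scale=1.5):
    """Focus the first area if the second main subject is inside it;
    otherwise try the enlarged second area; fall back to the first area."""
    if overlaps(subject_box, first_area):
        return first_area
    second_area = enlarge(first_area, scale)
    return second_area if overlaps(subject_box, second_area) else first_area
```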
- (6) The control device according to (2) or (3), in which the control unit performs control to invalidate a stabilization process when the first main subject does not exist in the first area set in the imaging range of the imaging unit and the second main subject exists in the first area.
- (7) The control device according to (2) or (3), in which the control unit performs control to validate a stabilization process when the first main subject exists in the first area set in the imaging range of the imaging unit and the second main subject does not exist in the first area.
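Configurations (6) and (7) amount to an edge-triggered rule on whether the main subject has just entered or left the first area; a sketch, assuming boolean inputs derived from the two frames:

```python
def stabilization_enabled(subj1_in_area, subj2_in_area, currently_enabled):
    """(6): the subject newly entered the first area -> disable stabilization
    so AF can react immediately; (7): the subject newly left the first area
    -> enable stabilization, holding the lens instead of refocusing on the
    background. Otherwise keep the current state."""
    if not subj1_in_area and subj2_in_area:
        return False
    if subj1_in_area and not subj2_in_area:
        return True
    return currently_enabled
```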
- (8) The control device according to any one of (1) to (7), in which the detection unit classifies the first distance information into clusters and, by referring to the main subject information, acquires, as the main subject distance information, information indicating a cluster in which the ratio of the first main subject is equal to or greater than a predetermined threshold.
- (9) The control device according to any one of (1) to (7), in which the detection unit classifies the first distance information into clusters and, by referring to the main subject information, acquires, as the main subject distance information, information indicating the cluster in which the ratio of the first main subject is largest.
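A sketch of the selection in (8) and (9), assuming labels is the per-pixel cluster map from the depth clustering and subject_mask is a boolean main subject map; reading "ratio" as the fraction of cluster pixels belonging to the main subject is one plausible interpretation:

```python
import numpy as np

def main_subject_cluster(labels, subject_mask, ratio_thresh=0.5):
    """Return the cluster id with the largest main-subject ratio (as in (9)),
    or None if even that ratio is below the threshold test of (8)."""
    ratios = {int(j): float(subject_mask[labels == j].mean())
              for j in np.unique(labels)}
    best = max(ratios, key=ratios.get)
    return best if ratios[best] >= ratio_thresh else None
```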
- (10) The control device according to any one of (1) to (9), in which the distance indicates a predetermined distance range.
- (11) The control device according to any one of (4), (5), and (9), in which the first area is one area selected by a user from among a plurality of selectable autofocus areas displayed on a display unit.
- (12) The control device according to any one of (4), (7), (8), and (9), in which the second area is an area that is not displayed on the display unit.
- (13) The control device according to any one of (4), (7), (8), and (9), in which the second area is an area that is temporarily displayed on the display unit.
- (14) The control device according to any one of (4), (7), (8), and (9), in which the stabilization process is a process of fixing a lens position.
- (15) The control device according to any one of (2) to (14), in which the second image is an image acquired most recently in time.
- (16) An imaging device including: the control device according to any one of (1) to (15); and an imaging unit.
- (17) The imaging device according to (16), further including a presentation unit that presents a range of the second main subject.
- (18) The imaging device according to (16), further including a lens driving unit, in which movement of the lens is controlled by the control device controlling the lens driving unit.
- (19) A control method in which: a main subject detection unit detects a first main subject among subjects included in a first image and acquires main subject information indicating the first main subject; a distance information calculation unit detects distances of the subjects included in the first image and acquires first distance information indicating the distances; and a detection unit detects, based on the main subject information and the first distance information, a distance that includes the first main subject, and acquires main subject distance information indicating the distance that includes the first main subject.
- (20) A program that causes a computer to execute a control method in which: a main subject detection unit detects a first main subject among subjects included in a first image and acquires main subject information indicating the first main subject; a distance information calculation unit detects distances of the subjects included in the first image and acquires first distance information indicating the distances; and a detection unit detects, based on the main subject information and the first distance information, a distance that includes the first main subject, and acquires main subject distance information indicating the distance that includes the first main subject.
- DESCRIPTION OF SYMBOLS: 1 imaging device, 15 display, 34 control unit, 34A main subject detection unit, 34B depth detection unit, 34C foreground depth range detection unit, 34D AF control unit, IM1 first image, IM2 second image, MA1 main subject map, DMA1 depth map, MD foreground depth range information, CL cluster, AR AF area
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Studio Devices (AREA)
- Automatic Focus Adjustment (AREA)
- Focusing (AREA)
Abstract
Description
The present disclosure is a control device including:
a main subject detection unit that detects a first main subject among subjects included in a first image and acquires main subject information indicating the first main subject;
a distance information calculation unit that detects distances of the subjects included in the first image and acquires first distance information indicating the distances; and
a detection unit that detects, based on the main subject information and the first distance information, a distance that includes the first main subject, and acquires main subject distance information indicating the distance that includes the first main subject.
The present disclosure is also an imaging device including:
the control device described above; and
an imaging unit.
The present disclosure is also a control method in which:
a main subject detection unit detects a first main subject among subjects included in a first image and acquires main subject information indicating the first main subject;
a distance information calculation unit detects distances of the subjects included in the first image and acquires first distance information indicating the distances; and
a detection unit detects, based on the main subject information and the first distance information, a distance that includes the first main subject, and acquires main subject distance information indicating the distance that includes the first main subject.
The present disclosure is also a program that causes a computer to execute the control method in which: a main subject detection unit detects a first main subject among subjects included in a first image and acquires main subject information indicating the first main subject; a distance information calculation unit detects distances of the subjects included in the first image and acquires first distance information indicating the distances; and a detection unit detects, based on the main subject information and the first distance information, a distance that includes the first main subject, and acquires main subject distance information indicating the distance that includes the first main subject.
<1. One Embodiment>
<2. Modifications>
<3. Application Examples>
The embodiments and the like described below are preferred specific examples of the present disclosure, and the content of the present disclosure is not limited to these embodiments and the like.
[Configuration Example of the Imaging Device]
First, a configuration example of an imaging device according to one embodiment of the present disclosure will be described. FIG. 1 is a schematic cross-sectional view showing the general configuration of an imaging device 1 according to one embodiment of the present disclosure.
Next, an internal configuration example of the imaging device 1 (mainly a configuration example related to signal processing) will be described with reference to the block diagram of FIG. 2. In addition to the above-described optical imaging system 20, dedicated phase difference AF sensor 13, image sensor 12A, image-plane phase difference AF sensor 12B, and display 15, the imaging device 1 includes, for example, a preprocessing circuit 31, a camera processing circuit 32, an image memory 33, a control unit 34, a graphic I/F (Interface) 35, an input unit 36, an R/W (reader/writer) 37, and a storage medium 38.
Here, the basic operation of the imaging device 1 described above will be explained. Before an image is captured, signals received and photoelectrically converted by the image sensor 12A are sequentially supplied to the preprocessing circuit 31. The preprocessing circuit 31 performs CDS processing, AGC processing, and the like on the input signal, and further converts it into an image signal.
Here, problems to be considered regarding autofocus will be described. Techniques have been proposed that recognize a main subject by performing main subject detection processing using various image processing techniques and that preferentially focus on that main subject. However, the main subject detected by the main subject detection processing does not always match the main subject intended by the user. Especially for high-end users such as professional photographers, it is undesirable for the intended subject not to be recognized as the main subject and for another subject to come into focus; a decisive scene may even be missed. High-end users often wish to perform AF within an AF area they have set themselves, but keeping a subject within a limited AF area is difficult even for professional photographers.
(Operation of Detecting the Foreground Depth Range)
An operation example of the imaging device 1 will be described. First, an operation example of detecting the foreground depth range will be described with reference to FIGS. 4 to 8. The foreground depth range is a distance that includes the main subject; in the present embodiment it is described as a predetermined distance range that includes the main subject, but it may also be the distance itself. FIG. 4 shows the main subject detection unit 34A, the depth detection unit 34B, the foreground depth range detection unit 34C, and the AF control unit 34D, which are the functional blocks constituting the control unit 34.
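Putting the blocks of FIG. 4 together, a hedged end-to-end sketch of foreground depth range detection follows; the detector and depth estimator are assumed callables, and the min/max-with-margin range is a simplification of the clustering-based detection described elsewhere in this document:

```python
def foreground_depth_range(image, detect_subject, estimate_depth, margin=0.1):
    """34A -> 34B -> 34C sketch: main subject map MA1 and depth map DMA1
    yield a depth range MD that contains the main subject.
    detect_subject and estimate_depth are assumed to return NumPy arrays
    (a boolean H x W mask and per-pixel distances, respectively)."""
    subject_mask = detect_subject(image)
    depth_map = estimate_depth(image)
    d = depth_map[subject_mask]              # distances of subject pixels
    lo, hi = float(d.min()), float(d.max())
    pad = margin * (hi - lo)                 # widen the range slightly
    return lo - pad, hi + pad                # foreground depth range MD
```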
The foreground depth range information MD can be used in various processes of the imaging device 1, for example, in AF control. Specifically, the foreground depth range information MD can be used in a process in which the AF control unit 34D controls the movement of the lens based on the detection result of the main subject included in the temporally latest image (for example, the second main subject included in the above-described image IM2). Examples of AF control using the foreground depth range information MD (first to third control examples) are described below. In the following description, configurations identical or equivalent to those described for the foreground depth range detection process are given the same reference signs, and duplicate descriptions are omitted as appropriate.
Next, the second control will be described. Suppose, for example, that an operation of swinging the imaging device 1 widely is performed, such as when switching the main subject. Consider a case where, at the moment the operation ends, only the background exists in the AF area AR1 set by the user, and the background comes into focus. After this, the user quickly moves the imaging device 1 so that the main subject is included in the AF area AR1.
Next, the third control will be described. The third control is an example in which the stabilization process is performed while the movement of the main subject is being followed.
The first to third controls in the embodiment described above need not be independent and may be related to one another. For example, in the first control, the second control may be performed after it is determined that the main subject does not exist in the expanded AF area AR1. The control that is performed may also differ according to the mode set in the imaging device 1.
The technology according to the present disclosure (the present technology) can be applied to various products. For example, it may be realized as a device mounted on any type of mobile body such as an automobile, electric vehicle, hybrid electric vehicle, motorcycle, bicycle, personal mobility device, airplane, drone, ship, or robot. The distance measurement result of the present disclosure can be used, for example, for setting and maintaining a region of interest in sensing, and for automated driving assistance such as dangerous-driving detection.
(1)
A control device including:
a main subject detection unit that detects a first main subject among subjects included in a first image and acquires main subject information indicating the first main subject;
a distance information calculation unit that detects distances of the subjects included in the first image and acquires first distance information indicating the distances; and
a detection unit that detects, based on the main subject information and the first distance information, a distance that includes the first main subject, and acquires main subject distance information indicating the distance that includes the first main subject.
(2)
The control device according to (1), in which the distance information calculation unit detects a distance of a subject included in a second image acquired temporally later than the first image and acquires second distance information indicating the distance, and
the control device further includes a control unit that detects a second main subject among the subjects included in the second image based on the main subject distance information and the second distance information.
(3)
The control device according to (2), in which the control unit controls movement of a lens based on a detection result of the second main subject.
(4)
The control device according to (2) or (3), in which, when the second main subject exists in a first area set in the imaging range of an imaging unit, the control unit controls movement of the lens so that the first area is in focus, and when the second main subject does not exist in the first area, the control unit performs control to set a second area obtained by enlarging the first area.
(5)
The control device according to (4), in which, when the second main subject exists in the second area, the control unit controls movement of the lens so that the second area is in focus, and when the second main subject does not exist in the second area, the control unit controls movement of the lens so that the first area is in focus.
(6)
The control device according to (2) or (3), in which the control unit performs control to invalidate a stabilization process when the first main subject does not exist in the first area set in the imaging range of the imaging unit and the second main subject exists in the first area.
(7)
The control device according to (2) or (3), in which the control unit performs control to validate a stabilization process when the first main subject exists in the first area set in the imaging range of the imaging unit and the second main subject does not exist in the first area.
(8)
The control device according to any one of (1) to (7), in which the detection unit classifies the first distance information into clusters and, by referring to the main subject information, acquires, as the main subject distance information, information indicating a cluster in which the ratio of the first main subject is equal to or greater than a predetermined threshold.
(9)
The control device according to any one of (1) to (7), in which the detection unit classifies the first distance information into clusters and, by referring to the main subject information, acquires, as the main subject distance information, information indicating the cluster in which the ratio of the first main subject is largest.
(10)
The control device according to any one of (1) to (9), in which the distance indicates a predetermined distance range.
(11)
The control device according to any one of (4), (5), and (9), in which the first area is one area selected by a user from among a plurality of selectable autofocus areas displayed on a display unit.
(12)
The control device according to any one of (4), (7), (8), and (9), in which the second area is an area that is not displayed on the display unit.
(13)
The control device according to any one of (4), (7), (8), and (9), in which the second area is an area that is temporarily displayed on the display unit.
(14)
The control device according to (4), (7), (8), and (9), in which the stabilization process is a process of fixing a lens position.
(15)
The control device according to any one of (2) to (14), in which the second image is an image acquired most recently in time.
(16)
An imaging device including:
the control device according to any one of (1) to (15); and
an imaging unit.
(17)
The imaging device according to (16), further including a presentation unit that presents a range of the second main subject.
(18)
The imaging device according to (16), further including a lens driving unit, in which movement of the lens is controlled by the control device controlling the lens driving unit.
(19)
A control method in which:
a main subject detection unit detects a first main subject among subjects included in a first image and acquires main subject information indicating the first main subject;
a distance information calculation unit detects distances of the subjects included in the first image and acquires first distance information indicating the distances; and
a detection unit detects, based on the main subject information and the first distance information, a distance that includes the first main subject, and acquires main subject distance information indicating the distance that includes the first main subject.
(20)
A program that causes a computer to execute a control method in which:
a main subject detection unit detects a first main subject among subjects included in a first image and acquires main subject information indicating the first main subject;
a distance information calculation unit detects distances of the subjects included in the first image and acquires first distance information indicating the distances; and
a detection unit detects, based on the main subject information and the first distance information, a distance that includes the first main subject, and acquires main subject distance information indicating the distance that includes the first main subject.
Claims (20)
- A control device including:
a main subject detection unit that detects a first main subject among subjects included in a first image and acquires main subject information indicating the first main subject;
a distance information calculation unit that detects distances of the subjects included in the first image and acquires first distance information indicating the distances; and
a detection unit that detects, based on the main subject information and the first distance information, a distance that includes the first main subject, and acquires main subject distance information indicating the distance that includes the first main subject.
- The control device according to claim 1, in which the distance information calculation unit detects a distance of a subject included in a second image acquired temporally later than the first image and acquires second distance information indicating the distance, and the control device further includes a control unit that detects a second main subject among the subjects included in the second image based on the main subject distance information and the second distance information.
- The control device according to claim 2, in which the control unit controls movement of a lens based on a detection result of the second main subject.
- The control device according to claim 2, in which, when the second main subject exists in a first area set in the imaging range of an imaging unit, the control unit controls movement of the lens so that the first area is in focus, and when the second main subject does not exist in the first area, the control unit performs control to set a second area obtained by enlarging the first area.
- The control device according to claim 4, in which, when the second main subject exists in the second area, the control unit controls movement of the lens so that the second area is in focus, and when the second main subject does not exist in the second area, the control unit controls movement of the lens so that the first area is in focus.
- The control device according to claim 2, in which the control unit performs control to invalidate a stabilization process when the first main subject does not exist in the first area set in the imaging range of the imaging unit and the second main subject exists in the first area.
- The control device according to claim 2, in which the control unit performs control to validate a stabilization process when the first main subject exists in the first area set in the imaging range of the imaging unit and the second main subject does not exist in the first area.
- The control device according to claim 1, in which the detection unit classifies the first distance information into clusters and, by referring to the main subject information, acquires, as the main subject distance information, information indicating a cluster in which the ratio of the first main subject is equal to or greater than a predetermined threshold.
- The control device according to claim 1, in which the detection unit classifies the first distance information into clusters and, by referring to the main subject information, acquires, as the main subject distance information, information indicating the cluster in which the ratio of the first main subject is largest.
- The control device according to claim 1, in which the distance indicates a predetermined distance range.
- The control device according to claim 4, in which the first area is one area selected by a user from among a plurality of selectable autofocus areas displayed on a display unit.
- The control device according to claim 11, in which the second area is an area that is not displayed on the display unit.
- The control device according to claim 11, in which the second area is an area that is temporarily displayed on the display unit.
- The control device according to claim 6, in which the stabilization process is a process of fixing a lens position.
- The control device according to claim 2, in which the second image is an image acquired most recently in time.
- An imaging device including: the control device according to claim 1; and an imaging unit.
- The imaging device according to claim 16, further including a presentation unit that presents a range of the second main subject.
- The imaging device according to claim 16, further including a lens driving unit, in which movement of the lens is controlled by the control device controlling the lens driving unit.
- A control method in which: a main subject detection unit detects a first main subject among subjects included in a first image and acquires main subject information indicating the first main subject; a distance information calculation unit detects distances of the subjects included in the first image and acquires first distance information indicating the distances; and a detection unit detects, based on the main subject information and the first distance information, a distance that includes the first main subject, and acquires main subject distance information indicating the distance that includes the first main subject.
- A program that causes a computer to execute a control method in which: a main subject detection unit detects a first main subject among subjects included in a first image and acquires main subject information indicating the first main subject; a distance information calculation unit detects distances of the subjects included in the first image and acquires first distance information indicating the distances; and a detection unit detects, based on the main subject information and the first distance information, a distance that includes the first main subject, and acquires main subject distance information indicating the distance that includes the first main subject.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP18774522.9A EP3606039A1 (en) | 2017-03-31 | 2018-01-18 | Control device, imaging device, control method and program |
US16/493,599 US10999488B2 (en) | 2017-03-31 | 2018-01-18 | Control device, imaging device, and control method |
JP2019508602A JP7103346B2 (ja) | 2017-03-31 | 2018-01-18 | 制御装置、撮像装置、制御方法及びプログラム |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2017072821 | 2017-03-31 | ||
JP2017-072821 | 2017-03-31 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2018179695A1 true WO2018179695A1 (ja) | 2018-10-04 |
Family
ID=63674600
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2018/001304 WO2018179695A1 (ja) | 2017-03-31 | 2018-01-18 | 制御装置、撮像装置、制御方法及びプログラム |
Country Status (4)
Country | Link |
---|---|
US (1) | US10999488B2 (ja) |
EP (1) | EP3606039A1 (ja) |
JP (1) | JP7103346B2 (ja) |
WO (1) | WO2018179695A1 (ja) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2021197623A (ja) * | 2020-06-12 | 2021-12-27 | キヤノン株式会社 | 撮像装置及びその制御方法及びプログラム |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2007233034A (ja) | 2006-03-01 | 2007-09-13 | Nikon Corp | 撮像装置 |
JP2012124712A (ja) * | 2010-12-08 | 2012-06-28 | Sharp Corp | 画像処理装置、画像処理方法及び画像処理プログラム |
JP2017038245A (ja) * | 2015-08-11 | 2017-02-16 | キヤノン株式会社 | 画像処理装置及び画像処理方法 |
JP2017163412A (ja) * | 2016-03-10 | 2017-09-14 | キヤノン株式会社 | 画像処理装置およびその制御方法、撮像装置、プログラム |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH0664225B2 (ja) * | 1985-08-27 | 1994-08-22 | ミノルタカメラ株式会社 | 焦点調節装置 |
JP3897087B2 (ja) * | 1999-11-16 | 2007-03-22 | 富士フイルム株式会社 | 画像処理装置、画像処理方法、および記録媒体 |
JP5038283B2 (ja) * | 2008-11-05 | 2012-10-03 | キヤノン株式会社 | 撮影システム及びレンズ装置 |
JP5229371B2 (ja) * | 2011-10-03 | 2013-07-03 | 株式会社ニコン | 撮像装置 |
US9196027B2 (en) * | 2014-03-31 | 2015-11-24 | International Business Machines Corporation | Automatic focus stacking of captured images |
2018
- 2018-01-18 EP EP18774522.9A patent/EP3606039A1/en not_active Withdrawn
- 2018-01-18 JP JP2019508602A patent/JP7103346B2/ja active Active
- 2018-01-18 US US16/493,599 patent/US10999488B2/en active Active
- 2018-01-18 WO PCT/JP2018/001304 patent/WO2018179695A1/ja active Application Filing
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2007233034A (ja) | 2006-03-01 | 2007-09-13 | Nikon Corp | 撮像装置 |
JP2012124712A (ja) * | 2010-12-08 | 2012-06-28 | Sharp Corp | 画像処理装置、画像処理方法及び画像処理プログラム |
JP2017038245A (ja) * | 2015-08-11 | 2017-02-16 | キヤノン株式会社 | 画像処理装置及び画像処理方法 |
JP2017163412A (ja) * | 2016-03-10 | 2017-09-14 | キヤノン株式会社 | 画像処理装置およびその制御方法、撮像装置、プログラム |
Also Published As
Publication number | Publication date |
---|---|
US10999488B2 (en) | 2021-05-04 |
JP7103346B2 (ja) | 2022-07-20 |
EP3606039A1 (en) | 2020-02-05 |
JPWO2018179695A1 (ja) | 2020-02-06 |
US20200084369A1 (en) | 2020-03-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP7105754B2 (ja) | 撮像装置、及び、撮像装置の制御方法 | |
US20200344421A1 (en) | Image pickup apparatus, image pickup control method, and program | |
JP7014218B2 (ja) | 画像処理装置、および画像処理方法、並びにプログラム | |
JP7059522B2 (ja) | 制御装置、制御方法、プログラム及び撮像システム | |
TWI757419B (zh) | 攝像裝置、攝像模組及攝像裝置之控制方法 | |
JP6977722B2 (ja) | 撮像装置、および画像処理システム | |
JP7020434B2 (ja) | 画像処理装置、および画像処理方法、並びにプログラム | |
WO2017175492A1 (ja) | 画像処理装置、画像処理方法、コンピュータプログラム及び電子機器 | |
US11889177B2 (en) | Electronic device and solid-state imaging device | |
JP6816768B2 (ja) | 画像処理装置と画像処理方法 | |
JP6816769B2 (ja) | 画像処理装置と画像処理方法 | |
US11025828B2 (en) | Imaging control apparatus, imaging control method, and electronic device | |
CN110012215B (zh) | 图像处理装置和图像处理方法 | |
WO2017122396A1 (ja) | 制御装置、制御方法及びプログラム | |
JP7144926B2 (ja) | 撮像制御装置、撮像装置、および、撮像制御装置の制御方法 | |
WO2017149964A1 (ja) | 画像処理装置、画像処理方法、コンピュータプログラム及び電子機器 | |
JP7103346B2 (ja) | 制御装置、撮像装置、制御方法及びプログラム | |
TWI794207B (zh) | 攝像裝置、相機模組、攝像系統、及攝像裝置之控制方法 | |
WO2018220993A1 (ja) | 信号処理装置、信号処理方法及びコンピュータプログラム | |
WO2023210197A1 (ja) | 撮像装置および信号処理方法 | |
JP7140819B2 (ja) | 撮像装置 | |
US20190306444A1 (en) | Imaging control apparatus and method, and vehicle | |
KR20200119790A (ko) | 인식 장치와 인식 방법 그리고 프로그램 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 18774522 Country of ref document: EP Kind code of ref document: A1 |
ENP | Entry into the national phase |
Ref document number: 2019508602 Country of ref document: JP Kind code of ref document: A |
NENP | Non-entry into the national phase |
Ref country code: DE |
WWE | Wipo information: entry into national phase |
Ref document number: 2018774522 Country of ref document: EP |
ENP | Entry into the national phase |
Ref document number: 2018774522 Country of ref document: EP Effective date: 20191031 |