WO2019082832A1 - Imaging apparatus, imaging apparatus control method, and program - Google Patents

Imaging apparatus, imaging apparatus control method, and program

Info

Publication number
WO2019082832A1
Authority
WO
WIPO (PCT)
Prior art keywords
image data
imaging
exposure
frame
subject
Prior art date
Application number
PCT/JP2018/039130
Other languages
French (fr)
Japanese (ja)
Inventor
Hironori Kaida (海田 宏典)
Original Assignee
Canon Inc. (キヤノン株式会社)
Priority date
Filing date
Publication date
Priority claimed from JP2018189988A external-priority patent/JP7286294B2/en
Application filed by Canon Inc. (キヤノン株式会社)
Publication of WO2019082832A1 publication Critical patent/WO2019082832A1/en
Priority to US16/850,836 priority Critical patent/US11375132B2/en

Classifications

    • G PHYSICS
    • G03 PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03B APPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B 15/00 Special procedures for taking photographs; Apparatus therefor
    • G03B 17/00 Details of cameras or camera bodies; Accessories therefor
    • G03B 17/02 Bodies
    • G03B 7/00 Control of exposure by setting shutters, diaphragms or filters, separately or conjointly
    • G03B 7/08 Control effected solely on the basis of the response, to the intensity of the light received by the camera, of a built-in light-sensitive device
    • G03B 7/091 Digital circuits
    • G03B 7/093 Digital circuits for control of exposure time
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/70 Circuitry for compensating brightness variation in the scene

Definitions

  • the present invention relates to a technique for capturing an image timed to the movement of an object.
  • In this shooting mode, the photographer sets a desired shutter speed and the imaging device automatically sets the exposure settings other than the shutter speed, such as the aperture value and the ISO sensitivity.
  • the photographer can perform imaging at a desired shutter speed by using this imaging mode. For example, by setting a shutter speed with a short exposure time, it is possible to capture an image with less subject blurring even for a fast-moving subject such as a waterfall spray or a racing car.
  • Japanese Patent Application Laid-Open No. 2006-197192 discloses an imaging device that detects the amount of movement of an object from an image captured before capturing a still image, and determines the shutter speed based on the detection result.
  • One aspect of the present invention is an imaging apparatus comprising: a first imaging means; a second imaging means; a determining means for determining the movement of a subject in image data of a plurality of frames, using the image data of the plurality of frames captured by the second imaging means while the first imaging means performs exposure for image data of a first frame; and a control means for, based on the result determined by the determining means, stopping the exposure for the image data of the first frame by the first imaging means and starting exposure for image data of a second frame, which follows the first frame, by the first imaging means.
  • Another aspect of the present invention is an imaging apparatus that is detachable from an external imaging apparatus having a first imaging means, the imaging apparatus comprising: a second imaging means; a determination unit that determines the movement of a subject in image data of a plurality of frames, using the image data of the plurality of frames captured by the second imaging unit while the first imaging means performs exposure for image data of a first frame; and a control means for, based on the result determined by the determination unit, stopping the exposure for the image data of the first frame by the first imaging unit and starting exposure for image data of a second frame, which follows the first frame, by the first imaging unit.
  • a so-called digital camera is taken as an imaging apparatus according to an embodiment of the present invention, but the present invention is not limited to this.
  • the present invention may be implemented as another device having an imaging function, for example, a digital video camera, a mobile phone, a smartphone, and other portable electronic devices.
  • The first embodiment describes an imaging apparatus that captures an image synchronized with the movement of a subject by determining the timing for starting exposure based on the result of motion analysis using motion vectors during the exposure period. Hereinafter, the first embodiment of the present invention will be described.
  • FIG. 1A is a block diagram showing a configuration example of an imaging device 100 according to the first embodiment of the present invention.
  • the imaging apparatus 100 includes a first imaging system 110, a second imaging system 120, and an operation member 130.
  • the first control circuit 111 is, for example, a processor such as a CPU or an MPU.
  • the first control circuit 111 reads the operation program of each block included in the first imaging system 110 from the first ROM 112 described later, expands the program on the first RAM 113 described later, and executes it, thereby controlling the operation of each block included in the first imaging system 110.
  • the first control circuit 111 also controls the overall operation of the first imaging system 110 and the second imaging system 120.
  • the first ROM 112 is an electrically erasable and recordable nonvolatile memory, and stores parameters necessary for the operation of each block in addition to the operation program of each block included in the first imaging system 110.
  • the first RAM 113 is a rewritable volatile memory, and is used as a temporary storage area of data output in the operation of each block included in the first imaging system 110.
  • the first optical system 114 is constituted by a lens group including a zoom lens and a focus lens, and forms an object image on a first image sensor 115 described later.
  • the first imaging element 115 is configured of, for example, a CCD or a CMOS sensor provided with color filters of a plurality of colors.
  • the first image sensor 115 photoelectrically converts the optical image formed on it by the first optical system 114, and outputs the resulting analog image signal to the first A/D conversion circuit 116.
  • the first image sensor 115 starts exposure at the timing when the shutter button included in the operation member 130 is fully pressed, and ends exposure based on a signal indicating the exposure end timing that is output from the timing generation circuit 200 described later.
  • the first A / D conversion circuit 116 converts the input analog image signal into a digital image signal, and outputs the obtained digital image data to the first RAM 113.
  • the first image processing circuit 117 applies various kinds of image processing, such as white balance adjustment, color interpolation, noise correction, gamma processing, conversion to luminance/color-difference signals, and aberration correction, to the image data stored in the first RAM 113.
  • the image output circuit 118 is a circuit for receiving the image data processed by the first image processing circuit 117 via the first RAM 113 and outputting the image data to an external device. Specifically, image data is read from or written to a recording medium removable from the imaging apparatus 100, and images are transmitted and received to and from a smartphone, a server, or the like using a wireless or wired communication function.
  • the display device 119 is a display device such as an LCD or an organic EL display, and displays an image recorded in the first RAM 113.
  • the second control circuit 121 is, for example, a processor such as a CPU or an MPU.
  • the second control circuit 121 reads the operation program of each block included in the second imaging system 120 from the second ROM 122 described later, expands the program in the second RAM 123 described later, and executes it, thereby controlling the operation of each block included in the second imaging system 120.
  • the second ROM 122 is an electrically erasable and recordable nonvolatile memory, and stores, in addition to the operation program of each block included in the second imaging system 120, parameters and the like necessary for the operation of each block.
  • the second RAM 123 is a rewritable volatile memory, and is used as a temporary storage area of data output in the operation of each block included in the second imaging system 120.
  • the second optical system 124 is composed of a lens group including a zoom lens and a focus lens, and forms an object image on a second image sensor 125 described later.
  • the second image sensor 125 is, for example, a CCD or CMOS sensor; it photoelectrically converts the optical image formed on it by the second optical system 124 and outputs the resulting analog image signal to the second A/D conversion circuit 126.
  • the second image sensor 125 is an element used to detect movement and blurring, and therefore does not necessarily have to include color filters of a plurality of colors; it may instead have a configuration including a monochrome (white) filter or an infrared filter.
  • the second A / D conversion circuit 126 converts the input analog image signal into a digital image signal, and outputs the obtained digital image data to the second RAM 123.
  • the second image processing circuit 127 applies various image processing such as simple noise correction and gamma processing to the image data stored in the second RAM 123. If the second image sensor 125 includes color filters of a plurality of colors, color interpolation or conversion to a luminance signal is also performed. In addition, the second image processing circuit 127 includes the timing generation circuit 200, which generates a signal indicating the exposure end timing of the first image sensor 115 based on the result of motion analysis using the image data stored in the second RAM 123. This signal is output to the first imaging system 110 via the second control circuit 121. When the first imaging system 110 receives the signal, the first control circuit 111 ends the exposure of the first image sensor 115.
  • the operation member 130 is an operation member that receives an instruction from the user, and includes a shutter button and a dial key. Further, the display device 119 may have a touch panel function. Signals generated by the user operating these operation members are reflected in drive control of the first imaging system 110 and the second imaging system 120.
  • In the present embodiment, the first imaging system 110 and the second imaging system 120 are integrally configured as the imaging device 100;
  • however, the present invention is not limited to this.
  • For example, the first imaging system 110 and the operation member 130 may constitute a camera body,
  • and the second imaging system 120 may be an imaging device detachable from that camera body. That is, the second imaging system 120 may be an imaging device that can be attached to and detached from an external imaging apparatus.
  • In addition, the first optical system 114 may be included in an interchangeable lens device that can be attached to and detached from a camera body that includes the first image sensor 115 through the display device 119 and the operation member 130.
  • FIG. 1B is a diagram illustrating a smartphone (or a tablet terminal) as an example of the imaging device 100.
  • a touch panel that doubles as the display device 119 and the operation member 130 is provided on the front of the smartphone, and the first optical system 114 of the first imaging system 110 and the second optical system 124 of the second imaging system 120 are disposed on the back of the smartphone.
  • the present invention can also be implemented in such a smartphone.
  • the second control circuit 121 can also be omitted.
  • That is, the second imaging system 120 may include only the second optical system 124, the second image sensor 125, the second A/D conversion circuit 126, and the second RAM 123, and the other components may be shared with the first imaging system 110. This simplifies the configuration when the second imaging system 120 is a separate camera device.
  • FIG. 12 shows a table in which the configurations of the first image sensor 115 and the second image sensor 125 in the present embodiment are compared.
  • the frame rate of the first imaging device 115 is 20 fps (frames / second)
  • the frame rate of the second imaging device 125 is 1000 fps.
  • the second image sensor 125 can be set to a shutter speed with a shorter exposure time than the first image sensor 115. To realize this shutter speed, the second image sensor 125 needs higher sensitivity than the first image sensor 115. Therefore, the second image sensor 125 is configured with a larger pixel pitch than the first image sensor 115, at the cost of a smaller number of pixels.
  • the horizontal size of the imaging unit is 36 mm for the first imaging element 115, while it is 4 mm for the second imaging element 125.
  • the number of horizontal pixels is 6,400 for the first image sensor 115, while the second image sensor 125 is 640 pixels.
  • the pixel pitch is 5.62 ⁇ m for the first imaging device 115, while the pixel pitch is 6.25 ⁇ m for the second imaging device 125.
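  • The pitch values follow directly from the sensor width and the horizontal pixel count; a quick consistency check (not part of the patent text) in Python:

```python
# Pixel pitch = sensor width / number of horizontal pixels (values from the comparison above).
print(36.0 / 6400 * 1000)  # first image sensor:  ~5.63 um per pixel (listed as 5.62 um)
print(4.0 / 640 * 1000)    # second image sensor:  6.25 um per pixel
```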
  • the timing generation circuit 200 analyzes motion by detecting motion vectors in the image data that the second image sensor 125, imaging at a high frame rate, stores in the second RAM 123.
  • If the second image sensor 125 includes color filters of a plurality of colors, color interpolation or conversion to a luminance signal is performed first, so that every pixel carries a signal of the same component. Then, based on the result of the motion analysis, the timing generation circuit 200 determines the exposure end timing of the first image sensor 115 and outputs a signal for ending the exposure of the first image sensor 115 to the first imaging system 110.
  • FIG. 2 is a block diagram showing a configuration example of the timing generation circuit 200 according to the first embodiment.
  • the timing generation circuit 200 includes a motion vector calculation circuit 201, an accumulated amount calculation circuit 202, a representative accumulated amount calculation circuit 203, and a timing determination circuit 204.
  • FIGS. 3 and 4 are flowcharts of imaging processing in the high-speed imaging mode according to the first embodiment.
  • the flowchart of FIG. 3 is started when the power of the imaging apparatus 100 is turned on.
  • In step S301, the first control circuit 111 determines whether the shooting mode is set; if it is not set, the process proceeds to step S302, and if it is set, the process proceeds to step S305.
  • In step S302, the first control circuit 111 determines whether the setting menu for the blur level or the movement start level is selected; if another process is selected, the process proceeds to step S303, and that other process is performed in step S303.
  • If the setting menu for the blur level or the movement start level is selected, the first control circuit 111 proceeds to step S304.
  • In step S304, the first control circuit 111 displays a screen for setting the blur level or the movement start level on the display device 119, and sets one of the levels according to the user's operation of the operation member 130.
  • For the blur level, the display device 119 displays graded levels from "standard" to "low" and allows the user to select one.
  • A second threshold, described later, is set such that the blur contained in the captured image becomes smaller as the user selects a blur level closer to "low".
  • The movement start level is likewise displayed on the display device 119 in steps and can be selected by the user. Since the criterion for judging that the subject has started to move may differ from user to user, the movement start level is indicated by a numerical value so that it can be selected over a wide range.
  • Based on the set movement start level, the first control circuit 111 determines a movement-amount determination value for the first imaging system 110, and the second control circuit 121 sets, based on this determination value, a first threshold used in step S322 described later.
  • Similarly, the first control circuit 111 determines a blur allowance value for the first imaging system 110, and the second control circuit 121 sets, based on the blur allowance value, a second threshold used in step S331 described later.
  • the shake allowance value is set to the permissible circle of confusion diameter.
  • the permissible circle of confusion diameter represents the limit that can be resolved by an observer with a visual acuity of 1.0 when viewing a photograph at a distance of 250 mm, and is about 20 μm on a 36 × 24 mm image sensor.
  • In the present embodiment, the pitch of four pixels of the first image sensor 115, 22.48 μm (5.62 μm × 4), is taken as the permissible circle of confusion diameter.
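  • As an illustration of how such a blur allowance could be turned into the second threshold (a hedged sketch; the patent does not give this conversion explicitly, and the 100 mm zoom position used below is an assumed value), the allowance on the first image sensor can be scaled by the ratio of focal lengths and expressed in pixels of the second image sensor:

```python
def second_threshold_pixels(allowance_um_sensor1, focal1_mm, focal2_mm, pitch2_um=6.25):
    """Hypothetical conversion of the blur allowance (in micrometres on the first image
    sensor) into a motion-vector accumulation threshold expressed in pixels of the second
    image sensor. Assumes both sensors view the same distant subject, so image-plane
    displacements scale roughly with the ratio of the focal lengths."""
    allowance_um_sensor2 = allowance_um_sensor1 * (focal2_mm / focal1_mm)
    return allowance_um_sensor2 / pitch2_um

# With the numbers from the description: a 22.48 um allowance (four 5.62 um pixels),
# a 300 mm first optical system and, say, a 100 mm zoom position of the second one:
print(second_threshold_pixels(22.48, 300.0, 100.0))  # ~1.2 pixels of accumulated motion
```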
  • In step S305, the first control circuit 111 activates the first image sensor 115.
  • In step S306, the first control circuit 111 determines whether the high-speed shooting mode is selected as the shooting mode. If the high-speed shooting mode is not selected, the process proceeds to step S307, and the processing of the other shooting mode is performed in step S307. If the high-speed shooting mode is selected, the first control circuit 111 proceeds to step S308.
  • In step S308, the first control circuit 111 drives the first optical system 114 based on the contrast value of the subject obtained from the first image sensor 115 or on the output of a distance measuring sensor (not shown), and performs autofocus control (AF).
  • In step S309, the first control circuit 111 performs automatic exposure control (AE) for the first image sensor 115 based on the luminance value of the subject obtained from the first image sensor 115.
  • In step S310, the first control circuit 111 determines whether SW1 of the shutter switch is turned on by half-pressing the shutter switch included in the operation member 130, and repeats steps S308 and S309 until SW1 is turned on.
  • When SW1 is turned on in step S310, the second control circuit 121 activates the second image sensor 125 in step S311.
  • In step S312, the first control circuit 111 performs AF with the first optical system 114 on the main subject selected when SW1 is turned on.
  • In step S313, the first control circuit 111 performs AE for the first image sensor 115 on the main subject selected when SW1 is turned on.
  • In step S314, the second control circuit 121 receives zoom information of the first optical system 114 from the first control circuit 111 and controls the zoom state of the second optical system 124. Control of the zoom state of the second optical system 124 will be described with reference to FIG. 5.
  • FIG. 5 is a diagram for explaining the positional relationship between the imaging device 100 and the subject 500 when the SW 1 is turned on.
  • Assume that the first optical system 114 of the imaging device 100 has a focal length of 300 mm, and that the photographer tries to capture a subject 500 that is 40 m ahead and moving at 0.3 m/sec (300 mm/sec).
  • the subject 500 is assumed to move in the vicinity of the optical axis of the first optical system 114.
  • The plane 40 m ahead will be called the object plane.
  • The moving speed of the subject 500 can be measured from the distance information to the subject 500 and from the motion vectors, described later, calculated from the images obtained during framing.
  • The second control circuit 121 moves the focal length of the second optical system 124 to the telephoto side (zooms in) to increase the resolution of motion detection in the second image sensor 125.
  • When the focal length is increased and the zoom position is moved to the telephoto side in this way, the angle of view narrows; therefore, if the subject is located away from the optical axis, it may fall outside the field of view.
  • In that case, the field of view can be shifted to a region away from the optical axis by using a known technique for shifting the optical axis or the position of the image sensor.
  • When the determination value of the amount of movement is smaller than the blur allowance value, the focal length of the second optical system 124 is changed based on the determination value of the amount of movement.
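  • A rough worked example of why the zoom position of the second optical system matters (the focal lengths below are assumed values for illustration, not taken from the description):

```python
def per_frame_motion_px(subject_speed_mm_s, distance_mm, focal2_mm,
                        fps=1000.0, pitch2_um=6.25):
    """Subject displacement per analysis frame, in pixels of the second image sensor."""
    magnification = focal2_mm / (distance_mm - focal2_mm)
    displacement_um = subject_speed_mm_s * 1000.0 * magnification / fps
    return displacement_um / pitch2_um

# A subject 40 m away moving at 300 mm/s, analyzed at 1000 fps:
print(per_frame_motion_px(300.0, 40000.0, 25.0))   # ~0.03 px/frame at a short focal length
print(per_frame_motion_px(300.0, 40000.0, 200.0))  # ~0.24 px/frame after zooming to telephoto
```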
  • In step S315, the second control circuit 121 performs AF with the second optical system 124 based on the information of the main subject selected when SW1 is turned on.
  • In step S316, the second control circuit 121 performs AE for the second image sensor 125 based on the information of the main subject selected when SW1 is turned on.
  • In step S317, the first control circuit 111 determines whether SW2 of the shutter switch is turned on by fully pressing the shutter switch included in the operation member 130, and repeats steps S312 to S316 until SW2 is turned on.
  • When SW2 is turned on, in step S318 of FIG. 4 the first control circuit 111 sets the exposure period based on the result of the AE performed in step S313 and starts exposure of the first image sensor 115.
  • In step S319, the second control circuit 121 sets the frame rate to 1000 fps, or to a predetermined multiple (for example, 50 times) of the frame rate set for the first image sensor 115, and starts exposure of the second image sensor 125.
  • Each time the second image sensor 125 reaches the exposure time corresponding to the set frame rate, it outputs the obtained analog image signal to the second A/D conversion circuit 126 and immediately starts the next exposure, repeating this cycle. That is, during a single exposure period of the first image sensor 115, the second image sensor 125 is exposed repeatedly at a faster frame rate.
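  • For instance (a back-of-the-envelope figure, not a value stated in the patent), a 1/20-second main exposure analyzed at 1000 fps gives:

```python
main_exposure_s = 1 / 20   # example exposure period of the first image sensor
analysis_fps = 1000        # frame rate of the second image sensor
print(int(main_exposure_s * analysis_fps))  # 50 analysis frames during one main exposure
```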
  • FIG. 6 is a diagram for explaining the operation of the first image sensor 115, the second image sensor 125, and the timing generation circuit 200 in the first embodiment.
  • When SW2 is turned on, the first image sensor 115 in the first imaging system 110 immediately starts exposure.
  • At the same time, the second image sensor 125 in the second imaging system 120 starts capturing images at a high frame rate.
  • The second image sensor 125 continuously captures images with a short exposure time at time T1, time T2, time T3, and so on.
  • In step S320, the motion vector calculation circuit 201 in the timing generation circuit 200 calculates motion vectors between frames of the image data obtained by the second image sensor 125, together with the reliability of each motion vector.
  • the motion vector is a vector representing the amount of movement of the subject in the horizontal direction and the amount of movement in the vertical direction between frames. The method of calculating the motion vector will be described in detail with reference to FIGS. 7 to 9.
  • FIG. 7 is a flowchart showing the process of calculating the reliability of the motion vector and the motion vector by the motion vector calculation circuit 201.
  • FIG. 8 is a diagram for explaining a method of calculating a motion vector
  • FIG. 8A is a diagram showing image data of the M-th frame
  • FIG. 8B is a diagram showing image data of the M + 1-th frame
  • FIG. 8C is a diagram showing a motion vector between the Mth frame and the (M + 1) th frame.
  • the motion vectors in FIG. 8C describe only representative motion vectors for simplification.
  • M is a positive integer.
  • FIG. 9 is a diagram for explaining a method of calculating a motion vector by the block matching method.
  • a block matching method is described as an example of a motion vector calculation method.
  • the motion vector calculation method is not limited to this example, and may be, for example, a gradient method.
  • In step 701 of FIG. 7, image data of two temporally adjacent frames is input to the motion vector calculation circuit 201. The motion vector calculation circuit 201 sets the Mth frame as the base frame 901 and the (M+1)th frame as the reference frame 903.
  • In step 702 of FIG. 7, the motion vector calculation circuit 201 places a base block 902 of N × N pixels in the base frame 901, as shown in FIG. 9.
  • In step 703 of FIG. 7, the motion vector calculation circuit 201 sets, in the reference frame 903, a search range 905 of (N + n) × (N + n) pixels around the coordinate 904 that corresponds to the center coordinate of the base block 902 of the base frame 901, as shown in FIG. 9.
  • In step 704 of FIG. 7, the motion vector calculation circuit 201 performs a correlation calculation between the base block 902 of the base frame 901 and reference blocks 906 of N × N pixels located at different coordinates within the search range 905 of the reference frame 903, and calculates a correlation value for each.
  • The correlation value is calculated based on the sum of absolute differences between frames for the pixels of the base block 902 and the reference block 906. That is, the coordinate with the smallest sum of absolute differences between frames is the coordinate with the highest correlation value.
  • The method of calculating the correlation value is not limited to the sum of absolute differences between frames; for example, the correlation value may be calculated based on the sum of squared differences between frames or on a normalized cross-correlation value.
  • In the example of FIG. 9, the resolution in units of subpixels is 0.5 pixels.
  • Equation (1) relates to the x component; the y component can be calculated in the same way.
  • In step 705, the motion vector calculation circuit 201 calculates a motion vector based on the coordinates of the reference block showing the highest correlation value obtained in step 704, and the correlation value of that motion vector is used as the reliability of the motion vector.
  • Specifically, the motion vector is obtained from the coordinate 904 in the reference frame 903 (the same position as the center coordinate of the base block 902 of the base frame 901) and the center coordinate of the best-matching reference block 906 in the search range 905. That is, the distance and direction from the coordinate 904 to the center coordinate of that reference block 906 are determined as the motion vector.
  • In addition, the correlation value resulting from the correlation calculation with that reference block 906 is obtained as the reliability of the motion vector.
  • The reliability of the motion vector is higher as the correlation value between the base block and the reference block is higher.
  • In step 706 of FIG. 7, the motion vector calculation circuit 201 determines whether motion vectors have been calculated for all pixels of the base frame 901. If the motion vector calculation circuit 201 determines in step 706 that the motion vectors of all pixels have not been calculated, the process returns to step 702. In step 702, a base block 902 of N × N pixels is then placed in the base frame 901 centered on a pixel for which a motion vector has not yet been calculated, and the processing from step 703 to step 705 is performed as described above. That is, the motion vector calculation circuit 201 calculates the motion vectors of all pixels of the base frame 901 by repeating the processing from step 702 to step 705 while moving the base block 902 in FIG. 9. An example of these motion vectors is shown in FIG. 8C.
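  • The block matching procedure above can be sketched as follows (a minimal illustration, not the patent's implementation; the function and variable names are my own, and the minimum SAD is used directly as the reliability measure, with a smaller SAD meaning a higher correlation):

```python
import numpy as np

def block_matching_vector(base, ref, cy, cx, block=16, search=8):
    """Return the motion vector (dy, dx) of the block centered at (cy, cx) of `base`
    by finding the best-matching block in `ref`, together with the minimum sum of
    absolute differences (smaller SAD = higher correlation = higher reliability)."""
    half = block // 2
    template = base[cy - half:cy + half, cx - half:cx + half].astype(np.int32)
    best, best_sad = (0, 0), None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = cy + dy, cx + dx
            candidate = ref[y - half:y + half, x - half:x + half].astype(np.int32)
            sad = int(np.abs(template - candidate).sum())  # sum of absolute differences
            if best_sad is None or sad < best_sad:
                best_sad, best = sad, (dy, dx)
    return best, best_sad

# Toy example: frame M+1 is frame M with the content shifted 3 pixels to the right.
frame_m = np.random.randint(0, 255, (480, 640), dtype=np.uint8)
frame_m1 = np.roll(frame_m, 3, axis=1)
vector, sad = block_matching_vector(frame_m, frame_m1, 240, 320)
print(vector, sad)  # -> (0, 3), 0: the content moved 3 pixels to the right
```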
  • The example of FIG. 8 shows a case in which a person moves from left to right between the Mth frame of FIG. 8A and the (M+1)th frame of FIG. 8B.
  • a representative example of the motion vector when the subject is moving as described above is shown in FIG. 8C.
  • the subject position present in the Mth frame is the start point of the motion vector
  • the subject position in the M + 1th frame corresponding thereto is the end point of the motion vector.
  • Note that the motion vector calculation circuit 201 may calculate motion vectors only at a predetermined subset of pixels, rather than at all pixels.
  • The moving speed of the subject may change. Therefore, it is preferable to convert the magnitude of the motion vector between two temporally adjacent frames into the movement speed on the object plane by the calculation method described above, and to change the focal length, imaging magnification, and angle of view of the second optical system appropriately during the exposure of the first image sensor 115.
  • At time T1, the motion vector calculation circuit 201 calculates, by the process of the flowchart of FIG. 7 described above, the motion vectors between the frames obtained at time T0 and time T1 and their reliability. Then, at time T2, it calculates the motion vectors between the frames obtained at time T1 and time T2 and their reliability. From time T3 onward, the same process is repeated to calculate the motion vectors and their reliability between the frames of the image data obtained from the second image sensor 125.
  • In step S321, the accumulation amount calculation circuit 202 tracks the motion vectors calculated in step S320 over a plurality of frames and calculates the accumulation amount of the motion vectors. Then, the representative accumulation amount calculation circuit 203 determines a representative accumulation amount representing the entire frame based on the calculated accumulation amounts of the motion vectors.
  • FIG. 11 is a diagram showing motion vectors among a plurality of frames calculated in step S320.
  • For simplicity, the method of calculating the accumulation amount of the motion vector is described for the period from time T0 to time T3; the accumulation amount is calculated by the same method for the subsequent periods.
  • A motion vector 1101 indicates the motion vector calculated between the frame at time T0 and the frame at time T1.
  • The motion vector 1102 indicates the motion vector calculated between the frame at time T1 and the frame at time T2.
  • The motion vector 1103 indicates the motion vector calculated between the frame at time T2 and the frame at time T3.
  • First, from among the motion vectors calculated between the frames at time T1 and time T2, the accumulation amount calculation circuit 202 searches for a motion vector whose start point coordinate is the end point coordinate Q of the motion vector 1101 calculated between the frames at time T0 and time T1. The motion vector 1102 that satisfies this condition is linked with the motion vector 1101. Next, from among the motion vectors calculated between the frames at time T2 and time T3, the accumulation amount calculation circuit 202 searches for a motion vector whose start point coordinate is the end point coordinate R of the motion vector 1102 calculated between the frames at time T1 and time T2. The motion vector 1103 that satisfies this condition is linked with the motion vector 1102. Motion vectors are linked by the same process in the subsequent periods.
  • In this way, tracking motion vectors are calculated for all pixels.
  • the calculated tracking motion vector indicates that the subject present at coordinate P at time T0 moves to coordinate Q at time T1, moves to coordinate R at time T2, and moves to coordinate S at time T3.
  • the accumulation amount calculation circuit 202 calculates the length of the tracking motion vector as the accumulation amount (VecLen) of the motion vector as shown in Expression (5).
  • VecLen = VecLen1 + VecLen2 + VecLen3 ... (5)
  • VecLen1 indicates the length of the motion vector of the motion vector 1101 calculated between the frames at time T0 and time T1.
  • VecLen2 indicates the length of the motion vector of the motion vector 1102 calculated between the frames at time T1 and time T2.
  • VecLen3 indicates the length of the motion vector of the motion vector 1103 calculated between the frames at time T2 and time T3.
  • the accumulation amount calculation circuit 202 calculates the sum of the lengths of the motion vectors constituting the tracking motion vector as the accumulation amount of the motion vector based on Expression (5).
  • the processing for calculating the accumulated amount of motion vectors as described above is performed on the tracking motion vectors of all pixels to calculate the accumulated amounts of motion vectors of all pixels.
  • Note that the accumulation amount calculation circuit 202 may exclude motion vectors whose reliability is lower than a predetermined value from the summation of motion vector lengths in Expression (5). The accumulation amount calculation circuit 202 may also exclude, from the summation in Expression (5), a motion vector whose reliability is lower than a predetermined value together with the motion vectors that temporally follow it. This makes it possible to calculate the accumulation amount using only motion vectors with high reliability. Alternatively, each motion vector may be separated into an X-direction component and a Y-direction component, and the sum of the motion vector lengths may be calculated for each direction.
  • the representative accumulation amount calculation circuit 203 selects the maximum value among the accumulation amounts of motion vectors obtained from all the pixels in the frame, and determines the accumulation amount of the selected maximum motion vector as a representative accumulation amount. By performing such processing for each frame, as shown in FIG. 6, one representative cumulative amount is calculated for each frame.
  • The representative accumulation amount determined by the representative accumulation amount calculation circuit 203 is not limited to the maximum of the accumulation amounts of the motion vectors of all pixels in the frame; it may instead be the average value or the median value of those accumulation amounts.
  • When the accumulation amounts are calculated separately for each direction, a representative accumulation amount may be determined for each direction.
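  • The linking and accumulation described above can be sketched as follows (an illustrative outline with my own naming, not the patent's implementation): per-interval vectors are chained end point to start point, their lengths are summed as in Expression (5), and the maximum over the frame is taken as the representative accumulation amount:

```python
import math

# Each motion vector is stored as {"start": (x0, y0), "end": (x1, y1)};
# vectors_per_interval[i] holds the vectors computed between frame i and frame i+1.

def accumulate_tracking_vectors(vectors_per_interval):
    """Link vectors whose start point equals the previous end point and sum their
    lengths (Expression (5): VecLen = VecLen1 + VecLen2 + VecLen3 + ...)."""
    accumulated = []
    for first in vectors_per_interval[0]:
        total = math.dist(first["start"], first["end"])
        tail = first["end"]
        for interval in vectors_per_interval[1:]:
            nxt = next((v for v in interval if v["start"] == tail), None)
            if nxt is None:
                break                      # the chain cannot be continued
            total += math.dist(nxt["start"], nxt["end"])
            tail = nxt["end"]
        accumulated.append(total)
    return accumulated

def representative_accumulation(accumulated_lengths):
    """The default choice in the description: the maximum accumulation amount in the frame."""
    return max(accumulated_lengths) if accumulated_lengths else 0.0

# Toy example corresponding to FIG. 11 (P -> Q -> R -> S):
v1 = {"start": (10, 10), "end": (13, 10)}   # between T0 and T1
v2 = {"start": (13, 10), "end": (17, 10)}   # between T1 and T2
v3 = {"start": (17, 10), "end": (22, 10)}   # between T2 and T3
print(representative_accumulation(accumulate_tracking_vectors([[v1], [v2], [v3]])))  # 12.0
```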
  • In step S322, the timing determination circuit 204 determines whether the representative accumulation amount is equal to or greater than the first threshold; if it is not, the circuit determines that the subject has not started moving, and the process proceeds to step S323.
  • The first threshold is set, based on the movement start level set in step S304 as described above, to detect the timing at which the subject starts moving, and it is set independently of the second threshold set in step S304. It can be set to an arbitrary value according to the movement of the subject to be photographed; for a subject that starts moving quickly, such as an insect like a dragonfly, it is desirable to set the first threshold smaller than for a subject that starts moving slowly, such as a person.
  • For example, the first threshold may be set according to the shooting mode: in a normal shooting mode it is set assuming the movement of a person, and in a macro shooting mode it is set assuming the movement of an insect. Further, since both the first threshold and the second threshold can be set by the user to arbitrary values within a predetermined range, depending on the set values either the first threshold or the second threshold may be the larger of the two.
  • In step S323, the first control circuit 111 of the first imaging system 110 determines whether the exposure time of the first image sensor 115 has reached the exposure time set based on the AE performed in step S313. If it has not, the process returns to step S322. If the exposure time of the first image sensor 115 has reached the exposure time set based on the AE performed in step S313, the process proceeds to step S324.
  • In step S324, the first control circuit 111 stops the exposure of the first image sensor 115, and the first A/D conversion circuit 116 and the first image processing circuit 117 perform predetermined processing on the analog image signal generated by the first image sensor 115. The image data obtained by the first image processing circuit 117 is then sent to the display device 119 and used as image data for live view.
  • In step S325, the first control circuit 111 stops the exposure of the second image sensor 125 via the second control circuit 121.
  • In step S326, the second control circuit 121 resets the accumulation amounts and the representative accumulation amount calculated by the accumulation amount calculation circuit 202 and the representative accumulation amount calculation circuit 203, and the process returns to step S318.
  • In step S322, if the representative accumulation amount is equal to or greater than the first threshold, the timing determination circuit 204 can determine that the subject has started to move, and the process proceeds to step S327.
  • In step S327, the timing determination circuit 204 outputs a signal instructing the first imaging system 110 to perform a reset process. This process is performed as soon as it is determined that the representative accumulation amount has reached the first threshold. In the example shown in FIG. 6, the representative accumulation amount based on the motion vectors calculated between the frames up to time T3 is equal to or greater than the first threshold. Therefore, at that point, the timing determination circuit 204 outputs, via the second control circuit 121, a signal instructing the first imaging system 110 to perform the reset process.
  • When the representative accumulation amount is determined separately for the X direction and the Y direction, a signal instructing the reset process is output when either of the representative accumulation amounts becomes equal to or greater than the first threshold.
  • In step S328, as soon as the first control circuit 111 receives the signal instructing the reset process from the second control circuit 121, it causes the first image sensor 115 to stop the exposure and to perform a reset process that discards the charge accumulated in each pixel. When the reset process is complete and the charge accumulated in the pixels is substantially zero, the first control circuit 111 immediately restarts the exposure of the first image sensor 115.
  • In step S329, the second control circuit 121 causes the second image sensor 125 to stop the exposure and perform a reset process, and then starts the exposure of the second image sensor 125 in synchronization with the timing at which the exposure of the first image sensor 115 is started.
  • In step S330, the second control circuit 121 resets the representative accumulation amount calculated so far by the representative accumulation amount calculation circuit 203, and causes the representative accumulation amount calculation circuit 203 to start calculating the representative accumulation amount anew.
  • In the example of FIG. 6, the representative accumulation amount calculated up to time T3 is reset, and calculation of the representative accumulation amount starts again from time T4.
  • In step S331, the timing determination circuit 204 determines whether the representative accumulation amount is equal to or greater than the second threshold set in step S304. If the representative accumulation amount is less than the second threshold, the process proceeds to step S332.
  • In step S332, the first control circuit 111 of the first imaging system 110 determines whether the exposure time of the first image sensor 115 has reached the exposure time set based on the AE performed in step S313. If it has not, the process returns to step S331. If the exposure time of the first image sensor 115 has reached the exposure time set based on the AE performed in step S313, the process proceeds to step S334.
  • In step S334, the first control circuit 111 stops the exposure of the first image sensor 115.
  • In step S331, if the representative accumulation amount is equal to or greater than the second threshold, the process proceeds to step S333.
  • In step S333, the timing determination circuit 204 outputs a signal instructing the first imaging system 110 to end the exposure. This process is performed as soon as it is determined that the representative accumulation amount is equal to or greater than the second threshold. In the example shown in FIG. 6, the representative accumulation amount based on the motion vectors calculated between the frames up to time T8 is equal to or greater than the second threshold. Therefore, at that point, the timing determination circuit 204 outputs, via the second control circuit 121, a signal instructing the first imaging system 110 to end the exposure.
  • When the representative accumulation amount is determined separately for the X direction and the Y direction, a signal instructing the end of exposure is output when either of the representative accumulation amounts becomes equal to or greater than the second threshold.
  • In this case, the process proceeds to step S333 and step S335, and the exposure of the first image sensor 115 is stopped even if the exposure time that the first control circuit 111 determined to be appropriate has not been reached.
  • the first control circuit 111 outputs an analog image signal generated by the first imaging element 115 to the first A / D conversion circuit 116.
  • the digital image data generated by the A / D conversion circuit 116 is subjected to predetermined processing by the first image processing circuit 117, and is output to the image output circuit 118 as image data for recording.
  • the image output circuit 118 writes the image data for recording on a recording medium removable from the imaging apparatus 100, or transmits the image data for recording to an external device such as a smartphone or a server using the wireless or wired communication function.
  • Strictly speaking, the first control circuit 111 stops the exposure of the first image sensor 115 at a timing slightly later than time T8.
  • This is because a time lag arises from the calculation time between the generation of the frame image at time T8 in the second image sensor 125 and the acquisition of the representative accumulation amount, and from the time required for the signal output from the timing determination circuit 204 to reach the first control circuit 111. However, if the thresholds are set in consideration of these time lags, their influence can be suppressed.
  • In step S335, the second control circuit 121 of the second imaging system 120 stops the exposure of the second image sensor 125.
  • In step S336, the first control circuit 111 of the first imaging system 110 determines whether the shooting mode is still selected. If the shooting mode remains selected, the process returns to step S306; if another mode has been selected, the process returns to step S302.
  • As described above, in the present embodiment, the exposure of the first image sensor 115 is stopped and the reset process is performed based on the amount of subject movement during the exposure period of the first image sensor 115, and the first image sensor 115 then immediately resumes the exposure. It is therefore possible to capture an image matched to the timing of the subject's movement.
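  • Putting the two comparisons together, the control flow of the first embodiment can be summarized in the following schematic sketch (the hardware signalling through the control circuits is abstracted away; the function and object names are my own, not the patent's):

```python
def control_main_exposure(read_representative_accumulation, reset_accumulation,
                          sensor1, first_threshold, second_threshold, set_exposure_time):
    """Schematic loop of the first embodiment: restart the main exposure when the subject
    starts moving, and end it before the accumulated motion exceeds the blur allowance."""
    sensor1.start_exposure()
    # Phase 1 (steps S318-S328): wait for the start of the subject's movement.
    while True:
        acc = read_representative_accumulation()    # updated at the 1000 fps analysis rate
        if acc >= first_threshold:                  # movement detected (step S322)
            sensor1.reset()                         # discard accumulated charge (step S328)
            sensor1.start_exposure()                # and immediately re-expose
            break
        if sensor1.elapsed() >= set_exposure_time:  # no movement: finish a live-view frame
            sensor1.stop_exposure()                 # (steps S323-S326)
            reset_accumulation()
            sensor1.start_exposure()                # and keep waiting
    # Phase 2 (steps S330-S335): end the exposure before the blur allowance is exceeded.
    reset_accumulation()                            # step S330
    while True:
        acc = read_representative_accumulation()
        if acc >= second_threshold or sensor1.elapsed() >= set_exposure_time:
            sensor1.stop_exposure()                 # steps S331-S334
            return
```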
  • By adjusting the first threshold compared with the representative accumulation amount in step S322, it is possible to adjust how large a subject movement is regarded as the start of movement. For example, the smaller this threshold is made, the smaller the movement that triggers the exposure reset and the restart of exposure.
  • When the movement of the subject is about to stop, the motion vectors calculated by the motion vector calculation circuit 201 become gradually smaller. Therefore, from among the tracking motion vectors calculated by the accumulation amount calculation circuit 202, a tracking motion vector whose linked motion vectors gradually decrease is selected. For the selected tracking motion vector, the timing determination circuit 204 arranges the magnitudes of the linked motion vectors in time order and obtains the attenuation factor of the motion vector magnitude between two consecutive motion vectors. The timing determination circuit 204 then performs a calculation based on the obtained attenuation factor to predict the magnitude of the motion vector of the frame to be obtained next from the second image sensor 125.
  • The timing determination circuit 204 outputs a signal instructing the first imaging system 110 to perform the reset process when the magnitude of the predicted motion vector is less than the first threshold. If the first threshold is a positive value close to 0, the reset process of the first image sensor 115 and the restart of exposure can be performed at the timing when the movement of the subject stops.
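  • A minimal sketch of this prediction (my own formulation of the attenuation-factor idea; the patent states the principle but not a concrete formula):

```python
def predict_next_magnitude(magnitudes):
    """Given the magnitudes of the linked motion vectors in time order, estimate the
    next magnitude from the average attenuation factor between consecutive vectors."""
    ratios = [b / a for a, b in zip(magnitudes, magnitudes[1:]) if a > 0]
    attenuation = sum(ratios) / len(ratios) if ratios else 0.0
    return magnitudes[-1] * attenuation

# A decelerating subject: 8, 4, 2 pixels of motion per analysis frame.
predicted = predict_next_magnitude([8.0, 4.0, 2.0])   # -> 1.0
if predicted < 1.5:   # a small positive "first threshold" chosen only for illustration
    print("instruct reset and immediate re-exposure: the subject is about to stop")
```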
  • Since the analog image signal read out when the exposure of the first image sensor 115 is stopped in step S324 is not used for recording, the signals of neighboring pixels may be added or averaged to reduce the resolution.
  • That is, the readout method of the first image sensor 115 may be changed such that the resolution of the image data is higher for the exposure started in step S328 than for the exposure started in step S318. By doing so, the processing load for generating image data that is not for recording can be reduced.
  • The timing determination circuit 204 may also output, for each portion of the first image sensor 115, a signal instructing the start of the reset process and re-exposure based on the accumulation amount of that portion. Alternatively, the entire frame may be divided into blocks, and a signal instructing the start of the reset process and re-exposure may be output for each divided block based on the representative accumulation amount of that block.
  • In the present embodiment, the accumulation amount calculation circuit 202 calculates the sum of the lengths of the linked motion vectors, that is, the length of the tracking motion vector, as the accumulation amount of the motion vector, but the calculation is not limited to this. If the motion vectors constituting a tracking motion vector as shown in FIG. 9, or parts of those motion vectors, pass through the same coordinates, the length passing through the same coordinates may be excluded from the summation of motion vector lengths in Expression (5). This makes it possible, for example, to suppress excessive addition of motion vector length for a subject with a minute periodic (repetitive) motion that moves back and forth between adjacent coordinates.
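  • One way to realize this exclusion (an assumption-laden sketch; the patent does not spell out the bookkeeping) is to remember the coordinates already traversed and skip segments that revisit them:

```python
import math

def accumulate_without_revisits(points):
    """Sum segment lengths along a tracking motion vector (a list of (x, y) points),
    skipping any segment whose end point has already been visited, so that a small
    back-and-forth motion does not keep adding to the accumulation amount."""
    visited = {points[0]}
    total = 0.0
    for a, b in zip(points, points[1:]):
        if b in visited:
            continue          # revisited coordinate: do not add this segment
        total += math.dist(a, b)
        visited.add(b)
    return total

# Oscillation between two coordinates adds its length only once:
print(accumulate_without_revisits([(0, 0), (1, 0), (0, 0), (1, 0), (0, 0)]))  # 1.0
```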
  • In the present embodiment, the movement of the subject is detected based on motion vectors; however, in order to reduce the calculation load, the sum of absolute differences between frames may instead be compared with the first threshold and the second threshold.
  • As described above, in the present embodiment, the timing of the reset process and of the start of re-exposure of the first image sensor 115 is decided based on the result of motion analysis using images obtained while changing the focal length, imaging magnification, and angle of view of the second imaging system 120. Therefore, even if the first image sensor 115 and the second image sensor 125 have specifications that differ in resolution, an image with little blur can be captured.
  • In the present embodiment, the motion-detection resolution is increased by moving the focal length of the second optical system 124 to the telephoto side.
  • However, when the focal length of a typical lens is moved to the telephoto side, the F-number increases and the image becomes darker.
  • If the sensitivity is then raised to brighten the image, noise increases and the accuracy of the motion vector calculation deteriorates. Therefore, the maximum amount by which the focal length is moved may be limited according to the magnitude of the noise component of the image obtained by the second image sensor 125.
  • It may also be confirmed whether the image data obtained by determining that the representative accumulation amount is equal to or greater than the first threshold and starting the exposure matches the timing intended by the user. For example, if a predetermined user operation, such as fully pressing the shutter button, is performed for the captured image data, the image data is regarded as a recording target; if there is no such operation, the image data may be deleted without being recorded.
  • Fully pressing the shutter button is only an example; any other user operation, such as a button operation or tilting the imaging device 100 in a predetermined direction, may be used.
  • Alternatively, detection of a voice, such as the user uttering a predetermined word, may be used.
  • the flowchart of FIG. 13 is a flowchart of the imaging process in the high-speed shooting mode of the second embodiment, and differs from the flowchart of FIG. 4 only in that step S1300 is included.
  • the second threshold is set to a sufficiently large value as compared to the first threshold.
  • steps S318 to S326 and steps S327 to S336 in FIG. 13 are the same as those shown in FIG.
  • In step S322 of FIG. 13, if the representative accumulation amount calculated by the representative accumulation amount calculation circuit 203 is equal to or greater than the first threshold, the timing determination circuit 204 determines that the subject has moved, and the process proceeds to step S1300.
  • In step S1300, the timing determination circuit 204 determines whether the representative accumulation amount calculated by the representative accumulation amount calculation circuit 203 has become equal to or greater than the third threshold.
  • The third threshold is larger than the first threshold and smaller than the second threshold. If the representative accumulation amount is less than the third threshold, it is considered that the subject has only just started to move, so the imaging device 100 proceeds to step S327 and, as in the first embodiment, stops the exposure of the first image sensor 115, performs the reset process, and immediately restarts the exposure of the first image sensor 115.
  • If the representative accumulation amount is equal to or greater than the third threshold, it is considered that the amount of subject movement has already largely exceeded the reference value for determining that movement has occurred. In this case, even if the first image sensor 115 is reset and re-exposure is immediately started, there is a high possibility that the exposure of the first image sensor 115 would be resumed at a timing when the movement is already too large. Therefore, if the representative accumulation amount is equal to or greater than the third threshold, the imaging device 100 continues the exposure of the first image sensor 115 without performing the reset process, and proceeds to step S331. In this way, even when the amount of movement at the start of the subject's movement is large, an image including the start of the subject's movement can be captured with good timing.
  • FIG. 14 is a diagram for explaining the operation of the first image sensor 115, the second image sensor 125, and the timing generation circuit 200 in the second embodiment.
  • In the example shown in FIG. 14, the representative accumulation amount based on the motion vectors calculated between the frames up to time T2 is equal to or greater than the first threshold.
  • Furthermore, the representative accumulation amount is also equal to or greater than the third threshold, which is larger than the first threshold. Therefore, at time T2, the timing determination circuit 204 continues the exposure of the first image sensor 115 without outputting a signal instructing the first imaging system 110 to perform the reset process. Thereafter, when the representative accumulation amount becomes equal to or greater than the second threshold corresponding to the blur level, the timing determination circuit 204 instructs the first imaging system 110, via the second control circuit 121, to end the exposure.
  • Alternatively, the exposure time of the first image sensor 115 at the moment the representative accumulation amount exceeds the first threshold may be compared with a fourth threshold.
  • If the exposure time of the first image sensor 115 is equal to or greater than the fourth threshold, it has taken a long time for the subject to move, so it is considered that the subject has only just started to move. Therefore, the reset process of the first image sensor 115 is performed and exposure of the first image sensor 115 is immediately restarted.
  • If the exposure time of the first image sensor 115 is less than the fourth threshold, it is considered that the subject has moved suddenly.
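  • The branch added in the second embodiment can be summarized as below (a schematic sketch with invented names; it also assumes that, in the fourth-threshold variant, a sudden movement is treated like the large-accumulation case, that is, the exposure is continued):

```python
def on_movement_detected(representative_accumulation, exposure_elapsed,
                         third_threshold, fourth_threshold=None):
    """Decide what to do at the moment the representative accumulation amount first reaches
    the first threshold (step S322 of FIG. 13). Returns "reset_and_reexpose" or
    "continue_exposure"."""
    if fourth_threshold is not None:
        # Variant: judge by how long the main exposure has already run. A long exposure
        # before movement was detected means the subject has only just started moving,
        # so restarting the exposure still catches the start of the motion.
        return ("reset_and_reexpose" if exposure_elapsed >= fourth_threshold
                else "continue_exposure")
    # Default: judge by how far the subject has already moved (step S1300).
    return ("reset_and_reexpose" if representative_accumulation < third_threshold
            else "continue_exposure")
```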
  • As described above, in the present embodiment as well, the exposure of the first image sensor 115 is stopped and the reset process is performed based on the amount of subject movement during the exposure period of the first image sensor 115, and the first image sensor 115 then immediately resumes the exposure. It is therefore possible to capture an image matched to the timing of the subject's movement.
  • In addition, switching between stopping and resetting the exposure on the one hand and continuing the exposure on the other is performed according to the amount of movement. Therefore, regardless of how large the subject's movement is, an image including the subject's movement can be captured with good timing.
  • In the third embodiment, the second image processing circuit 127 includes a second timing generation circuit 1500 in addition to the timing generation circuit 200.
  • FIG. 15 is a block diagram showing a configuration example of a second timing generation circuit according to the third embodiment.
  • the second timing generation circuit 1500 includes an object identification circuit 1501, a learning model 1502, a position determination circuit 1503, and a timing determination circuit 1504.
  • the flowchart of FIG. 16 is a flowchart of the imaging process in the high-speed shooting mode of the third embodiment, and differs from the flowchart of FIG. 4 only in that steps S1601 to S1604 are included.
  • steps S318 and S319, steps S323 to S326, and steps S328 to S336 in FIG. 16 are the same as those shown in FIG.
In step S1601 in FIG. 16, the subject identification circuit 1501 in the second timing generation circuit 1500 performs processing for identifying, from the image data obtained by the second image sensor 125, a plurality of predetermined subjects learned in advance.
The subject identification circuit 1501 is configured of, for example, a GPU (Graphics Processing Unit).
The subject identification circuit 1501 can identify the plurality of subjects from the image data by using the learning model 1502, which is obtained in advance by machine learning for identifying the predetermined subjects.
In step S1602, the position determination circuit 1503 in the second timing generation circuit 1500 starts determining whether the positions of the plurality of subjects identified by the subject identification circuit 1501 satisfy a predetermined condition, that is, whether the plurality of subjects have moved to positions satisfying the predetermined condition.
In step S1603, the timing determination circuit 1504 determines whether the positions of the plurality of subjects satisfy the predetermined condition. If the condition is not satisfied, the process advances to step S323; if it is satisfied, the process advances to step S1604.
In step S1604, the timing determination circuit 1504 outputs a signal instructing the first imaging system 110 to perform the reset process. This is done as soon as it is determined that the positions of the plurality of subjects satisfy the predetermined condition.
For example, as the plurality of subjects identified in step S1601, a predator and a prey animal such as a rat or a rabbit are selected.
In step S1603, as the condition on the positions of the plurality of subjects, it is selected that the distance between the identified subjects is equal to or less than a threshold (that is, that they are close to each other).
By doing this, the first image sensor 115 can be made to perform the reset process immediately before the predator catches its prey, and exposure can be started immediately.
As another example, a tennis racket and a tennis ball are selected as the plurality of subjects identified in step S1601.
In step S1603, as the condition on the positions of the plurality of subjects, it is selected that the distance between the identified subjects is equal to or less than a threshold (that is, that they are close to each other). By doing this, the first image sensor 115 can be made to perform the reset process immediately before the racket strikes the ball, and exposure can be started immediately.
As yet another example, a player's arm and the object to be thrown are selected as the plurality of subjects identified in step S1601.
In step S1603, as the condition on the positions of the plurality of subjects, it is set that the distance between the identified subjects becomes 1 meter. By doing this, the first image sensor 115 can be made to perform the reset process at the timing immediately after the throw, and exposure can be started immediately.
Although the distance between the plurality of subjects has been described here as an example of the condition on the positions of the plurality of subjects, the area in which the plurality of subjects overlap may be set as the condition instead.
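As a concrete illustration of the position condition checked in steps S1602 and S1603, the following Python sketch tests whether the distance between two identified subjects falls below a threshold, or whether their regions overlap. The data structure and all names are assumptions introduced here; distances are measured in pixels, whereas the 1-meter example above is an object-plane distance that would first need to be converted using the per-pixel subject size.

```python
from dataclasses import dataclass

@dataclass
class SubjectRegion:
    """Axis-aligned bounding box of an identified subject, in pixels."""
    x: float
    y: float
    width: float
    height: float

    @property
    def center(self):
        return (self.x + self.width / 2.0, self.y + self.height / 2.0)

def centers_are_close(a: SubjectRegion, b: SubjectRegion, distance_threshold: float) -> bool:
    """Condition: distance between the identified subjects <= threshold."""
    ax, ay = a.center
    bx, by = b.center
    return ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5 <= distance_threshold

def regions_overlap(a: SubjectRegion, b: SubjectRegion) -> bool:
    """Alternative condition: the two subject regions overlap."""
    return not (a.x + a.width < b.x or b.x + b.width < a.x or
                a.y + a.height < b.y or b.y + b.height < a.y)

# Hypothetical example: trigger the reset of the first image sensor when a
# detected racket and ball come within 50 pixels of each other.
racket = SubjectRegion(x=100, y=80, width=60, height=120)
ball = SubjectRegion(x=135, y=130, width=20, height=20)
if centers_are_close(racket, ball, distance_threshold=50.0):
    pass  # issue the reset/re-exposure instruction to the first imaging system
```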
The processes from step S328 onward are the same as in the first embodiment. If the timing generation circuit 200 determines that the representative accumulated amount is equal to or greater than the second threshold, the first control circuit 111 stops the exposure of the first image sensor 115 even if the exposure time of the first image sensor 115 has not reached the appropriate time.
As described above, in the third embodiment, based on the positions of the plurality of identified subjects, the exposure of the first image sensor 115 that is in progress is stopped, the reset process is performed, and the first image sensor 115 immediately resumes exposure. Therefore, it is possible to capture an image in accordance with the timing at which the plurality of subjects reach a predetermined composition.

Abstract

The present invention provides an imaging apparatus that captures an image by matching timing with the movement of an object. This imaging apparatus determines the movement of an object in image data of a plurality of frames by using the image data of the plurality of frames captured by a second imaging means while exposure for image data of a first frame is performed by a first imaging means, stops exposure for the image data of the first frame by the first imaging means on the basis of the determination result, and starts exposure for the image data of a second frame.

Description

Imaging apparatus, imaging apparatus control method, and program
The present invention relates to a technique for capturing an image timed to the movement of an object.
In recent years, imaging apparatuses such as cameras mounted on smartphones and digital cameras have been known to provide a shooting mode that gives priority to shutter speed. In this shooting mode, the photographer sets a desired shutter speed, and the imaging apparatus automatically sets the exposure settings other than the shutter speed, such as the aperture value and the ISO sensitivity. By using this shooting mode, the photographer can shoot at a preferred shutter speed. For example, by setting a shutter speed with a short exposure time, it is possible to capture an image with little subject blur even for a fast-moving subject such as the spray of a waterfall or a racing car. Japanese Patent Application Laid-Open No. 2006-197192 discloses an imaging apparatus that detects the amount of movement of a subject from images captured before a still image is captured and determines the shutter speed based on the detection result.
In order to capture an image with little subject blur, it is necessary to shoot at a high shutter speed with a short exposure time. However, even if a high shutter speed is set before shooting, the right moment to shoot may still be missed.
For example, when trying to capture a subject that moves irregularly and at high speed, such as the moment an animal starts to move or the moment a bird takes off, the photographer may not be able to press the shutter in time and may fail to capture the subject in the desired state at the start of its motion. Even if the shutter speed is set high, if the timing at which shooting starts is not appropriate, the desired motion of a subject that moves irregularly and at high speed cannot be captured.
One aspect of the present invention is an imaging apparatus comprising: first imaging means; second imaging means; determination means for determining the movement of a subject in image data of a plurality of frames captured by the second imaging means while the first imaging means performs exposure for image data of a first frame; and control means for stopping, based on the result determined by the determination means, the exposure for the image data of the first frame by the first imaging means and starting exposure for image data of a second frame, subsequent to the first frame, by the first imaging means.
Another aspect of the present invention is an imaging apparatus attachable to and detachable from an external imaging apparatus having first imaging means, the imaging apparatus comprising: second imaging means; determination means for determining the movement of a subject in image data of a plurality of frames captured by the second imaging means while the first imaging means performs exposure for image data of a first frame; and control means for stopping, based on the result determined by the determination means, the exposure for the image data of the first frame by the first imaging means and starting exposure for image data of a second frame, subsequent to the first frame, by the first imaging means.
FIG. 1A is a block diagram showing a configuration example of an imaging apparatus according to the first embodiment of the present invention.
FIG. 1B is a diagram showing a smartphone as an example of the imaging apparatus according to the first embodiment of the present invention.
FIG. 2 is a block diagram showing a configuration example of a timing generation circuit according to the first embodiment of the present invention.
FIG. 3 is a flowchart of imaging processing in the high-speed shooting mode of the first embodiment of the present invention.
FIG. 4 is a flowchart of imaging processing in the high-speed shooting mode of the first embodiment of the present invention.
FIG. 5 is a diagram for explaining the positional relationship between the imaging apparatus and a subject.
FIG. 6 is a diagram for explaining the operation of the first image sensor, the second image sensor, and the timing generation circuit of the first embodiment of the present invention.
FIG. 7 is a flowchart of the process of calculating motion vectors and their reliability by the motion vector calculation circuit of the first embodiment of the present invention.
FIG. 8A is a diagram showing image data of the M-th frame.
FIG. 8B is a diagram showing image data of the (M+1)-th frame.
FIG. 8C is a diagram showing motion vectors between the M-th frame and the (M+1)-th frame.
FIG. 9 is a diagram for explaining a method of calculating a motion vector by the block matching method.
FIG. 10 is a diagram for explaining the calculation method of three-point interpolation.
FIG. 11 is a diagram showing motion vectors across a plurality of frames.
FIG. 12 is a table showing the configurations of the first image sensor and the second image sensor.
FIG. 13 is a flowchart of imaging processing in the high-speed shooting mode of the second embodiment of the present invention.
FIG. 14 is a diagram for explaining the operation of the first image sensor, the second image sensor, and the timing generation circuit of the second embodiment of the present invention.
FIG. 15 is a block diagram showing a configuration example of a second timing generation circuit according to the third embodiment of the present invention.
FIG. 16 is a flowchart of imaging processing in the high-speed shooting mode of the third embodiment of the present invention.
Hereinafter, embodiments of the present invention will be described in detail with reference to the attached drawings. Here, a so-called digital camera is taken up as the imaging apparatus according to the embodiments of the present invention, but the present invention is not limited to this. The present invention may also be implemented as another apparatus having an imaging function, for example, a digital video camera, a mobile phone, a smartphone, or another portable electronic device.
(First embodiment)
In the first embodiment of the present invention, an imaging apparatus is described that captures an image timed to the movement of a subject by determining the timing at which exposure starts, based on the result of motion analysis using motion vectors obtained during the exposure period. The first embodiment of the present invention will be described below.
FIG. 1A is a block diagram showing a configuration example of the imaging apparatus 100 according to the first embodiment of the present invention. The imaging apparatus 100 includes a first imaging system 110, a second imaging system 120, and an operation member 130.
First, the first imaging system 110 will be described. The first control circuit 111 is a processor such as a CPU or an MPU. The first control circuit 111 reads the operation program of each block included in the first imaging system 110 from a first ROM 112 described later, loads it into a first RAM 113 described later, and executes it, thereby controlling the operation of each block included in the first imaging system 110. The first control circuit 111 also supervises and controls the overall operation of the first imaging system 110 and the second imaging system 120. The first ROM 112 is an electrically erasable and recordable nonvolatile memory, and stores, in addition to the operation program of each block included in the first imaging system 110, parameters and the like necessary for the operation of each block. The first RAM 113 is a rewritable volatile memory and is used as a temporary storage area for data output in the operation of each block included in the first imaging system 110.
The first optical system 114 is composed of a lens group including a zoom lens and a focus lens, and forms a subject image on a first image sensor 115 described later. The first image sensor 115 is composed of, for example, a CCD or CMOS sensor provided with color filters of a plurality of colors. The first image sensor 115 photoelectrically converts the optical image formed on it by the first optical system 114 and outputs the obtained analog image signal to a first A/D conversion circuit 116. The first image sensor 115 starts exposure based on the timing at which the shutter button included in the operation member 130 is fully pressed, and ends exposure based on a signal indicating the exposure end timing output from a timing generation circuit 200 described later. The first A/D conversion circuit 116 converts the input analog image signal into a digital image signal and outputs the obtained digital image data to the first RAM 113.
The first image processing circuit 117 applies various image processing to the image data stored in the first RAM 113, such as white balance adjustment, color interpolation, noise correction, gamma processing, conversion to luminance/color-difference signals, and aberration correction. The image output circuit 118 is a circuit for receiving the image data processed by the first image processing circuit 117 via the first RAM 113 and outputting it to an external device. Specifically, it reads and writes image data from and to a recording medium removable from the imaging apparatus 100, and transmits and receives images to and from a smartphone, a server, or the like using a wireless or wired communication function. The display device 119 is a display device such as an LCD or an organic EL display, and displays images recorded in the first RAM 113.
Next, the second imaging system 120 will be described. The second control circuit 121 is a processor such as a CPU or an MPU. The second control circuit 121 reads the operation program of each block included in the second imaging system 120 from a second ROM 122 described later, loads it into a second RAM 123 described later, and executes it, thereby controlling the operation of each block included in the second imaging system 120. The second ROM 122 is an electrically erasable and recordable nonvolatile memory, and stores, in addition to the operation program of each block included in the second imaging system 120, parameters and the like necessary for the operation of each block. The second RAM 123 is a rewritable volatile memory and is used as a temporary storage area for data output in the operation of each block included in the second imaging system 120.
The second optical system 124 is composed of a lens group including a zoom lens and a focus lens, and forms a subject image on a second image sensor 125 described later. The second image sensor 125 is an image sensor such as a CCD or CMOS sensor; it photoelectrically converts the optical image formed on it by the second optical system 124 and outputs the obtained analog image signal to a second A/D conversion circuit 126. Since the second image sensor 125 is used to detect movement and blur, it does not necessarily have to include color filters of a plurality of colors, and may instead be provided with a monochrome (white) filter or an infrared filter. The second A/D conversion circuit 126 converts the input analog image signal into a digital image signal and outputs the obtained digital image data to the second RAM 123.
The second image processing circuit 127 applies various image processing, such as simple noise correction and gamma processing, to the image data stored in the second RAM 123. If the second image sensor 125 includes color filters of a plurality of colors, color interpolation or conversion to a luminance signal is also performed. The second image processing circuit 127 also includes the timing generation circuit 200, which generates a signal indicating the exposure end timing of the first image sensor 115 based on the result of motion analysis using the image data stored in the second RAM 123. The signal indicating the exposure end timing is output to the first imaging system 110 via the second control circuit 121. When the first imaging system 110 receives this signal, the first control circuit 111 controls the first image sensor 115 so as to end its exposure.
The operation member 130 receives instructions from the user and includes a shutter button and dial keys. The display device 119 may also have a touch panel function. Signals generated when the user operates these operation members are reflected in the drive control of the first imaging system 110 and the second imaging system 120.
Although an example has been described here in which the first imaging system 110 and the second imaging system 120 are integrally configured as the imaging apparatus 100, the present invention is not limited to this. For example, the first imaging system 110 and the operation member 130 may form a camera body, and the second imaging system 120 may be an imaging apparatus attachable to and detachable from the camera body. That is, the second imaging system 120 may be an imaging apparatus attachable to and detachable from an external imaging apparatus. If the first imaging system 110 is a single-lens reflex camera, the interchangeable lens unit including the first optical system 114 is attachable to and detachable from a camera body that includes the first image sensor 115 through the display device 119 and the operation member 130. FIG. 1B is a diagram showing a smartphone (or tablet terminal) as an example of the imaging apparatus 100. A touch panel serving both as the display device 119 and the operation member 130 is provided on the front of the smartphone, and the first optical system 114 of the first imaging system 110 and the second optical system 124 of the second imaging system 120 are arranged on the back of the smartphone. The present invention can also be implemented in such a smartphone.
If the first control circuit 111 also serves the functions of the second control circuit 121, the second control circuit 121 can be omitted. Alternatively, the second imaging system 120 may include only the second optical system 124, the second image sensor 125, the second A/D conversion circuit 126, and the second RAM 123, with the other components shared with the first imaging system 110. In this way, when the second imaging system 120 is a separate camera device, its configuration can be simplified.
Whereas the purpose of the first image sensor 115 is to generate an image for recording, the purpose of the second image sensor 125 is to detect the movement of a quickly moving subject, so the frame rates required of the two sensors differ. FIG. 12 shows a table comparing the configurations of the first image sensor 115 and the second image sensor 125 in this embodiment. In this embodiment, the frame rate of the first image sensor 115 is 20 fps (frames per second), whereas the frame rate of the second image sensor 125 is 1000 fps.
Therefore, the second image sensor 125 can be set to a shutter speed with a shorter exposure time than the first image sensor 115. To make this shutter speed feasible, the second image sensor 125 needs a higher sensitivity than the first image sensor 115. The second image sensor 125 is therefore configured with a larger pixel pitch than the first image sensor 115, at the cost of a smaller number of pixels. Specifically, as shown in FIG. 12, the horizontal size of the imaging area is 36 mm for the first image sensor 115 and 4 mm for the second image sensor 125; the number of horizontal pixels is 6400 for the first image sensor 115 and 640 for the second image sensor 125; and the pixel pitch is 5.62 μm for the first image sensor 115 and 6.25 μm for the second image sensor 125.
Next, the configuration of the timing generation circuit 200 included in the second image processing circuit 127 of the second imaging system 120 will be described with reference to FIG. 2. The timing generation circuit 200 analyzes motion by detecting motion vectors in the image data captured by the second image sensor 125 at the high frame rate and stored in the second RAM 123. If the second image sensor 125 includes color filters of a plurality of colors, color interpolation or conversion to a luminance signal has already been performed on this image data, so that every pixel has a signal of the same component. Based on the result of this motion analysis, the timing generation circuit 200 determines the exposure end timing of the first image sensor 115 and outputs a signal for ending the exposure of the first image sensor 115 to the first imaging system 110.
FIG. 2 is a block diagram showing a configuration example of the timing generation circuit 200 according to the first embodiment. In FIG. 2, the timing generation circuit 200 is composed of a motion vector calculation circuit 201, an accumulated amount calculation circuit 202, a representative accumulated amount calculation circuit 203, and a timing determination circuit 204.
Next, imaging processing in the high-speed shooting mode of the imaging apparatus 100 according to the first embodiment of the present invention will be described with reference to the flowcharts of FIGS. 3 and 4. FIGS. 3 and 4 are flowcharts of the imaging processing in the high-speed shooting mode of the first embodiment. The flowchart of FIG. 3 starts when the power of the imaging apparatus 100 is turned on.
In step S301, the first control circuit 111 determines whether a shooting mode is set. If it is not set, the process proceeds to step S302; if it is set, the process proceeds to step S305.
In step S302, the first control circuit 111 determines whether the setting menu for the blur level or the movement-start level is selected. If another process is selected, the process proceeds to step S303, where the other process is performed. If the setting menu for the blur level or the movement-start level is selected, the first control circuit 111 proceeds to step S304.
In step S304, the first control circuit 111 displays a screen for setting the blur level or the movement-start level on the display device 119, and sets one of the levels according to the result of the user's operation of the operation member 130. For example, the display device 119 displays blur levels in steps from "standard" to "low" so that the user can select one. A second threshold, described later, is set such that the closer to "low" the blur level selected by the user, the smaller the blur included in the captured image. Similarly, for the movement-start level, stepwise levels are displayed on the display device 119 so that the user can select one. Since the criterion for judging that the subject has started to move depends in part on the user's own idea of the motion, the movement-start level is indicated by a numerical value and can be selected over a wide range.
When the movement-start level is selected, the first control circuit 111 determines the determination value of the amount of movement in the first imaging system 110, and the second control circuit 121 sets, based on this movement amount determination value, a first threshold used in step S322 described later. When the blur level is selected, the first control circuit 111 determines the blur allowance for the first imaging system 110, and the second control circuit 121 sets, based on this blur allowance, a second threshold used in step S331 described later.
Here, as an example, the description assumes that the user has set the blur level smaller than the movement-start level, and that the user has selected the blur level "low", which gives the smallest blur.
When the blur level is "low", the blur allowance is set to the permissible circle of confusion diameter. Here, the permissible circle of confusion diameter represents the limit that an observer with a visual acuity of 1.0 can resolve when viewing a photograph at the distance of distinct vision of 250 mm, and is about 20 μm on a 36 × 24 mm image sensor. In the first embodiment of the present invention, the pitch of four pixels of the first image sensor 115, 22.48 μm (5.62 × 4), is used as the permissible circle of confusion diameter. When the setting of the blur level and the second threshold is finished, the process returns to step S301.
In step S305, the first control circuit 111 activates the first image sensor 115.
In step S306, the first control circuit 111 determines whether the high-speed shooting mode is selected as the shooting mode. If it is not selected, the process proceeds to step S307, where processing for the other shooting modes is performed. If the high-speed shooting mode is selected, the first control circuit 111 proceeds to step S308.
In step S308, the first control circuit 111 drives the first optical system 114 to perform automatic focus control (AF) based on the contrast value of the subject obtained from the first image sensor 115 or on the output of a distance measuring sensor (not shown).
In step S309, the first control circuit 111 performs automatic exposure control (AE) for the first image sensor 115 based on the luminance value of the subject obtained from the first image sensor 115.
In step S310, the first control circuit 111 determines whether SW1 in the shutter switch has been turned on by the shutter switch included in the operation member 130 being pressed halfway, and repeats steps S308 and S309 until SW1 is turned on.
When SW1 is turned on in step S310, the second control circuit 121 activates the second image sensor 125 in step S311.
In step S312, the first control circuit 111 performs AF using the first optical system 114 on the main subject selected when SW1 was turned on.
In step S313, the first control circuit 111 performs AE for the first image sensor 115 on the main subject selected when SW1 was turned on.
In step S314, the second control circuit 121 receives zoom information of the first optical system 114 from the first control circuit 111 and controls the zoom state of the second optical system 124. The control of the zoom state of the second optical system 124 will be described with reference to FIG. 5.
FIG. 5 is a diagram for explaining the positional relationship between the imaging apparatus 100 and a subject 500 when SW1 is turned on. In FIG. 5, the first optical system 114 of the imaging apparatus 100 has a focal length of 300 mm, and an attempt is being made to capture the subject 500, which is 40 m away and moving at 0.3 m/sec (300 mm/sec). The subject 500 is assumed to move in the vicinity of the optical axis of the first optical system 114. In the following description, the plane 40 m away is called the object plane. The moving speed of the subject 500 can be measured from the distance information to the subject 500 and by calculating a motion vector, described later, from images obtained during framing.
The imaging magnification of the first optical system 114 in this embodiment is obtained as the distance to the subject divided by the focal length, and is therefore 40 × 1000 ÷ 300 = 133.3.
The width on the object plane captured by the entire first image sensor 115 is 133.3 × 5.62 × 6400 ÷ 1000 = 4795.7 mm.
Here, assume that before SW1 is turned on, the images obtained by the first imaging system 110 and the second imaging system 120 have the same angle of view. In this case, the imaging magnification of the second optical system 124 is 4795.7 × 1000 ÷ 6.25 ÷ 640 = 1198.9, and its focal length is 40 × 1000 ÷ 1198.9 = 33.3 mm. The subject size on the object plane per unit pixel of the second image sensor 125 is then 1198.9 × 6.25 ÷ 1000 = 7.5 mm. Multiplying this value by the resolution of the motion vector calculation described later gives the motion resolution that the second image sensor 125 can capture. If the resolution of the motion vector calculation is 0.5 pixels, the motion resolution is 7.5 × 0.5 = 3.75 mm.
On the other hand, the subject size on the object plane per unit pixel of the first image sensor 115 is 133.3 × 5.62 ÷ 1000 = 0.75 mm, and since the blur allowance is four pixels, it corresponds to 0.75 × 4 = 3.0 mm. As things stand, the blur allowance is therefore smaller than the motion resolution of the second image sensor 125, so even if the second image sensor 125 is used, it cannot be determined whether the blur of the first image sensor 115 is within the allowance.
Therefore, the second control circuit 121 moves the focal length of the second optical system 124 toward the telephoto side to zoom in and increase the resolution of motion detection by the second image sensor 125.
The time it takes for the subject 500, moving at 300 mm/sec, to reach the blur allowance of 3.0 mm is 3.0 ÷ 300 × 1000 = 10.0 msec.
Therefore, the motion resolution required per unit frame (1 msec) of the second imaging system 120 is 3.0 ÷ 10.0 ÷ 0.5 = 0.6 mm.
Accordingly, if the second control circuit 121 changes the imaging magnification of the second optical system 124 to 0.6 × 1000 ÷ 6.25 = 96.0 and the focal length to 40 × 1000 ÷ 96.0 = 416.6 mm, the motion resolution becomes finer than the blur allowance. In this way, the exposure end timing of the first image sensor 115 can be controlled based on the motion detection result obtained from the image data of the second image sensor 125, and the subject 500 can be captured with blur no larger than the permissible circle of confusion diameter.
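The zoom calculation in the preceding paragraphs can be reproduced numerically. The following Python sketch simply re-derives the values quoted in the text (133.3x, 3.75 mm, 96.0x, 416.6 mm) from the sensor and scene parameters; the variable names are introduced here for illustration only.

```python
# Units: millimetres unless noted. The numbers reproduce the worked example
# in the text (40 m distance, 300 mm/s subject, 0.5-pixel vector resolution).
subject_distance = 40_000.0          # 40 m to the object plane
subject_speed = 300.0                # mm/s on the object plane
vector_resolution_px = 0.5           # sub-pixel resolution of the motion vectors

# First imaging system (records the still image).
f1_focal_length = 300.0
f1_pixel_pitch = 0.00562             # 5.62 um
f1_horizontal_pixels = 6400
blur_allowance_px = 4                # permissible blur: 4 pixels of the first sensor

m1 = subject_distance / f1_focal_length                   # ~133.3x
field_width = m1 * f1_pixel_pitch * f1_horizontal_pixels  # ~4795.7 mm on the object plane
blur_allowance = m1 * f1_pixel_pitch * blur_allowance_px  # ~3.0 mm on the object plane

# Second imaging system (1000 fps motion detection), initially framed to the
# same field of view as the first system.
f2_pixel_pitch = 0.00625             # 6.25 um
f2_horizontal_pixels = 640
m2_initial = field_width / (f2_pixel_pitch * f2_horizontal_pixels)      # ~1198.9x
motion_resolution = m2_initial * f2_pixel_pitch * vector_resolution_px  # ~3.75 mm

# 3.75 mm > 3.0 mm, so the second system cannot yet judge the blur limit.
time_to_blur_limit_s = blur_allowance / subject_speed     # ~0.010 s
frames_available = time_to_blur_limit_s * 1000            # ~10 frames at 1000 fps
required_pixel_size = blur_allowance / frames_available / vector_resolution_px  # ~0.6 mm

m2_target = required_pixel_size / f2_pixel_pitch           # ~96.0x
f2_target_focal_length = subject_distance / m2_target      # ~416.6 mm
print(round(m2_target, 1), round(f2_target_focal_length, 1))
```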
Since the second optical system 124 has been zoomed to the telephoto side, the angle of view of the second image sensor 125 differs from that of the first image sensor 115, and the width captured by the second image sensor 125 on the object plane is 96.0 × 6.25 × 640 ÷ 1000 = 384.0 mm. Increasing the focal length and moving the zoom position to the telephoto side in this way narrows the angle of view, so if the subject is not near the optical axis, it may fall outside the field of view. In that case, it is preferable to adopt a configuration in which the field of view can be moved to a region off the optical axis by using a known technique for shifting the optical axis or the position of the image sensor.
Although an example has been described here in which the focal length of the second optical system 124 is changed based on the blur allowance, if the movement amount determination value is smaller than the blur allowance, the focal length of the second optical system 124 is changed based on the movement amount determination value instead.
Returning to FIG. 3, in step S315 the second control circuit 121 performs AF using the second optical system 124 based on the information of the main subject selected when SW1 was turned on.
In step S316, the second control circuit 121 performs AE for the second image sensor 125 based on the information of the main subject selected when SW1 was turned on.
In step S317, the first control circuit 111 determines whether SW2 in the shutter switch has been turned on by the shutter switch included in the operation member 130 being fully pressed, and repeats steps S312 to S316 until SW2 is turned on.
When SW2 is turned on in step S317, the first control circuit 111, in step S318 of FIG. 4, sets the exposure period based on the result of the AE performed in step S313 and starts the exposure of the first image sensor 115.
In step S319, the second control circuit 121 sets a frame rate of 1000 fps, or a frame rate that is a predetermined multiple (for example, 50 times) of the frame rate set for the first image sensor 115, and starts the exposure of the second image sensor 125. Each time the second image sensor 125 reaches the exposure time corresponding to the set frame rate, it outputs the obtained analog image signal to the second A/D conversion circuit 126 and immediately starts the next exposure, and repeats this process. That is, during a single exposure period of the first image sensor 115, the exposure of the second image sensor 125 is repeated at a faster frame rate.
FIG. 6 is a diagram for explaining the operation of the first image sensor 115, the second image sensor 125, and the timing generation circuit 200 in the first embodiment. At time T0 in FIG. 6, when the user fully presses the shutter button and SW2 is turned on, the first image sensor 115 of the first imaging system 110 immediately starts exposure. In addition, the second image sensor 125 of the second imaging system 120 starts capturing images at the high frame rate. After time T0 at which SW2 is turned on, the second image sensor 125 continues to capture images with a short exposure time at times T1, T2, T3, and so on.
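Putting the two imaging systems together, the behaviour during the main exposure can be pictured as the schematic loop below. This is only a sketch: every method called on the three objects is a placeholder introduced here, and the handling of the accumulation after the reset is an assumption, not something the text specifies.

```python
def run_high_speed_shooting(first_sensor, second_sensor, timing_generator,
                            first_threshold, second_threshold):
    """Schematic control loop for the high-speed shooting mode.

    While the first (recording) sensor is exposing, the second sensor keeps
    capturing frames at a much higher frame rate (e.g. 1000 fps); the timing
    generator analyses the motion between those frames and decides when to
    restart and when to end the exposure of the first sensor.
    """
    first_sensor.start_exposure()              # time T0, SW2 turned on
    previous = second_sensor.capture_frame()
    movement_started = False

    while first_sensor.is_exposing():
        frame = second_sensor.capture_frame()             # T1, T2, T3, ...
        timing_generator.add_frame_pair(previous, frame)  # motion vectors and accumulation
        previous = frame
        accumulation = timing_generator.representative_accumulation()

        if not movement_started and accumulation >= first_threshold:
            # Movement start detected: discard what has been exposed so far
            # and re-expose so the recorded frame begins at the start of the motion.
            first_sensor.reset()
            first_sensor.start_exposure()
            timing_generator.restart_accumulation()  # assumption of this sketch
            movement_started = True
        elif movement_started and accumulation >= second_threshold:
            # The accumulated motion has reached the blur allowance: end the
            # exposure before the subject blur exceeds the permissible level.
            first_sensor.stop_exposure()
```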
In step S320, the motion vector calculation circuit 201 in the timing generation circuit 200 calculates motion vectors between frames of the image data obtained by the second image sensor 125, and the reliability of those motion vectors. A motion vector expresses, as a vector, the horizontal and vertical movement amounts of the subject between frames. The method of calculating the motion vectors will be described in detail with reference to FIGS. 7 to 9.
FIG. 7 is a flowchart showing the process by which the motion vector calculation circuit 201 calculates motion vectors and their reliability. FIG. 8 is a diagram for explaining the motion vector calculation method: FIG. 8A shows the image data of the M-th frame, FIG. 8B shows the image data of the (M+1)-th frame, and FIG. 8C shows the motion vectors between the M-th frame and the (M+1)-th frame. For simplicity, FIG. 8C shows only representative motion vectors. M is a positive integer. FIG. 9 is a diagram for explaining the method of calculating a motion vector by the block matching method. In this embodiment, the block matching method is described as an example of the motion vector calculation method, but the calculation method is not limited to this example and may be, for example, a gradient method.
In step 701 of FIG. 7, image data of two temporally adjacent frames is input to the motion vector calculation circuit 201. The motion vector calculation circuit 201 then sets the M-th frame as the base frame and the (M+1)-th frame as the reference frame.
In step 702 of FIG. 7, the motion vector calculation circuit 201 places a base block 902 of N × N pixels in the base frame 901, as shown in FIG. 9.
In step 703 of FIG. 7, the motion vector calculation circuit 201 sets, in the reference frame 903, a search range 905 of (N + n) × (N + n) pixels around the coordinate 904 corresponding to the center coordinate of the base block 902 of the base frame 901, as shown in FIG. 9.
In step 704 of FIG. 7, the motion vector calculation circuit 201 performs a correlation operation between the base block 902 of the base frame 901 and reference blocks 906 of N × N pixels at different coordinates within the search range 905 of the reference frame 903, and calculates correlation values. The correlation value is calculated based on the sum of the absolute differences between the frames for the pixels of the base block 902 and the reference block 906. That is, the coordinate at which the sum of absolute differences between frames is smallest is the coordinate with the highest correlation. The method of calculating the correlation value is not limited to the sum of absolute differences between frames; for example, a correlation value based on the sum of squared differences between frames or on a normalized cross-correlation value may be used. In the example of FIG. 9, the reference block 906 is assumed to show the highest correlation. The motion vector can be calculated in sub-pixel units by using a known technique. Specifically, for the continuous correlation value data C(k) shown in FIG. 10, the three-point interpolation method given by the following equations (1) to (4) is used.
x = k + D ÷ SLOP ... (1)
C(x) = C(k) - |D| ... (2)
D = {C(k-1) - C(k+1)} ÷ 2 ... (3)
SLOP = MAX{C(k+1) - C(k), C(k-1) - C(k)} ... (4)
Note that in FIG. 10, k = 2.
In the first embodiment of the present invention, the sub-pixel resolution is 0.5 pixels. Equation (1) is for the x component, but the y component can be calculated in the same way.
In step 705 of FIG. 7, the motion vector calculation circuit 201 calculates the motion vector based on the coordinates of the reference block showing the highest correlation value obtained in step 704, and uses that correlation value as the reliability of the motion vector. In the example of FIG. 9, the motion vector is obtained, within the search range 905 of the reference frame 903, from the coordinate 904 corresponding to the center coordinate of the base block 902 of the base frame 901 and the center coordinate of the reference block 906. That is, the distance and direction from the coordinate 904 to the center coordinate of the reference block 906 are obtained as the motion vector. The correlation value resulting from the correlation operation with the reference block 906 at the time of calculating the motion vector is obtained as the reliability of the motion vector. The higher the correlation between the base block and the reference block, the higher the reliability of the motion vector.
In step 706 of FIG. 7, the motion vector calculation circuit 201 determines whether motion vectors have been calculated for all pixels of the base frame 901. If it determines in step 706 that motion vectors have not been calculated for all pixels, the process returns to step 702. In step 702, a base block 902 of N × N pixels is then placed in the base frame 901, centered on a pixel for which a motion vector has not yet been calculated, and the processing of steps 703 to 705 is performed as described above. That is, the motion vector calculation circuit 201 repeats the processing from step 702 to step 705 while moving the base block 902 of FIG. 9, and calculates the motion vectors of all pixels of the base frame 901. An example of these motion vectors is shown in FIG. 8C. The example of FIG. 8 shows a person moving from left to right between the M-th frame of FIG. 8A and the (M+1)-th frame of FIG. 8B. FIG. 8C shows representative motion vectors for the case where the subject is moving in this way. In the motion vectors shown in FIG. 8C, the subject position in the M-th frame is the start point of each motion vector, and the corresponding subject position in the (M+1)-th frame is the end point. The motion vector calculation circuit 201 may also calculate motion vectors not for all pixels but for a predetermined number of pixels smaller than the total number of pixels.
Through the above processing, the motion vectors between two temporally adjacent high-speed captured frames and the reliability of those motion vectors are calculated.
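For reference, the block-matching search and the three-point interpolation of equations (1) to (4) can be written compactly as follows. This is a minimal NumPy sketch under the assumptions stated in the comments: the function names are introduced here, the search is a brute-force SAD search rather than an optimized implementation, and the block center is assumed to lie far enough from the image borders.

```python
import numpy as np

def sad(block_a, block_b):
    """Sum of absolute differences: smaller means higher correlation."""
    return float(np.abs(block_a.astype(np.int32) - block_b.astype(np.int32)).sum())

def subpixel_offset(c_minus, c_k, c_plus):
    """Three-point interpolation of equations (1)-(4) along one axis.

    c_minus, c_k and c_plus are the correlation (SAD) values at k-1, k and
    k+1, where k is the integer position with the minimum SAD. Returns the
    fractional shift D / SLOP to add to k.
    """
    d = (c_minus - c_plus) / 2.0             # equation (3)
    slop = max(c_plus - c_k, c_minus - c_k)  # equation (4)
    return d / slop if slop > 0 else 0.0     # fractional part of equation (1)

def block_matching_vector(base, ref, cx, cy, block=8, search=4):
    """Motion vector at (cx, cy) from the base frame to the reference frame.

    base, ref: 2-D grayscale arrays; (cx, cy): centre of the N x N base
    block; block: N (even); search: half-width of the (N+n) x (N+n) range.
    """
    half = block // 2
    template = base[cy - half:cy + half, cx - half:cx + half]
    costs = np.full((2 * search + 1, 2 * search + 1), np.inf)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y0, x0 = cy + dy - half, cx + dx - half
            if y0 < 0 or x0 < 0:
                continue
            candidate = ref[y0:y0 + block, x0:x0 + block]
            if candidate.shape == template.shape:
                costs[dy + search, dx + search] = sad(template, candidate)
    iy, ix = np.unravel_index(int(np.argmin(costs)), costs.shape)
    vy, vx = float(iy - search), float(ix - search)
    # Refine to sub-pixel precision where neighbouring costs are available.
    if 0 < ix < 2 * search and np.isfinite(costs[iy, ix - 1]) and np.isfinite(costs[iy, ix + 1]):
        vx += subpixel_offset(costs[iy, ix - 1], costs[iy, ix], costs[iy, ix + 1])
    if 0 < iy < 2 * search and np.isfinite(costs[iy - 1, ix]) and np.isfinite(costs[iy + 1, ix]):
        vy += subpixel_offset(costs[iy - 1, ix], costs[iy, ix], costs[iy + 1, ix])
    reliability = -costs[iy, ix]  # smaller SAD (higher correlation) gives larger reliability
    return (vx, vy), reliability
```

Multiplying the returned vector length (in pixels) by the per-pixel subject size on the object plane and the frame rate gives one way to estimate the object-plane speed used in the zoom calculation above.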
The moving speed of the subject may change. Therefore, it is preferable to convert the magnitude of the motion vector between two temporally adjacent frames into the moving speed on the object plane and, using the calculation method described above, appropriately change the focal length, imaging magnification, and angle of view of the second optical system during the exposure of the first image sensor 115.
Next, the time-series operation in which the motion vector calculation circuit 201 calculates motion vectors and their reliability for the image data obtained from the second image sensor 125 will be described with reference to FIG. 6.
At time T1, the motion vector calculation circuit 201 calculates the motion vectors between the frames of image data obtained at times T0 and T1, and their reliability, based on the processing of the flowchart of FIG. 7 described above. Then, at time T2, it calculates the motion vectors between the frames of image data obtained at times T1 and T2, and their reliability. From time T3 onward, the same processing is repeated to calculate the motion vectors between frames of the image data obtained from the second image sensor 125 and their reliability.
The above is the description of the motion vector calculation method in step S320 of FIG. 4.
Returning to FIG. 4, in step S321 the accumulated amount calculation circuit 202 tracks the motion vectors calculated in step S320 across a plurality of frames and calculates the accumulated amount of the motion vectors. The representative accumulated amount calculation circuit 203 then determines a representative accumulated amount representing the entire frame based on the calculated accumulated amounts of the motion vectors.
First, the method of calculating the accumulated amount of the motion vectors will be described with reference to FIG. 11. FIG. 11 is a diagram showing the motion vectors between the plurality of frames calculated in step S320. For simplicity, the method of calculating the accumulated amount of the motion vectors in the period from time T0 to time T3 is described, but the accumulated amount is calculated in the same way for subsequent periods.
In FIG. 11, a motion vector 1101 is the motion vector calculated between the frame at time T0 and the frame at time T1 in FIG. 6. A motion vector 1102 is the motion vector calculated between the frame at time T1 and the frame at time T2 in FIG. 6. A motion vector 1103 is the motion vector calculated between the frame at time T2 and the frame at time T3 in FIG. 6.
The accumulation amount calculation circuit 202 searches the motion vectors calculated between the frames at times T1 and T2 for a motion vector whose start point coordinate coincides with the end point coordinate Q of the motion vector 1101 calculated between the frames at times T0 and T1, and links the motion vector 1102 satisfying this condition to the motion vector 1101. Likewise, it searches the motion vectors calculated between the frames at times T2 and T3 for a motion vector whose start point coordinate coincides with the end point coordinate R of the motion vector 1102 calculated between the frames at times T1 and T2, and links the motion vector 1103 satisfying this condition to the motion vector 1102. Motion vectors in subsequent periods are linked by the same processing.
By performing this linking of motion vectors over a plurality of frames for every motion vector calculated at time T0, tracking motion vectors are obtained for all pixels. The calculated tracking motion vector indicates that the subject located at coordinate P at time T0 moved to coordinate Q at time T1, to coordinate R at time T2, and to coordinate S at time T3.
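The end-point-to-start-point linking just described can be illustrated with a short sketch. This is an illustration only; the per-interval dictionaries keyed by pixel coordinates are an assumed data layout, not the circuit's internal representation.

```python
def link_tracking_vectors(per_frame_vectors):
    """per_frame_vectors: list of dicts, one per frame interval, each
    mapping a start coordinate (y, x) to an end coordinate (y, x).
    Returns one tracked path per start coordinate of the first interval."""
    tracks = []
    for start, end in per_frame_vectors[0].items():
        path = [start, end]
        for later in per_frame_vectors[1:]:
            nxt = later.get(path[-1])  # vector whose start point equals the previous end point
            if nxt is None:
                break  # the chain cannot be extended for this pixel
            path.append(nxt)
        tracks.append(path)
    return tracks

# Example reproducing the P -> Q -> R -> S chain of FIG. 11
vec_T0_T1 = {(10, 10): (10, 14)}   # P -> Q
vec_T1_T2 = {(10, 14): (10, 19)}   # Q -> R
vec_T2_T3 = {(10, 19): (10, 25)}   # R -> S
print(link_tracking_vectors([vec_T0_T1, vec_T1_T2, vec_T2_T3]))
# [[(10, 10), (10, 14), (10, 19), (10, 25)]]
```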
Next, the method by which the accumulation amount calculation circuit 202 calculates the accumulation amount of the motion vectors based on the tracking motion vector will be described.
The accumulation amount calculation circuit 202 calculates the length of the tracking motion vector as the accumulation amount of the motion vectors (VecLen), as shown in expression (5).
VecLen = VecLen1 + VecLen2 + VecLen3 … (5)
Here, VecLen1 is the length of the motion vector 1101 calculated between the frames at times T0 and T1, VecLen2 is the length of the motion vector 1102 calculated between the frames at times T1 and T2, and VecLen3 is the length of the motion vector 1103 calculated between the frames at times T2 and T3. Based on expression (5), the accumulation amount calculation circuit 202 calculates the sum of the lengths of the motion vectors constituting the tracking motion vector as the accumulation amount of the motion vectors. This calculation is performed for the tracking motion vectors of all pixels, yielding an accumulation amount for every pixel.
Note that the accumulation amount calculation circuit 202 may exclude motion vectors whose reliability is lower than a predetermined value from the summation of motion vector lengths in expression (5). It may also exclude such low-reliability motion vectors together with all temporally subsequent motion vectors from the summation. In this way, the accumulation amount can be calculated using only motion vectors of high reliability. Alternatively, each motion vector may be separated into an X-direction component and a Y-direction component, and the sum of the motion vector lengths may be obtained for each direction.
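Expression (5), together with the stricter variant that drops a low-reliability motion vector and everything after it, can be illustrated as follows; the list-based track representation and the reliability values are assumptions for the example.

```python
import math

def accumulation_amount(track, reliabilities=None, min_reliability=None):
    """track: list of coordinates [(y0, x0), (y1, x1), ...] forming one
    tracking motion vector.  reliabilities (optional): one value per
    segment; segments below min_reliability, and everything after the
    first such segment, are excluded from the sum."""
    total = 0.0
    for i in range(len(track) - 1):
        if reliabilities is not None and min_reliability is not None:
            if reliabilities[i] < min_reliability:
                break  # drop this segment and all later ones
        (y0, x0), (y1, x1) = track[i], track[i + 1]
        total += math.hypot(y1 - y0, x1 - x0)  # VecLen_i
    return total

track = [(10, 10), (10, 14), (10, 19), (10, 25)]
print(accumulation_amount(track))                        # 4 + 5 + 6 = 15.0
print(accumulation_amount(track, [0.9, 0.2, 0.9], 0.5))  # only the first segment: 4.0
```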
Next, the method of calculating the representative accumulation amount will be described. The representative accumulation amount calculation circuit 203 selects the maximum of the accumulation amounts obtained for all pixels in the frame, and determines this maximum accumulation amount as the representative accumulation amount. By performing this processing for every frame, one representative accumulation amount is calculated per frame, as shown in FIG. 6.
Note that the representative accumulation amount determined by the representative accumulation amount calculation circuit 203 is not limited to one based on the maximum of the accumulation amounts of all pixels in the frame; it may instead be their average or median. When the accumulation amounts are separated into X-direction and Y-direction components, a representative accumulation amount may be determined for each direction.
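A sketch of the selection of the representative accumulation amount, covering the maximum as well as the mean and median variants mentioned above; the mode argument is an illustrative name, not taken from the disclosure.

```python
import statistics

def representative_accumulation(per_pixel_amounts, mode="max"):
    """Collapse the per-pixel accumulation amounts of one frame into a
    single representative value: maximum by default, or mean/median."""
    if mode == "max":
        return max(per_pixel_amounts)
    if mode == "mean":
        return statistics.fmean(per_pixel_amounts)
    if mode == "median":
        return statistics.median(per_pixel_amounts)
    raise ValueError(f"unknown mode: {mode}")

amounts = [1.5, 3.0, 12.5, 0.0, 7.25]
print(representative_accumulation(amounts))            # 12.5
print(representative_accumulation(amounts, "median"))  # 3.0
```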
Returning to FIG. 4, in step S322 the timing determination circuit 204 determines whether the representative accumulation amount has become equal to or greater than a first threshold. If it has not, the circuit determines that the subject has not yet started to move, and the process proceeds to step S323. As described above, the first threshold is a threshold for detecting the timing at which the subject starts to move, set based on the movement-start level set in step S304, and it is set independently of the second threshold set in step S304. It can be set arbitrarily according to the motion of the subject to be photographed; for a subject that starts moving quickly, such as an insect like a dragonfly, it is desirable to set a smaller first threshold than for a subject that starts moving slowly, such as a person. The first threshold may also be set automatically according to the shooting mode, for example a value assuming human motion in a portrait shooting mode and a value assuming insect motion in a macro shooting mode. Since both the first and second thresholds can be set by the user to arbitrary values within predetermined ranges, either one may end up larger than the other depending on the values set.
In step S323, the first control circuit 111 of the first imaging system 110 determines whether the exposure time of the first image sensor 115 has reached the exposure time set based on the AE performed in step S313; if it has not, the process returns to step S322. If the exposure time of the first image sensor has reached the exposure time set based on that AE, the process proceeds to step S324.
In step S324, the first control circuit 111 stops the exposure of the first image sensor 115, and the analog image signal generated by the first image sensor 115 is subjected to predetermined processing by the first A/D conversion circuit 116 and the first image processing circuit 117. The image data processed by the first image processing circuit 117 is then transmitted to the display device 119 and used as image data for live view.
In step S325, the first control circuit 111 stops the exposure of the second image sensor 125 via the second control circuit 121.
In step S326, the second control circuit 121 resets the accumulation amounts and the representative accumulation amount calculated by the accumulation amount calculation circuit 202 and the representative accumulation amount calculation circuit 203, and the process returns to step S318.
If, in step S322, the representative accumulation amount is equal to or greater than the first threshold, the timing determination circuit 204 can determine that the subject has started to move, and the process proceeds to step S327.
In step S327, the timing determination circuit 204 outputs a signal instructing the first imaging system 110 to perform a reset process. This is done as soon as it is determined that the representative accumulation amount has become equal to or greater than the first threshold. In the example shown in FIG. 6, the representative accumulation amount based on the motion vectors calculated between the frames up to time T3 is equal to or greater than the first threshold, so at this point the timing determination circuit 204 outputs, via the second control circuit 121, the signal instructing the first imaging system 110 to perform the reset process. When representative accumulation amounts are obtained separately for the X and Y directions, the signal instructing the reset process is output when either one of them becomes equal to or greater than the threshold.
In step S328, as soon as the first control circuit 111 receives the signal instructing the reset process from the second control circuit 121, it causes the first image sensor 115 to stop the exposure and to perform a reset process that discards the charge accumulated in each pixel. When this reset process is completed and the charge accumulated in the pixels has been made substantially zero, the first control circuit 111 immediately starts the exposure of the first image sensor 115.
In step S329, the second control circuit 121 likewise causes the second image sensor 125 to stop its exposure and perform a reset process, and then starts the exposure of the second image sensor 125 in synchronization with the start of exposure of the first image sensor 115.
In step S330, the second control circuit 121 resets the representative accumulation amount calculated so far by the representative accumulation amount calculation circuit 203 and causes it to start calculating a new representative accumulation amount. In the example shown in FIG. 6, the representative accumulation amount calculated up to time T3 is reset, and calculation of the representative accumulation amount is started again from time T4.
In step S331, the timing determination circuit 204 determines whether the representative accumulation amount has become equal to or greater than the second threshold set in step S304; if it has not, the process proceeds to step S332.
In step S332, the first control circuit 111 of the first imaging system 110 determines whether the exposure time of the first image sensor 115 has reached the exposure time set based on the AE performed in step S313; if it has not, the process returns to step S331. If the exposure time of the first image sensor has reached that exposure time, the process proceeds to step S334.
In step S334, the first control circuit 111 stops the exposure of the first image sensor 115.
If, in step S331, the representative accumulation amount is equal to or greater than the second threshold, the timing determination circuit 204 proceeds to step S333.
In step S333, the timing determination circuit 204 outputs a signal instructing the first imaging system 110 to end the exposure. This is done as soon as it is determined that the representative accumulation amount has become equal to or greater than the second threshold. In the example shown in FIG. 6, the representative accumulation amount based on the motion vectors calculated between the frames up to time T8 is equal to or greater than the second threshold, so at this point the timing determination circuit 204 outputs, via the second control circuit 121, the signal instructing the first imaging system 110 to end the exposure. When representative accumulation amounts are obtained separately for the X and Y directions, the signal instructing the end of exposure is output when either one of them becomes equal to or greater than the second threshold.
That is, as soon as it is determined that the representative accumulation amount has become equal to or greater than the second threshold, the process proceeds to steps S333 and S335, and the first control circuit 111 stops the exposure of the first image sensor 115 even if its exposure time has not yet reached the proper time. The first control circuit 111 outputs the analog image signal generated by the first image sensor 115 to the first A/D conversion circuit 116. The digital image data generated by the A/D conversion circuit 116 is subjected to predetermined processing by the first image processing circuit 117 and output to the image output circuit 118 as image data for recording. The image output circuit 118 writes the image data for recording to a recording medium attachable to and detachable from the imaging apparatus 100, or transmits it to an external device such as a smartphone or a server using a wireless or wired communication function.
In the example shown in FIG. 6, the first control circuit 111 therefore stops the exposure of the first image sensor 115 at a timing slightly after time T8. In practice, a time lag arises from the calculation time between the generation of the frame image at time T8 in the second image sensor 125 and the acquisition of the representative accumulation amount, and from the time it takes the signal output from the timing determination circuit 204 to reach the first control circuit 111. If the thresholds are set with these time lags in mind, however, their influence can be suppressed.
In step S335, the second control circuit 121 of the second imaging system 120 stops the exposure of the second image sensor 125.
In step S336, the first control circuit 111 of the first imaging system 110 determines whether the shooting mode is still selected; if it is, the process returns to step S306, and if another mode has been selected, the process returns to step S302.
As described above, in the first embodiment, based on the amount of subject motion during the exposure period of the first image sensor 115, the ongoing exposure of the first image sensor 115 is stopped, a reset process is performed, and the first image sensor 115 immediately resumes exposure. An image timed to the moment the subject starts to move can therefore be captured.
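The per-frame decision flow of steps S322 to S335 can be pictured, purely for illustration, as the following loop. The callback names are placeholders, the list of representative accumulation amounts is assumed to restart from small values after the reset (as it does in FIG. 6), and details such as the AE time-out branch are only indicated in comments.

```python
def exposure_timing_loop(representative_amounts, first_threshold, second_threshold,
                         reset_and_restart_exposure, stop_exposure_and_record):
    """Walk through the representative accumulation amounts produced for
    successive high-speed frames: first wait for the start of movement,
    then wait for the blur-level threshold."""
    movement_detected = False
    for amount in representative_amounts:
        if not movement_detected and amount >= first_threshold:
            movement_detected = True
            reset_and_restart_exposure()   # steps S327/S328: discard charge, re-expose
        elif movement_detected and amount >= second_threshold:
            stop_exposure_and_record()     # steps S333/S335: read out the recording frame
            return
    # If the second threshold is never reached, the AE exposure-time path (S332/S334) applies.

exposure_timing_loop(
    representative_amounts=[0.2, 0.8, 1.4, 0.3, 0.9, 1.6, 2.2, 3.1],
    first_threshold=1.0,
    second_threshold=3.0,
    reset_and_restart_exposure=lambda: print("reset + re-expose"),
    stop_exposure_and_record=lambda: print("stop exposure, record frame"),
)
```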
Note that by adjusting the first threshold compared with the representative accumulation amount in step S322, the magnitude of subject motion that is regarded as the start of movement can be adjusted. For example, the smaller this threshold is made, the smaller the motion that triggers the exposure reset and restart.
Also, while the first embodiment has described an example in which the timing at which a subject starts moving from a stationary state is detected for imaging, the method can conversely be applied to detecting the timing at which a subject transitions from a moving state to a stationary state.
When a moving subject comes to rest, the motion vectors calculated by the motion vector calculation circuit 201 become progressively smaller. Among the tracking motion vectors calculated by the accumulation amount calculation circuit 202, a tracking motion vector whose linked motion vectors become progressively smaller is therefore selected. The timing determination circuit 204 then arranges the magnitudes of the linked motion vectors of the selected tracking motion vector in time order and obtains the attenuation rate of the motion vector magnitude between two consecutive motion vectors. Based on this attenuation rate, the timing determination circuit 204 predicts the magnitude of the motion vector of the frame to be obtained next from the second image sensor 125. When the predicted motion vector magnitude falls below the first threshold, the timing determination circuit 204 outputs the signal instructing the first imaging system 110 to perform the reset process. If the first threshold is a positive value close to zero, the reset process and the restart of exposure of the first image sensor 115 can be performed at the timing when the subject stops moving.
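The stop-detection variant can be sketched as follows. Using the ratio of the last two magnitudes as the attenuation rate is an assumption made for the example; the text only states that the prediction is based on the attenuation rate between consecutive motion vectors.

```python
def predict_next_magnitude(magnitudes):
    """Given the magnitudes of the linked motion vectors of one tracking
    motion vector in time order, extrapolate one frame ahead using the
    attenuation rate between the last two magnitudes."""
    if len(magnitudes) < 2 or magnitudes[-2] == 0:
        return magnitudes[-1]
    decay = magnitudes[-1] / magnitudes[-2]
    return magnitudes[-1] * decay

def should_reset_for_stop(magnitudes, first_threshold):
    """True when the predicted next magnitude falls below the small,
    positive first threshold, i.e. the subject is about to stop."""
    return predict_next_magnitude(magnitudes) < first_threshold

print(should_reset_for_stop([8.0, 4.0, 2.0], first_threshold=1.5))  # predicted 1.0 -> True
print(should_reset_for_stop([8.0, 7.5, 7.0], first_threshold=1.5))  # predicted ~6.5 -> False
```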
Also, in the first embodiment, the analog image signal read out when the exposure of the first image sensor 115 is stopped in step S324 is not used for recording, so the signals of neighboring pixels may be added or averaged to reduce the resolution. Alternatively, the readout method of the first image sensor 115 may be changed so that the resolution of the image data is higher when exposure is started in step S328 than when exposure is started in step S318. This reduces the processing load when generating image data that is not for recording.
Further, in the first embodiment, an example was described in which the reset process and the restart of exposure are instructed for the entire frame of the image sensor 115 based on the signal output from the timing determination circuit 204, but the invention is not limited to this. For example, if the first image sensor 115 can control the exposure time for each line, each area, or each pixel, the timing determination circuit 204 may output signals instructing the reset process and the restart of exposure for each such portion of the first image sensor 115 based on the accumulation amount for that portion. Alternatively, the entire frame may be divided into blocks, and a signal instructing the reset process and the restart of exposure may be output for each block based on an accumulation amount representative of that block.
Further, in the first embodiment, an example was described in which the accumulation amount calculation circuit 202 calculates the sum of the lengths of the linked motion vectors as the length of the tracking motion vector, that is, as the accumulation amount, but the invention is not limited to this. When the motion vectors constituting a tracking motion vector as in FIG. 9, or parts of them, pass through the same coordinates, the lengths passing through the same coordinates may be excluded from the summation of motion vector lengths in expression (5). This prevents, for example, the motion vector length from being added excessively for a subject with a small periodic (repetitive) motion that moves back and forth between adjacent coordinates.
Also, in the first embodiment the start of subject movement is detected based on motion vectors; however, to reduce the computational load, the sum of absolute differences between frames may be compared with the first and second thresholds instead of motion vectors.
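A minimal sketch of this lighter-weight alternative, using NumPy for the frame-difference computation; the frame size and the synthetic change are illustrative.

```python
import numpy as np

def frame_sad(frame_a, frame_b):
    """Sum of absolute differences between two whole frames, used in
    place of motion vectors to lower the computational load."""
    return int(np.abs(frame_a.astype(np.int32) - frame_b.astype(np.int32)).sum())

rng = np.random.default_rng(1)
prev = rng.integers(0, 256, (120, 160), dtype=np.uint8)
curr = prev.copy()
curr[40:80, 60:100] = 255          # a region of the scene changes between frames
print(frame_sad(prev, prev))       # 0: no change
print(frame_sad(prev, curr) > 0)   # True: this value is what gets compared to the thresholds
```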
Further, according to the first embodiment, the timing of the reset process and the restart of exposure of the first image sensor 115 is determined based on the result of motion analysis using images obtained by changing the focal length, imaging magnification, and angle of view of the second imaging system 120. Therefore, even if the first image sensor 115 and the second image sensor 125 differ in resolution, an image with little blur can be captured.
In the first embodiment, the motion resolution is increased by moving the focal length of the second optical system 124 toward the telephoto side. With a typical lens, however, moving the focal length toward the telephoto side increases the F-number and darkens the image, and if the sensitivity is raised to compensate, noise increases and the accuracy of motion vector calculation deteriorates. The maximum amount by which the focal length can be moved may therefore be limited according to the magnitude of the noise component of the image obtained by the second image sensor 125.
Furthermore, it may be determined whether the image data obtained by starting the exposure after the representative accumulation amount was judged to be equal to or greater than the first threshold matches the timing intended by the user. For example, if a predetermined user operation, such as fully pressing the shutter button, is performed on the captured image data, that image data is recorded; if no such operation is performed, the image data is discarded without being recorded. Fully pressing the shutter button is of course only one example; any other user operation, such as operating another button or tilting the imaging apparatus 100 in a predetermined direction, may be used. Alternatively, instead of a user operation, voice detection may be used, for example detecting the user uttering a predetermined word.
(Second Embodiment)
Next, a second embodiment of the present invention will be described. In the second embodiment, when the amount of subject motion at the time the start of subject movement is detected is already fairly large, the exposure of the first image sensor 115 is continued as it is, without performing the reset process.
Imaging processing in the high-speed shooting mode of the imaging apparatus 100 of the second embodiment will be described with reference to the flowchart of FIG. 13. The flowchart of FIG. 13 shows the imaging processing in the high-speed shooting mode of the second embodiment, and differs from the flowchart of FIG. 4 only in that it includes step S1300. The second embodiment presupposes that the second threshold is set to a value sufficiently larger than the first threshold.
The processing of steps S318 to S326 and steps S327 to S336 in FIG. 13 is the same as that shown in FIG. 4, and its description is therefore omitted.
In step S322 of FIG. 13, if the representative accumulation amount calculated by the representative accumulation amount calculation circuit 203 is equal to or greater than the first threshold, the timing determination circuit 204 determines that the subject has started to move, and the process proceeds to step S1300.
In step S1300, the timing determination circuit 204 determines whether the representative accumulation amount calculated by the representative accumulation amount calculation circuit 203 has become equal to or greater than a third threshold. The third threshold is larger than the first threshold and smaller than the second threshold. If the representative accumulation amount is less than the third threshold, the subject is considered to have only just started to move, so the imaging apparatus 100 proceeds to step S327 as in the first embodiment, stops the exposure of the first image sensor 115, performs the reset process, and immediately starts the exposure of the first image sensor 115 again.
Conversely, if the representative accumulation amount is equal to or greater than the third threshold, the amount of subject motion is considered to have already greatly exceeded the reference value for judging the start of movement. In this case, even if the reset process of the first image sensor 115 were performed and re-exposure started immediately, the exposure of the first image sensor 115 would very likely resume at a timing well past the start of movement. Therefore, if the representative accumulation amount is equal to or greater than the third threshold, the imaging apparatus 100 continues the exposure of the first image sensor 115 without performing the reset process, and proceeds to step S331. In this way, even when the amount of motion at the start of subject movement is large, an image including the start of the subject's movement can be captured with good timing.
FIG. 14 is a diagram for explaining the operation of the first image sensor 115, the second image sensor 125, and the timing generation circuit 200 in the second embodiment. In the example shown in FIG. 14, the representative accumulation amount based on the motion vectors calculated between the frames up to time T2 is equal to or greater than the first threshold, and at this point it is also equal to or greater than the third threshold, which is larger than the first threshold. Therefore, at time T2 the timing determination circuit 204 does not output the signal instructing the first imaging system 110 to perform the reset process, and the exposure of the first image sensor 115 is continued. Thereafter, when the representative accumulation amount becomes equal to or greater than the second threshold corresponding to the blur level, the timing determination circuit 204 outputs, via the second control circuit 121, the signal instructing the first imaging system 110 to end the exposure.
Note that although a configuration in which the representative accumulation amount is compared with the third threshold has been described here as an example, the invention is not limited to this. Instead of comparing the representative accumulation amount with the third threshold, the exposure time of the first image sensor 115 at the point when the representative accumulation amount exceeds the first threshold may be compared with a fourth threshold. In this case, if the exposure time of the first image sensor 115 is equal to or greater than the fourth threshold, it took some time for the subject to start moving, so the subject is considered to have only just started to move; the reset process of the first image sensor 115 is therefore performed and its exposure is started again immediately. Conversely, if the exposure time of the first image sensor 115 is less than the fourth threshold, the subject is considered to have started moving suddenly. If re-exposure were started after performing the reset process of the first image sensor 115, its exposure would very likely resume at a timing well past the start of movement; the exposure of the first image sensor 115 is therefore continued without performing the reset process.
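The decision made at the moment the first threshold is crossed, in both the third-threshold form and the exposure-time (fourth-threshold) form described above, can be illustrated as follows; the parameter names and return values are illustrative only.

```python
def movement_start_action(representative_amount, first_threshold, third_threshold,
                          exposure_time=None, fourth_threshold=None):
    """Return 'reset' to reset and re-expose, 'continue' to keep the
    ongoing exposure, or 'wait' if movement has not started yet."""
    if representative_amount < first_threshold:
        return "wait"  # movement has not started (back to steps S322/S323)
    if exposure_time is not None and fourth_threshold is not None:
        # Variant: judge by how long the first sensor has already been exposing.
        return "reset" if exposure_time >= fourth_threshold else "continue"
    # Default: judge by how far the accumulation has already run past the start.
    return "continue" if representative_amount >= third_threshold else "reset"

print(movement_start_action(1.2, first_threshold=1.0, third_threshold=2.0))  # 'reset'
print(movement_start_action(2.5, first_threshold=1.0, third_threshold=2.0))  # 'continue'
print(movement_start_action(1.2, 1.0, 2.0, exposure_time=0.002, fourth_threshold=0.01))  # 'continue'
```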
Thus, in the second embodiment as well, based on the amount of subject motion during the exposure period of the first image sensor 115, the exposure of the first image sensor 115 is stopped, a reset process is performed, and the first image sensor 115 immediately resumes exposure, so that an image timed to the start of the subject's movement can be captured. Furthermore, in the second embodiment, whether to stop the exposure and perform the reset or to continue the exposure is switched according to the amount of motion at the start of movement. Therefore, regardless of how large the motion at the start of the subject's movement is, an image including the start of the subject's movement can be captured with good timing.
(Third Embodiment)
Next, a third embodiment of the present invention will be described. In the third embodiment, when the distance between a plurality of subjects, rather than the amount of subject motion, satisfies a predetermined condition, the exposure of the first image sensor 115 is stopped, a reset process is performed, and the first image sensor 115 immediately resumes exposure.
In the third embodiment, the second image processing circuit 127 includes a second timing generation circuit 1500 in addition to the timing generation circuit 200.
FIG. 15 is a block diagram showing a configuration example of the second timing generation circuit according to the third embodiment. In FIG. 15, the second timing generation circuit 1500 comprises a subject identification circuit 1501, a learning model 1502, a position determination circuit 1503, and a timing determination circuit 1504.
Next, imaging processing in the high-speed shooting mode of the imaging apparatus 100 according to the third embodiment of the present invention will be described with reference to the flowchart of FIG. 16. The flowchart of FIG. 16 shows the imaging processing in the high-speed shooting mode of the third embodiment, and differs from the flowchart of FIG. 4 only in that it includes steps S1620 to S1623.
The processing of steps S318 and S319, steps S323 to S326, and steps S328 to S336 in FIG. 16 is the same as that shown in FIG. 4, and its description is therefore omitted.
In step S1601 of FIG. 16, the subject identification circuit 1501 in the second timing generation circuit 1500 starts processing for identifying, from the image data obtained by the second image sensor 125, a plurality of predetermined subjects that have been learned in advance. The subject identification circuit 1501 is configured by, for example, a GPU (Graphics Processing Unit). By using a learning model 1502 obtained by performing machine learning in advance for identifying predetermined subjects, the subject identification circuit 1501 can identify each of the plurality of learned subjects from the image data.
In step S1602, the position determination circuit 1503 in the second timing generation circuit 1500 starts determining whether the positions of the plurality of subjects identified by the subject identification circuit 1501 satisfy a predetermined condition, that is, whether the plurality of subjects have moved to positions that satisfy the predetermined condition.
In step S1603, the timing determination circuit 1504 determines whether the positions of the plurality of subjects satisfy the predetermined condition; if they do not, the process proceeds to step S323, and if they do, the process proceeds to step S1604.
In step S1604, the timing determination circuit 1504 outputs a signal instructing the first imaging system 110 to perform a reset process. This is done as soon as it is determined that the positions of the plurality of subjects satisfy the predetermined condition.
For example, a predator such as a hawk and a prey animal such as a mouse or a rabbit may be selected as the plurality of subjects identified in step S1601, and the condition on the positions of the plurality of subjects in step S1603 may be that the distance between the identified subjects is equal to or less than a threshold (that they are close to each other). In this way, the first image sensor 115 can be made to perform the reset process and immediately start exposure just before the predator catches its prey. Alternatively, a tennis racket and a tennis ball may be selected as the plurality of subjects identified in step S1601, with the same condition in step S1603 that the distance between the identified subjects is equal to or less than a threshold; the first image sensor 115 then performs the reset process and immediately starts exposure just before the ball is hit. Alternatively, the arm of a javelin thrower and the javelin may be selected as the plurality of subjects identified in step S1601, and the condition in step S1603 may be that the distance between the identified subjects is one meter; the first image sensor 115 then performs the reset process and immediately starts exposure just after the javelin is thrown. Although the distance between the plurality of subjects has been described here as an example of the condition on their positions, the area over which the plurality of subjects overlap may instead be used as the condition.
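The position condition of step S1603 can be illustrated with a simple center-distance test between two detected subjects; the bounding-box format, the pixel units, and the numeric values are assumptions made for the example.

```python
import math

def centers_close(box_a, box_b, max_distance):
    """box_*: (top, left, bottom, right) bounding boxes of two identified
    subjects; True when the distance between their centers is at or
    below max_distance, i.e. the composition condition is satisfied."""
    cy_a, cx_a = (box_a[0] + box_a[2]) / 2, (box_a[1] + box_a[3]) / 2
    cy_b, cx_b = (box_b[0] + box_b[2]) / 2, (box_b[1] + box_b[3]) / 2
    return math.hypot(cy_a - cy_b, cx_a - cx_b) <= max_distance

racket = (100, 400, 220, 520)   # hypothetical detection results
ball = (150, 470, 170, 490)
print(centers_close(racket, ball, max_distance=50))   # True -> issue the reset signal
print(centers_close(racket, ball, max_distance=10))   # False -> keep waiting
```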
The processing from step S328 onward is the same as in the first embodiment. When the timing generation circuit 200 determines that the representative accumulation amount is equal to or greater than the second threshold, the first control circuit 111 stops the exposure of the first image sensor 115 even if its exposure time has not reached the proper time.
As described above, in the third embodiment, based on the positions of a plurality of subjects during the exposure period of the first image sensor 115, the ongoing exposure of the first image sensor 115 is stopped, a reset process is performed, and the first image sensor 115 immediately resumes exposure. An image timed to the moment when the plurality of subjects form a predetermined composition can therefore be captured.
The present invention is not limited to the above embodiments, and various changes and modifications can be made without departing from the spirit and scope of the present invention. Accordingly, the following claims are appended to make the scope of the present invention public.
This application claims priority based on Japanese Patent Application No. 2017-208369 filed on October 27, 2017 and Japanese Patent Application No. 2018-189988 filed on October 5, 2018, the entire contents of which are incorporated herein by reference.

Claims (22)

  1.  An imaging apparatus comprising:
     first imaging means;
     second imaging means;
     determination means for determining movement of a subject in image data of a plurality of frames captured by the second imaging means while the first imaging means is performing exposure for image data of a first frame; and
     control means for stopping, based on a result of the determination by the determination means, the exposure by the first imaging means for the image data of the first frame and starting exposure by the first imaging means for image data of a second frame that follows the first frame.
  2.  The imaging apparatus according to claim 1, wherein the determination means includes calculation means for calculating an amount of movement of the subject, and determines the movement of the subject based on the amount of movement.
  3.  The imaging apparatus according to claim 2, wherein the calculation means includes accumulation amount calculation means for calculating an accumulation amount of the amount of movement of the subject between frames of a plurality of items of image data, and
     the control means stops the exposure by the first imaging means for the image data of the first frame according to a result of comparing the accumulation amount with a first threshold.
  4.  The imaging apparatus according to claim 3, wherein, when image data of a new frame is captured by the second imaging means, the accumulation amount calculation means newly calculates the accumulation amount of the amount of movement of the subject between the frames of the plurality of items of image data captured up to that point, and
     the control means stops the exposure by the first imaging means for the image data of the first frame when the accumulation amount becomes equal to or greater than the first threshold.
  5.  The imaging apparatus according to claim 3 or 4, wherein the calculation means calculates a motion vector as the amount of movement of the subject.
  6.  The imaging apparatus according to claim 5, wherein the accumulation amount calculation means tracks a plurality of motion vectors calculated in the plurality of items of image data, and calculates the accumulation amount based on a sum of lengths of the tracked motion vectors.
  7.  The imaging apparatus according to claim 6, wherein the accumulation amount calculation means tracks, for each of a plurality of pixels, a plurality of motion vectors calculated in the plurality of items of image data, and calculates the accumulation amount based on a sum of lengths of any one of the plurality of tracked motion vectors.
  8.  The imaging apparatus according to any one of claims 5 to 7, wherein the calculation means calculates a reliability of each calculated motion vector, and
     the accumulation amount calculation means calculates the accumulation amount while excluding, from the plurality of motion vectors calculated in the plurality of items of image data, motion vectors whose reliability is lower than a predetermined value.
  9.  The imaging apparatus according to claim 3 or 4, wherein the calculation means calculates, as the amount of movement of the subject, absolute differences between pixel values of frames of image data.
  10.  The imaging apparatus according to claim 9, wherein the accumulation amount calculation means calculates the accumulation amount using a sum of the absolute differences between pixel values calculated between the frames of the plurality of items of image data.
  11.  The imaging apparatus according to claim 3, wherein, when the accumulation amount becomes equal to or greater than the first threshold, the control means does not stop the exposure by the first imaging means for the image data of the first frame if the accumulation amount is equal to or greater than a third threshold that is larger than the first threshold, and stops the exposure by the first imaging means for the image data of the first frame if the accumulation amount is not equal to or greater than the third threshold.
  12.  The imaging apparatus according to claim 3, wherein, when the accumulation amount becomes equal to or greater than the first threshold, the control means does not stop the exposure by the first imaging means for the image data of the first frame if an exposure time of the first image sensor is equal to or greater than a fourth threshold, and stops the exposure by the first imaging means for the image data of the first frame if the exposure time of the first image sensor is not equal to or greater than the fourth threshold.
  13.  The imaging apparatus according to claim 1, wherein the determination means includes identification means for identifying a plurality of predetermined subjects from the image data of the plurality of frames captured by the second imaging means, and determines the movement of the subject based on positions of the plurality of subjects.
  14.  The imaging apparatus according to any one of claims 1 to 13, wherein the first image sensor changes its signal readout method between before and after the control means stops, based on the result of the determination by the determination means, the exposure by the first imaging means for the image data of the first frame.
  15.  The imaging apparatus according to any one of claims 1 to 14, wherein the control means determines, according to a user operation, whether to record the image data of the second frame.
  16.  An imaging apparatus attachable to and detachable from an external imaging apparatus having first imaging means, the imaging apparatus comprising:
     second imaging means;
     determination means for determining movement of a subject in image data of a plurality of frames captured by the second imaging means while the first imaging means is performing exposure for image data of a first frame; and
     control means for stopping, based on a result of the determination by the determination means, the exposure by the first imaging means for the image data of the first frame and starting exposure by the first imaging means for image data of a second frame that follows the first frame.
  17.  The imaging apparatus according to claim 16, wherein the determination means includes calculation means for calculating an amount of movement of the subject, and determines the movement of the subject based on the amount of movement.
  18.  The imaging apparatus according to claim 17, wherein the calculation means includes accumulation amount calculation means for calculating an accumulation amount of the amount of movement of the subject between frames of a plurality of items of image data, and
     the control means outputs a signal for stopping the exposure by the first imaging means for the image data of the first frame according to a result of comparing the accumulation amount with a threshold.
  19.  The imaging apparatus according to claim 18, wherein, when image data of a new frame is captured by the second imaging means, the accumulation amount calculation means newly calculates the accumulation amount of the amount of movement of the subject between the frames of the plurality of items of image data captured up to that point, and
     the control means outputs the signal for stopping the exposure by the first imaging means for the image data of the first frame when the accumulation amount becomes equal to or greater than the threshold.
  20.  A method of controlling an imaging apparatus, the method comprising:
     determining movement of a subject in image data of a plurality of frames captured by second imaging means while first imaging means is performing exposure for image data of a first frame; and
     stopping, based on the movement of the subject, the exposure by the first imaging means for the image data of the first frame and starting exposure by the first imaging means for image data of a second frame that follows the first frame.
  21.  A program used in an imaging apparatus, the program causing a computer provided in the imaging apparatus to execute:
     a step of determining movement of a subject in image data of a plurality of frames captured by second imaging means while first imaging means is performing exposure for image data of a first frame; and
     a step of stopping, based on the movement of the subject, the exposure by the first imaging means for the image data of the first frame and starting exposure by the first imaging means for image data of a second frame that follows the first frame.
  22.  A non-volatile computer-readable storage medium storing a program to be executed by a computer of an imaging apparatus, the program causing the computer provided in the imaging apparatus to execute:
     a step of determining movement of a subject in image data of a plurality of frames captured by second imaging means while first imaging means is performing exposure for image data of a first frame; and
     a step of stopping, based on the movement of the subject, the exposure by the first imaging means for the image data of the first frame and starting exposure by the first imaging means for image data of a second frame that follows the first frame.
PCT/JP2018/039130 2017-10-27 2018-10-22 Imaging apparatus, imaging apparatus control method, and program WO2019082832A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/850,836 US11375132B2 (en) 2017-10-27 2020-04-16 Imaging apparatus, method of controlling the imaging apparatus, and program

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
JP2017208369 2017-10-27
JP2017-208369 2017-10-27
JP2018-189988 2018-10-05
JP2018189988A JP7286294B2 (en) 2017-10-27 2018-10-05 IMAGING DEVICE, IMAGING DEVICE CONTROL METHOD, AND PROGRAM

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/850,836 Continuation US11375132B2 (en) 2017-10-27 2020-04-16 Imaging apparatus, method of controlling the imaging apparatus, and program

Publications (1)

Publication Number Publication Date
WO2019082832A1 true WO2019082832A1 (en) 2019-05-02

Family

ID=66246893

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2018/039130 WO2019082832A1 (en) 2017-10-27 2018-10-22 Imaging apparatus, imaging apparatus control method, and program

Country Status (1)

Country Link
WO (1) WO2019082832A1 (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002040506A (en) * 2000-07-25 2002-02-06 Ricoh Co Ltd Image pickup unit
JP2009077272A (en) * 2007-09-21 2009-04-09 Casio Comput Co Ltd Imaging device and program therefor
WO2017090458A1 (en) * 2015-11-26 2017-06-01 ソニー株式会社 Imaging device, imaging method, and program

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113906729A (en) * 2019-06-13 2022-01-07 索尼集团公司 Imaging apparatus, imaging control method, and program
EP3975543A4 (en) * 2019-06-13 2022-07-13 Sony Group Corporation Image capturing device, image capturing control method, and program
US11785346B2 (en) 2019-06-13 2023-10-10 Sony Group Corporation Imaging device and imaging control method

Similar Documents

Publication Publication Date Title
CN104065868B (en) Image capture apparatus and control method thereof
US8818055B2 (en) Image processing apparatus, and method, and image capturing apparatus with determination of priority of a detected subject and updating the priority
US8994783B2 (en) Image pickup apparatus that automatically determines shooting mode most suitable for shooting scene, control method therefor, and storage medium
US11258948B2 (en) Image pickup apparatus, control method of image pickup apparatus, and storage medium
JP2018093275A (en) Imaging apparatus and flicker determination method
JP2009231967A (en) Image recording method, image recording device, and image recording program
US11388331B2 (en) Image capture apparatus and control method thereof
JP2020017807A (en) Image processing apparatus, image processing method, and imaging apparatus
JP5014267B2 (en) Imaging device
JP7286294B2 (en) IMAGING DEVICE, IMAGING DEVICE CONTROL METHOD, AND PROGRAM
JP5448868B2 (en) IMAGING DEVICE AND IMAGING DEVICE CONTROL METHOD
WO2019082832A1 (en) Imaging apparatus, imaging apparatus control method, and program
US11523048B2 (en) Electronic device, control method of electronic device, and non-transitory computer readable medium
JP2020088810A (en) Imaging apparatus and control method of imaging apparatus
JP2019216398A (en) Imaging apparatus and control method therefor, and program
JP2014077976A (en) Focusing device and imaging apparatus using the same
JP5832618B2 (en) Imaging apparatus, control method thereof, and program
JP2023004678A (en) Processing device and control method therefor
JP7321691B2 (en) IMAGING DEVICE, IMAGING DEVICE CONTROL METHOD, AND PROGRAM
JP2010093451A (en) Imaging apparatus and program for imaging apparatus
JP7123544B2 (en) IMAGING DEVICE, IMAGING DEVICE METHOD, AND PROGRAM
JP2021132272A (en) Electronic apparatus
JP5415208B2 (en) Imaging device
JP2016006940A (en) Camera with contrast af function
JP2019220889A (en) Imaging apparatus, control method for imaging apparatus, and program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18871469

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18871469

Country of ref document: EP

Kind code of ref document: A1