WO2020012960A1 - Imaging device - Google Patents

Imaging device Download PDF

Info

Publication number
WO2020012960A1
WO2020012960A1 (PCT/JP2019/025371)
Authority
WO
WIPO (PCT)
Prior art keywords
motion vector
reference value
image data
image
calculation unit
Prior art date
Application number
PCT/JP2019/025371
Other languages
French (fr)
Japanese (ja)
Inventor
英志 三家本
篠原 隆之
Original Assignee
株式会社ニコン
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 株式会社ニコン (Nikon Corporation)
Publication of WO2020012960A1

Classifications

    • GPHYSICS
    • G03PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03BAPPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B5/00Adjustment of optical system relative to image or object surface other than for focusing
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules

Definitions

  • the present invention relates to an imaging device.
  • An imaging device includes an image sensor that captures an image of a subject through an optical system and outputs a signal,
  • and a motion vector calculation unit that calculates information on a motion vector of the subject based on image data that is generated from the signal and has a resolution corresponding to the focal length of the optical system.
  • FIG. 2 is a block diagram illustrating a shake correction device included in the camera.
  • FIG. 4 is a diagram illustrating basic calculation timing of motion vector information in a motion vector calculation unit.
  • FIG. 5 is a flowchart illustrating the flow of an operation of the shake correction apparatus.
  • FIG. 4 is a block diagram illustrating details of generation of processing image data in a signal processing unit.
  • FIG. 1 is a sectional view schematically showing the camera 1.
  • A three-dimensional orthogonal coordinate system is set. Specifically, the axis parallel to the optical axis of the lens barrel 1B is defined as the Z-axis (the horizontal direction on the paper), an axis intersecting the Z-axis in a plane perpendicular to it is defined as the X-axis (the depth direction on the paper), and the axis perpendicular to both the Z-axis and the X-axis is defined as the Y-axis (the vertical direction in the drawing).
  • the rotation direction about the Z axis is the Roll direction
  • the rotation direction about the Y axis is the Yaw direction
  • the rotation direction about the X axis is the Pitch direction.
  • In the camera 1, the camera body 1A and the lens barrel 1B are integrated.
  • the present invention is not limited to this, and a camera in which the lens barrel is detachable from the camera body may be used.
  • the camera 1 has a structure that does not include a so-called quick return mirror that changes an optical path for observing a subject inside the camera.
  • However, the camera 1 is not limited thereto and may be a camera that has a quick return mirror.
  • the camera body 1A includes an image sensor 3, a recording medium 13, a storage unit 14, an operation unit 15, a release switch 17, a back liquid crystal 18, and a CPU 2.
  • the CPU 2 includes a blur correction device 100 described later.
  • The image sensor 3 is provided on a predetermined focal plane of the photographing optical system and is an element, such as a CMOS sensor, that generates a signal by photoelectrically converting subject light incident through the photographing optical system (lenses 4, 5, and 6) of the lens barrel 1B. Image data is generated from the signal output by the image sensor 3 in a signal processing unit 40, described later, included in the CPU 2.
  • the image sensor 3 has focus detection pixels, and the CPU 2 performs focus detection processing by a well-known pupil division type phase difference method using pixel output data from the focus detection pixels. Alternatively, focus detection may be performed by a well-known contrast method using data output from the image sensor 3.
  • the recording medium 13 is a medium for recording captured image data, and a memory card such as an SD card or a CF card is used.
  • the storage unit 14 is a memory such as an EEPROM, for example.
  • the storage unit 14 stores information on the size of image data for motion vector calculation corresponding to the zoom position (focal length) of the photographing optical system, as described later.
  • The operation unit 15 accepts a zoom operation; the zoom position of the photographing optical system is changed by performing a zoom operation via the operation unit 15.
  • the sensor rate of the image sensor 3 is determined based on the luminance measured based on the output of the image sensor 3 in a sensor control unit 46 described below included in the CPU 2.
  • the release switch 17 is a member for performing a shooting operation of the camera 1 and is a switch for a user to perform a shooting instruction operation.
  • the back liquid crystal 18 is a color liquid crystal display provided on the back of the camera body 1A and displaying a photographed subject image (reproduced image, live view image), operation-related information (menu), and the like.
  • the shutter controls subject light incident on the image sensor 3 in response to a shooting instruction from the release switch 17 or the like.
  • the CPU 2 is a central processing unit that controls the entire camera 1 and includes a shake correction device 100 described later.
  • The lens barrel 1B has a photographing optical system including a zoom lens 4, a focus lens 5, and a blur correction lens 6, and further includes a zoom lens drive mechanism 7, a focus lens drive mechanism 8, a blur correction lens drive mechanism 9, a diaphragm 10, an aperture driving mechanism 11, an angular velocity sensor 12 (blur detection sensor), and a blur correction lens position detecting section 21.
  • the zoom lens 4 is a lens group that is driven by a zoom lens driving mechanism 7 such as a DC motor and changes the zoom position (focal length) by moving along the optical axis direction.
  • A lens drive amount calculation unit 39, described later and included in the CPU 2, calculates the drive amount of the zoom lens 4 and drives the zoom lens 4 via the zoom lens drive mechanism 7 to change the zoom position.
  • the focus lens 5 is a lens group that is driven by a focus lens driving mechanism 8 such as a stepping motor, moves in the optical axis direction, and focuses.
  • The shake correction lens 6 is a lens group that is driven for optical shake correction by a shake correction lens driving mechanism 9 such as a VCM (voice coil motor) and is movable in a plane perpendicular to the optical axis.
  • the aperture 10 is driven by an aperture drive mechanism 11 and controls the amount of subject light passing through the photographing optical system.
  • the angular velocity sensor 12 is a sensor that detects an angular velocity of a camera shake (a shake output signal) generated by the camera 1.
  • the angular velocity sensor 12 is composed of two sensors, and is a sensor such as a vibration gyro that detects the angular velocity around the X axis (Pitch) and around the Y axis (Yaw).
  • the angular velocity sensor 12 may further include a third sensor to detect the angular velocity around the Z axis (Roll).
  • the angular velocity sensor 12 is also connected to a later-described motion vector calculation unit 41 included in the CPU 2, and the angular velocity detected by the angular velocity sensor 12 is sent to the motion vector calculation unit 41.
  • FIG. 2 is a block diagram illustrating the shake correction device 100 included in the camera 1.
  • The blur correction device 100 includes an amplification unit 31, a first A/D conversion unit 32, a second A/D conversion unit 33, a reference value calculation unit 34, a subtraction unit 43, a target position calculation unit 36, a center bias calculation unit 37, a reference value correction unit 50, and a lens drive amount calculation unit 39.
  • the blur correction device 100 further includes a signal processing unit 40, a sensor control unit 46, and a motion vector calculation unit 41.
  • the amplifying unit 31 amplifies the output of the angular velocity sensor 12.
  • the first A / D converter 32 performs A / D conversion on the output of the amplifier 31.
  • the reference value calculator 34 calculates a reference value (first reference value, reference value before correction) of the vibration detection signal (output of the first A / D converter 32) obtained from the angular velocity sensor 12.
  • the reference value of the angular velocity is, for example, a vibration detection signal output from the angular velocity sensor 12 when the camera 1 (camera body 1A, lens barrel 1B) is stationary.
  • the reference value calculation unit 34 can calculate the reference value based on the output of a low-pass filter that reduces a predetermined high-frequency component from the output of the angular velocity sensor 12, for example.
  • the subtraction unit 43 subtracts a reference value (second reference value, corrected reference value) obtained by correcting the first reference value calculated by the reference value calculation unit 34 from the output of the first A / D conversion unit 32.
  • the target position calculator 36 calculates a target position for driving the blur correction lens 6 based on the output of the angular velocity sensor 12 after the subtraction of the reference value by the subtractor 43.
  • The center bias calculator 37 calculates, as a bias amount, a centripetal force for moving the blur correction lens 6 toward the center of its movable range, based on the target position of the blur correction lens 6 calculated by the target position calculator 36. The control position of the blur correction lens 6 is then calculated by subtracting the calculated bias amount from the target position of the blur correction lens 6.
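  • The target-position and center-bias calculation described above can be sketched as follows. This is a minimal illustration, not the disclosed implementation: the gain and the proportional bias coefficient `k` are hypothetical assumptions.

```python
# Sketch of the target position / center bias calculation (illustrative only).

def target_position(angular_velocity, reference_value, gain=1.0):
    """Blur-correction target position from the reference-corrected gyro output.
    `gain` is a hypothetical scale factor, not a value from the disclosure."""
    return gain * (angular_velocity - reference_value)

def center_bias(target, k=0.1):
    """Centripetal bias pulling the lens toward the center of its movable range.
    Modeled here as proportional to the target position (an assumption)."""
    return k * target

def control_position(target, k=0.1):
    """Control position = target position minus the center bias amount."""
    return target - center_bias(target, k)
```

With these assumptions, a target position of 10 yields a bias of 1 and a control position of 9, illustrating how the bias nudges the lens back toward center.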
  • the lens driving amount calculation unit 39 calculates the driving amount of the lenses of the zoom lens 4 and the blur correction lens 6.
  • To calculate the drive amount of the blur correction lens 6, the lens drive amount calculation unit 39 uses the target position from the target position calculation unit 36 and the current position of the blur correction lens 6, which is detected by the blur correction lens position detection unit 21 and A/D converted by the second A/D conversion unit 33; from these, the drive amount of the blur correction lens 6 by the blur correction lens driving mechanism 9 is calculated. Further, when the photographer performs a zoom operation from the operation unit 15, the lens drive amount calculation unit 39 instructs the zoom lens driving mechanism 7 to drive the zoom lens 4.
  • The sensor control unit 46 sets the sensor rate of the image sensor 3 based on the measured luminance of the subject. For example, when the subject is dark, the exposure time for obtaining one image becomes long, so the sensor rate is set to a low value such as 15 fps. The brighter the subject, the shorter the exposure time for obtaining one image, so the sensor rate is set higher, e.g., 30, 60, or 120 fps.
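  • The sensor-rate selection above can be sketched as a simple threshold table. The luminance thresholds below are hypothetical assumptions; the disclosure only states that darker scenes use 15 fps and brighter scenes 30, 60, or 120 fps.

```python
# Sketch of sensor-rate selection by subject luminance (thresholds are
# illustrative assumptions, not values from the disclosure).

def select_sensor_rate(luminance):
    """Pick a sensor rate in fps: dark scenes need a longer exposure per frame,
    so a lower rate; bright scenes allow a shorter exposure, so a higher rate."""
    if luminance < 50:
        return 15
    elif luminance < 100:
        return 30
    elif luminance < 200:
        return 60
    return 120
```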
  • the signal processing unit 40 performs processing such as noise processing and A / D conversion on the signal acquired by the image sensor 3 and creates recording image data to be recorded as a still image on the recording medium 13. Further, the signal processing unit 40 continuously generates processing image data having a smaller image size than the recording image data in a time-series manner based on the signal acquired by the image sensor 3.
  • The image data for processing is used for moving image display (live view display) on the rear liquid crystal 18, calculation of a motion vector by the motion vector calculation unit 41, various calculation processes such as autofocus and automatic exposure, and wireless image transmission. The creation of the processing image data will be described later.
  • The motion vector calculation unit 41 calculates motion vector information indicating the motion (movement direction and motion amount) of the image from a plurality of pieces of processing image data (first processing image data, described later) processed by the signal processing unit 40.
  • The motion vector information is represented by a signed magnitude in the X-axis direction, the Y-axis direction, the Roll direction, and the like. Further, the motion vector information includes a detection delay time and the like.
  • The motion vector calculation unit 41 detects the motion direction and motion amount of the image by comparing luminance information, such as a change in the position of a high-luminance region, included in two or more pieces of image data captured by the image sensor 3, and calculates motion vector information. In addition to luminance information, motion vector information may be calculated by image pattern matching or the like. The motion vector information may be detected from one image, calculated from two separate frames, or calculated from three images.
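  • In the spirit of the pattern matching mentioned above, motion-vector detection between two frames can be sketched with minimal block matching. This is an illustrative sum-of-absolute-differences search, not the disclosed algorithm; frames are plain 2-D lists of luminance values, and the block size and search range are assumptions.

```python
# Minimal block-matching sketch of motion-vector detection (illustrative).

def sad(a, b):
    """Sum of absolute differences between two equal-size blocks."""
    return sum(abs(pa - pb) for ra, rb in zip(a, b) for pa, pb in zip(ra, rb))

def crop(frame, y, x, h, w):
    """Extract an h x w block whose top-left corner is at (y, x)."""
    return [row[x:x + w] for row in frame[y:y + h]]

def motion_vector(prev, curr, y, x, h, w, search=2):
    """Find the (dy, dx) within +/-search that minimizes SAD for the block
    at (y, x) of `prev` against candidate blocks of `curr`."""
    block = crop(prev, y, x, h, w)
    best, best_cost = (0, 0), None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = crop(curr, y + dy, x + dx, h, w)
            if len(cand) != h or any(len(r) != w for r in cand):
                continue  # candidate falls outside the frame
            cost = sad(block, cand)
            if best_cost is None or cost < best_cost:
                best_cost, best = cost, (dy, dx)
    return best
```

For example, a bright 2 × 2 patch shifted by one pixel down and right between two frames yields the vector (1, 1).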
  • FIG. 3 is a diagram illustrating the operation timing of the motion vector in the motion vector operation unit 41.
  • The motion vector calculation unit 41 calculates motion vector information from the (n-1)th image data and the next, nth image data, captured at a time later than the time at which the (n-1)th image data was acquired. For example, when image data is sent from the image sensor 3 at 30 fps, motion vector information is obtained once every 33 ms.
  • Time t1 is the time when the exposure of the (n-1) th image data is started.
  • Time t2 is an intermediate time between the time when the exposure of the (n-1) th image data is started and the time when the exposure is completed.
  • Time t4 is a time when exposure of the n-th image data is started.
  • the time t5 is a time exactly intermediate between the time when the exposure of the n-th image data is started and the time when the exposure is completed.
  • Time t6 is the time when the motion vector information calculated from the (n-1)th and nth image data is obtained. It is reasonable to regard the motion represented by this information as occurring at time t3, exactly midway between t2 and t5.
  • Accordingly, a detection delay time of t6 - t3 occurs between the time when the motion represented by the vector occurred and the time when the motion vector information is obtained.
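  • The timing relations above can be sketched as follows: the motion between frames n-1 and n is attributed to t3, the midpoint of the two exposure centers t2 and t5, so obtaining the result at t6 implies a detection delay of t6 - t3. The numeric times in the usage note are illustrative.

```python
# Sketch of the motion-vector timing relations (t2, t5, t3, and the delay).

def exposure_center(start, end):
    """Center time of one frame's exposure (t2 or t5 above)."""
    return (start + end) / 2.0

def motion_vector_timing(exp_nm1, exp_n, t6):
    """Return (t3, delay). exp_nm1 and exp_n are (start, end) exposure
    intervals of frames n-1 and n; t6 is when the vector is obtained."""
    t2 = exposure_center(*exp_nm1)
    t5 = exposure_center(*exp_n)
    t3 = (t2 + t5) / 2.0
    return t3, t6 - t3
```

For instance, with frame n-1 exposed over 0-20 ms, frame n over 33-53 ms, and the result available at 70 ms, t3 is 26.5 ms and the detection delay is 43.5 ms.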
  • the reference value corrector 50 includes a center bias remover 38, a reference value correction amount calculator 35, and a reference value subtractor 42.
  • Information in the X direction of the motion vector information is referred to as motion vector information X.
  • the correction of the reference value in the Y direction is the same as that in the X direction.
  • the Roll direction information of the motion vector information is not received. Note that the Roll vector information of the motion vector information may be received.
  • the center bias removing unit 38 subtracts the bias correction amount from the motion vector information X.
  • the bias correction amount X is calculated from the center bias amount of the X component calculated by the center bias calculator 37 (subtracted from the target position of the blur correction lens 6).
  • Based on the motion vector information X from which the bias correction amount X has been removed by the center bias removing unit 38, the reference value correction amount calculation unit 35 calculates a correction amount for the reference value of the angular velocity sensor output around the Y axis (Yaw). In the present embodiment, only the sign of the motion vector information X is determined: if the motion vector information X is confirmed in the minus direction, the correction amount is a certain fixed plus amount; if it is confirmed in the plus direction, the correction amount is a certain fixed minus amount.
  • the reference value subtraction / addition unit 42 corrects the first reference value with the correction amount calculated by the reference value correction amount calculation unit 35 to obtain a corrected second reference value.
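  • The sign-only correction described above can be sketched as follows. Only the sign of motion vector information X is used, and a fixed step is applied in the opposite direction; the step magnitude and the sign convention of the final addition are illustrative assumptions, not values from the disclosure.

```python
# Sketch of the sign-based reference-value correction (illustrative).

FIXED_STEP = 0.01  # hypothetical correction magnitude, e.g. in deg/s

def correction_amount(mv_x, step=FIXED_STEP):
    """Fixed plus step for minus-direction motion, fixed minus step for
    plus-direction motion, zero when no motion is confirmed."""
    if mv_x < 0:
        return +step
    elif mv_x > 0:
        return -step
    return 0.0

def second_reference(first_reference, mv_x, step=FIXED_STEP):
    """Corrected (second) reference value; applying the correction amount to
    the first reference value, per the reference value subtraction/addition
    unit 42 (sign convention assumed)."""
    return first_reference + correction_amount(mv_x, step)
```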
  • FIG. 4 is a flowchart showing the flow of the operation of the shake correction apparatus 100.
  • Step 001 After the power of the camera 1 is turned on, the shake correction apparatus 100 starts a calculation for optical image stabilization. Depending on the camera, when the release switch 17 is half-pressed or when the shake correction is turned on by an instruction unit (not shown), the shake correction apparatus 100 starts the calculation for optical image stabilization.
  • Step 002 The blur correction device 100 amplifies the output of the angular velocity sensor 12 by the amplifying unit 31, and then performs A / D conversion by the first A / D converting unit 32.
  • Step 003 In the blur correction device 100, the reference value calculation unit 34 calculates the reference value of the angular velocity (the first reference value; a value equivalent to zero deg/s) based on the signal obtained by A/D converting the output of the angular velocity sensor 12. Since the reference value of the angular velocity changes due to temperature characteristics, drift characteristics immediately after startup, and the like, the stationary output of the angular velocity sensor 12 at the time of factory shipment, for example, cannot be used as the reference value.
  • As methods of calculating the reference value, a method of calculating a moving average over a predetermined time and a method of calculating by LPF processing are known. In the present embodiment, reference value calculation by LPF processing is used.
  • FIG. 5 is a diagram showing the reference value calculation unit 34 (LPF).
  • the cutoff frequency fc of the LPF 34 is generally set to a low frequency of about 0.1 [Hz]. This is because camera shake is dominant at a frequency of about 1 to 10 [Hz]. If fc is 0.1 [Hz], the effect on the camera shake component is small, and good blur correction can be performed.
  • The reference value calculation result may have an error. Since fc is low, the time constant is large, and once the error increases, there is a problem that it takes time to converge to the true value. In the present embodiment, therefore, the error of the reference value is corrected.
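  • The LPF-based reference value calculation with fc of about 0.1 Hz can be sketched as a first-order low-pass (exponential smoothing) over the gyro output. The discretization below is a standard choice and the sample rate is an illustrative assumption.

```python
# Sketch of reference-value estimation by first-order LPF (fc ~ 0.1 Hz).
import math

def lpf_reference(samples, fs=1000.0, fc=0.1):
    """Track the slowly varying zero offset (reference value) of a gyro
    signal. `fs` is the sample rate in Hz (an assumption); `fc` is the
    cutoff frequency, low so that 1-10 Hz hand-shake barely affects it."""
    alpha = 1.0 - math.exp(-2.0 * math.pi * fc / fs)  # smoothing factor
    ref = samples[0]
    for x in samples[1:]:
        ref += alpha * (x - ref)
    return ref
```

A constant input is tracked exactly, while a step change converges only slowly, illustrating the large-time-constant problem the correction step addresses.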
  • Step 004 When the motion vector information is updated (S004: YES), the blur correction device 100 proceeds to S005, and when the information is not updated (S004: NO), the process proceeds to S006.
  • Step 005 When the motion vector information is updated, the blur correction device 100 corrects the first reference value calculated in S003 in the reference value correction unit 50, and calculates a second reference value.
  • the reference value correction step will be described later.
  • Step 006 In the target position calculation unit 36, the blur correction device 100 calculates the target position for controlling the blur correction lens 6, based on the first reference value obtained in S003 or the second reference value obtained in S005 and the output of the angular velocity sensor 12. At this time, the target position calculation unit 36 calculates the target position of the blur correction lens 6 in consideration of the focal length, subject distance, shooting magnification, and blur correction lens characteristic information.
  • Step 007 The blur correction device 100 performs a center bias process to prevent the blur correction lens 6 from reaching the movable end.
  • Various methods are known for the center bias processing, such as setting a bias amount according to the target position information, HPF processing, and incomplete integration processing (in S006); the method is not limited here.
  • Step 008 The blur correction device 100 calculates the lens drive amount from the difference between the target position information considering the center bias component and the blur correction lens position information in the lens drive amount calculation unit 39.
  • Step 009 The blur correction device 100 drives the blur correction lens 6 to the target position via the blur correction lens driving mechanism 9, and returns to S002.
  • FIG. 6 is a detailed flowchart of the reference value correction step 005 in FIG.
  • Step 101 The blur correction device 100 sums up all the calculated motion vector information, and proceeds to step 102.
  • Step 102 In the shake correction apparatus 100, the center bias removing unit 38 converts the center bias component calculated in step 007 into the same scale as the motion vector information, and proceeds to step 103.
  • the conversion method is calculated based on focal length, subject distance, shooting magnification, and resolution information of motion vector information.
  • Bias_MV = Bias_θ × f × (1 + β) / MV_pitch
  • Bias_MV: center bias component (same scale as the motion vector information)
  • Bias_θ: center bias component (angle)
  • f: focal length
  • β: shooting magnification
  • MV_pitch: motion vector pitch size
  • Since a motion vector is obtained as a difference between a plurality of captured frames, a delay time occurs before the motion vector is detected. Therefore, it is preferable that the center bias component also have a delay equivalent to that of the motion vector information. For example, if the delay is 3 frames at 30 fps, it is about 100 ms, so the center bias component included in the motion vector information can be calculated more accurately by using the bias information from 100 ms earlier.
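  • The scale conversion above, Bias_MV = Bias_θ × f × (1 + β) / MV_pitch, can be sketched directly. The numeric values in the usage note are illustrative only; units must simply be consistent (f and MV_pitch in the same length unit, Bias_θ a small angle in radians).

```python
# Sketch of converting the angular center-bias component into the pixel
# scale of the motion vector information (values illustrative).

def bias_to_mv_scale(bias_theta, focal_length, magnification, mv_pitch):
    """Bias_MV = Bias_theta * f * (1 + beta) / MV_pitch.
    bias_theta: center bias component as an angle (small-angle approximation)
    focal_length: f, same length unit as mv_pitch
    magnification: shooting magnification beta
    mv_pitch: pitch (size) of one motion-vector pixel"""
    return bias_theta * focal_length * (1.0 + magnification) / mv_pitch
```

For example, a 0.001 rad bias at f = 50 mm, β = 0, and a 0.005 mm pixel pitch corresponds to 10 motion-vector pixels.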
  • Step 103 The blur correction device 100 subtracts the center bias component converted in step 102 from the motion vector information in the center bias removing unit 38, and proceeds to step 104. Thereby, motion vector information based on the reference value error can be obtained.
  • Step 104 The blur correction device 100 acquires the difference: MV_diff between the latest motion vector information (n) and the motion vector information (n ⁇ 1) one frame before, and proceeds to step 105.
  • Step 105 In the blur correction device 100, the reference value correction amount calculation unit 35 sets an amount by which the reference value is corrected based on MV_diff. For the reference value, a correction amount is set based on the following idea, and the process proceeds to step 106.
  • Step 106 The blur correction device 100 causes the reference value subtraction / addition unit 42 to subtract the θ0_comp calculated in Step 105 from the first reference value calculated in S003 (FIG. 4), and obtains the corrected second reference value.
  • The second reference value is obtained in S005, and the blur correction process repeats the loop of returning to S002 after proceeding to S009, as shown in the flowchart of FIG. 4. Therefore, the second reference value corrected in S005 is updated whenever the motion vector information is updated.
  • FIG. 7A is a graph showing a second reference value in the Yaw direction.
  • the dotted line in the drawing indicates the first reference value when the correction is not performed according to the present embodiment, and the solid line in the drawing indicates the second reference value when the correction is performed according to the present embodiment.
  • FIG. 7B is a graph showing the direction of the motion vector information in the X direction.
  • The first reference value is corrected to minus, as shown by the solid line in FIG. 7A.
  • the second reference value changes according to the value calculated by the reference value calculation unit 34 until time t3.
  • the first reference value is corrected to minus.
  • the correction amount at this time is constant. That is, the correction amount at time t1 and the correction amount at time t3 are the same.
  • the reference value changes according to the value calculated by the reference value calculation unit 34 to become the corrected second reference value.
  • When the motion vector information is confirmed in the plus direction, the first reference value is corrected by a certain fixed amount in the minus direction. Further, when the motion vector information is confirmed in the minus direction, as at times t22 and t25 in the drawing, the first reference value is corrected by a certain fixed amount in the plus direction.
  • FIG. 8 is a block diagram illustrating details of generation of the processing image data in the signal processing unit 40.
  • the signal processing unit 40 has a preprocessing unit 40A and a size adjustment unit 40B.
  • The preprocessing unit 40A is, for example, an AFE circuit (Analog Front End circuit) that performs processing such as noise processing and A/D conversion on the image signal output from the image sensor 3, converts it into digital image data, and outputs the data to the size adjustment unit 40B.
  • The size adjustment unit 40B converts the resolution of the data input from the preprocessing unit 40A and adjusts the image size (number of pixels) of the captured data. For example, when the release switch 17 is fully pressed, recording image data B0, e.g., at full size, is created without reducing the image size of the image data obtained from the preprocessing unit 40A (or with only a small reduction ratio), and is recorded on the recording medium 13.
  • When the release switch 17 is not fully pressed, for example, when displaying a through image on the rear liquid crystal 18, when performing AF (autofocus) or AE (automatic exposure) with the release switch 17 half-pressed, when creating a transmission image, or when calculating a motion vector, the size adjustment unit 40B reduces the image size to a processing image size suitable for such processing.
  • First processing image data B1, which is smaller than the recording image data B0, is created, and second processing image data B2 is created by further reduction from it.
  • the relationship between the sizes of the recording image data B0, the first processing image data B1, and the second processing image data B2 is shown below.
  • the size of the recording image data B0 is, for example, 4608 ⁇ 2592 pixels.
  • the second processing image data B2 has, for example, a VGA size and a resolution reduced to 640 ⁇ 360 pixels (pixels are thinned out), and is used for a through image, AF, AE, and transmission.
  • the first processing image data B1 has a size between the recording image data B0 and the second processing image data B2, and varies depending on the zoom position as described below.
  • the size adjustment unit 40B is connected to the lens drive amount calculation unit 39, and obtains zoom position information from the lens drive amount calculation unit 39.
  • the present invention is not limited to this, and if a lens position detector is provided, the zoom position may be obtained from the lens position detector.
  • the size adjustment unit 40B is also connected to the storage unit 14.
  • the storage unit 14 stores the size of the first processing image data B1 corresponding to the zoom position.
  • An example of the size of the first processing image data B1 for each area of the zoom position is shown below.
  • Tele area: 640 × 360 pixels
  • Middle area: 1280 × 720 pixels
  • Wide area: 1920 × 1080 pixels
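  • The zoom-position-dependent B1 size stored in the storage unit 14 can be sketched as a simple lookup. The sizes are the example values above; the focal-length boundaries between the wide, middle, and tele areas are hypothetical assumptions.

```python
# Sketch of looking up the B1 image size by zoom position (boundaries assumed).

B1_SIZE_BY_AREA = {
    "tele":   (640, 360),
    "middle": (1280, 720),
    "wide":   (1920, 1080),
}

def zoom_area(focal_length_mm, wide_max=35.0, middle_max=85.0):
    """Classify a focal length into an area; the thresholds are assumptions."""
    if focal_length_mm <= wide_max:
        return "wide"
    elif focal_length_mm <= middle_max:
        return "middle"
    return "tele"

def b1_image_size(focal_length_mm):
    """Image size of the first processing image data B1 for this zoom position."""
    return B1_SIZE_BY_AREA[zoom_area(focal_length_mm)]
```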
  • the motion vector detected on the image plane when the reference value of the angular velocity sensor 12 deviates from the true value is smaller as the focal length of the imaging optical system is shorter, and is larger as the focal length is longer.
  • the image size of the first processing image data B1 is 640 ⁇ 360 pixels which is equal to the image size of the second processing image data B2. Then, as the zoom position moves from the tele area to the wide area, the image size of B1 increases, that is, the resolution increases.
  • the image data used in the motion vector calculation unit 41 is created in the signal processing unit 40 (size adjustment unit 40B) in different image sizes corresponding to the zoom positions.
  • The size adjustment unit 40B reads, from the sizes of the first processing image data B1 stored in the storage unit 14, the image size corresponding to the zoom position obtained from the lens drive amount calculation unit 39, and creates the first processing image data B1 at that image size.
  • The image size of the first processing image data B1 created by the size adjustment unit 40B increases as the zoom position moves toward the wide area. Since the resolution is then higher, the detection resolution (detection accuracy) of the motion vector improves, and the motion vector can be detected accurately.
  • the range in which the motion vector is detected is limited, and the moving amount of the feature point between two pieces of image data is, for example, up to 16 pixels.
  • In the case of the tele area, the motion vector is larger than in the case of the wide area.
  • the image size in the tele area is smaller than that in the wide area. This makes it possible to prevent the motion vector from being undetectable in the tele area.
  • the processing load and processing time can be reduced by reducing the image data.
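  • The detection-limit reasoning above can be sketched numerically: for a given shake angle, the image-plane motion in pixels grows with focal length and shrinks with a coarser pixel pitch, so the tele area's smaller (lower-resolution) B1 keeps the motion within the 16-pixel limit. The pixel-pitch values in the usage note are illustrative assumptions.

```python
# Sketch of the 16-pixel detection-limit check (numbers illustrative).

DETECTION_LIMIT_PX = 16  # maximum feature-point movement between two frames

def motion_in_pixels(shake_angle_rad, focal_length_mm, pixel_pitch_mm):
    """Image-plane motion in pixels for a small shake angle at a focal length
    (small-angle approximation: displacement = angle * f)."""
    return shake_angle_rad * focal_length_mm / pixel_pitch_mm

def detectable(shake_angle_rad, focal_length_mm, pixel_pitch_mm):
    """True if the motion stays within the block-matching search limit."""
    return motion_in_pixels(shake_angle_rad, focal_length_mm,
                            pixel_pitch_mm) <= DETECTION_LIMIT_PX
```

For example, a 0.0005 rad shake at f = 200 mm moves 5 pixels on a coarse 0.02 mm effective pitch (detectable) but 20 pixels on a fine 0.005 mm pitch (beyond the limit).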
  • the present invention is not limited to the embodiments described above, and various modifications and changes as described below are possible, and these are also within the scope of the present invention.
  • the first processing image data B1 for motion vector calculation is once created, and then the second processing image data B2 used for other processing is converted from the first processing image data B1. Created.
  • the present invention is not limited thereto, and the second processing image data B2 may be created directly from the full-size image without passing through the first processing image data B1.
  • the size adjustment unit 40B may reduce the size of the first processing image data B1. For example, when the camera shake is large, the motion vector becomes large. If the size of the first processing image data B1 used for the motion vector calculation is large, it may be out of the detection limit. Therefore, the size of the first processing image data B1 used for the motion vector calculation is reduced. As a result, it is possible to prevent the motion vector from being uncalculable.
  • the size of the first processing image data B1 may be changed depending on the subject blur size.
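  • The modification above, reducing the size of B1 when the shake (and hence the motion vector) would exceed the detection limit, can be sketched as follows. The halving strategy and the minimum width are assumptions; the disclosure only states that the size is reduced so the motion vector remains calculable.

```python
# Sketch of shrinking B1 so a large predicted motion stays detectable
# (halving strategy and minimum width are assumptions).

DETECTION_LIMIT_PX = 16

def adjusted_b1_width(base_width, predicted_motion_px, min_width=320):
    """Halve the B1 width until the proportionally scaled motion fits within
    the detection limit; halving the resolution halves the motion in pixels."""
    width, motion = base_width, float(predicted_motion_px)
    while motion > DETECTION_LIMIT_PX and width > min_width:
        width //= 2
        motion /= 2.0
    return width
```

For instance, a predicted 40-pixel motion on a 1920-pixel-wide B1 is reduced to 10 pixels by shrinking the width to 480, while a 10-pixel motion leaves the size unchanged.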
  • the correction amount of the reference value is a certain amount of plus or minus.
  • the correction amount may be changed depending on the zoom position.
  • the auto-focus operation may be performed by half-pressing the release switch 17, and the calculation of the motion vector may be stopped for a certain period. In such a case, the correction amount of the motion vector after the calculation of the motion vector is restarted after being stopped for a certain period may be increased.
  • the angular velocity sensor 12 is provided in the lens barrel 1B.
  • the present invention is not limited to this, and the angular velocity sensor 12 may be provided in the camera body 1A.
  • an acceleration sensor may be provided in the camera body 1A or the lens barrel 1B.
  • the center bias, which is a centripetal force for moving the blur correction lens toward the center of its movable range, is determined based on the target position of the blur correction lens calculated by the target position calculation unit.
  • although control using the center bias is performed in the embodiment, control without the center bias may be performed instead. In that case, the center bias calculation unit and the center bias removal unit are omitted.
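The size selection described in the modifications above can be sketched as follows. This is a minimal illustration in Python; the size table, the luminance-independent halving strategy, and the concrete pixel counts are assumptions for illustration only. The embodiment specifies just that wider zoom positions use larger images and that feature-point movement is detectable up to about 16 pixels.

```python
# Hypothetical sketch: choose the size of the first processing image data B1
# from the zoom region, so the expected motion stays within the detection
# limit of the motion vector search. All sizes below are assumed values.

SIZE_BY_ZOOM = {
    "wide": (1280, 960),   # large image: fine motion vector resolution
    "mid":  (960, 720),
    "tele": (640, 480),    # small image: large motions remain detectable
}

DETECTION_LIMIT_PX = 16  # maximum feature-point displacement between frames

def select_b1_size(zoom_region, expected_shake_px):
    """Return the image size used for motion vector calculation.

    If the expected shake (in pixels at the candidate size) would exceed
    the detection limit, fall back to a smaller size, as in the
    modification where large camera shake shrinks B1.
    """
    width, height = SIZE_BY_ZOOM[zoom_region]
    while expected_shake_px > DETECTION_LIMIT_PX and width > 320:
        width, height = width // 2, height // 2
        expected_shake_px /= 2  # halving the image halves the pixel motion
    return (width, height)
```

The halving loop mirrors the trade-off the modifications describe: a smaller image lowers detection resolution but keeps the motion inside the detectable range.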

Abstract

Provided is an imaging device capable of further improving the vibration reduction performance of optical blur correction. This imaging device 1 has: an imaging element 3 that captures an image of a subject formed by an optical system and outputs a signal; and a motion vector calculation unit 41 that calculates information on a motion vector of the subject based on image data generated from the signal and having a resolution corresponding to the focal length of the optical system.

Description

Imaging device
The present invention relates to an imaging device.
For example, a technique has been proposed in which motion vector information is detected from captured images and fed back to the calculation of the target drive position of a blur correction lens, thereby improving the vibration reduction performance of optical blur correction (see Patent Document 1). There has long been a demand for improving the detection accuracy of motion vector information.
Patent Document 1: JP-A-10-145662
An imaging device according to the present invention includes: an imaging element that captures an image of a subject formed by an optical system and outputs a signal; and a motion vector calculation unit that calculates information on a motion vector of the subject based on image data generated from the signal and having a resolution corresponding to the focal length of the optical system.
FIG. 1 is a sectional view schematically showing a camera.
FIG. 2 is a block diagram illustrating a blur correction device included in the camera.
FIG. 3 is a diagram illustrating the basic calculation timing of motion vector information in a motion vector calculation unit.
FIG. 4 is a flowchart illustrating the flow of the operation of the blur correction device.
FIG. 5 is a diagram illustrating a reference value calculation unit.
FIG. 6 is a detailed flowchart of the reference value correction step in FIG. 4.
FIG. 7(a) is a graph showing the second reference value in the Yaw direction, and FIG. 7(b) is a graph showing the direction of the motion vector information in the X direction.
FIG. 8 is a block diagram illustrating the details of the creation of processing image data in a signal processing unit.
Hereinafter, embodiments of the present invention will be described with reference to the drawings. FIG. 1 is a sectional view schematically showing the camera 1.
As shown in FIG. 1, a three-dimensional orthogonal coordinate system is set in this embodiment. Specifically, the axis parallel to the optical axis of the lens barrel 1B is defined as the Z axis (horizontal direction on the page), the axis intersecting the Z axis in a plane perpendicular to it is defined as the X axis (depth direction of the page), and the axis perpendicular to both the Z axis and the X axis is defined as the Y axis (vertical direction on the page). The rotation direction about the Z axis is the Roll direction, the rotation direction about the Y axis is the Yaw direction, and the rotation direction about the X axis is the Pitch direction.
(Camera 1)
The camera 1 is of a type in which the camera body 1A and the lens barrel 1B are integrated; however, the present invention is not limited to this, and the lens barrel may be detachable from the camera body.
In this embodiment, the camera 1 does not include a so-called quick return mirror, which changes the optical path for observing a subject inside the camera; however, the camera is not limited to this and may include a quick return mirror.
(Camera body 1A)
The camera body 1A includes an image sensor 3, a recording medium 13, a storage unit 14, an operation unit 15, a release switch 17, a rear liquid crystal display 18, and a CPU 2. The CPU 2 includes a blur correction device 100 described later.
The image sensor 3 is provided on the intended focal plane of the photographing optical system and is an element, such as a CCD or CMOS sensor, that photoelectrically converts subject light incident through the photographing optical systems 4, 5, and 6 of the lens barrel 1B to generate a signal.
Image data is generated from the signal output from the image sensor 3 by a signal processing unit 40, described later, included in the CPU 2.
The image sensor 3 has focus detection pixels, and the CPU 2 performs focus detection processing by the well-known pupil-division phase difference method using pixel output data from the focus detection pixels. Alternatively, focus detection may be performed by the well-known contrast method using data output from the image sensor 3.
The recording medium 13 is a medium for recording captured image data; a memory card such as an SD card or a CF card is used.
The storage unit 14 is a memory such as an EEPROM. As described later, the storage unit 14 stores information on the size of the image data used for motion vector calculation, corresponding to the zoom position (focal length) of the photographing optical system.
The operation unit 15 accepts a zoom operation; performing a zoom operation via the operation unit 15 changes the zoom position of the photographing optical system.
In a sensor control unit 46, described later, included in the CPU 2, the sensor rate of the image sensor 3 is determined based on the luminance measured from the output of the image sensor 3.
The release switch 17 is a member for performing a shooting operation of the camera 1, that is, a switch with which the user instructs shooting.
The rear liquid crystal display 18 is a color liquid crystal display provided on the back of the camera body 1A that displays captured subject images (playback images, live view images) and information related to operation (menus).
The shutter controls the subject light incident on the image sensor 3 in response to a shooting instruction from the release switch 17 or the like.
The CPU 2 is a central processing unit that controls the entire camera 1 and includes the blur correction device 100 described later.
(Lens barrel 1B)
Next, the lens barrel 1B will be described. The lens barrel 1B has a photographing optical system including a zoom lens 4, a focus lens 5, a blur correction lens 6, and a zoom lens drive mechanism 7, and further includes a focus lens drive mechanism 8, a blur correction lens drive mechanism 9, a diaphragm 10, a diaphragm drive mechanism 11, an angular velocity sensor 12 (shake detection sensor), and a blur correction lens position detection unit 21.
The zoom lens 4 is a lens group driven by the zoom lens drive mechanism 7, such as a DC motor, that changes the zoom position (focal length) by moving along the optical axis direction.
When the photographer performs a zoom operation via the operation unit 15, a lens drive amount calculation unit 39, described later, included in the CPU 2 calculates the drive amount of the zoom lens 4 and changes the zoom position of the zoom lens 4 via the zoom lens drive mechanism 7.
The focus lens 5 is a lens group driven by a focus lens drive mechanism 8, such as a stepping motor, that moves in the optical axis direction to achieve focus.
The blur correction lens 6 is a lens group that is driven for optical blur correction by a blur correction lens drive mechanism 9, such as a VCM (voice coil motor), and is movable in a plane perpendicular to the optical axis.
The diaphragm 10 is driven by a diaphragm drive mechanism 11 and controls the amount of subject light passing through the photographing optical system.
The angular velocity sensor 12 is a sensor that detects the angular velocity (shake output signal) of camera shake occurring in the camera 1. The angular velocity sensor 12 consists of two sensors, such as vibration gyros, that detect the angular velocities around the X axis (Pitch) and around the Y axis (Yaw), respectively. The angular velocity sensor 12 may further include a third sensor to also detect the angular velocity around the Z axis (Roll).
The angular velocity sensor 12 is also connected to a motion vector calculation unit 41, described later, included in the CPU 2, and the angular velocity detected by the angular velocity sensor 12 is sent to the motion vector calculation unit 41.
(Blur correction device 100)
FIG. 2 is a block diagram illustrating the blur correction device 100 included in the camera 1. The blur correction device 100 includes an amplification unit 31, a first A/D conversion unit 32, a second A/D conversion unit 33, a reference value calculation unit 34, a subtraction unit 43, a target position calculation unit 36, a center bias calculation unit 37, a reference value correction unit 50, and a lens drive amount calculation unit 39. The blur correction device 100 further includes a signal processing unit 40, a sensor control unit 46, and a motion vector calculation unit 41.
The amplification unit 31 amplifies the output of the angular velocity sensor 12.
The first A/D conversion unit 32 A/D-converts the output of the amplification unit 31.
The reference value calculation unit 34 calculates a reference value (first reference value, reference value before correction) of the vibration detection signal (the output of the first A/D conversion unit 32) obtained from the angular velocity sensor 12. The reference value of the angular velocity is, for example, the vibration detection signal output from the angular velocity sensor 12 when the camera 1 (camera body 1A, lens barrel 1B) is stationary. The reference value calculation unit 34 can obtain the reference value, for example, from the output of a low-pass filter that reduces predetermined high-frequency components of the output of the angular velocity sensor 12.
The subtraction unit 43 subtracts a reference value obtained by correcting the first reference value calculated by the reference value calculation unit 34 (second reference value, corrected reference value) from the output of the first A/D conversion unit 32.
The target position calculation unit 36 calculates the target position for driving the blur correction lens 6 based on the output of the angular velocity sensor 12 after the reference value has been subtracted by the subtraction unit 43.
The center bias calculation unit 37 calculates, as a bias amount, the centripetal force for moving the blur correction lens 6 toward the center of its movable range, based on the target position of the blur correction lens 6 calculated by the target position calculation unit 36. The control position of the blur correction lens 6 is then calculated by subtracting the calculated bias amount from the target position of the blur correction lens 6.
Performing this centering bias processing effectively prevents the blur correction lens 6 from colliding with its hard limit and further improves the appearance of the captured image.
The lens drive amount calculation unit 39 calculates the drive amounts of the zoom lens 4 and the blur correction lens 6.
For the blur correction lens 6, the lens drive amount calculation unit 39 calculates the drive amount to be applied by the blur correction lens drive mechanism 9 from the target position supplied by the target position calculation unit 36 and the current position of the blur correction lens 6, which is obtained from the value detected by the blur correction lens position detection unit 21 and A/D-converted by the second A/D conversion unit 33.
When the photographer performs a zoom operation via the operation unit 15, the lens drive amount calculation unit 39 also instructs the zoom lens drive mechanism 7 with a drive amount to drive the zoom lens 4.
(Sensor control unit 46)
The sensor control unit 46 sets the sensor rate of the image sensor 3 based on the measured luminance of the subject. For example, when the subject is dark, the exposure time required to obtain one image becomes long, so the sensor rate is set to a low value such as 15 fps. As the subject becomes brighter, the exposure time per image becomes shorter, so the sensor rate is set higher, to 30, 60, or 120 fps.
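The rate selection performed by the sensor control unit 46 can be sketched as follows. The luminance thresholds are illustrative assumptions; the embodiment only specifies the rates 15, 30, 60, and 120 fps and that darker scenes use lower rates.

```python
# Hypothetical sketch of the sensor control unit 46: pick the sensor rate
# from the measured subject luminance. The EV thresholds below are assumed
# values for illustration, not taken from the embodiment.

def select_sensor_rate(luminance_ev):
    """Darker scenes need a longer exposure per frame, hence a lower rate."""
    if luminance_ev < 4:
        return 15
    elif luminance_ev < 7:
        return 30
    elif luminance_ev < 10:
        return 60
    return 120
```

A lower sensor rate also means the motion vector information described below is updated less often, which is why the update timing is handled explicitly in the calculation.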
(Signal processing unit 40)
The signal processing unit 40 performs processing such as noise reduction and A/D conversion on the signal acquired by the image sensor 3 and creates recording image data to be recorded on the recording medium 13 as a still image. The signal processing unit 40 also continuously creates, in time series, processing image data with a smaller image size than the recording image data, based on the signal acquired by the image sensor 3.
The processing image data is used for moving image display (live view display) on the rear liquid crystal display 18, motion vector calculation in the motion vector calculation unit 41, various calculation processes such as autofocus and automatic exposure, and wireless transmission of images. The creation of this processing image data will be described later.
(Motion vector calculation unit 41)
The motion vector calculation unit 41 calculates motion vector information indicating the motion of the image (motion direction and motion amount) from a plurality of pieces of processing image data (first processing image data, described later) processed by the signal processing unit 40. The motion vector information is represented by signed magnitudes in the X-axis direction, the Y-axis direction, the Roll direction, and so on. The motion vector information further includes the detection delay time and the like.
Specifically, the motion vector calculation unit 41 detects the motion direction and motion amount of the image by comparing luminance information, such as changes in the positions of high-luminance regions, contained in two or more pieces of image data captured by the image sensor 3, and calculates the motion vector information. Besides luminance information, the motion vector information may be calculated by image pattern matching or the like.
The motion vector information may be detected from one image, calculated from two separated frames, or calculated from three images.
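The frame-to-frame comparison described above can be illustrated with a minimal block-matching sketch. The embodiment does not specify the matching algorithm at this level of detail; the sum-of-absolute-differences search below is one common way to realize such a comparison and is an assumption for illustration.

```python
# Minimal sketch of motion vector detection by comparing two frames, in the
# spirit of the luminance comparison performed by the motion vector
# calculation unit 41. The central block of the previous frame is matched
# against shifted positions in the current frame by minimising the sum of
# absolute differences (SAD).

def motion_vector(prev, curr, search=1):
    """Estimate (dx, dy) between two equal-size 2-D luminance arrays
    (lists of lists), searching shifts in [-search, +search]."""
    h, w = len(prev), len(prev[0])
    best = None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            sad = 0
            # compare only the central template, leaving a `search`
            # margin so shifted indices never leave the frame
            for y in range(search, h - search):
                for x in range(search, w - search):
                    sad += abs(curr[y + dy][x + dx] - prev[y][x])
            if best is None or sad < best[0]:
                best = (sad, dx, dy)
    return best[1], best[2]
```

An exhaustive search like this is only practical over a limited range, which matches the limited detection range (for example, up to 16 pixels) noted in the modifications.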
(Basic method of calculating the motion vector)
FIG. 3 is a diagram illustrating the calculation timing of the motion vector in the motion vector calculation unit 41.
The motion vector calculation unit 41 calculates motion vector information from the (n-1)th image data and the next, nth image data, captured at a time later than the time at which the (n-1)th image data was acquired.
For example, when image data is sent from the image sensor 3 at 30 fps, motion vector information is obtained once every 33 ms.
Here:
Time t1 is the time at which exposure of the (n-1)th image data starts.
Time t2 is the midpoint between the start and end of exposure of the (n-1)th image data.
Time t4 is the time at which exposure of the nth image data starts.
Time t5 is the midpoint between the start and end of exposure of the nth image data.
Time t6 is the time at which the motion vector information calculated from the (n-1)th and nth image data is obtained. It is reasonable to regard the generation time of this motion vector information as t3, the midpoint between t2 and t5. A detection delay of t6 - t3 therefore occurs between the time the motion vector information is obtained and the time the motion is considered to have occurred.
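The timing relations of FIG. 3 can be sketched numerically as follows, assuming 30 fps operation. The exposure and processing durations used are illustrative values, not figures from the embodiment.

```python
# Sketch of the FIG. 3 timing model: mid-exposure times t2 and t5, the
# nominal generation time t3 of the motion vector, and the detection
# delay t6 - t3. Times are in milliseconds.

FRAME = 1000.0 / 30.0  # frame period at 30 fps, about 33.3 ms

def motion_vector_delay(t1, exposure, processing):
    """Return (t3, delay) for the FIG. 3 model.

    t1: start of exposure of frame n-1
    exposure: exposure duration of each frame (assumed equal)
    processing: time from the end of frame n's exposure until the
                motion vector becomes available at t6
    """
    t2 = t1 + exposure / 2.0          # mid-exposure of frame n-1
    t4 = t1 + FRAME                   # start of exposure of frame n
    t5 = t4 + exposure / 2.0          # mid-exposure of frame n
    t3 = (t2 + t5) / 2.0              # nominal generation time of the vector
    t6 = t4 + exposure + processing   # time the vector is obtained
    return t3, t6 - t3                # detection delay t6 - t3
```

This delay is the quantity that the center bias removal step later compensates for by using bias information from about one delay earlier.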
(Reference value correction unit 50)
The reference value correction unit 50 includes a center bias removal unit 38, a reference value correction amount calculation unit 35, and a reference value subtraction/addition unit 42.
In the following description, correction of the reference value in the X direction is explained. The X-direction component of the motion vector information is referred to as motion vector information X. The correction of the reference value in the Y direction is performed in the same manner as in the X direction. In the present embodiment, the Roll-direction component of the motion vector information is not received; however, it may be received.
(Center bias removal unit 38)
The center bias removal unit 38 subtracts a bias correction amount from the motion vector information X. The bias correction amount X is calculated from the X component of the center bias amount calculated by the center bias calculation unit 37 (and subtracted from the target position of the blur correction lens 6).
(Reference value correction amount calculation unit 35)
Based on the motion vector information X from which the bias correction amount X has been removed by the center bias removal unit 38, the reference value correction amount calculation unit 35 calculates a correction amount for the reference value of the angular velocity sensor output around the Y axis (Yaw direction).
In this embodiment, only the sign of the motion vector information X is judged: when the motion vector information X is negative, the correction amount is a fixed positive amount; when it is positive, the correction amount is a fixed negative amount.
(Reference value subtraction/addition unit 42)
The reference value subtraction/addition unit 42 corrects the first reference value with the correction amount calculated by the reference value correction amount calculation unit 35 to obtain the corrected second reference value.
(Operation of the blur correction device 100)
FIG. 4 is a flowchart showing the flow of the operation of the blur correction device 100.
Step 001: After the power of the camera 1 is turned on, the blur correction device 100 starts the calculation for optical image stabilization. Depending on the camera, the blur correction device 100 may start this calculation when the release switch 17 is half-pressed or when blur correction is turned on by an instruction unit (not shown).
Step 002: The blur correction device 100 amplifies the output of the angular velocity sensor 12 with the amplification unit 31 and then A/D-converts it with the first A/D conversion unit 32.
Step 003: In the reference value calculation unit 34, the blur correction device 100 calculates the computational reference value of the angular velocity (first reference value, the value corresponding to zero deg/s) based on the A/D-converted output of the angular velocity sensor 12. Because the reference value of the angular velocity changes with temperature characteristics, drift immediately after startup, and the like, the stationary output of the angular velocity sensor 12 measured at the factory, for example, cannot be used as the reference value.
Known methods of calculating the reference value include calculating a moving average over a predetermined time and calculating by LPF processing. This embodiment uses reference value calculation by LPF processing.
FIG. 5 is a diagram showing the reference value calculation unit 34 (LPF). The cutoff frequency fc of the LPF 34 is generally set to a low frequency of about 0.1 Hz. This is because frequencies of about 1 to 10 Hz are dominant in camera shake. With fc at 0.1 Hz, the effect on the camera shake component is small, and good blur correction can be performed.
However, during actual shooting, low-frequency movements such as fine adjustment of the composition (at a level that cannot be detected as panning) are added, so the reference value calculation result may contain an error. Moreover, because fc is low (the time constant is large), once the error becomes large, it takes time to converge to the true value. This embodiment corrects this reference value error.
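The LPF-based reference value calculation described above can be sketched as a first-order IIR low-pass filter with fc = 0.1 Hz. The sampling rate and the filter order are assumptions for illustration; the embodiment only specifies LPF processing with a cutoff of about 0.1 Hz.

```python
# Sketch of reference value calculation by LPF processing: a first-order
# IIR low-pass filter with fc = 0.1 Hz applied to the gyro output, one
# sample at a time. The 1 kHz sampling rate is an assumed value.

import math

def make_reference_lpf(fc_hz=0.1, fs_hz=1000.0):
    """Return a function that filters one gyro sample and returns the
    running reference value estimate."""
    # standard first-order IIR coefficient for cutoff fc at sample rate fs
    alpha = 1.0 - math.exp(-2.0 * math.pi * fc_hz / fs_hz)
    state = {"y": 0.0}

    def step(omega):
        state["y"] += alpha * (omega - state["y"])
        return state["y"]

    return step
```

Feeding a constant gyro offset into this filter converges to that offset only slowly, because the time constant at fc = 0.1 Hz is on the order of seconds; this slow convergence is exactly the error that the motion-vector-based correction in the following steps addresses.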
Step 004: If the motion vector information has been updated (YES in S004), the blur correction device 100 proceeds to S005; if not (NO in S004), it proceeds to S006.
Step 005: If the motion vector information has been updated, the blur correction device 100 corrects the first reference value calculated in S003 in the reference value correction unit 50 and calculates the second reference value. The reference value correction step is described later.
Step 006: In the target position calculation unit 36, the blur correction device 100 calculates the target position of the blur correction lens 6 based on the first reference value obtained in S003 or the second reference value obtained in S005 and the output of the angular velocity sensor 12. At this time, the target position calculation unit 36 takes into account the focal length, the subject distance, the shooting magnification, and the blur correction lens characteristic information.
Step 007: The blur correction device 100 performs center bias processing to prevent the blur correction lens 6 from reaching the end of its movable range.
There are various methods of center bias processing, such as setting the bias amount according to the target position information, HPF processing, and incomplete integration processing (in S006); any method may be used here.
Step 008: In the lens drive amount calculation unit 39, the blur correction device 100 calculates the lens drive amount from the difference between the target position information, which includes the center bias component, and the blur correction lens position information.
Step 009: The blur correction device 100 drives the blur correction lens 6 to the target position via the blur correction lens drive mechanism 9 and returns to S002.
(Reference value correction step)
FIG. 6 is a detailed flowchart of the reference value correction step 005 in FIG. 4.
Step 101: The blur correction device 100 sums all the calculated motion vector information and proceeds to step 102.
Step 102: In the center bias removal unit 38, the blur correction device 100 converts the center bias component calculated in step 007 into the same scale as the motion vector information and proceeds to step 103.
The conversion is calculated based on the focal length, the subject distance, the shooting magnification, and the resolution information of the motion vector information:

   Bias_MV = Bias_θ * f * (1 + β) / MV_pitch

   Bias_MV: center bias component (same scale as the motion vector information)
   Bias_θ: center bias component (angle)
   f: focal length
   β: shooting magnification
   MV_pitch: motion vector pitch size
Because the motion vector is obtained as the difference between multiple captured frames, a delay occurs before it is detected. Therefore, it is preferable that the center bias component also be given a delay time equivalent to that of the motion vector information. For example, at 30 fps with a delay of three frames, the delay is about 100 ms. By using the bias information from 100 ms earlier, the center bias component contained in the motion vector information can be calculated more accurately.
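The scale conversion of step 102, combined with the delayed bias lookup described above, can be sketched as follows. The buffer length and the numeric values in the usage are illustrative assumptions; the formula itself is the one given in step 102.

```python
# Sketch of step 102: convert the center bias component to the motion
# vector scale with Bias_MV = Bias_theta * f * (1 + beta) / MV_pitch,
# using the bias value from a few frames earlier so its delay matches
# the motion vector detection delay (about 100 ms at 30 fps with a
# three-frame delay).

from collections import deque

class CenterBiasConverter:
    def __init__(self, delay_samples):
        # ring buffer of past bias angles; with a 33 ms update period,
        # delay_samples = 3 corresponds to about 100 ms
        self.history = deque([0.0] * delay_samples, maxlen=delay_samples + 1)

    def convert(self, bias_theta, f, beta, mv_pitch):
        """Push the newest bias angle and return the delayed bias
        converted to motion vector units."""
        self.history.append(bias_theta)
        delayed = self.history[0]  # oldest value, ~100 ms ago
        return delayed * f * (1.0 + beta) / mv_pitch
```

Until the buffer fills, the delayed bias is zero; afterwards each call returns the bias from delay_samples updates earlier, scaled to pixels.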
 Step 103: The blur correction device 100, in the center bias removing unit 38, subtracts the center bias component converted in step 102 from the motion vector information, and proceeds to step 104. In this way, the motion vector information attributable to the reference value error can be obtained.
 Step 104: The blur correction device 100 acquires MV_diff, the difference between the latest motion vector information (n) and the motion vector information (n−1) one frame earlier, and proceeds to step 105.
 Step 105: In the blur correction device 100, the reference value correction amount calculation unit 35 sets the amount by which the reference value is corrected, based on MV_diff. The correction amount is set according to the following rule, and the process proceeds to step 106.
 
   MV_diff > 0: ω0_comp = −ω0_comp_def
   MV_diff < 0: ω0_comp = +ω0_comp_def
   MV_diff = 0: ω0_comp = 0
 
    ω0_comp: reference value correction amount
    ω0_comp_def: reference value correction constant
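The sign rule above can be sketched as follows; the function name and the default step value are illustrative assumptions.

```python
def reference_correction(mv_diff, step=1.0):
    """Return the reference value correction amount omega0_comp from the sign
    of MV_diff: a positive difference pulls the reference value down by the
    fixed constant, a negative one pushes it up, and zero leaves it unchanged.
    """
    if mv_diff > 0:
        return -step
    if mv_diff < 0:
        return +step
    return 0.0
```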
 Step 106: The blur correction device 100, in the reference value subtraction/addition unit 42, subtracts the ω0_comp calculated in step 105 from the first reference value calculated in S003 (FIG. 4) to obtain the corrected second reference value. As described above, the second reference value is obtained in S005 of FIG. 3, but the blur correction process repeats a loop that, as shown in the flowchart of FIG. 3, returns to S002 after proceeding to S009. The second reference value corrected in S005 is therefore updated whenever the motion vector information is updated.
 FIG. 7(a) is a graph showing the second reference value in the Yaw direction. The dotted line in the figure indicates the first reference value without the correction according to the present embodiment, and the solid line indicates the second reference value with the correction according to the present embodiment.
 FIG. 7(b) is a graph showing the direction of the motion vector information in the X direction.
 For example, when the calculated motion vector information is in the plus direction, as at time t1 in FIG. 7(b), the first reference value is corrected toward minus, as shown by the solid line in FIG. 7(a).
 Thereafter, until time t3, the second reference value changes according to the value calculated by the reference value calculation unit 34.
 At time t3, when the calculated motion vector information is confirmed to be in the plus direction, the first reference value is again corrected toward minus. In the present embodiment, the correction amount at this time is constant; that is, the correction amount at time t1 and the correction amount at time t3 are the same.
 Thereafter as well, between the times at which the motion vector information is confirmed, the reference value changes according to the value calculated by the reference value calculation unit 34 and becomes the corrected second reference value.
 Whenever the motion vector information is confirmed in the plus direction, the first reference value is corrected toward minus by a fixed amount.
 Conversely, when the motion vector information is confirmed in the minus direction, as at times t22 and t25 in the figure, the first reference value is corrected toward plus by a fixed amount.
 Next, the creation of the processing image data in the signal processing unit 40 of the embodiment is described in detail.
 FIG. 8 is a block diagram showing details of the creation of the processing image data in the signal processing unit 40. The signal processing unit 40 has a preprocessing unit 40A and a size adjustment unit 40B.
(Preprocessing unit 40A)
 The preprocessing unit 40A is, for example, an AFE (Analog Front End) circuit; it performs processing such as noise reduction and A/D conversion on the image signal output from the imaging sensor 3, converts the image signal into digital imaging data, and inputs the data to the size adjustment unit 40B.
(Size adjustment unit 40B)
 The size adjustment unit 40B converts the resolution of the imaging data input from the preprocessing unit 40A to adjust its image size (number of pixels). For example, when the release switch 17 is fully pressed, the image size of the imaging data obtained from the preprocessing unit 40A is not reduced, or is reduced only slightly, and recording image data B0 that is large compared with the processing image data described later, for example full size, is created and recorded on the recording medium 13.
 When the release switch is not fully pressed, on the other hand, for example when a through image is displayed on the rear liquid crystal 18, when the release switch 17 is half-pressed and AF (autofocus) is performed, when AE (automatic exposure) is performed, when a transmission image is created, or when a motion vector is calculated, the size adjustment unit 40B reduces the image to a processing image size suited to each of these processes.
 Here, in the embodiment, the image size for processing is reduced in two stages. First, first processing image data B1, smaller than the recording image data B0, is created, and it is further reduced to create second processing image data B2. The relationship between the sizes of the recording image data B0, the first processing image data B1, and the second processing image data B2 is as follows.
 
    B0 ≧ B1 ≧ B2
 As an example, when the imaging size A of the imaging sensor 3 is 4608 × 3456 pixels, the size of the recording image data B0 is, for example, 4608 × 2592 pixels.
 The second processing image data B2 is, for example, of VGA-class size, with the resolution reduced (pixels thinned out) to 640 × 360 pixels, and is used for the through image, AF, AE, and transmission.
 The first processing image data B1 has a size between those of the recording image data B0 and the second processing image data B2, and, as described next, varies with the zoom position.
 The size adjustment unit 40B is connected to the lens drive amount calculation unit 39 and obtains zoom position information from it. However, this is not limiting; when a lens position detection unit is provided, the zoom position may be obtained from the lens position detection unit.
 The zoom position is divided into, for example, three regions: a tele region extending over a fixed range from the tele (telephoto) end of the lens barrel 1B, a wide region extending over a fixed range from the wide (wide-angle) end, and a middle region between the tele region and the wide region.
 The size adjustment unit 40B is also connected to the storage unit 14.
 The storage unit 14 stores the size of the first processing image data B1 corresponding to each zoom position. An example of the size of the first processing image data B1 for each zoom position region is as follows.
 
  Tele region:   640 × 360 pixels
  Middle region: 1280 × 720 pixels
  Wide region:   1920 × 1080 pixels
 
 The motion vector detected on the image plane when the reference value of the angular velocity sensor 12 deviates from the true value is smaller the shorter the focal length of the imaging optical system, and larger the longer the focal length. Therefore, for motion vector detection, the wide region requires a high-resolution image but only a small detection range, whereas the tele region does not require a high-resolution image but does require a large detection range.
 For example, when the zoom position is in the tele region, the image size of the first processing image data B1 is 640 × 360 pixels, equal to that of the second processing image data B2. As the zoom position moves from the tele region toward the wide region, the image size of B1 becomes larger, that is, the resolution becomes higher.
 In this way, the image data used by the motion vector calculation unit 41 is created by the signal processing unit 40 (size adjustment unit 40B) at different image sizes corresponding to the zoom position.
 The size adjustment unit 40B reads, from the sizes of the first processing image data B1 stored in the storage unit 14, the image size corresponding to the zoom position obtained from the lens drive amount calculation unit 39, and creates the first processing image data B1 at that image size.
(Wide region)
 The closer the zoom position is to the wide end, the smaller the influence of camera shake and the smaller the detected motion vector, so when the image size is small and the resolution low, the motion vector may not be detectable.
 According to the present embodiment, however, the closer the zoom position is to the wide end, the larger the image size of the first processing image data B1 created by the size adjustment unit 40B. The resolution is then higher, so the detection resolution (detection accuracy) of the motion vector improves, and the motion vector can be detected accurately.
(Tele region)
 The range over which a motion vector is detected is limited; the movement of a feature point between two sets of image data is, for example, up to 16 pixels. In the tele region, the motion vector is larger than in the wide region. If the image size were large and the resolution high, as in the wide region, the movement of a feature point for motion vector detection would be more likely to exceed the 16-pixel detection limit, making the motion vector undetectable.
 Therefore, in the embodiment, the image size in the tele region is made smaller than in the wide region. This prevents the motion vector from becoming undetectable in the tele region. In addition, because the image data is smaller, the processing load and processing time can be reduced.
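The detection-limit reasoning above can be sketched as follows; the function name is an illustrative assumption, and the 16-pixel limit is the example value from the description.

```python
def exceeds_detection_limit(shift_px, limit_px=16):
    """Check whether a feature point's inter-frame shift is beyond the
    motion vector search range (the description cites 16 pixels as an
    example); beyond it the motion vector cannot be detected."""
    return abs(shift_px) > limit_px

# Halving the image size halves the pixel shift, which is why the tele
# region uses a smaller B1: a 20 px shift at full size becomes 10 px.
print(exceeds_detection_limit(20), exceeds_detection_limit(20 // 2))
```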
(Modifications)
 The present invention is not limited to the embodiment described above; various modifications and changes such as those below are possible, and they are also within the scope of the present invention.
(1) In the embodiment, the first processing image data B1 for motion vector calculation is created first, and from it the second processing image data B2 used for other processing is created. However, this is not limiting; the second processing image data B2 may be created directly from the full-size image without going through the first processing image data B1.
(2) The size adjustment unit 40B may reduce the size of the first processing image data B1 when the angular velocity is large, that is, when the amount of camera shake is large.
 For example, when camera shake is large, the motion vector becomes large. If the first processing image data B1 used for the motion vector calculation is large, the motion vector may fall outside the detection limit. The size of the first processing image data B1 used for the motion vector calculation is therefore reduced, which prevents the motion vector from becoming uncalculable.
(3) When subject blur can be detected, for example from captured image data, the size of the first processing image data B1 may be changed according to the magnitude of the subject blur.
(4) In the embodiment, the correction amount of the reference value is a fixed plus or minus amount, but its magnitude may be varied according to the zoom position. Further, an autofocus operation may be performed by half-pressing the release switch 17, and the motion vector calculation may stop for a certain period. In such a case, the correction amount applied after the motion vector calculation resumes following such a stop may be increased.
(5) In the present embodiment, the angular velocity sensor 12 is provided in the lens barrel 1B, but this is not limiting; the angular velocity sensor 12 may be provided in the camera body 1A.
(6) Instead of an angular velocity sensor, an acceleration sensor may be provided in the camera body 1A or the lens barrel 1B.
(7) In the above embodiment, control uses a center bias, a centripetal force for moving the blur correction lens toward the center of its movable range based on the target position of the blur correction lens calculated by the target position calculation unit; however, control without the center bias may be performed. In that case, the center bias calculation unit and the center bias removing unit are omitted.
 1: camera, 1A: camera body, 1B: lens barrel, 2: CPU, 3: imaging sensor, 4: zoom lens, 5: focus lens, 6: blur correction lens, 7: zoom lens driving mechanism, 8: focus lens driving mechanism, 9: blur correction lens driving mechanism, 10: aperture, 11: driving mechanism, 12: angular velocity sensor, 13: recording medium, 14: storage unit, 15: operation unit, 17: release switch, 18: rear liquid crystal, 21: blur correction lens position detection unit, 31: amplification unit, 34: reference value calculation unit, 35: reference value correction amount calculation unit, 36: target position calculation unit, 37: center bias calculation unit, 38: center bias removing unit, 39: lens drive amount calculation unit, 40: signal processing unit, 40A: preprocessing unit, 40B: size adjustment unit, 41: motion vector calculation unit, 42: reference value subtraction/addition unit, 43: subtraction unit, 46: sensor control unit, 50: reference value correction unit, 100: blur correction device

Claims (7)

  1.  An imaging device comprising:
     an imaging element that captures an image of a subject formed by an optical system and outputs a signal; and
     a motion vector calculation unit that calculates information on a motion vector of the subject based on image data that is generated based on the signal and has a resolution corresponding to a focal length of the optical system.
  2.  The imaging device according to claim 1, wherein
     the motion vector calculation unit calculates the information on the motion vector based on image data whose resolution is higher the shorter the focal length of the optical system.
  3.  The imaging device according to claim 1 or 2, further comprising an image generation unit that generates first image data based on the signal output from the imaging element and generates, from the first image data, second image data having a resolution corresponding to the focal length of the optical system,
     wherein the motion vector calculation unit calculates the information on the motion vector based on the second image data.
  4.  The imaging device according to claim 1 or 2, further comprising an image generation unit that generates, based on the signal output from the imaging element, third image data having a resolution corresponding to the focal length of the optical system,
     wherein the motion vector calculation unit calculates the information on the motion vector based on the third image data.
  5.  The imaging device according to any one of claims 1 to 4, further comprising:
     a sensor that detects shake of the imaging device and outputs a shake signal;
     a correction element that corrects blur of the image of the subject based on the shake signal; and
     a movement amount calculation unit that calculates a movement amount of the correction element using the information on the motion vector and the shake signal.
  6.  The imaging device according to claim 5, further comprising:
     a reference value calculation unit that calculates, based on the shake signal, a first reference value serving as a reference for the shake signal; and
     a reference value correction unit that corrects the first reference value based on the information on the motion vector to obtain a second reference value,
     wherein the movement amount calculation unit calculates the movement amount of the correction element based on the shake signal and the second reference value.
  7.  The imaging device according to claim 5 or 6, wherein
     the motion vector calculation unit calculates the information on the motion vector from image data whose resolution is lower the larger the shake signal.
PCT/JP2019/025371 2018-07-09 2019-06-26 Imaging device WO2020012960A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2018-129972 2018-07-09
JP2018129972 2018-07-09

Publications (1)

Publication Number Publication Date
WO2020012960A1 true WO2020012960A1 (en) 2020-01-16

Family

ID=69142370

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/025371 WO2020012960A1 (en) 2018-07-09 2019-06-26 Imaging device

Country Status (2)

Country Link
TW (1) TW202017355A (en)
WO (1) WO2020012960A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH10145662A (en) * 1996-11-15 1998-05-29 Canon Inc Image pickup device, storage medium, lens unit and shake corrector
JP2002333644A (en) * 2001-05-10 2002-11-22 Canon Inc Image blur detector
JP2005260663A (en) * 2004-03-12 2005-09-22 Casio Comput Co Ltd Digital camera and program
JP2018078582A (en) * 2017-11-30 2018-05-17 キヤノン株式会社 Image shake correction device, control method, program, and storage medium


Also Published As

Publication number Publication date
TW202017355A (en) 2020-05-01

Similar Documents

Publication Publication Date Title
JP6472176B2 (en) Imaging apparatus, image shake correction apparatus, image pickup apparatus control method, and image shake correction method
US10827124B2 (en) Shake correction device, imaging apparatus, and shake correction method
JP6543946B2 (en) Shake correction device, camera and electronic device
US10659692B2 (en) Image blur correction device, imaging apparatus, control method of imaging apparatus and non-transitory storage medium
JP6171575B2 (en) Blur correction device and optical apparatus
JP6268981B2 (en) Blur correction device, interchangeable lens and camera
JP6171576B2 (en) Blur correction device and optical apparatus
WO2020012960A1 (en) Imaging device
JP2011013555A (en) Camera-shake correction device and optical instrument
JP2019091063A (en) Shake correction device, electronic apparatus and camera
WO2020012961A1 (en) Imaging device
JP2020137011A (en) Imaging apparatus, control method of the same, program, and storage medium
JP6590018B2 (en) Blur correction device and camera
WO2019203147A1 (en) Imaging device
JP6590013B2 (en) Interchangeable lens and imaging device
JP6468343B2 (en) Interchangeable lenses and optical equipment
JP6717396B2 (en) Image stabilization apparatus and image pickup apparatus
JP6485499B2 (en) Blur correction device and optical apparatus
JP6414285B2 (en) Blur correction device and optical apparatus
JP6610722B2 (en) Blur correction device and optical apparatus
JP6299188B2 (en) Blur correction device, lens barrel and camera
JP7172214B2 (en) Interchangeable lenses, camera bodies and camera systems
JP6318502B2 (en) Blur correction device and optical apparatus
JP2020178372A (en) interchangeable lens
JP2015166771A (en) Imaging apparatus and method for controlling the same

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19834961

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19834961

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP