CN109391767B - Image pickup apparatus and method executed therein


Info

Publication number
CN109391767B
Authority
CN
China
Prior art keywords
unit
image pickup
image
feature point
pickup apparatus
Prior art date
Legal status
Active
Application number
CN201810879400.2A
Other languages
Chinese (zh)
Other versions
CN109391767A (en)
Inventor
Kenji Takeuchi (竹内谦司)
Current Assignee
Canon Inc
Original Assignee
Canon Inc
Priority date
Filing date
Publication date
Application filed by Canon Inc filed Critical Canon Inc
Publication of CN109391767A publication Critical patent/CN109391767A/en
Application granted granted Critical
Publication of CN109391767B publication Critical patent/CN109391767B/en

Classifications

    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/64Imaging systems using optical elements for stabilisation of the lateral and angular position of the image
    • G02B27/646Imaging systems using optical elements for stabilisation of the lateral and angular position of the image compensating for small deviations, e.g. due to vibration or shake
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/68Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • H04N23/682Vibration or motion blur correction
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/68Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • H04N23/682Vibration or motion blur correction
    • H04N23/683Vibration or motion blur correction performed by a processor, e.g. controlling the readout of an image memory
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/0025Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00 for optical correction, e.g. distorsion, aberration
    • G02B27/0037Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00 for optical correction, e.g. distorsion, aberration with diffracting elements
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51Motion estimation or motion compensation
    • H04N19/513Processing of motion vectors
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/68Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • H04N23/681Motion detection
    • H04N23/6811Motion detection based on the image signal
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/68Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • H04N23/681Motion detection
    • H04N23/6812Motion detection based on additional sensors, e.g. acceleration sensors
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/68Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • H04N23/682Vibration or motion blur correction
    • H04N23/685Vibration or motion blur correction performed by mechanical compensation
    • H04N23/687Vibration or motion blur correction performed by mechanical compensation by shifting the lens or sensor position
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/14Picture signal circuitry for video frequency region
    • H04N5/144Movement detection
    • H04N5/145Movement estimation

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • Studio Devices (AREA)
  • Adjustment Of Camera Lenses (AREA)

Abstract

The invention relates to an image pickup apparatus and a method executed therein. The image pickup apparatus detects shake applied to it with a vibration sensor. A motion vector detection unit detects a motion vector of an image in the image signal produced by an image pickup unit. A feature point tracking unit calculates, based on the motion vector, coordinate values of an object on the imaging screen as they change over time. A feature coordinate mapping and position and orientation estimation unit estimates the position and orientation of the image pickup apparatus and a positional relationship, including depth, between the object and the image pickup apparatus, based on the output of the vibration sensor and the coordinate values of the object. A calculation unit calculates a control amount for image blur correction using the feature points of the main object, the feature coordinate map, and the position and orientation information of the image pickup apparatus. A correction lens is driven according to the output of a target position calculation unit, performing the shake correction operation of the image pickup apparatus.

Description

Image pickup apparatus and method executed therein
Technical Field
The present invention relates to an image blur correction technique for optical apparatuses such as video cameras, digital still cameras, and their interchangeable lenses.
Background
Two functions are available for correcting image blur of an object caused by hand shake or the like of a user holding the main body of an image pickup apparatus: optical image blur correction processing and electronic image blur correction processing. In optical image blur correction processing, vibration of the main body is detected by an angular velocity sensor or the like, and a correction lens provided in the imaging optical system is moved in accordance with the detection result. By changing the direction of the optical axis of the imaging optical system so that the image formed on the light-receiving surface of the imaging element moves, the image blur can be corrected. In electronic image blur correction processing, image blur is corrected by performing image processing on the captured image.
A photographer may wish to capture images while moving along with an object (a moving object or a stationary object) and keeping the object within the imaging angle of view. An operation of tracking a selected subject so that the detected position of the subject image stays close to a specific position within the imaging screen is referred to as a "subject tracking operation". The intentional camera movement performed by the photographer at this time is called "camera work". For example, a user moving the image pickup apparatus so that the detected object position reaches (or approaches) a specific position within the imaging screen of the apparatus is performing camera work. The terms "subject tracking operation" and "camera work" are used throughout the present application. The specific position may be, for example, the center of the imaging screen or a position designated by the photographer. There are also methods of assisting the subject tracking operation with an image blur correction unit. Japanese Patent Laid-Open No. 2010-93362 discloses an object tracking technique that drives an image blur correction unit as follows: the screen is divided into blocks, a subject such as a face is detected by template matching, and the movement of the subject is tracked so as to keep it within the screen.
On the other hand, to correct image blur due to hand shake or the like, it is necessary to detect changes in the position and orientation of the image pickup apparatus. As a self-position estimation method for detecting the position and orientation of an image pickup apparatus, there are position and orientation estimation techniques (visual-inertial sensor fusion) that combine structure from motion (SfM) with an inertial sensor. Methods are known that apply these techniques to estimate the 3D positions of objects existing in real space together with the position and orientation of the image pickup apparatus.
In the method disclosed in Japanese Patent Laid-Open No. 2010-93362, the shake correction operation is performed based on changes in the position and orientation of the image pickup apparatus in which camera work intentionally performed by the photographer for subject tracking coexists with position and orientation changes due to hand shake. A problem with such shake correction operations is that they may cancel out changes in the position and/or orientation of the image pickup apparatus caused by the camera work. This is undesirable because the photographer's camera work is precisely the intended movement of the image pickup apparatus for following the subject. As a result, such a shake correction operation may cause unnatural changes of the angle of view in the captured image.
Disclosure of Invention
The present invention suppresses unnatural changes of the angle of view caused by image blur correction during image capturing that involves camera work.
An apparatus according to an embodiment of the present invention is an image pickup apparatus for acquiring an image signal with an image pickup unit, the image pickup apparatus including: a first acquisition unit configured to acquire first information indicating a shake of the image pickup apparatus detected by a shake detection unit; a second acquisition unit configured to acquire second information indicating a movement of an object detected in an image signal of the image pickup unit; a tracking unit configured to calculate coordinate values of the object on an imaging screen using the second information and track a feature point; an estimation unit configured to estimate a position and/or a posture of the image pickup apparatus and a positional relationship including a depth between the object and the image pickup apparatus, based on the first information and a coordinate value of the object; a calculation unit configured to calculate a control amount of shake correction using the estimated value of the position or orientation of the image pickup apparatus acquired from the estimation unit, the positional relationship acquired from the estimation unit, the first information, and the calculated coordinate value of the object; and a correction unit configured to correct an image blur generated due to a shake of the image pickup apparatus based on the control amount calculated by the calculation unit.
A method performed in an image pickup apparatus for acquiring an image signal with an image pickup unit includes: acquiring first information indicating a shake of the image pickup apparatus detected by a shake detection unit, and second information indicating a movement of an object detected in an image signal of the image pickup unit; calculating coordinate values of the object on an imaging screen using the second information, and tracking a feature point; estimating a position and/or orientation of the image pickup apparatus and a positional relationship, including a depth, between the object and the image pickup apparatus from the first information and the coordinate values of the object; calculating a control amount of shake correction using the estimated position or orientation of the image pickup apparatus, the positional relationship, the first information, and the calculated coordinate values of the object; and correcting image blur generated due to shake of the image pickup apparatus based on the calculated control amount.
Other features, advantages and aspects of the present invention will become apparent from the following description of exemplary embodiments (with reference to the accompanying drawings). It is to be understood that any feature described herein in relation to a particular embodiment or group of embodiments may be combined with features of one or more other embodiments, in the absence of any limitation other than the limitation imposed by the broadest aspect of the invention as defined above. In particular, features from different embodiments may be combined, if desired, or where it is advantageous to combine elements or features from various embodiments into one embodiment.
Drawings
Fig. 1 is a diagram showing a configuration example of an image pickup apparatus according to an embodiment of the present invention.
Fig. 2 is a diagram showing a structural example of an image blur correction device according to a first embodiment of the present invention.
Fig. 3A and 3B are diagrams showing the structures of the target position calculation unit and the main object feedback amount calculation unit.
Fig. 4 is a flowchart of the target position calculation process according to the first embodiment.
Fig. 5 is a flowchart of the position and orientation estimation process according to the first embodiment.
Fig. 6 is a flowchart of the main object feedback amount calculation process according to the first embodiment.
Fig. 7 is a diagram showing a relationship between the coordinate position of an object in world coordinates and the coordinate position in camera coordinates.
Fig. 8 is a diagram showing a perspective projection model in which a virtual imaging surface is provided at a position in front of the lens.
Fig. 9A and 9B are diagrams illustrating a relationship of position and orientation between a main object, a background object close to the main object, and an image capturing apparatus.
Fig. 10A and 10B are diagrams illustrating a relationship between movements of feature points of a main object and a background in an image capturing operation.
Fig. 11 is a diagram showing a structural example of an image blur correction device according to a second embodiment of the present invention.
DETAILED DESCRIPTION OF EMBODIMENTS OF THE INVENTION
Embodiments of the present invention will be described with reference to the accompanying drawings. The embodiments exemplify an image blur correction device configured to perform image blur correction on a captured image. The image blur correction device drives and controls a movable member such as an image blur correction optical system. The image blur correction device and/or the image blur correction optical system may be mounted in an image pickup apparatus such as a video camera, a digital camera, or a silver halide still camera, or in an optical device such as an observation device (for example, binoculars, a telescope, or a monocular). The image blur correction device may also be mounted in an optical apparatus such as an interchangeable lens of a digital single-lens reflex camera. An operation of performing image blur correction using a shake detection signal of the apparatus is hereinafter referred to as an "image blur correction operation".
First embodiment
Fig. 1 is a block diagram showing a configuration example of an image pickup apparatus according to the present embodiment. The image capturing apparatus 100 is, for example, a digital still camera, and has a moving image shooting function.
The image capturing apparatus 100 includes a zoom unit 101. The zoom unit 101 constitutes an imaging optical system, and includes a zoom lens with which an imaging magnification is changed. The zoom driving unit 102 drives the zoom unit 101 according to a control signal from the control unit 119. An image blur correction lens (hereinafter referred to as a correction lens) 103 is a movable optical member that can be moved to correct an image blur. The correction lens 103 is movable in a direction perpendicular to the optical axis of the image pickup optical system. The image blur correction lens driving unit 104 controls driving of the correction lens 103 according to a control signal from the control unit 119. The stop and shutter unit 105 includes a mechanical shutter having a stop function. The stop and shutter driving unit 106 drives the stop and shutter unit 105 according to a control signal from the control unit 119. The focus lens 107 is a movable lens for focus adjustment, and its position can be changed along the optical axis of the image pickup optical system. The focus drive unit 108 drives the focus lens 107 according to a control signal from the control unit 119.
The imaging optical system forms an image on the imaging unit 109. An image pickup element (such as a CCD image sensor and a CMOS image sensor) of the image pickup unit 109 converts an optical image into an electrical signal representing the image. The electrical representation of the image may be formed by pixels. The term CCD is an abbreviation for "charge coupled device". The term CMOS is an abbreviation for "complementary metal oxide semiconductor". The image pickup signal processing unit 110 performs analog/digital (a/D) conversion, correlated double sampling, gamma correction, white balance correction, color interpolation processing, and the like on the electric signal output from the image pickup unit 109, and converts the electric signal into a video signal.
The video signal processing unit 111 processes the video signal acquired from the image pickup signal processing unit 110 according to various uses. Specifically, the video signal processing unit 111 generates a video signal for display, and performs encoding processing and data file conversion processing for recording. The display unit 112 performs image display as necessary according to the video signal for display output from the video signal processing unit 111. The power supply unit 115 supplies power to each unit of the image pickup apparatus 100 according to the use. The external input and output terminal unit 116 is used to input and output communication signals and video signals to and from an external device. The operation unit 117 includes operation members such as buttons and switches for the user to give instructions to the image pickup apparatus 100. For example, the operation unit 117 may include a release switch configured to sequentially turn on a first switch (denoted as SW1) and a second switch (denoted as SW2) according to the pushed amount of the release button. In addition, the operation unit 117 includes switches for setting various modes. The storage unit 118 stores various types of data including video information and the like.
The control unit 119 includes, for example, a CPU, a ROM, and a RAM. CPU is an abbreviation of "central processing unit". ROM is an abbreviation of "read only memory". RAM is an abbreviation of "random access memory". The CPU loads a control program stored in the ROM into the RAM and executes it, thereby controlling the respective units of the image capturing apparatus 100 and realizing the various operations described below. When SW1 is turned on by a half-press of the release button included in the operation unit 117, the control unit 119 calculates an autofocus (AF) evaluation value based on the video signal for display output from the video signal processing unit 111 to the display unit 112. The control unit 119 controls the focus drive unit 108 based on the AF evaluation value, thereby performing automatic focus detection and focus adjustment control. The control unit 119 also performs automatic exposure (AE) processing to determine an aperture value and a shutter speed that yield an appropriate exposure amount, based on the luminance information of the video signal and a predetermined program diagram. When SW2 is turned on by a full press of the release button, the control unit 119 performs image pickup processing using the determined aperture value and shutter speed, and controls the respective processing units so that the image data obtained by the image pickup unit 109 is stored in the storage unit 118.
The operation unit 117 includes an operation switch for selecting an image blur correction (stabilization) mode. In a case where the image blur correction mode is selected by the operation of the operation switch, the control unit 119 instructs the image blur correction lens driving unit 104 to perform an image blur correction operation. The image blur correction lens driving unit 104 performs an image blur correction operation according to a control instruction of the control unit 119 until an instruction to turn off the image blur correction is issued. In addition, the operation unit 117 includes an image pickup mode selection switch that can select a still image pickup mode or a moving image pickup mode. Processing for selecting an image capturing mode is performed by a user operation of an image capturing mode selection switch, and the control unit 119 changes the operation condition of the image blur correction lens driving unit 104. The image blur correction lens driving unit 104 constitutes the image blur correction device of the present embodiment. In addition, the operation unit 117 includes a reproduction mode selection switch for selecting a reproduction mode. In the case where the user selects the reproduction mode by the operation of the reproduction mode selection switch, the control unit 119 performs control so that the image blur correction operation is stopped. In addition, the operation unit 117 includes a magnification change switch for instructing a zoom magnification change. In the case where a zoom magnification change is instructed according to a user operation of the magnification change switch, the zoom drive unit 102 that has received an instruction from the control unit 119 drives the zoom unit 101 and moves the zoom lens to the instructed position.
Fig. 2 is a diagram showing a configuration example of the image blur correction device of the present embodiment. Processing for calculating the driving direction and the driving amount of the correction lens 103 and for position control will be described below. The image blur correction device of the present embodiment includes a first vibration sensor 201 and a second vibration sensor 203. The first vibration sensor 201 is, for example, an angular velocity sensor. The first vibration sensor 201 detects vibration components (angular velocities) in the vertical direction (pitch direction), the horizontal direction (yaw direction), and the rotational direction around the optical axis (roll direction) of the image pickup apparatus 100 in the normal posture (the posture in which the longitudinal direction of the image pickup screen substantially coincides with the horizontal direction). The first vibration sensor 201 outputs a detection signal to the A/D converter 202. The second vibration sensor 203 is, for example, an acceleration sensor. The second vibration sensor 203 detects acceleration components in the vertical direction, the horizontal direction, and the optical axis direction of the image pickup apparatus 100 in the normal posture, and outputs a detection signal to the A/D converter 204. The A/D converters 202 and 204 acquire the detection signals from the first and second vibration sensors and convert the analog values into digital values. Although the present embodiment exemplifies a vibration detection unit that includes both the first vibration sensor and the second vibration sensor, the present invention may also be applied to embodiments that include only one of them.
The position detection sensor 212 detects the position of the correction lens 103 and outputs a position detection signal to the A/D converter 218. The A/D converter 218 acquires the detection signal from the position detection sensor 212 and converts the analog value into a digital value.
The target position calculation unit 213 calculates a control target position of the correction lens 103 based on outputs from the A/D converter 202 and a calculation unit (main object feedback amount calculation unit) 219 described below. The target position calculation unit 213 outputs correction position control signals for the correction lens 103 in the pitch direction and the yaw direction to the subtractor 214. The subtractor 214 subtracts the position detection signal, received from the position detection sensor 212 via the A/D converter 218, from the correction position control signal received from the target position calculation unit 213. The output of the subtractor 214 is provided to the control filter 215. The control filter 215 thus receives the deviation between the correction position control signal from the target position calculation unit 213 and the position information of the correction lens 103 from the position detection sensor 212, and performs feedback control. That is, the control filter 215 outputs a control signal for image blur correction to the image blur correction lens driving unit 104 (which includes an actuator), performing drive control of the correction lens 103.
Next, the drive control operation of the correction lens 103 using the image blur correction device will be described in detail.
The target position calculation unit 213 (which may be referred to herein as an acquisition unit) acquires a vibration detection signal (angular velocity signal) from the first vibration sensor 201 and a main object feedback amount from the calculation unit 219, and generates a correction position control signal for driving the correction lens 103 in the pitch direction and the yaw direction. The corrected position control signal is output to the control filter 215 via the subtractor 214.
The position detection sensor 212 detects the position of the correction lens 103 in the pitch direction and the yaw direction, and outputs a position detection signal to the control filter 215 via the A/D converter 218 and the subtractor 214. The subtractor 214 outputs the signal obtained by subtracting the position detection signal from the correction position control signal to the control filter 215. The control filter 215 performs feedback control through the image blur correction lens driving unit 104 so that the position detection signal value converges to the value of the correction position control signal from the target position calculation unit 213. The correction position control signal output from the target position calculation unit 213 is a control signal for moving the correction lens 103 so that the image blur of the subject is cancelled out. For example, the target position calculation unit 213 performs filter processing or the like on the shake detection information and generates a correction speed control signal or a correction position control signal. When vibration such as hand shake is applied to the image pickup apparatus during image capturing, image blur can be suppressed up to a certain degree of vibration by the control operation that moves the correction lens 103.
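For illustration, the feedback loop around the subtractor 214 and the control filter 215 can be sketched as follows. The PI form and the gain values are assumptions made only for this sketch; the patent does not specify the internal structure of the control filter.

```python
# Minimal sketch of the lens position feedback loop (subtractor 214 /
# control filter 215). The PI gains and class names are hypothetical.

class ControlFilter:
    """Simple PI controller standing in for control filter 215."""
    def __init__(self, kp=0.8, ki=0.05):
        self.kp, self.ki = kp, ki
        self.integral = 0.0

    def step(self, error):
        self.integral += error
        return self.kp * error + self.ki * self.integral

def control_cycle(target_position, measured_position, ctrl):
    # Subtractor 214: deviation between the correction position control
    # signal and the detected lens position.
    error = target_position - measured_position
    # Control filter 215: drive signal sent to the lens driving unit 104.
    return ctrl.step(error)

ctrl = ControlFilter()
drive = control_cycle(target_position=1.20, measured_position=1.05, ctrl=ctrl)
print(f"drive signal: {drive:.3f}")
```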
Fig. 3A is a block diagram showing a detailed internal configuration of the target position calculating unit 213. The high-pass filter 301 performs processing for removing a Direct Current (DC) offset component of the detection signal from the first vibration sensor 201. The low-pass filter 302 performs processing for converting the angular velocity signal into a signal corresponding to an angle. The integral gain unit 303 multiplies the output of the low-pass filter 302 by a predetermined integral gain. The adder 304 adds the output of the integral gain unit 303 to the main object feedback amount. The main object feedback amount as the control amount of the shake correction will be described below.
The target position calculation process will be described with reference to fig. 4. Fig. 4 is a flowchart showing the flow of the target position calculation process. The target position calculation unit 213 acquires data of the shake angular velocity of the image capturing apparatus 100 detected by the first vibration sensor 201 (S116). The high-pass filter 301 removes the DC offset component from the acquired data (S117). Filtering is then performed with the low-pass filter 302 (S118), and the integral gain unit 303 applies its gain and converts the angular velocity signal of the shake component into an angle signal (S119). The adder 304 adds the main object feedback amount calculated by the calculation unit 219 to the output of the integral gain unit 303 (S120). The addition result is output to the subtractor 214.
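The pipeline of fig. 3A (S116 to S120) can be sketched with simple discrete-time filters. The first-order filter forms and all coefficients here are illustrative assumptions; only the block structure (high-pass 301, low-pass 302, integral gain 303, adder 304) follows the description above.

```python
import numpy as np

def high_pass(x, alpha=0.99):
    """First-order high-pass: removes the DC offset of the gyro signal
    (block 301). alpha is a hypothetical coefficient."""
    y = np.zeros_like(x)
    for i in range(1, len(x)):
        y[i] = alpha * (y[i - 1] + x[i] - x[i - 1])
    return y

def low_pass(x, beta=0.05):
    """First-order low-pass acting as a pseudo-integrator that converts
    the angular velocity into an angle-like signal (block 302)."""
    y = np.zeros_like(x)
    for i in range(1, len(x)):
        y[i] = y[i - 1] + beta * (x[i] - y[i - 1])
    return y

def target_position(gyro, feedback, k_int=1.0):
    """S116-S120: filter the shake angular velocity and add the
    main object feedback amount (adder 304)."""
    angle = k_int * low_pass(high_pass(gyro))   # blocks 301-303
    return angle + feedback                      # block 304

gyro = np.sin(np.linspace(0, 10, 200)) + 0.1     # shake plus a DC offset
fb = np.zeros(200)                               # main object feedback amount
print(target_position(gyro, fb)[:5])
```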
Next, the configuration for detecting a motion vector, tracking feature points in the captured image, and estimating the position and orientation of the image capturing apparatus will be described with reference to fig. 2. The image data acquired by the image pickup unit 109 is processed by the image pickup signal processing unit 110. The motion vector detection unit 211 (which may be referred to as an acquisition unit herein) detects a motion vector of the captured image in the signal output from the image pickup signal processing unit 110. The global vector calculation unit 220 calculates, from the detected motion vector information, a global vector indicating the uniform movement of the entire image capture screen. The global vector is calculated as the motion vector with the highest frequency of occurrence, and the global vector information is sent to the main object feedback amount calculation unit 219. The feature point tracking unit 209 detects and tracks predetermined feature points in the captured image based on the detected motion vector information. Hereinafter, the feature point tracking unit 209 may be referred to simply as a tracking unit.
The main object separation unit 208 acquires the output of the feature point tracking unit 209, and specifies the coordinate region of the main object within the captured image. The main subject is an important subject, and is determined by an image size, a feature of the subject (for example, a face of a person), an operation of a photographer, and the like. The main subject separation unit 208 extracts a feature point of the main subject corresponding to the tracking feature point, and separates movement of other feature points (such as a background). The main object separation unit 208 outputs the coordinates of the feature points of the main object to the calculation unit 219, and outputs the coordinates of the feature points of the background other than the main object to the feature coordinate mapping and position and orientation estimation unit 205.
The feature coordinate mapping and position and orientation estimation unit 205 estimates the position and orientation of the image capturing apparatus 100 and the positions in real space of the feature points in the image captured by the image capturing apparatus 100, using SfM and inertial sensor information. The feature coordinate mapping and position and orientation estimation unit (hereinafter simply referred to as the estimation unit) 205 includes a feature coordinate map estimation unit 206 and a position and orientation estimation unit 207. The estimation processing performed by the feature coordinate map estimation unit 206 and the position and orientation estimation unit 207 will be described in detail below.
The position and orientation estimation processing performed by the estimation unit 205 will be described with reference to the flowchart of fig. 5. The processing of S101 to S105 is performed in parallel with the processing of S108 to S115. First, the processing of S108 to S115 will be described.
The image pickup unit 109 photoelectrically converts the optical image formed by the image pickup optical system into an electrical signal and acquires an analog image signal (S108). Next, the image pickup signal processing unit 110 converts the analog image signal acquired from the image pickup unit 109 into a digital image signal and performs predetermined image processing. The motion vector detection unit 211 detects a motion vector based on the image signal (S109). To detect the motion vector, the image signal of the preceding frame, stored in memory in advance, is acquired (S112). This image signal is compared with the image signal of the current frame, and a motion vector is calculated from the displacement of the image. Methods for detecting a motion vector include correlation methods and block matching; any motion vector calculation method may be used in the present invention.
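As an illustration of block matching, one of the methods just mentioned, the following sketch finds the shift that minimizes the sum of absolute differences between a block of the previous frame and the current frame. Block and search sizes are arbitrary illustrative values.

```python
import numpy as np

def block_motion_vector(prev, curr, y, x, block=16, search=8):
    """Return the (dy, dx) minimizing the sum of absolute differences
    between a block of the previous frame and the current frame."""
    ref = prev[y:y + block, x:x + block].astype(np.int32)
    best, best_mv = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            if yy < 0 or xx < 0 or yy + block > curr.shape[0] or xx + block > curr.shape[1]:
                continue
            cand = curr[yy:yy + block, xx:xx + block].astype(np.int32)
            sad = np.abs(ref - cand).sum()
            if best is None or sad < best:
                best, best_mv = sad, (dy, dx)
    return best_mv

rng = np.random.default_rng(0)
prev = rng.integers(0, 256, (64, 64), dtype=np.uint8)
curr = np.roll(prev, shift=(2, -3), axis=(0, 1))   # simulate a (2, -3) shift
print(block_motion_vector(prev, curr, 24, 24))      # -> (2, -3)
```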
The global vector calculation unit 220 calculates a global vector from the detected motion vector information of the image (S110). The global vector is obtained by finding the motion vector value that appears most frequently in the captured image, for example by known histogram processing. The feature point tracking unit 209 detects and tracks, in the coordinates of each frame of the moving image, the position to which a predetermined feature point within the captured image has moved (S111). One known feature point tracking technique sets a square window centered on the feature point and, when a new frame of the target video is provided, finds the point at which the residual within the window between frames is minimized. The tracking processing may be performed using such known methods, whose details are not described here.
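The global vector calculation of S110 amounts to taking the most frequent value among the per-block motion vectors. A minimal sketch, assuming the vectors are already quantized to integer pixel offsets:

```python
from collections import Counter

def global_vector(motion_vectors):
    """Return the most frequently occurring motion vector (S110).
    motion_vectors: iterable of (dy, dx) integer tuples, one per block."""
    return Counter(motion_vectors).most_common(1)[0][0]

vectors = [(2, -3)] * 40 + [(0, 0)] * 10 + [(1, -3)] * 5   # toy vector field
print(global_vector(vectors))   # -> (2, -3)
```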
The main object separation unit 208 specifies the coordinate region of the main object within the captured image, extracts the feature points of the main object from the tracked feature points, and separates out the movement of the other feature points (S113). Here, the region other than the main object is treated as the background region. As a subject detection method, there is, for example, a method that acquires color information from the image signal, divides the histogram of the color information into mountain-shaped distribution ranges, classifies the pixels of each divided range as one object, and thereby detects objects. With this classification into multiple regions of similar image signals, a plurality of subjects can be distinguished and detected. In addition, in moving image shooting, feature points that continue to exist in the captured image in every frame and whose movement amount is smaller than that of the other detected feature points are determined to belong to the main object. Conversely, feature points that disappear from the captured image (leave the imaging angle of view) in some frame, or whose movement amount is comparable to that of the other detected feature points, are determined to be feature points other than the main object. This determination exploits the difference in movement within the captured image between the main object, which the photographer intentionally keeps within the imaging angle of view, and other objects, which move within the frame unintentionally due to hand shake or the like. The feature point coordinates belonging to the separated main object region are held in a predetermined storage region of the memory (S114). The feature point coordinates belonging to regions other than the main object (for example, the background region) are held in another predetermined storage region of the memory (S115). Subsequently, the process proceeds to S106.
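The persistence-and-movement heuristic of S113 can be sketched as follows. The data layout, the median reference, and the threshold ratio are assumptions made for illustration; the patent states only the qualitative criteria (persistence in the frame and a smaller movement amount).

```python
import statistics

def separate_main_object(tracks, frame_count, ratio=0.5):
    """Split tracked feature points into main-object and background sets.

    tracks: dict id -> list of (x, y) per frame (a shorter list means the
    point left the frame or was lost). A point is 'main object' if it
    persisted through all frames and moved noticeably less than the
    median of the persisting points (S113)."""
    displacement = {}
    for pid, pts in tracks.items():
        if len(pts) == frame_count:                      # persisted in view
            dx = pts[-1][0] - pts[0][0]
            dy = pts[-1][1] - pts[0][1]
            displacement[pid] = (dx * dx + dy * dy) ** 0.5
    med = statistics.median(displacement.values())
    main = {p for p, d in displacement.items() if d < ratio * med}
    background = set(tracks) - main
    return main, background

tracks = {0: [(100, 100), (101, 100), (101, 101)],        # nearly still
          1: [(10, 10), (18, 10), (26, 10)],              # background pan
          2: [(200, 50), (208, 50)]}                      # left the frame
main, bg = separate_main_object(tracks, frame_count=3)
print(main, bg)   # -> {0} {1, 2}
```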
Next, the position and orientation estimation process will be described with reference to S101 to S107 in fig. 2 and 5.
The position and orientation estimation unit 207 acquires a vibration detection signal from the first vibration sensor 201 (S101). The differentiator 217 in fig. 2 differentiates the output of the A/D converter 218 and outputs the result to the subtractor 216. By taking the difference of the position detection signals of the correction lens 103 between image capturing frames, the moving speed of the correction lens 103 is calculated (S102).
The subtractor 216 receives the outputs of the A/D converter 202 and the differentiator 217, and subtracts the moving speed of the correction lens 103 from the angular velocity detection information of the first vibration sensor 201, thereby calculating information corresponding to the shake correction residual angular velocity of the image capturing apparatus 100 (S103). The output of the subtractor 216 is input to the position and orientation estimation unit 207 in fig. 2. The position and orientation estimation unit 207 acquires, via the A/D converter 204, the acceleration information of the image capturing apparatus 100 detected by the second vibration sensor 203 (S104). The position and orientation estimation unit 207 estimates the position and orientation of the image capturing apparatus 100 in real space (S105). The feature coordinate map estimation unit 206 estimates the 3D position coordinates, including depth, of the feature points in real space relative to the image capturing apparatus, and generates a feature coordinate map (S106). The feature coordinate map is a map of 3D position coordinates estimated based on both the estimated position and orientation information of the image capturing apparatus 100 and the frame-to-frame coordinate change information of the 2D feature points, other than those of the main object, in the captured image calculated by the main object separation unit 208.
The position and orientation estimation unit 207 corrects the position and orientation estimate obtained in S105 based on the feature coordinate map information, the estimated position and orientation of the image capturing apparatus 100, and the 2D feature point coordinates, other than those of the main object, in the captured image calculated by the main object separation unit 208 (S107). By repeating the position and orientation estimation processing and the feature coordinate map estimation processing over the frames of the moving image, the position and orientation can be estimated correctly. Note that the position and orientation estimate in S105 is calculated from the coordinates of feature points obtained from the image while shake correction is performed with the correction lens 103, and from the shake correction residual angular velocity obtained by subtracting the moving speed of the correction lens 103 from the angular velocity detection information of the first vibration sensor 201. The position and orientation estimate and the feature coordinate map information from the estimation unit 205 are output to the calculation unit 219.
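Steps S102 and S103 (differentiator 217 and subtractor 216) reduce to the following computation. For simplicity the sketch assumes the lens position is already expressed in angle-equivalent units; that unit assumption belongs to the sketch, not to the patent.

```python
def shake_correction_residual(gyro_rate, lens_pos_prev, lens_pos_curr, dt):
    """S102-S103: differentiate the lens position between frames
    (differentiator 217) and subtract it from the detected angular
    velocity (subtractor 216), yielding the residual shake rate seen
    by the position and orientation estimation."""
    lens_speed = (lens_pos_curr - lens_pos_prev) / dt   # differentiator 217
    return gyro_rate - lens_speed                       # subtractor 216

# e.g. a 0.6 deg/s shake of which the lens already compensates 0.5 deg/s
print(shake_correction_residual(0.6, 0.010, 0.015, dt=0.01))  # -> ~0.1
```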
Next, processing performed by the calculation unit 219 for calculating the main object feedback amount will be described with reference to fig. 3B and 6. Fig. 3B is a diagram showing a detailed internal structure of the calculation unit 219. The calculation unit 219 acquires the global vector information calculated by the global vector calculation unit 220. The integrator 305 integrates the global vector information and calculates the amount of movement of the pixel in the captured image. A conversion unit (in-image-pickup-screen feature coordinate conversion unit) 308 acquires the position and orientation estimation value estimated by the estimation unit 205 and 3D space feature coordinate mapping information including depth information, and converts the 3D space feature coordinate mapping information into feature point coordinates in a captured image.
The first subtractor 309 subtracts the output of the main object separation unit 208 from the output of the conversion unit 308. The output of the main object separating unit 208 corresponds to the coordinates of the feature point of the main object. The first subtractor 309 outputs the subtracted signal to the second subtractor 306. The second subtractor 306 subtracts the output of the first subtractor 309 from the output of the integrator 305, and outputs it to the angle conversion gain unit 307. The angle conversion gain unit 307 multiplies the gain value to convert the calculated pixel movement amount into a value corresponding to the angle, and outputs the calculated control amount to the target position calculation unit 213. Thus, it should be understood that the calculation unit 219 may calculate a control amount of shake correction.
Fig. 6 is a flowchart illustrating the main object feedback amount calculation process. The processing in S121 and S122 and the processing in S125 to S129 are performed as parallel processing. In S121, the calculation unit 219 acquires the global vector calculated by the global vector calculation unit 220. Next, the integrator 305 integrates the acquired global vector value and calculates a pixel shift amount (S122).
On the other hand, the calculation unit 219 acquires the position and orientation estimate of the image capturing apparatus 100 estimated by the estimation unit 205 (S125). The calculation unit 219 also acquires the 3D feature coordinate map, including depth, of the region other than the main object, estimated from the feature points that the main object separation unit 208 assigned to regions other than the main object (S126). The conversion unit 308 converts this 3D feature coordinate map into 2D feature coordinates in the captured image using the feature point coordinates and the position and orientation estimate of the image capturing apparatus (S127). First, processing is performed to convert the 3D feature coordinates of objects other than the main object from the world coordinate system into the camera coordinate system. The world coordinate system is a fixed coordinate system that defines the coordinates of an object regardless of the position of the camera. Details will be described with reference to fig. 7.
Fig. 7 is a diagram showing the relationship between the coordinate position of an object in world coordinates and its coordinate position in camera coordinates. T denotes the vector from the origin OW of the world coordinate system to the origin OC of the camera coordinate system. (rx, ry, rz) are unit vectors indicating the directions of the camera coordinate axes (x, y, z) as seen in world coordinates. A point (Xc, Yc, Zc) in the camera coordinate system corresponds to a point (Xw, Yw, Zw) in the world coordinate system. The relationship between these coordinates is given by Equation 1 below.
$$\begin{pmatrix} X_c \\ Y_c \\ Z_c \end{pmatrix} = R \begin{pmatrix} X_w \\ Y_w \\ Z_w \end{pmatrix} + T \qquad \text{(Equation 1)}$$
In Equation 1, R denotes the rotation matrix and T denotes the translation (parallel movement) vector.
Next, the 3D feature coordinates in the camera coordinate system are converted into image coordinates by, for example, perspective projection. Fig. 8 illustrates a perspective projection model in which a virtual imaging plane is placed at the focal length f in front of the lens. The point O in fig. 8 represents the center of the camera lens, and the Z-axis represents the optical axis of the camera. The coordinate system with origin O is the camera coordinate system. (X, Y, Z) denotes the coordinate position of the object in the camera coordinate system, and (x, y) denotes the image coordinates obtained by perspective projection of the camera coordinates (X, Y, Z). The conversion from (X, Y, Z) to (x, y) is given by Equation 2 below.
$$x = f\,\frac{X}{Z}, \qquad y = f\,\frac{Y}{Z} \qquad \text{(Equation 2)}$$
In this way, the 3D feature coordinate map, including depth, of the region other than the main object can be converted into 2D feature coordinates in the captured image.
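Equations 1 and 2 together map a 3D background feature point from world coordinates to 2D image coordinates, which is the conversion performed by the conversion unit 308 in S127. A minimal numeric sketch (R, T, and f are arbitrary illustrative values):

```python
import numpy as np

def world_to_image(p_world, R, T, f):
    """Equation 1: rigid transform into camera coordinates;
    Equation 2: perspective projection onto the image plane."""
    p_cam = R @ p_world + T                    # (Xc, Yc, Zc), Equation 1
    X, Y, Z = p_cam
    return np.array([f * X / Z, f * Y / Z])    # (x, y), Equation 2

theta = np.deg2rad(5.0)                        # small yaw of the camera
R = np.array([[np.cos(theta), 0, np.sin(theta)],
              [0, 1, 0],
              [-np.sin(theta), 0, np.cos(theta)]])
T = np.array([0.0, 0.0, 0.1])                  # translation vector
f = 1000.0                                     # focal length in pixels

p = np.array([0.5, 0.2, 5.0])                  # background point with depth
print(world_to_image(p, R, T, f))
```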
In S128 of fig. 6, the feature coordinates belonging to the main object region separated by the main object separation unit 208 are acquired. Next, the first subtractor 309 compares the per-frame movement of the feature coordinates of the main object region acquired in S128 with the per-frame movement of the feature coordinates outside the main object region estimated in S127, and calculates the difference by subtraction (S129). The second subtractor 306 subtracts the difference calculated in S129 from the movement amount of the global vector calculated in S122 (S123). The angle conversion gain unit 307 multiplies by a gain value to obtain a value corresponding to an angle, thereby calculating the main object feedback amount, and outputs the result to the target position calculation unit 213 (S124). The main object feedback amount is expressed by Equation 3 below.
main object feedback amount = global vector movement amount − (feature coordinate movement amount of the background region − feature coordinate movement amount of the main object)   (Equation 3)
In the target position calculation unit 213, an adder 304 adds the main object feedback amount to the target position of the correction lens 103 calculated based on the detection output of the first vibration sensor 201 (see fig. 3A).
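Combining S121 to S124, the feedback amount of Equation 3 and its addition to the gyro-derived target position can be sketched as follows. The pixel-to-angle conversion gain is an arbitrary placeholder value.

```python
def main_object_feedback(global_move, bg_move, main_move, angle_gain=0.01):
    """Equation 3: feedback = global vector movement
    - (background feature movement - main object feature movement),
    converted to an angle by the conversion gain (block 307)."""
    pixels = global_move - (bg_move - main_move)
    return angle_gain * pixels

def corrected_target(gyro_target, feedback):
    """Adder 304: add the main object feedback amount to the target
    position derived from the vibration sensor."""
    return gyro_target + feedback

# Case of fig. 10A: background moves with the global vector (camera work),
# main object drifts slightly -> the feedback tracks only the main object.
fb = main_object_feedback(global_move=30.0, bg_move=30.0, main_move=2.0)
print(fb, corrected_target(0.05, fb))
```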
Referring to figs. 9A and 9B and figs. 10A and 10B, the effect of performing shake correction control using the target position to which the main object feedback amount is added will be described. Figs. 9A and 9B illustrate the positional relationship between the camera work (change of position and orientation) of the image pickup apparatus and the objects. Figs. 10A and 10B illustrate the relationship of feature point coordinates in the captured image during image capturing.
Fig. 9A is a schematic diagram illustrating the positional and orientational relationship among the main object 504, the background objects 502 and 503 behind it, and the image pickup apparatus. It shows a state in which the user performs camera work while keeping the main object 504 at the center of the angle of view during moving image capturing. The first moving image capturing frame 901 is denoted frame 1, and the next, second moving image capturing frame 902 is denoted frame 2. The example of fig. 9A illustrates camera work in which both the position and the orientation of the image pickup apparatus change.
Fig. 10A schematically shows the positional relationship between the feature point coordinates of the background objects 502 and 503 and those of the main object 504 in the captured image 501 during the camera work from frame 1 toward frame 2. Fig. 10A illustrates camera work in which the main object 504 is kept at the center of the imaging screen. Regarding the change of the feature coordinates from frame 1 to frame 2, the movement amount of the feature coordinates of the main object is small compared with that of the background, because the coordinates change so that the image of the main object 504 remains near its original position. Therefore, in Equation 3, the global vector movement amount, which represents the most frequent uniform movement over the entire imaging screen, and the feature coordinate movement amount of the background region coincide. The feature coordinate movement amount of the background region in this case is the movement amount obtained by converting the 3D feature coordinates, including depth, of the background objects 502 and 503 into 2D feature coordinates based on the position and orientation estimate of the image capturing apparatus. Since the global vector movement amount and the feature coordinate movement amount of the background region cancel each other out, the main object feedback amount equals the feature coordinate movement amount of the main object. According to the main object feedback amount, the correction lens 103 is controlled so that the movement amount of the feature coordinate values of the main object becomes zero; that is, correction is performed so that the change of the coordinate values of the main object on the imaging screen becomes small. Control is thus performed so that only the movement of the main object on the imaging screen is suppressed, without suppressing the change of the angle of view due to the camera work that moves the entire imaging screen.
Fig. 9B is a schematic diagram showing camera work in which, in the positional and orientational relationship among the main object 504, the background objects 502 and 503, and the image pickup apparatus, only the orientation of the image pickup apparatus changes, independently of the position of the main object 504. During the change from frame 1 to frame 2, the orientation and the image capturing direction of the image capturing apparatus change. Fig. 10B shows the positional relationship between the feature point coordinates of the main object 504 and the background objects 502 and 503 in the captured image 501 during this movement. Such movement produces image capturing not intended by the photographer, who wants to keep the main object within the screen; this change of the imaging angle of view is one the photographer wishes to suppress, for example a change due to hand shake or the like. In this case, the feature coordinate movement amount of the background region corresponding to the background objects 502 and 503 and the feature coordinate movement amount of the main object 504 coincide and cancel each other out. Therefore, according to Equation 3, the main object feedback amount equals the global vector movement amount. According to the main object feedback amount, the correction lens 103 is controlled so that the most frequent uniform movement amount over the entire imaging screen becomes zero, and shake correction is performed so that the change of the angle of view of the entire imaging screen becomes small. That is, image blur generated due to shake of the image pickup apparatus, such as hand shake, which changes the angle of view of the entire imaging screen, is corrected.
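To make the two regimes concrete, the following toy comparison evaluates Equation 3 (in pixel units, before the angle conversion gain) with hypothetical values: with the camera work of fig. 10A the feedback reduces to the main object movement, whereas with the hand shake of fig. 10B it reduces to the global vector movement.

```python
def feedback_pixels(global_move, bg_move, main_move):
    """Equation 3 in pixel units (before the angle conversion gain)."""
    return global_move - (bg_move - main_move)

# Fig. 10A: intentional camera work -> the background moves with the
# global vector while the main object barely moves. Only the main
# object's residual movement is corrected.
print(feedback_pixels(global_move=30, bg_move=30, main_move=2))   # -> 2

# Fig. 10B: hand shake -> main object and background move together.
# The whole-screen (global vector) movement is corrected.
print(feedback_pixels(global_move=30, bg_move=28, main_move=28))  # -> 30
```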
In the present embodiment, by calculating the main object feedback amount according to Equation 3, it can be determined, without complicated determination processing, which should be suppressed: the movement of the entire imaging screen or the movement of the main object coordinates on the imaging screen.
In the related-art object tracking operation, the movement of the object is simply determined on the captured image and corrected so that the position of the object image is held at a specific position; it therefore cannot be determined whether movement of the object on the captured image is caused by movement of the object itself or by movement of the image capturing apparatus. In addition, in the related-art method, the apparent size of a movement in the captured image changes with the depth distance between the subject and the image capturing apparatus, so the actual movement amount of each subject cannot be determined. In the present embodiment, by contrast, the 3D coordinates, including depth, of the feature coordinates of the main object and the background relative to the position of the image pickup apparatus are determined, and the movements of the objects and of the image pickup apparatus are separated, so that each movement can be appropriately determined and controlled.
In addition, in the present embodiment, the movement of feature points other than the main object (such as the background) can be estimated using the position and orientation of the image capturing apparatus estimated in S127 of fig. 6 and the feature coordinates on the captured image estimated from the 3D feature points belonging to the background region. Even if a tracked feature point of the background region moves out of the captured picture due to camera work, the feature coordinates of the background region can continue to be estimated. Likewise, even if a feature point is hidden behind another object, or cannot be tracked because of a change in imaging conditions such as a change in the luminance of the captured image, the feature coordinates of the background region can continue to be estimated.
In this embodiment, during image capturing with camera work for a subject tracking operation, image blur correction is performed by separating changes in the position and orientation of the image capturing apparatus due to hand shake from changes due to the camera work. A good captured image is therefore obtained in which the unnatural changes of the angle of view that occur with prior-art shake correction methods are suppressed.
Second embodiment
Next, a second embodiment of the present invention will be described. The present embodiment differs from the first embodiment in that electronic image blur correction is performed by image processing. In the present embodiment, the same reference numerals are used for the same portions as in the first embodiment, and detailed descriptions of those portions are omitted. Fig. 11 is a diagram showing a configuration example of the image blur correction device of the present embodiment. Differences from the configuration example shown in fig. 2 are described below. In the image blur correction device of the present embodiment, the calculation unit 219 outputs the calculated main object feedback amount to the image pickup signal processing unit 110.
In the present embodiment, the process of adding the main object feedback amount to the target position of the correction lens 103, shown in S120 of fig. 4 for the first embodiment, is not performed. Instead, the calculation unit 219 instructs the image pickup signal processing unit 110 of the coordinate position from which to read the image signal, based on the main object feedback amount. The image pickup signal processing unit 110 performs image blur correction by changing the position at which the captured image signal is extracted.
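A minimal sketch of the electronic correction of this embodiment: instead of moving the correction lens, the read-out (crop) window is shifted by the feedback amount expressed in pixels. The crop size, the clamping to frame boundaries, and the integer shifts are illustrative assumptions.

```python
import numpy as np

def electronic_stabilize(frame, crop_h, crop_w, shift_y, shift_x):
    """Extract the crop window at a position offset by the main object
    feedback amount (in pixels), clamped to the frame boundaries."""
    h, w = frame.shape[:2]
    y0 = int(np.clip((h - crop_h) // 2 + shift_y, 0, h - crop_h))
    x0 = int(np.clip((w - crop_w) // 2 + shift_x, 0, w - crop_w))
    return frame[y0:y0 + crop_h, x0:x0 + crop_w]

frame = np.arange(100 * 120).reshape(100, 120)
stab = electronic_stabilize(frame, crop_h=80, crop_w=96, shift_y=3, shift_x=-5)
print(stab.shape)   # -> (80, 96)
```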
In the present embodiment, electronic image blur correction is thus realized by changing the coordinate position at which the image signal output from the image pickup unit 109 is read. The optical image blur correction process and the electronic image blur correction process may be used together, or switched between, depending on the image capturing conditions, the shake state, and the like.
While the present invention has been described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.
This application claims priority from Japanese Patent Application No. 2017-150810, filed August 3, 2017, which is hereby incorporated by reference herein in its entirety.

Claims (17)

1. An image pickup apparatus for acquiring an image signal with an image pickup unit, comprising:
a first acquisition unit configured to acquire first information indicating a shake of the image pickup apparatus detected by a shake detection unit;
a second acquisition unit configured to acquire second information indicating a movement of an object detected in an image signal of the image pickup unit;
a tracking unit configured to calculate coordinate values of the object on an imaging screen using the second information and track a feature point;
an estimation unit configured to estimate a position and/or an orientation of the image pickup apparatus and a positional relationship, including a depth, between the object and the image pickup apparatus, based on the first information and a coordinate value of the object;
a calculation unit configured to calculate a control amount of shake correction using the estimated value of the position or orientation of the image pickup apparatus acquired from the estimation unit, the positional relationship acquired from the estimation unit, the first information, and the calculated coordinate value of the object; and
a correction unit configured to correct an image blur generated due to a shake of the image pickup apparatus based on the control amount calculated by the calculation unit.
2. The image pickup apparatus according to claim 1, wherein the calculation unit calculates the control amount corresponding to a change in position or orientation of the image pickup apparatus with a mirror movement for tracking the object.
3. The image pickup apparatus according to claim 1, wherein the shake detection unit is an angular velocity sensor and/or an acceleration sensor.
4. The image pickup apparatus according to claim 1, further comprising a separation unit configured to acquire an output of the tracking unit and separate a feature point of a first object and a feature point of a second object,
wherein the separation unit outputs the coordinates of the feature point of the second object to the estimation unit.
5. The image pickup apparatus according to claim 4, wherein the estimation unit generates feature coordinate mapping information on 3D position coordinates in a real space from the first information and coordinate change information on the feature point of the second object calculated by the separation unit, and calculates an estimated value of the position or orientation of the image pickup apparatus using the feature coordinate mapping information and the coordinates of the feature point of the second object calculated by the separation unit.
6. The image pickup apparatus according to claim 4, further comprising a calculation unit configured to acquire the second information and calculate a global vector representing movement of the entire imaging screen,
wherein the calculation unit calculates a difference between the movement amount of the feature point of the second object acquired from the estimation unit and the movement amount of the feature point of the first object acquired from the separation unit, calculates a feedback amount by subtracting the difference from the global vector, and outputs the feedback amount to the correction unit.
7. The image pickup apparatus according to claim 5, wherein
the first object is a main object and the second object is a background, and
the estimation unit estimates movement of a feature point of the second object using the position or orientation of the image pickup apparatus and feature coordinates on a captured image estimated from 3D feature coordinates of a region belonging to the second object.
8. The image pickup apparatus according to claim 5, wherein the calculation unit includes:
a conversion unit configured to acquire an estimated value of a position or orientation of the image pickup apparatus and the feature coordinate mapping information from the estimation unit, and convert the feature coordinate mapping information into feature coordinates on the imaging screen; and
a subtraction unit configured to subtract a movement amount of the feature point of the first object from an output of the conversion unit.
9. The image pickup apparatus according to claim 8, wherein, in a case where the movement amount of the feature point of the first object is smaller than the movement amount of the feature point of the second object, the calculation unit and the correction unit perform control such that a change in the coordinate value of the first object in the imaging screen becomes small.
10. The image pickup apparatus according to claim 8, wherein, in a case where the movement amount of the feature point of the second object and the movement amount of the feature point of the first object are the same, the calculation unit and the correction unit perform control such that a change in the entire imaging screen due to shake becomes small.
11. The image pickup apparatus according to claim 1, further comprising:
a separation unit configured to acquire an output of the tracking unit and separate a feature point of a first object and a feature point of a second object,
wherein the separation unit outputs the coordinates of the feature point of the second object to the estimation unit, and
the calculation unit calculates the control amount corresponding to a change in position or orientation of the image pickup apparatus with a mirror movement for tracking the object.
12. The image pickup apparatus according to claim 11, wherein the estimation unit generates feature coordinate mapping information of 3D position coordinates from the first information and coordinate change information on the feature point of the second object calculated by the separation unit, and calculates an estimated value of the position or orientation of the image pickup apparatus using the feature coordinate mapping information and the coordinates of the feature point of the second object calculated by the separation unit.
13. The image pickup apparatus according to claim 12, wherein
the first object is a main object and the second object is a background, and
the estimation unit estimates movement of a feature point of the second object using the position or orientation of the image pickup apparatus and feature coordinates on a captured image estimated from 3D feature coordinates of a region belonging to the second object.
14. The image pickup apparatus according to claim 12, wherein the calculation unit includes: a conversion unit configured to acquire an estimated value of a position or orientation of the image pickup apparatus and the feature coordinate mapping information from the estimation unit, and convert the feature coordinate mapping information into feature coordinates on the imaging screen; and a subtraction unit configured to subtract a movement amount of the feature point of the first object from an output of the conversion unit.
15. The image pickup apparatus according to claim 14, wherein, in a case where the movement amount of the feature point of the first object is smaller than the movement amount of the feature point of the second object, the calculation unit and the correction unit perform control such that a change in the coordinate value of the first object in the imaging screen becomes small.
16. The image pickup apparatus according to claim 14, wherein, in a case where the movement amount of the feature point of the second object and the movement amount of the feature point of the first object are the same, the calculation unit and the correction unit perform control such that a change in the entire imaging screen due to shake becomes small.
17. A method performed in an image pickup apparatus for acquiring an image signal with an image pickup unit, characterized by comprising:
acquiring first information indicating a shake of the image pickup apparatus detected by a shake detection unit, and second information indicating a movement of an object detected in an image signal of the image pickup unit;
calculating coordinate values of the object on an imaging screen using the second information, and tracking a feature point;
estimating a position and/or an orientation of the image pickup apparatus and a positional relationship, including a depth, between the object and the image pickup apparatus from the first information and the coordinate value of the object;
a calculation step of calculating a control amount of shake correction using the estimated position or orientation of the image pickup apparatus, the positional relationship, the first information, and the calculated coordinate value of the object; and
correcting image blur generated due to shake of the image pickup apparatus based on the control amount calculated in the calculating step.
CN201810879400.2A 2017-08-03 2018-08-03 Image pickup apparatus and method executed therein Active CN109391767B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2017150810A JP6904843B2 (en) 2017-08-03 2017-08-03 Imaging device and its control method
JP2017-150810 2017-08-03

Publications (2)

Publication Number Publication Date
CN109391767A (en) 2019-02-26
CN109391767B (en) 2021-06-01

Family

ID=63518350

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810879400.2A Active CN109391767B (en) 2017-08-03 2018-08-03 Image pickup apparatus and method executed therein

Country Status (5)

Country Link
US (1) US10511774B2 (en)
JP (1) JP6904843B2 (en)
CN (1) CN109391767B (en)
DE (1) DE102018118644A1 (en)
GB (1) GB2567043B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2019062340A (en) * 2017-09-26 2019-04-18 キヤノン株式会社 Image shake correction apparatus and control method
CN110349177B (en) * 2019-07-03 2021-08-03 广州多益网络股份有限公司 Method and system for tracking key points of human face of continuous frame video stream
CN110401796B (en) * 2019-07-05 2020-09-29 浙江大华技术股份有限公司 Jitter compensation method and device of image acquisition device
EP4013030A4 (en) * 2019-08-27 2022-08-31 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Image processing method and apparatus, and electronic device and computer-readable storage medium
WO2021038752A1 (en) * 2019-08-28 2021-03-04 株式会社ソニー・インタラクティブエンタテインメント Image processing device, system, image processing method and image processing program
CN112887584A (en) * 2019-11-29 2021-06-01 华为技术有限公司 Video shooting method and electronic equipment
JP2022089269A (en) * 2020-12-04 2022-06-16 株式会社日立製作所 Calibration device and calibration method
CN112837362A (en) * 2021-01-28 2021-05-25 清华大学深圳国际研究生院 Three-dimensional human body posture estimation method for obtaining space positioning and computer readable storage medium
CN114627215B (en) * 2022-05-16 2022-09-16 山东捷瑞数字科技股份有限公司 Method and device for camera shake animation production based on three-dimensional software
WO2024043752A1 (en) * 2022-08-26 2024-02-29 Samsung Electronics Co., Ltd. Method and electronic device for motion-based image enhancement

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101426092B (en) * 2007-11-02 2011-11-23 韩国科亚电子股份有限公司 Apparatus for digital image stabilization using object tracking and method thereof
CN104813656A (en) * 2012-11-29 2015-07-29 阿尔卡特朗讯公司 A videoconferencing server with camera shake detection
CN105939454A (en) * 2015-03-03 2016-09-14 佳能株式会社 Image capturing apparatus, control method thereof
CN106257911A (en) * 2016-05-20 2016-12-28 上海九鹰电子科技有限公司 Image stability method and device for video image

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7382400B2 (en) * 2004-02-19 2008-06-03 Robert Bosch Gmbh Image stabilization system and method for a video camera
JP2010093362A (en) 2008-10-03 2010-04-22 Nikon Corp Imaging apparatus and optical apparatus
GB2474886A (en) * 2009-10-30 2011-05-04 St Microelectronics Image stabilisation using motion vectors and a gyroscope
JP5409342B2 (en) * 2009-12-25 2014-02-05 キヤノン株式会社 Imaging apparatus and control method thereof
JP5589527B2 (en) * 2010-04-23 2014-09-17 株式会社リコー Imaging apparatus and tracking subject detection method
US20110304706A1 (en) * 2010-06-09 2011-12-15 Border John N Video camera providing videos with perceived depth
BR112015010384A2 (en) * 2012-11-12 2017-07-11 Behavioral Recognition Sys Inc image stabilization techniques for video surveillance systems
US10136063B2 (en) * 2013-07-12 2018-11-20 Hanwha Aerospace Co., Ltd Image stabilizing method and apparatus
US10091432B2 (en) * 2015-03-03 2018-10-02 Canon Kabushiki Kaisha Image capturing apparatus, control method thereof and storage medium storing control program therefor
US10708571B2 (en) * 2015-06-29 2020-07-07 Microsoft Technology Licensing, Llc Video frame processing
JP6600232B2 (en) * 2015-11-05 2019-10-30 キヤノン株式会社 Image blur correction apparatus and method
JP6368396B2 (en) 2017-04-10 2018-08-01 株式会社神戸製鋼所 Hydrogen gas cooling method and hydrogen gas cooling system

Also Published As

Publication number Publication date
US20190045126A1 (en) 2019-02-07
GB2567043B (en) 2019-11-27
DE102018118644A1 (en) 2019-02-07
GB2567043A (en) 2019-04-03
GB201812266D0 (en) 2018-09-12
JP6904843B2 (en) 2021-07-21
CN109391767A (en) 2019-02-26
US10511774B2 (en) 2019-12-17
JP2019029962A (en) 2019-02-21

Similar Documents

Publication Publication Date Title
CN109391767B (en) Image pickup apparatus and method executed therein
JP6600232B2 (en) Image blur correction apparatus and method
JP6592335B2 (en) Image blur correction apparatus and method
US10015406B2 (en) Zoom control device, imaging apparatus, control method of zoom control device, and recording medium
CN109391755B (en) Image pickup apparatus and method executed therein
CN107018311B (en) Mobile information acquisition device, mobile information acquisition method, and recording medium
CN110062131B (en) Image blur correction device, control method thereof, and imaging device
US10873701B2 (en) Image pickup apparatus and control method thereof
US20190098215A1 (en) Image blur correction device and control method
JP6395401B2 (en) Image shake correction apparatus, control method therefor, optical apparatus, and imaging apparatus
JP7013205B2 (en) Image shake correction device and its control method, image pickup device
JP6833381B2 (en) Imaging equipment, control methods, programs, and storage media
JP2018205551A (en) Imaging device and control method of the same
JP7414484B2 (en) Optical equipment, imaging device and control method
JP2022009046A (en) Image blur correction device, control method of the same, and imaging apparatus
JP2017215350A (en) Image blur correction device, optical unit, imaging apparatus and control method
JP2023047605A (en) Control device, imaging apparatus, control method, and program
JP6778014B2 (en) Imaging device and its control method, program, storage medium
JP2020170074A (en) Imaging apparatus, lens device, control method of imaging apparatus, and control method lens device
CN113497893A (en) Image pickup apparatus and control method of image pickup apparatus
JP2020136845A (en) Imaging apparatus and control method for imaging apparatus

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant