WO2014069632A1 - Image processing device, image processing method, image processing program, and recording medium - Google Patents
Image processing device, image processing method, image processing program, and recording medium
- Publication number
- WO2014069632A1 (PCT/JP2013/079724)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- matrix
- frame image
- component
- motion
- calculation unit
- Prior art date
Classifications
- G06T3/10 — Geometric image transformations in the plane of the image; selection of transformation methods according to the characteristics of the input images
- G06T3/40 — Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T5/92 — Dynamic range modification of images or parts thereof based on global image properties
- H04N23/68 — Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
- H04N23/6811 — Motion detection based on the image signal
- H04N23/683 — Vibration or motion blur correction performed by a processor, e.g. controlling the readout of an image memory
- H04N23/689 — Motion occurring during a rolling shutter mode
Definitions
- the present invention relates to an image processing apparatus, an image processing method, an image processing program, and a recording medium.
- Conventionally, an apparatus is known that determines an appropriate cut-out area from a captured frame image in order to eliminate the influence of camera shake and the like during moving image shooting (see, for example, Patent Document 1).
- The image processing apparatus described in Patent Document 1 detects motion data indicating how far a frame image deviates from a reference frame image (for example, a previously input frame image), and moves or deforms the cut-out area according to the detected motion data, thereby correcting the cut-out area so that it remains stationary with respect to the movement of the camera.
- At this time, the amount of movement or deformation is calculated by considering either a motion with four degrees of freedom (rotation, enlargement/reduction, and translation) or a motion with six degrees of freedom (translation, rotation, enlargement/reduction, and shear), and the cut-out area is adjusted accordingly.
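The two prior-art motion models can be written as homogeneous 3x3 matrices. The sketch below is purely illustrative; the function names and parameterization are assumptions, not taken from the patent:

```python
import math

def similarity_matrix(scale, theta, tx, ty):
    """4-degree-of-freedom model: rotation (1) + uniform scaling (1)
    + translation (2)."""
    c, s = math.cos(theta), math.sin(theta)
    return [[scale * c, -scale * s, tx],
            [scale * s,  scale * c, ty],
            [0.0, 0.0, 1.0]]

def affine_matrix(a, b, c, d, tx, ty):
    """6-degree-of-freedom model: the four linear coefficients a..d
    cover rotation, scaling, and shear; tx, ty are translation."""
    return [[a, b, tx],
            [c, d, ty],
            [0.0, 0.0, 1.0]]

def apply(m, x, y):
    """Apply a 3x3 homogeneous transform to a point (x, y)."""
    xh = m[0][0] * x + m[0][1] * y + m[0][2]
    yh = m[1][0] * x + m[1][1] * y + m[1][2]
    w = m[2][0] * x + m[2][1] * y + m[2][2]
    return xh / w, yh / w
```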
- A method is also known for accurately calculating a rolling shutter distortion component and a camera motion component (see, for example, Patent Document 2).
- The image processing apparatus described in Patent Document 2 models a global motion vector, expressed as an affine transformation matrix, by substituting it into a component separation formula that separates the rolling shutter distortion component and the camera motion component using unknown component parameters.
- The apparatus solves the equations obtained from the modeled component separation formula, calculates each component parameter, and thereby calculates the rolling shutter distortion component and the camera motion component individually and accurately.
- To determine the cut-out area appropriately, the camera motion component must be calculated accurately from the motion data between frame images.
- For this purpose, it is conceivable to calculate the camera motion component accurately by modeling all movements of the camera using the image processing apparatus described in Patent Document 2.
- However, Patent Document 2 discloses a component separation formula only for the case where the motion vector is limited to an affine transformation with six degrees of freedom. When the motion vector is expressed by perspective projection, a separation formula different from the affine one must be modeled. The image processing apparatus described in Patent Document 2 therefore requires a component separation formula tailored to the format of the motion data, and so may not be applicable in general. Furthermore, when an inaccurate global motion vector is substituted into the separation formula, the apparatus inevitably derives an inaccurate rolling shutter distortion component and camera motion component, so robust processing may not be possible.
- In this technical field, an image processing apparatus, an image processing method, an image processing program, and a recording medium are desired that can easily and appropriately determine the area to be cut out from a frame image and that simplify the design of a correction filter such as a camera shake correction filter.
- The image processing device sets, in a frame image captured by an imaging device, a region smaller than the frame image, and corrects the position or shape of the region according to the movement of the imaging device.
- The image processing apparatus thereby generates an output frame image.
- the apparatus includes an input unit, a motion acquisition unit, a matrix calculation unit, and a drawing unit.
- the input unit sequentially inputs the first frame image and the second frame image.
- the motion acquisition unit acquires motion data between the first frame image and the second frame image.
- The matrix calculation unit calculates a projection matrix for projecting the output frame image onto the second frame image from a first matrix including a rolling shutter distortion component, a second matrix including at least one of a translation component in a direction orthogonal to the imaging direction and a rotation component about the imaging direction, and an auxiliary matrix including the motion components not included in the first matrix and the second matrix.
- the drawing unit generates an output frame image from the second frame image using the projection matrix.
- the matrix calculation unit includes a first calculation unit, a second calculation unit, and an auxiliary matrix calculation unit.
- the first calculation unit calculates a first matrix of the projection matrix using the motion data.
- the second calculation unit calculates a second matrix of the projection matrix using the motion data, the first matrix, and the past second matrix.
- the auxiliary matrix calculation unit calculates an auxiliary matrix of the projection matrix using the motion data, the first matrix, and the past auxiliary matrix.
- In this image processing apparatus, the deformation of the image is decomposed into at least one of a translation component and a rotation component, a rolling shutter distortion component, and the remaining components, and each is calculated separately.
- First, the first calculation unit calculates the first matrix, which includes the rolling shutter distortion component, with reasonable accuracy using the motion data.
- the second calculation unit calculates a second matrix including at least one of the translation component and the rotation component using the motion data, the first matrix, and the past second matrix.
- The auxiliary matrix calculation unit calculates an auxiliary matrix including the motion components not included in the first matrix and the second matrix, using the motion data, the first matrix, and the past auxiliary matrix. In this way, the image processing apparatus decomposes the motion data into three parts and calculates them separately, so that processing appropriate to each motion component becomes possible.
- Because the translation or rotation component and the components included in the auxiliary matrix are calculated with different formulas, the translation or rotation component, which strongly reflects the user's intention, and the remaining components can be corrected independently of each other.
- That is, the motion component that strongly reflects the user's intention and the other motion components can be corrected with different filters, so it is possible both to follow the camera work appropriately and to remove artifacts caused by the user's unintended motion.
- Furthermore, because the correction of each component is independent, the correction filter can be designed easily.
- The motion calculation method of the second calculation unit is not limited; for example, a less accurate but robust calculation method can be applied. In that case, the image processing apparatus as a whole can perform robust processing.
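The three-part decomposition described above can be sketched as follows. This is a minimal illustration: the function names and, in particular, the multiplication order are assumptions, since the text only states that the projection matrix is calculated from the three matrices:

```python
def matmul(a, b):
    """3x3 matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

IDENTITY = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]

def compose_projection(first, second, auxiliary):
    """Compose the projection matrix from the three separately
    calculated parts: apply the auxiliary motion, then the
    translation/rotation matrix, then reapply the rolling shutter
    distortion. This ordering is an assumption, not from the patent."""
    return matmul(first, matmul(second, auxiliary))
```

Because each factor is produced by its own calculation unit, a different correction filter can be applied to each before composing, which is the independence the text describes.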
- The first calculation unit may calculate the first matrix of the projection matrix based on the translation component included in the motion data. By assuming that the rolling shutter distortion component is caused only by the translation component, a nearly accurate rolling shutter distortion component can be estimated easily and quickly.
- the auxiliary matrix may include a component that converts a rectangle into a trapezoid.
- When the deformation of the image is regarded as having eight degrees of freedom, including a component that converts a quadrangle into a trapezoid in the auxiliary matrix makes it possible to resolve problems caused by movements the user did not intend more naturally, without affecting the process of following the movement the user intended.
- the auxiliary matrix may include a scaling component.
- Because the scaling component tends not to be used deliberately in an imaging scene, including the scaling component in the auxiliary matrix makes it possible to resolve problems caused by the user's unintended movement more naturally, without affecting the process of following the intended movement.
- the second matrix may include a scaling component.
- For example, when the scaling component tends to be used deliberately in the imaging scene, including the scaling component in the second matrix allows the process of following the movement intended by the user to be performed more accurately.
- The motion acquisition unit may input the output value of a gyro sensor. In this way, appropriate image processing can be performed even when cooperating with hardware.
- An image processing method according to another aspect sets, in a frame image captured by an imaging device, a region smaller than the frame image, and corrects the position or shape of the region according to the movement of the imaging device to generate an output frame image.
- The image processing method includes an input step, a motion acquisition step, a matrix calculation step, and a drawing step.
- In the input step, the first frame image and the second frame image are sequentially input.
- the motion acquisition step acquires motion data between the first frame image and the second frame image.
- In the matrix calculation step, a projection matrix for projecting the output frame image onto the second frame image is calculated from a first matrix including a rolling shutter distortion component, a second matrix including at least one of a translation component in a direction orthogonal to the imaging direction and a rotation component about the imaging direction, and an auxiliary matrix including the motion components not included in the first matrix and the second matrix.
- the drawing step generates an output frame image from the second frame image using the projection matrix.
- the matrix calculation step includes a first calculation step, a second calculation step, and an auxiliary matrix calculation step. In the first calculation step, a first matrix of the projection matrix is calculated using the motion data.
- In the second calculation step, the second matrix of the projection matrix is calculated using the motion data, the first matrix, and the past second matrix.
- the auxiliary matrix calculation step calculates an auxiliary matrix of the projection matrix using the motion data, the first matrix, and the past auxiliary matrix.
- An image processing program according to another aspect causes a computer to function so as to set, in a frame image captured by an imaging device, a region smaller than the frame image, and to correct the position or shape of the region according to the movement of the imaging device to generate an output frame image.
- the program causes the computer to function as an input unit, a motion acquisition unit, a matrix calculation unit, and a drawing unit.
- the input unit sequentially inputs the first frame image and the second frame image.
- the motion acquisition unit acquires motion data between the first frame image and the second frame image.
- The matrix calculation unit calculates a projection matrix for projecting the output frame image onto the second frame image from a first matrix including a rolling shutter distortion component, a second matrix including at least one of a translation component in a direction orthogonal to the imaging direction and a rotation component about the imaging direction, and an auxiliary matrix including the motion components not included in the first matrix and the second matrix.
- the drawing unit generates an output frame image from the second frame image using the projection matrix.
- the matrix calculation unit includes a first calculation unit, a second calculation unit, and an auxiliary matrix calculation unit.
- the first calculation unit calculates a first matrix of the projection matrix using the motion data.
- the second calculation unit calculates a second matrix of the projection matrix using the motion data, the first matrix, and the past second matrix.
- the auxiliary matrix calculation unit calculates an auxiliary matrix of the projection matrix using the motion data, the first matrix, and the past auxiliary matrix.
- A recording medium according to another aspect is a computer-readable recording medium on which an image processing program is recorded.
- The image processing program causes a computer to set, in a frame image captured by an imaging apparatus, an area smaller than the frame image, and to correct the position or shape of the area according to the movement of the imaging apparatus to generate an output frame image.
- the program causes the computer to function as an input unit, a motion acquisition unit, a matrix calculation unit, and a drawing unit.
- the input unit sequentially inputs the first frame image and the second frame image.
- the motion acquisition unit acquires motion data between the first frame image and the second frame image.
- The matrix calculation unit calculates a projection matrix for projecting the output frame image onto the second frame image from a first matrix including a rolling shutter distortion component, a second matrix including at least one of a translation component in a direction orthogonal to the imaging direction and a rotation component about the imaging direction, and an auxiliary matrix including the motion components not included in the first matrix and the second matrix.
- the drawing unit generates an output frame image from the second frame image using the projection matrix.
- the matrix calculation unit includes a first calculation unit, a second calculation unit, and an auxiliary matrix calculation unit.
- the first calculation unit calculates a first matrix of the projection matrix using the motion data.
- the second calculation unit calculates a second matrix of the projection matrix using the motion data, the first matrix, and the past second matrix.
- the auxiliary matrix calculation unit calculates an auxiliary matrix of the projection matrix using the motion data, the first matrix, and the past auxiliary matrix.
- According to various aspects of the present invention, an image processing apparatus, an image processing method, an image processing program, and a recording medium are provided that can easily and appropriately determine the area to be cut out from a frame image and that simplify the design of a correction filter such as a camera shake correction filter.
- (A) shows the distortion of the subject when the camera moves in the X direction while lines along the X direction are scanned sequentially in the Y direction.
- (B) shows the distortion of the subject when the camera moves in the Y direction while lines along the X direction are scanned sequentially in the Y direction.
- (C) shows the distortion of the subject when the camera moves in the X direction while lines along the Y direction are scanned sequentially in the X direction.
- (D) shows the distortion of the subject when the camera moves in the Y direction while lines along the Y direction are scanned sequentially in the X direction.
- It is a schematic diagram explaining a system without rolling shutter distortion.
- It is a schematic diagram explaining the calculation method of the distortion coefficient.
- It is a graph showing the relationship between the moving frame image and the distortion coefficient.
- It is a graph showing the relationship between a motion matrix component and the amount of movement.
- It is a schematic diagram explaining a translation component and a rotation component.
- It is a conceptual diagram showing the relationship between an input frame image and an output frame image.
- It is a flowchart explaining the operation of the image processing apparatus according to the embodiment.
- the image processing apparatus is an apparatus that outputs an image while eliminating problems such as image shake and rolling shutter distortion.
- the image processing apparatus according to the present embodiment is employed, for example, in the case of continuous shooting of a plurality of images or moving image shooting.
- the image processing apparatus according to the present embodiment is preferably mounted on a mobile terminal with limited resources such as a mobile phone, a digital camera, and a PDA (Personal Digital Assistant), but is not limited thereto. For example, it may be mounted on a normal computer system.
- a mobile terminal having a camera function will be described as an example in consideration of ease of understanding.
- FIG. 1 is a functional block diagram of a mobile terminal 2 including an image processing apparatus 1 according to the present embodiment.
- a mobile terminal 2 shown in FIG. 1 is a mobile terminal carried by a user, for example, and has a hardware configuration shown in FIG.
- FIG. 2 is a hardware configuration diagram of the mobile terminal 2.
- The portable terminal 2 physically includes a CPU (Central Processing Unit) 100, main storage devices such as a ROM (Read Only Memory) 101 and a RAM (Random Access Memory) 102, an input device 103 such as a camera or a keyboard, an output device 104 such as a display, and an auxiliary storage device 105 such as a hard disk, and is configured as an ordinary computer system.
- Each function of the portable terminal 2 and of the image processing apparatus 1 described later is realized by reading predetermined computer software onto hardware such as the CPU 100, the ROM 101, and the RAM 102, operating the input device 103 and the output device 104 under the control of the CPU 100, and reading and writing data in the main storage devices and the auxiliary storage device 105.
- The image processing apparatus 1 may likewise be configured as an ordinary computer system that includes the CPU 100, main storage devices such as the ROM 101 and the RAM 102, the input device 103, the output device 104, and the auxiliary storage device 105.
- the mobile terminal 2 may include a communication module or the like.
- the mobile terminal 2 includes a camera (imaging device) 20, an image processing device 1, an image recording unit 22, a previous data recording unit 23, and a display unit 21.
- the camera 20 has a function of capturing an image.
- For the camera 20, a CMOS image sensor or the like is used, and an image is captured by a focal-plane shutter system. That is, the camera 20 scans in the vertical or horizontal direction of the image and reads in pixel values line by line.
- the camera 20 has a continuous imaging function that repeatedly captures images at a predetermined interval from a timing specified by a user operation or the like, for example. That is, the camera 20 has a function of acquiring not only a still image (still frame image) but also a moving image (continuous moving frame image).
- the user can freely take a picture by sliding the camera 20, rotating the camera 20 with a predetermined position as an origin, tilting the camera 20 in the horizontal and vertical directions, or combining the above-described operations.
- the camera 20 has a function of outputting a captured frame image to the image processing apparatus 1 every time it is captured.
- the display unit 21 is a display device that can display an image or a video.
- The image processing apparatus 1 has a function of outputting frame images while eliminating problems such as camera shake and rolling shutter distortion. For example, as shown in FIGS. 3(A) and 3(B), let the frame images sequentially captured by the camera 20 be frame i-1 and frame i, and let their center positions be Cf i-1 and Cf i.
- the image processing apparatus 1 sets a cutout area K i-1 having a size smaller than that of the frame image frame i-1 .
- the size of the cutout area K i-1 is 70 to 90% of the size of the frame image frame i-1 .
- the center position of the cutout area K i-1 is Cr i-1 .
- This cutout area K i-1 is an output frame image.
- Suppose the camera 20 moves from the imaging position shown in (A) to the imaging position shown in (B) (a shift to the upper right, indicated by the solid arrow in FIG. 3(B)).
- In this case, a frame image frame i shifted to the upper right with respect to the frame image frame i-1 is obtained.
- The image processing apparatus 1 then sets the clipping region K i at a position that cancels the movement between the frame image frame i-1 and the frame image frame i (a shift down and to the left, indicated by the dotted arrow in FIG. 3(B)).
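The cut-out behaviour of FIG. 3 can be illustrated with a small sketch. The function names and the default ratio are assumptions; the text only gives a 70-90% range:

```python
def cutout_region(frame_w, frame_h, ratio=0.8):
    """Centred cut-out region whose sides are `ratio` times the frame
    size (70-90% in the text); returns (left, top, width, height)."""
    w, h = int(frame_w * ratio), int(frame_h * ratio)
    return (frame_w - w) // 2, (frame_h - h) // 2, w, h

def shifted_center(center, camera_motion):
    """Move the cut-out centre opposite to the camera motion so the
    output frame stays still (the dotted arrow in FIG. 3(B))."""
    return center[0] - camera_motion[0], center[1] - camera_motion[1]
```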
- the image processing apparatus 1 includes an input unit 10, a motion acquisition unit 11, a calculation unit (matrix calculation unit) 30, and a drawing unit 17.
- the input unit 10 has a function of inputting a frame image captured by the camera 20.
- the input unit 10 has a function of inputting, for example, a frame image captured by the camera 20 every time it is captured. Further, the input unit 10 has a function of storing the frame image in the image recording unit 22 provided in the mobile terminal 2.
- the input unit 10 has a function of outputting an input frame image to the motion acquisition unit 11.
- The motion acquisition unit 11 has a function of acquiring the motion between frame images using an input frame image (second frame image) and a frame image captured at or before the immediately preceding time (first frame image).
- the first frame image is stored in the image recording unit 22, for example.
- the motion acquisition unit 11 refers to the image recording unit 22 and uses a frame image input in the past as a reference frame image, and acquires relative motion data between the reference frame image and the input frame image.
- the reference frame image only needs to overlap with the input frame image to be processed over a certain area.
- The motion acquisition unit 11 does not need to change the reference frame image every time an input frame image is input; when the input frame image to be processed no longer overlaps the reference frame image over a certain area, the reference frame image may be updated from the next input onward.
- the motion acquisition unit 11 compares frame images and acquires, for example, motion data P (observation value) having 8 degrees of freedom.
- the motion acquisition unit 11 outputs the motion data P to the calculation unit 30.
- the motion acquisition unit 11 may acquire the motion data P by inputting the output value of the gyro sensor provided in the mobile terminal 2.
- the gyro sensor has a function of detecting and outputting motion data P between the reference frame image and the input frame image.
- FIG. 4 is a schematic diagram illustrating a motion with 8 degrees of freedom.
- Assuming the image before deformation is a square, the image is deformed into (A) to (H) by the movement of the camera 20.
- (A) is enlargement / reduction
- (B) is a parallelogram (horizontal direction)
- (C) is a parallelogram (vertical direction)
- (D) is rotation
- (E) is parallel movement (lateral direction)
- (F) is parallel movement (vertical direction)
- (G) is trapezoid (horizontal direction)
- (H) is trapezoid (vertical direction).
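The eight degrees of freedom (A)-(H) correspond to the eight free parameters of a 3x3 planar homography. A sketch of the elementary matrices follows; the parameter names are hypothetical, not taken from the patent:

```python
import math

def scaling(s):                       # (A) enlargement / reduction
    return [[s, 0.0, 0.0], [0.0, s, 0.0], [0.0, 0.0, 1.0]]

def shear_x(k):                       # (B) parallelogram, horizontal
    return [[1.0, k, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]

def shear_y(k):                       # (C) parallelogram, vertical
    return [[1.0, 0.0, 0.0], [k, 1.0, 0.0], [0.0, 0.0, 1.0]]

def rotation(theta):                  # (D) rotation
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def translation(tx, ty):              # (E), (F) parallel movement
    return [[1.0, 0.0, tx], [0.0, 1.0, ty], [0.0, 0.0, 1.0]]

def perspective(px, py):              # (G), (H) trapezoidal terms
    return [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [px, py, 1.0]]
```

The perspective terms px, py are what turn a rectangle into a trapezoid, i.e. the components the text assigns to the auxiliary matrix.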
- the image processing apparatus 1 sets the cutout area K i in accordance with the above deformation.
- FIG. 5 is a schematic diagram showing the relationship among the reference frame image frame i-1 , the frame image frame i to be processed, the cutout areas K i-1 and K i , and the output frame image out-frame i .
- P represents motion data (motion vector) between the reference frame image frame i ⁇ 1 and the frame image frame i to be processed.
- the motion data P is known by the motion acquisition unit 11.
- A conversion formula (projection matrix) that associates the cut-out region K i-1 of the reference frame image frame i-1 with the output frame image out-frame i-1 is denoted by P dst i-1.
- the projection matrix P dst i ⁇ 1 is also referred to as the previous projection matrix or the first projection matrix. Since the projection matrix P dst i ⁇ 1 is a value calculated in the previous calculation, it is a known value.
- The projection matrix P dst i-1 is stored, for example, in the previous data recording unit 23.
- In this case, as shown in FIG. 5, the conversion formula (projection matrix) P dst i that associates the cut-out region K i of the frame image frame i with the output frame image out-frame i can be calculated from the motion data P and the first projection matrix P dst i-1 by the following mathematical formula (1).
- the projection matrix P dst i is also referred to as a second projection matrix.
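Formula (1) itself is not reproduced in this extract. One plausible reading, given that P dst i-1 maps the output frame into frame i-1 and the motion data P maps frame i-1 into frame i, is a simple matrix composition:

```python
def matmul(a, b):
    """3x3 matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def projection_update(P, P_dst_prev):
    """Plausible form of formula (1): map the output frame into
    frame i-1 via the previous projection, then into frame i via
    the inter-frame motion, i.e. P_dst_i = P * P_dst_(i-1).
    The exact formula is an assumption, not quoted from the patent."""
    return matmul(P, P_dst_prev)
```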
- With formula (1) alone, however, the cut-out area is completely fixed.
- In practice, it is necessary to follow the movements intended by the user (for example, translation of the camera and pan/tilt) while eliminating problems such as camera shake.
- For this purpose, a high-pass filter is used to separate the motion intended by the user from the motion that causes problems.
- the second projection matrix P dst i is expressed by the following formula (2).
- Here, HPF denotes a high-pass filter. That is, high-frequency components such as camera shake are passed so that the screen is held fixed against them, while low-frequency components, which are the movements intended by the user, are cut so that the screen follows the user's movement.
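The patent does not reproduce formula (2) or specify the filter design, but the behaviour described (pass high-frequency shake, cut low-frequency intentional motion) can be sketched for a single scalar motion parameter as follows. The filter structure and the constant `alpha` are assumptions:

```python
class HighPassFilter:
    """Minimal sketch of the HPF: an exponential moving average
    tracks the low-frequency part (the intentional pan/tilt) and is
    subtracted, so only the high-frequency shake passes through and
    is cancelled. `alpha` is an assumed smoothing constant, not a
    value from the patent."""

    def __init__(self, alpha=0.9):
        self.alpha = alpha
        self.low = 0.0

    def __call__(self, x):
        self.low = self.alpha * self.low + (1.0 - self.alpha) * x
        return x - self.low
```

With a constant (intentional) motion the output decays toward zero, so the cut-out follows the camera work; a sudden jolt passes almost unchanged and is compensated.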
- the motion data P includes a rolling shutter distortion component.
- FIG. 6 is a schematic diagram illustrating rolling shutter distortion.
- Rolling shutter distortion means that the subject is distorted into a parallelogram in the horizontal direction when the camera 20 moves in the horizontal direction relative to the subject, as shown in FIG. 6(A).
- When the camera 20 moves in the vertical direction, the rolling shutter distortion appears as expansion or contraction of the subject in the vertical direction, as shown in FIG. 6(B). Since the rolling shutter distortion component must always be corrected, it is a component that should always pass the high-pass filter; in other words, there is no need to apply a high-pass filter to it. Therefore, a framework is constructed in which the rolling shutter distortion component is first separated, and then the movement intended by the user is followed while problems such as camera shake are eliminated.
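The two distortion patterns of FIG. 6 follow from the row-by-row exposure of a focal-plane shutter. A toy model, in which the readout direction, the linear-delay assumption, and all names are illustrative rather than taken from the patent:

```python
def rolling_shutter_shift(points, vx, vy, row_delay):
    """Rows are assumed read out top to bottom, so a point on row y
    is exposed y * row_delay later and appears shifted by the camera
    velocity times that delay. Horizontal motion (vx) shears a
    square into a parallelogram; vertical motion (vy) stretches or
    squashes it vertically."""
    return [(x + vx * y * row_delay, y + vy * y * row_delay)
            for x, y in points]
```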
- FIG. 7 is a schematic diagram for explaining this framework. As shown in FIG. 7, suppose there is a frame image frame i to be processed that has been moved by the motion data P from the reference frame image frame i-1.
- the known value is indicated by a solid arrow, and the unknown value is indicated by a dotted arrow.
- An arrow P is motion data (motion vector) and is known.
- An arrow D i-1 is an operator (first matrix D i-1 ) indicating a rolling shutter distortion component of the reference frame image frame i-1 .
- the first matrix D i-1 is known because it is a value obtained in the previous calculation.
- the first matrix D i-1 is stored, for example, in the previous data recording unit 23. By using the first matrix D i-1 , it is possible to convert an input frame image into a frame image without rolling shutter distortion.
- strictly speaking, since the first matrix D i-1 is not an exact value, the frame image of the input image system is not converted into a frame image of a system in which rolling shutter distortion is completely absent, but into a frame image of a system in which the rolling shutter distortion is reduced.
- the accuracy of the first matrix D i-1 determines the degree to which the rolling shutter distortion component remains unaccounted for. Nevertheless, the conversion yields a system in which the rolling shutter distortion is reduced, so the rolling shutter distortion component can be separated to some extent.
- the framework of the present embodiment allows a certain amount of error to be included in removing rolling shutter distortion.
- the matrix S i-1 indicated by an arrow is a projection matrix that associates the cut-out region of RS_frame i-1 with the output frame image out-frame i-1 . Since the projection matrix S i-1 is a value obtained in the previous calculation, it is known.
- the projection matrix S i-1 can be calculated using, for example, the first projection matrix P dst i-1 and the first matrix D i-1 obtained in the previous calculation.
- the projection matrix S i-1 is stored, for example, in the previous data recording unit 23.
- the projection matrix S i-1 is also referred to as a first projection matrix of a system in which the rolling shutter distortion component is reduced.
- the motion data N in the system in which the rolling shutter distortion is reduced can be calculated.
- the motion data N is also referred to as temporary motion data. If the motion data N in the system in which the rolling shutter distortion is reduced is obtained, then, as in equation (2), the projection matrix S i of that system (the second projection matrix of the system with reduced rolling shutter distortion), which associates the cut-out area of RS_frame i with the output frame image out-frame i , is expressed by the following equation (3).
- the second projection matrix P dst i is then expressed by the following formula (4). In this way, by converting to the system in which rolling shutter distortion is reduced, the high-pass filter can be applied only to N·S i-1 , which does not involve the rolling shutter distortion component.
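- The recursion of equations (3) and (4) can be sketched as follows. This is an illustrative reading only: matrices are 3x3 homogeneous matrices, the function names are assumptions, and the high-pass filter is replaced by a trivial stand-in that pulls its argument toward the identity (the actual filter design is discussed later in the text).

```python
def matmul3(A, B):
    # 3x3 matrix product.
    return [[sum(A[r][k] * B[k][c] for k in range(3)) for c in range(3)]
            for r in range(3)]

def hpf_stub(M, k=0.9):
    # Stand-in "high-pass filter": pulls M toward the identity so that,
    # iterated over frames, slowly varying (intended) motion is cut.
    return [[(1.0 if r == c else 0.0) * (1.0 - k) + k * M[r][c]
             for c in range(3)] for r in range(3)]

def update_projection(N, S_prev, D_i):
    S_i = hpf_stub(matmul3(N, S_prev))  # equation (3): S_i = HPF(N * S_{i-1})
    P_dst_i = matmul3(D_i, S_i)         # equation (4): restore the distortion
    return S_i, P_dst_i

I3 = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
S_i, P_dst_i = update_projection(I3, I3, I3)  # no motion: stays at identity
```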
- the calculation unit 30 is a calculation unit for realizing the above framework. That is, the calculation unit 30 uses the previous data already calculated and recorded in the previous data recording unit 23, together with the current measured data, to calculate the second projection matrix P dst i that projects the output frame image onto the second frame image.
- here, a case will be described in which the previous data recording unit 23 stores the first matrix D i-1 indicating the rolling shutter distortion component of the frame image input before the second frame image, the previous motion data N PRE , the previous second matrix and the previous correction matrix between frame images input before the second frame image, and the projection matrix S i-1 in the system with reduced rolling shutter distortion. Details of the second matrix and the correction matrix will be described later.
- the calculation unit 30 calculates the second projection matrix P dst i , which projects the output frame image onto the second frame image, from the first matrix D i including the rolling shutter distortion component, the second matrix including at least one of a parallel movement component in a direction orthogonal to the imaging direction and a rotation component based on the imaging direction, and the auxiliary matrix that does not include the motion components contained in the first matrix D i and the second matrix. That is, the calculation unit 30 calculates the second projection matrix P dst i in a state where it is separated into the first matrix D i , the second matrix, and the auxiliary matrix, handling each independently.
- the calculation unit 30 includes a first calculation unit 12, a second calculation unit 13, and an auxiliary matrix calculation unit 14.
- the first calculator 12 calculates a first matrix D i that includes a rolling shutter distortion component.
- the first calculator 12 has a function of calculating a rolling shutter distortion component based on the parallel movement component.
- the amount of distortion increases as the moving speed of the camera 20 increases. From this, it can be said that the rolling shutter distortion can be estimated using the moving speed of the camera 20.
- the speed of movement of the camera 20 can be estimated using the amount of parallel movement between frames. Accordingly, the first calculation unit 12 calculates the first matrix D i including the rolling shutter distortion component based on, for example, the parallel movement amount included in the motion data P.
- a value obtained by averaging the parallel movement amounts included in the motion data P i-1 and the motion data P i may be used as the parallel movement amount.
- the rolling shutter distortion component is expressed by the following equation (5), where (x i , y i , 1) t is the coordinate system with distortion and (X i , Y i , 1) t is the coordinate system without distortion.
- the value of Y affects the distortion component.
- d x i and d y i included in the rolling shutter distortion component are parallel movement components of the image, and ⁇ is a distortion coefficient.
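- In matrix form, equation (5) can be sketched as below. This assumes the equation reads x i = X i + α·Y i ·d x i and y i = (1 + α·d y i )·Y i , which is consistent with the statement that the value of Y affects the distortion component; the helper names are illustrative, not part of the embodiment.

```python
def distortion_matrix(alpha, dx, dy):
    # Maps undistorted (X, Y, 1) to distorted (x, y, 1):
    #   x = X + alpha * dx * Y,   y = (1 + alpha * dy) * Y
    return [[1.0, alpha * dx, 0.0],
            [0.0, 1.0 + alpha * dy, 0.0],
            [0.0, 0.0, 1.0]]

def undistort_point(D, x, y):
    # Analytic inverse of the matrix above.
    Y = y / D[1][1]
    X = x - D[0][1] * Y
    return X, Y

D = distortion_matrix(alpha=0.001, dx=20.0, dy=10.0)
X, Y = undistort_point(D, 101.0, 50.5)  # recovers the undistorted point
```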
- the distortion coefficient α is a value calculated by dividing the time for reading one line of a frame image by the value obtained by adding the time for reading the entire frame image and the time until reading of the next frame image begins.
- in other words, the distortion coefficient α is the value obtained by dividing the time for reading one line of a frame image by the time from the reading of the first line of the frame image to the reading of the first line of the next frame image.
- the distortion coefficient ⁇ varies depending on the specification of the pixel sensor provided in the camera 20 and the driving setting of the pixel sensor.
- the distortion coefficient α can be calculated by the following equation.
- the time difference t F between adjacent frames can be derived based on the frame rate of the moving frame image sequence.
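- Under the definition above, a minimal computation of α from the line readout time and the frame rate might look like the following; the unit choices and names are assumptions for illustration.

```python
def distortion_coefficient(t_line_us, frame_rate_hz):
    # alpha = (time to read one line) / (time between the starts of
    # reading two consecutive frames, derived from the frame rate).
    t_frame_us = 1e6 / frame_rate_hz
    return t_line_us / t_frame_us

# Example: a 30 us line readout at 30 fps.
alpha = distortion_coefficient(t_line_us=30.0, frame_rate_hz=30.0)
```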
- FIG. 8 is a schematic diagram illustrating a method for calculating the distortion coefficient ⁇ .
- the distortion coefficient α is calculated using a still frame image frame b and a moving frame image sequence frame i . By comparing the still frame image with the moving frame images to determine how much distortion occurs, and by determining how much translation occurs between the moving frame images, the correlation between the distortion amount and the translation amount is found. The distortion coefficient α is calculated from this correlation.
- first, the subject and the camera 20 are held still, and a still frame image frame b is captured. Then, while moving the subject or the camera 20, the moving frame image sequence frame 0 , frame 1 , frame 2 , ..., frame i-1 , frame i is captured.
- the distortion amount of the moving frame image frame i can be obtained by calculating the motion matrix M b→i from the still frame image frame b to the moving frame image frame i . Assuming that the coordinate system of the still frame image frame b is (x b , y b , 1) and the coordinate system of the moving frame image frame i is (x i , y i , 1), the relationship is expressed by the following equation (6).
- assuming that the motion matrix M b→i consists only of a translation component and a distortion component, the distortion amount can be approximated as in the following equation (7).
- Formula (5) is compared with Formula (7).
- the distortion components are m 01 b→i and m 11 b→i .
- the source of the distortion component is a parallel movement amount (d x i , d y i ) t as a motion component between successive frames.
- a motion matrix M i-1→i is obtained from the moving frame image frame i and the immediately preceding moving frame image frame i-1 .
- the relationship between the moving frame image frame i and the immediately preceding moving frame image frame i-1 can be expressed by the following equation (8).
- the motion component between frames (m 02 i-1→i , m 12 i-1→i ) t may be used as the translation component (d x i , d y i ) t via the above equation (8). Alternatively, the parallel movement amount of the center coordinates of the moving frame image frame i may be used as (d x i , d y i ) t . In addition, a first parallel movement amount may be calculated using the frame image frame i and the immediately preceding frame image frame i-1 among the continuous frame images, a second parallel movement amount may be calculated using the frame image frame i and the immediately following frame image frame i+1 , and the translation amount may be obtained as an average (weighted average, polynomial approximation, etc.) of the first parallel movement amount and the second parallel movement amount. Accuracy can be improved by using such an averaged translation amount. By measuring (d x i , d y i ) t , the distortion coefficient α can be expressed by the following equations (9) and (10).
- that is, by measuring m 01 b→i , m 11 b→i , d x i , and d y i , the distortion coefficient α can be obtained using equations (9) and (10).
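- Equations (9) and (10) are not reproduced in this excerpt. Assuming, per equation (7), that the distortion components scale linearly with the translation amount (m 01 ≈ α·d x and m 11 ≈ 1 + α·d y ), a provisional per-frame estimate can be sketched as follows; this reading of the equations is an assumption.

```python
def provisional_alpha(m01, m11, dx, dy):
    # Two independent estimates of alpha, one from the horizontal and
    # one from the vertical distortion component; usable ones are averaged.
    estimates = []
    if abs(dx) > 1e-6:
        estimates.append(m01 / dx)          # from m01 ~ alpha * dx
    if abs(dy) > 1e-6:
        estimates.append((m11 - 1.0) / dy)  # from m11 ~ 1 + alpha * dy
    return sum(estimates) / len(estimates) if estimates else None

a = provisional_alpha(m01=0.018, m11=1.009, dx=20.0, dy=10.0)
```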
- the measured values m 01 b→i , m 11 b→i , d x i , and d y i used for obtaining the distortion coefficient α include errors.
- therefore, the distortion coefficient obtained from one frame image frame i is treated as a provisional distortion coefficient, provisional distortion coefficients are obtained for each of a plurality of frame images, and the error is made to converge using these provisional distortion coefficients, whereby a corrected distortion coefficient α with high accuracy may be calculated.
- the horizontal axis represents frame i and the vertical axis represents distortion coefficient ⁇ i .
- provisional distortion coefficients α i may be obtained for various frame i , and the average value (weighted average value) may be adopted as the distortion coefficient α.
- alternatively, the provisional distortion coefficients α i may be obtained for various frame i , and their median value may be adopted as the distortion coefficient α.
- alternatively, the horizontal axis may represent the movement amount d x i and the vertical axis the motion matrix component m 01 b→i . As shown in FIG., the measured values may be plotted on a two-dimensional plane having the parallel movement amount and the rolling shutter distortion component as coordinate axes, and the distortion coefficient α may be obtained from the slope of the regression line shown in equation (10). Even when the measured values m 01 b→i , m 11 b→i , d x i , and d y i are small and the influence of errors is large, the distortion coefficient α can be obtained accurately by the above-described method.
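- A sketch of the regression-based estimate, taking the slope of a line through the origin on the translation/distortion plane and clamping the result to the upper limit (the reciprocal of the number of lines); the function and variable names are illustrative assumptions.

```python
def alpha_by_regression(dx_list, m01_list, n_lines):
    # Least-squares slope of a regression line through the origin,
    # clamped to the upper limit 1 / N_L.
    num = sum(d * m for d, m in zip(dx_list, m01_list))
    den = sum(d * d for d in dx_list)
    return min(num / den, 1.0 / n_lines)

dx = [5.0, 10.0, 20.0, 40.0]                 # measured translation amounts
m01 = [0.0051, 0.0098, 0.0205, 0.0396]       # measured distortion components
alpha = alpha_by_regression(dx, m01, n_lines=720)
```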
- the provisional distortion coefficient α i and the distortion coefficient α need to be equal to or less than the above-described upper limit value (the reciprocal of the number of lines N L of the frame image).
- when the provisional distortion coefficient α k of a predetermined frame image frame k is greater than the reciprocal of the number of lines N L of the frame image, the provisional distortion coefficient α k for that frame image frame k may first be corrected to the reciprocal of the number of lines N L , and the distortion coefficient α may then be calculated by the method using the average value, the median value, or the regression line described above.
- alternatively, when the provisional distortion coefficient α k of a predetermined frame image frame k is greater than the reciprocal of the number of lines N L of the frame image, the distortion coefficient α may be calculated by the method using the average value, the median value, or the regression line described above while excluding the provisional distortion coefficient α k of that frame image frame k .
- further, when the calculated distortion coefficient α exceeds the upper limit, it may be corrected to the reciprocal of the number of lines N L of the frame image.
- the rolling shutter distortion varies depending on the specification of the pixel sensor and the driving setting of the pixel sensor.
- the rolling shutter distortion can be accurately estimated by reflecting the conditions specific to the camera 20.
- since the distortion coefficient α is calculated using actual measured values, the imaging environment conditions under which the distortion coefficient α was calculated may also be recorded.
- the imaging environment condition includes, for example, “brightness” or “temperature”.
- the first calculation unit 12 is configured to be able to refer to a camera information recording unit that records the distortion coefficient α calculated by the above method. For example, a table in which pixel sensor setting values and distortion coefficients α are associated with each other may be provided, and a table in which imaging environment conditions and distortion coefficients α are associated with each other may also be provided.
- the first calculation unit 12 refers to the camera information recording unit, acquires the value of the distortion coefficient ⁇ according to the setting of the pixel sensor, and estimates the rolling shutter distortion component using the camera motion component. Information regarding the current setting information of the pixel sensor and information regarding the imaging environment may be acquired from the camera 20, for example.
- the time for reading one line of the frame image, the time for reading the entire frame image, and the time for reading the next frame image are recorded in the camera information recording unit in association with the setting information and the information about the imaging environment.
- the distortion coefficient ⁇ may be calculated based on information recorded in the camera information recording unit instead of directly acquiring the distortion coefficient ⁇ from the camera information recording unit.
- since the first calculation unit 12 derives the first matrix D i in consideration of only parallel movement as described above, the calculated first matrix D i is not a true value, but an approximately correct value can be obtained.
- the first calculation unit 12 outputs the calculated first matrix Di to the second calculation unit 13, the auxiliary matrix calculation unit 14, and the drawing unit 17.
- by using the first matrix D i , the frame image frame i to be processed can be converted into the image RS_frame i of the system without rolling shutter distortion.
- the motion data N in the system with reduced rolling shutter distortion can also be derived using the motion data P and the first matrix D i .
- the high-pass filter can be applied only to N·S i-1 , in which the rolling shutter distortion is not considered.
- although N·S i-1 is a matrix, the calculation unit 30 handles its motion components by dividing them into the motion of the camera 20 reflecting the user's intention and the other motion components.
- the second calculator 13 calculates a second matrix including the movement of the camera 20 that reflects the user's intention.
- the auxiliary matrix calculation unit 14 calculates an auxiliary matrix.
- the second calculation unit 13 and the auxiliary matrix calculation unit 14 are configured to be able to execute their operations in the system without rolling shutter distortion by using the calculated first matrix D i .
- the second calculation unit 13 estimates that the movement of the camera 20 reflecting the user's intention is at least one of a translation component and a rotation component.
- the second calculation unit 13 may calculate the motion with the translation component and the rotation component treated as separate components, or may calculate a motion component (combined component) obtained by combining the translation component and the rotation component.
- for example, the second calculation unit 13 may calculate a combined component of the translation component and the rotation components about the x-axis and the y-axis.
- the second calculation unit 13 may also perform the estimation in consideration of an enlargement/reduction component depending on the imaging scene. That is, the second matrix may include only the translation component, may include only the rotation component, may include the translation component and the rotation component, or may include an enlargement/reduction component in addition to the translation component and the rotation component.
- FIG. 11 is a schematic diagram for explaining the translational component and the rotational component.
- FIG. 11 shows the relationship between the coordinate system (x, y) of the output frame image out-frame and the coordinate system (X, Y) of the frame image frame to be processed.
- the axis extending in the imaging direction is the z-axis, and the axes extending in the directions orthogonal to the imaging direction are the x-axis and the y-axis. The translation component is a component of movement in a direction parallel to the x-axis, the y-axis, or the z-axis.
- the rotational component is at least one rotational component of the x-axis, y-axis, and z-axis, and is at least one of a yaw component, a roll component, and a pitch component.
- the second calculating unit 13 calculates the user motion matrix A c to calculate the second matrix.
- the user motion matrix A c is a matrix used for calculating the second matrix, and includes at least one of a parallel movement component in a direction orthogonal to the imaging direction and a rotation component based on the imaging direction. For example, as illustrated in FIG. 11, the second calculation unit 13 calculates the user motion matrix A c that associates the center coordinates (0, 0) of the output frame image out-frame with the center (0, 0) of the frame image frame, which is the second frame image.
- the second calculating unit 13 can calculate the user motion matrix A c from the motion data P.
- for example, the second calculation unit 13 specifies the center coordinates of the frame image frame using the known first projection matrix P dst i-1 , which projects the output frame image out-frame onto the first frame image, and the motion data P.
- the second calculation unit 13 then calculates the user motion matrix A c that projects the coordinates (x, y) of the output frame image out-frame onto the center (X, Y) of the frame image frame by translation and rotation.
- the yaw component, roll component, and pitch component can be estimated to some extent accurately from the amount of movement near the center and the focal length, for example.
- the second calculation unit 13 outputs the calculated user motion matrix Ac to the auxiliary matrix calculation unit 14.
- the second calculating unit 13 has a function for calculating a second matrix by correcting the user motion matrix A c.
- the second calculation unit 13 has a function of removing a high-frequency component due to camera shake with respect to the movement of the camera 20 that the user intended.
- the second calculation unit 13 removes the frequency component resulting from camera work using a first filter.
- the first filter is, for example, a high pass filter.
- further, the second calculation unit 13 may realize following of the camera work by applying a spring model in which the cut-out area is returned toward the center position as the cut-out area K i approaches the outer edge frame of the frame image frame i .
- for example, the second calculation unit 13 may obtain the second matrix by applying to the user motion matrix A c a first filter that acts to return the cut-out area toward the center position in proportion to the square of the distance to the outer edge.
- the second calculation unit 13 may adjust the spring coefficient (proportional coefficient) in each of the x-axis, the y-axis, and the z-axis.
- the second calculation unit 13 may set the friction to be high near the center of the frame image frame i . By setting in this way, followable stability can be improved.
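- One plausible reading of this spring model, sketched for a single cut-out-center coordinate. The coefficients, the two friction values, and the squared-excursion restoring force are all assumptions; the text itself only states a square-law restoring force and higher friction near the center.

```python
def spring_step(pos, velocity, half_width, k=0.001):
    """One update of a cut-out-center coordinate under the spring model."""
    # Restoring force toward the center, growing with the square of the
    # excursion (sign restored via abs); one reading of the square law.
    force = -k * pos * abs(pos)
    # Stronger damping (higher "friction") near the center of the frame,
    # which improves the stability of the following behavior.
    damping = 0.5 if abs(pos) < 0.2 * half_width else 0.7
    velocity = damping * velocity + force
    return pos + velocity, velocity

pos, vel = 80.0, 0.0  # cut-out center displaced toward the outer edge
for _ in range(200):
    pos, vel = spring_step(pos, vel, half_width=100.0)
```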
- the auxiliary matrix calculation unit 14 calculates the “remaining motion matrix B c ” of the motion data N using the user motion matrix A c .
- the components included in the matrix B c include, for example, the remaining components among the eight components illustrated in FIG. 4 that are not represented by the first matrix D i and the user motion matrix A c , and error components that are not accurately represented by the first matrix D i and the user motion matrix A c (for example, an error component of the rolling shutter component and an error component of the rotation component).
- the remaining motion matrix B c varies depending on the set degrees of freedom and the user motion matrix A c .
- for example, depending on the set degrees of freedom, the remaining motion matrix B c includes the components that convert a square into a trapezoid ((G) and (H) shown in FIG. 4) and error components; includes an enlargement/reduction component, the components that convert a square into a trapezoid ((G), (H)), and error components; includes an enlargement/reduction component and error components; or includes only error components.
- the auxiliary matrix calculation unit 14 calculates the motion matrix B c including the remaining motion components, using N·S i-1 , which is calculated from the first projection matrix S i-1 and the motion data N in the system in which the rolling shutter distortion component is reduced, and the user motion matrix A c .
- here, a case where the user motion matrix A c mainly includes rotation components (the yaw component y i , the pitch component p i , and the roll component r i ) will be described.
- the auxiliary matrix calculation unit 14 calculates a motion matrix B c including the remaining motion components using the following formula (11) or (12).
- that is, the auxiliary matrix calculation unit 14 divides the motion components of N·S i-1 into the user motion matrix A c , which includes the yaw component y i , the pitch component p i , and the roll component r i , and the remaining motion matrix B c , which includes the other motion components l i ; the user motion matrix A c is calculated separately and substituted into the above equation (11) or (12) to obtain the component l i .
- Equation (11) will be described.
- the auxiliary matrix calculation unit 14 has a function of correcting the calculated “remaining motion matrix B c ” and calculating an auxiliary matrix.
- the remaining motion matrix B c is a matrix that includes components auxiliary to the motion, and may include errors or ignored components. For this reason, ideally, the remaining motion matrix B c should be a unit matrix representing an identity mapping. The question is therefore at what rate the remaining motion matrix B c should be brought to a unit matrix.
- the second correction unit 16 corrects the remaining motion matrix B c so that the error is eliminated progressively, that is, so that the remaining motion matrix B c gradually becomes a unit matrix rather than being suddenly corrected to a unit matrix. For example, the difference between the unit matrix and the remaining motion matrix B c may be calculated, and the correction may be performed so that the difference becomes 80%.
- the auxiliary matrix calculation unit 14 may realize the above processing using a high-pass filter. By avoiding sudden correction to the unit matrix in this way, it is possible to prevent the output frame image from ultimately becoming unnatural.
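- The gradual correction can be sketched as keeping a fixed fraction (80% here, per the example in the text) of the difference between the remaining motion matrix B c and the unit matrix at each step; the function name is an illustrative assumption.

```python
def relax_toward_identity(B, keep=0.8):
    # Keep 80% of B's difference from the unit matrix instead of
    # jumping to the unit matrix in a single step.
    I = [[1.0 if r == c else 0.0 for c in range(3)] for r in range(3)]
    return [[I[r][c] + keep * (B[r][c] - I[r][c]) for c in range(3)]
            for r in range(3)]

B = [[1.0, 0.02, 3.0],
     [0.0, 1.01, -2.0],
     [0.0, 0.0, 1.0]]
B1 = relax_toward_identity(B)  # applied once per frame, B tends to identity
```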
- the second matrix is a matrix obtained by applying a high-pass filter to the user motion matrix A c including the yaw component y i , the pitch component p i , and the roll component r i , and the auxiliary matrix is a matrix obtained by applying a high-pass filter to the remaining motion matrix B c .
- the high-pass filter applied to the user motion matrix A c may be designed as an elaborate one-variable filter. That is, the filter is designed accurately only for the movement of the camera 20 reflecting the user's intention.
- the high-pass filter may be designed using, for example, data stored in the previous data recording unit 23.
- the second calculation unit 13 outputs the calculated second matrix to the previous data recording unit 23 to be recorded.
- the high-pass filter for deriving the second matrix is designed using, for example, the past second matrix stored in the previous data recording unit 23.
- the auxiliary matrix calculation unit 14 applies a high-pass filter to the remaining motion components l i .
- the high-pass filter may be designed using, for example, data stored in the previous data recording unit 23. For example, every time the auxiliary matrix is calculated, the auxiliary matrix calculating unit 14 outputs the calculated auxiliary matrix to the previous data recording unit 23 to be recorded.
- the high-pass filter for deriving the auxiliary matrix is designed using, for example, the past auxiliary matrix stored in the previous data recording unit 23.
- on the other hand, the high-pass filter applied to the remaining motion components l i may have a simple design that approximates the identity matrix. In this way, a precisely designed filter can be distinguished from a filter that only needs a certain degree of accuracy, so that the overall filter design is facilitated.
- as described above, the calculation unit 30 derives the projection matrix S i in the system without rolling shutter distortion using the second matrix and the auxiliary matrix. Then, the calculation unit 30 derives the second projection matrix P dst i , which associates the output frame image out-frame i with the input frame image frame i , using the first matrix D i , the second matrix, and the auxiliary matrix.
- the drawing unit 17 calculates the cutout region K i of the input frame image frame i using the second projection matrix P dst i and outputs it to the display unit 21 as the output frame image out-frame i .
- FIG. 13 is a flowchart showing the operation of the image processing apparatus 1 according to this embodiment.
- the control process shown in FIG. 13 is executed, for example, at the timing when the imaging function of the mobile terminal 2 is turned on, and is repeatedly executed at a predetermined cycle.
- the input frame image to be processed is the second and subsequent input frame images.
- the image processing apparatus 1 executes an image input process (S10: input step).
- in the process of S10, the input unit 10 inputs an input frame image frame i from the camera 20.
- next, the process proceeds to a motion acquisition process (S12: motion acquisition step). In the process of S12, the motion acquisition unit 11 acquires motion data P between the input frame image frame i and the frame image frame i-1 .
- the routine proceeds to calculation of the first matrix D i including the rolling shutter distortion component (S14: first calculation step).
- the first calculating unit 12 calculates the first matrix D i including the rolling shutter distortion component based on the motion data obtained in the processing of S12.
- when the process of S14 ends, the process proceeds to a calculation process of the motion data N (S16).
- in the process of S16, the calculation unit 30 calculates the provisional motion data N using the motion data P obtained in the process of S12 and the first matrix D i obtained in the process of S14.
- when the process of S16 ends, the process proceeds to a rotation component calculation process (S18).
- in the process of S18, the second calculation unit 13 calculates the rotation component (user motion matrix A c ). For example, the rotation component can be estimated fairly accurately from the movement amount of the center position of the input frame image frame i and the focal length. As an example, calculation of the yaw direction will be described. When the movement amount in the x-axis direction is dmx and the focal length is dn, the component in the yaw direction can be easily calculated by the following mathematical formula. When the process of S18 is finished, the routine proceeds to filtering of the user motion matrix A c (S20: second calculation step).
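- The formula itself is not reproduced in this excerpt. Assuming the standard pinhole relation between image motion at the frame center and the focal length (an assumption, not the patent's own equation), the yaw component might be computed as:

```python
import math

def yaw_from_center_motion(dmx, dn):
    # Pinhole relation: horizontal center motion dmx (pixels) at focal
    # length dn (pixels) corresponds to a yaw of atan(dmx / dn).
    return math.atan2(dmx, dn)

yaw = yaw_from_center_motion(dmx=12.0, dn=1200.0)  # small angle
```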
- in the process of S20, the second calculation unit 13 obtains the second matrix by correcting the user motion matrix A c obtained in the process of S18 with the high-pass filter. That is, the second calculation unit 13 refers to the previous data recording unit 23, acquires the past second matrices, and obtains the second matrix using the past second matrices and the user motion matrix A c .
- when the process of S20 ends, the routine proceeds to filtering of the remaining motion matrix B c (S22: auxiliary matrix calculation step).
- in the process of S22, the auxiliary matrix calculation unit 14 calculates the remaining motion matrix B c and filters the remaining motion matrix B c to obtain the auxiliary matrix.
- that is, the auxiliary matrix calculation unit 14 divides the motion components of N·S i-1 into the user motion matrix A c , which includes the yaw component y i , the pitch component p i , and the roll component r i , and the other motion components l i , and the user motion matrix A c is calculated separately and substituted into the above equation (11) to obtain the component l i as the remaining motion matrix B c .
- the auxiliary matrix calculation unit 14 may refer to the previous data recording unit 23, acquire the past auxiliary matrices, and obtain the auxiliary matrix using the past auxiliary matrices and the remaining motion matrix B c .
- in the process of S24, the calculation unit 30 calculates the projection matrix S i in the system without rolling shutter distortion, using the second matrix obtained in the process of S20 and the auxiliary matrix obtained in the process of S22.
- when the process of S24 ends, the process proceeds to a drawing matrix (projection matrix) calculation process (S26).
- in the process of S26, the calculation unit 30 calculates the second projection matrix P dst i in the input image system from the projection matrix S i obtained in the process of S24, for example, as shown in the above formula (4).
- when the process of S26 ends, the process proceeds to a drawing process (S28: drawing step).
- in the process of S28, the drawing unit 17 calculates the cutout area K i of the input frame image frame i using the second projection matrix P dst i obtained in the process of S26, and outputs it to the display unit 21 as the output frame image out-frame i .
- when the process of S28 ends, the process proceeds to a determination process (S30).
- in the process of S30, the image processing apparatus 1 determines whether or not the input of images has been completed. For example, the image processing apparatus 1 determines whether or not the input of images has ended based on whether or not a predetermined number of inputs has been reached, or whether or not a predetermined time has elapsed since the previous input. When it is determined in the process of S30 that the image input has not ended, the process proceeds to S10 again. On the other hand, if it is determined in the process of S30 that the input of images has ended, the control process shown in FIG. 13 ends. By executing the control process shown in FIG. 13, the translation component and the rotation component, which strongly reflect human intention, and the other components and error components can be corrected independently, without relating them to each other.
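- The S10 to S28 loop can be summarized as a structural sketch. Every helper below is a trivial identity stand-in for the unit described in the text (motion acquisition, first/second/auxiliary calculation, drawing); only the control flow and the order of the steps are taken from the text, and all names are assumptions.

```python
I3 = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]

def matmul3(A, B):
    return [[sum(A[r][k] * B[k][c] for k in range(3)) for c in range(3)]
            for r in range(3)]

# Trivial stand-ins for the units described in the text.
def acquire_motion(frame, prev): return I3            # S12
def first_matrix(P): return I3                        # S14
def provisional_motion(P, D_i, state): return I3      # S16
def rotation_component(P): return I3                  # S18
def remaining_matrix(N, A_c, state): return I3        # part of S22
def high_pass(M, history):                            # S20 / S22
    history.append(M)
    return M
def draw(frame, P_dst): return frame                  # S28 (crop omitted)

def process_frame(frame, prev, state):
    P = acquire_motion(frame, prev)                   # S12
    D_i = first_matrix(P)                             # S14
    N = provisional_motion(P, D_i, state)             # S16
    A_c = rotation_component(P)                       # S18
    second = high_pass(A_c, state["second_hist"])     # S20
    aux = high_pass(remaining_matrix(N, A_c, state),
                    state["aux_hist"])                # S22
    S_i = matmul3(second, aux)                        # S24
    P_dst = matmul3(D_i, S_i)                         # S26
    return draw(frame, P_dst)                         # S28

state = {"second_hist": [], "aux_hist": []}
out = process_frame("frame_1", "frame_0", state)
```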
- the first calculation step, the second calculation step, and the auxiliary matrix calculation step correspond to a matrix calculation step.
- the image processing program includes a main module, an input module, and an arithmetic processing module.
- the main module is a part that comprehensively controls image processing.
- the input module operates the mobile terminal 2 so as to acquire an input image.
- the arithmetic processing module includes a motion acquisition module, a calculation module (a first calculation module, a second calculation module, and an auxiliary matrix calculation module), and a drawing module. The functions realized by executing the main module, the input module, and the arithmetic processing module are the same as the functions of the input unit 10, the motion acquisition unit 11, the calculation unit 30 (the first calculation unit 12, the second calculation unit 13, and the auxiliary matrix calculation unit 14), and the drawing unit 17 of the image processing apparatus 1 described above.
- the image processing program is provided by a recording medium such as a ROM or a semiconductor memory, for example.
- the image processing program may be provided as a data signal via a network.
- As described above, the motion of the image deformation is divided into at least one of the parallel translation component and the rotation component, the rolling shutter distortion component, and the other components, and each is handled and calculated separately.
- The first calculation unit 12 uses the motion data to calculate, with reasonable accuracy, the first matrix D i including the rolling shutter distortion component.
- The second calculation unit 13 uses the motion data P and the known first projection matrix P dst i-1, which projects the output frame image out-frame i-1 onto the reference frame image frame i-1, to calculate, with reasonable accuracy, the user motion matrix A c including at least one of the translation component in the direction orthogonal to the imaging direction and the rotation component about the imaging direction.
- The auxiliary matrix calculation unit 14 uses the first matrix D i , the user motion matrix A c , and the first projection matrix P dst i-1 to calculate the remaining motion matrix B c , which does not include the motion components contained in the first matrix D i and the user motion matrix A c . The user motion matrix A c and the remaining motion matrix B c are then corrected, and the second matrix and the auxiliary matrix are calculated. In this way, the image processing apparatus 1 can decompose the motion data into three parts and calculate each separately, so that processing suited to each motion component becomes possible. For example, the rolling shutter component, which should be removed, can be excluded from the target of corrections such as camera-shake removal.
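The three-way decomposition described above can be sketched as follows. The 3x3 homogeneous matrices, their example values, and the multiplication order are illustrative assumptions for this sketch; the patent defines the exact composition in its own equations, which are not reproduced here.

```python
import numpy as np

def compose_projection(D_i, A_c, B_c):
    # Compose the output-to-frame projection from the three separately
    # handled components: rolling-shutter matrix D_i, user motion matrix
    # A_c (translation/rotation), and remaining motion matrix B_c.
    # The multiplication order is an assumption for illustration.
    return D_i @ A_c @ B_c

# Illustrative 3x3 homogeneous matrices (hypothetical values):
D_i = np.array([[1.0, 0.02, 0.0],      # slight shear standing in for rolling-shutter skew
                [0.0, 1.0,  0.0],
                [0.0, 0.0,  1.0]])
theta = 0.01                           # small rotation about the imaging direction
A_c = np.array([[np.cos(theta), -np.sin(theta),  3.0],
                [np.sin(theta),  np.cos(theta), -2.0],
                [0.0,            0.0,            1.0]])
B_c = np.eye(3)                        # ideally the remaining motion is the identity

P = compose_projection(D_i, A_c, B_c)
print(P.shape)  # → (3, 3)
```

Because the components stay separate until this final composition, each can be filtered or corrected on its own before being multiplied back together.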
- Since the translation or rotation component and the components included in the auxiliary matrix are calculated using different formulas, the translation or rotation component, which strongly reflects human intention, and the other remaining components can be corrected independently, without relating them to each other.
- Since the motion component that strongly reflects human intention and the other motion components can be corrected with different filters, it is possible both to follow the camera work appropriately and to resolve the problems caused by unintended movements of the user.
- Because the correction of each component is independent, a correction filter can be designed easily.
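As a rough illustration of how per-component filters might be designed independently, the sketch below low-pass filters the user motion matrix (so intentional camera work is followed smoothly) while pulling the remaining motion matrix toward the identity. The element-wise blending of homogeneous matrices and the coefficients `alpha` and `beta` are hypothetical choices for this sketch, not values from the patent.

```python
import numpy as np

def filter_components(A_prev_filtered, A_raw, B_raw, alpha=0.9, beta=0.2):
    # Hypothetical per-component correction: the user motion matrix is
    # low-pass filtered so intentional camera work is followed smoothly,
    # while the remaining motion matrix is pulled toward the identity
    # (its ideal value).  Element-wise blending and the coefficients
    # alpha/beta are illustrative assumptions.
    A_filtered = alpha * A_prev_filtered + (1.0 - alpha) * A_raw
    B_filtered = (1.0 - beta) * np.eye(3) + beta * B_raw
    return A_filtered, B_filtered

# With no motion at all, both filters leave the identity untouched:
A_f, B_f = filter_components(np.eye(3), np.eye(3), np.eye(3))
```

Because the two matrices never mix inside the filters, either filter can be redesigned without affecting the other.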
- The motion calculation method of the second calculation unit is not limited; for example, a less precise but robust calculation method can be applied to the motion calculation method of the second calculation unit. In this case, the image processing apparatus 1 as a whole can perform robust processing. Furthermore, fully electronic camera shake correction can be realized.
- With the image processing apparatus 1, the image processing method, and the image processing program, the remaining motion matrix B c can be corrected toward the ideal value (i.e., the identity mapping) while still utilizing its value. For this reason, the problems caused by unintended movements of the user can be resolved more naturally.
- With the image processing apparatus 1, the image processing method, and the image processing program according to the present embodiment, by assuming that the rolling shutter distortion component is caused only by the parallel translation component, the rolling shutter distortion component can be estimated easily, quickly, and with reasonable accuracy.
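Under the stated assumption that rolling shutter distortion arises only from the translation component, a distortion matrix can be estimated from the inter-frame translation alone. The sketch below uses a common shear-and-stretch model of row-by-row readout; `readout_ratio` is an assumed parameter, and the patent's own formula is not reproduced here.

```python
import numpy as np

def rolling_shutter_matrix(dx, dy, frame_height, readout_ratio=1.0):
    # Assume each scan line is exposed slightly later than the one above,
    # so an inter-frame translation (dx, dy) shears the image horizontally
    # (by the horizontal speed) and stretches it vertically (by the
    # vertical speed).  readout_ratio (readout time / frame interval) is
    # an assumed parameter for this sketch.
    a = readout_ratio * dx / frame_height   # horizontal shift per row
    b = readout_ratio * dy / frame_height   # vertical stretch per row
    return np.array([[1.0, a,       0.0],
                     [0.0, 1.0 + b, 0.0],
                     [0.0, 0.0,     1.0]])
```

With zero translation the matrix reduces to the identity, which is why this model is cheap: no distortion estimation is needed beyond the translation already available from the motion data.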
- the above-described embodiment shows an example of the image processing apparatus according to the present invention.
- The image processing apparatus according to the present invention is not limited to the image processing apparatus 1 according to the embodiment; the image processing apparatus according to the embodiment may be modified, or applied to other uses, without departing from the gist described in each claim.
- the camera 20 may continuously capture still images.
- the image input by the input unit 10 may be an image transmitted from another device via a network.
- In the above embodiment, the images captured by the camera 20 have been described as having the same size. However, the size of the captured image may differ for each imaging.
- In the above embodiment, the example in which the image is deformed with the 8 degrees of freedom shown in FIG. 4 has been described.
- However, the deformation is not limited to 8 degrees of freedom; for example, it may have the 6 degrees of freedom shown in (A) to (F) in FIG.
- In the above embodiment, the example in which the first calculation unit 12 estimates the rolling shutter distortion component and the second calculation unit 13 estimates the translation component and the rotation component has been described.
- However, the present invention is not limited to the above method; various known methods can be employed instead.
- the first calculation unit 12 may estimate the rolling shutter distortion component using the parallel movement component obtained by the second calculation unit 13.
- However, the present invention is not limited to this. That is, it is not necessary to generate output frame images sequentially from input frame images, and the input frame image and the image that serves as the reference for the output frame image may be shifted in time.
- For example, a third frame image existing between the first frame image and the second frame image may be set as the processing target for generating an output frame image, and a projection matrix that associates the third frame image with the output frame image may be calculated.
- In this case, the third frame image may be processed in the same manner as the second frame image described in the above embodiment, or the projection matrix to the third frame image may be calculated using not only the first frame image but also the second frame image. That is, the projection matrix of the third frame image may be calculated using the first frame image, which is past data as viewed from the third frame image, and the second frame image, which is future data as viewed from the third frame image.
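One conceivable way to use both the past and future frames for the third frame's projection matrix is a simple blend of the two estimates. The element-wise linear interpolation below is a hypothetical simplification for illustration, not the patent's formula.

```python
import numpy as np

def blend_projection(P_from_past, P_from_future, w):
    # Hypothetical sketch: estimate the third frame's projection matrix
    # by blending the matrix derived from the past (first) frame with
    # the one derived from the future (second) frame.  Element-wise
    # linear blending of homogeneous matrices is a simplification
    # chosen for illustration.
    P = (1.0 - w) * P_from_past + w * P_from_future
    return P / P[2, 2]          # renormalise the homogeneous scale

P3 = blend_projection(np.eye(3), np.eye(3), 0.5)
```

The weight `w` would reflect where the third frame sits in time between the first and second frames.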
- In the above embodiment, the example in which a high-pass filter is applied to the motion data P and the first projection matrix P dst i-1, for example as shown in Equation (2), has been described.
- However, the formula (2) may be changed as follows.
- In this case, the high-pass filter acts on the motion data N.
- Similarly, Equation (13) may be changed as follows.
- In this case, the motion data N may be divided.
- Further, the high-pass filter may change its coefficient in accordance with the cut-out position.
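A position-dependent high-pass coefficient could, for example, weaken the smoothing as the cut-out region approaches the frame edge, so the region is pulled back toward the centre before it runs out of margin. The function name, the values, and the linear falloff in the sketch below are entirely hypothetical.

```python
def highpass_coefficient(cut_x, cut_y, width, height, k_center=0.9, k_edge=0.5):
    # Entirely hypothetical scheme: the closer the cut-out centre drifts
    # to the frame edge, the smaller the returned coefficient, weakening
    # the smoothing so the region is corrected back before running out
    # of margin.  Names, values, and the linear falloff are illustrative.
    dx = abs(cut_x - width / 2) / (width / 2)    # 0 at centre, 1 at edge
    dy = abs(cut_y - height / 2) / (height / 2)
    d = max(dx, dy)
    return k_edge + (k_center - k_edge) * (1.0 - d)

print(highpass_coefficient(640, 360, 1280, 720))  # → 0.9 (cut-out at frame centre)
```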
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Studio Devices (AREA)
Abstract
Description
Claims (9)
- An image processing apparatus that sets, within a frame image captured by an imaging device, a region smaller than the frame image and corrects the position or shape of the region in accordance with motion of the imaging device to generate an output frame image, the apparatus comprising:
an input unit that sequentially receives a first frame image and a second frame image;
a motion acquisition unit that acquires motion data between the first frame image and the second frame image;
a matrix calculation unit that calculates a projection matrix for projecting the output frame image onto the second frame image from a first matrix including a rolling shutter distortion component, a second matrix including at least one of a translation component in a direction orthogonal to the imaging direction and a rotation component about the imaging direction, and an auxiliary matrix including motion components not included in the first matrix and the second matrix; and
a drawing unit that generates the output frame image from the second frame image using the projection matrix,
wherein the matrix calculation unit includes:
a first calculation unit that calculates the first matrix of the projection matrix using the motion data;
a second calculation unit that calculates the second matrix of the projection matrix using the motion data, the first matrix, and a past second matrix; and
an auxiliary matrix calculation unit that calculates the auxiliary matrix of the projection matrix using the motion data, the first matrix, and a past auxiliary matrix.
- The image processing apparatus according to claim 1, wherein the first calculation unit calculates the first matrix of the projection matrix based on a translation component included in the motion data.
- The image processing apparatus according to claim 1 or 2, wherein the auxiliary matrix includes a component that converts a quadrangle into a trapezoid.
- The image processing apparatus according to any one of claims 1 to 3, wherein the auxiliary matrix includes an enlargement/reduction component.
- The image processing apparatus according to any one of claims 1 to 4, wherein the second matrix includes an enlargement/reduction component.
- The image processing apparatus according to any one of claims 1 to 5, wherein the motion acquisition unit acquires an output value of a gyro sensor.
- An image processing method that sets, within a frame image captured by an imaging device, a region smaller than the frame image and corrects the position or shape of the region in accordance with motion of the imaging device to generate an output frame image, the method comprising:
an input step of sequentially receiving a first frame image and a second frame image;
a motion acquisition step of acquiring motion data between the first frame image and the second frame image;
a matrix calculation step of calculating a projection matrix for projecting the output frame image onto the second frame image from a first matrix including a rolling shutter distortion component, a second matrix including at least one of a translation component in a direction orthogonal to the imaging direction and a rotation component about the imaging direction, and an auxiliary matrix including motion components not included in the first matrix and the second matrix; and
a drawing step of generating the output frame image from the second frame image using the projection matrix,
wherein the matrix calculation step includes:
a first calculation step of calculating the first matrix of the projection matrix using the motion data;
a second calculation step of calculating the second matrix of the projection matrix using the motion data, the first matrix, and a past second matrix; and
an auxiliary matrix calculation step of calculating the auxiliary matrix of the projection matrix using the motion data, the first matrix, and a past auxiliary matrix.
- An image processing program that causes a computer to set, within a frame image captured by an imaging device, a region smaller than the frame image and to correct the position or shape of the region in accordance with motion of the imaging device to generate an output frame image, the program causing the computer to function as:
an input unit that sequentially receives a first frame image and a second frame image;
a motion acquisition unit that acquires motion data between the first frame image and the second frame image;
a matrix calculation unit that calculates a projection matrix for projecting the output frame image onto the second frame image from a first matrix including a rolling shutter distortion component, a second matrix including at least one of a translation component in a direction orthogonal to the imaging direction and a rotation component about the imaging direction, and an auxiliary matrix including motion components not included in the first matrix and the second matrix; and
a drawing unit that generates the output frame image from the second frame image using the projection matrix,
wherein the matrix calculation unit includes:
a first calculation unit that calculates the first matrix of the projection matrix using the motion data;
a second calculation unit that calculates the second matrix of the projection matrix using the motion data, the first matrix, and a past second matrix; and
an auxiliary matrix calculation unit that calculates the auxiliary matrix of the projection matrix using the motion data, the first matrix, and a past auxiliary matrix.
- A computer-readable recording medium storing an image processing program that causes a computer to set, within a frame image captured by an imaging device, a region smaller than the frame image and to correct the position or shape of the region in accordance with motion of the imaging device to generate an output frame image, the program causing the computer to function as:
an input unit that sequentially receives a first frame image and a second frame image;
a motion acquisition unit that acquires motion data between the first frame image and the second frame image;
a matrix calculation unit that calculates a projection matrix for projecting the output frame image onto the second frame image from a first matrix including a rolling shutter distortion component, a second matrix including at least one of a translation component in a direction orthogonal to the imaging direction and a rotation component about the imaging direction, and an auxiliary matrix including motion components not included in the first matrix and the second matrix; and
a drawing unit that generates the output frame image from the second frame image using the projection matrix,
wherein the matrix calculation unit includes:
a first calculation unit that calculates the first matrix of the projection matrix using the motion data;
a second calculation unit that calculates the second matrix of the projection matrix using the motion data, the first matrix, and a past second matrix; and
an auxiliary matrix calculation unit that calculates the auxiliary matrix of the projection matrix using the motion data, the first matrix, and a past auxiliary matrix.
Priority Applications (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201380020123.5A CN104247395B (zh) | 2012-11-05 | 2013-11-01 | 图像处理装置、图像处理方法 |
KR1020147028881A KR101624450B1 (ko) | 2012-11-05 | 2013-11-01 | 화상 처리 장치, 화상 처리 방법, 및 기록 매체 |
JP2014544608A JP5906493B2 (ja) | 2012-11-05 | 2013-11-01 | 画像処理装置、画像処理方法、画像処理プログラム及び記録媒体 |
EP13850725.6A EP2849428B1 (en) | 2012-11-05 | 2013-11-01 | Image processing device, image processing method, image processing program, and storage medium |
US14/397,579 US9639913B2 (en) | 2012-11-05 | 2013-11-01 | Image processing device, image processing method, image processing program, and storage medium |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JPPCT/JP2012/078591 | 2012-11-05 | ||
PCT/JP2012/078591 WO2014068779A1 (ja) | 2012-11-05 | 2012-11-05 | 画像処理装置、画像処理方法、画像処理プログラム及び記録媒体 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2014069632A1 true WO2014069632A1 (ja) | 2014-05-08 |
Family
ID=50626742
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2012/078591 WO2014068779A1 (ja) | 2012-11-05 | 2012-11-05 | 画像処理装置、画像処理方法、画像処理プログラム及び記録媒体 |
PCT/JP2013/079724 WO2014069632A1 (ja) | 2012-11-05 | 2013-11-01 | 画像処理装置、画像処理方法、画像処理プログラム及び記録媒体 |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2012/078591 WO2014068779A1 (ja) | 2012-11-05 | 2012-11-05 | 画像処理装置、画像処理方法、画像処理プログラム及び記録媒体 |
Country Status (6)
Country | Link |
---|---|
US (1) | US9639913B2 (ja) |
EP (1) | EP2849428B1 (ja) |
JP (1) | JP5906493B2 (ja) |
KR (1) | KR101624450B1 (ja) |
CN (1) | CN104247395B (ja) |
WO (2) | WO2014068779A1 (ja) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20180065957A (ko) | 2016-12-08 | 2018-06-18 | 가부시키가이샤 모르포 | 화상처리장치, 전자기기, 화상처리방법 및 프로그램 |
JP2018206365A (ja) * | 2017-06-08 | 2018-12-27 | 株式会社リコー | 画像処理方法、装置及び電子デバイス |
WO2020039747A1 (ja) * | 2018-08-20 | 2020-02-27 | ソニーセミコンダクタソリューションズ株式会社 | 信号処理装置、撮像装置、信号処理方法 |
JP7493190B1 (ja) | 2023-02-07 | 2024-05-31 | 株式会社マーケットヴィジョン | 情報処理システム |
Families Citing this family (25)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9736374B2 (en) * | 2013-09-19 | 2017-08-15 | Conduent Business Services, Llc | Video/vision based access control method and system for parking occupancy determination, which is robust against camera shake |
JP6448218B2 (ja) * | 2014-05-12 | 2019-01-09 | キヤノン株式会社 | 撮像装置、その制御方法および情報処理システム |
FR3023956B1 (fr) * | 2014-07-18 | 2018-01-12 | Safran Electronics & Defense Sas | Procede et dispositif de traitement de mouvements de hautes frequences dans un systeme optronique |
KR102264840B1 (ko) * | 2014-11-27 | 2021-06-15 | 삼성전자주식회사 | 비디오 프레임 인코딩 회로, 그것의 인코딩 방법 및 그것을 포함하는 비디오 데이터 송수신 장치 |
US9912868B2 (en) * | 2015-09-15 | 2018-03-06 | Canon Kabushiki Kaisha | Image-blur correction apparatus, tilt correction apparatus, method of controlling image-blur correction apparatus, and method of controlling tilt correction apparatus |
JP6652300B2 (ja) * | 2016-01-14 | 2020-02-19 | キヤノン株式会社 | 画像処理装置、撮像装置および制御方法 |
JP2018037944A (ja) * | 2016-09-01 | 2018-03-08 | ソニーセミコンダクタソリューションズ株式会社 | 撮像制御装置、撮像装置および撮像制御方法 |
JP6699902B2 (ja) * | 2016-12-27 | 2020-05-27 | 株式会社東芝 | 画像処理装置及び画像処理方法 |
JP6960238B2 (ja) * | 2017-04-28 | 2021-11-05 | キヤノン株式会社 | 像ブレ補正装置及びその制御方法、プログラム、記憶媒体 |
US10884747B2 (en) | 2017-08-18 | 2021-01-05 | International Business Machines Corporation | Prediction of an affiliated register |
US10534609B2 (en) | 2017-08-18 | 2020-01-14 | International Business Machines Corporation | Code-specific affiliated register prediction |
US10884746B2 (en) | 2017-08-18 | 2021-01-05 | International Business Machines Corporation | Determining and predicting affiliated registers based on dynamic runtime control flow analysis |
US10719328B2 (en) | 2017-08-18 | 2020-07-21 | International Business Machines Corporation | Determining and predicting derived values used in register-indirect branching |
US10725918B2 (en) | 2017-09-19 | 2020-07-28 | International Business Machines Corporation | Table of contents cache entry having a pointer for a range of addresses |
US10713050B2 (en) | 2017-09-19 | 2020-07-14 | International Business Machines Corporation | Replacing Table of Contents (TOC)-setting instructions in code with TOC predicting instructions |
US10884929B2 (en) | 2017-09-19 | 2021-01-05 | International Business Machines Corporation | Set table of contents (TOC) register instruction |
US10620955B2 (en) | 2017-09-19 | 2020-04-14 | International Business Machines Corporation | Predicting a table of contents pointer value responsive to branching to a subroutine |
US11061575B2 (en) | 2017-09-19 | 2021-07-13 | International Business Machines Corporation | Read-only table of contents register |
US10705973B2 (en) | 2017-09-19 | 2020-07-07 | International Business Machines Corporation | Initializing a data structure for use in predicting table of contents pointer values |
US10896030B2 (en) | 2017-09-19 | 2021-01-19 | International Business Machines Corporation | Code generation relating to providing table of contents pointer values |
US20190297265A1 (en) * | 2018-03-21 | 2019-09-26 | Sawah Innovations Inc. | User-feedback video stabilization device and method |
US10547790B2 (en) | 2018-06-14 | 2020-01-28 | Google Llc | Camera area locking |
KR102581210B1 (ko) * | 2019-01-10 | 2023-09-22 | 에스케이하이닉스 주식회사 | 이미지 신호 처리 방법, 이미지 신호 프로세서 및 이미지 센서 칩 |
JP7444162B2 (ja) * | 2019-03-28 | 2024-03-06 | ソニーグループ株式会社 | 画像処理装置、画像処理方法、プログラム |
US11711613B2 (en) * | 2021-04-27 | 2023-07-25 | Qualcomm Incorporated | Image alignment for computational photography |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2007226643A (ja) | 2006-02-24 | 2007-09-06 | Morpho Inc | 画像処理装置 |
JP2010118962A (ja) * | 2008-11-13 | 2010-05-27 | Canon Inc | 撮像装置及びその制御方法及びプログラム |
JP2010193302A (ja) | 2009-02-19 | 2010-09-02 | Sony Corp | 画像処理装置、カメラモーション成分算出方法、画像処理プログラム及び記録媒体 |
Family Cites Families (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5696848A (en) * | 1995-03-09 | 1997-12-09 | Eastman Kodak Company | System for creating a high resolution image from a sequence of lower resolution motion images |
JP2002236924A (ja) | 2001-02-07 | 2002-08-23 | Gen Tec:Kk | 2次元連続画像を利用した3次元物体の動き追跡方法及び同装置等 |
KR100574227B1 (ko) | 2003-12-18 | 2006-04-26 | 한국전자통신연구원 | 카메라 움직임을 보상한 객체 움직임 추출 장치 및 그 방법 |
US7880769B2 (en) * | 2004-02-13 | 2011-02-01 | Qualcomm Incorporated | Adaptive image stabilization |
US8054335B2 (en) * | 2007-12-20 | 2011-11-08 | Aptina Imaging Corporation | Methods and system for digitally stabilizing video captured from rolling shutter cameras |
WO2009131382A2 (en) * | 2008-04-22 | 2009-10-29 | Core Logic Inc. | Apparatus and method for correcting moving image wavering |
JP4915423B2 (ja) * | 2009-02-19 | 2012-04-11 | ソニー株式会社 | 画像処理装置、フォーカルプレーン歪み成分算出方法、画像処理プログラム及び記録媒体 |
JP5487722B2 (ja) | 2009-05-25 | 2014-05-07 | ソニー株式会社 | 撮像装置と振れ補正方法 |
US8508605B2 (en) * | 2009-10-14 | 2013-08-13 | Csr Technology Inc. | Method and apparatus for image stabilization |
JP2011114407A (ja) * | 2009-11-24 | 2011-06-09 | Sony Corp | 画像処理装置、画像処理方法、プログラム及び記録媒体 |
US8179446B2 (en) * | 2010-01-18 | 2012-05-15 | Texas Instruments Incorporated | Video stabilization and reduction of rolling shutter distortion |
JP5683839B2 (ja) | 2010-05-17 | 2015-03-11 | セミコンダクター・コンポーネンツ・インダストリーズ・リミテッド・ライアビリティ・カンパニー | 撮像装置の制御回路 |
JP5249377B2 (ja) | 2011-03-22 | 2013-07-31 | キヤノン株式会社 | 撮像装置、及びその制御方法、プログラム |
US8823813B2 (en) * | 2011-06-06 | 2014-09-02 | Apple Inc. | Correcting rolling shutter using image stabilization |
US9460495B2 (en) * | 2012-04-06 | 2016-10-04 | Microsoft Technology Licensing, Llc | Joint video stabilization and rolling shutter correction on a generic platform |
-
2012
- 2012-11-05 WO PCT/JP2012/078591 patent/WO2014068779A1/ja active Application Filing
-
2013
- 2013-11-01 CN CN201380020123.5A patent/CN104247395B/zh active Active
- 2013-11-01 JP JP2014544608A patent/JP5906493B2/ja active Active
- 2013-11-01 US US14/397,579 patent/US9639913B2/en active Active
- 2013-11-01 WO PCT/JP2013/079724 patent/WO2014069632A1/ja active Application Filing
- 2013-11-01 EP EP13850725.6A patent/EP2849428B1/en not_active Not-in-force
- 2013-11-01 KR KR1020147028881A patent/KR101624450B1/ko active IP Right Grant
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2007226643A (ja) | 2006-02-24 | 2007-09-06 | Morpho Inc | 画像処理装置 |
JP2010118962A (ja) * | 2008-11-13 | 2010-05-27 | Canon Inc | 撮像装置及びその制御方法及びプログラム |
JP2010193302A (ja) | 2009-02-19 | 2010-09-02 | Sony Corp | 画像処理装置、カメラモーション成分算出方法、画像処理プログラム及び記録媒体 |
Non-Patent Citations (1)
Title |
---|
See also references of EP2849428A4 |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20180065957A (ko) | 2016-12-08 | 2018-06-18 | 가부시키가이샤 모르포 | 화상처리장치, 전자기기, 화상처리방법 및 프로그램 |
US10440277B2 (en) | 2016-12-08 | 2019-10-08 | Morpho, Inc. | Image processing device, electronic equipment, image processing method and non-transitory computer-readable medium for enlarging objects on display |
JP2018206365A (ja) * | 2017-06-08 | 2018-12-27 | 株式会社リコー | 画像処理方法、装置及び電子デバイス |
WO2020039747A1 (ja) * | 2018-08-20 | 2020-02-27 | ソニーセミコンダクタソリューションズ株式会社 | 信号処理装置、撮像装置、信号処理方法 |
US11196929B2 (en) | 2018-08-20 | 2021-12-07 | Sony Semiconductor Solutions Corporation | Signal processing device, imaging device, and signal processing method |
JP7493190B1 (ja) | 2023-02-07 | 2024-05-31 | 株式会社マーケットヴィジョン | 情報処理システム |
WO2024166475A1 (ja) * | 2023-02-07 | 2024-08-15 | 株式会社マーケットヴィジョン | 情報処理システム |
Also Published As
Publication number | Publication date |
---|---|
JP5906493B2 (ja) | 2016-04-20 |
KR101624450B1 (ko) | 2016-05-25 |
EP2849428A1 (en) | 2015-03-18 |
CN104247395A (zh) | 2014-12-24 |
US9639913B2 (en) | 2017-05-02 |
EP2849428B1 (en) | 2016-10-19 |
CN104247395B (zh) | 2017-05-17 |
JPWO2014069632A1 (ja) | 2016-09-08 |
EP2849428A4 (en) | 2015-07-15 |
WO2014068779A1 (ja) | 2014-05-08 |
US20150123990A1 (en) | 2015-05-07 |
KR20140138947A (ko) | 2014-12-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP5906493B2 (ja) | 画像処理装置、画像処理方法、画像処理プログラム及び記録媒体 | |
JP6170395B2 (ja) | 撮像装置およびその制御方法 | |
JP6209002B2 (ja) | 撮像装置およびその制御方法 | |
JP5531194B2 (ja) | 画像処理装置、画像処理方法及び画像処理プログラム | |
JP5794705B2 (ja) | 撮像装置、その制御方法及びプログラム | |
JP4926920B2 (ja) | 防振画像処理装置及び防振画像処理方法 | |
JP5499050B2 (ja) | 画像処理装置、撮像装置、及び画像処理方法 | |
JP7009107B2 (ja) | 撮像装置およびその制御方法 | |
JP2015015587A (ja) | 撮像装置およびその制御方法 | |
JP6513941B2 (ja) | 画像処理方法、画像処理装置及びプログラム | |
JP5424068B2 (ja) | 画像処理装置、画像処理方法、画像処理プログラム及び記憶媒体 | |
JPWO2018066027A1 (ja) | 画像処理装置、撮像システム、画像処理方法および画像処理プログラム | |
JP6980480B2 (ja) | 撮像装置および制御方法 | |
JP5279453B2 (ja) | 画像振れ補正装置、撮像装置及び画像振れ補正方法 | |
JP6604783B2 (ja) | 画像処理装置、撮像装置および画像処理プログラム | |
CN110692235B (zh) | 图像处理装置、图像处理程序及图像处理方法 | |
JP6375131B2 (ja) | 撮像装置、画像処理方法及び制御プログラム | |
JP6671975B2 (ja) | 画像処理装置、撮像装置、画像処理方法およびコンピュータプログラム | |
JP5401696B2 (ja) | 画像処理装置、画像処理方法及び画像処理プログラム | |
JP2012124939A (ja) | 撮像装置及び撮像装置の制御方法 | |
JP7137433B2 (ja) | ブレ補正装置、撮像装置、ブレ補正方法、及びプログラム | |
JP4286301B2 (ja) | 手ぶれ補正装置、手ぶれ補正方法および手ぶれ補正プログラムを記録した記録媒体 | |
JP6355421B2 (ja) | 画像処理装置および撮像装置 | |
JP2012147202A (ja) | 画像処理装置およびその方法 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 13850725 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2014544608 Country of ref document: JP Kind code of ref document: A |
|
ENP | Entry into the national phase |
Ref document number: 20147028881 Country of ref document: KR Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 14397579 Country of ref document: US |
|
REEP | Request for entry into the european phase |
Ref document number: 2013850725 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2013850725 Country of ref document: EP |
|
NENP | Non-entry into the national phase |
Ref country code: DE |