WO2017090458A1 - Imaging device, imaging method, and program - Google Patents
- Publication number
- WO2017090458A1 (PCT/JP2016/083474)
- Authority
- WO
- WIPO (PCT)
Classifications
-
- G—PHYSICS
- G03—PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
- G03B—APPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
- G03B7/00—Control of exposure by setting shutters, diaphragms or filters, separately or conjointly
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/68—Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
- H04N23/682—Vibration or motion blur correction
-
- G—PHYSICS
- G03—PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
- G03B—APPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
- G03B5/00—Adjustment of optical system relative to image or object surface other than for focusing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/10—Image enhancement or restoration using non-spatial domain filtering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/70—Denoising; Smoothing
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/68—Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
- H04N23/681—Motion detection
- H04N23/6812—Motion detection based on additional sensors, e.g. acceleration sensors
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/68—Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
- H04N23/682—Vibration or motion blur correction
- H04N23/684—Vibration or motion blur correction performed by controlling the image sensor readout, e.g. by controlling the integration time
- H04N23/6845—Vibration or motion blur correction performed by controlling the image sensor readout, e.g. by controlling the integration time by combination of a plurality of images sequentially taken
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/70—Circuitry for compensating brightness variation in the scene
- H04N23/73—Circuitry for compensating brightness variation in the scene by influencing the exposure time
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/70—Circuitry for compensating brightness variation in the scene
- H04N23/76—Circuitry for compensating brightness variation in the scene by influencing the image signals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
- H04N23/81—Camera processing pipelines; Components thereof for suppressing or minimising disturbance in the image signal generation
-
- G—PHYSICS
- G03—PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
- G03B—APPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
- G03B2207/00—Control of exposure by setting shutters, diaphragms, or filters separately or conjointly
- G03B2207/005—Control of exposure by setting shutters, diaphragms, or filters separately or conjointly involving control of motion blur
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20048—Transform domain processing
- G06T2207/20056—Discrete and fast Fourier transform, [DFT, FFT]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
Definitions
- The present disclosure relates to an imaging device, an imaging method, and a program, and in particular to an imaging device, imaging method, and program that suppress the capture of images containing large blur due to camera motion and enable images with less blur to be captured.
- In Patent Document 1, the movement of the camera is acquired as an "angular velocity value" or an "integrated value of angular velocity", and exposure is terminated when that value exceeds a predetermined value.
- In Patent Document 2, to actually obtain an image with little blur, the amount of motion tolerated in one exposure must be set small, which requires a large number of shots; both the shooting cost and the image-processing calculation cost therefore increase.
- The present disclosure has been made in view of such a situation, and in particular suppresses the capture of images containing large blur due to camera motion, enabling images with less blur to be captured.
- An imaging apparatus includes a camera motion detection unit that detects camera motion, a comparison unit that calculates a degree of distribution of the camera motion trajectory based on the camera motion detection result and compares it with a predetermined threshold value, and an exposure control unit that controls the start and end of exposure based on the comparison result of the comparison unit.
- The comparison unit may calculate the first principal component of the covariance as the degree of distribution of the camera motion trajectory based on the camera motion detection result, and compare it with the predetermined threshold value.
- The comparison unit may generate a PSF (Point Spread Function) image from the camera motion detection result as the degree of distribution of the camera motion trajectory, perform frequency analysis on it, and compare the result with the predetermined threshold value.
- The comparison unit may approximate the detection results within a fixed window extending back a predetermined time from the current time as the degree of distribution of the camera motion trajectory, predict future motion by extrapolation, and compare the prediction with the predetermined threshold value.
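As a sketch of this extrapolation variant: the recent window of motion positions can be approximated by a low-order model and extended forward. The linear fit and the window handling below are illustrative assumptions, since the disclosure does not fix the order of the approximation; the function name is hypothetical.

```python
import numpy as np

def predict_positions(recent, n_ahead):
    """Approximate the recent window of mapping positions with a linear
    fit over time and extrapolate it n_ahead steps into the future."""
    pts = np.asarray(recent, dtype=float)
    t = np.arange(len(pts))
    cx = np.polyfit(t, pts[:, 0], 1)  # linear model of x over time
    cy = np.polyfit(t, pts[:, 1], 1)  # linear model of y over time
    tf = np.arange(len(pts), len(pts) + n_ahead)
    return np.column_stack([np.polyval(cx, tf), np.polyval(cy, tf)])
```

The predicted positions can then be fed to the same distribution-degree calculation as the measured ones.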
- The camera motion detection unit may include a gyro sensor, an acceleration sensor, a geomagnetic sensor, an altitude sensor, a vibration sensor, or a motion capture system that tracks an object marker from a sub camera, different from the camera used for imaging, in order to measure the motion of the camera used for imaging.
- The exposure control unit may end the exposure when, after the start of exposure, the elapsed time is longer than the minimum exposure time and within the maximum exposure time, and the comparison result indicates that the blur is larger than a predetermined value.
- The exposure control unit may end the exposure when the maximum exposure time is reached after the exposure is started, based on the comparison result of the comparison unit.
- When the comparison result indicates that the blur is larger than the predetermined value before exposure is started, the exposure control unit may delay the timing at which exposure is started until the blur becomes smaller than the predetermined value.
- The exposure control unit may end the exposure in consideration of the SNR (Signal-to-Noise Ratio), based on the comparison result of the comparison unit.
- The imaging apparatus may further include a noise removal unit that removes image noise by integrating a plurality of images taken at predetermined intervals by the camera.
- The noise removal unit may integrate only those images whose blur amount is smaller than a predetermined size, among the plurality of images captured by the camera, and remove noise from the image.
- The noise removal unit may add the plurality of images taken by the camera with weights according to their exposure times, and remove noise from the image.
- The noise removal unit may add the plurality of images with weights that take the blurring direction of each image into account, and remove noise from the image.
- The noise removal unit may add the plurality of images with equal weights, and remove noise from the image.
- The noise removal unit may apply an FFT (Fast Fourier Transform) to the plurality of images captured by the camera, collect components of a predetermined amplitude for each frequency component, and generate an image by applying an inverse FFT, thereby removing noise from the image.
- The noise removal unit may apply an FFT to the plurality of images captured by the camera, collect the maximum-amplitude component for each frequency component, and generate an image by applying an inverse FFT, thereby removing noise from the image.
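The last FFT-based combination variant can be sketched as follows. It is assumed that the frames have already been aligned; since blur attenuates frequency amplitudes, picking the maximum-amplitude coefficient per frequency favours the least-blurred frame in each band. The function name is illustrative.

```python
import numpy as np

def fft_max_merge(images):
    """Merge aligned frames by keeping, for each frequency component,
    the coefficient with the largest amplitude across the frames."""
    stack = np.stack([np.fft.fft2(img) for img in images])  # (n, H, W) complex
    idx = np.argmax(np.abs(stack), axis=0)                  # winning frame per frequency
    merged = np.take_along_axis(stack, idx[None, ...], axis=0)[0]
    return np.real(np.fft.ifft2(merged))                    # back to the image domain
```

The predetermined-amplitude variant described just above would replace the argmax selection with a per-band amplitude criterion.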
- An imaging method includes the steps of detecting camera motion, calculating a degree of distribution of the camera motion trajectory based on the detection result, comparing it with a predetermined threshold value, and controlling the start and end of exposure based on the comparison result.
- A program causes a computer to function as a camera motion detection unit that detects camera motion, a comparison unit that calculates a degree of distribution of the camera motion trajectory based on the camera motion detection result and compares it with a predetermined threshold value, and an exposure control unit that controls the start and end of exposure based on the comparison result of the comparison unit.
- In this way, camera motion is detected, a degree of distribution of the camera motion trajectory is calculated based on the detection result and compared with a predetermined threshold value, and the start and end of exposure are controlled based on the comparison result.
- FIG. 10 is a flowchart describing the photographing process by the image photographing unit in FIG. 9.
- A diagram explaining the exposure time in the photographing process.
- A flowchart describing a first processing example of the shake-correction image combining process by the shake-correction image combining unit in FIG. 13.
- A flowchart describing a second processing example of the shake-correction image combining process by the shake-correction image combining unit in FIG. 13.
- A diagram explaining the relationship between the first principal component of the motion blur amount and the camera motion determination threshold value.
- A diagram explaining the relationship between the first principal component of the motion blur amount and the camera motion determination threshold value.
- A diagram explaining the integration vector according to the direction of motion.
- A flowchart explaining the motion determination process using direction-dependent threshold values by the motion determination unit.
- A diagram explaining a configuration example of the second embodiment of the imaging device to which the technique of the present disclosure is applied.
- FIG. 11 is a diagram illustrating a configuration example of a general-purpose personal computer.
- In Patent Document 1, exposure is started, the movement of the camera serving as the photographing unit is acquired as its "angular velocity value" or "integrated value of angular velocity", and the exposure is terminated when that value exceeds a predetermined value.
- The integrated value of the angular velocity is considered proportional to the amount of pixel movement on the image plane of the camera during exposure; however, unless the moving speed is assumed to be constant, cases where the integrated value is large but the amount of blur is small may go undetected.
- The upper left part of FIG. 1 shows the movement trajectory when the motion is sampled at equal intervals in the time direction, together with the displacement of the position (black circle marks) at each timing, and indicates the corresponding PSF (Point Spread Function). The upper right and lower right parts of FIG. 1 show the trajectory and the PSF between the displacements when the motion occurs with biased intervals in the time direction.
- In the present disclosure, the covariance is obtained from the motion of the camera, and the motion is evaluated based on the first principal component of the covariance of the motion trajectory point sequence.
- FIG. 2 is a block diagram illustrating a configuration example of the first embodiment of the imaging device 11 of the present disclosure.
- The imaging device (camera) 11 in FIG. 2 includes an image photographing unit 32 that captures a scene, a camera motion detection unit 31 that detects the motion of the imaging device 11, a data holding unit 33 that stores the captured images and the measured camera motion, and a blur correction image combining unit 34 that reads out the plurality of stored images and camera motions and combines them while correcting the blur.
- The camera motion detection unit 31 receives an exposure start signal from the image photographing unit 32, measures the camera motion during exposure, determines whether the motion of the imaging device (camera) 11 exceeds a predetermined value, and outputs the determination result to the image photographing unit 32. In addition, the camera motion detection unit 31 transmits the camera motion data captured during frame image capture to the data holding unit 33.
- The image photographing unit 32 exposes the photographing scene to measure image data, transmits the exposure start timing to the camera motion detection unit 31, and receives the motion determination result from the camera motion detection unit 31. The image photographing unit 32 then determines the end of exposure according to the motion determination result, gain-corrects the captured image according to the exposure time, and outputs the corrected image data to the data holding unit 33.
- the data holding unit 33 receives the frame image from the image photographing unit 32, receives the camera motion data of the frame image from the camera motion detection unit 31, and holds the data.
- The data holding unit 33 outputs a predetermined number of recorded frame images and camera motion data to the shake correction image combining unit 34.
- The blur correction image combining unit 34 receives the images and the camera motion data from the data holding unit 33, aligns the position of each frame image based on the camera motion data, and selectively blends all the frame images. As a result, a blended blur-corrected image with reduced noise is produced.
- the camera motion detection unit 31 includes a motion detection unit 51 and a motion determination unit 52.
- The motion detection unit 51 periodically measures the camera movement during exposure, calculates a moving-position point sequence on the image plane of the camera using camera parameters such as the image size and focal length, and outputs the point sequence to the motion determination unit 52.
- The motion determination unit 52 receives the moving-position point sequence from the motion detection unit 51, determines whether the blur due to motion falls within an allowable amount using a predetermined camera motion determination threshold parameter, and outputs the determination result to the image photographing unit 32.
- the motion detection unit 51 performs camera motion measurement using, for example, a gyro sensor that measures angular velocity.
- The measurement method of the motion detection unit 51 is not limited to a gyro sensor; any method can be used as long as the motion of the camera can be determined. That is, when measuring the camera motion, the camera motion detection unit 31 may use, instead of the gyro sensor, for example an acceleration sensor that measures acceleration, a geomagnetic sensor that measures the north direction, an altitude sensor that measures altitude from atmospheric pressure, a vibration sensor that measures vibration, or a motion capture system that tracks a marker on an object from an external camera to measure its motion.
- In step S11, the motion detection unit 51 measures the rotational motion component of the camera motion as angular velocities (gx, gy, gz) at predetermined time intervals using the gyro sensor.
- The time interval is, for example, 1 millisecond, and the motion detection unit 51, which includes a gyro sensor, samples the angular velocity at this interval.
- In step S12, the motion detection unit 51 integrates the angular velocities and converts them into angular rotation amounts (ax, ay, az).
- In step S13, the motion detection unit 51 acquires the current position coordinates (x, y).
- The initial value is, for example, the center (0, 0) of the camera image plane; thereafter, the positions calculated in the immediately preceding process are used sequentially as the current position coordinates (x, y).
- In step S14, the motion detection unit 51 rotates the current position coordinates (x, y) using the angular rotation amounts (ax, ay, az).
- The rotational movement is calculated as a matrix multiplication using Euler angles, as shown in the following equation (1): (x', y', z')^T = Rotate(α, β, γ) · (x, y, z)^T. Here, α, β, and γ are the angular rotation amounts in the pitch, yaw, and roll directions, respectively; x and y are the current coordinate positions; z is the distance to the image plane, for which the focal length is used; and Rotate denotes the rotation formula.
- To accurately calculate the blur on the image plane, the motion detection unit 51 divides x' and y' by z', as shown in the following formula (2): x'' = x'/z', y'' = y'/z'. Here, x'' and y'' are the coordinates after the rotational movement, and the index i indicates the time-series relationship of x and y.
- In step S15, the motion detection unit 51 continuously performs this mapping position calculation to obtain a point mapping position sequence (x0, y0), (x1, y1), ..., (xi, yi), and outputs it to the motion determination unit 52.
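Steps S11 through S15 can be sketched as follows. The sampling interval, the focal length value, and the exact form of equations (1) and (2) are illustrative assumptions (a pitch-yaw-roll rotation followed by a perspective re-projection scaled back to the image plane); the function names are hypothetical.

```python
import numpy as np

def rotate_point(x, y, z, ax, ay, az):
    """Rotate the current position by the angular rotation amounts
    (ax, ay, az) = (pitch, yaw, roll), as in Eq. (1), then divide by z'
    as in Eq. (2) to re-project onto the image plane."""
    cx, sx = np.cos(ax), np.sin(ax)
    cy, sy = np.cos(ay), np.sin(ay)
    cz, sz = np.cos(az), np.sin(az)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    xp, yp, zp = Rz @ Ry @ Rx @ np.array([x, y, z])
    return xp * z / zp, yp * z / zp  # scaled back to image-plane units (assumed)

def mapping_positions(gyro_samples, dt=1e-3, focal=1000.0):
    """Integrate sampled angular velocities (gx, gy, gz) (S11-S12) and
    accumulate the point mapping position sequence (S13-S15)."""
    x, y = 0.0, 0.0                 # start at the image-plane centre (0, 0)
    seq = [(x, y)]
    for gx, gy, gz in gyro_samples:
        ax, ay, az = gx * dt, gy * dt, gz * dt        # S12: integrate over dt
        x, y = rotate_point(x, y, focal, ax, ay, az)  # S14: rotate and re-project
        seq.append((x, y))
    return seq
```

The resulting sequence is what the motion determination unit 52 receives as input.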
- The motion determination unit 52 acquires and holds the point mapping position sequence (x0, y0), (x1, y1), ..., (xi, yi).
- In step S16, the motion determination unit 52 executes a motion determination process, performing a calculation with the point mapping position sequence (xi, yi) as input. Details of the motion determination process will be described later.
- In step S17, the motion determination unit 52 outputs, for example, 0 when the blur is equal to or less than the threshold value, and 1 when the blur exceeds the threshold value, according to the motion determination result.
- The motion determination is performed by obtaining the first principal component of the covariance of the point mapping position sequence and applying threshold processing to the first principal component.
- The covariance matrix is a 2x2 matrix having σij as components, as in the following equation (4), where Xi is the x coordinate xi of the point mapping position sequence, Yi is the y coordinate yi, and μi and μj are the respective expected values.
- the first principal component is obtained by singular value decomposition of the covariance matrix.
- the first principal component indicates the maximum variance magnitude of the covariance matrix, which is considered to represent the maximum blur magnitude.
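The covariance-based determination of steps S16 and S17 can be sketched as follows, assuming the point mapping position sequence is available as (x, y) pairs. The function names and the example threshold are illustrative.

```python
import numpy as np

def first_principal_component(points):
    """Largest singular value of the 2x2 covariance matrix (Eq. (4)) of
    the point mapping position sequence -- taken as the maximum blur
    magnitude, since it is the maximum variance of the trajectory."""
    pts = np.asarray(points, dtype=float)
    cov = np.cov(pts.T)                            # 2x2 covariance matrix
    s = np.linalg.svd(cov, compute_uv=False)       # singular value decomposition
    return s[0]

def motion_determination(points, threshold):
    """Return 1 when the blur exceeds the degradation threshold, else 0."""
    return int(first_principal_component(points) > threshold)
```

A nearly stationary trajectory yields a first principal component close to zero, so exposure continues; a spread-out trajectory trips the threshold and ends it.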
- In step S18, the motion determination unit 52 determines whether to end the process based on whether the motion determination result indicates that the blur is equal to or greater than the threshold value, or the longest exposure time has been reached. If neither holds, the process returns to step S11 and the subsequent processes are executed.
- In step S18, if the motion determination result is equal to or greater than the threshold value, or the longest exposure time has been reached, the camera motion detection process ends.
- In this way, the camera motion is sequentially detected, and the camera motion determination result is continuously output.
- the motion determination unit 52 includes a covariance matrix eigenvalue calculation unit 71 and a threshold calculation unit 72.
- The threshold calculation unit 72 receives the first principal component from the covariance matrix eigenvalue calculation unit 71, performs threshold processing using the deterioration threshold as a parameter, and outputs the result as the motion determination result.
- Image deterioration occurs due to blurring, but noise increases if the exposure time is shortened to eliminate the blurring. Therefore, to optimally prevent deterioration, a threshold value that considers both blur and noise must be calculated.
- In step S31, the covariance matrix eigenvalue calculation unit 71 receives the input of the point mapping position sequence (xi, yi).
- In step S32, the covariance matrix eigenvalue calculation unit 71 calculates the covariance matrix represented by equation (4).
- In step S33, the covariance matrix eigenvalue calculation unit 71 calculates the first principal component and supplies it to the threshold calculation unit 72.
- In step S34, the threshold calculation unit 72 outputs 0 as the determination result when the first principal component, that is, the maximum blur magnitude, is smaller than the deterioration threshold, indicating that the blur is smaller than the threshold; otherwise, it outputs 1, indicating that the blur is larger than the deterioration threshold.
- In step S35, the threshold calculation unit 72 outputs the determination result.
- The degree of blur can be approximated by the size of the first principal component of the covariance of the point mapping position sequence, and shot noise and thermal noise, the main causes of noise, can be approximated by a luminance-dependent Gaussian distribution. Therefore, in the present disclosure, an evaluation formula that considers both blur and noise is defined, and its value is subjected to threshold processing.
- The noise variance amount N(x) is modeled as having a linear relationship with the luminance x, for example as shown in the following equation (5).
- In equations (5) and (6), N(x) is the noise variance according to the luminance, λ is a parameter that controls the influence of the noise variance, x is the luminance, B is the first principal component of the blur, and σ is an adjustment parameter.
- In equation (6), while the exposure time is short, the luminance x is small, so the influence of the noise variance N is large and the exp term approaches 0; even a somewhat large blur B is therefore tolerated. As the exposure time is extended and the influence of the noise variance N decreases, the exp term approaches 1 and the size of the blur B is evaluated as it is. In this way, by evaluating noise and blur simultaneously with a variable weight, the exposure time that optimally controls image deterioration can be determined.
- Threshold processing is executed on the deterioration degree score C thus obtained, using the deterioration threshold, to obtain the motion determination result that determines whether to continue or end the exposure.
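Since equations (5) and (6) are described but not reproduced here, the following sketch assumes a linear noise model and an exponential down-weighting of the blur term with the stated limiting behaviour (the exp factor approaches 0 when noise dominates at low luminance and 1 for long exposures). The coefficients a, b, and lam, and the exact combination, are hypothetical.

```python
import math

def noise_variance(x, a=0.01, b=0.5):
    """Eq. (5): noise variance modeled as linear in the luminance x
    (coefficients a and b are hypothetical)."""
    return a * x + b

def degradation_score(B, x, lam=1.0):
    """Hypothetical form of the Eq. (6) evaluation: the blur B (first
    principal component) is down-weighted while noise dominates. The
    exp factor tends to 0 for small luminance (short exposure, strong
    relative noise) and to 1 as the exposure lengthens."""
    return B * math.exp(-lam * noise_variance(x) / max(x, 1e-6))
```

The score C is then compared against the deterioration threshold to decide whether exposure continues.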
- In the processing described above, a covariance matrix is obtained from the point mapping position sequence, and its first principal component is calculated and compared with a threshold value to determine whether the motion is greater than the predetermined threshold. Alternatively, a PSF (Point Spread Function) may be generated, its frequency analyzed, and the motion determination result obtained from the analysis.
- FIG. 7 shows a configuration example of the motion determination unit 52 that obtains a PSF from the point mapping position sequence, analyzes its frequency, and obtains the motion determination result.
- the PSF image generation unit 91 generates a PSF image from the point mapping position sequence and outputs the PSF image to the frequency analysis unit 92.
- The frequency analysis unit 92 receives the PSF image, performs frequency analysis, and calculates and outputs the motion determination result according to whether the frequency components are equal to or less than a certain threshold value.
- The PSF image is a gray image obtained by rendering the movement sequence (point mapping position sequence) in order as a polyline into a small image; it is, for example, the image P on the right side of the figure. When there is no movement, it becomes a point image.
- The frequency components obtained by applying an FFT to this PSF image are plotted one-dimensionally from low frequency to high frequency, and it is determined whether each frequency component is equal to or greater than a predetermined value.
- In step S51, the PSF image generation unit 91 receives the input of the point mapping position sequence (xi, yi).
- In step S52, the PSF image generation unit 91 generates a PSF image and outputs it to the frequency analysis unit 92.
- In step S53, the frequency analysis unit 92 performs frequency analysis by plotting the frequency components obtained by applying an FFT to the PSF image one-dimensionally from low frequency to high frequency.
- In step S54, the frequency analysis unit 92 performs threshold determination based on whether each frequency component is equal to or greater than a predetermined value. More specifically, when the motion is large, high-frequency components are lost from the PSF image; therefore, attention is paid to high-frequency components above a certain level, and the motion determination result is calculated by comparing the integrated value of those components against the deterioration threshold.
- When the magnitude of the blur is smaller than the predetermined threshold, the frequency analysis unit 92 outputs 0 as the determination result, indicating that the blur is smaller than the threshold; when it is larger, it outputs 1, indicating that the blur is larger than the threshold.
- In step S55, the frequency analysis unit 92 outputs the determination result.
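The PSF rendering and frequency analysis of steps S51 through S55 can be sketched as follows. The PSF image size, the boundary of the high-frequency band, and the threshold value are illustrative assumptions (the sketch rasterizes the trajectory as points rather than a full polyline).

```python
import numpy as np

def psf_motion_determination(points, size=32, hf_threshold=400.0):
    """Render the point mapping position sequence into a small PSF image,
    FFT it, and integrate the high-frequency amplitude. Large motion
    destroys high frequencies, so a small integral means large blur."""
    img = np.zeros((size, size))
    for x, y in points:                       # S52: rasterize the trajectory
        ix, iy = int(round(x)) + size // 2, int(round(y)) + size // 2
        if 0 <= ix < size and 0 <= iy < size:
            img[iy, ix] += 1.0
    img /= img.sum()                          # normalize to unit energy
    spec = np.abs(np.fft.fftshift(np.fft.fft2(img)))   # S53: frequency analysis
    yy, xx = np.indices(spec.shape)
    r = np.hypot(xx - size // 2, yy - size // 2)       # distance from DC
    hf = spec[r > size // 4].sum()            # S54: integrate high-frequency band
    return int(hf < hf_threshold)             # 1 = blur too large (S55)
```

A stationary trajectory gives a point PSF whose spectrum is flat, so the high-frequency integral stays large; a long motion streak collapses it.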
- the image photographing unit 32 includes an exposure control unit 101, an exposure unit 102, and a gain correction unit 103.
- The exposure unit 102 exposes the shooting scene and outputs the image data to the gain correction unit 103. The exposure unit 102 receives exposure start and end control from the exposure control unit 101.
- the exposure control unit 101 controls the start and end of exposure. More specifically, the exposure control unit 101 receives an exposure start signal, such as the pressing of the camera's shutter button, and controls the exposure unit 102 to start exposure. The exposure control unit 101 also notifies the camera motion detection unit 31 of the exposure start timing. When, after the start of exposure, the exposure time measured by an internal timer is at least the preset minimum exposure time and at most the maximum exposure time, and the motion determination result received from the camera motion detection unit 31 indicates that a motion larger than a predetermined value has been detected (determination result 1), the exposure control unit 101 controls the exposure unit 102 to end the exposure. If the exposure time reaches the maximum exposure time, the exposure control unit 101 likewise controls the exposure unit 102 to end the exposure. Further, the exposure control unit 101 transmits the exposure time from the start to the end of exposure to the gain correction unit 103.
- the gain correction unit 103 corrects the gain of the image data. More specifically, it multiplies the image data received from the exposure unit 102 by a gain corresponding to the exposure time and outputs the result.
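A minimal sketch of the gain correction in the gain correction unit 103; the exact gain law is not specified in the text, so normalization to a hypothetical reference exposure time (gain = reference time / actual time) is assumed here:

```python
import numpy as np

# Assumed gain law: brightness is normalized to a hypothetical reference
# exposure time, so a shortened exposure is amplified proportionally.
def gain_correct(image, exposure_time, reference_time):
    gain = reference_time / exposure_time
    return np.clip(image * gain, 0.0, 1.0)   # keep values in a valid range

# An exposure cut short at half the reference time is doubled in brightness.
img = np.full((2, 2), 0.2)
print(gain_correct(img, exposure_time=0.01, reference_time=0.02))
```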
- step S71 the exposure control unit 101 transmits an exposure start signal to the exposure unit 102, and the exposure unit 102 starts exposure.
- step S72 the exposure control unit 101 transmits an exposure start to the camera motion detection unit 31.
- step S73 the exposure control unit 101 measures the exposure time using an internal timer and determines whether the preset minimum exposure time has elapsed; if the exposure time is less than the minimum exposure time, it continues measuring. If it is determined in step S73 that the preset minimum exposure time has elapsed, the process proceeds to step S74.
- step S74 the exposure control unit 101 measures the exposure time using an internal timer, and determines whether a preset maximum exposure time has elapsed. In step S74, if the exposure time is less than the maximum exposure time, the process proceeds to step S75.
- step S75 the exposure control unit 101 determines whether the motion determination result from the camera motion detection unit 31 is smaller than the blur threshold, i.e., whether it is considered that there is no blur. If it is determined in step S75 that no blur has occurred, the process returns to step S74.
- while the exposure time does not exceed the maximum exposure time in step S74 and the motion determination result does not indicate large shake in step S75, the processes in steps S74 and S75 are repeated. Thus, the determination based on the motion determination result and the check against the maximum exposure time continue; that is, during this time, exposure continues.
- if it is determined in step S74 that the exposure time has exceeded the maximum exposure time, or if the motion determination result indicates large blur in step S75, the process proceeds to step S76.
- step S76 the exposure control unit 101 transmits exposure completion to the exposure unit 102, and the exposure unit 102 completes exposure of the shooting scene and supplies the image data to the gain correction unit 103. At this time, the exposure control unit 101 supplies information on the exposure time in the exposure unit 102 to the gain correction unit 103.
- step S77 the gain correction unit 103 corrects the image data by multiplying it by a gain corresponding to the exposure time received from the exposure control unit 101.
- step S78 the gain correction unit 103 outputs the image data with the gain corrected.
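The control flow of steps S71 to S78 can be sketched as follows; tmin, tmax, and the motion-determination callback are illustrative assumptions standing in for the exposure control unit 101 and the camera motion detection unit 31:

```python
import time

MIN_EXPOSURE = 0.01   # tmin, assumed value (seconds)
MAX_EXPOSURE = 0.10   # tmax, assumed value

# Sketch of steps S71-S78: expose at least tmin, then keep exposing until
# either the motion determination reports large blur (result == 1) or tmax
# elapses. `get_motion_determination` is a hypothetical callback standing in
# for the camera motion detection unit 31.
def control_exposure(get_motion_determination):
    start = time.monotonic()
    # S73: always continue until the minimum exposure time has elapsed.
    while time.monotonic() - start < MIN_EXPOSURE:
        pass
    # S74/S75: continue while no blur is reported and tmax is not reached.
    while time.monotonic() - start < MAX_EXPOSURE:
        if get_motion_determination() == 1:   # large blur -> end exposure
            break
    exposure_time = time.monotonic() - start  # S76: passed to gain correction
    return exposure_time

# With a detector that immediately reports blur, exposure ends near tmin;
# with one that never reports blur, it runs to tmax.
t = control_exposure(lambda: 1)
print(MIN_EXPOSURE <= t < MAX_EXPOSURE)
```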
- when exposure of the topmost line of the image is started at time t11, exposure always continues until time t12 when the minimum exposure time tmin elapses; thereafter, depending on the motion determination result, exposure continues until the motion determination result reports large shake, at most until time t14 when the longest exposure time tmax is reached. In FIG. 11, the motion determination result reports large blur at time t13, and exposure is shown to be completed there.
- image lines below the top line are shown progressively lower in the figure.
- with the exposure start time t11 as a reference, the exposure start of each lower line is delayed along the exposure start line L1.
- since each lower line starts exposure later along the exposure start line L1, the minimum exposure time tmin is likewise delayed along the minimum exposure line L2 with respect to time t12.
- similarly, the longest exposure time tmax is set with a delay along the longest exposure line L4 with respect to time t14.
- the exposure end line L3 is likewise set for the lower lines with time t13 as a reference.
- the hatched range represents exposure longer than the minimum exposure time tmin.
- in the scheme above, exposure starts at the shutter timing regardless of the determination result from the camera motion detection unit 31; if exposure is started while the camera motion is large, an image with large blur may be captured during the minimum exposure time tmin.
- the exposure control unit 101 may change the exposure start timing using the determination result received from the camera motion detection unit 31. For example, when the exposure start timing is reached but the camera motion is large, the blur can be reduced by delaying the exposure to start after the camera motion has stopped.
- in this case, the exposure unit 102 outputs an exposure start signal for starting motion determination at a timing before the actual start of exposure, and the motion determination result is referred to at the actual exposure start timing.
- the exposure control unit 101 may delay the start of exposure by the exposure unit 102 while the motion determination result indicates large blur, and start the exposure once the motion determination result indicates no blur.
- if blur is detected within the minimum exposure time tmin from the start of exposure, that is, while the exposure time is still too short for a sufficient SNR (Signal-to-Noise Ratio) to be obtained even if the exposure were terminated at that point, the exposure unit 102 may terminate the exposure, discard the partial exposure result, and perform the exposure again once the blur has subsided, so that a sufficient SNR can be obtained.
- the start and end of exposure may also be controlled by predicting future movement from the movement measured up to the present. For example, the motion data in a range going back a predetermined time from the current time is approximated by a polynomial, the motion at a future time is extrapolated from the polynomial, and the time at which the motion will exceed the threshold is predicted. By using such prediction, the exposure end timing determined at the upper lines of the image can also be obtained for the lines at the lower end of the screen, so that the entire image can be controlled to be captured within a predetermined blur.
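A minimal sketch of the polynomial prediction just described, assuming illustrative sample data and a quadratic fit (the polynomial degree and horizon are not specified in the text):

```python
import numpy as np

# Fit a polynomial to recent motion samples and extrapolate to find when the
# motion will exceed the threshold. Degree and sample data are assumptions.
def predict_threshold_crossing(times, motion, threshold, horizon, degree=2):
    coeffs = np.polyfit(times, motion, degree)       # fit recent history
    future = np.linspace(times[-1], times[-1] + horizon, 100)
    predicted = np.polyval(coeffs, future)           # extrapolate
    above = future[predicted > threshold]
    return above[0] if above.size else None          # first predicted crossing

# Motion growing quadratically: samples of m(t) = 100 t^2, so m exceeds 4
# at t = 0.2, which the extrapolation should locate.
t = np.linspace(0.0, 0.1, 11)
crossing = predict_threshold_crossing(t, 100 * t**2, threshold=4.0, horizon=0.2)
print(crossing)
```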
- the data holding unit 33 includes a memory 121.
- the data holding unit 33 receives images and motion data, associates them with each other, stores several frames of data in the internal memory 121, and outputs the image and motion data to the blur correction image combining unit 34.
- the blur correction image combining unit 34 includes an image alignment unit 141 that calculates the alignment parameters necessary for aligning each subsequent image to the position of the head image and deforms the images received from the data holding unit 33, a blend processing unit 142 that receives the aligned images and additively blends them into the integrated image being accumulated, a frame memory 143 that receives and stores the blended image and supplies the integrated image to the blend processing unit 142, and a corrected image output unit 144 that receives the accumulated image from the frame memory 143 and outputs it.
- the image alignment unit 141 receives the motion data from the memory 121 of the data holding unit 33, and calculates alignment parameters for aligning the subsequent image to the position of the top image using the camera parameters.
- the image alignment unit 141 receives, for example, the angular rotation amount of the camera between frame images acquired by the gyro sensor as motion data. Canceling the movement due to camera rotation to align the position amounts to obtaining the movement amount of each pixel position of the image, which can be realized using the rotation matrix based on the Euler angles of equation (1) described above.
- x and y are pixel positions of the image
- z is a focal length given by a camera parameter.
- the number of vertical and horizontal pixels, the image center, and the pixel size are also given as camera parameters. In this way, an image aligned with the head image is output.
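A hedged sketch of the per-pixel alignment described above, assuming a pinhole model in which the homography K·Rᵀ·K⁻¹ (with K built from the focal length z and image center) cancels a pure camera rotation; the Euler-angle order is an assumption, since equation (1) is not reproduced here:

```python
import numpy as np

# Map a pixel (x, y) through the inverse of a camera rotation. Intrinsics K
# are built from the focal length z and the image center (cx, cy), which the
# text says are given as camera parameters; angle conventions are assumed.
def align_pixel(x, y, z, yaw=0.0, pitch=0.0, roll=0.0, cx=0.0, cy=0.0):
    K = np.array([[z, 0, cx], [0, z, cy], [0, 0, 1.0]])
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cw, sw = np.cos(yaw), np.sin(yaw)
    Rz = np.array([[cr, -sr, 0], [sr, cr, 0], [0, 0, 1]])
    Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
    Ry = np.array([[cw, 0, sw], [0, 1, 0], [-sw, 0, cw]])
    R = Rz @ Rx @ Ry                       # Euler-angle rotation matrix
    H = K @ R.T @ np.linalg.inv(K)         # homography canceling the rotation
    p = H @ np.array([x, y, 1.0])
    return p[0] / p[2], p[1] / p[2]        # back to pixel coordinates

# With no rotation the pixel maps to itself.
print(align_pixel(10.0, 5.0, z=500.0))
```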
- the blend processing unit 142 blends the aligned image and the accumulated image being accumulated in the frame memory 143, updates the accumulated image, and sends it to the frame memory 143.
- the weights of the frame images to be accumulated may be made equal; for example, when eight images are accumulated, the weight of each image is 1/8.
- as an integration method, for example, each frame image may be weighted in proportion to its exposure time. Assuming that each frame image is suppressed to a blur amount below a certain level by the motion detection means described above, the amount of noise in an image is considered to decrease with exposure time, so adopting a weight that increases with exposure time realizes further noise reduction. Further, images whose blur is large and which are therefore inappropriate for integration may be discarded, and only images whose blur is small and which are appropriate for integration may be extracted and integrated.
- i is the frame image number.
- w (i) is the weight of the frame image i.
- the denominator is the sum of the exposure times of all the frame images, and the numerator is the exposure time t (i) of the frame image i.
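The weight description above can be written compactly as, for N frame images:

```latex
w(i) = \frac{t(i)}{\sum_{j=1}^{N} t(j)}
```

so that frames with longer exposure times, which are expected to carry less noise, contribute more to the integrated image.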
- step S91 the image alignment unit 141 receives the frame image and the corresponding motion data from the memory 121 of the data holding unit 33.
- step S92 the image alignment unit 141 calculates an alignment parameter for the first frame. That is, the image alignment unit 141 calculates where each pixel position of the current frame image corresponds to the top image.
- step S93 the image alignment unit 141 deforms the current frame image based on the position correspondence between the current frame image and the top frame image.
- step S94 the blend processing unit 142 reads the accumulated image from the frame memory 143.
- step S95 the blend processing unit 142 blends the deformed current frame image and the integrated image based on the calculated weight.
- step S96 the blend processing unit 142 determines whether or not the specified number of blends has been reached. If it is determined that the number has not been reached, the process returns to step S91; that is, steps S91 to S96 are repeated, with the next frame image and motion being input, until the specified number of blends is reached. If it is determined in step S96 that the specified number of blends has been reached, the process proceeds to step S97.
- step S97 the blend processing unit 142 outputs the current integrated image as a blended / blur corrected image and ends the process.
- as described above, the images that have been aligned in position and shape are integrated and blended to correct blur. At this time, blur can be corrected further by weighting the images to be integrated according to their exposure times.
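A minimal sketch of the weighted integration of steps S91 to S97, assuming the frames are already aligned and using the exposure-time-proportional weights described above:

```python
import numpy as np

# Blend aligned frame images into an accumulated image using
# exposure-time-proportional weights w(i) = t(i) / sum_j t(j).
# Alignment itself is assumed done; frames here are already registered.
def blend_frames(frames, exposure_times):
    total = sum(exposure_times)
    accumulated = np.zeros_like(frames[0], dtype=float)
    for frame, t in zip(frames, exposure_times):
        accumulated += (t / total) * frame   # longer exposure -> larger weight
    return accumulated

frames = [np.full((2, 2), 1.0), np.full((2, 2), 3.0)]
out = blend_frames(frames, exposure_times=[0.01, 0.03])
print(out)  # each pixel: 0.25*1 + 0.75*3 = 2.5
```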
- each image is aligned with the head image position and then converted to frequency space; components having a large amplitude at each frequency in the frequency space are preferentially selected, and the collected components are converted back to image space.
- this is a method of obtaining an image with less blur by combining a plurality of images.
- step S111 the image alignment unit 141 receives the frame image and the corresponding motion data from the memory 121 of the data holding unit 33.
- step S112 the image alignment unit 141 calculates alignment parameters for the first frame. That is, the image alignment unit 141 calculates where each pixel position of the current frame image corresponds to the top image.
- step S113 the image alignment unit 141 deforms the current frame image based on the position correspondence between the current frame image and the top frame image.
- step S114 the blend processing unit 142 applies an FFT (Fast Fourier Transform) to the current frame and converts it to frequency components.
- step S115 the blend processing unit 142 reads the integrated image that has been converted into the frequency component from the frame memory 143.
- step S116 the blend processing unit 142 selectively synthesizes, for example, the maximum value of each frequency component.
- step S117 the blend processing unit 142 determines whether or not the specified number of blends has been reached. If it is determined that the number has not reached, the process returns to step S111. That is, the process of steps S111 to S117 is repeated until the next frame image and motion are input and the prescribed number of blends is reached. If it is determined in step S117 that the specified number of blends has been reached, the process proceeds to step S118.
- step S118 the corrected image output unit 144 applies a two-dimensional inverse FFT to the blend image that has reached the specified number of blends.
- step S119 the corrected image output unit 144 outputs the current integrated image as a blended / blur corrected image and ends the process.
- as described above, the aligned images are FFT-transformed, the large-amplitude components among the frequency components are selectively synthesized, and the result is inverse-FFT-transformed and blended, making it possible to correct blur.
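The selective frequency synthesis of steps S111 to S119 can be sketched as follows (frame data is illustrative; per-frequency maximum-amplitude selection as described in step S116):

```python
import numpy as np

# Convert each aligned frame to the frequency domain, keep for every
# frequency the component with the largest amplitude across frames,
# then inverse-FFT the result.
def selective_frequency_blend(frames):
    specs = [np.fft.fft2(f) for f in frames]
    mags = np.stack([np.abs(s) for s in specs])
    best = np.argmax(mags, axis=0)                 # winning frame per frequency
    stacked = np.stack(specs)
    # Gather the winning complex component at each frequency.
    out_spec = np.take_along_axis(stacked, best[None, ...], axis=0)[0]
    return np.real(np.fft.ifft2(out_spec))

# A sharp frame and a horizontally blurred copy: the blurred spectrum is
# attenuated at every nonzero frequency, so the blend recovers the sharp frame.
rng = np.random.default_rng(0)
sharp = rng.random((16, 16))
blurred = (sharp + np.roll(sharp, 1, axis=1) + np.roll(sharp, -1, axis=1)) / 3
out = selective_frequency_blend([sharp, blurred])
print(out.shape)
```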
- each image blurs in a different way (amount and direction of motion), and a suitable set of images for blending can be obtained if exposure can be controlled so that the way each image blurs differs.
- the exposure time can be controlled. This control can also be applied in the imaging device 11 of the present disclosure.
- determining whether the first principal component of the covariance matrix exceeds the threshold is equivalent to determining whether the first principal component vector F1 is contained in a circle th whose diameter is the camera motion determination threshold.
- the camera motion determination threshold is changed anisotropically for each direction, and threshold processing is performed accordingly. This makes it possible to control exposure so that the image blur differs between images and to obtain a suitable set of images for blending.
- the camera motion threshold th1 is set in four directions: vertical, horizontal, and the two diagonals. For the motion determination of the first frame, frame 1, it is an isotropic value.
- the first principal component F1 is a vector in a direction close to the right diagonal direction, but is within the threshold th1, and therefore the motion determination result is regarded as having no significant motion and no blur.
- the camera motion determination threshold is set to a threshold th2 that is changed for each direction according to the motion blur result of frame 1.
- in the threshold th2, the threshold in the right oblique direction is reduced.
- the first principal component F1 is a vector in a direction close to the horizontal direction, but is within the threshold th2. Therefore, the motion determination result is regarded as having no significant motion and no blur.
- the threshold value th3 is set by changing the camera motion determination threshold value according to the motion blur result up to frame 2.
- the threshold value is set to be larger in the diagonally right direction than the threshold value th2, and the threshold value is set to be smaller than the threshold value th2 in the vertical direction.
- the first principal component F1 is a vector in a direction close to the left diagonal direction, but is within the threshold th3, so that the motion determination result is regarded as having no significant motion and no blur.
- the threshold for each direction is determined by calculating a histogram of the movement vectors MV (Δx, Δy) between the points in the point mapping position sequence. As shown in FIG. 18, a histogram in four directions is calculated from the set of movement vectors MV; the white circles in the figure are plots of the movement vectors MV. These are collected and accumulated in the effective areas Z1 to Z4 defined in gray in the figure for each direction, and the lengths of the accumulated vectors SV1 to SV4, obtained by integrating the movement vectors, become the histogram values for each direction.
- the movement vectors in the effective area Z1, covering the upper-right and lower-left directions, are integrated (the sign of the lower-left vectors is inverted) to obtain the integrated vector SV1, shown as a thick arrow, whose length is the histogram value.
- each effective area is defined with a certain angular spread centered on the direction of interest.
- similarly, the movement vectors in the upward/downward effective area Z2 are integrated (the sign of the downward vectors is inverted) to obtain the integrated vector SV2, shown as a thick arrow.
- the movement vectors in the effective area Z3, covering the diagonally upper-left and diagonally lower-right directions, are integrated (the sign of the lower-right vectors is inverted) to obtain the integrated vector SV3, shown as a thick arrow.
- the movement vectors in the effective area Z4, covering the left and right directions, are integrated (the sign of the leftward vectors is inverted) to obtain the integrated vector SV4, shown as a thick arrow.
- the effective areas may follow a pattern other than this, may overlap with an adjacent direction, or may leave gaps between directions.
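A sketch of the four-direction histogram construction, assuming effective areas that partition the half-plane evenly around the four axes (the exact angular spread of Z1 to Z4 is a choice made here, not taken from the figure):

```python
import numpy as np

# Assign each movement vector MV = (dx, dy) to the nearest of four axes
# (horizontal, diagonal, vertical, anti-diagonal), flipping sign so that
# opposite directions accumulate together; the histogram value h(d) is the
# length of each accumulated vector, as described for SV1-SV4.
def direction_histogram(vectors):
    axes = np.array([0.0, 45.0, 90.0, 135.0])          # direction axes, degrees
    sums = np.zeros((4, 2))
    for dx, dy in vectors:
        if dy < 0 or (dy == 0 and dx < 0):
            dx, dy = -dx, -dy                          # invert opposite direction
        angle = np.degrees(np.arctan2(dy, dx))         # now in [0, 180)
        diff = np.abs(axes - angle)
        d = np.argmin(np.minimum(diff, 180.0 - diff))  # nearest axis
        sums[d] += (dx, dy)
    return np.linalg.norm(sums, axis=1)                # h(d) for each direction

# Horizontal vectors accumulate (with sign folding) into the 0-degree bin,
# vertical ones into the 90-degree bin.
print(direction_histogram([(3, 0), (-2, 0), (0, 1), (0, -1)]))
```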
- a threshold is then set so that blur is reduced in the observed blur direction when shooting subsequent frames; therefore, a threshold for each direction, expressed by equation (8) below, is calculated.
- d represents a direction; in the example of FIG. 18, the directions are vertical, horizontal, upper-right diagonal, and lower-right diagonal.
- Th is a threshold parameter set by the user.
- α is an adjustment parameter; the smaller its value, the smaller the threshold, which more severely restricts blur in that direction.
- h (d) is a histogram value in the direction d.
- step S131 the covariance matrix eigenvalue calculation unit 71 inputs the point mapping position sequence (xi, yi) of the frame image at time t.
- step S132 the covariance matrix eigenvalue calculation unit 71 calculates a covariance matrix from the point mapping position sequence (xi, yi).
- step S133 the covariance matrix eigenvalue calculation unit 71 calculates the first principal component and the first principal component direction from the covariance matrix, and outputs them to the threshold calculation unit 72.
- step S134 the threshold value calculation unit 72 inputs the point mapping position sequence (xij, yij) of the frame image up to time t-1.
- step S136 the threshold value calculation unit 72 integrates the movement vectors for each direction to obtain a histogram value (magnitude of the integration vector) for each direction.
- step S137 the threshold value calculation unit 72 calculates the threshold value in each direction from the histogram value for each direction.
- the threshold calculation unit 72 determines whether the first principal component vector of frame image t (whose length is the first principal component and whose direction is the first principal component direction) falls within the polygonal region obtained by connecting the thresholds for each direction.
- step S139 the threshold calculation unit 72 outputs 0 as the motion determination result if the first principal component vector falls within the polygonal area, and 1 if it protrudes beyond it.
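Steps S131 to S139 can be sketched as follows; since equation (8) is not reproduced in the text, the polygonal region is approximated here by interpolating given per-direction thresholds rather than computing them from the histogram:

```python
import numpy as np

# S132/S133: first principal component of the point mapping position sequence;
# S138/S139: test it against a region built from direction-dependent
# thresholds at 0, 45, 90, and 135 degrees (interpolation is an assumption).
def motion_determination(points, thresholds):
    pts = np.asarray(points, dtype=float)
    cov = np.cov(pts.T)                       # S132: covariance matrix
    vals, vecs = np.linalg.eigh(cov)
    f1 = np.sqrt(vals[-1]) * vecs[:, -1]      # S133: first principal component vector
    angle = np.degrees(np.arctan2(f1[1], f1[0])) % 180.0
    # S138: allowed radius interpolated between neighboring direction thresholds.
    axes = np.array([0.0, 45.0, 90.0, 135.0, 180.0])
    radii = np.append(thresholds, thresholds[0])   # wrap around at 180 degrees
    allowed = np.interp(angle, axes, radii)
    # S139: 0 if the vector fits inside the region, 1 if it protrudes.
    return 0 if np.linalg.norm(f1) <= allowed else 1

# A trajectory spread along x passes an isotropic loose threshold but fails
# when the horizontal threshold is made strict.
pts = [(x, 0.01 * x) for x in np.linspace(-1, 1, 50)]
print(motion_determination(pts, thresholds=[2.0, 2.0, 2.0, 2.0]))
print(motion_determination(pts, thresholds=[0.1, 2.0, 2.0, 2.0]))
```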
- since the threshold can be set according to the direction of movement in this way, for example in face image detection, where tolerance for vertical blur is known to be higher than for horizontal blur, a loose threshold may be set for the vertical direction and a strict threshold for the horizontal direction. In-vehicle cameras are more susceptible to vibration in the vertical direction than in the horizontal direction, so a loose threshold may be set for the horizontal direction and a strict threshold for the vertical direction. Furthermore, for a depth camera or the like, higher resolution in the horizontal direction is desirable, so the threshold in the horizontal direction may be set strictly.
- << Second embodiment >> In the above description, still image shooting has been described, but the present technology can also be applied to moving image shooting.
- FIG. 20 illustrates a configuration example of the second embodiment of the imaging apparatus 11 that captures a moving image to which the technology of the present disclosure is applied.
- components having the same functions as those in FIG. 2 are given the same names and reference symbols.
- the imaging apparatus 11 in FIG. 20 includes an image capturing unit 32 that captures a scene, a camera motion detection unit 31 that detects camera motion, and a noise reduction unit 171 that performs noise reduction processing with parameters adjusted according to the exposure time. That is, the imaging apparatus 11 in FIG. 20 differs from the imaging apparatus 11 in FIG. 2 in that the noise reduction unit 171 is provided instead of the data holding unit 33 and the shake correction image synthesis unit 34.
- the camera motion detection unit 31 receives an exposure start signal from the image capturing unit 32, measures the camera motion during exposure, determines whether the camera motion exceeds a predetermined value, and outputs the determination result to the image capturing unit 32. In addition, the camera motion detection unit 31 transmits the exposure time of each frame image to the noise reduction unit 171.
- the image capturing unit 32 exposes the shooting scene to measure image data, transmits the exposure start timing to the camera motion detection unit 31, receives the motion determination result from the camera motion detection unit 31, determines the end of exposure according to the motion determination result, gain-corrects the captured image according to the exposure time, and outputs the corrected image data to the noise reduction unit 171.
- the noise reduction unit 171 performs noise reduction processing on the image data based on the exposure time, and outputs blur-corrected moving image frame images.
- the brightness of each image is adjusted by the gain correction processing in the image capturing unit 32, but the amount of noise differs between images because the exposure times differ.
- the noise reduction unit 171 performs noise reduction processing with an intensity corresponding to the exposure time.
- the noise reduction unit 171 includes a filter generation unit 191 and a noise reduction processing unit 192.
- the filter generation unit 191 receives the exposure time from the camera motion detection unit 31, generates a noise reduction filter, and outputs the filter to the noise reduction processing unit 192.
- the noise reduction processing unit 192 receives the image data from the image capturing unit 32, performs noise reduction processing of the image data using the filter received from the filter generation unit 191, and outputs a shake correction moving image frame image.
- a median filter can be used as a filter corresponding to the exposure time generated by the filter generation unit 191.
- the median filter collects the pixel values around the pixel of interest and replaces the pixel value of the pixel of interest with their median. If this replaced pixel value is output as is, not only the noise but also the texture of the original image is crushed, so a method of blending the median value with the original pixel value before output is often used.
- the blend ratio of this blend can be changed according to the exposure time.
- step S151 the filter generation unit 191 receives the exposure time from the camera motion detection unit 31.
- step S152 the filter generation unit 191 generates a filter according to the exposure time for noise reduction, and outputs the filter to the noise reduction processing unit 192.
- step S153 the noise reduction processing unit 192 receives the image data from the image capturing unit 32, and performs noise reduction processing of the image data using the filter received from the filter generation unit 191.
- step S154 the noise reduction processing unit 192 outputs a shake correction moving image frame image.
- when the exposure time is short and much noise is expected, the median value is blended strongly; when the exposure time is long and little noise is expected, the original pixel value is blended strongly.
- the noise reduction filter is not limited to the median filter, and a filter having a noise reduction effect such as a Wiener filter or a bilateral filter may be used.
- the filter coefficient itself may be changed according to the exposure time. That is, the filter coefficient may be switched so as to be strong when the exposure time is short and weak when it is long.
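A minimal sketch of the exposure-time-dependent noise reduction in the noise reduction unit 171, using a 3x3 median filter and an assumed linear blend law (short exposure means more expected noise, so the median is blended strongly):

```python
import numpy as np

# 3x3 median filter whose output is blended with the original pixel by a
# ratio depending on the exposure time. The linear blend law is an assumption.
def median_denoise(image, exposure_time, max_exposure):
    h, w = image.shape
    padded = np.pad(image, 1, mode='edge')
    # 3x3 median via stacking the 9 shifted views of the padded image.
    stack = np.stack([padded[dy:dy + h, dx:dx + w]
                      for dy in range(3) for dx in range(3)])
    median = np.median(stack, axis=0)
    # Short exposure -> expect more noise -> blend the median strongly.
    alpha = 1.0 - min(exposure_time / max_exposure, 1.0)
    return alpha * median + (1.0 - alpha) * image

img = np.zeros((5, 5))
img[2, 2] = 1.0                    # a single noise spike
short = median_denoise(img, exposure_time=0.0, max_exposure=0.1)
long_ = median_denoise(img, exposure_time=0.1, max_exposure=0.1)
print(short[2, 2], long_[2, 2])   # spike removed vs. kept
```

As noted above, a Wiener filter or bilateral filter could replace the median filter while keeping the same exposure-time-dependent blending.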
- as described above, by principal component analysis of the motion trajectory point sequence (point mapping position sequence), blur can be suppressed even when the total absolute value of the blur is large, and the exposure time can then be appropriately lengthened.
- the first principal component of the motion trajectory point sequence can be kept within a range that can be corrected by in-plane blur correction post-processing.
- with control that keeps the variance of the first principal component of the motion trajectory point sequence within a preset amount, the frequency components of the original image perpendicular to the specific blur direction are retained, and by collecting images with different blur directions, a highly accurate blur correction result image can be obtained.
- FIG. 23 shows a configuration example of a general-purpose personal computer.
- This personal computer incorporates a CPU (Central Processing Unit) 1001.
- An input / output interface 1005 is connected to the CPU 1001 via a bus 1004.
- a ROM (Read Only Memory) 1002 and a RAM (Random Access Memory) 1003 are connected to the bus 1004.
- connected to the input/output interface 1005 are: an input unit 1006 including input devices such as a keyboard and a mouse with which a user inputs operation commands; an output unit 1007 that outputs a processing operation screen and images of processing results to a display device; a storage unit 1008 including a hard disk drive that stores programs and various data; a communication unit 1009 including a LAN (Local Area Network) adapter and the like that executes communication processing via a network represented by the Internet; and a drive 1010 that reads and writes data from and to a removable medium 1011 such as a magnetic disk (including a flexible disk), an optical disc (including a CD-ROM (Compact Disc-Read Only Memory) and a DVD (Digital Versatile Disc)), a magneto-optical disk (including an MD (Mini Disc)), or a semiconductor memory.
- the CPU 1001 executes various processes according to a program stored in the ROM 1002, or a program read from a removable medium 1011 such as a magnetic disk, optical disc, magneto-optical disk, or semiconductor memory, installed in the storage unit 1008, and loaded from the storage unit 1008 into the RAM 1003.
- the RAM 1003 also appropriately stores data necessary for the CPU 1001 to execute various processes.
- the series of processes described above is performed, for example, by the CPU 1001 loading the program stored in the storage unit 1008 into the RAM 1003 via the input/output interface 1005 and the bus 1004 and executing it.
- the program executed by the computer (CPU 1001) can be provided by being recorded on the removable medium 1011 as a package medium, for example.
- the program can be provided via a wired or wireless transmission medium such as a local area network, the Internet, or digital satellite broadcasting.
- the program can be installed in the storage unit 1008 via the input / output interface 1005 by attaching the removable medium 1011 to the drive 1010. Further, the program can be received by the communication unit 1009 via a wired or wireless transmission medium and installed in the storage unit 1008. In addition, the program can be installed in advance in the ROM 1002 or the storage unit 1008.
- the program executed by the computer may be a program that is processed in time series in the order described in this specification, or a program that is processed in parallel or at necessary timing, such as when a call is made.
- in this specification, the system means a set of a plurality of components (devices, modules (parts), etc.), regardless of whether all the components are in the same housing. Accordingly, a plurality of devices housed in separate housings and connected via a network, and a single device housing a plurality of modules in one housing, are both systems.
- the present disclosure can take a cloud computing configuration in which one function is shared by a plurality of devices via a network and is processed jointly.
- each step described in the above flowchart can be executed by one device or can be shared by a plurality of devices.
- the plurality of processes included in the one step can be executed by being shared by a plurality of apparatuses in addition to being executed by one apparatus.
- the present disclosure can also take the following configurations.
- <1> A photographing apparatus including: a camera motion detection unit that detects camera motion; a comparison unit that calculates a distribution degree of the camera motion trajectory based on the camera motion detection result and compares the distribution degree with a predetermined threshold; and an exposure control unit that controls the start and end of exposure based on the comparison result of the comparison unit.
- <2> The photographing apparatus according to <1>, wherein the comparison unit calculates a first principal component of the covariance as the distribution degree of the camera motion trajectory based on the camera motion detection result and compares the first principal component with the predetermined threshold.
- <3> The imaging apparatus according to <1>, wherein the comparison unit generates a PSF (Point Spread Function) image from the camera motion detection result as the degree of distribution of the camera motion trajectory, and compares the result of frequency analysis of the PSF image with the predetermined threshold.
- <4> The imaging apparatus according to <1>, wherein the comparison unit, as the degree of distribution of the camera motion trajectory based on the camera motion detection result, approximates the detection results over a fixed range going back from the current time to a predetermined time with a polynomial, extrapolates the motion at a future time from the polynomial, and compares it with the predetermined threshold.
- <5> The imaging apparatus according to any one of <1> to <4>, wherein the camera motion detection unit includes a gyro sensor, an acceleration sensor, a geomagnetic sensor, an altitude sensor, a vibration sensor, and motion capture that measures motion by tracking a marker on an object with a sub-camera, the sub-camera being different from the camera used for imaging and serving to detect the motion of the camera used for imaging.
- <6> The imaging apparatus according to any one of <1> to <5>, wherein the exposure control unit performs control so as to end the exposure when, after the exposure is started, the elapsed time is longer than a minimum exposure time and within a maximum exposure time, and the comparison result indicates that the blur is larger than a predetermined value.
- <7> The imaging apparatus according to any one of <1> to <6>, wherein the exposure control unit performs control so as to end the exposure when the maximum exposure time is reached after the exposure is started, based on the comparison result of the comparison unit.
- <8> The imaging apparatus according to any one of <1> to <7>, wherein, when the comparison result indicates that the blur is larger than a predetermined value at the timing when the exposure is to be started, the exposure control unit performs control so as to delay the timing at which the exposure is started until the comparison result indicates that the blur is smaller than the predetermined value.
- <9> The imaging apparatus according to any one of <1> to <8>, wherein the exposure control unit performs control so as to end the exposure in consideration of an SNR (Signal to Noise Ratio), based on the comparison result of the comparison unit.
- <10> The imaging apparatus according to any one of <1> to <9>, further including a noise removal unit that integrates a plurality of images captured by the camera at predetermined intervals and removes noise from the images.
- <11> The imaging apparatus according to <10>, wherein the noise removal unit integrates only those images, among the plurality of images captured by the camera, whose blur magnitude is smaller than a predetermined magnitude, and removes noise from the images.
- <12> The imaging apparatus according to <10>, wherein the noise removal unit integrates the plurality of images captured by the camera with weights according to their exposure times, and removes noise from the images.
- <13> The imaging apparatus according to <10>, wherein the noise removal unit integrates the plurality of images captured by the camera with weights that take the blur direction of each image into account, and removes noise from the images.
- <14> The imaging apparatus according to <10>, wherein the noise removal unit integrates the plurality of images captured by the camera with equal weights, and removes noise from the images.
- <15> The imaging apparatus according to <10>, wherein the noise removal unit applies an FFT (Fast Fourier Transform) to the plurality of images captured by the camera, collects components of a predetermined amplitude for each frequency component, and generates an image by applying an inverse FFT, thereby removing noise from the images.
- <16> The imaging apparatus according to <15>, wherein the noise removal unit applies an FFT to the plurality of images captured by the camera, collects the component with the maximum amplitude for each frequency component, and generates an image by applying an inverse FFT, thereby removing noise from the images.
- <17> An imaging method including the steps of: detecting camera motion; calculating a degree of distribution of the camera motion trajectory based on the camera motion detection result and comparing it with a predetermined threshold; and controlling the start and end of exposure based on the comparison result.
- <18> A program that causes a computer to function as: a camera motion detection unit that detects camera motion; a comparison unit that calculates a degree of distribution of the camera motion trajectory based on the camera motion detection result and compares the degree of distribution with a predetermined threshold; and an exposure control unit that controls the start and end of exposure based on the comparison result of the comparison unit.
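The exposure-control rules in <6> to <8> above can be sketched as a single control loop. This is a minimal illustration, assuming a discrete stream of per-tick blur estimates and tick-based exposure times; the function and variable names are not taken from the disclosure.

```python
def control_exposure(blur_samples, t_min, t_max, threshold):
    """Decide an exposure interval from a stream of per-tick blur estimates.

    - The start is delayed while blur exceeds the threshold (rule <8>).
    - After starting, exposure ends early if blur exceeds the threshold
      once the minimum exposure time has elapsed (rule <6>).
    - Exposure always ends at the maximum exposure time (rule <7>).
    Returns (start_tick, end_tick).
    """
    start = None
    for tick, blur in enumerate(blur_samples):
        if start is None:
            if blur <= threshold:      # delay start until motion settles
                start = tick
            continue
        elapsed = tick - start
        if elapsed >= t_max:           # hard stop at maximum exposure time
            return start, tick
        if elapsed >= t_min and blur > threshold:  # early stop on blur
            return start, tick
    return start, len(blur_samples)

# Start is delayed past the first two shaky ticks; exposure ends early at tick 6.
interval = control_exposure([5, 5, 1, 1, 1, 1, 6, 1], t_min=2, t_max=10, threshold=2)
```

With a calm blur stream, the same loop simply runs until the maximum exposure time is reached.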
- 11 imaging device, 31 camera motion detection unit, 32 image capturing unit, 33 data holding unit, 34 blur-corrected image synthesis unit, 51 motion detection unit, 52 motion determination unit, 71 covariance matrix eigenvalue calculation unit, 72 threshold calculation unit, 91 PSF image generation unit, 92 frequency analysis unit, 101 exposure control unit, 102 exposure unit, 103 gain correction unit, 121 memory, 141 image alignment unit, 142 blend processing unit, 143 frame memory, 144 corrected image output unit, 171 noise reduction unit, 191 filter generation unit, 192 noise reduction processing unit
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Studio Devices (AREA)
- Exposure Control For Cameras (AREA)
- Adjustment Of Camera Lenses (AREA)
Abstract
Description
1. First embodiment
2. Second embodiment
3. Application examples
<Conventional imaging apparatus>
The technique of Patent Document 1 starts exposure and, at the same time, starts acquiring the motion of the camera serving as the imaging unit as an "angular velocity value" or an "integrated angular velocity value", and ends the exposure when the "angular velocity value" or the "integrated angular velocity value" exceeds a predetermined value.
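In code form, this conventional scheme amounts to the following sketch; the sample rate, units, and limit value are illustrative assumptions, not values from Patent Document 1.

```python
def conventional_exposure_length(angular_velocities, dt, limit):
    """Conventional control: integrate the angular velocity magnitude
    from the start of exposure and end the exposure as soon as the
    integrated value exceeds a fixed limit.
    Returns the number of samples during which the shutter stayed open."""
    integrated = 0.0
    for i, omega in enumerate(angular_velocities):
        integrated += abs(omega) * dt
        if integrated > limit:
            return i + 1   # exposure ends at this sample
    return len(angular_velocities)

# Constant 1 rad/s sampled every 10 ms: the 0.035 rad limit is crossed at the 4th sample.
n = conventional_exposure_length([1.0] * 10, dt=0.01, limit=0.035)
```

Because only the accumulated magnitude is considered, this scheme cannot distinguish a tight oscillation (small net blur) from a steady drift of the same integrated value, which motivates the trajectory-spread criterion of the present disclosure.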
Accordingly, in the imaging apparatus of the present disclosure, the covariance is obtained from the camera motion, and the motion is evaluated based on the first principal component of the covariance of the motion trajectory point sequence.
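A minimal NumPy sketch of this evaluation (the threshold value and sample trajectory are illustrative assumptions): compute the covariance of the trajectory point sequence and compare its largest eigenvalue, the first principal component, against a threshold.

```python
import numpy as np

def trajectory_spread(points, threshold):
    """Measure trajectory spread as the first principal component (the
    largest eigenvalue of the covariance matrix of the 2D point sequence)
    and flag the motion as too large when it exceeds the threshold."""
    pts = np.asarray(points, dtype=float)
    cov = np.cov(pts, rowvar=False)          # 2x2 covariance matrix
    first_pc = np.linalg.eigvalsh(cov)[-1]   # largest eigenvalue
    return first_pc, bool(first_pc > threshold)

# A trajectory stretched along x has a large first principal component.
spread, too_large = trajectory_spread([(0, 0), (1, 0.1), (2, -0.1), (3, 0.05)], 1.0)
```

Unlike an integrated angular velocity, the first principal component stays small for a trajectory that jitters around one point, so exposure need not be cut short by harmless oscillation.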
Next, a configuration example of the camera motion detection unit 31 will be described with reference to FIG. 3.
Next, the camera motion detection process by the camera motion detection unit 31 will be described with reference to the flowchart of FIG. 4. Since the camera motion detection process measures angular velocity with a gyro sensor, it is premised on treating the camera motion as rotational motion.
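Under this rotational-motion premise, a point mapping position sequence can be obtained by integrating the gyro's angular velocities into angles and projecting them onto the image plane. The pinhole projection x = f·tan(θ) used below is a common approximation adopted here as an assumption, not a formula taken from the disclosure.

```python
import math

def gyro_to_point_mapping(omegas, dt, focal_px):
    """Integrate gyro angular velocities (pitch, yaw in rad/s) into angles
    and map them to image-plane displacements in pixels, producing a
    point mapping position sequence for the motion determination step."""
    theta_pitch = theta_yaw = 0.0
    points = []
    for w_pitch, w_yaw in omegas:
        theta_pitch += w_pitch * dt
        theta_yaw += w_yaw * dt
        points.append((focal_px * math.tan(theta_yaw),     # yaw -> horizontal shift
                       focal_px * math.tan(theta_pitch)))  # pitch -> vertical shift
    return points

# 10 ms of constant 0.01 rad/s yaw at f = 1000 px ends about 0.1 px to the right.
trail = gyro_to_point_mapping([(0.0, 0.01)] * 10, dt=0.001, focal_px=1000.0)
```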
Next, a first configuration example of the motion determination unit 52 will be described with reference to the block diagram of FIG. 5. The motion determination unit 52 includes a covariance matrix eigenvalue calculation unit 71 and a threshold calculation unit 72.
Next, the motion determination process by the motion determination unit 52 of FIG. 5 will be described with reference to the flowchart of FIG. 6.
In the above, an example has been described in which a covariance matrix is obtained from the point mapping position sequence and its first principal component is calculated and compared with a threshold to determine whether the motion is larger than a predetermined threshold; alternatively, a PSF (Point Spread Function) may be obtained from the point mapping position sequence and the motion determination result may be derived by analyzing its frequency content.
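A rough sketch of this PSF-based alternative (the raster size and the spectral statistic are illustrative assumptions): the point mapping position sequence is rasterized into a PSF image, and its 2D FFT shows how strongly the blur attenuates non-DC frequencies.

```python
import numpy as np

def psf_spectrum_ratio(points, size=16):
    """Rasterize a point mapping position sequence into a PSF image and
    return the mean spectral amplitude relative to the DC component:
    1.0 for an ideal point (no blur), smaller as the blur spreads."""
    psf = np.zeros((size, size))
    for x, y in points:
        psf[int(round(y)) % size, int(round(x)) % size] += 1.0
    psf /= psf.sum()                       # normalize to unit total energy
    spectrum = np.abs(np.fft.fft2(psf))
    return spectrum.mean() / spectrum[0, 0]

sharp = psf_spectrum_ratio([(0, 0)] * 8)                  # stationary camera
smeared = psf_spectrum_ratio([(i, 0) for i in range(8)])  # linear smear
```

A motion determination could then compare this ratio (or any other frequency-domain statistic) with a predetermined threshold, in place of the covariance eigenvalue.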
Next, the motion determination process by the motion determination unit 52 of FIG. 7 will be described with reference to the flowchart of FIG. 8.
Next, a configuration example of the image capturing unit 32 will be described with reference to the block diagram of FIG. 9.
Next, the capturing process by the image capturing unit 32 will be described with reference to the flowchart of FIG. 10.
Next, a configuration example of the data holding unit 33 will be described with reference to the block diagram of FIG. 12.
Next, a configuration example of the blur-corrected image synthesis unit 34 will be described with reference to the block diagram of FIG. 13.
Next, a first processing example of the blur-corrected image synthesis process will be described with reference to the flowchart of FIG. 14.
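At its core, this synthesis step is a weighted integration of the aligned frames. The sketch below weights each frame by its exposure time, one of the weighting schemes listed in the configurations; the frame contents and weights are illustrative assumptions.

```python
import numpy as np

def integrate_frames(frames, exposure_times):
    """Reduce noise by averaging aligned frames, giving each frame a
    weight proportional to its exposure time (longer exposures carry
    less noise and therefore deserve more weight)."""
    stack = np.asarray(frames, dtype=float)            # shape (N, H, W)
    weights = np.asarray(exposure_times, dtype=float)
    weights = weights / weights.sum()                  # normalize weights
    return np.tensordot(weights, stack, axes=1)        # weighted average

frames = [np.full((2, 2), 2.0), np.full((2, 2), 4.0)]
merged = integrate_frames(frames, exposure_times=[1.0, 3.0])  # every pixel 3.5
```

Passing equal exposure times reduces this to the plain equal-weight averaging also listed among the configurations.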
In the above, an example has been described in which blur is corrected by integrating aligned images; alternatively, as described in "Burst Deblurring: Removing Camera Shake Through Fourier Burst Accumulation" by Mauricio Delbracio, a method has been proposed in which a plurality of images with different blur directions and magnitudes are captured and the blur-free components of each image are selectively combined to obtain an image with reduced blur, and this technique may also be applied.
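A simplified sketch of this Fourier burst accumulation idea, using the max-amplitude-per-frequency selection described in the configurations (the published method uses smoothed amplitude-based weights, which a hard argmax only approximates):

```python
import numpy as np

def fourier_burst_merge(frames):
    """Merge a burst of aligned frames: for every spatial frequency keep
    the component whose amplitude is largest across the burst (blur
    attenuates frequencies, so the least-blurred frame tends to win),
    then invert the FFT to obtain the merged image."""
    spectra = np.fft.fft2(np.asarray(frames, dtype=float), axes=(-2, -1))
    winner = np.argmax(np.abs(spectra), axis=0)                # per-frequency index
    picked = np.take_along_axis(spectra, winner[None], axis=0)[0]
    return np.fft.ifft2(picked).real

# Merging identical frames reproduces the frame itself.
frame = np.arange(16.0).reshape(4, 4)
merged = fourier_burst_merge([frame, frame])
```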
Next, the motion determination process using direction-specific thresholds will be described with reference to the flowchart of FIG. 19.
The above description has dealt with still-image shooting, but the technique can also be applied to moving-image shooting.
The noise reduction unit 171 includes a filter generation unit 191 and a noise reduction processing unit 192.
Next, the noise reduction process will be described with reference to the flowchart of FIG. 22.
<Example of execution by software>
Incidentally, the series of processes described above can be executed by hardware, but can also be executed by software. When the series of processes is executed by software, a program constituting the software is installed from a recording medium into a computer built into dedicated hardware, or into a computer capable of executing various functions by installing various programs, such as a general-purpose personal computer.
Claims (18)
- An imaging apparatus including: a camera motion detection unit that detects camera motion; a comparison unit that calculates a degree of distribution of the camera motion trajectory based on the camera motion detection result and compares the degree of distribution with a predetermined threshold; and an exposure control unit that controls the start and end of exposure based on the comparison result of the comparison unit.
- The imaging apparatus according to claim 1, wherein the comparison unit calculates the first principal component of the covariance as the degree of distribution of the camera motion trajectory based on the camera motion detection result, and compares the first principal component with the predetermined threshold.
- The imaging apparatus according to claim 1, wherein the comparison unit generates a PSF (Point Spread Function) image from the camera motion detection result as the degree of distribution of the camera motion trajectory, and compares the result of frequency analysis of the PSF image with the predetermined threshold.
- The imaging apparatus according to claim 1, wherein the comparison unit, as the degree of distribution of the camera motion trajectory based on the camera motion detection result, approximates the detection results over a fixed range going back from the current time to a predetermined time with a polynomial, extrapolates the motion at a future time from the polynomial, and compares it with the predetermined threshold.
- The imaging apparatus according to claim 1, wherein the camera motion detection unit includes a gyro sensor, an acceleration sensor, a geomagnetic sensor, an altitude sensor, a vibration sensor, and motion capture that measures motion by tracking a marker on an object with a sub-camera, the sub-camera being different from the camera used for imaging and serving to detect the motion of the camera used for imaging.
- The imaging apparatus according to claim 1, wherein the exposure control unit performs control so as to end the exposure when, after the exposure is started, the elapsed time is longer than a minimum exposure time and within a maximum exposure time, and the comparison result of the comparison unit indicates that the blur is larger than a predetermined value.
- The imaging apparatus according to claim 1, wherein the exposure control unit performs control so as to end the exposure when the maximum exposure time is reached after the exposure is started, based on the comparison result of the comparison unit.
- The imaging apparatus according to claim 1, wherein, when the comparison result indicates that the blur is larger than a predetermined value at the timing when the exposure is to be started, the exposure control unit performs control so as to delay the timing at which the exposure is started until the comparison result indicates that the blur is smaller than the predetermined value.
- The imaging apparatus according to claim 1, wherein the exposure control unit performs control so as to end the exposure in consideration of an SNR (Signal to Noise Ratio), based on the comparison result of the comparison unit.
- The imaging apparatus according to claim 1, further including a noise removal unit that integrates a plurality of images captured by the camera at predetermined intervals and removes noise from the images.
- The imaging apparatus according to claim 10, wherein the noise removal unit integrates only those images, among the plurality of images captured by the camera, whose blur magnitude is smaller than a predetermined magnitude, and removes noise from the images.
- The imaging apparatus according to claim 10, wherein the noise removal unit integrates the plurality of images captured by the camera with weights according to their exposure times, and removes noise from the images.
- The imaging apparatus according to claim 10, wherein the noise removal unit integrates the plurality of images captured by the camera with weights that take the blur direction of each image into account, and removes noise from the images.
- The imaging apparatus according to claim 10, wherein the noise removal unit integrates the plurality of images captured by the camera with equal weights, and removes noise from the images.
- The imaging apparatus according to claim 10, wherein the noise removal unit applies an FFT (Fast Fourier Transform) to the plurality of images captured by the camera, collects components of a predetermined amplitude for each frequency component, and generates an image by applying an inverse FFT, thereby removing noise from the images.
- The imaging apparatus according to claim 15, wherein the noise removal unit applies an FFT to the plurality of images captured by the camera, collects the component with the maximum amplitude for each frequency component, and generates an image by applying an inverse FFT, thereby removing noise from the images.
- An imaging method including the steps of: detecting camera motion; calculating a degree of distribution of the camera motion trajectory based on the camera motion detection result and comparing it with a predetermined threshold; and controlling the start and end of exposure based on the comparison result.
- A program that causes a computer to function as: a camera motion detection unit that detects camera motion; a comparison unit that calculates a degree of distribution of the camera motion trajectory based on the camera motion detection result and compares the degree of distribution with a predetermined threshold; and an exposure control unit that controls the start and end of exposure based on the comparison result of the comparison unit.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2017552357A JPWO2017090458A1 (ja) | 2015-11-26 | 2016-11-11 | 撮影装置、および撮影方法、並びにプログラム |
US15/773,695 US10542217B2 (en) | 2015-11-26 | 2016-11-11 | Shooting device and shooting method to suppress blur in an image |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2015230909 | 2015-11-26 | ||
JP2015-230909 | 2015-11-26 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2017090458A1 true WO2017090458A1 (ja) | 2017-06-01 |
Family
ID=58763163
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2016/083474 WO2017090458A1 (ja) | 2015-11-26 | 2016-11-11 | 撮影装置、および撮影方法、並びにプログラム |
Country Status (3)
Country | Link |
---|---|
US (1) | US10542217B2 (ja) |
JP (1) | JPWO2017090458A1 (ja) |
WO (1) | WO2017090458A1 (ja) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2019082832A1 (ja) * | 2017-10-27 | 2019-05-02 | キヤノン株式会社 | 撮像装置、撮像装置の制御方法、および、プログラム |
WO2019082831A1 (ja) * | 2017-10-27 | 2019-05-02 | キヤノン株式会社 | 撮像装置、撮像装置の制御方法、および、プログラム |
JP2019083518A (ja) * | 2017-10-27 | 2019-05-30 | キヤノン株式会社 | 撮像装置、撮像装置の制御方法、および、プログラム |
JP2019083517A (ja) * | 2017-10-27 | 2019-05-30 | キヤノン株式会社 | 撮像装置、撮像装置の制御方法、および、プログラム |
JP2019083362A (ja) * | 2017-10-27 | 2019-05-30 | キヤノン株式会社 | 撮像装置、撮像装置の方法、および、プログラム |
EP3627823A4 (en) * | 2017-06-13 | 2020-04-22 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | IMAGE SELECTION PROCESS AND RELATED PRODUCT |
JP2021004729A (ja) * | 2019-06-25 | 2021-01-14 | 株式会社小野測器 | 状態計測装置 |
Families Citing this family (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9922398B1 (en) | 2016-06-30 | 2018-03-20 | Gopro, Inc. | Systems and methods for generating stabilized visual content using spherical visual content |
JP6469324B1 (ja) * | 2017-04-27 | 2019-02-13 | 三菱電機株式会社 | 画像読み取り装置 |
US10587807B2 (en) * | 2018-05-18 | 2020-03-10 | Gopro, Inc. | Systems and methods for stabilizing videos |
US10750092B2 (en) | 2018-09-19 | 2020-08-18 | Gopro, Inc. | Systems and methods for stabilizing videos |
JP7197785B2 (ja) * | 2019-01-28 | 2022-12-28 | 日本電信電話株式会社 | 映像処理装置、映像処理方法、及び映像処理プログラム |
KR102518373B1 (ko) * | 2019-02-12 | 2023-04-06 | 삼성전자주식회사 | 이미지 센서 및 이를 포함하는 전자 기기 |
CN110223239B (zh) * | 2019-04-30 | 2023-04-14 | 努比亚技术有限公司 | 一种图像处理方法、终端及计算机可读存储介质 |
EP3889880B1 (en) | 2020-03-30 | 2022-03-23 | Axis AB | Wearable camera noise reduction |
JP2022138647A (ja) * | 2021-03-10 | 2022-09-26 | 株式会社ソニー・インタラクティブエンタテインメント | 画像処理装置、情報処理システム、および画像取得方法 |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2004248021A (ja) * | 2003-02-14 | 2004-09-02 | Minolta Co Ltd | 撮像装置並びに画像処理装置及び方法 |
JP2006333061A (ja) * | 2005-05-26 | 2006-12-07 | Sanyo Electric Co Ltd | 手ぶれ補正装置 |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2884262B2 (ja) | 1990-11-01 | 1999-04-19 | 大成建設株式会社 | 搬送籠の水平引込み、鉛直保持構造 |
US8045009B2 (en) * | 2004-05-10 | 2011-10-25 | Hewlett-Packard Development Company, L.P. | Image-exposure systems and methods using detecting motion of a camera to terminate exposure |
US8482618B2 (en) * | 2005-02-22 | 2013-07-09 | Hewlett-Packard Development Company, L.P. | Reduction of motion-induced blur in images |
JP2007081487A (ja) | 2005-09-09 | 2007-03-29 | Sharp Corp | 撮像装置および電子情報機器 |
-
2016
- 2016-11-11 WO PCT/JP2016/083474 patent/WO2017090458A1/ja active Application Filing
- 2016-11-11 US US15/773,695 patent/US10542217B2/en not_active Expired - Fee Related
- 2016-11-11 JP JP2017552357A patent/JPWO2017090458A1/ja active Pending
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2004248021A (ja) * | 2003-02-14 | 2004-09-02 | Minolta Co Ltd | 撮像装置並びに画像処理装置及び方法 |
JP2006333061A (ja) * | 2005-05-26 | 2006-12-07 | Sanyo Electric Co Ltd | 手ぶれ補正装置 |
Non-Patent Citations (1)
Title |
---|
RYUICHI OGINO: "Camera-shake locus detection and visualization", ITE TECHNICAL REPORT, vol. 31, no. 14, 24 February 2007 (2007-02-24), pages 22 * |
Cited By (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP3627823A4 (en) * | 2017-06-13 | 2020-04-22 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | IMAGE SELECTION PROCESS AND RELATED PRODUCT |
US11363196B2 (en) | 2017-06-13 | 2022-06-14 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Image selection method and related product |
US11375132B2 (en) | 2017-10-27 | 2022-06-28 | Canon Kabushiki Kaisha | Imaging apparatus, method of controlling the imaging apparatus, and program |
JP2019083517A (ja) * | 2017-10-27 | 2019-05-30 | キヤノン株式会社 | 撮像装置、撮像装置の制御方法、および、プログラム |
JP2019083362A (ja) * | 2017-10-27 | 2019-05-30 | キヤノン株式会社 | 撮像装置、撮像装置の方法、および、プログラム |
JP2019083518A (ja) * | 2017-10-27 | 2019-05-30 | キヤノン株式会社 | 撮像装置、撮像装置の制御方法、および、プログラム |
US11258948B2 (en) | 2017-10-27 | 2022-02-22 | Canon Kabushiki Kaisha | Image pickup apparatus, control method of image pickup apparatus, and storage medium |
WO2019082831A1 (ja) * | 2017-10-27 | 2019-05-02 | キヤノン株式会社 | 撮像装置、撮像装置の制御方法、および、プログラム |
WO2019082832A1 (ja) * | 2017-10-27 | 2019-05-02 | キヤノン株式会社 | 撮像装置、撮像装置の制御方法、および、プログラム |
JP7123544B2 (ja) | 2017-10-27 | 2022-08-23 | キヤノン株式会社 | 撮像装置、撮像装置の方法、および、プログラム |
JP7286294B2 (ja) | 2017-10-27 | 2023-06-05 | キヤノン株式会社 | 撮像装置、撮像装置の制御方法、および、プログラム |
JP7321691B2 (ja) | 2017-10-27 | 2023-08-07 | キヤノン株式会社 | 撮像装置、撮像装置の制御方法、および、プログラム |
JP2021004729A (ja) * | 2019-06-25 | 2021-01-14 | 株式会社小野測器 | 状態計測装置 |
JP7182520B2 (ja) | 2019-06-25 | 2022-12-02 | 株式会社小野測器 | 状態計測装置 |
Also Published As
Publication number | Publication date |
---|---|
US20180324358A1 (en) | 2018-11-08 |
US10542217B2 (en) | 2020-01-21 |
JPWO2017090458A1 (ja) | 2018-09-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2017090458A1 (ja) | 撮影装置、および撮影方法、並びにプログラム | |
US10007990B2 (en) | Generating composite images using estimated blur kernel size | |
Hee Park et al. | Gyro-based multi-image deconvolution for removing handshake blur | |
US8532420B2 (en) | Image processing apparatus, image processing method and storage medium storing image processing program | |
US8208746B2 (en) | Adaptive PSF estimation technique using a sharp preview and a blurred image | |
KR101633377B1 (ko) | 다중 노출에 의한 프레임 처리 방법 및 장치 | |
US9202263B2 (en) | System and method for spatio video image enhancement | |
WO2019071613A1 (zh) | 一种图像处理方法及装置 | |
JP5499050B2 (ja) | 画像処理装置、撮像装置、及び画像処理方法 | |
US8983221B2 (en) | Image processing apparatus, imaging apparatus, and image processing method | |
US8698905B2 (en) | Estimation of point spread functions from motion-blurred images | |
Hu et al. | Image deblurring using smartphone inertial sensors | |
WO2013025220A1 (en) | Image sharpness classification system | |
KR102106537B1 (ko) | 하이 다이나믹 레인지 영상 생성 방법 및, 그에 따른 장치, 그에 따른 시스템 | |
CN113395454B (zh) | 图像拍摄的防抖方法与装置、终端及可读存储介质 | |
US10003745B2 (en) | Imaging apparatus, imaging method and program, and reproduction apparatus | |
JP6282133B2 (ja) | 撮像装置、その制御方法、および制御プログラム | |
JP7263149B2 (ja) | 画像処理装置、画像処理方法、およびプログラム | |
Guthier et al. | A real-time system for capturing hdr videos | |
Rajakaruna et al. | Image deblurring for navigation systems of vision impaired people using sensor fusion data | |
JP6739955B2 (ja) | 画像処理装置、画像処理方法、画像処理プログラム、および記録媒体 | |
KR20140009706A (ko) | 이미지 데이터에 포함된 모션 블러 영역을 찾고 그 모션 블러 영역을 처리하는 이미지 프로세싱 장치 및 그 장치를 이용한 이미지 프로세싱 방법 | |
JP2019176261A (ja) | 画像処理装置 | |
JP7146088B2 (ja) | 画像処理装置及び画像処理方法 | |
JP2008117119A (ja) | 動きベクトル検出方法、動きベクトル除去方法及び動きベクトル検出装置 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 16868406 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2017552357 Country of ref document: JP Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 15773695 Country of ref document: US |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 16868406 Country of ref document: EP Kind code of ref document: A1 |