WO2009154294A1 - Movement amount extraction device and program, image correction device and program, and recording medium - Google Patents
Movement amount extraction device and program, image correction device and program, and recording medium
- Publication number
- WO2009154294A1 PCT/JP2009/061329 JP2009061329W
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- frame
- frame image
- error function
- unit
- Prior art date
Links
- 238000003702 image correction Methods 0.000 title claims abstract description 27
- 238000000605 extraction Methods 0.000 title claims description 28
- 230000008859 change Effects 0.000 claims abstract description 48
- 239000000284 extract Substances 0.000 claims abstract description 10
- 230000009466 transformation Effects 0.000 claims description 49
- 238000000034 method Methods 0.000 claims description 41
- 238000006243 chemical reaction Methods 0.000 claims description 34
- 238000012545 processing Methods 0.000 claims description 33
- 238000012937 correction Methods 0.000 claims description 32
- 238000004364 calculation method Methods 0.000 claims description 23
- 230000008569 process Effects 0.000 claims description 10
- 230000015572 biosynthetic process Effects 0.000 claims description 5
- 238000003786 synthesis reaction Methods 0.000 claims description 5
- 239000000203 mixture Substances 0.000 claims 1
- 238000004422 calculation algorithm Methods 0.000 abstract description 4
- 230000006870 function Effects 0.000 description 39
- 230000014509 gene expression Effects 0.000 description 12
- 239000011159 matrix material Substances 0.000 description 6
- 238000010586 diagram Methods 0.000 description 5
- 239000002131 composite material Substances 0.000 description 4
- 230000006641 stabilisation Effects 0.000 description 4
- 238000011105 stabilization Methods 0.000 description 4
- 230000010365 information processing Effects 0.000 description 2
- 230000008901 benefit Effects 0.000 description 1
- 238000009795 derivation Methods 0.000 description 1
- 238000013461 design Methods 0.000 description 1
- 238000011161 development Methods 0.000 description 1
- 238000005516 engineering process Methods 0.000 description 1
- 230000010354 integration Effects 0.000 description 1
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/68—Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/14—Picture signal circuitry for video frequency region
- H04N5/144—Movement detection
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/76—Television signal recording
- H04N5/765—Interface circuits between an apparatus for recording and another apparatus
- H04N5/77—Interface circuits between an apparatus for recording and another apparatus between a recording apparatus and a television camera
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
Definitions
- the present invention relates to a movement amount extraction device and program, an image correction device and program, and a recording medium.
- Video cameras have become popular as they have grown smaller and less expensive, and they are now used in many places.
- Small video cameras have recently also been mounted on remotely operated rescue machines, such as robots that search for victims in places humans cannot enter and unmanned helicopters that survey disaster situations from the sky, in order to gather information quickly in the event of a disaster.
- GPUs (Graphics Processing Units) are now mounted even on general PCs and can perform high-speed computation by parallel processing.
- The processing performance of a GPU, particularly its floating-point arithmetic performance, may be ten times or more that of a CPU.
- As a blur correction technique using a GPU by the inventors of the present application, "Stabilization of video images using a GPU" is disclosed (see Non-Patent Document 1).
- The technique described in Non-Patent Document 1 estimates global motion using an affine transformation, searching with the BFGS (quasi-Newton) algorithm, and corrects the video based on the estimated global motion.
- However, the technique of Non-Patent Document 1 requires a long time to estimate the global motion, that is, the amount of change, because the convergence time is long and the number of BFGS iterations is large. For this reason, it can perform blur correction processing on only 4 to 5 of the 30 frame images per second, and thus could not correct moving-image blur substantially in real time.
- the present invention has been proposed to solve the above-described problems.
- An image change amount extraction apparatus includes an image conversion unit that performs image conversion processing on a first frame image, of a plurality of frame images constituting a moving image, using affine transformation parameters including a parallel movement amount and a rotational movement amount, to generate a first converted frame image.
- Each time predetermined values are set for the parallel movement amount and the rotational movement amount and a first converted frame image is generated, an error function derivation unit calculates the square value of the difference between the pixel values at each same coordinate of the first converted frame image generated by the image conversion unit and a second frame image, different from the first frame image, among the plurality of frame images constituting the moving image, and derives an error function by integrating the square values over at least all the same coordinates where the first converted frame image and the second frame image overlap.
- A change amount extraction unit searches, using the BFGS method, for the case where the value of the error function becomes the minimum, and extracts the affine transformation parameters at the minimum as the amount of change of the first frame image with respect to the second frame image. Therefore, the amount of change of the first frame image with respect to the second frame image can be extracted in real time with a very short search time.
- An image correction apparatus includes the image change amount extraction device and a correction unit that, based on the first frame image and the change amount extracted by the image change amount extraction device, performs correction processing on the first frame image so as to reduce the deviation between the first frame image and the second frame image.
- Another image correction apparatus includes the image change amount extraction device and a correction unit that, based on the second frame image and the change amount extracted by the image change amount extraction device, performs correction processing on the second frame image so as to reduce the shift between the first frame image and the second frame image.
- Each of the image correction devices can correct an image in accordance with the change amount in real time using the change amount of the image extracted in real time.
- The image change amount extraction apparatus and program derive an error function by integrating the square values over at least all the same coordinates where the first converted frame image and the second frame image overlap, search for the case where the value of the error function becomes the minimum using the BFGS method, and extract the affine transformation parameters at the minimum as the amount of change of the first frame image with respect to the second frame image.
- As a result, the search time until the error function reaches its minimum value can be shortened, and the amount of change of the images constituting a moving image can be extracted in real time.
- the image correction apparatus and program according to one embodiment of the present invention can perform real-time image correction according to the amount of change by extracting the amount of change in the image constituting the moving image in real time.
- FIG. 1 is a block diagram showing the configuration of an image correction apparatus according to an embodiment of the present invention. FIG. 2 is a diagram for explaining estimation of global motion. FIGS. 3A and 3B are diagrams showing the movement amount with respect to the number of frames before and after correction.
- FIG. 1 is a block diagram showing a configuration of an image correction apparatus according to an embodiment of the present invention.
- the image correction apparatus includes a camera 10 that captures an image of a subject and generates an image, and an image processing apparatus 20 that performs image processing so as to eliminate blurring of the image generated by the camera 10.
- The image processing apparatus 20 includes an input/output port 21 that exchanges signals with the camera 10, a CPU (Central Processing Unit) 22 that performs arithmetic processing, a hard disk drive 23 that stores images and other data, a ROM (Read Only Memory) 24 that stores the control program for the CPU 22, a RAM (Random Access Memory) 25 used as a data work area, and a GPU (Graphics Processing Unit) 26 that performs predetermined arithmetic processing for image processing.
- When the CPU 22 receives a moving image from the camera 10 via the input/output port 21, it sequentially transfers the frames to the GPU 26 and causes the GPU 26 to perform predetermined arithmetic processing.
- From each frame image constituting the moving image, the amount of movement of the camera 10 is obtained for each frame (estimation of global motion). In the present embodiment, it is assumed that the movement of the camera 10 with the vibration removed is gentle and smooth. The CPU 22 then performs vibration correction on each frame image based on the obtained movement amount of the camera 10.
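As a concrete illustration of the deformation step described above, the image conversion can be sketched as follows. This is a minimal nearest-neighbour warp in Python/NumPy; the patent does not specify the interpolation scheme or sign conventions, so those details are assumptions of this sketch.

```python
import numpy as np

def affine_warp(frame, theta, b1, b2):
    """Warp `frame` by rotation theta (radians) and translation (b1, b2).

    Minimal nearest-neighbour sketch of the image conversion step;
    NaN marks the undefined region where the warped image has no data.
    """
    h, w = frame.shape
    c, s = np.cos(theta), np.sin(theta)
    ys, xs = np.mgrid[0:h, 0:w]
    # Inverse mapping: for each output pixel, find the source pixel.
    src_x = c * (xs - b1) + s * (ys - b2)
    src_y = -s * (xs - b1) + c * (ys - b2)
    sx = np.round(src_x).astype(int)
    sy = np.round(src_y).astype(int)
    valid = (sx >= 0) & (sx < w) & (sy >= 0) & (sy < h)
    out = np.full(frame.shape, np.nan)
    out[valid] = frame[sy[valid], sx[valid]]
    return out
```

With theta = 0 and b1 = b2 = 0 the warp is the identity; a pure translation shifts the content and leaves a NaN band in the undefined area.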
- Equation (2) represents, with affine transformation parameters, how the camera 10 has moved from an arbitrary frame.
- Equation (3) is the value obtained by summing the squares of the differences between the luminance values of two frame images.
- Since Equation (3) is the sum of the squares of the inter-frame differences, it is computed from the difference image. Even when viewed by a human, such a difference image shows nothing recognizable of what is captured.
- Originally, global motion is the overall movement that humans can see. Therefore, as described in Non-Patent Document 1, it is most natural to define the error function as the integrated value of the differences between pixel values when the images are simply superimposed.
- In contrast, Expression (3) of the present embodiment is a simple squared expression; strictly speaking, it does not always give the same solution as the error function of Non-Patent Document 1 and might be considered a special case.
- Nevertheless, it was found that vibration correction can be performed without problems using the solution of Equation (3); that is, although the definition of the error function differs between Non-Patent Document 1 and this embodiment, the same result is obtained. Moreover, because Formula (3) is a simple squared formula, there is no square-root calculation, so the computation is faster; the differences become larger, so convergence to the minimum value is faster; and failures of global motion correction are reduced. The CPU 22 and the GPU 26 of the image processing apparatus 20 illustrated in FIG. 1 therefore perform the following calculation.
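The squared-difference error just described can be sketched directly in Python/NumPy. Using NaN to mark undefined pixels is an assumption of this sketch, not the patent's notation.

```python
import numpy as np

def ssd_error(warped, reference):
    """Equation (3)-style error: the sum of squared luminance
    differences over the coordinates where both images are defined.
    No square root is taken, which is the point of the speed-up."""
    valid = ~np.isnan(warped) & ~np.isnan(reference)
    d = warped[valid] - reference[valid]
    return float(np.sum(d * d))
```

Because each pixel's squared difference is independent, the same computation maps naturally onto the per-coordinate parallel evaluation the GPU performs.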
- FIG. 2 is a diagram for explaining estimation of global motion.
- The shake amount of the camera corresponds to the image shift amount of the frame image I n+1 with respect to the reference frame image I n (rotation angle θ and movement amounts b1, b2 in the x and y directions, respectively). Therefore, the CPU 22 shown in FIG. 1 stores a plurality of affine transformation parameters prepared in advance as candidates for the image movement amount of the frame image I n+1, and transmits the plurality of affine transformation parameters to the GPU 26 together with the frame image I n+1.
- the frame image In + 1 is preferably the latest frame image among the moving images generated by the camera 10.
- the CPU 22 causes the GPU 26 to calculate the error value E when each affine transformation parameter is used, and extracts the affine transformation parameter when the error value E is minimized as the movement amount of the camera 10.
- The CPU 22 may calculate sinθ and cosθ from θ and send b1, b2, sinθ, and cosθ to the GPU 26 as the affine transformation parameters.
- the GPU 26 when the GPU 26 receives the affine transformation parameter transmitted from the CPU 22, the GPU 26 performs a deformation process on the frame image In + 1 using the affine transformation parameter described above.
- The GPU 26 calculates the square of the difference between the pixel values (luminance values) at each same coordinate of the deformed frame image I n+1 and the frame image I n. Note that this calculation is performed on all coordinates (for example, at least all coordinates in the overlapping region of the frame images I n and I n+1 ). The GPU 26 computes the square values of the luminance differences in parallel and independently at each same coordinate in the overlapping region; because each coordinate can be calculated independently, the GPU 26 can perform parallel calculation processing and thus high-speed processing.
- the GPU 26 integrates the squares of the differences in luminance values at all coordinates in parallel, and obtains the integrated value as an error value.
- Alternatively, the GPU 26 may accumulate the squares of the luminance-value differences to some extent in parallel, and the CPU 22 may accumulate the squares of the remaining luminance-value differences sequentially and sum these accumulated values. The error value described above is calculated every time the affine transformation parameter is changed.
- the CPU 22 next selects the affine transformation parameter when the error value becomes the smallest among all the error values. Then, the selected affine transformation parameter is extracted as the movement between frames, that is, the movement amount of the camera.
- When referring to an area where the luminance value is not defined (undefined area: an area where the frame images I n and I n+1 do not overlap), the CPU 22 either excludes the pixel from the calculation of the error value or sets the luminance-value difference of the pixel to zero. The CPU 22 then corrects the error value E as follows, using the number of finally effective pixels Ω e out of all the pixels Ω.
- In other words, the CPU 22 calculates the error value by regarding the luminance-value difference of pixels in the undefined area as 0, and intentionally makes the error value larger. Note that the luminance-value difference is regarded as 0 as long as the coefficient is sufficiently smaller than 1, and is not limited to a case where it is less than 1/4.
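One plausible reading of that correction is to scale the error by the ratio of all pixels to effective pixels, so that candidate parameters which shrink the overlap are penalised. The patent's exact formula appears in an equation not reproduced in this text, so the form below is an assumption.

```python
def corrected_error(raw_error, n_total, n_effective):
    """Scale the raw error E by |Omega| / |Omega_e| so that candidate
    parameters which shrink the overlap get an intentionally larger
    error value. An assumed form of the patent's correction."""
    if n_effective == 0:
        return float('inf')  # no overlap at all: reject outright
    return raw_error * n_total / n_effective
```

When the whole frame overlaps (n_effective == n_total) the error is unchanged; halving the overlap doubles the penalised error.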
- The BFGS (quasi-Newton) algorithm from NUMERICAL RECIPES is used to search for the minimum value of the error function.
- The BFGS (Broyden, Fletcher, Goldfarb, Shanno) algorithm searches in the direction of the minimum using the function and its derivative, so the number of calculations is small and the convergence time is short. Since the BFGS method requires a derivative, Equation (3) is transformed into the following Equations (4) and (5) to obtain the derivative.
- The affine transformation parameters to be obtained are three (θ, b 1 , b 2 ), and the affine matrix T is expressed by Equation (16).
- The CPU 22 of the image processing apparatus 20 shown in FIG. 1 defines the error function of Expression (3) using the affine transformation matrix of Expression (16), and searches for the minimum value of this error function using the BFGS method, which is one of the quasi-Newton methods.
- Since the BFGS method requires derivatives, the CPU 22 searches for the minimum value of the error function of Expression (3) using the derivatives of Expressions (17) to (19) (including Expressions (20) to (23)), obtains the parameters (θ, b 1 , b 2 ) that give the minimum value, and extracts them as the amount of image movement, that is, the amount of camera shake.
- The weighting function appearing in the correction equation below is a Gaussian kernel.
- The CPU 22 of the image processing apparatus 20 shown in FIG. 1 performs the calculation of the following Equation (25) using the obtained conversion matrix, so that vibration correction that reduces the shift between frame images can be applied to the frame image to be processed.
- Normally n and m are consecutive natural numbers, but when performing vibration correction of a predetermined frame image with respect to a reference frame image, n and m need not be consecutive.
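The smoothing behind these correction equations can be sketched as a Gaussian-weighted average of the affine matrices of the ±k neighbouring frames. The exact weighting, normalisation, and σ are not reproduced in this text, so they are assumptions of this sketch.

```python
import numpy as np

def correction_matrix(transforms, n, k, sigma=1.0):
    """Gaussian-kernel-weighted average of the affine matrices of
    frames n-k .. n+k: a sketch of the correction transform S.
    Frames near the sequence boundary simply shrink the window."""
    idx = np.arange(max(0, n - k), min(len(transforms), n + k + 1))
    w = np.exp(-((idx - n) ** 2) / (2.0 * sigma ** 2))
    w /= w.sum()
    return sum(wi * transforms[i] for wi, i in zip(w, idx))
```

Averaging neighbouring transforms smooths the camera trajectory: if every neighbouring transform is the identity, the correction is the identity as well.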
- When the present inventors counted the number of times the BFGS method was applied per frame, the following results were obtained.
- With the conventional error function, the average is 42.87 iterations when the GPU calculates and 11.43 when the CPU calculates.
- With the error function of Expression (3), the average is 7.707 iterations when the GPU calculates and 6.481 when the CPU calculates. That is, using the error function of Expression (3) reduces the number of calculations and enables calculation in a short time.
- FIGS. 3A and 3B are diagrams showing the movement amount with respect to the number of frames before correction and after correction by the image correction apparatus: FIG. 3A shows the movement amount in the X-axis direction, and FIG. 3B shows the movement amount in the Y-axis direction. As shown in the figures, the movement amount became very smooth after the correction.
- the CPU 22 of the image processing apparatus 20 may sequentially synthesize frame images in which one of the rotational movement amount and the parallel movement amount is corrected to generate a composite image including a plurality of frames.
- FIG. 4 is a diagram showing a composite image generated by combining the first to third frame images.
- the CPU 22 sequentially superimposes the corrected latest frame images so as to be horizontal with respect to the center position.
- a composite image larger than the frame image is generated, which is composed of a new frame image near the center and an old frame image near the edge.
- the GPU 26 sets a determination flag as to whether an image exists at each coordinate, and calculates the error function E only at the coordinate where the image exists.
- the estimation error of the moving amount of the frame image is reduced, and global motion estimation is possible even if there is almost no overlapping portion between the latest frame image and the previous frame image.
- The GPU 26 may also sequentially discard frame images that are more than a predetermined number of frames older than the latest frame image.
- The GPU 26 may calculate the error function E using the synthesized image, in which the previous frame images I n , I n-1 , I n-2 , ... are combined, and the next latest frame image I n+1 . Accordingly, even when the shake amount of the camera 10 is large, the overlapping range between the combined frame images and the next latest frame image I n+1 is larger, so the shake amount of the camera is reliably detected.
- The image correction apparatus searches for the minimum value of the error function of Equation (3) by applying the BFGS method, so the affine transformation parameters at which the error function reaches its minimum value are obtained in a very short time compared with the conventional technique, and blurring of a moving image can be corrected in real time using those affine transformation parameters.
- the minimum value search using the BFGS method the minimum value is searched by repeating a plurality of calculations, so even a slight difference in the calculation speed of individual calculation formulas greatly affects the final calculation speed.
- the image correction apparatus according to the present embodiment performs calculation for each pixel of the image, this difference is significant.
- the image correction apparatus according to the present embodiment can search for the minimum value of the error function at high speed without using the square root calculation by devising the error function. It has also been found that by using an error function, the number of iterations of the minimum value search using the BFGS method itself can be reduced.
- the image correction apparatus can generate a combined image having a size larger than that of the frame image by sequentially combining the corrected frame images. Then, the image correction device extracts the movement amount of the latest frame image with respect to the large-sized composite image, so that even when the camera 10 has a large amount of shake, the amount of shake is reliably extracted and the shake is corrected. can do.
- The image correction apparatus can also correct subject shake of a moving image in real time using the above-described Equation (3), not only when the camera 10 shakes but also when the subject shakes.
- In the first embodiment, three-variable affine transformation parameters (θ, b 1 , b 2 ) are used, but in the second embodiment, four-variable affine transformation parameters (θ, b 1 , b 2 , z) are used. Note that z is a parameter in the zoom direction and indicates the magnification of the image.
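The four-variable parameter set can be written as a similarity transform in homogeneous coordinates. The exact matrix layout of the patent's equations is not shown in this text, so the form below is an assumption; z = 1 recovers the three-variable case.

```python
import numpy as np

def affine_matrix(theta, b1, b2, z=1.0):
    """3x3 homogeneous matrix combining zoom z, rotation theta,
    and translation (b1, b2)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[z * c, -z * s, b1],
                     [z * s,  z * c, b2],
                     [0.0,    0.0,  1.0]])
```

With theta = 0, b1 = b2 = 0, and z = 2, a point (1, 0) maps to (2, 0): pure magnification about the origin.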
- the error function is expressed as the following equation (26).
- In Equation (26), Ω is the set of all coordinate values on the screen plane.
- I (x) is the luminance value of the pixel x.
- The CPU 22 of the image processing apparatus 20 shown in FIG. 1 applies the BFGS method to the error function using the four-variable affine transformation parameters described above, using Equations (28) to (31) (and Equations (32) to (38)). Thereby, the CPU 22 searches for the minimum error value in a short time and extracts the affine transformation parameters at that time as the movement between frames, that is, the movement amount of the camera. The CPU 22 can then correct each frame image accordingly.
- Since the image correction apparatus can extract the movement amount using affine transformation parameters that include the zoom-direction parameter, even when the camera 10 vibrates in such a way that the size of the subject appearing in the image changes, the moving image can be corrected in real time so as to suppress the vibration.
- Needless to say, the present invention is not limited to the above-described embodiments and can also be applied to designs modified within the scope of the claims.
- The frame image to be converted need not be adjacent to the frame image I n ; the movement of a predetermined frame image several frames away from the reference frame image can also be represented by an affine transformation parameter.
- the image processing apparatus 20 corrects the moving image generated by the camera 10 in real time, but can also correct the moving image stored in advance in the hard disk drive 23 in the same manner.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Studio Devices (AREA)
- Image Analysis (AREA)
Abstract
Description
Fujisawa and two others, "Stabilization of Video Images Using a GPU", Information Processing Society of Japan, Transactions of IPSJ, Vol. 49, No. 2, pp. 1-8
In order to stabilize video, it is necessary to know the global motion. If the motion between adjacent frames can be obtained for consecutive frames, it is possible to know how the camera 10 has moved.
[Vibration correction]
In order to smooth the motion of the screen, it is necessary to obtain a transformation matrix for correction based on the estimated global motion. The transformation matrix S from the frame before correction to the frame after correction is expressed by the following equation (24), using the affine transformations of up to k frames before and after the frame to be corrected.
Next, a second embodiment of the present invention will be described. Parts identical to those of the first embodiment are given the same reference numerals, and duplicate description is omitted.
20 Image processing apparatus
22 CPU
26 GPU
Claims (15)
- An image change amount extraction device comprising: an image conversion unit that performs image conversion processing on a first frame image, among a plurality of frame images constituting a moving image, using affine transformation parameters including a parallel movement amount and a rotational movement amount, to generate a first converted frame image; an error function derivation unit that, each time predetermined values are set by the image conversion unit for the parallel movement amount and the rotational movement amount and the first converted frame image is generated, calculates the square value of the difference between the pixel values at each same coordinate of the first converted frame image generated by the image conversion unit and a second frame image, different from the first frame image, among the plurality of frame images constituting the moving image, and derives an error function by integrating the square values over at least all the same coordinates where the first converted frame image and the second frame image overlap; and a change amount extraction unit that searches, using the BFGS method, for the case where the value of the error function derived by the error function derivation unit becomes the minimum value, and extracts the affine transformation parameters when the value of the error function becomes the minimum value as the amount of change of the first frame image with respect to the second frame image.
- The image change amount extraction device according to claim 1, wherein the image conversion unit performs the image conversion processing on the first frame image using affine transformation parameters further including a magnification of the image.
- The image change amount extraction device according to claim 1, wherein the error function derivation unit calculates the square value of the difference between the pixel values at each same coordinate of the first converted frame image and a second frame image adjacent to the first frame image.
- The image change amount extraction device according to claim 1, wherein the error function derivation unit calculates the square values of the differences between the pixel values independently and in parallel for each same coordinate of the first converted frame image and the second frame image.
- The image change amount extraction device according to claim 1, wherein the image conversion unit sequentially generates first converted frame images by performing the image conversion processing on the latest first frame image among the plurality of frame images constituting the moving image, and the error function derivation unit calculates the square value of the difference between the pixel values at each same coordinate of the first converted frame image sequentially generated by the image conversion unit and a second frame image that is the frame immediately preceding the first frame image.
- An image correction device comprising: the image change amount extraction device according to claim 1; and a correction unit that performs correction processing on the first frame image, based on the first frame image and the change amount extracted by the image change amount extraction device, so that the deviation between the first frame image and the second frame image is reduced.
- The image correction device according to claim 8, further comprising an image synthesis unit that synthesizes the first frame image corrected by the correction unit with the second frame image.
- An image correction device comprising: the image change amount extraction device according to claim 8; a correction unit that performs correction processing on the first frame image, based on the first frame image and the change amount extracted by the image change amount extraction device, so that the deviation between the first frame image and the second frame image is reduced; and an image synthesis unit that synthesizes the first frame image corrected by the correction unit with the second frame image, wherein the image change amount extraction device extracts, for the next first frame image, the change amount of the next first frame image using the image synthesized by the image synthesis unit as the second frame image.
- An image correction device comprising: the image change amount extraction device according to claim 1; and a correction unit that performs correction processing on the second frame image, based on the second frame image and the change amount extracted by the image change amount extraction device, so that the deviation between the first frame image and the second frame image is reduced.
- The image correction device according to claim 11, further comprising an image synthesis unit that synthesizes the second frame image corrected by the correction unit with the first frame image.
- An image correction program for causing a computer to function as each unit of the image correction device according to claim 8 or claim 11.
- An image change amount extraction program for causing a computer to function as: image conversion means for performing image conversion processing on a first frame image, among a plurality of frame images constituting a moving image, using affine transformation parameters including a parallel movement amount and a rotational movement amount, to generate a first converted frame image; error function derivation means for, each time predetermined values are set by the image conversion means for the parallel movement amount and the rotational movement amount and the first converted frame image is generated, calculating the square value of the difference between the pixel values at each same coordinate of the first converted frame image generated by the image conversion means and a second frame image, different from the first frame image, among the plurality of frame images constituting the moving image, and deriving an error function by integrating the square values over at least all the same coordinates where the first converted frame image and the second frame image overlap; and change amount extraction means for searching, using the BFGS method, for the case where the value of the error function derived by the error function derivation means becomes the minimum value, and extracting the affine transformation parameters when the value of the error function becomes the minimum value as the amount of change of the first frame image with respect to the second frame image.
- A recording medium on which is recorded an image change amount extraction program for causing a computer to function as: an image conversion unit that performs image conversion processing on a first frame image, among a plurality of frame images constituting a moving image, using affine transformation parameters including a parallel movement amount and a rotational movement amount, to generate a first converted frame image; an error function derivation unit that, each time predetermined values are set by the image conversion unit for the parallel movement amount and the rotational movement amount and the first converted frame image is generated, calculates the square value of the difference between the pixel values at each same coordinate of the first converted frame image generated by the image conversion unit and a second frame image, different from the first frame image, among the plurality of frame images constituting the moving image, and derives an error function by integrating the square values over at least all the same coordinates where the first converted frame image and the second frame image overlap; and a change amount extraction unit that searches, using the BFGS method, for the case where the value of the error function derived by the error function derivation unit becomes the minimum value, and extracts the affine transformation parameters when the value of the error function becomes the minimum value as the amount of change of the first frame image with respect to the second frame image.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2010518003A JP4771186B2 (ja) | 2008-06-20 | 2009-06-22 | 移動量抽出装置及びプログラム、画像補正装置及びプログラム並びに記録媒体 |
US12/999,828 US20110135206A1 (en) | 2008-06-20 | 2009-06-22 | Motion Extraction Device and Program, Image Correction Device and Program, and Recording Medium |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2008-162477 | 2008-06-20 | ||
JP2008162477 | 2008-06-20 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2009154294A1 true WO2009154294A1 (ja) | 2009-12-23 |
Family
ID=41434205
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2009/061329 WO2009154294A1 (ja) | 2008-06-20 | 2009-06-22 | 移動量抽出装置及びプログラム、画像補正装置及びプログラム並びに記録媒体 |
Country Status (3)
Country | Link |
---|---|
US (1) | US20110135206A1 (ja) |
JP (1) | JP4771186B2 (ja) |
WO (1) | WO2009154294A1 (ja) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8620100B2 (en) | 2009-02-13 | 2013-12-31 | National University Corporation Shizuoka University | Motion blur device, method and program |
JP6423566B1 (ja) * | 2018-06-21 | 2018-11-14 | 株式会社 ディー・エヌ・エー | 画像処理装置、画像処理プログラム、及び、画像処理方法 |
Families Citing this family (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9886552B2 (en) | 2011-08-12 | 2018-02-06 | Help Lighting, Inc. | System and method for image registration of multiple video streams |
JP5412692B2 (ja) * | 2011-10-04 | 2014-02-12 | 株式会社モルフォ | 画像処理装置、画像処理方法、画像処理プログラム及び記録媒体 |
US9020203B2 (en) | 2012-05-21 | 2015-04-28 | Vipaar, Llc | System and method for managing spatiotemporal uncertainty |
CN103020711A (zh) * | 2012-12-25 | 2013-04-03 | 中国科学院深圳先进技术研究院 | 分类器训练方法及其系统 |
US9940750B2 (en) | 2013-06-27 | 2018-04-10 | Help Lighting, Inc. | System and method for role negotiation in multi-reality environments |
WO2016203282A1 (en) | 2015-06-18 | 2016-12-22 | The Nielsen Company (Us), Llc | Methods and apparatus to capture photographs using mobile devices |
KR20180057564A (ko) * | 2016-11-22 | 2018-05-30 | 한국전자통신연구원 | 영상 부호화/복호화 방법, 장치 및 비트스트림을 저장한 기록 매체 |
US11361407B2 (en) | 2017-04-09 | 2022-06-14 | Indiana University Research And Technology Corporation | Motion correction systems and methods for improving medical image data |
CN109191489B (zh) * | 2018-08-16 | 2022-05-20 | 株洲斯凯航空科技有限公司 | 一种飞行器着陆标志的检测跟踪方法与系统 |
EP3933760A4 (en) * | 2019-06-07 | 2022-12-07 | Mayekawa Mfg. Co., Ltd. | IMAGE PROCESSING DEVICE, IMAGE PROCESSING PROGRAM AND IMAGE PROCESSING METHOD |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4344849B2 (ja) * | 2004-05-21 | 2009-10-14 | Tokyo Institute of Technology | Optical phase distribution measurement method |
2009
- 2009-06-22 WO PCT/JP2009/061329 patent/WO2009154294A1/ja active Application Filing
- 2009-06-22 US US12/999,828 patent/US20110135206A1/en not_active Abandoned
- 2009-06-22 JP JP2010518003A patent/JP4771186B2/ja not_active Expired - Fee Related
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2006520042A (ja) * | 2003-03-07 | 2006-08-31 | Qinetiq Limited | Scanning apparatus and method |
JP2005309558A (ja) * | 2004-04-19 | 2005-11-04 | Sony Corp | Image processing method and device, and program |
JP2007035020A (ja) * | 2005-06-22 | 2007-02-08 | Konica Minolta Medical & Graphic Inc | Region extraction device, region extraction method, and program |
JP2007041752A (ja) * | 2005-08-02 | 2007-02-15 | Casio Comput Co Ltd | Image processing device |
Non-Patent Citations (1)
Title |
---|
MAKOTO FUJISAWA ET AL.: "GPU o Mochiita Video Eizo no Anteika" [Stabilization of Video Images Using GPU], TRANSACTIONS OF INFORMATION PROCESSING SOCIETY OF JAPAN, vol. 49, no. 2, 15 February 2008 (2008-02-15), pages 1022 - 1030 * |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8620100B2 (en) | 2009-02-13 | 2013-12-31 | National University Corporation Shizuoka University | Motion blur device, method and program |
JP6423566B1 (ja) * | 2018-06-21 | 2018-11-14 | DeNA Co., Ltd. | Image processing device, image processing program, and image processing method |
JP2019219985A (ja) * | 2018-06-21 | 2019-12-26 | DeNA Co., Ltd. | Image processing device, image processing program, and image processing method |
Also Published As
Publication number | Publication date |
---|---|
JP4771186B2 (ja) | 2011-09-14 |
JPWO2009154294A1 (ja) | 2011-12-01 |
US20110135206A1 (en) | 2011-06-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP4771186B2 (ja) | Movement amount extraction device and program, image correction device and program, and recording medium | |
US10755428B2 (en) | Apparatuses and methods for machine vision system including creation of a point cloud model and/or three dimensional model | |
KR102006043B1 (ko) | Head pose tracking technique using a depth camera | |
US9609181B2 (en) | Image signal processor and method for synthesizing super-resolution images from non-linear distorted images | |
US8872817B2 (en) | Real-time three-dimensional real environment reconstruction apparatus and method | |
US11222409B2 (en) | Image/video deblurring using convolutional neural networks with applications to SFM/SLAM with blurred images/videos | |
JP4582174B2 (ja) | Tracking processing device, tracking processing method, and program
WO2012063468A1 (ja) | Image processing device, image processing method, and program
JP2009134509A (ja) | Mosaic image generation device and mosaic image generation method
JP7082713B2 (ja) | Rolling shutter correction in images/videos using convolutional neural networks, with applications to SfM/SLAM with rolling shutter images/videos
CN116917949A (zh) | Modeling an object from monocular camera output
US20230334636A1 (en) | Temporal filtering weight computation | |
CN117456124B (zh) | Dense SLAM method based on back-to-back binocular fisheye cameras
CN111712857A (zh) | Image processing method, device, gimbal, and storage medium
JP4017578B2 (ja) | Camera shake correction device, camera shake correction method, and recording medium storing a camera shake correction program
JP7164873B2 (ja) | Image processing device and program
US11954801B2 (en) | Concurrent human pose estimates for virtual representation | |
JP6154759B2 (ja) | Camera parameter estimation device, camera parameter estimation method, and camera parameter estimation program
JP2011242134A (ja) | Image processing device, image processing method, program, and electronic device
Florez et al. | Video stabilization taken with a snake robot | |
JP7074694B2 (ja) | Information terminal device and program
WO2023248732A1 (ja) | Signal processing device and signal processing method
JP4286301B2 (ja) | Camera shake correction device, camera shake correction method, and recording medium storing a camera shake correction program
WO2023095667A1 (ja) | Data processing device, data processing method, and program
JP2006172026A (ja) | Device, method, and program for restoring camera motion and three-dimensional information
Legal Events
Code | Title | Description |
---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 09766742; Country of ref document: EP; Kind code of ref document: A1 |
WWE | Wipo information: entry into national phase | Ref document number: 2010518003; Country of ref document: JP |
NENP | Non-entry into the national phase | Ref country code: DE |
WWE | Wipo information: entry into national phase | Ref document number: 12999828; Country of ref document: US |
122 | Ep: pct application non-entry in european phase | Ref document number: 09766742; Country of ref document: EP; Kind code of ref document: A1 |