JP4654817B2 - Multiple image composition method and multiple image composition device - Google Patents


Info

Publication number
JP4654817B2
Authority
JP
Japan
Prior art keywords
image
processing unit
value
processing
motion vector
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
JP2005217884A
Other languages
Japanese (ja)
Other versions
JP2007036741A (en)
Inventor
睦裕 山中
Current Assignee
Panasonic Corp
Panasonic Electric Works Co Ltd
Original Assignee
Panasonic Corp
Matsushita Electric Works Ltd
Priority date
Filing date
Publication date
Application filed by Panasonic Corp, Matsushita Electric Works Ltd
Priority to JP2005217884A
Publication of JP2007036741A
Application granted
Publication of JP4654817B2


Description

The present invention relates to a multiple-image composition method and a multiple-image composition apparatus that combine a plurality of images containing the same subject and captured in time series (continuously along the time axis).

Noting that applying a smoothing filter attenuates the random noise component contained in a signal, image composition methods have conventionally been provided that integrate the individual still images making up a moving picture along the time axis, improving the signal-to-noise ratio and thereby effectively raising the imaging sensitivity. An efficient way to implement this, and the one used in such methods, is to take the output image as one input of the composition process and a newly acquired image as the other input, and average the two inputs; mathematically, this processing is well known as infinite impulse response (IIR) filtering. A video signal processing apparatus has been provided that realizes both this high-sensitivity technique and a technique that obtains a telephoto effect by two-dimensionally enlarging the still images making up the moving picture, switching a shared storage means between the two (for example, Patent Document 1).
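As a minimal sketch of this background technique only (not of the patent's apparatus), the recursive averaging can be written as follows; the function name and the m:n weights are illustrative:

```python
import numpy as np

def iir_average(frames, m=1, n=3):
    """Recursive (IIR) frame averaging: out = (m*new + n*out_prev) / (m + n).

    Each output is fed back as one input of the next averaging step, so the
    composite accumulates scene content along the time axis while random
    noise is attenuated.
    """
    out = np.asarray(frames[0], dtype=float)
    for f in frames[1:]:
        out = (m * np.asarray(f, dtype=float) + n * out) / (m + n)
    return out
```

With m:n = 1:3, a new frame enters with weight 1/4 and all older content is scaled by 3/4 at every step, which is exactly why a moving subject leaves a decaying trail.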

There is also a conventional composition method that achieves an apparent expansion of dynamic range by continuously acquiring a plurality of images with different exposures and combining an image suited to reproducing dark areas with an image suited to reproducing bright areas; an imaging screen composition apparatus has been provided that performs image composition by combining this apparent dynamic-range expansion with a method of correcting inter-image motion caused by displacement of the imaging device (for example, Patent Document 2).

Further, a video signal processing apparatus has also been provided that carries out noise reduction on the same principle as the video signal processing apparatus disclosed in Patent Document 1, dividing the image into a plurality of blocks, calculating a motion vector for each block, and performing position correction (for example, Patent Document 3).
Japanese Patent No. 2781936 (paragraphs 0084-0085); Japanese Patent No. 3110797 (paragraphs 0019-0022); JP 2000-13643 A (paragraphs 0043-0044)

However, with the infinite impulse response filter described above, past information persists while decaying, so when image composition using this filter is performed as in the video signal processing apparatus disclosed in Patent Document 1, a subject that moves relative to the image frame produces a trailing afterimage. Beyond this particular example, whenever a new image is generated by combining a plurality of images obtained in time series, subject motion causes some kind of defect.

In the imaging screen composition apparatus disclosed in Patent Document 2, motion correction is performed with a single motion vector representing the whole screen, so the arrangement cannot be fully corrected when there are multiple subjects moving in different directions, or when deformations such as rotation or scaling are involved.

Furthermore, as in the video signal processing apparatus disclosed in Patent Document 3, if the points at which motion vectors are determined are increased ever more finely, a motion vector can be obtained for every pixel of the image, but this brings problems of increased processing time and reduced local accuracy; and although determining the motion vector of the part being processed improves the effect of motion correction, parts that fall into shadow because of subject movement and parts for which an erroneous motion vector is calculated break down.

The present invention has been made in view of the above problems, and its object is to provide a multiple-image composition method and a multiple-image composition apparatus that can reduce defects such as afterimages even when there is complex subject movement, and that can prevent image breakdown when an image change occurs that motion correction cannot handle.

To achieve the above object, the multiple-image composition method of claim 1 is a method that performs weighted averaging of pixel values among a plurality of time-series images containing the same subject, and is characterized as follows. A plurality of candidate motion vectors of the subject are calculated between a first image used for composition and a separate second image used for composition. For each processing unit, being a part of the image consisting of one or more pixels, the absolute values of the differences between the values of the pixels contained in the processing unit currently being processed in the second image and the values of the pixels contained in the processing units of the first image position-corrected by each candidate motion vector from the position corresponding to the processing unit of the second image are accumulated over a region containing that processing unit, and each accumulated value is obtained as an evaluation value. When every evaluation value exceeds a predetermined threshold, the values of the pixels contained in that processing unit of the second image are adopted as the composition result for the unit; when at least one evaluation value is at or below the threshold, weighted averaging is performed for the unit between the values of the pixels contained in that processing unit of the second image and the values of the pixels contained in the processing unit on the position-corrected first image that gave the smallest evaluation value.
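The per-unit rule of claim 1 might be sketched as follows. All names are illustrative assumptions; the evaluation is accumulated over the unit itself rather than a larger surrounding region, candidate shifts are assumed to stay inside the image, and the weighting ratio is a free parameter, since the claim fixes neither the unit geometry nor the weights:

```python
import numpy as np

def compose_unit(first, second, y, x, size, candidates, threshold, w_second=0.25):
    """Compose one processing unit per the claim-1 rule (illustrative sketch).

    first, second : 2-D float arrays (stored first image, newly acquired
                    second image).
    candidates    : list of (dy, dx) motion vector candidates.
    """
    unit2 = second[y:y + size, x:x + size]
    evals = []
    for dy, dx in candidates:
        # SAD between the unit in the second image and the candidate-shifted
        # unit in the first image (the "evaluation value" of the claim).
        shifted = first[y + dy:y + dy + size, x + dx:x + dx + size]
        evals.append(np.abs(unit2 - shifted).sum())
    if min(evals) > threshold:
        # Every candidate failed: adopt the acquired image's pixels as-is.
        return unit2.copy()
    dy, dx = candidates[int(np.argmin(evals))]
    best = first[y + dy:y + dy + size, x + dx:x + dx + size]
    # Weighted average with the best position-corrected unit.
    return w_second * unit2 + (1.0 - w_second) * best
```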

According to the multiple-image composition method of claim 1, defects such as afterimages can be reduced even when there is complex subject movement that correction by a single motion vector cannot handle, and image breakdown can be prevented when an image change occurs that motion correction cannot handle.

In the multiple-image composition method of claim 2, based on claim 1, when the weighted averaging is performed, the addition ratio is set for each processing unit so that the larger the evaluation value for the processing unit, the larger the addition ratio of the second image.

According to the multiple-image composition method of claim 2, in addition to reducing defects such as afterimages under complex subject movement that a single motion vector cannot handle and preventing image breakdown when a change occurs that motion correction cannot handle, it reduces the defects in which the boundaries of processing units become conspicuous owing to differences in processing between adjacent units, or an isolated processing unit looks unnatural when sudden noise causes a partially erroneous selection.

In the multiple-image composition method of claim 3, based on claim 1 or 2, one of the motion vector candidates is determined from attitude information of the imaging apparatus itself.

According to the multiple-image composition method of claim 3, displacement of the attitude of the imaging means can also be taken into account.

In the multiple-image composition method of claim 4, based on any of claims 1 to 3, luminance level correction is performed by referring to pixel values in luminance level measurement regions set in each of the first image and the second image, before the pixel-value differences in the processing unit are obtained between the first image and the second image.

According to claim 4, because luminance variation is corrected before evaluation, the motion vector candidates can be evaluated without being affected by luminance variation.

The multiple-image composition apparatus of claim 5 performs weighted averaging of pixel values among a plurality of time-series images containing the same subject, and comprises: means for calculating a plurality of candidate motion vectors of the subject between a first image used for composition and a separate second image used for composition; means for accumulating, for each processing unit, being a part of the image consisting of one or more pixels, the absolute values of the differences between the values of the pixels contained in the processing unit currently being processed in the second image and the values of the pixels contained in the processing units of the first image position-corrected by each candidate motion vector from the position corresponding to the processing unit of the second image, over a region containing that processing unit, and obtaining each accumulated value as an evaluation value; and image composition means that, when every evaluation value exceeds a predetermined threshold, adopts the values of the pixels contained in that processing unit of the second image as the composition result for the unit, and, when at least one evaluation value is at or below the threshold, performs weighted averaging for the unit between the values of the pixels contained in that processing unit of the second image and the values of the pixels contained in the processing unit on the position-corrected first image that gave the smallest evaluation value.

According to claim 5, a multiple-image composition apparatus can be provided that reduces defects such as afterimages even when there is complex subject movement, and that prevents image breakdown when an image change occurs that motion correction cannot handle.

In the multiple-image composition apparatus of claim 6, based on claim 5, when the image composition means performs the weighted averaging, it sets the addition ratio for each processing unit so that the larger the evaluation value for the processing unit, the larger the addition ratio of the second image.

According to claim 6, a multiple-image composition apparatus can be provided that reduces defects such as afterimages even under complex subject movement that a single motion vector cannot handle, prevents image breakdown when a change occurs that motion correction cannot handle, and in particular reduces the defects in which the boundaries of processing units become conspicuous owing to differences in processing between adjacent units, or an isolated processing unit looks unnatural when sudden noise causes a partially erroneous selection.

The multiple-image composition method and multiple-image composition apparatus of the present invention have the effect of reducing defects such as afterimages even when there is complex subject movement that correction by a single motion vector cannot handle, and of preventing image breakdown when an image change occurs that motion correction cannot handle.

Embodiments of the present invention are described below.
(Embodiment 1)
As shown in FIG. 1(b), the imaging apparatus of this embodiment is mounted on the ceiling of a building or the like and consists of imaging means 1, which continuously captures, in time series, a predetermined imaging area containing a subject Mn, and an image composition processing unit 2, which takes in the moving picture captured by imaging means 1 and performs image composition; image display means 3, which displays the output image from the image composition processing unit 2, is connected to it.

As shown in FIG. 1(a), the image composition processing unit 2 consists of: image storage means 4, which stores the output image; image composition means 5, which composes the stored image (first image) held in image storage means 4 with the most recently acquired image (second image) among the still images making up the moving picture obtained by imaging means 1; attitude control means 6, which controls the attitude of imaging means 1; imaging condition control means 7, which controls the imaging conditions of imaging means 1; image motion conversion means 8, which converts the attitude information of imaging means 1 from attitude control means 6 and the condition information from imaging condition control means 7 into a motion vector in the image; image motion calculation means 9, which sets motion vector measurement points with reference to a target subject area on the image and calculates motion vectors between the acquired image and the stored image; and moving object detection means 10 and face detection means 11 for setting the target subject area.

Each of the means making up the image composition processing unit 2 may be implemented as a function realized by executing a program on a microcomputer, or implemented individually in hardware. Although the image composition processing unit 2 is separate from imaging means 1 as shown in FIG. 1(b), it may be integrated with imaging means 1 to form the imaging apparatus; furthermore, image display means 3 may also be built in to form the imaging apparatus.

Next, the operation of each means of the image composition processing unit 2 is described.

First, moving object detection means 10, for example, creates a difference image between the acquired image and the stored image to be composed and extracts change areas; it binarizes these change areas and generates circumscribed rectangles that merge change areas lying close together into integrated regions; for each integrated region it performs grayscale pattern matching between the acquired image and the stored image, extracts integrated regions of low similarity as moving object candidates as in FIG. 2(a), and passes them to image motion calculation means 9 as subject area information.
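A simplified sketch of the difference-image step only; the merging of nearby change areas into integrated regions and the grayscale pattern-matching check are omitted, and the function name and threshold are illustrative:

```python
import numpy as np

def change_bounding_box(acquired, stored, threshold):
    """Binarize |acquired - stored| and return the circumscribed rectangle
    (y0, x0, y1, x1) of the changed pixels, or None when nothing changed."""
    mask = np.abs(acquired.astype(float) - stored.astype(float)) > threshold
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None
    return int(ys.min()), int(xs.min()), int(ys.max()), int(xs.max())
```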

Face detection means 11, on the other hand, extracts a face area using a known technique, for example a method that determines the face area by comparing a color histogram generated from the image data with a template histogram of a face part prepared in advance, and passes this face area information to image motion calculation means 9.

Image motion calculation means 9 sets a plurality of motion vector measurement points on the moving object (target subject area) detected by moving object detection means 10. When the subject Mn is an object, it undergoes no deformation even if it moves within the frame of the captured image; but when the subject Mn is a person, the limbs and other parts may move or deform locally. To measure the person's overall movement while avoiding these effects, the movement of the face and torso must be measured, so image motion calculation means 9 sets the motion vector measurement points (shown as squares in FIG. 2(a)) so that they are dense in the upper half of the area set as the moving object (the circumscribed rectangle shown by the broken line in FIG. 2(a)), and determines the motion vector at each measurement point by the block matching method, as shown by the arrows in FIG. 2(b).
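A sketch of full-search block matching at one measurement point, assuming a sum-of-absolute-differences criterion; the text names the block matching method but does not spell out the search details, so the names, block size, and search range below are illustrative:

```python
import numpy as np

def block_match(acquired, stored, y, x, block=4, search=3):
    """Return the displacement (dy, dx) that minimizes the sum of absolute
    differences between the block at (y, x) in the acquired image and the
    displaced block in the stored image (exhaustive search)."""
    ref = acquired[y:y + block, x:x + block].astype(float)
    best, best_err = (0, 0), float("inf")
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            if yy < 0 or xx < 0 or yy + block > stored.shape[0] \
                    or xx + block > stored.shape[1]:
                continue  # candidate block would leave the image
            err = np.abs(ref - stored[yy:yy + block, xx:xx + block].astype(float)).sum()
            if err < best_err:
                best, best_err = (dy, dx), err
    return best
```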

Further, a representative value is obtained from the motion vectors thus determined; for example, a motion vector shared by a majority of the measured vectors is taken as the representative motion vector M1. Since this embodiment handles a plurality of motion vectors as described later, the representative value need not be limited to a single one, and a plurality of measured motion vectors may be output to image composition means 5.

Image motion calculation means 9 sets motion vector measurement points (shown as squares in FIG. 2(c)) on this face area (inside the rectangle shown by the broken line in FIG. 2(c)) so as to correspond to shaded features such as the eyes, ears, and nose, and measures the motion vector at each measurement point by the block matching method, as shown by the arrows in FIG. 2(d). Because the area around the mouth deforms when words are spoken, measurement points around the mouth would have to be set densely; however, the mouth area is not important when determining the displacement of the face as a whole, so no measurement points need be set there. At each measurement point, an accurate position can be obtained by recognizing facial features such as the eyes, ears, and nose with the face-area extraction technique. Depending on the required placement accuracy, the relative position of each facial feature with respect to the face outline may instead be obtained in advance from statistical information on face shapes and applied geometrically to the rectangle forming the face outline.

FIG. 2(d) shows the distribution M2(x, y) of motion vectors (hereinafter, motion vector M2) obtained at the motion vector measurement points set in the target subject area (face area). If the face, except for the deforming part of the mouth, is assumed to be a rigid body without deformation, the motion of the face between the acquired image and the stored image can be approximated by a linear transformation. However, to handle a state in which the motion is not uniform around the mouth, as illustrated, processing such as mapping the data is required.

As shown in FIG. 3, the attitude of imaging means 1 in the horizontal (pan) direction H and the vertical (tilt) direction V is controlled by attitude control means 6 through driving means such as a motor (not shown). Image motion conversion means 8 takes in the attitude information of imaging means 1 from attitude control means 6 and measures the relative deflection angle θ between the acquisitions of the images used for composition; when the attitude of the imaging camera is not controlled, zero displacement is adopted as the attitude information. Image motion conversion means 8 converts the deflection angle θ thus obtained into a motion vector using condition information consisting of the focal length l of the optical system of imaging means 1, obtained from imaging condition control means 7. FIGS. 4(a) and 4(b) show a conversion example; to make the phenomenon easy to follow, the coordinate measured from the image center is represented by x alone, and the image displacement Δx on the image-forming plane α caused by the difference in capture time between the acquired image and the stored image is defined as Δx = [(l² + x²)/l]·tanθ. Distortion information from attitude control means 6 is also added as necessary to set the motion vector distribution M3(x, y) (referred to as motion vector M3).
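The conversion Δx = [(l² + x²)/l]·tanθ can be evaluated directly; the function name is illustrative, x and l are assumed to be in the same length unit, and θ is taken in degrees here:

```python
import math

def pan_displacement(l, x, theta_deg):
    """Image-plane displacement Δx = ((l² + x²) / l) · tanθ at image
    coordinate x, for deflection angle θ and focal length l."""
    theta = math.radians(theta_deg)
    return (l * l + x * x) / l * math.tan(theta)
```

At the image center (x = 0) this reduces to the familiar l·tanθ, and the displacement grows toward the periphery of the frame.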

Image composition means 5 obtains the m:n weighted average of the pixel values at the same image coordinates in the acquired image and the stored image, outputs the image formed by the averaged pixel values as the composite image, stores this output composite image in image storage means 4, and uses it as the new stored image when processing the next image acquired from imaging means 1. This series of operations can be represented as the infinite-length impulse response (IIR) filter shown in FIG. 5, which acts as a smoothing filter and reduces noise. The contribution of each image acquired in time series to the latest composite image is as shown in FIG. 6; for a moving subject, the past information lingers like a tail, so blur arises. In FIG. 6, the leftmost bar represents the newly captured image and the bars to its right represent images acquired progressively further in the past; the length of each bar indicates that image's addition ratio in the latest composite image, that is, in the stored image.
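Under the recursive m:n average, the contribution of a frame k steps in the past to the latest composite decays geometrically, which is what the bars of FIG. 6 depict. This helper (illustrative, and ignoring the special weight retained by the very first frame of a finite sequence) computes those ratios:

```python
def addition_ratios(num_frames, m=1, n=3):
    """Addition ratio of each past frame in the latest composite under the
    recursive average out = (m*new + n*out_prev)/(m+n). Index 0 is the
    newest frame; each older frame's share decays by a factor n/(m+n)."""
    a = m / (m + n)   # weight of the newly acquired image
    r = n / (m + n)   # decay applied to all older contributions per step
    return [a * r ** k for k in range(num_frames)]
```

With m:n = 1:3, the shares are 1/4, 3/16, 9/64, ... — the slowly decaying tail that causes the afterimage of a moving subject.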

Image composition means 5 therefore performs motion correction using motion vectors M1 and M2 from image motion calculation means 9 and motion vector M3 from image motion conversion means 8 as motion vector candidates.

FIG. 7 illustrates the principle of motion correction: between the stored image of FIG. 7(a) and the acquired image of FIG. 7(b), the subject Mn captured over the elapsed time has moved. As shown in FIG. 8(d), processing units of one or more pixels (the squares in FIG. 8(d)) are set on the acquired image. With the subject Mn being a person, for a processing unit corresponding to the moving person's torso, the stored image motion-corrected with the motion vector M1 obtained by moving object detection from the acquired image (FIG. 8(d)) has high similarity with the acquired image, so motion vector M1 is applied (FIG. 8(c)). For a processing unit corresponding to the moving person's face, the stored image motion-corrected with the motion vector M2 obtained by face detection has high similarity with the acquired image, so motion vector M2 is applied (FIG. 8(b)). Further, for processing units corresponding to the background, the stored image motion-corrected with the motion vector M3 obtained from the displacement of imaging means 1 has high similarity with the acquired image, so motion correction by motion vector M3 is applied.

Next, before composing the images, image composition means 5 carries out the process of evaluating the similarity between the motion-corrected image and the acquired image, as described above.

In this case, the absolute values of the pixel-value differences between the corrected image and the acquired image are accumulated within the above processing unit or in its neighborhood to obtain a block matching error for evaluation, or a block matching error is obtained and evaluated in the same way on spatial difference images. With the block matching error, the larger the value, the lower the similarity.

このようにして類似性を評価する処理を経て、その評価結果に基づいて合成時における保存画像と取得画像の画素値の加算割合を決定して、画像合成を行うのである。ここで加算値の割合の決定は、例えば動きベクトルM1によって動き補正を行った保存画像に対する類似性の評価値と、動きベクトルM2によって動き補正を行った保存画像に対する類似性の評価値とを比較し、動きベクトルM1側の類似性が高い場合には取得画像と動きベクトルM1で補正した保存画像とを1:3の割合で重み付き平均をとる(図9の(I)参照)。また動きベクトルM2側の類似性が高い場合には取得画像と動きベクトルM2で動き補正した保存画像とを1:3の割合で重み付き平均をとる(図9の(II)参照)。また何れの評価値も予め設定している所定の閾値を超えなかった場合には保存画像の画素値は使用せず、取得画像の画素値がそのまま適用される(図9の(III)参照)。
(実施形態2)
図10(a)は本実施形態における、評価の結果と画像の加算割合の関係を表す図である。実施形態1では閾値を境にして保存画像の画素値の割合がゼロと3/4と二者から択一するようになっていたが、本実施形態では類似性の評価値に応じて保存画像の画素値の割合を段階的に変更する。尚図10(a)の各画像の割合の区分は、図10(b)に示す動き補正無しの保存画像(i)、動き補正有りの保存画像(iii)、新規の取得画像(ii)に基づく。
In this way, through the similarity evaluation process, the addition ratio of the pixel values of the saved image and the acquired image at composition time is determined from the evaluation results, and image composition is performed. The addition ratio is determined, for example, by comparing the similarity evaluation value for the saved image motion-corrected with the motion vector M1 against that for the saved image motion-corrected with the motion vector M2. When the similarity on the motion vector M1 side is higher, the acquired image and the saved image corrected with the motion vector M1 are averaged with weights in a 1:3 ratio (see (I) in FIG. 9). When the similarity on the motion vector M2 side is higher, the acquired image and the saved image motion-corrected with the motion vector M2 are averaged with weights in a 1:3 ratio (see (II) in FIG. 9). When neither evaluation value exceeds the preset threshold, the pixel values of the saved image are not used, and the pixel values of the acquired image are applied as they are (see (III) in FIG. 9).
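The three cases above — (I) and (II) blend the acquired image with the better-matching corrected saved image at 1:3, while (III) keeps the acquired pixels alone — can be sketched as follows. The threshold value is a placeholder (the patent only says it is predetermined), and the evaluation value is taken to be the block matching error, where smaller means more similar:

```python
import numpy as np

THRESHOLD = 500      # placeholder; the patent only says "predetermined"
SAVED_WEIGHT = 0.75  # acquired:saved = 1:3, the example ratio in the text

def compose_unit(acquired_block, corrected_blocks):
    """Compose one processing unit from the acquired image and the same
    block cut from each motion-corrected saved image. The evaluation
    value is the SAD block matching error (lower = more similar)."""
    errors = [int(np.abs(acquired_block.astype(np.int64)
                         - b.astype(np.int64)).sum())
              for b in corrected_blocks]
    best = int(np.argmin(errors))
    if errors[best] > THRESHOLD:           # no candidate is similar enough
        return acquired_block.copy()       # case (III): acquired image only
    best_block = corrected_blocks[best].astype(np.float64)
    out = (1 - SAVED_WEIGHT) * acquired_block + SAVED_WEIGHT * best_block
    return out.astype(acquired_block.dtype)  # cases (I)/(II): 1:3 blend
```

Running this per processing unit yields the composite frame; the heavy weight on the corrected saved image is what gives the recursive (IIR-like) noise averaging described for Embodiment 1.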
(Embodiment 2)
FIG. 10(a) is a diagram showing the relationship between the evaluation result and the image addition ratio in this embodiment. In Embodiment 1, the ratio of the pixel values of the saved image was a binary choice between zero and 3/4, with the threshold as the boundary; in this embodiment, the ratio is changed stepwise according to the similarity evaluation value. The ratio divisions for each image in FIG. 10(a) are based on the saved image without motion correction (i), the saved image with motion correction (iii), and the newly acquired image (ii) shown in FIG. 10(b).

尚本実施形態は図1における画像合成手段5での画素値の演算方法が異なる他は、実施形態1と同じである。これにより、隣接する処理単位の処理の違いにより処理単位の境界が目立ったり、突発的なノイズにより部分的に誤った選択が為された場合の孤立した処理単位での違和感が生じたりする不具合を低減できる。   This embodiment is the same as Embodiment 1 except for the pixel-value calculation method in the image synthesizing means 5 of FIG. 1. This reduces such problems as processing-unit boundaries becoming noticeable because adjacent processing units are processed differently, or an isolated processing unit looking unnatural when a partially erroneous selection is made because of sudden noise.

尚図10(a)では加算割合を段階的に設定しているが、画素値の計算方法によっては連続的な値を設定しても良い。   Although the addition ratio is set stepwise in FIG. 10A, a continuous value may be set depending on the pixel value calculation method.
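The stepwise weighting of this embodiment can be sketched as a small lookup: the saved-image addition ratio falls from 3/4 toward zero as the block matching error grows. The intermediate breakpoints below are invented for illustration; the actual steps are those of FIG. 10(a):

```python
def saved_image_ratio(error: float, threshold: float) -> float:
    """Map a block matching error to the saved-image addition ratio.
    Steps from 3/4 (very similar) down to 0 (dissimilar); the
    intermediate breakpoints here are made-up illustrations."""
    if error <= 0.5 * threshold:
        return 0.75
    if error <= threshold:
        return 0.5
    if error <= 1.5 * threshold:
        return 0.25
    return 0.0
```

Replacing the step function with a continuous ramp would give the continuous variant the text also permits.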

また保存画像、取得画像の各々に設定した輝度レベル測定領域における画素値を参照して輝度レベル補正を行い、この輝度レベル補正後、動きベクトルの候補を評価するようにしても良い。
(実施形態3)
本実施形態の撮影装置は、図11(b)に示すように撮像手段1及び画像処理部2を可搬型の筐体12に一体的に組み込んで構成されたもので、筐体12を使用者Uが手に持って任意の方向に向けられる構成となっており、撮像手段1の姿勢を変化させる姿勢制御手段は有しないが、図11(a)に示すように姿勢制御手段の代わりに撮像手段1の変位測定手段13が備えられている以外は図2にある実施形態1と同じである。変位測定手段13は角速度センサ(図示せず)により合成に使用される画像の取得時点での光学系光軸の角度を測定するもので、必要であれば加速度センサを設け同じく相対位置を測定する。建造物や三脚などに固定されていて向きの変わらない撮像装置では変位測定手段13は不要で、変位測定手段13の出力に代わる情報として変位ゼロが採用される。
Alternatively, the brightness level correction may be performed with reference to the pixel value in the brightness level measurement region set for each of the stored image and the acquired image, and the motion vector candidates may be evaluated after the brightness level correction.
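One plausible reading of this brightness level correction is a gain that matches the mean levels of the two measurement regions before differences are computed; the sketch below assumes that reading (the function and the gain model are illustrative, not specified by the patent):

```python
import numpy as np

def brightness_correct(saved, acquired, region):
    """Scale the saved image so that its mean level inside the
    measurement region matches that of the acquired image, before
    pixel differences are computed. `region` is (top, left, height,
    width); a pure gain is only one plausible realization of the
    level correction described in the text."""
    t, l, h, w = region
    mean_saved = saved[t:t + h, l:l + w].mean()
    mean_acq = acquired[t:t + h, l:l + w].mean()
    gain = mean_acq / max(mean_saved, 1e-6)
    return np.clip(saved.astype(np.float64) * gain, 0, 255)
```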
(Embodiment 3)
As shown in FIG. 11(b), the imaging apparatus of this embodiment is configured by integrating the imaging means 1 and the image processing unit 2 into a portable housing 12; the housing 12 is held in the hand of the user U and can be pointed in any direction. It has no attitude control means for changing the attitude of the imaging means 1, but, as shown in FIG. 11(a), it is the same as Embodiment 1 in FIG. 2 except that displacement measuring means 13 for the imaging means 1 is provided in place of the attitude control means. The displacement measuring means 13 measures, with an angular velocity sensor (not shown), the angle of the optical axis of the optical system at the time the images used for composition are acquired; if necessary, an acceleration sensor is also provided to measure the relative position. In an imaging apparatus that is fixed to a building, a tripod, or the like and whose orientation does not change, the displacement measuring means 13 is unnecessary, and zero displacement is adopted as the information replacing its output.

この変位情報の利用方法は実施形態1における姿勢制御手段6から出力される撮像手段1の変位情報と同じである。   This displacement information is used in the same way as the displacement information of the imaging means 1 output from the attitude control means 6 in Embodiment 1.
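Converting the measured change of the optical-axis angle into an image-plane motion vector can be approximated with a pinhole-camera model; the focal length and pixel pitch below are illustrative parameters, not values from the patent:

```python
import math

def angle_to_pixels(delta_deg: float, focal_mm: float, pitch_um: float) -> float:
    """Approximate image shift (in pixels) caused by rotating the
    optical axis by delta_deg, using the pinhole-camera relation
    shift ~= f * tan(theta) / pixel_pitch."""
    shift_mm = focal_mm * math.tan(math.radians(delta_deg))
    return shift_mm / (pitch_um * 1e-3)  # pitch converted from um to mm
```

A fixed camera simply feeds zero displacement through the same conversion, matching the zero-displacement substitute described above.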

尚その他の構成は実施形態1と同じであるので同じ構成要素に同じ符号を付して説明は省略する。   Since other configurations are the same as those of the first embodiment, the same components are denoted by the same reference numerals and description thereof is omitted.

(a)は実施形態1の回路構成図、(b)は実施形態1の使用例図である。   (a) is a circuit configuration diagram of Embodiment 1, and (b) is a usage example diagram of Embodiment 1.
(a)は実施形態1の対象被写体領域の動きベクトル測定点の設定例図、(b)は実施形態1の対象被写体領域の各動きベクトル測定点の測定結果図、(c)は実施形態1の顔領域の動きベクトル測定点の設定例図、(d)は実施形態1の顔領域の各動きベクトル測定点の測定結果図である。   (a) is a setting example diagram of motion vector measurement points in the target subject region of Embodiment 1, (b) is a measurement result diagram of each motion vector measurement point in the target subject region of Embodiment 1, (c) is a setting example diagram of motion vector measurement points in the face region of Embodiment 1, and (d) is a measurement result diagram of each motion vector measurement point in the face region of Embodiment 1.
実施形態1に用いる撮像手段の姿勢制御の説明図である。   An explanatory diagram of the attitude control of the imaging means used in Embodiment 1.
実施形態1に用いる撮像手段の動きベクトル換算例の説明図である。   An explanatory diagram of a motion vector conversion example of the imaging means used in Embodiment 1.
実施形態1の画像合成手段の処理動作に対応した無限長インパルス応答IIRフィルタの等価回路図である。   An equivalent circuit diagram of an infinite impulse response (IIR) filter corresponding to the processing operation of the image synthesizing means of Embodiment 1.
実施形態1における合成画像の保存画像と取得画像の加算割合の説明図である。   An explanatory diagram of the addition ratio of the saved image and the acquired image in the composite image in Embodiment 1.
実施形態1における動き補正の原理説明図である。   An explanatory diagram of the principle of motion correction in Embodiment 1.
実施形態1における動き補正と各動きベクトルとの関係説明図である。   An explanatory diagram of the relationship between motion correction and each motion vector in Embodiment 1.
実施形態2の動きベクトルによる補正の類似性と合成画像における保存画像と取得画像の割合の説明図である。   An explanatory diagram of the similarity of correction by motion vectors and the ratio of the saved image and the acquired image in the composite image in Embodiment 2.
実施形態2の動きベクトルによる補正の類似性と合成画像における保存画像と取得画像の割合の説明図である。   An explanatory diagram of the similarity of correction by motion vectors and the ratio of the saved image and the acquired image in the composite image in Embodiment 2.
(a)は実施形態3の使用例図、(b)は実施形態3の回路構成図である。   (a) is a usage example diagram of Embodiment 3, and (b) is a circuit configuration diagram of Embodiment 3.

符号の説明   Explanation of symbols

1 撮像カメラ   Imaging camera
2 画像合成処理部   Image composition processing unit
3 画像表示手段   Image display means
4 画像記憶手段   Image storage means
5 画像合成手段   Image composition means
6 姿勢制御手段   Attitude control means
7 撮像条件制御手段   Imaging condition control means
8 画像動き換算手段   Image motion conversion means
9 画像動き算出手段   Image motion calculation means
10 動体検出手段   Moving object detection means
11 顔検出手段   Face detection means
Mn 被写体   Subject

Claims (6)

時系列的に撮像された同一の被写体を含む複数の画像の間で、画素値の重み付き平均処理を行う複数画像合成方法であって、
合成に使用される第一の画像と合成に使用される別の第二の画像との間で被写体の動きベクトルの候補を複数算出し、
画像の一部であって1以上の画素からなる処理単位毎に、第二の画像において現時点の処理対象となっている処理単位に含まれる画素の値を基準として、第一の画像において第二の画像の処理単位の対応位置から複数の動きベクトルの候補にて位置補正された複数の処理単位に含まれる画素の値との差の絶対値を該処理単位を含む領域について積算し、該積算値を評価値として求め、
何れの評価値も所定の閾値を超えた場合には該処理単位では第二の画像の該処理単位に含まれる各画素の値を合成処理結果として採用し、評価値の少なくとも一つが該閾値以下となった場合には該処理単位では第二の画像の該処理単位に含まれる各画素の値と、最も評価値が小さい位置補正後の第一の画像上の処理単位に含まれる各画素の値とにより重み付き平均処理を行うことを特徴とする複数画像合成方法。
A multiple image composition method for performing weighted averaging of pixel values among a plurality of images containing the same subject captured in time series, the method comprising:
calculating a plurality of subject motion vector candidates between a first image used for composition and another, second image used for composition;
for each processing unit that is a part of an image and consists of one or more pixels, accumulating, over a region containing the processing unit and taking as a reference the values of the pixels contained in the processing unit currently being processed in the second image, the absolute values of the differences from the values of the pixels contained in the plurality of processing units of the first image whose positions have been corrected by the plurality of motion vector candidates from the position corresponding to the processing unit of the second image, and obtaining each accumulated value as an evaluation value; and
when every evaluation value exceeds a predetermined threshold, adopting, for the processing unit, the values of the pixels contained in the processing unit of the second image as the composition result, and when at least one evaluation value is equal to or below the threshold, performing, for the processing unit, weighted averaging of the values of the pixels contained in the processing unit of the second image and the values of the pixels contained in the position-corrected processing unit of the first image having the smallest evaluation value.
前記重み付平均処理を行う場合には、前記処理単位での前記評価値が大きくなるほど前記第二の画像の加算割合が大きくなるように処理単位ごとに加算割合を設定する重み付き平均処理を施すことを特徴とする請求項1記載の複数画像合成方法。   The multiple image composition method according to claim 1, wherein, when the weighted averaging is performed, the addition ratio is set for each processing unit such that the larger the evaluation value for the processing unit is, the larger the addition ratio of the second image becomes.

前記動きベクトルの候補の一つは撮像装置自体の姿勢情報から決定されることを特徴とする請求項1又は2記載の複数画像合成方法。   The multiple image composition method according to claim 1 or 2, wherein one of the motion vector candidates is determined from attitude information of the imaging apparatus itself.

前記第一の画像と前記第二の画像との各々に設定した輝度レベル測定領域における画素値を参照して輝度レベル補正を行ってから、第一の画像と第二の画像とにおいて前記処理単位における画素値の差を求めることを特徴とする請求項1乃至3の何れか記載の複数画像合成方法。   The multiple image composition method according to any one of claims 1 to 3, wherein brightness level correction is performed with reference to the pixel values in brightness level measurement regions set in each of the first image and the second image, and the pixel-value difference for the processing unit between the first image and the second image is then obtained.

時系列的に撮像された同一の被写体を含む複数の画像の間で、画素値の重み付き平均処理を行う複数画像合成装置であって、合成に使用される第一の画像と合成に使用される別の第二の画像との間で被写体の動きベクトルの候補を複数算出する手段と、画像の一部であって1以上の画素からなる処理単位毎に、第二の画像において現時点の処理対象となっている処理単位に含まれる画素の値を基準として、第一の画像において第二の画像の処理単位の対応位置から複数の動きベクトルの候補にて位置補正された複数の処理単位に含まれる画素の値との差の絶対値を該処理単位を含む領域について積算し、該積算値を評価値として求める手段と、何れの評価値も所定の閾値を超えた場合には該処理単位では第二の画像の該処理単位に含まれる各画素の値を合成処理結果として採用し、評価値の少なくとも一つが該閾値以下となった場合には該処理単位では第二の画像の該処理単位に含まれる各画素の値と、最も評価値が小さい位置補正後の第一の画像上の処理単位に含まれる各画素の値とにより重み付き平均処理を行う画像合成手段とを備えていることを特徴とする複数画像合成装置。   A multiple image composition apparatus for performing weighted averaging of pixel values among a plurality of images containing the same subject captured in time series, comprising: means for calculating a plurality of subject motion vector candidates between a first image used for composition and another, second image used for composition; means for accumulating, for each processing unit that is a part of an image and consists of one or more pixels, over a region containing the processing unit and taking as a reference the values of the pixels contained in the processing unit currently being processed in the second image, the absolute values of the differences from the values of the pixels contained in the plurality of processing units of the first image whose positions have been corrected by the plurality of motion vector candidates from the position corresponding to the processing unit of the second image, and obtaining each accumulated value as an evaluation value; and image composition means which, when every evaluation value exceeds a predetermined threshold, adopts, for the processing unit, the values of the pixels contained in the processing unit of the second image as the composition result, and which, when at least one evaluation value is equal to or below the threshold, performs, for the processing unit, weighted averaging of the values of the pixels contained in the processing unit of the second image and the values of the pixels contained in the position-corrected processing unit of the first image having the smallest evaluation value.

前記画像合成手段は、前記重み付平均処理を行う場合には、前記処理単位での前記評価値が大きくなるほど前記第二の画像の加算割合が大きくなるように処理単位ごとに加算割合を設定する重み付き平均処理を施すことを特徴とする請求項5記載の複数画像合成装置。   The multiple image composition apparatus according to claim 5, wherein, when the weighted averaging is performed, the image composition means sets the addition ratio for each processing unit such that the larger the evaluation value for the processing unit is, the larger the addition ratio of the second image becomes.
JP2005217884A 2005-07-27 2005-07-27 Multiple image composition method and multiple image composition device Expired - Fee Related JP4654817B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2005217884A JP4654817B2 (en) 2005-07-27 2005-07-27 Multiple image composition method and multiple image composition device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2005217884A JP4654817B2 (en) 2005-07-27 2005-07-27 Multiple image composition method and multiple image composition device

Publications (2)

Publication Number Publication Date
JP2007036741A JP2007036741A (en) 2007-02-08
JP4654817B2 true JP4654817B2 (en) 2011-03-23

Family

ID=37795430

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2005217884A Expired - Fee Related JP4654817B2 (en) 2005-07-27 2005-07-27 Multiple image composition method and multiple image composition device

Country Status (1)

Country Link
JP (1) JP4654817B2 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5445235B2 (en) * 2010-03-09 2014-03-19 ソニー株式会社 Image processing apparatus, image processing method, and program
JP5687553B2 (en) 2011-04-25 2015-03-18 オリンパス株式会社 Image composition apparatus, image composition method, and image composition program

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH05103252A (en) * 1991-10-09 1993-04-23 Sanyo Electric Co Ltd Pan tilt processing circuit for jiggle correcting camera
JPH08251474A (en) * 1995-03-15 1996-09-27 Canon Inc Motion vector detector, motion vector detection method, image shake correction device, image tracking device and image pickup device
JPH1051787A (en) * 1996-08-01 1998-02-20 Sharp Corp Motion vector detector
JP2000092378A (en) * 1998-09-16 2000-03-31 Olympus Optical Co Ltd Image pickup device
JP2002333645A (en) * 2001-05-10 2002-11-22 Canon Inc Image blur correction device and camera
JP2004015376A (en) * 2002-06-06 2004-01-15 Canon Inc Apparatus for preventing image shake and camera
WO2004077820A1 (en) * 2003-02-25 2004-09-10 Matsushita Electric Industrial Co. Ltd. Image pickup processing method and image pickup apparatus


Also Published As

Publication number Publication date
JP2007036741A (en) 2007-02-08

Similar Documents

Publication Publication Date Title
US7834907B2 (en) Image-taking apparatus and image processing method
CN107852462B (en) Camera module, solid-state imaging element, electronic apparatus, and imaging method
US8767036B2 (en) Panoramic imaging apparatus, imaging method, and program with warning detection
JP4962460B2 (en) Imaging apparatus, imaging method, and program
JP4618370B2 (en) Imaging apparatus, imaging method, and program
KR101624450B1 (en) Image processing device, image processing method, and storage medium
JP3676360B2 (en) Image capture processing method
US8279297B2 (en) Image processing apparatus, image processing method, and storage medium
US8704888B2 (en) Imaging device and image analysis method
JP2010136302A (en) Imaging apparatus, imaging method and program
JP2010147635A (en) Imaging apparatus, imaging method, and program
KR100775104B1 (en) Image stabilizer and system having the same and method thereof
EP2219366A1 (en) Image capturing device, image capturing method, and image capturing program
JP4779491B2 (en) Multiple image composition method and imaging apparatus
JP4715366B2 (en) Multiple image composition method and multiple image composition device
JP4654817B2 (en) Multiple image composition method and multiple image composition device
EP3796639A1 (en) A method for stabilizing a camera frame of a video sequence
JP2017220885A (en) Image processing system, control method, and control program
JP6772000B2 (en) Image processing equipment, image processing methods and programs
JP2021111929A (en) Imaging device, control method of imaging device, and program
JP6790038B2 (en) Image processing device, imaging device, control method and program of image processing device
US20230319407A1 (en) Image processing device, image display system, method, and program
JP5393877B2 (en) Imaging device and integrated circuit
JP6604783B2 (en) Image processing apparatus, imaging apparatus, and image processing program
JP2009088884A (en) Method and device for detecting motion vector in picked-up data

Legal Events

Date Code Title Description
A621 Written request for application examination

Free format text: JAPANESE INTERMEDIATE CODE: A621

Effective date: 20080214

A977 Report on retrieval

Free format text: JAPANESE INTERMEDIATE CODE: A971007

Effective date: 20100212

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20100223

A521 Request for written amendment filed

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20100426

RD04 Notification of resignation of power of attorney

Free format text: JAPANESE INTERMEDIATE CODE: A7424

Effective date: 20100712

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20100831

A521 Request for written amendment filed

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20101101

TRDD Decision of grant or rejection written
A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

Effective date: 20101124


A61 First payment of annual fees (during grant procedure)

Free format text: JAPANESE INTERMEDIATE CODE: A61

Effective date: 20101207

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20140107

Year of fee payment: 3

R151 Written notification of patent or utility model registration

Ref document number: 4654817

Country of ref document: JP

Free format text: JAPANESE INTERMEDIATE CODE: R151


LAPS Cancellation because of no payment of annual fees