JP2011250133A - Imaging apparatus and control method of the same - Google Patents

Imaging apparatus and control method of the same

Info

Publication number
JP2011250133A
Authority
JP
Japan
Prior art keywords
distance
background
main subject
amount
image
Prior art date
Legal status
Granted
Application number
JP2010121152A
Other languages
Japanese (ja)
Other versions
JP5683135B2 (en)
Inventor
Kimiaki Kano
公章 鹿野
Current Assignee
Canon Inc
Original Assignee
Canon Inc
Priority date
Filing date
Publication date
Application filed by Canon Inc
Priority to JP2010121152A
Publication of JP2011250133A
Application granted
Publication of JP5683135B2
Status: Expired - Fee Related
Anticipated expiration


Landscapes

  • Lens Barrels (AREA)
  • Automatic Focus Adjustment (AREA)
  • Studio Devices (AREA)
  • Exposure Control For Cameras (AREA)
  • Focusing (AREA)

Abstract

PROBLEM TO BE SOLVED: To keep the blur amount of the background constant, even when the distances to the main subject and to the background change, in an imaging apparatus provided with a diaphragm for exposure control.
SOLUTION: In the imaging apparatus, a main subject/background distance calculation part 801 calculates the distances to the main subject and to the background, and a memory 12 stores the distances to the main subject and to the background, together with the focal length and the aperture value, at the moment the blur amount of the background image is designated or recording is started. The difference between the stored main-subject and background distances is compared as needed with the corresponding difference during shooting; when the two differ, an exposure control part 802 controls the diaphragm 2 in accordance with the amount of change in the background distance, thereby changing the depth of field.

Description

The present invention relates to a technique for controlling the degree of defocus (blur) of the background image in a captured image.

A conventional imaging apparatus performs exposure correction according to the brightness of the scene by means of an aperture (diaphragm) in the taking lens unit, an ND (Neutral Density) filter, and an electronic shutter that controls the accumulation time of the image sensor. Using the zoom function, the photographer can also set an arbitrary zoom position and thereby change the focal length. When the F-number of the aperture and the focal length change with the scene brightness and the zoom position, the depth of field changes as well.
Meanwhile, imaging apparatuses are expected to produce high-quality images with a sense of depth and three-dimensionality. Depending on the shooting environment, however, changes in the aperture value make the depth of field deeper or shallower. It is therefore difficult to keep the blur amount of the background image constant, and hence difficult to obtain an image with a sense of depth.

To avoid this, a conventional imaging apparatus divides the scene projected on the image sensor two-dimensionally into a plurality of regions and moves the focus position of the optical system in each region, thereby separating the main subject from the background. An image with a shallow depth of field is then generated by combining an image focused on the main subject with an image captured at a focus position at which the background is out of focus (see Patent Document 1).

Patent Document 1: JP 2008-245054 A

However, the apparatus disclosed in Patent Document 1 requires one operation to move the focus position to obtain the image of the main subject and another to move it so that the background subject becomes defocused. That is, the focus position must be moved twice to generate a single image with a shallow depth of field, so the time needed to obtain one such image is lengthened by the time those focus movements take. Moving the focus position twice per image is also unsuitable when continuous image data must be acquired, as in moving-image shooting or continuous still shooting.
SUMMARY OF THE INVENTION: Accordingly, an object of the present invention is to keep the blur amount of the background image constant even when the distance to the main subject and the distance to the background change.

In order to solve the above problems, an apparatus according to the present invention is an imaging apparatus comprising a taking lens and an aperture for exposure control, and includes: calculation means for calculating the distance to the main subject and the distance to the background; storage means for storing the distances calculated by the calculation means together with the focal length of the taking lens and the aperture value of the aperture; and control means for comparing the difference between the stored main-subject and background distances with the difference between the main-subject and background distances during shooting, and, when the two differences disagree, controlling the aperture in accordance with the amount of change in the background distance so as to change the depth of field.

According to the present invention, the blur amount of the background image can be kept constant even when the distance to the main subject and the distance to the background change.

FIG. 1 is a block diagram showing a configuration example of an imaging apparatus for describing the first embodiment of the present invention in conjunction with FIGS. 2 to 8.
FIG. 2 is an explanatory diagram showing an example of an image sensor capable of acquiring distance information together with image information of a subject.
FIG. 3 is a diagram explaining the pupil-division scheme of the microlenses in the image sensor and the basic principle of the phase-difference detection method.
FIG. 4 is a diagram explaining the processing of the camera signal processing circuit and the CPU.
FIG. 5 is a diagram explaining an example of the interpolation processing method for the phase-difference detection elements.
FIG. 6 is a diagram explaining the principle of depth of field.
FIG. 7 is an explanatory diagram concerning control of the depth of field according to changes in the distance to the main subject and the distance to the background.
FIG. 8 is a flowchart showing an example of the control operation.
FIG. 9 is a flowchart showing an example of the control operation according to the second embodiment of the present invention.

[First Embodiment]
FIG. 1 shows an outline of a configuration example of an imaging apparatus according to an embodiment of the present invention. The taking lens 1 constitutes the imaging optical system that photographs the subject, and the aperture 2 for exposure control limits the amount of incident light so that light of an appropriate exposure amount reaches the image sensor 3. The image sensor 3 is driven by timing pulses output from a TG (Timing Generator) circuit 9 and converts the incident light into an electrical signal by photoelectric conversion. The TG circuit 9 is controlled by a CPU (central processing unit) 8 and outputs, for example, drive pulses of a predetermined voltage amplitude as a constant-period signal synchronized with the horizontal synchronization signal HD.
The CDS/A-D circuit 4 receives the output signal of the image sensor 3 and performs correlated double sampling and analog-to-digital conversion. The TG circuit 9 also generates and sends a drive signal to the CDS/A-D circuit 4.
The camera signal processing circuit 5 separates the output of the CDS/A-D circuit 4 into a luminance signal and color signals (Y/C separation) and performs video signal processing, color signal processing, gamma correction, and the like to generate an image signal. The video signal processing circuit 6 converts the output of the camera signal processing circuit 5 into a video signal and outputs it to the display unit 10 and the recording medium 11. An image memory 7 provided in the video signal processing circuit 6 is a storage device used for buffering image data during encoding and decoding. When a recording operation is started by an operation instruction from the operation unit 13 described later, the video signal processing circuit 6 encodes or decodes the input video signal using the image memory 7. Image data that has undergone the prescribed processing is recorded on the recording medium 11.

The imaging apparatus includes calculation means (not shown) that calculates distance information to the subject, or that detects the focus state and calculates the amount of movement of the focus-adjustment lens or lens group (hereinafter, the focus lens). The output of this calculation means is input to the CPU 8 via the camera signal processing circuit 5. The main subject/background distance calculation unit (hereinafter, distance calculation unit) 801, exposure control unit 802, and optical control unit 803 shown inside the CPU 8 are functional blocks representing processing that the CPU 8 performs by interpreting and executing a program. The distance calculation unit 801 calculates the distance from the imaging apparatus to the main subject and the distance from the imaging apparatus to the background. The exposure control unit 802 controls the aperture value, shutter speed, and so on so that still images and moving images are captured with proper exposure. The optical control unit 803 generates drive signals for the focus lens, zoom lens, and the like, and controls their operation.
When the background distance changes relative to the main-subject distance, the CPU 8 controls the depth of field for the main subject based on the main-subject distance, the background distance, and the depth-of-field information set during shooting. The state of the aperture 2 is changed accordingly, so that the blur amount of the background image is kept constant.
The memory 12 is connected to the CPU 8 and stores various data and programs necessary for its processing. The operation unit 13 conveys the photographer's operation instructions to the CPU 8 and includes a selection switch for designating the blur amount of the background image (blur designation selection switch), a recording switch for instructing the start and end of recording, and so on.

Next, the process of generating map data representing the distribution of distance information (hereinafter, a distance map) is described with reference to FIGS. 2 to 6. The following describes a distance calculation method in which pupil-divided pixels are embedded in the image sensor 3 and used as focus state detection elements.
FIG. 2 shows an example arrangement of the focus state detection elements in the image sensor 3. As the enlarged view shows, imaging elements and focus state detection elements are interleaved. In this example, the elements in the pixel rows of the first, second, fourth, and fifth rows (shown as squares inside circles) are used for imaging, while the detection elements at the pixel positions in the second, sixth, tenth, ... columns of the third row (shown as two rectangles inside a circle) are used for focus state detection. Placing the focus state detection elements more densely improves detection accuracy but increases image degradation, so accuracy and image quality are in a trade-off relationship. In front of each element, as indicated by the circles, a microlens is arranged to collect the incident light efficiently.

In a focus state detection element, the light flux is pupil-divided as shown in FIG. 3(A), so that each divided flux enters one of a pair of light receiving elements A and B. As shown in FIG. 2, combining the outputs of the light receiving elements A (the left rectangle in each circle), arranged side by side in the horizontal direction, yields a first image (hereinafter, the A image); combining the outputs of the light receiving elements B (the right rectangle in each circle) yields a second image (hereinafter, the B image).
Next, the focus detection principle by which the defocus amount of the taking lens is obtained from the A and B images is described. As shown in FIG. 3(B), the positions of the subject image formed on the imaging surface by the flux passing through region A of the taking lens (the A image) and of the subject image formed by the flux passing through region B (the B image) differ between the in-focus, front-focus, and rear-focus states. The larger the defocus amount, that is, the distance between the image-forming plane and the imaging surface, the larger the shift between the A and B images; and the sign of the shift is opposite for front focus and rear focus. The phase-difference detection method exploits this to detect the defocus amount from the image shift.

Next, the processing of the camera signal processing circuit 5 (reference numerals 501 to 504) and the CPU 8 (reference numerals 506 to 512) is described with reference to FIG. 4.
First, the interpolation applied when capturing the pixel portions related to phase-difference detection is described. In the imaging signal read from the image sensor 3, pixel signals are missing at the positions corresponding to the focus state detection elements. The pixel interpolation processing unit 501 therefore computes the data for each such position by interpolation from the surrounding pixels. The output of the pixel interpolation processing unit 501 is sent to the video signal processing unit 502 and becomes a signal that the video signal processing circuit 6 can handle. One interpolation method, shown in FIG. 5, is to simply average the signals of the same-color pixels above and below the focus state detection element in question. In this example the focus state detection element lies at the center of a 5-by-5 array, and "Exx" (x = 1 to 5) represents the charge level of each pixel. In the simple-average method, the value E33 at the position of the focus state detection element is computed as the mean of E31 and E35, which lie one pixel above and below it. Various other methods can be adopted, such as computing a weighted average from the charge levels of a larger number of pixels above, below, left, and right of the focus state detection element. Since such methods are already known from techniques similar to defective-pixel correction, their detailed description is omitted.
The AF (autofocus) gate switch 503, drawn as an on/off switch in FIG. 4, selects which part of the imaging signal A/D-converted by the CDS/A-D circuit 4 is subjected to AF signal processing, and is controlled by a signal from the AF control unit 512 described later. The TV-AF signal processing unit 504 processes the signal extracted by the AF gate switch 503 with a band-pass filter or the like to extract frequency components in a predetermined range, obtaining a value that represents the sharpness of the image.

Next, the generation of the TV-AF signal is described. The TV-AF signal is generated, for example, by filtering the imaging signal to obtain the level of a predetermined high-frequency component. If this filtering were performed on a signal still missing the pixel portions corresponding to the focus state detection elements, the result would contain errors. The AF gate switch 503 therefore controls which part of the video signal undergoes AF signal processing: besides defining the AF area, the switch keeps the horizontal lines of the video signal in which pixels are missing at focus-state-detection positions out of the TV-AF signal processing. A TV-AF signal unaffected by the missing pixel portions is thus obtained.
The blocks 507, 509, and 512 inside the CPU 8 represent its processing as functional blocks. The selector 506, drawn as a changeover switch, distributes the imaging signal A/D-converted by the CDS/A-D circuit 4 into the A image and B image described above: when the selector 506 is in the first state the phase difference calculation processing unit 507 acquires A-image data, and when it is in the second state the unit acquires B-image data. The phase difference calculation processing unit 507 computes the amount of shift between the A and B images at each position in the imaging frame and manages the shift amount at each detection position as two-dimensional array data, as shown in table 508.

The distance map creation processing unit 509 computes the in-focus lens position from the shift amount obtained by the phase difference calculation processing unit 507, and uses the in-focus lens position and the distance table 510 to compute the in-focus distance for each area of the imaging frame. The CPU 8 holds, for example in the data format shown for the distance table 510, in-focus distance data for discrete focus lens positions at each zoom lens position. The distance map creation processing unit 509 obtains the in-focus distance for a given focus lens position by interpolating the data of the distance table 510, computes the distance to the subject for each focus state detection area on the imaging surface, and thereby creates the distance map. The results are managed as two-dimensional array data, as shown in table 511.
The AF control unit 512 performs focusing control by driving the focus lens based on the TV-AF signal and the distance map data. The distance map creation processing unit 509 corresponds to the distance calculation unit 801 of FIG. 1, and the AF control unit 512 corresponds to the optical control unit 803. The aperture 2 is controlled by the exposure control unit 802 according to the processing described later.

Next, the principle of the depth of field with respect to the subject distance is described with reference to FIG. 6. In the figure, "l" denotes the distance from the imaging apparatus to the main subject (hereinafter, the main subject distance). With the subject at distance l in focus, the rear depth of field, i.e., the range farther from the imaging apparatus than l that remains in focus, is denoted "Db", and the front depth of field, i.e., the range nearer than l that remains in focus, is denoted "Da". With the depth of field denoted "D", the focal length of the lens "f", the F-number of the taking lens aperture "Fno", and the permissible circle of confusion "σ", the depth of field under these shooting conditions is expressed by equation (1), which appears in the original publication only as an image.
Next, depth-of-field control with respect to the main subject distance (denoted Ls) and the distance from the imaging apparatus to the background (hereinafter, the background distance, denoted Lh) is described with reference to FIG. 7. Here the distances Ls and Lh are assumed to be much larger than the focal length f (Ls >> f), in which case equation (1) yields |Db| ≈ l.
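Because equation (1) survives only as an image placeholder in this text, it cannot be reproduced verbatim. For reference, the standard geometric-optics depth-of-field expressions consistent with the symbols defined above are (an assumption, not the patent's own rendering):

```latex
D_a = \frac{\sigma\, F_{no}\, l^2}{f^2 + \sigma\, F_{no}\, l}, \qquad
D_b = \frac{\sigma\, F_{no}\, l^2}{f^2 - \sigma\, F_{no}\, l}, \qquad
D = D_a + D_b \approx \frac{2\,\sigma\, F_{no}\, l^2}{f^2}
\quad \text{(when } \sigma F_{no} l \ll f^2\text{)}
```

The approximation on the right is the form used implicitly in the later ratio-based control, where the depth of field scales linearly with Fno at fixed f, l, and σ.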

In the state of FIG. 7(A), Ls = Ls1 and Lh = Lh1, and the depth of field for the main subject is d1. If, while shooting in this state, the photographer instructs the apparatus, for example via the designation selection switch on the operation unit 13, to hold the current blur amount of the background image, the values Ls1, Lh1, and d1 are stored in the memory 12. Suppose the scene then changes to Ls = Ls2 and Lh = Lh2 as shown in FIG. 7(B). If, for example, the background distance Lh becomes Lh2 (= Lh1/2), the exposure control unit 802 controls the aperture 2 so that, per the relational expression above, the depth of field d2 becomes half of d1. That is, if the distance Lh becomes shorter than the distance Lh1 recorded when the blur amount was designated (Lh = Lh2), aperture control is performed according to the ratio of Lh2 to Lh1: the aperture F-number is reduced so that the depth of field d2 becomes smaller, i.e., shallower.
Conversely, if the distance Lh becomes longer after the blur amount is designated, the aperture F-number is increased according to the ratio of Lh2 to Lh1 so that the depth of field d2 becomes larger, i.e., deeper. With this control, even if the background distance changes relative to the main subject distance after the photographer designates the blur amount of the background image, the depth of field is adjusted according to the change in background distance, so a background image with a constant blur amount can be generated.
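Numerically, the scaling works out as in this sketch (the conversion from depth of field back to F-number uses the approximation D ≈ 2σ·Fno·l²/f², which is an assumption of this sketch, not a formula quoted from the patent):

```python
def target_f_number(d1, lh1, lh2, f, subject_dist, coc):
    """F-number that keeps the background blur roughly constant.

    Scales the stored depth of field d1 by the background-distance ratio
    (d2 = (Lh2 / Lh1) * d1, as in the text) and converts d2 to an F-number
    by inverting D ~= 2 * coc * Fno * l**2 / f**2, valid when
    coc * Fno * l is small compared with f**2.
    """
    d2 = (lh2 / lh1) * d1                       # scaled target depth of field
    return d2 * f**2 / (2 * coc * subject_dist**2)
```

With f = 50 mm, a 2 m subject, and a 20 µm circle of confusion, a depth of field of 0.256 m corresponds to F4; halving the background distance then asks for F2, i.e., a stop twice as wide open, matching the "shallower when the background approaches" behavior described above.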

Next, the control that keeps the blur amount of the background image constant is described using the flowchart of FIG. 8.
After shooting starts (S1000), the distance map creation processing unit 509 calculates distance information from the subject image currently being captured and generates distance map data (S1001). The distance calculation unit 801 identifies the main subject from the distance map data and the focus position, and calculates the main subject distance and the background distance (S1002).
The CPU 8 then detects that the blur-amount designation selection means, for example the designation selection switch on the operation unit 13, has been operated (S1003). It stores the main subject distance Ls1 and the background distance Lh1 at that moment in the memory 12 (S1004), acquires the focal length from a zoom position detector (not shown) and stores it in the memory 12 (S1005), and detects the aperture F-number at that moment and stores it in the memory 12 (S1006). From the detected aperture F-number, focal length f, and main subject distance Ls1, the CPU 8 calculates the depth-of-field information d1 and stores it in the memory 12 (S1007).

When the background distance Lh changes to Lh2 (see FIG. 7(B)), the distance map creation processing unit 509 recalculates the distance information and updates the distance map, and the main subject distance Ls2 and the background distance Lh2 are calculated again from the updated map (S1008). The CPU 8 compares |Lh1−Ls1| with |Lh2−Ls2| and determines whether they are equal (S1009). If |Lh1−Ls1| = |Lh2−Ls2|, shooting continues with the current aperture F-number maintained (S1010). Otherwise, the CPU 8 calculates the ratio Lh2/Lh1 (S1011) and, from it, the optimum depth of field d2 for the main subject such that the background keeps a constant blur amount, i.e., d2 = (Lh2/Lh1) × d1 (S1012). The CPU 8 then calculates the aperture F-number corresponding to the computed depth of field d2 and drives the aperture 2 accordingly (S1013). With the aperture F-number changed, the exposure control unit 802 controls the electronic shutter of the image sensor 3 and switches the ND filter in the taking lens unit to compensate the exposure (S1014).
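The flow from S1008 through S1014 can be sketched as one control pass (the `FakeCamera` interface and its depth-to-F-number model are hypothetical stand-ins for the hardware blocks of FIG. 1, used only to show the control flow):

```python
class FakeCamera:
    """Hypothetical stand-in for the camera hardware blocks of FIG. 1."""
    def __init__(self, ls2, lh2):
        self._dist = (ls2, lh2)
        self.f_number = 4.0
        self.exposure_compensated = False

    def measure_distances(self):          # S1008: updated distance map
        return self._dist

    def f_number_for_depth(self, depth):  # placeholder depth -> F-number model
        return depth * 10.0

    def set_f_number(self, fno):          # S1013: drive the aperture
        self.f_number = fno

    def compensate_exposure(self):        # S1014: e-shutter / ND filter
        self.exposure_compensated = True


def blur_keeping_step(camera, memory):
    """One pass of the background-blur control loop of FIG. 8 (S1008-S1014).

    `memory` holds the values stored at blur designation (S1004-S1007).
    """
    ls1, lh1, d1 = memory["Ls1"], memory["Lh1"], memory["d1"]
    ls2, lh2 = camera.measure_distances()               # S1008
    if abs(lh1 - ls1) == abs(lh2 - ls2):                # S1009
        return                                          # S1010: keep current F-number
    d2 = (lh2 / lh1) * d1                               # S1011-S1012
    camera.set_f_number(camera.f_number_for_depth(d2))  # S1013
    camera.compensate_exposure()                        # S1014
```

In the apparatus this pass would run repeatedly during shooting, re-reading the distance map each time.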

In the first embodiment, a distance map is created while the subject is being shot, and the main subject distance Ls1, the background distance Lh1, the focal length f, the aperture F value, and the depth of field d1 at the time the blur amount is designated are stored in the memory. After the blur amount has been designated, the main subject distance Ls2 and the background distance Lh2 are calculated again. By comparing these with the stored data, the difference amounts |Lh1-Ls1| and |Lh2-Ls2| between the main subject distance and the background distance are checked. When the two difference amounts differ, the depth of field is changed in accordance with the change in the background distance (the ratio Lh2/Lh1), so that the blur amount of the background image can be held constant.
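Changing the F value also changes image brightness, so the exposure correction in S1014 must cancel that change. A hedged sketch, assuming exposure varies as shutter time over the square of the F number and using an invented function name (the patent itself adjusts the electronic shutter and the ND filter without giving formulas):

```python
# Hedged sketch of the exposure-compensation idea behind S1014.
# Exposure ~ shutter_time / N^2, so holding exposure constant after an
# aperture change N1 -> N2 requires t2 = t1 * (N2 / N1)**2.

def compensated_shutter(t1, N1, N2):
    """Shutter time that keeps exposure constant after aperture change."""
    return t1 * (N2 / N1) ** 2
```

Stopping down from F4 to F8 quadruples the required shutter time; when the electronic shutter alone cannot absorb the change, the ND filter takes up the remainder, as the embodiment describes.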

[Second Embodiment]
Next, a second embodiment of the present invention will be described. In the second embodiment, the blur amount of the background image is controlled to be constant when the start of recording is instructed from the recording standby state. Since the configuration of the imaging apparatus according to the second embodiment is the same as in the first embodiment, only the flow of the control operation will be described, with reference to the flowchart shown in FIG. 9. Steps S1100 to S1102, which differ from FIG. 8, are described below; processes identical to those of the first embodiment are denoted by the step numbers already used, and their detailed description is omitted.
After S1002, an operation member such as a recording switch provided on the operation unit 13 is operated in S1100, and recording starts. In S1101, the CPU 8 calculates the main subject distance Ls1 and the background distance Lh1 immediately after the start of recording from the distance map data, stores them in the memory 12, and proceeds to S1005.
If |Lh1-Ls1| and |Lh2-Ls2| are not equal in S1009, the processing proceeds through S1011 to S1014 and then to S1102. When the photographer operates an operation member such as the recording switch to instruct the imaging apparatus to end recording, the above operation ends at the point the recording operation ends.

According to the second embodiment, the blur amount of the background image can be controlled to be constant even when the imaging apparatus is remotely operated by recording start and end trigger signals.

1 Photographing lens
2 Diaphragm
8 CPU
12 Memory
801 Main subject/background distance calculation unit
802 Exposure control unit
803 Optical control unit

Claims (5)

1. An imaging apparatus comprising a photographing lens and a diaphragm for exposure control, the apparatus comprising:
calculation means for calculating a distance to a main subject and a distance to a background;
storage means for storing the distances calculated by the calculation means, a focal length of the photographing lens, and an aperture value of the diaphragm; and
control means for comparing the difference between the stored distance to the main subject and the stored distance to the background with the difference between the distance to the main subject and the distance to the background during shooting, and, when the two differences differ, controlling the diaphragm in accordance with the amount of change in the distance to the background to change the depth of field.
2. The imaging apparatus according to claim 1, wherein the storage means stores the distance to the main subject and the distance to the background calculated by the calculation means at the time a blur amount of a background image is designated.
3. The imaging apparatus according to claim 1, wherein the storage means stores the distance to the main subject and the distance to the background calculated by the calculation means at the time recording is started.
4. The imaging apparatus according to claim 2 or 3, wherein, when the two differences differ, the control means calculates a depth of field in accordance with the ratio of the distance to the background stored in the storage means to the distance to the background calculated thereafter by the calculation means, and performs exposure control by changing the aperture value to one corresponding to the calculated depth of field.
5. A method of controlling an imaging apparatus comprising a photographing lens and a diaphragm for exposure control, the method comprising:
a calculation step of calculating a distance to a main subject and a distance to a background;
a storage step of storing the distances calculated in the calculation step, a focal length of the photographing lens, and an aperture value of the diaphragm; and
a control step of comparing the difference between the stored distance to the main subject and the stored distance to the background with the difference between the distance to the main subject and the distance to the background during shooting, and, when the two differences differ, controlling the diaphragm in accordance with the amount of change in the distance to the background to change the depth of field.
JP2010121152A 2010-05-27 2010-05-27 Imaging apparatus and control method thereof Expired - Fee Related JP5683135B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2010121152A JP5683135B2 (en) 2010-05-27 2010-05-27 Imaging apparatus and control method thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2010121152A JP5683135B2 (en) 2010-05-27 2010-05-27 Imaging apparatus and control method thereof

Publications (2)

Publication Number Publication Date
JP2011250133A true JP2011250133A (en) 2011-12-08
JP5683135B2 JP5683135B2 (en) 2015-03-11

Family

ID=45414840

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2010121152A Expired - Fee Related JP5683135B2 (en) 2010-05-27 2010-05-27 Imaging apparatus and control method thereof

Country Status (1)

Country Link
JP (1) JP5683135B2 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2013186293A (en) * 2012-03-08 2013-09-19 Seiko Epson Corp Image generation device and image display method
JP2018017876A (en) * 2016-07-27 2018-02-01 キヤノン株式会社 Imaging apparatus and control method of the same, and image processing apparatus and method

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH04349436A (en) * 1991-05-27 1992-12-03 Minolta Camera Co Ltd Camera





Legal Events

Date Code Title Description
A621 Written request for application examination

Free format text: JAPANESE INTERMEDIATE CODE: A621

Effective date: 20130521

A977 Report on retrieval

Free format text: JAPANESE INTERMEDIATE CODE: A971007

Effective date: 20140206

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20140218

A521 Request for written amendment filed

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20140421

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20140812

A521 Request for written amendment filed

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20141010

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20141111

A521 Request for written amendment filed

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20141127

TRDD Decision of grant or rejection written
A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

Effective date: 20141216

A61 First payment of annual fees (during grant procedure)

Free format text: JAPANESE INTERMEDIATE CODE: A61

Effective date: 20150113

R151 Written notification of patent or utility model registration

Ref document number: 5683135

Country of ref document: JP

Free format text: JAPANESE INTERMEDIATE CODE: R151

LAPS Cancellation because of no payment of annual fees