JP2022025741A - Viewpoint position detection system - Google Patents

Viewpoint position detection system

Info

Publication number
JP2022025741A
JP2022025741A
Authority
JP
Japan
Prior art keywords
feature point
gamma
image
detect
detected
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
JP2020128797A
Other languages
Japanese (ja)
Other versions
JP7472706B2 (en)
Inventor
一成 濱田
Kazunari Hamada
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nippon Seiki Co Ltd
Original Assignee
Nippon Seiki Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nippon Seiki Co Ltd filed Critical Nippon Seiki Co Ltd
Priority to JP2020128797A priority Critical patent/JP7472706B2/en
Publication of JP2022025741A publication Critical patent/JP2022025741A/en
Application granted granted Critical
Publication of JP7472706B2 publication Critical patent/JP7472706B2/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Landscapes

  • Instrument Panels (AREA)
  • Image Processing (AREA)
  • Position Input By Displaying (AREA)
  • Facsimile Image Signal Circuits (AREA)
  • Image Analysis (AREA)

Abstract

PROBLEM TO BE SOLVED: To provide a viewpoint detection system capable of detecting a user's viewpoint position even when the gradation difference in a captured image is insufficient.

SOLUTION: A viewpoint detection system according to the present invention comprises: a camera that images a driver D of a vehicle by infrared light projected onto the driver D; and a control unit that detects the positions of feature points of the driver D based on the gradation difference in the camera's captured image, detects the positions of the driver D's left and right irises IL, IR in the captured image based on the detected feature points, and detects the position of the viewpoint IC of the driver D based on the detected positions of the irises IL, IR. When it is difficult to detect the position of a feature point such as the inner canthus E1R, outer canthus E2R, or upper eyelid E3R, and therefore difficult to detect the position of one of the irises IL, IR, the control unit generates a gamma-corrected image Pγ in which the gamma value of the captured image has been adjusted, and detects the position of the feature point based on the gradation difference in the gamma-corrected image Pγ.

SELECTED DRAWING: Figure 5

Description

The present invention relates to a viewpoint position detection system that detects the position of the viewpoint of a user such as a vehicle driver, and that is used in a head-up display device or the like that displays a virtual image on the front windshield or combiner of a vehicle.

A head-up display device generates a virtual image from display light reflected by a reflective, light-transmitting member such as a vehicle's front windshield or combiner, and displays it superimposed on the real scene (the scenery ahead of the vehicle) seen through that member. By providing the information the user wants as a virtual image while keeping movement of the line of sight of a user such as the vehicle driver to a minimum, it contributes to safe and comfortable vehicle operation.

Some head-up display devices also irradiate the user with infrared light to capture an image of the user and detect the position of the user's eyes or the like (the viewpoint position) from the captured image, which serves to assess the user's state, such as inattention or drowsiness.

For example, the head-up display device described in Patent Document 1 forms a display image by reflecting visible light emitted from display means toward the user with a combiner member. It comprises: infrared irradiation means for irradiating the user with infrared light; a mirror member that reflects the visible light emitted from the display means toward the combiner member and transmits the infrared light reflected by the user and the combiner member; a plurality of imaging means that sense the infrared light transmitted through the mirror member and image the user from mutually different directions; and image processing means that calculates the position of the user's eyes based on the images captured by the imaging means. This makes it possible to calculate the position of the user's eyes with high accuracy.

Patent Document 1: Japanese Unexamined Patent Publication No. 2008-126984

In the viewpoint detection system used in such a head-up display device, the positions of the user's left and right eyes, and from them the viewpoint position, are detected by locating the user's feature points based on the gradation difference (contrast) in the captured image (captured image data); the eyes themselves may be used as feature points, but the facial contour, ears, nose tip, eyebrows, chin, and so on may also serve. However, when strong external light such as sunlight or street lighting enters the imaging means, or as a result of the imaging means adjusting its exposure to that strong light, part of the user's face in the captured image may become too bright (overexposed) or too dark (underexposed). In that case, a gradation difference sufficient for detecting the feature points cannot be obtained, and the viewpoint can no longer be detected.

The present invention has been made in view of the above circumstances, and its object is to provide a viewpoint detection system capable of detecting a user's viewpoint position even when the gradation difference in a captured image is insufficient.

To solve the above problem, the viewpoint detection system according to the present invention comprises: imaging means for imaging a vehicle user by infrared light projected onto the user; feature point detection means for detecting the positions of the user's feature points based on the gradation difference in the image captured by the imaging means; and viewpoint position detection means for detecting the positions of the user's left and right eyes in the captured image based on the feature points detected by the feature point detection means, and detecting the user's viewpoint position based on the detected positions of the left and right eyes. When detection of a feature point position is difficult and detection of the position of one of the left and right eyes by the viewpoint position detection means is therefore difficult, the feature point detection means generates a gamma-corrected image in which the gamma value of the captured image has been adjusted, and detects the feature point position based on the gradation difference in the gamma-corrected image.

When detection of a feature point position is difficult and detection of the position of one of the left and right eyes by the viewpoint position detection means is difficult, the feature point detection means may generate the gamma-corrected image by adjusting the gamma value of the region of the captured image that contains the feature point whose position is difficult to detect.

For example, when the gamma value of the captured image is α, the feature point detection means may generate the gamma-corrected image by adjusting the gamma value to be larger than α in the range where the input of the captured image is below a predetermined threshold, and may further generate the gamma-corrected image by adjusting the gamma value to be smaller than α in the range where the input is at or above the threshold.

Further, when detection of a feature point position is difficult and detection of the positions of both the left and right eyes by the viewpoint position detection means is difficult, the feature point detection means need not generate the gamma-corrected image.

According to the viewpoint detection system of the present invention, the user's viewpoint position can be detected even when the gradation difference in the captured image is insufficient.

Fig. 1 is an explanatory diagram showing a head-up display device to which the viewpoint detection system according to an embodiment of the invention is applied.
Fig. 2 is an explanatory diagram showing a captured image of the driver taken by the head-up display device of Fig. 1, together with feature points.
Fig. 3 is an explanatory diagram showing external light entering the head-up display device of Fig. 1.
Fig. 4 is an explanatory diagram showing a captured image in which feature points are difficult to detect due to external light, together with the feature points.
Fig. 5 is an explanatory diagram showing a gamma-corrected image obtained by adjusting the gamma value of the captured image of Fig. 4, together with the feature points.
Fig. 6 is an explanatory diagram showing an example of a method of generating the gamma-corrected image.
Fig. 7 is an explanatory diagram showing another example of a method of generating the gamma-corrected image.
Fig. 8 is an explanatory diagram showing still another example of a method of generating the gamma-corrected image.

An embodiment of the present invention will be described with reference to the drawings.

As shown in Fig. 1, a head-up display device (HUD) 1 to which the viewpoint detection system according to the present embodiment is applied is provided below the front windshield 2 of a vehicle and projects display light L1, which is visible light, onto a part of the front windshield 2. The display light L1 is reflected by the front windshield 2 to generate a virtual image V, which the driver D of the vehicle perceives superimposed on the real scene seen through the front windshield 2.

The HUD 1 also has a DMS (driver monitoring system) function that monitors the state of the driver D by means of the viewpoint detection system: it projects infrared light L2 onto the driver D, images the driver D, and can detect the position of the driver D's viewpoint based on the captured image.

Specifically, the HUD 1 is covered by a housing 3 molded from black ABS resin or the like to prevent the intrusion of external light, partitioning it from the outside; the housing 3 has an opening (light-transmitting portion) 4 covered with a transparent resin such as polycarbonate (not shown). Held and housed inside the housing 3 are a display unit 5, a folding mirror 6, a concave mirror 7, an infrared light irradiation unit 8, a camera 9, and a control unit 10.

The display unit 5 is provided with a light source composed of chip-type light-emitting diodes and a liquid crystal panel; the liquid crystal panel two-dimensionally modulates the light emitted from the light source to project image light (display light L1), which is visible light. The folding mirror 6 is made by vapor-depositing a metal such as aluminum onto a resin such as polycarbonate molded to have a flat portion, and simply reflects light. The concave mirror 7 is made by vapor-depositing a dielectric multilayer film or the like onto a resin such as polycarbonate molded to have a concave portion, and has the property of magnifying and reflecting visible light while transmitting infrared light.

The infrared light irradiation unit 8 is provided on the back side of the concave mirror 7 (the side opposite the opening 4 and the folding mirror 6 with respect to the concave mirror 7) and directs infrared light (near-infrared) emitted by a light-emitting-diode light source toward the concave mirror 7. The camera 9 includes an image sensor sensitive to infrared light in the wavelength band emitted by the infrared light irradiation unit 8, and a lens that transmits infrared light and focuses it onto the sensor, and captures near-infrared images. To transmit infrared light selectively, a filter that absorbs or reflects other wavelengths such as visible light may be used.

The control unit 10 is composed of a microprocessor, memory, the various electronic components needed to operate them, a board, and a case, and controls the display unit 5 so that the HUD 1's image is displayed appropriately based on vehicle information and input information from the driver D.

As shown in Fig. 2, the control unit 10 also detects the position of the iris (the dark part of the eye) IL of the driver D's left eye EL and the position of the iris IR of the right eye ER based on the gradation difference (contrast) of the image P captured by the camera 9, and detects the position of the midpoint IC between the left and right irises IL, IR as the viewpoint position. The detected viewpoint position can be used to detect the state of the driver D (inattention, drowsiness, etc.) and is output externally as viewpoint information.
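As a rough illustration of this midpoint computation, a minimal sketch follows; the helper name and the pixel coordinates are hypothetical, not taken from the patent.

```python
# Minimal sketch (hypothetical names, assuming (x, y) pixel coordinates):
# the viewpoint position IC is the midpoint of the detected iris centers IL, IR.
def viewpoint_from_irises(il, ir):
    """il, ir: (x, y) coordinates of the left/right iris centers in image P."""
    return ((il[0] + ir[0]) / 2.0, (il[1] + ir[1]) / 2.0)

# Example: iris centers at (310, 242) and (390, 240) give IC = (350.0, 241.0).
ic = viewpoint_from_irises((310, 242), (390, 240))
```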

That is, based on the gradation difference of the image P captured by the camera 9, the control unit 10 detects the nose tip N, philtrum R, chin F, left mouth corner ML, and left inner eyebrow end BL as feature points for recognizing the approximate position of the driver D's left eye EL, detects the inner canthus E1L, outer canthus E2L, and upper eyelid E3L as feature points relating to the left eye EL, and detects the position of the iris IL.

Likewise, based on the gradation difference of the image P captured by the camera 9, the control unit 10 detects the nose tip N, philtrum R, chin F, right mouth corner MR, and right inner eyebrow end BR as feature points for recognizing the approximate position of the driver D's right eye ER, detects the inner canthus E1R, outer canthus E2R, and upper eyelid E3R as feature points relating to the right eye ER, and detects the position of the iris IR.

In the HUD 1, the display light L1 from the display unit 5 is reflected by the folding mirror 6, then magnified and reflected by the concave mirror 7, passes through the opening 4, and is projected onto the front windshield 2. The display light L1 projected onto the front windshield 2 is magnified and reflected toward the driver D, generating a virtual image V that is displayed to the driver D superimposed on the real scene seen through the front windshield 2.

Meanwhile, the infrared light L2 from the infrared light irradiation unit 8 passes through the concave mirror 7 and the opening 4, is projected onto the front windshield 2, and is reflected by the front windshield 2 toward the driver D, illuminating the driver D. Part of the infrared light L2 reflected by the driver D then follows the reverse path, passes through the concave mirror 7, and enters the camera 9, which images the driver D; the captured image P is input to the control unit 10. This imaging is performed regularly or irregularly while the virtual image V is displayed; here, video imaging at a regular frame rate is performed while the virtual image V is displayed.

When the captured image P of the driver D is input, the control unit 10 detects the positions of the driver D's left and right irises IL, IR as described above, and detects the position of the midpoint IC of the irises IL, IR as the viewpoint position. Normally a captured image P like that of Fig. 2 would be obtained; however, as shown in Fig. 3, when the infrared component of strong external light L3 such as sunlight passes through the front windshield 2, the opening 4, and the concave mirror 7 and enters the camera 9, a blown-out (overexposed) region S1 as shown in Fig. 4 appears in the captured image P, and the gradation difference in the surrounding region S2 decreases. If region S2 contains feature points needed to detect the left eye EL or right eye ER, detecting the positions of those feature points becomes difficult, and detecting the iris IL or IR (the iris IR in Fig. 4) becomes difficult.

Therefore, when detection of a feature point position is difficult and detection of the position of one of the irises IL, IR is difficult, the control unit 10 detects only the position of the other iris based on the gradation difference in the captured image P; for the one iris, it generates a gamma-corrected image in which the gamma value of the captured image P has been adjusted, and detects the position based on the gradation difference in the gamma-corrected image.
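The fallback just described can be summarized in a short control-flow sketch; detect_iris and gamma_correct are assumed placeholder functions (the patent names no such APIs), and viewpoint_from_irises is the midpoint sketch above.

```python
# Sketch of the fallback logic under assumed helper names. detect_iris returns
# an (x, y) iris center, or None when the gradation difference is insufficient.
def detect_viewpoint(p):
    left = detect_iris(p, eye="left")
    right = detect_iris(p, eye="right")
    if left is not None and right is not None:
        return viewpoint_from_irises(left, right)  # normal case: both found in P
    if left is None and right is None:
        return None                                # both failed: no gamma correction
    p_gamma = gamma_correct(p)                     # one eye failed: adjust gamma, retry
    if left is None:
        left = detect_iris(p_gamma, eye="left")
    else:
        right = detect_iris(p_gamma, eye="right")
    if left is None or right is None:
        return None
    return viewpoint_from_irises(left, right)
```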

For example, in the captured image P of Fig. 4, for the left eye EL the nose tip N, philtrum R, chin F, mouth corner ML, inner eyebrow end BL, inner canthus E1L, outer canthus E2L, and upper eyelid E3L can all be detected from the gradation difference, so the position of the iris IL can be detected; for the right eye ER, however, the inner eyebrow end BR, inner canthus E1R, outer canthus E2R, and upper eyelid E3R cannot be detected from the gradation difference, so the position of the iris IR cannot be detected. In this case, as shown in Fig. 5, the control unit 10 determines a region S3 of the captured image P containing the feature points whose positions are difficult to detect (the inner eyebrow end BR, inner canthus E1R, outer canthus E2R, and upper eyelid E3R). Here, region S3 is determined by estimating the approximate position of the right eye ER from the known feature points (any of the nose tip N, philtrum R, chin F, mouth corner ML, inner eyebrow end BL, inner canthus E1L, outer canthus E2L, and upper eyelid E3L) and taking the estimated periphery of the right eye ER so as to include the feature points that are difficult to detect. The control unit 10 then adjusts the gamma value of region S3 to generate a gamma-corrected image Pγ. Since the inner eyebrow end BR, inner canthus E1R, outer canthus E2R, and upper eyelid E3R that could not be detected in the captured image P can be detected in the gamma-corrected image Pγ, the control unit 10 can detect the position of the iris IR and, in turn, the viewpoint position.

The method of generating the gamma-corrected image Pγ is not particularly limited as long as a sufficient gradation difference is obtained; instead of adjusting the gamma value of only region S3 of the captured image P, the gamma value of the entire captured image P may be adjusted.
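The choice between region-limited and whole-image adjustment can be sketched as below, assuming an 8-bit grayscale NumPy array and a precomputed 256-entry lookup table; the function and parameter names are illustrative only.

```python
import numpy as np

def apply_gamma(image, lut, region=None):
    """Apply a 256-entry gamma lookup table to the whole 8-bit image, or,
    when region=(y0, y1, x0, x1) is given, only to that area (for example,
    the region S3 around the eye whose feature points could not be detected)."""
    out = image.copy()
    if region is None:
        return lut[out]  # whole-image correction via fancy indexing
    y0, y1, x0, x1 = region
    out[y0:y1, x0:x1] = lut[out[y0:y1, x0:x1]]
    return out
```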

For example, let the gamma value of the captured image P be γ, meaning the gamma value applied when the image sensor (CMOS, CCD, etc.) of the camera 9 generates a signal linear in the input light quantity and that signal (the input) is converted into the data (the output) recorded as the captured image P. When γ = α, as shown in Fig. 6 (where α = 1), a gamma-corrected image Pγ can be generated by adjusting γ to be larger than α over the range of inputs from 0 up to a predetermined threshold. In the gamma curve of Fig. 6, the midpoint 128 of the 0 to 255 input range is taken as the threshold; below the threshold the curve is convex downward with γ > α (= 1), and at or above the threshold it is linear with γ = α (= 1). This yields a gamma-corrected image Pγ in which the gradation difference is expanded in regions where the brightness of the captured image P was too low to obtain a sufficient gradation difference.
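A minimal sketch of a lookup table with this Fig. 6-style shape follows; the 8-bit range, the threshold of 128, and the sample exponent of 2.2 below the threshold are assumptions (the patent fixes only the shape of the curve).

```python
import numpy as np

def gamma_lut_fig6(threshold=128, gamma=2.2):
    """Convex-downward segment (gamma > alpha = 1) below the threshold,
    linear (gamma = alpha = 1) at and above it, as in the Fig. 6 curve."""
    x = np.arange(256, dtype=np.float64)
    lut = x.copy()
    below = x < threshold
    # Normalize the sub-threshold inputs to [0, 1), raise to gamma > 1, rescale;
    # the curve stays continuous at the threshold and untouched above it.
    lut[below] = (x[below] / threshold) ** gamma * threshold
    return lut.astype(np.uint8)
```

A region-limited correction would then be, for instance, apply_gamma(p, gamma_lut_fig6(), region=s3) using the apply_gamma sketch above.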

Fig. 7 likewise shows an example in which, with the gamma value γ of the captured image P at γ = α (= 1), the gamma value is adjusted to be larger than α in the range of inputs below a threshold to generate the gamma-corrected image Pγ. The gamma curve of Fig. 7, however, uses a threshold of 176 and has an inflection point near an input of 80: below the inflection point the curve is convex downward with γ > α (= 1), from the inflection point up to the threshold it is convex upward with γ > α (= 1), and at or above the threshold it is linear with γ = α (= 1). This yields a gamma-corrected image Pγ that avoids the shrinkage of gradation difference that would occur under the Fig. 6 curve where areas that already had a sufficient gradation difference have their brightness lowered.

Fig. 8 shows an example in which, with the gamma value γ of the captured image P at γ = α (= 1), the gamma value is adjusted to be smaller than α in the range of inputs at or above the threshold to generate the gamma-corrected image Pγ. Like the curve of Fig. 6, the gamma curve of Fig. 8 takes 128 as the threshold and is convex downward with γ > α (= 1) below it, but at or above the threshold it is convex upward with γ < α (= 1) (strictly, in Fig. 8 the threshold is an inflection point where α = 1, and γ < α holds in the range beyond the threshold). This yields a gamma-corrected image Pγ whose brightness in the range at or above the threshold is higher than in the captured image P, reinforcing the effect of the Fig. 6 curve.
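Under the same assumptions as the earlier sketch, the Fig. 8-style curve adds a convex-upward segment above the threshold (effective exponent 1/gamma < 1, i.e. gamma < alpha), which might be sketched as:

```python
import numpy as np

def gamma_lut_fig8(threshold=128, gamma=2.2):
    """Fig. 8-style curve: convex-downward (gamma > 1) below the threshold,
    convex-upward (exponent 1/gamma < 1) at and above it, raising the
    brightness of the over-threshold range relative to the captured image P."""
    x = np.arange(256, dtype=np.float64)
    lut = x.copy()
    below = x < threshold
    lut[below] = (x[below] / threshold) ** gamma * threshold
    span = 255.0 - threshold
    lut[~below] = threshold + ((x[~below] - threshold) / span) ** (1.0 / gamma) * span
    return lut.astype(np.uint8)
```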

When detection of feature point positions is difficult and detection of the positions of both the left and right irises IL, IR is difficult, the control unit 10 does not generate the gamma-corrected image Pγ; it starts generating the gamma-corrected image Pγ and detecting feature point positions based on it (and thus the positions of the irises IL, IR and the viewpoint position) only after a transition to a state in which the position of one of the irises IL, IR can be detected.

That is, consider the driver D turning back to face forward or straightening up from a state of facing sideways or leaning with the face outside the imaging range, such that the left and right irises IL, IR could both be detected were it not for strong light, but one of the irises IL, IR cannot be detected because of the strong light. Only upon the transition into that state does the control unit 10 detect the position of the other iris based on the captured image P and the position of the one iris based on the gamma-corrected image Pγ, thereby detecting the viewpoint position. Thereafter, once the influence of the strong light disappears and the one iris also becomes detectable, the control unit 10 stops generating the gamma-corrected image Pγ and detects the positions of both irises IL, IR based on the captured image P.

The viewpoint detection system according to the present embodiment comprises: the camera 9, which images the driver D of the vehicle 2 by infrared light L2 projected onto the driver D; and the control unit 10, which detects the positions of the driver D's feature points based on the gradation difference in the camera 9's captured image P, detects the positions of the driver D's left and right irises IL, IR in the captured image P based on the detected feature points, and detects the driver D's viewpoint position based on the detected positions of the irises IL, IR. When detection of a feature point position is difficult and detection of the position of one of the left and right irises IL, IR is difficult, the control unit 10 generates a gamma-corrected image Pγ in which the gamma value of the captured image P has been adjusted, and detects the position of the previously undetectable feature point based on the gradation difference in the gamma-corrected image Pγ. Thus, even when the gradation difference in the captured image P is insufficient, the viewpoint position of the driver D can be detected by generating a gamma-corrected image Pγ with an adjusted gamma value to secure the gradation difference.

In this embodiment, when detection of a feature point position is difficult and detection of the position of one of the left and right irises IL, IR is difficult, the control unit 10 generates the gamma-corrected image Pγ by adjusting the gamma value of the region S3 containing the feature points whose positions are difficult to detect (in Fig. 4, the inner eyebrow end BR, inner canthus E1R, outer canthus E2R, and upper eyelid E3R). This limits the image-processing area compared with adjusting the gamma value of the entire captured image P, shortening the processing time.

Further, when detection of feature point positions is difficult and detection of the positions of both the left and right irises IL, IR is difficult, the control unit 10 does not generate the gamma-corrected image Pγ, and begins generating the gamma-corrected image Pγ and detecting feature point positions based on it only after a transition to a state in which the position of one of the irises IL, IR can be detected. By not generating the gamma-corrected image Pγ under conditions where adjusting the gamma value is unnecessary or pointless, delays in detecting the viewpoint position and in outputting the viewpoint information externally that would accompany unnecessary or pointless generation of the gamma-corrected image Pγ can be suppressed.

Although an embodiment for carrying out the present invention has been illustrated above, the embodiments of the present invention are not limited to those described, and may be modified as appropriate without departing from the spirit of the invention.

For example, in the above embodiment the position of the midpoint IC between the left and right irises IL, IR is used as the viewpoint position, but the positions of the irises IL, IR themselves or other positions may be used as the viewpoint position.

Further, parts other than those shown in Fig. 2 may be detected as feature points, and the positions of parts other than the iris (for example, the pupil) may be detected as the eye positions.

Furthermore, the method of setting thresholds is also arbitrary, and a plurality of thresholds may be set. As one example, a first threshold and a second threshold larger than the first may be set, and the gamma-corrected image may be generated by adjusting the gamma value so that γ > α in the range below the first threshold (including at least regions whose brightness is too low to obtain the necessary and sufficient gradation difference) and γ < α in the range above the second threshold (including at least regions whose brightness is too high to obtain the necessary and sufficient gradation difference). (The gamma curves of Figs. 6 and 7 correspond to the case where only the first threshold is set, and the gamma curve of Fig. 8 corresponds to the case where the first and second thresholds coincide.) A sketch of this variant follows.
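The two-threshold variant might look like the following sketch, with the threshold values and the exponent chosen arbitrarily for illustration.

```python
import numpy as np

def gamma_lut_two_thresholds(t1=96, t2=176, gamma=2.2):
    """Variant with distinct thresholds t1 < t2 (values assumed): gamma > alpha
    below t1 (brightness too low), linear between t1 and t2, and an effective
    exponent 1/gamma < 1 (gamma < alpha) above t2 (brightness too high)."""
    x = np.arange(256, dtype=np.float64)
    lut = x.copy()
    low = x < t1
    lut[low] = (x[low] / t1) ** gamma * t1
    high = x > t2
    span = 255.0 - t2
    lut[high] = t2 + ((x[high] - t2) / span) ** (1.0 / gamma) * span
    return lut.astype(np.uint8)
```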

2 Vehicle
9 Camera (imaging means)
10 Control unit (feature point detection means, viewpoint position detection means)
BL Inner eyebrow end (feature point)
BR Inner eyebrow end (feature point)
D Driver (user)
E1L Inner canthus (feature point)
E1R Inner canthus (feature point)
E2L Outer canthus (feature point)
E2R Outer canthus (feature point)
E3L Upper eyelid (feature point)
E3R Upper eyelid (feature point)
EL Left eye
ER Right eye
F Chin (feature point)
IL Iris
IR Iris
ML Mouth corner (feature point)
MR Mouth corner (feature point)
N Nose tip (feature point)
P Captured image
Pγ Gamma-corrected image
R Philtrum (feature point)

Claims (5)

1. A viewpoint position detection system comprising:
imaging means for imaging a user of a vehicle by infrared light projected onto the user;
feature point detection means for detecting positions of feature points of the user based on a gradation difference in a captured image of the imaging means; and
viewpoint position detection means for detecting positions of left and right eyes of the user in the captured image based on the feature points detected by the feature point detection means, and detecting a viewpoint position of the user based on the detected positions of the left and right eyes,
wherein, when detection of a position of a feature point is difficult and detection of the position of one of the left and right eyes by the viewpoint position detection means is difficult, the feature point detection means generates a gamma-corrected image in which a gamma value of the captured image has been adjusted, and detects the position of the feature point based on a gradation difference in the gamma-corrected image.

2. The viewpoint position detection system according to claim 1, wherein, when detection of the position of a feature point is difficult and detection of the position of one of the left and right eyes by the viewpoint position detection means is difficult, the feature point detection means generates the gamma-corrected image by adjusting the gamma value of a region of the captured image containing the feature point whose position is difficult to detect.

3. The viewpoint position detection system according to claim 1 or claim 2, wherein, when the gamma value of the captured image is α, the feature point detection means generates the gamma-corrected image by adjusting the gamma value to be larger than α in a range where an input of the captured image is below a predetermined threshold.

4. The viewpoint position detection system according to claim 3, wherein, when the gamma value of the captured image is α, the feature point detection means generates the gamma-corrected image by adjusting the gamma value to be smaller than α in a range where the input of the captured image is at or above the threshold.

5. The viewpoint position detection system according to any one of claims 1 to 4, wherein the feature point detection means does not generate the gamma-corrected image when detection of the position of a feature point is difficult and detection of the positions of both the left and right eyes by the viewpoint position detection means is difficult.
JP2020128797A 2020-07-30 2020-07-30 Viewpoint detection system Active JP7472706B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2020128797A JP7472706B2 (en) 2020-07-30 2020-07-30 Viewpoint detection system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2020128797A JP7472706B2 (en) 2020-07-30 2020-07-30 Viewpoint detection system

Publications (2)

Publication Number Publication Date
JP2022025741A (en) 2022-02-10
JP7472706B2 JP7472706B2 (en) 2024-04-23

Family

ID=80264745

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2020128797A Active JP7472706B2 (en) 2020-07-30 2020-07-30 Viewpoint detection system

Country Status (1)

Country Link
JP (1) JP7472706B2 (en)

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007025935A (en) 2005-07-14 2007-02-01 Mitsubishi Electric Corp Image-detecting device
JP5396298B2 (en) 2010-02-04 2014-01-22 本田技研工業株式会社 Face orientation detection device
JP5766564B2 (en) 2011-09-15 2015-08-19 株式会社東芝 Face authentication apparatus and face authentication method
JP6720881B2 (en) 2017-01-19 2020-07-08 カシオ計算機株式会社 Image processing apparatus and image processing method

Also Published As

Publication number Publication date
JP7472706B2 (en) 2024-04-23

Similar Documents

Publication Publication Date Title
JP5212927B2 (en) Face shooting system
JP5151207B2 (en) Display device
JP4635572B2 (en) Video display device
JP2001183735A (en) Method and device for image pickup
KR100607433B1 (en) Night vision system and control method thereof
JP2000305481A (en) Projection type display device and information storage media
JP2007004448A (en) Line-of-sight detecting apparatus
KR101446779B1 (en) Photographing control method and apparatus for prohibiting flash
KR20170011362A (en) Imaging apparatus and method for the same
WO2021225030A1 (en) Electronic apparatus and imaging device
JP2006248365A (en) Back monitoring mirror of movement body, driver photographing device, driver monitoring device and safety driving support device
WO2021210235A1 (en) Electronic device
JP2009240551A (en) Sight line detector
JP7472706B2 (en) Viewpoint detection system
US20220329740A1 (en) Electronic apparatus, method for controlling electronic apparatus, and non-transitory computer readable storage medium
CN208842331U (en) Vehicle imaging system
JP2004215062A (en) Imaging device
JP2010056883A (en) Optical device and photographing device
JP2004186721A (en) Camera
JP2017162233A (en) Visual line detection device and visual line detection method
KR101279436B1 (en) Photographing apparatus, and photographing method
JPH0884280A (en) Head mount type video camera
JP7446898B2 (en) Electronics
US11816862B2 (en) Vehicle display device
JP2021119065A (en) Head-up display device

Legal Events

Date Code Title Description
A621 Written request for application examination

Free format text: JAPANESE INTERMEDIATE CODE: A621

Effective date: 20230519

A977 Report on retrieval

Free format text: JAPANESE INTERMEDIATE CODE: A971007

Effective date: 20240301

TRDD Decision of grant or rejection written
A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

Effective date: 20240312

A61 First payment of annual fees (during grant procedure)

Free format text: JAPANESE INTERMEDIATE CODE: A61

Effective date: 20240325

R150 Certificate of patent or registration of utility model

Ref document number: 7472706

Country of ref document: JP

Free format text: JAPANESE INTERMEDIATE CODE: R150