JP2010049491A - Driving state monitoring apparatus - Google Patents

Driving state monitoring apparatus

Info

Publication number
JP2010049491A
Authority
JP
Japan
Prior art keywords
image
luminance value
face image
region
correction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
JP2008213181A
Other languages
Japanese (ja)
Inventor
Takehiko Tanaka (勇彦 田中)
Tomonori Akiyama (知範 秋山)
Takuhiro Omi (拓寛 大見)
Current Assignee
Denso Corp
Toyota Motor Corp
Original Assignee
Denso Corp
Toyota Motor Corp
Priority date
Filing date
Publication date
Application filed by Denso Corp, Toyota Motor Corp filed Critical Denso Corp
Priority to JP2008213181A
Publication of JP2010049491A

Landscapes

  • Image Processing (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

PROBLEM TO BE SOLVED: To provide a driving state monitoring apparatus that can extract feature points from a driver's face image even when brightness changes greatly with changes in the traveling environment of the vehicle.

SOLUTION: The driving state monitoring apparatus 1 includes: an image acquisition unit 11 that acquires the driver's face image captured by an imager 10; an image division unit 12 that divides the acquired face image into a plurality of regions; and an image correction unit 13 that corrects, for each region, the luminance values of the pixels in that region so that the region's average luminance approaches a prescribed luminance value. Because the image correction unit 13 corrects the luminance values region by region, the correction is made appropriately for the brightness of each region, and the pixel luminance values of every region can be brought within a fixed range in which image analysis is possible.

COPYRIGHT: (C) 2010, JPO & INPIT

Description

The present invention relates to a driving state monitoring apparatus that acquires a driver's face image with an imager and monitors the driver's driving state based on the acquired face image.

In recent years, monitoring devices have been developed that acquire a face image of the driver of a vehicle and analyze it to monitor the driving state. Such a device extracts feature points such as the eyes, nose, and mouth from the acquired face image and judges the driving state from the condition of these feature points, the orientation of the face, and so on. For example, the face orientation discrimination device described in Patent Document 1 detects the position of the face and the center position of the face in the acquired face image to determine the face orientation, and also computes a reliability value serving as an index of the accuracy of the detection result.
JP 2007-72628 A

Feature points are extracted from the acquired face image with image analysis techniques such as edge detection. The feature points must therefore stand out in the face image clearly enough to be recognized by the image analysis algorithm the apparatus employs. However, the state of the acquired face image varies with changes in the vehicle's traveling environment. In particular, when sunlight falls directly on the driver's face, the lit portions become very bright while the shadowed portions become, relatively, very dark. The imager's control program adjusts exposure and shutter speed according to the brightness, but once the controllable range is exceeded, so-called blown-out highlights or crushed shadows can appear in parts of the face image. These phenomena occur when the luminance values of the face image are distributed beyond the fixed range in which image analysis is possible; any feature point lying in such a region cannot be extracted by image analysis.

Accordingly, an object of the present invention is to provide a driving state monitoring apparatus that can extract feature points from the driver's face image even when brightness changes greatly with changes in the vehicle's traveling environment.

The present invention is a driving state monitoring apparatus that acquires a driver's face image with an imager, extracts feature points from the acquired face image, and monitors the driver's driving state based on the state of the extracted feature points, the apparatus comprising: image division means for dividing the face image into a plurality of regions; and image correction means for correcting the luminance values of each region so that the average luminance of each region of the face image approaches a predetermined luminance value.

In the driving state monitoring apparatus of the present invention, the driver's face image acquired by the imager is divided into a plurality of regions and the luminance values are corrected region by region, so the correction can be made appropriately for the brightness of each region. Because the correction drives the average luminance value of each region toward a predetermined luminance value, the luminance values of each region can be brought within the fixed range in which image analysis is possible. Feature points can therefore be extracted from the acquired face image even when its luminance values are distributed beyond that range.

The driving state monitoring apparatus according to the present invention further comprises image adjustment means for adjusting the luminance values of the boundary portions between the regions of the face image corrected by the image correction means, so that the luminance value changes continuously across each boundary from one region toward the other.

Because the apparatus of the present invention corrects luminance values region by region, the luminance values may differ greatly at the boundaries between regions. If feature points were detected on such an image as-is by image processing such as edge detection, a boundary between regions could be judged to be a feature point. In the present invention, the luminance values at the boundary portions are adjusted so that luminance changes continuously across each boundary from one region toward the other, so the face image can be analyzed without a region boundary being mistaken for a feature point.

In the driving state monitoring apparatus of the present invention, the image correction means corrects the luminance values so that the distribution of luminance values in each region of the face image is compressed about the predetermined luminance value.

In the driving state monitoring apparatus of the present invention, the luminance values of each region are corrected so that the region's luminance distribution is compressed about the predetermined luminance value, which brings each region's average luminance close to that value. The luminance values of each region can therefore be reliably brought within the fixed range in which image analysis is possible.

According to the present invention, feature points can be extracted from the driver's face image even when brightness changes greatly with changes in the vehicle's traveling environment and the luminance values of the face image acquired by the imager are distributed beyond the fixed range in which image analysis is possible. As a result, the driving state can be monitored more appropriately.

An embodiment of the driving state monitoring apparatus according to the present invention will now be described in detail with reference to the drawings. Throughout the drawings, identical or corresponding parts are given the same reference numerals.

FIG. 1 is a schematic configuration diagram showing an embodiment of the driving state monitoring apparatus according to the present invention. The driving state monitoring apparatus 1 comprises an imager 10 and an ECU (Electronic Control Unit) 20. The ECU 20 comprises an image acquisition unit 11, an image division unit 12, an image correction unit 13, an image adjustment unit 14, and a driving state determination unit 15, and is built from a CPU, ROM, RAM, input/output interfaces, and the like.

The driving state monitoring apparatus 1 acquires a face image of the vehicle's driver with the imager 10, extracts feature points from the acquired face image, and monitors the driver's driving state based on the state of the extracted feature points. The feature points of the face image are facial parts such as the eyes, nose, and mouth, and are extracted with image processing techniques such as edge detection. The apparatus then monitors the driving state from the state of these feature points, for example eye blinks or the face orientation determined from the positions of the nose and mouth.

The imager 10 captures the driver's face image and sends the captured face image data to the image acquisition unit 11. The imager 10 is, for example, an infrared camera.

The image acquisition unit 11 receives the face image data sent from the imager 10. FIG. 2 shows a captured face image: light enters from the lower left of the face, so the luminance of the lower-left region of the face is high, while the luminance of the upper-right region, which lies in the shadow of the incident light, is low. The image acquisition unit 11 sends the acquired face image data to the image division unit 12.

The image division unit 12 receives the face image data from the image acquisition unit 11 and divides the acquired face image into a plurality of regions. FIGS. 3 and 4 show examples of this division. In FIG. 3 the original face image is halved vertically and horizontally into four equal regions A to D; the image division unit 12 can thus divide the face image according to a preset division scheme. In FIG. 4 the image is instead divided into four regions according to its luminance distribution: contour lines drawn at a predetermined luminance interval partition the image into regions A to D. The image division unit 12 sends the divided face image data to the image correction unit 13.
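The fixed 2×2 division of FIG. 3 can be sketched as follows. This is an illustrative example only: the function name and the list-of-rows representation of a grayscale image are our own and do not come from the patent.

```python
def divide_into_quadrants(image):
    """Split a grayscale image (a list of rows of luminance values)
    into four equal regions: A (top-left), B (top-right),
    C (bottom-left), D (bottom-right), as in FIG. 3."""
    h, w = len(image), len(image[0])
    mh, mw = h // 2, w // 2
    return {
        "A": [row[:mw] for row in image[:mh]],
        "B": [row[mw:] for row in image[:mh]],
        "C": [row[:mw] for row in image[mh:]],
        "D": [row[mw:] for row in image[mh:]],
    }

# A 4x4 test image: each quadrant becomes a 2x2 block.
img = [[10, 10, 200, 200],
       [10, 10, 200, 200],
       [60, 60, 120, 120],
       [60, 60, 120, 120]]
regions = divide_into_quadrants(img)
```

The contour-based division of FIG. 4 would instead group pixels by luminance band, but the downstream per-region correction is the same either way.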

The image correction unit 13 receives the divided face image data from the image division unit 12 and corrects, region by region, the luminance values of the pixels in each region so that the region's average luminance approaches a predetermined luminance value. FIG. 5 shows the distribution of luminance values in one region of the face image before and after correction.

FIG. 5(a) shows the distribution of luminance values before correction; the average luminance value of the pixels in this region is L_A. The image correction unit 13 corrects the pixel luminance values so that, for example, the distribution of luminance values of the pixels in the region is compressed at a predetermined ratio about a predetermined luminance value L_M.

FIG. 5(b) shows the distribution of luminance values after correction. Here the correction compresses the distribution of pixel luminance values by the ratio (256 − (C_L + C_H))/256, where C_L and C_H are the correction widths on the low-luminance and high-luminance sides, respectively. In the example of FIG. 5(b), C_L and C_H are both 32. These correction widths can be set, for example, according to the difference between the region's average pixel luminance L_A and the predetermined luminance value L_M; specifically, C_L and C_H can be obtained by multiplying that difference by a predetermined coefficient, which can be chosen to suit the image analysis algorithm the driving state monitoring apparatus 1 employs. As FIG. 5(b) shows, the average luminance value of the corrected pixels in this region is L_B, which is closer to the predetermined luminance value L_M than before correction. The predetermined luminance value L_M is preferably the center value of the luminance gradation range; for example, when the gradation range is 0 to 255, L_M can be set to 127.
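As a minimal sketch of the compression just described, the following hypothetical code (the function name and defaults are ours; the patent does not specify an implementation) pulls a region's luminance values toward the target L_M at the ratio (256 − (C_L + C_H))/256:

```python
def compress_region(pixels, lm=127, c_low=32, c_high=32):
    """Compress a region's luminance values about the target
    luminance lm at the ratio (256 - (c_low + c_high)) / 256,
    pulling the region's average toward lm (cf. FIG. 5)."""
    ratio = (256 - (c_low + c_high)) / 256
    return [round(lm + (v - lm) * ratio) for v in pixels]

# A dark region whose mean (55) lies well below the target of 127.
region = [10, 40, 70, 100]
corrected = compress_region(region)
mean_before = sum(region) / len(region)
mean_after = sum(corrected) / len(corrected)
```

With C_L = C_H = 32 the ratio is 0.75, and the corrected mean moves from 55 to 73, closer to L_M = 127 as the patent describes.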

FIG. 6 shows an example in which the pixel luminance values of each region of the face image are corrected region by region. FIG. 6(a) shows the luminance distributions of regions A to D before correction, with average luminance values L_a1, L_b1, L_c1, and L_d1 respectively; FIG. 6(b) shows the distributions after correction, with average luminance values L_a2, L_b2, L_c2, and L_d2. In both histograms the horizontal axis represents the luminance value and the vertical axis the number of pixels. After correction, the average luminance value of every region is closer to the predetermined luminance value L_M than before. Because the correction is performed region by region, it can be made appropriately for the luminance distribution of each region, and because it drives each region's average luminance toward the predetermined luminance value, the luminance values of the pixels in each region can be brought within the fixed range in which image analysis is possible.
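Applying such a compression region by region, as FIG. 6 illustrates with histograms, might look like the following self-contained sketch. It is hypothetical: the fixed ratio of 0.75 corresponds to C_L = C_H = 32, and the region contents are invented.

```python
def correct_per_region(regions, lm=127, ratio=0.75):
    """For each region (a flat list of luminance values), compress
    the distribution about the target lm, as in FIG. 6: every
    region's average luminance moves toward lm."""
    return {name: [round(lm + (v - lm) * ratio) for v in px]
            for name, px in regions.items()}

# Four regions with very different brightness, as after FIG. 3/4.
regions = {"A": [20, 40, 60], "B": [200, 220, 240],
           "C": [90, 110, 130], "D": [160, 180, 200]}
corrected = correct_per_region(regions)
```

Each region is corrected with respect to its own distribution, so a dark region is brightened and a bright region darkened, both toward the same target.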

The description above corrects the pixel luminance values by compressing each region's luminance distribution at a predetermined ratio about the predetermined luminance value L_M, but the correction method is not limited to this. For example, the image correction unit 13 can correct the pixel luminance values of each region by so-called level correction, a process that remaps the luminance values of pixels distributed over a specified luminance range so that they are distributed, in the same proportions, over a separately specified luminance range. In FIG. 7(a), for instance, the pixel luminance values of a region are distributed between roughly 0 and 130, skewed toward the low-luminance side. Applying level correction to the pixels of this region so that the luminance values are distributed over the range 0 to 255 yields the distribution shown in FIG. 7(b). This correction, too, brings the average luminance value of the pixels in the region close to the predetermined luminance value L_M, set for example to 127.
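Level correction as described here is a linear remapping of one luminance range onto another. The following is a hedged sketch (the function and its parameter names are illustrative, not the patent's):

```python
def level_correct(pixels, in_low, in_high, out_low=0, out_high=255):
    """Linearly remap luminance values from [in_low, in_high] to
    [out_low, out_high], clipping to the output range
    (cf. FIG. 7: a 0-130 distribution stretched over 0-255)."""
    scale = (out_high - out_low) / (in_high - in_low)
    out = []
    for v in pixels:
        w = out_low + (v - in_low) * scale
        out.append(max(out_low, min(out_high, round(w))))
    return out

# A low-key region distributed over 0-130, as in FIG. 7(a).
region = [0, 26, 52, 104, 130]
stretched = level_correct(region, 0, 130)
```

The pixel proportions are preserved while the mean of the stretched region moves toward the middle of the 0 to 255 gradation range.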

The image correction unit 13 sends the face image data, corrected region by region as described above, to the image adjustment unit 14.

The image adjustment unit 14 receives the face image data from the image correction unit 13. As noted above, the driving state monitoring apparatus 1 corrects luminance values separately for each divided region, so the luminance values may differ greatly at the boundaries between regions, as shown for example in FIG. 8. If feature points were detected on a face image in this state by image processing such as edge detection, a boundary between regions could be judged to be a feature point. To prevent this misjudgment, the image adjustment unit 14 adjusts the luminance values of the pixels at the boundary portions so that luminance changes smoothly and continuously across each boundary from one region toward the other.

The adjustment of the luminance values is described concretely with reference to FIGS. 9 and 10, taking the portion containing the boundary P between regions A and C shown in FIG. 9 and adjusting the luminance values of the pixels whose distance from the boundary P is within a distance g. As shown in FIG. 10, the adjustment changes the luminance values of the pixels between a point a in region A and a point c in region C according to the gradient given by the luminance difference (Y_a − Y_c) between them. The luminance value Y_P of a pixel at a distance d_A from the point a toward the point P is given by equation (1), where Y_d is the luminance value, before adjustment, of the pixel at the distance d_A from the point a:

Y_P = Y_d − (Y_a − Y_c) / 2 / g × d_A   …(1)

The luminance value Y_P of a pixel at a distance d_C from the point c toward the point P is given by equation (2), where Y_d is the luminance value, before adjustment, of the pixel at the distance d_C from the point c:

Y_P = Y_d + (Y_a − Y_c) / 2 / g × d_C   …(2)
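Equations (1) and (2) can be illustrated in code as a one-dimensional blend across a boundary. The sketch below is hypothetical (the index conventions are ours): pixels within distance g of the boundary P are shifted in proportion to their distance d from the outer edge of the blending band, so that the two sides meet at the same value at P.

```python
def blend_boundary(row, p, g):
    """Blend luminance across the boundary between row[:p] (region A)
    and row[p:] (region C) per eqs. (1) and (2). ya and yc are the
    pre-adjustment luminance values at points a and c, the outer
    edges of the blending band."""
    ya = row[p - g - 1]   # point a: band edge on the region A side
    yc = row[p + g]       # point c: band edge on the region C side
    step = (ya - yc) / 2 / g
    out = list(row)
    for d in range(1, g + 1):
        out[p - g + d - 1] -= step * d   # eq. (1): region A side
        out[p + g - d] += step * d       # eq. (2): region C side
    return out

# Two flat regions, 100 and 40, boundary at index 8, half-width g=4.
row = [100.0] * 8 + [40.0] * 8
smoothed = blend_boundary(row, p=8, g=4)
```

With these values the band descends from 100 to 40 and both sides reach 70 at the boundary, so the former luminance step of 60 can no longer be picked up as an edge.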

Performing the adjustment described above at the boundaries between all regions prevents the region boundaries from being detected as feature points by the image processing. The distance g can be preset, based on characteristics such as the image analysis algorithm, to a value at which the boundaries between regions are not detected as feature points. FIG. 11 shows a face image in which the pixel luminance values at the boundary portions between regions have been adjusted: the luminance value changes smoothly across each boundary from one region toward the other.

The image adjustment unit 14 sends the face image data whose luminance values have been adjusted as described above to the driving state determination unit 15.

The driving state determination unit 15 receives the face image data from the image adjustment unit 14, extracts the feature points of the face image by image processing, and monitors the driver's driving state based on the state of the extracted feature points. Here again the feature points are facial parts such as the eyes, nose, and mouth, and the driving state determination unit 15 monitors the driver's state from, for example, eye blinks or the face orientation determined from the positions of the nose and mouth.

In the above, the image division unit 12 constitutes image division means that divides the face image into a plurality of regions; the image correction unit 13 constitutes image correction means that corrects the luminance values of each region so that the average luminance of each region of the face image approaches a predetermined luminance value; and the image adjustment unit 14 constitutes image adjustment means that adjusts the luminance values of the boundary portions between the regions of the corrected face image so that luminance changes continuously across each boundary from one region toward the other.

As described above, in this embodiment the image division unit 12 divides the driver's face image into a plurality of regions and the image correction unit 13 corrects the luminance values region by region, so the correction can be made appropriately for the brightness of each region. Because the correction drives each region's average luminance toward a predetermined luminance value, the luminance values of the pixels in each region can be brought within the fixed range in which image analysis is possible. Feature points can therefore be extracted even when the pixel luminance values of the face image acquired by the image acquisition unit 11 are distributed beyond the range in which the driving state determination unit 15 can perform image analysis.

Further, the image correction unit 13 corrects the pixel luminance values of each region so that the luminance distribution is compressed about the predetermined luminance value, which brings each region's average luminance close to that value and reliably confines the pixel luminance values of each region to the fixed range in which image analysis is possible. Because the compression ratio is set according to how far each region's average luminance deviates from the predetermined luminance value, the pixel luminance values can be corrected appropriately toward the predetermined value even when a region's average luminance deviates greatly. Moreover, since the correction moves each region's average luminance toward the center value of the luminance gradation range, the pixel luminance values of each region converge to a range that is all the more suitable for image analysis.

Furthermore, in this embodiment the image adjustment unit 14 adjusts the luminance values of the pixels at the boundary portions between the regions of the face image so that luminance changes continuously across each boundary from one region toward the other, so the driving state determination unit 15 can analyze the face image without misjudging a boundary between regions as a feature point.

FIG. 1 is a schematic configuration diagram showing an embodiment of the driving state monitoring apparatus according to the present invention.
FIG. 2 is a diagram showing an example of a captured face image.
FIG. 3 is a diagram showing an example of a face image divided into four regions.
FIG. 4 is a diagram showing another example of a face image divided into four regions.
FIG. 5 is a diagram showing the distribution of luminance values in a region of the face image before and after correction.
FIG. 6 is a diagram showing an example in which the pixel luminance values of each region of the face image are corrected region by region.
FIG. 7 is a diagram showing an example of correcting a region of the face image by level correction.
FIG. 8 is a diagram showing the face image after luminance value correction.
FIG. 9 is a diagram showing the position of the boundary between two regions of the face image whose luminance values are adjusted.
FIG. 10 is a diagram showing the method of adjusting the pixel luminance values at a boundary portion of the face image.
FIG. 11 is a diagram showing the face image in which the pixel luminance values at the boundaries between regions have been adjusted.

Explanation of symbols

1 … driving state monitoring apparatus; 10 … imager; 11 … image acquisition unit; 12 … image division unit (image division means); 13 … image correction unit (image correction means); 14 … image adjustment unit (image adjustment means); 15 … driving state determination unit; 20 … ECU.

Claims (3)

A driving state monitoring apparatus that acquires a driver's face image with an imager, extracts feature points from the acquired face image, and monitors the driver's driving state based on the state of the extracted feature points, the apparatus comprising:
image division means for dividing the face image into a plurality of regions; and
image correction means for correcting the luminance value of each region so that the average luminance of each region of the face image approaches a predetermined luminance value.

The driving state monitoring apparatus according to claim 1, further comprising image adjustment means for adjusting the luminance values of the boundary portions between the regions of the face image corrected by the image correction means, so that the luminance value changes continuously from one region to the other across the boundary between the regions.

The driving state monitoring apparatus according to claim 1 or 2, wherein the image correction means corrects the luminance values so that the distribution of luminance values in each region of the face image is compressed around the predetermined luminance value.
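As a rough illustration of the per-region correction in claims 1 and 3 (a hypothetical sketch under assumed parameters, not the actual claimed implementation), each region of a grayscale face image can have its luminance distribution shifted to a target average and then compressed around that target; the grid size, target value, and compression factor below are illustrative assumptions.

```python
import numpy as np

def correct_regions(img, target=128, compress=0.5, grid=(2, 2)):
    """Divide `img` into a grid of regions and, per region, compress the
    luminance distribution around `target` so that each region's average
    luminance approaches the target value.

    `compress` in (0, 1]: 1 keeps the original spread, smaller values
    squeeze the distribution more tightly around the target.
    """
    out = img.astype(np.float64).copy()
    h, w = out.shape
    rows, cols = grid
    for i in range(rows):
        for j in range(cols):
            ys = slice(i * h // rows, (i + 1) * h // rows)
            xs = slice(j * w // cols, (j + 1) * w // cols)
            region = out[ys, xs]
            # shift the region mean onto the target, then scale the
            # deviations from the mean down by the compression factor
            out[ys, xs] = target + compress * (region - region.mean())
    return np.clip(out, 0, 255).astype(np.uint8)
```

After this correction, a region that was uniformly dark and one that was uniformly bright both end up at the target luminance, which is what lets subsequent feature-point extraction operate on a prescribed luminance range regardless of the lighting in each part of the scene.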
JP2008213181A 2008-08-21 2008-08-21 Driving state monitoring apparatus Pending JP2010049491A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2008213181A JP2010049491A (en) 2008-08-21 2008-08-21 Driving state monitoring apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2008213181A JP2010049491A (en) 2008-08-21 2008-08-21 Driving state monitoring apparatus

Publications (1)

Publication Number Publication Date
JP2010049491A true JP2010049491A (en) 2010-03-04

Family

ID=42066528

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2008213181A Pending JP2010049491A (en) 2008-08-21 2008-08-21 Driving state monitoring apparatus

Country Status (1)

Country Link
JP (1) JP2010049491A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2013017049A (en) * 2011-07-04 2013-01-24 Canon Inc Image processor, image processing method and computer program
KR20190058071A (en) * 2017-11-21 2019-05-29 현대모비스 주식회사 Apparatus and method for correcting image data
WO2019159364A1 (en) * 2018-02-19 2019-08-22 三菱電機株式会社 Passenger state detection device, passenger state detection system, and passenger state detection method
JP2021014227A (en) * 2019-07-16 2021-02-12 株式会社Subaru Occupant protection system of vehicle

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000059629A (en) * 1998-08-05 2000-02-25 Minolta Co Ltd Image correction device for image processor, image correction method and machine readable recording medium recording image correction program
JP2005084824A (en) * 2003-09-05 2005-03-31 Toshiba Corp Face image collation apparatus and face image collation method and passage controller
JP2006139701A (en) * 2004-11-15 2006-06-01 Niles Co Ltd Eye aperture detection apparatus

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000059629A (en) * 1998-08-05 2000-02-25 Minolta Co Ltd Image correction device for image processor, image correction method and machine readable recording medium recording image correction program
JP2005084824A (en) * 2003-09-05 2005-03-31 Toshiba Corp Face image collation apparatus and face image collation method and passage controller
JP2006139701A (en) * 2004-11-15 2006-06-01 Niles Co Ltd Eye aperture detection apparatus

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2013017049A (en) * 2011-07-04 2013-01-24 Canon Inc Image processor, image processing method and computer program
KR20190058071A (en) * 2017-11-21 2019-05-29 현대모비스 주식회사 Apparatus and method for correcting image data
KR102329630B1 (en) * 2017-11-21 2021-11-22 현대모비스 주식회사 Apparatus and method for correcting image data
WO2019159364A1 (en) * 2018-02-19 2019-08-22 三菱電機株式会社 Passenger state detection device, passenger state detection system, and passenger state detection method
JPWO2019159364A1 (en) * 2018-02-19 2020-05-28 三菱電機株式会社 Passenger status detection device, passenger status detection system, and passenger status detection method
CN111712852A (en) * 2018-02-19 2020-09-25 三菱电机株式会社 Passenger state detection device, passenger state detection system, and passenger state detection method
US11361560B2 (en) 2018-02-19 2022-06-14 Mitsubishi Electric Corporation Passenger state detection device, passenger state detection system, and passenger state detection method
CN111712852B (en) * 2018-02-19 2023-08-11 三菱电机株式会社 Passenger state detection device, system and method
JP2021014227A (en) * 2019-07-16 2021-02-12 株式会社Subaru Occupant protection system of vehicle
JP7285155B2 (en) 2019-07-16 2023-06-01 株式会社Subaru vehicle occupant protection system

Similar Documents

Publication Publication Date Title
US8036427B2 (en) Vehicle and road sign recognition device
US8050456B2 (en) Vehicle and road sign recognition device
JP5435307B2 (en) In-vehicle camera device
US20080044060A1 (en) Image processor and image processing method
JP6029954B2 (en) Imaging device
JP2011165050A (en) White line recognition device
US7929025B2 (en) Automatic white balance control system, automatic white balance module, and method thereof
US8659676B2 (en) Image analysis device and method thereof
US20120300035A1 (en) Electronic camera
JP2013005234A5 (en)
US9349071B2 (en) Device for detecting pupil taking account of illuminance and method thereof
JP2010049491A (en) Driving state monitoring apparatus
KR101715489B1 (en) Image generating device and image generating method
JP2013146032A (en) Driver monitor system and processing method thereof
KR20150059302A (en) Method for recognizing character by fitting image shot, and information processing device for executing it
US20210090237A1 (en) Deposit detection device and deposit detection method
US8743236B2 (en) Image processing method, image processing apparatus, and imaging apparatus
CN111565283A (en) Traffic light color identification method, correction method and device
JP4668863B2 (en) Imaging device
US9122935B2 (en) Object detection method, storage medium, integrated circuit, and object detection apparatus
TWI630818B (en) Dynamic image feature enhancement method and system
KR20190064419A (en) Method, apparatus and system for detecting and reducing the effects of color fringing in digital video acquired by a camera
KR102188163B1 (en) System for processing a low light level image and method thereof
JP2006209277A (en) Object detecting device, object detecting method and imaging device
KR20170032158A (en) Imaging apparatus

Legal Events

Date Code Title Description
A621 Written request for application examination

Free format text: JAPANESE INTERMEDIATE CODE: A621

Effective date: 20101206

A977 Report on retrieval

Free format text: JAPANESE INTERMEDIATE CODE: A971007

Effective date: 20110902

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20110920

A02 Decision of refusal

Free format text: JAPANESE INTERMEDIATE CODE: A02

Effective date: 20120228