JP2006268565A - Occupant detector and occupant detection method - Google Patents

Occupant detector and occupant detection method

Info

Publication number
JP2006268565A
Authority
JP
Japan
Prior art keywords
candidate
area
occupant
image
candidate area
Prior art date
Legal status
Granted
Application number
JP2005086980A
Other languages
Japanese (ja)
Other versions
JP4765363B2 (en)
Inventor
Naoko Okubo
直子 大久保
Current Assignee
Nissan Motor Co Ltd
Original Assignee
Nissan Motor Co Ltd
Priority date
Filing date
Publication date
Application filed by Nissan Motor Co Ltd filed Critical Nissan Motor Co Ltd
Priority to JP2005086980A priority Critical patent/JP4765363B2/en
Publication of JP2006268565A publication Critical patent/JP2006268565A/en
Application granted granted Critical
Publication of JP4765363B2 publication Critical patent/JP4765363B2/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

PROBLEM TO BE SOLVED: To accurately detect an occupant even when a subject whose shape on the imaging surface resembles the occupant's head is present in the imaging region, when a moving object other than the occupant is included in the imaging region, or when the image of the occupant's face region is partially hidden behind an obstacle.

SOLUTION: An imaging unit 2 captures an original image including the vehicle occupant, and a calculation unit 3 generates an edge image from the original image and extracts the regions sandwiched between the extracted edges as candidate regions. The calculation unit 3 then merges each extracted candidate region with adjacent pixels that satisfy a predetermined condition, updating the candidate regions and storing them in a storage unit 4. The calculation unit 3 calculates feature quantities of the updated candidate regions and selects from them the candidate regions of interest that are most likely to be a face region. For these candidate regions, the calculation unit 3 and the storage unit 4 evaluate the amount of on-screen movement over a fixed time and the front-back relationship between regions, and thereby extract the face region.

COPYRIGHT: (C)2007,JPO&INPIT

Description

The present invention relates to an occupant detection device and an occupant detection method that detect the position and state of an occupant in a vehicle by image processing.

Conventionally, an occupant detection device is known that captures two images of the vehicle interior, including the occupant, with a time difference between them, creates an edge image from each of the two captured images, and detects the position and state of the occupant based on the shape of the difference image of the two edge images (for example, see Patent Document 1).
JP 2000-113164 A

However, since the conventional occupant detection device detects the position and state of the occupant from the shape of the difference image of the edge images, a moving subject other than the occupant, whose shape on the imaging surface resembles the occupant's head, may be erroneously detected as the occupant when such a subject is present in the imaging region. Furthermore, when such a subject lies between the occupant's head and the imaging surface, it partially hides the occupant's head in the captured image, so that no shape characteristic enough to be recognized as the occupant's head region can be obtained and the occupant may not be detected.

The present invention has been made in view of the above points, and an object of the present invention is to provide an occupant detection device and an occupant detection method capable of accurately detecting an occupant even when a moving subject other than the occupant, whose shape on the imaging surface resembles the occupant's head, is present in the imaging region, or when the image of the occupant's face region is partially hidden by an obstacle.

To solve the above problem, the occupant detection device and occupant detection method according to the present invention generate an edge image from an image of the vehicle interior, extract a region sandwiched between edges of the edge image as a candidate region, determine whether the difference between the pixel value of a pixel adjacent to the candidate region and the pixel value of the candidate region is within a predetermined range, integrate the adjacent pixel into the candidate region to update the candidate region when the difference is within the predetermined range, and determine whether the candidate region is an image region corresponding to the face of an occupant in the vehicle.

According to the occupant detection device and the occupant detection method of the present invention, an occupant can be accurately detected even when a moving subject other than the occupant, whose shape on the imaging surface resembles the occupant's head, is present in the imaging region, or when the image of the occupant's face region is partially hidden by an obstacle.

Hereinafter, embodiments of the present invention will be described in detail with reference to the drawings.

[Configuration of occupant detection device]
First, the configuration of an occupant detection device according to an embodiment of the present invention will be described with reference to FIG. 1.

As shown in FIG. 1, an occupant detection device 1 according to an embodiment of the present invention is mounted on a vehicle and mainly includes an imaging unit 2, a calculation unit 3, and a storage unit 4. The imaging unit 2 is an imaging device such as a far-infrared camera (IR camera) and, as shown in FIG. 2, is installed at a position such as the map lamp or the center cluster. The imaging unit 2 may be installed at any position from which the occupant's face can be captured.

The calculation unit 3 is a well-known information processing device whose internal CPU (Central Processing Unit) executes a program to realize the face region extraction process described later. The calculation unit 3 functions as the edge image generation means, candidate region extraction means, candidate region updating means, and region type determination means according to the present invention. The storage unit 4 is a storage device such as a RAM (Random Access Memory) and temporarily stores the data (pixel values) of the candidate regions and updated candidate regions extracted in the face region extraction process described later.

By executing the face region extraction process described below, the occupant detection device 1 having this configuration accurately detects the occupant even when a moving subject other than the occupant, whose shape on the imaging surface resembles the occupant's head, is present in the imaging region, or when the image of the occupant's face region is partially hidden by an obstacle. The operation of the occupant detection device 1 when executing this face region extraction process is described below with reference to the flowchart shown in FIG. 3.

[Face region extraction process]
The flowchart shown in FIG. 3 starts when the vehicle engine is started or when the occupant detection device 1 is powered on, and proceeds to step S1.

In step S1, the calculation unit 3 initializes the variables used in the subsequent processing. When the initialization is complete, the process proceeds to step S2.

In step S2, the calculation unit 3 acquires from the imaging unit 2 a far-infrared image including the occupant's face in the vehicle interior as the original image. After the original image is acquired, the process proceeds to step S3.

In step S3, the calculation unit 3 functions as the edge image generation means according to the present invention and generates an edge image of the original image acquired in step S2. Specifically, as shown in FIG. 4, when the original image 41 contains the subjects 42 and 43 and the background 44, the calculation unit 3 extracts the horizontal temperature gradient of the original image 41 with a Sobel filter to generate a vertical Sobel image, and converts the pixel value Si of each pixel i of the vertical Sobel image into a label value Ei according to Formula 1 below, thereby generating the edge image. Since the original image 41 is a far-infrared image, the pixel value Si corresponds to a temperature gradient; THP in Formula 1 is a predetermined positive threshold and THM is a predetermined negative threshold.

$$E_i = \begin{cases} 1 & (S_i \ge TH_P) \\ -1 & (S_i \le TH_M) \\ 0 & (\text{otherwise}) \end{cases} \quad \text{(Formula 1)}$$

That is, when the temperatures of the subjects 42 and 43 are higher than that of the background 44, the pixel values corresponding to the subjects 42 and 43 are higher than those of the background 44; and when the temperature of the subject 43 is higher than that of the subject 42, the pixel values corresponding to the subject 43 are higher than those of the subject 42. Accordingly, when edge image generation is applied to such an image, edges are extracted at the boundaries between the background 44 and the subjects 42 and 43, yielding the edge image 51 shown in FIG. 5. In the edge image 51, the edges 52 and 53 are positive edges with label value Ei = 1 and the edges 54 and 55 are negative edges with label value Ei = -1; all other image regions are defined as Ei = 0. The method of generating the edge image 51 is not limited to the above; for example, a filter other than the Sobel filter may be used, and the pixel values of the source image may represent a quantity other than temperature as long as the sign of the edge strength can be determined. When the edge image 51 has been generated, the process proceeds to step S4.
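
As a concrete illustration of step S3, the following minimal sketch (in Python with NumPy/SciPy, which the patent of course does not prescribe) builds the label image of Formula 1 from a far-infrared frame; the concrete threshold values TH_P and TH_M are hypothetical.

```python
import numpy as np
from scipy.ndimage import sobel

# Thresholds of Formula 1: TH_P positive, TH_M negative (values assumed).
TH_P, TH_M = 40.0, -40.0

def label_edge_image(original):
    """Step S3 sketch: convert a far-infrared image (pixel value ~ temperature)
    into a label image E with values {1, -1, 0} via a horizontal gradient."""
    # Horizontal gradient ("vertical Sobel image"): responds to vertical
    # edges such as the left/right contours of a head.
    s = sobel(original.astype(np.float64), axis=1)
    e = np.zeros(s.shape, dtype=np.int8)
    e[s >= TH_P] = 1      # positive edge, Ei = 1
    e[s <= TH_M] = -1     # negative edge, Ei = -1
    return e              # everything else stays Ei = 0
```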

In step S4, the calculation unit 3 functions as the candidate region extraction means according to the present invention, extracts the image regions sandwiched between edges in the edge image 51 obtained in step S3 as candidate regions, and stores the extracted candidate region data in the storage unit 4. Specifically, the calculation unit 3 scans the edge image 51 in the horizontal direction, sets to 1 the pixel values on each run along the scan line that starts at a positive-value pixel and ends at a negative-value pixel, and sets all other pixel values to 0. That is, as shown in FIG. 6, on a line L scanned horizontally across the edge image 51, the calculation unit 3 sets to 1 the pixel values of the region A that starts at the positive edge 52 and ends at the negative edge 55; likewise, it sets to 1 the pixel values of the range on line L that starts at the positive edge 53 and ends at the negative edge 54, and converts the remaining pixel values on the line to 0. As a result, as shown in FIG. 7, an image is obtained consisting of the candidate regions 72 and 73 with pixel value 1 and the background region 74 with pixel value 0. During this processing, for use in the later steps, the calculation unit 3 assigns identification numbers n (n = 1, 2, 3, ..., N, where N is the total number of candidate regions) to the extracted candidate regions 72 and 73 and the background region 74, and temporarily stores them in the storage unit 4. When the extraction of the candidate regions is complete, the process proceeds to step S5.
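
A minimal sketch of the scan-line extraction of step S4 follows. Runs opened by a positive edge and closed by a negative edge are set to 1; a stack is used so that nested pairs such as (52, 55) and (53, 54) in FIG. 6 are both captured, though the exact pairing rule is an assumption on our part. The connected regions are then numbered with scipy.ndimage.label, one plausible way to assign the identification numbers n.

```python
import numpy as np
from scipy import ndimage

def extract_candidate_regions(e):
    """Step S4 sketch: e is the label image from step S3 (values 1, -1, 0)."""
    mask = np.zeros(e.shape, dtype=np.uint8)
    for y in range(e.shape[0]):
        open_edges = []                    # x positions of unmatched positive edges
        for x in range(e.shape[1]):
            if e[y, x] == 1:
                open_edges.append(x)
            elif e[y, x] == -1 and open_edges:
                mask[y, open_edges.pop():x + 1] = 1   # run from +edge to -edge
    # Assign identification numbers n = 1..N to the connected candidate regions.
    labels, n = ndimage.label(mask)
    return labels, n
```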

In step S5, the calculation unit 3 functions as the candidate region updating means according to the present invention: it determines whether a pixel adjacent to a candidate region has a pixel value close to that of the candidate region and, if so, integrates that pixel into the candidate region of interest and updates the candidate region data stored in the storage unit 4. Specifically, the calculation unit 3 calculates the average value TAn of the pixel values Ti of the pixels belonging to the candidate region with identification number n in the original image (where i ranges over the pixels in candidate region n). Next, the calculation unit 3 determines whether the absolute difference between the average value TAn and the pixel value Tj of a pixel j adjacent to candidate region n is at most a predetermined threshold THA, as expressed in Formula 2 below; if it is, the adjacent pixel j is integrated into the candidate region of interest. The threshold THA is preferably smaller than the absolute values of the thresholds THP and THM set in Formula 1, so that pixels are not merged across an edge. When the integration of pixels is complete, the process proceeds to step S6.

$$\left| TA_n - T_j \right| \le TH_A \quad \text{(Formula 2)}$$

In step S6, the calculation unit 3 determines whether any pixel j newly adjacent to the candidate region n updated in step S5 satisfies Formula 2; if none does, the updating ends and the process proceeds to step S7, and if one does, the process returns to step S5. The processing from step S4 to step S6 is illustrated in FIGS. 7, 8 and 9: superimposing the candidate regions 72 and 73 obtained in step S4 (FIG. 7) on the original image 41 (FIG. 4) yields the superimposed image 81 shown in FIG. 8. At this point the candidate region 72 is only a part of the subject 42 shown in FIG. 4, overlapping the subject's region. By extracting the pixels whose values are close to that of the candidate region 72 in step S5 and integrating them into the candidate region 72 in step S6, the updated candidate region 94 that coincides with the subject 42 is obtained, as shown in FIG. 9. Steps S1 to S6 constitute the candidate region extraction stage: after a simple edge-based region extraction, adjacent pixels with similar pixel values are treated as belonging to the same region and the region is updated, producing the final extracted regions. Compared with a general region segmentation method that partitions every pixel on the screen, such as region growing applied to the whole image, this processing greatly reduces the number of tests needed to decide which region the occupant's image belongs to, and thus substantially reduces the computational load.
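
A sketch of the region-updating loop of steps S5-S6 is given below, under the assumption that all qualifying neighbours may be absorbed in one pass before the mean TA_n is recomputed; the value of TH_A is hypothetical, chosen smaller than |TH_P| and |TH_M| as the text recommends.

```python
import numpy as np
from scipy import ndimage

TH_A = 15.0   # hypothetical; should stay below |TH_P| and |TH_M|

def update_candidate_region(original, region_mask):
    """Steps S5-S6 sketch: grow one candidate region by Formula 2."""
    region = region_mask.astype(bool).copy()
    while True:
        ta_n = original[region].mean()                      # mean pixel value TA_n
        ring = ndimage.binary_dilation(region) & ~region    # adjacent pixels j
        absorb = ring & (np.abs(original - ta_n) <= TH_A)   # Formula 2 test
        if not absorb.any():                                # step S6: no new pixel
            return region
        region |= absorb                                    # step S5: integrate
```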

In the processing from step S7 to step S14, the calculation unit 3 and the storage unit 4 function as the region type determination means according to the present invention and select, from among the candidate regions, the one corresponding to the occupant's face region. Specifically, in step S7, the calculation unit 3 calculates shape features for each extracted candidate region; for example, it calculates three shape features of each candidate region: its circularity, its area, and the aspect ratio of its circumscribing rectangle. The shape features are not limited to these three; any values that adequately describe the shape of the subject to be detected may be used. When the shape features of each candidate region have been calculated, the process proceeds to step S8.
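
The following sketch computes the three shape features named in step S7 for one candidate region. The circularity formula 4πA/P² and the pixel-counting perimeter estimate are common choices, not ones the patent specifies.

```python
import numpy as np

def shape_features(region):
    """Step S7 sketch: region is a boolean mask of one candidate region."""
    area = int(region.sum())
    # Perimeter estimate: region pixels with at least one 4-connected
    # background neighbour.
    p = np.pad(region, 1)
    interior = p[:-2, 1:-1] & p[2:, 1:-1] & p[1:-1, :-2] & p[1:-1, 2:]
    perimeter = int((region & ~interior).sum())
    # Circularity is 1 for a perfect disc under this definition.
    circularity = 4.0 * np.pi * area / max(perimeter, 1) ** 2
    # Aspect ratio of the circumscribing rectangle (width / height).
    ys, xs = np.nonzero(region)
    aspect_ratio = (xs.max() - xs.min() + 1) / (ys.max() - ys.min() + 1)
    return area, circularity, aspect_ratio
```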

In step S8, based on the shape features calculated in step S7, the calculation unit 3 screens out candidate regions that are highly unlikely to correspond to the occupant's face region. Specifically, if all the shape features of the candidate region of interest fall within predetermined threshold ranges, the process proceeds to step S9; if even one shape feature is outside its threshold range, the candidate region of interest is judged to be something other than the occupant's face and the process returns to step S2.

In step S9, the calculation unit 3 calculates a pixel value feature of the candidate region of interest and determines whether it is within a predetermined range. Specifically, the calculation unit 3 calculates the variance of the pixel values within the candidate region as the pixel value feature. The variance is used for the following reason: if a candidate region other than a face region is, for example, a warm canned drink, the temperature varies little within the region and the pixel value variance is small, whereas within a face region the eyes, mouth, and nose produce temperature unevenness, so the variance tends to be large. A candidate region whose variance falls outside the threshold range appropriate for a face can therefore be judged to be a region other than a face region. If the pixel value feature of the candidate region of interest is within the predetermined threshold range, the process proceeds to step S10; otherwise, the process returns to step S2. In this embodiment the variance within the candidate region is used as the pixel value feature, but the calculation unit 3 may instead use the regularity of the pixel value distribution within the region: the temperature distribution of a face region is produced by the eyes, mouth, and nose and thus has a regular pixel value distribution in the image, so an average template of the pixel value distribution can be prepared in advance and the correlation with this template used as an index of regularity.
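
A sketch of the step S9 variance test, with the bounds var_min and var_max as stand-in values; the patent only states that the variance of a face region tends to be comparatively large and is compared against a predetermined threshold range.

```python
import numpy as np

def passes_pixel_value_test(original, region, var_min=25.0, var_max=400.0):
    """Step S9 sketch: a face region shows moderate-to-large temperature
    spread (eyes, nose, mouth), unlike e.g. a uniformly warm canned drink."""
    v = float(np.var(original[region]))
    return var_min <= v <= var_max
```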

In step S10, the calculation unit 3 stores the image resulting from the processing up to step S9 in the storage unit 4. Once it is stored, the process proceeds to step S11.

In step S11, the calculation unit 3 calculates the amount by which the candidate region of interest moves in the image between two different times, and determines whether this movement is within a predetermined range: if it is, the region is judged to be a face region; if not, it is judged to be something other than a face region. The basis of this determination is illustrated in FIG. 10. When a subject 102 is imaged at about the same size as the occupant's face 101, the distance between the subject 102 and the imaging surface 104 is, depending on the mounting position of the camera 103, roughly half the distance between the occupant's face 101 and the imaging surface 104. At that distance, if the subject 102 is moved by the occupant's hand, the resulting movement on the imaging surface 104 is generally larger than the on-screen movement produced when the seated occupant's face 101 moves. This is because, for a seated occupant in real space, the hand can generally move faster than the face, and because the face is farther from the imaging surface 104 than the hand, so that for the same real-space speed the face region moves less in the image. This determination therefore makes it possible to judge accurately whether a candidate region corresponds to the face region.
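
One way to realize the step S11 movement test, using centroid displacement between the two times as the movement measure; the patent does not fix the measure, and the threshold max_shift is hypothetical.

```python
import numpy as np

def centroid(mask):
    """Centroid (row, column) of a boolean region mask."""
    ys, xs = np.nonzero(mask)
    return np.array([ys.mean(), xs.mean()])

def is_face_like_motion(mask_t0, mask_t1, max_shift=12.0):
    """Step S11 sketch: a seated face moves little on screen between two
    frames, while a hand-held object close to the camera moves more."""
    shift = np.linalg.norm(centroid(mask_t1) - centroid(mask_t0))
    return shift <= max_shift
```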

In step S12, when there are a plurality of candidate regions, the calculation unit 3 determines their front-back positional relationship. Specifically, the calculation unit 3 tracks each region and, when candidate regions overlap on the screen, judges that the region whose shape does not change is the one in front. Once the front-back relationship has been determined, the process proceeds to step S13.

In step S13, the calculation unit 3 extracts as the face region the candidate region judged in step S12 to be the rear region. The basis for treating the rear region as the face region is illustrated in FIG. 10: since the subject 102 is closer to the imaging surface 104 than the face 101, the occupant's face 101 is hidden by the subject 102 and its shape on the imaging surface 104 changes. If the subject 102 is, for example, a warm drink held in the occupant's hand, it rarely moves behind the occupant's face region 101, so the subject 102 in front can be regarded as a region other than the face. When the calculation unit 3 has extracted the candidate region as the face region, the process proceeds to step S14.
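
A sketch of the front-back decision of steps S12-S13 for two tracked candidate regions. Shape stability is measured here by the overlap (IoU) of each region with its own mask in the previous frame, which is an assumed metric; the patent only states that, when the regions overlap, the region whose shape does not change is judged to be in front and the rear region is taken as the face.

```python
import numpy as np

def iou(m0, m1):
    """Intersection-over-union of two boolean masks."""
    inter = np.logical_and(m0, m1).sum()
    union = np.logical_or(m0, m1).sum()
    return inter / union if union else 0.0

def pick_face_region(prev_a, cur_a, prev_b, cur_b):
    """Steps S12-S13 sketch: when two candidate regions overlap on screen,
    the shape-stable one is in front; the rear one is extracted as the face."""
    if not np.logical_and(cur_a, cur_b).any():
        return None                      # no overlap: order cannot be decided
    # The region whose shape changed less (higher IoU with its previous
    # mask) is judged to be in front, so the other one is the face.
    return cur_b if iou(prev_a, cur_a) >= iou(prev_b, cur_b) else cur_a
```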

In step S14, the calculation unit 3 determines whether to end the face region extraction process. If so, the face region extraction process ends; if not, the process returns to step S2.

The face region extraction process of this embodiment uses a far-infrared image as the original image acquired by the imaging means. This is because the temperature differences within a face region are small compared with the temperature difference between the face region and the background, and because, unlike in a visible-light image, the contours of the eyes, nose, and mouth do not produce edges; only the outline of the face appears as an edge, so the entire face region can be extracted efficiently. In other words, when the temperature variation across a subject is small, the subject can be captured as a single surface: even if a pattern exists on the subject's surface, it is not captured as it would be under visible light. A far-infrared image is therefore well suited to cases where only the outline of the subject is wanted.

As is clear from the above description, in the occupant detection device 1 according to the embodiment of the present invention, the imaging unit 2 captures an original image including the vehicle occupant, and the calculation unit 3 generates an edge image from the original image and extracts the regions sandwiched between the extracted edges as candidate regions. Next, the calculation unit 3 merges each extracted candidate region with the adjacent pixels that satisfy the predetermined condition, updates the candidate regions, and stores them in the storage unit 4. The calculation unit 3 then calculates the features of the updated candidate regions and selects from among them the candidate regions of interest that are most likely to be a face region. Finally, the calculation unit 3 and the storage unit 4 evaluate, for each candidate region of interest, the amount of on-screen movement over a fixed time and the front-back relationship, and extract the face region. Consequently, the occupant detection device 1 according to the embodiment of the present invention can accurately detect the occupant even when a moving subject other than the occupant, whose shape on the imaging surface resembles the occupant's head, is present in the imaging region, or when the image of the occupant's face region is partially hidden by an obstacle.

An embodiment applying the invention made by the present inventor has been described above, but the present invention is not limited by the statements and drawings that form part of this disclosure. That is, other embodiments, examples, and operational techniques devised by those skilled in the art on the basis of the above embodiment are all included within the scope of the present invention.

FIG. 1 is a block diagram of an occupant detection device according to an embodiment of the present invention.
FIG. 2 shows an example of the installation of the imaging unit.
FIG. 3 is a flowchart showing the flow of the face region extraction process according to the embodiment of the present invention.
FIG. 4 shows an original image captured by the imaging unit.
FIG. 5 shows the edge image obtained from the original image shown in FIG. 4.
FIG. 6 is a diagram for explaining the candidate region extraction method.
FIG. 7 shows the candidate regions obtained from the edge image shown in FIG. 5.
FIG. 8 shows the candidate regions superimposed on the original image.
FIG. 9 shows the updated candidate region.
FIG. 10 is a diagram for explaining the method of determining the front-back positional relationship of a plurality of candidate regions.

Explanation of symbols

1: Occupant detection device
2: Imaging unit
3: Calculation unit
4: Storage unit
41: Original image
72, 73: Candidate regions
94: Updated candidate region

Claims (8)

1. An occupant detection device comprising:
imaging means for capturing an image of a vehicle interior;
edge image generation means for generating an edge image of the image captured by the imaging means;
candidate region extraction means for extracting, as a candidate region, an image region sandwiched between edges of the edge image generated by the edge image generation means;
candidate region updating means for determining whether a difference value between a pixel value of a pixel adjacent to the candidate region extracted by the candidate region extraction means and a pixel value of the candidate region is within a predetermined range and, when the difference value is within the predetermined range, integrating the pixel adjacent to the candidate region into the candidate region to update the candidate region; and
region type determination means for determining whether the candidate region processed by the candidate region updating means is an image region corresponding to a face of an occupant in the vehicle.

2. The occupant detection device according to claim 1, wherein the region type determination means determines that the candidate region is an image region corresponding to the face of an occupant in the vehicle when at least one of an aspect ratio, a circularity, and a size of the candidate region processed by the candidate region updating means is within a predetermined range.

3. The occupant detection device according to claim 1 or 2, wherein the region type determination means calculates an amount of movement of the candidate region processed by the candidate region updating means and determines that the candidate region is an image region other than the occupant's face when the amount of movement remains outside a predetermined threshold range for a predetermined time.

4. The occupant detection device according to any one of claims 1 to 3, wherein, when there are a plurality of candidate regions processed by the candidate region updating means, the imaging means images the occupant from the front, and the region type determination means compares, for each of the plurality of candidate regions, the distance between the subject corresponding to the candidate region and the imaging means, and determines that the candidate region corresponding to the subject farthest from the imaging means is the image region corresponding to the face of an occupant in the vehicle.

5. The occupant detection device according to any one of claims 1 to 4, wherein the region type determination means determines that the candidate region is an image region corresponding to the face of an occupant in the vehicle when the uniformity of pixel values within the candidate region processed by the candidate region updating means is within a predetermined threshold range.

6. The occupant detection device according to any one of claims 1 to 5, wherein the region type determination means determines that the candidate region is an image region other than the occupant's face when the shape or position of the candidate region processed by the candidate region updating means does not change by a predetermined value or more for a predetermined time or longer.

7. The occupant detection device according to any one of claims 1 to 6, wherein the image captured by the imaging means is a far-infrared image.

8. An occupant detection method comprising:
a generation step of generating an edge image from an image of a vehicle interior;
an extraction step of extracting a region sandwiched between edges of the edge image as a candidate region;
a determination step of determining whether a difference value between a pixel value of a pixel adjacent to the candidate region and a pixel value of the candidate region is within a predetermined range;
an updating step of integrating, when the difference value is within the predetermined range, the pixel adjacent to the candidate region into the candidate region to update the candidate region; and
a step of determining whether the candidate region after the determination step and the updating step is an image region corresponding to a face of an occupant in the vehicle.
JP2005086980A 2005-03-24 2005-03-24 Occupant detection device and occupant detection method Expired - Fee Related JP4765363B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2005086980A JP4765363B2 (en) 2005-03-24 2005-03-24 Occupant detection device and occupant detection method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2005086980A JP4765363B2 (en) 2005-03-24 2005-03-24 Occupant detection device and occupant detection method

Publications (2)

Publication Number Publication Date
JP2006268565A true JP2006268565A (en) 2006-10-05
JP4765363B2 JP4765363B2 (en) 2011-09-07

Family

ID=37204446

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2005086980A Expired - Fee Related JP4765363B2 (en) 2005-03-24 2005-03-24 Occupant detection device and occupant detection method

Country Status (1)

Country Link
JP (1) JP4765363B2 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20150033047A (en) * 2013-09-23 2015-04-01 에스케이텔레콤 주식회사 Method and Apparatus for Preprocessing Image for Detecting Objects

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103020580B (en) * 2011-09-23 2015-10-28 无锡中星微电子有限公司 Fast face detecting method

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH07257228A (en) * 1994-03-18 1995-10-09 Nissan Motor Co Ltd Display device for vehicle
JPH10149449A (en) * 1996-11-20 1998-06-02 Canon Inc Picture division method, picture identification method, picture division device and picture identification device

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20150033047A (en) * 2013-09-23 2015-04-01 에스케이텔레콤 주식회사 Method and Apparatus for Preprocessing Image for Detecting Objects
KR102150661B1 (en) * 2013-09-23 2020-09-01 에스케이 텔레콤주식회사 Method and Apparatus for Preprocessing Image for Detecting Objects

Also Published As

Publication number Publication date
JP4765363B2 (en) 2011-09-07

Similar Documents

Publication Publication Date Title
CN109788215B (en) Image processing apparatus, computer-readable storage medium, and image processing method
KR101758684B1 (en) Apparatus and method for tracking object
US8345922B2 (en) Apparatus for detecting a pupil, program for the same, and method for detecting a pupil
JPWO2020121973A1 (en) Learning methods for object identification systems, arithmetic processing devices, automobiles, lamps for vehicles, and classifiers
US8055016B2 (en) Apparatus and method for normalizing face image used for detecting drowsy driving
JP5836095B2 (en) Image processing apparatus and image processing method
JP5068134B2 (en) Target area dividing method and target area dividing apparatus
JP6194604B2 (en) Recognizing device, vehicle, and computer executable program
US9367759B2 (en) Cooperative vision-range sensors shade removal and illumination field correction
JP2010113506A (en) Occupant position detection device, occupant position detection method, and occupant position detection program
JP4674179B2 (en) Shadow recognition method and shadow boundary extraction method
JP2007011490A (en) Device and method for detecting road parameter
JP2016206774A (en) Three-dimensional object detection apparatus and three-dimensional object detection method
JP2009205283A (en) Image processing apparatus, method and program
WO2020132920A1 (en) Systems and methods for object recognition
CN102713511A (en) Distance calculation device for vehicle
JP6375911B2 (en) Curve mirror detector
EP2927869A2 (en) Movement amount estimation device, movement amount estimation method, and movement amount estimation program
JP4765363B2 (en) Occupant detection device and occupant detection method
JP6677141B2 (en) Parking frame recognition device
JP2009070097A (en) Vehicle length measuring device and vehicle model determination device
JP2021051348A (en) Object distance estimation apparatus and object distance estimation method
JP6677142B2 (en) Parking frame recognition device
JP2004145592A (en) Motion vector extraction device, method and program, and its recording medium
EP3144853B1 (en) Detection of water droplets on a vehicle camera lens

Legal Events

Date Code Title Description
A621 Written request for application examination

Free format text: JAPANESE INTERMEDIATE CODE: A621

Effective date: 20080227

A977 Report on retrieval

Free format text: JAPANESE INTERMEDIATE CODE: A971007

Effective date: 20101006

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20101012

A521 Written amendment

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20101129

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20110301

A521 Written amendment

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20110406

A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

Effective date: 20110517

A61 First payment of annual fees (during grant procedure)

Free format text: JAPANESE INTERMEDIATE CODE: A61

Effective date: 20110530

R150 Certificate of patent or registration of utility model

Free format text: JAPANESE INTERMEDIATE CODE: R150

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20140624

Year of fee payment: 3

LAPS Cancellation because of no payment of annual fees