JP2020194294A - Joint point detection apparatus - Google Patents

Joint point detection apparatus

Info

Publication number
JP2020194294A
Authority
JP
Japan
Prior art keywords
joint
joint point
point
points
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
JP2019098788A
Other languages
Japanese (ja)
Inventor
Tadashi Uno (紀 宇野)
Satoshi Mori (聡 毛利)
Hironori Matsuo (弘法 松尾)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Aisin Corp
Original Assignee
Aisin Seiki Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Aisin Seiki Co Ltd
Priority to JP2019098788A
Publication of JP2020194294A
Legal status: Pending

Abstract

PROBLEM TO BE SOLVED: To provide, as an example, a joint point detection apparatus capable of classifying a plurality of joint points acquired from an image for each human body with a small amount of calculation.

SOLUTION: A joint point detection apparatus according to the embodiment comprises: a detection unit that detects a plurality of joint points included in a target area based on at least one of a distance image and a brightness image of the target area; and a classification unit that classifies the plurality of joint points for each human body based on the distance image.

SELECTED DRAWING: Figure 3

Description

The present disclosure relates to a joint point detection device.

Conventionally, there have been known techniques for estimating the coordinates of joint points of a human body from an image captured by an imaging device.

Non-Patent Document 1 discloses a technique that calculates degrees of association between joint points (PAF: Part Affinity Fields) and classifies a plurality of joint points for each human body based on the calculated degrees of association. With this technique, the posture of each human body can be estimated even when a plurality of human bodies appear in the image.

Zhe Cao et al., "Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields" <Internet> https://arxiv.org/pdf/1611.08050.pdf (retrieved April 10, 2019)

However, calculating the degrees of association between joint points is computationally expensive, so detecting the joint points of a human body in real time, for example, requires a processor with high computing power.

As an example, the present disclosure provides a joint point detection device capable of classifying a plurality of joint points acquired from an image for each human body with a small amount of calculation.

As an example, the joint point detection device according to the present disclosure includes a detection unit that detects a plurality of joint points included in a target area based on at least one of a distance image and a brightness image of the target area, and a classification unit that classifies the plurality of joint points for each human body based on the distance image. Classifying the joint points in the target area for each human body based on the distance image reduces the amount of calculation compared with a conventional classification method such as PAF.

In the above joint point detection device, the classification unit calculates, for each of a plurality of connection candidate joint points that are candidates for the joint point connected to a target joint point among the plurality of joint points, a degree of deviation based on the sum of the distance values of the points on the line segment connecting the target joint point and that connection candidate joint point, and determines the connection candidate joint point with the smallest degree of deviation as the joint point connected to the target joint point.

On a line segment connecting joint points of different people, there is very likely some location, such as a background region, where the distance value becomes large. The classification unit therefore determines the connection candidate joint point with a low degree of deviation as the joint point connected to the target joint point. This makes it possible to classify the joint points acquired from the image for each human body with a small amount of calculation.

In the above joint point detection device, the classification unit selects the target joint point from the plurality of joint points in ascending order of distance. Selecting target joint points starting from the nearest one allows the degree of deviation to be obtained appropriately.

In the above joint point detection device, the classification unit calculates the coordinates of the points on the line segment using Bresenham's algorithm. Because Bresenham's algorithm can be implemented with only integer addition, subtraction, and bit shifts, it allows those coordinates to be calculated quickly with a small amount of computation.

In the above joint point detection device, the classification unit calculates an inter-joint-point vector connecting a target joint point among the plurality of joint points and one of a plurality of connection candidate joint points that are candidates for the joint point connected to the target joint point; calculates inter-division-point vectors connecting the division points that divide the line segment between the target joint point and that connection candidate joint point; calculates, for each connection candidate joint point, the degree of coincidence between the direction of the inter-joint-point vector and the directions of the inter-division-point vectors; and determines the connection candidate joint point with the highest degree of coincidence as the joint point connected to the target joint point.

When the target joint point and a connection candidate joint point belong to the same person, the direction of the inter-joint-point vector and the directions of the inter-division-point vectors theoretically coincide, so the degree of coincidence is high. In contrast, when they belong to different people, the degree of coincidence is lower, because the direction of the inter-joint-point vector does not match the directions of the inter-division-point vectors. The classification unit therefore determines the connection candidate joint point with a high degree of coincidence as the joint point connected to the target joint point. This makes it possible to classify the joint points acquired from the image for each human body with a small amount of calculation.

FIG. 1 is a plan view, seen from above, of the interior of a vehicle on which the joint point detection device according to the first embodiment is mounted.
FIG. 2 is a block diagram showing the configuration of the control system according to the first embodiment.
FIG. 3 is a block diagram showing the functional configuration of the ECU according to the first embodiment.
FIG. 4 is a diagram showing an example of joint point information.
FIG. 5 is an explanatory diagram of the procedure for calculating the degree of deviation.
FIG. 6 is a flowchart showing the procedure of the processing executed by the ECU according to the first embodiment.
FIG. 7 is an explanatory diagram of the procedure for calculating the degree of coincidence.
FIG. 8 is an explanatory diagram of the procedure for calculating the degree of coincidence.
FIG. 9 is an explanatory diagram of the procedure for calculating the degree of coincidence.
FIG. 10 is a flowchart showing the procedure of the processing executed by the ECU according to the second embodiment.

Hereinafter, modes for carrying out the joint point detection device according to the present disclosure (hereinafter referred to as "embodiments") will be described in detail with reference to the drawings. These embodiments do not limit the joint point detection device according to the present disclosure. In the following embodiments, identical parts are given identical reference numerals, and duplicate description is omitted.

(First Embodiment)
[1. Configuration of Vehicle 1]
FIG. 1 is a plan view, seen from above, of the interior of a vehicle 1 on which the joint point detection device according to the first embodiment is mounted. As shown in FIG. 1, a plurality of seats 2 are provided in the vehicle interior of the vehicle 1. For example, a driver's seat 2a and a passenger seat 2b are provided at the front of the vehicle interior, and a plurality of rear seats 2c are provided at the rear.

An imaging device 3 is provided at the front of the vehicle interior. The imaging device 3 is, for example, a TOF (Time of Flight) range-imaging camera and captures a brightness image and a distance image. The brightness image is an image whose pixel values are luminance values. The distance image is an image whose pixel values are the distances from each point in the target area (imaging area) of the vehicle interior captured by the imaging device 3 to the imaging device 3 (imaging position).

In the first embodiment, the orientation, angle of view, installation position, and so on of the imaging device 3 are chosen so that all the seats 2 in the vehicle interior, in other words all the occupants, can be imaged. For example, the imaging device 3 may be installed on the dashboard, the rearview mirror, the ceiling, or the like. Alternatively, the imaging device 3 may be arranged at a position where only an occupant seated in a specific seat 2 (for example, the driver's seat 2a) can be imaged.

Here, a single imaging device 3 captures both the brightness image and the distance image, but the vehicle 1 may instead be provided with separate imaging devices for the brightness image and the distance image. The device that captures the distance image is not limited to a TOF range-imaging camera; it may be, for example, a stereo camera or a structured-light 3D scanner.

[2. Configuration of Control System 100]
The vehicle 1 is provided with a control system 100 that includes the joint point detection device. The configuration of the control system 100 will be described with reference to FIG. 2. FIG. 2 is a block diagram showing the configuration of the control system 100 according to the first embodiment.

As shown in FIG. 2, the control system 100 includes the imaging device 3, an in-vehicle device 8, an ECU 10, and an in-vehicle network 20. The ECU 10 is an example of the joint point detection device.

The imaging device 3 is connected to the ECU 10 via an output line such as an NTSC (National Television System Committee) cable, and outputs the captured brightness image and distance image to the ECU 10 via the output line.

The in-vehicle device 8 is a device mounted on the vehicle 1 and controlled by the ECU 10, for example a plurality of airbag devices provided for the plurality of seats 2. An airbag device protects the occupant seated in the seat 2 from impact by deploying an airbag in the event of a collision of the vehicle 1. The airbag device is assumed to be able to switch the deployment force of the airbag under the control of the ECU 10.

The in-vehicle device 8 is not limited to an airbag device. For example, it may be a seat adjustment device that adjusts the position of a seat 2. Such a device may be provided for the driver's seat 2a and the passenger seat 2b and adjust their front-rear positions, height positions, and so on.

The ECU 10 can control the various in-vehicle devices 8 by sending control signals via the in-vehicle network 20. The ECU 10 may also control the brake system, the steering system, and so on.

The ECU 10 includes, for example, a CPU (Central Processing Unit) 11, an SSD (Solid State Drive) 12, a ROM (Read Only Memory) 13, and a RAM (Random Access Memory) 14. The CPU 11 realizes the functions of the joint point detection device by executing a program installed in and stored in a nonvolatile storage device such as the ROM 13. The RAM 14 temporarily stores various data used in the computations of the CPU 11. The SSD 12 is a rewritable nonvolatile storage device and retains data even when the power of the ECU 10 is turned off. The CPU 11, ROM 13, RAM 14, and so on may be integrated in the same package. The ECU 10 may use another logic processor such as a DSP (Digital Signal Processor) or a logic circuit instead of the CPU 11. A rewritable nonvolatile storage device such as an HDD (Hard Disk Drive) or flash memory may be provided instead of the SSD 12, and the SSD 12 or HDD may be provided separately from the ECU 10.

[3. Functional Configuration of ECU 10]
Next, the functional configuration of the ECU 10 will be described with reference to FIG. 3. FIG. 3 is a block diagram showing the functional configuration of the ECU 10 according to the first embodiment.

As shown in FIG. 3, the ECU 10 includes a control unit 50 and a storage unit 60. The control unit 50 includes an acquisition unit 51, a joint point detection unit 52, a classification unit 53, a human body information generation unit 54, and a device control unit 55. The storage unit 60 stores a brightness image 61, a distance image 62, joint point information 63, classified joint point information 64, and human body information 65.

The acquisition unit 51, the joint point detection unit 52, the classification unit 53, the human body information generation unit 54, and the device control unit 55 are realized by the CPU 11 executing the program stored in the ROM 13. These units may instead be realized as hardware circuits. The storage unit 60 is composed of, for example, the SSD 12.

The acquisition unit 51 acquires the brightness image and distance image captured by the imaging device 3 from the imaging device 3 and stores them in the storage unit 60 (the brightness image 61 and distance image 62 in FIG. 3).

The joint point detection unit 52 uses the brightness image 61 stored in the storage unit 60 to detect the joint points included in the target area. A joint point is a feature point indicating the position of a part of the human body; it includes not only joints such as shoulders and elbows but also parts such as the base of the neck, the nose, the eyes, and the ears. The joint points may be detected with, for example, PCM (Part Confidence Maps), but any known technique may be used. The joint point detection unit 52 stores the detected joint points in the storage unit 60 as the joint point information 63.

FIG. 4 is a diagram showing an example of joint point information. As shown in FIG. 4, the joint point information 63 includes, in addition to the two-dimensional coordinates (X coordinate, Y coordinate) of each joint point, information on the body part of each joint point. For example, among the joint points detected from the target area R, the joint points p1 and q1 carry information indicating that their part is the "shoulder". Similarly, the joint points p2 and q2 carry information indicating that their part is the "elbow", and the joint points p3 and q3 carry information indicating that their part is the "neck".
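As an informal illustration (not part of the patent), the joint point information 63 of FIG. 4 could be represented by a record like the following Python sketch; the class name, field names, and the coordinate values are all assumptions made here for illustration.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class JointPoint:
    """One entry of the joint point information 63 (cf. FIG. 4)."""
    x: int                        # X coordinate in the image
    y: int                        # Y coordinate in the image
    part: str                     # body part, e.g. "shoulder", "elbow", "neck"
    person: Optional[str] = None  # person label, filled in later by the classifier

# Hypothetical detections for two people (coordinates invented for illustration)
joints = [
    JointPoint(120,  80, "shoulder"),  # p1
    JointPoint(100, 150, "elbow"),     # p2
    JointPoint(160,  60, "neck"),      # p3
    JointPoint(320,  90, "shoulder"),  # q1
    JointPoint(340, 160, "elbow"),     # q2
    JointPoint(300,  70, "neck"),      # q3
]
```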

From the joint point information 63 alone, it cannot be determined whether a detected joint point belongs to the person M1 or the person M2. The joint points could be classified by person based on PAF (Part Affinity Fields), but PAF involves a large amount of calculation, so detecting the joint points of a human body in real time, for example, would require a processor with high computing power.

The classification unit 53 classifies the joint points by person based on the distance image 62 stored in the storage unit 60.

The classification unit 53 according to the first embodiment calculates a "degree of deviation" using the distance image 62. The degree of deviation is a value indicating how weakly a given joint point (the target joint point described later) is connected to another joint point (a connection candidate joint point described later).

FIG. 5 is an explanatory diagram of the procedure for calculating the degree of deviation. FIG. 5 shows an example in which the joint point p1 is selected as the target joint point.

The classification unit 53 selects, as connection candidate joint points, the joint points of the parts adjacent to the part of the target joint point p1. As shown in FIG. 5, the part of the joint point p1 is the "shoulder", so the classification unit 53 selects the joint points p2 and q2 of the "elbow", a part adjacent to the shoulder, as connection candidate joint points.

Next, the classification unit 53 calculates the degree of deviation between the joint points p1 and p2. Specifically, the classification unit 53 first calculates the coordinates of the points on the line segment L1 whose endpoints are the joint points p1 and p2. This is done with, for example, Bresenham's line algorithm, which places a sequence of points between a given start point and end point to draw an approximate straight line. Because Bresenham's algorithm can be implemented with only integer addition, subtraction, and bit shifts, it can identify the pixels on the line segment L1 quickly with a small amount of computation.
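For reference, a minimal Python sketch of the standard Bresenham line algorithm (the common textbook form, not code taken from the patent) is shown below; it enumerates the pixels of a segment using only integer arithmetic.

```python
def bresenham_line(x0: int, y0: int, x1: int, y1: int) -> list:
    """Pixels on the segment from (x0, y0) to (x1, y1), endpoints included.
    Uses only integer additions, comparisons, and a doubling (bit shift)."""
    points = []
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy
    while True:
        points.append((x0, y0))
        if x0 == x1 and y0 == y1:
            break
        e2 = err << 1  # 2 * err via bit shift
        if e2 >= dy:   # step in x
            err += dy
            x0 += sx
        if e2 <= dx:   # step in y
            err += dx
            y0 += sy
    return points
```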

Next, the classification unit 53 acquires from the distance image 62 the distance value of the pixel at each coordinate on the line segment L1, and calculates the degree of deviation as the sum of the distance values of the pixels on the line segment L1 divided by the number of those pixels. That is, where k is the number of pixels on the line segment L1 and Zi is the distance value of the i-th pixel, the degree of deviation is defined by the following equation (1):

degree of deviation = (Z1 + Z2 + ... + Zk) / k   (1)

The degree of deviation is normalized by the number of pixels k because the number of pixels on a line segment differs from segment to segment. The pixel count k for the line segment L1 may or may not include the endpoints, the joint points p1 and p2.
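Combining the two previous sketches, equation (1) might be evaluated as follows; `distance_image` is assumed to be a 2-D array of per-pixel distance values indexed as [y][x], and whether the endpoints are counted simply follows from `bresenham_line` above.

```python
def deviation_degree(distance_image, p, q) -> float:
    """Equation (1): the sum of the distance values of the pixels on the
    segment p-q, normalized by the number of pixels k on that segment."""
    pixels = bresenham_line(p[0], p[1], q[0], q[1])
    return sum(distance_image[y][x] for x, y in pixels) / len(pixels)
```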

The classification unit 53 calculates the degree of deviation for the line segment L2, whose endpoints are the joint points p1 and q2, in the same way.

Joint points of the same person, such as the joint points p1 and p2 of the person M1, are in most cases connected by a flat surface: the line segment connecting joint points of the same person basically contains only pixels of that person, and background pixels located farther away than that person cannot appear on it.

In contrast, on the line segment L2 connecting joint points of different people, such as p1 and q2, there is very likely some location where the distance value becomes large. In the example of FIG. 5, the line segment L2 connecting the joint points p1 and q2 passes over pixels of the rear seat 2c, which is located behind the persons M1 and M2. Because the distance values of those pixels are larger than the distance values of the pixels of the persons M1 and M2, the degree of deviation for the combination of the joint points p1 and q2 is higher than that for the combination of the joint points p1 and p2.

Of the connection candidate joint points p2 and q2, the classification unit 53 determines the joint point p2, which has the lower degree of deviation, as the "elbow" joint point connected to the target joint point p1. Specifically, the classification unit 53 adds to the joint points p1 and p2 information indicating that they are joint points of the same person M1 (hereinafter, "person information").

Besides the elbow, the classification unit 53 performs the same deviation-based classification for the "neck", the other part adjacent to the shoulder, which is the part of the joint point p1. That is, with the joint point p1 as the target joint point and the joint points p3 and q3 (see FIG. 4) as connection candidate joint points, the classification unit 53 calculates the degree of deviation for the combination p1 and p3 and for the combination p1 and q3, and determines the connection candidate joint point with the lower degree of deviation, namely the joint point p3, as the "neck" joint point connected to the joint point p1.

When the classification of all combinations of all the joint points is complete, the classification unit 53 stores in the storage unit 60 the classified joint point information 64, which is the joint point information 63 with the person information added: the two-dimensional coordinates of each joint point, the part of each joint point, and the person information of each joint point. As a result, the joint points of the person M1, including p1 to p3, carry the same person information (for example, M1), and the joint points of the person M2, including q1 to q3, carry the same person information (for example, M2). Adding person information to the joint point information 63 in this way classifies the joint points detected from the target area R by person.

The human body information generation unit 54 generates, based on the classified joint point information 64 stored in the storage unit 60, human body information including the lengths of the occupant's body parts. For example, by calculating the distance between the left-shoulder joint point p1 and the right-shoulder joint point of the person M1, it can obtain the shoulder width of the person M1. The human body information generation unit 54 generates human body information for each of the persons M1 and M2 using the classified joint point information 64, and stores the generated human body information 65 in the storage unit 60.
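As a simplified sketch of the shoulder-width example (with the caveat that a real implementation would back-project pixel coordinates into metric space using the camera intrinsics, which the patent does not detail), a part length could be approximated from two classified joint points and the distance image as follows.

```python
import math

def part_length(distance_image, a, b) -> float:
    """Approximate length between two joint points, treating each point as
    (X, Y, distance value) as in the second embodiment's coordinate convention.
    Note: this mixes pixel units (X, Y) with distance units (Z); a metric
    result would require back-projection with the camera intrinsics."""
    pa = (a.x, a.y, distance_image[a.y][a.x])
    pb = (b.x, b.y, distance_image[b.y][b.x])
    return math.dist(pa, pb)

# e.g. shoulder width of person M1:
# shoulder_width = part_length(distance_image, left_shoulder, right_shoulder)
```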

The human body information generation unit 54 may also estimate the postures of the persons M1 and M2 based on the classified joint point information 64.

The device control unit 55 controls the various in-vehicle devices 8 mounted on the vehicle 1 based on the human body information 65 stored in the storage unit 60. As an example, it can control an airbag device as the in-vehicle device 8. For instance, when the shoulder width of an occupant in the human body information 65 is below a threshold, it may determine that the occupant is a child and switch the deployment mode of the airbag device to a weak deployment mode that deploys the airbag less forcefully than the normal mode for adults. Deploying the airbag with a force matched to the occupant's physique improves the safety of the airbag.

The device control unit 55 can also automatically adjust the front-rear position, height, and so on of a seat 2 according to the occupant's height and the like by controlling a seat adjustment device as the in-vehicle device 8 based on the human body information 65, saving the occupant the trouble of adjusting the seat position manually.

[4. Specific Operation of ECU 10]
Next, the specific operation of the ECU 10 will be described with reference to FIG. 6. FIG. 6 is a flowchart showing the procedure of the processing executed by the ECU 10 according to the first embodiment. FIG. 6 shows the procedure for deciding one combination for one target joint point.

As shown in FIG. 6, the ECU 10 selects, from the joint points detected in the target area R, the joint point with the smallest distance value as the target joint point (step S101). Next, the ECU 10 selects, from the detected joint points, the joint points of a part adjacent to the part of the target joint point as connection candidate joint points (step S102).

Next, the ECU 10 calculates the coordinates of the points on the line segment connecting the target joint point and a connection candidate joint point (step S103), and calculates the degree of deviation using the distance values at those coordinates (step S104).

The ECU 10 then determines whether the degree of deviation has been calculated for all connection candidate joint points (step S105). If a connection candidate joint point remains whose degree of deviation has not been calculated (step S105, No), the ECU 10 returns to step S103 and performs steps S103 to S105 for that connection candidate joint point.

If it determines in step S105 that the degree of deviation has been calculated for all connection candidate joint points (step S105, Yes), the ECU 10 determines the connection candidate joint point with the lowest degree of deviation as the joint point connected to the target joint point (step S106), ending the processing for one combination for one target joint point.

After deciding all combinations for one target joint point (for example, "shoulder and elbow" and "shoulder and neck" if the target joint point is a shoulder), the ECU 10 selects the joint point with the next smallest distance value as the target joint point (step S101) and performs step S102 onward. At this point, the ECU 10 excludes any target joint point whose combinations have all been decided from the connection candidate joint points for the next target joint point.

If, for example, a joint point of a person seated in a rear seat 2c were selected as the target joint point and a joint point of the person seated in the driver's seat 2a were selected as a connection candidate joint point, the degree of deviation between joint points of different people could end up lower than that between joint points of the same person. By selecting target joint points in order of increasing distance, and by not selecting a target joint point whose combinations have all been decided as a connection candidate for subsequent target joint points, the degree of deviation can be obtained appropriately.
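Pulling the first embodiment together, the loop of FIG. 6 with the ordering and exclusion rules above might look like the following sketch; the adjacency table, the use of links rather than explicit person labels, and all names are assumptions made here (person labels could then be assigned by grouping the connected joint points).

```python
# Hypothetical adjacency table: which parts are candidates for which
ADJACENT_PARTS = {
    "shoulder": ["elbow", "neck"],
    "elbow": ["shoulder"],
    "neck": ["shoulder"],
}

def classify_joints(distance_image, joints):
    """First embodiment (FIG. 6): for each target joint point, connect the
    connection candidate joint point with the smallest degree of deviation."""
    # S101: take target joint points in ascending order of their distance value
    order = sorted(joints, key=lambda j: distance_image[j.y][j.x])
    finished = set()  # targets whose combinations are all decided
    links = []
    for target in order:
        for part in ADJACENT_PARTS.get(target.part, []):
            # S102: candidates of an adjacent part, excluding finished targets
            candidates = [j for j in joints
                          if j.part == part and id(j) not in finished]
            if not candidates:
                continue
            # S103-S106: pick the candidate with the smallest degree of deviation
            best = min(candidates,
                       key=lambda c: deviation_degree(distance_image,
                                                      (target.x, target.y),
                                                      (c.x, c.y)))
            links.append((target, best))
        finished.add(id(target))
    return links
```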

(Second Embodiment)
Next, the method of classifying joint points according to the second embodiment will be described with reference to FIGS. 7 to 9, which are explanatory diagrams of the procedure for calculating the degree of coincidence.

The classification unit 53 according to the second embodiment calculates a "degree of coincidence" using the distance image 62. The degree of coincidence measures how well the direction of the vector connecting the target joint point and a connection candidate joint point matches the directions of the vectors between the division points obtained by dividing the line segment connecting those two joint points.

For example, as shown in FIG. 7, the classification unit 53 calculates the inter-joint-point vector va0 from the target joint point p1 to the connection candidate joint point p2 based on the distance values of the distance image 62.

Specifically, when the three-dimensional coordinates (X coordinate, Y coordinate, and distance value) of the joint points p1 and p2 are P1 = (X1, Y1, Z1) and P2 = (X2, Y2, Z2), respectively, the inter-joint-point vector va0 is (X2 − X1, Y2 − Y1, Z2 − Z1).

The classification unit 53 also sets division points a1, a2, and a3 that divide the line segment connecting the joint points p1 and p2 equally in two-dimensional coordinates. It then calculates, based on the distance values of the distance image 62, the inter-division-point vectors va1 from the joint point p1 to the division point a1, va2 from a1 to a2, va3 from a2 to a3, and va4 from the division point a3 to the joint point p2.

Specifically, when the three-dimensional coordinates of the division points are a1 = (Xa1, Ya1, Za1), a2 = (Xa2, Ya2, Za2), and a3 = (Xa3, Ya3, Za3), the inter-division-point vectors are va1 = (Xa1 − X1, Ya1 − Y1, Za1 − Z1), va2 = (Xa2 − Xa1, Ya2 − Ya1, Za2 − Za1), va3 = (Xa3 − Xa2, Ya3 − Ya2, Za3 − Za2), and va4 = (X2 − Xa3, Y2 − Ya3, Z2 − Za3).

The classification unit 53 then calculates the degree of coincidence of the vector directions for the joint points p1 and p2. Here, the degree of coincidence is defined as 1 − (sin²θ1 + sin²θ2 + ... + sin²θk)/k, where k is the number of divisions of the line segment (k = 4 in the example of FIG. 7) and θi is the angle between the i-th inter-division-point vector and the inter-joint-point vector; in FIG. 7, θ1 is the angle between the inter-division-point vector va1 and the inter-joint-point vector va0.

Since sin²θi = 1 − cos²θi, the expression 1 − (sin²θ1 + sin²θ2 + ... + sin²θk)/k can be rewritten as 1 − ((1 − cos²θ1) + (1 − cos²θ2) + ... + (1 − cos²θk))/k, which equals (cos²θ1 + cos²θ2 + ... + cos²θk)/k. The classification unit 53 therefore calculates the degree of coincidence with the formula {((va1·va0)/(|va1||va0|))² + ((va2·va0)/(|va2||va0|))² + ... + ((vak·va0)/(|vak||va0|))²}/k.
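As a sketch under the same assumptions as before (distance image indexed [y][x], 2-D joint coordinates), the degree of coincidence in this cos² form might be computed as follows, placing the division points by equal 2-D interpolation and reading Z from the nearest pixel.

```python
import math

def coincidence_degree(distance_image, p, q, k: int = 4) -> float:
    """Degree of coincidence = (cos^2 th1 + ... + cos^2 thk) / k, where thi is
    the angle between the i-th inter-division-point vector and the
    inter-joint-point vector (k = number of sub-segments, 4 as in FIG. 7)."""
    def point3(x: float, y: float):
        xi, yi = round(x), round(y)            # nearest pixel for the Z lookup
        return (x, y, distance_image[yi][xi])

    # k + 1 points: the two joint points plus the k - 1 division points
    pts = [point3(p[0] + (q[0] - p[0]) * i / k,
                  p[1] + (q[1] - p[1]) * i / k) for i in range(k + 1)]

    v0 = tuple(e - s for s, e in zip(pts[0], pts[-1]))  # inter-joint-point vector
    total = 0.0
    for s, e in zip(pts, pts[1:]):
        v = tuple(ee - ss for ss, ee in zip(s, e))      # inter-division-point vector
        norms = math.hypot(*v) * math.hypot(*v0)
        cos = sum(a * b for a, b in zip(v, v0)) / norms if norms else 1.0
        total += cos * cos
    return total / k
```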

Similarly, the classification unit 53 calculates the inter-joint-point vector vb0 from the target joint point p1 to the connection candidate joint point q2. It sets division points b1, b2, and b3 that divide the line segment connecting the joint points p1 and q2, calculates the inter-division-point vectors vb1 from p1 to b1, vb2 from b1 to b2, vb3 from b2 to b3, and vb4 from b3 to q2, and then calculates the degree of coincidence using the inter-joint-point vector vb0 and the inter-division-point vectors vb1 to vb4.

When joint points of the same person, such as p1 and p2, are taken as the target joint point and the connection candidate joint point, the degree of coincidence is theoretically 1. As shown in FIG. 8, if the target joint point (p1) and the connection candidate joint point (p2) belong to the same person M1, the direction of the inter-joint-point vector va0 theoretically matches the directions of the inter-division-point vectors va1 to va4, so θ1 through θk are all 0 and the values 1 − cos²θ1, 1 − cos²θ2, ..., 1 − cos²θk are all 0.

In contrast, when joint points of different people, such as p1 and q2, are taken as the target joint point and the connection candidate joint point, the degree of coincidence is lower than 1. As shown in FIG. 9, when the target joint point (p1) and the connection candidate joint point (q2) belong to different persons M1 and M2, the direction of the inter-joint-point vector vb0 does not match the directions of the inter-division-point vectors vb1 to vb4.

Of the connection candidate joint points p2 and q2, the classification unit 53 determines the joint point p2, which has the higher degree of coincidence, as the "elbow" joint point connected to the target joint point p1. Specifically, it adds to the joint points p1 and p2 person information indicating that they are joint points of the same person M1.

Next, the specific operation of the ECU 10 according to the second embodiment will be described with reference to FIG. 10, a flowchart showing the procedure of the processing executed by the ECU according to the second embodiment. FIG. 10 shows the procedure for deciding one combination for one target joint point.

As shown in FIG. 10, the ECU 10 selects, from the joint points detected in the target area R, the joint point with the smallest distance value as the target joint point (step S201). Next, it selects the joint points of a part adjacent to the part of the target joint point as connection candidate joint points (step S202).

Next, the ECU 10 calculates the inter-joint-point vector (step S203), calculates the inter-division-point vectors (step S204), and calculates the degree of coincidence (step S205).

The ECU 10 then determines whether the degree of coincidence has been calculated for all connection candidate joint points (step S206). If a connection candidate joint point remains whose degree of coincidence has not been calculated (step S206, No), the ECU 10 returns to step S203 and performs steps S203 to S206 for that connection candidate joint point.

If it determines in step S206 that the degree of coincidence has been calculated for all connection candidate joint points (step S206, Yes), the ECU 10 determines the connection candidate joint point with the highest degree of coincidence as the joint point connected to the target joint point (step S207), ending the processing for one combination for one target joint point.

(Modification)
The ECU 10 may calculate both the degree of deviation and the degree of coincidence described above and decide the joint point connected to the target joint point based on both. In that case, the ECU 10 may weight either the degree of deviation or the degree of coincidence according to conditions such as the ambient brightness.
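Purely as an assumption-laden sketch of this modification, the two scores could be blended into a single cost; the weight, the normalization of the deviation score, and the idea of tying the weight to ambient brightness are illustrative choices here, not details specified by the patent.

```python
def combined_cost(distance_image, p, q, w: float = 0.5,
                  deviation_scale: float = 1000.0) -> float:
    """Lower is better. `w` weights the (scaled) degree of deviation against
    the degree of coincidence; a real system might adjust `w` with conditions
    such as ambient brightness. `deviation_scale` is a hypothetical constant
    that brings the raw distance average into a comparable [0, 1]-ish range."""
    dev = deviation_degree(distance_image, p, q) / deviation_scale
    coin = coincidence_degree(distance_image, p, q)  # in [0, 1], higher = better
    return w * dev + (1.0 - w) * (1.0 - coin)
```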

In the embodiments described above, the joint point detection unit 52 generates the joint point information 63 using the brightness image 61. This is not limiting: the joint point detection unit 52 may generate the joint point information 63 using the distance image 62, or using both the brightness image 61 and the distance image 62.

The embodiments above show an example in which the ECU 10 as the joint point detection device acquires human body information of occupants in the vehicle interior of the vehicle 1, but the joint point detection device can also be used for other purposes, such as acquiring human body information such as a worker's posture in order to monitor the load on workers in a factory.

As described above, the joint point detection device according to the embodiments (for example, the ECU 10) includes a detection unit (for example, the joint point detection unit 52) that detects a plurality of joint points included in a target area (for example, the target area R) based on at least one of a distance image (for example, the distance image 62) and a brightness image (for example, the brightness image 61) of the target area, and a classification unit (for example, the classification unit 53) that classifies the joint points for each human body based on the distance image. Classifying the joint points in the target area for each human body based on the distance image reduces the amount of calculation compared with a conventional classification method such as PAF.

In the above joint point detection device, the classification unit calculates, for each of a plurality of connection candidate joint points that are candidates for the joint point connected to a target joint point among the joint points, a degree of deviation based on the sum of the distance values of the points on the line segment connecting the target joint point and that connection candidate joint point, and determines the connection candidate joint point with the lowest degree of deviation as the joint point connected to the target joint point.

On a line segment connecting joint points of different people, there is very likely some location, such as a background region, where the distance value becomes large. The connection candidate joint point with a low degree of deviation is therefore determined as the joint point connected to the target joint point, which makes it possible to classify the joint points acquired from the image for each human body with a small amount of calculation.

In the above joint point detection device, the classification unit selects the target joint point from the joint points in ascending order of distance, which allows the degree of deviation to be obtained appropriately.

In the above joint point detection device, the classification unit calculates the coordinates of the points on the line segment using Bresenham's algorithm. Because Bresenham's algorithm can be implemented with only integer addition, subtraction, and bit shifts, those coordinates can be calculated quickly with a small amount of computation.

In the above joint point detection device, the classification unit calculates an inter-joint-point vector connecting a target joint point among the joint points and one of a plurality of connection candidate joint points that are candidates for the joint point connected to the target joint point. It calculates inter-division-point vectors connecting the division points that divide the line segment between the target joint point and that connection candidate joint point, calculates, for each connection candidate joint point, the degree of coincidence between the direction of the inter-joint-point vector and the directions of the inter-division-point vectors, and determines the connection candidate joint point with the highest degree of coincidence as the joint point connected to the target joint point.

When the target joint point and the connection candidate joint point belong to the same person, the direction of the inter-joint-point vector theoretically matches the directions of the inter-division-point vectors, so the degree of coincidence is high. When they belong to different people, the degree of coincidence is lower, because those directions do not match. The classification unit therefore determines the connection candidate joint point with a high degree of coincidence as the joint point connected to the target joint point, which makes it possible to classify the joint points acquired from the image for each human body with a small amount of calculation.

Although embodiments of the present disclosure have been described above, these embodiments and modifications are merely examples and are not intended to limit the scope of the invention. They may be implemented in various other forms, and various omissions, substitutions, combinations, and changes may be made without departing from the gist of the invention. The configurations and shapes of the embodiments and modifications may also be partially interchanged.

1 … vehicle, 2 … seat, 3 … imaging device, 8 … in-vehicle device, 10 … ECU, 50 … control unit, 51 … acquisition unit, 52 … joint point detection unit, 53 … classification unit, 54 … human body information generation unit, 55 … device control unit, 60 … storage unit, 61 … brightness image, 62 … distance image, 63 … joint point information, 64 … classified joint point information, 65 … human body information.

Claims (5)

A joint point detection device comprising:
a detection unit that detects a plurality of joint points included in a target area based on at least one of a distance image and a brightness image of the target area; and
a classification unit that classifies the plurality of joint points for each human body based on the distance image.
The joint point detection device according to claim 1, wherein the classification unit calculates, for each of a plurality of connection candidate joint points that are candidates for the joint point connected to a target joint point among the plurality of joint points, a degree of divergence based on a sum of the distances of the points existing on a line segment connecting the target joint point and that connection candidate joint point, and determines, as the joint point connected to the target joint point, the connection candidate joint point having the smallest degree of divergence among the plurality of connection candidate joint points.
The joint point detection device according to claim 2, wherein the classification unit selects the target joint point from among the plurality of joint points in order from the joint point having the closest distance.
The joint point detection device according to claim 3, wherein the classification unit calculates the coordinates of each point existing on the line segment using Bresenham's algorithm.
The joint point detection device according to any one of claims 1 to 4, wherein the classification unit calculates an inter-joint-point vector connecting a target joint point among the plurality of joint points and one of a plurality of connection candidate joint points that are candidates for the joint point connected to the target joint point, calculates a plurality of inter-division-point vectors connecting a plurality of division points that divide the line segment connecting the target joint point and the one of the plurality of connection candidate joint points, calculates a degree of coincidence between the direction of the inter-joint-point vector and the directions of the plurality of inter-division-point vectors for each of the plurality of connection candidate joint points, and determines, as the joint point connected to the target joint point, the connection candidate joint point having the highest degree of coincidence among the plurality of connection candidate joint points.
JP2019098788A 2019-05-27 2019-05-27 Jointpoint detection apparatus Pending JP2020194294A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2019098788A JP2020194294A (en) 2019-05-27 2019-05-27 Jointpoint detection apparatus

Publications (1)

Publication Number Publication Date
JP2020194294A (en) 2020-12-03

Family

ID=73548643

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2019098788A Pending JP2020194294A (en) 2019-05-27 2019-05-27 Jointpoint detection apparatus

Country Status (1)

Country Link
JP (1) JP2020194294A (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2014155693A (en) * 2012-12-28 2014-08-28 Toshiba Corp Movement information processor and program
JP2019040306A (en) * 2017-08-23 2019-03-14 株式会社 ディー・エヌ・エー Information processing device, information processing program, and information processing method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Angel Martinez-Gonzalez et al., "Real-time Convolutional Networks for Depth-based Human Pose Estimation," 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), October 1, 2018, pp. 41-47. *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023228304A1 (en) * 2022-05-25 2023-11-30 Nec Corporation Key-point associating apparatus, key-point associating method, and non-transitory computer-readable storage medium

Legal Events

Date        Code  Title                                        Description
2022-03-09  A621  Written request for application examination  JAPANESE INTERMEDIATE CODE: A621
2023-02-22  A977  Report on retrieval                          JAPANESE INTERMEDIATE CODE: A971007
2023-03-07  A131  Notification of reasons for refusal          JAPANESE INTERMEDIATE CODE: A131
2023-09-05  A02   Decision of refusal                          JAPANESE INTERMEDIATE CODE: A02