WO2014103217A1 - Operation device and operation detection method - Google Patents

Operation device and operation detection method

Info

Publication number
WO2014103217A1
WO2014103217A1 (PCT/JP2013/007304)
Authority
WO
WIPO (PCT)
Prior art keywords
behavior
user
driver
face
control unit
Prior art date
Application number
PCT/JP2013/007304
Other languages
French (fr)
Japanese (ja)
Inventor
Yoichi Naruse
Original Assignee
DENSO Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by DENSO Corporation
Publication of WO2014103217A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013Eye tracking input arrangements
    • B60K35/10
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/012Head tracking input arrangements
    • B60K2360/149

Definitions

  • The present disclosure relates to an operation device and an operation detection method for receiving an operation based on a user's line of sight.
  • Patent Document 1 describes a system that detects an operation based on a user's line of sight.
  • The system detects the coordinates of the position on the screen along the user's line of sight (line-of-sight coordinates), receives a coordinate confirmation instruction from the user, and executes the process corresponding to the line-of-sight coordinates at the time the instruction is received.
  • The present disclosure has been made in view of the above points, and its object is to provide an operation device and an operation detection method that allow operations to be performed easily and freely without using a hand.
  • The operation device includes: a display unit that displays an image on the screen of a display device; a specifying unit that specifies, based on a captured image of the user's face, the gaze point on the screen at which the user is gazing; a detection unit that detects, based on the captured image, a predetermined behavior of moving the user's entire head; and an operation unit that, when the detection unit detects the behavior, considers that the operation corresponding to the gaze point specified by the specifying unit when the behavior was made has been performed.
  • Electronic devices are also known in which a touch panel is arranged on a display so that various operations can be performed by touching an object such as an icon displayed on the screen; applying the present disclosure to such a device enables even more intuitive operation.
  • The display unit displays an image on which one or more objects are drawn on the screen of the display device, and when the detection unit detects the user's behavior and the gaze point specified by the specifying unit when the behavior was made lies in an area where one of the objects is drawn, the operation unit considers that the operation corresponding to that object has been performed.
  • The operation detection method includes: displaying an image on the screen of a display device; specifying, based on a captured image of the user's face, the gaze point on the screen at which the user is gazing; detecting, based on the captured image, a predetermined behavior of moving the user's entire head; and, when the user's behavior is detected, considering that the operation corresponding to the gaze point specified when the behavior was made has been performed.
  • When the user moves the entire head in a predetermined manner, such as turning the face in a predetermined direction while gazing at a specific position on the screen of the display device, the operation corresponding to the user's gaze point is detected. The user can therefore select an operation by gazing at the screen and then confirm the selection with the extremely simple behavior of moving the entire head, allowing easy and free operation without using a hand.
  • FIG. 1 is a block diagram illustrating a configuration of an in-vehicle system 1 according to the first embodiment.
  • The in-vehicle system 1 includes an in-vehicle device 10 that displays various images on the display device 20, and a line-of-sight detection device 30 (eye tracker) that detects the position or the like at which the driver is gazing based on a captured image of the driver's face.
  • The line-of-sight detection device 30 includes a light projecting unit that projects infrared light toward the driver's face, a camera 31 that captures the driver's face, and the like; it locates the driver's eyes in the captured image and analyzes the eye images. The driver's line-of-sight direction is detected from the analysis result, and based on that direction, the coordinates (gaze point) of the position on the screen of the display device 20 at which the driver is gazing are detected.
  • The camera 31 is disposed at a position adjacent to the meter of the host vehicle, tilted upward by a predetermined angle from the horizontal, and captures the driver's face from a viewpoint below the front of the face (see FIG. 2).
  • The in-vehicle device 10 includes a CPU, ROM, RAM, I/O, and the like, and has a control unit 12 that performs overall control of the in-vehicle device 10 and a video signal output unit 11 that outputs video signals to the display device 20 to display various images on its screen.
  • The control unit 12 generates various image data and outputs video signals based on the generated image data to the display device 20 via the video signal output unit 11, thereby displaying various images on the screen of the display device 20.
  • The in-vehicle device 10 generates image data of an operation screen for accepting operations on the device and causes the display device 20 to display the operation screen.
  • One or more objects (icons, etc.) are drawn on this operation screen, and the in-vehicle device 10 detects the operation corresponding to an object when the driver tilts the face sideways (in other words, shakes the face in the horizontal direction) while gazing at that object; hereinafter this behavior is also referred to as the operation confirmation behavior.
  • For example, a behavior in which the driver tilts the face upward or downward, or moves the face back and forth, could also be used as the operation confirmation behavior.
  • Other whole-head behaviors, such as repeatedly shaking the entire head left and right or up and down, may also be detected as the operation confirmation behavior.
  • In response to instructions from the in-vehicle device 10, the line-of-sight detection device 30 detects, based on the captured image of the driver's face, the position coordinates of the driver's left and right eyes on the captured image and the gaze point on the operation screen, and provides them to the in-vehicle device 10.
  • The captured image of the driver's face is taken from a viewpoint below the front of the face, showing the driver's face slightly from below. Therefore, when the position coordinates of the driver's left eye are L(lx, ly) and those of the right eye are R(rx, ry), the difference Δd = ry − ly, i.e., the vertical (y-axis) spacing between the left and right eye positions, is near 0 while the driver's face is facing directly forward (see FIG. 3). As the driver's face tilts sideways, the absolute value of Δd increases.
  • The control unit 12 of the in-vehicle device 10, having acquired the position coordinates L and R, calculates Δd from them, detects the driver's face orientation based on Δd, and further detects the operation confirmation behavior based on the face orientation.
  • By focusing on the positions of the driver's eyes, the driver's face orientation can be detected more easily.
  • Using an image of the driver's face captured from below and focusing on the vertical spacing between the left and right eye positions allows the face orientation to be detected easily and accurately, reducing the processing load when detecting the behavior.
  • The control unit 12 displays the operation screen 100 on the display device 20 and then calculates, based on the position coordinates L and R, the absolute value |Δd(t)| of the above difference at time t; when |Δd(t)| reaches a predetermined threshold D1, that time is set as Tsta.
  • The control unit 12 then periodically calculates |Δd(t)|, and if |Δd(t)| reaches a threshold D2 by time Tend (before a predetermined time has elapsed from Tsta), the operation confirmation behavior is considered to have been made.
  • During detection of the operation confirmation behavior, the control unit 12 specifies the object on the operation screen that the driver is gazing at, based on the gaze point acquired from the line-of-sight detection device 30.
  • If the driver was gazing at the same object 101 until the operation confirmation behavior was detected, the control unit 12 considers that the operation corresponding to the object 101 has been performed and executes the processing corresponding to the object 101.
  • At this time, the control unit 12 may further change the display mode of the object 101.
  • Alternatively, the operation confirmation behavior may be detected as follows: after time Tsta, the control unit 12 periodically calculates |Δd(t)| and also the integral of |Δd(t)| starting from time Tsta; if that integral reaches a threshold D3 by time Tend, the operation confirmation behavior may be considered to have been made.
  • During this detection as well, the control unit 12 may similarly specify the object on the operation screen that the driver is gazing at; if the driver was gazing at the same object 101 until the operation confirmation behavior was detected, the control unit 12 may consider that the operation corresponding to the object 101 has been performed.
  • The control unit 12 may also detect the behavior of tilting the face to the right (right operation confirmation behavior) and the behavior of tilting it to the left (left operation confirmation behavior) as distinct operation confirmation behaviors.
  • Since Δd(t) = ry − ly, Δd(t) takes a negative value when the driver tilts the face to the right and a positive value when the driver tilts it to the left.
  • The direction in which the driver tilted the face can therefore be determined from the sign of Δd(t), and the control unit 12 can detect the right and left operation confirmation behaviors based on Δd(t).
  • When the right operation confirmation behavior is detected while the driver is gazing at an object, the control unit 12 may detect the operation R corresponding to that object; when the left operation confirmation behavior is detected, it may detect the operation L corresponding to that object.
  • For the left operation confirmation behavior, when Δd(t) reaches a predetermined threshold DL1, the control unit 12 specifies, based on the gaze point acquired from the line-of-sight detection device 30, the object on the operation screen that the driver is gazing at at that time (time Tsta) (see FIG. 7).
  • Thereafter, the control unit 12 periodically calculates |Δd(t)|, and if |Δd(t)| reaches a threshold DL2 by time Tend, the left operation confirmation behavior is considered to have been made.
  • For the right operation confirmation behavior, similarly, when Δd(t) falls to or below a predetermined threshold DR1, the control unit 12 identifies the object that the driver is gazing at (see FIG. 7). Thereafter, if Δd(t) falls to or below a threshold DR2 by time Tend, the right operation confirmation behavior is considered to have been made.
  • If the driver was gazing at the same object during detection of the right operation confirmation behavior, the control unit 12 considers that the operation R corresponding to the object has been performed and executes the processing corresponding to the object and operation R.
  • the control unit 12 periodically calculates ⁇ d (t) after time Tsta when ⁇ d (t) becomes equal to or greater than the threshold value DL1, and ⁇ d (t) starting from time Tsta. The integral value of may be calculated. Then, when the integrated value becomes equal to or greater than the threshold value DL3 by time Tend, it may be considered that the left operation confirmation behavior has been made (see FIG. 8).
  • control unit 12 may calculate the integral value of ⁇ d (t) starting from the time Tsta after the time Tsta when ⁇ d (t) becomes equal to or less than the threshold value DR1 with respect to the right-side operation confirmation behavior. . Then, when the integrated value becomes equal to or less than the threshold value DR2 by time Tend, it may be considered that the left operation confirmation behavior has been performed (see FIG. 8).
  • Of course, during detection of the left or right operation confirmation behavior by these methods as well, the control unit 12 similarly specifies the object on the operation screen that the driver is gazing at, and if the driver was gazing at the same object during the detection, the operation L or R is considered to have been performed.
  • By detecting operations in this way, the driver can operate the in-vehicle device 10 easily and freely even when both hands are occupied by driving.
  • In S200, the control unit 12 of the in-vehicle device 10 causes the display device 20 to display the operation screen, and the process proceeds to S205.
  • In S205, the control unit 12 instructs the line-of-sight detection device 30 to detect the position coordinates L and R of the driver's left and right eyes.
  • The line-of-sight detection device 30 detects the position coordinates L and R in response, and when the detection succeeds, provides them to the in-vehicle device 10.
  • In S210, the control unit 12 determines whether the position coordinates L and R were successfully detected and acquired. If a positive determination is obtained (S210: Yes), the process proceeds to S215; if a negative determination is obtained (S210: No), the process returns to S205.
  • In S215, the control unit 12 specifies the position coordinates of the driver's left and right eyes and, based on them, calculates |Δd(t)| (S220).
  • In S225, the control unit 12 determines whether |Δd(t)| is at or above the threshold D1; if so, the process proceeds to S230, and otherwise it returns to S205.
  • In S230, the control unit 12 instructs the line-of-sight detection device 30 to detect the driver's gaze point.
  • The line-of-sight detection device 30 detects the gaze point in response, and when the detection succeeds, provides it to the in-vehicle device 10.
  • In S235, the control unit 12 determines whether the gaze point was successfully detected and acquired. If a positive determination is obtained (S235: Yes), the process proceeds to S240; if a negative determination is obtained (S235: No), the process returns to S230.
  • In S240, the control unit 12 sets the current time as Tsta, fixes the provided gaze point as the gaze point at time Tsta, and the process proceeds to S245.
  • In S245, the control unit 12 specifies the object drawn in the area containing the gaze point on the operation screen displayed on the display device 20 (the object the driver is gazing at), causes the display device 20 to highlight that object, and the process proceeds to S250. If there is no corresponding object, the control unit 12 may return the process to S205.
  • In S250, the control unit 12 determines whether the current time t is still before Tend (i.e., whether the predetermined time from Tsta has not yet elapsed). If a positive determination is obtained (S250: Yes), the process proceeds to S255; if a negative determination is obtained (S250: No), the process proceeds to S310.
  • In S255, the control unit 12 instructs the line-of-sight detection device 30 to detect the driver's gaze point, and the process proceeds to S260.
  • In S260, the control unit 12 determines whether the gaze point was successfully detected and acquired from the line-of-sight detection device 30. If a positive determination is obtained (S260: Yes), the process proceeds to S265; if a negative determination is obtained (S260: No), the process returns to S255.
  • In S265, the control unit 12 fixes the provided gaze point as the driver's current gaze point, and the process proceeds to S270.
  • In S270, the control unit 12 specifies the object drawn in the area containing the gaze point on the operation screen (the object the driver is gazing at) and determines whether it matches the highlighted object (the object being gazed at at time Tsta). If a positive determination is obtained (S270: Yes), the process proceeds to S275; if a negative determination is obtained (S270: No), the process proceeds to S310.
  • In S275, the control unit 12 instructs the line-of-sight detection device 30 to detect the position coordinates L and R of the driver's left and right eyes, and the process proceeds to S280.
  • In S280, the control unit 12 determines whether the position coordinates L and R were successfully detected and acquired from the line-of-sight detection device 30. If a positive determination is obtained (S280: Yes), the process proceeds to S285; if a negative determination is obtained (S280: No), the process returns to S275.
  • In S285, the control unit 12 specifies the position coordinates of the driver's left and right eyes and, based on them, calculates |Δd(t)| (S290), and the process proceeds to S295.
  • In S295, the control unit 12 calculates the integral of |Δd(t)| from time Tsta to the present, and the process proceeds to S300.
  • In S300, the control unit 12 determines whether the integral of |Δd(t)| is at or above the threshold D3; if so, the process proceeds to S305, and otherwise it returns to S250.
  • In S305, the control unit 12 considers that the operation corresponding to the specified object (in other words, the object the driver was gazing at while the operation confirmation behavior was made) has been performed, executes the processing corresponding to that operation, and ends this process.
  • In S310, reached for example when the current time t has passed Tend, the control unit 12 causes the display device 20 to cancel the highlighting of the object that was being gazed at at time Tsta, and ends this process.
  • Note that if the operation confirmation behavior is instead detected by determining whether |Δd(t)| has reached the threshold D2 after time Tsta, the control unit 12 may, in S300, determine whether |Δd(t)| is at or above the threshold D2 instead of judging the integral of |Δd(t)|.
  • If the right and left operation confirmation behaviors are detected as distinct behaviors, the control unit 12 may calculate Δd(t) instead of |Δd(t)| in S220 and S290, and calculate the integral of Δd(t) instead of the integral of |Δd(t)| in S295.
  • In S225, the control unit 12 may compare Δd(t) with the threshold DR1 or DL1 instead of comparing |Δd(t)| with D1.
  • Likewise, the control unit 12 may compare Δd(t) with the threshold DR2 or DL2 instead of comparing |Δd(t)| with D2.
  • Alternatively, the control unit 12 may compare the integral of Δd(t) with the threshold DR3 or DL3 instead of comparing the integral of |Δd(t)| with D3.
  • FIG. 11 is a block diagram illustrating the configuration of the portable device 50 (for example, a smartphone) according to the second embodiment.
  • The portable device 50 includes a display unit 51 that displays various images; a control unit 52, composed of a CPU, ROM, RAM, I/O, and the like, that performs overall control of the portable device 50; and a line-of-sight detection unit 53 (eye tracker) that detects the position at which the user is gazing based on a captured image of the user's face.
  • The display unit 51 has a substantially rectangular screen 51a arranged on the surface of the rectangular-plate-shaped portable device 50, with one short side of the screen 51a serving as the top and the other as the bottom.
  • The portable device 50 is intended to be used with the upper short side of the screen 51a positioned upward.
  • The line-of-sight detection unit 53 includes a light projecting unit that projects infrared light toward the user's face, a camera 53a that captures the user's face, and the like; it locates the user's eyes in the captured image and analyzes the eye images. The user's line-of-sight direction is detected from the analysis result, and based on that direction, the coordinates (gaze point) of the position on the screen 51a of the display unit 51 at which the user is gazing are detected.
  • The camera 53a is disposed adjacent to the lower short side of the screen 51a.
  • The control unit 52 generates various image data and displays various images on the screen 51a of the display unit 51.
  • The same operation detection process as in the first embodiment is performed, and when the user performs the operation confirmation behavior while gazing at an object on the operation screen, the operation corresponding to that object is detected.
  • The control unit 52 of the portable device 50 generates image data of an operation screen for accepting operations on the device and displays the operation screen on the screen 51a of the display unit 51. Since the camera 53a is arranged adjacent to the lower short side of the screen 51a, the face of a user gazing at the screen 51a is normally captured from a viewpoint below the front of the face.
  • The image of the user's face captured by the camera 53a therefore shows the face slightly from below, as in the first embodiment, and the orientation of the user's face can be detected based on Δd(t), the vertical spacing between the positions of the left and right eyes.
  • In accordance with instructions from the control unit 52, the line-of-sight detection unit 53 detects, based on the captured image of the user's face, the position coordinates L and R of the user's left and right eyes on the captured image and the gaze point, and provides them to the control unit 52.
  • The control unit 52, having acquired the position coordinates L and R, calculates Δd from them in the same manner as in the first embodiment, detects the user's face orientation based on Δd, and further detects the operation confirmation behavior. If the user was gazing at the same object during detection of the operation confirmation behavior, the control unit 52 considers that the operation corresponding to that object has been performed.
  • The present disclosure is not limited to the case where such an operation screen is displayed.
  • For example, if the driver or the like is gazing at a predetermined area on the screen of the display device 20 (or the display unit 51 of the portable device 50) while the operation confirmation behavior is performed, an operation corresponding to that area may be detected. The same effect can be obtained in such a case as well.
  • In the above embodiments, the object that the driver or the like is gazing at is specified when the face orientation has tilted sideways by a predetermined amount, and when the operation confirmation behavior is then completed while gazing at that object, the operation corresponding to the object is detected.
  • However, the present disclosure is not limited to this.
  • For example, the control unit 12 of the in-vehicle device 10 or the control unit 52 of the portable device 50 may detect that the driver or the like is gazing at a specific object and measure how far the face orientation is tilted sideways; when the tilt reaches a predetermined degree, these control units may consider that the operation confirmation behavior has been made and detect the operation corresponding to the object. The same effect can be obtained in such a case as well.
  • The in-vehicle device 10 in the first embodiment and the portable device 50 in the second embodiment correspond to an operation device, and the display unit 51 of the portable device 50 corresponds to a display device.
  • In the operation detection process, S200 corresponds to the display unit, S230 and S255 to the specifying unit, S205 to S225 and S275 to S300 to the detection unit, and S270 and S305 to the operation unit.
  • The flowcharts in the present disclosure, or the processing of the flowcharts, are composed of a plurality of sections (also referred to as steps), each expressed as, for example, S100.
  • Each section can be divided into multiple subsections, and multiple sections can be combined into a single section. Each section configured in this manner can also be referred to as a device, module, or means.

Abstract

According to the present invention, a line-of-sight detection device (30) detects, on the basis of an image of a driver's face photographed from below, the driver's fixation point and the position coordinates of the driver's right and left eyes in the photographed image, and provides them to an onboard device (10). The onboard device displays on a display device (20) an operation screen in which an icon or other such object is depicted and, on the basis of the position coordinates acquired from the line-of-sight detection device, calculates the vertical spacing between the positions of the right and left eyes in the photographed image, and based on this detects a behavior in which the driver tilts his or her face sideways. If the driver is gazing at the same object when engaging in this behavior, it is considered that an operation corresponding to that object has been performed. This makes it possible to perform operations easily and freely without using the hands.

Description

Operation device and operation detection method

Cross-reference of related applications

This disclosure is based on Japanese Patent Application No. 2012-282842 filed on December 26, 2012, the contents of which are incorporated herein by reference.
The present disclosure relates to an operation device and an operation detection method for receiving an operation based on a user's line of sight.
Conventionally, in electronic devices such as in-vehicle devices and portable devices, buttons and the like have been used as the interface between the device and the user; in recent years, however, many devices use a touch panel arranged on a display.

With such a touch panel, operations can be performed by touching an icon or the like displayed on the display, enabling more intuitive operation. However, the physical burden of operating with a fingertip is not reduced, and it is difficult to operate freely when both hands are occupied, for example while driving.

In this regard, Patent Document 1 describes a system that detects an operation based on a user's line of sight. The system detects the coordinates of the position on the screen along the user's line of sight (line-of-sight coordinates), receives a coordinate confirmation instruction from the user, and executes the process corresponding to the line-of-sight coordinates at the time the instruction is received.
Patent Document 1: Japanese Laid-Open Patent Publication No. H5-108251
According to the system described in Patent Document 1, an operation can be selected by line of sight, so troublesome key operations become unnecessary; however, the operation that confirms the selection must still be performed by hand, and it remains difficult to operate freely when both hands are occupied.

The present disclosure has been made in view of the above points, and its object is to provide an operation device and an operation detection method that allow operations to be performed easily and freely without using a hand.
According to a first aspect of the present disclosure, an operation device includes: a display unit that displays an image on the screen of a display device; a specifying unit that specifies, based on a captured image of a user's face, the gaze point on the screen at which the user is gazing; a detection unit that detects, based on the captured image, a predetermined behavior of moving the user's entire head; and an operation unit that, when the detection unit detects the user's behavior, considers that the operation corresponding to the gaze point specified by the specifying unit when the behavior was made has been performed.

With such a configuration, when the user moves the entire head in a predetermined manner, such as turning the face in a predetermined direction while gazing at a specific position on the screen of the display device, the operation corresponding to the user's gaze point is detected. The user can therefore select an operation by gazing at the screen and then confirm the selection with the extremely simple behavior of moving the entire head, allowing easy and free operation without using a hand.
As described above, electronic devices are also known in which a touch panel is arranged on a display so that various operations can be performed by touching an object such as an icon displayed on the screen; applying the present disclosure to such a device enables even more intuitive operation.

According to a second aspect of the present disclosure, in the operation device according to the first aspect, the display unit displays an image on which one or more objects are drawn on the screen of the display device, and when the detection unit detects the user's behavior and the gaze point specified by the specifying unit when the behavior was made lies in an area where one of the objects is drawn, the operation unit considers that the operation corresponding to that object has been performed.

With such a configuration, when the user moves the entire head in a predetermined manner while viewing a specific object on the screen, the operation corresponding to that object is detected. The user can thus confirm the gaze-based selection of an object with the extremely simple behavior of moving the entire head, enabling more intuitive, easy, and free operation without using a hand.
According to a third aspect of the present disclosure, an operation detection method includes: displaying an image on the screen of a display device; specifying, based on a captured image of a user's face, the gaze point on the screen at which the user is gazing; detecting, based on the captured image, a predetermined behavior of moving the user's entire head; and, when the user's behavior is detected, considering that the operation corresponding to the gaze point specified when the behavior was made has been performed.

With such a method, when the user moves the entire head in a predetermined manner, such as turning the face in a predetermined direction while gazing at a specific position on the screen of the display device, the operation corresponding to the user's gaze point is detected. The user can therefore select an operation by gazing at the screen and then confirm the selection with the extremely simple behavior of moving the entire head, allowing easy and free operation without using a hand.
The above and other objects, features, and advantages of the present disclosure will become more apparent from the following detailed description with reference to the accompanying drawings. In the drawings:

FIG. 1 is a block diagram showing the configuration of the in-vehicle system in the first embodiment;
FIG. 2 is an explanatory diagram of the mounting position of the camera of the line-of-sight detection device of the in-vehicle system in the first embodiment;
FIG. 3 is an explanatory diagram of the process of detecting the driver's face orientation in the first embodiment;
FIG. 4 is an explanatory diagram of the process of confirming an operation in response to the driver's operation confirmation behavior in the first embodiment;
FIGS. 5 to 8 are explanatory diagrams of examples of the method for detecting the driver's operation confirmation behavior in the first embodiment;
FIGS. 9 and 10 are flowcharts of the operation detection process in the first embodiment;
FIG. 11 is a block diagram showing the configuration of the portable device in the second embodiment; and
FIG. 12 is a schematic diagram showing the appearance of the portable device in the second embodiment.
Hereinafter, embodiments of the present disclosure will be described with reference to the drawings. The embodiments of the present disclosure are not limited to those below, and various forms may be adopted as long as they fall within the technical scope of the present disclosure.
[First Embodiment]

[Description of configuration]

FIG. 1 is a block diagram showing the configuration of the in-vehicle system 1 according to the first embodiment. The in-vehicle system 1 includes an in-vehicle device 10 that displays various images on the display device 20, and a line-of-sight detection device 30 (eye tracker) that detects the position or the like at which the driver is gazing based on a captured image of the driver's face.
The line-of-sight detection device 30 includes a light projecting unit that projects infrared light toward the driver's face, a camera 31 that captures the driver's face, and the like; it locates the driver's eyes in the captured image and analyzes the eye images. The driver's line-of-sight direction is detected from the analysis result, and based on that direction, the coordinates (gaze point) of the position on the screen of the display device 20 at which the driver is gazing are detected. The camera 31 is disposed at a position adjacent to the meter of the host vehicle, tilted upward by a predetermined angle from the horizontal, and captures the driver's face from a viewpoint below the front of the face (see FIG. 2).
The in-vehicle device 10 includes a CPU, ROM, RAM, I/O, and the like, and has a control unit 12 that performs overall control of the in-vehicle device 10 and a video signal output unit 11 that outputs video signals to the display device 20 to display various images on its screen.

The control unit 12 generates various image data and outputs video signals based on the generated image data to the display device 20 via the video signal output unit 11, thereby displaying various images on the screen of the display device 20.
[Description of operation]

(1) Outline of operation

Next, an outline of the operation of the in-vehicle device 10 will be described.

The in-vehicle device 10 generates image data of an operation screen for accepting operations on the device and causes the display device 20 to display the operation screen. One or more objects (icons, etc.) are drawn on this operation screen, and the in-vehicle device 10 detects the operation corresponding to an object when the driver tilts the face sideways (in other words, shakes the face in the horizontal direction) while gazing at that object; hereinafter this behavior is also referred to as the operation confirmation behavior.
Note that, for example, a behavior in which the driver tilts the face upward or downward, or moves the face back and forth, could also be used as the operation confirmation behavior. However, the driver may perform such behaviors inadvertently, and using the behavior of tilting the face sideways as the operation confirmation behavior reflects the driver's intention more accurately. Other whole-head behaviors, such as repeatedly shaking the entire head left and right or up and down, may also be detected as the operation confirmation behavior.

In response to instructions from the in-vehicle device 10, the line-of-sight detection device 30 detects, based on the captured image of the driver's face, the position coordinates of the driver's left and right eyes on the captured image and the gaze point on the operation screen, and provides them to the in-vehicle device 10.
The captured image of the driver's face is taken from a viewpoint below the front of the face, showing the driver's face slightly from below. Therefore, when the position coordinates of the driver's left eye are L(lx, ly) and those of the right eye are R(rx, ry), the difference Δd = ry − ly, i.e., the vertical (y-axis) spacing between the left and right eye positions, is near 0 while the driver's face is facing directly forward (see FIG. 3). As the driver's face tilts sideways, the absolute value of Δd increases.

Accordingly, the control unit 12 of the in-vehicle device 10, having acquired the position coordinates L and R, calculates Δd from them, detects the driver's face orientation based on Δd, and further detects the operation confirmation behavior based on the face orientation.
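To make the geometry concrete, the following Python sketch computes Δd from eye coordinates of the kind an eye tracker might report. The EyePositions type, the function names, and the dead-zone margin are illustrative assumptions, not part of the patent.

```python
from typing import NamedTuple

class EyePositions(NamedTuple):
    """Eye position coordinates on the captured image (pixels)."""
    lx: float  # left-eye x
    ly: float  # left-eye y
    rx: float  # right-eye x
    ry: float  # right-eye y

def delta_d(eyes: EyePositions) -> float:
    """Vertical spacing between right and left eye: Δd = ry - ly.

    Near 0 while the face looks straight toward a camera placed below
    the face; grows in magnitude as the face tilts sideways.
    """
    return eyes.ry - eyes.ly

def tilt_direction(eyes: EyePositions, dead_zone: float = 3.0) -> str:
    """Classify the face tilt from the sign of Δd.

    dead_zone is an illustrative pixel margin, not a value from the patent.
    """
    d = delta_d(eyes)
    if d > dead_zone:
        return "left"    # positive Δd: face tilted to the left
    if d < -dead_zone:
        return "right"   # negative Δd: face tilted to the right
    return "front"
```

Because the camera looks up at the face, a sideways tilt raises one eye relative to the other in the image, which is exactly what the sign and magnitude of Δd capture.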
Of course, the face orientation could also be detected in other ways, for example using an image of the driver's face captured from the front or from above, or an image of the entire head. Besides the eye position coordinates, it is also conceivable to detect feature points in the image of the driver's face and detect the face orientation based on their positions.

However, focusing on the positions of the driver's eyes makes it easier to detect the driver's face orientation. Moreover, using an image of the driver's face captured from below and focusing on the vertical spacing between the left and right eye positions allows the face orientation to be detected easily and accurately, which in turn reduces the processing load when detecting the behavior.
Specifically, referring to FIGS. 4 and 5, the control unit 12 displays the operation screen 100 on the display device 20 and then calculates, based on the position coordinates L and R, the absolute value |Δd(t)| of the above difference at time t. When |Δd(t)| reaches a predetermined threshold D1, the control unit 12 sets that time as Tsta. Further, based on the gaze point acquired from the line-of-sight detection device 30, the control unit 12 may specify the object on the operation screen 100 that the driver is gazing at at time Tsta and cause the display device 20 to highlight that object 101.

Thereafter, the control unit 12 periodically calculates |Δd(t)|, and if |Δd(t)| reaches a threshold D2 by time Tend (i.e., before a predetermined time has elapsed from Tsta), the operation confirmation behavior is considered to have been made.
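A minimal sketch of this two-stage check (|Δd(t)| ≥ D1 starts the window at Tsta; |Δd(t)| ≥ D2 before Tend confirms), assuming a polling loop over eye-tracker frames. The threshold values, window length, and the sample_abs_delta interface are made up for illustration.

```python
import time

D1 = 5.0          # start threshold (illustrative value)
D2 = 12.0         # confirmation threshold (illustrative value)
WINDOW_S = 1.0    # Tend - Tsta (illustrative value)

def detect_confirmation(sample_abs_delta) -> bool:
    """Detect the operation confirmation behavior.

    sample_abs_delta() is assumed to return the current |Δd(t)|
    measured from the latest captured frame.
    """
    # Wait until |Δd(t)| crosses D1; that instant becomes Tsta.
    while sample_abs_delta() < D1:
        time.sleep(0.03)            # ~30 fps polling
    t_sta = time.monotonic()
    # Within the window [Tsta, Tend], check whether |Δd(t)| reaches D2.
    while time.monotonic() - t_sta < WINDOW_S:
        if sample_abs_delta() >= D2:
            return True             # behavior confirmed
        time.sleep(0.03)
    return False                    # window expired: no confirmation
```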
During detection of the operation confirmation behavior, the control unit 12 also specifies, based on the gaze point acquired from the line-of-sight detection device 30, the object on the operation screen that the driver is gazing at. If the driver was gazing at the same object 101 (the highlighted object 101) until the operation confirmation behavior was detected, the control unit 12 considers that the operation corresponding to the object 101 has been performed and executes the processing corresponding to the object 101. At this time, the control unit 12 may further change the display mode of the object 101.
Alternatively, the operation confirmation behavior may be detected as follows: after time Tsta, the control unit 12 periodically calculates |Δd(t)| and also the integral of |Δd(t)| starting from time Tsta; if that integral reaches a threshold D3 by time Tend, the operation confirmation behavior may be considered to have been made (see FIG. 6).
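The integral variant can be sketched as a discrete accumulation over sampled |Δd(t)| values. The rectangle-rule integration, the sampling interval, and the D3 value below are illustrative assumptions.

```python
def detect_confirmation_integral(samples, dt: float, d3: float) -> bool:
    """Integral-based detection: accumulate |Δd(t)| from Tsta and
    compare the running integral against the threshold D3.

    samples: iterable of |Δd(t)| values sampled every dt seconds,
             starting at Tsta and ending at Tend.
    """
    integral = 0.0
    for abs_delta in samples:
        integral += abs_delta * dt   # rectangle-rule integration
        if integral >= d3:
            return True              # confirmation behavior detected
    return False

# Example: 0.5 s of samples at 30 Hz with an assumed D3 of 4.0
# detect_confirmation_integral([10.0] * 15, dt=1/30, d3=4.0) -> True
```

Integrating rather than thresholding a single sample makes the detection less sensitive to one noisy frame, at the cost of a slightly later decision.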
During detection of this operation confirmation behavior as well, the control unit 12 may similarly specify the object on the operation screen that the driver is gazing at; if the driver was gazing at the same object 101 until the behavior was detected, the control unit 12 may consider that the operation corresponding to the object 101 has been performed.

In addition, the control unit 12 may detect the behavior of tilting the face to the right (right operation confirmation behavior) and the behavior of tilting it to the left (left operation confirmation behavior) as distinct operation confirmation behaviors.
As described above, since Δd(t) = ry − ly, Δd(t) takes a negative value when the driver tilts the face to the right and a positive value when the driver tilts it to the left. The direction in which the driver tilted the face can therefore be determined from the sign of Δd(t), and the control unit 12 can detect the right and left operation confirmation behaviors based on Δd(t).
Accordingly, when the right operation confirmation behavior is detected while the driver is gazing at an object, the control unit 12 may detect the operation R corresponding to that object, and when the left operation confirmation behavior is detected, the operation L corresponding to that object.
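Keeping the sign of Δd(t) lets a single loop distinguish the two behaviors, as in this sketch. The DL*/DR* values are placeholders (positive thresholds for left, negative for right), and the sample interface is an assumption.

```python
from typing import Optional

DL1, DL2 = 5.0, 12.0      # left-tilt thresholds (illustrative)
DR1, DR2 = -5.0, -12.0    # right-tilt thresholds (illustrative)

def classify_confirmation(deltas) -> Optional[str]:
    """Classify a sequence of signed Δd(t) samples (Tsta..Tend) as a
    left or right operation confirmation behavior, or neither."""
    started = None
    for d in deltas:
        if started is None:
            if d >= DL1:
                started = "L"            # left behavior started at Tsta
            elif d <= DR1:
                started = "R"            # right behavior started at Tsta
        elif started == "L" and d >= DL2:
            return "operation_L"         # corresponds to operation L
        elif started == "R" and d <= DR2:
            return "operation_R"         # corresponds to operation R
    return None                          # no confirmation within window
```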
Specifically, for the left operation confirmation behavior, when Δd(t) reaches a predetermined threshold DL1, the control unit 12 specifies, based on the gaze point acquired from the line-of-sight detection device 30, the object on the operation screen that the driver is gazing at at that time (time Tsta) (see FIG. 7).

Thereafter, the control unit 12 periodically calculates |Δd(t)|, and if |Δd(t)| reaches a threshold DL2 by time Tend, the left operation confirmation behavior is considered to have been made. If the driver was gazing at the same object during detection of this behavior, the control unit 12 considers that the operation L corresponding to the object has been performed and executes the processing corresponding to the object and operation L.

For the right operation confirmation behavior, similarly, when Δd(t) falls to or below a predetermined threshold DR1, the control unit 12 specifies the object that the driver is gazing at (see FIG. 7). Thereafter, if Δd(t) falls to or below a threshold DR2 by time Tend, the control unit 12 considers that the right operation confirmation behavior has been made.

If the driver was gazing at the same object during detection of the right operation confirmation behavior, the control unit 12 considers that the operation R corresponding to the object has been performed and executes the processing corresponding to the object and operation R.
For the left operation confirmation behavior, the control unit 12 may instead, after the time Tsta at which Δd(t) reached the threshold DL1, periodically calculate Δd(t) and compute the integral of Δd(t) starting from Tsta; if the integral reaches a threshold DL3 by time Tend, the left operation confirmation behavior may be considered to have been made (see FIG. 8).

Similarly, for the right operation confirmation behavior, the control unit 12 may compute the integral of Δd(t) starting from the time Tsta at which Δd(t) fell to the threshold DR1; if the integral falls to or below a threshold DR3 by time Tend, the right operation confirmation behavior may be considered to have been made (see FIG. 8).

Of course, during detection of the left or right operation confirmation behavior by these methods as well, the control unit 12 similarly specifies the object on the operation screen that the driver is gazing at, and if the driver was gazing at the same object during the detection, the operation L or R is considered to have been performed.

By detecting the driver's operations in this way, the driver can operate the in-vehicle device 10 easily and freely even when both hands are occupied by driving.
(2) Operation detection process

Next, as an example, the operation detection process that detects the operation confirmation behavior by computing the integral of |Δd(t)| from time Tsta and determining whether the integral has reached the threshold D3 will be described with reference to the flowcharts in FIGS. 9 and 10. This process is executed when the in-vehicle device 10 starts accepting operations from the driver or the like.
In S200, the control unit 12 of the in-vehicle device 10 causes the display device 20 to display the operation screen, and the process proceeds to S205.

In S205, the control unit 12 instructs the line-of-sight detection device 30 to detect the position coordinates L and R of the driver's left and right eyes. The line-of-sight detection device 30 detects the position coordinates L and R in response, and when the detection succeeds, provides them to the in-vehicle device 10.

In the subsequent S210, the control unit 12 determines whether the position coordinates L and R were successfully detected and acquired. If a positive determination is obtained (S210: Yes), the process proceeds to S215; if a negative determination is obtained (S210: No), the process returns to S205.

In S215, the control unit 12 specifies the position coordinates of the driver's left and right eyes and, based on them, calculates |Δd(t)| described above (S220), and the process proceeds to S225.

In S225, the control unit 12 determines whether |Δd(t)| is at or above the threshold D1. If a positive determination is obtained (S225: Yes), the process proceeds to S230; if a negative determination is obtained (S225: No), the process returns to S205.
In S230, the control unit 12 instructs the line-of-sight detection device 30 to detect the driver's gaze point. The line-of-sight detection device 30 detects the gaze point in response, and when the detection succeeds, provides it to the in-vehicle device 10.

In the subsequent S235, the control unit 12 determines whether the gaze point was successfully detected and acquired. If a positive determination is obtained (S235: Yes), the process proceeds to S240; if a negative determination is obtained (S235: No), the process returns to S230.

In S240, the control unit 12 sets the current time as Tsta, fixes the provided gaze point as the gaze point at time Tsta, and the process proceeds to S245.

In S245, the control unit 12 specifies the object drawn in the area containing the gaze point on the operation screen displayed on the display device 20 (the object the driver is gazing at), causes the display device 20 to highlight that object, and the process proceeds to S250. If there is no corresponding object, the control unit 12 may return the process to S205.

In S250, the control unit 12 determines whether the current time t is still before Tend (i.e., whether the predetermined time from Tsta has not yet elapsed). If a positive determination is obtained (S250: Yes), the process proceeds to S255; if a negative determination is obtained (S250: No), the process proceeds to S310.
In S255, the control unit 12 instructs the line-of-sight detection device 30 to detect the driver's gaze point, and the process proceeds to S260.

In S260, the control unit 12 determines whether the gaze point was successfully detected and acquired from the line-of-sight detection device 30. If a positive determination is obtained (S260: Yes), the process proceeds to S265; if a negative determination is obtained (S260: No), the process returns to S255.

In S265, the control unit 12 fixes the provided gaze point as the driver's current gaze point, and the process proceeds to S270.

In S270, the control unit 12 specifies the object drawn in the area containing the gaze point on the operation screen (the object the driver is gazing at) and determines whether it matches the highlighted object (the object being gazed at at time Tsta). If a positive determination is obtained (S270: Yes), the process proceeds to S275; if a negative determination is obtained (S270: No), the process proceeds to S310.

In S275, the control unit 12 instructs the line-of-sight detection device 30 to detect the position coordinates L and R of the driver's left and right eyes, and the process proceeds to S280.

In S280, the control unit 12 determines whether the position coordinates L and R were successfully detected and acquired from the line-of-sight detection device 30. If a positive determination is obtained (S280: Yes), the process proceeds to S285; if a negative determination is obtained (S280: No), the process returns to S275.
 S285では、制御部12は、ドライバの左右の目の位置座標を特定すると共に、これらに基づき、上述した|Δd(t)|を算出し(S290)、S295に処理を移行する。 In S285, the control unit 12 specifies the position coordinates of the left and right eyes of the driver, calculates | Δd (t) | described above based on these coordinates (S290), and shifts the processing to S295.
 S295では、制御部12は、時刻Tsta以後から現時点にかけての|Δd(t)|の積分値を算出し、S300に処理を移行する。 In S295, the control unit 12 calculates an integrated value of | Δd (t) | from time Tsta to the present time, and proceeds to S300.
 S300では、制御部12は、|Δd(t)|の積分値が閾値D3以上であるか否かを判定する。そして、制御部12は、肯定判定が得られた場合には(S300:Yes)、S305に処理を移行し、否定判定が得られた場合には(S300:No)、S250に処理を移行する。 In S300, the control unit 12 determines whether or not the integrated value of | Δd (t) | is equal to or greater than the threshold value D3. Then, when an affirmative determination is obtained (S300: Yes), the control unit 12 proceeds to S305, and when a negative determination is obtained (S300: No), the control unit 12 proceeds to S250. .
 S305では、制御部12は、特定されたオブジェクト(換言すれば、操作確定挙動がなされた間にドライバにより注視されていたオブジェクト)に対応する操作がなされたものとみなすと共に、該操作に対応する処理を実行し、本処理を終了する。 In S305, the control unit 12 considers that an operation corresponding to the identified object (in other words, an object that was being watched by the driver while the operation confirmation behavior was performed) was performed, and responds to the operation. The process is executed and the present process is terminated.
 一方、現在の時刻tがTendに到達した場合等に移行するS310では、制御部12は、表示装置20に対し、時刻Tstaにて注視されていたオブジェクトの強調表示を解除させ、本処理を終了する。 On the other hand, in S310, which shifts to the case where the current time t reaches Tend, the control unit 12 causes the display device 20 to release the highlighted display of the object being watched at time Tsta, and ends this processing. To do.
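 The S230 to S310 flow can be read as a single loop: latch the gazed-at object at time Tsta, then confirm the operation only if the gaze stays on that object while the integrated head-tilt signal |Δd(t)| reaches D3 before the timeout. The following is a minimal Python sketch of that reading; the tracker and screen interfaces (detect_gaze_point, detect_eye_positions, object_at, highlight, unhighlight), the sampling period, and the timeout and threshold values are hypothetical stand-ins for illustration, not the patent's actual implementation.

```python
import time

TIMEOUT = 2.0  # assumed Tend - Tsta, in seconds
D3 = 50.0      # assumed integral threshold

def detect_operation(tracker, screen, dt=0.05):
    """Sketch of S230 to S310: an operation is confirmed when the user
    keeps gazing at one object while the integrated head-tilt signal
    |Δd(t)| reaches D3 before the timeout."""
    # S230 to S240: wait for a first gaze point, then latch Tsta.
    gaze = None
    while gaze is None:                      # S235: No -> retry S230
        gaze = tracker.detect_gaze_point()   # returns (x, y) or None
    t_sta = time.monotonic()

    # S245: identify and highlight the object under the gaze point.
    target = screen.object_at(gaze)
    if target is None:
        return None                          # the patent returns to S205 here
    screen.highlight(target)

    integral = 0.0                           # running integral of |Δd(t)|
    while time.monotonic() - t_sta < TIMEOUT:    # S250
        gaze = tracker.detect_gaze_point()       # S255
        if gaze is None:
            continue                             # S260: No -> retry S255
        if screen.object_at(gaze) is not target:
            break                                # S270: No -> S310

        eyes = tracker.detect_eye_positions()    # S275: ((xL, yL), (xR, yR)) or None
        if eyes is None:
            continue                             # S280: No -> retry S275
        (_, y_left), (_, y_right) = eyes         # S285
        delta_d = y_left - y_right               # S290: vertical eye offset Δd(t)
        integral += abs(delta_d) * dt            # S295: rectangle-rule integration

        if integral >= D3:                       # S300: Yes -> S305
            return target                        # S305: operation confirmed
        time.sleep(dt)

    screen.unhighlight(target)                   # S310: cancel the highlight
    return None
```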
 Note that, in a configuration where the operation confirmation behavior is detected by determining whether |Δd(t)| has become equal to or greater than the threshold D2 at some point after time Tsta, the control unit 12 may, in S300, determine whether |Δd(t)| is equal to or greater than the threshold D2 instead of evaluating the integral of |Δd(t)|.
 In a configuration where the rightward and leftward operation confirmation behaviors are detected as distinct behaviors, the control unit 12 may calculate Δd(t) instead of |Δd(t)| in S220 and S290, and may calculate the integral of Δd(t) instead of the integral of |Δd(t)| in S295.
 The control unit 12 may also, in S225, compare Δd(t) with the threshold DR1 or DL1 instead of comparing |Δd(t)| with its threshold (specifically, if Δd(t) is a positive value, it may determine whether Δd(t) is equal to or greater than the threshold DL1, and if it is a negative value, whether Δd(t) is equal to or less than the threshold DR1).
 Likewise, in S300, the control unit 12 may compare Δd(t) with the threshold DR2 or DL2 instead of comparing the integral of |Δd(t)| with its threshold (specifically, if Δd(t) is a positive value, it may determine whether Δd(t) is equal to or greater than the threshold DL2, and if it is a negative value, whether Δd(t) is equal to or less than the threshold DR2).
 Alternatively, in S300, the control unit 12 may compare the integral of Δd(t) with the threshold DR3 or DL3 instead of the integral of |Δd(t)| (specifically, if the integral of Δd(t) is a positive value, it may determine whether that integral is equal to or greater than the threshold DL3, and if it is a negative value, whether it is equal to or less than the threshold DR3).
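 Where the sign of Δd(t) distinguishes leftward from rightward tilts, the S300 check splits into per-direction thresholds as just described. The following is a minimal sketch of the two signed variants; all threshold values are assumed for illustration only.

```python
# Assumed threshold values for illustration only.
DL2, DR2 = 30.0, -30.0   # instantaneous thresholds (leftward / rightward)
DL3, DR3 = 50.0, -50.0   # integral thresholds (leftward / rightward)

def confirmed_instantaneous(delta_d):
    """Signed variant of S300 applied to Δd(t) itself (cf. DL2/DR2)."""
    if delta_d >= 0:
        return delta_d >= DL2      # leftward tilt confirmed
    return delta_d <= DR2          # rightward tilt confirmed

def confirmed_integral(delta_d_integral):
    """Signed variant of S300 applied to the integral of Δd(t) (cf. DL3/DR3)."""
    if delta_d_integral >= 0:
        return delta_d_integral >= DL3
    return delta_d_integral <= DR3
```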
 [Second Embodiment]
 FIG. 11 is a block diagram showing the configuration of a mobile device 50 (for example, a smartphone) according to the second embodiment. The mobile device 50 includes a display unit 51 that displays various images; a control unit 52, composed of a CPU, ROM, RAM, I/O, and the like, that performs overall control of the mobile device 50; and a line-of-sight detection unit 53 (eye tracker) that detects the position the user is gazing at based on a captured image of the user's face.
 As shown in FIG. 12, the display unit 51 has a substantially rectangular screen 51a. The screen 51a is arranged on the surface of the mobile device 50, which has a rectangular plate shape, with one short side of the screen 51a at the top and the other at the bottom; the mobile device 50 is intended to be used with the upper short side of the screen 51a positioned upward.
 The line-of-sight detection unit 53 includes a light projecting unit that projects infrared light toward the user's face, a camera 53a that photographs the user's face, and so on. It identifies the position of the user's eyes from the captured image and analyzes the image of the eyes. From the analysis result it detects the user's line-of-sight direction and, based on that direction, detects the coordinates of the position on the screen 51a of the display unit 51 at which the user is gazing (the gaze point). The camera 53a is arranged adjacent to the lower side of the screen 51a.
 The control unit 52 is configured to generate various image data and causes the display unit 51 to display various images on the screen 51a.
 In the mobile device 50 of the second embodiment as well, the same operation detection processing as in the first embodiment is performed: when the user performs the operation confirmation behavior while gazing at an object on the operation screen, the operation corresponding to that object is detected.
 Specifically, the control unit 52 of the mobile device 50 generates image data of an operation screen for accepting operations on the device itself, and displays the operation screen on the screen 51a of the display unit 51. Because the camera 53a is arranged adjacent to the lower short side of the screen 51a, it normally photographs the face of a user who is gazing at the screen 51a from a viewpoint below the front of the face.
 For this reason, the image of the user's face captured by the camera 53a shows the face as if slightly looked up at, as in the first embodiment, and the orientation of the user's face can be detected based on Δd(t), the interval between the vertical positions of the left and right eyes.
 Accordingly, in response to an instruction from the control unit 52, the line-of-sight detection unit 53 detects, based on the captured image of the user's face, the position coordinates L and R of the user's left and right eyes on that image as well as the gaze point, and provides them to the control unit 52.
 Meanwhile, having acquired the position coordinates L and R, the control unit 52 calculates Δd from them in the same manner as in the first embodiment, detects the orientation of the user's face based on Δd, and then detects the operation confirmation behavior. If the user was gazing at the same object throughout the detection of the operation confirmation behavior, the control unit 52 regards the operation corresponding to that object as having been performed.
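 The patent does not give a formula for mapping the eye interval to a face orientation. As a rough sketch under assumed image geometry, the lateral tilt could be estimated from the angle of the line through the two eyes against the horizontal; the function below is an illustrative assumption, not the disclosed method.

```python
import math

def lateral_tilt_degrees(left_eye, right_eye):
    """Estimate the lateral tilt of the face from eye positions in the
    captured image: Δd is the vertical offset between the eyes, and the
    tilt angle is taken against the horizontal eye-to-eye separation."""
    dx = right_eye[0] - left_eye[0]  # horizontal separation (pixels)
    dd = left_eye[1] - right_eye[1]  # Δd: vertical offset (pixels)
    return math.degrees(math.atan2(dd, abs(dx)))
```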
 [Other Embodiments]
 (1) In the operation detection processing of the first and second embodiments, when the driver or user gazes at the same object on the operation screen while performing the operation confirmation behavior, the operation corresponding to that object is detected.
 The configuration is not limited to displaying such an operation screen, however. For example, when the driver or user gazes at a predetermined area on the screen of the display device 20 (or of the display unit 51 of the mobile device 50) while the operation confirmation behavior is performed, an operation corresponding to that area may be detected. Even in such a case, the same effect can be obtained.
 (2) In the operation detection processing of the first and second embodiments, the object the driver or user is gazing at is identified at the stage when that person's face orientation has tilted laterally by a predetermined amount. If the operation confirmation behavior is then completed while the object is still being gazed at, the operation corresponding to that object is detected.
 The configuration is not limited to this, however. For example, at the stage when the control unit 12 of the in-vehicle device 10 or the control unit 52 of the mobile device 50 detects that the driver or user is gazing at a specific object, it may detect the degree of lateral tilt of that person's face orientation. Then, if the degree of tilt changes by a predetermined amount from the initially detected value while the object continues to be gazed at, the control unit may regard the operation confirmation behavior as having been performed and detect the operation corresponding to the object. Even in such a case, the same effect can be obtained.
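 A minimal sketch of this variant, reusing the hypothetical lateral_tilt_degrees helper from the earlier sketch: the tilt is sampled once when gazing at the object begins, and the behavior is regarded as performed when the tilt has drifted an assumed amount from that baseline.

```python
TILT_CHANGE_DEG = 10.0  # assumed required change in tilt, in degrees

def confirmed_by_tilt_change(tracker, screen, target, baseline_tilt):
    """Variant (2): regard the operation confirmation behavior as done
    when the face tilt has changed by TILT_CHANGE_DEG from the tilt
    measured when gazing at `target` began."""
    gaze = tracker.detect_gaze_point()
    if gaze is None or screen.object_at(gaze) is not target:
        return False                       # gaze has left the object
    eyes = tracker.detect_eye_positions()
    if eyes is None:
        return False                       # eye detection failed this frame
    tilt = lateral_tilt_degrees(*eyes)     # helper from the sketch above
    return abs(tilt - baseline_tilt) >= TILT_CHANGE_DEG
```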
 [Correspondence with the Claims]
 The correspondence between the terms used in the description of the above embodiments and the terms used in the claims is shown below.
 The in-vehicle device 10 of the first embodiment and the mobile device 50 of the second embodiment correspond to the operation device, and the display unit 51 of the mobile device 50 corresponds to the display device.
 In the operation detection processing, S200 corresponds to the display unit and to the displaying; S230 and S255 correspond to the specifying unit and to the specifying; S205 to S225 and S275 to S300 correspond to the detection unit and to the detecting; and S270 and S305 correspond to the operation unit and to the regarding of the operation as performed.
 The flowcharts in the present disclosure, and the processing they describe, consist of a plurality of sections (also referred to as steps), each denoted, for example, S100. Each section can be divided into a plurality of subsections, and conversely a plurality of sections can be combined into a single section. Each section configured in this way can also be referred to as a device, module, or means.
 Although the present disclosure has been described with reference to the embodiments, it is to be understood that the disclosure is not limited to those embodiments or structures. The present disclosure encompasses various modifications and variations within an equivalent range. In addition, various combinations and forms, as well as other combinations and forms including only one element, more elements, or fewer elements, also fall within the scope and spirit of the present disclosure.

Claims (7)

  1.  An operation device (10, 50) comprising:
      a display unit (S200) that displays an image on a screen of a display device (20, 51);
      a specifying unit (S230, S255) that specifies, based on a captured image of a user's face, a gaze point on the screen at which the user is gazing;
      a detection unit (S205 to S225, S275 to S300) that detects, based on the captured image, a predetermined behavior in which the user moves the entire head; and
      an operation unit (S270, S305) that, when the behavior of the user is detected by the detection unit, regards an operation corresponding to the gaze point specified by the specifying unit at the time the behavior was performed as having been performed.
  2.  The operation device according to claim 1, wherein
      the display unit displays an image in which one or more objects are drawn on the screen of the display device, and
      when the behavior of the user is detected by the detection unit, the operation unit regards an operation corresponding to one of the objects as having been performed if the gaze point specified by the specifying unit at the time the behavior was performed lies within an area in which that object is drawn.
  3.  The operation device according to claim 1 or 2, wherein the behavior is a movement that tilts the orientation of the face in a lateral direction.
  4.  The operation device according to any one of claims 1 to 3, wherein the detection unit detects the positions of the user's eyes based on the captured image and detects the behavior based on the eye positions.
  5.  The operation device according to claim 4 as dependent on claim 3, wherein
      the captured image is an image of the user's face photographed from below, and
      the detection unit detects the interval between the vertical positions of the user's left and right eyes in the captured image, determines from that interval the degree to which the user has tilted the orientation of the face laterally, and detects the behavior based on that degree.
  6.  The operation device according to any one of claims 1 to 5, wherein the operation device is mounted in a vehicle.
  7.  An operation detection method comprising:
      displaying an image on a screen of a display device (20, 51) (S200);
      specifying, based on a captured image of a user's face, a gaze point on the screen at which the user is gazing (S230, S255);
      detecting, based on the captured image, a predetermined behavior in which the user moves the entire head (S205 to S225, S275 to S300); and
      when the behavior of the user is detected in the detecting, regarding an operation corresponding to the gaze point specified in the specifying at the time the behavior was performed as having been performed (S270, S305).
PCT/JP2013/007304 2012-12-26 2013-12-12 Operation device and operation detection method WO2014103217A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2012282842A JP2014126997A (en) 2012-12-26 2012-12-26 Operation device, and operation detection method
JP2012-282842 2012-12-26

Publications (1)

Publication Number Publication Date
WO2014103217A1 true WO2014103217A1 (en) 2014-07-03

Family

ID=51020341

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2013/007304 WO2014103217A1 (en) 2012-12-26 2013-12-12 Operation device and operation detection method

Country Status (2)

Country Link
JP (1) JP2014126997A (en)
WO (1) WO2014103217A1 (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017038248A1 (en) 2015-09-04 2017-03-09 富士フイルム株式会社 Instrument operation device, instrument operation method, and electronic instrument system
JP6911834B2 (en) * 2016-03-29 2021-07-28 ソニーグループ株式会社 Information processing equipment, information processing methods, and programs
CN109074212B (en) * 2016-04-26 2021-12-31 索尼公司 Information processing apparatus, information processing method, and program
JP6922686B2 (en) 2017-11-20 2021-08-18 トヨタ自動車株式会社 Operating device
JP7192570B2 (en) * 2019-02-27 2022-12-20 株式会社Jvcケンウッド Recording/playback device, recording/playback method and program

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008117333A (en) * 2006-11-08 2008-05-22 Sony Corp Information processor, information processing method, individual identification device, dictionary data generating and updating method in individual identification device and dictionary data generating and updating program
US8788977B2 (en) * 2008-11-20 2014-07-22 Amazon Technologies, Inc. Movement recognition as input mechanism

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010009484A (en) * 2008-06-30 2010-01-14 Denso It Laboratory Inc Onboard equipment control device and onboard equipment control method
JP2011209928A (en) * 2010-03-29 2011-10-20 Ntt Docomo Inc Mobile terminal
JP2011243108A (en) * 2010-05-20 2011-12-01 Nec Corp Electronic book device and electronic book operation method
JP2012073790A (en) * 2010-09-28 2012-04-12 Nintendo Co Ltd Information processing program, information processor, information processing method and information processing system

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016208261A1 (en) * 2015-06-26 2016-12-29 ソニー株式会社 Information processing device, information processing method, and program
US10496186B2 (en) 2015-06-26 2019-12-03 Sony Corporation Information processing apparatus, information processing method, and program

Also Published As

Publication number Publication date
JP2014126997A (en) 2014-07-07


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13866854

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 13866854

Country of ref document: EP

Kind code of ref document: A1