WO2018230526A1 - Input system and input method - Google Patents

Input system and input method

Info

Publication number
WO2018230526A1
Authority
WO
WIPO (PCT)
Prior art keywords
display
gesture
gesture recognition
recognition space
image
Prior art date
Application number
PCT/JP2018/022305
Other languages
French (fr)
Japanese (ja)
Inventor
中井 潤
隆 大河平
Original Assignee
Panasonic Intellectual Property Management Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Panasonic Intellectual Property Management Co., Ltd.
Publication of WO2018230526A1 publication Critical patent/WO2018230526A1/en

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03 Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/033 Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
    • G06F3/0346 Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range

Definitions

  • This disclosure relates to an input system and an input method using gesture input in a vehicle.
  • This disclosure provides a technique that allows a gesture operation to be comfortably performed in a vehicle.
  • the input system includes a display, a sensor, a display device, and an optical plate.
  • the display presents information to passengers in the vehicle.
  • the sensor recognizes a passenger's gesture operation performed in a gesture recognition space set in the vicinity of the display.
  • the display device is installed outside the gesture recognition space.
  • the optical plate is installed between the gesture recognition space and the display device, and forms the gesture guide image displayed on the display device in the gesture recognition space.
  • the display displays information reflecting the gesture operation recognized by the sensor.
  • FIG. 1A is a diagram illustrating a configuration example of gesture input.
  • FIG. 1B is a diagram illustrating a configuration example of gesture input.
  • FIG. 2A is a diagram illustrating a configuration example in which gesture input and an aerial display are combined.
  • FIG. 2B is a diagram illustrating a configuration example in which gesture input and an aerial display are combined.
  • FIG. 3A is a diagram illustrating an installation example in the vehicle of the input system according to the embodiment of the present disclosure.
  • FIG. 3B is a diagram illustrating an installation example in the vehicle of the input system according to the embodiment of the present disclosure.
  • FIG. 3C is a diagram illustrating an installation example in the vehicle of the input system according to the embodiment of the present disclosure.
  • FIG. 4 is a diagram illustrating an example of an installation method of the optical plate and the display.
  • FIG. 5 is a block diagram illustrating a configuration of the input system according to the embodiment of the present disclosure.
  • FIG. 6A is a diagram illustrating a specific example of the gesture operation using the input system according to the embodiment of the present disclosure.
  • FIG. 6B is a diagram illustrating a specific example of the gesture operation using the input system according to the embodiment of the present disclosure.
  • FIG. 6C is a diagram illustrating a specific example of the gesture operation using the input system according to the embodiment of the present disclosure.
  • FIG. 7 is a flowchart showing an operation of the input system according to the embodiment of the present disclosure.
  • FIG. 8A is a diagram illustrating an installation example of the input system according to the modification in the vehicle.
  • FIG. 8B is a diagram illustrating an installation example of the input system according to the modification in the vehicle.
  • FIG. 1A and FIG. 1B are diagrams showing a configuration example of gesture input.
  • FIG. 1A is a configuration example in which a camera is used as the gesture detection sensor 50.
  • a gesture detection camera is installed below the information display 30. The direction of the camera is set so that a space a predetermined distance in front of the screen of the display 30 falls within the angle of view. The space in front of the screen that falls within the camera's angle of view becomes the gesture recognition space S1.
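As an illustration of how such a camera-bounded recognition space can be tested in software, the following sketch checks whether a tracked 3D point lies inside a pyramidal frustum in front of the screen. The angles of view and the distance band are illustrative assumptions, not values from the patent.

```python
import math

def in_recognition_space(point, fov_h_deg=60.0, fov_v_deg=40.0,
                         near_m=0.05, far_m=0.30):
    """Return True if point (x, y, z), in camera coordinates, lies inside
    the gesture recognition space S1: a frustum bounded by the camera's
    angles of view and a near/far distance band in front of the screen."""
    x, y, z = point
    if not (near_m <= z <= far_m):
        return False  # outside the distance band in front of the display
    # Frustum half-width/half-height grow linearly with depth z.
    if abs(x) > z * math.tan(math.radians(fov_h_deg) / 2):
        return False
    if abs(y) > z * math.tan(math.radians(fov_v_deg) / 2):
        return False
    return True

print(in_recognition_space((0.02, 0.01, 0.15)))  # True: inside S1
print(in_recognition_space((0.02, 0.01, 0.50)))  # False: beyond the far bound
```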
  • FIG. 1B is a configuration example in which a highly sensitive capacitive touch panel sensor is used as the gesture detection sensor 50.
  • a capacitive touch panel sensor is installed on the surface of the display 30.
  • a non-contact space close to the surface of the display 30 is a gesture recognition space S1.
  • the user can perform a hover operation in the gesture recognition space S1.
  • the gesture recognition space S1 becomes wider as the sensitivity of the touch panel sensor is higher.
  • an infrared sensor, an ultrasonic sensor, or the like can be used instead of the camera or the capacitive touch panel sensor.
  • gesture input, particularly the gesture input of the configuration example shown in FIG. 1A, has the following problems.
  • (1) It is difficult for the operator to perform a gesture operation within the range of the gesture recognition space S1.
  • the gesture recognition space S1 is an empty space as seen from the operator, and it is difficult for the operator to grasp its extent.
  • (2) Displaying a gesture operation guide on the display 30 is bothersome for everyone other than the operator.
  • (3) To operate the display 30 in front of one's eyes or a device in front of one's eyes, it is natural to touch it directly, so the merit of gesture operation is hard to convey.
  • in the embodiment of the present disclosure, gesture input and an aerial display are therefore used in combination. Specifically, the aerial display is used to display a gesture operation guide image in the air within the gesture recognition space S1.
  • FIG. 2A and FIG. 2B are diagrams showing a configuration example in which gesture input and an aerial display are combined.
  • FIG. 2A is a configuration example in which a camera is used as the gesture detection sensor 50, and FIG. 2B is a configuration example in which a capacitive touch panel sensor is used as the gesture detection sensor 50.
  • the aerial display is realized by the display 41 and the optical plate 42.
  • the optical plate 42, which has undergone the special processing described above, can form an image displayed on the display device 41, installed on one side of the optical plate 42, at the line-symmetric spatial position on the opposite side of the optical plate 42.
  • the installation positions of the optical plate 42 and the display device 41 are determined from the desired position of the guide image I1 in the gesture recognition space S1 set in front of the display 30 and this reflection relationship, as in the sketch below.
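Because the plate images a source point to the line-symmetric position on its other side, the required position of each displayed point can be found by reflecting the desired aerial position of the guide image across the plate plane. A minimal geometric sketch, assuming the plate behaves as a plane of symmetry; the tilt angle and distances are illustrative only:

```python
import numpy as np

def mirror_across_plane(p, plane_point, plane_normal):
    """Reflect point p across the plane through plane_point with the given
    normal.  The optical plate forms a real image of a source point at this
    line-symmetric position, so reflecting the desired aerial position of
    guide image I1 gives where that point must appear on display device 41."""
    n = plane_normal / np.linalg.norm(plane_normal)
    d = np.dot(p - plane_point, n)   # signed distance from p to the plane
    return p - 2.0 * d * n           # line-symmetric point on the other side

# Illustrative numbers: plate tilted 30 degrees, desired guide image 5 cm
# above the plate centre (all coordinates in metres).
plate_point = np.array([0.0, 0.0, 0.0])
plate_normal = np.array([0.0, np.sin(np.radians(60)), np.cos(np.radians(60))])
aerial_image_pos = np.array([0.0, 0.05, 0.0])
print(mirror_across_plane(aerial_image_pos, plate_point, plate_normal))
```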
  • the combination of gesture input and an aerial display has the following advantages. (1) Displaying an aerial gesture operation guide visible only to the operator makes gesture operations within the range of the gesture recognition space S1 easy. (2) Displaying an aerial gesture operation guide visible only to the operator does not bother anyone other than the operator. (3) Since only the gesture operation guide is displayed, visual annoyance for the operator is small. (4) Devices other than the display and equipment directly in front of the operator can be operated intuitively. (5) Installing an aerial display for the gesture operation guide separately from the normal information display 30 secures the amount of information that can be presented.
  • aerial display technology as of 2017 has lower luminance and lower resolution than general liquid crystal displays and organic EL (organic electro-luminescence: OEL) displays, and is not suited to displaying small characters or pictures. In addition, because the image floats in the air in a semi-transparent state, contrast is low and visibility drops in bright environments.
  • the aerial display is therefore preferably used in combination with the normal display 30. On the other hand, since aerial display technology as of 2017 has a narrower viewing angle than a general display, it has an advantage in presenting information only to a specific operator.
  • the above-described aerial display requires the display device 41 and the optical plate 42, so a space for housing them is needed below the display 30.
  • FIGS. 3A to 3C are diagrams illustrating an installation example of the input system according to the embodiment of the present disclosure in a vehicle.
  • FIG. 3A is a schematic view of the vicinity of the driver's seat in the vehicle.
  • a display 30 for presenting information is installed on the dashboard 4.
  • a center display of a car navigation apparatus can be used as the information presentation display 30.
  • the display of a smart phone or a tablet fixed to the holder on the dashboard 4 may be used.
  • the gesture recognition space S1 is set in a space on the near lower side of the display surface of the display 30. Specifically, it is set above the inclined surface of the center console 5, which extends downward toward the driver from the installation position of the display 30 on the dashboard 4.
  • in FIG. 3A, the gesture recognition space S1 is set on the upper side of the inclined surface, but it may be set at the center or on the lower side of the inclined surface.
  • the steering wheel 3a is installed on the right side of the inclined surface of the center console 5, and the driver can easily reach the gesture recognition space S1 with the left hand.
  • FIG. 3B is a schematic diagram showing the positional relationship between the display 30 and the optical plate 42.
  • An optical plate 42 is installed in parallel with the inclined surface of the center console 5.
  • a guide image Ia with a left arrow is displayed in the air above the optical plate 42.
  • FIG. 3C is a schematic view of the positional relationship among the display 30, the optical plate 42, and the display device 41 as seen from the side. From the driver's viewpoint E1, the guide image Ia appears to float above the inclined surface of the center console 5.
  • FIG. 4 is a diagram illustrating an example of an installation method of the optical plate 42 and the display device 41.
  • the display device 41 is housed and installed in a storage box 45.
  • the inside of the storage box 45 is subjected to low reflection processing.
  • a normal liquid crystal display module (LCM) can be used for the display device 41.
  • a light control film (LCF) 43 is attached to the surface of the display 41 on the display surface side.
  • the light control film 43 is a film that suppresses diffused light and improves the parallelism of light, and can improve luminance and visibility when the display device 41 is viewed from the front.
  • the optical plate 42 is installed at the position of the upper lid of the storage box 45.
  • by covering the display device 41, installed on one side of the optical plate 42, with the storage box 45 in this way, the surroundings of the display device 41 can be kept dark, which improves the visibility of the guide image formed in the air on the opposite side of the optical plate 42.
  • in addition, attaching the light control film 43 to the display device 41 and applying low-reflection processing to the inside of the storage box 45 prevents ghost images from forming in the air.
  • the storage box 45 shown in FIG. 4 is installed inside the inclined surface of the center console 5 shown in FIG. 3A, for example.
  • FIG. 5 is a block diagram illustrating a configuration of the input system 2 according to the embodiment of the present disclosure.
  • the input system 2 includes a control device 10, a display 30, an aerial image display device 40, and a gesture detection sensor 50.
  • the aerial image display device 40 includes a display 41 and an optical plate 42 as main members.
  • the light control film 43 shown in FIG. 4 is not essential and can be omitted.
  • the control device 10 includes a processing unit 11, an input / output unit (I / O unit) 12, and a recording unit 13.
  • the processing unit 11 includes a screen control unit 111, a guide control unit 112, a detection information acquisition unit 113, an operation content determination unit 114, and a device control unit 115.
  • the function of the processing unit 11 can be realized by cooperation of hardware resources and software resources.
  • As hardware resources, a CPU (central processing unit), GPU (graphics processing unit), DSP (digital signal processor), FPGA (field-programmable gate array), ROM (read-only memory), RAM (random-access memory), and other LSIs (large-scale integration) can be used.
  • As software resources, programs such as an operating system, applications, and firmware can be used.
  • the recording unit 13 is a nonvolatile memory, and includes a recording medium such as a NAND flash memory chip, an SSD (solid-state drive), and an HDD (hard disk drive).
  • the control device 10 may be mounted in a dedicated housing, or may be mounted in a head unit such as a car navigation device or a display audio unit.
  • in the latter case, a board implementing the functions of the control device 10 may be added to the head unit's housing, or the head unit's existing hardware resources may be used in a time-shared manner.
  • alternatively, the hardware resources of information devices brought in from outside, such as smartphones and tablets, may be used.
  • the display 30 is a display installed in the vehicle interior as described above, and a liquid crystal display or an organic EL display can be used.
  • the gesture detection sensor 50 is a sensor for recognizing a passenger's gesture operation performed in the gesture recognition space S1 set in the vicinity of the display 30 in the vehicle interior. As described above, a camera, a non-contact type touch panel, or the like can be used.
  • the I/O unit 12 outputs the image signals supplied from the processing unit 11 to the display 30 and to the display device 41, and outputs the detection signal supplied from the gesture detection sensor 50 to the processing unit 11.
  • the screen control unit 111 generates the image data to be displayed on the display 30 and outputs it to the display 30 for display.
  • the guide control unit 112 generates the image data to be displayed in the air as a gesture guide image in the gesture recognition space S1 and outputs it to the display device 41 for display.
  • the guide control unit 112 displays a symbol image indicating the operation content in the air as a gesture guide image.
  • graphic symbol marks such as circles, triangles, squares, x-marks, arrows, and crosses may be displayed in the air, or icons representing the operation content may be displayed in the air.
  • the guide control unit 112 may display an image defining the range of the gesture recognition space S1 in the air as a gesture guide image.
  • for example, an image of the frame of the gesture recognition space S1 may be formed in the air.
  • a point image may be displayed in the air at the position of each vertex in the gesture recognition space S1.
  • the detection information acquisition unit 113 acquires detection information detected by the gesture detection sensor 50. For example, the image data of the gesture recognition space S1 photographed by the camera is acquired.
  • the operation content determination unit 114 determines the operation content based on the detection information acquired by the detection information acquisition unit 113. For example, a hand is detected as an object in the acquired image, and the detected hand's movement is tracked.
  • the operation content determination unit 114 identifies the gesture operation content from the detected hand movement. Note that the hand search range in the image may be narrowed down to the region near where the guide image is displayed, as in the sketch below.
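As a rough illustration of this narrowing, the sketch below searches for a moving hand only inside a region of interest around the aerial guide image, using simple frame differencing with OpenCV. The thresholds, the ROI, and frame differencing itself are stand-in assumptions; the patent does not prescribe a particular detection algorithm.

```python
import cv2

def detect_hand(prev_gray, frame, roi):
    """Detect motion blobs (hand candidates) inside roi = (x, y, w, h),
    the region around the aerial guide image, by differencing consecutive
    grayscale frames of the gesture recognition space S1."""
    x, y, w, h = roi
    gray = cv2.GaussianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (5, 5), 0)
    diff = cv2.absdiff(prev_gray[y:y+h, x:x+w], gray[y:y+h, x:x+w])
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    hands = [cv2.boundingRect(c) for c in contours
             if cv2.contourArea(c) > 500]  # drop small noise blobs
    return gray, hands  # updated previous frame, hand boxes inside the ROI
```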
  • the screen control unit 111 causes the display 30 to display an image reflecting the gesture operation determined by the operation content determination unit 114.
  • the screen control unit 111 displays an image (a mark, an icon, a pictogram, a symbol, or the like) indicating that the gesture operation has been accepted.
  • the screen control unit 111 displays an image (for example, an icon during processing and an icon indicating completion of processing) indicating the state of device operation corresponding to the accepted gesture operation.
  • the device control unit 115 executes, on the devices in the vehicle, the operation content corresponding to the gesture operation determined by the operation content determination unit 114. For example, it executes operation of the car navigation device, operation of the display audio unit, operation of the air conditioner, operation of the power windows, turning the room lamp on/off, and the like. Driving operations of the vehicle, such as turning the blinkers on/off, shifting gears, sounding the horn, flashing the headlights (passing), and starting/stopping the wipers, may also be executed. A dispatch-table sketch follows.
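A simple way to realize this step in code is a dispatch table from the determined gesture to a device action, as sketched below. The gesture names and action functions are hypothetical stand-ins; they only illustrate how the device control unit 115 might route operation content to in-vehicle equipment.

```python
# Hypothetical actions; a real system would call the head unit / body APIs.
def volume_down(): print("navigation voice guidance: volume down")
def volume_up():   print("navigation voice guidance: volume up")
def blinker_on():  print("right blinker: on")
def blinker_off(): print("right blinker: off")

GESTURE_ACTIONS = {
    "sweep_left":  volume_down,
    "sweep_right": volume_up,
    "sweep_up":    blinker_on,
    "sweep_down":  blinker_off,
}

def execute(gesture: str) -> None:
    """Dispatch a recognized gesture to the corresponding device action."""
    action = GESTURE_ACTIONS.get(gesture)
    if action is not None:
        action()

execute("sweep_up")  # -> right blinker: on
```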
  • FIGS. 6A to 6C are diagrams illustrating specific examples of gesture operation using the input system 2 according to the embodiment of the present disclosure.
  • FIG. 6A is an example of executing a function, not currently displayed, of a device in front of the operator (near where the gesture operation is performed).
  • in FIG. 6A, the display 30 is the display of the car navigation device, and the volume of the car navigation device's voice guidance is changed by gesture operation.
  • in the gesture recognition space S1 above the optical plate 42, a left-arrow guide image Ia and a right-arrow guide image Ib are formed in the air.
  • the volume is decreased by a gesture in which the operator sweeps the left-arrow guide image Ia to the left, and increased by a gesture sweeping the right-arrow guide image Ib to the right.
  • in FIG. 6A, sweeping the left-arrow guide image Ia to the left lowers the volume, and the level of the volume bar 30a displayed on the display 30 decreases.
  • FIG. 6B shows an example of operating a device that is not in front of the operator.
  • in the gesture recognition space S1 above the optical plate 42, an up-arrow guide image Ic is formed in the air.
  • the right blinker blinks in response to a gesture in which the operator sweeps the up-arrow guide image Ic upward.
  • when the right blinker blinks, an icon 30b indicating that the right blinker is blinking is displayed on the screen of the display 30.
  • although not shown, while the right blinker is blinking, a down-arrow guide image is formed in the air, and the right blinker is turned off by a gesture in which the operator sweeps the down-arrow guide image downward.
  • FIG. 6C shows an example of operating a device that is in front of the operator but difficult to reach.
  • in FIG. 6C, the display 30 is the display of the car navigation apparatus, and the operator flicks or swipes the map displayed on the display 30.
  • a map 30c is displayed on the screen of the display 30, and a left-arrow guide image Ia and a right-arrow guide image Ib are formed in the air in the gesture recognition space S1 above the optical plate 42.
  • the map 30c is flicked or swiped to the left by a gesture in which the operator sweeps the left-arrow guide image Ia to the left, and flicked or swiped to the right by a gesture sweeping the right-arrow guide image Ib to the right.
  • a flick operation results when the hand movement is slower than a predetermined speed, and a swipe operation results when it is at or above that speed; a minimal classifier sketch follows below.
  • in FIG. 6C, the map 30c is flicked to the left by sweeping the left-arrow guide image Ia to the left.
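The flick/swipe distinction above reduces to a speed threshold on the tracked hand movement. A minimal classifier sketch; the 0.5 m/s threshold is an illustrative assumption, not a value from the patent:

```python
def classify_sweep(displacement_m: float, duration_s: float,
                   speed_threshold_mps: float = 0.5) -> str:
    """Classify a hand sweep: slower than the threshold -> flick,
    at or above it -> swipe, as described above."""
    speed = abs(displacement_m) / max(duration_s, 1e-6)  # avoid divide-by-zero
    return "flick" if speed < speed_threshold_mps else "swipe"

print(classify_sweep(0.10, 0.40))  # 0.25 m/s -> flick
print(classify_sweep(0.30, 0.20))  # 1.50 m/s -> swipe
```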
  • FIG. 7 is a flowchart showing the operation of the input system 2 according to the embodiment of the present disclosure.
  • the guide control unit 112 displays a predetermined guide image in the air in the gesture recognition space S1 (S10).
  • the detection information acquisition unit 113 acquires detection information based on an operator's gesture operation detected by the gesture detection sensor 50 (S11).
  • the operation content determination unit 114 identifies the operation content based on the acquired detection information (S12).
  • the screen control unit 111 causes the display 30 to display an image indicating completion of reception of the specified operation content (S13).
  • the device control unit 115 controls the device according to the specified operation content (S14).
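The loop implied by steps S10 to S14 can be sketched as follows. The collaborator objects stand in for the guide control unit 112, the gesture detection sensor 50, the operation content determination unit 114, the screen control unit 111, and the device control unit 115; their method names are hypothetical, not API defined by the patent.

```python
class InputSystem:
    """Skeleton of the S10-S14 cycle of FIG. 7."""

    def __init__(self, guide_control, sensor, determiner,
                 screen_control, device_control):
        self.guide_control = guide_control
        self.sensor = sensor
        self.determiner = determiner
        self.screen_control = screen_control
        self.device_control = device_control

    def step(self):
        self.guide_control.show_guide()                   # S10: aerial guide image
        detection = self.sensor.read()                    # S11: detection information
        operation = self.determiner.determine(detection)  # S12: operation content
        if operation is not None:
            self.screen_control.show_accepted(operation)  # S13: on-screen feedback
            self.device_control.execute(operation)        # S14: device control
```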
  • as described above, according to the present embodiment, displaying a guide image in the gesture recognition space S1 makes gesture operations within the range of the gesture recognition space S1 easy and greatly reduces the probability that a gesture operation misses the recognition space. A passenger in the vehicle can therefore perform gesture operations comfortably.
  • when the guide image is displayed in the air toward the driver, the narrow viewing-angle characteristics mean that a passenger sitting in the passenger seat cannot see it, so passengers other than the operator are not visually bothered. Further, if the guide image is displayed in the air only during gesture operation, the operator is not visually bothered either. Moreover, using the aerial display together with the existing information display secures the amount of information that can be presented; an aerial display alone would limit it.
  • FIGS. 8A and 8B are diagrams showing an installation example of the input system 2 according to the modification in the vehicle.
  • in this modification, the gesture recognition space S1 is set above the center portion of the dashboard 4.
  • the aerial image display device 40 is installed inside the center of the dashboard 4.
  • the information presentation display 30 is installed on the inclined surface of the center console 5 at a position close to the gesture recognition space S1.
  • a display of a head unit such as display audio can be used as the information presentation display 30.
  • an aerial video display device 40 having a display 41 and an optical plate 42 is embedded in the upper part of the joint portion of the steering column 3b with the steering wheel 3a.
  • the guide image I1 is formed behind the steering wheel 3a (above the steering column 3b) as viewed from the driver.
  • the display in the instrument panel 6 can be used as a display for presenting information.
  • although the example of FIG. 4 uses a liquid crystal display module for the display device 41, in applications where the displayed image is limited, the display device 41 may be created by mounting several light-emitting diodes on a substrate.
  • for example, the display device 41 can be created simply by installing eight light-emitting diodes at predetermined positions on the substrate.
  • the guide control unit 112 may adjust the luminance of the light emitting diode according to the brightness in the vehicle.
  • the brightness in the vehicle is determined based on illuminance information detected by an illuminance sensor (not shown) installed in the vehicle.
  • the illuminance at the current position may be acquired from a server of the Japan Meteorological Agency or a private weather company via a wireless communication network.
  • the guide control unit 112 decreases the luminance of the light emitting diode as the illuminance in the vehicle is lower.
  • the luminance of the light emitting diode can be controlled by adjusting the drive current or the PWM (pulse width modulation) ratio.
  • since the original display luminance is low, it is not necessary to reduce the luminance at night.
  • however, this does not exclude luminance control when a liquid crystal display module is used.
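One way to realize such illuminance-dependent dimming is to map the measured cabin illuminance to a PWM duty ratio for the LEDs, as sketched below. The breakpoints and duty limits are illustrative assumptions, not values from the patent.

```python
def led_duty_from_illuminance(lux: float, lux_dark: float = 10.0,
                              lux_bright: float = 10000.0,
                              duty_min: float = 0.05,
                              duty_max: float = 1.0) -> float:
    """Lower cabin illuminance -> lower LED luminance (smaller PWM duty),
    clamped between a dim night level and full daylight brightness."""
    if lux <= lux_dark:
        return duty_min
    if lux >= lux_bright:
        return duty_max
    t = (lux - lux_dark) / (lux_bright - lux_dark)  # linear interpolation
    return duty_min + t * (duty_max - duty_min)

print(round(led_duty_from_illuminance(5.0), 2))      # 0.05 at night
print(round(led_duty_from_illuminance(10000.0), 2))  # 1.0 in daylight
```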
  • the input system (2) includes a display (30), a sensor (50), a display device (41), and an optical plate (42).
  • the display (30) presents information to passengers in the vehicle.
  • the sensor (50) recognizes the occupant's gesture operation performed in the gesture recognition space (S1) set in the vicinity of the display (30).
  • the display device (41) is installed outside the gesture recognition space (S1).
  • the optical plate (42) is installed between the gesture recognition space (S1) and the display device (41), and forms the gesture guide image displayed on the display device (41) in the gesture recognition space (S1).
  • the display (30) displays information reflecting the gesture operation recognized by the sensor (50).
  • the display device (41) displays a symbol image indicating the operation content as the gesture guide image, and the optical plate (42) forms the symbol image in the gesture recognition space (S1).
  • the display device (41) displays an image defining the range of the gesture recognition space (S1) as the gesture guide image, and the optical plate (42) forms the image defining the range of the gesture recognition space (S1) in the gesture recognition space (S1).
  • the display (30) may be a center display (30) installed on the dashboard (4), and in that case the gesture recognition space (S1) is set in a space on the near lower side of the display surface of the center display (30).
  • the gesture recognition space (S1) may be set above the center portion of the dashboard (4), and the display (30) is then installed on the center console (5) at a position close to the gesture recognition space (S1).
  • in that case, since the gesture guide image is displayed at the same height as the windshield, the visibility of the gesture guide image during driving can be improved.
  • the input method includes a step of recognizing a passenger's gesture operation performed in a gesture recognition space (S1) set in the vicinity of a display (30) that presents information to passengers in the vehicle (1). The input method also includes a step of forming, in the gesture recognition space (S1), the gesture guide image displayed on a display device (41) installed outside the gesture recognition space (S1), using an optical plate (42) installed between the gesture recognition space (S1) and the display device (41). The input method further includes a step of displaying information reflecting the recognized gesture operation on the display (30).
  • the present disclosure relates to a technique capable of performing a gesture operation comfortably in a vehicle, and is particularly useful as an input system and an input method.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)
  • Position Input By Displaying (AREA)

Abstract

This input system has: a display; a sensor; an indicator; and an optical plate. The display presents information to an occupant in a vehicle. The sensor recognizes an occupant's gesture carried out in a gesture recognition space set near the display. The indicator is installed outside the gesture recognition space. The optical plate is installed between the gesture recognition space and the indicator, and forms, in the gesture recognition space, a gesture guide image that has been displayed on the indicator. The display displays information reflecting the gesture recognized by the sensor.

Description

Input system and input method
This disclosure relates to an input system and an input method using gesture input in a vehicle.
In recent years, development of aerial displays that display images in the air has been under way. For example, a technique has been developed that forms a real image in the air at an equal distance on the opposite side of a special optical plate by passing the displayed image through the plate (see, for example, Patent Document 1). The optical plate is formed as a single plate by crossing and joining two optical members, each made by laminating strips of mirrored glass. Light emitted from a display or the like is reflected by the two orthogonal optical members and forms an image again in the air on the opposite side.
Also in recent years, development of gesture operations, in which operation input is given by hand movement without touching a touch panel or buttons, has advanced. Since a gesture operation requires no contact with the operation target, the target can be operated from a distance.
Japanese Patent No. 4865088
This disclosure provides a technique that allows gesture operations to be performed comfortably in a vehicle.
An input system according to one aspect of the present disclosure includes a display, a sensor, a display device, and an optical plate. The display presents information to passengers in the vehicle. The sensor recognizes a passenger's gesture operation performed in a gesture recognition space set in the vicinity of the display. The display device is installed outside the gesture recognition space. The optical plate is installed between the gesture recognition space and the display device, and forms the gesture guide image displayed on the display device in the gesture recognition space. The display displays information reflecting the gesture operation recognized by the sensor.
Note that any combination of the above components, and any conversion of the expressions of the present disclosure between an apparatus, a system, a method, a program, a recording medium on which the program is recorded, an autonomous driving vehicle equipped with these, and the like, are also effective as aspects of the present disclosure.
According to the present disclosure, gesture operations can be performed comfortably in the vehicle.
FIG. 1A is a diagram illustrating a configuration example of gesture input. FIG. 1B is a diagram illustrating a configuration example of gesture input. FIG. 2A is a diagram illustrating a configuration example in which gesture input and an aerial display are combined. FIG. 2B is a diagram illustrating a configuration example in which gesture input and an aerial display are combined. FIG. 3A is a diagram illustrating an installation example of the input system according to the embodiment of the present disclosure in a vehicle. FIG. 3B is a diagram illustrating an installation example of the input system according to the embodiment of the present disclosure in a vehicle. FIG. 3C is a diagram illustrating an installation example of the input system according to the embodiment of the present disclosure in a vehicle. FIG. 4 is a diagram illustrating an example of an installation method of the optical plate and the display device. FIG. 5 is a block diagram illustrating a configuration of the input system according to the embodiment of the present disclosure. FIG. 6A is a diagram illustrating a specific example of gesture operation using the input system according to the embodiment of the present disclosure. FIG. 6B is a diagram illustrating a specific example of gesture operation using the input system according to the embodiment of the present disclosure. FIG. 6C is a diagram illustrating a specific example of gesture operation using the input system according to the embodiment of the present disclosure. FIG. 7 is a flowchart showing the operation of the input system according to the embodiment of the present disclosure. FIG. 8A is a diagram illustrating an installation example of the input system according to a modification in a vehicle. FIG. 8B is a diagram illustrating an installation example of the input system according to a modification in a vehicle.
Prior to the description of the embodiment of the present disclosure, problems in the prior art will be briefly described. In recent years, the electrification of vehicles has accelerated, and opportunities for touch and button operations in vehicles are increasing. In the vehicle, passengers basically need to operate various devices while seated, and depending on the position of the operation unit, they may need to stretch their arms considerably. Therefore, introducing gesture operation for in-vehicle equipment can be expected to reduce the operational burden on passengers. However, since a gesture operation is performed in the air, the operation input is not recognized by the device if it falls outside the recognition range of the gesture operation. This situation is a great source of stress for passengers performing gesture operations.
FIGS. 1A and 1B are diagrams showing configuration examples of gesture input. FIG. 1A is a configuration example in which a camera is used as the gesture detection sensor 50. In this configuration example, a gesture detection camera is installed below the information display 30. The direction of the camera is set so that a space a predetermined distance in front of the screen of the display 30 falls within the angle of view. The space in front of the screen that falls within the camera's angle of view becomes the gesture recognition space S1.
FIG. 1B is a configuration example in which a highly sensitive capacitive touch panel sensor is used as the gesture detection sensor 50. In this configuration example, a capacitive touch panel sensor is installed on the surface of the display 30. A non-contact space close to the surface of the display 30 becomes the gesture recognition space S1. The user can perform hover operations in the gesture recognition space S1. The gesture recognition space S1 becomes wider as the sensitivity of the touch panel sensor increases. Note that an infrared sensor, an ultrasonic sensor, or the like can be used instead of the camera or the capacitive touch panel sensor.
Gesture input, particularly the gesture input of the configuration example shown in FIG. 1A, has the following problems. (1) It is difficult for the operator to perform a gesture operation within the range of the gesture recognition space S1. Seen from the operator, the gesture recognition space S1 is an empty space, and it is difficult for the operator to grasp its extent. (2) Displaying a gesture operation guide on the display 30 is bothersome for everyone other than the operator. (3) To operate the display 30 in front of one's eyes or a device in front of one's eyes, it is natural to touch it directly, so the merit of gesture operation is hard to convey.
Therefore, in the embodiment of the present disclosure, gesture input and an aerial display are used in combination. Specifically, the aerial display is used to display a gesture operation guide image in the air within the gesture recognition space S1.
FIGS. 2A and 2B are diagrams showing configuration examples in which gesture input and an aerial display are combined. FIG. 2A is a configuration example in which a camera is used as the gesture detection sensor 50, and FIG. 2B is a configuration example in which a capacitive touch panel sensor is used as the gesture detection sensor 50.
The aerial display is realized by the display device 41 and the optical plate 42. The optical plate 42, which has undergone the special processing described above, can form an image displayed on the display device 41, installed on one side of the optical plate 42, at the line-symmetric spatial position on the opposite side of the optical plate 42. The installation positions of the optical plate 42 and the display device 41 are determined from the desired position of the guide image I1 in the gesture recognition space S1 set in front of the display 30 and this reflection relationship.
The combination of gesture input and an aerial display has the following advantages. (1) Displaying an aerial gesture operation guide visible only to the operator makes gesture operations within the range of the gesture recognition space S1 easy. (2) Displaying an aerial gesture operation guide visible only to the operator does not bother anyone other than the operator. (3) Since only the gesture operation guide is displayed, visual annoyance for the operator is small. (4) Devices other than the display and equipment directly in front of the operator can be operated intuitively. (5) Installing an aerial display for the gesture operation guide separately from the normal information display 30 secures the amount of information that can be presented.
Aerial display technology as of 2017 has lower luminance and lower resolution than general liquid crystal displays and organic EL (organic electro-luminescence: OEL) displays, and is not suited to displaying small characters or pictures. In addition, because the image floats in the air in a semi-transparent state, contrast is low and visibility drops in bright environments. Therefore, the aerial display is preferably used in combination with the normal display 30. On the other hand, since aerial display technology as of 2017 has a narrower viewing angle than a general display, it has an advantage in presenting information only to a specific operator.
An example in which an input system combining gesture input and an aerial display is used in a vehicle is described below. The aerial display described above requires the display device 41 and the optical plate 42, and a space for housing them is needed below the display 30.
FIGS. 3A to 3C are diagrams illustrating an installation example of the input system according to the embodiment of the present disclosure in a vehicle. FIG. 3A is a schematic view of the vicinity of the driver's seat in the vehicle. A display 30 for presenting information is installed on the dashboard 4. For example, a center display of a car navigation apparatus can be used as the information presentation display 30. The display of a smartphone or tablet fixed to a holder on the dashboard 4 may also be used.
The gesture recognition space S1 is set in a space on the near lower side of the display surface of the display 30. Specifically, it is set above the inclined surface of the center console 5, which extends downward toward the driver from the installation position of the display 30 on the dashboard 4. In FIG. 3A, the gesture recognition space S1 is set on the upper side of the inclined surface, but it may be set at the center or on the lower side of the inclined surface. The steering wheel 3a is installed on the right side of the inclined surface of the center console 5, and the driver can easily reach the gesture recognition space S1 with the left hand.
FIG. 3B is a schematic diagram showing the positional relationship between the display 30 and the optical plate 42. The optical plate 42 is installed in parallel with the inclined surface of the center console 5. A left-arrow guide image Ia is displayed in the air above the optical plate 42. FIG. 3C is a schematic view of the positional relationship among the display 30, the optical plate 42, and the display device 41 as seen from the side. From the driver's viewpoint E1, the guide image Ia appears to float above the inclined surface of the center console 5.
FIG. 4 is a diagram illustrating an example of an installation method of the optical plate 42 and the display device 41. The display device 41 is housed in a storage box 45. The inside of the storage box 45 is given low-reflection processing. A normal liquid crystal display module (LCM) can be used for the display device 41. A light control film (LCF) 43 is attached to the display-surface side of the display device 41. The light control film 43 suppresses diffused light and improves the parallelism of light, improving luminance and visibility when the display device 41 is viewed from the front. The optical plate 42 is installed at the position of the upper lid of the storage box 45.
By covering the display device 41, installed on one side of the optical plate 42, with the storage box 45 in this way, the surroundings of the display device 41 can be kept dark, which improves the visibility of the guide image formed in the air on the opposite side of the optical plate 42. In addition, attaching the light control film 43 to the display device 41 and applying low-reflection processing to the inside of the storage box 45 prevents ghost images from forming in the air. The storage box 45 shown in FIG. 4 is installed, for example, inside the inclined surface of the center console 5 shown in FIG. 3A.
FIG. 5 is a block diagram illustrating a configuration of the input system 2 according to the embodiment of the present disclosure. The input system 2 includes a control device 10, a display 30, an aerial image display device 40, and a gesture detection sensor 50. The aerial image display device 40 includes a display device 41 and an optical plate 42 as its main members. The light control film 43 shown in FIG. 4 is not essential and can be omitted.
The control device 10 includes a processing unit 11, an input/output unit (I/O unit) 12, and a recording unit 13. The processing unit 11 includes a screen control unit 111, a guide control unit 112, a detection information acquisition unit 113, an operation content determination unit 114, and a device control unit 115. The functions of the processing unit 11 can be realized by the cooperation of hardware resources and software resources. As hardware resources, a CPU (central processing unit), GPU (graphics processing unit), DSP (digital signal processor), FPGA (field-programmable gate array), ROM (read-only memory), RAM (random-access memory), and other LSIs (large-scale integration) can be used. As software resources, programs such as an operating system, applications, and firmware can be used. The recording unit 13 is a nonvolatile memory and includes a recording medium such as a NAND flash memory chip, an SSD (solid-state drive), or an HDD (hard disk drive).
The control device 10 may be mounted in a dedicated housing, or may be mounted in a head unit such as a car navigation device or a display audio unit. In the latter case, a board implementing the functions of the control device 10 may be added to the head unit's housing, or the head unit's existing hardware resources may be used in a time-shared manner. Alternatively, the hardware resources of information devices brought in from outside, such as smartphones and tablets, may be used.
The display 30 is a display installed in the vehicle interior as described above, and a liquid crystal display or an organic EL display can be used. The gesture detection sensor 50 is a sensor for recognizing a passenger's gesture operation performed in the gesture recognition space S1 set in the vicinity of the display 30 in the vehicle interior. As described above, a camera, a non-contact touch panel, or the like can be used.
The I/O unit 12 outputs the image signals supplied from the processing unit 11 to the display 30 and to the display device 41, and outputs the detection signal supplied from the gesture detection sensor 50 to the processing unit 11.
The screen control unit 111 generates the image data to be displayed on the display 30 and outputs it to the display 30 for display. The guide control unit 112 generates the image data to be displayed in the air as a gesture guide image in the gesture recognition space S1 and outputs it to the display device 41 for display. For example, the guide control unit 112 displays a symbol image indicating the operation content in the air as the gesture guide image. For example, graphic symbol marks such as circles, triangles, squares, x-marks, arrows, and crosses may be displayed in the air, or icons representing the operation content may be displayed in the air.
The guide control unit 112 may also display an image defining the range of the gesture recognition space S1 in the air as the gesture guide image. For example, an image of the frame of the gesture recognition space S1 may be formed in the air, or a point image may be displayed in the air at the position of each vertex of the gesture recognition space S1.
The detection information acquisition unit 113 acquires detection information detected by the gesture detection sensor 50, for example the image data of the gesture recognition space S1 captured by the camera. The operation content determination unit 114 determines the operation content based on the detection information acquired by the detection information acquisition unit 113. For example, a hand is detected as an object in the acquired image, and the detected hand's movement is tracked. The operation content determination unit 114 identifies the gesture operation content from the detected hand movement. Note that the hand search range in the image may be narrowed down to the region near where the guide image is displayed.
The screen control unit 111 causes the display 30 to display an image reflecting the gesture operation determined by the operation content determination unit 114. The screen control unit 111 displays an image (a mark, an icon, a pictogram, a symbol, or the like) indicating that the gesture operation has been accepted, as well as an image indicating the status of the device operation corresponding to the accepted gesture operation (for example, an in-progress icon or a completion icon).
The device control unit 115 executes, on the devices in the vehicle, the operation content corresponding to the gesture operation determined by the operation content determination unit 114. For example, it executes operation of the car navigation device, operation of the display audio unit, operation of the air conditioner, operation of the power windows, turning the room lamp on/off, and the like. Driving operations of the vehicle, such as turning the blinkers on/off, shifting gears, sounding the horn, flashing the headlights (passing), and starting/stopping the wipers, may also be executed.
 図6A~図6Cは、本開示の実施の形態に係る入力システム2を用いたジェスチャ操作の具体例を示す図である。図6Aは、操作者の目の前(ジェスチャ操作を実施する近辺)にある機器の、表示されていない機能を実行する場合の例である。 6A to 6C are diagrams illustrating specific examples of the gesture operation using the input system 2 according to the embodiment of the present disclosure. FIG. 6A is an example in the case of executing a function that is not displayed on a device in front of the operator's eyes (in the vicinity of performing the gesture operation).
 図6Aにおいて、ディスプレイ30はカーナビゲーション装置のディスプレイであり、カーナビゲーション装置の音声案内の音量をジェスチャ操作で変更する。光学プレート42の上空のジェスチャ認識空間S1には、左矢印のガイド映像Iaと右矢印のガイド映像Ibが空中に結像されている。操作者が、左矢印のガイド映像Iaを左方向にはらうジェスチャで音量が低下し、右矢印のガイド映像Ibを右方向にはらうジェスチャで音量が増大する。図6Aでは、左矢印のガイド映像Iaを左方向にはらうことにより、音量が低下し、ディスプレイ30に表示されたボリュームバー30aの目盛りが低下している。 6A, a display 30 is a display of the car navigation device, and the volume of the voice guidance of the car navigation device is changed by a gesture operation. In the gesture recognition space S1 above the optical plate 42, a left guide image Ia and a right guide image Ib are imaged in the air. The volume is decreased by the gesture of the operator receiving the guide image Ia indicated by the left arrow in the left direction, and the volume is increased by the gesture of receiving the guide image Ib indicated by the right arrow in the right direction. In FIG. 6A, the volume of the volume bar 30a displayed on the display 30 is reduced by lowering the volume by moving the guide image Ia indicated by the left arrow in the left direction.
 図6Bは、操作者の目の前にない機器を操作する場合の例である。光学プレート42の上空のジェスチャ認識空間S1には、上矢印のガイド映像Icが空中に結像されている。操作者が、上矢印のガイド映像Icを上方向にはらうジェスチャで右ウインカが点滅する。右ウインカが点滅すると、ディスプレイ30の画面に、右ウインカが点滅していることを示すアイコン30bが表示される。なお図示しないが右ウインカの点滅中、下矢印のガイド映像が空中に結像され、操作者が下矢印のガイド映像を下方向にはらうジェスチャで右ウインカが消灯する。 FIG. 6B shows an example of operating a device that is not in front of the operator. In the gesture recognition space S1 above the optical plate 42, a guide image Ic with an upward arrow is imaged in the air. The right blinker blinks with a gesture in which the operator receives the upward arrow guide image Ic upward. When the right turn signal blinks, an icon 30b indicating that the right turn signal blinks is displayed on the screen of the display 30. Although not shown, while the right blinker is blinking, a guide image indicated by a down arrow is formed in the air, and the right blinker is turned off by a gesture in which the operator receives the guide image indicated by the down arrow downward.
 FIG. 6C shows an example of operating a device that is in front of the operator but hard to reach. In FIG. 6C, the display 30 is the display of a car navigation device, and the map displayed on the display 30 is flicked or swiped. A map 30c is displayed on the screen of the display 30, and in the gesture recognition space S1 above the optical plate 42, a left-arrow guide image Ia and a right-arrow guide image Ib are formed in the air. A gesture of sweeping the left-arrow guide image Ia to the left flicks or swipes the map 30c to the left, and a gesture of sweeping the right-arrow guide image Ib to the right flicks or swipes the map 30c to the right. When the hand movement is slower than a predetermined speed, the operation is treated as a flick; when it is at or above the predetermined speed, it is treated as a swipe. In FIG. 6C, the left-arrow guide image Ia has been swept to the left, so the map 30c has been flicked to the left.
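 A minimal sketch of this flick/swipe distinction, assuming a concrete threshold value (the patent only says "a predetermined speed"; all names and numbers here are illustrative):

```python
# Hypothetical sketch: classify a hand motion as a flick or a swipe
# by comparing its speed with a predetermined threshold, as above.
PREDETERMINED_SPEED = 0.5  # meters per second, assumed value

def classify_map_gesture(hand_speed: float) -> str:
    """Below the predetermined speed -> flick; at or above -> swipe."""
    return "flick" if hand_speed < PREDETERMINED_SPEED else "swipe"

print(classify_map_gesture(0.3))  # flick
print(classify_map_gesture(0.8))  # swipe
```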
 FIG. 7 is a flowchart showing the operation of the input system 2 according to the embodiment of the present disclosure. The guide control unit 112 displays a predetermined guide image in the air in the gesture recognition space S1 (S10). The detection information acquisition unit 113 acquires detection information based on the operator's gesture operation detected by the gesture detection sensor 50 (S11). The operation content determination unit 114 identifies the operation content based on the acquired detection information (S12). The screen control unit 111 causes the display 30 to show an image indicating that the identified operation content has been accepted (S13). The device control unit 115 controls the device according to the identified operation content (S14).
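 The flowchart of FIG. 7 maps naturally onto a simple processing loop. The sketch below is a hypothetical outline of steps S10 to S14; the function names, the stubbed sensor data, and the threshold value are assumptions, not the patent's implementation:

```python
# Hypothetical end-to-end sketch of steps S10-S14 of FIG. 7.
# All names, return values, and the sensor stub are illustrative.

def show_guide_image_in_air() -> None:
    """S10: guide control unit 112 images the guide in space S1."""
    print("guide image displayed in gesture recognition space S1")

def acquire_detection_info() -> dict:
    """S11: detection info acquisition unit 113 reads sensor 50 (stubbed)."""
    return {"direction": "left", "speed": 0.3}

def determine_operation(info: dict) -> str:
    """S12: operation content determination unit 114."""
    kind = "flick" if info["speed"] < 0.5 else "swipe"
    return f"{kind}_{info['direction']}"

def show_acceptance_on_display(operation: str) -> None:
    """S13: screen control unit 111 shows an acceptance mark."""
    print(f"display 30: accepted '{operation}'")

def control_device(operation: str) -> None:
    """S14: device control unit 115 executes the operation."""
    print(f"device action for '{operation}'")

def input_system_step() -> None:
    show_guide_image_in_air()                   # S10
    info = acquire_detection_info()             # S11
    operation = determine_operation(info)       # S12
    show_acceptance_on_display(operation)       # S13
    control_device(operation)                   # S14

input_system_step()
```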
 As described above, according to the present embodiment, displaying a guide image in the gesture recognition space S1 makes gesture operations within the bounds of the gesture recognition space S1 easier and greatly reduces the probability that a gesture operation misses the space. A passenger in the vehicle can therefore perform gesture operations comfortably.
 Further, when the guide image is displayed in the air toward the driver, a passenger sitting in the front passenger seat cannot see the guide image because of its narrow-viewing-angle characteristic, so no visual annoyance arises for passengers other than the intended operator. Moreover, if the guide image is displayed in the air only during gesture operation, no visual annoyance arises for the intended operator either. In addition, using the system together with an existing information display secures a sufficient amount of presented information; an aerial display alone limits the amount of information that can be presented.
 The present disclosure has been described above based on an embodiment. Those skilled in the art will understand that this embodiment is an example, that various modifications are possible in the combinations of its constituent elements and processing processes, and that such modifications are also within the scope of the present disclosure.
 FIGS. 8A and 8B are diagrams showing installation examples of input systems 2 according to modifications in a vehicle. In the modification shown in FIG. 8A, the gesture recognition space S1 is set above the central portion of the dashboard 4. The aerial image display device 40 is installed inside the central portion of the dashboard 4. The information presentation display 30 is installed on the inclined surface of the center console 5 at a position close to the gesture recognition space S1. As the information presentation display 30, the display of a head unit such as a display audio unit can be used, for example.
 In the modification shown in FIG. 8B, the aerial image display device 40 including the display unit 41 and the optical plate 42 is embedded in the upper part of the joint between the steering column 3b and the steering wheel 3a. The guide image I1 is formed beyond the steering wheel 3a as viewed from the driver (above the steering column 3b). As the information presentation display, a display in the instrument panel 6 can be used.
 Although FIG. 4 described an example in which a liquid crystal display module is used as the display unit 41, in applications where the displayed image is limited, the display unit 41 may instead be built by mounting a plurality of light emitting diodes on a substrate. For example, when all that is needed is to display bright points at the vertices of the gesture recognition space S1, the display unit 41 can be built simply by mounting eight light emitting diodes at predetermined positions on the substrate.
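 For illustration only (the extents, units, and coordinate frame are assumptions, not from the patent), the eight vertex positions of a box-shaped gesture recognition space can be enumerated directly, which is all such a limited-purpose display unit needs to cover:

```python
# Hypothetical sketch: enumerate the eight vertices of a box-shaped
# gesture recognition space S1. Extents and units are illustrative.
from itertools import product

# Assumed extents of S1 in a vehicle-fixed frame (meters).
x_range = (0.0, 0.20)
y_range = (0.0, 0.15)
z_range = (0.0, 0.10)

# The Cartesian product of each axis's min/max yields the 8 vertices.
vertices = list(product(x_range, y_range, z_range))
assert len(vertices) == 8  # one LED bright point per vertex

for vertex in vertices:
    print(vertex)
```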
 When a display unit 41 free of the attenuation caused by passing through a liquid crystal layer is used, the aerial image may look glaring in the vehicle cabin at night. The guide control unit 112 may therefore adjust the luminance of the light emitting diodes according to the brightness inside the vehicle. The brightness inside the vehicle is determined based on illuminance information detected by an illuminance sensor (not shown) installed in the vehicle. Alternatively, the illuminance at the current position may be acquired from a server of the Japan Meteorological Agency or a private weather company via a wireless communication network. The guide control unit 112 lowers the luminance of the light emitting diodes as the illuminance inside the vehicle decreases. The luminance of the light emitting diodes can be controlled by adjusting the drive current or the PWM (pulse width modulation) duty ratio. When a liquid crystal display module is used as the display unit 41, its display luminance is low to begin with, so there is little need to lower the luminance at night; this does not, however, exclude luminance control when a liquid crystal display module is used.
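 A minimal sketch of this illuminance-driven dimming, assuming a linear mapping from cabin illuminance to PWM duty ratio (the mapping, limits, and all constants are assumptions; the patent specifies only that lower illuminance yields lower luminance):

```python
# Hypothetical sketch: scale LED brightness with cabin illuminance by
# adjusting a PWM duty ratio. The linear mapping and constants are
# illustrative assumptions, not values from the patent.
MIN_DUTY = 0.05     # assumed floor so the guide image stays visible
MAX_DUTY = 1.00     # full brightness
MAX_LUX = 10_000.0  # illuminance treated as "full daylight" (assumed)

def pwm_duty_for_illuminance(lux: float) -> float:
    """Lower cabin illuminance -> lower duty ratio -> dimmer LEDs."""
    clamped = max(0.0, min(lux, MAX_LUX))
    return MIN_DUTY + (MAX_DUTY - MIN_DUTY) * (clamped / MAX_LUX)

print(f"{pwm_duty_for_illuminance(50):.2f}")    # night cabin -> dim
print(f"{pwm_duty_for_illuminance(8000):.2f}")  # bright day -> near full
```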
 The embodiment may also be specified by the following items.
 [Item 1]
 An input system (2) includes a display (30), a sensor (50), a display unit (41), and an optical plate (42). The display (30) presents information to a passenger in a vehicle. The sensor (50) recognizes a gesture operation of the passenger performed in a gesture recognition space (S1) set in the vicinity of the display (30). The display unit (41) is installed outside the gesture recognition space (S1). The optical plate (42) is installed between the gesture recognition space (S1) and the display unit (41), and forms the gesture guide image displayed on the display unit (41) into an image in the gesture recognition space (S1). The display (30) displays information reflecting the gesture operation recognized by the sensor (50).
 This makes gesture operations within the bounds of the gesture recognition space (S1) easier and improves the operability of gesture operations.
 [Item 2]
 In the input system (2) according to item 1, the display unit (41) displays, as the gesture guide image, a symbol image indicating operation content, and the optical plate (42) forms the symbol image into an image in the gesture recognition space (S1).
 This further improves the operability of gesture operations.
 [Item 3]
 In the input system (2) according to item 1, the display unit (41) displays, as the gesture guide image, an image defining the bounds of the gesture recognition space (S1), and the optical plate (42) forms the image defining the bounds of the gesture recognition space (S1) into an image in the gesture recognition space (S1).
 This makes the outer edge of the gesture recognition space (S1) easier to recognize.
 [Item 4]
 In the input system (2) according to any one of items 1 to 3, the display (30) is a center display (30) installed on a dashboard (4), and the gesture recognition space (S1) is set in a space below and in front of the display surface of the center display (30).
 Compared with touch-operating the center display (30), this shortens the distance the operator must reach, improving operability.
 [Item 5]
 In the input system (2) according to any one of items 1 to 3, the gesture recognition space (S1) is set above the central portion of a dashboard (4), and the display (30) is installed at a position on a center console (5) close to the gesture recognition space (S1).
 Since the gesture guide image is displayed at the same height as the windshield, this improves the visibility of the gesture guide image while driving.
 [Item 6]
 An input method includes a step of recognizing a gesture operation of a passenger performed in a gesture recognition space (S1) set in the vicinity of a display (30) for presenting information to the passenger in a vehicle (1). The input method also includes a step of forming a gesture guide image displayed on a display unit (41) into an image in the gesture recognition space (S1), using an optical plate (42) installed between the gesture recognition space (S1) and the display unit (41), the display unit (41) being installed outside the gesture recognition space (S1). The input method further includes a step of causing the display (30) to display information reflecting the recognized gesture operation.
 The present disclosure relates to a technique for performing gesture operations comfortably in a vehicle, and is particularly useful as an input system and an input method.
 2 input system
 3a steering wheel
 3b steering column
 4 dashboard
 5 center console
 6 instrument panel
 10 control device
 11 processing unit
 12 I/O unit
 13 recording unit
 30 display
 30a volume bar
 30b icon
 30c map
 40 aerial image display device
 41 display unit
 42 optical plate
 43 light control film
 45 storage box
 50 gesture detection sensor
 111 screen control unit
 112 guide control unit
 113 detection information acquisition unit
 114 operation content determination unit
 115 device control unit
 E1 viewpoint
 I1, Ia, Ib, Ic guide image
 S1 gesture recognition space

Claims (6)

  1.  An input system comprising:
      a display for presenting information to a passenger in a vehicle;
      a sensor that recognizes a gesture operation of the passenger performed in a gesture recognition space set in the vicinity of the display;
      a display unit installed outside the gesture recognition space; and
      an optical plate that is installed between the gesture recognition space and the display unit and forms a gesture guide image displayed on the display unit into an image in the gesture recognition space,
      wherein the display displays information reflecting the gesture operation recognized by the sensor.
  2.  The input system according to claim 1, wherein
      the display unit displays, as the gesture guide image, a symbol image indicating operation content, and
      the optical plate forms the symbol image into an image in the gesture recognition space.
  3.  The input system according to claim 1, wherein
      the display unit displays, as the gesture guide image, an image defining a range of the gesture recognition space, and
      the optical plate forms the image defining the range of the gesture recognition space into an image in the gesture recognition space.
  4.  The input system according to any one of claims 1 to 3, wherein
      the display is a center display installed on a dashboard, and
      the gesture recognition space is set in a space below and in front of a display surface of the center display.
  5.  The input system according to any one of claims 1 to 3, wherein
      the gesture recognition space is set above a central portion of a dashboard, and
      the display is installed at a position on a center console close to the gesture recognition space.
  6.  An input method comprising:
      recognizing a gesture operation of a passenger performed in a gesture recognition space set in the vicinity of a display for presenting information to the passenger in a vehicle;
      forming a gesture guide image displayed on a display unit into an image in the gesture recognition space, using an optical plate installed between the gesture recognition space and the display unit, the display unit being installed outside the gesture recognition space; and
      causing the display to display information reflecting the recognized gesture operation.
PCT/JP2018/022305 2017-06-13 2018-06-12 Input system and input method WO2018230526A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2017116226A JP2020126282A (en) 2017-06-13 2017-06-13 Input system and input method
JP2017-116226 2017-06-13

Publications (1)

Publication Number Publication Date
WO2018230526A1 true WO2018230526A1 (en) 2018-12-20

Family

ID=64658632

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2018/022305 WO2018230526A1 (en) 2017-06-13 2018-06-12 Input system and input method

Country Status (2)

Country Link
JP (1) JP2020126282A (en)
WO (1) WO2018230526A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111984117A (en) * 2020-08-12 2020-11-24 深圳创维-Rgb电子有限公司 Panoramic map control method, device, equipment and storage medium

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7216925B2 (en) * 2020-08-28 2023-02-02 大日本印刷株式会社 Aerial imaging device, aerial input device, display device with aerial imaging device, moving body and hologram imaging lens

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005138755A (en) * 2003-11-07 2005-06-02 Denso Corp Device and program for displaying virtual images
JP2007326409A (en) * 2006-06-06 2007-12-20 Toyota Motor Corp Display device for vehicle
JP5509391B1 (en) * 2013-06-07 2014-06-04 株式会社アスカネット Method and apparatus for detecting a designated position of a reproduced image in a non-contact manner
JP2016021082A (en) * 2014-07-11 2016-02-04 船井電機株式会社 Image display device
JP2017084136A (en) * 2015-10-28 2017-05-18 アルパイン株式会社 Gesture input device


Also Published As

Publication number Publication date
JP2020126282A (en) 2020-08-20

Similar Documents

Publication Publication Date Title
TWI578021B (en) Augmented reality interactive system and dynamic information interactive and display method thereof
CN107351763B (en) Control device for vehicle
JP3979002B2 (en) Computer user interface system and user interface providing method
JP6413207B2 (en) Vehicle display device
US10591723B2 (en) In-vehicle projection display system with dynamic display area
US9008904B2 (en) Graphical vehicle command system for autonomous vehicles on full windshield head-up display
US9057874B2 (en) Virtual cursor for road scene object selection on full windshield head-up display
KR20170141484A (en) Control device for a vehhicle and control metohd thereof
US9256325B2 (en) Curved display apparatus for vehicle
KR101610098B1 (en) Curved display apparatus for vehicle
JP6331567B2 (en) Display input device for vehicle
KR102051606B1 (en) Electronic apparatus for vehicle
KR20180053290A (en) Control device for a vehhicle and control metohd thereof
US20160124224A1 (en) Dashboard system for vehicle
WO2018230526A1 (en) Input system and input method
US11068054B2 (en) Vehicle and control method thereof
US11828947B2 (en) Vehicle and control method thereof
TWM564749U (en) Vehicle multi-display control system
JP2005313722A (en) Operation display device of on-vehicle equipment and operation display method thereof
JP2015019279A (en) Electronic apparatus
JP6236211B2 (en) Display device for transportation equipment
JP2018162023A (en) Operation device
WO2023248687A1 (en) Virtual image display device
Adachi Chances and Challenges for Automotive Displays
JP6146261B2 (en) Car navigation system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18817280

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18817280

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP