WO2014021063A1 - Operating device - Google Patents

Operating device

Info

Publication number
WO2014021063A1
WO2014021063A1 (PCT/JP2013/068693)
Authority
WO
WIPO (PCT)
Prior art keywords
display
areas
screen
shape
image
Application number
PCT/JP2013/068693
Other languages
French (fr)
Japanese (ja)
Inventor
輝子 石川
雅基 加藤
太田 聡
郁代 笹島
貴之 波田野
広川 拓郎
Original Assignee
日本精機株式会社
Application filed by 日本精機株式会社 (Nippon Seiki Co., Ltd.)
Publication of WO2014021063A1 publication Critical patent/WO2014021063A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03 Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/033 Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
    • G06F3/0362 Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of 1D translations or rotations of an operating part of the device, e.g. scroll wheels, sliders, knobs, rollers or belts
    • B60K35/10
    • B60K35/60
    • B60K35/81
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B62 LAND VEHICLES FOR TRAVELLING OTHERWISE THAN ON RAILS
    • B62D MOTOR VEHICLES; TRAILERS
    • B62D1/00 Steering controls, i.e. means for initiating a change of direction of the vehicle
    • B62D1/02 Steering controls, i.e. means for initiating a change of direction of the vehicle vehicle-mounted
    • B62D1/04 Hand wheels
    • B62D1/046 Adaptations on rotatable parts of the steering wheel for accommodation of switches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03 Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/033 Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
    • G06F3/0354 Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of 2D relative movements between the device, or an operating part thereof, and a plane or surface, e.g. 2D mice, trackballs, pens or pucks
    • G06F3/03547 Touch pads, in which fingers can move on a surface
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/0482 Interaction with lists of selectable items, e.g. menus
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04883 Interaction techniques based on graphical user interfaces [GUI] using a touch-screen or digitiser for inputting data by handwriting, e.g. gesture or text
    • B60K2360/113
    • B60K2360/1446
    • B60K2360/782
    • H ELECTRICITY
    • H01 ELECTRIC ELEMENTS
    • H01H ELECTRIC SWITCHES; RELAYS; SELECTORS; EMERGENCY PROTECTIVE DEVICES
    • H01H13/00 Switches having rectilinearly-movable operating part or parts adapted for pushing or pulling in one direction only, e.g. push-button switch
    • H01H13/02 Details
    • H01H13/04 Cases; Covers
    • H01H13/08 Casing of switch constituted by a handle serving a purpose other than the actuation of the switch
    • H ELECTRICITY
    • H01 ELECTRIC ELEMENTS
    • H01H ELECTRIC SWITCHES; RELAYS; SELECTORS; EMERGENCY PROTECTIVE DEVICES
    • H01H13/00 Switches having rectilinearly-movable operating part or parts adapted for pushing or pulling in one direction only, e.g. push-button switch
    • H01H13/02 Details
    • H01H13/12 Movable parts; Contacts mounted thereon
    • H01H13/14 Operating parts, e.g. push-button
    • H ELECTRICITY
    • H01 ELECTRIC ELEMENTS
    • H01H ELECTRIC SWITCHES; RELAYS; SELECTORS; EMERGENCY PROTECTIVE DEVICES
    • H01H2231/00 Applications
    • H01H2231/026 Car

Definitions

  • the present invention relates to an operating device mounted on a vehicle.
  • For example, Patent Document 1 discloses a technique comprising display means that displays a plurality of options for selecting the processing content of a vehicle-mounted device, and input operation means installed on a steering spoke constituting a steering wheel and having detection areas divided into a shape substantially similar to the display content of the display means, the input operation means generating operation information indicating the detection area corresponding to the contact position of the operator's operating finger; based on the operation information generated by the input operation means, the option corresponding to the detection area touched by the operating finger is determined and the vehicle-mounted device is controlled.
  • an object of the present invention is to provide an operation device capable of performing an intuitive operation.
  • To this end, the present invention provides an operating device comprising: input means including an operation surface having a plurality of operation regions and a sensor unit capable of detecting the contact position of a detected object with respect to the operation surface; display means including a plurality of display areas; and control means for controlling a predetermined electronic device based on the contact position of the detected object detected by the sensor unit and switching the display image displayed in at least one of the plurality of display areas.
  • In this operating device, the plurality of display areas are arranged three-dimensionally, the plurality of operation regions are formed three-dimensionally, and at least one of the plurality of operation regions has a shape substantially similar to that of any one of the plurality of display areas.
  • The present invention also provides an operating device comprising: input means including an operation surface having a plurality of operation regions and a sensor unit capable of detecting the contact position of a detected object with respect to the operation surface; display means including a plurality of display areas; and control means for controlling a predetermined electronic device based on the contact position of the detected object detected by the sensor unit and switching the display image displayed in at least one of the plurality of display areas, wherein the plurality of display areas are arranged three-dimensionally, the plurality of operation regions are formed three-dimensionally, and a first three-dimensional identification portion formed on the operation surface and dividing the plurality of operation regions has a shape substantially similar to that of a second three-dimensional identification portion dividing the plurality of display areas.
  • Preferably, the input means is mounted on a steering wheel, the plurality of display areas are arranged above the input means, and at least one of the plurality of operation regions has a shape substantially similar to that of the display area located above it.
  • Preferably, the input means is mounted on a steering wheel, the plurality of display areas are arranged above the input means, and the first three-dimensional identification portion has a shape substantially similar to that of the second three-dimensional identification portion located above it.
  • the plurality of operation areas are formed so as to have different surface tactile sensations.
  • At least one of the plurality of operation regions has, formed on its surface, a design portion imitating part of the shape of one of the plurality of display areas.
  • Based on the contact position of the detected object detected by the sensor unit, the control means causes the display means to display an emphasized image that emphasizes the display area cooperating with the operation region in contact with the detected object.
  • The display means displays an icon image having a predetermined shape as the display image in at least one of the plurality of display areas, and in at least one of the plurality of operation regions, an icon operation portion having a three-dimensional shape substantially similar to the icon image is formed.
  • The display means includes a projection-type display that emits display light, and a plurality of screens that constitute the plurality of display areas and onto which the display light is projected to display the display images.
  • The control means switches the display image in a direction and at a speed substantially coinciding with the locus of the detected object detected by the sensor unit.
  • the present invention enables an intuitive operation.
  • The drawing shows the electrical configuration of the operating device according to an embodiment of the present invention.
  • The operating device of the present embodiment is an operating device 1000 mounted on the vehicle 1 shown in the figure.
  • The control unit (control means) 300 performs various operations in accordance with the operation of the in-vehicle electronic device, and the display device (display means) 500 displays images corresponding to those operations and their results.
  • the vehicle 1 includes a steering 10.
  • the steering 10 is a part of the steering device of the vehicle 1, and includes a main body 11 and a steering wheel 12.
  • the main body 11 is a spoke connected to a steering shaft (not shown) of the vehicle 1, and includes a first input device 100 on the right side and a second input device 200 on the left side when viewed from the user.
  • The main body 11 is formed with attachment holes (not shown) that match the shapes of the first and second input devices 100 and 200, respectively. By attaching the first and second input devices 100 and 200 to the respective attachment holes, only their operation surfaces, described later, are exposed.
  • the steering wheel 12 is a ring-shaped member attached to the main body 11 and gripped when the driver steers the vehicle 1.
  • The in-vehicle electronic device 20 is an audio device, a car navigation device, or the like. It includes not only electronic devices fitted in the instrument panel of the vehicle 1, but also electronic devices detachably arranged in the vehicle 1 by a cradle placed on the dashboard or the like, and electronic devices simply brought into the vehicle 1 and operated there, including high-function mobile phones called smartphones.
  • the in-vehicle electronic device 20 is electrically connected to a control unit 300 described later, and operates according to a control signal received from the control unit 300. In addition, an image corresponding to the operation of the in-vehicle electronic device 20 is displayed in a display area described later of the display device 500.
  • the operation device 1000 includes a first input device 100, a second input device 200, a control unit 300, a storage unit 400, and a display device 500.
  • FIG. 3 is a front view and a sectional view showing the first input device 100.
  • the first input device 100 includes a contact sensor 110 and a switch device 120.
  • The contact sensor 110 is a touchpad device that detects the position at which a thumb or the like touches the operation surface under the control of the control unit 300 described later, and includes a front cover 111, a sensor sheet (sensor unit) 112, and a spacer 113.
  • The front cover 111 is formed in a sheet shape from a light-shielding insulating material such as a synthetic resin, and includes an operation surface 111a, which is touched by a user's finger or the like during touch and gesture operations, and a peripheral edge portion 111b, which lies around the operation surface 111a, is covered by a case or the like (not shown), and cannot be touched by the user's finger. As shown in FIG. 3, a stepped portion 111c is formed on the operation surface 111a.
  • The operation surface 111a is three-dimensionally formed with a first operation region R1 and a second operation region R2 located at different depth positions on either side of the stepped portion 111c. That is, the stepped portion 111c serves as a first three-dimensional identification portion that divides the operation surface 111a into the first and second operation regions R1 and R2.
  • The first operation region R1 is the operation region located below the stepped portion 111c on the operation surface 111a, and includes a flat portion 111d, which is a planar part of the operation surface 111a, and a recessed portion 111e, which is recessed in a circular shape so as to sink toward the back side from the flat portion 111d.
  • Here, the "front side" means the user side with respect to the first input device 100, as shown by the double-headed arrow in FIG. 3B, and the "back side" means the opposite side.
  • The second operation region R2 is the operation region located above the stepped portion 111c on the operation surface 111a, and includes the raised portion 111f that rises toward the front side.
  • the sensor sheet 112 is a projected capacitive sensor sheet having a plurality of sensors 1120 (detection electrodes) for detecting the position of a detection target such as a finger, and is located on the back side of the front cover 111.
  • the sensor 1120 is made of a translucent material, and the sensor sheet 112 is light transmissive.
  • The sensor sheet 112 is schematically configured by superposing a layer having a first sensor row 112a for detecting the position of the detected object in the X direction and a layer having a second sensor row 112b for detecting its position in the Y direction. By combining the first sensor row 112a and the second sensor row 112b, the sensors 1120 are arranged in a matrix on the sensor sheet 112.
  • the first sensor row 112a and the second sensor row 112b are each electrically connected to a control unit 300 described later.
  • The control unit 300 can thereby detect the change in capacitance at each sensor 1120.
  • the control unit 300 calculates an input coordinate value (X, Y) indicating the contact position of the detection target based on the change in capacitance.
  • the input coordinate value is a coordinate value in the XY coordinate system of each sensor 1120 set in advance on the operation surface.
  • Specifically, the input coordinate values are the X coordinate assigned to the center of gravity of the distribution of capacitance changes in the X direction (for example, the position of the sensor 1120 whose capacitance exceeds a certain threshold and is the largest) and the Y coordinate assigned to the center of gravity of the distribution of capacitance changes in the Y direction (determined in the same way).
  • the control unit 300 calculates the input coordinate value (X, Y) by calculating the X coordinate and the Y coordinate.
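As an illustration only (the patent does not give an implementation), the centroid calculation described in the preceding items can be sketched as follows; the grid layout, threshold, and all names are assumptions:

```python
# Sketch of the input-coordinate calculation: X and Y are the centroids of the
# capacitance-change distributions, considering only sensors whose change
# exceeds a threshold. All names are illustrative, not from the patent.

def input_coordinate(cap_delta, threshold):
    """cap_delta: 2D list cap_delta[y][x] of capacitance changes per sensor 1120.
    Returns (X, Y) or None if no sensor exceeds the threshold (no contact)."""
    total = 0.0
    sum_x = 0.0
    sum_y = 0.0
    for y, row in enumerate(cap_delta):
        for x, d in enumerate(row):
            if d > threshold:          # ignore sensors below the threshold
                total += d
                sum_x += d * x         # weight each sensor position by its change
                sum_y += d * y
    if total == 0.0:
        return None                    # finger not on the operation surface
    return (sum_x / total, sum_y / total)
```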
  • The sensor sheet 112 is integrally formed with the front cover 111 by drawing so as to be processed into the same shape as the front cover 111 (see FIG. 3B).
  • As a result, the front cover 111 and the sensor sheet 112 behave like a single sheet, and the stepped portion 111c, flat portion 111d, recessed portion 111e, raised portion 111f, and the other shapes of the operation surface are formed by bent portions of this single sheet.
  • By being integrally molded in this way, the back surface of the front cover 111 and the front surface of the sensor sheet 112 are in contact with each other.
  • The sensors 1120 are thus arranged so as to follow the stepped shape of the front cover 111. Because of this arrangement, even when a gesture operation is performed over a stepped feature of the operation surface such as the stepped portion 111c, the control unit 300 can detect the capacitance change at each sensor.
  • The spacer 113 is located on the back side of the sensor sheet 112 and is formed to follow the shape of the integrally formed front cover 111 and sensor sheet 112, as shown in the figure; it is a member that retains these shapes when pressure is applied from the front side.
  • the switch device 120 is located on the back side of the contact sensor 110 and is electrically connected to the control unit 300.
  • When the user performs an operation of pressing the operation surface of the first input device 100 (hereinafter referred to as a pressing operation), the switch device 120 is pressed and transmits a predetermined input signal to the control unit 300.
  • the pressing operation is performed when control different from control by touch operation or gesture operation on the operation surface of the first input device 100 is executed.
  • the first input device 100 is attached to the main body 11 of the steering 10 by, for example, a case (not shown) of the contact sensor 110 being welded to the main body 11 with a soft resin. By being attached in this way, when the user presses the operation surface, the contact sensor 110 sinks and the switch device 120 is pressed.
  • the first input device 100 is configured by the above units.
  • FIG. 5 is a front view and a sectional view showing the second input device 200.
  • the second input device 200 has the same configuration as the first input device 100 and includes a contact sensor 210 and a switch device 220.
  • the contact sensor 210 is a touch pad device similar to the contact sensor 110, and includes a surface cover 211, a sensor sheet 212, and a spacer 213.
  • The front cover 211 is formed in a sheet shape from a light-shielding insulating material such as a synthetic resin, and includes an operation surface 211a, which is touched by a user's finger or the like during touch and gesture operations, and a peripheral edge portion 211b, which lies around the operation surface 211a, is covered by a case or the like (not shown), and cannot be touched by the user's finger. As shown in FIG. 5, the entire operation surface 211a is raised toward the front side from the peripheral edge portion 211b, and a stepped portion 211c is formed.
  • The operation surface 211a is divided by the stepped portion 211c into a third operation region R3 and a fourth operation region R4, which are formed three-dimensionally.
  • The third operation region R3 is the operation region located below the stepped portion 211c on the operation surface 211a, and includes a flat portion 211d, which is a planar part of the operation surface 211a, and a recessed portion 211e, which is recessed in a circular shape so as to sink toward the back side from the flat portion 211d.
  • Here, the "front side" means the user side with respect to the second input device 200, as shown by the double-headed arrow in FIG. 5B, and the "back side" means the opposite side.
  • The fourth operation region R4 is the operation region located above the stepped portion 211c on the operation surface 211a, and includes the raised portion 211f that rises toward the front side.
  • The sensor sheet 212 is a projected capacitive sensor sheet having a plurality of sensors (detection electrodes) for detecting the position of a detected object such as a finger, and is located on the back side of the front cover 211.
  • the sensor is made of a translucent material, and the sensor sheet 212 is light transmissive.
  • The spacer 213 is located on the back side of the sensor sheet 212 and is formed to follow the shape of the integrally formed front cover 211 and sensor sheet 212, as shown in the figure; it is a member that retains these shapes when pressure is applied from the front side.
  • the switch device 220 is located on the back side of the contact sensor 210 and is electrically connected to the control unit 300.
  • When the user performs an operation of pressing the operation surface of the second input device 200 (hereinafter also referred to as a pressing operation), the switch device 220 is pressed and transmits a predetermined input signal to the control unit 300.
  • the pressing operation is performed when a control different from the control by the touch operation or the gesture operation on the operation surface of the second input device 200 is executed.
  • the second input device 200 is configured by the above units.
  • control unit 300 includes a CPU (Central Processing Unit) and the like, and executes an operation program stored in the storage unit 400 to perform various processes and controls. At least a part of the control unit 300 may be configured by various dedicated circuits such as an ASIC (Application Specific Integrated Circuit).
  • the storage unit 400 includes a ROM (Read Only Memory), a RAM (Random Access Memory), a flash memory, and the like.
  • The storage unit 400 functions as a work area for the CPU constituting the control unit 300, a program area storing the operation programs executed by the CPU, and a data area.
  • the program area stores operation programs such as a program for executing vehicle-mounted electronic device control processing, which will be described later, and a program for executing display device control processing.
  • In the data area, virtual operation areas, execution conditions, corresponding operation data, image data, and the like are stored in advance.
  • the virtual operation area is an area that is set by virtually dividing the operation surface of the first and second input devices 100 and 200.
  • In the present embodiment, operation regions R1 to R4, divided above and below the stepped portions 111c and 211c of the operation surfaces 111a and 211a, are mainly set as the virtual operation areas.
  • The virtual operation areas may also include areas corresponding to the recessed portions 111e and 211e.
  • the execution condition is data serving as a trigger for transmitting a control signal to the in-vehicle electronic device 20 in each virtual operation region, that is, for controlling the in-vehicle electronic device 20.
  • The execution conditions include whether the locus of contact is a specific locus, whether a predetermined virtual operation area has been touched, and whether a pressing operation has been detected.
  • A plurality of execution conditions can be set for each virtual operation area; for example, a condition on whether a specific locus is traced and a condition on whether a pressing operation is detected may both be set for the same virtual operation area.
  • the “specific trajectory” is, for example, a substantially arc-shaped trajectory along the raised portions 111f and 211f.
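A minimal sketch of checking such a "specific locus" condition, assuming the locus is compared against a circular arc around the raised portion; the center, radius, and tolerance are hypothetical values, not from the patent:

```python
import math

def is_arc_locus(points, center, radius, tol=5.0):
    """True if every sampled contact point lies within tol of a circle of the
    given radius centered on the raised portion, i.e. the traced locus is
    substantially arc-shaped (one possible 'specific locus' condition)."""
    return all(abs(math.hypot(x - center[0], y - center[1]) - radius) <= tol
               for x, y in points)

# Example: points traced along the raised portion 111f (invented geometry).
print(is_arc_locus([(10, 0), (7, 7), (0, 10)], center=(0, 0), radius=10))  # True
```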
  • Corresponding operation data is data of a control signal that causes the in-vehicle electronic device 20 to execute a predetermined operation.
  • the image data is image signal data that causes the display device 500 to display a predetermined display image.
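How the data area might tie virtual operation areas, execution conditions, and corresponding operation data together is sketched below; the bounding boxes, condition names, and operations are illustrative assumptions only:

```python
# Illustrative sketch of the data stored in the data area of the storage
# unit 400. The structure and all values are assumptions for illustration.

VIRTUAL_OPERATION_AREAS = {
    # operation region -> bounding box on the operation surface (X, Y ranges)
    "R1": {"x": (0, 100), "y": (40, 100)},   # below stepped portion 111c
    "R2": {"x": (0, 100), "y": (0, 40)},     # above stepped portion 111c
}

EXECUTION_CONDITIONS = [
    # (region, condition type, corresponding operation for device 20 / display)
    ("R1", "gesture",  "show_peripheral_image"),
    ("R2", "touch",    "show_menu_image"),
    ("R2", "press",    "select_content"),
]

def match(region, event_type):
    """Return the corresponding operation for a detected event, if any."""
    for r, cond, op in EXECUTION_CONDITIONS:
        if r == region and cond == event_type:
            return op
    return None

print(match("R2", "touch"))  # -> show_menu_image
```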
  • The control unit 300 updates the display image shown on the display device 500 based on operations on the first and second input devices 100 and 200, and causes the in-vehicle electronic device 20 to perform predetermined operations in accordance with those operations.
  • These data are stored in the storage unit 400 in advance as default values, or registered by the user using a known data registration method.
  • FIG. 6 is a perspective view showing the display device 500.
  • The display device 500 is disposed in front of the user and above the steering 10, and includes a projector (projection display) 510, a mirror 520, a first screen 530, a second screen 540, a third screen 550, a fourth screen 560, a fifth screen 570, and a screen holder 580.
  • The projector 510 emits display light L representing a display image toward the first to fifth screens 530 to 570, and is composed of, for example, a liquid crystal projector that forms the display light L by transmitting light from a light source through a liquid crystal panel. A plurality of projectors 510 may be provided.
  • the mirror 520 is a reflecting member formed by depositing a metal such as aluminum on a resin material such as polycarbonate to form a reflecting surface.
  • the display light L emitted from the projector 510 is reflected by the mirror 520 and projected onto the first to fifth screens 530 to 570.
  • The first to fifth screens 530 to 570 constitute display areas onto which the display light L emitted from the projector 510 is projected to display the display images.
  • the first to fifth screens 530 to 570 are reflection screens that reflect the display light L and display a display image, and have a three-dimensional shape including a plane, a curved surface, a spherical surface, or a combination thereof.
  • the first screen 530 is a screen located in the center, and has a first display area for displaying vehicle information such as vehicle speed, engine speed, remaining fuel, electric energy, travel distance, and navigation information as a display image.
  • The second screen 540 is a screen located on the lower right side when viewed from the user. As described later, it cooperates with the first operation region R1 of the first input device 100 and constitutes a second display area that displays, as a display image, information related to operations on the first operation region R1 or to the operation of the in-vehicle electronic device 20 associated with those operations.
  • The third screen 550 is a screen located on the upper right side when viewed from the user and displays various indicators and warnings as display images. As described later, it cooperates with the second operation region R2 of the first input device 100 and constitutes a third display area that displays, as a display image, information related to operations on the second operation region R2 or to the operation of the in-vehicle electronic device 20 associated with those operations.
  • The fourth screen 560 is a screen located on the lower left side when viewed from the user. As described later, it cooperates with the third operation region R3 of the second input device 200 and constitutes a fourth display area that displays, as a display image, information related to operations on the third operation region R3 or to the operation of the in-vehicle electronic device 20 associated with those operations.
  • The fifth screen 570 is a screen located on the upper left side when viewed from the user and displays various indicators and warnings as display images. As described later, it cooperates with the fourth operation region R4 of the second input device 200 and constitutes a fifth display area that displays information related to operations on the fourth operation region R4 or to the operation of the in-vehicle electronic device 20 associated with those operations.
  • The first to fifth screens 530 to 570 are held by a plurality of holding surfaces of the screen holder 580 that are stepped and located at different positions in the depth direction, so that the screens are arranged three-dimensionally, that is, at mutually different depth positions.
  • For example, the second and fourth screens 540 and 560 are positioned on the front side (user side) of the first screen 530 with a gap in the depth direction, and the third and fifth screens 550 and 570 are arranged at still different depth positions with respect to the first screen 530.
  • the screen holder 580 is a plate-like member that is made of, for example, synthetic resin and holds the first to fifth screens 530 to 570 at predetermined positions via an adhesive material (not shown) that does not cause thermal shrinkage.
  • the first to fifth screens 530 to 570 may be fixed by a frame-like member such as a bezel.
  • Stepped portions 581 to 584 are formed on the holding surfaces of the screen holder 580 that hold the first to fifth screens 530 to 570, so as to divide the first to fifth screens 530 to 570 from one another.
  • FIG. 7 is a diagram showing a gesture operation OP1 for the first operation region R1 of the first input device 100 and a display image on the second screen 540 of the display device 500 that is updated (switched) accordingly.
  • In the initial state, the second screen 540 displays as display images the outside temperature (indicated as "OUTSIDE TEMP" in FIG. 7) and the travelable distance (indicated as "DISTANCE TO EMPTY" in FIG. 7).
  • The first operation region R1 of the first input device 100 has a shape substantially similar to that of the second screen 540 disposed above the first input device 100, and the first operation region R1 and the second screen 540 are associated with each other.
  • Here, "substantially similar" includes not only exact similarity but also shapes close enough to be recognized as similar when the two are compared.
  • The user grasps the shape of the first operation region R1 by visually recognizing the shape of the three-dimensionally arranged second screen 540, and by touching the stepped portion 111c and the like with a fingertip, feels through touch the three-dimensional shape of the operation surface 111a grasped through vision; the user can thereby perform gesture operations and the like intuitively on the first operation region R1. Since the second screen 540 is arranged three-dimensionally, its shape is easy to recognize visually, and since the three-dimensional shape of the first operation region R1 can be felt by touch, the user can perform gesture operations and the like intuitively without looking at the operation surface 111a.
  • For example, when the user performs a gesture operation OP1 tracing the first operation region R1, the control unit 300 transmits an image signal to the projector 510 and sequentially switches the display image on the second screen 540 in a direction (rightward in the present embodiment) and at a speed substantially matching the locus of the gesture operation OP1, displaying, for example, a peripheral image showing the scenery behind and to the right of the vehicle 1. Because the display image on the second screen 540 is switched in a direction and at a speed substantially matching the locus of the gesture operation OP1 on the first operation region R1, the user can operate as if actually touching the display image (see the sketch below).
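A sketch of switching the display image in a direction and at a speed substantially matching the gesture locus, assuming input coordinates sampled at a fixed interval; the function name and sampling interval are hypothetical:

```python
# Sketch of locus-matched image switching: the image is slid with the same
# direction and a proportional speed as the fingertip, so the user seems to
# drag the display image itself. Names and dt are illustrative assumptions.

def scroll_from_gesture(points, dt):
    """points: successive input coordinates (X, Y) sampled every dt seconds.
    Returns the per-frame scroll vector applied to the display image."""
    (x0, y0), (x1, y1) = points[0], points[-1]
    elapsed = dt * (len(points) - 1)
    if elapsed <= 0:
        return (0.0, 0.0)
    vx = (x1 - x0) / elapsed           # fingertip velocity in X
    vy = (y1 - y0) / elapsed           # fingertip velocity in Y
    return (vx * dt, vy * dt)

# Example: a rightward trace produces a rightward per-frame image shift.
print(scroll_from_gesture([(10, 50), (20, 50), (30, 50)], dt=0.02))
```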
  • FIG. 8 is a diagram illustrating a touch operation OP2 for the second operation region R2 of the first input device 100 and a display image on the third screen 550 of the display device 500 that is updated accordingly.
  • FIG. 9 is a diagram showing a gesture operation OP3 for the second operation region R2 of the first input device 100 and a display image on the third screen 550 of the display device 500 that is updated accordingly.
  • On the third screen 550, the indicators of the direction indicators are displayed as the initial display image.
  • the stepped portion 582 that divides the second and third screens 540 and 550 is formed in a shape substantially similar to the shape of the stepped portion 111c that divides the first and second operation regions R1 and R2.
  • That is, the stepped portion 582 serves as a second three-dimensional identification portion substantially similar in shape to the stepped portion 111c, which is the first three-dimensional identification portion of the first input device 100.
  • By visually recognizing the three-dimensional shape of the stepped portion 582, the user grasps part of the shape (the lower part in the present embodiment) and the position of the second operation region R2, which is divided from the first operation region R1 by the stepped portion 111c. By touching the stepped portion 111c and the like with a fingertip and feeling through touch the three-dimensional shape of the operation surface 111a grasped through vision, the user can perform gesture operations and the like intuitively on the second operation region R2. Since the stepped portion 582 has a three-dimensional shape, it is easy to recognize visually, and since the three-dimensional shape of the stepped portion 111c can be felt by touch, the user can operate intuitively without looking at the operation surface 111a.
  • When the user performs the touch operation OP2 on the second operation region R2, the control unit 300 transmits an image signal to the projector 510 and displays on the third screen 550 a menu image consisting of a group of icons indicating content displayable on the display device 500, at a position and in a shape along the stepped portion 582. Furthermore, as shown in FIG. 9, when the user performs the gesture operation OP3 tracing the second operation region R2, the control unit 300 sequentially slides the icons in a direction (toward the upper right in the present embodiment) and at a speed substantially matching the locus of the gesture operation OP3. Because the display image on the third screen 550 is switched in a direction and at a speed substantially matching the locus of the gesture operation OP3 on the second operation region R2, the user can operate as if actually touching the display image. When a pressing operation is performed on the second operation region R2 while the menu image is displayed, the selected content is displayed on the second screen 540 as a display image.
  • FIG. 10 is a diagram showing a gesture operation OP4 on the third operation region R3 of the second input device 200 and the display image on the fourth screen 560 of the display device 500 that is updated accordingly.
  • On the fourth screen 560, an audio operation image is displayed as the initial display image.
  • The third operation region R3 of the second input device 200 has a shape substantially similar to that of the fourth screen 560 disposed above the second input device 200, and the third operation region R3 and the fourth screen 560 are associated with each other.
  • The user grasps the shape of the third operation region R3 by visually recognizing the shape of the three-dimensionally arranged fourth screen 560, and by touching the stepped portion 211c and the like with a fingertip, feels through touch the three-dimensional shape of the operation surface 211a grasped through vision; the user can thereby perform gesture operations and the like intuitively on the third operation region R3. Since the fourth screen 560 is arranged three-dimensionally, its shape is easy to recognize visually, and since the three-dimensional shape of the third operation region R3 can be felt by touch, the user can perform gesture operations and the like intuitively without looking at the operation surface 211a.
  • For example, when the user performs the gesture operation OP4 on the third operation region R3, the control unit 300 transmits a volume control signal to the in-vehicle electronic device 20 to change the volume, transmits an image signal to the projector 510, and updates the volume display in the display image on the fourth screen 560 in accordance with the volume change.
  • FIG. 11 is a diagram showing a touch operation OP5 for the fourth operation region R4 of the second input device 200 and a display image on the fifth screen 570 of the display device 500 that is updated accordingly.
  • FIG. 12 is a diagram showing a gesture operation OP6 for the fourth operation region R4 of the second input device 200 and a display image on the fifth screen 570 of the display device 500 that is updated accordingly.
  • On the fifth screen 570, the indicators of the direction indicators are displayed as the initial display image.
  • the stepped portion 584 that divides the fourth and fifth screens 560 and 570 is formed in a shape that is substantially similar to the shape of the stepped portion 211c that divides the third and fourth operation regions R3 and R4.
  • That is, the stepped portion 584 serves as a second three-dimensional identification portion substantially similar in shape to the stepped portion 211c, which is the first three-dimensional identification portion of the second input device 200.
  • By visually recognizing the three-dimensional shape of the stepped portion 584, the user grasps part of the shape (the lower part in the present embodiment) and the position of the fourth operation region R4, which is divided from the third operation region R3 by the stepped portion 211c. By touching the stepped portion 211c and the like with a fingertip and feeling through touch the three-dimensional shape of the operation surface 211a grasped through vision, the user can perform gesture operations and the like intuitively on the fourth operation region R4. Since the stepped portion 584 has a three-dimensional shape, it is easy to recognize visually, and since the three-dimensional shape of the stepped portion 211c can be felt by touch, the user can operate intuitively without looking at the operation surface 211a.
  • When the user performs the touch operation OP5 on the fourth operation region R4, the control unit 300 transmits an image signal to the projector 510 and displays on the fifth screen 570 a menu image consisting of a group of icons indicating functions of the in-vehicle electronic device 20, at a position and in a shape along the stepped portion 584. Furthermore, as illustrated in FIG. 12, when the user performs the gesture operation OP6 tracing the fourth operation region R4, the control unit 300 sequentially slides the icons in a direction (toward the upper left in the present embodiment) and at a speed substantially matching the locus of the gesture operation OP6. Because the display image on the fifth screen 570 is switched in a direction and at a speed substantially matching the locus of the gesture operation OP6 on the fourth operation region R4, the user can operate as if actually touching the display image. When a pressing operation is performed on the fourth operation region R4 while the menu image is displayed, the selected function is executed.
  • As described above, the operating device 1000 includes: a first input device (input means) 100 having an operation surface 111a with a plurality of operation regions R1 and R2 and a sensor sheet (sensor unit) 112 capable of detecting the contact position of a detected object on the operation surface 111a; a second input device (input means) 200 configured likewise; a display device (display means) 500 having a plurality of screens (display areas) 530 to 570; and a control unit (control means) 300. The plurality of screens 530 to 570 are arranged three-dimensionally, the operation regions R1 to R4 are formed three-dimensionally, and the first operation region R1 and the third operation region R3 have shapes substantially similar to those of the second screen 540 and the fourth screen 560, respectively. The user can therefore perform gesture operations and the like intuitively without looking at the operation surfaces 111a and 211a.
  • The operating device 1000 also includes the first and second input devices (input means) 100 and 200 with the operation surfaces 111a and 211a and sensor sheets 112 and 212, the display device (display means) 500 with the plurality of screens (display areas) 530 to 570, and the control unit that controls the predetermined in-vehicle electronic device 20 based on the contact positions detected by the sensor sheets 112 and 212 and switches the image displayed on at least one of the screens 530 to 570. The plurality of screens 530 to 570 are arranged three-dimensionally, the operation regions R1 to R4 are formed three-dimensionally, and the stepped portion (first three-dimensional identification portion) 111c formed on the operation surface 111a and dividing the operation regions R1 and R2, and the stepped portion 211c formed on the operation surface 211a and dividing the operation regions R3 and R4, have shapes substantially similar to those of the stepped portion 582 dividing the second screen 540 and the third screen 550 and the stepped portion (second three-dimensional identification portion) 584 dividing the fourth screen 560 and the fifth screen 570, respectively. The user can therefore perform gesture operations and the like intuitively without looking at the operation surfaces 111a and 211a.
  • The first and second input devices 100 and 200 are mounted on the steering 10, and the screens 530 to 570 are disposed above the first and second input devices 100 and 200. The first operation region R1 and the third operation region R3 have shapes substantially similar to those of the second screen 540 and the fourth screen 560 located above them. Because the left-right positions of the associated operation regions R1 and R3 and screens 540 and 560 substantially coincide, operations on the operation surfaces 111a and 211a can be performed more comfortably while looking at the screens 540 and 560.
  • Likewise, the stepped portions 111c and 211c have shapes substantially similar to those of the stepped portions 582 and 584 located above them. Because the left-right positions of the associated stepped portions 111c and 211c and stepped portions 582 and 584 substantially coincide, operations on the operation surfaces 111a and 211a can be performed more comfortably while looking at the stepped portions 582 and 584.
  • The display device 500 includes the projector (projection-type display) 510 that emits the display light L, and the plurality of screens 530 to 570 that constitute the plurality of display areas and display the display images when the display light L is projected onto them. This makes it easy to obtain a plurality of three-dimensionally arranged display areas using the screens 530 to 570.
  • The control unit 300 switches the display image in a direction and at a speed substantially coinciding with the locus of the detected object detected by the sensor sheets 112 and 212. The user can therefore operate as if directly touching the display image, enabling intuitive operation.
  • the present invention is not limited to the above-described embodiment, and it is needless to say that changes (including deletion of components) can be made as appropriate without departing from the scope of the invention.
  • In the above embodiment, the two input devices 100 and 200 are provided as input means, but a single input means may be provided instead.
  • three or more operation areas may be formed.
  • the shape of two or more operation areas in one input means may be a shape that is substantially similar to any one of the display areas.
  • An emphasized image such as a gradation or a pattern may be displayed at the periphery of each display area so as to emphasize its shape, so that the shape of the display area can be recognized even more clearly.
  • The plurality of operation regions may be formed so as to have different surface tactile sensations by means such as printing, paint coating, or laser processing; the difference between operation regions can then be felt more easily with a fingertip or the like, which is further suited to intuitive operation.
  • Further, a background image reminiscent of the surface tactile sensation of an operation region may be displayed in the display area associated with it (the display area whose display image is updated according to operations on that region and whose shape is substantially similar). The user grasps the surface texture through vision and, while touching the operation surface, feels that texture in the operation region, which makes intuitive operation even easier.
  • For example, for an operation region with a low-friction (smooth) tactile sensation, a glossy background image, such as a metallic one, reminiscent of that sensation is displayed in the associated display area.
  • Conversely, for an operation region with a rough tactile sensation, a background image including unevenness reminiscent of that sensation is displayed in the associated display area.
  • Alternatively, each screen itself may be formed so as to have a texture reminiscent of the tactile sensation of its associated operation region.
  • For example, a screen associated with a smooth operation region is formed with a smooth texture free from unevenness, reminiscent of that tactile sensation.
  • Conversely, a screen associated with a rough operation region is formed with a texture including unevenness reminiscent of that sensation.
  • In the operating device 1000, first and second design portions 130 and 230 are formed on the surfaces of the second operation region R2 and the fourth operation region R4 of the first and second input devices 100 and 200, respectively, imitating part of the characteristic shapes of the third screen 550 and the fifth screen 570 that cooperate with the operation regions R2 and R4.
  • the first and second design portions 130 and 230 are printed and formed at locations corresponding to the second operation region R2 and the fourth operation region R4 of the front covers 111 and 211, respectively.
  • Here, "imitating" means that the shape of part of the screen serving as the display area is reproduced in a simplified form, and includes shapes close enough to be recognized as similar when the two are compared.
  • The first design portion 130 is a curved design imitating the lower curved outline of the third screen 550 as part of its shape, as shown in the figure.
  • The second design portion 230 is a curved design imitating the lower curved outline of the fifth screen 570 as part of its shape, as shown in the figure.
  • By visually recognizing part (the lower part) of the characteristic shapes of the third screen 550 and the fifth screen 570 and the first and second design portions 130 and 230 imitating them, even a user unfamiliar with operating the operating device 1000, for example one using it for the first time, can grasp at a glance the cooperative relationships between the third screen 550 and the second operation region R2 and between the fifth screen 570 and the fourth operation region R4.
  • When the second operation region R2 is touched, the control unit 300 transmits an image signal to the projector 510, and a menu image including a group of icons indicating displayable content is displayed on the third screen 550 at a position and in a shape along the stepped portion 582.
  • At this time, the control unit 300 displays an emphasized image that emphasizes the third screen 550, which cooperates with the second operation region R2 touched by the finger or other detected object.
  • In the present embodiment, a colored background image having a color different from that in the normal state is displayed over the entire third screen 550 as the emphasized image.
  • The emphasized image may be any image that emphasizes the display area; a surrounding line, gradation, pattern, or the like may be displayed on the periphery of the display area.
  • Similarly, when the fourth operation region R4 is touched, the control unit 300 transmits an image signal to the projector 510, and a menu image including a group of icons indicating functions of the in-vehicle electronic device 20 is displayed on the fifth screen 570 at a position and in a shape along the stepped portion 584.
  • At this time, the control unit 300 displays an emphasized image that emphasizes the fifth screen 570, which cooperates with the fourth operation region R4 in contact with the finger or other detected object.
  • In the present embodiment, a colored background image having a color different from that in the normal state is displayed over the entire fifth screen 570 as the emphasized image.
  • Similarly, when the first operation region R1 or the third operation region R3 is touched, an emphasized image that emphasizes the second screen 540 or the fourth screen 560 associated with that operation region is displayed. By means of the emphasized image, the user can grasp, at the moment of touching the operation regions R1 to R4, whether the desired operation region has been touched, which improves operability (see the sketch below).
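The emphasized-image behavior can be sketched as follows, assuming the region-to-screen associations of this embodiment (R1 with screen 540, R2 with 550, R3 with 560, R4 with 570); the renderer and its method are hypothetical placeholders:

```python
# Sketch: when a contact is detected, the screen cooperating with the touched
# operation region is highlighted with a background color different from the
# normal state. The mapping follows the embodiment; drawing calls are invented.

COOPERATING_SCREEN = {"R1": 540, "R2": 550, "R3": 560, "R4": 570}

def on_contact(region, renderer):
    screen = COOPERATING_SCREEN.get(region)
    if screen is not None:
        # Display the emphasized image over the whole cooperating screen.
        renderer.fill_background(screen, color="highlight")

class ConsoleRenderer:
    def fill_background(self, screen, color):
        print(f"screen {screen}: background -> {color}")

on_contact("R2", ConsoleRenderer())  # emphasizes the third screen 550
```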
  • As described above, in the operating device 1000, the first and second design portions 130 and 230, imitating part of the shapes of the third screen 550 and the fifth screen 570 among the plurality of screens 530 to 570, are formed on the surfaces of the second operation region R2 and the fourth operation region R4 among the plurality of operation regions R1 to R4. Thus, even a user unfamiliar with the operating device 1000, for example one seeing it for the first time, can grasp at a glance the cooperative relationships between the third screen 550 and the second operation region R2 and between the fifth screen 570 and the fourth operation region R4.
  • Further, based on the contact positions detected by the sensor sheets 112 and 212, the control unit 300 causes the display device 500 to display an emphasized image that emphasizes the screen 540 to 570 cooperating with the operation region R1 to R4 in contact with the detected object. This allows the user to grasp, at the moment of touching an operation region, whether the desired one of the operation regions R1 to R4 has been touched, improving operability.
  • In the operating device 1000, the display device 500 displays icon images IC1 to IC4 and IC5 to IC8 having predetermined shapes as display images on the third screen 550 and the fifth screen 570, and in the second operation region R2 and the fourth operation region R4, icon operation portions 111g to 111j and 211g to 211j having three-dimensional shapes substantially similar to the icon images IC1 to IC4 and IC5 to IC8 are formed.
  • The icon images IC1 to IC4 displayed on the third screen 550 are circular, as shown in FIG. 15, each indicate content that can be displayed on the display device 500, and are displayed along the stepped portion 582, together constituting a menu image for selecting the content to be displayed.
  • The icon operation portions 111g to 111j formed in the second operation region R2 each have a convex shape substantially similar to the icon images IC1 to IC4, and are formed along the stepped portion 111c so as to have the same positional relationship as the icon images IC1 to IC4.
  • The icon images IC1 to IC4 are switched between a selected state and a non-selected state according to touch operations on the icon operation portions 111g to 111j, with which they cooperate on a one-to-one basis. That is, as shown in FIG. 15, when the user performs the touch operation OP7 on (touches) the icon operation portion 111j, the control unit 300 transmits an image signal to the projector 510 and puts the icon image IC4, displayed on the third screen 550 and linked to the icon operation portion 111j, into the selected state. In the present embodiment, the icon image IC4 in the selected state is displayed with negative/positive inversion.
  • When the pressing operation is performed in this state, the content corresponding to the selected icon image IC4 is displayed on the second screen 540 as a display image.
  • Similarly, when the icon operation portions 111g to 111i are touch-operated, the corresponding icon images IC1 to IC3 are put into the selected state.
  • In this way, by touching the icon operation portions 111g to 111j with a fingertip and feeling through touch the shapes of the icon images IC1 to IC4 grasped through vision, the user can operate as if actually touching the icon images IC1 to IC4 (see the sketch below).
  • The icon operation portions 111g to 111j may instead have concave shapes substantially similar to the icon images IC1 to IC4.
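A sketch of the one-to-one cooperation between the icon operation portions and the icon images: a hit test on the convex portions puts the linked icon image into the selected state (drawn with negative/positive inversion). The coordinates and radius are invented for illustration:

```python
# Sketch of icon hit-testing and selection. Geometry and names are
# illustrative assumptions; the patent does not specify an implementation.

ICON_CENTERS = {"111g": (20, 20), "111h": (40, 18),
                "111i": (60, 16), "111j": (80, 14)}  # along stepped portion 111c
LINKED_ICON = {"111g": "IC1", "111h": "IC2", "111i": "IC3", "111j": "IC4"}
ICON_RADIUS = 8

def touched_icon(pos):
    """Return the icon image linked to the icon operation portion under pos."""
    x, y = pos
    for part, (cx, cy) in ICON_CENTERS.items():
        if (x - cx) ** 2 + (y - cy) ** 2 <= ICON_RADIUS ** 2:
            return LINKED_ICON[part]
    return None

selected = {ic: False for ic in LINKED_ICON.values()}
hit = touched_icon((79, 15))          # touch operation OP7 on portion 111j
if hit:
    selected[hit] = True              # IC4 drawn with negative/positive inversion
print(selected)
```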
  • The icon images IC5 to IC8 displayed on the fifth screen 570 are circular, as shown in FIG. 16, each indicate a function of the in-vehicle electronic device 20, and are displayed along the stepped portion 584, together constituting a menu image for selecting a function of the in-vehicle electronic device 20.
  • The icon operation portions 211g to 211j formed in the fourth operation region R4 each have a convex shape substantially similar to the icon images IC5 to IC8, and are formed along the stepped portion 211c so as to have the same positional relationship as the icon images IC5 to IC8.
  • The icon images IC5 to IC8 are switched between a selected state and a non-selected state according to touch operations on the icon operation portions 211g to 211j, with which they cooperate on a one-to-one basis. That is, as shown in FIG. 16, when the user performs the touch operation OP8 on (touches) the icon operation portion 211j, the control unit 300 transmits an image signal to the projector 510 and puts the icon image IC8, displayed on the fifth screen 570 and linked to the icon operation portion 211j, into the selected state. In the present embodiment, the icon image IC8 in the selected state is displayed with negative/positive inversion.
  • When the pressing operation is performed in this state, the function corresponding to the selected icon image IC8 is executed.
  • Similarly, when the icon operation portions 211g to 211i are touch-operated, the corresponding icon images IC5 to IC7 are put into the selected state.
  • In this way, by touching the icon operation portions 211g to 211j with a fingertip and feeling through touch the shapes of the icon images IC5 to IC8 grasped through vision, the user can operate as if actually touching the icon images IC5 to IC8.
  • The icon operation portions 211g to 211j may instead have concave shapes substantially similar to the icon images IC5 to IC8.
  • As described above, in the operating device 1000, the display device 500 displays icon images IC1 to IC4 and IC5 to IC8 having predetermined shapes as display images on the third screen 550 and the fifth screen 570 among the plurality of screens 530 to 570, and in the second operation region R2 and the fourth operation region R4, icon operation portions 111g to 111j and 211g to 211j having three-dimensional shapes substantially similar to the icon images IC1 to IC4 and IC5 to IC8 are formed. The user can thus operate the icon images IC1 to IC4 and IC5 to IC8 as if directly touching them, enabling intuitive operation.
  • the present invention is applicable to an operating device mounted on a vehicle.
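
The one-to-one cooperation between icon operation units and icon images described above can be pictured in code. The following is a minimal sketch in C; the patent discloses no source code, so the icon count, the function names, and the rendering stub are illustrative assumptions:

    #include <stdbool.h>
    #include <stdio.h>

    #define NUM_ICONS 4                 /* IC1..IC4 (or IC5..IC8) */

    static bool selected[NUM_ICONS];    /* selected / non-selected state */

    /* Stand-in for the image signal sent to the projector 510: a
       selected icon is drawn with negative/positive inversion. */
    static void draw_icon(int index, bool inverted)
    {
        printf("icon %d: %s\n", index, inverted ? "inverted" : "normal");
    }

    /* Called when the sensor sheet reports a touch on icon operation
       unit i; the icon image linked one-to-one with that unit becomes
       selected and all others become non-selected. */
    void on_icon_touch(int i)
    {
        for (int k = 0; k < NUM_ICONS; k++) {
            selected[k] = (k == i);
            draw_icon(k, selected[k]);
        }
    }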

Abstract

Provided is an operating device which enables intuitive operation. The operating device (1000) includes: input means (input units) (100, 200), each having an operation surface with a plurality of operation regions and a sensor part capable of detecting the contact position of a detected object on the operation surface; display means (display unit) (500) having a plurality of display regions; and control means for controlling a predetermined electronic device on the basis of the contact position of the detected object detected by the sensor part, and for switching a display image displayed in at least one of the plurality of display regions. The plurality of display regions are disposed three-dimensionally, the plurality of operation regions are formed three-dimensionally, and at least one of the plurality of operation regions has a shape approximately similar to the shape of one of the plurality of display regions.

Description

Operating device
The present invention relates to an operating device mounted on a vehicle.
For example, Patent Document 1 discloses display means for displaying a plurality of options for selecting the processing content of a vehicle-mounted device, and input operation means which is installed on a steering spoke constituting the steering wheel, has a detection area divided into a shape substantially similar to the display content of the display means, and generates operation information indicating the detection area corresponding to the contact position of the operator's operating finger; based on the operation information generated by the input operation means, the option corresponding, by similarity of shape, to the detection area touched by the operator's operating finger is determined, and the vehicle-mounted device is controlled accordingly.
JP-A-2005-96519
However, with the technique disclosed in Patent Document 1, although the operator can perform input operations while looking at the display screen of the display means, both the display screen of the display means and the detection surface of the input operation means are flat in shape; the correspondence between the display screen and the operation surface is therefore insufficient for intuitive operation without looking at the detection surface, and there remains room for improvement.
The present invention has therefore been made in view of the above problem, and an object thereof is to provide an operating device capable of intuitive operation.
In order to solve the above problem, the present invention provides an operating device comprising:
input means including an operation surface having a plurality of operation regions, and a sensor unit capable of detecting the contact position of a detected object on the operation surface;
display means having a plurality of display regions; and
control means for controlling a predetermined electronic device based on the contact position of the detected object detected by the sensor unit, and for switching a display image displayed in at least one of the plurality of display regions,
wherein the plurality of display regions are arranged three-dimensionally,
the plurality of operation regions are formed three-dimensionally, and
at least one of the plurality of operation regions has a shape substantially similar to the shape of one of the plurality of display regions.
In order to solve the above problem, the present invention also provides an operating device comprising:
input means including an operation surface having a plurality of operation regions, and a sensor unit capable of detecting the contact position of a detected object on the operation surface;
display means having a plurality of display regions; and
control means for controlling a predetermined electronic device based on the contact position of the detected object detected by the sensor unit, and for switching a display image displayed in at least one of the plurality of display regions,
wherein the plurality of display regions are arranged three-dimensionally,
the plurality of operation regions are formed three-dimensionally, and
a first three-dimensional identification portion formed on the operation surface and dividing the plurality of operation regions has a shape substantially similar to the shape of a second three-dimensional identification portion dividing the plurality of display regions.
Further, the input means may be mounted on a steering wheel,
the plurality of display regions may be arranged above the input means, and
at least one of the plurality of operation regions may have a shape substantially similar to the shape of the display region located above it.
Further, the input means may be mounted on a steering wheel,
the plurality of display regions may be arranged above the input means, and
the first three-dimensional identification portion may have a shape substantially similar to the shape of the second three-dimensional identification portion located above it.
Further, the plurality of operation regions may be formed so that their surfaces differ in tactile feel.
Further, at least one of the plurality of operation regions may have, formed on its surface, a design portion imitating part of the shape of one of the plurality of display regions.
The control means may cause the display means to display an emphasized image that emphasizes the display region cooperating with the operation region touched by the detected object, based on the contact position of the detected object detected by the sensor unit.
Further, the display means may display an icon image of a predetermined shape as the display image in at least one of the plurality of display regions, and at least one of the plurality of operation regions may have formed on it an icon operation unit with a three-dimensional shape substantially similar to the icon image.
Further, the display means may include a projection type display that emits display light, and a plurality of screens that constitute the plurality of display regions and onto which the display light is projected to display the display image.
Further, the control means may switch the display image in a direction and at a speed substantially coinciding with the trajectory of the detected object detected by the sensor unit.
The present invention enables intuitive operation.
FIG. 1 is a diagram showing the electrical configuration of an operating device according to an embodiment of the present invention.
FIG. 2 is a diagram showing an overview of the vicinity of the driver's seat in a vehicle in which the operating device is mounted.
FIG. 3 is a front view and an A-A sectional view of the first input device.
FIG. 4 is a diagram explaining the first sensor row and the second sensor row of the sensor sheet.
FIG. 5 is a front view and a B-B sectional view of the second input device.
FIG. 6 is a perspective view showing the display device.
FIG. 7 is a diagram showing an example of a gesture operation on the first operation region and the display image of the display device updated accordingly.
FIG. 8 is a diagram showing an example of a touch operation on the second operation region and the display image of the display device updated accordingly.
FIG. 9 is a diagram showing an example of a gesture operation on the second operation region and the display image of the display device updated accordingly.
FIG. 10 is a diagram showing an example of a gesture operation on the third operation region and the display image of the display device updated accordingly.
FIG. 11 is a diagram showing an example of a touch operation on the fourth operation region and the display image of the display device updated accordingly.
FIG. 12 is a diagram showing an example of a gesture operation on the fourth operation region and the display image of the display device updated accordingly.
FIGS. 13 to 16 are diagrams each showing an example of another embodiment of the present invention.
Embodiments to which the present invention is applied will be described below with reference to the accompanying drawings.
The operating device according to the present embodiment is an operating device 1000 mounted on a vehicle 1, as shown in FIG. 1. When a user of the vehicle 1 (usually the driver) operates the first and second input devices (input means) 100 and 200, the control unit (control means) 300 causes the in-vehicle electronic device 20 to execute various operations corresponding to that input, and causes the display device (display means) 500 to display images corresponding to the operation or to the results of the various operations.
(Configuration of the vehicle 1)
As shown in FIG. 2, the vehicle 1 includes a steering 10.
The steering 10 is part of the steering system of the vehicle 1 and includes a main body 11 and a steering wheel 12.
The main body 11 is a spoke portion connected to a steering shaft (not shown) of the vehicle 1, and carries the first input device 100 on the right side and the second input device 200 on the left side as viewed from the user. The main body 11 is also formed with attachment holes (not shown) matching the shapes of the first and second input devices 100 and 200; with the first and second input devices 100 and 200 fitted into these attachment holes, only their operation surfaces, described later, are exposed.
The steering wheel 12 is a ring-shaped member attached to the main body 11 and gripped by the driver when steering the vehicle 1.
The in-vehicle electronic device 20 is an audio device, a car navigation device, or the like. It includes electronic devices fitted into the instrument panel of the vehicle 1, electronic devices detachably mounted in the vehicle 1 by a cradle placed on the dashboard or elsewhere, electronic devices simply brought into and operated inside the vehicle 1, and high-function mobile phones known as smartphones. The in-vehicle electronic device 20 is electrically connected to the control unit 300 described later and operates in accordance with control signals received from the control unit 300. An image corresponding to the operation of the in-vehicle electronic device 20 is displayed in a display region, described later, of the display device 500.
(Configuration of the operating device 1000)
The operating device 1000 includes the first input device 100, the second input device 200, the control unit 300, the storage unit 400, and the display device 500.
FIG. 3 is a front view and a sectional view of the first input device 100. As shown in FIGS. 3(a) and 3(b), the first input device 100 includes a contact sensor 110 and a switch device 120.
The contact sensor 110 is a touch pad device that, under the control of the control unit 300 described later, detects the position at which a thumb or other finger touches its operation surface when the user performs an operation of touching the operation surface (hereinafter referred to as a touch operation) or an operation of tracing a predetermined trajectory on it (hereinafter referred to as a gesture operation). It includes a surface cover 111, a sensor sheet (sensor unit) 112, and a spacer 113.
The surface cover 111 is formed in sheet form from a light-shielding insulating material such as synthetic resin, and has an operation surface 111a, which the user's finger or the like touches during a touch operation or a gesture operation, and a peripheral portion 111b, which lies at the periphery of the operation surface 111a and is covered by a case or the like (not shown) so that the user's finger cannot touch it. As shown in FIG. 3, a step portion 111c is formed on the operation surface 111a, and a first operation region R1 and a second operation region R2, at different depth positions with respect to the step portion 111c, are formed three-dimensionally on the operation surface 111a. That is, the step portion 111c serves as a first three-dimensional identification portion dividing the operation surface 111a into the first and second operation regions R1 and R2.
The first operation region R1 is the operation region located below the step portion 111c on the operation surface 111a, and has a flat portion 111d, the planar part of the operation surface 111a, and a recess 111e, located at the approximate center of the first operation region R1 and depressed in a circular shape so as to sink from the flat portion 111d toward the back side. Here, "front side" refers to the user side of the first input device 100, as indicated by the double-headed arrow in FIG. 3(b), and "back side" refers to the opposite side.
The second operation region R2 is the operation region located above the step portion 111c on the operation surface 111a, and has a raised portion 111f bulging from the flat portion 111d toward the front side. That is, the second operation region R2 lies further toward the front side than the first operation region R1.
The sensor sheet 112 is a projected capacitance type sensor sheet having a plurality of sensors 1120 (detection electrodes) for detecting the position of a detected object such as a finger, and is located on the back side of the surface cover 111. The sensors 1120 are made of a translucent material, and the sensor sheet 112 is light-transmissive.
As shown in FIG. 4, the sensor sheet 112 is roughly constructed by overlaying a layer having a first sensor row 112a for detecting the position of the detected object in the X direction and a layer having a second sensor row 112b for detecting its position in the Y direction. With the first sensor row 112a and the second sensor row 112b combined, the sensors 1120 end up arranged in a matrix on the sensor sheet 112. The first sensor row 112a and the second sensor row 112b are each electrically connected to the control unit 300 described later.
When a detected object such as a finger touches the surface cover 111, the capacitance between the detected object and the sensors 1120 on the back side changes. Since the control unit 300 is electrically connected to each sensor 1120, it can detect the change in capacitance at each sensor, and from this change it calculates an input coordinate value (X, Y) indicating the contact position of the detected object. The input coordinate value is a coordinate value in an XY coordinate system preset for the sensors 1120 on the operation surface. It is expressed by the X coordinate assigned to the centroid position of the distribution of capacitance changes in the X direction (for example, the position of the sensor 1120 whose capacitance change exceeds a fixed threshold and is largest) and the Y coordinate assigned to the centroid position of the distribution of capacitance changes in the Y direction (determined in the same way). The control unit 300 obtains the input coordinate value (X, Y) by calculating this X coordinate and Y coordinate.
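A minimal sketch in C of this calculation, using the simplified rule given in parentheses above (per row, the coordinate of the sensor whose above-threshold capacitance change is largest); the row lengths and the threshold are illustrative assumptions, not values from the patent:

    #include <stdbool.h>

    #define NX 16          /* sensors in the first sensor row (X direction)  */
    #define NY 16          /* sensors in the second sensor row (Y direction) */
    #define THRESHOLD 100  /* minimum capacitance change counted as contact  */

    /* Writes the input coordinate value (X, Y): per row, the index of
       the sensor whose capacitance change exceeds the threshold and is
       largest.  Returns false when no contact is detected. */
    bool input_coordinate(const int dcx[NX], const int dcy[NY], int *x, int *y)
    {
        int bx = -1, by = -1;
        for (int i = 0; i < NX; i++)
            if (dcx[i] > THRESHOLD && (bx < 0 || dcx[i] > dcx[bx]))
                bx = i;
        for (int j = 0; j < NY; j++)
            if (dcy[j] > THRESHOLD && (by < 0 || dcy[j] > dcy[by]))
                by = j;
        if (bx < 0 || by < 0)
            return false;
        *x = bx;
        *y = by;
        return true;
    }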
Returning to FIG. 3, the sensor sheet 112 is integrally formed with the surface cover 111 by drawing, and is thereby processed into the same shape as the surface cover 111 (see FIG. 3(b)). Molded together in this way, the surface cover 111 and the sensor sheet 112 behave like a single sheet, and the stepped features of the operation surface, such as the step portion 111c, the flat portion 111d, the recess 111e, and the raised portion 111f, are formed by the bent parts of that single sheet. The integral molding also brings the back surface of the surface cover 111 into contact with the front surface of the sensor sheet 112, so that the sensors 1120 are arranged in correspondence with the stepped shape of the surface cover 111. Because the sensors 1120 are arranged in this way, the control unit 300 can detect the capacitance change at each sensor even for a gesture operation performed over stepped features such as the step portion 111c.
The spacer 113 is located on the back side of the sensor sheet 112 and, as shown in FIG. 3(b), is formed to match the shape of the integrally molded surface cover 111 and sensor sheet 112; it is a member that maintains their shape when pressure is applied from the front side of the surface cover 111 by the user's operation.
The switch device 120 is located on the back side of the contact sensor 110 and is electrically connected to the control unit 300. When the user performs an operation of pressing the operation surface of the first input device 100 (hereinafter referred to as a pressing operation), the switch device 120 is pressed and transmits a predetermined input signal to the control unit 300. A pressing operation is used to execute control different from the control triggered by a touch operation or a gesture operation on the operation surface of the first input device 100.
The first input device 100 is attached to the main body 11 of the steering 10, for example by welding a case (not shown) of the contact sensor 110 to the main body 11 with soft resin. Attached in this way, when the user presses the operation surface, the contact sensor 110 sinks and the switch device 120 is pressed.
The first input device 100 is composed of the above components.
FIG. 5 is a front view and a sectional view of the second input device 200. The second input device 200 has the same configuration as the first input device 100 and includes a contact sensor 210 and a switch device 220.
The contact sensor 210 is a touch pad device similar to the contact sensor 110 and includes a surface cover 211, a sensor sheet 212, and a spacer 213.
The surface cover 211 is formed in sheet form from a light-shielding insulating material such as synthetic resin, and has an operation surface 211a, which the user's finger or the like touches during a touch operation or a gesture operation, and a peripheral portion 211b, which lies at the periphery of the operation surface 211a and is covered by a case or the like (not shown) so that the user's finger cannot touch it. As shown in FIG. 5, the operation surface 211a as a whole is raised toward the front side from the peripheral portion 211b, and a step portion 211c is formed on it. The operation surface 211a is divided into a third operation region R3 and a fourth operation region R4, at different depth positions with respect to the step portion 211c. That is, the step portion 211c serves as a first three-dimensional identification portion dividing the operation surface 211a into the third and fourth operation regions R3 and R4.
The third operation region R3 is the operation region located below the step portion 211c on the operation surface 211a, and has a flat portion 211d, the planar part of the operation surface 211a, and a recess 211e, located at the approximate center of the third operation region R3 and depressed in a circular shape so as to sink from the flat portion 211d toward the back side. Here, "front side" refers to the user side of the second input device 200, as indicated by the double-headed arrow in FIG. 5(b), and "back side" refers to the opposite side.
The fourth operation region R4 is the operation region located above the step portion 211c on the operation surface 211a, and has a raised portion 211f bulging from the flat portion 211d toward the front side. That is, the fourth operation region R4 lies further toward the front side than the third operation region R3.
Like the sensor sheet 112, the sensor sheet 212 is a projected capacitance type sensor sheet having a plurality of sensors (detection electrodes) for detecting the position of a detected object such as a finger, and is located on the back side of the surface cover 211. The sensors are made of a translucent material, and the sensor sheet 212 is light-transmissive.
The spacer 213 is located on the back side of the sensor sheet 212 and, as shown in FIG. 5(b), is formed to match the shape of the integrally molded surface cover 211 and sensor sheet 212; it is a member that maintains their shape when pressure is applied from the front side of the surface cover 211 by the user's operation.
The switch device 220 is located on the back side of the contact sensor 210 and is electrically connected to the control unit 300. When the user performs an operation of pressing the operation surface of the second input device 200 (likewise referred to as a pressing operation), the switch device 220 is pressed and transmits a predetermined input signal to the control unit 300. A pressing operation is used to execute control different from the control triggered by a touch operation or a gesture operation on the operation surface of the second input device 200.
The second input device 200 is composed of the above components.
Returning to FIG. 1, the control unit 300 is composed of a CPU (Central Processing Unit) and the like, and executes the operation programs stored in the storage unit 400 to perform various kinds of processing and control. At least part of the control unit 300 may be composed of dedicated circuits such as an ASIC (Application Specific Integrated Circuit).
The storage unit 400 is composed of a ROM (Read Only Memory), a RAM (Random Access Memory), flash memory, and the like, and functions as the work area of the CPU constituting the control unit 300, as a program area storing the operation programs executed by the CPU, as a data area, and so on.
The program area stores operation programs such as a program for executing the in-vehicle electronic device control processing described later and a program for executing the display device control processing.
The data area stores, in advance, virtual operation regions, execution conditions, corresponding operation data, image data, and the like.
The virtual operation regions are regions set by virtually dividing the operation surfaces of the first and second input devices 100 and 200; in the present embodiment, the operation regions R1 to R4, obtained mainly by dividing the operation surfaces 111a and 211a vertically at the step portions 111c and 211c, are set. The virtual operation regions may additionally include regions corresponding to the recesses 111e and 211e.
The execution conditions are data that serve as the triggers for transmitting a control signal to the in-vehicle electronic device 20 for each virtual operation region, that is, for executing control of the in-vehicle electronic device 20. There are a plurality of execution conditions, and different conditions are associated with different virtual operation regions. In the present embodiment, the execution conditions include whether a trace follows a specific trajectory, whether a predetermined virtual operation region has been touched, and whether a pressing operation has been detected. A plurality of execution conditions can also be set for one virtual operation region; for example, a specific-trajectory condition and a pressing-operation condition may both be set for the same virtual operation region. A "specific trajectory" is, for example, a substantially arc-shaped trajectory along the raised portion 111f or 211f.
The corresponding operation data are control-signal data that cause the in-vehicle electronic device 20 to execute predetermined operations. There are a plurality of items of corresponding operation data, each associated with a virtual operation region and an execution condition. That is, the operating device 1000 can execute a plurality of different controls on the in-vehicle electronic device 20 depending on the combination of the position touched on the operation surface and the pattern of the gesture operation.
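Because each item of corresponding operation data is keyed by a virtual operation region and an execution condition, the data area can be pictured as a lookup table. The following is a minimal sketch in C under that assumption; the enumerators and table entries are illustrative (including the example of two conditions set on the same region R2), not values disclosed by the patent:

    typedef enum { REGION_R1, REGION_R2, REGION_R3, REGION_R4 } Region;
    typedef enum { COND_ARC_TRACE, COND_TOUCH, COND_PRESS } Condition;
    typedef enum { SIG_SWITCH_VIEW, SIG_SHOW_MENU, SIG_SELECT, SIG_VOLUME } Signal;

    typedef struct {
        Region    region;      /* virtual operation region      */
        Condition condition;   /* execution condition (trigger) */
        Signal    signal;      /* corresponding operation data  */
    } Entry;

    static const Entry table[] = {
        { REGION_R1, COND_ARC_TRACE, SIG_SWITCH_VIEW },
        { REGION_R2, COND_TOUCH,     SIG_SHOW_MENU   },
        { REGION_R2, COND_PRESS,     SIG_SELECT      },  /* two conditions on one region */
        { REGION_R3, COND_ARC_TRACE, SIG_VOLUME      },
    };

    /* Returns the control signal matching (region, condition),
       or -1 when no entry matches. */
    int lookup_signal(Region r, Condition c)
    {
        for (unsigned i = 0; i < sizeof table / sizeof table[0]; i++)
            if (table[i].region == r && table[i].condition == c)
                return (int)table[i].signal;
        return -1;
    }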
The image data are image-signal data that cause the display device 500 to display predetermined display images. The control unit 300 performs control to update the display image shown on the display device 500 based on operations on the first and second input devices 100 and 200 and on the predetermined operations the in-vehicle electronic device 20 performs in response.
Each item of data stored in the storage unit 400 is stored as appropriate, as a default value or by the user's own operation, using a known data registration method.
FIG. 6 is a perspective view of the display device 500. The display device 500 is arranged in front of the user and above the steering 10, and includes a projector (projection type display) 510, a mirror 520, a first screen 530, a second screen 540, a third screen 550, a fourth screen 560, a fifth screen 570, and a screen holder 580.
The projector 510 emits display light L representing a display image toward the first to fifth screens 530 to 570, and is, for example, a liquid crystal projector that forms the display light L by passing light from a light source through a liquid crystal panel. A plurality of projectors 510 may be provided.
The mirror 520 is a reflecting member formed by vapor-depositing a metal such as aluminum onto a resin material such as polycarbonate to form a reflecting surface. The display light L emitted from the projector 510 is reflected by the mirror 520 and projected onto the first to fifth screens 530 to 570.
The first to fifth screens 530 to 570 constitute display regions onto which the display light L emitted from the projector 510 is projected so that each displays a display image. In the present embodiment, the first to fifth screens 530 to 570 are reflective screens that reflect the display light L to display the display images, and have three-dimensional shapes consisting of planes, curved surfaces, spherical surfaces, or combinations thereof.
The first screen 530 is the screen located at the center, and constitutes a first display region that displays, as display images, vehicle information such as vehicle speed, engine speed, remaining fuel, electric energy, and travel distance, as well as navigation information.
The second screen 540 is the screen located on the lower right as viewed from the user and, as described later, cooperates with the first operation region R1 of the first input device 100, constituting a second display region that displays, as display images, information on operations in the first operation region R1 or on the operations of the in-vehicle electronic device 20 accompanying those operations.
The third screen 550 is the screen located on the upper right as viewed from the user; it displays various indicators and warnings as display images and, as described later, cooperates with the second operation region R2 of the first input device 100, constituting a third display region that displays, as display images, information on operations in the second operation region R2 or on the operations of the in-vehicle electronic device 20 accompanying those operations.
The fourth screen 560 is the screen located on the lower left as viewed from the user and, as described later, cooperates with the third operation region R3 of the second input device 200, constituting a fourth display region that displays, as display images, information on operations in the third operation region R3 or on the operations of the in-vehicle electronic device 20 accompanying those operations.
The fifth screen 570 is the screen located on the upper left as viewed from the user; it displays various indicators and warnings as display images and, as described later, cooperates with the fourth operation region R4 of the second input device 200, constituting a fifth display region that displays information on operations in the fourth operation region R4 or on the operations of the in-vehicle electronic device 20 accompanying those operations.
The first to fifth screens 530 to 570 are each held on one of a plurality of holding surfaces of the screen holder 580, which are stepped and differ in position in the depth direction, so that the screens are arranged three-dimensionally, that is, at depth positions separated from one another by space. Specifically, the second and fourth screens 540 and 560 are located one step toward the front (user) side of the first screen 530 across a space, and the third and fifth screens 550 and 570 are located one further step toward the front side of the second and fourth screens 540 and 560. As a result, the display image shown on the first screen 530, those shown on the second and fourth screens 540 and 560, and those shown on the third and fifth screens 550 and 570 differ in depth, enabling display with a stereoscopic effect.
The screen holder 580 is a plate-shaped member made of, for example, synthetic resin, which holds the first to fifth screens 530 to 570 in predetermined positions via an adhesive material (not shown) that does not undergo thermal shrinkage. The first to fifth screens 530 to 570 may instead be fixed by a frame-shaped member such as a bezel. Step portions 581 to 584 are formed on the holding surfaces of the screen holder 580 so as to divide the first to fifth screens 530 to 570.
Next, the cooperation between the operation regions R1 to R4 of the first and second input devices 100 and 200 and the display regions of the display device 500 will be described.
FIG. 7 shows a gesture operation OP1 on the first operation region R1 of the first input device 100 and the display image on the second screen 540 of the display device 500 updated (switched) in response. On the second screen 540, driving information, namely the outside air temperature (written "OUTSIDE TEMP" in FIG. 7) and the distance to empty (written "DISTANCE TO EMPTY" in FIG. 7), is displayed as the display image of the initial screen. Here, the first operation region R1 of the first input device 100 has a shape substantially similar to that of the second screen 540 arranged above the first input device 100, and the first operation region R1 is thereby associated with the second screen 540. "Substantially similar" covers not only exact geometric similarity but also shapes close enough to be recognized as similar when the two are compared. The user grasps the shape of the first operation region R1 by viewing the shape of the three-dimensionally arranged second screen 540, and, by touching the step portion 111c and the like with a fingertip, feels through the sense of touch the three-dimensional shape of the operation surface 111a grasped through sight; the user can thus perform gesture operations and the like on the first operation region R1 intuitively. Because the second screen 540 is arranged three-dimensionally, its shape is easy to see, and because the three-dimensional shape of the first operation region R1 can be felt by touch, the user can perform gesture operations and the like intuitively without looking at the operation surface 111a. As an example of an operation on the first operation region R1, when the user performs a gesture operation OP1 of tracing the first operation region R1, the control unit 300 transmits an image signal to the projector 510 and switches the display images on the second screen 540 in sequence, in a direction (to the right in the present embodiment) and at a speed substantially coinciding with the trajectory of the gesture operation OP1, to display, for example, a surroundings image showing the scenery to the right rear of the vehicle 1. Since the display image on the second screen 540 is switched in a direction and at a speed substantially coinciding with the trajectory of the gesture operation OP1 on the first operation region R1, the user can operate with the sensation of actually touching the display image.
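One way to realize switching "in a direction and at a speed substantially coinciding with the trajectory" is to derive both quantities from successive contact samples reported by the sensor sheet. The following is a minimal sketch in C under that assumption; the sample format, units, and names are illustrative:

    #include <math.h>

    typedef struct {
        float    x, y;    /* input coordinate value      */
        unsigned ms;      /* sample time in milliseconds */
    } Sample;

    /* Writes the unit direction vector of the gesture into (*dx, *dy)
       and returns its speed in coordinate units per millisecond; the
       caller paces the switching of the display images from these
       values. */
    float gesture_velocity(Sample a, Sample b, float *dx, float *dy)
    {
        float ex = b.x - a.x, ey = b.y - a.y;
        float dt = (float)(b.ms - a.ms);
        float len = sqrtf(ex * ex + ey * ey);
        if (dt <= 0.0f || len == 0.0f) {
            *dx = *dy = 0.0f;
            return 0.0f;
        }
        *dx = ex / len;
        *dy = ey / len;
        return len / dt;
    }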
FIG. 8 shows a touch operation OP2 on the second operation region R2 of the first input device 100 and the display image on the third screen 550 of the display device 500 updated in response, and FIG. 9 shows a gesture operation OP3 on the second operation region R2 and the display image on the third screen 550 updated in response. As shown in FIG. 8, an indicator of the direction indicators is displayed on the third screen 550 as the display image of the initial screen. Here, the step portion 582 dividing the second and third screens 540 and 550 is formed in a shape substantially similar to the shape of the step portion 111c dividing the first and second operation regions R1 and R2. That is, the step portion 582 serves as a second three-dimensional identification portion substantially similar in shape to the step portion 111c, the first three-dimensional identification portion of the first input device 100. By viewing the three-dimensional shape of the step portion 582, the user grasps the position and part of the shape (the lower part in the present embodiment) of the second operation region R2, which is divided from the first operation region R1 by the step portion 111c, and, by touching the step portion 111c and the like with a fingertip, feels through the sense of touch the three-dimensional shape of the operation surface 111a grasped through sight; the user can thus perform gesture operations and the like on the second operation region R2 intuitively. Because the step portion 582 has a three-dimensional shape, it is easy to see, and because the three-dimensional shape of the step portion 111c can be felt by touch, the user can perform gesture operations and the like intuitively without looking at the operation surface 111a. As an example of an operation on the second operation region R2, when the user performs a touch operation OP2 on (touches) the second operation region R2, the control unit 300 transmits an image signal to the projector 510 and causes the third screen 550 to display, in a position and shape following the step portion 582, a menu image consisting of a group of icons representing the content that can be displayed on the display device 500. Further, as shown in FIG. 9, when the user performs a gesture operation OP3 of tracing the second operation region R2 along the step portion 111c, the control unit 300 slides the icon group in sequence, in a direction (toward the upper right in the present embodiment) and at a speed substantially coinciding with the trajectory of the gesture operation OP3. Since the display image on the third screen 550 is switched in a direction and at a speed substantially coinciding with the trajectory of the gesture operation OP3 on the second operation region R2, the user can operate with the sensation of actually touching the display image. When a pressing operation is performed on the second operation region R2 while the menu image is displayed, the selected content is displayed on the second screen 540 as a display image.
FIG. 10 shows a gesture operation OP4 on the third operation region R3 of the second input device 200 and the display image on the fourth screen 560 of the display device 500 updated in response. On the fourth screen 560, an audio operation image is displayed as the display image of the initial screen. Here, the third operation region R3 of the second input device 200 has a shape substantially similar to that of the fourth screen 560 arranged above the second input device 200, and the third operation region R3 is thereby associated with the fourth screen 560. The user grasps the shape of the third operation region R3 by viewing the shape of the three-dimensionally arranged fourth screen 560, and, by touching the step portion 211c and the like with a fingertip, feels through the sense of touch the three-dimensional shape of the operation surface 211a grasped through sight; the user can thus perform gesture operations and the like on the third operation region R3 intuitively. Because the fourth screen 560 is arranged three-dimensionally, its shape is easy to see, and because the three-dimensional shape of the third operation region R3 can be felt by touch, the user can perform gesture operations and the like intuitively without looking at the operation surface 211a. As an example of an operation on the third operation region R3, when the user performs a gesture operation OP4 of tracing clockwise inside the recess 211e of the third operation region R3, the control unit 300 transmits a volume control signal to the in-vehicle electronic device 20 to change the volume, and transmits an image signal to the projector 510 to update the volume indication in the display image on the fourth screen 560 as the volume changes.
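A clockwise trace inside the circular recess 211e can, for example, be converted into volume steps from the sign of the cross product of successive contact vectors taken about the centre of the recess. The following is a minimal sketch in C under that assumption; the centre point, the step granularity, and the sign convention are illustrative:

    /* Returns +1 for a clockwise step (raise volume), -1 for a
       counter-clockwise step (lower volume), 0 for no rotation.
       The z component of (p0 - c) x (p1 - c) is positive for a
       clockwise turn when y increases downward (screen coordinates,
       an assumption). */
    int volume_step(float cx, float cy,   /* centre of recess 211e     */
                    float x0, float y0,   /* previous contact position */
                    float x1, float y1)   /* current contact position  */
    {
        float cross = (x0 - cx) * (y1 - cy) - (y0 - cy) * (x1 - cx);
        if (cross > 0.0f) return +1;
        if (cross < 0.0f) return -1;
        return 0;
    }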
FIG. 11 shows a touch operation OP5 on the fourth operation region R4 of the second input device 200 and the display image on the fifth screen 570 of the display device 500 updated in response, and FIG. 12 shows a gesture operation OP6 on the fourth operation region R4 and the display image on the fifth screen 570 updated in response. As shown in FIG. 11, an indicator of the direction indicators is displayed on the fifth screen 570 as the display image of the initial screen. Here, the step portion 584 dividing the fourth and fifth screens 560 and 570 is formed in a shape substantially similar to the shape of the step portion 211c dividing the third and fourth operation regions R3 and R4. That is, the step portion 584 serves as a second three-dimensional identification portion substantially similar in shape to the step portion 211c, the first three-dimensional identification portion of the second input device 200. By viewing the three-dimensional shape of the step portion 584, the user grasps the position and part of the shape (the lower part in the present embodiment) of the fourth operation region R4, which is divided from the third operation region R3 by the step portion 211c, and, by touching the step portion 211c and the like with a fingertip, feels through the sense of touch the three-dimensional shape of the operation surface 211a grasped through sight; the user can thus perform gesture operations and the like on the fourth operation region R4 intuitively. Because the step portion 584 has a three-dimensional shape, it is easy to see, and because the three-dimensional shape of the step portion 211c can be felt by touch, the user can perform gesture operations and the like intuitively without looking at the operation surface 211a. As an example of an operation on the fourth operation region R4, when the user performs a touch operation OP5 on (touches) the fourth operation region R4, the control unit 300 transmits an image signal to the projector 510 and causes the fifth screen 570 to display, in a position and shape following the step portion 584, a menu image consisting of a group of icons representing the functions of the in-vehicle electronic device 20. Further, as shown in FIG. 12, when the user performs a gesture operation OP6 of tracing the fourth operation region R4 along the step portion 211c, the control unit 300 slides the icon group in sequence, in a direction (toward the upper left in the present embodiment) and at a speed substantially coinciding with the trajectory of the gesture operation OP6. Since the display image on the fifth screen 570 is switched in a direction and at a speed substantially coinciding with the trajectory of the gesture operation OP6 on the fourth operation region R4, the user can operate with the sensation of actually touching the display image. When a pressing operation is performed on the fourth operation region R4 while the menu image is displayed, the selected function is executed.
The operating device 1000 according to the present embodiment includes: the first input device (input means) 100 including the operation surface 111a having the plurality of operation regions R1 and R2 and the sensor sheet (sensor unit) 112 capable of detecting the contact position of a detected object on the operation surface 111a; the second input device (input means) 200 including the operation surface 211a having the plurality of operation regions R3 and R4 and the sensor sheet (sensor unit) 212 capable of detecting the contact position of a detected object on the operation surface 211a; the display device (display means) 500 including the plurality of screens (display regions) 530 to 570; and the control unit (control means) 300 that, based on the contact position of the detected object detected by the sensor sheets 112 and 212, controls the predetermined in-vehicle electronic device 20 and switches the image displayed on at least one of the plurality of screens 530 to 570. The plurality of screens 530 to 570 are arranged three-dimensionally, the plurality of operation regions R1, R2, R3, and R4 are formed three-dimensionally, and the first operation region R1 and the third operation region R3 have shapes substantially similar to the shapes of the second screen 540 and the fourth screen 560, respectively.
Accordingly, the user can perform gesture operations and the like intuitively without looking at the operation surfaces 111a and 211a.
 The operation device 1000 according to the present embodiment also comprises: the first input device (input means) 100 including the operation surface 111a having the plurality of operation regions R1 and R2 and the sensor sheet (sensor unit) 112 capable of detecting the contact position of a detected object on the operation surface 111a; the second input device (input means) 200 including the operation surface 211a having the plurality of operation regions R3 and R4 and the sensor sheet (sensor unit) 212 capable of detecting the contact position of a detected object on the operation surface 211a; the display device (display means) 500 including the plurality of screens (display areas) 530 to 570; and the control unit (control means) 300 that controls the predetermined in-vehicle electronic device 20 based on the contact positions detected by the sensor sheets 112 and 212 and switches the image displayed on at least one of the screens 530 to 570. The screens 530 to 570 are arranged three-dimensionally, and the operation regions R1, R2 and R3, R4 are formed three-dimensionally. The stepped portion (first three-dimensional identification portion) 111c formed on the operation surface 111a to divide the operation regions R1 and R2 and the stepped portion (first three-dimensional identification portion) 211c formed on the operation surface 211a to divide the operation regions R3 and R4 have shapes substantially similar to those of the stepped portion (second three-dimensional identification portion) 582 dividing the second screen 540 from the third screen 550 and the stepped portion (second three-dimensional identification portion) 584 dividing the fourth screen 560 from the fifth screen 570, respectively. The user can thereby intuitively perform gesture operations and the like without looking at the operation surfaces 111a and 211a.
 Further, the first and second input devices 100 and 200 are mounted on the steering 10, and the screens 530 to 570 are arranged above the first and second input devices 100 and 200. The first operation region R1 and the third operation region R3 have shapes substantially similar to those of the second screen 540 and the fourth screen 560 located above them. Because the associated operation regions R1 and R3 and screens 540 and 560 thus substantially coincide in lateral position, the user can operate the operation surfaces 111a and 211a with less sense of incongruity while looking at the screens 540 and 560.
 Likewise, with the first and second input devices 100 and 200 mounted on the steering 10 and the screens 530 to 570 arranged above them, the stepped portions 111c and 211c have shapes substantially similar to those of the stepped portions 582 and 584 located above them. Because the associated stepped portions 111c, 211c and 582, 584 thus substantially coincide in lateral position, the user can operate the operation surfaces 111a and 211a with less sense of incongruity while looking at the stepped portions 582 and 584.
 The display device 500 comprises a projector (projection type display) 510 that emits display light L, and a plurality of screens 530 to 570 that constitute the plurality of display areas and display images when the display light L is projected onto them. The screens 530 to 570 thus make it easy to obtain a plurality of three-dimensionally arranged display areas.
 The control unit 300 switches the display image in a direction and at a speed substantially matching the trajectory of the detected object detected by the sensor sheets 112 and 212. The user can therefore operate with the sensation of directly touching the display image, enabling intuitive operation.
 The present invention is not limited to the embodiment described above and can of course be modified as appropriate (including deletion of components) without departing from its gist. In the present embodiment two input devices 100 and 200 are provided as input means, but a single input means may be provided, and three or more operation regions may be formed. The shapes of two or more operation regions of one input means may be substantially similar to any of the display areas. Further, an emphasis image such as a gradation or a pattern may be displayed along the periphery of each display area to accentuate its shape, so that the shape of the display area can be recognized more clearly.
 The plurality of operation regions may also be formed with mutually different tactile textures by such means as printing, selective application of paint, or laser surface processing, which makes it easier to distinguish the touched operation region with a fingertip and is further suited to intuitive operation. In that case, a background image evoking the tactile texture of an operation region may be displayed in the display area associated with it (the display area whose image is updated by operations on that region and whose shape is substantially similar to it); the user then grasps the surface texture of the operation region visually and confirms it by touch while operating the operation surface, which further facilitates intuitive operation. For example, for an operation region with a slippery, low-friction texture, a glossy background image such as a metallic finish evoking that texture is displayed in the associated display area; for an operation region with a rough, high-friction texture, a background image containing unevenness evoking that texture is displayed. When the plurality of display areas are constituted by screens, each screen may itself be formed with a finish evoking the tactile texture of the associated operation region: for a slippery, low-friction operation region, the associated screen is formed with a smooth finish free of unevenness, and for a rough, high-friction operation region, with a finish containing corresponding unevenness.
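As a toy illustration of the texture-to-background pairing described above, the sketch below maps texture categories to background styles. The category names and style strings are assumptions invented for the example, not values from the embodiment.

```python
# Assumed pairing: slippery regions get a glossy background, rough regions an uneven one.
TEXTURE_TO_BACKGROUND = {
    "smooth": "metallic-gloss",   # low-friction region evokes a glossy, metallic image
    "rough": "embossed-pattern",  # high-friction region evokes an image with unevenness
}

def background_for(region_texture):
    """Pick the background style that evokes an operation region's surface texture."""
    return TEXTURE_TO_BACKGROUND.get(region_texture, "plain")

print(background_for("smooth"))  # -> metallic-gloss
```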
 FIGS. 13 and 14 show an example of another embodiment of the present invention.
 In this embodiment, the operation device 1000 has first and second design portions 130 and 230 formed on the surfaces of the second operation region R2 and the fourth operation region R4 of the first and second input devices 100 and 200, imitating part of the characteristic shapes of the third screen 550 and the fifth screen 570 with which these operation regions respectively cooperate. In the present embodiment, the first and second design portions 130 and 230 are printed on the portions of the surface covers 111 and 211 corresponding to the second operation region R2 and the fourth operation region R4, respectively. Here, "imitating" means simplifying part of the shape of the screen serving as the display area into a shape close enough to be recognized as similar when the two are compared, including the case where the shapes are identical.
 As shown in FIG. 13, the first design portion 130 is a curved design imitating the lower curved contour of the third screen 550; as shown in FIG. 14, the second design portion 230 is a curved design imitating the lower curved contour of the fifth screen 570. By visually comparing the characteristic lower contours of the third screen 550 and the fifth screen 570 with the first and second design portions 130 and 230 that imitate them, the user can grasp at a glance the cooperative relationship between the third and fifth screens 550 and 570 and the second and fourth operation regions R2 and R4, even when unfamiliar with the operation device 1000, such as on first use.
 As shown in FIG. 13, when the user performs a touch operation OP2 on (touches) the second operation region R2, the control unit 300 transmits an image signal to the projector 510 and causes the third screen 550 to display, in a position and shape following the stepped portion 582, a menu image consisting of a group of icons representing displayable content. At the same time, the control unit 300 displays an emphasis image highlighting the third screen 550, which cooperates with the second operation region R2 contacted by the detected object such as a finger. As an example of an emphasis image, in this embodiment a colored background image differing in color from the normal background is displayed over the entire third screen 550. The emphasis image need only highlight the display area; an enclosing line, a gradation, a pattern, or the like may instead be displayed along the periphery of the display area. Similarly, as shown in FIG. 14, when the user performs a touch operation OP5 on the fourth operation region R4, the control unit 300 transmits an image signal to the projector 510, causes the fifth screen 570 to display, in a position and shape following the stepped portion 584, a menu image consisting of a group of icons representing functions of the in-vehicle electronic device 20, and likewise displays an emphasis image highlighting the fifth screen 570, which cooperates with the fourth operation region R4; here again, a colored background image differing from the normal color is displayed over the entire fifth screen 570. Although not shown, when the user's finger or the like contacts the first operation region R1 or the third operation region R3, an emphasis image highlighting the second screen 540 or the fourth screen 560 cooperating with that operation region is displayed. The emphasis image lets the user know, at the moment of contact with any of the operation regions R1 to R4, whether the desired operation region has been touched, improving operability.
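A minimal sketch of this contact-to-emphasis linkage follows. The region-to-screen table mirrors the pairings described above, but the Renderer stub and its two methods are assumptions made for the illustration; the patent specifies no software interface.

```python
REGION_TO_SCREEN = {"R1": "540", "R2": "550", "R3": "560", "R4": "570"}

class Renderer:
    """Minimal stand-in for the projector-driven display pipeline."""

    def fill_background(self, screen_id, style):
        print(f"screen {screen_id}: background -> {style}")

    def show_menu(self, screen_id):
        print(f"screen {screen_id}: menu image drawn along the stepped portion")

def on_region_touched(region_id, renderer):
    """Emphasize the screen linked with the touched operation region."""
    screen_id = REGION_TO_SCREEN.get(region_id)
    if screen_id is None:
        return  # contact outside any defined operation region
    renderer.fill_background(screen_id, "colored-highlight")  # emphasis image
    renderer.show_menu(screen_id)

on_region_touched("R2", Renderer())  # touching R2 highlights the third screen 550
```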
 The operation device 1000 of this embodiment is thus characterized in that, of the plurality of operation regions R1 to R4, the second operation region R2 and the fourth operation region R4 have formed on their surfaces the first and second design portions 130 and 230, imitating part of the shapes of the third screen 550 and the fifth screen 570, respectively, among the plurality of screens 530 to 570.
 Accordingly, even a user unfamiliar with the operation device 1000, such as on first use, can grasp at a glance the cooperative relationship between the third and fifth screens 550 and 570 and the second and fourth operation regions R2 and R4.
 The control unit 300 is further characterized in that, based on the contact position of the detected object detected by the sensor sheets 112 and 212, it causes the display device 500 to display an emphasis image highlighting whichever of the screens 540 to 570 cooperates with the operation region R1 to R4 contacted by the detected object.
 The user can thereby tell, at the moment of contact, whether the desired operation region R1 to R4 has been touched, improving operability.
 FIGS. 15 and 16 show an example of yet another embodiment of the present invention.
 In this embodiment, the display device 500 displays icon images IC1 to IC4 and IC5 to IC8 of predetermined shapes as the display images on the third screen 550 and the fifth screen 570, and icon operation portions 111g to 111j and 211g to 211j, three-dimensional in shape and substantially similar to the icon images IC1 to IC4 and IC5 to IC8, are formed in the second operation region R2 and the fourth operation region R4 of the first and second input devices 100 and 200, respectively.
 As shown in FIG. 15, the icon images IC1 to IC4 displayed on the third screen 550 are circular, each representing content displayable on the display device 500, and are arranged along the stepped portion 582 so as to constitute, as a whole, a menu image for selecting the content to be displayed. The icon operation portions 111g to 111j formed in the second operation region R2 have convex shapes substantially similar to the icon images IC1 to IC4, respectively, and are formed along the stepped portion 111c so as to have the same positional relationship as the icon images IC1 to IC4. Each icon image IC1 to IC4 is switched between a selected and a non-selected state by a touch operation on the corresponding icon operation portion 111g to 111j, cooperating with it one-to-one. That is, as shown in FIG. 15, when the user performs a touch operation OP7 on (touches) the icon operation portion 111j, the control unit 300 transmits an image signal to the projector 510 and places the icon image IC4, displayed on the third screen 550 and linked to the icon operation portion 111j, in the selected state; in this embodiment, the selected icon image IC4 is displayed with negative-positive inversion. When a pressing operation is performed on the second operation region R2 while the icon image IC4 is selected, the content corresponding to the icon image IC4 is displayed on the second screen 540 as the display image. Although not shown, when any of the icon operation portions 111g to 111i is touched, the corresponding icon image IC1 to IC3 is selected. By touching the icon operation portions 111g to 111j with a fingertip and feeling through touch the shapes of the icon images IC1 to IC4 grasped through sight, the user can operate with the sensation of actually touching the icon images IC1 to IC4. The icon operation portions 111g to 111j may instead have concave shapes substantially similar to the icon images IC1 to IC4.
 As shown in FIG. 16, the icon images IC5 to IC8 displayed on the fifth screen 570 are circular, each representing a function of the in-vehicle electronic device 20, and are arranged along the stepped portion 584 so as to constitute, as a whole, a menu image for selecting a function of the in-vehicle electronic device 20. The icon operation portions 211g to 211j formed in the fourth operation region R4 have convex shapes substantially similar to the icon images IC5 to IC8, respectively, and are formed along the stepped portion 211c so as to have the same positional relationship as the icon images IC5 to IC8. Each icon image IC5 to IC8 is switched between a selected and a non-selected state by a touch operation on the corresponding icon operation portion 211g to 211j, cooperating with it one-to-one. That is, as shown in FIG. 16, when the user performs a touch operation OP8 on (touches) the icon operation portion 211j, the control unit 300 transmits an image signal to the projector 510 and places the icon image IC8, displayed on the fifth screen 570 and linked to the icon operation portion 211j, in the selected state; in this embodiment, the selected icon image IC8 is displayed with negative-positive inversion. When a pressing operation is performed on the fourth operation region R4 while the icon image IC8 is selected, the function corresponding to the icon image IC8 is executed. Although not shown, when any of the icon operation portions 211g to 211i is touched, the corresponding icon image IC5 to IC7 is selected. By touching the icon operation portions 211g to 211j with a fingertip and feeling through touch the shapes of the icon images IC5 to IC8 grasped through sight, the user can operate with the sensation of actually touching the icon images IC5 to IC8. The icon operation portions 211g to 211j may instead have concave shapes substantially similar to the icon images IC5 to IC8.
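The one-to-one icon linkage and the select-then-press flow, which apply identically to both input devices, can be sketched as below. The IconMenu class, its callback wiring, and the printed actions are assumptions for the example; only the selection behavior itself follows the description above.

```python
class IconMenu:
    """Menu of icon images, each linked one-to-one with a raised icon operation portion."""

    def __init__(self, icon_ids, actions):
        self.icon_ids = icon_ids  # e.g. ["IC5", "IC6", "IC7", "IC8"]
        self.actions = actions    # icon id -> callable executed on press
        self.selected = None

    def on_touch(self, icon_id):
        """Touching an icon operation portion toggles its icon's selected state."""
        self.selected = None if self.selected == icon_id else icon_id
        # A real implementation would redraw the selected icon negative-positive inverted here.

    def on_press(self):
        """Pressing the operation region executes the selected icon's function."""
        if self.selected is not None:
            self.actions[self.selected]()

actions = {f"IC{i}": (lambda i=i: print(f"function for IC{i} executed")) for i in range(5, 9)}
menu = IconMenu(["IC5", "IC6", "IC7", "IC8"], actions)
menu.on_touch("IC8")  # touch operation OP8 selects IC8
menu.on_press()       # pressing the fourth operation region R4 executes it
```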
 The operation device 1000 of this embodiment is thus one in which the display device 500 displays icon images IC1 to IC4 and IC5 to IC8 of predetermined shapes as the display images on the third screen 550 and the fifth screen 570 among the plurality of screens 530 to 570, and in which, of the plurality of operation regions R1 to R4, the second operation region R2 and the fourth operation region R4 have formed in them the icon operation portions 111g to 111j and 211g to 211j, three-dimensional in shape and substantially similar to the icon images IC1 to IC4 and IC5 to IC8.
 The user can thereby operate the icon images IC1 to IC4 and IC5 to IC8 with the sensation of touching them directly, enabling intuitive operation.
 The present invention is applicable to an operating device mounted on a vehicle.
DESCRIPTION OF SYMBOLS
1 Vehicle
 10 Steering
 11 Main body portion
 12 Steering wheel
 1000 Operation device
  100 First input device (input means)
  110 Contact sensor
  111 Surface cover
  111a Operation surface
  111b Peripheral portion
  111c Stepped portion (first three-dimensional identification portion)
  111d Flat portion
  111e Recessed portion
  111f Raised portion
  111g-111j Icon operation portions
  112 Sensor sheet (sensor unit)
  200 Second input device (input means)
  210 Contact sensor
  211 Surface cover
  211a Operation surface
  211b Peripheral portion
  211c Stepped portion (first three-dimensional identification portion)
  211d Flat portion
  211e Recessed portion
  211f Raised portion
  212 Sensor sheet (sensor unit)
  300 Control unit (control means)
  400 Storage unit
  500 Display device (display means)
  510 Projector (projection type display)
  520 Mirror
  530 First screen (display area)
  540 Second screen (display area)
  550 Third screen (display area)
  560 Fourth screen (display area)
  570 Fifth screen (display area)
  580 Screen holder
  581 Stepped portion
  582 Stepped portion (second three-dimensional identification portion)
  583 Stepped portion
  584 Stepped portion (second three-dimensional identification portion)
  L Display light
  R1 First operation region (operation region)
  R2 Second operation region (operation region)
  R3 Third operation region (operation region)
  R4 Fourth operation region (operation region)
 20 In-vehicle electronic device
OP1 Gesture operation
OP2 Touch operation
OP3 Gesture operation
OP4 Gesture operation
OP5 Touch operation
OP6 Gesture operation
OP7 Touch operation
OP8 Touch operation

Claims (10)

  1. An operation device comprising:
    input means comprising an operation surface having a plurality of operation regions, and a sensor unit capable of detecting a contact position of a detected object on the operation surface;
    display means comprising a plurality of display areas; and
    control means for controlling a predetermined electronic device based on the contact position of the detected object detected by the sensor unit, and for switching a display image displayed in at least one of the plurality of display areas,
    wherein the plurality of display areas are arranged three-dimensionally,
    the plurality of operation regions are formed three-dimensionally, and
    at least one of the plurality of operation regions has a shape substantially similar to the shape of one of the plurality of display areas.
  2. An operation device comprising:
    input means comprising an operation surface having a plurality of operation regions, and a sensor unit capable of detecting a contact position of a detected object on the operation surface;
    display means comprising a plurality of display areas; and
    control means for controlling a predetermined electronic device based on the contact position of the detected object detected by the sensor unit, and for switching a display image displayed in at least one of the plurality of display areas,
    wherein the plurality of display areas are arranged three-dimensionally,
    the plurality of operation regions are formed three-dimensionally, and
    a first three-dimensional identification portion formed on the operation surface to divide the plurality of operation regions has a shape substantially similar to the shape of a second three-dimensional identification portion dividing the plurality of display areas.
  3. The operation device according to claim 1, wherein:
    the input means is mounted on a steering wheel;
    the plurality of display areas are arranged above the input means; and
    at least one of the plurality of operation regions has a shape substantially similar to the shape of the display area located above it.
  4. The operation device according to claim 2, wherein:
    the input means is mounted on a steering wheel;
    the plurality of display areas are arranged above the input means; and
    the first three-dimensional identification portion has a shape substantially similar to the shape of the second three-dimensional identification portion located above it.
  5. The operation device according to any one of claims 1 to 4, wherein the plurality of operation regions are formed so that their surfaces differ from one another in tactile texture.
  6. The operation device according to claim 1 or claim 2, wherein at least one of the plurality of operation regions has, formed on its surface, a design portion imitating part of the shape of one of the plurality of display areas.
  7. The operation device according to claim 1 or claim 2, wherein the control means causes the display means to display, based on the contact position of the detected object detected by the sensor unit, an emphasis image highlighting the display area associated with the operation region contacted by the detected object.
  8. The operation device according to claim 1 or claim 2, wherein:
    the display means displays an icon image of a predetermined shape as the display image in at least one of the plurality of display areas; and
    at least one of the plurality of operation regions has, formed in it, an icon operation portion of a three-dimensional shape substantially similar to the icon image.
  9. The operation device according to any one of claims 1 to 8, wherein the display means comprises a projection type display that emits display light, and a plurality of screens that constitute the plurality of display areas and display the display image when the display light is projected onto them.
  10. The operation device according to any one of claims 1 to 9, wherein the control means switches the display image in a direction and at a speed substantially matching a trajectory of the detected object detected by the sensor unit.
PCT/JP2013/068693 2012-08-02 2013-07-09 Operating device WO2014021063A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
JP2012172279 2012-08-02
JP2012-172279 2012-08-02
JP2012269982A JP2014043232A (en) 2012-08-02 2012-12-11 Operation device
JP2012-269982 2012-12-11

Publications (1)

Publication Number Publication Date
WO2014021063A1 (en)

Family

ID=50027748

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2013/068693 WO2014021063A1 (en) 2012-08-02 2013-07-09 Operating device

Country Status (2)

Country Link
JP (1) JP2014043232A (en)
WO (1) WO2014021063A1 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6329932B2 (en) 2015-11-04 2018-05-23 矢崎総業株式会社 Vehicle operation system
JP2019206297A (en) * 2018-05-30 2019-12-05 トヨタ自動車株式会社 Interior structure
JP2021075157A (en) 2019-11-08 2021-05-20 トヨタ自動車株式会社 Input device for vehicle
WO2022230591A1 (en) * 2021-04-28 2022-11-03 テイ・エス テック株式会社 Input device

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003312373A (en) * 2002-04-24 2003-11-06 Toyota Motor Corp Input device
JP2006011237A (en) * 2004-06-29 2006-01-12 Denso Corp Display system for vehicle
JP2006117009A (en) * 2004-10-19 2006-05-11 Tokai Rika Co Ltd Input and display device for vehicle
JP2007106353A (en) * 2005-10-17 2007-04-26 Denso Corp Vehicular information display device, and vehicular information display system
JP2012059085A (en) * 2010-09-10 2012-03-22 Diamond Electric Mfg Co Ltd On-vehicle information apparatus
JP2012093802A (en) * 2010-10-22 2012-05-17 Aisin Aw Co Ltd Image display device, image display method, and program

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3222471A4 (en) * 2014-11-19 2017-11-29 Panasonic Intellectual Property Management Co., Ltd. Input device and input method therefor
JP2017039392A (en) * 2015-08-20 2017-02-23 マツダ株式会社 Vehicle display
JP2017134508A (en) * 2016-01-26 2017-08-03 豊田合成株式会社 Touch sensor device
US10759461B2 (en) 2019-01-31 2020-09-01 Toyota Motor Engineering & Manufacturing North America, Inc. Multi-function vehicle input apparatuses with rotatable dials for vehicle systems control and methods incorporating the same
FR3121640A1 (en) * 2021-04-09 2022-10-14 Faurecia Interieur Industrie Vehicle control panel and method of making
US11635831B2 (en) 2021-04-09 2023-04-25 Faurecia Interieur Industrie Vehicle control panel and production method
US20230091049A1 (en) * 2021-09-17 2023-03-23 Toyota Jidosha Kabushiki Kaisha Vehicular display control device, vehicular display system, vehicle, display method, and non-transitory computer-readable medium storing program
US11820226B2 (en) * 2021-09-17 2023-11-21 Toyota Jidosha Kabushiki Kaisha Vehicular display control device, vehicular display system, vehicle, display method, and non-transitory computer-readable medium storing program

Also Published As

Publication number Publication date
JP2014043232A (en) 2014-03-13

Similar Documents

Publication Publication Date Title
WO2014021063A1 (en) Operating device
EP2010411B1 (en) Operating device
EP2870528A1 (en) Light-based touch controls on a steering wheel and dashboard
JP2014229014A (en) Touch panel input operation device
CN106095150B (en) Touch input device and vehicle having the same
US10166868B2 (en) Vehicle-mounted equipment operation support system
KR20170029180A (en) Vehicle, and control method for the same
JP2012176631A (en) Control apparatus
JP2014142777A (en) Touch panel input operation device
JP5954145B2 (en) Input device
US20220100276A1 (en) Method for generating a haptic feedback for an interface, and associated interface
JP2012208762A (en) Touch panel input operation device
JP2013033309A (en) Touch panel input operation device
JP2018195134A (en) On-vehicle information processing system
EP2988194B1 (en) Operating device for vehicle
JP2020204868A (en) Display device
KR102263593B1 (en) Vehicle, and control method for the same
JP2017208185A (en) Input device
JP6329932B2 (en) Vehicle operation system
JP5640816B2 (en) Input device
JP2014029576A (en) Touch panel input operation device
JP6911821B2 (en) Input device
JP2020093591A (en) Vehicle operation device
JP2014127017A (en) Touch panel input operation device
JP2013232081A (en) Touch panel input operation device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13826153

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 13826153

Country of ref document: EP

Kind code of ref document: A1