WO2015004848A1 - Gesture input device - Google Patents

Gesture input device

Info

Publication number
WO2015004848A1
Authority
WO
WIPO (PCT)
Prior art keywords
screen
display
target screen
input device
gesture input
Prior art date
Application number
PCT/JP2014/003184
Other languages
French (fr)
Japanese (ja)
Inventor
泰徳 鈴木
Original Assignee
株式会社デンソー (DENSO Corporation)
Priority date
Filing date
Publication date
Application filed by 株式会社デンソー (DENSO Corporation)
Publication of WO2015004848A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0487: Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488: Interaction techniques based on graphical user interfaces [GUI] using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F 3/04886: Interaction techniques using a touch-screen or digitiser by partitioning the display area of the touch-screen or the surface of the digitising tablet into independently controllable areas, e.g. virtual keyboards or menus
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2203/00: Indexing scheme relating to G06F 3/00 - G06F 3/048
    • G06F 2203/048: Indexing scheme relating to G06F 3/048
    • G06F 2203/04803: Split screen, i.e. subdividing the display area or the window area into separate subareas

Definitions

  • The present disclosure relates to a gesture input device, such as a multi-touch panel, that changes a display image on a display by an input operation using a plurality of indicators.
  • A touch panel capable of various input operations using indicators such as a plurality of fingers is known. Such a touch panel is generally called a multi-touch panel.
  • Known input operations with a plurality of indicators include pinch-out, which enlarges the image on the display by widening the interval between two indicators placed on the touch panel, and pinch-in, which reduces the image on the display by narrowing that interval.
  • A conventional multi-touch panel, as disclosed in Patent Document 1, does not assume a configuration in which the display is divided into a plurality of screens. When such a configuration is adopted, input operations performed by a plurality of indicators across the boundary between the divided screens may therefore not be supported.
  • For example, when a pinch-out causes two indicators that were on the same screen to end up on different screens, the multi-touch panel may not be able to determine which screen is being operated.
  • Likewise, when the two indicators are on different screens at the start of a pinch-in, the multi-touch panel may not be able to determine which screen is being operated.
  • An object of the present disclosure is to provide a gesture input device that changes a display image on a display by an input operation using a plurality of indicators and that can change the display image on the screen desired by the user even when the input operation is performed by a plurality of indicators across a boundary between the screens obtained by dividing the display into a plurality of screens.
  • A gesture input device according to the present disclosure includes a display that displays an image and changes a display image on the display by an input operation using a plurality of indicators.
  • The device includes an operation position detection unit that detects the operation positions of the indicators with respect to the display, a screen division processing unit that divides the display into a plurality of screens, and a confirmation unit that confirms, from among the plurality of divided screens, one operation target screen to be operated, based on the operation positions of the plurality of indicators detected by the operation position detection unit.
  • With this configuration, even when the display is divided into a plurality of screens, one operation target screen can be confirmed based on the detected operation positions of the plurality of indicators. Even when an input operation is performed with a plurality of indicators across the boundary between the divided screens, the display image can therefore be changed only on the confirmed operation target screen, that is, only on the screen desired by the user.
  • FIG. 1 is a block diagram showing a schematic configuration of a multi-touch panel.
  • FIG. 2 is a flowchart showing an example of map image conversion-related processing in the Dual Map in the control unit.
  • FIG. 3A is a schematic diagram for explaining an example of determining an operation target screen in the first embodiment.
  • FIG. 3B is a schematic diagram for explaining an example of determining the operation target screen in the first embodiment.
  • FIG. 4A is a schematic diagram for explaining an example of determining an operation target screen in Modification 1.
  • FIG. 4B is a schematic diagram for explaining an example of determining an operation target screen in Modification 1.
  • FIG. 5 is a schematic diagram for explaining an example of specific processing in the control unit when the operation position is located on the non-target screen during a pinch-out.
  • FIG. 6 is a schematic diagram for explaining an example of specific processing in the control unit when the operation position is located on the non-target screen at the start of a pinch-in.
  • FIG. 1 is a block diagram illustrating a schematic configuration of a multi-touch panel 100 to which the present disclosure is applied.
  • The multi-touch panel 100 shown in FIG. 1 includes a display 1, an operation position detection unit 2, and a control unit 3.
  • The multi-touch panel 100 corresponds to the gesture input device according to the present disclosure.
  • The multi-touch panel 100 may be mounted on a vehicle, or may be part of a tablet PC, a desktop PC, a mobile phone, or the like.
  • The multi-touch panel 100 is a touch panel that can simultaneously detect a plurality of operation positions on the screen of the display 1.
  • The display 1 displays images corresponding to the various application programs (hereinafter, applications) executed by the control unit 3 and is capable of, for example, full-color display.
  • As the display 1, for example, a liquid crystal display or an organic EL display can be used.
  • The operation position detection unit 2 uses a touch sensor integrated with the display 1, detects at which position on the screen of the display 1 a touch operation is performed, and inputs the coordinates of the operation position to the control unit 3.
  • The touch sensor may be of a capacitance type, a resistive film type, or another type.
  • The control unit 3 is configured as an ordinary computer and includes, for example, a well-known CPU, ROM, EEPROM, RAM, I/O, and a bus line (none of which is shown) connecting these components.
  • The control unit 3 executes processing in various applications based on information input from the operation position detection unit 2 and the like. As shown in FIG. 1, the control unit 3 includes a screen division processing unit 31, a display control unit 32, an operation determination unit 33, a confirmation unit 34, and an operation restriction unit 35 as functional blocks. The applications are stored in a memory such as the ROM.
  • Embodiment 1 is described taking as an example an application (hereinafter, Dual Map) that divides the screen of one display 1 into left and right halves and displays a map on each of the divided screens.
  • The screens obtained by this division are hereinafter called the left divided screen and the right divided screen.
  • The screen division processing unit 31 performs this left-right division of the single display 1 in Dual Map; for example, the drawing area of the display 1 is reset into a rectangular area for the left divided screen and a rectangular area for the right divided screen.
  • As an example, in the pixel coordinates of the display 1, the upper-left corner serves as the upper-left corner of the rectangular area for the left divided screen and the bottom-row center as that area's lower-right corner; likewise, the lower-right corner of the display serves as the lower-right corner of the rectangular area for the right divided screen and the top-row center as that area's upper-left corner.
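As a rough illustration of this split, here is a minimal Python sketch. The pixel dimensions and the `Rect` helper are assumptions for illustration; the patent does not prescribe a concrete resolution or data structure.

```python
from dataclasses import dataclass

@dataclass
class Rect:
    left: int
    top: int
    right: int
    bottom: int

def split_left_right(width, height):
    """Reset the drawing area of the display into two split-screen rectangles."""
    mid = width // 2
    # Left screen: display's upper-left corner down to the bottom-row center.
    # Right screen: top-row center down to the display's lower-right corner.
    return Rect(0, 0, mid, height), Rect(mid, 0, width, height)

left_screen, right_screen = split_left_right(1280, 480)
```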
  • The display control unit 32 displays a map on each of the left and right divided screens produced by the screen division processing unit 31. The operation determination unit 33, the confirmation unit 34, and the operation restriction unit 35 are described in detail later.
  • In Dual Map, the scale of the map on a screen is changed by operation inputs such as pinch-out and pinch-in with two fingers.
  • Pinch-out is an operation input that enlarges the map on the screen by widening the interval between two fingers placed on the screen of the display 1.
  • Pinch-in is an operation input that reduces the map on the screen by narrowing the interval between two fingers placed on the display 1.
  • Embodiment 1 shows a configuration in which fingers are used for operation input, but the configuration is not necessarily limited to this; for example, a pen-shaped artifact other than a finger may be used as an indicator for performing operation input.
  • Processing in the control unit 3 related to image conversion in Dual Map, such as changing the map scale (hereinafter, map image conversion related processing), is described using the flowchart of FIG. 2, which is started when Dual Map is activated. For simplicity, pinch-out and pinch-in are used as the multi-touch operation examples; an overall sketch of the flow follows.
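Before the step-by-step walkthrough, the following condensed Python sketch renders the S1-S11 flow of FIG. 2. The `panel` methods are hypothetical placeholders standing in for the units of the control unit 3, not an API from the patent, and a few of the flowchart's back-edges are simplified.

```python
def map_image_conversion(panel):
    while panel.dual_map_active():                 # S11: loop until Dual Map ends
        if not panel.first_finger_detected():      # S1
            continue
        if panel.single_touch_detected():          # S2
            panel.do_single_touch()                # S3: tap or slide handling
            continue
        if not panel.second_finger_detected():     # S4: otherwise keep watching
            continue
        target = panel.confirm_target_screen()     # S5: confirmed once per touch
        while not panel.touch_off():               # S10
            if panel.pinch_detected():             # S6
                ratio = panel.conversion_ratio()   # S7: >1 pinch-out, <1 pinch-in
                panel.scale_map(target, ratio)     # S8 / S9: enlarge or reduce
```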
  • In step S1, the operation determination unit 33 determines whether a first finger touch operation has been detected.
  • As an example, when the operation determination unit 33 obtains the coordinates of only one operation position from the operation position detection unit 2, it may determine that the first finger touch operation has been detected.
  • If the first finger touch operation is detected (YES in step S1), the process proceeds to step S2; otherwise (NO in step S1), the process proceeds to step S11.
  • In step S2, the operation determination unit 33 determines whether a single touch operation has been detected.
  • A single touch operation is a touch operation performed with one finger.
  • Examples of single touch operations include a tap, which touches and releases a point on the screen, and a slide, which shifts the position while one finger keeps touching the screen.
  • A tap may be determined from the operation position detection unit 2 no longer providing the coordinates of the single operation position; a slide may be determined from a change in the coordinates of the operation position obtained from the operation position detection unit 2. A touch operation performed simultaneously with a plurality of fingers, such as a pinch operation, is called a multi-touch operation.
  • If a single touch operation is detected (YES in step S2), the process proceeds to step S3; otherwise (NO in step S2), the process proceeds to step S4.
  • In step S3, the display control unit 32 performs a single touch process and then proceeds to step S11.
  • In the single touch process, the image is changed according to the single touch operation.
  • As an example, when the single touch operation is a slide, the map on whichever of the left and right divided screens contains the coordinates where the one-finger touch was detected is translated by a direction and amount corresponding to the direction and amount of the slide, as sketched below.
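A minimal sketch of that slide handling, assuming each divided screen keeps a map offset; `screen_of` and the offset table are illustrative names, not from the patent.

```python
def handle_slide(screen_of, map_offsets, start, current):
    """Pan the map on the screen containing the slide's starting position."""
    dx = current[0] - start[0]
    dy = current[1] - start[1]
    side = screen_of(start)                     # 'L' or 'R'
    ox, oy = map_offsets[side]
    map_offsets[side] = (ox + dx, oy + dy)      # translate by the slide's direction and amount

# Example with a vertical boundary at x = 640:
offsets = {'L': (0, 0), 'R': (0, 0)}
handle_slide(lambda p: 'L' if p[0] < 640 else 'R', offsets, (100, 100), (130, 80))
assert offsets['L'] == (30, -20)
```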
  • In step S4, the operation determination unit 33 determines whether a second finger touch operation has been detected. As an example, obtaining the coordinates of two operation positions from the operation position detection unit 2 may be treated as detecting the second finger. If the second finger is detected (YES in step S4), the process proceeds to step S5; otherwise (NO in step S4), the process returns to step S2 and the flow repeats.
  • When the operation determination unit 33 detects the second finger's touch operation, it stores the detected operation positions of the two fingers in a volatile memory such as the RAM as the initial positions of the two fingers.
  • In step S5, the confirmation unit 34 performs a confirmation process and proceeds to step S6.
  • In the confirmation process, one operation target screen to be operated is confirmed from the left divided screen and the right divided screen based on the initial positions of the two fingers detected by the operation position detection unit 2.
  • As one example, the operation target screen is confirmed as follows: among the detected initial positions of the fingers, the one farther from the boundary between the left and right divided screens is found, and the screen on which that farther initial position lies is confirmed as the operation target screen.
  • To find the initial position farther from the boundary, a perpendicular is dropped from the coordinates of each finger's initial position to the boundary line between the left and right divided screens, and the coordinates of the point where the perpendicular and the boundary line intersect (hereinafter, intersection coordinates) are obtained. The straight-line distance between the intersection coordinates and each initial position is then calculated, and the initial position with the longer distance is taken as the one farther from the boundary.
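Because the boundary between the left and right divided screens is a vertical line, the perpendicular-distance rule above reduces to comparing horizontal distances. A minimal Python sketch, assuming a boundary at `x = boundary_x` and tuple coordinates:

```python
def confirm_target_screen(p1, p2, boundary_x):
    """Confirm 'L' or 'R': the screen holding the initial position farther
    from the boundary (the perpendicular distance is |x - boundary_x|)."""
    d1 = abs(p1[0] - boundary_x)
    d2 = abs(p2[0] - boundary_x)
    farther = p1 if d1 >= d2 else p2
    return 'L' if farther[0] < boundary_x else 'R'

# FIG. 3B-style case: F1 on the left screen, F2 farther away on the right.
assert confirm_target_screen((300, 200), (520, 210), boundary_x=320) == 'R'
```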
  • An example of confirming the operation target screen is shown using FIG. 3A and FIG. 3B. In these figures, L is the left divided screen, R is the right divided screen, Bo is the boundary line between the two, F1 is the operation position of the first finger, and F2 is the operation position of the second finger (the same applies to FIG. 4A through FIG. 6).
  • In both FIG. 3A and FIG. 3B, the operation positions F1 and F2 are the initial positions.
  • As in FIG. 3A, when both F1 and F2 are on the right divided screen R and F2 is farther from the boundary line Bo, the right divided screen R is confirmed as the operation target screen.
  • As in FIG. 3B, F1 is on the left divided screen L while F2 is on the right divided screen R; since F2 is farther from the boundary line Bo, the right divided screen R is confirmed as the operation target screen.
  • When the user intends to operate the right divided screen, the two operation positions F1 and F2 should lie toward the right overall, so the initial position farther from the boundary lies on the right divided screen. Even when F1 is on the left divided screen and F2 is on the right divided screen, the right divided screen is therefore confirmed as the operation target screen, as the user desires.
  • Alternatively, the initial position farther from the boundary may be found by obtaining the coordinates of the point where the line segment connecting the two fingers' initial positions intersects the boundary line between the left and right divided screens.
  • In that case, the straight-line distance between the obtained coordinates and each initial position is calculated, and the initial position with the longer distance is taken as the one farther from the boundary.
  • The method of confirming the operation target screen is not limited to the above; the screen corresponding to the center position between the fingers' initial positions may be confirmed as the operation target screen (hereinafter, Modification 1).
  • The center position may also be called the center-of-gravity position.
  • As in FIG. 4A, when both F1 and F2 are on the right divided screen R and the center position C1 between them is also on the right divided screen R, the right divided screen R is confirmed as the operation target screen. Thus, when the user places both fingers on the right divided screen intending to operate it, the right divided screen is confirmed as desired.
  • As in FIG. 4B, when F1 is on the left divided screen L while F2 is on the right divided screen R, and the center position C2 between them is on the right divided screen R, the right divided screen R is confirmed as the operation target screen.
  • When the user intends to operate the right divided screen, the two operation positions F1 and F2 should lie toward the right overall, so the center position between them lies on the right divided screen; the right divided screen is therefore confirmed as desired even when the two fingers straddle the boundary.
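A minimal sketch of the Modification 1 rule under the same assumptions (vertical boundary at `x = boundary_x`); only the midpoint's x-coordinate matters here:

```python
def confirm_by_center(p1, p2, boundary_x):
    """Modification 1: confirm the screen containing the center position
    (center-of-gravity position) of the two initial positions."""
    center_x = (p1[0] + p2[0]) / 2
    return 'L' if center_x < boundary_x else 'R'

# FIG. 4B-style case: fingers straddle the boundary, midpoint on the right.
assert confirm_by_center((300, 200), (520, 210), boundary_x=320) == 'R'
```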
  • The method of confirming the operation target screen in the confirmation process is not limited to the above either; the screen containing the operation position at which the first finger's touch operation was detected may be confirmed as the operation target screen (hereinafter, Modification 2).
  • Once the operation target screen has been confirmed by the confirmation process, the confirmation process is not performed again until touch-off, for example while the pinch operation continues.
  • Touch-off means that, after a touch operation has once started, both of the two fingers leave the screen of the display 1.
  • In step S6, the operation determination unit 33 determines whether a pinch operation, that is, a pinch-out or a pinch-in, has been detected. Detection may be based on the coordinates of the operation positions obtained from the operation position detection unit 2 having changed from the initial positions described above. If a pinch operation is detected (YES in step S6), the process proceeds to step S7; otherwise (NO in step S6), step S6 repeats.
  • In step S7, the process branches: if the pinch operation is a pinch-out, it proceeds to step S8; if the pinch operation is a pinch-in, it proceeds to step S9.
  • As an example, the operation determination unit 33 distinguishes pinch-out from pinch-in as follows.
  • First, the distance between the initial positions is calculated from the coordinates of the fingers' initial positions, and the distance between the current operation positions (hereinafter, current positions) is calculated from their coordinates.
  • Next, the ratio of the distance between the current positions to the distance between the initial positions (hereinafter, conversion ratio) is calculated.
  • The operation determination unit 33 judges a conversion ratio greater than 1 to be a pinch-out and a conversion ratio less than 1 to be a pinch-in.
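A minimal sketch of the conversion ratio and the pinch classification it drives; the coordinates are illustrative tuples:

```python
import math

def conversion_ratio(init1, init2, cur1, cur2):
    """Ratio of the current-position distance to the initial-position distance."""
    return math.dist(cur1, cur2) / math.dist(init1, init2)

def classify_pinch(ratio):
    if ratio > 1:
        return 'pinch-out'   # S8: enlarge the map by this ratio
    if ratio < 1:
        return 'pinch-in'    # S9: reduce the map by this ratio
    return 'unchanged'

r = conversion_ratio((300, 200), (400, 200), (250, 200), (450, 200))
assert r == 2.0 and classify_pinch(r) == 'pinch-out'
```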
  • In step S8, the display control unit 32 enlarges the map on the screen confirmed as the operation target screen according to the conversion ratio described above, and proceeds to step S10.
  • In step S9, the display control unit 32 reduces the map on the screen confirmed as the operation target screen according to the conversion ratio described above, and proceeds to step S10.
  • While the pinch operation continues, that is, until touch-off occurs, the operation restriction unit 35 keeps operations on the operation target screen valid and invalidates operations on the non-target screen.
  • The non-target screen is the screen, of the left and right divided screens, that is not the operation target screen.
  • For example, when the right divided screen is the operation target screen, the operation restriction unit 35 restricts image conversion, such as changing the scale of the map, on the left divided screen.
  • In that case, the operation restriction unit 35 uses an operation position detected on the left divided screen for image conversion on the right divided screen, not for image conversion on the left divided screen.
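A minimal sketch of that restriction: both current positions feed the target screen's conversion ratio, while the non-target screen's map is left untouched. The per-screen scale table and the one-shot update are assumptions for illustration.

```python
import math

def apply_pinch_to_target(target, initial, current, scale_by_screen):
    """Scale only the confirmed target screen's map, even if a finger
    currently rests on the non-target screen."""
    ratio = math.dist(current[0], current[1]) / math.dist(initial[0], initial[1])
    scale_by_screen[target] *= ratio   # applied once per gesture in this sketch
    return ratio

scales = {'L': 1.0, 'R': 1.0}
# FIG. 5-style case: one finger has crossed onto the left screen mid-pinch-out,
# but both current positions still drive the right screen's enlargement.
apply_pinch_to_target('R', ((350, 200), (450, 200)), ((250, 200), (550, 200)), scales)
assert scales == {'L': 1.0, 'R': 3.0}
```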
  • Consider the case where the confirmed operation target screen is the right divided screen: FIG. 5 shows a pinch-out example and FIG. 6 a pinch-in example.
  • In FIG. 5, both operation positions F1 and F2 were on the right divided screen at the start of the pinch-out, but F1 moves onto the left divided screen during the pinch-out.
  • In that case, the operation restriction unit 35 allows the map of the right divided screen to be enlarged but restricts image conversion of the map of the left divided screen.
  • Specifically, the conversion ratio of the distance between the current position on the left divided screen and the current position on the right divided screen to the distance between the initial positions is calculated, and the map on the right divided screen is enlarged according to that ratio.
  • In FIG. 6, the operation restriction unit 35 likewise causes the map of the right divided screen to be reduced while the map of the left divided screen is not converted.
  • Specifically, the conversion ratio of the distance between the current positions to the distance between the initial position on the left divided screen and the initial position on the right divided screen is calculated, and the map on the right divided screen is reduced according to that ratio.
  • Accordingly, even when an operation position crosses the boundary, the gesture input device performs image conversion, such as enlarging or reducing the map, at the ratio the user intends, providing more comfortable operability for the user.
  • The operation target screen may be displayed so as to be distinguishable from the non-target screen.
  • For example, a mark indicating the operation target screen may be shown on it, or its frame may be highlighted.
  • Alternatively, the operation target screen and the non-target screen may be made distinguishable by lowering the luminance of the non-target screen relative to the operation target screen.
  • In step S10, the operation determination unit 33 determines whether touch-off has occurred.
  • For example, touch-off may be determined when the coordinates of no operation position can be obtained from the operation position detection unit 2. If touch-off has occurred (YES in step S10), the process proceeds to step S11; otherwise (NO in step S10), the process returns to step S7 and the flow repeats.
  • In step S11, if it is the end timing of the map image conversion related processing (YES in step S11), the flow ends; otherwise (NO in step S11), the process returns to step S1 and the flow repeats.
  • One example of the end timing of the map image conversion related processing is when Dual Map ends.
  • With the configuration described above, even when the display 1 is divided into a plurality of screens, the screen on which the user intends to perform an operation input can be confirmed as the operation target screen, and the display can be changed only on that confirmed screen according to the user's input.
  • Consequently, even when the input operation is performed with a plurality of indicators across the boundary between the screens obtained by dividing the display 1 into a plurality of screens, the display image on the screen desired by the user can be changed.
  • The pinch operation has been described as the example of a multi-touch operation, but the present disclosure is not limited to it.
  • The present disclosure can be applied to any multi-touch operation other than a pinch, as long as it is performed with a plurality of indicators and an operation target screen needs to be specified.
  • For example, it can also be applied to an operation that rotates an image by keeping one of the two pointers' operation positions fixed as an axis and rotating the other.
  • Likewise, although a multi-touch panel is shown as the gesture input device of the present disclosure, the configuration is not necessarily limited to this.
  • The disclosure is applicable not only to contact-type gesture recognition that detects touch operations on a screen but also to devices performing non-contact gesture recognition that does not require touching the screen.
  • Examples of non-contact gesture recognition include methods that use changes in capacitance caused by the proximity of a human body and methods that use finger movement imaged by a camera.
  • Each “unit” in the present embodiment categorizes the inside of the control unit 3 by function for convenience; it does not mean that the inside of the control unit 3 is physically divided into parts corresponding to each “unit”. Accordingly, each “unit” can be realized as software, as part of a computer program, or as hardware using an IC chip or a large-scale integrated circuit.
  • Each step is denoted, for example, S1. Each step can be divided into a plurality of sub-steps, and conversely a plurality of steps can be combined into one step.

Abstract

Provided is a gesture input device (100), which is provided with a display (1) for displaying images, and which changes the images displayed on the display (1) by means of input operations performed by a plurality of manipulating bodies. The gesture input device (100) is provided with: an operating position detecting section (2) that detects operating positions of the manipulating bodies with respect to the display (1); a screen splitting section (31) that splits the display (1) into a plurality of screens; and a determining section (34) that determines one screen to be operated from among the split screens on the basis of the operating positions of the manipulating bodies, said operating positions having been detected by means of the operating position detecting section (2).

Description

Gesture input device

Cross-reference of related applications
This application is based on Japanese Patent Application No. 2013-145537 filed on July 11, 2013, the contents of which are incorporated herein by reference.
The present disclosure relates to a gesture input device, such as a multi-touch panel, that changes a display image on a display by an input operation using a plurality of indicators.

Conventionally, as disclosed in Patent Document 1, a touch panel capable of various input operations using indicators such as a plurality of fingers is known. Such a touch panel is generally called a multi-touch panel.

As examples of input operations using a plurality of indicators, pinch-out, which enlarges the image on the display by widening the interval between two indicators placed on the touch panel, and pinch-in, which reduces the image on the display by narrowing that interval, are known.

The inventor of the present application has found the following regarding the touch panel. A conventional multi-touch panel as disclosed in Patent Document 1 does not assume a configuration in which the display is divided into a plurality of screens. Therefore, when such a configuration is adopted, input operations performed by a plurality of indicators across the boundary between the divided screens may not be supported.

As an example, when a pinch-out causes two indicators that were located on the same screen to be located on different screens, the multi-touch panel may not be able to determine which screen is being operated. In addition, when two indicators are located on different screens at the start of a pinch-in, the multi-touch panel may not be able to determine which screen is being operated.
Patent Document 1: Japanese Patent Publication No. 2001-228971
An object of the present disclosure is to provide a gesture input device that changes a display image on a display by an input operation using a plurality of indicators and that can change the display image on the screen desired by the user even when the input operation is performed by a plurality of indicators across a boundary between the screens obtained by dividing the display into a plurality of screens.

A gesture input device according to an example of the present disclosure includes a display that displays an image and changes a display image on the display by an input operation using a plurality of indicators. The device includes an operation position detection unit that detects the operation positions of the indicators with respect to the display, a screen division processing unit that divides the display into a plurality of screens, and a confirmation unit that confirms, from among the plurality of divided screens, one operation target screen to be operated, based on the operation positions of the plurality of indicators detected by the operation position detection unit.

According to such a gesture input device, even when the display is divided into a plurality of screens, one operation target screen can be confirmed based on the operation positions of the plurality of indicators detected by the operation position detection unit. Therefore, even when an input operation is performed with a plurality of indicators across the boundary between the divided screens, the display image can be changed only on the confirmed operation target screen, and hence only on the screen desired by the user.

Thus, in a gesture input device that changes a display image on a display by an input operation with a plurality of indicators, the display image on the screen desired by the user can be changed even when the input operation is performed with a plurality of indicators across a boundary between the screens obtained by dividing the display into a plurality of screens.
The above and other objects, features, and advantages of the present disclosure will become more apparent from the following detailed description with reference to the accompanying drawings, in which:

FIG. 1 is a block diagram showing a schematic configuration of a multi-touch panel;
FIG. 2 is a flowchart showing an example of map image conversion related processing in Dual Map in the control unit;
FIG. 3A is a schematic diagram for explaining an example of confirming an operation target screen in Embodiment 1;
FIG. 3B is a schematic diagram for explaining an example of confirming an operation target screen in Embodiment 1;
FIG. 4A is a schematic diagram for explaining an example of confirming an operation target screen in Modification 1;
FIG. 4B is a schematic diagram for explaining an example of confirming an operation target screen in Modification 1;
FIG. 5 is a schematic diagram for explaining an example of specific processing in the control unit when the operation position is located on the non-target screen during a pinch-out; and
FIG. 6 is a schematic diagram for explaining an example of specific processing in the control unit when the operation position is located on the non-target screen at the start of a pinch-in.
(Embodiment 1)

Hereinafter, an embodiment of the present disclosure (Embodiment 1) is described with reference to the drawings. FIG. 1 is a block diagram illustrating a schematic configuration of a multi-touch panel 100 to which the present disclosure is applied. The multi-touch panel 100 shown in FIG. 1 includes a display 1, an operation position detection unit 2, and a control unit 3, and corresponds to the gesture input device according to the present disclosure.
The multi-touch panel 100 may be mounted on a vehicle, or may be part of a tablet PC, a desktop PC, a mobile phone, or the like. The multi-touch panel 100 is a touch panel that can simultaneously detect a plurality of operation positions on the screen of the display 1.

The display 1 displays images corresponding to the various application programs (hereinafter, applications) executed by the control unit 3 and is capable of, for example, full-color display. As the display 1, for example, a liquid crystal display or an organic EL display can be used.

The operation position detection unit 2 uses a touch sensor integrated with the display 1, detects at which position on the screen of the display 1 a touch operation is performed, and inputs the coordinates of the operation position to the control unit 3. The touch sensor may be of a capacitance type, a resistive film type, or another type.

The control unit 3 is configured as an ordinary computer and includes, for example, a well-known CPU, ROM, EEPROM, RAM, I/O, and a bus line (none of which is shown) connecting these components. The control unit 3 executes processing in various applications based on information input from the operation position detection unit 2 and the like. As shown in FIG. 1, the control unit 3 includes a screen division processing unit 31, a display control unit 32, an operation determination unit 33, a confirmation unit 34, and an operation restriction unit 35 as functional blocks. The applications are stored in a memory such as the ROM.
Embodiment 1 is described taking as an example the use of an application (hereinafter, Dual Map) that divides the screen of one display 1 into left and right halves and displays a map on each of the divided screens. The screens obtained by this division are hereinafter called the left divided screen and the right divided screen.

The screen division processing unit 31 divides the screen of the single display 1 into left and right parts in Dual Map. For example, the drawing area of the display 1 is reset into a rectangular area for the left divided screen and a rectangular area for the right divided screen.

As an example, in the pixel coordinates of the display 1, the upper-left corner may serve as the upper-left corner of the rectangular area for the left divided screen and the bottom-row center as the lower-right corner of that area; likewise, the lower-right corner of the display may serve as the lower-right corner of the rectangular area for the right divided screen and the top-row center as the upper-left corner of that area.

Then, the display control unit 32 displays a map on each of the left and right divided screens produced by the screen division processing unit 31. The operation determination unit 33, the confirmation unit 34, and the operation restriction unit 35 are described in detail later.

In Dual Map, the scale of the map on a screen is changed by operation inputs such as pinch-out and pinch-in with two fingers. Pinch-out is an operation input that enlarges the map on the screen by widening the interval between two fingers placed on the screen of the display 1. Pinch-in is an operation input that reduces the map on the screen by narrowing the interval between two fingers placed on the display 1.

Embodiment 1 shows a configuration in which fingers are used for operation input, but the configuration is not necessarily limited to this. For example, a pen-shaped artifact other than a finger may be used as an indicator for performing operation input.

Here, processing in the control unit 3 related to image conversion in Dual Map, such as changing the map scale (hereinafter, map image conversion related processing), is described using the flowchart of FIG. 2, which is started when Dual Map is activated. In the flowchart of FIG. 2, for simplicity of explanation, pinch-out and pinch-in are described as the examples of multi-touch operations.
First, in step S1, the operation determination unit 33 determines whether a first finger touch operation has been detected. As an example, when the operation determination unit 33 obtains the coordinates of only one operation position from the operation position detection unit 2, it may determine that the first finger touch operation has been detected. If it determines that the first finger touch operation has been detected (YES in step S1), the process proceeds to step S2. Otherwise (NO in step S1), the process proceeds to step S11.

In step S2, the operation determination unit 33 determines whether a single touch operation has been detected. A single touch operation is a touch operation performed with one finger; examples include a tap, which touches and releases a point on the screen, and a slide, which shifts the position while one finger keeps touching the screen.

A tap may be determined from the operation position detection unit 2 no longer providing the coordinates of the single operation position; a slide may be determined from a change in the coordinates obtained from the operation position detection unit 2. A touch operation performed simultaneously with a plurality of fingers, such as a pinch operation, is called a multi-touch operation.

If it is determined that a single touch operation has been detected (YES in step S2), the process proceeds to step S3. Otherwise (NO in step S2), the process proceeds to step S4.

In step S3, the display control unit 32 performs a single touch process and proceeds to step S11. In the single touch process, the image is changed according to the single touch operation. As an example, when the single touch operation is a slide, the map on whichever of the left and right divided screens contains the coordinates where the one-finger touch was detected is translated by a direction and amount corresponding to the direction and amount of the slide.

In step S4, the operation determination unit 33 determines whether a second finger touch operation has been detected. As an example, when the operation determination unit 33 obtains the coordinates of two operation positions from the operation position detection unit 2, it may determine that the second finger touch operation has been detected. If so (YES in step S4), the process proceeds to step S5. Otherwise (NO in step S4), the process returns to step S2 and the flow repeats.

When the operation determination unit 33 detects the second finger's touch operation, it stores the detected operation positions of the two fingers in a volatile memory such as the RAM as the initial positions of the two fingers.

In step S5, the confirmation unit 34 performs a confirmation process and proceeds to step S6. In the confirmation process, one operation target screen to be operated is confirmed from the left divided screen and the right divided screen based on the initial positions of the two fingers detected by the operation position detection unit 2.

As one example, the operation target screen is confirmed as follows. First, among the initial positions of the fingers detected by the operation position detection unit 2, the initial position farther from the boundary between the left and right divided screens is found. Then, the screen on which that farther initial position lies is confirmed as the operation target screen.

To find the initial position farther from the boundary, a perpendicular is dropped from the coordinates of each finger's initial position to the boundary line between the left and right divided screens, and the coordinates of the point where the perpendicular and the boundary line intersect (hereinafter, intersection coordinates) are obtained. Then, the straight-line distance between the intersection coordinates and each initial position is calculated, and the initial position with the longer distance is taken as the one farther from the boundary.

Here, an example of confirming the operation target screen is shown using FIG. 3A and FIG. 3B. In FIG. 3A and FIG. 3B, L is the left divided screen, R is the right divided screen, Bo is the boundary line between the two, F1 is the operation position of the first finger, and F2 is the operation position of the second finger. The same applies to FIG. 4A through FIG. 6. In both FIG. 3A and FIG. 3B, F1 and F2 are the initial positions.

As shown in FIG. 3A, when both operation positions F1 and F2 are on the right divided screen R and F2 is farther from the boundary line Bo, the right divided screen R is confirmed as the operation target screen. Thus, when the user places both fingers on the right divided screen intending to operate it, the right divided screen R is confirmed as the operation target screen, as the user desires.

As shown in FIG. 3B, when F1 is on the left divided screen L while F2 is on the right divided screen R, and F2 is farther from the boundary line Bo, the right divided screen R is confirmed as the operation target screen. When the user intends to operate the right divided screen, the two operation positions F1 and F2 should lie toward the right overall, so the initial position farther from the boundary lies on the right divided screen. Therefore, even when F1 is on the left divided screen and F2 is on the right divided screen, the right divided screen is confirmed as the operation target screen, as the user desires.
Alternatively, the initial position farther from the boundary may be found by obtaining the coordinates of the point where the line segment connecting the two fingers' initial positions intersects the boundary line between the left and right divided screens. In this case, the straight-line distance between that point and each initial position is calculated, and the initial position with the longer distance is taken as the one farther from the boundary.

(Modification 1)

The method of confirming the operation target screen is not limited to the above; the screen corresponding to the center position between the fingers' initial positions may be confirmed as the operation target screen (hereinafter, Modification 1). The center position may also be called the center-of-gravity position.
 ここで、図4A及び図4Bを用いて、変形例1における操作対象画面の確定の例を示す。図4Aのように、2本の指の操作位置F1、F2のいずれも右分割画面Rに位置する場合であって、1本目の指の操作位置F1と2本目の指の操作位置F2との中心位置C1が右分割画面Rに位置する場合には、右分割画面Rを操作対象画面と確定する。よって、ユーザが右分割画面に対して操作入力を行うことを意図して2本の指の操作位置F1、F2のいずれも右分割画面に位置させた場合には、ユーザの所望の通り、右分割画面が操作対象画面と確定されることになる。 Here, using FIG. 4A and FIG. 4B, an example of confirmation of the operation target screen in the first modification will be shown. As shown in FIG. 4A, both the operation positions F1 and F2 of the two fingers are located on the right split screen R, and the operation position F1 of the first finger and the operation position F2 of the second finger When the center position C1 is located on the right split screen R, the right split screen R is determined as the operation target screen. Therefore, when both the operation positions F1 and F2 of the two fingers are positioned on the right split screen with the intention of the user performing an operation input on the right split screen, The divided screen is determined as the operation target screen.
 また、図4Bのように、1本目の指の操作位置F1は左分割画面Lに位置する一方、2本目の指の操作位置F2は右分割画面Rに位置する場合であって、1本目の指の操作位置F1と2本目の指の操作位置F2との中心位置C2が右分割画面Rに位置する場合には、右分割画面Rを操作対象画面と確定する。ユーザが右分割画面に対して操作入力を行うことを意図する場合には、2本の指の操作位置F1、F2は全体的に右寄りに位置する筈であるため、1本目の指の操作位置F1と2本目の指の操作位置F2との中心位置は右分割画面に位置することになる。よって、1本目の指の操作位置F1は左分割画面に位置する一方、2本目の指の操作位置F2は右分割画面に位置する場合であっても、ユーザの所望の通り、右分割画面が操作対象画面と確定されることになる。
(変形例2)
 なお、確定処理における操作対象画面の確定の方法としては、前述したものに限らず、1本目の指のタッチ操作を検出した操作位置がある画面を、操作対象画面として確定してもよい(以下、変形例2)。
Also, as shown in FIG. 4B, the first finger operation position F1 is located on the left divided screen L, while the second finger operation position F2 is located on the right divided screen R. When the center position C2 between the finger operation position F1 and the second finger operation position F2 is located on the right divided screen R, the right divided screen R is determined as the operation target screen. When the user intends to perform an operation input on the right split screen, the operation positions F1 and F2 of the two fingers should be located to the right as a whole, and therefore the operation position of the first finger The center position between F1 and the second finger operation position F2 is positioned on the right split screen. Therefore, even if the first finger operation position F1 is positioned on the left split screen while the second finger operation position F2 is positioned on the right split screen, the right split screen is displayed as desired by the user. The operation target screen is confirmed.
(Modification 2)
The method of determining the operation target screen in the determination process is not limited to those described above; the screen containing the operation position at which the first finger's touch operation was detected may be determined as the operation target screen (hereinafter, Modification 2).
As also shown in the flowchart of FIG. 2, once the operation target screen has been determined by the determination process in the determination unit 34, the determination process is not performed again until touch-off occurs, for example while a pinch operation continues. Touch-off means that, after a touch operation has once started, both fingers leave the screen of the display 1.
In step S6, the operation determination unit 33 determines whether a pinch operation has been detected. A pinch operation is a pinch-out or a pinch-in. A pinch operation may be detected based on the coordinates of the operation positions obtained from the operation position detection unit 2 having changed from the initial positions described above. If the operation determination unit 33 determines that a pinch operation has been detected (YES in step S6), the process proceeds to step S7. If it determines that no pinch operation has been detected (NO in step S6), step S6 is repeated.
In step S7, if the pinch operation is a pinch-out, the process proceeds to step S8; if it is a pinch-in, the process proceeds to step S9. As one example, the operation determination unit 33 determines whether the pinch operation is a pinch-out or a pinch-in as follows.
First, the distance between the initial positions is calculated from the coordinates of the fingers' initial positions, and the distance between the current operation positions (hereinafter, current positions) is calculated from their coordinates. Next, the ratio of the distance between the current positions to the distance between the initial positions (hereinafter, the conversion ratio) is calculated. The operation determination unit 33 then determines a pinch-out if the calculated conversion ratio is greater than 1, and a pinch-in if it is less than 1.
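Purely as a non-limiting sketch of this determination (hypothetical names; Euclidean pixel distances), the conversion ratio and the pinch-out/pinch-in decision could be computed as follows:

    import math

    def classify_pinch(initial, current):
        """Compute the conversion ratio (current finger spacing divided
        by initial finger spacing) and classify the gesture."""
        d0 = math.dist(initial[0], initial[1])
        d1 = math.dist(current[0], current[1])
        ratio = d1 / d0 if d0 else 1.0
        if ratio > 1.0:
            return 'pinch-out', ratio  # spacing widened: enlarge (step S8)
        if ratio < 1.0:
            return 'pinch-in', ratio   # spacing narrowed: reduce (step S9)
        return 'no-change', ratio

In step S8 or S9, the map scale on the operation target screen would then simply be multiplied by this ratio.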
In step S8, the display control unit 32 enlarges the map on the screen determined as the operation target screen in the determination process according to the conversion ratio described above, and the process proceeds to step S10. In step S9, the display control unit 32 reduces the map on the screen determined as the operation target screen according to the conversion ratio, and the process proceeds to step S10.
Once the operation target screen has been determined by the determination process in the determination unit 34, the operation restriction unit 35 keeps operations on the operation target screen valid and operations on the non-target screen invalid until touch-off occurs, for example while the pinch operation continues. The non-target screen is whichever of the right split screen and the left split screen is not the operation target screen.
As one example, when the determination unit 34 has determined the right split screen as the operation target screen, the operation restriction unit 35 restricts image conversion on the left split screen, such as changing the scale of its map, even if an operation position falls on the left split screen at the start of or during the pinch operation. For example, the operation restriction unit 35 uses an operation position detected on the left split screen not for image conversion on the left split screen but for image conversion on the right split screen.
Here, an example of specific processing in the control unit 3 when an operation position falls on the non-target screen at the start of or during a pinch operation is described with reference to FIGS. 5 and 6. In FIGS. 5 and 6, the determined operation target screen is the right split screen; FIG. 5 shows a pinch-out and FIG. 6 shows a pinch-in.
As shown in FIG. 5, when the two finger operation positions F1 and F2 were located on the right split screen at the start of the pinch-out but the operation position F1 moves onto the left split screen while the pinch-out continues, the operation restriction unit 35 allows the map on the right split screen to be enlarged but does not allow image conversion of the map on the left split screen. The conversion ratio of the distance between the current position on the left split screen and the current position on the right split screen to the distance between the initial positions is calculated, and the map on the right split screen is enlarged according to the calculated conversion ratio.
As shown in FIG. 6, when a pinch-in is performed from a state in which the operation position F1 is located on the left split screen at the start of the pinch-in, the operation restriction unit 35 allows the map on the right split screen to be reduced but does not allow image conversion of the map on the left split screen. The conversion ratio of the distance between the current positions to the distance between the initial position on the left split screen and the initial position on the right split screen is calculated, and the map on the right split screen is reduced according to the calculated conversion ratio.
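The restriction described for FIGS. 5 and 6 can be sketched as below (again with hypothetical names; a dict of per-screen scales stands in for the display control unit's state). Both finger positions, including one on the non-target screen, feed the conversion ratio, while only the target screen's map is rescaled:

    import math

    def apply_restricted_pinch(screens, target, initial, current):
        """Scale only the operation-target screen's map. Positions on the
        non-target screen still contribute to the conversion ratio, but
        the non-target screen's map is deliberately left unchanged."""
        d0 = math.dist(initial[0], initial[1])
        d1 = math.dist(current[0], current[1])
        ratio = d1 / d0 if d0 else 1.0
        screens[target]['scale'] *= ratio  # >1 enlarges, <1 reduces
        return ratio

    screens = {'L': {'scale': 1.0}, 'R': {'scale': 1.0}}
    # Pinch-out that drifts across the boundary: F1 ends on the left
    # screen, yet only the right (target) screen's map is enlarged.
    apply_restricted_pinch(screens, 'R',
                           initial=[(520, 300), (700, 300)],
                           current=[(400, 300), (760, 300)])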
Accordingly, even when an operation position falls on the non-target screen at the start of or during a multi-touch operation such as a pinch operation, the gesture input device can perform image conversion, such as enlarging or reducing the map, at the ratio the user intended. It is therefore possible to provide more comfortable operability for the user.
The operation target screen is preferably displayed so as to be distinguishable from the non-target screen. As one example, a mark indicating that it is the operation target screen may be displayed on the operation target screen, or highlighting such as a frame may be applied to it. In addition, or instead, the operation target screen and the non-target screen may be made distinguishable by lowering the luminance of the non-target screen relative to the operation target screen.
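One possible, purely illustrative rendering treatment is sketched below; the attribute names are hypothetical and the 0.6 dimming factor is an arbitrary example value.

    def decorate_screens(screens, target):
        """Make the operation target screen identifiable: emphasize it
        with a frame and dim the non-target screen's luminance."""
        for name, screen in screens.items():
            screen['framed'] = (name == target)
            screen['luminance'] = 1.0 if name == target else 0.6
        return screens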
In step S10, the operation determination unit 33 determines whether touch-off has occurred. As one example, touch-off may be determined when not even one operation position coordinate can be obtained from the operation position detection unit 2. If it is determined that touch-off has occurred (YES in step S10), the process proceeds to step S11; if not (NO in step S10), the process returns to step S7 and the flow is repeated.
Touch-off is determined not to have occurred when the coordinates of two operation positions, or of one operation position, are obtained from the operation position detection unit 2. That is, when both fingers are touching the screen, or when only one of the two fingers has left the screen, it is determined that touch-off has not occurred.
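The determine-once-until-touch-off behavior described above can be summarized in a small sketch (hypothetical class name; active_positions stands for the list of coordinates currently reported by the operation position detection unit 2):

    class TargetScreenLatch:
        """Hold the determined operation target screen until touch-off,
        i.e. until no operation position at all is reported."""
        def __init__(self, determine):
            self._determine = determine  # e.g. a closure over target_screen_by_center
            self.target = None

        def update(self, active_positions):
            if not active_positions:   # touch-off: YES in step S10
                self.target = None
            elif self.target is None:  # new touch: determine exactly once
                self.target = self._determine(active_positions)
            return self.target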
In step S11, if it is the end timing of the map image conversion related processing (YES in step S11), the flow ends. If it is not the end timing (NO in step S11), the process returns to step S1 and the flow is repeated. One example of the end timing of the map image conversion related processing is when Dual Map ends.
With the above configuration, even when the display 1 is divided into multiple screens, the screen on which the user intends to perform an operation input can be determined as the operation target screen, and the display can be changed only on the determined operation target screen in response to the user's operation input. As a result, in the multi-touch panel 100, which changes the display image on the display 1 by an input operation with a plurality of indicators, the display image on the screen the user desires can be changed even when the input operation is performed with the indicators straddling the boundary between the screens into which the display 1 is divided.
Although Embodiment 1 has been described using a pinch operation as an example of a multi-touch operation, the present disclosure is not necessarily limited to this. The present disclosure is applicable to multi-touch operations other than pinch operations, as long as the operation is performed with a plurality of indicators and requires the operation target screen to be specified. As one example, the present disclosure is also applicable to an operation that rotates an image by using one of the operation positions of two indicators as a pivot and rotating the other around it.
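For the rotation operation mentioned above, a minimal sketch (hypothetical name; angles in radians, positive counter-clockwise in a y-up coordinate system) could derive the rotation to apply to the target screen's image as follows:

    import math

    def rotation_angle(pivot, start, now):
        """Two-indicator rotation: one operation position is the pivot,
        the other sweeps around it; return the signed sweep angle."""
        a0 = math.atan2(start[1] - pivot[1], start[0] - pivot[0])
        a1 = math.atan2(now[1] - pivot[1], now[0] - pivot[0])
        # Normalize to [-pi, pi) so a small motion gives a small angle.
        return (a1 - a0 + math.pi) % (2 * math.pi) - math.pi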
Although Embodiment 1 shows a configuration using a multi-touch panel as the gesture input device of the present disclosure, the disclosure is not necessarily limited to this. For example, it is applicable not only to contact-type gesture recognition, which detects touch operations on a screen, but also to devices that perform non-contact gesture recognition, which does not require touching the screen. Examples of non-contact gesture recognition include methods that use changes in capacitance caused by the proximity of a human body and methods that use finger movements captured by a camera.
The present disclosure is not limited to the embodiments described above; various modifications are possible within the scope of the present disclosure, and embodiments obtained by appropriately combining technical means disclosed in different embodiments are also included in the technical scope of the present disclosure.
A gesture input device according to an example of the present disclosure includes a display that displays an image and changes the display image on the display by an input operation with a plurality of indicators. The device includes an operation position detection unit that detects the operation positions of the indicators with respect to the display, a screen division processing unit that divides the display into a plurality of screens, and a determination unit that determines, from among the divided screens and based on the operation positions of the plurality of indicators detected by the operation position detection unit, one operation target screen to be operated on.
Accordingly, even when the display is divided into a plurality of screens, one operation target screen can be determined based on the operation positions of the plurality of indicators detected by the operation position detection unit. Therefore, even when an input operation is performed with a plurality of indicators straddling the boundaries between the divided screens, the display image can be changed only on the determined operation target screen, and thus only on the screen the user desires.
As a result, in a gesture input device that changes the display image on a display by an input operation with a plurality of indicators, the display image on the screen the user desires can be changed even when the input operation is performed with the indicators straddling the boundaries between the screens into which the display is divided.
The "units" in the present embodiment classify the interior of the control unit 3 for convenience, focusing on its functions; they do not mean that the interior of the control unit 3 is physically divided into parts corresponding to the respective units. Accordingly, each unit can be realized in software as part of a computer program, or in hardware as an IC chip or large-scale integrated circuit.
Furthermore, the flowchart of this embodiment, or the processing of the flowchart, is composed of a plurality of steps, each expressed as, for example, S1. Each step can be divided into a plurality of sub-steps, and a plurality of steps can be combined into a single step.
The embodiments, configurations, and aspects of the gesture input device according to the present disclosure have been illustrated above, but the embodiments, configurations, and aspects according to the present disclosure are not limited to those described. For example, embodiments, configurations, and aspects obtained by appropriately combining technical parts disclosed in different embodiments, configurations, and aspects are also included within the scope of the embodiments, configurations, and aspects according to the present disclosure.

Claims (9)

  1.  A gesture input device (100) comprising a display (1) for displaying an image, the gesture input device (100) changing a display image on the display (1) by an input operation with a plurality of indicators, the gesture input device (100) comprising:
     an operation position detection unit (2) that detects an operation position of each indicator with respect to the display (1);
     a screen division processing unit (31) that divides the display (1) into a plurality of screens; and
     a determination unit (34) that determines, from among the divided screens and based on the operation positions of the plurality of indicators detected by the operation position detection unit (2), one operation target screen to be operated on.
  2.  The gesture input device (100) according to claim 1, wherein, after determining the operation target screen, the determination unit (34) does not determine the operation target screen again while the operations of the plurality of indicators on the display (1) continue.
  3.  The gesture input device (100) according to claim 2, further comprising an operation restriction unit (35) that, while the operations of the plurality of indicators on the display (1) continue, keeps operations on the operation target screen determined by the determination unit (34) valid and operations on the non-target screen, which is not the operation target screen, invalid.
  4.  The gesture input device (100) according to claim 3, wherein:
     the gesture input device (100) changes the display image on the display (1) according to the distance between the operation positions of the plurality of indicators detected by the operation position detection unit (2);
     the operation position detection unit (2) detects the operation position of an indicator even when that operation position corresponds to the non-target screen and operations on the non-target screen have been invalidated by the operation restriction unit (35); and
     the gesture input device (100) changes the display image on the operation target screen according to the operation positions of the plurality of indicators even when the screens corresponding to the operation positions detected by the operation position detection unit (2) are split between the operation target screen and the non-target screen.
  5.  The gesture input device (100) according to any one of claims 1 to 4, wherein the determination unit (34) determines, as the operation target screen, the screen corresponding to, among the plurality of operation positions detected by the operation position detection unit (2), the operation position farther from the boundary between the divided screens.
  6.  The gesture input device (100) according to any one of claims 1 to 4, wherein the determination unit (34) determines, as the operation target screen, the screen corresponding to the centroid position of the plurality of operation positions detected by the operation position detection unit (2).
  7.  The gesture input device (100) according to any one of claims 1 to 4, wherein the determination unit (34) determines, as the operation target screen, the screen containing the first detected operation position among the screens corresponding to the plurality of operation positions detected by the operation position detection unit (2).
  8.  The gesture input device (100) according to any one of claims 1 to 7, wherein the operation target screen determined by the determination unit (34) is displayed so as to be distinguishable from the non-target screen, which is not the operation target screen.
  9.  The gesture input device (100) according to any one of claims 1 to 8, wherein the gesture input device (100) is a multi-touch panel that changes the display image on the display (1) by touch operations with the plurality of indicators.
PCT/JP2014/003184 2013-07-11 2014-06-16 Gesture input device WO2015004848A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2013-145537 2013-07-11
JP2013145537A JP6171643B2 (en) 2013-07-11 2013-07-11 Gesture input device

Publications (1)

Publication Number Publication Date
WO2015004848A1 (en) 2015-01-15

Family

ID=52279559

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2014/003184 WO2015004848A1 (en) 2013-07-11 2014-06-16 Gesture input device

Country Status (2)

Country Link
JP (1) JP6171643B2 (en)
WO (1) WO2015004848A1 (en)


Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2017027422A (en) * 2015-07-24 2017-02-02 アルパイン株式会社 Display device and display processing method
JP6757140B2 (en) * 2016-01-08 2020-09-16 キヤノン株式会社 Display control device and its control method, program, and storage medium
JP2017224195A (en) * 2016-06-16 2017-12-21 パイオニア株式会社 Input device
JP6806646B2 (en) 2017-07-26 2021-01-06 株式会社デンソーテン Display control device, display system, display control method and program
JP6973025B2 (en) 2017-12-20 2021-11-24 コニカミノルタ株式会社 Display devices, image processing devices and programs
JP2019109803A (en) 2017-12-20 2019-07-04 コニカミノルタ株式会社 Touch panel sharing support device, touch panel sharing method, and computer program
JP7102740B2 (en) * 2018-01-12 2022-07-20 コニカミノルタ株式会社 Information processing device, control method of information processing device, and program
JP7119408B2 (en) * 2018-02-15 2022-08-17 コニカミノルタ株式会社 Image processing device, screen handling method, and computer program

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013051047A1 (en) * 2011-10-03 2013-04-11 古野電気株式会社 Display device, display program, and display method

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8707174B2 (en) * 2010-02-25 2014-04-22 Microsoft Corporation Multi-screen hold and page-flip gesture
JP5718042B2 (en) * 2010-12-24 2015-05-13 株式会社ソニー・コンピュータエンタテインメント Touch input processing device, information processing device, and touch input control method
CN103189833A (en) * 2011-06-08 2013-07-03 松下电器产业株式会社 Text character input device and display change method
KR101859102B1 (en) * 2011-09-16 2018-05-17 엘지전자 주식회사 Mobile terminal and control method for mobile terminal
JP5729610B2 (en) * 2011-12-19 2015-06-03 アイシン・エィ・ダブリュ株式会社 Display device

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013051047A1 (en) * 2011-10-03 2013-04-11 古野電気株式会社 Display device, display program, and display method

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE112017005699T5 (en) 2016-12-16 2019-07-25 Panasonic Intellectual Property Management Co., Ltd. Input device for a vehicle and input method
US10967737B2 (en) 2016-12-16 2021-04-06 Panasonic Intellectual Property Management Co., Ltd. Input device for vehicle and input method
DE112017005699B4 (en) 2016-12-16 2022-05-05 Panasonic Intellectual Property Management Co., Ltd. Input device for a vehicle and input method
CN111782032A (en) * 2020-05-26 2020-10-16 北京理工大学 Input system and method based on finger micro-gestures

Also Published As

Publication number Publication date
JP2015018432A (en) 2015-01-29
JP6171643B2 (en) 2017-08-02

Similar Documents

Publication Publication Date Title
WO2015004848A1 (en) Gesture input device
JP6132644B2 (en) Information processing apparatus, display control method, computer program, and storage medium
US10627990B2 (en) Map information display device, map information display method, and map information display program
US20200174632A1 (en) Thumbnail display apparatus, thumbnail display method, and computer readable medium for switching displayed images
US11435870B2 (en) Input/output controller and input/output control program
JP6432409B2 (en) Touch panel control device and touch panel control program
JP2012226520A (en) Electronic apparatus, display method and program
TWI597653B (en) Method, apparatus and computer program product for adjusting size of screen object
US9292185B2 (en) Display device and display method
WO2016181436A1 (en) Image output control method, image output control program, and display device
US9671948B2 (en) Image-display control system, image-display control method, and non-transitory computer-readable storage medium storing image-display control program
US9501210B2 (en) Information processing apparatus
US10318132B2 (en) Display device and display method
US8731824B1 (en) Navigation control for a touch screen user interface
US20180173411A1 (en) Display device, display method, and non-transitory computer readable recording medium
WO2018179552A1 (en) Touch panel device, method for display control thereof, and program
US20170351423A1 (en) Information processing apparatus, information processing method and computer-readable storage medium storing program
US20170115869A1 (en) Display device
US20190087077A1 (en) Information processing apparatus, screen control method
JP2017151670A (en) Display device, display method, and program
US20180173362A1 (en) Display device, display method used in the same, and non-transitory computer readable recording medium
JP6093635B2 (en) Information processing device
WO2017183194A1 (en) Display control device
JP2012173980A (en) Display device, display method and display program
JP6661421B2 (en) Information processing apparatus, control method, and program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14823393

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 14823393

Country of ref document: EP

Kind code of ref document: A1