JP2015018432A - Gesture input device - Google Patents

Gesture input device

Info

Publication number
JP2015018432A
JP2015018432A (application number JP2013145537A)
Authority
JP
Japan
Prior art keywords
screen
display
target screen
screens
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
JP2013145537A
Other languages
Japanese (ja)
Other versions
JP2015018432A5 (en)
JP6171643B2 (en)
Inventor
Yasunori Suzuki (鈴木 泰徳)
Original Assignee
Denso Corp (株式会社デンソー)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Denso Corp (株式会社デンソー)
Priority to JP2013145537A
Publication of JP2015018432A
Publication of JP2015018432A5
Application granted
Publication of JP6171643B2
Legal status: Active (granted)
Anticipated expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING; COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0487 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F 3/04886 - Interaction techniques based on graphical user interfaces [GUI] using a touch-screen or digitiser, by partitioning the screen or tablet into independently controllable areas, e.g. virtual keyboards, menus
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING; COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2203/00 - Indexing scheme relating to G06F 3/00 - G06F 3/048
    • G06F 2203/048 - Indexing scheme relating to G06F 3/048
    • G06F 2203/04803 - Split screen, i.e. subdividing the display area or the window area into separate subareas

Abstract

PROBLEM TO BE SOLVED: To provide a gesture input device that changes the display on a display through input operations by a plurality of indicators and that can change the display on the screen the user intends even when the input operation is performed by the plurality of indicators across the boundary between a plurality of screens into which the display is divided.

SOLUTION: A gesture input device includes an operation position detection unit 2 that detects operation positions on a display 1, a screen division processing unit 31 that divides the display 1 into a plurality of screens, and a determination unit 34 that determines, among the screens corresponding to the finger operation positions detected by the operation position detection unit 2, the screen corresponding to the operation position farther from the boundary between the divided screens as the operation target screen.

Description

  The present invention relates to a gesture input device, such as a multi-touch panel, that changes the display on a display by input operations using a plurality of indicators.

  Conventionally, as disclosed in Patent Document 1, touch panels on which various input operations can be performed using a plurality of indicators such as fingers are known. Such a touch panel is generally called a multi-touch panel.

  As examples of input operations using a plurality of indicators, pinch-out, which enlarges the image on the display by widening the interval between two indicators placed on the touch panel, and pinch-in, which reduces the image on the display by narrowing that interval, are known.

Patent Document 1: JP 2001-228971 A

  However, the conventional multi-touch panel disclosed in Patent Document 1 does not assume a configuration in which the display is divided into a plurality of screens. Therefore, when such a configuration is adopted, it cannot cope with the case where input operations are performed by a plurality of indicators across the boundary between the divided screens.

  As an example, when two indicators that were initially located on the same screen come to be located on different screens as a result of a pinch-out, the multi-touch panel cannot determine which screen is being operated. Likewise, when two indicators are located on different screens at the start of a pinch-in, the multi-touch panel cannot determine which screen is being operated.

  The present invention has been made in view of the above problems, and an object thereof is to provide a gesture input device that changes the display on a display by input operations using a plurality of indicators and that can change the display on the screen the user intends even when the input operation is performed with a plurality of indicators across the boundary between the plurality of screens into which the display is divided.

  A gesture input device according to the present invention is a gesture input device (100) that includes a display (1) for displaying images and that changes the display on the display by input operations using a plurality of indicators. The gesture input device includes an operation position detection unit (2) that detects operation positions of the indicators on the display, a screen division processing unit (31) that divides the display into a plurality of screens, and a determination unit (34) that determines, based on the operation positions of the plurality of indicators detected by the operation position detection unit, one operation target screen to be operated from among the plurality of divided screens.

  According to this, even when the display is divided into a plurality of screens, one operation target screen can be determined based on the operation positions of the plurality of indicators detected by the operation position detection unit. Therefore, even when an input operation is performed with a plurality of indicators across the boundary between the divided screens, the display can be changed only on the confirmed operation target screen, that is, only on the screen the user intends.

  As a result, in a gesture input device that changes the display on a display by input operations with a plurality of indicators, the display on the screen the user intends can be changed even when the input operation is performed with a plurality of indicators across the boundary between the screens into which the display is divided.

FIG. 1 is a block diagram showing a schematic configuration of a multi-touch panel 100.
FIG. 2 is a flowchart showing an example of the map image conversion related processing for the Dual Map in the control unit 3.
FIGS. 3(a) and 3(b) are schematic diagrams for explaining an example of determination of the operation target screen in Embodiment 1.
FIGS. 4(a) and 4(b) are schematic diagrams for explaining an example of determination of the operation target screen in Modification 1.
FIG. 5 is a schematic diagram for explaining an example of the processing in the control unit 3 when an operation position is located on the non-target screen while a pinch-out continues.
FIG. 6 is a schematic diagram for explaining an example of the processing in the control unit 3 when an operation position is located on the non-target screen at the start of a pinch-in.

  Hereinafter, an embodiment of the present invention (hereinafter, Embodiment 1) will be described with reference to the drawings. FIG. 1 is a block diagram showing a schematic configuration of a multi-touch panel 100 to which the present invention is applied. The multi-touch panel 100 shown in FIG. 1 includes a display 1, an operation position detection unit 2, and a control unit 3. The multi-touch panel 100 corresponds to the gesture input device.

  Multi-touch panel 100 may be mounted on a vehicle, or may be a part of a tablet PC, desktop PC, mobile phone, or the like. The multi-touch panel 100 is a touch panel that can simultaneously detect a plurality of operation positions on the screen of the display 1.

  The display 1 displays a screen corresponding to various application programs (hereinafter referred to as applications) executed by the control unit 3 and can display, for example, full color. The display 1 can be configured using, for example, a liquid crystal display, an organic EL display, or the like.

  The operation position detection unit 2 uses a touch sensor integrated with the display 1 to detect at which position on the screen of the display 1 a touch operation has been performed, and inputs the coordinates of that operation position to the control unit 3. The touch sensor may be of the capacitance type, the resistive film type, or another type.

  The control unit 3 is configured as an ordinary computer and includes, for example, a well-known CPU, ROM, EEPROM, RAM, I/O, and a bus line connecting these components (none of which are shown). The control unit 3 executes processing of various applications based on information input from the operation position detection unit 2 and the like. As shown in FIG. 1, the control unit 3 includes a screen division processing unit 31, a display control unit 32, an operation determination unit 33, a determination unit 34, and an operation restriction unit 35 as functional blocks. The applications are stored in a memory such as the ROM.

  In Embodiment 1, the following description takes as an example the case of using an application (hereinafter, Dual Map) that divides the screen of one display 1 into left and right halves and displays a map on each half. The two screens obtained by this division are hereinafter referred to as the left split screen and the right split screen.

  The screen division processing unit 31 performs the processing of dividing the screen of the one display 1 into left and right halves in the Dual Map. For example, the drawing area of the display 1 is reset into a rectangular area for the left split screen and a rectangular area for the right split screen.

  As an example, the upper-left corner of the pixel coordinates of the display 1 may be used as the upper-left corner of the rectangular area for the left split screen, and the center of the bottom edge as its lower-right corner. Likewise, the lower-right corner of the pixel coordinates of the display 1 may be used as the lower-right corner of the rectangular area for the right split screen, and the center of the top edge as its upper-left corner.
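  The following is a minimal illustrative sketch of this kind of division; it is not part of the patent disclosure, and the display size, the Rect tuple, and the function name are assumptions introduced only for illustration.

```python
# Illustrative sketch: splitting the display's pixel area into a left and a
# right split screen. Rect and split_display are hypothetical names.
from typing import NamedTuple

class Rect(NamedTuple):
    left: int    # x coordinate of the upper-left corner
    top: int     # y coordinate of the upper-left corner
    right: int   # x coordinate of the lower-right corner
    bottom: int  # y coordinate of the lower-right corner

def split_display(display_w: int, display_h: int) -> tuple[Rect, Rect]:
    """Divide the display's drawing area into left and right rectangular areas."""
    center_x = display_w // 2
    # Left split screen: from the display's upper-left corner to the center of the bottom edge.
    left_screen = Rect(0, 0, center_x, display_h - 1)
    # Right split screen: from the center of the top edge to the display's lower-right corner.
    right_screen = Rect(center_x, 0, display_w - 1, display_h - 1)
    return left_screen, right_screen

# Example with an assumed 800 x 480 pixel display: the boundary lies at x = 400.
left_screen, right_screen = split_display(800, 480)
```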

  The display control unit 32 then displays a map on each of the left split screen and the right split screen produced by the screen division processing unit 31. The operation determination unit 33, the determination unit 34, and the operation restriction unit 35 are described in detail later.

  In the Dual Map, the scale of the map on a screen is changed by operation inputs such as pinch-out and pinch-in with two fingers. Pinch-out is an operation input that enlarges the map on the screen by widening the interval between two fingers placed on the screen of the display 1. Pinch-in is an operation input that reduces the map on the screen by narrowing the interval between two fingers placed on the display 1.

  Although Embodiment 1 shows a configuration in which fingers are used for operation input, the configuration is not necessarily limited to this. For example, an indicator other than a finger, such as a pen-shaped object, may be used to perform the operation input.

  Here, the processing related to image conversion such as changing the map scale in the Dual Map (hereinafter, map image conversion related processing) performed by the control unit 3 will be described using the flowchart of FIG. 2. The flowchart of FIG. 2 starts when the Dual Map is activated. For simplicity, only pinch-out and pinch-in are described as multi-touch operations in the flowchart of FIG. 2.

  First, in step S1, the operation determination unit 33 determines whether a touch operation of a first finger has been detected. As an example, it may be determined that the touch operation of the first finger has been detected when the coordinates of only one operation position are obtained from the operation position detection unit 2. When it is determined that the first finger touch operation has been detected (YES in step S1), the process proceeds to step S2. Otherwise (NO in step S1), the process proceeds to step S11.

  In step S2, the operation determination unit 33 determines whether a single-touch operation has been detected. A single-touch operation is a touch operation performed with one finger; examples include a tap, which touches and releases a point on the screen, and a slide, which moves the touch position while one finger remains on the screen.

  In the case of a tap, the determination may be made based on the coordinates of the one operation position no longer being obtained from the operation position detection unit 2. In the case of a slide, the determination may be made based on a change in the coordinates of the operation position. A touch operation performed simultaneously with a plurality of fingers, such as the pinch operation described later, is referred to as a multi-touch operation.
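  As a rough illustration of the tap/slide criteria just described (an assumption-based sketch, not the patent's implementation; the function name and return labels are made up for illustration):

```python
# Illustrative sketch of distinguishing a tap from a slide based on how the
# single operation position reported by the operation position detection unit changes.
def classify_single_touch(prev_pos, curr_pos):
    """prev_pos / curr_pos: (x, y) tuples, or None when no position is reported."""
    if prev_pos is not None and curr_pos is None:
        return "tap"    # the one operation position can no longer be obtained
    if prev_pos is not None and curr_pos is not None and curr_pos != prev_pos:
        return "slide"  # the operation position coordinates have changed
    return "undetermined"
```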

  When it is determined that a single-touch operation has been detected (YES in step S2), the process proceeds to step S3. When it is determined that a single-touch operation has not been detected (NO in step S2), the process proceeds to step S4.

  In step S3, the display control unit 32 performs single-touch processing and then proceeds to step S11. In the single-touch processing, the image is changed according to the single-touch operation. As an example, when the single-touch operation is a slide, the map on the screen containing the coordinates of the operation position where the one-finger touch operation was detected is translated in a direction and by an amount corresponding to the direction and amount of the slide.

  In step S4, the operation determination unit 33 determines whether a touch operation of a second finger has been detected. As an example, it may be determined that the touch operation of the second finger has been detected when the coordinates of two operation positions are obtained from the operation position detection unit 2. If it is determined that the second finger touch operation has been detected (YES in step S4), the process proceeds to step S5. Otherwise (NO in step S4), the process returns to step S2 and the flow is repeated.

  When the operation determination unit 33 detects the touch operation of the second finger, it stores the detected operation positions of the two fingers in a volatile memory such as the RAM as the initial positions of the two fingers.

  In step S5, the determination unit 34 performs confirmation processing and then proceeds to step S6. In the confirmation processing, one operation target screen to be operated is determined from the left split screen and the right split screen based on the initial positions of the two fingers detected by the operation position detection unit 2.

  As an example, the one operation target screen is determined as follows. First, of the initial positions of the fingers detected by the operation position detection unit 2, the initial position farther from the boundary between the left split screen and the right split screen is obtained. Then, the screen on which that farther initial position is located is determined as the operation target screen.

  To find the initial position farther from the boundary, a perpendicular is dropped from the coordinates of each finger's initial position to the boundary line between the left split screen and the right split screen, and the coordinates of the point where the perpendicular meets the boundary line (hereinafter, the intersection coordinates) are obtained. Then, the straight-line distance between the intersection coordinates and the coordinates of each initial position is calculated, and the initial position with the longer calculated distance is taken as the initial position farther from the boundary.
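  Because the boundary between the left and right split screens is a vertical line, the perpendicular distance described above reduces to the horizontal distance to the boundary. The following sketch is only illustrative; the function name, the boundary_x parameter, and the "L"/"R" labels are assumptions, not part of the patent.

```python
# Illustrative sketch of the confirmation rule: the finger whose initial
# position is farther from the boundary line decides the operation target screen.
def confirm_target_by_distance(p1, p2, boundary_x):
    """p1, p2: (x, y) initial positions of the two fingers; boundary_x: x of the boundary line."""
    # For a vertical boundary, the perpendicular from (x, y) meets it at (boundary_x, y),
    # so the straight-line distance is simply |x - boundary_x|.
    farther = p1 if abs(p1[0] - boundary_x) >= abs(p2[0] - boundary_x) else p2
    return "L" if farther[0] < boundary_x else "R"

# A FIG. 3(b)-style case: F1 just inside the left split screen, F2 well inside the right one.
print(confirm_target_by_distance((390, 200), (520, 210), boundary_x=400))  # -> "R"
```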

  Here, an example of determining the operation target screen will be described with reference to FIGS. 3(a) and 3(b). In FIG. 3, L denotes the left split screen, R the right split screen, Bo the boundary line between the left and right split screens, F1 the operation position of the first finger, and F2 the operation position of the second finger. The same applies to FIGS. 4 to 6 described later. In both FIGS. 3(a) and 3(b), F1 and F2 are initial positions.

  As shown in FIG. 3(a), when both F1 and F2 are located on the right split screen R and F2 is farther from the boundary line Bo, the right split screen R is determined as the operation target screen. Therefore, when the user intends to perform an operation input on the right split screen and both F1 and F2 are positioned on the right split screen, the right split screen R is confirmed as the operation target screen, as the user intends.

  Further, as shown in FIG. 3(b), when F1 is located on the left split screen L while F2 is located on the right split screen R, and F2 is farther from the boundary line Bo, the right split screen R is determined as the operation target screen. When the user intends to perform an operation input on the right split screen, F1 and F2 should be positioned toward the right as a whole, so the initial position farther from the boundary line will lie on the right split screen. Therefore, even if F1 is located on the left split screen and F2 on the right split screen, the right split screen is determined as the operation target screen, as the user intends.

  As another method of obtaining the initial position farther from the boundary, the coordinates of the point where the line segment connecting the initial positions of the two fingers intersects the boundary line between the left split screen and the right split screen may be obtained. In this case, the straight-line distance between the obtained coordinates and the coordinates of each initial position is calculated, and the initial position with the longer calculated distance is taken as the initial position farther from the boundary.

  The method of determining the operation target screen is not limited to the above; the screen corresponding to the center position between the initial positions of the fingers may be determined as the operation target screen (hereinafter, Modification 1). The center position may also be called the center-of-gravity position.
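  A correspondingly minimal sketch of Modification 1, again with assumed names and an assumed vertical boundary (not part of the patent disclosure), could look as follows:

```python
# Illustrative sketch of Modification 1: the screen containing the center
# (center-of-gravity) position of the two initial positions becomes the
# operation target screen.
def confirm_target_by_center(p1, p2, boundary_x):
    """p1, p2: (x, y) initial positions of the two fingers; boundary_x: x of the boundary line."""
    center_x = (p1[0] + p2[0]) / 2.0
    return "L" if center_x < boundary_x else "R"

# A FIG. 4(b)-style case: the center of F1 and F2 lies on the right split screen.
print(confirm_target_by_center((390, 200), (520, 210), boundary_x=400))  # -> "R"
```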

  Here, an example of confirmation of the operation target screen in Modification 1 will be described with reference to FIGS. 4(a) and 4(b). As shown in FIG. 4(a), when both F1 and F2 are located on the right split screen R and the center position C1 between F1 and F2 is also located on the right split screen R, the right split screen R is determined as the operation target screen. Therefore, when the user intends to perform an operation input on the right split screen and both F1 and F2 are positioned on the right split screen, the right split screen is confirmed as the operation target screen, as the user intends.

  Further, as shown in FIG. 4(b), when F1 is located on the left split screen L while F2 is located on the right split screen R, and the center position C2 between F1 and F2 is located on the right split screen R, the right split screen R is determined as the operation target screen. When the user intends to perform an operation input on the right split screen, F1 and F2 should be positioned toward the right as a whole, so the center position of F1 and F2 will lie on the right split screen. Therefore, even if F1 is located on the left split screen and F2 on the right split screen, the right split screen is determined as the operation target screen, as the user intends.

  The method of determining the operation target screen in the confirmation processing is not limited to the above methods; a configuration in which the screen containing the operation position where the first finger's touch operation was detected is determined as the operation target screen (hereinafter, Modification 2) is also possible.

  As shown in the flowchart of FIG. 2, once the operation target screen has been confirmed by the confirmation processing in the determination unit 34, the confirmation processing is not performed again until touch-off occurs, for example while the pinch operation continues. Here, touch-off means that both of the two fingers leave the screen of the display 1 after a touch operation has once started.

  In step S6, the operation determination unit 33 determines whether a pinch operation has been detected. The pinch operation is the pinch-out or pinch-in described above. Detection of a pinch operation may be determined based on the coordinates of the operation positions obtained from the operation position detection unit 2 having changed from the initial positions described above. When it is determined that a pinch operation has been detected (YES in step S6), the process proceeds to step S7. Otherwise (NO in step S6), step S6 is repeated.

  In step S7, if the pinch operation is a pinch-out, the process proceeds to step S8; if the pinch operation is a pinch-in, the process proceeds to step S9. As an example, the operation determination unit 33 determines whether the pinch operation is a pinch-out or a pinch-in as follows.

  First, the distance between the initial positions is calculated from the coordinates of the initial positions of the fingers, and the distance between the current positions is calculated from the coordinates of the current operation positions (hereinafter, the current positions) of the fingers. Next, the ratio of the distance between the current positions to the distance between the initial positions (hereinafter, the conversion ratio) is calculated. When the calculated conversion ratio is greater than 1, the operation is determined to be a pinch-out; when it is less than 1, the operation is determined to be a pinch-in.
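  The following sketch illustrates this classification by conversion ratio; the function name is an assumption, and the patent only specifies the ratio criterion itself.

```python
# Illustrative sketch: compute the conversion ratio from the initial and
# current positions of the two fingers and classify the pinch operation.
import math

def classify_pinch(init1, init2, cur1, cur2):
    """Each argument is an (x, y) operation position."""
    init_distance = math.hypot(init1[0] - init2[0], init1[1] - init2[1])
    cur_distance = math.hypot(cur1[0] - cur2[0], cur1[1] - cur2[1])
    conversion_ratio = cur_distance / init_distance
    if conversion_ratio > 1.0:
        return "pinch-out", conversion_ratio  # map is enlarged according to the ratio
    if conversion_ratio < 1.0:
        return "pinch-in", conversion_ratio   # map is reduced according to the ratio
    return "no change", conversion_ratio
```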

  In step S8, the display control unit 32 enlarges the map on the screen confirmed as the operation target screen in the confirmation processing by an amount corresponding to the conversion ratio described above, and proceeds to step S10. In step S9, the display control unit 32 reduces the map on the screen confirmed as the operation target screen in the confirmation processing by an amount corresponding to the conversion ratio, and proceeds to step S10.

  Once the operation target screen has been confirmed by the confirmation processing in the determination unit 34, the operation restriction unit 35 enables operations on the operation target screen and invalidates operations on the non-target screens other than the operation target screen until touch-off occurs, for example while the pinch operation continues.

  As an example, when the right split screen has been confirmed as the operation target screen, even if an operation position is located on the left split screen when the pinch operation starts or while it continues, the operation restriction unit 35 does not perform image conversion such as a scale change on the map of the left split screen. The operation position detected on the left split screen is not used for image conversion of the left split screen, but it is used for image conversion of the right split screen.

  Here, with reference to FIGS. 5 and 6, an example of the specific processing in the control unit 3 when an operation position is located on the non-target screen at the start or during the continuation of a pinch operation will be described. In both the examples of FIGS. 5 and 6, the confirmed operation target screen is assumed to be the right split screen. FIG. 5 shows an example of pinch-out, and FIG. 6 shows an example of pinch-in.

  As shown in FIG. 5, suppose that the operation positions F1 and F2 of the two fingers were both located on the right split screen at the start of the pinch-out, but F1 comes to be located on the left split screen during the pinch-out. In this case, the operation restriction unit 35 enlarges the map on the right split screen but performs no image conversion on the map of the left split screen. The conversion ratio of the distance between the current position located on the left split screen and the current position located on the right split screen to the distance between the initial positions is calculated, and the map on the right split screen is enlarged according to the calculated conversion ratio.

  On the other hand, as shown in FIG. 6, when a pinch-in is performed from a state in which the operation position F1 is located on the left split screen at the start of the pinch-in, the operation restriction unit 35 reduces the map on the right split screen but performs no conversion on the map of the left split screen. The conversion ratio of the distance between the current positions to the distance between the initial position located on the left split screen and the initial position located on the right split screen is calculated, and the map on the right split screen is reduced according to the calculated conversion ratio.
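  To make the behavior in FIGS. 5 and 6 concrete, here is a minimal sketch (the MapScreen class, its scale attribute, and the function name are assumptions for illustration only): the conversion ratio is computed from both finger positions even when one of them lies on the non-target screen, and only the confirmed operation target screen's map scale is changed.

```python
# Illustrative sketch of the operation restriction: both current positions feed
# the conversion ratio, but only the operation target screen's map is rescaled.
import math

class MapScreen:
    def __init__(self):
        self.scale = 1.0  # current map scale factor

def apply_pinch(target_screen, non_target_screen, init1, init2, cur1, cur2):
    init_distance = math.hypot(init1[0] - init2[0], init1[1] - init2[1])
    cur_distance = math.hypot(cur1[0] - cur2[0], cur1[1] - cur2[1])
    conversion_ratio = cur_distance / init_distance
    target_screen.scale *= conversion_ratio  # enlarge (>1) or reduce (<1) only the target map
    # non_target_screen is deliberately left untouched: operations on it are invalidated.
    return conversion_ratio
```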

  According to this, even when an operation position is located on the non-target screen at the start or during the continuation of a multi-touch operation such as a pinch operation, image conversion such as enlarging or reducing the map at the ratio intended by the user can be performed. This makes it possible to provide the user with more comfortable operability.

  The operation target screen is preferably displayed so as to be distinguishable from the non-target screen. As an example, a mark indicating that it is the operation target screen may be displayed on the operation target screen, or its frame or the like may be highlighted. Alternatively, the operation target screen and the non-target screen may be made distinguishable by lowering the luminance of the non-target screen relative to that of the operation target screen.

  In step S10, the operation determination unit 33 determines whether touch-off has occurred. As an example, touch-off may be determined when no operation position coordinates at all can be obtained from the operation position detection unit 2. If it is determined that touch-off has occurred (YES in step S10), the process proceeds to step S11. Otherwise (NO in step S10), the process returns to step S7 and the flow is repeated.

  Cases in which it is determined that touch-off has not occurred include the case where the coordinates of two operation positions are obtained from the operation position detection unit 2 and the case where the coordinates of one operation position are obtained. That is, when both fingers are still touching the screen, or when only one of the two fingers has left the screen, it is determined that touch-off has not occurred.

  In step S11, if it is the end timing of the map image conversion related processing (YES in step S11), the flow ends. Otherwise (NO in step S11), the process returns to step S1 and the flow is repeated. An example of the end timing of the map image conversion related processing is when the Dual Map ends.

  According to the above configuration, even when the display 1 is divided into a plurality of screens, the screen on which the user intends to perform an operation input can be determined as the operation target screen, and the display can be changed only on the confirmed operation target screen in accordance with the operation input from the user. As a result, in the multi-touch panel 100 that changes the display on the display 1 by input operations using a plurality of indicators, the display on the screen the user intends can be changed even when the input operation is performed by the plurality of indicators across the boundary between the screens into which the display 1 is divided.

  Embodiment 1 describes the pinch operation as an example of the multi-touch operation, but the present invention is not limited to this. The present invention can be applied to multi-touch operations other than pinch operations as long as the operation is performed with a plurality of indicators and an operation target screen needs to be determined. As an example, the present invention can also be applied to an operation of rotating an image by using one of the operation positions of two indicators as an axis and rotating the other around it.

  Embodiment 1 shows a configuration in which a multi-touch panel is used as the gesture input device of the claims, but the configuration is not necessarily limited to this. For example, the present invention is applicable not only to contact-type gesture recognition that detects touch operations on the screen but also to devices that perform non-contact gesture recognition that does not require touching the screen. Examples of non-contact gesture recognition include a method using changes in capacitance caused by the proximity of a human body and a method using finger movements captured by a camera.

  The present invention is not limited to the above-described embodiment; various modifications are possible within the scope of the claims, and embodiments obtained by appropriately combining technical means disclosed in different embodiments are also included in the technical scope of the present invention.

DESCRIPTION OF SYMBOLS: 1 Display, 2 Operation position detection unit, 31 Screen division processing unit, 34 Determination unit, 100 Multi-touch panel (gesture input device)

Claims (9)

  1. A gesture input device (100) that includes a display (1) for displaying an image and changes a display on the display by an input operation using a plurality of indicators, the gesture input device comprising:
    an operation position detection unit (2) for detecting operation positions of the indicators on the display;
    a screen division processing unit (31) for dividing the display into a plurality of screens; and
    a determination unit (34) for determining, based on the operation positions of the plurality of indicators detected by the operation position detection unit, one operation target screen to be operated from among the plurality of divided screens.
  2. The gesture input device according to claim 1, wherein, after confirming the operation target screen, the determination unit does not perform the confirmation of the operation target screen again while operations of the indicators on the display continue.
  3. The gesture input device according to claim 2, further comprising an operation restriction unit (35) that, while operations of the plurality of indicators on the display continue, enables operations on the operation target screen determined by the determination unit and invalidates operations on non-target screens other than the operation target screen.
  4. The gesture input device according to claim 3, wherein:
    the display on the display is changed according to the distance between the operation positions of the plurality of indicators detected by the operation position detection unit;
    the operation position detection unit detects an operation position of an indicator even when that operation position corresponds to the non-target screen and the operation restriction unit has invalidated operations on the non-target screen; and
    even when the screens corresponding to the operation positions of the plurality of indicators detected by the operation position detection unit are split between the operation target screen and the non-target screen, the display on the operation target screen is changed according to these operation positions.
  5. The gesture input device according to any one of claims 1 to 4, wherein the determination unit determines, as the operation target screen, the screen corresponding to the operation position farther from the boundary between the plurality of divided screens among the screens corresponding to the operation positions detected by the operation position detection unit.
  6. The gesture input device according to any one of claims 1 to 4, wherein the determination unit determines, as the operation target screen, the screen corresponding to the center-of-gravity position of the plurality of operation positions detected by the operation position detection unit.
  7. The gesture input device according to any one of claims 1 to 4, wherein the determination unit determines, as the operation target screen, the screen containing the operation position detected first among the plurality of operation positions detected by the operation position detection unit.
  8. The gesture input device according to any one of claims 1 to 7, wherein the operation target screen determined by the determination unit is displayed so as to be distinguishable from non-target screens other than the operation target screen.
  9. The gesture input device according to any one of claims 1 to 8, wherein the gesture input device is a multi-touch panel that changes the display on the display by touch operations with the plurality of indicators.
JP2013145537A 2013-07-11 2013-07-11 Gesture input device Active JP6171643B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2013145537A JP6171643B2 (en) 2013-07-11 2013-07-11 Gesture input device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2013145537A JP6171643B2 (en) 2013-07-11 2013-07-11 Gesture input device
PCT/JP2014/003184 WO2015004848A1 (en) 2013-07-11 2014-06-16 Gesture input device

Publications (3)

Publication Number Publication Date
JP2015018432A true JP2015018432A (en) 2015-01-29
JP2015018432A5 JP2015018432A5 (en) 2015-09-03
JP6171643B2 JP6171643B2 (en) 2017-08-02

Family

ID=52279559

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2013145537A Active JP6171643B2 (en) 2013-07-11 2013-07-11 Gesture input device

Country Status (2)

Country Link
JP (1) JP6171643B2 (en)
WO (1) WO2015004848A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2017027422A (en) * 2015-07-24 2017-02-02 アルパイン株式会社 Display device and display processing method

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2012137837A (en) * 2010-12-24 2012-07-19 Sony Computer Entertainment Inc Touch input processing device, information processing device and touch input control method
WO2012169190A1 (en) * 2011-06-08 2012-12-13 パナソニック株式会社 Text character input device and display change method
US20130072262A1 (en) * 2011-09-16 2013-03-21 Lg Electronics Inc. Mobile terminal and method for controlling the same
WO2013051047A1 (en) * 2011-10-03 2013-04-11 古野電気株式会社 Display device, display program, and display method
JP2013521547A (en) * 2010-02-25 2013-06-10 マイクロソフト コーポレーション Multi-screen hold and page flip gestures
JP2013127728A (en) * 2011-12-19 2013-06-27 Aisin Aw Co Ltd Display device

Also Published As

Publication number Publication date
WO2015004848A1 (en) 2015-01-15
JP6171643B2 (en) 2017-08-02


Legal Events

Date Code Title Description
A521 Written amendment

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20150710

A621 Written request for application examination

Free format text: JAPANESE INTERMEDIATE CODE: A621

Effective date: 20160526

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20170207

A521 Written amendment

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20170327

TRDD Decision of grant or rejection written
A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

Effective date: 20170606

A61 First payment of annual fees (during grant procedure)

Free format text: JAPANESE INTERMEDIATE CODE: A61

Effective date: 20170619

R151 Written notification of patent or utility model registration

Ref document number: 6171643

Country of ref document: JP

Free format text: JAPANESE INTERMEDIATE CODE: R151