WO2015004848A1 - Gesture input device - Google Patents

Gesture input device

Info

Publication number
WO2015004848A1
WO2015004848A1 (PCT/JP2014/003184)
Authority
WO
WIPO (PCT)
Prior art keywords
screen
display
target screen
input device
gesture input
Prior art date
Application number
PCT/JP2014/003184
Other languages
English (en)
Japanese (ja)
Inventor
泰徳 鈴木
Original Assignee
株式会社デンソー
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 株式会社デンソー filed Critical 株式会社デンソー
Publication of WO2015004848A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04886Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures by partitioning the display area of the touch-screen or the surface of the digitising tablet into independently controllable areas, e.g. virtual keyboards or menus
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/048Indexing scheme relating to G06F3/048
    • G06F2203/04803Split screen, i.e. subdividing the display area or the window area into separate subareas

Definitions

  • The present disclosure relates to a gesture input device, such as a multi-touch panel, that changes a display image on a display through an input operation using a plurality of indicators.
  • A touch panel capable of various input operations using indicators such as a plurality of fingers is known. Such a touch panel is generally called a multi-touch panel.
  • Known multi-touch operations include pinch-out, which enlarges the image on the display by widening the interval between two indicators placed on the touch panel, and pinch-in, which reduces the image on the display by narrowing that interval.
  • However, a conventional multi-touch panel such as that disclosed in Patent Document 1 does not assume a configuration in which the display is divided into a plurality of screens. When such a configuration is adopted, an input operation performed with a plurality of indicators across the boundary between the divided screens may therefore not be supported: the multi-touch panel may be unable to determine which screen is being operated.
  • An object of the present disclosure is to provide a gesture input device that changes a display image on a display by an input operation using a plurality of indicators, and that can change the display image on the screen desired by the user even when the input operation is performed with a plurality of indicators across the boundary between the screens into which the display is divided.
  • To achieve this object, a gesture input device according to the present disclosure includes: a display that displays an image; an operation position detection unit that detects the operation positions of indicators with respect to the display; a screen division processing unit that divides the display into a plurality of screens; and a confirmation unit that confirms, based on the operation positions of the plurality of indicators detected by the operation position detection unit, one operation target screen to be operated from among the plurality of divided screens.
  • With this configuration, the one operation target screen to be operated can be confirmed based on the operation positions of the plurality of indicators detected by the operation position detection unit. Even when an input operation is performed with a plurality of indicators across the boundary between the divided screens, the display image is therefore changed only on the confirmed operation target screen, that is, only on the screen desired by the user.
  • FIG. 1 is a block diagram showing a schematic configuration of a multi-touch panel.
  • FIG. 2 is a flowchart showing an example of map image conversion-related processing in the Dual Map in the control unit.
  • FIG. 3A is a schematic diagram for explaining an example of determining an operation target screen in the first embodiment.
  • FIG. 3B is a schematic diagram for explaining an example of determining the operation target screen in the first embodiment.
  • FIG. 4A is a schematic diagram for explaining an example of determining the operation target screen in the first modification.
  • FIG. 4B is a schematic diagram for explaining an example of determining the operation target screen in the first modification.
  • FIG. 5 is a schematic diagram for explaining an example of specific processing in the control unit when the operation position is located on the non-target screen during the pinch-out.
  • FIG. 6 is a schematic diagram for explaining an example of specific processing in the control unit when the operation position is located on the non-target screen at the start of pinch-in.
  • FIG. 1 is a block diagram illustrating a schematic configuration of a multi-touch panel 100 to which the present disclosure is applied.
  • a multi-touch panel 100 shown in FIG. 1 includes a display 1, an operation position detection unit 2, and a control unit 3.
  • the multi-touch panel 100 corresponds to the gesture input device according to the present disclosure.
  • the multi-touch panel 100 may be mounted on a vehicle, or may be a part of a tablet PC, desktop PC, mobile phone, or the like.
  • the multi-touch panel 100 is a touch panel that can simultaneously detect a plurality of operation positions on the screen of the display 1.
  • The display 1 displays images corresponding to various application programs (hereinafter, applications) executed by the control unit 3, and is capable of, for example, full-color display. As the display 1, a liquid crystal display, an organic EL display, or the like can be used.
  • The operation position detection unit 2 uses a touch sensor integrated with the display 1, detects at which position on the screen of the display 1 a touch operation is performed, and inputs the coordinates of that operation position to the control unit 3.
  • the touch sensor may be based on a capacitance method, may be based on a resistive film method, or may be based on another method.
  • The control unit 3 is configured as an ordinary computer and includes, for example, a well-known CPU, ROM, EEPROM, RAM, I/O, and a bus line connecting these components (none of which is shown).
  • The control unit 3 executes processing in various applications based on information input from the operation position detection unit 2 and the like. As shown in FIG. 1, the control unit 3 includes, as functional blocks, a screen division processing unit 31, a display control unit 32, an operation determination unit 33, a confirmation unit 34, and an operation restriction unit 35. The applications are stored in a memory such as the ROM.
  • Embodiment 1 will be described by taking as an example a case where an application (hereinafter referred to as “Dual Map”) is used that divides the screen of one display 1 into left and right and displays a map on each of the divided left and right screens.
  • Each screen obtained by dividing into left and right is hereinafter referred to as a left divided screen and a right divided screen.
  • In Dual Map, the screen division processing unit 31 divides the screen of the one display 1 into left and right parts. For example, the drawing area of the display 1 is reset into a rectangular area for the left divided screen and a rectangular area for the right divided screen.
  • For example, the upper-left corner of the pixel coordinates of the display 1 may serve as the upper-left corner of the rectangular area for the left divided screen, and the bottom-center coordinates as the lower-right corner of that area. Likewise, the lower-right corner of the pixel coordinates of the display 1 may serve as the lower-right corner of the rectangular area for the right divided screen, and the top-center coordinates as the upper-left corner of that area. A sketch of this division follows.
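  • As an illustrative sketch (not part of the patent text), the two rectangular drawing areas described above could be computed as follows, assuming pixel coordinates with the origin at the upper-left corner; the Rect type and the splitDisplay function are hypothetical names:

```typescript
// Illustrative sketch only: hypothetical Rect type and splitDisplay() for the
// left/right screen division described above (pixel origin at upper left).
interface Rect {
  left: number;   // x of the upper-left corner
  top: number;    // y of the upper-left corner
  right: number;  // x of the lower-right corner
  bottom: number; // y of the lower-right corner
}

function splitDisplay(width: number, height: number): { left: Rect; right: Rect } {
  const midX = Math.floor(width / 2); // x of the top/bottom center of the display
  return {
    // Upper-left corner of the display -> upper-left corner of the left screen;
    // bottom-center coordinates -> lower-right corner of the left screen.
    left: { left: 0, top: 0, right: midX, bottom: height - 1 },
    // Top-center coordinates -> upper-left corner of the right screen;
    // lower-right corner of the display -> lower-right corner of the right screen.
    right: { left: midX, top: 0, right: width - 1, bottom: height - 1 },
  };
}
```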
  • the display control unit 32 displays a map on each of the left divided screen and the right divided screen divided by the screen division processing unit 31.
  • the operation determination unit 33, the determination unit 34, and the operation restriction unit 35 will be described in detail later.
  • In Dual Map, the scale of the map on a screen is changed by operation inputs such as pinch-out and pinch-in with two fingers.
  • Pinch-out is an operation input that enlarges the map on the screen by widening the interval between two fingers placed on the screen of the display 1.
  • Pinch-in is an operation input that reduces the map on the screen by narrowing the interval between two fingers placed on the screen of the display 1.
  • Although Embodiment 1 shows a configuration in which fingers are used for operation input, the configuration is not necessarily limited to this. A pen-like artifact other than a finger may be used as an indicator for performing operation input.
  • Processing in the control unit 3 related to image conversion, such as changing the map scale in Dual Map (hereinafter, map image conversion related processing), will now be described using the flowchart of FIG. 2. The flow of FIG. 2 is started when Dual Map is activated.
  • First, in step S1, the operation determination unit 33 determines whether or not a first finger touch operation has been detected. As an example, when the operation determination unit 33 obtains the coordinates of exactly one operation position from the operation position detection unit 2, it may determine that the first finger touch operation has been detected.
  • If it is determined that the first finger touch operation has been detected (YES in step S1), the process proceeds to step S2; otherwise (NO in step S1), the process proceeds to step S11.
  • In step S2, the operation determination unit 33 determines whether a single touch operation has been detected. A single touch operation is a touch operation performed with one finger, and includes a tap, in which a point on the screen is touched and released, and a slide, in which the position is shifted while one finger keeps touching the screen.
  • In the case of a tap, the operation determination unit 33 may make the determination based on the coordinates of the one operation position no longer being obtained from the operation position detection unit 2; in the case of a slide, it may make the determination based on the change in the coordinates of the operation position obtained from the operation position detection unit 2. Note that a touch operation performed simultaneously with a plurality of fingers, such as a pinch operation, is referred to as a multi-touch operation, as sketched below.
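  • A minimal sketch of this tap/slide/multi-touch distinction is shown below; the inputs are assumptions about what the operation position detection unit reports, not APIs from the patent:

```typescript
// Illustrative sketch: classifying a touch per the determinations above.
type TouchKind = "tap" | "slide" | "multi-touch" | "undecided";

function classifyTouch(
  activePositions: number, // operation positions currently reported
  moved: boolean,          // coordinates changed since the first touch
  released: boolean,       // coordinates can no longer be obtained
): TouchKind {
  if (activePositions >= 2) return "multi-touch"; // e.g. a pinch operation
  if (moved) return "slide";                      // position shifted while touching
  if (released) return "tap";                     // touched and released in place
  return "undecided";                             // still touching, not yet moved
}
```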
  • step S2 If it is determined that a single touch operation has been detected (YES in step S2), the process proceeds to step S3. On the other hand, if it is determined that the single touch operation has not been detected (NO in step S2), the process proceeds to step S4.
  • In step S3, the display control unit 32 performs a single touch process and proceeds to step S11. In the single touch process, the image is changed according to the single touch operation.
  • For example, when the single touch operation is a slide, the map on the screen containing the coordinates of the operation position where the one-finger touch operation was detected is translated in a direction and by an amount corresponding to the direction and amount of the slide, as in the sketch below.
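  • A hedged sketch of the slide (pan) handling: the MapView type and the applySlide function are assumed names, not from the patent.

```typescript
// Hedged sketch of the slide (pan) handling in step S3.
interface Point { x: number; y: number; }

interface MapView {
  centerX: number;       // map-space x of the point shown at the screen center
  centerY: number;       // map-space y of that point
  pixelsPerUnit: number; // current display scale
}

// Translate the map by the direction and amount of the slide: dragging the
// finger moves the map with it, so the visible map center shifts the
// opposite way by the same on-screen distance.
function applySlide(view: MapView, from: Point, to: Point): void {
  view.centerX -= (to.x - from.x) / view.pixelsPerUnit;
  view.centerY -= (to.y - from.y) / view.pixelsPerUnit;
}
```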
  • In step S4, the operation determination unit 33 determines whether a second finger touch operation has been detected. As an example, when the operation determination unit 33 obtains the coordinates of two operation positions from the operation position detection unit 2, it may determine that the second finger touch operation has been detected. If so (YES in step S4), the process proceeds to step S5; otherwise (NO in step S4), the process returns to step S2 and the flow is repeated.
  • When the operation determination unit 33 detects the second finger touch operation, it stores the detected operation positions of the two fingers in a volatile memory such as the RAM as the initial positions of the two fingers.
  • In step S5, the confirmation unit 34 performs a confirmation process and proceeds to step S6. In the confirmation process, one operation target screen to be operated is confirmed from the left divided screen and the right divided screen based on the initial positions of the two fingers detected by the operation position detection unit 2.
  • As an example, the operation target screen is confirmed as follows. First, of the initial positions detected by the operation position detection unit 2, the initial position farther from the boundary between the left divided screen and the right divided screen is obtained. The screen on which this farther initial position is located is then confirmed as the operation target screen.
  • Specifically, a perpendicular is dropped from the coordinates of each finger's initial position to the boundary line between the left divided screen and the right divided screen, and the coordinates of the point where this perpendicular intersects the boundary line (hereinafter, the intersection coordinates) are obtained. The straight-line distance between the intersection coordinates and the coordinates of each initial position is then calculated, and the initial position with the longer calculated distance is taken as the initial position farther from the boundary.
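  • For a left/right split, the boundary line is vertical, so the perpendicular distance from an initial position to the boundary reduces to a horizontal distance. A minimal sketch of the confirmation process under that assumption (function and type names are illustrative):

```typescript
// Illustrative sketch of the confirmation process for a vertical boundary at
// x = boundaryX; names are assumptions, not from the patent.
interface Point { x: number; y: number; }

function confirmTargetScreen(
  f1: Point,         // initial position of the first finger
  f2: Point,         // initial position of the second finger
  boundaryX: number, // x coordinate of the boundary between the split screens
): "left" | "right" {
  // For a vertical boundary, the length of the perpendicular dropped from an
  // initial position to the boundary is simply the horizontal distance.
  const d1 = Math.abs(f1.x - boundaryX);
  const d2 = Math.abs(f2.x - boundaryX);
  const farther = d1 >= d2 ? f1 : f2; // initial position farther from the boundary
  return farther.x < boundaryX ? "left" : "right";
}
```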
  • An example of confirming the operation target screen is shown in FIG. 3A and FIG. 3B, where L is the left divided screen, R is the right divided screen, Bo is the boundary line between the left and right divided screens, F1 is the operation position of the first finger, and F2 is the operation position of the second finger.
  • Here, the first finger operation position F1 and the second finger operation position F2 are the initial positions. In the example of FIGS. 3A and 3B, the first finger operation position F1 is located on the left divided screen L, while the second finger operation position F2 is located on the right divided screen R. Since the second finger operation position F2 is farther from the boundary line Bo, the right divided screen R is confirmed as the operation target screen.
  • When the user intends to perform an operation input on the right divided screen, the operation positions F1 and F2 of the two fingers should be located toward the right as a whole, so the initial position farther from the boundary line will be located on the right divided screen. Therefore, even when the first finger operation position F1 is located on the left divided screen while the second finger operation position F2 is located on the right divided screen, the right divided screen is confirmed as the operation target screen, as the user desires.
  • Alternatively, the coordinates of the point where the line segment connecting the initial positions of the two fingers intersects the boundary line between the left divided screen and the right divided screen may be obtained. The straight-line distance between these coordinates and the coordinates of each initial position is then calculated, and the initial position with the longer calculated distance may be taken as the initial position farther from the boundary. A sketch of this alternative follows.
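  • A sketch of this alternative, assuming the two initial positions straddle a vertical boundary at x = boundaryX (names hypothetical):

```typescript
// Sketch of the alternative method: intersect the segment F1-F2 with a
// vertical boundary at x = boundaryX, then compare each initial position's
// distance from that intersection. Assumes f1.x !== f2.x (the positions
// straddle the boundary); names are hypothetical.
interface Point { x: number; y: number; }

function confirmBySegmentIntersection(f1: Point, f2: Point, boundaryX: number): "left" | "right" {
  const t = (boundaryX - f1.x) / (f2.x - f1.x);   // parameter of the intersection on F1-F2
  const iy = f1.y + t * (f2.y - f1.y);            // y of the intersection point
  const dist = (p: Point) => Math.hypot(p.x - boundaryX, p.y - iy);
  const farther = dist(f1) >= dist(f2) ? f1 : f2; // initial position farther from the boundary
  return farther.x < boundaryX ? "left" : "right";
}
```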
  • The method of confirming the operation target screen in the confirmation process is not limited to the above; the screen containing the center position between the initial positions of the fingers may instead be confirmed as the operation target screen (hereinafter, Modification 1). The center position may also be called the center-of-gravity position.
  • In FIG. 4A, both operation positions F1 and F2 of the two fingers are located on the right divided screen R, and the center position between the first finger operation position F1 and the second finger operation position F2 is also located on the right divided screen R, so the right divided screen R is confirmed as the operation target screen. Thus, when the user positions both fingers on the right divided screen intending to perform an operation input there, the right divided screen is confirmed as the operation target screen.
  • In FIG. 4B, the first finger operation position F1 is located on the left divided screen L, while the second finger operation position F2 is located on the right divided screen R. Since the center position C2 between the first finger operation position F1 and the second finger operation position F2 is located on the right divided screen R, the right divided screen R is confirmed as the operation target screen.
  • When the user intends to perform an operation input on the right divided screen, the operation positions F1 and F2 of the two fingers should be located toward the right as a whole, so the center position between the first finger operation position F1 and the second finger operation position F2 will be located on the right divided screen, as in the sketch below.
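  • A minimal sketch of Modification 1 under the same vertical-boundary assumption (names illustrative):

```typescript
// Minimal sketch of Modification 1: the screen containing the midpoint
// (center-of-gravity position) of the two initial positions becomes the
// operation target screen. Names are illustrative, not from the patent.
interface Point { x: number; y: number; }

function confirmByMidpoint(f1: Point, f2: Point, boundaryX: number): "left" | "right" {
  const centerX = (f1.x + f2.x) / 2; // x of the center position between F1 and F2
  return centerX < boundaryX ? "left" : "right";
}
```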
  • The method of confirming the operation target screen in the confirmation process is not limited to the above methods; the screen containing the operation position where the first finger touch operation was detected may also be confirmed as the operation target screen (hereinafter, Modification 2).
  • Once the operation target screen is confirmed, the confirmation process is not performed again until touch-off occurs, for example while a pinch operation continues. Touch-off means that both of the two fingers leave the screen of the display 1 after a touch operation has started.
  • In step S6, the operation determination unit 33 determines whether a pinch operation has been detected. A pinch operation is a pinch-out or a pinch-in, and it may be detected based on the coordinates of an operation position obtained from the operation position detection unit 2 having changed from the initial position described above. If it is determined that a pinch operation has been detected (YES in step S6), the process proceeds to step S7; otherwise (NO in step S6), step S6 is repeated.
  • In step S7, if the pinch operation is a pinch-out, the process proceeds to step S8; if the pinch operation is a pinch-in, the process proceeds to step S9.
  • As an example, the operation determination unit 33 determines whether the pinch operation is a pinch-out or a pinch-in as follows.
  • First, the distance between the initial positions is calculated from the coordinates of the fingers' initial positions, and the distance between the current positions is calculated from the coordinates of each finger's current operation position (hereinafter, the current position). Then, the ratio of the distance between the current positions to the distance between the initial positions (hereinafter, the conversion ratio) is calculated.
  • When the calculated conversion ratio is greater than 1, the operation determination unit 33 determines that the pinch operation is a pinch-out; when the ratio is less than 1, it determines that the operation is a pinch-in. A sketch of this calculation follows.
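  • The conversion ratio and the pinch-out/pinch-in decision could be sketched as follows (illustrative names; the threshold of exactly 1 follows the text above):

```typescript
// Illustrative sketch of the conversion ratio and the pinch classification
// in steps S6-S7; function names are assumptions, not from the patent.
interface Point { x: number; y: number; }

function distance(a: Point, b: Point): number {
  return Math.hypot(a.x - b.x, a.y - b.y);
}

// Ratio of the distance between the current positions to the distance
// between the initial positions.
function conversionRatio(init: [Point, Point], cur: [Point, Point]): number {
  return distance(cur[0], cur[1]) / distance(init[0], init[1]);
}

function classifyPinch(ratio: number): "pinch-out" | "pinch-in" | "none" {
  if (ratio > 1) return "pinch-out"; // fingers moved apart: enlarge
  if (ratio < 1) return "pinch-in";  // fingers moved together: reduce
  return "none";                     // distance unchanged
}
```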
  • In step S8, the display control unit 32 enlarges the map on the screen confirmed as the operation target screen in the confirmation process according to the conversion ratio described above, and proceeds to step S10.
  • In step S9, the display control unit 32 reduces the map on the screen confirmed as the operation target screen in the confirmation process according to the conversion ratio described above, and proceeds to step S10.
  • Until touch-off occurs, for example while the pinch operation continues, the operation restriction unit 35 enables operations on the operation target screen and invalidates operations on the non-target screen. The non-target screen is the screen, of the right divided screen and the left divided screen, that is not the operation target screen.
  • Specifically, the operation restriction unit 35 restricts image conversion, such as changing the map scale, on the non-target screen. For example, when the operation target screen is the right divided screen, the operation restriction unit 35 uses an operation position detected on the left divided screen for image conversion on the right divided screen, not for image conversion on the left divided screen.
  • In the following examples, the confirmed operation target screen is the right divided screen.
  • FIG. 5 shows an example of pinch-out, and FIG. 6 shows an example of pinch-in.
  • In the example of FIG. 5, the operation positions F1 and F2 of the two fingers were located on the right divided screen at the start of the pinch-out, but the operation position F1 moves onto the left divided screen during the pinch-out. In this case, the operation restriction unit 35 allows the map on the right divided screen to be enlarged but restricts image conversion of the map on the left divided screen.
  • Specifically, the conversion ratio of the distance between the current position located on the left divided screen and the current position located on the right divided screen to the distance between the initial positions is calculated, and the map on the right divided screen is enlarged according to the calculated conversion ratio.
  • In the example of FIG. 6, the operation restriction unit 35 likewise causes the map on the right divided screen to be reduced but does not convert the map on the left divided screen. Specifically, the conversion ratio of the distance between the current positions to the distance between the initial position located on the left divided screen and the initial position located on the right divided screen is calculated, and the map on the right divided screen is reduced according to the calculated conversion ratio.
  • With this configuration, the gesture input device can perform image conversion, such as enlarging or reducing the map, at the ratio intended by the user even when an operation position lies on the non-target screen, making it possible to provide more comfortable operability for the user. A sketch of this restriction follows.
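  • A hedged sketch of this restriction (types and names assumed): both current positions always feed the conversion ratio for the confirmed operation target screen, while the non-target screen's map is never converted.

```typescript
// Hedged sketch of the operation restriction: both current positions feed the
// conversion ratio for the confirmed target screen, even when one finger has
// crossed onto the non-target screen. MapView, baseScale, and applyPinch are
// assumed names, not from the patent.
interface Point { x: number; y: number; }
interface MapView { scale: number; }

function applyPinch(
  target: MapView,      // map on the confirmed operation target screen
  baseScale: number,    // target map scale at the moment the pinch started
  init: [Point, Point], // initial positions stored at the second finger's touch
  cur: [Point, Point],  // current positions, possibly straddling the boundary
): void {
  const d = (a: Point, b: Point) => Math.hypot(a.x - b.x, a.y - b.y);
  const ratio = d(cur[0], cur[1]) / d(init[0], init[1]);
  target.scale = baseScale * ratio; // >1 enlarges (pinch-out), <1 reduces (pinch-in)
  // The non-target screen's map is intentionally left untouched here:
  // its image conversion is invalidated while the pinch continues.
}
```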
  • When the operation target screen is confirmed, the operation target screen may be displayed so as to be identifiable from the non-target screen. For example, a mark indicating that a screen is the operation target screen may be displayed on it, or its frame may be highlighted. Alternatively, the operation target screen and the non-target screen may be made distinguishable by making the luminance of the non-target screen lower than that of the operation target screen.
  • In step S10, the operation determination unit 33 determines whether or not touch-off has occurred. For example, touch-off may be determined when no operation position coordinates can be obtained from the operation position detection unit 2. If it is determined that touch-off has occurred (YES in step S10), the process proceeds to step S11; otherwise (NO in step S10), the process returns to step S7 and the flow is repeated.
  • In step S11, if it is the end timing of the map image conversion related processing (YES in step S11), the flow ends; otherwise (NO in step S11), the process returns to step S1 and the flow is repeated. An example of the end timing is when Dual Map ends.
  • With the above configuration, even when the display 1 is divided into a plurality of screens, the screen on which the user intends to perform an operation input can be confirmed as the operation target screen, and the display can be changed only on the confirmed operation target screen in accordance with the user's operation input. As a result, even when an input operation is performed with a plurality of indicators across the boundary between the screens into which the display 1 is divided, the display image on the screen desired by the user can be changed.
  • In the embodiment described above, the pinch operation is described as an example of the multi-touch operation, but the present disclosure is not limited to this. The present disclosure can be applied to multi-touch operations other than the pinch operation, as long as the operation is performed with a plurality of indicators and an operation target screen needs to be specified. For example, the present disclosure can also be applied to an operation of rotating an image by using one of the operation positions of two indicators as an axis and rotating the other around it.
  • The embodiment described above shows a configuration using a multi-touch panel as the gesture input device of the present disclosure, but the configuration is not necessarily limited to this. The present disclosure is applicable not only to contact-type gesture recognition, which detects touch operations on a screen, but also to devices that perform non-contact gesture recognition, which does not require touching the screen. Examples of non-contact gesture recognition include a method that uses changes in capacitance caused by the proximity of a human body and a method that uses finger movements captured by a camera.
  • Note that each "unit" in the present embodiment is a categorization, for convenience, of the functions of the control unit 3; it does not mean that the inside of the control unit 3 is physically divided into parts corresponding to each "unit". Accordingly, each "unit" can be realized as software, as a part of a computer program, or as hardware, using an IC chip or a large-scale integrated circuit.
  • Each step of the flowchart is denoted as, for example, S1. Each step can be divided into a plurality of sub-steps, and a plurality of steps can be combined into one step.

Abstract

A gesture input device (100) is provided with a display (1) for displaying images, and changes the images displayed on the display (1) by means of input operations performed with a plurality of manipulating bodies. The gesture input device (100) comprises: an operation position detection section (2) that detects the operation positions of the manipulating bodies relative to the display (1); a screen division section (31) that divides the display into a plurality of screens; and a determination section (34) that determines one screen to be operated from among the divided screens, based on the operation positions of the manipulating bodies detected by the operation position detection section (2).
PCT/JP2014/003184 2013-07-11 2014-06-16 Gesture input device WO2015004848A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2013-145537 2013-07-11
JP2013145537A JP6171643B2 (ja) 2013-07-11 2013-07-11 Gesture input device

Publications (1)

Publication Number Publication Date
WO2015004848A1 (fr) 2015-01-15

Family

ID=52279559

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2014/003184 WO2015004848A1 (fr) 2013-07-11 2014-06-16 Gesture input device

Country Status (2)

Country Link
JP (1) JP6171643B2 (fr)
WO (1) WO2015004848A1 (fr)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE112017005699T5 2016-12-16 2019-07-25 Panasonic Intellectual Property Management Co., Ltd. Input device for a vehicle and input method
CN111782032A (zh) * 2020-05-26 2020-10-16 北京理工大学 Input system and method based on finger micro-gestures

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2017027422A (ja) * 2015-07-24 2017-02-02 アルパイン株式会社 Display device and display processing method
JP6757140B2 (ja) * 2016-01-08 2020-09-16 キヤノン株式会社 Display control device, control method therefor, program, and storage medium
JP2017224195A (ja) * 2016-06-16 2017-12-21 パイオニア株式会社 Input device
JP6806646B2 (ja) 2017-07-26 2021-01-06 株式会社デンソーテン Display control device, display system, display control method, and program
JP6973025B2 (ja) 2017-12-20 2021-11-24 コニカミノルタ株式会社 Display device, image processing device, and program
JP2019109803A (ja) 2017-12-20 2019-07-04 コニカミノルタ株式会社 Touch panel sharing support device, touch panel sharing method, and computer program
JP7102740B2 (ja) * 2018-01-12 2022-07-20 コニカミノルタ株式会社 Information processing device, method for controlling information processing device, and program
JP7119408B2 (ja) * 2018-02-15 2022-08-17 コニカミノルタ株式会社 Image processing device, screen handling method, and computer program

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013051047A1 (fr) * 2011-10-03 2013-04-11 古野電気株式会社 Dispositif, programme ainsi que procédé d'affichage

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8707174B2 (en) * 2010-02-25 2014-04-22 Microsoft Corporation Multi-screen hold and page-flip gesture
JP5718042B2 (ja) * 2010-12-24 2015-05-13 株式会社ソニー・コンピュータエンタテインメント Touch input processing device, information processing device, and touch input control method
TW201303705A (zh) * 2011-06-08 2013-01-16 Panasonic Corp Character input device and display change method
KR101859102B1 (ko) * 2011-09-16 2018-05-17 엘지전자 주식회사 Mobile terminal and method for controlling mobile terminal
JP5729610B2 (ja) * 2011-12-19 2015-06-03 アイシン・エィ・ダブリュ株式会社 Display device

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013051047A1 (fr) * 2011-10-03 2013-04-11 古野電気株式会社 Dispositif, programme ainsi que procédé d'affichage

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE112017005699T5 2016-12-16 2019-07-25 Panasonic Intellectual Property Management Co., Ltd. Input device for a vehicle and input method
US10967737B2 2016-12-16 2021-04-06 Panasonic Intellectual Property Management Co., Ltd. Input device for vehicle and input method
DE112017005699B4 2016-12-16 2022-05-05 Panasonic Intellectual Property Management Co., Ltd. Input device for a vehicle and input method
CN111782032A (zh) * 2020-05-26 2020-10-16 北京理工大学 Input system and method based on finger micro-gestures

Also Published As

Publication number Publication date
JP2015018432A (ja) 2015-01-29
JP6171643B2 (ja) 2017-08-02

Similar Documents

Publication Publication Date Title
WO2015004848A1 (fr) Gesture input device
JP6132644B2 (ja) 情報処理装置、表示制御方法、コンピュータプログラム、及び記憶媒体
US10627990B2 (en) Map information display device, map information display method, and map information display program
US20200174632A1 (en) Thumbnail display apparatus, thumbnail display method, and computer readable medium for switching displayed images
US11435870B2 (en) Input/output controller and input/output control program
JP6432409B2 (ja) タッチパネルの制御装置およびタッチパネルの制御プログラム
JP2012226520A (ja) 電子機器、表示方法及びプログラム
TWI597653B (zh) 調整螢幕物件尺寸的方法、裝置及電腦程式產品
US9292185B2 (en) Display device and display method
WO2016181436A1 (fr) Image output control method, image output control program, and display device
US9671948B2 (en) Image-display control system, image-display control method, and non-transitory computer-readable storage medium storing image-display control program
JP5620895B2 (ja) 表示制御装置、方法及びプログラム
US9501210B2 (en) Information processing apparatus
US10318132B2 (en) Display device and display method
US8731824B1 (en) Navigation control for a touch screen user interface
US20180173411A1 (en) Display device, display method, and non-transitory computer readable recording medium
WO2018179552A1 (fr) Touch panel device, display control method therefor, and program
US20170351423A1 (en) Information processing apparatus, information processing method and computer-readable storage medium storing program
US20170115869A1 (en) Display device
US20190087077A1 (en) Information processing apparatus, screen control method
JP2017151670A (ja) 表示装置、表示方法及びプログラム
US20180173362A1 (en) Display device, display method used in the same, and non-transitory computer readable recording medium
JP6093635B2 (ja) 情報処理装置
WO2017183194A1 (fr) Display control device
JP2012173980A (ja) 表示装置、表示方法、及び表示プログラム

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14823393

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 14823393

Country of ref document: EP

Kind code of ref document: A1