WO2022198012A1 - Multilayer text input method for augmented reality devices - Google Patents


Info

Publication number
WO2022198012A1
Authority
WO
WIPO (PCT)
Prior art keywords
keyboard
virtual
area
input
boundary
Prior art date
Application number
PCT/US2022/020897
Other languages
English (en)
Inventor
Chao MEI
Yi Xu
Original Assignee
Innopeak Technology, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Innopeak Technology, Inc. filed Critical Innopeak Technology, Inc.
Priority to CN202280021646.0A (CN117083584A)
Publication of WO2022198012A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/02Input arrangements using manually operated switches, e.g. using keyboards or dials
    • G06F3/023Arrangements for converting discrete items of information into a coded form, e.g. arrangements for interpreting keyboard generated codes as alphanumeric codes, operand codes or instruction codes
    • G06F3/0233Character input methods
    • G06F3/0236Character input methods using selection techniques to select from displayed items
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/02Input arrangements using manually operated switches, e.g. using keyboards or dials
    • G06F3/023Arrangements for converting discrete items of information into a coded form, e.g. arrangements for interpreting keyboard generated codes as alphanumeric codes, operand codes or instruction codes
    • G06F3/0233Character input methods
    • G06F3/0237Character input methods using prediction or retrieval techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/0482Interaction with lists of selectable items, e.g. menus

Definitions

  • the embodiments of the present disclosure relate to the field of vision enhancement technology, and in particular, to systems and methods for text input on virtual reality and augmented reality devices.
  • An extended reality (XR) experience refers to real and virtual environments generated by computing and display technologies.
  • XR experiences encompass "virtual reality," "augmented reality," and "mixed reality."
  • A virtual reality (VR) experience involves presentation of digital or virtual images without transparency to actual real-world visual input.
  • Mixed reality (MR) experiences are an extension of augmented reality (AR) experiences that enable virtual elements to interact with real-world elements in an environment. Such technologies provide a simulated environment with which a user can interact, thereby providing an immersive experience.
  • Text input into visual enhancement display technologies can be a challenge.
  • For example, a physical keyboard and mouse can be connected to a head-mounted display (HMD) device.
  • A user can typically tell, based on the change in resistance of a key, when a key has been sufficiently depressed. This tactile feedback relieves the user from having to constantly look at the physical keyboard to visually verify that input is being entered. Accordingly, the user's eyes are freed from the keyboard.
  • However, external input devices such as a physical keyboard are not always practical. For example, an AR application may use the HMD to view an item under maintenance (e.g., a vehicle or home repair) and display instructions over the real-world view to assist in the repair. In such a scenario, retrieving and connecting a physical keyboard could be cumbersome, and the physical keyboard may simply get in the way of the repair.
  • a virtual keyboard is essentially a replica of a real keyboard (or portion thereof) that is presented to the user, for example, on a touch screen.
  • A user contacts the touch screen at a location corresponding to the desired input.
  • With a virtual keyboard, there is no way to provide the tactile feedback associated with using a physical keyboard.
  • As a result, a user must focus their attention on the virtual keyboard on the touch screen in order to see what they are typing. This makes it difficult for a user to accurately select inputs while remaining immersed in the simulated experience. Instead, the user must shift their focus to the touch screen to ensure accurate finger or thumb placement for correct text input. Doing so for each entered character is inefficient and potentially burdensome to the user.
  • systems and methods are provided for generating a virtual keyboard for text input on virtual reality and augmented reality devices.
  • a method for text input comprises generating a virtual keyboard on a mobile device, wherein the virtual keyboard comprises a first operation area, a first plurality of virtual key areas, and a first boundary at a first interface between the first operation area and the first plurality of virtual key areas; detecting a user input on the mobile device that crosses the first boundary; selecting an input key of the virtual keyboard based on the detected user input; and displaying text based on the selected input key.
  • A system for text input comprises a memory configured to store instructions and one or more processors communicably coupled to the memory and configured to execute the instructions to perform the method described above.
  • a non-transitory computer-readable storage medium storing a plurality of instructions executable by one or more processors, the plurality of instructions when executed by the one or more processors cause the one or more processors to perform the method as described above.
  • A device for generating a virtual keyboard comprises a memory storing instructions and one or more processors coupled to the memory. The one or more processors are configured to execute the instructions to generate a virtual keyboard comprising at least one operation area, a plurality of virtual key areas adjacent to the operation area, and at least one validation boundary provided at the interface between the at least one operation area and each of the plurality of virtual key areas; and to display a graphical representation of the virtual keyboard on a display screen.
  • FIG. 1 illustrates an example of a visual enhancement system according to embodiments disclosed herein.
  • FIG. 2 illustrates an example virtual keyboard layout according to various embodiments of the present disclosure.
  • FIGS. 3 and 4 illustrate the example virtual keyboard layout of FIG. 2, including steps of a selection mechanism according to an embodiment of the present disclosure.
  • FIG. 5 illustrates another example virtual keyboard layout including a predictive layer according to various embodiments of the present disclosure.
  • FIG. 6 illustrates another example virtual keyboard layout including a plurality of concentric keyboards according to various embodiments of the present disclosure.
  • FIG. 7 illustrates an example virtual keyboard layout including a plurality of abstraction layers according to various embodiments of the present disclosure.
  • FIG. 8 illustrates another example virtual keyboard layout according to an embodiment of the present disclosure.
  • FIG. 9 is an example computing component that may be used to implement various features of embodiments described in the present disclosure.
  • Embodiments presented herein provide systems and methods configured to generate a virtual keyboard usable to input text in XR applications (e.g., VR, MR, and/or AR applications).
  • Various embodiments provided herein utilize an input detection device communicably coupled to a display device.
  • The input detection device, such as a mobile device, is configured to maintain a virtual keyboard thereon.
  • the input detection device may be configured to detect user inputs thereon, such as physical contact with a touch screen of the mobile device, and convert the detected user inputs to input key selections.
  • the input detection device is communicably coupled to an external display device that presents the virtual keyboard to a user.
  • the display device may be an XR display device, such as a head-mounted display device (HMD), tethered to the input detection device via wired or wireless communication.
  • Embodiments herein may detect continuous user input (e.g., uninterrupted physical contact with the input detection device) and execute a selection mechanism to select desired input keys on the virtual keyboard. Additionally, feedback and/or a graphical representation of the location of the user input on the virtual keyboard is presented to the user by the display device. This allows for input key selection without the user shifting their gaze to the input detection device. Thus, the user need not monitor input selection by directly viewing the input detection device, and is able to view their surroundings via the display device while simultaneously using the virtual keyboard.
  • multiple handheld controllers may each be assigned a portion of a split virtual keyboard and button selection is made by sliding the fingertip along the surface of a touchpad on each controller.
  • the confirmation of the text input may be completed by pressing a trigger button.
  • The first method causes rapid user fatigue due to the numerous handheld controller clicks required, and frustration due to the imprecision of point-and-shoot selection; the second increases the possibility of user dizziness because it involves frequent head movements, and faster text input requires faster head movements; and the third is inefficient because, when there are many keys on the keyboard, sliding a fingertip across a traditional QWERTY layout to locate a key is slow and can result in fatigue and inaccurate text selection.
  • In the case of a display device tethered to a mobile device, one approach is to use an existing text input interface on the mobile device.
  • mobile devices have a floating full keyboard (e.g., a QWERTY keyboard), a T9 keyboard, a handwriting interface, etc.
  • these keyboards require the user to view the keyboard interface on the mobile device screen to ensure accurate key selection and finger placement, at least because the tactile feedback of a conventional physical keyboard cannot be replicated on the keyboard interface.
  • the user may want to maintain their field of view (FOV) on the simulated environment within their line of sight to ensure and maintain the immersive experience.
  • these methods utilize input key selection mechanisms of tap and lift-off, both of which have limitations in XR applications.
  • Tap selection may be less accurate due to the user's inability to directly monitor finger placement.
  • Lift-off selection may reduce typing efficiency because the user must reposition the finger after each lift-off from the input device, which interrupts planning of the finger movement to the next character.
  • Accordingly, embodiments disclosed herein provide an improved and optimized virtual keyboard and text input method for XR display applications.
  • Embodiments herein provide for a layout that is different from but based on a traditional QWERTY key layout.
  • The layout leverages users' existing familiarity with the QWERTY arrangement while adapting it to the unique layouts disclosed herein.
  • Embodiments disclosed herein provide for an input detection device (such as a mobile device) that generates and maintains a virtual keyboard with a graphical representation of the virtual keyboard layout projected on a display device (such as an XR enabled device) external to the input detection device.
  • the virtual keyboard may also be displayed on the input detection device (e.g., on a display surface), while in others the virtual keyboard may not be displayed graphically.
  • Embodiments herein also display the location of user input on the virtual keyboard by projecting a graphical representation (e.g., a graphical icon) of the location on the display device. As such, the user may monitor physical input location relative to the virtual keyboard on the input detection device via the projected graphical representation. By displaying the user input location on the virtual keyboard by the display device, the user does not need to monitor actual input placement or interrupt the input movement planning.
  • embodiments disclosed herein may utilize an "In 'n out" selection mechanism for input key selection, which provides for improved key input while reducing inaccurate key selection.
  • the selection mechanism detects a continuous user input (e.g., continuous physical contact between the user input, such as a finger or thumb, and the input detection device) that moves a contact point from an idle state in an operation area into one of a plurality of virtual key areas, each virtual key area assigned an input key of the virtual keyboard. Based on the movement, embodiments herein trigger a pre-selection state and designate the input key of the one virtual key area as a candidate input key.
  • Moving the contact point out of the one of the plurality of virtual key areas back into the central region confirms the pre-selection state and executes the input key (e.g., in the case of character input keys, enters the character for text input).
  • the user need not lift their finger or tap to execute an input key selection.
  • the user need not look to the input detection device to confirm finger placement and accurate text entry.
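The selection flow described above can be summarized as a small state machine over three region labels. The following Python sketch is illustrative only and is not the patented implementation; the region names, the class name, and the idea of passing a key identifier with each sample are assumptions made for the example.

```python
from enum import Enum, auto

class Region(Enum):
    OPERATION = auto()   # central operation area (idle region)
    KEY = auto()         # one of the virtual key areas
    OUTSIDE = auto()     # anywhere else (outside the keyboard shape)

class InNOutSelector:
    """Tracks one continuous contact and confirms a key when the contact
    moves from the operation area into a key area and back again."""

    def __init__(self):
        self.candidate = None     # key pre-selected but not yet confirmed
        self.prev_region = None

    def on_move(self, region, key_id=None):
        """Call for every sampled contact position. Returns the confirmed
        key_id when the out-and-back-in gesture completes, else None."""
        confirmed = None
        if region is Region.KEY:
            if self.prev_region is Region.OPERATION:
                self.candidate = key_id          # crossed the validation boundary: pre-select
            elif self.candidate is not None and key_id != self.candidate:
                self.candidate = None            # slid into a neighboring key area: cancel
        elif region is Region.OPERATION:
            if self.prev_region is Region.KEY and self.candidate is not None:
                confirmed = self.candidate       # crossed back over the boundary: confirm
                self.candidate = None
        else:                                    # left the keyboard shape entirely: cancel
            self.candidate = None
        self.prev_region = region
        return confirmed

    def on_lift_off(self):
        """Interrupting the continuous contact cancels any pre-selection."""
        self.candidate = None
        self.prev_region = None
```

For instance, feeding the sequence OPERATION, KEY("O"), OPERATION to on_move returns "O" on the final call, while wandering from KEY("O") into a neighboring key area clears the candidate.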
  • FIG. 1 illustrates an example of a visual enhancement system 100 according to embodiments disclosed herein.
  • the visual enhancement system 100 includes a display device 110 and an input detection device 120.
  • The display device 110 is communicatively coupled to the input detection device 120 by a communications link 130, such as a wired lead or wireless connectivity.
  • The communications link 130 may be implemented using one or more wireless communication protocols, such as Bluetooth®, Wi-Fi, cellular communications protocols (e.g., 4G LTE, 5G, and the like), and so on.
  • the communications link 130 may comprise a data cable, such as a Universal Serial Bus (USB) cable, HDMI cable, and so on.
  • The input detection device 120 may be implemented as any processing system as described below.
  • the mobile device may be a mobile smart phone, tablet computer, personal computer, laptop computer, personal digital assistant, smart watch (or other wearable smart devices), and so on.
  • the input detection device 120 may be a computer or processing system, such as the computer system 900 of FIG. 9.
  • the input detection device 120 may be a mobile device, such as, but not limited to, a mobile telephone, tablet computer, wearable smart device (e.g., a smartwatch and the like), etc.
  • the input detection device 120 may be implemented as any computing system known in the art, for example, a personal computer, laptop computer, personal digital assistant, and so on.
  • The input detection device need only be configured to detect user inputs and convert those user inputs into input key selections to facilitate text entry thereon.
  • Example user inputs may be, for example, physical contact with a touch screen or other responsive surface via, for example, a finger, thumb, appendage, or user input device (e.g., a stylus or pen); voice command input; gesture command inputs; and the like.
  • Embodiments herein refer to physical or direct contact with the input detection device, which refers to any contact on the input detection device (e.g., a touch screen) that is detected as an input.
  • A physical or direct contact from the finger need not require the finger itself to touch the surface (e.g., where the finger is covered by a glove or other material), so long as the contact results from the user exerting force on the input detection device via an appendage or input device.
  • the input detection device 120 may be any computing device; however, the input detection device 120 will be referred to herein as mobile device 120 for illustrative purposes only.
  • FIG. 1 illustrates an example architecture of the mobile device 120 that may facilitate text input via a virtual keyboard.
  • the mobile device 120 includes sensors 121, sensors interface 122, virtual keyboard module 124, application(s) 126, and graphics engine 128.
  • the components of system 100 including sensors 121, sensors interface 122, virtual keyboard module 124, application(s) 126, and graphics engine 128, interoperate to implement various embodiments for generating a virtual keyboard that is maintained on the mobile device 120, displaying the virtual keyboard to a user, and receiving user input.
  • Sensors 121 can be configured to detect when a physical object (e.g., one or more finger(s), one or more stylus pen(s), or any input object or device) has come into physical contact with a portion of display surface 125.
  • the display surface 125 may be a multi-touch display surface configured to detect contact from one or more physical objects (e.g., multiple fingers or pens) with the display surface 125.
  • sensors 121 can detect when one or more fingers of a user comes in contact with display surface 125.
  • Sensors 121 can be embedded in the display surface 125 and can include, for example, pressure sensors, temperature sensors, image scanners, barcode scanners, etc., that interoperate with sensor interface 122 to detect multiple simultaneous inputs.
  • the display surface 125 may include sensors for implementing a touch screen interface.
  • the sensors 121 may be implemented as resistive sensors, capacitive sensors, optical imaging sensors (e.g., CMOS sensors and the like), dispersive signal sensors, acoustic pulse recognition sensors, and so on for touch screen applications.
  • display surface 125 can include an interactive multi-touch surface.
  • display surface 125 also functions as a presentation surface to display video output data to the user of the visual enhancement system 100.
  • Sensors 121 can be included (e.g., embedded) in a plurality of locations across display surface 125. Sensors 121 can detect locations where physical contact with the display surface 125 has occurred. The density of sensors 121 can be sufficient such that contact across the entirety of display surface 125 can be detected. Thus, sensors 121 are configured to detect and differentiate between simultaneous contact at a plurality of different locations on the display surface 125.
  • Sensor interface 122 can receive raw sensor signal data from sensors 121 and can convert the raw sensor signal data into contact input data (e.g., digital data) that can be compatibly processed by other modules of mobile device 120. Sensor interface 122 or the other modules can buffer contact input data as needed to determine changes in contact on display surface 125 over time. For example, sensor interface 122 or the other modules can determine a change in position of a contact with the display surface 125 over time.
  • raw sensor signal data from sensors 121 can change as new contacts are detected, existing contacts are moved while maintaining continuous and uninterrupted contact (e.g., continuous contact of a user's finger with the display surface 125 while the finger is moved across the display surface 125), and existing contacts are released (e.g., a finger causing contact is removed from contact with the display surface 125) on display surface 125.
  • sensor interface 122 can initiate buffering of raw sensor signal data (e.g., within a buffer in system memory of the mobile device 120). As contacts on display surface 125 change, sensor interface 122 can track the changes in raw sensor signal data and update locations and ordering of detected contacts within the buffer.
  • sensor interface 122 can determine that contact at a first point in time was detected at a first location and then the contact was subsequently moved to a second location.
  • the sensor interface 122 can determine that the contact between the first location and second location was a continuous contact with the display surface 125 (e.g., physical contact between both locations was not interrupted for example by a separation or lift-off).
  • the first location may be a first area of a virtual keyboard absent of input keys and the second location may be an area corresponding to an input key.
  • sensor interface 122 can convert the contents of the buffer to candidate input key data (e.g., input data representing the input key corresponding to the second location) for a pre-selection state.
  • Sensor interface 122 may then send the candidate input key data to other modules at mobile device 120.
  • the candidate input key data may be used by the other modules to identify or display (e.g., on mobile device 120 and/or display device 110) the candidate input key in a pre-selection state.
  • the sensor interface 122 can determine that the contact was moved, without separating the physical contact from the display surface 125, to a third location outside of the area corresponding to the input key.
  • the continuous contact and locations may be stored in a buffer.
  • sensor interface 122 can convert the contents of the buffer to input key data (e.g., input data representing the confirmation of the candidate input key data).
  • Sensor interface 122 may then send the input key data to other modules at mobile device 120. Other modules can buffer the input key data as needed to determine changes at a location over time.
  • the input key data may be used by the other modules to execute the selected input key for text entry.
  • Sensor interface 122 can also convert the contents of the buffer to cancel key data (e.g., input data representing cancellation of the candidate input key).
  • Sensor interface 122 may then send the cancel key data to other modules at mobile device 120.
  • the cancel key data may be used by the other modules to cancel the pre-selection state and cancel input of the candidate input key.
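As an illustration of the buffering behavior described above, the sketch below keeps a per-contact history of timestamped samples and treats any gap between samples larger than a small threshold as a lift-off. The class names, the threshold value, and the gap heuristic are assumptions for the example, not details taken from the patent.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class ContactSample:
    t: float                  # timestamp in seconds
    pos: Tuple[float, float]  # (x, y) on the display surface

@dataclass
class ContactTracker:
    """Buffers samples of one contact and reports whether the contact
    has been continuous (no lift-off) since it began."""
    max_gap_s: float = 0.05                     # assumed upper bound on the sensor reporting interval
    samples: List[ContactSample] = field(default_factory=list)
    interrupted: bool = False

    def add_sample(self, t: float, pos: Tuple[float, float]) -> None:
        if self.samples and (t - self.samples[-1].t) > self.max_gap_s:
            self.interrupted = True             # gap in reports: treat as a lift-off
        self.samples.append(ContactSample(t, pos))

    def current_position(self) -> Optional[Tuple[float, float]]:
        return self.samples[-1].pos if self.samples else None

    def is_continuous(self) -> bool:
        return bool(self.samples) and not self.interrupted
```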
  • Virtual keyboard module 124 is configured to maintain a virtual keyboard within mobile device 120.
  • the virtual keyboard module 124 may generate data defining the virtual keyboard (e.g., virtual keyboard data).
  • the virtual keyboard module 124 executes software to create the virtual keyboard.
  • the virtual keyboard module 124 may define areas of the display surface 125 for input keys (e.g., alpha-numeric character input keys and/or command input keys, such as, but not limited to, enter command, backspace command, space command, etc.) of the virtual keyboard (e.g., each area assigned an input key).
  • the virtual keyboard module 124 may be stored in a memory (e.g., random access memory (RAM), cache and/or other dynamic storage devices).
  • Virtual keyboard module 124 is configured to present a graphical representation of the virtual keyboard on the display device 110.
  • virtual keyboard module 124 may generate image data of the virtual keyboard (e.g., virtual keyboard image data) for rendering a visualization of the virtual keyboard on the display device 110.
  • the virtual keyboard module 124 communicates the virtual keyboard image data to the display device 110 via the wired or wireless connection.
  • the display device 110 may receive the virtual keyboard image data and generate the graphical representation of the virtual keyboard, which is visually displayed to a user of the display device 110 via display screen(s) 111.
  • the virtual keyboard module 124 may be configured to generate a graphical representation of the virtual keyboard on the display surface 125 of the mobile device 120.
  • the virtual keyboard module 124 may provide the virtual keyboard image data to the graphics engine 128 which converts the virtual keyboard data into a visualization of the virtual keyboard on the display surface 125.
  • the virtual keyboard module 124 need not display the virtual keyboard on the display surface 125. In such cases, the virtual keyboard module 124 assigns the sub-areas of the display surface 125 as set forth above and uses user interaction with each sub-area to input text.
  • the display surface 125 may display a solid color or any desired image while the user interacts with the display surface 125 to select input keys from the virtual keyboard.
  • The location of the virtual keyboard within the display surface 125 may be set in advance, for example, with a pre-defined layout, orientation, and location within the display surface. In another embodiment, the location of the virtual keyboard within the display surface 125 may be determined based on an initial position of contact with the display surface 125 by the input device. For example, upon initiating the virtual keyboard, the virtual keyboard module 124 may receive contact input data from the sensor interface 122 for a first contact position on display surface 125. The virtual keyboard module 124 may determine the first contact position as a center position of the virtual keyboard and generate the virtual keyboard around that center position.
  • Virtual keyboard module 124 can generate the virtual keyboard in response to selecting an application data field within an application 126.
  • An application 126 can present and maintain an application user interface on display surface 125 and/or display device 110. The user may select an application data field for purposes of entering text.
  • In response, virtual keyboard module 124 can generate virtual keyboard data, maintain the virtual keyboard in the mobile device 120, and communicate the virtual keyboard data to the display device 110 for presentation to the user.
  • the virtual keyboard may be configured based on a QWERTY keyboard, modified for the configuration of the virtual keyboard (referred to herein as an AQWERT layout).
  • virtual keyboard can be configured based on any type of keyboard layout.
  • the virtual keyboard can include function keys, application keys, cursor control keys, enter keys, numeric keypad, operating system keys, etc.
  • The virtual keyboard can also be configured to present characters of essentially any alphabet, such as, for example, English, French, German, Italian, Spanish, Chinese, Japanese, etc.
  • Virtual keyboard module 124 can receive input contact data (e.g., data representing contact with a virtual key) from sensor interface 122. From the input contact data, virtual keyboard module 124 can generate the appropriate character code or command input for a character from essentially any character or command set, such as, for example, Unicode, ASCII, EBCDIC, ISO-8859 character sets, ANSI, Microsoft® Windows® character sets, Shift JIS, EUC-KR, etc. The virtual keyboard module 124 can send the character and/or command code to application 126. Application 126 can receive the character and/or command code and present the corresponding character in the application data field and/or execute the command (e.g., enter command, backspace, delete, etc.).
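Purely as an illustration of this dispatch step, the following sketch maps a confirmed key identifier to a character or command and applies it to the text of an application data field. The key identifiers, the command tokens, and the mapping itself are hypothetical.

```python
# Hypothetical mapping from confirmed virtual key areas to characters or commands.
KEY_MAP = {
    "L0": "q", "L1": "w", "L2": "e",                          # sectors of the left keyboard
    "R0": "o", "R1": "p",                                     # sectors of the right keyboard
    "R_SPACE": "<space>", "R_BACK": "<backspace>", "R_ENTER": "<enter>",
}

def apply_key(text: str, key_id: str) -> str:
    """Apply one confirmed key to the current text of an application data field."""
    value = KEY_MAP[key_id]
    if value == "<space>":
        return text + " "
    if value == "<backspace>":
        return text[:-1]
    if value == "<enter>":
        return text + "\n"
    return text + value

text = ""
for key in ["L0", "R1", "R_SPACE", "R_BACK"]:   # type 'q', 'p', space, then backspace
    text = apply_key(text, key)
print(repr(text))   # 'qp'
```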
  • The virtual keyboard module 124 can send a character and/or command code to a replication window in the application user interface and/or in the display device 110 (e.g., as shown in FIG. 8), which can receive the character code and present the corresponding character and/or execute the command. Accordingly, a user can be given a visual indication of the character that was sent to application 126, without having to alter their field of view to look at display surface 125.
  • Virtual keyboard module 124 can also buffer one or more character and/or command codes as candidate inputs until an indication is received that confirms input selection or cancels the selection. The indication may confirm that the user intended to select the candidate character and/or command input key.
  • the indication may indicate that the user did not intend the selected input key.
  • the indication can result from satisfying a logical condition, for example, the selection mechanism as disclosed herein and described in connection with FIGS. 2-8.
  • Virtual keyboard module 124 can present a sequence of buffered character codes for verification (e.g., displayed in a replication window). Then, in response to an indication confirming selection, virtual keyboard module 124 can send the buffered sequence of character codes to application 126.
  • application 126 can apply application data field logic (e.g., data typing rules, validation rules, etc.) to input text. For example, when receiving character codes for text, application 126 can input codes for letters (e.g., the letter "a") and numbers (e.g., a "2"). Accordingly, a user is given a visual indication on display device 110 of the character actually displayed at application 126, without having to alter their field of view to look at the mobile device 120.
  • FIGS. 2-8 depict various examples of virtual keyboards that may be generated by the virtual keyboard module 124.
  • FIGS. 2-8 may illustrate the graphical representation of the virtual keyboard as generated at the display device 110 and/or the graphical representation of the virtual keyboard on the mobile device 120.
  • the display device 110 includes a frame 112 supporting one or more display screen(s) 111.
  • the frame 112 also houses a local processing and data module 114, such as one or more hardware processors and memory (e.g., non-volatile memory, such as flash memory), both of which may be utilized to assist in the processing, buffering, caching, and storage of data and generating of content on display screen(s) 111.
  • the local processing and data module 114 may be operatively coupled to the mobile device 120 via communications link 130.
  • the components of the display device 110, including the local processing and data module 114 and display screen(s) 111 interoperate to implement various embodiments for displaying the virtual keyboard to a user based on data from the virtual keyboard module 124.
  • the display device 110 may include one or more display screen(s) 111, and various mechanical and electronic components and systems to support the functioning of display screen(s) 111.
  • the display screen(s) 111 may be coupled to a frame 112, which may be wearable by a user (not shown) and which is configured to position the display screen(s) 111 in front of the eyes of the user.
  • The display screen(s) 111 may be one or more of an Organic Light-Emitting Diode (OLED) display, a Liquid Crystal Display (LCD), a laser display, and so on. While two display screens are shown in FIG. 1, other configurations are possible; for example, one, two, three, or more display screens may be used.
  • the display device may be a device capable of providing an XR experience.
  • The display device 110 may be a mixed reality display (MRD) and/or a virtual reality display (VRD), for example, an MR device (e.g., Microsoft Hololens 1 and 2, Magic Leap One, Nreal Light, Oppo Air Glass, etc.) or VR glasses (e.g., HTC VIVE, Oculus Rift, SAMSUNG HMD Odyssey, etc.).
  • the display device 110 may be a head mounted display (HMD) worn on the head of the wearer.
  • the local processing and data module 114 includes at least a rendering engine 116.
  • The rendering engine 116 receives virtual keyboard image data from the virtual keyboard module 124 and converts the virtual keyboard image data into graphical data. The graphical data is then output to the display screen(s) 111 for generating a representation of the virtual keyboard displayed on the display screen(s) 111.
  • the virtual keyboard module 124 may be included in the local processing and data module 114. In this case, the functions of the virtual keyboard module 124 may be executed on the display device instead of the mobile device 120.
  • the sensor interface 122 may communicate contact input data to the virtual keyboard module on the display device 110 via the wired and/or wireless connection. The virtual keyboard module may then operate as described above and output virtual keyboard data to the rendering engine 116.
  • The display device 110 may also include one or more outward-facing imaging systems 113 configured to observe the surroundings in the environment (e.g., a 3D space) around the wearer.
  • the display device 110 may comprise one or more outward-facing imaging systems disposed on the frame 112.
  • an outward-facing imaging system can be disposed at approximately a central portion of the frame 112 between the eyes of the user, as shown in FIG. 1.
  • the outward-facing imaging system can be disposed on one or more sides of the frame 112 adjacent to one or both eyes of the user.
  • the outward facing imaging system 113 may be positioned in any orientation or position relative to the display device 110.
  • the outward-facing imaging system 113 captures an image of a portion of the world in front of the display device 110.
  • the entire region available for viewing or imaging by a viewer may be referred to as the field of regard (FOR).
  • the FOR may include substantially all of the solid angle around the display device 110 because the display may be moved about the environment to image objects surrounding the display (in front, in back, above, below, or on the sides of the wearer).
  • the portion of the FOR in front of the display system may be referred to as the field of view (FOV) and the outward-facing imaging system 113 may be used to capture images of the FOV.
  • Images obtained from the outward-facing imaging system 113 can be used as part of XR applications (e.g., as images onto which virtual objects are superimposed).
  • The virtual keyboard may be superimposed over the images obtained from the outward-facing imaging system 113. In this way, the user may view the virtual keyboard without having to alter their field of view to look at display surface 125 of the mobile device 120.
  • the outward-facing imaging system 113 may be configured as a digital camera comprising an optical lens system and an image sensor.
  • the outward-facing imaging system 113 may be configured to operate in the infrared (IR) spectrum, visible light spectrum, or in any other suitable wavelength range or range of wavelengths of electromagnetic radiation.
  • the imaging sensor may be configured as either a CMOS (complementary metal-oxide semiconductor) or CCD (charged-coupled device) sensor.
  • the image sensor may be configured to detect light in the IR spectrum, visible light spectrum, or in any other suitable wavelength range or range of wavelengths of electromagnetic radiation.
  • the display device 110 and/or mobile device 120 may also include microphones, speakers, actuators, inertial measurement units (IMUs), accelerometers, compasses, global positioning system (GPS) units, radio devices, and/or gyroscopes.
  • the data at the local processing and data module 114 may include data a) captured from sensors on the display device 110 (which may be, e.g., operatively coupled to the frame 112), such as image capture devices (e.g., outward-facing imaging system 113), microphones, IMUs, accelerometers, compasses, GPS units, radio devices, and/or gyroscopes; and/or b) acquired from sensors at the mobile device 120 (e.g., image capture devices, microphones, IMUs, accelerometers, compasses, GPS units, radio devices, and/or gyroscopes) and/or processed by the mobile device 120.
  • FIG. 2 illustrates an example virtual keyboard layout 200 according to various embodiments of the present disclosure.
  • The layout 200 comprises one or more keyboard areas, each comprising a plurality of sub-areas generated within a display area.
  • the sub-areas comprise at least one operation area and a plurality of virtual key areas adjacent to the operation area.
  • a validation boundary is provided at the interface between the operation area and each of the virtual key areas.
  • the layout 200 may be generated and maintained by the virtual keyboard module 124 of FIG. 1.
  • the layout 200 includes a first keyboard area 210 and a second keyboard area 220 spaced apart from each other within the display area 201.
  • Keyboard area 210 comprises a first plurality of virtual key areas 216a-216n (collectively referred to herein as first virtual key areas 216) that are adjacent to a first operation area 212.
  • a first validation boundary 218 is provided at the interface between the first operation area 212 and each of the first virtual key areas 216.
  • Each first virtual key area 216 shares a section or portion of the first validation boundary 218 with the first operation area 212.
  • Second keyboard area 220 comprises a second plurality of virtual key areas 226a-226n (collectively referred to herein as second virtual key areas 226) that are adjacent to a second operation area 222.
  • a second validation boundary 228 is provided at the interface between the second operation area 222 and each of the second virtual key areas 226.
  • Each second virtual key area 226 shares a section or portion of the second validation boundary 228 with the second operation area 222.
  • the validation boundaries 218 and 228 may be displayed with a higher line weight, different color, and/or different line type than the other portions of the layout 200. In this way, the user may be notified of which boundary is assigned as the validation boundary.
  • the first and second keyboard areas 210 and 220 comprise a circular shape 214 and 224, respectively.
  • the circular shape 214 comprises an annulus, in which the first virtual key areas 216 are distributed.
  • the first virtual key areas 216 surround the first operation area 212 (sometimes referred to herein as a first central area).
  • the circular shape 224 also comprises an annulus, in which the second virtual key areas 226 are distributed, surrounding the second operation area 222 (sometimes referred to herein as a second central area).
  • the first virtual key areas 216 divide the annulus of the first keyboard area 210 into sub-areas of approximately equal surface area and the second virtual key areas 226 divide the annulus of the second keyboard area 220 into sub-areas of approximately equal surface area.
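One way to realize this annular layout is to classify each contact point by its distance from the keyboard center (operation area, key annulus, or outside) and by its angle (which equal-sized sector of the annulus it falls in). The sketch below is illustrative only; the radii, sector count, and function names are assumptions, not values from the patent.

```python
import math
from typing import Optional, Tuple

def classify_point(
    p: Tuple[float, float],
    center: Tuple[float, float],
    inner_radius: float,   # boundary between operation area and key annulus (validation boundary)
    outer_radius: float,   # outer edge of the key annulus
    num_keys: int,
) -> Tuple[str, Optional[int]]:
    """Return ('operation', None), ('key', sector_index), or ('outside', None)."""
    dx, dy = p[0] - center[0], p[1] - center[1]
    r = math.hypot(dx, dy)
    if r <= inner_radius:
        return ("operation", None)
    if r > outer_radius:
        return ("outside", None)
    angle = math.atan2(dy, dx) % (2 * math.pi)            # 0..2*pi, counterclockwise from +x
    sector = int(angle / (2 * math.pi / num_keys)) % num_keys
    return ("key", sector)

# Example: a point just past the inner radius at 45 degrees falls in a key sector.
print(classify_point((1.1, 1.1), (0.0, 0.0), 1.0, 3.0, 13))   # ('key', 1)
```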
  • Although FIG. 2 illustrates circular shapes, the disclosure herein is not intended to be limited thereto.
  • The keyboard areas may comprise any other shape as desired, such as, but not limited to, square shapes, rectangular shapes, triangular shapes, oval shapes, etc.
  • the first and second keyboard areas need not have the same shape and may be differently shaped as desired.
  • The first and second keyboard areas 210 and 220 comprise twenty-six character key areas and multiple command keys (e.g., space, backspace, and return) mapped onto the first and second virtual key areas 216 and 226.
  • Locations for each character key in the respective keyboard may be determined based on: 1) the character's position relative to other characters in the traditional QWERTY layout, and 2) the character's left- and right-hand zoning in the traditional QWERTY layout. For example, as shown in FIG. 2, the character 'Q' is adjacent to the character 'W' and on top of the character 'A' in both the traditional QWERTY layout and the example layout.
  • Similarly, the character 'F' (a left-hand character in QWERTY) is located on the first keyboard area 210 (e.g., the left keyboard), while right-hand characters are located on the second keyboard area 220 (e.g., the right keyboard).
  • The virtual keyboard module 124 may assign a character or command input to a virtual key area 216 or 226 based on its proximity to other characters and/or commands in the traditional QWERTY layout and its placement thereon.
  • In this way, embodiments herein allow users to rely on familiarity with the traditional QWERTY layout so as to optimize usage of the virtual keyboard and reduce the learning curve of new text input methods. An illustrative derivation of such an assignment is sketched below.
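As a purely illustrative derivation (referenced above), the sketch below splits the standard QWERTY rows by an assumed left/right hand zoning and lays each hand's characters around its keyboard in row order; the split points and the resulting sector ordering are assumptions, and the actual assignment in FIG. 2 may differ.

```python
# Standard QWERTY rows, split by an assumed left/right hand zoning.
QWERTY_ROWS = ["qwertyuiop", "asdfghjkl", "zxcvbnm"]
LEFT_OF_ROW = [5, 5, 5]     # assumed number of left-hand characters per row

def split_by_hand():
    left, right = [], []
    for row, n_left in zip(QWERTY_ROWS, LEFT_OF_ROW):
        left.extend(row[:n_left])
        right.extend(row[n_left:])
    return left, right

left_chars, right_chars = split_by_hand()
# Assign characters to annulus sectors in order, preserving QWERTY row adjacency.
left_keyboard = {sector: ch for sector, ch in enumerate(left_chars)}
right_keyboard = {sector: ch for sector, ch in enumerate(right_chars)}
print(left_keyboard[0], right_keyboard[0])   # q y
```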
  • implementations herein are not limited to only the key placement as shown in FIG. 2.
  • Character and command input positions relative to each other may be adjusted and customizable as desired by a user.
  • the character and command inputs may be assigned to any virtual key area 216 or 226 as desired.
  • The shapes and sizes of the virtual key areas may also be adjustable. For example, fewer or more than 26 virtual key areas may be used and assigned as desired.
  • multiple abstraction layers or levels may be utilized to access additional virtual key areas, as described below in connection with FIG. 7 and/or concentric shapes may be used as described in connection with FIG. 6.
  • the embodiments herein may be used for any language as desired.
  • the layout 200 may be generated and maintained by the virtual keyboard module 124 of FIG. 1.
  • virtual keyboard module 124 may generate image data of the layout 200 for rendering the virtual keyboard on the display device 110.
  • display area 201 may represent edges of an FOV displayed or viewed on the display screen(s) 111. The user may then visually perceive the virtual keyboard while using the display device 110 and need not alter their gaze from the intended FOV.
  • the first and second keyboard areas 210 and 220 may be superimposed over a background 205, which may be the image or real-world environment contained within the FOV of the viewer as viewed through the display device 110.
  • The background 205 may be an image of the FOV of the surroundings as seen through the display device 110 and/or as captured by the one or more outward-facing imaging systems 113.
  • the background 205 may be a generated and rendered image over which the layout 200 is superimposed.
  • the virtual keyboard module 124 may maintain the layout 200 at the mobile device 120, which may be used for character and command input selection to facilitate text input while viewing the FOV of the display device 110.
  • the display surface 125 may be converted to an operation interface and areas of the display surface 125 assigned to sub-areas of the layout 200.
  • The display area 201 may correspond to the display surface 125, and sub-areas of the display surface 125 may be assigned to correspond to the first operation area 212, with surrounding areas assigned to the virtual key areas 216 and the validation boundary 218 therebetween.
  • a distinct area of the display surface 125 may be assigned as the second keyboard area 220 in a similar manner.
  • the relative position of areas assigned to each respective keyboard may be pre-determined and based on screen size of the display surface 125 and/or the user's hand size.
  • the position of areas assigned to the first and second keyboard areas 210 and 220 on the display surface 125 may be based on a first contact position by the user (e.g., the initial point of contact by the user's finger, thumb, or other input device).
  • a first contact position may be registered by the virtual keyboard module 124 as a center of either the first or second keyboard and the virtual keyboard generated based on the registered position.
  • the display surface 125 may be divided into a left half and a right half. A first contact position detected on the left may result in registering a center position of the first keyboard area 210, while a first contact position detected on the right may result in registering a center position for the second keyboard area 220.
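The left/right registration just described can be sketched as follows: the first contact detected on each half of the display surface fixes the center of the corresponding keyboard area. The screen width, coordinate convention, and data structure below are assumptions for illustration.

```python
from typing import Dict, Optional, Tuple

def register_first_contact(
    contact: Tuple[float, float],
    screen_width: float,
    centers: Dict[str, Optional[Tuple[float, float]]],
) -> Dict[str, Optional[Tuple[float, float]]]:
    """Register the first contact on each half of the screen as that half's keyboard center."""
    side = "left" if contact[0] < screen_width / 2 else "right"
    if centers.get(side) is None:
        centers[side] = contact        # keyboard area is generated around this point
    return centers

centers = {"left": None, "right": None}
register_first_contact((300.0, 900.0), 1080.0, centers)    # left thumb touches down
register_first_contact((820.0, 880.0), 1080.0, centers)    # right thumb touches down
print(centers)   # {'left': (300.0, 900.0), 'right': (820.0, 880.0)}
```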
  • the virtual keyboard module 124 outputs the virtual keyboard data to the graphics engine 128.
  • the graphics engine 128 may render the layout 200 onto the display surface 125 over background 205.
  • the background 205 may be a solid color background or other background as desired.
  • the actual layout 200 does not need to be displayed on the display screen.
  • First and second keyboard areas 210 and 220 also include graphical icons 215 and 225, respectively, displayed thereon.
  • the graphical icons 215 and 225 are graphical representations of a position of user input on the virtual keyboard.
  • Each graphical icon 215 and 225 indicates a position of a user input relative to keyboard area 210 and 220, respectively.
  • graphical icon 215 represents a position of a user input detected to contact an area corresponding to the first keyboard area 210
  • the graphic icon 225 represents a position of a user input on an area corresponding to the second keyboard area 220.
  • The graphical icons 215 and 225 are circular icons having a solid color (e.g., black, blue, yellow, green, etc.).
  • graphical icon 215 and 225 may be any shape and/or fill pattern as desired.
  • sensors 121 of the display surface 125 may detect a physical contact with the display surface by a physical object.
  • the contact is provided to sensor interface 122, which provides contact input data including location on the display surface 125 to the virtual keyboard module 124.
  • the virtual keyboard module 124 generates virtual keyboard data including the contact location data (e.g., a location on the display surface 125), which is used to render the graphical icon.
  • the location may be provided as coordinates on a coordinate system based on the display surface 125.
  • the graphical icon is then displayed in the display device 110 (and optionally on the mobile device 120) at a location relative to the virtual keyboard based on the location on the display surface 125. As long as physical and direct contact is maintained on the display surface 125, a graphical icon corresponding to the contact is displayed.
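Translating a contact position into an icon position for the display device can be as simple as normalizing the contact's offset from the keyboard center, as in the sketch below; the normalization to [-1, 1] is an assumed convention rather than a detail from the patent.

```python
from typing import Tuple

def contact_to_icon_coords(
    contact: Tuple[float, float],
    keyboard_center: Tuple[float, float],
    keyboard_radius: float,
) -> Tuple[float, float]:
    """Map a contact position on the display surface to keyboard-relative
    coordinates in [-1, 1], suitable for placing the icon on the rendered layout."""
    return (
        (contact[0] - keyboard_center[0]) / keyboard_radius,
        (contact[1] - keyboard_center[1]) / keyboard_radius,
    )

# A contact slightly to the right of and above the keyboard center:
print(contact_to_icon_coords((350.0, 860.0), (300.0, 900.0), 200.0))   # (0.25, -0.2)
```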
  • graphical icon 215 may be a result of a user's thumb, finger, or input device contacting the left half of the display surface 125.
  • graphical icon 225 may be a result of a user's thumb, finger, or input device contacting the right half of the display surface 125.
  • the user may utilize the mobile device 120 using both hands and select character and command input keys using both hands simultaneously, while being able to perceive the location of each contact relative to the sub- areas of each keyboard area 210 and 220 via the display device 110.
  • the user can operate the mobile device 120 to input text without a need for the user to gaze onto the mobile device and avert their gaze from the intended FOV.
  • FIGS. 3 and 4 illustrate the example layout 200 of the virtual keyboard of FIG. 2, including a selection mechanism according to an embodiment of the present disclosure.
  • FIG. 3 illustrates an example mechanism to confirm input key selection
  • FIG. 4 illustrates an example mechanism to cancel an input key selection.
  • graphical icons 215 and 225 are generated.
  • the graphical icons 215 and 225 may be in an idle state.
  • Idle state may refer to a physical contact that does not move, or may refer to a physical contact that remains within the operation area of the respective keyboard area.
  • the graphic icon 215 does not change position, and may represent an idle state of the contact corresponding to icon 215.
  • the contact may be in an idle state in the case that the graphical icon 215 remains within the operation area 212, as shown in FIG. 3.
  • Also illustrated in FIG. 3 is the selection mechanism used to select a character or command input key from the layout 200.
  • To select an input (e.g., the character input key "O" in the illustrative example), the user moves the physical contact (referred to herein as the contact point) from position P1 to position P2 and then to position P3.
  • the graphical icon 225 illustrates an idle state or initial contact position by a physical contact with the display surface 125.
  • The user moves the contact point from position P1 to position P2 in virtual key area 226a.
  • This change in position includes crossing the second validation boundary 228, as shown by the arrow extending from P1 to P2 in a first direction.
  • When the virtual keyboard detects that the contact point has moved from the operation area 222 into a virtual key area 226, it triggers a pre-selection state and designates the character input key "O" as a candidate input key.
  • the pre-selection state and candidate input key designation may be triggered in response to detecting the contact point crossed the validation boundary 228.
  • Moving the contact point into the virtual key area 226a via another boundary other than the section of the validation boundary 228 shared by the operation area 222 and the virtual key area 226a may not designate the input key as a candidate input key and may not trigger the pre-selection state.
  • For example, the pre-selection state may not be triggered in a case where the virtual keyboard detects that the contact point was moved to position P2 from outside of the circular shape 224 (e.g., from the side of the annulus opposite the operation area 222).
  • Similarly, the pre-selection state may not be triggered in a case where the virtual keyboard detects that the contact point was moved to P2 from another virtual key area 226.
  • That is, pre-selection may not trigger when the contact point crosses a boundary shared with another virtual key area, such as the boundary between virtual key area 226a and a neighboring virtual key area.
  • the user moves the contact point from the virtual key area 226a back into the operation area 222 by crossing validation boundary 228.
  • When the virtual keyboard detects this movement, it confirms the candidate input key and enters the selected input key for text entry. For example, as shown in FIG. 3, the user moves the contact point from position P2 to position P3 in the operation area 222 via the validation boundary 228, as shown by the arrow extending between P2 and P3 in a second direction. Detecting this movement triggers the virtual keyboard to confirm the selection of input key "O" as input.
  • the confirmation may be triggered in response to detecting that the contact point crossed back over the validation boundary 228.
  • The idle state may then be triggered again, and a subsequent input key may be selected by moving the contact point into the same or another virtual key area.
  • If, instead, the contact point is moved out of the virtual key area 226a via a boundary other than the validation boundary 228, the candidate input is cancelled (e.g., the pre-selection is cancelled).
  • The virtual keyboard detects such a movement and cancels the pre-selection state for the designated candidate input key. In this case, no text or command input is entered, and the user may have to return the contact point to the operation area 222 before selecting an input key.
  • cancelling the pre-selection state may occur if the contact point is moved from position P2 to a neighboring virtual key area 226, as shown by arrows 405 and 415.
  • Although FIGS. 3 and 4 are described in connection with a specific virtual key area (e.g., virtual key area 226a), the foregoing selection mechanism applies equally to each virtual key area of both the first and second keyboard areas 210 and 220.
  • Although the graphical icon 215 in FIGS. 3 and 4 is not shown as moving, it will be appreciated that the same procedure for selection and cancellation of input keys described above in connection with keyboard area 220 applies to keyboard area 210 as well.
  • various embodiments of the selection mechanism may require continuous physical contact on the display surface between the idle state, pre-selection state, and confirmation of the selected input.
  • For example, continuous and uninterrupted physical contact between the user (e.g., finger, thumb, or other input device) and the display surface 125 must be maintained throughout the movement from position P1 to position P3 to confirm the pre-selection state. Detecting an interruption in the physical contact (e.g., a lift-off or other separation) with the display surface 125 may result in cancelling the pre-selection state.
  • The above-described selection mechanism may be referred to as an "In 'n out" operation or selection mechanism because input key selection is triggered in a case where movement is out of the operation area and back into the operation area.
  • That is, the confirmation is triggered only when the out-and-back-in movement of the contact point is detected.
  • The example selection mechanism may allow easy cancellation of input operations by crossing a boundary other than the validation boundary; may reduce user confusion by limiting contact point movement options; and may support additional layers of input to enable additional functions (e.g., as described below in connection with FIGS. 5-8).
  • embodiments herein are not limited to continuous physical contact and other methods may be used.
  • sequential contact points may be used instead of requiring continuous contact.
  • A contact at position P1 at a first point in time may initiate the idle state, and contact at position P2 at a second point in time may trigger the pre-selection state.
  • a contact at position P3 at a third point in time confirms the pre-selection state.
  • If the sequential third contact is outside the circular shape 224 or in another virtual key area, then the third contact cancels the pre-selection state.
  • the contact may be the result of a tap or touch and lift-off, and need not be continuous.
  • the sequential contact method may avoid cancelling operations due to inadvertent lift-offs from the display surface.
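  • A sketch of the sequential-contact variant, under the same hypothetical region names, might track the candidate across discrete taps instead of a continuous swipe; this is again only an assumed illustration.

      # Illustrative sequential-contact handling: three discrete taps replace the swipe.
      def process_taps(taps):
          """taps: ordered regions contacted, e.g. ["operation", "O", "operation"]."""
          events, candidate = [], None
          for region in taps:
              if candidate is None and region not in ("operation", "outside"):
                  candidate = region
                  events.append(("preselect", region))      # a tap lands in a virtual key area
              elif candidate is not None:
                  if region == "operation":
                      events.append(("confirm", candidate))  # next tap back in the operation area
                  else:
                      events.append(("cancel", candidate))   # next tap elsewhere cancels
                  candidate = None
          return events

      # process_taps(["operation", "O", "operation"]) -> [("preselect", "O"), ("confirm", "O")]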
  • one or more feedback mechanisms may be employed to notify the user of triggering, confirming, and/or cancelling a pre-selection state.
  • Example feedback mechanisms include, but are not limited to, haptic feedback (e.g., vibration generated by an actuator, force generated by an actuator, speaker-induced vibrations, etc.); audio feedback (e.g., sound notifications transmitted over a speaker to the user); and/or visual feedback (e.g., color changes, graphical changes, line weight changes, etc.).
  • Feedback mechanisms may be included in the mobile device 120 and/or the display device 110.
  • one or more actuators included in the mobile device 120 may generate haptic feedback received by the user's hand (e.g., finger, thumb, or input device).
  • a speaker on the mobile device 120 may generate an audio feedback that can be heard by the user.
  • the display device 110 may include actuators for generating feedback, speakers for generating audio feedback, etc.
  • feedback may be generated for one or more steps of the selection mechanism.
  • a first feedback may be generated upon the initial physical contact (e.g., position P1).
  • a second feedback may be generated upon crossing the validation boundary and triggering a pre-selection state (e.g., position P2).
  • a third feedback may be generated upon crossing back over the validation boundary (e.g., position P3).
  • a fourth feedback may be generated upon detecting a cancelling condition. In this way, the user may be notified of each operation.
  • the first, second, third, and fourth feedbacks may be the same type of feedback (e.g., all haptic) or different types (e.g., one or more tactile, one or more visual, and/or one or more audio).
  • embodiments herein are not limited to the four feedbacks described above.
  • feedback may be generated only in response to confirming the pre-selection state or cancelling the pre-selection state.
  • feedback may be generated in response to triggering the pre-selection state, confirming pre-selection state, and cancelling the pre-selection state.
  • each feedback may be one of a high-range, low-range, or mid-range frequency.
  • each feedback may have a high-range, low-range, or mid-range volume; a high-range, low-range, or mid-range frequency; or a combination thereof.
  • each feedback may be a change to a different color at the boundary and/or area; a different line weight (e.g., bolding a boundary or graphical representation of the character and/or command assigned to an area); change in font; or a combination thereof.
  • the parameters thereof (e.g., frequency, volume, color, etc.) may be varied to distinguish between the different operations.
  • moving from position P1 to position P2 triggers a haptic feedback having a mid-range frequency.
  • moving to position P3 (e.g., confirming the pre-selection state) generates a haptic feedback having a frequency higher than that for the P1 to P2 movement.
  • cancelling the pre-selection state generates a haptic feedback having a frequency lower than that for the P1 to P2 movement.
  • the user would expect to receive a higher frequency feedback if confirmation is performed. But if the user accidentally cancels the pre-selection state (or if the user chooses to cancel), the lower frequency feedback is generated.
  • This feedback notifies the user that the pre-selection state was cancelled instead of confirmed.
  • confirmation may be a lower frequency, while cancellation may be a higher frequency.
  • the movement from P1 to P2 may not generate a feedback if so desired.
  • Audio feedback may be used in a similar manner.
  • the use of one or more feedback mechanisms may further assist the user in monitoring input selections and learning the selection mechanism. In this way, the user may learn the layout 200 and over time may not need to monitor contact points via graphical icons 215 and/or 225. Similarly, the feedback mechanisms may assist with reducing a need to view the mobile device 120.
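  • Purely as an example of such a feedback policy (mid-range frequency on pre-selection, higher on confirmation, lower on cancellation), the mapping could be a simple lookup; the actuator.vibrate() call is an assumed device API, not one disclosed herein.

      # Hypothetical mapping of selection events to haptic feedback frequencies (Hz).
      FEEDBACK_FREQUENCY = {
          "preselect": 150,   # mid-range: crossed the validation boundary (P1 -> P2)
          "confirm":   250,   # higher:    crossed back over the validation boundary (P2 -> P3)
          "cancel":     80,   # lower:     crossed another boundary or lifted off
      }

      def emit_feedback(event, actuator):
          freq = FEEDBACK_FREQUENCY.get(event)
          if freq is not None:
              actuator.vibrate(freq)   # assumed actuator interface on the mobile device 120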
  • FIG. 5 illustrates another example virtual keyboard layout 500 according to various embodiments.
  • the layout 500 is substantially similar to layout 200, such that like reference numbers in FIG. 5 represent the same elements from FIG. 2.
  • the selection mechanism described in connection with FIGS. 3 and 4 may be applied equally to layout 500.
  • layout 500 includes an example abstraction layer. More particularly, layout 500 includes a predictive layer 510 spaced apart from the first and second keyboard areas 210 and 220.
  • the layer 510 is generated and/or displayed on an upper long edge of the display area 201.
  • the embodiments herein are not limited to this illustrative example, and the layer 510 may be generated and/or displayed anywhere within the display area 201 as long as it is separated from the first and second keyboard areas 210 and 220 by a space (or gap).
  • the layer 510 may be disposed along a lower long edge, left or right short edge, diagonally, etc.
  • while the layer 510 is shown having a rectangular shape, other shapes are possible (e.g., ovular, curved or arced, etc.).
  • the predictive layer 510 comprises a plurality of sub-areas that are assigned functions or commands similar to the keyboard areas 210 and 220.
  • a suggestion window 511 is assigned to a first sub-area of the predictive layer 510
  • a page forward input 515 is assigned to a second sub-area
  • page backward input 513 is assigned to a third sub-area.
  • Suggestion window 511 may be configured to display suggestions of predictive text. For example, as shown in FIG. 5, suggestion window 511 displays suggestions 512a-512n, separated by separation boundaries 517a-517n, that are based on confirmed and/or candidate input keys. For example, the user may have confirmed "jus" and the predictive text generated suggestions 512a-512n.
  • the user may have confirmed "ju" and "s" is designated as a candidate text.
  • the page forward input 515 may be configured to page forward through a plurality of suggestions, for example, in a case where the suggestions require more space than is offered by the window 511.
  • the page backward input 513 may be configured to page back through the plurality of suggestions.
  • the virtual keyboard module 124 of FIG. 1 may be configured to receive candidate inputs and confirmed selections of text and apply predictive text techniques known in the art to the confirmed selection to generate suggestions 512.
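  • As a hedged sketch of how confirmed text and a candidate key might be combined into suggestions 512, assuming nothing more than a stand-in word list (any real predictive-text technique could be substituted):

      # Illustrative suggestion generation; WORD_LIST is a hypothetical stand-in lexicon.
      WORD_LIST = ["just", "justice", "justify", "justified", "jury", "jump"]

      def suggestions(confirmed, candidate="", page=0, page_size=4):
          prefix = (confirmed + candidate).lower()      # e.g., "ju" + "s" -> "jus"
          matches = [w for w in WORD_LIST if w.startswith(prefix)]
          start = page * page_size                      # page inputs 515/513 change "page"
          return matches[start:start + page_size]

      # suggestions("ju", "s") -> ["just", "justice", "justify", "justified"]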
  • selecting one of the suggestions 512, page forward input 515, and/or page backward input 513 may be performed using the selection method as described above in connection with FIGS. 3 and 4.
  • the predictive layer 510 also includes a validation boundary 516.
  • the space or gap between the validation boundary 516 and the keyboard areas 210 and/or 220 defines an operation area, similar to the first and second operation areas described above.
  • the validation boundary 516 may be displayed with a heavier line weight, different color, and/or different line type than the other portions of the layout 500. In this way, the user may identify which boundary is assigned as the validation boundary.
  • Detection of a contact point (e.g., corresponding to either graphical icon 215 or 225) moving across the validation boundary 516 to a desired suggestion 512 triggers a pre-selection state for the selected candidate suggestion. Then, detecting movement across the validation boundary 516 back into the space (or gap) between the predictive layer 510 and the keyboard areas 210 and 220 confirms the selection and inputs the selected text. In contrast, moving the contact point over another boundary of the layer 510 (e.g., the upper edge, the boundary between the window 511 and one of the neighboring icons 515 or 513, or one of the separation boundaries 517) cancels the pre-selection state, as described above. Selecting either the page forward input 515 or the page backward input 513 is performed in a similar manner based on crossing the validation boundary 516.
  • the validation boundary 516 may be assigned as the edge of the window 511 that is closest to the first and second keyboard areas 210 and 220.
  • the validation boundary 516 is the lower edge of the predictive layer 510.
  • Assigning the closest edge of the layer 510 as the validation boundary may be useful to enable the use of various abstraction layers for accessing different functionality, such as that of the predictive layer 510 and/or other input keys.
  • the first and second keyboard areas 210 and 220 may be set as a first layer and the predictive layer 510 as a second layer. When a user swipes through the first layer (e.g., moves the contact point outside of the first layer), the user is then able to perform the selection mechanism on the second layer (e.g., in this case, the layer 510).
  • the user may move a contact point to the desired suggestion to trigger the pre-selection state, as described above. Then, the user may move the other contact point to the "enter" command input key (e.g., assigned to virtual key area 226n) to trigger the pre-selection state and then confirm the pre-selection state, as described above. For example, the graphical icon 215 for a contact point may be moved to the desired suggestion and the graphical icon 225 for the other contact point may be moved to the virtual area 226n.
  • the graphical icon 215 triggers the pre-selection state for the desired suggestion and then the icon 225 similarly triggers a pre-selection of the "enter" command input key. Finally, the user may move the graphical icon 225 back into the operation area 222 by crossing over the validation boundary 228 to confirm the pre-selection state and select the "enter" command, which then enters the desired suggestion.
  • while the predictive layer 510 is described above as suggesting predictive text shown in English, the embodiments herein are not so limited. For example, other languages may be utilized (e.g., Chinese, Japanese, Spanish, etc.). Furthermore, the suggestions may also include one or more symbols and/or emojis.
  • FIG. 6 illustrates another example virtual keyboard layout 600 including a plurality of concentric keyboard areas according to various embodiments of the present disclosure.
  • the layout 600 is similar to layout 200, such that like reference numbers in FIG. 6 represent the same elements from FIG. 2. Furthermore, the selection mechanism described in connection with FIGS. 3-5 may be applied equally to layout 600.
  • the layout 600 includes a first keyboard area 610 and a second keyboard area 620 within the display area 601, having background 605.
  • Display area 601 and background 605 may be substantially similar to display area 201 and background 205 of FIG. 2, respectively.
  • the first and second keyboard areas 610 and 620 are concentric.
  • Keyboard area 610 comprises a first plurality of virtual key areas 616a-616n (collectively referred to herein as first virtual key areas 616) that are adjacent to a first operation area 612 within shape 614.
  • a first validation boundary 618 is provided at the interface between the first operation area 612 and each of the first virtual key areas 616.
  • the virtual key areas 616, first operation area 612, and the first validation boundary 618 may be substantially similar to the virtual key areas 216, first operation area 212, and first validation boundary 218 of FIG. 2, respectively.
  • Second keyboard area 620 comprises a second plurality of virtual key areas 626a-626n (collectively referred to herein as second virtual key areas 626) that are adjacent to a second operation area 622 within shape 624.
  • a second validation boundary 628 is provided at the interface between the second operation area 622 and each of the second virtual key areas 626.
  • the second virtual key areas 626, second operation area 622, and the second validation boundary 628 may be substantially similar to the second virtual key areas 226, second operation area 222, and second validation boundary 228 of FIG. 2, respectively.
  • the second keyboard area 620 is surrounded by the first operation area 612.
  • the first operation area 612 is defined between the first validation boundary 618 and the outer edge of shape 624.
  • the layout 600 is similar to the concept of abstraction layers described in connection with FIG. 5 above.
  • the second keyboard area 620 may be considered a first layer and the first keyboard area 610 may be considered a second layer.
  • the selection method described in connection with FIGS. 3-5 is applicable to layout 600. For example, moving contact point shown as graphical icon 615 from second operation area 622 to one of virtual key areas 626 triggers a pre-selection state.
  • moving the contact point back into the second operation area 622 confirms the pre-selection state.
  • the user moves the contact point through the second keyboard area 620 to the first operation area 612 (e.g., idle state) and into one of the virtual key areas 616.
  • the user may then cross back over the validation boundary 618 into operation area 612. The user may then move the contact point back over the second keyboard area 620 into second operation area 622 if desired, or move to another virtual key area 616 to trigger another pre-selection state.
  • the user may wish to input the text "YES.”
  • the user physically contacts the display surface 125 inside the area corresponding to the second operation area 622.
  • the user may swipe the contact point to move the graphical icon 615 through the validation boundary 628 to trigger a pre-selection state for input key "Y".
  • the user then moves the contact point back across the validation boundary 628 into the operation area 622.
  • the user moves the contact point through the second keyboard area 620 into the operation area 612. Moving through the virtual key areas 626 does not confirm a pre-selection state, since the selection mechanism identifies the movement as a pre-selection followed by cancelling the pre-selection state.
  • the user may then move the contact point from the operation area 612 into the virtual key area 616 for the input key "E” to trigger pre-selection of the input key. Crossing back over the validation boundary 618 then confirms the pre-selection state and enters input key "E”. To select the input key "S”, the user may repeat the steps for the virtual key area assigned to the 'S' input key.
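  • The concentric arrangement of layout 600 lends itself to a radius-based hit test; the following sketch assumes circular rings with purely illustrative radii and resolves a key area by angle, which is an assumption rather than the disclosed geometry.

      import math

      # Hypothetical concentric hit test for layout 600 (radii are illustrative only).
      RINGS = [
          (0.15, "operation_622"),   # innermost circle: second operation area
          (0.30, "keys_626"),        # second keyboard area 620
          (0.45, "operation_612"),   # first operation area (ring between the keyboards)
          (0.60, "keys_616"),        # first keyboard area 610
      ]

      def region_at(x, y, center=(0.5, 0.5)):
          r = math.hypot(x - center[0], y - center[1])
          for radius, name in RINGS:
              if r <= radius:
                  return name
          return "outside"

      def key_index(x, y, num_keys, center=(0.5, 0.5)):
          # Which virtual key area of a ring the contact angle falls in.
          angle = math.atan2(y - center[1], x - center[0]) % (2 * math.pi)
          return int(angle / (2 * math.pi / num_keys))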
  • While concentric circles are shown in FIG. 6, it will be appreciated that other shapes are possible (e.g., concentric squares, rectangles, etc.).
  • While FIG. 6 depicts the second keyboard area 620 within the first keyboard area 610, other implementations are possible.
  • the first keyboard area 610 may be within the second keyboard area 620.
  • different combinations of input keys, as opposed to those shown in FIG. 6, may be assigned to each respective keyboard area.
  • first keyboard area 210 and/or second keyboard area 220 of FIG. 2 may include a concentric keyboard area. That is, for example, first keyboard area 210 may be implemented as layout 600, while the second keyboard area 220 may be provided adjacent to the two concentric keyboard areas. In this case, the second keyboard area 220 may also include a concentric keyboard area as well.
  • the input keys assigned to each virtual key area may be arranged as desired.
  • FIG. 7 illustrates an example virtual keyboard layout 700 including a plurality of abstraction layers according to various embodiments of the present disclosure.
  • the layout 700 is similar to layout 600, such that like reference numbers in FIG. 7 represent the same elements from FIG. 6. Furthermore, the selection mechanism described in connection with FIGS. 3-5 may be applied equally to layout 700.
  • FIG. 7 depicts layout 700 including a plurality of abstraction layers.
  • layout 700 includes four layers, a first layer corresponding to second keyboard area 620, a second layer corresponding to the first keyboard area 610, and two additional layers 720 and 710.
  • the layers 710 and 720 may be representative of additional layers that may operate in a similar manner, but include sub-areas for additional functionality.
  • the layers may function in a manner similar to the layers described in connection with FIGS. 5 and 6.
  • layer 710 is spaced apart from the first keyboard area 610 and comprises a third plurality of virtual key areas 716a-716n (collectively referred to as third virtual key areas 716).
  • Each of the third virtual key areas 716 of FIG. 7 is assigned to a character or command input key, for example, additional inputs not included in the first and second keyboard areas 610 and 620.
  • a third validation boundary 718 is provided between the first keyboard area 610 and each of the third virtual key areas 716. Accordingly, the space or gap between the first keyboard area 610 and the layer 710 defines a third operation area.
  • the selection mechanism as described herein may be performed by moving a contact point, as represented by graphical icon 615, from the third operation area to one of the third virtual key areas 716 by crossing the validation boundary 718 and then back into the third operation area via the validation boundary 718.
  • the virtual key areas 716 may be assigned any character or command input key.
  • the virtual key areas 716 are assigned any input key not already assigned to the keyboard areas 610 or 620.
  • additional input keys include, but are not limited to special characters (e.g., exclamation point, "at” symbol, pound sign, dollar sign, question mark, period, comma, etc.), additional input commands (e.g., scroll up, scroll down, home, end, delete command, print screen, caps lock, tab, etc.), numerical input keys, hot keys to execute pre-defined macros and one-point application initialization, etc.
  • one of virtual key areas 716 is assigned a page up command (e.g., virtual key area 716a) and another of virtual key areas 716 is assigned a page down command (e.g., virtual key area 716n). Selection of these input commands may permit the user to scroll through multiple pages of virtual input key areas assigned to different command and character input keys.
  • Layer 720 is an illustrative example of another predictive layer, similar to layer 510 of FIG. 5. However, unlike FIG. 5, the predictive layer of FIG. 7 is positioned along the left short edge of the display area 601. Similar to layer 510 of FIG. 5, layer 720 comprises a plurality of sub-areas 726a-726n, each assigned to a command input or predictive suggestion. For example, sub-area 726a is assigned to a page up input, sub-area 726n is assigned to a page down input, and the remaining sub-areas 726 are each assigned to a predictive text suggestion.
  • a fourth validation boundary 728 is provided between the layer 710 and each sub-area 726. Accordingly, the space or gap between layer 710 and layer 720 defines a fourth operation area. The selection mechanism as described herein may be performed by moving contact point, as represented by graphical icon 615, from the fourth operation area to one of the sub-areas 726 by crossing the validation boundary 728 and then back into the fourth operation area via the validation boundary 728.
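  • One way (assumed here only for illustration) to represent the abstraction layers of layout 700 is an ordered list of layers, each owning its validation boundary, with the gap below each layer serving as that layer's operation area; the names and key assignments below are hypothetical.

      from dataclasses import dataclass, field
      from typing import List

      @dataclass
      class Layer:
          name: str                    # e.g., "keyboard_620", "layer_710", "layer_720"
          keys: List[str]              # input keys or suggestions assigned to the layer
          validation_boundary: str     # boundary whose crossing confirms selections here

      @dataclass
      class LayeredLayout:
          layers: List[Layer] = field(default_factory=list)

          def layer_for_boundary(self, boundary):
              # Route a boundary-crossing event to the layer that owns that boundary.
              return next((l for l in self.layers if l.validation_boundary == boundary), None)

      # Illustrative four-layer arrangement loosely corresponding to FIG. 7.
      layout_700 = LayeredLayout([
          Layer("keyboard_620", list("ABCDEFG"), "boundary_628"),
          Layer("keyboard_610", list("HIJKLMN"), "boundary_618"),
          Layer("layer_710", ["page_up", "!", "@", "#", "page_down"], "boundary_718"),
          Layer("layer_720", ["page_up", "suggestion", "page_down"], "boundary_728"),
      ])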
  • embodiments herein may include any number of abstraction layers (e.g., 1, 2, 3, 4, 5, 6, etc.) as desired.
  • layers may be positioned anywhere within the display area (e.g., display area 201, 501, and/or 601).
  • layer 710 may be positioned as shown in FIG. 7 and a predictive layer positioned as shown in FIG. 5.
  • each layer may have any desired shape (e.g., rectangular, ovular, curved or arced, etc.).
  • FIG. 8 illustrates another example layout 800 of a virtual keyboard according to an embodiment of the present disclosure.
  • the layout 800 is similar to layout 200, except that a gap is provided between each virtual key area.
  • the layout 800 includes a first keyboard area 810 and a second keyboard area 820 spaced apart from each other within the display area 801.
  • First keyboard area 810 comprises a shape 814 (e.g., a circular shape in this example) and a first plurality of virtual key areas 816a-816n (collectively referred to herein as first virtual key areas 816) that are adjacent to a first operation area 812.
  • a space 819a-n is provided between each first virtual key area 816.
  • the first virtual key areas 816 and first operation area 812 may be substantially similar to the first virtual key areas 216 and first operation area 212, except that each virtual key area 816 comprises edges 813, 818, and 817 (e.g., virtual key area 816a comprises edges 813a, 818a, and 817a).
  • a first validation boundary is provided at the interface between the first operation area 812 and each of the first virtual key areas 816.
  • the first validation boundary is configured to facilitate the selection mechanism as described above in connection with FIGS. 3 and 4 (e.g., triggering pre-selection states based on crossing the validation boundary into a virtual key area 816, confirming the pre-selection by crossing back over the validation boundary, and/or cancelling the pre-selection state).
  • the validation boundary may be defined by the edges 818 (e.g., the interface between the operation area and each virtual key area 816) and edges 813 and 817 (e.g., the interface between each first virtual key area 816 and each space 819).
  • moving the contact point corresponding to graphical icon 815 from the space 819 into a virtual key area 816 (e.g., crossing either edge 813 or 817) and then back into the space 819 will trigger a pre-selection state and confirm the pre-selection state.
  • crossing edge 818 will function to trigger the pre-selection state and confirmation of the same, similar to the examples described in connection to FIGS. 3 and 4.
  • the operation area 812 may include both the central area and the spaces 819. As such, the operation area for this example may be star shaped.
  • the validation boundary may be defined by the edges 818 and an extension 811 of the edge 818.
  • extension 811a may be an example of extending the edge 818a toward neighboring edge 818b.
  • the extensions 811 and edges 818 form a continuous boundary that may be similar to the validation boundary 218 of FIG. 2.
  • moving the contact point corresponding to graphical icon 815 from the operation area 812 into a virtual key area 816 (e.g., crossing edge 818) and then back into the operation area 812 will trigger a pre-selection state and confirm the pre-selection state.
  • crossing the extension 811 into one of spaces 819 does not trigger pre-selection.
  • crossing into a space 819 from a virtual key area 816 may function to cancel a pre-selection state.
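  • The two boundary configurations described for layout 800 (spaces 819 treated as part of a star-shaped operation area, or closed off by extensions 811) could be distinguished by a single flag in a crossing classifier; the region names below are assumptions made only for this sketch.

      # Illustrative classification of a move between regions in layout 800.
      def classify_crossing(from_region, to_region, spaces_are_operation_area):
          operation = {"operation_812"}
          if spaces_are_operation_area:
              operation.add("space_819")            # first configuration: star-shaped operation area
          if from_region in operation and to_region.startswith("key_816"):
              return "preselect"                    # crossed edge 818 (or 813/817) into a key area
          if from_region.startswith("key_816") and to_region in operation:
              return "confirm"                      # crossed back over the validation boundary
          if from_region.startswith("key_816"):
              return "cancel"                       # crossed into a space 819 or other non-operation region
          return None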
  • second keyboard area 820 comprises a shape 824 (e.g., a circular shape in this example) and a second plurality of virtual key areas 826a-826n (collectively referred to herein as second virtual key areas 826) that are adjacent to a second operation area 822.
  • a space 829a-n is provided between each second virtual key area 826.
  • Each virtual key area 826 comprises edges 823, 828, and 827 (e.g., virtual key area 826a comprises edges 823a, 828a, and 827a).
  • a second validation boundary is provided at the interface between the second operation area 822 and each of the second virtual key areas 826. The second validation boundary is configured to facilitate the selection mechanism as described above in connection with FIGS. 3 and 4.
  • the validation boundary may be defined by the edges 828 and edges 823 and 827.
  • the operation area 822 may include both the central area and the spaces 829.
  • the validation boundary may be defined by the edges 828 and extensions 821 of the edge 828, forming a continuous boundary that may be similar to the validation boundary 228 of FIG. 2.
  • While spaces 819 and 829 are shown as triangular in FIG. 8, other shapes are possible.
  • the spaces 819 and 829 may be rectangular, semi-ovular, or any polygon shape as desired.
  • Predictive layer 840 may include the suggestion window, page forward input, and page backward input, each of which may be selected according to the selection mechanism as described in connection with FIG. 5.
  • FIG. 8 also includes a replication layer 830 configured to display one or more candidate input characters and/or confirmed input characters. For example, upon triggering a pre-selection state for a given input key, the corresponding character may be displayed in the replication layer 830. For example, based on confirming pre-selection states for "J" and "u", the characters "J" and "u" are displayed. Then, based on triggering the pre-selection state for character "s", a character "s" may be displayed with a cursor (e.g., insertion point) adjacent thereto. Each confirmed selection may be displayed with current candidate inputs so that the user may view an entire text input (e.g., full word and/or sentence).
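  • A replication layer such as 830 might, in a minimal and purely assumed sketch, concatenate confirmed characters with the current candidate and an insertion-point marker:

      # Illustrative rendering of the replication layer 830.
      def replication_text(confirmed, candidate=None, cursor="|"):
          # e.g., confirmed "Ju" plus candidate "s" is shown as "Jus|".
          return confirmed + (candidate or "") + cursor

      # replication_text("Ju", "s") -> "Jus|"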
  • FIG. 9 is an example computing component that may be used to implement various features of embodiments described in the present disclosure.
  • FIG. 9 depicts a block diagram of an example computer system 900 in which various of the embodiments described herein may be implemented.
  • computer system 900 may be the input detection device or mobile device 120 and/or the display device 110 of FIG. 1.
  • Certain components of computer system 900 may be applicable for implementing the mobile device 120, but not for the display device 110 (for example, cursor control 916).
  • the computer system 900 includes a bus 902 or other communication mechanism for communicating information, and one or more hardware processors 904 coupled with bus 902 for processing information.
  • Hardware processor(s) 904 may be, for example, one or more general purpose microprocessors.
  • the computer system 900 also includes a main memory 906, such as a random access memory (RAM), cache and/or other dynamic storage devices, coupled to bus 902 for storing information and instructions to be executed by processor 904.
  • Main memory 906 also may be used for storing temporary variables or other intermediate information during execution of instructions to be executed by processor 904. For example, main memory 906 may maintain the virtual keyboard as described above.
  • Such instructions when stored in storage media accessible to processor 904, render computer system 900 into a special-purpose machine that is customized to perform the operations specified in the instructions.
  • the computer system 900 further includes a read only memory (ROM) 908 or other static storage device coupled to bus 902 for storing static information and instructions for processor 904.
  • a storage device 910 such as a magnetic disk, optical disk, or USB thumb drive (Flash drive), etc., is provided and coupled to bus 902 for storing information and instructions.
  • the computer system 900 may be coupled via bus 902 to a display 912, such as a liquid crystal display (LCD) (or touch screen), for displaying information to a computer user.
  • An input device 914 is coupled to bus 902 for communicating information and command selections to processor 904.
  • Another type of user input device is cursor control 916, such as a mouse, a trackball, or cursor direction keys for communicating direction information and command selections to processor 904 and for controlling cursor movement on display 912.
  • the same direction information and command selections as cursor control may be implemented via receiving touches on a touch screen without a cursor.
  • the computing system 900 may include a user interface module to implement a GUI that may be stored in a mass storage device as executable software codes that are executed by the computing device(s).
  • This and other modules may include, by way of example, components, such as software components, object-oriented software components, class components and task components, processes, functions, attributes, procedures, subroutines, segments of program code, drivers, firmware, microcode, circuitry, data, databases, data structures, tables, arrays, and variables.
  • the words "component," "engine," "module," "system," "database," "data store," and the like, as used herein, can refer to logic embodied in hardware or firmware, or to a collection of software instructions, possibly having entry and exit points, written in a programming language, such as, for example, Java, C or C++.
  • a software component may be compiled and linked into an executable program, installed in a dynamic link library, or may be written in an interpreted programming language such as, for example, BASIC, Perl, or Python. It will be appreciated that software components may be callable from other components or from themselves, and/or may be invoked in response to detected events or interrupts.
  • Software components configured for execution on computing devices may be provided on a computer readable medium, such as a compact disc, digital video disc, flash drive, magnetic disc, or any other tangible medium, or as a digital download (and may be originally stored in a compressed or installable format that requires installation, decompression or decryption prior to execution).
  • Such software code may be stored, partially or fully, on a memory device of the executing computing device, for execution by the computing device.
  • Software instructions may be embedded in firmware, such as an EPROM.
  • hardware components may be comprised of connected logic units, such as gates and flip- flops, and/or may be comprised of programmable units, such as programmable gate arrays or processors.
  • the computer system 900 may implement the techniques described herein using customized hard-wired logic, one or more ASICs or FPGAs, firmware and/or program logic which in combination with the computer system causes or programs computer system 900 to be a special-purpose machine. According to one embodiment, the techniques herein are performed by computer system 900 in response to processor(s) 904 executing one or more sequences of one or more instructions contained in main memory 906. Such instructions may be read into main memory 906 from another storage medium, such as storage device 910. Execution of the sequences of instructions contained in main memory 906 causes processor(s) 904 to perform the process steps described herein. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions.
  • non-transitory media refers to any media that store data and/or instructions that cause a machine to operate in a specific fashion. Such non-transitory media may comprise non-volatile media and/or volatile media.
  • Non-volatile media includes, for example, optical or magnetic disks, such as storage device 910.
  • Volatile media includes dynamic memory, such as main memory 906.
  • non-transitory media include, for example, a floppy disk, a flexible disk, hard disk, solid state drive, magnetic tape, or any other magnetic data storage medium, a CD-ROM, any other optical data storage medium, any physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, NVRAM, any other memory chip or cartridge, and networked versions of the same.
  • Non-transitory media is distinct from but may be used in conjunction with transmission media.
  • Transmission media participates in transferring information between non-transitory media.
  • transmission media includes coaxial cables, copper wire and fiber optics, including the wires that comprise bus 902.
  • transmission media can also take the form of acoustic or light waves, such as those generated during radio-wave and infra-red data communications.
  • the computer system 900 also includes a communication interface 918 coupled to bus 902.
  • Communication interface 918 provides a two-way data communication coupling to one or more network links that are connected to one or more local networks.
  • communication interface 918 may be an integrated services digital network (ISDN) card, cable modem, satellite modem, or a modem to provide a data communication connection to a corresponding type of telephone line.
  • communication interface 918 may be a local area network (LAN) card to provide a data communication connection to a compatible LAN (or a WAN component to communicate with a WAN).
  • Wireless links may also be implemented.
  • communication interface 918 sends and receives electrical, electromagnetic or optical signals that carry digital data streams representing various types of information.
  • a network link typically provides data communication through one or more networks to other data devices.
  • a network link may provide a connection through local network to a host computer or to data equipment operated by an Internet Service Provider (ISP).
  • the ISP in turn provides data communication services through the world wide packet data communication network now commonly referred to as the "Internet.”
  • Local network and Internet both use electrical, electromagnetic or optical signals that carry digital data streams.
  • the signals through the various networks and the signals on network link and through communication interface 918, which carry the digital data to and from computer system 900, are example forms of transmission media.
  • the computer system 900 can send messages and receive data, including program code, through the network(s), network link and communication interface 918.
  • a server might transmit a requested code for an application program through the Internet, the ISP, the local network and the communication interface 918.
  • the received code may be executed by processor 904 as it is received, and/or stored in storage device 910, or other non-volatile storage for later execution.
  • Each of the processes, methods, and algorithms described in the preceding sections may be embodied in, and fully or partially automated by, code components executed by one or more computer systems or computer processors comprising computer hardware.
  • the one or more computer systems or computer processors may also operate to support performance of the relevant operations in a "cloud computing" environment or as a "software as a service” (SaaS).
  • the processes and algorithms may be implemented partially or wholly in application-specific circuitry.
  • the various features and processes described above may be used independently of one another, or may be combined in various ways.
  • the terms "circuit" and "component" might describe a given unit of functionality that can be performed in accordance with one or more embodiments of the present disclosure.
  • a component might be implemented utilizing any form of hardware, software, or a combination thereof.
  • processors, controllers, ASICs, PLAs, PALs, CPLDs, FPGAs, logical components, software routines or other mechanisms might be implemented to make up a component.
  • Various components described herein may be implemented as discrete components or described functions and features can be shared in part or in total among one or more components. In other words, as would be apparent to one of ordinary skill in the art after reading this description, the various features and functionality described herein may be implemented in any given application.
  • the terms "computer program medium" and "computer usable medium" are used to generally refer to transitory or non-transitory media. These and other various forms of computer program media or computer usable media may be involved in carrying one or more sequences of one or more instructions to a processing device for execution. Such instructions embodied on the medium are generally referred to as "computer program code" or a "computer program product" (which may be grouped in the form of computer programs or other groupings). When executed, such instructions might enable a computing component to perform features or functions of the present disclosure as discussed herein.

Abstract

The invention relates to systems and methods for generating a virtual keyboard for text entry on virtual reality and augmented reality devices. The systems and methods according to the invention can generate a virtual keyboard on a mobile device, the virtual keyboard comprising a first operation area, a first plurality of virtual key areas, and a first boundary at a first interface between the first operation area and the first plurality of virtual key areas; detect a user input on the mobile device that crosses the first boundary; select an input key of the virtual keyboard based on the detected user input; and display text based on the selected input key.
PCT/US2022/020897 2021-03-18 2022-03-18 Procédé d'entrée de texte multicouche pour des dispositifs de réalité augmentée WO2022198012A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202280021646.0A CN117083584A (zh) 2021-03-18 2022-03-18 用于增强现实设备的多层文本输入方法

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202163162905P 2021-03-18 2021-03-18
US63/162,905 2021-03-18

Publications (1)

Publication Number Publication Date
WO2022198012A1 true WO2022198012A1 (fr) 2022-09-22

Family

ID=83320894

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2022/020897 WO2022198012A1 (fr) 2021-03-18 2022-03-18 Procédé d'entrée de texte multicouche pour des dispositifs de réalité augmentée

Country Status (2)

Country Link
CN (1) CN117083584A (fr)
WO (1) WO2022198012A1 (fr)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040104896A1 (en) * 2002-11-29 2004-06-03 Daniel Suraqui Reduced keyboards system using unistroke input and having automatic disambiguating and a recognition method using said system
US20110071818A1 (en) * 2008-05-15 2011-03-24 Hongming Jiang Man-machine interface for real-time forecasting user's input
CN106020538A (zh) * 2016-05-13 2016-10-12 张伟 一种触摸屏的输入键盘
CN112218134A (zh) * 2020-09-08 2021-01-12 华为技术加拿大有限公司 一种输入方法以及相关设备

Also Published As

Publication number Publication date
CN117083584A (zh) 2023-11-17

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22772265

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 202280021646.0

Country of ref document: CN

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22772265

Country of ref document: EP

Kind code of ref document: A1