WO2021208965A1 - Text input method, mobile device, head-mounted display device, and storage medium - Google Patents

Text input method, mobile device, head-mounted display device, and storage medium

Info

Publication number
WO2021208965A1
WO2021208965A1 (PCT/CN2021/087238)
Authority
WO
WIPO (PCT)
Prior art keywords
text
control area
mobile device
user
area
Prior art date
Application number
PCT/CN2021/087238
Other languages
English (en)
French (fr)
Inventor
Xu Buyi
Xu Yi
Original Assignee
Guangdong OPPO Mobile Telecommunications Corp., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong OPPO Mobile Telecommunications Corp., Ltd.
Priority to CN202180016556.8A (published as CN115176224A)
Publication of WO2021208965A1
Priority to US17/933,354 (published as US20230009807A1)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04886 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures by partitioning the display area of the touch-screen or the surface of the digitising tablet into independently controllable areas, e.g. virtual keyboards or menus
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01 Head-up displays
    • G02B27/017 Head mounted
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/02 Input arrangements using manually operated switches, e.g. using keyboards or dials
    • G06F3/023 Arrangements for converting discrete items of information into a coded form, e.g. arrangements for interpreting keyboard generated codes as alphanumeric codes, operand codes or instruction codes
    • G06F3/0233 Character input methods
    • G06F3/0236 Character input methods using selection techniques to select from displayed items
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/0482 Interaction with lists of selectable items, e.g. menus
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04842 Selection of displayed objects or displayed text elements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/0485 Scrolling or panning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04883 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M1/00 Substation equipment, e.g. for use by subscribers
    • H04M1/72 Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724 User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72403 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • H04M1/72409 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality by interfacing with external accessories
    • H04M1/724094 Interfacing with a device worn on the user's body to provide access to telephonic functionalities, e.g. accepting a call, reading or composing a message
    • H04M1/724097 Worn on the head
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M1/00 Substation equipment, e.g. for use by subscribers
    • H04M1/72 Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724 User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72403 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • H04M1/72409 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality by interfacing with external accessories
    • H04M1/72412 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality by interfacing with external accessories using two-way short-range wireless interfaces
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M2250/00 Details of telephonic subscriber devices
    • H04M2250/70 Details of telephonic subscriber devices: methods for entering alphabetical characters, e.g. multi-tap or dictionary disambiguation

Definitions

  • the embodiments of the present application relate to the field of vision enhancement technology, and in particular, to a text input method, a mobile device, a head-mounted display device, and a storage medium.
  • VR: Virtual Reality
  • AR: Augmented Reality
  • MR: Mixed Reality
  • head-mounted display devices may include VR devices, AR devices, and MR devices.
  • HMD: head-mounted display device
  • designing the text input interface is a very challenging problem. Under normal circumstances, the text input interface can be implemented using a handheld controller. However, this method is cumbersome, unfavorable for user input operations, and inefficient.
  • a mobile device such as a smart phone
  • this method also has drawbacks, such as requiring the user to watch the screen of the mobile device.
  • the embodiments of the present application provide a text input method, a mobile device, a head-mounted display device, and a storage medium, which can not only reduce the need for users to move their eyes to watch the screen of the mobile device when using the mobile device as a text input device, but also improve text input efficiency.
  • an embodiment of the present application provides a text input method, which is applied to a mobile device.
  • the operation interface of the mobile device includes a keyboard area and a control area, and the method includes:
  • the target text is determined from the at least one candidate text, and the target text is sent to the text input interface of the head-mounted display device; wherein the target text is generated according to the second operation instruction.
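The first-aspect method above can be sketched as follows. This is a minimal illustration under assumptions: the class name `MobileTextInput`, the `send_to_hmd` callback, and the `matches` helper are invented for the example and are not taken from the patent.

```python
# Hypothetical sketch of the mobile-device-side method: a first operation
# from the keyboard area yields candidate texts shown in the control area,
# and a second operation from the control area picks the target text,
# which is then sent to the head-mounted display device.

class MobileTextInput:
    def __init__(self, send_to_hmd):
        self.send_to_hmd = send_to_hmd   # callback that transmits to the HMD
        self.candidates = []

    def on_keyboard_operation(self, key_sequence, lexicon):
        """First operation instruction: generate and display candidate texts."""
        self.candidates = [w for w in lexicon if matches(w, key_sequence)]
        return self.candidates  # presented in the control area

    def on_control_operation(self, index):
        """Second operation instruction: determine the target text and send it."""
        target = self.candidates[index]
        self.send_to_hmd(target)
        return target

def matches(word, key_sequence):
    # Placeholder matching rule for the sketch: prefix match on letters.
    return word.startswith(key_sequence)

sent = []
device = MobileTextInput(send_to_hmd=sent.append)
device.on_keyboard_operation("he", ["hello", "help", "cat"])
target = device.on_control_operation(0)
```

The callback stands in for whatever Bluetooth, WIFI, or USB transport actually links the two devices.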
  • an embodiment of the present application provides a text input method applied to a head-mounted display device, and the method includes:
  • an embodiment of the present application provides a mobile device, the mobile device including a first display unit, a first receiving unit, and a first sending unit; wherein,
  • the first receiving unit is configured to receive a first operation instruction from a keyboard area; wherein the operation interface of the mobile device includes a keyboard area and a control area;
  • a first display unit configured to display at least one candidate text in the control area, the at least one candidate text being generated according to the first operation instruction;
  • the first receiving unit is further configured to receive a second operation instruction from the control area
  • the first sending unit is configured to determine a target text from the at least one candidate text, and send the target text to a text input interface of a head-mounted display device; wherein the target text is generated according to the second operation instruction.
  • an embodiment of the present application provides a mobile device, the mobile device including a first memory and a first processor; wherein,
  • the first memory is configured to store a computer program that can run on the first processor
  • the first processor is configured to execute the method according to any one of the first aspects when running the computer program.
  • an embodiment of the present application provides a head-mounted display device, the head-mounted display device includes a second display unit, a second receiving unit, and an input unit; wherein,
  • the second display unit is configured to display a text input interface
  • the second receiving unit is configured to receive the target text sent by the mobile device
  • the input unit is configured to input the target text into the text input interface.
  • an embodiment of the present application provides a head-mounted display device, the head-mounted display device includes a second memory and a second processor; wherein,
  • the second memory is configured to store a computer program that can run on the second processor
  • the second processor is configured to execute the method according to any one of the second aspects when running the computer program.
  • an embodiment of the present application provides a computer storage medium that stores a computer program. When the computer program is executed by a first processor, the method according to any one of the first aspects is implemented; or, when it is executed by a second processor, the method according to any one of the second aspects is implemented.
  • the embodiments of the present application provide a text input method, a mobile device, a head-mounted display device, and a storage medium.
  • the operation interface of the mobile device includes a keyboard area and a control area. The method receives a first operation instruction from the keyboard area; displays at least one candidate text in the control area, the at least one candidate text being generated according to the first operation instruction; receives a second operation instruction from the control area; and determines a target text from the at least one candidate text and sends the target text to the text input interface of the head-mounted display device, wherein the target text is generated according to the second operation instruction.
  • a text input interface is displayed; the target text sent by the mobile device is received, and the target text is input into the text input interface.
  • since the operation interface of the mobile device is used to realize text input for the head-mounted display device, and the operation interface is divided into a keyboard area and a control area, the user can perform input operations with both hands to increase typing speed and does not need to gaze at the mobile device while typing. This not only reduces the need for users to move their eyes to watch the screen of the mobile device when using it as a text input device, but also improves text input efficiency.
  • Figure 1 is a schematic diagram of an application scenario of a visual enhancement system provided by related technologies
  • FIG. 2 is a schematic diagram of a text input application scenario of a handheld controller provided by related technologies
  • FIG. 3 is a schematic flowchart of a text input method provided by an embodiment of the application.
  • FIG. 4 is a schematic diagram of the layout of an operation interface provided by an embodiment of the application.
  • FIG. 5 is a schematic flowchart of another text input method provided by an embodiment of the application.
  • FIG. 6 is a schematic diagram of the layout of another operation interface provided by an embodiment of the application.
  • FIG. 7 is a schematic flowchart of another text input method provided by an embodiment of the application.
  • FIG. 8 is a schematic diagram of the layout of yet another operation interface provided by an embodiment of the application.
  • FIG. 9 is a schematic diagram of the composition structure of a mobile device provided by an embodiment of this application.
  • FIG. 10 is a schematic diagram of the hardware structure of a mobile device provided by an embodiment of this application.
  • FIG. 11 is a schematic diagram of the composition structure of a head-mounted display device provided by an embodiment of the application.
  • FIG. 12 is a schematic diagram of the hardware structure of a head-mounted display device provided by an embodiment of the application.
  • Augmented Reality can enhance the images seen on screens or other displays; these images are generated by superimposing computer-generated images, sounds, or other data on the real world.
  • MR Mixed Reality
  • MR can not only superimpose virtual objects into the real world, but can also anchor virtual objects into the real world and allow users to interact with the combined virtual/real objects.
  • a head-mounted display device refers to a display device worn on the head or as a part of a helmet, which has display optics in front of one or both eyes.
  • an optical see-through HMD (OST-HMD) is a type of HMD that allows users to see through the screen. In the embodiments of this application, most MR glasses (such as HoloLens and Magic Leap) belong to this type. Another type of HMD is the video pass-through HMD.
  • the visual enhancement system 10 may include a head-mounted display device 110 and a mobile device 120.
  • the head-mounted display device 110 and the mobile device 120 are in a wired or wireless communication connection.
  • the head-mounted display device 110 may refer to a monocular or binocular head-mounted display (Head-Mounted Display, HMD), such as AR glasses.
  • the head-mounted display device 110 may include one or more display modules 111 placed near the position of the user's single eye or both eyes. Through the display module 111 of the head-mounted display device 110, displayed content can be presented in front of the user's eyes, filling or partially filling the user's field of vision.
  • the display module 111 may refer to one or more organic light-emitting diode (OLED) modules, liquid crystal display (LCD) modules, laser display modules, and the like.
  • OLED organic light-emitting diode
  • LCD liquid crystal display
  • laser display modules and the like.
  • the head-mounted display device 110 may also include one or more sensors and one or more cameras.
  • the head-mounted display device 110 may include one or more sensors such as an inertial measurement unit (IMU), an accelerometer, a gyroscope, a proximity sensor, and a depth camera.
  • IMU inertial measurement unit
  • the mobile device 120 may be wirelessly connected to the head-mounted display device 110 according to one or more wireless communication protocols (for example, Bluetooth, Wireless Fidelity (WIFI), etc.). Alternatively, the mobile device 120 may be connected by wire to the head-mounted display device 110 via a data cable (such as a USB cable) according to one or more data transmission protocols such as Universal Serial Bus (USB).
  • a data cable such as a USB cable
  • USB Universal Serial Bus
  • the mobile device 120 may be implemented in various forms.
  • the mobile devices described in the embodiments of the present application may include smart phones, tablet computers, notebook computers, laptop computers, palmtop computers, personal digital assistants (Personal Digital Assistant, PDA), smart watches, and so on.
  • a user operating on the mobile device 120 can control the operation at the head-mounted display device 110 via the mobile device 120.
  • data collected by sensors in the head-mounted display device 110 may also be sent back to the mobile device 120 for further processing or storage.
  • the head-mounted display device 110 may include VR devices (such as HTC VIVE, Oculus Rift, SAMSUNG HMD Odyssey, etc.) and MR devices (such as Microsoft Hololens 1&2, Magic Leap One, Nreal Light, etc.).
  • VR devices such as HTC VIVE, Oculus Rift, SAMSUNG HMD Odyssey, etc.
  • MR devices such as Microsoft Hololens 1&2, Magic Leap One, Nreal Light, etc.
  • AR glasses in some cases.
  • designing the text input interface is an important but very challenging problem. Under normal circumstances, such a text input interface can be implemented using a handheld controller. However, this method is cumbersome and inefficient, especially when the input text is very long; in addition, the large number of controller movements involved usually causes rapid user fatigue. Therefore, the embodiments of the present application need to provide an effective text input interface.
  • one relatively popular method is to enter text in a "point and shoot" style.
  • the user uses the virtual ray from the controller to aim at the keys on the virtual keyboard, and completes the confirmation of the key input by clicking the trigger button.
  • the trigger button is usually behind the controller. This method can use one hand or two hands.
  • each controller is assigned a virtual keyboard. The user slides a fingertip along the surface of the touchpad on the controller to select a key, and then presses the trigger button to confirm the text input.
  • the first two methods cause rapid user fatigue because they require a large amount of controller movement.
  • the third method increases the likelihood of user dizziness because it involves moving the head.
  • although the last method does not involve much hand or head movement, sliding a fingertip to locate a key is inefficient when the keyboard has many keys.
  • a possible alternative is to introduce a circular keyboard layout with multi-letter keys, which can be operated on the touchpad of the controller with one hand.
  • the circular layout is consistent with the circular shape of the touchpad on some controllers of the VR headset.
  • This method has a letter selection mode and a word selection mode. For word selection, this method relies on using the frequency of words in the English language to provide users with multiple choices of words based on a multi-letter key sequence.
  • this method provides the convenience of one-handed operation and does not easily cause fatigue; however, this method requires the user to learn a new keyboard layout. In addition, using only one hand also reduces the maximum input speed.
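The word-selection idea described above, ranking the dictionary words that match a multi-letter key sequence by how frequently they occur, can be illustrated with a small sketch. The T9-style key grouping and the frequency table below are assumptions invented for the example.

```python
# Sketch: map each letter to its multi-letter key, then rank dictionary
# words whose key sequence matches the typed sequence by word frequency.

KEYS = {"abc": "2", "def": "3", "ghi": "4", "jkl": "5",
        "mno": "6", "pqrs": "7", "tuv": "8", "wxyz": "9"}
LETTER_TO_KEY = {ch: key for letters, key in KEYS.items() for ch in letters}

def key_sequence(word):
    """The digit sequence a word produces on a multi-letter keyboard."""
    return "".join(LETTER_TO_KEY[ch] for ch in word)

def candidates(sequence, frequency):
    """Words whose key sequence equals `sequence`, most frequent first."""
    hits = [w for w in frequency if key_sequence(w) == sequence]
    return sorted(hits, key=frequency.get, reverse=True)

# Invented frequency table for illustration; "home", "good", "gone" and
# "hood" all share the key sequence 4-6-6-3.
freq = {"home": 120, "good": 95, "gone": 40, "hood": 10}
```

Frequency-based ranking is what lets one key sequence stand for several words while still usually showing the intended word first.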
  • voice technology and aerial typing with gesture tracking are prone to errors and cannot provide users with privacy; in addition, aerial typing relies on cameras, gloves, or other devices to track gestures, which is relatively error-prone and causes user fatigue.
  • yet another possible alternative involves the use of additional input devices for text input.
  • one method uses a smart watch as the input device of smart glasses. For AR glasses that are bound to a mobile device (such as a smart phone) via a USB cable or wirelessly (using Bluetooth, WIFI, etc.), a simple and direct option is to use the existing text input interface on the mobile device.
  • mobile devices have a floating full keyboard (specifically, a QWERTY keyboard), a T9 keyboard, a handwriting interface, and so on.
  • all these methods require the user to watch the keyboard interface on the mobile device screen.
  • the user may want to keep the virtual object or the physical world within their line of sight, and in a VR setting the user may not be able to see the mobile device at all, so the above methods are not an ideal choice.
  • an embodiment of the present application provides a text input method.
  • the operation interface of the mobile device includes a keyboard area and a control area. A first operation instruction is received from the keyboard area; at least one candidate text is displayed in the control area, the at least one candidate text being generated according to the first operation instruction; a second operation instruction is received from the control area; and a target text is determined from the at least one candidate text and sent to the text input interface of the head-mounted display device, wherein the target text is generated according to the second operation instruction.
  • a text input interface is displayed; the target text sent by the mobile device is received, and the target text is input into the text input interface.
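The head-mounted-display side of the method can be sketched as follows; the class and method names here are hypothetical placeholders, not names from the patent or any particular device SDK.

```python
# Sketch of the HMD side: display a text input interface, receive the
# target text sent by the mobile device, and insert it into the interface.

class TextInputInterface:
    """A stand-in for the text field displayed by the HMD."""
    def __init__(self):
        self.buffer = []

    def insert(self, text):
        self.buffer.append(text)

    def content(self):
        return " ".join(self.buffer)

class HeadMountedDisplay:
    def __init__(self):
        self.interface = TextInputInterface()  # shown in the user's view

    def on_receive_target_text(self, target_text):
        # Each target text determined on the mobile device lands here.
        self.interface.insert(target_text)

hmd = HeadMountedDisplay()
hmd.on_receive_target_text("hello")
hmd.on_receive_target_text("world")
```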
  • since the operation interface of the mobile device is used to realize text input for the head-mounted display device, and the operation interface is divided into a keyboard area and a control area, the user can perform input operations with both hands to increase typing speed and does not need to gaze at the mobile device while typing. This not only reduces the need for users to move their eyes to watch the screen of the mobile device when using it as a text input device, but also improves text input efficiency.
  • FIG. 3 shows a schematic flowchart of a text input method provided by an embodiment of the present application.
  • the method may include:
  • the embodiment of the present application may use the operation interface of a mobile device (such as a smart phone) as the operation interface of the head-mounted display device for text input.
  • a mobile device such as a smart phone
  • users can operate with both hands to increase typing speed.
  • the operation interface displayed on the screen of the mobile device may include a keyboard area and a control area so that the user can perform two-handed operations.
  • the screen of the mobile device can be divided into two parts, including the left area of the screen and the right area of the screen.
  • the method further includes:
  • the keyboard area is displayed in the left area of the screen, and the control area is displayed in the right area of the screen.
  • the keyboard area can be displayed in the left area of the screen, and the control area can be displayed in the right area of the screen.
  • the keyboard area can also be displayed on the right area of the screen, and the control area can be displayed on the left area of the screen.
  • whether to display the keyboard area in the left area or the right area of the screen can be determined according to the user's preference or other factors.
  • the embodiments of this application do not make specific limitations.
  • the method may further include:
  • the left side area and the right side area are adjusted in size.
  • the size of the left area and the size of the right area can be adjusted adaptively according to the screen size of the mobile device and the user's hand size, or even according to the user's preferences, to make it more convenient for the user to operate.
  • the keyboard area may include a virtual keyboard.
  • the virtual keyboard may include at least one of the following: a circular layout keyboard, a QWERTY keyboard, a T9 keyboard, a QuickPath keyboard, a Swype keyboard, and a predefined keyboard.
  • the QWERTY keyboard, also called the full keyboard, is currently the most widely used keyboard layout.
  • the T9 keyboard is the traditional non-smart-phone keyboard. It has relatively few keys: only the number keys 1-9 are commonly used, and each number key carries several pinyin letters, realizing the function of "inputting all Chinese characters with 9 number keys".
  • the QuickPath keyboard, which may be called a sliding keyboard, allows users to input with gestures and is usually found on iOS devices. Swype is a touch-screen keyboard that lets users swipe a thumb or another finger across the letters on the keyboard to complete input.
  • the predefined keyboard can be a keyboard different from the QWERTY keyboard, T9 keyboard, QuickPath keyboard, and Swype keyboard, which can be customized according to user needs.
  • the user can select the target keyboard from the above-mentioned virtual keyboards according to actual needs, and there is no limitation here.
  • the screen of the mobile device in the embodiment of the present application may be placed horizontally, so that the keyboard area and the control area are arranged side by side on the screen of the mobile device.
  • FIG. 4 shows a schematic diagram of the layout of an operation interface provided by an embodiment of the present application. As shown in FIG. 4, the screen of the mobile device is placed horizontally, and the operation interface (including the keyboard area 401 and the control area 402) is displayed on the screen of the mobile device.
  • the screen of the mobile device is divided into two parts: the left area displays the keyboard area 401, in which a multi-letter keyboard layout similar to a T9 keyboard is placed; the right area is the control area 402, which can present at least one candidate text, such as p, q, r, s, etc.
  • S302 Display at least one candidate text in the control area, where the at least one candidate text is generated according to the first operation instruction.
  • the first operation instruction may be generated by the user's finger touching and sliding on the virtual keyboard. That is, in some embodiments, receiving the first operation instruction from the keyboard area may include: receiving a first touch-slide operation performed by the user's finger on the virtual keyboard.
  • At least one candidate text is then generated according to the first touch-slide operation.
  • the first operation instruction is generated based on the user's finger performing the first touch and slide operation on the virtual keyboard.
  • the at least one candidate text will be presented in the control area.
  • the user's finger here usually refers to the user's left finger, which may specifically be the left thumb, but may also be any other finger, which is not specifically limited in the embodiment of the present application.
  • the method may further include: when it is detected that the user's finger performs the first touch-and-slide operation on the virtual keyboard, highlighting the selected key in the virtual keyboard.
  • the method may further include: controlling the mobile device to vibrate when it is detected that the user's finger touches and slides to a new button on the virtual keyboard.
  • the mobile device detects that the user's finger performs a sliding operation on the key on the virtual keyboard to select one of the multi-letter keys
  • the selected key can also be highlighted on the screen of the mobile device, for example, distinguished by color, to provide feedback.
  • the embodiment of the present application may also provide other types of feedback, for example, when the user's finger slides to a new letter key, the mobile device vibrates.
  • the embodiment of the present application may even display the operation interface of the mobile device on the head-mounted display device, so as to feed back the selected key to the user.
  • the candidate text can be letters/numbers, words, or Chinese characters, which are mainly related to the input mode.
  • the keyboard area can support multiple input modes.
  • the multiple input modes may include at least a letter input mode and a word input mode, and may even include other input modes such as a Chinese character input mode.
  • the method may further include:
  • receiving a third operation instruction from the control area; and according to the third operation instruction, the keyboard area is controlled to switch between the multiple input modes.
  • controlling the keyboard area to switch between multiple input modes according to the third operation instruction may include:
  • when it is detected that the user's finger performs a double-click operation on the control area, the keyboard area is controlled to switch between the multiple input modes.
  • after the mobile device receives the third operation instruction, it can switch between these multiple input modes.
  • the displaying at least one candidate text in the control area may include:
  • the method may further include: highlighting the selected button.
  • the user can slide her left thumb (or any finger she chooses) on the keyboard area to select one of the multi-letter keys.
  • the selected key can be highlighted on the screen of the mobile device for feedback.
  • other types of feedback may also be provided, for example, when sliding to a new button, the mobile device vibrates.
  • the displaying at least one candidate text in the control area may include:
  • the sliding track of the user's finger on the keyboard area is determined, and at least one candidate word is displayed in the control area according to the sliding track.
  • the presenting at least one candidate word in the control area according to the sliding track may include:
  • if it is detected in the sliding track that the dwell time of the user's finger on at least one preset key is greater than the first preset time, the at least one preset key is determined to be selected; and according to the sequence of the at least one preset key in the sliding track, at least one candidate word is generated and displayed in the control area.
  • the method may further include:
  • the first preset key is any key in the virtual keyboard.
  • the first preset time and the second preset time may be different.
  • the first preset time is used to determine whether a preset button is selected in the sliding track
  • the second preset time is used to determine whether a preset button is continuously selected in the sliding track.
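The two dwell-time rules above can be sketched in a few lines. This is an illustrative sketch only; the threshold values, the function name, and the representation of the sliding track as (key, dwell) pairs are assumptions, not part of the embodiment.

```python
# Illustrative sketch of dwell-time key selection from a sliding track.
# The threshold values and the (key, dwell_ms) track format are assumptions.
FIRST_PRESET_MS = 150   # first preset time: dwell needed to select a key
SECOND_PRESET_MS = 600  # second preset time: dwell that repeats the same key

def keys_from_track(track):
    """track: ordered (key, dwell_ms) pairs along the user's sliding path.
    Returns the selected key sequence; a long dwell selects the key twice."""
    selected = []
    for key, dwell in track:
        if dwell >= SECOND_PRESET_MS:
            selected += [key, key]    # continuous selection of the same key
        elif dwell >= FIRST_PRESET_MS:
            selected.append(key)      # ordinary selection
        # keys the finger merely passes over are ignored
    return selected
```

With the assumed thresholds, pausing long on a key (for example "l") yields that key twice in the sequence, which is how a repeated letter can be entered without lifting the finger.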
  • the operation of the virtual keyboard in the keyboard area is similar to the QuickPath keyboard on iOS devices and the Swype keyboard on Android devices.
  • the user can use the user's finger to slide on each letter of the word without tapping separately, and without lifting the user's finger.
  • an algorithm for determining the selected letter key can be implemented by, for example, detecting a pause in the path.
  • the user can use the left thumb to slide on the virtual keyboard, and then can display a group of candidate words matching the selected key sequence in the control area.
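Matching the selected key sequence against a dictionary, as in the word input mode above, can be sketched as follows. The T9-style key-to-letter map is conventional, but the function names and the tiny word list used for illustration are assumptions, not taken from the embodiment.

```python
# Illustrative sketch of multi-letter key disambiguation (T9-style).
KEY_LETTERS = {
    "2": "abc", "3": "def", "4": "ghi", "5": "jkl",
    "6": "mno", "7": "pqrs", "8": "tuv", "9": "wxyz",
}
# Reverse map: letter -> key it lives on.
LETTER_KEY = {ch: k for k, letters in KEY_LETTERS.items() for ch in letters}

def candidates(key_sequence, dictionary):
    """Return the dictionary words whose per-letter key sequence matches
    the selected key sequence, to be shown in the control area."""
    return [w for w in dictionary
            if len(w) == len(key_sequence)
            and all(LETTER_KEY[ch] == k for ch, k in zip(w, key_sequence))]
```

For example, the key sequence 2-6-8 matches "bot", "cot", and "ant" in a small dictionary, which is consistent with the group of candidate words shown in FIG. 8.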
  • the embodiment of the present application may also support foreign language input, such as a Chinese character input mode.
  • in the case that the input mode is the Chinese character input mode, the displaying at least one candidate text in the control area may include:
  • the sliding track of the user's finger on the keyboard area is determined, and at least one candidate Chinese character is displayed in the control area according to the sliding track.
  • the Chinese character input mode is similar to the word input mode.
  • multiple schemes (for example, pinyin) may be used; pinyin can be used to input Chinese characters as words composed of English letters. Therefore, the input of Chinese text can also be realized through the word input mode.
  • after the mobile device receives the first operation instruction from the keyboard area, it can generate at least one candidate text (such as letters, words, Chinese characters, etc.) according to the first operation instruction, and then present it in the control area so that the target text to be entered can be further determined.
  • S304 Determine the target text from the at least one candidate text, and send the target text to the text input interface of the head-mounted display device; wherein the target text is generated according to the second operation instruction.
  • the selection of the target text may be determined by the second operation instruction from the control area received by the mobile device.
  • the second operation instruction may be generated by the user's finger touching and sliding the control area.
  • the receiving the second operation instruction from the control area and determining the target text from the at least one candidate text may include:
  • the target text is determined from the at least one candidate text according to the sliding direction corresponding to the second touch sliding operation; wherein the second operation instruction is generated based on the user's finger performing the second touch sliding operation in the control area.
  • the target text can be selected.
  • the user's finger here generally refers to a finger of the user's right hand, which may specifically be the right thumb, but may also be any other finger; this is not specifically limited in the embodiment of the present application.
  • the target text may be selected based on a sliding gesture (specifically, a sliding direction) used by the user on the right area of the screen.
  • four candidate texts such as p, q, r, and s are displayed in the control area.
  • the letter q is displayed on the upper side; at this time, swipe up to select and confirm the letter q.
  • the letter r is displayed on the right side; at this time, swipe to the right to select and confirm the letter r.
  • the letter p is displayed on the left side; at this time, swipe to the left to select and confirm the letter p.
  • the letter s is displayed on the lower side; at this time, swipe down to select and confirm the letter s.
  • the method may further include: if the number of the at least one candidate text is only one, when it is detected that the user's finger performs a sliding operation in any direction in the control area, or it is detected that the user's finger performs a single-click operation in the control area, determining the target text according to the second operation instruction.
  • direction options can be displayed in the control area.
  • four candidate texts can be laid out in four directions (up, down, left, and right) for selection.
  • six-direction layout, eight-direction layout, etc. are also possible, depending on the user's preference and the ability of the mobile device to distinguish sliding directions, and the embodiment of the present application does not specifically limit it.
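The direction-based selection described above amounts to classifying a swipe by its dominant axis and looking up the candidate laid out in that direction. A minimal four-direction sketch follows; the function names and the coordinate convention (screen y grows downward, as on typical touch screens) are assumptions for illustration.

```python
# Illustrative sketch: mapping a swipe in the control area to one of four
# candidates laid out up/down/left/right.
def swipe_direction(x0, y0, x1, y1):
    """Classify a swipe from (x0, y0) to (x1, y1) by its dominant axis."""
    dx, dy = x1 - x0, y1 - y0
    if abs(dx) >= abs(dy):
        return "right" if dx > 0 else "left"
    return "down" if dy > 0 else "up"

def pick_candidate(layout, x0, y0, x1, y1):
    """layout: dict mapping a direction name to the candidate text placed
    on that side of the control area."""
    return layout.get(swipe_direction(x0, y0, x1, y1))
```

With the p/q/r/s example above, `{"left": "p", "up": "q", "right": "r", "down": "s"}`, a leftward swipe selects and confirms the letter p. Six- or eight-direction layouts would add diagonal sectors to the classifier.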
  • buttons may be set in the control area at this time, including a first button and a second button, to switch between multiple sets of candidate texts.
  • the method may further include:
  • receiving a fourth operation instruction from the control area; and according to the fourth operation instruction, controlling the control area to switch between multiple sets of candidate texts.
  • in some embodiments, controlling the control area to switch between multiple sets of candidate texts according to the fourth operation instruction may include:
  • when it is detected that the user's finger performs a click operation on the first button or the second button in the control area, controlling the control area to switch between the multiple sets of candidate texts.
  • the first button is used to trigger the update display of the at least one candidate text to the next group
  • the second button is used to trigger the update display of the at least one candidate text to the previous group.
  • the embodiment of the present application may display two buttons at the bottom of the control area: a "next group" button and a "previous group" button. At this time, the user only needs to click the "next group" or "previous group" button to browse multiple sets of candidate texts.
  • the user can also simply slide toward the button to trigger the previous group of candidate texts and the next group of candidate texts.
  • the sliding directions of "bottom-left diagonal" and "bottom-right diagonal" are reserved for browsing multiple sets of candidate text, while the sliding directions of "up", "down", "left", and "right" are used to select the target text.
  • the "previous group” button is located at the lower left corner of the control area, and the “next group” button is located at the lower right corner of the control area.
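Browsing multiple groups of candidate texts with the "next group" / "previous group" actions can be sketched as a simple pager. A group size of 4 matches the four-direction layout above; the class and method names are illustrative assumptions.

```python
# Illustrative sketch: paging through groups of candidate texts, as
# triggered by the two buttons or the lower-left / lower-right diagonal swipes.
GROUP_SIZE = 4  # one candidate per direction in a four-direction layout

class CandidateGroups:
    def __init__(self, candidates):
        self.groups = [candidates[i:i + GROUP_SIZE]
                       for i in range(0, len(candidates), GROUP_SIZE)]
        self.index = 0

    def current(self):
        return self.groups[self.index]

    def next_group(self):      # "next group" button / lower-right swipe
        self.index = min(self.index + 1, len(self.groups) - 1)
        return self.current()

    def previous_group(self):  # "previous group" button / lower-left swipe
        self.index = max(self.index - 1, 0)
        return self.current()
```

Only the current group is shown in the control area; the four ordinary swipe directions then select within it.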
  • the at least one candidate text can also be set in the form of a list.
  • the method may further include:
  • the candidate text in the scroll list is controlled to be scrolled and displayed.
  • the number of candidate texts presented in the control area at this time is relatively large.
  • the at least one candidate text can be set as a scrolling list. The user can scroll up or down the list and highlight a candidate text in the list. Through different sliding operations, you can select and confirm the highlighted text and use it as the target text to be entered.
  • the list can be a vertical list or a circular list.
  • the display order of these candidate texts in the list can be determined according to the user’s preferences, or according to other methods.
  • the display order of words can be determined based on the frequency of the words in an English corpus (for example, displaying the most frequent word at the top), but the embodiment of the present application does not make any limitation on this.
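Ordering the scrolling list by corpus frequency and moving the highlight with up/down gestures, as described above, might look like the following sketch. The frequency table, function names, and the circular-list option are illustrative assumptions.

```python
# Illustrative sketch: a frequency-ordered scrolling candidate list.
def ordered_list(candidates, frequency):
    """Sort candidates so the most frequent word appears at the top."""
    return sorted(candidates, key=lambda w: frequency.get(w, 0), reverse=True)

def scroll(highlight_index, step, length, circular=False):
    """Move the highlighted row by `step`; a circular list wraps around,
    a vertical list clamps at the ends."""
    i = highlight_index + step
    if circular:
        return i % length
    return max(0, min(i, length - 1))
```

A different sliding gesture (for example, sliding to the right) would then confirm the highlighted entry as the target text.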
  • the user can keep pressing the letter key and pause for a short time; or, the user can also use her right thumb to quickly tap the area on the right side of the screen to confirm the input of repeated keystrokes.
  • glove-based or camera-based gesture recognition can also be used to implement aerial typing in a similar manner, and then send the target text to the text input interface of the head-mounted display device.
  • the screen of the mobile device is divided into two parts, which are used to display the keyboard area and the control area respectively, so that both hands can be used for operation, enabling users to easily perform text input for the head-mounted display device bound to the mobile device, while using elements already familiar to the user to implement the operation interface of the new computing device.
  • the multi-letter keyboard layout is similar to the familiar T9 keyboard layout, which can shorten the user's learning time.
  • This embodiment provides a text input method, which is applied to a mobile device.
  • the operation interface of the mobile device includes a keyboard area and a control area; a first operation instruction is received from the keyboard area; at least one candidate text is displayed in the control area, the at least one candidate text being generated according to the first operation instruction; a second operation instruction is received from the control area; the target text is determined from the at least one candidate text, and the target text is sent to the text input interface of the head-mounted display device; wherein the target text is generated according to the second operation instruction.
  • since the operation interface of the mobile device is used to realize the text input of the head-mounted display device, and the operation interface of the mobile device is divided into a keyboard area and a control area, input operations can be performed with both hands to increase the typing speed, and the user does not need to gaze at the mobile device while typing. This not only reduces the need for users to move their line of sight to watch the screen of the mobile device when using it as a text input device, but also improves the efficiency of text input.
  • FIG. 5 shows a schematic flowchart of another text input method provided by an embodiment of the present application. As shown in Figure 5, the method may include:
  • S501 Display a text input interface.
  • S502 Receive the target text sent by the mobile device, and input the target text into the text input interface.
  • the target text is determined by the mobile device according to the touch and sliding operations performed by the user's finger on the keyboard area and the control area of the mobile device.
  • the embodiment of the present application may use the operation interface of a mobile device (such as a smart phone) as the operation interface of the head-mounted display device for text input.
  • the target text can be sent to the head-mounted display device, and then synchronized to the text input interface of the head-mounted display device for display.
  • the embodiment of the present application may display the operation interface of the mobile device on the head-mounted display device in order to provide the user with operation feedback.
  • the method may further include: displaying an operation interface of the mobile device in the head-mounted display device;
  • the receiving the target text sent by the mobile device may include:
  • the operation interface and text input interface of the mobile device can be displayed through the display module of the head-mounted display device.
  • the operation interface of the mobile device can be displayed at this time, and then the user uses the mobile device to perform touch operations on its own operation interface, so as to determine the target text and synchronously input it into the text input interface of the head-mounted display device.
  • the operation interface presented in the head-mounted display device is consistent with the operation interface presented by the mobile device itself.
  • the operation interface may include a keyboard area and a control area, and the keyboard area includes a virtual keyboard.
  • the displaying the operation interface of the mobile device may include:
  • the keyboard area and the control area are displayed in the head-mounted display device, and the selected keys in the virtual keyboard are highlighted.
  • the method may further include: displaying the position of the user's finger on the virtual keyboard with a preset mark.
  • the keyboard area and the control area may also be displayed on the head-mounted display device.
  • the keyboard area includes a virtual keyboard, and the virtual keyboard is provided with multiple letter keys
  • the virtual keyboard and multiple letter keys can be displayed on the head-mounted display device.
  • on the one hand, the selected key can be highlighted on the screen of the mobile device; on the other hand, the selected key can also be highlighted on the head-mounted display device.
  • the user's finger can also be displayed on the head-mounted display device with a preset mark to indicate the current position of the user's finger.
  • FIG. 6 shows a schematic diagram of the layout of another operation interface provided by an embodiment of the present application.
  • the operation interface including the keyboard area 601 and the control area 602 is displayed on the head-mounted display device.
  • in the display module of the head-mounted display device, the selected key can also be highlighted on the virtual keyboard in the keyboard area 601, and a mark (for example, the black dot shown in FIG. 6) representing the position of the user's finger is displayed on the virtual keyboard.
  • the method may further include:
  • the sliding direction of the user's finger is determined based on at least one candidate text displayed in the control area; wherein the sliding direction is used to instruct the user's finger to select the target text through a touch sliding operation on the operation interface of the mobile device.
  • the sliding direction of the user's finger can be determined, and then the user can perform the touch and sliding operation with his finger.
  • the letter N is located on the upper side of the control area; at this time, swipe up to select and confirm the target text as the letter N.
  • the letter M is displayed on the left side; at this time, swipe to the left to select and confirm the target text as the letter M.
  • the letter O is displayed on the right side; at this time, swipe to the right to select and confirm the target text as the letter O.
  • the user can use both hands to input letters, for example, the left hand selects the letter keys in the keyboard area, and the right hand selects the target text in the control area.
  • the head-mounted display device can focus on displaying the text input interface at this time, and synchronize the target text to the text input interface for input display.
  • This embodiment provides a text input method, which is applied to a head-mounted display device.
  • by displaying a text input interface, receiving the target text sent by the mobile device, and inputting the target text into the text input interface.
  • since the operation interface of the mobile device is used to realize the text input of the head-mounted display device, and the operation interface of the mobile device is divided into a keyboard area and a control area, input operations can be performed with both hands to increase the typing speed, and the user does not need to gaze at the mobile device while typing. This not only reduces the need for users to move their line of sight to watch the screen of the mobile device when using it as a text input device, but also improves the efficiency of text input.
  • FIG. 7 shows a schematic flowchart of yet another text input method provided by an embodiment of the present application.
  • the method may include:
  • S702 Display at least one candidate text in the control area.
  • S704 Determine the target text from the at least one candidate text.
  • At least one candidate text is generated according to the first operation instruction, and the target text is generated according to the second operation instruction.
  • the execution subject of steps S701 to S704 is the mobile device. After the mobile device determines the target text, it sends the target text to the head-mounted display device for input.
  • S705 Send the target text from the mobile device to the head-mounted display device.
  • S706 Input the received target text into the text input interface of the head-mounted display device.
  • the method is applied to a visual enhancement system.
  • the visual enhancement system may include a mobile device and a head-mounted display device.
  • a wired communication connection can be established between the mobile device and the head-mounted display device through a data cable
  • a wireless communication connection can also be established through a wireless communication protocol.
  • the wireless communication protocol may include at least one of the following: the Bluetooth protocol, the Wireless Fidelity (WiFi) protocol, the Infrared Data Association (IrDA) protocol, and the Near Field Communication (NFC) protocol.
  • the embodiment of the present application provides an operation interface and method for using a mobile device to perform text input for a head-mounted display device (such as AR glasses).
  • the screen of the mobile device is divided into two parts.
  • the left area of the screen is used as the display keyboard area, which can be a multi-letter keyboard layout similar to the T9 keyboard;
  • the right area of the screen is used as the control area, which is the area where the user selects among multiple numbers, letters, words, or Chinese characters. Therefore, the user can use both hands to increase the typing speed; and when typing, the user does not need to keep her eyes on the screen of the mobile device, that is, "blind typing" can be achieved (the user does not need to stare at the screen of the mobile device).
  • the keyboard area may include a virtual keyboard
  • the virtual keyboard may include multiple letter keys.
  • the virtual keyboard may include at least one of the following: a circular layout keyboard, a QWERTY keyboard, a T9 keyboard, a QuickPath keyboard, a Swype keyboard, and a predefined keyboard.
  • the user can slide her left thumb (or any finger she chooses) on the keyboard area to select one of the multi-letter keys.
  • the selected key can be highlighted on the screen of the mobile device for feedback.
  • other types of feedback may also be provided, for example, the mobile device vibrates when sliding to a new button.
  • the operation interface (including the keyboard area and the control area) can also be displayed on the head-mounted display device, and the corresponding keys can also be highlighted on the head-mounted display device.
  • some marks (for example, black dots) indicating the position of the finger can also be displayed on the virtual keyboard of the head-mounted display device, as shown in FIG. 6 in detail.
  • the user can use a sliding gesture in the control area to select the target text (specifically, letters here).
  • the letter N is displayed on the upper side of the control area.
  • the upward sliding gesture will select and confirm the letter N as the target text, and input it into the text input interface of the head-mounted display device.
  • the embodiment of the present application can use two hands together to input letters.
  • the user can slide in any direction or click to select and confirm the target text, and input it into the text input interface of the head-mounted display device.
  • the keyboard area can also support a word input mode. At this time, a simple gesture (for example, a double-click in the right area of the screen) can be used to switch between the two input modes.
  • the operation of the virtual keyboard in the keyboard area is similar to the QuickPath keyboard on iOS devices and the Swype keyboard on Android devices.
  • the user can use the user's finger to slide on each letter of the word without tapping separately, and without lifting the user's finger.
  • an algorithm for determining the selected letter can be implemented by, for example, detecting a pause in the path.
  • the user can use the left thumb to slide on the virtual keyboard, and then can display a group of candidate words matching the selected key sequence in the control area.
  • FIG. 8 shows a schematic diagram of the layout of yet another operation interface provided by an embodiment of the present application.
  • as shown in FIG. 8, the operation interface can be displayed on a head-mounted display device and/or a mobile device.
  • words such as "am", "bot", "cot", "ant", etc. will be displayed in the control area, such as the right-side area of the head-mounted display device and/or the area on the right side of the screen of the mobile device; and in the head-mounted display device, a mark for indicating the position of the user's finger (such as the black dot in FIG. 8) may also be displayed. Then, the user can perform a directional sliding in the control area to select and confirm a word, and input the word into the text input interface of the head-mounted display device.
  • the embodiment of the present application may also support foreign language input, such as a Chinese character input mode.
  • multiple schemes (for example, pinyin) may be used; pinyin can be used to input Chinese characters as words composed of English letters. Therefore, the input of Chinese text can also be realized through the word input mode.
  • the user can keep holding down a letter and pause for a short time.
  • the user can also use the right thumb to quickly click on the right area to confirm the input of repeated keystrokes.
  • the right area can only display a limited number of direction options. For example, a four-direction (up, down, left, right) layout can be used to select among 4 candidate words.
  • six-direction layout and eight-direction layout are also possible, depending on the user's preferences and the ability of the mobile device to distinguish sliding directions.
  • two buttons can be displayed at the bottom of the right area: a "next group" button and a "previous group" button. At this time, the user only needs to click the "next group" or "previous group" button to browse multiple sets of possible words.
  • sliding toward the buttons can also trigger the display of the previous group of words and the next group of words (for example, the lower-left diagonal direction and the lower-right diagonal direction are reserved for browsing word groups, while "up", "down", "left", and "right" are used to select the target word).
  • these multiple possible words may also be implemented as a scrollable list.
  • the user can use the up and down sliding motion to scroll the list and highlight a word in the list.
  • a different sliding action (for example, sliding to the right) can select and confirm the highlighted word for input into the text input interface of the head-mounted display device.
  • the list can be a vertical list or a circular list; the display order of the words in the list can be determined based on the frequency of the words in the English corpus. For example, the most frequent word can be displayed at the top of the list.
  • the virtual keyboard in the keyboard area can have different layouts, such as a circular layout, or even a traditional QWERTY keyboard layout.
  • the size of the left area and the right area can be adjusted based on the size of the screen and the size of the user's hand.
  • function keys such as Backspace, Space, Enter, etc. may be placed in the right area. Then, simply swipe in the direction of the function button to enter the function button.
  • the user can also always use a click in the right area to confirm the selection and input of letter keys, instead of using the Swype keyboard.
  • glove-based or camera-based gesture recognition can be used to implement aerial typing in a similar manner, and then the target text can be input to the text input interface of the head-mounted display device.
  • the keyboard area and the control area are displayed separately, so that the user can operate with both hands, which can improve input efficiency; the use of a multi-letter keyboard layout with a small number of keys also eliminates the need for the user to keep staring at the mobile device (which is impossible in VR); on the contrary, the user can keep the virtual content or the real world within her line of sight (which is more desirable in MR/AR); in addition, in the keyboard area, the multi-letter keyboard layout is similar to the T9 keyboard that users are already familiar with, which also shortens the user's learning time.
  • This embodiment provides a text input method.
  • the specific implementation of the foregoing embodiments is described in detail through this embodiment. It can be seen from the above that the operation interface of the mobile device is used to implement the text input of the head-mounted display device, and the operation interface of the mobile device is divided into a keyboard area and a control area, so that input operations can be performed with both hands to improve typing speed, and the user does not need to look at the mobile device when typing. This not only reduces the need for the user to move her line of sight to watch the screen of the mobile device when using it as a text input device, but also improves the efficiency of text input.
  • FIG. 9 shows a schematic diagram of the composition structure of a mobile device 90 provided by an embodiment of the present application.
  • the mobile device 90 may include: a first display unit 901, a first receiving unit 902, and a first sending unit 903; wherein,
  • the first receiving unit 902 is configured to receive a first operation instruction from a keyboard area; wherein the operation interface of the mobile device includes a keyboard area and a control area;
  • the first display unit 901 is configured to display at least one candidate text in the control area, and the at least one candidate text is generated according to the first operation instruction;
  • the first receiving unit 902 is further configured to receive a second operation instruction from the control area;
  • the first sending unit 903 is configured to determine a target text from the at least one candidate text, and send the target text to the text input interface of the head-mounted display device; wherein the target text is generated according to a second operation instruction.
  • the first display unit 901 is further configured to display the keyboard area on the left area of the screen of the mobile device, and display the control area on the right area of the screen.
  • the mobile device 90 may further include an adjustment unit 904 configured to adjust the sizes of the left area and the right area based on the screen size of the mobile device and the size of the user's hand.
  • the keyboard area includes a virtual keyboard.
  • the first receiving unit 902 is specifically configured to, when it is detected that the user's finger performs a first touch sliding operation on the virtual keyboard, generate the at least one candidate text according to the first touch sliding operation; wherein the first operation instruction is generated based on the user's finger performing the first touch sliding operation on the virtual keyboard.
  • the keyboard area supports multiple input modes, and the multiple input modes include at least a letter input mode and a word input mode; accordingly, the first receiving unit 902 is further configured to receive data from the control The third operation instruction of the area;
  • the first display unit 901 is further configured to control the keyboard area to switch between the multiple input modes according to the third operation instruction.
  • the first display unit 901 is specifically configured to control the keyboard area to switch between the multiple input modes when it is detected that the user's finger performs a double-click operation on the control area.
  • the first display unit 901 is specifically configured to, when the input mode is the letter input mode, determine the key selected by the user's finger performing the first touch sliding operation on the virtual keyboard in the keyboard area, and display at least one candidate letter in the control area according to the selected key.
  • the first display unit 901 is further configured to highlight the selected key.
  • the first display unit 901 is specifically configured to, when the input mode is the word input mode, determine the sliding track of the user's finger performing the first touch sliding operation on the virtual keyboard in the keyboard area, and display at least one candidate word in the control area according to the sliding track.
  • the first display unit 901 is further configured to determine that the at least one preset key is selected if it is detected in the sliding track that the dwell time of the user's finger on the at least one preset key is greater than the first preset time; and to generate and display at least one candidate word in the control area according to the sequence of the at least one preset key in the sliding track.
• the first display unit 901 is further configured to, if it is detected that the dwell time of the user's finger on a first preset key in the sliding track is greater than a second preset time, determine that the first preset key is repeatedly selected; or, if it is detected that the user's finger stays on the first preset key in the sliding track while performing a click operation in the control area, determine that the first preset key is repeatedly selected; wherein the first preset key is any key in the virtual keyboard.
• the first display unit 901 is further configured to, when it is detected that the user's finger performs a second touch-slide operation in the control area, determine the target text from the at least one candidate text according to the sliding direction corresponding to the second touch-slide operation; wherein the second operation instruction is generated based on the user's finger performing the second touch-slide operation in the control area.
  • the first receiving unit 902 is further configured to receive a fourth operation instruction from the control area;
  • the first display unit 901 is further configured to control the control area to switch between multiple sets of candidate texts according to the fourth operation instruction.
  • the control area includes a first button and a second button.
• the first display unit 901 is specifically configured to, when it is detected that the user's finger performs a click operation on the first button or the second button in the control area, control the control area to switch the display among multiple groups of candidate texts; wherein the first button is used to trigger updating the display of the at least one candidate text to the next group, and the second button is used to trigger updating the display to the previous group.
• the first display unit 901 is further configured to, when it is detected that the user's finger performs a third touch-slide operation toward the first button or the second button in the control area, control the control area to switch the display among multiple groups of candidate texts.
  • the mobile device 90 may further include a setting unit 905 configured to set the at least one candidate text as a scrolling list;
• the first display unit 901 is further configured to, when it is detected that the user's finger performs a fourth touch-slide operation in the control area, control the candidate texts in the scrolling list to scroll according to the sliding direction corresponding to the fourth touch-slide operation.
  • a "unit" may be a part of a circuit, a part of a processor, a part of a program or software, etc., of course, may also be a module, or may be non-modular.
  • the various components in this embodiment may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated unit can be realized in the form of hardware or software function module.
• if the integrated unit is implemented in the form of a software function module and is not sold or used as an independent product, it may be stored in a computer-readable storage medium.
• based on this understanding, the technical solution of this embodiment, in essence, or the part that contributes to the existing technology, or all or part of the technical solution, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) or a processor to execute all or part of the steps of the method described in this embodiment.
  • the aforementioned storage media include: U disk, mobile hard disk, read only memory (Read Only Memory, ROM), random access memory (Random Access Memory, RAM), magnetic disk or optical disk and other media that can store program codes.
• an embodiment of the present application provides a computer storage medium, which is applied to the mobile device 90; the computer storage medium stores a computer program that, when executed by a first processor, implements the method described in any one of the foregoing embodiments.
  • FIG. 10 shows a schematic diagram of the hardware structure of a mobile device 90 provided by an embodiment of the present application.
  • a mobile device 90 may include: a first communication interface 1001, a first memory 1002, and a first processor 1003; various components are coupled together through a first bus system 1004.
  • the first bus system 1004 is used to implement connection and communication between these components.
  • the first bus system 1004 also includes a power bus, a control bus, and a status signal bus.
  • various buses are marked as the first bus system 1004 in FIG. 10.
  • the first communication interface 1001 is used for receiving and sending signals in the process of sending and receiving information with other external network elements;
  • the first memory 1002 is configured to store a computer program that can run on the first processor 1003;
• the first processor 1003 is configured to, when running the computer program, execute: receiving a first operation instruction from the keyboard area; displaying at least one candidate text in the control area, the at least one candidate text being generated according to the first operation instruction; receiving a second operation instruction from the control area; and determining the target text from the at least one candidate text and sending the target text to the text input interface of the head-mounted display device, wherein the target text is generated according to the second operation instruction.
  • the first memory 1002 in the embodiment of the present application may be a volatile memory or a non-volatile memory, or may include both volatile and non-volatile memory.
• the non-volatile memory can be read-only memory (Read-Only Memory, ROM), programmable read-only memory (Programmable ROM, PROM), erasable programmable read-only memory (Erasable PROM, EPROM), electrically erasable programmable read-only memory (Electrically EPROM, EEPROM), or flash memory.
  • the volatile memory may be a random access memory (Random Access Memory, RAM), which is used as an external cache.
• by way of exemplary but not restrictive description, many forms of RAM are available, such as static random access memory (Static RAM, SRAM), dynamic random access memory (Dynamic RAM, DRAM), synchronous dynamic random access memory (Synchronous DRAM, SDRAM), double data rate synchronous dynamic random access memory (Double Data Rate SDRAM, DDR SDRAM), enhanced synchronous dynamic random access memory (Enhanced SDRAM, ESDRAM), synchlink dynamic random access memory (Synchlink DRAM, SLDRAM), and direct Rambus random access memory (Direct Rambus RAM, DRRAM).
  • the first processor 1003 may be an integrated circuit chip with signal processing capability. In the implementation process, the steps of the foregoing method can be completed by an integrated logic circuit of hardware in the first processor 1003 or instructions in the form of software.
• the aforementioned first processor 1003 may be a general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field programmable gate array (Field Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
  • the methods, steps, and logical block diagrams disclosed in the embodiments of the present application can be implemented or executed.
  • the general-purpose processor may be a microprocessor or the processor may also be any conventional processor or the like.
  • the steps of the method disclosed in the embodiments of the present application may be directly embodied as being executed and completed by a hardware decoding processor, or executed and completed by a combination of hardware and software modules in the decoding processor.
• the software module can be located in a mature storage medium in the field, such as a random access memory, flash memory, read-only memory, programmable read-only memory, electrically erasable programmable memory, or a register.
  • the storage medium is located in the first memory 1002, and the first processor 1003 reads the information in the first memory 1002, and completes the steps of the foregoing method in combination with its hardware.
  • the embodiments described in this application can be implemented by hardware, software, firmware, middleware, microcode, or a combination thereof.
  • the processing unit can be implemented in one or more application specific integrated circuits (ASIC), digital signal processor (Digital Signal Processing, DSP), digital signal processing equipment (DSP Device, DSPD), programmable Logic device (Programmable Logic Device, PLD), Field-Programmable Gate Array (Field-Programmable Gate Array, FPGA), general-purpose processors, controllers, microcontrollers, microprocessors, and others for performing the functions described in this application Electronic unit or its combination.
  • the technology described in this application can be implemented through modules (such as procedures, functions, etc.) that perform the functions described in this application.
• the software codes can be stored in a memory and executed by a processor.
  • the first processor 1003 is further configured to execute the method described in any one of the foregoing embodiments when the computer program is running.
  • This embodiment provides a mobile device, which may include a first display unit, a first receiving unit, and a first sending unit.
• since the operation interface of the mobile device is used to realize text input for the head-mounted display device, and the operation interface is divided into a keyboard area and a control area, the user can type with both hands to increase typing speed without gazing at the mobile device. This not only reduces the need for the user to shift their gaze to the mobile device's screen when using the mobile device as a text input device, but also improves the efficiency of text input.
  • FIG. 11 shows a schematic diagram of the composition structure of a head-mounted display device 110 provided by an embodiment of the present application.
  • the head-mounted display device 110 may include: a second display unit 1101, a second receiving unit 1102, and an input unit 1103; wherein,
  • the second display unit 1101 is configured to display a text input interface
  • the second receiving unit 1102 is configured to receive the target text sent by the mobile device
  • the input unit 1103 is configured to input the target text into the text input interface.
  • the second display unit 1101 is further configured to display the operation interface of the mobile device
  • the second receiving unit 1102 is specifically configured to receive the target text sent by the mobile device based on the response of the mobile device to the operation interface.
  • the operation interface includes a keyboard area and a control area, and the keyboard area includes a virtual keyboard;
• the second display unit 1101 is further configured to display the keyboard area and the control area in the display module of the head-mounted display device, and to highlight the selected key in the virtual keyboard.
  • the second display unit 1101 is further configured to display the position of the user's finger on the virtual keyboard with a preset mark.
  • the head-mounted display device 110 may further include a determining unit 1104 configured to determine the sliding direction of the user's finger based on at least one candidate text presented in the control area; wherein, the sliding direction It is used to instruct the user's finger to select the target text by touching and sliding on the operation interface of the mobile device.
  • a "unit" may be a part of a circuit, a part of a processor, a part of a program or software, etc., of course, it may also be a module, or it may also be non-modular.
  • the various components in this embodiment may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated unit can be realized in the form of hardware or software function module.
  • the integrated unit is implemented in the form of a software function module and is not sold or used as an independent product, it can be stored in a computer readable storage medium.
• this embodiment provides a computer storage medium applied to the head-mounted display device 110. The computer storage medium stores a computer program which, when executed by the second processor, implements the method described in any one of the foregoing embodiments.
• FIG. 12 shows a schematic diagram of the hardware structure of the head-mounted display device 110 provided by an embodiment of the present application.
• the head-mounted display device 110 may include: a second communication interface 1201, a second memory 1202, and a second processor 1203; the various components are coupled together through a second bus system 1204.
  • the second bus system 1204 is used to implement connection and communication between these components.
  • the second bus system 1204 also includes a power bus, a control bus, and a status signal bus.
  • various buses are marked as the second bus system 1204 in FIG. 12.
  • the second communication interface 1201 is used for receiving and sending signals in the process of sending and receiving information with other external network elements;
  • the second memory 1202 is configured to store a computer program that can run on the second processor 1203;
• the second processor 1203 is configured to, when running the computer program, execute: displaying a text input interface; and receiving the target text sent by the mobile device and inputting the target text into the text input interface.
  • the second processor 1203 is further configured to execute the method described in any one of the foregoing embodiments when the computer program is running.
  • the hardware function of the second memory 1202 is similar to that of the first memory 1002, and the hardware function of the second processor 1203 is similar to that of the first processor 1003; it will not be described in detail here.
  • This embodiment provides a head-mounted display device, which includes a second display unit, a second receiving unit, and an input unit.
• since the operation interface of the mobile device is used to realize text input for the head-mounted display device, and the operation interface is divided into a keyboard area and a control area, the user can type with both hands to increase typing speed without gazing at the mobile device. This not only reduces the need for the user to shift their gaze to the mobile device's screen when using the mobile device as a text input device, but also improves the efficiency of text input.
• since the screen of the mobile device is divided into two parts, which display the keyboard area and the control area respectively, the user can operate with both hands and can easily perform text input for a head-mounted display device bound to the mobile device. At the same time, the operation interface of the mobile device uses elements the user is already familiar with; for example, the multi-letter keyboard layout is similar to the familiar T9 keyboard layout, which shortens the user's learning time. Using the operation interface of the mobile device to realize text input for the head-mounted display device not only reduces the need for users to shift their gaze to the mobile device's screen when using it as a text input device, but also improves the efficiency of text input.
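Taken together, the flow on the mobile-device side is: a first operation on the keyboard area produces candidate texts in the control area, and a second operation (a directional swipe) on the control area confirms the target text. A minimal sketch of that flow, assuming a T9-style key map and a four-direction candidate layout (all class, variable, and method names here are illustrative assumptions, not part of the disclosure):

```python
# Sketch of the two-area input flow: a "first operation instruction" on the
# keyboard area yields candidate texts, and a "second operation instruction"
# (a swipe direction) on the control area selects the target text.
T9_KEYS = {
    "2": "abc", "3": "def", "4": "ghi", "5": "jkl",
    "6": "mno", "7": "pqrs", "8": "tuv", "9": "wxyz",
}

# Up to four candidates are laid out on the four swipe directions; the
# direction-to-slot assignment here is an arbitrary illustrative choice.
DIRECTIONS = ["up", "right", "down", "left"]

class TextEntrySession:
    def __init__(self):
        self.candidates = []

    def on_keyboard_key(self, key):
        """First operation instruction: selecting a multi-letter key
        populates the control area with its candidate letters."""
        self.candidates = list(T9_KEYS[key])
        return self.candidates

    def on_control_swipe(self, direction):
        """Second operation instruction: a directional swipe in the
        control area confirms one candidate as the target text."""
        index = DIRECTIONS.index(direction)
        return self.candidates[index]

session = TextEntrySession()
session.on_keyboard_key("7")                # candidates: p, q, r, s
target = session.on_control_swipe("right")  # second slot -> "q"
```

In a real system the confirmed target would then be sent over the wired or wireless link to the head-mounted display device's text input interface.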

Abstract

Embodiments of the present application disclose a text input method, a mobile device, a head-mounted display device, and a storage medium, applied to a mobile device whose operation interface includes a keyboard area and a control area. The method includes: receiving a first operation instruction from the keyboard area; displaying at least one candidate text in the control area; receiving a second operation instruction from the control area; determining a target text from the at least one candidate text, and sending the target text to a text input interface of a head-mounted display device. Since the operation interface of the mobile device is used to implement text input for the head-mounted display device, and the operation interface is divided into a keyboard area and a control area, this not only reduces the need for the user to shift their gaze to the mobile device's screen when using it as a text input device, but also improves text input efficiency.

Description

Text input method, mobile device, head-mounted display device, and storage medium
Cross-reference to related applications
This application claims priority to prior US provisional patent application No. 63/009,862, entitled "Text Entry Interface for Head-Mounted Display", filed on April 14, 2020 in the names of Buyi Xu and Yi Xu, the entire contents of which are incorporated herein by reference.
Technical field
Embodiments of the present application relate to the field of visual enhancement technology, and in particular to a text input method, a mobile device, a head-mounted display device, and a storage medium.
Background
In recent years, with the development of visual enhancement technologies such as virtual reality (VR), augmented reality (AR), and mixed reality (MR), a virtual three-dimensional world can be simulated by a computer system, enabling users to interact with virtual scenes and giving them an immersive experience.
Head-mounted display devices (HMD) may include VR devices, AR devices, and MR devices. For an HMD, the text input interface is a very challenging problem. Typically, a text input interface can be implemented using a handheld controller. However, this approach is cumbersome, inconvenient for the user's input operations, and inefficient. In addition, in some cases, although an existing text input interface of a mobile device (such as a smartphone) can be used, this approach also has drawbacks, such as requiring the user to watch the screen of the mobile device.
Summary
Embodiments of the present application provide a text input method, a mobile device, a head-mounted display device, and a storage medium, which can not only reduce the need for the user to shift their gaze to the mobile device's screen when using the mobile device as a text input device, but also improve text input efficiency.
The technical solutions of the embodiments of the present application may be implemented as follows:
In a first aspect, an embodiment of the present application provides a text input method applied to a mobile device, the operation interface of which includes a keyboard area and a control area. The method includes:
receiving a first operation instruction from the keyboard area;
displaying at least one candidate text in the control area, the at least one candidate text being generated according to the first operation instruction;
receiving a second operation instruction from the control area;
determining a target text from the at least one candidate text, and sending the target text to a text input interface of a head-mounted display device; wherein the target text is generated according to the second operation instruction.
In a second aspect, an embodiment of the present application provides a text input method applied to a head-mounted display device. The method includes:
displaying a text input interface;
receiving a target text sent by a mobile device, and inputting the target text into the text input interface.
In a third aspect, an embodiment of the present application provides a mobile device including a first display unit, a first receiving unit, and a first sending unit, wherein:
the first receiving unit is configured to receive a first operation instruction from a keyboard area, wherein the operation interface of the mobile device includes the keyboard area and a control area;
the first display unit is configured to display at least one candidate text in the control area, the at least one candidate text being generated according to the first operation instruction;
the first receiving unit is further configured to receive a second operation instruction from the control area;
the first sending unit is configured to determine a target text from the at least one candidate text and send the target text to a text input interface of a head-mounted display device, wherein the target text is generated according to the second operation instruction.
In a fourth aspect, an embodiment of the present application provides a mobile device including a first memory and a first processor, wherein:
the first memory is configured to store a computer program that can run on the first processor;
the first processor is configured to execute the method of any one of the first aspect when running the computer program.
In a fifth aspect, an embodiment of the present application provides a head-mounted display device including a second display unit, a second receiving unit, and an input unit, wherein:
the second display unit is configured to display a text input interface;
the second receiving unit is configured to receive a target text sent by a mobile device;
the input unit is configured to input the target text into the text input interface.
In a sixth aspect, an embodiment of the present application provides a head-mounted display device including a second memory and a second processor, wherein:
the second memory is configured to store a computer program that can run on the second processor;
the second processor is configured to execute the method of any one of the second aspect when running the computer program.
In a seventh aspect, an embodiment of the present application provides a computer storage medium storing a computer program which, when executed by a first processor, implements the method of any one of the first aspect, or, when executed by a second processor, implements the method of any one of the second aspect.
Embodiments of the present application provide a text input method, a mobile device, a head-mounted display device, and a storage medium. On the mobile device side, the operation interface of the mobile device includes a keyboard area and a control area; a first operation instruction is received from the keyboard area; at least one candidate text generated according to the first operation instruction is displayed in the control area; a second operation instruction is received from the control area; a target text generated according to the second operation instruction is determined from the at least one candidate text and sent to a text input interface of a head-mounted display device. On the head-mounted display device side, a text input interface is displayed, and the target text sent by the mobile device is received and entered into the text input interface. Because the mobile device's operation interface is used to implement the head-mounted display device's text input and is divided into a keyboard area and a control area, the user can type with both hands to increase typing speed without gazing at the mobile device, which not only reduces the need to shift one's gaze to the mobile device's screen when using it as a text input device but also improves text input efficiency.
Brief description of the drawings
FIG. 1 is a schematic diagram of an application scenario of a visual enhancement system provided by the related art;
FIG. 2 is a schematic diagram of a text input application scenario of a handheld controller provided by the related art;
FIG. 3 is a schematic flowchart of a text input method provided by an embodiment of the present application;
FIG. 4 is a schematic layout diagram of an operation interface provided by an embodiment of the present application;
FIG. 5 is a schematic flowchart of another text input method provided by an embodiment of the present application;
FIG. 6 is a schematic layout diagram of another operation interface provided by an embodiment of the present application;
FIG. 7 is a schematic flowchart of yet another text input method provided by an embodiment of the present application;
FIG. 8 is a schematic layout diagram of yet another operation interface provided by an embodiment of the present application;
FIG. 9 is a schematic diagram of the composition structure of a mobile device provided by an embodiment of the present application;
FIG. 10 is a schematic diagram of the hardware structure of a mobile device provided by an embodiment of the present application;
FIG. 11 is a schematic diagram of the composition structure of a head-mounted display device provided by an embodiment of the present application;
FIG. 12 is a schematic diagram of the hardware structure of a head-mounted display device provided by an embodiment of the present application.
Detailed description
To enable a more thorough understanding of the features and technical content of the embodiments of the present application, their implementation is described in detail below with reference to the accompanying drawings, which are provided for reference and illustration only and are not intended to limit the embodiments of the present application.
Unless otherwise defined, all technical and scientific terms used herein have the same meanings as commonly understood by those skilled in the technical field of the present application. The terms used herein are only for the purpose of describing the embodiments of the present application and are not intended to limit the application.
In the following description, reference to "some embodiments" describes a subset of all possible embodiments; it can be understood that "some embodiments" may be the same or different subsets of all possible embodiments, and may be combined with each other where no conflict arises. It should also be noted that the terms "first/second/third" in the embodiments of the present application are used only to distinguish similar objects and do not imply a particular order; where permitted, the specific order or sequence may be interchanged so that the embodiments described herein can be implemented in an order other than that illustrated or described.
Before further describing the embodiments of the present application in detail, the terms involved are explained as follows:
Augmented reality (AR) enhances images seen on a screen or other display by superimposing computer-generated images, sound, or other data on the real world.
Mixed reality (MR) not only superimposes virtual objects on the real world, but also anchors them to the real world and allows the user to interact with the combined virtual/real objects.
A head-mounted display (HMD) is a display device worn on the head or as part of a helmet, with display optics in front of one or both eyes.
An optical see-through HMD (OST-HMD) is a type of HMD that allows the user to see through the screen. In the embodiments of the present application, most MR glasses are of this type (e.g., HoloLens, Magic Leap). Another type of HMD is the video pass-through HMD.
Referring to FIG. 1, it shows a schematic diagram of an application scenario of a visual enhancement system provided by an embodiment of the present application. As shown in FIG. 1, the visual enhancement system 10 may include a head-mounted display device 110 and a mobile device 120, connected for communication by wire or wirelessly.
Here, the head-mounted display device 110 may be a monocular or binocular head-mounted display (HMD), such as AR glasses. In FIG. 1, the head-mounted display device 110 may include one or more display modules 111 placed near one or both eyes of the user. Through the display module 111 of the head-mounted display device 110, the displayed content can be presented in front of the user's eyes and can fill or partially fill the user's field of view. It should also be noted that the display module 111 may be one or more organic light-emitting diode (OLED) modules, liquid crystal display (LCD) modules, laser display modules, etc.
In addition, in some embodiments, the head-mounted display device 110 may also include one or more sensors and one or more cameras. For example, the head-mounted display device 110 may include one or more sensors such as an inertial measurement unit (IMU), accelerometer, gyroscope, proximity sensor, and depth camera.
The mobile device 120 may be wirelessly connected to the head-mounted display device 110 according to one or more wireless communication protocols (e.g., Bluetooth, Wireless Fidelity (WIFI), etc.). Alternatively, the mobile device 120 may be wired to the head-mounted display device 110 via a data cable (such as a USB cable) according to one or more data transmission protocols such as Universal Serial Bus (USB). Here, the mobile device 120 may be implemented in various forms; for example, the mobile devices described in the embodiments of the present application may include smartphones, tablet computers, notebook computers, laptops, palmtop computers, personal digital assistants (PDA), smart watches, and so on.
In some embodiments, a user operating on the mobile device 120 may control operations at the head-mounted display device 110 via the mobile device 120. In addition, data collected by sensors in the head-mounted display device 110 may be sent back to the mobile device 120 for further processing or storage.
It can be understood that in the embodiments of the present application, the head-mounted display device 110 may include VR devices (such as HTC VIVE, Oculus Rift, SAMSUNG HMD Odyssey) and MR devices (such as Microsoft HoloLens 1&2, Magic Leap One, Nreal Light), MR devices being in some cases referred to as AR glasses. For an HMD, the text input interface is an important but very challenging problem. Typically, such a text input interface can be implemented using a handheld controller; however, this approach is cumbersome and inefficient, especially when the input text is long, and the large amount of movement associated with moving the controllers tends to fatigue users quickly. Therefore, the embodiments of the present application aim to provide an effective text input interface.
In the related art, there are several methods of text input using handheld controllers. Four of them are introduced below with reference to FIG. 2:
a) Raycasting. As shown in (a) of FIG. 2, this relatively popular method enters text in an "aim and shoot" style. The user aims at keys on a virtual keyboard using a virtual ray originating from the controller and confirms a key press by clicking a trigger button, usually on the back of the controller. The method can be used with one or two hands.
b) Drum-like. As shown in (b) of FIG. 2, the user uses the controllers like drumsticks on a virtual keyboard; a downward motion triggers a key input event.
c) Head-directed. As shown in (c) of FIG. 2, the user moves their head and points at the virtual keyboard using a virtual ray originating from the HMD (representing head direction). Confirmation is made by pressing a trigger button on the controller or a button on the HMD itself.
d) Split keyboard. As shown in (d) of FIG. 2, each controller is assigned a virtual keyboard. Key selection is performed by sliding a fingertip along the touchpad surface of the controller, and text input is confirmed by pressing the trigger button.
Here, the first two methods cause rapid fatigue due to the large amount of controller movement. The third method increases the possibility of dizziness because it involves moving the head. Although the last method does not involve much hand or head movement, sliding a fingertip to locate a key is inefficient when the keyboard has many keys.
Further, one possible alternative is to introduce a circular keyboard layout with multi-letter keys, which can be operated with one hand on the controller's touchpad. The circular layout matches the circular shape of the touchpad on some VR headset controllers. The method has a letter selection mode and a word selection mode. For word selection, it relies on word frequencies in the English language to offer the user multiple word choices based on a multi-letter key sequence. Although this method offers the convenience of one-handed operation without easily causing fatigue, it requires the user to learn a new keyboard layout; moreover, using only one hand also reduces the maximum input speed.
Further, another possible alternative is voice technology and mid-air typing with gesture tracking. However, voice input is error-prone and offers the user no privacy, while mid-air typing relies on cameras, gloves, or other devices to track gestures and is also relatively error-prone and fatiguing.
Further, yet another possible alternative involves using an additional input device for text input, for example using a smartwatch as the input device for smart glasses. In addition, for AR glasses bound to a mobile device (such as a smartphone) via a USB cable, or wirelessly via Bluetooth, WIFI, etc., a simple or direct option is to use the existing text input interface on the mobile device. Typically, mobile devices have a floating full keyboard (i.e., a QWERTY keyboard), a T9 keyboard, a handwriting interface, etc. However, all these methods require the user to watch the keyboard interface on the mobile device's screen. In MR/AR scenarios, the user may want to keep virtual objects or the physical world within sight, and in VR settings the user may be unable to see the mobile device at all, so the above methods are not ideal.
On this basis, an embodiment of the present application provides a text input method. On the mobile device side, the operation interface of the mobile device includes a keyboard area and a control area; a first operation instruction is received from the keyboard area; at least one candidate text generated according to the first operation instruction is displayed in the control area; a second operation instruction is received from the control area; a target text generated according to the second operation instruction is determined from the at least one candidate text and sent to a text input interface of a head-mounted display device. On the head-mounted display device side, the text input interface is displayed, and the target text sent by the mobile device is received and entered into it. Because the mobile device's operation interface is used for the head-mounted display device's text input and is divided into a keyboard area and a control area, the user can type with both hands to increase typing speed without gazing at the mobile device, which not only reduces the need to shift one's gaze to the mobile device's screen when using it as a text input device but also improves text input efficiency.
The embodiments of the present application are described in detail below with reference to the accompanying drawings.
In an embodiment of the present application, referring to FIG. 3, it shows a schematic flowchart of a text input method provided by an embodiment of the present application. As shown in FIG. 3, the method may include:
S301: Receive a first operation instruction from the keyboard area.
It should be noted that, for text input operations of the head-mounted display device, the embodiments of the present application may use the operation interface of a mobile device (such as a smartphone) as the operation interface for text input of the head-mounted display device. In addition, the user can operate with both hands to increase typing speed.
Thus, the operation interface displayed on the screen of the mobile device may include a keyboard area and a control area, so that the user can operate with both hands.
It should also be noted that the screen of the mobile device may be divided into two parts: a left area of the screen and a right area of the screen. In some embodiments, the method further includes:
displaying, on the screen of the mobile device, the keyboard area in the left area of the screen and the control area in the right area of the screen.
That is, for the operation interface, in one specific example the keyboard area may be displayed in the left area of the screen and the control area in the right area of the screen; in another specific example, the keyboard area may be displayed in the right area and the control area in the left area. Whether the keyboard area is displayed on the left or the right side of the screen (or equivalently, where the control area is displayed) may be determined by user preference or other factors, and is not specifically limited in the embodiments of the present application.
In addition, the sizes of the left and right areas into which the mobile device's screen is divided can be adjusted. In some embodiments, the method may further include:
resizing the left area and the right area based on the screen size of the mobile device and the size of the user's hands.
It should be noted that the sizes of the left and right areas can be adapted according to the screen size of the mobile device and the size of the user's hands, or even adjusted according to user preference, to make operation more convenient for the user.
In the embodiments of the present application, the keyboard area may include a virtual keyboard. Depending on keyboard layout, the virtual keyboard may include at least one of: a circular-layout keyboard, a QWERTY keyboard, a T9 keyboard, a QuickPath keyboard, a Swype keyboard, and a predefined keyboard.
Here, the QWERTY keyboard, also known as the full keyboard, is currently the most widely used keyboard layout. The T9 keyboard is the traditional feature-phone keyboard with relatively few keys, commonly only the digit keys 1-9, each digit key carrying three or so letters, enabling "all characters from nine digit keys". The QuickPath keyboard, also called a slide keyboard, allows the user to type with gestures and is typically found on iOS devices. Swype is a touchscreen keyboard that lets the user complete input by lightly sliding a thumb or another finger across the letters.
In addition, the predefined keyboard may be a keyboard different from the QWERTY, T9, QuickPath, and Swype keyboards, and may be customized according to user needs. In the embodiments of the present application, the user may select a target keyboard from the above virtual keyboards according to actual needs, without limitation here.
It should also be noted that in the embodiments of the present application the screen of the mobile device may be placed horizontally, so that the keyboard area and the control area can be arranged on the mobile device's screen. Referring to FIG. 4, it shows a schematic layout of an operation interface provided by an embodiment of the present application. As shown in FIG. 4, the mobile device's screen is horizontal, and the operation interface (including keyboard area 401 and control area 402) is displayed on it. The screen is divided into two parts: the left area displays keyboard area 401, which contains a multi-letter keyboard layout similar to a T9 keyboard; the right area is control area 402, which can present at least one candidate text, such as p, q, r, s.
S302: Display at least one candidate text in the control area, the at least one candidate text being generated according to the first operation instruction.
It should be noted that a virtual keyboard is placed in the keyboard area, and the first operation instruction here may be generated by the user's finger performing a touch-slide operation on the virtual keyboard. That is, in some embodiments, receiving the first operation instruction from the keyboard area may include:
when it is detected that the user's finger performs a first touch-slide operation on the virtual keyboard, generating at least one candidate text according to the first touch-slide operation.
Here, the first operation instruction is generated based on the user's finger performing the first touch-slide operation on the virtual keyboard. In addition, after the at least one candidate text is generated, it is presented in the control area.
Specifically, if the user slides her finger within the keyboard area, one of the multi-letter keys (which may also be called "digit keys") can be selected. In a specific example, the user's finger here usually refers to a finger of the left hand, typically the left thumb, but it may be any other finger; this is not specifically limited in the embodiments of the present application.
In addition, the virtual keyboard has multiple letter keys. To conveniently give feedback on the key selected by the user, in one possible implementation the method may further include: when it is detected that the user's finger performs the first touch-slide operation on the virtual keyboard, highlighting the selected key in the virtual keyboard.
In another possible implementation, the method may further include: when it is detected that the user's finger slides onto a new key on the virtual keyboard, causing the mobile device to vibrate.
That is, if the mobile device detects that the user's finger performs a slide operation over the keys of the virtual keyboard to select one of the multi-letter keys, the selected key may be highlighted or otherwise emphasized on the mobile device's screen, for example distinguished by color, for feedback. Here, besides highlighting, the embodiments of the present application may also provide other types of feedback, such as vibrating the mobile device when the user's finger slides onto a new letter key. Furthermore, the operation interface of the mobile device may even be displayed in the head-mounted display device to give the user feedback on the selected key.
In this way, once the keyboard area receives the first operation instruction and the selected key is determined, at least one candidate text is presented in the control area according to the selected key. The candidate text may be a letter/digit, a word, or a Chinese character, depending mainly on the input mode.
In the embodiments of the present application, the keyboard area may support multiple input modes. Here, the multiple input modes may include at least a letter input mode and a word input mode, and possibly other input modes such as a Chinese character input mode. In some embodiments, the method may further include:
receiving a third operation instruction from the control area;
controlling, according to the third operation instruction, the keyboard area to switch among the multiple input modes.
In a specific example, controlling the keyboard area to switch among the multiple input modes according to the third operation instruction may include:
when it is detected that the user's finger performs a double-tap operation in the control area, controlling the keyboard area to switch among the multiple input modes.
That is, if the user uses a simple gesture, such as a double tap in the control area (i.e., the right area), in other words the mobile device receives the third operation instruction, switching among the multiple input modes can be performed.
It can be understood that for these input modes there is a correspondence between candidate texts and input modes. The letter input mode and the word input mode are described below as examples.
In one possible implementation, when the input mode is the letter input mode, displaying at least one candidate text in the control area may include:
when it is detected that the user's finger performs a first touch-slide operation on the virtual keyboard, determining the key selected by the user's finger in the keyboard area, and displaying at least one candidate letter in the control area according to the selected key.
Further, to conveniently give feedback on the selected key, in a specific example the method may further include: highlighting the selected key.
That is, in the letter input mode, the user can slide her left thumb (or any finger she chooses) over the keyboard area to select one of the multi-letter keys. The selected key may be highlighted on the mobile device's screen for feedback. In addition, in the embodiments of the present application, other types of feedback may be provided, such as vibrating the mobile device when sliding onto a new key.
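The highlight-and-vibrate feedback described above can be sketched as follows; the key grid geometry, the key labels, and the `vibrate` callback are all illustrative assumptions rather than part of the disclosed implementation:

```python
# Sketch of keyboard-area feedback: as the finger slides, the key under it
# is highlighted, and crossing onto a *new* key triggers a haptic cue.
KEY_W, KEY_H = 100, 80  # assumed size of each virtual key, in pixels

KEYS = [["1", "2", "3"],
        ["4", "5", "6"],
        ["7", "8", "9"]]

def key_at(x, y):
    """Map a touch coordinate to the key under it, or None if outside."""
    row, col = int(y // KEY_H), int(x // KEY_W)
    if 0 <= row < len(KEYS) and 0 <= col < len(KEYS[0]):
        return KEYS[row][col]
    return None

def track_touch(points, vibrate):
    """Follow a touch-slide; return the highlighted key after each sample,
    calling vibrate() whenever the finger crosses onto a different key."""
    highlighted = None
    trail = []
    for x, y in points:
        key = key_at(x, y)
        if key is not None and key != highlighted:
            highlighted = key
            vibrate()  # haptic feedback on entering a new key
        trail.append(highlighted)
    return trail

buzzes = []
trail = track_touch([(50, 40), (60, 45), (150, 40)], lambda: buzzes.append(1))
# trail: "1" stays highlighted, then "2" once the finger crosses over
```

On a real device the `vibrate` callback would invoke the platform's haptics API, and the highlighted key would also be mirrored to the head-mounted display.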
In another possible implementation, when the input mode is the word input mode, displaying at least one candidate text in the control area may include:
when it is detected that the user's finger performs a first touch-slide operation on the virtual keyboard, determining the sliding track of the user's finger in the keyboard area, and displaying at least one candidate word in the control area according to the sliding track.
Further, in some embodiments, presenting at least one candidate word in the control area according to the sliding track may include:
if it is detected that the dwell time of the user's finger on at least one preset key in the sliding track is greater than a first preset time, determining that the at least one preset key is selected;
generating at least one candidate word according to the order of the at least one preset key in the sliding track, and displaying it in the control area.
It should be noted that for repeatedly typing the same letter key, in some embodiments the method may further include:
if it is detected that the dwell time of the user's finger on a first preset key in the sliding track is greater than a second preset time, determining that the first preset key is repeatedly selected; or,
if it is detected that the user's finger stays on the first preset key in the sliding track while performing a tap operation in the control area, determining that the first preset key is repeatedly selected;
here, the first preset key is any key in the virtual keyboard.
It should also be noted that the first preset time and the second preset time may differ: the first preset time is used to judge whether a preset key in the sliding track has been selected, while the second preset time is used to judge whether a preset key in the sliding track has been selected repeatedly.
That is, in the word input mode, the virtual keyboard in the keyboard area operates similarly to the QuickPath keyboard on iOS devices and the Swype keyboard on Android devices. On a Swype keyboard, the user can slide a finger over each letter of a word without tapping each one individually and without lifting the finger. An algorithm for determining the selected letter keys can then be implemented, for example, by detecting pauses in the path. In the embodiments of the present application, the user can slide the left thumb on the virtual keyboard, and a group of candidate words matching the selected key sequence can then be displayed in the control area.
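The pause-detection step above can be sketched as follows. The two thresholds stand in for the "first preset time" (dwell that selects a key) and the "second preset time" (longer dwell that selects it twice, as in the double "p" of "app"); the threshold values and the trace format are illustrative assumptions:

```python
# Sketch of dwell-based key selection along a sliding track.
DWELL_THRESHOLD = 0.15   # "first preset time", seconds (illustrative)
REPEAT_THRESHOLD = 0.60  # "second preset time", seconds (illustrative)

def selected_keys(trace):
    """trace: ordered (key, timestamp) samples along the sliding track.
    A run of samples on one key selects that key once if the dwell exceeds
    the first threshold, and twice if it exceeds the second threshold."""
    selected = []
    i = 0
    while i < len(trace):
        key, start = trace[i]
        j = i
        # Extend the run while the finger stays on the same key.
        while j + 1 < len(trace) and trace[j + 1][0] == key:
            j += 1
        dwell = trace[j][1] - start
        if dwell > REPEAT_THRESHOLD:
            selected.extend([key, key])  # e.g. the repeated "p" in "app"
        elif dwell > DWELL_THRESHOLD:
            selected.append(key)
        i = j + 1
    return selected

# Finger pauses on "a", pauses long on "p", then brushes over "x":
trace = [("a", 0.0), ("a", 0.2), ("p", 0.25), ("p", 0.95), ("x", 1.0)]
```

A production recognizer would typically combine dwell with direction changes and a language model, but the dwell rule alone already captures the behavior the embodiment describes.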
Taking "app" as an example, a-p-p must be typed, which involves repeatedly typing the same letter key; the user can keep pressing that letter key and pause briefly, or quickly tap the control area with the right thumb, to confirm the repeated key input.
In addition, the embodiments of the present application can also support foreign-language input, such as a Chinese character input mode. In some embodiments, when the input mode is the Chinese character input mode, displaying at least one candidate text in the control area may include:
when it is detected that the user's finger performs a first touch-slide operation on the virtual keyboard, determining the sliding track of the user's finger in the keyboard area, and displaying at least one candidate Chinese character in the control area according to the sliding track.
In the embodiments of the present application, the Chinese character input mode is similar to the word input mode. In a specific example, various schemes (e.g., Pinyin) can be used to enter Chinese characters as words composed of English letters; thus, Chinese text input can also be achieved through the word input mode.
In this way, after receiving the first operation instruction from the keyboard area, the mobile device can generate at least one candidate text (such as letters, words, or Chinese characters) according to the first operation instruction and present it in the control area, so as to further determine the target text to be entered.
S303: Receive a second operation instruction from the control area.
S304: Determine a target text from the at least one candidate text and send the target text to the text input interface of the head-mounted display device; wherein the target text is generated according to the second operation instruction.
It should be noted that the selection of the target text may be determined by the second operation instruction received by the mobile device from the control area. Here, the second operation instruction may be generated by the user's finger performing a touch-slide operation on the control area.
In some embodiments, receiving the second operation instruction from the control area and determining the target text from the at least one candidate text may include:
when it is detected that the user's finger performs a second touch-slide operation in the control area, determining the target text from the at least one candidate text according to the sliding direction corresponding to the second touch-slide operation; wherein the second operation instruction is generated based on the user's finger performing the second touch-slide operation in the control area.
Specifically, if the user slides her finger within the control area, the target text can be selected. In a specific example, the user's finger here usually refers to a finger of the right hand, typically the right thumb, but it may be any other finger; this is not specifically limited in the embodiments of the present application.
It should also be noted that, for the at least one candidate text displayed in the control area, the target text may be selected based on the swipe gesture (specifically, the sliding direction) the user performs on the right area of the screen. As shown in FIG. 4, four candidate texts p, q, r, and s are presented in the control area: the letter q is displayed at the top, so swiping up selects and confirms q; the letter r is on the right, so swiping right selects and confirms r; the letter p is on the left, so swiping left selects and confirms p; the letter s is at the bottom, so swiping down selects and confirms s.
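The direction-to-candidate mapping in the FIG. 4 example (p left, q top, r right, s bottom) can be sketched as follows; the dominant-axis classifier and coordinate convention are illustrative assumptions:

```python
# Sketch of classifying a control-area swipe into one of four directions
# and mapping it to the candidate displayed on that side.
def swipe_direction(start, end):
    """Classify a swipe by its dominant axis (screen y grows downward)."""
    dx, dy = end[0] - start[0], end[1] - start[1]
    if abs(dx) >= abs(dy):
        return "right" if dx > 0 else "left"
    return "down" if dy > 0 else "up"

# Layout from the FIG. 4 example: p left, q top, r right, s bottom.
LAYOUT = {"left": "p", "up": "q", "right": "r", "down": "s"}

def pick_candidate(start, end):
    """Return the candidate confirmed by the swipe from start to end."""
    return LAYOUT[swipe_direction(start, end)]
```

A fuller implementation would also reject swipes below a minimum length so that taps and swipes remain distinguishable.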
Further, in some embodiments, the method may further include: if there is only one candidate text, determining the target text according to the second operation instruction when it is detected that the user's finger performs a slide operation in any direction in the control area, or performs a single-tap operation in the control area.
That is, if there is only one candidate text, i.e., a function key such as Backspace, Space, or Enter is selected, only one candidate text is available in the control area, and the target text to be entered can be selected and confirmed by swiping in any direction, tapping, or similar operations.
It can also be understood that only a limited number of direction options can be displayed in the control area. For example, a four-direction (up, down, left, right) layout can present four candidate texts for selection. Six-direction and eight-direction layouts are also possible, depending on user preference and the mobile device's ability to distinguish sliding directions; this is not specifically limited in the embodiments of the present application.
If the number of candidate texts exceeds the number of available directions, two buttons, a first button and a second button, can be provided in the control area to switch the display among multiple groups of candidate texts.
In some embodiments, the method may further include:
receiving a fourth operation instruction from the control area;
controlling, according to the fourth operation instruction, the control area to switch the display among multiple groups of candidate texts.
In a specific example, the control area includes the first button and the second button, and controlling the control area to switch the display among multiple groups of candidate texts according to the fourth operation instruction may include:
when it is detected that the user's finger performs a single-tap operation on the first button or the second button in the control area, controlling the control area to switch the display among multiple groups of candidate texts.
In another specific example, controlling the control area to switch the display among multiple groups of candidate texts according to the fourth operation instruction may include:
when it is detected that the user's finger performs a third touch-slide operation toward the first button or the second button in the control area, controlling the control area to switch the display among multiple groups of candidate texts.
Here, the first button is used to trigger updating the display of the at least one candidate text to the next group, and the second button is used to trigger updating the display to the previous group.
It should be noted that, in the embodiments of the present application, two buttons may be displayed at the bottom of the control area: a "next group" button and a "previous group" button. The user then only needs to tap the "next group" or "previous group" button to browse multiple groups of candidate texts.
It should also be noted that the user can also trigger the previous or next group of candidate texts with a simple swipe toward the corresponding button. For example, the "lower-left diagonal" and "lower-right diagonal" sliding directions are reserved for browsing groups of candidates, while the "up", "down", "left", and "right" directions are used to select the target text. In a specific example, the "previous group" button is located at the lower-left corner of the control area and the "next group" button at the lower-right corner; if the user's finger slides toward the "previous group" button (e.g., along the lower-left diagonal), the previous group of candidates relative to the current group can be browsed; if it slides toward the "next group" button (e.g., along the lower-right diagonal), the next group can be browsed.
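The grouping and paging behavior can be sketched as follows; the group size of four (matching a four-direction layout) and the class shape are illustrative assumptions:

```python
# Sketch of paging through groups of candidates when there are more
# candidates than available swipe directions.
GROUP_SIZE = 4  # one candidate per direction in a four-direction layout

class CandidatePager:
    def __init__(self, candidates):
        self.groups = [candidates[i:i + GROUP_SIZE]
                       for i in range(0, len(candidates), GROUP_SIZE)]
        self.index = 0

    def current(self):
        return self.groups[self.index]

    def next_group(self):
        """'Next group' button tap, or a lower-right diagonal swipe."""
        self.index = min(self.index + 1, len(self.groups) - 1)
        return self.current()

    def prev_group(self):
        """'Previous group' button tap, or a lower-left diagonal swipe."""
        self.index = max(self.index - 1, 0)
        return self.current()
```

Paging is clamped at both ends here; a circular variant (wrapping from the last group back to the first) would be an equally valid design choice.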
Further, the at least one candidate text can also be arranged in list form. In some embodiments, the method may further include:
setting the at least one candidate text as a scrolling list;
when it is detected that the user's finger performs a fourth touch-slide operation in the control area, controlling the candidate texts in the scrolling list to scroll according to the sliding direction corresponding to the fourth touch-slide operation.
It should be noted that in the word input mode or the Chinese character input mode, the number of candidate texts presented in the control area is large. To facilitate selection, the at least one candidate text can be set as a scrolling list. The user can scroll the list by swiping up or down, with one candidate text in the list highlighted; a different swipe operation can then select and confirm the highlighted text as the target text to be entered.
It should also be noted that the list may be a vertical list or a circular list. In addition, the display order of the candidate texts in the list may be determined by user preference or in other ways; for example, words may be ordered by word frequency in an English corpus (e.g., with the most frequent word at the top), but the embodiments of the present application do not impose any limitation.
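The scrolling-list behavior can be sketched as follows; the wrap-around (circular list) choice and the gesture names are illustrative assumptions:

```python
# Sketch of the scrolling candidate list: up/down swipes move the
# highlight through the list, and a confirm gesture returns the
# highlighted word. Modular indexing makes it behave as a circular list.
class ScrollList:
    def __init__(self, words):
        self.words = words
        self.highlight = 0  # index of the highlighted candidate

    def swipe(self, direction):
        """Scroll the highlight one step ('down' or 'up') and return
        the newly highlighted candidate."""
        step = 1 if direction == "down" else -1
        self.highlight = (self.highlight + step) % len(self.words)
        return self.words[self.highlight]

    def confirm(self):
        """A distinct gesture confirms the highlighted candidate as the
        target text to be sent to the head-mounted display."""
        return self.words[self.highlight]
```

For a strictly vertical (non-circular) list, the modular step would simply be clamped to the list bounds instead.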
Further, in the embodiments of the present application, if the same letter key needs to be typed repeatedly (e.g., for "app", a-p-p must be typed), the user can keep pressing that letter key and pause briefly; alternatively, the user can quickly tap the right area of the screen with her finger to confirm the repeated key input.
In addition, in the embodiments of the present application, glove-based or camera-based gesture recognition can also be used to implement mid-air typing in a similar manner, thereby sending the target text to the text input interface of the head-mounted display device.
In short, since the screen of the mobile device is divided into two parts displaying the keyboard area and the control area respectively, both hands can be used to operate, the user can easily enter text for a head-mounted display device bound to the mobile device, and the operation interface of a new computing device is realized using elements the user is already familiar with. In the mobile device's operation interface, familiar elements, such as a multi-letter keyboard layout similar to the familiar T9 layout, can shorten the user's learning time.
This embodiment provides a text input method applied to a mobile device. The operation interface of the mobile device includes a keyboard area and a control area; a first operation instruction is received from the keyboard area; at least one candidate text generated according to the first operation instruction is displayed in the control area; a second operation instruction is received from the control area; a target text generated according to the second operation instruction is determined from the at least one candidate text and sent to the text input interface of a head-mounted display device. Because the mobile device's operation interface is used for the head-mounted display device's text input and is divided into a keyboard area and a control area, the user can type with both hands to increase typing speed without gazing at the mobile device, which not only reduces the need to shift one's gaze to the mobile device's screen when using it as a text input device but also improves text input efficiency.
In another embodiment of the present application, referring to FIG. 5, it shows a schematic flowchart of another text input method provided by an embodiment of the present application. As shown in FIG. 5, the method may include:
S501: Display a text input interface.
S502: Receive the target text sent by the mobile device, and enter the target text into the text input interface.
It should be noted that the target text is determined by the mobile device receiving touch-slide operations performed by the user's fingers on the keyboard area and the control area of the mobile device, respectively.
It should also be noted that, for text input operations of the head-mounted display device, the embodiments of the present application may use the operation interface of a mobile device (such as a smartphone) as the operation interface for text input of the head-mounted display device. After the user operates on the mobile device, the target text can be sent to the head-mounted display device and then synchronized to the head-mounted display device's text input interface for display.
Here, to reduce the need for the user to shift their gaze to the mobile device's screen when using the mobile device as a text input device, the embodiments of the present application may display the mobile device's operation interface on the head-mounted display device to give the user operation feedback. In some embodiments, the method may further include: displaying, in the head-mounted display device, the operation interface of the mobile device;
accordingly, receiving the target text sent by the mobile device may include:
receiving the target text sent by the mobile device based on the mobile device's response to the operation interface.
It should be noted that, through the display module of the head-mounted display device, both the mobile device's operation interface and the text input interface can be displayed. When focusing on the mobile device's operation interface, the operation interface can be displayed, and the user then performs touch operations through the mobile device on its own operation interface, so that the target text is determined and synchronously entered into the text input interface of the head-mounted display device.
It should also be noted that the operation interface presented in the head-mounted display device is identical to the one presented on the mobile device itself. The operation interface may include a keyboard area and a control area, the keyboard area including a virtual keyboard. Thus, in some embodiments, displaying the operation interface of the mobile device may include:
displaying the keyboard area and the control area in the head-mounted display device, and highlighting the selected key in the virtual keyboard.
Further, in some embodiments, the method may further include: displaying the position of the user's finger on the virtual keyboard with a preset marker.
That is, in the embodiments of the present application, the keyboard area and the control area can also be displayed in the head-mounted display device. Since the keyboard area includes a virtual keyboard with multiple letter keys, the virtual keyboard and its letter keys can all be displayed in the head-mounted display device. When the user's finger touch-slides on the mobile device to select one of the multi-letter keys, the selected key can be highlighted both on the mobile device's screen and on the head-mounted display device. In addition, for ease of feedback, the user's finger can be displayed on the head-mounted display device with a preset marker indicating its current position.
Illustratively, referring to FIG. 6, it shows a schematic layout of another operation interface provided by an embodiment of the present application. As shown in FIG. 6, the operation interface (including keyboard area 601 and control area 602) is displayed in the head-mounted display device. When the user's finger touch-slides on the mobile device to select the MNO key, that key can also be highlighted on the virtual keyboard in keyboard area 601 of the head-mounted display device's display module, and a marker representing the position of the user's finger (e.g., the black dot shown in FIG. 6) is displayed on the virtual keyboard.
It should also be noted that when the control area of the mobile device presents at least one candidate text, the control area 602 in the display module of the head-mounted display device synchronously presents the at least one candidate text. To help the user determine the sliding direction of their finger, in some embodiments the method may further include:
determining the sliding direction of the user's finger based on the at least one candidate text displayed in the control area; wherein the sliding direction is used to instruct the user's finger to select the target text by a touch-slide operation on the mobile device's operation interface.
That is, according to the at least one candidate text presented in the control area, the sliding direction of the user's finger can be determined, and the user then performs that touch-slide operation with their finger. Illustratively, still referring to FIG. 6, the letter N is at the top of the control area, so swiping up selects and confirms N as the target text; the letter M is on the left, so swiping left selects and confirms M as the target text; the letter O is on the right, so swiping right selects and confirms O as the target text. In this way, the user can use both hands together to enter letters, for example selecting letter keys in the keyboard area with the left hand and selecting the target text in the control area with the right hand.
In this way, after the target text is determined, the head-mounted display device can focus on the displayed text input interface and synchronize the target text into that interface for input and display.
This embodiment provides a text input method applied to a head-mounted display device: displaying a text input interface; receiving the target text sent by the mobile device and entering the target text into the text input interface. Because the mobile device's operation interface is used for the head-mounted display device's text input and is divided into a keyboard area and a control area, the user can type with both hands to increase typing speed without gazing at the mobile device, which not only reduces the need to shift one's gaze to the mobile device's screen when using it as a text input device but also improves text input efficiency.
本申请的又一实施例中,参见图7,其示出了本申请实施例提供的又一种文本输入方法的流程示意图。如图7所示,该方法可以包括:
S701:接收来自于键盘区域的第一操作指令。
S702:在控制区域显示至少一个候选文本。
S703:接收来自于控制区域的第二操作指令。
S704:从所述至少一个候选文本中确定目标文本。
需要说明的是,至少一个候选文本是根据第一操作指令生成,而目标文本是根据第二操作指令生成的。
还需要说明的是,步骤S701~S704的执行主体是移动设备。在移动设备确定出目标文本后,然后由移动设备将其发送给头戴式显示设备进行输入。
S705:将目标文本由移动设备发送至头戴式显示设备。
S706:将接收到的目标文本输入到头戴式显示设备的文本输入界面中。
在本申请实施例中，该方法应用于视觉增强系统。该视觉增强系统可以包括移动设备和头戴式显示设备。其中，移动设备与头戴式显示设备之间可通过数据电缆建立有线通信连接，还可通过无线通信协议建立无线通信连接。
这里,无线通信协议至少可以包括下述之一:蓝牙(Bluetooth)协议、无线保真(Wireless Fidelity,WIFI)协议、红外数据(Infrared Data Association,IrDA)协议和近距离传输(Near Field Communication,NFC)协议。根据任意一种无线通信协议,可以建立移动设备与头戴式显示设备之间的无线通信连接,以便进行数据和信息交互。
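移动设备将目标文本发送给头戴式显示设备时，可以在上述任一通信连接之上约定一个简单的消息格式。下面是一个假设性的“长度前缀 + JSON 负载”封包/解包草图，仅用于说明数据交互方式，消息字段名均为示例假设：

```python
import json
import struct


def encode_text_message(text):
    """移动设备侧：将目标文本封装为 4 字节大端长度前缀 + UTF-8 JSON 负载。"""
    payload = json.dumps({"type": "target_text", "text": text}).encode("utf-8")
    return struct.pack(">I", len(payload)) + payload


def decode_text_message(data):
    """头戴式显示设备侧：解析收到的字节流，返回目标文本。"""
    (length,) = struct.unpack(">I", data[:4])
    payload = json.loads(data[4:4 + length].decode("utf-8"))
    return payload["text"]
```

这样的封包格式与底层是蓝牙、WIFI还是数据电缆无关，便于两端各自实现收发逻辑。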
还需要说明的是，本申请实施例提供了使用移动设备作为头戴式显示设备（如AR眼镜）的文本输入操作界面的方法。如图4所示，移动设备的屏幕划分为两部分，其中，屏幕左侧区域用作显示键盘区域，可以是类似于T9键盘的多字母键盘布局；屏幕右侧区域用作显示控制区域，可以是用户在多个数字、字母、单词或汉字中进行选择的区域。从而用户可以使用双手操作来提高打字速度；而且在打字时，用户还无需注视移动设备屏幕，也即能够实现“盲打”（用户不需要盯着移动设备的屏幕）。
在本申请实施例中,键盘区域可以包括虚拟键盘,而虚拟键盘又包括有多个字母按键。其中,虚拟键盘根据键盘布局的差异,使得虚拟键盘至少可以包括下述其中一种:圆形布局键盘、QWERTY键盘、T9键盘、QuickPath键盘、Swype键盘和预定义键盘。
以字母输入模式为例，在字母输入模式下，用户可以在键盘区域上滑动她的左拇指（或者她选择的任何手指）以选择多字母按键之一。所选择的按键将在移动设备屏幕上高亮显示，以便提供反馈。在本申请实施例中，还可以提供其他类型的反馈，例如在滑动到新的按键上时，移动设备震动。这里，操作界面（包括键盘区域和控制区域）也可以在头戴式显示设备中进行显示，且相应的按键也可以在头戴式显示设备中突出显示。另外，还可以在头戴式显示设备的虚拟键盘上显示一些表示手指位置的标记（例如，黑点），具体如图6所示。
进一步地,对于每一个字母按键,其对应的一组字母将相应地显示在右侧区域(即控制区域)。然后,用户可在控制区域中使用滑动手势来选择目标文本(这里具体是指字母)。例如,在图6中,字母N显示在控制区域的上侧,这时候向上的滑动手势会选择并确认字母N为目标文本,并输入到头戴式显示设备的文本输入界面。这样,本申请实施例可以共同使用两只手来输入字母。但是,如果在控制区域只有一个选择可用(例如Backspace、Space、Enter等功能按键),那么这时候可以向任意方向滑动或单击以选择并确认目标文本,并输入到头戴式显示设备的文本输入界面。
还需要说明的是,键盘区域还可以支持单词输入模式。这时候可以使用简单的手势(例如,在屏幕右侧区域进行双击)来在两种输入模式之间进行切换。
以单词输入模式为例,键盘区域内的虚拟键盘的操作类似于iOS设备上的QuickPath键盘和安卓设备上的Swype键盘。其中,在Swype键盘上,用户无需单独敲击即可使用用户手指在单词的每个字母上滑动,而且不必抬起用户手指。然后,一种确定被选择字母的算法可以是通过例如检测路径中的停顿来实现的。在本申请实施例中,用户可以使用左拇指在虚拟键盘上滑动,然后可以在控制区域中显示与被选择按键顺序匹配的一组候选单词。示例性地,参见图8,其示出了本申请实施例提供的又一种操作界面的布局示意图。如图8所示,该操作界面可以显示在头戴式显示设备和/或移动设备中。当确定了ABC、MNO、TUV的按键顺序时,诸如“am”,“bot”,“cot”,“ant”等之类的单词就会显示在控制区域中,比如头戴式显示设备的右侧区域和/或移动设备的屏幕右侧区域;并且在头戴式显示设备中,还可以显示用于指示用户手指位置的标记(如图8中的黑点)。然后,可以在控制区域进行方向性滑动来确定可选择/确认单词,并将该单词输入到头戴式显示设备的文本输入界面。
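按键顺序到候选单词的匹配可以用如下Python草图示意（假设性实现：词表、按键分组以及“前缀匹配”规则均为示例假设，用于复现“ABC、MNO、TUV 对应 am/bot/cot/ant”这类效果）：

```python
# T9 式按键到字母组的对应关系（示例假设）
T9_KEYS = {
    "ABC": "abc", "DEF": "def", "GHI": "ghi", "JKL": "jkl",
    "MNO": "mno", "PQRS": "pqrs", "TUV": "tuv", "WXYZ": "wxyz",
}
LETTER_TO_KEY = {ch: key for key, letters in T9_KEYS.items() for ch in letters}


def match_candidates(key_sequence, lexicon):
    """返回字母按键序列等于 key_sequence 或为其前缀的单词。

    例如 'am' 的按键序列 ABC-MNO 是 ABC-MNO-TUV 的前缀，因此也作为候选。
    """
    out = []
    for word in lexicon:
        keys = [LETTER_TO_KEY[ch] for ch in word]
        if len(keys) <= len(key_sequence) and keys == key_sequence[:len(keys)]:
            out.append(word)
    return out
```

实际系统中通常还会结合语言模型或词频对匹配结果排序、截断，这里从略。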
另外,本申请实施例还可以支持外语输入,比如汉字输入模式。在一种具体的示例中,可以使用多种方案(例如,拼音)将汉字输入为由英文字母组成的单词。因此,中文文本的输入也可以通过单词输入模式来实现。
在一些实施例中,如果需要重复键入同一个字母按键(例如,对于app,要键入a-p-p),那么用户可以保持按住一个字母并短暂停顿。在另一实施例中,用户还可以使用右拇指快速点击右侧区域以确认重复按键的输入。
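上文提到的“检测路径中的停顿”以及“长按触发重复按键”可以用如下Python草图示意（假设性实现：采样数据格式与两个时间阈值均为示例假设）：

```python
def selected_keys(samples, dwell_threshold=0.15, repeat_threshold=0.5):
    """samples 为按时间排序的 (时间戳秒, 按键) 采样序列。

    在同一按键上停留超过 dwell_threshold 判为选择该键；
    停留超过 repeat_threshold 则追加一次重复选择（如输入 a-p-p 时的长按）。
    """
    keys = []
    i, n = 0, len(samples)
    while i < n:
        # 找到手指停留在同一按键上的连续采样段
        j = i
        while j + 1 < n and samples[j + 1][1] == samples[i][1]:
            j += 1
        dwell = samples[j][0] - samples[i][0]
        if dwell >= dwell_threshold:
            keys.append(samples[i][1])
            if dwell >= repeat_threshold:
                keys.append(samples[i][1])
        i = j + 1
    return keys
```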
在这里，右侧区域只能显示有限数量的方向选项。例如，可以使用四方向（上，下，左，右）布局在4个候选单词中进行选择。此外，六方向布局、八方向布局也是有可能的，具体取决于用户的喜好和移动设备区分滑动方向的能力。但是，如果可能的单词数量超过了可用的方向数量，那么可以在右侧区域底部显示两个按钮：“下一组”按钮和“上一组”按钮。这时候，只需要单击“下一组”按钮或者“上一组”按钮，用户就可以浏览多组可能的单词。在另一实施例中，朝向这两个按钮方向的简单滑动也可触发上一组单词和下一组单词显示出来（例如，左下对角线方向和右下对角线方向被预留给浏览单词组，而“向上”滑动、“向左”滑动、“向右”滑动用于选择目标单词）。
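候选单词数量超出可用方向数量时的分组浏览，可以用如下Python草图示意（假设性实现：按四方向布局每组4个候选，组间按环形切换，均为示例假设）：

```python
class CandidatePager:
    """把候选单词切成固定大小的组，支持“下一组”/“上一组”浏览。"""

    def __init__(self, candidates, slots=4):
        # slots 对应可用的滑动方向数量（四方向、六方向、八方向等）
        self.groups = [candidates[i:i + slots] for i in range(0, len(candidates), slots)]
        self.index = 0

    def current(self):
        return self.groups[self.index]

    def next_group(self):
        self.index = (self.index + 1) % len(self.groups)
        return self.current()

    def prev_group(self):
        self.index = (self.index - 1) % len(self.groups)
        return self.current()
```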
还需要说明的是,在又一实施例中,这些多个可能的单词还可以实现为可滚动的列表。用户可以使用上下滑动动作来滚动列表,并且将列表中的一个单词突出显示。另外,不同的滑动动作可以选择并确认突出显示的单词以供输入到头戴式显示设备的文本输入界面(例如,向右滑动)。在这里,该列表可以是垂直列表或者循环列表;对于列表中单词的显示顺序,可以基于英语语料库中的单词频率来确定。例如,频率最高的单词可显示在列表顶部。
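按语料库词频排序并上下滚动浏览候选单词的可滚动列表，可以用如下Python草图示意（假设性实现：词频表为示例数据，循环列表按取模实现）：

```python
def order_by_frequency(candidates, freq):
    """按词频从高到低排序；未收录的单词按频率 0 处理，同频按字母序保持稳定。"""
    return sorted(candidates, key=lambda w: (-freq.get(w, 0), w))


class ScrollList:
    """可上下滚动的循环候选列表，cursor 指向当前突出显示的单词。"""

    def __init__(self, words):
        self.words = words
        self.cursor = 0

    def scroll(self, step):
        # step 为正表示向下滚动，为负表示向上滚动；取模实现循环列表
        self.cursor = (self.cursor + step) % len(self.words)
        return self.words[self.cursor]
```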
除此之外，本申请实施例的一个扩展是键盘区域内的虚拟键盘可以具有不同的布局，例如圆形布局，或者甚至是传统QWERTY键盘布局等。其中，在一实施例中，可以基于屏幕的尺寸和用户的手部尺寸来调整左侧区域和右侧区域的尺寸。在另一实施例中，诸如Backspace、Space、Enter等之类的功能按键可以放置在右侧区域。然后，朝向功能按键的方向简单滑动即可触发该功能按键。在又一实施例中，在单词输入模式期间，用户还可以始终通过在右侧区域单击来确认字母按键的选择和输入，而非使用Swype式的滑动输入。
本申请实施例的另一个扩展是可以使用基于手套或基于相机的手势识别,以类似方式来实现空中打字,进而将目标文本输入到头戴式显示设备的文本输入界面。
这样,由于移动设备的屏幕划分为两部分,分别显示键盘区域和控制区域,从而能够使用双手进行操作,可以提高输入效率;而且使用带有少量按键的多字母键盘布局,还可以使得用户不需要一直注视在移动设备上(这在VR中是不可能的);相反,用户可以继续将虚拟内容或现实世界保持在她的视线范围内(在MR/AR中,更希望如此保持);另外,在键盘区域,多字母键盘布局类似于用户已经熟悉的T9键盘,还缩短了用户的学习时间。
本实施例提供了一种文本输入方法,通过上述实施例对前述实施例的具体实现进行详细阐述,从中可以看出,由于使用移动设备的操作界面来实现头戴式显示设备的文本输入,且移动设备的操作界面划分为键盘区域和控制区域,从而可以双手输入操作以提高打字速度,而且打字时用户还无需注视在移动设备上,如此不仅减小了用户使用移动设备作为文本输入设备时需要移动视线观看移动设备屏幕的情况,而且还提高了文本输入效率。
本申请的再一实施例中,基于前述实施例相同的发明构思,参见图9,其示出了本申请实施例提供的一种移动设备90的组成结构示意图。如图9所示,该移动设备90可以包括:第一显示单元901、第一接收单元902和第一发送单元903;其中,
第一接收单元902,配置为接收来自于键盘区域的第一操作指令;其中,所述移动设备的操作界面包括键盘区域和控制区域;
第一显示单元901,配置为在所述控制区域显示至少一个候选文本,所述至少一个候选文本是根据所述第一操作指令生成;
第一接收单元902,还配置为接收来自于所述控制区域的第二操作指令;
第一发送单元903,配置为从所述至少一个候选文本中确定目标文本,并将目标文本发送至头戴式显示设备的文本输入界面;其中,所述目标文本是根据第二操作指令生成。
在一些实施例中,第一显示单元901,还配置为在所述移动设备的屏幕中,将所述键盘区域显示在所述屏幕的左侧区域,以及将所述控制区域显示在所述屏幕的右侧区域。
在一些实施例中,参见图9,移动设备90还可以包括调整单元904,配置为基于所述移动设备的屏幕尺寸和用户的手部尺寸,对所述左侧区域和所述右侧区域进行尺寸调整。
在一些实施例中,所述键盘区域包括虚拟键盘,相应地,第一接收单元902,具体配置为当检测到用户手指在所述虚拟键盘上执行第一触摸滑动操作时,根据所述第一触摸滑动操作生成所述至少一个候选文本;其中,所述第一操作指令是基于用户手指在所述虚拟键盘上执行第一触摸滑动操作生成的。
在一些实施例中,所述键盘区域支持多种输入模式,所述多种输入模式至少包括字母输入模式和单词输入模式;相应地,第一接收单元902,还配置为接收来自于所述控制区域的第三操作指令;
第一显示单元901,还配置为根据所述第三操作指令,控制所述键盘区域在所述多种输入模式之间进行切换。
在一些实施例中,第一显示单元901,具体配置为当检测到用户手指在所述控制区域执行双击操作时,控制所述键盘区域在所述多种输入模式之间进行切换。
在一些实施例中,第一显示单元901,具体配置为在输入模式为字母输入模式的情况下,当检测到用户手指在所述虚拟键盘上执行第一触摸滑动操作时,确定所述用户手指在所述键盘区域的被选择按键,根据所述被选择按键在所述控制区域显示至少一个候选字母。
在一些实施例中,第一显示单元901,还配置为将所述被选择按键进行高亮显示。
在一些实施例中,第一显示单元901,具体配置为在输入模式为单词输入模式的情况下,当检测到用户手指在所述虚拟键盘上执行第一触摸滑动操作时,确定所述用户手指在所述键盘区域的滑动轨迹,根据所述滑动轨迹在所述控制区域显示至少一个候选单词。
在一些实施例中,第一显示单元901,还配置为若所述滑动轨迹中检测到用户手指在至少一个预设按键上的停留时间大于第一预设时间,则确定选择所述至少一个预设按键;以及根据所述至少一个预设按键在所述滑动轨迹中的顺序,生成至少一个候选单词并在所述控制区域进行显示。
在一些实施例中,第一显示单元901,还配置为若所述滑动轨迹中检测到用户手指在第一预设按键上的停留时间大于第二预设时间,则确定重复选择所述第一预设按键;或者,若所述滑动轨迹中检测到用户手指停留在第一预设按键上且用户手指在所述控制区域进行点击操作,则确定重复选择所述第一预设按键;其中,所述第一预设按键为所述虚拟键盘中的任意一个按键。
在一些实施例中,第一显示单元901,还配置为当检测到用户手指在所述控制区域执行第二触摸滑动操作时,根据所述第二触摸滑动操作对应的滑动方向,从所述至少一个候选文本中确定所述目标文本;其中,所述第二操作指令是基于用户手指在所述控制区域执行第二触摸滑动操作生成的。
在一些实施例中,第一接收单元902,还配置为接收来自于控制区域的第四操作指令;
第一显示单元901,还配置为根据所述第四操作指令,控制所述控制区域进行多组候选文本之间的显示切换。
在一些实施例中,控制区域包括第一按钮和第二按钮,相应地,第一显示单元901,具体配置为当检测到用户手指在所述控制区域对所述第一按钮或所述第二按钮执行单击操作时,控制所述控制区域在多组候选文本之间进行显示切换;其中,所述第一按钮用于触发对所述至少一个候选文本向下一组更新显示,所述第二按钮用于触发对所述至少一个候选文本向上一组更新显示。
在一些实施例中,第一显示单元901,还配置为当检测到用户手指在所述控制区域朝向所述第一按钮或所述第二按钮执行第三触摸滑动操作时,控制所述控制区域在多组候选文本之间进行显示切换。
在一些实施例中,参见图9,移动设备90还可以包括设置单元905,配置为将所述至少一个候选文本设置为滚动列表;
第一显示单元901,还配置为当检测到用户手指在所述控制区域执行第四触摸滑动操作时,根据第四触摸滑动操作对应的滑动方向,控制滚动列表内的候选文本进行滚动显示。
可以理解地,在本申请实施例中,“单元”可以是部分电路、部分处理器、部分程序或软件等等,当然也可以是模块,还可以是非模块化的。而且在本实施例中的各组成部分可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能模块的形式实现。
所述集成的单元如果以软件功能模块的形式实现并非作为独立的产品进行销售或使用时，可以存储在一个计算机可读取存储介质中，基于这样的理解，本实施例的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来，该计算机软件产品存储在一个存储介质中，包括若干指令用以使得一台计算机设备（可以是个人计算机，服务器，或者网络设备等）或processor（处理器）执行本实施例所述方法的全部或部分步骤。而前述的存储介质包括：U盘、移动硬盘、只读存储器（Read Only Memory，ROM）、随机存取存储器（Random Access Memory，RAM）、磁碟或者光盘等各种可以存储程序代码的介质。
因此,本申请实施例提供了一种计算机存储介质,应用于移动设备90,该计算机存储介质存储有计算机程序,所述计算机程序被第一处理器执行时实现前述实施例中任一项所述的方法。
基于上述移动设备90的组成以及计算机存储介质,参见图10,其示出了本申请实施例提供的一种移动设备90的硬件结构示意图。如图10所示,可以包括:第一通信接口1001、第一存储器1002和第一处理器1003;各个组件通过第一总线系统1004耦合在一起。可理解,第一总线系统1004用于实现这些组件之间的连接通信。第一总线系统1004除包括数据总线之外,还包括电源总线、控制总线和状态信号总线。但是为了清楚说明起见,在图10中将各种总线都标为第一总线系统1004。其中,第一通信接口1001,用于在与其他外部网元之间进行收发信息过程中,信号的接收和发送;
第一存储器1002,用于存储能够在第一处理器1003上运行的计算机程序;
第一处理器1003,用于在运行所述计算机程序时,执行:
接收来自于键盘区域的第一操作指令;
在控制区域显示至少一个候选文本,所述至少一个候选文本是根据第一操作指令生成;
接收来自于控制区域的第二操作指令;
从所述至少一个候选文本中确定目标文本,并将目标文本发送至头戴式显示设备的文本输入界面;其中,目标文本是根据第二操作指令生成。
可以理解,本申请实施例中的第一存储器1002可以是易失性存储器或非易失性存储器,或可包括易失性和非易失性存储器两者。其中,非易失性存储器可以是只读存储器(Read-Only Memory,ROM)、可编程只读存储器(Programmable ROM,PROM)、可擦除可编程只读存储器(Erasable PROM,EPROM)、电可擦除可编程只读存储器(Electrically EPROM,EEPROM)或闪存。易失性存储器可以是随机存取存储器(Random Access Memory,RAM),其用作外部高速缓存。通过示例性但不是限制性说明,许多形式的RAM可用,例如静态随机存取存储器(Static RAM,SRAM)、动态随机存取存储器(Dynamic RAM,DRAM)、同步动态随机存取存储器(Synchronous DRAM,SDRAM)、双倍数据速率同步动态随机存取存储器(Double Data Rate SDRAM,DDRSDRAM)、增强型同步动态随机存取存储器(Enhanced SDRAM,ESDRAM)、同步连接动态随机存取存储器(Synchlink DRAM,SLDRAM)和直接内存总线随机存取存储器(Direct Rambus RAM,DRRAM)。本申请描述的系统和方法的第一存储器1002旨在包括但不限于这些和任意其它适合类型的存储器。
而第一处理器1003可能是一种集成电路芯片，具有信号的处理能力。在实现过程中，上述方法的各步骤可以通过第一处理器1003中的硬件的集成逻辑电路或者软件形式的指令完成。上述的第一处理器1003可以是通用处理器、数字信号处理器（Digital Signal Processor，DSP）、专用集成电路（Application Specific Integrated Circuit，ASIC）、现成可编程门阵列（Field Programmable Gate Array，FPGA）或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件。可以实现或者执行本申请实施例中的公开的各方法、步骤及逻辑框图。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。结合本申请实施例所公开的方法的步骤可以直接体现为硬件译码处理器执行完成，或者用译码处理器中的硬件及软件模块组合执行完成。软件模块可以位于随机存储器，闪存、只读存储器，可编程只读存储器或者电可擦写可编程存储器、寄存器等本领域成熟的存储介质中。该存储介质位于第一存储器1002，第一处理器1003读取第一存储器1002中的信息，结合其硬件完成上述方法的步骤。
可以理解的是,本申请描述的这些实施例可以用硬件、软件、固件、中间件、微码或其组合来实现。对于硬件实现,处理单元可以实现在一个或多个专用集成电路(Application Specific Integrated Circuits,ASIC)、数字信号处理器(Digital Signal Processing,DSP)、数字信号处理设备(DSP Device,DSPD)、可编程逻辑设备(Programmable Logic Device,PLD)、现场可编程门阵列(Field-Programmable Gate Array,FPGA)、通用处理器、控制器、微控制器、微处理器、用于执行本申请所述功能的其它电子单元或其组合中。对于软件实现,可通过执行本申请所述功能的模块(如过程、函数等)来实现本申请所述的技术。软件代码可存储在存储器中并通过处理器执行。存储器可以在处理器中或在处理器外部实现。
可选地,作为另一个实施例,第一处理器1003还配置为在运行所述计算机程序时,执行前述实施例中任一项所述的方法。
本实施例提供了一种移动设备,该移动设备可以包括第一显示单元、第一接收单元和第一发送单元。这样,由于使用移动设备的操作界面来实现头戴式显示设备的文本输入,且移动设备的操作界面划分为键盘区域和控制区域,从而可以双手输入操作以提高打字速度,同时打字时用户还无需注视在移动设备上,如此不仅减小了用户使用移动设备作为文本输入设备时需要移动视线观看移动设备屏幕的情况,而且还提高了文本输入效率。
本申请的再一实施例中,基于前述实施例相同的发明构思,参见图11,其示出了本申请实施例提供的一种头戴式显示设备110的组成结构示意图。如图11所示,该头戴式显示设备110可以包括:第二显示单元1101、第二接收单元1102和输入单元1103;其中,
第二显示单元1101,配置为显示文本输入界面;
第二接收单元1102,配置为接收移动设备发送的目标文本;
输入单元1103,配置为将所述目标文本输入到所述文本输入界面中。
在一些实施例中,第二显示单元1101,还配置为显示所述移动设备的操作界面;
相应地,第二接收单元1102,具体配置为基于所述移动设备对所述操作界面的响应,接收所述移动设备发送的目标文本。
在一些实施例中,操作界面包括键盘区域和控制区域,且键盘区域包括虚拟键盘;
相应地,第二显示单元1101,还配置为将所述键盘区域和所述控制区域在所述头戴式显示设备的显示模块中进行显示,并将所述虚拟键盘中的被选择按键进行高亮显示。
在一些实施例中,第二显示单元1101,还配置为将用户手指在所述虚拟键盘上的位置以预设标记进行显示。
在一些实施例中,参见图11,头戴式显示设备110还可以包括确定单元1104,配置为基于所述控制区域呈现的至少一个候选文本,确定用户手指的滑动方向;其中,所述滑动方向用于指示用户手指在移动设备的操作界面上通过触摸滑动操作选择目标文本。
可以理解地,在本实施例中,“单元”可以是部分电路、部分处理器、部分程序或软件等等,当然也可以是模块,还可以是非模块化的。而且在本实施例中的各组成部分可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能模块的形式实现。
所述集成的单元如果以软件功能模块的形式实现并非作为独立的产品进行销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本实施例提供了一种计算机存储介质,应用于头戴式显示设备110,该计算机存储介质存储有计算机程序,所述计算机程序被第二处理器执行时实现前述实施例中任一项所述的方法。
基于上述头戴式显示设备110的组成以及计算机存储介质，参见图12，其示出了本申请实施例提供的头戴式显示设备110的硬件结构示意图。如图12所示，可以包括：第二通信接口1201、第二存储器1202和第二处理器1203；各个组件通过第二总线系统1204耦合在一起。可理解，第二总线系统1204用于实现这些组件之间的连接通信。第二总线系统1204除包括数据总线之外，还包括电源总线、控制总线和状态信号总线。但是为了清楚说明起见，在图12中将各种总线都标为第二总线系统1204。其中，第二通信接口1201，用于在与其他外部网元之间进行收发信息过程中，信号的接收和发送；
第二存储器1202,用于存储能够在第二处理器1203上运行的计算机程序;
第二处理器1203,用于在运行所述计算机程序时,执行:
显示文本输入界面;
接收移动设备发送的目标文本,并将所述目标文本输入到所述文本输入界面中。
可选地,作为另一个实施例,第二处理器1203还配置为在运行所述计算机程序时,执行前述实施例中任一项所述的方法。
可以理解,第二存储器1202与第一存储器1002的硬件功能类似,第二处理器1203与第一处理器1003的硬件功能类似;这里不再详述。
本实施例提供了一种头戴式显示设备,该头戴式显示设备包括第二显示单元、第二接收单元和输入单元。这样,由于使用移动设备的操作界面来实现头戴式显示设备的文本输入,且移动设备的操作界面划分为键盘区域和控制区域,从而可以双手输入操作以提高打字速度,同时打字时用户还无需注视在移动设备上,如此不仅减小了用户使用移动设备作为文本输入设备时需要移动视线观看移动设备屏幕的情况,而且还提高了文本输入效率。
需要说明的是,在本申请中,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者装置不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、物品或者装置所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括该要素的过程、方法、物品或者装置中还存在另外的相同要素。
上述本申请实施例序号仅仅为了描述,不代表实施例的优劣。
本申请所提供的几个方法实施例中所揭露的方法,在不冲突的情况下可以任意组合,得到新的方法实施例。
本申请所提供的几个产品实施例中所揭露的特征,在不冲突的情况下可以任意组合,得到新的产品实施例。
本申请所提供的几个方法或设备实施例中所揭露的特征,在不冲突的情况下可以任意组合,得到新的方法实施例或设备实施例。
以上所述,仅为本申请的具体实施方式,但本申请的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本申请揭露的技术范围内,可轻易想到变化或替换,都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应以所述权利要求的保护范围为准。
工业实用性
本申请实施例中，由于移动设备的屏幕划分为两部分，分别用来显示键盘区域和控制区域，从而能够使用双手进行操作，而且可以使得用户能够很轻松地为与移动设备绑定的头戴式显示设备进行文本输入，同时在移动设备的操作界面中，使用用户已经熟悉的元素，比如多字母键盘布局类似于用户已经熟悉的T9键盘布局，还能够缩短用户的学习时间；如此，使用移动设备的操作界面来实现头戴式显示设备的文本输入，不仅减小了用户使用移动设备作为文本输入设备时需要移动视线观看移动设备屏幕的情况，而且还提高了文本输入效率。

Claims (26)

  1. 一种文本输入方法,应用于移动设备,所述移动设备的操作界面包括键盘区域和控制区域,所述方法包括:
    接收来自于所述键盘区域的第一操作指令;
    在所述控制区域显示至少一个候选文本,所述至少一个候选文本是根据所述第一操作指令生成;
    接收来自于所述控制区域的第二操作指令;
    从所述至少一个候选文本中确定目标文本,并将所述目标文本发送至头戴式显示设备的文本输入界面;其中,所述目标文本是根据所述第二操作指令生成。
  2. 根据权利要求1所述的方法,其中,所述方法还包括:
    在所述移动设备的屏幕中,将所述键盘区域显示在所述屏幕的左侧区域,以及将所述控制区域显示在所述屏幕的右侧区域。
  3. 根据权利要求2所述的方法,其中,所述方法还包括:
    基于所述移动设备的屏幕尺寸和用户的手部尺寸,对所述左侧区域和所述右侧区域进行尺寸调整。
  4. 根据权利要求1所述的方法,其中,所述键盘区域包括虚拟键盘,所述接收来自于所述键盘区域的第一操作指令,包括:
    当检测到用户手指在所述虚拟键盘上执行第一触摸滑动操作时,根据所述第一触摸滑动操作生成所述至少一个候选文本;其中,所述第一操作指令是基于用户手指在所述虚拟键盘上执行第一触摸滑动操作生成的。
  5. 根据权利要求1所述的方法,其中,所述键盘区域支持多种输入模式,所述多种输入模式至少包括字母输入模式和单词输入模式;
    所述方法还包括:
    接收来自于所述控制区域的第三操作指令;
    根据所述第三操作指令,控制所述键盘区域在所述多种输入模式之间进行切换。
  6. 根据权利要求5所述的方法,其中,所述根据所述第三操作指令,控制所述键盘区域在所述多种输入模式之间进行切换,包括:
    当检测到用户手指在所述控制区域执行双击操作时,控制所述键盘区域在所述多种输入模式之间进行切换。
  7. 根据权利要求4所述的方法,其中,当输入模式为字母输入模式时,所述在所述控制区域显示至少一个候选文本,包括:
    当检测到用户手指在所述虚拟键盘上执行第一触摸滑动操作时,确定所述用户手指在所述键盘区域的被选择按键,根据所述被选择按键在所述控制区域显示至少一个候选字母。
  8. 根据权利要求7所述的方法,其中,所述方法还包括:
    将所述被选择按键进行高亮显示。
  9. 根据权利要求4所述的方法,其中,当输入模式为单词输入模式时,所述在所述控制区域显示至少一个候选文本,包括:
    当检测到用户手指在所述虚拟键盘上执行第一触摸滑动操作时,确定所述用户手指在所述键盘区域的滑动轨迹,根据所述滑动轨迹在所述控制区域显示至少一个候选单词。
  10. 根据权利要求9所述的方法，其中，所述根据所述滑动轨迹在所述控制区域显示至少一个候选单词，包括：
    若所述滑动轨迹中检测到用户手指在至少一个预设按键上的停留时间大于第一预设时间,则确定选择所述至少一个预设按键;
    根据所述至少一个预设按键在所述滑动轨迹中的顺序,生成至少一个候选单词并在所述控制区域进行显示。
  11. 根据权利要求9所述的方法,其中,所述方法还包括:
    若所述滑动轨迹中检测到用户手指在第一预设按键上的停留时间大于第二预设时间,则确定重复选择所述第一预设按键;或者,
    若所述滑动轨迹中检测到用户手指停留在第一预设按键上且用户手指在所述控制区域进行点击操作,则确定重复选择所述第一预设按键;
    其中,所述第一预设按键为所述虚拟键盘中的任意一个按键。
  12. 根据权利要求1所述的方法,其中,所述接收来自于所述控制区域的第二操作指令,从所述至少一个候选文本中确定目标文本,包括:
    当检测到用户手指在所述控制区域执行第二触摸滑动操作时,根据所述第二触摸滑动操作对应的滑动方向,从所述至少一个候选文本中确定所述目标文本;其中,所述第二操作指令是基于用户手指在所述控制区域执行第二触摸滑动操作生成的。
  13. 根据权利要求1所述的方法,其中,所述方法还包括:
    接收来自于所述控制区域的第四操作指令;
    根据所述第四操作指令,控制所述控制区域在多组候选文本之间进行显示切换。
  14. 根据权利要求13所述的方法,其中,所述控制区域包括第一按钮和第二按钮,所述根据所述第四操作指令,控制所述控制区域在多组候选文本之间进行显示切换,包括:
    当检测到用户手指在所述控制区域对所述第一按钮或所述第二按钮执行单击操作时,控制所述控制区域在多组候选文本之间进行显示切换;
    其中,所述第一按钮用于触发对所述至少一个候选文本向下一组更新显示,所述第二按钮用于触发对所述至少一个候选文本向上一组更新显示。
  15. 根据权利要求14所述的方法,其中,所述方法还包括:
    当检测到用户手指在所述控制区域朝向所述第一按钮或所述第二按钮执行第三触摸滑动操作时,控制所述控制区域在多组候选文本之间进行显示切换。
  16. 根据权利要求13所述的方法,其中,所述方法还包括:
    将所述至少一个候选文本设置为滚动列表;
    当检测到用户手指在所述控制区域执行第四触摸滑动操作时,根据所述第四触摸滑动操作对应的滑动方向,控制所述滚动列表内的候选文本进行滚动显示。
  17. 一种文本输入方法,应用于头戴式显示设备,所述方法包括:
    显示文本输入界面;
    接收移动设备发送的目标文本,并将所述目标文本输入到所述文本输入界面中。
  18. 根据权利要求17所述的方法,其中,所述方法还包括:
    显示所述移动设备的操作界面;
    相应地,所述接收移动设备发送的目标文本,包括:
    基于所述移动设备对所述操作界面的响应,接收所述移动设备发送的目标文本。
  19. 根据权利要求18所述的方法,其中,所述操作界面包括键盘区域和控制区域,且所述键盘区域包括虚拟键盘;
    所述显示所述移动设备的操作界面,包括:
    将所述键盘区域和所述控制区域在所述头戴式显示设备中进行显示,并将所述虚拟键盘中的被选择按键进行高亮显示。
  20. 根据权利要求19所述的方法,其中,所述方法还包括:
    将用户手指在所述虚拟键盘上的位置以预设标记进行显示。
  21. 根据权利要求19所述的方法,其中,所述方法还包括:
    基于所述控制区域显示的至少一个候选文本，确定用户手指在所述控制区域的滑动方向；其中，所述滑动方向用于指示用户手指在所述移动设备的操作界面上通过触摸滑动操作选择目标文本。
  22. 一种移动设备,所述移动设备包括第一显示单元、第一接收单元和第一发送单元;其中,
    所述第一接收单元,配置为接收来自于键盘区域的第一操作指令;其中,所述移动设备的操作界面包括键盘区域和控制区域;
    所述第一显示单元,配置为在所述控制区域显示至少一个候选文本,所述至少一个候选文本是根据所述第一操作指令生成;
    所述第一接收单元,还配置为接收来自于所述控制区域的第二操作指令;
    所述第一发送单元,配置为从所述至少一个候选文本中确定目标文本,并将所述目标文本发送至头戴式显示设备的文本输入界面;其中,所述目标文本是根据所述第二操作指令生成。
  23. 一种移动设备,所述移动设备包括第一存储器和第一处理器;其中,
    所述第一存储器,用于存储能够在所述第一处理器上运行的计算机程序;
    所述第一处理器,用于在运行所述计算机程序时,执行如权利要求1至16任一项所述的方法。
  24. 一种头戴式显示设备,所述头戴式显示设备包括第二显示单元、第二接收单元和输入单元;其中,
    所述第二显示单元,配置为显示文本输入界面;
    所述第二接收单元,配置为接收移动设备发送的目标文本;
    所述输入单元,配置为将所述目标文本输入到所述文本输入界面中。
  25. 一种头戴式显示设备,所述头戴式显示设备包括第二存储器和第二处理器;其中,
    所述第二存储器,用于存储能够在所述第二处理器上运行的计算机程序;
    所述第二处理器,用于在运行所述计算机程序时,执行如权利要求17至21任一项所述的方法。
  26. 一种计算机存储介质,其中,所述计算机存储介质存储有计算机程序,所述计算机程序被第一处理器执行时实现如权利要求1至16任一项所述的方法、或者被第二处理器执行时实现如权利要求17至21任一项所述的方法。
PCT/CN2021/087238 2020-04-14 2021-04-14 文本输入方法、移动设备、头戴式显示设备以及存储介质 WO2021208965A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202180016556.8A CN115176224A (zh) 2020-04-14 2021-04-14 文本输入方法、移动设备、头戴式显示设备以及存储介质
US17/933,354 US20230009807A1 (en) 2020-04-14 2022-09-19 Text entry method and mobile device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202063009862P 2020-04-14 2020-04-14
US63/009,862 2020-04-14

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/933,354 Continuation US20230009807A1 (en) 2020-04-14 2022-09-19 Text entry method and mobile device

Publications (1)

Publication Number Publication Date
WO2021208965A1 true WO2021208965A1 (zh) 2021-10-21

Family

ID=78083955

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/087238 WO2021208965A1 (zh) 2020-04-14 2021-04-14 文本输入方法、移动设备、头戴式显示设备以及存储介质

Country Status (3)

Country Link
US (1) US20230009807A1 (zh)
CN (1) CN115176224A (zh)
WO (1) WO2021208965A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116227474B (zh) * 2023-05-09 2023-08-25 之江实验室 一种对抗文本的生成方法、装置、存储介质及电子设备

Citations (4)

Publication number Priority date Publication date Assignee Title
US20150130688A1 (en) * 2013-11-12 2015-05-14 Google Inc. Utilizing External Devices to Offload Text Entry on a Head Mountable Device
CN108064372A (zh) * 2016-12-24 2018-05-22 深圳市柔宇科技有限公司 头戴式显示设备及其内容输入方法
CN108121438A (zh) * 2016-11-30 2018-06-05 成都理想境界科技有限公司 基于头戴式显示设备的虚拟键盘输入方法及装置
US20190004694A1 (en) * 2017-06-30 2019-01-03 Guangdong Virtual Reality Technology Co., Ltd. Electronic systems and methods for text input in a virtual environment

Family Cites Families (14)

Publication number Priority date Publication date Assignee Title
JP4210936B2 (ja) * 2004-07-08 2009-01-21 ソニー株式会社 情報処理装置及びこれに用いるプログラム
US8059101B2 (en) * 2007-06-22 2011-11-15 Apple Inc. Swipe gestures for touch screen keyboards
US9141284B2 (en) * 2009-05-28 2015-09-22 Microsoft Technology Licensing, Llc Virtual input devices created by touch input
US8498864B1 (en) * 2012-09-27 2013-07-30 Google Inc. Methods and systems for predicting a text
WO2015061761A1 (en) * 2013-10-24 2015-04-30 Fleksy, Inc. User interface for text input and virtual keyboard manipulation
WO2015161354A1 (en) * 2014-04-25 2015-10-29 Espial Group Inc. Text entry using rollover character row
CN105786376A (zh) * 2016-02-12 2016-07-20 李永贵 一种触摸键盘
US10275023B2 (en) * 2016-05-05 2019-04-30 Google Llc Combining gaze input and touch surface input for user interfaces in augmented and/or virtual reality
CN106527916A (zh) * 2016-09-22 2017-03-22 乐视控股(北京)有限公司 基于虚拟现实设备的操作方法、装置及操作设备
CN107980110A (zh) * 2016-12-08 2018-05-01 深圳市柔宇科技有限公司 头戴式显示设备及其内容输入方法
CN108415654A (zh) * 2017-02-10 2018-08-17 上海真曦通信技术有限公司 虚拟输入系统和相关方法
CN108932100A (zh) * 2017-05-26 2018-12-04 成都理想境界科技有限公司 一种虚拟键盘的操作方法及头戴式显示设备
CN108646997A (zh) * 2018-05-14 2018-10-12 刘智勇 一种虚拟及增强现实设备与其他无线设备进行交互的方法
CN110456922B (zh) * 2019-08-16 2021-07-20 清华大学 输入方法、输入装置、输入系统和电子设备

Patent Citations (4)

Publication number Priority date Publication date Assignee Title
US20150130688A1 (en) * 2013-11-12 2015-05-14 Google Inc. Utilizing External Devices to Offload Text Entry on a Head Mountable Device
CN108121438A (zh) * 2016-11-30 2018-06-05 成都理想境界科技有限公司 基于头戴式显示设备的虚拟键盘输入方法及装置
CN108064372A (zh) * 2016-12-24 2018-05-22 深圳市柔宇科技有限公司 头戴式显示设备及其内容输入方法
US20190004694A1 (en) * 2017-06-30 2019-01-03 Guangdong Virtual Reality Technology Co., Ltd. Electronic systems and methods for text input in a virtual environment

Also Published As

Publication number Publication date
CN115176224A (zh) 2022-10-11
US20230009807A1 (en) 2023-01-12

Similar Documents

Publication Publication Date Title
US10359932B2 (en) Method and apparatus for providing character input interface
US10412334B2 (en) System with touch screen displays and head-mounted displays
US9298270B2 (en) Written character inputting device and method
US20170293351A1 (en) Head mounted display linked to a touch sensitive input device
US20150220265A1 (en) Information processing device, information processing method, and program
KR101919009B1 (ko) 안구 동작에 의한 제어 방법 및 이를 위한 디바이스
WO2014058934A2 (en) Arced or slanted soft input panels
US10387033B2 (en) Size reduction and utilization of software keyboards
US20150199111A1 (en) Gui system, display processing device, and input processing device
US10521101B2 (en) Scroll mode for touch/pointing control
KR102381051B1 (ko) 키패드를 표시하는 전자장치 및 그의 키패드 표시 방법
WO2021208965A1 (zh) 文本输入方法、移动设备、头戴式显示设备以及存储介质
KR102311268B1 (ko) 입력 필드의 이동 방법 및 장치
US20230236673A1 (en) Non-standard keyboard input system
WO2022246334A1 (en) Text input method for augmented reality devices
US20230259265A1 (en) Devices, methods, and graphical user interfaces for navigating and inputting or revising content
KR102038660B1 (ko) 이동 단말기의 키보드 인터페이스 표시 방법
JP2022150657A (ja) 制御装置、表示システム、および、プログラム
KR20160112337A (ko) 터치스크린을 이용한 한글 입력방법

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21788491

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21788491

Country of ref document: EP

Kind code of ref document: A1