CN113805707A - Input method, input device and input device - Google Patents

Input method, input device and input device

Info

Publication number
CN113805707A
CN113805707A (Application CN202010556065.XA)
Authority
CN
China
Prior art keywords
input
keyboard
target
scene
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010556065.XA
Other languages
Chinese (zh)
Inventor
臧娇娇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Sogou Technology Development Co Ltd
Original Assignee
Beijing Sogou Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Sogou Technology Development Co Ltd filed Critical Beijing Sogou Technology Development Co Ltd
Priority to CN202010556065.XA
Publication of CN113805707A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/02 Input arrangements using manually operated switches, e.g. using keyboards or dials
    • G06F3/023 Arrangements for converting discrete items of information into a coded form, e.g. arrangements for interpreting keyboard generated codes as alphanumeric codes, operand codes or instruction codes
    • G06F3/0233 Character input methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The embodiments of the present application disclose an input method, an input device, and a device for input. An embodiment of the method comprises: detecting an input scene of a user; and in response to the input scene being a target input scene, switching the original keyboard in the input interface to a target keyboard associated with the target input scene. According to this embodiment, the keyboard can be switched automatically according to the input scene, making input operations more convenient and improving input efficiency.

Description

Input method, input device and input device
Technical Field
The embodiment of the application relates to the technical field of computers, in particular to an input method, an input device and an input device.
Background
Keyboards commonly used by input method applications include the 9-key Pinyin keyboard and the 26-key Pinyin keyboard, and more specifically Chinese keyboards, English keyboards, numeric keyboards, and the like. When inputting content with an input method application, the user can switch among the various keyboards to trigger the required input mode.
In the prior art, a Chinese keyboard is usually used as the default keyboard. When a user needs to input non-Chinese content such as a password or a license plate number, the user must manually switch from the Chinese keyboard to an English keyboard and then toggle between upper and lower case. If numbers also need to be entered, the user must additionally switch to the numeric keyboard. This approach requires the user to switch keyboards many times during input, which reduces input efficiency.
Disclosure of Invention
The embodiment of the application provides an input method, an input device and an input device, and aims to solve the technical problem that in the prior art, the keyboard switching operation is not convenient enough in the input process, so that the input efficiency is low.
In a first aspect, an embodiment of the present application provides an input method, where the method includes: detecting an input scene of a user; in response to the input scene being a target input scene, switching an original keyboard in an input interface to a target keyboard associated with the target input scene.
In a second aspect, an embodiment of the present application provides an input device, including: a detection unit configured to detect an input scene of a user; a switching unit configured to switch an original keyboard in an input interface to a target keyboard associated with a target input scene in response to the input scene being the target input scene.
In a third aspect, an embodiment of the present application provides an apparatus for input, comprising a memory, and one or more programs, wherein the one or more programs are stored in the memory, and the one or more programs are configured to be executed by the one or more processors and include instructions for: detecting an input scene of a user; in response to the input scene being a target input scene, switching an original keyboard in an input interface to a target keyboard associated with the target input scene.
In a fourth aspect, embodiments of the present application provide a computer-readable medium on which a computer program is stored, which when executed by a processor, implements the method as described in the first aspect above.
According to the input method and devices provided by the embodiments of the present application, when the input scene is detected to be a target input scene, the original keyboard in the input interface is switched to the target keyboard associated with that scene. Keyboard switching can thus be carried out automatically according to the input scene without the user having to switch keyboards manually, making input more convenient and improving input efficiency.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 is a flow diagram of one embodiment of an input method according to the present application;
FIG. 2 is a schematic diagram of a target keyboard according to an input method of the present application;
FIG. 3 is a schematic diagram of an embodiment of an input device according to the present application;
FIG. 4 is a schematic diagram of a structure of an apparatus for input according to the present application;
FIG. 5 is a schematic diagram of a server in accordance with some embodiments of the present application.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
Referring to FIG. 1, a flow 100 of one embodiment of an input method according to the present application is shown. The input method can run on various electronic devices, including but not limited to: a server, a smartphone, a tablet computer, an e-book reader, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a laptop, a car computer, a desktop computer, a set-top box, a smart TV, a wearable device, and so on.
The electronic device may be installed with various types of client applications, such as an input method application, a web browser application, an instant messaging tool, a shopping-like application, a search-like application, a mailbox client, social platform software, and the like.
The input method application mentioned in the embodiments of the present application can support various input methods. An input method is an encoding method used to input various symbols into electronic devices such as computers and mobile phones; with an input method application, a user can conveniently enter a desired character or character string. It should be noted that, in the embodiments of the present application, in addition to common Chinese input methods (such as the Pinyin input method, Wubi input method, Zhuyin input method, phonetic input method, handwriting input method, etc.), input methods for other languages (such as an English input method, a Japanese hiragana input method, a Korean input method, etc.) may also be supported; the embodiments place no limitation on the input method or its language category.
The input method in this embodiment may include the following steps:
step 101, detecting an input scene of a user.
In the present embodiment, various types of client applications, such as an input method application, an instant messaging application, a document editing application, a browser, and the like, may be installed on the execution body of the input method (such as the electronic device described above). The input method application can be configured with a plurality of keyboards, and a default keyboard can be displayed each time the application is started. The default keyboard may be preset by the system or customized by the user, and is typically a Chinese keyboard in forms such as the 9-key or 26-key Pinyin keyboard. Besides the default keyboard, the input method application can be configured with an English keyboard, a numeric keyboard, a symbol keyboard, and the like; the embodiments of the present application are not limited in this respect.
In this embodiment, the input method application may also be configured with a target keyboard, which is a smart keyboard that is different from existing conventional keyboards. The target keyboard can be automatically switched or automatically started under some special input scenes so as to meet the input requirements of a user in the special input scenes.
In this embodiment, the execution body may detect the input scene of the user in real time when the input method application is started or during the user's input process. The manner of detecting the input scene may include, but is not limited to: detecting the input scene from user behavior data, detecting the input scene by acquiring the input box type, detecting the input scene by capturing and recognizing the display interface image, detecting the input scene based on context information, and the like. The input scene may include, but is not limited to, a Chinese input scene, a numeric input scene, an English-character-and-number cross input scene, and the like.
In some optional implementations of the present embodiment, the input scenario may be detected by user behavior data. The user behavior data here may include records of user operations on the keyboard in the input method interface, such as records of keyboard switching operations. The execution main body can analyze the record of the keyboard switching operation, determine the keyboard frequently used by the user, and further determine the input scene.
For example, if the operation records show that the user frequently switches manually between the numeric keyboard and the English keyboard, the input scene may be considered an English-character-and-number cross input scene. If the operation records show that, over multiple launches of the input method application, the user switches the default keyboard to another keyboard such as the numeric keyboard, the input scene may be considered a numeric input scene.
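A minimal sketch of this behavior-based detection is given below. The record format, scene names, and thresholds are all assumptions for illustration, not specifics from the patent text:

```python
from collections import Counter

def detect_scene_from_behavior(switch_records):
    """Infer an input scene from keyboard-switch records.

    switch_records: list of (from_keyboard, to_keyboard) tuples,
    oldest first. Keyboard and scene names are hypothetical.
    """
    targets = Counter(to_kb for _, to_kb in switch_records)
    used = set(targets)
    # Frequent alternation involving both the English and numeric keyboards
    # suggests an English-character-and-number cross input scene.
    if {"english", "numeric"} <= used:
        return "english_number_cross"
    # Repeatedly switching the default keyboard to the numeric keyboard
    # across launches suggests a numeric input scene.
    if targets.get("numeric", 0) >= 2:
        return "numeric"
    return "default"
```

In practice the records would be mined over a longer window and weighted by recency; the two-switch threshold here is only a placeholder.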
In some optional implementations of the present embodiment, the input scene may be detected by acquiring an input box type. First, page data may be acquired, and relevant information of an input box may be extracted from the page data. The input frame here is the input frame where the input position is located. The input position refers to a position where an input focus (such as a cursor) is located. The related information of the input box may include descriptive text or the like such as "please input an 8-digit password containing letters and numbers", "please input a phone number", or the like. Then, an input scene may be determined based on the relevant information of the input box. For example, the input scene may be determined by performing semantic analysis on the description text.
As an example, when the descriptive text is "please input an 8-digit password containing English characters and numbers", since the password needs to contain English characters and numbers, it can be determined that the current input scene is an English-character-and-number cross input scene. As another example, when the recognized character string is "please input a phone number", since a phone number generally consists of digits, it may be determined that the current input scene is a numeric input scene.
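The examples above can be sketched as simple keyword rules over the input box's descriptive text. The keywords and scene names are assumptions; a real implementation would use the semantic analysis model described later rather than substring matching:

```python
def detect_scene_from_input_box(description):
    """Map an input box's descriptive text to a hypothetical scene name."""
    text = description.lower()
    # A password described as containing letters/English characters and
    # numbers implies the English-character-and-number cross input scene.
    if "password" in text and ("letter" in text or "english" in text) and "number" in text:
        return "english_number_cross"
    # License plate numbers mix letters and digits.
    if "license plate" in text:
        return "english_number_cross"
    # Phone and fax numbers are generally digits only.
    if "phone number" in text or "fax number" in text:
        return "numeric"
    return "default"
```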
In some optional implementations of the present embodiment, the input scene may be detected by intercepting and recognizing the display interface image. First, a screen capture may be performed on the display interface to obtain an interface image. Then, the characters in the interface image can be detected, and the character strings in the interface image can be determined. Finally, semantic analysis can be performed on the character string to determine the current input scene.
As an example, in a license plate number entry scene, there is typically a string such as "please enter license plate number: Beijing XXX", where XXX is the content to be input by the user. After a screenshot of the display interface is captured, the characters in it can be detected to obtain the character string. Semantic analysis of the character string determines that it instructs the user to input a license plate number, so the current input scene can be determined to be a license plate number input scene.
When capturing the interface image, all or part of the interface may be captured. When capturing a part, the current input position may be detected first; then an interface image of a preset size may be captured with the input position as its lower-right vertex, so as to avoid interference from irrelevant characters elsewhere on the screen. It should be noted that the input method application may be pre-configured with a background screen capture capability, and may discard the captured interface image after recognizing its characters. Alternatively, the input method application can call a local screenshot tool, or prompt the user to take a screenshot manually, and then detect the input scene from the captured interface image and automatically switch the keyboard based on the detected scene.
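The partial-capture geometry described above, with the input position as the lower-right vertex of a fixed-size region, can be sketched as follows. The region size is an arbitrary placeholder, and the rectangle is clamped to the screen edges:

```python
def capture_region(input_x, input_y, width=400, height=120):
    """Return (left, top, right, bottom) of the capture rectangle whose
    lower-right vertex is the input position. Coordinates are pixels with
    the origin at the top-left of the screen; the preset width/height are
    hypothetical values."""
    left = max(0, input_x - width)   # clamp so the box never leaves the screen
    top = max(0, input_y - height)
    return (left, top, input_x, input_y)
```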
When detecting characters in the interface image, OCR (Optical Character Recognition) techniques may be employed. In such techniques, brightness detection is first performed on the interface image to determine the dark and bright patterns of its regions, and thus the shapes of the characters; the character shapes are then translated into computer text using character recognition methods (e.g., Euclidean space comparison, dynamic programming comparison, neural-network-based character comparison, etc.).
A convolutional neural network can also be used to detect characters in the interface image. The execution body may store a pre-trained recognition model built on a convolutional neural network, which can be used to recognize characters in images. The execution body may input the interface image into the recognition model to obtain the text information of the interface image.
For the semantic analysis of the character string, a pre-trained semantic analysis model can be used, obtained in advance through a machine learning method (such as supervised learning).
In some optional implementations of the present embodiment, the input scenario may be detected based on context information. Specifically, the execution subject may first obtain context information. The context information may include several pieces of content that the user has recently input or sent. Then, a character type of the context information may be detected, and thus an input scene may be determined based on the character type of the context information. The character types may include, but are not limited to, numeric, chinese, english, symbolic, and the like. For example, the context information includes 2 pieces of content that is input or sent by the user last time, and if the user continuously sends 2 pieces of english messages in the instant messaging tool, the type of the historical input content may be considered as english, so that the current input scene is determined to be an english input scene.
It should be noted that the context information is not limited to content input or sent by the user; it may also include content received by the user, i.e., sent by the opposite party. For example, with the context information set to 2 messages, if the latest 2 messages in the instant messaging tool are an English message sent by the local user and an English reply from the opposite-end user, the type of the historical input content may be considered English, and the current input scene is determined to be an English input scene.
Further, rules for determining the input scene from the character types of the context information may be set in advance. For example, when the character types in the context information are all the same and non-Chinese, the input scene may be determined to be the scene corresponding to that character type. When the character types differ, the input scene may be determined to be a cross input scene whose specific category corresponds to the character types present in the context information.
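These rules can be sketched as follows. The per-message type classifier is deliberately crude (CJK range check, `isdigit`, `isalpha`), and the scene names are hypothetical:

```python
def classify_char_type(text):
    """Roughly classify one message's character type (assumed categories)."""
    if any("\u4e00" <= ch <= "\u9fff" for ch in text):
        return "chinese"
    if text.isdigit():
        return "numeric"
    if text.isalpha():
        return "english"
    return "mixed"

def detect_scene_from_context(messages):
    """Apply the rules from the text: uniform non-Chinese types map to that
    type's scene; mixed types map to a cross input scene."""
    types = {classify_char_type(m) for m in messages}
    if len(types) == 1:
        t = types.pop()
        # Chinese context keeps the default (Chinese) keyboard.
        return "default" if t == "chinese" else t
    return "cross:" + "+".join(sorted(types))
```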
It should be noted that the manner of detecting the input scene of the user is not limited to the manner listed in the above example, and other manners may also be adopted, and this embodiment is not limited.
And 102, responding to the input scene as a target input scene, and switching an original keyboard in the input interface into a target keyboard associated with the target input scene.
In this embodiment, the target input scene may include one or more of the above input scenes. For example, the target input scene may include, but is not limited to, a numeric input scene, an English character input scene, and a symbol input scene, and may further include a cross input scene of at least two of these, such as an English-character-and-number cross input scene or an English-character, number, and symbol cross input scene.
The numeric input scene may include, but is not limited to, a telephone number input scene, a fax number input scene, and other scenes requiring numeric input. The English-character-and-number cross input scene may include, but is not limited to, a password input scene, a license plate number input scene, a user name input scene, a mailbox input scene, a programming input scene, and the like. The English-character, number, and symbol cross input scene may include, but is not limited to, a user name input scene, a mailbox input scene, and the like.
In this embodiment, different target input scenes may be associated with different target keyboards. For example, for an English-character, number, and symbol cross input scene, the target keyboard may be a keyboard containing numeric keys, English character keys, and symbol keys. For an English-character-and-number cross input scene, the target keyboard may be a keyboard containing numeric keys and English character keys. For a numeric input scene, the target keyboard may be a numeric keyboard.
In addition, different target input scenes may also be associated with the same target keyboard. For example, for both the English-character-and-number cross input scene and the numeric input scene, the target keyboard may be a keyboard containing numeric keys and English character keys.
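The scene-to-keyboard association can be sketched as a lookup table, with a switch that falls back to the current keyboard for scenes that are not target scenes. All scene and keyboard names are illustrative assumptions; note that two scenes may legitimately map to the same keyboard, as the text allows:

```python
# Hypothetical association table between target input scenes and keyboards.
SCENE_TO_KEYBOARD = {
    "numeric": "numeric_keyboard",
    "english": "english_keyboard",
    "english_number_cross": "english_number_keyboard",
    "english_number_symbol_cross": "english_number_symbol_keyboard",
}

def switch_keyboard(current_keyboard, scene):
    """Switch only when the detected scene is a configured target scene;
    otherwise keep the original keyboard."""
    target = SCENE_TO_KEYBOARD.get(scene)
    return target if target is not None else current_keyboard
```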
In this embodiment, when the execution body detects that the current input scene is a target input scene, it may switch the original keyboard in the input interface to the target keyboard associated with that scene. The input interface may include the display interface of the input method application. The original keyboard may be the input method application's default keyboard or a keyboard preset by the user.
By way of example, FIG. 2 is a schematic diagram of a target keyboard. When the input location is in the password entry box, the default Chinese keyboard may be automatically switched to the target keyboard as shown in FIG. 2. The target keyboard can comprise numeric keys and English character keys.
In some optional implementations of this embodiment, in order to avoid the automatic switch disturbing the user or the target keyboard conflicting with the user's habits, the execution body may instead first display guidance information when it detects that the current input scene is a target input scene, and switch the original keyboard in the input interface to the target keyboard in response to detecting that the user triggers the target keyboard based on the guidance information.
The guidance information is used to guide the user to trigger the target keyboard, and may take the form of an icon, a key, a text link, etc. Taking the text link form as an example, after the current input scene is detected to be the target input scene, a text link such as "click the XX button to switch to the XXX keyboard" may be displayed in the input method interface to prompt the user to switch to the target keyboard.
In some scenes, when the target input scene is a numeric input scene, the target keyboard may display numeric keys, and its input mode is the numeric input mode.
In other scenes, when the target input scene is an English character input scene, the target keyboard may display English character keys, and its input mode is the English input mode.
In still other scenes, when the target input scene is an English-character-and-number cross input scene, the target keyboard displays numeric keys and English character keys, and its input mode is the English input mode.
When the target keyboard includes English character keys, after the original keyboard in the input interface is switched to the target keyboard, the execution body may, in response to detecting that the user clicks an English character key, output the lowercase English character corresponding to that key as input content, and in response to detecting that the user long-presses the key, output the corresponding uppercase English character. Alternatively, a click may output the uppercase English character and a long press the lowercase one. Either way, the user does not need to toggle between upper and lower case and can flexibly input both with the same keyboard.
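The first of the two mappings (click for lowercase, long press for uppercase) can be sketched in a few lines; the inverse mapping described as an alternative would simply swap the two branches:

```python
def handle_key(char, long_press=False):
    """Return the character to output for an English character key.

    A click outputs the lowercase character; a long press outputs the
    uppercase one, so no case-switching key is needed.
    """
    return char.upper() if long_press else char.lower()
```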
In some optional implementations of the embodiment, after switching the original keyboard in the input interface to the target keyboard associated with the target input scenario, in response to detecting that the user performs at least one of the following operations, exiting the target keyboard: exiting the target input scene (e.g., exiting the current application, exiting the current page, etc.), switching the target input scene to another input scene (e.g., changing the input position to another input box), and performing a closing operation on the target keyboard (e.g., triggering a close key in the target keyboard, etc.). After exiting the target keyboard, the default keyboard may be automatically displayed, or the input method application may be exited.
According to the method provided by the embodiment of the application, the original keyboard in the input interface is switched to the target keyboard associated with the target input scene under the condition that the input scene is detected to be the target input scene, so that the keyboard switching can be automatically carried out according to the input scene, a user does not need to manually switch the keyboard, the input operation is more convenient, and the input efficiency is improved.
With further reference to fig. 3, as an implementation of the methods shown in the above figures, the present application provides an embodiment of an input device, which corresponds to the embodiment of the method shown in fig. 1, and which is particularly applicable to various electronic devices.
As shown in fig. 3, the input device 300 of the present embodiment includes: a detection unit 301 configured to detect an input scene of a user; a switching unit 302 configured to switch an original keyboard in the input interface to a target keyboard associated with the target input scene in response to the input scene being the target input scene.
In some optional implementations of the present embodiment, the detecting unit 301 is further configured to: acquiring user behavior data, wherein the user behavior data comprises operation records of a user on a keyboard, and the operation records comprise records of keyboard switching operation; and determining an input scene based on the record of the keyboard switching operation.
In some optional implementations of the present embodiment, the detecting unit 301 is further configured to: acquiring page data and extracting relevant information of an input box from the page data; and determining an input scene based on the relevant information of the input box.
In some optional implementations of the present embodiment, the detecting unit 301 is further configured to: the display interface is subjected to screen capture to obtain an interface image; detecting characters in the interface image, and determining character strings in the interface image; and performing semantic analysis on the character strings to determine an input scene.
In some optional implementations of the present embodiment, the detecting unit 301 is further configured to: acquiring context information and detecting the character type of the context information; and determining an input scene based on the character type of the context information.
In some optional implementations of the present embodiment, the switching unit 302 is further configured to: displaying guide information, wherein the guide information is used for guiding a user to trigger a target keyboard; and switching an original keyboard in an input interface to the target keyboard in response to detecting that the user triggers the target keyboard based on the guide information.
In some optional implementations of this embodiment, the apparatus further includes: an exit unit configured to exit the target keyboard in response to detecting that a user performs at least one of the following operations: exiting the target input scene, switching the target input scene into other input scenes, and executing closing operation on the target keyboard.
In some optional implementation manners of this embodiment, when the target input scene is a numeric input scene, the target keyboard displays numeric keys, and an input mode of the target keyboard is a numeric input mode.
In some optional implementations of this embodiment, when the target input scene is an English character input scene, the target keyboard displays English character keys, and the input mode of the target keyboard is the English input mode.
In some optional implementations of this embodiment, when the target input scene is an English-character-and-number cross input scene, the target keyboard displays numeric keys and English character keys, and the input mode of the target keyboard is the English input mode.
In some optional implementations of the present embodiment, the above English-character-and-number cross input scene includes at least one of the following: a password input scene, a license plate number input scene, a user name input scene, a mailbox input scene, and a programming input scene.
In some optional implementations of this embodiment, the apparatus further includes: a first output unit configured to output, as input content, the lowercase English character corresponding to an English character key in response to detecting that the user clicks that key; and a second output unit configured to output the corresponding uppercase English character as input content in response to detecting that the user long-presses the key.
In some optional implementations of this embodiment, the apparatus further includes: a third output unit configured to output the uppercase English character corresponding to an English character key as input content in response to detecting that the user clicks that key; and a fourth output unit configured to output the corresponding lowercase English character as input content in response to detecting that the user long-presses the key.
With the apparatus provided by this embodiment of the application, upon detecting that the input scene is a target input scene, the original keyboard in the input interface is switched to the target keyboard associated with that scene. The keyboard is thus switched automatically according to the input scene, without requiring the user to switch keyboards manually, which makes input more convenient and improves input efficiency.
Fig. 4 is a block diagram illustrating an apparatus 400 for input according to an example embodiment, where the apparatus 400 may be an intelligent terminal or a server. For example, the apparatus 400 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 4, the apparatus 400 may include one or more of the following components: a processing component 402, a memory 404, a power component 406, a multimedia component 408, an audio component 410, an input/output (I/O) interface 412, a sensor component 414, and a communication component 416.
The processing component 402 generally controls the overall operation of the apparatus 400, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 402 may include one or more processors 420 to execute instructions to perform all or part of the steps of the methods described above. Further, the processing component 402 may include one or more modules that facilitate interaction between the processing component 402 and other components. For example, the processing component 402 may include a multimedia module to facilitate interaction between the multimedia component 408 and the processing component 402.
The memory 404 is configured to store various types of data to support operation of the apparatus 400. Examples of such data include instructions for any application or method operating on the apparatus 400, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 404 may be implemented by any type of volatile or non-volatile memory device, or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, or magnetic or optical disks.
The power component 406 provides power to the various components of the apparatus 400. The power component 406 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the apparatus 400.
The multimedia component 408 includes a screen that provides an output interface between the apparatus 400 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, it may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, slides, and gestures on the touch panel. The touch sensors may sense not only the boundary of a touch or slide action but also the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 408 includes a front-facing camera and/or a rear-facing camera. The front-facing camera and/or the rear-facing camera may receive external multimedia data when the apparatus 400 is in an operational mode, such as a shooting mode or a video mode. Each of the front and rear cameras may be a fixed optical lens system or have focus and optical zoom capability.
The audio component 410 is configured to output and/or input audio signals. For example, audio component 410 includes a Microphone (MIC) configured to receive external audio signals when apparatus 400 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 404 or transmitted via the communication component 416. In some embodiments, audio component 410 also includes a speaker for outputting audio signals.
The I/O interface 412 provides an interface between the processing component 402 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor component 414 includes one or more sensors for providing status assessments of various aspects of the apparatus 400. For example, the sensor component 414 may detect the open/closed state of the apparatus 400 and the relative positioning of components, such as the display and keypad of the apparatus 400; it may also detect a change in position of the apparatus 400 or of one of its components, the presence or absence of user contact with the apparatus 400, the orientation or acceleration/deceleration of the apparatus 400, and a change in the temperature of the apparatus 400. The sensor component 414 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor component 414 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 414 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 416 is configured to facilitate wired or wireless communication between the apparatus 400 and other devices. The apparatus 400 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 416 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 416 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 400 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium comprising instructions, such as the memory 404 comprising instructions, executable by the processor 420 of the apparatus 400 to perform the above-described method is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
Fig. 5 is a schematic diagram of a server in some embodiments of the present application. The server 500 may vary widely in configuration or performance and may include one or more central processing units (CPUs) 522 (e.g., one or more processors), a memory 532, and one or more storage media 530 (e.g., one or more mass storage devices) storing applications 542 or data 544. The memory 532 and the storage medium 530 may be transient or persistent storage. The programs stored on the storage medium 530 may include one or more modules (not shown), and each module may include a series of instruction operations for the server. Further, the central processing unit 522 may be configured to communicate with the storage medium 530 and execute, on the server 500, the series of instruction operations in the storage medium 530.
The server 500 may also include one or more power supplies 526, one or more wired or wireless network interfaces 550, one or more input/output interfaces 558, one or more keyboards 556, and/or one or more operating systems 541, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, etc.
A non-transitory computer readable storage medium having instructions therein which, when executed by a processor of an apparatus (smart terminal or server), enable the apparatus to perform an input method, the method comprising: detecting an input scene of a user; in response to the input scene being a target input scene, switching an original keyboard in an input interface to a target keyboard associated with the target input scene.
Optionally, the detecting an input scene of the user includes: acquiring user behavior data, where the user behavior data includes records of the user's operations on the keyboard, and the operation records include records of keyboard switching operations; and determining the input scene based on the records of keyboard switching operations.
Optionally, the detecting an input scene of the user includes: acquiring page data and extracting information about an input box from the page data; and determining the input scene based on the information about the input box.
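One detection strategy above infers the input scene from input-box information extracted from page data. A minimal sketch, assuming HTML-style `type` and `placeholder` attributes; the attribute names and scene labels are illustrative assumptions, not part of the application:

```python
# Hypothetical mapping from input-box attributes (HTML-style) to input scenes.
def scene_from_input_box(attrs: dict):
    """Guess an input scene from an input box's type and placeholder hints.

    Returns a scene label, or None to fall back to other detection methods.
    """
    box_type = attrs.get("type", "").lower()
    hint = attrs.get("placeholder", "").lower()
    if box_type == "password":
        return "password"
    if box_type in ("tel", "number"):
        return "numeric"
    if box_type == "email" or "mail" in hint:
        return "mailbox"
    if "username" in hint or "user name" in hint:
        return "username"
    return None  # no confident match; defer to other detection strategies
```
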
Optionally, the detecting an input scene of the user includes: performing screen capture on the display interface to obtain an interface image; detecting characters in the interface image and determining a character string in the interface image; and performing semantic analysis on the character string to determine the input scene.
Optionally, the detecting an input scene of the user includes: acquiring context information and detecting the character type of the context information; and determining the input scene based on the character type of the context information.
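The context-based strategy above classifies the scene by the character types of text already entered around the cursor. A sketch under illustrative assumptions (the classification rules and scene labels are not specified by the application):

```python
# Illustrative character-type classifier for context-based scene detection.
def scene_from_context(context: str):
    """Classify the context as numeric, English, or mixed input.

    Returns None when the context is empty or contains characters of
    other types (e.g., Chinese), deferring to other detection methods.
    """
    chars = [c for c in context if not c.isspace()]
    if not chars:
        return None
    digits = sum(c.isdigit() for c in chars)
    letters = sum(c.isascii() and c.isalpha() for c in chars)
    if digits == len(chars):
        return "numeric"
    if letters == len(chars):
        return "english"
    if digits + letters == len(chars):
        return "mixed"  # mixed English-and-numeric input scene
    return None
```
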
Optionally, the switching the original keyboard in the input interface to the target keyboard includes: displaying guide information for guiding the user to trigger the target keyboard; and in response to detecting that the user triggers the target keyboard based on the guide information, switching the original keyboard in the input interface to the target keyboard.
Optionally, the one or more programs further include instructions for: exiting the target keyboard in response to detecting that the user performs at least one of the following operations: exiting the target input scene, switching the target input scene to another input scene, or closing the target keyboard.
Optionally, when the target input scene is a numeric input scene, the target keyboard displays numeric keys, and the input mode of the target keyboard is a numeric input mode.
Optionally, when the target input scene is an English character input scene, the target keyboard displays English character keys, and the input mode of the target keyboard is an English input mode.
Optionally, when the target input scene is a mixed English-and-numeric input scene, the target keyboard displays both numeric keys and English character keys, and the input mode of the target keyboard is an English input mode.
Optionally, the mixed English-and-numeric input scene includes at least one of the following: a password input scene, a license plate number input scene, a user name input scene, a mailbox input scene, and a programming input scene.
Optionally, the one or more programs further include instructions for: outputting, in response to detecting that the user clicks an English character key, the lower-case English character corresponding to that key as input content; and outputting, in response to detecting that the user long-presses an English character key, the upper-case English character corresponding to that key as input content.
Optionally, the one or more programs further include instructions for: outputting, in response to detecting that the user clicks an English character key, the upper-case English character corresponding to that key as input content; and outputting, in response to detecting that the user long-presses an English character key, the lower-case English character corresponding to that key as input content.
Other embodiments of the present application will be readily apparent to those skilled in the art from consideration of the specification and practice of the application disclosed herein. The present application is intended to cover any variations, uses, or adaptations that follow its general principles, including such departures from the present disclosure as come within known or customary practice in the art. It is intended that the specification and examples be considered as exemplary only, with the true scope and spirit of the application being indicated by the following claims.
It will be understood that the present application is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.
The above description is only exemplary of the present application and should not be taken as limiting the present application, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.
The present application provides an input method, an input device, and an apparatus for input. The principles and implementations of the present application are described herein using specific examples; the descriptions of the above embodiments are only intended to aid understanding of the method and its core ideas. Meanwhile, a person skilled in the art may, following the ideas of the present application, make changes to the specific implementations and the scope of application. In summary, the content of this specification should not be construed as limiting the present application.

Claims (10)

1. An input method, characterized in that the method comprises:
detecting an input scene of a user;
in response to the input scene being a target input scene, switching an original keyboard in an input interface to a target keyboard associated with the target input scene.
2. The method of claim 1, wherein the detecting the input scene of the user comprises:
acquiring user behavior data, wherein the user behavior data comprises operation records of a user on a keyboard, and the operation records comprise records of keyboard switching operation;
determining an input scenario based on the record of the keyboard switch operation.
3. The method of claim 1, wherein the detecting the input scene of the user comprises:
acquiring page data and extracting relevant information of an input box from the page data;
and determining an input scene based on the relevant information of the input box.
4. The method of claim 1, wherein the detecting the input scene of the user comprises:
performing screen capture on a display interface to obtain an interface image;
detecting characters in the interface image, and determining character strings in the interface image;
and performing semantic analysis on the character string to determine an input scene.
5. The method of claim 1, wherein the detecting the input scene of the user comprises:
acquiring context information and detecting the character type of the context information;
determining an input scene based on the character type of the context information.
6. The method of claim 1, wherein switching an original keyboard in an input interface to a target keyboard comprises:
displaying guide information, wherein the guide information is used for guiding a user to trigger a target keyboard;
and in response to detecting that the user triggers the target keyboard based on the guiding information, switching an original keyboard in an input interface to the target keyboard.
7. The method of claim 1, wherein after switching an original keyboard in an input interface to a target keyboard associated with the target input scenario, the method further comprises:
exiting the target keyboard in response to detecting that a user performed at least one of: exiting the target input scene, switching the target input scene into other input scenes, and executing closing operation on the target keyboard.
8. An input device, the device comprising:
a detection unit configured to detect an input scene of a user;
a switching unit configured to switch an original keyboard in an input interface to a target keyboard associated with a target input scene in response to the input scene being the target input scene.
9. An apparatus for input, comprising a memory and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by one or more processors, the one or more programs comprising instructions for:
detecting an input scene of a user;
in response to the input scene being a target input scene, switching an original keyboard in an input interface to a target keyboard associated with the target input scene.
10. A computer-readable medium, on which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1-7.
CN202010556065.XA 2020-06-17 2020-06-17 Input method, input device and input device Pending CN113805707A (en)

Publications (1)

Publication Number Publication Date
CN113805707A true CN113805707A (en) 2021-12-17



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination