CN111538470A - Information input method and device and terminal equipment - Google Patents


Info

Publication number
CN111538470A
CN111538470A
Authority
CN
China
Prior art keywords
information
voice
touch
wake
input
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010291651.6A
Other languages
Chinese (zh)
Other versions
CN111538470B (en)
Inventor
罗占伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Xiaomi Mobile Software Co Ltd
Original Assignee
Beijing Xiaomi Mobile Software Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Xiaomi Mobile Software Co Ltd
Priority to CN202010291651.6A
Publication of CN111538470A
Application granted
Publication of CN111538470B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/16 Sound input; sound output
    • G06F 3/167 Audio in a user interface, e.g. using voice commands for navigating, audio feedback
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/4401 Bootstrapping
    • G06F 9/4418 Suspend and resume; hibernate and awake

Abstract

The disclosure relates to an information input method, an information input device, and a terminal device, and belongs to the technical field of information input. According to the information input method provided by the disclosure, a voice input function is started when it is inconvenient for the user to touch the screen, which optimizes the user experience. The information input method includes: detecting a non-touch wake-up operation in an information input interface of an input method; and, in response to detecting the non-touch wake-up operation, starting a voice input function of the input method.

Description

Information input method and device and terminal equipment
Technical Field
The present disclosure relates to the field of information input technologies, and in particular, to an information input method and apparatus, and a terminal device.
Background
Terminal devices such as mobile phones and tablet computers use touch technology to implement information input. For example, in the input interface of a chat application or of a memo pad, a user inputs text through a touch virtual keyboard. However, this approach relies on the user's touch operation, making information input difficult when it is inconvenient for the user to touch the screen. It is therefore necessary to provide a new information input method for application scenarios in which touching the screen is inconvenient for the user.
Disclosure of Invention
The disclosure provides an information input method, an information input apparatus, and a terminal device, which aim to overcome the technical defects in the related art.
In a first aspect, an embodiment of the present disclosure provides an information input method. The method is applied to a terminal device that includes a voice acquisition component, and the method includes:
detecting a non-touch wake-up operation in an information input interface of an input method; and
in response to detecting the non-touch wake-up operation, starting a voice input function of the input method.
In one embodiment, the non-touch wake-up operation includes: a motion wake-up operation and/or a voice wake-up operation.
In one embodiment, the terminal device further includes a motion sensor, and in a case that the non-touch wake-up operation includes the motion wake-up operation, detecting the non-touch wake-up operation includes:
detecting motion parameters of the terminal device through the motion sensor, and determining that the non-touch wake-up operation is detected in response to the motion parameters meeting set conditions.
In one embodiment, the motion parameters include: at least one of a motion amplitude, a number of periodic motions, and a motion trajectory.
In one embodiment, in a case that the non-touch wake-up operation includes a voice wake-up operation, detecting the non-touch wake-up operation includes:
collecting and recognizing voice information through the voice acquisition component, and determining that the non-touch wake-up operation is detected in response to the recognized voice information including a set wake-up word.
In one embodiment, starting the voice input function of the input method includes:
collecting voice information through the voice acquisition component, and converting the collected voice information into visual information; and
inputting the visual information on the information input interface.
In one embodiment, inputting the visual information on the information input interface includes:
acquiring voiceprint information from the collected voice information;
acquiring a corresponding input mode according to the voiceprint information; and
inputting the visual information on the information input interface in the input mode.
In a second aspect, an embodiment of the present disclosure provides an information input apparatus. The apparatus is applied to a terminal device that includes a voice acquisition component, and the apparatus includes:
a detection module configured to detect a non-touch wake-up operation in an information input interface of an input method; and
a starting module configured to start a voice input function of the input method in response to detecting the non-touch wake-up operation.
In one embodiment, the non-touch wake-up operation includes: a motion wake-up operation and/or a voice wake-up operation.
In an embodiment, the terminal device further includes a motion sensor, and when the non-touch wake-up operation includes the motion wake-up operation, the detection module is specifically configured to:
detect motion parameters of the terminal device through the motion sensor, and determine that the non-touch wake-up operation is detected in response to the motion parameters meeting set conditions.
In one embodiment, the motion parameters include: at least one of a motion amplitude, a number of periodic motions, and a motion trajectory.
In an embodiment, when the non-touch wake-up operation includes a voice wake-up operation, the detection module is specifically configured to:
collect and recognize voice information through the voice acquisition component, and determine that the non-touch wake-up operation is detected in response to the recognized voice information including a set wake-up word.
In one embodiment, the starting module includes:
a conversion unit configured to collect voice information through the voice acquisition component and convert the collected voice information into visual information; and
an input unit configured to input the visual information on the information input interface.
In one embodiment, the input unit includes:
a first acquisition subunit configured to acquire voiceprint information from the collected voice information;
a second acquisition subunit configured to acquire a corresponding input mode according to the voiceprint information; and
an input subunit configured to input the visual information on the information input interface in the input mode.
In a third aspect, an embodiment of the present disclosure provides a terminal device, including:
a voice acquisition component;
a memory storing processor-executable instructions; and
a processor configured to execute the executable instructions in the memory to implement the method provided in the first aspect.
In a fourth aspect, the disclosed embodiments provide a readable storage medium having stored thereon executable instructions, which when executed by a processor, implement the method provided by the first aspect described above.
The information input method, the information input device and the terminal equipment provided by the disclosure at least have the following beneficial effects:
according to the information input method provided by the embodiment of the disclosure, the non-touch wake-up operation is detected in the information input interface of the input method, and the voice input function of the input method is started in response to the detection of the non-touch wake-up operation. By adopting the mode, the voice input function of the input method is started through the non-touch awakening operation, so that information can be input through the voice input function of the input method under the condition that a user does not conveniently touch a screen, the technical defects in the related technology are overcome, and the user experience is optimized.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
FIG. 1 is a schematic flow diagram illustrating an information input method according to an exemplary embodiment;
FIG. 2 is a schematic flow diagram illustrating an information input method according to another exemplary embodiment;
FIG. 3 is a schematic flow diagram illustrating an information input method according to another exemplary embodiment;
FIG. 4 is a block diagram illustrating an information input device according to an exemplary embodiment;
FIG. 5 is a block diagram of an information input device according to another exemplary embodiment;
FIG. 6 is a block diagram illustrating an information input device according to another exemplary embodiment;
FIG. 7 is a block diagram of a terminal device according to an exemplary embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
The terminology used in the present disclosure is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. Unless otherwise defined, technical or scientific terms used herein shall have the ordinary meaning as understood by one of ordinary skill in the art to which this disclosure belongs. The use of the terms "a" or "an" and the like in the description and in the claims of this disclosure do not denote a limitation of quantity, but rather denote the presence of at least one. Unless otherwise indicated, the word "comprise" or "comprises", and the like, means that the element or item listed before "comprises" or "comprising" covers the element or item listed after "comprises" or "comprising" and its equivalents, and does not exclude other elements or items. The terms "connected" or "coupled" and the like are not restricted to physical or mechanical connections, but may include electrical connections, whether direct or indirect. As used in the specification and claims of this disclosure, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein is meant to encompass any and all possible combinations of one or more of the associated listed items.
The related art provides an information input method implemented through a voice input function: voice information uttered by a user is collected, and the collected voice information is converted into text information displayed on an input interface. However, the information input method in the related art is activated by a specific key trigger, so it is difficult for the user to use it when touching the screen is inconvenient.
Based on the above problem, the embodiments of the present disclosure provide an information input method, an information input device, and a terminal device. The terminal equipment suitable for the information input method and the information input device comprises a voice acquisition component, and the voice information input from the outside is acquired through the voice acquisition component.
FIG. 1 is a flow chart illustrating an information input method according to an example embodiment. As shown in fig. 1, an information input method provided by the embodiment of the present disclosure includes:
step 101, detecting a non-touch wakeup operation in an information input interface of an input method.
The information input interface of the input method may be, for example: an input-method input interface within a chat application, within a browser, or within a memo application. In response to a trigger operation on the information input interface, the terminal device enters a non-touch wake-up detection mode.
In the embodiments of the present disclosure, the non-touch wake-up operation has various optional modes, which are specifically described in the following cases.
As a first optional way, the non-touch wake-up operation includes a motion wake-up operation. In this case, the terminal device to which the information input method is applied further includes a motion sensor (e.g., an acceleration sensor, a velocity sensor, or an angle sensor). Detecting the non-touch wake-up operation in step 101 then specifically includes: detecting motion parameters of the terminal device through the motion sensor, and determining that the non-touch wake-up operation is detected in response to the detected motion parameters meeting set conditions.
Optionally, the motion parameter is a motion amplitude, and the set condition is that the motion amplitude is greater than or equal to a first set threshold. In this case, step 101 specifically includes: detecting the motion amplitude of the terminal device through the motion sensor, and determining that the non-touch wake-up operation is detected when the detected motion amplitude is greater than or equal to the first set threshold.
Optionally, the motion parameter includes a number of periodic motions, and the set condition is that the number of periodic motions is greater than or equal to a second set threshold. In this case, step 101 specifically includes: detecting the number of periodic motions of the terminal device through the motion sensor, and determining that the non-touch wake-up operation is detected when the detected number is greater than or equal to the second set threshold.
Optionally, the motion parameter includes a motion trajectory, and the set condition is a specified trajectory type. The specified trajectory type may be, for example, a reciprocating motion trajectory or a flipping motion trajectory (e.g., flipping about an axis parallel to the long or short side of the terminal device). In this case, step 101 specifically includes: detecting the motion trajectory of the terminal device through the motion sensor, and determining that the non-touch wake-up operation is detected when the detected motion trajectory matches the set motion trajectory.
Optionally, the motion parameters include at least two of the motion amplitude, the number of periodic motions, and the motion trajectory. In this case, the non-touch wake-up operation is detected when all of the detected parameters satisfy their set conditions.
When the motion parameters include the motion amplitude and/or the number of periodic motions, the values of the first and second set thresholds may be set according to statistics of user habits. The smaller the thresholds, the higher the sensitivity of motion wake-up detection; the larger the thresholds, the higher its accuracy.
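The motion-parameter checks described above can be sketched as follows. This is a minimal illustrative sketch: the `MotionSample` structure, the threshold values, and the trajectory labels are assumptions for demonstration, not values taken from this disclosure.

```python
from dataclasses import dataclass

@dataclass
class MotionSample:
    """One reading derived from the motion sensor (illustrative structure)."""
    amplitude: float     # e.g. peak acceleration magnitude
    period_count: int    # number of periodic (shake) motions observed
    trajectory: str      # classified trajectory type, e.g. "reciprocating"

def is_motion_wakeup(sample: MotionSample,
                     min_amplitude: float = 15.0,   # first set threshold (assumed)
                     min_periods: int = 3,          # second set threshold (assumed)
                     allowed_trajectories=("reciprocating", "flip")) -> bool:
    # When several motion parameters are configured, all of them must meet
    # their set conditions for the wake-up operation to be detected.
    return (sample.amplitude >= min_amplitude
            and sample.period_count >= min_periods
            and sample.trajectory in allowed_trajectories)
```

Lowering `min_amplitude` or `min_periods` makes detection more sensitive; raising them trades sensitivity for fewer false detections, mirroring the threshold trade-off described above.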
As a second optional way, the non-touch wake-up operation includes a voice wake-up operation. In this case, detecting the non-touch wake-up operation in step 101 specifically includes: collecting and recognizing voice information through the voice acquisition component, and determining that the non-touch wake-up operation is detected in response to the recognized voice information including a set wake-up word.
For example, before performing non-touch wake-up detection, sound characteristics (e.g., pitch, tone, and voiceprint information) of the user speaking the set wake-up word are collected and stored in advance. In step 101, recognizing the voice information then specifically includes: recognizing sound characteristics from the collected voice information. When the recognized sound characteristics match the pre-stored sound characteristics of the set wake-up word, the non-touch wake-up operation is detected.
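The voice wake-up check can be sketched as below. Matching plain recognized text against a stored wake word stands in for the acoustic-feature matching described above, and the wake word itself is a hypothetical example, not one named by this disclosure.

```python
STORED_WAKE_WORD = "xiao ai"  # hypothetical wake word, stored in advance

def is_voice_wakeup(recognized_text: str,
                    wake_word: str = STORED_WAKE_WORD) -> bool:
    # The wake-up operation is detected when the recognized voice
    # information includes the set wake-up word.
    return wake_word in recognized_text.lower()
```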
As a third optional way, the non-touch wake-up operation includes both a motion wake-up operation and a voice wake-up operation. For example, the non-touch wake-up operation may be speaking the set wake-up word while shaking the device.
In this case, the non-touch wake-up operation is detected in response to detecting both the motion wake-up operation and the voice wake-up operation within a set time period. The two operations may be detected simultaneously within the set time period, or one after the other within it. The length of the set time period may be configured according to user needs.
In this way, combining two different operation modes improves the detection accuracy of the non-touch wake-up operation, reduces false detections, and optimizes the user experience.
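The combined third way can be sketched as a time-window check: both wake-up events must occur within a set period, in either order. Timestamps are in seconds, and the window length is an illustrative assumption.

```python
def is_combined_wakeup(motion_time_s: float, voice_time_s: float,
                       window_s: float = 2.0) -> bool:
    # Detected only when the motion wake-up and the voice wake-up occur
    # within the set time period, simultaneously or one after the other.
    return abs(motion_time_s - voice_time_s) <= window_s
```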
Step 102: in response to detecting the non-touch wake-up operation, starting a voice input function of the input method.
In this way, the voice input function of the input method is started by a non-touch wake-up operation, so that information can be input through the voice input function even when it is inconvenient for the user to touch the screen, overcoming the technical defects in the related art and optimizing the user experience.
FIG. 2 is a flowchart illustrating step 102, according to an example embodiment. In one embodiment, as shown in fig. 2, step 102 specifically includes:
and step 1021, acquiring voice information through the voice acquisition component, and converting the acquired voice information into visual information.
Optionally, the visual information is text information, and the collected voice information is converted into text information through voice recognition. Optionally, the visual information is image information (e.g., expression images), the collected voice information is converted into text information through voice recognition, and image information corresponding to the text information is searched according to the text information. The image information is pre-stored in the local terminal device, or the image information is downloaded in the network according to the character information.
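The conversion in step 1021 can be sketched as below. The image table stands in for locally stored image information, and its entries are made up for illustration.

```python
# Hypothetical table of locally stored image information, keyed by the
# text produced by voice recognition.
LOCAL_IMAGES = {"smile": "smile.png", "thumbs up": "thumbs_up.png"}

def to_visual_info(recognized_text: str):
    # Prefer image information when a matching entry exists; otherwise the
    # visual information is the recognized text itself.
    image = LOCAL_IMAGES.get(recognized_text.lower())
    return ("image", image) if image else ("text", recognized_text)
```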
In one embodiment, when the non-touch wake-up operation is a motion wake-up operation, step 1021 specifically includes: in response to detecting the non-touch wake-up operation, enabling the voice acquisition component, collecting voice information through it, and converting the collected voice information into visual information.
In this case, the non-touch wake-up operation also serves as the trigger that enables the voice acquisition component. Before the non-touch wake-up operation is detected in the information input interface, the voice acquisition component is kept disabled, which prevents it from continuously collecting voice information and increasing energy consumption.
In step 1021, the non-touch wake-up operation triggers the voice acquisition component to convert captured speech into visual information. That is, the voice input function is started in the input-method information input interface through the non-touch wake-up operation, so the user need not touch the screen to start it, which optimizes the user experience.
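The energy-saving flow around step 1021 can be sketched as a small state holder: the capture component is unusable until a wake-up enables it. Class and method names are illustrative assumptions, not API names from this disclosure.

```python
class VoiceCapture:
    """Illustrative stand-in for the voice acquisition component."""

    def __init__(self):
        self.enabled = False  # disabled until a non-touch wake-up is detected

    def on_wakeup_detected(self):
        # The non-touch wake-up doubles as the enable trigger, so the
        # component is not recording (and drawing power) continuously.
        self.enabled = True

    def capture(self) -> str:
        if not self.enabled:
            raise RuntimeError("voice capture is disabled until wake-up")
        return "<pcm frames>"  # placeholder for collected voice information
```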
Step 1022: inputting the visual information on the information input interface.
If the visual information converted in step 1021 is text information, text is input on the input interface in step 1022; if it is image information, the image is input on the input interface.
In one embodiment, FIG. 3 is a flowchart illustrating step 1022, according to an example embodiment. As shown in fig. 3, step 1022 includes:
and 301, acquiring voiceprint information of the user according to the acquired voice information.
The voiceprint information has physiological feature uniqueness and can identify the identity of the user. Optionally, voiceprint information of a user frequently used by the terminal device is stored in advance. For example, when the non-touch wake-up operation includes a voice wake-up operation, voiceprint information of the user is acquired in the process of storing the set wake-up word, and the voiceprint information, the set wake-up word, and the user identity are stored in a correlated manner.
Step 302: acquiring a corresponding input mode according to the voiceprint information.
Optionally, the user identity is determined from the pre-stored correspondence between voiceprint information and user identities together with the voiceprint information acquired in step 301, and the preset input mode is then determined according to the user identity.
For example, a correspondence between user identities and input modes is stored in advance on the terminal device. Users are classified by age into middle-aged/elderly and young; the correspondence includes a first input mode for middle-aged or elderly users and a second input mode for young users, where the font size of the first input mode is larger than that of the second.
In this way, the user's identity is determined from the voiceprint information, and a personalized input mode is then obtained according to that identity.
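Steps 301 to 303 can be sketched as two table lookups: voiceprint to user identity, then user identity to input mode (reduced here to a font size, following the example above). All table entries are hypothetical.

```python
# Hypothetical stored correspondences: voiceprint -> user identity,
# and user identity -> input mode (reduced here to a font size).
VOICEPRINT_TO_USER = {"vp_a1": "grandpa", "vp_b2": "alice"}
USER_TO_MODE = {"grandpa": {"font_pt": 22},   # first input mode: larger font
                "alice": {"font_pt": 14}}     # second input mode
DEFAULT_MODE = {"font_pt": 16}

def input_mode_for(voiceprint_id: str) -> dict:
    # Identify the user from the voiceprint, then fetch the personalized
    # input mode; fall back to a default for unknown voiceprints.
    user = VOICEPRINT_TO_USER.get(voiceprint_id)
    return USER_TO_MODE.get(user, DEFAULT_MODE)
```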
Step 303: inputting the visual information on the information input interface in the input mode.
In this way, personalized input is realized for different user identities, so that the information input method has different display effects on the input interface for different users, further optimizing the user experience.
According to the information input method provided by the embodiments of the disclosure, when a non-touch wake-up operation is detected in the information input interface, voice information is collected through the voice acquisition component, converted into visual information, and input on the information input interface. In this way, the voice input function of the input method is started by a non-touch wake-up operation, so that information can be input through it even when it is inconvenient for the user to touch the screen, overcoming the technical defects in the related art and optimizing the user experience.
An information input device is also provided in an embodiment of the present disclosure, and fig. 4 is a block diagram illustrating an information input device according to an exemplary embodiment. The information input device is applied to a terminal device having a voice acquisition component. As shown in fig. 4, the information input device includes a detection module 410 and a starting module 420. The detection module 410 is configured to detect a non-touch wake-up operation in the information input interface of the input method.
The starting module 420 is configured to start a voice input function of the input method in response to detecting the non-touch wake-up operation.
In one embodiment, the non-touch wake-up operation includes: a motion wake-up operation and/or a voice wake-up operation.
In an embodiment, the terminal device further includes a motion sensor, and in a case that the non-touch wake-up operation includes a motion wake-up operation, the detection module 410 is specifically configured to: detect motion parameters of the terminal device through the motion sensor, and determine that the non-touch wake-up operation is detected in response to the motion parameters meeting set conditions.
In one embodiment, the motion parameters include: at least one of a motion amplitude, a number of periodic motions, and a motion trajectory.
In an embodiment, in a case that the non-touch wake-up operation includes a voice wake-up operation, the detection module 410 is specifically configured to: collect and recognize voice information through the voice acquisition component, and determine that the non-touch wake-up operation is detected in response to the recognized voice information including a set wake-up word.
In one embodiment, FIG. 5 is a block diagram illustrating the starting module according to an exemplary embodiment. As shown in fig. 5, the starting module 420 includes a conversion unit 421 and an input unit 422.
The conversion unit 421 is configured to collect voice information through the voice collection component, and convert the collected voice information into visual information.
The input unit 422 is used to input visual information on the information input interface.
In one embodiment, FIG. 6 is a block diagram illustrating an input unit according to an exemplary embodiment. As shown in fig. 6, the input unit 422 includes: a first acquisition subunit 4221, a second acquisition subunit 4222, and an input subunit 4223.
The first obtaining subunit 4221 is configured to obtain voiceprint information according to the collected voice information.
The second obtaining subunit 4222 is configured to obtain a corresponding input mode according to the voiceprint information.
The input subunit 4223 is configured to input the visual information on the information input interface in the acquired input mode.
The embodiment of the disclosure also provides a terminal device. The terminal device includes: a voice acquisition component, a memory, and a processor. The memory stores processor-executable instructions, and the processor is configured to execute those instructions to implement the steps of the information input method provided above.
In the embodiment of the present disclosure, the terminal device may be a mobile phone, a tablet computer, a wearable device (a smart watch, a smart bracelet, a helmet, etc.), an in-vehicle device, or a medical device.
Fig. 7 is a block diagram of a terminal device provided in accordance with an example embodiment. As shown in fig. 7, terminal device 700 may include one or more of the following components: a processing component 702, a memory 704, a power component 706, a multimedia component 708, an audio component 710, an input/output (I/O) interface 712, a sensor component 714, a communication component 716, and an image capture component.
The processing component 702 generally controls the overall operation of the terminal device 700, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 702 may include one or more processors 720 to execute instructions. Further, the processing component 702 may include one or more modules that facilitate interaction between the processing component 702 and other components. For example, the processing component 702 may include a multimedia module to facilitate interaction between the multimedia component 708 and the processing component 702.
The memory 704 is configured to store various types of data to support operations at the terminal device 700. Examples of such data include instructions for any application or method operating on terminal device 700, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 704 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power component 706 provides power to the various components of the terminal device 700. The power components 706 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the terminal device 700.
The multimedia component 708 includes a screen providing an output interface between the terminal device 700 and the target object. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a target object. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation.
The audio component 710 is configured to output and/or input audio signals. For example, the audio component 710 includes a Microphone (MIC) configured to receive an external audio signal when the terminal device 700 is in an operation mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signal may further be stored in the memory 704 or transmitted via the communication component 716. In some embodiments, audio component 710 also includes a speaker for outputting audio signals.
The I/O interface 712 provides an interface between the processing component 702 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc.
The sensor component 714 includes one or more sensors for providing various aspects of status assessment for the terminal device 700. For example, the sensor component 714 can detect an open/closed state of the terminal device 700 and the relative positioning of components, such as the display and keypad of the terminal device 700. The sensor component 714 can also detect a change in the position of the terminal device 700 or one of its components, the presence or absence of a target object in contact with the terminal device 700, the orientation or acceleration/deceleration of the terminal device 700, and a change in the temperature of the terminal device 700. As another example, the sensor component 714 also includes a light sensor disposed below the OLED display screen.
The communication component 716 is configured to facilitate wired or wireless communication between the terminal device 700 and other devices. The terminal device 700 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 716 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 716 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the terminal device 700 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components.
In an exemplary embodiment, the present disclosure also provides a readable storage medium storing executable instructions. The executable instructions can be executed by a processor of the terminal device to implement the steps of the information input method. The readable storage medium may be, among others, a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.

Claims (16)

1. An information input method, characterized in that the method is applied to a terminal device, the terminal device further comprises a voice acquisition component, and the method comprises:
detecting a non-touch wake-up operation in an information input interface of an input method; and
in response to detecting the non-touch wake-up operation, starting a voice input function of the input method.
2. The method of claim 1, wherein the non-touch wake-up operation comprises: a motion wake-up operation and/or a voice wake-up operation.
3. The method according to claim 2, wherein the terminal device further comprises a motion sensor, and in the case that the non-touch wake-up operation comprises the motion wake-up operation, the detecting the non-touch wake-up operation comprises:
detecting motion parameters of the terminal device through the motion sensor, and in response to the motion parameters meeting a set condition, determining that the non-touch wake-up operation is detected.
4. The method of claim 3, wherein the motion parameters comprise: at least one of a motion amplitude, a number of periodic motions, and a motion trajectory.
5. The method of claim 2, wherein in the case that the non-touch wake-up operation comprises the voice wake-up operation, the detecting the non-touch wake-up operation comprises:
collecting and recognizing voice information through the voice acquisition component, and in response to the recognized voice information comprising a set wake-up word, determining that the non-touch wake-up operation is detected.
6. The method of claim 1, wherein the starting a voice input function of the input method comprises:
collecting voice information through the voice acquisition component, and converting the collected voice information into visual information; and
inputting the visual information on the information input interface.
7. The method of claim 6, wherein the inputting the visual information on the information input interface comprises:
acquiring voiceprint information according to the collected voice information;
acquiring a corresponding input mode according to the voiceprint information; and
inputting the visual information on the information input interface in the input mode.
8. An information input device, characterized in that the device is applied to a terminal device, the terminal device further comprises a voice acquisition component, and the device comprises:
a detection module configured to detect a non-touch wake-up operation in an information input interface of an input method; and
a starting module configured to, in response to detecting the non-touch wake-up operation, start a voice input function of the input method.
9. The apparatus of claim 8, wherein the non-touch wake-up operation comprises: a motion wake-up operation and/or a voice wake-up operation.
10. The apparatus according to claim 9, wherein the terminal device further comprises a motion sensor, and in the case that the non-touch wake-up operation comprises the motion wake-up operation, the detection module is specifically configured to:
detect motion parameters of the terminal device through the motion sensor, and in response to the motion parameters meeting a set condition, determine that the non-touch wake-up operation is detected.
11. The apparatus of claim 10, wherein the motion parameters comprise: at least one of a motion amplitude, a number of periodic motions, and a motion trajectory.
12. The apparatus of claim 9, wherein in the case that the non-touch wake-up operation comprises the voice wake-up operation, the detection module is specifically configured to:
collect and recognize voice information through the voice acquisition component, and in response to the recognized voice information comprising a set wake-up word, determine that the non-touch wake-up operation is detected.
13. The apparatus of claim 8, wherein the starting module comprises:
a conversion unit configured to collect voice information through the voice acquisition component and convert the collected voice information into visual information; and
an input unit configured to input the visual information on the information input interface.
14. The apparatus of claim 13, wherein the input unit comprises:
a first acquisition subunit configured to acquire voiceprint information according to the collected voice information;
a second acquisition subunit configured to acquire a corresponding input mode according to the voiceprint information; and
an input subunit configured to input the visual information on the information input interface in the input mode.
15. A terminal device, characterized in that the terminal device comprises:
a voice acquisition component;
a memory storing instructions executable by a processor; and
a processor configured to execute the executable instructions in the memory to implement the method of any one of claims 1-7.
16. A readable storage medium having executable instructions stored thereon, wherein the executable instructions, when executed by a processor, implement the method of any one of claims 1-7.
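The flow defined by claims 1-7 — detect a non-touch wake-up (motion parameters meeting a set condition, or recognized speech containing a set wake-up word), then start voice input, convert the collected speech to visual information, and choose an input mode from voiceprint information — can be sketched as follows. This is a minimal illustrative sketch, not the patented implementation: the threshold, wake word, and all function names (`detect_motion_wake`, `input_mode_for`, etc.) are hypothetical.

```python
# Hypothetical sketch of the claimed information input flow (claims 1-7).
# Thresholds, names, and the mode table are illustrative assumptions.

MOTION_THRESHOLD = 2.5       # "set condition" on motion amplitude (arbitrary units)
WAKE_WORD = "start input"    # "set wake-up word"

def detect_motion_wake(motion_amplitude: float) -> bool:
    """Claim 3: a motion wake-up is detected when the motion parameter meets the set condition."""
    return motion_amplitude >= MOTION_THRESHOLD

def detect_voice_wake(recognized_text: str) -> bool:
    """Claim 5: a voice wake-up is detected when recognized speech contains the set wake-up word."""
    return WAKE_WORD in recognized_text.lower()

def input_mode_for(voiceprint_id: str, mode_table: dict) -> str:
    """Claim 7: acquire the input mode corresponding to the voiceprint information."""
    return mode_table.get(voiceprint_id, "default")

def handle_input(motion_amplitude, recognized_text, voiceprint_id, mode_table, speech_to_text):
    """Claims 1 and 6: on any non-touch wake-up, start voice input and return the
    visual information (text) together with the input mode used to render it."""
    if not (detect_motion_wake(motion_amplitude) or detect_voice_wake(recognized_text)):
        return None  # no wake-up detected: keep the touch keyboard active
    visual_info = speech_to_text()  # convert collected voice information to visual information
    mode = input_mode_for(voiceprint_id, mode_table)
    return {"text": visual_info, "mode": mode}
```

A usage sketch: `handle_input(3.0, "", "user-a", {"user-a": "cursive"}, lambda: "hello")` triggers the motion branch and returns `{"text": "hello", "mode": "cursive"}`, while an amplitude below the threshold with no wake word returns `None`.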
CN202010291651.6A 2020-04-14 2020-04-14 Information input method and device and terminal equipment Active CN111538470B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010291651.6A CN111538470B (en) 2020-04-14 2020-04-14 Information input method and device and terminal equipment

Publications (2)

Publication Number Publication Date
CN111538470A true CN111538470A (en) 2020-08-14
CN111538470B CN111538470B (en) 2023-09-26

Family

ID=71978698

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010291651.6A Active CN111538470B (en) 2020-04-14 2020-04-14 Information input method and device and terminal equipment

Country Status (1)

Country Link
CN (1) CN111538470B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160124564A1 (en) * 2014-10-29 2016-05-05 Fih (Hong Kong) Limited Electronic device and method for automatically switching input modes of electronic device
CN105589642A (en) * 2014-10-29 2016-05-18 深圳富泰宏精密工业有限公司 Input method automatic switching system and method of handheld electronic device
CN107193914A (en) * 2017-05-15 2017-09-22 广东艾檬电子科技有限公司 A kind of pronunciation inputting method and mobile terminal
CN107680589A (en) * 2017-09-05 2018-02-09 百度在线网络技术(北京)有限公司 Voice messaging exchange method, device and its equipment
CN108965584A (en) * 2018-06-21 2018-12-07 北京百度网讯科技有限公司 A kind of processing method of voice messaging, device, terminal and storage medium
CN109618059A (en) * 2019-01-03 2019-04-12 北京百度网讯科技有限公司 The awakening method and device of speech identifying function in mobile terminal


Also Published As

Publication number Publication date
CN111538470B (en) 2023-09-26

Similar Documents

Publication Publication Date Title
CN107919123B (en) Multi-voice assistant control method, device and computer readable storage medium
US10942580B2 (en) Input circuitry, terminal, and touch response method and device
CN105224195B (en) Terminal operation method and device
WO2018027501A1 (en) Terminal, touch response method, and device
EP3933570A1 (en) Method and apparatus for controlling a voice assistant, and computer-readable storage medium
EP3531240A1 (en) Fingerprint acquisition method, apparatus and computer-readable storage medium
CN111063354B (en) Man-machine interaction method and device
EP3208742B1 (en) Method and apparatus for detecting pressure
CN107666536B (en) Method and device for searching terminal
CN111968635B (en) Speech recognition method, device and storage medium
EP4184506A1 (en) Audio processing
US20180238748A1 (en) Pressure detection method and apparatus, and storage medium
CN106409317B (en) Method and device for extracting dream speech
CN111696553A (en) Voice processing method and device and readable medium
CN109151186B (en) Theme switching method and device, electronic equipment and computer readable storage medium
US9183372B2 (en) Mobile terminal and control method thereof
CN108133708B (en) Voice assistant control method and device and mobile terminal
CN108766427B (en) Voice control method and device
CN110673917A (en) Information management method and device
CN113936697B (en) Voice processing method and device for voice processing
CN111679746A (en) Input method and device and electronic equipment
CN111538470B (en) Information input method and device and terminal equipment
US10198614B2 (en) Method and device for fingerprint recognition
CN113035189A (en) Document demonstration control method, device and equipment
CN113873165A (en) Photographing method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant