CN109994111B - Interaction method, interaction device and mobile terminal - Google Patents

Interaction method, interaction device and mobile terminal

Info

Publication number
CN109994111B
CN109994111B (application CN201910143207.7A)
Authority
CN
China
Prior art keywords
sound signal
preset
signal
volume difference
microphone
Prior art date
Legal status
Active
Application number
CN201910143207.7A
Other languages
Chinese (zh)
Other versions
CN109994111A (en)
Inventor
罗春晖
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN201910143207.7A priority Critical patent/CN109994111B/en
Publication of CN109994111A publication Critical patent/CN109994111A/en
Application granted granted Critical
Publication of CN109994111B publication Critical patent/CN109994111B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
      • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
        • G10L - SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
          • G10L15/00 Speech recognition
            • G10L15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
              • G10L2015/223 Execution procedure of a spoken command
          • G10L21/00 Processing of the speech or voice signal to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
            • G10L21/02 Speech enhancement, e.g. noise reduction or echo cancellation
              • G10L21/0208 Noise filtering
                • G10L21/0216 Noise filtering characterised by the method used for estimating noise
                  • G10L2021/02161 Number of inputs available containing the signal or the noise to be suppressed
                    • G10L2021/02165 Two microphones, one receiving mainly the noise signal and the other one mainly the speech signal
          • G10L25/00 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
            • G10L25/48 Speech or voice analysis techniques specially adapted for particular use
              • G10L25/51 Speech or voice analysis techniques specially adapted for comparison or discrimination

Landscapes

  • Engineering & Computer Science
  • Computational Linguistics
  • Health & Medical Sciences
  • Audiology, Speech & Language Pathology
  • Human Computer Interaction
  • Physics & Mathematics
  • Acoustics & Sound
  • Multimedia
  • Signal Processing
  • Quality & Reliability
  • Telephone Function

Abstract

The embodiment of the invention provides an interaction method, an interaction device and a mobile terminal. The method comprises: acquiring a first sound signal through a first microphone and a second sound signal through a second microphone; identifying the volume difference between the first sound signal and the second sound signal; and, when the volume difference satisfies a first preset condition, determining a target interaction instruction and executing it. With this embodiment, interaction with the mobile terminal can be realized accurately and quickly simply by analyzing the sound signals collected by the two microphones, without adding physical keys or performing a sequence of on-screen operations guided by the display.

Description

Interaction method, interaction device and mobile terminal
Technical Field
The present invention relates to the field of mobile communications technologies, and in particular, to an interaction method, an interaction device, and a mobile terminal.
Background
With the development of mobile terminal technology, mobile terminals play an ever larger role in daily life. When using a mobile terminal, people can interact with it by, for example, pressing physical keys or tapping the screen.
In the prior art, to keep the overall structure of the mobile terminal compact and attractive, very few physical keys are provided, so key presses support only limited interaction and cannot satisfy users' interaction requirements. Screen-tap interaction, in turn, usually requires a sequence of operations guided by what the screen currently displays, so accurate and fast interaction cannot be achieved either.
Disclosure of Invention
The embodiment of the invention provides an interaction method, an interaction device and a mobile terminal, and aims to solve the problem that, in the prior art, interaction between a user and a mobile terminal is neither accurate nor fast enough.
In order to solve the above technical problem, the present invention provides an interaction method, including:
acquiring a first sound signal through a first microphone and acquiring a second sound signal through a second microphone;
identifying a volume difference of the first sound signal and the second sound signal;
and under the condition that the volume difference value meets a first preset condition, determining a target interactive instruction and executing the target interactive instruction.
The embodiment of the invention also provides an interaction device, which comprises:
the sound signal acquisition module is used for acquiring a first sound signal through a first microphone and acquiring a second sound signal through a second microphone;
a volume difference value identification module for identifying the volume difference value of the first sound signal and the second sound signal;
and the interactive instruction execution module is used for determining a target interactive instruction and executing the target interactive instruction under the condition that the volume difference value meets a first preset condition.
The embodiment of the invention also provides a mobile terminal which comprises any one of the interaction devices.
The embodiment of the present invention further provides a mobile terminal, which includes a processor, a memory, and a computer program stored in the memory and executable on the processor; when the computer program is executed by the processor, the steps of the foregoing interaction method are implemented.
An embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of the foregoing interaction method.
In the embodiment of the invention, the target interaction instruction is determined according to the volume difference between the sound signals collected by the two microphones, and is then executed. Interaction with the mobile terminal can therefore be realized accurately and quickly merely by analyzing the sound signals collected by the two microphones, without adding physical keys or performing a sequence of on-screen operations. Specifically, in practical applications, after a user emits a sound, a first sound signal is collected by the first microphone and a second sound signal is collected by the second microphone. Because sound attenuates as it propagates over distance, a volume difference exists between the two collected signals. When that difference satisfies the first preset condition, the user can be assumed to intend to interact with the mobile terminal, so the target interaction instruction is determined and executed, realizing fast interaction between the user and the mobile terminal.
Drawings
Fig. 1 is a flowchart illustrating the steps of an interaction method according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating steps of an interactive method according to an embodiment of the present invention;
Fig. 3 is a schematic diagram showing how the signal of the acceleration detection module corresponds in time to the sound collected by a microphone, according to an embodiment of the present invention;
FIG. 4 is a diagram of a mobile terminal according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a second microphone collecting sound signals according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of a first microphone collecting sound signals according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of sound signals collected by the first microphone and the second microphone according to the embodiment of the present invention;
FIG. 8 is a block diagram of an interactive apparatus according to an embodiment of the present invention;
FIG. 9 is a block diagram of a detailed structure of an interactive apparatus according to an embodiment of the present invention;
fig. 10 is a block diagram of a mobile terminal according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings. Obviously, the described embodiments are some, not all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present invention.
Referring to fig. 1, a flowchart illustrating steps of an interaction method in the embodiment of the present invention is shown, where the method may specifically include:
step 101: a first sound signal is acquired by a first microphone and a second sound signal is acquired by a second microphone.
The embodiment of the present invention may be applied to a mobile terminal, which may specifically be a mobile phone, a computer, an electronic reader, or the like; this is not specifically limited here. The mobile terminal comprises at least a first microphone and a second microphone.
In a specific application, a certain distance separates the first microphone from the second microphone. When a sound propagates, the first microphone collects a first sound signal and the second microphone collects a second sound signal. It can be understood that if the sound source is closer to the first microphone, the first sound signal will be stronger than the second sound signal; if the sound source is closer to the second microphone, the first sound signal will be weaker than the second sound signal.
Step 102: a volume difference is identified for the first sound signal and the second sound signal.
In the embodiment of the present invention, after the first and second sound signals are obtained, the volume difference between them is identified. For example, if one of the two signals is much stronger than the other, i.e. the volume difference is large, this indicates that the user has purposefully directed sound at one of the two microphones, and it can be preliminarily determined that the user may wish to interact with the mobile terminal.
Step 103: and under the condition that the volume difference value meets a first preset condition, determining a target interactive instruction and executing the target interactive instruction.
In the embodiment of the present invention, the first preset condition may be a specific difference interval, and the target interaction instruction is confirmed and executed when the volume difference satisfies the first preset condition. For example, a mapping between difference intervals and target interaction instructions may be preset; when the volume difference falls into one of those intervals, the target interaction instruction is determined from the mapping. Alternatively, there may be only one target interaction instruction, which is determined as soon as the volume difference satisfies the first preset condition.
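As a rough illustration of the interval mapping just described, the following Python sketch looks up a target interaction instruction from preset difference intervals. The interval bounds and instruction names are invented for illustration; the patent does not fix concrete values.

```python
# Hypothetical mapping from preset difference intervals to target
# interaction instructions. Bounds and names are illustrative only.

INTERVAL_INSTRUCTIONS = [
    ((10.0, float("inf")), "screenshot"),           # sound biased to first mic
    ((float("-inf"), -10.0), "dial_preset_number"),  # sound biased to second mic
]

def lookup_instruction(volume_difference):
    """Return the instruction whose interval contains the volume
    difference, or None when no interval matches."""
    for (low, high), instruction in INTERVAL_INSTRUCTIONS:
        if low < volume_difference <= high:
            return instruction
    return None
```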
In a specific application, the target interaction instruction may serve a variety of purposes; neither the first preset condition nor the target interaction instruction is specifically limited in the embodiment of the present invention.
In the embodiment of the invention, the target interaction instruction is determined from the volume difference between the sound signals collected by the two microphones and is then executed, so interaction with the mobile terminal can be realized accurately and quickly merely by analyzing those two signals, without adding physical keys or performing a sequence of on-screen operations. In practice, after a user emits a sound, the first microphone collects a first sound signal and the second microphone collects a second sound signal; because sound attenuates with distance, a volume difference exists between the two. When that difference satisfies the first preset condition, the user can be assumed to intend to interact with the mobile terminal, so the target interaction instruction is determined and executed, realizing fast interaction.
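The three steps of Fig. 1 can be sketched in Python as follows. The mean-magnitude volume measure, the threshold of 0.5, and the instruction names are illustrative assumptions, not the patent's actual implementation.

```python
# Minimal sketch of the three-step flow: collect two signals, compare
# their volumes, act when the difference meets a preset condition.
# Volume measure, threshold and instruction names are assumptions.

def volume_difference(first_signal, second_signal):
    """Difference between the mean magnitudes of two digital signals."""
    def mean_mag(samples):
        return sum(abs(x) for x in samples) / len(samples)
    return mean_mag(first_signal) - mean_mag(second_signal)

def interact(first_signal, second_signal, threshold=0.5):
    """Return a target interaction instruction name, or None when the
    volume difference does not satisfy the preset condition."""
    diff = volume_difference(first_signal, second_signal)
    if diff > threshold:
        return "instruction_for_first_microphone"
    if diff < -threshold:
        return "instruction_for_second_microphone"
    return None  # difference too small: no interaction intended
```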
Referring to fig. 2, a flowchart illustrating steps of an interaction method in the embodiment of the present invention is shown, where the method may specifically include:
step 201: a first sound signal is acquired by a first microphone and a second sound signal is acquired by a second microphone.
Step 202: and judging whether the first instruction is received or not.
In the embodiment of the present invention, the first instruction may indicate that, between microphone interaction and physical-key interaction, microphone interaction is processed preferentially. Therefore, when the first instruction is received, the step of identifying the volume difference between the first sound signal and the second sound signal is performed, so that the microphone interaction flow can proceed normally.
In a specific application, a user may send the first instruction to the mobile terminal by triggering a physical key or by setting a terminal scene, for example a microphone-touch scene. It can be understood that the first instruction may also be issued in other ways depending on the actual application scenario, which is not limited in the embodiment of the present invention. Because the subsequent interaction flow is executed only when the first instruction has been received, false triggering caused by unintentional user actions is avoided, and execution of the interaction method better matches the user's intention.
Optionally, a second instruction may be set to complement the first instruction. The second instruction may indicate that, between microphone interaction and physical-key interaction, key interaction is processed preferentially; when the second instruction is received, the step of identifying the volume difference between the first sound signal and the second sound signal is not executed, so the microphone interaction flow is closed and the user's preference regarding microphone interaction is flexibly satisfied.
Step 203: in the event that the first instruction is received, identifying a volume difference for the first sound signal and the second sound signal.
In the embodiment of the present invention, when the first instruction is received, it may be considered that the user wishes to perform the interaction process in the embodiment of the present invention, and therefore, the volume difference between the first sound signal and the second sound signal may be further identified.
Preferably, the identifying a volume difference of the first sound signal and the second sound signal comprises:
substep a1 (not shown): demodulating the first sound signal and the second sound signal into a digital signal.
In the embodiment of the present invention, the first sound signal collected by the first microphone and the second sound signal collected by the second microphone are usually analog signals. They may be demodulated into digital signals by a Digital Signal Processing (DSP) module, so that the demodulated first and second sound signals can be further analyzed and processed by the controller.
Substep a2 (not shown): and outputting the volume difference value according to the demodulated first sound signal and the demodulated second sound signal.
In the embodiment of the invention, the volume difference is obtained by subtracting the demodulated signals, and it reflects the relative strength of the first and second sound signals. For example, the demodulated second sound signal may be subtracted from the demodulated first sound signal: a positive difference indicates that the first sound signal is stronger than the second, and the larger the difference, the stronger it is; a negative difference indicates that the second sound signal is stronger than the first, and the more negative the difference, the stronger it is.
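Sub-steps A1 and A2 could be sketched as below, assuming the DSP stage has already produced digital sample arrays; only the signed volume difference is computed here, and expressing levels in dB is an illustrative choice rather than anything the patent specifies.

```python
import math

# Sketch of sub-steps A1/A2: compute a signed volume difference from
# two already-demodulated digital signals. The dB formulation is an
# illustrative assumption.

def level_db(samples):
    """Root-mean-square level of a digital signal, in dB."""
    rms = math.sqrt(sum(x * x for x in samples) / len(samples))
    return 20.0 * math.log10(max(rms, 1e-12))  # guard against log(0)

def signed_volume_difference(first, second):
    """Positive: first signal is stronger; negative: second is stronger."""
    return level_db(first) - level_db(second)
```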
In a specific application, the volume difference may optionally be amplified and output as a differential signal. Because the differential signal has a larger magnitude than the raw volume difference, analyzing the differential signal makes the difference between the first and second sound signals more apparent.
In an optional implementation manner of the embodiment of the present invention, after the volume difference is obtained, whether a preset operation is received may be determined according to the volume difference and a preset signal threshold.
In the embodiment of the invention, whether a preset operation has been received is determined by comparing the volume difference with a preset signal threshold. In a specific application, the preset operation may be tapping near one of the two microphones, or making a loud sound near it, so that the signals collected by the first and second microphones differ markedly. When the volume difference is greater than a preset positive signal threshold or less than a preset negative signal threshold, the preset operation is considered received. Conversely, if the volume difference lies between the negative and positive thresholds, the signals collected by the two microphones do not differ significantly, and it is determined that no preset operation has been received.
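The two-sided threshold test just described might look like this; the threshold magnitudes are illustrative assumptions.

```python
# Two-sided threshold test: a preset operation is judged received only
# when the volume difference lies outside the band between the two
# thresholds. The values 6.0 / -6.0 are assumptions.

POSITIVE_THRESHOLD = 6.0   # assumed bias toward the first microphone
NEGATIVE_THRESHOLD = -6.0  # assumed bias toward the second microphone

def preset_operation_received(volume_difference):
    """True when the difference exceeds the positive threshold or
    falls below the negative one."""
    return (volume_difference > POSITIVE_THRESHOLD
            or volume_difference < NEGATIVE_THRESHOLD)
```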
In practical applications, tapping the mobile terminal is a touch operation that is simple to perform and easy to recognize, and a tap is accompanied by jitter of the terminal; that jitter can be used to determine more accurately whether the user intends to interact. Therefore, after a target jitter signal is collected (step 204) and the collection times are recorded (step 205), the determination of whether a preset operation has been received takes both the volume difference and the jitter information into account (step 206). Specifically:
step 204: and collecting a target jitter signal.
In the embodiment of the present invention, consider a user who wants to interact with the mobile terminal through a tapping operation: if a tapping sound is collected by the first or second microphone but the tap was not actually aimed at either microphone, a false determination could result. Collecting the target jitter signal and using it as an additional condition in the subsequent interaction judgment therefore improves interaction accuracy.
In a specific application, the target jitter signal may be the jitter signal produced when the mobile terminal is tapped. Tapping the terminal and moving it smoothly generate different jitter signals, and a tapping action is usually accompanied by a sharp jitter peak, so that peak jitter is collected as the target jitter signal.
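A peak-jitter check of the kind described could be sketched as follows; the spike-to-mean ratio test and its threshold of 4 are assumptions, not the patent's method.

```python
# Illustrative peak-jitter check: a tap is assumed to appear as a short
# high-amplitude spike in the acceleration samples, unlike the slowly
# varying trace of smooth movement.

def is_peak_jitter(accel_samples, ratio=4.0):
    """True when the largest sample dominates the mean amplitude,
    i.e. the trace looks like a transient spike."""
    mags = [abs(a) for a in accel_samples]
    mean = sum(mags) / len(mags)
    return mean > 0 and max(mags) / mean > ratio
```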
In a specific application, the target jitter signal can be acquired through an acceleration detection module in the mobile terminal, such as a gyroscope.
Step 205: and recording the jitter collecting time of the target jitter signal and the sound collecting time of the first sound signal and the second sound signal.
In the embodiment of the invention, the jitter collection time of the target jitter signal and the sound collection times of the first and second sound signals are recorded, and whether the jitter collection time and the sound collection time are synchronous is then analyzed. If they are synchronous, a microphone may have been tapped; if not, no microphone was tapped, and when no tapping operation has been received, the preset operation is considered not received. This avoids the misjudgment that would arise if an external sound with a similar peak waveform happened to occur near one of the microphones, greatly improving recognition accuracy.
Step 206: and determining whether a preset operation is received or not according to the volume difference value, a preset signal threshold value, the jitter acquisition time and the sound acquisition time.
In a specific implementation of the embodiment of the present invention, the preset signal threshold includes: presetting a first signal threshold and a second signal threshold; the preset first signal threshold is higher than the preset second signal threshold;
the step of determining whether a preset operation is received according to the volume difference, the preset signal threshold, the jitter collection time and the sound collection time comprises:
when the volume difference is higher than the preset first signal threshold, judging that the first microphone has received a first trigger; when the volume difference is lower than the preset second signal threshold, judging that the second microphone has received a second trigger; when the first trigger and/or the second trigger exists, judging whether the jitter collection time matches the sound collection time; and when the jitter collection time matches the sound collection time, judging that a preset operation has been received.
In the embodiment of the invention, when the volume difference is higher than the preset first signal threshold, the first microphone has received a first trigger, which may specifically be a tapping trigger; when the volume difference is lower than the preset second signal threshold, the second microphone has received a second trigger, which may likewise be a tapping trigger. When the first and/or second trigger exists, whether the jitter collection time matches the sound collection time is further judged; if they match, the corresponding microphone has received a tapping operation, and the mobile terminal has received a preset operation. For example, fig. 3 shows the correspondence between the first sound signal collected by the first microphone and the jitter signal collected by the acceleration detection module when the first microphone receives a triple tap.
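Combining the trigger thresholds with the time-matching judgment, a sketch could look like this; the 50 ms matching tolerance is an assumed value.

```python
# First/second trigger thresholds combined with the time-matching
# judgment. The matching tolerance is an illustrative assumption.

MATCH_TOLERANCE_S = 0.05  # assumed: jitter and sound times must agree within 50 ms

def times_match(jitter_time, sound_time, tolerance=MATCH_TOLERANCE_S):
    return abs(jitter_time - sound_time) <= tolerance

def tap_confirmed(volume_difference, first_threshold, second_threshold,
                  jitter_time, sound_time):
    """A preset operation is judged received only when a first or
    second trigger exists AND the jitter collection time matches the
    sound collection time."""
    triggered = (volume_difference > first_threshold       # first trigger
                 or volume_difference < second_threshold)  # second trigger
    return triggered and times_match(jitter_time, sound_time)
```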
In the embodiment of the invention, a dual-microphone design is used. When the user taps near one of the microphones, the terminal amplifies the sound signals received by the first and second microphones, judges the signals against thresholds and timing, and synchronously checks the jitter data from the acceleration sensor, thereby completing the user interaction. This eliminates noise from the external environment, improves the signal-to-noise ratio, and, by adding the judgment of the acceleration detection module, avoids false recognition caused by mere sound attack or interference.
In practical use, as shown in fig. 4, an exemplary mobile terminal places a first microphone 10 (a primary microphone) at the bottom of the terminal and a second microphone 20 (a secondary microphone) at the top. Simply by tapping the top or bottom of the terminal, the user can trigger the instruction corresponding to that tap position, as if pressing a key. The operation is very simple and particularly effective for emergency use in special situations, so user experience can be greatly improved.
Step 207: and under the condition that the number of times of the received preset operation in preset time meets a second preset condition, determining a target interactive instruction and executing the target interactive instruction.
In the embodiment of the present invention, the preset time may be chosen according to the actual application scenario; for example, it may be set to a short interval such as 5 seconds. If the user performs several touch operations within that short interval, the user can be considered to want to interact, so the target interaction instruction is determined and executed.
In a specific application, the target interaction instruction may correspond to the second preset condition. For example, when the second preset condition is three operations, the corresponding target interaction instruction may be a preset call-dialing instruction: if the preset operation is received three times within the preset time, the call-dialing instruction is confirmed and executed. Similarly, when the second preset condition is two operations, the corresponding instruction may be a screen-capture instruction, and so on. A person skilled in the art may set the correspondence between the second preset condition and the target interaction instruction according to the actual application scenario, for example mapping one, four or five preset operations within the preset time to instructions such as adjusting the volume, muting, raising an alarm, fast dialing, fast photographing or starting a recording; this is not specifically limited in the embodiment of the present invention.
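The count-and-dispatch logic of step 207 might be sketched as follows; the 5-second window and the count-to-instruction table are illustrative assumptions.

```python
# Sketch of step 207: count preset operations falling inside the
# preset window and dispatch the instruction mapped to that count.
# Window length and mapping are assumptions.

PRESET_WINDOW_S = 5.0
COUNT_TO_INSTRUCTION = {
    2: "screenshot",
    3: "dial_preset_number",
}

def dispatch(tap_timestamps, now):
    """Return the instruction mapped to the number of taps in the
    window ending at `now`, or None when the count maps to nothing."""
    recent = [t for t in tap_timestamps if now - t <= PRESET_WINDOW_S]
    return COUNT_TO_INSTRUCTION.get(len(recent))
```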
The working principle of the interaction method of the embodiment of the present invention is described below, taking as an example a configuration in which the target interaction instruction is determined and executed when three preset operations are received within the preset time.
Suppose analysis of the first sound signal collected by the first microphone and the second sound signal collected by the second microphone shows that three preset operations directed at the second microphone have been received; specifically, the preset operations may be taps, and the second sound signal is markedly stronger than the first. As shown in fig. 5, the sound collection diagram of the second microphone then contains three transient peaks. Correspondingly, the second preset condition and its target interaction instruction may specify that when the second microphone receives the preset operation three times, a first target interaction instruction is determined and executed.
Likewise, suppose it is determined, from the first sound signal collected by the first microphone and the second sound signal collected by the second microphone, that three preset operations corresponding to the first microphone have been received; specifically, the preset operations may be tapping operations, and the first sound signal collected by the first microphone is significantly stronger than the second sound signal collected by the second microphone, so that, as shown in fig. 6, the sound collection diagram corresponding to the first microphone shows 3 transient peaks. The second preset condition and the corresponding target interactive instruction may then be set such that, when the first microphone receives the preset operation 3 times, the second target interactive instruction is determined and executed.
It can be understood that the second preset condition and the corresponding target interaction instruction may also be that a third target interaction instruction is determined and executed when the first microphone receives the preset operation 3 times and the second microphone also receives the preset operation 3 times; the embodiment of the present invention does not limit the specific application scenario.
In the embodiment of the present invention, when a tapping operation is used as the preset operation, the tapping sound data resembles a pulse, producing an instantaneous high amplitude within a short time. It can therefore be distinguished from other sounds arriving at a given microphone, yielding a better interaction effect.
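Since a tapping sound resembles a pulse, counting preset operations can be sketched as counting short transient peaks in the sampled signal. This is a minimal illustration, assuming a normalized sample stream and an arbitrary refractory interval to merge samples belonging to one tap; the embodiment itself does not specify such an algorithm:

```python
def count_transient_peaks(samples, amplitude_threshold, refractory=5):
    """Count pulse-like peaks: samples whose magnitude exceeds the
    threshold, merging samples closer than `refractory` indices
    into a single tap (the refractory length is an assumption)."""
    peaks = 0
    last = -refractory - 1  # index of the previously counted peak
    for i, s in enumerate(samples):
        if abs(s) >= amplitude_threshold and i - last > refractory:
            peaks += 1
            last = i
    return peaks
```

Three well-separated spikes would thus be counted as 3 preset operations, matching the 3 transient peaks shown in the sound collection diagram.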
It can be understood that if the mobile terminal collects a sound signal caused by a collision, ambient noise, or the like, such a sound is usually not directed at one particular microphone, so the sound signals collected by the first microphone and the second microphone do not differ greatly. As shown in fig. 7, both microphones may show 3 sharp peaks while the corresponding differential signal remains small, and the analysis submodule can therefore conclude that no preset operation has been received, so no erroneous interaction is caused. Thus, the embodiment of the present invention can not only suppress interference from noise in the external environment, but also avoid system misjudgment caused by externally generated tap-like sounds; for example, tapping a certain part of the mobile terminal can be distinguished from scenarios such as the mobile terminal being bumped or dropped, thereby achieving accurate interaction.
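The noise-rejection idea above, where a sound exciting both microphones similarly produces only a small differential signal and is therefore ignored, can be sketched as follows (the function name, level values, and threshold are illustrative assumptions):

```python
def is_directed_tap(level_mic1, level_mic2, diff_threshold):
    """A preset operation is assumed only when one microphone is
    significantly louder than the other; collisions and ambient
    noise excite both microphones similarly, so the difference
    stays below the threshold and no interaction is triggered."""
    return abs(level_mic1 - level_mic2) >= diff_threshold
```

A tap next to one microphone gives a large level gap and passes the check, while a knock against the terminal body, heard almost equally by both microphones, does not.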
In the embodiment of the present invention, the target interaction instruction can be determined from the volume difference between the sound signals collected by the two microphones and then executed. Interaction with the mobile terminal can therefore be achieved accurately and quickly merely by analyzing the sound signals collected by the two microphones, without adding a physical key or performing a sequence of on-screen operations. Specifically, in practical applications, after a user produces a sound, the first sound signal is collected by the first microphone and the second sound signal is collected by the second microphone. Because sound propagation attenuates with distance and other factors, a volume difference exists between the two collected signals. When this volume difference satisfies the first preset condition, the user can be considered to intend to interact with the mobile terminal, so the target interaction instruction is determined and executed, realizing quick interaction between the user and the mobile terminal.
Referring to fig. 8, a block diagram of an interactive apparatus 300 according to an embodiment of the present invention is shown. The apparatus specifically includes the following modules:
a sound signal obtaining module 310, configured to obtain a first sound signal through a first microphone and obtain a second sound signal through a second microphone;
a volume difference identification module 320, configured to identify a volume difference between the first sound signal and the second sound signal;
the interactive instruction executing module 330 is configured to determine a target interactive instruction and execute the target interactive instruction when the volume difference satisfies a first preset condition.
In the embodiment of the present invention, the interactive apparatus 300 can achieve the same technical effects as the method embodiment described above; to avoid repetition, they are not described herein again.
Alternatively, referring to fig. 9, on the basis of fig. 8, in the apparatus:
the volume difference value identification module 320 includes:
a signal processing submodule 3201 configured to demodulate the first sound signal and the second sound signal into digital signals;
a difference amplification submodule 3202, configured to output the volume difference according to the demodulated first sound signal and the demodulated second sound signal;
the interactive instruction execution module 330 includes:
the analysis submodule 3301 is configured to determine whether a preset operation is received according to the volume difference and a preset signal threshold;
the interactive instruction execution sub-module 3302 is configured to determine the target interactive instruction when the number of times of the preset operation received within the preset time meets a second preset condition.
Optionally, the apparatus further comprises:
the acceleration detection module 340 is configured to acquire a target jitter signal of the interaction apparatus;
a timing module 350, configured to record a jitter collecting time of the target jitter signal and sound collecting times of the first sound signal and the second sound signal;
the analysis submodule includes:
and the analysis unit is used for determining whether a preset operation is received or not according to the volume difference value, a preset signal threshold value, the jitter acquisition time and the sound acquisition time.
Optionally, the preset signal threshold includes: presetting a first signal threshold and a second signal threshold; the analysis unit includes:
the first judging subunit is configured to judge that the first microphone receives a first trigger when the volume difference is higher than the preset first signal threshold;
the second judging subunit is configured to judge that the second microphone receives a second trigger when the volume difference is lower than the preset second signal threshold;
the third judging subunit is configured to judge whether the jitter collecting time matches the sound collecting time in the presence of the first trigger and/or the second trigger;
and the fourth judging subunit is used for judging that a preset operation is received under the condition that the jitter collecting time is matched with the sound collecting time.
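The four judging subunits above can be sketched as a two-threshold classification followed by a time-match check. The threshold values, the tolerance `max_skew`, and all function names are illustrative assumptions, not taken from the embodiment:

```python
def classify_trigger(volume_diff, first_threshold, second_threshold):
    """Two-threshold judgment: a difference above the preset first
    signal threshold means the first microphone received a first
    trigger; below the preset second signal threshold means the
    second microphone received a second trigger."""
    if volume_diff > first_threshold:
        return "first_mic"
    if volume_diff < second_threshold:
        return "second_mic"
    return None

def preset_operation_received(volume_diff, first_threshold,
                              second_threshold, jitter_time,
                              sound_time, max_skew=0.05):
    """A preset operation is judged received only when a trigger
    exists AND the jitter-acquisition time recorded for the target
    jitter signal matches the sound-acquisition time (the matching
    tolerance is an assumed value, in seconds)."""
    trigger = classify_trigger(volume_diff, first_threshold,
                               second_threshold)
    return trigger is not None and abs(jitter_time - sound_time) <= max_skew
```

Requiring the accelerometer's jitter time to coincide with the sound time filters out loud sounds that did not physically shake the terminal.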
Optionally, the apparatus further comprises:
the analog switch module 360 is configured to determine whether a first instruction is received; and entering the volume difference value identification module under the condition that the first instruction is received.
The technical effects of the above optional modules are the same as those of the corresponding method embodiment described above; to avoid repetition, they are not described herein again.
It is understood that the steps of the embodiment of the apparatus are described in the embodiment of the method, and are not described herein again.
It should be noted that the foregoing embodiments are described as a series of acts or combinations for simplicity in explanation, but it should be understood by those skilled in the art that the present invention is not limited by the order of acts or acts described, as some steps may occur in other orders or concurrently in accordance with the invention. Further, those skilled in the art will appreciate that the embodiments described in the specification are presently preferred and that no particular act is required to implement the invention.
Fig. 10 is a schematic diagram of a hardware structure of a mobile terminal implementing various embodiments of the present invention.
The mobile terminal 500 includes, but is not limited to: a radio frequency unit 501, a network module 502, an audio output unit 503, an input unit 504, a sensor 505, a display unit 506, a user input unit 507, an interface unit 508, a memory 509, a processor 510, and a power supply 511. Those skilled in the art will appreciate that the mobile terminal architecture illustrated in fig. 10 is not intended to be limiting of mobile terminals, and that a mobile terminal may include more or fewer components than those illustrated, or some components may be combined, or a different arrangement of components. In the embodiment of the present invention, the mobile terminal includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
The processor 510 is configured to obtain a first sound signal through a first microphone and obtain a second sound signal through a second microphone; identifying a volume difference of the first sound signal and the second sound signal; and under the condition that the volume difference value meets a first preset condition, determining a target interactive instruction and executing the target interactive instruction.
In the embodiment of the present invention, the mobile terminal 500 can achieve the same technical effects as the method embodiment described above; to avoid repetition, they are not described herein again.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 501 may be used for receiving and sending signals during message transmission and reception or during a call; specifically, it receives downlink data from a base station and forwards it to the processor 510 for processing, and it transmits uplink data to the base station. In general, the radio frequency unit 501 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 501 can also communicate with a network and other devices through a wireless communication system.
The mobile terminal provides the user with wireless broadband internet access through the network module 502, such as helping the user send and receive e-mails, browse webpages, access streaming media, and the like.
The audio output unit 503 may convert audio data received by the radio frequency unit 501 or the network module 502 or stored in the memory 509 into an audio signal and output as sound. Also, the audio output unit 503 may also provide audio output related to a specific function performed by the mobile terminal 500 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 503 includes a speaker, a buzzer, a receiver, and the like.
The input unit 504 is used to receive an audio or video signal. The input unit 504 may include a Graphics Processing Unit (GPU) 5041 and a microphone 5042. The graphics processor 5041 processes image data of still pictures or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 506. The image frames processed by the graphics processor 5041 may be stored in the memory 509 (or another storage medium) or transmitted via the radio frequency unit 501 or the network module 502. The microphone 5042 may receive sounds and process them into audio data. In the telephone call mode, the processed audio data may be converted into a format transmittable to a mobile communication base station via the radio frequency unit 501 and output.
The mobile terminal 500 also includes at least one sensor 505, such as a light sensor, motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that adjusts the brightness of the display panel 5061 according to the brightness of ambient light, and a proximity sensor that turns off the display panel 5061 and/or a backlight when the mobile terminal 500 is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the posture of the mobile terminal (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), and vibration identification related functions (such as pedometer, tapping); the sensors 505 may also include fingerprint sensors, pressure sensors, iris sensors, molecular sensors, gyroscopes, barometers, hygrometers, thermometers, infrared sensors, etc., which are not described in detail herein.
The display unit 506 is used to display information input by the user or information provided to the user. The Display unit 506 may include a Display panel 5061, and the Display panel 5061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 507 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the mobile terminal. Specifically, the user input unit 507 includes a touch panel 5071 and other input devices 5072. The touch panel 5071, also referred to as a touch screen, may collect touch operations by a user on or near it (e.g., operations performed on or near the touch panel 5071 using a finger, a stylus, or any suitable object or attachment). The touch panel 5071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the position of a user's touch, detects the signal generated by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 510, and receives and executes commands sent by the processor 510. In addition, the touch panel 5071 may be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch panel 5071, the user input unit 507 may include other input devices 5072. Specifically, the other input devices 5072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, a switch key, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein.
Further, the touch panel 5071 may be overlaid on the display panel 5061, and when the touch panel 5071 detects a touch operation thereon or nearby, the touch operation is transmitted to the processor 510 to determine the type of the touch event, and then the processor 510 provides a corresponding visual output on the display panel 5061 according to the type of the touch event. Although in fig. 10, the touch panel 5071 and the display panel 5061 are two independent components to implement the input and output functions of the mobile terminal, in some embodiments, the touch panel 5071 and the display panel 5061 may be integrated to implement the input and output functions of the mobile terminal, and is not limited herein.
The interface unit 508 is an interface through which external devices are connected to the mobile terminal 500. For example, the external devices may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 508 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the mobile terminal 500, or may be used to transmit data between the mobile terminal 500 and an external device.
The memory 509 may be used to store software programs as well as various data. The memory 509 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Further, the memory 509 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid-state storage device.
The processor 510 is a control center of the mobile terminal, connects various parts of the entire mobile terminal using various interfaces and lines, and performs various functions of the mobile terminal and processes data by operating or executing software programs and/or modules stored in the memory 509 and calling data stored in the memory 509, thereby performing overall monitoring of the mobile terminal. Processor 510 may include one or more processing units; preferably, the processor 510 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 510.
The mobile terminal 500 may further include a power supply 511 (e.g., a battery) for supplying power to various components, and preferably, the power supply 511 may be logically connected to the processor 510 via a power management system, so that functions of managing charging, discharging, and power consumption are performed via the power management system.
In addition, the mobile terminal 500 includes some functional modules that are not shown, and thus, are not described in detail herein.
Preferably, an embodiment of the present invention further provides a mobile terminal, which includes a processor 510, a memory 509, and a computer program that is stored in the memory 509 and can be run on the processor 510. When the computer program is executed by the processor 510, the processes of the above-described interaction method embodiment are implemented, and the same technical effect can be achieved; to avoid repetition, details are not described here again.
The embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements each process of the interaction method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or mobile terminal that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or mobile terminal. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or mobile terminal that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (11)

1. An interactive method, characterized in that the method comprises:
acquiring a first sound signal through a first microphone and acquiring a second sound signal through a second microphone; the first microphone is arranged at the bottom end of the mobile terminal, and the second microphone is arranged at the top end of the mobile terminal;
identifying a volume difference of the first sound signal and the second sound signal;
and under the condition that the volume difference value meets a first preset condition, determining a target interactive instruction and executing the target interactive instruction.
2. The method of claim 1, wherein the identifying a volume difference for the first sound signal and the second sound signal comprises:
demodulating the first sound signal and the second sound signal into digital signals;
outputting the volume difference value according to the demodulated first sound signal and the demodulated second sound signal;
under the condition that the volume difference value meets a first preset condition, determining a target interaction instruction comprises the following steps:
determining whether a preset operation is received or not according to the volume difference and a preset signal threshold;
and under the condition that the number of times of the received preset operation in preset time meets a second preset condition, determining a target interaction instruction.
3. The method of claim 2, wherein determining whether a preset operation is received according to the volume difference and a preset signal threshold further comprises:
collecting a target jitter signal;
recording the jitter acquisition time of the target jitter signal and the sound acquisition time of the first sound signal and the second sound signal;
the determining whether a preset operation is received according to the volume difference and a preset signal threshold includes:
and determining whether a preset operation is received or not according to the volume difference value, a preset signal threshold value, the jitter acquisition time and the sound acquisition time.
4. The method of claim 3, wherein the pre-set signal threshold comprises: presetting a first signal threshold and a second signal threshold; the preset first signal threshold is higher than the preset second signal threshold;
the determining whether a preset operation is received according to the volume difference value and a preset signal threshold value, and the jitter collecting time and the sound collecting time comprises:
under the condition that the volume difference value is higher than the preset first signal threshold value, judging that the first microphone receives a first trigger;
under the condition that the volume difference value is lower than the preset second signal threshold value, judging that the second microphone receives a second trigger;
judging whether the jitter acquisition time is matched with the sound acquisition time or not under the condition that the first trigger exists and/or the second trigger exists;
and under the condition that the jitter acquisition time is matched with the sound acquisition time, judging that a preset operation is received.
5. The method of claim 2, 3 or 4, wherein prior to identifying the volume difference between the first sound signal and the second sound signal, further comprising:
judging whether a first instruction is received or not;
and executing the step of identifying the volume difference value of the first sound signal and the second sound signal when the first instruction is received.
6. An interactive apparatus, characterized in that the apparatus comprises:
the sound signal acquisition module is used for acquiring a first sound signal through a first microphone and acquiring a second sound signal through a second microphone; the first microphone is arranged at the bottom end of the mobile terminal, and the second microphone is arranged at the top end of the mobile terminal;
a volume difference value identification module for identifying the volume difference value of the first sound signal and the second sound signal;
and the interactive instruction execution module is used for determining a target interactive instruction and executing the target interactive instruction under the condition that the volume difference value meets a first preset condition.
7. The apparatus of claim 6, wherein the volume difference identification module comprises:
a signal processing submodule for demodulating the first sound signal and the second sound signal into digital signals;
the difference amplification submodule is used for outputting the volume difference value according to the demodulated first sound signal and the demodulated second sound signal;
the interactive instruction execution module comprises:
the analysis submodule is used for determining whether a preset operation is received or not according to the volume difference value and a preset signal threshold;
and the interactive instruction execution submodule is used for determining a target interactive instruction under the condition that the number of times of the received preset operation in the preset time meets a second preset condition.
8. The apparatus of claim 7, further comprising:
the acceleration detection module is used for acquiring a target jitter signal of the interaction device;
the timing module is used for recording the jitter acquisition time of the target jitter signal and the sound acquisition time of the first sound signal and the second sound signal;
the analysis submodule includes:
and the analysis unit is used for determining whether a preset operation is received or not according to the volume difference value, a preset signal threshold value, the jitter acquisition time and the sound acquisition time.
9. The apparatus of claim 8, wherein the preset signal threshold comprises: presetting a first signal threshold and a second signal threshold; the analysis unit includes:
the first judging subunit is configured to judge that the first microphone receives a first trigger when the volume difference is higher than the preset first signal threshold;
the second judging subunit is configured to judge that the second microphone receives a second trigger when the volume difference is lower than the preset second signal threshold;
the third judging subunit is configured to judge whether the jitter collecting time matches the sound collecting time in the presence of the first trigger and/or the second trigger;
and the fourth judging subunit is used for judging that a preset operation is received under the condition that the jitter collecting time is matched with the sound collecting time.
10. The apparatus of claim 7, 8 or 9, further comprising:
the analog switch module is used for judging whether a first instruction is received or not; and entering the volume difference value identification module under the condition that the first instruction is received.
11. A mobile terminal, characterized in that it comprises an interaction device according to any one of claims 6 to 10.
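Read together, claims 8 and 9 describe a two-stage check: the volume difference between the two microphones selects which side was tapped, and the accelerometer (jitter) timestamp must coincide with the sound timestamp before a preset operation is accepted. A minimal Python sketch follows; the threshold values, the timing tolerance, and all names are illustrative assumptions, not figures from the patent.

```python
# Hypothetical sketch of the analysis unit of claims 8-9.
# The dB thresholds and the timing tolerance (seconds) are assumed.

def detect_preset_operation(vol_first_mic, vol_second_mic,
                            jitter_time, sound_time,
                            first_threshold=10.0, second_threshold=-10.0,
                            max_time_gap=0.05):
    """Return "first" or "second" for the triggered microphone, or None."""
    volume_difference = vol_first_mic - vol_second_mic

    first_trigger = volume_difference > first_threshold    # tap near mic 1
    second_trigger = volume_difference < second_threshold  # tap near mic 2
    if not (first_trigger or second_trigger):
        return None  # difference sits between the two thresholds: no tap

    # Claim 9's time-matching step: a real tap also shakes the device,
    # so the jitter time must match the sound time; this rejects loud
    # sounds that did not physically strike the terminal.
    if abs(jitter_time - sound_time) > max_time_gap:
        return None

    return "first" if first_trigger else "second"
```

The time-matching step is what distinguishes this scheme from plain loudness detection: a nearby clap produces a volume difference but no matching jitter signal, so it is discarded.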
CN201910143207.7A 2019-02-26 2019-02-26 Interaction method, interaction device and mobile terminal Active CN109994111B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910143207.7A CN109994111B (en) 2019-02-26 2019-02-26 Interaction method, interaction device and mobile terminal

Publications (2)

Publication Number Publication Date
CN109994111A CN109994111A (en) 2019-07-09
CN109994111B true CN109994111B (en) 2021-11-23

Family

ID=67130527

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910143207.7A Active CN109994111B (en) 2019-02-26 2019-02-26 Interaction method, interaction device and mobile terminal

Country Status (1)

Country Link
CN (1) CN109994111B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113566399B (en) * 2020-04-28 2022-09-20 广东美的制冷设备有限公司 Control method and device of air conditioner and air conditioner
CN112333534B (en) * 2020-09-17 2023-11-14 深圳Tcl新技术有限公司 Noise elimination method and device, intelligent television system and readable storage medium
CN113009960B (en) * 2021-02-03 2022-12-09 上海橙捷健康科技有限公司 Time synchronization method for camera image data and pressure treadmill data
CN113934150A (en) * 2021-10-18 2022-01-14 交互未来(北京)科技有限公司 Method and device for controlling intelligent household appliance and electronic equipment
CN113918020A (en) * 2021-10-20 2022-01-11 北京小雅星空科技有限公司 Intelligent interaction method and related device

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2012148770A2 (en) * 2011-04-28 2012-11-01 United Video Properties, Inc. Systems and methods for deducing user information from input device behavior
CN105516435A (en) * 2015-12-01 2016-04-20 广东小天才科技有限公司 Method and device for adjusting volume of mobile terminal
JP2016178497A (en) * 2015-03-20 2016-10-06 カシオ計算機株式会社 Reproduction device and program
CN107146613A (en) * 2017-04-10 2017-09-08 北京猎户星空科技有限公司 Voice interaction method and device
JP2017175456A (en) * 2016-03-24 2017-09-28 ヤマハ株式会社 Signal processing apparatus
CN107220021A (en) * 2017-05-16 2017-09-29 北京小鸟看看科技有限公司 Voice input recognition method and device, and headset device
CN107241642A (en) * 2017-07-28 2017-10-10 维沃移动通信有限公司 Playing method and terminal
EP3312718A1 (en) * 2016-10-20 2018-04-25 Nokia Technologies OY Changing spatial audio fields
CN108345442A (en) * 2018-01-18 2018-07-31 维沃移动通信有限公司 Operation recognition method and mobile terminal
CN108650392A (en) * 2018-04-24 2018-10-12 维沃移动通信有限公司 Call recording method and mobile terminal

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001147919A (en) * 1999-11-24 2001-05-29 Sharp Corp Device and method for processing voice and storage medium to be utilized therefor
KR101605347B1 (en) * 2009-12-18 2016-03-22 삼성전자주식회사 Method and apparatus for controlling external output of a portable terminal
EP2860726B1 (en) * 2011-12-30 2017-12-06 Samsung Electronics Co., Ltd Electronic apparatus and method of controlling electronic apparatus
CN105183245B (en) * 2015-08-31 2019-07-26 联想(北京)有限公司 Information processing method and electronic device
CN105677192A (en) * 2016-02-29 2016-06-15 珠海市魅族科技有限公司 Control method and control device of mobile terminal
CN105847470B (en) * 2016-03-27 2018-11-27 深圳市润雨投资有限公司 Head-mounted fully voice-controlled mobile phone
WO2018090252A1 (en) * 2016-11-16 2018-05-24 深圳达闼科技控股有限公司 Voice instruction recognition method for robot, and related robot device
CN109167884A (en) * 2018-10-31 2019-01-08 维沃移动通信有限公司 Service method and device based on user speech


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Ana Tajadura-Jiménez; Nadia Bianchi-Berthouze; Enrico Furfaro. Sonification of Surface Tapping Changes Behavior, Surface Perception, and Emotion. IEEE MultiMedia. 2015, Vol. 22, No. 1. *
Sakeson Yanpanyanon; Thongthai Wongwichai; Takamitsu Tanaka. Joint, space and volume study by interactive cube puzzle. IEEE 2018 International Workshop on Advanced Image Technology (IWAIT). 2018, full text. *
Wang Zhongbao (王中宝). Design research on gesture interaction in touch-screen mobile phones. China Master's Theses Full-text Database, Information Science and Technology. 2013, No. S1, full text. *

Similar Documents

Publication Publication Date Title
CN109994111B (en) Interaction method, interaction device and mobile terminal
CN110740259B (en) Video processing method and electronic equipment
CN109078319B (en) Game interface display method and terminal
CN108459797B (en) Control method of folding screen and mobile terminal
CN108989672B (en) Shooting method and mobile terminal
CN110913139B (en) Photographing method and electronic equipment
CN108307106B (en) Image processing method and device and mobile terminal
CN108881617B (en) Display switching method and mobile terminal
CN110012143B (en) Telephone receiver control method and terminal
CN108196815B (en) Method for adjusting call sound and mobile terminal
CN109412932B (en) Screen capturing method and terminal
CN109618218B (en) Video processing method and mobile terminal
CN109922294B (en) Video processing method and mobile terminal
CN111401463A (en) Method for outputting detection result, electronic device, and medium
CN111405181B (en) Focusing method and electronic equipment
CN107277364B (en) Shooting method, mobile terminal and computer readable storage medium
CN108762641B (en) Text editing method and terminal equipment
CN108388459B (en) Message display processing method and mobile terminal
CN110572600A (en) video processing method and electronic equipment
CN108093119B (en) Strange incoming call number marking method and mobile terminal
CN109240531B (en) Touch data sampling compensation method and device, mobile terminal and storage medium
CN110764650A (en) Key trigger detection method and electronic equipment
CN107743174B (en) Clipping judgment method of sound signal and mobile terminal
CN110769153B (en) Image processing method and electronic equipment
CN110536009B (en) Communication establishing method and mobile terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant