CN105469794A - Information processing method and electronic equipment - Google Patents

Information processing method and electronic equipment

Info

Publication number
CN105469794A
Authority
CN
China
Prior art keywords
information
audio
electronic equipment
sub
semantics
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510894668.XA
Other languages
Chinese (zh)
Other versions
CN105469794B (en)
Inventor
陈佳子
鲁希达
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lenovo Beijing Ltd filed Critical Lenovo Beijing Ltd
Priority to CN201510894668.XA priority Critical patent/CN105469794B/en
Publication of CN105469794A publication Critical patent/CN105469794A/en
Application granted granted Critical
Publication of CN105469794B publication Critical patent/CN105469794B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 - Speech recognition
    • G10L 15/26 - Speech to text systems
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 - Speech recognition
    • G10L 15/22 - Procedures used during a speech recognition process, e.g. man-machine dialogue

Landscapes

  • Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses an information processing method and an electronic device. The method comprises the steps of: detecting whether a first output attribute of audio data of the electronic device meets a first predetermined condition, and generating a first detection result; when the first detection result indicates that the first output attribute of the audio data of the electronic device meets the first predetermined condition, collecting first environmental information of the environment in which the electronic device is located; acquiring first audio information from the first environmental information; parsing the first audio information to obtain a first analysis result; and when the first analysis result meets a second predetermined condition, controlling the electronic device to output first prompt information. The electronic device can thus automatically prompt the user that its audio output is loud, which improves the user experience and highlights the functional diversity and user-friendliness of the electronic device.

Description

Information processing method and electronic equipment
Technical field
The present invention relates to the field of information processing, and in particular to an information processing method and an electronic device.
Background technology
When an electronic device such as a mobile phone, a tablet computer (PAD) or an all-in-one computer is outputting audio, the following problem often arises: user 1, going by his or her own listening experience, turns the audio output up rather loud, with the result that user 1 cannot clearly hear what user 2 is saying, for example user 2 reminding user 1 that the current volume is too high, and user 2 can only make user 1 hear the reminder by continually raising his or her speaking volume. If user 2's continual raising of the speaking volume is regarded as a manual prompting method, this manual prompting method is of poor feasibility and does not give the user a good experience of the electronic device.
Summary of the invention
To solve the existing technical problem, embodiments of the present invention provide an information processing method and an electronic device, whereby the electronic device can automatically prompt the user that its audio output is loud, improving the user experience and highlighting the functional diversity and user-friendliness of the electronic device.
The technical solution of the embodiments of the present invention is implemented as follows.
An embodiment of the present invention provides an information processing method applied to an electronic device, the method comprising:
detecting whether a first output attribute of audio data of the electronic device meets a first predetermined condition, and generating a first detection result;
when the first detection result indicates that the first output attribute of the audio data of the electronic device meets the first predetermined condition, collecting first environmental information of the environment in which the electronic device is located;
acquiring first audio information from the first environmental information;
parsing the first audio information to obtain a first analysis result;
when the first analysis result meets a second predetermined condition, controlling the electronic device to output first prompt information.
In the above solution, the first environmental information contains at least first audio information and second audio information;
the acquiring of the first audio information from the first environmental information comprises:
separating, from the first environmental information, the information having a first predetermined characteristic;
determining the separated information to be the first audio information.
In the above solution, the parsing of the first audio information to obtain the first analysis result comprises:
obtaining semantic information of the first audio information;
dividing the semantic information to obtain at least one piece of sub-semantic information;
selecting, from the at least one piece of sub-semantic information, the sub-semantic information that satisfies a first predetermined rule;
taking the selected sub-semantic information as the first analysis result.
In the above solution, the method further comprises:
dividing the first audio information into words according to a preset lexicon to obtain N sub-words;
determining the types of the N sub-words;
selecting, from the N sub-words, M sub-words that belong to a first type;
determining the set of the selected M sub-words to be the first analysis result;
where M and N are positive integers and M ≤ N.
In the above solution, the method further comprises:
comparing the similarity between the selected sub-semantic information and first predetermined information;
when the similarity between the selected sub-semantic information and the first predetermined information falls within a first similarity range, determining that the first analysis result meets the second predetermined condition.
In the above solution, before the obtaining of the semantic information of the first audio information, the method further comprises:
performing speech recognition on the first audio information to obtain first text information and/or second text information, the first text information being the text obtained by performing speech recognition on the full content of the first audio information, and the second text information being the text obtained by performing speech recognition on part of the content of the first audio information;
correspondingly, the obtaining of the semantic information of the first audio information comprises: obtaining semantic information of the first text information and/or the second text information.
An embodiment of the present invention further provides an electronic device, the electronic device comprising:
a first detection unit, configured to detect whether a first output attribute of audio data of the electronic device meets a first predetermined condition, and to generate a first detection result;
a first collection unit, configured to collect first environmental information of the environment in which the electronic device is located when the first detection result indicates that the first output attribute of the audio data of the electronic device meets the first predetermined condition;
a first acquisition unit, configured to acquire first audio information from the first environmental information;
a first parsing unit, configured to parse the first audio information to obtain a first analysis result;
a first control unit, configured to control the electronic device to output first prompt information when the first analysis result meets a second predetermined condition.
In the above solution, the first acquisition unit is configured to:
separate, from the first environmental information, the information having a first predetermined characteristic;
determine the separated information to be the first audio information, the first environmental information containing at least the first audio information and second audio information.
In the above solution, the first parsing unit is configured to:
obtain semantic information of the first audio information;
divide the semantic information to obtain at least one piece of sub-semantic information;
select, from the at least one piece of sub-semantic information, the sub-semantic information that satisfies a first predetermined rule;
take the selected sub-semantic information as the first analysis result.
In the above solution, the first parsing unit is specifically configured to:
divide the first audio information into words according to a preset lexicon to obtain N sub-words;
determine the types of the N sub-words;
select, from the N sub-words, M sub-words that belong to a first type;
determine the set of the selected M sub-words to be the first analysis result;
where M and N are positive integers and M ≤ N.
In the above solution, the first control unit is further configured to:
compare the similarity between the selected sub-semantic information and first predetermined information;
determine that the first analysis result meets the second predetermined condition when the similarity between the selected sub-semantic information and the first predetermined information falls within a first similarity range.
In the above solution, the electronic device further comprises a first recognition unit configured to:
perform speech recognition on the first audio information to obtain first text information and/or second text information, the first text information being the text obtained by performing speech recognition on the full content of the first audio information, and the second text information being the text obtained by performing speech recognition on part of the content of the first audio information;
correspondingly, the first acquisition unit is configured to obtain semantic information of the first text information and/or the second text information.
In the information processing method and electronic device provided by the embodiments of the present invention, the method comprises: detecting whether a first output attribute of audio data of the electronic device meets a first predetermined condition, and generating a first detection result; when the first detection result indicates that the first output attribute of the audio data of the electronic device meets the first predetermined condition, collecting first environmental information of the environment in which the electronic device is located; acquiring first audio information from the first environmental information; parsing the first audio information to obtain a first analysis result; and when the first analysis result meets a second predetermined condition, controlling the electronic device to output first prompt information. The electronic device can thus automatically prompt the user that its audio output is loud, which improves the user experience and highlights the functional diversity and user-friendliness of the electronic device.
Brief description of the drawings
Fig. 1 is a schematic flowchart of the first embodiment of the information processing method provided by the present invention;
Fig. 2 is a schematic flowchart of the second embodiment of the information processing method provided by the present invention;
Fig. 3 is a schematic structural diagram of the first embodiment of the electronic device provided by the present invention;
Fig. 4 is a schematic structural diagram of the second embodiment of the electronic device provided by the present invention.
Detailed description of the embodiments
Preferred embodiments of the present invention are described in detail below with reference to the accompanying drawings. It should be understood that the preferred embodiments described below are intended only to illustrate and explain the present invention, and are not intended to limit it.
In the embodiments of the information processing method and the electronic device provided by the present invention, the electronic device involved includes, but is not limited to, industrial control computers, personal computers, all-in-one computers, tablet computers, mobile phones, e-readers and the like. The electronic device may also be a wearable electronic device such as smart glasses, smart gloves, a smart watch, a smart bracelet or smart clothing. The preferred electronic device in the embodiments of the present invention is a mobile phone.
The first embodiment of the information processing method provided by the present invention is applied to an electronic device having an audio output unit, and the audio output unit may be a loudspeaker and/or an external speaker.
Fig. 1 is a schematic flowchart of the first embodiment of the information processing method provided by the present invention; as shown in Fig. 1, the method comprises the following steps.
Step 101: detect whether a first output attribute of the audio data of the electronic device meets a first predetermined condition, and generate a first detection result.
Here, the first output attribute is the output volume of the audio output unit, and the first predetermined condition is that the output volume of the audio output unit reaches a predetermined first threshold. That is, it is judged whether the output volume of the electronic device reaches the first threshold, for example 10 dB; the first threshold may also take other values and can be set flexibly according to the actual situation.
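Purely as an illustration (the patent does not prescribe any particular implementation), step 101 could be sketched as follows. The 10 dB figure comes from the example above, while the `read_output_volume_db` hook and the function name are hypothetical.

```python
# Minimal sketch of step 101: compare the current output volume of the audio
# output unit with a configurable first threshold and produce the first
# detection result. read_output_volume_db is a hypothetical hook that would
# query the device's audio driver or mixer.

FIRST_THRESHOLD_DB = 10.0  # example value from the description; device-specific in practice


def first_detection_result(read_output_volume_db) -> bool:
    """True when the first output attribute (output volume) meets the
    first predetermined condition (reaches the first threshold)."""
    return read_output_volume_db() >= FIRST_THRESHOLD_DB


# Example usage with a stubbed volume reading of 12.5 dB:
if first_detection_result(lambda: 12.5):
    print("First predetermined condition met: collect environmental audio")
```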
Step 102: when the first detection result indicates that the first output attribute of the audio data of the electronic device meets the first predetermined condition, collect first environmental information of the environment in which the electronic device is located.
Here, when it is judged that the output volume of the electronic device has reached the first threshold, it is determined that the output volume is now rather loud and may disturb users other than the one using the electronic device, so the environmental information of the environment in which the electronic device is located is collected. In practical applications, besides the audio data played by the audio output unit, the environmental information may also contain the speech of another user (user 2), the speech of the user using the electronic device (user 1), or the conversation between user 2 and user 1.
Step 103: acquire first audio information from the first environmental information.
Here, the audio data spoken by a person is acquired from the collected environmental information. In other words, the first audio information is the audio data spoken by a person, not the audio data played by the audio output unit.
Step 104: parse the first audio information to obtain a first analysis result.
Here, the audio data spoken by the person is parsed.
Step 105: when the first analysis result meets a second predetermined condition, control the electronic device to output first prompt information.
Here, if parsing shows that this audio data is user 2 asking user 1 to turn the sound down, the electronic device is controlled to output the first prompt information, prompting user 1 to turn the output volume down. The prompt may be given by blinking an indicator lamp on the keyboard at a certain frequency, or by displaying the first prompt information on the display screen, so that user 1 can see it in time and turn the output volume down as soon as possible.
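A minimal sketch of the two prompting channels mentioned above (indicator-lamp blinking and on-screen display); both device hooks and the message text are assumptions, not interfaces defined by the patent.

```python
# Sketch of step 105's output side: deliver the first prompt information via a
# blinking keyboard indicator lamp and/or an on-screen message. Both hooks
# (blink_keyboard_led, show_on_screen) are hypothetical device interfaces.

def output_first_prompt(blink_keyboard_led, show_on_screen,
                        message: str = "Audio output is loud; please turn the volume down"):
    blink_keyboard_led(frequency_hz=2)  # blink at a fixed frequency
    show_on_screen(message)             # and/or show the prompt on the display


# Example usage with print-based stubs:
output_first_prompt(lambda frequency_hz: print(f"LED blinking at {frequency_hz} Hz"),
                    print)
```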
In this embodiment of the present invention, when it is judged that the output volume of the electronic device reaches the first threshold, the environmental information of the environment in which the electronic device is located is collected, the audio data spoken by a person in the environmental information is acquired, and this audio data is parsed; if it is parsed that the meaning is a wish to have the volume turned down, the electronic device outputs the first prompt information, so that user 1 can see it in time and turn the output volume down as soon as possible. It can thus be seen that the electronic device of the present invention can automatically prompt the user that its audio output is loud, which improves the user experience and highlights the functional diversity and user-friendliness of the electronic device.
The second embodiment of the information processing method provided by the present invention is applied to an electronic device having an audio output unit, and the audio output unit may be a loudspeaker and/or an external speaker.
Fig. 2 is a schematic flowchart of the second embodiment of the information processing method provided by the present invention; as shown in Fig. 2, the method comprises the following steps.
Step 201: detect whether a first output attribute of the audio data of the electronic device meets a first predetermined condition, and generate a first detection result.
Here, the first output attribute is the output volume of the audio output unit, and the first predetermined condition is that the output volume of the audio output unit reaches a predetermined first threshold. That is, it is judged whether the output volume of the electronic device reaches the first threshold, for example 10 dB; the first threshold may also take other values and can be set flexibly according to the actual situation.
Step 202: when the first detection result indicates that the first output attribute of the audio data of the electronic device meets the first predetermined condition, collect first environmental information of the environment in which the electronic device is located.
Here, when it is judged that the output volume of the electronic device has reached the first threshold, it is determined that the output volume is now rather loud and may disturb users other than the one using the electronic device, so the environmental information of the environment in which the electronic device is located is collected. In practical applications, besides the audio data played by the audio output unit, the environmental information may also contain the speech of user 2, the speech of user 1, or the conversation between user 2 and user 1.
Step 203: separate, from the first environmental information, the information having a first predetermined characteristic, and determine the separated information to be the first audio information.
Here, the first environmental information contains at least the first audio information and second audio information. In practical applications, the audio output by the audio output unit of the device and the audio of a person speaking differ in frequency, pitch, timbre and the like, so these differences can be used to distinguish, within the environmental information, the audio data output by the audio output unit from the audio data spoken by a person. That is, the first predetermined characteristic is a feature such as the frequency, pitch or timbre of human speech; the first audio information is the audio data spoken by a person, and the second audio information is the audio data output by the audio output unit.
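The patent only says that the separation relies on differences in frequency, pitch and timbre, without giving an algorithm. The following is an illustrative stand-in: short frames whose energy is concentrated in a typical voice band are labelled as the first audio information (human speech), the rest as the second audio information (device playback). The band limits, frame length and the 0.6 ratio are assumptions, not values from the patent.

```python
# Illustrative stand-in for step 203: split collected environmental audio into
# "human speech" frames (first audio information) and "device playback" frames
# (second audio information) using a crude spectral heuristic.
import numpy as np


def looks_like_speech(frame: np.ndarray, sample_rate: int) -> bool:
    spectrum = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    voice_band = (freqs >= 300) & (freqs <= 3400)  # typical voice band (assumed limits)
    return spectrum[voice_band].sum() / (spectrum.sum() + 1e-12) > 0.6  # assumed ratio


def split_first_and_second_audio(samples: np.ndarray, sample_rate: int, frame_ms: int = 30):
    frame_len = int(sample_rate * frame_ms / 1000)
    first_audio, second_audio = [], []
    for start in range(0, len(samples) - frame_len + 1, frame_len):
        frame = samples[start:start + frame_len]
        (first_audio if looks_like_speech(frame, sample_rate) else second_audio).append(frame)
    return first_audio, second_audio  # speech frames, playback frames
```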
It should be noted that step 203 can be regarded as a further description of the method of acquiring the first audio information from the first environmental information.
Step 204: obtain the semantic information of the first audio information, divide the semantic information to obtain at least one piece of sub-semantic information, select, from the at least one piece of sub-semantic information, the sub-semantic information that satisfies a first predetermined rule, and take the selected sub-semantic information as the first analysis result.
Here, the audio data spoken by a person, separated by means of the first predetermined characteristic, is generally the speech of user 2. The speech of user 2 is divided semantically to obtain a plurality of pieces of sub-semantic information, and the sub-semantic information that reflects the main content of user 2's speech (i.e. the semantic information satisfying the first predetermined rule) is selected and taken as the first analysis result. For example, user 2 says "please turn the volume of your computer down a little"; after sub-semantic division this becomes "please / your / computer volume / turn down / a little", and the gist selected from this sentence is "turn the volume down".
Step 204 may specifically be implemented as follows: divide the first audio information into words according to a preset lexicon to obtain N sub-words; determine the types of the N sub-words; select, from the N sub-words, M sub-words that belong to the first type; and determine the set of the selected M sub-words to be the first analysis result, where M and N are positive integers and M ≤ N. Taking user 2's utterance "please turn the volume of your computer down a little" as an example again, the utterance is divided into words according to preset lexicons for nouns, verbs, adverbs, adjectives, pronouns, prepositions and so on (the lexicon of each type records the words of that type), giving "please (preposition) / your (adjective) / computer volume (noun) / turn (verb) / down (adjective) / a little (adverb)". In speech, the verbs, or the combinations of verbs with adjectives or nouns, usually carry the gist of what the speaker says; so in this embodiment the verb "turn" and the adjective "down" are selected and regarded as the sub-words belonging to the first type, and the set of the verb "turn" and the adjective "down", namely "turn down", is determined to represent the gist of user 2's speech.
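As an illustration of the word division and type-based selection just described, the sketch below runs the patent's example sentence (in its original Chinese, "请把你电脑的音量调小一点") through a part-of-speech tagger and keeps the verb and adjective sub-words. The patent does not name any particular segmenter or lexicon; jieba is used here only for convenience, and the segmentation it produces may differ from the division shown above.

```python
# Sketch of step 204: divide the recognized text into sub-words with types and
# select those of the "first type" (here assumed to be verbs and adjectives).
# jieba is one possible segmenter/lexicon, not one required by the patent.
import jieba.posseg as pseg

FIRST_TYPE_FLAGS = {"v", "a"}  # verbs ("v") and adjectives ("a") carry the gist


def first_analysis_result(recognized_text: str) -> list:
    sub_words = list(pseg.cut(recognized_text))                        # N sub-words with type flags
    return [w.word for w in sub_words if w.flag in FIRST_TYPE_FLAGS]   # M selected, M <= N


# Patent's example utterance; the expected gist is roughly "调" + "小" ("turn down"),
# though the exact output depends on how the tagger segments the sentence.
print(first_analysis_result("请把你电脑的音量调小一点"))
```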
It should be noted that step 204 can be regarded as a further description of the method of parsing the first audio information to obtain the first analysis result.
Step 205: compare the similarity between the selected sub-semantic information and first predetermined information; when the similarity between the selected sub-semantic information and the first predetermined information falls within a first similarity range, determine that the first analysis result meets the second predetermined condition, and control the electronic device to output the first prompt information.
Here, it should be considered that what user 2 says may be a wish for user 1 to turn the audio output down a little, but it may also mean something else, for example calling user 1 to come and eat or to lock the door. Therefore, in this embodiment, after the gist of user 2's speech has been selected, it is still necessary to judge whether this speech is asking user 1 to turn the audio output down. Specifically, the similarity between the selected sub-semantic information and the first predetermined information is judged, the first predetermined information being the semantic content of asking user 1 to turn the audio output down; if it is judged that the similarity between the selected sub-semantic information and the first predetermined information falls within the first similarity range, for example 80%, it is determined that the main meaning of user 2's speech is a wish for user 1 to turn the volume output down.
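A sketch of the similarity test in step 205, using Python's standard-library difflib as one convenient string-similarity measure; the patent does not specify how similarity is computed. The 80% figure follows the example above, while the concrete wording of the first predetermined information ("调小", "turn it down") is an assumption.

```python
# Sketch of step 205: compare the extracted gist with the first predetermined
# information and test the similarity against the first similarity range.
from difflib import SequenceMatcher

FIRST_PREDETERMINED_INFO = "调小"      # assumed phrasing of "turn it down"
FIRST_SIMILARITY_THRESHOLD = 0.8       # the 80% example from the description


def meets_second_predetermined_condition(gist_words) -> bool:
    gist = "".join(gist_words)
    similarity = SequenceMatcher(None, gist, FIRST_PREDETERMINED_INFO).ratio()
    return similarity >= FIRST_SIMILARITY_THRESHOLD


# Example usage with the gist selected in step 204:
if meets_second_predetermined_condition(["调", "小"]):
    print("Second predetermined condition met: output the first prompt information")
```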
Here, if it is parsed that what user 2 says is a wish for user 1 to turn the sound down, the electronic device is controlled to output the first prompt information, prompting user 1 to turn the output volume down. The prompt may be given by blinking an indicator lamp on the keyboard at a certain frequency, or by displaying the first prompt information on the display screen, so that user 1 can see it in time and turn the output volume down as soon as possible.
In a preferred embodiment of the present invention, before the semantic information of the first audio information is obtained, the method further comprises: performing speech recognition on the first audio information to obtain first text information and/or second text information, the first text information being the text obtained by performing speech recognition on the full content of the first audio information, and the second text information being the text obtained by performing speech recognition on part of the content of the first audio information; correspondingly, the obtaining of the semantic information of the first audio information comprises: obtaining semantic information of the first text information and/or the second text information. That is, before the semantic information of the first audio information is obtained, the acquired first audio information may be put through speech recognition, i.e. converted from audio data into text data; the text data may be the full content or only the main content of the first audio information, and the semantic information is then obtained and divided on the basis of this text data, so as to obtain the main meaning of user 2's speech.
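The recognizer itself is outside the scope of the patent, so the sketch below only shows the shape of this pre-processing step; both `transcribe_full` and `transcribe_partial` are hypothetical hooks standing in for whatever speech-recognition engine the device provides.

```python
# Sketch of the speech-recognition step: convert the first audio information
# into first text information (full content) and, optionally, second text
# information (partial content, e.g. only high-confidence segments).

def recognize_first_audio(first_audio, transcribe_full, transcribe_partial=None):
    first_text_info = transcribe_full(first_audio)
    second_text_info = transcribe_partial(first_audio) if transcribe_partial else None
    return first_text_info, second_text_info  # fed into the semantic analysis of step 204
```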
In practical applications, when the electronic device detects audio data from user 1 such as "turning it down now", "OK" or "adjusting it right away", the electronic device may stop outputting (close) the prompt, so as to suit the user's actual needs.
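A sketch of this behaviour; the acknowledgement phrases are back-translations of the examples above, and simple substring matching is an assumption, since the patent does not say how the acknowledgement is detected.

```python
# Sketch: stop outputting the prompt once user 1's recognized speech contains
# an acknowledgement phrase such as those mentioned in the description.

ACK_PHRASES = ("这就调小", "好的", "马上调")  # "turning it down now", "OK", "adjusting right away"


def should_close_prompt(user1_text: str) -> bool:
    return any(phrase in user1_text for phrase in ACK_PHRASES)


if should_close_prompt("好的，这就调小"):
    print("Close the first prompt information")
```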
In this embodiment of the present invention, when it is judged that the output volume of the electronic device reaches the first threshold, the environmental information of the environment in which the electronic device is located is collected, the audio data spoken by a person in the environmental information is acquired, this audio data is divided semantically, and the sub-semantic information satisfying the first predetermined rule, i.e. the gist of the speaker's speech, is obtained; when this gist is a wish for user 1 to turn the volume down, the electronic device outputs the first prompt information, so that user 1 can see it in time and turn the output volume down as soon as possible. It can thus be seen that the electronic device of the present invention can automatically prompt the user that its audio output is loud, which improves the user experience and highlights the functional diversity and user-friendliness of the electronic device.
In the first embodiment of the electronic device provided by the present invention, the electronic device has an audio output unit, and the audio output unit may be a loudspeaker and/or an external speaker.
Fig. 3 is a schematic structural diagram of the first embodiment of the electronic device provided by the present invention; as shown in Fig. 3, the electronic device comprises: a first detection unit 301, a first collection unit 302, a first acquisition unit 303, a first parsing unit 304 and a first control unit 305, wherein:
the first detection unit 301 is configured to detect whether a first output attribute of the audio data of the electronic device meets a first predetermined condition, and to generate a first detection result.
Here, the first output attribute is the output volume of the audio output unit, and the first predetermined condition is that the output volume of the audio output unit reaches a predetermined first threshold. That is, the first detection unit 301 judges whether the output volume of the electronic device reaches the first threshold, for example 10 dB; the first threshold may also take other values and can be set flexibly according to the actual situation.
The first collection unit 302 is configured to collect first environmental information of the environment in which the electronic device is located when the first detection result indicates that the first output attribute of the audio data of the electronic device meets the first predetermined condition.
Here, when the first detection unit 301 judges that the output volume of the electronic device has reached the first threshold, it is determined that the output volume is now rather loud and may disturb users other than the one using the electronic device, and the first collection unit 302 collects the environmental information of the environment in which the electronic device is located. In practical applications, besides the audio data played by the audio output unit, the environmental information may also contain the speech of another user (user 2), the speech of the user using the electronic device (user 1), or the conversation between user 2 and user 1.
The first acquisition unit 303 is configured to acquire first audio information from the first environmental information.
Here, the first acquisition unit 303 acquires, from the collected environmental information, the audio data spoken by a person. The first audio information is the audio data spoken by a person, not the audio data played by the audio output unit.
The first parsing unit 304 is configured to parse the first audio information to obtain a first analysis result.
Here, the first parsing unit 304 parses the audio data spoken by the person.
The first control unit 305 is configured to control the electronic device to output first prompt information when the first analysis result meets a second predetermined condition.
Here, if parsing shows that this audio data is user 2 asking user 1 to turn the sound down, the first control unit 305 controls the electronic device to output the first prompt information, prompting user 1 to turn the output volume down. The prompt may be given by blinking an indicator lamp on the keyboard at a certain frequency, or by displaying the first prompt information on the display screen, so that user 1 can see it in time and turn the output volume down as soon as possible.
In this embodiment of the present invention, when it is judged that the output volume of the electronic device reaches the first threshold, the environmental information of the environment in which the electronic device is located is collected, the audio data spoken by a person in the environmental information is acquired, and this audio data is parsed; if it is parsed that the meaning is a wish to have the volume turned down, the electronic device outputs the first prompt information, so that user 1 can see it in time and turn the output volume down as soon as possible. It can thus be seen that the electronic device of the present invention can automatically prompt the user that its audio output is loud, which improves the user experience and highlights the functional diversity and user-friendliness of the electronic device.
In the second embodiment of the electronic device provided by the present invention, the electronic device has an audio output unit, and the audio output unit may be a loudspeaker and/or an external speaker.
Fig. 4 is a schematic structural diagram of the second embodiment of the electronic device provided by the present invention; as shown in Fig. 4, the electronic device comprises: a first detection unit 401, a first collection unit 402, a first acquisition unit 403, a first parsing unit 404 and a first control unit 405, wherein:
the first detection unit 401 is configured to detect whether a first output attribute of the audio data of the electronic device meets a first predetermined condition, and to generate a first detection result.
Here, the first output attribute is the output volume of the audio output unit, and the first predetermined condition is that the output volume of the audio output unit reaches a predetermined first threshold. That is, the first detection unit 401 judges whether the output volume of the electronic device reaches the first threshold, for example 10 dB; the first threshold may also take other values and can be set flexibly according to the actual situation.
The first collection unit 402 is configured to collect first environmental information of the environment in which the electronic device is located when the first detection result indicates that the first output attribute of the audio data of the electronic device meets the first predetermined condition.
Here, when the first detection unit 401 judges that the output volume of the electronic device has reached the first threshold, it is determined that the output volume is now rather loud and may disturb users other than the one using the electronic device, and the first collection unit 402 collects the environmental information of the environment in which the electronic device is located. In practical applications, besides the audio data played by the audio output unit, the environmental information may also contain the speech of another user (user 2), the speech of the user using the electronic device (user 1), or the conversation between user 2 and user 1.
The first acquisition unit 403 is configured to separate, from the first environmental information, the information having a first predetermined characteristic, and to determine the separated information to be the first audio information.
Here, the first environmental information contains at least the first audio information and second audio information. In practical applications, the audio output by the audio output unit of the device and the audio of a person speaking differ in frequency, pitch, timbre and the like, so the first acquisition unit 403 can use these differences to distinguish, within the environmental information, the audio data output by the audio output unit from the audio data spoken by a person. That is, the first predetermined characteristic is a feature such as the frequency, pitch or timbre of human speech; the first audio information is the audio data spoken by a person, and the second audio information is the audio data output by the audio output unit.
The foregoing can be regarded as a further description of how the first acquisition unit 403 performs the function of acquiring the first audio information from the first environmental information.
The first parsing unit 404 is configured to obtain the semantic information of the first audio information, divide the semantic information to obtain at least one piece of sub-semantic information, select, from the at least one piece of sub-semantic information, the sub-semantic information that satisfies a first predetermined rule, and take the selected sub-semantic information as the first analysis result.
Here, the audio data spoken by a person, separated by means of the first predetermined characteristic, is generally the speech of user 2. The first parsing unit 404 divides the speech of user 2 semantically to obtain a plurality of pieces of sub-semantic information, selects the sub-semantic information that reflects the main content of user 2's speech (i.e. the semantic information satisfying the first predetermined rule), and takes the selected sub-semantic information as the first analysis result. For example, user 2 says "please turn the volume of your computer down a little"; after sub-semantic division this becomes "please / your / computer volume / turn down / a little", and the gist selected from this sentence is "turn the volume down".
The first parsing unit 404 is specifically configured to divide the first audio information into words according to a preset lexicon to obtain N sub-words, determine the types of the N sub-words, select, from the N sub-words, M sub-words that belong to the first type, and determine the set of the selected M sub-words to be the first analysis result, where M and N are positive integers and M ≤ N. Taking user 2's utterance "please turn the volume of your computer down a little" as an example again, the first parsing unit 404 divides the utterance into words according to preset lexicons for nouns, verbs, adverbs, adjectives, pronouns, prepositions and so on (the lexicon of each type records the words of that type), giving "please (preposition) / your (adjective) / computer volume (noun) / turn (verb) / down (adjective) / a little (adverb)". In speech, the verbs, or the combinations of verbs with adjectives or nouns, usually carry the gist of what the speaker says; so in this embodiment the verb "turn" and the adjective "down" are selected and regarded as the sub-words belonging to the first type, and the set of the verb "turn" and the adjective "down", namely "turn down", is determined to represent the gist of user 2's speech.
The foregoing can be regarded as a further description of how the first parsing unit 404 performs the function of parsing the first audio information to obtain the first analysis result.
The first control unit 405 is further configured to compare the similarity between the selected sub-semantic information and first predetermined information and, when the similarity between the selected sub-semantic information and the first predetermined information falls within a first similarity range, to determine that the first analysis result meets the second predetermined condition and control the electronic device to output the first prompt information.
Here, it should be considered that what user 2 says may be a wish for user 1 to turn the audio output down a little, but it may also mean something else, for example calling user 1 to come and eat or to lock the door. Therefore, in this embodiment, after the first parsing unit 404 has selected the gist of user 2's speech, the first control unit 405 still needs to judge whether this speech is asking user 1 to turn the audio output down. Specifically, the first control unit 405 judges the similarity between the selected sub-semantic information and the first predetermined information, the first predetermined information being the semantic content of asking user 1 to turn the audio output down; if it is judged that the similarity between the selected sub-semantic information and the first predetermined information falls within the first similarity range, for example 80%, it is determined that the main meaning of user 2's speech is a wish for user 1 to turn the volume output down. The first control unit 405 then controls the electronic device to output the first prompt information, prompting user 1 to turn the output volume down. The prompt may be given by blinking an indicator lamp on the keyboard at a certain frequency, or by displaying the first prompt information on the display screen, so that user 1 can see it in time and turn the output volume down as soon as possible.
The electronic device further comprises a first recognition unit (not shown in Fig. 4), configured to perform speech recognition on the first audio information to obtain first text information and/or second text information, the first text information being the text obtained by performing speech recognition on the full content of the first audio information, and the second text information being the text obtained by performing speech recognition on part of the content of the first audio information; correspondingly, the first acquisition unit 403 is configured to obtain semantic information of the first text information and/or the second text information. That is, before the semantic information of the first audio information is obtained, the first recognition unit may put the acquired first audio information through speech recognition, i.e. convert it from audio data into text data; the text data may be the full content or only the main content of the first audio information, and the first parsing unit 404 then obtains and divides the semantic information on the basis of this text data, so as to obtain the main meaning of user 2's speech.
In practical applications, when the electronic device detects audio data from user 1 such as "turning it down now", "OK" or "adjusting it right away", the electronic device may stop outputting (close) the prompt, so as to suit the user's actual needs.
In this embodiment of the present invention, when it is judged that the output volume of the electronic device reaches the first threshold, the environmental information of the environment in which the electronic device is located is collected, the audio data spoken by a person in the environmental information is acquired, this audio data is divided semantically, and the sub-semantic information satisfying the first predetermined rule, i.e. the gist of the speaker's speech, is obtained; when this gist is a wish for user 1 to turn the volume down, the electronic device outputs the first prompt information, so that user 1 can see it in time and turn the output volume down as soon as possible. It can thus be seen that the electronic device of the present invention can automatically prompt the user that its audio output is loud, which improves the user experience and highlights the functional diversity and user-friendliness of the electronic device.
Those skilled in the art should understand that the embodiments of the present invention may be provided as a method, a system or a computer program product. Therefore, the present invention may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware. Moreover, the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage and optical storage) containing computer-usable program code.
The present invention is described with reference to flowcharts and/or block diagrams of the method, device (system) and computer program product according to the embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor or another programmable data-processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data-processing device produce means for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or another programmable data-processing device to operate in a particular manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or another programmable data-processing device, so that a series of operational steps are performed on the computer or other programmable device to produce a computer-implemented process, whereby the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
The above are only preferred embodiments of the present invention and are not intended to limit the scope of protection of the present invention.

Claims (12)

1. An information processing method, applied to an electronic device, the method comprising:
detecting whether a first output attribute of audio data of the electronic device meets a first predetermined condition, and generating a first detection result;
when the first detection result indicates that the first output attribute of the audio data of the electronic device meets the first predetermined condition, collecting first environmental information of the environment in which the electronic device is located;
acquiring first audio information from the first environmental information;
parsing the first audio information to obtain a first analysis result;
when the first analysis result meets a second predetermined condition, controlling the electronic device to output first prompt information.
2. The method according to claim 1, characterized in that the first environmental information contains at least the first audio information and second audio information;
the acquiring of the first audio information from the first environmental information comprises:
separating, from the first environmental information, the information having a first predetermined characteristic;
determining the separated information to be the first audio information.
3. The method according to claim 2, characterized in that the parsing of the first audio information to obtain the first analysis result comprises:
obtaining semantic information of the first audio information;
dividing the semantic information to obtain at least one piece of sub-semantic information;
selecting, from the at least one piece of sub-semantic information, the sub-semantic information that satisfies a first predetermined rule;
taking the selected sub-semantic information as the first analysis result.
4. The method according to claim 3, characterized in that the method further comprises:
dividing the first audio information into words according to a preset lexicon to obtain N sub-words;
determining the types of the N sub-words;
selecting, from the N sub-words, M sub-words that belong to a first type;
determining the set of the selected M sub-words to be the first analysis result;
where M and N are positive integers and M ≤ N.
5. The method according to claim 3, characterized in that the method further comprises:
comparing the similarity between the selected sub-semantic information and first predetermined information;
when the similarity between the selected sub-semantic information and the first predetermined information falls within a first similarity range, determining that the first analysis result meets the second predetermined condition.
6. The method according to claim 3, characterized in that, before the obtaining of the semantic information of the first audio information, the method further comprises:
performing speech recognition on the first audio information to obtain first text information and/or second text information, the first text information being the text obtained by performing speech recognition on the full content of the first audio information, and the second text information being the text obtained by performing speech recognition on part of the content of the first audio information;
correspondingly, the obtaining of the semantic information of the first audio information comprises: obtaining semantic information of the first text information and/or the second text information.
7. An electronic device, comprising:
a first detection unit, configured to detect whether a first output attribute of audio data of the electronic device meets a first predetermined condition, and to generate a first detection result;
a first collection unit, configured to collect first environmental information of the environment in which the electronic device is located when the first detection result indicates that the first output attribute of the audio data of the electronic device meets the first predetermined condition;
a first acquisition unit, configured to acquire first audio information from the first environmental information;
a first parsing unit, configured to parse the first audio information to obtain a first analysis result;
a first control unit, configured to control the electronic device to output first prompt information when the first analysis result meets a second predetermined condition.
8. The electronic device according to claim 7, characterized in that the first acquisition unit is configured to:
separate, from the first environmental information, the information having a first predetermined characteristic;
determine the separated information to be the first audio information, the first environmental information containing at least the first audio information and second audio information.
9. The electronic device according to claim 8, characterized in that the first parsing unit is configured to:
obtain semantic information of the first audio information;
divide the semantic information to obtain at least one piece of sub-semantic information;
select, from the at least one piece of sub-semantic information, the sub-semantic information that satisfies a first predetermined rule;
take the selected sub-semantic information as the first analysis result.
10. The electronic device according to claim 8, characterized in that the first parsing unit is specifically configured to:
divide the first audio information into words according to a preset lexicon to obtain N sub-words;
determine the types of the N sub-words;
select, from the N sub-words, M sub-words that belong to a first type;
determine the set of the selected M sub-words to be the first analysis result;
where M and N are positive integers and M ≤ N.
11. The electronic device according to claim 9, characterized in that the first control unit is further configured to:
compare the similarity between the selected sub-semantic information and first predetermined information;
determine that the first analysis result meets the second predetermined condition when the similarity between the selected sub-semantic information and the first predetermined information falls within a first similarity range.
12. The electronic device according to claim 9, characterized in that the electronic device further comprises a first recognition unit configured to:
perform speech recognition on the first audio information to obtain first text information and/or second text information, the first text information being the text obtained by performing speech recognition on the full content of the first audio information, and the second text information being the text obtained by performing speech recognition on part of the content of the first audio information;
correspondingly, the first acquisition unit is configured to obtain semantic information of the first text information and/or the second text information.
CN201510894668.XA 2015-12-08 2015-12-08 Information processing method and electronic equipment Active CN105469794B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510894668.XA CN105469794B (en) 2015-12-08 2015-12-08 Information processing method and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510894668.XA CN105469794B (en) 2015-12-08 2015-12-08 Information processing method and electronic equipment

Publications (2)

Publication Number Publication Date
CN105469794A true CN105469794A (en) 2016-04-06
CN105469794B CN105469794B (en) 2019-09-24

Family

ID=55607421

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510894668.XA Active CN105469794B (en) 2015-12-08 2015-12-08 Information processing method and electronic equipment

Country Status (1)

Country Link
CN (1) CN105469794B (en)

Patent Citations (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1815555A (en) * 2005-02-04 2006-08-09 光宝科技股份有限公司 Electronic radio device and its volume prompting method
CN1893498A (en) * 2005-07-01 2007-01-10 英华达(上海)电子有限公司 Method for intelligently regulating incoming telegrame bell sound-volume of hand-set its prompting manner
CN101222210A (en) * 2007-01-08 2008-07-16 乐金电子(中国)研究开发中心有限公司 Apparatus and method for self-adapting sound volume regulation of mobile phone
US20090326944A1 (en) * 2008-06-30 2009-12-31 Kabushiki Kaisha Toshiba Voice recognition apparatus and method
US20120008802A1 (en) * 2008-07-02 2012-01-12 Felber Franklin S Voice detection for automatic volume controls and voice sensors
CN101930789A (en) * 2009-06-26 2010-12-29 英特尔公司 The environment for use audio analysis comes the control audio player
CN101840700A (en) * 2010-04-28 2010-09-22 宇龙计算机通信科技(深圳)有限公司 Voice recognition method based on mobile terminal and mobile terminal
US20120226502A1 (en) * 2011-03-01 2012-09-06 Kabushiki Kaisha Toshiba Television apparatus and a remote operation apparatus
CN103024630A (en) * 2011-09-21 2013-04-03 联想(北京)有限公司 Volume regulating method of first electronic equipment and first electronic equipment
CN102833505A (en) * 2012-09-14 2012-12-19 高亿实业有限公司 Automatic regulation method and system for television volume, television and television remote control device
CN104104775A (en) * 2013-04-02 2014-10-15 中兴通讯股份有限公司 Method for automatically adjusting mobile phone ringing tone volume and vibration modes and device thereof
US20140372109A1 (en) * 2013-06-13 2014-12-18 Motorola Mobility Llc Smart volume control of device audio output based on received audio input
CN104916068A (en) * 2014-03-14 2015-09-16 联想(北京)有限公司 Information processing method and electronic device
CN103943106A (en) * 2014-04-01 2014-07-23 北京豪络科技有限公司 Intelligent wristband for gesture and voice recognition
CN103928025A (en) * 2014-04-08 2014-07-16 华为技术有限公司 Method and mobile terminal for voice recognition
CN104243689A (en) * 2014-07-04 2014-12-24 苏州天趣信息科技有限公司 Method for controlling alarm clock based on audio signal collecting and mobile terminal of method
CN104301540A (en) * 2014-10-22 2015-01-21 深圳市朵唯志远科技有限公司 Method for automatically adjusting sound volume of alarm
CN104883437A (en) * 2015-05-04 2015-09-02 南京理工大学 Method and system for adjusting the volume of warning tone through environment-based voice analysis
CN104898446A (en) * 2015-05-29 2015-09-09 四川长虹电器股份有限公司 Control method and intelligent household control device

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108363557A (en) * 2018-02-02 2018-08-03 刘国华 Man-machine interaction method, device, computer equipment and storage medium
US11483657B2 (en) 2018-02-02 2022-10-25 Guohua Liu Human-machine interaction method and device, computer apparatus, and storage medium

Also Published As

Publication number Publication date
CN105469794B (en) 2019-09-24

Similar Documents

Publication Publication Date Title
CN110085251B (en) Human voice extraction method, human voice extraction device and related products
CN103366740B (en) Voice command identification method and device
CN102591455B (en) Selective Transmission of Voice Data
CN112770214B (en) Earphone control method and device and earphone
CN112863547A (en) Virtual resource transfer processing method, device, storage medium and computer equipment
CN106406867B (en) Screen reading method and device based on android system
CN108108142A (en) Voice information processing method, device, terminal device and storage medium
CN104050966A (en) Voice interaction method of terminal equipment and terminal equipment employing voice interaction method
CN105469789A (en) Voice information processing method and voice information processing terminal
WO2013166194A1 (en) Speech recognition systems and methods
CN103853703A (en) Information processing method and electronic equipment
CN105118522A (en) Noise detection method and device
CN111462741B (en) Voice data processing method, device and storage medium
CN103106061A (en) Voice input method and device
CN105120063A (en) Volume prompting method of input voice and electronic device
CN103778915A (en) Speech recognition method and mobile terminal
CN103903625A (en) Audio sound mixing method and device
CN106302972A (en) The reminding method of voice use and terminal unit
CN112752186A (en) Earphone wearing state detection method and device and earphone
CN111724781A (en) Audio data storage method and device, terminal and storage medium
CN105096936A (en) Push-to-talk service control method and apparatus
CN103064828A (en) Method and device for text operating
CN107948854B (en) Operation audio generation method and device, terminal and computer readable medium
CN104851423A (en) Sound message processing method and device
CN108231074A (en) A kind of data processing method, voice assistant equipment and computer readable storage medium

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant