CN103873687A - Information processing method and electronic equipment

Info

Publication number
CN103873687A
Authority
CN
China
Prior art keywords
speech data
predetermined condition
electronic equipment
voice identifier
audio recording
Prior art date
Legal status
Pending
Application number
CN201410086120.8A
Other languages
Chinese (zh)
Inventor
王茜莺
李向阳
彭世峰
董芳菲
Current Assignee
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date
Filing date
Publication date
Application filed by Lenovo Beijing Ltd filed Critical Lenovo Beijing Ltd
Priority to CN201410086120.8A priority Critical patent/CN103873687A/en
Publication of CN103873687A publication Critical patent/CN103873687A/en
Pending legal-status Critical Current

Abstract

The invention discloses an information processing method for improving the prompting effect of an electronic equipment. The method comprises: obtaining speech data, where the speech data comprises a continuous audio recording and the voice identifier corresponding to the speech data is a first voice identifier; parsing the audio recording to obtain N character identifiers corresponding to the audio recording, where the N character identifiers characterize the content of the speech data and N is a positive integer; and displaying the first voice identifier through a display unit while simultaneously displaying the N character identifiers. The invention further discloses an electronic equipment corresponding to the method.

Description

Information processing method and electronic equipment
Technical field
The present invention relates to the field of electronic technology, and in particular to an information processing method and an electronic equipment.
Background art
With the rapid development of science and technology and increasingly fierce market competition, the performance and appearance of electronic equipment have improved greatly. Electronic equipment such as smart phones and tablet computers, with their rich functions, small size, light weight, and strong entertainment capabilities, are liked by more and more people. In particular, various software platforms have emerged in large numbers; for example, microblogs and WeChat are increasingly popular, and people are more and more willing to share and communicate through terminal equipment. For example, a user can send a voice message or a video message to a friend through WeChat, so that interaction can be carried out conveniently.
At present, because people are increasingly willing to adopt convenient voice communication, most electronic equipment is provided with application programs that can be used for voice interaction, for example WeChat and QQ (instant-messaging applications). When a user receives a voice message sent by a contact through an application installed on the electronic equipment, the user only needs to play the voice message to listen to its content, and the process is relatively simple and convenient.
Under normal circumstances, to prevent other people from hearing the voice content or to avoid disturbing others, users are accustomed to listening to voice messages through an earpiece. However, because the playback volume of an earpiece is limited, the user can clearly hear the specific content only in a relatively quiet environment. Moreover, on some occasions, for example in a meeting, if the user receives a voice message, even if the environment is quiet, it is inconvenient for the user to listen to it directly, so the user cannot learn its content in time. After receiving a voice message, current electronic equipment usually only gives a simple prompt that a message has been received; when the user is not in a position to listen to the message, the electronic equipment cannot provide any further prompt to let the user know its content. Therefore, the prior art has the technical problem that the prompting effect of electronic equipment is poor.
Summary of the invention
An embodiment of the present invention provides an information processing method for solving the problem in the prior art that the prompting effect of electronic equipment is poor.
An information processing method is applied to an electronic equipment, the electronic equipment comprising a display unit; the method comprises:
obtaining speech data, the speech data comprising a continuous audio recording, wherein the voice identifier corresponding to the speech data is a first voice identifier;
parsing the audio recording to obtain N character identifiers corresponding to the audio recording, the N character identifiers being used for characterizing the content of the speech data, N being a positive integer;
displaying the first voice identifier through the display unit and, while the first voice identifier is displayed, displaying the N character identifiers.
Preferably, displaying the N character identifiers is specifically: displaying the N character identifiers overlaid on the first voice identifier.
Preferably, parsing the audio recording comprises:
determining whether the current state of the speech data meets a predetermined condition;
when the current state of the speech data meets the predetermined condition, parsing the audio recording.
Preferably, when the current state of the speech data meets the predetermined condition, parsing the audio recording comprises:
obtaining first parameter information, the first parameter information being used for characterizing the current device state of the electronic equipment;
when the first parameter information meets a first sub-condition, determining that the current state of the speech data meets the predetermined condition, and parsing the audio recording; wherein the first sub-condition is that the electronic equipment is in a mute state.
Preferably, when the current state of the speech data meets the predetermined condition, parsing the audio recording comprises:
obtaining second parameter information, the second parameter information being used for characterizing the number of times the first voice identifier has been triggered;
when the second parameter information meets a second sub-condition, determining that the current state of the speech data meets the predetermined condition, and parsing the audio recording; wherein the second sub-condition is that the first voice identifier has been triggered M times, M being a positive integer not less than 2.
Preferably, when the current state of the speech data meets the predetermined condition, parsing the audio recording comprises:
obtaining third parameter information, the third parameter information being used for characterizing the source information of the speech data;
when the third parameter information meets a third sub-condition, determining that the current state of the speech data meets the predetermined condition, and parsing the audio recording; wherein the third sub-condition is that the speech data is data received by the electronic equipment from another electronic equipment different from the electronic equipment.
Preferably, when multiple voice identifiers are displayed on the display unit, obtaining speech data comprises:
when an operating body gripping the electronic equipment is present, obtaining grip position information of the operating body with respect to the electronic equipment;
determining K voice identifiers from the multiple voice identifiers according to the grip position information, the K voice identifiers being voice identifiers that the operating body cannot trigger from the grip position, K being a positive integer;
taking the speech data corresponding to the K voice identifiers as the speech data.
An electronic equipment comprises a display unit; the electronic equipment further comprises:
an acquiring unit, configured to obtain speech data, the speech data comprising a continuous audio recording, wherein the voice identifier corresponding to the speech data is a first voice identifier;
a resolution unit, configured to parse the audio recording and obtain N character identifiers corresponding to the audio recording, the N character identifiers being used for characterizing the content of the speech data, N being a positive integer;
a processing unit, configured to display the first voice identifier through the display unit and, while the first voice identifier is displayed, display the N character identifiers.
Preferably, the processing unit is configured to display the N character identifiers by displaying them overlaid on the first voice identifier.
Preferably, the resolution unit is further configured to: determine whether the current state of the speech data meets a predetermined condition; and when the current state of the speech data meets the predetermined condition, parse the audio recording.
Preferably, when the current state of the speech data meets the predetermined condition, the resolution unit parses the audio recording specifically by: obtaining first parameter information, the first parameter information being used for characterizing the current device state of the electronic equipment; and when the first parameter information meets a first sub-condition, determining that the current state of the speech data meets the predetermined condition and parsing the audio recording; wherein the first sub-condition is that the electronic equipment is in a mute state.
Preferably, when the current state of the speech data meets the predetermined condition, the resolution unit parses the audio recording specifically by: obtaining second parameter information, the second parameter information being used for characterizing the number of times the first voice identifier has been triggered; and when the second parameter information meets a second sub-condition, determining that the current state of the speech data meets the predetermined condition and parsing the audio recording; wherein the second sub-condition is that the first voice identifier has been triggered M times, M being a positive integer not less than 2.
Preferably, when the current state of the speech data meets the predetermined condition, the resolution unit parses the audio recording specifically by: obtaining third parameter information, the third parameter information being used for characterizing the source information of the speech data; and when the third parameter information meets a third sub-condition, determining that the current state of the speech data meets the predetermined condition and parsing the audio recording; wherein the third sub-condition is that the speech data is data received by the electronic equipment from another electronic equipment different from the electronic equipment.
Preferably, when multiple voice identifiers are displayed on the display unit, the acquiring unit is further configured to: when an operating body gripping the electronic equipment is present, obtain grip position information of the operating body with respect to the electronic equipment; determine K voice identifiers from the multiple voice identifiers according to the grip position information, the K voice identifiers being voice identifiers that the operating body cannot trigger from the grip position, K being a positive integer; and take the speech data corresponding to the K voice identifiers as the speech data.
In the embodiments of the present invention, when the speech data is obtained, the N character identifiers that characterize the content of the speech data can be obtained by parsing the audio recording included in the speech data, and when the voice identifier is displayed on the display unit, the N character identifiers are displayed together with the voice identifier. Therefore, when the user receives the speech data, the user can largely learn the content of the speech data from the N displayed character identifiers. For example, the character identifiers may be keywords, so that the user can learn the relevant content of the speech data from the displayed keywords without listening to the audio recording. Thus, on occasions where it is inconvenient to play the audio recording, such as a noisy environment or a meeting, the electronic equipment can better prompt the user by displaying the N character identifiers, so that the user learns the content of the speech data in time. This enhances the prompting effect of the electronic equipment, improves its practicality, and correspondingly improves the user experience.
Brief description of the drawings
Fig. 1 is a main flow chart of the information processing method in an embodiment of the present invention;
Fig. 2 is a schematic diagram of simultaneously displaying N character identifiers and a voice identifier in an embodiment of the present invention;
Fig. 3 is a main structural diagram of the electronic equipment in an embodiment of the present invention.
Detailed description of the embodiments
The information processing method in the embodiments of the present invention can be applied to an electronic equipment, the electronic equipment comprising a display unit. The method comprises: obtaining speech data, the speech data comprising a continuous audio recording, wherein the voice identifier corresponding to the speech data is a first voice identifier; parsing the audio recording to obtain N character identifiers corresponding to the audio recording, the N character identifiers being used for characterizing the content of the speech data, N being a positive integer; and displaying the first voice identifier through the display unit and, while the first voice identifier is displayed, displaying the N character identifiers.
In the embodiments of the present invention, when the speech data is obtained, the N character identifiers that characterize the content of the speech data can be obtained by parsing the audio recording included in the speech data, and when the voice identifier is displayed on the display unit, the N character identifiers are displayed together with the voice identifier. Therefore, when the user receives the speech data, the user can largely learn the content of the speech data from the N displayed character identifiers. For example, the character identifiers may be keywords, so that the user can learn the relevant content of the speech data from the displayed keywords without listening to the audio recording. Thus, on occasions where it is inconvenient to play the audio recording, such as a noisy environment or a meeting, the electronic equipment can better prompt the user by displaying the N character identifiers, so that the user learns the content of the speech data in time. This enhances the prompting effect of the electronic equipment, improves its practicality, and correspondingly improves the user experience.
To make the objects, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative work fall within the protection scope of the present invention.
In the embodiments of the present invention, the electronic equipment may be a PC (personal computer), a notebook computer, a PAD (tablet computer), a mobile phone, or another electronic equipment; the present invention is not limited in this respect.
In addition, the term "and/or" herein only describes an association relationship between associated objects and indicates that three relationships may exist. For example, A and/or B may represent three cases: A exists alone, A and B exist at the same time, and B exists alone. In addition, the character "/" herein generally indicates an "or" relationship between the associated objects before and after it.
The preferred embodiments of the present invention are described in detail below with reference to the accompanying drawings.
Referring to Fig. 1, an embodiment of the present invention provides an information processing method applied to an electronic equipment, the electronic equipment comprising a display unit; the method may comprise the following steps.
Step 101: obtain speech data, the speech data comprising a continuous audio recording, wherein the voice identifier corresponding to the speech data is a first voice identifier.
In the embodiment of the present invention, the speech data may be voice information obtained by the electronic equipment. For example, the electronic equipment may have an audio input unit, such as a microphone, and the speech data may be voice information or audio/video data obtained through the audio input unit. Alternatively, the speech data may be a voice message or audio/video data that the electronic equipment receives from another electronic equipment; for example, a user may receive, through an application such as WeChat, a voice message sent by another user.
In the embodiment of the present invention, the audio recording may be one or more audio segments included in the speech data. For example, when a user receives a voice message through a mobile phone, the voice message may include one or more voice segments, and these one or more voice segments may be called the audio recording.
Preferably, in the embodiment of the present invention, the speech data has a corresponding voice identifier, namely the first voice identifier. The first voice identifier may be generated when the electronic equipment determines that the received information is speech data, and may be, for example, an icon resembling a speech signal. The voice identifier can be displayed on the display unit to prompt the user that the speech data has been received successfully, and the user can then control the electronic equipment to play the speech data by triggering the first voice identifier.
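The following is a minimal Kotlin sketch of the data involved in step 101. The type and field names (SpeechData, AudioSegment, VoiceIdentifier, receivedFromOtherDevice) are illustrative assumptions for this sketch, not terms defined by the patent.

```kotlin
// Illustrative data model for step 101 (names are assumptions, not from the patent).
data class AudioSegment(val samples: ShortArray, val durationMs: Long)

data class VoiceIdentifier(
    val id: String,              // e.g. the message id the on-screen icon is bound to
    var triggerCount: Int = 0    // how many times the user has tapped the icon
)

data class SpeechData(
    val recording: List<AudioSegment>,     // the continuous audio recording (one or more segments)
    val identifier: VoiceIdentifier,       // the "first voice identifier" shown on the display unit
    val receivedFromOtherDevice: Boolean   // source information, used later as the third sub-condition
)

// Obtaining speech data simply wraps the received segments together with a freshly
// generated voice identifier so that the display unit can show an icon for it.
fun obtainSpeechData(segments: List<AudioSegment>, fromOtherDevice: Boolean): SpeechData =
    SpeechData(
        recording = segments,
        identifier = VoiceIdentifier(id = java.util.UUID.randomUUID().toString()),
        receivedFromOtherDevice = fromOtherDevice
    )
```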
In the embodiment of the present invention, when multiple voice identifiers are displayed on the display unit, obtaining speech data may comprise: when an operating body gripping the electronic equipment is present, obtaining grip position information of the operating body with respect to the electronic equipment; determining K voice identifiers from the multiple voice identifiers according to the grip position information, the K voice identifiers being voice identifiers that the operating body cannot trigger from the grip position, K being a positive integer; and taking the speech data corresponding to the K voice identifiers as the speech data.
The operating body may be the user's hand. For example, under normal circumstances, the user grips the electronic equipment, for example a mobile phone, with one hand and performs various operations on the screen with the other hand.
That is, in the embodiment of the present invention, when the user is holding the electronic equipment and multiple voice identifiers are displayed on the display unit, the grip position of the operating body relative to the electronic equipment can be determined, so that the K voice identifiers that the user cannot trigger are determined, the K voice identifiers are taken as the first voice identifier, and the speech data corresponding to them is parsed. In this way, even if the user cannot trigger these voice identifiers and cannot listen to the corresponding voices, the user can still learn the content of the speech data through the solution provided by the embodiment of the present invention.
For example, when holding a mobile phone with one hand, the user can usually tap with the thumb only the voice identifiers displayed in a certain area of the screen. If the user grips the lower part of the phone and the screen is relatively long, the user may be unable to trigger the voice identifiers displayed near the top of the screen. In this case, the electronic equipment can detect that the user's hand is near the lower end of the phone, obtain the corresponding grip position information, and determine from this information the voice identifiers near the top of the screen. For example, if 3 voice identifiers are displayed in an area near the top of the screen that the user cannot reach, the phone can automatically take these 3 voice identifiers as the first voice identifier for further processing, as sketched in the example below.
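A minimal Kotlin sketch of this selection step follows, assuming the grip position is reported as a vertical screen coordinate and each displayed identifier knows where it is drawn. The reach threshold and all names here are illustrative assumptions; a real device would calibrate reachability from the actual grip detection.

```kotlin
// Illustrative selection of the K voice identifiers that the gripping hand cannot reach.
data class DisplayedIdentifier(val messageId: String, val centerY: Float)

fun selectUnreachableIdentifiers(
    displayed: List<DisplayedIdentifier>,
    gripY: Float,                    // vertical grip position, in pixels from the top of the screen
    reachableSpanPx: Float = 600f    // assumed one-hand thumb reach above the grip point
): List<String> =
    displayed
        .filter { it.centerY < gripY - reachableSpanPx }  // drawn too far above the grip to be tapped
        .map { it.messageId }
```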
Step 102: parse the audio recording and obtain N character identifiers corresponding to the audio recording; the N character identifiers are used for characterizing the content of the speech data; N is a positive integer.
In the embodiment of the present invention, after the speech data is obtained, the N character identifiers that characterize the content of the speech data can be obtained by parsing the audio recording included in the speech data.
In the embodiment of the present invention, parsing the audio recording may comprise: determining whether the current state of the speech data meets a predetermined condition; and when the current state of the speech data meets the predetermined condition, parsing the audio recording.
Preferably, in the embodiment of the present invention, parsing the audio recording when the current state of the speech data meets the predetermined condition can be carried out in three ways.
The first way may be: obtaining first parameter information, the first parameter information being used for characterizing the current device state of the electronic equipment; when the first parameter information meets a first sub-condition, determining that the current state of the speech data meets the predetermined condition, and parsing the audio recording; wherein the first sub-condition is that the electronic equipment is in a mute state.
In the first way, the current device state of the electronic equipment can be determined from the obtained first parameter information. For example, the current device state may correspond to a profile scene: an outdoor profile corresponds to a ringing state, a meeting profile corresponds to a mute state, and so on. When the first parameter information confirms that the electronic equipment is in the mute state, for example because the currently set profile is the meeting profile, it can be confirmed that the first parameter information meets the predetermined condition, and the speech data can be parsed further. Because the user is generally not in a position to listen to speech data when the electronic equipment is in the mute state, the solution of the embodiment of the present invention allows the user to learn the content of the speech data without listening to it, which improves the user experience.
The second way may be: obtaining second parameter information, the second parameter information being used for characterizing the number of times the first voice identifier has been triggered; when the second parameter information meets a second sub-condition, determining that the current state of the speech data meets the predetermined condition, and parsing the audio recording; wherein the second sub-condition is that the first voice identifier has been triggered M times, M being a positive integer not less than 2.
In the second way, whether the second parameter information meets the predetermined condition can be determined by judging the number of times the first voice identifier has been triggered.
For example, suppose the second sub-condition is that the first voice identifier has been triggered 2 times. When the user receives the speech data on a mobile phone, the user first taps the first voice identifier to play the audio recording, and then taps it a second time to play the audio recording again. At this point the phone obtains second parameter information indicating that the first voice identifier has been triggered 2 times, that is, the number of triggers has reached the number set by the second sub-condition, so it can be determined that the second parameter information meets the predetermined condition and the voice information can be parsed. This addresses the situation in which a user in a noisy environment plays the speech data repeatedly but still cannot make out its content.
The third way may be: obtaining third parameter information, the third parameter information being used for characterizing the source information of the speech data; when the third parameter information meets a third sub-condition, determining that the current state of the speech data meets the predetermined condition, and parsing the audio recording; wherein the third sub-condition is that the speech data is data received by the electronic equipment from another electronic equipment different from the electronic equipment.
In the third way, if the third parameter information indicates that the speech data comes from another electronic equipment different from the electronic equipment, it can be determined that the third parameter information meets the third sub-condition. For example, a user uses the WeChat application installed on a mobile phone to carry out voice interaction with contact A. During the interaction, when the phone detects that speech data 1 exists in the WeChat application, it can obtain third parameter information indicating the source of speech data 1. If the third parameter information shows that speech data 1 comes from another electronic equipment, that is, speech data 1 is a data message sent from contact A's electronic equipment, it can be determined that the third parameter information meets the third sub-condition, and speech data 1 can be parsed.
If, on the other hand, the third parameter information shows that speech data 1 comes from the mobile phone itself, for example because the user entered a voice message through the WeChat microphone input on the phone, it is determined that the third parameter information does not meet the third sub-condition, and speech data 1 is not parsed.
Preferably, the electronic equipment may also decide whether to parse the speech data by detecting the intensity of the surrounding ambient noise. For example, when the ambient noise intensity exceeds a preset noise intensity, the received speech data may be parsed automatically. Alternatively, after judging through any of the above three ways whether the current state of the speech data meets the predetermined condition, the electronic equipment may additionally detect the surrounding ambient noise intensity to decide whether to parse the speech data, so that the speech data is parsed in the situations where the user needs it most.
Whether to parse the speech data can therefore be decided in several different ways, which suits a variety of scenarios and improves the flexibility with which the electronic equipment processes speech data; a combined check is sketched below.
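A minimal Kotlin sketch of such a combined check follows, assuming the device state, trigger count, source flag, and measured noise level are already available as plain values. The thresholds, the way the sub-conditions are combined, and all names are illustrative assumptions rather than values fixed by the patent.

```kotlin
// Illustrative combined evaluation of the predetermined condition described above.
data class ParseContext(
    val deviceMuted: Boolean,        // first parameter information: equipment in mute state
    val triggerCount: Int,           // second parameter information: times the voice identifier was tapped
    val fromOtherDevice: Boolean,    // third parameter information: source of the speech data
    val ambientNoiseDb: Double       // optional extra signal: measured ambient noise level
)

fun meetsPredeterminedCondition(
    ctx: ParseContext,
    m: Int = 2,                      // M, the trigger-count threshold (M >= 2)
    noiseThresholdDb: Double = 70.0  // assumed preset noise intensity
): Boolean {
    val firstSubCondition = ctx.deviceMuted
    val secondSubCondition = ctx.triggerCount >= m
    val thirdSubCondition = ctx.fromOtherDevice
    // Any one of the three sub-conditions is treated as sufficient here; the noise check is
    // used as an additional trigger, which is one of the combinations the description allows.
    val anySubCondition = firstSubCondition || secondSubCondition || thirdSubCondition
    return anySubCondition || ctx.ambientNoiseDb > noiseThresholdDb
}
```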
In the embodiment of the present invention, the process of parsing the audio recording and determining the N character identifiers from it may be: parsing the speech data to obtain the data content corresponding to the speech data, for example text information, in which case the N character identifiers may simply be the text information obtained by parsing the speech data. For example, if the data content obtained by parsing the speech data is the text "I plan to set out at 1 o'clock in the afternoon", the N character identifiers may simply be the text corresponding to that data content, namely "I plan to set out at 1 o'clock in the afternoon", so that the data content of the speech data is expressed more completely.
Preferably, in the above process, after the data content corresponding to the speech data has been obtained by parsing, corresponding keywords can be further determined from the data content, and the N character identifiers can then be determined from the keywords, in which case the N character identifiers may be formed by the keywords. For example, when the data content corresponding to the speech data is "I plan to set out at 1 o'clock in the afternoon", the keywords determined may be "I", "afternoon", and "set out", and the character string corresponding to the N character identifiers may simply be "I set out in the afternoon".
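A minimal Kotlin sketch of turning a transcription into keyword character identifiers follows. The stop-word list, the cap on N, and the function names are illustrative assumptions, and the speech-to-text step itself is represented by a placeholder rather than any specific recognition engine.

```kotlin
// Illustrative keyword extraction for step 102 (the transcription step is assumed to exist elsewhere).
private val stopWords = setOf("the", "a", "an", "to", "at", "of", "and", "in", "o'clock", "plan")

// Placeholder for the actual speech-to-text engine; assumed to return plain text.
fun transcribe(audio: ByteArray): String = TODO("delegate to a speech recognition engine")

fun extractCharacterIdentifiers(transcript: String, maxN: Int = 5): List<String> =
    transcript
        .lowercase()
        .split(Regex("[^\\p{L}\\p{N}']+"))      // split on anything that is not a letter, digit, or apostrophe
        .filter { it.isNotBlank() && it !in stopWords }
        .distinct()
        .take(maxN)                              // keep at most N keywords to overlay on the identifier

// Example: extractCharacterIdentifiers("I plan to set out at 1 o'clock in the afternoon")
// yields ["i", "set", "out", "1", "afternoon"].
```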
Preferably, in the embodiment of the present invention, parsing the speech data to obtain the N character identifiers corresponding to the audio information may also be done as follows: feature syllables contained in the speech data are determined by monitoring the syllables in the speech data, where a feature syllable may be a syllable indicating a time, a place, a thing, and so on; voice parsing is then performed only on the feature syllables, and the N character identifiers are obtained from the result.
For example, it can be determined by monitoring that a first syllable in the speech data is a syllable corresponding to a time and a second syllable is a syllable corresponding to a place. When the speech data is parsed, only the first syllable and the second syllable need to be parsed to obtain the corresponding data content, for example "10 o'clock" and "meeting room", so that the whole speech data does not have to be parsed, which reduces the workload of the electronic equipment.
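A minimal Kotlin sketch of this partial-parsing idea follows, assuming some detector has already labelled each audio segment with a coarse category. The categories, the detector, and the recognizer are placeholders and illustrative assumptions; only the "transcribe just the labelled segments" structure reflects the description above.

```kotlin
// Illustrative partial parsing: only segments flagged as feature syllables are transcribed.
enum class SyllableKind { TIME, PLACE, THING, OTHER }

data class LabelledSegment(val audio: ByteArray, val kind: SyllableKind)

// Placeholder recognizer; assumed to turn one short segment into text.
fun recognizeSegment(audio: ByteArray): String = TODO("delegate to a speech recognition engine")

fun parseFeatureSyllables(segments: List<LabelledSegment>): List<String> =
    segments
        .filter { it.kind != SyllableKind.OTHER }   // skip everything that is not time/place/thing
        .map { recognizeSegment(it.audio) }          // e.g. "10 o'clock", "meeting room"
```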
Step 103: display the first voice identifier through the display unit and, while the first voice identifier is displayed, display the N character identifiers.
In the embodiment of the present invention, after the speech data has been parsed, displaying the N character identifiers may specifically be: displaying the N character identifiers overlaid on the first voice identifier. Displaying the first voice identifier through the display unit may specifically be: determining the display area corresponding to the first voice identifier according to the duration of the audio recording. Displaying the N character identifiers overlaid on the first voice identifier may then specifically be: displaying the N character identifiers on the display area corresponding to the first voice identifier, so that the first voice identifier and the N character identifiers are displayed at the same time, as sketched below.
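A minimal Kotlin sketch of how the display area could be sized from the recording duration and then reused to overlay the keywords follows. The width formula and all names are illustrative assumptions, not anything specified by the patent.

```kotlin
// Illustrative layout for step 103: size the voice-identifier bubble from the duration,
// then draw the character identifiers over the same area.
data class DisplayArea(val x: Int, val y: Int, val width: Int, val height: Int)

fun areaForVoiceIdentifier(durationSeconds: Int, x: Int, y: Int): DisplayArea {
    // Assumed rule of thumb: a base width plus a per-second increment, capped for long recordings.
    val width = (120 + 12 * durationSeconds).coerceAtMost(360)
    return DisplayArea(x, y, width, height = 56)
}

fun renderMessage(durationSeconds: Int, keywords: List<String>, x: Int, y: Int): String {
    val area = areaForVoiceIdentifier(durationSeconds, x, y)
    // In a real UI these would be draw calls; here we just describe what is overlaid where.
    return "voice identifier at (${area.x}, ${area.y}) size ${area.width}x${area.height}, " +
           "overlaid text: ${keywords.joinToString(" ")}"
}
```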
It should be noted that displaying the first voice identifier through the display unit may take place before the speech data is parsed. For example, the electronic equipment may display the first voice identifier through the display unit as soon as the speech data is obtained. Alternatively, after the speech data has been parsed and the N character identifiers have been obtained, the N character identifiers and the first voice identifier may be displayed together through the display unit, so that the content of the speech data is prompted to the user more intuitively.
For example, referring to Fig. 2, reference numeral 20 denotes the display unit, reference numeral 21 denotes received first speech data from the contact Fang Beibei, and reference numeral 22 denotes second speech data entered by the user. The displayed first speech data has been parsed, so the obtained N character identifiers are displayed at the same time as the corresponding voice identifier, whereas the second speech data has not been parsed, so only the voice identifier corresponding to the second speech data is displayed.
With the above processing method, when it is inconvenient for the user to listen to the audio recording, the corresponding character identifiers can be obtained by parsing, and by displaying these character identifiers overlaid on the voice identifier, a good display and prompting effect is provided for the user.
Referring to Fig. 3, the present invention further provides an electronic equipment. The electronic equipment comprises a display unit, and may further comprise an acquiring unit 301, a resolution unit 302, and a processing unit 303.
The acquiring unit 301 may be configured to obtain speech data, the speech data comprising a continuous audio recording, wherein the voice identifier corresponding to the speech data is a first voice identifier.
The resolution unit 302 may be configured to parse the audio recording and obtain N character identifiers corresponding to the audio recording; the N character identifiers are used for characterizing the content of the speech data; N is a positive integer.
The processing unit 303 may be configured to display the first voice identifier through the display unit and, while the first voice identifier is displayed, display the N character identifiers.
The processing unit 303 may be configured to display the N character identifiers specifically by displaying them overlaid on the first voice identifier.
The resolution unit 302 may further be configured to: determine whether the current state of the speech data meets a predetermined condition; and when the current state of the speech data meets the predetermined condition, parse the audio recording.
When the current state of the speech data meets the predetermined condition, the resolution unit 302 may parse the audio recording specifically by: obtaining first parameter information, the first parameter information being used for characterizing the current device state of the electronic equipment; and when the first parameter information meets a first sub-condition, determining that the current state of the speech data meets the predetermined condition and parsing the audio recording; wherein the first sub-condition is that the electronic equipment is in a mute state.
When the current state of the speech data meets the predetermined condition, the resolution unit 302 may parse the audio recording specifically by: obtaining second parameter information, the second parameter information being used for characterizing the number of times the first voice identifier has been triggered; and when the second parameter information meets a second sub-condition, determining that the current state of the speech data meets the predetermined condition and parsing the audio recording; wherein the second sub-condition is that the first voice identifier has been triggered M times, M being a positive integer not less than 2.
When the current state of the speech data meets the predetermined condition, the resolution unit 302 may parse the audio recording specifically by: obtaining third parameter information, the third parameter information being used for characterizing the source information of the speech data; and when the third parameter information meets a third sub-condition, determining that the current state of the speech data meets the predetermined condition and parsing the audio recording; wherein the third sub-condition is that the speech data is data received by the electronic equipment from another electronic equipment different from the electronic equipment.
In the embodiment of the present invention, when multiple voice identifiers are displayed on the display unit, the acquiring unit 301 may further be configured to: when an operating body gripping the electronic equipment is present, obtain grip position information of the operating body with respect to the electronic equipment; determine K voice identifiers from the multiple voice identifiers according to the grip position information, the K voice identifiers being voice identifiers that the operating body cannot trigger from the grip position, K being a positive integer; and take the speech data corresponding to the K voice identifiers as the speech data.
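A minimal Kotlin sketch of how the three units of Fig. 3 could be wired together follows. The interfaces and class names are illustrative assumptions about one possible decomposition, not the patent's required implementation.

```kotlin
// Illustrative decomposition into the acquiring, resolution, and processing units of Fig. 3.
interface AcquiringUnit { fun obtain(): Pair<ByteArray, String> }          // (audio recording, voice identifier id)
interface ResolutionUnit { fun parse(recording: ByteArray): List<String> } // returns the N character identifiers
interface ProcessingUnit { fun show(identifierId: String, characterIdentifiers: List<String>) }

class InformationProcessor(
    private val acquiring: AcquiringUnit,
    private val resolution: ResolutionUnit,
    private val processing: ProcessingUnit
) {
    fun handleIncomingSpeechData() {
        val (recording, identifierId) = acquiring.obtain()      // step 101
        val characterIdentifiers = resolution.parse(recording)  // step 102
        processing.show(identifierId, characterIdentifiers)     // step 103: identifier plus overlaid keywords
    }
}
```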
In summary, the information processing method in the embodiments of the present invention can be applied to an electronic equipment, the electronic equipment comprising a display unit. The method comprises: obtaining speech data, the speech data comprising a continuous audio recording, wherein the voice identifier corresponding to the speech data is a first voice identifier; parsing the audio recording to obtain N character identifiers corresponding to the audio recording, the N character identifiers being used for characterizing the content of the speech data, N being a positive integer; and displaying the first voice identifier through the display unit and, while the first voice identifier is displayed, displaying the N character identifiers.
In the embodiments of the present invention, when the speech data is obtained, the N character identifiers that characterize the content of the speech data can be obtained by parsing the audio recording included in the speech data, and when the voice identifier is displayed on the display unit, the N character identifiers are displayed together with the voice identifier. Therefore, when the user receives the speech data, the user can largely learn the content of the speech data from the N displayed character identifiers. For example, the character identifiers may be keywords, so that the user can learn the relevant content of the speech data from the displayed keywords without listening to the audio recording. Thus, on occasions where it is inconvenient to play the audio recording, such as a noisy environment or a meeting, the electronic equipment can better prompt the user by displaying the N character identifiers, so that the user learns the content of the speech data in time. This enhances the prompting effect of the electronic equipment, improves its practicality, and correspondingly improves the user experience.
Specifically, the computer program instructions corresponding to the information processing method in the embodiments of the present application can be stored on a storage medium such as an optical disc, a hard disk, or a USB flash drive. When the computer program instructions corresponding to the information processing method stored on the storage medium are read or executed by an electronic equipment, the following steps are included:
obtaining speech data, the speech data comprising a continuous audio recording, wherein the voice identifier corresponding to the speech data is a first voice identifier;
parsing the audio recording to obtain N character identifiers corresponding to the audio recording, the N character identifiers being used for characterizing the content of the speech data, N being a positive integer;
displaying the first voice identifier through the display unit and, while the first voice identifier is displayed, displaying the N character identifiers.
Optionally, when the computer instructions stored on the storage medium that correspond to the step of displaying the N character identifiers are executed, the step is specifically: displaying the N character identifiers overlaid on the first voice identifier.
Optionally, when the computer instructions stored on the storage medium that correspond to the step of parsing the audio recording are executed, the step specifically comprises: determining whether the current state of the speech data meets a predetermined condition; and when the current state of the speech data meets the predetermined condition, parsing the audio recording.
Optionally, when the computer instructions stored on the storage medium that correspond to the step of parsing the audio recording when the current state of the speech data meets the predetermined condition are executed, the step specifically comprises:
obtaining first parameter information, the first parameter information being used for characterizing the current device state of the electronic equipment;
when the first parameter information meets a first sub-condition, determining that the current state of the speech data meets the predetermined condition, and parsing the audio recording; wherein the first sub-condition is that the electronic equipment is in a mute state.
Optionally, when the computer instructions stored on the storage medium that correspond to the step of parsing the audio recording when the current state of the speech data meets the predetermined condition are executed, the step may also specifically comprise:
obtaining second parameter information, the second parameter information being used for characterizing the number of times the first voice identifier has been triggered;
when the second parameter information meets a second sub-condition, determining that the current state of the speech data meets the predetermined condition, and parsing the audio recording; wherein the second sub-condition is that the first voice identifier has been triggered M times, M being a positive integer not less than 2.
Optionally, when the computer instructions stored on the storage medium that correspond to the step of parsing the audio recording when the current state of the speech data meets the predetermined condition are executed, the step may also specifically comprise:
obtaining third parameter information, the third parameter information being used for characterizing the source information of the speech data;
when the third parameter information meets a third sub-condition, determining that the current state of the speech data meets the predetermined condition, and parsing the audio recording; wherein the third sub-condition is that the speech data is data received by the electronic equipment from another electronic equipment different from the electronic equipment.
Optionally, when multiple voice identifiers are displayed on the display unit and the computer instructions stored on the storage medium that correspond to the step of obtaining speech data are executed, the step specifically comprises:
when an operating body gripping the electronic equipment is present, obtaining grip position information of the operating body with respect to the electronic equipment;
determining K voice identifiers from the multiple voice identifiers according to the grip position information, the K voice identifiers being voice identifiers that the operating body cannot trigger from the grip position, K being a positive integer;
taking the speech data corresponding to the K voice identifiers as the speech data.
Obviously, those skilled in the art can make various changes and modifications to the present invention without departing from the spirit and scope of the present invention. Thus, if these modifications and variations of the present invention fall within the scope of the claims of the present invention and their technical equivalents, the present invention is also intended to include these changes and modifications.

Claims (14)

1. An information processing method, applied to an electronic equipment, the electronic equipment comprising a display unit, the method comprising:
obtaining speech data, the speech data comprising a continuous audio recording, wherein the voice identifier corresponding to the speech data is a first voice identifier;
parsing the audio recording to obtain N character identifiers corresponding to the audio recording, the N character identifiers being used for characterizing the content of the speech data, N being a positive integer;
displaying the first voice identifier through the display unit and, while the first voice identifier is displayed, displaying the N character identifiers.
2. the method for claim 1, is characterized in that, shows a described N character mark, is specially: N character mark described in Overlapping display in described the first voice identifier.
3. method as claimed in claim 2, is characterized in that, resolves described audio recording, comprising:
Determining unit, for determining whether the current residing state of described speech data meets a predetermined condition;
In the time that the current residing state of described speech data meets described predetermined condition, resolve described audio recording.
4. The method according to claim 3, wherein, when the current state of the speech data meets the predetermined condition, parsing the audio recording comprises:
obtaining first parameter information, the first parameter information being used for characterizing the current device state of the electronic equipment;
when the first parameter information meets a first sub-condition, determining that the current state of the speech data meets the predetermined condition, and parsing the audio recording; wherein the first sub-condition is that the electronic equipment is in a mute state.
5. The method according to claim 3, wherein, when the current state of the speech data meets the predetermined condition, parsing the audio recording comprises:
obtaining second parameter information, the second parameter information being used for characterizing the number of times the first voice identifier has been triggered;
when the second parameter information meets a second sub-condition, determining that the current state of the speech data meets the predetermined condition, and parsing the audio recording; wherein the second sub-condition is that the first voice identifier has been triggered M times, M being a positive integer not less than 2.
6. The method according to claim 3, wherein, when the current state of the speech data meets the predetermined condition, parsing the audio recording comprises:
obtaining third parameter information, the third parameter information being used for characterizing the source information of the speech data;
when the third parameter information meets a third sub-condition, determining that the current state of the speech data meets the predetermined condition, and parsing the audio recording; wherein the third sub-condition is that the speech data is data received by the electronic equipment from another electronic equipment different from the electronic equipment.
7. The method according to any one of claims 1 to 6, wherein, when multiple voice identifiers are displayed on the display unit, obtaining speech data comprises:
when an operating body gripping the electronic equipment is present, obtaining grip position information of the operating body with respect to the electronic equipment;
determining K voice identifiers from the multiple voice identifiers according to the grip position information, the K voice identifiers being voice identifiers that the operating body cannot trigger from the grip position, K being a positive integer;
taking the speech data corresponding to the K voice identifiers as the speech data.
8. An electronic equipment, the electronic equipment comprising a display unit, the electronic equipment further comprising:
an acquiring unit, configured to obtain speech data, the speech data comprising a continuous audio recording, wherein the voice identifier corresponding to the speech data is a first voice identifier;
a resolution unit, configured to parse the audio recording and obtain N character identifiers corresponding to the audio recording, the N character identifiers being used for characterizing the content of the speech data, N being a positive integer;
a processing unit, configured to display the first voice identifier through the display unit and, while the first voice identifier is displayed, display the N character identifiers.
9. The electronic equipment according to claim 8, wherein the processing unit is configured to display the N character identifiers by displaying them overlaid on the first voice identifier.
10. The electronic equipment according to claim 9, wherein the resolution unit is further configured to: determine whether the current state of the speech data meets a predetermined condition; and when the current state of the speech data meets the predetermined condition, parse the audio recording.
11. The electronic equipment according to claim 10, wherein, when the current state of the speech data meets the predetermined condition, the resolution unit parses the audio recording specifically by: obtaining first parameter information, the first parameter information being used for characterizing the current device state of the electronic equipment; and when the first parameter information meets a first sub-condition, determining that the current state of the speech data meets the predetermined condition and parsing the audio recording; wherein the first sub-condition is that the electronic equipment is in a mute state.
12. The electronic equipment according to claim 10, wherein, when the current state of the speech data meets the predetermined condition, the resolution unit parses the audio recording specifically by: obtaining second parameter information, the second parameter information being used for characterizing the number of times the first voice identifier has been triggered; and when the second parameter information meets a second sub-condition, determining that the current state of the speech data meets the predetermined condition and parsing the audio recording; wherein the second sub-condition is that the first voice identifier has been triggered M times, M being a positive integer not less than 2.
13. The electronic equipment according to claim 10, wherein, when the current state of the speech data meets the predetermined condition, the resolution unit parses the audio recording specifically by: obtaining third parameter information, the third parameter information being used for characterizing the source information of the speech data; and when the third parameter information meets a third sub-condition, determining that the current state of the speech data meets the predetermined condition and parsing the audio recording; wherein the third sub-condition is that the speech data is data received by the electronic equipment from another electronic equipment different from the electronic equipment.
14. The electronic equipment according to any one of claims 8 to 13, wherein, when multiple voice identifiers are displayed on the display unit, the acquiring unit is further configured to: when an operating body gripping the electronic equipment is present, obtain grip position information of the operating body with respect to the electronic equipment; determine K voice identifiers from the multiple voice identifiers according to the grip position information, the K voice identifiers being voice identifiers that the operating body cannot trigger from the grip position, K being a positive integer; and take the speech data corresponding to the K voice identifiers as the speech data.
CN201410086120.8A 2014-03-10 2014-03-10 Information processing method and electronic equipment Pending CN103873687A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410086120.8A CN103873687A (en) 2014-03-10 2014-03-10 Information processing method and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410086120.8A CN103873687A (en) 2014-03-10 2014-03-10 Information processing method and electronic equipment

Publications (1)

Publication Number Publication Date
CN103873687A true CN103873687A (en) 2014-06-18

Family

ID=50911795

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410086120.8A Pending CN103873687A (en) 2014-03-10 2014-03-10 Information processing method and electronic equipment

Country Status (1)

Country Link
CN (1) CN103873687A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101079836A (en) * 2006-12-21 2007-11-28 腾讯科技(深圳)有限公司 An instant communication method and system based on asymmetric media
CN102349087A (en) * 2009-03-12 2012-02-08 谷歌公司 Automatically providing content associated with captured information, such as information captured in real-time
CN103379460A (en) * 2012-04-20 2013-10-30 华为终端有限公司 Method and terminal for processing voice message
CN103546623A (en) * 2012-07-12 2014-01-29 百度在线网络技术(北京)有限公司 Method, device and equipment for sending voice information and text description information thereof
CN103369477A (en) * 2013-07-02 2013-10-23 华为技术有限公司 Method, device and client for displaying medium information, graphic control display method and device

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105516472A (en) * 2015-11-30 2016-04-20 联想(北京)有限公司 Information processing method and electronic apparatus

Similar Documents

Publication Publication Date Title
US10516776B2 (en) Volume adjusting method, system, apparatus and computer storage medium
KR101726945B1 (en) Reducing the need for manual start/end-pointing and trigger phrases
CN110164437B (en) Voice recognition method and terminal for instant messaging
CN108446022B (en) User device and control method thereof
CN103327181B (en) Voice chatting method capable of improving efficiency of voice information learning for users
CN107995360B (en) Call processing method and related product
CN106024014A (en) Voice conversion method and device and mobile terminal
JP2017509917A (en) Determination of motion commands based at least in part on spatial acoustic characteristics
CN103973877A (en) Method and device for using characters to realize real-time communication in mobile terminal
US9444927B2 (en) Methods for voice management, and related devices
KR101419764B1 (en) Mobile terminal control method for voice emoticon
CN109257498B (en) Sound processing method and mobile terminal
US20150248896A1 (en) Causation of rendering of song audio information
KR20120002737A (en) Method and apparatus for controlling operation in portable terminal using mic
CN110943908A (en) Voice message sending method, electronic device and medium
CN110830368A (en) Instant messaging message sending method and electronic equipment
CN111526247A (en) Method and device for displaying voice text
WO2016157993A1 (en) Information processing device, information processing method, and program
CN111026358B (en) Voice message playing method, playing device and readable storage medium
CN106228994B (en) A kind of method and apparatus detecting sound quality
CN104980583A (en) Event reminding method and terminal
CN104333641A (en) Calling method and device
CN112425144B (en) Information prompting method and related product
US11940896B2 (en) Information processing device, information processing method, and program
CN103873687A (en) Information processing method and electronic equipment

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20140618